diff --git a/.gitattributes b/.gitattributes index 8141adb153ef7fefc25b2f15c687121f267757ab..e85905a38698ee9cef4a206b4c4f028cfdefbff6 100644 --- a/.gitattributes +++ b/.gitattributes @@ -104,3 +104,67 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text 2025/A[[:space:]]Simple[[:space:]]yet[[:space:]]Effective[[:space:]]Layout[[:space:]]Token[[:space:]]in[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]for[[:space:]]Document[[:space:]]Understanding/5f860180-3af4-4fff-abf7-4bbdf8e78ae2_origin.pdf filter=lfs diff=lfs merge=lfs -text 2025/A[[:space:]]Stitch[[:space:]]in[[:space:]]Time[[:space:]]Saves[[:space:]]Nine_[[:space:]]Small[[:space:]]VLM[[:space:]]is[[:space:]]a[[:space:]]Precise[[:space:]]Guidance[[:space:]]for[[:space:]]Accelerating[[:space:]]Large[[:space:]]VLMs/81ce4c52-dac7-4add-aaa7-e959289e0870_origin.pdf filter=lfs diff=lfs merge=lfs -text 2025/A[[:space:]]Tale[[:space:]]of[[:space:]]Two[[:space:]]Classes_[[:space:]]Adapting[[:space:]]Supervised[[:space:]]Contrastive[[:space:]]Learning[[:space:]]to[[:space:]]Binary[[:space:]]Imbalanced[[:space:]]Datasets/7be69b4e-6240-4897-87a0-9e54582fed6f_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/A[[:space:]]Theory[[:space:]]of[[:space:]]Learning[[:space:]]Unified[[:space:]]Model[[:space:]]via[[:space:]]Knowledge[[:space:]]Integration[[:space:]]from[[:space:]]Label[[:space:]]Space[[:space:]]Varying[[:space:]]Domains/71226fdf-7aa9-4bda-88d3-c62f01e56520_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/A[[:space:]]Unified[[:space:]]Approach[[:space:]]to[[:space:]]Interpreting[[:space:]]Self-supervised[[:space:]]Pre-training[[:space:]]Methods[[:space:]]for[[:space:]]3D[[:space:]]Point[[:space:]]Clouds[[:space:]]via[[:space:]]Interactions/99ce7f16-2914-4f50-bc65-0054f0b31f07_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/A[[:space:]]Unified[[:space:]]Framework[[:space:]]for[[:space:]]Heterogeneous[[:space:]]Semi-supervised[[:space:]]Learning/341a6b6d-e6ba-48c2-98b8-5f373e8e2473_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/A[[:space:]]Unified[[:space:]]Image-Dense[[:space:]]Annotation[[:space:]]Generation[[:space:]]Model[[:space:]]for[[:space:]]Underwater[[:space:]]Scenes/c77f859c-1439-4915-ba1a-a9314ac3d9a9_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/A[[:space:]]Unified[[:space:]]Latent[[:space:]]Schrodinger[[:space:]]Bridge[[:space:]]Diffusion[[:space:]]Model[[:space:]]for[[:space:]]Unsupervised[[:space:]]Anomaly[[:space:]]Detection[[:space:]]and[[:space:]]Localization/0371bea6-a128-4eff-9f4f-dffd7eab7a85_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/A[[:space:]]Unified[[:space:]]Model[[:space:]]for[[:space:]]Compressed[[:space:]]Sensing[[:space:]]MRI[[:space:]]Across[[:space:]]Undersampling[[:space:]]Patterns/e66975d9-0abf-4453-9e0f-887ea6234025_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/A[[:space:]]Unified,[[:space:]]Resilient,[[:space:]]and[[:space:]]Explainable[[:space:]]Adversarial[[:space:]]Patch[[:space:]]Detector/b9c12ba3-81c5-4427-ad44-e661e361941c_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/A[[:space:]]Universal[[:space:]]Scale-Adaptive[[:space:]]Deformable[[:space:]]Transformer[[:space:]]for[[:space:]]Image[[:space:]]Restoration[[:space:]]across[[:space:]]Diverse[[:space:]]Artifacts/36bcd46e-32c4-4da4-aff4-67b68f83d335_origin.pdf filter=lfs diff=lfs merge=lfs -text 
+2025/A3_[[:space:]]Few-shot[[:space:]]Prompt[[:space:]]Learning[[:space:]]of[[:space:]]Unlearnable[[:space:]]Examples[[:space:]]with[[:space:]]Cross-Modal[[:space:]]Adversarial[[:space:]]Feature[[:space:]]Alignment/641deceb-eb2b-45b8-97bb-8e2f7fe97a5e_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/A4A_[[:space:]]Adapter[[:space:]]for[[:space:]]Adapter[[:space:]]Transfer[[:space:]]via[[:space:]]All-for-All[[:space:]]Mapping[[:space:]]for[[:space:]]Cross-Architecture[[:space:]]Models/66a56aa1-a030-49e2-8d93-323c4a070c6f_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/AA-CLIP_[[:space:]]Enhancing[[:space:]]Zero-Shot[[:space:]]Anomaly[[:space:]]Detection[[:space:]]via[[:space:]]Anomaly-Aware[[:space:]]CLIP/1a15f5c2-189a-4bf7-936d-11fcae85b257_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/ABBSPO_[[:space:]]Adaptive[[:space:]]Bounding[[:space:]]Box[[:space:]]Scaling[[:space:]]and[[:space:]]Symmetric[[:space:]]Prior[[:space:]]based[[:space:]]Orientation[[:space:]]Prediction[[:space:]]for[[:space:]]Detecting[[:space:]]Aerial[[:space:]]Image[[:space:]]Objects/4a21cd8b-8cbb-4db3-b4a2-0a741c570b46_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/ABC-Former_[[:space:]]Auxiliary[[:space:]]Bimodal[[:space:]]Cross-domain[[:space:]]Transformer[[:space:]]with[[:space:]]Interactive[[:space:]]Channel[[:space:]]Attention[[:space:]]for[[:space:]]White[[:space:]]Balance/531c4970-08d2-402e-8972-1a2e62e92c0a_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/AC3D_[[:space:]]Analyzing[[:space:]]and[[:space:]]Improving[[:space:]]3D[[:space:]]Camera[[:space:]]Control[[:space:]]in[[:space:]]Video[[:space:]]Diffusion[[:space:]]Transformers/61cdf9ca-5840-4279-a8c2-d8f7abcf95b7_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/ACAttack_[[:space:]]Adaptive[[:space:]]Cross[[:space:]]Attacking[[:space:]]RGB-T[[:space:]]Tracker[[:space:]]via[[:space:]]Multi-Modal[[:space:]]Response[[:space:]]Decoupling/6f190354-412a-4587-a766-f43d48cafb75_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/ACE_[[:space:]]Anti-Editing[[:space:]]Concept[[:space:]]Erasure[[:space:]]in[[:space:]]Text-to-Image[[:space:]]Models/b5a87e68-e26d-46f8-a43c-98bb87af5dc7_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/ACL_[[:space:]]Activating[[:space:]]Capability[[:space:]]of[[:space:]]Linear[[:space:]]Attention[[:space:]]for[[:space:]]Image[[:space:]]Restoration/b3cdf41c-47b4-40a8-888d-383f4742ab99_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/ADD_[[:space:]]Attribution-Driven[[:space:]]Data[[:space:]]Augmentation[[:space:]]Framework[[:space:]]for[[:space:]]Boosting[[:space:]]Image[[:space:]]Super-Resolution/e29ca394-45e6-4bf5-b8ea-58a74dbe24fa_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/ADU_[[:space:]]Adaptive[[:space:]]Detection[[:space:]]of[[:space:]]Unknown[[:space:]]Categories[[:space:]]in[[:space:]]Black-Box[[:space:]]Domain[[:space:]]Adaptation/a3a5f623-9c8e-40c7-93d2-175e52310d81_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/AFL_[[:space:]]A[[:space:]]Single-Round[[:space:]]Analytic[[:space:]]Approach[[:space:]]for[[:space:]]Federated[[:space:]]Learning[[:space:]]with[[:space:]]Pre-trained[[:space:]]Models/1961e1d7-9e21-46e8-8ee1-ddb51cc88578_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/AG-VPReID_[[:space:]]A[[:space:]]Challenging[[:space:]]Large-Scale[[:space:]]Benchmark[[:space:]]for[[:space:]]Aerial-Ground[[:space:]]Video-based[[:space:]]Person[[:space:]]Re-Identification/4cdeaea8-7584-41e0-a7e3-68147957d6c6_origin.pdf filter=lfs diff=lfs merge=lfs -text 
+2025/AI-Face_[[:space:]]A[[:space:]]Million-Scale[[:space:]]Demographically[[:space:]]Annotated[[:space:]]AI-Generated[[:space:]]Face[[:space:]]Dataset[[:space:]]and[[:space:]]Fairness[[:space:]]Benchmark/69a93785-ab72-45f7-87fa-9182ea82821d_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/AIGV-Assessor_[[:space:]]Benchmarking[[:space:]]and[[:space:]]Evaluating[[:space:]]the[[:space:]]Perceptual[[:space:]]Quality[[:space:]]of[[:space:]]Text-to-Video[[:space:]]Generation[[:space:]]with[[:space:]]LMM/6da59f8b-88b1-4dd7-9656-385f5fb4c136_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/AIM-Fair_[[:space:]]Advancing[[:space:]]Algorithmic[[:space:]]Fairness[[:space:]]via[[:space:]]Selectively[[:space:]]Fine-Tuning[[:space:]]Biased[[:space:]]Models[[:space:]]with[[:space:]]Contextual[[:space:]]Synthetic[[:space:]]Data/e3f123cb-93dd-4a70-bf37-7825bb88c80c_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/AIpparel_[[:space:]]A[[:space:]]Multimodal[[:space:]]Foundation[[:space:]]Model[[:space:]]for[[:space:]]Digital[[:space:]]Garments/cb5e1214-45b4-45fb-973d-1b1e69c31e9e_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/AKiRa_[[:space:]]Augmentation[[:space:]]Kit[[:space:]]on[[:space:]]Rays[[:space:]]for[[:space:]]Optical[[:space:]]Video[[:space:]]Generation/a6c7a556-f741-4830-97d0-e5bff3fbd90e_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/ALIEN_[[:space:]]Implicit[[:space:]]Neural[[:space:]]Representations[[:space:]]for[[:space:]]Human[[:space:]]Motion[[:space:]]Prediction[[:space:]]under[[:space:]]Arbitrary[[:space:]]Latency/36dd2f51-bc2c-4f32-bce3-0f31cc93b437_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/AMO[[:space:]]Sampler_[[:space:]]Enhancing[[:space:]]Text[[:space:]]Rendering[[:space:]]with[[:space:]]Overshooting/36ba09a6-cb41-4431-b966-1bb4ac405663_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/AMR-Transformer_[[:space:]]Enabling[[:space:]]Efficient[[:space:]]Long-range[[:space:]]Interaction[[:space:]]for[[:space:]]Complex[[:space:]]Neural[[:space:]]Fluid[[:space:]]Simulation/fde66b48-7438-40ff-858c-23379a28df70_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/ANNEXE_[[:space:]]Unified[[:space:]]Analyzing,[[:space:]]Answering,[[:space:]]and[[:space:]]Pixel[[:space:]]Grounding[[:space:]]for[[:space:]]Egocentric[[:space:]]Interaction/3ad66852-c283-4b9f-ae72-5f19b6b801d3_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/APHQ-ViT_[[:space:]]Post-Training[[:space:]]Quantization[[:space:]]with[[:space:]]Average[[:space:]]Perturbation[[:space:]]Hessian[[:space:]]Based[[:space:]]Reconstruction[[:space:]]for[[:space:]]Vision[[:space:]]Transformers/959d7612-e866-4f28-9fa9-c45c76adc117_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/APT_[[:space:]]Adaptive[[:space:]]Personalized[[:space:]]Training[[:space:]]for[[:space:]]Diffusion[[:space:]]Models[[:space:]]with[[:space:]]Limited[[:space:]]Data/e1426338-b974-42f5-bbf7-281f898f4fda_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/AR-Diffusion_[[:space:]]Asynchronous[[:space:]]Video[[:space:]]Generation[[:space:]]with[[:space:]]Auto-Regressive[[:space:]]Diffusion/c0879fca-0022-40d5-b619-70f13ce4b0b8_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/ARKit[[:space:]]LabelMaker_[[:space:]]A[[:space:]]New[[:space:]]Scale[[:space:]]for[[:space:]]Indoor[[:space:]]3D[[:space:]]Scene[[:space:]]Understanding/f04d603b-51d3-4844-a6bb-a0dc57e1988d_origin.pdf filter=lfs diff=lfs merge=lfs -text 
+2025/ARM_[[:space:]]Appearance[[:space:]]Reconstruction[[:space:]]Model[[:space:]]for[[:space:]]Relightable[[:space:]]3D[[:space:]]Generation/b42d0494-c967-413d-a0db-d7029a7a1cb7_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/ART_[[:space:]]Anonymous[[:space:]]Region[[:space:]]Transformer[[:space:]]for[[:space:]]Variable[[:space:]]Multi-Layer[[:space:]]Transparent[[:space:]]Image[[:space:]]Generation/3c46533d-a542-49b4-875c-2472e2ae6f45_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/ASAP_[[:space:]]Advancing[[:space:]]Semantic[[:space:]]Alignment[[:space:]]Promotes[[:space:]]Multi-Modal[[:space:]]Manipulation[[:space:]]Detecting[[:space:]]and[[:space:]]Grounding/f7fca8ee-e28d-4985-93bf-eb9e003b5e6f_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/ASHiTA_[[:space:]]Automatic[[:space:]]Scene-grounded[[:space:]]HIerarchical[[:space:]]Task[[:space:]]Analysis/d2f3fd11-c995-44f8-bb50-b38a1023e5bc_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/ASIGN_[[:space:]]An[[:space:]]Anatomy-aware[[:space:]]Spatial[[:space:]]Imputation[[:space:]]Graphic[[:space:]]Network[[:space:]]for[[:space:]]3D[[:space:]]Spatial[[:space:]]Transcriptomics/f188ab42-f900-4f45-99a8-fdccbbb31e53_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/ATA_[[:space:]]Adaptive[[:space:]]Transformation[[:space:]]Agent[[:space:]]for[[:space:]]Text-Guided[[:space:]]Subject-Position[[:space:]]Variable[[:space:]]Background[[:space:]]Inpainting/14199472-5326-4a35-8c14-097431f77de1_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/ATP-LLaVA_[[:space:]]Adaptive[[:space:]]Token[[:space:]]Pruning[[:space:]]for[[:space:]]Large[[:space:]]Vision[[:space:]]Language[[:space:]]Models/8d1f0df9-f203-48c9-9342-281daba45228_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/ATP_[[:space:]]Adaptive[[:space:]]Threshold[[:space:]]Pruning[[:space:]]for[[:space:]]Efficient[[:space:]]Data[[:space:]]Encoding[[:space:]]in[[:space:]]Quantum[[:space:]]Neural[[:space:]]Networks/79b52e33-68a9-4a4f-ba48-65f0d682cbf4_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/AToM_[[:space:]]Aligning[[:space:]]Text-to-Motion[[:space:]]Model[[:space:]]at[[:space:]]Event-Level[[:space:]]with[[:space:]]GPT-4Vision[[:space:]]Reward/53c55dc0-83e6-442b-888e-b224dec66e72_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/AVF-MAE++_[[:space:]]Scaling[[:space:]]Affective[[:space:]]Video[[:space:]]Facial[[:space:]]Masked[[:space:]]Autoencoders[[:space:]]via[[:space:]]Efficient[[:space:]]Audio-Visual[[:space:]]Self-Supervised[[:space:]]Learning/83d6d292-6d99-48f9-bbdc-1103770d2580_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/AVQACL_[[:space:]]A[[:space:]]Novel[[:space:]]Benchmark[[:space:]]for[[:space:]]Audio-Visual[[:space:]]Question[[:space:]]Answering[[:space:]]Continual[[:space:]]Learning/b674e7c6-26b4-4ecd-89fa-8c07ae0c7e73_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/Acc3D_[[:space:]]Accelerating[[:space:]]Single[[:space:]]Image[[:space:]]to[[:space:]]3D[[:space:]]Diffusion[[:space:]]Models[[:space:]]via[[:space:]]Edge[[:space:]]Consistency[[:space:]]Guided[[:space:]]Score[[:space:]]Distillation/17bd19a6-d3a1-4433-b775-c08e6ecd64ca_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/Accelerating[[:space:]]Diffusion[[:space:]]Transformer[[:space:]]via[[:space:]]Increment-Calibrated[[:space:]]Caching[[:space:]]with[[:space:]]Channel-Aware[[:space:]]Singular[[:space:]]Value[[:space:]]Decomposition/7888968c-48cf-4d4f-a959-67393862ef73_origin.pdf filter=lfs diff=lfs merge=lfs -text 
+2025/Accelerating[[:space:]]Multimodal[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]by[[:space:]]Searching[[:space:]]Optimal[[:space:]]Vision[[:space:]]Token[[:space:]]Reduction/a2ebe21a-dd1c-4482-ab11-128a62fd12ae_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/Accurate[[:space:]]Differential[[:space:]]Operators[[:space:]]for[[:space:]]Hybrid[[:space:]]Neural[[:space:]]Fields/c44d2865-cbb6-462e-9dd7-d5a3d2a2b9e8_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/Accurate[[:space:]]Scene[[:space:]]Text[[:space:]]Recognition[[:space:]]with[[:space:]]Efficient[[:space:]]Model[[:space:]]Scaling[[:space:]]and[[:space:]]Cloze[[:space:]]Self-Distillation/8b6df24f-3fc1-4d1c-b1ba-505adaf9019f_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/Acquire[[:space:]]and[[:space:]]then[[:space:]]Adapt_[[:space:]]Squeezing[[:space:]]out[[:space:]]Text-to-Image[[:space:]]Model[[:space:]]for[[:space:]]Image[[:space:]]Restoration/07b6f944-c037-4310-b515-257765e135f8_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/Action[[:space:]]Detail[[:space:]]Matters_[[:space:]]Refining[[:space:]]Video[[:space:]]Recognition[[:space:]]with[[:space:]]Local[[:space:]]Action[[:space:]]Queries/b601f295-f3c7-4954-9e4e-6e0e21d54f88_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/Activating[[:space:]]Sparse[[:space:]]Part[[:space:]]Concepts[[:space:]]for[[:space:]]3D[[:space:]]Class[[:space:]]Incremental[[:space:]]Learning/fabc8d8a-8823-4cd5-b9dd-08fc72902ce3_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/Active[[:space:]]Data[[:space:]]Curation[[:space:]]Effectively[[:space:]]Distills[[:space:]]Large-Scale[[:space:]]Multimodal[[:space:]]Models/975ddfce-0026-4a44-8121-44c16bac99b5_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/Active[[:space:]]Event-based[[:space:]]Stereo[[:space:]]Vision/facc70e9-e2cd-4cca-ac3d-9def9aed72a7_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/Active[[:space:]]Hyperspectral[[:space:]]Imaging[[:space:]]Using[[:space:]]an[[:space:]]Event[[:space:]]Camera/a03728ba-8a8f-49b7-934d-19cc07606a93_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/ActiveGAMER_[[:space:]]Active[[:space:]]GAussian[[:space:]]Mapping[[:space:]]through[[:space:]]Efficient[[:space:]]Rendering/bf80f70a-9b60-4e61-a832-fff7dd165d0b_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/AdMiT_[[:space:]]Adaptive[[:space:]]Multi-Source[[:space:]]Tuning[[:space:]]in[[:space:]]Dynamic[[:space:]]Environments/d96341ee-6b6d-4f3e-84a0-f056b4651277_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/AdaCM^2_[[:space:]]On[[:space:]]Understanding[[:space:]]Extremely[[:space:]]Long-Term[[:space:]]Video[[:space:]]with[[:space:]]Adaptive[[:space:]]Cross-Modality[[:space:]]Memory[[:space:]]Reduction/d9da8c6c-84df-4285-8556-905770af79d2_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/AdaDARE-gamma_[[:space:]]Balancing[[:space:]]Stability[[:space:]]and[[:space:]]Plasticity[[:space:]]in[[:space:]]Multi-modal[[:space:]]LLMs[[:space:]]through[[:space:]]Efficient[[:space:]]Adaptation/40308d8d-f99a-4e31-9931-f5a321e85728_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/AdaMMS_[[:space:]]Model[[:space:]]Merging[[:space:]]for[[:space:]]Heterogeneous[[:space:]]Multimodal[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]with[[:space:]]Unsupervised[[:space:]]Coefficient[[:space:]]Optimization/4a08ffb8-8904-4ef5-923a-c10f3aeb33f5_origin.pdf filter=lfs diff=lfs merge=lfs -text 
+2025/AdaptCMVC_[[:space:]]Robust[[:space:]]Adaption[[:space:]]to[[:space:]]Incremental[[:space:]]Views[[:space:]]in[[:space:]]Continual[[:space:]]Multi-view[[:space:]]Clustering/73638180-8aa5-4d61-9e73-03b5c7fd1d88_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/Adapter[[:space:]]Merging[[:space:]]with[[:space:]]Centroid[[:space:]]Prototype[[:space:]]Mapping[[:space:]]for[[:space:]]Scalable[[:space:]]Class-Incremental[[:space:]]Learning/22e4c0ac-bef5-4540-8a32-a7412685f6a1_origin.pdf filter=lfs diff=lfs merge=lfs -text +2025/Adapting[[:space:]]Dense[[:space:]]Matching[[:space:]]for[[:space:]]Homography[[:space:]]Estimation[[:space:]]with[[:space:]]Grid-based[[:space:]]Acceleration/2f8fff98-4047-461c-a364-929dae730a0f_origin.pdf filter=lfs diff=lfs merge=lfs -text diff --git a/2025/A Theory of Learning Unified Model via Knowledge Integration from Label Space Varying Domains/71226fdf-7aa9-4bda-88d3-c62f01e56520_content_list.json b/2025/A Theory of Learning Unified Model via Knowledge Integration from Label Space Varying Domains/71226fdf-7aa9-4bda-88d3-c62f01e56520_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..9296b54643ed72798053e9452b644ac27de95459 --- /dev/null +++ b/2025/A Theory of Learning Unified Model via Knowledge Integration from Label Space Varying Domains/71226fdf-7aa9-4bda-88d3-c62f01e56520_content_list.json @@ -0,0 +1,2027 @@ +[ + { + "type": "text", + "text": "A Theory of Learning Unified Model via Knowledge Integration from Label Space Varying Domains", + "text_level": 1, + "bbox": [ + 91, + 128, + 906, + 176 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Dexuan Zhang1 Thomas Westfechtel1 Tatsuya Harada1,2 \n1The University of Tokyo, 2RIKEN", + "bbox": [ + 236, + 202, + 759, + 238 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "{dexuan.zhang, thomas, harada}@mi.t.u-tokyo.ac.jp", + "bbox": [ + 279, + 242, + 712, + 257 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 246, + 291, + 326, + 306 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Existing domain adaptation systems can hardly be applied to real-world problems where new classes appear at deployment time, especially in source-free scenarios where multiple source domains do not share the label space despite being given a few labeled target data. To address this, we consider a challenging problem: multi-source semi-supervised open-set domain adaptation, and propose a learning theory via joint error that effectively tackles strong domain shift. To generalize the algorithm to source-free cases, we introduce a computationally efficient and architecture-flexible attention-based feature generation module. Extensive experiments on various data sets demonstrate the significant improvement of our proposed algorithm over baselines.", + "bbox": [ + 88, + 324, + 485, + 521 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1. Introduction", + "text_level": 1, + "bbox": [ + 89, + 571, + 220, + 585 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Generally, a supervised learning algorithm trained on a particular distribution of labeled samples (source domain) often fails to generalize when deployed in a new environment (target domain) in the presence of domain shift. 
In this regard, domain adaptation (DA) [1] algorithms address the domain-shift problem by aligning the data distributions of the source and target domains, learning a domain-invariant feature space with statistical or adversarial learning approaches that have achieved remarkable success. However, the problem setting still needs to be relaxed for real-world applications when we aim to integrate the knowledge learned from multiple source domains. Current DA methods can hardly cover the case of label spaces that vary across domains, and the corresponding learning theory has yet to be proposed. Moreover, the solution is further limited if this heterogeneous domain setting is extended to source-free situations where each source model may be trained with a different network architecture and label space. This work tackles a challenging multi-source semi-supervised open-set domain adaptation paradigm with varying label space, illustrated in Fig. 1.", + "bbox": [ + 91, + 598, + 485, + 900 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "In general, multi-source DA (MSDA) [45] and semi-supervised DA (SSDA) [41] are regarded as more practical than the single-source DA setup, considering that labeled data may come from various domains. More precisely, in these cases, the labeled samples can be differently distributed among themselves in addition to the usual domain shift between the source and the target domains. One naive approach to MSDA and SSDA is to group all labeled data into a single domain and deploy any unsupervised DA (UDA) method. However, such a trivial solution may lead to sub-optimal classification due to the gaps among labeled data [41].", + "bbox": [ + 511, + 292, + 906, + 459 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Most DA techniques assume the same label space in source and target domains, usually called the closed-set setting. The paradigm of closed-set DA has been substantially explored in the literature for UDA [11, 32, 33, 39, 47, 62, 64], SSDA [23, 37, 41, 43, 59, 61], and MSDA [19, 36, 51, 53, 57, 65, 67]. In contrast, the open-set DA (OSDA) [4] setting allows the presence of target-specific classes in addition to the shared classes. Such an open-set arrangement is more challenging due to a huge label shift across domains. The closed-set DA techniques cannot be directly applied in this case since these target-specific open-set samples, in turn, may jeopardize the domain alignment process. This work formalizes a more generalized problem where source domains may not share the label space, and the unlabeled target domain additionally contains novel classes.", + "bbox": [ + 511, + 460, + 908, + 686 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Motivated by these observations, we consider in this work a learning scenario termed multi-source semi-supervised open-set DA (MSODA), where each source domain has a diverse label space, the labeled target domain consists of a few-shot set of target data whose label space, i.e., the known classes, is a superset of any source label space, and the unlabeled target domain contains data either from the known classes or from a combined unknown class. Under this setup, the task is to classify the unlabeled data either into one of the known categories or into a common unknown class. Such a setup has broad applications in fields related to real-world visual perception like medical imaging and remote sensing, where acquisition of multi-domain data is feasible and novel categories may show up abruptly [4]. 
Nonetheless, this MSODA problem", + "bbox": [ + 511, + 688, + 908, + 900 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "CVF", + "bbox": [ + 106, + 2, + 181, + 42 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore.", + "bbox": [ + 236, + 0, + 810, + 46 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "10142", + "bbox": [ + 480, + 944, + 519, + 955 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "cannot be effectively solved by directly utilizing the single-source open-set paradigm of [2, 10, 20, 27, 34, 40, 54, 63], mainly because of the following factors: i) the varying label space of multiple source domains becomes an obstacle to the traditional OSDA techniques, and ii) the unknown recognition can be non-trivial since the target domain may be related to each source domain to a different degree. Regarding multi-source models, recent works [25, 55] consider various label spaces but require largely shared common classes for all domains to align the features, whereas we accept source domains with zero overlap.", + "bbox": [ + 89, + 90, + 483, + 256 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "[36] argued that reducing the domain gap among the source domains leads to a more robust and effective MSDA model. This idea is particularly relevant to our problem setting since aligning the source domains among themselves inherently helps distinguish unknown from known categories in the target domain. Otherwise, the domain shift among the source domains may lead to an unstable alignment of unlabeled target data. Inspired by this idea, we combine the theoretical results from [62, 63] to build a learning theory that can align all source domains with the labeled target domain via joint error, which is crucial to dealing with label shift [66]. Then we introduce PU learning [58] to detect unknowns with an end-to-end algorithm such that the generalization error is guaranteed, unlike those methods applying closed-set DA after unknown separation. Our major contributions can be summarized as:", + "bbox": [ + 89, + 258, + 483, + 500 + ], + "page_idx": 1 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- We introduce a challenging problem setting of multi-source semi-supervised open-set DA with varying label space and propose a learning theory via joint error.", + "- We design a framework to generate labeled source features via an attention-based mechanism for source-free cases.", + "- We demonstrate the efficacy of our proposal through extensive experiments on two benchmark datasets where we perform thorough robustness analysis." + ], + "bbox": [ + 91, + 503, + 483, + 625 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/9f82b9808ce1520db69fff4acb45a3db7e5889bd39e3c49c8ed40582377becd7.jpg", + "image_caption": [ + "Figure 1. Knowledge integration from heterogeneous domains can be considered a task for multi-source semi-supervised open-set domain adaptation. Given a few labeled target data as the key, we aim to build a unified target model from multiple source domains with varying label space, which can be applied to query data containing unknown categories." + ], + "image_footnote": [], + "bbox": [ + 133, + 643, + 444, + 803 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2. 
Learning Theory of Unified Model", + "text_level": 1, + "bbox": [ + 513, + 89, + 825, + 107 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In this section, we present the theory that transfers the knowledge from multiple source domains to the target domain given a few labeled target data under the open-set situation. First, we propose a target error bound via joint error based on the theoretical results from [62, 63]. Then, we derive the generalization error of the proposed learning theory based on the generalized Vapnik-Chervonenkis (VC) complexity [49, 50] of real-valued function space. Finally, we obtain the empirical objective function as an upper bound of a trivial convex combination with the log-sum-exp trick, which leads to a smoother optimization process.", + "bbox": [ + 511, + 114, + 906, + 280 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "We consider the unified model (UM) as a solution to multi-source semi-supervised open-set domain adaptation (MSODA) tasks, where the learning algorithm has access to multiple source domains that may have different label spaces. A set of $n_i$ ( $i = 1,..,N$ ) labeled points $\{(x_{s_i}^j, y_{s_i}^j) \in (\mathcal{X} \subseteq \mathbb{R}^D \times \mathcal{Y}_i' \subseteq \mathcal{Y}')\}_{j=1}^{n_i}$ is sampled i.i.d. from each source domain $S_i'$ . In addition, a set of $l$ labeled points (few-shot) $\{(x_v^j, y_v^j) \in (\mathcal{X} \subseteq \mathbb{R}^D \times \mathcal{Y}' = \{1, ..., K-1\})\}_{j=1}^l$ sampled i.i.d. from the labeled target domain $V'$ is available during learning. We seek a hypothesis that can classify a set of $m$ unlabeled points $\{(x_t^j) \in \mathcal{X} \subseteq \mathbb{R}^D\}_{j=1}^m$ sampled i.i.d. from the target domain $T$ , whose label space $\mathcal{Y} = \{1,..,K\}$ contains the unknown class $K$ . Let $\mathcal{K} = \{k | k \in \mathbb{R}^K : \sum_{y \in \mathcal{Y}} k[y] = 1, k[y] \in [0,1]\}$ denote the output space and $S_i, V$ indicate the complete domains with label space $\mathcal{Y}$ .", + "bbox": [ + 511, + 281, + 906, + 507 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Theorem 2.1 (Target Error Bound for MSODA via Joint Error$^1$ ). Given source $(S_i)$ , labeled $(V)$ and unlabeled target $(T)$ domains that contain data from the unknown class, let $f_{S_i}, f_V, f_T: \mathcal{X} \to \mathcal{K}$ be the true labeling functions of $S_i, V, T$ respectively, whose outputs are one-hot vectors denoting the corresponding classes of inputs. Let $\epsilon: \mathcal{K} \times \mathcal{K} \to \mathbb{R}$ denote a distance metric and $\epsilon_D(f, f') := \mathbb{E}_{x \sim D} \epsilon(f(x), f'(x))$ measure the expected disagreement between the outputs of $f, f': \mathcal{X} \to \mathcal{K}$ over a distribution $D$ on $\mathcal{X}$ . Regarding the source error of a hypothesis $h \in \mathcal{H}: \mathcal{X} \to \mathcal{K}$ where $h(x)[y]$ indicates the probability of $x \in \mathcal{X}$ being labeled as $y \in \mathcal{Y}$ , we use the shorthand $\epsilon_{S_i}(h) := \epsilon_{S_i}(h, f_{S_i})$ . Similarly, we use $\epsilon_V(h), \epsilon_T(h)$ to denote the labeled and unlabeled target error. For $\forall h, f_{S_i}', f_V', f_T' \in \mathcal{H}: \mathcal{X} \to \mathcal{K}$ , the expected target error is bounded,", + "bbox": [ + 511, + 516, + 908, + 743 + ], + "page_idx": 1 + },
t. \\quad \\sum_ {i = 1} ^ {N} \\alpha_ {i} = 1 \\\\ = \\epsilon_ {V} (h) + \\sum_ {i = 1} ^ {N} \\alpha_ {i} \\left[ \\epsilon_ {S _ {i}} (h) + 2 D _ {S _ {i}, V, T} \\left(f _ {S _ {i}} ^ {*}, f _ {V} ^ {*}, f _ {T} ^ {*}, h\\right) + 2 \\theta_ {i} \\right], \\tag {1} \\\\ \\end{array}\n$$\n", + "text_format": "latex", + "bbox": [ + 517, + 760, + 906, + 834 + ], + "page_idx": 1 + }, + { + "type": "equation", + "text": "\n$$\n\\begin{array}{l} 2 D _ {S _ {i}, V, T} (f _ {S _ {i}} ^ {*}, f _ {V} ^ {*}, f _ {T} ^ {*}, h) \\\\ = \\epsilon_ {T} (f _ {S _ {i}} ^ {*}, f _ {T} ^ {*}) + \\epsilon_ {T} (f _ {V} ^ {*}, f _ {T} ^ {*}) + \\epsilon_ {T} (h, f _ {S _ {i}} ^ {*}) + \\epsilon_ {T} (h, f _ {V} ^ {*}) \\\\ + \\epsilon_ {V} (f _ {S _ {i}} ^ {*}, f _ {V} ^ {*}) + \\epsilon_ {S _ {i}} (f _ {V} ^ {*}, f _ {S _ {i}} ^ {*}) - \\epsilon_ {V} (h, f _ {S _ {i}} ^ {*}) - \\epsilon_ {S _ {i}} (h, f _ {V} ^ {*}) \\tag {2} \\\\ \\end{array}\n$$\n", + "text_format": "latex", + "bbox": [ + 542, + 838, + 905, + 886 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "10143", + "bbox": [ + 480, + 944, + 517, + 955 + ], + "page_idx": 1 + }, + { + "type": "equation", + "text": "\n$$\n\\begin{array}{l} \\theta_ {i} = \\underbrace {\\epsilon_ {S _ {i}} (f _ {S _ {i}} , f _ {S _ {i}} ^ {*}) / 2 + \\epsilon_ {V} (f _ {S _ {i}} , f _ {S _ {i}} ^ {*}) + \\epsilon_ {T} (f _ {S _ {i}} , f _ {S _ {i}} ^ {*})} _ {\\theta_ {S} ^ {i}} \\\\ + \\underbrace {\\epsilon_ {V} (f _ {V} , f _ {V} ^ {*}) / 2 + \\epsilon_ {S _ {i}} (f _ {V} , f _ {V} ^ {*}) + \\epsilon_ {T} (f _ {V} , f _ {V} ^ {*})} _ {\\theta_ {V} ^ {i}} + \\underbrace {\\epsilon_ {T} (f _ {T} , f _ {T} ^ {*})} _ {\\theta_ {T} ^ {i}} \\quad (3) \\\\ \\end{array}\n$$\n", + "text_format": "latex", + "bbox": [ + 107, + 114, + 483, + 181 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "In the following, we discuss the approach to obtain generalization guarantees for multiple source domain adaptation in classification settings by a trivial union-bound argument.", + "bbox": [ + 89, + 191, + 483, + 238 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Assumption 2.2 (Substitutes for True Labeling Functions). For finite training data $\\{\\hat{S}_i\\}_{i = 1}^N,\\hat{T},\\hat{V}$ , we assume there exist approximated labeling functions $\\{f_{S_i}^*\\}_{i = 1}^N,f_T^*,f_V^*$ that can lead the empirical deviation $\\sum_{i}\\alpha_{i}\\hat{\\theta}_{i}$ very close to zero such that it can be ignored during the practical learning process.", + "bbox": [ + 89, + 247, + 483, + 325 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Theorem 2.3 (Generalization Error $^1$ ). Let $\\hat{S}_i, \\hat{V}, \\hat{T}$ denote the empirical distributions generated with $m$ i.i.d. samples from each domain. Let $\\mathcal{F} = \\{f(x) = \\epsilon(h(x), h'(x)): \\mathcal{X} \\to [0, M] | h, h' \\in \\mathcal{H}\\}$ be a function space with complexity measured by uniform covering number $\\mathcal{N}_1(\\xi, \\mathcal{F}, m)$ . 
Let $\\alpha_i = \\frac{\\exp(\\nu \\hat{U}_i(h))}{\\sum_j \\exp(\\nu \\hat{U}_j(h))}$ , $\\nu > 0$ , given Jensen's & Cauchy's inequality and Assumption 2.2, there exist $f_{S_i}^* \\in \\mathcal{H}_{S_i} \\subseteq \\mathcal{H}$ , $f_V^* \\in \\mathcal{H}_V \\subseteq \\mathcal{H}$ , $f_T^* \\in \\mathcal{H}_T \\subseteq \\mathcal{H}$ , such that for $0 < \\delta < 1$ , with probability at least $1 - \\delta$ , for $\\forall h \\in \\mathcal{H}$ :", + "bbox": [ + 88, + 333, + 483, + 481 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\begin{array}{l} \\epsilon_ {T} (h) \\leq \\frac {1}{2} [ \\underbrace {\\epsilon_ {\\hat {V}} (h)} _ {L _ {c l s} ^ {V} (h)} + \\frac {1}{\\nu} \\log \\sum_ {i = 1} ^ {N} \\exp (\\nu \\hat {U} _ {i} (h)) ] \\\\ + \\mathcal {O} \\left(\\inf _ {\\sqrt {\\frac {2}{m}} \\leq \\gamma \\leq M} (\\gamma + \\int_ {\\gamma} ^ {M} \\sqrt {\\frac {1}{m} \\log \\frac {2 (1 1 N + 6) \\mathcal {N} _ {1} (\\frac {\\xi}{8} , \\mathcal {F} , 2 m)}{\\delta}} d \\xi)\\right) \\tag {4} \\\\ \\end{array}\n$$\n", + "text_format": "latex", + "bbox": [ + 94, + 498, + 503, + 585 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\hat {U} _ {i} (h) = \\underbrace {\\epsilon_ {\\hat {S} _ {i}} (h)} _ {L _ {c l s} ^ {S _ {i}} (h)} + 2 \\underbrace {D _ {\\hat {S} _ {i} , \\hat {V} , \\hat {T}} \\left(f _ {S _ {i}} ^ {*}, f _ {V} ^ {*}, f _ {T} ^ {*}, h\\right)} _ {L _ {d i s} ^ {i} \\left(f _ {S _ {i}} ^ {*}, f _ {V} ^ {*}, f _ {T} ^ {*}, h\\right)} \\tag {5}\n$$\n", + "text_format": "latex", + "bbox": [ + 94, + 590, + 483, + 626 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "The log-sum-exp trick [30] yields an upper bound of the convex combination as Theorem 2.3, where we no longer need to heuristically decide the value of $\\alpha_{i}$ in the unified model. It smooths the objective and provides a principled and adaptive way to combine all the gradients from the $N$ source domains. This often leads to better generalizations in practice because of the ensemble effect of multiple sources implied by the upper bound [65].", + "bbox": [ + 89, + 636, + 482, + 756 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "According to [13, 56, 63], for $\\forall f, f': \\mathcal{X} \\to \\mathcal{K}, \\epsilon_V(f, f')$ can be approximated by the expectation on $V'$ based on PU learning. Moreover, source data $S_i'$ may be unavailable due to privacy concerns (e.g., medical data) during the adaptation phase. To tackle this source-free domain adaptation (SFDA) problem, we propose a source features generation pipeline based on the attention mechanism, which can transfer the knowledge between models with different architectures.", + "bbox": [ + 89, + 757, + 483, + 878 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3. Methodology", + "text_level": 1, + "bbox": [ + 513, + 89, + 648, + 107 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "In this section, we first recall several preliminaries crucial to the learning algorithm of open-set domain adaptation. Then, we propose a pipeline to transfer the knowledge between models with different architectures based on the attention mechanism to recover feature space under the source-free setting. Finally, we define constrained hypothesis space to obtain a rigorous objective function.", + "bbox": [ + 511, + 114, + 906, + 220 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.1. 
+ { + "type": "text", + "text": "According to [13, 56, 63], for $\forall f, f': \mathcal{X} \to \mathcal{K}, \epsilon_V(f, f')$ can be approximated by the expectation on $V'$ based on PU learning. Moreover, source data $S_i'$ may be unavailable due to privacy concerns (e.g., medical data) during the adaptation phase. To tackle this source-free domain adaptation (SFDA) problem, we propose a source feature generation pipeline based on the attention mechanism, which can transfer the knowledge between models with different architectures.", + "bbox": [ + 89, + 757, + 483, + 878 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3. Methodology", + "text_level": 1, + "bbox": [ + 513, + 89, + 648, + 107 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "In this section, we first recall several preliminaries crucial to the learning algorithm of open-set domain adaptation. Then, we propose a pipeline to transfer the knowledge between models with different architectures based on the attention mechanism to recover the feature space under the source-free setting. Finally, we define a constrained hypothesis space to obtain a rigorous objective function.", + "bbox": [ + 511, + 114, + 906, + 220 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.1. Discrepancy Measurement", + "text_level": 1, + "bbox": [ + 511, + 229, + 746, + 244 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "As introduced in [63], we recall the definitions of the Open-set Margin Discrepancy and the Unknown Predictive Discrepancy, which serve as key components in bridging the gap between the theory and the algorithm for open-set domain adaptation.", + "bbox": [ + 511, + 252, + 906, + 313 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Definition 3.1 (Open-set Margin Discrepancy). Let $y, y'$ denote the outputs of $f, f': \mathcal{X} \to \mathcal{K}$ where $y = l(f(x)), l(f'(x)) = y'$ given the induced labeling function:", + "bbox": [ + 511, + 318, + 906, + 361 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\nl \circ f: x \rightarrow \underset {y \in \mathcal {Y}} {\arg \max } f (x) [ y ] \tag {6}\n$$\n", + "text_format": "latex", + "bbox": [ + 630, + 367, + 906, + 386 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "The Open-set Margin Discrepancy between two functions $f, f'$ over a distribution $D$ is given by:", + "bbox": [ + 513, + 393, + 905, + 422 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\begin{array}{l} \epsilon_ {D} (f, f ^ {\prime}) = \mathbb {E} _ {x \sim D} [ \operatorname {omd} (f (x), f ^ {\prime} (x)) ] \tag {7} \\ \operatorname {omd} (f (x), f ^ {\prime} (x)) = \max \left(\left| \log (1 - f (x) [ y ]) - \log (1 - f ^ {\prime} (x) [ y ]) \right|, \left| \log (1 - f (x) [ y ^ {\prime} ]) - \log (1 - f ^ {\prime} (x) [ y ^ {\prime} ]) \right|\right) \tag {8} \\ \end{array}\n$$\n", + "text_format": "latex", + "bbox": [ + 526, + 431, + 906, + 477 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Definition 3.2 (Unknown Predictive Discrepancy). Let $v: \mathcal{K} \times \mathcal{K} \to \mathbb{R}$ denote the Unknown Predictive Discrepancy as a distance metric and $v_{D}(f, f') := \mathbb{E}_{x \sim D} v(f(x), f'(x))$ measure the expected disagreement between the $K$ -th outputs of $f, f': \mathcal{X} \to \mathcal{K}$ over a distribution $D$ on $\mathcal{X}$ . Let $e^{K}: \mathcal{X} \to [0, \dots, 0, 1] \in \mathcal{K}$ denote a function that predicts any input as the unknown class. The deviation from $e^{K}$ for a hypothesis $h \in \mathcal{H}$ is further referred to by the shorthand $v_{D}(h) := v_{D}(h, e^{K})$ , which measures the probability that samples from $D$ are not classified as unknowns.", + "bbox": [ + 511, + 483, + 906, + 621 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\nv _ {D} (f, f ^ {\prime}) = \mathbb {E} _ {x \sim D} | \log (1 - f (x) [ K ]) - \log (1 - f ^ {\prime} (x) [ K ]) | \tag {9}\n$$\n", + "text_format": "latex", + "bbox": [ + 545, + 628, + 906, + 643 + ], + "page_idx": 2 + },
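The two discrepancies can be implemented directly from Eqs. (7)-(9). The following PyTorch sketch is our own minimal reading of the definitions (the small clamp is an assumption added for numerical stability) and operates on batched probability outputs:

```python
import torch

EPS = 1e-6  # assumed clamp so log(1 - p) stays finite

def omd(f_out: torch.Tensor, fp_out: torch.Tensor) -> torch.Tensor:
    """Open-set Margin Discrepancy (Eqs. 7-8) between (B, K) simplex outputs."""
    f_out = f_out.clamp(max=1.0 - EPS)
    fp_out = fp_out.clamp(max=1.0 - EPS)
    y = f_out.argmax(dim=1)    # label induced by f (Eq. 6)
    yp = fp_out.argmax(dim=1)  # label induced by f'
    idx = torch.arange(f_out.size(0))
    d_y = (torch.log1p(-f_out[idx, y]) - torch.log1p(-fp_out[idx, y])).abs()
    d_yp = (torch.log1p(-f_out[idx, yp]) - torch.log1p(-fp_out[idx, yp])).abs()
    return torch.maximum(d_y, d_yp).mean()

def upd(f_out: torch.Tensor, fp_out: torch.Tensor) -> torch.Tensor:
    """Unknown Predictive Discrepancy (Eq. 9): disagreement on the K-th output."""
    f_k = f_out[:, -1].clamp(max=1.0 - EPS)
    fp_k = fp_out[:, -1].clamp(max=1.0 - EPS)
    return (torch.log1p(-f_k) - torch.log1p(-fp_k)).abs().mean()
```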
+ { + "type": "text", + "text": "3.2. Inference on Expectation with PU Learning", + "text_level": 1, + "bbox": [ + 511, + 652, + 883, + 667 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "In this section, we introduce techniques from PU learning [58] to estimate the expectation over the source domain $S_{i}$ using the incomplete source domain $S_{i}^{\prime}$ and the target domain $T$ for the open-set scenario. The expectation over $V$ can be derived analogously.", + "bbox": [ + 511, + 675, + 905, + 751 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Assumption 3.3. Let $S_{i}^{k} = P_{S_{i}}(x|y = k), V^{k} = P_{V}(x|y = k), T^{k} = P_{T}(x|y = k)$ denote class conditional distributions, and let $S_{i}^{\backslash K} = P_{S_{i}}(x|y \neq K), V' = P_{V}(x|y \neq K), T' = P_{T}(x|y \neq K)$ indicate the incomplete domains that exclude the unknown-class components $S_{i}^{K}, V^{K}, T^{K}$ . Given a feature extractor $g: \mathcal{X} \subseteq \mathbb{R}^{D} \to \mathcal{Z} \subseteq \mathbb{R}^{F}$ , assume that the feature space can be aligned by DA techniques such that $Z^{K} = P_{S_{i}^{K}}(z) = P_{V^{K}}(z) = P_{T^{K}}(z), Z' = P_{S_{i}^{\backslash K}}(z) = P_{V'}(z) = P_{T'}(z)$ .", + "bbox": [ + 511, + 757, + 906, + 901 + ], + "page_idx": 2 + }, + { + "type": "page_footnote", + "text": "1 see proofs in supplementary material.", + "bbox": [ + 107, + 886, + 315, + 900 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "10144", + "bbox": [ + 480, + 944, + 519, + 955 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Lemma 3.4 (PU Estimation $^1$ ). Let $g: \mathcal{X} \subseteq \mathbb{R}^D \to \mathcal{Z} \subseteq \mathbb{R}^F$ denote the feature extractor. Let $h \in \mathcal{H}^F: \mathcal{Z} \to \mathcal{K}$ where $h \circ g \in \mathcal{H}: \mathcal{X} \to \mathcal{K}$ and $f_V^* \in \mathcal{H}_V^F, f_T^* \in \mathcal{H}_T^F, f_{S_i}^* \in \mathcal{H}_{S_i}^F: \mathcal{Z} \to \mathcal{K}$ denote the decomposed approximated labeling functions. Let $\sum_{k=1}^{K} \pi_{S_i}^k = 1, \sum_{k=1}^{K} \pi_V^k = 1, \sum_{k=1}^{K} \pi_T^k = 1$ denote the class priors of each domain. Given Assumption 3.3, the expectation on $S_i$ can be estimated by the expectation on $S_i^{\backslash K}$ and the Unknown Predictive Discrepancy (Definition 3.2) under a mild condition that $\pi_{S_i}^K = \pi_T^K = 1 - \alpha$ :", + "bbox": [ + 89, + 89, + 486, + 232 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\epsilon_ {S _ {i}} (h \circ g) = \alpha \left[ \epsilon_ {S _ {i} ^ {\backslash K}} (h \circ g) - v _ {S _ {i} ^ {\backslash K}} (h \circ g) \right] + v _ {T} (h \circ g) \tag {10}\n$$\n", + "text_format": "latex", + "bbox": [ + 142, + 247, + 485, + 276 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\begin{array}{l} \epsilon_ {S _ {i}} (f _ {S _ {i}} ^ {*} \circ g, f _ {V} ^ {*} \circ g) = \alpha [ \epsilon_ {S _ {i} ^ {\backslash K}} (f _ {S _ {i}} ^ {*} \circ g, f _ {V} ^ {*} \circ g) - v _ {S _ {i} ^ {\backslash K}} (f _ {S _ {i}} ^ {*} \circ g, f _ {V} ^ {*} \circ g) ] \\ + v _ {T} \left(f _ {S _ {i}} ^ {*} \circ g, f _ {V} ^ {*} \circ g\right) \tag {11} \\ \epsilon_ {S _ {i}} (f _ {V} ^ {*} \circ g, h \circ g) = \alpha [ \epsilon_ {S _ {i} ^ {\backslash K}} (f _ {V} ^ {*} \circ g, h \circ g) - v _ {S _ {i} ^ {\backslash K}} (f _ {V} ^ {*} \circ g, h \circ g) ] \\ + v _ {T} \left(f _ {V} ^ {*} \circ g, h \circ g\right) \tag {12} \\ \end{array}\n$$\n", + "text_format": "latex", + "bbox": [ + 91, + 279, + 496, + 352 + ], + "page_idx": 3 + },
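A minimal sketch of how Eq. (10) can be evaluated in practice, assuming the three expectations have already been estimated on mini-batches (all names here are hypothetical):

```python
import torch

def pu_source_error(err_src_known: torch.Tensor,
                    upd_src_known: torch.Tensor,
                    upd_target: torch.Tensor,
                    alpha: float) -> torch.Tensor:
    """PU estimation of the complete-source error (Eq. 10).

    err_src_known: eps on S_i^{\K}, error on observed known-class source data
    upd_src_known: v on S_i^{\K}, unknown predictive discrepancy on the same data
    upd_target:    v_T on unlabeled target data
    alpha:         estimated known-class ratio (pi_{S_i}^K = pi_T^K = 1 - alpha)
    """
    return alpha * (err_src_known - upd_src_known) + upd_target
```

Eqs. (11)-(12) follow the same pattern with the corresponding pair of functions substituted for $h \circ g$.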
Let $\\mathcal{Y}_i^{\\prime \\prime} = \\{k|k\\notin \\mathcal{Y}_i^\\prime ,k = 1,..K - 1\\}$ denote the label space that is absent from $S_{i}^{\\prime}$ . Given Assumption 3.5, we further decompose the source error as:", + "bbox": [ + 89, + 433, + 485, + 477 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\begin{array}{l} \\alpha \\epsilon_ {S _ {i} ^ {\\backslash K}} (h \\circ g) = \\sum_ {k \\in \\mathcal {Y} _ {i} ^ {\\prime \\prime}} \\pi_ {S _ {i}} ^ {k} \\epsilon_ {S _ {i} ^ {k}} (h \\circ g) + \\sum_ {k \\in \\mathcal {Y} _ {i} ^ {\\prime}} \\pi_ {S _ {i}} ^ {k} \\epsilon_ {S _ {i} ^ {k}} (h \\circ g) \\\\ = \\rho_ {i} \\sum_ {k \\in \\mathcal {Y} _ {i} ^ {\\prime \\prime}} \\epsilon_ {V _ {i} ^ {k}} (h \\circ g) + (1 - \\rho_ {i}) \\epsilon_ {S _ {i} ^ {\\prime}} (h \\circ g), \\tag {13} \\\\ \\end{array}\n$$\n", + "text_format": "latex", + "bbox": [ + 119, + 484, + 483, + 544 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $\\rho_{i} = |\\mathcal{Y}_{i}^{\\prime \\prime}| / K$ under a mild condition that $\\pi_{S_i}^k = 1 / K$ for $k\\in \\mathcal{Y}_i^{\\prime \\prime}$", + "bbox": [ + 89, + 556, + 482, + 588 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Remark 3.7. According to Definition 3.2, minimizing $v_{\\hat{T}}(h \\circ g)$ means mapping target samples to the unknown class. In practice, a multiplier $\\beta < 1$ is applied on $v_{\\hat{T}}(h \\circ g)$ to prevent all target samples from being classified as unknown.", + "bbox": [ + 89, + 595, + 483, + 657 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.3. Towards Source-Free Knowledge Transfer with Attention-based Feature Generation", + "text_level": 1, + "bbox": [ + 89, + 665, + 483, + 696 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Source-free domain adaptation (SFDA) has been considered a means of reducing reliance on source data. As described in [24], the existing SFDA research can generally be categorized into data-centric and model-centric methods. Model-centric methods employ techniques such as self-supervision, while data-centric methods focus on image-based reconstruction. Model-centric methods like [28, 29, 35, 60] require source model fine-tuning, where the generalization to multi-source cases with label shift can be nontrivial since it may fail to fully leverage the few-shot labeled data due to the missing classes in source domains. Meanwhile, for data-centric methods like [6, 26], the pipeline to generate source-like images is generally computationally intensive", + "bbox": [ + 89, + 703, + 485, + 901 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/b924781068c2bfcbf6d052fefbd06b27a184ae76704098abb9c471e600243125.jpg", + "image_caption": [ + "Figure 2. The mechanism of attention-based feature generation for source-free domain adaptation. Given a similarity-based weight estimated by the knowledge preserved in the pre-trained source model consisting of a black-box feature extractor $g_{i}$ and a visible classifier $f_{i}$ , labeled features generated with the attention module can be considered a weighted average of unlabeled target features, which serve as the anchor for the target distribution alignment in the adaptation phase." + ], + "image_footnote": [], + "bbox": [ + 553, + 89, + 867, + 191 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "and time-consuming, which can hardly be applied to highly structured domains. 
+ { + "type": "text", + "text": "3.3. Towards Source-Free Knowledge Transfer with Attention-based Feature Generation", + "text_level": 1, + "bbox": [ + 89, + 665, + 483, + 696 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Source-free domain adaptation (SFDA) has been considered a means of reducing reliance on source data. As described in [24], the existing SFDA research can generally be categorized into data-centric and model-centric methods. Model-centric methods employ techniques such as self-supervision, while data-centric methods focus on image-based reconstruction. Model-centric methods like [28, 29, 35, 60] require source model fine-tuning, whose generalization to multi-source cases with label shift can be nontrivial since it may fail to fully leverage the few-shot labeled data due to classes missing from the source domains. Meanwhile, for data-centric methods like [6, 26], the pipeline to generate source-like images is generally computationally intensive", + "bbox": [ + 89, + 703, + 485, + 901 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/b924781068c2bfcbf6d052fefbd06b27a184ae76704098abb9c471e600243125.jpg", + "image_caption": [ + "Figure 2. The mechanism of attention-based feature generation for source-free domain adaptation. Given a similarity-based weight estimated by the knowledge preserved in the pre-trained source model consisting of a black-box feature extractor $g_{i}$ and a visible classifier $f_{i}$ , labeled features generated with the attention module can be considered a weighted average of unlabeled target features, which serve as the anchor for the target distribution alignment in the adaptation phase." + ], + "image_footnote": [], + "bbox": [ + 553, + 89, + 867, + 191 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "and time-consuming, which can hardly be applied to highly structured domains. Furthermore, it might violate the privacy-protection intention of SFDA by recovering source-like images. Motivated by this, in this section, we propose a novel attention-based feature generation (AFG) algorithm that can produce labeled anchors for the alignment of unlabeled target data by leveraging the knowledge embedded in source models, which is more computationally efficient and independent of source model fine-tuning.", + "bbox": [ + 511, + 345, + 906, + 482 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "The SFDA scenario involves two phases: pre-training and adaptation. During pre-training, $N$ models are trained on labeled data from each source domain $x_{s_i} \sim S_i'$ , $i = 1\dots N$ . Subsequently, the goal of the adaptation stage is to adapt the pre-trained source models to the unlabeled target data $x_t \sim T$ given few-shot labeled target data $x_v \sim V'$ . The proposed approach assumes a challenging open-set form, implying that the label spaces among the target and source domains are distinct.", + "bbox": [ + 511, + 486, + 908, + 619 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Inspired by [28], which uses a single-layer linear classifier in source models to store the cluster centers of source features, we choose a Bayesian linear classifier during pre-training such that source features can be sampled by the re-parameterization trick [21] in the adaptation phase. Let $g_{i}:\mathcal{X}\rightarrow \mathcal{Z}_{i}\subseteq \mathbb{R}^{F_{i}}$ and $f_{i}\coloneqq \{\mu_{i},\sigma_{i}\}$ denote each pre-trained source model. As illustrated in Fig. 2, given the source features approximated by the weight samples", + "bbox": [ + 511, + 621, + 908, + 723 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "of the Bayesian linear classifier as $g_{i}(\hat{S}_{i}^{\prime}) = \left( \begin{array}{c} g_{i}(x_{s_{i}}^{1}) \\ \vdots \\ g_{i}(x_{s_{i}}^{\| \mathcal{Y}_{i}^{\prime}\|}) \end{array} \right) :=$", + "bbox": [ + 511, + 723, + 903, + 781 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\mu_ {i} + \sigma_ {i} \odot \left( \begin{array}{c} \zeta_ {i} ^ {1} \\ \vdots \\ \zeta_ {i} ^ {\| \mathcal {Y} _ {i} ^ {\prime} \|} \end{array} \right), \zeta_ {i} ^ {j} \sim \mathcal {N} (0, I) \text { with size } \| \mathcal {Y} _ {i} ^ {\prime} \| \times F _ {i}\n$$\n", + "text_format": "latex", + "bbox": [ + 511, + 781, + 903, + 840 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "(multiple samples can be generated from each class in practice), along with the query and key mapping functions $w_{q_i}, w_{k_i}: \mathcal{Z}_i \to \mathcal{Z}_i' \subseteq \mathbb{R}^{F_i'}$ , the corresponding labeled anchor defined as $\{(g(x_{s_i}^j), y_i^j \in \mathcal{Y}_i')\}_{j=1}^{\|\mathcal{Y}_i'\|}, y_i^j \neq y_i^{j'}$ is given", + "bbox": [ + 511, + 839, + 908, + 902 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "10145", + "bbox": [ + 480, + 944, + 517, + 955 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "by:", + "bbox": [ + 89, + 92, + 116, + 106 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\ng (\hat {S} _ {i} ^ {\prime}) = \operatorname {softmax} \left(\frac {w _ {q _ {i}} \left(g _ {i} \left(\hat {S} _ {i} ^ {\prime}\right)\right) \cdot w _ {k _ {i}} \left(g _ {i} \left(\hat {T} ^ {\prime}\right)\right) ^ {\top}}{\sqrt {F _ {i} ^ {\prime}}}\right) g (\hat {T} ^ {\prime}), \tag {14}\n$$\n", + "text_format": "latex", + "bbox": [ + 137, + 117, + 483, + 147 + ], + "page_idx": 4 + },
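Eq. (14) is a single cross-attention step: queries come from the classifier-sampled source features under $g_i$, keys from the target features under $g_i$, and values from the target features under the adapting extractor $g$. A minimal PyTorch sketch (function and argument names are our own illustration):

```python
import torch

def generate_source_anchors(src_feats_gi: torch.Tensor,  # g_i(S_i'), (C, F_i)
                            tgt_feats_gi: torch.Tensor,  # g_i(T'),  (M, F_i)
                            tgt_feats_g: torch.Tensor,   # g(T'),    (M, F)
                            w_q: torch.nn.Module,
                            w_k: torch.nn.Module) -> torch.Tensor:
    """Attention-based feature generation (Eq. 14): labeled anchors as a
    similarity-weighted average of target features in the shared space."""
    q = w_q(src_feats_gi)                                        # (C, F')
    k = w_k(tgt_feats_gi)                                        # (M, F')
    attn = torch.softmax(q @ k.t() / k.size(-1) ** 0.5, dim=-1)  # (C, M)
    return attn @ tgt_feats_g                                    # (C, F) anchors
```

Because the anchors live in the target feature space of $g$, they can be mixed with labeled target data during distribution alignment without ever reconstructing source images.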
+ { + "type": "text", + "text": "where $\hat{T}'$ denotes the estimated known-class data from the target. To produce meaningful features for the distribution alignment in the adaptation phase, we propose two objective functions to learn the query and key mappings $\{w_{q_i}, w_{k_i}\}_{i=1}^N$ of each source domain. Analogous to [5], we train $w_{q_i}, w_{k_i}$ by maximizing the similarity between the projections of the same target features extracted by the pre-trained source model $g_i$ while pushing different target features far apart, which can be achieved by minimizing a reconstruction loss $L_{rec}^i$ such that the output of the attention module approximates the target features $g(\hat{T}')$ given target data as query and key. To further regularize $w_{q_i}, w_{k_i}$ , we introduce a cycle-consistency loss $L_{cyc}^i$ that brings the features generated by labeled and unlabeled target data $\hat{V}', \hat{T}'$ close to each other.", + "bbox": [ + 89, + 160, + 483, + 367 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\nL _ {r e c} ^ {i} = \left| \operatorname {softmax} \left(\frac {w _ {q _ {i}} \left(g _ {i} \left(\hat {T} ^ {\prime}\right)\right) \cdot w _ {k _ {i}} \left(g _ {i} \left(\hat {T} ^ {\prime}\right)\right) ^ {\top}}{\sqrt {F _ {i} ^ {\prime}}}\right) g \left(\hat {T} ^ {\prime}\right) - g \left(\hat {T} ^ {\prime}\right) \right| \tag {15}\n$$\n", + "text_format": "latex", + "bbox": [ + 104, + 376, + 483, + 406 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\nL _ {c y c} ^ {i} = \left| \operatorname {softmax} \left(\frac {w _ {q _ {i}} (g _ {i} (\hat {S} _ {i} ^ {\prime})) \cdot w _ {k _ {i}} (g _ {i} (\hat {V} ^ {\prime})) ^ {\top}}{\sqrt {F _ {i} ^ {\prime}}}\right) g (\hat {V} ^ {\prime}) - g (\hat {S} _ {i} ^ {\prime}) \right| \tag {16}\n$$\n", + "text_format": "latex", + "bbox": [ + 106, + 407, + 483, + 438 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Progressive Unknown Rejection (PUR) is additionally proposed to improve the recognition accuracy on the unknown class. In the open-set setting, the empirical target data $\hat{T}$ includes the unknown class, while the generated labeled anchors $g(\hat{S}_i^{\prime})$ should be limited to the known classes. According to the generation mechanism defined by Eq. (14), labeled anchors can be considered a similarity-based weighted average of target features, which are not supposed to contain components from irrelevant features of the unknown class. However, it is impractical to learn the ideal result, where the weights assigned to those unrelated target features are zero, by pure regularization of the mapping functions $w_{q_i}, w_{k_i}$ . To address this problem, we introduce a scheme to gradually reject the target features from the unknown class by removing the target data labeled as unknown given the current hypothesis $h$ from $\hat{T}$ . Specifically, at each training iteration during the adaptation stage, for a batch of input target data, we rank the likelihood of the unknown class for each target sample $p(y = K|x_t) = h(x_t)[K]$ in ascending order. Given a threshold $0 < \tau < 1$ progressively increasing from zero according to the exponential ramp-up function [22], we select the bottom $1 - \tau$ fraction of target samples as $\hat{T}'$ .", + "bbox": [ + 89, + 454, + 483, + 787 + ], + "page_idx": 4 + },
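Progressive Unknown Rejection amounts to ranking a target batch by $h(x_t)[K]$ and keeping the bottom $1-\tau$ fraction, with $\tau$ ramped up over training. The sketch below assumes the common $\exp(-5(1-t)^2)$ exponential ramp-up of [22] and a hypothetical cap tau_max; names are ours:

```python
import math
import torch

def pur_select(x_t, g, h, step: int, total_steps: int, tau_max: float = 0.5):
    """Progressive Unknown Rejection: drop the most unknown-like target samples."""
    t = min(1.0, step / total_steps)
    tau = tau_max * math.exp(-5.0 * (1.0 - t) ** 2)   # exponential ramp-up [22]
    with torch.no_grad():
        p_unknown = h(g(x_t))[:, -1]                  # p(y = K | x_t)
    n_keep = max(1, int(round((1.0 - tau) * x_t.size(0))))
    keep = p_unknown.argsort()[:n_keep]               # ascending: most-known first
    return x_t[keep]                                  # estimated known-class batch T'
```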
+ { + "type": "text", + "text": "3.4. Hypothesis Constraint", + "text_level": 1, + "bbox": [ + 89, + 797, + 302, + 814 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Proposition 3.8. If $\mathcal{H}_{S_i}^F, \mathcal{H}_V^F, \mathcal{H}_T^F$ are sets of functions that can minimize a part of $\hat{\theta}_S^i, \sum_i \hat{\theta}_V^i, \sum_i \hat{\theta}_T^i$ respectively, then $f_{S_i}^* \in \mathcal{H}_{S_i}^F, f_V^* \in \mathcal{H}_V^F, f_T^* \in \mathcal{H}_T^F$ must hold, such that we can relax $L_{\text{dis}}$ in Theorem 2.3 by considering the maximum w.r.t. the functions $f_{S_i}', f_V', f_T'$ as:", + "bbox": [ + 88, + 820, + 485, + 902 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\begin{array}{l} \log \sum_ {i} \exp \left(\nu \left[ L _ {c l s} ^ {S _ {i}} (h; g) + 2 L _ {d i s} ^ {i} \left(f _ {S _ {i}} ^ {*}, f _ {V} ^ {*}, f _ {T} ^ {*}, h; g\right) \right]\right) \\ \leq \sup _ {\{f _ {S _ {i}} ^ {\prime} \in \mathcal {H} _ {S _ {i}} ^ {F} \} _ {i = 1} ^ {N}, f _ {V} ^ {\prime} \in \mathcal {H} _ {V} ^ {F}, f _ {T} ^ {\prime} \in \mathcal {H} _ {T} ^ {F}} \log \sum_ {i} \exp (\nu [ L _ {c l s} ^ {S _ {i}} (h; g) + 2 L _ {d i s} ^ {i} (f _ {S _ {i}} ^ {\prime}, f _ {V} ^ {\prime}, f _ {T} ^ {\prime}, h; g) ]) \\ \end{array}\n$$\n", + "text_format": "latex", + "bbox": [ + 532, + 98, + 942, + 146 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "(17)", + "bbox": [ + 882, + 157, + 906, + 169 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Definition 3.9 (Approximated Labeling Function Space). Let $L_{\mathcal{H}_S}^i, L_{\mathcal{H}_V}^i, L_{\mathcal{H}_T}^i$ denote the hypothesis constraints, i.e., a part of the empirical deviation between the approximated and true labeling functions $\hat{\theta}_S^i, \hat{\theta}_T^i, \hat{\theta}_V^i$ . The Approximated Labeling Function Spaces $\mathcal{H}_{S_i}^F, \mathcal{H}_V^F, \mathcal{H}_T^F$ can be defined as the sets whose members $f_{S_i}', f_V', f_T' \in \mathcal{H}^F$ can minimize $L_{\mathcal{H}_S}^i, \sum_i L_{\mathcal{H}_V}^i, \sum_i L_{\mathcal{H}_T}^i$ :", + "bbox": [ + 511, + 181, + 906, + 292 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\begin{array}{l} \mathcal {H} _ {S _ {i}} ^ {F} = \left\{f _ {S _ {i}} ^ {\prime} \mid \arg \min _ {g, f _ {S _ {i}} ^ {\prime} \in \mathcal {H} ^ {F}} \left[ L _ {\mathcal {H} _ {S}} ^ {i} \left(f _ {S _ {i}} ^ {\prime}; g\right) = L _ {c l s} ^ {S _ {i}} \left(f _ {S _ {i}} ^ {\prime}; g\right) / 2 + L _ {c l s} ^ {V} \left(f _ {S _ {i}} ^ {\prime}; g\right) \right] \right\} \\ \mathcal {H} _ {V} ^ {F} = \left\{f _ {V} ^ {\prime} \mid \arg \min _ {g, f _ {V} ^ {\prime} \in \mathcal {H} ^ {F}} \sum_ {i} \left[ L _ {\mathcal {H} _ {V}} ^ {i} \left(f _ {V} ^ {\prime}; g\right) = L _ {c l s} ^ {V} \left(f _ {V} ^ {\prime}; g\right) / 2 + L _ {c l s} ^ {S _ {i}} \left(f _ {V} ^ {\prime}; g\right) \right] \right\} \\ \mathcal {H} _ {T} ^ {F} = \left\{f _ {T} ^ {\prime} \mid \arg \min _ {g, f _ {T} ^ {\prime} \in \mathcal {H} ^ {F}} \sum_ {i} \left[ L _ {\mathcal {H} _ {T}} ^ {i} \left(f _ {T} ^ {\prime}; g\right) = \left[ L _ {c l s} ^ {S _ {i}} \left(f _ {T} ^ {\prime}; g\right) + L _ {c l s} ^ {V} \left(f _ {T} ^ {\prime}; g\right) \right] / 2 + L _ {s s l} \right] \right\} \\ \end{array}\n$$\n", + "text_format": "latex", + "bbox": [ + 532, + 300, + 949, + 344 + ], + "page_idx": 4 + },
\\\\ \\end{array}\n$$\n", + "text_format": "latex", + "bbox": [ + 532, + 300, + 949, + 344 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "(18)", + "bbox": [ + 882, + 364, + 906, + 375 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "To build a more reliable target function space $\\mathcal{H}_T^F$ , we approximate the target error with the error rate on labeled samples and a semi-supervised regularization term $L_{ssl}^2$ including entropy minimization [14, 15], pseudo labeling [44, 46] and consistency regularization [22, 42], which has been intensively discussed in [23, 43, 59, 63].", + "bbox": [ + 511, + 388, + 906, + 479 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.5. Algorithm", + "text_level": 1, + "bbox": [ + 513, + 489, + 630, + 506 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "As described in Algorithm 1, we introduce a gradient reversal layer [12] to train the overall objective together. ImageNet [8] pre-trained ResNet-50 [16] is used as feature extractor $g$ and randomly initialized 2-layer fully-connected networks are used for classifiers $f_{S_i}^{\\prime}, f_V^{\\prime}, f_T^{\\prime}, h$ . We adopt SGD with a momentum of 0.9 for optimization, where the initial learning rate is empirically set to 0.001. We employ the learning rate annealing strategy proposed in [12]. We use RandomFlip, RandomCrop, and RandAugment [7] as data augmentation with the batch size fixed to 24.", + "bbox": [ + 511, + 513, + 908, + 664 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/2368c1f59ad63ea02d5288c77d00848fb6a4405f5d48019ba52a15b5fac7f00b.jpg", + "image_caption": [ + "Figure 3. Alignment mechanism of UM, where unknown target data $\\hat{T}^K$ (green) is pushed away from labeled data into a separated cluster, while known target data $\\hat{T}'$ is aligned back towards labeled clusters by $\\min_g L_{dis}$ ." 
+ ], + "image_footnote": [], + "bbox": [ + 516, + 672, + 906, + 795 + ], + "page_idx": 4 + }, + { + "type": "page_footnote", + "text": "2see details in supplementary material.", + "bbox": [ + 529, + 886, + 740, + 900 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "10146", + "bbox": [ + 480, + 944, + 519, + 955 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Algorithm 1 UM", + "text_level": 1, + "bbox": [ + 91, + 90, + 210, + 104 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Input: source $\\{\\hat{S}_i^{\\prime}\\}_{i = 1}^{N}$ , labeled target $\\hat{V}^{\\prime}$ , unlabeled target $\\hat{T}$", + "bbox": [ + 104, + 109, + 431, + 125 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Output: updated parameters $\\phi = (\\{f_{S_i}^{\\prime}\\}_{i=1}^{N}, g, h, f_V^{\\prime}f_T^{\\prime})$ , $w = \\{w_{q_i}, w_{k_i}\\}_{i=1}^{N}$", + "bbox": [ + 106, + 125, + 480, + 154 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Parameter: trade-off parameter $\\lambda$ ; learning rate $\\eta$ ; known class ratio estimator $\\alpha$ ; coefficients $\\nu, \\beta, \\tau$", + "bbox": [ + 106, + 154, + 480, + 176 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Notation: gradient reversal operator $R(\\cdot)$", + "bbox": [ + 106, + 176, + 326, + 189 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "for epoch $= 1,2,\\dots$ do", + "bbox": [ + 106, + 189, + 240, + 200 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Estimate known class ratio $\\alpha$ on $\\hat{T}$ with $g,h$", + "bbox": [ + 125, + 200, + 359, + 212 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "if source-free then", + "bbox": [ + 125, + 213, + 225, + 223 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Estimate $\\hat{T}^{\\prime}$ according to PUR and Update $w$ to optimize AFG:", + "bbox": [ + 145, + 224, + 478, + 236 + ], + "page_idx": 5 + }, + { + "type": "equation", + "text": "\n$$\nw \\leftarrow w - \\eta \\Delta w, \\Delta w = \\frac {\\partial \\sum_ {i = 1} ^ {N} \\left(L _ {r c c} ^ {i} + L _ {c y c} ^ {i}\\right)}{\\partial w}\n$$\n", + "text_format": "latex", + "bbox": [ + 156, + 237, + 406, + 256 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Generate labeled features $g(\\hat{S}_i^{\\prime})$ according to Eq. 
(14)", + "bbox": [ + 145, + 257, + 428, + 268 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "end if", + "text_level": 1, + "bbox": [ + 125, + 268, + 161, + 279 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Compute labeled target error $L_{cls}^{V}(h;g) = L_{V}$ , source error $L_{cls}^{S_i}(h;g) = L_S^i$ , hypothesis constraints $L_{\\mathcal{H}_S}^i (f_{S_i}';g) + L_{\\mathcal{H}_V}^i (f_V';g) + L_{\\mathcal{H}_T}^i (f_T';g) = L_H^i$ for $i = 1,..N$", + "bbox": [ + 104, + 280, + 483, + 325 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Compute discrepancy $L_{dis}^{i}(f_{S_{i}}^{\\prime}, f_{V}^{\\prime}, f_{T}^{\\prime}, R \\circ h \\circ R; R \\circ g) = L_{D}^{i}$ given the gradient reversal layer for $i = 1,..N$", + "bbox": [ + 104, + 325, + 480, + 349 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Update $\\phi$ to minimize the target error bound:", + "bbox": [ + 125, + 351, + 362, + 362 + ], + "page_idx": 5 + }, + { + "type": "equation", + "text": "\n$$\n\\phi \\leftarrow \\phi - \\eta \\Delta \\phi ,\n$$\n", + "text_format": "latex", + "bbox": [ + 138, + 363, + 225, + 373 + ], + "page_idx": 5 + }, + { + "type": "equation", + "text": "\n$$\n\\Delta \\phi = \\frac {\\partial (\\frac {1}{2} [ L _ {V} + \\frac {1}{\\nu} \\log \\sum_ {i = 1} ^ {N} \\exp (\\nu [ L _ {S} ^ {i} + L _ {H} ^ {i} - \\lambda L _ {D} ^ {i} ]) ])}{\\partial \\phi}\n$$\n", + "text_format": "latex", + "bbox": [ + 138, + 373, + 424, + 393 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "end for", + "bbox": [ + 104, + 393, + 148, + 402 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4. Evaluation", + "text_level": 1, + "bbox": [ + 89, + 431, + 207, + 448 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "We evaluated our proposal using two benchmarks, Office-Home and DomainNet. The trade-off parameter $\\lambda$ is set to 0.01 during the training procedure according to [62, 63]. In addition, we empirically set the PU, scaling, and threshold coefficients $\\beta$ to 0.15, $\\nu$ to 0.1, and $\\tau$ to 0.3 for all experiments. For the semi-supervised setting, we select the same few-shot labeled target data according to [41]. Regarding the open-set setting, we assign a distinct label space for each source domain as a subset of the target label space described below. We quantitatively compare our results against various baselines, including OSBP [40], PGL [34], ANNA [27], PUJE [63], MOSDANET [38], HyMOS [3], and MPU [58].", + "bbox": [ + 89, + 458, + 483, + 640 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Evaluation Metrics for the proposed method and baselines are the widely used measures [34, 40], i.e., normalized accuracy for the known class only $(\\mathrm{OS}^{*})$ and harmonic mean $\\mathrm{HOS} = 2(\\mathrm{OS}^{*} \\times \\mathrm{UNK}) / (\\mathrm{OS}^{*} + \\mathrm{UNK})$ [2, 27, 31, 54, 63].", + "bbox": [ + 89, + 660, + 483, + 722 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Office-Home [52] is a widely-used domain adaptation benchmark, which consists of 15,500 images from 65 categories and four domains: Art, Clipart, Product, and RealWorld. We select the first 30 classes alphabetically as the known class and group the rest as the unknown. 
Each source domain contains 10 classes without overlap, leading to a large label shift scenario.", + "bbox": [ + 89, + 742, + 483, + 849 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "DomainNet [36] is a more challenging benchmark dataset for large-scale domain adaptation that has 345 classes and", + "bbox": [ + 89, + 869, + 483, + 900 + ], + "page_idx": 5 + }, + { + "type": "table", + "img_path": "images/76650fc36cf69332371d9a6f28460ba260f0c8ede3c8f39b81b5b8e1c03d3710.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
METHODTYPE→Clipart→Product→RealWorld→ArtAvg.
1-shot3-shot1-shot3-shot1-shot3-shot1-shot3-shot1-shot3-shot
OSBPSource-Combine60.462.670.172.369.768.360.764.365.266.9
PGL59.061.867.769.966.768.961.264.063.766.2
ANNA65.867.771.073.470.370.361.063.767.068.8
PUJE65.871.773.374.275.078.165.567.369.972.8
MOSDANETMulti-Source61.565.970.073.871.469.661.663.666.168.2
HyMOS56.664.464.467.366.268.459.062.261.665.6
UM68.072.179.083.079.480.867.770.373.576.6
MPU*Source-Free46.354.459.766.357.860.258.362.555.560.9
OSBP*44.556.555.665.159.364.355.659.953.861.5
PUJE*52.258.465.070.366.270.058.762.760.565.4
UM+AFG61.166.077.080.172.078.860.364.667.672.4
", + "bbox": [ + 514, + 87, + 905, + 184 + ], + "page_idx": 5 + }, + { + "type": "table", + "img_path": "images/9e436e408be32af3f8175ff3f064ffb58b95c052ce0c3c743895f7bbaba59363.jpg", + "table_caption": [ + "Table 1. HOS (%) of ResNet-50 model fine-tuned on Office-Home dataset under 1-shot/3-shot setting" + ], + "table_footnote": [], + "table_body": "
METHODTYPE→Clipart→Painting→Real→SketchAvg.
1-shot3-shot1-shot3-shot1-shot3-shot1-shot3-shot1-shot3-shot
OSBPSource-Combine54.257.449.853.162.664.049.550.154.056.2
PGL59.862.059.461.467.469.459.761.261.663.5
ANNA55.661.553.654.367.566.557.958.158.760.1
PUJE64.466.259.861.767.769.361.264.263.365.4
MOSDANETMulti-Source56.455.355.658.268.569.854.154.958.759.6
HyMOS53.054.454.156.065.167.456.357.157.158.7
UM70.371.566.068.875.178.566.169.569.472.1
MPU*Source-Free54.557.655.060.162.466.448.452.955.159.3
MOSDANET*58.160.554.359.363.262.549.454.356.359.2
PUJE*60.562.255.361.464.067.853.156.258.261.9
UM+AFG64.869.760.064.267.673.460.064.863.168.0
", + "bbox": [ + 514, + 234, + 905, + 330 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Table 2. HOS (%) of ResNet-50 model fine-tuned on DomainNet dataset under 1-shot/3-shot setting", + "bbox": [ + 511, + 340, + 905, + 369 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "6 domains. Following the protocol established in [41], we pick 4 domains (Real, Clipart, Painting, Sketch) with 126 classes for the experiments. We select the first 60 classes alphabetically as the known class and group the rest as the unknown. Similarly, each source domain contains 20 classes without any overlap.", + "bbox": [ + 511, + 397, + 905, + 487 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "As reported in Tabs. 1 and 2, under the same setting given 1-shot/3-shot labeled target data (1/3 samples per class), we observe that our method UM consistently outperforms the state-of-the-art results, improving HOS by $3.6\\% / 3.8\\%$ and $6.1\\% / 6.7\\%$ on the benchmark datasets of Office-Home and DomainNet respectively, when source data is available. Furthermore, $\\mathrm{UM + AFG}$ enhances HOS by $7.1\\% / 7.0\\%$ in Office-Home and $4.9\\% / 6.1\\%$ in DomainNet under the challenging source-free setting. Note that our proposed approach provides significant performance gains for the more complex datasets like DomainNet, which requires knowledge transfer across different modalities, regardless of covariate or label shift. We group all source domains with labeled target data as a single domain for the baselines that require the source-combine strategy. For the source-free cases, we introduce a few confident target data labeled by pre-trained models as pseudo-source data to enable several algorithms denoted by $*$ under this problem setting since none of the existing methods can directly address the open-set task under the source-free condition with a huge label shift across source domains.", + "bbox": [ + 511, + 489, + 908, + 805 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.1. Feature Space Visualization", + "text_level": 1, + "bbox": [ + 511, + 816, + 764, + 832 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "To intuitively visualize the effectiveness of different approaches, we extracted features from the baseline models and our proposed model on the $\\rightarrow$ Art task (Office-Home) and $\\rightarrow$ Real task (DomainNet) with the ResNet-50 backbone", + "bbox": [ + 511, + 839, + 906, + 900 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "10147", + "bbox": [ + 480, + 944, + 517, + 955 + ], + "page_idx": 5 + }, + { + "type": "table", + "img_path": "images/db6c36da38841c2143345382b8b42c594ddfef500ed43ad2c082ecc63ba899f9.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
METHODTYPEOffice-Home →RealWorldDomainNet →Clipart
UNKOS*HOSUNKOS*HOS
DEFAULTSource-Free73.770.472.072.658.664.8
w/o Lsim70.970.670.769.359.163.8
w/o Lcyc74.068.571.172.856.963.9
w/o PUR39.687.054.447.372.957.4
", + "bbox": [ + 91, + 88, + 482, + 172 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "[16]. The feature distributions were processed with t-SNE [48] afterward. As shown in Fig. 4, compared with baselines, our method UM achieves a better alignment between source and target distributions, especially when the domain shift is large. Benefiting from our joint error-based adversarial alignment mechanism, the extracted feature space, including the cluster of unknown target data (green), has a more discriminative class-wise decision boundary.", + "bbox": [ + 88, + 239, + 485, + 361 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.2. Ablation Study", + "text_level": 1, + "bbox": [ + 89, + 371, + 243, + 386 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Self-supervised learning methods have shown that, by relying only on unlabeled data, it is still possible to obtain classification performance close to those of the supervised approaches [5, 17, 18]. In the source-free setting, we adopt the typical SimCLR [5] to help group the feature of unknown target data into a single cluster. As expected in Tab. 3, $L_{sim}^2$ can slightly improve the accuracy of the unknown class for a higher HOS. Furthermore, Progressive Unknown Rejection (PUR), a denoising of generated labeled features, is crucial to detecting unknowns in source-free cases. As also illustrated in Fig. 7d, generally, a larger threshold $\\tau$ will lead to a higher UNK at the cost of low OS*, characterized as the trade-off between recognizing known and unknown data for open-set tasks. In addition, we verify the effectiveness of cycle-consistency regularization $L_{cyc}$ and find it helps maintain the normalized accuracy of the known class.", + "bbox": [ + 91, + 393, + 483, + 636 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.3. Robustness against Varying Openness", + "text_level": 1, + "bbox": [ + 89, + 646, + 416, + 662 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "To verify the robustness of the proposed method, we conducted experiments on the $\\rightarrow$ Painting task (DomainNet) with the openness varying in $\\{0.25, 0.5, 0.75\\}$ . Here, openness is defined by the ratio of unknown samples in the entire target data. PGL approach heuristically sets the hyperparameter according to the true unknown ratio to control the openness, while PUJE and UM automatically estimate the weight $\\alpha$ during the training procedure. From Fig. 5a, we observe that our proposal consistently outperforms baselines by a large margin, which confirms its robustness to the change in openness.", + "bbox": [ + 89, + 670, + 485, + 837 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.4. Stabel Coverage", + "text_level": 1, + "bbox": [ + 89, + 845, + 269, + 862 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "In Fig. 5b, we illustrate the recognition performance of UM over training steps on the $\\rightarrow$ Art task of the Office-Home", + "bbox": [ + 89, + 869, + 483, + 902 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/3dc45f4f0984867be4e1fe1529fb979e0229e4d20357170310ee4cc5c77da353.jpg", + "table_caption": [ + "Table 3. Ablation study verified with ResNet-50 model on OfficeHome & DomainNet dataset" + ], + "table_footnote": [], + "table_body": "
METHODTYPEBACKBONEOffice-HomeDomainNetAvg. \nO*HOS
→Art→Product→Painting→Real
UNKOS*HOSUNKOS*HOSUNKOS*HOSUNKOS*HOSUNKOS*HOS
UMMulti-SourceResNet-5072.863.367.778.779.379.077.957.366.074.975.275.176.168.872.0
UM+AFGSource-FreeResNet-5066.155.560.383.371.677.063.956.560.076.660.667.672.561.166.2
UM+AFGViT-1677.759.867.587.680.984.168.557.462.582.873.277.779.267.873.0
", + "bbox": [ + 514, + 88, + 905, + 125 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Table 4. Accuracy of ViT-B/16 model fine-tuned on Office-Home & DomainNet dataset under 1-shot setting", + "bbox": [ + 511, + 135, + 906, + 164 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "dataset. OS* experiences a downward while the UNK keeps improving, which characterizes a trade-off between the accuracy of knowns and the accuracy of unknowns. We further observe that some previous works [27, 34] do not converge at the optimum. In contrast, our method always reaches a reliable convergence without suffering from a severe performance drop in recognizing known classes.", + "bbox": [ + 511, + 191, + 908, + 297 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.5. Flexibility in Backbone Architecture", + "text_level": 1, + "bbox": [ + 511, + 308, + 830, + 324 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "As presented in Sec. 3.3, AFG allows the target model to use a different backbone architecture from the pre-trained source models. Therefore, unlike those model-centric methods whose performance is deeply limited by source model architecture, our method can be effectively applied to real-world problems where each source model is trained using various networks by leveraging the power of advanced backbones like ViT [9] for the target model. Tab. 4 reveals a clear advantage of AFG when changing the target backbone to ViT-B/16 as the HOS scores under the source-free condition approach and even outperform the source data results. The same ResNet-50 backbone is used for pre-trained source models across different experiments.", + "bbox": [ + 511, + 332, + 908, + 529 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.6. Advantage in Increasing Labeled Target Data", + "text_level": 1, + "bbox": [ + 511, + 539, + 898, + 556 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Sec. 4.6 shows the behavior of different methods when the number of labeled examples in the target domain increases from 1 to 10 per class on DomainNet using ResNet50 backbone. Cluster-based methods like OSBP, MOSDANET, and HyMOS will finally be caught up by a simple multi-class PU learning (MPU) when the sample size increases. On the contrary, our method consistently outperforms the most competitive baseline PUJE for various sizes of labeled target data. Furthermore, along with the growth in the size of $\\hat{V}$ , the HOS score achieved by UM+AFG in the source-free setting gradually approaches, even surpasses those methods using source data.", + "bbox": [ + 511, + 563, + 908, + 744 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.7. Sensitivity to PU, Scaling, and Threshold Coefficients", + "text_level": 1, + "bbox": [ + 511, + 755, + 906, + 787 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "We show the sensitivity of our approach to varying PU coefficient $\\beta$ , scaling factor $\\nu$ , and threshold $\\tau$ in Sec. 4.7. 
We can draw two observations from this: the OS* score is relatively stable, and the unknown recognition achieves a more reliable performance for a larger coefficient $\\beta$ ; generally, a larger $\\nu$ means focusing on the source domain that contributes more error and ignoring others, while a smaller $\\nu$ will equalize", + "bbox": [ + 511, + 794, + 908, + 902 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "10148", + "bbox": [ + 480, + 944, + 519, + 955 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/baead833bffaee584e19426d1998eeead018567ac396ee5e16b320c5b2abac07.jpg", + "image_caption": [ + "(a) OSBP" + ], + "image_footnote": [], + "bbox": [ + 178, + 89, + 331, + 181 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/d2c42b2a171394521b9bf8602debd0a11083dadfd7c65d32b226e4d545974e23.jpg", + "image_caption": [ + "(b) HyMOS" + ], + "image_footnote": [], + "bbox": [ + 341, + 90, + 491, + 181 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/f15e018d85423071d5e641ae072ae0e60c6ba0d2528ecb2c32ee97ab6e125b84.jpg", + "image_caption": [ + "(c) PUJE" + ], + "image_footnote": [], + "bbox": [ + 503, + 90, + 630, + 181 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/6ec085147406cebbe3d592a8c7a65096c4e7d31086a9c5de0aa4cc22b480166b.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 663, + 90, + 816, + 181 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/7223fd9ce990de23916bc3816c1301a5e021cae6e1c49bb981d2921d50924a21.jpg", + "image_caption": [ + "(e) MOSDANET" + ], + "image_footnote": [], + "bbox": [ + 178, + 202, + 331, + 292 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/3f9362d1ac223f824da343dcedf9a097dfb6bc69ee51c5583e18eef0313724f7.jpg", + "image_caption": [ + "(f) PGL" + ], + "image_footnote": [], + "bbox": [ + 341, + 202, + 491, + 292 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/dd384efbc1b66ee6ffd33a478ba5bc330de776d7c1b837d36e0facd77a7d23eb.jpg", + "image_caption": [ + "(g) ANNA" + ], + "image_footnote": [], + "bbox": [ + 503, + 202, + 653, + 292 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/229d3e1d4f2e004257d80251515d4af69f89ddd3649ecec6018adea93c384ae3.jpg", + "image_caption": [ + "(d) UM", + "(h) UM" + ], + "image_footnote": [], + "bbox": [ + 663, + 202, + 816, + 292 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/9f9bbd10253eda6ef4040bce1e232b300a2c7f34eed873b250f6a8e023b35dbf.jpg", + "image_caption": [ + "Figure 4. T-SNE visualization of feature distributions in (a)-(d) $\\rightarrow$ Art task (Office-Home dataset); (e)-(h) $\\rightarrow$ Real task (DomainNet dataset).", + "(a) robust against openness", + "Figure 5. (a) Performance comparisons w.r.t. 
varying openness of the $\\rightarrow$ Painting task from DomainNet dataset; (b) Convergence analysis of the $\\rightarrow$ Art task from Office-Home dataset compared to other baselines with confidence intervals" + ], + "image_footnote": [], + "bbox": [ + 117, + 364, + 282, + 463 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/8788f58a6b02ee9cf8f99ba6e949cce3625a03ac7f02f767a2ac7316ee280f69.jpg", + "image_caption": [ + "(b) stable convergence" + ], + "image_footnote": [], + "bbox": [ + 290, + 364, + 454, + 463 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/910dc367fec1548161cea36b899c04a7a077d125ae14d606c26f5d6fe125312a.jpg", + "image_caption": [ + "(a) $\\rightarrow$ Clipart task", + "Figure 6. Accuracy vs the number of labeled target samples on DomainNet using ResNet50 backbone. Our method maintains a high level of performance for different sample sizes of the labeled target data." + ], + "image_footnote": [], + "bbox": [ + 117, + 590, + 282, + 688 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/4674bd3fa5c9b459b0e1b137015581139b0181d2b1023e114cefbc8320254cbc.jpg", + "image_caption": [ + "(b) $\\rightarrow$ Sketch task" + ], + "image_footnote": [], + "bbox": [ + 292, + 590, + 455, + 688 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "the importance of each domain, which can harm the performance when a remarkable label shift exists among source domains implied by Fig. 7c (the imbalance setting indicates a case where one source contains 20 classes while the other two sources take 5 classes respectively).", + "bbox": [ + 89, + 825, + 483, + 900 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/797e41538c541eec8477180d0c59dd7dda8051c6221e3cd0a25f29e3f626c2e0.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 540, + 364, + 705, + 463 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/7874ae66c62a8388641219cdc0de361c3cbc0776ee47813622ef4eccd1aab535.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 714, + 364, + 879, + 463 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/212f295cdfc1c696e682eaa82a99ff0decf163f161fdf18f0d73bfabf7838eee.jpg", + "image_caption": [ + "(a) sensitivity to $\\beta$", + "(c) sensitivity to $\\nu$", + "Figure 7. (a)-(d) Sensitivity to varying loss coefficient $\\beta, \\nu, \\tau$ verified in Office-Home dataset." + ], + "image_footnote": [], + "bbox": [ + 540, + 488, + 705, + 584 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/74305a12e89c6566e6ac7d20ad9e0e21e0845f102794138569e075a441842f7b.jpg", + "image_caption": [ + "(b) sensitivity to $\\beta$", + "(d) sensitivity to $\\tau$" + ], + "image_footnote": [], + "bbox": [ + 714, + 488, + 877, + 583 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "5. Conclusion", + "text_level": 1, + "bbox": [ + 513, + 675, + 633, + 690 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "In this work, we addressed the semi-supervised open-set domain shift problem in multi-source cases with inconsistent label space by introducing a novel learning theory based on joint error and multi-class PU learning that can reduce the open-set risk, where the generalization error is bounded by the extension of VC learning theory based on uniform covering number. We generalize our method into source-free scenarios by attention-based feature generation, which is computationally efficient with reliable performance. 
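For reference, a minimal sketch of this attention-based generation step (array names and shapes are assumptions for illustration, not the paper's code):

```python
import numpy as np

# Sketch of attention-based feature generation (AFG): labeled "source-like"
# anchors are produced as a softmax-weighted average of target features,
# in the softmax(Q K^T / sqrt(F)) V form of Eqs. (14)-(16).
def generate_labeled_features(q, k, v):
    # q: (n, F) queries, k: (m, F) keys from a pre-trained source model,
    # v: (m, F') target features from the target feature extractor g
    scores = q @ k.T / np.sqrt(q.shape[1])
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ v                             # (n, F') generated anchors

rng = np.random.default_rng(0)
print(generate_labeled_features(rng.normal(size=(5, 64)),
                                rng.normal(size=(100, 64)),
                                rng.normal(size=(100, 256))).shape)  # (5, 256)
```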
We conduct extensive experiments on multiple domain adaptation benchmarks. Our model achieves the best performance regardless of source data, compared with recent baseline methods, proving our proposed approach's efficacy.", + "bbox": [ + 511, + 704, + 906, + 900 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "10149", + "bbox": [ + 480, + 944, + 517, + 955 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Acknowledgements", + "text_level": 1, + "bbox": [ + 91, + 90, + 258, + 107 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 91, + 186, + 187, + 203 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[1] Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Vaughan. A theory of learning from different domains. Machine Learning, 79:151-175, 2010. 1", + "[2] Silvia Bucci, Mohammad Reza Loghmani, and Tatiana Tommasi. On the effectiveness of image rotation for open set domain adaptation. In 16th European Conference on Computer Vision, pages 422-438. Springer International Publishing, 2020. 2, 6", + "[3] Silvia Bucci, Francesco Cappio Borlino, Barbara Caputo, and Tatiana Tommasi. Distance-based hyperspherical classification for multi-source open-set domain adaptation. In IEEE/CVF Winter Conference on Applications of Computer Vision, pages 1030-1039. IEEE, 2022. 6", + "[4] Pau Panareda Busto and Juergen Gall. Open set domain adaptation. In IEEE International Conference on Computer Vision, pages 754-763, 2017. 1", + "[5] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In Proceedings of the 37th International Conference on Machine Learning. JMLR.org, 2020. 5, 7", + "[6] Shivang Chopra, Suraj Kothawade, Houda Aynaou, and Aman Chadha. Source-free domain adaptation with diffusion-guided source data generation. CoRR, abs/2402.04929, 2024. 4", + "[7] Ekin D. Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V. Le. Randaugment: Practical automated data augmentation with a reduced search space. In IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 3008-3017, 2020. 5", + "[8] Jun Deng, Wei Dong, Richard Socher, Li-Jia Li, Kuntai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. IEEE Conference on Computer Vision and Pattern Recognition, pages 248-255, 2009. 5", + "[9] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale, 2021. 7", + "[10] Zhen Fang, Jie Lu, Feng Liu, Junyu Xuan, and Guangquan Zhang. Open set domain adaptation: Theoretical bound and algorithm. IEEE Transactions on Neural Networks and Learning Systems, 32:4309-4322, 2020. 2", + "[11] Yaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. In Proceedings of the 32nd International Conference on Machine Learning, pages 1180-1189. JMLR.org, 2015. 
1" + ], + "bbox": [ + 93, + 213, + 483, + 900 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "This research is partially supported by JST Moonshot R&D Grant Number JPMJPS2011, CREST Grant Number JPMJCR2015 and Basic Research Grant (Super AI) of Institute for AI and Beyond of the University of Tokyo.", + "[12] Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. Journal of Machine Learning Research, 17 (1):2096-2030, 2016. 5", + "[13] Saurabh Garg, Sivaraman Balakrishnan, and Zachary C. Lipton. Domain adaptation under open set label shift. In Proceedings of the 36th International Conference on Neural Information Processing Systems. Curran Associates Inc., 2022. 3", + "[14] Ryan Gomes, Andreas Krause, and Pietro Perona. Discriminative clustering by regularized information maximization. In Proceedings of the 23rd International Conference on Neural Information Processing Systems, pages 775-783. Curran Associates Inc., 2010. 5", + "[15] Yves Grandvalet and Yoshua Bengio. Semi-supervised learning by entropy minimization. In Proceedings of the 17th International Conference on Neural Information Processing Systems, pages 529-536. MIT Press, 2004. 5", + "[16] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2015. 5, 7", + "[17] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9726-9735, 2020. 7", + "[18] Olivier J. Henaff, Aravind Srinivas, Jeffrey De Fauw, Ali Razavi, Carl Doersch, S. M. Ali Eslami, and Aaron Van Den Oord. Data-efficient image recognition with contrastive predictive coding. In Proceedings of the 37th International Conference on Machine Learning. JMLR.org, 2020. 7", + "[19] Judy Hoffman, Mehryar Mohri, and Ningshan Zhang. Algorithms and theory for multiple-source adaptation. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, pages 8256-8266. Curran Associates Inc., 2018. 1", + "[20] JoonHo Jang, Byeonghu Na, DongHyeok Shin, Mingi Ji, Kyungwoo Song, and Il-Chul Moon. Unknown-aware domain adversarial learning for open-set domain adaptation. In Proceedings of the 36th International Conference on Neural Information Processing Systems. Curran Associates Inc., 2022. 2", + "[21] Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. In 2nd International Conference on Learning Representations, 2014. 4", + "[22] Samuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning. In 5th International Conference on Learning Representations, 2017. 5", + "[23] Jichang Li, Guanbin Li, Yemin Shi, and Yizhou Yu. Cross-domain adaptive clustering for semi-supervised domain adaptation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2505-2514, 2021. 1, 5", + "[24] Jingjing Li, Zhiqi Yu, Zhekai Du, Lei Zhu, and Heng Tao Shen. A comprehensive survey on source-free domain adaptation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 46(8):5743-5762, 2024. 
4" + ], + "bbox": [ + 93, + 92, + 906, + 900 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "10150", + "bbox": [ + 480, + 944, + 519, + 955 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[25] Keqiuyin Li, Jie Lu, Hua Zuo, and Guangquan Zhang. Multisource domain adaptation handling inaccurate label spaces. Neurocomputing, 594:127824, 2024. 2", + "[26] Rui Li, Qianfen Jiao, Wenming Cao, Hau-San Wong, and Si Wu. Model adaptation: Unsupervised domain adaptation without source data. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9638-9647, 2020. 4", + "[27] Wuyang Li, Jie Liu, Bo Han, and Yixuan Yuan. Adjustment and alignment for unbiased open set domain adaptation. In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 24110-24119, 2023. 2, 6, 7", + "[28] Jian Liang, Dapeng Hu, and Jiashi Feng. Do we really need to access the source data? source hypothesis transfer for unsupervised domain adaptation. In Proceedings of the 37th International Conference on Machine Learning. JMLR.org, 2020. 4", + "[29] Jian Liang, Dapeng Hu, Yunbo Wang, Ran He, and Jiashi Feng. Source data-absent unsupervised domain adaptation through hypothesis transfer and labeling transfer. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(11): 8602-8617, 2022. 4", + "[30] Hanxiao Liu, Karen Simonyan, and Yiming Yang. Darts: Differentiable architecture search. In 7th International Conference on Learning Representations, 2019. 3", + "[31] Mohammad Reza Loghmania, Markus Vinczea, and Tatiana Tommasi. Positive-unlabeled learning for open set domain adaptation. Pattern Recognition Letters, 136:198-204, 2020. 6", + "[32] Mingsheng Long, Yue Cao, Jianmin Wang, and Michael I. Jordan. Learning transferable features with deep adaptation networks. In Proceedings of the 32nd International Conference on Machine Learning, pages 97-105. JMLR.org, 2015. 1", + "[33] Mingsheng Long, Zhangjie Cao, Jianmin Wang, and Michael I Jordan. Conditional adversarial domain adaptation. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, pages 1647-1657. Curran Associates Inc., 2018. 1", + "[34] Yadan Luo, Zijian Wang, Zi Huang, and Mahsa Baktashmotlagh. Progressive graph learning for open-set domain adaptation. In Proceedings of the 37th International Conference on Machine Learning, pages 6468-6478. PMLR, 2020. 2, 6, 7", + "[35] Yadan Luo, Zijian Wang, Zhuoxiao Chen, Zi Huang, and Mahsa Baktashmotlagh. Source-free progressive graph learning for open-set domain adaptation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(9):11240-11255, 2023. 4", + "[36] Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, and Bo Wang. Moment matching for multi-source domain adaptation. In IEEE/CVF International Conference on Computer Vision, pages 1406-1415, 2019. 1, 2, 6", + "[37] Md Mahmudur Rahman, Rameswar Panda, and Mohammad Arif Ul Alam. Semi-supervised domain adaptation with autoencoder via simultaneous learning. In IEEE/CVF Winter Conference on Applications of Computer Vision, pages 402-411, 2023. 1" + ], + "bbox": [ + 91, + 90, + 485, + 898 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[38] Sayan Rakshit, Dipesh Tamboli, Pragati Shuddhodhan Meshram, Biplab Banerjee, Gemma Roig, and Subhasis Chaudhuri. Multi-source open-set deep adversarial domain adaptation. 
In 16th European Conference on Computer Vision, pages 735-750. Springer International Publishing, 2020. 6", + "[39] Kuniaki Saito, Kohei Watanabe, Yoshitaka Ushiku, and Tatsuya Harada. Maximum classifier discrepancy for unsupervised domain adaptation. IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3723-3732, 2017. 1", + "[40] Kuniaki Saito, Shohei Yamamoto, Yoshitaka Ushiku, and Tatsuya Harada. Open set domain adaptation by backpropagation. In 15th European Conference on Computer Vision, pages 156-171. Springer International Publishing, 2018. 2, 6", + "[41] Kuniaki Saito, Donghyun Kim, Stan Sclaroff, Trevor Darrell, and Kate Saenko. Semi-supervised domain adaptation via minimax entropy. In IEEE/CVF International Conference on Computer Vision, pages 8049-8057, 2019. 1, 6", + "[42] Mehdi Sajjadi, Mehran Javanmardi, and Tolga Tasdizen. Regularization with stochastic transformations and perturbations for deep semi-supervised learning. In Proceedings of the 30th International Conference on Neural Information Processing Systems, pages 1171-1179. Curran Associates Inc., 2016. 5", + "[43] Ankit Singh. Clda: contrastive learning for semi-supervised domain adaptation. In Proceedings of the 35th International Conference on Neural Information Processing Systems. Curran Associates Inc., 2021. 1, 5", + "[44] Kihyuk Sohn, David Berthelot, Chun-Liang Li, Zizhao Zhang, Nicholas Carlini, Ekin D. Cubuk, Alex Kurakin, Han Zhang, and Colin Raffel. Fixmatch: simplifying semi-supervised learning with consistency and confidence. In Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., 2020. 5", + "[45] Shiliang Sun, Honglei Shi, and Yuanbin Wu. A survey of multi-source domain adaptation. Information Fusion, 24: 84-92, 2015. 1", + "[46] Hui Tang, Ke Chen, and Kui Jia. Unsupervised domain adaptation via structurally regularized deep clustering. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8722-8732, 2020. 5", + "[47] Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. Adversarial discriminative domain adaptation. IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2962-2971, 2017. 1", + "[48] Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9: 2579-2605, 2008. 7", + "[49] Vladimir N. Vapnik. The nature of statistical learning theory. Springer-Verlag New York, Inc., 1995. 2", + "[50] V. N. Vapnik and A. Ya. Chervonenkis. On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities, pages 11-30. Springer International Publishing, 2015. 2", + "[51] Naveen Venkat, Jogendra Nath Kundu, Durgesh Kumar Singh, Ambareesh Revanur, and R. Venkatesh Babu. Your classifier can secretly suffice multi-source domain adaptation. In Proceedings of the 34th International Conference on Neural Information Processing Systems, 2020. 1" + ], + "bbox": [ + 516, + 90, + 906, + 898 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "10151", + "bbox": [ + 480, + 944, + 517, + 955 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[52] Hemanth Venkateswara, Jose Eusebio, Shayok Chakraborty, and Sethuraman Panchanathan. Deep hashing network for unsupervised domain adaptation. In IEEE Conference on Computer Vision and Pattern Recognition, pages 5385-5394, 2017. 6", + "[53] Hang Wang, Minghao Xu, Bingbing Ni, and Wenjun Zhang. 
Learning to combine: Knowledge aggregation for multi-source domain adaptation. In 16th European Conference on Computer Vision, pages 727-744. Springer-Verlag, 2020. 1", + "[54] Qian Wang, Fanlin Meng, and Toby P. Breckon. Progressively select and reject pseudolabeled samples for open-set domain adaptation. IEEE Transactions on Artificial Intelligence, 5(9): 4403-4414, 2024. 2, 6", + "[55] Zixin Wang, Yadan Luo, Peng-Fei Zhang, Sen Wang, and Zi Huang. Discovering domain disentanglement for generalized multi-source domain adaptation. In IEEE International Conference on Multimedia and Expo, pages 1–6. IEEE, 2022. 2", + "[56] Jun Wu and Jingrui He. Domain adaptation with dynamic open-set targets. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 2039-2049. Association for Computing Machinery, 2022. 3", + "[57] R. Xu, Z. Chen, W. Zuo, J. Yan, and L. Lin. Deep cocktail network: Multi-source unsupervised domain adaptation with category shift. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3964-3973. IEEE Computer Society, 2018. 1", + "[58] Yixing Xu, Chang Xu, Chao Xu, and Dacheng Tao. Multi-positive and unlabeled learning. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, pages 3182-3188. AAAI Press, 2017. 2, 3, 6", + "[59] Luyu Yang, Yan Wang, Mingfei Gao, Abhinav Shrivastava, Kilian Q. Weinberger, Wei-Lun Chao, and Ser-Nam Lim. Deep co-training with task decomposition for semi-supervised domain adaptation. In IEEE/CVF International Conference on Computer Vision, pages 8886-8896, 2021. 1, 5", + "[60] S. Yang, Y. Wang, J. van de Weijer, L. Herranz, S. Jui, and J. Yang. Trust your good friends: Source-free domain adaptation by reciprocal neighborhood clustering. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(12):15883-15895, 2023. 4", + "[61] Jeongbeen Yoon, Dahiyun Kang, and Minsu Cho. Semi-supervised domain adaptation via sample-to-sample self-distillation. In IEEE/CVF Winter Conference on Applications of Computer Vision, pages 1686-1695, 2022. 1", + "[62] Dexuan Zhang, Thomas Westfechtel, and Tatsuya Harada. Unsupervised domain adaptation via minimized joint error. Transactions on Machine Learning Research, 2023. 1, 2, 6", + "[63] Dexuan Zhang, Thomas Westfechtel, and Tatsuya Harada. Open-set domain adaptation via joint error based multi-class positive and unlabeled learning. In 18th European Conference on Computer Vision. Springer International Publishing, 2024. 2, 3, 5, 6", + "[64] Yuchen Zhang, Tianle Liu, Mingsheng Long, and Michael Jordan. Bridging theory and algorithm for domain adaptation. In Proceedings of the 36th International Conference on Machine Learning, pages 7404-7413. PMLR, 2019. 1" + ], + "bbox": [ + 91, + 90, + 485, + 900 + ], + "page_idx": 10 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[65] Han Zhao, Shanghang Zhang, Guanhang Wu, Joao P. Costeira, Jose M. F. Moura, and Geoffrey J. Gordon. Adversarial multiple source domain adaptation. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, pages 8568-8579. Curran Associates Inc., 2018. 1, 3", + "[66] Han Zhao, Remi Tachet des Combes, Kun Zhang, and Geoffrey J. Gordon. On learning invariant representation for domain adaptation. In Proceedings of the 36th International Conference on Machine Learning, 2019. 2", + "[67] Yongchun Zhu, Fuzhen Zhuang, and Deqing Wang. 
Aligning domain-specific distribution and classifier for cross-domain classification from multiple sources. In Proceedings of the 33rd AAAI Conference on Artificial Intelligence. AAAI Press, 2019. 1" + ], + "bbox": [ + 514, + 90, + 906, + 301 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "10152", + "bbox": [ + 480, + 944, + 519, + 955 + ], + "page_idx": 10 + } +] \ No newline at end of file diff --git a/2025/A Theory of Learning Unified Model via Knowledge Integration from Label Space Varying Domains/71226fdf-7aa9-4bda-88d3-c62f01e56520_model.json b/2025/A Theory of Learning Unified Model via Knowledge Integration from Label Space Varying Domains/71226fdf-7aa9-4bda-88d3-c62f01e56520_model.json new file mode 100644 index 0000000000000000000000000000000000000000..e7f133d50433dd097f5f8c479c44738b5f00a214 --- /dev/null +++ b/2025/A Theory of Learning Unified Model via Knowledge Integration from Label Space Varying Domains/71226fdf-7aa9-4bda-88d3-c62f01e56520_model.json @@ -0,0 +1,2884 @@ +[ + [ + { + "type": "header", + "bbox": [ + 0.107, + 0.003, + 0.182, + 0.043 + ], + "angle": 0, + "content": "CVF" + }, + { + "type": "header", + "bbox": [ + 0.237, + 0.001, + 0.812, + 0.047 + ], + "angle": 0, + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." + }, + { + "type": "title", + "bbox": [ + 0.093, + 0.13, + 0.908, + 0.178 + ], + "angle": 0, + "content": "A Theory of Learning Unified Model via Knowledge Integration from Label Space Varying Domains" + }, + { + "type": "text", + "bbox": [ + 0.237, + 0.203, + 0.761, + 0.239 + ], + "angle": 0, + "content": "Dexuan Zhang1 Thomas Westfechtel1 Tatsuya Harada1,2 \n1The University of Tokyo, 2RIKEN" + }, + { + "type": "text", + "bbox": [ + 0.281, + 0.243, + 0.714, + 0.258 + ], + "angle": 0, + "content": "{dexuan.zhang, thomas, harada}@mi.t.u-tokyo.ac.jp" + }, + { + "type": "title", + "bbox": [ + 0.248, + 0.292, + 0.327, + 0.308 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.325, + 0.486, + 0.522 + ], + "angle": 0, + "content": "Existing domain adaptation systems can hardly be applied to real-world problems with new classes presenting at deployment time, especially regarding source-free scenarios where multiple source domains do not share the label space despite being given a few labeled target data. To address this, we consider a challenging problem: multi-source semi-supervised open-set domain adaptation and propose a learning theory via joint error, effectively tackling strong domain shift. To generalize the algorithm into source-free cases, we introduce a computationally efficient and architecture-flexible attention-based feature generation module. Extensive experiments on various data sets demonstrate the significant improvement of our proposed algorithm over baselines." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.572, + 0.222, + 0.587 + ], + "angle": 0, + "content": "1. Introduction" + }, + { + "type": "text", + "bbox": [ + 0.093, + 0.599, + 0.486, + 0.901 + ], + "angle": 0, + "content": "Generally, a supervised learning algorithm trained on a particular distribution of labeled samples (source domain) often fails to generalize when deployed on a new environment (target domain) in the presence of domain shift. 
In this regard, domain adaptation (DA) [1] algorithms address the domain-shift problem by aligning the data distributions of the source and target domains through learning a domain-invariant feature space using statistical or adversarial learning approaches, which have achieved remarkable success. However, the problem setting still needs to be relaxed for real-world applications when we aim to integrate the knowledge learned from multiple source domains. Current DA methods can hardly cover the case of varying label spaces across domains, and the corresponding learning theory has yet to be proposed. Moreover, the solution is further limited if this heterogeneous domain setting is extended to source-free situations, where each source model may be trained with a different network architecture and label space. This work tackles a challenging multi-source semi-supervised open-set domain adaptation paradigm with varying label space, illustrated in Fig. 1." }, { "type": "text", "bbox": [ 0.512, 0.293, 0.908, 0.46 ], "angle": 0, "content": "In general, multi-source DA (MSDA) [45] and semi-supervised DA (SSDA) [41] are regarded as more practical than the single-source DA setup, considering that labeled data may come from various domains. More precisely, in these cases, the labeled samples can be differently distributed among themselves in addition to the usual domain shift between the source and the target domains. One naive approach to MSDA and SSDA is to group all labeled data into a single domain and deploy any unsupervised DA (UDA) method. However, such a trivial solution may lead to sub-optimal classification due to the gaps among labeled data [41]." }, { "type": "text", "bbox": [ 0.512, 0.461, 0.909, 0.688 ], "angle": 0, "content": "Most DA techniques assume the same label space in the source and target domains, usually called the closed-set setting. The paradigm of closed-set DA has been substantially explored in the literature for UDA [11, 32, 33, 39, 47, 62, 64], SSDA [23, 37, 41, 43, 59, 61], and MSDA [19, 36, 51, 53, 57, 65, 67]. In contrast, the open-set DA (OSDA) [4] setting allows the presence of target-specific classes in addition to the shared classes. Such an open-set arrangement is more challenging due to a huge label shift across domains. The closed-set DA techniques cannot be directly applied in this case since these target-specific open-set samples, in turn, may jeopardize the domain alignment process. This work formalizes a more generalized problem where source domains may not share the label space, and the unlabeled target domain additionally contains novel classes." }, { "type": "text", "bbox": [ 0.512, 0.689, 0.909, 0.901 ], "angle": 0, "content": "Motivated by these observations, we consider a learning scenario in this work, the multi-source semi-supervised open-set DA (MSODA), where each source domain has a diverse label space, the labeled target domain consists of a few-shot set of target data whose label space, i.e., the known classes, is a superset of any source label space, and the unlabeled target domain contains data either from the known classes or from a combined unknown class. Under this setup, the task is to classify the unlabeled data either into one of the known categories or into a common unknown class. Such a setup has broad applications in fields relating to real-world visual perception like medical imaging and remote sensing, where acquisition of multi-domain data is feasible, and novel categories may show up abruptly [4]. 
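To pin the setting down, a toy sanity check of these label-space assumptions (the concrete class indices and the three-way partition are invented for illustration):

```python
# Toy illustration of the MSODA label-space structure described above.
known = set(range(30))               # labeled-target label space (known classes)
source_spaces = [set(range(0, 10)),  # each source sees its own subset,
                 set(range(10, 20)), # possibly with zero overlap
                 set(range(20, 30))]
UNKNOWN = 30                         # extra class present only in the unlabeled target
assert all(s <= known for s in source_spaces)  # every source space is a subset of the known space
target_space = known | {UNKNOWN}               # unlabeled target: known classes plus unknown
```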
Nonetheless, this MSODA problem" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.52, + 0.957 + ], + "angle": 0, + "content": "10142" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.092, + 0.485, + 0.257 + ], + "angle": 0, + "content": "cannot be effectively solved by directly utilizing the single source open-set paradigm of [2, 10, 20, 27, 34, 40, 54, 63] mainly because of the following factors: i) the varying label space of multiple source domains becomes an obstacle to the traditional OSDA techniques, and ii) the unknown recognition can be non-trivial since the target domain may be related to each source domain in a different degree. Regarding multi-source models, recent works [25, 55] consider various label spaces but require largely shared common classes for all domains to align the features, where we accept source domains with zero overlap." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.259, + 0.485, + 0.501 + ], + "angle": 0, + "content": "[36] argued that reducing the domain gap among the source domains leads to a more robust and effective MSDA model. This idea is particularly relevant to our problem setting since aligning the source domains among themselves inherently helps distinguish unknown from known categories in the target domain. Otherwise, the domain shift among the source domains may lead to an unstable alignment of unlabeled target data. Inspired by this idea, we combine the theoretical results from [62, 63] to build a learning theory that can align all source domains with the labeled target domain via joint error, which is crucial to dealing with label shift [66]. Then we introduce PU learning [58] to detect unknowns with an end-to-end algorithm such that the generalization error is guaranteed, unlike those methods applying closed-set DA after unknown separation. Our major contributions can be summarized as:" + }, + { + "type": "text", + "bbox": [ + 0.093, + 0.505, + 0.484, + 0.55 + ], + "angle": 0, + "content": "- We introduce a challenging problem setting of multi-source semi-supervised open-set DA with varying label space and propose a learning theory via joint error." + }, + { + "type": "text", + "bbox": [ + 0.093, + 0.551, + 0.482, + 0.58 + ], + "angle": 0, + "content": "- We design a framework to generate labeled source features via an attention-based mechanism for source-free cases." + }, + { + "type": "text", + "bbox": [ + 0.093, + 0.58, + 0.484, + 0.626 + ], + "angle": 0, + "content": "- We demonstrate the efficacy of our proposal through extensive experiments on two benchmark datasets where we perform thorough robustness analysis." + }, + { + "type": "list", + "bbox": [ + 0.093, + 0.505, + 0.484, + 0.626 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.134, + 0.645, + 0.445, + 0.804 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.814, + 0.485, + 0.897 + ], + "angle": 0, + "content": "Figure 1. Knowledge integration from heterogeneous domains can be considered a task for multi-source semi-supervised open-set domain adaptation. Given a few labeled target data as the key, we aim to build a unified target model from multiple source domains with varying label space, which can be applied to query data containing unknown categories." + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.09, + 0.826, + 0.108 + ], + "angle": 0, + "content": "2. 
Learning Theory of Unified Model" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.115, + 0.907, + 0.281 + ], + "angle": 0, + "content": "In this section, we present the theory that transfers the knowledge from multiple source domains to the target domain given a few labeled target data under the open-set situation. First, we propose a target error bound via joint error based on the theoretical results from [62, 63]. Then, we derive the generalization error of the proposed learning theory based on the generalized Vapnik-Chervonenkis (VC) complexity [49, 50] of real-valued function space. Finally, we proceed with the empirical objective function as an upper bound of a trivial convex combination with the log-sum-exp trick, which leads to a smoother optimization process." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.282, + 0.907, + 0.508 + ], + "angle": 0, + "content": "We consider the unified model (UM) as a solution to multi-source semi-supervised open-set domain adaptation (MSODA) tasks, where the learning algorithm has access to multiple source domains that may have different label spaces. A set of \\( n_i \\) (\\( i = 1,..,N \\)) labeled points \\( \\{(x_{s_i}^j, y_{s_i}^j) \\in (\\mathcal{X} \\subseteq \\mathbb{R}^D \\times \\mathcal{Y}_i' \\subseteq \\mathcal{Y}')\\}_{j=1}^{n_i} \\) sampled i.i.d. from each source domain \\( S_i' \\). In addition, a set of \\( l \\) labeled points (few-shot) \\( \\{(x_v^j, y_v^j) \\in (\\mathcal{X} \\subseteq \\mathbb{R}^D \\times \\mathcal{Y}' = \\{1, ..., K-1\\})\\}_{j=1}^l \\) sampled i.i.d. from the labeled target domain \\( V' \\) is available during learning. We seek a hypothesis that can classify a set of \\( m \\) unlabeled points \\( \\{(x_t^j) \\in X \\subseteq \\mathbb{R}^D\\}_{j=1}^m \\) sampled i.i.d. from target domain \\( T \\) where \\( \\mathcal{Y} = \\{1,..,K\\} \\) containing unknown class \\( K \\). Let \\( \\mathcal{K} = \\{k | k \\in \\mathbb{R}^K : \\sum_{y \\in \\mathcal{Y}} k[y] = 1, k[y] \\in [0,1]\\} \\) denotes output space and \\( S_i, V \\) indicate the complete domains with label space \\( \\mathcal{Y} \\)." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.517, + 0.909, + 0.744 + ], + "angle": 0, + "content": "Theorem 2.1 (Target Error Bound for MSODA via Joint Error1). Given source \\((S_i)\\), labeled \\((V)\\) and unlabeled target \\((T)\\) domains that contain data from the unknown class, let \\(f_{S_i}, f_V, f_T: \\mathcal{X} \\to \\mathcal{K}\\) be the true labeling functions of \\(S_i, V, T\\) respectively whose outputs are one-hot vectors denoting the corresponding classes of inputs. Let \\(\\epsilon: \\mathcal{K} \\times \\mathcal{K} \\to \\mathbb{R}\\) denote a distance metric and \\(\\epsilon_D(f, f') := \\mathbb{E}_{x \\sim D} \\epsilon(f(x), f'(x))\\) measure the expected disagreement between the outputs of \\(f, f': \\mathcal{X} \\to \\mathcal{K}\\) over a distribution \\(D\\) on \\(\\mathcal{X}\\). Regarding the source error of a hypothesis \\(h \\in \\mathcal{H}: \\mathcal{X} \\to \\mathcal{K}\\) where \\(h(x)[y]\\) indicates the probability of \\(x \\in \\mathcal{X}\\) labeled as \\(y \\in \\mathcal{Y}\\), we use the shorthand \\(\\epsilon_{S_i}(h) := \\epsilon_{S_i}(h, f_{S_i})\\). Similarly, we use \\(\\epsilon_V(h), \\epsilon_T(h)\\) to denote the labeled and unlabeled target error. 
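(In practice these expected-disagreement terms are estimated on finite samples; a minimal Monte Carlo sketch follows, with plain 0/1 disagreement of the induced argmax labels standing in for the distance metric, and illustrative names.)

```python
import numpy as np

# Empirical estimate of eps_D(f, f') = E_{x~D} eps(f(x), f'(x)) on m samples
# drawn from D, with eps taken as 0/1 disagreement of the argmax labels.
def empirical_disagreement(probs_f, probs_fp):
    # probs_f, probs_fp: (m, K) arrays of class probabilities from f and f'
    return float(np.mean(probs_f.argmax(axis=1) != probs_fp.argmax(axis=1)))

f_out = np.array([[0.9, 0.1], [0.2, 0.8]])
fp_out = np.array([[0.6, 0.4], [0.7, 0.3]])
print(empirical_disagreement(f_out, fp_out))  # 0.5: they disagree on one of two samples
```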
For \\(\\forall h, f_{S_i}', f_V', f_T' \\in \\mathcal{H}: \\mathcal{X} \\to \\mathcal{K}\\), the expected target error is bounded," + }, + { + "type": "equation", + "bbox": [ + 0.518, + 0.761, + 0.907, + 0.835 + ], + "angle": 0, + "content": "\\[\n\\begin{array}{l} 2 \\epsilon_ {T} (h) \\leq \\epsilon_ {V} (h) + \\sum_ {i = 1} ^ {N} \\alpha_ {i} U _ {i} (h), \\quad s. t. \\quad \\sum_ {i = 1} ^ {N} \\alpha_ {i} = 1 \\\\ = \\epsilon_ {V} (h) + \\sum_ {i = 1} ^ {N} \\alpha_ {i} \\left[ \\epsilon_ {S _ {i}} (h) + 2 D _ {S _ {i}, V, T} \\left(f _ {S _ {i}} ^ {*}, f _ {V} ^ {*}, f _ {T} ^ {*}, h\\right) + 2 \\theta_ {i} \\right], \\tag {1} \\\\ \\end{array}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.543, + 0.839, + 0.906, + 0.887 + ], + "angle": 0, + "content": "\\[\n\\begin{array}{l} 2 D _ {S _ {i}, V, T} (f _ {S _ {i}} ^ {*}, f _ {V} ^ {*}, f _ {T} ^ {*}, h) \\\\ = \\epsilon_ {T} (f _ {S _ {i}} ^ {*}, f _ {T} ^ {*}) + \\epsilon_ {T} (f _ {V} ^ {*}, f _ {T} ^ {*}) + \\epsilon_ {T} (h, f _ {S _ {i}} ^ {*}) + \\epsilon_ {T} (h, f _ {V} ^ {*}) \\\\ + \\epsilon_ {V} (f _ {S _ {i}} ^ {*}, f _ {V} ^ {*}) + \\epsilon_ {S _ {i}} (f _ {V} ^ {*}, f _ {S _ {i}} ^ {*}) - \\epsilon_ {V} (h, f _ {S _ {i}} ^ {*}) - \\epsilon_ {S _ {i}} (h, f _ {V} ^ {*}) \\tag {2} \\\\ \\end{array}\n\\]" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "10143" + } + ], + [ + { + "type": "equation", + "bbox": [ + 0.109, + 0.116, + 0.484, + 0.183 + ], + "angle": 0, + "content": "\\[\n\\begin{array}{l} \\theta_ {i} = \\underbrace {\\epsilon_ {S _ {i}} (f _ {S _ {i}} , f _ {S _ {i}} ^ {*}) / 2 + \\epsilon_ {V} (f _ {S _ {i}} , f _ {S _ {i}} ^ {*}) + \\epsilon_ {T} (f _ {S _ {i}} , f _ {S _ {i}} ^ {*})} _ {\\theta_ {S} ^ {i}} \\\\ + \\underbrace {\\epsilon_ {V} (f _ {V} , f _ {V} ^ {*}) / 2 + \\epsilon_ {S _ {i}} (f _ {V} , f _ {V} ^ {*}) + \\epsilon_ {T} (f _ {V} , f _ {V} ^ {*})} _ {\\theta_ {V} ^ {i}} + \\underbrace {\\epsilon_ {T} (f _ {T} , f _ {T} ^ {*})} _ {\\theta_ {T} ^ {i}} \\quad (3) \\\\ \\end{array}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.193, + 0.485, + 0.239 + ], + "angle": 0, + "content": "In the following, we discuss the approach to obtain generalization guarantees for multiple source domain adaptation in classification settings by a trivial union-bound argument." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.248, + 0.484, + 0.327 + ], + "angle": 0, + "content": "Assumption 2.2 (Substitutes for True Labeling Functions). For finite training data \\(\\{\\hat{S}_i\\}_{i = 1}^N,\\hat{T},\\hat{V}\\) , we assume there exist approximated labeling functions \\(\\{f_{S_i}^*\\}_{i = 1}^N,f_T^*,f_V^*\\) that can lead the empirical deviation \\(\\sum_{i}\\alpha_{i}\\hat{\\theta}_{i}\\) very close to zero such that it can be ignored during the practical learning process." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.334, + 0.485, + 0.482 + ], + "angle": 0, + "content": "Theorem 2.3 (Generalization Error\\(^1\\)). Let \\(\\hat{S}_i, \\hat{V}, \\hat{T}\\) denote the empirical distributions generated with \\(m\\) i.i.d. samples from each domain. Let \\(\\mathcal{F} = \\{f(x) = \\epsilon(h(x), h'(x)): \\mathcal{X} \\to [0, M] | h, h' \\in \\mathcal{H}\\}\\) be a function space with complexity measured by uniform covering number \\(\\mathcal{N}_1(\\xi, \\mathcal{F}, m)\\). 
Let \\(\\alpha_i = \\frac{\\exp(\\nu \\hat{U}_i(h))}{\\sum_j \\exp(\\nu \\hat{U}_j(h))}\\), \\(\\nu > 0\\), given Jensen's & Cauchy's inequality and Assumption 2.2, there exist \\(f_{S_i}^* \\in \\mathcal{H}_{S_i} \\subseteq \\mathcal{H}\\), \\(f_V^* \\in \\mathcal{H}_V \\subseteq \\mathcal{H}\\), \\(f_T^* \\in \\mathcal{H}_T \\subseteq \\mathcal{H}\\), such that for \\(0 < \\delta < 1\\), with probability at least \\(1 - \\delta\\), for \\(\\forall h \\in \\mathcal{H}\\):" + }, + { + "type": "equation", + "bbox": [ + 0.095, + 0.499, + 0.504, + 0.587 + ], + "angle": 0, + "content": "\\[\n\\begin{array}{l} \\epsilon_ {T} (h) \\leq \\frac {1}{2} [ \\underbrace {\\epsilon_ {\\hat {V}} (h)} _ {L _ {c l s} ^ {V} (h)} + \\frac {1}{\\nu} \\log \\sum_ {i = 1} ^ {N} \\exp (\\nu \\hat {U} _ {i} (h)) ] \\\\ + \\mathcal {O} \\left(\\inf _ {\\sqrt {\\frac {2}{m}} \\leq \\gamma \\leq M} (\\gamma + \\int_ {\\gamma} ^ {M} \\sqrt {\\frac {1}{m} \\log \\frac {2 (1 1 N + 6) \\mathcal {N} _ {1} (\\frac {\\xi}{8} , \\mathcal {F} , 2 m)}{\\delta}} d \\xi)\\right) \\tag {4} \\\\ \\end{array}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.095, + 0.591, + 0.484, + 0.627 + ], + "angle": 0, + "content": "\\[\n\\hat {U} _ {i} (h) = \\underbrace {\\epsilon_ {\\hat {S} _ {i}} (h)} _ {L _ {c l s} ^ {S _ {i}} (h)} + 2 \\underbrace {D _ {\\hat {S} _ {i} , \\hat {V} , \\hat {T}} \\left(f _ {S _ {i}} ^ {*}, f _ {V} ^ {*}, f _ {T} ^ {*}, h\\right)} _ {L _ {d i s} ^ {i} \\left(f _ {S _ {i}} ^ {*}, f _ {V} ^ {*}, f _ {T} ^ {*}, h\\right)} \\tag {5}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.637, + 0.483, + 0.757 + ], + "angle": 0, + "content": "The log-sum-exp trick [30] yields an upper bound of the convex combination as Theorem 2.3, where we no longer need to heuristically decide the value of \\(\\alpha_{i}\\) in the unified model. It smooths the objective and provides a principled and adaptive way to combine all the gradients from the \\(N\\) source domains. This often leads to better generalizations in practice because of the ensemble effect of multiple sources implied by the upper bound [65]." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.758, + 0.484, + 0.879 + ], + "angle": 0, + "content": "According to [13, 56, 63], for \\(\\forall f, f': \\mathcal{X} \\to \\mathcal{K}, \\epsilon_V(f, f')\\) can be approximated by the expectation on \\(V'\\) based on PU learning. Moreover, source data \\(S_i'\\) may be unavailable due to privacy concerns (e.g., medical data) during the adaptation phase. To tackle this source-free domain adaptation (SFDA) problem, we propose a source features generation pipeline based on the attention mechanism, which can transfer the knowledge between models with different architectures." + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.09, + 0.65, + 0.108 + ], + "angle": 0, + "content": "3. Methodology" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.115, + 0.907, + 0.222 + ], + "angle": 0, + "content": "In this section, we first recall several preliminaries crucial to the learning algorithm of open-set domain adaptation. Then, we propose a pipeline to transfer the knowledge between models with different architectures based on the attention mechanism to recover feature space under the source-free setting. Finally, we define constrained hypothesis space to obtain a rigorous objective function." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.23, + 0.747, + 0.246 + ], + "angle": 0, + "content": "3.1. 
+ }, + { + "type": "text", + "bbox": [ + 0.09, + 0.758, + 0.484, + 0.879 + ], + "angle": 0, + "content": "According to [13, 56, 63], for any \(f, f': \mathcal{X} \to \mathcal{K}\), \(\epsilon_V(f, f')\) can be approximated by an expectation over \(V'\) based on PU learning. Moreover, source data \(S_i'\) may be unavailable due to privacy concerns (e.g., medical data) during the adaptation phase. To tackle this source-free domain adaptation (SFDA) problem, we propose a source feature generation pipeline based on the attention mechanism, which can transfer knowledge between models with different architectures." + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.09, + 0.65, + 0.108 + ], + "angle": 0, + "content": "3. Methodology" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.115, + 0.907, + 0.222 + ], + "angle": 0, + "content": "In this section, we first recall several preliminaries crucial to the learning algorithm for open-set domain adaptation. Then, we propose a pipeline that transfers knowledge between models with different architectures based on the attention mechanism to recover the feature space under the source-free setting. Finally, we define a constrained hypothesis space to obtain a rigorous objective function." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.23, + 0.747, + 0.246 + ], + "angle": 0, + "content": "3.1. Discrepancy Measurement" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.253, + 0.907, + 0.314 + ], + "angle": 0, + "content": "As introduced in [63], we recall the definitions of the Open-set Margin Discrepancy and the Unknown Predictive Discrepancy, which serve as key components in bridging the gap between theory and algorithm for open-set domain adaptation." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.319, + 0.907, + 0.362 + ], + "angle": 0, + "content": "Definition 3.1 (Open-set Margin Discrepancy). Let \( y, y' \) denote the outputs of \( f, f': \mathcal{X} \to \mathcal{K} \), where \( y = l(f(x)) \) and \( y' = l(f'(x)) \) given the induced labeling function:" + }, + { + "type": "equation", + "bbox": [ + 0.632, + 0.368, + 0.907, + 0.387 + ], + "angle": 0, + "content": "\[\nl \circ f: x \rightarrow \underset{y \in \mathcal{Y}}{\arg\max} f(x)[y] \tag{6}\n\]" + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.395, + 0.906, + 0.424 + ], + "angle": 0, + "content": "The Open-set Margin Discrepancy between two functions \( f, f' \) over a distribution \( D \) is given by:" + }, + { + "type": "equation", + "bbox": [ + 0.527, + 0.432, + 0.907, + 0.478 + ], + "angle": 0, + "content": "\[\n\begin{array}{l} \epsilon_{D}(f, f') = \mathbb{E}_{x \sim D}[\operatorname{omd}(f(x), f'(x))] \tag{7} \\ \operatorname{omd}(f(x), f'(x)) = \max\big(\left| \log(1 - f(x)[y]) - \log(1 - f'(x)[y]) \right|, \\ \left| \log(1 - f(x)[y']) - \log(1 - f'(x)[y']) \right|\big) \tag{8} \\ \end{array}\n\]"
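+ }, + { + "type": "text", + "angle": 0, + "content": "As a concrete reading of Eqs. (7)-(8), the short sketch below (our own illustration; \(p\) and \(q\) stand for the probability vectors \(f(x)\) and \(f'(x)\), assumed to have entries strictly inside \((0, 1)\)) computes the omd term for a single sample:

import numpy as np

def omd(p, q):
    # induced labels y, y' from Eq. (6)
    y, y_prime = int(np.argmax(p)), int(np.argmax(q))
    d = lambda k: abs(np.log(1.0 - p[k]) - np.log(1.0 - q[k]))
    # Eq. (8): margin disagreement measured on both induced labels
    return max(d(y), d(y_prime))

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.1, 0.6, 0.3])
print(omd(p, q))

Averaging omd over samples drawn from \(D\) gives the Monte-Carlo estimate of \(\epsilon_D(f, f')\) in Eq. (7)."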
Let \\( S_{i}^{k} = P_{S_{i}}(x|y = k), V^{k} = P_{V}(x|y = k), T^{k} = P_{T}(x|y = k) \\) denote class conditional distributions, \\( S_{i}^{\\backslash K} = P_{S_{i}}(x|y \\neq K), V' = P_{V}(x|y \\neq K), T' = P_{T}(x|y \\neq K) \\) indicate incomplete domains that do not contain unknown class \\( S_{i}^{K}, V^{K}, T^{K} \\). Given a feature extractor \\( g: \\mathcal{X} \\subseteq \\mathbb{R}^{D} \\to \\mathcal{Z} \\subseteq \\mathbb{R}^{F} \\), assume that the feature space can be aligned by DA techniques such that \\( Z^{K} = P_{S_{i}^{K}}(z) = P_{V^{K}}(z) = P_{T^{K}}(z), Z' = P_{S_{i}^{\\backslash K}}(z) = P_{V'}(z) = P_{T'}(z) \\)." + }, + { + "type": "page_footnote", + "bbox": [ + 0.109, + 0.887, + 0.316, + 0.901 + ], + "angle": 0, + "content": "1 see proofs in supplementary material." + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.52, + 0.957 + ], + "angle": 0, + "content": "10144" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.09, + 0.487, + 0.233 + ], + "angle": 0, + "content": "Lemma 3.4 (PU Estimation\\(^1\\)). Let \\( g: \\mathcal{X} \\subseteq \\mathbb{R}^D \\to \\mathcal{Z} \\subseteq \\mathbb{R}^F \\) denote the feature extractor. Let \\( h \\in \\mathcal{H}^F: \\mathcal{Z} \\to \\mathcal{K} \\) where \\( h \\circ g \\in \\mathcal{H}: \\mathcal{X} \\to \\mathcal{K} \\) and \\( f_V^* \\in \\mathcal{H}_V^F, f_T^* \\in \\mathcal{H}_T^F, f_{S_i}^* \\in \\mathcal{H}_{S_i}^F: \\mathcal{Z} \\to \\mathcal{K} \\) denote the decomposed approximated labeling functions. Let \\( \\sum_{k=1}^{K} \\pi_{S_i}^k = 1, \\sum_{k=1}^{K} \\pi_V^k = 1, \\sum_{k=1}^{K} \\pi_T^k = 1 \\) denote the class priors of each domain. Given Assumption 3.3, the expectation on \\( S_i \\) can be estimated by expectation on \\( S_i^{\\backslash K} \\) and Unknown Predictive Discrepancy (Definition 3.2) with a mild condition that \\( \\pi_{S_i}^K = \\pi_T^K = 1 - \\alpha \\):" + }, + { + "type": "equation", + "bbox": [ + 0.143, + 0.248, + 0.486, + 0.277 + ], + "angle": 0, + "content": "\\[\n\\epsilon_ {S _ {i}} (h \\circ g) = \\alpha \\left[ \\epsilon_ {S _ {i} ^ {\\backslash K}} (h \\circ g) - v _ {S _ {i} ^ {\\backslash K}} (h \\circ g) \\right] + v _ {T} (h \\circ g) \\tag {10}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.093, + 0.28, + 0.498, + 0.353 + ], + "angle": 0, + "content": "\\[\n\\begin{array}{l} \\epsilon_ {S _ {i}} (f _ {S _ {i}} ^ {*} \\circ g, f _ {V} ^ {*} \\circ g) = \\alpha [ \\epsilon_ {S _ {i} \\backslash K} (f _ {S _ {i}} ^ {*} \\circ g, f _ {V} ^ {*} \\circ g) - v _ {S _ {i} \\backslash K} (f _ {S _ {i}} ^ {*} \\circ g, f _ {V} ^ {*} \\circ g) ] \\\\ + v _ {T} \\left(f _ {S _ {i}} ^ {*} \\circ g, f _ {V} ^ {*} \\circ g\\right) (11) \\\\ \\epsilon_ {S _ {i}} (f _ {V} ^ {*} \\circ g, h \\circ g) = \\alpha [ \\epsilon_ {S _ {i} ^ {\\backslash} K} (f _ {V} ^ {*} \\circ g, h \\circ g) - v _ {S _ {i} ^ {\\backslash} K} (f _ {V} ^ {*} \\circ g, h \\circ g) ] \\\\ + v _ {T} \\left(f _ {V} ^ {*} \\circ g, h \\circ g\\right) (12) \\\\ \\end{array}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.364, + 0.486, + 0.426 + ], + "angle": 0, + "content": "Assumption 3.5. Given a feature extractor \\( g: \\mathcal{X} \\to \\mathcal{Z} \\), assume that the covariate shift between each source and labeled target domain can be addressed for known categories as \\( P_{S_i^k}(z) = P_{V^k}(z), k = 1,..K - 1 \\)." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.434, + 0.486, + 0.478 + ], + "angle": 0, + "content": "Corollary 3.6. 
Let \\(\\mathcal{Y}_i^{\\prime \\prime} = \\{k|k\\notin \\mathcal{Y}_i^\\prime ,k = 1,..K - 1\\}\\) denote the label space that is absent from \\(S_{i}^{\\prime}\\). Given Assumption 3.5, we further decompose the source error as:" + }, + { + "type": "equation", + "bbox": [ + 0.12, + 0.486, + 0.484, + 0.545 + ], + "angle": 0, + "content": "\\[\n\\begin{array}{l} \\alpha \\epsilon_ {S _ {i} ^ {\\backslash K}} (h \\circ g) = \\sum_ {k \\in \\mathcal {Y} _ {i} ^ {\\prime \\prime}} \\pi_ {S _ {i}} ^ {k} \\epsilon_ {S _ {i} ^ {k}} (h \\circ g) + \\sum_ {k \\in \\mathcal {Y} _ {i} ^ {\\prime}} \\pi_ {S _ {i}} ^ {k} \\epsilon_ {S _ {i} ^ {k}} (h \\circ g) \\\\ = \\rho_ {i} \\sum_ {k \\in \\mathcal {Y} _ {i} ^ {\\prime \\prime}} \\epsilon_ {V _ {i} ^ {k}} (h \\circ g) + (1 - \\rho_ {i}) \\epsilon_ {S _ {i} ^ {\\prime}} (h \\circ g), \\tag {13} \\\\ \\end{array}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.557, + 0.483, + 0.589 + ], + "angle": 0, + "content": "where \\(\\rho_{i} = |\\mathcal{Y}_{i}^{\\prime \\prime}| / K\\) under a mild condition that \\(\\pi_{S_i}^k = 1 / K\\) for \\(k\\in \\mathcal{Y}_i^{\\prime \\prime}\\)" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.597, + 0.484, + 0.658 + ], + "angle": 0, + "content": "Remark 3.7. According to Definition 3.2, minimizing \\( v_{\\hat{T}}(h \\circ g) \\) means mapping target samples to the unknown class. In practice, a multiplier \\( \\beta < 1 \\) is applied on \\( v_{\\hat{T}}(h \\circ g) \\) to prevent all target samples from being classified as unknown." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.666, + 0.484, + 0.697 + ], + "angle": 0, + "content": "3.3. Towards Source-Free Knowledge Transfer with Attention-based Feature Generation" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.704, + 0.486, + 0.902 + ], + "angle": 0, + "content": "Source-free domain adaptation (SFDA) has been considered a means of reducing reliance on source data. As described in [24], the existing SFDA research can generally be categorized into data-centric and model-centric methods. Model-centric methods employ techniques such as self-supervision, while data-centric methods focus on image-based reconstruction. Model-centric methods like [28, 29, 35, 60] require source model fine-tuning, where the generalization to multi-source cases with label shift can be nontrivial since it may fail to fully leverage the few-shot labeled data due to the missing classes in source domains. Meanwhile, for data-centric methods like [6, 26], the pipeline to generate source-like images is generally computationally intensive" + }, + { + "type": "image", + "bbox": [ + 0.555, + 0.09, + 0.868, + 0.192 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.512, + 0.201, + 0.908, + 0.314 + ], + "angle": 0, + "content": "Figure 2. The mechanism of attention-based feature generation for source-free domain adaptation. Given a similarity-based weight estimated by the knowledge preserved in the pre-trained source model consisting of a black-box feature extractor \\( g_{i} \\) and a visible classifier \\( f_{i} \\), labeled features generated with the attention module can be considered a weighted average of unlabeled target features, which serve as the anchor for the target distribution alignment in the adaptation phase." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.347, + 0.907, + 0.483 + ], + "angle": 0, + "content": "and time-consuming, which can hardly be applied to highly structured domains. 
Furthermore, recovering source-like images may violate the privacy-protection intent of SFDA. Motivated by this, in this section we propose a novel attention-based feature generation (AFG) algorithm that produces labeled anchors for the alignment of unlabeled target data by leveraging the knowledge embedded in the source models; it is more computationally efficient and independent of source-model fine-tuning." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.487, + 0.909, + 0.621 + ], + "angle": 0, + "content": "The SFDA scenario involves two phases: pre-training and adaptation. During pre-training, \(N\) models are trained on labeled data from each source domain \(x_{s_i} \sim S_i'\), \(i = 1,\dots,N\). Subsequently, the goal of the adaptation stage is to adapt the pre-trained source models to the unlabeled target data \(x_t \sim T\) given few-shot labeled target data \(x_v \sim V'\). The proposed approach assumes the challenging open-set form, implying that the label spaces of the target and source domains are distinct." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.622, + 0.909, + 0.724 + ], + "angle": 0, + "content": "Inspired by [28], which uses a single-layer linear classifier in source models to store the cluster centers of source features, we choose a Bayesian linear classifier during pre-training such that source features can be sampled by the re-parameterization trick [21] in the adaptation phase. Let \( g_{i}:\mathcal{X}\rightarrow \mathcal{Z}_{i}\subseteq \mathbb{R}^{F_{i}} \) and \( f_{i}\coloneqq \{\mu_{i},\sigma_{i}\} \) denote each pre-trained source model. As illustrated in Fig. 2, given the source features approximated by the weight samples" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.724, + 0.905, + 0.782 + ], + "angle": 0, + "content": "of the Bayesian linear classifier as \(g_{i}(\hat{S}_{i}') = \left( \begin{array}{c} g_{i}(x_{s_{i}}^{1}) \\ \vdots \\ g_{i}(x_{s_{i}}^{\|\mathcal{Y}_{i}'\|}) \end{array} \right) :=\)" + }, + { + "type": "equation", + "bbox": [ + 0.513, + 0.782, + 0.905, + 0.841 + ], + "angle": 0, + "content": "\[\n\mu_{i} + \sigma_{i} \odot \left( \begin{array}{c} \zeta_{i}^{1} \\ \vdots \\ \zeta_{i}^{\|\mathcal{Y}_{i}'\|} \end{array} \right), \quad \zeta_{i}^{j} \sim \mathcal{N}(0, I) \text{ with size } \|\mathcal{Y}_{i}'\| \times F_{i}\n\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.84, + 0.909, + 0.904 + ], + "angle": 0, + "content": "(multiple samples can be generated for each class in practice), along with the query and key mapping functions \(w_{q_i}, w_{k_i}: \mathcal{Z}_i \to \mathcal{Z}_i' \subseteq \mathbb{R}^{F_i'}\), the corresponding labeled anchor defined as \(\{(g(x_{s_i}^j), y_i^j \in \mathcal{Y}_i')\}_{j=1}^{\|\mathcal{Y}_i'\|}, y_i^j \neq y_i^{j'}\) is given" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "10145" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.091, + 0.093, + 0.117, + 0.107 + ], + "angle": 0, + "content": "by:" + }, + { + "type": "equation", + "bbox": [ + 0.138, + 0.118, + 0.484, + 0.148 + ], + "angle": 0, + "content": "\[\ng(\hat{S}_{i}') = \operatorname{softmax}\left(\frac{w_{q_{i}}\left(g_{i}\left(\hat{S}_{i}'\right)\right) \cdot w_{k_{i}}\left(g_{i}\left(\hat{T}'\right)\right)^{\top}}{\sqrt{F_{i}'}}\right) g(\hat{T}'), \tag{14}\n\]"
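+ }, + { + "type": "text", + "angle": 0, + "content": "A minimal PyTorch sketch of Eq. (14) (our own illustration under assumed dimensions, not the authors' released code): source features are first sampled from the Bayesian classifier \(f_i = \{\mu_i, \sigma_i\}\) via the re-parameterization trick, then mapped to weighted averages of target features by one attention step:

import torch
import torch.nn.functional as F

F_i, F_p, C = 256, 128, 10                    # feature dim, projection dim, |Y_i'|
mu, sigma = torch.randn(C, F_i), torch.rand(C, F_i)  # Bayesian classifier f_i
w_q = torch.nn.Linear(F_i, F_p)               # query mapping w_{q_i}
w_k = torch.nn.Linear(F_i, F_p)               # key mapping w_{k_i}

# re-parameterization trick: one sampled source feature per known class
zeta = torch.randn(C, F_i)
src_feats = mu + sigma * zeta                 # g_i(S_i'), size C x F_i

tgt_feats_i = torch.randn(64, F_i)            # g_i(T'): target feats, source backbone
tgt_feats = torch.randn(64, F_i)              # g(T'):   target feats, target backbone

# Eq. (14): attention over target features yields one labeled anchor per class
attn = F.softmax(w_q(src_feats) @ w_k(tgt_feats_i).t() / F_p ** 0.5, dim=-1)
anchors = attn @ tgt_feats                    # g(S_i'), size C x F_i

The reconstruction and cycle-consistency losses of Eqs. (15)-(16) below regularize \(w_{q_i}, w_{k_i}\) so that these anchors approximate genuine class-wise source features."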
{\\top}}{\\sqrt {F _ {i} ^ {\\prime}}}\\right) g (\\hat {T} ^ {\\prime}), \\tag {14}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.161, + 0.485, + 0.368 + ], + "angle": 0, + "content": "where \\(\\hat{T}'\\) denotes the estimated known-class data from the target. To produce meaningful features for the distribution alignment in the adaptation phase, we propose two objective functions to learn the query and key mapping \\(\\{w_{q_i}, w_{k_i}\\}_{i=1}^N\\) of each source domain. Analogous to [5], we train \\(w_{q_i}, w_{k_i}\\) by maximizing the similarity between the projections of the same target features extracted by the per-trained source model \\(g_i\\) while pushing the different target features far apart, which can be achieved with minimizing reconstruction loss \\(L_{rec}^i\\) such that the output of the attention module can approximate target features \\(g(\\hat{T}')\\) given target data as query and key. To further regularize \\(w_{q_i}, w_{k_i}\\), we introduce a cycle-consistency loss \\(L_{cyc}^i\\) that can bring the features generated by labeled and unlabeled target data \\(\\hat{V}', \\hat{T}'\\) close to each other." + }, + { + "type": "equation", + "bbox": [ + 0.106, + 0.377, + 0.484, + 0.407 + ], + "angle": 0, + "content": "\\[\nL _ {r e c} ^ {i} = \\left| \\operatorname {s o f t m a x} \\left(\\frac {w _ {q _ {i}} \\left(g _ {i} \\left(\\hat {T} ^ {\\prime}\\right)\\right) \\cdot w _ {k _ {i}} \\left(g _ {i} \\left(\\hat {T} ^ {\\prime}\\right)\\right) ^ {\\top}}{\\sqrt {F _ {i} ^ {\\prime}}}\\right) g \\left(\\hat {T} ^ {\\prime}\\right) - g \\left(\\hat {T} ^ {\\prime}\\right) \\right| \\tag {15}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.107, + 0.408, + 0.484, + 0.439 + ], + "angle": 0, + "content": "\\[\nL _ {c y c} ^ {i} = | \\mathrm {s o f t m a x} (\\frac {w _ {q _ {i}} (g _ {i} (\\hat {S} _ {i} ^ {\\prime})) \\cdot w _ {k _ {i}} (g _ {i} (\\hat {V} ^ {\\prime})) ^ {\\top}}{\\sqrt {F _ {i} ^ {\\prime}}}) g (\\hat {V} ^ {\\prime}) - g (\\hat {S} _ {i} ^ {\\prime}) | \\quad (1 6)\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.455, + 0.485, + 0.789 + ], + "angle": 0, + "content": "Progressive Unknown Rejection (PUR) is additionally proposed to improve the recognition accuracy on unknown class. In the open-set setting, empirical target data \\(\\hat{T}\\) includes the unknown class, while the generated labeled anchors \\(g(\\hat{S}_i^{\\prime})\\) should be limited to the known class. According to the generation mechanism defined by Eq. (14), labeled anchors can be considered a similarity-based weighted average of target features, which are not supposed to contain components from irrelevant features of the unknown class. However, it is impractical to learn the ideal results where the weights assigned to those unrelated target features are zero by pure regularization of mapping functions \\(w_{q_i}, w_{k_i}\\). To address this problem, we introduce a scheme to gradually reject the target features from the unknown class by removing the target data labeled as unknown given the current hypothesis \\(h\\) from \\(\\hat{T}\\). Specifically, at each training iteration during the adaptation stage, for a batch of input target data, we rank the likelihood of the unknown class for each target sample \\(p(y = K|x_t) = h(x_t)[K]\\) in ascending order. Given a threshold \\(0 < \\tau < 1\\) progressively increasing from zero according to the exponential ramp-up function [22], we select bottom \\(1 - \\tau\\) target samples as \\(\\hat{T}'\\)." 
+ }, + { + "type": "title", + "bbox": [ + 0.091, + 0.799, + 0.303, + 0.815 + ], + "angle": 0, + "content": "3.4. Hypothesis Constraint" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.821, + 0.486, + 0.904 + ], + "angle": 0, + "content": "Proposition 3.8. If \\(\\mathcal{H}_{S_i}^F, \\mathcal{H}_V^F, \\mathcal{H}_T^F\\) are sets of functions that can minimize a part of \\(\\hat{\\theta}_S^i, \\sum_i \\hat{\\theta}_V^i, \\sum_i \\hat{\\theta}_T^i\\) respectively, then \\(f_{S_i}^* \\in \\mathcal{H}_{S_i}^F, f_V^* \\in \\mathcal{H}_V^F, f_T^* \\in \\mathcal{H}_T^F\\) must hold such that we can relax \\(L_{\\text{dis}}\\) in Theorem 2.3 by considering maximum w.r.t. functions \\(f_{S_i}', f_V', f_T'\\) as:" + }, + { + "type": "equation", + "bbox": [ + 0.533, + 0.099, + 0.943, + 0.147 + ], + "angle": 0, + "content": "\\[\n\\begin{array}{l} \\log \\sum_ {i} \\exp \\left(\\nu \\left[ L _ {c l s} ^ {S _ {i}} (h; g) + 2 L _ {d i s} ^ {i} \\left(f _ {S _ {i}} ^ {*}, f _ {V} ^ {*}, f _ {T} ^ {*}, h; g\\right) \\right]\\right) \\\\ \\leq \\sup _ {\\{f _ {S _ {i}} ^ {\\prime} \\in \\mathcal {H} _ {S _ {i}} ^ {F} \\} _ {i = 1} ^ {N}, f _ {V} ^ {\\prime} \\in \\mathcal {H} _ {V} ^ {F}, f _ {T} ^ {\\prime} \\in \\mathcal {H} _ {T} ^ {F}} \\log \\sum_ {i} \\exp (\\nu [ L _ {c l s} ^ {S _ {i}} (h; g) + 2 L _ {d i s} ^ {i} (f _ {S _ {i}} ^ {\\prime}, f _ {V} ^ {\\prime}, f _ {T} ^ {\\prime}, h; g) ]) \\\\ \\end{array}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.883, + 0.159, + 0.907, + 0.17 + ], + "angle": 0, + "content": "(17)" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.182, + 0.908, + 0.294 + ], + "angle": 0, + "content": "Definition 3.9 (Approximated Labeling Function Space). Let \\(L_{\\mathcal{H}_S}^i, L_{\\mathcal{H}_V}^i, L_{\\mathcal{H}_T}^i\\) denote the hypothesis constraints, i.e., a part of the empirical deviation between approximated and true labeling functions \\(\\hat{\\theta}_S^i, \\hat{\\theta}_T^i, \\hat{\\theta}_V^i\\). Approximated Labeling Function Space \\(\\mathcal{H}_{S_i}^F, \\mathcal{H}_V^F, \\mathcal{H}_T^F\\) can be defined as the sets whose members \\(f_{S_i}', f_V', f_T' \\in \\mathcal{H}^F\\) can minimize \\(L_{\\mathcal{H}_S}^i, \\sum_i L_{\\mathcal{H}_V}^i, \\sum_i L_{\\mathcal{H}_T}^i\\):" + }, + { + "type": "equation", + "bbox": [ + 0.534, + 0.301, + 0.95, + 0.345 + ], + "angle": 0, + "content": "\\[\n\\begin{array}{l} \\left\\{\\mathcal {H} _ {S _ {i}} ^ {F} = \\left\\{f _ {S _ {i}} ^ {\\prime} | \\arg \\min _ {g, f _ {S _ {i}} ^ {\\prime} \\in \\mathcal {H} ^ {F}} \\left[ L _ {\\mathcal {H} _ {S}} ^ {i} \\left(f _ {S _ {i}} ^ {\\prime}; g\\right) = L _ {c l s} ^ {S _ {i}} \\left(f _ {S _ {i}} ^ {\\prime}; g\\right) / 2 + L _ {c l s} ^ {V} \\left(f _ {S _ {i}} ^ {\\prime}; g\\right) \\right] \\right\\} \\right. \\\\ \\left\\{\\mathcal {H} _ {V} ^ {F} = \\left\\{f _ {V} ^ {\\prime} \\mid \\arg \\min _ {g, f _ {V} ^ {\\prime} \\in \\mathcal {H} ^ {F}} \\sum_ {i} \\left[ L _ {\\mathcal {H} _ {V}} ^ {i} \\left(f _ {V} ^ {\\prime}; g\\right) = L _ {c l s} ^ {V} \\left(f _ {V} ^ {\\prime}; g\\right) / 2 + L _ {c l s} ^ {S _ {i}} \\left(f _ {V} ^ {\\prime}; g\\right) \\right] \\right\\} \\right. 
\\\\ \\left\\{\\mathcal {H} _ {T} ^ {F} = \\left\\{f _ {T} ^ {\\prime} \\mid \\arg \\min _ {g, f _ {T} ^ {\\prime} \\in \\mathcal {H} ^ {F}} \\sum_ {i} \\left[ L _ {\\mathcal {H} _ {T}} ^ {i} \\left(f _ {T} ^ {\\prime}; g\\right) = \\left[ L _ {\\text {c l s}} ^ {S _ {i}} \\left(f _ {T} ^ {\\prime}; g\\right) + L _ {\\text {c l s}} ^ {V} \\left(f _ {T} ^ {\\prime}; g\\right) \\right] / 2 + L _ {\\text {s s l}} \\right] \\right\\} \\right. \\\\ \\end{array}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.883, + 0.365, + 0.907, + 0.376 + ], + "angle": 0, + "content": "(18)" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.389, + 0.907, + 0.481 + ], + "angle": 0, + "content": "To build a more reliable target function space \\(\\mathcal{H}_T^F\\), we approximate the target error with the error rate on labeled samples and a semi-supervised regularization term \\(L_{ssl}^2\\) including entropy minimization [14, 15], pseudo labeling [44, 46] and consistency regularization [22, 42], which has been intensively discussed in [23, 43, 59, 63]." + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.49, + 0.631, + 0.507 + ], + "angle": 0, + "content": "3.5. Algorithm" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.514, + 0.909, + 0.665 + ], + "angle": 0, + "content": "As described in Algorithm 1, we introduce a gradient reversal layer [12] to train the overall objective together. ImageNet [8] pre-trained ResNet-50 [16] is used as feature extractor \\( g \\) and randomly initialized 2-layer fully-connected networks are used for classifiers \\( f_{S_i}^{\\prime}, f_V^{\\prime}, f_T^{\\prime}, h \\). We adopt SGD with a momentum of 0.9 for optimization, where the initial learning rate is empirically set to 0.001. We employ the learning rate annealing strategy proposed in [12]. We use RandomFlip, RandomCrop, and RandAugment [7] as data augmentation with the batch size fixed to 24." + }, + { + "type": "image", + "bbox": [ + 0.517, + 0.674, + 0.907, + 0.796 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.513, + 0.804, + 0.909, + 0.862 + ], + "angle": 0, + "content": "Figure 3. Alignment mechanism of UM, where unknown target data \\(\\hat{T}^K\\) (green) is pushed away from labeled data into a separated cluster, while known target data \\(\\hat{T}'\\) is aligned back towards labeled clusters by \\(\\min_g L_{dis}\\)." + }, + { + "type": "page_footnote", + "bbox": [ + 0.531, + 0.887, + 0.741, + 0.901 + ], + "angle": 0, + "content": "2see details in supplementary material." 
+ }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.52, + 0.957 + ], + "angle": 0, + "content": "10146" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.093, + 0.091, + 0.212, + 0.106 + ], + "angle": 0, + "content": "Algorithm 1 UM" + }, + { + "type": "text", + "bbox": [ + 0.106, + 0.111, + 0.433, + 0.126 + ], + "angle": 0, + "content": "Input: source \\(\\{\\hat{S}_i^{\\prime}\\}_{i = 1}^{N}\\), labeled target \\(\\hat{V}^{\\prime}\\), unlabeled target \\(\\hat{T}\\)" + }, + { + "type": "text", + "bbox": [ + 0.107, + 0.126, + 0.482, + 0.155 + ], + "angle": 0, + "content": "Output: updated parameters \\(\\phi = (\\{f_{S_i}^{\\prime}\\}_{i=1}^{N}, g, h, f_V^{\\prime}f_T^{\\prime})\\), \\(w = \\{w_{q_i}, w_{k_i}\\}_{i=1}^{N}\\)" + }, + { + "type": "text", + "bbox": [ + 0.107, + 0.155, + 0.482, + 0.177 + ], + "angle": 0, + "content": "Parameter: trade-off parameter \\(\\lambda\\); learning rate \\(\\eta\\); known class ratio estimator \\(\\alpha\\); coefficients \\(\\nu, \\beta, \\tau\\)" + }, + { + "type": "text", + "bbox": [ + 0.107, + 0.178, + 0.327, + 0.19 + ], + "angle": 0, + "content": "Notation: gradient reversal operator \\( R(\\cdot) \\)" + }, + { + "type": "text", + "bbox": [ + 0.107, + 0.19, + 0.241, + 0.201 + ], + "angle": 0, + "content": "for epoch \\(= 1,2,\\dots\\) do" + }, + { + "type": "text", + "bbox": [ + 0.126, + 0.202, + 0.361, + 0.213 + ], + "angle": 0, + "content": "Estimate known class ratio \\(\\alpha\\) on \\(\\hat{T}\\) with \\(g,h\\)" + }, + { + "type": "text", + "bbox": [ + 0.126, + 0.214, + 0.227, + 0.224 + ], + "angle": 0, + "content": "if source-free then" + }, + { + "type": "text", + "bbox": [ + 0.146, + 0.225, + 0.479, + 0.237 + ], + "angle": 0, + "content": "Estimate \\(\\hat{T}^{\\prime}\\) according to PUR and Update \\(w\\) to optimize AFG:" + }, + { + "type": "equation", + "bbox": [ + 0.158, + 0.238, + 0.408, + 0.257 + ], + "angle": 0, + "content": "\\[\nw \\leftarrow w - \\eta \\Delta w, \\Delta w = \\frac {\\partial \\sum_ {i = 1} ^ {N} \\left(L _ {r c c} ^ {i} + L _ {c y c} ^ {i}\\right)}{\\partial w}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.146, + 0.258, + 0.429, + 0.27 + ], + "angle": 0, + "content": "Generate labeled features \\(g(\\hat{S}_i^{\\prime})\\) according to Eq. 
(14)" + }, + { + "type": "title", + "bbox": [ + 0.126, + 0.27, + 0.162, + 0.28 + ], + "angle": 0, + "content": "end if" + }, + { + "type": "text", + "bbox": [ + 0.106, + 0.281, + 0.485, + 0.327 + ], + "angle": 0, + "content": "Compute labeled target error \\( L_{cls}^{V}(h;g) = L_{V} \\), source error \\( L_{cls}^{S_i}(h;g) = L_S^i \\), hypothesis constraints \\( L_{\\mathcal{H}_S}^i (f_{S_i}';g) + L_{\\mathcal{H}_V}^i (f_V';g) + L_{\\mathcal{H}_T}^i (f_T';g) = L_H^i \\) for \\( i = 1,..N \\)" + }, + { + "type": "text", + "bbox": [ + 0.106, + 0.327, + 0.482, + 0.351 + ], + "angle": 0, + "content": "Compute discrepancy \\( L_{dis}^{i}(f_{S_{i}}^{\\prime}, f_{V}^{\\prime}, f_{T}^{\\prime}, R \\circ h \\circ R; R \\circ g) = L_{D}^{i} \\) given the gradient reversal layer for \\( i = 1,..N \\)" + }, + { + "type": "text", + "bbox": [ + 0.126, + 0.352, + 0.364, + 0.363 + ], + "angle": 0, + "content": "Update \\(\\phi\\) to minimize the target error bound:" + }, + { + "type": "equation", + "bbox": [ + 0.14, + 0.364, + 0.227, + 0.374 + ], + "angle": 0, + "content": "\\[\n\\phi \\leftarrow \\phi - \\eta \\Delta \\phi ,\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.14, + 0.374, + 0.426, + 0.394 + ], + "angle": 0, + "content": "\\[\n\\Delta \\phi = \\frac {\\partial (\\frac {1}{2} [ L _ {V} + \\frac {1}{\\nu} \\log \\sum_ {i = 1} ^ {N} \\exp (\\nu [ L _ {S} ^ {i} + L _ {H} ^ {i} - \\lambda L _ {D} ^ {i} ]) ])}{\\partial \\phi}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.106, + 0.394, + 0.149, + 0.403 + ], + "angle": 0, + "content": "end for" + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.433, + 0.208, + 0.449 + ], + "angle": 0, + "content": "4. Evaluation" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.459, + 0.485, + 0.641 + ], + "angle": 0, + "content": "We evaluated our proposal using two benchmarks, Office-Home and DomainNet. The trade-off parameter \\(\\lambda\\) is set to 0.01 during the training procedure according to [62, 63]. In addition, we empirically set the PU, scaling, and threshold coefficients \\(\\beta\\) to 0.15, \\(\\nu\\) to 0.1, and \\(\\tau\\) to 0.3 for all experiments. For the semi-supervised setting, we select the same few-shot labeled target data according to [41]. Regarding the open-set setting, we assign a distinct label space for each source domain as a subset of the target label space described below. We quantitatively compare our results against various baselines, including OSBP [40], PGL [34], ANNA [27], PUJE [63], MOSDANET [38], HyMOS [3], and MPU [58]." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.661, + 0.485, + 0.723 + ], + "angle": 0, + "content": "Evaluation Metrics for the proposed method and baselines are the widely used measures [34, 40], i.e., normalized accuracy for the known class only \\((\\mathrm{OS}^{*})\\) and harmonic mean \\(\\mathrm{HOS} = 2(\\mathrm{OS}^{*} \\times \\mathrm{UNK}) / (\\mathrm{OS}^{*} + \\mathrm{UNK})\\) [2, 27, 31, 54, 63]." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.743, + 0.484, + 0.85 + ], + "angle": 0, + "content": "Office-Home [52] is a widely-used domain adaptation benchmark, which consists of 15,500 images from 65 categories and four domains: Art, Clipart, Product, and RealWorld. We select the first 30 classes alphabetically as the known class and group the rest as the unknown. Each source domain contains 10 classes without overlap, leading to a large label shift scenario." 
+ }, + { + "type": "text", + "bbox": [ + 0.09, + 0.871, + 0.484, + 0.901 + ], + "angle": 0, + "content": "DomainNet [36] is a more challenging benchmark dataset for large-scale domain adaptation that has 345 classes and" + }, + { + "type": "table", + "bbox": [ + 0.516, + 0.088, + 0.906, + 0.185 + ], + "angle": 0, + "content": "
METHOD | TYPE | →Clipart (1-shot/3-shot) | →Product (1-shot/3-shot) | →RealWorld (1-shot/3-shot) | →Art (1-shot/3-shot) | Avg. (1-shot/3-shot)
OSBP | Source-Combine | 60.4/62.6 | 70.1/72.3 | 69.7/68.3 | 60.7/64.3 | 65.2/66.9
PGL | Source-Combine | 59.0/61.8 | 67.7/69.9 | 66.7/68.9 | 61.2/64.0 | 63.7/66.2
ANNA | Source-Combine | 65.8/67.7 | 71.0/73.4 | 70.3/70.3 | 61.0/63.7 | 67.0/68.8
PUJE | Source-Combine | 65.8/71.7 | 73.3/74.2 | 75.0/78.1 | 65.5/67.3 | 69.9/72.8
MOSDANET | Multi-Source | 61.5/65.9 | 70.0/73.8 | 71.4/69.6 | 61.6/63.6 | 66.1/68.2
HyMOS | Multi-Source | 56.6/64.4 | 64.4/67.3 | 66.2/68.4 | 59.0/62.2 | 61.6/65.6
UM | Multi-Source | 68.0/72.1 | 79.0/83.0 | 79.4/80.8 | 67.7/70.3 | 73.5/76.6
MPU* | Source-Free | 46.3/54.4 | 59.7/66.3 | 57.8/60.2 | 58.3/62.5 | 55.5/60.9
OSBP* | Source-Free | 44.5/56.5 | 55.6/65.1 | 59.3/64.3 | 55.6/59.9 | 53.8/61.5
PUJE* | Source-Free | 52.2/58.4 | 65.0/70.3 | 66.2/70.0 | 58.7/62.7 | 60.5/65.4
UM+AFG | Source-Free | 61.1/66.0 | 77.0/80.1 | 72.0/78.8 | 60.3/64.6 | 67.6/72.4
" + }, + { + "type": "table_caption", + "bbox": [ + 0.513, + 0.195, + 0.906, + 0.223 + ], + "angle": 0, + "content": "Table 1. HOS (%) of ResNet-50 model fine-tuned on Office-Home dataset under 1-shot/3-shot setting" + }, + { + "type": "table", + "bbox": [ + 0.516, + 0.236, + 0.906, + 0.332 + ], + "angle": 0, + "content": "
METHOD | TYPE | →Clipart (1-shot/3-shot) | →Painting (1-shot/3-shot) | →Real (1-shot/3-shot) | →Sketch (1-shot/3-shot) | Avg. (1-shot/3-shot)
OSBP | Source-Combine | 54.2/57.4 | 49.8/53.1 | 62.6/64.0 | 49.5/50.1 | 54.0/56.2
PGL | Source-Combine | 59.8/62.0 | 59.4/61.4 | 67.4/69.4 | 59.7/61.2 | 61.6/63.5
ANNA | Source-Combine | 55.6/61.5 | 53.6/54.3 | 67.5/66.5 | 57.9/58.1 | 58.7/60.1
PUJE | Source-Combine | 64.4/66.2 | 59.8/61.7 | 67.7/69.3 | 61.2/64.2 | 63.3/65.4
MOSDANET | Multi-Source | 56.4/55.3 | 55.6/58.2 | 68.5/69.8 | 54.1/54.9 | 58.7/59.6
HyMOS | Multi-Source | 53.0/54.4 | 54.1/56.0 | 65.1/67.4 | 56.3/57.1 | 57.1/58.7
UM | Multi-Source | 70.3/71.5 | 66.0/68.8 | 75.1/78.5 | 66.1/69.5 | 69.4/72.1
MPU* | Source-Free | 54.5/57.6 | 55.0/60.1 | 62.4/66.4 | 48.4/52.9 | 55.1/59.3
MOSDANET* | Source-Free | 58.1/60.5 | 54.3/59.3 | 63.2/62.5 | 49.4/54.3 | 56.3/59.2
PUJE* | Source-Free | 60.5/62.2 | 55.3/61.4 | 64.0/67.8 | 53.1/56.2 | 58.2/61.9
UM+AFG | Source-Free | 64.8/69.7 | 60.0/64.2 | 67.6/73.4 | 60.0/64.8 | 63.1/68.0
" + }, + { + "type": "table_caption", + "bbox": [ + 0.513, + 0.342, + 0.906, + 0.371 + ], + "angle": 0, + "content": "Table 2. HOS (%) of ResNet-50 model fine-tuned on DomainNet dataset under 1-shot/3-shot setting" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.398, + 0.906, + 0.488 + ], + "angle": 0, + "content": "6 domains. Following the protocol established in [41], we pick 4 domains (Real, Clipart, Painting, Sketch) with 126 classes for the experiments. We select the first 60 classes alphabetically as the known class and group the rest as the unknown. Similarly, each source domain contains 20 classes without any overlap." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.49, + 0.909, + 0.806 + ], + "angle": 0, + "content": "As reported in Tabs. 1 and 2, under the same setting given 1-shot/3-shot labeled target data (1/3 samples per class), we observe that our method UM consistently outperforms the state-of-the-art results, improving HOS by \\(3.6\\% / 3.8\\%\\) and \\(6.1\\% / 6.7\\%\\) on the benchmark datasets of Office-Home and DomainNet respectively, when source data is available. Furthermore, \\(\\mathrm{UM + AFG}\\) enhances HOS by \\(7.1\\% / 7.0\\%\\) in Office-Home and \\(4.9\\% / 6.1\\%\\) in DomainNet under the challenging source-free setting. Note that our proposed approach provides significant performance gains for the more complex datasets like DomainNet, which requires knowledge transfer across different modalities, regardless of covariate or label shift. We group all source domains with labeled target data as a single domain for the baselines that require the source-combine strategy. For the source-free cases, we introduce a few confident target data labeled by pre-trained models as pseudo-source data to enable several algorithms denoted by \\(*\\) under this problem setting since none of the existing methods can directly address the open-set task under the source-free condition with a huge label shift across source domains." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.817, + 0.765, + 0.833 + ], + "angle": 0, + "content": "4.1. Feature Space Visualization" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.84, + 0.908, + 0.901 + ], + "angle": 0, + "content": "To intuitively visualize the effectiveness of different approaches, we extracted features from the baseline models and our proposed model on the \\(\\rightarrow\\) Art task (Office-Home) and \\(\\rightarrow\\) Real task (DomainNet) with the ResNet-50 backbone" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "10147" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.092, + 0.089, + 0.483, + 0.174 + ], + "angle": 0, + "content": "
METHOD | TYPE | Office-Home →RealWorld (UNK/OS*/HOS) | DomainNet →Clipart (UNK/OS*/HOS)
DEFAULT | Source-Free | 73.7/70.4/72.0 | 72.6/58.6/64.8
w/o Lsim | Source-Free | 70.9/70.6/70.7 | 69.3/59.1/63.8
w/o Lcyc | Source-Free | 74.0/68.5/71.1 | 72.8/56.9/63.9
w/o PUR | Source-Free | 39.6/87.0/54.4 | 47.3/72.9/57.4
" + }, + { + "type": "table_caption", + "bbox": [ + 0.09, + 0.185, + 0.485, + 0.213 + ], + "angle": 0, + "content": "Table 3. Ablation study verified with ResNet-50 model on OfficeHome & DomainNet dataset" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.24, + 0.486, + 0.362 + ], + "angle": 0, + "content": "[16]. The feature distributions were processed with t-SNE [48] afterward. As shown in Fig. 4, compared with baselines, our method UM achieves a better alignment between source and target distributions, especially when the domain shift is large. Benefiting from our joint error-based adversarial alignment mechanism, the extracted feature space, including the cluster of unknown target data (green), has a more discriminative class-wise decision boundary." + }, + { + "type": "title", + "bbox": [ + 0.09, + 0.372, + 0.245, + 0.387 + ], + "angle": 0, + "content": "4.2. Ablation Study" + }, + { + "type": "text", + "bbox": [ + 0.093, + 0.395, + 0.485, + 0.637 + ], + "angle": 0, + "content": "Self-supervised learning methods have shown that, by relying only on unlabeled data, it is still possible to obtain classification performance close to those of the supervised approaches [5, 17, 18]. In the source-free setting, we adopt the typical SimCLR [5] to help group the feature of unknown target data into a single cluster. As expected in Tab. 3, \\( L_{sim}^2 \\) can slightly improve the accuracy of the unknown class for a higher HOS. Furthermore, Progressive Unknown Rejection (PUR), a denoising of generated labeled features, is crucial to detecting unknowns in source-free cases. As also illustrated in Fig. 7d, generally, a larger threshold \\( \\tau \\) will lead to a higher UNK at the cost of low OS*, characterized as the trade-off between recognizing known and unknown data for open-set tasks. In addition, we verify the effectiveness of cycle-consistency regularization \\( L_{cyc} \\) and find it helps maintain the normalized accuracy of the known class." + }, + { + "type": "title", + "bbox": [ + 0.09, + 0.647, + 0.418, + 0.664 + ], + "angle": 0, + "content": "4.3. Robustness against Varying Openness" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.671, + 0.486, + 0.838 + ], + "angle": 0, + "content": "To verify the robustness of the proposed method, we conducted experiments on the \\(\\rightarrow\\) Painting task (DomainNet) with the openness varying in \\(\\{0.25, 0.5, 0.75\\}\\). Here, openness is defined by the ratio of unknown samples in the entire target data. PGL approach heuristically sets the hyperparameter according to the true unknown ratio to control the openness, while PUJE and UM automatically estimate the weight \\(\\alpha\\) during the training procedure. From Fig. 5a, we observe that our proposal consistently outperforms baselines by a large margin, which confirms its robustness to the change in openness." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.847, + 0.271, + 0.863 + ], + "angle": 0, + "content": "4.4. Stabel Coverage" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.871, + 0.485, + 0.903 + ], + "angle": 0, + "content": "In Fig. 5b, we illustrate the recognition performance of UM over training steps on the \\(\\rightarrow\\)Art task of the Office-Home" + }, + { + "type": "table", + "bbox": [ + 0.515, + 0.089, + 0.906, + 0.126 + ], + "angle": 0, + "content": "
METHOD | TYPE | BACKBONE | Office-Home →Art (UNK/OS*/HOS) | Office-Home →Product (UNK/OS*/HOS) | DomainNet →Painting (UNK/OS*/HOS) | DomainNet →Real (UNK/OS*/HOS) | Avg. (UNK/OS*/HOS)
UM | Multi-Source | ResNet-50 | 72.8/63.3/67.7 | 78.7/79.3/79.0 | 77.9/57.3/66.0 | 74.9/75.2/75.1 | 76.1/68.8/72.0
UM+AFG | Source-Free | ResNet-50 | 66.1/55.5/60.3 | 83.3/71.6/77.0 | 63.9/56.5/60.0 | 76.6/60.6/67.6 | 72.5/61.1/66.2
UM+AFG | Source-Free | ViT-B/16 | 77.7/59.8/67.5 | 87.6/80.9/84.1 | 68.5/57.4/62.5 | 82.8/73.2/77.7 | 79.2/67.8/73.0
" + }, + { + "type": "table_caption", + "bbox": [ + 0.513, + 0.136, + 0.907, + 0.165 + ], + "angle": 0, + "content": "Table 4. Accuracy of ViT-B/16 model fine-tuned on Office-Home & DomainNet dataset under 1-shot setting" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.192, + 0.909, + 0.299 + ], + "angle": 0, + "content": "dataset. OS* experiences a downward while the UNK keeps improving, which characterizes a trade-off between the accuracy of knowns and the accuracy of unknowns. We further observe that some previous works [27, 34] do not converge at the optimum. In contrast, our method always reaches a reliable convergence without suffering from a severe performance drop in recognizing known classes." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.309, + 0.831, + 0.325 + ], + "angle": 0, + "content": "4.5. Flexibility in Backbone Architecture" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.333, + 0.909, + 0.53 + ], + "angle": 0, + "content": "As presented in Sec. 3.3, AFG allows the target model to use a different backbone architecture from the pre-trained source models. Therefore, unlike those model-centric methods whose performance is deeply limited by source model architecture, our method can be effectively applied to real-world problems where each source model is trained using various networks by leveraging the power of advanced backbones like ViT [9] for the target model. Tab. 4 reveals a clear advantage of AFG when changing the target backbone to ViT-B/16 as the HOS scores under the source-free condition approach and even outperform the source data results. The same ResNet-50 backbone is used for pre-trained source models across different experiments." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.54, + 0.9, + 0.557 + ], + "angle": 0, + "content": "4.6. Advantage in Increasing Labeled Target Data" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.564, + 0.909, + 0.746 + ], + "angle": 0, + "content": "Sec. 4.6 shows the behavior of different methods when the number of labeled examples in the target domain increases from 1 to 10 per class on DomainNet using ResNet50 backbone. Cluster-based methods like OSBP, MOSDANET, and HyMOS will finally be caught up by a simple multi-class PU learning (MPU) when the sample size increases. On the contrary, our method consistently outperforms the most competitive baseline PUJE for various sizes of labeled target data. Furthermore, along with the growth in the size of \\(\\hat{V}\\), the HOS score achieved by UM+AFG in the source-free setting gradually approaches, even surpasses those methods using source data." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.756, + 0.908, + 0.788 + ], + "angle": 0, + "content": "4.7. Sensitivity to PU, Scaling, and Threshold Coefficients" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.795, + 0.909, + 0.903 + ], + "angle": 0, + "content": "We show the sensitivity of our approach to varying PU coefficient \\(\\beta\\), scaling factor \\(\\nu\\), and threshold \\(\\tau\\) in Sec. 4.7. 
We can draw two observations: the OS* score is relatively stable, and the recognition of unknowns becomes more reliable for a larger coefficient \(\beta\); generally, a larger \(\nu\) focuses on the source domain that contributes the most error while ignoring the others, whereas a smaller \(\nu\) equalizes" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.52, + 0.957 + ], + "angle": 0, + "content": "10148" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.179, + 0.09, + 0.332, + 0.182 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.229, + 0.187, + 0.283, + 0.199 + ], + "angle": 0, + "content": "(a) OSBP" + }, + { + "type": "image", + "bbox": [ + 0.342, + 0.091, + 0.493, + 0.182 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.385, + 0.187, + 0.451, + 0.2 + ], + "angle": 0, + "content": "(b) HyMOS" + }, + { + "type": "image", + "bbox": [ + 0.504, + 0.091, + 0.631, + 0.182 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.555, + 0.187, + 0.605, + 0.199 + ], + "angle": 0, + "content": "(c) PUJE" + }, + { + "type": "image", + "bbox": [ + 0.665, + 0.092, + 0.817, + 0.182 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.721, + 0.187, + 0.762, + 0.199 + ], + "angle": 0, + "content": "(d) UM" + }, + { + "type": "image", + "bbox": [ + 0.179, + 0.203, + 0.332, + 0.293 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.21, + 0.299, + 0.303, + 0.31 + ], + "angle": 0, + "content": "(e) MOSDANET" + }, + { + "type": "image", + "bbox": [ + 0.342, + 0.203, + 0.493, + 0.293 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.397, + 0.299, + 0.441, + 0.311 + ], + "angle": 0, + "content": "(f) PGL" + }, + { + "type": "image", + "bbox": [ + 0.504, + 0.203, + 0.655, + 0.293 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.55, + 0.299, + 0.609, + 0.311 + ], + "angle": 0, + "content": "(g) ANNA" + }, + { + "type": "image", + "bbox": [ + 0.665, + 0.203, + 0.817, + 0.293 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.721, + 0.299, + 0.762, + 0.311 + ], + "angle": 0, + "content": "(h) UM" + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.323, + 0.904, + 0.338 + ], + "angle": 0, + "content": "Figure 4. T-SNE visualization of feature distributions in (a)-(d) \(\rightarrow\) Art task (Office-Home dataset); (e)-(h) \(\rightarrow\) Real task (DomainNet dataset)." + }, + { + "type": "image", + "bbox": [ + 0.119, + 0.366, + 0.283, + 0.464 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.129, + 0.471, + 0.272, + 0.483 + ], + "angle": 0, + "content": "(a) robust against openness" + }, + { + "type": "image", + "bbox": [ + 0.292, + 0.366, + 0.455, + 0.464 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.315, + 0.471, + 0.434, + 0.483 + ], + "angle": 0, + "content": "(b) stable convergence" + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.495, + 0.484, + 0.551 + ], + "angle": 0, + "content": "Figure 5. (a) Performance comparisons w.r.t.
varying openness on the \(\rightarrow\) Painting task from the DomainNet dataset; (b) Convergence analysis on the \(\rightarrow\) Art task from the Office-Home dataset compared to other baselines, with confidence intervals." + }, + { + "type": "image", + "bbox": [ + 0.119, + 0.592, + 0.283, + 0.689 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.152, + 0.697, + 0.25, + 0.709 + ], + "angle": 0, + "content": "(a) \(\rightarrow\) Clipart task" + }, + { + "type": "image", + "bbox": [ + 0.293, + 0.592, + 0.456, + 0.689 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.326, + 0.697, + 0.424, + 0.708 + ], + "angle": 0, + "content": "(b) \(\rightarrow\) Sketch task" + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.721, + 0.484, + 0.777 + ], + "angle": 0, + "content": "Figure 6. Accuracy vs. the number of labeled target samples on DomainNet using the ResNet-50 backbone. Our method maintains a high level of performance across different sample sizes of the labeled target data." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.826, + 0.484, + 0.901 + ], + "angle": 0, + "content": "the importance of each domain, which can harm the performance when a remarkable label shift exists among source domains, as implied by Fig. 7c (the imbalanced setting refers to a case where one source contains 20 classes while the other two sources contain 5 classes each)." + }, + { + "type": "image", + "bbox": [ + 0.541, + 0.366, + 0.706, + 0.464 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.575, + 0.471, + 0.672, + 0.483 + ], + "angle": 0, + "content": "(a) sensitivity to \(\beta\)" + }, + { + "type": "image", + "bbox": [ + 0.715, + 0.366, + 0.88, + 0.464 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.748, + 0.471, + 0.847, + 0.483 + ], + "angle": 0, + "content": "(b) sensitivity to \(\beta\)" + }, + { + "type": "image", + "bbox": [ + 0.541, + 0.489, + 0.706, + 0.585 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.575, + 0.593, + 0.672, + 0.604 + ], + "angle": 0, + "content": "(c) sensitivity to \(\nu\)" + }, + { + "type": "image", + "bbox": [ + 0.715, + 0.489, + 0.879, + 0.584 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.749, + 0.593, + 0.846, + 0.604 + ], + "angle": 0, + "content": "(d) sensitivity to \(\tau\)" + }, + { + "type": "image_caption", + "bbox": [ + 0.513, + 0.617, + 0.907, + 0.644 + ], + "angle": 0, + "content": "Figure 7. (a)-(d) Sensitivity to the varying coefficients \(\beta, \nu, \tau\), verified on the Office-Home dataset." + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.676, + 0.634, + 0.691 + ], + "angle": 0, + "content": "5. Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.705, + 0.907, + 0.901 + ], + "angle": 0, + "content": "In this work, we addressed the semi-supervised open-set domain shift problem in multi-source cases with inconsistent label spaces by introducing a novel learning theory based on the joint error and multi-class PU learning that reduces the open-set risk, where the generalization error is bounded via an extension of VC learning theory based on the uniform covering number. We generalize our method to source-free scenarios by attention-based feature generation, which is computationally efficient while maintaining reliable performance. We conduct extensive experiments on multiple domain adaptation benchmarks.
Compared with recent baseline methods, our model achieves the best performance with or without access to source data, demonstrating the efficacy of the proposed approach." + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "10149" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.092, + 0.091, + 0.259, + 0.108 + ], + "angle": 0, + "content": "Acknowledgements" + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.116, + 0.486, + 0.173 + ], + "angle": 0, + "content": "This research is partially supported by JST Moonshot R&D Grant Number JPMJPS2011, CREST Grant Number JPMJCR2015 and the Basic Research Grant (Super AI) of the Institute for AI and Beyond of the University of Tokyo." + }, + { + "type": "title", + "bbox": [ + 0.092, + 0.188, + 0.188, + 0.204 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.214, + 0.484, + 0.268 + ], + "angle": 0, + "content": "[1] Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Vaughan. A theory of learning from different domains. Machine Learning, 79:151-175, 2010. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.1, + 0.271, + 0.484, + 0.341 + ], + "angle": 0, + "content": "[2] Silvia Bucci, Mohammad Reza Loghmani, and Tatiana Tommasi. On the effectiveness of image rotation for open set domain adaptation. In 16th European Conference on Computer Vision, pages 422-438. Springer International Publishing, 2020. 2, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.343, + 0.484, + 0.412 + ], + "angle": 0, + "content": "[3] Silvia Bucci, Francesco Cappio Borlino, Barbara Caputo, and Tatiana Tommasi. Distance-based hyperspherical classification for multi-source open-set domain adaptation. In IEEE/CVF Winter Conference on Applications of Computer Vision, pages 1030-1039. IEEE, 2022. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.414, + 0.484, + 0.455 + ], + "angle": 0, + "content": "[4] Pau Panareda Busto and Juergen Gall. Open set domain adaptation. In IEEE International Conference on Computer Vision, pages 754-763, 2017. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.458, + 0.485, + 0.526 + ], + "angle": 0, + "content": "[5] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In Proceedings of the 37th International Conference on Machine Learning. JMLR.org, 2020. 5, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.529, + 0.484, + 0.571 + ], + "angle": 0, + "content": "[6] Shivang Chopra, Suraj Kothawade, Houda Aynaou, and Aman Chadha. Source-free domain adaptation with diffusion-guided source data generation. CoRR, abs/2402.04929, 2024. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.573, + 0.484, + 0.642 + ], + "angle": 0, + "content": "[7] Ekin D. Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V. Le. Randaugment: Practical automated data augmentation with a reduced search space. In IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 3008-3017, 2020. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.645, + 0.484, + 0.7 + ], + "angle": 0, + "content": "[8] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. IEEE Conference on Computer Vision and Pattern Recognition, pages 248-255, 2009.
5" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.702, + 0.484, + 0.785 + ], + "angle": 0, + "content": "[9] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale, 2021. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.788, + 0.484, + 0.843 + ], + "angle": 0, + "content": "[10] Zhen Fang, Jie Lu, Feng Liu, Junyu Xuan, and Guangquan Zhang. Open set domain adaptation: Theoretical bound and algorithm. IEEE Transactions on Neural Networks and Learning Systems, 32:4309-4322, 2020. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.845, + 0.484, + 0.901 + ], + "angle": 0, + "content": "[11] Yaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. In Proceedings of the 32nd International Conference on Machine Learning, pages 1180-1189. JMLR.org, 2015. 1" + }, + { + "type": "list", + "bbox": [ + 0.094, + 0.214, + 0.485, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.093, + 0.908, + 0.161 + ], + "angle": 0, + "content": "[12] Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. Journal of Machine Learning Research, 17 (1):2096-2030, 2016. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.164, + 0.908, + 0.232 + ], + "angle": 0, + "content": "[13] Saurabh Garg, Sivaraman Balakrishnan, and Zachary C. Lipton. Domain adaptation under open set label shift. In Proceedings of the 36th International Conference on Neural Information Processing Systems. Curran Associates Inc., 2022. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.234, + 0.908, + 0.303 + ], + "angle": 0, + "content": "[14] Ryan Gomes, Andreas Krause, and Pietro Perona. Discriminative clustering by regularized information maximization. In Proceedings of the 23rd International Conference on Neural Information Processing Systems, pages 775-783. Curran Associates Inc., 2010. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.305, + 0.907, + 0.36 + ], + "angle": 0, + "content": "[15] Yves Grandvalet and Yoshua Bengio. Semi-supervised learning by entropy minimization. In Proceedings of the 17th International Conference on Neural Information Processing Systems, pages 529-536. MIT Press, 2004. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.362, + 0.907, + 0.417 + ], + "angle": 0, + "content": "[16] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2015. 5, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.419, + 0.907, + 0.475 + ], + "angle": 0, + "content": "[17] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9726-9735, 2020. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.476, + 0.906, + 0.546 + ], + "angle": 0, + "content": "[18] Olivier J. Henaff, Aravind Srinivas, Jeffrey De Fauw, Ali Razavi, Carl Doersch, S. M. Ali Eslami, and Aaron Van Den Oord. Data-efficient image recognition with contrastive predictive coding. 
In Proceedings of the 37th International Conference on Machine Learning. JMLR.org, 2020. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.547, + 0.907, + 0.616 + ], + "angle": 0, + "content": "[19] Judy Hoffman, Mehryar Mohri, and Ningshan Zhang. Algorithms and theory for multiple-source adaptation. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, pages 8256-8266. Curran Associates Inc., 2018. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.617, + 0.908, + 0.699 + ], + "angle": 0, + "content": "[20] JoonHo Jang, Byeonghu Na, DongHyeok Shin, Mingi Ji, Kyungwoo Song, and Il-Chul Moon. Unknown-aware domain adversarial learning for open-set domain adaptation. In Proceedings of the 36th International Conference on Neural Information Processing Systems. Curran Associates Inc., 2022. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.702, + 0.907, + 0.744 + ], + "angle": 0, + "content": "[21] Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. In 2nd International Conference on Learning Representations, 2014. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.746, + 0.906, + 0.787 + ], + "angle": 0, + "content": "[22] Samuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning. In 5th International Conference on Learning Representations, 2017. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.788, + 0.907, + 0.845 + ], + "angle": 0, + "content": "[23] Jichang Li, Guanbin Li, Yemin Shi, and Yizhou Yu. Cross-domain adaptive clustering for semi-supervised domain adaptation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2505-2514, 2021. 1, 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.846, + 0.906, + 0.901 + ], + "angle": 0, + "content": "[24] Jingjing Li, Zhiqi Yu, Zhekai Du, Lei Zhu, and Heng Tao Shen. A comprehensive survey on source-free domain adaptation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 46(8):5743-5762, 2024. 4" + }, + { + "type": "list", + "bbox": [ + 0.094, + 0.093, + 0.908, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.52, + 0.957 + ], + "angle": 0, + "content": "10150" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.092, + 0.486, + 0.134 + ], + "angle": 0, + "content": "[25] Keqiuyin Li, Jie Lu, Hua Zuo, and Guangquan Zhang. Multisource domain adaptation handling inaccurate label spaces. Neurocomputing, 594:127824, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.136, + 0.483, + 0.192 + ], + "angle": 0, + "content": "[26] Rui Li, Qianfen Jiao, Wenming Cao, Hau-San Wong, and Si Wu. Model adaptation: Unsupervised domain adaptation without source data. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9638-9647, 2020. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.192, + 0.483, + 0.248 + ], + "angle": 0, + "content": "[27] Wuyang Li, Jie Liu, Bo Han, and Yixuan Yuan. Adjustment and alignment for unbiased open set domain adaptation. In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 24110-24119, 2023. 2, 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.25, + 0.483, + 0.318 + ], + "angle": 0, + "content": "[28] Jian Liang, Dapeng Hu, and Jiashi Feng. Do we really need to access the source data? source hypothesis transfer for unsupervised domain adaptation. In Proceedings of the 37th International Conference on Machine Learning. 
JMLR.org, 2020. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.321, + 0.483, + 0.389 + ], + "angle": 0, + "content": "[29] Jian Liang, Dapeng Hu, Yunbo Wang, Ran He, and Jiashi Feng. Source data-absent unsupervised domain adaptation through hypothesis transfer and labeling transfer. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(11): 8602-8617, 2022. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.391, + 0.483, + 0.434 + ], + "angle": 0, + "content": "[30] Hanxiao Liu, Karen Simonyan, and Yiming Yang. Darts: Differentiable architecture search. In 7th International Conference on Learning Representations, 2019. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.435, + 0.483, + 0.49 + ], + "angle": 0, + "content": "[31] Mohammad Reza Loghmania, Markus Vinczea, and Tatiana Tommasi. Positive-unlabeled learning for open set domain adaptation. Pattern Recognition Letters, 136:198-204, 2020. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.491, + 0.483, + 0.559 + ], + "angle": 0, + "content": "[32] Mingsheng Long, Yue Cao, Jianmin Wang, and Michael I. Jordan. Learning transferable features with deep adaptation networks. In Proceedings of the 32nd International Conference on Machine Learning, pages 97-105. JMLR.org, 2015. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.562, + 0.483, + 0.631 + ], + "angle": 0, + "content": "[33] Mingsheng Long, Zhangjie Cao, Jianmin Wang, and Michael I Jordan. Conditional adversarial domain adaptation. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, pages 1647-1657. Curran Associates Inc., 2018. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.633, + 0.483, + 0.702 + ], + "angle": 0, + "content": "[34] Yadan Luo, Zijian Wang, Zi Huang, and Mahsa Baktashmotlagh. Progressive graph learning for open-set domain adaptation. In Proceedings of the 37th International Conference on Machine Learning, pages 6468-6478. PMLR, 2020. 2, 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.704, + 0.483, + 0.773 + ], + "angle": 0, + "content": "[35] Yadan Luo, Zijian Wang, Zhuoxiao Chen, Zi Huang, and Mahsa Baktashmotlagh. Source-free progressive graph learning for open-set domain adaptation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(9):11240-11255, 2023. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.775, + 0.483, + 0.831 + ], + "angle": 0, + "content": "[36] Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, and Bo Wang. Moment matching for multi-source domain adaptation. In IEEE/CVF International Conference on Computer Vision, pages 1406-1415, 2019. 1, 2, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.832, + 0.483, + 0.9 + ], + "angle": 0, + "content": "[37] Md Mahmudur Rahman, Rameswar Panda, and Mohammad Arif Ul Alam. Semi-supervised domain adaptation with autoencoder via simultaneous learning. In IEEE/CVF Winter Conference on Applications of Computer Vision, pages 402-411, 2023. 1" + }, + { + "type": "list", + "bbox": [ + 0.093, + 0.092, + 0.486, + 0.9 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.092, + 0.908, + 0.174 + ], + "angle": 0, + "content": "[38] Sayan Rakshit, Dipesh Tamboli, Pragati Shuddhodhan Meshram, Biplab Banerjee, Gemma Roig, and Subhasis Chaudhuri. Multi-source open-set deep adversarial domain adaptation. In 16th European Conference on Computer Vision, pages 735-750. Springer International Publishing, 2020. 
6" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.176, + 0.908, + 0.232 + ], + "angle": 0, + "content": "[39] Kuniaki Saito, Kohei Watanabe, Yoshitaka Ushiku, and Tatsuya Harada. Maximum classifier discrepancy for unsupervised domain adaptation. IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3723-3732, 2017. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.233, + 0.908, + 0.288 + ], + "angle": 0, + "content": "[40] Kuniaki Saito, Shohei Yamamoto, Yoshitaka Ushiku, and Tatsuya Harada. Open set domain adaptation by backpropagation. In 15th European Conference on Computer Vision, pages 156-171. Springer International Publishing, 2018. 2, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.289, + 0.907, + 0.343 + ], + "angle": 0, + "content": "[41] Kuniaki Saito, Donghyun Kim, Stan Sclaroff, Trevor Darrell, and Kate Saenko. Semi-supervised domain adaptation via minimax entropy. In IEEE/CVF International Conference on Computer Vision, pages 8049-8057, 2019. 1, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.344, + 0.907, + 0.413 + ], + "angle": 0, + "content": "[42] Mehdi Sajjadi, Mehran Javanmardi, and Tolga Tasdizen. Regularization with stochastic transformations and perturbations for deep semi-supervised learning. In Proceedings of the 30th International Conference on Neural Information Processing Systems, pages 1171-1179. Curran Associates Inc., 2016. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.414, + 0.907, + 0.468 + ], + "angle": 0, + "content": "[43] Ankit Singh. Clda: contrastive learning for semi-supervised domain adaptation. In Proceedings of the 35th International Conference on Neural Information Processing Systems. Curran Associates Inc., 2021. 1, 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.469, + 0.907, + 0.551 + ], + "angle": 0, + "content": "[44] Kihyuk Sohn, David Berthelot, Chun-Liang Li, Zizhao Zhang, Nicholas Carlini, Ekin D. Cubuk, Alex Kurakin, Han Zhang, and Colin Raffel. Fixmatch: simplifying semi-supervised learning with consistency and confidence. In Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., 2020. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.552, + 0.907, + 0.593 + ], + "angle": 0, + "content": "[45] Shiliang Sun, Honglei Shi, and Yuanbin Wu. A survey of multi-source domain adaptation. Information Fusion, 24: 84-92, 2015. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.595, + 0.907, + 0.65 + ], + "angle": 0, + "content": "[46] Hui Tang, Ke Chen, and Kui Jia. Unsupervised domain adaptation via structurally regularized deep clustering. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8722-8732, 2020. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.65, + 0.907, + 0.705 + ], + "angle": 0, + "content": "[47] Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. Adversarial discriminative domain adaptation. IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2962-2971, 2017. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.706, + 0.907, + 0.747 + ], + "angle": 0, + "content": "[48] Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9: 2579-2605, 2008. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.748, + 0.907, + 0.775 + ], + "angle": 0, + "content": "[49] Vladimir N. Vapnik. The nature of statistical learning theory. 
Springer-Verlag New York, Inc., 1995. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.776, + 0.907, + 0.83 + ], + "angle": 0, + "content": "[50] V. N. Vapnik and A. Ya. Chervonenkis. On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities, pages 11-30. Springer International Publishing, 2015. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.832, + 0.907, + 0.9 + ], + "angle": 0, + "content": "[51] Naveen Venkat, Jogendra Nath Kundu, Durgesh Kumar Singh, Ambareesh Revanur, and R. Venkatesh Babu. Your classifier can secretly suffice multi-source domain adaptation. In Proceedings of the 34th International Conference on Neural Information Processing Systems, 2020. 1" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.092, + 0.908, + 0.9 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.518, + 0.957 + ], + "angle": 0, + "content": "10151" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.092, + 0.486, + 0.16 + ], + "angle": 0, + "content": "[52] Hemanth Venkateswara, Jose Eusebio, Shayok Chakraborty, and Sethuraman Panchanathan. Deep hashing network for unsupervised domain adaptation. In IEEE Conference on Computer Vision and Pattern Recognition, pages 5385-5394, 2017. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.161, + 0.485, + 0.232 + ], + "angle": 0, + "content": "[53] Hang Wang, Minghao Xu, Bingbing Ni, and Wenjun Zhang. Learning to combine: Knowledge aggregation for multi-source domain adaptation. In 16th European Conference on Computer Vision, pages 727-744. Springer-Verlag, 2020. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.232, + 0.484, + 0.287 + ], + "angle": 0, + "content": "[54] Qian Wang, Fanlin Meng, and Toby P. Breckon. Progressively select and reject pseudolabeled samples for open-set domain adaptation. IEEE Transactions on Artificial Intelligence, 5(9): 4403-4414, 2024. 2, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.288, + 0.484, + 0.355 + ], + "angle": 0, + "content": "[55] Zixin Wang, Yadan Luo, Peng-Fei Zhang, Sen Wang, and Zi Huang. Discovering domain disentanglement for generalized multi-source domain adaptation. In IEEE International Conference on Multimedia and Expo, pages 1–6. IEEE, 2022. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.357, + 0.484, + 0.413 + ], + "angle": 0, + "content": "[56] Jun Wu and Jingrui He. Domain adaptation with dynamic open-set targets. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 2039-2049. Association for Computing Machinery, 2022. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.413, + 0.484, + 0.483 + ], + "angle": 0, + "content": "[57] R. Xu, Z. Chen, W. Zuo, J. Yan, and L. Lin. Deep cocktail network: Multi-source unsupervised domain adaptation with category shift. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3964-3973. IEEE Computer Society, 2018. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.484, + 0.484, + 0.538 + ], + "angle": 0, + "content": "[58] Yixing Xu, Chang Xu, Chao Xu, and Dacheng Tao. Multi-positive and unlabeled learning. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, pages 3182-3188. AAAI Press, 2017. 2, 3, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.539, + 0.484, + 0.609 + ], + "angle": 0, + "content": "[59] Luyu Yang, Yan Wang, Mingfei Gao, Abhinav Shrivastava, Kilian Q. 
Weinberger, Wei-Lun Chao, and Ser-Nam Lim. Deep co-training with task decomposition for semi-supervised domain adaptation. In IEEE/CVF International Conference on Computer Vision, pages 8886-8896, 2021. 1, 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.609, + 0.484, + 0.677 + ], + "angle": 0, + "content": "[60] S. Yang, Y. Wang, J. van de Weijer, L. Herranz, S. Jui, and J. Yang. Trust your good friends: Source-free domain adaptation by reciprocal neighborhood clustering. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(12):15883-15895, 2023. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.678, + 0.484, + 0.734 + ], + "angle": 0, + "content": "[61] Jeongbeen Yoon, Dahiyun Kang, and Minsu Cho. Semi-supervised domain adaptation via sample-to-sample self-distillation. In IEEE/CVF Winter Conference on Applications of Computer Vision, pages 1686-1695, 2022. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.735, + 0.484, + 0.776 + ], + "angle": 0, + "content": "[62] Dexuan Zhang, Thomas Westfechtel, and Tatsuya Harada. Unsupervised domain adaptation via minimized joint error. Transactions on Machine Learning Research, 2023. 1, 2, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.776, + 0.484, + 0.845 + ], + "angle": 0, + "content": "[63] Dexuan Zhang, Thomas Westfechtel, and Tatsuya Harada. Open-set domain adaptation via joint error based multi-class positive and unlabeled learning. In 18th European Conference on Computer Vision. Springer International Publishing, 2024. 2, 3, 5, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.846, + 0.484, + 0.901 + ], + "angle": 0, + "content": "[64] Yuchen Zhang, Tianle Liu, Mingsheng Long, and Michael Jordan. Bridging theory and algorithm for domain adaptation. In Proceedings of the 36th International Conference on Machine Learning, pages 7404-7413. PMLR, 2019. 1" + }, + { + "type": "list", + "bbox": [ + 0.093, + 0.092, + 0.486, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.516, + 0.092, + 0.908, + 0.175 + ], + "angle": 0, + "content": "[65] Han Zhao, Shanghang Zhang, Guanhang Wu, Joao P. Costeira, Jose M. F. Moura, and Geoffrey J. Gordon. Adversarial multiple source domain adaptation. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, pages 8568-8579. Curran Associates Inc., 2018. 1, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.516, + 0.178, + 0.908, + 0.233 + ], + "angle": 0, + "content": "[66] Han Zhao, Remi Tachet des Combes, Kun Zhang, and Geoffrey J. Gordon. On learning invariant representation for domain adaptation. In Proceedings of the 36th International Conference on Machine Learning, 2019. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.234, + 0.907, + 0.302 + ], + "angle": 0, + "content": "[67] Yongchun Zhu, Fuzhen Zhuang, and Deqing Wang. Aligning domain-specific distribution and classifier for cross-domain classification from multiple sources. In Proceedings of the 33rd AAAI Conference on Artificial Intelligence. AAAI Press, 2019. 
1" + }, + { + "type": "list", + "bbox": [ + 0.516, + 0.092, + 0.908, + 0.302 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.52, + 0.957 + ], + "angle": 0, + "content": "10152" + } + ] +] \ No newline at end of file diff --git a/2025/A Theory of Learning Unified Model via Knowledge Integration from Label Space Varying Domains/71226fdf-7aa9-4bda-88d3-c62f01e56520_origin.pdf b/2025/A Theory of Learning Unified Model via Knowledge Integration from Label Space Varying Domains/71226fdf-7aa9-4bda-88d3-c62f01e56520_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..a552722ce91e67dc92cd433ed9f7d69bdefadcec --- /dev/null +++ b/2025/A Theory of Learning Unified Model via Knowledge Integration from Label Space Varying Domains/71226fdf-7aa9-4bda-88d3-c62f01e56520_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9bc9da750ccea8e301e9fb4b2b983fb63554161223d2283bb7b5a3005abb7509 +size 3332036 diff --git a/2025/A Theory of Learning Unified Model via Knowledge Integration from Label Space Varying Domains/full.md b/2025/A Theory of Learning Unified Model via Knowledge Integration from Label Space Varying Domains/full.md new file mode 100644 index 0000000000000000000000000000000000000000..b910af92bc4b69720dec935c65b2d2f1c25db054 --- /dev/null +++ b/2025/A Theory of Learning Unified Model via Knowledge Integration from Label Space Varying Domains/full.md @@ -0,0 +1,430 @@ +# A Theory of Learning Unified Model via Knowledge Integration from Label Space Varying Domains + +Dexuan Zhang1 Thomas Westfechtel1 Tatsuya Harada1,2 +1The University of Tokyo, 2RIKEN + +{dexuan.zhang, thomas, harada}@mi.t.u-tokyo.ac.jp + +# Abstract + +Existing domain adaptation systems can hardly be applied to real-world problems with new classes presenting at deployment time, especially regarding source-free scenarios where multiple source domains do not share the label space despite being given a few labeled target data. To address this, we consider a challenging problem: multi-source semi-supervised open-set domain adaptation and propose a learning theory via joint error, effectively tackling strong domain shift. To generalize the algorithm into source-free cases, we introduce a computationally efficient and architecture-flexible attention-based feature generation module. Extensive experiments on various data sets demonstrate the significant improvement of our proposed algorithm over baselines. + +# 1. Introduction + +Generally, a supervised learning algorithm trained on a particular distribution of labeled samples (source domain) often fails to generalize when deployed on a new environment (target domain) in the presence of domain shift. In this regard, Domain adaptation (DA) [1] algorithms address the domain-shift problem by aligning the data distributions of the source and target domains through learning a domain-invariant feature space using statistical or adversarial learning approaches, which have made remarkable success. However, the problem setting still needs to be relaxed for real-world applications when we aim to integrate the knowledge learned from multiple source domains. Current DA methods can hardly cover the case for varying label space across domains, and the corresponding learning theory has yet to be proposed. 
Moreover, the solution is limited if this heterogeneous domain setting is extended to source-free situations where each source model may be trained with a different network architecture and label space. This work tackles a challenging multi-source semi-supervised open-set domain adaptation paradigm with varying label space, illustrated in Fig. 1.

In general, multi-source DA (MSDA) [45] and semi-supervised DA (SSDA) [41] are regarded as more practical than the single-source DA setup, considering that labeled data may come from various domains. More precisely, in these cases, the labeled samples can be differently distributed among themselves in addition to the usual domain shift between the source and the target domains. One naive approach to MSDA and SSDA is to group all labeled data into a single domain and deploy any unsupervised DA (UDA) method. However, such a trivial solution may lead to sub-optimal classification due to the gaps among labeled data [41].

Most DA techniques assume the same label space in the source and target domains, usually called the closed-set setting. The paradigm of closed-set DA has been substantially explored in the literature for UDA [11, 32, 33, 39, 47, 62, 64], SSDA [23, 37, 41, 43, 59, 61], and MSDA [19, 36, 51, 53, 57, 65, 67]. In contrast, the open-set DA (OSDA) [4] setting allows the presence of target-specific classes in addition to the shared classes. Such an open-set arrangement is more challenging due to the huge label shift across domains. The closed-set DA techniques cannot be directly applied in this case since these target-specific open-set samples, in turn, may jeopardize the domain alignment process. This work formalizes a more general problem where source domains may not share the label space, and the unlabeled target domain additionally contains novel classes.

Motivated by these observations, we consider a learning scenario, multi-source semi-supervised open-set DA (MSODA), where each source domain has a distinct label space, the labeled target domain consists of a few labeled target samples whose label space, i.e., the known classes, is a superset of every source label space, and the unlabeled target domain contains data either from the known classes or from a combined unknown class. Under this setup, the task is to classify the unlabeled data either into one of the known categories or into the common unknown class. Such a setup has broad applications in fields of real-world visual perception like medical imaging and remote sensing, where acquiring multi-domain data is feasible and novel categories may show up abruptly [4]. Nonetheless, this MSODA problem cannot be effectively solved by directly utilizing the single-source open-set paradigm of [2, 10, 20, 27, 34, 40, 54, 63], mainly because of the following factors: i) the varying label space of multiple source domains becomes an obstacle to traditional OSDA techniques, and ii) unknown recognition can be non-trivial since the target domain may be related to each source domain to a different degree. Regarding multi-source models, recent works [25, 55] consider various label spaces but require largely shared common classes across all domains to align the features, whereas we accept source domains with zero overlap.

[36] argued that reducing the domain gap among the source domains leads to a more robust and effective MSDA model.
This idea is particularly relevant to our problem setting since aligning the source domains among themselves inherently helps distinguish unknown from known categories in the target domain. Otherwise, the domain shift among the source domains may lead to an unstable alignment of unlabeled target data. Inspired by this idea, we combine the theoretical results from [62, 63] to build a learning theory that can align all source domains with the labeled target domain via joint error, which is crucial to dealing with label shift [66]. We then introduce PU learning [58] to detect unknowns with an end-to-end algorithm such that the generalization error is guaranteed, unlike those methods applying closed-set DA after unknown separation. Our major contributions can be summarized as:

- We introduce a challenging problem setting of multi-source semi-supervised open-set DA with varying label space and propose a learning theory via joint error.
- We design a framework to generate labeled source features via an attention-based mechanism for source-free cases.
- We demonstrate the efficacy of our proposal through extensive experiments on two benchmark datasets where we perform a thorough robustness analysis.

![](images/9f82b9808ce1520db69fff4acb45a3db7e5889bd39e3c49c8ed40582377becd7.jpg)
Figure 1. Knowledge integration from heterogeneous domains can be considered a task for multi-source semi-supervised open-set domain adaptation. Given a few labeled target data as the key, we aim to build a unified target model from multiple source domains with varying label space, which can be applied to query data containing unknown categories.

# 2. Learning Theory of Unified Model

In this section, we present the theory that transfers the knowledge from multiple source domains to the target domain given a few labeled target data under the open-set situation. First, we propose a target error bound via joint error based on the theoretical results from [62, 63]. Then, we derive the generalization error of the proposed learning theory based on the generalized Vapnik-Chervonenkis (VC) complexity [49, 50] of real-valued function spaces. Finally, we proceed with the empirical objective function as an upper bound of a trivial convex combination with the log-sum-exp trick, which leads to a smoother optimization process.

We consider the unified model (UM) as a solution to multi-source semi-supervised open-set domain adaptation (MSODA) tasks, where the learning algorithm has access to multiple source domains that may have different label spaces. A set of $n_i$ ($i = 1, \ldots, N$) labeled points $\{(x_{s_i}^j, y_{s_i}^j) \in (\mathcal{X} \subseteq \mathbb{R}^D \times \mathcal{Y}_i' \subseteq \mathcal{Y}')\}_{j=1}^{n_i}$ is sampled i.i.d. from each source domain $S_i'$. In addition, a set of $l$ labeled points (few-shot) $\{(x_v^j, y_v^j) \in (\mathcal{X} \subseteq \mathbb{R}^D \times \mathcal{Y}' = \{1, \ldots, K-1\})\}_{j=1}^l$ sampled i.i.d. from the labeled target domain $V'$ is available during learning. We seek a hypothesis that can classify a set of $m$ unlabeled points $\{x_t^j \in \mathcal{X} \subseteq \mathbb{R}^D\}_{j=1}^m$ sampled i.i.d. from the target domain $T$, whose label space $\mathcal{Y} = \{1, \ldots, K\}$ contains the unknown class $K$. Let $\mathcal{K} = \{k \mid k \in \mathbb{R}^K : \sum_{y \in \mathcal{Y}} k[y] = 1, k[y] \in [0,1]\}$ denote the output space and $S_i, V$ indicate the complete domains with label space $\mathcal{Y}$.
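To make the problem setup concrete before stating the bound, the following is a minimal sketch of how the MSODA splits can be organized. It mirrors the Office-Home protocol of Sec. 4 (30 known classes, three sources with disjoint 10-class label spaces); the names `KNOWN`, `UNKNOWN`, `relabel_target`, and `subsample_few_shot` are illustrative assumptions, not part of the paper's released code.

```python
import random

K = 31                          # 30 known classes + 1 combined unknown class
KNOWN = list(range(K - 1))      # known label space Y' = {0, ..., K-2}
UNKNOWN = K - 1                 # index of the combined unknown class

# Three source domains with disjoint 10-class label spaces (zero overlap),
# following the Office-Home protocol described in Sec. 4.
source_label_spaces = [KNOWN[0:10], KNOWN[10:20], KNOWN[20:30]]

def relabel_target(y: int, known=frozenset(KNOWN)) -> int:
    """Collapse every target class outside the known set into the unknown class K."""
    return y if y in known else UNKNOWN

def subsample_few_shot(samples, shots: int = 1):
    """Pick `shots` labeled target examples per known class (the few-shot set V')."""
    per_class = {}
    for x, y in samples:
        per_class.setdefault(y, []).append((x, y))
    return [ex for y in KNOWN for ex in random.sample(per_class[y], shots)]
```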
Theorem 2.1 (Target Error Bound for MSODA via Joint Error$^1$). Given source $(S_i)$, labeled $(V)$ and unlabeled target $(T)$ domains that contain data from the unknown class, let $f_{S_i}, f_V, f_T: \mathcal{X} \to \mathcal{K}$ be the true labeling functions of $S_i, V, T$ respectively, whose outputs are one-hot vectors denoting the corresponding classes of inputs. Let $\epsilon: \mathcal{K} \times \mathcal{K} \to \mathbb{R}$ denote a distance metric and $\epsilon_D(f, f') := \mathbb{E}_{x \sim D}\, \epsilon(f(x), f'(x))$ measure the expected disagreement between the outputs of $f, f': \mathcal{X} \to \mathcal{K}$ over a distribution $D$ on $\mathcal{X}$. Regarding the source error of a hypothesis $h \in \mathcal{H}: \mathcal{X} \to \mathcal{K}$, where $h(x)[y]$ indicates the probability of $x \in \mathcal{X}$ being labeled as $y \in \mathcal{Y}$, we use the shorthand $\epsilon_{S_i}(h) := \epsilon_{S_i}(h, f_{S_i})$. Similarly, we use $\epsilon_V(h), \epsilon_T(h)$ to denote the labeled and unlabeled target errors. For all $h, f_{S_i}^*, f_V^*, f_T^* \in \mathcal{H}: \mathcal{X} \to \mathcal{K}$, the expected target error is bounded:

$$
\begin{aligned}
2\epsilon_T(h) &\leq \epsilon_V(h) + \sum_{i=1}^{N} \alpha_i U_i(h), \quad \text{s.t.} \quad \sum_{i=1}^{N} \alpha_i = 1 \\
&= \epsilon_V(h) + \sum_{i=1}^{N} \alpha_i \left[ \epsilon_{S_i}(h) + 2 D_{S_i,V,T}\left(f_{S_i}^*, f_V^*, f_T^*, h\right) + 2\theta_i \right],
\end{aligned} \tag{1}
$$

$$
\begin{aligned}
2 D_{S_i,V,T}(f_{S_i}^*, f_V^*, f_T^*, h) &= \epsilon_T(f_{S_i}^*, f_T^*) + \epsilon_T(f_V^*, f_T^*) + \epsilon_T(h, f_{S_i}^*) + \epsilon_T(h, f_V^*) \\
&\quad + \epsilon_V(f_{S_i}^*, f_V^*) + \epsilon_{S_i}(f_V^*, f_{S_i}^*) - \epsilon_V(h, f_{S_i}^*) - \epsilon_{S_i}(h, f_V^*)
\end{aligned} \tag{2}
$$

$$
\theta_i = \underbrace{\epsilon_{S_i}(f_{S_i}, f_{S_i}^*)/2 + \epsilon_V(f_{S_i}, f_{S_i}^*) + \epsilon_T(f_{S_i}, f_{S_i}^*)}_{\theta_S^i} + \underbrace{\epsilon_V(f_V, f_V^*)/2 + \epsilon_{S_i}(f_V, f_V^*) + \epsilon_T(f_V, f_V^*)}_{\theta_V^i} + \underbrace{\epsilon_T(f_T, f_T^*)}_{\theta_T^i} \tag{3}
$$

In the following, we discuss the approach to obtaining generalization guarantees for multi-source domain adaptation in classification settings by a trivial union-bound argument.

Assumption 2.2 (Substitutes for True Labeling Functions). For finite training data $\{\hat{S}_i\}_{i=1}^N, \hat{T}, \hat{V}$, we assume there exist approximated labeling functions $\{f_{S_i}^*\}_{i=1}^N, f_T^*, f_V^*$ that can bring the empirical deviation $\sum_i \alpha_i \hat{\theta}_i$ so close to zero that it can be ignored during the practical learning process.
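Every term in the bound above is an expected disagreement $\epsilon_D(f, f')$ and is estimated in practice by a Monte Carlo average over mini-batches, which is also how the empirical deviations in Assumption 2.2 are monitored. A minimal sketch, assuming `f_out` and `g_out` are the $\mathcal{K}$-valued outputs of two hypotheses on the same batch and `metric` is any distance on $\mathcal{K} \times \mathcal{K}$ (all names are illustrative):

```python
import torch

def expected_disagreement(f_out: torch.Tensor,
                          g_out: torch.Tensor,
                          metric) -> torch.Tensor:
    """Empirical estimate of eps_D(f, f') = E_{x ~ D}[ metric(f(x), f'(x)) ].

    f_out, g_out: (batch, K) probability vectors from two hypotheses
    metric: callable mapping two (batch, K) tensors to a (batch,) tensor
    """
    return metric(f_out, g_out).mean()

# Example with a simple L1 distance standing in for the metric epsilon:
l1 = lambda p, q: (p - q).abs().sum(dim=1)
# eps_hat = expected_disagreement(h_outputs, f_star_outputs, l1)
```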
Theorem 2.3 (Generalization Error$^1$). Let $\hat{S}_i, \hat{V}, \hat{T}$ denote the empirical distributions generated with $m$ i.i.d. samples from each domain. Let $\mathcal{F} = \{f(x) = \epsilon(h(x), h'(x)): \mathcal{X} \to [0, M] \mid h, h' \in \mathcal{H}\}$ be a function space whose complexity is measured by the uniform covering number $\mathcal{N}_1(\xi, \mathcal{F}, m)$. Let $\alpha_i = \frac{\exp(\nu \hat{U}_i(h))}{\sum_j \exp(\nu \hat{U}_j(h))}$ with $\nu > 0$. Given Jensen's and Cauchy's inequalities and Assumption 2.2, there exist $f_{S_i}^* \in \mathcal{H}_{S_i} \subseteq \mathcal{H}$, $f_V^* \in \mathcal{H}_V \subseteq \mathcal{H}$, $f_T^* \in \mathcal{H}_T \subseteq \mathcal{H}$, such that for $0 < \delta < 1$, with probability at least $1 - \delta$, for all $h \in \mathcal{H}$:

$$
\begin{aligned}
\epsilon_T(h) \leq\ & \frac{1}{2} \Big[ \underbrace{\epsilon_{\hat{V}}(h)}_{L_{cls}^V(h)} + \frac{1}{\nu} \log \sum_{i=1}^{N} \exp\big(\nu \hat{U}_i(h)\big) \Big] \\
& + \mathcal{O}\left( \inf_{\sqrt{2/m} \leq \gamma \leq M} \Big( \gamma + \int_{\gamma}^{M} \sqrt{ \frac{1}{m} \log \frac{2(11N + 6)\, \mathcal{N}_1(\frac{\xi}{8}, \mathcal{F}, 2m)}{\delta} }\, d\xi \Big) \right)
\end{aligned} \tag{4}
$$

$$
\hat{U}_i(h) = \underbrace{\epsilon_{\hat{S}_i}(h)}_{L_{cls}^{S_i}(h)} + 2 \underbrace{D_{\hat{S}_i, \hat{V}, \hat{T}}\left(f_{S_i}^*, f_V^*, f_T^*, h\right)}_{L_{dis}^i\left(f_{S_i}^*, f_V^*, f_T^*, h\right)} \tag{5}
$$

The log-sum-exp trick [30] yields an upper bound of the convex combination as in Theorem 2.3, so we no longer need to heuristically decide the value of $\alpha_i$ in the unified model. It smooths the objective and provides a principled and adaptive way to combine all the gradients from the $N$ source domains. This often leads to better generalization in practice because of the ensemble effect of multiple sources implied by the upper bound [65].
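As a concrete illustration, the smoothed combination in Eq. (4) can be computed with a numerically stable `logsumexp` instead of hand-tuned weights; a minimal PyTorch sketch, where the vector `U` of per-source terms $\hat{U}_i(h)$ is assumed to be produced by the surrounding training loop:

```python
import torch

def combine_source_objectives(U: torch.Tensor, nu: float = 0.1) -> torch.Tensor:
    """Smooth upper bound (1 / nu) * log sum_i exp(nu * U_i) over N sources.

    U:  tensor of shape (N,) holding the per-source objectives U_hat_i(h)
    nu: temperature; a larger nu focuses on the worst source, a smaller nu
        averages the sources more evenly (cf. Sec. 4.7)
    """
    return torch.logsumexp(nu * U, dim=0) / nu

# The implicit convex weights alpha_i are recovered as a softmax over nu * U:
# alpha = torch.softmax(nu * U, dim=0)   # differentiable and sums to one
```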
According to [13, 56, 63], for all $f, f': \mathcal{X} \to \mathcal{K}$, $\epsilon_V(f, f')$ can be approximated by the expectation on $V'$ based on PU learning. Moreover, source data $S_i'$ may be unavailable due to privacy concerns (e.g., medical data) during the adaptation phase. To tackle this source-free domain adaptation (SFDA) problem, we propose a source feature generation pipeline based on the attention mechanism, which can transfer knowledge between models with different architectures.

# 3. Methodology

In this section, we first recall several preliminaries crucial to the learning algorithm of open-set domain adaptation. Then, we propose a pipeline to transfer the knowledge between models with different architectures based on the attention mechanism to recover the feature space under the source-free setting. Finally, we define a constrained hypothesis space to obtain a rigorous objective function.

# 3.1. Discrepancy Measurement

As introduced in [63], we recall the definitions of the Open-set Margin Discrepancy and the Unknown Predictive Discrepancy, which serve as key components in bridging the gap between the theory and the algorithm for open-set domain adaptation.

Definition 3.1 (Open-set Margin Discrepancy). Let $y, y'$ denote the outputs of $f, f': \mathcal{X} \to \mathcal{K}$, where $y = l(f(x))$ and $y' = l(f'(x))$ given the induced labeling function:

$$
l \circ f: x \rightarrow \underset{y \in \mathcal{Y}}{\arg\max}\, f(x)[y] \tag{6}
$$

The Open-set Margin Discrepancy between two functions $f, f'$ over a distribution $D$ is given by:

$$
\epsilon_D(f, f') = \mathbb{E}_{x \sim D}\left[ \operatorname{omd}(f(x), f'(x)) \right] \tag{7}
$$

$$
\operatorname{omd}(f(x), f'(x)) = \max\big( \left| \log(1 - f(x)[y]) - \log(1 - f'(x)[y]) \right|,\ \left| \log(1 - f(x)[y']) - \log(1 - f'(x)[y']) \right| \big) \tag{8}
$$

Definition 3.2 (Unknown Predictive Discrepancy). Let $v: \mathcal{K} \times \mathcal{K} \to \mathbb{R}$ denote the Unknown Predictive Discrepancy as a distance metric and $v_D(f, f') := \mathbb{E}_{x \sim D}\, v(f(x), f'(x))$ measure the expected disagreement between the $K$-th outputs of $f, f': \mathcal{X} \to \mathcal{K}$ over a distribution $D$ on $\mathcal{X}$. Let $e^K: \mathcal{X} \to [0, \dots, 0, 1] \in \mathcal{K}$ denote a function that predicts any input as the unknown class. The deviation from $e^K$ for a hypothesis $h \in \mathcal{H}$ is further referred to by the shorthand $v_D(h) := v_D(h, e^K)$, which measures the probability that samples from $D$ are not classified as unknown.

$$
v_D(f, f') = \mathbb{E}_{x \sim D} \left| \log(1 - f(x)[K]) - \log(1 - f'(x)[K]) \right| \tag{9}
$$
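A minimal PyTorch sketch of the two discrepancies on batched probability outputs (Eqs. (7)-(9)); here `p` and `q` stand for $f(x)$ and $f'(x)$, and the small constant `eps` guarding the logarithms is an implementation assumption rather than part of the definitions:

```python
import torch

def omd(p: torch.Tensor, q: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Open-set Margin Discrepancy (Eq. 8) for outputs p, q of shape (B, K)."""
    y = p.argmax(dim=1)                      # induced labels l(f(x))
    y_prime = q.argmax(dim=1)                # induced labels l(f'(x))
    lp = torch.log1p(-p.clamp(max=1 - eps))  # log(1 - p[.]), numerically guarded
    lq = torch.log1p(-q.clamp(max=1 - eps))
    idx = torch.arange(p.size(0))
    d_y = (lp[idx, y] - lq[idx, y]).abs()
    d_yp = (lp[idx, y_prime] - lq[idx, y_prime]).abs()
    return torch.maximum(d_y, d_yp)          # .mean() estimates Eq. (7)

def upd(p: torch.Tensor, q: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Unknown Predictive Discrepancy (Eq. 9); the last column is the unknown class K."""
    lp = torch.log1p(-p[:, -1].clamp(max=1 - eps))
    lq = torch.log1p(-q[:, -1].clamp(max=1 - eps))
    return (lp - lq).abs()                   # .mean() estimates v_D(f, f')
```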
# 3.2. Inference on Expectation with PU Learning

In this section, we introduce techniques from PU learning [58] to estimate the expectation over a source domain $S_i$ using the incomplete source domain $S_i'$ and the target domain $T$ in the open-set scenario. The expectation over $V$ can be derived analogously.

Assumption 3.3. Let $S_i^k = P_{S_i}(x|y = k)$, $V^k = P_V(x|y = k)$, $T^k = P_T(x|y = k)$ denote class-conditional distributions, and let $S_i^{\backslash K} = P_{S_i}(x|y \neq K)$, $V' = P_V(x|y \neq K)$, $T' = P_T(x|y \neq K)$ indicate incomplete domains that do not contain the unknown-class distributions $S_i^K, V^K, T^K$. Given a feature extractor $g: \mathcal{X} \subseteq \mathbb{R}^D \to \mathcal{Z} \subseteq \mathbb{R}^F$, assume that the feature space can be aligned by DA techniques such that $Z^K = P_{S_i^K}(z) = P_{V^K}(z) = P_{T^K}(z)$ and $Z' = P_{S_i^{\backslash K}}(z) = P_{V'}(z) = P_{T'}(z)$.

Lemma 3.4 (PU Estimation$^1$). Let $g: \mathcal{X} \subseteq \mathbb{R}^D \to \mathcal{Z} \subseteq \mathbb{R}^F$ denote the feature extractor. Let $h \in \mathcal{H}^F: \mathcal{Z} \to \mathcal{K}$, where $h \circ g \in \mathcal{H}: \mathcal{X} \to \mathcal{K}$, and let $f_V^* \in \mathcal{H}_V^F$, $f_T^* \in \mathcal{H}_T^F$, $f_{S_i}^* \in \mathcal{H}_{S_i}^F: \mathcal{Z} \to \mathcal{K}$ denote the decomposed approximated labeling functions. Let $\sum_{k=1}^{K} \pi_{S_i}^k = 1$, $\sum_{k=1}^{K} \pi_V^k = 1$, $\sum_{k=1}^{K} \pi_T^k = 1$ denote the class priors of each domain. Given Assumption 3.3, the expectation on $S_i$ can be estimated from the expectation on $S_i^{\backslash K}$ and the Unknown Predictive Discrepancy (Definition 3.2) under the mild condition that $\pi_{S_i}^K = \pi_T^K = 1 - \alpha$:

$$
\epsilon_{S_i}(h \circ g) = \alpha \left[ \epsilon_{S_i^{\backslash K}}(h \circ g) - v_{S_i^{\backslash K}}(h \circ g) \right] + v_T(h \circ g) \tag{10}
$$

$$
\epsilon_{S_i}(f_{S_i}^* \circ g, f_V^* \circ g) = \alpha \left[ \epsilon_{S_i^{\backslash K}}(f_{S_i}^* \circ g, f_V^* \circ g) - v_{S_i^{\backslash K}}(f_{S_i}^* \circ g, f_V^* \circ g) \right] + v_T\left(f_{S_i}^* \circ g, f_V^* \circ g\right) \tag{11}
$$

$$
\epsilon_{S_i}(f_V^* \circ g, h \circ g) = \alpha \left[ \epsilon_{S_i^{\backslash K}}(f_V^* \circ g, h \circ g) - v_{S_i^{\backslash K}}(f_V^* \circ g, h \circ g) \right] + v_T\left(f_V^* \circ g, h \circ g\right) \tag{12}
$$

Assumption 3.5. Given a feature extractor $g: \mathcal{X} \to \mathcal{Z}$, assume that the covariate shift between each source and the labeled target domain can be addressed for the known categories, i.e., $P_{S_i^k}(z) = P_{V^k}(z)$ for $k = 1, \ldots, K - 1$.

Corollary 3.6. Let $\mathcal{Y}_i'' = \{k \mid k \notin \mathcal{Y}_i', k = 1, \ldots, K-1\}$ denote the label space that is absent from $S_i'$. Given Assumption 3.5, we further decompose the source error as:

$$
\alpha\, \epsilon_{S_i^{\backslash K}}(h \circ g) = \sum_{k \in \mathcal{Y}_i''} \pi_{S_i}^k\, \epsilon_{S_i^k}(h \circ g) + \sum_{k \in \mathcal{Y}_i'} \pi_{S_i}^k\, \epsilon_{S_i^k}(h \circ g) = \rho_i \sum_{k \in \mathcal{Y}_i''} \epsilon_{V^k}(h \circ g) + (1 - \rho_i)\, \epsilon_{S_i'}(h \circ g), \tag{13}
$$

where $\rho_i = |\mathcal{Y}_i''| / K$ under the mild condition that $\pi_{S_i}^k = 1/K$ for $k \in \mathcal{Y}_i''$.

Remark 3.7. According to Definition 3.2, minimizing $v_{\hat{T}}(h \circ g)$ means mapping target samples to the unknown class. In practice, a multiplier $\beta < 1$ is applied to $v_{\hat{T}}(h \circ g)$ to prevent all target samples from being classified as unknown.
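A minimal sketch of the PU-estimated source error in Eq. (10) with the practical multiplier $\beta$ from Remark 3.7; the batch-level estimators (`err_src_known`, `upd_src_known`, `upd_target`, e.g., built from `omd` and `upd` above) and the prior estimate `alpha` are assumed to be supplied by the surrounding training code:

```python
import torch

def pu_source_error(err_src_known: torch.Tensor,
                    upd_src_known: torch.Tensor,
                    upd_target: torch.Tensor,
                    alpha: float,
                    beta: float = 0.15) -> torch.Tensor:
    """Eq. (10): eps_{S_i} = alpha * [eps_{S_i\\K} - v_{S_i\\K}] + v_T, with beta < 1.

    err_src_known: empirical eps_{S_i\\K}(h o g) on the labeled source batch
    upd_src_known: empirical v_{S_i\\K}(h o g) on the same batch
    upd_target:    empirical v_T(h o g) on the unlabeled target batch
    alpha:         estimated known-class ratio (pi_T^K = 1 - alpha)
    beta:          Remark 3.7 multiplier preventing an all-unknown collapse
    """
    return alpha * (err_src_known - upd_src_known) + beta * upd_target
```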
# 3.3. Towards Source-Free Knowledge Transfer with Attention-based Feature Generation

Source-free domain adaptation (SFDA) has been considered a means of reducing reliance on source data. As described in [24], existing SFDA research can generally be categorized into data-centric and model-centric methods. Model-centric methods employ techniques such as self-supervision, while data-centric methods focus on image-based reconstruction. Model-centric methods like [28, 29, 35, 60] require source-model fine-tuning, where the generalization to multi-source cases with label shift can be nontrivial since it may fail to fully leverage the few-shot labeled data due to the missing classes in source domains. Meanwhile, for data-centric methods like [6, 26], the pipeline to generate source-like images is generally computationally intensive and time-consuming, and it can hardly be applied to highly structured domains. Furthermore, recovering source-like images might violate the intention of SFDA to protect privacy. Motivated by this, in this section, we propose a novel attention-based feature generation (AFG) algorithm that can produce labeled anchors for the alignment of unlabeled target data by leveraging the knowledge stored in the source models, which is more computationally efficient and independent of source-model fine-tuning.

![](images/b924781068c2bfcbf6d052fefbd06b27a184ae76704098abb9c471e600243125.jpg)
Figure 2. The mechanism of attention-based feature generation for source-free domain adaptation. Given a similarity-based weight estimated from the knowledge preserved in the pre-trained source model, consisting of a black-box feature extractor $g_i$ and a visible classifier $f_i$, labeled features generated with the attention module can be considered a weighted average of unlabeled target features, which serve as the anchor for the target distribution alignment in the adaptation phase.

The SFDA scenario involves two phases: pre-training and adaptation. During pre-training, $N$ models are trained on labeled data from each source domain $x_{s_i} \sim S_i'$, $i = 1, \ldots, N$. Subsequently, the goal of the adaptation stage is to adapt the pre-trained source models to the unlabeled target data $x_t \sim T$ given the few-shot labeled target data $x_v \sim V'$. The proposed approach assumes a challenging open-set form, implying that the label spaces of the target and source domains are distinct.

Inspired by [28], which uses a single-layer linear classifier in source models to store the cluster centers of source features, we choose a Bayesian linear classifier during pre-training such that source features can be sampled via the re-parameterization trick [21] in the adaptation phase. Let $g_i: \mathcal{X} \to \mathcal{Z}_i \subseteq \mathbb{R}^{F_i}$ and $f_i := \{\mu_i, \sigma_i\}$ denote each pre-trained source model. As illustrated in Fig. 2, the source features are approximated by the weight samples of the Bayesian linear classifier as

$$
g_i(\hat{S}_i') = \begin{pmatrix} g_i(x_{s_i}^1) \\ \vdots \\ g_i(x_{s_i}^{\|\mathcal{Y}_i'\|}) \end{pmatrix} := \mu_i + \sigma_i \odot \begin{pmatrix} \zeta_i^1 \\ \vdots \\ \zeta_i^{\|\mathcal{Y}_i'\|} \end{pmatrix}, \quad \zeta_i^j \sim \mathcal{N}(0, I) \text{ with size } \|\mathcal{Y}_i'\| \times F_i
$$

(multiple samples can be generated from each class in practice). Along with the query and key mapping functions $w_{q_i}, w_{k_i}: \mathcal{Z}_i \to \mathcal{Z}_i' \subseteq \mathbb{R}^{F_i'}$, the corresponding labeled anchor, defined as $\{(g(x_{s_i}^j), y_i^j \in \mathcal{Y}_i')\}_{j=1}^{\|\mathcal{Y}_i'\|}$ with $y_i^j \neq y_i^{j'}$, is given by:

$$
g(\hat{S}_i') = \operatorname{softmax}\left( \frac{w_{q_i}\left(g_i(\hat{S}_i')\right) \cdot w_{k_i}\left(g_i(\hat{T}')\right)^{\top}}{\sqrt{F_i'}} \right) g(\hat{T}'), \tag{14}
$$

where $\hat{T}'$ denotes the estimated known-class data from the target. To produce meaningful features for the distribution alignment in the adaptation phase, we propose two objective functions to learn the query and key mappings $\{w_{q_i}, w_{k_i}\}_{i=1}^N$ of each source domain; a sketch of the generation step itself follows below.
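As referenced above, a minimal sketch of the AFG generation step (Eq. (14)); the Bayesian classifier parameters `mu` and `sigma`, the projection modules `w_q` and `w_k`, and the two target feature batches are assumed given, and all shapes are illustrative:

```python
import torch
import torch.nn.functional as F

def generate_labeled_anchors(mu: torch.Tensor,       # (C_i, F_i) class-wise means
                             sigma: torch.Tensor,    # (C_i, F_i) class-wise std devs
                             w_q: torch.nn.Module,   # query projection Z_i -> Z_i'
                             w_k: torch.nn.Module,   # key projection   Z_i -> Z_i'
                             z_t_src: torch.Tensor,  # (M, F_i) known targets via g_i
                             z_t_tgt: torch.Tensor   # (M, F)   known targets via g
                             ) -> torch.Tensor:
    """Eq. (14): labeled anchors as an attention-weighted average of target features."""
    # Re-parameterization trick [21]: sample one source feature per source class.
    z_src = mu + sigma * torch.randn_like(mu)
    q = w_q(z_src)                                   # queries from source features
    k = w_k(z_t_src)                                 # keys from target features
    attn = F.softmax(q @ k.T / k.size(-1) ** 0.5, dim=-1)
    return attn @ z_t_tgt                            # (C_i, F) anchors in g's space
```

Note that the queries and keys live in the source model's feature space $\mathcal{Z}_i$, while the values are target features extracted by the target backbone $g$, so the generated anchors land directly in the space being adapted.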
Analogous to [5], we train $w_{q_i}, w_{k_i}$ by maximizing the similarity between the projections of the same target features extracted by the pre-trained source model $g_i$ while pushing different target features far apart, which can be achieved by minimizing a reconstruction loss $L_{rec}^i$ such that the output of the attention module approximates the target features $g(\hat{T}')$ when target data serve as both query and key. To further regularize $w_{q_i}, w_{k_i}$, we introduce a cycle-consistency loss $L_{cyc}^i$ that brings the features generated from the labeled and unlabeled target data $\hat{V}', \hat{T}'$ close to each other.

$$
L_{rec}^i = \left| \operatorname{softmax}\left( \frac{w_{q_i}\left(g_i(\hat{T}')\right) \cdot w_{k_i}\left(g_i(\hat{T}')\right)^{\top}}{\sqrt{F_i'}} \right) g(\hat{T}') - g(\hat{T}') \right| \tag{15}
$$

$$
L_{cyc}^i = \left| \operatorname{softmax}\left( \frac{w_{q_i}\left(g_i(\hat{S}_i')\right) \cdot w_{k_i}\left(g_i(\hat{V}')\right)^{\top}}{\sqrt{F_i'}} \right) g(\hat{V}') - g(\hat{S}_i') \right| \tag{16}
$$

Progressive Unknown Rejection (PUR) is additionally proposed to improve the recognition accuracy on the unknown class. In the open-set setting, the empirical target data $\hat{T}$ includes the unknown class, while the generated labeled anchors $g(\hat{S}_i')$ should be limited to the known classes. According to the generation mechanism defined by Eq. (14), labeled anchors can be considered a similarity-based weighted average of target features, which are not supposed to contain components from irrelevant features of the unknown class. However, it is impractical to learn the ideal result, where the weights assigned to those unrelated target features are zero, by pure regularization of the mapping functions $w_{q_i}, w_{k_i}$. To address this problem, we introduce a scheme that gradually rejects target features from the unknown class by removing from $\hat{T}$ the target data labeled as unknown under the current hypothesis $h$. Specifically, at each training iteration during the adaptation stage, for a batch of input target data, we rank the likelihood of the unknown class for each target sample, $p(y = K|x_t) = h(x_t)[K]$, in ascending order. Given a threshold $0 < \tau < 1$ that progressively increases from zero according to the exponential ramp-up function [22], we select the bottom $1 - \tau$ target samples as $\hat{T}'$; a sketch of this selection step follows.
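A minimal sketch of the PUR selection step; the ramp-up follows the common exponential schedule of [22], the maximum threshold `tau_max = 0.3` matches the setting used in Sec. 4, and the function names are illustrative:

```python
import math
import torch

def rampup_tau(step: int, total_steps: int, tau_max: float = 0.3) -> float:
    """Exponential ramp-up of the rejection threshold tau from 0 to tau_max [22]."""
    t = min(step / max(total_steps, 1), 1.0)
    return tau_max * math.exp(-5.0 * (1.0 - t) ** 2)

def select_known_targets(x_t: torch.Tensor, h_out: torch.Tensor, tau: float):
    """Keep the bottom (1 - tau) fraction of a target batch by unknown probability.

    h_out: (B, K) outputs of the current hypothesis; last column is p(y = K | x_t).
    Returns the subset of x_t estimated to belong to the known classes (T_hat').
    """
    p_unknown = h_out[:, -1]
    n_keep = max(1, int(round((1.0 - tau) * x_t.size(0))))
    keep_idx = torch.argsort(p_unknown)[:n_keep]   # ascending: most known-like first
    return x_t[keep_idx]
```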
# 3.4. Hypothesis Constraint

Proposition 3.8. If $\mathcal{H}_{S_i}^F, \mathcal{H}_V^F, \mathcal{H}_T^F$ are sets of functions that can minimize a part of $\hat{\theta}_S^i, \sum_i \hat{\theta}_V^i, \sum_i \hat{\theta}_T^i$ respectively, then $f_{S_i}^* \in \mathcal{H}_{S_i}^F, f_V^* \in \mathcal{H}_V^F, f_T^* \in \mathcal{H}_T^F$ must hold, such that we can relax $L_{dis}$ in Theorem 2.3 by taking the supremum w.r.t. the functions $f_{S_i}', f_V', f_T'$:

$$
\log \sum_i \exp\left(\nu \left[ L_{cls}^{S_i}(h; g) + 2 L_{dis}^i\left(f_{S_i}^*, f_V^*, f_T^*, h; g\right) \right]\right) \leq \sup_{\{f_{S_i}' \in \mathcal{H}_{S_i}^F\}_{i=1}^N,\, f_V' \in \mathcal{H}_V^F,\, f_T' \in \mathcal{H}_T^F} \log \sum_i \exp\left(\nu \left[ L_{cls}^{S_i}(h; g) + 2 L_{dis}^i\left(f_{S_i}', f_V', f_T', h; g\right) \right]\right) \tag{17}
$$

Definition 3.9 (Approximated Labeling Function Space). Let $L_{\mathcal{H}_S}^i, L_{\mathcal{H}_V}^i, L_{\mathcal{H}_T}^i$ denote the hypothesis constraints, i.e., a part of the empirical deviation between the approximated and true labeling functions $\hat{\theta}_S^i, \hat{\theta}_V^i, \hat{\theta}_T^i$. The Approximated Labeling Function Spaces $\mathcal{H}_{S_i}^F, \mathcal{H}_V^F, \mathcal{H}_T^F$ can be defined as the sets whose members $f_{S_i}', f_V', f_T' \in \mathcal{H}^F$ minimize $L_{\mathcal{H}_S}^i, \sum_i L_{\mathcal{H}_V}^i, \sum_i L_{\mathcal{H}_T}^i$:

$$
\begin{aligned}
\mathcal{H}_{S_i}^F &= \Big\{ f_{S_i}' \,\Big|\, \arg\min_{g, f_{S_i}' \in \mathcal{H}^F} \big[ L_{\mathcal{H}_S}^i(f_{S_i}'; g) = L_{cls}^{S_i}(f_{S_i}'; g)/2 + L_{cls}^V(f_{S_i}'; g) \big] \Big\} \\
\mathcal{H}_V^F &= \Big\{ f_V' \,\Big|\, \arg\min_{g, f_V' \in \mathcal{H}^F} \sum_i \big[ L_{\mathcal{H}_V}^i(f_V'; g) = L_{cls}^V(f_V'; g)/2 + L_{cls}^{S_i}(f_V'; g) \big] \Big\} \\
\mathcal{H}_T^F &= \Big\{ f_T' \,\Big|\, \arg\min_{g, f_T' \in \mathcal{H}^F} \sum_i \big[ L_{\mathcal{H}_T}^i(f_T'; g) = \big[ L_{cls}^{S_i}(f_T'; g) + L_{cls}^V(f_T'; g) \big]/2 + L_{ssl} \big] \Big\}
\end{aligned} \tag{18}
$$

To build a more reliable target function space $\mathcal{H}_T^F$, we approximate the target error with the error rate on labeled samples and a semi-supervised regularization term $L_{ssl}^2$, including entropy minimization [14, 15], pseudo-labeling [44, 46], and consistency regularization [22, 42], which have been intensively discussed in [23, 43, 59, 63].

# 3.5. Algorithm

As described in Algorithm 1, we introduce a gradient reversal layer [12] to train the overall objective jointly. An ImageNet [8] pre-trained ResNet-50 [16] is used as the feature extractor $g$, and randomly initialized 2-layer fully-connected networks are used for the classifiers $f_{S_i}', f_V', f_T', h$. We adopt SGD with a momentum of 0.9 for optimization, where the initial learning rate is empirically set to 0.001. We employ the learning rate annealing strategy proposed in [12]. We use RandomFlip, RandomCrop, and RandAugment [7] for data augmentation, with the batch size fixed to 24.
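The gradient reversal layer used here is the standard construction of [12]; a minimal PyTorch sketch for completeness, where the coefficient `lambd` plays the role of the trade-off parameter $\lambda$ applied to the discrepancy terms $L_D^i$ in Algorithm 1:

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; scales gradients by -lambd in the backward pass [12]."""

    @staticmethod
    def forward(ctx, x, lambd: float = 1.0):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x: torch.Tensor, lambd: float = 1.0) -> torch.Tensor:
    """The reversal operator R(.) wrapped around g and h for the adversarial terms."""
    return GradReverse.apply(x, lambd)
```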
![](images/2368c1f59ad63ea02d5288c77d00848fb6a4405f5d48019ba52a15b5fac7f00b.jpg)
Figure 3. Alignment mechanism of UM, where unknown target data $\hat{T}^K$ (green) is pushed away from the labeled data into a separate cluster, while known target data $\hat{T}'$ is aligned back towards the labeled clusters by $\min_g L_{dis}$.

# Algorithm 1 UM

Input: source $\{\hat{S}_i'\}_{i=1}^N$, labeled target $\hat{V}'$, unlabeled target $\hat{T}$

Output: updated parameters $\phi = (\{f_{S_i}'\}_{i=1}^N, g, h, f_V', f_T')$, $w = \{w_{q_i}, w_{k_i}\}_{i=1}^N$

Parameter: trade-off parameter $\lambda$; learning rate $\eta$; known-class ratio estimator $\alpha$; coefficients $\nu, \beta, \tau$

Notation: gradient reversal operator $R(\cdot)$

for epoch $= 1, 2, \dots$ do

Estimate the known-class ratio $\alpha$ on $\hat{T}$ with $g, h$

if source-free then

Estimate $\hat{T}'$ according to PUR and update $w$ to optimize AFG:

$$
w \leftarrow w - \eta \Delta w, \quad \Delta w = \frac{\partial \sum_{i=1}^{N} \left(L_{rec}^i + L_{cyc}^i\right)}{\partial w}
$$

Generate labeled features $g(\hat{S}_i')$ according to Eq. (14)

end if

Compute the labeled target error $L_{cls}^V(h; g) = L_V$, the source errors $L_{cls}^{S_i}(h; g) = L_S^i$, and the hypothesis constraints $L_{\mathcal{H}_S}^i(f_{S_i}'; g) + L_{\mathcal{H}_V}^i(f_V'; g) + L_{\mathcal{H}_T}^i(f_T'; g) = L_H^i$ for $i = 1, \ldots, N$

Compute the discrepancy $L_{dis}^i(f_{S_i}', f_V', f_T', R \circ h \circ R; R \circ g) = L_D^i$ given the gradient reversal layer, for $i = 1, \ldots, N$

Update $\phi$ to minimize the target error bound:

$$
\phi \leftarrow \phi - \eta \Delta \phi, \quad \Delta \phi = \frac{\partial \left( \frac{1}{2} \left[ L_V + \frac{1}{\nu} \log \sum_{i=1}^{N} \exp\left(\nu \left[ L_S^i + L_H^i - \lambda L_D^i \right]\right) \right] \right)}{\partial \phi}
$$

end for

# 4. Evaluation

We evaluated our proposal using two benchmarks, Office-Home and DomainNet. The trade-off parameter $\lambda$ is set to 0.01 during the training procedure according to [62, 63]. In addition, we empirically set the PU, scaling, and threshold coefficients $\beta$ to 0.15, $\nu$ to 0.1, and $\tau$ to 0.3 for all experiments. For the semi-supervised setting, we select the same few-shot labeled target data according to [41]. Regarding the open-set setting, we assign a distinct label space to each source domain as a subset of the target label space, as described below. We quantitatively compare our results against various baselines, including OSBP [40], PGL [34], ANNA [27], PUJE [63], MOSDANET [38], HyMOS [3], and MPU [58].

Evaluation metrics for the proposed method and the baselines are the widely used measures [34, 40], i.e., the normalized accuracy over the known classes only $(\mathrm{OS}^{*})$ and the harmonic mean $\mathrm{HOS} = 2(\mathrm{OS}^{*} \times \mathrm{UNK}) / (\mathrm{OS}^{*} + \mathrm{UNK})$ [2, 27, 31, 54, 63].

Office-Home [52] is a widely used domain adaptation benchmark, which consists of 15,500 images from 65 categories and four domains: Art, Clipart, Product, and RealWorld. We select the first 30 classes alphabetically as the known classes and group the rest as the unknown. Each source domain contains 10 classes without overlap, leading to a large label-shift scenario.

DomainNet [36] is a more challenging benchmark dataset for large-scale domain adaptation that has 345 classes and
| METHOD | TYPE | →Clipart (1 / 3-shot) | →Product (1 / 3-shot) | →RealWorld (1 / 3-shot) | →Art (1 / 3-shot) | Avg. (1 / 3-shot) |
| --- | --- | --- | --- | --- | --- | --- |
| OSBP | Source-Combine | 60.4 / 62.6 | 70.1 / 72.3 | 69.7 / 68.3 | 60.7 / 64.3 | 65.2 / 66.9 |
| PGL | Source-Combine | 59.0 / 61.8 | 67.7 / 69.9 | 66.7 / 68.9 | 61.2 / 64.0 | 63.7 / 66.2 |
| ANNA | Source-Combine | 65.8 / 67.7 | 71.0 / 73.4 | 70.3 / 70.3 | 61.0 / 63.7 | 67.0 / 68.8 |
| PUJE | Source-Combine | 65.8 / 71.7 | 73.3 / 74.2 | 75.0 / 78.1 | 65.5 / 67.3 | 69.9 / 72.8 |
| MOSDANET | Multi-Source | 61.5 / 65.9 | 70.0 / 73.8 | 71.4 / 69.6 | 61.6 / 63.6 | 66.1 / 68.2 |
| HyMOS | Multi-Source | 56.6 / 64.4 | 64.4 / 67.3 | 66.2 / 68.4 | 59.0 / 62.2 | 61.6 / 65.6 |
| UM | Multi-Source | 68.0 / 72.1 | 79.0 / 83.0 | 79.4 / 80.8 | 67.7 / 70.3 | 73.5 / 76.6 |
| MPU* | Source-Free | 46.3 / 54.4 | 59.7 / 66.3 | 57.8 / 60.2 | 58.3 / 62.5 | 55.5 / 60.9 |
| OSBP* | Source-Free | 44.5 / 56.5 | 55.6 / 65.1 | 59.3 / 64.3 | 55.6 / 59.9 | 53.8 / 61.5 |
| PUJE* | Source-Free | 52.2 / 58.4 | 65.0 / 70.3 | 66.2 / 70.0 | 58.7 / 62.7 | 60.5 / 65.4 |
| UM+AFG | Source-Free | 61.1 / 66.0 | 77.0 / 80.1 | 72.0 / 78.8 | 60.3 / 64.6 | 67.6 / 72.4 |
+ +Table 1. HOS (%) of ResNet-50 model fine-tuned on Office-Home dataset under 1-shot/3-shot setting + +
| METHOD | TYPE | →Clipart (1 / 3-shot) | →Painting (1 / 3-shot) | →Real (1 / 3-shot) | →Sketch (1 / 3-shot) | Avg. (1 / 3-shot) |
| --- | --- | --- | --- | --- | --- | --- |
| OSBP | Source-Combine | 54.2 / 57.4 | 49.8 / 53.1 | 62.6 / 64.0 | 49.5 / 50.1 | 54.0 / 56.2 |
| PGL | Source-Combine | 59.8 / 62.0 | 59.4 / 61.4 | 67.4 / 69.4 | 59.7 / 61.2 | 61.6 / 63.5 |
| ANNA | Source-Combine | 55.6 / 61.5 | 53.6 / 54.3 | 67.5 / 66.5 | 57.9 / 58.1 | 58.7 / 60.1 |
| PUJE | Source-Combine | 64.4 / 66.2 | 59.8 / 61.7 | 67.7 / 69.3 | 61.2 / 64.2 | 63.3 / 65.4 |
| MOSDANET | Multi-Source | 56.4 / 55.3 | 55.6 / 58.2 | 68.5 / 69.8 | 54.1 / 54.9 | 58.7 / 59.6 |
| HyMOS | Multi-Source | 53.0 / 54.4 | 54.1 / 56.0 | 65.1 / 67.4 | 56.3 / 57.1 | 57.1 / 58.7 |
| UM | Multi-Source | 70.3 / 71.5 | 66.0 / 68.8 | 75.1 / 78.5 | 66.1 / 69.5 | 69.4 / 72.1 |
| MPU* | Source-Free | 54.5 / 57.6 | 55.0 / 60.1 | 62.4 / 66.4 | 48.4 / 52.9 | 55.1 / 59.3 |
| MOSDANET* | Source-Free | 58.1 / 60.5 | 54.3 / 59.3 | 63.2 / 62.5 | 49.4 / 54.3 | 56.3 / 59.2 |
| PUJE* | Source-Free | 60.5 / 62.2 | 55.3 / 61.4 | 64.0 / 67.8 | 53.1 / 56.2 | 58.2 / 61.9 |
| UM+AFG | Source-Free | 64.8 / 69.7 | 60.0 / 64.2 | 67.6 / 73.4 | 60.0 / 64.8 | 63.1 / 68.0 |
+ +Table 2. HOS (%) of ResNet-50 model fine-tuned on DomainNet dataset under 1-shot/3-shot setting + +6 domains. Following the protocol established in [41], we pick 4 domains (Real, Clipart, Painting, Sketch) with 126 classes for the experiments. We select the first 60 classes alphabetically as the known class and group the rest as the unknown. Similarly, each source domain contains 20 classes without any overlap. + +As reported in Tabs. 1 and 2, under the same setting given 1-shot/3-shot labeled target data (1/3 samples per class), we observe that our method UM consistently outperforms the state-of-the-art results, improving HOS by $3.6\% / 3.8\%$ and $6.1\% / 6.7\%$ on the benchmark datasets of Office-Home and DomainNet respectively, when source data is available. Furthermore, $\mathrm{UM + AFG}$ enhances HOS by $7.1\% / 7.0\%$ in Office-Home and $4.9\% / 6.1\%$ in DomainNet under the challenging source-free setting. Note that our proposed approach provides significant performance gains for the more complex datasets like DomainNet, which requires knowledge transfer across different modalities, regardless of covariate or label shift. We group all source domains with labeled target data as a single domain for the baselines that require the source-combine strategy. For the source-free cases, we introduce a few confident target data labeled by pre-trained models as pseudo-source data to enable several algorithms denoted by $*$ under this problem setting since none of the existing methods can directly address the open-set task under the source-free condition with a huge label shift across source domains. + +# 4.1. Feature Space Visualization + +To intuitively visualize the effectiveness of different approaches, we extracted features from the baseline models and our proposed model on the $\rightarrow$ Art task (Office-Home) and $\rightarrow$ Real task (DomainNet) with the ResNet-50 backbone + +
| METHOD | TYPE | Office-Home →RealWorld (UNK / OS* / HOS) | DomainNet →Clipart (UNK / OS* / HOS) |
|---|---|---|---|
| DEFAULT | Source-Free | 73.7 / 70.4 / 72.0 | 72.6 / 58.6 / 64.8 |
| w/o $L_{sim}$ | Source-Free | 70.9 / 70.6 / 70.7 | 69.3 / 59.1 / 63.8 |
| w/o $L_{cyc}$ | Source-Free | 74.0 / 68.5 / 71.1 | 72.8 / 56.9 / 63.9 |
| w/o PUR | Source-Free | 39.6 / 87.0 / 54.4 | 47.3 / 72.9 / 57.4 |

Table 3. Ablation study verified with the ResNet-50 model on the Office-Home & DomainNet datasets
# 4.2. Ablation Study

Self-supervised learning methods have shown that, relying only on unlabeled data, it is still possible to obtain classification performance close to that of supervised approaches [5, 17, 18]. In the source-free setting, we adopt the typical SimCLR [5] objective to help group the features of unknown target data into a single cluster. As expected, Tab. 3 shows that $L_{sim}$ slightly improves the accuracy on the unknown class, yielding a higher HOS. Furthermore, Progressive Unknown Rejection (PUR), which denoises the generated labeled features, is crucial to detecting unknowns in source-free cases. As also illustrated in Fig. 7d, a larger threshold $\tau$ generally leads to a higher UNK at the cost of a lower OS*, which characterizes the trade-off between recognizing known and unknown data in open-set tasks. In addition, we verify the effectiveness of the cycle-consistency regularization $L_{cyc}$ and find that it helps maintain the normalized accuracy on the known classes.

# 4.3. Robustness against Varying Openness

To verify the robustness of the proposed method, we conducted experiments on the $\rightarrow$ Painting task (DomainNet) with the openness varying in $\{0.25, 0.5, 0.75\}$, where openness is defined as the ratio of unknown samples in the entire target data. The PGL approach heuristically sets its hyperparameter according to the true unknown ratio to control the openness, while PUJE and UM automatically estimate the weight $\alpha$ during training. From Fig. 5a, we observe that our proposal consistently outperforms the baselines by a large margin, which confirms its robustness to changes in openness.

# 4.4. Stable Convergence

In Fig. 5b, we illustrate the recognition performance of UM over training steps on the $\rightarrow$ Art task of the Office-Home dataset. OS* shows a downward trend while UNK keeps improving, which characterizes the trade-off between the accuracy on known classes and the accuracy on unknowns. We further observe that some previous works [27, 34] do not converge at the optimum. In contrast, our method always reaches a reliable convergence without suffering a severe performance drop in recognizing known classes.
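The known/unknown trade-off governed by the threshold $\tau$ (Secs. 4.2 and 4.4, Fig. 7d) admits a compact reading as confidence-based rejection. The sketch below is our illustrative interpretation; the function name and the exact rule are assumptions, not the paper's precise PUR procedure:

```python
import torch

def reject_unknowns(known_logits: torch.Tensor, tau: float = 0.3):
    """Treat samples whose top known-class probability falls below tau as
    unknown; the rest keep their pseudo-label. Raising tau pushes more
    samples to 'unknown' (UNK rises, OS* drops), as in Fig. 7d."""
    probs = torch.softmax(known_logits, dim=1)
    conf, pseudo_label = probs.max(dim=1)
    is_known = conf >= tau  # boolean mask over the batch
    return pseudo_label, is_known
```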
| METHOD | TYPE | BACKBONE | Office-Home →Art (UNK / OS* / HOS) | Office-Home →Product (UNK / OS* / HOS) | DomainNet →Painting (UNK / OS* / HOS) | DomainNet →Real (UNK / OS* / HOS) | Avg. (UNK / OS* / HOS) |
|---|---|---|---|---|---|---|---|
| UM | Multi-Source | ResNet-50 | 72.8 / 63.3 / 67.7 | 78.7 / 79.3 / 79.0 | 77.9 / 57.3 / 66.0 | 74.9 / 75.2 / 75.1 | 76.1 / 68.8 / 72.0 |
| UM+AFG | Source-Free | ResNet-50 | 66.1 / 55.5 / 60.3 | 83.3 / 71.6 / 77.0 | 63.9 / 56.5 / 60.0 | 76.6 / 60.6 / 67.6 | 72.5 / 61.1 / 66.2 |
| UM+AFG | Source-Free | ViT-B/16 | 77.7 / 59.8 / 67.5 | 87.6 / 80.9 / 84.1 | 68.5 / 57.4 / 62.5 | 82.8 / 73.2 / 77.7 | 79.2 / 67.8 / 73.0 |

Table 4. Accuracy of the ViT-B/16 model fine-tuned on the Office-Home & DomainNet datasets under the 1-shot setting
# 4.5. Flexibility in Backbone Architecture

As presented in Sec. 3.3, AFG allows the target model to use a backbone architecture different from those of the pre-trained source models. Unlike model-centric methods, whose performance is tightly limited by the source model architecture, our method can therefore be applied to real-world problems in which each source model is trained with a different network, while leveraging the power of advanced backbones such as ViT [9] for the target model. Tab. 4 reveals a clear advantage of AFG when the target backbone is changed to ViT-B/16: the HOS scores under the source-free condition approach, and even outperform, the results obtained with source data. The same ResNet-50 backbone is used for the pre-trained source models across all experiments.

# 4.6. Advantage in Increasing Labeled Target Data

Fig. 6 shows the behavior of different methods when the number of labeled examples in the target domain increases from 1 to 10 per class on DomainNet with the ResNet-50 backbone. Cluster-based methods such as OSBP, MOSDANET, and HyMOS are eventually caught up by simple multi-class PU learning (MPU) as the sample size increases. In contrast, our method consistently outperforms the most competitive baseline, PUJE, across all sizes of labeled target data. Furthermore, as the size of $\hat{V}$ grows, the HOS achieved by UM+AFG in the source-free setting gradually approaches, and even surpasses, that of the methods using source data.

# 4.7. Sensitivity to PU, Scaling, and Threshold Coefficients

We show the sensitivity of our approach to the PU coefficient $\beta$, scaling factor $\nu$, and threshold $\tau$ in Fig. 7. Two observations can be drawn. First, the OS* score is relatively stable, and unknown recognition becomes more reliable for a larger coefficient $\beta$. Second, a larger $\nu$ focuses on the source domain that contributes the most error and ignores the others, while a smaller $\nu$ equalizes the importance of each domain, which can harm performance when a remarkable label shift exists among the source domains, as implied by Fig. 7c (the imbalance setting denotes a case where one source contains 20 classes while the other two sources take 5 classes each).
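The role of $\nu$ described above follows directly from the log-sum-exp aggregation in the objective of Algorithm 1. A toy computation with made-up per-source bound terms $U_i$ (the values below are purely illustrative) shows both regimes:

```python
import torch

U = torch.tensor([0.9, 0.3, 0.2])  # hypothetical per-source terms U_i
for nu in (0.1, 10.0):
    bound = torch.logsumexp(nu * U, dim=0) / nu
    alpha = torch.softmax(nu * U, dim=0)  # implicit source weights alpha_i
    print(f"nu={nu}: bound={bound.item():.2f}, alpha={alpha.numpy().round(3)}")
# nu=0.1 -> alpha ~ (0.35, 0.33, 0.32): all sources weighted almost equally
# nu=10  -> alpha ~ (0.997, 0.002, 0.001): the worst source dominates the bound
```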
![](images/baead833bffaee584e19426d1998eeead018567ac396ee5e16b320c5b2abac07.jpg)
(a) OSBP

![](images/d2c42b2a171394521b9bf8602debd0a11083dadfd7c65d32b226e4d545974e23.jpg)
(b) HyMOS

![](images/f15e018d85423071d5e641ae072ae0e60c6ba0d2528ecb2c32ee97ab6e125b84.jpg)
(c) PUJE

![](images/6ec085147406cebbe3d592a8c7a65096c4e7d31086a9c5de0aa4cc22b480166b.jpg)
(d) UM

![](images/7223fd9ce990de23916bc3816c1301a5e021cae6e1c49bb981d2921d50924a21.jpg)
(e) MOSDANET

![](images/3f9362d1ac223f824da343dcedf9a097dfb6bc69ee51c5583e18eef0313724f7.jpg)
(f) PGL

![](images/dd384efbc1b66ee6ffd33a478ba5bc330de776d7c1b837d36e0facd77a7d23eb.jpg)
(g) ANNA

![](images/229d3e1d4f2e004257d80251515d4af69f89ddd3649ecec6018adea93c384ae3.jpg)
(h) UM

Figure 4. t-SNE visualization of feature distributions on (a)-(d) the $\rightarrow$ Art task (Office-Home dataset) and (e)-(h) the $\rightarrow$ Real task (DomainNet dataset).

![](images/9f9bbd10253eda6ef4040bce1e232b300a2c7f34eed873b250f6a8e023b35dbf.jpg)
(a) robust against openness

![](images/8788f58a6b02ea949cce3625a03ac7f02f767a2ac7316ee280f69.jpg)
(b) stable convergence

Figure 5. (a) Performance comparison w.r.t. varying openness on the $\rightarrow$ Painting task from the DomainNet dataset; (b) convergence analysis on the $\rightarrow$ Art task from the Office-Home dataset compared to other baselines, with confidence intervals.

![](images/910dc367fec1548161cea36b899c04a7a077d125ae14d606c26f5d6fe125312a.jpg)
(a) $\rightarrow$ Clipart task

![](images/4674bd3fa5c9b459b0e1b137015581139b0181d2b1023e114cefbc8320254cbc.jpg)
(b) $\rightarrow$ Sketch task

Figure 6. Accuracy vs. the number of labeled target samples on DomainNet using the ResNet-50 backbone. Our method maintains a high level of performance across different sample sizes of labeled target data.

![](images/797e41538c541eec8477180d0c59dd7dda8051c6221e3cd0a25f29e3f626c2e0.jpg)

![](images/7874ae66c62a8388641219cdc0de361c3cbc0776ee47813622ef4eccd1aab535.jpg)

![](images/212f295cdfc1c696e682eaa82a99ff0decf163f161fdf18f0d73bfabf7838eee.jpg)

![](images/74305a12e89c6566e6ac7d20ad9e0e21e0845f102794138569e075a441842f7b.jpg)

Figure 7. (a)-(d) Sensitivity to the varying loss coefficients, verified on the Office-Home dataset: (a) and (b) sensitivity to $\beta$, (c) sensitivity to $\nu$, (d) sensitivity to $\tau$.

# 5. Conclusion

In this work, we addressed the semi-supervised open-set domain shift problem in multi-source cases with inconsistent label spaces by introducing a novel learning theory based on joint error and multi-class PU learning, which reduces the open-set risk; the generalization error is bounded by an extension of VC learning theory based on the uniform covering number. We further generalize our method to source-free scenarios via attention-based feature generation, which is computationally efficient while maintaining reliable performance. Extensive experiments on multiple domain adaptation benchmarks show that our model achieves the best performance with and without source data, compared with recent baseline methods, demonstrating the efficacy of the proposed approach.

# Acknowledgements

This research is partially supported by JST Moonshot R&D Grant Number JPMJPS2011, CREST Grant Number JPMJCR2015, and the Basic Research Grant (Super AI) of the Institute for AI and Beyond of the University of Tokyo.

# References

[1] Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Vaughan. A theory of learning from different domains. Machine Learning, 79:151-175, 2010. 1
[2] Silvia Bucci, Mohammad Reza Loghmani, and Tatiana Tommasi. On the effectiveness of image rotation for open set domain adaptation. In 16th European Conference on Computer Vision, pages 422-438. Springer International Publishing, 2020. 2, 6
[3] Silvia Bucci, Francesco Cappio Borlino, Barbara Caputo, and Tatiana Tommasi. Distance-based hyperspherical classification for multi-source open-set domain adaptation. In IEEE/CVF Winter Conference on Applications of Computer Vision, pages 1030-1039. IEEE, 2022. 6
[4] Pau Panareda Busto and Juergen Gall. Open set domain adaptation. In IEEE International Conference on Computer Vision, pages 754-763, 2017. 1
[5] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In Proceedings of the 37th International Conference on Machine Learning. JMLR.org, 2020.
5, 7
[6] Shivang Chopra, Suraj Kothawade, Houda Aynaou, and Aman Chadha. Source-free domain adaptation with diffusion-guided source data generation. CoRR, abs/2402.04929, 2024. 4
[7] Ekin D. Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V. Le. Randaugment: Practical automated data augmentation with a reduced search space. In IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 3008-3017, 2020. 5
[8] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. IEEE Conference on Computer Vision and Pattern Recognition, pages 248-255, 2009. 5
[9] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale, 2021. 7
[10] Zhen Fang, Jie Lu, Feng Liu, Junyu Xuan, and Guangquan Zhang. Open set domain adaptation: Theoretical bound and algorithm. IEEE Transactions on Neural Networks and Learning Systems, 32:4309-4322, 2020. 2
[11] Yaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. In Proceedings of the 32nd International Conference on Machine Learning, pages 1180-1189. JMLR.org, 2015. 1
[12] Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. Journal of Machine Learning Research, 17(1):2096-2030, 2016. 5
[13] Saurabh Garg, Sivaraman Balakrishnan, and Zachary C. Lipton. Domain adaptation under open set label shift. In Proceedings of the 36th International Conference on Neural Information Processing Systems. Curran Associates Inc., 2022. 3
[14] Ryan Gomes, Andreas Krause, and Pietro Perona. Discriminative clustering by regularized information maximization. In Proceedings of the 23rd International Conference on Neural Information Processing Systems, pages 775-783. Curran Associates Inc., 2010. 5
[15] Yves Grandvalet and Yoshua Bengio. Semi-supervised learning by entropy minimization. In Proceedings of the 17th International Conference on Neural Information Processing Systems, pages 529-536. MIT Press, 2004. 5
[16] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2015. 5, 7
[17] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9726-9735, 2020. 7
[18] Olivier J. Henaff, Aravind Srinivas, Jeffrey De Fauw, Ali Razavi, Carl Doersch, S. M. Ali Eslami, and Aaron Van Den Oord. Data-efficient image recognition with contrastive predictive coding. In Proceedings of the 37th International Conference on Machine Learning. JMLR.org, 2020. 7
[19] Judy Hoffman, Mehryar Mohri, and Ningshan Zhang. Algorithms and theory for multiple-source adaptation. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, pages 8256-8266. Curran Associates Inc., 2018.
1
[20] JoonHo Jang, Byeonghu Na, DongHyeok Shin, Mingi Ji, Kyungwoo Song, and Il-Chul Moon. Unknown-aware domain adversarial learning for open-set domain adaptation. In Proceedings of the 36th International Conference on Neural Information Processing Systems. Curran Associates Inc., 2022. 2
[21] Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. In 2nd International Conference on Learning Representations, 2014. 4
[22] Samuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning. In 5th International Conference on Learning Representations, 2017. 5
[23] Jichang Li, Guanbin Li, Yemin Shi, and Yizhou Yu. Cross-domain adaptive clustering for semi-supervised domain adaptation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2505-2514, 2021. 1, 5
[24] Jingjing Li, Zhiqi Yu, Zhekai Du, Lei Zhu, and Heng Tao Shen. A comprehensive survey on source-free domain adaptation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 46(8):5743-5762, 2024. 4
[25] Keqiuyin Li, Jie Lu, Hua Zuo, and Guangquan Zhang. Multi-source domain adaptation handling inaccurate label spaces. Neurocomputing, 594:127824, 2024. 2
[26] Rui Li, Qianfen Jiao, Wenming Cao, Hau-San Wong, and Si Wu. Model adaptation: Unsupervised domain adaptation without source data. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9638-9647, 2020. 4
[27] Wuyang Li, Jie Liu, Bo Han, and Yixuan Yuan. Adjustment and alignment for unbiased open set domain adaptation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 24110-24119, 2023. 2, 6, 7
[28] Jian Liang, Dapeng Hu, and Jiashi Feng. Do we really need to access the source data? Source hypothesis transfer for unsupervised domain adaptation. In Proceedings of the 37th International Conference on Machine Learning. JMLR.org, 2020. 4
[29] Jian Liang, Dapeng Hu, Yunbo Wang, Ran He, and Jiashi Feng. Source data-absent unsupervised domain adaptation through hypothesis transfer and labeling transfer. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(11):8602-8617, 2022. 4
[30] Hanxiao Liu, Karen Simonyan, and Yiming Yang. DARTS: Differentiable architecture search. In 7th International Conference on Learning Representations, 2019. 3
[31] Mohammad Reza Loghmani, Markus Vincze, and Tatiana Tommasi. Positive-unlabeled learning for open set domain adaptation. Pattern Recognition Letters, 136:198-204, 2020. 6
[32] Mingsheng Long, Yue Cao, Jianmin Wang, and Michael I. Jordan. Learning transferable features with deep adaptation networks. In Proceedings of the 32nd International Conference on Machine Learning, pages 97-105. JMLR.org, 2015. 1
[33] Mingsheng Long, Zhangjie Cao, Jianmin Wang, and Michael I. Jordan. Conditional adversarial domain adaptation. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, pages 1647-1657. Curran Associates Inc., 2018. 1
[34] Yadan Luo, Zijian Wang, Zi Huang, and Mahsa Baktashmotlagh. Progressive graph learning for open-set domain adaptation. In Proceedings of the 37th International Conference on Machine Learning, pages 6468-6478. PMLR, 2020. 2, 6, 7
[35] Yadan Luo, Zijian Wang, Zhuoxiao Chen, Zi Huang, and Mahsa Baktashmotlagh. Source-free progressive graph learning for open-set domain adaptation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(9):11240-11255, 2023. 4
[36] Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, and Bo Wang.
Moment matching for multi-source domain adaptation. In IEEE/CVF International Conference on Computer Vision, pages 1406-1415, 2019. 1, 2, 6 +[37] Md Mahmudur Rahman, Rameswar Panda, and Mohammad Arif Ul Alam. Semi-supervised domain adaptation with autoencoder via simultaneous learning. In IEEE/CVF Winter Conference on Applications of Computer Vision, pages 402-411, 2023. 1 + +[38] Sayan Rakshit, Dipesh Tamboli, Pragati Shuddhodhan Meshram, Biplab Banerjee, Gemma Roig, and Subhasis Chaudhuri. Multi-source open-set deep adversarial domain adaptation. In 16th European Conference on Computer Vision, pages 735-750. Springer International Publishing, 2020. 6 +[39] Kuniaki Saito, Kohei Watanabe, Yoshitaka Ushiku, and Tatsuya Harada. Maximum classifier discrepancy for unsupervised domain adaptation. IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3723-3732, 2017. 1 +[40] Kuniaki Saito, Shohei Yamamoto, Yoshitaka Ushiku, and Tatsuya Harada. Open set domain adaptation by backpropagation. In 15th European Conference on Computer Vision, pages 156-171. Springer International Publishing, 2018. 2, 6 +[41] Kuniaki Saito, Donghyun Kim, Stan Sclaroff, Trevor Darrell, and Kate Saenko. Semi-supervised domain adaptation via minimax entropy. In IEEE/CVF International Conference on Computer Vision, pages 8049-8057, 2019. 1, 6 +[42] Mehdi Sajjadi, Mehran Javanmardi, and Tolga Tasdizen. Regularization with stochastic transformations and perturbations for deep semi-supervised learning. In Proceedings of the 30th International Conference on Neural Information Processing Systems, pages 1171-1179. Curran Associates Inc., 2016. 5 +[43] Ankit Singh. Clda: contrastive learning for semi-supervised domain adaptation. In Proceedings of the 35th International Conference on Neural Information Processing Systems. Curran Associates Inc., 2021. 1, 5 +[44] Kihyuk Sohn, David Berthelot, Chun-Liang Li, Zizhao Zhang, Nicholas Carlini, Ekin D. Cubuk, Alex Kurakin, Han Zhang, and Colin Raffel. Fixmatch: simplifying semi-supervised learning with consistency and confidence. In Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., 2020. 5 +[45] Shiliang Sun, Honglei Shi, and Yuanbin Wu. A survey of multi-source domain adaptation. Information Fusion, 24: 84-92, 2015. 1 +[46] Hui Tang, Ke Chen, and Kui Jia. Unsupervised domain adaptation via structurally regularized deep clustering. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8722-8732, 2020. 5 +[47] Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. Adversarial discriminative domain adaptation. IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2962-2971, 2017. 1 +[48] Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9: 2579-2605, 2008. 7 +[49] Vladimir N. Vapnik. The nature of statistical learning theory. Springer-Verlag New York, Inc., 1995. 2 +[50] V. N. Vapnik and A. Ya. Chervonenkis. On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities, pages 11-30. Springer International Publishing, 2015. 2 +[51] Naveen Venkat, Jogendra Nath Kundu, Durgesh Kumar Singh, Ambareesh Revanur, and R. Venkatesh Babu. Your classifier can secretly suffice multi-source domain adaptation. In Proceedings of the 34th International Conference on Neural Information Processing Systems, 2020. 1 + +[52] Hemanth Venkateswara, Jose Eusebio, Shayok Chakraborty, and Sethuraman Panchanathan. 
Deep hashing network for unsupervised domain adaptation. In IEEE Conference on Computer Vision and Pattern Recognition, pages 5385-5394, 2017. 6 +[53] Hang Wang, Minghao Xu, Bingbing Ni, and Wenjun Zhang. Learning to combine: Knowledge aggregation for multi-source domain adaptation. In 16th European Conference on Computer Vision, pages 727-744. Springer-Verlag, 2020. 1 +[54] Qian Wang, Fanlin Meng, and Toby P. Breckon. Progressively select and reject pseudolabeled samples for open-set domain adaptation. IEEE Transactions on Artificial Intelligence, 5(9): 4403-4414, 2024. 2, 6 +[55] Zixin Wang, Yadan Luo, Peng-Fei Zhang, Sen Wang, and Zi Huang. Discovering domain disentanglement for generalized multi-source domain adaptation. In IEEE International Conference on Multimedia and Expo, pages 1–6. IEEE, 2022. 2 +[56] Jun Wu and Jingrui He. Domain adaptation with dynamic open-set targets. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 2039-2049. Association for Computing Machinery, 2022. 3 +[57] R. Xu, Z. Chen, W. Zuo, J. Yan, and L. Lin. Deep cocktail network: Multi-source unsupervised domain adaptation with category shift. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3964-3973. IEEE Computer Society, 2018. 1 +[58] Yixing Xu, Chang Xu, Chao Xu, and Dacheng Tao. Multi-positive and unlabeled learning. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, pages 3182-3188. AAAI Press, 2017. 2, 3, 6 +[59] Luyu Yang, Yan Wang, Mingfei Gao, Abhinav Shrivastava, Kilian Q. Weinberger, Wei-Lun Chao, and Ser-Nam Lim. Deep co-training with task decomposition for semi-supervised domain adaptation. In IEEE/CVF International Conference on Computer Vision, pages 8886-8896, 2021. 1, 5 +[60] S. Yang, Y. Wang, J. van de Weijer, L. Herranz, S. Jui, and J. Yang. Trust your good friends: Source-free domain adaptation by reciprocal neighborhood clustering. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(12):15883-15895, 2023. 4 +[61] Jeongbeen Yoon, Dahiyun Kang, and Minsu Cho. Semi-supervised domain adaptation via sample-to-sample self-distillation. In IEEE/CVF Winter Conference on Applications of Computer Vision, pages 1686-1695, 2022. 1 +[62] Dexuan Zhang, Thomas Westfechtel, and Tatsuya Harada. Unsupervised domain adaptation via minimized joint error. Transactions on Machine Learning Research, 2023. 1, 2, 6 +[63] Dexuan Zhang, Thomas Westfechtel, and Tatsuya Harada. Open-set domain adaptation via joint error based multi-class positive and unlabeled learning. In 18th European Conference on Computer Vision. Springer International Publishing, 2024. 2, 3, 5, 6 +[64] Yuchen Zhang, Tianle Liu, Mingsheng Long, and Michael Jordan. Bridging theory and algorithm for domain adaptation. In Proceedings of the 36th International Conference on Machine Learning, pages 7404-7413. PMLR, 2019. 1 + +[65] Han Zhao, Shanghang Zhang, Guanhang Wu, Joao P. Costeira, Jose M. F. Moura, and Geoffrey J. Gordon. Adversarial multiple source domain adaptation. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, pages 8568-8579. Curran Associates Inc., 2018. 1, 3 +[66] Han Zhao, Remi Tachet des Combes, Kun Zhang, and Geoffrey J. Gordon. On learning invariant representation for domain adaptation. In Proceedings of the 36th International Conference on Machine Learning, 2019. 2 +[67] Yongchun Zhu, Fuzhen Zhuang, and Deqing Wang. 
Aligning domain-specific distribution and classifier for cross-domain classification from multiple sources. In Proceedings of the 33rd AAAI Conference on Artificial Intelligence. AAAI Press, 2019. 1 \ No newline at end of file diff --git a/2025/A Theory of Learning Unified Model via Knowledge Integration from Label Space Varying Domains/images.zip b/2025/A Theory of Learning Unified Model via Knowledge Integration from Label Space Varying Domains/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..678bac7bc4b4be5000061bac5812f0afc810711a --- /dev/null +++ b/2025/A Theory of Learning Unified Model via Knowledge Integration from Label Space Varying Domains/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b76b7b0b0019b4f2825d470544527121cb6cbdfac674f501d0884032d37d9387 +size 512525 diff --git a/2025/A Theory of Learning Unified Model via Knowledge Integration from Label Space Varying Domains/layout.json b/2025/A Theory of Learning Unified Model via Knowledge Integration from Label Space Varying Domains/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..3aa7e009ec5cfc37862ba53e64c86d7903e4af84 --- /dev/null +++ b/2025/A Theory of Learning Unified Model via Knowledge Integration from Label Space Varying Domains/layout.json @@ -0,0 +1,12884 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 56, + 102, + 555, + 140 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 102, + 555, + 140 + ], + "spans": [ + { + "bbox": [ + 56, + 102, + 555, + 140 + ], + "type": "text", + "content": "A Theory of Learning Unified Model via Knowledge Integration from Label Space Varying Domains" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 145, + 160, + 465, + 189 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 145, + 160, + 465, + 189 + ], + "spans": [ + { + "bbox": [ + 145, + 160, + 465, + 189 + ], + "type": "text", + "content": "Dexuan Zhang1 Thomas Westfechtel1 Tatsuya Harada1,2 \n1The University of Tokyo, 2RIKEN" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 171, + 192, + 436, + 204 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 171, + 192, + 436, + 204 + ], + "spans": [ + { + "bbox": [ + 171, + 192, + 436, + 204 + ], + "type": "text", + "content": "{dexuan.zhang, thomas, harada}@mi.t.u-tokyo.ac.jp" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 151, + 231, + 200, + 243 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 151, + 231, + 200, + 243 + ], + "spans": [ + { + "bbox": [ + 151, + 231, + 200, + 243 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 54, + 257, + 297, + 413 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 257, + 297, + 413 + ], + "spans": [ + { + "bbox": [ + 54, + 257, + 297, + 413 + ], + "type": "text", + "content": "Existing domain adaptation systems can hardly be applied to real-world problems with new classes presenting at deployment time, especially regarding source-free scenarios where multiple source domains do not share the label space despite being given a few labeled target data. To address this, we consider a challenging problem: multi-source semi-supervised open-set domain adaptation and propose a learning theory via joint error, effectively tackling strong domain shift. 
To generalize the algorithm into source-free cases, we introduce a computationally efficient and architecture-flexible attention-based feature generation module. Extensive experiments on various data sets demonstrate the significant improvement of our proposed algorithm over baselines." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 453, + 135, + 464 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 453, + 135, + 464 + ], + "spans": [ + { + "bbox": [ + 55, + 453, + 135, + 464 + ], + "type": "text", + "content": "1. Introduction" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 56, + 474, + 297, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 474, + 297, + 713 + ], + "spans": [ + { + "bbox": [ + 56, + 474, + 297, + 713 + ], + "type": "text", + "content": "Generally, a supervised learning algorithm trained on a particular distribution of labeled samples (source domain) often fails to generalize when deployed on a new environment (target domain) in the presence of domain shift. In this regard, Domain adaptation (DA) [1] algorithms address the domain-shift problem by aligning the data distributions of the source and target domains through learning a domain-invariant feature space using statistical or adversarial learning approaches, which have made remarkable success. However, the problem setting still needs to be relaxed for real-world applications when we aim to integrate the knowledge learned from multiple source domains. Current DA methods can hardly cover the case for varying label space across domains, and the corresponding learning theory has yet to be proposed. Moreover, the solution is limited if this heterogeneous domain setting is extended to source-free situations where each source model may be trained on different network architectures and the label space. This work tackles a challenging multi-source semi-supervised open-set domain adaptation paradigm with varying label space, illustrated in Fig. 1." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 232, + 555, + 364 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 232, + 555, + 364 + ], + "spans": [ + { + "bbox": [ + 313, + 232, + 555, + 364 + ], + "type": "text", + "content": "In general, multi-source DA (MSDA) [45] and semi-supervised DA (SSDA) [41] are regarded as more practical than the single-source DA setup, considering that labeled data may come from various domains. More precisely, in these cases, the labeled samples can be differently distributed among themselves in addition to the usual domain shit between the source and the target domains. One naive approach to MSDA and SSDA is to group all labeled data into a single domain and deploy any unsupervised DA (UDA) method. However, such a trivial solution may lead to sub-optimal classification due to the gaps among labeled data [41]." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 365, + 556, + 544 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 365, + 556, + 544 + ], + "spans": [ + { + "bbox": [ + 313, + 365, + 556, + 544 + ], + "type": "text", + "content": "Most DA techniques assume the same label space in source and target domains, usually called the closed-set setting. The paradigm of closed-set DA has been substantially explored in the literature for UDA [11, 32, 33, 39, 47, 62, 64], SSDA [23, 37, 41, 43, 59, 61], and MSDA [19, 36, 51, 53, 57, 65, 67]. 
In contrast, the open-set DA (OSDA) [4] setting allows the presence of target-specific classes in addition to the shared classes. Such an open-set arrangement is more challenging due to a huge label shift across domains. The closed-set DA techniques cannot be directly applied in this case since these target-specific open-set samples, in turn, may jeopardize the domain alignment process. This work formalizes a more generalized problem where source domains may not share the label space, and the unlabeled target domain additionally contains novel classes." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 545, + 556, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 545, + 556, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 545, + 556, + 713 + ], + "type": "text", + "content": "Motivated by these, we consider a learning scenario in this work as the multi-source semi-supervised open-set DA (MSODA) where each source domain has a diverse label space, the labeled target domain consists of a few-shot of target data whose label space, i.e., the known class, is a superset of any source label space, and the unlabeled target domain contains data either from the known or a combined unknown class. Under this setup, the task is to classify the unlabeled data either into one of the known categories or a common unknown class. Such a setup invariably holds huge applications in fields relating to real-world visual perception like medical imaging and remote sensing, where acquisition of multi-domain data is feasible, and novel categories may show up abruptly [4]. Nonetheless, this MSODA problem" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "spans": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "text", + "content": "CVF" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "spans": [ + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "type": "text", + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "type": "text", + "content": "10142" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 296, + 203 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 296, + 203 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 296, + 203 + ], + "type": "text", + "content": "cannot be effectively solved by directly utilizing the single source open-set paradigm of [2, 10, 20, 27, 34, 40, 54, 63] mainly because of the following factors: i) the varying label space of multiple source domains becomes an obstacle to the traditional OSDA techniques, and ii) the unknown recognition can be non-trivial since the target domain may be related to each source domain in a different degree. 
Regarding multi-source models, recent works [25, 55] consider various label spaces but require largely shared common classes for all domains to align the features, where we accept source domains with zero overlap." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 205, + 296, + 396 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 205, + 296, + 396 + ], + "spans": [ + { + "bbox": [ + 55, + 205, + 296, + 396 + ], + "type": "text", + "content": "[36] argued that reducing the domain gap among the source domains leads to a more robust and effective MSDA model. This idea is particularly relevant to our problem setting since aligning the source domains among themselves inherently helps distinguish unknown from known categories in the target domain. Otherwise, the domain shift among the source domains may lead to an unstable alignment of unlabeled target data. Inspired by this idea, we combine the theoretical results from [62, 63] to build a learning theory that can align all source domains with the labeled target domain via joint error, which is crucial to dealing with label shift [66]. Then we introduce PU learning [58] to detect unknowns with an end-to-end algorithm such that the generalization error is guaranteed, unlike those methods applying closed-set DA after unknown separation. Our major contributions can be summarized as:" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 399, + 296, + 495 + ], + "type": "list", + "angle": 0, + "index": 5, + "blocks": [ + { + "bbox": [ + 56, + 399, + 296, + 435 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 399, + 296, + 435 + ], + "spans": [ + { + "bbox": [ + 56, + 399, + 296, + 435 + ], + "type": "text", + "content": "- We introduce a challenging problem setting of multi-source semi-supervised open-set DA with varying label space and propose a learning theory via joint error." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 56, + 436, + 294, + 459 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 436, + 294, + 459 + ], + "spans": [ + { + "bbox": [ + 56, + 436, + 294, + 459 + ], + "type": "text", + "content": "- We design a framework to generate labeled source features via an attention-based mechanism for source-free cases." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 56, + 459, + 296, + 495 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 459, + 296, + 495 + ], + "spans": [ + { + "bbox": [ + 56, + 459, + 296, + 495 + ], + "type": "text", + "content": "- We demonstrate the efficacy of our proposal through extensive experiments on two benchmark datasets where we perform thorough robustness analysis." + } + ] + } + ], + "index": 4 + } + ], + "sub_type": "text" + }, + { + "type": "image", + "bbox": [ + 82, + 510, + 272, + 636 + ], + "blocks": [ + { + "bbox": [ + 82, + 510, + 272, + 636 + ], + "lines": [ + { + "bbox": [ + 82, + 510, + 272, + 636 + ], + "spans": [ + { + "bbox": [ + 82, + 510, + 272, + 636 + ], + "type": "image", + "image_path": "9f82b9808ce1520db69fff4acb45a3db7e5889bd39e3c49c8ed40582377becd7.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 55, + 644, + 296, + 710 + ], + "lines": [ + { + "bbox": [ + 55, + 644, + 296, + 710 + ], + "spans": [ + { + "bbox": [ + 55, + 644, + 296, + 710 + ], + "type": "text", + "content": "Figure 1. Knowledge integration from heterogeneous domains can be considered a task for multi-source semi-supervised open-set domain adaptation. 
Given a few labeled target data as the key, we aim to build a unified target model from multiple source domains with varying label space, which can be applied to query data containing unknown categories." + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 6 + }, + { + "bbox": [ + 314, + 71, + 505, + 85 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 71, + 505, + 85 + ], + "spans": [ + { + "bbox": [ + 314, + 71, + 505, + 85 + ], + "type": "text", + "content": "2. Learning Theory of Unified Model" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 91, + 555, + 222 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 91, + 555, + 222 + ], + "spans": [ + { + "bbox": [ + 313, + 91, + 555, + 222 + ], + "type": "text", + "content": "In this section, we present the theory that transfers the knowledge from multiple source domains to the target domain given a few labeled target data under the open-set situation. First, we propose a target error bound via joint error based on the theoretical results from [62, 63]. Then, we derive the generalization error of the proposed learning theory based on the generalized Vapnik-Chervonenkis (VC) complexity [49, 50] of real-valued function space. Finally, we proceed with the empirical objective function as an upper bound of a trivial convex combination with the log-sum-exp trick, which leads to a smoother optimization process." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 223, + 555, + 402 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 223, + 555, + 402 + ], + "spans": [ + { + "bbox": [ + 313, + 223, + 555, + 402 + ], + "type": "text", + "content": "We consider the unified model (UM) as a solution to multi-source semi-supervised open-set domain adaptation (MSODA) tasks, where the learning algorithm has access to multiple source domains that may have different label spaces. A set of " + }, + { + "bbox": [ + 313, + 223, + 555, + 402 + ], + "type": "inline_equation", + "content": "n_i" + }, + { + "bbox": [ + 313, + 223, + 555, + 402 + ], + "type": "text", + "content": " (" + }, + { + "bbox": [ + 313, + 223, + 555, + 402 + ], + "type": "inline_equation", + "content": "i = 1,..,N" + }, + { + "bbox": [ + 313, + 223, + 555, + 402 + ], + "type": "text", + "content": ") labeled points " + }, + { + "bbox": [ + 313, + 223, + 555, + 402 + ], + "type": "inline_equation", + "content": "\\{(x_{s_i}^j, y_{s_i}^j) \\in (\\mathcal{X} \\subseteq \\mathbb{R}^D \\times \\mathcal{Y}_i' \\subseteq \\mathcal{Y}')\\}_{j=1}^{n_i}" + }, + { + "bbox": [ + 313, + 223, + 555, + 402 + ], + "type": "text", + "content": " sampled i.i.d. from each source domain " + }, + { + "bbox": [ + 313, + 223, + 555, + 402 + ], + "type": "inline_equation", + "content": "S_i'" + }, + { + "bbox": [ + 313, + 223, + 555, + 402 + ], + "type": "text", + "content": ". In addition, a set of " + }, + { + "bbox": [ + 313, + 223, + 555, + 402 + ], + "type": "inline_equation", + "content": "l" + }, + { + "bbox": [ + 313, + 223, + 555, + 402 + ], + "type": "text", + "content": " labeled points (few-shot) " + }, + { + "bbox": [ + 313, + 223, + 555, + 402 + ], + "type": "inline_equation", + "content": "\\{(x_v^j, y_v^j) \\in (\\mathcal{X} \\subseteq \\mathbb{R}^D \\times \\mathcal{Y}' = \\{1, ..., K-1\\})\\}_{j=1}^l" + }, + { + "bbox": [ + 313, + 223, + 555, + 402 + ], + "type": "text", + "content": " sampled i.i.d. 
from the labeled target domain " + }, + { + "bbox": [ + 313, + 223, + 555, + 402 + ], + "type": "inline_equation", + "content": "V'" + }, + { + "bbox": [ + 313, + 223, + 555, + 402 + ], + "type": "text", + "content": " is available during learning. We seek a hypothesis that can classify a set of " + }, + { + "bbox": [ + 313, + 223, + 555, + 402 + ], + "type": "inline_equation", + "content": "m" + }, + { + "bbox": [ + 313, + 223, + 555, + 402 + ], + "type": "text", + "content": " unlabeled points " + }, + { + "bbox": [ + 313, + 223, + 555, + 402 + ], + "type": "inline_equation", + "content": "\\{(x_t^j) \\in X \\subseteq \\mathbb{R}^D\\}_{j=1}^m" + }, + { + "bbox": [ + 313, + 223, + 555, + 402 + ], + "type": "text", + "content": " sampled i.i.d. from target domain " + }, + { + "bbox": [ + 313, + 223, + 555, + 402 + ], + "type": "inline_equation", + "content": "T" + }, + { + "bbox": [ + 313, + 223, + 555, + 402 + ], + "type": "text", + "content": " where " + }, + { + "bbox": [ + 313, + 223, + 555, + 402 + ], + "type": "inline_equation", + "content": "\\mathcal{Y} = \\{1,..,K\\}" + }, + { + "bbox": [ + 313, + 223, + 555, + 402 + ], + "type": "text", + "content": " containing unknown class " + }, + { + "bbox": [ + 313, + 223, + 555, + 402 + ], + "type": "inline_equation", + "content": "K" + }, + { + "bbox": [ + 313, + 223, + 555, + 402 + ], + "type": "text", + "content": ". Let " + }, + { + "bbox": [ + 313, + 223, + 555, + 402 + ], + "type": "inline_equation", + "content": "\\mathcal{K} = \\{k | k \\in \\mathbb{R}^K : \\sum_{y \\in \\mathcal{Y}} k[y] = 1, k[y] \\in [0,1]\\}" + }, + { + "bbox": [ + 313, + 223, + 555, + 402 + ], + "type": "text", + "content": " denotes output space and " + }, + { + "bbox": [ + 313, + 223, + 555, + 402 + ], + "type": "inline_equation", + "content": "S_i, V" + }, + { + "bbox": [ + 313, + 223, + 555, + 402 + ], + "type": "text", + "content": " indicate the complete domains with label space " + }, + { + "bbox": [ + 313, + 223, + 555, + 402 + ], + "type": "inline_equation", + "content": "\\mathcal{Y}" + }, + { + "bbox": [ + 313, + 223, + 555, + 402 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 409, + 556, + 589 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 409, + 556, + 589 + ], + "spans": [ + { + "bbox": [ + 313, + 409, + 556, + 589 + ], + "type": "text", + "content": "Theorem 2.1 (Target Error Bound for MSODA via Joint Error1). 
Given source " + }, + { + "bbox": [ + 313, + 409, + 556, + 589 + ], + "type": "inline_equation", + "content": "(S_i)" + }, + { + "bbox": [ + 313, + 409, + 556, + 589 + ], + "type": "text", + "content": ", labeled " + }, + { + "bbox": [ + 313, + 409, + 556, + 589 + ], + "type": "inline_equation", + "content": "(V)" + }, + { + "bbox": [ + 313, + 409, + 556, + 589 + ], + "type": "text", + "content": " and unlabeled target " + }, + { + "bbox": [ + 313, + 409, + 556, + 589 + ], + "type": "inline_equation", + "content": "(T)" + }, + { + "bbox": [ + 313, + 409, + 556, + 589 + ], + "type": "text", + "content": " domains that contain data from the unknown class, let " + }, + { + "bbox": [ + 313, + 409, + 556, + 589 + ], + "type": "inline_equation", + "content": "f_{S_i}, f_V, f_T: \\mathcal{X} \\to \\mathcal{K}" + }, + { + "bbox": [ + 313, + 409, + 556, + 589 + ], + "type": "text", + "content": " be the true labeling functions of " + }, + { + "bbox": [ + 313, + 409, + 556, + 589 + ], + "type": "inline_equation", + "content": "S_i, V, T" + }, + { + "bbox": [ + 313, + 409, + 556, + 589 + ], + "type": "text", + "content": " respectively whose outputs are one-hot vectors denoting the corresponding classes of inputs. Let " + }, + { + "bbox": [ + 313, + 409, + 556, + 589 + ], + "type": "inline_equation", + "content": "\\epsilon: \\mathcal{K} \\times \\mathcal{K} \\to \\mathbb{R}" + }, + { + "bbox": [ + 313, + 409, + 556, + 589 + ], + "type": "text", + "content": " denote a distance metric and " + }, + { + "bbox": [ + 313, + 409, + 556, + 589 + ], + "type": "inline_equation", + "content": "\\epsilon_D(f, f') := \\mathbb{E}_{x \\sim D} \\epsilon(f(x), f'(x))" + }, + { + "bbox": [ + 313, + 409, + 556, + 589 + ], + "type": "text", + "content": " measure the expected disagreement between the outputs of " + }, + { + "bbox": [ + 313, + 409, + 556, + 589 + ], + "type": "inline_equation", + "content": "f, f': \\mathcal{X} \\to \\mathcal{K}" + }, + { + "bbox": [ + 313, + 409, + 556, + 589 + ], + "type": "text", + "content": " over a distribution " + }, + { + "bbox": [ + 313, + 409, + 556, + 589 + ], + "type": "inline_equation", + "content": "D" + }, + { + "bbox": [ + 313, + 409, + 556, + 589 + ], + "type": "text", + "content": " on " + }, + { + "bbox": [ + 313, + 409, + 556, + 589 + ], + "type": "inline_equation", + "content": "\\mathcal{X}" + }, + { + "bbox": [ + 313, + 409, + 556, + 589 + ], + "type": "text", + "content": ". 
Regarding the source error of a hypothesis " + }, + { + "bbox": [ + 313, + 409, + 556, + 589 + ], + "type": "inline_equation", + "content": "h \\in \\mathcal{H}: \\mathcal{X} \\to \\mathcal{K}" + }, + { + "bbox": [ + 313, + 409, + 556, + 589 + ], + "type": "text", + "content": " where " + }, + { + "bbox": [ + 313, + 409, + 556, + 589 + ], + "type": "inline_equation", + "content": "h(x)[y]" + }, + { + "bbox": [ + 313, + 409, + 556, + 589 + ], + "type": "text", + "content": " indicates the probability of " + }, + { + "bbox": [ + 313, + 409, + 556, + 589 + ], + "type": "inline_equation", + "content": "x \\in \\mathcal{X}" + }, + { + "bbox": [ + 313, + 409, + 556, + 589 + ], + "type": "text", + "content": " labeled as " + }, + { + "bbox": [ + 313, + 409, + 556, + 589 + ], + "type": "inline_equation", + "content": "y \\in \\mathcal{Y}" + }, + { + "bbox": [ + 313, + 409, + 556, + 589 + ], + "type": "text", + "content": ", we use the shorthand " + }, + { + "bbox": [ + 313, + 409, + 556, + 589 + ], + "type": "inline_equation", + "content": "\\epsilon_{S_i}(h) := \\epsilon_{S_i}(h, f_{S_i})" + }, + { + "bbox": [ + 313, + 409, + 556, + 589 + ], + "type": "text", + "content": ". Similarly, we use " + }, + { + "bbox": [ + 313, + 409, + 556, + 589 + ], + "type": "inline_equation", + "content": "\\epsilon_V(h), \\epsilon_T(h)" + }, + { + "bbox": [ + 313, + 409, + 556, + 589 + ], + "type": "text", + "content": " to denote the labeled and unlabeled target error. For " + }, + { + "bbox": [ + 313, + 409, + 556, + 589 + ], + "type": "inline_equation", + "content": "\\forall h, f_{S_i}', f_V', f_T' \\in \\mathcal{H}: \\mathcal{X} \\to \\mathcal{K}" + }, + { + "bbox": [ + 313, + 409, + 556, + 589 + ], + "type": "text", + "content": ", the expected target error is bounded," + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 317, + 602, + 555, + 661 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 602, + 555, + 661 + ], + "spans": [ + { + "bbox": [ + 317, + 602, + 555, + 661 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} 2 \\epsilon_ {T} (h) \\leq \\epsilon_ {V} (h) + \\sum_ {i = 1} ^ {N} \\alpha_ {i} U _ {i} (h), \\quad s. t. 
\\quad \\sum_ {i = 1} ^ {N} \\alpha_ {i} = 1 \\\\ = \\epsilon_ {V} (h) + \\sum_ {i = 1} ^ {N} \\alpha_ {i} \\left[ \\epsilon_ {S _ {i}} (h) + 2 D _ {S _ {i}, V, T} \\left(f _ {S _ {i}} ^ {*}, f _ {V} ^ {*}, f _ {T} ^ {*}, h\\right) + 2 \\theta_ {i} \\right], \\tag {1} \\\\ \\end{array}", + "image_path": "0e033da83186d154776a32a9c6eb20904df0cc829f4ba20204565483f48c92ac.jpg" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 332, + 664, + 554, + 702 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 332, + 664, + 554, + 702 + ], + "spans": [ + { + "bbox": [ + 332, + 664, + 554, + 702 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} 2 D _ {S _ {i}, V, T} (f _ {S _ {i}} ^ {*}, f _ {V} ^ {*}, f _ {T} ^ {*}, h) \\\\ = \\epsilon_ {T} (f _ {S _ {i}} ^ {*}, f _ {T} ^ {*}) + \\epsilon_ {T} (f _ {V} ^ {*}, f _ {T} ^ {*}) + \\epsilon_ {T} (h, f _ {S _ {i}} ^ {*}) + \\epsilon_ {T} (h, f _ {V} ^ {*}) \\\\ + \\epsilon_ {V} (f _ {S _ {i}} ^ {*}, f _ {V} ^ {*}) + \\epsilon_ {S _ {i}} (f _ {V} ^ {*}, f _ {S _ {i}} ^ {*}) - \\epsilon_ {V} (h, f _ {S _ {i}} ^ {*}) - \\epsilon_ {S _ {i}} (h, f _ {V} ^ {*}) \\tag {2} \\\\ \\end{array}", + "image_path": "40ef29857365dfa6844243febf6602fc900f5f36203e0bd6fd8c28e62d7cc9a9.jpg" + } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "text", + "content": "10143" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "bbox": [ + 66, + 91, + 296, + 144 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 91, + 296, + 144 + ], + "spans": [ + { + "bbox": [ + 66, + 91, + 296, + 144 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} \\theta_ {i} = \\underbrace {\\epsilon_ {S _ {i}} (f _ {S _ {i}} , f _ {S _ {i}} ^ {*}) / 2 + \\epsilon_ {V} (f _ {S _ {i}} , f _ {S _ {i}} ^ {*}) + \\epsilon_ {T} (f _ {S _ {i}} , f _ {S _ {i}} ^ {*})} _ {\\theta_ {S} ^ {i}} \\\\ + \\underbrace {\\epsilon_ {V} (f _ {V} , f _ {V} ^ {*}) / 2 + \\epsilon_ {S _ {i}} (f _ {V} , f _ {V} ^ {*}) + \\epsilon_ {T} (f _ {V} , f _ {V} ^ {*})} _ {\\theta_ {V} ^ {i}} + \\underbrace {\\epsilon_ {T} (f _ {T} , f _ {T} ^ {*})} _ {\\theta_ {T} ^ {i}} \\quad (3) \\\\ \\end{array}", + "image_path": "b4475a11795aa4173c686a0caa4e3bd18a8fdeb82406500234aa7e64f89ea6d2.jpg" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 152, + 296, + 189 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 152, + 296, + 189 + ], + "spans": [ + { + "bbox": [ + 55, + 152, + 296, + 189 + ], + "type": "text", + "content": "In the following, we discuss the approach to obtain generalization guarantees for multiple source domain adaptation in classification settings by a trivial union-bound argument." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 196, + 296, + 258 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 196, + 296, + 258 + ], + "spans": [ + { + "bbox": [ + 55, + 196, + 296, + 258 + ], + "type": "text", + "content": "Assumption 2.2 (Substitutes for True Labeling Functions). 
For finite training data " + }, + { + "bbox": [ + 55, + 196, + 296, + 258 + ], + "type": "inline_equation", + "content": "\\{\\hat{S}_i\\}_{i = 1}^N,\\hat{T},\\hat{V}" + }, + { + "bbox": [ + 55, + 196, + 296, + 258 + ], + "type": "text", + "content": " , we assume there exist approximated labeling functions " + }, + { + "bbox": [ + 55, + 196, + 296, + 258 + ], + "type": "inline_equation", + "content": "\\{f_{S_i}^*\\}_{i = 1}^N,f_T^*,f_V^*" + }, + { + "bbox": [ + 55, + 196, + 296, + 258 + ], + "type": "text", + "content": " that can lead the empirical deviation " + }, + { + "bbox": [ + 55, + 196, + 296, + 258 + ], + "type": "inline_equation", + "content": "\\sum_{i}\\alpha_{i}\\hat{\\theta}_{i}" + }, + { + "bbox": [ + 55, + 196, + 296, + 258 + ], + "type": "text", + "content": " very close to zero such that it can be ignored during the practical learning process." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 54, + 264, + 296, + 381 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 264, + 296, + 381 + ], + "spans": [ + { + "bbox": [ + 54, + 264, + 296, + 381 + ], + "type": "text", + "content": "Theorem 2.3 (Generalization Error" + }, + { + "bbox": [ + 54, + 264, + 296, + 381 + ], + "type": "inline_equation", + "content": "^1" + }, + { + "bbox": [ + 54, + 264, + 296, + 381 + ], + "type": "text", + "content": "). Let " + }, + { + "bbox": [ + 54, + 264, + 296, + 381 + ], + "type": "inline_equation", + "content": "\\hat{S}_i, \\hat{V}, \\hat{T}" + }, + { + "bbox": [ + 54, + 264, + 296, + 381 + ], + "type": "text", + "content": " denote the empirical distributions generated with " + }, + { + "bbox": [ + 54, + 264, + 296, + 381 + ], + "type": "inline_equation", + "content": "m" + }, + { + "bbox": [ + 54, + 264, + 296, + 381 + ], + "type": "text", + "content": " i.i.d. samples from each domain. Let " + }, + { + "bbox": [ + 54, + 264, + 296, + 381 + ], + "type": "inline_equation", + "content": "\\mathcal{F} = \\{f(x) = \\epsilon(h(x), h'(x)): \\mathcal{X} \\to [0, M] | h, h' \\in \\mathcal{H}\\}" + }, + { + "bbox": [ + 54, + 264, + 296, + 381 + ], + "type": "text", + "content": " be a function space with complexity measured by uniform covering number " + }, + { + "bbox": [ + 54, + 264, + 296, + 381 + ], + "type": "inline_equation", + "content": "\\mathcal{N}_1(\\xi, \\mathcal{F}, m)" + }, + { + "bbox": [ + 54, + 264, + 296, + 381 + ], + "type": "text", + "content": ". 
Let " + }, + { + "bbox": [ + 54, + 264, + 296, + 381 + ], + "type": "inline_equation", + "content": "\\alpha_i = \\frac{\\exp(\\nu \\hat{U}_i(h))}{\\sum_j \\exp(\\nu \\hat{U}_j(h))}" + }, + { + "bbox": [ + 54, + 264, + 296, + 381 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 54, + 264, + 296, + 381 + ], + "type": "inline_equation", + "content": "\\nu > 0" + }, + { + "bbox": [ + 54, + 264, + 296, + 381 + ], + "type": "text", + "content": ", given Jensen's & Cauchy's inequality and Assumption 2.2, there exist " + }, + { + "bbox": [ + 54, + 264, + 296, + 381 + ], + "type": "inline_equation", + "content": "f_{S_i}^* \\in \\mathcal{H}_{S_i} \\subseteq \\mathcal{H}" + }, + { + "bbox": [ + 54, + 264, + 296, + 381 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 54, + 264, + 296, + 381 + ], + "type": "inline_equation", + "content": "f_V^* \\in \\mathcal{H}_V \\subseteq \\mathcal{H}" + }, + { + "bbox": [ + 54, + 264, + 296, + 381 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 54, + 264, + 296, + 381 + ], + "type": "inline_equation", + "content": "f_T^* \\in \\mathcal{H}_T \\subseteq \\mathcal{H}" + }, + { + "bbox": [ + 54, + 264, + 296, + 381 + ], + "type": "text", + "content": ", such that for " + }, + { + "bbox": [ + 54, + 264, + 296, + 381 + ], + "type": "inline_equation", + "content": "0 < \\delta < 1" + }, + { + "bbox": [ + 54, + 264, + 296, + 381 + ], + "type": "text", + "content": ", with probability at least " + }, + { + "bbox": [ + 54, + 264, + 296, + 381 + ], + "type": "inline_equation", + "content": "1 - \\delta" + }, + { + "bbox": [ + 54, + 264, + 296, + 381 + ], + "type": "text", + "content": ", for " + }, + { + "bbox": [ + 54, + 264, + 296, + 381 + ], + "type": "inline_equation", + "content": "\\forall h \\in \\mathcal{H}" + }, + { + "bbox": [ + 54, + 264, + 296, + 381 + ], + "type": "text", + "content": ":" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 58, + 395, + 308, + 464 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 58, + 395, + 308, + 464 + ], + "spans": [ + { + "bbox": [ + 58, + 395, + 308, + 464 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} \\epsilon_ {T} (h) \\leq \\frac {1}{2} [ \\underbrace {\\epsilon_ {\\hat {V}} (h)} _ {L _ {c l s} ^ {V} (h)} + \\frac {1}{\\nu} \\log \\sum_ {i = 1} ^ {N} \\exp (\\nu \\hat {U} _ {i} (h)) ] \\\\ + \\mathcal {O} \\left(\\inf _ {\\sqrt {\\frac {2}{m}} \\leq \\gamma \\leq M} (\\gamma + \\int_ {\\gamma} ^ {M} \\sqrt {\\frac {1}{m} \\log \\frac {2 (1 1 N + 6) \\mathcal {N} _ {1} (\\frac {\\xi}{8} , \\mathcal {F} , 2 m)}{\\delta}} d \\xi)\\right) \\tag {4} \\\\ \\end{array}", + "image_path": "40ea83085df739697ccd169ca618a1a06231d857cf20ca936fe4755bd3729aec.jpg" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 58, + 468, + 296, + 496 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 58, + 468, + 296, + 496 + ], + "spans": [ + { + "bbox": [ + 58, + 468, + 296, + 496 + ], + "type": "interline_equation", + "content": "\\hat {U} _ {i} (h) = \\underbrace {\\epsilon_ {\\hat {S} _ {i}} (h)} _ {L _ {c l s} ^ {S _ {i}} (h)} + 2 \\underbrace {D _ {\\hat {S} _ {i} , \\hat {V} , \\hat {T}} \\left(f _ {S _ {i}} ^ {*}, f _ {V} ^ {*}, f _ {T} ^ {*}, h\\right)} _ {L _ {d i s} ^ {i} \\left(f _ {S _ {i}} ^ {*}, f _ {V} ^ {*}, f _ {T} ^ {*}, h\\right)} \\tag {5}", + "image_path": "3c1eb47f36f7ba559e5c678bb819de72f760ff6ff29eea6e19b04ec223ff83c0.jpg" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 504, + 
295, + 599 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 504, + 295, + 599 + ], + "spans": [ + { + "bbox": [ + 55, + 504, + 295, + 599 + ], + "type": "text", + "content": "The log-sum-exp trick [30] yields an upper bound of the convex combination as Theorem 2.3, where we no longer need to heuristically decide the value of " + }, + { + "bbox": [ + 55, + 504, + 295, + 599 + ], + "type": "inline_equation", + "content": "\\alpha_{i}" + }, + { + "bbox": [ + 55, + 504, + 295, + 599 + ], + "type": "text", + "content": " in the unified model. It smooths the objective and provides a principled and adaptive way to combine all the gradients from the " + }, + { + "bbox": [ + 55, + 504, + 295, + 599 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 55, + 504, + 295, + 599 + ], + "type": "text", + "content": " source domains. This often leads to better generalizations in practice because of the ensemble effect of multiple sources implied by the upper bound [65]." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 600, + 296, + 696 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 600, + 296, + 696 + ], + "spans": [ + { + "bbox": [ + 55, + 600, + 296, + 696 + ], + "type": "text", + "content": "According to [13, 56, 63], for " + }, + { + "bbox": [ + 55, + 600, + 296, + 696 + ], + "type": "inline_equation", + "content": "\\forall f, f': \\mathcal{X} \\to \\mathcal{K}, \\epsilon_V(f, f')" + }, + { + "bbox": [ + 55, + 600, + 296, + 696 + ], + "type": "text", + "content": " can be approximated by the expectation on " + }, + { + "bbox": [ + 55, + 600, + 296, + 696 + ], + "type": "inline_equation", + "content": "V'" + }, + { + "bbox": [ + 55, + 600, + 296, + 696 + ], + "type": "text", + "content": " based on PU learning. Moreover, source data " + }, + { + "bbox": [ + 55, + 600, + 296, + 696 + ], + "type": "inline_equation", + "content": "S_i'" + }, + { + "bbox": [ + 55, + 600, + 296, + 696 + ], + "type": "text", + "content": " may be unavailable due to privacy concerns (e.g., medical data) during the adaptation phase. To tackle this source-free domain adaptation (SFDA) problem, we propose a source features generation pipeline based on the attention mechanism, which can transfer the knowledge between models with different architectures." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 314, + 71, + 397, + 85 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 71, + 397, + 85 + ], + "spans": [ + { + "bbox": [ + 314, + 71, + 397, + 85 + ], + "type": "text", + "content": "3. Methodology" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 91, + 555, + 175 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 91, + 555, + 175 + ], + "spans": [ + { + "bbox": [ + 313, + 91, + 555, + 175 + ], + "type": "text", + "content": "In this section, we first recall several preliminaries crucial to the learning algorithm of open-set domain adaptation. Then, we propose a pipeline to transfer the knowledge between models with different architectures based on the attention mechanism to recover feature space under the source-free setting. Finally, we define constrained hypothesis space to obtain a rigorous objective function." 
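As a reading aid for the extracted theorem above: a minimal PyTorch-style sketch (function and variable names are mine, not the paper's released code) of the log-sum-exp combination from Theorem 2.3, whose gradient automatically realizes the softmax weights alpha_i, so no weight has to be chosen heuristically:

```python
import torch

def lse_source_bound(U_hat: torch.Tensor, nu: float = 1.0) -> torch.Tensor:
    """Smooth surrogate (1/nu) * log(sum_i exp(nu * U_hat_i)) from Theorem 2.3.

    U_hat holds the per-source terms U_hat_i(h), shape (N,). Differentiating
    this term spreads gradients over the N sources with weights
    alpha_i = softmax(nu * U_hat)_i, matching the alpha_i defined in the text.
    """
    return torch.logsumexp(nu * U_hat, dim=0) / nu

# Example: three source domains with different empirical risks (made-up values).
U_hat = torch.tensor([0.8, 1.2, 0.5], requires_grad=True)
bound = lse_source_bound(U_hat, nu=2.0)
bound.backward()
print(U_hat.grad)  # equals softmax(nu * U_hat): the implicit, adaptive alpha_i
```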
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 182, + 457, + 194 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 182, + 457, + 194 + ], + "spans": [ + { + "bbox": [ + 313, + 182, + 457, + 194 + ], + "type": "text", + "content": "3.1. Discrepancy Measurement" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 200, + 555, + 248 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 200, + 555, + 248 + ], + "spans": [ + { + "bbox": [ + 313, + 200, + 555, + 248 + ], + "type": "text", + "content": "As introduced in [63], we recall the definition of Open-set Margin Discrepancy and Unknown Predictive Discrepancy, which serve as key components in bridging the gap between the theory and the algorithm for open-set domain adaptation." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 252, + 555, + 286 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 252, + 555, + 286 + ], + "spans": [ + { + "bbox": [ + 313, + 252, + 555, + 286 + ], + "type": "text", + "content": "Definition 3.1 (Open-set Margin Discrepancy). Let " + }, + { + "bbox": [ + 313, + 252, + 555, + 286 + ], + "type": "inline_equation", + "content": "y, y'" + }, + { + "bbox": [ + 313, + 252, + 555, + 286 + ], + "type": "text", + "content": " denote outputs of " + }, + { + "bbox": [ + 313, + 252, + 555, + 286 + ], + "type": "inline_equation", + "content": "f, f': \\mathcal{X} \\to \\mathcal{K}" + }, + { + "bbox": [ + 313, + 252, + 555, + 286 + ], + "type": "text", + "content": " where " + }, + { + "bbox": [ + 313, + 252, + 555, + 286 + ], + "type": "inline_equation", + "content": "y = l(f(x)), l(f'(x)) = y'" + }, + { + "bbox": [ + 313, + 252, + 555, + 286 + ], + "type": "text", + "content": " given the induced labeling function:" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 386, + 291, + 555, + 306 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 386, + 291, + 555, + 306 + ], + "spans": [ + { + "bbox": [ + 386, + 291, + 555, + 306 + ], + "type": "interline_equation", + "content": "l \\circ f: x \\rightarrow \\underset{y \\in \\mathcal{Y}}{\\arg\\max}\\, f(x)[y] \\tag{6}", + "image_path": "bb8583307ba9aa482d90d33c71cae83c20b40694cc90222e06fd6e8e17759c1b.jpg" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 314, + 312, + 554, + 335 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 312, + 554, + 335 + ], + "spans": [ + { + "bbox": [ + 314, + 312, + 554, + 335 + ], + "type": "text", + "content": "The Open-set Margin Discrepancy between two functions " + }, + { + "bbox": [ + 314, + 312, + 554, + 335 + ], + "type": "inline_equation", + "content": "f, f'" + }, + { + "bbox": [ + 314, + 312, + 554, + 335 + ], + "type": "text", + "content": " over a distribution " + }, + { + "bbox": [ + 314, + 312, + 554, + 335 + ], + "type": "inline_equation", + "content": "D" + }, + { + "bbox": [ + 314, + 312, + 554, + 335 + ], + "type": "text", + "content": " is given by:" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 322, + 342, + 555, + 378 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 322, + 342, + 555, + 378 + ], + "spans": [ + { + "bbox": [ + 322, + 342, + 555, + 378 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} \\epsilon_{D}(f, f^{\\prime}) = \\mathbb{E}_{x \\sim D}[\\operatorname{omd}(f(x), f^{\\prime}(x))] \\tag{7} \\\\ \\operatorname{omd}(f(x), f^{\\prime}(x)) = \\max\\big(\\left|\\log(1 - f(x)[y]) - \\log(1 - f^{\\prime}(x)[y])\\right|, \\\\ \\left|\\log(1 - f(x)[y^{\\prime}]) - \\log(1 - f^{\\prime}(x)[y^{\\prime}])\\right|\\big) \\tag{8} \\\\ \\end{array}", + "image_path": "69b919164a9937feb54e146eb05bdc944ce5e380ebc4f3c2dd0a936595866665.jpg" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 313, + 383, + 555, + 492 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 383, + 555, + 492 + ], + "spans": [ + { + "bbox": [ + 313, + 383, + 555, + 492 + ], + "type": "text", + "content": "Definition 3.2 (Unknown Predictive Discrepancy). Let " + }, + { + "bbox": [ + 313, + 383, + 555, + 492 + ], + "type": "inline_equation", + "content": "v: \\mathcal{K} \\times \\mathcal{K} \\to \\mathbb{R}" + }, + { + "bbox": [ + 313, + 383, + 555, + 492 + ], + "type": "text", + "content": " denote the Unknown Predictive Discrepancy as a distance metric and " + }, + { + "bbox": [ + 313, + 383, + 555, + 492 + ], + "type": "inline_equation", + "content": "v_{D}(f, f') := \\mathbb{E}_{x \\sim D} v(f(x), f'(x))" + }, + { + "bbox": [ + 313, + 383, + 555, + 492 + ], + "type": "text", + "content": " measure the expected disagreement between the " + }, + { + "bbox": [ + 313, + 383, + 555, + 492 + ], + "type": "inline_equation", + "content": "K" + }, + { + "bbox": [ + 313, + 383, + 555, + 492 + ], + "type": "text", + "content": "-th outputs of " + }, + { + "bbox": [ + 313, + 383, + 555, + 492 + ], + "type": "inline_equation", + "content": "f, f': \\mathcal{X} \\to \\mathcal{K}" + }, + { + "bbox": [ + 313, + 383, + 555, + 492 + ], + "type": "text", + "content": " over a distribution " + }, + { + "bbox": [ + 313, + 383, + 555, + 492 + ], + "type": "inline_equation", + "content": "D" + }, + { + "bbox": [ + 313, + 383, + 555, + 492 + ], + "type": "text", + "content": " on " + }, + { + "bbox": [ + 313, + 383, + 555, + 492 + ], + "type": "inline_equation", + "content": "\\mathcal{X}" + }, + { + "bbox": [ + 313, + 383, + 555, + 492 + ], + "type": "text", + "content": ". Let " + }, + { + "bbox": [ + 313, + 383, + 555, + 492 + ], + "type": "inline_equation", + "content": "e^{K}: \\mathcal{X} \\to [0, \\dots, 1] \\in \\mathcal{K}" + }, + { + "bbox": [ + 313, + 383, + 555, + 492 + ], + "type": "text", + "content": " denote a function that can predict any input as the unknown class. The deviation from " + }, + { + "bbox": [ + 313, + 383, + 555, + 492 + ], + "type": "inline_equation", + "content": "e^{K}" + }, + { + "bbox": [ + 313, + 383, + 555, + 492 + ], + "type": "text", + "content": " for a hypothesis " + }, + { + "bbox": [ + 313, + 383, + 555, + 492 + ], + "type": "inline_equation", + "content": "h \\in \\mathcal{H}" + }, + { + "bbox": [ + 313, + 383, + 555, + 492 + ], + "type": "text", + "content": " is further referred to as the shorthand " + }, + { + "bbox": [ + 313, + 383, + 555, + 492 + ], + "type": "inline_equation", + "content": "v_{D}(h) := v_{D}(h, e^{K})" + }, + { + "bbox": [ + 313, + 383, + 555, + 492 + ], + "type": "text", + "content": " that measures the probability that samples from " + }, + { + "bbox": [ + 313, + 383, + 555, + 492 + ], + "type": "inline_equation", + "content": "D" + }, + { + "bbox": [ + 313, + 383, + 555, + 492 + ], + "type": "text", + "content": " are not classified as unknown."
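As a reading aid for Definitions 3.1 and 3.2, a small sketch (my own naming; it assumes f(x), f'(x) are probability vectors over K classes, with the last entry reserved for the unknown class):

```python
import torch

def omd(p: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
    """Open-set margin discrepancy omd(f(x), f'(x)) of Eq. (8) for one sample.

    p, q: probability vectors f(x), f'(x); y, y' are the labels induced
    by l∘f and l∘f' via argmax, as in Eq. (6).
    """
    y, y_prime = int(p.argmax()), int(q.argmax())
    d_y = (torch.log1p(-p[y]) - torch.log1p(-q[y])).abs()        # log1p(-t) = log(1 - t)
    d_yp = (torch.log1p(-p[y_prime]) - torch.log1p(-q[y_prime])).abs()
    return torch.maximum(d_y, d_yp)

def upd(p: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
    """Unknown predictive discrepancy v(f(x), f'(x)): disagreement on the
    K-th (unknown) output only, cf. Eq. (9), which follows below."""
    return (torch.log1p(-p[-1]) - torch.log1p(-q[-1])).abs()
```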
+ } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 334, + 498, + 555, + 510 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 334, + 498, + 555, + 510 + ], + "spans": [ + { + "bbox": [ + 334, + 498, + 555, + 510 + ], + "type": "interline_equation", + "content": "v _ {D} (f, f ^ {\\prime}) = \\mathbb {E} _ {x \\sim D} | \\log (1 - f (x) [ K ]) - \\log (1 - f ^ {\\prime} (x) [ K ]) | \\tag {9}", + "image_path": "6896cb1ff27eb6fefbc502409e053bbb5d2ddd6371416b1eeb267dd5cd85e3c9.jpg" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 313, + 517, + 541, + 529 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 517, + 541, + 529 + ], + "spans": [ + { + "bbox": [ + 313, + 517, + 541, + 529 + ], + "type": "text", + "content": "3.2. Inference on Expectation with PU Learning" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 313, + 535, + 554, + 595 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 535, + 554, + 595 + ], + "spans": [ + { + "bbox": [ + 313, + 535, + 554, + 595 + ], + "type": "text", + "content": "In this section, we introduce the techniques from PU learning [58] to estimate the expectation over source domain " + }, + { + "bbox": [ + 313, + 535, + 554, + 595 + ], + "type": "inline_equation", + "content": "S_{i}" + }, + { + "bbox": [ + 313, + 535, + 554, + 595 + ], + "type": "text", + "content": " by the incomplete source domain " + }, + { + "bbox": [ + 313, + 535, + 554, + 595 + ], + "type": "inline_equation", + "content": "S_{i}^{\\prime}" + }, + { + "bbox": [ + 313, + 535, + 554, + 595 + ], + "type": "text", + "content": " and target domain " + }, + { + "bbox": [ + 313, + 535, + 554, + 595 + ], + "type": "inline_equation", + "content": "T" + }, + { + "bbox": [ + 313, + 535, + 554, + 595 + ], + "type": "text", + "content": " for the open-set scenario. The expectation over " + }, + { + "bbox": [ + 313, + 535, + 554, + 595 + ], + "type": "inline_equation", + "content": "V" + }, + { + "bbox": [ + 313, + 535, + 554, + 595 + ], + "type": "text", + "content": " can be derived analogously." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 313, + 600, + 555, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 600, + 555, + 714 + ], + "spans": [ + { + "bbox": [ + 313, + 600, + 555, + 714 + ], + "type": "text", + "content": "Assumption 3.3. Let " + }, + { + "bbox": [ + 313, + 600, + 555, + 714 + ], + "type": "inline_equation", + "content": "S_{i}^{k} = P_{S_{i}}(x|y = k), V^{k} = P_{V}(x|y = k), T^{k} = P_{T}(x|y = k)" + }, + { + "bbox": [ + 313, + 600, + 555, + 714 + ], + "type": "text", + "content": " denote class conditional distributions, " + }, + { + "bbox": [ + 313, + 600, + 555, + 714 + ], + "type": "inline_equation", + "content": "S_{i}^{\\backslash K} = P_{S_{i}}(x|y \\neq K), V' = P_{V}(x|y \\neq K), T' = P_{T}(x|y \\neq K)" + }, + { + "bbox": [ + 313, + 600, + 555, + 714 + ], + "type": "text", + "content": " indicate incomplete domains that do not contain unknown class " + }, + { + "bbox": [ + 313, + 600, + 555, + 714 + ], + "type": "inline_equation", + "content": "S_{i}^{K}, V^{K}, T^{K}" + }, + { + "bbox": [ + 313, + 600, + 555, + 714 + ], + "type": "text", + "content": ". 
Given a feature extractor " + }, + { + "bbox": [ + 313, + 600, + 555, + 714 + ], + "type": "inline_equation", + "content": "g: \\mathcal{X} \\subseteq \\mathbb{R}^{D} \\to \\mathcal{Z} \\subseteq \\mathbb{R}^{F}" + }, + { + "bbox": [ + 313, + 600, + 555, + 714 + ], + "type": "text", + "content": ", assume that the feature space can be aligned by DA techniques such that " + }, + { + "bbox": [ + 313, + 600, + 555, + 714 + ], + "type": "inline_equation", + "content": "Z^{K} = P_{S_{i}^{K}}(z) = P_{V^{K}}(z) = P_{T^{K}}(z), Z' = P_{S_{i}^{\\backslash K}}(z) = P_{V'}(z) = P_{T'}(z)" + }, + { + "bbox": [ + 313, + 600, + 555, + 714 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 20 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 66, + 702, + 193, + 713 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 702, + 193, + 713 + ], + "spans": [ + { + "bbox": [ + 66, + 702, + 193, + 713 + ], + "type": "text", + "content": "1 see proofs in supplementary material." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "type": "text", + "content": "10144" + } + ] + } + ], + "index": 22 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 71, + 298, + 184 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 71, + 298, + 184 + ], + "spans": [ + { + "bbox": [ + 55, + 71, + 298, + 184 + ], + "type": "text", + "content": "Lemma 3.4 (PU Estimation" + }, + { + "bbox": [ + 55, + 71, + 298, + 184 + ], + "type": "inline_equation", + "content": "^1" + }, + { + "bbox": [ + 55, + 71, + 298, + 184 + ], + "type": "text", + "content": "). Let " + }, + { + "bbox": [ + 55, + 71, + 298, + 184 + ], + "type": "inline_equation", + "content": "g: \\mathcal{X} \\subseteq \\mathbb{R}^D \\to \\mathcal{Z} \\subseteq \\mathbb{R}^F" + }, + { + "bbox": [ + 55, + 71, + 298, + 184 + ], + "type": "text", + "content": " denote the feature extractor. Let " + }, + { + "bbox": [ + 55, + 71, + 298, + 184 + ], + "type": "inline_equation", + "content": "h \\in \\mathcal{H}^F: \\mathcal{Z} \\to \\mathcal{K}" + }, + { + "bbox": [ + 55, + 71, + 298, + 184 + ], + "type": "text", + "content": " where " + }, + { + "bbox": [ + 55, + 71, + 298, + 184 + ], + "type": "inline_equation", + "content": "h \\circ g \\in \\mathcal{H}: \\mathcal{X} \\to \\mathcal{K}" + }, + { + "bbox": [ + 55, + 71, + 298, + 184 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 71, + 298, + 184 + ], + "type": "inline_equation", + "content": "f_V^* \\in \\mathcal{H}_V^F, f_T^* \\in \\mathcal{H}_T^F, f_{S_i}^* \\in \\mathcal{H}_{S_i}^F: \\mathcal{Z} \\to \\mathcal{K}" + }, + { + "bbox": [ + 55, + 71, + 298, + 184 + ], + "type": "text", + "content": " denote the decomposed approximated labeling functions. Let " + }, + { + "bbox": [ + 55, + 71, + 298, + 184 + ], + "type": "inline_equation", + "content": "\\sum_{k=1}^{K} \\pi_{S_i}^k = 1, \\sum_{k=1}^{K} \\pi_V^k = 1, \\sum_{k=1}^{K} \\pi_T^k = 1" + }, + { + "bbox": [ + 55, + 71, + 298, + 184 + ], + "type": "text", + "content": " denote the class priors of each domain. 
Given Assumption 3.3, the expectation on " + }, + { + "bbox": [ + 55, + 71, + 298, + 184 + ], + "type": "inline_equation", + "content": "S_i" + }, + { + "bbox": [ + 55, + 71, + 298, + 184 + ], + "type": "text", + "content": " can be estimated by the expectation on " + }, + { + "bbox": [ + 55, + 71, + 298, + 184 + ], + "type": "inline_equation", + "content": "S_i^{\\backslash K}" + }, + { + "bbox": [ + 55, + 71, + 298, + 184 + ], + "type": "text", + "content": " and Unknown Predictive Discrepancy (Definition 3.2) with a mild condition that " + }, + { + "bbox": [ + 55, + 71, + 298, + 184 + ], + "type": "inline_equation", + "content": "\\pi_{S_i}^K = \\pi_T^K = 1 - \\alpha" + }, + { + "bbox": [ + 55, + 71, + 298, + 184 + ], + "type": "text", + "content": ":" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 87, + 196, + 297, + 219 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 87, + 196, + 297, + 219 + ], + "spans": [ + { + "bbox": [ + 87, + 196, + 297, + 219 + ], + "type": "interline_equation", + "content": "\\epsilon_{S_{i}}(h \\circ g) = \\alpha\\left[\\epsilon_{S_{i}^{\\backslash K}}(h \\circ g) - v_{S_{i}^{\\backslash K}}(h \\circ g)\\right] + v_{T}(h \\circ g) \\tag{10}", + "image_path": "6057bbe78ef6bbb1514d74f9223ee1483911b7ab53f54278b85.jpg" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 221, + 304, + 279 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 221, + 304, + 279 + ], + "spans": [ + { + "bbox": [ + 56, + 221, + 304, + 279 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} \\epsilon_{S_{i}}(f_{S_{i}}^{*} \\circ g, f_{V}^{*} \\circ g) = \\alpha[\\epsilon_{S_{i}^{\\backslash K}}(f_{S_{i}}^{*} \\circ g, f_{V}^{*} \\circ g) - v_{S_{i}^{\\backslash K}}(f_{S_{i}}^{*} \\circ g, f_{V}^{*} \\circ g)] \\\\ + v_{T}\\left(f_{S_{i}}^{*} \\circ g, f_{V}^{*} \\circ g\\right) \\tag{11} \\\\ \\epsilon_{S_{i}}(f_{V}^{*} \\circ g, h \\circ g) = \\alpha[\\epsilon_{S_{i}^{\\backslash K}}(f_{V}^{*} \\circ g, h \\circ g) - v_{S_{i}^{\\backslash K}}(f_{V}^{*} \\circ g, h \\circ g)] \\\\ + v_{T}\\left(f_{V}^{*} \\circ g, h \\circ g\\right) \\tag{12} \\\\ \\end{array}", + "image_path": "8b934ed70fdcf93eb0649d9c890f9a2f02cfa1cc87a60e118bf0a8afe903c70f.jpg" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 288, + 297, + 337 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 288, + 297, + 337 + ], + "spans": [ + { + "bbox": [ + 55, + 288, + 297, + 337 + ], + "type": "text", + "content": "Assumption 3.5. Given a feature extractor " + }, + { + "bbox": [ + 55, + 288, + 297, + 337 + ], + "type": "inline_equation", + "content": "g: \\mathcal{X} \\to \\mathcal{Z}" + }, + { + "bbox": [ + 55, + 288, + 297, + 337 + ], + "type": "text", + "content": ", assume that the covariate shift between each source and labeled target domain can be addressed for known categories as " + }, + { + "bbox": [ + 55, + 288, + 297, + 337 + ], + "type": "inline_equation", + "content": "P_{S_i^k}(z) = P_{V^k}(z), k = 1, \\dots, K - 1" + }, + { + "bbox": [ + 55, + 288, + 297, + 337 + ], + "type": "text", + "content": "."
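A one-function sketch of how Lemma 3.4 is applied (names are mine; the inputs are scalar empirical risks computed elsewhere in the pipeline):

```python
def pu_source_risk(eps_known: float, v_known: float,
                   v_target: float, alpha: float) -> float:
    """Eq. (10): estimate the full source risk eps_{S_i}(h∘g) without ever
    seeing unknown-class source samples.

    eps_known : risk of h∘g on the known-class source data S_i^\\K
    v_known   : unknown predictive discrepancy v_{S_i^\\K}(h∘g)
    v_target  : v_T(h∘g) on the unlabeled target domain
    alpha     : known-class ratio, under the mild condition
                pi_{S_i}^K = pi_T^K = 1 - alpha
    """
    return alpha * (eps_known - v_known) + v_target
```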
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 343, + 297, + 378 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 343, + 297, + 378 + ], + "spans": [ + { + "bbox": [ + 55, + 343, + 297, + 378 + ], + "type": "text", + "content": "Corollary 3.6. Let " + }, + { + "bbox": [ + 55, + 343, + 297, + 378 + ], + "type": "inline_equation", + "content": "\\mathcal{Y}_i^{\\prime \\prime} = \\{k|k\\notin \\mathcal{Y}_i^\\prime ,k = 1,..K - 1\\}" + }, + { + "bbox": [ + 55, + 343, + 297, + 378 + ], + "type": "text", + "content": " denote the label space that is absent from " + }, + { + "bbox": [ + 55, + 343, + 297, + 378 + ], + "type": "inline_equation", + "content": "S_{i}^{\\prime}" + }, + { + "bbox": [ + 55, + 343, + 297, + 378 + ], + "type": "text", + "content": ". Given Assumption 3.5, we further decompose the source error as:" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 73, + 384, + 296, + 431 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 73, + 384, + 296, + 431 + ], + "spans": [ + { + "bbox": [ + 73, + 384, + 296, + 431 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} \\alpha \\epsilon_ {S _ {i} ^ {\\backslash K}} (h \\circ g) = \\sum_ {k \\in \\mathcal {Y} _ {i} ^ {\\prime \\prime}} \\pi_ {S _ {i}} ^ {k} \\epsilon_ {S _ {i} ^ {k}} (h \\circ g) + \\sum_ {k \\in \\mathcal {Y} _ {i} ^ {\\prime}} \\pi_ {S _ {i}} ^ {k} \\epsilon_ {S _ {i} ^ {k}} (h \\circ g) \\\\ = \\rho_ {i} \\sum_ {k \\in \\mathcal {Y} _ {i} ^ {\\prime \\prime}} \\epsilon_ {V _ {i} ^ {k}} (h \\circ g) + (1 - \\rho_ {i}) \\epsilon_ {S _ {i} ^ {\\prime}} (h \\circ g), \\tag {13} \\\\ \\end{array}", + "image_path": "e4d19f4fb947caf851ee2ebcb1014ae333e0d2a7d3eb9afb17813291e628e0b5.jpg" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 441, + 295, + 466 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 441, + 295, + 466 + ], + "spans": [ + { + "bbox": [ + 55, + 441, + 295, + 466 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 55, + 441, + 295, + 466 + ], + "type": "inline_equation", + "content": "\\rho_{i} = |\\mathcal{Y}_{i}^{\\prime \\prime}| / K" + }, + { + "bbox": [ + 55, + 441, + 295, + 466 + ], + "type": "text", + "content": " under a mild condition that " + }, + { + "bbox": [ + 55, + 441, + 295, + 466 + ], + "type": "inline_equation", + "content": "\\pi_{S_i}^k = 1 / K" + }, + { + "bbox": [ + 55, + 441, + 295, + 466 + ], + "type": "text", + "content": " for " + }, + { + "bbox": [ + 55, + 441, + 295, + 466 + ], + "type": "inline_equation", + "content": "k\\in \\mathcal{Y}_i^{\\prime \\prime}" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 472, + 296, + 521 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 472, + 296, + 521 + ], + "spans": [ + { + "bbox": [ + 55, + 472, + 296, + 521 + ], + "type": "text", + "content": "Remark 3.7. According to Definition 3.2, minimizing " + }, + { + "bbox": [ + 55, + 472, + 296, + 521 + ], + "type": "inline_equation", + "content": "v_{\\hat{T}}(h \\circ g)" + }, + { + "bbox": [ + 55, + 472, + 296, + 521 + ], + "type": "text", + "content": " means mapping target samples to the unknown class. 
In practice, a multiplier " + }, + { + "bbox": [ + 55, + 472, + 296, + 521 + ], + "type": "inline_equation", + "content": "\\beta < 1" + }, + { + "bbox": [ + 55, + 472, + 296, + 521 + ], + "type": "text", + "content": " is applied on " + }, + { + "bbox": [ + 55, + 472, + 296, + 521 + ], + "type": "inline_equation", + "content": "v_{\\hat{T}}(h \\circ g)" + }, + { + "bbox": [ + 55, + 472, + 296, + 521 + ], + "type": "text", + "content": " to prevent all target samples from being classified as unknown." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 527, + 296, + 552 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 527, + 296, + 552 + ], + "spans": [ + { + "bbox": [ + 55, + 527, + 296, + 552 + ], + "type": "text", + "content": "3.3. Towards Source-Free Knowledge Transfer with Attention-based Feature Generation" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 55, + 557, + 297, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 557, + 297, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 557, + 297, + 714 + ], + "type": "text", + "content": "Source-free domain adaptation (SFDA) has been considered a means of reducing reliance on source data. As described in [24], the existing SFDA research can generally be categorized into data-centric and model-centric methods. Model-centric methods employ techniques such as self-supervision, while data-centric methods focus on image-based reconstruction. Model-centric methods like [28, 29, 35, 60] require source model fine-tuning, where the generalization to multi-source cases with label shift can be nontrivial since it may fail to fully leverage the few-shot labeled data due to the missing classes in source domains. Meanwhile, for data-centric methods like [6, 26], the pipeline to generate source-like images is generally computationally intensive" + } + ] + } + ], + "index": 9 + }, + { + "type": "image", + "bbox": [ + 339, + 71, + 531, + 152 + ], + "blocks": [ + { + "bbox": [ + 339, + 71, + 531, + 152 + ], + "lines": [ + { + "bbox": [ + 339, + 71, + 531, + 152 + ], + "spans": [ + { + "bbox": [ + 339, + 71, + 531, + 152 + ], + "type": "image", + "image_path": "b924781068c2bfcbf6d052fefbd06b27a184ae76704098abb9c471e600243125.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 313, + 159, + 555, + 248 + ], + "lines": [ + { + "bbox": [ + 313, + 159, + 555, + 248 + ], + "spans": [ + { + "bbox": [ + 313, + 159, + 555, + 248 + ], + "type": "text", + "content": "Figure 2. The mechanism of attention-based feature generation for source-free domain adaptation. Given a similarity-based weight estimated by the knowledge preserved in the pre-trained source model consisting of a black-box feature extractor " + }, + { + "bbox": [ + 313, + 159, + 555, + 248 + ], + "type": "inline_equation", + "content": "g_{i}" + }, + { + "bbox": [ + 313, + 159, + 555, + 248 + ], + "type": "text", + "content": " and a visible classifier " + }, + { + "bbox": [ + 313, + 159, + 555, + 248 + ], + "type": "inline_equation", + "content": "f_{i}" + }, + { + "bbox": [ + 313, + 159, + 555, + 248 + ], + "type": "text", + "content": ", labeled features generated with the attention module can be considered a weighted average of unlabeled target features, which serve as the anchor for the target distribution alignment in the adaptation phase." 
+ } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_caption" + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 274, + 555, + 382 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 274, + 555, + 382 + ], + "spans": [ + { + "bbox": [ + 313, + 274, + 555, + 382 + ], + "type": "text", + "content": "and time-consuming, which can hardly be applied to highly structured domains. Furthermore, it might violate the intention of SFDA to protect privacy by recovering source-like images. Motivated by this, in this section, we propose a novel attention-based feature generation (AFG) algorithm that can produce labeled anchors for the alignment of unlabeled target data by leveraging the knowledge equipped in source models, which is more computationally efficient and independent from source model fine-tuning." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 385, + 556, + 491 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 385, + 556, + 491 + ], + "spans": [ + { + "bbox": [ + 313, + 385, + 556, + 491 + ], + "type": "text", + "content": "The SFDA scenario involves two phases: pre-training and adaptation. During pre-training, " + }, + { + "bbox": [ + 313, + 385, + 556, + 491 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 313, + 385, + 556, + 491 + ], + "type": "text", + "content": " models are trained on labeled data from each source domain " + }, + { + "bbox": [ + 313, + 385, + 556, + 491 + ], + "type": "inline_equation", + "content": "x_{s_i} \\sim S_i'" + }, + { + "bbox": [ + 313, + 385, + 556, + 491 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 313, + 385, + 556, + 491 + ], + "type": "inline_equation", + "content": "i = 1\\dots N" + }, + { + "bbox": [ + 313, + 385, + 556, + 491 + ], + "type": "text", + "content": ". Subsequently, the goal of the adaptation stage is to adapt the pre-trained source model to the unlabeled target data " + }, + { + "bbox": [ + 313, + 385, + 556, + 491 + ], + "type": "inline_equation", + "content": "x_t \\sim T" + }, + { + "bbox": [ + 313, + 385, + 556, + 491 + ], + "type": "text", + "content": " given few-shot labeled target data " + }, + { + "bbox": [ + 313, + 385, + 556, + 491 + ], + "type": "inline_equation", + "content": "x_v \\sim V'" + }, + { + "bbox": [ + 313, + 385, + 556, + 491 + ], + "type": "text", + "content": ". The proposed approach assumes a challenging open-set form, implying that the label spaces among the target and source domains are distinct." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 492, + 556, + 573 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 492, + 556, + 573 + ], + "spans": [ + { + "bbox": [ + 313, + 492, + 556, + 573 + ], + "type": "text", + "content": "Inspired by [28], which uses a single-layer linear classifier in source models to store the cluster center of source features, we choose the Bayesian linear classifier during pre-training such that source features can be sampled by the re-parameterization trick [21] in the adaptation phase. 
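A sketch of the re-parameterization step just described (my own naming; it assumes the stored Bayesian classifier parameters mu, sigma have one row per known source class):

```python
import torch

def sample_source_features(mu: torch.Tensor, sigma: torch.Tensor,
                           samples_per_class: int = 1) -> torch.Tensor:
    """Draw source-like features z = mu + sigma * zeta, zeta ~ N(0, I), from
    the Bayesian linear classifier f_i = {mu_i, sigma_i} kept after
    pre-training; no source images are touched, preserving the SFDA setting.

    mu, sigma: (num_known_classes, F_i) tensors. Returns a
    (num_known_classes * samples_per_class, F_i) tensor of class-labeled samples.
    """
    mu = mu.repeat_interleave(samples_per_class, dim=0)
    sigma = sigma.repeat_interleave(samples_per_class, dim=0)
    return mu + sigma * torch.randn_like(mu)
```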
Let " + }, + { + "bbox": [ + 313, + 492, + 556, + 573 + ], + "type": "inline_equation", + "content": "g_{i}:\\mathcal{X}\\rightarrow \\mathcal{Z}_{i}\\subseteq \\mathbb{R}^{F_{i}}" + }, + { + "bbox": [ + 313, + 492, + 556, + 573 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 492, + 556, + 573 + ], + "type": "inline_equation", + "content": "f_{i}\\coloneqq \\{\\mu_{i},\\sigma_{i}\\}" + }, + { + "bbox": [ + 313, + 492, + 556, + 573 + ], + "type": "text", + "content": " denote each pre-trained source model. As illustrated in Fig. 2, given the source features approximated by the weight samples" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 313, + 573, + 553, + 619 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 573, + 553, + 619 + ], + "spans": [ + { + "bbox": [ + 313, + 573, + 553, + 619 + ], + "type": "text", + "content": "of Bayesian linear classifier as " + }, + { + "bbox": [ + 313, + 573, + 553, + 619 + ], + "type": "inline_equation", + "content": "g_{i}(\\hat{S}_{i}^{\\prime}) = \\left( \\begin{array}{c} g_{i}(x_{s_{i}}^{1}) \\\\ \\vdots \\\\ g_{i}(x_{s_{i}}^{\\| \\mathcal{Y}_{i}^{\\prime}\\|}) \\end{array} \\right) :=" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 313, + 619, + 553, + 666 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 619, + 553, + 666 + ], + "spans": [ + { + "bbox": [ + 313, + 619, + 553, + 666 + ], + "type": "interline_equation", + "content": "\\mu_ {i} + \\sigma_ {i} \\odot \\left( \\begin{array}{c} \\zeta_ {i} ^ {1} \\\\ \\vdots \\\\ \\zeta_ {i} ^ {\\| \\mathcal {Y} _ {i} ^ {\\prime} \\|} \\end{array} \\right), \\zeta_ {i} ^ {j} \\sim \\mathcal {N} (0, I) \\text {w i t h s i z e} \\| \\mathcal {Y} _ {i} ^ {\\prime} \\| \\times F _ {i}", + "image_path": "97a11440f80df020e851ca974199e92200e21b7a8c17e0f53a46e09efa106ddf.jpg" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 313, + 665, + 556, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 665, + 556, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 665, + 556, + 715 + ], + "type": "text", + "content": "(multiple samples can be generated from each class in practice), along with the query and key mapping functions " + }, + { + "bbox": [ + 313, + 665, + 556, + 715 + ], + "type": "inline_equation", + "content": "w_{q_i}, w_{k_i}: \\mathcal{Z}_i \\to \\mathcal{Z}_i' \\subseteq \\mathbb{R}^{F_i'}" + }, + { + "bbox": [ + 313, + 665, + 556, + 715 + ], + "type": "text", + "content": ", the corresponding labeled anchor defined as " + }, + { + "bbox": [ + 313, + 665, + 556, + 715 + ], + "type": "inline_equation", + "content": "\\{(g(x_{s_i}^j), y_i^j \\in \\mathcal{Y}_i')\\}_{j=1}^{\\|\\mathcal{Y}_i'\\|}, y_i^j \\neq y_i^{j'}" + }, + { + "bbox": [ + 313, + 665, + 556, + 715 + ], + "type": "text", + "content": " is given" + } + ] + } + ], + "index": 17 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "text", + "content": "10145" + } + ] + } + ], + "index": 18 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 73, + 71, + 84 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 73, + 71, + 84 + ], + "spans": [ + { + "bbox": [ + 55, + 73, + 71, + 84 + ], + "type": "text", + "content": "by:" + } + ] + } + ], 
+ "index": 0 + }, + { + "bbox": [ + 84, + 93, + 296, + 117 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 84, + 93, + 296, + 117 + ], + "spans": [ + { + "bbox": [ + 84, + 93, + 296, + 117 + ], + "type": "interline_equation", + "content": "g (\\hat {S} _ {i} ^ {\\prime}) = \\operatorname {s o f t m a x} \\left(\\frac {w _ {q _ {i}} \\left(g _ {i} \\left(\\hat {S} _ {i} ^ {\\prime}\\right)\\right) \\cdot w _ {k _ {i}} \\left(g _ {i} \\left(\\hat {T} ^ {\\prime}\\right)\\right) ^ {\\top}}{\\sqrt {F _ {i} ^ {\\prime}}}\\right) g (\\hat {T} ^ {\\prime}), \\tag {14}", + "image_path": "7b5e81efb2f2ea1ccef7a97d3d5853bfbfec41b3632f801af94bcabc32c9e967.jpg" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 127, + 296, + 291 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 127, + 296, + 291 + ], + "spans": [ + { + "bbox": [ + 55, + 127, + 296, + 291 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 55, + 127, + 296, + 291 + ], + "type": "inline_equation", + "content": "\\hat{T}'" + }, + { + "bbox": [ + 55, + 127, + 296, + 291 + ], + "type": "text", + "content": " denotes the estimated known-class data from the target. To produce meaningful features for the distribution alignment in the adaptation phase, we propose two objective functions to learn the query and key mapping " + }, + { + "bbox": [ + 55, + 127, + 296, + 291 + ], + "type": "inline_equation", + "content": "\\{w_{q_i}, w_{k_i}\\}_{i=1}^N" + }, + { + "bbox": [ + 55, + 127, + 296, + 291 + ], + "type": "text", + "content": " of each source domain. Analogous to [5], we train " + }, + { + "bbox": [ + 55, + 127, + 296, + 291 + ], + "type": "inline_equation", + "content": "w_{q_i}, w_{k_i}" + }, + { + "bbox": [ + 55, + 127, + 296, + 291 + ], + "type": "text", + "content": " by maximizing the similarity between the projections of the same target features extracted by the per-trained source model " + }, + { + "bbox": [ + 55, + 127, + 296, + 291 + ], + "type": "inline_equation", + "content": "g_i" + }, + { + "bbox": [ + 55, + 127, + 296, + 291 + ], + "type": "text", + "content": " while pushing the different target features far apart, which can be achieved with minimizing reconstruction loss " + }, + { + "bbox": [ + 55, + 127, + 296, + 291 + ], + "type": "inline_equation", + "content": "L_{rec}^i" + }, + { + "bbox": [ + 55, + 127, + 296, + 291 + ], + "type": "text", + "content": " such that the output of the attention module can approximate target features " + }, + { + "bbox": [ + 55, + 127, + 296, + 291 + ], + "type": "inline_equation", + "content": "g(\\hat{T}')" + }, + { + "bbox": [ + 55, + 127, + 296, + 291 + ], + "type": "text", + "content": " given target data as query and key. To further regularize " + }, + { + "bbox": [ + 55, + 127, + 296, + 291 + ], + "type": "inline_equation", + "content": "w_{q_i}, w_{k_i}" + }, + { + "bbox": [ + 55, + 127, + 296, + 291 + ], + "type": "text", + "content": ", we introduce a cycle-consistency loss " + }, + { + "bbox": [ + 55, + 127, + 296, + 291 + ], + "type": "inline_equation", + "content": "L_{cyc}^i" + }, + { + "bbox": [ + 55, + 127, + 296, + 291 + ], + "type": "text", + "content": " that can bring the features generated by labeled and unlabeled target data " + }, + { + "bbox": [ + 55, + 127, + 296, + 291 + ], + "type": "inline_equation", + "content": "\\hat{V}', \\hat{T}'" + }, + { + "bbox": [ + 55, + 127, + 296, + 291 + ], + "type": "text", + "content": " close to each other." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 64, + 298, + 296, + 322 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 64, + 298, + 296, + 322 + ], + "spans": [ + { + "bbox": [ + 64, + 298, + 296, + 322 + ], + "type": "interline_equation", + "content": "L_{rec}^{i} = \\left|\\operatorname{softmax}\\left(\\frac{w_{q_{i}}\\left(g_{i}\\left(\\hat{T}^{\\prime}\\right)\\right) \\cdot w_{k_{i}}\\left(g_{i}\\left(\\hat{T}^{\\prime}\\right)\\right)^{\\top}}{\\sqrt{F_{i}^{\\prime}}}\\right) g\\left(\\hat{T}^{\\prime}\\right) - g\\left(\\hat{T}^{\\prime}\\right)\\right| \\tag{15}", + "image_path": "db205541c26c3a043912fee635a76d6626024f7e1829836360dff29daeecdaad.jpg" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 65, + 323, + 296, + 347 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 323, + 296, + 347 + ], + "spans": [ + { + "bbox": [ + 65, + 323, + 296, + 347 + ], + "type": "interline_equation", + "content": "L_{cyc}^{i} = \\left|\\operatorname{softmax}\\left(\\frac{w_{q_{i}}(g_{i}(\\hat{S}_{i}^{\\prime})) \\cdot w_{k_{i}}(g_{i}(\\hat{V}^{\\prime}))^{\\top}}{\\sqrt{F_{i}^{\\prime}}}\\right) g(\\hat{V}^{\\prime}) - g(\\hat{S}_{i}^{\\prime})\\right| \\tag{16}", + "image_path": "d8309f755afc3d2b96b84eec396ff86d87d72c154c3bad5ab4b73eb377b856cf.jpg" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 360, + 296, + 624 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 360, + 296, + 624 + ], + "spans": [ + { + "bbox": [ + 55, + 360, + 296, + 624 + ], + "type": "text", + "content": "Progressive Unknown Rejection (PUR) is additionally proposed to improve the recognition accuracy on the unknown class. In the open-set setting, empirical target data " + }, + { + "bbox": [ + 55, + 360, + 296, + 624 + ], + "type": "inline_equation", + "content": "\\hat{T}" + }, + { + "bbox": [ + 55, + 360, + 296, + 624 + ], + "type": "text", + "content": " includes the unknown class, while the generated labeled anchors " + }, + { + "bbox": [ + 55, + 360, + 296, + 624 + ], + "type": "inline_equation", + "content": "g(\\hat{S}_i^{\\prime})" + }, + { + "bbox": [ + 55, + 360, + 296, + 624 + ], + "type": "text", + "content": " should be limited to the known classes. According to the generation mechanism defined by Eq. (14), labeled anchors can be considered a similarity-based weighted average of target features, which are not supposed to contain components from irrelevant features of the unknown class. However, it is impractical to reach the ideal result, where the weights assigned to those unrelated target features are zero, by pure regularization of the mapping functions " + }, + { + "bbox": [ + 55, + 360, + 296, + 624 + ], + "type": "inline_equation", + "content": "w_{q_i}, w_{k_i}" + }, + { + "bbox": [ + 55, + 360, + 296, + 624 + ], + "type": "text", + "content": ". To address this problem, we introduce a scheme to gradually reject the target features from the unknown class by removing the target data labeled as unknown given the current hypothesis " + }, + { + "bbox": [ + 55, + 360, + 296, + 624 + ], + "type": "inline_equation", + "content": "h" + }, + { + "bbox": [ + 55, + 360, + 296, + 624 + ], + "type": "text", + "content": " from " + }, + { + "bbox": [ + 55, + 360, + 296, + 624 + ], + "type": "inline_equation", + "content": "\\hat{T}" + }, + { + "bbox": [ + 55, + 360, + 296, + 624 + ], + "type": "text", + "content": ". 
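The two regularizers can be sketched with a shared attention helper (my naming; I read the |·| in Eqs. (15)-(16) as an elementwise L1 norm, which is an assumption on my part):

```python
import math
import torch

def attn_mix(w_q, w_k, q_feats, k_feats, values):
    """Shared form of Eqs. (14)-(16): softmax(w_q(·) @ w_k(·)^T / sqrt(F')) @ values."""
    q, k = w_q(q_feats), w_k(k_feats)
    return torch.softmax(q @ k.T / math.sqrt(q.shape[-1]), dim=-1) @ values

def rec_loss(w_q, w_k, gi_tgt, g_tgt):
    """Eq. (15): target features queried against themselves should reconstruct g(T')."""
    return (attn_mix(w_q, w_k, gi_tgt, gi_tgt, g_tgt) - g_tgt).abs().mean()

def cyc_loss(w_q, w_k, gi_src, gi_val, g_val, g_src):
    """Eq. (16): anchors rebuilt from labeled target V' should come back to g(S_i')."""
    return (attn_mix(w_q, w_k, gi_src, gi_val, g_val) - g_src).abs().mean()
```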
Specifically, at each training iteration during the adaptation stage, for a batch of input target data, we rank the likelihood of the unknown class for each target sample " + }, + { + "bbox": [ + 55, + 360, + 296, + 624 + ], + "type": "inline_equation", + "content": "p(y = K|x_t) = h(x_t)[K]" + }, + { + "bbox": [ + 55, + 360, + 296, + 624 + ], + "type": "text", + "content": " in ascending order. Given a threshold " + }, + { + "bbox": [ + 55, + 360, + 296, + 624 + ], + "type": "inline_equation", + "content": "0 < \\tau < 1" + }, + { + "bbox": [ + 55, + 360, + 296, + 624 + ], + "type": "text", + "content": " progressively increasing from zero according to the exponential ramp-up function [22], we select bottom " + }, + { + "bbox": [ + 55, + 360, + 296, + 624 + ], + "type": "inline_equation", + "content": "1 - \\tau" + }, + { + "bbox": [ + 55, + 360, + 296, + 624 + ], + "type": "text", + "content": " target samples as " + }, + { + "bbox": [ + 55, + 360, + 296, + 624 + ], + "type": "inline_equation", + "content": "\\hat{T}'" + }, + { + "bbox": [ + 55, + 360, + 296, + 624 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 632, + 185, + 645 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 632, + 185, + 645 + ], + "spans": [ + { + "bbox": [ + 55, + 632, + 185, + 645 + ], + "type": "text", + "content": "3.4. Hypothesis Constraint" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 54, + 650, + 297, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 650, + 297, + 715 + ], + "spans": [ + { + "bbox": [ + 54, + 650, + 297, + 715 + ], + "type": "text", + "content": "Proposition 3.8. If " + }, + { + "bbox": [ + 54, + 650, + 297, + 715 + ], + "type": "inline_equation", + "content": "\\mathcal{H}_{S_i}^F, \\mathcal{H}_V^F, \\mathcal{H}_T^F" + }, + { + "bbox": [ + 54, + 650, + 297, + 715 + ], + "type": "text", + "content": " are sets of functions that can minimize a part of " + }, + { + "bbox": [ + 54, + 650, + 297, + 715 + ], + "type": "inline_equation", + "content": "\\hat{\\theta}_S^i, \\sum_i \\hat{\\theta}_V^i, \\sum_i \\hat{\\theta}_T^i" + }, + { + "bbox": [ + 54, + 650, + 297, + 715 + ], + "type": "text", + "content": " respectively, then " + }, + { + "bbox": [ + 54, + 650, + 297, + 715 + ], + "type": "inline_equation", + "content": "f_{S_i}^* \\in \\mathcal{H}_{S_i}^F, f_V^* \\in \\mathcal{H}_V^F, f_T^* \\in \\mathcal{H}_T^F" + }, + { + "bbox": [ + 54, + 650, + 297, + 715 + ], + "type": "text", + "content": " must hold such that we can relax " + }, + { + "bbox": [ + 54, + 650, + 297, + 715 + ], + "type": "inline_equation", + "content": "L_{\\text{dis}}" + }, + { + "bbox": [ + 54, + 650, + 297, + 715 + ], + "type": "text", + "content": " in Theorem 2.3 by considering maximum w.r.t. 
functions " + }, + { + "bbox": [ + 54, + 650, + 297, + 715 + ], + "type": "inline_equation", + "content": "f_{S_i}', f_V', f_T'" + }, + { + "bbox": [ + 54, + 650, + 297, + 715 + ], + "type": "text", + "content": " as:" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 326, + 78, + 577, + 116 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 326, + 78, + 577, + 116 + ], + "spans": [ + { + "bbox": [ + 326, + 78, + 577, + 116 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} \\log \\sum_ {i} \\exp \\left(\\nu \\left[ L _ {c l s} ^ {S _ {i}} (h; g) + 2 L _ {d i s} ^ {i} \\left(f _ {S _ {i}} ^ {*}, f _ {V} ^ {*}, f _ {T} ^ {*}, h; g\\right) \\right]\\right) \\\\ \\leq \\sup _ {\\{f _ {S _ {i}} ^ {\\prime} \\in \\mathcal {H} _ {S _ {i}} ^ {F} \\} _ {i = 1} ^ {N}, f _ {V} ^ {\\prime} \\in \\mathcal {H} _ {V} ^ {F}, f _ {T} ^ {\\prime} \\in \\mathcal {H} _ {T} ^ {F}} \\log \\sum_ {i} \\exp (\\nu [ L _ {c l s} ^ {S _ {i}} (h; g) + 2 L _ {d i s} ^ {i} (f _ {S _ {i}} ^ {\\prime}, f _ {V} ^ {\\prime}, f _ {T} ^ {\\prime}, h; g) ]) \\\\ \\end{array}", + "image_path": "6f617ee5228d21efff3a0537ce1bb9caf69a2fd5c868a59cd69cbf23645ee001.jpg" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 540, + 125, + 555, + 134 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 540, + 125, + 555, + 134 + ], + "spans": [ + { + "bbox": [ + 540, + 125, + 555, + 134 + ], + "type": "text", + "content": "(17)" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 144, + 555, + 232 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 144, + 555, + 232 + ], + "spans": [ + { + "bbox": [ + 313, + 144, + 555, + 232 + ], + "type": "text", + "content": "Definition 3.9 (Approximated Labeling Function Space). Let " + }, + { + "bbox": [ + 313, + 144, + 555, + 232 + ], + "type": "inline_equation", + "content": "L_{\\mathcal{H}_S}^i, L_{\\mathcal{H}_V}^i, L_{\\mathcal{H}_T}^i" + }, + { + "bbox": [ + 313, + 144, + 555, + 232 + ], + "type": "text", + "content": " denote the hypothesis constraints, i.e., a part of the empirical deviation between approximated and true labeling functions " + }, + { + "bbox": [ + 313, + 144, + 555, + 232 + ], + "type": "inline_equation", + "content": "\\hat{\\theta}_S^i, \\hat{\\theta}_T^i, \\hat{\\theta}_V^i" + }, + { + "bbox": [ + 313, + 144, + 555, + 232 + ], + "type": "text", + "content": ". 
Approximated Labeling Function Space " + }, + { + "bbox": [ + 313, + 144, + 555, + 232 + ], + "type": "inline_equation", + "content": "\\mathcal{H}_{S_i}^F, \\mathcal{H}_V^F, \\mathcal{H}_T^F" + }, + { + "bbox": [ + 313, + 144, + 555, + 232 + ], + "type": "text", + "content": " can be defined as the sets whose members " + }, + { + "bbox": [ + 313, + 144, + 555, + 232 + ], + "type": "inline_equation", + "content": "f_{S_i}', f_V', f_T' \\in \\mathcal{H}^F" + }, + { + "bbox": [ + 313, + 144, + 555, + 232 + ], + "type": "text", + "content": " can minimize " + }, + { + "bbox": [ + 313, + 144, + 555, + 232 + ], + "type": "inline_equation", + "content": "L_{\\mathcal{H}_S}^i, \\sum_i L_{\\mathcal{H}_V}^i, \\sum_i L_{\\mathcal{H}_T}^i" + }, + { + "bbox": [ + 313, + 144, + 555, + 232 + ], + "type": "text", + "content": ":" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 326, + 238, + 581, + 273 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 326, + 238, + 581, + 273 + ], + "spans": [ + { + "bbox": [ + 326, + 238, + 581, + 273 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} \\left\\{\\mathcal {H} _ {S _ {i}} ^ {F} = \\left\\{f _ {S _ {i}} ^ {\\prime} | \\arg \\min _ {g, f _ {S _ {i}} ^ {\\prime} \\in \\mathcal {H} ^ {F}} \\left[ L _ {\\mathcal {H} _ {S}} ^ {i} \\left(f _ {S _ {i}} ^ {\\prime}; g\\right) = L _ {c l s} ^ {S _ {i}} \\left(f _ {S _ {i}} ^ {\\prime}; g\\right) / 2 + L _ {c l s} ^ {V} \\left(f _ {S _ {i}} ^ {\\prime}; g\\right) \\right] \\right\\} \\right. \\\\ \\left\\{\\mathcal {H} _ {V} ^ {F} = \\left\\{f _ {V} ^ {\\prime} \\mid \\arg \\min _ {g, f _ {V} ^ {\\prime} \\in \\mathcal {H} ^ {F}} \\sum_ {i} \\left[ L _ {\\mathcal {H} _ {V}} ^ {i} \\left(f _ {V} ^ {\\prime}; g\\right) = L _ {c l s} ^ {V} \\left(f _ {V} ^ {\\prime}; g\\right) / 2 + L _ {c l s} ^ {S _ {i}} \\left(f _ {V} ^ {\\prime}; g\\right) \\right] \\right\\} \\right. \\\\ \\left\\{\\mathcal {H} _ {T} ^ {F} = \\left\\{f _ {T} ^ {\\prime} \\mid \\arg \\min _ {g, f _ {T} ^ {\\prime} \\in \\mathcal {H} ^ {F}} \\sum_ {i} \\left[ L _ {\\mathcal {H} _ {T}} ^ {i} \\left(f _ {T} ^ {\\prime}; g\\right) = \\left[ L _ {\\text {c l s}} ^ {S _ {i}} \\left(f _ {T} ^ {\\prime}; g\\right) + L _ {\\text {c l s}} ^ {V} \\left(f _ {T} ^ {\\prime}; g\\right) \\right] / 2 + L _ {\\text {s s l}} \\right] \\right\\} \\right. 
\\\\ \\end{array}", + "image_path": "e4431f2cb9585dd705ad28e02256f27e824e6f16b93a6a37ab88ee86bf62bdf5.jpg" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 540, + 289, + 555, + 297 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 540, + 289, + 555, + 297 + ], + "spans": [ + { + "bbox": [ + 540, + 289, + 555, + 297 + ], + "type": "text", + "content": "(18)" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 308, + 555, + 380 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 308, + 555, + 380 + ], + "spans": [ + { + "bbox": [ + 313, + 308, + 555, + 380 + ], + "type": "text", + "content": "To build a more reliable target function space " + }, + { + "bbox": [ + 313, + 308, + 555, + 380 + ], + "type": "inline_equation", + "content": "\\mathcal{H}_T^F" + }, + { + "bbox": [ + 313, + 308, + 555, + 380 + ], + "type": "text", + "content": ", we approximate the target error with the error rate on labeled samples and a semi-supervised regularization term " + }, + { + "bbox": [ + 313, + 308, + 555, + 380 + ], + "type": "inline_equation", + "content": "L_{ssl}^2" + }, + { + "bbox": [ + 313, + 308, + 555, + 380 + ], + "type": "text", + "content": " including entropy minimization [14, 15], pseudo labeling [44, 46] and consistency regularization [22, 42], which has been intensively discussed in [23, 43, 59, 63]." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 314, + 388, + 386, + 401 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 388, + 386, + 401 + ], + "spans": [ + { + "bbox": [ + 314, + 388, + 386, + 401 + ], + "type": "text", + "content": "3.5. Algorithm" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 313, + 407, + 556, + 526 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 407, + 556, + 526 + ], + "spans": [ + { + "bbox": [ + 313, + 407, + 556, + 526 + ], + "type": "text", + "content": "As described in Algorithm 1, we introduce a gradient reversal layer [12] to train the overall objective together. ImageNet [8] pre-trained ResNet-50 [16] is used as feature extractor " + }, + { + "bbox": [ + 313, + 407, + 556, + 526 + ], + "type": "inline_equation", + "content": "g" + }, + { + "bbox": [ + 313, + 407, + 556, + 526 + ], + "type": "text", + "content": " and randomly initialized 2-layer fully-connected networks are used for classifiers " + }, + { + "bbox": [ + 313, + 407, + 556, + 526 + ], + "type": "inline_equation", + "content": "f_{S_i}^{\\prime}, f_V^{\\prime}, f_T^{\\prime}, h" + }, + { + "bbox": [ + 313, + 407, + 556, + 526 + ], + "type": "text", + "content": ". We adopt SGD with a momentum of 0.9 for optimization, where the initial learning rate is empirically set to 0.001. We employ the learning rate annealing strategy proposed in [12]. We use RandomFlip, RandomCrop, and RandAugment [7] as data augmentation with the batch size fixed to 24." 
+ } + ] + } + ], + "index": 15 + }, + { + "type": "image", + "bbox": [ + 316, + 533, + 555, + 630 + ], + "blocks": [ + { + "bbox": [ + 316, + 533, + 555, + 630 + ], + "lines": [ + { + "bbox": [ + 316, + 533, + 555, + 630 + ], + "spans": [ + { + "bbox": [ + 316, + 533, + 555, + 630 + ], + "type": "image", + "image_path": "2368c1f59ad63ea02d5288c77d00848fb6a4405f5d48019ba52a15b5fac7f00b.jpg" + } + ] + } + ], + "index": 16, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 313, + 636, + 556, + 682 + ], + "lines": [ + { + "bbox": [ + 313, + 636, + 556, + 682 + ], + "spans": [ + { + "bbox": [ + 313, + 636, + 556, + 682 + ], + "type": "text", + "content": "Figure 3. Alignment mechanism of UM, where unknown target data " + }, + { + "bbox": [ + 313, + 636, + 556, + 682 + ], + "type": "inline_equation", + "content": "\\hat{T}^K" + }, + { + "bbox": [ + 313, + 636, + 556, + 682 + ], + "type": "text", + "content": " (green) is pushed away from labeled data into a separated cluster, while known target data " + }, + { + "bbox": [ + 313, + 636, + 556, + 682 + ], + "type": "inline_equation", + "content": "\\hat{T}'" + }, + { + "bbox": [ + 313, + 636, + 556, + 682 + ], + "type": "text", + "content": " is aligned back towards labeled clusters by " + }, + { + "bbox": [ + 313, + 636, + 556, + 682 + ], + "type": "inline_equation", + "content": "\\min_g L_{dis}" + }, + { + "bbox": [ + 313, + 636, + 556, + 682 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 17, + "angle": 0, + "type": "image_caption" + } + ], + "index": 16 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 324, + 702, + 453, + 713 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 324, + 702, + 453, + 713 + ], + "spans": [ + { + "bbox": [ + 324, + 702, + 453, + 713 + ], + "type": "text", + "content": "2see details in supplementary material." 
+ } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "type": "text", + "content": "10146" + } + ] + } + ], + "index": 19 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 129, + 83 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 72, + 129, + 83 + ], + "spans": [ + { + "bbox": [ + 56, + 72, + 129, + 83 + ], + "type": "text", + "content": "Algorithm 1 UM" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 64, + 87, + 264, + 99 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 64, + 87, + 264, + 99 + ], + "spans": [ + { + "bbox": [ + 64, + 87, + 264, + 99 + ], + "type": "text", + "content": "Input: source " + }, + { + "bbox": [ + 64, + 87, + 264, + 99 + ], + "type": "inline_equation", + "content": "\\{\\hat{S}_i^{\\prime}\\}_{i = 1}^{N}" + }, + { + "bbox": [ + 64, + 87, + 264, + 99 + ], + "type": "text", + "content": ", labeled target " + }, + { + "bbox": [ + 64, + 87, + 264, + 99 + ], + "type": "inline_equation", + "content": "\\hat{V}^{\\prime}" + }, + { + "bbox": [ + 64, + 87, + 264, + 99 + ], + "type": "text", + "content": ", unlabeled target " + }, + { + "bbox": [ + 64, + 87, + 264, + 99 + ], + "type": "inline_equation", + "content": "\\hat{T}" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 65, + 99, + 294, + 122 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 99, + 294, + 122 + ], + "spans": [ + { + "bbox": [ + 65, + 99, + 294, + 122 + ], + "type": "text", + "content": "Output: updated parameters " + }, + { + "bbox": [ + 65, + 99, + 294, + 122 + ], + "type": "inline_equation", + "content": "\\phi = (\\{f_{S_i}^{\\prime}\\}_{i=1}^{N}, g, h, f_V^{\\prime}f_T^{\\prime})" + }, + { + "bbox": [ + 65, + 99, + 294, + 122 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 65, + 99, + 294, + 122 + ], + "type": "inline_equation", + "content": "w = \\{w_{q_i}, w_{k_i}\\}_{i=1}^{N}" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 65, + 122, + 294, + 140 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 122, + 294, + 140 + ], + "spans": [ + { + "bbox": [ + 65, + 122, + 294, + 140 + ], + "type": "text", + "content": "Parameter: trade-off parameter " + }, + { + "bbox": [ + 65, + 122, + 294, + 140 + ], + "type": "inline_equation", + "content": "\\lambda" + }, + { + "bbox": [ + 65, + 122, + 294, + 140 + ], + "type": "text", + "content": "; learning rate " + }, + { + "bbox": [ + 65, + 122, + 294, + 140 + ], + "type": "inline_equation", + "content": "\\eta" + }, + { + "bbox": [ + 65, + 122, + 294, + 140 + ], + "type": "text", + "content": "; known class ratio estimator " + }, + { + "bbox": [ + 65, + 122, + 294, + 140 + ], + "type": "inline_equation", + "content": "\\alpha" + }, + { + "bbox": [ + 65, + 122, + 294, + 140 + ], + "type": "text", + "content": "; coefficients " + }, + { + "bbox": [ + 65, + 122, + 294, + 140 + ], + "type": "inline_equation", + "content": "\\nu, \\beta, \\tau" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 65, + 140, + 200, + 150 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 140, + 200, + 150 + ], + "spans": [ + { + "bbox": [ + 65, + 140, + 200, + 150 + ], + "type": "text", + "content": "Notation: gradient reversal operator " + }, + { + "bbox": [ + 65, + 140, + 200, + 150 
+ ], + "type": "inline_equation", + "content": "R(\\cdot)" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 65, + 150, + 147, + 159 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 150, + 147, + 159 + ], + "spans": [ + { + "bbox": [ + 65, + 150, + 147, + 159 + ], + "type": "text", + "content": "for epoch " + }, + { + "bbox": [ + 65, + 150, + 147, + 159 + ], + "type": "inline_equation", + "content": "= 1,2,\\dots" + }, + { + "bbox": [ + 65, + 150, + 147, + 159 + ], + "type": "text", + "content": " do" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 77, + 159, + 220, + 168 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 77, + 159, + 220, + 168 + ], + "spans": [ + { + "bbox": [ + 77, + 159, + 220, + 168 + ], + "type": "text", + "content": "Estimate known class ratio " + }, + { + "bbox": [ + 77, + 159, + 220, + 168 + ], + "type": "inline_equation", + "content": "\\alpha" + }, + { + "bbox": [ + 77, + 159, + 220, + 168 + ], + "type": "text", + "content": " on " + }, + { + "bbox": [ + 77, + 159, + 220, + 168 + ], + "type": "inline_equation", + "content": "\\hat{T}" + }, + { + "bbox": [ + 77, + 159, + 220, + 168 + ], + "type": "text", + "content": " with " + }, + { + "bbox": [ + 77, + 159, + 220, + 168 + ], + "type": "inline_equation", + "content": "g,h" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 77, + 169, + 138, + 177 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 77, + 169, + 138, + 177 + ], + "spans": [ + { + "bbox": [ + 77, + 169, + 138, + 177 + ], + "type": "text", + "content": "if source-free then" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 89, + 178, + 293, + 187 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 89, + 178, + 293, + 187 + ], + "spans": [ + { + "bbox": [ + 89, + 178, + 293, + 187 + ], + "type": "text", + "content": "Estimate " + }, + { + "bbox": [ + 89, + 178, + 293, + 187 + ], + "type": "inline_equation", + "content": "\\hat{T}^{\\prime}" + }, + { + "bbox": [ + 89, + 178, + 293, + 187 + ], + "type": "text", + "content": " according to PUR and Update " + }, + { + "bbox": [ + 89, + 178, + 293, + 187 + ], + "type": "inline_equation", + "content": "w" + }, + { + "bbox": [ + 89, + 178, + 293, + 187 + ], + "type": "text", + "content": " to optimize AFG:" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 96, + 188, + 249, + 203 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 96, + 188, + 249, + 203 + ], + "spans": [ + { + "bbox": [ + 96, + 188, + 249, + 203 + ], + "type": "interline_equation", + "content": "w \\leftarrow w - \\eta \\Delta w, \\Delta w = \\frac {\\partial \\sum_ {i = 1} ^ {N} \\left(L _ {r c c} ^ {i} + L _ {c y c} ^ {i}\\right)}{\\partial w}", + "image_path": "302185302e0bc7fd509444d9a81f88fac1964f2da246d35ec4a7c21d81c26e3c.jpg" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 89, + 204, + 262, + 213 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 89, + 204, + 262, + 213 + ], + "spans": [ + { + "bbox": [ + 89, + 204, + 262, + 213 + ], + "type": "text", + "content": "Generate labeled features " + }, + { + "bbox": [ + 89, + 204, + 262, + 213 + ], + "type": "inline_equation", + "content": "g(\\hat{S}_i^{\\prime})" + }, + { + "bbox": [ + 89, + 204, + 262, + 213 + ], + "type": "text", + "content": " according to Eq. 
(14)" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 77, + 213, + 99, + 221 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 77, + 213, + 99, + 221 + ], + "spans": [ + { + "bbox": [ + 77, + 213, + 99, + 221 + ], + "type": "text", + "content": "end if" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 64, + 222, + 296, + 258 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 64, + 222, + 296, + 258 + ], + "spans": [ + { + "bbox": [ + 64, + 222, + 296, + 258 + ], + "type": "text", + "content": "Compute labeled target error " + }, + { + "bbox": [ + 64, + 222, + 296, + 258 + ], + "type": "inline_equation", + "content": "L_{cls}^{V}(h;g) = L_{V}" + }, + { + "bbox": [ + 64, + 222, + 296, + 258 + ], + "type": "text", + "content": ", source error " + }, + { + "bbox": [ + 64, + 222, + 296, + 258 + ], + "type": "inline_equation", + "content": "L_{cls}^{S_i}(h;g) = L_S^i" + }, + { + "bbox": [ + 64, + 222, + 296, + 258 + ], + "type": "text", + "content": ", hypothesis constraints " + }, + { + "bbox": [ + 64, + 222, + 296, + 258 + ], + "type": "inline_equation", + "content": "L_{\\mathcal{H}_S}^i (f_{S_i}';g) + L_{\\mathcal{H}_V}^i (f_V';g) + L_{\\mathcal{H}_T}^i (f_T';g) = L_H^i" + }, + { + "bbox": [ + 64, + 222, + 296, + 258 + ], + "type": "text", + "content": " for " + }, + { + "bbox": [ + 64, + 222, + 296, + 258 + ], + "type": "inline_equation", + "content": "i = 1,..N" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 64, + 258, + 294, + 277 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 64, + 258, + 294, + 277 + ], + "spans": [ + { + "bbox": [ + 64, + 258, + 294, + 277 + ], + "type": "text", + "content": "Compute discrepancy " + }, + { + "bbox": [ + 64, + 258, + 294, + 277 + ], + "type": "inline_equation", + "content": "L_{dis}^{i}(f_{S_{i}}^{\\prime}, f_{V}^{\\prime}, f_{T}^{\\prime}, R \\circ h \\circ R; R \\circ g) = L_{D}^{i}" + }, + { + "bbox": [ + 64, + 258, + 294, + 277 + ], + "type": "text", + "content": " given the gradient reversal layer for " + }, + { + "bbox": [ + 64, + 258, + 294, + 277 + ], + "type": "inline_equation", + "content": "i = 1,..N" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 77, + 278, + 222, + 287 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 77, + 278, + 222, + 287 + ], + "spans": [ + { + "bbox": [ + 77, + 278, + 222, + 287 + ], + "type": "text", + "content": "Update " + }, + { + "bbox": [ + 77, + 278, + 222, + 287 + ], + "type": "inline_equation", + "content": "\\phi" + }, + { + "bbox": [ + 77, + 278, + 222, + 287 + ], + "type": "text", + "content": " to minimize the target error bound:" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 85, + 288, + 138, + 296 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 85, + 288, + 138, + 296 + ], + "spans": [ + { + "bbox": [ + 85, + 288, + 138, + 296 + ], + "type": "interline_equation", + "content": "\\phi \\leftarrow \\phi - \\eta \\Delta \\phi ,", + "image_path": "d37062e4961a0d27de3e40bece7669b3fd3a7a6fb4e35c566f9d9fa6ef888174.jpg" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 85, + 296, + 260, + 312 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 85, + 296, + 260, + 312 + ], + "spans": [ + { + "bbox": [ + 85, + 296, + 260, + 312 + ], + "type": "interline_equation", + "content": "\\Delta \\phi = \\frac {\\partial (\\frac {1}{2} [ L _ {V} + \\frac {1}{\\nu} \\log \\sum_ {i = 1} ^ {N} \\exp (\\nu [ L _ {S} ^ {i} + L _ {H} ^ {i} - \\lambda L _ {D} 
^ {i} ]) ])}{\\partial \\phi}", + "image_path": "cea275e5a12ee1f81993822917f0e16d2b6480e1d8714dcf1b0d25168eee76bb.jpg" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 64, + 312, + 91, + 319 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 64, + 312, + 91, + 319 + ], + "spans": [ + { + "bbox": [ + 64, + 312, + 91, + 319 + ], + "type": "text", + "content": "end for" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 55, + 342, + 127, + 355 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 342, + 127, + 355 + ], + "spans": [ + { + "bbox": [ + 55, + 342, + 127, + 355 + ], + "type": "text", + "content": "4. Evaluation" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 55, + 363, + 296, + 507 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 363, + 296, + 507 + ], + "spans": [ + { + "bbox": [ + 55, + 363, + 296, + 507 + ], + "type": "text", + "content": "We evaluated our proposal using two benchmarks, Office-Home and DomainNet. The trade-off parameter " + }, + { + "bbox": [ + 55, + 363, + 296, + 507 + ], + "type": "inline_equation", + "content": "\\lambda" + }, + { + "bbox": [ + 55, + 363, + 296, + 507 + ], + "type": "text", + "content": " is set to 0.01 during the training procedure according to [62, 63]. In addition, we empirically set the PU, scaling, and threshold coefficients " + }, + { + "bbox": [ + 55, + 363, + 296, + 507 + ], + "type": "inline_equation", + "content": "\\beta" + }, + { + "bbox": [ + 55, + 363, + 296, + 507 + ], + "type": "text", + "content": " to 0.15, " + }, + { + "bbox": [ + 55, + 363, + 296, + 507 + ], + "type": "inline_equation", + "content": "\\nu" + }, + { + "bbox": [ + 55, + 363, + 296, + 507 + ], + "type": "text", + "content": " to 0.1, and " + }, + { + "bbox": [ + 55, + 363, + 296, + 507 + ], + "type": "inline_equation", + "content": "\\tau" + }, + { + "bbox": [ + 55, + 363, + 296, + 507 + ], + "type": "text", + "content": " to 0.3 for all experiments. For the semi-supervised setting, we select the same few-shot labeled target data according to [41]. Regarding the open-set setting, we assign a distinct label space for each source domain as a subset of the target label space described below. We quantitatively compare our results against various baselines, including OSBP [40], PGL [34], ANNA [27], PUJE [63], MOSDANET [38], HyMOS [3], and MPU [58]." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 55, + 523, + 296, + 572 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 523, + 296, + 572 + ], + "spans": [ + { + "bbox": [ + 55, + 523, + 296, + 572 + ], + "type": "text", + "content": "Evaluation Metrics for the proposed method and baselines are the widely used measures [34, 40], i.e., normalized accuracy for the known class only " + }, + { + "bbox": [ + 55, + 523, + 296, + 572 + ], + "type": "inline_equation", + "content": "(\\mathrm{OS}^{*})" + }, + { + "bbox": [ + 55, + 523, + 296, + 572 + ], + "type": "text", + "content": " and harmonic mean " + }, + { + "bbox": [ + 55, + 523, + 296, + 572 + ], + "type": "inline_equation", + "content": "\\mathrm{HOS} = 2(\\mathrm{OS}^{*} \\times \\mathrm{UNK}) / (\\mathrm{OS}^{*} + \\mathrm{UNK})" + }, + { + "bbox": [ + 55, + 523, + 296, + 572 + ], + "type": "text", + "content": " [2, 27, 31, 54, 63]." 
+ } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 55, + 588, + 296, + 673 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 588, + 296, + 673 + ], + "spans": [ + { + "bbox": [ + 55, + 588, + 296, + 673 + ], + "type": "text", + "content": "Office-Home [52] is a widely-used domain adaptation benchmark, which consists of 15,500 images from 65 categories and four domains: Art, Clipart, Product, and RealWorld. We select the first 30 classes alphabetically as the known class and group the rest as the unknown. Each source domain contains 10 classes without overlap, leading to a large label shift scenario." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 55, + 689, + 296, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 689, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 689, + 296, + 713 + ], + "type": "text", + "content": "DomainNet [36] is a more challenging benchmark dataset for large-scale domain adaptation that has 345 classes and" + } + ] + } + ], + "index": 22 + }, + { + "type": "table", + "bbox": [ + 315, + 69, + 554, + 146 + ], + "blocks": [ + { + "bbox": [ + 315, + 69, + 554, + 146 + ], + "lines": [ + { + "bbox": [ + 315, + 69, + 554, + 146 + ], + "spans": [ + { + "bbox": [ + 315, + 69, + 554, + 146 + ], + "type": "table", + "html": "
<table>
<tr><td rowspan=2>METHOD</td><td rowspan=2>TYPE</td><td colspan=2>→Clipart</td><td colspan=2>→Product</td><td colspan=2>→RealWorld</td><td colspan=2>→Art</td><td colspan=2>Avg.</td></tr>
<tr><td>1-shot</td><td>3-shot</td><td>1-shot</td><td>3-shot</td><td>1-shot</td><td>3-shot</td><td>1-shot</td><td>3-shot</td><td>1-shot</td><td>3-shot</td></tr>
<tr><td>OSBP</td><td rowspan=4>Source-Combine</td><td>60.4</td><td>62.6</td><td>70.1</td><td>72.3</td><td>69.7</td><td>68.3</td><td>60.7</td><td>64.3</td><td>65.2</td><td>66.9</td></tr>
<tr><td>PGL</td><td>59.0</td><td>61.8</td><td>67.7</td><td>69.9</td><td>66.7</td><td>68.9</td><td>61.2</td><td>64.0</td><td>63.7</td><td>66.2</td></tr>
<tr><td>ANNA</td><td>65.8</td><td>67.7</td><td>71.0</td><td>73.4</td><td>70.3</td><td>70.3</td><td>61.0</td><td>63.7</td><td>67.0</td><td>68.8</td></tr>
<tr><td>PUJE</td><td>65.8</td><td>71.7</td><td>73.3</td><td>74.2</td><td>75.0</td><td>78.1</td><td>65.5</td><td>67.3</td><td>69.9</td><td>72.8</td></tr>
<tr><td>MOSDANET</td><td rowspan=3>Multi-Source</td><td>61.5</td><td>65.9</td><td>70.0</td><td>73.8</td><td>71.4</td><td>69.6</td><td>61.6</td><td>63.6</td><td>66.1</td><td>68.2</td></tr>
<tr><td>HyMOS</td><td>56.6</td><td>64.4</td><td>64.4</td><td>67.3</td><td>66.2</td><td>68.4</td><td>59.0</td><td>62.2</td><td>61.6</td><td>65.6</td></tr>
<tr><td>UM</td><td>68.0</td><td>72.1</td><td>79.0</td><td>83.0</td><td>79.4</td><td>80.8</td><td>67.7</td><td>70.3</td><td>73.5</td><td>76.6</td></tr>
<tr><td>MPU*</td><td rowspan=4>Source-Free</td><td>46.3</td><td>54.4</td><td>59.7</td><td>66.3</td><td>57.8</td><td>60.2</td><td>58.3</td><td>62.5</td><td>55.5</td><td>60.9</td></tr>
<tr><td>OSBP*</td><td>44.5</td><td>56.5</td><td>55.6</td><td>65.1</td><td>59.3</td><td>64.3</td><td>55.6</td><td>59.9</td><td>53.8</td><td>61.5</td></tr>
<tr><td>PUJE*</td><td>52.2</td><td>58.4</td><td>65.0</td><td>70.3</td><td>66.2</td><td>70.0</td><td>58.7</td><td>62.7</td><td>60.5</td><td>65.4</td></tr>
<tr><td>UM+AFG</td><td>61.1</td><td>66.0</td><td>77.0</td><td>80.1</td><td>72.0</td><td>78.8</td><td>60.3</td><td>64.6</td><td>67.6</td><td>72.4</td></tr>
</table>
", + "image_path": "76650fc36cf69332371d9a6f28460ba260f0c8ede3c8f39b81b5b8e1c03d3710.jpg" + } + ] + } + ], + "index": 23, + "angle": 0, + "type": "table_body" + } + ], + "index": 23 + }, + { + "type": "table", + "bbox": [ + 315, + 186, + 554, + 262 + ], + "blocks": [ + { + "bbox": [ + 313, + 154, + 554, + 176 + ], + "lines": [ + { + "bbox": [ + 313, + 154, + 554, + 176 + ], + "spans": [ + { + "bbox": [ + 313, + 154, + 554, + 176 + ], + "type": "text", + "content": "Table 1. HOS (%) of ResNet-50 model fine-tuned on Office-Home dataset under 1-shot/3-shot setting" + } + ] + } + ], + "index": 24, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 315, + 186, + 554, + 262 + ], + "lines": [ + { + "bbox": [ + 315, + 186, + 554, + 262 + ], + "spans": [ + { + "bbox": [ + 315, + 186, + 554, + 262 + ], + "type": "table", + "html": "
<table>
<tr><td rowspan=2>METHOD</td><td rowspan=2>TYPE</td><td colspan=2>→Clipart</td><td colspan=2>→Painting</td><td colspan=2>→Real</td><td colspan=2>→Sketch</td><td colspan=2>Avg.</td></tr>
<tr><td>1-shot</td><td>3-shot</td><td>1-shot</td><td>3-shot</td><td>1-shot</td><td>3-shot</td><td>1-shot</td><td>3-shot</td><td>1-shot</td><td>3-shot</td></tr>
<tr><td>OSBP</td><td rowspan=4>Source-Combine</td><td>54.2</td><td>57.4</td><td>49.8</td><td>53.1</td><td>62.6</td><td>64.0</td><td>49.5</td><td>50.1</td><td>54.0</td><td>56.2</td></tr>
<tr><td>PGL</td><td>59.8</td><td>62.0</td><td>59.4</td><td>61.4</td><td>67.4</td><td>69.4</td><td>59.7</td><td>61.2</td><td>61.6</td><td>63.5</td></tr>
<tr><td>ANNA</td><td>55.6</td><td>61.5</td><td>53.6</td><td>54.3</td><td>67.5</td><td>66.5</td><td>57.9</td><td>58.1</td><td>58.7</td><td>60.1</td></tr>
<tr><td>PUJE</td><td>64.4</td><td>66.2</td><td>59.8</td><td>61.7</td><td>67.7</td><td>69.3</td><td>61.2</td><td>64.2</td><td>63.3</td><td>65.4</td></tr>
<tr><td>MOSDANET</td><td rowspan=3>Multi-Source</td><td>56.4</td><td>55.3</td><td>55.6</td><td>58.2</td><td>68.5</td><td>69.8</td><td>54.1</td><td>54.9</td><td>58.7</td><td>59.6</td></tr>
<tr><td>HyMOS</td><td>53.0</td><td>54.4</td><td>54.1</td><td>56.0</td><td>65.1</td><td>67.4</td><td>56.3</td><td>57.1</td><td>57.1</td><td>58.7</td></tr>
<tr><td>UM</td><td>70.3</td><td>71.5</td><td>66.0</td><td>68.8</td><td>75.1</td><td>78.5</td><td>66.1</td><td>69.5</td><td>69.4</td><td>72.1</td></tr>
<tr><td>MPU*</td><td rowspan=4>Source-Free</td><td>54.5</td><td>57.6</td><td>55.0</td><td>60.1</td><td>62.4</td><td>66.4</td><td>48.4</td><td>52.9</td><td>55.1</td><td>59.3</td></tr>
<tr><td>MOSDANET*</td><td>58.1</td><td>60.5</td><td>54.3</td><td>59.3</td><td>63.2</td><td>62.5</td><td>49.4</td><td>54.3</td><td>56.3</td><td>59.2</td></tr>
<tr><td>PUJE*</td><td>60.5</td><td>62.2</td><td>55.3</td><td>61.4</td><td>64.0</td><td>67.8</td><td>53.1</td><td>56.2</td><td>58.2</td><td>61.9</td></tr>
<tr><td>UM+AFG</td><td>64.8</td><td>69.7</td><td>60.0</td><td>64.2</td><td>67.6</td><td>73.4</td><td>60.0</td><td>64.8</td><td>63.1</td><td>68.0</td></tr>
</table>
", + "image_path": "9e436e408be32af3f8175ff3f064ffb58b95c052ce0c3c743895f7bbaba59363.jpg" + } + ] + } + ], + "index": 25, + "angle": 0, + "type": "table_body" + } + ], + "index": 25 + }, + { + "bbox": [ + 313, + 270, + 554, + 293 + ], + "lines": [ + { + "bbox": [ + 313, + 270, + 554, + 293 + ], + "spans": [ + { + "bbox": [ + 313, + 270, + 554, + 293 + ], + "type": "text", + "content": "Table 2. HOS (%) of ResNet-50 model fine-tuned on DomainNet dataset under 1-shot/3-shot setting" + } + ] + } + ], + "index": 26, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 313, + 315, + 554, + 386 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 315, + 554, + 386 + ], + "spans": [ + { + "bbox": [ + 313, + 315, + 554, + 386 + ], + "type": "text", + "content": "6 domains. Following the protocol established in [41], we pick 4 domains (Real, Clipart, Painting, Sketch) with 126 classes for the experiments. We select the first 60 classes alphabetically as the known class and group the rest as the unknown. Similarly, each source domain contains 20 classes without any overlap." + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 313, + 388, + 556, + 638 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 388, + 556, + 638 + ], + "spans": [ + { + "bbox": [ + 313, + 388, + 556, + 638 + ], + "type": "text", + "content": "As reported in Tabs. 1 and 2, under the same setting given 1-shot/3-shot labeled target data (1/3 samples per class), we observe that our method UM consistently outperforms the state-of-the-art results, improving HOS by " + }, + { + "bbox": [ + 313, + 388, + 556, + 638 + ], + "type": "inline_equation", + "content": "3.6\\% / 3.8\\%" + }, + { + "bbox": [ + 313, + 388, + 556, + 638 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 388, + 556, + 638 + ], + "type": "inline_equation", + "content": "6.1\\% / 6.7\\%" + }, + { + "bbox": [ + 313, + 388, + 556, + 638 + ], + "type": "text", + "content": " on the benchmark datasets of Office-Home and DomainNet respectively, when source data is available. Furthermore, " + }, + { + "bbox": [ + 313, + 388, + 556, + 638 + ], + "type": "inline_equation", + "content": "\\mathrm{UM + AFG}" + }, + { + "bbox": [ + 313, + 388, + 556, + 638 + ], + "type": "text", + "content": " enhances HOS by " + }, + { + "bbox": [ + 313, + 388, + 556, + 638 + ], + "type": "inline_equation", + "content": "7.1\\% / 7.0\\%" + }, + { + "bbox": [ + 313, + 388, + 556, + 638 + ], + "type": "text", + "content": " in Office-Home and " + }, + { + "bbox": [ + 313, + 388, + 556, + 638 + ], + "type": "inline_equation", + "content": "4.9\\% / 6.1\\%" + }, + { + "bbox": [ + 313, + 388, + 556, + 638 + ], + "type": "text", + "content": " in DomainNet under the challenging source-free setting. Note that our proposed approach provides significant performance gains for the more complex datasets like DomainNet, which requires knowledge transfer across different modalities, regardless of covariate or label shift. We group all source domains with labeled target data as a single domain for the baselines that require the source-combine strategy. 
For the source-free cases, we introduce a few confident target data labeled by pre-trained models as pseudo-source data to enable several algorithms denoted by " + }, + { + "bbox": [ + 313, + 388, + 556, + 638 + ], + "type": "inline_equation", + "content": "*" + }, + { + "bbox": [ + 313, + 388, + 556, + 638 + ], + "type": "text", + "content": " under this problem setting since none of the existing methods can directly address the open-set task under the source-free condition with a huge label shift across source domains." + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 313, + 647, + 468, + 659 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 647, + 468, + 659 + ], + "spans": [ + { + "bbox": [ + 313, + 647, + 468, + 659 + ], + "type": "text", + "content": "4.1. Feature Space Visualization" + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 313, + 665, + 555, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 665, + 555, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 665, + 555, + 713 + ], + "type": "text", + "content": "To intuitively visualize the effectiveness of different approaches, we extracted features from the baseline models and our proposed model on the " + }, + { + "bbox": [ + 313, + 665, + 555, + 713 + ], + "type": "inline_equation", + "content": "\\rightarrow" + }, + { + "bbox": [ + 313, + 665, + 555, + 713 + ], + "type": "text", + "content": " Art task (Office-Home) and " + }, + { + "bbox": [ + 313, + 665, + 555, + 713 + ], + "type": "inline_equation", + "content": "\\rightarrow" + }, + { + "bbox": [ + 313, + 665, + 555, + 713 + ], + "type": "text", + "content": " Real task (DomainNet) with the ResNet-50 backbone" + } + ] + } + ], + "index": 30 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "text", + "content": "10147" + } + ] + } + ], + "index": 31 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 56, + 70, + 295, + 137 + ], + "blocks": [ + { + "bbox": [ + 56, + 70, + 295, + 137 + ], + "lines": [ + { + "bbox": [ + 56, + 70, + 295, + 137 + ], + "spans": [ + { + "bbox": [ + 56, + 70, + 295, + 137 + ], + "type": "table", + "html": "
<table>
<tr><td rowspan=2>METHOD</td><td rowspan=2>TYPE</td><td colspan=3>Office-Home →RealWorld</td><td colspan=3>DomainNet →Clipart</td></tr>
<tr><td>UNK</td><td>OS*</td><td>HOS</td><td>UNK</td><td>OS*</td><td>HOS</td></tr>
<tr><td>DEFAULT</td><td rowspan=4>Source-Free</td><td>73.7</td><td>70.4</td><td>72.0</td><td>72.6</td><td>58.6</td><td>64.8</td></tr>
<tr><td>w/o Lsim</td><td>70.9</td><td>70.6</td><td>70.7</td><td>69.3</td><td>59.1</td><td>63.8</td></tr>
<tr><td>w/o Lcyc</td><td>74.0</td><td>68.5</td><td>71.1</td><td>72.8</td><td>56.9</td><td>63.9</td></tr>
<tr><td>w/o PUR</td><td>39.6</td><td>87.0</td><td>54.4</td><td>47.3</td><td>72.9</td><td>57.4</td></tr>
</table>
", + "image_path": "db6c36da38841c2143345382b8b42c594ddfef500ed43ad2c082ecc63ba899f9.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "bbox": [ + 54, + 190, + 297, + 286 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 190, + 297, + 286 + ], + "spans": [ + { + "bbox": [ + 54, + 190, + 297, + 286 + ], + "type": "text", + "content": "[16]. The feature distributions were processed with t-SNE [48] afterward. As shown in Fig. 4, compared with baselines, our method UM achieves a better alignment between source and target distributions, especially when the domain shift is large. Benefiting from our joint error-based adversarial alignment mechanism, the extracted feature space, including the cluster of unknown target data (green), has a more discriminative class-wise decision boundary." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 294, + 149, + 306 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 294, + 149, + 306 + ], + "spans": [ + { + "bbox": [ + 55, + 294, + 149, + 306 + ], + "type": "text", + "content": "4.2. Ablation Study" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 56, + 312, + 296, + 504 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 312, + 296, + 504 + ], + "spans": [ + { + "bbox": [ + 56, + 312, + 296, + 504 + ], + "type": "text", + "content": "Self-supervised learning methods have shown that, by relying only on unlabeled data, it is still possible to obtain classification performance close to those of the supervised approaches [5, 17, 18]. In the source-free setting, we adopt the typical SimCLR [5] to help group the feature of unknown target data into a single cluster. As expected in Tab. 3, " + }, + { + "bbox": [ + 56, + 312, + 296, + 504 + ], + "type": "inline_equation", + "content": "L_{sim}^2" + }, + { + "bbox": [ + 56, + 312, + 296, + 504 + ], + "type": "text", + "content": " can slightly improve the accuracy of the unknown class for a higher HOS. Furthermore, Progressive Unknown Rejection (PUR), a denoising of generated labeled features, is crucial to detecting unknowns in source-free cases. As also illustrated in Fig. 7d, generally, a larger threshold " + }, + { + "bbox": [ + 56, + 312, + 296, + 504 + ], + "type": "inline_equation", + "content": "\\tau" + }, + { + "bbox": [ + 56, + 312, + 296, + 504 + ], + "type": "text", + "content": " will lead to a higher UNK at the cost of low OS*, characterized as the trade-off between recognizing known and unknown data for open-set tasks. In addition, we verify the effectiveness of cycle-consistency regularization " + }, + { + "bbox": [ + 56, + 312, + 296, + 504 + ], + "type": "inline_equation", + "content": "L_{cyc}" + }, + { + "bbox": [ + 56, + 312, + 296, + 504 + ], + "type": "text", + "content": " and find it helps maintain the normalized accuracy of the known class." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 512, + 255, + 525 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 512, + 255, + 525 + ], + "spans": [ + { + "bbox": [ + 55, + 512, + 255, + 525 + ], + "type": "text", + "content": "4.3. 
Robustness against Varying Openness" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 531, + 297, + 663 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 531, + 297, + 663 + ], + "spans": [ + { + "bbox": [ + 55, + 531, + 297, + 663 + ], + "type": "text", + "content": "To verify the robustness of the proposed method, we conducted experiments on the " + }, + { + "bbox": [ + 55, + 531, + 297, + 663 + ], + "type": "inline_equation", + "content": "\\rightarrow" + }, + { + "bbox": [ + 55, + 531, + 297, + 663 + ], + "type": "text", + "content": " Painting task (DomainNet) with the openness varying in " + }, + { + "bbox": [ + 55, + 531, + 297, + 663 + ], + "type": "inline_equation", + "content": "\\{0.25, 0.5, 0.75\\}" + }, + { + "bbox": [ + 55, + 531, + 297, + 663 + ], + "type": "text", + "content": ". Here, openness is defined by the ratio of unknown samples in the entire target data. The PGL approach heuristically sets the hyperparameter according to the true unknown ratio to control the openness, while PUJE and UM automatically estimate the weight " + }, + { + "bbox": [ + 55, + 531, + 297, + 663 + ], + "type": "inline_equation", + "content": "\\alpha" + }, + { + "bbox": [ + 55, + 531, + 297, + 663 + ], + "type": "text", + "content": " during the training procedure. From Fig. 5a, we observe that our proposal consistently outperforms baselines by a large margin, which confirms its robustness to the change in openness." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 670, + 165, + 683 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 670, + 165, + 683 + ], + "spans": [ + { + "bbox": [ + 55, + 670, + 165, + 683 + ], + "type": "text", + "content": "4.4. Stable Convergence" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 689, + 296, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 689, + 296, + 715 + ], + "spans": [ + { + "bbox": [ + 55, + 689, + 296, + 715 + ], + "type": "text", + "content": "In Fig. 5b, we illustrate the recognition performance of UM over training steps on the " + }, + { + "bbox": [ + 55, + 689, + 296, + 715 + ], + "type": "inline_equation", + "content": "\\rightarrow" + }, + { + "bbox": [ + 55, + 689, + 296, + 715 + ], + "type": "text", + "content": "Art task of the Office-Home" + } + ] + } + ], + "index": 8 + }, + { + "type": "table", + "bbox": [ + 315, + 70, + 554, + 99 + ], + "blocks": [ + { + "bbox": [ + 55, + 146, + 296, + 168 + ], + "lines": [ + { + "bbox": [ + 55, + 146, + 296, + 168 + ], + "spans": [ + { + "bbox": [ + 55, + 146, + 296, + 168 + ], + "type": "text", + "content": "Table 3. Ablation study verified with ResNet-50 model on Office-Home & DomainNet dataset" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 315, + 70, + 554, + 99 + ], + "lines": [ + { + "bbox": [ + 315, + 70, + 554, + 99 + ], + "spans": [ + { + "bbox": [ + 315, + 70, + 554, + 99 + ], + "type": "table", + "html": "
<table>
<tr><td rowspan=3>METHOD</td><td rowspan=3>TYPE</td><td rowspan=3>BACKBONE</td><td colspan=6>Office-Home</td><td colspan=6>DomainNet</td><td colspan=3 rowspan=2>Avg.</td></tr>
<tr><td colspan=3>→Art</td><td colspan=3>→Product</td><td colspan=3>→Painting</td><td colspan=3>→Real</td></tr>
<tr><td>UNK</td><td>OS*</td><td>HOS</td><td>UNK</td><td>OS*</td><td>HOS</td><td>UNK</td><td>OS*</td><td>HOS</td><td>UNK</td><td>OS*</td><td>HOS</td><td>UNK</td><td>OS*</td><td>HOS</td></tr>
<tr><td>UM</td><td>Multi-Source</td><td>ResNet-50</td><td>72.8</td><td>63.3</td><td>67.7</td><td>78.7</td><td>79.3</td><td>79.0</td><td>77.9</td><td>57.3</td><td>66.0</td><td>74.9</td><td>75.2</td><td>75.1</td><td>76.1</td><td>68.8</td><td>72.0</td></tr>
<tr><td>UM+AFG</td><td rowspan=2>Source-Free</td><td>ResNet-50</td><td>66.1</td><td>55.5</td><td>60.3</td><td>83.3</td><td>71.6</td><td>77.0</td><td>63.9</td><td>56.5</td><td>60.0</td><td>76.6</td><td>60.6</td><td>67.6</td><td>72.5</td><td>61.1</td><td>66.2</td></tr>
<tr><td>UM+AFG</td><td>ViT-B/16</td><td>77.7</td><td>59.8</td><td>67.5</td><td>87.6</td><td>80.9</td><td>84.1</td><td>68.5</td><td>57.4</td><td>62.5</td><td>82.8</td><td>73.2</td><td>77.7</td><td>79.2</td><td>67.8</td><td>73.0</td></tr>
</table>
", + "image_path": "3dc45f4f0984867be4e1fe1529fb979e0229e4d20357170310ee4cc5c77da353.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "table_body" + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 107, + 555, + 130 + ], + "lines": [ + { + "bbox": [ + 313, + 107, + 555, + 130 + ], + "spans": [ + { + "bbox": [ + 313, + 107, + 555, + 130 + ], + "type": "text", + "content": "Table 4. Accuracy of ViT-B/16 model fine-tuned on Office-Home & DomainNet dataset under 1-shot setting" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 313, + 152, + 556, + 236 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 152, + 556, + 236 + ], + "spans": [ + { + "bbox": [ + 313, + 152, + 556, + 236 + ], + "type": "text", + "content": "dataset. OS* experiences a downward while the UNK keeps improving, which characterizes a trade-off between the accuracy of knowns and the accuracy of unknowns. We further observe that some previous works [27, 34] do not converge at the optimum. In contrast, our method always reaches a reliable convergence without suffering from a severe performance drop in recognizing known classes." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 244, + 508, + 257 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 244, + 508, + 257 + ], + "spans": [ + { + "bbox": [ + 313, + 244, + 508, + 257 + ], + "type": "text", + "content": "4.5. Flexibility in Backbone Architecture" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 263, + 556, + 419 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 263, + 556, + 419 + ], + "spans": [ + { + "bbox": [ + 313, + 263, + 556, + 419 + ], + "type": "text", + "content": "As presented in Sec. 3.3, AFG allows the target model to use a different backbone architecture from the pre-trained source models. Therefore, unlike those model-centric methods whose performance is deeply limited by source model architecture, our method can be effectively applied to real-world problems where each source model is trained using various networks by leveraging the power of advanced backbones like ViT [9] for the target model. Tab. 4 reveals a clear advantage of AFG when changing the target backbone to ViT-B/16 as the HOS scores under the source-free condition approach and even outperform the source data results. The same ResNet-50 backbone is used for pre-trained source models across different experiments." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 427, + 550, + 441 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 427, + 550, + 441 + ], + "spans": [ + { + "bbox": [ + 313, + 427, + 550, + 441 + ], + "type": "text", + "content": "4.6. Advantage in Increasing Labeled Target Data" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 313, + 446, + 556, + 590 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 446, + 556, + 590 + ], + "spans": [ + { + "bbox": [ + 313, + 446, + 556, + 590 + ], + "type": "text", + "content": "Sec. 4.6 shows the behavior of different methods when the number of labeled examples in the target domain increases from 1 to 10 per class on DomainNet using ResNet50 backbone. Cluster-based methods like OSBP, MOSDANET, and HyMOS will finally be caught up by a simple multi-class PU learning (MPU) when the sample size increases. On the contrary, our method consistently outperforms the most competitive baseline PUJE for various sizes of labeled target data. 
Furthermore, along with the growth in the size of " + }, + { + "bbox": [ + 313, + 446, + 556, + 590 + ], + "type": "inline_equation", + "content": "\\hat{V}" + }, + { + "bbox": [ + 313, + 446, + 556, + 590 + ], + "type": "text", + "content": ", the HOS score achieved by UM+AFG in the source-free setting gradually approaches, even surpasses those methods using source data." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 313, + 598, + 555, + 624 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 598, + 555, + 624 + ], + "spans": [ + { + "bbox": [ + 313, + 598, + 555, + 624 + ], + "type": "text", + "content": "4.7. Sensitivity to PU, Scaling, and Threshold Coefficients" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 313, + 629, + 556, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 629, + 556, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 629, + 556, + 715 + ], + "type": "text", + "content": "We show the sensitivity of our approach to varying PU coefficient " + }, + { + "bbox": [ + 313, + 629, + 556, + 715 + ], + "type": "inline_equation", + "content": "\\beta" + }, + { + "bbox": [ + 313, + 629, + 556, + 715 + ], + "type": "text", + "content": ", scaling factor " + }, + { + "bbox": [ + 313, + 629, + 556, + 715 + ], + "type": "inline_equation", + "content": "\\nu" + }, + { + "bbox": [ + 313, + 629, + 556, + 715 + ], + "type": "text", + "content": ", and threshold " + }, + { + "bbox": [ + 313, + 629, + 556, + 715 + ], + "type": "inline_equation", + "content": "\\tau" + }, + { + "bbox": [ + 313, + 629, + 556, + 715 + ], + "type": "text", + "content": " in Sec. 4.7. We can draw two observations from this: the OS* score is relatively stable, and the unknown recognition achieves a more reliable performance for a larger coefficient " + }, + { + "bbox": [ + 313, + 629, + 556, + 715 + ], + "type": "inline_equation", + "content": "\\beta" + }, + { + "bbox": [ + 313, + 629, + 556, + 715 + ], + "type": "text", + "content": "; generally, a larger " + }, + { + "bbox": [ + 313, + 629, + 556, + 715 + ], + "type": "inline_equation", + "content": "\\nu" + }, + { + "bbox": [ + 313, + 629, + 556, + 715 + ], + "type": "text", + "content": " means focusing on the source domain that contributes more error and ignoring others, while a smaller " + }, + { + "bbox": [ + 313, + 629, + 556, + 715 + ], + "type": "inline_equation", + "content": "\\nu" + }, + { + "bbox": [ + 313, + 629, + 556, + 715 + ], + "type": "text", + "content": " will equalize" + } + ] + } + ], + "index": 17 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "type": "text", + "content": "10148" + } + ] + } + ], + "index": 18 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 109, + 71, + 203, + 144 + ], + "blocks": [ + { + "bbox": [ + 109, + 71, + 203, + 144 + ], + "lines": [ + { + "bbox": [ + 109, + 71, + 203, + 144 + ], + "spans": [ + { + "bbox": [ + 109, + 71, + 203, + 144 + ], + "type": "image", + "image_path": "baead833bffaee584e19426d1998eeead018567ac396ee5e16b320c5b2abac07.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 140, + 148, + 173, + 157 + ], + "lines": [ + { + "bbox": [ + 140, + 148, + 173, + 157 + ], + "spans": [ + { + "bbox": [ + 140, + 148, + 173, + 
157 + ], + "type": "text", + "content": "(a) OSBP" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 209, + 72, + 301, + 144 + ], + "blocks": [ + { + "bbox": [ + 209, + 72, + 301, + 144 + ], + "lines": [ + { + "bbox": [ + 209, + 72, + 301, + 144 + ], + "spans": [ + { + "bbox": [ + 209, + 72, + 301, + 144 + ], + "type": "image", + "image_path": "d2c42b2a171394521b9bf8602debd0a11083dadfd7c65d32b226e4d545974e23.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 235, + 148, + 276, + 158 + ], + "lines": [ + { + "bbox": [ + 235, + 148, + 276, + 158 + ], + "spans": [ + { + "bbox": [ + 235, + 148, + 276, + 158 + ], + "type": "text", + "content": "(b) HyMOS" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 308, + 72, + 386, + 144 + ], + "blocks": [ + { + "bbox": [ + 308, + 72, + 386, + 144 + ], + "lines": [ + { + "bbox": [ + 308, + 72, + 386, + 144 + ], + "spans": [ + { + "bbox": [ + 308, + 72, + 386, + 144 + ], + "type": "image", + "image_path": "f15e018d85423071d5e641ae072ae0e60c6ba0d2528ecb2c32ee97ab6e125b84.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 339, + 148, + 370, + 157 + ], + "lines": [ + { + "bbox": [ + 339, + 148, + 370, + 157 + ], + "spans": [ + { + "bbox": [ + 339, + 148, + 370, + 157 + ], + "type": "text", + "content": "(c) PUJE" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 406, + 72, + 500, + 144 + ], + "blocks": [ + { + "bbox": [ + 406, + 72, + 500, + 144 + ], + "lines": [ + { + "bbox": [ + 406, + 72, + 500, + 144 + ], + "spans": [ + { + "bbox": [ + 406, + 72, + 500, + 144 + ], + "type": "image", + "image_path": "6ec085147406cebbe3d592a8c7a65096c4e7d31086a9c5de0aa4cc22b480166b.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 109, + 160, + 203, + 232 + ], + "blocks": [ + { + "bbox": [ + 109, + 160, + 203, + 232 + ], + "lines": [ + { + "bbox": [ + 109, + 160, + 203, + 232 + ], + "spans": [ + { + "bbox": [ + 109, + 160, + 203, + 232 + ], + "type": "image", + "image_path": "7223fd9ce990de23916bc3816c1301a5e021cae6e1c49bb981d2921d50924a21.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 128, + 236, + 185, + 245 + ], + "lines": [ + { + "bbox": [ + 128, + 236, + 185, + 245 + ], + "spans": [ + { + "bbox": [ + 128, + 236, + 185, + 245 + ], + "type": "text", + "content": "(e) MOSDANET" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_caption" + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 209, + 160, + 301, + 232 + ], + "blocks": [ + { + "bbox": [ + 209, + 160, + 301, + 232 + ], + "lines": [ + { + "bbox": [ + 209, + 160, + 301, + 232 + ], + "spans": [ + { + "bbox": [ + 209, + 160, + 301, + 232 + ], + "type": "image", + "image_path": "3f9362d1ac223f824da343dcedf9a097dfb6bc69ee51c5583e18eef0313724f7.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 242, + 236, + 269, + 246 + ], + "lines": [ + { + "bbox": [ + 242, + 236, + 269, + 246 + ], + "spans": [ + { + "bbox": [ + 242, + 236, + 269, + 246 + ], + "type": "text", + "content": "(f) PGL" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_caption" + } + ], + "index": 10 + 
}, + { + "type": "image", + "bbox": [ + 308, + 160, + 400, + 232 + ], + "blocks": [ + { + "bbox": [ + 308, + 160, + 400, + 232 + ], + "lines": [ + { + "bbox": [ + 308, + 160, + 400, + 232 + ], + "spans": [ + { + "bbox": [ + 308, + 160, + 400, + 232 + ], + "type": "image", + "image_path": "dd384efbc1b66ee6ffd33a478ba5bc330de776d7c1b837d36e0facd77a7d23eb.jpg" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 336, + 236, + 372, + 246 + ], + "lines": [ + { + "bbox": [ + 336, + 236, + 372, + 246 + ], + "spans": [ + { + "bbox": [ + 336, + 236, + 372, + 246 + ], + "type": "text", + "content": "(g) ANNA" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_caption" + } + ], + "index": 12 + }, + { + "type": "image", + "bbox": [ + 406, + 160, + 500, + 232 + ], + "blocks": [ + { + "bbox": [ + 441, + 148, + 466, + 157 + ], + "lines": [ + { + "bbox": [ + 441, + 148, + 466, + 157 + ], + "spans": [ + { + "bbox": [ + 441, + 148, + 466, + 157 + ], + "type": "text", + "content": "(d) UM" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 406, + 160, + 500, + 232 + ], + "lines": [ + { + "bbox": [ + 406, + 160, + 500, + 232 + ], + "spans": [ + { + "bbox": [ + 406, + 160, + 500, + 232 + ], + "type": "image", + "image_path": "229d3e1d4f2e004257d80251515d4af69f89ddd3649ecec6018adea93c384ae3.jpg" + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 441, + 236, + 466, + 246 + ], + "lines": [ + { + "bbox": [ + 441, + 236, + 466, + 246 + ], + "spans": [ + { + "bbox": [ + 441, + 236, + 466, + 246 + ], + "type": "text", + "content": "(h) UM" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_caption" + } + ], + "index": 14 + }, + { + "type": "image", + "bbox": [ + 72, + 289, + 173, + 367 + ], + "blocks": [ + { + "bbox": [ + 55, + 255, + 553, + 267 + ], + "lines": [ + { + "bbox": [ + 55, + 255, + 553, + 267 + ], + "spans": [ + { + "bbox": [ + 55, + 255, + 553, + 267 + ], + "type": "text", + "content": "Figure 4. T-SNE visualization of feature distributions in (a)-(d) " + }, + { + "bbox": [ + 55, + 255, + 553, + 267 + ], + "type": "inline_equation", + "content": "\\rightarrow" + }, + { + "bbox": [ + 55, + 255, + 553, + 267 + ], + "type": "text", + "content": " Art task (Office-Home dataset); (e)-(h) " + }, + { + "bbox": [ + 55, + 255, + 553, + 267 + ], + "type": "inline_equation", + "content": "\\rightarrow" + }, + { + "bbox": [ + 55, + 255, + 553, + 267 + ], + "type": "text", + "content": " Real task (DomainNet dataset)." + } + ] + } + ], + "index": 16, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 72, + 289, + 173, + 367 + ], + "lines": [ + { + "bbox": [ + 72, + 289, + 173, + 367 + ], + "spans": [ + { + "bbox": [ + 72, + 289, + 173, + 367 + ], + "type": "image", + "image_path": "9f9bbd10253eda6ef4040bce1e232b300a2c7f34eed873b250f6a8e023b35dbf.jpg" + } + ] + } + ], + "index": 17, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 78, + 373, + 166, + 382 + ], + "lines": [ + { + "bbox": [ + 78, + 373, + 166, + 382 + ], + "spans": [ + { + "bbox": [ + 78, + 373, + 166, + 382 + ], + "type": "text", + "content": "(a) robust against openness" + } + ] + } + ], + "index": 18, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 55, + 392, + 296, + 436 + ], + "lines": [ + { + "bbox": [ + 55, + 392, + 296, + 436 + ], + "spans": [ + { + "bbox": [ + 55, + 392, + 296, + 436 + ], + "type": "text", + "content": "Figure 5. (a) Performance comparisons w.r.t. 
varying openness of the " + }, + { + "bbox": [ + 55, + 392, + 296, + 436 + ], + "type": "inline_equation", + "content": "\\rightarrow" + }, + { + "bbox": [ + 55, + 392, + 296, + 436 + ], + "type": "text", + "content": " Painting task from DomainNet dataset; (b) Convergence analysis of the " + }, + { + "bbox": [ + 55, + 392, + 296, + 436 + ], + "type": "inline_equation", + "content": "\\rightarrow" + }, + { + "bbox": [ + 55, + 392, + 296, + 436 + ], + "type": "text", + "content": " Art task from Office-Home dataset compared to other baselines with confidence intervals" + } + ] + } + ], + "index": 21, + "angle": 0, + "type": "image_caption" + } + ], + "index": 17 + }, + { + "type": "image", + "bbox": [ + 178, + 289, + 278, + 367 + ], + "blocks": [ + { + "bbox": [ + 178, + 289, + 278, + 367 + ], + "lines": [ + { + "bbox": [ + 178, + 289, + 278, + 367 + ], + "spans": [ + { + "bbox": [ + 178, + 289, + 278, + 367 + ], + "type": "image", + "image_path": "8788f58a6b02ee9cf8f99ba6e949cce3625a03ac7f02f767a2ac7316ee280f69.jpg" + } + ] + } + ], + "index": 19, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 192, + 373, + 265, + 382 + ], + "lines": [ + { + "bbox": [ + 192, + 373, + 265, + 382 + ], + "spans": [ + { + "bbox": [ + 192, + 373, + 265, + 382 + ], + "type": "text", + "content": "(b) stable convergence" + } + ] + } + ], + "index": 20, + "angle": 0, + "type": "image_caption" + } + ], + "index": 19 + }, + { + "type": "image", + "bbox": [ + 72, + 468, + 173, + 545 + ], + "blocks": [ + { + "bbox": [ + 72, + 468, + 173, + 545 + ], + "lines": [ + { + "bbox": [ + 72, + 468, + 173, + 545 + ], + "spans": [ + { + "bbox": [ + 72, + 468, + 173, + 545 + ], + "type": "image", + "image_path": "910dc367fec1548161cea36b899c04a7a077d125ae14d606c26f5d6fe125312a.jpg" + } + ] + } + ], + "index": 22, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 93, + 552, + 153, + 561 + ], + "lines": [ + { + "bbox": [ + 93, + 552, + 153, + 561 + ], + "spans": [ + { + "bbox": [ + 93, + 552, + 153, + 561 + ], + "type": "text", + "content": "(a) " + }, + { + "bbox": [ + 93, + 552, + 153, + 561 + ], + "type": "inline_equation", + "content": "\\rightarrow" + }, + { + "bbox": [ + 93, + 552, + 153, + 561 + ], + "type": "text", + "content": " Clipart task" + } + ] + } + ], + "index": 23, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 55, + 571, + 296, + 615 + ], + "lines": [ + { + "bbox": [ + 55, + 571, + 296, + 615 + ], + "spans": [ + { + "bbox": [ + 55, + 571, + 296, + 615 + ], + "type": "text", + "content": "Figure 6. Accuracy vs the number of labeled target samples on DomainNet using ResNet50 backbone. Our method maintains a high level of performance for different sample sizes of the labeled target data." 
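For readers reproducing the Figure 4-style plots, the visualization step described in Sec. 4.1 (backbone features projected to 2-D with t-SNE [48]) can be sketched as follows. The feature and label arrays below are random stand-ins, since the actual extracted features are not part of this file.

```python
# Minimal sketch of the Sec. 4.1 visualization step; variable names and plot
# styling are illustrative, not from the paper's released code.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 2048))   # stand-in for ResNet-50 penultimate features
labels = rng.integers(0, 5, size=500)     # stand-in class ids; one id could mark 'unknown'

emb = TSNE(n_components=2, perplexity=30, init="pca", random_state=0).fit_transform(features)
plt.scatter(emb[:, 0], emb[:, 1], c=labels, s=5, cmap="tab10")
plt.title("t-SNE of backbone features (illustrative)")
plt.show()
```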
+ } + ] + } + ], + "index": 26, + "angle": 0, + "type": "image_caption" + } + ], + "index": 22 + }, + { + "type": "image", + "bbox": [ + 179, + 468, + 279, + 545 + ], + "blocks": [ + { + "bbox": [ + 179, + 468, + 279, + 545 + ], + "lines": [ + { + "bbox": [ + 179, + 468, + 279, + 545 + ], + "spans": [ + { + "bbox": [ + 179, + 468, + 279, + 545 + ], + "type": "image", + "image_path": "4674bd3fa5c9b459b0e1b137015581139b0181d2b1023e114cefbc8320254cbc.jpg" + } + ] + } + ], + "index": 24, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 199, + 552, + 259, + 560 + ], + "lines": [ + { + "bbox": [ + 199, + 552, + 259, + 560 + ], + "spans": [ + { + "bbox": [ + 199, + 552, + 259, + 560 + ], + "type": "text", + "content": "(b) " + }, + { + "bbox": [ + 199, + 552, + 259, + 560 + ], + "type": "inline_equation", + "content": "\\rightarrow" + }, + { + "bbox": [ + 199, + 552, + 259, + 560 + ], + "type": "text", + "content": " Sketch task" + } + ] + } + ], + "index": 25, + "angle": 0, + "type": "image_caption" + } + ], + "index": 24 + }, + { + "bbox": [ + 55, + 654, + 296, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 654, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 654, + 296, + 713 + ], + "type": "text", + "content": "the importance of each domain, which can harm the performance when a remarkable label shift exists among source domains implied by Fig. 7c (the imbalance setting indicates a case where one source contains 20 classes while the other two sources take 5 classes respectively)." + } + ] + } + ], + "index": 27 + }, + { + "type": "image", + "bbox": [ + 331, + 289, + 432, + 367 + ], + "blocks": [ + { + "bbox": [ + 331, + 289, + 432, + 367 + ], + "lines": [ + { + "bbox": [ + 331, + 289, + 432, + 367 + ], + "spans": [ + { + "bbox": [ + 331, + 289, + 432, + 367 + ], + "type": "image", + "image_path": "797e41538c541eec8477180d0c59dd7dda8051c6221e3cd0a25f29e3f626c2e0.jpg" + } + ] + } + ], + "index": 28, + "angle": 0, + "type": "image_body" + } + ], + "index": 28 + }, + { + "type": "image", + "bbox": [ + 437, + 289, + 538, + 367 + ], + "blocks": [ + { + "bbox": [ + 437, + 289, + 538, + 367 + ], + "lines": [ + { + "bbox": [ + 437, + 289, + 538, + 367 + ], + "spans": [ + { + "bbox": [ + 437, + 289, + 538, + 367 + ], + "type": "image", + "image_path": "7874ae66c62a8388641219cdc0de361c3cbc0776ee47813622ef4eccd1aab535.jpg" + } + ] + } + ], + "index": 30, + "angle": 0, + "type": "image_body" + } + ], + "index": 30 + }, + { + "type": "image", + "bbox": [ + 331, + 387, + 432, + 463 + ], + "blocks": [ + { + "bbox": [ + 351, + 373, + 411, + 382 + ], + "lines": [ + { + "bbox": [ + 351, + 373, + 411, + 382 + ], + "spans": [ + { + "bbox": [ + 351, + 373, + 411, + 382 + ], + "type": "text", + "content": "(a) sensitivity to " + }, + { + "bbox": [ + 351, + 373, + 411, + 382 + ], + "type": "inline_equation", + "content": "\\beta" + } + ] + } + ], + "index": 29, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 331, + 387, + 432, + 463 + ], + "lines": [ + { + "bbox": [ + 331, + 387, + 432, + 463 + ], + "spans": [ + { + "bbox": [ + 331, + 387, + 432, + 463 + ], + "type": "image", + "image_path": "212f295cdfc1c696e682eaa82a99ff0decf163f161fdf18f0d73bfabf7838eee.jpg" + } + ] + } + ], + "index": 32, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 351, + 469, + 411, + 478 + ], + "lines": [ + { + "bbox": [ + 351, + 469, + 411, + 478 + ], + "spans": [ + { + "bbox": [ + 351, + 469, + 411, + 478 + ], + "type": "text", + "content": "(c) sensitivity to 
" + }, + { + "bbox": [ + 351, + 469, + 411, + 478 + ], + "type": "inline_equation", + "content": "\\nu" + } + ] + } + ], + "index": 33, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 313, + 488, + 555, + 510 + ], + "lines": [ + { + "bbox": [ + 313, + 488, + 555, + 510 + ], + "spans": [ + { + "bbox": [ + 313, + 488, + 555, + 510 + ], + "type": "text", + "content": "Figure 7. (a)-(d) Sensitivity to varying loss coefficient " + }, + { + "bbox": [ + 313, + 488, + 555, + 510 + ], + "type": "inline_equation", + "content": "\\beta, \\nu, \\tau" + }, + { + "bbox": [ + 313, + 488, + 555, + 510 + ], + "type": "text", + "content": " verified in Office-Home dataset." + } + ] + } + ], + "index": 36, + "angle": 0, + "type": "image_caption" + } + ], + "index": 32 + }, + { + "type": "image", + "bbox": [ + 437, + 387, + 537, + 462 + ], + "blocks": [ + { + "bbox": [ + 457, + 373, + 518, + 382 + ], + "lines": [ + { + "bbox": [ + 457, + 373, + 518, + 382 + ], + "spans": [ + { + "bbox": [ + 457, + 373, + 518, + 382 + ], + "type": "text", + "content": "(b) sensitivity to " + }, + { + "bbox": [ + 457, + 373, + 518, + 382 + ], + "type": "inline_equation", + "content": "\\beta" + } + ] + } + ], + "index": 31, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 437, + 387, + 537, + 462 + ], + "lines": [ + { + "bbox": [ + 437, + 387, + 537, + 462 + ], + "spans": [ + { + "bbox": [ + 437, + 387, + 537, + 462 + ], + "type": "image", + "image_path": "74305a12e89c6566e6ac7d20ad9e0e21e0845f102794138569e075a441842f7b.jpg" + } + ] + } + ], + "index": 34, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 458, + 469, + 517, + 478 + ], + "lines": [ + { + "bbox": [ + 458, + 469, + 517, + 478 + ], + "spans": [ + { + "bbox": [ + 458, + 469, + 517, + 478 + ], + "type": "text", + "content": "(d) sensitivity to " + }, + { + "bbox": [ + 458, + 469, + 517, + 478 + ], + "type": "inline_equation", + "content": "\\tau" + } + ] + } + ], + "index": 35, + "angle": 0, + "type": "image_caption" + } + ], + "index": 34 + }, + { + "bbox": [ + 314, + 535, + 388, + 547 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 535, + 388, + 547 + ], + "spans": [ + { + "bbox": [ + 314, + 535, + 388, + 547 + ], + "type": "text", + "content": "5. Conclusion" + } + ] + } + ], + "index": 37 + }, + { + "bbox": [ + 313, + 558, + 555, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 558, + 555, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 558, + 555, + 713 + ], + "type": "text", + "content": "In this work, we addressed the semi-supervised open-set domain shift problem in multi-source cases with inconsistent label space by introducing a novel learning theory based on joint error and multi-class PU learning that can reduce the open-set risk, where the generalization error is bounded by the extension of VC learning theory based on uniform covering number. We generalize our method into source-free scenarios by attention-based feature generation, which is computationally efficient with reliable performance. We conduct extensive experiments on multiple domain adaptation benchmarks. Our model achieves the best performance regardless of source data, compared with recent baseline methods, proving our proposed approach's efficacy." 
+ } + ] + } + ], + "index": 38 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "text", + "content": "10149" + } + ] + } + ], + "index": 39 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 158, + 85 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 72, + 158, + 85 + ], + "spans": [ + { + "bbox": [ + 56, + 72, + 158, + 85 + ], + "type": "text", + "content": "Acknowledgements" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 148, + 115, + 161 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 148, + 115, + 161 + ], + "spans": [ + { + "bbox": [ + 56, + 148, + 115, + 161 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 57, + 169, + 296, + 713 + ], + "type": "list", + "angle": 0, + "index": 14, + "blocks": [ + { + "bbox": [ + 61, + 169, + 296, + 212 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 169, + 296, + 212 + ], + "spans": [ + { + "bbox": [ + 61, + 169, + 296, + 212 + ], + "type": "text", + "content": "[1] Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Vaughan. A theory of learning from different domains. Machine Learning, 79:151-175, 2010. 1" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 61, + 214, + 296, + 270 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 214, + 296, + 270 + ], + "spans": [ + { + "bbox": [ + 61, + 214, + 296, + 270 + ], + "type": "text", + "content": "[2] Silvia Bucci, Mohammad Reza Loghmani, and Tatiana Tommasi. On the effectiveness of image rotation for open set domain adaptation. In 16th European Conference on Computer Vision, pages 422-438. Springer International Publishing, 2020. 2, 6" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 61, + 271, + 296, + 326 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 271, + 296, + 326 + ], + "spans": [ + { + "bbox": [ + 61, + 271, + 296, + 326 + ], + "type": "text", + "content": "[3] Silvia Bucci, Francesco Cappio Borlino, Barbara Caputo, and Tatiana Tommasi. Distance-based hyperspherical classification for multi-source open-set domain adaptation. In IEEE/CVF Winter Conference on Applications of Computer Vision, pages 1030-1039. IEEE, 2022. 6" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 62, + 327, + 296, + 360 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 327, + 296, + 360 + ], + "spans": [ + { + "bbox": [ + 62, + 327, + 296, + 360 + ], + "type": "text", + "content": "[4] Pau Panareda Busto and Juergen Gall. Open set domain adaptation. In IEEE International Conference on Computer Vision, pages 754-763, 2017. 1" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 62, + 362, + 296, + 416 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 362, + 296, + 416 + ], + "spans": [ + { + "bbox": [ + 62, + 362, + 296, + 416 + ], + "type": "text", + "content": "[5] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In Proceedings of the 37th International Conference on Machine Learning. JMLR.org, 2020. 
5, 7" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 62, + 418, + 296, + 452 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 418, + 296, + 452 + ], + "spans": [ + { + "bbox": [ + 62, + 418, + 296, + 452 + ], + "type": "text", + "content": "[6] Shivang Chopra, Suraj Kothawade, Houda Aynaou, and Aman Chadha. Source-free domain adaptation with diffusion-guided source data generation. CoRR, abs/2402.04929, 2024. 4" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 62, + 453, + 296, + 508 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 453, + 296, + 508 + ], + "spans": [ + { + "bbox": [ + 62, + 453, + 296, + 508 + ], + "type": "text", + "content": "[7] Ekin D. Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V. Le. Randaugment: Practical automated data augmentation with a reduced search space. In IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 3008-3017, 2020. 5" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 62, + 510, + 296, + 554 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 510, + 296, + 554 + ], + "spans": [ + { + "bbox": [ + 62, + 510, + 296, + 554 + ], + "type": "text", + "content": "[8] Jun Deng, Wei Dong, Richard Socher, Li-Jia Li, Kuntai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. IEEE Conference on Computer Vision and Pattern Recognition, pages 248-255, 2009. 5" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 62, + 555, + 296, + 621 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 555, + 296, + 621 + ], + "spans": [ + { + "bbox": [ + 62, + 555, + 296, + 621 + ], + "type": "text", + "content": "[9] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale, 2021. 7" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 57, + 624, + 296, + 667 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 624, + 296, + 667 + ], + "spans": [ + { + "bbox": [ + 57, + 624, + 296, + 667 + ], + "type": "text", + "content": "[10] Zhen Fang, Jie Lu, Feng Liu, Junyu Xuan, and Guangquan Zhang. Open set domain adaptation: Theoretical bound and algorithm. IEEE Transactions on Neural Networks and Learning Systems, 32:4309-4322, 2020. 2" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 57, + 669, + 296, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 669, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 57, + 669, + 296, + 713 + ], + "type": "text", + "content": "[11] Yaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. In Proceedings of the 32nd International Conference on Machine Learning, pages 1180-1189. JMLR.org, 2015. 
1" + } + ] + } + ], + "index": 13 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 57, + 73, + 555, + 713 + ], + "type": "list", + "angle": 0, + "index": 28, + "blocks": [ + { + "bbox": [ + 55, + 91, + 297, + 137 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 91, + 297, + 137 + ], + "spans": [ + { + "bbox": [ + 55, + 91, + 297, + 137 + ], + "type": "text", + "content": "This research is partially supported by JST Moonshot R&D Grant Number JPMJPS2011, CREST Grant Number JPMJCR2015 and Basic Research Grant (Super AI) of Institute for AI and Beyond of the University of Tokyo." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 316, + 73, + 555, + 127 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 73, + 555, + 127 + ], + "spans": [ + { + "bbox": [ + 316, + 73, + 555, + 127 + ], + "type": "text", + "content": "[12] Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. Journal of Machine Learning Research, 17 (1):2096-2030, 2016. 5" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 316, + 129, + 555, + 183 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 129, + 555, + 183 + ], + "spans": [ + { + "bbox": [ + 316, + 129, + 555, + 183 + ], + "type": "text", + "content": "[13] Saurabh Garg, Sivaraman Balakrishnan, and Zachary C. Lipton. Domain adaptation under open set label shift. In Proceedings of the 36th International Conference on Neural Information Processing Systems. Curran Associates Inc., 2022. 3" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 317, + 185, + 555, + 239 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 185, + 555, + 239 + ], + "spans": [ + { + "bbox": [ + 317, + 185, + 555, + 239 + ], + "type": "text", + "content": "[14] Ryan Gomes, Andreas Krause, and Pietro Perona. Discriminative clustering by regularized information maximization. In Proceedings of the 23rd International Conference on Neural Information Processing Systems, pages 775-783. Curran Associates Inc., 2010. 5" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 316, + 241, + 555, + 285 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 241, + 555, + 285 + ], + "spans": [ + { + "bbox": [ + 316, + 241, + 555, + 285 + ], + "type": "text", + "content": "[15] Yves Grandvalet and Yoshua Bengio. Semi-supervised learning by entropy minimization. In Proceedings of the 17th International Conference on Neural Information Processing Systems, pages 529-536. MIT Press, 2004. 5" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 316, + 286, + 555, + 330 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 286, + 555, + 330 + ], + "spans": [ + { + "bbox": [ + 316, + 286, + 555, + 330 + ], + "type": "text", + "content": "[16] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2015. 5, 7" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 316, + 331, + 555, + 376 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 331, + 555, + 376 + ], + "spans": [ + { + "bbox": [ + 316, + 331, + 555, + 376 + ], + "type": "text", + "content": "[17] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. 
Momentum contrast for unsupervised visual representation learning. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9726-9735, 2020. 7" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 317, + 376, + 554, + 432 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 376, + 554, + 432 + ], + "spans": [ + { + "bbox": [ + 317, + 376, + 554, + 432 + ], + "type": "text", + "content": "[18] Olivier J. Henaff, Aravind Srinivas, Jeffrey De Fauw, Ali Razavi, Carl Doersch, S. M. Ali Eslami, and Aaron Van Den Oord. Data-efficient image recognition with contrastive predictive coding. In Proceedings of the 37th International Conference on Machine Learning. JMLR.org, 2020. 7" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 317, + 433, + 555, + 487 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 433, + 555, + 487 + ], + "spans": [ + { + "bbox": [ + 317, + 433, + 555, + 487 + ], + "type": "text", + "content": "[19] Judy Hoffman, Mehryar Mohri, and Ningshan Zhang. Algorithms and theory for multiple-source adaptation. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, pages 8256-8266. Curran Associates Inc., 2018. 1" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 317, + 488, + 555, + 553 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 488, + 555, + 553 + ], + "spans": [ + { + "bbox": [ + 317, + 488, + 555, + 553 + ], + "type": "text", + "content": "[20] JoonHo Jang, Byeonghu Na, DongHyeok Shin, Mingi Ji, Kyungwoo Song, and Il-Chul Moon. Unknown-aware domain adversarial learning for open-set domain adaptation. In Proceedings of the 36th International Conference on Neural Information Processing Systems. Curran Associates Inc., 2022. 2" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 316, + 555, + 555, + 589 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 555, + 555, + 589 + ], + "spans": [ + { + "bbox": [ + 316, + 555, + 555, + 589 + ], + "type": "text", + "content": "[21] Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. In 2nd International Conference on Learning Representations, 2014. 4" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 316, + 590, + 554, + 623 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 590, + 554, + 623 + ], + "spans": [ + { + "bbox": [ + 316, + 590, + 554, + 623 + ], + "type": "text", + "content": "[22] Samuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning. In 5th International Conference on Learning Representations, 2017. 5" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 316, + 624, + 555, + 669 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 624, + 555, + 669 + ], + "spans": [ + { + "bbox": [ + 316, + 624, + 555, + 669 + ], + "type": "text", + "content": "[23] Jichang Li, Guanbin Li, Yemin Shi, and Yizhou Yu. Cross-domain adaptive clustering for semi-supervised domain adaptation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2505-2514, 2021. 1, 5" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 316, + 670, + 554, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 670, + 554, + 713 + ], + "spans": [ + { + "bbox": [ + 316, + 670, + 554, + 713 + ], + "type": "text", + "content": "[24] Jingjing Li, Zhiqi Yu, Zhekai Du, Lei Zhu, and Heng Tao Shen. A comprehensive survey on source-free domain adaptation. 
IEEE Transactions on Pattern Analysis and Machine Intelligence, 46(8):5743-5762, 2024. 4" + } + ] + } + ], + "index": 27 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "type": "text", + "content": "10150" + } + ] + } + ], + "index": 29 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 297, + 712 + ], + "type": "list", + "angle": 0, + "index": 13, + "blocks": [ + { + "bbox": [ + 56, + 72, + 297, + 106 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 72, + 297, + 106 + ], + "spans": [ + { + "bbox": [ + 56, + 72, + 297, + 106 + ], + "type": "text", + "content": "[25] Keqiuyin Li, Jie Lu, Hua Zuo, and Guangquan Zhang. Multisource domain adaptation handling inaccurate label spaces. Neurocomputing, 594:127824, 2024. 2" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 107, + 295, + 152 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 107, + 295, + 152 + ], + "spans": [ + { + "bbox": [ + 56, + 107, + 295, + 152 + ], + "type": "text", + "content": "[26] Rui Li, Qianfen Jiao, Wenming Cao, Hau-San Wong, and Si Wu. Model adaptation: Unsupervised domain adaptation without source data. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9638-9647, 2020. 4" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 152, + 295, + 196 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 152, + 295, + 196 + ], + "spans": [ + { + "bbox": [ + 56, + 152, + 295, + 196 + ], + "type": "text", + "content": "[27] Wuyang Li, Jie Liu, Bo Han, and Yixuan Yuan. Adjustment and alignment for unbiased open set domain adaptation. In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 24110-24119, 2023. 2, 6, 7" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 56, + 198, + 295, + 251 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 198, + 295, + 251 + ], + "spans": [ + { + "bbox": [ + 56, + 198, + 295, + 251 + ], + "type": "text", + "content": "[28] Jian Liang, Dapeng Hu, and Jiashi Feng. Do we really need to access the source data? source hypothesis transfer for unsupervised domain adaptation. In Proceedings of the 37th International Conference on Machine Learning. JMLR.org, 2020. 4" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 56, + 254, + 295, + 308 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 254, + 295, + 308 + ], + "spans": [ + { + "bbox": [ + 56, + 254, + 295, + 308 + ], + "type": "text", + "content": "[29] Jian Liang, Dapeng Hu, Yunbo Wang, Ran He, and Jiashi Feng. Source data-absent unsupervised domain adaptation through hypothesis transfer and labeling transfer. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(11): 8602-8617, 2022. 4" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 56, + 309, + 295, + 343 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 309, + 295, + 343 + ], + "spans": [ + { + "bbox": [ + 56, + 309, + 295, + 343 + ], + "type": "text", + "content": "[30] Hanxiao Liu, Karen Simonyan, and Yiming Yang. Darts: Differentiable architecture search. In 7th International Conference on Learning Representations, 2019. 
3" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 56, + 344, + 295, + 388 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 344, + 295, + 388 + ], + "spans": [ + { + "bbox": [ + 56, + 344, + 295, + 388 + ], + "type": "text", + "content": "[31] Mohammad Reza Loghmania, Markus Vinczea, and Tatiana Tommasi. Positive-unlabeled learning for open set domain adaptation. Pattern Recognition Letters, 136:198-204, 2020. 6" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 56, + 388, + 295, + 442 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 388, + 295, + 442 + ], + "spans": [ + { + "bbox": [ + 56, + 388, + 295, + 442 + ], + "type": "text", + "content": "[32] Mingsheng Long, Yue Cao, Jianmin Wang, and Michael I. Jordan. Learning transferable features with deep adaptation networks. In Proceedings of the 32nd International Conference on Machine Learning, pages 97-105. JMLR.org, 2015. 1" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 56, + 445, + 295, + 499 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 445, + 295, + 499 + ], + "spans": [ + { + "bbox": [ + 56, + 445, + 295, + 499 + ], + "type": "text", + "content": "[33] Mingsheng Long, Zhangjie Cao, Jianmin Wang, and Michael I Jordan. Conditional adversarial domain adaptation. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, pages 1647-1657. Curran Associates Inc., 2018. 1" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 56, + 501, + 295, + 555 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 501, + 295, + 555 + ], + "spans": [ + { + "bbox": [ + 56, + 501, + 295, + 555 + ], + "type": "text", + "content": "[34] Yadan Luo, Zijian Wang, Zi Huang, and Mahsa Baktashmotlagh. Progressive graph learning for open-set domain adaptation. In Proceedings of the 37th International Conference on Machine Learning, pages 6468-6478. PMLR, 2020. 2, 6, 7" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 56, + 557, + 295, + 612 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 557, + 295, + 612 + ], + "spans": [ + { + "bbox": [ + 56, + 557, + 295, + 612 + ], + "type": "text", + "content": "[35] Yadan Luo, Zijian Wang, Zhuoxiao Chen, Zi Huang, and Mahsa Baktashmotlagh. Source-free progressive graph learning for open-set domain adaptation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(9):11240-11255, 2023. 4" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 56, + 613, + 295, + 658 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 613, + 295, + 658 + ], + "spans": [ + { + "bbox": [ + 56, + 613, + 295, + 658 + ], + "type": "text", + "content": "[36] Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, and Bo Wang. Moment matching for multi-source domain adaptation. In IEEE/CVF International Conference on Computer Vision, pages 1406-1415, 2019. 1, 2, 6" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 56, + 658, + 295, + 712 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 658, + 295, + 712 + ], + "spans": [ + { + "bbox": [ + 56, + 658, + 295, + 712 + ], + "type": "text", + "content": "[37] Md Mahmudur Rahman, Rameswar Panda, and Mohammad Arif Ul Alam. Semi-supervised domain adaptation with autoencoder via simultaneous learning. In IEEE/CVF Winter Conference on Applications of Computer Vision, pages 402-411, 2023. 
1" + } + ] + } + ], + "index": 12 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 72, + 555, + 712 + ], + "type": "list", + "angle": 0, + "index": 28, + "blocks": [ + { + "bbox": [ + 316, + 72, + 555, + 137 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 72, + 555, + 137 + ], + "spans": [ + { + "bbox": [ + 316, + 72, + 555, + 137 + ], + "type": "text", + "content": "[38] Sayan Rakshit, Dipesh Tamboli, Pragati Shuddhodhan Meshram, Biplab Banerjee, Gemma Roig, and Subhasis Chaudhuri. Multi-source open-set deep adversarial domain adaptation. In 16th European Conference on Computer Vision, pages 735-750. Springer International Publishing, 2020. 6" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 316, + 139, + 555, + 183 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 139, + 555, + 183 + ], + "spans": [ + { + "bbox": [ + 316, + 139, + 555, + 183 + ], + "type": "text", + "content": "[39] Kuniaki Saito, Kohei Watanabe, Yoshitaka Ushiku, and Tatsuya Harada. Maximum classifier discrepancy for unsupervised domain adaptation. IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3723-3732, 2017. 1" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 316, + 184, + 555, + 228 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 184, + 555, + 228 + ], + "spans": [ + { + "bbox": [ + 316, + 184, + 555, + 228 + ], + "type": "text", + "content": "[40] Kuniaki Saito, Shohei Yamamoto, Yoshitaka Ushiku, and Tatsuya Harada. Open set domain adaptation by backpropagation. In 15th European Conference on Computer Vision, pages 156-171. Springer International Publishing, 2018. 2, 6" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 316, + 228, + 555, + 271 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 228, + 555, + 271 + ], + "spans": [ + { + "bbox": [ + 316, + 228, + 555, + 271 + ], + "type": "text", + "content": "[41] Kuniaki Saito, Donghyun Kim, Stan Sclaroff, Trevor Darrell, and Kate Saenko. Semi-supervised domain adaptation via minimax entropy. In IEEE/CVF International Conference on Computer Vision, pages 8049-8057, 2019. 1, 6" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 316, + 272, + 555, + 327 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 272, + 555, + 327 + ], + "spans": [ + { + "bbox": [ + 316, + 272, + 555, + 327 + ], + "type": "text", + "content": "[42] Mehdi Sajjadi, Mehran Javanmardi, and Tolga Tasdizen. Regularization with stochastic transformations and perturbations for deep semi-supervised learning. In Proceedings of the 30th International Conference on Neural Information Processing Systems, pages 1171-1179. Curran Associates Inc., 2016. 5" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 316, + 327, + 555, + 370 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 327, + 555, + 370 + ], + "spans": [ + { + "bbox": [ + 316, + 327, + 555, + 370 + ], + "type": "text", + "content": "[43] Ankit Singh. Clda: contrastive learning for semi-supervised domain adaptation. In Proceedings of the 35th International Conference on Neural Information Processing Systems. Curran Associates Inc., 2021. 
1, 5" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 316, + 371, + 555, + 436 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 371, + 555, + 436 + ], + "spans": [ + { + "bbox": [ + 316, + 371, + 555, + 436 + ], + "type": "text", + "content": "[44] Kihyuk Sohn, David Berthelot, Chun-Liang Li, Zizhao Zhang, Nicholas Carlini, Ekin D. Cubuk, Alex Kurakin, Han Zhang, and Colin Raffel. Fixmatch: simplifying semi-supervised learning with consistency and confidence. In Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., 2020. 5" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 316, + 437, + 555, + 469 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 437, + 555, + 469 + ], + "spans": [ + { + "bbox": [ + 316, + 437, + 555, + 469 + ], + "type": "text", + "content": "[45] Shiliang Sun, Honglei Shi, and Yuanbin Wu. A survey of multi-source domain adaptation. Information Fusion, 24: 84-92, 2015. 1" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 316, + 471, + 555, + 514 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 471, + 555, + 514 + ], + "spans": [ + { + "bbox": [ + 316, + 471, + 555, + 514 + ], + "type": "text", + "content": "[46] Hui Tang, Ke Chen, and Kui Jia. Unsupervised domain adaptation via structurally regularized deep clustering. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8722-8732, 2020. 5" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 316, + 514, + 555, + 558 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 514, + 555, + 558 + ], + "spans": [ + { + "bbox": [ + 316, + 514, + 555, + 558 + ], + "type": "text", + "content": "[47] Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. Adversarial discriminative domain adaptation. IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2962-2971, 2017. 1" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 316, + 559, + 555, + 591 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 559, + 555, + 591 + ], + "spans": [ + { + "bbox": [ + 316, + 559, + 555, + 591 + ], + "type": "text", + "content": "[48] Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9: 2579-2605, 2008. 7" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 316, + 592, + 555, + 613 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 592, + 555, + 613 + ], + "spans": [ + { + "bbox": [ + 316, + 592, + 555, + 613 + ], + "type": "text", + "content": "[49] Vladimir N. Vapnik. The nature of statistical learning theory. Springer-Verlag New York, Inc., 1995. 2" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 316, + 614, + 555, + 657 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 614, + 555, + 657 + ], + "spans": [ + { + "bbox": [ + 316, + 614, + 555, + 657 + ], + "type": "text", + "content": "[50] V. N. Vapnik and A. Ya. Chervonenkis. On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities, pages 11-30. Springer International Publishing, 2015. 
2" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 316, + 658, + 555, + 712 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 658, + 555, + 712 + ], + "spans": [ + { + "bbox": [ + 316, + 658, + 555, + 712 + ], + "type": "text", + "content": "[51] Naveen Venkat, Jogendra Nath Kundu, Durgesh Kumar Singh, Ambareesh Revanur, and R. Venkatesh Babu. Your classifier can secretly suffice multi-source domain adaptation. In Proceedings of the 34th International Conference on Neural Information Processing Systems, 2020. 1" + } + ] + } + ], + "index": 27 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "text", + "content": "10151" + } + ] + } + ], + "index": 29 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 297, + 713 + ], + "type": "list", + "angle": 0, + "index": 13, + "blocks": [ + { + "bbox": [ + 56, + 72, + 297, + 126 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 72, + 297, + 126 + ], + "spans": [ + { + "bbox": [ + 56, + 72, + 297, + 126 + ], + "type": "text", + "content": "[52] Hemanth Venkateswara, Jose Eusebio, Shayok Chakraborty, and Sethuraman Panchanathan. Deep hashing network for unsupervised domain adaptation. In IEEE Conference on Computer Vision and Pattern Recognition, pages 5385-5394, 2017. 6" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 127, + 296, + 183 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 127, + 296, + 183 + ], + "spans": [ + { + "bbox": [ + 56, + 127, + 296, + 183 + ], + "type": "text", + "content": "[53] Hang Wang, Minghao Xu, Bingbing Ni, and Wenjun Zhang. Learning to combine: Knowledge aggregation for multi-source domain adaptation. In 16th European Conference on Computer Vision, pages 727-744. Springer-Verlag, 2020. 1" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 183, + 296, + 227 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 183, + 296, + 227 + ], + "spans": [ + { + "bbox": [ + 56, + 183, + 296, + 227 + ], + "type": "text", + "content": "[54] Qian Wang, Fanlin Meng, and Toby P. Breckon. Progressively select and reject pseudolabeled samples for open-set domain adaptation. IEEE Transactions on Artificial Intelligence, 5(9): 4403-4414, 2024. 2, 6" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 56, + 228, + 296, + 281 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 228, + 296, + 281 + ], + "spans": [ + { + "bbox": [ + 56, + 228, + 296, + 281 + ], + "type": "text", + "content": "[55] Zixin Wang, Yadan Luo, Peng-Fei Zhang, Sen Wang, and Zi Huang. Discovering domain disentanglement for generalized multi-source domain adaptation. In IEEE International Conference on Multimedia and Expo, pages 1–6. IEEE, 2022. 2" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 56, + 282, + 296, + 327 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 282, + 296, + 327 + ], + "spans": [ + { + "bbox": [ + 56, + 282, + 296, + 327 + ], + "type": "text", + "content": "[56] Jun Wu and Jingrui He. Domain adaptation with dynamic open-set targets. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 2039-2049. 
Association for Computing Machinery, 2022. 3" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 56, + 327, + 296, + 382 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 327, + 296, + 382 + ], + "spans": [ + { + "bbox": [ + 56, + 327, + 296, + 382 + ], + "type": "text", + "content": "[57] R. Xu, Z. Chen, W. Zuo, J. Yan, and L. Lin. Deep cocktail network: Multi-source unsupervised domain adaptation with category shift. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3964-3973. IEEE Computer Society, 2018. 1" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 56, + 383, + 296, + 426 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 383, + 296, + 426 + ], + "spans": [ + { + "bbox": [ + 56, + 383, + 296, + 426 + ], + "type": "text", + "content": "[58] Yixing Xu, Chang Xu, Chao Xu, and Dacheng Tao. Multi-positive and unlabeled learning. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, pages 3182-3188. AAAI Press, 2017. 2, 3, 6" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 56, + 426, + 296, + 482 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 426, + 296, + 482 + ], + "spans": [ + { + "bbox": [ + 56, + 426, + 296, + 482 + ], + "type": "text", + "content": "[59] Luyu Yang, Yan Wang, Mingfei Gao, Abhinav Shrivastava, Kilian Q. Weinberger, Wei-Lun Chao, and Ser-Nam Lim. Deep co-training with task decomposition for semi-supervised domain adaptation. In IEEE/CVF International Conference on Computer Vision, pages 8886-8896, 2021. 1, 5" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 56, + 482, + 296, + 536 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 482, + 296, + 536 + ], + "spans": [ + { + "bbox": [ + 56, + 482, + 296, + 536 + ], + "type": "text", + "content": "[60] S. Yang, Y. Wang, J. van de Weijer, L. Herranz, S. Jui, and J. Yang. Trust your good friends: Source-free domain adaptation by reciprocal neighborhood clustering. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(12):15883-15895, 2023. 4" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 56, + 536, + 296, + 581 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 536, + 296, + 581 + ], + "spans": [ + { + "bbox": [ + 56, + 536, + 296, + 581 + ], + "type": "text", + "content": "[61] Jeongbeen Yoon, Dahiyun Kang, and Minsu Cho. Semi-supervised domain adaptation via sample-to-sample self-distillation. In IEEE/CVF Winter Conference on Applications of Computer Vision, pages 1686-1695, 2022. 1" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 56, + 582, + 296, + 614 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 582, + 296, + 614 + ], + "spans": [ + { + "bbox": [ + 56, + 582, + 296, + 614 + ], + "type": "text", + "content": "[62] Dexuan Zhang, Thomas Westfechtel, and Tatsuya Harada. Unsupervised domain adaptation via minimized joint error. Transactions on Machine Learning Research, 2023. 1, 2, 6" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 56, + 614, + 296, + 669 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 614, + 296, + 669 + ], + "spans": [ + { + "bbox": [ + 56, + 614, + 296, + 669 + ], + "type": "text", + "content": "[63] Dexuan Zhang, Thomas Westfechtel, and Tatsuya Harada. Open-set domain adaptation via joint error based multi-class positive and unlabeled learning. In 18th European Conference on Computer Vision. 
Springer International Publishing, 2024. 2, 3, 5, 6" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 56, + 670, + 296, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 670, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 56, + 670, + 296, + 713 + ], + "type": "text", + "content": "[64] Yuchen Zhang, Tianle Liu, Mingsheng Long, and Michael Jordan. Bridging theory and algorithm for domain adaptation. In Proceedings of the 36th International Conference on Machine Learning, pages 7404-7413. PMLR, 2019. 1" + } + ] + } + ], + "index": 12 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 315, + 72, + 555, + 239 + ], + "type": "list", + "angle": 0, + "index": 17, + "blocks": [ + { + "bbox": [ + 315, + 72, + 555, + 138 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 72, + 555, + 138 + ], + "spans": [ + { + "bbox": [ + 315, + 72, + 555, + 138 + ], + "type": "text", + "content": "[65] Han Zhao, Shanghang Zhang, Guanhang Wu, Joao P. Costeira, Jose M. F. Moura, and Geoffrey J. Gordon. Adversarial multiple source domain adaptation. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, pages 8568-8579. Curran Associates Inc., 2018. 1, 3" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 315, + 140, + 555, + 184 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 140, + 555, + 184 + ], + "spans": [ + { + "bbox": [ + 315, + 140, + 555, + 184 + ], + "type": "text", + "content": "[66] Han Zhao, Remi Tachet des Combes, Kun Zhang, and Geoffrey J. Gordon. On learning invariant representation for domain adaptation. In Proceedings of the 36th International Conference on Machine Learning, 2019. 2" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 316, + 185, + 555, + 239 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 185, + 555, + 239 + ], + "spans": [ + { + "bbox": [ + 316, + 185, + 555, + 239 + ], + "type": "text", + "content": "[67] Yongchun Zhu, Fuzhen Zhuang, and Deqing Wang. Aligning domain-specific distribution and classifier for cross-domain classification from multiple sources. In Proceedings of the 33rd AAAI Conference on Artificial Intelligence. AAAI Press, 2019. 
1" + } + ] + } + ], + "index": 16 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "type": "text", + "content": "10152" + } + ] + } + ], + "index": 18 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 10 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2025/A Unified Approach to Interpreting Self-supervised Pre-training Methods for 3D Point Clouds via Interactions/99ce7f16-2914-4f50-bc65-0054f0b31f07_content_list.json b/2025/A Unified Approach to Interpreting Self-supervised Pre-training Methods for 3D Point Clouds via Interactions/99ce7f16-2914-4f50-bc65-0054f0b31f07_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..67a89ea26deda60d3104bc5fa643aa224cc9db7d --- /dev/null +++ b/2025/A Unified Approach to Interpreting Self-supervised Pre-training Methods for 3D Point Clouds via Interactions/99ce7f16-2914-4f50-bc65-0054f0b31f07_content_list.json @@ -0,0 +1,1637 @@ +[ + { + "type": "text", + "text": "A Unified Approach to Interpreting Self-supervised Pre-training Methods for 3D Point Clouds via Interactions", + "text_level": 1, + "bbox": [ + 127, + 128, + 870, + 174 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Qiang Li, Jian Ruan, Fanghao Wu, Yuchi Chen, Zhihua Wei*, and Wen Shen* Tongji University, Shanghai, China", + "bbox": [ + 194, + 203, + 807, + 239 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "{qli, jianruan, 2432055, 1953721, zhihua_wei, wenshen}@tongji.edu.cn", + "bbox": [ + 218, + 241, + 772, + 257 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 246, + 291, + 326, + 306 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Recently, many self-supervised pre-training methods have been proposed to improve the performance of deep neural networks (DNNs) for 3D point clouds processing. However, the common mechanism underlying the effectiveness of different pre-training methods remains unclear. In this paper, we use game-theoretic interactions as a unified approach to explore the common mechanism of pre-training methods. Specifically, we decompose the output score of a DNN into the sum of numerous effects of interactions, with each interaction representing a distinct 3D substructure of the input point cloud. Based on the decomposed interactions, we draw the following conclusions. (1) The common mechanism across different pre-training methods is that they enhance the strength of high-order interactions encoded by DNNs, which represent complex and global 3D structures, while reducing the strength of low-order interactions, which represent simple and local 3D structures. (2) Sufficient pre-training and adequate fine-tuning data for downstream tasks further reinforce the mechanism described above. (3) Pre-training methods carry a potential risk of reducing the transferability of features encoded by DNNs. Inspired by the observed common mechanism, we propose a new method to directly enhance the strength of high-order interactions and reduce the strength of low-order interactions encoded by DNNs, improving performance without the need for pre-training on large-scale datasets. 
Experiments show that our method achieves performance comparable to traditional pre-training methods.", + "bbox": [ + 88, + 323, + 485, + 746 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1. Introduction", + "text_level": 1, + "bbox": [ + 89, + 775, + 220, + 792 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Self-supervised pre-training methods for 3D point clouds have developed rapidly in recent years [1, 9, 13, 16, 21, 23, 30, 32, 38, 43]. Pre-training methods first train DNNs on large-scale unlabeled datasets, then fine-tune the DNNs on downstream tasks, generally enhancing their performance.", + "bbox": [ + 89, + 801, + 483, + 878 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "However, the common mechanism underlying different pretraining methods remains unclear, posing challenges for gaining insights into effective model training strategies.", + "bbox": [ + 511, + 292, + 906, + 338 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "In this paper, we aim to explore the common mechanism behind the performance improvements of different pre-training methods, thereby providing insights into pretraining, and offering better guidance for the training process. Recent studies have employed interactions to explain the reasoning processes of DNNs [20, 25, 41]. Inspired by these studies, we use interactions to provide a unified interpretation of different pre-training methods.", + "bbox": [ + 511, + 339, + 908, + 460 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Specifically, given a point cloud $x$ with $n$ regions indexed by $N = \\{1,2,\\dots,n\\}$ , an interaction represents the collaborations among regions within a specific 3D structure $S \\subseteq N$ , where each interaction has a numerical effect $I(S)$ on the network output. For example, as shown in Fig. 1, the interaction between the regions in $S_{1} = \\{\\text{wingtip}, \\text{wing root}\\}$ form a concept of \"wing\", contributing $I(S_{1})$ to push the classification result toward the class \"airplane\". It has been proven by [6, 44] that the output score of a DNN consistently equals the sum of the effects of all activated interactions, regardless of how the input regions are masked. In this way, interactions can be seen as the detailed inference patterns encoded by the DNN.", + "bbox": [ + 511, + 460, + 908, + 656 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Based on interactions, we conduct comparative experiments to explore the common reasons behind the performance improvements of different pre-training methods. Specifically, we explore the impact of pre-training methods on the complexity of interactions encoded by DNNs. Here, the complexity refers to the number of regions contained in an interaction, i.e., the order of the interaction. A high-order interaction, e.g., $S_{3}$ in Fig. 1, captures collaborations among massive point regions, representing complex and global 3D structures. In contrast, a low-order interaction, e.g., $S_{1}$ , measures collaborations between a few regions, representing simple and local 3D structures. 
From the experiments, we draw the following key conclusions.", + "bbox": [ + 511, + 657, + 908, + 854 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "- The common mechanism across different pre-training", + "bbox": [ + 511, + 858, + 906, + 875 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "CVF", + "bbox": [ + 106, + 2, + 181, + 42 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore.", + "bbox": [ + 236, + 0, + 810, + 46 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "*Corresponding authors: Wen Shen and Zhihua Wei.", + "bbox": [ + 107, + 886, + 387, + 898 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "1We divide a point cloud into $n$ regions following [26].", + "bbox": [ + 531, + 886, + 823, + 900 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "27315", + "bbox": [ + 478, + 944, + 517, + 955 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/df7338f800ae385b4dddd39cd83328592a92a96ae7466c162041d4f73216074f.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 93, + 88, + 643, + 271 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/571d909e20d4adaf0184bed55e90c7839287c07b105246f692cd9db6857ef0af.jpg", + "image_caption": [ + "Figure 1. (a) Illustration of how interactions can be used to explain a DNN. Given an input point cloud with $n$ regions, the output score of the DNN can be decomposed into the sum of the numerical effects of $2^{n}$ interactions, where each interaction $S$ encodes the collaborations among the point cloud regions in the set $S$ . (b) Comparing the strength of interactions across different orders encoded by the DGCNN trained from scratch (scr) and the DGCNN using a pre-training method (pt). Results show that the pre-trained DGCNN encodes stronger high-order interactions and weaker low-order interactions than the DGCNN trained from scratch." + ], + "image_footnote": [], + "bbox": [ + 653, + 90, + 906, + 273 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "methods is that they enhance the strength of high-order interactions encoded by DNNs while reducing the strength of low-order interactions. This common mechanism indicates that pre-training methods enhance the DNNs' ability to capture global 3D structures, while reducing their reliance on local 3D structures.", + "bbox": [ + 102, + 361, + 482, + 450 + ], + "page_idx": 1 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Sufficient pre-training and adequate fine-tuning data for downstream tasks further reinforce the mechanism described above. We observe that the strength of high-order interactions increases with the number of pretraining epochs while the strength of low-order interactions decreases. Additionally, increasing the amount of data for downstream tasks also amplifies this effect.", + "- Pre-training methods carry a potential risk of reducing the transferability of features encoded by DNNs. We observe that the performance of the pre-trained DNNs may decrease on unseen test datasets, possibly due to pretraining methods causing the DNNs to encode high-order interactions with excessively high strength." 
+ ], + "bbox": [ + 89, + 452, + 482, + 648 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Building on the common mechanism we identified, we propose a new method to directly enhance the strength of high-order interactions encoded by DNNs while reducing the strength of low-order interactions. Experimental results on classification and semantic segmentation benchmarks show that our method achieves performance comparable to pre-training methods, without the need for pre-training on large-scale unlabeled datasets.", + "bbox": [ + 89, + 650, + 482, + 771 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2. Related work", + "text_level": 1, + "bbox": [ + 89, + 784, + 227, + 799 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Self-supervised learning (SSL) of 3D point clouds. Recently, 3D point cloud processing has developed rapidly [5, 11, 14, 15, 24, 27, 35, 37, 40], with many self-supervised methods proposed to learn representations from individual 3D objects [1, 9, 13, 16, 21, 23, 30, 32, 38, 43]. The goal of SSL is to design pretext tasks to help the model learn the", + "bbox": [ + 89, + 809, + 482, + 900 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "data distribution and features in advance, preparing it for downstream tasks. In this paper, we explore the common mechanism behind the performance improvement of the following five widely used open-source pre-training methods.", + "bbox": [ + 511, + 362, + 906, + 422 + ], + "page_idx": 1 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Occlusion Completion (OcCo) [32]. OcCo masks occluded points from a camera view and trains an encoder-decoder model to reconstruct these missing points.", + "- Jigsaw [21]. Jigsaw trains a model to reconstruct point clouds with parts rearranged in random order.", + "- Implicit Auto-encoder (IAE) [38]. IAE trains the model as an encoder to map the point clouds to a high-dimensional space and uses a decoder to reconstruct the encoder's outputs back into 3D geometry.", + "- Spatio-Temporal Representation Learning (STRL) [9]. STRL captures spatio-temporal information from 3D sequences by using two temporally correlated frames to learn invariant representations.", + "- CrossPoint [1]. CrossPoint learns transferable representations by maximizing the agreement between 3D point clouds and corresponding 2D images." + ], + "bbox": [ + 513, + 426, + 903, + 669 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Using game-theoretical interactions to explain DNNs. Game-theoretical interactions provide a solid theoretical foundation for explaining DNNs. Ren et al. [17] proposed a mathematical formulation for the concepts encoded by a DNN, while Ren et al. [18] further leveraged these concepts to define optimal baseline values for Shapley values. Li et al. [10] provided a theoretical guarantee that interactions accurately capture the true concepts encoded by a DNN. At the application level, interactions have been widely used to explain the representation capacity of DNNs from various perspectives, including adversarial robustness [17, 34], adversarial transferability [33], and generalization power [41, 45]. 
In this paper, we use interactions to investigate the common mechanism underlying different pre-training methods for 3D point clouds.", + "bbox": [ + 511, + 674, + 906, + 900 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "27316", + "bbox": [ + 478, + 944, + 519, + 955 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/3289a2b5798488eb7ebe8ac75c333eef9e141c31637aeebe7028b38bc5fad491.jpg", + "image_caption": [ + "Figure 2. Process of dividing an input point cloud into $n$ regions." + ], + "image_footnote": [], + "bbox": [ + 93, + 88, + 482, + 161 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3. Interactions in 3D point cloud processing", + "text_level": 1, + "bbox": [ + 89, + 186, + 459, + 205 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Preliminaries: interactions. As a new explanatory metric, the interaction has been used to clarify the inference logic [7], generalization power [34], and robustness of a DNN [45]. It can be viewed as a universal measure due to its close theoretical connections with other metrics. As proven by [19], the Harsanyi interaction serves as the basis for existing game-theoretic attributions and interactions, including the Shapley value [22], the Shapley interaction index [8], and the Shapley Taylor interaction index [28]. Please see the supplementary material for additional details.", + "bbox": [ + 89, + 212, + 483, + 362 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Quantifying interactions for 3D point cloud processing. We extend interactions to 3D point clouds. Considering a point cloud $x \\in \\mathbb{R}^{P \\times 3}$, we divide it into $n$ regions, as shown in Fig. 2. First, we apply the farthest point sampling (FPS) algorithm to select $n$ points from the point cloud as the centers of the regions. Then, we use the $k$-dimensional tree (KDTree) algorithm to assign the remaining points to their nearest region. By doing so, we divide the input point cloud $x$ into $n$ regions, indexed by $N = \\{1, 2, \\dots, n\\}$.", + "bbox": [ + 89, + 364, + 483, + 500 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Given a trained DNN $v: \\mathbb{R}^{P \\times 3} \\to \\mathbb{R}$, we follow [10, 20, 25] to define the DNN's output score as $v(x) = \\log \\frac{p}{1 - p}$ to represent the classification confidence, where $p$ is the output probability of the ground truth class. Then, the output score can be rewritten as the sum of the numerical effects of all $2^n$ interactions between the point regions, as follows.", + "bbox": [ + 89, + 500, + 483, + 589 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\nv(x) = \\sum_{S \\subseteq N} I(S). \\tag{1}\n$$\n", + "text_format": "latex", + "bbox": [ + 222, + 594, + 482, + 626 + ], + "page_idx": 2 + },
+ { + "type": "text", + "text": "Here, $I(S)$ represents the numerical effect of the interaction among the point regions in $S \\subseteq N$, defined as follows.", + "bbox": [ + 89, + 632, + 482, + 662 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\nI(S) \\triangleq \\sum_{T \\subseteq S} (-1)^{|S| - |T|} \\cdot v(x_{T}), \\tag{2}\n$$\n", + "text_format": "latex", + "bbox": [ + 174, + 667, + 482, + 700 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "where $x_{T}$ represents the input point cloud with the regions in $T \\subseteq N$ unchanged, while the regions in $N \\backslash T$ are masked by replacing them with the centroid of the point cloud.", + "bbox": [ + 89, + 704, + 482, + 750 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Understanding interactions in 3D point cloud processing. The interaction extracted from the input point cloud $x$ encodes an AND relationship among the point regions in $S$, with the numerical effect $I(S)$ representing the combined contribution of these regions to the output score $v(x)$. As shown in Fig. 1, when the point regions in the set $S_{1} = \\{\\text{wingtip}, \\text{wing root}\\}$ are unmasked, they form a \"wing\" pattern and contribute a numerical effect $I(S_{1})$ that pushes the output score $v(x)$ towards the \"airplane\" category. Masking any region in $S_{1}$ will deactivate this AND", + "bbox": [ + 89, + 750, + 483, + 901 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "interaction and remove $I(S_{1})$ from $v(x)$. In fact, Tang et al. [29] have proven that the interaction satisfies the universal matching property, which states that the DNN's inference score $v(x_{T})$ can always be faithfully explained as the sum of the numerical effects of all activated interactions, regardless of how the point cloud regions are masked.", + "bbox": [ + 511, + 90, + 905, + 181 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Theorem 1 (Universal matching property, proven by [29]). Given an input sample $x$ with $n$ variables indexed by $N = \\{1,2,\\dots,n\\}$, we can generate $2^n$ masked samples $x_{T}$, where $T \\subseteq N$. Let us construct the following surrogate logical model $\\phi(\\cdot)$ to use interactions for inference, which are extracted from the DNN $v(\\cdot)$ on the sample $x$. Then, the output of the surrogate logical model $\\phi(\\cdot)$ can always match the output of the DNN $v(\\cdot)$, regardless of how the input sample is masked.", + "bbox": [ + 511, + 181, + 906, + 318 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\begin{array}{l} \\forall T \\subseteq N, \\; \\phi(x_{T}) = v(x_{T}), \\quad \\text{where} \\\\ \\phi(x_{T}) = v(x_{\\emptyset}) + \\sum_{S \\subseteq N} I(S) \\cdot \\mathbb{1}\\left(x_{T} \\text{ triggers AND relation } S\\right) \\\\ \\phantom{\\phi(x_{T})} = v(x_{\\emptyset}) + \\sum_{\\emptyset \\neq S \\subseteq T} I(S). \\end{array} \\tag{3}\n$$\n", + "text_format": "latex", + "bbox": [ + 527, + 323, + 903, + 402 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Defining and quantifying the representation complexity of DNNs. The order of an interaction is defined as $m = |S|$, which reflects the representation complexity of DNNs. 
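To make Eqs. (1)-(3) concrete, the following minimal NumPy sketch implements the pipeline described above: dividing a point cloud into $n$ regions with farthest point sampling, masking the regions outside $T$ with the centroid to obtain $x_T$, and computing $I(S)$ via the inclusion-exclusion sum of Eq. (2). This is an illustrative reconstruction rather than the authors' code: the callable `v` is assumed to wrap the trained DNN and return the log-odds score $v(x) = \log \frac{p}{1-p}$, and a brute-force nearest-center search stands in for the KDTree step.

```python
import itertools
import numpy as np

def farthest_point_sampling(points, n, seed=0):
    # Greedily pick n region centers, each farthest from those already chosen.
    rng = np.random.default_rng(seed)
    centers = [int(rng.integers(len(points)))]
    dists = np.linalg.norm(points - points[centers[0]], axis=1)
    for _ in range(n - 1):
        centers.append(int(dists.argmax()))
        dists = np.minimum(dists, np.linalg.norm(points - points[centers[-1]], axis=1))
    return np.asarray(centers)

def divide_into_regions(points, n):
    # Label each point with the index of its nearest center (the paper uses a
    # KDTree for this assignment; brute force is used here for clarity).
    centers = points[farthest_point_sampling(points, n)]
    return np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2).argmin(axis=1)

def mask_to(points, labels, T):
    # Build x_T: regions in T stay unchanged; all others collapse to the centroid.
    masked = points.copy()
    masked[~np.isin(labels, list(T))] = points.mean(axis=0)
    return masked

def interaction(v, points, labels, S):
    # Eq. (2): I(S) = sum over T subseteq S of (-1)^{|S|-|T|} * v(x_T).
    return sum(
        (-1) ** (len(S) - r) * v(mask_to(points, labels, T))
        for r in range(len(S) + 1)
        for T in itertools.combinations(S, r)
    )
```

As a sanity check of the universal matching property in Theorem 1, summing `interaction(v, points, labels, S)` over all nonempty $S \subseteq T$ should recover `v(mask_to(points, labels, T)) - v(mask_to(points, labels, ()))` up to floating-point error.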
High-order interactions measure the effects of collaborations among massive point cloud regions, representing global and complex 3D structures, while low-order interactions measure the effects of collaborations between a few point regions, representing simple and local 3D structures. We introduce a new metric for measuring the representation complexity of DNNs, as follows.", + "bbox": [ + 511, + 409, + 905, + 561 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\kappa^{(m)} \\triangleq \\frac{\\mathbb{E}_{x} \\mathbb{E}_{S \\subseteq N, |S| = m}[|I(S)|]}{Z}, \\tag{4}\n$$\n", + "text_format": "latex", + "bbox": [ + 596, + 566, + 903, + 599 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "where $\\mathbb{E}$ denotes the mathematical expectation, and $Z = \\mathbb{E}_x\\mathbb{E}_{S\\subseteq N}[|I(S)|]$ is a normalization term to ensure fair comparisons across different DNNs. Here, $\\kappa^{(m)}$ measures the normalized average strength of the $m$-th order interactions. If the value of $\\kappa^{(m)}$ at high orders is larger than at low orders, the DNN's representation complexity is sufficient to capture global and complex 3D structures. Otherwise, the DNN's representation complexity is limited to encoding only local and simple 3D structures.", + "bbox": [ + 511, + 604, + 905, + 741 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "We further propose the following metrics to measure the strength of high-order interactions and low-order interactions encoded by the DNN.", + "bbox": [ + 511, + 742, + 903, + 787 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\begin{array}{l} \\kappa^{\\text{high}} = \\sum_{m \\in \\Omega^{\\text{high}}} \\kappa^{(m)}, \\; \\text{s.t.} \\; \\Omega^{\\text{high}} \\stackrel{\\text{def}}{=} \\left\\{m \\mid \\lceil \\frac{2}{3} n \\rceil < m \\leq n \\right\\}, \\\\ \\kappa^{\\text{low}} = \\sum_{m \\in \\Omega^{\\text{low}}} \\kappa^{(m)}, \\; \\text{s.t.} \\;
\\Omega^{\\text{low}} \\stackrel{\\text{def}}{=} \\left\\{m \\mid 1 \\leq m \\leq \\lceil \\frac{1}{3} n \\rceil \\right\\} \\end{array} \\tag{5}\n$$\n", + "text_format": "latex", + "bbox": [ + 519, + 791, + 903, + 859 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "where $\\Omega^{\\mathrm{high}}$ and $\\Omega^{\\mathrm{low}}$ denote the ranges of high-order and low-order interactions, respectively.", + "bbox": [ + 511, + 869, + 906, + 901 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "27317", + "bbox": [ + 478, + 944, + 517, + 955 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/67782b0baf6715b310e74bd4a3d170c23d7160ddc09fccb8198fea6e40ff40e5.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 93, + 88, + 331, + 189 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/7bd74d7397421b56e077a1cf24ea8db8769d47f46f8c4c5ef7afe2e6847ef828.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 331, + 88, + 517, + 189 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/2d4ad121797fb8d91330c85b0aa60c8b6db70c00e96fc0b651c0a4a601792d2e.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 519, + 89, + 705, + 189 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/766c06325e4154bea1e38a0f39b207067288d0d6955c7cf7c8cdf979b57fde53.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 93, + 189, + 331, + 273 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/5cffe96ae4f6bd91c1e2184252f0c73e72a1f8e7e956f170434a17bcd890a18a.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 331, + 191, + 517, + 273 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/2224f7471994640d18c96b588f8ca20940dc565deddc8597e39b0b3ce677752c.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 519, + 191, + 705, + 273 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/06db17dd0b0b4c5e9c0c318dddeb42bcd5217b19031bc05f0ec0541021418599.jpg", + "image_caption": [ + "Figure 3. [Conclusion 1] (a) Comparing the normalized average strength of interactions encoded by different DNNs, including DNNs trained from scratch and DNNs trained with different pre-training methods. Results show that the DNNs using pre-training methods consistently encode stronger high-order interactions and weaker low-order interactions than the DNNs trained from scratch. (b) The relationship between the strength of high-order interactions encoded by different DNNs and their corresponding classification accuracy. Results show that DNNs encoding stronger high-order interactions tend to exhibit higher accuracy."
+ ], + "image_footnote": [], + "bbox": [ + 94, + 273, + 331, + 372 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/254d67983fdf6831316bb09ceed5141568c4ed51c6ec7dbf0ed8cb2eab5a12cc.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 331, + 276, + 517, + 372 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/de395489ef92dbdb43b95b11a7d31e7ef78124b86d6c6d378c96946f3f6b100f.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 519, + 276, + 705, + 372 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/076650b651e6aa4430796a6f9f858dfef69848080a46eb6b562fc16c0fb90619.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 714, + 89, + 903, + 191 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/e2be93cf1a71bff0a1240735116ef1c0ea2645678d7d602e2b6cd91f0e07c5a4.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 714, + 191, + 903, + 276 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/2b0a90577a7f2b4cd573d46ab5d832b246c463ed621f16630a3d65b902a1901b.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 714, + 276, + 903, + 372 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "4. Interpreting different pre-training methods using interactions", + "text_level": 1, + "bbox": [ + 89, + 465, + 482, + 500 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "4.1. Comparative study setup", + "text_level": 1, + "bbox": [ + 89, + 508, + 318, + 525 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "For a given network architecture, we compare the interactions encoded by the model trained from scratch with those encoded by models trained using various pre-training methods. This comparison aims to explore whether these pretraining methods share a common underlying reason for performance improvement, which we define as the common mechanism across these methods. To provide a unified explanation for most pre-training methods, we conduct experiments on five widely used open-source pre-training methods, including IAE [38], STRL [9], CrossPoint [1], OcCo [32] and JigSaw [21], as detailed in Sec. 2.", + "bbox": [ + 89, + 530, + 483, + 696 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Networks and datasets. We conduct experiments on three network architectures: DGCNN [35], PointNet [14], and PCN [40]. For DGCNN, we utilize all five pre-training methods, while for PointNet and PCN, we focus on OcCo [32] and Jigsaw [21], depending on the accessibility of open-source implementations for each pre-training method.", + "bbox": [ + 89, + 696, + 483, + 787 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "To compare the interactions encoded by different DNNs, we use three benchmark datasets for 3D classification task: ModelNet40 [36], ShapeNet² [3], and ScanObjectNN [31]. Tab. 1 shows the statistics of these datasets. We randomly select 10 samples per class from each dataset and use the", + "bbox": [ + 89, + 787, + 483, + 864 + ], + "page_idx": 3 + }, + { + "type": "table", + "img_path": "images/65cf3ad034d8e88dfa15645b93168538a874d32429bb6cc195c3851d66464738.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
<table><tr><td>Name</td><td>Type</td><td># Class</td><td># Training / Testing</td></tr>
<tr><td>ModelNet</td><td>synthesized</td><td>40</td><td>9,843 / 2,468</td></tr>
<tr><td>ShapeNet</td><td>synthesized</td><td>16</td><td>12,137 / 4,744</td></tr>
<tr><td>ScanObjectNN</td><td>real world</td><td>15</td><td>2,304 / 576</td></tr></table>
", + "bbox": [ + 517, + 464, + 905, + 537 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Table 1. Statistics of datasets for classification.", + "bbox": [ + 568, + 542, + 849, + 555 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "method in Sec. 3 to divide each point cloud sample into $n$ regions for quantifying the interactions encoded by DNNs.", + "bbox": [ + 511, + 568, + 905, + 598 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "4.2. Exploring the common mechanism of different pre-training methods", + "text_level": 1, + "bbox": [ + 511, + 607, + 905, + 638 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Conclusion 1. The common mechanism across different pre-training methods is that they enhance the strength of high-order interactions encoded by DNNs, while reducing the strength of low-order interactions.", + "bbox": [ + 527, + 656, + 890, + 718 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Fig. 3 (a) shows the normalized average strength of the interactions encoded by different DNNs, including DNNs trained from scratch and DNNs using different pre-training methods. Results show that the strength of high-order interactions encoded by the DNNs using pre-training methods is consistently greater than that of the DNNs trained from scratch, across all datasets and network architectures. Conversely, the DNNs using pre-training methods typically encode weaker low-order interactions than the DNNs trained from scratch. Fig. 3 (b) further illustrates the relationship between the strength of high-order interactions and the clas", + "bbox": [ + 511, + 734, + 906, + 900 + ], + "page_idx": 3 + }, + { + "type": "page_footnote", + "text": "2The ShapeNet dataset for classification is derived from the ShapeNet part segmentation dataset, following [26].", + "bbox": [ + 89, + 875, + 482, + 900 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "27318", + "bbox": [ + 478, + 944, + 517, + 955 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/3dda22ba09d99ba14a40a722fa9072e88e2e08d51ddd3ca13441f0b237d9a780.jpg", + "image_caption": [ + "Figure 4. Visualization of interactions encoded by the DGCNN trained from scratch (scr) and the DGCNN pre-trained (pt) with IAE. The pre-trained DGCNN typically encodes stronger high-order interactions and weaker low-order interactions compared to the DGCNN trained from scratch." + ], + "image_footnote": [], + "bbox": [ + 93, + 89, + 315, + 290 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/20f4fd5661286d9a212e95d4a2b4c514ec0522bee551dcc0bb8bb8d1903d9b35.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 321, + 89, + 480, + 290 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "sification accuracy across different DNNs. We observe that DNNs encoding stronger high-order interactions tend to exhibit higher accuracy. Thus, we regard this shared phenomenon as the common mechanism behind the performance improvement of different pre-training methods, i.e., different pre-training methods generally enhance the strength of high-order interactions encoded by DNNs, while reducing the strength of low-order interactions, as summarized in Conclusion 1.", + "bbox": [ + 88, + 381, + 482, + 517 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Conclusion 1 reveals that pre-training methods enhance the ability of DNNs to encode complex and global 3D structures, while reducing their reliance on simple and local 3D structures. 
As simple and local 3D structures (e.g., a curve, a corner) can appear across different categories, they often lack sufficient classification information, so an over-reliance on them may lead to incorrect classifications. For example, as shown in Fig. 4, the DNN trained from scratch incorrectly classifies a \"plant\" sample as a \"stool\". This misclassification may occur because the local structures the DNN learns for the plant, such as the \"stem\" and the \"leaf\", are similar to some local structures of a stool, such as the \"legs\". However, the DNN still encodes a high strength for these local structures (i.e., low-order interactions), which results in an incorrect classification. In contrast, pre-training methods improve the modeling of complex and global 3D structures, allowing DNNs to get a more comprehensive understanding of the input, which in turn enhances their performance. Thus, beyond traditional accuracy metrics, interactions can help identify the potential reasons for classification errors by revealing which 3D structures modeled by the DNN have inappropriate weights, offering a new perspective for debugging.", + "bbox": [ + 89, + 520, + 482, + 868 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Comparison with transformer-based pre-training methods. We also measure interactions encoded by transformer-", + "bbox": [ + 89, + 869, + 482, + 900 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/2b18729678ddf6c40e4c4532bb609d7399d56e1ef4fd93b55a6b755cc2577e1f.jpg", + "image_caption": [ + "Figure 5. Comparing the normalized average strength of interactions encoded by (1) transformer-based models, (2) traditional DNNs (e.g., DGCNN and PointNet) trained from scratch, and (3) traditional DNNs using pre-training methods (e.g., DGCNN with IAE, and PointNet with OcCo). Results show that transformer-based models also encode stronger high-order interactions and weaker low-order interactions, exhibiting a similar pattern to traditional DNNs using pre-training methods." + ], + "image_footnote": [], + "bbox": [ + 516, + 89, + 903, + 195 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "based models, including PointBERT [39], PointMAE [12], PointM2AE [42], and PointGPT [4]. These models integrate pre-training methods into the model architecture, making them incompatible with traditional DNNs (e.g., DGCNN). Therefore, we directly compare the interactions encoded by transformer-based models with the interactions encoded by traditional DNNs, including the DNNs trained from scratch and the DNNs trained with pre-training methods. As shown in Fig. 5, transformer-based models also encode stronger high-order interactions and weaker low-order interactions than traditional DNNs trained from scratch, which exhibit a similar pattern to the interactions encoded by traditional DNNs using pre-training methods. This further supports Conclusion 1.", + "bbox": [ + 511, + 323, + 906, + 535 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4.3. Exploring the impact of different factors on the common mechanism", + "text_level": 1, + "bbox": [ + 511, + 542, + 903, + 574 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "We further explore two factors that impact the common mechanism: (a) the extent of pre-training, and (b) the amount of fine-tuning data used for downstream tasks.", + "bbox": [ + 511, + 580, + 905, + 627 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Conclusion 2(a). 
The pre-training process progressively enhances the strength of high-order interactions encoded while weakening the strength of low-order interactions as the extent of pre-training increases.", + "bbox": [ + 527, + 641, + 890, + 703 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "In this subsection, we first investigate the relationship between the extent of pre-training and the strength of interactions encoded by DNNs. Here, the extent of pre-training refers to the number of pre-training epochs, i.e., the range of epochs from the start of pre-training to the epoch at which pre-training converges. To this end, we conduct experiments on DGCNN with two pre-training methods, including IAE and CrossPoint. For each pre-training method, let $T_{\\mathrm{max}}$ denote the total number of epochs at which the pre-training process of the DNN converges. We select the DNNs at training epochs $0, 0.2T_{\\mathrm{max}}, 0.4T_{\\mathrm{max}}, \\ldots, T_{\\mathrm{max}}$ , covering six different stages of the pre-training process. Then, for all", + "bbox": [ + 511, + 719, + 906, + 900 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "27319", + "bbox": [ + 478, + 944, + 519, + 957 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/a87161cf4e1679b3d5dad2935f636cc2f3b4ffb672ab7dd1d6f0a10ba275ac30.jpg", + "image_caption": [ + "Figure 6. [Conclusion 2(a)] Comparing the normalized average strength of interactions encoded by DGCNNs pre-trained for different extents, ranging from initial pre-training (0%) to full convergence (100%). As the extent of pre-training increases, the strength of high-order interactions encoded by the DNNs typically rises, while the strength of low-order interactions generally decreases." + ], + "image_footnote": [], + "bbox": [ + 91, + 88, + 480, + 205 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/8aa174ef63623ca8ad69b2f8f26b8407fd901be9a1d01012708f253c838129c1.jpg", + "image_caption": [ + "Figure 7. [Conclusion 2(b)] Comparing the normalized average strength of interactions encoded by DNNs fine-tuned with varying amounts of data. Results show that as the amount of fine-tuning data increases from $1\\%$ to $100\\%$ , the strength of high-order interactions encoded by the DNNs generally increases, while the strength of low-order interactions generally decreases." + ], + "image_footnote": [], + "bbox": [ + 91, + 305, + 480, + 422 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "DNNs at different pre-training extents, we fine-tune them on the same downstream task and quantify the interactions encoded by these fine-tuned DNNs.", + "bbox": [ + 89, + 523, + 482, + 568 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Fig. 6 presents the experimental results. We observe that as the extent of pre-training increases, the strength of high-order interactions encoded by the DNNs generally increases, while the strength of low-order interactions typically decreases. We summarize this relationship between the extent of pre-training and the interactions encoded by DNNs in Conclusion 2(a). This conclusion suggests that sufficient pre-training enhances the model's ability to capture complex and global 3D contexts, further validating the common mechanism outlined in Conclusion 1.", + "bbox": [ + 89, + 569, + 482, + 720 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Conclusion 2(b). 
Increasing the amount of fine-tuning data further enhances the strength of high-order interactions encoded by DNNs, while weakening the strength of low-order interactions.", + "bbox": [ + 104, + 733, + 468, + 794 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "To investigate the relationship between the amount of fine-tuning data for downstream tasks and the interactions encoded by DNNs, we construct seven training sets of varying sizes from the ModelNet40 dataset, containing $1\\%$ , $10\\%$ , $20\\%$ , $30\\%$ , $50\\%$ , $70\\%$ , and $100\\%$ of the original ModelNet40 training data, respectively. Note that we ensure", + "bbox": [ + 89, + 810, + 483, + 902 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/037f4d02f5874a1f526c7bb0b41b8a9fd67377bf8b99ad52f1b4877cba002600.jpg", + "image_caption": [ + "Figure 8. Comparing the classification accuracy and the strength of high-order interactions encoded by different DNNs fine-tuned with varying amounts of data. As the amount of data increases, the accuracy gap between the DNN trained from scratch and the DNN pre-trained with IAE narrows, while the gap in the strength of high-order interactions encoded by these DNNs widens." + ], + "image_footnote": [], + "bbox": [ + 516, + 90, + 903, + 215 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "at least one sample from each class is included, allowing the model to learn from all categories. We then use the different-sized training sets to fine-tune DGCNNs, including those pre-trained using the IAE method and the Cross-Point method. As shown in Fig. 7, as the amount of fine-tuning data increases, the strength of high-order interactions encoded by DNNs gradually increases, while the strength of low-order interactions decreases. We summarize this relationship between the amount of fine-tuning data and the interactions encoded by DNNs in Conclusion 2(b).", + "bbox": [ + 511, + 313, + 906, + 465 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.4. Exploring the potential risk of pre-training methods in reducing DNN's transferability", + "text_level": 1, + "bbox": [ + 511, + 473, + 906, + 505 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Conclusion 3. Pre-training methods carry a potential risk of reducing the transferability of features encoded by DNNs.", + "bbox": [ + 527, + 521, + 890, + 568 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "When exploring the relationship between the amount of fine-tuning data and the interactions encoded by DNNs, we observe the following anomalous phenomenon. As shown in Fig. 8, the gap in classification accuracy between the pretrained DNN and the DNN trained from scratch becomes marginal as the fine-tuning data increases. For example, when the fine-tuning data reaches $100\\%$ , the accuracy gap is only $0.2\\%$ . However, the gap in the strength of high-order interactions between the two DNNs gradually increases, indicating that high-order interactions with excessively high strength are not necessary for performance improvement.", + "bbox": [ + 511, + 583, + 906, + 750 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Since high-order interactions generally carry a greater risk of overfitting [41], we investigate the potential risk of pre-training methods in reducing the transferability of features encoded by DNNs. Here, the transferability of features refers to the generalization ability of the features. 
For example, if the features learned from one dataset (e.g., the features of the airplane class in ModelNet) can be applied to another unseen dataset (e.g., identifying the airplane class in ShapeNet), we consider these features to have high transferability. To this end, we use ShapeNet as the unseen", + "bbox": [ + 511, + 750, + 908, + 901 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "27320", + "bbox": [ + 478, + 944, + 519, + 957 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/524e5286dfb6e543842f6239f5a62352866fbed12fbf6488c0615028f8db39c5.jpg", + "image_caption": [ + "Figure 9. [Conclusion 3] Comparing the zero-shot classification accuracy of DNNs with and without pre-training, followed by fine-tuning with varying amounts of data. Results show that the zero-shot accuracy of the pre-trained DNN initially exceeds that of the DNN trained from scratch when the fine-tuning data is limited (e.g., $1\\%$), but falls below that of the DNN trained from scratch as the fine-tuning data becomes sufficient (e.g., $100\\%$)." + ], + "image_footnote": [], + "bbox": [ + 91, + 88, + 480, + 195 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "dataset and compare the classification accuracy of different DNNs, including DGCNNs trained with varying amounts of data from ModelNet, as well as DGCNNs pre-trained and then fine-tuned with varying amounts of data. Since the category labels in the two datasets do not completely align, we identify eight common categories. Please see the supplementary material for more implementation details.", + "bbox": [ + 89, + 309, + 483, + 415 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Fig. 9 shows the results. We find that when the amount of fine-tuning data is limited (e.g., $1\\%$), pre-trained DNNs, such as the DNN pre-trained with CrossPoint, achieve higher zero-shot accuracy $(+8.9\\%)$ compared to the DNN trained from scratch. In contrast, when the fine-tuning data is sufficient (e.g., $100\\%$), the accuracy of the DNN pre-trained with CrossPoint significantly lags behind that of the DNN trained from scratch $(-14.7\\%)$. We attribute this to pre-training methods causing DNNs to encode high-order interactions with excessively high strength, which in turn reduces the transferability of the features encoded by the DNNs. Note that we do not claim that using pre-training methods to enhance the strength of high-order interactions encoded by DNNs is inherently negative. Rather, we point out this potential risk and offer new insights for the design of pre-training methods.", + "bbox": [ + 89, + 416, + 483, + 659 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "5. Guiding the training process using the common mechanism", + "text_level": 1, + "bbox": [ + 89, + 675, + 482, + 709 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Traditional pre-training methods, while improving performance, inevitably require extensive pre-training on large-scale unlabeled datasets, which demands considerable time and computational resources. As discussed above, we find that the common mechanism underlying different pre-training methods is that they universally enhance the strength of high-order interactions encoded by DNNs while reducing the strength of low-order interactions. Building on this insight, we propose a new method that directly enhances the strength of high-order interactions encoded by DNNs while reducing the strength of low-order interactions. 
In this way, our method achieves performance", + "bbox": [ + 89, + 719, + 483, + 901 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/b54690b9075dd82112437b300e73e7480e077e9978c3ea391be11a238f70d379.jpg", + "image_caption": [ + "Figure 10. (a) Curves showing the values of the proposed loss term $\\mathcal{L}_{\\text{interaction}}$ for different values of $\\alpha$ throughout the training process. (b) Comparison of the normalized average strength of interactions encoded by DNNs for various $\\alpha$ values in the loss term." + ], + "image_footnote": [], + "bbox": [ + 516, + 87, + 903, + 202 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "comparable to traditional pre-training methods while avoiding the need for pre-training on large-scale unlabeled datasets. Specifically, we introduce a new heuristic loss term defined as follows.", + "bbox": [ + 511, + 273, + 906, + 333 + ], + "page_idx": 6 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal{L}_{\\text{interaction}} = \\mathbb{E}_{x}\\left[ \\mathbb{E}_{|S| \\in \\Omega^{\\mathrm{low}}}[|I(S)|] - \\mathbb{E}_{|S| \\in \\Omega^{\\mathrm{high}}}[|I(S)|] \\right], \\tag{6}\n$$\n", + "text_format": "latex", + "bbox": [ + 522, + 342, + 906, + 359 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "where $\\Omega^{\\mathrm{high}}$ and $\\Omega^{\\mathrm{low}}$ define the ranges of high-order and low-order interactions, as detailed in Sec. 3. Minimizing the loss term $\\mathcal{L}_{\\mathrm{interaction}}$ forces the DNN to weaken the strength of low-order interactions, i.e., decreasing $\\mathbb{E}_x\\mathbb{E}_{|S|\\in \\Omega^{\\mathrm{low}}}[|I(S)|]$, while enhancing the strength of high-order interactions, i.e., increasing $\\mathbb{E}_x\\mathbb{E}_{|S|\\in \\Omega^{\\mathrm{high}}}[|I(S)|]$.", + "bbox": [ + 511, + 368, + 905, + 460 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "However, computing Eq. (6) is NP-hard. To overcome this challenge, we approximate $\\mathcal{L}_{\\mathrm{interaction}}$ using a sampling-based approach. Specifically, given a point cloud $x$ with $n$ regions indexed by $N = \\{1,2,\\dots,n\\}$, we sample three disjoint subsets $S_{1}, S_{2}, S_{3} \\subseteq N$ where the orders of the subsets $|S_1|, |S_2|, |S_3| \\in \\Omega^{\\mathrm{low}}$, with each subset representing a low-order interaction encoded by the DNN. We consider the union $S_{\\mathrm{union}} = S_{1} \\cup S_{2} \\cup S_{3}$ as a relatively high-order interaction. Then, we can approximate the interaction loss $\\mathcal{L}_{\\mathrm{interaction}}$ as follows.", + "bbox": [ + 511, + 460, + 905, + 609 + ], + "page_idx": 6 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal{L}^{\\prime}_{\\text{interaction}} = \\mathbb{E}_{S_{1}, S_{2}, S_{3} \\subseteq N}\\left[ \\mathbb{E}_{i \\in \\{1,2,3\\}}[|I(S_{i})|] - |I(S_{\\mathrm{union}})| \\right]. \\tag{7}\n$$\n", + "text_format": "latex", + "bbox": [ + 517, + 619, + 903, + 648 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Given a traditional DNN, we incorporate the interaction loss into the training process using the following loss function for the classification task, without the need for additional pre-training on large-scale unlabeled datasets.", + "bbox": [ + 511, + 648, + 905, + 709 + ], + "page_idx": 6 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal{L} = \\mathcal{L}_{\\text{classification}} + \\alpha \\mathcal{L}_{\\text{interaction}}, \\tag{8}\n$$\n", + "text_format": "latex", + "bbox": [ + 606, + 720, + 903, + 738 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "where $\\mathcal{L}_{\\mathrm{classification}}$ denotes the standard classification loss function (e.g., cross-entropy loss), and $\\alpha > 0$ is the hyper-parameter controlling the strength of the interaction loss. Please see Tab. 4 for the effects of varying $\\alpha$. As shown in Fig. 10 (b), the strength of high-order interactions encoded by the DNN with $\\alpha > 0$ is generally higher than the result when $\\alpha = 0$, but it does not increase indefinitely as $\\alpha$ grows. This shows the effectiveness of our interaction loss.", + "bbox": [ + 511, + 750, + 905, + 869 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Experiments and results analysis. To evaluate the effectiveness of the proposed loss term, we conduct experi-", + "bbox": [ + 511, + 869, + 903, + 900 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "27321", + "bbox": [ + 478, + 944, + 517, + 955 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/f87f4c37eaaffb9145ef3c9d64b45b23f3fa8f7179c6ba3b8561b26d3725f91d.jpg", + "table_caption": [], + "table_footnote": [ + "Table 2. Classification accuracy (\\%) on ModelNet40 and ScanObjectNN datasets. The best results are shown in bold and the second-best results are underlined. Our method achieves results comparable to pre-training methods, while not requiring pre-training on large-scale datasets." + ], + "table_body": "
<table><tr><td>Method</td><td>ModelNet40</td><td>ScanObjectNN</td><td>No Pre-train</td></tr>
<tr><td>PointNet</td><td>89.2</td><td>68.0</td><td></td></tr>
<tr><td>PointNet + JigSaw</td><td>89.6</td><td>-</td><td></td></tr>
<tr><td>PointNet + OcCo</td><td>90.1</td><td>-</td><td></td></tr>
<tr><td>PointNet + $\\mathcal{L}_{\\text{interaction}}$ (Ours)</td><td>90.1</td><td>69.0</td><td></td></tr>
<tr><td>DGCNN</td><td>92.5</td><td>78.1</td><td></td></tr>
<tr><td>DGCNN + JigSaw</td><td>92.6</td><td>83.5</td><td></td></tr>
<tr><td>DGCNN + OcCo</td><td>93.0</td><td>84.3</td><td></td></tr>
<tr><td>DGCNN + STRL</td><td>93.1</td><td>-</td><td></td></tr>
<tr><td>DGCNN + CrossPoint</td><td>92.8</td><td>-</td><td></td></tr>
<tr><td>DGCNN + IAE</td><td>94.2</td><td>85.6</td><td></td></tr>
<tr><td>DGCNN + $\\mathcal{L}_{\\text{interaction}}$ (Ours)</td><td>93.3</td><td>79.4</td><td></td></tr>
<tr><td>CurveNet</td><td>92.8</td><td>79.2</td><td></td></tr>
<tr><td>CurveNet + $\\mathcal{L}_{\\text{interaction}}$ (Ours)</td><td>93.1</td><td>82.0</td><td></td></tr>
<tr><td>GDANet</td><td>92.3</td><td>78.7</td><td></td></tr>
<tr><td>GDANet + $\\mathcal{L}_{\\text{interaction}}$ (Ours)</td><td>92.8</td><td>80.0</td><td></td></tr></table>
", + "bbox": [ + 91, + 88, + 482, + 329 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "ments on 3D point cloud classification and semantic segmentation tasks. For the classification task, we use the ModelNet40 and ScanObjectNN datasets, as described in Sec. 4.1. Specifically, for the ScanObjectNN dataset, we conduct experiments using the PB_T50_RS variant, which is the most challenging variant. We train PointNet and DGCNN using the proposed loss term and set $\\alpha$ to 0.0005. As shown in Tab. 2, our proposed loss term consistently improves the performance of PointNet, DGCNN, CurveNet [11], and GDANet [37] on both the ModelNet40 and the ScanObjectNN testing splits, compared to their original versions. Moreover, our method demonstrates performance comparable to pre-training methods, without the need for pre-training on large-scale datasets.", + "bbox": [ + 89, + 417, + 482, + 628 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "For the semantic segmentation task, we conduct experiments on the Stanford Large-Scale 3D Indoor Spaces (S3DIS) dataset [2]. The S3DIS consists of 3D point clouds collected from six distinct large-scale indoor environments, with each point cloud annotated with per-point categorical labels. We randomly subsample 4,096 points from the original point cloud and apply 6-fold cross-validation during fine-tuning. Since the proposed interaction loss is specifically designed for 3D classification, it cannot be directly applied to segmentation tasks. Instead, we adopt a two-stage training approach: first, we train a DNN on the classification task with our interaction loss, and then fine-tune the model on the semantic segmentation task. As shown in Tab. 3, the DGCNN using the proposed loss term achieves $86.8\\%$ overall accuracy and $59.0\\%$ mIoU, outperforming the majority of pre-training methods. Additionally, our loss term also improves the performance of the PointNet.", + "bbox": [ + 89, + 628, + 482, + 883 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Effects of the hyper-parameter $\\alpha$ . We train the", + "bbox": [ + 109, + 885, + 482, + 900 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/a7f499fbba8bcd6711624fce0dc5c8da83e3549cbe3d252016126d68a4c90d88.jpg", + "table_caption": [], + "table_footnote": [ + "Table 3. Semantic segmentation on S3DIS. We report Overall Accuracy (OA) and mean Intersection over Union (mIoU) across six folds. Our method surpasses most pre-training methods." + ], + "table_body": "
<table><tr><td rowspan="2">Method</td><td colspan="2">S3DIS 6-Fold</td></tr>
<tr><td>OA</td><td>mIoU</td></tr>
<tr><td>PointNet</td><td>78.5</td><td>47.6</td></tr>
<tr><td>PointNet + $\\mathcal{L}_{\\text{interaction}}$ (Ours)</td><td>82.1</td><td>50.8</td></tr>
<tr><td>DGCNN</td><td>84.1</td><td>56.1</td></tr>
<tr><td>DGCNN + JigSaw</td><td>84.4</td><td>56.6</td></tr>
<tr><td>DGCNN + OcCo</td><td>85.1</td><td>58.5</td></tr>
<tr><td>DGCNN + STRL</td><td>84.2</td><td>57.1</td></tr>
<tr><td>DGCNN + IAE</td><td>85.9</td><td>60.7</td></tr>
<tr><td>DGCNN + $\\mathcal{L}_{\\text{interaction}}$ (Ours)</td><td>86.8</td><td>59.0</td></tr></table>
", + "bbox": [ + 517, + 88, + 905, + 234 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/6a45d92e745b496fbe64f5c2ab1363eaca03eecdd2be1a4ba76ef84f3330e976.jpg", + "table_caption": [], + "table_footnote": [ + "Table 4. Classification accuracy (\\%) for DGCNNs trained with varying hyper-parameters $\\alpha$ for the interaction loss." + ], + "table_body": "
<table><tr><td>$\\alpha$</td><td>ModelNet40</td><td>ScanObjectNN</td></tr>
<tr><td>0.0</td><td>92.5</td><td>78.1</td></tr>
<tr><td>0.0001</td><td>93.0</td><td>79.0</td></tr>
<tr><td>0.0005</td><td>93.3</td><td>79.4</td></tr>
<tr><td>0.001</td><td>91.3</td><td>78.1</td></tr></table>
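Read together with Tab. 4, Eqs. (7)-(8) amount to one extra regularization term per training step. The PyTorch sketch below shows one way the sampled loss could be wired in; the model interface (a (1, P, 3) tensor mapped to (1, C) logits), the subset size of 2, the single sampled triple per cloud, and all helper names are illustrative assumptions, not the authors' released implementation. Note the cost: with $|S_i| = 2$, the union term alone requires $2^6 = 64$ forward passes per sample.

```python
# Sketch only: `model`, `region_ids`, and the subset size are assumptions.
import itertools
import random
import torch
import torch.nn.functional as F

def v_score(model, x, region_ids, keep, label):
    """v(x_T) = log p / (1 - p) for the ground-truth class; regions outside
    `keep` are replaced by the cloud centroid (stays differentiable)."""
    if keep:
        mask = torch.isin(region_ids, torch.tensor(sorted(keep)))
    else:
        mask = torch.zeros_like(region_ids, dtype=torch.bool)
    x_T = torch.where(mask[:, None], x, x.mean(dim=0))
    log_p = F.log_softmax(model(x_T[None]), dim=-1)[0, label]
    return log_p - torch.log1p(-log_p.exp())

def interaction(model, x, region_ids, S, label):
    """I(S) via the inclusion-exclusion sum of Eq. (2)."""
    out = 0.0
    for r in range(len(S) + 1):
        for T in itertools.combinations(sorted(S), r):
            out = out + (-1) ** (len(S) - len(T)) * v_score(model, x, region_ids, set(T), label)
    return out

def interaction_loss(model, x, region_ids, n, label, size=2):
    """L'_interaction of Eq. (7) with one sampled triple of disjoint subsets."""
    picks = random.sample(range(n), 3 * size)
    subsets = [set(picks[i * size:(i + 1) * size]) for i in range(3)]
    low = sum(interaction(model, x, region_ids, s, label).abs() for s in subsets) / 3.0
    high = interaction(model, x, region_ids, set().union(*subsets), label).abs()
    return low - high  # Eq. (8): loss = F.cross_entropy(logits, y) + alpha * this
```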
", + "bbox": [ + 563, + 297, + 854, + 375 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "DGCNN with various interaction loss weights $\\alpha$ and evaluate the testing accuracy, as shown in Tab. 4. The accuracy initially increases and then decreases as $\\alpha$ rises. We attribute this to the loss term enhancing the strength of high-order interactions encoded by the DNN. At lower $\\alpha$ values, the interaction loss improves the DNN's modeling of global 3D structures. However, excessively high values of $\\alpha$ lead to excessively high strength of high-order interactions, increasing the risk of overfitting, as discussed in Conclusion 3. With an appropriately chosen $\\alpha$ , the interaction loss effectively enhances the training process, further supporting the common mechanism outlined in Conclusion 1.", + "bbox": [ + 511, + 422, + 906, + 604 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "6. Conclusion", + "text_level": 1, + "bbox": [ + 511, + 628, + 633, + 645 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "In this paper, we use interactions to investigate the common mechanism underlying the effectiveness of different pretraining methods for 3D point clouds. Specifically, these methods generally enhance the strength of high-order interactions encoded by DNNs, while reducing the strength of low-order interactions. We then explore the impact of various factors on the mechanism and find that sufficient pretraining and adequate fine-tuning data further reinforce this mechanism. Additionally, we identify a potential risk that pre-training may reduce the transferability of DNNs. Based on the common mechanism, we propose a new method that directly enhances the strength of high-order interactions encoded by DNNs while weakening the strength of low-order interactions. Experiments show that our method achieves performance comparable to pre-training methods, without the need for pre-training on large-scale datasets.", + "bbox": [ + 511, + 659, + 906, + 900 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "27322", + "bbox": [ + 478, + 944, + 519, + 957 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Acknowledgments", + "text_level": 1, + "bbox": [ + 91, + 90, + 250, + 107 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "This work is partially supported by the National Nature Science Foundation of China (No. 62206170, 62376199).", + "bbox": [ + 89, + 114, + 483, + 146 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 91, + 157, + 187, + 174 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[1] Mohamed Afham, Isuru Dissanayake, Dinithi Dissanayake, Amaya Dharmasiri, Kanchana Thilakarathna, and Ranga Rodrigo. Crosspoint: Self-supervised cross-modal contrastive learning for 3d point cloud understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9902-9912, 2022. 1, 2, 4", + "[2] Iro Armeni, Ozan Sener, Amir R Zamir, Helen Jiang, Ioannis Brilakis, Martin Fischer, and Silvio Savarese. 3d semantic parsing of large-scale indoor spaces. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1534-1543, 2016. 8", + "[3] Angel X Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, et al. Shapenet: An information-rich 3d model repository. arXiv preprint arXiv:1512.03012, 2015. 
4", + "[4] Guangyan Chen, Meiling Wang, Yi Yang, Kai Yu, Li Yuan, and Yufeng Yue. Pointgpt: Auto-regressively generative pretraining from point clouds. Advances in Neural Information Processing Systems, 36, 2024. 5", + "[5] Jiajing Chen, Burak Kakillioglu, Huantao Ren, and Senem Velipasalar. Why discard if you can recycle?: A recycling max pooling module for 3d point cloud analysis. In Proceedings of the IEEE/CVF conference on Computer Vision and Pattern Recognition, pages 559-567, 2022. 2", + "[6] Lu Chen, Siyu Lou, Benhao Huang, and Quanshi Zhang. Defining and extracting generalizable interaction primitives from dnns. arXiv preprint arXiv:2401.16318, 2024. 1", + "[7] Xu Cheng, Chuntung Chu, Yi Zheng, Jie Ren, and Quanshi Zhang. A game-theoretic taxonomy of visual concepts in dnns. arXiv preprint arXiv:2106.10938, 2021. 3", + "[8] Michel Grabisch and Marc Roubens. An axiomatic approach to the concept of interaction among players in cooperative games. International Journal of game theory, 28:547-565, 1999. 3, 1", + "[9] Siyuan Huang, Yichen Xie, Song-Chun Zhu, and Yixin Zhu. Spatio-temporal self-supervised representation learning for 3d point clouds. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6535-6545, 2021. 1, 2, 4", + "[10] Mingjie Li and Quanshi Zhang. Does a neural network really encode symbolic concepts? In International conference on machine learning, pages 20452-20469. PMLR, 2023. 2, 3", + "[11] AAM Muzahid, Wanggen Wan, Ferdous Sohel, Lianyao Wu, and Li Hou. Curvenet: Curvature-based multitask learning deep networks for 3d object recognition. IEEE/CAA Journal of Automatica Sinica, 8(6):1177-1187, 2020. 2, 8", + "[12] Yatian Pang, Wenxiao Wang, Francis EH Tay, Wei Liu, Yonghong Tian, and Li Yuan. Masked autoencoders for point cloud self-supervised learning. In European conference on computer vision, pages 604-621. Springer, 2022. 5" + ], + "bbox": [ + 93, + 183, + 483, + 901 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[13] Omid Poursaeed, Tianxing Jiang, Han Qiao, Nayun Xu, and Vladimir G Kim. Self-supervised learning of point clouds via orientation estimation. In 2020 International Conference on 3D Vision (3DV), pages 1018-1028. IEEE, 2020. 1, 2", + "[14] Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 652-660, 2017. 2, 4", + "[15] Guocheng Qian, Yuchen Li, Houwen Peng, Jinjie Mai, Hasan Hammoud, Mohamed Elhoseiny, and Bernard Ghanem. Pointnext: Revisiting pointnet++ with improved training and scaling strategies. Advances in neural information processing systems, 35:23192-23204, 2022. 2", + "[16] Yongming Rao, Jiwen Lu, and Jie Zhou. Global-local bidirectional reasoning for unsupervised representation learning of 3d point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5376-5385, 2020. 1, 2", + "[17] Jie Ren, Mingjie Li, Qirui Chen, Huiqi Deng, and Quanshi Zhang. Towards axiomatic, hierarchical, and symbolic explanation for deep models. arXiv preprint arXiv:2111.06206v5, 2021. 2, 1", + "[18] Jie Ren, Zhanpeng Zhou, Qirui Chen, and Quanshi Zhang. Can we faithfully represent masked states to compute shapley values on a dnn? arXiv preprint arXiv:2105.10719, 2021. 2", + "[19] Jie Ren, Mingjie Li, Qirui Chen, Huiqi Deng, and Quanshi Zhang. 
Defining and quantifying the emergence of sparse concepts in dnns. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 20280-20289, 2023. 3", + "[20] Qihan Ren, Yang Xu, Junpeng Zhang, Yue Xin, Dongrui Liu, and Quanshi Zhang. Towards the dynamics of a dnn learning symbolic interactions. arXiv preprint arXiv:2407.19198, 2024. 1, 3", + "[21] Jonathan Sauder and Bjarne Sievers. Self-supervised deep learning on point clouds by reconstructing space. Advances in Neural Information Processing Systems, 32, 2019. 1, 2, 4", + "[22] Lloyd S Shapley. A value for n-person games. Contribution to the Theory of Games, 2, 1953. 3, 1", + "[23] Charu Sharma and Manohar Kaul. Self-supervised few-shot learning on point clouds. Advances in Neural Information Processing Systems, 33:7212-7221, 2020. 1, 2", + "[24] Wen Shen, Binbin Zhang, Shikun Huang, Zhihua Wei, and Quanshi Zhang. 3d-rotation-equivariant quaternion neural networks. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XX 16, pages 531–547. Springer, 2020. 2", + "[25] Wen Shen, Qihan Ren, Dongrui Liu, and Quanshi Zhang. Interpreting representation quality of dnns for 3d point cloud processing. Advances in Neural Information Processing Systems, 34:8857-8870, 2021. 1, 3", + "[26] Wen Shen, Zhihua Wei, Shikun Huang, Binbin Zhang, Panyue Chen, Ping Zhao, and Quanshi Zhang. Verifiability and predictability: Interpreting utilities of network architectures for point cloud processing. In Proceedings of the IEEE/CVF" + ], + "bbox": [ + 516, + 92, + 905, + 901 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "27323", + "bbox": [ + 478, + 944, + 517, + 955 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Conference on Computer Vision and Pattern Recognition, pages 10703-10712, 2021. 1, 4", + "[27] Wen Shen, Zhihua Wei, Qihan Ren, Binbin Zhang, Shikun Huang, Jiaqi Fan, and Quanshi Zhang. Interpretable rotation-equivariant quaternion neural networks for 3d point cloud processing. IEEE Transactions on Pattern Analysis and Machine Intelligence, 46(5):3290-3304, 2024. 2", + "[28] Mukund Sundararajan, Kedar Dhamdhere, and Ashish Agarwal. The shapley taylor interaction index. In International conference on machine learning, pages 9259-9268. PMLR, 2020. 3, 1", + "[29] Ling Tang, Wen Shen, Zhanpeng Zhou, Yuefeng Chen, and Quanshi Zhang. Defects of convolutional decoder networks in frequency representation. arXiv preprint arXiv:2210.09020, 2022. 3", + "[30] Ali Thabet, Humam Alwassel, and Bernard Ghanem. Self-supervised learning of local features in 3d point clouds. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops, pages 938-939, 2020. 1, 2", + "[31] Mikaela Angelina Uy, Quang-Hieu Pham, Binh-Son Hua, Thanh Nguyen, and Sai-Kit Yeung. Revisiting point cloud classification: A new benchmark dataset and classification model on real-world data. In Proceedings of the IEEE/CVF international conference on computer vision, pages 1588–1597, 2019. 4", + "[32] Hanchen Wang, Qi Liu, Xiangyu Yue, Joan Lasenby, and Matt J Kusner. Unsupervised point cloud pre-training via occlusion completion. In Proceedings of the IEEE/CVF international conference on computer vision, pages 9782-9792, 2021. 1, 2, 4", + "[33] Xin Wang, Jie Ren, Shuyun Lin, Xiangming Zhu, Yisen Wang, and Quanshi Zhang. A unified approach to interpreting and boosting adversarial transferability. 
arXiv preprint arXiv:2010.04055, 2020. 2", + "[34] Xin Wang, Shuyun Lin, Hao Zhang, Yufei Zhu, and Quanshi Zhang. Interpreting attributions and interactions of adversarial attacks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1095-1104, 2021. 2, 3", + "[35] Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E Sarma, Michael M Bronstein, and Justin M Solomon. Dynamic graph cnn for learning on point clouds. ACM Transactions on Graphics (tog), 38(5):1-12, 2019. 2, 4", + "[36] Zhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaou Tang, and Jianxiong Xiao. 3d shapenets: A deep representation for volumetric shapes. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1912-1920, 2015. 4", + "[37] Mutian Xu, Junhao Zhang, Zhipeng Zhou, Mingye Xu, Xiaojuan Qi, and Yu Qiao. Learning geometry-disentangled representation for complementary understanding of 3d object point cloud. In Proceedings of the AAAI conference on artificial intelligence, pages 3056-3064, 2021. 2, 8", + "[38] Siming Yan, Zhenpei Yang, Haoxiang Li, Chen Song, Li Guan, Hao Kang, Gang Hua, and Qixing Huang. Implicit autoencoder for point-cloud self-supervised representation" + ], + "bbox": [ + 91, + 90, + 482, + 900 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 14530-14542, 2023. 1, 2, 4", + "[39] Xumin Yu, Lulu Tang, Yongming Rao, Tiejun Huang, Jie Zhou, and Jiwen Lu. Point-bert: Pre-training 3d point cloud transformers with masked point modeling. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 19313-19322, 2022. 5", + "[40] Wentao Yuan, Tejas Khot, David Held, Christoph Mertz, and Martial Hebert. Pcn: Point completion network. In 2018 international conference on 3D vision (3DV), pages 728-737. IEEE, 2018. 2, 4", + "[41] Hao Zhang, Sen Li, Yinchao Ma, Mingjie Li, Yichen Xie, and Quanshi Zhang. Interpreting and boosting dropout from a game-theoretic view. arXiv preprint arXiv:2009.11729, 2020. 1, 2, 6", + "[42] Renrui Zhang, Ziyu Guo, Peng Gao, Rongyao Fang, Bin Zhao, Dong Wang, Yu Qiao, and Hongsheng Li. Point-m2ae: multi-scale masked autoencoders for hierarchical point cloud pre-training. Advances in neural information processing systems, 35:27061-27074, 2022. 5", + "[43] Zaiwei Zhang, Rohit Girdhar, Armand Joulin, and Ishan Misra. Self-supervised pretraining of 3d features on any point-cloud. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10252-10263, 2021. 1, 2", + "[44] Huilin Zhou, Huijie Tang, Mingjie Li, Hao Zhang, Zhenyu Liu, and Quanshi Zhang. Explaining how a neural network plays the go game and let people learn. arXiv preprint arXiv:2310.09838, 2023. 1", + "[45] Huilin Zhou, Hao Zhang, Huiqi Deng, Dongrui Liu, Wen Shen, Shih-Han Chan, and Quanshi Zhang. Concept-level explanation for the generalization of a dnn. arXiv preprint arXiv:2302.13091, 2023. 
2, 3" + ], + "bbox": [ + 516, + 92, + 903, + 571 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "27324", + "bbox": [ + 478, + 944, + 519, + 955 + ], + "page_idx": 9 + } +] \ No newline at end of file diff --git a/2025/A Unified Approach to Interpreting Self-supervised Pre-training Methods for 3D Point Clouds via Interactions/99ce7f16-2914-4f50-bc65-0054f0b31f07_model.json b/2025/A Unified Approach to Interpreting Self-supervised Pre-training Methods for 3D Point Clouds via Interactions/99ce7f16-2914-4f50-bc65-0054f0b31f07_model.json new file mode 100644 index 0000000000000000000000000000000000000000..9ab42b3c8db5ff61b9f7c45c9ee15490a97d5a9f --- /dev/null +++ b/2025/A Unified Approach to Interpreting Self-supervised Pre-training Methods for 3D Point Clouds via Interactions/99ce7f16-2914-4f50-bc65-0054f0b31f07_model.json @@ -0,0 +1,2222 @@ +[ + [ + { + "type": "header", + "bbox": [ + 0.107, + 0.003, + 0.182, + 0.043 + ], + "angle": 0, + "content": "CVF" + }, + { + "type": "header", + "bbox": [ + 0.237, + 0.001, + 0.812, + 0.047 + ], + "angle": 0, + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." + }, + { + "type": "title", + "bbox": [ + 0.129, + 0.13, + 0.872, + 0.175 + ], + "angle": 0, + "content": "A Unified Approach to Interpreting Self-supervised Pre-training Methods for 3D Point Clouds via Interactions" + }, + { + "type": "text", + "bbox": [ + 0.196, + 0.204, + 0.808, + 0.24 + ], + "angle": 0, + "content": "Qiang Li, Jian Ruan, Fanghao Wu, Yuchi Chen, Zhihua Wei*, and Wen Shen* Tongji University, Shanghai, China" + }, + { + "type": "text", + "bbox": [ + 0.22, + 0.242, + 0.773, + 0.258 + ], + "angle": 0, + "content": "{qli, jianruan, 2432055, 1953721, zhihua_wei, wenshen}@tongji.edu.cn" + }, + { + "type": "title", + "bbox": [ + 0.248, + 0.292, + 0.327, + 0.308 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.324, + 0.486, + 0.747 + ], + "angle": 0, + "content": "Recently, many self-supervised pre-training methods have been proposed to improve the performance of deep neural networks (DNNs) for 3D point clouds processing. However, the common mechanism underlying the effectiveness of different pre-training methods remains unclear. In this paper, we use game-theoretic interactions as a unified approach to explore the common mechanism of pre-training methods. Specifically, we decompose the output score of a DNN into the sum of numerous effects of interactions, with each interaction representing a distinct 3D substructure of the input point cloud. Based on the decomposed interactions, we draw the following conclusions. (1) The common mechanism across different pre-training methods is that they enhance the strength of high-order interactions encoded by DNNs, which represent complex and global 3D structures, while reducing the strength of low-order interactions, which represent simple and local 3D structures. (2) Sufficient pre-training and adequate fine-tuning data for downstream tasks further reinforce the mechanism described above. (3) Pre-training methods carry a potential risk of reducing the transferability of features encoded by DNNs. 
Inspired by the observed common mechanism, we propose a new method to directly enhance the strength of high-order interactions and reduce the strength of low-order interactions encoded by DNNs, improving performance without the need for pre-training on large-scale datasets. Experiments show that our method achieves performance comparable to traditional pre-training methods." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.776, + 0.222, + 0.793 + ], + "angle": 0, + "content": "1. Introduction" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.802, + 0.485, + 0.879 + ], + "angle": 0, + "content": "Self-supervised pre-training methods for 3D point clouds have developed rapidly in recent years [1, 9, 13, 16, 21, 23, 30, 32, 38, 43]. Pre-training methods first train DNNs on large-scale unlabeled datasets, then fine-tune the DNNs on downstream tasks, generally enhancing their performance." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.293, + 0.907, + 0.339 + ], + "angle": 0, + "content": "However, the common mechanism underlying different pretraining methods remains unclear, posing challenges for gaining insights into effective model training strategies." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.34, + 0.909, + 0.461 + ], + "angle": 0, + "content": "In this paper, we aim to explore the common mechanism behind the performance improvements of different pre-training methods, thereby providing insights into pretraining, and offering better guidance for the training process. Recent studies have employed interactions to explain the reasoning processes of DNNs [20, 25, 41]. Inspired by these studies, we use interactions to provide a unified interpretation of different pre-training methods." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.462, + 0.909, + 0.657 + ], + "angle": 0, + "content": "Specifically, given a point cloud \\( x \\) with \\( n \\) regions indexed by \\( N = \\{1,2,\\dots,n\\} \\), an interaction represents the collaborations among regions within a specific 3D structure \\( S \\subseteq N \\), where each interaction has a numerical effect \\( I(S) \\) on the network output. For example, as shown in Fig. 1, the interaction between the regions in \\( S_{1} = \\{\\text{wingtip}, \\text{wing root}\\} \\) form a concept of \"wing\", contributing \\( I(S_{1}) \\) to push the classification result toward the class \"airplane\". It has been proven by [6, 44] that the output score of a DNN consistently equals the sum of the effects of all activated interactions, regardless of how the input regions are masked. In this way, interactions can be seen as the detailed inference patterns encoded by the DNN." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.659, + 0.909, + 0.855 + ], + "angle": 0, + "content": "Based on interactions, we conduct comparative experiments to explore the common reasons behind the performance improvements of different pre-training methods. Specifically, we explore the impact of pre-training methods on the complexity of interactions encoded by DNNs. Here, the complexity refers to the number of regions contained in an interaction, i.e., the order of the interaction. A high-order interaction, e.g., \\( S_{3} \\) in Fig. 1, captures collaborations among massive point regions, representing complex and global 3D structures. In contrast, a low-order interaction, e.g., \\( S_{1} \\), measures collaborations between a few regions, representing simple and local 3D structures. From the experiments, we draw the following key conclusions." 
+ }, + { + "type": "text", + "bbox": [ + 0.513, + 0.859, + 0.908, + 0.875 + ], + "angle": 0, + "content": "- The common mechanism across different pre-training" + }, + { + "type": "page_footnote", + "bbox": [ + 0.109, + 0.887, + 0.388, + 0.9 + ], + "angle": 0, + "content": "*Corresponding authors: Wen Shen and Zhihua Wei." + }, + { + "type": "page_footnote", + "bbox": [ + 0.532, + 0.887, + 0.825, + 0.901 + ], + "angle": 0, + "content": "1We divide a point cloud into \\( n \\) regions following [26]." + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "27315" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.094, + 0.089, + 0.645, + 0.272 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.654, + 0.091, + 0.907, + 0.274 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.089, + 0.274, + 0.907, + 0.343 + ], + "angle": 0, + "content": "Figure 1. (a) Illustration of how interactions can be used to explain a DNN. Given an input point cloud with \\( n \\) regions, the output score of the DNN can be decomposed into the sum of the numerical effects of \\( 2^{n} \\) interactions, where each interaction \\( S \\) encodes the collaborations among the point cloud regions in the set \\( S \\). (b) Comparing the strength of interactions across different orders encoded by the DGCNN trained from scratch (scr) and the DGCNN using a pre-training method (pt). Results show that the pre-trained DGCNN encodes stronger high-order interactions and weaker low-order interactions than the DGCNN trained from scratch." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.362, + 0.483, + 0.452 + ], + "angle": 0, + "content": "methods is that they enhance the strength of high-order interactions encoded by DNNs while reducing the strength of low-order interactions. This common mechanism indicates that pre-training methods enhance the DNNs' ability to capture global 3D structures, while reducing their reliance on local 3D structures." + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.453, + 0.483, + 0.558 + ], + "angle": 0, + "content": "- Sufficient pre-training and adequate fine-tuning data for downstream tasks further reinforce the mechanism described above. We observe that the strength of high-order interactions increases with the number of pretraining epochs while the strength of low-order interactions decreases. Additionally, increasing the amount of data for downstream tasks also amplifies this effect." + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.559, + 0.483, + 0.649 + ], + "angle": 0, + "content": "- Pre-training methods carry a potential risk of reducing the transferability of features encoded by DNNs. We observe that the performance of the pre-trained DNNs may decrease on unseen test datasets, possibly due to pretraining methods causing the DNNs to encode high-order interactions with excessively high strength." + }, + { + "type": "list", + "bbox": [ + 0.091, + 0.453, + 0.483, + 0.649 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.651, + 0.483, + 0.772 + ], + "angle": 0, + "content": "Building on the common mechanism we identified, we propose a new method to directly enhance the strength of high-order interactions encoded by DNNs while reducing the strength of low-order interactions. 
Experimental results on classification and semantic segmentation benchmarks show that our method achieves performance comparable to pre-training methods, without the need for pre-training on large-scale unlabeled datasets." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.785, + 0.228, + 0.8 + ], + "angle": 0, + "content": "2. Related work" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.81, + 0.483, + 0.901 + ], + "angle": 0, + "content": "Self-supervised learning (SSL) of 3D point clouds. Recently, 3D point cloud processing has developed rapidly [5, 11, 14, 15, 24, 27, 35, 37, 40], with many self-supervised methods proposed to learn representations from individual 3D objects [1, 9, 13, 16, 21, 23, 30, 32, 38, 43]. The goal of SSL is to design pretext tasks to help the model learn the" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.363, + 0.907, + 0.424 + ], + "angle": 0, + "content": "data distribution and features in advance, preparing it for downstream tasks. In this paper, we explore the common mechanism behind the performance improvement of the following five widely used open-source pre-training methods." + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.428, + 0.905, + 0.473 + ], + "angle": 0, + "content": "- Occlusion Completion (OcCo) [32]. OcCo masks occluded points from a camera view and trains an encoder-decoder model to reconstruct these missing points." + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.474, + 0.905, + 0.503 + ], + "angle": 0, + "content": "- Jigsaw [21]. Jigsaw trains a model to reconstruct point clouds with parts rearranged in random order." + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.504, + 0.905, + 0.564 + ], + "angle": 0, + "content": "- Implicit Auto-encoder (IAE) [38]. IAE trains the model as an encoder to map the point clouds to a high-dimensional space and uses a decoder to reconstruct the encoder's outputs back into 3D geometry." + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.565, + 0.905, + 0.624 + ], + "angle": 0, + "content": "- Spatio-Temporal Representation Learning (STRL) [9]. STRL captures spatio-temporal information from 3D sequences by using two temporally correlated frames to learn invariant representations." + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.625, + 0.905, + 0.67 + ], + "angle": 0, + "content": "- CrossPoint [1]. CrossPoint learns transferable representations by maximizing the agreement between 3D point clouds and corresponding 2D images." + }, + { + "type": "list", + "bbox": [ + 0.514, + 0.428, + 0.905, + 0.67 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.675, + 0.907, + 0.901 + ], + "angle": 0, + "content": "Using game-theoretical interactions to explain DNNs. Game-theoretical interactions provide a solid theoretical foundation for explaining DNNs. Ren et al. [17] proposed a mathematical formulation for the concepts encoded by a DNN, while Ren et al. [18] further leveraged these concepts to define optimal baseline values for Shapley values. Li et al. [10] provided a theoretical guarantee that interactions accurately capture the true concepts encoded by a DNN. At the application level, interactions have been widely used to explain the representation capacity of DNNs from various perspectives, including adversarial robustness [17, 34], adversarial transferability [33], and generalization power [41, 45]. In this paper, we use interactions to investigate the common mechanism underlying different pre-training methods for 3D point clouds." 
+ }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.521, + 0.957 + ], + "angle": 0, + "content": "27316" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.094, + 0.089, + 0.483, + 0.162 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.093, + 0.163, + 0.48, + 0.177 + ], + "angle": 0, + "content": "Figure 2. Process of dividing an input point cloud into \\( n \\) regions." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.188, + 0.46, + 0.206 + ], + "angle": 0, + "content": "3. Interactions in 3D point cloud processing" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.213, + 0.484, + 0.363 + ], + "angle": 0, + "content": "Preliminaries: interactions. As a new explanatory metric, interaction has been used to clarify the inference logic [7], generalization power [34], and robustness of a DNN [45]. It can be viewed as a universal measure due to its close theoretical connections with other metrics. As proven by [19], the Harsanyi interaction serves as the basis for existing game-theoretic attributions and interactions, including the Shapley value [22], the Shapley interaction index [8], and the Shapley Taylor interaction index [28]. Please see the supplementary material for additional details." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.365, + 0.484, + 0.5 + ], + "angle": 0, + "content": "Quantifying interactions for 3D point cloud processing. We extend interactions to 3D point clouds. Considering an point cloud \\( x \\in \\mathbb{R}^{P \\times 3} \\), we divide it into \\( n \\) regions, as shown in Fig. 2. First, we apply the farthest point sampling (FPS) algorithm to select \\( n \\) points from the point cloud as the centers of each region. Then, we use the \\( k \\)-dimensional tree (KDTree) algorithm to assign the remaining points to their nearest region. By doing so, we divide the input point cloud \\( x \\) into \\( n \\) regions, indexed by \\( N = \\{1, 2, \\dots, n\\} \\)." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.5, + 0.484, + 0.59 + ], + "angle": 0, + "content": "Given a trained DNN \\( v: \\mathbb{R}^{P \\times 3} \\to \\mathbb{R} \\), we follow [10, 20, 25] to define the DNN's output score as \\( v(x) = \\log \\frac{p}{1 - p} \\) to represent the classification confidence, where \\( p \\) is the output probability of the ground truth class. Then, the output score can be rewritten as the sum of the numerical effects of all \\( 2^n \\) interactions between the point regions, as follows." + }, + { + "type": "equation", + "bbox": [ + 0.223, + 0.595, + 0.483, + 0.627 + ], + "angle": 0, + "content": "\\[\nv (x) = \\sum_ {S \\subseteq N} I (S). \\tag {1}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.633, + 0.483, + 0.663 + ], + "angle": 0, + "content": "Here, \\( I(S) \\) represents the numerical effect of the interaction among the point regions in \\( S \\subseteq N \\), defined as follows." + }, + { + "type": "equation", + "bbox": [ + 0.175, + 0.668, + 0.483, + 0.702 + ], + "angle": 0, + "content": "\\[\nI (S) \\triangleq \\sum_ {T \\subseteq S} (- 1) ^ {| S | - | T |} \\cdot v \\left(x _ {T}\\right), \\tag {2}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.705, + 0.483, + 0.75 + ], + "angle": 0, + "content": "where \\( x_{T} \\) represents the input point cloud with the regions in \\( T \\subseteq N \\) unchanged, while the regions in \\( N \\backslash T \\) are masked by replacing them with the centroid of the point cloud." 
+ }, + { + "type": "text", + "bbox": [ + 0.09, + 0.751, + 0.484, + 0.902 + ], + "angle": 0, + "content": "Understanding interactions in 3D point cloud processing. The interaction extracted from the input point cloud \\( x \\) encodes an AND relationship among the point regions in \\( S \\), with the numerical effect \\( I(S) \\) representing the combined contribution of these regions to the output score \\( v(x) \\). As shown in Fig. 1, when the point regions in the set \\( S_{1} = \\{wingtip, wing root\\} \\) are unmasked, they form a \"wing\" pattern and contribute a numerical effect \\( I(S_{1}) \\) that pushes the output score \\( v(x) \\) towards the \"airplane\" category. Masking any region in \\( S_{1} \\) will deactivate this AND" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.092, + 0.906, + 0.182 + ], + "angle": 0, + "content": "interaction and remove \\( I(S_{1}) \\) from \\( v(x) \\). In fact, Tang et al. [29] has proven that interaction satisfies the universal matching property, which states that the DNN's inference score \\( v(x_{T}) \\) can always be faithfully explained as the sum of the numerical effects of all activated interactions, regardless of how the point cloud regions are masked." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.183, + 0.907, + 0.319 + ], + "angle": 0, + "content": "Theorem 1 (Universal matching property, proven by [29]). Given an input sample \\( x \\) with \\( n \\) variables indexed by \\( N = \\{1,2,\\dots,n\\} \\), we can generate \\( 2^n \\) masked samples \\( x_{T} \\) where \\( T \\subseteq N \\). Let us construct the following surrogate logical model \\( \\phi(\\cdot) \\) to use interactions for inference, which are extracted from the DNN \\( v(\\cdot) \\) on the sample \\( x \\). Then, the output of the surrogate logical model \\( \\phi(\\cdot) \\) can always match the output of the DNN \\( v(\\cdot) \\), regardless of how the input sample is masked." + }, + { + "type": "equation", + "bbox": [ + 0.528, + 0.324, + 0.905, + 0.404 + ], + "angle": 0, + "content": "\\[\n\\begin{array}{l} \\forall T \\subseteq N, \\phi (x _ {T}) = v (x _ {T}), \\\\ \\phi \\left(x _ {T}\\right) = v \\left(x _ {\\emptyset}\\right) + \\sum_ {S \\subseteq N} I (S) \\cdot \\mathbb {1} \\binom {x _ {T} \\text {t r i g g e r s}} {\\text {A N D r e l a t i o n} S} \\tag {3} \\\\ = v (x _ {\\emptyset}) + \\sum_ {\\emptyset \\neq S \\subseteq T} I (S). \\\\ \\end{array}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.41, + 0.906, + 0.562 + ], + "angle": 0, + "content": "Defining and quantifying the representation complexity of DNNs. The order of an interaction is defined as \\( m = |S| \\), which reflects the representation complexity of DNNs. High-order interactions measure the effects of collaborations among massive point cloud regions, representing global and complex 3D structures, while low-order interactions measure the effects of collaborations between a few point regions, representing simple and local 3D structures. We introduce a new metric for measuring the representation complexity of DNNs, as follows." 
+ }, + { + "type": "equation", + "bbox": [ + 0.598, + 0.568, + 0.905, + 0.601 + ], + "angle": 0, + "content": "\\[\n\\kappa^ {(m)} \\triangleq \\frac {\\mathbb {E} _ {x} \\mathbb {E} _ {S \\subseteq N , | S | = m} [ | I (S) | ]}{Z}, \\tag {4}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.606, + 0.906, + 0.742 + ], + "angle": 0, + "content": "where \\(\\mathbb{E}\\) denotes the mathematical expectation, and \\(Z = \\mathbb{E}_x\\mathbb{E}_{S\\subseteq N}[|I(S)|]\\) is a normalization term to ensure fair comparisons across different DNNs. Here, \\(\\kappa^{(m)}\\) measures the normalized average strength of the \\(m\\)-th order interactions. If the value of \\(\\kappa^{(m)}\\) of a high-order is larger than that of a low-order, the DNN's representation complexity is enough to capture global and complex 3D structures. Otherwise, the DNN's representation complexity is limited to encoding only local and simple 3D structures." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.743, + 0.905, + 0.788 + ], + "angle": 0, + "content": "We further propose the following metrics to measure the strength of high-order interactions and low-order interactions encoded by the DNN." + }, + { + "type": "equation", + "bbox": [ + 0.521, + 0.792, + 0.905, + 0.861 + ], + "angle": 0, + "content": "\\[\n\\begin{array}{l} \\kappa^ {\\text {h i g h}} = \\sum_ {m \\in \\Omega^ {\\text {h i g h}}} \\kappa^ {(m)}, s. t. \\Omega^ {\\text {h i g h}} \\stackrel {{\\text {d e f}}} {{=}} \\left\\{m \\mid \\lceil \\frac {2}{3} n \\rceil < m \\leq n \\right\\}, \\\\ \\kappa^ {\\text {l o w}} = \\sum_ {m \\in \\Omega^ {\\text {l o w}}} \\kappa^ {(m)}, s. t. \\Omega^ {\\text {l o w}} \\stackrel {{\\text {d e f}}} {{=}} \\left\\{m \\mid 1 \\leq m \\leq \\lceil \\frac {1}{3} n \\rceil \\right\\}, \\end{array} \\tag {5}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.87, + 0.907, + 0.902 + ], + "angle": 0, + "content": "where \\(\\Omega^{\\mathrm{high}}\\) and \\(\\Omega^{\\mathrm{low}}\\) denote the ranges of high-order and low-order interactions, respectively." 
+ }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "27317" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.094, + 0.089, + 0.332, + 0.19 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.333, + 0.089, + 0.518, + 0.19 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.52, + 0.09, + 0.707, + 0.19 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.094, + 0.19, + 0.332, + 0.275 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.333, + 0.192, + 0.518, + 0.275 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.52, + 0.192, + 0.707, + 0.275 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.095, + 0.275, + 0.332, + 0.373 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.333, + 0.277, + 0.518, + 0.373 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.521, + 0.277, + 0.707, + 0.373 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.715, + 0.09, + 0.905, + 0.193 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.715, + 0.193, + 0.905, + 0.277 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.715, + 0.277, + 0.905, + 0.373 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.38, + 0.907, + 0.451 + ], + "angle": 0, + "content": "Figure 3. [Conclusion 1] (a) Comparing the normalized average strength of interactions encoded by different DNNs, including DNNs trained from scratch and DNNs trained with different pre-training methods. Results show that the DNNs using pre-training methods consistently encode stronger high-order interactions and weaker low-order interactions than the DNNs trained from scratch. (b) The relationship between the strength of high-order interactions encoded by different DNNs and their corresponding classification accuracy. Results show that DNNs encoding stronger high-order interactions tend to exhibit higher accuracy." + }, + { + "type": "title", + "bbox": [ + 0.09, + 0.466, + 0.483, + 0.501 + ], + "angle": 0, + "content": "4. Interpreting different pre-training methods using interactions" + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.509, + 0.32, + 0.526 + ], + "angle": 0, + "content": "4.1. Comparative study setup" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.531, + 0.484, + 0.697 + ], + "angle": 0, + "content": "For a given network architecture, we compare the interactions encoded by the model trained from scratch with those encoded by models trained using various pre-training methods. This comparison aims to explore whether these pretraining methods share a common underlying reason for performance improvement, which we define as the common mechanism across these methods. To provide a unified explanation for most pre-training methods, we conduct experiments on five widely used open-source pre-training methods, including IAE [38], STRL [9], CrossPoint [1], OcCo [32] and JigSaw [21], as detailed in Sec. 2." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.698, + 0.484, + 0.789 + ], + "angle": 0, + "content": "Networks and datasets. We conduct experiments on three network architectures: DGCNN [35], PointNet [14], and PCN [40]. 
For DGCNN, we utilize all five pre-training methods, while for PointNet and PCN, we focus on OcCo [32] and Jigsaw [21], depending on the accessibility of open-source implementations for each pre-training method." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.789, + 0.484, + 0.865 + ], + "angle": 0, + "content": "To compare the interactions encoded by different DNNs, we use three benchmark datasets for 3D classification task: ModelNet40 [36], ShapeNet² [3], and ScanObjectNN [31]. Tab. 1 shows the statistics of these datasets. We randomly select 10 samples per class from each dataset and use the" + }, + { + "type": "table", + "bbox": [ + 0.518, + 0.465, + 0.906, + 0.538 + ], + "angle": 0, + "content": "
NameType# Class# Training / Testing
ModelNetsynthesized409,843 / 2,468
ShapeNetsynthesized1612,137 / 4,744
ScanObjectNNreal world152,304 / 576
" + }, + { + "type": "table_caption", + "bbox": [ + 0.569, + 0.543, + 0.85, + 0.556 + ], + "angle": 0, + "content": "Table 1. Statistics of datasets for classification." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.569, + 0.906, + 0.599 + ], + "angle": 0, + "content": "method in Sec. 3 to divide each point cloud sample into \\( n \\) regions for quantifying the interactions encoded by DNNs." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.608, + 0.906, + 0.64 + ], + "angle": 0, + "content": "4.2. Exploring the common mechanism of different pre-training methods" + }, + { + "type": "text", + "bbox": [ + 0.528, + 0.657, + 0.892, + 0.719 + ], + "angle": 0, + "content": "Conclusion 1. The common mechanism across different pre-training methods is that they enhance the strength of high-order interactions encoded by DNNs, while reducing the strength of low-order interactions." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.735, + 0.907, + 0.901 + ], + "angle": 0, + "content": "Fig. 3 (a) shows the normalized average strength of the interactions encoded by different DNNs, including DNNs trained from scratch and DNNs using different pre-training methods. Results show that the strength of high-order interactions encoded by the DNNs using pre-training methods is consistently greater than that of the DNNs trained from scratch, across all datasets and network architectures. Conversely, the DNNs using pre-training methods typically encode weaker low-order interactions than the DNNs trained from scratch. Fig. 3 (b) further illustrates the relationship between the strength of high-order interactions and the clas" + }, + { + "type": "page_footnote", + "bbox": [ + 0.091, + 0.875, + 0.483, + 0.901 + ], + "angle": 0, + "content": "2The ShapeNet dataset for classification is derived from the ShapeNet part segmentation dataset, following [26]." + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "27318" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.094, + 0.09, + 0.316, + 0.291 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.322, + 0.09, + 0.482, + 0.291 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.294, + 0.484, + 0.364 + ], + "angle": 0, + "content": "Figure 4. Visualization of interactions encoded by the DGCNN trained from scratch (scr) and the DGCNN pre-trained (pt) with IAE. The pre-trained DGCNN typically encodes stronger high-order interactions and weaker low-order interactions compared to the DGCNN trained from scratch." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.382, + 0.483, + 0.518 + ], + "angle": 0, + "content": "sification accuracy across different DNNs. We observe that DNNs encoding stronger high-order interactions tend to exhibit higher accuracy. Thus, we regard this shared phenomenon as the common mechanism behind the performance improvement of different pre-training methods, i.e., different pre-training methods generally enhance the strength of high-order interactions encoded by DNNs, while reducing the strength of low-order interactions, as summarized in Conclusion 1." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.521, + 0.483, + 0.869 + ], + "angle": 0, + "content": "Conclusion 1 reveals that pre-training methods enhance the ability of DNNs to encode complex and global 3D structures, while reducing their reliance on simple and local 3D structures. 
As simple and local 3D structures (e.g., a curve, a corner) can appear across different categories, they often lack sufficient classification information, so an over-reliance on them may lead to incorrect classifications. For example, as shown in Fig. 4, the DNN trained from scratch incorrectly classifies a \"plant\" sample as a \"stool\". This misclassification may occur because the local structures the DNN learns for the plant, such as the \"stem\" and the \"leaf\", are similar to some local structures of a stool, such as the \"legs\". However, the DNN still encodes a high strength for these local structures (i.e., low-order interactions), which results in an incorrect classification. In contrast, pre-training methods improve the modeling of complex and global 3D structures, allowing DNNs to get a more comprehensive understanding of the input, which in turn enhances their performance. Thus, beyond traditional accuracy metrics, interactions can help identify the potential reasons for classification errors by revealing which 3D structures modeled by the DNN have inappropriate weights, offering a new perspective for debugging." + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.871, + 0.483, + 0.901 + ], + "angle": 0, + "content": "Comparison with transformer-based pre-training methods. We also measure interactions encoded by transformer-" + }, + { + "type": "image", + "bbox": [ + 0.517, + 0.09, + 0.905, + 0.196 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.512, + 0.198, + 0.907, + 0.31 + ], + "angle": 0, + "content": "Figure 5. Comparing the normalized average strength of interactions encoded by (1) transformer-based models, (2) traditional DNNs (e.g., DGCNN and PointNet) trained from scratch, and (3) traditional DNNs using pre-training methods (e.g., DGCNN with IAE, and PointNet with OcCo). Results show that transformer-based models also encode stronger high-order interactions and weaker low-order interactions, exhibiting a similar pattern to traditional DNNs using pre-training methods." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.324, + 0.907, + 0.536 + ], + "angle": 0, + "content": "based models, including PointBERT [39], PointMAE [12], PointM2AE [42], and PointGPT [4]. These models integrate pre-training methods into the model architecture, making them incompatible with traditional DNNs (e.g., DGCNN). Therefore, we directly compare the interactions encoded by transformer-based models with the interactions encoded by traditional DNNs, including the DNNs trained from scratch and the DNNs trained with pre-training methods. As shown in Fig. 5, transformer-based models also encode stronger high-order interactions and weaker low-order interactions than traditional DNNs trained from scratch, which exhibit a similar pattern to the interactions encoded by traditional DNNs using pre-training methods. This further supports Conclusion 1." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.544, + 0.905, + 0.575 + ], + "angle": 0, + "content": "4.3. Exploring the impact of different factors on the common mechanism" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.582, + 0.906, + 0.628 + ], + "angle": 0, + "content": "We further explore two factors that impact the common mechanism: (a) the extent of pre-training, and (b) the amount of fine-tuning data used for downstream tasks." + }, + { + "type": "text", + "bbox": [ + 0.528, + 0.642, + 0.892, + 0.704 + ], + "angle": 0, + "content": "Conclusion 2(a). 
The pre-training process progressively enhances the strength of high-order interactions encoded while weakening the strength of low-order interactions as the extent of pre-training increases." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.72, + 0.907, + 0.901 + ], + "angle": 0, + "content": "In this subsection, we first investigate the relationship between the extent of pre-training and the strength of interactions encoded by DNNs. Here, the extent of pre-training refers to the number of pre-training epochs, i.e., the range of epochs from the start of pre-training to the epoch at which pre-training converges. To this end, we conduct experiments on DGCNN with two pre-training methods, including IAE and CrossPoint. For each pre-training method, let \\( T_{\\mathrm{max}} \\) denote the total number of epochs at which the pre-training process of the DNN converges. We select the DNNs at training epochs \\( 0, 0.2T_{\\mathrm{max}}, 0.4T_{\\mathrm{max}}, \\ldots, T_{\\mathrm{max}} \\), covering six different stages of the pre-training process. Then, for all" + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.521, + 0.958 + ], + "angle": 0, + "content": "27319" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.093, + 0.089, + 0.482, + 0.206 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.21, + 0.483, + 0.295 + ], + "angle": 0, + "content": "Figure 6. [Conclusion 2(a)] Comparing the normalized average strength of interactions encoded by DGCNNs pre-trained for different extents, ranging from initial pre-training (0%) to full convergence (100%). As the extent of pre-training increases, the strength of high-order interactions encoded by the DNNs typically rises, while the strength of low-order interactions generally decreases." + }, + { + "type": "image", + "bbox": [ + 0.093, + 0.306, + 0.482, + 0.423 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.426, + 0.484, + 0.51 + ], + "angle": 0, + "content": "Figure 7. [Conclusion 2(b)] Comparing the normalized average strength of interactions encoded by DNNs fine-tuned with varying amounts of data. Results show that as the amount of fine-tuning data increases from \\(1\\%\\) to \\(100\\%\\), the strength of high-order interactions encoded by the DNNs generally increases, while the strength of low-order interactions generally decreases." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.524, + 0.483, + 0.569 + ], + "angle": 0, + "content": "DNNs at different pre-training extents, we fine-tune them on the same downstream task and quantify the interactions encoded by these fine-tuned DNNs." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.57, + 0.483, + 0.721 + ], + "angle": 0, + "content": "Fig. 6 presents the experimental results. We observe that as the extent of pre-training increases, the strength of high-order interactions encoded by the DNNs generally increases, while the strength of low-order interactions typically decreases. We summarize this relationship between the extent of pre-training and the interactions encoded by DNNs in Conclusion 2(a). This conclusion suggests that sufficient pre-training enhances the model's ability to capture complex and global 3D contexts, further validating the common mechanism outlined in Conclusion 1." + }, + { + "type": "text", + "bbox": [ + 0.105, + 0.734, + 0.47, + 0.795 + ], + "angle": 0, + "content": "Conclusion 2(b). 
Increasing the amount of fine-tuning data further enhances the strength of high-order interactions encoded by DNNs, while weakening the strength of low-order interactions." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.811, + 0.484, + 0.903 + ], + "angle": 0, + "content": "To investigate the relationship between the amount of fine-tuning data for downstream tasks and the interactions encoded by DNNs, we construct seven training sets of varying sizes from the ModelNet40 dataset, containing \\(1\\%\\), \\(10\\%\\), \\(20\\%\\), \\(30\\%\\), \\(50\\%\\), \\(70\\%\\), and \\(100\\%\\) of the original ModelNet40 training data, respectively. Note that we ensure" + }, + { + "type": "image", + "bbox": [ + 0.517, + 0.091, + 0.905, + 0.216 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.513, + 0.217, + 0.907, + 0.3 + ], + "angle": 0, + "content": "Figure 8. Comparing the classification accuracy and the strength of high-order interactions encoded by different DNNs fine-tuned with varying amounts of data. As the amount of data increases, the accuracy gap between the DNN trained from scratch and the DNN pre-trained with IAE narrows, while the gap in the strength of high-order interactions encoded by these DNNs widens." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.314, + 0.907, + 0.466 + ], + "angle": 0, + "content": "at least one sample from each class is included, allowing the model to learn from all categories. We then use the different-sized training sets to fine-tune DGCNNs, including those pre-trained using the IAE method and the Cross-Point method. As shown in Fig. 7, as the amount of fine-tuning data increases, the strength of high-order interactions encoded by DNNs gradually increases, while the strength of low-order interactions decreases. We summarize this relationship between the amount of fine-tuning data and the interactions encoded by DNNs in Conclusion 2(b)." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.474, + 0.907, + 0.506 + ], + "angle": 0, + "content": "4.4. Exploring the potential risk of pre-training methods in reducing DNN's transferability" + }, + { + "type": "text", + "bbox": [ + 0.528, + 0.522, + 0.892, + 0.569 + ], + "angle": 0, + "content": "Conclusion 3. Pre-training methods carry a potential risk of reducing the transferability of features encoded by DNNs." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.584, + 0.907, + 0.75 + ], + "angle": 0, + "content": "When exploring the relationship between the amount of fine-tuning data and the interactions encoded by DNNs, we observe the following anomalous phenomenon. As shown in Fig. 8, the gap in classification accuracy between the pretrained DNN and the DNN trained from scratch becomes marginal as the fine-tuning data increases. For example, when the fine-tuning data reaches \\(100\\%\\), the accuracy gap is only \\(0.2\\%\\). However, the gap in the strength of high-order interactions between the two DNNs gradually increases, indicating that high-order interactions with excessively high strength are not necessary for performance improvement." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.75, + 0.909, + 0.902 + ], + "angle": 0, + "content": "Since high-order interactions generally carry a greater risk of overfitting [41], we investigate the potential risk of pre-training methods in reducing the transferability of features encoded by DNNs. Here, the transferability of features refers to the generalization ability of the features. 
For example, if the features learned from one dataset (e.g., the features of the airplane class in ModelNet) can be applied to another unseen dataset (e.g., identifying the airplane class in ShapeNet), we consider these features to have high transferability. To this end, we use ShapeNet as the unseen" + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.521, + 0.958 + ], + "angle": 0, + "content": "27320" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.092, + 0.089, + 0.482, + 0.196 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.197, + 0.484, + 0.295 + ], + "angle": 0, + "content": "Figure 9. [Conclusion 3] Comparing the zero-shot classification accuracy of DNNs with and without pre-training, followed by fine-tuning with varying amounts of data. Results show that the zero-shot accuracy of the pre-trained DNN initially exceeds that of the DNN trained from scratch when the fine-tuning data is limited \\((e.g., 1\\%)\\), but falls below that of the DNN trained from scratch as the fine-tuning data becomes sufficient \\((e.g., 100\\%)\\)." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.31, + 0.484, + 0.416 + ], + "angle": 0, + "content": "dataset and compare the classification accuracy of different DNNs, including DGCNNs trained with varying amounts of data from ModelNet, as well as DGCNNs pre-trained and then fine-tuned with varying amounts of data. Since the category labels in the two datasets do not completely align, we identify eight common categories. Please see the supplementary material for more implementation details." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.417, + 0.484, + 0.66 + ], + "angle": 0, + "content": "Fig. 9 shows the results. We find that when the amount of fine-tuning data is limited (e.g., \\(1\\%\\)), pre-trained DNNs, such as the DNN pre-trained with CrossPoint, achieve higher zero-shot accuracy \\((+8.9\\%)\\) compared to the DNN trained from scratch. In contrast, when the fine-tuning data is sufficient (e.g., \\(100\\%\\)), the accuracy of the DNN pretrained with CrossPoint significantly lags behind that of the DNN trained from scratch \\((-14.7\\%)\\). We attribute this to pre-training methods causing DNNs to encode high-order interactions with excessively high strength, which in turn reduces the transferability of the features encoded by the DNNs. Note that we are not criticizing the use of pretraining methods to enhance the strength of high-order interactions encoded by DNNs as inherently negative. Rather, we are proposing this potential risk and offering new insights for the design of pre-training methods." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.676, + 0.483, + 0.71 + ], + "angle": 0, + "content": "5. Guiding the training process using the common mechanism" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.72, + 0.484, + 0.902 + ], + "angle": 0, + "content": "Traditional pre-training methods, while improving performance, inevitably require extensive pre-training on large-scale unlabeled datasets, which demands considerable time and computational resources. As discussed above, we find that the common mechanism underlying different pre-training methods is that they universally enhance the strength of high-order interactions encoded by DNNs while reducing the strength of low-order interactions. 
Building on this insight, we propose a new method that directly enhances the strength of high-order interactions encoded by DNNs while reducing the strength of low-order interactions. In this way, our method achieves performance" + }, + { + "type": "image", + "bbox": [ + 0.517, + 0.088, + 0.905, + 0.203 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.513, + 0.204, + 0.905, + 0.26 + ], + "angle": 0, + "content": "Figure 10. (a) Curves showing the values of the proposed loss term \\(\\mathcal{L}_{\\text{interaction}}\\) for different values of \\(\\alpha\\) throughout the training process. (b) Comparison of the normalized average strength of interactions encoded by DNNs for various \\(\\alpha\\) values in the loss term." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.274, + 0.907, + 0.334 + ], + "angle": 0, + "content": "comparable to traditional pre-training methods while avoiding the need for pre-training on large-scale unlabeled datasets. Specifically, we introduce a new heuristic loss term defined as follows." + }, + { + "type": "equation", + "bbox": [ + 0.524, + 0.343, + 0.907, + 0.361 + ], + "angle": 0, + "content": "\\[\n\\mathcal {L} _ {\\text {i n t e r a c t i o n}} = \\mathbb {E} _ {x} \\left[ \\mathbb {E} _ {| S | \\in \\Omega^ {\\mathrm {l o w}}} [ | I (S) | ] - \\mathbb {E} _ {| S | \\in \\Omega^ {\\mathrm {h i g h}}} [ | I (S) | ] \\right], \\tag {6}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.369, + 0.906, + 0.461 + ], + "angle": 0, + "content": "where \\(\\Omega^{\\mathrm{high}}\\) and \\(\\Omega^{\\mathrm{low}}\\) define the ranges of high-order and low-order interactions, as detailed in Sec. 3. Minimizing the loss term \\(\\mathcal{L}_{\\mathrm{interaction}}\\) forces the DNN to weaken the strength of low-order interactions, i.e., decreasing \\(\\mathbb{E}_x\\mathbb{E}_{|S|\\in \\Omega^{\\mathrm{low}}}[|I(S)|]\\), while enhancing the strength of high-order interactions, i.e., increasing \\(\\mathbb{E}_x\\mathbb{E}_{|S|\\in \\Omega^{\\mathrm{high}}}[|I(S)|]\\)." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.461, + 0.906, + 0.611 + ], + "angle": 0, + "content": "However, computing Eq. (6) is NP-hard. To overcome this challenge, we approximate \\(\\mathcal{L}_{\\mathrm{interaction}}\\) using a sampling-based approach. Specifically, given a point cloud \\(x\\) with \\(n\\) regions indexed by \\(N = \\{1,2,\\dots,n\\}\\), we sample three disjoint subsets \\(S_{1}, S_{2}, S_{3} \\subseteq N\\) where the orders of the subsets \\(|S_1|, |S_2|, |S_3| \\in \\Omega^{\\mathrm{low}}\\), with each subset representing a low-order interaction encoded by the DNN. We consider the union \\(S_{\\mathrm{union}} = S_{1} \\cup S_{2} \\cup S_{3}\\) as a relatively high-order interaction. Then, we can approximate the interaction loss \\(\\mathcal{L}_{\\mathrm{interaction}}\\) as follows." + }, + { + "type": "equation", + "bbox": [ + 0.518, + 0.62, + 0.905, + 0.649 + ], + "angle": 0, + "content": "\\[\n\\mathcal {L} _ {\\text {i n t e r a c t i o n}} ^ {\\prime} = \\mathbb {E} _ {S _ {1}, S _ {2}, S _ {3} \\subseteq N} \\left[ \\mathbb {E} _ {i \\in \\{1, 2, 3 \\}} [ | I (S _ {i}) | ] - | I (S _ {\\text {u n i o n}}) | \\right]. 
\\tag {7}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.65, + 0.906, + 0.71 + ], + "angle": 0, + "content": "Given a traditional DNN, we incorporate the interaction loss into the training process using the following loss function for the classification task, without the need for additional pre-training on large-scale unlabeled datasets." + }, + { + "type": "equation", + "bbox": [ + 0.607, + 0.722, + 0.905, + 0.739 + ], + "angle": 0, + "content": "\\[\n\\mathcal {L} = \\mathcal {L} _ {\\text {c l a s s i f i c a t i o n}} + \\alpha \\mathcal {L} _ {\\text {i n t e r a c t i o n}}, \\tag {8}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.75, + 0.906, + 0.871 + ], + "angle": 0, + "content": "where \\(\\mathcal{L}_{\\mathrm{classification}}\\) denotes the standard classification loss function (e.g., cross-entropy loss), and \\(\\alpha > 0\\) is the hyperparameter controlling the strength of the interaction loss. Please see Tab. 4 for the effects of varying \\(\\alpha\\). As shown in Fig. 10 (b), the strength of high-order interactions encoded by the DNN with \\(\\alpha > 0\\) is generally higher than the result when \\(\\alpha = 0\\), but it does not increase indefinitely as \\(\\alpha\\) grows. This shows the effectiveness of our interaction loss." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.871, + 0.905, + 0.901 + ], + "angle": 0, + "content": "Experiments and results analysis. To evaluate the effectiveness of the proposed loss term, we conduct experi-" + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.518, + 0.957 + ], + "angle": 0, + "content": "27321" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.093, + 0.089, + 0.483, + 0.33 + ], + "angle": 0, + "content": "
MethodModelNet40ScanObjectNNNo Pre-train
PointNet89.268.0
PointNet + JigSaw89.6-
PointNet + OcCo90.1-
PointNet + Linteraction (Ours)90.169.0
DGCNN92.578.1
DGCNN + JigSaw92.683.5
DGCNN + OcCo93.084.3
DGCNN + STRL93.1-
DGCNN + CrossPoint92.8-
DGCNN + IAE94.285.6
DGCNN + Linteraction (Ours)93.379.4
CurveNet92.879.2
CurveNet + Linteraction (Ours)93.182.0
GDANet92.378.7
GDANet + Linteraction (Ours)92.880.0
" + }, + { + "type": "table_footnote", + "bbox": [ + 0.09, + 0.336, + 0.483, + 0.406 + ], + "angle": 0, + "content": "Table 2. Classification accuracy (\\%) on ModelNet40 and ScanObjectNN datasets. The best results are shown in bold and the second-best results are underlined. Our method achieves results comparable to pre-training methods, while not requiring pretraining on large-scale datasets." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.418, + 0.483, + 0.629 + ], + "angle": 0, + "content": "ments on 3D point cloud classification and semantic segmentation tasks. For the classification task, we use the ModelNet40 and ScanObjectNN datasets, as described in Sec. 4.1. Specifically, for the ScanObjectNN dataset, we conduct experiments using the PB_T50_RS variant, which is the most challenging variant. We train PointNet and DGCNN using the proposed loss term and set \\(\\alpha\\) to 0.0005. As shown in Tab. 2, our proposed loss term consistently improves the performance of PointNet, DGCNN, CurveNet [11], and GDANet [37] on both the ModelNet40 and the ScanObjectNN testing splits, compared to their original versions. Moreover, our method demonstrates performance comparable to pre-training methods, without the need for pre-training on large-scale datasets." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.63, + 0.483, + 0.885 + ], + "angle": 0, + "content": "For the semantic segmentation task, we conduct experiments on the Stanford Large-Scale 3D Indoor Spaces (S3DIS) dataset [2]. The S3DIS consists of 3D point clouds collected from six distinct large-scale indoor environments, with each point cloud annotated with per-point categorical labels. We randomly subsample 4,096 points from the original point cloud and apply 6-fold cross-validation during fine-tuning. Since the proposed interaction loss is specifically designed for 3D classification, it cannot be directly applied to segmentation tasks. Instead, we adopt a two-stage training approach: first, we train a DNN on the classification task with our interaction loss, and then fine-tune the model on the semantic segmentation task. As shown in Tab. 3, the DGCNN using the proposed loss term achieves \\(86.8\\%\\) overall accuracy and \\(59.0\\%\\) mIoU, outperforming the majority of pre-training methods. Additionally, our loss term also improves the performance of the PointNet." + }, + { + "type": "text", + "bbox": [ + 0.11, + 0.886, + 0.483, + 0.901 + ], + "angle": 0, + "content": "Effects of the hyper-parameter \\(\\alpha\\). We train the" + }, + { + "type": "table", + "bbox": [ + 0.518, + 0.089, + 0.906, + 0.235 + ], + "angle": 0, + "content": "
MethodS3DIS 6-Fold
OAmIoU
PointNet78.547.6
PointNet + Linteraction (Ours)82.150.8
DGCNN84.156.1
DGCNN + JigSaw84.456.6
DGCNN + OcCo85.158.5
DGCNN + STRL84.257.1
DGCNN + IAE85.960.7
DGCNN + Linteraction (Ours)86.859.0
" + }, + { + "type": "table_footnote", + "bbox": [ + 0.513, + 0.24, + 0.907, + 0.282 + ], + "angle": 0, + "content": "Table 3. Semantic segmentation on S3DIS. We report Overall Accuracy (OA) and mean Intersection over Union (mIoU) across six folds. Our method surpasses most pre-training methods." + }, + { + "type": "table", + "bbox": [ + 0.565, + 0.299, + 0.856, + 0.376 + ], + "angle": 0, + "content": "
αModelNet40ScanObjectNN
0.092.578.1
0.000193.079.0
0.000593.379.4
0.00191.378.1
" + }, + { + "type": "table_footnote", + "bbox": [ + 0.513, + 0.38, + 0.907, + 0.409 + ], + "angle": 0, + "content": "Table 4. Classification accuracy (\\%) for DGCNNs trained with varying hyper-parameters \\(\\alpha\\) for the interaction loss." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.424, + 0.907, + 0.605 + ], + "angle": 0, + "content": "DGCNN with various interaction loss weights \\(\\alpha\\) and evaluate the testing accuracy, as shown in Tab. 4. The accuracy initially increases and then decreases as \\(\\alpha\\) rises. We attribute this to the loss term enhancing the strength of high-order interactions encoded by the DNN. At lower \\(\\alpha\\) values, the interaction loss improves the DNN's modeling of global 3D structures. However, excessively high values of \\(\\alpha\\) lead to excessively high strength of high-order interactions, increasing the risk of overfitting, as discussed in Conclusion 3. With an appropriately chosen \\(\\alpha\\), the interaction loss effectively enhances the training process, further supporting the common mechanism outlined in Conclusion 1." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.63, + 0.634, + 0.646 + ], + "angle": 0, + "content": "6. Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.66, + 0.907, + 0.901 + ], + "angle": 0, + "content": "In this paper, we use interactions to investigate the common mechanism underlying the effectiveness of different pretraining methods for 3D point clouds. Specifically, these methods generally enhance the strength of high-order interactions encoded by DNNs, while reducing the strength of low-order interactions. We then explore the impact of various factors on the mechanism and find that sufficient pretraining and adequate fine-tuning data further reinforce this mechanism. Additionally, we identify a potential risk that pre-training may reduce the transferability of DNNs. Based on the common mechanism, we propose a new method that directly enhances the strength of high-order interactions encoded by DNNs while weakening the strength of low-order interactions. Experiments show that our method achieves performance comparable to pre-training methods, without the need for pre-training on large-scale datasets." + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.521, + 0.958 + ], + "angle": 0, + "content": "27322" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.092, + 0.091, + 0.251, + 0.108 + ], + "angle": 0, + "content": "Acknowledgments" + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.115, + 0.484, + 0.147 + ], + "angle": 0, + "content": "This work is partially supported by the National Nature Science Foundation of China (No. 62206170, 62376199)." + }, + { + "type": "title", + "bbox": [ + 0.093, + 0.159, + 0.188, + 0.175 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.184, + 0.484, + 0.268 + ], + "angle": 0, + "content": "[1] Mohamed Afham, Isuru Dissanayake, Dinithi Dissanayake, Amaya Dharmasiri, Kanchana Thilakarathna, and Ranga Rodrigo. Crosspoint: Self-supervised cross-modal contrastive learning for 3d point cloud understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9902-9912, 2022. 1, 2, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.1, + 0.269, + 0.484, + 0.337 + ], + "angle": 0, + "content": "[2] Iro Armeni, Ozan Sener, Amir R Zamir, Helen Jiang, Ioannis Brilakis, Martin Fischer, and Silvio Savarese. 
3d semantic parsing of large-scale indoor spaces. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1534-1543, 2016. 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.339, + 0.483, + 0.407 + ], + "angle": 0, + "content": "[3] Angel X Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, et al. Shapenet: An information-rich 3d model repository. arXiv preprint arXiv:1512.03012, 2015. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.409, + 0.483, + 0.464 + ], + "angle": 0, + "content": "[4] Guangyan Chen, Meiling Wang, Yi Yang, Kai Yu, Li Yuan, and Yufeng Yue. Pointgpt: Auto-regressively generative pretraining from point clouds. Advances in Neural Information Processing Systems, 36, 2024. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.465, + 0.484, + 0.535 + ], + "angle": 0, + "content": "[5] Jiajing Chen, Burak Kakillioglu, Huantao Ren, and Senem Velipasalar. Why discard if you can recycle?: A recycling max pooling module for 3d point cloud analysis. In Proceedings of the IEEE/CVF conference on Computer Vision and Pattern Recognition, pages 559-567, 2022. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.536, + 0.482, + 0.577 + ], + "angle": 0, + "content": "[6] Lu Chen, Siyu Lou, Benhao Huang, and Quanshi Zhang. Defining and extracting generalizable interaction primitives from dnns. arXiv preprint arXiv:2401.16318, 2024. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.578, + 0.482, + 0.619 + ], + "angle": 0, + "content": "[7] Xu Cheng, Chuntung Chu, Yi Zheng, Jie Ren, and Quanshi Zhang. A game-theoretic taxonomy of visual concepts in dnns. arXiv preprint arXiv:2106.10938, 2021. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.621, + 0.482, + 0.675 + ], + "angle": 0, + "content": "[8] Michel Grabisch and Marc Roubens. An axiomatic approach to the concept of interaction among players in cooperative games. International Journal of game theory, 28:547-565, 1999. 3, 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.677, + 0.482, + 0.745 + ], + "angle": 0, + "content": "[9] Siyuan Huang, Yichen Xie, Song-Chun Zhu, and Yixin Zhu. Spatio-temporal self-supervised representation learning for 3d point clouds. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6535-6545, 2021. 1, 2, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.747, + 0.483, + 0.788 + ], + "angle": 0, + "content": "[10] Mingjie Li and Quanshi Zhang. Does a neural network really encode symbolic concepts? In International conference on machine learning, pages 20452-20469. PMLR, 2023. 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.79, + 0.483, + 0.845 + ], + "angle": 0, + "content": "[11] AAM Muzahid, Wanggen Wan, Ferdous Sohel, Lianyao Wu, and Li Hou. Curvenet: Curvature-based multitask learning deep networks for 3d object recognition. IEEE/CAA Journal of Automatica Sinica, 8(6):1177-1187, 2020. 2, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.846, + 0.483, + 0.902 + ], + "angle": 0, + "content": "[12] Yatian Pang, Wenxiao Wang, Francis EH Tay, Wei Liu, Yonghong Tian, and Li Yuan. Masked autoencoders for point cloud self-supervised learning. In European conference on computer vision, pages 604-621. Springer, 2022. 
5" + }, + { + "type": "list", + "bbox": [ + 0.094, + 0.184, + 0.484, + 0.902 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.093, + 0.906, + 0.148 + ], + "angle": 0, + "content": "[13] Omid Poursaeed, Tianxing Jiang, Han Qiao, Nayun Xu, and Vladimir G Kim. Self-supervised learning of point clouds via orientation estimation. In 2020 International Conference on 3D Vision (3DV), pages 1018-1028. IEEE, 2020. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.15, + 0.906, + 0.218 + ], + "angle": 0, + "content": "[14] Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 652-660, 2017. 2, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.221, + 0.906, + 0.29 + ], + "angle": 0, + "content": "[15] Guocheng Qian, Yuchen Li, Houwen Peng, Jinjie Mai, Hasan Hammoud, Mohamed Elhoseiny, and Bernard Ghanem. Pointnext: Revisiting pointnet++ with improved training and scaling strategies. Advances in neural information processing systems, 35:23192-23204, 2022. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.292, + 0.906, + 0.359 + ], + "angle": 0, + "content": "[16] Yongming Rao, Jiwen Lu, and Jie Zhou. Global-local bidirectional reasoning for unsupervised representation learning of 3d point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5376-5385, 2020. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.362, + 0.906, + 0.416 + ], + "angle": 0, + "content": "[17] Jie Ren, Mingjie Li, Qirui Chen, Huiqi Deng, and Quanshi Zhang. Towards axiomatic, hierarchical, and symbolic explanation for deep models. arXiv preprint arXiv:2111.06206v5, 2021. 2, 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.419, + 0.906, + 0.473 + ], + "angle": 0, + "content": "[18] Jie Ren, Zhanpeng Zhou, Qirui Chen, and Quanshi Zhang. Can we faithfully represent masked states to compute shapley values on a dnn? arXiv preprint arXiv:2105.10719, 2021. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.476, + 0.906, + 0.544 + ], + "angle": 0, + "content": "[19] Jie Ren, Mingjie Li, Qirui Chen, Huiqi Deng, and Quanshi Zhang. Defining and quantifying the emergence of sparse concepts in dnns. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 20280-20289, 2023. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.547, + 0.906, + 0.6 + ], + "angle": 0, + "content": "[20] Qihan Ren, Yang Xu, Junpeng Zhang, Yue Xin, Dongrui Liu, and Quanshi Zhang. Towards the dynamics of a dnn learning symbolic interactions. arXiv preprint arXiv:2407.19198, 2024. 1, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.603, + 0.906, + 0.645 + ], + "angle": 0, + "content": "[21] Jonathan Sauder and Bjarne Sievers. Self-supervised deep learning on point clouds by reconstructing space. Advances in Neural Information Processing Systems, 32, 2019. 1, 2, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.647, + 0.906, + 0.673 + ], + "angle": 0, + "content": "[22] Lloyd S Shapley. A value for n-person games. Contribution to the Theory of Games, 2, 1953. 3, 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.676, + 0.906, + 0.716 + ], + "angle": 0, + "content": "[23] Charu Sharma and Manohar Kaul. Self-supervised few-shot learning on point clouds. 
Advances in Neural Information Processing Systems, 33:7212-7221, 2020. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.719, + 0.906, + 0.788 + ], + "angle": 0, + "content": "[24] Wen Shen, Binbin Zhang, Shikun Huang, Zhihua Wei, and Quanshi Zhang. 3d-rotation-equivariant quaternion neural networks. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XX 16, pages 531–547. Springer, 2020. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.79, + 0.906, + 0.843 + ], + "angle": 0, + "content": "[25] Wen Shen, Qihan Ren, Dongrui Liu, and Quanshi Zhang. Interpreting representation quality of dnns for 3d point cloud processing. Advances in Neural Information Processing Systems, 34:8857-8870, 2021. 1, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.846, + 0.906, + 0.902 + ], + "angle": 0, + "content": "[26] Wen Shen, Zhihua Wei, Shikun Huang, Binbin Zhang, Panyue Chen, Ping Zhao, and Quanshi Zhang. Verifiability and predictability: Interpreting utilities of network architectures for point cloud processing. In Proceedings of the IEEE/CVF" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.093, + 0.906, + 0.902 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "27323" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.126, + 0.092, + 0.482, + 0.12 + ], + "angle": 0, + "content": "Conference on Computer Vision and Pattern Recognition, pages 10703-10712, 2021. 1, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.122, + 0.483, + 0.191 + ], + "angle": 0, + "content": "[27] Wen Shen, Zhihua Wei, Qihan Ren, Binbin Zhang, Shikun Huang, Jiaqi Fan, and Quanshi Zhang. Interpretable rotation-equivariant quaternion neural networks for 3d point cloud processing. IEEE Transactions on Pattern Analysis and Machine Intelligence, 46(5):3290-3304, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.193, + 0.483, + 0.247 + ], + "angle": 0, + "content": "[28] Mukund Sundararajan, Kedar Dhamdhere, and Ashish Agarwal. The shapley taylor interaction index. In International conference on machine learning, pages 9259-9268. PMLR, 2020. 3, 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.25, + 0.483, + 0.305 + ], + "angle": 0, + "content": "[29] Ling Tang, Wen Shen, Zhanpeng Zhou, Yuefeng Chen, and Quanshi Zhang. Defects of convolutional decoder networks in frequency representation. arXiv preprint arXiv:2210.09020, 2022. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.307, + 0.483, + 0.375 + ], + "angle": 0, + "content": "[30] Ali Thabet, Humam Alwassel, and Bernard Ghanem. Self-supervised learning of local features in 3d point clouds. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops, pages 938-939, 2020. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.377, + 0.483, + 0.459 + ], + "angle": 0, + "content": "[31] Mikaela Angelina Uy, Quang-Hieu Pham, Binh-Son Hua, Thanh Nguyen, and Sai-Kit Yeung. Revisiting point cloud classification: A new benchmark dataset and classification model on real-world data. In Proceedings of the IEEE/CVF international conference on computer vision, pages 1588–1597, 2019. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.462, + 0.483, + 0.531 + ], + "angle": 0, + "content": "[32] Hanchen Wang, Qi Liu, Xiangyu Yue, Joan Lasenby, and Matt J Kusner. 
Unsupervised point cloud pre-training via occlusion completion. In Proceedings of the IEEE/CVF international conference on computer vision, pages 9782-9792, 2021. 1, 2, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.533, + 0.483, + 0.588 + ], + "angle": 0, + "content": "[33] Xin Wang, Jie Ren, Shuyun Lin, Xiangming Zhu, Yisen Wang, and Quanshi Zhang. A unified approach to interpreting and boosting adversarial transferability. arXiv preprint arXiv:2010.04055, 2020. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.59, + 0.483, + 0.657 + ], + "angle": 0, + "content": "[34] Xin Wang, Shuyun Lin, Hao Zhang, Yufei Zhu, and Quanshi Zhang. Interpreting attributions and interactions of adversarial attacks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1095-1104, 2021. 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.661, + 0.483, + 0.716 + ], + "angle": 0, + "content": "[35] Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E Sarma, Michael M Bronstein, and Justin M Solomon. Dynamic graph cnn for learning on point clouds. ACM Transactions on Graphics (tog), 38(5):1-12, 2019. 2, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.718, + 0.483, + 0.787 + ], + "angle": 0, + "content": "[36] Zhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaou Tang, and Jianxiong Xiao. 3d shapenets: A deep representation for volumetric shapes. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1912-1920, 2015. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.789, + 0.483, + 0.858 + ], + "angle": 0, + "content": "[37] Mutian Xu, Junhao Zhang, Zhipeng Zhou, Mingye Xu, Xiaojuan Qi, and Yu Qiao. Learning geometry-disentangled representation for complementary understanding of 3d object point cloud. In Proceedings of the AAAI conference on artificial intelligence, pages 3056-3064, 2021. 2, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.86, + 0.483, + 0.901 + ], + "angle": 0, + "content": "[38] Siming Yan, Zhenpei Yang, Haoxiang Li, Chen Song, Li Guan, Hao Kang, Gang Hua, and Qixing Huang. Implicit autoencoder for point-cloud self-supervised representation" + }, + { + "type": "list", + "bbox": [ + 0.093, + 0.092, + 0.483, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.549, + 0.093, + 0.905, + 0.134 + ], + "angle": 0, + "content": "learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 14530-14542, 2023. 1, 2, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.136, + 0.905, + 0.205 + ], + "angle": 0, + "content": "[39] Xumin Yu, Lulu Tang, Yongming Rao, Tiejun Huang, Jie Zhou, and Jiwen Lu. Point-bert: Pre-training 3d point cloud transformers with masked point modeling. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 19313-19322, 2022. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.207, + 0.905, + 0.26 + ], + "angle": 0, + "content": "[40] Wentao Yuan, Tejas Khot, David Held, Christoph Mertz, and Martial Hebert. Pcn: Point completion network. In 2018 international conference on 3D vision (3DV), pages 728-737. IEEE, 2018. 2, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.262, + 0.905, + 0.317 + ], + "angle": 0, + "content": "[41] Hao Zhang, Sen Li, Yinchao Ma, Mingjie Li, Yichen Xie, and Quanshi Zhang. Interpreting and boosting dropout from a game-theoretic view. arXiv preprint arXiv:2009.11729, 2020. 
1, 2, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.32, + 0.905, + 0.387 + ], + "angle": 0, + "content": "[42] Renrui Zhang, Ziyu Guo, Peng Gao, Rongyao Fang, Bin Zhao, Dong Wang, Yu Qiao, and Hongsheng Li. Point-m2ae: multi-scale masked autoencoders for hierarchical point cloud pre-training. Advances in neural information processing systems, 35:27061-27074, 2022. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.39, + 0.905, + 0.457 + ], + "angle": 0, + "content": "[43] Zaiwei Zhang, Rohit Girdhar, Armand Joulin, and Ishan Misra. Self-supervised pretraining of 3d features on any point-cloud. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10252-10263, 2021. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.46, + 0.905, + 0.514 + ], + "angle": 0, + "content": "[44] Huilin Zhou, Huijie Tang, Mingjie Li, Hao Zhang, Zhenyu Liu, and Quanshi Zhang. Explaining how a neural network plays the go game and let people learn. arXiv preprint arXiv:2310.09838, 2023. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.517, + 0.905, + 0.572 + ], + "angle": 0, + "content": "[45] Huilin Zhou, Hao Zhang, Huiqi Deng, Dongrui Liu, Wen Shen, Shih-Han Chan, and Quanshi Zhang. Concept-level explanation for the generalization of a dnn. arXiv preprint arXiv:2302.13091, 2023. 2, 3" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.093, + 0.905, + 0.572 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.945, + 0.52, + 0.957 + ], + "angle": 0, + "content": "27324" + } + ] +] \ No newline at end of file diff --git a/2025/A Unified Approach to Interpreting Self-supervised Pre-training Methods for 3D Point Clouds via Interactions/99ce7f16-2914-4f50-bc65-0054f0b31f07_origin.pdf b/2025/A Unified Approach to Interpreting Self-supervised Pre-training Methods for 3D Point Clouds via Interactions/99ce7f16-2914-4f50-bc65-0054f0b31f07_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..f8e933e14620ffc6b43b9f79f3e8b5d2fb2eb345 --- /dev/null +++ b/2025/A Unified Approach to Interpreting Self-supervised Pre-training Methods for 3D Point Clouds via Interactions/99ce7f16-2914-4f50-bc65-0054f0b31f07_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e1cb026b44a3cf950e10df44bdcf6773b38441fa5d0d628cae4df9b35bc80762 +size 2265976 diff --git a/2025/A Unified Approach to Interpreting Self-supervised Pre-training Methods for 3D Point Clouds via Interactions/full.md b/2025/A Unified Approach to Interpreting Self-supervised Pre-training Methods for 3D Point Clouds via Interactions/full.md new file mode 100644 index 0000000000000000000000000000000000000000..ff1938ceb9560ae6e3561bc9052d26d7cdfea6a8 --- /dev/null +++ b/2025/A Unified Approach to Interpreting Self-supervised Pre-training Methods for 3D Point Clouds via Interactions/full.md @@ -0,0 +1,315 @@ +# A Unified Approach to Interpreting Self-supervised Pre-training Methods for 3D Point Clouds via Interactions + +Qiang Li, Jian Ruan, Fanghao Wu, Yuchi Chen, Zhihua Wei*, and Wen Shen* Tongji University, Shanghai, China + +{qli, jianruan, 2432055, 1953721, zhihua_wei, wenshen}@tongji.edu.cn + +# Abstract + +Recently, many self-supervised pre-training methods have been proposed to improve the performance of deep neural networks (DNNs) for 3D point clouds processing. However, the common mechanism underlying the effectiveness of different pre-training methods remains unclear. 
In this paper, we use game-theoretic interactions as a unified approach to explore the common mechanism of pre-training methods. Specifically, we decompose the output score of a DNN into the sum of numerous effects of interactions, with each interaction representing a distinct 3D substructure of the input point cloud. Based on the decomposed interactions, we draw the following conclusions. (1) The common mechanism across different pre-training methods is that they enhance the strength of high-order interactions encoded by DNNs, which represent complex and global 3D structures, while reducing the strength of low-order interactions, which represent simple and local 3D structures. (2) Sufficient pre-training and adequate fine-tuning data for downstream tasks further reinforce the mechanism described above. (3) Pre-training methods carry a potential risk of reducing the transferability of features encoded by DNNs. Inspired by the observed common mechanism, we propose a new method to directly enhance the strength of high-order interactions and reduce the strength of low-order interactions encoded by DNNs, improving performance without the need for pre-training on large-scale datasets. Experiments show that our method achieves performance comparable to traditional pre-training methods.

# 1. Introduction

Self-supervised pre-training methods for 3D point clouds have developed rapidly in recent years [1, 9, 13, 16, 21, 23, 30, 32, 38, 43]. Pre-training methods first train DNNs on large-scale unlabeled datasets, then fine-tune the DNNs on downstream tasks, generally enhancing their performance.

However, the common mechanism underlying different pre-training methods remains unclear, posing challenges for gaining insights into effective model training strategies.

In this paper, we aim to explore the common mechanism behind the performance improvements of different pre-training methods, thereby providing insights into pre-training and offering better guidance for the training process. Recent studies have employed interactions to explain the reasoning processes of DNNs [20, 25, 41]. Inspired by these studies, we use interactions to provide a unified interpretation of different pre-training methods.

Specifically, given a point cloud $x$ with $n$ regions indexed by $N = \{1,2,\dots,n\}$, an interaction represents the collaborations among regions within a specific 3D structure $S \subseteq N$, where each interaction has a numerical effect $I(S)$ on the network output. For example, as shown in Fig. 1, the interaction between the regions in $S_{1} = \{\text{wingtip}, \text{wing root}\}$ forms a concept of "wing", contributing $I(S_{1})$ to push the classification result toward the class "airplane". It has been proven by [6, 44] that the output score of a DNN consistently equals the sum of the effects of all activated interactions, regardless of how the input regions are masked. In this way, interactions can be seen as the detailed inference patterns encoded by the DNN.

Based on interactions, we conduct comparative experiments to explore the common reasons behind the performance improvements of different pre-training methods. Specifically, we explore the impact of pre-training methods on the complexity of interactions encoded by DNNs. Here, the complexity refers to the number of regions contained in an interaction, i.e., the order of the interaction. A high-order interaction, e.g., $S_{3}$ in Fig.
1, captures collaborations among massive point regions, representing complex and global 3D structures. In contrast, a low-order interaction, e.g., $S_{1}$, measures collaborations between a few regions, representing simple and local 3D structures. From the experiments, we draw the following key conclusions.

![](images/df7338f800ae385b4dddd39cd83328592a92a96ae7466c162041d4f73216074f.jpg)

![](images/571d909e20d4adaf0184bed55e90c7839287c07b105246f692cd9db6857ef0af.jpg)
Figure 1. (a) Illustration of how interactions can be used to explain a DNN. Given an input point cloud with $n$ regions, the output score of the DNN can be decomposed into the sum of the numerical effects of $2^{n}$ interactions, where each interaction $S$ encodes the collaborations among the point cloud regions in the set $S$. (b) Comparing the strength of interactions across different orders encoded by the DGCNN trained from scratch (scr) and the DGCNN using a pre-training method (pt). Results show that the pre-trained DGCNN encodes stronger high-order interactions and weaker low-order interactions than the DGCNN trained from scratch.

- The common mechanism across different pre-training methods is that they enhance the strength of high-order interactions encoded by DNNs while reducing the strength of low-order interactions. This common mechanism indicates that pre-training methods enhance the DNNs' ability to capture global 3D structures, while reducing their reliance on local 3D structures.
- Sufficient pre-training and adequate fine-tuning data for downstream tasks further reinforce the mechanism described above. We observe that the strength of high-order interactions increases with the number of pre-training epochs, while the strength of low-order interactions decreases. Additionally, increasing the amount of data for downstream tasks also amplifies this effect.
- Pre-training methods carry a potential risk of reducing the transferability of features encoded by DNNs. We observe that the performance of the pre-trained DNNs may decrease on unseen test datasets, possibly due to pre-training methods causing the DNNs to encode high-order interactions with excessively high strength.

Building on the common mechanism we identified, we propose a new method to directly enhance the strength of high-order interactions encoded by DNNs while reducing the strength of low-order interactions. Experimental results on classification and semantic segmentation benchmarks show that our method achieves performance comparable to pre-training methods, without the need for pre-training on large-scale unlabeled datasets.

# 2. Related work

Self-supervised learning (SSL) of 3D point clouds. Recently, 3D point cloud processing has developed rapidly [5, 11, 14, 15, 24, 27, 35, 37, 40], with many self-supervised methods proposed to learn representations from individual 3D objects [1, 9, 13, 16, 21, 23, 30, 32, 38, 43]. The goal of SSL is to design pretext tasks that help the model learn the data distribution and features in advance, preparing it for downstream tasks. In this paper, we explore the common mechanism behind the performance improvement of the following five widely used open-source pre-training methods.

- Occlusion Completion (OcCo) [32]. OcCo masks occluded points from a camera view and trains an encoder-decoder model to reconstruct these missing points.
- Jigsaw [21]. Jigsaw trains a model to reconstruct point clouds with parts rearranged in random order.
- Implicit Auto-encoder (IAE) [38].
IAE trains the model as an encoder to map the point clouds to a high-dimensional space and uses a decoder to reconstruct the encoder's outputs back into 3D geometry.
- Spatio-Temporal Representation Learning (STRL) [9]. STRL captures spatio-temporal information from 3D sequences by using two temporally correlated frames to learn invariant representations.
- CrossPoint [1]. CrossPoint learns transferable representations by maximizing the agreement between 3D point clouds and corresponding 2D images.

Using game-theoretical interactions to explain DNNs. Game-theoretical interactions provide a solid theoretical foundation for explaining DNNs. Ren et al. [17] proposed a mathematical formulation for the concepts encoded by a DNN, while Ren et al. [18] further leveraged these concepts to define optimal baseline values for Shapley values. Li et al. [10] provided a theoretical guarantee that interactions accurately capture the true concepts encoded by a DNN. At the application level, interactions have been widely used to explain the representation capacity of DNNs from various perspectives, including adversarial robustness [17, 34], adversarial transferability [33], and generalization power [41, 45]. In this paper, we use interactions to investigate the common mechanism underlying different pre-training methods for 3D point clouds.

![](images/3289a2b5798488eb7ebe8ac75c333eef9e141c31637aeebe7028b38bc5fad491.jpg)
Figure 2. Process of dividing an input point cloud into $n$ regions.

# 3. Interactions in 3D point cloud processing

Preliminaries: interactions. As a new explanatory metric, the interaction has been used to clarify the inference logic [7], generalization power [34], and robustness of a DNN [45]. It can be viewed as a universal measure due to its close theoretical connections with other metrics. As proven by [19], the Harsanyi interaction serves as the basis for existing game-theoretic attributions and interactions, including the Shapley value [22], the Shapley interaction index [8], and the Shapley Taylor interaction index [28]. Please see the supplementary material for additional details.

Quantifying interactions for 3D point cloud processing. We extend interactions to 3D point clouds. Considering a point cloud $x \in \mathbb{R}^{P \times 3}$, we divide it into $n$ regions, as shown in Fig. 2. First, we apply the farthest point sampling (FPS) algorithm to select $n$ points from the point cloud as the centers of the regions. Then, we use the $k$-dimensional tree (KDTree) algorithm to assign the remaining points to their nearest region. By doing so, we divide the input point cloud $x$ into $n$ regions, indexed by $N = \{1, 2, \dots, n\}$.

Given a trained DNN $v: \mathbb{R}^{P \times 3} \to \mathbb{R}$, we follow [10, 20, 25] to define the DNN's output score as $v(x) = \log \frac{p}{1 - p}$ to represent the classification confidence, where $p$ is the output probability of the ground-truth class. Then, the output score can be rewritten as the sum of the numerical effects of all $2^n$ interactions between the point regions, as follows.

$$
v(x) = \sum_{S \subseteq N} I(S). \tag{1}
$$

Here, $I(S)$ represents the numerical effect of the interaction among the point regions in $S \subseteq N$, defined as follows.
$$
I(S) \triangleq \sum_{T \subseteq S} (-1)^{|S| - |T|} \cdot v(x_T), \tag{2}
$$

where $x_{T}$ denotes the input point cloud with the regions in $T \subseteq N$ left unchanged, while the regions in $N \backslash T$ are masked by replacing them with the centroid of the point cloud.

Understanding interactions in 3D point cloud processing. The interaction extracted from the input point cloud $x$ encodes an AND relationship among the point regions in $S$, with the numerical effect $I(S)$ representing the combined contribution of these regions to the output score $v(x)$. As shown in Fig. 1, when the point regions in the set $S_{1} = \{\text{wingtip}, \text{wing root}\}$ are unmasked, they form a "wing" pattern and contribute a numerical effect $I(S_{1})$ that pushes the output score $v(x)$ towards the "airplane" category. Masking any region in $S_{1}$ will deactivate this AND interaction and remove $I(S_{1})$ from $v(x)$. In fact, Tang et al. [29] have proven that interactions satisfy the universal matching property, which states that the DNN's inference score $v(x_{T})$ can always be faithfully explained as the sum of the numerical effects of all activated interactions, regardless of how the point cloud regions are masked.

Theorem 1 (Universal matching property, proven by [29]). Given an input sample $x$ with $n$ variables indexed by $N = \{1,2,\dots,n\}$, we can generate $2^n$ masked samples $x_{T}$, where $T \subseteq N$. Let us construct the following surrogate logical model $\phi(\cdot)$ that uses the interactions extracted from the DNN $v(\cdot)$ on the sample $x$ for inference. Then, the output of the surrogate logical model $\phi(\cdot)$ always matches the output of the DNN $v(\cdot)$, regardless of how the input sample is masked.

$$
\begin{aligned}
\forall T \subseteq N, \quad \phi(x_T) &= v(x_T), \\
\phi(x_T) = v(x_{\emptyset}) + \sum_{S \subseteq N} I(S) \cdot \mathbb{1}\!\left(x_T \text{ triggers the AND relation } S\right) &= v(x_{\emptyset}) + \sum_{\emptyset \neq S \subseteq T} I(S).
\end{aligned} \tag{3}
$$

Defining and quantifying the representation complexity of DNNs. The order of an interaction is defined as $m = |S|$, which reflects the representation complexity of DNNs. High-order interactions measure the effects of collaborations among massive point cloud regions, representing global and complex 3D structures, while low-order interactions measure the effects of collaborations between a few point regions, representing simple and local 3D structures. We introduce a new metric for measuring the representation complexity of DNNs, as follows.

$$
\kappa^{(m)} \triangleq \frac{\mathbb{E}_{x} \mathbb{E}_{S \subseteq N, |S| = m} \left[ |I(S)| \right]}{Z}, \tag{4}
$$

where $\mathbb{E}$ denotes the mathematical expectation, and $Z = \mathbb{E}_x \mathbb{E}_{S \subseteq N} \left[ |I(S)| \right]$ is a normalization term ensuring fair comparisons across different DNNs. Here, $\kappa^{(m)}$ measures the normalized average strength of the $m$-th order interactions. If the value of $\kappa^{(m)}$ at a high order is larger than that at a low order, the DNN's representation complexity is sufficient to capture global and complex 3D structures. Otherwise, the DNN's representation complexity is limited to encoding only local and simple 3D structures.
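For concreteness, the following minimal sketch shows how the quantities above can be computed. It is our illustration rather than the authors' released code: `model_logprob` is an assumed callable returning $v(x) = \log \frac{p}{1-p}$ for the ground-truth class, and the paper's KDTree assignment is replaced by a brute-force nearest-center search, which yields the same partition.

```python
import itertools

import numpy as np


def divide_regions(points, n):
    """Partition a (P, 3) point cloud into n regions (Sec. 3): farthest point
    sampling picks n centers, then each point joins its nearest center."""
    centers = [0]  # seed FPS with an arbitrary point
    dist = np.linalg.norm(points - points[0], axis=1)
    for _ in range(n - 1):
        centers.append(int(dist.argmax()))  # farthest point from chosen centers
        dist = np.minimum(dist, np.linalg.norm(points - points[centers[-1]], axis=1))
    to_centers = np.linalg.norm(points[:, None, :] - points[centers][None, :, :], axis=2)
    return to_centers.argmin(axis=1)  # region index in {0, ..., n-1} per point


def mask_regions(points, region_of, keep):
    """Build x_T: regions in `keep` stay unchanged, all other regions are
    replaced by the centroid of the whole cloud (the baseline of Eq. (2))."""
    x = points.copy()
    x[~np.isin(region_of, list(keep))] = points.mean(axis=0)
    return x


def harsanyi_effect(model_logprob, points, region_of, S):
    """I(S) = sum over T subseteq S of (-1)^(|S|-|T|) * v(x_T), i.e., Eq. (2)."""
    effect = 0.0
    for r in range(len(S) + 1):
        for T in itertools.combinations(S, r):
            sign = (-1) ** (len(S) - len(T))
            effect += sign * model_logprob(mask_regions(points, region_of, T))
    return effect
```

Note that each effect $I(S)$ costs $2^{|S|}$ forward passes, which is why interactions are only quantified over a small number of regions $n$.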
We further propose the following metrics to measure the strength of the high-order and low-order interactions encoded by the DNN.

$$
\begin{aligned}
\kappa^{\text{high}} &= \sum_{m \in \Omega^{\text{high}}} \kappa^{(m)}, \quad \text{s.t.} \quad \Omega^{\text{high}} \triangleq \left\{ m \mid \lceil \tfrac{2}{3} n \rceil < m \leq n \right\}, \\
\kappa^{\text{low}} &= \sum_{m \in \Omega^{\text{low}}} \kappa^{(m)}, \quad \text{s.t.} \quad \Omega^{\text{low}} \triangleq \left\{ m \mid 1 \leq m \leq \lceil \tfrac{1}{3} n \rceil \right\},
\end{aligned} \tag{5}
$$

where $\Omega^{\mathrm{high}}$ and $\Omega^{\mathrm{low}}$ denote the ranges of high-order and low-order interactions, respectively.

![](images/67782b0baf6715b310e74bd4a3d170c23d7160ddc09fccb8198fea6e40ff40e5.jpg)

![](images/7bd74d7397421b56e077a1cf24ea8db8769d47f46f8c4c5ef7afe2e6847ef828.jpg)

![](images/2d4ad121797fb8d91330c85b0aa60c8b6db70c00e96fc0b651c0a4a601792d2e.jpg)

![](images/766c06325e4154bea1e38a0f39b207067288d0d6955c7cf7c8cdf979b57fde53.jpg)

![](images/5cffe96ae4f6bd91c1e2184252f0c73e72a1f8e7e956f170434a17bcd890a18a.jpg)

![](images/2224f7471994640d18c96b588f8ca20940dc565deddc8597e39b0b3ce677752c.jpg)

![](images/06db17dd0b0b4c5e9c0c318dddeb42bcd5217b19031bc05f0ec0541021418599.jpg)
Figure 3. [Conclusion 1] (a) Comparing the normalized average strength of interactions encoded by different DNNs, including DNNs trained from scratch and DNNs trained with different pre-training methods. Results show that the DNNs using pre-training methods consistently encode stronger high-order interactions and weaker low-order interactions than the DNNs trained from scratch. (b) The relationship between the strength of high-order interactions encoded by different DNNs and their corresponding classification accuracy. Results show that DNNs encoding stronger high-order interactions tend to exhibit higher accuracy.

![](images/254d67983fdf6831316bb09ceed5141568c4ed51c6ec7dbf0ed8cb2eab5a12cc.jpg)

![](images/de395489ef92dbdb43b95b11a7d31e7ef78124b86d6c6d378c96946f3f6b100f.jpg)

![](images/076650b651e6aa4430796a6f9f858dfef69848080a46eb6b562fc16c0fb90619.jpg)

![](images/e2be93cf1a71bff0a1240735116ef1c0ea2645678d7d602e2b6cd91f0e07c5a4.jpg)

![](images/2b0a90577a7f2b4cd573d46ab5d832b246c463ed621f16630a3d65b902a1901b.jpg)

# 4. Interpreting different pre-training methods using interactions

# 4.1. Comparative study setup

For a given network architecture, we compare the interactions encoded by the model trained from scratch with those encoded by models trained using various pre-training methods. This comparison explores whether these pre-training methods share a common underlying reason for their performance improvement, which we define as the common mechanism across these methods. To provide a unified explanation covering most pre-training methods, we conduct experiments on five widely used open-source pre-training methods, namely IAE [38], STRL [9], CrossPoint [1], OcCo [32], and Jigsaw [21], as detailed in Sec. 2.

Networks and datasets. We conduct experiments on three network architectures: DGCNN [35], PointNet [14], and PCN [40]. For DGCNN, we utilize all five pre-training methods, while for PointNet and PCN, we focus on OcCo [32] and Jigsaw [21], depending on the availability of open-source implementations for each pre-training method.
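The comparisons in this section rest on $\kappa^{(m)}$, $\kappa^{\text{high}}$, and $\kappa^{\text{low}}$. A rough sketch of how they can be estimated, building on `divide_regions` and `harsanyi_effect` above, is given below. Since the exact expectations in Eqs. (4) and (5) are intractable, subsets of each order are sampled; the sample count and the per-order averaging used for the normalizer $Z$ are our simplifications, not the authors' exact protocol.

```python
import random

import numpy as np


def kappa_profile(model_logprob, clouds, n, samples_per_order=20):
    """Estimate kappa^(m) of Eq. (4) for m = 1..n, plus the aggregate
    strengths kappa^high and kappa^low of Eq. (5)."""
    strength = np.zeros(n + 1)  # strength[m] ~ E_x E_{|S|=m} [ |I(S)| ]
    for points in clouds:
        region_of = divide_regions(points, n)
        for m in range(1, n + 1):
            # For large m each effect needs 2^m forward passes; a real
            # implementation would subsample the inner sum of Eq. (2) too.
            vals = [abs(harsanyi_effect(model_logprob, points, region_of,
                                        random.sample(range(n), m)))
                    for _ in range(samples_per_order)]
            strength[m] += np.mean(vals) / len(clouds)
    Z = strength[1:].mean()  # approximates the normalizer E_x E_S [ |I(S)| ]
    kappa = strength / Z
    # Eq. (5): Omega^high = {m : ceil(2n/3) < m <= n}, Omega^low = {1..ceil(n/3)}
    kappa_high = kappa[int(np.ceil(2 * n / 3)) + 1:].sum()
    kappa_low = kappa[1:int(np.ceil(n / 3)) + 1].sum()
    return kappa, kappa_high, kappa_low
```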
To compare the interactions encoded by different DNNs, we use three benchmark datasets for the 3D classification task: ModelNet40 [36], ShapeNet [3], and ScanObjectNN [31]. Tab. 1 summarizes the statistics of these datasets. We randomly select 10 samples per class from each dataset and use the
method in Sec. 3 to divide each point cloud sample into $n$ regions for quantifying the interactions encoded by DNNs.

| Name | Type | # Class | # Training / Testing |
| --- | --- | --- | --- |
| ModelNet | synthesized | 40 | 9,843 / 2,468 |
| ShapeNet | synthesized | 16 | 12,137 / 4,744 |
| ScanObjectNN | real world | 15 | 2,304 / 576 |
Table 1. Statistics of datasets for classification.

# 4.2. Exploring the common mechanism of different pre-training methods

Conclusion 1. The common mechanism across different pre-training methods is that they enhance the strength of high-order interactions encoded by DNNs, while reducing the strength of low-order interactions.

Fig. 3 (a) shows the normalized average strength of the interactions encoded by different DNNs, including DNNs trained from scratch and DNNs using different pre-training methods. Results show that the strength of high-order interactions encoded by the DNNs using pre-training methods is consistently greater than that of the DNNs trained from scratch, across all datasets and network architectures. Conversely, the DNNs using pre-training methods typically encode weaker low-order interactions than the DNNs trained from scratch. Fig. 3 (b) further illustrates the relationship between the strength of high-order interactions and the classification accuracy across different DNNs. We observe that DNNs encoding stronger high-order interactions tend to exhibit higher accuracy. Thus, we regard this shared phenomenon as the common mechanism behind the performance improvement of different pre-training methods, i.e., different pre-training methods generally enhance the strength of high-order interactions encoded by DNNs, while reducing the strength of low-order interactions, as summarized in Conclusion 1.

![](images/3dda22ba09d99ba14a40a722fa9072e88e2e08d51ddd3ca13441f0b237d9a780.jpg)
Figure 4. Visualization of interactions encoded by the DGCNN trained from scratch (scr) and the DGCNN pre-trained (pt) with IAE. The pre-trained DGCNN typically encodes stronger high-order interactions and weaker low-order interactions compared to the DGCNN trained from scratch.

![](images/20f4fd5661286d9a212e95d4a2b4c514ec0522bee551dcc0bb8bb8d1903d9b35.jpg)

Conclusion 1 reveals that pre-training methods enhance the ability of DNNs to encode complex and global 3D structures, while reducing their reliance on simple and local 3D structures. Because simple and local 3D structures (e.g., a curve, a corner) can appear across different categories, they often lack sufficient classification information, so an over-reliance on them may lead to incorrect classifications. For example, as shown in Fig. 4, the DNN trained from scratch incorrectly classifies a "plant" sample as a "stool". This misclassification may occur because the local structures the DNN learns for the plant, such as the "stem" and the "leaf", are similar to some local structures of a stool, such as the "legs"; the DNN nevertheless assigns high strength to these local structures (i.e., low-order interactions), which results in an incorrect classification. In contrast, pre-training methods improve the modeling of complex and global 3D structures, allowing DNNs to gain a more comprehensive understanding of the input, which in turn enhances their performance. Thus, beyond traditional accuracy metrics, interactions can help identify the potential reasons for classification errors by revealing which 3D structures modeled by the DNN carry inappropriate weights, offering a new perspective for debugging.

![](images/2b18729678ddf6c40e4c4532bb609d7399d56e1ef4fd93b55a6b755cc2577e1f.jpg)
Figure 5.
Comparing the normalized average strength of interactions encoded by (1) transformer-based models, (2) traditional DNNs (e.g., DGCNN and PointNet) trained from scratch, and (3) traditional DNNs using pre-training methods (e.g., DGCNN with IAE, and PointNet with OcCo). Results show that transformer-based models also encode stronger high-order interactions and weaker low-order interactions, exhibiting a similar pattern to traditional DNNs using pre-training methods.

Comparison with transformer-based pre-training methods. We also measure the interactions encoded by transformer-based models, including PointBERT [39], PointMAE [12], PointM2AE [42], and PointGPT [4]. These models integrate pre-training methods into the model architecture, making them incompatible with traditional DNNs (e.g., DGCNN). Therefore, we directly compare the interactions encoded by transformer-based models with those encoded by traditional DNNs, including DNNs trained from scratch and DNNs trained with pre-training methods. As shown in Fig. 5, transformer-based models also encode stronger high-order interactions and weaker low-order interactions than traditional DNNs trained from scratch, exhibiting a pattern similar to the interactions encoded by traditional DNNs using pre-training methods. This further supports Conclusion 1.

# 4.3. Exploring the impact of different factors on the common mechanism

We further explore two factors that affect the common mechanism: (a) the extent of pre-training, and (b) the amount of fine-tuning data used for downstream tasks.

Conclusion 2(a). As the extent of pre-training increases, the pre-training process progressively enhances the strength of high-order interactions encoded by DNNs while weakening the strength of low-order interactions.

In this subsection, we first investigate the relationship between the extent of pre-training and the strength of interactions encoded by DNNs. Here, the extent of pre-training refers to the number of pre-training epochs, i.e., how far the pre-training process has progressed from its start to the epoch at which it converges. To this end, we conduct experiments on DGCNN with two pre-training methods, IAE and CrossPoint. For each pre-training method, let $T_{\mathrm{max}}$ denote the total number of epochs at which the pre-training process of the DNN converges. We select the DNNs at training epochs $0, 0.2T_{\mathrm{max}}, 0.4T_{\mathrm{max}}, \ldots, T_{\mathrm{max}}$, covering six different stages of the pre-training process. Then, for all DNNs at different pre-training extents, we fine-tune them on the same downstream task and quantify the interactions encoded by these fine-tuned DNNs.

![](images/a87161cf4e1679b3d5dad2935f636cc2f3b4ffb672ab7dd1d6f0a10ba275ac30.jpg)
Figure 6. [Conclusion 2(a)] Comparing the normalized average strength of interactions encoded by DGCNNs pre-trained to different extents, ranging from initial pre-training (0%) to full convergence (100%). As the extent of pre-training increases, the strength of high-order interactions encoded by the DNNs typically rises, while the strength of low-order interactions generally decreases.

![](images/8aa174ef63623ca8ad69b2f8f26b8407fd901be9a1d01012708f253c838129c1.jpg)
Figure 7. [Conclusion 2(b)] Comparing the normalized average strength of interactions encoded by DNNs fine-tuned with varying amounts of data. Results show that as the amount of fine-tuning data increases from $1\%$ to $100\%$, the strength of high-order interactions encoded by the DNNs generally increases, while the strength of low-order interactions generally decreases.
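The sweep just described can be written as a thin loop on top of `kappa_profile`. In the sketch below, `models_by_extent` (mapping each pre-training fraction to the log-odds function of the DNN fine-tuned from that checkpoint), `eval_clouds`, and the choice of $n = 8$ regions are all our assumptions, not the authors' code.

```python
def profile_pretraining_extents(models_by_extent, eval_clouds, n=8):
    """For DNNs fine-tuned from checkpoints saved at 0, 0.2*T_max, ..., T_max,
    report the high-/low-order interaction strengths of Eq. (5)."""
    for frac in sorted(models_by_extent):
        _, high, low = kappa_profile(models_by_extent[frac], eval_clouds, n)
        print(f"pre-trained for {frac:.0%} of T_max: "
              f"kappa_high={high:.3f}, kappa_low={low:.3f}")
```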
Fig. 6 presents the experimental results. We observe that as the extent of pre-training increases, the strength of high-order interactions encoded by the DNNs generally increases, while the strength of low-order interactions typically decreases. We summarize this relationship between the extent of pre-training and the interactions encoded by DNNs in Conclusion 2(a). This conclusion suggests that sufficient pre-training enhances the model's ability to capture complex and global 3D contexts, further validating the common mechanism outlined in Conclusion 1.

Conclusion 2(b). Increasing the amount of fine-tuning data further enhances the strength of high-order interactions encoded by DNNs, while weakening the strength of low-order interactions.

To investigate the relationship between the amount of fine-tuning data for downstream tasks and the interactions encoded by DNNs, we construct seven training sets of varying sizes from the ModelNet40 dataset, containing $1\%$, $10\%$, $20\%$, $30\%$, $50\%$, $70\%$, and $100\%$ of the original ModelNet40 training data, respectively. Note that we ensure at least one sample from each class is included, allowing the model to learn from all categories. We then use the different-sized training sets to fine-tune DGCNNs, including those pre-trained with the IAE method and the CrossPoint method. As shown in Fig. 7, as the amount of fine-tuning data increases, the strength of high-order interactions encoded by DNNs gradually increases, while the strength of low-order interactions decreases. We summarize this relationship between the amount of fine-tuning data and the interactions encoded by DNNs in Conclusion 2(b).

![](images/037f4d02f5874a1f526c7bb0b41b8a9fd67377bf8b99ad52f1b4877cba002600.jpg)
Figure 8. Comparing the classification accuracy and the strength of high-order interactions encoded by different DNNs fine-tuned with varying amounts of data. As the amount of data increases, the accuracy gap between the DNN trained from scratch and the DNN pre-trained with IAE narrows, while the gap in the strength of high-order interactions encoded by these DNNs widens.

# 4.4. Exploring the potential risk of pre-training methods in reducing DNN's transferability

Conclusion 3. Pre-training methods carry a potential risk of reducing the transferability of features encoded by DNNs.

When exploring the relationship between the amount of fine-tuning data and the interactions encoded by DNNs, we observe the following anomalous phenomenon. As shown in Fig. 8, the gap in classification accuracy between the pre-trained DNN and the DNN trained from scratch becomes marginal as the fine-tuning data increases. For example, when the fine-tuning data reaches $100\%$, the accuracy gap is only $0.2\%$. However, the gap in the strength of high-order interactions between the two DNNs gradually increases, indicating that high-order interactions with excessively high strength are not necessary for performance improvement.

Since high-order interactions generally carry a greater risk of overfitting [41], we investigate the potential risk of pre-training methods in reducing the transferability of features encoded by DNNs. Here, the transferability of features refers to their generalization ability. For example, if the features learned from one dataset (e.g., the features of the airplane class in ModelNet) can be applied to another unseen dataset (e.g., identifying the airplane class in ShapeNet), we consider these features to have high transferability.
To this end, we use ShapeNet as the unseen dataset and compare the classification accuracy of different DNNs, including DGCNNs trained with varying amounts of data from ModelNet, as well as DGCNNs pre-trained and then fine-tuned with varying amounts of data. Since the category labels in the two datasets do not completely align, we identify eight common categories. Please see the supplementary material for more implementation details.

![](images/524e5286dfb6e543842f6239f5a62352866fbed12fbf6488c0615028f8db39c5.jpg)
Figure 9. [Conclusion 3] Comparing the zero-shot classification accuracy of DNNs with and without pre-training, followed by fine-tuning with varying amounts of data. Results show that the zero-shot accuracy of the pre-trained DNN initially exceeds that of the DNN trained from scratch when the fine-tuning data is limited (e.g., $1\%$), but falls below that of the DNN trained from scratch as the fine-tuning data becomes sufficient (e.g., $100\%$).

Fig. 9 shows the results. We find that when the amount of fine-tuning data is limited (e.g., $1\%$), pre-trained DNNs, such as the DNN pre-trained with CrossPoint, achieve higher zero-shot accuracy ($+8.9\%$) than the DNN trained from scratch. In contrast, when the fine-tuning data is sufficient (e.g., $100\%$), the accuracy of the DNN pre-trained with CrossPoint significantly lags behind that of the DNN trained from scratch ($-14.7\%$). We attribute this to pre-training methods causing DNNs to encode high-order interactions with excessively high strength, which in turn reduces the transferability of the features encoded by the DNNs. Note that we are not claiming that using pre-training methods to enhance the strength of high-order interactions encoded by DNNs is inherently negative. Rather, we point out this potential risk and offer new insights for the design of pre-training methods.

# 5. Guiding the training process using the common mechanism

Traditional pre-training methods, while improving performance, inevitably require extensive pre-training on large-scale unlabeled datasets, which demands considerable time and computational resources. As discussed above, the common mechanism underlying different pre-training methods is that they universally enhance the strength of high-order interactions encoded by DNNs while reducing the strength of low-order interactions. Building on this insight, we propose a new method that directly enhances the strength of high-order interactions encoded by DNNs while reducing the strength of low-order interactions. In this way, our method achieves performance comparable to traditional pre-training methods while avoiding the need for pre-training on large-scale unlabeled datasets.

![](images/b54690b9075dd82112437b300e73e7480e077e9978c3ea391be11a238f70d379.jpg)
Figure 10. (a) Curves showing the values of the proposed loss term $\mathcal{L}_{\text{interaction}}$ for different values of $\alpha$ throughout the training process. (b) Comparison of the normalized average strength of interactions encoded by DNNs for various $\alpha$ values in the loss term.

Specifically, we introduce a new heuristic loss term, defined as follows.
$$
\mathcal{L}_{\text{interaction}} = \mathbb{E}_{x} \left[ \mathbb{E}_{|S| \in \Omega^{\mathrm{low}}} \left[ |I(S)| \right] - \mathbb{E}_{|S| \in \Omega^{\mathrm{high}}} \left[ |I(S)| \right] \right], \tag{6}
$$

where $\Omega^{\mathrm{high}}$ and $\Omega^{\mathrm{low}}$ define the ranges of high-order and low-order interactions, as detailed in Sec. 3. Minimizing the loss term $\mathcal{L}_{\mathrm{interaction}}$ forces the DNN to weaken the strength of low-order interactions, i.e., decreasing $\mathbb{E}_x\mathbb{E}_{|S|\in \Omega^{\mathrm{low}}}[|I(S)|]$, while enhancing the strength of high-order interactions, i.e., increasing $\mathbb{E}_x\mathbb{E}_{|S|\in \Omega^{\mathrm{high}}}[|I(S)|]$.

However, computing Eq. (6) is NP-hard. To overcome this challenge, we approximate $\mathcal{L}_{\mathrm{interaction}}$ using a sampling-based approach. Specifically, given a point cloud $x$ with $n$ regions indexed by $N = \{1,2,\dots,n\}$, we sample three disjoint subsets $S_{1}, S_{2}, S_{3} \subseteq N$ whose orders satisfy $|S_1|, |S_2|, |S_3| \in \Omega^{\mathrm{low}}$, with each subset representing a low-order interaction encoded by the DNN. We consider the union $S_{\mathrm{union}} = S_{1} \cup S_{2} \cup S_{3}$ as a relatively high-order interaction. Then, we approximate the interaction loss $\mathcal{L}_{\mathrm{interaction}}$ as follows.

$$
\mathcal{L}_{\text{interaction}}' = \mathbb{E}_{S_1, S_2, S_3 \subseteq N} \left[ \mathbb{E}_{i \in \{1,2,3\}} \left[ |I(S_i)| \right] - |I(S_{\text{union}})| \right]. \tag{7}
$$

Given a traditional DNN, we incorporate the interaction loss into the training process using the following loss function for the classification task, without the need for additional pre-training on large-scale unlabeled datasets.

$$
\mathcal{L} = \mathcal{L}_{\text{classification}} + \alpha \mathcal{L}_{\text{interaction}}, \tag{8}
$$

where $\mathcal{L}_{\mathrm{classification}}$ denotes the standard classification loss function (e.g., cross-entropy loss), and $\alpha > 0$ is a hyper-parameter controlling the strength of the interaction loss. Please see Tab. 4 for the effects of varying $\alpha$. As shown in Fig. 10 (b), the strength of high-order interactions encoded by the DNN with $\alpha > 0$ is generally higher than with $\alpha = 0$, but it does not increase indefinitely as $\alpha$ grows. This shows the effectiveness of our interaction loss.
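A minimal PyTorch-style sketch of the sampling-based approximation in Eq. (7) is given below. The `masked_logit` callable, which must evaluate $v(x_T)$ differentiably for a set $T$ of kept regions (e.g., by applying the centroid masking inside the forward pass), is an assumed interface, and the low-order subset size is our choice.

```python
import itertools
import random

import torch


def sampled_effect(masked_logit, S):
    """Differentiable I(S) via Eq. (2); costs 2^|S| forward passes."""
    out = torch.zeros(())
    for r in range(len(S) + 1):
        for T in itertools.combinations(S, r):
            out = out + (-1) ** (len(S) - len(T)) * masked_logit(set(T))
    return out


def interaction_loss(masked_logit, n, low_order=2):
    """L'_interaction of Eq. (7): the mean |I(S_i)| of three disjoint
    low-order subsets, minus |I(S_union)| of their higher-order union."""
    pool = random.sample(range(n), 3 * low_order)  # S_1, S_2, S_3 are disjoint
    subsets = [pool[i * low_order:(i + 1) * low_order] for i in range(3)]
    low = torch.stack([sampled_effect(masked_logit, S).abs()
                       for S in subsets]).mean()
    union = sampled_effect(masked_logit, pool).abs()  # S_union = S_1 u S_2 u S_3
    return low - union  # minimizing weakens low orders, strengthens high orders


# Eq. (8): total objective for the classification task, e.g.,
# loss = F.cross_entropy(logits, labels) + alpha * interaction_loss(masked_logit, n)
```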
| Method | ModelNet40 | ScanObjectNN | No Pre-train |
| --- | --- | --- | --- |
| PointNet | 89.2 | 68.0 | ✓ |
| PointNet + Jigsaw | 89.6 | - | ✗ |
| PointNet + OcCo | 90.1 | - | ✗ |
| PointNet + $\mathcal{L}_{\text{interaction}}$ (Ours) | 90.1 | 69.0 | ✓ |
| DGCNN | 92.5 | 78.1 | ✓ |
| DGCNN + Jigsaw | 92.6 | 83.5 | ✗ |
| DGCNN + OcCo | 93.0 | <u>84.3</u> | ✗ |
| DGCNN + STRL | 93.1 | - | ✗ |
| DGCNN + CrossPoint | 92.8 | - | ✗ |
| DGCNN + IAE | **94.2** | **85.6** | ✗ |
| DGCNN + $\mathcal{L}_{\text{interaction}}$ (Ours) | <u>93.3</u> | 79.4 | ✓ |
| CurveNet | 92.8 | 79.2 | ✓ |
| CurveNet + $\mathcal{L}_{\text{interaction}}$ (Ours) | 93.1 | 82.0 | ✓ |
| GDANet | 92.3 | 78.7 | ✓ |
| GDANet + $\mathcal{L}_{\text{interaction}}$ (Ours) | 92.8 | 80.0 | ✓ |
Table 2. Classification accuracy (%) on the ModelNet40 and ScanObjectNN datasets. The best results are shown in bold and the second-best results are underlined. Our method achieves results comparable to pre-training methods while not requiring pre-training on large-scale datasets.

Experiments and results analysis. To evaluate the effectiveness of the proposed loss term, we conduct experiments on 3D point cloud classification and semantic segmentation tasks. For the classification task, we use the ModelNet40 and ScanObjectNN datasets, as described in Sec. 4.1. For ScanObjectNN, we conduct experiments on the PB_T50_RS variant, which is the most challenging one. We train PointNet and DGCNN using the proposed loss term and set $\alpha$ to 0.0005. As shown in Tab. 2, our proposed loss term consistently improves the performance of PointNet, DGCNN, CurveNet [11], and GDANet [37] on both the ModelNet40 and ScanObjectNN testing splits, compared to their original versions. Moreover, our method demonstrates performance comparable to pre-training methods, without the need for pre-training on large-scale datasets.

For the semantic segmentation task, we conduct experiments on the Stanford Large-Scale 3D Indoor Spaces (S3DIS) dataset [2]. S3DIS consists of 3D point clouds collected from six distinct large-scale indoor environments, with each point cloud annotated with per-point categorical labels. We randomly subsample 4,096 points from each original point cloud and apply 6-fold cross-validation during fine-tuning. Since the proposed interaction loss is specifically designed for 3D classification, it cannot be directly applied to segmentation tasks. Instead, we adopt a two-stage training approach: first, we train a DNN on the classification task with our interaction loss, and then we fine-tune the model on the semantic segmentation task. As shown in Tab. 3, the DGCNN using the proposed loss term achieves $86.8\%$ overall accuracy and $59.0\%$ mIoU, outperforming the majority of pre-training methods. Additionally, our loss term also improves the performance of PointNet.
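For concreteness, the two-stage recipe can be sketched as follows. Everything here apart from $\alpha = 0.0005$ is our assumption: the `classify`/`segment` heads, the batch format (including a per-sample `masked_logit` closure for Eq. (7)), and the optimizer settings are hypothetical.

```python
import torch
import torch.nn.functional as F


def two_stage_schedule(model, cls_batches, seg_batches, alpha=5e-4, lr=1e-3):
    """Stage 1: classification with L = L_cls + alpha * L'_interaction (Eq. (8)).
    Stage 2: fine-tune the same backbone on per-point semantic segmentation."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for points, label, masked_logit in cls_batches:  # stage 1
        loss = (F.cross_entropy(model.classify(points), label)
                + alpha * interaction_loss(masked_logit, n=8))
        opt.zero_grad()
        loss.backward()
        opt.step()
    for points, point_labels in seg_batches:  # stage 2
        logits = model.segment(points)  # assumed head: (B, P, C) per-point logits
        loss = F.cross_entropy(logits.transpose(1, 2), point_labels)  # (B, C, P) vs. (B, P)
        opt.zero_grad()
        loss.backward()
        opt.step()
```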
| Method | OA (%) | mIoU (%) |
| --- | --- | --- |
| PointNet | 78.5 | 47.6 |
| PointNet + $\mathcal{L}_{\text{interaction}}$ (Ours) | 82.1 | 50.8 |
| DGCNN | 84.1 | 56.1 |
| DGCNN + Jigsaw | 84.4 | 56.6 |
| DGCNN + OcCo | 85.1 | 58.5 |
| DGCNN + STRL | 84.2 | 57.1 |
| DGCNN + IAE | 85.9 | 60.7 |
| DGCNN + $\mathcal{L}_{\text{interaction}}$ (Ours) | 86.8 | 59.0 |
Table 3. Semantic segmentation on S3DIS. We report Overall Accuracy (OA) and mean Intersection over Union (mIoU) across six folds. Our method surpasses most pre-training methods.
| $\alpha$ | ModelNet40 | ScanObjectNN |
| --- | --- | --- |
| 0.0 | 92.5 | 78.1 |
| 0.0001 | 93.0 | 79.0 |
| 0.0005 | 93.3 | 79.4 |
| 0.001 | 91.3 | 78.1 |
Table 4. Classification accuracy (%) for DGCNNs trained with varying hyper-parameters $\alpha$ for the interaction loss.

Effects of the hyper-parameter $\alpha$. We train the DGCNN with various interaction loss weights $\alpha$ and evaluate the testing accuracy, as shown in Tab. 4. The accuracy first increases and then decreases as $\alpha$ rises. We attribute this to the loss term enhancing the strength of high-order interactions encoded by the DNN. At lower $\alpha$ values, the interaction loss improves the DNN's modeling of global 3D structures. However, excessively high values of $\alpha$ lead to excessively strong high-order interactions, increasing the risk of overfitting, as discussed in Conclusion 3. With an appropriately chosen $\alpha$, the interaction loss effectively enhances the training process, further supporting the common mechanism outlined in Conclusion 1.

# 6. Conclusion

In this paper, we use interactions to investigate the common mechanism underlying the effectiveness of different pre-training methods for 3D point clouds. Specifically, these methods generally enhance the strength of high-order interactions encoded by DNNs, while reducing the strength of low-order interactions. We then explore the impact of various factors on this mechanism and find that sufficient pre-training and adequate fine-tuning data further reinforce it. Additionally, we identify a potential risk that pre-training may reduce the transferability of DNNs. Based on the common mechanism, we propose a new method that directly enhances the strength of high-order interactions encoded by DNNs while weakening the strength of low-order interactions. Experiments show that our method achieves performance comparable to pre-training methods, without the need for pre-training on large-scale datasets.

# Acknowledgments

This work is partially supported by the National Natural Science Foundation of China (No. 62206170, 62376199).

# References

[1] Mohamed Afham, Isuru Dissanayake, Dinithi Dissanayake, Amaya Dharmasiri, Kanchana Thilakarathna, and Ranga Rodrigo. Crosspoint: Self-supervised cross-modal contrastive learning for 3d point cloud understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9902-9912, 2022.
[2] Iro Armeni, Ozan Sener, Amir R Zamir, Helen Jiang, Ioannis Brilakis, Martin Fischer, and Silvio Savarese. 3d semantic parsing of large-scale indoor spaces. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1534-1543, 2016.
[3] Angel X Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, et al. Shapenet: An information-rich 3d model repository. arXiv preprint arXiv:1512.03012, 2015.
[4] Guangyan Chen, Meiling Wang, Yi Yang, Kai Yu, Li Yuan, and Yufeng Yue. Pointgpt: Auto-regressively generative pre-training from point clouds. Advances in Neural Information Processing Systems, 36, 2024.
[5] Jiajing Chen, Burak Kakillioglu, Huantao Ren, and Senem Velipasalar. Why discard if you can recycle?: A recycling max pooling module for 3d point cloud analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 559-567, 2022.
[6] Lu Chen, Siyu Lou, Benhao Huang, and Quanshi Zhang. Defining and extracting generalizable interaction primitives from dnns. arXiv preprint arXiv:2401.16318, 2024.
[7] Xu Cheng, Chuntung Chu, Yi Zheng, Jie Ren, and Quanshi Zhang.
A game-theoretic taxonomy of visual concepts in dnns. arXiv preprint arXiv:2106.10938, 2021.
[8] Michel Grabisch and Marc Roubens. An axiomatic approach to the concept of interaction among players in cooperative games. International Journal of Game Theory, 28:547-565, 1999.
[9] Siyuan Huang, Yichen Xie, Song-Chun Zhu, and Yixin Zhu. Spatio-temporal self-supervised representation learning for 3d point clouds. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6535-6545, 2021.
[10] Mingjie Li and Quanshi Zhang. Does a neural network really encode symbolic concepts? In International Conference on Machine Learning, pages 20452-20469. PMLR, 2023.
[11] AAM Muzahid, Wanggen Wan, Ferdous Sohel, Lianyao Wu, and Li Hou. Curvenet: Curvature-based multitask learning deep networks for 3d object recognition. IEEE/CAA Journal of Automatica Sinica, 8(6):1177-1187, 2020.
[12] Yatian Pang, Wenxiao Wang, Francis EH Tay, Wei Liu, Yonghong Tian, and Li Yuan. Masked autoencoders for point cloud self-supervised learning. In European Conference on Computer Vision, pages 604-621. Springer, 2022.
[13] Omid Poursaeed, Tianxing Jiang, Han Qiao, Nayun Xu, and Vladimir G Kim. Self-supervised learning of point clouds via orientation estimation. In 2020 International Conference on 3D Vision (3DV), pages 1018-1028. IEEE, 2020.
[14] Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 652-660, 2017.
[15] Guocheng Qian, Yuchen Li, Houwen Peng, Jinjie Mai, Hasan Hammoud, Mohamed Elhoseiny, and Bernard Ghanem. Pointnext: Revisiting pointnet++ with improved training and scaling strategies. Advances in Neural Information Processing Systems, 35:23192-23204, 2022.
[16] Yongming Rao, Jiwen Lu, and Jie Zhou. Global-local bidirectional reasoning for unsupervised representation learning of 3d point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5376-5385, 2020.
[17] Jie Ren, Mingjie Li, Qirui Chen, Huiqi Deng, and Quanshi Zhang. Towards axiomatic, hierarchical, and symbolic explanation for deep models. arXiv preprint arXiv:2111.06206v5, 2021.
[18] Jie Ren, Zhanpeng Zhou, Qirui Chen, and Quanshi Zhang. Can we faithfully represent masked states to compute shapley values on a dnn? arXiv preprint arXiv:2105.10719, 2021.
[19] Jie Ren, Mingjie Li, Qirui Chen, Huiqi Deng, and Quanshi Zhang. Defining and quantifying the emergence of sparse concepts in dnns. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20280-20289, 2023.
[20] Qihan Ren, Yang Xu, Junpeng Zhang, Yue Xin, Dongrui Liu, and Quanshi Zhang. Towards the dynamics of a dnn learning symbolic interactions. arXiv preprint arXiv:2407.19198, 2024.
[21] Jonathan Sauder and Bjarne Sievers. Self-supervised deep learning on point clouds by reconstructing space. Advances in Neural Information Processing Systems, 32, 2019.
[22] Lloyd S Shapley. A value for n-person games. Contributions to the Theory of Games, 2, 1953.
[23] Charu Sharma and Manohar Kaul. Self-supervised few-shot learning on point clouds. Advances in Neural Information Processing Systems, 33:7212-7221, 2020.
[24] Wen Shen, Binbin Zhang, Shikun Huang, Zhihua Wei, and Quanshi Zhang. 3d-rotation-equivariant quaternion neural networks.
In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XX, pages 531-547. Springer, 2020.
[25] Wen Shen, Qihan Ren, Dongrui Liu, and Quanshi Zhang. Interpreting representation quality of dnns for 3d point cloud processing. Advances in Neural Information Processing Systems, 34:8857-8870, 2021.
[26] Wen Shen, Zhihua Wei, Shikun Huang, Binbin Zhang, Panyue Chen, Ping Zhao, and Quanshi Zhang. Verifiability and predictability: Interpreting utilities of network architectures for point cloud processing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10703-10712, 2021.
[27] Wen Shen, Zhihua Wei, Qihan Ren, Binbin Zhang, Shikun Huang, Jiaqi Fan, and Quanshi Zhang. Interpretable rotation-equivariant quaternion neural networks for 3d point cloud processing. IEEE Transactions on Pattern Analysis and Machine Intelligence, 46(5):3290-3304, 2024.
[28] Mukund Sundararajan, Kedar Dhamdhere, and Ashish Agarwal. The shapley taylor interaction index. In International Conference on Machine Learning, pages 9259-9268. PMLR, 2020.
[29] Ling Tang, Wen Shen, Zhanpeng Zhou, Yuefeng Chen, and Quanshi Zhang. Defects of convolutional decoder networks in frequency representation. arXiv preprint arXiv:2210.09020, 2022.
[30] Ali Thabet, Humam Alwassel, and Bernard Ghanem. Self-supervised learning of local features in 3d point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 938-939, 2020.
[31] Mikaela Angelina Uy, Quang-Hieu Pham, Binh-Son Hua, Thanh Nguyen, and Sai-Kit Yeung. Revisiting point cloud classification: A new benchmark dataset and classification model on real-world data. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1588-1597, 2019.
[32] Hanchen Wang, Qi Liu, Xiangyu Yue, Joan Lasenby, and Matt J Kusner. Unsupervised point cloud pre-training via occlusion completion. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9782-9792, 2021.
[33] Xin Wang, Jie Ren, Shuyun Lin, Xiangming Zhu, Yisen Wang, and Quanshi Zhang. A unified approach to interpreting and boosting adversarial transferability. arXiv preprint arXiv:2010.04055, 2020.
[34] Xin Wang, Shuyun Lin, Hao Zhang, Yufei Zhu, and Quanshi Zhang. Interpreting attributions and interactions of adversarial attacks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1095-1104, 2021.
[35] Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E Sarma, Michael M Bronstein, and Justin M Solomon. Dynamic graph cnn for learning on point clouds. ACM Transactions on Graphics (TOG), 38(5):1-12, 2019.
[36] Zhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, and Jianxiong Xiao. 3d shapenets: A deep representation for volumetric shapes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1912-1920, 2015.
[37] Mutian Xu, Junhao Zhang, Zhipeng Zhou, Mingye Xu, Xiaojuan Qi, and Yu Qiao. Learning geometry-disentangled representation for complementary understanding of 3d object point cloud. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 3056-3064, 2021.
[38] Siming Yan, Zhenpei Yang, Haoxiang Li, Chen Song, Li Guan, Hao Kang, Gang Hua, and Qixing Huang. Implicit autoencoder for point-cloud self-supervised representation
learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 14530-14542, 2023.
[39] Xumin Yu, Lulu Tang, Yongming Rao, Tiejun Huang, Jie Zhou, and Jiwen Lu. Point-bert: Pre-training 3d point cloud transformers with masked point modeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 19313-19322, 2022.
[40] Wentao Yuan, Tejas Khot, David Held, Christoph Mertz, and Martial Hebert. Pcn: Point completion network. In 2018 International Conference on 3D Vision (3DV), pages 728-737. IEEE, 2018.
[41] Hao Zhang, Sen Li, Yinchao Ma, Mingjie Li, Yichen Xie, and Quanshi Zhang. Interpreting and boosting dropout from a game-theoretic view. arXiv preprint arXiv:2009.11729, 2020.
[42] Renrui Zhang, Ziyu Guo, Peng Gao, Rongyao Fang, Bin Zhao, Dong Wang, Yu Qiao, and Hongsheng Li. Point-m2ae: Multi-scale masked autoencoders for hierarchical point cloud pre-training. Advances in Neural Information Processing Systems, 35:27061-27074, 2022.
[43] Zaiwei Zhang, Rohit Girdhar, Armand Joulin, and Ishan Misra. Self-supervised pretraining of 3d features on any point-cloud. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10252-10263, 2021.
[44] Huilin Zhou, Huijie Tang, Mingjie Li, Hao Zhang, Zhenyu Liu, and Quanshi Zhang. Explaining how a neural network plays the go game and let people learn. arXiv preprint arXiv:2310.09838, 2023.
[45] Huilin Zhou, Hao Zhang, Huiqi Deng, Dongrui Liu, Wen Shen, Shih-Han Chan, and Quanshi Zhang. Concept-level explanation for the generalization of a dnn. arXiv preprint arXiv:2302.13091, 2023.
\ No newline at end of file
diff --git a/2025/A Unified Approach to Interpreting Self-supervised Pre-training Methods for 3D Point Clouds via Interactions/images.zip b/2025/A Unified Approach to Interpreting Self-supervised Pre-training Methods for 3D Point Clouds via Interactions/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..2ab5e3887044a8c5446d92cdec9600cc3f6cd4d2
--- /dev/null
+++ b/2025/A Unified Approach to Interpreting Self-supervised Pre-training Methods for 3D Point Clouds via Interactions/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fe3e24b5c29aecfe807612eaf5cc0a84fa644c621d45b7959dc2df5b9337d7d9
+size 635230
"content": "Qiang Li, Jian Ruan, Fanghao Wu, Yuchi Chen, Zhihua Wei*, and Wen Shen* Tongji University, Shanghai, China" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 134, + 191, + 473, + 204 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 134, + 191, + 473, + 204 + ], + "spans": [ + { + "bbox": [ + 134, + 191, + 473, + 204 + ], + "type": "text", + "content": "{qli, jianruan, 2432055, 1953721, zhihua_wei, wenshen}@tongji.edu.cn" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 151, + 231, + 200, + 243 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 151, + 231, + 200, + 243 + ], + "spans": [ + { + "bbox": [ + 151, + 231, + 200, + 243 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 54, + 256, + 297, + 591 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 256, + 297, + 591 + ], + "spans": [ + { + "bbox": [ + 54, + 256, + 297, + 591 + ], + "type": "text", + "content": "Recently, many self-supervised pre-training methods have been proposed to improve the performance of deep neural networks (DNNs) for 3D point clouds processing. However, the common mechanism underlying the effectiveness of different pre-training methods remains unclear. In this paper, we use game-theoretic interactions as a unified approach to explore the common mechanism of pre-training methods. Specifically, we decompose the output score of a DNN into the sum of numerous effects of interactions, with each interaction representing a distinct 3D substructure of the input point cloud. Based on the decomposed interactions, we draw the following conclusions. (1) The common mechanism across different pre-training methods is that they enhance the strength of high-order interactions encoded by DNNs, which represent complex and global 3D structures, while reducing the strength of low-order interactions, which represent simple and local 3D structures. (2) Sufficient pre-training and adequate fine-tuning data for downstream tasks further reinforce the mechanism described above. (3) Pre-training methods carry a potential risk of reducing the transferability of features encoded by DNNs. Inspired by the observed common mechanism, we propose a new method to directly enhance the strength of high-order interactions and reduce the strength of low-order interactions encoded by DNNs, improving performance without the need for pre-training on large-scale datasets. Experiments show that our method achieves performance comparable to traditional pre-training methods." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 614, + 135, + 628 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 614, + 135, + 628 + ], + "spans": [ + { + "bbox": [ + 55, + 614, + 135, + 628 + ], + "type": "text", + "content": "1. Introduction" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 635, + 296, + 696 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 635, + 296, + 696 + ], + "spans": [ + { + "bbox": [ + 55, + 635, + 296, + 696 + ], + "type": "text", + "content": "Self-supervised pre-training methods for 3D point clouds have developed rapidly in recent years [1, 9, 13, 16, 21, 23, 30, 32, 38, 43]. Pre-training methods first train DNNs on large-scale unlabeled datasets, then fine-tune the DNNs on downstream tasks, generally enhancing their performance." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 232, + 555, + 268 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 232, + 555, + 268 + ], + "spans": [ + { + "bbox": [ + 313, + 232, + 555, + 268 + ], + "type": "text", + "content": "However, the common mechanism underlying different pretraining methods remains unclear, posing challenges for gaining insights into effective model training strategies." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 269, + 556, + 365 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 269, + 556, + 365 + ], + "spans": [ + { + "bbox": [ + 313, + 269, + 556, + 365 + ], + "type": "text", + "content": "In this paper, we aim to explore the common mechanism behind the performance improvements of different pre-training methods, thereby providing insights into pretraining, and offering better guidance for the training process. Recent studies have employed interactions to explain the reasoning processes of DNNs [20, 25, 41]. Inspired by these studies, we use interactions to provide a unified interpretation of different pre-training methods." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 365, + 556, + 520 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 365, + 556, + 520 + ], + "spans": [ + { + "bbox": [ + 313, + 365, + 556, + 520 + ], + "type": "text", + "content": "Specifically, given a point cloud " + }, + { + "bbox": [ + 313, + 365, + 556, + 520 + ], + "type": "inline_equation", + "content": "x" + }, + { + "bbox": [ + 313, + 365, + 556, + 520 + ], + "type": "text", + "content": " with " + }, + { + "bbox": [ + 313, + 365, + 556, + 520 + ], + "type": "inline_equation", + "content": "n" + }, + { + "bbox": [ + 313, + 365, + 556, + 520 + ], + "type": "text", + "content": " regions indexed by " + }, + { + "bbox": [ + 313, + 365, + 556, + 520 + ], + "type": "inline_equation", + "content": "N = \\{1,2,\\dots,n\\}" + }, + { + "bbox": [ + 313, + 365, + 556, + 520 + ], + "type": "text", + "content": ", an interaction represents the collaborations among regions within a specific 3D structure " + }, + { + "bbox": [ + 313, + 365, + 556, + 520 + ], + "type": "inline_equation", + "content": "S \\subseteq N" + }, + { + "bbox": [ + 313, + 365, + 556, + 520 + ], + "type": "text", + "content": ", where each interaction has a numerical effect " + }, + { + "bbox": [ + 313, + 365, + 556, + 520 + ], + "type": "inline_equation", + "content": "I(S)" + }, + { + "bbox": [ + 313, + 365, + 556, + 520 + ], + "type": "text", + "content": " on the network output. For example, as shown in Fig. 1, the interaction between the regions in " + }, + { + "bbox": [ + 313, + 365, + 556, + 520 + ], + "type": "inline_equation", + "content": "S_{1} = \\{\\text{wingtip}, \\text{wing root}\\}" + }, + { + "bbox": [ + 313, + 365, + 556, + 520 + ], + "type": "text", + "content": " form a concept of \"wing\", contributing " + }, + { + "bbox": [ + 313, + 365, + 556, + 520 + ], + "type": "inline_equation", + "content": "I(S_{1})" + }, + { + "bbox": [ + 313, + 365, + 556, + 520 + ], + "type": "text", + "content": " to push the classification result toward the class \"airplane\". It has been proven by [6, 44] that the output score of a DNN consistently equals the sum of the effects of all activated interactions, regardless of how the input regions are masked. In this way, interactions can be seen as the detailed inference patterns encoded by the DNN." 
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 521, + 556, + 677 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 521, + 556, + 677 + ], + "spans": [ + { + "bbox": [ + 313, + 521, + 556, + 677 + ], + "type": "text", + "content": "Based on interactions, we conduct comparative experiments to explore the common reasons behind the performance improvements of different pre-training methods. Specifically, we explore the impact of pre-training methods on the complexity of interactions encoded by DNNs. Here, the complexity refers to the number of regions contained in an interaction, i.e., the order of the interaction. A high-order interaction, e.g., " + }, + { + "bbox": [ + 313, + 521, + 556, + 677 + ], + "type": "inline_equation", + "content": "S_{3}" + }, + { + "bbox": [ + 313, + 521, + 556, + 677 + ], + "type": "text", + "content": " in Fig. 1, captures collaborations among massive point regions, representing complex and global 3D structures. In contrast, a low-order interaction, e.g., " + }, + { + "bbox": [ + 313, + 521, + 556, + 677 + ], + "type": "inline_equation", + "content": "S_{1}" + }, + { + "bbox": [ + 313, + 521, + 556, + 677 + ], + "type": "text", + "content": ", measures collaborations between a few regions, representing simple and local 3D structures. From the experiments, we draw the following key conclusions." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 680, + 555, + 693 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 680, + 555, + 693 + ], + "spans": [ + { + "bbox": [ + 313, + 680, + 555, + 693 + ], + "type": "text", + "content": "- The common mechanism across different pre-training" + } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "spans": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "text", + "content": "CVF" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "spans": [ + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "type": "text", + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 66, + 702, + 237, + 712 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 702, + 237, + 712 + ], + "spans": [ + { + "bbox": [ + 66, + 702, + 237, + 712 + ], + "type": "text", + "content": "*Corresponding authors: Wen Shen and Zhihua Wei." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 325, + 702, + 504, + 713 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 325, + 702, + 504, + 713 + ], + "spans": [ + { + "bbox": [ + 325, + 702, + 504, + 713 + ], + "type": "text", + "content": "1We divide a point cloud into " + }, + { + "bbox": [ + 325, + 702, + 504, + 713 + ], + "type": "inline_equation", + "content": "n" + }, + { + "bbox": [ + 325, + 702, + 504, + 713 + ], + "type": "text", + "content": " regions following [26]." 
+ } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "text", + "content": "27315" + } + ] + } + ], + "index": 16 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 57, + 70, + 394, + 215 + ], + "blocks": [ + { + "bbox": [ + 57, + 70, + 394, + 215 + ], + "lines": [ + { + "bbox": [ + 57, + 70, + 394, + 215 + ], + "spans": [ + { + "bbox": [ + 57, + 70, + 394, + 215 + ], + "type": "image", + "image_path": "df7338f800ae385b4dddd39cd83328592a92a96ae7466c162041d4f73216074f.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 400, + 72, + 555, + 217 + ], + "blocks": [ + { + "bbox": [ + 400, + 72, + 555, + 217 + ], + "lines": [ + { + "bbox": [ + 400, + 72, + 555, + 217 + ], + "spans": [ + { + "bbox": [ + 400, + 72, + 555, + 217 + ], + "type": "image", + "image_path": "571d909e20d4adaf0184bed55e90c7839287c07b105246f692cd9db6857ef0af.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 54, + 217, + 555, + 271 + ], + "lines": [ + { + "bbox": [ + 54, + 217, + 555, + 271 + ], + "spans": [ + { + "bbox": [ + 54, + 217, + 555, + 271 + ], + "type": "text", + "content": "Figure 1. (a) Illustration of how interactions can be used to explain a DNN. Given an input point cloud with " + }, + { + "bbox": [ + 54, + 217, + 555, + 271 + ], + "type": "inline_equation", + "content": "n" + }, + { + "bbox": [ + 54, + 217, + 555, + 271 + ], + "type": "text", + "content": " regions, the output score of the DNN can be decomposed into the sum of the numerical effects of " + }, + { + "bbox": [ + 54, + 217, + 555, + 271 + ], + "type": "inline_equation", + "content": "2^{n}" + }, + { + "bbox": [ + 54, + 217, + 555, + 271 + ], + "type": "text", + "content": " interactions, where each interaction " + }, + { + "bbox": [ + 54, + 217, + 555, + 271 + ], + "type": "inline_equation", + "content": "S" + }, + { + "bbox": [ + 54, + 217, + 555, + 271 + ], + "type": "text", + "content": " encodes the collaborations among the point cloud regions in the set " + }, + { + "bbox": [ + 54, + 217, + 555, + 271 + ], + "type": "inline_equation", + "content": "S" + }, + { + "bbox": [ + 54, + 217, + 555, + 271 + ], + "type": "text", + "content": ". (b) Comparing the strength of interactions across different orders encoded by the DGCNN trained from scratch (scr) and the DGCNN using a pre-training method (pt). Results show that the pre-trained DGCNN encodes stronger high-order interactions and weaker low-order interactions than the DGCNN trained from scratch." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "bbox": [ + 63, + 286, + 295, + 357 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 286, + 295, + 357 + ], + "spans": [ + { + "bbox": [ + 63, + 286, + 295, + 357 + ], + "type": "text", + "content": "methods is that they enhance the strength of high-order interactions encoded by DNNs while reducing the strength of low-order interactions. This common mechanism indicates that pre-training methods enhance the DNNs' ability to capture global 3D structures, while reducing their reliance on local 3D structures." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 358, + 295, + 514 + ], + "type": "list", + "angle": 0, + "index": 6, + "blocks": [ + { + "bbox": [ + 55, + 358, + 295, + 441 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 358, + 295, + 441 + ], + "spans": [ + { + "bbox": [ + 55, + 358, + 295, + 441 + ], + "type": "text", + "content": "- Sufficient pre-training and adequate fine-tuning data for downstream tasks further reinforce the mechanism described above. We observe that the strength of high-order interactions increases with the number of pretraining epochs while the strength of low-order interactions decreases. Additionally, increasing the amount of data for downstream tasks also amplifies this effect." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 442, + 295, + 514 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 442, + 295, + 514 + ], + "spans": [ + { + "bbox": [ + 55, + 442, + 295, + 514 + ], + "type": "text", + "content": "- Pre-training methods carry a potential risk of reducing the transferability of features encoded by DNNs. We observe that the performance of the pre-trained DNNs may decrease on unseen test datasets, possibly due to pretraining methods causing the DNNs to encode high-order interactions with excessively high strength." + } + ] + } + ], + "index": 5 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 55, + 515, + 295, + 611 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 515, + 295, + 611 + ], + "spans": [ + { + "bbox": [ + 55, + 515, + 295, + 611 + ], + "type": "text", + "content": "Building on the common mechanism we identified, we propose a new method to directly enhance the strength of high-order interactions encoded by DNNs while reducing the strength of low-order interactions. Experimental results on classification and semantic segmentation benchmarks show that our method achieves performance comparable to pre-training methods, without the need for pre-training on large-scale unlabeled datasets." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 621, + 139, + 633 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 621, + 139, + 633 + ], + "spans": [ + { + "bbox": [ + 55, + 621, + 139, + 633 + ], + "type": "text", + "content": "2. Related work" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 55, + 641, + 295, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 641, + 295, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 641, + 295, + 713 + ], + "type": "text", + "content": "Self-supervised learning (SSL) of 3D point clouds. Recently, 3D point cloud processing has developed rapidly [5, 11, 14, 15, 24, 27, 35, 37, 40], with many self-supervised methods proposed to learn representations from individual 3D objects [1, 9, 13, 16, 21, 23, 30, 32, 38, 43]. The goal of SSL is to design pretext tasks to help the model learn the" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 287, + 555, + 335 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 287, + 555, + 335 + ], + "spans": [ + { + "bbox": [ + 313, + 287, + 555, + 335 + ], + "type": "text", + "content": "data distribution and features in advance, preparing it for downstream tasks. In this paper, we explore the common mechanism behind the performance improvement of the following five widely used open-source pre-training methods." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 314, + 338, + 553, + 530 + ], + "type": "list", + "angle": 0, + "index": 16, + "blocks": [ + { + "bbox": [ + 314, + 338, + 553, + 374 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 338, + 553, + 374 + ], + "spans": [ + { + "bbox": [ + 314, + 338, + 553, + 374 + ], + "type": "text", + "content": "- Occlusion Completion (OcCo) [32]. OcCo masks occluded points from a camera view and trains an encoder-decoder model to reconstruct these missing points." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 314, + 375, + 553, + 398 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 375, + 553, + 398 + ], + "spans": [ + { + "bbox": [ + 314, + 375, + 553, + 398 + ], + "type": "text", + "content": "- Jigsaw [21]. Jigsaw trains a model to reconstruct point clouds with parts rearranged in random order." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 314, + 399, + 553, + 446 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 399, + 553, + 446 + ], + "spans": [ + { + "bbox": [ + 314, + 399, + 553, + 446 + ], + "type": "text", + "content": "- Implicit Auto-encoder (IAE) [38]. IAE trains the model as an encoder to map the point clouds to a high-dimensional space and uses a decoder to reconstruct the encoder's outputs back into 3D geometry." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 314, + 447, + 553, + 494 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 447, + 553, + 494 + ], + "spans": [ + { + "bbox": [ + 314, + 447, + 553, + 494 + ], + "type": "text", + "content": "- Spatio-Temporal Representation Learning (STRL) [9]. STRL captures spatio-temporal information from 3D sequences by using two temporally correlated frames to learn invariant representations." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 314, + 495, + 553, + 530 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 495, + 553, + 530 + ], + "spans": [ + { + "bbox": [ + 314, + 495, + 553, + 530 + ], + "type": "text", + "content": "- CrossPoint [1]. CrossPoint learns transferable representations by maximizing the agreement between 3D point clouds and corresponding 2D images." + } + ] + } + ], + "index": 15 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 313, + 534, + 555, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 534, + 555, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 534, + 555, + 713 + ], + "type": "text", + "content": "Using game-theoretical interactions to explain DNNs. Game-theoretical interactions provide a solid theoretical foundation for explaining DNNs. Ren et al. [17] proposed a mathematical formulation for the concepts encoded by a DNN, while Ren et al. [18] further leveraged these concepts to define optimal baseline values for Shapley values. Li et al. [10] provided a theoretical guarantee that interactions accurately capture the true concepts encoded by a DNN. At the application level, interactions have been widely used to explain the representation capacity of DNNs from various perspectives, including adversarial robustness [17, 34], adversarial transferability [33], and generalization power [41, 45]. In this paper, we use interactions to investigate the common mechanism underlying different pre-training methods for 3D point clouds." 
+ } + ] + } + ], + "index": 17 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "type": "text", + "content": "27316" + } + ] + } + ], + "index": 18 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 57, + 70, + 295, + 128 + ], + "blocks": [ + { + "bbox": [ + 57, + 70, + 295, + 128 + ], + "lines": [ + { + "bbox": [ + 57, + 70, + 295, + 128 + ], + "spans": [ + { + "bbox": [ + 57, + 70, + 295, + 128 + ], + "type": "image", + "image_path": "3289a2b5798488eb7ebe8ac75c333eef9e141c31637aeebe7028b38bc5fad491.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 56, + 129, + 293, + 140 + ], + "lines": [ + { + "bbox": [ + 56, + 129, + 293, + 140 + ], + "spans": [ + { + "bbox": [ + 56, + 129, + 293, + 140 + ], + "type": "text", + "content": "Figure 2. Process of dividing an input point cloud into " + }, + { + "bbox": [ + 56, + 129, + 293, + 140 + ], + "type": "inline_equation", + "content": "n" + }, + { + "bbox": [ + 56, + 129, + 293, + 140 + ], + "type": "text", + "content": " regions." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 148, + 281, + 163 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 148, + 281, + 163 + ], + "spans": [ + { + "bbox": [ + 55, + 148, + 281, + 163 + ], + "type": "text", + "content": "3. Interactions in 3D point cloud processing" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 168, + 296, + 287 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 168, + 296, + 287 + ], + "spans": [ + { + "bbox": [ + 55, + 168, + 296, + 287 + ], + "type": "text", + "content": "Preliminaries: interactions. As a new explanatory metric, interaction has been used to clarify the inference logic [7], generalization power [34], and robustness of a DNN [45]. It can be viewed as a universal measure due to its close theoretical connections with other metrics. As proven by [19], the Harsanyi interaction serves as the basis for existing game-theoretic attributions and interactions, including the Shapley value [22], the Shapley interaction index [8], and the Shapley Taylor interaction index [28]. Please see the supplementary material for additional details." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 289, + 296, + 396 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 289, + 296, + 396 + ], + "spans": [ + { + "bbox": [ + 55, + 289, + 296, + 396 + ], + "type": "text", + "content": "Quantifying interactions for 3D point cloud processing. We extend interactions to 3D point clouds. Considering an point cloud " + }, + { + "bbox": [ + 55, + 289, + 296, + 396 + ], + "type": "inline_equation", + "content": "x \\in \\mathbb{R}^{P \\times 3}" + }, + { + "bbox": [ + 55, + 289, + 296, + 396 + ], + "type": "text", + "content": ", we divide it into " + }, + { + "bbox": [ + 55, + 289, + 296, + 396 + ], + "type": "inline_equation", + "content": "n" + }, + { + "bbox": [ + 55, + 289, + 296, + 396 + ], + "type": "text", + "content": " regions, as shown in Fig. 2. 
First, we apply the farthest point sampling (FPS) algorithm to select " + }, + { + "bbox": [ + 55, + 289, + 296, + 396 + ], + "type": "inline_equation", + "content": "n" + }, + { + "bbox": [ + 55, + 289, + 296, + 396 + ], + "type": "text", + "content": " points from the point cloud as the centers of each region. Then, we use the " + }, + { + "bbox": [ + 55, + 289, + 296, + 396 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 55, + 289, + 296, + 396 + ], + "type": "text", + "content": "-dimensional tree (KDTree) algorithm to assign the remaining points to their nearest region. By doing so, we divide the input point cloud " + }, + { + "bbox": [ + 55, + 289, + 296, + 396 + ], + "type": "inline_equation", + "content": "x" + }, + { + "bbox": [ + 55, + 289, + 296, + 396 + ], + "type": "text", + "content": " into " + }, + { + "bbox": [ + 55, + 289, + 296, + 396 + ], + "type": "inline_equation", + "content": "n" + }, + { + "bbox": [ + 55, + 289, + 296, + 396 + ], + "type": "text", + "content": " regions, indexed by " + }, + { + "bbox": [ + 55, + 289, + 296, + 396 + ], + "type": "inline_equation", + "content": "N = \\{1, 2, \\dots, n\\}" + }, + { + "bbox": [ + 55, + 289, + 296, + 396 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 396, + 296, + 467 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 396, + 296, + 467 + ], + "spans": [ + { + "bbox": [ + 55, + 396, + 296, + 467 + ], + "type": "text", + "content": "Given a trained DNN " + }, + { + "bbox": [ + 55, + 396, + 296, + 467 + ], + "type": "inline_equation", + "content": "v: \\mathbb{R}^{P \\times 3} \\to \\mathbb{R}" + }, + { + "bbox": [ + 55, + 396, + 296, + 467 + ], + "type": "text", + "content": ", we follow [10, 20, 25] to define the DNN's output score as " + }, + { + "bbox": [ + 55, + 396, + 296, + 467 + ], + "type": "inline_equation", + "content": "v(x) = \\log \\frac{p}{1 - p}" + }, + { + "bbox": [ + 55, + 396, + 296, + 467 + ], + "type": "text", + "content": " to represent the classification confidence, where " + }, + { + "bbox": [ + 55, + 396, + 296, + 467 + ], + "type": "inline_equation", + "content": "p" + }, + { + "bbox": [ + 55, + 396, + 296, + 467 + ], + "type": "text", + "content": " is the output probability of the ground truth class. Then, the output score can be rewritten as the sum of the numerical effects of all " + }, + { + "bbox": [ + 55, + 396, + 296, + 467 + ], + "type": "inline_equation", + "content": "2^n" + }, + { + "bbox": [ + 55, + 396, + 296, + 467 + ], + "type": "text", + "content": " interactions between the point regions, as follows." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 136, + 471, + 295, + 496 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 136, + 471, + 295, + 496 + ], + "spans": [ + { + "bbox": [ + 136, + 471, + 295, + 496 + ], + "type": "interline_equation", + "content": "v (x) = \\sum_ {S \\subseteq N} I (S). 
\\tag {1}", + "image_path": "626d5fa9580e8a5b28e474b7df892bc3e5205f0291e1fe70d873958ef57c1e47.jpg" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 501, + 295, + 525 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 501, + 295, + 525 + ], + "spans": [ + { + "bbox": [ + 55, + 501, + 295, + 525 + ], + "type": "text", + "content": "Here, " + }, + { + "bbox": [ + 55, + 501, + 295, + 525 + ], + "type": "inline_equation", + "content": "I(S)" + }, + { + "bbox": [ + 55, + 501, + 295, + 525 + ], + "type": "text", + "content": " represents the numerical effect of the interaction among the point regions in " + }, + { + "bbox": [ + 55, + 501, + 295, + 525 + ], + "type": "inline_equation", + "content": "S \\subseteq N" + }, + { + "bbox": [ + 55, + 501, + 295, + 525 + ], + "type": "text", + "content": ", defined as follows." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 107, + 529, + 295, + 555 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 529, + 295, + 555 + ], + "spans": [ + { + "bbox": [ + 107, + 529, + 295, + 555 + ], + "type": "interline_equation", + "content": "I (S) \\triangleq \\sum_ {T \\subseteq S} (- 1) ^ {| S | - | T |} \\cdot v \\left(x _ {T}\\right), \\tag {2}", + "image_path": "b33a935d0b0fe96baf545129dfa12dd65ed825a704056d321ddea0a906606095.jpg" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 55, + 558, + 295, + 594 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 558, + 295, + 594 + ], + "spans": [ + { + "bbox": [ + 55, + 558, + 295, + 594 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 55, + 558, + 295, + 594 + ], + "type": "inline_equation", + "content": "x_{T}" + }, + { + "bbox": [ + 55, + 558, + 295, + 594 + ], + "type": "text", + "content": " represents the input point cloud with the regions in " + }, + { + "bbox": [ + 55, + 558, + 295, + 594 + ], + "type": "inline_equation", + "content": "T \\subseteq N" + }, + { + "bbox": [ + 55, + 558, + 295, + 594 + ], + "type": "text", + "content": " unchanged, while the regions in " + }, + { + "bbox": [ + 55, + 558, + 295, + 594 + ], + "type": "inline_equation", + "content": "N \\backslash T" + }, + { + "bbox": [ + 55, + 558, + 295, + 594 + ], + "type": "text", + "content": " are masked by replacing them with the centroid of the point cloud." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 55, + 594, + 296, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 594, + 296, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 594, + 296, + 714 + ], + "type": "text", + "content": "Understanding interactions in 3D point cloud processing. 
The interaction extracted from the input point cloud " + }, + { + "bbox": [ + 55, + 594, + 296, + 714 + ], + "type": "inline_equation", + "content": "x" + }, + { + "bbox": [ + 55, + 594, + 296, + 714 + ], + "type": "text", + "content": " encodes an AND relationship among the point regions in " + }, + { + "bbox": [ + 55, + 594, + 296, + 714 + ], + "type": "inline_equation", + "content": "S" + }, + { + "bbox": [ + 55, + 594, + 296, + 714 + ], + "type": "text", + "content": ", with the numerical effect " + }, + { + "bbox": [ + 55, + 594, + 296, + 714 + ], + "type": "inline_equation", + "content": "I(S)" + }, + { + "bbox": [ + 55, + 594, + 296, + 714 + ], + "type": "text", + "content": " representing the combined contribution of these regions to the output score " + }, + { + "bbox": [ + 55, + 594, + 296, + 714 + ], + "type": "inline_equation", + "content": "v(x)" + }, + { + "bbox": [ + 55, + 594, + 296, + 714 + ], + "type": "text", + "content": ". As shown in Fig. 1, when the point regions in the set " + }, + { + "bbox": [ + 55, + 594, + 296, + 714 + ], + "type": "inline_equation", + "content": "S_{1} = \\{\\text{wingtip}, \\text{wing root}\\}" + }, + { + "bbox": [ + 55, + 594, + 296, + 714 + ], + "type": "text", + "content": " are unmasked, they form a \"wing\" pattern and contribute a numerical effect " + }, + { + "bbox": [ + 55, + 594, + 296, + 714 + ], + "type": "inline_equation", + "content": "I(S_{1})" + }, + { + "bbox": [ + 55, + 594, + 296, + 714 + ], + "type": "text", + "content": " that pushes the output score " + }, + { + "bbox": [ + 55, + 594, + 296, + 714 + ], + "type": "inline_equation", + "content": "v(x)" + }, + { + "bbox": [ + 55, + 594, + 296, + 714 + ], + "type": "text", + "content": " towards the \"airplane\" category. Masking any region in " + }, + { + "bbox": [ + 55, + 594, + 296, + 714 + ], + "type": "inline_equation", + "content": "S_{1}" + }, + { + "bbox": [ + 55, + 594, + 296, + 714 + ], + "type": "text", + "content": " will deactivate this AND" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 72, + 554, + 144 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 72, + 554, + 144 + ], + "spans": [ + { + "bbox": [ + 313, + 72, + 554, + 144 + ], + "type": "text", + "content": "interaction and remove " + }, + { + "bbox": [ + 313, + 72, + 554, + 144 + ], + "type": "inline_equation", + "content": "I(S_{1})" + }, + { + "bbox": [ + 313, + 72, + 554, + 144 + ], + "type": "text", + "content": " from " + }, + { + "bbox": [ + 313, + 72, + 554, + 144 + ], + "type": "inline_equation", + "content": "v(x)" + }, + { + "bbox": [ + 313, + 72, + 554, + 144 + ], + "type": "text", + "content": ". In fact, Tang et al. [29] have proven that the interaction satisfies the universal matching property, which states that the DNN's inference score " + }, + { + "bbox": [ + 313, + 72, + 554, + 144 + ], + "type": "inline_equation", + "content": "v(x_{T})" + }, + { + "bbox": [ + 313, + 72, + 554, + 144 + ], + "type": "text", + "content": " can always be faithfully explained as the sum of the numerical effects of all activated interactions, regardless of how the point cloud regions are masked." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 144, + 555, + 252 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 144, + 555, + 252 + ], + "spans": [ + { + "bbox": [ + 313, + 144, + 555, + 252 + ], + "type": "text", + "content": "Theorem 1 (Universal matching property, proven by [29]). 
Given an input sample " + }, + { + "bbox": [ + 313, + 144, + 555, + 252 + ], + "type": "inline_equation", + "content": "x" + }, + { + "bbox": [ + 313, + 144, + 555, + 252 + ], + "type": "text", + "content": " with " + }, + { + "bbox": [ + 313, + 144, + 555, + 252 + ], + "type": "inline_equation", + "content": "n" + }, + { + "bbox": [ + 313, + 144, + 555, + 252 + ], + "type": "text", + "content": " variables indexed by " + }, + { + "bbox": [ + 313, + 144, + 555, + 252 + ], + "type": "inline_equation", + "content": "N = \\{1,2,\\dots,n\\}" + }, + { + "bbox": [ + 313, + 144, + 555, + 252 + ], + "type": "text", + "content": ", we can generate " + }, + { + "bbox": [ + 313, + 144, + 555, + 252 + ], + "type": "inline_equation", + "content": "2^n" + }, + { + "bbox": [ + 313, + 144, + 555, + 252 + ], + "type": "text", + "content": " masked samples " + }, + { + "bbox": [ + 313, + 144, + 555, + 252 + ], + "type": "inline_equation", + "content": "x_{T}" + }, + { + "bbox": [ + 313, + 144, + 555, + 252 + ], + "type": "text", + "content": " where " + }, + { + "bbox": [ + 313, + 144, + 555, + 252 + ], + "type": "inline_equation", + "content": "T \\subseteq N" + }, + { + "bbox": [ + 313, + 144, + 555, + 252 + ], + "type": "text", + "content": ". Let us construct the following surrogate logical model " + }, + { + "bbox": [ + 313, + 144, + 555, + 252 + ], + "type": "inline_equation", + "content": "\\phi(\\cdot)" + }, + { + "bbox": [ + 313, + 144, + 555, + 252 + ], + "type": "text", + "content": " that performs inference using the interactions extracted from the DNN " + }, + { + "bbox": [ + 313, + 144, + 555, + 252 + ], + "type": "inline_equation", + "content": "v(\\cdot)" + }, + { + "bbox": [ + 313, + 144, + 555, + 252 + ], + "type": "text", + "content": " on the sample " + }, + { + "bbox": [ + 313, + 144, + 555, + 252 + ], + "type": "inline_equation", + "content": "x" + }, + { + "bbox": [ + 313, + 144, + 555, + 252 + ], + "type": "text", + "content": ". Then, the output of the surrogate logical model " + }, + { + "bbox": [ + 313, + 144, + 555, + 252 + ], + "type": "inline_equation", + "content": "\\phi(\\cdot)" + }, + { + "bbox": [ + 313, + 144, + 555, + 252 + ], + "type": "text", + "content": " can always match the output of the DNN " + }, + { + "bbox": [ + 313, + 144, + 555, + 252 + ], + "type": "inline_equation", + "content": "v(\\cdot)" + }, + { + "bbox": [ + 313, + 144, + 555, + 252 + ], + "type": "text", + "content": ", regardless of how the input sample is masked." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 323, + 256, + 553, + 319 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 323, + 256, + 553, + 319 + ], + "spans": [ + { + "bbox": [ + 323, + 256, + 553, + 319 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} \\forall T \\subseteq N, \\ \\phi(x_{T}) = v(x_{T}), \\\\ \\phi\\left(x_{T}\\right) = v\\left(x_{\\emptyset}\\right) + \\sum_{S \\subseteq N} I(S) \\cdot \\mathbb{1}\\left(x_{T} \\text{ triggers AND relation } S\\right) \\tag{3} \\\\ = v(x_{\\emptyset}) + \\sum_{\\emptyset \\neq S \\subseteq T} I(S). 
\\end{array}", + "image_path": "1f43af90f132ffd555454b86c3ac0f3c6ea0e5cb401601a835c22c90d675f320.jpg" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 324, + 554, + 445 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 324, + 554, + 445 + ], + "spans": [ + { + "bbox": [ + 313, + 324, + 554, + 445 + ], + "type": "text", + "content": "Defining and quantifying the representation complexity of DNNs. The order of an interaction is defined as " + }, + { + "bbox": [ + 313, + 324, + 554, + 445 + ], + "type": "inline_equation", + "content": "m = |S|" + }, + { + "bbox": [ + 313, + 324, + 554, + 445 + ], + "type": "text", + "content": ", which reflects the representation complexity of DNNs. High-order interactions measure the effects of collaborations among numerous point cloud regions, representing global and complex 3D structures, while low-order interactions measure the effects of collaborations between a few point regions, representing simple and local 3D structures. We introduce a new metric for measuring the representation complexity of DNNs, as follows." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 365, + 449, + 553, + 475 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 365, + 449, + 553, + 475 + ], + "spans": [ + { + "bbox": [ + 365, + 449, + 553, + 475 + ], + "type": "interline_equation", + "content": "\\kappa^{(m)} \\triangleq \\frac{\\mathbb{E}_{x} \\mathbb{E}_{S \\subseteq N, |S| = m}\\left[|I(S)|\\right]}{Z}, \\tag{4}", + "image_path": "7695ae362b970f177a2607109fdba204e1054afa790da82d53c2588abfdf1e1f.jpg" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 313, + 479, + 554, + 587 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 479, + 554, + 587 + ], + "spans": [ + { + "bbox": [ + 313, + 479, + 554, + 587 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 479, + 554, + 587 + ], + "type": "inline_equation", + "content": "\\mathbb{E}" + }, + { + "bbox": [ + 313, + 479, + 554, + 587 + ], + "type": "text", + "content": " denotes the mathematical expectation, and " + }, + { + "bbox": [ + 313, + 479, + 554, + 587 + ], + "type": "inline_equation", + "content": "Z = \\mathbb{E}_x\\mathbb{E}_{S\\subseteq N}[|I(S)|]" + }, + { + "bbox": [ + 313, + 479, + 554, + 587 + ], + "type": "text", + "content": " is a normalization term to ensure fair comparisons across different DNNs. Here, " + }, + { + "bbox": [ + 313, + 479, + 554, + 587 + ], + "type": "inline_equation", + "content": "\\kappa^{(m)}" + }, + { + "bbox": [ + 313, + 479, + 554, + 587 + ], + "type": "text", + "content": " measures the normalized average strength of the " + }, + { + "bbox": [ + 313, + 479, + 554, + 587 + ], + "type": "inline_equation", + "content": "m" + }, + { + "bbox": [ + 313, + 479, + 554, + 587 + ], + "type": "text", + "content": "-th order interactions. If the value of " + }, + { + "bbox": [ + 313, + 479, + 554, + 587 + ], + "type": "inline_equation", + "content": "\\kappa^{(m)}" + }, + { + "bbox": [ + 313, + 479, + 554, + 587 + ], + "type": "text", + "content": " at a high order is larger than that at a low order, the DNN's representation complexity is sufficient to capture global and complex 3D structures. Otherwise, the DNN's representation complexity is limited to encoding only local and simple 3D structures."
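The region division (FPS followed by nearest-center assignment), the centroid-based masking that defines x_T, and the Harsanyi interaction of Eq. (2) translate directly into code. Below is a minimal NumPy sketch, not the authors' released implementation: `model_score` is a hypothetical stand-in for the log-odds output v(x) defined above, all function names are illustrative, and the brute-force enumeration of all 2^n subsets is tractable only for small n (e.g., n = 8 costs 256 forward passes per sample).

```python
from itertools import combinations

import numpy as np


def fps(points, n):
    """Farthest point sampling: choose n region centers from a (P, 3) array."""
    centers = [0]                                    # arbitrary starting point
    d = np.linalg.norm(points - points[0], axis=1)   # distance to nearest chosen center
    for _ in range(n - 1):
        centers.append(int(d.argmax()))              # farthest remaining point
        d = np.minimum(d, np.linalg.norm(points - points[centers[-1]], axis=1))
    return np.asarray(centers)


def assign_regions(points, center_idx):
    """Assign every point to its nearest center (the paper uses a KDTree;
    brute force is used here for clarity)."""
    d = np.linalg.norm(points[:, None, :] - points[center_idx][None, :, :], axis=2)
    return d.argmin(axis=1)                          # region label in {0, ..., n-1}


def mask_regions(points, labels, T):
    """Build x_T: keep the regions in T, replace all other points by the centroid."""
    x = points.copy()
    x[~np.isin(labels, list(T))] = points.mean(axis=0)
    return x


def harsanyi_interactions(model_score, points, n=8):
    """I(S) = sum_{T subseteq S} (-1)^(|S|-|T|) v(x_T)  (Eq. 2), for every S subseteq N."""
    labels = assign_regions(points, fps(points, n))
    subsets = [frozenset(c) for m in range(n + 1) for c in combinations(range(n), m)]
    v = {T: model_score(mask_regions(points, labels, T)) for T in subsets}  # 2^n calls
    I = {S: sum((-1) ** (len(S) - len(T)) * v[frozenset(T)]
                for m in range(len(S) + 1) for T in combinations(sorted(S), m))
         for S in subsets}
    return I, v
```

As a sanity check on Eq. (1) and Theorem 1, `sum(I.values())` should reproduce `v[frozenset(range(n))]` up to floating-point error, and each `v[T]` should equal the sum of `I[S]` over all subsets `S` of `T`.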
+ } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 313, + 588, + 553, + 624 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 588, + 553, + 624 + ], + "spans": [ + { + "bbox": [ + 313, + 588, + 553, + 624 + ], + "type": "text", + "content": "We further propose the following metrics to measure the strength of high-order interactions and low-order interactions encoded by the DNN." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 318, + 627, + 553, + 681 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 318, + 627, + 553, + 681 + ], + "spans": [ + { + "bbox": [ + 318, + 627, + 553, + 681 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} \\kappa^ {\\text {h i g h}} = \\sum_ {m \\in \\Omega^ {\\text {h i g h}}} \\kappa^ {(m)}, s. t. \\Omega^ {\\text {h i g h}} \\stackrel {{\\text {d e f}}} {{=}} \\left\\{m \\mid \\lceil \\frac {2}{3} n \\rceil < m \\leq n \\right\\}, \\\\ \\kappa^ {\\text {l o w}} = \\sum_ {m \\in \\Omega^ {\\text {l o w}}} \\kappa^ {(m)}, s. t. \\Omega^ {\\text {l o w}} \\stackrel {{\\text {d e f}}} {{=}} \\left\\{m \\mid 1 \\leq m \\leq \\lceil \\frac {1}{3} n \\rceil \\right\\}, \\end{array} \\tag {5}", + "image_path": "f034d432478a25c8b11420b3c8e2a1f39233dc0aab6506c0f81d5c0908f6fe23.jpg" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 313, + 689, + 555, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 689, + 555, + 714 + ], + "spans": [ + { + "bbox": [ + 313, + 689, + 555, + 714 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 689, + 555, + 714 + ], + "type": "inline_equation", + "content": "\\Omega^{\\mathrm{high}}" + }, + { + "bbox": [ + 313, + 689, + 555, + 714 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 689, + 555, + 714 + ], + "type": "inline_equation", + "content": "\\Omega^{\\mathrm{low}}" + }, + { + "bbox": [ + 313, + 689, + 555, + 714 + ], + "type": "text", + "content": " denote the ranges of high-order and low-order interactions, respectively." 
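Given per-sample interaction values, the order-wise strength of Eq. (4) and the aggregates of Eq. (5) are a few lines of bookkeeping. A sketch under the same assumptions as above: `I_per_sample` is a list of dicts as returned by the hypothetical `harsanyi_interactions`, one per point cloud, so averaging over the list plays the role of the expectation over x.

```python
import math

import numpy as np


def kappa_metrics(I_per_sample, n):
    """kappa^(m) of Eq. (4) and kappa^high / kappa^low of Eq. (5)."""
    order_mean = np.zeros(n + 1)     # E_x E_{|S|=m} |I(S)|, accumulated per order m
    z = 0.0                          # E_x E_{S subseteq N} |I(S)|, the normalizer Z
    for I in I_per_sample:
        by_order = [[] for _ in range(n + 1)]
        for S, val in I.items():
            by_order[len(S)].append(abs(val))
        order_mean += np.array([np.mean(v) if v else 0.0 for v in by_order])
        z += np.mean([abs(val) for val in I.values()])
    order_mean /= len(I_per_sample)
    kappa = order_mean / (z / len(I_per_sample))
    # Eq. (5): Omega^high = {m : ceil(2n/3) < m <= n}, Omega^low = {m : 1 <= m <= ceil(n/3)}
    kappa_high = kappa[math.ceil(2 * n / 3) + 1 : n + 1].sum()
    kappa_low = kappa[1 : math.ceil(n / 3) + 1].sum()
    return kappa, kappa_high, kappa_low
```

Plotted against m, `kappa` is the per-order curve compared in Fig. 1 (b); `kappa_high` and `kappa_low` are the two scalar summaries the analysis in Sec. 4 relies on.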
+ } + ] + } + ], + "index": 19 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "text", + "content": "27317" + } + ] + } + ], + "index": 20 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 57, + 70, + 203, + 150 + ], + "blocks": [ + { + "bbox": [ + 57, + 70, + 203, + 150 + ], + "lines": [ + { + "bbox": [ + 57, + 70, + 203, + 150 + ], + "spans": [ + { + "bbox": [ + 57, + 70, + 203, + 150 + ], + "type": "image", + "image_path": "67782b0baf6715b310e74bd4a3d170c23d7160ddc09fccb8198fea6e40ff40e5.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 203, + 70, + 317, + 150 + ], + "blocks": [ + { + "bbox": [ + 203, + 70, + 317, + 150 + ], + "lines": [ + { + "bbox": [ + 203, + 70, + 317, + 150 + ], + "spans": [ + { + "bbox": [ + 203, + 70, + 317, + 150 + ], + "type": "image", + "image_path": "7bd74d7397421b56e077a1cf24ea8db8769d47f46f8c4c5ef7afe2e6847ef828.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 318, + 71, + 432, + 150 + ], + "blocks": [ + { + "bbox": [ + 318, + 71, + 432, + 150 + ], + "lines": [ + { + "bbox": [ + 318, + 71, + 432, + 150 + ], + "spans": [ + { + "bbox": [ + 318, + 71, + 432, + 150 + ], + "type": "image", + "image_path": "2d4ad121797fb8d91330c85b0aa60c8b6db70c00e96fc0b651c0a4a601792d2e.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 57, + 150, + 203, + 217 + ], + "blocks": [ + { + "bbox": [ + 57, + 150, + 203, + 217 + ], + "lines": [ + { + "bbox": [ + 57, + 150, + 203, + 217 + ], + "spans": [ + { + "bbox": [ + 57, + 150, + 203, + 217 + ], + "type": "image", + "image_path": "766c06325e4154bea1e38a0f39b207067288d0d6955c7cf7c8cdf979b57fde53.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 203, + 152, + 317, + 217 + ], + "blocks": [ + { + "bbox": [ + 203, + 152, + 317, + 217 + ], + "lines": [ + { + "bbox": [ + 203, + 152, + 317, + 217 + ], + "spans": [ + { + "bbox": [ + 203, + 152, + 317, + 217 + ], + "type": "image", + "image_path": "5cffe96ae4f6bd91c1e2184252f0c73e72a1f8e7e956f170434a17bcd890a18a.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 318, + 152, + 432, + 217 + ], + "blocks": [ + { + "bbox": [ + 318, + 152, + 432, + 217 + ], + "lines": [ + { + "bbox": [ + 318, + 152, + 432, + 217 + ], + "spans": [ + { + "bbox": [ + 318, + 152, + 432, + 217 + ], + "type": "image", + "image_path": "2224f7471994640d18c96b588f8ca20940dc565deddc8597e39b0b3ce677752c.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 58, + 217, + 203, + 295 + ], + "blocks": [ + { + "bbox": [ + 58, + 217, + 203, + 295 + ], + "lines": [ + { + "bbox": [ + 58, + 217, + 203, + 295 + ], + "spans": [ + { + "bbox": [ + 58, + 217, + 203, + 295 + ], + "type": "image", + "image_path": "06db17dd0b0b4c5e9c0c318dddeb42bcd5217b19031bc05f0ec0541021418599.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + }, + { + 
"bbox": [ + 55, + 300, + 555, + 357 + ], + "lines": [ + { + "bbox": [ + 55, + 300, + 555, + 357 + ], + "spans": [ + { + "bbox": [ + 55, + 300, + 555, + 357 + ], + "type": "text", + "content": "Figure 3. [Conclusion 1] (a) Comparing the normalized average strength of interactions encoded by different DNNs, including DNNs trained from scratch and DNNs trained with different pre-training methods. Results show that the DNNs using pre-training methods consistently encode stronger high-order interactions and weaker low-order interactions than the DNNs trained from scratch. (b) The relationship between the strength of high-order interactions encoded by different DNNs and their corresponding classification accuracy. Results show that DNNs encoding stronger high-order interactions tend to exhibit higher accuracy." + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_caption" + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 203, + 219, + 317, + 295 + ], + "blocks": [ + { + "bbox": [ + 203, + 219, + 317, + 295 + ], + "lines": [ + { + "bbox": [ + 203, + 219, + 317, + 295 + ], + "spans": [ + { + "bbox": [ + 203, + 219, + 317, + 295 + ], + "type": "image", + "image_path": "254d67983fdf6831316bb09ceed5141568c4ed51c6ec7dbf0ed8cb2eab5a12cc.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 318, + 219, + 432, + 295 + ], + "blocks": [ + { + "bbox": [ + 318, + 219, + 432, + 295 + ], + "lines": [ + { + "bbox": [ + 318, + 219, + 432, + 295 + ], + "spans": [ + { + "bbox": [ + 318, + 219, + 432, + 295 + ], + "type": "image", + "image_path": "de395489ef92dbdb43b95b11a7d31e7ef78124b86d6c6d378c96946f3f6b100f.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 437, + 71, + 553, + 152 + ], + "blocks": [ + { + "bbox": [ + 437, + 71, + 553, + 152 + ], + "lines": [ + { + "bbox": [ + 437, + 71, + 553, + 152 + ], + "spans": [ + { + "bbox": [ + 437, + 71, + 553, + 152 + ], + "type": "image", + "image_path": "076650b651e6aa4430796a6f9f858dfef69848080a46eb6b562fc16c0fb90619.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + } + ], + "index": 9 + }, + { + "type": "image", + "bbox": [ + 437, + 152, + 553, + 219 + ], + "blocks": [ + { + "bbox": [ + 437, + 152, + 553, + 219 + ], + "lines": [ + { + "bbox": [ + 437, + 152, + 553, + 219 + ], + "spans": [ + { + "bbox": [ + 437, + 152, + 553, + 219 + ], + "type": "image", + "image_path": "e2be93cf1a71bff0a1240735116ef1c0ea2645678d7d602e2b6cd91f0e07c5a4.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + } + ], + "index": 10 + }, + { + "type": "image", + "bbox": [ + 437, + 219, + 553, + 295 + ], + "blocks": [ + { + "bbox": [ + 437, + 219, + 553, + 295 + ], + "lines": [ + { + "bbox": [ + 437, + 219, + 553, + 295 + ], + "spans": [ + { + "bbox": [ + 437, + 219, + 553, + 295 + ], + "type": "image", + "image_path": "2b0a90577a7f2b4cd573d46ab5d832b246c463ed621f16630a3d65b902a1901b.jpg" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_body" + } + ], + "index": 11 + }, + { + "bbox": [ + 55, + 369, + 295, + 396 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 369, + 295, + 396 + ], + "spans": [ + { + "bbox": [ + 55, + 369, + 295, + 396 + ], + "type": "text", + "content": "4. 
Interpreting different pre-training methods using interactions" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 55, + 403, + 195, + 416 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 403, + 195, + 416 + ], + "spans": [ + { + "bbox": [ + 55, + 403, + 195, + 416 + ], + "type": "text", + "content": "4.1. Comparative study setup" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 55, + 420, + 296, + 552 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 420, + 296, + 552 + ], + "spans": [ + { + "bbox": [ + 55, + 420, + 296, + 552 + ], + "type": "text", + "content": "For a given network architecture, we compare the interactions encoded by the model trained from scratch with those encoded by models trained using various pre-training methods. This comparison aims to explore whether these pre-training methods share a common underlying reason for performance improvement, which we define as the common mechanism across these methods. To provide a unified explanation for most pre-training methods, we conduct experiments on five widely used open-source pre-training methods, including IAE [38], STRL [9], CrossPoint [1], OcCo [32], and Jigsaw [21], as detailed in Sec. 2." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 55, + 552, + 296, + 624 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 552, + 296, + 624 + ], + "spans": [ + { + "bbox": [ + 55, + 552, + 296, + 624 + ], + "type": "text", + "content": "Networks and datasets. We conduct experiments on three network architectures: DGCNN [35], PointNet [14], and PCN [40]. For DGCNN, we utilize all five pre-training methods, while for PointNet and PCN, we focus on OcCo [32] and Jigsaw [21], depending on the accessibility of open-source implementations for each pre-training method." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 55, + 624, + 296, + 685 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 624, + 296, + 685 + ], + "spans": [ + { + "bbox": [ + 55, + 624, + 296, + 685 + ], + "type": "text", + "content": "To compare the interactions encoded by different DNNs, we use three benchmark datasets for the 3D classification task: ModelNet40 [36], ShapeNet² [3], and ScanObjectNN [31]. Tab. 1 shows the statistics of these datasets. We randomly select 10 samples per class from each dataset and use the" + } + ] + } + ], + "index": 17 + }, + { + "type": "table", + "bbox": [ + 317, + 368, + 554, + 426 + ], + "blocks": [ + { + "bbox": [ + 317, + 368, + 554, + 426 + ], + "lines": [ + { + "bbox": [ + 317, + 368, + 554, + 426 + ], + "spans": [ + { + "bbox": [ + 317, + 368, + 554, + 426 + ], + "type": "table", + "html": "
<table><tr><td>Name</td><td>Type</td><td># Class</td><td># Training / Testing</td></tr>
<tr><td>ModelNet</td><td>synthesized</td><td>40</td><td>9,843 / 2,468</td></tr>
<tr><td>ShapeNet</td><td>synthesized</td><td>16</td><td>12,137 / 4,744</td></tr>
<tr><td>ScanObjectNN</td><td>real world</td><td>15</td><td>2,304 / 576</td></tr></table>
", + "image_path": "65cf3ad034d8e88dfa15645b93168538a874d32429bb6cc195c3851d66464738.jpg" + } + ] + } + ], + "index": 18, + "angle": 0, + "type": "table_body" + } + ], + "index": 18 + }, + { + "bbox": [ + 348, + 430, + 520, + 440 + ], + "lines": [ + { + "bbox": [ + 348, + 430, + 520, + 440 + ], + "spans": [ + { + "bbox": [ + 348, + 430, + 520, + 440 + ], + "type": "text", + "content": "Table 1. Statistics of datasets for classification." + } + ] + } + ], + "index": 19, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 313, + 450, + 554, + 474 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 450, + 554, + 474 + ], + "spans": [ + { + "bbox": [ + 313, + 450, + 554, + 474 + ], + "type": "text", + "content": "method in Sec. 3 to divide each point cloud sample into " + }, + { + "bbox": [ + 313, + 450, + 554, + 474 + ], + "type": "inline_equation", + "content": "n" + }, + { + "bbox": [ + 313, + 450, + 554, + 474 + ], + "type": "text", + "content": " regions for quantifying the interactions encoded by DNNs." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 313, + 481, + 554, + 506 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 481, + 554, + 506 + ], + "spans": [ + { + "bbox": [ + 313, + 481, + 554, + 506 + ], + "type": "text", + "content": "4.2. Exploring the common mechanism of different pre-training methods" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 323, + 520, + 545, + 569 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 323, + 520, + 545, + 569 + ], + "spans": [ + { + "bbox": [ + 323, + 520, + 545, + 569 + ], + "type": "text", + "content": "Conclusion 1. The common mechanism across different pre-training methods is that they enhance the strength of high-order interactions encoded by DNNs, while reducing the strength of low-order interactions." + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 313, + 582, + 555, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 582, + 555, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 582, + 555, + 713 + ], + "type": "text", + "content": "Fig. 3 (a) shows the normalized average strength of the interactions encoded by different DNNs, including DNNs trained from scratch and DNNs using different pre-training methods. Results show that the strength of high-order interactions encoded by the DNNs using pre-training methods is consistently greater than that of the DNNs trained from scratch, across all datasets and network architectures. Conversely, the DNNs using pre-training methods typically encode weaker low-order interactions than the DNNs trained from scratch. Fig. 3 (b) further illustrates the relationship between the strength of high-order interactions and the clas" + } + ] + } + ], + "index": 23 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 55, + 693, + 295, + 713 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 693, + 295, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 693, + 295, + 713 + ], + "type": "text", + "content": "2The ShapeNet dataset for classification is derived from the ShapeNet part segmentation dataset, following [26]." 
+ } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "text", + "content": "27318" + } + ] + } + ], + "index": 25 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 57, + 71, + 193, + 230 + ], + "blocks": [ + { + "bbox": [ + 57, + 71, + 193, + 230 + ], + "lines": [ + { + "bbox": [ + 57, + 71, + 193, + 230 + ], + "spans": [ + { + "bbox": [ + 57, + 71, + 193, + 230 + ], + "type": "image", + "image_path": "3dda22ba09d99ba14a40a722fa9072e88e2e08d51ddd3ca13441f0b237d9a780.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 55, + 232, + 296, + 288 + ], + "lines": [ + { + "bbox": [ + 55, + 232, + 296, + 288 + ], + "spans": [ + { + "bbox": [ + 55, + 232, + 296, + 288 + ], + "type": "text", + "content": "Figure 4. Visualization of interactions encoded by the DGCNN trained from scratch (scr) and the DGCNN pre-trained (pt) with IAE. The pre-trained DGCNN typically encodes stronger high-order interactions and weaker low-order interactions compared to the DGCNN trained from scratch." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 197, + 71, + 294, + 230 + ], + "blocks": [ + { + "bbox": [ + 197, + 71, + 294, + 230 + ], + "lines": [ + { + "bbox": [ + 197, + 71, + 294, + 230 + ], + "spans": [ + { + "bbox": [ + 197, + 71, + 294, + 230 + ], + "type": "image", + "image_path": "20f4fd5661286d9a212e95d4a2b4c514ec0522bee551dcc0bb8bb8d1903d9b35.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "bbox": [ + 54, + 302, + 295, + 410 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 302, + 295, + 410 + ], + "spans": [ + { + "bbox": [ + 54, + 302, + 295, + 410 + ], + "type": "text", + "content": "sification accuracy across different DNNs. We observe that DNNs encoding stronger high-order interactions tend to exhibit higher accuracy. Thus, we regard this shared phenomenon as the common mechanism behind the performance improvement of different pre-training methods, i.e., different pre-training methods generally enhance the strength of high-order interactions encoded by DNNs, while reducing the strength of low-order interactions, as summarized in Conclusion 1." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 412, + 295, + 688 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 412, + 295, + 688 + ], + "spans": [ + { + "bbox": [ + 55, + 412, + 295, + 688 + ], + "type": "text", + "content": "Conclusion 1 reveals that pre-training methods enhance the ability of DNNs to encode complex and global 3D structures, while reducing their reliance on simple and local 3D structures. As simple and local 3D structures (e.g., a curve, a corner) can appear across different categories, they often lack sufficient classification information, so an over-reliance on them may lead to incorrect classifications. For example, as shown in Fig. 4, the DNN trained from scratch incorrectly classifies a \"plant\" sample as a \"stool\". This misclassification may occur because the local structures the DNN learns for the plant, such as the \"stem\" and the \"leaf\", are similar to some local structures of a stool, such as the \"legs\". 
However, the DNN still encodes a high strength for these local structures (i.e., low-order interactions), which results in an incorrect classification. In contrast, pre-training methods improve the modeling of complex and global 3D structures, allowing DNNs to get a more comprehensive understanding of the input, which in turn enhances their performance. Thus, beyond traditional accuracy metrics, interactions can help identify the potential reasons for classification errors by revealing which 3D structures modeled by the DNN have inappropriate weights, offering a new perspective for debugging." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 689, + 295, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 689, + 295, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 689, + 295, + 713 + ], + "type": "text", + "content": "Comparison with transformer-based pre-training methods. We also measure interactions encoded by transformer-" + } + ] + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 316, + 71, + 553, + 155 + ], + "blocks": [ + { + "bbox": [ + 316, + 71, + 553, + 155 + ], + "lines": [ + { + "bbox": [ + 316, + 71, + 553, + 155 + ], + "spans": [ + { + "bbox": [ + 316, + 71, + 553, + 155 + ], + "type": "image", + "image_path": "2b18729678ddf6c40e4c4532bb609d7399d56e1ef4fd93b55a6b755cc2577e1f.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 313, + 156, + 555, + 245 + ], + "lines": [ + { + "bbox": [ + 313, + 156, + 555, + 245 + ], + "spans": [ + { + "bbox": [ + 313, + 156, + 555, + 245 + ], + "type": "text", + "content": "Figure 5. Comparing the normalized average strength of interactions encoded by (1) transformer-based models, (2) traditional DNNs (e.g., DGCNN and PointNet) trained from scratch, and (3) traditional DNNs using pre-training methods (e.g., DGCNN with IAE, and PointNet with OcCo). Results show that transformer-based models also encode stronger high-order interactions and weaker low-order interactions, exhibiting a similar pattern to traditional DNNs using pre-training methods." + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 6 + }, + { + "bbox": [ + 313, + 256, + 555, + 424 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 256, + 555, + 424 + ], + "spans": [ + { + "bbox": [ + 313, + 256, + 555, + 424 + ], + "type": "text", + "content": "based models, including PointBERT [39], PointMAE [12], PointM2AE [42], and PointGPT [4]. These models integrate pre-training methods into the model architecture, making them incompatible with traditional DNNs (e.g., DGCNN). Therefore, we directly compare the interactions encoded by transformer-based models with the interactions encoded by traditional DNNs, including the DNNs trained from scratch and the DNNs trained with pre-training methods. As shown in Fig. 5, transformer-based models also encode stronger high-order interactions and weaker low-order interactions than traditional DNNs trained from scratch, which exhibit a similar pattern to the interactions encoded by traditional DNNs using pre-training methods. This further supports Conclusion 1." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 430, + 553, + 455 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 430, + 553, + 455 + ], + "spans": [ + { + "bbox": [ + 313, + 430, + 553, + 455 + ], + "type": "text", + "content": "4.3. 
Exploring the impact of different factors on the common mechanism" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 460, + 554, + 497 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 460, + 554, + 497 + ], + "spans": [ + { + "bbox": [ + 313, + 460, + 554, + 497 + ], + "type": "text", + "content": "We further explore two factors that impact the common mechanism: (a) the extent of pre-training, and (b) the amount of fine-tuning data used for downstream tasks." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 323, + 508, + 545, + 557 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 323, + 508, + 545, + 557 + ], + "spans": [ + { + "bbox": [ + 323, + 508, + 545, + 557 + ], + "type": "text", + "content": "Conclusion 2(a). The pre-training process progressively enhances the strength of high-order interactions encoded while weakening the strength of low-order interactions as the extent of pre-training increases." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 570, + 555, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 570, + 555, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 570, + 555, + 713 + ], + "type": "text", + "content": "In this subsection, we first investigate the relationship between the extent of pre-training and the strength of interactions encoded by DNNs. Here, the extent of pre-training refers to the number of pre-training epochs, i.e., the range of epochs from the start of pre-training to the epoch at which pre-training converges. To this end, we conduct experiments on DGCNN with two pre-training methods, including IAE and CrossPoint. For each pre-training method, let " + }, + { + "bbox": [ + 313, + 570, + 555, + 713 + ], + "type": "inline_equation", + "content": "T_{\\mathrm{max}}" + }, + { + "bbox": [ + 313, + 570, + 555, + 713 + ], + "type": "text", + "content": " denote the total number of epochs at which the pre-training process of the DNN converges. We select the DNNs at training epochs " + }, + { + "bbox": [ + 313, + 570, + 555, + 713 + ], + "type": "inline_equation", + "content": "0, 0.2T_{\\mathrm{max}}, 0.4T_{\\mathrm{max}}, \\ldots, T_{\\mathrm{max}}" + }, + { + "bbox": [ + 313, + 570, + 555, + 713 + ], + "type": "text", + "content": ", covering six different stages of the pre-training process. Then, for all" + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "text", + "content": "27319" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 56, + 70, + 294, + 163 + ], + "blocks": [ + { + "bbox": [ + 56, + 70, + 294, + 163 + ], + "lines": [ + { + "bbox": [ + 56, + 70, + 294, + 163 + ], + "spans": [ + { + "bbox": [ + 56, + 70, + 294, + 163 + ], + "type": "image", + "image_path": "a87161cf4e1679b3d5dad2935f636cc2f3b4ffb672ab7dd1d6f0a10ba275ac30.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 55, + 166, + 295, + 233 + ], + "lines": [ + { + "bbox": [ + 55, + 166, + 295, + 233 + ], + "spans": [ + { + "bbox": [ + 55, + 166, + 295, + 233 + ], + "type": "text", + "content": "Figure 6. 
[Conclusion 2(a)] Comparing the normalized average strength of interactions encoded by DGCNNs pre-trained for different extents, ranging from initial pre-training (0%) to full convergence (100%). As the extent of pre-training increases, the strength of high-order interactions encoded by the DNNs typically rises, while the strength of low-order interactions generally decreases." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 56, + 242, + 294, + 335 + ], + "blocks": [ + { + "bbox": [ + 56, + 242, + 294, + 335 + ], + "lines": [ + { + "bbox": [ + 56, + 242, + 294, + 335 + ], + "spans": [ + { + "bbox": [ + 56, + 242, + 294, + 335 + ], + "type": "image", + "image_path": "8aa174ef63623ca8ad69b2f8f26b8407fd901be9a1d01012708f253c838129c1.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 55, + 337, + 296, + 403 + ], + "lines": [ + { + "bbox": [ + 55, + 337, + 296, + 403 + ], + "spans": [ + { + "bbox": [ + 55, + 337, + 296, + 403 + ], + "type": "text", + "content": "Figure 7. [Conclusion 2(b)] Comparing the normalized average strength of interactions encoded by DNNs fine-tuned with varying amounts of data. Results show that as the amount of fine-tuning data increases from " + }, + { + "bbox": [ + 55, + 337, + 296, + 403 + ], + "type": "inline_equation", + "content": "1\\%" + }, + { + "bbox": [ + 55, + 337, + 296, + 403 + ], + "type": "text", + "content": " to " + }, + { + "bbox": [ + 55, + 337, + 296, + 403 + ], + "type": "inline_equation", + "content": "100\\%" + }, + { + "bbox": [ + 55, + 337, + 296, + 403 + ], + "type": "text", + "content": ", the strength of high-order interactions encoded by the DNNs generally increases, while the strength of low-order interactions generally decreases." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 415, + 295, + 450 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 415, + 295, + 450 + ], + "spans": [ + { + "bbox": [ + 55, + 415, + 295, + 450 + ], + "type": "text", + "content": "DNNs at different pre-training extents, we fine-tune them on the same downstream task and quantify the interactions encoded by these fine-tuned DNNs." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 451, + 295, + 571 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 451, + 295, + 571 + ], + "spans": [ + { + "bbox": [ + 55, + 451, + 295, + 571 + ], + "type": "text", + "content": "Fig. 6 presents the experimental results. We observe that as the extent of pre-training increases, the strength of high-order interactions encoded by the DNNs generally increases, while the strength of low-order interactions typically decreases. We summarize this relationship between the extent of pre-training and the interactions encoded by DNNs in Conclusion 2(a). This conclusion suggests that sufficient pre-training enhances the model's ability to capture complex and global 3D contexts, further validating the common mechanism outlined in Conclusion 1." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 64, + 581, + 287, + 629 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 64, + 581, + 287, + 629 + ], + "spans": [ + { + "bbox": [ + 64, + 581, + 287, + 629 + ], + "type": "text", + "content": "Conclusion 2(b). 
Increasing the amount of fine-tuning data further enhances the strength of high-order interactions encoded by DNNs, while weakening the strength of low-order interactions." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 642, + 296, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 642, + 296, + 715 + ], + "spans": [ + { + "bbox": [ + 55, + 642, + 296, + 715 + ], + "type": "text", + "content": "To investigate the relationship between the amount of fine-tuning data for downstream tasks and the interactions encoded by DNNs, we construct seven training sets of varying sizes from the ModelNet40 dataset, containing " + }, + { + "bbox": [ + 55, + 642, + 296, + 715 + ], + "type": "inline_equation", + "content": "1\\%" + }, + { + "bbox": [ + 55, + 642, + 296, + 715 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 55, + 642, + 296, + 715 + ], + "type": "inline_equation", + "content": "10\\%" + }, + { + "bbox": [ + 55, + 642, + 296, + 715 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 55, + 642, + 296, + 715 + ], + "type": "inline_equation", + "content": "20\\%" + }, + { + "bbox": [ + 55, + 642, + 296, + 715 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 55, + 642, + 296, + 715 + ], + "type": "inline_equation", + "content": "30\\%" + }, + { + "bbox": [ + 55, + 642, + 296, + 715 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 55, + 642, + 296, + 715 + ], + "type": "inline_equation", + "content": "50\\%" + }, + { + "bbox": [ + 55, + 642, + 296, + 715 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 55, + 642, + 296, + 715 + ], + "type": "inline_equation", + "content": "70\\%" + }, + { + "bbox": [ + 55, + 642, + 296, + 715 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 55, + 642, + 296, + 715 + ], + "type": "inline_equation", + "content": "100\\%" + }, + { + "bbox": [ + 55, + 642, + 296, + 715 + ], + "type": "text", + "content": " of the original ModelNet40 training data, respectively. Note that we ensure" + } + ] + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 316, + 72, + 553, + 171 + ], + "blocks": [ + { + "bbox": [ + 316, + 72, + 553, + 171 + ], + "lines": [ + { + "bbox": [ + 316, + 72, + 553, + 171 + ], + "spans": [ + { + "bbox": [ + 316, + 72, + 553, + 171 + ], + "type": "image", + "image_path": "037f4d02f5874a1f526c7bb0b41b8a9fd67377bf8b99ad52f1b4877cba002600.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 313, + 171, + 555, + 237 + ], + "lines": [ + { + "bbox": [ + 313, + 171, + 555, + 237 + ], + "spans": [ + { + "bbox": [ + 313, + 171, + 555, + 237 + ], + "type": "text", + "content": "Figure 8. Comparing the classification accuracy and the strength of high-order interactions encoded by different DNNs fine-tuned with varying amounts of data. As the amount of data increases, the accuracy gap between the DNN trained from scratch and the DNN pre-trained with IAE narrows, while the gap in the strength of high-order interactions encoded by these DNNs widens." + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_caption" + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 248, + 555, + 369 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 248, + 555, + 369 + ], + "spans": [ + { + "bbox": [ + 313, + 248, + 555, + 369 + ], + "type": "text", + "content": "at least one sample from each class is included, allowing the model to learn from all categories. 
We then use the different-sized training sets to fine-tune DGCNNs, including those pre-trained using the IAE method and the CrossPoint method. As shown in Fig. 7, as the amount of fine-tuning data increases, the strength of high-order interactions encoded by DNNs gradually increases, while the strength of low-order interactions decreases. We summarize this relationship between the amount of fine-tuning data and the interactions encoded by DNNs in Conclusion 2(b)." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 375, + 555, + 400 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 375, + 555, + 400 + ], + "spans": [ + { + "bbox": [ + 313, + 375, + 555, + 400 + ], + "type": "text", + "content": "4.4. Exploring the potential risk of pre-training methods in reducing DNN's transferability" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 323, + 413, + 545, + 450 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 323, + 413, + 545, + 450 + ], + "spans": [ + { + "bbox": [ + 323, + 413, + 545, + 450 + ], + "type": "text", + "content": "Conclusion 3. Pre-training methods carry a potential risk of reducing the transferability of features encoded by DNNs." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 462, + 555, + 594 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 462, + 555, + 594 + ], + "spans": [ + { + "bbox": [ + 313, + 462, + 555, + 594 + ], + "type": "text", + "content": "When exploring the relationship between the amount of fine-tuning data and the interactions encoded by DNNs, we observe the following anomalous phenomenon. As shown in Fig. 8, the gap in classification accuracy between the pre-trained DNN and the DNN trained from scratch becomes marginal as the fine-tuning data increases. For example, when the fine-tuning data reaches " + }, + { + "bbox": [ + 313, + 462, + 555, + 594 + ], + "type": "inline_equation", + "content": "100\\%" + }, + { + "bbox": [ + 313, + 462, + 555, + 594 + ], + "type": "text", + "content": ", the accuracy gap is only " + }, + { + "bbox": [ + 313, + 462, + 555, + 594 + ], + "type": "inline_equation", + "content": "0.2\\%" + }, + { + "bbox": [ + 313, + 462, + 555, + 594 + ], + "type": "text", + "content": ". However, the gap in the strength of high-order interactions between the two DNNs gradually increases, indicating that high-order interactions with excessively high strength are not necessary for performance improvement." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 594, + 556, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 594, + 556, + 714 + ], + "spans": [ + { + "bbox": [ + 313, + 594, + 556, + 714 + ], + "type": "text", + "content": "Since high-order interactions generally carry a greater risk of overfitting [41], we investigate the potential risk of pre-training methods in reducing the transferability of features encoded by DNNs. Here, the transferability of features refers to the generalization ability of the features. For example, if the features learned from one dataset (e.g., the features of the airplane class in ModelNet) can be applied to another unseen dataset (e.g., identifying the airplane class in ShapeNet), we consider these features to have high transferability. 
To this end, we use ShapeNet as the unseen" + } + ] + } + ], + "index": 14 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "text", + "content": "27320" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 56, + 70, + 294, + 155 + ], + "blocks": [ + { + "bbox": [ + 56, + 70, + 294, + 155 + ], + "lines": [ + { + "bbox": [ + 56, + 70, + 294, + 155 + ], + "spans": [ + { + "bbox": [ + 56, + 70, + 294, + 155 + ], + "type": "image", + "image_path": "524e5286dfb6e543842f6239f5a62352866fbed12fbf6488c0615028f8db39c5.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 55, + 156, + 296, + 233 + ], + "lines": [ + { + "bbox": [ + 55, + 156, + 296, + 233 + ], + "spans": [ + { + "bbox": [ + 55, + 156, + 296, + 233 + ], + "type": "text", + "content": "Figure 9. [Conclusion 3] Comparing the zero-shot classification accuracy of DNNs with and without pre-training, followed by fine-tuning with varying amounts of data. Results show that the zero-shot accuracy of the pre-trained DNN initially exceeds that of the DNN trained from scratch when the fine-tuning data is limited " + }, + { + "bbox": [ + 55, + 156, + 296, + 233 + ], + "type": "inline_equation", + "content": "(e.g., 1\\%)" + }, + { + "bbox": [ + 55, + 156, + 296, + 233 + ], + "type": "text", + "content": ", but falls below that of the DNN trained from scratch as the fine-tuning data becomes sufficient " + }, + { + "bbox": [ + 55, + 156, + 296, + 233 + ], + "type": "inline_equation", + "content": "(e.g., 100\\%)" + }, + { + "bbox": [ + 55, + 156, + 296, + 233 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 245, + 296, + 329 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 245, + 296, + 329 + ], + "spans": [ + { + "bbox": [ + 55, + 245, + 296, + 329 + ], + "type": "text", + "content": "dataset and compare the classification accuracy of different DNNs, including DGCNNs trained with varying amounts of data from ModelNet, as well as DGCNNs pre-trained and then fine-tuned with varying amounts of data. Since the category labels in the two datasets do not completely align, we identify eight common categories. Please see the supplementary material for more implementation details." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 330, + 296, + 522 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 330, + 296, + 522 + ], + "spans": [ + { + "bbox": [ + 55, + 330, + 296, + 522 + ], + "type": "text", + "content": "Fig. 9 shows the results. We find that when the amount of fine-tuning data is limited (e.g., " + }, + { + "bbox": [ + 55, + 330, + 296, + 522 + ], + "type": "inline_equation", + "content": "1\\%" + }, + { + "bbox": [ + 55, + 330, + 296, + 522 + ], + "type": "text", + "content": "), pre-trained DNNs, such as the DNN pre-trained with CrossPoint, achieve higher zero-shot accuracy " + }, + { + "bbox": [ + 55, + 330, + 296, + 522 + ], + "type": "inline_equation", + "content": "(+8.9\\%)" + }, + { + "bbox": [ + 55, + 330, + 296, + 522 + ], + "type": "text", + "content": " compared to the DNN trained from scratch. 
In contrast, when the fine-tuning data is sufficient (e.g., " + }, + { + "bbox": [ + 55, + 330, + 296, + 522 + ], + "type": "inline_equation", + "content": "100\\%" + }, + { + "bbox": [ + 55, + 330, + 296, + 522 + ], + "type": "text", + "content": "), the accuracy of the DNN pretrained with CrossPoint significantly lags behind that of the DNN trained from scratch " + }, + { + "bbox": [ + 55, + 330, + 296, + 522 + ], + "type": "inline_equation", + "content": "(-14.7\\%)" + }, + { + "bbox": [ + 55, + 330, + 296, + 522 + ], + "type": "text", + "content": ". We attribute this to pre-training methods causing DNNs to encode high-order interactions with excessively high strength, which in turn reduces the transferability of the features encoded by the DNNs. Note that we are not criticizing the use of pretraining methods to enhance the strength of high-order interactions encoded by DNNs as inherently negative. Rather, we are proposing this potential risk and offering new insights for the design of pre-training methods." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 535, + 295, + 562 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 535, + 295, + 562 + ], + "spans": [ + { + "bbox": [ + 55, + 535, + 295, + 562 + ], + "type": "text", + "content": "5. Guiding the training process using the common mechanism" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 570, + 296, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 570, + 296, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 570, + 296, + 714 + ], + "type": "text", + "content": "Traditional pre-training methods, while improving performance, inevitably require extensive pre-training on large-scale unlabeled datasets, which demands considerable time and computational resources. As discussed above, we find that the common mechanism underlying different pre-training methods is that they universally enhance the strength of high-order interactions encoded by DNNs while reducing the strength of low-order interactions. Building on this insight, we propose a new method that directly enhances the strength of high-order interactions encoded by DNNs while reducing the strength of low-order interactions. In this way, our method achieves performance" + } + ] + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 316, + 69, + 553, + 160 + ], + "blocks": [ + { + "bbox": [ + 316, + 69, + 553, + 160 + ], + "lines": [ + { + "bbox": [ + 316, + 69, + 553, + 160 + ], + "spans": [ + { + "bbox": [ + 316, + 69, + 553, + 160 + ], + "type": "image", + "image_path": "b54690b9075dd82112437b300e73e7480e077e9978c3ea391be11a238f70d379.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 313, + 161, + 553, + 205 + ], + "lines": [ + { + "bbox": [ + 313, + 161, + 553, + 205 + ], + "spans": [ + { + "bbox": [ + 313, + 161, + 553, + 205 + ], + "type": "text", + "content": "Figure 10. (a) Curves showing the values of the proposed loss term " + }, + { + "bbox": [ + 313, + 161, + 553, + 205 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{\\text{interaction}}" + }, + { + "bbox": [ + 313, + 161, + 553, + 205 + ], + "type": "text", + "content": " for different values of " + }, + { + "bbox": [ + 313, + 161, + 553, + 205 + ], + "type": "inline_equation", + "content": "\\alpha" + }, + { + "bbox": [ + 313, + 161, + 553, + 205 + ], + "type": "text", + "content": " throughout the training process. 
(b) Comparison of the normalized average strength of interactions encoded by DNNs for various " + }, + { + "bbox": [ + 313, + 161, + 553, + 205 + ], + "type": "inline_equation", + "content": "\\alpha" + }, + { + "bbox": [ + 313, + 161, + 553, + 205 + ], + "type": "text", + "content": " values in the loss term." + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 6 + }, + { + "bbox": [ + 313, + 217, + 555, + 264 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 217, + 555, + 264 + ], + "spans": [ + { + "bbox": [ + 313, + 217, + 555, + 264 + ], + "type": "text", + "content": "comparable to traditional pre-training methods while avoiding the need for pre-training on large-scale unlabeled datasets. Specifically, we introduce a new heuristic loss term defined as follows." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 320, + 271, + 555, + 285 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 271, + 555, + 285 + ], + "spans": [ + { + "bbox": [ + 320, + 271, + 555, + 285 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {\\text {i n t e r a c t i o n}} = \\mathbb {E} _ {x} \\left[ \\mathbb {E} _ {| S | \\in \\Omega^ {\\mathrm {l o w}}} [ | I (S) | ] - \\mathbb {E} _ {| S | \\in \\Omega^ {\\mathrm {h i g h}}} [ | I (S) | ] \\right], \\tag {6}", + "image_path": "d20d032b8168e4b320b16f0d46ef609a0669ace7276d90708d05b64410191345.jpg" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 292, + 554, + 365 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 292, + 554, + 365 + ], + "spans": [ + { + "bbox": [ + 313, + 292, + 554, + 365 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 292, + 554, + 365 + ], + "type": "inline_equation", + "content": "\\Omega^{\\mathrm{high}}" + }, + { + "bbox": [ + 313, + 292, + 554, + 365 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 292, + 554, + 365 + ], + "type": "inline_equation", + "content": "\\Omega^{\\mathrm{low}}" + }, + { + "bbox": [ + 313, + 292, + 554, + 365 + ], + "type": "text", + "content": " define the ranges of high-order and low-order interactions, as detailed in Sec. 3. Minimizing the loss term " + }, + { + "bbox": [ + 313, + 292, + 554, + 365 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{\\mathrm{interaction}}" + }, + { + "bbox": [ + 313, + 292, + 554, + 365 + ], + "type": "text", + "content": " forces the DNN to weaken the strength of low-order interactions, i.e., decreasing " + }, + { + "bbox": [ + 313, + 292, + 554, + 365 + ], + "type": "inline_equation", + "content": "\\mathbb{E}_x\\mathbb{E}_{|S|\\in \\Omega^{\\mathrm{low}}}[|I(S)|]" + }, + { + "bbox": [ + 313, + 292, + 554, + 365 + ], + "type": "text", + "content": ", while enhancing the strength of high-order interactions, i.e., increasing " + }, + { + "bbox": [ + 313, + 292, + 554, + 365 + ], + "type": "inline_equation", + "content": "\\mathbb{E}_x\\mathbb{E}_{|S|\\in \\Omega^{\\mathrm{high}}}[|I(S)|]" + }, + { + "bbox": [ + 313, + 292, + 554, + 365 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 365, + 554, + 483 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 365, + 554, + 483 + ], + "spans": [ + { + "bbox": [ + 313, + 365, + 554, + 483 + ], + "type": "text", + "content": "However, computing Eq. (6) is NP-hard. 
To overcome this challenge, we approximate " + }, + { + "bbox": [ + 313, + 365, + 554, + 483 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{\\mathrm{interaction}}" + }, + { + "bbox": [ + 313, + 365, + 554, + 483 + ], + "type": "text", + "content": " using a sampling-based approach. Specifically, given a point cloud " + }, + { + "bbox": [ + 313, + 365, + 554, + 483 + ], + "type": "inline_equation", + "content": "x" + }, + { + "bbox": [ + 313, + 365, + 554, + 483 + ], + "type": "text", + "content": " with " + }, + { + "bbox": [ + 313, + 365, + 554, + 483 + ], + "type": "inline_equation", + "content": "n" + }, + { + "bbox": [ + 313, + 365, + 554, + 483 + ], + "type": "text", + "content": " regions indexed by " + }, + { + "bbox": [ + 313, + 365, + 554, + 483 + ], + "type": "inline_equation", + "content": "N = \\{1,2,\\dots,n\\}" + }, + { + "bbox": [ + 313, + 365, + 554, + 483 + ], + "type": "text", + "content": ", we sample three disjoint subsets " + }, + { + "bbox": [ + 313, + 365, + 554, + 483 + ], + "type": "inline_equation", + "content": "S_{1}, S_{2}, S_{3} \\subseteq N" + }, + { + "bbox": [ + 313, + 365, + 554, + 483 + ], + "type": "text", + "content": " where the orders of the subsets " + }, + { + "bbox": [ + 313, + 365, + 554, + 483 + ], + "type": "inline_equation", + "content": "|S_1|, |S_2|, |S_3| \\in \\Omega^{\\mathrm{low}}" + }, + { + "bbox": [ + 313, + 365, + 554, + 483 + ], + "type": "text", + "content": ", with each subset representing a low-order interaction encoded by the DNN. We consider the union " + }, + { + "bbox": [ + 313, + 365, + 554, + 483 + ], + "type": "inline_equation", + "content": "S_{\\mathrm{union}} = S_{1} \\cup S_{2} \\cup S_{3}" + }, + { + "bbox": [ + 313, + 365, + 554, + 483 + ], + "type": "text", + "content": " as a relatively high-order interaction. Then, we can approximate the interaction loss " + }, + { + "bbox": [ + 313, + 365, + 554, + 483 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{\\mathrm{interaction}}" + }, + { + "bbox": [ + 313, + 365, + 554, + 483 + ], + "type": "text", + "content": " as follows." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 317, + 491, + 553, + 514 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 491, + 553, + 514 + ], + "spans": [ + { + "bbox": [ + 317, + 491, + 553, + 514 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {\\text {i n t e r a c t i o n}} ^ {\\prime} = \\mathbb {E} _ {S _ {1}, S _ {2}, S _ {3} \\subseteq N} \\left[ \\mathbb {E} _ {i \\in \\{1, 2, 3 \\}} [ | I (S _ {i}) | ] - | I (S _ {\\text {u n i o n}}) | \\right]. \\tag {7}", + "image_path": "3e158c9135a6e37545ff9354fd43e3403b7114216bfd441f4ae70c6a5dac2139.jpg" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 514, + 554, + 562 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 514, + 554, + 562 + ], + "spans": [ + { + "bbox": [ + 313, + 514, + 554, + 562 + ], + "type": "text", + "content": "Given a traditional DNN, we incorporate the interaction loss into the training process using the following loss function for the classification task, without the need for additional pre-training on large-scale unlabeled datasets." 
+ } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 371, + 571, + 553, + 585 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 371, + 571, + 553, + 585 + ], + "spans": [ + { + "bbox": [ + 371, + 571, + 553, + 585 + ], + "type": "interline_equation", + "content": "\\mathcal {L} = \\mathcal {L} _ {\\text {c l a s s i f i c a t i o n}} + \\alpha \\mathcal {L} _ {\\text {i n t e r a c t i o n}}, \\tag {8}", + "image_path": "7c13222158df37e188b7225d026b3d3a5540fe3830cb16410510e7f609caf780.jpg" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 313, + 594, + 554, + 689 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 594, + 554, + 689 + ], + "spans": [ + { + "bbox": [ + 313, + 594, + 554, + 689 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 594, + 554, + 689 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{\\mathrm{classification}}" + }, + { + "bbox": [ + 313, + 594, + 554, + 689 + ], + "type": "text", + "content": " denotes the standard classification loss function (e.g., cross-entropy loss), and " + }, + { + "bbox": [ + 313, + 594, + 554, + 689 + ], + "type": "inline_equation", + "content": "\\alpha > 0" + }, + { + "bbox": [ + 313, + 594, + 554, + 689 + ], + "type": "text", + "content": " is the hyperparameter controlling the strength of the interaction loss. Please see Tab. 4 for the effects of varying " + }, + { + "bbox": [ + 313, + 594, + 554, + 689 + ], + "type": "inline_equation", + "content": "\\alpha" + }, + { + "bbox": [ + 313, + 594, + 554, + 689 + ], + "type": "text", + "content": ". As shown in Fig. 10 (b), the strength of high-order interactions encoded by the DNN with " + }, + { + "bbox": [ + 313, + 594, + 554, + 689 + ], + "type": "inline_equation", + "content": "\\alpha > 0" + }, + { + "bbox": [ + 313, + 594, + 554, + 689 + ], + "type": "text", + "content": " is generally higher than the result when " + }, + { + "bbox": [ + 313, + 594, + 554, + 689 + ], + "type": "inline_equation", + "content": "\\alpha = 0" + }, + { + "bbox": [ + 313, + 594, + 554, + 689 + ], + "type": "text", + "content": ", but it does not increase indefinitely as " + }, + { + "bbox": [ + 313, + 594, + 554, + 689 + ], + "type": "inline_equation", + "content": "\\alpha" + }, + { + "bbox": [ + 313, + 594, + 554, + 689 + ], + "type": "text", + "content": " grows. This shows the effectiveness of our interaction loss." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 313, + 689, + 553, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 689, + 553, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 689, + 553, + 713 + ], + "type": "text", + "content": "Experiments and results analysis. To evaluate the effectiveness of the proposed loss term, we conduct experi-" + } + ] + } + ], + "index": 16 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "text", + "content": "27321" + } + ] + } + ], + "index": 17 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 56, + 70, + 295, + 261 + ], + "blocks": [ + { + "bbox": [ + 56, + 70, + 295, + 261 + ], + "lines": [ + { + "bbox": [ + 56, + 70, + 295, + 261 + ], + "spans": [ + { + "bbox": [ + 56, + 70, + 295, + 261 + ], + "type": "table", + "html": "
MethodModelNet40ScanObjectNNNo Pre-train
PointNet89.268.0
PointNet + JigSaw89.6-
PointNet + OcCo90.1-
PointNet + Linteraction (Ours)90.169.0
DGCNN92.578.1
DGCNN + JigSaw92.683.5
DGCNN + OcCo93.084.3
DGCNN + STRL93.1-
DGCNN + CrossPoint92.8-
DGCNN + IAE94.285.6
DGCNN + Linteraction (Ours)93.379.4
CurveNet92.879.2
CurveNet + Linteraction (Ours)93.182.0
GDANet92.378.7
GDANet + Linteraction (Ours)92.880.0
", + "image_path": "f87f4c37eaaffb9145ef3c9d64b45b23f3fa8f7179c6ba3b8561b26d3725f91d.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + }, + { + "bbox": [ + 55, + 266, + 295, + 321 + ], + "lines": [ + { + "bbox": [ + 55, + 266, + 295, + 321 + ], + "spans": [ + { + "bbox": [ + 55, + 266, + 295, + 321 + ], + "type": "text", + "content": "Table 2. Classification accuracy (\\%) on ModelNet40 and ScanObjectNN datasets. The best results are shown in bold and the second-best results are underlined. Our method achieves results comparable to pre-training methods, while not requiring pretraining on large-scale datasets." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_footnote" + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 331, + 295, + 498 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 331, + 295, + 498 + ], + "spans": [ + { + "bbox": [ + 55, + 331, + 295, + 498 + ], + "type": "text", + "content": "ments on 3D point cloud classification and semantic segmentation tasks. For the classification task, we use the ModelNet40 and ScanObjectNN datasets, as described in Sec. 4.1. Specifically, for the ScanObjectNN dataset, we conduct experiments using the PB_T50_RS variant, which is the most challenging variant. We train PointNet and DGCNN using the proposed loss term and set " + }, + { + "bbox": [ + 55, + 331, + 295, + 498 + ], + "type": "inline_equation", + "content": "\\alpha" + }, + { + "bbox": [ + 55, + 331, + 295, + 498 + ], + "type": "text", + "content": " to 0.0005. As shown in Tab. 2, our proposed loss term consistently improves the performance of PointNet, DGCNN, CurveNet [11], and GDANet [37] on both the ModelNet40 and the ScanObjectNN testing splits, compared to their original versions. Moreover, our method demonstrates performance comparable to pre-training methods, without the need for pre-training on large-scale datasets." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 498, + 295, + 700 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 498, + 295, + 700 + ], + "spans": [ + { + "bbox": [ + 55, + 498, + 295, + 700 + ], + "type": "text", + "content": "For the semantic segmentation task, we conduct experiments on the Stanford Large-Scale 3D Indoor Spaces (S3DIS) dataset [2]. The S3DIS consists of 3D point clouds collected from six distinct large-scale indoor environments, with each point cloud annotated with per-point categorical labels. We randomly subsample 4,096 points from the original point cloud and apply 6-fold cross-validation during fine-tuning. Since the proposed interaction loss is specifically designed for 3D classification, it cannot be directly applied to segmentation tasks. Instead, we adopt a two-stage training approach: first, we train a DNN on the classification task with our interaction loss, and then fine-tune the model on the semantic segmentation task. As shown in Tab. 3, the DGCNN using the proposed loss term achieves " + }, + { + "bbox": [ + 55, + 498, + 295, + 700 + ], + "type": "inline_equation", + "content": "86.8\\%" + }, + { + "bbox": [ + 55, + 498, + 295, + 700 + ], + "type": "text", + "content": " overall accuracy and " + }, + { + "bbox": [ + 55, + 498, + 295, + 700 + ], + "type": "inline_equation", + "content": "59.0\\%" + }, + { + "bbox": [ + 55, + 498, + 295, + 700 + ], + "type": "text", + "content": " mIoU, outperforming the majority of pre-training methods. Additionally, our loss term also improves the performance of the PointNet." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 701, + 295, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 701, + 295, + 713 + ], + "spans": [ + { + "bbox": [ + 67, + 701, + 295, + 713 + ], + "type": "text", + "content": "Effects of the hyper-parameter " + }, + { + "bbox": [ + 67, + 701, + 295, + 713 + ], + "type": "inline_equation", + "content": "\\alpha" + }, + { + "bbox": [ + 67, + 701, + 295, + 713 + ], + "type": "text", + "content": ". We train the" + } + ] + } + ], + "index": 4 + }, + { + "type": "table", + "bbox": [ + 317, + 70, + 554, + 186 + ], + "blocks": [ + { + "bbox": [ + 317, + 70, + 554, + 186 + ], + "lines": [ + { + "bbox": [ + 317, + 70, + 554, + 186 + ], + "spans": [ + { + "bbox": [ + 317, + 70, + 554, + 186 + ], + "type": "table", + "html": "
MethodS3DIS 6-Fold
OAmIoU
PointNet78.547.6
PointNet + Linteraction (Ours)82.150.8
DGCNN84.156.1
DGCNN + JigSaw84.456.6
DGCNN + OcCo85.158.5
DGCNN + STRL84.257.1
DGCNN + IAE85.960.7
DGCNN + Linteraction (Ours)86.859.0
", + "image_path": "a7f499fbba8bcd6711624fce0dc5c8da83e3549cbe3d252016126d68a4c90d88.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "table_body" + }, + { + "bbox": [ + 313, + 190, + 555, + 223 + ], + "lines": [ + { + "bbox": [ + 313, + 190, + 555, + 223 + ], + "spans": [ + { + "bbox": [ + 313, + 190, + 555, + 223 + ], + "type": "text", + "content": "Table 3. Semantic segmentation on S3DIS. We report Overall Accuracy (OA) and mean Intersection over Union (mIoU) across six folds. Our method surpasses most pre-training methods." + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "table_footnote" + } + ], + "index": 5 + }, + { + "type": "table", + "bbox": [ + 345, + 236, + 523, + 297 + ], + "blocks": [ + { + "bbox": [ + 345, + 236, + 523, + 297 + ], + "lines": [ + { + "bbox": [ + 345, + 236, + 523, + 297 + ], + "spans": [ + { + "bbox": [ + 345, + 236, + 523, + 297 + ], + "type": "table", + "html": "
αModelNet40ScanObjectNN
0.092.578.1
0.000193.079.0
0.000593.379.4
0.00191.378.1
", + "image_path": "6a45d92e745b496fbe64f5c2ab1363eaca03eecdd2be1a4ba76ef84f3330e976.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "table_body" + }, + { + "bbox": [ + 313, + 300, + 555, + 323 + ], + "lines": [ + { + "bbox": [ + 313, + 300, + 555, + 323 + ], + "spans": [ + { + "bbox": [ + 313, + 300, + 555, + 323 + ], + "type": "text", + "content": "Table 4. Classification accuracy (\\%) for DGCNNs trained with varying hyper-parameters " + }, + { + "bbox": [ + 313, + 300, + 555, + 323 + ], + "type": "inline_equation", + "content": "\\alpha" + }, + { + "bbox": [ + 313, + 300, + 555, + 323 + ], + "type": "text", + "content": " for the interaction loss." + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "table_footnote" + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 335, + 555, + 479 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 335, + 555, + 479 + ], + "spans": [ + { + "bbox": [ + 313, + 335, + 555, + 479 + ], + "type": "text", + "content": "DGCNN with various interaction loss weights " + }, + { + "bbox": [ + 313, + 335, + 555, + 479 + ], + "type": "inline_equation", + "content": "\\alpha" + }, + { + "bbox": [ + 313, + 335, + 555, + 479 + ], + "type": "text", + "content": " and evaluate the testing accuracy, as shown in Tab. 4. The accuracy initially increases and then decreases as " + }, + { + "bbox": [ + 313, + 335, + 555, + 479 + ], + "type": "inline_equation", + "content": "\\alpha" + }, + { + "bbox": [ + 313, + 335, + 555, + 479 + ], + "type": "text", + "content": " rises. We attribute this to the loss term enhancing the strength of high-order interactions encoded by the DNN. At lower " + }, + { + "bbox": [ + 313, + 335, + 555, + 479 + ], + "type": "inline_equation", + "content": "\\alpha" + }, + { + "bbox": [ + 313, + 335, + 555, + 479 + ], + "type": "text", + "content": " values, the interaction loss improves the DNN's modeling of global 3D structures. However, excessively high values of " + }, + { + "bbox": [ + 313, + 335, + 555, + 479 + ], + "type": "inline_equation", + "content": "\\alpha" + }, + { + "bbox": [ + 313, + 335, + 555, + 479 + ], + "type": "text", + "content": " lead to excessively high strength of high-order interactions, increasing the risk of overfitting, as discussed in Conclusion 3. With an appropriately chosen " + }, + { + "bbox": [ + 313, + 335, + 555, + 479 + ], + "type": "inline_equation", + "content": "\\alpha" + }, + { + "bbox": [ + 313, + 335, + 555, + 479 + ], + "type": "text", + "content": ", the interaction loss effectively enhances the training process, further supporting the common mechanism outlined in Conclusion 1." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 498, + 388, + 511 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 498, + 388, + 511 + ], + "spans": [ + { + "bbox": [ + 313, + 498, + 388, + 511 + ], + "type": "text", + "content": "6. Conclusion" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 522, + 555, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 522, + 555, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 522, + 555, + 713 + ], + "type": "text", + "content": "In this paper, we use interactions to investigate the common mechanism underlying the effectiveness of different pretraining methods for 3D point clouds. Specifically, these methods generally enhance the strength of high-order interactions encoded by DNNs, while reducing the strength of low-order interactions. 
We then explore the impact of various factors on the mechanism and find that sufficient pretraining and adequate fine-tuning data further reinforce this mechanism. Additionally, we identify a potential risk that pre-training may reduce the transferability of DNNs. Based on the common mechanism, we propose a new method that directly enhances the strength of high-order interactions encoded by DNNs while weakening the strength of low-order interactions. Experiments show that our method achieves performance comparable to pre-training methods, without the need for pre-training on large-scale datasets." + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "text", + "content": "27322" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 153, + 85 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 72, + 153, + 85 + ], + "spans": [ + { + "bbox": [ + 56, + 72, + 153, + 85 + ], + "type": "text", + "content": "Acknowledgments" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 91, + 296, + 116 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 91, + 296, + 116 + ], + "spans": [ + { + "bbox": [ + 55, + 91, + 296, + 116 + ], + "type": "text", + "content": "This work is partially supported by the National Nature Science Foundation of China (No. 62206170, 62376199)." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 125, + 115, + 138 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 125, + 115, + 138 + ], + "spans": [ + { + "bbox": [ + 56, + 125, + 115, + 138 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 57, + 145, + 296, + 714 + ], + "type": "list", + "angle": 0, + "index": 15, + "blocks": [ + { + "bbox": [ + 61, + 145, + 296, + 212 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 145, + 296, + 212 + ], + "spans": [ + { + "bbox": [ + 61, + 145, + 296, + 212 + ], + "type": "text", + "content": "[1] Mohamed Afham, Isuru Dissanayake, Dinithi Dissanayake, Amaya Dharmasiri, Kanchana Thilakarathna, and Ranga Rodrigo. Crosspoint: Self-supervised cross-modal contrastive learning for 3d point cloud understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9902-9912, 2022. 1, 2, 4" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 61, + 213, + 296, + 266 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 213, + 296, + 266 + ], + "spans": [ + { + "bbox": [ + 61, + 213, + 296, + 266 + ], + "type": "text", + "content": "[2] Iro Armeni, Ozan Sener, Amir R Zamir, Helen Jiang, Ioannis Brilakis, Martin Fischer, and Silvio Savarese. 3d semantic parsing of large-scale indoor spaces. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1534-1543, 2016. 
8" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 61, + 268, + 295, + 322 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 268, + 295, + 322 + ], + "spans": [ + { + "bbox": [ + 61, + 268, + 295, + 322 + ], + "type": "text", + "content": "[3] Angel X Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, et al. Shapenet: An information-rich 3d model repository. arXiv preprint arXiv:1512.03012, 2015. 4" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 62, + 323, + 295, + 367 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 323, + 295, + 367 + ], + "spans": [ + { + "bbox": [ + 62, + 323, + 295, + 367 + ], + "type": "text", + "content": "[4] Guangyan Chen, Meiling Wang, Yi Yang, Kai Yu, Li Yuan, and Yufeng Yue. Pointgpt: Auto-regressively generative pretraining from point clouds. Advances in Neural Information Processing Systems, 36, 2024. 5" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 62, + 368, + 296, + 423 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 368, + 296, + 423 + ], + "spans": [ + { + "bbox": [ + 62, + 368, + 296, + 423 + ], + "type": "text", + "content": "[5] Jiajing Chen, Burak Kakillioglu, Huantao Ren, and Senem Velipasalar. Why discard if you can recycle?: A recycling max pooling module for 3d point cloud analysis. In Proceedings of the IEEE/CVF conference on Computer Vision and Pattern Recognition, pages 559-567, 2022. 2" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 62, + 424, + 294, + 456 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 424, + 294, + 456 + ], + "spans": [ + { + "bbox": [ + 62, + 424, + 294, + 456 + ], + "type": "text", + "content": "[6] Lu Chen, Siyu Lou, Benhao Huang, and Quanshi Zhang. Defining and extracting generalizable interaction primitives from dnns. arXiv preprint arXiv:2401.16318, 2024. 1" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 62, + 457, + 294, + 490 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 457, + 294, + 490 + ], + "spans": [ + { + "bbox": [ + 62, + 457, + 294, + 490 + ], + "type": "text", + "content": "[7] Xu Cheng, Chuntung Chu, Yi Zheng, Jie Ren, and Quanshi Zhang. A game-theoretic taxonomy of visual concepts in dnns. arXiv preprint arXiv:2106.10938, 2021. 3" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 62, + 491, + 294, + 534 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 491, + 294, + 534 + ], + "spans": [ + { + "bbox": [ + 62, + 491, + 294, + 534 + ], + "type": "text", + "content": "[8] Michel Grabisch and Marc Roubens. An axiomatic approach to the concept of interaction among players in cooperative games. International Journal of game theory, 28:547-565, 1999. 3, 1" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 62, + 536, + 294, + 590 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 536, + 294, + 590 + ], + "spans": [ + { + "bbox": [ + 62, + 536, + 294, + 590 + ], + "type": "text", + "content": "[9] Siyuan Huang, Yichen Xie, Song-Chun Zhu, and Yixin Zhu. Spatio-temporal self-supervised representation learning for 3d point clouds. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6535-6545, 2021. 
1, 2, 4" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 57, + 591, + 295, + 624 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 591, + 295, + 624 + ], + "spans": [ + { + "bbox": [ + 57, + 591, + 295, + 624 + ], + "type": "text", + "content": "[10] Mingjie Li and Quanshi Zhang. Does a neural network really encode symbolic concepts? In International conference on machine learning, pages 20452-20469. PMLR, 2023. 2, 3" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 57, + 625, + 295, + 669 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 625, + 295, + 669 + ], + "spans": [ + { + "bbox": [ + 57, + 625, + 295, + 669 + ], + "type": "text", + "content": "[11] AAM Muzahid, Wanggen Wan, Ferdous Sohel, Lianyao Wu, and Li Hou. Curvenet: Curvature-based multitask learning deep networks for 3d object recognition. IEEE/CAA Journal of Automatica Sinica, 8(6):1177-1187, 2020. 2, 8" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 57, + 670, + 295, + 714 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 670, + 295, + 714 + ], + "spans": [ + { + "bbox": [ + 57, + 670, + 295, + 714 + ], + "type": "text", + "content": "[12] Yatian Pang, Wenxiao Wang, Francis EH Tay, Wei Liu, Yonghong Tian, and Li Yuan. Masked autoencoders for point cloud self-supervised learning. In European conference on computer vision, pages 604-621. Springer, 2022. 5" + } + ] + } + ], + "index": 14 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 73, + 554, + 714 + ], + "type": "list", + "angle": 0, + "index": 30, + "blocks": [ + { + "bbox": [ + 316, + 73, + 554, + 117 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 73, + 554, + 117 + ], + "spans": [ + { + "bbox": [ + 316, + 73, + 554, + 117 + ], + "type": "text", + "content": "[13] Omid Poursaeed, Tianxing Jiang, Han Qiao, Nayun Xu, and Vladimir G Kim. Self-supervised learning of point clouds via orientation estimation. In 2020 International Conference on 3D Vision (3DV), pages 1018-1028. IEEE, 2020. 1, 2" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 317, + 118, + 554, + 172 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 118, + 554, + 172 + ], + "spans": [ + { + "bbox": [ + 317, + 118, + 554, + 172 + ], + "type": "text", + "content": "[14] Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 652-660, 2017. 2, 4" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 316, + 175, + 554, + 229 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 175, + 554, + 229 + ], + "spans": [ + { + "bbox": [ + 316, + 175, + 554, + 229 + ], + "type": "text", + "content": "[15] Guocheng Qian, Yuchen Li, Houwen Peng, Jinjie Mai, Hasan Hammoud, Mohamed Elhoseiny, and Bernard Ghanem. Pointnext: Revisiting pointnet++ with improved training and scaling strategies. Advances in neural information processing systems, 35:23192-23204, 2022. 2" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 317, + 231, + 554, + 284 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 231, + 554, + 284 + ], + "spans": [ + { + "bbox": [ + 317, + 231, + 554, + 284 + ], + "type": "text", + "content": "[16] Yongming Rao, Jiwen Lu, and Jie Zhou. 
Global-local bidirectional reasoning for unsupervised representation learning of 3d point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5376-5385, 2020. 1, 2" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 316, + 286, + 554, + 329 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 286, + 554, + 329 + ], + "spans": [ + { + "bbox": [ + 316, + 286, + 554, + 329 + ], + "type": "text", + "content": "[17] Jie Ren, Mingjie Li, Qirui Chen, Huiqi Deng, and Quanshi Zhang. Towards axiomatic, hierarchical, and symbolic explanation for deep models. arXiv preprint arXiv:2111.06206v5, 2021. 2, 1" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 316, + 331, + 554, + 374 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 331, + 554, + 374 + ], + "spans": [ + { + "bbox": [ + 316, + 331, + 554, + 374 + ], + "type": "text", + "content": "[18] Jie Ren, Zhanpeng Zhou, Qirui Chen, and Quanshi Zhang. Can we faithfully represent masked states to compute shapley values on a dnn? arXiv preprint arXiv:2105.10719, 2021. 2" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 316, + 376, + 554, + 430 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 376, + 554, + 430 + ], + "spans": [ + { + "bbox": [ + 316, + 376, + 554, + 430 + ], + "type": "text", + "content": "[19] Jie Ren, Mingjie Li, Qirui Chen, Huiqi Deng, and Quanshi Zhang. Defining and quantifying the emergence of sparse concepts in dnns. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 20280-20289, 2023. 3" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 316, + 433, + 554, + 475 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 433, + 554, + 475 + ], + "spans": [ + { + "bbox": [ + 316, + 433, + 554, + 475 + ], + "type": "text", + "content": "[20] Qihan Ren, Yang Xu, Junpeng Zhang, Yue Xin, Dongrui Liu, and Quanshi Zhang. Towards the dynamics of a dnn learning symbolic interactions. arXiv preprint arXiv:2407.19198, 2024. 1, 3" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 316, + 477, + 554, + 510 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 477, + 554, + 510 + ], + "spans": [ + { + "bbox": [ + 316, + 477, + 554, + 510 + ], + "type": "text", + "content": "[21] Jonathan Sauder and Bjarne Sievers. Self-supervised deep learning on point clouds by reconstructing space. Advances in Neural Information Processing Systems, 32, 2019. 1, 2, 4" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 316, + 512, + 554, + 533 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 512, + 554, + 533 + ], + "spans": [ + { + "bbox": [ + 316, + 512, + 554, + 533 + ], + "type": "text", + "content": "[22] Lloyd S Shapley. A value for n-person games. Contribution to the Theory of Games, 2, 1953. 3, 1" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 316, + 535, + 554, + 567 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 535, + 554, + 567 + ], + "spans": [ + { + "bbox": [ + 316, + 535, + 554, + 567 + ], + "type": "text", + "content": "[23] Charu Sharma and Manohar Kaul. Self-supervised few-shot learning on point clouds. Advances in Neural Information Processing Systems, 33:7212-7221, 2020. 
1, 2" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 317, + 569, + 554, + 624 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 569, + 554, + 624 + ], + "spans": [ + { + "bbox": [ + 317, + 569, + 554, + 624 + ], + "type": "text", + "content": "[24] Wen Shen, Binbin Zhang, Shikun Huang, Zhihua Wei, and Quanshi Zhang. 3d-rotation-equivariant quaternion neural networks. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XX 16, pages 531–547. Springer, 2020. 2" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 316, + 625, + 554, + 667 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 625, + 554, + 667 + ], + "spans": [ + { + "bbox": [ + 316, + 625, + 554, + 667 + ], + "type": "text", + "content": "[25] Wen Shen, Qihan Ren, Dongrui Liu, and Quanshi Zhang. Interpreting representation quality of dnns for 3d point cloud processing. Advances in Neural Information Processing Systems, 34:8857-8870, 2021. 1, 3" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 316, + 670, + 554, + 714 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 670, + 554, + 714 + ], + "spans": [ + { + "bbox": [ + 316, + 670, + 554, + 714 + ], + "type": "text", + "content": "[26] Wen Shen, Zhihua Wei, Shikun Huang, Binbin Zhang, Panyue Chen, Ping Zhao, and Quanshi Zhang. Verifiability and predictability: Interpreting utilities of network architectures for point cloud processing. In Proceedings of the IEEE/CVF" + } + ] + } + ], + "index": 29 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "text", + "content": "27323" + } + ] + } + ], + "index": 31 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 295, + 713 + ], + "type": "list", + "angle": 0, + "index": 13, + "blocks": [ + { + "bbox": [ + 77, + 72, + 294, + 95 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 77, + 72, + 294, + 95 + ], + "spans": [ + { + "bbox": [ + 77, + 72, + 294, + 95 + ], + "type": "text", + "content": "Conference on Computer Vision and Pattern Recognition, pages 10703-10712, 2021. 1, 4" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 96, + 295, + 151 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 96, + 295, + 151 + ], + "spans": [ + { + "bbox": [ + 56, + 96, + 295, + 151 + ], + "type": "text", + "content": "[27] Wen Shen, Zhihua Wei, Qihan Ren, Binbin Zhang, Shikun Huang, Jiaqi Fan, and Quanshi Zhang. Interpretable rotation-equivariant quaternion neural networks for 3d point cloud processing. IEEE Transactions on Pattern Analysis and Machine Intelligence, 46(5):3290-3304, 2024. 2" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 152, + 295, + 195 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 152, + 295, + 195 + ], + "spans": [ + { + "bbox": [ + 56, + 152, + 295, + 195 + ], + "type": "text", + "content": "[28] Mukund Sundararajan, Kedar Dhamdhere, and Ashish Agarwal. The shapley taylor interaction index. In International conference on machine learning, pages 9259-9268. PMLR, 2020. 
3, 1" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 56, + 198, + 295, + 241 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 198, + 295, + 241 + ], + "spans": [ + { + "bbox": [ + 56, + 198, + 295, + 241 + ], + "type": "text", + "content": "[29] Ling Tang, Wen Shen, Zhanpeng Zhou, Yuefeng Chen, and Quanshi Zhang. Defects of convolutional decoder networks in frequency representation. arXiv preprint arXiv:2210.09020, 2022. 3" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 56, + 243, + 295, + 297 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 243, + 295, + 297 + ], + "spans": [ + { + "bbox": [ + 56, + 243, + 295, + 297 + ], + "type": "text", + "content": "[30] Ali Thabet, Humam Alwassel, and Bernard Ghanem. Self-supervised learning of local features in 3d point clouds. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops, pages 938-939, 2020. 1, 2" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 56, + 298, + 295, + 363 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 298, + 295, + 363 + ], + "spans": [ + { + "bbox": [ + 56, + 298, + 295, + 363 + ], + "type": "text", + "content": "[31] Mikaela Angelina Uy, Quang-Hieu Pham, Binh-Son Hua, Thanh Nguyen, and Sai-Kit Yeung. Revisiting point cloud classification: A new benchmark dataset and classification model on real-world data. In Proceedings of the IEEE/CVF international conference on computer vision, pages 1588–1597, 2019. 4" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 56, + 365, + 295, + 420 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 365, + 295, + 420 + ], + "spans": [ + { + "bbox": [ + 56, + 365, + 295, + 420 + ], + "type": "text", + "content": "[32] Hanchen Wang, Qi Liu, Xiangyu Yue, Joan Lasenby, and Matt J Kusner. Unsupervised point cloud pre-training via occlusion completion. In Proceedings of the IEEE/CVF international conference on computer vision, pages 9782-9792, 2021. 1, 2, 4" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 56, + 422, + 295, + 465 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 422, + 295, + 465 + ], + "spans": [ + { + "bbox": [ + 56, + 422, + 295, + 465 + ], + "type": "text", + "content": "[33] Xin Wang, Jie Ren, Shuyun Lin, Xiangming Zhu, Yisen Wang, and Quanshi Zhang. A unified approach to interpreting and boosting adversarial transferability. arXiv preprint arXiv:2010.04055, 2020. 2" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 56, + 467, + 295, + 520 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 467, + 295, + 520 + ], + "spans": [ + { + "bbox": [ + 56, + 467, + 295, + 520 + ], + "type": "text", + "content": "[34] Xin Wang, Shuyun Lin, Hao Zhang, Yufei Zhu, and Quanshi Zhang. Interpreting attributions and interactions of adversarial attacks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1095-1104, 2021. 2, 3" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 56, + 523, + 295, + 567 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 523, + 295, + 567 + ], + "spans": [ + { + "bbox": [ + 56, + 523, + 295, + 567 + ], + "type": "text", + "content": "[35] Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E Sarma, Michael M Bronstein, and Justin M Solomon. Dynamic graph cnn for learning on point clouds. ACM Transactions on Graphics (tog), 38(5):1-12, 2019. 
2, 4" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 56, + 568, + 295, + 623 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 568, + 295, + 623 + ], + "spans": [ + { + "bbox": [ + 56, + 568, + 295, + 623 + ], + "type": "text", + "content": "[36] Zhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaou Tang, and Jianxiong Xiao. 3d shapenets: A deep representation for volumetric shapes. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1912-1920, 2015. 4" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 56, + 624, + 295, + 679 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 624, + 295, + 679 + ], + "spans": [ + { + "bbox": [ + 56, + 624, + 295, + 679 + ], + "type": "text", + "content": "[37] Mutian Xu, Junhao Zhang, Zhipeng Zhou, Mingye Xu, Xiaojuan Qi, and Yu Qiao. Learning geometry-disentangled representation for complementary understanding of 3d object point cloud. In Proceedings of the AAAI conference on artificial intelligence, pages 3056-3064, 2021. 2, 8" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 56, + 681, + 295, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 681, + 295, + 713 + ], + "spans": [ + { + "bbox": [ + 56, + 681, + 295, + 713 + ], + "type": "text", + "content": "[38] Siming Yan, Zhenpei Yang, Haoxiang Li, Chen Song, Li Guan, Hao Kang, Gang Hua, and Qixing Huang. Implicit autoencoder for point-cloud self-supervised representation" + } + ] + } + ], + "index": 12 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 73, + 553, + 453 + ], + "type": "list", + "angle": 0, + "index": 22, + "blocks": [ + { + "bbox": [ + 335, + 73, + 553, + 106 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 335, + 73, + 553, + 106 + ], + "spans": [ + { + "bbox": [ + 335, + 73, + 553, + 106 + ], + "type": "text", + "content": "learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 14530-14542, 2023. 1, 2, 4" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 316, + 107, + 553, + 162 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 107, + 553, + 162 + ], + "spans": [ + { + "bbox": [ + 316, + 107, + 553, + 162 + ], + "type": "text", + "content": "[39] Xumin Yu, Lulu Tang, Yongming Rao, Tiejun Huang, Jie Zhou, and Jiwen Lu. Point-bert: Pre-training 3d point cloud transformers with masked point modeling. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 19313-19322, 2022. 5" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 316, + 163, + 553, + 205 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 163, + 553, + 205 + ], + "spans": [ + { + "bbox": [ + 316, + 163, + 553, + 205 + ], + "type": "text", + "content": "[40] Wentao Yuan, Tejas Khot, David Held, Christoph Mertz, and Martial Hebert. Pcn: Point completion network. In 2018 international conference on 3D vision (3DV), pages 728-737. IEEE, 2018. 2, 4" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 316, + 207, + 553, + 251 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 207, + 553, + 251 + ], + "spans": [ + { + "bbox": [ + 316, + 207, + 553, + 251 + ], + "type": "text", + "content": "[41] Hao Zhang, Sen Li, Yinchao Ma, Mingjie Li, Yichen Xie, and Quanshi Zhang. Interpreting and boosting dropout from a game-theoretic view. arXiv preprint arXiv:2009.11729, 2020. 
1, 2, 6" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 316, + 253, + 553, + 306 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 253, + 553, + 306 + ], + "spans": [ + { + "bbox": [ + 316, + 253, + 553, + 306 + ], + "type": "text", + "content": "[42] Renrui Zhang, Ziyu Guo, Peng Gao, Rongyao Fang, Bin Zhao, Dong Wang, Yu Qiao, and Hongsheng Li. Point-m2ae: multi-scale masked autoencoders for hierarchical point cloud pre-training. Advances in neural information processing systems, 35:27061-27074, 2022. 5" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 316, + 308, + 553, + 361 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 308, + 553, + 361 + ], + "spans": [ + { + "bbox": [ + 316, + 308, + 553, + 361 + ], + "type": "text", + "content": "[43] Zaiwei Zhang, Rohit Girdhar, Armand Joulin, and Ishan Misra. Self-supervised pretraining of 3d features on any point-cloud. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10252-10263, 2021. 1, 2" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 316, + 364, + 553, + 407 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 364, + 553, + 407 + ], + "spans": [ + { + "bbox": [ + 316, + 364, + 553, + 407 + ], + "type": "text", + "content": "[44] Huilin Zhou, Huijie Tang, Mingjie Li, Hao Zhang, Zhenyu Liu, and Quanshi Zhang. Explaining how a neural network plays the go game and let people learn. arXiv preprint arXiv:2310.09838, 2023. 1" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 316, + 409, + 553, + 453 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 409, + 553, + 453 + ], + "spans": [ + { + "bbox": [ + 316, + 409, + 553, + 453 + ], + "type": "text", + "content": "[45] Huilin Zhou, Hao Zhang, Huiqi Deng, Dongrui Liu, Wen Shen, Shih-Han Chan, and Quanshi Zhang. Concept-level explanation for the generalization of a dnn. arXiv preprint arXiv:2302.13091, 2023. 
2, 3" + } + ] + } + ], + "index": 21 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "type": "text", + "content": "27324" + } + ] + } + ], + "index": 23 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2025/A Unified Framework for Heterogeneous Semi-supervised Learning/341a6b6d-e6ba-48c2-98b8-5f373e8e2473_content_list.json b/2025/A Unified Framework for Heterogeneous Semi-supervised Learning/341a6b6d-e6ba-48c2-98b8-5f373e8e2473_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..a4dea0159546933d5d1cbf93a812beff212c2372 --- /dev/null +++ b/2025/A Unified Framework for Heterogeneous Semi-supervised Learning/341a6b6d-e6ba-48c2-98b8-5f373e8e2473_content_list.json @@ -0,0 +1,1395 @@ +[ + { + "type": "text", + "text": "A Unified Framework for Heterogeneous Semi-supervised Learning", + "text_level": 1, + "bbox": [ + 156, + 130, + 841, + 152 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Marzi Heidari*, Abdullah Alchihabi*, Hao Yan*, Yuhong Guo*† \n*School of Computer Science, Carleton University, Ottawa, Canada†Canada CIFAR AI Chair, Amii, Canada", + "bbox": [ + 232, + 180, + 766, + 233 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "{marziheidari@cmail., abdullahalchihibi@cmail., haoyan6@cmail., yuhong.guo@}carleton.ca", + "bbox": [ + 114, + 236, + 880, + 252 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 246, + 286, + 326, + 301 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "In this work, we introduce a novel problem setup termed as Heterogeneous Semi-Supervised Learning (HSSL), which presents unique challenges by bridging the semi-supervised learning (SSL) task and the unsupervised domain adaptation (UDA) task, and expanding standard semi-supervised learning to cope with heterogeneous training data. At its core, HSSL aims to learn a prediction model using a combination of labeled and unlabeled training data drawn separately from heterogeneous domains that share a common set of semantic categories. This model is intended to differentiate the semantic categories of test instances sampled from both the labeled and unlabeled domains. In particular, the labeled and unlabeled domains have dissimilar label distributions and class feature distributions. This heterogeneity, coupled with the assorted sources of the test data, introduces significant challenges to standard SSL and UDA methods. Therefore, we propose a novel method, Unified Framework for Heterogeneous Semi-supervised Learning (Uni-HSSL), to address HSSL by directly learning a fine-grained classifier from the heterogeneous data, which adaptively handles the inter-domain heterogeneity while leveraging both the unlabeled data and the inter-domain semantic class relationships for cross-domain knowledge transfer and adaptation. We conduct comprehensive experiments and the experimental results validate the efficacy and superior performance of the proposed Uni-HSSL over state-of-the-art semi-supervised learning and unsupervised domain adaptation methods.", + "bbox": [ + 86, + 316, + 485, + 724 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1. 
Introduction", + "text_level": 1, + "bbox": [ + 91, + 753, + 220, + 768 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Deep learning models, owing to their hierarchical learned representations and intricate architectures, have monumentally advanced the state-of-the-art across a myriad of tasks [18]. Nonetheless, the success of deep learning has been often contingent on the availability of copious amounts of labeled data. Data annotation, especially in specialized domains, is not only resource-intensive but can also entail exorbitant costs [32]. Consequently, semi-supervised learn", + "bbox": [ + 89, + 779, + 485, + 901 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "ing (SSL) has been popularly studied, aiming to successfully utilize the free available unlabeled data to help train deep models in an annotation efficient manner [35].", + "bbox": [ + 511, + 287, + 906, + 333 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "However, current SSL methods assume that the unlabeled and labeled data are sampled from similar (homogeneous) distributions [25]. Such an assumption presents substantial practical limitations to applying traditional SSL methods to a wide range of application domains, where labeled and unlabeled data can have different distributions. For example, in the field of medical imaging, it is common for labeled MRI scans to be sourced from state-of-the-art research hospitals, while an influx of unlabeled scans could emanate from a myriad of rural clinics, each with its distinct scanning equipment and calibration idiosyncrasies. Similar heterogeneity patterns manifest in domains like aerial imagery, wildlife monitoring, and retail product classification. In such settings, the challenge lies in leveraging the unlabeled data given its dissimilarity with its labeled counterpart.", + "bbox": [ + 509, + 337, + 908, + 565 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Therefore, to address the current limitations of the traditional SSL, we propose a novel heterogeneous semi-supervised learning (HSSL) task, where the training data consist of labeled and unlabeled data sampled from different distribution domains. The two domains contain a common set of semantic classes, but have different label and class feature distributions. The goal of HSSL is to train a model using the heterogeneous training data so that it can perform well on a held-out test set sampled from both the labeled and unlabeled domains. Without posing distribution similarity assumptions between the labeled and unlabeled data, HSSL is expected to be applicable to a broader range of real-world scenarios compared to standard SSL. This novel heterogeneous semi-supervised learning task however is much more challenging due to the following characteristics: (1) The domain gap, expressed as divergence between class feature distributions across the labeled and unlabeled domains, presents a significant impediment to model generalization and learning. (2) The absence of annotated samples from the unlabeled domain during training further compounds the complexity of the task. (3) Considering that the test set comprises samples from both domains, the devised so", + "bbox": [ + 509, + 568, + 910, + 901 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "CVF", + "bbox": [ + 106, + 2, + 181, + 42 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. 
Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore.", + "bbox": [ + 236, + 0, + 810, + 46 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "15371", + "bbox": [ + 480, + 944, + 517, + 955 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "lution methods need to accurately model the distributions inherent to each domain. It is imperative for the models to discern not only the domain from which a sample originates but also the specific semantic class it belongs to. This requires either an explicit or implicit methodology to categorize samples accurately with respect to both domain origin and semantic class categories, distinguishing the task from both conventional SSL and unsupervised domain adaptation (UDA)—traditional SSL overlooks the domain heterogeneity within both the training and testing data, whereas UDA exclusively concentrates on the unlabeled domain as the target domain [11, 21]. Therefore, traditional SSL and UDA methods are not readily applicable or effective in addressing the proposed HSSL task. A recent work [14] has made an effort to expand the traditional SSL task beyond its homogeneous assumptions. However, the proposed solution method learns separately in different domains using distinct components where an off-the-shelf UDA technique is employed to generate pseudo-labels for the unlabeled samples, bypassing the opportunity to train a unified cohesive model that could harness insights from both domains. Furthermore, their test set is confined to a labeled domain, while HSSL aims to train a model that generalizes across labeled and unlabeled domains. HSSL presents a more complex challenge, requiring the model to adapt and perform accurately across heterogeneous test data.", + "bbox": [ + 93, + 90, + 483, + 482 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In this work, we propose a novel method, named as Unified framework for Heterogeneous Semi-Supervised Learning (Uni-HSSL), to address the HSSL problem. The proposed method learns a fine-grained classification model cohesively under a unified framework by amalgamating the labeled and unlabeled class categories within an extended and precisely doubled label space. The framework consists of three technical components designed to tackle the HSSL challenges: a weighted moving average pseudo-labeling component, a cross-domain prototype alignment component, and a progressive inter-domain mixup component. The pseudo-labeling component leverages a weighted moving average strategy to assign and update pseudo-labels for the unlabeled data. In this manner, it generates smooth and adaptive assignment of pseudo-labels, reducing the potential pitfalls of oscillating updates or noisy label assignments, which is crucial given the significant domain gap between labeled data and unlabeled data. The cross-domain prototype alignment ensures that the inherent semantic structures of similar classes across the labeled and unlabeled domains are aligned. This alignment of class-centric prototypes between domains leverages inter-domain semantic class relationships, enabling knowledge transfer from the labeled domain to the unlabeled domain. The progressive inter-domain mixup component generates new synthetic instances by interpolating between labeled and unlabeled samples and bridges the gap between the two domains. 
By adopting a progressive", + "bbox": [ + 93, + 493, + 483, + 900 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "augmentation schedule, it gradually adapts the model to the distribution of the unlabeled domain, facilitating a steady and reliable knowledge transfer. Comprehensive experiments are conducted on several benchmark datasets. The empirical results demonstrate the efficacy and superior performance of our proposed unified framework compared to multiple state-of-the-art SSL and unsupervised domain adaptation baselines for HSSL.", + "bbox": [ + 516, + 90, + 903, + 210 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2. Related Works", + "text_level": 1, + "bbox": [ + 516, + 226, + 661, + 241 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2.1. Semi-Supervised Learning", + "text_level": 1, + "bbox": [ + 516, + 251, + 751, + 267 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Conventional Semi-Supervised Learning (SSL) In conventional SSL, the labeled and unlabeled segments of the dataset encompass identical classes, sharing consistent class and feature distributions. SSL methods are primarily classified into three categories: regularization-based techniques, teacher-student models, and pseudo-labeling strategies. Regularization-based techniques like II-model [16] modify the loss function with additional terms for model refinement. Teacher-student models like MT [33] and ICT [37] involve training a student network to mimic a teacher model using unlabeled data. Pseudo-labeling strategies like Pseudo-Label [19], FixMatch [31], FlexMatch [39], and SimMatch [41] expand labeled datasets using unlabeled data with pseudo-labels in various ways.", + "bbox": [ + 514, + 273, + 906, + 484 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Open-Set Semi-Supervised Learning (OS-SSL) OS-SSL deals with unknown or additional classes present in the unlabeled data but absent in the labeled set. OS-SSL assumes the same feature distribution over labeled and unlabeled sets. This is different from HSSL, which operates under the assumption that labeled and unlabeled data come from separate domains with different feature distributions. The concept of OS-SSL, introduced in [25], focuses on class distribution mismatches in open-set scenarios. Methods for OS-SSL like UASD [6] use self-distillation to exclude outliers from unlabeled data. DS3L [12] and MTCF [38] employ diverse weighting strategies for subset mismatches, minimizing the impact of private data in unlabeled sets. OpenMatch [3] utilizes one-vs-all classifiers for outlier detection but faces difficulties with unseen categories. While OS-SSL has advanced SSL towards practical use, it lacks capacity to handle feature distribution mismatches between labeled and unlabeled data.", + "bbox": [ + 514, + 503, + 906, + 773 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Universal Semi-Supervised Learning (USSL) Universal SSL [13] involves both shared and unique classes across the labeled and unlabeled sets, with the test set matching the labeled set's class distribution. 
HSSL, however, assumes shared classes across the labeled and unlabeled domains and tests on samples from both domains without their domain identities, adding complexity.", + "bbox": [ + 514, + 794, + 903, + 900 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "15372", + "bbox": [ + 480, + 945, + 517, + 955 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Similar to our work, bidirectional Adaptation [14] addresses the disparity between limited labeled and abundant unlabeled data, but it tests only within the labeled domain's feature distribution. It uses UDA techniques for pseudolabeling, avoiding the complexities and benefits of cross-domain modeling. In contrast, HSSL aims for effective generalization across both domains, posing a more intricate challenge in model adaptation and generalization.", + "bbox": [ + 89, + 90, + 483, + 212 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "2.2. Unsupervised Domain Adaptation", + "text_level": 1, + "bbox": [ + 89, + 222, + 388, + 239 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Unsupervised domain adaptation aims at learning a target model given labeled data from a source domain and unlabeled data from a target domain. Typical deep UDA approaches can be categorized into three types: alignment-based, regularization-based, and self-training-based methods. Alignment-based methods aim to reduce the cross-domain feature discrepancy with adversarial alignment [11, 22] and distance-based methods [4, 21, 27, 29]. Regularization-based methods utilize regularization terms to leverage knowledge from the unlabeled target data. Typical regularization terms include entropy minimization [30], virtual adversarial training [30], batch spectral penalization [5], batch nuclear-norm maximization [9], and mutual information maximization [17]. Self-training-based methods explore effective pseudo-labeling for unlabeled target data fitting, including confidence threshold [2, 42] and cycle self-training [20].", + "bbox": [ + 89, + 244, + 483, + 487 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3. Method", + "text_level": 1, + "bbox": [ + 89, + 502, + 181, + 517 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.1. Problem Setup", + "text_level": 1, + "bbox": [ + 89, + 527, + 241, + 542 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "We consider the following Heterogeneous Semi-Supervised Learning (HSSL) setup. The training data consist of a set of labeled instances $\\mathcal{D}_L = \\{(\\mathbf{x}_i^l,\\mathbf{y}_i^l)\\}_{i = 1}^{N_l}$ , where each instance $\\mathbf{x}_i^l$ is annotated with a one-hot label indicator vector $\\mathbf{y}_i^l$ with length $C$ , and a set of unlabeled instances $\\mathcal{D}_U = \\{\\mathbf{x}_i^u\\}_{i = 1}^{N_u}$ . The labeled data and unlabeled data are from two different domains that have dissimilar label distributions such that $p_L(\\mathbf{y})\\neq p_U(\\mathbf{y})$ and heterogeneous feature distributions such that $p_L(\\mathbf{x}|\\mathbf{y})\\neq p_U(\\mathbf{x}|\\mathbf{y})$ , but share the same set of $C$ semantic classes. The goal is to train a prediction model using both the labeled set $\\mathcal{D}_L$ and unlabeled set $\\mathcal{D}_U$ so that the trained model would generalize well on a held-out test set that is indistinguishably sampled from both the labeled and unlabeled domains.", + "bbox": [ + 89, + 549, + 483, + 760 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.2. 
Proposed Method", + "text_level": 1, + "bbox": [ + 89, + 772, + 263, + 787 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "In this section, we present the proposed Uni-HSSL method, which tackles the $C$ -class HSSL problem by combining the labeled and unlabeled class categories to a doubled label space and learning a fine-grained $2C$ -class classification model under a unified framework, aiming to adaptively handle the heterogeneous distributions across domains and gain better generalization over test instances randomly sampled", + "bbox": [ + 89, + 794, + 483, + 900 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "from both the labeled and unlabeled domains. The core idea centers on simultaneously facilitating effective knowledge transfer from the labeled domain to the unlabeled domain while harnessing the information within the unlabeled data.", + "bbox": [ + 511, + 90, + 903, + 151 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "We start by first pre-training a feature encoder and a $C$ -class semantic classifier on the labeled dataset, which can be used to produce the initial pseudo-labels of the unlabeled training data and provide partial initialization for our Uni-HSSL model. Then the $2C$ -class Uni-HSSL model, which consists of a feature encoder $f$ and a $2C$ -class classifier $h$ , will be learned within the proposed unified semi-supervised framework shown in Figure 1. The framework introduces three technical components to facilitate heterogeneous SSL. The weighted-moving-average (WMA) based pseudo-labeling component is deployed to support the effective exploitation of the unlabeled data, while the cross-domain prototype alignment component and progressive inter-domain mixup component are designed to promote information sharing and efficient and steady knowledge transfer from the labeled domain to the unlabeled domain. Further elaboration will be provided in the following sections.", + "bbox": [ + 511, + 152, + 906, + 409 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.2.1. Supervised Pre-training", + "text_level": 1, + "bbox": [ + 511, + 415, + 725, + 430 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "The initial challenge in training a $2C$ -class classification model with the given heterogeneous data is the absence of labeled instances entirely in the unlabeled domain. 
To tackle this problem, we exploit the assumption that the labeled and unlabeled domains share the same set of $C$ semantic class categories, and pre-train a $C$ -class classification model in the labeled domain to provide initial pseudo-labels for the training instances in the unlabeled domain.", + "bbox": [ + 511, + 434, + 905, + 554 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Specifically, we pre-train a $C$ -class model, which consists of a feature encoder $f$ and a $C$ -class probabilistic classifier $g$ , on the labeled data $\\mathcal{D}_L$ by minimizing the following supervised cross-entropy loss:", + "bbox": [ + 511, + 555, + 906, + 616 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal {L} _ {c e} ^ {L} = \\mathbb {E} _ {\\left(\\mathbf {x} _ {i} ^ {l}, \\mathbf {y} _ {i} ^ {l}\\right) \\in \\mathcal {D} _ {L}} \\left[ \\ell_ {c e} \\left(\\mathbf {y} _ {i} ^ {l}, g (f (\\mathbf {x} _ {i} ^ {l}))\\right) \\right] \\tag {1}\n$$\n", + "text_format": "latex", + "bbox": [ + 583, + 625, + 906, + 647 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "where $\\ell_{ce}$ denotes the cross-entropy function. Then we deploy the pre-trained classification model to make predictions on the unlabeled training instances in $\\mathcal{D}_U$ to generate their initial pseudo-labels:", + "bbox": [ + 511, + 655, + 908, + 715 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\bar {\\mathbf {y}} _ {i} ^ {0} = g \\left(f \\left(\\mathbf {x} _ {i} ^ {u}\\right)\\right), \\quad \\forall \\mathbf {x} _ {i} ^ {u} \\in \\mathcal {D} _ {U} \\tag {2}\n$$\n", + "text_format": "latex", + "bbox": [ + 607, + 726, + 906, + 744 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "where $\\bar{\\mathbf{y}}_i^0$ denotes the predicted class probability vector with length $C$ for the unlabeled instance $\\mathbf{x}_i^u$ . To provide initial labels on the unlabeled data for training the $2C$ -class model, we further expand each $\\bar{\\mathbf{y}}_i^0$ by concatenating it with a zero vector with length $C$ , $\\mathbf{0}_C$ :", + "bbox": [ + 511, + 753, + 908, + 830 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\hat {\\mathbf {y}} _ {i} ^ {0} = \\operatorname {c o n c a t} \\left(\\mathbf {0} _ {C}, \\bar {\\mathbf {y}} _ {i} ^ {0}\\right) \\tag {3}\n$$\n", + "text_format": "latex", + "bbox": [ + 635, + 840, + 906, + 859 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "This results in the first set of $C$ classes out of the $2C$ classes corresponding to the classes in the labeled domain, with", + "bbox": [ + 511, + 869, + 906, + 900 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "15373", + "bbox": [ + 480, + 944, + 517, + 955 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/b2dc7b8e254ecc5e7a924b0093c898cef1b0802eae32003cf24e947489c5e0c3.jpg", + "image_caption": [ + "Figure 1. An overview of the proposed Uni-HSSL training framework. The classification model consists of a feature encoder $f$ and a $2C$ -class classifier $h$ . After initialization with pre-training, the model is trained by jointly minimizing the combination of a supervised loss $\\mathcal{L}_{cl}^{L}$ on the labeled data, a WMA pseudo-labeling loss $\\mathcal{L}_{pl}^{U}$ on the unlabeled data, a cross-domain prototype alignment loss $\\mathcal{L}_{pa}$ , and a prediction loss $\\mathcal{L}_{\\mathrm{Mixup}}$ on the augmentation data produced via progressive inter-domain mixup." 
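To make the pre-training-based initialization in Eqs. (2)-(3) concrete, the following is a minimal PyTorch-style sketch; the function name, the data loader, and the assumption that the pre-trained classifier $g$ returns logits (so a softmax recovers $\bar{\mathbf{y}}_i^0$) are our illustration choices rather than details from the paper.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def init_pseudo_labels(f, g, unlabeled_loader):
    """Expand pre-trained C-class predictions into the 2C-class label space.

    Sketch of Eqs. (2)-(3): y_bar^0 = g(f(x_u)), then
    y_hat^0 = concat(0_C, y_bar^0), so the first C entries (the
    labeled-domain classes) carry zero mass for unlabeled instances.
    """
    pseudo = []
    for x_u in unlabeled_loader:
        y_bar = F.softmax(g(f(x_u)), dim=1)              # (B, C) class probabilities
        zeros = torch.zeros_like(y_bar)                  # the 0_C block
        pseudo.append(torch.cat([zeros, y_bar], dim=1))  # (B, 2C) initial pseudo-labels
    return torch.cat(pseudo, dim=0)
```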
+ ], + "image_footnote": [], + "bbox": [ + 163, + 93, + 831, + 335 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "the remaining set of $C$ classes corresponding to the classes in the unlabeled domain. Moreover, the parameters of the pre-trained $C$ -class model $(g \\circ f)$ can also be utilized to initialize the feature encoder $f$ and part of the classifier $h$ corresponding to the first $C$ classes in the $2C$ -class model, while the other part of $h$ will be randomly initialized.", + "bbox": [ + 88, + 424, + 483, + 518 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.2.2. Semi-Supervised Training with Adaptive Pseudo-Labeling", + "text_level": 1, + "bbox": [ + 89, + 526, + 483, + 556 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "After initialization, the proposed $2C$ -class classification model (feature encoder $f$ and probabilistic classifier $h$ ) will be trained by leveraging both the labeled set $\\mathcal{D}_L$ and the unlabeled set $\\mathcal{D}_U$ within a pseudo-labeling based SSL framework. On the labeled set $\\mathcal{D}_L$ , the following standard supervised cross-entropy loss will be used as the training objective:", + "bbox": [ + 89, + 561, + 483, + 652 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal {L} _ {c l} ^ {L} = \\mathbb {E} _ {\\left(\\mathbf {x} _ {i} ^ {l}, \\mathbf {y} _ {i} ^ {l}\\right) \\in \\mathcal {D} _ {L}} \\left[ \\ell_ {c e} \\left(h \\left(f \\left(\\mathbf {x} _ {i} ^ {l}\\right)\\right), \\operatorname {c o n c a t} \\left(\\mathbf {y} _ {i} ^ {l}, \\mathbf {0} _ {C}\\right)\\right) \\right] \\tag {4}\n$$\n", + "text_format": "latex", + "bbox": [ + 109, + 664, + 483, + 686 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where the concatenated label vector, $\\operatorname{concat}(\\mathbf{y}_i^l, \\mathbf{0}_C)$ , expands the ground-truth label vector $\\mathbf{y}_i^l$ into the $2C$ -class label space by appending a zero vector with length $C$ to it.", + "bbox": [ + 89, + 698, + 483, + 744 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Although we have obtained initial pseudo-labels for the unlabeled set $\\mathcal{D}_U$ by utilizing the pre-trained $C$ -class classifier, those initial labels are unavoidably noisy due to the existence of domain gap between the labeled and unlabeled domains. In order to effectively leverage the unlabeled data, we update the pseudo-label for each unlabeled instance $\\mathbf{x}_i^u$ during each training iteration in a weighted moving average (WMA) fashion as follows:", + "bbox": [ + 89, + 744, + 483, + 864 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\hat {\\mathbf {y}} _ {i} ^ {t} = \\beta \\hat {\\mathbf {y}} _ {i} ^ {t - 1} + (1 - \\beta) h \\left(f \\left(\\mathbf {x} _ {i} ^ {u}\\right)\\right) \\tag {5}\n$$\n", + "text_format": "latex", + "bbox": [ + 174, + 878, + 483, + 897 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $\\beta \\in (0,1)$ is a hyper-parameter that controls the rate of update, and $\\hat{\\mathbf{y}}_i^t$ is the updated pseudo-label for $\\mathbf{x}_i^u$ at the $t$ -th training iteration. This weighted moving average update strategy can yield a smooth and adaptive assignment of pseudo-labels by promptly incorporating the progress in the classification model and mitigating the risk of oscillatory updates. 
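As a concrete illustration, the update of Eq. (5) amounts to a few lines; this sketch assumes the classifier outputs logits and that per-instance pseudo-labels are cached across iterations, and it uses the paper's reported $\beta = 0.8$ as the default.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def wma_update(y_hat_prev, f, h, x_u, beta=0.8):
    """Weighted-moving-average pseudo-label update of Eq. (5).

    y_hat_prev: (B, 2C) pseudo-labels y_hat^{t-1} for the batch x_u.
    Returns the smoothed pseudo-labels y_hat^t for iteration t.
    """
    current = F.softmax(h(f(x_u)), dim=1)           # current prediction h(f(x_u))
    return beta * y_hat_prev + (1.0 - beta) * current
```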
Moreover, to further mitigate the adverse impact of noisy pseudo-labels, we deploy the following cross-entropy loss on the unlabeled set during training, selectively utilizing only instances with more reliable pseudo-labels:", + "bbox": [ + 511, + 424, + 906, + 575 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal {L} _ {p l} ^ {U} = \\mathbb {E} _ {\\mathbf {x} _ {i} ^ {u} \\in \\mathcal {D} _ {U}} [ \\mathbb {1} (\\max (\\hat {\\mathbf {y}} _ {i} ^ {t}) > \\epsilon) \\ell_ {c e} (h (f (\\mathbf {x} _ {i} ^ {u})), \\hat {\\mathbf {y}} _ {i} ^ {t}) ] \\tag {6}\n$$\n", + "text_format": "latex", + "bbox": [ + 527, + 585, + 906, + 607 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $\\mathbb{1}(\\cdot)$ denotes an indicator function; $\\epsilon \\in (0,1)$ is a predefined confidence threshold to ensure that only unlabeled instances with the maximum prediction probabilities larger than $\\epsilon$ are used for the current training iteration.", + "bbox": [ + 511, + 616, + 906, + 676 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "By treating semantic classes in distinct domains as separate categories, the $2C$ -class classification model serves as a strategic choice to differentiate samples across domains. This approach avoids the additional complexity associated with a dedicated domain classifier and naturally handles the divergence in class-feature distributions across domains. It also simplifies the process and has the potential to enhance domain generalization through a shared feature encoder.", + "bbox": [ + 511, + 676, + 908, + 797 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.2.3. Cross-Domain Semantic Prototype Alignment", + "text_level": 1, + "bbox": [ + 511, + 806, + 877, + 821 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Given that the labeled domain and unlabeled domain are comprised of the same set of $C$ semantic classes, there is a one-to-one correspondence relationship between each cross-domain class pair for the same semantic concept. In order to facilitate knowledge sharing and transfer across domains,", + "bbox": [ + 511, + 825, + 906, + 900 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "15374", + "bbox": [ + 480, + 944, + 517, + 955 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "we propose to align each semantic class from the labeled domain with its corresponding semantic class in the unlabeled domain within the learned feature embedding space. 
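Before turning to the prototype construction, the confidence-masked loss of Eq. (6) admits an equally short sketch; the soft-label cross-entropy form and the tensor shapes are our assumptions, with the paper's reported threshold $\epsilon = 0.5$ as the default.

```python
import torch.nn.functional as F

def masked_pseudo_label_loss(logits, y_hat, eps=0.5):
    """Confidence-masked cross-entropy on unlabeled data, Eq. (6).

    logits: (B, 2C) outputs h(f(x_u)); y_hat: (B, 2C) WMA pseudo-labels.
    Instances whose maximum pseudo-probability is at most eps are masked out.
    """
    mask = (y_hat.max(dim=1).values > eps).float()            # 1(max(y_hat^t) > eps)
    ce = -(y_hat * F.log_softmax(logits, dim=1)).sum(dim=1)   # per-instance soft cross-entropy
    return (mask * ce).mean()                                 # expectation over the batch
```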
To this end, we represent each class using a class-prototype vector and design a cross-domain semantic class-prototype alignment component to enforce the corresponding semantic class pairs across the domains are more similar in the feature embedding space than non-corresponding class pairs.", + "bbox": [ + 89, + 90, + 483, + 210 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Specifically, we compute the prototype vector for the $k$ -th class in the labeled set as the average feature embedding of the labeled instances belonging to the class:", + "bbox": [ + 89, + 212, + 483, + 258 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {p} _ {k} = \\mathbb {E} _ {\\left(\\mathbf {x} _ {i} ^ {l}, \\mathbf {y} _ {i} ^ {l}\\right) \\in \\mathcal {D} _ {L}} \\left[ \\mathbb {1} \\left(\\arg \\max _ {j} \\mathbf {y} _ {i j} ^ {l} = k\\right) f \\left(\\mathbf {x} _ {i} ^ {l}\\right) \\right] \\tag {7}\n$$\n", + "text_format": "latex", + "bbox": [ + 125, + 268, + 483, + 290 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $\\mathbf{y}_{ij}^{l}$ denotes the $j$ -th entry of the label vector $\\mathbf{y}_i^l$ . The corresponding $k$ -th semantic class in the unlabeled set is the $(C + k)$ -th class in the $2C$ -class label space. We compute the class prototype vectors in the unlabeled set based on the instances with reliable pseudo-labels, such that:", + "bbox": [ + 89, + 301, + 483, + 377 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {p} _ {C + k} = \\mathbb {E} _ {\\mathbf {x} _ {i} ^ {u} \\in \\mathcal {D} _ {U}} \\left[ \\mathbb {1} \\left(\\max (\\hat {\\mathbf {y}} _ {i} ^ {t}) > \\epsilon \\wedge \\right. \\left. \\arg \\max _ {j} \\hat {\\mathbf {y}} _ {i j} ^ {t} = C + k\\right) f \\left(\\mathbf {x} _ {i} ^ {u}\\right) \\right] \\tag {8}\n$$\n", + "text_format": "latex", + "bbox": [ + 91, + 388, + 483, + 424 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Then for each semantic class $k \\in \\{1, \\dots, C\\}$ , we align the prototypes of the corresponding class pairs from the labeled and unlabeled domains, $(\\mathbf{p}_k, \\mathbf{p}_{C + k})$ , by employing a cross-domain contrastive prototype alignment loss as follows:", + "bbox": [ + 89, + 434, + 483, + 494 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\begin{array}{l} \\mathcal {L} _ {p a} = - \\sum_ {k = 1} ^ {C} \\left[ \\log \\frac {\\exp \\left(\\cos \\left(\\mathbf {p} _ {k} , \\mathbf {p} _ {C + k}\\right) / \\tau\\right)}{\\sum_ {k ^ {\\prime} = 1} ^ {C} \\mathbb {1} \\left(k ^ {\\prime} \\neq k\\right) \\exp \\left(\\cos \\left(\\mathbf {p} _ {k} , \\mathbf {p} _ {C + k ^ {\\prime}}\\right) / \\tau\\right)} \\right. \\\\ \\left. + \\log \\frac {\\exp \\left(\\cos \\left(\\mathbf {p} _ {k} , \\mathbf {p} _ {C + k}\\right) / \\tau\\right)}{\\sum_ {k ^ {\\prime} = 1} ^ {C} \\mathbb {1} \\left(k ^ {\\prime} \\neq k\\right) \\exp \\left(\\cos \\left(\\mathbf {p} _ {k ^ {\\prime}} , \\mathbf {p} _ {C + k}\\right) / \\tau\\right)} \\right] \\tag {9} \\\\ \\end{array}\n$$\n", + "text_format": "latex", + "bbox": [ + 94, + 507, + 483, + 602 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $\\tau$ is a temperature hyper-parameter, and $\\cos (\\cdot ,\\cdot)$ denotes the cosine similarity function. 
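Putting Eqs. (7)-(9) together, one possible batch-level sketch of the alignment term is given below; the assumption that every class has at least one labeled sample and one confidently pseudo-labeled sample (so no prototype mean is empty) is ours, and $\tau = 0.5$ follows the reported setup.

```python
import torch
import torch.nn.functional as F

def prototype_alignment_loss(z_l, y_l, z_u, y_hat, C, tau=0.5, eps=0.5):
    """Cross-domain contrastive prototype alignment, Eqs. (7)-(9).

    z_l: (Nl, d) labeled features f(x^l); y_l: (Nl,) class ids in [0, C).
    z_u: (Nu, d) unlabeled features; y_hat: (Nu, 2C) soft pseudo-labels.
    """
    # Eq. (7): labeled prototypes as per-class mean features.
    p_l = torch.stack([z_l[y_l == k].mean(dim=0) for k in range(C)])
    # Eq. (8): unlabeled prototypes from confidently pseudo-labeled instances.
    conf = y_hat.max(dim=1).values > eps
    hard = y_hat.argmax(dim=1)  # hard labels in [0, 2C); unlabeled classes are C..2C-1
    p_u = torch.stack([z_u[conf & (hard == C + k)].mean(dim=0) for k in range(C)])
    # Eq. (9): cosine similarities between all cross-domain prototype pairs.
    sim = F.cosine_similarity(p_l.unsqueeze(1), p_u.unsqueeze(0), dim=2) / tau
    pos = sim.diag()                              # corresponding pairs (p_k, p_{C+k})
    diag = torch.eye(C, dtype=torch.bool)
    neg_row = torch.logsumexp(sim.masked_fill(diag, float("-inf")), dim=1)  # vs p_{C+k'}
    neg_col = torch.logsumexp(sim.masked_fill(diag, float("-inf")), dim=0)  # vs p_{k'}
    # Both log-ratio terms per class; negatives exclude the matching pair.
    return -(2 * pos - neg_row - neg_col).sum()
```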
This contrastive loss promotes the sharing of predictive information between the labeled and unlabeled domains by encouraging the corresponding class prototype pairs to be closer to each other while simultaneously pushing the non-corresponding cross-domain class prototype pairs farther apart.", + "bbox": [ + 89, + 616, + 483, + 720 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.2.4. Progressive Inter-Domain Mixup", + "text_level": 1, + "bbox": [ + 89, + 729, + 366, + 744 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "In order to bridge the gap between the labeled domain and the unlabeled domain, we propose a progressive inter-domain mixup mechanism to augment the training set by dynamically generating synthetic instances between the labeled set and unlabeled set, with the objective of facilitating steady and efficient knowledge transfer from the labeled domain to the unlabeled domain.", + "bbox": [ + 89, + 750, + 483, + 853 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Specifically, we generate an inter-domain synthetic instance $(\mathbf{x}^m,\mathbf{y}^m)$ by mixing a labeled instance $(\mathbf{x}^l,\mathbf{y}^l)$ from the labeled set $\mathcal{D}_L$ with a pseudo-labeled instance $(\mathbf{x}^u,\hat{\mathbf{y}}^t)$", + "bbox": [ + 89, + 854, + 483, + 901 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "from the unlabeled set $\mathcal{D}_U$ through linear interpolation:", + "bbox": [ + 513, + 90, + 879, + 106 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf{x}^{m} = \\lambda \\mathbf{x}^{u} + (1 - \\lambda) \\mathbf{x}^{l}, \\tag{10}\n$$\n", + "text_format": "latex", + "bbox": [ + 612, + 117, + 905, + 143 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf{y}^{m} = \\lambda \\hat{\\mathbf{y}}^{t} + (1 - \\lambda) \\operatorname{concat}(\\mathbf{y}^{l}, \\mathbf{0}_{C}),\n$$\n", + "text_format": "latex", + "bbox": [ + 571, + 138, + 821, + 156 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $\lambda \in [0,1]$ is the mixing coefficient. To fully utilize the available data in both domains, we can generate $N^{m} = \max (N^{l},N^{u})$ synthetic instances to form a synthetic set $\mathcal{D}_{\mathrm{Mixup}}$ by mixing each instance in the larger domain with a randomly selected instance in the other domain.", + "bbox": [ + 511, + 167, + 905, + 242 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "In the standard mixup [40], the mixing coefficient $\lambda$ is sampled from a fixed $\mathrm{Beta}(\alpha, \alpha)$ distribution with hyperparameter $\alpha$. To facilitate a steady and smooth adaptation from the labeled domain to the unlabeled domain for HSSL, we propose to dynamically generate the mixup data in each training iteration $t$ by deploying a progressive mixup strategy that samples $\lambda$ from a shifted $\mathrm{Beta}(\alpha, \alpha)$ distribution based on a schedule function $\psi(t)$, such that:", + "bbox": [ + 511, + 244, + 906, + 364 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\lambda \\sim \\psi(t) \\times \\operatorname{Beta}(\\alpha, \\alpha), \\quad \\psi(t) = 0.5 + \\frac{t}{2T} \\tag{11}\n$$\n", + "text_format": "latex", + "bbox": [ + 547, + 376, + 906, + 405 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $T$ denotes the total number of training iterations. 
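In code, the progressive sampling of Eqs. (10)-(11) is a small modification of standard mixup; this sketch assumes equal-size batches from the two domains and a per-instance $\lambda$, and the value of the Beta hyper-parameter $\alpha$ is our placeholder since the paper does not report it. The resulting pairs are fit with the squared-error loss introduced next.

```python
import torch

def progressive_mixup(x_l, y_l_2c, x_u, y_hat, t, T, alpha=0.75):
    """Progressive inter-domain mixup, Eqs. (10)-(11).

    x_l, y_l_2c: labeled batch with 2C-dim labels concat(y^l, 0_C);
    x_u, y_hat: unlabeled batch with 2C-dim WMA pseudo-labels.
    psi(t) = 0.5 + t / (2T) widens the sampling interval of lambda
    from roughly [0, 0.5) at t = 0 towards [0, 1] at t = T.
    """
    psi = 0.5 + t / (2.0 * T)
    lam = psi * torch.distributions.Beta(alpha, alpha).sample((x_l.size(0),))
    lam_x = lam.view(-1, 1, 1, 1)             # broadcast over image dimensions
    x_m = lam_x * x_u + (1.0 - lam_x) * x_l   # Eq. (10), mixed inputs
    y_m = lam.view(-1, 1) * y_hat + (1.0 - lam.view(-1, 1)) * y_l_2c
    return x_m, y_m
```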
Following this schedule, at the beginning of the training process, we have $\psi(0) \approx 0.5$ and $\lambda$ is sampled from the approximate interval [0, 0.5) as the model prioritizes the labeled domain, guarding against noisy pseudo-label predictions from unlabeled data. As the training progresses, the model gradually increases its reliance on the unlabeled data, and the interval [0, $\psi(t)$ ] from which $\lambda$ is sampled is expanded gradually towards [0, 1] (with $\psi(T) = 1$ ), allowing it to adapt seamlessly between domains.", + "bbox": [ + 511, + 414, + 906, + 564 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Following previous works on using mixup data [1], we employ the mixup set $\mathcal{D}_{\mathrm{Mixup}}$ for model training by minimizing the following mean squared error:", + "bbox": [ + 511, + 566, + 906, + 611 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal{L}_{\\mathrm{Mixup}} = \\mathbb{E}_{(\\mathbf{x}_{i}^{m}, \\mathbf{y}_{i}^{m}) \\in \\mathcal{D}_{\\mathrm{Mixup}}} \\left[ \\left\\| h(f(\\mathbf{x}_{i}^{m})) - \\mathbf{y}_{i}^{m} \\right\\|^{2} \\right] \\tag{12}\n$$\n", + "text_format": "latex", + "bbox": [ + 531, + 622, + 906, + 648 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.2.5. Training Objective", + "text_level": 1, + "bbox": [ + 513, + 657, + 691, + 672 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "By integrating the classification loss terms on the labeled set, the unlabeled set, and the mixup set, with the class prototype alignment loss, we obtain the following joint training objective for the Uni-HSSL model:", + "bbox": [ + 511, + 676, + 905, + 737 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal{L}_{\\mathrm{total}} = \\mathcal{L}_{cl}^{L} + \\lambda_{pl} \\mathcal{L}_{pl}^{U} + \\lambda_{pa} \\mathcal{L}_{pa} + \\lambda_{\\mathrm{Mixup}} \\mathcal{L}_{\\mathrm{Mixup}} \\tag{13}\n$$\n", + "text_format": "latex", + "bbox": [ + 532, + 750, + 906, + 768 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $\lambda_{pl},\lambda_{pa}$ and $\lambda_{\mathrm{Mixup}}$ are trade-off hyper-parameters.", + "bbox": [ + 511, + 779, + 892, + 795 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4. Experiments", + "text_level": 1, + "bbox": [ + 511, + 808, + 643, + 825 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4.1. Experimental Setup", + "text_level": 1, + "bbox": [ + 511, + 832, + 702, + 849 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Datasets We conducted comprehensive experiments to evaluate the performance of our proposed framework on four image classification benchmark datasets: Office-31,", + "bbox": [ + 511, + 854, + 906, + 900 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "15375", + "bbox": [ + 480, + 944, + 517, + 955 + ], + "page_idx": 4 + }, + { + "type": "table", + "img_path": "images/3364bb9cb1c8607ff476c7cc33072927c71d28afc10fb4d760e55149e101017a.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
| | Supervised | FlexMatch | FixMatch | SimMatch | CDAN+Sup | MCC+Sup | BiAdapt | Uni-HSSL |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| A/C | 53.1 (0.7) | 51.1 (1.2) | 51.9 (1.5) | 57.8 (1.6) | 47.0 (0.5) | 54.9 (1.2) | 55.1 (1.8) | 60.1 (0.9) |
| C/A | 66.0 (1.2) | 68.1 (1.3) | 63.8 (0.7) | 69.7 (0.9) | 63.9 (0.7) | 70.5 (0.3) | 65.1 (1.2) | 72.0 (0.7) |
| C/R | 77.5 (0.9) | 72.1 (0.9) | 79.5 (0.5) | 78.5 (0.5) | 67.1 (0.8) | 75.4 (0.5) | 75.2 (1.2) | 80.5 (0.4) |
| R/C | 63.9 (1.2) | 67.8 (1.6) | 66.2 (0.7) | 64.3 (0.8) | 67.0 (1.2) | 69.3 (0.5) | 61.2 (1.8) | 72.8 (0.6) |
| R/A | 72.6 (0.9) | 59.0 (1.2) | 74.1 (0.5) | 70.5 (0.5) | 74.6 (0.9) | 75.1 (0.8) | 69.1 (0.9) | 75.8 (0.6) |
| A/R | 75.1 (0.7) | 73.5 (0.9) | 70.4 (0.6) | 75.8 (0.5) | 66.5 (0.8) | 77.3 (0.8) | 72.1 (1.4) | 78.3 (0.5) |
| A/P | 67.4 (1.5) | 64.0 (0.9) | 62.7 (0.6) | 68.9 (0.6) | 56.5 (0.5) | 71.8 (0.4) | 64.9 (1.3) | 70.9 (0.8) |
| P/A | 69.1 (1.0) | 64.1 (1.2) | 62.8 (0.8) | 69.7 (0.9) | 74.9 (1.2) | 76.1 (0.2) | 64.1 (0.8) | 78.7 (0.4) |
| C/P | 69.1 (0.9) | 65.6 (1.0) | 65.1 (1.1) | 70.0 (0.4) | 65.5 (1.2) | 71.2 (0.5) | 69.1 (1.4) | 72.8 (0.7) |
| P/C | 64.6 (0.9) | 64.3 (1.1) | 65.2 (1.5) | 68.5 (0.8) | 66.8 (0.6) | 68.0 (0.5) | 67.7 (0.9) | 69.9 (0.9) |
| P/R | 80.0 (0.5) | 73.3 (0.7) | 78.1 (0.4) | 78.1 (0.2) | 89.5 (0.4) | 82.1 (0.6) | 76.2 (1.2) | 82.9 (0.4) |
| R/P | 77.9 (0.1) | 68.1 (1.2) | 74.7 (0.3) | 74.0 (0.7) | 78.2 (1.3) | 77.0 (1.2) | 74.1 (1.4) | 82.1 (0.5) |
| Avg. | 69.7 | 65.9 | 67.9 | 70.5 | 67.4 | 72.3 | 67.8 | 74.7 |
", + "bbox": [ + 117, + 87, + 880, + 303 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Table 1. Mean classification accuracy (standard deviation is within parentheses) on the Office-Home dataset using the ResNet-50 backbone. The first domain in each row indicates the labeled domain while the second domain indicates the unlabeled domain.", + "bbox": [ + 89, + 314, + 906, + 344 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Office-Home, VisDA, and ISIC-2019. In all four datasets, we split the samples of each domain into 90/10 train/test data. Office-31 [28] is comprised of a collection of 4,652 images spanning 31 different categories. The images are sourced from 3 distinct domains: Amazon (A), DSLR (D), and Webcam (W) with different image resolutions, quality, and lighting conditions. Office-Home [36] is a large collection of over 15,500 images spanning 65 categories. The images are sourced from 4 diverse domains: Artistic images (A), Clip Art (C), Product images (P), and Real-World images (R). VisDA-2017 [26] is a large-scale dataset tailored specifically for the visual domain adaptation task. This dataset includes images of 12 distinct categories from two domains, Synthetic (S) and Real (R). With the significant domain shift between the synthetic and real images, VisDA highlights the difficulties associated with bridging significant domain gaps. ISIC-2019 is a comprehensive repository of skin cancer research images sourced from 4 different sources: BCN-20000 (BCN) [8], Skin Cancer MNIST (HAM) [34], MSK4 [7], and an undefined source. We only utilize BCN and HAM sources as they include samples from all eight distinct classes.", + "bbox": [ + 88, + 369, + 485, + 700 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Implementation Details For all baselines we compared our Uni-HSSL against, we strictly followed the implementation details and hyper-parameters specified in the corresponding original papers. In order to ensure consistent comparisons with a multitude of earlier studies across various benchmark datasets, we employed two common backbone networks: ResNet-50 and ResNet-101 which are pre-trained on the ImageNet [10] dataset. We utilized ResNet-101 for VisDA dataset experiments and ResNet-50 for all the other benchmark datasets. The supervised pre-training stage is made up of 10 epochs while the semi-supervised training stage is made up of 100 epochs. In both stages, we em", + "bbox": [ + 89, + 719, + 485, + 901 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "ployed an SGD optimizer with a learning rate of $5e^{-4}$ and Nesterov momentum [24] of 0.9. In the semi-supervised training stage, the learning rate is adjusted using a cosine annealing strategy [23, 37]. We set the L2 regularization coefficient to $1e^{-3}$ and the batch size to 32 for all datasets. The trade-off hyper-parameters $\\lambda_{pl},\\lambda_{pa},\\lambda_{\\mathrm{Mixup}}$ take the values 1, $1e^{-2}$ and 1 respectively, while $\\tau$ and $\\epsilon$ take the value 0.5 and $\\beta$ is set to 0.8. Furthermore, similar to [1], we apply random translations and horizontal flips to the input images prior to applying the Progressive Inter-Domain Mixup. We report the mean classification accuracy and the corresponding standard deviation over 3 runs in each experiment.", + "bbox": [ + 511, + 369, + 908, + 551 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.2. 
Comparison Results", + "text_level": 1, + "bbox": [ + 511, + 561, + 705, + 575 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "We evaluate the proposed Uni-HSSL framework on the heterogeneous semi-supervised learning tasks and compare it to four categories of baselines: Supervised Learning baselines, Semi-Supervised Learning (SSL) baselines, Unsupervised Domain Adaptation (UDA) baselines, and Bidirectional Adaptation baselines. The supervised baseline is exclusively trained on the labeled data and does not leverage the unlabeled data during training. We employ a set of representative SSL baselines (FlexMatch [39], FixMatch [31], and SimMatch [41]) and a set of representative UDA baselines (CDAN [22] and MCC [15]). In particular, we also compare our work with the state-of-the-art bidirectional adaptation method (BiAdapt) [14]. As the traditional UDA methods are trained to perform well solely on an unlabeled target domain, to ensure a fair comparison, we equip the UDA methods with a Supervised classifier (Sup) trained on the labeled set and a domain classifier and refer to them as $\\mathrm{MCC} + \\mathrm{Sup}$ and $\\mathrm{CDAN} + \\mathrm{Sup}$ . At inference time, the domain classifier assigns each test sample to the appropriate classifier in the corresponding domain—either the supervised classifier for samples predicted to originate from the labeled domain or", + "bbox": [ + 511, + 583, + 908, + 902 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "15376", + "bbox": [ + 480, + 944, + 519, + 955 + ], + "page_idx": 5 + }, + { + "type": "table", + "img_path": "images/7714dd751dca36bb537019b0ac976995b6c380b6319cc16832ce47941ea1e9d0.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
| | Supervised | FlexMatch | FixMatch | SimMatch | CDAN+Sup | MCC+Sup | BiAdapt | Uni-HSSL |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Plane | 93.8 (0.2) | 98.3 (0.7) | 94.9 (0.5) | 93.6 (0.8) | 98.4 (0.3) | 98.6 (0.3) | 90.1 (1.4) | 98.2 (0.5) |
| Bicycle | 74.1 (0.5) | 74.8 (0.9) | 53.5 (0.2) | 81.1 (0.8) | 94.4 (0.7) | 96.6 (0.5) | 79.1 (1.2) | 97.5 (0.9) |
| Bus | 79.4 (0.7) | 53.9 (1.2) | 79.5 (0.8) | 56.9 (1.2) | 90.1 (0.5) | 88.6 (0.7) | 54.7 (1.3) | 91.4 (0.8) |
| Car | 86.2 (0.9) | 36.4 (2.1) | 88.5 (0.3) | 59.6 (1.5) | 85.1 (0.5) | 84.8 (0.9) | 56.1 (1.2) | 89.0 (0.9) |
| Horse | 90.9 (0.2) | 97.4 (0.5) | 76.0 (0.8) | 65.6 (1.0) | 96.6 (0.1) | 97.6 (0.3) | 62.1 (1.4) | 98.2 (0.3) |
| Knife | 87.5 (0.7) | 77.2 (0.8) | 78.8 (0.9) | 71.9 (0.5) | 95.0 (1.4) | 95.1 (0.9) | 68.2 (0.1) | 98.9 (0.4) |
| Motor. | 94.5 (0.4) | 66.6 (1.2) | 40.8 (1.2) | 70.8 (0.9) | 96.6 (0.5) | 94.2 (0.2) | 68.1 (1.5) | 97.0 (0.6) |
| Person | 80.0 (0.7) | 80.5 (0.8) | 58.9 (1.6) | 64.1 (0.8) | 94.3 (0.6) | 94.6 (0.2) | 62.5 (1.2) | 95.6 (0.7) |
| Plant | 91.1 (0.7) | 91.8 (0.8) | 62.7 (0.7) | 65.5 (0.9) | 96.5 (0.4) | 97.3 (0.5) | 63.5 (1.9) | 95.7 (0.2) |
| Skateboard | 81.8 (0.9) | 90.0 (0.5) | 68.9 (1.2) | 57.0 (1.7) | 85.5 (0.5) | 83.0 (0.8) | 59.3 (1.7) | 91.5 (0.8) |
| Train | 96.0 (0.3) | 96.8 (0.7) | 94.2 (0.4) | 74.2 (0.9) | 95.7 (0.7) | 95.6 (0.1) | 71.3 (1.5) | 97.0 (0.3) |
| Truck | 59.8 (0.9) | 49.2 (1.2) | 49.5 (1.2) | 52.1 (1.7) | 79.8 (0.2) | 80.6 (1.0) | 50.1 (1.5) | 82.4 (0.7) |
| Avg. | 84.1 | 82.4 | 87.3 | 80.8 | 92.1 | 92.0 | 79.1 | 93.1 |
", + "bbox": [ + 96, + 87, + 901, + 303 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/4181a9504a07b44ca03c9657c5cc6c0587c93c034995be45d31860c37b8522bd.jpg", + "table_caption": [ + "Table 2. Mean classification accuracy (standard deviation is within parentheses) on the VisDA dataset using the ResNet-101 backbone. The rows correspond to the different classes of the dataset." + ], + "table_footnote": [], + "table_body": "
| | B/H | H/B | Avg. |
| --- | --- | --- | --- |
| Supervised | 70.5 (0.9) | 65.4 (1.2) | 67.9 |
| FlexMatch | 71.3 (1.4) | 68.7 (0.8) | 70.0 |
| FixMatch | 77.5 (0.8) | 65.0 (0.7) | 71.3 |
| SimMatch | 75.1 (1.5) | 69.2 (1.7) | 72.2 |
| CDAN+Sup | 72.9 (1.0) | 65.2 (0.4) | 69.1 |
| MCC+Sup | 60.2 (1.8) | 56.7 (1.7) | 58.7 |
| BiAdapt | 74.2 (0.7) | 68.3 (1.3) | 71.2 |
| Uni-HSSL | 79.9 (0.7) | 71.0 (0.9) | 75.4 |
", + "bbox": [ + 129, + 366, + 444, + 508 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Table 3. Mean classification accuracy (standard deviation is within parentheses) on ISIC-2019 using the ResNet-50 backbone. The first domain in each row indicates the labeled domain the second domain indicates the unlabeled domain.", + "bbox": [ + 89, + 518, + 482, + 575 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "the UDA classifier for those predicted to come from the unlabeled domain.", + "bbox": [ + 89, + 595, + 482, + 625 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "The comparison results on the Office-Home, VisDA, ISIC-2019 and Office-31 datasets are reported in Tables 1, 2, 3 and 4, respectively, where the first domain indicates the labeled domain and the second domain indicates the unlabeled domain. In the case of the VisDA dataset, the labeled dataset is sampled from the synthetic domain (S) and the unlabeled dataset is sampled from the real domain (R) and we report the average classification accuracy for each class and the overall average classification accuracy. The tables show that Uni-HSSL consistently outperforms all baselines on all datasets across all setups. The performance gains over the supervised baseline are notable exceeding $9\\%$ , $4\\%$ , $9\\%$ , and $7\\%$ on average in the cases of the Office-31, Office-Home, VisDA, and ISIC-2019 datasets, respectively. In the case of the VisDA dataset, the performance improvement over the supervised baseline at the class level is substantial, exceeding $22\\%$ for some classes. Furthermore, Uni-HSSL consistently outperforms all the SSL baselines, achieving", + "bbox": [ + 89, + 628, + 485, + 901 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "performance gains exceeding $3\\%$ , $4\\%$ , $5\\%$ , and $3\\%$ over the most effective SSL baselines on the Office-31, Office-Home, VisDA, and ISIC-2019 datasets, respectively. In some cases, such as A/W on Office-31 and P/A on Office-Home, the performance improvement over SSL baselines is notable, surpassing $6\\%$ and $8\\%$ , respectively, highlighting the limitations of traditional SSL baselines in the proposed HSSL task. In the case of the UDA baselines, Uni-HSSL yields superior performance with all domain setups on all four datasets with performance gains around $4\\%$ , $2\\%$ , $1\\%$ , and $6\\%$ on Office-31, Office-Home, VisDA and ISIC-2019 datasets, respectively. Uni-HSSL outperforms the UDA baselines on almost all classes of the VisDA dataset, with the UDA baselines slightly excelling in only two classes. However, Uni-HSSL still maintains superior overall performance compared to the UDA baselines. Furthermore, the MCC+Sup baseline does not perform well on the ISIC-2019 dataset, where it suffers a major drop in performance which can be attributed to the MCC baseline's sensitivity to the class imbalance inherent in this dataset. Moreover, our Uni-HSSL also substantially outperforms BiAdapt, with performance gains surpassing $5\\%$ , $6\\%$ , $14\\%$ , and $4\\%$ on the Office-31, Office-Home, VisDA and ISIC-2019 datasets, respectively. These results underscore the robustness of Uni-HSSL and highlight the limitations of BiAdapt in effectively addressing the challenges posed by the proposed HSSL task.", + "bbox": [ + 511, + 369, + 908, + 762 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.3. 
Ablation Study", + "text_level": 1, + "bbox": [ + 511, + 772, + 666, + 787 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "In order to investigate the contribution of each component of the proposed framework, we conducted an ablation study to compare the proposed Uni-HSSL with its six variants: (1) “-w/o WMA”, which drops the Weighted Moving Average component of the pseudo-label update and simply uses the model predictions to generate pseudo-labels; (2) “-w/o $\\mathcal{L}_{\\mathrm{cl}}^{L}$ ”, which drops the cross-entropy classification loss on the la", + "bbox": [ + 511, + 794, + 908, + 902 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "15377", + "bbox": [ + 480, + 944, + 517, + 955 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/3139c34cc36c435b4dbaa76d7004cef3322abdcfff60864d9bba26612b566892.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
| | W/A | A/W | A/D | D/A | D/W | W/D | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Supervised | 68.6 (1.6) | 82.8 (1.2) | 85.1 (1.8) | 35.5 (0.9) | 96.9 (0.4) | 98.2 (0.5) | 77.8 |
| FlexMatch | 68.1 (1.8) | 81.3 (1.3) | 85.1 (1.8) | 63.0 (2.1) | 98.5 (0.2) | 98.9 (0.2) | 82.4 |
| FixMatch | 69.1 (1.3) | 83.4 (0.9) | 86.4 (0.8) | 53.7 (1.3) | 98.1 (0.2) | 98.2 (0.2) | 81.5 |
| SimMatch | 71.1 (0.9) | 84.1 (1.0) | 86.5 (0.5) | 68.6 (1.1) | 96.8 (0.5) | 98.8 (0.4) | 84.3 |
| CDAN+Sup | 61.2 (1.2) | 82.5 (1.3) | 87.4 (2.2) | 58.3 (2.6) | 79.2 (0.4) | 97.5 (0.4) | 77.7 |
| MCC+Sup | 71.5 (2.7) | 88.8 (0.7) | 89.1 (0.5) | 67.6 (1.3) | 81.7 (0.7) | 99.5 (0.4) | 83.0 |
| BiAdapt | 70.2 (0.9) | 85.0 (0.5) | 77.4 (0.7) | 67.1 (1.0) | 94.2 (0.5) | 98.5 (0.3) | 82.0 |
| Uni-HSSL | 73.1 (1.0) | 90.2 (0.8) | 90.0 (0.2) | 72.1 (0.7) | 100 (0.0) | 100 (0.0) | 87.5 |
", + "bbox": [ + 183, + 88, + 812, + 229 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/fed544c827304609b0fb2b46f32701da9459e32623d38cec8da7914ee7895e1c.jpg", + "table_caption": [ + "Table 4. Mean classification accuracy (standard deviation is within parentheses) on the Office-31 dataset using the ResNet-50 backbone. The first domain in each column indicates the labeled domain the second domain indicates the unlabeled domain." + ], + "table_footnote": [], + "table_body": "
| | W/A | A/W | A/D | D/A | D/W | W/D | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Uni-HSSL | 73.1 (1.0) | 90.2 (0.8) | 90.0 (0.2) | 72.1 (0.7) | 100 (0.0) | 100 (0.0) | 87.5 |
| -w/o WMA | 72.8 (0.5) | 87.1 (0.8) | 88.3 (0.9) | 71.0 (0.8) | 100 (0.0) | 100 (0.0) | 86.8 |
| -w/o \( \mathcal{L}_{cl}^{L} \) | 67.6 (1.7) | 85.5 (0.8) | 86.1 (1.2) | 64.8 (2.0) | 93.2 (0.5) | 92.9 (0.6) | 81.7 |
| -w/o \( \mathcal{L}_{pl}^{U} \) | 72.5 (0.8) | 87.9 (0.9) | 88.1 (0.7) | 71.0 (0.9) | 98.0 (0.2) | 98.5 (0.2) | 86.1 |
| -w/o \( \mathcal{L}_{pa} \) | 72.7 (0.5) | 88.9 (0.7) | 87.2 (0.6) | 71.3 (0.9) | 99.1 (0.0) | 100 (0.0) | 86.5 |
| -w/o \( \mathcal{L}_{Mixup} \) | 71.9 (1.2) | 86.7 (0.9) | 88.1 (0.8) | 71.3 (1.1) | 98.0 (0.4) | 99.9 (0.0) | 86.1 |
| -w/o Prog. Mixup | 71.3 (0.9) | 84.8 (0.9) | 88.1 (1.0) | 70.0 (1.3) | 99.2 (0.5) | 99.9 (0.0) | 85.6 |
", + "bbox": [ + 158, + 282, + 839, + 409 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Table 5. Ablation study results in terms of mean classification accuracy (standard deviation is within parentheses) on the Office-31 dataset using the ResNet-50 backbone. The first domain in each column indicates the labeled domain while the second domain indicates the unlabeled domain.", + "bbox": [ + 89, + 420, + 906, + 460 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "beled set $\\mathcal{D}_L$ ; (3) “ $-\\mathrm{w/o}\\mathcal{L}_{\\mathrm{pl}}^U$ , which drops the cross-entropy pseudo-label classification loss on the unlabeled set $\\mathcal{D}_U$ ; (4) “ $-\\mathrm{w/o}\\mathcal{L}_{\\mathrm{pa}}$ , which drops the Cross-Domain Prototype Alignment component; (5) “ $-\\mathrm{w/o}\\mathcal{L}_{\\mathrm{Mixup}}$ , which drops the Progressive Inter-Domain Mixup component; and (6) “ $-\\mathrm{w/o}$ Prog. Mixup”, which drops the progressive component of the Inter-Domain Mixup and uses a simple mixup for inter-domain data augmentation. We compare the proposed UniHSSL with all the six variants on the Office-31 dataset and report the results in Table 5.", + "bbox": [ + 88, + 473, + 482, + 625 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "From the table, we can see that dropping any component from the proposed unified framework results in performance degradation in all cases. “-w/o $\\mathcal{L}_{\\mathrm{cl}}^{L}$ ” variant suffered the largest performance degradation, which highlights the importance of the ground-truth labels of $\\mathcal{D}_L$ in guiding the learning process of the framework. Dropping the WMA from the pseudo-label generation component led to a slight average performance drop to $86.8\\%$ , underscoring its role in obtaining stable and confident pseudo-labels. Similarly, dropping the classification loss on the unlabeled data $\\mathcal{L}_{pl}^{U}$ led to a performance degradation to $86.1\\%$ . Furthermore, the variant “-w/o Prog. Mixup” suffers a larger drop in performance in comparison with the variant “-w/o $\\mathcal{L}_{\\mathrm{Mixup}}$ ”, which highlights the importance of progressively generating the augmented samples to ensure the accuracy of their corresponding augmented labels. Generating inter-domain augmented samples without taking into account the domain gap between the labeled domain and unlabeled domain can", + "bbox": [ + 89, + 628, + 482, + 900 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "lead to a degradation in performance due to the noisy augmented labels of the generated samples. Overall, the consistent performance drops across all the tasks of Office-31 for each variant validate the essential contribution of each corresponding component of the Uni-HSSL framework.", + "bbox": [ + 511, + 474, + 906, + 551 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "5. Conclusion", + "text_level": 1, + "bbox": [ + 511, + 570, + 633, + 585 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "In this paper, we introduced a challenging heterogeneous semi-supervised learning problem, where the labeled and unlabeled training data come from different domains and possess different label and class feature distributions. To address this demanding setup, we proposed a Unified Framework for Heterogeneous Semi-Supervised Learning (Uni-HSSL), which trains a fine-grained classification model over the concatenated label space by effectively exploiting the labeled and unlabeled data as well as their relationships. 
Uni-HSSL adopts a WMA pseudo-labeling strategy to obtain stable and confident pseudo-labels for the unlabeled data, while deploying a cross-domain class prototype alignment component to support knowledge transfer and sharing between domains. A novel progressive inter-domain mixup component is further devised to augment the training data and bridge the significant gap between the labeled and unlabeled domains. The experimental results demonstrate the effectiveness and superiority of the proposed Uni-HSSL over state-of-the-art semi-supervised learning methods and unsupervised domain adaptation baselines.", + "bbox": [ + 511, + 598, + 906, + 900 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "15378", + "bbox": [ + 480, + 944, + 517, + 955 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 91, + 89, + 187, + 104 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[1] David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, and Colin A Raffel. Mixmatch: A holistic approach to semi-supervised learning. In Advances in Neural Information Processing Systems (NeurIPS), 2019. 5, 6", + "[2] David Berthelot, Rebecca Roelofs, Kihyuk Sohn, Nicholas Carlini, and Alexey Kurakin. Adamatch: A unified approach to semi-supervised learning and domain adaptation. In International Conference on Learning Representations (ICLR), 2021. 3", + "[3] Kaidi Cao, Maria Brbic, and Jure Leskovec. Open-world semi-supervised learning. In International Conference on Learning Representations (ICLR), 2022. 2", + "[4] Chao Chen, Zhihang Fu, Zhihong Chen, Sheng Jin, Zhaowei Cheng, Xinyu Jin, and Xian-Sheng Hua. Homm: Higher-order moment matching for unsupervised domain adaptation. In AAAI Conference on Artificial Intelligence, 2020. 3", + "[5] Xinyang Chen, Sinan Wang, Mingsheng Long, and Jianmin Wang. Transferability vs. discriminability: Batch spectral penalization for adversarial domain adaptation. In International Conference on Machine Learning (ICML), 2019. 3", + "[6] Yanbei Chen, Xiatian Zhu, Wei Li, and Shaogang Gong. Semi-supervised learning under class distribution mismatch. In Proceedings of the AAAI Conference on Artificial Intelligence, 2020. 2", + "[7] Noel CF Codella, David Gutman, M Emre Celebi, Brian Helba, Michael A Marchetti, Stephen W Dusza, Aadi Kalloo, Konstantinos Liopyris, Nabin Mishra, Harald Kittler, et al. Skin lesion analysis toward melanoma detection: A challenge at the 2017 international symposium on biomedical imaging (isbi), hosted by the international skin imaging collaboration (isic). In International Symposium on Biomedical Imaging (ISBI), 2018. 6", + "[8] Marc Combalia, Noel CF Codella, Veronica Rotemberg, Brian Helba, Veronica Vilaplana, Ofer Reiter, Cristina Carrera, Alicia Barreiro, Allan C Halpern, Susana Puig, et al. Bcn20000: Dermoscopic lesions in the wild. arXiv preprint arXiv:1908.02288, 2019. 6", + "[9] Shuhao Cui, Shuhui Wang, Junbao Zhuo, Liang Li, Qingming Huang, and Qi Tian. Towards discriminability and diversity: Batch nuclear-norm maximization under label insufficient situations. In Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 3", + "[10] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In Conference on Computer Vision and Pattern Recognition (CVPR), 2009. 6", + "[11] Yaroslav Ganin and Victor Lempitsky. 
Unsupervised domain adaptation by backpropagation. In International Conference on Machine Learning (ICML), 2015. 2, 3", + "[12] Lan-Zhe Guo, Zhen-Yu Zhang, Yuan Jiang, Yu-Feng Li, and Zhi-Hua Zhou. Safe deep semi-supervised learning for unseen-class unlabeled data. In International Conference on Machine Learning (ICML). PMLR, 2020. 2", + "[13] Zhuo Huang, Chao Xue, Bo Han, Jian Yang, and Chen Gong." + ], + "bbox": [ + 93, + 114, + 485, + 901 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Universal semi-supervised learning. Advances in Neural Information Processing Systems (NeurIPS), 2021. 2", + "[14] Lin-Han Jia, Lan-Zhe Guo, Zhi Zhou, Jie-Jing Shao, Yuke Xiang, and Yu-Feng Li. Bidirectional adaptation for robust semi-supervised learning with inconsistent data distributions. In International Conference on Machine Learning (ICML), 2023. 2, 3, 6", + "[15] Ying Jin, Ximei Wang, Mingsheng Long, and Jianmin Wang. Minimum class confusion for versatile domain adaptation. In European Conference on Computer Vision (ECCV), 2020. 6", + "[16] Samuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning. In International Conference on Learning Representations (ICLR), 2017. 2", + "[17] Qicheng Lao, Xiang Jiang, and Mohammad Havaei. Hypothesis disparity regularized mutual information maximization. In AAAI Conference on Artificial Intelligence, 2021. 3", + "[18] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. In Nature, 2015. 1", + "[19] Dong-Hyun Lee et al. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In Workshop on challenges in representation learning, ICML, 2013. 2", + "[20] Hong Liu, Jianmin Wang, and Mingsheng Long. Cycle self-training for domain adaptation. In Advances in Neural Information Processing Systems (NeurIPS), 2021. 3", + "[21] Mingsheng Long, Yue Cao, Jianmin Wang, and Michael Jordan. Learning transferable features with deep adaptation networks. In International Conference on Machine Learning (ICML), 2015. 2, 3", + "[22] Mingsheng Long, Zhangjie Cao, Jianmin Wang, and Michael I Jordan. Conditional adversarial domain adaptation. In Advances in Neural Information Processing Systems (NeurIPS), 2018. 3, 6", + "[23] Ilya Loshchilov and Frank Hutter. SGDR: Stochastic gradient descent with warm restarts. In International Conference on Learning Representations (ICLR), 2017. 6", + "[24] Yurii Nesterov. A method for unconstrained convex minimization problem with the rate of convergence o (1/k2). In Dokl. Akad. Nauk. SSSR, 1983. 6", + "[25] Avital Oliver, Augustus Odena, Colin A Raffel, Ekin Dogus Cubuk, and Ian Goodfellow. Realistic evaluation of deep semi-supervised learning algorithms. In Advances in Neural Information Processing Systems (NeurIPS), 2018. 1, 2", + "[26] Xingchao Peng, Ben Usman, Neela Kaushik, Judy Hoffman, Dequan Wang, and Kate Saenko. Visda: The visual domain adaptation challenge. arXiv preprint arXiv:1710.06924, 2017. 6", + "[27] Hoang Phan, Trung Le, Trung Phung, Anh Tuan Bui, Nhat Ho, and Dinh Phung. Global-local regularization via distributional robustness. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2023. 3", + "[28] Kate Saenko, Brian Kulis, Mario Fritz, and Trevor Darrell. Adapting visual category models to new domains. In European Conference on Computer Vision (ECCV), 2010. 6", + "[29] Jian Shen, Yanru Qu, Weinan Zhang, and Yong Yu. 
Wasserstein distance guided representation learning for domain adaptation. In AAAI Conference on Artificial Intelligence, 2018. 3" + ], + "bbox": [ + 516, + 92, + 906, + 898 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "15379", + "bbox": [ + 480, + 944, + 519, + 955 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[30] Rui Shu, Hung H Bui, Hirokazu Narui, and Stefano Ermon. A dirt-t approach to unsupervised domain adaptation. International Conference on Learning Representations (ICLR), 2018. 3", + "[31] Kihyuk Sohn, David Berthelot, Nicholas Carlini, Zizhao Zhang, Han Zhang, Colin A Raffel, Ekin Dogus Cubuk, Alexey Kurakin, and Chun-Liang Li. Fixmatch: Simplifying semi-supervised learning with consistency and confidence. In Advances in Neural Information Processing Systems (NeurIPS), 2020. 2, 6", + "[32] Chen Sun, Abhinav Shrivastava, Saurabh Singh, and Abhinav Gupta. Revisiting unreasonable effectiveness of data in deep learning era. In International Conference on Computer Vision (ICCV), 2017. 1", + "[33] Antti Tarvainen and Harri Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In Advances in Neural Information Processing Systems (NeurIPS), 2017. 2", + "[34] Philipp Tschandl, Cliff Rosendahl, and Harald Kittler. The ham10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Scientific data, 2018. 6", + "[35] Jesper E Van Engelen and Holger H Hoos. A survey on semi-supervised learning. Machine learning, 2020. 1", + "[36] Hemanth Venkateswara, Jose Eusebio, Shayok Chakraborty, and Sethuraman Panchanathan. Deep hashing network for unsupervised domain adaptation. In Conference on Computer Vision and Pattern Recognition (CVPR), 2017. 6", + "[37] Vikas Verma, Kenji Kawaguchi, Alex Lamb, Juho Kannala, Arno Solin, Yoshua Bengio, and David Lopez-Paz. Interpolation consistency training for semi-supervised learning. In Neural Networks, 2022. 2, 6", + "[38] Qing Yu, Daiki Ikami, Go Irie, and Kiyoharu Aizawa. Multi-task curriculum framework for open-set semi-supervised learning. In European Conference on Computer Vision (ECCV), 2020. 2", + "[39] Bowen Zhang, Yidong Wang, Wenxin Hou, Hao Wu, Jindong Wang, Manabu Okumura, and Takahiro Shinozaki. Flexmatch: Boosting semi-supervised learning with curriculum pseudo labeling. In Advances in Neural Information Processing Systems (NeurIPS), 2021. 2, 6", + "[40] Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. Mixup: Beyond empirical risk minimization. In International Conference on Learning Representations (ICLR), 2018. 5", + "[41] Mingkai Zheng, Shan You, Lang Huang, Fei Wang, Chen Qian, and Chang Xu. Simmatch: Semi-supervised learning with similarity matching. In Conference on Computer Vision and Pattern Recognition (CVPR), 2022. 2, 6", + "[42] Yang Zou, Zhiding Yu, Xiaofeng Liu, BVK Kumar, and Jinsong Wang. Confidence regularized self-training. In International Conference on Computer Vision (ICCV), 2019. 
3" + ], + "bbox": [ + 91, + 92, + 485, + 825 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "15380", + "bbox": [ + 480, + 944, + 519, + 955 + ], + "page_idx": 9 + } +] \ No newline at end of file diff --git a/2025/A Unified Framework for Heterogeneous Semi-supervised Learning/341a6b6d-e6ba-48c2-98b8-5f373e8e2473_model.json b/2025/A Unified Framework for Heterogeneous Semi-supervised Learning/341a6b6d-e6ba-48c2-98b8-5f373e8e2473_model.json new file mode 100644 index 0000000000000000000000000000000000000000..110c75b0219d1ec5e3a229f0d05d77b169eb956e --- /dev/null +++ b/2025/A Unified Framework for Heterogeneous Semi-supervised Learning/341a6b6d-e6ba-48c2-98b8-5f373e8e2473_model.json @@ -0,0 +1,1815 @@ +[ + [ + { + "type": "header", + "bbox": [ + 0.107, + 0.003, + 0.182, + 0.043 + ], + "angle": 0, + "content": "CVF" + }, + { + "type": "header", + "bbox": [ + 0.237, + 0.001, + 0.812, + 0.047 + ], + "angle": 0, + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." + }, + { + "type": "title", + "bbox": [ + 0.158, + 0.131, + 0.842, + 0.154 + ], + "angle": 0, + "content": "A Unified Framework for Heterogeneous Semi-supervised Learning" + }, + { + "type": "text", + "bbox": [ + 0.233, + 0.181, + 0.767, + 0.234 + ], + "angle": 0, + "content": "Marzi Heidari*, Abdullah Alchihabi*, Hao Yan*, Yuhong Guo*† \n*School of Computer Science, Carleton University, Ottawa, Canada†Canada CIFAR AI Chair, Amii, Canada" + }, + { + "type": "text", + "bbox": [ + 0.115, + 0.237, + 0.882, + 0.253 + ], + "angle": 0, + "content": "{marziheidari@cmail., abdullahalchihibi@cmail., haoyan6@cmail., yuhong.guo@}carleton.ca" + }, + { + "type": "title", + "bbox": [ + 0.248, + 0.287, + 0.327, + 0.303 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.087, + 0.318, + 0.486, + 0.726 + ], + "angle": 0, + "content": "In this work, we introduce a novel problem setup termed as Heterogeneous Semi-Supervised Learning (HSSL), which presents unique challenges by bridging the semi-supervised learning (SSL) task and the unsupervised domain adaptation (UDA) task, and expanding standard semi-supervised learning to cope with heterogeneous training data. At its core, HSSL aims to learn a prediction model using a combination of labeled and unlabeled training data drawn separately from heterogeneous domains that share a common set of semantic categories. This model is intended to differentiate the semantic categories of test instances sampled from both the labeled and unlabeled domains. In particular, the labeled and unlabeled domains have dissimilar label distributions and class feature distributions. This heterogeneity, coupled with the assorted sources of the test data, introduces significant challenges to standard SSL and UDA methods. Therefore, we propose a novel method, Unified Framework for Heterogeneous Semi-supervised Learning (Uni-HSSL), to address HSSL by directly learning a fine-grained classifier from the heterogeneous data, which adaptively handles the inter-domain heterogeneity while leveraging both the unlabeled data and the inter-domain semantic class relationships for cross-domain knowledge transfer and adaptation. 
We conduct comprehensive experiments and the experimental results validate the efficacy and superior performance of the proposed Uni-HSSL over state-of-the-art semi-supervised learning and unsupervised domain adaptation methods." + }, + { + "type": "title", + "bbox": [ + 0.092, + 0.755, + 0.222, + 0.77 + ], + "angle": 0, + "content": "1. Introduction" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.78, + 0.486, + 0.902 + ], + "angle": 0, + "content": "Deep learning models, owing to their hierarchical learned representations and intricate architectures, have monumentally advanced the state-of-the-art across a myriad of tasks [18]. Nonetheless, the success of deep learning has been often contingent on the availability of copious amounts of labeled data. Data annotation, especially in specialized domains, is not only resource-intensive but can also entail exorbitant costs [32]. Consequently, semi-supervised learn" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.289, + 0.907, + 0.334 + ], + "angle": 0, + "content": "ing (SSL) has been popularly studied, aiming to successfully utilize the free available unlabeled data to help train deep models in an annotation efficient manner [35]." + }, + { + "type": "text", + "bbox": [ + 0.511, + 0.338, + 0.909, + 0.566 + ], + "angle": 0, + "content": "However, current SSL methods assume that the unlabeled and labeled data are sampled from similar (homogeneous) distributions [25]. Such an assumption presents substantial practical limitations to applying traditional SSL methods to a wide range of application domains, where labeled and unlabeled data can have different distributions. For example, in the field of medical imaging, it is common for labeled MRI scans to be sourced from state-of-the-art research hospitals, while an influx of unlabeled scans could emanate from a myriad of rural clinics, each with its distinct scanning equipment and calibration idiosyncrasies. Similar heterogeneity patterns manifest in domains like aerial imagery, wildlife monitoring, and retail product classification. In such settings, the challenge lies in leveraging the unlabeled data given its dissimilarity with its labeled counterpart." + }, + { + "type": "text", + "bbox": [ + 0.511, + 0.569, + 0.911, + 0.902 + ], + "angle": 0, + "content": "Therefore, to address the current limitations of the traditional SSL, we propose a novel heterogeneous semi-supervised learning (HSSL) task, where the training data consist of labeled and unlabeled data sampled from different distribution domains. The two domains contain a common set of semantic classes, but have different label and class feature distributions. The goal of HSSL is to train a model using the heterogeneous training data so that it can perform well on a held-out test set sampled from both the labeled and unlabeled domains. Without posing distribution similarity assumptions between the labeled and unlabeled data, HSSL is expected to be applicable to a broader range of real-world scenarios compared to standard SSL. This novel heterogeneous semi-supervised learning task however is much more challenging due to the following characteristics: (1) The domain gap, expressed as divergence between class feature distributions across the labeled and unlabeled domains, presents a significant impediment to model generalization and learning. (2) The absence of annotated samples from the unlabeled domain during training further compounds the complexity of the task. 
(3) Considering that the test set comprises samples from both domains, the devised so" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.518, + 0.957 + ], + "angle": 0, + "content": "15371" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.094, + 0.092, + 0.484, + 0.483 + ], + "angle": 0, + "content": "lution methods need to accurately model the distributions inherent to each domain. It is imperative for the models to discern not only the domain from which a sample originates but also the specific semantic class it belongs to. This requires either an explicit or implicit methodology to categorize samples accurately with respect to both domain origin and semantic class categories, distinguishing the task from both conventional SSL and unsupervised domain adaptation (UDA)—traditional SSL overlooks the domain heterogeneity within both the training and testing data, whereas UDA exclusively concentrates on the unlabeled domain as the target domain [11, 21]. Therefore, traditional SSL and UDA methods are not readily applicable or effective in addressing the proposed HSSL task. A recent work [14] has made an effort to expand the traditional SSL task beyond its homogeneous assumptions. However, the proposed solution method learns separately in different domains using distinct components where an off-the-shelf UDA technique is employed to generate pseudo-labels for the unlabeled samples, bypassing the opportunity to train a unified cohesive model that could harness insights from both domains. Furthermore, their test set is confined to a labeled domain, while HSSL aims to train a model that generalizes across labeled and unlabeled domains. HSSL presents a more complex challenge, requiring the model to adapt and perform accurately across heterogeneous test data." + }, + { + "type": "text", + "bbox": [ + 0.094, + 0.494, + 0.484, + 0.901 + ], + "angle": 0, + "content": "In this work, we propose a novel method, named as Unified framework for Heterogeneous Semi-Supervised Learning (Uni-HSSL), to address the HSSL problem. The proposed method learns a fine-grained classification model cohesively under a unified framework by amalgamating the labeled and unlabeled class categories within an extended and precisely doubled label space. The framework consists of three technical components designed to tackle the HSSL challenges: a weighted moving average pseudo-labeling component, a cross-domain prototype alignment component, and a progressive inter-domain mixup component. The pseudo-labeling component leverages a weighted moving average strategy to assign and update pseudo-labels for the unlabeled data. In this manner, it generates smooth and adaptive assignment of pseudo-labels, reducing the potential pitfalls of oscillating updates or noisy label assignments, which is crucial given the significant domain gap between labeled data and unlabeled data. The cross-domain prototype alignment ensures that the inherent semantic structures of similar classes across the labeled and unlabeled domains are aligned. This alignment of class-centric prototypes between domains leverages inter-domain semantic class relationships, enabling knowledge transfer from the labeled domain to the unlabeled domain. The progressive inter-domain mixup component generates new synthetic instances by interpolating between labeled and unlabeled samples and bridges the gap between the two domains. 
By adopting a progressive" + }, + { + "type": "text", + "bbox": [ + 0.517, + 0.092, + 0.905, + 0.211 + ], + "angle": 0, + "content": "augmentation schedule, it gradually adapts the model to the distribution of the unlabeled domain, facilitating a steady and reliable knowledge transfer. Comprehensive experiments are conducted on several benchmark datasets. The empirical results demonstrate the efficacy and superior performance of our proposed unified framework compared to multiple state-of-the-art SSL and unsupervised domain adaptation baselines for HSSL." + }, + { + "type": "title", + "bbox": [ + 0.517, + 0.227, + 0.662, + 0.242 + ], + "angle": 0, + "content": "2. Related Works" + }, + { + "type": "title", + "bbox": [ + 0.517, + 0.252, + 0.753, + 0.268 + ], + "angle": 0, + "content": "2.1. Semi-Supervised Learning" + }, + { + "type": "text", + "bbox": [ + 0.516, + 0.274, + 0.908, + 0.485 + ], + "angle": 0, + "content": "Conventional Semi-Supervised Learning (SSL) In conventional SSL, the labeled and unlabeled segments of the dataset encompass identical classes, sharing consistent class and feature distributions. SSL methods are primarily classified into three categories: regularization-based techniques, teacher-student models, and pseudo-labeling strategies. Regularization-based techniques like II-model [16] modify the loss function with additional terms for model refinement. Teacher-student models like MT [33] and ICT [37] involve training a student network to mimic a teacher model using unlabeled data. Pseudo-labeling strategies like Pseudo-Label [19], FixMatch [31], FlexMatch [39], and SimMatch [41] expand labeled datasets using unlabeled data with pseudo-labels in various ways." + }, + { + "type": "text", + "bbox": [ + 0.516, + 0.504, + 0.908, + 0.774 + ], + "angle": 0, + "content": "Open-Set Semi-Supervised Learning (OS-SSL) OS-SSL deals with unknown or additional classes present in the unlabeled data but absent in the labeled set. OS-SSL assumes the same feature distribution over labeled and unlabeled sets. This is different from HSSL, which operates under the assumption that labeled and unlabeled data come from separate domains with different feature distributions. The concept of OS-SSL, introduced in [25], focuses on class distribution mismatches in open-set scenarios. Methods for OS-SSL like UASD [6] use self-distillation to exclude outliers from unlabeled data. DS3L [12] and MTCF [38] employ diverse weighting strategies for subset mismatches, minimizing the impact of private data in unlabeled sets. OpenMatch [3] utilizes one-vs-all classifiers for outlier detection but faces difficulties with unseen categories. While OS-SSL has advanced SSL towards practical use, it lacks capacity to handle feature distribution mismatches between labeled and unlabeled data." + }, + { + "type": "text", + "bbox": [ + 0.516, + 0.795, + 0.905, + 0.901 + ], + "angle": 0, + "content": "Universal Semi-Supervised Learning (USSL) Universal SSL [13] involves both shared and unique classes across the labeled and unlabeled sets, with the test set matching the labeled set's class distribution. HSSL, however, assumes shared classes across the labeled and unlabeled domains and tests on samples from both domains without their domain identities, adding complexity." 
+ }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.946, + 0.518, + 0.957 + ], + "angle": 0, + "content": "15372" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.092, + 0.485, + 0.213 + ], + "angle": 0, + "content": "Similar to our work, bidirectional Adaptation [14] addresses the disparity between limited labeled and abundant unlabeled data, but it tests only within the labeled domain's feature distribution. It uses UDA techniques for pseudolabeling, avoiding the complexities and benefits of cross-domain modeling. In contrast, HSSL aims for effective generalization across both domains, posing a more intricate challenge in model adaptation and generalization." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.223, + 0.39, + 0.24 + ], + "angle": 0, + "content": "2.2. Unsupervised Domain Adaptation" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.246, + 0.485, + 0.488 + ], + "angle": 0, + "content": "Unsupervised domain adaptation aims at learning a target model given labeled data from a source domain and unlabeled data from a target domain. Typical deep UDA approaches can be categorized into three types: alignment-based, regularization-based, and self-training-based methods. Alignment-based methods aim to reduce the cross-domain feature discrepancy with adversarial alignment [11, 22] and distance-based methods [4, 21, 27, 29]. Regularization-based methods utilize regularization terms to leverage knowledge from the unlabeled target data. Typical regularization terms include entropy minimization [30], virtual adversarial training [30], batch spectral penalization [5], batch nuclear-norm maximization [9], and mutual information maximization [17]. Self-training-based methods explore effective pseudo-labeling for unlabeled target data fitting, including confidence threshold [2, 42] and cycle self-training [20]." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.503, + 0.182, + 0.518 + ], + "angle": 0, + "content": "3. Method" + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.528, + 0.242, + 0.544 + ], + "angle": 0, + "content": "3.1. Problem Setup" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.55, + 0.484, + 0.761 + ], + "angle": 0, + "content": "We consider the following Heterogeneous Semi-Supervised Learning (HSSL) setup. The training data consist of a set of labeled instances \\(\\mathcal{D}_L = \\{(\\mathbf{x}_i^l,\\mathbf{y}_i^l)\\}_{i = 1}^{N_l}\\), where each instance \\(\\mathbf{x}_i^l\\) is annotated with a one-hot label indicator vector \\(\\mathbf{y}_i^l\\) with length \\(C\\), and a set of unlabeled instances \\(\\mathcal{D}_U = \\{\\mathbf{x}_i^u\\}_{i = 1}^{N_u}\\). The labeled data and unlabeled data are from two different domains that have dissimilar label distributions such that \\(p_L(\\mathbf{y})\\neq p_U(\\mathbf{y})\\) and heterogeneous feature distributions such that \\(p_L(\\mathbf{x}|\\mathbf{y})\\neq p_U(\\mathbf{x}|\\mathbf{y})\\), but share the same set of \\(C\\) semantic classes. The goal is to train a prediction model using both the labeled set \\(\\mathcal{D}_L\\) and unlabeled set \\(\\mathcal{D}_U\\) so that the trained model would generalize well on a held-out test set that is indistinguishably sampled from both the labeled and unlabeled domains." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.773, + 0.264, + 0.789 + ], + "angle": 0, + "content": "3.2. 
Proposed Method" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.795, + 0.484, + 0.901 + ], + "angle": 0, + "content": "In this section, we present the proposed Uni-HSSL method, which tackles the \\(C\\)-class HSSL problem by combining the labeled and unlabeled class categories to a doubled label space and learning a fine-grained \\(2C\\)-class classification model under a unified framework, aiming to adaptively handle the heterogeneous distributions across domains and gain better generalization over test instances randomly sampled" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.092, + 0.905, + 0.152 + ], + "angle": 0, + "content": "from both the labeled and unlabeled domains. The core idea centers on simultaneously facilitating effective knowledge transfer from the labeled domain to the unlabeled domain while harnessing the information within the unlabeled data." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.153, + 0.907, + 0.41 + ], + "angle": 0, + "content": "We start by first pre-training a feature encoder and a \\(C\\)-class semantic classifier on the labeled dataset, which can be used to produce the initial pseudo-labels of the unlabeled training data and provide partial initialization for our Uni-HSSL model. Then the \\(2C\\)-class Uni-HSSL model, which consists of a feature encoder \\(f\\) and a \\(2C\\)-class classifier \\(h\\), will be learned within the proposed unified semi-supervised framework shown in Figure 1. The framework introduces three technical components to facilitate heterogeneous SSL. The weighted-moving-average (WMA) based pseudo-labeling component is deployed to support the effective exploitation of the unlabeled data, while the cross-domain prototype alignment component and progressive inter-domain mixup component are designed to promote information sharing and efficient and steady knowledge transfer from the labeled domain to the unlabeled domain. Further elaboration will be provided in the following sections." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.416, + 0.727, + 0.431 + ], + "angle": 0, + "content": "3.2.1. Supervised Pre-training" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.435, + 0.906, + 0.555 + ], + "angle": 0, + "content": "The initial challenge in training a \\(2C\\)-class classification model with the given heterogeneous data is the absence of labeled instances entirely in the unlabeled domain. To tackle this problem, we exploit the assumption that the labeled and unlabeled domains share the same set of \\(C\\) semantic class categories, and pre-train a \\(C\\)-class classification model in the labeled domain to provide initial pseudo-labels for the training instances in the unlabeled domain." 
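To make the pre-training and pseudo-label initialization concrete, here is a minimal PyTorch-style sketch of the steps formalized in Eqs. (1)-(3) below; the function name, the data loaders, and the assumption that the classifier g returns logits are illustrative choices, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def pretrain_and_init_pseudo_labels(f, g, labeled_loader, unlabeled_loader, opt):
    """Hedged sketch: supervised pre-training on D_L (Eq. (1)), then initial
    2C-dimensional pseudo-labels for D_U by zero-padding the C-class
    predictions (Eqs. (2)-(3)). Assumes g outputs logits over C classes."""
    for x_l, y_l in labeled_loader:                # y_l: integer class indices
        loss = F.cross_entropy(g(f(x_l)), y_l)     # Eq. (1): cross-entropy on D_L
        opt.zero_grad(); loss.backward(); opt.step()
    pseudo = []
    with torch.no_grad():
        for x_u in unlabeled_loader:
            y_bar = g(f(x_u)).softmax(dim=1)       # Eq. (2): C-class probabilities
            # Eq. (3): concat(0_C, y_bar), so the unlabeled-domain classes
            # occupy indices C..2C-1 of the doubled label space
            pseudo.append(torch.cat([torch.zeros_like(y_bar), y_bar], dim=1))
    return torch.cat(pseudo, dim=0)                # (N_u, 2C) initial pseudo-labels
```

The returned matrix plays the role of the initial pseudo-labels that the weighted-moving-average update of Section 3.2.2 subsequently refines.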
+ }, + { + "type": "text", + "bbox": [ + 0.512, + 0.556, + 0.907, + 0.617 + ], + "angle": 0, + "content": "Specifically, we pre-train a \\(C\\)-class model, which consists of a feature encoder \\(f\\) and a \\(C\\)-class probabilistic classifier \\(g\\), on the labeled data \\(\\mathcal{D}_L\\) by minimizing the following supervised cross-entropy loss:" + }, + { + "type": "equation", + "bbox": [ + 0.584, + 0.626, + 0.907, + 0.648 + ], + "angle": 0, + "content": "\\[\n\\mathcal {L} _ {c e} ^ {L} = \\mathbb {E} _ {\\left(\\mathbf {x} _ {i} ^ {l}, \\mathbf {y} _ {i} ^ {l}\\right) \\in \\mathcal {D} _ {L}} \\left[ \\ell_ {c e} \\left(\\mathbf {y} _ {i} ^ {l}, g (f (\\mathbf {x} _ {i} ^ {l}))\\right) \\right] \\tag {1}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.656, + 0.909, + 0.716 + ], + "angle": 0, + "content": "where \\(\\ell_{ce}\\) denotes the cross-entropy function. Then we deploy the pre-trained classification model to make predictions on the unlabeled training instances in \\(\\mathcal{D}_U\\) to generate their initial pseudo-labels:" + }, + { + "type": "equation", + "bbox": [ + 0.608, + 0.727, + 0.907, + 0.746 + ], + "angle": 0, + "content": "\\[\n\\bar {\\mathbf {y}} _ {i} ^ {0} = g \\left(f \\left(\\mathbf {x} _ {i} ^ {u}\\right)\\right), \\quad \\forall \\mathbf {x} _ {i} ^ {u} \\in \\mathcal {D} _ {U} \\tag {2}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.755, + 0.909, + 0.832 + ], + "angle": 0, + "content": "where \\(\\bar{\\mathbf{y}}_i^0\\) denotes the predicted class probability vector with length \\(C\\) for the unlabeled instance \\(\\mathbf{x}_i^u\\). To provide initial labels on the unlabeled data for training the \\(2C\\)-class model, we further expand each \\(\\bar{\\mathbf{y}}_i^0\\) by concatenating it with a zero vector with length \\(C\\), \\(\\mathbf{0}_C\\):" + }, + { + "type": "equation", + "bbox": [ + 0.636, + 0.842, + 0.907, + 0.861 + ], + "angle": 0, + "content": "\\[\n\\hat {\\mathbf {y}} _ {i} ^ {0} = \\operatorname {c o n c a t} \\left(\\mathbf {0} _ {C}, \\bar {\\mathbf {y}} _ {i} ^ {0}\\right) \\tag {3}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.871, + 0.908, + 0.901 + ], + "angle": 0, + "content": "This results in the first set of \\( C \\) classes out of the \\( 2C \\) classes corresponding to the classes in the labeled domain, with" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "15373" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.164, + 0.094, + 0.833, + 0.337 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.089, + 0.351, + 0.908, + 0.409 + ], + "angle": 0, + "content": "Figure 1. An overview of the proposed Uni-HSSL training framework. The classification model consists of a feature encoder \\( f \\) and a \\( 2C \\)-class classifier \\( h \\). After initialization with pre-training, the model is trained by jointly minimizing the combination of a supervised loss \\( \\mathcal{L}_{cl}^{L} \\) on the labeled data, a WMA pseudo-labeling loss \\( \\mathcal{L}_{pl}^{U} \\) on the unlabeled data, a cross-domain prototype alignment loss \\( \\mathcal{L}_{pa} \\), and a prediction loss \\( \\mathcal{L}_{\\mathrm{Mixup}} \\) on the augmentation data produced via progressive inter-domain mixup." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.425, + 0.484, + 0.519 + ], + "angle": 0, + "content": "the remaining set of \\( C \\) classes corresponding to the classes in the unlabeled domain. 
Moreover, the parameters of the pre-trained \\( C \\)-class model \\( (g \\circ f) \\) can also be utilized to initialize the feature encoder \\( f \\) and part of the classifier \\( h \\) corresponding to the first \\( C \\) classes in the \\( 2C \\)-class model, while the other part of \\( h \\) will be randomly initialized." + }, + { + "type": "title", + "bbox": [ + 0.09, + 0.527, + 0.484, + 0.557 + ], + "angle": 0, + "content": "3.2.2. Semi-Supervised Training with Adaptive Pseudo-Labeling" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.562, + 0.484, + 0.653 + ], + "angle": 0, + "content": "After initialization, the proposed \\(2C\\)-class classification model (feature encoder \\(f\\) and probabilistic classifier \\(h\\)) will be trained by leveraging both the labeled set \\(\\mathcal{D}_L\\) and the unlabeled set \\(\\mathcal{D}_U\\) within a pseudo-labeling based SSL framework. On the labeled set \\(\\mathcal{D}_L\\), the following standard supervised cross-entropy loss will be used as the training objective:" + }, + { + "type": "equation", + "bbox": [ + 0.111, + 0.665, + 0.484, + 0.687 + ], + "angle": 0, + "content": "\\[\n\\mathcal {L} _ {c l} ^ {L} = \\mathbb {E} _ {\\left(\\mathbf {x} _ {i} ^ {l}, \\mathbf {y} _ {i} ^ {l}\\right) \\in \\mathcal {D} _ {L}} \\left[ \\ell_ {c e} \\left(h \\left(f \\left(\\mathbf {x} _ {i} ^ {l}\\right)\\right), \\operatorname {c o n c a t} \\left(\\mathbf {y} _ {i} ^ {l}, \\mathbf {0} _ {C}\\right)\\right) \\right] \\tag {4}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.699, + 0.484, + 0.745 + ], + "angle": 0, + "content": "where the concatenated label vector, \\(\\operatorname{concat}(\\mathbf{y}_i^l, \\mathbf{0}_C)\\), expands the ground-truth label vector \\(\\mathbf{y}_i^l\\) into the \\(2C\\)-class label space by appending a zero vector with length \\(C\\) to it." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.746, + 0.484, + 0.866 + ], + "angle": 0, + "content": "Although we have obtained initial pseudo-labels for the unlabeled set \\( \\mathcal{D}_U \\) by utilizing the pre-trained \\( C \\)-class classifier, those initial labels are unavoidably noisy due to the existence of domain gap between the labeled and unlabeled domains. In order to effectively leverage the unlabeled data, we update the pseudo-label for each unlabeled instance \\( \\mathbf{x}_i^u \\) during each training iteration in a weighted moving average (WMA) fashion as follows:" + }, + { + "type": "equation", + "bbox": [ + 0.176, + 0.879, + 0.484, + 0.898 + ], + "angle": 0, + "content": "\\[\n\\hat {\\mathbf {y}} _ {i} ^ {t} = \\beta \\hat {\\mathbf {y}} _ {i} ^ {t - 1} + (1 - \\beta) h \\left(f \\left(\\mathbf {x} _ {i} ^ {u}\\right)\\right) \\tag {5}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.425, + 0.907, + 0.577 + ], + "angle": 0, + "content": "where \\(\\beta \\in (0,1)\\) is a hyper-parameter that controls the rate of update, and \\(\\hat{\\mathbf{y}}_i^t\\) is the updated pseudo-label for \\(\\mathbf{x}_i^u\\) at the \\(t\\)-th training iteration. This weighted moving average update strategy can yield a smooth and adaptive assignment of pseudo-labels by promptly incorporating the progress in the classification model and mitigating the risk of oscillatory updates. 
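Stated as a hedged one-function sketch (tensor names are illustrative; the experiments reported later set β = 0.8):

```python
def wma_update(y_hat_prev, model_probs, beta=0.8):
    """Eq. (5): weighted-moving-average pseudo-label refresh.
    y_hat_prev, model_probs: (N, 2C) tensors holding the previous targets
    and the current h(f(x_u)) outputs; beta in (0, 1) controls how quickly
    the targets track the model (0.8 in the reported experiments)."""
    return beta * y_hat_prev + (1.0 - beta) * model_probs
```

Only the sufficiently confident rows of the refreshed targets are consumed by the thresholded loss introduced next.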
Moreover, to further mitigate the adverse impact of noisy pseudo-labels, we deploy the following cross-entropy loss on the unlabeled set during training, selectively utilizing only instances with more reliable pseudo-labels:" + }, + { + "type": "equation", + "bbox": [ + 0.528, + 0.587, + 0.907, + 0.608 + ], + "angle": 0, + "content": "\\[\n\\mathcal {L} _ {p l} ^ {U} = \\mathbb {E} _ {\\mathbf {x} _ {i} ^ {u} \\in \\mathcal {D} _ {U}} [ \\mathbb {1} (\\max (\\hat {\\mathbf {y}} _ {i} ^ {t}) > \\epsilon) \\ell_ {c e} (h (f (\\mathbf {x} _ {i} ^ {u})), \\hat {\\mathbf {y}} _ {i} ^ {t}) ] \\tag {6}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.617, + 0.907, + 0.678 + ], + "angle": 0, + "content": "where \\(\\mathbb{1}(\\cdot)\\) denotes an indicator function; \\(\\epsilon \\in (0,1)\\) is a predefined confidence threshold to ensure that only unlabeled instances with the maximum prediction probabilities larger than \\(\\epsilon\\) are used for the current training iteration." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.678, + 0.909, + 0.799 + ], + "angle": 0, + "content": "By treating semantic classes in distinct domains as separate categories, the \\(2C\\)-class classification model serves as a strategic choice to differentiate samples across domains. This approach avoids the additional complexity associated with a dedicated domain classifier and naturally handles the divergence in class-feature distributions across domains. It also simplifies the process and has the potential to enhance domain generalization through a shared feature encoder." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.807, + 0.879, + 0.822 + ], + "angle": 0, + "content": "3.2.3. Cross-Domain Semantic Prototype Alignment" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.826, + 0.908, + 0.901 + ], + "angle": 0, + "content": "Given that the labeled domain and unlabeled domain are comprised of the same set of \\( C \\) semantic classes, there is a one-to-one correspondence relationship between each cross-domain class pair for the same semantic concept. In order to facilitate knowledge sharing and transfer across domains," + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "15374" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.092, + 0.484, + 0.212 + ], + "angle": 0, + "content": "we propose to align each semantic class from the labeled domain with its corresponding semantic class in the unlabeled domain within the learned feature embedding space. To this end, we represent each class using a class-prototype vector and design a cross-domain semantic class-prototype alignment component to enforce the corresponding semantic class pairs across the domains are more similar in the feature embedding space than non-corresponding class pairs." 
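A hedged PyTorch sketch of this alignment component, corresponding to Eqs. (7)-(9) given below; it assumes every one of the C classes has at least one labeled sample and at least one confident pseudo-labeled sample in the current batch, and all names and shapes are illustrative.

```python
import torch
import torch.nn.functional as F

def prototype_alignment_loss(z_l, y_l, z_u, y_u_hat, C, tau=0.5, eps=0.5):
    """Sketch of Eqs. (7)-(9): per-class mean-feature prototypes in each
    domain, then a symmetric contrastive loss over the C x C matrix of
    cross-domain cosine similarities.
    z_l: (N_l, d) labeled features, y_l: (N_l,) labels in [0, C);
    z_u: (N_u, d) unlabeled features, y_u_hat: (N_u, 2C) pseudo-labels."""
    p_l = torch.stack([z_l[y_l == k].mean(0) for k in range(C)])   # Eq. (7)
    conf, pred = y_u_hat.max(dim=1)
    keep = conf > eps                                              # reliable only
    p_u = torch.stack([z_u[keep & (pred == C + k)].mean(0)
                       for k in range(C)])                         # Eq. (8)
    sim = F.cosine_similarity(p_l.unsqueeze(1), p_u.unsqueeze(0), dim=-1) / tau
    diag = torch.eye(C, dtype=torch.bool)
    pos = sim.diagonal()                                           # matching pairs
    neg_row = torch.logsumexp(sim.masked_fill(diag, float('-inf')), dim=1)
    neg_col = torch.logsumexp(sim.masked_fill(diag, float('-inf')), dim=0)
    return -((pos - neg_row) + (pos - neg_col)).sum()              # Eq. (9)
```

Masking the diagonal before the logsumexp reproduces the k' ≠ k restriction of Eq. (9) in both the row (labeled-anchored) and column (unlabeled-anchored) directions.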
+ }, + { + "type": "text", + "bbox": [ + 0.09, + 0.213, + 0.484, + 0.259 + ], + "angle": 0, + "content": "Specifically, we compute the prototype vector for the \\(k\\)-th class in the labeled set as the average feature embedding of the labeled instances belonging to the class:" + }, + { + "type": "equation", + "bbox": [ + 0.126, + 0.27, + 0.484, + 0.291 + ], + "angle": 0, + "content": "\\[\n\\mathbf {p} _ {k} = \\mathbb {E} _ {\\left(\\mathbf {x} _ {i} ^ {l}, \\mathbf {y} _ {i} ^ {l}\\right) \\in \\mathcal {D} _ {L}} \\left[ \\mathbb {1} \\left(\\arg \\max _ {j} \\mathbf {y} _ {i j} ^ {l} = k\\right) f \\left(\\mathbf {x} _ {i} ^ {l}\\right) \\right] \\tag {7}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.302, + 0.484, + 0.378 + ], + "angle": 0, + "content": "where \\(\\mathbf{y}_{ij}^{l}\\) denotes the \\(j\\)-th entry of the label vector \\(\\mathbf{y}_i^l\\). The corresponding \\(k\\)-th semantic class in the unlabeled set is the \\((C + k)\\)-th class in the \\(2C\\)-class label space. We compute the class prototype vectors in the unlabeled set based on the instances with reliable pseudo-labels, such that:" + }, + { + "type": "equation", + "bbox": [ + 0.092, + 0.389, + 0.484, + 0.425 + ], + "angle": 0, + "content": "\\[\n\\mathbf {p} _ {C + k} = \\mathbb {E} _ {\\mathbf {x} _ {i} ^ {u} \\in \\mathcal {D} _ {U}} \\left[ \\mathbb {1} \\left(\\max (\\hat {\\mathbf {y}} _ {i} ^ {t}) > \\epsilon \\wedge \\right. \\left. \\arg \\max _ {j} \\hat {\\mathbf {y}} _ {i j} ^ {t} = C + k\\right) f \\left(\\mathbf {x} _ {i} ^ {u}\\right) \\right] \\tag {8}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.435, + 0.485, + 0.495 + ], + "angle": 0, + "content": "Then for each semantic class \\(k \\in \\{1, \\dots, C\\}\\), we align the prototypes of the corresponding class pairs from the labeled and unlabeled domains, \\((\\mathbf{p}_k, \\mathbf{p}_{C + k})\\), by employing a cross-domain contrastive prototype alignment loss as follows:" + }, + { + "type": "equation", + "bbox": [ + 0.095, + 0.508, + 0.484, + 0.603 + ], + "angle": 0, + "content": "\\[\n\\begin{array}{l} \\mathcal {L} _ {p a} = - \\sum_ {k = 1} ^ {C} \\left[ \\log \\frac {\\exp \\left(\\cos \\left(\\mathbf {p} _ {k} , \\mathbf {p} _ {C + k}\\right) / \\tau\\right)}{\\sum_ {k ^ {\\prime} = 1} ^ {C} \\mathbb {1} \\left(k ^ {\\prime} \\neq k\\right) \\exp \\left(\\cos \\left(\\mathbf {p} _ {k} , \\mathbf {p} _ {C + k ^ {\\prime}}\\right) / \\tau\\right)} \\right. \\\\ \\left. + \\log \\frac {\\exp \\left(\\cos \\left(\\mathbf {p} _ {k} , \\mathbf {p} _ {C + k}\\right) / \\tau\\right)}{\\sum_ {k ^ {\\prime} = 1} ^ {C} \\mathbb {1} \\left(k ^ {\\prime} \\neq k\\right) \\exp \\left(\\cos \\left(\\mathbf {p} _ {k ^ {\\prime}} , \\mathbf {p} _ {C + k}\\right) / \\tau\\right)} \\right] \\tag {9} \\\\ \\end{array}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.617, + 0.484, + 0.722 + ], + "angle": 0, + "content": "where \\(\\tau\\) is a temperature hyper-parameter, and \\(\\cos (\\cdot ,\\cdot)\\) denotes the cosine similarity function. This contrastive loss promotes the sharing of predictive information between the labeled and unlabeled domains by encouraging the corresponding class prototype pairs to be closer to each other while simultaneously pushing the non-corresponding cross-domain class prototype pairs farther apart." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.731, + 0.367, + 0.746 + ], + "angle": 0, + "content": "3.2.4. 
Progressive Inter-Domain Mixup" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.75, + 0.484, + 0.854 + ], + "angle": 0, + "content": "In order to bridge the gap between the labeled domain and the unlabeled domain, we propose a progressive inter-domain mixup mechanism to augment the training set by dynamically generating synthetic instances between the labeled set and unlabeled set, with the objective of facilitating steady and efficient knowledge transfer from the labeled domain to the unlabeled domain." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.856, + 0.484, + 0.902 + ], + "angle": 0, + "content": "Specifically, we generate an inter-domain synthetic instance \\((\\mathbf{x}^m,\\mathbf{y}^m)\\) by mixing a labeled instance \\((\\mathbf{x}^l,\\mathbf{y}^l)\\) from the labeled set \\(\\mathcal{D}_L\\) with a pseudo-labeled instance \\((\\mathbf{x}^u,\\hat{\\mathbf{y}}^t)\\)" + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.092, + 0.88, + 0.107 + ], + "angle": 0, + "content": "from the unlabeled set \\(\\mathcal{D}_U\\) through linear interpolation:" + }, + { + "type": "equation", + "bbox": [ + 0.614, + 0.118, + 0.906, + 0.144 + ], + "angle": 0, + "content": "\\[\n\\mathbf {x} ^ {m} = \\lambda \\mathbf {x} ^ {u} + (1 - \\lambda) \\mathbf {x} ^ {l}, \\tag {10}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.572, + 0.14, + 0.823, + 0.157 + ], + "angle": 0, + "content": "\\[\n\\mathbf {y} ^ {m} = \\lambda \\hat {\\mathbf {y}} ^ {t} + (1 - \\lambda) \\operatorname {c o n c a t} (\\mathbf {y} ^ {t}, \\mathbf {0} _ {C}),\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.169, + 0.906, + 0.243 + ], + "angle": 0, + "content": "where \\(\\lambda \\in [0,1]\\) is the mixing coefficient. To fully utilize the available data in both domains, we can generate \\(N^{m} = \\max (N^{l},N^{u})\\) synthetic instances to form a synthetic set \\(\\mathcal{D}_{\\mathrm{Mixup}}\\) by mixing each instance in the larger domain with a randomly selected instance in the other domain." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.245, + 0.907, + 0.365 + ], + "angle": 0, + "content": "In the standard mixup [40], the mixing coefficient \\(\\lambda\\) is sampled from a fixed \\(\\mathrm{Beta}(\\alpha, \\alpha)\\) distribution with hyperparameter \\(\\alpha\\). To facilitate a steady and smooth adaptation from the labeled domain to the unlabeled domain for HSSL, we propose to dynamically generate the mixup data in each training iteration \\(t\\) by deploying a progressive mixing up strategy that samples \\(\\lambda\\) from a shifted \\(\\mathrm{Beta}(\\alpha, \\alpha)\\) distribution based on a schedule function \\(\\psi(t)\\), such that:" + }, + { + "type": "equation", + "bbox": [ + 0.549, + 0.377, + 0.907, + 0.406 + ], + "angle": 0, + "content": "\\[\n\\lambda \\sim \\psi (t) \\times \\operatorname {B e t a} (\\alpha , \\alpha), \\quad \\psi (t) = 0. 5 + \\frac {t}{2 T} \\tag {11}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.415, + 0.907, + 0.565 + ], + "angle": 0, + "content": "where \\( T \\) denotes the total number of training iterations. Following this schedule, at the beginning of the training process, we have \\( \\psi(0) \\approx 0.5 \\) and \\( \\lambda \\) is sampled from the approximate interval [0, 0.5) as the model prioritizes the labeled domain, guarding against noisy pseudo-label predictions from unlabeled data. 
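The sampling step can be sketched as follows; this is a hedged illustration of Eqs. (10)-(11), where the Beta shape alpha = 0.75 is a placeholder assumption (the paper treats the Beta shape as a hyper-parameter and does not fix its value in this passage).

```python
from torch.distributions import Beta

def progressive_mixup(x_l, y_l_2c, x_u, y_u_hat, t, T, alpha=0.75):
    """Sketch of Eqs. (10)-(11). x_l/x_u: labeled/unlabeled batches;
    y_l_2c = concat(y_l, 0_C) and y_u_hat are their (B, 2C) soft targets;
    t/T: current and total training iterations; alpha is illustrative."""
    psi_t = 0.5 + t / (2.0 * T)                  # Eq. (11): psi(0) ~ 0.5, psi(T) = 1
    lam = psi_t * Beta(alpha, alpha).sample()    # lambda drawn from [0, psi(t)]
    x_m = lam * x_u + (1.0 - lam) * x_l          # Eq. (10): mixed inputs
    y_m = lam * y_u_hat + (1.0 - lam) * y_l_2c   # Eq. (10): mixed 2C-dim labels
    return x_m, y_m
```

The synthetic pairs produced this way form the set D_Mixup that is fit with the mean-squared-error objective introduced next.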
As the training progresses, the model gradually increases its reliance on the unlabeled data, and the interval [0, \\( \\psi(t) \\)] from which \\( \\lambda \\) is sampled is expanded gradually towards [0, 1] (with \\( \\psi(T) = 1 \\)), allowing it to adapt seamlessly between domains." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.567, + 0.907, + 0.612 + ], + "angle": 0, + "content": "Following previous works on using mixup data [1], we employ the mixup set \\(\\mathcal{D}_{\\mathrm{Mixup}}\\) for model training by minimizing the following mean squared error:" + }, + { + "type": "equation", + "bbox": [ + 0.532, + 0.623, + 0.907, + 0.65 + ], + "angle": 0, + "content": "\\[\n\\mathcal {L} _ {\\text {M i x u p}} = \\mathbb {E} _ {\\left(\\mathbf {x} _ {i} ^ {m}, \\mathbf {y} _ {i} ^ {m}\\right) \\in \\mathcal {D} _ {\\text {M i x u p}}} \\left[ \\left\\| h \\left(f \\left(\\mathbf {x} _ {i} ^ {m}\\right)\\right) - \\mathbf {y} _ {i} ^ {m}\\right) \\right\\| ^ {2} \\tag {12}\n\\]" + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.659, + 0.692, + 0.674 + ], + "angle": 0, + "content": "3.2.5. Training Objective" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.678, + 0.906, + 0.738 + ], + "angle": 0, + "content": "By integrating the classification loss terms on the labeled set, the unlabeled set, and the mixup set, with the class prototype alignment loss, we obtain the following joint training objective for the Uni-HSSL model:" + }, + { + "type": "equation", + "bbox": [ + 0.534, + 0.75, + 0.907, + 0.769 + ], + "angle": 0, + "content": "\\[\n\\mathcal {L} _ {\\text {t o t a l}} = \\mathcal {L} _ {c l} ^ {L} + \\lambda_ {p l} \\mathcal {L} _ {p l} ^ {U} + \\lambda_ {p a} \\mathcal {L} _ {p a} + \\lambda_ {\\text {M i x u p}} \\mathcal {L} _ {\\text {M i x u p}} \\tag {13}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.78, + 0.893, + 0.796 + ], + "angle": 0, + "content": "where \\(\\lambda_{pl},\\lambda_{pa}\\) and \\(\\lambda_{\\mathrm{Mixup}}\\) are trade-off hyper-parameters." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.809, + 0.645, + 0.826 + ], + "angle": 0, + "content": "4. Experiments" + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.833, + 0.704, + 0.85 + ], + "angle": 0, + "content": "4.1. Experimental Setup" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.856, + 0.907, + 0.901 + ], + "angle": 0, + "content": "Datasets We conducted comprehensive experiments to evaluate the performance of our proposed framework on four image classification benchmark datasets: Office-31," + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "15375" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.118, + 0.088, + 0.882, + 0.304 + ], + "angle": 0, + "content": "
| | Supervised | FlexMatch | FixMatch | SimMatch | CDAN+Sup | MCC+Sup | BiAdapt | Uni-HSSL |
| A/C | 53.1(0.7) | 51.1(1.2) | 51.9(1.5) | 57.8(1.6) | 47.0(0.5) | 54.9(1.2) | 55.1(1.8) | 60.1(0.9) |
| C/A | 66.0(1.2) | 68.1(1.3) | 63.8(0.7) | 69.7(0.9) | 63.9(0.7) | 70.5(0.3) | 65.1(1.2) | 72.0(0.7) |
| C/R | 77.5(0.9) | 72.1(0.9) | 79.5(0.5) | 78.5(0.5) | 67.1(0.8) | 75.4(0.5) | 75.2(1.2) | 80.5(0.4) |
| R/C | 63.9(1.2) | 67.8(1.6) | 66.2(0.7) | 64.3(0.8) | 67.0(1.2) | 69.3(0.5) | 61.2(1.8) | 72.8(0.6) |
| R/A | 72.6(0.9) | 59.0(1.2) | 74.1(0.5) | 70.5(0.5) | 74.6(0.9) | 75.1(0.8) | 69.1(0.9) | 75.8(0.6) |
| A/R | 75.1(0.7) | 73.5(0.9) | 70.4(0.6) | 75.8(0.5) | 66.5(0.8) | 77.3(0.8) | 72.1(1.4) | 78.3(0.5) |
| A/P | 67.4(1.5) | 64.0(0.9) | 62.7(0.6) | 68.9(0.6) | 56.5(0.5) | 71.8(0.4) | 64.9(1.3) | 70.9(0.8) |
| P/A | 69.1(1.0) | 64.1(1.2) | 62.8(0.8) | 69.7(0.9) | 74.9(1.2) | 76.1(0.2) | 64.1(0.8) | 78.7(0.4) |
| C/P | 69.1(0.9) | 65.6(1.0) | 65.1(1.1) | 70.0(0.4) | 65.5(1.2) | 71.2(0.5) | 69.1(1.4) | 72.8(0.7) |
| P/C | 64.6(0.9) | 64.3(1.1) | 65.2(1.5) | 68.5(0.8) | 66.8(0.6) | 68.0(0.5) | 67.7(0.9) | 69.9(0.9) |
| P/R | 80.0(0.5) | 73.3(0.7) | 78.1(0.4) | 78.1(0.2) | 89.5(0.4) | 82.1(0.6) | 76.2(1.2) | 82.9(0.4) |
| R/P | 77.9(0.1) | 68.1(1.2) | 74.7(0.3) | 74.0(0.7) | 78.2(1.3) | 77.0(1.2) | 74.1(1.4) | 82.1(0.5) |
| Avg. | 69.7 | 65.9 | 67.9 | 70.5 | 67.4 | 72.3 | 67.8 | 74.7 |
" + }, + { + "type": "table_caption", + "bbox": [ + 0.09, + 0.315, + 0.908, + 0.345 + ], + "angle": 0, + "content": "Table 1. Mean classification accuracy (standard deviation is within parentheses) on the Office-Home dataset using the ResNet-50 backbone. The first domain in each row indicates the labeled domain while the second domain indicates the unlabeled domain." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.37, + 0.486, + 0.702 + ], + "angle": 0, + "content": "Office-Home, VisDA, and ISIC-2019. In all four datasets, we split the samples of each domain into 90/10 train/test data. Office-31 [28] is comprised of a collection of 4,652 images spanning 31 different categories. The images are sourced from 3 distinct domains: Amazon (A), DSLR (D), and Webcam (W) with different image resolutions, quality, and lighting conditions. Office-Home [36] is a large collection of over 15,500 images spanning 65 categories. The images are sourced from 4 diverse domains: Artistic images (A), Clip Art (C), Product images (P), and Real-World images (R). VisDA-2017 [26] is a large-scale dataset tailored specifically for the visual domain adaptation task. This dataset includes images of 12 distinct categories from two domains, Synthetic (S) and Real (R). With the significant domain shift between the synthetic and real images, VisDA highlights the difficulties associated with bridging significant domain gaps. ISIC-2019 is a comprehensive repository of skin cancer research images sourced from 4 different sources: BCN-20000 (BCN) [8], Skin Cancer MNIST (HAM) [34], MSK4 [7], and an undefined source. We only utilize BCN and HAM sources as they include samples from all eight distinct classes." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.72, + 0.486, + 0.902 + ], + "angle": 0, + "content": "Implementation Details For all baselines we compared our Uni-HSSL against, we strictly followed the implementation details and hyper-parameters specified in the corresponding original papers. In order to ensure consistent comparisons with a multitude of earlier studies across various benchmark datasets, we employed two common backbone networks: ResNet-50 and ResNet-101 which are pre-trained on the ImageNet [10] dataset. We utilized ResNet-101 for VisDA dataset experiments and ResNet-50 for all the other benchmark datasets. The supervised pre-training stage is made up of 10 epochs while the semi-supervised training stage is made up of 100 epochs. In both stages, we em" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.37, + 0.909, + 0.553 + ], + "angle": 0, + "content": "ployed an SGD optimizer with a learning rate of \\(5e^{-4}\\) and Nesterov momentum [24] of 0.9. In the semi-supervised training stage, the learning rate is adjusted using a cosine annealing strategy [23, 37]. We set the L2 regularization coefficient to \\(1e^{-3}\\) and the batch size to 32 for all datasets. The trade-off hyper-parameters \\(\\lambda_{pl},\\lambda_{pa},\\lambda_{\\mathrm{Mixup}}\\) take the values 1, \\(1e^{-2}\\) and 1 respectively, while \\(\\tau\\) and \\(\\epsilon\\) take the value 0.5 and \\(\\beta\\) is set to 0.8. Furthermore, similar to [1], we apply random translations and horizontal flips to the input images prior to applying the Progressive Inter-Domain Mixup. We report the mean classification accuracy and the corresponding standard deviation over 3 runs in each experiment." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.562, + 0.707, + 0.577 + ], + "angle": 0, + "content": "4.2. 
Comparison Results" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.584, + 0.909, + 0.903 + ], + "angle": 0, + "content": "We evaluate the proposed Uni-HSSL framework on the heterogeneous semi-supervised learning tasks and compare it to four categories of baselines: Supervised Learning baselines, Semi-Supervised Learning (SSL) baselines, Unsupervised Domain Adaptation (UDA) baselines, and Bidirectional Adaptation baselines. The supervised baseline is exclusively trained on the labeled data and does not leverage the unlabeled data during training. We employ a set of representative SSL baselines (FlexMatch [39], FixMatch [31], and SimMatch [41]) and a set of representative UDA baselines (CDAN [22] and MCC [15]). In particular, we also compare our work with the state-of-the-art bidirectional adaptation method (BiAdapt) [14]. As the traditional UDA methods are trained to perform well solely on an unlabeled target domain, to ensure a fair comparison, we equip the UDA methods with a Supervised classifier (Sup) trained on the labeled set and a domain classifier and refer to them as \\(\\mathrm{MCC} + \\mathrm{Sup}\\) and \\(\\mathrm{CDAN} + \\mathrm{Sup}\\). At inference time, the domain classifier assigns each test sample to the appropriate classifier in the corresponding domain—either the supervised classifier for samples predicted to originate from the labeled domain or" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.52, + 0.957 + ], + "angle": 0, + "content": "15376" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.097, + 0.088, + 0.902, + 0.304 + ], + "angle": 0, + "content": "
| | Supervised | FlexMatch | FixMatch | SimMatch | CDAN+Sup | MCC+Sup | BiAdapt | Uni-HSSL |
| Plane | 93.8(0.2) | 98.3(0.7) | 94.9(0.5) | 93.6(0.8) | 98.4(0.3) | 98.6(0.3) | 90.1(1.4) | 98.2(0.5) |
| Bicycle | 74.1(0.5) | 74.8(0.9) | 53.5(0.2) | 81.1(0.8) | 94.4(0.7) | 96.6(0.5) | 79.1(1.2) | 97.5(0.9) |
| Bus | 79.4(0.7) | 53.9(1.2) | 79.5(0.8) | 56.9(1.2) | 90.1(0.5) | 88.6(0.7) | 54.7(1.3) | 91.4(0.8) |
| Car | 86.2(0.9) | 36.4(2.1) | 88.5(0.3) | 59.6(1.5) | 85.1(0.5) | 84.8(0.9) | 56.1(1.2) | 89.0(0.9) |
| Horse | 90.9(0.2) | 97.4(0.5) | 76.0(0.8) | 65.6(1.0) | 96.6(0.1) | 97.6(0.3) | 62.1(1.4) | 98.2(0.3) |
| Knife | 87.5(0.7) | 77.2(0.8) | 78.8(0.9) | 71.9(0.5) | 95.0(1.4) | 95.1(0.9) | 68.2(0.1) | 98.9(0.4) |
| Motor. | 94.5(0.4) | 66.6(1.2) | 40.8(1.2) | 70.8(0.9) | 96.6(0.5) | 94.2(0.2) | 68.1(1.5) | 97.0(0.6) |
| Person | 80.0(0.7) | 80.5(0.8) | 58.9(1.6) | 64.1(0.8) | 94.3(0.6) | 94.6(0.2) | 62.5(1.2) | 95.6(0.7) |
| Plant | 91.1(0.7) | 91.8(0.8) | 62.7(0.7) | 65.5(0.9) | 96.5(0.4) | 97.3(0.5) | 63.5(1.9) | 95.7(0.2) |
| Skateboard | 81.8(0.9) | 90.0(0.5) | 68.9(1.2) | 57.0(1.7) | 85.5(0.5) | 83.0(0.8) | 59.3(1.7) | 91.5(0.8) |
| Train | 96.0(0.3) | 96.8(0.7) | 94.2(0.4) | 74.2(0.9) | 95.7(0.7) | 95.6(0.1) | 71.3(1.5) | 97.0(0.3) |
| Truck | 59.8(0.9) | 49.2(1.2) | 49.5(1.2) | 52.1(1.7) | 79.8(0.2) | 80.6(1.0) | 50.1(1.5) | 82.4(0.7) |
| Avg. | 84.1 | 82.4 | 87.3 | 80.8 | 92.1 | 92.0 | 79.1 | 93.1 |
" + }, + { + "type": "table_caption", + "bbox": [ + 0.09, + 0.315, + 0.907, + 0.345 + ], + "angle": 0, + "content": "Table 2. Mean classification accuracy (standard deviation is within parentheses) on the VisDA dataset using the ResNet-101 backbone. The rows correspond to the different classes of the dataset." + }, + { + "type": "table", + "bbox": [ + 0.13, + 0.367, + 0.445, + 0.51 + ], + "angle": 0, + "content": "
| | B/H | H/B | Avg. |
| Supervised | 70.5(0.9) | 65.4(1.2) | 67.9 |
| FlexMatch | 71.3(1.4) | 68.7(0.8) | 70.0 |
| FixMatch | 77.5(0.8) | 65.0(0.7) | 71.3 |
| SimMatch | 75.1(1.5) | 69.2(1.7) | 72.2 |
| CDAN+Sup | 72.9(1.0) | 65.2(0.4) | 69.1 |
| MCC+Sup | 60.2(1.8) | 56.7(1.7) | 58.7 |
| BiAdapt | 74.2(0.7) | 68.3(1.3) | 71.2 |
| Uni-HSSL | 79.9(0.7) | 71.0(0.9) | 75.4 |
" + }, + { + "type": "table_caption", + "bbox": [ + 0.09, + 0.52, + 0.483, + 0.577 + ], + "angle": 0, + "content": "Table 3. Mean classification accuracy (standard deviation is within parentheses) on ISIC-2019 using the ResNet-50 backbone. The first domain in each row indicates the labeled domain the second domain indicates the unlabeled domain." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.596, + 0.483, + 0.626 + ], + "angle": 0, + "content": "the UDA classifier for those predicted to come from the unlabeled domain." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.629, + 0.486, + 0.902 + ], + "angle": 0, + "content": "The comparison results on the Office-Home, VisDA, ISIC-2019 and Office-31 datasets are reported in Tables 1, 2, 3 and 4, respectively, where the first domain indicates the labeled domain and the second domain indicates the unlabeled domain. In the case of the VisDA dataset, the labeled dataset is sampled from the synthetic domain (S) and the unlabeled dataset is sampled from the real domain (R) and we report the average classification accuracy for each class and the overall average classification accuracy. The tables show that Uni-HSSL consistently outperforms all baselines on all datasets across all setups. The performance gains over the supervised baseline are notable exceeding \\(9\\%\\), \\(4\\%\\), \\(9\\%\\), and \\(7\\%\\) on average in the cases of the Office-31, Office-Home, VisDA, and ISIC-2019 datasets, respectively. In the case of the VisDA dataset, the performance improvement over the supervised baseline at the class level is substantial, exceeding \\(22\\%\\) for some classes. Furthermore, Uni-HSSL consistently outperforms all the SSL baselines, achieving" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.37, + 0.909, + 0.763 + ], + "angle": 0, + "content": "performance gains exceeding \\(3\\%\\), \\(4\\%\\), \\(5\\%\\), and \\(3\\%\\) over the most effective SSL baselines on the Office-31, Office-Home, VisDA, and ISIC-2019 datasets, respectively. In some cases, such as A/W on Office-31 and P/A on Office-Home, the performance improvement over SSL baselines is notable, surpassing \\(6\\%\\) and \\(8\\%\\), respectively, highlighting the limitations of traditional SSL baselines in the proposed HSSL task. In the case of the UDA baselines, Uni-HSSL yields superior performance with all domain setups on all four datasets with performance gains around \\(4\\%\\), \\(2\\%\\), \\(1\\%\\), and \\(6\\%\\) on Office-31, Office-Home, VisDA and ISIC-2019 datasets, respectively. Uni-HSSL outperforms the UDA baselines on almost all classes of the VisDA dataset, with the UDA baselines slightly excelling in only two classes. However, Uni-HSSL still maintains superior overall performance compared to the UDA baselines. Furthermore, the MCC+Sup baseline does not perform well on the ISIC-2019 dataset, where it suffers a major drop in performance which can be attributed to the MCC baseline's sensitivity to the class imbalance inherent in this dataset. Moreover, our Uni-HSSL also substantially outperforms BiAdapt, with performance gains surpassing \\(5\\%\\), \\(6\\%\\), \\(14\\%\\), and \\(4\\%\\) on the Office-31, Office-Home, VisDA and ISIC-2019 datasets, respectively. These results underscore the robustness of Uni-HSSL and highlight the limitations of BiAdapt in effectively addressing the challenges posed by the proposed HSSL task." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.773, + 0.667, + 0.789 + ], + "angle": 0, + "content": "4.3. 
Ablation Study" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.795, + 0.909, + 0.903 + ], + "angle": 0, + "content": "In order to investigate the contribution of each component of the proposed framework, we conducted an ablation study to compare the proposed Uni-HSSL with its six variants: (1) “-w/o WMA”, which drops the Weighted Moving Average component of the pseudo-label update and simply uses the model predictions to generate pseudo-labels; (2) “-w/o \\(\\mathcal{L}_{\\mathrm{cl}}^{L}\\)”, which drops the cross-entropy classification loss on the la" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "15377" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.184, + 0.089, + 0.813, + 0.231 + ], + "angle": 0, + "content": "
| | W/A | A/W | A/D | D/A | D/W | W/D | Avg. |
| Supervised | 68.6(1.6) | 82.8(1.2) | 85.1(1.8) | 35.5(0.9) | 96.9(0.4) | 98.2(0.5) | 77.8 |
| FlexMatch | 68.1(1.8) | 81.3(1.3) | 85.1(1.8) | 63.0(2.1) | 98.5(0.2) | 98.9(0.2) | 82.4 |
| FixMatch | 69.1(1.3) | 83.4(0.9) | 86.4(0.8) | 53.7(1.3) | 98.1(0.2) | 98.2(0.2) | 81.5 |
| SimMatch | 71.1(0.9) | 84.1(1.0) | 86.5(0.5) | 68.6(1.1) | 96.8(0.5) | 98.8(0.4) | 84.3 |
| CDAN+Sup | 61.2(1.2) | 82.5(1.3) | 87.4(2.2) | 58.3(2.6) | 79.2(0.4) | 97.5(0.4) | 77.7 |
| MCC+Sup | 71.5(2.7) | 88.8(0.7) | 89.1(0.5) | 67.6(1.3) | 81.7(0.7) | 99.5(0.4) | 83.0 |
| BiAdapt | 70.2(0.9) | 85.0(0.5) | 77.4(0.7) | 67.1(1.0) | 94.2(0.5) | 98.5(0.3) | 82.0 |
| Uni-HSSL | 73.1(1.0) | 90.2(0.8) | 90.0(0.2) | 72.1(0.7) | 100(0.0) | 100(0.0) | 87.5 |
" + }, + { + "type": "table_caption", + "bbox": [ + 0.09, + 0.242, + 0.905, + 0.27 + ], + "angle": 0, + "content": "Table 4. Mean classification accuracy (standard deviation is within parentheses) on the Office-31 dataset using the ResNet-50 backbone. The first domain in each column indicates the labeled domain the second domain indicates the unlabeled domain." + }, + { + "type": "table", + "bbox": [ + 0.159, + 0.283, + 0.84, + 0.41 + ], + "angle": 0, + "content": "
| | W/A | A/W | A/D | D/A | D/W | W/D | Avg. |
| Uni-HSSL | 73.1(1.0) | 90.2(0.8) | 90.0(0.2) | 72.1(0.7) | 100(0.0) | 100(0.0) | 87.5 |
| -w/o WMA | 72.8(0.5) | 87.1(0.8) | 88.3(0.9) | 71.0(0.8) | 100(0.0) | 100(0.0) | 86.8 |
| -w/o \\( \\mathcal{L}_{cl}^{L} \\) | 67.6(1.7) | 85.5(0.8) | 86.1(1.2) | 64.8(2.0) | 93.2(0.5) | 92.9(0.6) | 81.7 |
| -w/o \\( \\mathcal{L}_{pl}^{U} \\) | 72.5(0.8) | 87.9(0.9) | 88.1(0.7) | 71.0(0.9) | 98.0(0.2) | 98.5(0.2) | 86.1 |
| -w/o \\( \\mathcal{L}_{pa} \\) | 72.7(0.5) | 88.9(0.7) | 87.2(0.6) | 71.3(0.9) | 99.1(0.0) | 100(0.0) | 86.5 |
| -w/o \\( \\mathcal{L}_{Mixup} \\) | 71.9(1.2) | 86.7(0.9) | 88.1(0.8) | 71.3(1.1) | 98.0(0.4) | 99.9(0.0) | 86.1 |
| -w/o Prog. Mixup | 71.3(0.9) | 84.8(0.9) | 88.1(1.0) | 70.0(1.3) | 99.2(0.5) | 99.9(0.0) | 85.6 |
" + }, + { + "type": "table_caption", + "bbox": [ + 0.09, + 0.421, + 0.907, + 0.462 + ], + "angle": 0, + "content": "Table 5. Ablation study results in terms of mean classification accuracy (standard deviation is within parentheses) on the Office-31 dataset using the ResNet-50 backbone. The first domain in each column indicates the labeled domain while the second domain indicates the unlabeled domain." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.474, + 0.483, + 0.626 + ], + "angle": 0, + "content": "beled set \\(\\mathcal{D}_L\\); (3) “\\(-\\mathrm{w/o}\\mathcal{L}_{\\mathrm{pl}}^U\\), which drops the cross-entropy pseudo-label classification loss on the unlabeled set \\(\\mathcal{D}_U\\); (4) “\\(-\\mathrm{w/o}\\mathcal{L}_{\\mathrm{pa}}\\), which drops the Cross-Domain Prototype Alignment component; (5) “\\(-\\mathrm{w/o}\\mathcal{L}_{\\mathrm{Mixup}}\\), which drops the Progressive Inter-Domain Mixup component; and (6) “\\(-\\mathrm{w/o}\\) Prog. Mixup”, which drops the progressive component of the Inter-Domain Mixup and uses a simple mixup for inter-domain data augmentation. We compare the proposed UniHSSL with all the six variants on the Office-31 dataset and report the results in Table 5." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.629, + 0.483, + 0.901 + ], + "angle": 0, + "content": "From the table, we can see that dropping any component from the proposed unified framework results in performance degradation in all cases. “-w/o \\(\\mathcal{L}_{\\mathrm{cl}}^{L}\\)” variant suffered the largest performance degradation, which highlights the importance of the ground-truth labels of \\(\\mathcal{D}_L\\) in guiding the learning process of the framework. Dropping the WMA from the pseudo-label generation component led to a slight average performance drop to \\(86.8\\%\\), underscoring its role in obtaining stable and confident pseudo-labels. Similarly, dropping the classification loss on the unlabeled data \\(\\mathcal{L}_{pl}^{U}\\) led to a performance degradation to \\(86.1\\%\\). Furthermore, the variant “-w/o Prog. Mixup” suffers a larger drop in performance in comparison with the variant “-w/o \\(\\mathcal{L}_{\\mathrm{Mixup}}\\)”, which highlights the importance of progressively generating the augmented samples to ensure the accuracy of their corresponding augmented labels. Generating inter-domain augmented samples without taking into account the domain gap between the labeled domain and unlabeled domain can" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.476, + 0.907, + 0.552 + ], + "angle": 0, + "content": "lead to a degradation in performance due to the noisy augmented labels of the generated samples. Overall, the consistent performance drops across all the tasks of Office-31 for each variant validate the essential contribution of each corresponding component of the Uni-HSSL framework." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.571, + 0.634, + 0.587 + ], + "angle": 0, + "content": "5. Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.599, + 0.908, + 0.901 + ], + "angle": 0, + "content": "In this paper, we introduced a challenging heterogeneous semi-supervised learning problem, where the labeled and unlabeled training data come from different domains and possess different label and class feature distributions. 
To address this demanding setup, we proposed a Unified Framework for Heterogeneous Semi-Supervised Learning (Uni-HSSL), which trains a fine-grained classification model over the concatenated label space by effectively exploiting the labeled and unlabeled data as well as their relationships. Uni-HSSL adopts a WMA pseudo-labeling strategy to obtain stable and confident pseudo-labels for the unlabeled data, while deploying a cross-domain class prototype alignment component to support knowledge transfer and sharing between domains. A novel progressive inter-domain mixup component is further devised to augment the training data and bridge the significant gap between the labeled and unlabeled domains. The experimental results demonstrate the effectiveness and superiority of the proposed Uni-HSSL over state-of-the-art semi-supervised learning methods and unsupervised domain adaptation baselines." + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.518, + 0.956 + ], + "angle": 0, + "content": "15378" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.093, + 0.09, + 0.188, + 0.106 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.115, + 0.486, + 0.184 + ], + "angle": 0, + "content": "[1] David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, and Colin A Raffel. Mixmatch: A holistic approach to semi-supervised learning. In Advances in Neural Information Processing Systems (NeurIPS), 2019. 5, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.187, + 0.485, + 0.255 + ], + "angle": 0, + "content": "[2] David Berthelot, Rebecca Roelofs, Kihyuk Sohn, Nicholas Carlini, and Alexey Kurakin. Adamatch: A unified approach to semi-supervised learning and domain adaptation. In International Conference on Learning Representations (ICLR), 2021. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.258, + 0.483, + 0.3 + ], + "angle": 0, + "content": "[3] Kaidi Cao, Maria Brbic, and Jure Leskovec. Open-world semi-supervised learning. In International Conference on Learning Representations (ICLR), 2022. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.302, + 0.484, + 0.357 + ], + "angle": 0, + "content": "[4] Chao Chen, Zhihang Fu, Zhihong Chen, Sheng Jin, Zhaowei Cheng, Xinyu Jin, and Xian-Sheng Hua. Homm: Higher-order moment matching for unsupervised domain adaptation. In AAAI Conference on Artificial Intelligence, 2020. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.36, + 0.484, + 0.415 + ], + "angle": 0, + "content": "[5] Xinyang Chen, Sinan Wang, Mingsheng Long, and Jianmin Wang. Transferability vs. discriminability: Batch spectral penalization for adversarial domain adaptation. In International Conference on Machine Learning (ICML), 2019. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.417, + 0.484, + 0.47 + ], + "angle": 0, + "content": "[6] Yanbei Chen, Xiatian Zhu, Wei Li, and Shaogang Gong. Semi-supervised learning under class distribution mismatch. In Proceedings of the AAAI Conference on Artificial Intelligence, 2020. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.474, + 0.484, + 0.584 + ], + "angle": 0, + "content": "[7] Noel CF Codella, David Gutman, M Emre Celebi, Brian Helba, Michael A Marchetti, Stephen W Dusza, Aadi Kalloo, Konstantinos Liopyris, Nabin Mishra, Harald Kittler, et al. 
Skin lesion analysis toward melanoma detection: A challenge at the 2017 international symposium on biomedical imaging (isbi), hosted by the international skin imaging collaboration (isic). In International Symposium on Biomedical Imaging (ISBI), 2018. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.587, + 0.484, + 0.655 + ], + "angle": 0, + "content": "[8] Marc Combalia, Noel CF Codella, Veronica Rotemberg, Brian Helba, Veronica Vilaplana, Ofer Reiter, Cristina Carrera, Alicia Barreiro, Allan C Halpern, Susana Puig, et al. Bcn20000: Dermoscopic lesions in the wild. arXiv preprint arXiv:1908.02288, 2019. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.657, + 0.484, + 0.727 + ], + "angle": 0, + "content": "[9] Shuhao Cui, Shuhui Wang, Junbao Zhuo, Liang Li, Qingming Huang, and Qi Tian. Towards discriminability and diversity: Batch nuclear-norm maximization under label insufficient situations. In Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.729, + 0.484, + 0.784 + ], + "angle": 0, + "content": "[10] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In Conference on Computer Vision and Pattern Recognition (CVPR), 2009. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.787, + 0.484, + 0.828 + ], + "angle": 0, + "content": "[11] Yaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. In International Conference on Machine Learning (ICML), 2015. 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.83, + 0.484, + 0.885 + ], + "angle": 0, + "content": "[12] Lan-Zhe Guo, Zhen-Yu Zhang, Yuan Jiang, Yu-Feng Li, and Zhi-Hua Zhou. Safe deep semi-supervised learning for unseen-class unlabeled data. In International Conference on Machine Learning (ICML). PMLR, 2020. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.887, + 0.484, + 0.902 + ], + "angle": 0, + "content": "[13] Zhuo Huang, Chao Xue, Bo Han, Jian Yang, and Chen Gong." + }, + { + "type": "list", + "bbox": [ + 0.094, + 0.115, + 0.486, + 0.902 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.545, + 0.093, + 0.908, + 0.121 + ], + "angle": 0, + "content": "Universal semi-supervised learning. Advances in Neural Information Processing Systems (NeurIPS), 2021. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.122, + 0.908, + 0.189 + ], + "angle": 0, + "content": "[14] Lin-Han Jia, Lan-Zhe Guo, Zhi Zhou, Jie-Jing Shao, Yuke Xiang, and Yu-Feng Li. Bidirectional adaptation for robust semi-supervised learning with inconsistent data distributions. In International Conference on Machine Learning (ICML), 2023. 2, 3, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.191, + 0.907, + 0.233 + ], + "angle": 0, + "content": "[15] Ying Jin, Ximei Wang, Mingsheng Long, and Jianmin Wang. Minimum class confusion for versatile domain adaptation. In European Conference on Computer Vision (ECCV), 2020. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.234, + 0.907, + 0.274 + ], + "angle": 0, + "content": "[16] Samuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning. In International Conference on Learning Representations (ICLR), 2017. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.275, + 0.907, + 0.316 + ], + "angle": 0, + "content": "[17] Qicheng Lao, Xiang Jiang, and Mohammad Havaei. 
Hypothesis disparity regularized mutual information maximization. In AAAI Conference on Artificial Intelligence, 2021. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.317, + 0.906, + 0.344 + ], + "angle": 0, + "content": "[18] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. In Nature, 2015. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.345, + 0.908, + 0.398 + ], + "angle": 0, + "content": "[19] Dong-Hyun Lee et al. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In Workshop on challenges in representation learning, ICML, 2013. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.4, + 0.908, + 0.441 + ], + "angle": 0, + "content": "[20] Hong Liu, Jianmin Wang, and Mingsheng Long. Cycle self-training for domain adaptation. In Advances in Neural Information Processing Systems (NeurIPS), 2021. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.443, + 0.907, + 0.496 + ], + "angle": 0, + "content": "[21] Mingsheng Long, Yue Cao, Jianmin Wang, and Michael Jordan. Learning transferable features with deep adaptation networks. In International Conference on Machine Learning (ICML), 2015. 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.498, + 0.907, + 0.552 + ], + "angle": 0, + "content": "[22] Mingsheng Long, Zhangjie Cao, Jianmin Wang, and Michael I Jordan. Conditional adversarial domain adaptation. In Advances in Neural Information Processing Systems (NeurIPS), 2018. 3, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.554, + 0.906, + 0.594 + ], + "angle": 0, + "content": "[23] Ilya Loshchilov and Frank Hutter. SGDR: Stochastic gradient descent with warm restarts. In International Conference on Learning Representations (ICLR), 2017. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.596, + 0.907, + 0.636 + ], + "angle": 0, + "content": "[24] Yurii Nesterov. A method for unconstrained convex minimization problem with the rate of convergence o (1/k2). In Dokl. Akad. Nauk. SSSR, 1983. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.638, + 0.906, + 0.692 + ], + "angle": 0, + "content": "[25] Avital Oliver, Augustus Odena, Colin A Raffel, Ekin Dogus Cubuk, and Ian Goodfellow. Realistic evaluation of deep semi-supervised learning algorithms. In Advances in Neural Information Processing Systems (NeurIPS), 2018. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.693, + 0.908, + 0.747 + ], + "angle": 0, + "content": "[26] Xingchao Peng, Ben Usman, Neela Kaushik, Judy Hoffman, Dequan Wang, and Kate Saenko. Visda: The visual domain adaptation challenge. arXiv preprint arXiv:1710.06924, 2017. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.749, + 0.908, + 0.803 + ], + "angle": 0, + "content": "[27] Hoang Phan, Trung Le, Trung Phung, Anh Tuan Bui, Nhat Ho, and Dinh Phung. Global-local regularization via distributional robustness. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2023. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.805, + 0.908, + 0.845 + ], + "angle": 0, + "content": "[28] Kate Saenko, Brian Kulis, Mario Fritz, and Trevor Darrell. Adapting visual category models to new domains. In European Conference on Computer Vision (ECCV), 2010. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.847, + 0.907, + 0.899 + ], + "angle": 0, + "content": "[29] Jian Shen, Yanru Qu, Weinan Zhang, and Yong Yu. Wasserstein distance guided representation learning for domain adaptation. 
In AAAI Conference on Artificial Intelligence, 2018. 3" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.093, + 0.908, + 0.899 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.52, + 0.957 + ], + "angle": 0, + "content": "15379" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.093, + 0.486, + 0.146 + ], + "angle": 0, + "content": "[30] Rui Shu, Hung H Bui, Hirokazu Narui, and Stefano Ermon. A dirt-t approach to unsupervised domain adaptation. International Conference on Learning Representations (ICLR), 2018. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.149, + 0.483, + 0.232 + ], + "angle": 0, + "content": "[31] Kihyuk Sohn, David Berthelot, Nicholas Carlini, Zizhao Zhang, Han Zhang, Colin A Raffel, Ekin Dogus Cubuk, Alexey Kurakin, and Chun-Liang Li. Fixmatch: Simplifying semi-supervised learning with consistency and confidence. In Advances in Neural Information Processing Systems (NeurIPS), 2020. 2, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.234, + 0.483, + 0.288 + ], + "angle": 0, + "content": "[32] Chen Sun, Abhinav Shrivastava, Saurabh Singh, and Abhinav Gupta. Revisiting unreasonable effectiveness of data in deep learning era. In International Conference on Computer Vision (ICCV), 2017. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.291, + 0.483, + 0.346 + ], + "angle": 0, + "content": "[33] Antti Tarvainen and Harri Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In Advances in Neural Information Processing Systems (NeurIPS), 2017. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.348, + 0.483, + 0.401 + ], + "angle": 0, + "content": "[34] Philipp Tschandl, Cliff Rosendahl, and Harald Kittler. The ham10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Scientific data, 2018. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.404, + 0.483, + 0.432 + ], + "angle": 0, + "content": "[35] Jesper E Van Engelen and Holger H Hoos. A survey on semi-supervised learning. Machine learning, 2020. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.434, + 0.483, + 0.487 + ], + "angle": 0, + "content": "[36] Hemanth Venkateswara, Jose Eusebio, Shayok Chakraborty, and Sethuraman Panchanathan. Deep hashing network for unsupervised domain adaptation. In Conference on Computer Vision and Pattern Recognition (CVPR), 2017. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.489, + 0.483, + 0.543 + ], + "angle": 0, + "content": "[37] Vikas Verma, Kenji Kawaguchi, Alex Lamb, Juho Kannala, Arno Solin, Yoshua Bengio, and David Lopez-Paz. Interpolation consistency training for semi-supervised learning. In Neural Networks, 2022. 2, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.546, + 0.483, + 0.6 + ], + "angle": 0, + "content": "[38] Qing Yu, Daiki Ikami, Go Irie, and Kiyoharu Aizawa. Multi-task curriculum framework for open-set semi-supervised learning. In European Conference on Computer Vision (ECCV), 2020. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.603, + 0.483, + 0.671 + ], + "angle": 0, + "content": "[39] Bowen Zhang, Yidong Wang, Wenxin Hou, Hao Wu, Jindong Wang, Manabu Okumura, and Takahiro Shinozaki. Flexmatch: Boosting semi-supervised learning with curriculum pseudo labeling. In Advances in Neural Information Processing Systems (NeurIPS), 2021. 
2, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.673, + 0.483, + 0.728 + ], + "angle": 0, + "content": "[40] Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. Mixup: Beyond empirical risk minimization. In International Conference on Learning Representations (ICLR), 2018. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.73, + 0.483, + 0.784 + ], + "angle": 0, + "content": "[41] Mingkai Zheng, Shan You, Lang Huang, Fei Wang, Chen Qian, and Chang Xu. Simmatch: Semi-supervised learning with similarity matching. In Conference on Computer Vision and Pattern Recognition (CVPR), 2022. 2, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.787, + 0.483, + 0.827 + ], + "angle": 0, + "content": "[42] Yang Zou, Zhiding Yu, Xiaofeng Liu, BVK Kumar, and Jinsong Wang. Confidence regularized self-training. In International Conference on Computer Vision (ICCV), 2019. 3" + }, + { + "type": "list", + "bbox": [ + 0.093, + 0.093, + 0.486, + 0.827 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.52, + 0.957 + ], + "angle": 0, + "content": "15380" + } + ] +] \ No newline at end of file diff --git a/2025/A Unified Framework for Heterogeneous Semi-supervised Learning/341a6b6d-e6ba-48c2-98b8-5f373e8e2473_origin.pdf b/2025/A Unified Framework for Heterogeneous Semi-supervised Learning/341a6b6d-e6ba-48c2-98b8-5f373e8e2473_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..3aa3f34727a94cceb754b01c11d79aec49656d85 --- /dev/null +++ b/2025/A Unified Framework for Heterogeneous Semi-supervised Learning/341a6b6d-e6ba-48c2-98b8-5f373e8e2473_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:efbd693438903ca70e4f6c6ddce07dd3a3f4ac3397d3474d4b53129e3272378f +size 555598 diff --git a/2025/A Unified Framework for Heterogeneous Semi-supervised Learning/full.md b/2025/A Unified Framework for Heterogeneous Semi-supervised Learning/full.md new file mode 100644 index 0000000000000000000000000000000000000000..bd1b5e3053d22453d0fc26ec05628ea91bff1046 --- /dev/null +++ b/2025/A Unified Framework for Heterogeneous Semi-supervised Learning/full.md @@ -0,0 +1,283 @@ +# A Unified Framework for Heterogeneous Semi-supervised Learning + +Marzi Heidari*, Abdullah Alchihabi*, Hao Yan*, Yuhong Guo*† +*School of Computer Science, Carleton University, Ottawa, Canada†Canada CIFAR AI Chair, Amii, Canada + +{marziheidari@cmail., abdullahalchihibi@cmail., haoyan6@cmail., yuhong.guo@}carleton.ca + +# Abstract + +In this work, we introduce a novel problem setup termed as Heterogeneous Semi-Supervised Learning (HSSL), which presents unique challenges by bridging the semi-supervised learning (SSL) task and the unsupervised domain adaptation (UDA) task, and expanding standard semi-supervised learning to cope with heterogeneous training data. At its core, HSSL aims to learn a prediction model using a combination of labeled and unlabeled training data drawn separately from heterogeneous domains that share a common set of semantic categories. This model is intended to differentiate the semantic categories of test instances sampled from both the labeled and unlabeled domains. In particular, the labeled and unlabeled domains have dissimilar label distributions and class feature distributions. This heterogeneity, coupled with the assorted sources of the test data, introduces significant challenges to standard SSL and UDA methods. 
Therefore, we propose a novel method, Unified Framework for Heterogeneous Semi-supervised Learning (Uni-HSSL), to address HSSL by directly learning a fine-grained classifier from the heterogeneous data, which adaptively handles the inter-domain heterogeneity while leveraging both the unlabeled data and the inter-domain semantic class relationships for cross-domain knowledge transfer and adaptation. We conduct comprehensive experiments and the experimental results validate the efficacy and superior performance of the proposed Uni-HSSL over state-of-the-art semi-supervised learning and unsupervised domain adaptation methods.

# 1. Introduction

Deep learning models, owing to their hierarchical learned representations and intricate architectures, have monumentally advanced the state-of-the-art across a myriad of tasks [18]. Nonetheless, the success of deep learning has often been contingent on the availability of copious amounts of labeled data. Data annotation, especially in specialized domains, is not only resource-intensive but can also entail exorbitant costs [32]. Consequently, semi-supervised learning (SSL) has been popularly studied, aiming to successfully utilize freely available unlabeled data to help train deep models in an annotation-efficient manner [35].

However, current SSL methods assume that the unlabeled and labeled data are sampled from similar (homogeneous) distributions [25]. Such an assumption presents substantial practical limitations to applying traditional SSL methods to a wide range of application domains, where labeled and unlabeled data can have different distributions. For example, in the field of medical imaging, it is common for labeled MRI scans to be sourced from state-of-the-art research hospitals, while an influx of unlabeled scans could emanate from a myriad of rural clinics, each with its distinct scanning equipment and calibration idiosyncrasies. Similar heterogeneity patterns manifest in domains like aerial imagery, wildlife monitoring, and retail product classification. In such settings, the challenge lies in leveraging the unlabeled data given its dissimilarity with its labeled counterpart.

Therefore, to address the current limitations of traditional SSL, we propose a novel heterogeneous semi-supervised learning (HSSL) task, where the training data consist of labeled and unlabeled data sampled from different distribution domains. The two domains contain a common set of semantic classes, but have different label and class feature distributions. The goal of HSSL is to train a model using the heterogeneous training data so that it can perform well on a held-out test set sampled from both the labeled and unlabeled domains. Without posing distribution similarity assumptions between the labeled and unlabeled data, HSSL is expected to be applicable to a broader range of real-world scenarios compared to standard SSL. This novel heterogeneous semi-supervised learning task, however, is much more challenging due to the following characteristics: (1) The domain gap, expressed as divergence between class feature distributions across the labeled and unlabeled domains, presents a significant impediment to model generalization and learning. (2) The absence of annotated samples from the unlabeled domain during training further compounds the complexity of the task. (3) Considering that the test set comprises samples from both domains, the devised solution methods need to accurately model the distributions inherent to each domain.
It is imperative for the models to discern not only the domain from which a sample originates but also the specific semantic class it belongs to. This requires either an explicit or implicit methodology to categorize samples accurately with respect to both domain origin and semantic class categories, distinguishing the task from both conventional SSL and unsupervised domain adaptation (UDA): traditional SSL overlooks the domain heterogeneity within both the training and testing data, whereas UDA exclusively concentrates on the unlabeled domain as the target domain [11, 21]. Therefore, traditional SSL and UDA methods are not readily applicable or effective in addressing the proposed HSSL task. A recent work [14] has made an effort to expand the traditional SSL task beyond its homogeneous assumptions. However, the proposed solution method learns separately in different domains using distinct components, where an off-the-shelf UDA technique is employed to generate pseudo-labels for the unlabeled samples, bypassing the opportunity to train a unified cohesive model that could harness insights from both domains. Furthermore, their test set is confined to the labeled domain, while HSSL aims to train a model that generalizes across labeled and unlabeled domains. HSSL presents a more complex challenge, requiring the model to adapt and perform accurately across heterogeneous test data.

In this work, we propose a novel method, named Unified Framework for Heterogeneous Semi-Supervised Learning (Uni-HSSL), to address the HSSL problem. The proposed method learns a fine-grained classification model cohesively under a unified framework by amalgamating the labeled and unlabeled class categories within an extended and precisely doubled label space. The framework consists of three technical components designed to tackle the HSSL challenges: a weighted moving average pseudo-labeling component, a cross-domain prototype alignment component, and a progressive inter-domain mixup component. The pseudo-labeling component leverages a weighted moving average strategy to assign and update pseudo-labels for the unlabeled data. In this manner, it generates a smooth and adaptive assignment of pseudo-labels, reducing the potential pitfalls of oscillating updates or noisy label assignments, which is crucial given the significant domain gap between labeled data and unlabeled data. The cross-domain prototype alignment ensures that the inherent semantic structures of similar classes across the labeled and unlabeled domains are aligned. This alignment of class-centric prototypes between domains leverages inter-domain semantic class relationships, enabling knowledge transfer from the labeled domain to the unlabeled domain. The progressive inter-domain mixup component generates new synthetic instances by interpolating between labeled and unlabeled samples and bridges the gap between the two domains. By adopting a progressive augmentation schedule, it gradually adapts the model to the distribution of the unlabeled domain, facilitating a steady and reliable knowledge transfer. Comprehensive experiments are conducted on several benchmark datasets. The empirical results demonstrate the efficacy and superior performance of our proposed unified framework compared to multiple state-of-the-art SSL and unsupervised domain adaptation baselines for HSSL.

# 2. Related Works

# 2.1. Semi-Supervised Learning
Conventional Semi-Supervised Learning (SSL) In conventional SSL, the labeled and unlabeled segments of the dataset encompass identical classes, sharing consistent class and feature distributions. SSL methods are primarily classified into three categories: regularization-based techniques, teacher-student models, and pseudo-labeling strategies. Regularization-based techniques like the Π-model [16] modify the loss function with additional terms for model refinement. Teacher-student models like MT [33] and ICT [37] involve training a student network to mimic a teacher model using unlabeled data. Pseudo-labeling strategies like Pseudo-Label [19], FixMatch [31], FlexMatch [39], and SimMatch [41] expand labeled datasets using unlabeled data with pseudo-labels in various ways.

Open-Set Semi-Supervised Learning (OS-SSL) OS-SSL deals with unknown or additional classes present in the unlabeled data but absent in the labeled set. OS-SSL assumes the same feature distribution over labeled and unlabeled sets. This is different from HSSL, which operates under the assumption that labeled and unlabeled data come from separate domains with different feature distributions. The concept of OS-SSL, introduced in [25], focuses on class distribution mismatches in open-set scenarios. Methods for OS-SSL like UASD [6] use self-distillation to exclude outliers from unlabeled data. DS3L [12] and MTCF [38] employ diverse weighting strategies for subset mismatches, minimizing the impact of private data in unlabeled sets. OpenMatch [3] utilizes one-vs-all classifiers for outlier detection but faces difficulties with unseen categories. While OS-SSL has advanced SSL towards practical use, it lacks the capacity to handle feature distribution mismatches between labeled and unlabeled data.

Universal Semi-Supervised Learning (USSL) Universal SSL [13] involves both shared and unique classes across the labeled and unlabeled sets, with the test set matching the labeled set's class distribution. HSSL, however, assumes shared classes across the labeled and unlabeled domains and tests on samples from both domains without their domain identities, adding complexity.

Similar to our work, Bidirectional Adaptation [14] addresses the disparity between limited labeled and abundant unlabeled data, but it tests only within the labeled domain's feature distribution. It uses UDA techniques for pseudo-labeling, avoiding the complexities and benefits of cross-domain modeling. In contrast, HSSL aims for effective generalization across both domains, posing a more intricate challenge in model adaptation and generalization.

# 2.2. Unsupervised Domain Adaptation

Unsupervised domain adaptation aims at learning a target model given labeled data from a source domain and unlabeled data from a target domain. Typical deep UDA approaches can be categorized into three types: alignment-based, regularization-based, and self-training-based methods. Alignment-based methods aim to reduce the cross-domain feature discrepancy with adversarial alignment [11, 22] and distance-based methods [4, 21, 27, 29]. Regularization-based methods utilize regularization terms to leverage knowledge from the unlabeled target data. Typical regularization terms include entropy minimization [30], virtual adversarial training [30], batch spectral penalization [5], batch nuclear-norm maximization [9], and mutual information maximization [17].
Self-training-based methods explore effective pseudo-labeling for unlabeled target data fitting, including confidence thresholding [2, 42] and cycle self-training [20].

# 3. Method

# 3.1. Problem Setup

We consider the following Heterogeneous Semi-Supervised Learning (HSSL) setup. The training data consist of a set of labeled instances $\mathcal{D}_L = \{(\mathbf{x}_i^l, \mathbf{y}_i^l)\}_{i=1}^{N_l}$, where each instance $\mathbf{x}_i^l$ is annotated with a one-hot label indicator vector $\mathbf{y}_i^l$ with length $C$, and a set of unlabeled instances $\mathcal{D}_U = \{\mathbf{x}_i^u\}_{i=1}^{N_u}$. The labeled data and unlabeled data are from two different domains that have dissimilar label distributions such that $p_L(\mathbf{y}) \neq p_U(\mathbf{y})$ and heterogeneous feature distributions such that $p_L(\mathbf{x}|\mathbf{y}) \neq p_U(\mathbf{x}|\mathbf{y})$, but share the same set of $C$ semantic classes. The goal is to train a prediction model using both the labeled set $\mathcal{D}_L$ and unlabeled set $\mathcal{D}_U$ so that the trained model would generalize well on a held-out test set that is indistinguishably sampled from both the labeled and unlabeled domains.

# 3.2. Proposed Method

In this section, we present the proposed Uni-HSSL method, which tackles the $C$-class HSSL problem by combining the labeled and unlabeled class categories into a doubled label space and learning a fine-grained $2C$-class classification model under a unified framework, aiming to adaptively handle the heterogeneous distributions across domains and gain better generalization over test instances randomly sampled from both the labeled and unlabeled domains. The core idea centers on simultaneously facilitating effective knowledge transfer from the labeled domain to the unlabeled domain while harnessing the information within the unlabeled data.

We start by pre-training a feature encoder and a $C$-class semantic classifier on the labeled dataset, which can be used to produce the initial pseudo-labels of the unlabeled training data and provide partial initialization for our Uni-HSSL model. Then the $2C$-class Uni-HSSL model, which consists of a feature encoder $f$ and a $2C$-class classifier $h$, will be learned within the proposed unified semi-supervised framework shown in Figure 1. The framework introduces three technical components to facilitate heterogeneous SSL. The weighted-moving-average (WMA) based pseudo-labeling component is deployed to support the effective exploitation of the unlabeled data, while the cross-domain prototype alignment component and progressive inter-domain mixup component are designed to promote information sharing and efficient and steady knowledge transfer from the labeled domain to the unlabeled domain. Further elaboration will be provided in the following sections.

# 3.2.1. Supervised Pre-training

The initial challenge in training a $2C$-class classification model with the given heterogeneous data is the complete absence of labeled instances in the unlabeled domain. To tackle this problem, we exploit the assumption that the labeled and unlabeled domains share the same set of $C$ semantic class categories, and pre-train a $C$-class classification model in the labeled domain to provide initial pseudo-labels for the training instances in the unlabeled domain.
Specifically, we pre-train a $C$-class model, which consists of a feature encoder $f$ and a $C$-class probabilistic classifier $g$, on the labeled data $\mathcal{D}_L$ by minimizing the following supervised cross-entropy loss:

$$
\mathcal{L}_{ce}^{L} = \mathbb{E}_{(\mathbf{x}_i^l, \mathbf{y}_i^l) \in \mathcal{D}_L} \left[ \ell_{ce}\left(\mathbf{y}_i^l, g(f(\mathbf{x}_i^l))\right) \right] \tag{1}
$$

where $\ell_{ce}$ denotes the cross-entropy function. Then we deploy the pre-trained classification model to make predictions on the unlabeled training instances in $\mathcal{D}_U$ to generate their initial pseudo-labels:

$$
\bar{\mathbf{y}}_i^0 = g(f(\mathbf{x}_i^u)), \quad \forall \mathbf{x}_i^u \in \mathcal{D}_U \tag{2}
$$

where $\bar{\mathbf{y}}_i^0$ denotes the predicted class probability vector with length $C$ for the unlabeled instance $\mathbf{x}_i^u$. To provide initial labels on the unlabeled data for training the $2C$-class model, we further expand each $\bar{\mathbf{y}}_i^0$ by concatenating it with a zero vector with length $C$, $\mathbf{0}_C$:

$$
\hat{\mathbf{y}}_i^0 = \operatorname{concat}(\mathbf{0}_C, \bar{\mathbf{y}}_i^0) \tag{3}
$$

This results in the first set of $C$ classes out of the $2C$ classes corresponding to the classes in the labeled domain, with the remaining set of $C$ classes corresponding to the classes in the unlabeled domain. Moreover, the parameters of the pre-trained $C$-class model $(g \circ f)$ can also be utilized to initialize the feature encoder $f$ and part of the classifier $h$ corresponding to the first $C$ classes in the $2C$-class model, while the other part of $h$ will be randomly initialized.

![](images/b2dc7b8e254ecc5e7a924b0093c898cef1b0802eae32003cf24e947489c5e0c3.jpg)
Figure 1. An overview of the proposed Uni-HSSL training framework. The classification model consists of a feature encoder $f$ and a $2C$-class classifier $h$. After initialization with pre-training, the model is trained by jointly minimizing the combination of a supervised loss $\mathcal{L}_{cl}^{L}$ on the labeled data, a WMA pseudo-labeling loss $\mathcal{L}_{pl}^{U}$ on the unlabeled data, a cross-domain prototype alignment loss $\mathcal{L}_{pa}$, and a prediction loss $\mathcal{L}_{\mathrm{Mixup}}$ on the augmentation data produced via progressive inter-domain mixup.

# 3.2.2. Semi-Supervised Training with Adaptive Pseudo-Labeling

After initialization, the proposed $2C$-class classification model (feature encoder $f$ and probabilistic classifier $h$) will be trained by leveraging both the labeled set $\mathcal{D}_L$ and the unlabeled set $\mathcal{D}_U$ within a pseudo-labeling based SSL framework.
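Before detailing the individual training losses, here is a minimal PyTorch-style sketch of the pseudo-label initialization in Eqs. (2)-(3); the handles `f`, `g`, and `unlabeled_loader` are illustrative assumptions rather than the authors' code, and `g` is assumed to output logits:

```python
import torch

@torch.no_grad()
def init_pseudo_labels(f, g, unlabeled_loader):
    """Build the initial 2C-dimensional pseudo-labels of Eqs. (2)-(3)."""
    rows = []
    for x_u in unlabeled_loader:
        probs = torch.softmax(g(f(x_u)), dim=1)        # \bar{y}^0, Eq. (2)
        zeros = torch.zeros_like(probs)                # 0_C block: labeled-domain slots
        rows.append(torch.cat([zeros, probs], dim=1))  # concat(0_C, \bar{y}^0), Eq. (3)
    return torch.cat(rows, dim=0)                      # one length-2C row per instance
```

Keeping these pseudo-labels in an $N_u \times 2C$ buffer also makes the weighted moving average update described next a one-line operation.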
On the labeled set $\mathcal{D}_L$, the following standard supervised cross-entropy loss will be used as the training objective:

$$
\mathcal{L}_{cl}^{L} = \mathbb{E}_{(\mathbf{x}_i^l, \mathbf{y}_i^l) \in \mathcal{D}_L} \left[ \ell_{ce}\left(h(f(\mathbf{x}_i^l)), \operatorname{concat}(\mathbf{y}_i^l, \mathbf{0}_C)\right) \right] \tag{4}
$$

where the concatenated label vector, $\operatorname{concat}(\mathbf{y}_i^l, \mathbf{0}_C)$, expands the ground-truth label vector $\mathbf{y}_i^l$ into the $2C$-class label space by appending a zero vector with length $C$ to it.

Although we have obtained initial pseudo-labels for the unlabeled set $\mathcal{D}_U$ by utilizing the pre-trained $C$-class classifier, those initial labels are unavoidably noisy due to the domain gap between the labeled and unlabeled domains. In order to effectively leverage the unlabeled data, we update the pseudo-label for each unlabeled instance $\mathbf{x}_i^u$ during each training iteration in a weighted moving average (WMA) fashion as follows:

$$
\hat{\mathbf{y}}_i^t = \beta \hat{\mathbf{y}}_i^{t-1} + (1 - \beta)\, h(f(\mathbf{x}_i^u)) \tag{5}
$$

where $\beta \in (0,1)$ is a hyper-parameter that controls the rate of update, and $\hat{\mathbf{y}}_i^t$ is the updated pseudo-label for $\mathbf{x}_i^u$ at the $t$-th training iteration. This weighted moving average update strategy can yield a smooth and adaptive assignment of pseudo-labels by promptly incorporating the progress in the classification model and mitigating the risk of oscillatory updates. Moreover, to further mitigate the adverse impact of noisy pseudo-labels, we deploy the following cross-entropy loss on the unlabeled set during training, selectively utilizing only instances with more reliable pseudo-labels:

$$
\mathcal{L}_{pl}^{U} = \mathbb{E}_{\mathbf{x}_i^u \in \mathcal{D}_U} \left[ \mathbb{1}(\max(\hat{\mathbf{y}}_i^t) > \epsilon)\, \ell_{ce}(h(f(\mathbf{x}_i^u)), \hat{\mathbf{y}}_i^t) \right] \tag{6}
$$

where $\mathbb{1}(\cdot)$ denotes an indicator function; $\epsilon \in (0,1)$ is a predefined confidence threshold to ensure that only unlabeled instances with the maximum prediction probabilities larger than $\epsilon$ are used for the current training iteration.

By treating semantic classes in distinct domains as separate categories, the $2C$-class classification model serves as a strategic choice to differentiate samples across domains. This approach avoids the additional complexity associated with a dedicated domain classifier and naturally handles the divergence in class-feature distributions across domains. It also simplifies the process and has the potential to enhance domain generalization through a shared feature encoder.
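Continuing the sketch above, Eqs. (5)-(6) amount to a buffer update plus a confidence-masked soft cross-entropy; the buffer `pseudo`, the batch indices `idx`, and the defaults $\beta = 0.8$ and $\epsilon = 0.5$ (the values reported in Section 4.1) are the only assumptions:

```python
import torch
import torch.nn.functional as F

def wma_update(pseudo, idx, probs, beta=0.8):
    """Eq. (5): smooth the stored pseudo-labels toward the current predictions."""
    pseudo[idx] = beta * pseudo[idx] + (1.0 - beta) * probs
    return pseudo[idx]

def pseudo_label_loss(logits, target, eps=0.5):
    """Eq. (6): soft-target cross-entropy on confident unlabeled instances only."""
    mask = (target.max(dim=1).values > eps).float()           # 1(max(y_hat) > eps)
    ce = -(target * F.log_softmax(logits, dim=1)).sum(dim=1)  # soft cross-entropy
    return (mask * ce).mean()
```

Here `probs` stands for the current prediction $h(f(\mathbf{x}_i^u))$ on the batch, and averaging the masked loss over the batch mirrors the expectation in Eq. (6).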
# 3.2.3. Cross-Domain Semantic Prototype Alignment

Given that the labeled domain and unlabeled domain are comprised of the same set of $C$ semantic classes, there is a one-to-one correspondence relationship between each cross-domain class pair for the same semantic concept. In order to facilitate knowledge sharing and transfer across domains, we propose to align each semantic class from the labeled domain with its corresponding semantic class in the unlabeled domain within the learned feature embedding space. To this end, we represent each class using a class-prototype vector and design a cross-domain semantic class-prototype alignment component to enforce that the corresponding semantic class pairs across the domains are more similar in the feature embedding space than non-corresponding class pairs.

Specifically, we compute the prototype vector for the $k$-th class in the labeled set as the average feature embedding of the labeled instances belonging to the class:

$$
\mathbf{p}_k = \mathbb{E}_{(\mathbf{x}_i^l, \mathbf{y}_i^l) \in \mathcal{D}_L} \left[ \mathbb{1}\left(\arg\max_j \mathbf{y}_{ij}^l = k\right) f(\mathbf{x}_i^l) \right] \tag{7}
$$

where $\mathbf{y}_{ij}^{l}$ denotes the $j$-th entry of the label vector $\mathbf{y}_i^l$. The corresponding $k$-th semantic class in the unlabeled set is the $(C + k)$-th class in the $2C$-class label space. We compute the class prototype vectors in the unlabeled set based on the instances with reliable pseudo-labels, such that:

$$
\mathbf{p}_{C+k} = \mathbb{E}_{\mathbf{x}_i^u \in \mathcal{D}_U} \left[ \mathbb{1}\left(\max(\hat{\mathbf{y}}_i^t) > \epsilon \,\wedge\, \arg\max_j \hat{\mathbf{y}}_{ij}^t = C + k\right) f(\mathbf{x}_i^u) \right] \tag{8}
$$

Then for each semantic class $k \in \{1, \dots, C\}$, we align the prototypes of the corresponding class pairs from the labeled and unlabeled domains, $(\mathbf{p}_k, \mathbf{p}_{C+k})$, by employing a cross-domain contrastive prototype alignment loss as follows:

$$
\mathcal{L}_{pa} = -\sum_{k=1}^{C} \left[ \log \frac{\exp\left(\cos(\mathbf{p}_k, \mathbf{p}_{C+k}) / \tau\right)}{\sum_{k'=1}^{C} \mathbb{1}(k' \neq k) \exp\left(\cos(\mathbf{p}_k, \mathbf{p}_{C+k'}) / \tau\right)} + \log \frac{\exp\left(\cos(\mathbf{p}_k, \mathbf{p}_{C+k}) / \tau\right)}{\sum_{k'=1}^{C} \mathbb{1}(k' \neq k) \exp\left(\cos(\mathbf{p}_{k'}, \mathbf{p}_{C+k}) / \tau\right)} \right] \tag{9}
$$

where $\tau$ is a temperature hyper-parameter, and $\cos(\cdot, \cdot)$ denotes the cosine similarity function. This contrastive loss promotes the sharing of predictive information between the labeled and unlabeled domains by encouraging the corresponding class prototype pairs to be closer to each other while simultaneously pushing the non-corresponding cross-domain class prototype pairs farther apart.
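The following sketch shows one way Eqs. (7)-(9) could be realized over a mini-batch in PyTorch; the batch-level prototype estimates, the handling of empty classes, the default $\tau = \epsilon = 0.5$ (from Section 4.1), and all tensor names are illustrative assumptions rather than the authors' implementation:

```python
import torch
import torch.nn.functional as F

def prototype_alignment_loss(feat_l, y_l, feat_u, pseudo_u, C, tau=0.5, eps=0.5):
    """Batch-level version of Eqs. (7)-(9); classes absent from the batch keep
    zero prototypes here, which a fuller implementation would skip or buffer."""
    protos_l = torch.zeros(C, feat_l.size(1), device=feat_l.device)
    protos_u = torch.zeros(C, feat_u.size(1), device=feat_u.device)
    lbl = y_l.argmax(dim=1)                            # ground-truth class ids
    conf = pseudo_u.max(dim=1).values > eps            # reliable pseudo-labels only
    plbl = pseudo_u.argmax(dim=1) - C                  # class C+k -> k; negative if the
                                                       # argmax falls in the labeled half
    for k in range(C):
        if (lbl == k).any():
            protos_l[k] = feat_l[lbl == k].mean(dim=0)   # Eq. (7)
        sel = conf & (plbl == k)
        if sel.any():
            protos_u[k] = feat_u[sel].mean(dim=0)        # Eq. (8)
    # (C, C) matrix of cos(p_k, p_{C+k'}) / tau
    sim = F.cosine_similarity(protos_l.unsqueeze(1), protos_u.unsqueeze(0), dim=2) / tau
    pos = sim.diag()                                     # positive pairs cos(p_k, p_{C+k})
    mask = torch.eye(C, dtype=torch.bool, device=sim.device)
    neg = sim.masked_fill(mask, float('-inf'))           # exclude k' = k, as in Eq. (9)
    # the two log-ratio terms of Eq. (9): row-wise and column-wise negatives
    loss = (pos - torch.logsumexp(neg, dim=1)) + (pos - torch.logsumexp(neg, dim=0))
    return -loss.sum()
```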
# 3.2.4. Progressive Inter-Domain Mixup

In order to bridge the gap between the labeled domain and the unlabeled domain, we propose a progressive inter-domain mixup mechanism to augment the training set by dynamically generating synthetic instances between the labeled set and unlabeled set, with the objective of facilitating steady and efficient knowledge transfer from the labeled domain to the unlabeled domain. Specifically, we generate an inter-domain synthetic instance $(\mathbf{x}^m, \mathbf{y}^m)$ by mixing a labeled instance $(\mathbf{x}^l, \mathbf{y}^l)$ from the labeled set $\mathcal{D}_L$ with a pseudo-labeled instance $(\mathbf{x}^u, \hat{\mathbf{y}}^t)$ from the unlabeled set $\mathcal{D}_U$ through linear interpolation:

$$
\mathbf{x}^m = \lambda \mathbf{x}^u + (1 - \lambda) \mathbf{x}^l, \tag{10}
$$

$$
\mathbf{y}^m = \lambda \hat{\mathbf{y}}^t + (1 - \lambda) \operatorname{concat}(\mathbf{y}^l, \mathbf{0}_C),
$$

where $\lambda \in [0,1]$ is the mixing coefficient. To fully utilize the available data in both domains, we can generate $N^m = \max(N_l, N_u)$ synthetic instances to form a synthetic set $\mathcal{D}_{\mathrm{Mixup}}$ by mixing each instance in the larger domain with a randomly selected instance in the other domain.

In the standard mixup [40], the mixing coefficient $\lambda$ is sampled from a fixed $\mathrm{Beta}(\alpha, \alpha)$ distribution with hyperparameter $\alpha$. To facilitate a steady and smooth adaptation from the labeled domain to the unlabeled domain for HSSL, we propose to dynamically generate the mixup data in each training iteration $t$ by deploying a progressive mixup strategy that samples $\lambda$ from a shifted $\mathrm{Beta}(\alpha, \alpha)$ distribution based on a schedule function $\psi(t)$, such that:

$$
\lambda \sim \psi(t) \times \mathrm{Beta}(\alpha, \alpha), \quad \psi(t) = 0.5 + \frac{t}{2T} \tag{11}
$$

where $T$ denotes the total number of training iterations. Following this schedule, at the beginning of the training process, we have $\psi(0) \approx 0.5$ and $\lambda$ is sampled from the approximate interval $[0, 0.5)$ as the model prioritizes the labeled domain, guarding against noisy pseudo-label predictions from unlabeled data. As the training progresses, the model gradually increases its reliance on the unlabeled data, and the interval $[0, \psi(t)]$ from which $\lambda$ is sampled is expanded gradually towards $[0, 1]$ (with $\psi(T) = 1$), allowing it to adapt seamlessly between domains.

Following previous works on using mixup data [1], we employ the mixup set $\mathcal{D}_{\mathrm{Mixup}}$ for model training by minimizing the following mean squared error:

$$
\mathcal{L}_{\mathrm{Mixup}} = \mathbb{E}_{(\mathbf{x}_i^m, \mathbf{y}_i^m) \in \mathcal{D}_{\mathrm{Mixup}}} \left[ \left\| h(f(\mathbf{x}_i^m)) - \mathbf{y}_i^m \right\|^2 \right] \tag{12}
$$

# 3.2.5. Training Objective

By integrating the classification loss terms on the labeled set, the unlabeled set, and the mixup set, with the class prototype alignment loss, we obtain the following joint training objective for the Uni-HSSL model:

$$
\mathcal{L}_{\mathrm{total}} = \mathcal{L}_{cl}^{L} + \lambda_{pl} \mathcal{L}_{pl}^{U} + \lambda_{pa} \mathcal{L}_{pa} + \lambda_{\mathrm{Mixup}} \mathcal{L}_{\mathrm{Mixup}} \tag{13}
$$

where $\lambda_{pl}$, $\lambda_{pa}$ and $\lambda_{\mathrm{Mixup}}$ are trade-off hyper-parameters.
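A compact PyTorch-style sketch of the progressive mixup step (Eqs. (10)-(12)) is given below; the per-sample $\lambda$, the choice $\alpha = 0.75$, and all tensor names are illustrative assumptions (the paper does not pin down $\alpha$ here), and the Eq. (13) combination is shown as a comment with the trade-off weights reported in Section 4.1:

```python
import torch
from torch.distributions import Beta

def progressive_mixup(x_l, y_l_2c, x_u, y_u_2c, t, T, alpha=0.75):
    """Eqs. (10)-(11): mix labeled/unlabeled batches with a shifted Beta coefficient.

    y_l_2c is concat(y^l, 0_C); y_u_2c is the current WMA pseudo-label (length 2C).
    """
    psi = 0.5 + t / (2.0 * T)                       # schedule psi(t), Eq. (11)
    lam = psi * Beta(alpha, alpha).sample((x_l.size(0),)).to(x_l.device)
    lam_x = lam.view(-1, 1, 1, 1)                   # broadcast over image dimensions
    x_m = lam_x * x_u + (1.0 - lam_x) * x_l         # Eq. (10), inputs
    y_m = lam.view(-1, 1) * y_u_2c + (1.0 - lam.view(-1, 1)) * y_l_2c  # Eq. (10), targets
    return x_m, y_m

def mixup_loss(probs_m, y_m):
    """Eq. (12): mean squared error between predictions h(f(x^m)) and mixed targets."""
    return ((probs_m - y_m) ** 2).sum(dim=1).mean()

# Eq. (13) with the weights of Section 4.1:
# loss = L_cl + 1.0 * L_pl + 1e-2 * L_pa + 1.0 * L_mixup
```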
| Task | Supervised | FlexMatch | FixMatch | SimMatch | CDAN+Sup | MCC+Sup | BiAdapt | Uni-HSSL |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| A/C | 53.1 (0.7) | 51.1 (1.2) | 51.9 (1.5) | 57.8 (1.6) | 47.0 (0.5) | 54.9 (1.2) | 55.1 (1.8) | 60.1 (0.9) |
| C/A | 66.0 (1.2) | 68.1 (1.3) | 63.8 (0.7) | 69.7 (0.9) | 63.9 (0.7) | 70.5 (0.3) | 65.1 (1.2) | 72.0 (0.7) |
| C/R | 77.5 (0.9) | 72.1 (0.9) | 79.5 (0.5) | 78.5 (0.5) | 67.1 (0.8) | 75.4 (0.5) | 75.2 (1.2) | 80.5 (0.4) |
| R/C | 63.9 (1.2) | 67.8 (1.6) | 66.2 (0.7) | 64.3 (0.8) | 67.0 (1.2) | 69.3 (0.5) | 61.2 (1.8) | 72.8 (0.6) |
| R/A | 72.6 (0.9) | 59.0 (1.2) | 74.1 (0.5) | 70.5 (0.5) | 74.6 (0.9) | 75.1 (0.8) | 69.1 (0.9) | 75.8 (0.6) |
| A/R | 75.1 (0.7) | 73.5 (0.9) | 70.4 (0.6) | 75.8 (0.5) | 66.5 (0.8) | 77.3 (0.8) | 72.1 (1.4) | 78.3 (0.5) |
| A/P | 67.4 (1.5) | 64.0 (0.9) | 62.7 (0.6) | 68.9 (0.6) | 56.5 (0.5) | 71.8 (0.4) | 64.9 (1.3) | 70.9 (0.8) |
| P/A | 69.1 (1.0) | 64.1 (1.2) | 62.8 (0.8) | 69.7 (0.9) | 74.9 (1.2) | 76.1 (0.2) | 64.1 (0.8) | 78.7 (0.4) |
| C/P | 69.1 (0.9) | 65.6 (1.0) | 65.1 (1.1) | 70.0 (0.4) | 65.5 (1.2) | 71.2 (0.5) | 69.1 (1.4) | 72.8 (0.7) |
| P/C | 64.6 (0.9) | 64.3 (1.1) | 65.2 (1.5) | 68.5 (0.8) | 66.8 (0.6) | 68.0 (0.5) | 67.7 (0.9) | 69.9 (0.9) |
| P/R | 80.0 (0.5) | 73.3 (0.7) | 78.1 (0.4) | 78.1 (0.2) | 89.5 (0.4) | 82.1 (0.6) | 76.2 (1.2) | 82.9 (0.4) |
| R/P | 77.9 (0.1) | 68.1 (1.2) | 74.7 (0.3) | 74.0 (0.7) | 78.2 (1.3) | 77.0 (1.2) | 74.1 (1.4) | 82.1 (0.5) |
| Avg. | 69.7 | 65.9 | 67.9 | 70.5 | 67.4 | 72.3 | 67.8 | 74.7 |
Table 1. Mean classification accuracy (standard deviation is within parentheses) on the Office-Home dataset using the ResNet-50 backbone. The first domain in each row indicates the labeled domain while the second domain indicates the unlabeled domain.

# 4. Experiments

# 4.1. Experimental Setup

**Datasets** We conducted comprehensive experiments to evaluate the performance of our proposed framework on four image classification benchmark datasets: Office-31, Office-Home, VisDA, and ISIC-2019. In all four datasets, we split the samples of each domain into 90/10 train/test data. Office-31 [28] comprises a collection of 4,652 images spanning 31 different categories. The images are sourced from 3 distinct domains: Amazon (A), DSLR (D), and Webcam (W) with different image resolutions, quality, and lighting conditions. Office-Home [36] is a large collection of over 15,500 images spanning 65 categories. The images are sourced from 4 diverse domains: Artistic images (A), Clip Art (C), Product images (P), and Real-World images (R). VisDA-2017 [26] is a large-scale dataset tailored specifically for the visual domain adaptation task. This dataset includes images of 12 distinct categories from two domains, Synthetic (S) and Real (R). With the significant domain shift between the synthetic and real images, VisDA highlights the difficulties associated with bridging significant domain gaps. ISIC-2019 is a comprehensive repository of skin cancer research images sourced from 4 different sources: BCN-20000 (BCN) [8], Skin Cancer MNIST (HAM) [34], MSK4 [7], and an undefined source. We only utilize the BCN and HAM sources as they include samples from all eight distinct classes.

**Implementation Details** For all baselines we compared our Uni-HSSL against, we strictly followed the implementation details and hyper-parameters specified in the corresponding original papers. In order to ensure consistent comparisons with a multitude of earlier studies across various benchmark datasets, we employed two common backbone networks: ResNet-50 and ResNet-101, which are pre-trained on the ImageNet [10] dataset. We utilized ResNet-101 for the VisDA dataset experiments and ResNet-50 for all the other benchmark datasets. The supervised pre-training stage is made up of 10 epochs while the semi-supervised training stage is made up of 100 epochs. In both stages, we employed an SGD optimizer with a learning rate of $5e^{-4}$ and Nesterov momentum [24] of 0.9. In the semi-supervised training stage, the learning rate is adjusted using a cosine annealing strategy [23, 37]. We set the L2 regularization coefficient to $1e^{-3}$ and the batch size to 32 for all datasets. The trade-off hyper-parameters $\lambda_{pl}$, $\lambda_{pa}$, $\lambda_{\mathrm{Mixup}}$ take the values 1, $1e^{-2}$ and 1 respectively, while $\tau$ and $\epsilon$ both take the value 0.5 and $\beta$ is set to 0.8. Furthermore, similar to [1], we apply random translations and horizontal flips to the input images prior to applying the Progressive Inter-Domain Mixup. We report the mean classification accuracy and the corresponding standard deviation over 3 runs in each experiment.
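For reference, the optimization setup just described maps onto a few lines of PyTorch; this is a hedged sketch of the reported configuration, where `model` and `num_epochs` are placeholders for the Uni-HSSL network and the 100-epoch semi-supervised stage:

```python
import torch

def build_optimizer(model, num_epochs=100):
    """Sketch of the reported setup: SGD with Nesterov momentum and cosine annealing."""
    optimizer = torch.optim.SGD(
        model.parameters(),
        lr=5e-4,             # learning rate
        momentum=0.9,        # Nesterov momentum [24]
        nesterov=True,
        weight_decay=1e-3,   # L2 regularization coefficient
    )
    # cosine annealing over the semi-supervised training stage [23]
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=num_epochs)
    return optimizer, scheduler
```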
# 4.2. Comparison Results

We evaluate the proposed Uni-HSSL framework on the heterogeneous semi-supervised learning tasks and compare it to four categories of baselines: Supervised Learning baselines, Semi-Supervised Learning (SSL) baselines, Unsupervised Domain Adaptation (UDA) baselines, and Bidirectional Adaptation baselines. The supervised baseline is exclusively trained on the labeled data and does not leverage the unlabeled data during training. We employ a set of representative SSL baselines (FlexMatch [39], FixMatch [31], and SimMatch [41]) and a set of representative UDA baselines (CDAN [22] and MCC [15]). In particular, we also compare our work with the state-of-the-art bidirectional adaptation method (BiAdapt) [14]. As the traditional UDA methods are trained to perform well solely on an unlabeled target domain, to ensure a fair comparison, we equip the UDA methods with a Supervised classifier (Sup) trained on the labeled set and a domain classifier, and refer to them as MCC+Sup and CDAN+Sup. At inference time, the domain classifier assigns each test sample to the appropriate classifier in the corresponding domain: either the supervised classifier for samples predicted to originate from the labeled domain or the UDA classifier for those predicted to come from the unlabeled domain.
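One plausible reading of this routing protocol for the augmented UDA baselines is sketched below; all three classifier handles are illustrative stand-ins rather than the baselines' actual code:

```python
import torch

@torch.no_grad()
def route_prediction(x, domain_clf, sup_clf, uda_clf):
    """Inference routing for the CDAN+Sup / MCC+Sup baselines (a sketch).

    The domain classifier decides whether a test sample looks like it came from
    the labeled domain (route to the supervised classifier) or the unlabeled
    domain (route to the UDA classifier).
    """
    from_labeled = domain_clf(x).sigmoid() > 0.5   # binary domain decision per sample
    sup_pred = sup_clf(x).argmax(dim=1)            # labeled-domain classifier
    uda_pred = uda_clf(x).argmax(dim=1)            # unlabeled-domain (UDA) classifier
    return torch.where(from_labeled.squeeze(1), sup_pred, uda_pred)
```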
| Class | Supervised | FlexMatch | FixMatch | SimMatch | CDAN+Sup | MCC+Sup | BiAdapt | Uni-HSSL |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Plane | 93.8 (0.2) | 98.3 (0.7) | 94.9 (0.5) | 93.6 (0.8) | 98.4 (0.3) | 98.6 (0.3) | 90.1 (1.4) | 98.2 (0.5) |
| Bicycle | 74.1 (0.5) | 74.8 (0.9) | 53.5 (0.2) | 81.1 (0.8) | 94.4 (0.7) | 96.6 (0.5) | 79.1 (1.2) | 97.5 (0.9) |
| Bus | 79.4 (0.7) | 53.9 (1.2) | 79.5 (0.8) | 56.9 (1.2) | 90.1 (0.5) | 88.6 (0.7) | 54.7 (1.3) | 91.4 (0.8) |
| Car | 86.2 (0.9) | 36.4 (2.1) | 88.5 (0.3) | 59.6 (1.5) | 85.1 (0.5) | 84.8 (0.9) | 56.1 (1.2) | 89.0 (0.9) |
| Horse | 90.9 (0.2) | 97.4 (0.5) | 76.0 (0.8) | 65.6 (1.0) | 96.6 (0.1) | 97.6 (0.3) | 62.1 (1.4) | 98.2 (0.3) |
| Knife | 87.5 (0.7) | 77.2 (0.8) | 78.8 (0.9) | 71.9 (0.5) | 95.0 (1.4) | 95.1 (0.9) | 68.2 (0.1) | 98.9 (0.4) |
| Motor. | 94.5 (0.4) | 66.6 (1.2) | 40.8 (1.2) | 70.8 (0.9) | 96.6 (0.5) | 94.2 (0.2) | 68.1 (1.5) | 97.0 (0.6) |
| Person | 80.0 (0.7) | 80.5 (0.8) | 58.9 (1.6) | 64.1 (0.8) | 94.3 (0.6) | 94.6 (0.2) | 62.5 (1.2) | 95.6 (0.7) |
| Plant | 91.1 (0.7) | 91.8 (0.8) | 62.7 (0.7) | 65.5 (0.9) | 96.5 (0.4) | 97.3 (0.5) | 63.5 (1.9) | 95.7 (0.2) |
| Skateboard | 81.8 (0.9) | 90.0 (0.5) | 68.9 (1.2) | 57.0 (1.7) | 85.5 (0.5) | 83.0 (0.8) | 59.3 (1.7) | 91.5 (0.8) |
| Train | 96.0 (0.3) | 96.8 (0.7) | 94.2 (0.4) | 74.2 (0.9) | 95.7 (0.7) | 95.6 (0.1) | 71.3 (1.5) | 97.0 (0.3) |
| Truck | 59.8 (0.9) | 49.2 (1.2) | 49.5 (1.2) | 52.1 (1.7) | 79.8 (0.2) | 80.6 (1.0) | 50.1 (1.5) | 82.4 (0.7) |
| Avg. | 84.1 | 82.4 | 87.3 | 80.8 | 92.1 | 92.0 | 79.1 | 93.1 |
+ +Table 2. Mean classification accuracy (standard deviation is within parentheses) on the VisDA dataset using the ResNet-101 backbone. The rows correspond to the different classes of the dataset. + +
| Method | B/H | H/B | Avg. |
| --- | --- | --- | --- |
| Supervised | 70.5 (0.9) | 65.4 (1.2) | 67.9 |
| FlexMatch | 71.3 (1.4) | 68.7 (0.8) | 70.0 |
| FixMatch | 77.5 (0.8) | 65.0 (0.7) | 71.3 |
| SimMatch | 75.1 (1.5) | 69.2 (1.7) | 72.2 |
| CDAN+Sup | 72.9 (1.0) | 65.2 (0.4) | 69.1 |
| MCC+Sup | 60.2 (1.8) | 56.7 (1.7) | 58.7 |
| BiAdapt | 74.2 (0.7) | 68.3 (1.3) | 71.2 |
| Uni-HSSL | 79.9 (0.7) | 71.0 (0.9) | 75.4 |
Table 3. Mean classification accuracy (standard deviation is within parentheses) on ISIC-2019 using the ResNet-50 backbone. The first domain in each row indicates the labeled domain while the second domain indicates the unlabeled domain.

The comparison results on the Office-Home, VisDA, ISIC-2019 and Office-31 datasets are reported in Tables 1, 2, 3 and 4, respectively, where the first domain indicates the labeled domain and the second domain indicates the unlabeled domain. In the case of the VisDA dataset, the labeled dataset is sampled from the synthetic domain (S) and the unlabeled dataset is sampled from the real domain (R), and we report the average classification accuracy for each class and the overall average classification accuracy. The tables show that Uni-HSSL consistently outperforms all baselines on all datasets across all setups. The performance gains over the supervised baseline are notable, exceeding $9\%$, $4\%$, $9\%$, and $7\%$ on average in the cases of the Office-31, Office-Home, VisDA, and ISIC-2019 datasets, respectively. In the case of the VisDA dataset, the performance improvement over the supervised baseline at the class level is substantial, exceeding $22\%$ for some classes. Furthermore, Uni-HSSL consistently outperforms all the SSL baselines, achieving performance gains exceeding $3\%$, $4\%$, $5\%$, and $3\%$ over the most effective SSL baselines on the Office-31, Office-Home, VisDA, and ISIC-2019 datasets, respectively. In some cases, such as A/W on Office-31 and P/A on Office-Home, the performance improvement over SSL baselines is notable, surpassing $6\%$ and $8\%$, respectively, highlighting the limitations of traditional SSL baselines in the proposed HSSL task. In the case of the UDA baselines, Uni-HSSL yields superior performance with all domain setups on all four datasets, with performance gains around $4\%$, $2\%$, $1\%$, and $6\%$ on the Office-31, Office-Home, VisDA and ISIC-2019 datasets, respectively. Uni-HSSL outperforms the UDA baselines on almost all classes of the VisDA dataset, with the UDA baselines slightly excelling in only two classes. However, Uni-HSSL still maintains superior overall performance compared to the UDA baselines. Furthermore, the MCC+Sup baseline does not perform well on the ISIC-2019 dataset, where it suffers a major drop in performance, which can be attributed to the MCC baseline's sensitivity to the class imbalance inherent in this dataset. Moreover, our Uni-HSSL also substantially outperforms BiAdapt, with performance gains surpassing $5\%$, $6\%$, $14\%$, and $4\%$ on the Office-31, Office-Home, VisDA and ISIC-2019 datasets, respectively. These results underscore the robustness of Uni-HSSL and highlight the limitations of BiAdapt in effectively addressing the challenges posed by the proposed HSSL task.

# 4.3. Ablation Study

In order to investigate the contribution of each component of the proposed framework, we conducted an ablation study to compare the proposed Uni-HSSL with its six variants: (1) “-w/o WMA”, which drops the Weighted Moving Average component of the pseudo-label update and simply uses the model predictions to generate pseudo-labels; (2) “-w/o $\mathcal{L}_{cl}^{L}$”, which drops the cross-entropy classification loss on the labeled set $\mathcal{D}_L$;
| Method | W/A | A/W | A/D | D/A | D/W | W/D | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Supervised | 68.6 (1.6) | 82.8 (1.2) | 85.1 (1.8) | 35.5 (0.9) | 96.9 (0.4) | 98.2 (0.5) | 77.8 |
| FlexMatch | 68.1 (1.8) | 81.3 (1.3) | 85.1 (1.8) | 63.0 (2.1) | 98.5 (0.2) | 98.9 (0.2) | 82.4 |
| FixMatch | 69.1 (1.3) | 83.4 (0.9) | 86.4 (0.8) | 53.7 (1.3) | 98.1 (0.2) | 98.2 (0.2) | 81.5 |
| SimMatch | 71.1 (0.9) | 84.1 (1.0) | 86.5 (0.5) | 68.6 (1.1) | 96.8 (0.5) | 98.8 (0.4) | 84.3 |
| CDAN+Sup | 61.2 (1.2) | 82.5 (1.3) | 87.4 (2.2) | 58.3 (2.6) | 79.2 (0.4) | 97.5 (0.4) | 77.7 |
| MCC+Sup | 71.5 (2.7) | 88.8 (0.7) | 89.1 (0.5) | 67.6 (1.3) | 81.7 (0.7) | 99.5 (0.4) | 83.0 |
| BiAdapt | 70.2 (0.9) | 85.0 (0.5) | 77.4 (0.7) | 67.1 (1.0) | 94.2 (0.5) | 98.5 (0.3) | 82.0 |
| Uni-HSSL | 73.1 (1.0) | 90.2 (0.8) | 90.0 (0.2) | 72.1 (0.7) | 100 (0.0) | 100 (0.0) | 87.5 |
Table 4. Mean classification accuracy (standard deviation is within parentheses) on the Office-31 dataset using the ResNet-50 backbone. The first domain in each column indicates the labeled domain while the second domain indicates the unlabeled domain.
| Variant | W/A | A/W | A/D | D/A | D/W | W/D | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Uni-HSSL | 73.1 (1.0) | 90.2 (0.8) | 90.0 (0.2) | 72.1 (0.7) | 100 (0.0) | 100 (0.0) | 87.5 |
| -w/o WMA | 72.8 (0.5) | 87.1 (0.8) | 88.3 (0.9) | 71.0 (0.8) | 100 (0.0) | 100 (0.0) | 86.8 |
| -w/o $\mathcal{L}_{cl}^{L}$ | 67.6 (1.7) | 85.5 (0.8) | 86.1 (1.2) | 64.8 (2.0) | 93.2 (0.5) | 92.9 (0.6) | 81.7 |
| -w/o $\mathcal{L}_{pl}^{U}$ | 72.5 (0.8) | 87.9 (0.9) | 88.1 (0.7) | 71.0 (0.9) | 98.0 (0.2) | 98.5 (0.2) | 86.1 |
| -w/o $\mathcal{L}_{pa}$ | 72.7 (0.5) | 88.9 (0.7) | 87.2 (0.6) | 71.3 (0.9) | 99.1 (0.0) | 100 (0.0) | 86.5 |
| -w/o $\mathcal{L}_{\mathrm{Mixup}}$ | 71.9 (1.2) | 86.7 (0.9) | 88.1 (0.8) | 71.3 (1.1) | 98.0 (0.4) | 99.9 (0.0) | 86.1 |
| -w/o Prog. Mixup | 71.3 (0.9) | 84.8 (0.9) | 88.1 (1.0) | 70.0 (1.3) | 99.2 (0.5) | 99.9 (0.0) | 85.6 |
Table 5. Ablation study results in terms of mean classification accuracy (standard deviation is within parentheses) on the Office-31 dataset using the ResNet-50 backbone. The first domain in each column indicates the labeled domain while the second domain indicates the unlabeled domain.

(3) “-w/o $\mathcal{L}_{pl}^{U}$”, which drops the cross-entropy pseudo-label classification loss on the unlabeled set $\mathcal{D}_U$; (4) “-w/o $\mathcal{L}_{pa}$”, which drops the Cross-Domain Prototype Alignment component; (5) “-w/o $\mathcal{L}_{\mathrm{Mixup}}$”, which drops the Progressive Inter-Domain Mixup component; and (6) “-w/o Prog. Mixup”, which drops the progressive component of the Inter-Domain Mixup and uses a simple mixup for inter-domain data augmentation. We compare the proposed Uni-HSSL with all the six variants on the Office-31 dataset and report the results in Table 5.

From the table, we can see that dropping any component from the proposed unified framework results in performance degradation in all cases. The “-w/o $\mathcal{L}_{cl}^{L}$” variant suffered the largest performance degradation, which highlights the importance of the ground-truth labels of $\mathcal{D}_L$ in guiding the learning process of the framework. Dropping the WMA from the pseudo-label generation component led to a slight average performance drop to $86.8\%$, underscoring its role in obtaining stable and confident pseudo-labels. Similarly, dropping the classification loss on the unlabeled data $\mathcal{L}_{pl}^{U}$ led to a performance degradation to $86.1\%$. Furthermore, the variant “-w/o Prog. Mixup” suffers a larger drop in performance in comparison with the variant “-w/o $\mathcal{L}_{\mathrm{Mixup}}$”, which highlights the importance of progressively generating the augmented samples to ensure the accuracy of their corresponding augmented labels. Generating inter-domain augmented samples without taking into account the domain gap between the labeled domain and the unlabeled domain can lead to a degradation in performance due to the noisy augmented labels of the generated samples. Overall, the consistent performance drops across all the tasks of Office-31 for each variant validate the essential contribution of each corresponding component of the Uni-HSSL framework.

# 5. Conclusion

In this paper, we introduced a challenging heterogeneous semi-supervised learning problem, where the labeled and unlabeled training data come from different domains and possess different label and class feature distributions. To address this demanding setup, we proposed a Unified Framework for Heterogeneous Semi-Supervised Learning (Uni-HSSL), which trains a fine-grained classification model over the concatenated label space by effectively exploiting the labeled and unlabeled data as well as their relationships. Uni-HSSL adopts a WMA pseudo-labeling strategy to obtain stable and confident pseudo-labels for the unlabeled data, while deploying a cross-domain class prototype alignment component to support knowledge transfer and sharing between domains. A novel progressive inter-domain mixup component is further devised to augment the training data and bridge the significant gap between the labeled and unlabeled domains. The experimental results demonstrate the effectiveness and superiority of the proposed Uni-HSSL over state-of-the-art semi-supervised learning methods and unsupervised domain adaptation baselines.
# References

[1] David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, and Colin A Raffel. MixMatch: A holistic approach to semi-supervised learning. In Advances in Neural Information Processing Systems (NeurIPS), 2019. 5, 6
[2] David Berthelot, Rebecca Roelofs, Kihyuk Sohn, Nicholas Carlini, and Alexey Kurakin. AdaMatch: A unified approach to semi-supervised learning and domain adaptation. In International Conference on Learning Representations (ICLR), 2021. 3
[3] Kaidi Cao, Maria Brbic, and Jure Leskovec. Open-world semi-supervised learning. In International Conference on Learning Representations (ICLR), 2022. 2
[4] Chao Chen, Zhihang Fu, Zhihong Chen, Sheng Jin, Zhaowei Cheng, Xinyu Jin, and Xian-Sheng Hua. HoMM: Higher-order moment matching for unsupervised domain adaptation. In AAAI Conference on Artificial Intelligence, 2020. 3
[5] Xinyang Chen, Sinan Wang, Mingsheng Long, and Jianmin Wang. Transferability vs. discriminability: Batch spectral penalization for adversarial domain adaptation. In International Conference on Machine Learning (ICML), 2019. 3
[6] Yanbei Chen, Xiatian Zhu, Wei Li, and Shaogang Gong. Semi-supervised learning under class distribution mismatch. In Proceedings of the AAAI Conference on Artificial Intelligence, 2020. 2
[7] Noel CF Codella, David Gutman, M Emre Celebi, Brian Helba, Michael A Marchetti, Stephen W Dusza, Aadi Kalloo, Konstantinos Liopyris, Nabin Mishra, Harald Kittler, et al. Skin lesion analysis toward melanoma detection: A challenge at the 2017 International Symposium on Biomedical Imaging (ISBI), hosted by the International Skin Imaging Collaboration (ISIC). In International Symposium on Biomedical Imaging (ISBI), 2018. 6
[8] Marc Combalia, Noel CF Codella, Veronica Rotemberg, Brian Helba, Veronica Vilaplana, Ofer Reiter, Cristina Carrera, Alicia Barreiro, Allan C Halpern, Susana Puig, et al. BCN20000: Dermoscopic lesions in the wild. arXiv preprint arXiv:1908.02288, 2019. 6
[9] Shuhao Cui, Shuhui Wang, Junbao Zhuo, Liang Li, Qingming Huang, and Qi Tian. Towards discriminability and diversity: Batch nuclear-norm maximization under label insufficient situations. In Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 3
[10] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In Conference on Computer Vision and Pattern Recognition (CVPR), 2009. 6
[11] Yaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. In International Conference on Machine Learning (ICML), 2015. 2, 3
[12] Lan-Zhe Guo, Zhen-Yu Zhang, Yuan Jiang, Yu-Feng Li, and Zhi-Hua Zhou. Safe deep semi-supervised learning for unseen-class unlabeled data. In International Conference on Machine Learning (ICML). PMLR, 2020. 2
[13] Zhuo Huang, Chao Xue, Bo Han, Jian Yang, and Chen Gong. Universal semi-supervised learning. In Advances in Neural Information Processing Systems (NeurIPS), 2021. 2
[14] Lin-Han Jia, Lan-Zhe Guo, Zhi Zhou, Jie-Jing Shao, Yuke Xiang, and Yu-Feng Li. Bidirectional adaptation for robust semi-supervised learning with inconsistent data distributions. In International Conference on Machine Learning (ICML), 2023. 2, 3, 6
[15] Ying Jin, Ximei Wang, Mingsheng Long, and Jianmin Wang. Minimum class confusion for versatile domain adaptation. In European Conference on Computer Vision (ECCV), 2020. 6
[16] Samuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning.
In International Conference on Learning Representations (ICLR), 2017. 2 +[17] Qicheng Lao, Xiang Jiang, and Mohammad Havaei. Hypothesis disparity regularized mutual information maximization. In AAAI Conference on Artificial Intelligence, 2021. 3 +[18] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 2015. 1 +[19] Dong-Hyun Lee et al. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In ICML Workshop on Challenges in Representation Learning, 2013. 2 +[20] Hong Liu, Jianmin Wang, and Mingsheng Long. Cycle self-training for domain adaptation. In Advances in Neural Information Processing Systems (NeurIPS), 2021. 3 +[21] Mingsheng Long, Yue Cao, Jianmin Wang, and Michael Jordan. Learning transferable features with deep adaptation networks. In International Conference on Machine Learning (ICML), 2015. 2, 3 +[22] Mingsheng Long, Zhangjie Cao, Jianmin Wang, and Michael I Jordan. Conditional adversarial domain adaptation. In Advances in Neural Information Processing Systems (NeurIPS), 2018. 3, 6 +[23] Ilya Loshchilov and Frank Hutter. SGDR: Stochastic gradient descent with warm restarts. In International Conference on Learning Representations (ICLR), 2017. 6 +[24] Yurii Nesterov. A method for unconstrained convex minimization problem with the rate of convergence O(1/k^2). Dokl. Akad. Nauk SSSR, 1983. 6 +[25] Avital Oliver, Augustus Odena, Colin A Raffel, Ekin Dogus Cubuk, and Ian Goodfellow. Realistic evaluation of deep semi-supervised learning algorithms. In Advances in Neural Information Processing Systems (NeurIPS), 2018. 1, 2 +[26] Xingchao Peng, Ben Usman, Neela Kaushik, Judy Hoffman, Dequan Wang, and Kate Saenko. Visda: The visual domain adaptation challenge. arXiv preprint arXiv:1710.06924, 2017. 6 +[27] Hoang Phan, Trung Le, Trung Phung, Anh Tuan Bui, Nhat Ho, and Dinh Phung. Global-local regularization via distributional robustness. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2023. 3 +[28] Kate Saenko, Brian Kulis, Mario Fritz, and Trevor Darrell. Adapting visual category models to new domains. In European Conference on Computer Vision (ECCV), 2010. 6 +[29] Jian Shen, Yanru Qu, Weinan Zhang, and Yong Yu. Wasserstein distance guided representation learning for domain adaptation. In AAAI Conference on Artificial Intelligence, 2018. 3 +[30] Rui Shu, Hung H Bui, Hirokazu Narui, and Stefano Ermon. A DIRT-T approach to unsupervised domain adaptation. In International Conference on Learning Representations (ICLR), 2018. 3 +[31] Kihyuk Sohn, David Berthelot, Nicholas Carlini, Zizhao Zhang, Han Zhang, Colin A Raffel, Ekin Dogus Cubuk, Alexey Kurakin, and Chun-Liang Li. Fixmatch: Simplifying semi-supervised learning with consistency and confidence. In Advances in Neural Information Processing Systems (NeurIPS), 2020. 2, 6 +[32] Chen Sun, Abhinav Shrivastava, Saurabh Singh, and Abhinav Gupta. Revisiting unreasonable effectiveness of data in deep learning era. In International Conference on Computer Vision (ICCV), 2017. 1 +[33] Antti Tarvainen and Harri Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In Advances in Neural Information Processing Systems (NeurIPS), 2017. 2 +[34] Philipp Tschandl, Cliff Rosendahl, and Harald Kittler. The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Scientific data, 2018. 6 +[35] Jesper E Van Engelen and Holger H Hoos. 
A survey on semi-supervised learning. Machine learning, 2020. 1 +[36] Hemanth Venkateswara, Jose Eusebio, Shayok Chakraborty, and Sethuraman Panchanathan. Deep hashing network for unsupervised domain adaptation. In Conference on Computer Vision and Pattern Recognition (CVPR), 2017. 6 +[37] Vikas Verma, Kenji Kawaguchi, Alex Lamb, Juho Kannala, Arno Solin, Yoshua Bengio, and David Lopez-Paz. Interpolation consistency training for semi-supervised learning. In Neural Networks, 2022. 2, 6 +[38] Qing Yu, Daiki Ikami, Go Irie, and Kiyoharu Aizawa. Multi-task curriculum framework for open-set semi-supervised learning. In European Conference on Computer Vision (ECCV), 2020. 2 +[39] Bowen Zhang, Yidong Wang, Wenxin Hou, Hao Wu, Jindong Wang, Manabu Okumura, and Takahiro Shinozaki. Flexmatch: Boosting semi-supervised learning with curriculum pseudo labeling. In Advances in Neural Information Processing Systems (NeurIPS), 2021. 2, 6 +[40] Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. Mixup: Beyond empirical risk minimization. In International Conference on Learning Representations (ICLR), 2018. 5 +[41] Mingkai Zheng, Shan You, Lang Huang, Fei Wang, Chen Qian, and Chang Xu. Simmatch: Semi-supervised learning with similarity matching. In Conference on Computer Vision and Pattern Recognition (CVPR), 2022. 2, 6 +[42] Yang Zou, Zhiding Yu, Xiaofeng Liu, BVK Kumar, and Jinsong Wang. Confidence regularized self-training. In International Conference on Computer Vision (ICCV), 2019. 3 \ No newline at end of file diff --git a/2025/A Unified Framework for Heterogeneous Semi-supervised Learning/images.zip b/2025/A Unified Framework for Heterogeneous Semi-supervised Learning/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..478e8a29bf08d7bbbea787ba38f6a9aeb173e5e5 --- /dev/null +++ b/2025/A Unified Framework for Heterogeneous Semi-supervised Learning/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a0247d7da573e210c055a5b3f6c5e4be96353b74fa84255a2a118ce8c78120a3 +size 582620 diff --git a/2025/A Unified Framework for Heterogeneous Semi-supervised Learning/layout.json b/2025/A Unified Framework for Heterogeneous Semi-supervised Learning/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..62fa9b43c4da56b8a196dadced0e001cf2312d30 --- /dev/null +++ b/2025/A Unified Framework for Heterogeneous Semi-supervised Learning/layout.json @@ -0,0 +1,8499 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 96, + 103, + 515, + 121 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 96, + 103, + 515, + 121 + ], + "spans": [ + { + "bbox": [ + 96, + 103, + 515, + 121 + ], + "type": "text", + "content": "A Unified Framework for Heterogeneous Semi-supervised Learning" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 142, + 143, + 469, + 185 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 142, + 143, + 469, + 185 + ], + "spans": [ + { + "bbox": [ + 142, + 143, + 469, + 185 + ], + "type": "text", + "content": "Marzi Heidari*, Abdullah Alchihabi*, Hao Yan*, Yuhong Guo*† \n*School of Computer Science, Carleton University, Ottawa, Canada†Canada CIFAR AI Chair, Amii, Canada" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 70, + 187, + 539, + 200 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 70, + 187, + 539, + 200 + ], + "spans": [ + { + "bbox": [ + 70, + 187, + 539, + 200 + ], + "type": "text", + "content": "{marziheidari@cmail., 
abdullahalchihibi@cmail., haoyan6@cmail., yuhong.guo@}carleton.ca" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 151, + 227, + 200, + 239 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 151, + 227, + 200, + 239 + ], + "spans": [ + { + "bbox": [ + 151, + 227, + 200, + 239 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 53, + 251, + 297, + 574 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 53, + 251, + 297, + 574 + ], + "spans": [ + { + "bbox": [ + 53, + 251, + 297, + 574 + ], + "type": "text", + "content": "In this work, we introduce a novel problem setup termed as Heterogeneous Semi-Supervised Learning (HSSL), which presents unique challenges by bridging the semi-supervised learning (SSL) task and the unsupervised domain adaptation (UDA) task, and expanding standard semi-supervised learning to cope with heterogeneous training data. At its core, HSSL aims to learn a prediction model using a combination of labeled and unlabeled training data drawn separately from heterogeneous domains that share a common set of semantic categories. This model is intended to differentiate the semantic categories of test instances sampled from both the labeled and unlabeled domains. In particular, the labeled and unlabeled domains have dissimilar label distributions and class feature distributions. This heterogeneity, coupled with the assorted sources of the test data, introduces significant challenges to standard SSL and UDA methods. Therefore, we propose a novel method, Unified Framework for Heterogeneous Semi-supervised Learning (Uni-HSSL), to address HSSL by directly learning a fine-grained classifier from the heterogeneous data, which adaptively handles the inter-domain heterogeneity while leveraging both the unlabeled data and the inter-domain semantic class relationships for cross-domain knowledge transfer and adaptation. We conduct comprehensive experiments and the experimental results validate the efficacy and superior performance of the proposed Uni-HSSL over state-of-the-art semi-supervised learning and unsupervised domain adaptation methods." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 56, + 597, + 135, + 609 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 597, + 135, + 609 + ], + "spans": [ + { + "bbox": [ + 56, + 597, + 135, + 609 + ], + "type": "text", + "content": "1. Introduction" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 617, + 297, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 617, + 297, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 617, + 297, + 714 + ], + "type": "text", + "content": "Deep learning models, owing to their hierarchical learned representations and intricate architectures, have monumentally advanced the state-of-the-art across a myriad of tasks [18]. Nonetheless, the success of deep learning has been often contingent on the availability of copious amounts of labeled data. Data annotation, especially in specialized domains, is not only resource-intensive but can also entail exorbitant costs [32]. 
Consequently, semi-supervised learn" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 228, + 555, + 264 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 228, + 555, + 264 + ], + "spans": [ + { + "bbox": [ + 313, + 228, + 555, + 264 + ], + "type": "text", + "content": "ing (SSL) has been popularly studied, aiming to successfully utilize the free available unlabeled data to help train deep models in an annotation efficient manner [35]." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 312, + 267, + 556, + 448 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 267, + 556, + 448 + ], + "spans": [ + { + "bbox": [ + 312, + 267, + 556, + 448 + ], + "type": "text", + "content": "However, current SSL methods assume that the unlabeled and labeled data are sampled from similar (homogeneous) distributions [25]. Such an assumption presents substantial practical limitations to applying traditional SSL methods to a wide range of application domains, where labeled and unlabeled data can have different distributions. For example, in the field of medical imaging, it is common for labeled MRI scans to be sourced from state-of-the-art research hospitals, while an influx of unlabeled scans could emanate from a myriad of rural clinics, each with its distinct scanning equipment and calibration idiosyncrasies. Similar heterogeneity patterns manifest in domains like aerial imagery, wildlife monitoring, and retail product classification. In such settings, the challenge lies in leveraging the unlabeled data given its dissimilarity with its labeled counterpart." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 312, + 450, + 557, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 450, + 557, + 714 + ], + "spans": [ + { + "bbox": [ + 312, + 450, + 557, + 714 + ], + "type": "text", + "content": "Therefore, to address the current limitations of the traditional SSL, we propose a novel heterogeneous semi-supervised learning (HSSL) task, where the training data consist of labeled and unlabeled data sampled from different distribution domains. The two domains contain a common set of semantic classes, but have different label and class feature distributions. The goal of HSSL is to train a model using the heterogeneous training data so that it can perform well on a held-out test set sampled from both the labeled and unlabeled domains. Without posing distribution similarity assumptions between the labeled and unlabeled data, HSSL is expected to be applicable to a broader range of real-world scenarios compared to standard SSL. This novel heterogeneous semi-supervised learning task however is much more challenging due to the following characteristics: (1) The domain gap, expressed as divergence between class feature distributions across the labeled and unlabeled domains, presents a significant impediment to model generalization and learning. (2) The absence of annotated samples from the unlabeled domain during training further compounds the complexity of the task. 
(3) Considering that the test set comprises samples from both domains, the devised so" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "spans": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "text", + "content": "CVF" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "spans": [ + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "type": "text", + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "text", + "content": "15371" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "bbox": [ + 57, + 72, + 296, + 382 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 72, + 296, + 382 + ], + "spans": [ + { + "bbox": [ + 57, + 72, + 296, + 382 + ], + "type": "text", + "content": "lution methods need to accurately model the distributions inherent to each domain. It is imperative for the models to discern not only the domain from which a sample originates but also the specific semantic class it belongs to. This requires either an explicit or implicit methodology to categorize samples accurately with respect to both domain origin and semantic class categories, distinguishing the task from both conventional SSL and unsupervised domain adaptation (UDA)—traditional SSL overlooks the domain heterogeneity within both the training and testing data, whereas UDA exclusively concentrates on the unlabeled domain as the target domain [11, 21]. Therefore, traditional SSL and UDA methods are not readily applicable or effective in addressing the proposed HSSL task. A recent work [14] has made an effort to expand the traditional SSL task beyond its homogeneous assumptions. However, the proposed solution method learns separately in different domains using distinct components where an off-the-shelf UDA technique is employed to generate pseudo-labels for the unlabeled samples, bypassing the opportunity to train a unified cohesive model that could harness insights from both domains. Furthermore, their test set is confined to a labeled domain, while HSSL aims to train a model that generalizes across labeled and unlabeled domains. HSSL presents a more complex challenge, requiring the model to adapt and perform accurately across heterogeneous test data." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 57, + 391, + 296, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 391, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 57, + 391, + 296, + 713 + ], + "type": "text", + "content": "In this work, we propose a novel method, named as Unified framework for Heterogeneous Semi-Supervised Learning (Uni-HSSL), to address the HSSL problem. 
The proposed method learns a fine-grained classification model cohesively under a unified framework by amalgamating the labeled and unlabeled class categories within an extended and precisely doubled label space. The framework consists of three technical components designed to tackle the HSSL challenges: a weighted moving average pseudo-labeling component, a cross-domain prototype alignment component, and a progressive inter-domain mixup component. The pseudo-labeling component leverages a weighted moving average strategy to assign and update pseudo-labels for the unlabeled data. In this manner, it generates smooth and adaptive assignment of pseudo-labels, reducing the potential pitfalls of oscillating updates or noisy label assignments, which is crucial given the significant domain gap between labeled data and unlabeled data. The cross-domain prototype alignment ensures that the inherent semantic structures of similar classes across the labeled and unlabeled domains are aligned. This alignment of class-centric prototypes between domains leverages inter-domain semantic class relationships, enabling knowledge transfer from the labeled domain to the unlabeled domain. The progressive inter-domain mixup component generates new synthetic instances by interpolating between labeled and unlabeled samples and bridges the gap between the two domains. By adopting a progressive" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 316, + 72, + 553, + 167 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 72, + 553, + 167 + ], + "spans": [ + { + "bbox": [ + 316, + 72, + 553, + 167 + ], + "type": "text", + "content": "augmentation schedule, it gradually adapts the model to the distribution of the unlabeled domain, facilitating a steady and reliable knowledge transfer. Comprehensive experiments are conducted on several benchmark datasets. The empirical results demonstrate the efficacy and superior performance of our proposed unified framework compared to multiple state-of-the-art SSL and unsupervised domain adaptation baselines for HSSL." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 316, + 179, + 405, + 191 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 179, + 405, + 191 + ], + "spans": [ + { + "bbox": [ + 316, + 179, + 405, + 191 + ], + "type": "text", + "content": "2. Related Works" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 316, + 199, + 460, + 212 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 199, + 460, + 212 + ], + "spans": [ + { + "bbox": [ + 316, + 199, + 460, + 212 + ], + "type": "text", + "content": "2.1. Semi-Supervised Learning" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 315, + 217, + 555, + 384 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 217, + 555, + 384 + ], + "spans": [ + { + "bbox": [ + 315, + 217, + 555, + 384 + ], + "type": "text", + "content": "Conventional Semi-Supervised Learning (SSL) In conventional SSL, the labeled and unlabeled segments of the dataset encompass identical classes, sharing consistent class and feature distributions. SSL methods are primarily classified into three categories: regularization-based techniques, teacher-student models, and pseudo-labeling strategies. Regularization-based techniques like II-model [16] modify the loss function with additional terms for model refinement. Teacher-student models like MT [33] and ICT [37] involve training a student network to mimic a teacher model using unlabeled data. 
Pseudo-labeling strategies like Pseudo-Label [19], FixMatch [31], FlexMatch [39], and SimMatch [41] expand labeled datasets using unlabeled data with pseudo-labels in various ways." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 315, + 399, + 555, + 613 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 399, + 555, + 613 + ], + "spans": [ + { + "bbox": [ + 315, + 399, + 555, + 613 + ], + "type": "text", + "content": "Open-Set Semi-Supervised Learning (OS-SSL) OS-SSL deals with unknown or additional classes present in the unlabeled data but absent in the labeled set. OS-SSL assumes the same feature distribution over labeled and unlabeled sets. This is different from HSSL, which operates under the assumption that labeled and unlabeled data come from separate domains with different feature distributions. The concept of OS-SSL, introduced in [25], focuses on class distribution mismatches in open-set scenarios. Methods for OS-SSL like UASD [6] use self-distillation to exclude outliers from unlabeled data. DS3L [12] and MTCF [38] employ diverse weighting strategies for subset mismatches, minimizing the impact of private data in unlabeled sets. OpenMatch [3] utilizes one-vs-all classifiers for outlier detection but faces difficulties with unseen categories. While OS-SSL has advanced SSL towards practical use, it lacks capacity to handle feature distribution mismatches between labeled and unlabeled data." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 315, + 629, + 553, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 629, + 553, + 713 + ], + "spans": [ + { + "bbox": [ + 315, + 629, + 553, + 713 + ], + "type": "text", + "content": "Universal Semi-Supervised Learning (USSL) Universal SSL [13] involves both shared and unique classes across the labeled and unlabeled sets, with the test set matching the labeled set's class distribution. HSSL, however, assumes shared classes across the labeled and unlabeled domains and tests on samples from both domains without their domain identities, adding complexity." + } + ] + } + ], + "index": 7 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 749, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 749, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 749, + 317, + 757 + ], + "type": "text", + "content": "15372" + } + ] + } + ], + "index": 8 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 296, + 168 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 296, + 168 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 296, + 168 + ], + "type": "text", + "content": "Similar to our work, bidirectional Adaptation [14] addresses the disparity between limited labeled and abundant unlabeled data, but it tests only within the labeled domain's feature distribution. It uses UDA techniques for pseudolabeling, avoiding the complexities and benefits of cross-domain modeling. In contrast, HSSL aims for effective generalization across both domains, posing a more intricate challenge in model adaptation and generalization." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 176, + 238, + 190 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 176, + 238, + 190 + ], + "spans": [ + { + "bbox": [ + 55, + 176, + 238, + 190 + ], + "type": "text", + "content": "2.2. 
Unsupervised Domain Adaptation" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 194, + 296, + 386 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 194, + 296, + 386 + ], + "spans": [ + { + "bbox": [ + 55, + 194, + 296, + 386 + ], + "type": "text", + "content": "Unsupervised domain adaptation aims at learning a target model given labeled data from a source domain and unlabeled data from a target domain. Typical deep UDA approaches can be categorized into three types: alignment-based, regularization-based, and self-training-based methods. Alignment-based methods aim to reduce the cross-domain feature discrepancy with adversarial alignment [11, 22] and distance-based methods [4, 21, 27, 29]. Regularization-based methods utilize regularization terms to leverage knowledge from the unlabeled target data. Typical regularization terms include entropy minimization [30], virtual adversarial training [30], batch spectral penalization [5], batch nuclear-norm maximization [9], and mutual information maximization [17]. Self-training-based methods explore effective pseudo-labeling for unlabeled target data fitting, including confidence threshold [2, 42] and cycle self-training [20]." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 398, + 111, + 410 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 398, + 111, + 410 + ], + "spans": [ + { + "bbox": [ + 55, + 398, + 111, + 410 + ], + "type": "text", + "content": "3. Method" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 418, + 148, + 430 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 418, + 148, + 430 + ], + "spans": [ + { + "bbox": [ + 55, + 418, + 148, + 430 + ], + "type": "text", + "content": "3.1. Problem Setup" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 435, + 296, + 602 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 435, + 296, + 602 + ], + "spans": [ + { + "bbox": [ + 55, + 435, + 296, + 602 + ], + "type": "text", + "content": "We consider the following Heterogeneous Semi-Supervised Learning (HSSL) setup. The training data consist of a set of labeled instances " + }, + { + "bbox": [ + 55, + 435, + 296, + 602 + ], + "type": "inline_equation", + "content": "\\mathcal{D}_L = \\{(\\mathbf{x}_i^l,\\mathbf{y}_i^l)\\}_{i = 1}^{N_l}" + }, + { + "bbox": [ + 55, + 435, + 296, + 602 + ], + "type": "text", + "content": ", where each instance " + }, + { + "bbox": [ + 55, + 435, + 296, + 602 + ], + "type": "inline_equation", + "content": "\\mathbf{x}_i^l" + }, + { + "bbox": [ + 55, + 435, + 296, + 602 + ], + "type": "text", + "content": " is annotated with a one-hot label indicator vector " + }, + { + "bbox": [ + 55, + 435, + 296, + 602 + ], + "type": "inline_equation", + "content": "\\mathbf{y}_i^l" + }, + { + "bbox": [ + 55, + 435, + 296, + 602 + ], + "type": "text", + "content": " with length " + }, + { + "bbox": [ + 55, + 435, + 296, + 602 + ], + "type": "inline_equation", + "content": "C" + }, + { + "bbox": [ + 55, + 435, + 296, + 602 + ], + "type": "text", + "content": ", and a set of unlabeled instances " + }, + { + "bbox": [ + 55, + 435, + 296, + 602 + ], + "type": "inline_equation", + "content": "\\mathcal{D}_U = \\{\\mathbf{x}_i^u\\}_{i = 1}^{N_u}" + }, + { + "bbox": [ + 55, + 435, + 296, + 602 + ], + "type": "text", + "content": ". 
The labeled data and unlabeled data are from two different domains that have dissimilar label distributions such that " + }, + { + "bbox": [ + 55, + 435, + 296, + 602 + ], + "type": "inline_equation", + "content": "p_L(\\mathbf{y})\\neq p_U(\\mathbf{y})" + }, + { + "bbox": [ + 55, + 435, + 296, + 602 + ], + "type": "text", + "content": " and heterogeneous feature distributions such that " + }, + { + "bbox": [ + 55, + 435, + 296, + 602 + ], + "type": "inline_equation", + "content": "p_L(\\mathbf{x}|\\mathbf{y})\\neq p_U(\\mathbf{x}|\\mathbf{y})" + }, + { + "bbox": [ + 55, + 435, + 296, + 602 + ], + "type": "text", + "content": ", but share the same set of " + }, + { + "bbox": [ + 55, + 435, + 296, + 602 + ], + "type": "inline_equation", + "content": "C" + }, + { + "bbox": [ + 55, + 435, + 296, + 602 + ], + "type": "text", + "content": " semantic classes. The goal is to train a prediction model using both the labeled set " + }, + { + "bbox": [ + 55, + 435, + 296, + 602 + ], + "type": "inline_equation", + "content": "\\mathcal{D}_L" + }, + { + "bbox": [ + 55, + 435, + 296, + 602 + ], + "type": "text", + "content": " and unlabeled set " + }, + { + "bbox": [ + 55, + 435, + 296, + 602 + ], + "type": "inline_equation", + "content": "\\mathcal{D}_U" + }, + { + "bbox": [ + 55, + 435, + 296, + 602 + ], + "type": "text", + "content": " so that the trained model would generalize well on a held-out test set that is indistinguishably sampled from both the labeled and unlabeled domains." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 612, + 161, + 624 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 612, + 161, + 624 + ], + "spans": [ + { + "bbox": [ + 55, + 612, + 161, + 624 + ], + "type": "text", + "content": "3.2. Proposed Method" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 629, + 296, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 629, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 629, + 296, + 713 + ], + "type": "text", + "content": "In this section, we present the proposed Uni-HSSL method, which tackles the " + }, + { + "bbox": [ + 55, + 629, + 296, + 713 + ], + "type": "inline_equation", + "content": "C" + }, + { + "bbox": [ + 55, + 629, + 296, + 713 + ], + "type": "text", + "content": "-class HSSL problem by combining the labeled and unlabeled class categories to a doubled label space and learning a fine-grained " + }, + { + "bbox": [ + 55, + 629, + 296, + 713 + ], + "type": "inline_equation", + "content": "2C" + }, + { + "bbox": [ + 55, + 629, + 296, + 713 + ], + "type": "text", + "content": "-class classification model under a unified framework, aiming to adaptively handle the heterogeneous distributions across domains and gain better generalization over test instances randomly sampled" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 72, + 553, + 120 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 72, + 553, + 120 + ], + "spans": [ + { + "bbox": [ + 313, + 72, + 553, + 120 + ], + "type": "text", + "content": "from both the labeled and unlabeled domains. The core idea centers on simultaneously facilitating effective knowledge transfer from the labeled domain to the unlabeled domain while harnessing the information within the unlabeled data." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 121, + 555, + 324 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 121, + 555, + 324 + ], + "spans": [ + { + "bbox": [ + 313, + 121, + 555, + 324 + ], + "type": "text", + "content": "We start by first pre-training a feature encoder and a " + }, + { + "bbox": [ + 313, + 121, + 555, + 324 + ], + "type": "inline_equation", + "content": "C" + }, + { + "bbox": [ + 313, + 121, + 555, + 324 + ], + "type": "text", + "content": "-class semantic classifier on the labeled dataset, which can be used to produce the initial pseudo-labels of the unlabeled training data and provide partial initialization for our Uni-HSSL model. Then the " + }, + { + "bbox": [ + 313, + 121, + 555, + 324 + ], + "type": "inline_equation", + "content": "2C" + }, + { + "bbox": [ + 313, + 121, + 555, + 324 + ], + "type": "text", + "content": "-class Uni-HSSL model, which consists of a feature encoder " + }, + { + "bbox": [ + 313, + 121, + 555, + 324 + ], + "type": "inline_equation", + "content": "f" + }, + { + "bbox": [ + 313, + 121, + 555, + 324 + ], + "type": "text", + "content": " and a " + }, + { + "bbox": [ + 313, + 121, + 555, + 324 + ], + "type": "inline_equation", + "content": "2C" + }, + { + "bbox": [ + 313, + 121, + 555, + 324 + ], + "type": "text", + "content": "-class classifier " + }, + { + "bbox": [ + 313, + 121, + 555, + 324 + ], + "type": "inline_equation", + "content": "h" + }, + { + "bbox": [ + 313, + 121, + 555, + 324 + ], + "type": "text", + "content": ", will be learned within the proposed unified semi-supervised framework shown in Figure 1. The framework introduces three technical components to facilitate heterogeneous SSL. The weighted-moving-average (WMA) based pseudo-labeling component is deployed to support the effective exploitation of the unlabeled data, while the cross-domain prototype alignment component and progressive inter-domain mixup component are designed to promote information sharing and efficient and steady knowledge transfer from the labeled domain to the unlabeled domain. Further elaboration will be provided in the following sections." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 329, + 444, + 341 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 329, + 444, + 341 + ], + "spans": [ + { + "bbox": [ + 313, + 329, + 444, + 341 + ], + "type": "text", + "content": "3.2.1. Supervised Pre-training" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 344, + 554, + 439 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 344, + 554, + 439 + ], + "spans": [ + { + "bbox": [ + 313, + 344, + 554, + 439 + ], + "type": "text", + "content": "The initial challenge in training a " + }, + { + "bbox": [ + 313, + 344, + 554, + 439 + ], + "type": "inline_equation", + "content": "2C" + }, + { + "bbox": [ + 313, + 344, + 554, + 439 + ], + "type": "text", + "content": "-class classification model with the given heterogeneous data is the absence of labeled instances entirely in the unlabeled domain. 
To tackle this problem, we exploit the assumption that the labeled and unlabeled domains share the same set of " + }, + { + "bbox": [ + 313, + 344, + 554, + 439 + ], + "type": "inline_equation", + "content": "C" + }, + { + "bbox": [ + 313, + 344, + 554, + 439 + ], + "type": "text", + "content": " semantic class categories, and pre-train a " + }, + { + "bbox": [ + 313, + 344, + 554, + 439 + ], + "type": "inline_equation", + "content": "C" + }, + { + "bbox": [ + 313, + 344, + 554, + 439 + ], + "type": "text", + "content": "-class classification model in the labeled domain to provide initial pseudo-labels for the training instances in the unlabeled domain." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 440, + 555, + 488 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 440, + 555, + 488 + ], + "spans": [ + { + "bbox": [ + 313, + 440, + 555, + 488 + ], + "type": "text", + "content": "Specifically, we pre-train a " + }, + { + "bbox": [ + 313, + 440, + 555, + 488 + ], + "type": "inline_equation", + "content": "C" + }, + { + "bbox": [ + 313, + 440, + 555, + 488 + ], + "type": "text", + "content": "-class model, which consists of a feature encoder " + }, + { + "bbox": [ + 313, + 440, + 555, + 488 + ], + "type": "inline_equation", + "content": "f" + }, + { + "bbox": [ + 313, + 440, + 555, + 488 + ], + "type": "text", + "content": " and a " + }, + { + "bbox": [ + 313, + 440, + 555, + 488 + ], + "type": "inline_equation", + "content": "C" + }, + { + "bbox": [ + 313, + 440, + 555, + 488 + ], + "type": "text", + "content": "-class probabilistic classifier " + }, + { + "bbox": [ + 313, + 440, + 555, + 488 + ], + "type": "inline_equation", + "content": "g" + }, + { + "bbox": [ + 313, + 440, + 555, + 488 + ], + "type": "text", + "content": ", on the labeled data " + }, + { + "bbox": [ + 313, + 440, + 555, + 488 + ], + "type": "inline_equation", + "content": "\\mathcal{D}_L" + }, + { + "bbox": [ + 313, + 440, + 555, + 488 + ], + "type": "text", + "content": " by minimizing the following supervised cross-entropy loss:" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 357, + 495, + 555, + 513 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 357, + 495, + 555, + 513 + ], + "spans": [ + { + "bbox": [ + 357, + 495, + 555, + 513 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {c e} ^ {L} = \\mathbb {E} _ {\\left(\\mathbf {x} _ {i} ^ {l}, \\mathbf {y} _ {i} ^ {l}\\right) \\in \\mathcal {D} _ {L}} \\left[ \\ell_ {c e} \\left(\\mathbf {y} _ {i} ^ {l}, g (f (\\mathbf {x} _ {i} ^ {l}))\\right) \\right] \\tag {1}", + "image_path": "ed5ee6bf08aa1c00eebcfcf347c814fe2da6ac10a987c86d20b38544cdeebb59.jpg" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 519, + 556, + 567 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 519, + 556, + 567 + ], + "spans": [ + { + "bbox": [ + 313, + 519, + 556, + 567 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 519, + 556, + 567 + ], + "type": "inline_equation", + "content": "\\ell_{ce}" + }, + { + "bbox": [ + 313, + 519, + 556, + 567 + ], + "type": "text", + "content": " denotes the cross-entropy function. 
Then we deploy the pre-trained classification model to make predictions on the unlabeled training instances in " + }, + { + "bbox": [ + 313, + 519, + 556, + 567 + ], + "type": "inline_equation", + "content": "\\mathcal{D}_U" + }, + { + "bbox": [ + 313, + 519, + 556, + 567 + ], + "type": "text", + "content": " to generate their initial pseudo-labels:" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 372, + 575, + 555, + 590 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 372, + 575, + 555, + 590 + ], + "spans": [ + { + "bbox": [ + 372, + 575, + 555, + 590 + ], + "type": "interline_equation", + "content": "\\bar {\\mathbf {y}} _ {i} ^ {0} = g \\left(f \\left(\\mathbf {x} _ {i} ^ {u}\\right)\\right), \\quad \\forall \\mathbf {x} _ {i} ^ {u} \\in \\mathcal {D} _ {U} \\tag {2}", + "image_path": "56abf667d06700e05bb1f950eff51388fc7d23e0cfb5db6cadfd2f3d2a0a2667.jpg" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 313, + 597, + 556, + 658 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 597, + 556, + 658 + ], + "spans": [ + { + "bbox": [ + 313, + 597, + 556, + 658 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 597, + 556, + 658 + ], + "type": "inline_equation", + "content": "\\bar{\\mathbf{y}}_i^0" + }, + { + "bbox": [ + 313, + 597, + 556, + 658 + ], + "type": "text", + "content": " denotes the predicted class probability vector with length " + }, + { + "bbox": [ + 313, + 597, + 556, + 658 + ], + "type": "inline_equation", + "content": "C" + }, + { + "bbox": [ + 313, + 597, + 556, + 658 + ], + "type": "text", + "content": " for the unlabeled instance " + }, + { + "bbox": [ + 313, + 597, + 556, + 658 + ], + "type": "inline_equation", + "content": "\\mathbf{x}_i^u" + }, + { + "bbox": [ + 313, + 597, + 556, + 658 + ], + "type": "text", + "content": ". 
To provide initial labels on the unlabeled data for training the " + }, + { + "bbox": [ + 313, + 597, + 556, + 658 + ], + "type": "inline_equation", + "content": "2C" + }, + { + "bbox": [ + 313, + 597, + 556, + 658 + ], + "type": "text", + "content": "-class model, we further expand each " + }, + { + "bbox": [ + 313, + 597, + 556, + 658 + ], + "type": "inline_equation", + "content": "\\bar{\\mathbf{y}}_i^0" + }, + { + "bbox": [ + 313, + 597, + 556, + 658 + ], + "type": "text", + "content": " by concatenating it with a zero vector with length " + }, + { + "bbox": [ + 313, + 597, + 556, + 658 + ], + "type": "inline_equation", + "content": "C" + }, + { + "bbox": [ + 313, + 597, + 556, + 658 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 313, + 597, + 556, + 658 + ], + "type": "inline_equation", + "content": "\\mathbf{0}_C" + }, + { + "bbox": [ + 313, + 597, + 556, + 658 + ], + "type": "text", + "content": ":" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 389, + 666, + 555, + 681 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 389, + 666, + 555, + 681 + ], + "spans": [ + { + "bbox": [ + 389, + 666, + 555, + 681 + ], + "type": "interline_equation", + "content": "\\hat {\\mathbf {y}} _ {i} ^ {0} = \\operatorname {c o n c a t} \\left(\\mathbf {0} _ {C}, \\bar {\\mathbf {y}} _ {i} ^ {0}\\right) \\tag {3}", + "image_path": "01ba431795715b8cdf8b0fe14b26bfdee617abb039fabe60853bc21f8e095491.jpg" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 313, + 689, + 555, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 689, + 555, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 689, + 555, + 713 + ], + "type": "text", + "content": "This results in the first set of " + }, + { + "bbox": [ + 313, + 689, + 555, + 713 + ], + "type": "inline_equation", + "content": "C" + }, + { + "bbox": [ + 313, + 689, + 555, + 713 + ], + "type": "text", + "content": " classes out of the " + }, + { + "bbox": [ + 313, + 689, + 555, + 713 + ], + "type": "inline_equation", + "content": "2C" + }, + { + "bbox": [ + 313, + 689, + 555, + 713 + ], + "type": "text", + "content": " classes corresponding to the classes in the labeled domain, with" + } + ] + } + ], + "index": 18 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "text", + "content": "15373" + } + ] + } + ], + "index": 19 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 100, + 74, + 509, + 266 + ], + "blocks": [ + { + "bbox": [ + 100, + 74, + 509, + 266 + ], + "lines": [ + { + "bbox": [ + 100, + 74, + 509, + 266 + ], + "spans": [ + { + "bbox": [ + 100, + 74, + 509, + 266 + ], + "type": "image", + "image_path": "b2dc7b8e254ecc5e7a924b0093c898cef1b0802eae32003cf24e947489c5e0c3.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 54, + 277, + 555, + 323 + ], + "lines": [ + { + "bbox": [ + 54, + 277, + 555, + 323 + ], + "spans": [ + { + "bbox": [ + 54, + 277, + 555, + 323 + ], + "type": "text", + "content": "Figure 1. An overview of the proposed Uni-HSSL training framework. 
The classification model consists of a feature encoder " + }, + { + "bbox": [ + 54, + 277, + 555, + 323 + ], + "type": "inline_equation", + "content": "f" + }, + { + "bbox": [ + 54, + 277, + 555, + 323 + ], + "type": "text", + "content": " and a " + }, + { + "bbox": [ + 54, + 277, + 555, + 323 + ], + "type": "inline_equation", + "content": "2C" + }, + { + "bbox": [ + 54, + 277, + 555, + 323 + ], + "type": "text", + "content": "-class classifier " + }, + { + "bbox": [ + 54, + 277, + 555, + 323 + ], + "type": "inline_equation", + "content": "h" + }, + { + "bbox": [ + 54, + 277, + 555, + 323 + ], + "type": "text", + "content": ". After initialization with pre-training, the model is trained by jointly minimizing the combination of a supervised loss " + }, + { + "bbox": [ + 54, + 277, + 555, + 323 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{cl}^{L}" + }, + { + "bbox": [ + 54, + 277, + 555, + 323 + ], + "type": "text", + "content": " on the labeled data, a WMA pseudo-labeling loss " + }, + { + "bbox": [ + 54, + 277, + 555, + 323 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{pl}^{U}" + }, + { + "bbox": [ + 54, + 277, + 555, + 323 + ], + "type": "text", + "content": " on the unlabeled data, a cross-domain prototype alignment loss " + }, + { + "bbox": [ + 54, + 277, + 555, + 323 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{pa}" + }, + { + "bbox": [ + 54, + 277, + 555, + 323 + ], + "type": "text", + "content": ", and a prediction loss " + }, + { + "bbox": [ + 54, + 277, + 555, + 323 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{\\mathrm{Mixup}}" + }, + { + "bbox": [ + 54, + 277, + 555, + 323 + ], + "type": "text", + "content": " on the augmentation data produced via progressive inter-domain mixup." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 54, + 336, + 296, + 411 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 336, + 296, + 411 + ], + "spans": [ + { + "bbox": [ + 54, + 336, + 296, + 411 + ], + "type": "text", + "content": "the remaining set of " + }, + { + "bbox": [ + 54, + 336, + 296, + 411 + ], + "type": "inline_equation", + "content": "C" + }, + { + "bbox": [ + 54, + 336, + 296, + 411 + ], + "type": "text", + "content": " classes corresponding to the classes in the unlabeled domain. 
Moreover, the parameters of the pre-trained " + }, + { + "bbox": [ + 54, + 336, + 296, + 411 + ], + "type": "inline_equation", + "content": "C" + }, + { + "bbox": [ + 54, + 336, + 296, + 411 + ], + "type": "text", + "content": "-class model " + }, + { + "bbox": [ + 54, + 336, + 296, + 411 + ], + "type": "inline_equation", + "content": "(g \\circ f)" + }, + { + "bbox": [ + 54, + 336, + 296, + 411 + ], + "type": "text", + "content": " can also be utilized to initialize the feature encoder " + }, + { + "bbox": [ + 54, + 336, + 296, + 411 + ], + "type": "inline_equation", + "content": "f" + }, + { + "bbox": [ + 54, + 336, + 296, + 411 + ], + "type": "text", + "content": " and part of the classifier " + }, + { + "bbox": [ + 54, + 336, + 296, + 411 + ], + "type": "inline_equation", + "content": "h" + }, + { + "bbox": [ + 54, + 336, + 296, + 411 + ], + "type": "text", + "content": " corresponding to the first " + }, + { + "bbox": [ + 54, + 336, + 296, + 411 + ], + "type": "inline_equation", + "content": "C" + }, + { + "bbox": [ + 54, + 336, + 296, + 411 + ], + "type": "text", + "content": " classes in the " + }, + { + "bbox": [ + 54, + 336, + 296, + 411 + ], + "type": "inline_equation", + "content": "2C" + }, + { + "bbox": [ + 54, + 336, + 296, + 411 + ], + "type": "text", + "content": "-class model, while the other part of " + }, + { + "bbox": [ + 54, + 336, + 296, + 411 + ], + "type": "inline_equation", + "content": "h" + }, + { + "bbox": [ + 54, + 336, + 296, + 411 + ], + "type": "text", + "content": " will be randomly initialized." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 417, + 296, + 441 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 417, + 296, + 441 + ], + "spans": [ + { + "bbox": [ + 55, + 417, + 296, + 441 + ], + "type": "text", + "content": "3.2.2. Semi-Supervised Training with Adaptive Pseudo-Labeling" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 445, + 296, + 517 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 445, + 296, + 517 + ], + "spans": [ + { + "bbox": [ + 55, + 445, + 296, + 517 + ], + "type": "text", + "content": "After initialization, the proposed " + }, + { + "bbox": [ + 55, + 445, + 296, + 517 + ], + "type": "inline_equation", + "content": "2C" + }, + { + "bbox": [ + 55, + 445, + 296, + 517 + ], + "type": "text", + "content": "-class classification model (feature encoder " + }, + { + "bbox": [ + 55, + 445, + 296, + 517 + ], + "type": "inline_equation", + "content": "f" + }, + { + "bbox": [ + 55, + 445, + 296, + 517 + ], + "type": "text", + "content": " and probabilistic classifier " + }, + { + "bbox": [ + 55, + 445, + 296, + 517 + ], + "type": "inline_equation", + "content": "h" + }, + { + "bbox": [ + 55, + 445, + 296, + 517 + ], + "type": "text", + "content": ") will be trained by leveraging both the labeled set " + }, + { + "bbox": [ + 55, + 445, + 296, + 517 + ], + "type": "inline_equation", + "content": "\\mathcal{D}_L" + }, + { + "bbox": [ + 55, + 445, + 296, + 517 + ], + "type": "text", + "content": " and the unlabeled set " + }, + { + "bbox": [ + 55, + 445, + 296, + 517 + ], + "type": "inline_equation", + "content": "\\mathcal{D}_U" + }, + { + "bbox": [ + 55, + 445, + 296, + 517 + ], + "type": "text", + "content": " within a pseudo-labeling based SSL framework. 
On the labeled set " + }, + { + "bbox": [ + 55, + 445, + 296, + 517 + ], + "type": "inline_equation", + "content": "\\mathcal{D}_L" + }, + { + "bbox": [ + 55, + 445, + 296, + 517 + ], + "type": "text", + "content": ", the following standard supervised cross-entropy loss will be used as the training objective:" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 526, + 296, + 544 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 526, + 296, + 544 + ], + "spans": [ + { + "bbox": [ + 67, + 526, + 296, + 544 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {c l} ^ {L} = \\mathbb {E} _ {\\left(\\mathbf {x} _ {i} ^ {l}, \\mathbf {y} _ {i} ^ {l}\\right) \\in \\mathcal {D} _ {L}} \\left[ \\ell_ {c e} \\left(h \\left(f \\left(\\mathbf {x} _ {i} ^ {l}\\right)\\right), \\operatorname {c o n c a t} \\left(\\mathbf {y} _ {i} ^ {l}, \\mathbf {0} _ {C}\\right)\\right) \\right] \\tag {4}", + "image_path": "c066b7ec3c6281a4863aefcd758aabf6f0a73bf2cd097a03e6c6cde479e0eb7f.jpg" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 553, + 296, + 590 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 553, + 296, + 590 + ], + "spans": [ + { + "bbox": [ + 55, + 553, + 296, + 590 + ], + "type": "text", + "content": "where the concatenated label vector, " + }, + { + "bbox": [ + 55, + 553, + 296, + 590 + ], + "type": "inline_equation", + "content": "\\operatorname{concat}(\\mathbf{y}_i^l, \\mathbf{0}_C)" + }, + { + "bbox": [ + 55, + 553, + 296, + 590 + ], + "type": "text", + "content": ", expands the ground-truth label vector " + }, + { + "bbox": [ + 55, + 553, + 296, + 590 + ], + "type": "inline_equation", + "content": "\\mathbf{y}_i^l" + }, + { + "bbox": [ + 55, + 553, + 296, + 590 + ], + "type": "text", + "content": " into the " + }, + { + "bbox": [ + 55, + 553, + 296, + 590 + ], + "type": "inline_equation", + "content": "2C" + }, + { + "bbox": [ + 55, + 553, + 296, + 590 + ], + "type": "text", + "content": "-class label space by appending a zero vector with length " + }, + { + "bbox": [ + 55, + 553, + 296, + 590 + ], + "type": "inline_equation", + "content": "C" + }, + { + "bbox": [ + 55, + 553, + 296, + 590 + ], + "type": "text", + "content": " to it." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 590, + 296, + 685 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 590, + 296, + 685 + ], + "spans": [ + { + "bbox": [ + 55, + 590, + 296, + 685 + ], + "type": "text", + "content": "Although we have obtained initial pseudo-labels for the unlabeled set " + }, + { + "bbox": [ + 55, + 590, + 296, + 685 + ], + "type": "inline_equation", + "content": "\\mathcal{D}_U" + }, + { + "bbox": [ + 55, + 590, + 296, + 685 + ], + "type": "text", + "content": " by utilizing the pre-trained " + }, + { + "bbox": [ + 55, + 590, + 296, + 685 + ], + "type": "inline_equation", + "content": "C" + }, + { + "bbox": [ + 55, + 590, + 296, + 685 + ], + "type": "text", + "content": "-class classifier, those initial labels are unavoidably noisy due to the existence of domain gap between the labeled and unlabeled domains. 
In order to effectively leverage the unlabeled data, we update the pseudo-label for each unlabeled instance " + }, + { + "bbox": [ + 55, + 590, + 296, + 685 + ], + "type": "inline_equation", + "content": "\\mathbf{x}_i^u" + }, + { + "bbox": [ + 55, + 590, + 296, + 685 + ], + "type": "text", + "content": " during each training iteration in a weighted moving average (WMA) fashion as follows:" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 107, + 696, + 296, + 711 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 696, + 296, + 711 + ], + "spans": [ + { + "bbox": [ + 107, + 696, + 296, + 711 + ], + "type": "interline_equation", + "content": "\\hat {\\mathbf {y}} _ {i} ^ {t} = \\beta \\hat {\\mathbf {y}} _ {i} ^ {t - 1} + (1 - \\beta) h \\left(f \\left(\\mathbf {x} _ {i} ^ {u}\\right)\\right) \\tag {5}", + "image_path": "c00272b364e73b1e97b6a3e81514955ccd9575d888ebf9e76083ad097282699b.jpg" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 336, + 555, + 456 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 336, + 555, + 456 + ], + "spans": [ + { + "bbox": [ + 313, + 336, + 555, + 456 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 336, + 555, + 456 + ], + "type": "inline_equation", + "content": "\\beta \\in (0,1)" + }, + { + "bbox": [ + 313, + 336, + 555, + 456 + ], + "type": "text", + "content": " is a hyper-parameter that controls the rate of update, and " + }, + { + "bbox": [ + 313, + 336, + 555, + 456 + ], + "type": "inline_equation", + "content": "\\hat{\\mathbf{y}}_i^t" + }, + { + "bbox": [ + 313, + 336, + 555, + 456 + ], + "type": "text", + "content": " is the updated pseudo-label for " + }, + { + "bbox": [ + 313, + 336, + 555, + 456 + ], + "type": "inline_equation", + "content": "\\mathbf{x}_i^u" + }, + { + "bbox": [ + 313, + 336, + 555, + 456 + ], + "type": "text", + "content": " at the " + }, + { + "bbox": [ + 313, + 336, + 555, + 456 + ], + "type": "inline_equation", + "content": "t" + }, + { + "bbox": [ + 313, + 336, + 555, + 456 + ], + "type": "text", + "content": "-th training iteration. This weighted moving average update strategy can yield a smooth and adaptive assignment of pseudo-labels by promptly incorporating the progress in the classification model and mitigating the risk of oscillatory updates. 
Moreover, to further mitigate the adverse impact of noisy pseudo-labels, we deploy the following cross-entropy loss on the unlabeled set during training, selectively utilizing only instances with more reliable pseudo-labels:" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 323, + 464, + 555, + 481 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 323, + 464, + 555, + 481 + ], + "spans": [ + { + "bbox": [ + 323, + 464, + 555, + 481 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {p l} ^ {U} = \\mathbb {E} _ {\\mathbf {x} _ {i} ^ {u} \\in \\mathcal {D} _ {U}} [ \\mathbb {1} (\\max (\\hat {\\mathbf {y}} _ {i} ^ {t}) > \\epsilon) \\ell_ {c e} (h (f (\\mathbf {x} _ {i} ^ {u})), \\hat {\\mathbf {y}} _ {i} ^ {t}) ] \\tag {6}", + "image_path": "763985a22f2b8849abb1b36cf29026bd94acc55f374c028ee2bfc22567c469c1.jpg" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 488, + 555, + 536 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 488, + 555, + 536 + ], + "spans": [ + { + "bbox": [ + 313, + 488, + 555, + 536 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 488, + 555, + 536 + ], + "type": "inline_equation", + "content": "\\mathbb{1}(\\cdot)" + }, + { + "bbox": [ + 313, + 488, + 555, + 536 + ], + "type": "text", + "content": " denotes an indicator function; " + }, + { + "bbox": [ + 313, + 488, + 555, + 536 + ], + "type": "inline_equation", + "content": "\\epsilon \\in (0,1)" + }, + { + "bbox": [ + 313, + 488, + 555, + 536 + ], + "type": "text", + "content": " is a predefined confidence threshold to ensure that only unlabeled instances with the maximum prediction probabilities larger than " + }, + { + "bbox": [ + 313, + 488, + 555, + 536 + ], + "type": "inline_equation", + "content": "\\epsilon" + }, + { + "bbox": [ + 313, + 488, + 555, + 536 + ], + "type": "text", + "content": " are used for the current training iteration." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 536, + 556, + 632 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 536, + 556, + 632 + ], + "spans": [ + { + "bbox": [ + 313, + 536, + 556, + 632 + ], + "type": "text", + "content": "By treating semantic classes in distinct domains as separate categories, the " + }, + { + "bbox": [ + 313, + 536, + 556, + 632 + ], + "type": "inline_equation", + "content": "2C" + }, + { + "bbox": [ + 313, + 536, + 556, + 632 + ], + "type": "text", + "content": "-class classification model serves as a strategic choice to differentiate samples across domains. This approach avoids the additional complexity associated with a dedicated domain classifier and naturally handles the divergence in class-feature distributions across domains. It also simplifies the process and has the potential to enhance domain generalization through a shared feature encoder." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 639, + 537, + 651 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 639, + 537, + 651 + ], + "spans": [ + { + "bbox": [ + 313, + 639, + 537, + 651 + ], + "type": "text", + "content": "3.2.3. 
Cross-Domain Semantic Prototype Alignment" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 654, + 555, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 654, + 555, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 654, + 555, + 713 + ], + "type": "text", + "content": "Given that the labeled domain and unlabeled domain are comprised of the same set of " + }, + { + "bbox": [ + 313, + 654, + 555, + 713 + ], + "type": "inline_equation", + "content": "C" + }, + { + "bbox": [ + 313, + 654, + 555, + 713 + ], + "type": "text", + "content": " semantic classes, there is a one-to-one correspondence relationship between each cross-domain class pair for the same semantic concept. In order to facilitate knowledge sharing and transfer across domains," + } + ] + } + ], + "index": 14 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "text", + "content": "15374" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 296, + 167 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 296, + 167 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 296, + 167 + ], + "type": "text", + "content": "we propose to align each semantic class from the labeled domain with its corresponding semantic class in the unlabeled domain within the learned feature embedding space. To this end, we represent each class using a class-prototype vector and design a cross-domain semantic class-prototype alignment component to enforce the corresponding semantic class pairs across the domains are more similar in the feature embedding space than non-corresponding class pairs." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 168, + 296, + 205 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 168, + 296, + 205 + ], + "spans": [ + { + "bbox": [ + 55, + 168, + 296, + 205 + ], + "type": "text", + "content": "Specifically, we compute the prototype vector for the " + }, + { + "bbox": [ + 55, + 168, + 296, + 205 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 55, + 168, + 296, + 205 + ], + "type": "text", + "content": "-th class in the labeled set as the average feature embedding of the labeled instances belonging to the class:" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 77, + 213, + 296, + 230 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 77, + 213, + 296, + 230 + ], + "spans": [ + { + "bbox": [ + 77, + 213, + 296, + 230 + ], + "type": "interline_equation", + "content": "\\mathbf {p} _ {k} = \\mathbb {E} _ {\\left(\\mathbf {x} _ {i} ^ {l}, \\mathbf {y} _ {i} ^ {l}\\right) \\in \\mathcal {D} _ {L}} \\left[ \\mathbb {1} \\left(\\arg \\max _ {j} \\mathbf {y} _ {i j} ^ {l} = k\\right) f \\left(\\mathbf {x} _ {i} ^ {l}\\right) \\right] \\tag {7}", + "image_path": "4ca3c3f1702c087b9a758f7a036124e51b3a965c163e83b664d6bd9aac58dc59.jpg" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 239, + 296, + 299 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 239, + 296, + 299 + ], + "spans": [ + { + "bbox": [ + 55, + 239, + 296, + 299 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 55, + 239, + 296, + 299 + ], + "type": "inline_equation", + "content": "\\mathbf{y}_{ij}^{l}" + }, + { + "bbox": [ + 55, + 239, + 296, + 299 + ], + "type": "text", + "content": " denotes the " + }, + { + "bbox": [ + 55, + 239, + 296, + 299 + ], + "type": "inline_equation", + "content": "j" + }, + { + "bbox": [ + 55, + 239, + 296, + 299 + ], + "type": "text", + "content": "-th entry of the label vector " + }, + { + "bbox": [ + 55, + 239, + 296, + 299 + ], + "type": "inline_equation", + "content": "\\mathbf{y}_i^l" + }, + { + "bbox": [ + 55, + 239, + 296, + 299 + ], + "type": "text", + "content": ". The corresponding " + }, + { + "bbox": [ + 55, + 239, + 296, + 299 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 55, + 239, + 296, + 299 + ], + "type": "text", + "content": "-th semantic class in the unlabeled set is the " + }, + { + "bbox": [ + 55, + 239, + 296, + 299 + ], + "type": "inline_equation", + "content": "(C + k)" + }, + { + "bbox": [ + 55, + 239, + 296, + 299 + ], + "type": "text", + "content": "-th class in the " + }, + { + "bbox": [ + 55, + 239, + 296, + 299 + ], + "type": "inline_equation", + "content": "2C" + }, + { + "bbox": [ + 55, + 239, + 296, + 299 + ], + "type": "text", + "content": "-class label space. We compute the class prototype vectors in the unlabeled set based on the instances with reliable pseudo-labels, such that:" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 56, + 308, + 296, + 336 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 308, + 296, + 336 + ], + "spans": [ + { + "bbox": [ + 56, + 308, + 296, + 336 + ], + "type": "interline_equation", + "content": "\\mathbf {p} _ {C + k} = \\mathbb {E} _ {\\mathbf {x} _ {i} ^ {u} \\in \\mathcal {D} _ {U}} \\left[ \\mathbb {1} \\left(\\max (\\hat {\\mathbf {y}} _ {i} ^ {t}) > \\epsilon \\wedge \\right. \\left. 
\\arg \\max _ {j} \\hat {\\mathbf {y}} _ {i j} ^ {t} = C + k\\right) f \\left(\\mathbf {x} _ {i} ^ {u}\\right) \\right] \\tag {8}", + "image_path": "7ca9c95b7c1c2783e78fba14e07212c50b807159f1dd0058e8a853f0c2346a88.jpg" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 344, + 296, + 392 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 344, + 296, + 392 + ], + "spans": [ + { + "bbox": [ + 55, + 344, + 296, + 392 + ], + "type": "text", + "content": "Then for each semantic class " + }, + { + "bbox": [ + 55, + 344, + 296, + 392 + ], + "type": "inline_equation", + "content": "k \\in \\{1, \\dots, C\\}" + }, + { + "bbox": [ + 55, + 344, + 296, + 392 + ], + "type": "text", + "content": ", we align the prototypes of the corresponding class pairs from the labeled and unlabeled domains, " + }, + { + "bbox": [ + 55, + 344, + 296, + 392 + ], + "type": "inline_equation", + "content": "(\\mathbf{p}_k, \\mathbf{p}_{C + k})" + }, + { + "bbox": [ + 55, + 344, + 296, + 392 + ], + "type": "text", + "content": ", by employing a cross-domain contrastive prototype alignment loss as follows:" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 58, + 402, + 296, + 477 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 58, + 402, + 296, + 477 + ], + "spans": [ + { + "bbox": [ + 58, + 402, + 296, + 477 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} \\mathcal {L} _ {p a} = - \\sum_ {k = 1} ^ {C} \\left[ \\log \\frac {\\exp \\left(\\cos \\left(\\mathbf {p} _ {k} , \\mathbf {p} _ {C + k}\\right) / \\tau\\right)}{\\sum_ {k ^ {\\prime} = 1} ^ {C} \\mathbb {1} \\left(k ^ {\\prime} \\neq k\\right) \\exp \\left(\\cos \\left(\\mathbf {p} _ {k} , \\mathbf {p} _ {C + k ^ {\\prime}}\\right) / \\tau\\right)} \\right. \\\\ \\left. + \\log \\frac {\\exp \\left(\\cos \\left(\\mathbf {p} _ {k} , \\mathbf {p} _ {C + k}\\right) / \\tau\\right)}{\\sum_ {k ^ {\\prime} = 1} ^ {C} \\mathbb {1} \\left(k ^ {\\prime} \\neq k\\right) \\exp \\left(\\cos \\left(\\mathbf {p} _ {k ^ {\\prime}} , \\mathbf {p} _ {C + k}\\right) / \\tau\\right)} \\right] \\tag {9} \\\\ \\end{array}", + "image_path": "7ee5234071348693882e442f395602353fed5620d5de300e377274d384123f9f.jpg" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 488, + 296, + 571 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 488, + 296, + 571 + ], + "spans": [ + { + "bbox": [ + 55, + 488, + 296, + 571 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 55, + 488, + 296, + 571 + ], + "type": "inline_equation", + "content": "\\tau" + }, + { + "bbox": [ + 55, + 488, + 296, + 571 + ], + "type": "text", + "content": " is a temperature hyper-parameter, and " + }, + { + "bbox": [ + 55, + 488, + 296, + 571 + ], + "type": "inline_equation", + "content": "\\cos (\\cdot ,\\cdot)" + }, + { + "bbox": [ + 55, + 488, + 296, + 571 + ], + "type": "text", + "content": " denotes the cosine similarity function. This contrastive loss promotes the sharing of predictive information between the labeled and unlabeled domains by encouraging the corresponding class prototype pairs to be closer to each other while simultaneously pushing the non-corresponding cross-domain class prototype pairs farther apart." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 578, + 224, + 590 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 578, + 224, + 590 + ], + "spans": [ + { + "bbox": [ + 55, + 578, + 224, + 590 + ], + "type": "text", + "content": "3.2.4. 
Progressive Inter-Domain Mixup" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 55, + 594, + 296, + 676 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 594, + 296, + 676 + ], + "spans": [ + { + "bbox": [ + 55, + 594, + 296, + 676 + ], + "type": "text", + "content": "In order to bridge the gap between the labeled domain and the unlabeled domain, we propose a progressive inter-domain mixup mechanism to augment the training set by dynamically generating synthetic instances between the labeled set and unlabeled set, with the objective of facilitating steady and efficient knowledge transfer from the labeled domain to the unlabeled domain." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 55, + 677, + 296, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 677, + 296, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 677, + 296, + 714 + ], + "type": "text", + "content": "Specifically, we generate an inter-domain synthetic instance " + }, + { + "bbox": [ + 55, + 677, + 296, + 714 + ], + "type": "inline_equation", + "content": "(\\mathbf{x}^m,\\mathbf{y}^m)" + }, + { + "bbox": [ + 55, + 677, + 296, + 714 + ], + "type": "text", + "content": " by mixing a labeled instance " + }, + { + "bbox": [ + 55, + 677, + 296, + 714 + ], + "type": "inline_equation", + "content": "(\\mathbf{x}^l,\\mathbf{y}^l)" + }, + { + "bbox": [ + 55, + 677, + 296, + 714 + ], + "type": "text", + "content": " from the labeled set " + }, + { + "bbox": [ + 55, + 677, + 296, + 714 + ], + "type": "inline_equation", + "content": "\\mathcal{D}_L" + }, + { + "bbox": [ + 55, + 677, + 296, + 714 + ], + "type": "text", + "content": " with a pseudo-labeled instance " + }, + { + "bbox": [ + 55, + 677, + 296, + 714 + ], + "type": "inline_equation", + "content": "(\\mathbf{x}^u,\\hat{\\mathbf{y}}^t)" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 314, + 72, + 538, + 84 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 72, + 538, + 84 + ], + "spans": [ + { + "bbox": [ + 314, + 72, + 538, + 84 + ], + "type": "text", + "content": "from the unlabeled set " + }, + { + "bbox": [ + 314, + 72, + 538, + 84 + ], + "type": "inline_equation", + "content": "\\mathcal{D}_U" + }, + { + "bbox": [ + 314, + 72, + 538, + 84 + ], + "type": "text", + "content": " through linear interpolation:" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 375, + 93, + 554, + 114 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 375, + 93, + 554, + 114 + ], + "spans": [ + { + "bbox": [ + 375, + 93, + 554, + 114 + ], + "type": "interline_equation", + "content": "\\mathbf {x} ^ {m} = \\lambda \\mathbf {x} ^ {u} + (1 - \\lambda) \\mathbf {x} ^ {l}, \\tag {10}", + "image_path": "b9cf5dab288c9d07d0c5f6a24916c02e84ac2ceb4cf62efbeb69f603d40e9bdc.jpg" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 350, + 110, + 503, + 124 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 350, + 110, + 503, + 124 + ], + "spans": [ + { + "bbox": [ + 350, + 110, + 503, + 124 + ], + "type": "interline_equation", + "content": "\\mathbf {y} ^ {m} = \\lambda \\hat {\\mathbf {y}} ^ {t} + (1 - \\lambda) \\operatorname {concat} (\\mathbf {y} ^ {l}, \\mathbf {0} _ {C}),", + "image_path": "7c5055700340c218d8a11f11f716e14c4f2099512b4991aa2e49b2cd3ea35a09.jpg" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 133, + 554, + 192 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 133, + 554, + 192 + ], +
"spans": [ + { + "bbox": [ + 313, + 133, + 554, + 192 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 133, + 554, + 192 + ], + "type": "inline_equation", + "content": "\\lambda \\in [0,1]" + }, + { + "bbox": [ + 313, + 133, + 554, + 192 + ], + "type": "text", + "content": " is the mixing coefficient. To fully utilize the available data in both domains, we can generate " + }, + { + "bbox": [ + 313, + 133, + 554, + 192 + ], + "type": "inline_equation", + "content": "N^{m} = \\max (N^{l},N^{u})" + }, + { + "bbox": [ + 313, + 133, + 554, + 192 + ], + "type": "text", + "content": " synthetic instances to form a synthetic set " + }, + { + "bbox": [ + 313, + 133, + 554, + 192 + ], + "type": "inline_equation", + "content": "\\mathcal{D}_{\\mathrm{Mixup}}" + }, + { + "bbox": [ + 313, + 133, + 554, + 192 + ], + "type": "text", + "content": " by mixing each instance in the larger domain with a randomly selected instance in the other domain." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 313, + 194, + 555, + 289 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 194, + 555, + 289 + ], + "spans": [ + { + "bbox": [ + 313, + 194, + 555, + 289 + ], + "type": "text", + "content": "In the standard mixup [40], the mixing coefficient " + }, + { + "bbox": [ + 313, + 194, + 555, + 289 + ], + "type": "inline_equation", + "content": "\\lambda" + }, + { + "bbox": [ + 313, + 194, + 555, + 289 + ], + "type": "text", + "content": " is sampled from a fixed " + }, + { + "bbox": [ + 313, + 194, + 555, + 289 + ], + "type": "inline_equation", + "content": "\\mathrm{Beta}(\\alpha, \\alpha)" + }, + { + "bbox": [ + 313, + 194, + 555, + 289 + ], + "type": "text", + "content": " distribution with hyperparameter " + }, + { + "bbox": [ + 313, + 194, + 555, + 289 + ], + "type": "inline_equation", + "content": "\\alpha" + }, + { + "bbox": [ + 313, + 194, + 555, + 289 + ], + "type": "text", + "content": ". To facilitate a steady and smooth adaptation from the labeled domain to the unlabeled domain for HSSL, we propose to dynamically generate the mixup data in each training iteration " + }, + { + "bbox": [ + 313, + 194, + 555, + 289 + ], + "type": "inline_equation", + "content": "t" + }, + { + "bbox": [ + 313, + 194, + 555, + 289 + ], + "type": "text", + "content": " by deploying a progressive mixing up strategy that samples " + }, + { + "bbox": [ + 313, + 194, + 555, + 289 + ], + "type": "inline_equation", + "content": "\\lambda" + }, + { + "bbox": [ + 313, + 194, + 555, + 289 + ], + "type": "text", + "content": " from a shifted " + }, + { + "bbox": [ + 313, + 194, + 555, + 289 + ], + "type": "inline_equation", + "content": "\\mathrm{Beta}(\\alpha, \\alpha)" + }, + { + "bbox": [ + 313, + 194, + 555, + 289 + ], + "type": "text", + "content": " distribution based on a schedule function " + }, + { + "bbox": [ + 313, + 194, + 555, + 289 + ], + "type": "inline_equation", + "content": "\\psi(t)" + }, + { + "bbox": [ + 313, + 194, + 555, + 289 + ], + "type": "text", + "content": ", such that:" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 335, + 298, + 555, + 321 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 335, + 298, + 555, + 321 + ], + "spans": [ + { + "bbox": [ + 335, + 298, + 555, + 321 + ], + "type": "interline_equation", + "content": "\\lambda \\sim \\psi (t) \\times \\operatorname {B e t a} (\\alpha , \\alpha), \\quad \\psi (t) = 0. 
5 + \\frac {t}{2 T} \\tag {11}", + "image_path": "a0cd3df0a94560e7125b97a67645b7480858afefeaf11206c92941cb9ea25512.jpg" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 313, + 328, + 555, + 447 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 328, + 555, + 447 + ], + "spans": [ + { + "bbox": [ + 313, + 328, + 555, + 447 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 328, + 555, + 447 + ], + "type": "inline_equation", + "content": "T" + }, + { + "bbox": [ + 313, + 328, + 555, + 447 + ], + "type": "text", + "content": " denotes the total number of training iterations. Following this schedule, at the beginning of the training process, we have " + }, + { + "bbox": [ + 313, + 328, + 555, + 447 + ], + "type": "inline_equation", + "content": "\\psi(0) \\approx 0.5" + }, + { + "bbox": [ + 313, + 328, + 555, + 447 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 328, + 555, + 447 + ], + "type": "inline_equation", + "content": "\\lambda" + }, + { + "bbox": [ + 313, + 328, + 555, + 447 + ], + "type": "text", + "content": " is sampled from the approximate interval [0, 0.5) as the model prioritizes the labeled domain, guarding against noisy pseudo-label predictions from unlabeled data. As the training progresses, the model gradually increases its reliance on the unlabeled data, and the interval [0, " + }, + { + "bbox": [ + 313, + 328, + 555, + 447 + ], + "type": "inline_equation", + "content": "\\psi(t)" + }, + { + "bbox": [ + 313, + 328, + 555, + 447 + ], + "type": "text", + "content": "] from which " + }, + { + "bbox": [ + 313, + 328, + 555, + 447 + ], + "type": "inline_equation", + "content": "\\lambda" + }, + { + "bbox": [ + 313, + 328, + 555, + 447 + ], + "type": "text", + "content": " is sampled is expanded gradually towards [0, 1] (with " + }, + { + "bbox": [ + 313, + 328, + 555, + 447 + ], + "type": "inline_equation", + "content": "\\psi(T) = 1" + }, + { + "bbox": [ + 313, + 328, + 555, + 447 + ], + "type": "text", + "content": "), allowing it to adapt seamlessly between domains." 
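To make Eqs. (10)-(11) concrete, the following sketch implements the progressive inter-domain mixup under our reading of the paper; the Beta parameter `alpha` and the tensor shapes are illustrative assumptions, and the zero-padding of the labeled-domain labels realizes the concat(y^l, 0_C) term.

```python
import torch

def sample_lambda(t: int, T: int, alpha: float = 0.75) -> torch.Tensor:
    """Eq. (11): lambda ~ psi(t) * Beta(alpha, alpha) with psi(t) = 0.5 + t/(2T),
    so lambda is drawn from [0, psi(t)], which widens from [0, 0.5] to [0, 1]."""
    psi = 0.5 + t / (2.0 * T)
    return psi * torch.distributions.Beta(alpha, alpha).sample()

def interdomain_mixup(x_l, y_l, x_u, y_hat, t, T):
    """Eq. (10): mix a labeled instance with a pseudo-labeled unlabeled one.

    y_l:   (B, C)  one-hot labels from the labeled domain
    y_hat: (B, 2C) pseudo-labels over the concatenated 2C label space
    """
    lam = sample_lambda(t, T)
    x_m = lam * x_u + (1 - lam) * x_l
    y_l_2c = torch.cat([y_l, torch.zeros_like(y_l)], dim=1)   # concat(y^l, 0_C)
    y_m = lam * y_hat + (1 - lam) * y_l_2c
    return x_m, y_m
```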
+ } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 313, + 449, + 555, + 484 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 449, + 555, + 484 + ], + "spans": [ + { + "bbox": [ + 313, + 449, + 555, + 484 + ], + "type": "text", + "content": "Following previous works on using mixup data [1], we employ the mixup set " + }, + { + "bbox": [ + 313, + 449, + 555, + 484 + ], + "type": "inline_equation", + "content": "\\mathcal{D}_{\\mathrm{Mixup}}" + }, + { + "bbox": [ + 313, + 449, + 555, + 484 + ], + "type": "text", + "content": " for model training by minimizing the following mean squared error:" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 325, + 493, + 555, + 514 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 325, + 493, + 555, + 514 + ], + "spans": [ + { + "bbox": [ + 325, + 493, + 555, + 514 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {\\text {M i x u p}} = \\mathbb {E} _ {\\left(\\mathbf {x} _ {i} ^ {m}, \\mathbf {y} _ {i} ^ {m}\\right) \\in \\mathcal {D} _ {\\text {M i x u p}}} \\left[ \\left\\| h \\left(f \\left(\\mathbf {x} _ {i} ^ {m}\\right)\\right) - \\mathbf {y} _ {i} ^ {m}\\right) \\right\\| ^ {2} \\tag {12}", + "image_path": "acbc731a222e727eacebe36237359a23be8b6f2d14612934d7a77432d5825bad.jpg" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 314, + 521, + 423, + 533 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 521, + 423, + 533 + ], + "spans": [ + { + "bbox": [ + 314, + 521, + 423, + 533 + ], + "type": "text", + "content": "3.2.5. Training Objective" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 313, + 536, + 554, + 584 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 536, + 554, + 584 + ], + "spans": [ + { + "bbox": [ + 313, + 536, + 554, + 584 + ], + "type": "text", + "content": "By integrating the classification loss terms on the labeled set, the unlabeled set, and the mixup set, with the class prototype alignment loss, we obtain the following joint training objective for the Uni-HSSL model:" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 326, + 594, + 555, + 609 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 326, + 594, + 555, + 609 + ], + "spans": [ + { + "bbox": [ + 326, + 594, + 555, + 609 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {\\text {t o t a l}} = \\mathcal {L} _ {c l} ^ {L} + \\lambda_ {p l} \\mathcal {L} _ {p l} ^ {U} + \\lambda_ {p a} \\mathcal {L} _ {p a} + \\lambda_ {\\text {M i x u p}} \\mathcal {L} _ {\\text {M i x u p}} \\tag {13}", + "image_path": "d1951c18aa2cadedb33f666f21dec94e5063a7fdd5aba0d618b9f8df96a6fab3.jpg" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 313, + 617, + 546, + 630 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 617, + 546, + 630 + ], + "spans": [ + { + "bbox": [ + 313, + 617, + 546, + 630 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 617, + 546, + 630 + ], + "type": "inline_equation", + "content": "\\lambda_{pl},\\lambda_{pa}" + }, + { + "bbox": [ + 313, + 617, + 546, + 630 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 617, + 546, + 630 + ], + "type": "inline_equation", + "content": "\\lambda_{\\mathrm{Mixup}}" + }, + { + "bbox": [ + 313, + 617, + 546, + 630 + ], + "type": "text", + "content": " are trade-off hyper-parameters." 
+ } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 313, + 640, + 394, + 654 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 640, + 394, + 654 + ], + "spans": [ + { + "bbox": [ + 313, + 640, + 394, + 654 + ], + "type": "text", + "content": "4. Experiments" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 313, + 659, + 430, + 673 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 659, + 430, + 673 + ], + "spans": [ + { + "bbox": [ + 313, + 659, + 430, + 673 + ], + "type": "text", + "content": "4.1. Experimental Setup" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 313, + 677, + 555, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 677, + 555, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 677, + 555, + 713 + ], + "type": "text", + "content": "Datasets We conducted comprehensive experiments to evaluate the performance of our proposed framework on four image classification benchmark datasets: Office-31," + } + ] + } + ], + "index": 26 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "text", + "content": "15375" + } + ] + } + ], + "index": 27 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 72, + 69, + 539, + 240 + ], + "blocks": [ + { + "bbox": [ + 72, + 69, + 539, + 240 + ], + "lines": [ + { + "bbox": [ + 72, + 69, + 539, + 240 + ], + "spans": [ + { + "bbox": [ + 72, + 69, + 539, + 240 + ], + "type": "table", + "html": "
<table><tr><td></td><td>Supervised</td><td>FlexMatch</td><td>FixMatch</td><td>SimMatch</td><td>CDAN+Sup</td><td>MCC+Sup</td><td>BiAdapt</td><td>Uni-HSSL</td></tr>
<tr><td>A/C</td><td>53.1(0.7)</td><td>51.1(1.2)</td><td>51.9(1.5)</td><td>57.8(1.6)</td><td>47.0(0.5)</td><td>54.9(1.2)</td><td>55.1(1.8)</td><td>60.1(0.9)</td></tr>
<tr><td>C/A</td><td>66.0(1.2)</td><td>68.1(1.3)</td><td>63.8(0.7)</td><td>69.7(0.9)</td><td>63.9(0.7)</td><td>70.5(0.3)</td><td>65.1(1.2)</td><td>72.0(0.7)</td></tr>
<tr><td>C/R</td><td>77.5(0.9)</td><td>72.1(0.9)</td><td>79.5(0.5)</td><td>78.5(0.5)</td><td>67.1(0.8)</td><td>75.4(0.5)</td><td>75.2(1.2)</td><td>80.5(0.4)</td></tr>
<tr><td>R/C</td><td>63.9(1.2)</td><td>67.8(1.6)</td><td>66.2(0.7)</td><td>64.3(0.8)</td><td>67.0(1.2)</td><td>69.3(0.5)</td><td>61.2(1.8)</td><td>72.8(0.6)</td></tr>
<tr><td>R/A</td><td>72.6(0.9)</td><td>59.0(1.2)</td><td>74.1(0.5)</td><td>70.5(0.5)</td><td>74.6(0.9)</td><td>75.1(0.8)</td><td>69.1(0.9)</td><td>75.8(0.6)</td></tr>
<tr><td>A/R</td><td>75.1(0.7)</td><td>73.5(0.9)</td><td>70.4(0.6)</td><td>75.8(0.5)</td><td>66.5(0.8)</td><td>77.3(0.8)</td><td>72.1(1.4)</td><td>78.3(0.5)</td></tr>
<tr><td>A/P</td><td>67.4(1.5)</td><td>64.0(0.9)</td><td>62.7(0.6)</td><td>68.9(0.6)</td><td>56.5(0.5)</td><td>71.8(0.4)</td><td>64.9(1.3)</td><td>70.9(0.8)</td></tr>
<tr><td>P/A</td><td>69.1(1.0)</td><td>64.1(1.2)</td><td>62.8(0.8)</td><td>69.7(0.9)</td><td>74.9(1.2)</td><td>76.1(0.2)</td><td>64.1(0.8)</td><td>78.7(0.4)</td></tr>
<tr><td>C/P</td><td>69.1(0.9)</td><td>65.6(1.0)</td><td>65.1(1.1)</td><td>70.0(0.4)</td><td>65.5(1.2)</td><td>71.2(0.5)</td><td>69.1(1.4)</td><td>72.8(0.7)</td></tr>
<tr><td>P/C</td><td>64.6(0.9)</td><td>64.3(1.1)</td><td>65.2(1.5)</td><td>68.5(0.8)</td><td>66.8(0.6)</td><td>68.0(0.5)</td><td>67.7(0.9)</td><td>69.9(0.9)</td></tr>
<tr><td>P/R</td><td>80.0(0.5)</td><td>73.3(0.7)</td><td>78.1(0.4)</td><td>78.1(0.2)</td><td>89.5(0.4)</td><td>82.1(0.6)</td><td>76.2(1.2)</td><td>82.9(0.4)</td></tr>
<tr><td>R/P</td><td>77.9(0.1)</td><td>68.1(1.2)</td><td>74.7(0.3)</td><td>74.0(0.7)</td><td>78.2(1.3)</td><td>77.0(1.2)</td><td>74.1(1.4)</td><td>82.1(0.5)</td></tr>
<tr><td>Avg.</td><td>69.7</td><td>65.9</td><td>67.9</td><td>70.5</td><td>67.4</td><td>72.3</td><td>67.8</td><td>74.7</td></tr></table>
", + "image_path": "3364bb9cb1c8607ff476c7cc33072927c71d28afc10fb4d760e55149e101017a.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 249, + 555, + 273 + ], + "lines": [ + { + "bbox": [ + 55, + 249, + 555, + 273 + ], + "spans": [ + { + "bbox": [ + 55, + 249, + 555, + 273 + ], + "type": "text", + "content": "Table 1. Mean classification accuracy (standard deviation is within parentheses) on the Office-Home dataset using the ResNet-50 backbone. The first domain in each row indicates the labeled domain while the second domain indicates the unlabeled domain." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 54, + 293, + 297, + 555 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 293, + 297, + 555 + ], + "spans": [ + { + "bbox": [ + 54, + 293, + 297, + 555 + ], + "type": "text", + "content": "Office-Home, VisDA, and ISIC-2019. In all four datasets, we split the samples of each domain into 90/10 train/test data. Office-31 [28] is comprised of a collection of 4,652 images spanning 31 different categories. The images are sourced from 3 distinct domains: Amazon (A), DSLR (D), and Webcam (W) with different image resolutions, quality, and lighting conditions. Office-Home [36] is a large collection of over 15,500 images spanning 65 categories. The images are sourced from 4 diverse domains: Artistic images (A), Clip Art (C), Product images (P), and Real-World images (R). VisDA-2017 [26] is a large-scale dataset tailored specifically for the visual domain adaptation task. This dataset includes images of 12 distinct categories from two domains, Synthetic (S) and Real (R). With the significant domain shift between the synthetic and real images, VisDA highlights the difficulties associated with bridging significant domain gaps. ISIC-2019 is a comprehensive repository of skin cancer research images sourced from 4 different sources: BCN-20000 (BCN) [8], Skin Cancer MNIST (HAM) [34], MSK4 [7], and an undefined source. We only utilize BCN and HAM sources as they include samples from all eight distinct classes." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 570, + 297, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 570, + 297, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 570, + 297, + 714 + ], + "type": "text", + "content": "Implementation Details For all baselines we compared our Uni-HSSL against, we strictly followed the implementation details and hyper-parameters specified in the corresponding original papers. In order to ensure consistent comparisons with a multitude of earlier studies across various benchmark datasets, we employed two common backbone networks: ResNet-50 and ResNet-101 which are pre-trained on the ImageNet [10] dataset. We utilized ResNet-101 for VisDA dataset experiments and ResNet-50 for all the other benchmark datasets. The supervised pre-training stage is made up of 10 epochs while the semi-supervised training stage is made up of 100 epochs. 
In both stages, we em" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 313, + 293, + 556, + 437 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 293, + 556, + 437 + ], + "spans": [ + { + "bbox": [ + 313, + 293, + 556, + 437 + ], + "type": "text", + "content": "ployed an SGD optimizer with a learning rate of " + }, + { + "bbox": [ + 313, + 293, + 556, + 437 + ], + "type": "inline_equation", + "content": "5e^{-4}" + }, + { + "bbox": [ + 313, + 293, + 556, + 437 + ], + "type": "text", + "content": " and Nesterov momentum [24] of 0.9. In the semi-supervised training stage, the learning rate is adjusted using a cosine annealing strategy [23, 37]. We set the L2 regularization coefficient to " + }, + { + "bbox": [ + 313, + 293, + 556, + 437 + ], + "type": "inline_equation", + "content": "1e^{-3}" + }, + { + "bbox": [ + 313, + 293, + 556, + 437 + ], + "type": "text", + "content": " and the batch size to 32 for all datasets. The trade-off hyper-parameters " + }, + { + "bbox": [ + 313, + 293, + 556, + 437 + ], + "type": "inline_equation", + "content": "\\lambda_{pl},\\lambda_{pa},\\lambda_{\\mathrm{Mixup}}" + }, + { + "bbox": [ + 313, + 293, + 556, + 437 + ], + "type": "text", + "content": " take the values 1, " + }, + { + "bbox": [ + 313, + 293, + 556, + 437 + ], + "type": "inline_equation", + "content": "1e^{-2}" + }, + { + "bbox": [ + 313, + 293, + 556, + 437 + ], + "type": "text", + "content": " and 1 respectively, while " + }, + { + "bbox": [ + 313, + 293, + 556, + 437 + ], + "type": "inline_equation", + "content": "\\tau" + }, + { + "bbox": [ + 313, + 293, + 556, + 437 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 293, + 556, + 437 + ], + "type": "inline_equation", + "content": "\\epsilon" + }, + { + "bbox": [ + 313, + 293, + 556, + 437 + ], + "type": "text", + "content": " take the value 0.5 and " + }, + { + "bbox": [ + 313, + 293, + 556, + 437 + ], + "type": "inline_equation", + "content": "\\beta" + }, + { + "bbox": [ + 313, + 293, + 556, + 437 + ], + "type": "text", + "content": " is set to 0.8. Furthermore, similar to [1], we apply random translations and horizontal flips to the input images prior to applying the Progressive Inter-Domain Mixup. We report the mean classification accuracy and the corresponding standard deviation over 3 runs in each experiment." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 313, + 445, + 432, + 456 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 445, + 432, + 456 + ], + "spans": [ + { + "bbox": [ + 313, + 445, + 432, + 456 + ], + "type": "text", + "content": "4.2. Comparison Results" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 313, + 462, + 556, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 462, + 556, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 462, + 556, + 715 + ], + "type": "text", + "content": "We evaluate the proposed Uni-HSSL framework on the heterogeneous semi-supervised learning tasks and compare it to four categories of baselines: Supervised Learning baselines, Semi-Supervised Learning (SSL) baselines, Unsupervised Domain Adaptation (UDA) baselines, and Bidirectional Adaptation baselines. The supervised baseline is exclusively trained on the labeled data and does not leverage the unlabeled data during training. We employ a set of representative SSL baselines (FlexMatch [39], FixMatch [31], and SimMatch [41]) and a set of representative UDA baselines (CDAN [22] and MCC [15]). 
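For concreteness, the optimization recipe described in Sec. 4.1 (SGD with Nesterov momentum, L2 regularization, and cosine annealing over the 100 semi-supervised epochs) corresponds to a few lines of standard PyTorch; the linear module below is only a placeholder for the ResNet backbone with its 2C-way head.

```python
import torch

model = torch.nn.Linear(2048, 2 * 65)   # placeholder: ResNet-50 features + 2C-way head

optimizer = torch.optim.SGD(model.parameters(), lr=5e-4, momentum=0.9,
                            nesterov=True, weight_decay=1e-3)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100)

for epoch in range(100):
    ...                                  # one epoch of Uni-HSSL training
    scheduler.step()
```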
In particular, we also compare our work with the state-of-the-art bidirectional adaptation method (BiAdapt) [14]. As the traditional UDA methods are trained to perform well solely on an unlabeled target domain, to ensure a fair comparison, we equip the UDA methods with a Supervised classifier (Sup) trained on the labeled set and a domain classifier and refer to them as " + }, + { + "bbox": [ + 313, + 462, + 556, + 715 + ], + "type": "inline_equation", + "content": "\\mathrm{MCC} + \\mathrm{Sup}" + }, + { + "bbox": [ + 313, + 462, + 556, + 715 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 462, + 556, + 715 + ], + "type": "inline_equation", + "content": "\\mathrm{CDAN} + \\mathrm{Sup}" + }, + { + "bbox": [ + 313, + 462, + 556, + 715 + ], + "type": "text", + "content": ". At inference time, the domain classifier assigns each test sample to the appropriate classifier in the corresponding domain—either the supervised classifier for samples predicted to originate from the labeled domain or" + } + ] + } + ], + "index": 6 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "type": "text", + "content": "15376" + } + ] + } + ], + "index": 7 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 59, + 69, + 552, + 240 + ], + "blocks": [ + { + "bbox": [ + 59, + 69, + 552, + 240 + ], + "lines": [ + { + "bbox": [ + 59, + 69, + 552, + 240 + ], + "spans": [ + { + "bbox": [ + 59, + 69, + 552, + 240 + ], + "type": "table", + "html": "
<table><tr><td></td><td>Supervised</td><td>FlexMatch</td><td>FixMatch</td><td>SimMatch</td><td>CDAN+Sup</td><td>MCC+Sup</td><td>BiAdapt</td><td>Uni-HSSL</td></tr>
<tr><td>Plane</td><td>93.8(0.2)</td><td>98.3(0.7)</td><td>94.9(0.5)</td><td>93.6(0.8)</td><td>98.4(0.3)</td><td>98.6(0.3)</td><td>90.1(1.4)</td><td>98.2(0.5)</td></tr>
<tr><td>Bicycle</td><td>74.1(0.5)</td><td>74.8(0.9)</td><td>53.5(0.2)</td><td>81.1(0.8)</td><td>94.4(0.7)</td><td>96.6(0.5)</td><td>79.1(1.2)</td><td>97.5(0.9)</td></tr>
<tr><td>Bus</td><td>79.4(0.7)</td><td>53.9(1.2)</td><td>79.5(0.8)</td><td>56.9(1.2)</td><td>90.1(0.5)</td><td>88.6(0.7)</td><td>54.7(1.3)</td><td>91.4(0.8)</td></tr>
<tr><td>Car</td><td>86.2(0.9)</td><td>36.4(2.1)</td><td>88.5(0.3)</td><td>59.6(1.5)</td><td>85.1(0.5)</td><td>84.8(0.9)</td><td>56.1(1.2)</td><td>89.0(0.9)</td></tr>
<tr><td>Horse</td><td>90.9(0.2)</td><td>97.4(0.5)</td><td>76.0(0.8)</td><td>65.6(1.0)</td><td>96.6(0.1)</td><td>97.6(0.3)</td><td>62.1(1.4)</td><td>98.2(0.3)</td></tr>
<tr><td>Knife</td><td>87.5(0.7)</td><td>77.2(0.8)</td><td>78.8(0.9)</td><td>71.9(0.5)</td><td>95.0(1.4)</td><td>95.1(0.9)</td><td>68.2(0.1)</td><td>98.9(0.4)</td></tr>
<tr><td>Motor.</td><td>94.5(0.4)</td><td>66.6(1.2)</td><td>40.8(1.2)</td><td>70.8(0.9)</td><td>96.6(0.5)</td><td>94.2(0.2)</td><td>68.1(1.5)</td><td>97.0(0.6)</td></tr>
<tr><td>Person</td><td>80.0(0.7)</td><td>80.5(0.8)</td><td>58.9(1.6)</td><td>64.1(0.8)</td><td>94.3(0.6)</td><td>94.6(0.2)</td><td>62.5(1.2)</td><td>95.6(0.7)</td></tr>
<tr><td>Plant</td><td>91.1(0.7)</td><td>91.8(0.8)</td><td>62.7(0.7)</td><td>65.5(0.9)</td><td>96.5(0.4)</td><td>97.3(0.5)</td><td>63.5(1.9)</td><td>95.7(0.2)</td></tr>
<tr><td>Skateboard</td><td>81.8(0.9)</td><td>90.0(0.5)</td><td>68.9(1.2)</td><td>57.0(1.7)</td><td>85.5(0.5)</td><td>83.0(0.8)</td><td>59.3(1.7)</td><td>91.5(0.8)</td></tr>
<tr><td>Train</td><td>96.0(0.3)</td><td>96.8(0.7)</td><td>94.2(0.4)</td><td>74.2(0.9)</td><td>95.7(0.7)</td><td>95.6(0.1)</td><td>71.3(1.5)</td><td>97.0(0.3)</td></tr>
<tr><td>Truck</td><td>59.8(0.9)</td><td>49.2(1.2)</td><td>49.5(1.2)</td><td>52.1(1.7)</td><td>79.8(0.2)</td><td>80.6(1.0)</td><td>50.1(1.5)</td><td>82.4(0.7)</td></tr>
<tr><td>Avg.</td><td>84.1</td><td>82.4</td><td>87.3</td><td>80.8</td><td>92.1</td><td>92.0</td><td>79.1</td><td>93.1</td></tr></table>
", + "image_path": "7714dd751dca36bb537019b0ac976995b6c380b6319cc16832ce47941ea1e9d0.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "type": "table", + "bbox": [ + 79, + 290, + 272, + 403 + ], + "blocks": [ + { + "bbox": [ + 55, + 249, + 555, + 273 + ], + "lines": [ + { + "bbox": [ + 55, + 249, + 555, + 273 + ], + "spans": [ + { + "bbox": [ + 55, + 249, + 555, + 273 + ], + "type": "text", + "content": "Table 2. Mean classification accuracy (standard deviation is within parentheses) on the VisDA dataset using the ResNet-101 backbone. The rows correspond to the different classes of the dataset." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 79, + 290, + 272, + 403 + ], + "lines": [ + { + "bbox": [ + 79, + 290, + 272, + 403 + ], + "spans": [ + { + "bbox": [ + 79, + 290, + 272, + 403 + ], + "type": "table", + "html": "
<table><tr><td></td><td>B/H</td><td>H/B</td><td>Avg.</td></tr>
<tr><td>Supervised</td><td>70.5(0.9)</td><td>65.4(1.2)</td><td>67.9</td></tr>
<tr><td>FlexMatch</td><td>71.3(1.4)</td><td>68.7(0.8)</td><td>70.0</td></tr>
<tr><td>FixMatch</td><td>77.5(0.8)</td><td>65.0(0.7)</td><td>71.3</td></tr>
<tr><td>SimMatch</td><td>75.1(1.5)</td><td>69.2(1.7)</td><td>72.2</td></tr>
<tr><td>CDAN+Sup</td><td>72.9(1.0)</td><td>65.2(0.4)</td><td>69.1</td></tr>
<tr><td>MCC+Sup</td><td>60.2(1.8)</td><td>56.7(1.7)</td><td>58.7</td></tr>
<tr><td>BiAdapt</td><td>74.2(0.7)</td><td>68.3(1.3)</td><td>71.2</td></tr>
<tr><td>Uni-HSSL</td><td>79.9(0.7)</td><td>71.0(0.9)</td><td>75.4</td></tr></table>
", + "image_path": "4181a9504a07b44ca03c9657c5cc6c0587c93c034995be45d31860c37b8522bd.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 411, + 295, + 456 + ], + "lines": [ + { + "bbox": [ + 55, + 411, + 295, + 456 + ], + "spans": [ + { + "bbox": [ + 55, + 411, + 295, + 456 + ], + "type": "text", + "content": "Table 3. Mean classification accuracy (standard deviation is within parentheses) on ISIC-2019 using the ResNet-50 backbone. The first domain in each row indicates the labeled domain the second domain indicates the unlabeled domain." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 55, + 472, + 295, + 495 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 472, + 295, + 495 + ], + "spans": [ + { + "bbox": [ + 55, + 472, + 295, + 495 + ], + "type": "text", + "content": "the UDA classifier for those predicted to come from the unlabeled domain." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 498, + 297, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 498, + 297, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 498, + 297, + 714 + ], + "type": "text", + "content": "The comparison results on the Office-Home, VisDA, ISIC-2019 and Office-31 datasets are reported in Tables 1, 2, 3 and 4, respectively, where the first domain indicates the labeled domain and the second domain indicates the unlabeled domain. In the case of the VisDA dataset, the labeled dataset is sampled from the synthetic domain (S) and the unlabeled dataset is sampled from the real domain (R) and we report the average classification accuracy for each class and the overall average classification accuracy. The tables show that Uni-HSSL consistently outperforms all baselines on all datasets across all setups. The performance gains over the supervised baseline are notable exceeding " + }, + { + "bbox": [ + 55, + 498, + 297, + 714 + ], + "type": "inline_equation", + "content": "9\\%" + }, + { + "bbox": [ + 55, + 498, + 297, + 714 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 55, + 498, + 297, + 714 + ], + "type": "inline_equation", + "content": "4\\%" + }, + { + "bbox": [ + 55, + 498, + 297, + 714 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 55, + 498, + 297, + 714 + ], + "type": "inline_equation", + "content": "9\\%" + }, + { + "bbox": [ + 55, + 498, + 297, + 714 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 55, + 498, + 297, + 714 + ], + "type": "inline_equation", + "content": "7\\%" + }, + { + "bbox": [ + 55, + 498, + 297, + 714 + ], + "type": "text", + "content": " on average in the cases of the Office-31, Office-Home, VisDA, and ISIC-2019 datasets, respectively. In the case of the VisDA dataset, the performance improvement over the supervised baseline at the class level is substantial, exceeding " + }, + { + "bbox": [ + 55, + 498, + 297, + 714 + ], + "type": "inline_equation", + "content": "22\\%" + }, + { + "bbox": [ + 55, + 498, + 297, + 714 + ], + "type": "text", + "content": " for some classes. 
Furthermore, Uni-HSSL consistently outperforms all the SSL baselines, achieving" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 313, + 293, + 556, + 604 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 293, + 556, + 604 + ], + "spans": [ + { + "bbox": [ + 313, + 293, + 556, + 604 + ], + "type": "text", + "content": "performance gains exceeding " + }, + { + "bbox": [ + 313, + 293, + 556, + 604 + ], + "type": "inline_equation", + "content": "3\\%" + }, + { + "bbox": [ + 313, + 293, + 556, + 604 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 313, + 293, + 556, + 604 + ], + "type": "inline_equation", + "content": "4\\%" + }, + { + "bbox": [ + 313, + 293, + 556, + 604 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 313, + 293, + 556, + 604 + ], + "type": "inline_equation", + "content": "5\\%" + }, + { + "bbox": [ + 313, + 293, + 556, + 604 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 313, + 293, + 556, + 604 + ], + "type": "inline_equation", + "content": "3\\%" + }, + { + "bbox": [ + 313, + 293, + 556, + 604 + ], + "type": "text", + "content": " over the most effective SSL baselines on the Office-31, Office-Home, VisDA, and ISIC-2019 datasets, respectively. In some cases, such as A/W on Office-31 and P/A on Office-Home, the performance improvement over SSL baselines is notable, surpassing " + }, + { + "bbox": [ + 313, + 293, + 556, + 604 + ], + "type": "inline_equation", + "content": "6\\%" + }, + { + "bbox": [ + 313, + 293, + 556, + 604 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 293, + 556, + 604 + ], + "type": "inline_equation", + "content": "8\\%" + }, + { + "bbox": [ + 313, + 293, + 556, + 604 + ], + "type": "text", + "content": ", respectively, highlighting the limitations of traditional SSL baselines in the proposed HSSL task. In the case of the UDA baselines, Uni-HSSL yields superior performance with all domain setups on all four datasets with performance gains around " + }, + { + "bbox": [ + 313, + 293, + 556, + 604 + ], + "type": "inline_equation", + "content": "4\\%" + }, + { + "bbox": [ + 313, + 293, + 556, + 604 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 313, + 293, + 556, + 604 + ], + "type": "inline_equation", + "content": "2\\%" + }, + { + "bbox": [ + 313, + 293, + 556, + 604 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 313, + 293, + 556, + 604 + ], + "type": "inline_equation", + "content": "1\\%" + }, + { + "bbox": [ + 313, + 293, + 556, + 604 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 313, + 293, + 556, + 604 + ], + "type": "inline_equation", + "content": "6\\%" + }, + { + "bbox": [ + 313, + 293, + 556, + 604 + ], + "type": "text", + "content": " on Office-31, Office-Home, VisDA and ISIC-2019 datasets, respectively. Uni-HSSL outperforms the UDA baselines on almost all classes of the VisDA dataset, with the UDA baselines slightly excelling in only two classes. However, Uni-HSSL still maintains superior overall performance compared to the UDA baselines. Furthermore, the MCC+Sup baseline does not perform well on the ISIC-2019 dataset, where it suffers a major drop in performance which can be attributed to the MCC baseline's sensitivity to the class imbalance inherent in this dataset. 
Moreover, our Uni-HSSL also substantially outperforms BiAdapt, with performance gains surpassing " + }, + { + "bbox": [ + 313, + 293, + 556, + 604 + ], + "type": "inline_equation", + "content": "5\\%" + }, + { + "bbox": [ + 313, + 293, + 556, + 604 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 313, + 293, + 556, + 604 + ], + "type": "inline_equation", + "content": "6\\%" + }, + { + "bbox": [ + 313, + 293, + 556, + 604 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 313, + 293, + 556, + 604 + ], + "type": "inline_equation", + "content": "14\\%" + }, + { + "bbox": [ + 313, + 293, + 556, + 604 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 313, + 293, + 556, + 604 + ], + "type": "inline_equation", + "content": "4\\%" + }, + { + "bbox": [ + 313, + 293, + 556, + 604 + ], + "type": "text", + "content": " on the Office-31, Office-Home, VisDA and ISIC-2019 datasets, respectively. These results underscore the robustness of Uni-HSSL and highlight the limitations of BiAdapt in effectively addressing the challenges posed by the proposed HSSL task." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 313, + 612, + 408, + 624 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 612, + 408, + 624 + ], + "spans": [ + { + "bbox": [ + 313, + 612, + 408, + 624 + ], + "type": "text", + "content": "4.3. Ablation Study" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 629, + 556, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 629, + 556, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 629, + 556, + 715 + ], + "type": "text", + "content": "In order to investigate the contribution of each component of the proposed framework, we conducted an ablation study to compare the proposed Uni-HSSL with its six variants: (1) “-w/o WMA”, which drops the Weighted Moving Average component of the pseudo-label update and simply uses the model predictions to generate pseudo-labels; (2) “-w/o " + }, + { + "bbox": [ + 313, + 629, + 556, + 715 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{\\mathrm{cl}}^{L}" + }, + { + "bbox": [ + 313, + 629, + 556, + 715 + ], + "type": "text", + "content": "”, which drops the cross-entropy classification loss on the la" + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "text", + "content": "15377" + } + ] + } + ], + "index": 9 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 112, + 70, + 497, + 182 + ], + "blocks": [ + { + "bbox": [ + 112, + 70, + 497, + 182 + ], + "lines": [ + { + "bbox": [ + 112, + 70, + 497, + 182 + ], + "spans": [ + { + "bbox": [ + 112, + 70, + 497, + 182 + ], + "type": "table", + "html": "
<table><tr><td></td><td>W/A</td><td>A/W</td><td>A/D</td><td>D/A</td><td>D/W</td><td>W/D</td><td>Avg.</td></tr>
<tr><td>Supervised</td><td>68.6(1.6)</td><td>82.8(1.2)</td><td>85.1(1.8)</td><td>35.5(0.9)</td><td>96.9(0.4)</td><td>98.2(0.5)</td><td>77.8</td></tr>
<tr><td>FlexMatch</td><td>68.1(1.8)</td><td>81.3(1.3)</td><td>85.1(1.8)</td><td>63.0(2.1)</td><td>98.5(0.2)</td><td>98.9(0.2)</td><td>82.4</td></tr>
<tr><td>FixMatch</td><td>69.1(1.3)</td><td>83.4(0.9)</td><td>86.4(0.8)</td><td>53.7(1.3)</td><td>98.1(0.2)</td><td>98.2(0.2)</td><td>81.5</td></tr>
<tr><td>SimMatch</td><td>71.1(0.9)</td><td>84.1(1.0)</td><td>86.5(0.5)</td><td>68.6(1.1)</td><td>96.8(0.5)</td><td>98.8(0.4)</td><td>84.3</td></tr>
<tr><td>CDAN+Sup</td><td>61.2(1.2)</td><td>82.5(1.3)</td><td>87.4(2.2)</td><td>58.3(2.6)</td><td>79.2(0.4)</td><td>97.5(0.4)</td><td>77.7</td></tr>
<tr><td>MCC+Sup</td><td>71.5(2.7)</td><td>88.8(0.7)</td><td>89.1(0.5)</td><td>67.6(1.3)</td><td>81.7(0.7)</td><td>99.5(0.4)</td><td>83.0</td></tr>
<tr><td>BiAdapt</td><td>70.2(0.9)</td><td>85.0(0.5)</td><td>77.4(0.7)</td><td>67.1(1.0)</td><td>94.2(0.5)</td><td>98.5(0.3)</td><td>82.0</td></tr>
<tr><td>Uni-HSSL</td><td>73.1(1.0)</td><td>90.2(0.8)</td><td>90.0(0.2)</td><td>72.1(0.7)</td><td>100(0.0)</td><td>100(0.0)</td><td>87.5</td></tr></table>
", + "image_path": "3139c34cc36c435b4dbaa76d7004cef3322abdcfff60864d9bba26612b566892.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "type": "table", + "bbox": [ + 97, + 224, + 514, + 324 + ], + "blocks": [ + { + "bbox": [ + 55, + 191, + 553, + 213 + ], + "lines": [ + { + "bbox": [ + 55, + 191, + 553, + 213 + ], + "spans": [ + { + "bbox": [ + 55, + 191, + 553, + 213 + ], + "type": "text", + "content": "Table 4. Mean classification accuracy (standard deviation is within parentheses) on the Office-31 dataset using the ResNet-50 backbone. The first domain in each column indicates the labeled domain the second domain indicates the unlabeled domain." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 97, + 224, + 514, + 324 + ], + "lines": [ + { + "bbox": [ + 97, + 224, + 514, + 324 + ], + "spans": [ + { + "bbox": [ + 97, + 224, + 514, + 324 + ], + "type": "table", + "html": "
<table><tr><td></td><td>W/A</td><td>A/W</td><td>A/D</td><td>D/A</td><td>D/W</td><td>W/D</td><td>Avg.</td></tr>
<tr><td>Uni-HSSL</td><td>73.1(1.0)</td><td>90.2(0.8)</td><td>90.0(0.2)</td><td>72.1(0.7)</td><td>100(0.0)</td><td>100(0.0)</td><td>87.5</td></tr>
<tr><td>-w/o WMA</td><td>72.8(0.5)</td><td>87.1(0.8)</td><td>88.3(0.9)</td><td>71.0(0.8)</td><td>100(0.0)</td><td>100(0.0)</td><td>86.8</td></tr>
<tr><td>-w/o \( \mathcal{L}_{cl}^{L} \)</td><td>67.6(1.7)</td><td>85.5(0.8)</td><td>86.1(1.2)</td><td>64.8(2.0)</td><td>93.2(0.5)</td><td>92.9(0.6)</td><td>81.7</td></tr>
<tr><td>-w/o \( \mathcal{L}_{pl}^{U} \)</td><td>72.5(0.8)</td><td>87.9(0.9)</td><td>88.1(0.7)</td><td>71.0(0.9)</td><td>98.0(0.2)</td><td>98.5(0.2)</td><td>86.1</td></tr>
<tr><td>-w/o \( \mathcal{L}_{pa} \)</td><td>72.7(0.5)</td><td>88.9(0.7)</td><td>87.2(0.6)</td><td>71.3(0.9)</td><td>99.1(0.0)</td><td>100(0.0)</td><td>86.5</td></tr>
<tr><td>-w/o \( \mathcal{L}_{Mixup} \)</td><td>71.9(1.2)</td><td>86.7(0.9)</td><td>88.1(0.8)</td><td>71.3(1.1)</td><td>98.0(0.4)</td><td>99.9(0.0)</td><td>86.1</td></tr>
<tr><td>-w/o Prog. Mixup</td><td>71.3(0.9)</td><td>84.8(0.9)</td><td>88.1(1.0)</td><td>70.0(1.3)</td><td>99.2(0.5)</td><td>99.9(0.0)</td><td>85.6</td></tr></table>
", + "image_path": "fed544c827304609b0fb2b46f32701da9459e32623d38cec8da7914ee7895e1c.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 333, + 555, + 365 + ], + "lines": [ + { + "bbox": [ + 55, + 333, + 555, + 365 + ], + "spans": [ + { + "bbox": [ + 55, + 333, + 555, + 365 + ], + "type": "text", + "content": "Table 5. Ablation study results in terms of mean classification accuracy (standard deviation is within parentheses) on the Office-31 dataset using the ResNet-50 backbone. The first domain in each column indicates the labeled domain while the second domain indicates the unlabeled domain." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 54, + 375, + 295, + 495 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 375, + 295, + 495 + ], + "spans": [ + { + "bbox": [ + 54, + 375, + 295, + 495 + ], + "type": "text", + "content": "beled set " + }, + { + "bbox": [ + 54, + 375, + 295, + 495 + ], + "type": "inline_equation", + "content": "\\mathcal{D}_L" + }, + { + "bbox": [ + 54, + 375, + 295, + 495 + ], + "type": "text", + "content": "; (3) “" + }, + { + "bbox": [ + 54, + 375, + 295, + 495 + ], + "type": "inline_equation", + "content": "-\\mathrm{w/o}\\mathcal{L}_{\\mathrm{pl}}^U" + }, + { + "bbox": [ + 54, + 375, + 295, + 495 + ], + "type": "text", + "content": ", which drops the cross-entropy pseudo-label classification loss on the unlabeled set " + }, + { + "bbox": [ + 54, + 375, + 295, + 495 + ], + "type": "inline_equation", + "content": "\\mathcal{D}_U" + }, + { + "bbox": [ + 54, + 375, + 295, + 495 + ], + "type": "text", + "content": "; (4) “" + }, + { + "bbox": [ + 54, + 375, + 295, + 495 + ], + "type": "inline_equation", + "content": "-\\mathrm{w/o}\\mathcal{L}_{\\mathrm{pa}}" + }, + { + "bbox": [ + 54, + 375, + 295, + 495 + ], + "type": "text", + "content": ", which drops the Cross-Domain Prototype Alignment component; (5) “" + }, + { + "bbox": [ + 54, + 375, + 295, + 495 + ], + "type": "inline_equation", + "content": "-\\mathrm{w/o}\\mathcal{L}_{\\mathrm{Mixup}}" + }, + { + "bbox": [ + 54, + 375, + 295, + 495 + ], + "type": "text", + "content": ", which drops the Progressive Inter-Domain Mixup component; and (6) “" + }, + { + "bbox": [ + 54, + 375, + 295, + 495 + ], + "type": "inline_equation", + "content": "-\\mathrm{w/o}" + }, + { + "bbox": [ + 54, + 375, + 295, + 495 + ], + "type": "text", + "content": " Prog. Mixup”, which drops the progressive component of the Inter-Domain Mixup and uses a simple mixup for inter-domain data augmentation. We compare the proposed UniHSSL with all the six variants on the Office-31 dataset and report the results in Table 5." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 498, + 295, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 498, + 295, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 498, + 295, + 713 + ], + "type": "text", + "content": "From the table, we can see that dropping any component from the proposed unified framework results in performance degradation in all cases. 
“-w/o " + }, + { + "bbox": [ + 55, + 498, + 295, + 713 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{\\mathrm{cl}}^{L}" + }, + { + "bbox": [ + 55, + 498, + 295, + 713 + ], + "type": "text", + "content": "” variant suffered the largest performance degradation, which highlights the importance of the ground-truth labels of " + }, + { + "bbox": [ + 55, + 498, + 295, + 713 + ], + "type": "inline_equation", + "content": "\\mathcal{D}_L" + }, + { + "bbox": [ + 55, + 498, + 295, + 713 + ], + "type": "text", + "content": " in guiding the learning process of the framework. Dropping the WMA from the pseudo-label generation component led to a slight average performance drop to " + }, + { + "bbox": [ + 55, + 498, + 295, + 713 + ], + "type": "inline_equation", + "content": "86.8\\%" + }, + { + "bbox": [ + 55, + 498, + 295, + 713 + ], + "type": "text", + "content": ", underscoring its role in obtaining stable and confident pseudo-labels. Similarly, dropping the classification loss on the unlabeled data " + }, + { + "bbox": [ + 55, + 498, + 295, + 713 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{pl}^{U}" + }, + { + "bbox": [ + 55, + 498, + 295, + 713 + ], + "type": "text", + "content": " led to a performance degradation to " + }, + { + "bbox": [ + 55, + 498, + 295, + 713 + ], + "type": "inline_equation", + "content": "86.1\\%" + }, + { + "bbox": [ + 55, + 498, + 295, + 713 + ], + "type": "text", + "content": ". Furthermore, the variant “-w/o Prog. Mixup” suffers a larger drop in performance in comparison with the variant “-w/o " + }, + { + "bbox": [ + 55, + 498, + 295, + 713 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{\\mathrm{Mixup}}" + }, + { + "bbox": [ + 55, + 498, + 295, + 713 + ], + "type": "text", + "content": "”, which highlights the importance of progressively generating the augmented samples to ensure the accuracy of their corresponding augmented labels. Generating inter-domain augmented samples without taking into account the domain gap between the labeled domain and unlabeled domain can" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 313, + 376, + 555, + 437 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 376, + 555, + 437 + ], + "spans": [ + { + "bbox": [ + 313, + 376, + 555, + 437 + ], + "type": "text", + "content": "lead to a degradation in performance due to the noisy augmented labels of the generated samples. Overall, the consistent performance drops across all the tasks of Office-31 for each variant validate the essential contribution of each corresponding component of the Uni-HSSL framework." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 313, + 452, + 388, + 464 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 452, + 388, + 464 + ], + "spans": [ + { + "bbox": [ + 313, + 452, + 388, + 464 + ], + "type": "text", + "content": "5. Conclusion" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 474, + 555, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 474, + 555, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 474, + 555, + 713 + ], + "type": "text", + "content": "In this paper, we introduced a challenging heterogeneous semi-supervised learning problem, where the labeled and unlabeled training data come from different domains and possess different label and class feature distributions. 
To address this demanding setup, we proposed a Unified Framework for Heterogeneous Semi-Supervised Learning (Uni-HSSL), which trains a fine-grained classification model over the concatenated label space by effectively exploiting the labeled and unlabeled data as well as their relationships. Uni-HSSL adopts a WMA pseudo-labeling strategy to obtain stable and confident pseudo-labels for the unlabeled data, while deploying a cross-domain class prototype alignment component to support knowledge transfer and sharing between domains. A novel progressive inter-domain mixup component is further devised to augment the training data and bridge the significant gap between the labeled and unlabeled domains. The experimental results demonstrate the effectiveness and superiority of the proposed Uni-HSSL over state-of-the-art semi-supervised learning methods and unsupervised domain adaptation baselines." + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "text", + "content": "15378" + } + ] + } + ], + "index": 9 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 71, + 115, + 83 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 71, + 115, + 83 + ], + "spans": [ + { + "bbox": [ + 56, + 71, + 115, + 83 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 57, + 91, + 297, + 714 + ], + "type": "list", + "angle": 0, + "index": 14, + "blocks": [ + { + "bbox": [ + 61, + 91, + 297, + 145 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 91, + 297, + 145 + ], + "spans": [ + { + "bbox": [ + 61, + 91, + 297, + 145 + ], + "type": "text", + "content": "[1] David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, and Colin A Raffel. Mixmatch: A holistic approach to semi-supervised learning. In Advances in Neural Information Processing Systems (NeurIPS), 2019. 5, 6" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 61, + 148, + 296, + 201 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 148, + 296, + 201 + ], + "spans": [ + { + "bbox": [ + 61, + 148, + 296, + 201 + ], + "type": "text", + "content": "[2] David Berthelot, Rebecca Roelofs, Kihyuk Sohn, Nicholas Carlini, and Alexey Kurakin. Adamatch: A unified approach to semi-supervised learning and domain adaptation. In International Conference on Learning Representations (ICLR), 2021. 3" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 62, + 204, + 295, + 237 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 204, + 295, + 237 + ], + "spans": [ + { + "bbox": [ + 62, + 204, + 295, + 237 + ], + "type": "text", + "content": "[3] Kaidi Cao, Maria Brbic, and Jure Leskovec. Open-world semi-supervised learning. In International Conference on Learning Representations (ICLR), 2022. 2" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 62, + 239, + 296, + 282 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 239, + 296, + 282 + ], + "spans": [ + { + "bbox": [ + 62, + 239, + 296, + 282 + ], + "type": "text", + "content": "[4] Chao Chen, Zhihang Fu, Zhihong Chen, Sheng Jin, Zhaowei Cheng, Xinyu Jin, and Xian-Sheng Hua. 
Homm: Higher-order moment matching for unsupervised domain adaptation. In AAAI Conference on Artificial Intelligence, 2020. 3" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 62, + 285, + 296, + 328 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 285, + 296, + 328 + ], + "spans": [ + { + "bbox": [ + 62, + 285, + 296, + 328 + ], + "type": "text", + "content": "[5] Xinyang Chen, Sinan Wang, Mingsheng Long, and Jianmin Wang. Transferability vs. discriminability: Batch spectral penalization for adversarial domain adaptation. In International Conference on Machine Learning (ICML), 2019. 3" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 62, + 330, + 296, + 372 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 330, + 296, + 372 + ], + "spans": [ + { + "bbox": [ + 62, + 330, + 296, + 372 + ], + "type": "text", + "content": "[6] Yanbei Chen, Xiatian Zhu, Wei Li, and Shaogang Gong. Semi-supervised learning under class distribution mismatch. In Proceedings of the AAAI Conference on Artificial Intelligence, 2020. 2" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 62, + 375, + 296, + 462 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 375, + 296, + 462 + ], + "spans": [ + { + "bbox": [ + 62, + 375, + 296, + 462 + ], + "type": "text", + "content": "[7] Noel CF Codella, David Gutman, M Emre Celebi, Brian Helba, Michael A Marchetti, Stephen W Dusza, Aadi Kalloo, Konstantinos Liopyris, Nabin Mishra, Harald Kittler, et al. Skin lesion analysis toward melanoma detection: A challenge at the 2017 international symposium on biomedical imaging (isbi), hosted by the international skin imaging collaboration (isic). In International Symposium on Biomedical Imaging (ISBI), 2018. 6" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 62, + 464, + 296, + 518 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 464, + 296, + 518 + ], + "spans": [ + { + "bbox": [ + 62, + 464, + 296, + 518 + ], + "type": "text", + "content": "[8] Marc Combalia, Noel CF Codella, Veronica Rotemberg, Brian Helba, Veronica Vilaplana, Ofer Reiter, Cristina Carrera, Alicia Barreiro, Allan C Halpern, Susana Puig, et al. Bcn20000: Dermoscopic lesions in the wild. arXiv preprint arXiv:1908.02288, 2019. 6" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 62, + 520, + 296, + 575 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 520, + 296, + 575 + ], + "spans": [ + { + "bbox": [ + 62, + 520, + 296, + 575 + ], + "type": "text", + "content": "[9] Shuhao Cui, Shuhui Wang, Junbao Zhuo, Liang Li, Qingming Huang, and Qi Tian. Towards discriminability and diversity: Batch nuclear-norm maximization under label insufficient situations. In Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 3" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 57, + 577, + 296, + 620 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 577, + 296, + 620 + ], + "spans": [ + { + "bbox": [ + 57, + 577, + 296, + 620 + ], + "type": "text", + "content": "[10] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In Conference on Computer Vision and Pattern Recognition (CVPR), 2009. 
6" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 57, + 623, + 296, + 655 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 623, + 296, + 655 + ], + "spans": [ + { + "bbox": [ + 57, + 623, + 296, + 655 + ], + "type": "text", + "content": "[11] Yaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. In International Conference on Machine Learning (ICML), 2015. 2, 3" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 57, + 657, + 296, + 700 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 657, + 296, + 700 + ], + "spans": [ + { + "bbox": [ + 57, + 657, + 296, + 700 + ], + "type": "text", + "content": "[12] Lan-Zhe Guo, Zhen-Yu Zhang, Yuan Jiang, Yu-Feng Li, and Zhi-Hua Zhou. Safe deep semi-supervised learning for unseen-class unlabeled data. In International Conference on Machine Learning (ICML). PMLR, 2020. 2" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 57, + 702, + 296, + 714 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 702, + 296, + 714 + ], + "spans": [ + { + "bbox": [ + 57, + 702, + 296, + 714 + ], + "type": "text", + "content": "[13] Zhuo Huang, Chao Xue, Bo Han, Jian Yang, and Chen Gong." + } + ] + } + ], + "index": 13 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 73, + 555, + 712 + ], + "type": "list", + "angle": 0, + "index": 32, + "blocks": [ + { + "bbox": [ + 333, + 73, + 555, + 95 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 333, + 73, + 555, + 95 + ], + "spans": [ + { + "bbox": [ + 333, + 73, + 555, + 95 + ], + "type": "text", + "content": "Universal semi-supervised learning. Advances in Neural Information Processing Systems (NeurIPS), 2021. 2" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 316, + 96, + 555, + 149 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 96, + 555, + 149 + ], + "spans": [ + { + "bbox": [ + 316, + 96, + 555, + 149 + ], + "type": "text", + "content": "[14] Lin-Han Jia, Lan-Zhe Guo, Zhi Zhou, Jie-Jing Shao, Yuke Xiang, and Yu-Feng Li. Bidirectional adaptation for robust semi-supervised learning with inconsistent data distributions. In International Conference on Machine Learning (ICML), 2023. 2, 3, 6" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 317, + 151, + 555, + 184 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 151, + 555, + 184 + ], + "spans": [ + { + "bbox": [ + 317, + 151, + 555, + 184 + ], + "type": "text", + "content": "[15] Ying Jin, Ximei Wang, Mingsheng Long, and Jianmin Wang. Minimum class confusion for versatile domain adaptation. In European Conference on Computer Vision (ECCV), 2020. 6" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 317, + 185, + 555, + 217 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 185, + 555, + 217 + ], + "spans": [ + { + "bbox": [ + 317, + 185, + 555, + 217 + ], + "type": "text", + "content": "[16] Samuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning. In International Conference on Learning Representations (ICLR), 2017. 2" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 316, + 217, + 555, + 250 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 217, + 555, + 250 + ], + "spans": [ + { + "bbox": [ + 316, + 217, + 555, + 250 + ], + "type": "text", + "content": "[17] Qicheng Lao, Xiang Jiang, and Mohammad Havaei. 
Hypothesis disparity regularized mutual information maximization. In AAAI Conference on Artificial Intelligence, 2021. 3" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 316, + 251, + 554, + 272 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 251, + 554, + 272 + ], + "spans": [ + { + "bbox": [ + 316, + 251, + 554, + 272 + ], + "type": "text", + "content": "[18] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. In Nature, 2015. 1" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 317, + 273, + 555, + 315 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 273, + 555, + 315 + ], + "spans": [ + { + "bbox": [ + 317, + 273, + 555, + 315 + ], + "type": "text", + "content": "[19] Dong-Hyun Lee et al. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In Workshop on challenges in representation learning, ICML, 2013. 2" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 316, + 316, + 555, + 349 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 316, + 555, + 349 + ], + "spans": [ + { + "bbox": [ + 316, + 316, + 555, + 349 + ], + "type": "text", + "content": "[20] Hong Liu, Jianmin Wang, and Mingsheng Long. Cycle self-training for domain adaptation. In Advances in Neural Information Processing Systems (NeurIPS), 2021. 3" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 316, + 350, + 555, + 392 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 350, + 555, + 392 + ], + "spans": [ + { + "bbox": [ + 316, + 350, + 555, + 392 + ], + "type": "text", + "content": "[21] Mingsheng Long, Yue Cao, Jianmin Wang, and Michael Jordan. Learning transferable features with deep adaptation networks. In International Conference on Machine Learning (ICML), 2015. 2, 3" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 316, + 394, + 555, + 437 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 394, + 555, + 437 + ], + "spans": [ + { + "bbox": [ + 316, + 394, + 555, + 437 + ], + "type": "text", + "content": "[22] Mingsheng Long, Zhangjie Cao, Jianmin Wang, and Michael I Jordan. Conditional adversarial domain adaptation. In Advances in Neural Information Processing Systems (NeurIPS), 2018. 3, 6" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 316, + 438, + 554, + 470 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 438, + 554, + 470 + ], + "spans": [ + { + "bbox": [ + 316, + 438, + 554, + 470 + ], + "type": "text", + "content": "[23] Ilya Loshchilov and Frank Hutter. SGDR: Stochastic gradient descent with warm restarts. In International Conference on Learning Representations (ICLR), 2017. 6" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 316, + 472, + 555, + 503 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 472, + 555, + 503 + ], + "spans": [ + { + "bbox": [ + 316, + 472, + 555, + 503 + ], + "type": "text", + "content": "[24] Yurii Nesterov. A method for unconstrained convex minimization problem with the rate of convergence o (1/k2). In Dokl. Akad. Nauk. SSSR, 1983. 6" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 317, + 505, + 554, + 548 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 505, + 554, + 548 + ], + "spans": [ + { + "bbox": [ + 317, + 505, + 554, + 548 + ], + "type": "text", + "content": "[25] Avital Oliver, Augustus Odena, Colin A Raffel, Ekin Dogus Cubuk, and Ian Goodfellow. 
Realistic evaluation of deep semi-supervised learning algorithms. In Advances in Neural Information Processing Systems (NeurIPS), 2018. 1, 2" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 317, + 548, + 555, + 591 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 548, + 555, + 591 + ], + "spans": [ + { + "bbox": [ + 317, + 548, + 555, + 591 + ], + "type": "text", + "content": "[26] Xingchao Peng, Ben Usman, Neela Kaushik, Judy Hoffman, Dequan Wang, and Kate Saenko. Visda: The visual domain adaptation challenge. arXiv preprint arXiv:1710.06924, 2017. 6" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 317, + 593, + 555, + 635 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 593, + 555, + 635 + ], + "spans": [ + { + "bbox": [ + 317, + 593, + 555, + 635 + ], + "type": "text", + "content": "[27] Hoang Phan, Trung Le, Trung Phung, Anh Tuan Bui, Nhat Ho, and Dinh Phung. Global-local regularization via distributional robustness. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2023. 3" + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 317, + 637, + 555, + 669 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 637, + 555, + 669 + ], + "spans": [ + { + "bbox": [ + 317, + 637, + 555, + 669 + ], + "type": "text", + "content": "[28] Kate Saenko, Brian Kulis, Mario Fritz, and Trevor Darrell. Adapting visual category models to new domains. In European Conference on Computer Vision (ECCV), 2010. 6" + } + ] + } + ], + "index": 30 + }, + { + "bbox": [ + 316, + 670, + 555, + 712 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 670, + 555, + 712 + ], + "spans": [ + { + "bbox": [ + 316, + 670, + 555, + 712 + ], + "type": "text", + "content": "[29] Jian Shen, Yanru Qu, Weinan Zhang, and Yong Yu. Wasserstein distance guided representation learning for domain adaptation. In AAAI Conference on Artificial Intelligence, 2018. 3" + } + ] + } + ], + "index": 31 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "type": "text", + "content": "15379" + } + ] + } + ], + "index": 33 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 73, + 297, + 654 + ], + "type": "list", + "angle": 0, + "index": 13, + "blocks": [ + { + "bbox": [ + 56, + 73, + 297, + 115 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 73, + 297, + 115 + ], + "spans": [ + { + "bbox": [ + 56, + 73, + 297, + 115 + ], + "type": "text", + "content": "[30] Rui Shu, Hung H Bui, Hirokazu Narui, and Stefano Ermon. A dirt-t approach to unsupervised domain adaptation. International Conference on Learning Representations (ICLR), 2018. 3" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 118, + 295, + 183 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 118, + 295, + 183 + ], + "spans": [ + { + "bbox": [ + 56, + 118, + 295, + 183 + ], + "type": "text", + "content": "[31] Kihyuk Sohn, David Berthelot, Nicholas Carlini, Zizhao Zhang, Han Zhang, Colin A Raffel, Ekin Dogus Cubuk, Alexey Kurakin, and Chun-Liang Li. Fixmatch: Simplifying semi-supervised learning with consistency and confidence. In Advances in Neural Information Processing Systems (NeurIPS), 2020. 
2, 6" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 185, + 295, + 228 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 185, + 295, + 228 + ], + "spans": [ + { + "bbox": [ + 56, + 185, + 295, + 228 + ], + "type": "text", + "content": "[32] Chen Sun, Abhinav Shrivastava, Saurabh Singh, and Abhinav Gupta. Revisiting unreasonable effectiveness of data in deep learning era. In International Conference on Computer Vision (ICCV), 2017. 1" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 56, + 230, + 295, + 274 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 230, + 295, + 274 + ], + "spans": [ + { + "bbox": [ + 56, + 230, + 295, + 274 + ], + "type": "text", + "content": "[33] Antti Tarvainen and Harri Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In Advances in Neural Information Processing Systems (NeurIPS), 2017. 2" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 56, + 275, + 295, + 317 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 275, + 295, + 317 + ], + "spans": [ + { + "bbox": [ + 56, + 275, + 295, + 317 + ], + "type": "text", + "content": "[34] Philipp Tschandl, Cliff Rosendahl, and Harald Kittler. The ham10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Scientific data, 2018. 6" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 56, + 319, + 295, + 342 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 319, + 295, + 342 + ], + "spans": [ + { + "bbox": [ + 56, + 319, + 295, + 342 + ], + "type": "text", + "content": "[35] Jesper E Van Engelen and Holger H Hoos. A survey on semi-supervised learning. Machine learning, 2020. 1" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 56, + 343, + 295, + 385 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 343, + 295, + 385 + ], + "spans": [ + { + "bbox": [ + 56, + 343, + 295, + 385 + ], + "type": "text", + "content": "[36] Hemanth Venkateswara, Jose Eusebio, Shayok Chakraborty, and Sethuraman Panchanathan. Deep hashing network for unsupervised domain adaptation. In Conference on Computer Vision and Pattern Recognition (CVPR), 2017. 6" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 56, + 387, + 295, + 430 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 387, + 295, + 430 + ], + "spans": [ + { + "bbox": [ + 56, + 387, + 295, + 430 + ], + "type": "text", + "content": "[37] Vikas Verma, Kenji Kawaguchi, Alex Lamb, Juho Kannala, Arno Solin, Yoshua Bengio, and David Lopez-Paz. Interpolation consistency training for semi-supervised learning. In Neural Networks, 2022. 2, 6" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 56, + 432, + 295, + 475 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 432, + 295, + 475 + ], + "spans": [ + { + "bbox": [ + 56, + 432, + 295, + 475 + ], + "type": "text", + "content": "[38] Qing Yu, Daiki Ikami, Go Irie, and Kiyoharu Aizawa. Multi-task curriculum framework for open-set semi-supervised learning. In European Conference on Computer Vision (ECCV), 2020. 
2" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 56, + 477, + 295, + 531 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 477, + 295, + 531 + ], + "spans": [ + { + "bbox": [ + 56, + 477, + 295, + 531 + ], + "type": "text", + "content": "[39] Bowen Zhang, Yidong Wang, Wenxin Hou, Hao Wu, Jindong Wang, Manabu Okumura, and Takahiro Shinozaki. Flexmatch: Boosting semi-supervised learning with curriculum pseudo labeling. In Advances in Neural Information Processing Systems (NeurIPS), 2021. 2, 6" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 56, + 533, + 295, + 576 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 533, + 295, + 576 + ], + "spans": [ + { + "bbox": [ + 56, + 533, + 295, + 576 + ], + "type": "text", + "content": "[40] Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. Mixup: Beyond empirical risk minimization. In International Conference on Learning Representations (ICLR), 2018. 5" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 56, + 578, + 295, + 620 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 578, + 295, + 620 + ], + "spans": [ + { + "bbox": [ + 56, + 578, + 295, + 620 + ], + "type": "text", + "content": "[41] Mingkai Zheng, Shan You, Lang Huang, Fei Wang, Chen Qian, and Chang Xu. Simmatch: Semi-supervised learning with similarity matching. In Conference on Computer Vision and Pattern Recognition (CVPR), 2022. 2, 6" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 56, + 623, + 295, + 654 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 623, + 295, + 654 + ], + "spans": [ + { + "bbox": [ + 56, + 623, + 295, + 654 + ], + "type": "text", + "content": "[42] Yang Zou, Zhiding Yu, Xiaofeng Liu, BVK Kumar, and Jinsong Wang. Confidence regularized self-training. In International Conference on Computer Vision (ICCV), 2019. 
3" + } + ] + } + ], + "index": 12 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "type": "text", + "content": "15380" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2025/A Unified Image-Dense Annotation Generation Model for Underwater Scenes/c77f859c-1439-4915-ba1a-a9314ac3d9a9_content_list.json b/2025/A Unified Image-Dense Annotation Generation Model for Underwater Scenes/c77f859c-1439-4915-ba1a-a9314ac3d9a9_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..54c85fd6e30d2828cfb885f8f64da2d2f3bf241c --- /dev/null +++ b/2025/A Unified Image-Dense Annotation Generation Model for Underwater Scenes/c77f859c-1439-4915-ba1a-a9314ac3d9a9_content_list.json @@ -0,0 +1,1556 @@ +[ + { + "type": "text", + "text": "A Unified Image-Dense Annotation Generation Model for Underwater Scenes", + "text_level": 1, + "bbox": [ + 107, + 130, + 888, + 151 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Hongkai Lin Dingkang Liang Zhenghao Qi Xiang Bai* Huazhong University of Science and Technology {hklin,dkliang,xbai}@hust.edu.cn", + "bbox": [ + 250, + 180, + 743, + 234 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/dbc64f5ab09687b03fea39ce38291518f31f2ee783bc816eaaa6bfb11ba196c5.jpg", + "image_caption": [ + "Figure 1. We present TIDE, a unified underwater image-dense annotation generation model. Its core lies in the shared layout information and the natural complementarity between multimodal features. Our model, derived from the text-to-image model and fine-tuned with underwater data, enables the generation of highly consistent underwater image-dense annotations from solely text conditions." + ], + "image_footnote": [], + "bbox": [ + 112, + 243, + 883, + 611 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 246, + 670, + 328, + 686 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Underwater dense prediction, especially depth estimation and semantic segmentation, is crucial for gaining a comprehensive understanding of underwater scenes. Nevertheless, high-quality and large-scale underwater datasets with dense annotations remain scarce because of the complex environment and the exorbitant data collection costs. This paper proposes a unified Text-to-Image and DEnse annotation generation method (TIDE) for underwater scenes. It relies solely on text as input to simultaneously generate realistic underwater images and multiple highly consistent dense annotations. Specifically, we unify the generation of", + "bbox": [ + 88, + 703, + 485, + 869 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "text-to-image and text-to-dense annotations within a single model. The Implicit Layout Sharing mechanism (ILS) and cross-modal interaction method called Time Adaptive Normalization (TAN) are introduced to jointly optimize the consistency between image and dense annotations. We synthesize a large-scale underwater dataset using TIDE to validate the effectiveness of our method in underwater dense prediction tasks. 
The results demonstrate that our method effectively improves the performance of existing underwater dense prediction models and mitigates the scarcity of underwater data with dense annotations. We hope our method can offer new perspectives on alleviating data scarcity issues in other fields. The code is available at https://github.com/HongkLin/TIDE.", + "bbox": [ + 511, + 672, + 908, + 883 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "CVF", + "bbox": [ + 106, + 2, + 181, + 42 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore.", + "bbox": [ + 236, + 0, + 810, + 46 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "* Corresponding author.", + "bbox": [ + 114, + 887, + 246, + 900 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "961", + "bbox": [ + 485, + 945, + 509, + 955 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1. Introduction", + "text_level": 1, + "bbox": [ + 91, + 89, + 222, + 104 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Underwater dense prediction, particularly depth estimation and semantic segmentation, is essential for underwater exploration and environmental monitoring. However, the complex environment and the prohibitive data collection costs result in a scarcity of underwater data with dense annotations. Such conditions severely hinder the advancement of dense prediction technologies in underwater scenes.", + "bbox": [ + 89, + 114, + 483, + 220 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Fortunately, the recent success of the image generative technique [14, 28, 42] provides a breakthrough in addressing the scarcity of underwater scene data. In the field of general object understanding, controllable data synthesis [22, 30, 31, 37] demonstrates its effectiveness in few-shot scenarios. A straightforward solution is to apply them to underwater scenes directly. For instance, Atlantis [41], a pioneering controllable generation method for underwater depth data that takes ControlNet as its core, utilizes terrestrial depth maps as conditions. It effectively mitigates the issue of scarce underwater depth data and achieves consistent performance improvements across multiple underwater depth datasets and models.", + "bbox": [ + 88, + 220, + 483, + 416 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Despite remarkable progress, there are still challenges in Atlantis, as follows: 1) Atlantis, as shown in Fig. 2(a), generates underwater depth data using terrestrial depth maps as conditions due to the lack of underwater depth maps. It is considered a suboptimal approach since it may not align with natural underwater scenes. Better recreating authentic underwater environments is equally essential. 2) It generates data with only a single type of dense annotations, which is insufficient for understanding complex underwater scenes. 
Thus, a natural question arises: How can we simultaneously generate highly consistent, one-to-many, and vivid underwater images and dense annotation pairs?", + "bbox": [ + 89, + 417, + 483, + 598 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In this paper, we explore the possibility of simultaneously generating highly consistent, realistic underwater scene images and multiple types of dense annotations using only text conditions. Our approach, which we refer to as TIDE, is illustrated in Fig. 2(b), presents a unified Text-to-Image and DEnse annotation generation method. TIDE is an end-to-end training and inference model that integrates denoising models in parallel for both text-to-image generation and text-to-dense annotation generation.", + "bbox": [ + 89, + 598, + 483, + 733 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "To align the images and multiple type dense annotations generated by parallel denoising models, we propose the Implicit Layout Sharing (ILS) mechanism. Specifically, the cross-attention map as an implicit layout is the key to controlling the image layout in the text-to-image model [6, 28], inspiring us to share the implicit layout for aligning images and dense annotations. ILS effortlessly replaces the cross-attention map in the text-to-dense annotation model with that from the text-to-image model, effectively improving the consistency between the image and dense annotations. Furthermore, considering the intrinsic", + "bbox": [ + 89, + 734, + 483, + 900 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/17ad3b11a489e5895d1163ec2e425ab8073bebc39c5a14b448aac4484eadf16e.jpg", + "image_caption": [ + "Figure 2. The comparison between Atlantis [41] and our method. Unlike Atlantis, which requires text and depth map conditions, our method only needs text as the input condition to generate image-dense annotations (e.g., depth maps and semantic masks)." + ], + "image_footnote": [], + "bbox": [ + 516, + 88, + 908, + 320 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "complementarity between features of different modalities, we introduce a cross-modal interaction method called Time Adaptive Normalization (TAN), a normalization layer that modulates the activations using different modal features. The consistency of the image and dense annotations can further be jointly optimized through cross-modal feature interaction among different dense annotation generation and between image and dense annotation generation.", + "bbox": [ + 511, + 414, + 906, + 535 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "To verify the effectiveness of our method, we use TIDE to generate a large-scale dataset of underwater images with dense annotations named SynTIDE. Extensive experiments demonstrate the effectiveness of SynTIDE for underwater dense prediction tasks. In the underwater depth estimation task, SynTIDE presents consistent improvements in various fine-tuning models. For example, when adopting representative NewCRFs [40] as the fine-tuning model, our approach achieves significance gains over previous work, particularly in the $SI_{log}$ and $\\delta_1$ metrics, with improvements of 14.73 and 36% on the D3 and D5 subsets of Sea-thru [3] dataset, respectively. In underwater semantic segmentation, pre-training with SynTIDE yields consistent improvements across different models. 
For instance, when using ViT-Adapter [7] as the training model, pre-training with the SynTIDE dataset leads to improvements of 2.1% mIoU on the USIS10K [20] dataset.", + "bbox": [ + 511, + 536, + 908, + 792 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "TIDE demonstrates powerful data generation capabilities for underwater scenes. Using only easily accessible text prompts, TIDE can generate highly consistent and realistic underwater images and multiple types of dense annotations. It holds potential as a mainstream data synthesis method for underwater scenes and offers a promising direction for alleviating data scarcity in other fields. The main contribu", + "bbox": [ + 511, + 794, + 908, + 900 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "962", + "bbox": [ + 485, + 945, + 509, + 955 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "tions of this work are as follows: 1) We propose a novel data synthesis method, TIDE, which uses text as the sole condition to generate images and their corresponding multi-type dense annotations simultaneously. To our knowledge, TIDE is the first method capable of simultaneously synthesizing both images and multiple dense annotations from text. 2) To align the images and dense annotations, we introduce the Implicit Layout Sharing mechanism. The text-to-image and text-to-dense annotation models share the same layout information, ensuring proper alignment between the image and dense annotations. Meanwhile, the consistency between image and dense annotations can be further optimized through the cross-modal interaction method called Time Adaptive Normalization.", + "bbox": [ + 89, + 90, + 483, + 303 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "2. Related Work", + "text_level": 1, + "bbox": [ + 89, + 316, + 220, + 332 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "2.1. Underwater Dense Prediction", + "text_level": 1, + "bbox": [ + 89, + 342, + 354, + 357 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Dense prediction tasks are crucial for comprehensively understanding underwater scenes. The publication of SUIM [16] provides a fundamental dataset and benchmark for the exploration of underwater semantic segmentation. To fill the gap in underwater instance segmentation, WaterMask [19] publishes the UIIS dataset, and a model is designed to cater to the unique characteristics of underwater images, improving the accuracy of underwater instance segmentation. Recently, the rise of general foundational segmentation models [17, 27] drives further development in the field of underwater segmentation [20, 36, 43].", + "bbox": [ + 89, + 366, + 482, + 532 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Due to the lack of underwater depth estimation datasets, most underwater depth estimation methods focus on traditional techniques, unsupervised, or self-supervised approaches. Traditional methods [9] mainly rely on statistical priors, such as the dark channel prior [12], to estimate underwater depth. Gupta et al. [10] model the relationship between underwater and above-water hazy appearances for depth estimation. 
UW-GAN [11] and Atlantis [41] improve the performance of underwater depth estimation by synthesizing training datasets through generative models.", + "bbox": [ + 89, + 532, + 482, + 683 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "While these methods make notable contributions to underwater dense prediction tasks, large-scale, high-quality underwater datasets with only segmentation or depth annotations remain insufficient for achieving comprehensive underwater scene understanding.", + "bbox": [ + 89, + 684, + 482, + 760 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "2.2. Controllable Data Synthesis", + "text_level": 1, + "bbox": [ + 89, + 770, + 341, + 786 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Thanks to the success of diffusion models [14] and the availability of large-scale, high-quality text-image training data, text-to-image models [6, 23, 26, 28] and controllable image generation models [21, 35, 42] achieve unprecedented success in image quality, diversity, and consistency.", + "bbox": [ + 89, + 794, + 482, + 869 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "He et al. [13] are the first to explore and demonstrate the effectiveness of state-of-the-art text-to-image genera", + "bbox": [ + 89, + 869, + 482, + 901 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "tion models for image recognition. This makes it possible to achieve diverse data collection and accurate annotation at a lower cost. Wu et al. and Nguyen et al. [22, 31] explore the ability of pre-trained diffusion models to enhance real data in few-shot settings for segmentation tasks. Diffumask [32] ingeniously combines text-to-image models with AffinityNet [2], achieving open-vocabulary segmentation data synthesis. Freemask [37] demonstrates that synthetic data can further enhance the performance of semantic segmentation models under fully supervised settings by incorporating freestyle, a controllable image generation method using semantic masks as input conditions. Seggen [39] designs a multi-stage semantic segmentation data synthesis method, text2mask and mask2image, which synthesizes semantic segmentation data with high semantic consistency using only text as the condition. Detdiffusion [30] synthesizes object detection data by incorporating object categories and spatial coordinates into the text.", + "bbox": [ + 511, + 90, + 903, + 362 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Unlike the aforementioned single-task data synthesis methods, we propose a novel end-to-end underwater data synthesis approach that simultaneously generates semantic masks and depth maps, relying solely on text conditions.", + "bbox": [ + 511, + 363, + 903, + 422 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3. Preliminaries", + "text_level": 1, + "bbox": [ + 513, + 436, + 651, + 452 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Diffusion Models (DMs) [14] emerge as leading text-to-image (T2I) generation models, recognized for their ability to produce realistic images. DMs can reconstruct the data distribution by learning the reverse process of a diffusion process. 
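To ground the formalism that follows, a minimal sketch of the forward noising step (Eq. (1)) and the latent-space denoising objective (Eq. (2)) is given below. The schedule, tensor shapes, and the stand-in denoiser are illustrative assumptions, not the training code of any cited model.

```python
import torch

# Illustrative linear noise schedule over T steps; real DMs tune this carefully.
T = 1000
alphas = 1.0 - torch.linspace(1e-4, 0.02, T)   # per-step alpha_t
alpha_bars = torch.cumprod(alphas, dim=0)      # cumulative product, for a direct jump to step t

def forward_step(z_prev, t):
    """One Markov noising step: z_t ~ N(sqrt(alpha_t) * z_{t-1}, (1 - alpha_t) * I)."""
    noise = torch.randn_like(z_prev)
    return torch.sqrt(alphas[t]) * z_prev + torch.sqrt(1.0 - alphas[t]) * noise

def ldm_loss(denoiser, z0, text_emb):
    """MSE between the target noise and the predicted noise, in latent space (cf. Eq. (2))."""
    t = torch.randint(0, T, (z0.shape[0],))
    eps = torch.randn_like(z0)
    ab = alpha_bars[t].view(-1, 1, 1, 1)
    z_t = torch.sqrt(ab) * z0 + torch.sqrt(1.0 - ab) * eps  # closed form of t forward steps
    return torch.mean((eps - denoiser(z_t, t, text_emb)) ** 2)

# Smoke test with a stand-in denoiser that always predicts zero noise.
dummy = lambda z, t, y: torch.zeros_like(z)
print(ldm_loss(dummy, torch.randn(2, 4, 8, 8), None))  # ~= 1.0 in expectation
```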
Denoting $z_{t}$ as the random variable at $t$ -th timestep, the diffusion process is modeled as a Markov Chain:", + "bbox": [ + 511, + 460, + 905, + 551 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\nz _ {t} \\sim \\mathcal {N} (\\sqrt {\\alpha_ {t}} z _ {t - 1}, (1 - \\alpha_ {t}) I), \\tag {1}\n$$\n", + "text_format": "latex", + "bbox": [ + 607, + 566, + 903, + 583 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "where $\\alpha_{t}$ is the fixed coefficient predefined in the noise schedule, and $I$ refers to identity matrix. A prominent variant, the Latent Diffusion Model (LDM) [28], innovatively shifts the diffusion process of standard DMs into a latent space. This transition notably decreases computational costs while preserving the generative quality and flexibility of the original model. The resulting efficiency gain primarily arises from the reduced dimensionality of the latent space, which allows for lower training costs without compromising the model's generative capabilities.", + "bbox": [ + 511, + 590, + 905, + 739 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Stable Diffusion, an exemplary implementation of LDM, comprises an AutoEncoder [29] and a latent diffusion model. The AutoEncoder $\\varepsilon$ is designed to learn a latent space that is perceptually equivalent to the image space. Meanwhile, the LDM $\\epsilon_{\\theta}$ is parameterized as a denoising model with cross-attention and trained on a large-scale dataset of text-image pairs via:", + "bbox": [ + 511, + 739, + 905, + 847 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal {L} _ {L D M} := \\mathbb {E} _ {\\varepsilon (x), y, \\epsilon \\sim N (0, 1), t} [ \\| \\epsilon - \\epsilon_ {\\theta} (z _ {t}, t, \\tau_ {\\theta} (y)) \\| _ {2} ^ {2} ], \\tag {2}\n$$\n", + "text_format": "latex", + "bbox": [ + 524, + 857, + 903, + 876 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "where $\\epsilon$ is the target noise. $\\tau_{\\theta}$ and $y$ are the pre-trained", + "bbox": [ + 511, + 885, + 903, + 900 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "963", + "bbox": [ + 485, + 945, + 509, + 955 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/01bdc75ed160a5f95660aa52c6cb1677e6053d024173d47df59b4017a0df17b8.jpg", + "image_caption": [ + "(a) Training phase of our TIDE" + ], + "image_footnote": [], + "bbox": [ + 89, + 85, + 704, + 300 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/f7f5c503325057f5f49b3b485661f5143744fa49061911315aa4ec6d5c742250.jpg", + "image_caption": [ + "(b) Inference phase", + "Figure 3. Training and Inference. The denoising model of TIDE mainly consists of three transformers, each dedicated to text-to-image, text-to-depth, and text-to-mask. The proposed Implicit Layout Sharing mechanism (ILS) and Time Adaptive Normalization (TAN) are used to align the generated image, depth map, and semantic mask." + ], + "image_footnote": [], + "bbox": [ + 733, + 89, + 906, + 296 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "text encoder (e.g., CLIP [24], T5 [25]) and text prompts, respectively. This equation represents the mean-squared error (MSE) between the target noise $\\epsilon$ and the noise predicted by the model, encapsulating the core learning mechanism of the latent diffusion model.", + "bbox": [ + 88, + 397, + 483, + 472 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "4. 
Our Method", + "text_level": 1, + "bbox": [ + 89, + 489, + 220, + 505 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "An overview of our method, a unified text-to-image and dense annotation generation model (TIDE), is shown in Fig. 3. TIDE is built upon a pre-trained transformer [6] for text-to-image generation, along with two fine-tuned mini-transformers (details provided in Sec. 5.1.1) dedicated to text-to-depth and text-to-mask generation. Simply parallelizing multiple text-to-image processes does not ensure consistency between the images and dense annotations. To enable consistency between them, we propose Implicit Layout Sharing (ILS) and the cross-modal interaction method named Time Adaptive Normalization (TAN). After training, TIDE simultaneously generates images and multiple dense annotations with high consistency using only text as input.", + "bbox": [ + 88, + 516, + 483, + 713 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "4.1. Data Preparation", + "text_level": 1, + "bbox": [ + 89, + 724, + 261, + 742 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "We aim to generate realistic underwater images, corresponding highly consistent depth maps, and semantic masks. However, existing high-quality, dense annotation data primarily consists of mask annotations. Therefore, we construct training data around these datasets with semantic masks, as shown in Tab. 1. On this basis, we obtain the corresponding depth map and caption for each image using existing foundation models. Specifically, for each underwater image, the corresponding depth map is obtained by pre-trained Depth Anything [38]. Meanwhile, the caption of", + "bbox": [ + 88, + 750, + 483, + 902 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "each image is obtained from the pre-trained BLIP2 [18]. We construct approximately 14K quadruples {Image, Depth, Mask, Caption} for TIDE training.", + "bbox": [ + 511, + 397, + 906, + 443 + ], + "page_idx": 3 + }, + { + "type": "table", + "img_path": "images/e77aefbc828542b09bf7b8055f58e9804fa364f28fe964eb77a803f1be0350a4.jpg", + "table_caption": [ + "Table 1. Segmentation Datasets and Data Splits. \\* denotes the training set of TIDE, while the others are used for evaluation." + ], + "table_footnote": [], + "table_body": "
Datasets | Seg Task | Train | Val | Test
SUIM [16] | Semantic | 1,488* | 110* | /
UIIS [19] | Instance | 3,937* | 691 | /
USIS10K [20] | Instance | 7,442* | 1,594 | 1,596*
", + "bbox": [ + 516, + 497, + 900, + 565 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "4.2. Implicit Layout Sharing Mechanism", + "text_level": 1, + "bbox": [ + 511, + 595, + 828, + 611 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "In advanced text-to-image models [6, 28], the cross-attention map plays a crucial role in controlling the image layout. Existing methods [21, 35] demonstrate that adjusting the cross-attention map during the text-to-image process can effectively control the layout of the generated image. Therefore, the cross-attention map can be considered as the implicit layout information. Intuitively, sharing the implicit layout between text-to-image and text-to-dense annotations may establish a strong correlation between the generated image and dense annotations. To this end, we propose an Implicit Layout Sharing mechanism to align the generated image and dense annotations. Specifically, cross-attention, as a crucial process for generating implicit layouts in text-to-image/mask/depth model, can first be formulated as:", + "bbox": [ + 509, + 619, + 906, + 832 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\operatorname {A t t n} _ {i} \\left(\\boldsymbol {Q} _ {i}, \\boldsymbol {K} _ {i}, \\boldsymbol {V} _ {i}\\right) = \\operatorname {s o f t m a x} \\left(\\boldsymbol {Q} _ {i} \\boldsymbol {K} _ {i} ^ {\\top} / \\sqrt {c}\\right) \\boldsymbol {V} _ {i},\n$$\n", + "text_format": "latex", + "bbox": [ + 547, + 845, + 864, + 863 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\operatorname {A t t n} _ {d} \\left(\\boldsymbol {Q} _ {d}, \\boldsymbol {K} _ {d}, \\boldsymbol {V} _ {d}\\right) = \\operatorname {s o f t m a x} \\left(\\boldsymbol {Q} _ {d} \\boldsymbol {K} _ {d} ^ {\\top} / \\sqrt {c}\\right) \\boldsymbol {V} _ {d}, \\tag {3}\n$$\n", + "text_format": "latex", + "bbox": [ + 542, + 864, + 903, + 883 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\operatorname {A t t n} _ {m} \\left(\\boldsymbol {Q} _ {m}, \\boldsymbol {K} _ {m}, \\boldsymbol {V} _ {m}\\right) = \\operatorname {s o f t m a x} \\left(\\boldsymbol {Q} _ {m} \\boldsymbol {K} _ {m} ^ {\\top} / \\sqrt {c}\\right) \\boldsymbol {V} _ {m},\n$$\n", + "text_format": "latex", + "bbox": [ + 522, + 886, + 877, + 904 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "964", + "bbox": [ + 485, + 945, + 511, + 955 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $c$ refers to the feature channel. $Q_{i} / Q_{d} / Q_{m}$ , $K_{i} / K_{d} / K_{m}$ , and $V_{i} / V_{d} / V_{m}$ represent the query, key, and value within the text-to-image/depth/mask cross-attention module, respectively. Since text-to-image models are pre-trained on high-quality and large-scale image-caption datasets, they exhibit strong controllability and generalization. Therefore, sharing the implicit layouts from the text-to-image model is the optimal choice to ensure the quality of the generated data. As shown in Fig. 3(a), the implicit layouts from the block in the text-to-image model are shared with the cross-attention in the block of text-to-dense annotation models. The implicit layouts refer to:", + "bbox": [ + 89, + 90, + 483, + 272 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\boldsymbol {M} _ {i} = \\operatorname {s o f t m a x} \\left(\\boldsymbol {Q} _ {i} \\boldsymbol {K} _ {i} ^ {\\top} / \\sqrt {c}\\right). 
\\tag {4}\n$$\n", + "text_format": "latex", + "bbox": [ + 183, + 277, + 482, + 296 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "By sharing the implicit layouts from the text-to-image model, the cross-attention of text-to-depth $(\\mathsf{Attn}_d)$ and text-to-mask $(\\mathsf{Attn}_m)$ can be simplified as follows:", + "bbox": [ + 89, + 301, + 483, + 347 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\operatorname {A t t n} _ {d} \\left(\\boldsymbol {Q} _ {d}, \\boldsymbol {K} _ {d}, \\boldsymbol {V} _ {d}\\right) = \\boldsymbol {M} _ {i} \\times \\boldsymbol {V} _ {d},\n$$\n", + "text_format": "latex", + "bbox": [ + 174, + 353, + 413, + 369 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\operatorname {A t t n} _ {m} \\left(\\boldsymbol {Q} _ {m}, \\boldsymbol {K} _ {m}, \\boldsymbol {V} _ {m}\\right) = \\boldsymbol {M} _ {i} \\times \\boldsymbol {V} _ {m}, \\tag {5}\n$$\n", + "text_format": "latex", + "bbox": [ + 156, + 367, + 480, + 388 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $\\times$ refers to matrix multiplication. Implicit Layout Sharing is an elegant and efficient method that unifies image and dense annotation generation, improving consistency between them. It also reduces the overall generation cost, as there is no need to compute separate cross-attention maps for the text-to-dense annotation models.", + "bbox": [ + 89, + 393, + 483, + 484 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4.3. Time Adaptive Normalization", + "text_level": 1, + "bbox": [ + 89, + 492, + 356, + 508 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Considering the complementary nature of different modality features, we propose a cross-modal feature interaction method called Time Adaptive Normalization (TAN), as shown in Fig. 4.", + "bbox": [ + 89, + 515, + 483, + 575 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Specifically, TAN is utilized to adjust the image layout by leveraging the cross-modal features $\\boldsymbol{x}_f$ from different branches. The cross-modal features are mapped to two normalization parameters, $\\gamma$ and $\\beta$, by MLPs, which are used to control the variation in the image layout. In this context, the features from text-to-depth and text-to-mask serve as cross-modal input features for each other. For instance, in the TAN corresponding to the $i$-th text-to-depth block, the outputs from the $i$-th text-to-depth block and the $i$-th text-to-mask block serve as the input feature $\\boldsymbol{x}$ and the cross-modal input feature $\\boldsymbol{x}_f$, respectively. A slight difference is that for text-to-image, the features from both text-to-depth and text-to-mask serve as the cross-modal features. In the TAN cross-modal interaction process of text-to-image, two sets of $\\gamma$ and $\\beta$ are obtained, provided by the features of the text-to-depth and text-to-mask modalities. These two sets of parameters are averaged to obtain $\\bar{\\gamma}$ and $\\bar{\\beta}$. Then, the time embedding $\\boldsymbol{x}_t$ is introduced to adaptively control the influence of the cross-modal features. 
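A compact sketch of how a dense-annotation block could consume the shared implicit layout of Eq. (5) and apply the modulation just described may help; module boundaries, shapes, and the linear maps here are our assumptions rather than the released implementation.

```python
import torch
import torch.nn as nn

class SharedLayoutCrossAttn(nn.Module):
    """Cross-attention in a dense-annotation branch that reuses the implicit layout
    M_i = softmax(Q_i K_i^T / sqrt(c)) computed once by the text-to-image branch (Eq. (5))."""
    def __init__(self, c):
        super().__init__()
        self.to_v = nn.Linear(c, c)  # only the value projection is branch-specific

    def forward(self, text_tokens, layout):
        return layout @ self.to_v(text_tokens)  # M_i x V_d (or M_i x V_m)

class TimeAdaptiveNorm(nn.Module):
    """Modulates activations with cross-modal features, gated by the timestep (cf. Eq. (6) below)."""
    def __init__(self, c):
        super().__init__()
        self.to_gamma, self.to_beta = nn.Linear(c, c), nn.Linear(c, c)
        self.to_alpha = nn.Linear(c, 1)

    def forward(self, x, x_cross, t_emb):
        gamma, beta = self.to_gamma(x_cross), self.to_beta(x_cross)
        alpha = torch.sigmoid(self.to_alpha(t_emb)).unsqueeze(1)  # time-adaptive confidence
        return x + alpha * gamma * x + alpha * beta               # x* = x + (a*gamma*x + a*beta)

# Shape check: batch B, N image tokens, L text tokens, c channels.
B, N, L, c = 2, 64, 16, 32
layout = torch.softmax(torch.randn(B, N, L), dim=-1)  # stand-in for the shared M_i
feat = SharedLayoutCrossAttn(c)(torch.randn(B, L, c), layout)
out = TimeAdaptiveNorm(c)(feat, torch.randn(B, N, c), torch.randn(B, c))
```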
The normalization can be formalized as follows:", + "bbox": [ + 89, + 575, + 483, + 878 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\boldsymbol {x} ^ {\\prime} = \\alpha \\cdot \\gamma \\boldsymbol {x} + \\alpha \\cdot \\beta , \\boldsymbol {x} ^ {*} = \\boldsymbol {x} ^ {\\prime} + \\boldsymbol {x}, \\tag {6}\n$$\n", + "text_format": "latex", + "bbox": [ + 166, + 883, + 482, + 902 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/3699708a205ef92db41f8131b1aaa0f5d3fe93d139c57d08349f9880048dc5ce.jpg", + "image_caption": [ + "Sigmoid $\\oplus$ Element-wise add $\\otimes$ Element-wise product", + "Figure 4. In TAN, the cross-modal features are first mapped to the modulation parameters $\\gamma$ and $\\beta$ . Then, a time-adaptive confidence $\\alpha$ is introduced to control the degree of normalization." + ], + "image_footnote": [], + "bbox": [ + 555, + 87, + 875, + 218 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $\\pmb{x}$, $\\pmb{x}'$, and $\\pmb{x}^*$ are the input feature, normalized feature, and output feature, respectively. $\\alpha$ is the time-adaptive coefficient obtained from $\\pmb{x}_t$ through a linear transformation and a sigmoid. TAN is applied not only from text-to-dense annotations to text-to-image but also between text-to-depth and text-to-mask to improve the consistency among dense annotations. Implicit Layout Sharing and Time Adaptive Normalization are two complementary methods that construct a joint interaction process, optimizing the consistency between the generated image and dense annotations during training.", + "bbox": [ + 511, + 319, + 906, + 487 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4.4. Learning Objective", + "text_level": 1, + "bbox": [ + 513, + 494, + 699, + 511 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "During training, the learnable parameters include only the proposed TAN module and the LoRA [15] used to fine-tune the pre-trained transformer. The overall loss $\\mathcal{L}$ is composed equally of the denoising losses from the three branches: text-to-image, text-to-depth, and text-to-mask:", + "bbox": [ + 511, + 518, + 905, + 594 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal {L} = \\mathcal {L} _ {m s e} ^ {I} + \\mathcal {L} _ {m s e} ^ {D} + \\mathcal {L} _ {m s e} ^ {M}. \\tag {7}\n$$\n", + "text_format": "latex", + "bbox": [ + 614, + 606, + 905, + 626 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4.5. Data Synthesis", + "text_level": 1, + "bbox": [ + 511, + 633, + 663, + 648 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Thanks to the proposed ILS and TAN, TIDE can generate realistic and highly consistent underwater images and dense annotations after training, using only text conditions, as shown in Fig. 3(b).", + "bbox": [ + 511, + 656, + 905, + 715 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "We filter out redundant parts from the 14K captions obtained in Sec. 4.1, resulting in approximately 5K non-redundant captions as text conditions. For each caption, we generate ten samples to construct a large-scale synthetic dataset named SynTIDE. Some representative examples are shown in Fig. 1. The SynTIDE dataset is utilized to validate the effectiveness of our method in dense prediction tasks for underwater scenes.", + "bbox": [ + 511, + 717, + 905, + 837 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4.6. 
Analysis", + "text_level": 1, + "bbox": [ + 511, + 847, + 616, + 863 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Insights of framework design. In the text-to-image model, the cross-attention map contains the layout information of", + "bbox": [ + 511, + 869, + 906, + 900 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "965", + "bbox": [ + 485, + 945, + 511, + 955 + ], + "page_idx": 4 + }, + { + "type": "table", + "img_path": "images/95f68ad81dfc45669541655eb1b6bf8315f035e53a8fa702e5c529bbb95ae049.jpg", + "table_caption": [ + "Table 2. Quantitative comparisons on real underwater depth estimation datasets." + ], + "table_footnote": [], + "table_body": "
Method | Fine-tuning Dataset | Reference | \( SI_{log} \) ↓ | A.Rel ↓ | \( log_{10} \) ↓ | RMSE ↓ | S.Rel ↓ | \( RMSE_{log} \) ↓ | \( \delta_1 \) ↑ | \( \delta_2 \) ↑ | \( \delta_3 \) ↑
Quantitative comparisons on the D3 and D5 subsets of Sea-thru [3] dataset.
AdaBins [5] | Atlantis [41] | CVPR 24 | 38.24 | 1.33 | 0.12 | 1.41 | 12.89 | 0.39 | 0.50 | 0.81 | 0.92
AdaBins [5] | SynTIDE (Ours) | - | 26.92 (-11.32) | 1.31 (-0.02) | 0.08 (-0.04) | 1.12 (-0.29) | 15.74 (+2.85) | 0.27 (-0.12) | 0.71 (+0.21) | 0.95 (+0.14) | 0.99 (+0.07)
NewCRFs [40] | Atlantis [41] | CVPR 24 | 37.10 | 1.68 | 0.12 | 1.44 | 14.76 | 0.38 | 0.48 | 0.84 | 0.95
NewCRFs [40] | SynTIDE (Ours) | - | 22.37 (-14.73) | 1.50 (-0.18) | 0.06 (-0.06) | 1.24 (-0.20) | 22.50 (+7.74) | 0.23 (-0.15) | 0.84 (+0.36) | 0.97 (+0.13) | 0.99 (+0.04)
PixelFormer [1] | Atlantis [41] | CVPR 24 | 23.70 | 1.34 | 0.06 | 1.17 | 17.29 | 0.24 | 0.81 | 0.97 | 0.99
PixelFormer [1] | SynTIDE (Ours) | - | 21.39 (-2.31) | 1.46 (+0.12) | 0.05 (-0.01) | 1.15 (-0.02) | 21.79 (+4.50) | 0.22 (-0.02) | 0.88 (+0.07) | 0.98 (+0.01) | 0.99 (+0.00)
MIM [34] | Atlantis [41] | CVPR 24 | 37.01 | 1.37 | 0.11 | 1.51 | 14.42 | 0.38 | 0.56 | 0.84 | 0.94
MIM [34] | SynTIDE (Ours) | - | 22.49 (-14.52) | 1.27 (-0.10) | 0.06 (-0.05) | 1.01 (-0.50) | 16.46 (+2.04) | 0.23 (-0.15) | 0.85 (+0.29) | 0.97 (+0.13) | 0.99 (+0.05)
Quantitative comparisons on the SQUID [4] dataset.
AdaBins [5] | Atlantis [41] | CVPR 24 | 29.56 | 0.28 | 0.11 | 2.24 | 0.69 | 0.31 | 0.56 | 0.86 | 0.94
AdaBins [5] | SynTIDE (Ours) | - | 25.63 (-3.93) | 0.23 (-0.05) | 0.09 (-0.02) | 2.69 (+0.45) | 0.92 (+0.23) | 0.27 (-0.04) | 0.67 (+0.11) | 0.90 (+0.04) | 0.97 (+0.03)
NewCRFs [40] | Atlantis [41] | CVPR 24 | 25.19 | 0.23 | 0.09 | 2.56 | 0.83 | 0.26 | 0.68 | 0.90 | 0.96
NewCRFs [40] | SynTIDE (Ours) | - | 25.55 (+0.36) | 0.23 (-0.00) | 0.09 (+0.00) | 3.02 (+0.46) | 1.07 (+0.24) | 0.27 (+0.01) | 0.68 (+0.00) | 0.91 (+0.01) | 0.97 (+0.01)
PixelFormer [1] | Atlantis [41] | CVPR 24 | 21.34 | 0.18 | 0.07 | 1.86 | 0.43 | 0.22 | 0.76 | 0.94 | 0.98
PixelFormer [1] | SynTIDE (Ours) | - | 19.08 (-2.26) | 0.16 (-0.02) | 0.07 (-0.00) | 1.75 (-0.11) | 0.36 (-0.07) | 0.19 (-0.03) | 0.79 (+0.03) | 0.97 (+0.03) | 0.99 (+0.01)
MIM [34] | Atlantis [41] | CVPR 24 | 27.45 | 0.26 | 0.10 | 2.14 | 0.68 | 0.28 | 0.61 | 0.88 | 0.95
MIM [34] | SynTIDE (Ours) | - | 26.98 (-0.47) | 0.25 (-0.01) | 0.09 (-0.01) | 3.04 (+0.90) | 1.11 (+0.43) | 0.28 (-0.00) | 0.65 (+0.04) | 0.89 (+0.01) | 0.96 (+0.01)
", + "bbox": [ + 94, + 116, + 903, + 450 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "the image. Thus, the cross-attention map can be viewed as an implicit layout. If two text-to-image models share the same implicit layout and undergo proper fine-tuning, the generated images are likely to exhibit strong layout similarity. Therefore, we share the same implicit layout across multiple text-to-image models. Meanwhile, we use LoRA to fine-tune the multiple text-to-image models [6].", + "bbox": [ + 88, + 474, + 480, + 580 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Zero-shot generation ability. Thanks to our training strategy, which fine-tunes the pre-trained text-to-image model using only LoRA, the generalization ability of the text-to-image model is retained to some extent. This enables TIDE to generate underwater images during inference that are not seen during training. Furthermore, due to the proposed Implicit Layout Sharing and Time Adaptive Normalization mechanisms, the generated depth maps align well with these images. Therefore, TIDE has the ability to generate zero-shot underwater image-depth map pairs.", + "bbox": [ + 88, + 580, + 482, + 733 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "5. Experiments", + "text_level": 1, + "bbox": [ + 89, + 746, + 223, + 762 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "5.1. Dataset and Evaluation Metrics", + "text_level": 1, + "bbox": [ + 89, + 770, + 369, + 786 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Underwater Depth Estimation. We follow the work [41], the D3 and D5 subsets of Sea-thru [3], and the SQUID dataset [4] used to evaluate the depth estimation capability in underwater scenes. These datasets include underwater images with depth maps obtained via the Structure-fromMotion (SfM) algorithm.", + "bbox": [ + 89, + 794, + 482, + 883 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "The quantitative evaluation metrics include root", + "bbox": [ + 109, + 885, + 482, + 900 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "mean square error (RMSE) and its logarithmic variant $(RMSE_{log})$ , absolute error in log-scale $(\\log_{10})$ , absolute relative error (A.Rel), squared relative error (S.Rel), the percentage of inlier pixels $(\\delta_i)$ with thresholds of $1.25^i$ , and scale-invariant error in log-scale $(SI_{log})$ : $100\\sqrt{Var(\\epsilon_{log})}$ .", + "bbox": [ + 511, + 474, + 903, + 551 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Underwater Semantic Segmentation. The UIIS [19] and USIS10K [20] datasets are chosen to validate the effectiveness of our method in underwater semantic segmentation tasks. Instance masks belonging to the same semantic category are merged to construct semantic segmentation annotations for the UIIS and USIS10K datasets.", + "bbox": [ + 511, + 554, + 903, + 643 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "We calculate the mean Intersection over Union (mIoU) for six categories (i.e., Fish, Reefs, Aquatic Plants, Wrecks, Human Divers, and Robots) to evaluate the accuracy of the segmentation results.", + "bbox": [ + 511, + 646, + 903, + 705 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "5.1.1 Implementation Details", + "text_level": 1, + "bbox": [ + 511, + 724, + 732, + 739 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "The training process consists of two parts: pre-training the mini-transformer and training TIDE. 
In the first stage, the mini-transformer is initialized with the first ten layers of the PixArt- $\\alpha$ [6] pre-trained transformer. Then, the mini-transformer is trained for 60K iterations on the text-to-image task with all parameters. The training data consists of 14K underwater image-caption pairs from Sec. 4.1. In the second stage, the PixArt- $\\alpha$ pre-trained transformer and the mini-transformer are used as initial weights for the text-to-image and text-to-dense annotation models, respectively.", + "bbox": [ + 509, + 750, + 906, + 900 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "966", + "bbox": [ + 485, + 945, + 511, + 955 + ], + "page_idx": 5 + }, + { + "type": "table", + "img_path": "images/0200a5a38c2c5778a8d09f9647797d24355f7219dda2eaeb1bdcdc0f638f6aa5.jpg", + "table_caption": [ + "Table 3. Quantitative results of underwater semantic segmentation." + ], + "table_footnote": [], + "table_body": "
Method | Backbone | Real | SynTIDE | mIoU (UIIS) | mIoU (USIS10K)
Segformer [33] (NeurIPS 21) | MiT-B4 | ✓ |   | 70.2 | 74.6
Segformer [33] (NeurIPS 21) | MiT-B4 |   | ✓ | 76.5 | 72.8
Segformer [33] (NeurIPS 21) | MiT-B4 | ✓ | ✓ | 75.4 (+5.2) | 76.1 (+1.5)
Mask2former [8] (CVPR 22) | Swin-B | ✓ |   | 72.7 | 76.1
Mask2former [8] (CVPR 22) | Swin-B |   | ✓ | 74.2 | 72.9
Mask2former [8] (CVPR 22) | Swin-B | ✓ | ✓ | 74.3 (+1.6) | 77.1 (+1.0)
ViT-Adapter [7] (ICLR 23) | ViT-Adapter-B | ✓ |   | 73.5 | 74.6
ViT-Adapter [7] (ICLR 23) | ViT-Adapter-B |   | ✓ | 75.7 | 72.6
ViT-Adapter [7] (ICLR 23) | ViT-Adapter-B | ✓ | ✓ | 75.1 (+1.6) | 76.7 (+2.1)
", + "bbox": [ + 91, + 114, + 480, + 310 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Meanwhile, they are fine-tuned using LoRA [15] for 200K iterations with a batch size of 4. The LoRA ranks of the text-to-image/depth/mask branches are 32, 64, and 64, respectively. All experiments are conducted on a server with four NVIDIA 4090 24G GPUs.", + "bbox": [ + 89, + 337, + 482, + 411 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "5.2. Main results", + "text_level": 1, + "bbox": [ + 89, + 422, + 225, + 436 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "5.2.1 Underwater Depth Estimation", + "text_level": 1, + "bbox": [ + 89, + 446, + 356, + 463 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "We train four representative depth estimation models, Adasbin [5], NewCRFs [29], PixelFormer [1], and MIM [34], to present quantitative results, as shown in Tab. 2. Compared to previous underwater data synthesis work Atlantis [41], depth estimation models trained on our SynTIDE dataset show consistent improvements across most quantitative metrics on two evaluated datasets. Especially on MIM [34], a powerful pre-trained model, our method reduces the $SI_{\\text{log}}$ metric from $37.01 \\rightarrow 22.49$ (-14.52) and improves $\\delta_1$ from $0.56 \\rightarrow 0.85$ (+0.29) on the D3 and D5 subsets of the Seathru dataset. Meanwhile, on PixelFormer [1], a depth estimation model with outstanding generalization that also performs best for Atlantis, our method achieves better performance across nearly all quantitative metrics on both evaluated underwater depth estimation datasets.", + "bbox": [ + 88, + 472, + 482, + 698 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "These results demonstrate that our method achieves highly competitive consistency compared to Atlantis, which uses stronger dense conditions. Furthermore, the data generated by TIDE is closer to natural underwater scenes and shows rich species diversity. Most importantly, TIDE unifies the generation of images and multiple highly consistent dense annotations, capabilities that Atlantis lacks.", + "bbox": [ + 89, + 699, + 482, + 805 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "5.3. Underwater Semantic Segmentation", + "text_level": 1, + "bbox": [ + 89, + 816, + 405, + 832 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "In the underwater semantic segmentation task, we validate the effectiveness of our method by pre-training with the SynTIDE dataset in three representative semantic segmentation models, Segformer [33], Mask2former [8], and ViT", + "bbox": [ + 89, + 839, + 482, + 901 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/1ab5913f1600c53e492cc97506b93072f4c8acbb9d39c2959551d276e8ebeb25.jpg", + "table_caption": [ + "Table 4. Ablation on the impact of each TIDE component." + ], + "table_footnote": [], + "table_body": "
ILSTANSIlog ↓A.Rel ↓δ1 ↑mIoU
24.461.230.7636.8
24.591.400.7836.2
23.711.370.7942.1
", + "bbox": [ + 516, + 114, + 903, + 179 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/e5b74515cb139fe94eaf5850506d08926c46dd30e0ca4b38008bf988a3ab7b35.jpg", + "table_caption": [ + "Table 5. Ablation on the impact of the component positions. \" {Start, End}\" indicates the starting and ending positions where the operations are applied, with a step size of 3." + ], + "table_footnote": [], + "table_body": "
{Start, End}SIlog ↓A.Rel ↓δ1 ↑mIoU
{0,12}22.061.460.8634.7
{15,27}44.861.090.438.6
{0,27}23.711.370.7942.1
", + "bbox": [ + 517, + 247, + 903, + 311 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/5a0f5f50021144b5c5cb781663c58a069792b30633c53ec1a01e1d26a09c3477.jpg", + "table_caption": [ + "Table 6. Ablation on the impact of scaling synthetic data for underwater dense prediction tasks." + ], + "table_footnote": [], + "table_body": "
N SampleSIlog ↓A.Rel ↓δ1↑mIoU
123.491.460.8255.3
322.941.540.8560.9
622.961.560.8563.6
1022.371.500.8464.2
", + "bbox": [ + 516, + 366, + 903, + 455 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Adapter [7]. Following the work [37], we filter the noise in the generated annotations with 1.5 tolerance.", + "bbox": [ + 511, + 481, + 903, + 511 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Pre-training on high-quality synthetic datasets is widely recognized as a way to gain strong prior knowledge. On the UIIS dataset, models trained on the SynTIDE dataset consistently achieve superior results compared to real data. On the other larger USIS10K dataset, by further fine-tuning the model on the UIIS10K train set, we achieve notable improvements. Especially on ViT-Adapter, we enhance the performance of the model from $74.6\\% \\rightarrow 76.7\\%$ .", + "bbox": [ + 511, + 512, + 905, + 633 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "These results show that models pre-trained on the SynTIDE dataset exhibit strong prior knowledge in the underwater semantic segmentation task. Additionally, these results demonstrate that the unified image and dense annotation generation model proposed in this paper can generate highly consistent image-dense annotation pairs, making it suitable for various underwater dense prediction tasks.", + "bbox": [ + 511, + 633, + 905, + 739 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "5.4. Ablation Studies", + "text_level": 1, + "bbox": [ + 511, + 752, + 679, + 766 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Unless otherwise specified, we conduct ablation studies by training TIDE for 30K iterations. We synthesize three samples for each caption, as described in Sec. 4.5. We conduct ablation studies on the USIS10K dataset with SegFormer-B4 for semantic segmentation and the D3 and D5 subsets of the Sea-thru dataset with NewCRFs for depth estimation.", + "bbox": [ + 511, + 775, + 905, + 867 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Ablation on the effectiveness of each component. We first evaluate the contribution of each component within", + "bbox": [ + 511, + 869, + 905, + 900 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "967", + "bbox": [ + 485, + 945, + 509, + 955 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "TIDE, as shown in Tab. 4. When utilizing only the Implicit Layout Sharing (ILS) mechanism or Time Adaptive Normalization (TAN), the former outperforms the latter in depth estimation and semantic segmentation. Combining both methods results in a significant improvement $(36.8\\% \\rightarrow 42.1\\%)$ in semantic segmentation. These results indicate that ILS and TAN are complementary methods. By combining them for end-to-end training, the consistency between images and dense annotations can be further optimized. Additionally, we further demonstrate the effectiveness of the time-adaptive operation. As shown in the last row of Tab. 4, without time-adaptive parameters, the quality of the generated data will be varying degrees of degradation, especially for the semantic segmentation task.", + "bbox": [ + 89, + 90, + 480, + 301 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Ablation on the position of components. We then study the effect of the position of ILS and TAN, as shown in Tab. 5. We find that applying the ILS and TAN mechanisms in the first half of the transformer of text-to-image yields better performance than using them in the second half. 
This can be attributed to the layout information being produced in the first half of the transformer, which is mismatched with ILS introduced only in the latter part. Meanwhile, the results demonstrate that combining both achieves better consistency between the image and dense annotations.", + "bbox": [ + 89, + 301, + 480, + 452 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Ablation on data scaling. Finally, we synthesize $N$ samples for each caption to validate the impact of synthetic data scale on underwater dense prediction tasks, as shown in Tab. 6. It can be observed that as the amount of synthetic data increases, there is no substantial improvement in the underwater depth estimation task. However, for the underwater semantic segmentation task, a significant gain is observed in the early stages as $N$ increases, but the improvement begins to plateau after $N = 6$.", + "bbox": [ + 89, + 457, + 482, + 592 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "5.5. More Challenging Underwater Data Synthesis", + "text_level": 1, + "bbox": [ + 89, + 602, + 480, + 618 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "We validate whether TIDE can generate more challenging data by adding extra text prompts about underwater lighting or water quality (e.g., low light, turbidity) to the original underwater scene caption. As shown in Fig. 5, the results demonstrate that TIDE can generate more challenging underwater images. While annotating these underwater images may be extremely difficult for humans, TIDE can effortlessly produce highly consistent and accurate dense annotations, which hold great practical value for real-world underwater applications. In addition, to demonstrate the diversity of generated underwater data, we generate twelve underwater images from the same text prompt, as shown in Fig. 6. It can be observed that, despite sharing the same text prompt, the generated images exhibit rich diversity.", + "bbox": [ + 89, + 626, + 482, + 838 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "5.6. Limitation", + "text_level": 1, + "bbox": [ + 89, + 847, + 209, + 861 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Despite the promising results achieved, our method still has some limitations. First, our approach cannot gener-", + "bbox": [ + 89, + 869, + 482, + 900 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/f365ef2c20cdc9bd13840b7a564002ebdccc654c8edfb99a69b2dcfceb71171a.jpg", + "image_caption": [ + "Figure 5. More challenging underwater data generated by TIDE." + ], + "image_footnote": [], + "bbox": [ + 517, + 89, + 903, + 246 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/5d98c398f2a8ca0fdebbecb449d4a876f85b15b8d1979a6867739d416b881fa2.jpg", + "image_caption": [ + "Figure 6. Visualization of generated data diversity." + ], + "image_footnote": [], + "bbox": [ + 516, + 282, + 903, + 386 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "ate instance-level semantic masks from the generation perspective. Relying on text prompts to guide the generation of instance-level masks with semantic annotations remains challenging. Additionally, although TIDE can leverage the powerful priors of pre-trained T2I models to generate highly challenging underwater images (e.g., low light, turbidity), there is still room for improvement. These will be key directions for future work.", + "bbox": [ + 511, + 436, + 906, + 556 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "6. 
Conclusion", + "text_level": 1, + "bbox": [ + 511, + 571, + 633, + 585 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "This paper introduces a unified text-to-image and dense annotation generation model for underwater scenes. The model can generate realistic underwater images and multiple highly consistent dense annotations using only text prompts as input. We validate the effectiveness of our method on underwater depth estimation and semantic segmentation tasks by synthesizing a large-scale underwater dataset containing images along with highly consistent depth and semantic segmentation annotations. In the depth estimation task, extensive experimental results show that our method, using only text as input, achieves highly competitive results compared to previous methods that required stronger dense conditions for underwater depth synthesis. Meanwhile, pre-training with data synthesized using our method further improves model performance in the semantic segmentation task. Our study provides a new perspective for alleviating data scarcity in other fields.", + "bbox": [ + 511, + 595, + 906, + 852 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Acknowledgement. This work was supported by the NSFC (Grant U234120202 and 62225603).", + "bbox": [ + 511, + 857, + 906, + 887 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "968", + "bbox": [ + 485, + 945, + 509, + 955 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 91, + 89, + 187, + 104 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[1] Ashutosh Agarwal and Chetan Arora. Attention attention everywhere: Monocular depth prediction with skip attention. In Proc. of IEEE Winter Conf. on Applications of Computer Vision, pages 5861-5870, 2023. 6, 7", + "[2] Jiwoon Ahn and Suha Kwak. Learning pixel-level semantic affinity with image-level supervision for weakly supervised semantic segmentation. In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 4981-4990, 2018. 3", + "[3] Derya Akkaynak and Tali Treibitz. Sea-thru: A method for removing water from underwater images. In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 1682-1691, 2019. 2, 6", + "[4] Dana Berman, Deborah Levy, Shai Avidan, and Tali Treibitz. Underwater single image color restoration using haze-lines and a new quantitative dataset. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(8):2822-2837, 2020. 6", + "[5] Shariq Farooq Bhat, Ibrahim Alhashim, and Peter Wonka. Adabins: Depth estimation using adaptive bins. In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 4009-4018, 2021. 6, 7", + "[6] Junsong Chen, Jincheng Yu, Chongjian Ge, Lewei Yao, Enze Xie, Yue Wu, Zhongdao Wang, James Kwok, Ping Luo, Huchuan Lu, and Zhenguo Li. Pixart-α: Fast training of diffusion transformer for photorealistic text-to-image synthesis, 2024. 2, 3, 4, 6", + "[7] Zhe Chen, Yuchen Duan, Wenhai Wang, Junjun He, Tong Lu, Jifeng Dai, and Yu Qiao. Vision transformer adapter for dense predictions. In Proc. of Intl. Conf. on Learning Representations, 2023. 2, 7", + "[8] Bowen Cheng, Ishan Misra, Alexander G Schwing, Alexander Kirillov, and Rohit Girdhar. Masked-attention mask transformer for universal image segmentation. In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 1290–1299, 2022. 
7", + "[9] Paulo LJ Drews, Erickson R Nascimento, Silvia SC Botelho, and Mario Fernando Montenegro Campos. Underwater depth estimation and image restoration based on single images. IEEE computer graphics and applications, 36(2):24-35, 2016. 3", + "[10] Honey Gupta and Kaushik Mitra. Unsupervised single image underwater depth estimation. In Proc. of IEEE Intl. Conf. on Image Processing, pages 624-628. IEEE, 2019. 3", + "[11] Praful Hamberde, Subrahmanyam Murala, and Abhinav Dhall. Uw-gan: Single-image depth estimation and image enhancement for underwater images. 70:1-12, 2021. 3", + "[12] Kaiming He, Jian Sun, and Xiaou Tang. Single image haze removal using dark channel prior. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(12):2341-2353, 2010. 3", + "[13] Ruifei He, Shuyang Sun, Xin Yu, Chuhui Xue, Wenqing Zhang, Philip Torr, Song Bai, and Xiaojuan Qi. Is synthetic data from generative models ready for image recognition? In Proc. of Intl. Conf. on Learning Representations, 2023. 3" + ], + "bbox": [ + 93, + 114, + 483, + 900 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[14] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Proc. of Advances in Neural Information Processing Systems, 33:6840-6851, 2020. 2, 3", + "[15] Edward J Hu, Phillip Wallis, Zeyuan Allen-Zhu, Yanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. Lora: Low-rank adaptation of large language models. In Proc. of Intl. Conf. on Learning Representations, 2022. 5, 7", + "[16] Md Jahidul Islam, Chelsey Edge, Yuyang Xiao, Peigen Luo, Munteqim Mehtaz, Christopher Morse, Sadman Sakib Enan, and Junaed Sattar. Semantic segmentation of underwater imagery: Dataset and benchmark. In Proc. of the IEEE Int. Conf. on Intelligent Robots and Systems, pages 1769-1776. IEEE, 2020. 3, 4", + "[17] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment anything. In Porc. of IEEE Intl. Conf. on Computer Vision, pages 4015-4026, 2023. 3", + "[18] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In Proc. of Intl. Conf. on Machine Learning, pages 19730–19742. PMLR, 2023. 4", + "[19] Shijie Lian, Hua Li, Runmin Cong, Suqi Li, Wei Zhang, and Sam Kwong. Watermask: Instance segmentation for underwater imagery. In Porc. of IEEE Intl. Conf. on Computer Vision, pages 1305-1315, 2023. 3, 4, 6", + "[20] Shijie Lian, Ziyi Zhang, Hua Li, Wenjie Li, Laurence Tianruo Yang, Sam Kwong, and Runmin Cong. Diving into underwater: Segment anything model guided underwater salient instance segmentation and a large-scale dataset. In Proc. of Intl. Conf. on Machine Learning, 2024. 2, 3, 4, 6", + "[21] Zhengyao Lv, Yuxiang Wei, Wangmeng Zuo, and KwanYee K Wong. Place: Adaptive layout-semantic fusion for semantic image synthesis. In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 9264-9274, 2024. 3, 4", + "[22] Quang Nguyen, Truong Vu, Anh Tran, and Khoi Nguyen. Dataset diffusion: Diffusion-based synthetic data generation for pixel-level semantic segmentation. In Proc. of Advances in Neural Information Processing Systems, 2024. 2, 3", + "[23] Alexander Quinn Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob Mcgrew, Ilya Sutskever, and Mark Chen. 
Glide: Towards photorealistic image generation and editing with text-guided diffusion models. In Proc. of Intl. Conf. on Machine Learning, pages 16784-16804. PMLR, 2022. 3", + "[24] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In Proc. of Intl. Conf. on Machine Learning, pages 8748-8763. PMLR, 2021. 4", + "[25] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of machine learning research, 21(140):1-67, 2020. 4" + ], + "bbox": [ + 516, + 92, + 905, + 900 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "969", + "bbox": [ + 486, + 945, + 509, + 955 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[26] Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125, 1 (2):3, 2022. 3", + "[27] Nikhila Ravi, Valentin Gabeur, Yuan-Ting Hu, Ronghang Hu, Chaitanya Ryali, Tengyu Ma, Haitham Khedr, Roman Rädle, Chloe Rolland, Laura Gustafson, et al. Sam 2: Segment anything in images and videos. In Proc. of Intl. Conf. on Learning Representations, 2025. 3", + "[28] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 10684-10695, 2022. 2, 3, 4", + "[29] Aaron Van Den Oord, Oriol Vinyals, et al. Neural discrete representation learning. In Proc. of Advances in Neural Information Processing Systems, 2017. 3, 7", + "[30] Yibo Wang, Ruiyuan Gao, Kai Chen, Kaiqiang Zhou, Yingjie Cai, Lanqing Hong, Zhenguo Li, Lihui Jiang, Dit-Yan Yeung, Qiang Xu, et al. Detdiffusion: Synergizing generative and perceptive models for enhanced data generation and perception. In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 7246-7255, 2024. 2, 3", + "[31] Weijia Wu, Yuzhong Zhao, Hao Chen, Yuchao Gu, Rui Zhao, Yefei He, Hong Zhou, Mike Zheng Shou, and Chunhua Shen. Datasetdm: Synthesizing data with perception annotations using diffusion models. Proc. of Advances in Neural Information Processing Systems, 36:54683-54695, 2023. 2, 3", + "[32] Weijia Wu, Yuzhong Zhao, Mike Zheng Shou, Hong Zhou, and Chunhua Shen. Diffumask: Synthesizing images with pixel-level annotations for semantic segmentation using diffusion models. In Proc. of IEEE Intl. Conf. on Computer Vision, pages 1206-1217, 2023. 3", + "[33] Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M Alvarez, and Ping Luo. Segformer: Simple and efficient design for semantic segmentation with transformers. Proc. of Advances in Neural Information Processing Systems, 34:12077-12090, 2021. 7", + "[34] Zhenda Xie, Zigang Geng, Jingcheng Hu, Zheng Zhang, Han Hu, and Yue Cao. Revealing the dark secrets of masked image modeling. In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 14475-14485, 2023. 6, 7", + "[35] Han Xue, Zhiwu Huang, Qianru Sun, Li Song, and Wenjun Zhang. Freestyle layout-to-image synthesis. In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 14256-14266, 2023. 3, 4", + "[36] Tianyu Yan, Zifu Wan, Xinhao Deng, Pingping Zhang, Yang Liu, and Huchuan Lu. 
Mas-sam: Segment any marine animal with aggregated features. In Proc. of Intl. Joint Conf. on Artificial Intelligence, 2024. 3", + "[37] Lihe Yang, Xiaogang Xu, Bingyi Kang, Yinghuan Shi, and Hengshuang Zhao. Freemask: Synthetic images with dense annotations make stronger segmentation models. Proc. of Advances in Neural Information Processing Systems, 36, 2023. 2, 3, 7" + ], + "bbox": [ + 91, + 92, + 480, + 898 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[38] Lihe Yang, Bingyi Kang, Zilong Huang, Zhen Zhao, Xiaogang Xu, Jiashi Feng, and Hengshuang Zhao. Depth anything v2. In Proc. of Advances in Neural Information Processing Systems, 2024. 4", + "[39] Hanrong Ye, Jason Kuen, Qing Liu, Zhe Lin, Brian Price, and Dan Xu. Seggen: Supercharging segmentation models with text2mask and mask2img synthesis. arXiv preprint arXiv:2311.03355, 2023. 3", + "[40] Weihao Yuan, Xiaodong Gu, Zuozhuo Dai, Siyu Zhu, and Ping Tan. Neural window fully-connected crfs for monocular depth estimation. In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 3916-3925, 2022. 2, 6", + "[41] Fan Zhang, Shaodi You, Yu Li, and Ying Fu. Atlantis: Enabling underwater depth estimation with stable diffusion. In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 11852-11861, 2024. 2, 3, 6, 7", + "[42] Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models. In Proc. of IEEE Intl. Conf. on Computer Vision, pages 3836-3847, 2023. 2, 3", + "[43] Pingping Zhang, Tianyu Yan, Yang Liu, and Huchuan Lu. Fantastic animals and where to find them: Segment any marine animal with dual sam. In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 2578-2587, 2024. 3" + ], + "bbox": [ + 516, + 92, + 903, + 457 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "970", + "bbox": [ + 485, + 944, + 511, + 955 + ], + "page_idx": 9 + } +] \ No newline at end of file diff --git a/2025/A Unified Image-Dense Annotation Generation Model for Underwater Scenes/c77f859c-1439-4915-ba1a-a9314ac3d9a9_model.json b/2025/A Unified Image-Dense Annotation Generation Model for Underwater Scenes/c77f859c-1439-4915-ba1a-a9314ac3d9a9_model.json new file mode 100644 index 0000000000000000000000000000000000000000..655c7a94c3e5a3f09b39e8bd29b4e80e03429bbb --- /dev/null +++ b/2025/A Unified Image-Dense Annotation Generation Model for Underwater Scenes/c77f859c-1439-4915-ba1a-a9314ac3d9a9_model.json @@ -0,0 +1,2068 @@ +[ + [ + { + "type": "header", + "bbox": [ + 0.107, + 0.003, + 0.182, + 0.044 + ], + "angle": 0, + "content": "CVF" + }, + { + "type": "header", + "bbox": [ + 0.237, + 0.001, + 0.812, + 0.047 + ], + "angle": 0, + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore."
+ }, + { + "type": "title", + "bbox": [ + 0.109, + 0.131, + 0.89, + 0.152 + ], + "angle": 0, + "content": "A Unified Image-Dense Annotation Generation Model for Underwater Scenes" + }, + { + "type": "text", + "bbox": [ + 0.251, + 0.181, + 0.744, + 0.235 + ], + "angle": 0, + "content": "Hongkai Lin Dingkang Liang Zhenghao Qi Xiang Bai* Huazhong University of Science and Technology {hklin,dkliang,xbai}@hust.edu.cn" + }, + { + "type": "image", + "bbox": [ + 0.113, + 0.244, + 0.885, + 0.612 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.089, + 0.616, + 0.908, + 0.66 + ], + "angle": 0, + "content": "Figure 1. We present TIDE, a unified underwater image-dense annotation generation model. Its core lies in the shared layout information and the natural complementarity between multimodal features. Our model, derived from the text-to-image model and fine-tuned with underwater data, enables the generation of highly consistent underwater image-dense annotations from solely text conditions." + }, + { + "type": "title", + "bbox": [ + 0.248, + 0.671, + 0.33, + 0.687 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.704, + 0.486, + 0.871 + ], + "angle": 0, + "content": "Underwater dense prediction, especially depth estimation and semantic segmentation, is crucial for gaining a comprehensive understanding of underwater scenes. Nevertheless, high-quality and large-scale underwater datasets with dense annotations remain scarce because of the complex environment and the exorbitant data collection costs. This paper proposes a unified Text-to-Image and DEnse annotation generation method (TIDE) for underwater scenes. It relies solely on text as input to simultaneously generate realistic underwater images and multiple highly consistent dense annotations. Specifically, we unify the generation of" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.673, + 0.909, + 0.884 + ], + "angle": 0, + "content": "text-to-image and text-to-dense annotations within a single model. The Implicit Layout Sharing mechanism (ILS) and cross-modal interaction method called Time Adaptive Normalization (TAN) are introduced to jointly optimize the consistency between image and dense annotations. We synthesize a large-scale underwater dataset using TIDE to validate the effectiveness of our method in underwater dense prediction tasks. The results demonstrate that our method effectively improves the performance of existing underwater dense prediction models and mitigates the scarcity of underwater data with dense annotations. We hope our method can offer new perspectives on alleviating data scarcity issues in other fields. The code is available at https://github.com/HongkLin/TIDE." + }, + { + "type": "page_footnote", + "bbox": [ + 0.115, + 0.888, + 0.248, + 0.901 + ], + "angle": 0, + "content": "* Corresponding author." + }, + { + "type": "page_number", + "bbox": [ + 0.486, + 0.946, + 0.511, + 0.957 + ], + "angle": 0, + "content": "961" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.092, + 0.09, + 0.223, + 0.106 + ], + "angle": 0, + "content": "1. Introduction" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.115, + 0.484, + 0.221 + ], + "angle": 0, + "content": "Underwater dense prediction, particularly depth estimation and semantic segmentation, is essential for underwater exploration and environmental monitoring. 
However, the complex environment and the prohibitive data collection costs result in a scarcity of underwater data with dense annotations. Such conditions severely hinder the advancement of dense prediction technologies in underwater scenes." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.222, + 0.484, + 0.417 + ], + "angle": 0, + "content": "Fortunately, the recent success of image generative techniques [14, 28, 42] provides a breakthrough in addressing the scarcity of underwater scene data. In the field of general object understanding, controllable data synthesis [22, 30, 31, 37] demonstrates its effectiveness in few-shot scenarios. A straightforward solution is to apply these methods to underwater scenes directly. For instance, Atlantis [41], a pioneering controllable generation method for underwater depth data that takes ControlNet as its core, utilizes terrestrial depth maps as conditions. It effectively mitigates the issue of scarce underwater depth data and achieves consistent performance improvements across multiple underwater depth datasets and models." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.418, + 0.484, + 0.599 + ], + "angle": 0, + "content": "Despite remarkable progress, there are still challenges in Atlantis, as follows: 1) Atlantis, as shown in Fig. 2(a), generates underwater depth data using terrestrial depth maps as conditions due to the lack of underwater depth maps. This is considered a suboptimal approach since it may not align with natural underwater scenes. Better recreating authentic underwater environments is equally essential. 2) It generates data with only a single type of dense annotation, which is insufficient for understanding complex underwater scenes. Thus, a natural question arises: How can we simultaneously generate highly consistent, one-to-many, and vivid underwater images and dense annotation pairs?" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.599, + 0.484, + 0.734 + ], + "angle": 0, + "content": "In this paper, we explore the possibility of simultaneously generating highly consistent, realistic underwater scene images and multiple types of dense annotations using only text conditions. Our approach, which we refer to as TIDE and illustrate in Fig. 2(b), presents a unified Text-to-Image and DEnse annotation generation method. TIDE is an end-to-end training and inference model that integrates denoising models in parallel for both text-to-image generation and text-to-dense annotation generation." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.735, + 0.484, + 0.901 + ], + "angle": 0, + "content": "To align the images and multiple types of dense annotations generated by parallel denoising models, we propose the Implicit Layout Sharing (ILS) mechanism. Specifically, the cross-attention map as an implicit layout is the key to controlling the image layout in the text-to-image model [6, 28], inspiring us to share the implicit layout for aligning images and dense annotations. ILS effortlessly replaces the cross-attention map in the text-to-dense annotation model with that from the text-to-image model, effectively improving the consistency between the image and dense annotations. Furthermore, considering the intrinsic" + }, + { + "type": "image", + "bbox": [ + 0.517, + 0.089, + 0.909, + 0.321 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.513, + 0.33, + 0.907, + 0.387 + ], + "angle": 0, + "content": "Figure 2. The comparison between Atlantis [41] and our method. 
Unlike Atlantis, which requires text and depth map conditions, our method only needs text as the input condition to generate image-dense annotations (e.g., depth maps and semantic masks)." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.415, + 0.907, + 0.536 + ], + "angle": 0, + "content": "complementarity between features of different modalities, we introduce a cross-modal interaction method called Time Adaptive Normalization (TAN), a normalization layer that modulates the activations using different modal features. The consistency of the image and dense annotations can further be jointly optimized through cross-modal feature interaction among the different dense annotation branches and between the image and dense annotation branches." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.537, + 0.909, + 0.793 + ], + "angle": 0, + "content": "To verify the effectiveness of our method, we use TIDE to generate a large-scale dataset of underwater images with dense annotations named SynTIDE. Extensive experiments demonstrate the effectiveness of SynTIDE for underwater dense prediction tasks. In the underwater depth estimation task, SynTIDE presents consistent improvements in various fine-tuning models. For example, when adopting representative NewCRFs [40] as the fine-tuning model, our approach achieves significant gains over previous work, particularly in the \( SI_{log} \) and \( \delta_1 \) metrics, with improvements of 14.73 and 36% on the D3 and D5 subsets of the Sea-thru [3] dataset, respectively. In underwater semantic segmentation, pre-training with SynTIDE yields consistent improvements across different models. For instance, when using ViT-Adapter [7] as the training model, pre-training with the SynTIDE dataset leads to improvements of 2.1% mIoU on the USIS10K [20] dataset." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.795, + 0.909, + 0.901 + ], + "angle": 0, + "content": "TIDE demonstrates powerful data generation capabilities for underwater scenes. Using only easily accessible text prompts, TIDE can generate highly consistent and realistic underwater images and multiple types of dense annotations. It holds potential as a mainstream data synthesis method for underwater scenes and offers a promising direction for alleviating data scarcity in other fields. The main contribu" + }, + { + "type": "page_number", + "bbox": [ + 0.486, + 0.946, + 0.511, + 0.957 + ], + "angle": 0, + "content": "962" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.092, + 0.485, + 0.304 + ], + "angle": 0, + "content": "tions of this work are as follows: 1) We propose a novel data synthesis method, TIDE, which uses text as the sole condition to generate images and their corresponding multi-type dense annotations simultaneously. To our knowledge, TIDE is the first method capable of simultaneously synthesizing both images and multiple dense annotations from text. 2) To align the images and dense annotations, we introduce the Implicit Layout Sharing mechanism. The text-to-image and text-to-dense annotation models share the same layout information, ensuring proper alignment between the image and dense annotations. Meanwhile, the consistency between image and dense annotations can be further optimized through the cross-modal interaction method called Time Adaptive Normalization." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.318, + 0.222, + 0.333 + ], + "angle": 0, + "content": "2. 
Related Work" + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.343, + 0.355, + 0.358 + ], + "angle": 0, + "content": "2.1. Underwater Dense Prediction" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.367, + 0.483, + 0.533 + ], + "angle": 0, + "content": "Dense prediction tasks in underwater scenes are crucial for comprehensively understanding underwater scenes. The publication of SUIM [16] provides a fundamental dataset and benchmark for the exploration of underwater semantic segmentation. To fill the gap in underwater instance segmentation, WaterMask [19] publishes the UIIS dataset and designs a model catering to the unique characteristics of underwater images, improving the accuracy of underwater instance segmentation. Recently, the rise of general foundational segmentation models [17, 27] drives further development in the field of underwater segmentation [20, 36, 43]." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.534, + 0.483, + 0.684 + ], + "angle": 0, + "content": "Due to the lack of underwater depth estimation datasets, most underwater depth estimation methods focus on traditional techniques, unsupervised, or self-supervised approaches. Traditional methods [9] mainly rely on statistical priors, such as the dark channel prior [12], to estimate underwater depth. Gupta et al. [10] model the relationship between underwater and above-water hazy appearances for depth estimation. UW-GAN [11] and Atlantis [41] improve the performance of underwater depth estimation by synthesizing training datasets through generative models." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.685, + 0.483, + 0.761 + ], + "angle": 0, + "content": "While these methods make notable contributions to underwater dense prediction tasks, large-scale, high-quality underwater datasets with only segmentation or depth annotations remain insufficient for achieving comprehensive underwater scene understanding." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.771, + 0.342, + 0.787 + ], + "angle": 0, + "content": "2.2. Controllable Data Synthesis" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.795, + 0.483, + 0.87 + ], + "angle": 0, + "content": "Thanks to the success of diffusion models [14] and the availability of large-scale, high-quality text-image training data, text-to-image models [6, 23, 26, 28] and controllable image generation models [21, 35, 42] achieve unprecedented success in image quality, diversity, and consistency." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.871, + 0.483, + 0.902 + ], + "angle": 0, + "content": "He et al. [13] are the first to explore and demonstrate the effectiveness of state-of-the-art text-to-image genera" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.092, + 0.905, + 0.363 + ], + "angle": 0, + "content": "tion models for image recognition. This makes it possible to achieve diverse data collection and accurate annotation at a lower cost. Wu et al. and Nguyen et al. [22, 31] explore the ability of pre-trained diffusion models to enhance real data in few-shot settings for segmentation tasks. Diffumask [32] ingeniously combines text-to-image models with AffinityNet [2], achieving open-vocabulary segmentation data synthesis. Freemask [37] demonstrates that synthetic data can further enhance the performance of semantic segmentation models under fully supervised settings by incorporating Freestyle [35], a controllable image generation method using semantic masks as input conditions. 
Seggen [39] designs a multi-stage semantic segmentation data synthesis method, text2mask and mask2image, which synthesizes semantic segmentation data with high semantic consistency using only text as the condition. Detdiffusion [30] synthesizes object detection data by incorporating object categories and spatial coordinates into the text." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.364, + 0.905, + 0.424 + ], + "angle": 0, + "content": "Unlike the aforementioned single-task data synthesis methods, we propose a novel end-to-end underwater data synthesis approach that simultaneously generates semantic masks and depth maps, relying solely on text conditions." + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.438, + 0.652, + 0.453 + ], + "angle": 0, + "content": "3. Preliminaries" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.462, + 0.906, + 0.553 + ], + "angle": 0, + "content": "Diffusion Models (DMs) [14] emerge as leading text-to-image (T2I) generation models, recognized for their ability to produce realistic images. DMs can reconstruct the data distribution by learning the reverse process of a diffusion process. Denoting \( z_{t} \) as the random variable at the \( t \)-th timestep, the diffusion process is modeled as a Markov Chain:" + }, + { + "type": "equation", + "bbox": [ + 0.608, + 0.568, + 0.905, + 0.584 + ], + "angle": 0, + "content": "\[\nz _ {t} \sim \mathcal {N} (\sqrt {\alpha_ {t}} z _ {t - 1}, (1 - \alpha_ {t}) I), \tag {1}\n\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.591, + 0.906, + 0.741 + ], + "angle": 0, + "content": "where \(\alpha_{t}\) is the fixed coefficient predefined in the noise schedule, and \(I\) refers to the identity matrix. A prominent variant, the Latent Diffusion Model (LDM) [28], innovatively shifts the diffusion process of standard DMs into a latent space. This transition notably decreases computational costs while preserving the generative quality and flexibility of the original model. The resulting efficiency gain primarily arises from the reduced dimensionality of the latent space, which allows for lower training costs without compromising the model's generative capabilities." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.741, + 0.906, + 0.848 + ], + "angle": 0, + "content": "Stable Diffusion, an exemplary implementation of LDM, comprises an AutoEncoder [29] and a latent diffusion model. The AutoEncoder \(\varepsilon\) is designed to learn a latent space that is perceptually equivalent to the image space. Meanwhile, the LDM \(\epsilon_{\theta}\) is parameterized as a denoising model with cross-attention and trained on a large-scale dataset of text-image pairs via:" + }, + { + "type": "equation", + "bbox": [ + 0.525, + 0.858, + 0.905, + 0.877 + ], + "angle": 0, + "content": "\[\n\mathcal {L} _ {L D M} := \mathbb {E} _ {\varepsilon (x), y, \epsilon \sim N (0, 1), t} [ \| \epsilon - \epsilon_ {\theta} (z _ {t}, t, \tau_ {\theta} (y)) \| _ {2} ^ {2} ], \tag {2}\n\]" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.886, + 0.905, + 0.901 + ], + "angle": 0, + "content": "where \(\epsilon\) is the target noise. 
\\(\\tau_{\\theta}\\) and \\(y\\) are the pre-trained" + }, + { + "type": "page_number", + "bbox": [ + 0.486, + 0.946, + 0.511, + 0.957 + ], + "angle": 0, + "content": "963" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.091, + 0.087, + 0.705, + 0.301 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.319, + 0.305, + 0.48, + 0.318 + ], + "angle": 0, + "content": "(a) Training phase of our TIDE" + }, + { + "type": "image", + "bbox": [ + 0.735, + 0.09, + 0.907, + 0.297 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.769, + 0.306, + 0.871, + 0.318 + ], + "angle": 0, + "content": "(b) Inference phase" + }, + { + "type": "image_caption", + "bbox": [ + 0.089, + 0.329, + 0.908, + 0.371 + ], + "angle": 0, + "content": "Figure 3. Training and Inference. The denoising model of TIDE mainly consists of three transformers, each dedicated to text-to-image, text-to-depth, and text-to-mask. The proposed Implicit Layout Sharing mechanism (ILS) and Time Adaptive Normalization (TAN) are used to align the generated image, depth map, and semantic mask." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.398, + 0.484, + 0.473 + ], + "angle": 0, + "content": "text encoder (e.g., CLIP [24], T5 [25]) and text prompts, respectively. This equation represents the mean-squared error (MSE) between the target noise \\(\\epsilon\\) and the noise predicted by the model, encapsulating the core learning mechanism of the latent diffusion model." + }, + { + "type": "title", + "bbox": [ + 0.09, + 0.49, + 0.221, + 0.506 + ], + "angle": 0, + "content": "4. Our Method" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.517, + 0.484, + 0.714 + ], + "angle": 0, + "content": "An overview of our method, a unified text-to-image and dense annotation generation model (TIDE), is shown in Fig. 3. TIDE is built upon a pre-trained transformer [6] for text-to-image generation, along with two fine-tuned mini-transformers (details provided in Sec. 5.1.1) dedicated to text-to-depth and text-to-mask generation. Simply parallelizing multiple text-to-image processes does not ensure consistency between the images and dense annotations. To enable consistency between them, we propose Implicit Layout Sharing (ILS) and the cross-modal interaction method named Time Adaptive Normalization (TAN). After training, TIDE simultaneously generates images and multiple dense annotations with high consistency using only text as input." + }, + { + "type": "title", + "bbox": [ + 0.09, + 0.726, + 0.262, + 0.743 + ], + "angle": 0, + "content": "4.1. Data Preparation" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.75, + 0.484, + 0.903 + ], + "angle": 0, + "content": "We aim to generate realistic underwater images, corresponding highly consistent depth maps, and semantic masks. However, existing high-quality, dense annotation data primarily consists of mask annotations. Therefore, we construct training data around these datasets with semantic masks, as shown in Tab. 1. On this basis, we obtain the corresponding depth map and caption for each image using existing foundation models. Specifically, for each underwater image, the corresponding depth map is obtained by pre-trained Depth Anything [38]. Meanwhile, the caption of" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.398, + 0.907, + 0.444 + ], + "angle": 0, + "content": "each image is obtained from the pre-trained BLIP2 [18]. We construct approximately 14K quadruples {Image, Depth, Mask, Caption} for TIDE training." 
+ }, + { + "type": "table_caption", + "bbox": [ + 0.513, + 0.46, + 0.907, + 0.488 + ], + "angle": 0, + "content": "Table 1. Segmentation Datasets and Data Splits. \\* denotes the training set of TIDE, while the others are used for evaluation." + }, + { + "type": "table", + "bbox": [ + 0.517, + 0.498, + 0.901, + 0.566 + ], + "angle": 0, + "content": "
DatasetsSeg TaskTrainValTest
SUIM [16]Semantic1,488*110*/
UIIS [19]Instance3,937*691/
USIS10K [20]Instance7,442*1,5941,596*
" + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.596, + 0.83, + 0.612 + ], + "angle": 0, + "content": "4.2. Implicit Layout Sharing Mechanism" + }, + { + "type": "text", + "bbox": [ + 0.511, + 0.621, + 0.907, + 0.833 + ], + "angle": 0, + "content": "In advanced text-to-image models [6, 28], the cross-attention map plays a crucial role in controlling the image layout. Existing methods [21, 35] demonstrate that adjusting the cross-attention map during the text-to-image process can effectively control the layout of the generated image. Therefore, the cross-attention map can be considered as the implicit layout information. Intuitively, sharing the implicit layout between text-to-image and text-to-dense annotations may establish a strong correlation between the generated image and dense annotations. To this end, we propose an Implicit Layout Sharing mechanism to align the generated image and dense annotations. Specifically, cross-attention, as a crucial process for generating implicit layouts in text-to-image/mask/depth model, can first be formulated as:" + }, + { + "type": "equation", + "bbox": [ + 0.549, + 0.846, + 0.865, + 0.864 + ], + "angle": 0, + "content": "\\[\n\\operatorname {A t t n} _ {i} \\left(\\boldsymbol {Q} _ {i}, \\boldsymbol {K} _ {i}, \\boldsymbol {V} _ {i}\\right) = \\operatorname {s o f t m a x} \\left(\\boldsymbol {Q} _ {i} \\boldsymbol {K} _ {i} ^ {\\top} / \\sqrt {c}\\right) \\boldsymbol {V} _ {i},\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.543, + 0.866, + 0.905, + 0.884 + ], + "angle": 0, + "content": "\\[\n\\operatorname {A t t n} _ {d} \\left(\\boldsymbol {Q} _ {d}, \\boldsymbol {K} _ {d}, \\boldsymbol {V} _ {d}\\right) = \\operatorname {s o f t m a x} \\left(\\boldsymbol {Q} _ {d} \\boldsymbol {K} _ {d} ^ {\\top} / \\sqrt {c}\\right) \\boldsymbol {V} _ {d}, \\tag {3}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.523, + 0.887, + 0.879, + 0.905 + ], + "angle": 0, + "content": "\\[\n\\operatorname {A t t n} _ {m} \\left(\\boldsymbol {Q} _ {m}, \\boldsymbol {K} _ {m}, \\boldsymbol {V} _ {m}\\right) = \\operatorname {s o f t m a x} \\left(\\boldsymbol {Q} _ {m} \\boldsymbol {K} _ {m} ^ {\\top} / \\sqrt {c}\\right) \\boldsymbol {V} _ {m},\n\\]" + }, + { + "type": "page_number", + "bbox": [ + 0.486, + 0.946, + 0.512, + 0.957 + ], + "angle": 0, + "content": "964" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.092, + 0.485, + 0.273 + ], + "angle": 0, + "content": "where \\(c\\) refers to the feature channel. \\(Q_{i} / Q_{d} / Q_{m}\\), \\(K_{i} / K_{d} / K_{m}\\), and \\(V_{i} / V_{d} / V_{m}\\) represent the query, key, and value within the text-to-image/depth/mask cross-attention module, respectively. Since text-to-image models are pre-trained on high-quality and large-scale image-caption datasets, they exhibit strong controllability and generalization. Therefore, sharing the implicit layouts from the text-to-image model is the optimal choice to ensure the quality of the generated data. As shown in Fig. 3(a), the implicit layouts from the block in the text-to-image model are shared with the cross-attention in the block of text-to-dense annotation models. The implicit layouts refer to:" + }, + { + "type": "equation", + "bbox": [ + 0.184, + 0.279, + 0.483, + 0.297 + ], + "angle": 0, + "content": "\\[\n\\boldsymbol {M} _ {i} = \\operatorname {s o f t m a x} \\left(\\boldsymbol {Q} _ {i} \\boldsymbol {K} _ {i} ^ {\\top} / \\sqrt {c}\\right). 
\tag {4}\n\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.303, + 0.484, + 0.348 + ], + "angle": 0, + "content": "By sharing the implicit layouts from the text-to-image model, the cross-attention of text-to-depth \((\mathsf{Attn}_d)\) and text-to-mask \((\mathsf{Attn}_m)\) can be simplified as follows:" + }, + { + "type": "equation", + "bbox": [ + 0.175, + 0.354, + 0.414, + 0.37 + ], + "angle": 0, + "content": "\[\n\operatorname {A t t n} _ {d} \left(\boldsymbol {Q} _ {d}, \boldsymbol {K} _ {d}, \boldsymbol {V} _ {d}\right) = \boldsymbol {M} _ {i} \times \boldsymbol {V} _ {d},\n\]" + }, + { + "type": "equation", + "bbox": [ + 0.158, + 0.368, + 0.482, + 0.39 + ], + "angle": 0, + "content": "\[\n\operatorname {A t t n} _ {m} \left(\boldsymbol {Q} _ {m}, \boldsymbol {K} _ {m}, \boldsymbol {V} _ {m}\right) = \boldsymbol {M} _ {i} \times \boldsymbol {V} _ {m}, \tag {5}\n\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.395, + 0.484, + 0.486 + ], + "angle": 0, + "content": "where \(\times\) refers to matrix multiplication. Implicit Layout Sharing is an elegant and efficient method that unifies image and dense annotation generation, improving consistency between them. It also reduces the overall generation cost, as there is no need to compute separate cross-attention maps for the text-to-dense annotation models." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.493, + 0.357, + 0.509 + ], + "angle": 0, + "content": "4.3. Time Adaptive Normalization" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.516, + 0.484, + 0.576 + ], + "angle": 0, + "content": "Considering the complementary nature of different modality features, we propose a cross-modal feature interaction method called Time Adaptive Normalization (TAN), as shown in Fig. 4." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.577, + 0.484, + 0.879 + ], + "angle": 0, + "content": "Specifically, TAN is utilized to adjust the image layout by leveraging the cross-modal features \( \boldsymbol{x}_f \) from different branches. The cross-modal features are mapped to two normalization parameters, \( \gamma \) and \( \beta \), by MLPs, which are used to control the variation in the image layout. In this context, the features from text-to-depth and text-to-mask serve as cross-modal input features for each other. For instance, in the TAN corresponding to the \( i \)-th text-to-depth block, the outputs from both the \( i \)-th text-to-depth block and the \( i \)-th text-to-mask block serve as the input feature \( \boldsymbol{x} \) and cross-modal input feature \( \boldsymbol{x}_f \), respectively. A slight difference is that for text-to-image, the features from both text-to-depth and text-to-mask serve as the cross-modal features. In the TAN cross-modal interaction process of text-to-image, two sets of \( \gamma \) and \( \beta \) are obtained, provided by the different modality features from text-to-depth and text-to-mask. These two sets of parameters are averaged to obtain \( \bar{\gamma} \) and \( \bar{\beta} \). Then, the time embedding \( \boldsymbol{x}_t \) is introduced to adaptively control the influence of the cross-modal features. 
The normalization can be formalized as follows:" + }, + { + "type": "equation", + "bbox": [ + 0.168, + 0.884, + 0.483, + 0.903 + ], + "angle": 0, + "content": "\[\n\boldsymbol {x} ^ {\prime} = \alpha \cdot \gamma \boldsymbol {x} + \alpha \cdot \beta , \boldsymbol {x} ^ {*} = \boldsymbol {x} ^ {\prime} + \boldsymbol {x}, \tag {6}\n\]" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.557, + 0.088, + 0.877, + 0.219 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.533, + 0.227, + 0.887, + 0.243 + ], + "angle": 0, + "content": "Sigmoid \(\oplus\) Element-wise add \(\otimes\) Element-wise product" + }, + { + "type": "image_caption", + "bbox": [ + 0.513, + 0.253, + 0.907, + 0.296 + ], + "angle": 0, + "content": "Figure 4. In TAN, the cross-modal features are first mapped to the modulation parameters \(\gamma\) and \(\beta\). Then, a time-adaptive confidence \(\alpha\) is introduced to control the degree of normalization." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.32, + 0.907, + 0.488 + ], + "angle": 0, + "content": "where \( \pmb{x} \), \( \pmb{x}' \), and \( \pmb{x}^* \) are the input feature, normalized feature, and output feature, respectively. \( \alpha \) is the time-adaptive coefficient obtained from \( \pmb{x}_t \) through a linear transformation and sigmoid. TAN is applied not only from text-to-dense annotations to text-to-image but also between text-to-depth and text-to-mask to improve the consistency among dense annotations. Implicit Layout Sharing and Time Adaptive Normalization are two complementary methods that construct a joint interaction process, optimizing the consistency between the generated image and dense annotations during training." + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.496, + 0.7, + 0.512 + ], + "angle": 0, + "content": "4.4. Learning Objective" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.52, + 0.906, + 0.595 + ], + "angle": 0, + "content": "During training, the learnable parameters include only the proposed TAN module and the LoRA [15] used to fine-tune the pre-trained transformer. The overall loss \(\mathcal{L}\) is composed equally of the denoising losses from the three branches: text-to-image, text-to-depth, and text-to-mask:" + }, + { + "type": "equation", + "bbox": [ + 0.615, + 0.607, + 0.906, + 0.627 + ], + "angle": 0, + "content": "\[\n\mathcal {L} = \mathcal {L} _ {m s e} ^ {I} + \mathcal {L} _ {m s e} ^ {D} + \mathcal {L} _ {m s e} ^ {M}. \tag {7}\n\]" + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.634, + 0.665, + 0.65 + ], + "angle": 0, + "content": "4.5. Data Synthesis" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.657, + 0.906, + 0.717 + ], + "angle": 0, + "content": "Thanks to the proposed ILS and TAN, TIDE can generate realistic and highly consistent underwater images and dense annotations after training, using only text conditions, as shown in Fig. 3(b)." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.718, + 0.906, + 0.838 + ], + "angle": 0, + "content": "We filter out redundant parts from the 14K captions obtained in Sec. 4.1, resulting in approximately 5K non-redundant captions as text conditions. For each caption, we generate ten samples to construct a large-scale synthetic dataset named SynTIDE. Some representative examples are shown in Fig. 1. The SynTIDE dataset is utilized to validate the effectiveness of our method in the dense prediction task for underwater scenes."
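A rough PyTorch sketch of the TAN modulation in Eq. (6) follows; the module name, MLP layout, and the 1152-dim toy usage below are our own assumptions (loosely mirroring PixArt-α's hidden width), not the authors' implementation.

```python
import torch
import torch.nn as nn

class TimeAdaptiveNorm(nn.Module):
    # Sketch of Eq. (6): x' = alpha * (gamma * x) + alpha * beta, x* = x' + x.
    # gamma/beta come from the cross-modal feature x_f; the gate alpha comes
    # from the timestep embedding x_t via a linear layer and a sigmoid.
    def __init__(self, dim: int, time_dim: int):
        super().__init__()
        self.to_gamma_beta = nn.Sequential(nn.SiLU(), nn.Linear(dim, 2 * dim))
        self.to_alpha = nn.Linear(time_dim, dim)

    def forward(self, x, x_f, x_t):
        # x, x_f: (B, N, dim) token features; x_t: (B, time_dim) time embedding.
        gamma, beta = self.to_gamma_beta(x_f).chunk(2, dim=-1)
        alpha = torch.sigmoid(self.to_alpha(x_t)).unsqueeze(1)  # time-adaptive confidence
        x_mod = alpha * (gamma * x) + alpha * beta              # cross-modal modulation
        return x_mod + x                                        # residual keeps the input path

# Toy usage with assumed shapes.
tan = TimeAdaptiveNorm(dim=1152, time_dim=1152)
out = tan(torch.randn(2, 256, 1152), torch.randn(2, 256, 1152), torch.randn(2, 1152))
```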
+ }, + { + "type": "title", + "bbox": [ + 0.513, + 0.848, + 0.617, + 0.864 + ], + "angle": 0, + "content": "4.6. Analysis" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.871, + 0.907, + 0.901 + ], + "angle": 0, + "content": "Insights of framework design. In the text-to-image model, the cross-attention map contains the layout information of" + }, + { + "type": "page_number", + "bbox": [ + 0.486, + 0.946, + 0.512, + 0.957 + ], + "angle": 0, + "content": "965" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.262, + 0.09, + 0.735, + 0.104 + ], + "angle": 0, + "content": "Table 2. Quantitative comparisons on real underwater depth estimation datasets." + }, + { + "type": "table", + "bbox": [ + 0.095, + 0.117, + 0.904, + 0.451 + ], + "angle": 0, + "content": "
MethodFine-tuning DatasetReference\( SI_{log} \)↓A.Rel↓\( log_{10} \)↓RMSE↓S.Rel↓RMSElog↓\( \delta_1 \uparrow \)\( \delta_2 \uparrow \)\( \delta_3 \uparrow \)
Quantitative comparisons on the D3 and D5 subsets of Sea-thru [3] dataset.
AdaBins [5]Atlantis [41]CVPR 2438.241.330.121.4112.890.390.500.810.92
SynTIDE (Ours)-26.92(-11.32)1.31(-0.02)0.08(-0.04)1.12(-0.29)15.74(+2.85)0.27(-0.12)0.71(+0.21)0.95(+0.14)0.99(+0.07)
NewCRFs [40]Atlantis [41]CVPR 2437.101.680.121.4414.760.380.480.840.95
SynTIDE (Ours)-22.37(-14.73)1.50(-0.18)0.06(-0.06)1.24(-0.20)22.50(+7.74)0.23(-0.15)0.84(+0.36)0.97(+0.13)0.99(+0.04)
PixelFormer [1]Atlantis [41]CVPR 2423.701.340.061.1717.290.240.810.970.99
SynTIDE (Ours)-21.39(-2.31)1.46(+0.12)0.05(-0.01)1.15(-0.02)21.79(+4.50)0.22(-0.02)0.88(+0.07)0.98(+0.01)0.99(+0.00)
MIM [34]Atlantis [41]CVPR 2437.011.370.111.5114.420.380.560.840.94
SynTIDE (Ours)-22.49(-14.52)1.27(-0.10)0.06(-0.05)1.01(-0.50)16.46(+2.04)0.23(-0.15)0.85(+0.29)0.97(+0.13)0.99(+0.05)
Quantitative comparisons on the SQUID [4] dataset.
AdaBins [5]Atlantis [41]CVPR 2429.560.280.112.240.690.310.560.860.94
SynTIDE (Ours)-25.63(-3.93)0.23(-0.05)0.09(-0.02)2.69(+0.45)0.92(+0.23)0.27(-0.04)0.67(+0.11)0.90(+0.04)0.97(+0.03)
NewCRFs [40]Atlantis [41]CVPR 2425.190.230.092.560.830.260.680.900.96
SynTIDE (Ours)-25.55(+0.36)0.23(-0.00)0.09(+0.00)3.02(+0.46)1.07(+0.24)0.27(+0.01)0.68(+0.00)0.91(+0.01)0.97(+0.01)
PixelFormer [1]Atlantis [41]CVPR 2421.340.180.071.860.430.220.760.940.98
SynTIDE (Ours)-19.08(-2.26)0.16(-0.02)0.07(-0.00)1.75(-0.11)0.36(-0.07)0.19(-0.03)0.79(+0.03)0.97(+0.03)0.99(+0.01)
MIM [34]Atlantis [41]CVPR 2427.450.260.102.140.680.280.610.880.95
SynTIDE (Ours)-26.98(-0.47)0.25(-0.01)0.09(-0.01)3.04(+0.90)1.11(+0.43)0.28(-0.00)0.65(+0.04)0.89(+0.01)0.96(+0.01)
" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.476, + 0.482, + 0.581 + ], + "angle": 0, + "content": "the image. Thus, the cross-attention map can be viewed as an implicit layout. If two text-to-image models share the same implicit layout and undergo proper fine-tuning, the generated images are likely to exhibit strong layout similarity. Therefore, we share the same implicit layout across multiple text-to-image models. Meanwhile, we use LoRA to fine-tune the multiple text-to-image models [6]." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.582, + 0.483, + 0.734 + ], + "angle": 0, + "content": "Zero-shot generation ability. Thanks to our training strategy, which fine-tunes the pre-trained text-to-image model using only LoRA, the generalization ability of the text-to-image model is retained to some extent. This enables TIDE to generate underwater images during inference that are not seen during training. Furthermore, due to the proposed Implicit Layout Sharing and Time Adaptive Normalization mechanisms, the generated depth maps align well with these images. Therefore, TIDE has the ability to generate zero-shot underwater image-depth map pairs." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.747, + 0.224, + 0.763 + ], + "angle": 0, + "content": "5. Experiments" + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.771, + 0.37, + 0.787 + ], + "angle": 0, + "content": "5.1. Dataset and Evaluation Metrics" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.795, + 0.483, + 0.884 + ], + "angle": 0, + "content": "Underwater Depth Estimation. We follow the work [41], the D3 and D5 subsets of Sea-thru [3], and the SQUID dataset [4] used to evaluate the depth estimation capability in underwater scenes. These datasets include underwater images with depth maps obtained via the Structure-fromMotion (SfM) algorithm." + }, + { + "type": "text", + "bbox": [ + 0.11, + 0.886, + 0.483, + 0.901 + ], + "angle": 0, + "content": "The quantitative evaluation metrics include root" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.476, + 0.905, + 0.553 + ], + "angle": 0, + "content": "mean square error (RMSE) and its logarithmic variant \\((RMSE_{log})\\), absolute error in log-scale \\((\\log_{10})\\), absolute relative error (A.Rel), squared relative error (S.Rel), the percentage of inlier pixels \\((\\delta_i)\\) with thresholds of \\(1.25^i\\), and scale-invariant error in log-scale \\((SI_{log})\\): \\(100\\sqrt{Var(\\epsilon_{log})}\\)." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.555, + 0.905, + 0.645 + ], + "angle": 0, + "content": "Underwater Semantic Segmentation. The UIIS [19] and USIS10K [20] datasets are chosen to validate the effectiveness of our method in underwater semantic segmentation tasks. Instance masks belonging to the same semantic category are merged to construct semantic segmentation annotations for the UIIS and USIS10K datasets." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.647, + 0.905, + 0.706 + ], + "angle": 0, + "content": "We calculate the mean Intersection over Union (mIoU) for six categories (i.e., Fish, Reefs, Aquatic Plants, Wrecks, Human Divers, and Robots) to evaluate the accuracy of the segmentation results." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.726, + 0.733, + 0.74 + ], + "angle": 0, + "content": "5.1.1 Implementation Details" + }, + { + "type": "text", + "bbox": [ + 0.511, + 0.75, + 0.907, + 0.901 + ], + "angle": 0, + "content": "The training process consists of two parts: pre-training the mini-transformer and training TIDE. 
In the first stage, the mini-transformer is initialized with the first ten layers of the PixArt-\\(\\alpha\\) [6] pre-trained transformer. Then, the mini-transformer is trained for 60K iterations on the text-to-image task with all parameters. The training data consists of 14K underwater image-caption pairs from Sec. 4.1. In the second stage, the PixArt-\\(\\alpha\\) pre-trained transformer and the mini-transformer are used as initial weights for the text-to-image and text-to-dense annotation models, respectively." + }, + { + "type": "page_number", + "bbox": [ + 0.486, + 0.946, + 0.512, + 0.957 + ], + "angle": 0, + "content": "966" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.092, + 0.09, + 0.482, + 0.104 + ], + "angle": 0, + "content": "Table 3. Quantitative results of underwater semantic segmentation." + }, + { + "type": "table", + "bbox": [ + 0.093, + 0.115, + 0.482, + 0.311 + ], + "angle": 0, + "content": "
| Method | Backbone | Train: Real | Train: SynTIDE | mIoU (UIIS) | mIoU (USIS10K) |
| --- | --- | --- | --- | --- | --- |
| Segformer [33] (NeurIPS 21) | MiT-B4 | ✓ | | 70.2 | 74.6 |
| Segformer [33] (NeurIPS 21) | MiT-B4 | | ✓ | 76.5 | 72.8 |
| Segformer [33] (NeurIPS 21) | MiT-B4 | ✓ | ✓ | 75.4 (+5.2) | 76.1 (+1.5) |
| Mask2former [8] (CVPR 22) | Swin-B | ✓ | | 72.7 | 76.1 |
| Mask2former [8] (CVPR 22) | Swin-B | | ✓ | 74.2 | 72.9 |
| Mask2former [8] (CVPR 22) | Swin-B | ✓ | ✓ | 74.3 (+1.6) | 77.1 (+1.0) |
| ViT-Adapter [7] (ICLR 23) | ViT-Adapter-B | ✓ | | 73.5 | 74.6 |
| ViT-Adapter [7] (ICLR 23) | ViT-Adapter-B | | ✓ | 75.7 | 72.6 |
| ViT-Adapter [7] (ICLR 23) | ViT-Adapter-B | ✓ | ✓ | 75.1 (+1.6) | 76.7 (+2.1) |
" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.338, + 0.483, + 0.412 + ], + "angle": 0, + "content": "Meanwhile, they are fine-tuned using LoRA [15] for 200K iterations with a batch size of 4. The LoRA ranks of the text-to-image/depth/mask branches are 32, 64, and 64, respectively. All experiments are conducted on a server with four NVIDIA 4090 24G GPUs." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.424, + 0.226, + 0.438 + ], + "angle": 0, + "content": "5.2. Main results" + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.448, + 0.357, + 0.464 + ], + "angle": 0, + "content": "5.2.1 Underwater Depth Estimation" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.473, + 0.483, + 0.699 + ], + "angle": 0, + "content": "We train four representative depth estimation models, Adasbin [5], NewCRFs [29], PixelFormer [1], and MIM [34], to present quantitative results, as shown in Tab. 2. Compared to previous underwater data synthesis work Atlantis [41], depth estimation models trained on our SynTIDE dataset show consistent improvements across most quantitative metrics on two evaluated datasets. Especially on MIM [34], a powerful pre-trained model, our method reduces the \\( SI_{\\text{log}} \\) metric from \\( 37.01 \\rightarrow 22.49 \\) (-14.52) and improves \\( \\delta_1 \\) from \\( 0.56 \\rightarrow 0.85 \\) (+0.29) on the D3 and D5 subsets of the Seathru dataset. Meanwhile, on PixelFormer [1], a depth estimation model with outstanding generalization that also performs best for Atlantis, our method achieves better performance across nearly all quantitative metrics on both evaluated underwater depth estimation datasets." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.7, + 0.483, + 0.806 + ], + "angle": 0, + "content": "These results demonstrate that our method achieves highly competitive consistency compared to Atlantis, which uses stronger dense conditions. Furthermore, the data generated by TIDE is closer to natural underwater scenes and shows rich species diversity. Most importantly, TIDE unifies the generation of images and multiple highly consistent dense annotations, capabilities that Atlantis lacks." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.817, + 0.406, + 0.833 + ], + "angle": 0, + "content": "5.3. Underwater Semantic Segmentation" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.84, + 0.483, + 0.902 + ], + "angle": 0, + "content": "In the underwater semantic segmentation task, we validate the effectiveness of our method by pre-training with the SynTIDE dataset in three representative semantic segmentation models, Segformer [33], Mask2former [8], and ViT" + }, + { + "type": "table_caption", + "bbox": [ + 0.536, + 0.09, + 0.882, + 0.104 + ], + "angle": 0, + "content": "Table 4. Ablation on the impact of each TIDE component." + }, + { + "type": "table", + "bbox": [ + 0.517, + 0.115, + 0.905, + 0.18 + ], + "angle": 0, + "content": "
| ILS | TAN | SIlog ↓ | A.Rel ↓ | δ1 ↑ | mIoU |
| --- | --- | --- | --- | --- | --- |
| ✓ | | 24.46 | 1.23 | 0.76 | 36.8 |
| | ✓ | 24.59 | 1.40 | 0.78 | 36.2 |
| ✓ | ✓ | 23.71 | 1.37 | 0.79 | 42.1 |
" + }, + { + "type": "table_caption", + "bbox": [ + 0.513, + 0.195, + 0.905, + 0.237 + ], + "angle": 0, + "content": "Table 5. Ablation on the impact of the component positions. \" {Start, End}\" indicates the starting and ending positions where the operations are applied, with a step size of 3." + }, + { + "type": "table", + "bbox": [ + 0.518, + 0.248, + 0.905, + 0.313 + ], + "angle": 0, + "content": "
| {Start, End} | SIlog ↓ | A.Rel ↓ | δ1 ↑ | mIoU |
| --- | --- | --- | --- | --- |
| {0,12} | 22.06 | 1.46 | 0.86 | 34.7 |
| {15,27} | 44.86 | 1.09 | 0.43 | 8.6 |
| {0,27} | 23.71 | 1.37 | 0.79 | 42.1 |
" + }, + { + "type": "table_caption", + "bbox": [ + 0.513, + 0.327, + 0.905, + 0.356 + ], + "angle": 0, + "content": "Table 6. Ablation on the impact of scaling synthetic data for underwater dense prediction tasks." + }, + { + "type": "table", + "bbox": [ + 0.517, + 0.367, + 0.905, + 0.456 + ], + "angle": 0, + "content": "
| N Samples | SIlog ↓ | A.Rel ↓ | δ1 ↑ | mIoU |
| --- | --- | --- | --- | --- |
| 1 | 23.49 | 1.46 | 0.82 | 55.3 |
| 3 | 22.94 | 1.54 | 0.85 | 60.9 |
| 6 | 22.96 | 1.56 | 0.85 | 63.6 |
| 10 | 22.37 | 1.50 | 0.84 | 64.2 |
" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.482, + 0.905, + 0.512 + ], + "angle": 0, + "content": "Adapter [7]. Following the work [37], we filter the noise in the generated annotations with 1.5 tolerance." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.513, + 0.906, + 0.634 + ], + "angle": 0, + "content": "Pre-training on high-quality synthetic datasets is widely recognized as a way to gain strong prior knowledge. On the UIIS dataset, models trained on the SynTIDE dataset consistently achieve superior results compared to real data. On the other larger USIS10K dataset, by further fine-tuning the model on the UIIS10K train set, we achieve notable improvements. Especially on ViT-Adapter, we enhance the performance of the model from \\(74.6\\% \\rightarrow 76.7\\%\\)." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.635, + 0.906, + 0.741 + ], + "angle": 0, + "content": "These results show that models pre-trained on the SynTIDE dataset exhibit strong prior knowledge in the underwater semantic segmentation task. Additionally, these results demonstrate that the unified image and dense annotation generation model proposed in this paper can generate highly consistent image-dense annotation pairs, making it suitable for various underwater dense prediction tasks." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.753, + 0.68, + 0.767 + ], + "angle": 0, + "content": "5.4. Ablation Studies" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.776, + 0.906, + 0.868 + ], + "angle": 0, + "content": "Unless otherwise specified, we conduct ablation studies by training TIDE for 30K iterations. We synthesize three samples for each caption, as described in Sec. 4.5. We conduct ablation studies on the USIS10K dataset with SegFormer-B4 for semantic segmentation and the D3 and D5 subsets of the Sea-thru dataset with NewCRFs for depth estimation." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.871, + 0.906, + 0.901 + ], + "angle": 0, + "content": "Ablation on the effectiveness of each component. We first evaluate the contribution of each component within" + }, + { + "type": "page_number", + "bbox": [ + 0.486, + 0.946, + 0.511, + 0.957 + ], + "angle": 0, + "content": "967" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.092, + 0.482, + 0.302 + ], + "angle": 0, + "content": "TIDE, as shown in Tab. 4. When utilizing only the Implicit Layout Sharing (ILS) mechanism or Time Adaptive Normalization (TAN), the former outperforms the latter in depth estimation and semantic segmentation. Combining both methods results in a significant improvement \\((36.8\\% \\rightarrow 42.1\\%)\\) in semantic segmentation. These results indicate that ILS and TAN are complementary methods. By combining them for end-to-end training, the consistency between images and dense annotations can be further optimized. Additionally, we further demonstrate the effectiveness of the time-adaptive operation. As shown in the last row of Tab. 4, without time-adaptive parameters, the quality of the generated data will be varying degrees of degradation, especially for the semantic segmentation task." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.303, + 0.482, + 0.453 + ], + "angle": 0, + "content": "Ablation on the position of components. We then study the effect of the position of ILS and TAN, as shown in Tab. 5. We find that applying the ILS and TAN mechanisms in the first half of the transformer of text-to-image yields better performance than using them in the second half. 
This can be attributed to the layout information produced in the first half of the transformer, which is mismatched with the ILS introduced in the latter part. Meanwhile, the results demonstrate that combining both achieves better consistency between the image and dense annotations." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.458, + 0.483, + 0.593 + ], + "angle": 0, + "content": "Ablation on data scaling. Finally, we synthesize \\( N \\) samples for each caption to validate the impact of synthetic data scale on underwater dense prediction tasks, as shown in Tab. 6. It can be observed that as the amount of synthetic data increases, there is no substantial improvement in the underwater depth estimation task. However, for the underwater semantic segmentation task, a significant gain is observed in the early stages as \\( N \\) increases, but the tendency of improvement begins to flatten after \\( N = 6 \\)." + }, + { + "type": "title", + "bbox": [ + 0.09, + 0.603, + 0.481, + 0.619 + ], + "angle": 0, + "content": "5.5. More Challenging Underwater Data Synthesis" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.627, + 0.483, + 0.839 + ], + "angle": 0, + "content": "We validate whether TIDE can generate more challenging data by adding extra text prompts about underwater lighting or water quality (e.g., low light, turbidity) to the original underwater scene caption. As shown in Fig. 5, the results demonstrate that TIDE can generate more challenging underwater images. While annotating these underwater images may be extremely difficult for humans, TIDE can effortlessly produce highly consistent and accurate dense annotations, which hold great practical value for real-world underwater applications. In additional, to demonstrate the diversity of generated underwater data, we generate twelve underwater images from the same text prompt, as shown in Fig. 6. It can be observed that, despite sharing the same text prompt, the generated images exhibit rich diversity." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.848, + 0.21, + 0.862 + ], + "angle": 0, + "content": "5.6. Limitation" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.871, + 0.483, + 0.901 + ], + "angle": 0, + "content": "Despite the promising results achieved, our method still has some limitations. First, our approach cannot gener-" + }, + { + "type": "image", + "bbox": [ + 0.518, + 0.09, + 0.905, + 0.247 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.517, + 0.256, + 0.901, + 0.27 + ], + "angle": 0, + "content": "Figure 5. More challenging underwater data generated by TIDE." + }, + { + "type": "image", + "bbox": [ + 0.517, + 0.284, + 0.905, + 0.387 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.558, + 0.397, + 0.86, + 0.412 + ], + "angle": 0, + "content": "Figure 6. Visualization of generated data diversity." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.438, + 0.907, + 0.558 + ], + "angle": 0, + "content": "ate instance-level semantic masks from the generation perspective. Relying on text prompts to guide the generation of instance-level masks with semantic annotations remains challenging. Additionally, although TIDE can leverage the powerful priors of pre-trained T2I models to generate highly challenging underwater images (e.g., low light, turbidity), there is still room for improvement. These will be key directions for future expansion." 
+ }, + { + "type": "title", + "bbox": [ + 0.513, + 0.572, + 0.634, + 0.587 + ], + "angle": 0, + "content": "6. Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.597, + 0.907, + 0.853 + ], + "angle": 0, + "content": "This paper introduces a unified text-to-image and dense annotation generation model for underwater scenes. The model can generate realistic underwater images and multiple highly consistent dense annotations using only text prompts as input. We validate the effectiveness of our method on underwater depth estimation and semantic segmentation tasks by synthesizing a large-scale underwater dataset containing images along with highly consistent depth and semantic segmentation annotations. In the depth estimation task, extensive experimental results show that our method, using only text as input, achieves highly competitive results compared to previous methods that required stronger dense conditions for underwater depth synthesis. Meanwhile, pre-training with data synthesized using our method further improves model performance in the semantic segmentation task. Our study provides a new perspective for alleviating data scarcity in other fields." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.858, + 0.907, + 0.888 + ], + "angle": 0, + "content": "Acknowledgement. This work was supported by the NSFC (Grant U234120202 and 62225603)." + }, + { + "type": "page_number", + "bbox": [ + 0.486, + 0.946, + 0.511, + 0.957 + ], + "angle": 0, + "content": "968" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.093, + 0.09, + 0.188, + 0.106 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.116, + 0.484, + 0.171 + ], + "angle": 0, + "content": "[1] Ashutosh Agarwal and Chetan Arora. Attention attention everywhere: Monocular depth prediction with skip attention. In Proc. of IEEE Winter Conf. on Applications of Computer Vision, pages 5861-5870, 2023. 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.173, + 0.484, + 0.241 + ], + "angle": 0, + "content": "[2] Jiwoon Ahn and Suha Kwak. Learning pixel-level semantic affinity with image-level supervision for weakly supervised semantic segmentation. In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 4981-4990, 2018. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.245, + 0.484, + 0.299 + ], + "angle": 0, + "content": "[3] Derya Akkaynak and Tali Treibitz. Sea-thru: A method for removing water from underwater images. In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 1682-1691, 2019. 2, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.302, + 0.483, + 0.369 + ], + "angle": 0, + "content": "[4] Dana Berman, Deborah Levy, Shai Avidan, and Tali Treibitz. Underwater single image color restoration using haze-lines and a new quantitative dataset. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(8):2822-2837, 2020. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.373, + 0.484, + 0.428 + ], + "angle": 0, + "content": "[5] Shariq Farooq Bhat, Ibrahim Alhashim, and Peter Wonka. Adabins: Depth estimation using adaptive bins. In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 4009-4018, 2021. 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.431, + 0.483, + 0.498 + ], + "angle": 0, + "content": "[6] Junsong Chen, Jincheng Yu, Chongjian Ge, Lewei Yao, Enze Xie, Yue Wu, Zhongdao Wang, James Kwok, Ping Luo, Huchuan Lu, and Zhenguo Li. 
Pixart-α: Fast training of diffusion transformer for photorealistic text-to-image synthesis, 2024. 2, 3, 4, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.501, + 0.483, + 0.557 + ], + "angle": 0, + "content": "[7] Zhe Chen, Yuchen Duan, Wenhai Wang, Junjun He, Tong Lu, Jifeng Dai, and Yu Qiao. Vision transformer adapter for dense predictions. In Proc. of Intl. Conf. on Learning Representations, 2023. 2, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.559, + 0.484, + 0.627 + ], + "angle": 0, + "content": "[8] Bowen Cheng, Ishan Misra, Alexander G Schwing, Alexander Kirillov, and Rohit Girdhar. Masked-attention mask transformer for universal image segmentation. In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 1290–1299, 2022. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.63, + 0.483, + 0.698 + ], + "angle": 0, + "content": "[9] Paulo LJ Drews, Erickson R Nascimento, Silvia SC Botelho, and Mario Fernando Montenegro Campos. Underwater depth estimation and image restoration based on single images. IEEE computer graphics and applications, 36(2):24-35, 2016. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.701, + 0.483, + 0.743 + ], + "angle": 0, + "content": "[10] Honey Gupta and Kaushik Mitra. Unsupervised single image underwater depth estimation. In Proc. of IEEE Intl. Conf. on Image Processing, pages 624-628. IEEE, 2019. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.746, + 0.483, + 0.786 + ], + "angle": 0, + "content": "[11] Praful Hamberde, Subrahmanyam Murala, and Abhinav Dhall. Uw-gan: Single-image depth estimation and image enhancement for underwater images. 70:1-12, 2021. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.789, + 0.483, + 0.842 + ], + "angle": 0, + "content": "[12] Kaiming He, Jian Sun, and Xiaou Tang. Single image haze removal using dark channel prior. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(12):2341-2353, 2010. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.846, + 0.483, + 0.901 + ], + "angle": 0, + "content": "[13] Ruifei He, Shuyang Sun, Xin Yu, Chuhui Xue, Wenqing Zhang, Philip Torr, Song Bai, and Xiaojuan Qi. Is synthetic data from generative models ready for image recognition? In Proc. of Intl. Conf. on Learning Representations, 2023. 3" + }, + { + "type": "list", + "bbox": [ + 0.094, + 0.116, + 0.484, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.093, + 0.906, + 0.135 + ], + "angle": 0, + "content": "[14] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Proc. of Advances in Neural Information Processing Systems, 33:6840-6851, 2020. 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.136, + 0.906, + 0.19 + ], + "angle": 0, + "content": "[15] Edward J Hu, Phillip Wallis, Zeyuan Allen-Zhu, Yanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. Lora: Low-rank adaptation of large language models. In Proc. of Intl. Conf. on Learning Representations, 2022. 5, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.192, + 0.906, + 0.272 + ], + "angle": 0, + "content": "[16] Md Jahidul Islam, Chelsey Edge, Yuyang Xiao, Peigen Luo, Munteqim Mehtaz, Christopher Morse, Sadman Sakib Enan, and Junaed Sattar. Semantic segmentation of underwater imagery: Dataset and benchmark. In Proc. of the IEEE Int. Conf. on Intelligent Robots and Systems, pages 1769-1776. IEEE, 2020. 
3, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.274, + 0.906, + 0.342 + ], + "angle": 0, + "content": "[17] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment anything. In Porc. of IEEE Intl. Conf. on Computer Vision, pages 4015-4026, 2023. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.344, + 0.906, + 0.412 + ], + "angle": 0, + "content": "[18] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In Proc. of Intl. Conf. on Machine Learning, pages 19730–19742. PMLR, 2023. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.414, + 0.906, + 0.469 + ], + "angle": 0, + "content": "[19] Shijie Lian, Hua Li, Runmin Cong, Suqi Li, Wei Zhang, and Sam Kwong. Watermask: Instance segmentation for underwater imagery. In Porc. of IEEE Intl. Conf. on Computer Vision, pages 1305-1315, 2023. 3, 4, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.47, + 0.906, + 0.538 + ], + "angle": 0, + "content": "[20] Shijie Lian, Ziyi Zhang, Hua Li, Wenjie Li, Laurence Tianruo Yang, Sam Kwong, and Runmin Cong. Diving into underwater: Segment anything model guided underwater salient instance segmentation and a large-scale dataset. In Proc. of Intl. Conf. on Machine Learning, 2024. 2, 3, 4, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.54, + 0.906, + 0.607 + ], + "angle": 0, + "content": "[21] Zhengyao Lv, Yuxiang Wei, Wangmeng Zuo, and KwanYee K Wong. Place: Adaptive layout-semantic fusion for semantic image synthesis. In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 9264-9274, 2024. 3, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.609, + 0.906, + 0.664 + ], + "angle": 0, + "content": "[22] Quang Nguyen, Truong Vu, Anh Tran, and Khoi Nguyen. Dataset diffusion: Diffusion-based synthetic data generation for pixel-level semantic segmentation. In Proc. of Advances in Neural Information Processing Systems, 2024. 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.665, + 0.906, + 0.747 + ], + "angle": 0, + "content": "[23] Alexander Quinn Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob Mcgrew, Ilya Sutskever, and Mark Chen. Glide: Towards photorealistic image generation and editing with text-guided diffusion models. In Proc. of Intl. Conf. on Machine Learning, pages 16784-16804. PMLR, 2022. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.748, + 0.906, + 0.83 + ], + "angle": 0, + "content": "[24] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In Proc. of Intl. Conf. on Machine Learning, pages 8748-8763. PMLR, 2021. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.832, + 0.906, + 0.901 + ], + "angle": 0, + "content": "[25] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of machine learning research, 21(140):1-67, 2020. 
4" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.093, + 0.906, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.487, + 0.946, + 0.511, + 0.957 + ], + "angle": 0, + "content": "969" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.093, + 0.482, + 0.147 + ], + "angle": 0, + "content": "[26] Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125, 1 (2):3, 2022. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.15, + 0.482, + 0.219 + ], + "angle": 0, + "content": "[27] Nikhila Ravi, Valentin Gabeur, Yuan-Ting Hu, Ronghang Hu, Chaitanya Ryali, Tengyu Ma, Haitham Khedr, Roman Radle, Chloe Rolland, Laura Gustafson, et al. Sam 2: Segment anything in images and videos. In Proc. of Intl. Conf. on Learning Representations, 2025. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.221, + 0.482, + 0.289 + ], + "angle": 0, + "content": "[28] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 10684-10695, 2022. 2, 3, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.292, + 0.482, + 0.333 + ], + "angle": 0, + "content": "[29] Aaron Van Den Oord, Oriol Vinyals, et al. Neural discrete representation learning. In Proc. of Advances in Neural Information Processing Systems, 2017. 3, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.335, + 0.482, + 0.431 + ], + "angle": 0, + "content": "[30] Yibo Wang, Ruiyuan Gao, Kai Chen, Kaiqiang Zhou, Yingjie Cai, Lanqing Hong, Zhenguo Li, Lihui Jiang, DitYan Yeung, Qiang Xu, et al. Detdiffusion: Synergizing generative and perceptive models for enhanced data generation and perception. In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 7246-7255, 2024. 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.434, + 0.482, + 0.516 + ], + "angle": 0, + "content": "[31] Weijia Wu, Yuzhong Zhao, Hao Chen, Yuchao Gu, Rui Zhao, Yefei He, Hong Zhou, Mike Zheng Shou, and Chunhua Shen. Datasetdm: Synthesizing data with perception annotations using diffusion models. Proc. of Advances in Neural Information Processing Systems, 36:54683-54695, 2023. 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.519, + 0.482, + 0.587 + ], + "angle": 0, + "content": "[32] Weijia Wu, Yuzhong Zhao, Mike Zheng Shou, Hong Zhou, and Chunhua Shen. Diffumask: Synthesizing images with pixel-level annotations for semantic segmentation using diffusion models. In Porc. of IEEE Intl. Conf. on Computer Vision, pages 1206-1217, 2023. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.59, + 0.482, + 0.643 + ], + "angle": 0, + "content": "[33] Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M Alvarez, and Ping Luo. Segformer: Simple and efficient design for semantic segmentation with transformers. 34:12077-12090, 2021. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.647, + 0.482, + 0.715 + ], + "angle": 0, + "content": "[34] Zhenda Xie, Zigang Geng, Jingcheng Hu, Zheng Zhang, Han Hu, and Yue Cao. Revealing the dark secrets of masked image modeling. In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 14475-14485, 2023. 
6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.718, + 0.482, + 0.772 + ], + "angle": 0, + "content": "[35] Han Xue, Zhiwu Huang, Qianru Sun, Li Song, and Wenjun Zhang. Freestyle layout-to-image synthesis. In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 14256-14266, 2023. 3, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.775, + 0.482, + 0.829 + ], + "angle": 0, + "content": "[36] Tianyu Yan, Zifu Wan, Xinhao Deng, Pingping Zhang, Yang Liu, and Huchuan Lu. Mas-sam: Segment any marine animal with aggregated features. In Proc. of Intl. Joint Conf. on Artificial Intelligence, 2024. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.832, + 0.482, + 0.899 + ], + "angle": 0, + "content": "[37] Lihe Yang, Xiaogang Xu, Bingyi Kang, Yinghuan Shi, and Hengshuang Zhao. Freemask: Synthetic images with dense annotations make stronger segmentation models. Proc. of Advances in Neural Information Processing Systems, 36, 2023. 2, 3, 7" + }, + { + "type": "list", + "bbox": [ + 0.093, + 0.093, + 0.482, + 0.899 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.093, + 0.905, + 0.147 + ], + "angle": 0, + "content": "[38] Lihe Yang, Bingyi Kang, Zilong Huang, Zhen Zhao, Xiaogang Xu, Jiashi Feng, and Hengshuang Zhao. Depth anything v2. In Proc. of Advances in Neural Information Processing Systems, 2024. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.15, + 0.905, + 0.204 + ], + "angle": 0, + "content": "[39] Hanrong Ye, Jason Kuen, Qing Liu, Zhe Lin, Brian Price, and Dan Xu. Seggen: Supercharging segmentation models with text2mask and mask2img synthesis. arXiv preprint arXiv:2311.03355, 2023. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.207, + 0.905, + 0.273 + ], + "angle": 0, + "content": "[40] Weihao Yuan, Xiaodong Gu, Zuozhuo Dai, Siyu Zhu, and Ping Tan. Neural window fully-connected crfs for monocular depth estimation. In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 3916-3925, 2022. 2, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.277, + 0.905, + 0.331 + ], + "angle": 0, + "content": "[41] Fan Zhang, Shaodi You, Yu Li, and Ying Fu. Atlantis: Enabling underwater depth estimation with stable diffusion. In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 11852-11861, 2024. 2, 3, 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.334, + 0.905, + 0.387 + ], + "angle": 0, + "content": "[42] Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models. In Porc. of IEEE Intl. Conf. on Computer Vision, pages 3836-3847, 2023. 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.39, + 0.905, + 0.458 + ], + "angle": 0, + "content": "[43] Pingping Zhang, Tianyu Yan, Yang Liu, and Hutchuan Lu. Fantastic animals and where to find them: Segment any marine animal with dual sam. In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 2578-2587, 2024. 
3" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.093, + 0.905, + 0.458 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.486, + 0.945, + 0.512, + 0.957 + ], + "angle": 0, + "content": "970" + } + ] +] \ No newline at end of file diff --git a/2025/A Unified Image-Dense Annotation Generation Model for Underwater Scenes/c77f859c-1439-4915-ba1a-a9314ac3d9a9_origin.pdf b/2025/A Unified Image-Dense Annotation Generation Model for Underwater Scenes/c77f859c-1439-4915-ba1a-a9314ac3d9a9_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..6fb69c8c9c2c96a9d9fdd47e69cd2990429b61a1 --- /dev/null +++ b/2025/A Unified Image-Dense Annotation Generation Model for Underwater Scenes/c77f859c-1439-4915-ba1a-a9314ac3d9a9_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:df2c2476aa115d31167388ba6af7cb360af926307ee355528f03eab60044baf6 +size 4263362 diff --git a/2025/A Unified Image-Dense Annotation Generation Model for Underwater Scenes/full.md b/2025/A Unified Image-Dense Annotation Generation Model for Underwater Scenes/full.md new file mode 100644 index 0000000000000000000000000000000000000000..3fc65f38a68a8ef5d14b0b3ca7f2faae080d3cd3 --- /dev/null +++ b/2025/A Unified Image-Dense Annotation Generation Model for Underwater Scenes/full.md @@ -0,0 +1,309 @@ +# A Unified Image-Dense Annotation Generation Model for Underwater Scenes + +Hongkai Lin Dingkang Liang Zhenghao Qi Xiang Bai* Huazhong University of Science and Technology {hklin,dkliang,xbai}@hust.edu.cn + +![](images/dbc64f5ab09687b03fea39ce38291518f31f2ee783bc816eaaa6bfb11ba196c5.jpg) +Figure 1. We present TIDE, a unified underwater image-dense annotation generation model. Its core lies in the shared layout information and the natural complementarity between multimodal features. Our model, derived from the text-to-image model and fine-tuned with underwater data, enables the generation of highly consistent underwater image-dense annotations from solely text conditions. + +# Abstract + +Underwater dense prediction, especially depth estimation and semantic segmentation, is crucial for gaining a comprehensive understanding of underwater scenes. Nevertheless, high-quality and large-scale underwater datasets with dense annotations remain scarce because of the complex environment and the exorbitant data collection costs. This paper proposes a unified Text-to-Image and DEnse annotation generation method (TIDE) for underwater scenes. It relies solely on text as input to simultaneously generate realistic underwater images and multiple highly consistent dense annotations. Specifically, we unify the generation of + +text-to-image and text-to-dense annotations within a single model. The Implicit Layout Sharing mechanism (ILS) and cross-modal interaction method called Time Adaptive Normalization (TAN) are introduced to jointly optimize the consistency between image and dense annotations. We synthesize a large-scale underwater dataset using TIDE to validate the effectiveness of our method in underwater dense prediction tasks. The results demonstrate that our method effectively improves the performance of existing underwater dense prediction models and mitigates the scarcity of underwater data with dense annotations. We hope our method can offer new perspectives on alleviating data scarcity issues in other fields. The code is available at https://github.com/HongkLin/TIDE. + +# 1. 
Introduction + +Underwater dense prediction, particularly depth estimation and semantic segmentation, is essential for underwater exploration and environmental monitoring. However, the complex environment and the prohibitive data collection costs result in a scarcity of underwater data with dense annotations. Such conditions severely hinder the advancement of dense prediction technologies in underwater scenes. + +Fortunately, the recent success of the image generative technique [14, 28, 42] provides a breakthrough in addressing the scarcity of underwater scene data. In the field of general object understanding, controllable data synthesis [22, 30, 31, 37] demonstrates its effectiveness in few-shot scenarios. A straightforward solution is to apply them to underwater scenes directly. For instance, Atlantis [41], a pioneering controllable generation method for underwater depth data that takes ControlNet as its core, utilizes terrestrial depth maps as conditions. It effectively mitigates the issue of scarce underwater depth data and achieves consistent performance improvements across multiple underwater depth datasets and models. + +Despite remarkable progress, there are still challenges in Atlantis, as follows: 1) Atlantis, as shown in Fig. 2(a), generates underwater depth data using terrestrial depth maps as conditions due to the lack of underwater depth maps. It is considered a suboptimal approach since it may not align with natural underwater scenes. Better recreating authentic underwater environments is equally essential. 2) It generates data with only a single type of dense annotations, which is insufficient for understanding complex underwater scenes. Thus, a natural question arises: How can we simultaneously generate highly consistent, one-to-many, and vivid underwater images and dense annotation pairs? + +In this paper, we explore the possibility of simultaneously generating highly consistent, realistic underwater scene images and multiple types of dense annotations using only text conditions. Our approach, which we refer to as TIDE, is illustrated in Fig. 2(b), presents a unified Text-to-Image and DEnse annotation generation method. TIDE is an end-to-end training and inference model that integrates denoising models in parallel for both text-to-image generation and text-to-dense annotation generation. + +To align the images and multiple type dense annotations generated by parallel denoising models, we propose the Implicit Layout Sharing (ILS) mechanism. Specifically, the cross-attention map as an implicit layout is the key to controlling the image layout in the text-to-image model [6, 28], inspiring us to share the implicit layout for aligning images and dense annotations. ILS effortlessly replaces the cross-attention map in the text-to-dense annotation model with that from the text-to-image model, effectively improving the consistency between the image and dense annotations. Furthermore, considering the intrinsic + +![](images/17ad3b11a489e5895d1163ec2e425ab8073bebc39c5a14b448aac4484eadf16e.jpg) +Figure 2. The comparison between Atlantis [41] and our method. Unlike Atlantis, which requires text and depth map conditions, our method only needs text as the input condition to generate image-dense annotations (e.g., depth maps and semantic masks). + +complementarity between features of different modalities, we introduce a cross-modal interaction method called Time Adaptive Normalization (TAN), a normalization layer that modulates the activations using different modal features. 
The consistency of the image and dense annotations can be further jointly optimized through cross-modal feature interaction, both among the different dense annotation generation branches and between the image and dense annotation generation branches.

To verify the effectiveness of our method, we use TIDE to generate a large-scale dataset of underwater images with dense annotations named SynTIDE. Extensive experiments demonstrate the effectiveness of SynTIDE for underwater dense prediction tasks. In the underwater depth estimation task, SynTIDE presents consistent improvements across various fine-tuned models. For example, when adopting the representative NewCRFs [40] as the fine-tuning model, our approach achieves significant gains over previous work, particularly in the $SI_{log}$ and $\delta_1$ metrics, with improvements of 14.73 and 0.36 on the D3 and D5 subsets of the Sea-thru [3] dataset, respectively. In underwater semantic segmentation, pre-training with SynTIDE yields consistent improvements across different models. For instance, when using ViT-Adapter [7] as the training model, pre-training with the SynTIDE dataset leads to an improvement of 2.1% mIoU on the USIS10K [20] dataset.

TIDE demonstrates powerful data generation capabilities for underwater scenes. Using only easily accessible text prompts, TIDE can generate highly consistent and realistic underwater images and multiple types of dense annotations. It holds potential as a mainstream data synthesis method for underwater scenes and offers a promising direction for alleviating data scarcity in other fields. The main contributions of this work are as follows: 1) We propose a novel data synthesis method, TIDE, which uses text as the sole condition to generate images and their corresponding multi-type dense annotations simultaneously. To our knowledge, TIDE is the first method capable of simultaneously synthesizing both images and multiple dense annotations from text. 2) To align the images and dense annotations, we introduce the Implicit Layout Sharing mechanism. The text-to-image and text-to-dense annotation models share the same layout information, ensuring proper alignment between the image and dense annotations. Meanwhile, the consistency between image and dense annotations can be further optimized through the cross-modal interaction method called Time Adaptive Normalization.

# 2. Related Work

# 2.1. Underwater Dense Prediction

Dense prediction tasks in underwater scenes are crucial for comprehensively understanding underwater environments. The publication of SUIM [16] provides a fundamental dataset and benchmark for the exploration of underwater semantic segmentation. To fill the gap in underwater instance segmentation, WaterMask [19] publishes the UIIS dataset, along with a model designed to cater to the unique characteristics of underwater images, improving the accuracy of underwater instance segmentation. Recently, the rise of general foundational segmentation models [17, 27] drives further development in the field of underwater segmentation [20, 36, 43].

Due to the lack of underwater depth estimation datasets, most underwater depth estimation methods focus on traditional techniques, unsupervised, or self-supervised approaches. Traditional methods [9] mainly rely on statistical priors, such as the dark channel prior [12], to estimate underwater depth. Gupta et al. [10] model the relationship between underwater and above-water hazy appearances for depth estimation.
UW-GAN [11] and Atlantis [41] improve the performance of underwater depth estimation by synthesizing training datasets through generative models.

While these methods make notable contributions to underwater dense prediction tasks, large-scale, high-quality underwater datasets with only segmentation or depth annotations remain insufficient for achieving comprehensive underwater scene understanding.

# 2.2. Controllable Data Synthesis

Thanks to the success of diffusion models [14] and the availability of large-scale, high-quality text-image training data, text-to-image models [6, 23, 26, 28] and controllable image generation models [21, 35, 42] achieve unprecedented success in image quality, diversity, and consistency.

He et al. [13] are the first to explore and demonstrate the effectiveness of state-of-the-art text-to-image generation models for image recognition. This makes it possible to achieve diverse data collection and accurate annotation at a lower cost. Wu et al. and Nguyen et al. [22, 31] explore the ability of pre-trained diffusion models to enhance real data in few-shot settings for segmentation tasks. Diffumask [32] ingeniously combines text-to-image models with AffinityNet [2], achieving open-vocabulary segmentation data synthesis. Freemask [37] demonstrates that synthetic data can further enhance the performance of semantic segmentation models under fully supervised settings by incorporating freestyle, a controllable image generation method using semantic masks as input conditions. Seggen [39] designs a multi-stage semantic segmentation data synthesis method, text2mask and mask2image, which synthesizes semantic segmentation data with high semantic consistency using only text as the condition. Detdiffusion [30] synthesizes object detection data by incorporating object categories and spatial coordinates into the text.

Unlike the aforementioned single-task data synthesis methods, we propose a novel end-to-end underwater data synthesis approach that simultaneously generates semantic masks and depth maps, relying solely on text conditions.

# 3. Preliminaries

Diffusion Models (DMs) [14] emerge as leading text-to-image (T2I) generation models, recognized for their ability to produce realistic images. DMs can reconstruct the data distribution by learning the reverse of a diffusion process. Denoting $z_t$ as the random variable at the $t$-th timestep, the diffusion process is modeled as a Markov Chain:

$$
z_t \sim \mathcal{N}\left(\sqrt{\alpha_t}\, z_{t-1}, (1 - \alpha_t) I\right), \tag{1}
$$

where $\alpha_t$ is the fixed coefficient predefined in the noise schedule, and $I$ refers to the identity matrix. A prominent variant, the Latent Diffusion Model (LDM) [28], innovatively shifts the diffusion process of standard DMs into a latent space. This transition notably decreases computational costs while preserving the generative quality and flexibility of the original model. The resulting efficiency gain primarily arises from the reduced dimensionality of the latent space, which allows for lower training costs without compromising the model's generative capabilities.

Stable Diffusion, an exemplary implementation of LDM, comprises an AutoEncoder [29] and a latent diffusion model. The AutoEncoder $\varepsilon$ is designed to learn a latent space that is perceptually equivalent to the image space.
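Before turning to the LDM objective below, the Markov step in Eq. (1) can be made concrete with a short sketch; the linear beta schedule is the common choice from [14] and is an assumption here, not a detail stated in this paper:

```python
# A minimal sketch of the forward diffusion step in Eq. (1); the linear
# schedule and all names are illustrative assumptions.
import torch

T = 1000
alphas = 1.0 - torch.linspace(1e-4, 0.02, T)  # alpha_t for each timestep

def diffuse_step(z_prev: torch.Tensor, t: int) -> torch.Tensor:
    # z_t ~ N(sqrt(alpha_t) * z_{t-1}, (1 - alpha_t) * I)
    return alphas[t].sqrt() * z_prev + (1.0 - alphas[t]).sqrt() * torch.randn_like(z_prev)
```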
Meanwhile, the LDM $\epsilon_{\theta}$ is parameterized as a denoising model with cross-attention and trained on a large-scale dataset of text-image pairs via:

$$
\mathcal{L}_{LDM} := \mathbb{E}_{\varepsilon(x), y, \epsilon \sim \mathcal{N}(0, 1), t}\left[\|\epsilon - \epsilon_{\theta}(z_t, t, \tau_{\theta}(y))\|_2^2\right], \tag{2}
$$

where $\epsilon$ is the target noise, and $\tau_{\theta}$ and $y$ are the pre-trained text encoder (e.g., CLIP [24], T5 [25]) and text prompts, respectively. This equation represents the mean-squared error (MSE) between the target noise $\epsilon$ and the noise predicted by the model, encapsulating the core learning mechanism of the latent diffusion model.

![](images/01bdc75ed160a5f95660aa52c6cb1677e6053d024173d47df59b4017a0df17b8.jpg)
(a) Training phase of our TIDE

![](images/f7f5c503325057f5f49b3b485661f5143744fa49061911315aa4ec6d5c742250.jpg)
(b) Inference phase

Figure 3. Training and Inference. The denoising model of TIDE mainly consists of three transformers, each dedicated to text-to-image, text-to-depth, and text-to-mask. The proposed Implicit Layout Sharing mechanism (ILS) and Time Adaptive Normalization (TAN) are used to align the generated image, depth map, and semantic mask.

# 4. Our Method

An overview of our method, a unified text-to-image and dense annotation generation model (TIDE), is shown in Fig. 3. TIDE is built upon a pre-trained transformer [6] for text-to-image generation, along with two fine-tuned mini-transformers (details provided in Sec. 5.1.1) dedicated to text-to-depth and text-to-mask generation. Simply parallelizing multiple text-to-image processes does not ensure consistency between the images and dense annotations. To enable consistency between them, we propose Implicit Layout Sharing (ILS) and the cross-modal interaction method named Time Adaptive Normalization (TAN). After training, TIDE simultaneously generates images and multiple dense annotations with high consistency using only text as input.

# 4.1. Data Preparation

We aim to generate realistic underwater images with corresponding highly consistent depth maps and semantic masks. However, existing high-quality, dense annotation data primarily consists of mask annotations. Therefore, we construct training data around these datasets with semantic masks, as shown in Tab. 1. On this basis, we obtain the corresponding depth map and caption for each image using existing foundation models. Specifically, for each underwater image, the corresponding depth map is obtained by the pre-trained Depth Anything [38]. Meanwhile, the caption of each image is obtained from the pre-trained BLIP2 [18]. We construct approximately 14K quadruples {Image, Depth, Mask, Caption} for TIDE training.
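Sketched below is one way to build such quadruples with the Hugging Face ports of Depth Anything and BLIP-2; the pipeline tasks exist in `transformers`, but the specific checkpoint ids and the helper itself are illustrative assumptions, not the authors' released preprocessing script:

```python
# A hedged sketch of the {Image, Depth, Mask, Caption} construction in
# Sec. 4.1; checkpoint ids are assumptions for illustration.
from transformers import pipeline

depth_estimator = pipeline("depth-estimation", model="LiheYoung/depth-anything-large-hf")
captioner = pipeline("image-to-text", model="Salesforce/blip2-opt-2.7b")

def build_quadruple(image, mask):
    depth = depth_estimator(image)["depth"]          # pseudo depth map (PIL image)
    caption = captioner(image)[0]["generated_text"]  # pseudo caption
    return {"Image": image, "Depth": depth, "Mask": mask, "Caption": caption}
```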
Table 1. Segmentation Datasets and Data Splits. \* denotes the training set of TIDE, while the others are used for evaluation.

| Datasets | Seg Task | Train | Val | Test |
| --- | --- | --- | --- | --- |
| SUIM [16] | Semantic | 1,488\* | 110\* | / |
| UIIS [19] | Instance | 3,937\* | 691 | / |
| USIS10K [20] | Instance | 7,442\* | 1,594 | 1,596\* |
+ +# 4.2. Implicit Layout Sharing Mechanism + +In advanced text-to-image models [6, 28], the cross-attention map plays a crucial role in controlling the image layout. Existing methods [21, 35] demonstrate that adjusting the cross-attention map during the text-to-image process can effectively control the layout of the generated image. Therefore, the cross-attention map can be considered as the implicit layout information. Intuitively, sharing the implicit layout between text-to-image and text-to-dense annotations may establish a strong correlation between the generated image and dense annotations. To this end, we propose an Implicit Layout Sharing mechanism to align the generated image and dense annotations. Specifically, cross-attention, as a crucial process for generating implicit layouts in text-to-image/mask/depth model, can first be formulated as: + +$$ +\operatorname {A t t n} _ {i} \left(\boldsymbol {Q} _ {i}, \boldsymbol {K} _ {i}, \boldsymbol {V} _ {i}\right) = \operatorname {s o f t m a x} \left(\boldsymbol {Q} _ {i} \boldsymbol {K} _ {i} ^ {\top} / \sqrt {c}\right) \boldsymbol {V} _ {i}, +$$ + +$$ +\operatorname {A t t n} _ {d} \left(\boldsymbol {Q} _ {d}, \boldsymbol {K} _ {d}, \boldsymbol {V} _ {d}\right) = \operatorname {s o f t m a x} \left(\boldsymbol {Q} _ {d} \boldsymbol {K} _ {d} ^ {\top} / \sqrt {c}\right) \boldsymbol {V} _ {d}, \tag {3} +$$ + +$$ +\operatorname {A t t n} _ {m} \left(\boldsymbol {Q} _ {m}, \boldsymbol {K} _ {m}, \boldsymbol {V} _ {m}\right) = \operatorname {s o f t m a x} \left(\boldsymbol {Q} _ {m} \boldsymbol {K} _ {m} ^ {\top} / \sqrt {c}\right) \boldsymbol {V} _ {m}, +$$ + +where $c$ refers to the feature channel. $Q_{i} / Q_{d} / Q_{m}$ , $K_{i} / K_{d} / K_{m}$ , and $V_{i} / V_{d} / V_{m}$ represent the query, key, and value within the text-to-image/depth/mask cross-attention module, respectively. Since text-to-image models are pre-trained on high-quality and large-scale image-caption datasets, they exhibit strong controllability and generalization. Therefore, sharing the implicit layouts from the text-to-image model is the optimal choice to ensure the quality of the generated data. As shown in Fig. 3(a), the implicit layouts from the block in the text-to-image model are shared with the cross-attention in the block of text-to-dense annotation models. The implicit layouts refer to: + +$$ +\boldsymbol {M} _ {i} = \operatorname {s o f t m a x} \left(\boldsymbol {Q} _ {i} \boldsymbol {K} _ {i} ^ {\top} / \sqrt {c}\right). \tag {4} +$$ + +By sharing the implicit layouts from the text-to-image model, the cross-attention of text-to-depth $(\mathsf{Attn}_d)$ and text-to-mask $(\mathsf{Attn}_m)$ can be simplified as follows: + +$$ +\operatorname {A t t n} _ {d} \left(\boldsymbol {Q} _ {d}, \boldsymbol {K} _ {d}, \boldsymbol {V} _ {d}\right) = \boldsymbol {M} _ {i} \times \boldsymbol {V} _ {d}, +$$ + +$$ +\operatorname {A t t n} _ {m} \left(\boldsymbol {Q} _ {m}, \boldsymbol {K} _ {m}, \boldsymbol {V} _ {m}\right) = \boldsymbol {M} _ {i} \times \boldsymbol {V} _ {m}, \tag {5} +$$ + +where $\times$ refers to matrix multiplication. Implicit Layout Sharing is an elegant and efficient method that unifies image and dense annotation generation, improving consistency between them. It also reduces the overall generation cost, as there is no need to compute separate cross-attention maps for the text-to-dense annotation models. + +# 4.3. 
# 4.3. Time Adaptive Normalization

Considering the complementary nature of different modality features, we propose a cross-modal feature interaction method called Time Adaptive Normalization (TAN), as shown in Fig. 4.

Specifically, TAN is utilized to adjust the image layout by leveraging the cross-modal features $\boldsymbol{x}_f$ from different branches. The cross-modal features are mapped to two normalization parameters, $\gamma$ and $\beta$, by MLPs, which are used to control the variation in the image layout. In this context, the features from text-to-depth and text-to-mask serve as cross-modal input features for each other. For instance, in the TAN corresponding to the $i$-th text-to-depth block, the outputs from the $i$-th text-to-depth block and the $i$-th text-to-mask block serve as the input feature $\boldsymbol{x}$ and the cross-modal input feature $\boldsymbol{x}_f$, respectively. A slight difference is that for text-to-image, the features from both text-to-depth and text-to-mask serve as the cross-modal features. In the TAN cross-modal interaction process of text-to-image, two sets of $\gamma$ and $\beta$ are obtained, provided by the different modality features from text-to-depth and text-to-mask. These two sets of parameters are averaged to obtain $\bar{\gamma}$ and $\bar{\beta}$. Then, the time embedding $\boldsymbol{x}_t$ is introduced to adaptively control the influence of the cross-modal features. The normalization can be formalized as follows:

$$
\boldsymbol{x}' = \alpha \cdot \gamma \boldsymbol{x} + \alpha \cdot \beta, \quad \boldsymbol{x}^* = \boldsymbol{x}' + \boldsymbol{x}, \tag{6}
$$

![](images/3699708a205ef92db41f8131b1aaa0f5d3fe93d139c57d08349f9880048dc5ce.jpg)
Sigmoid $\oplus$ Element-wise add $\otimes$ Element-wise product

Figure 4. In TAN, the cross-modal features are first mapped to the modulation parameters $\gamma$ and $\beta$. Then, a time-adaptive confidence $\alpha$ is introduced to control the degree of normalization.

where $\boldsymbol{x}$, $\boldsymbol{x}'$, and $\boldsymbol{x}^*$ are the input feature, normalized feature, and output feature, respectively. $\alpha$ is the time-adaptive coefficient obtained from $\boldsymbol{x}_t$ through a linear transformation and a sigmoid. TAN is applied not only from text-to-dense annotations to text-to-image but also between text-to-depth and text-to-mask to improve the consistency among dense annotations. Implicit Layout Sharing and Time Adaptive Normalization are two complementary methods that construct a joint interaction process, optimizing the consistency between the generated image and dense annotations during training.
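A minimal sketch of Eq. (6) as a PyTorch module follows; the MLP depth and the exact tensor shapes are assumptions, not the released code.

```python
# Time Adaptive Normalization (Eq. 6), sketched; shapes/MLPs are assumed.
import torch
import torch.nn as nn

class TimeAdaptiveNorm(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.to_gamma = nn.Sequential(nn.Linear(dim, dim), nn.SiLU(), nn.Linear(dim, dim))
        self.to_beta = nn.Sequential(nn.Linear(dim, dim), nn.SiLU(), nn.Linear(dim, dim))
        self.to_alpha = nn.Linear(dim, dim)  # time embedding -> confidence

    def forward(self, x, x_f, x_t):
        # x: (B, N, dim) input feature; x_f: (B, N, dim) cross-modal feature;
        # x_t: (B, dim) time embedding.
        gamma, beta = self.to_gamma(x_f), self.to_beta(x_f)
        alpha = torch.sigmoid(self.to_alpha(x_t)).unsqueeze(1)  # time-adaptive coefficient
        return alpha * gamma * x + alpha * beta + x             # x* = x' + x
```

For the text-to-image branch, $\gamma$ and $\beta$ would be computed from both the depth and mask features and averaged into $\bar{\gamma}$, $\bar{\beta}$ before modulation, as described above.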
# 4.4. Learning Objective

During training, the learnable parameters include only the proposed TAN module and the LoRA [15] used to fine-tune the pre-trained transformer. The overall loss $\mathcal{L}$ is composed equally of the denoising losses from the three branches: text-to-image, text-to-depth, and text-to-mask:

$$
\mathcal{L} = \mathcal{L}_{mse}^{I} + \mathcal{L}_{mse}^{D} + \mathcal{L}_{mse}^{M}. \tag{7}
$$

# 4.5. Data Synthesis

Thanks to the proposed ILS and TAN, TIDE can generate realistic and highly consistent underwater images and dense annotations after training, using only text conditions, as shown in Fig. 3(b).

We filter out redundant parts from the 14K captions obtained in Sec. 4.1, resulting in approximately 5K non-redundant captions as text conditions. For each caption, we generate ten samples to construct a large-scale synthetic dataset named SynTIDE. Some representative examples are shown in Fig. 1. The SynTIDE dataset is utilized to validate the effectiveness of our method in dense prediction tasks for underwater scenes.

# 4.6. Analysis

Insights of framework design. In the text-to-image model, the cross-attention map contains the layout information of the image. Thus, the cross-attention map can be viewed as an implicit layout. If two text-to-image models share the same implicit layout and undergo proper fine-tuning, the generated images are likely to exhibit strong layout similarity. Therefore, we share the same implicit layout across multiple text-to-image models. Meanwhile, we use LoRA to fine-tune the multiple text-to-image models [6].
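As a concrete reference point, the LoRA-based fine-tuning can be sketched with the `peft` library as below; loading PixArt-α via `diffusers` and the chosen `target_modules` are assumptions for illustration, not the released training code (the ranks mirror Sec. 5.1.1):

```python
# A hedged sketch of per-branch LoRA fine-tuning; the checkpoint id and
# module names are assumptions, not the authors' script.
from diffusers import Transformer2DModel
from peft import LoraConfig, get_peft_model

def lora_branch(rank: int):
    # The depth/mask branches would instead start from the fine-tuned
    # mini-transformer described in Sec. 5.1.1.
    base = Transformer2DModel.from_pretrained(
        "PixArt-alpha/PixArt-XL-2-512x512", subfolder="transformer")
    cfg = LoraConfig(r=rank, lora_alpha=rank,
                     target_modules=["to_q", "to_k", "to_v", "to_out.0"])
    return get_peft_model(base, cfg)  # only LoRA parameters stay trainable

image_branch = lora_branch(rank=32)  # LoRA ranks per Sec. 5.1.1
depth_branch = lora_branch(rank=64)
mask_branch = lora_branch(rank=64)
```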
Table 2. Quantitative comparisons on real underwater depth estimation datasets.

*Quantitative comparisons on the D3 and D5 subsets of the Sea-thru [3] dataset.*

| Method | Fine-tuning Dataset | Reference | $SI_{log}$↓ | A.Rel↓ | $\log_{10}$↓ | RMSE↓ | S.Rel↓ | $RMSE_{log}$↓ | $\delta_1$↑ | $\delta_2$↑ | $\delta_3$↑ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| AdaBins [5] | Atlantis [41] | CVPR 24 | 38.24 | 1.33 | 0.12 | 1.41 | 12.89 | 0.39 | 0.50 | 0.81 | 0.92 |
| AdaBins [5] | SynTIDE (Ours) | - | 26.92 (-11.32) | 1.31 (-0.02) | 0.08 (-0.04) | 1.12 (-0.29) | 15.74 (+2.85) | 0.27 (-0.12) | 0.71 (+0.21) | 0.95 (+0.14) | 0.99 (+0.07) |
| NewCRFs [40] | Atlantis [41] | CVPR 24 | 37.10 | 1.68 | 0.12 | 1.44 | 14.76 | 0.38 | 0.48 | 0.84 | 0.95 |
| NewCRFs [40] | SynTIDE (Ours) | - | 22.37 (-14.73) | 1.50 (-0.18) | 0.06 (-0.06) | 1.24 (-0.20) | 22.50 (+7.74) | 0.23 (-0.15) | 0.84 (+0.36) | 0.97 (+0.13) | 0.99 (+0.04) |
| PixelFormer [1] | Atlantis [41] | CVPR 24 | 23.70 | 1.34 | 0.06 | 1.17 | 17.29 | 0.24 | 0.81 | 0.97 | 0.99 |
| PixelFormer [1] | SynTIDE (Ours) | - | 21.39 (-2.31) | 1.46 (+0.12) | 0.05 (-0.01) | 1.15 (-0.02) | 21.79 (+4.50) | 0.22 (-0.02) | 0.88 (+0.07) | 0.98 (+0.01) | 0.99 (+0.00) |
| MIM [34] | Atlantis [41] | CVPR 24 | 37.01 | 1.37 | 0.11 | 1.51 | 14.42 | 0.38 | 0.56 | 0.84 | 0.94 |
| MIM [34] | SynTIDE (Ours) | - | 22.49 (-14.52) | 1.27 (-0.10) | 0.06 (-0.05) | 1.01 (-0.50) | 16.46 (+2.04) | 0.23 (-0.15) | 0.85 (+0.29) | 0.97 (+0.13) | 0.99 (+0.05) |

*Quantitative comparisons on the SQUID [4] dataset.*

| Method | Fine-tuning Dataset | Reference | $SI_{log}$↓ | A.Rel↓ | $\log_{10}$↓ | RMSE↓ | S.Rel↓ | $RMSE_{log}$↓ | $\delta_1$↑ | $\delta_2$↑ | $\delta_3$↑ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| AdaBins [5] | Atlantis [41] | CVPR 24 | 29.56 | 0.28 | 0.11 | 2.24 | 0.69 | 0.31 | 0.56 | 0.86 | 0.94 |
| AdaBins [5] | SynTIDE (Ours) | - | 25.63 (-3.93) | 0.23 (-0.05) | 0.09 (-0.02) | 2.69 (+0.45) | 0.92 (+0.23) | 0.27 (-0.04) | 0.67 (+0.11) | 0.90 (+0.04) | 0.97 (+0.03) |
| NewCRFs [40] | Atlantis [41] | CVPR 24 | 25.19 | 0.23 | 0.09 | 2.56 | 0.83 | 0.26 | 0.68 | 0.90 | 0.96 |
| NewCRFs [40] | SynTIDE (Ours) | - | 25.55 (+0.36) | 0.23 (-0.00) | 0.09 (+0.00) | 3.02 (+0.46) | 1.07 (+0.24) | 0.27 (+0.01) | 0.68 (+0.00) | 0.91 (+0.01) | 0.97 (+0.01) |
| PixelFormer [1] | Atlantis [41] | CVPR 24 | 21.34 | 0.18 | 0.07 | 1.86 | 0.43 | 0.22 | 0.76 | 0.94 | 0.98 |
| PixelFormer [1] | SynTIDE (Ours) | - | 19.08 (-2.26) | 0.16 (-0.02) | 0.07 (-0.00) | 1.75 (-0.11) | 0.36 (-0.07) | 0.19 (-0.03) | 0.79 (+0.03) | 0.97 (+0.03) | 0.99 (+0.01) |
| MIM [34] | Atlantis [41] | CVPR 24 | 27.45 | 0.26 | 0.10 | 2.14 | 0.68 | 0.28 | 0.61 | 0.88 | 0.95 |
| MIM [34] | SynTIDE (Ours) | - | 26.98 (-0.47) | 0.25 (-0.01) | 0.09 (-0.01) | 3.04 (+0.90) | 1.11 (+0.43) | 0.28 (-0.00) | 0.65 (+0.04) | 0.89 (+0.01) | 0.96 (+0.01) |
Zero-shot generation ability. Thanks to our training strategy, which fine-tunes the pre-trained text-to-image model using only LoRA, the generalization ability of the text-to-image model is retained to some extent. This enables TIDE to generate underwater images during inference that are not seen during training. Furthermore, due to the proposed Implicit Layout Sharing and Time Adaptive Normalization mechanisms, the generated depth maps align well with these images. Therefore, TIDE has the ability to generate zero-shot underwater image-depth map pairs.

# 5. Experiments

# 5.1. Dataset and Evaluation Metrics

Underwater Depth Estimation. Following the work [41], the D3 and D5 subsets of Sea-thru [3] and the SQUID dataset [4] are used to evaluate the depth estimation capability in underwater scenes. These datasets include underwater images with depth maps obtained via the Structure-from-Motion (SfM) algorithm.

The quantitative evaluation metrics include root mean square error (RMSE) and its logarithmic variant $(RMSE_{log})$, absolute error in log-scale $(\log_{10})$, absolute relative error (A.Rel), squared relative error (S.Rel), the percentage of inlier pixels $(\delta_i)$ with thresholds of $1.25^i$, and scale-invariant error in log-scale $(SI_{log})$: $100\sqrt{Var(\epsilon_{log})}$.

Underwater Semantic Segmentation. The UIIS [19] and USIS10K [20] datasets are chosen to validate the effectiveness of our method in underwater semantic segmentation tasks. Instance masks belonging to the same semantic category are merged to construct semantic segmentation annotations for the UIIS and USIS10K datasets.

We calculate the mean Intersection over Union (mIoU) for six categories (i.e., Fish, Reefs, Aquatic Plants, Wrecks, Human Divers, and Robots) to evaluate the accuracy of the segmentation results.

# 5.1.1 Implementation Details

The training process consists of two parts: pre-training the mini-transformer and training TIDE. In the first stage, the mini-transformer is initialized with the first ten layers of the PixArt-$\alpha$ [6] pre-trained transformer. Then, the mini-transformer is trained for 60K iterations on the text-to-image task with all parameters. The training data consists of 14K underwater image-caption pairs from Sec. 4.1. In the second stage, the PixArt-$\alpha$ pre-trained transformer and the mini-transformer are used as initial weights for the text-to-image and text-to-dense annotation models, respectively.
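For reference, the depth metrics defined in Sec. 5.1 can be computed as in the following generic sketch; the handling of valid pixels is an assumption and may differ from the evaluation code actually used:

```python
# A generic sketch of the depth metrics in Sec. 5.1; `pred` and `gt` are
# 1-D arrays of positive depths over valid pixels.
import numpy as np

def depth_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    eps_log = np.log(pred) - np.log(gt)
    ratio = np.maximum(pred / gt, gt / pred)
    return {
        "SI_log": 100.0 * np.sqrt(eps_log.var()),   # 100 * sqrt(Var(eps_log))
        "A.Rel": np.mean(np.abs(pred - gt) / gt),
        "S.Rel": np.mean((pred - gt) ** 2 / gt),
        "log10": np.mean(np.abs(np.log10(pred / gt))),
        "RMSE": np.sqrt(np.mean((pred - gt) ** 2)),
        "RMSE_log": np.sqrt(np.mean(eps_log ** 2)),
        **{f"delta_{i}": np.mean(ratio < 1.25 ** i) for i in (1, 2, 3)},
    }
```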
Table 3. Quantitative results of underwater semantic segmentation (mIoU, %). "Real" and "SynTIDE" indicate the training data; rows with both correspond to SynTIDE pre-training followed by fine-tuning on real data.

| Method | Backbone | Real | SynTIDE | UIIS | USIS10K |
|---|---|---|---|---|---|
| Segformer [33] (NeurIPS 21) | MiT-B4 | ✓ | | 70.2 | 74.6 |
| Segformer [33] (NeurIPS 21) | MiT-B4 | | ✓ | 76.5 | 72.8 |
| Segformer [33] (NeurIPS 21) | MiT-B4 | ✓ | ✓ | 75.4 (+5.2) | 76.1 (+1.5) |
| Mask2former [8] (CVPR 22) | Swin-B | ✓ | | 72.7 | 76.1 |
| Mask2former [8] (CVPR 22) | Swin-B | | ✓ | 74.2 | 72.9 |
| Mask2former [8] (CVPR 22) | Swin-B | ✓ | ✓ | 74.3 (+1.6) | 77.1 (+1.0) |
| ViT-Adapter [7] (ICLR 23) | ViT-Adapter-B | ✓ | | 73.5 | 74.6 |
| ViT-Adapter [7] (ICLR 23) | ViT-Adapter-B | | ✓ | 75.7 | 72.6 |
| ViT-Adapter [7] (ICLR 23) | ViT-Adapter-B | ✓ | ✓ | 75.1 (+1.6) | 76.7 (+2.1) |
Meanwhile, they are fine-tuned using LoRA [15] for 200K iterations with a batch size of 4. The LoRA ranks of the text-to-image/depth/mask branches are 32, 64, and 64, respectively. All experiments are conducted on a server with four NVIDIA RTX 4090 (24 GB) GPUs.

# 5.2. Main Results

# 5.2.1 Underwater Depth Estimation

We train four representative depth estimation models, AdaBins [5], NewCRFs [40], PixelFormer [1], and MIM [34], and report quantitative results in Tab. 2. Compared to the previous underwater data synthesis work Atlantis [41], depth estimation models trained on our SynTIDE dataset show consistent improvements across most quantitative metrics on the two evaluated datasets. In particular, on MIM [34], a powerful pre-trained model, our method reduces the $SI_{log}$ metric from 37.01 to 22.49 (-14.52) and improves $\delta_1$ from 0.56 to 0.85 (+0.29) on the D3 and D5 subsets of the Sea-thru dataset. Meanwhile, on PixelFormer [1], a depth estimation model with outstanding generalization that also performs best for Atlantis, our method achieves better performance across nearly all quantitative metrics on both evaluated underwater depth estimation datasets.

These results demonstrate that our method achieves highly competitive consistency compared to Atlantis, which uses stronger dense conditions. Furthermore, the data generated by TIDE is closer to natural underwater scenes and shows rich species diversity. Most importantly, TIDE unifies the generation of images and multiple highly consistent dense annotations, a capability that Atlantis lacks.

# 5.3. Underwater Semantic Segmentation

In the underwater semantic segmentation task, we validate the effectiveness of our method by pre-training with the SynTIDE dataset on three representative semantic segmentation models: Segformer [33], Mask2former [8], and ViT-Adapter [7].
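For reference, the mIoU reported in Tabs. 3-6 follows the standard confusion-matrix definition over the six categories listed in Sec. 5.1. Below is a minimal NumPy sketch; the function and variable names are ours, not from the paper's code.

```python
import numpy as np

def mean_iou(preds, gts, num_classes=6):
    """mIoU from an accumulated confusion matrix.

    preds, gts: iterables of integer label maps sharing one class index space.
    """
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    for pred, gt in zip(preds, gts):
        # Row = ground-truth class, column = predicted class.
        idx = num_classes * gt.ravel() + pred.ravel()
        conf += np.bincount(idx, minlength=num_classes ** 2).reshape(
            num_classes, num_classes)
    inter = np.diag(conf)
    union = conf.sum(axis=0) + conf.sum(axis=1) - inter
    iou = inter / np.maximum(union, 1)  # guard against empty classes
    return float(iou.mean())
```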
Table 4. Ablation on the impact of each TIDE component.

| ILS | TAN | $SI_{log}$ ↓ | A.Rel ↓ | $\delta_1$ ↑ | mIoU |
|---|---|---|---|---|---|
| ✓ | | 24.46 | 1.23 | 0.76 | 36.8 |
| | ✓ | 24.59 | 1.40 | 0.78 | 36.2 |
| ✓ | ✓ | 23.71 | 1.37 | 0.79 | 42.1 |
Table 5. Ablation on the impact of the component positions. "{Start, End}" indicates the starting and ending positions where the operations are applied, with a step size of 3.
| {Start, End} | $SI_{log}$ ↓ | A.Rel ↓ | $\delta_1$ ↑ | mIoU |
|---|---|---|---|---|
| {0, 12} | 22.06 | 1.46 | 0.86 | 34.7 |
| {15, 27} | 44.86 | 1.09 | 0.43 | 8.6 |
| {0, 27} | 23.71 | 1.37 | 0.79 | 42.1 |
Table 6. Ablation on the impact of scaling synthetic data for underwater dense prediction tasks.
| N Sample | $SI_{log}$ ↓ | A.Rel ↓ | $\delta_1$ ↑ | mIoU |
|---|---|---|---|---|
| 1 | 23.49 | 1.46 | 0.82 | 55.3 |
| 3 | 22.94 | 1.54 | 0.85 | 60.9 |
| 6 | 22.96 | 1.56 | 0.85 | 63.6 |
| 10 | 22.37 | 1.50 | 0.84 | 64.2 |
Following [37], we filter the noise in the generated annotations with a tolerance of 1.5.

Pre-training on high-quality synthetic datasets is widely recognized as a way to gain strong prior knowledge. On the UIIS dataset, models trained on the SynTIDE dataset consistently achieve superior results compared to models trained on real data. On the larger USIS10K dataset, further fine-tuning the model on the USIS10K train set yields notable improvements. In particular, on ViT-Adapter, we improve the performance of the model from 74.6% to 76.7%.

These results show that models pre-trained on the SynTIDE dataset acquire strong prior knowledge for the underwater semantic segmentation task. They also demonstrate that the unified image and dense annotation generation model proposed in this paper can generate highly consistent image-dense annotation pairs, making it suitable for various underwater dense prediction tasks.

# 5.4. Ablation Studies

Unless otherwise specified, we conduct ablation studies by training TIDE for 30K iterations and synthesizing three samples for each caption, as described in Sec. 4.5. Ablations are run on the USIS10K dataset with SegFormer-B4 for semantic segmentation and on the D3 and D5 subsets of the Sea-thru dataset with NewCRFs for depth estimation.

Ablation on the effectiveness of each component. We first evaluate the contribution of each component within TIDE, as shown in Tab. 4. When utilizing only the Implicit Layout Sharing (ILS) mechanism or only Time Adaptive Normalization (TAN), the former outperforms the latter in both depth estimation and semantic segmentation. Combining both yields a significant improvement (36.8% → 42.1%) in semantic segmentation. These results indicate that ILS and TAN are complementary: by combining them for end-to-end training, the consistency between images and dense annotations can be further optimized. Additionally, we further verify the effectiveness of the time-adaptive operation. As shown in the last row of Tab. 4, without time-adaptive parameters, the quality of the generated data degrades to varying degrees, especially for the semantic segmentation task.

Ablation on the position of components. We then study the effect of the position of ILS and TAN, as shown in Tab. 5. Applying the ILS and TAN mechanisms in the first half of the text-to-image transformer yields better performance than applying them in the second half. This can be attributed to the layout information being produced in the first half of the transformer, so an implicit layout introduced only in the latter part is mismatched. Meanwhile, the results demonstrate that applying both mechanisms across the whole transformer achieves the best consistency between the image and dense annotations.

Ablation on data scaling. Finally, we synthesize $N$ samples for each caption to validate the impact of synthetic data scale on underwater dense prediction tasks, as shown in Tab. 6. As the amount of synthetic data increases, there is no substantial improvement in the underwater depth estimation task. For the underwater semantic segmentation task, however, a significant gain is observed in the early stages as $N$ increases, although the improvement begins to flatten after $N = 6$.
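To make the two ablated components concrete, the following simplified PyTorch sketch shows how ILS (reusing the image branch's cross-attention map, i.e., the shared implicit layout of Sec. 4.2) and TAN (normalization modulated by cross-modal features and the timestep) could plug into a transformer block. Module names, tensor shapes, and the exact conditioning scheme are our assumptions for illustration, not the released implementation.

```python
import torch
from torch import nn

def cross_attention(q, k, v, shared_layout=None):
    # ILS: if the implicit layout M_i = softmax(Q K^T / sqrt(c)) computed by
    # the image branch is provided, reuse it instead of computing it locally.
    if shared_layout is None:
        c = q.shape[-1]
        shared_layout = torch.softmax(q @ k.transpose(-2, -1) / c ** 0.5, dim=-1)
    return shared_layout @ v, shared_layout

class TimeAdaptiveNorm(nn.Module):
    # TAN: LayerNorm whose scale/shift are predicted from another branch's
    # features together with a timestep embedding (hence "time adaptive").
    def __init__(self, dim):
        super().__init__()
        self.norm = nn.LayerNorm(dim, elementwise_affine=False)
        self.to_scale_shift = nn.Linear(2 * dim, 2 * dim)

    def forward(self, x, cond, t_emb):
        # x, cond: (B, N, dim) token features; t_emb: (B, 1, dim).
        h = torch.cat([cond, t_emb.expand_as(cond)], dim=-1)
        scale, shift = self.to_scale_shift(h).chunk(2, dim=-1)
        return self.norm(x) * (1 + scale) + shift

# Per block, the image branch computes the layout once and the depth/mask
# branches reuse it, e.g.:
#   out_img, layout = cross_attention(q_i, k_i, v_i)
#   out_depth, _ = cross_attention(None, None, v_d, shared_layout=layout)
#   out_mask, _ = cross_attention(None, None, v_m, shared_layout=layout)
```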
# 5.5. More Challenging Underwater Data Synthesis

We validate whether TIDE can generate more challenging data by adding extra text prompts about underwater lighting or water quality (e.g., low light, turbidity) to the original underwater scene caption. As shown in Fig. 5, the results demonstrate that TIDE can generate more challenging underwater images. While annotating such underwater images would be extremely difficult for humans, TIDE effortlessly produces highly consistent and accurate dense annotations, which hold great practical value for real-world underwater applications. In addition, to demonstrate the diversity of the generated underwater data, we generate twelve underwater images from the same text prompt, as shown in Fig. 6. Despite sharing the same text prompt, the generated images exhibit rich diversity.

![](images/f365ef2c20cdc9bd13840b7a564002ebdccc654c8edfb99a69b2dcfceb71171a.jpg)
Figure 5. More challenging underwater data generated by TIDE.

![](images/5d98c398f2a8ca0fdebbecb449d4a876f85b15b8d1979a6867739d416b881fa2.jpg)
Figure 6. Visualization of generated data diversity.

# 5.6. Limitation

Despite the promising results achieved, our method still has some limitations. First, our approach cannot generate instance-level semantic masks from the generation perspective; relying on text prompts to guide the generation of instance-level masks with semantic annotations remains challenging. Additionally, although TIDE can leverage the powerful priors of pre-trained T2I models to generate highly challenging underwater images (e.g., low light, turbidity), there is still room for improvement. These will be key directions for future work.

# 6. Conclusion

This paper introduces a unified text-to-image and dense annotation generation model for underwater scenes. The model can generate realistic underwater images and multiple highly consistent dense annotations using only text prompts as input. We validate the effectiveness of our method on underwater depth estimation and semantic segmentation tasks by synthesizing a large-scale underwater dataset containing images along with highly consistent depth and semantic segmentation annotations. In the depth estimation task, extensive experimental results show that our method, using only text as input, achieves highly competitive results compared to previous methods that required stronger dense conditions for underwater depth synthesis. Meanwhile, pre-training with data synthesized by our method further improves model performance in the semantic segmentation task. Our study provides a new perspective for alleviating data scarcity in other fields.

Acknowledgement. This work was supported by the NSFC (Grant U234120202 and 62225603).

# References

[1] Ashutosh Agarwal and Chetan Arora. Attention attention everywhere: Monocular depth prediction with skip attention. In Proc. of IEEE Winter Conf. on Applications of Computer Vision, pages 5861-5870, 2023.
[2] Jiwoon Ahn and Suha Kwak. Learning pixel-level semantic affinity with image-level supervision for weakly supervised semantic segmentation. In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 4981-4990, 2018.
[3] Derya Akkaynak and Tali Treibitz. Sea-thru: A method for removing water from underwater images. In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 1682-1691, 2019.
[4] Dana Berman, Deborah Levy, Shai Avidan, and Tali Treibitz.
Underwater single image color restoration using haze-lines and a new quantitative dataset. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(8):2822-2837, 2020.
[5] Shariq Farooq Bhat, Ibrahim Alhashim, and Peter Wonka. Adabins: Depth estimation using adaptive bins. In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 4009-4018, 2021.
[6] Junsong Chen, Jincheng Yu, Chongjian Ge, Lewei Yao, Enze Xie, Yue Wu, Zhongdao Wang, James Kwok, Ping Luo, Huchuan Lu, and Zhenguo Li. Pixart-α: Fast training of diffusion transformer for photorealistic text-to-image synthesis. In Proc. of Intl. Conf. on Learning Representations, 2024.
[7] Zhe Chen, Yuchen Duan, Wenhai Wang, Junjun He, Tong Lu, Jifeng Dai, and Yu Qiao. Vision transformer adapter for dense predictions. In Proc. of Intl. Conf. on Learning Representations, 2023.
[8] Bowen Cheng, Ishan Misra, Alexander G Schwing, Alexander Kirillov, and Rohit Girdhar. Masked-attention mask transformer for universal image segmentation. In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 1290-1299, 2022.
[9] Paulo LJ Drews, Erickson R Nascimento, Silvia SC Botelho, and Mario Fernando Montenegro Campos. Underwater depth estimation and image restoration based on single images. IEEE Computer Graphics and Applications, 36(2):24-35, 2016.
[10] Honey Gupta and Kaushik Mitra. Unsupervised single image underwater depth estimation. In Proc. of IEEE Intl. Conf. on Image Processing, pages 624-628. IEEE, 2019.
[11] Praful Hambarde, Subrahmanyam Murala, and Abhinav Dhall. UW-GAN: Single-image depth estimation and image enhancement for underwater images. IEEE Transactions on Instrumentation and Measurement, 70:1-12, 2021.
[12] Kaiming He, Jian Sun, and Xiaoou Tang. Single image haze removal using dark channel prior. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(12):2341-2353, 2010.
[13] Ruifei He, Shuyang Sun, Xin Yu, Chuhui Xue, Wenqing Zhang, Philip Torr, Song Bai, and Xiaojuan Qi. Is synthetic data from generative models ready for image recognition? In Proc. of Intl. Conf. on Learning Representations, 2023.
[14] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Proc. of Advances in Neural Information Processing Systems, 33:6840-6851, 2020.
[15] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. In Proc. of Intl. Conf. on Learning Representations, 2022.
[16] Md Jahidul Islam, Chelsey Edge, Yuyang Xiao, Peigen Luo, Muntaqim Mehtaz, Christopher Morse, Sadman Sakib Enan, and Junaed Sattar. Semantic segmentation of underwater imagery: Dataset and benchmark. In Proc. of the IEEE Int. Conf. on Intelligent Robots and Systems, pages 1769-1776. IEEE, 2020.
[17] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment anything. In Proc. of IEEE Intl. Conf. on Computer Vision, pages 4015-4026, 2023.
[18] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In Proc. of Intl. Conf. on Machine Learning, pages 19730-19742. PMLR, 2023.
[19] Shijie Lian, Hua Li, Runmin Cong, Suqi Li, Wei Zhang, and Sam Kwong. Watermask: Instance segmentation for underwater imagery. In Proc. of IEEE Intl. Conf. on Computer Vision, pages 1305-1315, 2023.
[20] Shijie Lian, Ziyi Zhang, Hua Li, Wenjie Li, Laurence Tianruo Yang, Sam Kwong, and Runmin Cong. Diving into underwater: Segment anything model guided underwater salient instance segmentation and a large-scale dataset. In Proc. of Intl. Conf. on Machine Learning, 2024.
[21] Zhengyao Lv, Yuxiang Wei, Wangmeng Zuo, and Kwan-Yee K Wong. Place: Adaptive layout-semantic fusion for semantic image synthesis. In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 9264-9274, 2024.
[22] Quang Nguyen, Truong Vu, Anh Tran, and Khoi Nguyen. Dataset diffusion: Diffusion-based synthetic data generation for pixel-level semantic segmentation. In Proc. of Advances in Neural Information Processing Systems, 2024.
[23] Alexander Quinn Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, and Mark Chen. Glide: Towards photorealistic image generation and editing with text-guided diffusion models. In Proc. of Intl. Conf. on Machine Learning, pages 16784-16804. PMLR, 2022.
[24] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In Proc. of Intl. Conf. on Machine Learning, pages 8748-8763. PMLR, 2021.
[25] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67, 2020.
[26] Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125, 2022.
[27] Nikhila Ravi, Valentin Gabeur, Yuan-Ting Hu, Ronghang Hu, Chaitanya Ryali, Tengyu Ma, Haitham Khedr, Roman Radle, Chloe Rolland, Laura Gustafson, et al. Sam 2: Segment anything in images and videos. In Proc. of Intl. Conf. on Learning Representations, 2025.
[28] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 10684-10695, 2022.
[29] Aaron Van Den Oord, Oriol Vinyals, et al. Neural discrete representation learning. In Proc. of Advances in Neural Information Processing Systems, 2017.
[30] Yibo Wang, Ruiyuan Gao, Kai Chen, Kaiqiang Zhou, Yingjie Cai, Lanqing Hong, Zhenguo Li, Lihui Jiang, Dit-Yan Yeung, Qiang Xu, et al. Detdiffusion: Synergizing generative and perceptive models for enhanced data generation and perception. In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 7246-7255, 2024.
[31] Weijia Wu, Yuzhong Zhao, Hao Chen, Yuchao Gu, Rui Zhao, Yefei He, Hong Zhou, Mike Zheng Shou, and Chunhua Shen. Datasetdm: Synthesizing data with perception annotations using diffusion models. Proc. of Advances in Neural Information Processing Systems, 36:54683-54695, 2023.
[32] Weijia Wu, Yuzhong Zhao, Mike Zheng Shou, Hong Zhou, and Chunhua Shen. Diffumask: Synthesizing images with pixel-level annotations for semantic segmentation using diffusion models. In Proc. of IEEE Intl. Conf. on Computer Vision, pages 1206-1217, 2023.
[33] Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M Alvarez, and Ping Luo.
Segformer: Simple and efficient design for semantic segmentation with transformers. In Proc. of Advances in Neural Information Processing Systems, 34:12077-12090, 2021.
[34] Zhenda Xie, Zigang Geng, Jingcheng Hu, Zheng Zhang, Han Hu, and Yue Cao. Revealing the dark secrets of masked image modeling. In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 14475-14485, 2023.
[35] Han Xue, Zhiwu Huang, Qianru Sun, Li Song, and Wenjun Zhang. Freestyle layout-to-image synthesis. In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 14256-14266, 2023.
[36] Tianyu Yan, Zifu Wan, Xinhao Deng, Pingping Zhang, Yang Liu, and Huchuan Lu. Mas-sam: Segment any marine animal with aggregated features. In Proc. of Intl. Joint Conf. on Artificial Intelligence, 2024.
[37] Lihe Yang, Xiaogang Xu, Bingyi Kang, Yinghuan Shi, and Hengshuang Zhao. Freemask: Synthetic images with dense annotations make stronger segmentation models. Proc. of Advances in Neural Information Processing Systems, 36, 2023.
[38] Lihe Yang, Bingyi Kang, Zilong Huang, Zhen Zhao, Xiaogang Xu, Jiashi Feng, and Hengshuang Zhao. Depth anything v2. In Proc. of Advances in Neural Information Processing Systems, 2024.
[39] Hanrong Ye, Jason Kuen, Qing Liu, Zhe Lin, Brian Price, and Dan Xu. Seggen: Supercharging segmentation models with text2mask and mask2img synthesis. arXiv preprint arXiv:2311.03355, 2023.
[40] Weihao Yuan, Xiaodong Gu, Zuozhuo Dai, Siyu Zhu, and Ping Tan. Neural window fully-connected crfs for monocular depth estimation. In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 3916-3925, 2022.
[41] Fan Zhang, Shaodi You, Yu Li, and Ying Fu. Atlantis: Enabling underwater depth estimation with stable diffusion. In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 11852-11861, 2024.
[42] Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models. In Proc. of IEEE Intl. Conf. on Computer Vision, pages 3836-3847, 2023.
[43] Pingping Zhang, Tianyu Yan, Yang Liu, and Huchuan Lu. Fantastic animals and where to find them: Segment any marine animal with dual sam. In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 2578-2587, 2024.
3 \ No newline at end of file diff --git a/2025/A Unified Image-Dense Annotation Generation Model for Underwater Scenes/images.zip b/2025/A Unified Image-Dense Annotation Generation Model for Underwater Scenes/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..e41c73ae91eb320a320a26d8b62839add4d33d27 --- /dev/null +++ b/2025/A Unified Image-Dense Annotation Generation Model for Underwater Scenes/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6a5e4d142d3ba5ff254a2ba5d1b8a4f198758bedb4add58630a67aa5cefe630c +size 698250 diff --git a/2025/A Unified Image-Dense Annotation Generation Model for Underwater Scenes/layout.json b/2025/A Unified Image-Dense Annotation Generation Model for Underwater Scenes/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..db54613f61a6b7e61ad9b808301e58a68f8f764e --- /dev/null +++ b/2025/A Unified Image-Dense Annotation Generation Model for Underwater Scenes/layout.json @@ -0,0 +1,7550 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 66, + 103, + 544, + 120 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 103, + 544, + 120 + ], + "spans": [ + { + "bbox": [ + 66, + 103, + 544, + 120 + ], + "type": "text", + "content": "A Unified Image-Dense Annotation Generation Model for Underwater Scenes" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 153, + 143, + 455, + 186 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 153, + 143, + 455, + 186 + ], + "spans": [ + { + "bbox": [ + 153, + 143, + 455, + 186 + ], + "type": "text", + "content": "Hongkai Lin Dingkang Liang Zhenghao Qi Xiang Bai* Huazhong University of Science and Technology {hklin,dkliang,xbai}@hust.edu.cn" + } + ] + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 69, + 193, + 541, + 484 + ], + "blocks": [ + { + "bbox": [ + 69, + 193, + 541, + 484 + ], + "lines": [ + { + "bbox": [ + 69, + 193, + 541, + 484 + ], + "spans": [ + { + "bbox": [ + 69, + 193, + 541, + 484 + ], + "type": "image", + "image_path": "dbc64f5ab09687b03fea39ce38291518f31f2ee783bc816eaaa6bfb11ba196c5.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 54, + 487, + 555, + 522 + ], + "lines": [ + { + "bbox": [ + 54, + 487, + 555, + 522 + ], + "spans": [ + { + "bbox": [ + 54, + 487, + 555, + 522 + ], + "type": "text", + "content": "Figure 1. We present TIDE, a unified underwater image-dense annotation generation model. Its core lies in the shared layout information and the natural complementarity between multimodal features. Our model, derived from the text-to-image model and fine-tuned with underwater data, enables the generation of highly consistent underwater image-dense annotations from solely text conditions." 
+ } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "bbox": [ + 151, + 531, + 201, + 544 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 151, + 531, + 201, + 544 + ], + "spans": [ + { + "bbox": [ + 151, + 531, + 201, + 544 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 54, + 557, + 297, + 689 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 557, + 297, + 689 + ], + "spans": [ + { + "bbox": [ + 54, + 557, + 297, + 689 + ], + "type": "text", + "content": "Underwater dense prediction, especially depth estimation and semantic segmentation, is crucial for gaining a comprehensive understanding of underwater scenes. Nevertheless, high-quality and large-scale underwater datasets with dense annotations remain scarce because of the complex environment and the exorbitant data collection costs. This paper proposes a unified Text-to-Image and DEnse annotation generation method (TIDE) for underwater scenes. It relies solely on text as input to simultaneously generate realistic underwater images and multiple highly consistent dense annotations. Specifically, we unify the generation of" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 533, + 556, + 700 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 533, + 556, + 700 + ], + "spans": [ + { + "bbox": [ + 313, + 533, + 556, + 700 + ], + "type": "text", + "content": "text-to-image and text-to-dense annotations within a single model. The Implicit Layout Sharing mechanism (ILS) and cross-modal interaction method called Time Adaptive Normalization (TAN) are introduced to jointly optimize the consistency between image and dense annotations. We synthesize a large-scale underwater dataset using TIDE to validate the effectiveness of our method in underwater dense prediction tasks. The results demonstrate that our method effectively improves the performance of existing underwater dense prediction models and mitigates the scarcity of underwater data with dense annotations. We hope our method can offer new perspectives on alleviating data scarcity issues in other fields. The code is available at https://github.com/HongkLin/TIDE." + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "spans": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "text", + "content": "CVF" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "spans": [ + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "type": "text", + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 70, + 703, + 151, + 713 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 70, + 703, + 151, + 713 + ], + "spans": [ + { + "bbox": [ + 70, + 703, + 151, + 713 + ], + "type": "text", + "content": "* Corresponding author." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 297, + 749, + 312, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 297, + 749, + 312, + 757 + ], + "spans": [ + { + "bbox": [ + 297, + 749, + 312, + 757 + ], + "type": "text", + "content": "961" + } + ] + } + ], + "index": 10 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 71, + 136, + 83 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 71, + 136, + 83 + ], + "spans": [ + { + "bbox": [ + 56, + 71, + 136, + 83 + ], + "type": "text", + "content": "1. Introduction" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 91, + 296, + 175 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 91, + 296, + 175 + ], + "spans": [ + { + "bbox": [ + 55, + 91, + 296, + 175 + ], + "type": "text", + "content": "Underwater dense prediction, particularly depth estimation and semantic segmentation, is essential for underwater exploration and environmental monitoring. However, the complex environment and the prohibitive data collection costs result in a scarcity of underwater data with dense annotations. Such conditions severely hinder the advancement of dense prediction technologies in underwater scenes." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 54, + 175, + 296, + 330 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 175, + 296, + 330 + ], + "spans": [ + { + "bbox": [ + 54, + 175, + 296, + 330 + ], + "type": "text", + "content": "Fortunately, the recent success of the image generative technique [14, 28, 42] provides a breakthrough in addressing the scarcity of underwater scene data. In the field of general object understanding, controllable data synthesis [22, 30, 31, 37] demonstrates its effectiveness in few-shot scenarios. A straightforward solution is to apply them to underwater scenes directly. For instance, Atlantis [41], a pioneering controllable generation method for underwater depth data that takes ControlNet as its core, utilizes terrestrial depth maps as conditions. It effectively mitigates the issue of scarce underwater depth data and achieves consistent performance improvements across multiple underwater depth datasets and models." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 331, + 296, + 474 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 331, + 296, + 474 + ], + "spans": [ + { + "bbox": [ + 55, + 331, + 296, + 474 + ], + "type": "text", + "content": "Despite remarkable progress, there are still challenges in Atlantis, as follows: 1) Atlantis, as shown in Fig. 2(a), generates underwater depth data using terrestrial depth maps as conditions due to the lack of underwater depth maps. It is considered a suboptimal approach since it may not align with natural underwater scenes. Better recreating authentic underwater environments is equally essential. 2) It generates data with only a single type of dense annotations, which is insufficient for understanding complex underwater scenes. Thus, a natural question arises: How can we simultaneously generate highly consistent, one-to-many, and vivid underwater images and dense annotation pairs?" 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 474, + 296, + 581 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 474, + 296, + 581 + ], + "spans": [ + { + "bbox": [ + 55, + 474, + 296, + 581 + ], + "type": "text", + "content": "In this paper, we explore the possibility of simultaneously generating highly consistent, realistic underwater scene images and multiple types of dense annotations using only text conditions. Our approach, which we refer to as TIDE, is illustrated in Fig. 2(b), presents a unified Text-to-Image and DEnse annotation generation method. TIDE is an end-to-end training and inference model that integrates denoising models in parallel for both text-to-image generation and text-to-dense annotation generation." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 582, + 296, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 582, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 582, + 296, + 713 + ], + "type": "text", + "content": "To align the images and multiple type dense annotations generated by parallel denoising models, we propose the Implicit Layout Sharing (ILS) mechanism. Specifically, the cross-attention map as an implicit layout is the key to controlling the image layout in the text-to-image model [6, 28], inspiring us to share the implicit layout for aligning images and dense annotations. ILS effortlessly replaces the cross-attention map in the text-to-dense annotation model with that from the text-to-image model, effectively improving the consistency between the image and dense annotations. Furthermore, considering the intrinsic" + } + ] + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 316, + 70, + 556, + 254 + ], + "blocks": [ + { + "bbox": [ + 316, + 70, + 556, + 254 + ], + "lines": [ + { + "bbox": [ + 316, + 70, + 556, + 254 + ], + "spans": [ + { + "bbox": [ + 316, + 70, + 556, + 254 + ], + "type": "image", + "image_path": "17ad3b11a489e5895d1163ec2e425ab8073bebc39c5a14b448aac4484eadf16e.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 313, + 261, + 555, + 306 + ], + "lines": [ + { + "bbox": [ + 313, + 261, + 555, + 306 + ], + "spans": [ + { + "bbox": [ + 313, + 261, + 555, + 306 + ], + "type": "text", + "content": "Figure 2. The comparison between Atlantis [41] and our method. Unlike Atlantis, which requires text and depth map conditions, our method only needs text as the input condition to generate image-dense annotations (e.g., depth maps and semantic masks)." + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 6 + }, + { + "bbox": [ + 313, + 328, + 555, + 424 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 328, + 555, + 424 + ], + "spans": [ + { + "bbox": [ + 313, + 328, + 555, + 424 + ], + "type": "text", + "content": "complementarity between features of different modalities, we introduce a cross-modal interaction method called Time Adaptive Normalization (TAN), a normalization layer that modulates the activations using different modal features. The consistency of the image and dense annotations can further be jointly optimized through cross-modal feature interaction among different dense annotation generation and between image and dense annotation generation." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 425, + 556, + 628 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 425, + 556, + 628 + ], + "spans": [ + { + "bbox": [ + 313, + 425, + 556, + 628 + ], + "type": "text", + "content": "To verify the effectiveness of our method, we use TIDE to generate a large-scale dataset of underwater images with dense annotations named SynTIDE. Extensive experiments demonstrate the effectiveness of SynTIDE for underwater dense prediction tasks. In the underwater depth estimation task, SynTIDE presents consistent improvements in various fine-tuning models. For example, when adopting representative NewCRFs [40] as the fine-tuning model, our approach achieves significance gains over previous work, particularly in the " + }, + { + "bbox": [ + 313, + 425, + 556, + 628 + ], + "type": "inline_equation", + "content": "SI_{log}" + }, + { + "bbox": [ + 313, + 425, + 556, + 628 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 425, + 556, + 628 + ], + "type": "inline_equation", + "content": "\\delta_1" + }, + { + "bbox": [ + 313, + 425, + 556, + 628 + ], + "type": "text", + "content": " metrics, with improvements of 14.73 and 36% on the D3 and D5 subsets of Sea-thru [3] dataset, respectively. In underwater semantic segmentation, pre-training with SynTIDE yields consistent improvements across different models. For instance, when using ViT-Adapter [7] as the training model, pre-training with the SynTIDE dataset leads to improvements of 2.1% mIoU on the USIS10K [20] dataset." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 629, + 556, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 629, + 556, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 629, + 556, + 713 + ], + "type": "text", + "content": "TIDE demonstrates powerful data generation capabilities for underwater scenes. Using only easily accessible text prompts, TIDE can generate highly consistent and realistic underwater images and multiple types of dense annotations. It holds potential as a mainstream data synthesis method for underwater scenes and offers a promising direction for alleviating data scarcity in other fields. The main contribu" + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 297, + 749, + 312, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 297, + 749, + 312, + 757 + ], + "spans": [ + { + "bbox": [ + 297, + 749, + 312, + 757 + ], + "type": "text", + "content": "962" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 296, + 240 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 296, + 240 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 296, + 240 + ], + "type": "text", + "content": "tions of this work are as follows: 1) We propose a novel data synthesis method, TIDE, which uses text as the sole condition to generate images and their corresponding multitype dense annotations simultaneously. To our knowledge, TIDE is the first method capable of simultaneously synthesizing both images and multiple dense annotations from text. 2) To align the images and dense annotations, we introduce the Implicit Layout Sharing mechanism. The text-to-image and text-to-dense annotation models share the same layout information, ensuring proper alignment between the image and dense annotations. 
Meanwhile, the consistency between image and dense annotations can be further optimized through the cross-modal interaction method called Time Adaptive Normalization." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 251, + 135, + 263 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 251, + 135, + 263 + ], + "spans": [ + { + "bbox": [ + 55, + 251, + 135, + 263 + ], + "type": "text", + "content": "2. Relate Work" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 271, + 217, + 283 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 271, + 217, + 283 + ], + "spans": [ + { + "bbox": [ + 55, + 271, + 217, + 283 + ], + "type": "text", + "content": "2.1. Underwater Dense Prediction" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 290, + 295, + 422 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 290, + 295, + 422 + ], + "spans": [ + { + "bbox": [ + 55, + 290, + 295, + 422 + ], + "type": "text", + "content": "Dense prediction tasks in underwater scenes are crucial for comprehensively understanding underwater scenes. The publication of SUIM [16] provides a fundamental dataset and benchmark for the exploration of underwater semantic segmentation. To fill the gap in underwater instance segmentation, WaterMask [19] publishes the UIIS dataset, and a model is designed to cater to the unique characteristics of underwater images, improving the accuracy of underwater instance segmentation. Recently, the rise of general foundational segmentation models [17, 27] drives further development in the field of underwater segmentation [20, 36, 43]." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 422, + 295, + 541 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 422, + 295, + 541 + ], + "spans": [ + { + "bbox": [ + 55, + 422, + 295, + 541 + ], + "type": "text", + "content": "Due to the lack of underwater depth estimation datasets, most underwater depth estimation methods focus on traditional techniques, unsupervised, or self-supervised approaches. Traditional methods [9] mainly rely on statistical priors, such as the dark channel prior [12], to estimate underwater depth. Gupta et al. [10] model the relationship between underwater and above-water hazy appearances to depth estimation. UW-GAN [11] and Atlantis [41] improve the performance of underwater depth estimation by synthesizing training datasets through generative models." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 542, + 295, + 602 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 542, + 295, + 602 + ], + "spans": [ + { + "bbox": [ + 55, + 542, + 295, + 602 + ], + "type": "text", + "content": "While these methods make notable contributions to underwater dense prediction tasks, the large-scale and high-quality dataset in underwater scenes with only segmentation or depth annotations remains insufficient for achieving comprehensive underwater scene understanding." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 610, + 209, + 623 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 610, + 209, + 623 + ], + "spans": [ + { + "bbox": [ + 55, + 610, + 209, + 623 + ], + "type": "text", + "content": "2.2. 
Controllable Data Synthesis" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 629, + 295, + 689 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 629, + 295, + 689 + ], + "spans": [ + { + "bbox": [ + 55, + 629, + 295, + 689 + ], + "type": "text", + "content": "Thanks to the success of diffusion models [14] and the availability of large-scale, high-quality text-image training data, text-to-image models [6, 23, 26, 28] and controllable image generation models [21, 35, 42] achieve unprecedented success in image quality, diversity, and consistency." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 689, + 295, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 689, + 295, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 689, + 295, + 714 + ], + "type": "text", + "content": "He et al. [13] are the first to explore and demonstrate the effectiveness of state-of-the-art text-to-image genera" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 72, + 553, + 287 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 72, + 553, + 287 + ], + "spans": [ + { + "bbox": [ + 313, + 72, + 553, + 287 + ], + "type": "text", + "content": "tion models for image recognition. This makes it possible to achieve diverse data collection and accurate annotation at a lower cost. Wu et al. and Nguyen et al. [22, 31] explore the ability of pre-trained diffusion models to enhance real data in few-shot settings for segmentation tasks. Diffumask [32] ingeniously combines text-to-image models with AffinityNet [2], achieving open-vocabulary segmentation data synthesis. Freemask [37] demonstrates that synthetic data can further enhance the performance of semantic segmentation models under fully supervised settings by incorporating freestyle, a controllable image generation method using semantic masks as input conditions. Seggen [39] designs a multi-stage semantic segmentation data synthesis method, text2mask and mask2image, which achieves high semantic consistency semantic segmentation data only using text as the condition. Detdiffusion [30] synthesizes object detection data by incorporating object categories and spatial coordinates into the text." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 288, + 553, + 335 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 288, + 553, + 335 + ], + "spans": [ + { + "bbox": [ + 313, + 288, + 553, + 335 + ], + "type": "text", + "content": "Unlike the aforementioned single-task data synthesis methods, we propose a novel end-to-end underwater data synthesis approach that simultaneously generates semantic masks and depth maps, relying solely on text conditions." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 314, + 346, + 399, + 358 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 346, + 399, + 358 + ], + "spans": [ + { + "bbox": [ + 314, + 346, + 399, + 358 + ], + "type": "text", + "content": "3. Preliminaries" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 365, + 554, + 437 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 365, + 554, + 437 + ], + "spans": [ + { + "bbox": [ + 313, + 365, + 554, + 437 + ], + "type": "text", + "content": "Diffusion Models (DMs) [14] emerge as leading text-to-image (T2I) generation models, recognized for their ability to produce realistic images. DMs can reconstruct data distribution by learning the reverse process of a diffusion process. 
Denoting " + }, + { + "bbox": [ + 313, + 365, + 554, + 437 + ], + "type": "inline_equation", + "content": "z_{t}" + }, + { + "bbox": [ + 313, + 365, + 554, + 437 + ], + "type": "text", + "content": " as the random variable at " + }, + { + "bbox": [ + 313, + 365, + 554, + 437 + ], + "type": "inline_equation", + "content": "t" + }, + { + "bbox": [ + 313, + 365, + 554, + 437 + ], + "type": "text", + "content": "-th timestep, the diffusion process is modeled as a Markov Chain:" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 372, + 449, + 553, + 462 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 372, + 449, + 553, + 462 + ], + "spans": [ + { + "bbox": [ + 372, + 449, + 553, + 462 + ], + "type": "interline_equation", + "content": "z _ {t} \\sim \\mathcal {N} (\\sqrt {\\alpha_ {t}} z _ {t - 1}, (1 - \\alpha_ {t}) I), \\tag {1}", + "image_path": "fac215acf47e69fa88b1436ed55d79676495f36f643bbe2f3bc7b795d24f4d2e.jpg" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 468, + 554, + 586 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 468, + 554, + 586 + ], + "spans": [ + { + "bbox": [ + 313, + 468, + 554, + 586 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 468, + 554, + 586 + ], + "type": "inline_equation", + "content": "\\alpha_{t}" + }, + { + "bbox": [ + 313, + 468, + 554, + 586 + ], + "type": "text", + "content": " is the fixed coefficient predefined in the noise schedule, and " + }, + { + "bbox": [ + 313, + 468, + 554, + 586 + ], + "type": "inline_equation", + "content": "I" + }, + { + "bbox": [ + 313, + 468, + 554, + 586 + ], + "type": "text", + "content": " refers to identity matrix. A prominent variant, the Latent Diffusion Model (LDM) [28], innovatively shifts the diffusion process of standard DMs into a latent space. This transition notably decreases computational costs while preserving the generative quality and flexibility of the original model. The resulting efficiency gain primarily arises from the reduced dimensionality of the latent space, which allows for lower training costs without compromising the model's generative capabilities." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 313, + 586, + 554, + 671 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 586, + 554, + 671 + ], + "spans": [ + { + "bbox": [ + 313, + 586, + 554, + 671 + ], + "type": "text", + "content": "Stable Diffusion, an exemplary implementation of LDM, comprises an AutoEncoder [29] and a latent diffusion model. The AutoEncoder " + }, + { + "bbox": [ + 313, + 586, + 554, + 671 + ], + "type": "inline_equation", + "content": "\\varepsilon" + }, + { + "bbox": [ + 313, + 586, + 554, + 671 + ], + "type": "text", + "content": " is designed to learn a latent space that is perceptually equivalent to the image space. 
Meanwhile, the LDM " + }, + { + "bbox": [ + 313, + 586, + 554, + 671 + ], + "type": "inline_equation", + "content": "\\epsilon_{\\theta}" + }, + { + "bbox": [ + 313, + 586, + 554, + 671 + ], + "type": "text", + "content": " is parameterized as a denoising model with cross-attention and trained on a large-scale dataset of text-image pairs via:" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 321, + 679, + 553, + 694 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 321, + 679, + 553, + 694 + ], + "spans": [ + { + "bbox": [ + 321, + 679, + 553, + 694 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {L D M} := \\mathbb {E} _ {\\varepsilon (x), y, \\epsilon \\sim N (0, 1), t} [ \\| \\epsilon - \\epsilon_ {\\theta} (z _ {t}, t, \\tau_ {\\theta} (y)) \\| _ {2} ^ {2} ], \\tag {2}", + "image_path": "9067c526ee46c929fb77aeaca0ba10d1028ec9a87f66754c4e56633291c8ff7f.jpg" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 313, + 701, + 553, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 701, + 553, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 701, + 553, + 713 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 701, + 553, + 713 + ], + "type": "inline_equation", + "content": "\\epsilon" + }, + { + "bbox": [ + 313, + 701, + 553, + 713 + ], + "type": "text", + "content": " is the target noise. " + }, + { + "bbox": [ + 313, + 701, + 553, + 713 + ], + "type": "inline_equation", + "content": "\\tau_{\\theta}" + }, + { + "bbox": [ + 313, + 701, + 553, + 713 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 701, + 553, + 713 + ], + "type": "inline_equation", + "content": "y" + }, + { + "bbox": [ + 313, + 701, + 553, + 713 + ], + "type": "text", + "content": " are the pre-trained" + } + ] + } + ], + "index": 17 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 297, + 749, + 312, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 297, + 749, + 312, + 757 + ], + "spans": [ + { + "bbox": [ + 297, + 749, + 312, + 757 + ], + "type": "text", + "content": "963" + } + ] + } + ], + "index": 18 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 55, + 68, + 431, + 238 + ], + "blocks": [ + { + "bbox": [ + 55, + 68, + 431, + 238 + ], + "lines": [ + { + "bbox": [ + 55, + 68, + 431, + 238 + ], + "spans": [ + { + "bbox": [ + 55, + 68, + 431, + 238 + ], + "type": "image", + "image_path": "01bdc75ed160a5f95660aa52c6cb1677e6053d024173d47df59b4017a0df17b8.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 195, + 241, + 293, + 251 + ], + "lines": [ + { + "bbox": [ + 195, + 241, + 293, + 251 + ], + "spans": [ + { + "bbox": [ + 195, + 241, + 293, + 251 + ], + "type": "text", + "content": "(a) Training phase of our TIDE" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 449, + 71, + 555, + 235 + ], + "blocks": [ + { + "bbox": [ + 449, + 71, + 555, + 235 + ], + "lines": [ + { + "bbox": [ + 449, + 71, + 555, + 235 + ], + "spans": [ + { + "bbox": [ + 449, + 71, + 555, + 235 + ], + "type": "image", + "image_path": "f7f5c503325057f5f49b3b485661f5143744fa49061911315aa4ec6d5c742250.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 470, + 242, + 533, + 251 + ], + "lines": [ + { + "bbox": [ + 470, + 242, + 533, + 251 + ], + 
"spans": [ + { + "bbox": [ + 470, + 242, + 533, + 251 + ], + "type": "text", + "content": "(b) Inference phase" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 54, + 260, + 555, + 293 + ], + "lines": [ + { + "bbox": [ + 54, + 260, + 555, + 293 + ], + "spans": [ + { + "bbox": [ + 54, + 260, + 555, + 293 + ], + "type": "text", + "content": "Figure 3. Training and Inference. The denoising model of TIDE mainly consists of three transformers, each dedicated to text-to-image, text-to-depth, and text-to-mask. The proposed Implicit Layout Sharing mechanism (ILS) and Time Adaptive Normalization (TAN) are used to align the generated image, depth map, and semantic mask." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "bbox": [ + 54, + 315, + 296, + 374 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 315, + 296, + 374 + ], + "spans": [ + { + "bbox": [ + 54, + 315, + 296, + 374 + ], + "type": "text", + "content": "text encoder (e.g., CLIP [24], T5 [25]) and text prompts, respectively. This equation represents the mean-squared error (MSE) between the target noise " + }, + { + "bbox": [ + 54, + 315, + 296, + 374 + ], + "type": "inline_equation", + "content": "\\epsilon" + }, + { + "bbox": [ + 54, + 315, + 296, + 374 + ], + "type": "text", + "content": " and the noise predicted by the model, encapsulating the core learning mechanism of the latent diffusion model." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 388, + 135, + 400 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 388, + 135, + 400 + ], + "spans": [ + { + "bbox": [ + 55, + 388, + 135, + 400 + ], + "type": "text", + "content": "4. Our Method" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 54, + 409, + 296, + 565 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 409, + 296, + 565 + ], + "spans": [ + { + "bbox": [ + 54, + 409, + 296, + 565 + ], + "type": "text", + "content": "An overview of our method, a unified text-to-image and dense annotation generation model (TIDE), is shown in Fig. 3. TIDE is built upon a pre-trained transformer [6] for text-to-image generation, along with two fine-tuned mini-transformers (details provided in Sec. 5.1.1) dedicated to text-to-depth and text-to-mask generation. Simply parallelizing multiple text-to-image processes does not ensure consistency between the images and dense annotations. To enable consistency between them, we propose Implicit Layout Sharing (ILS) and the cross-modal interaction method named Time Adaptive Normalization (TAN). After training, TIDE simultaneously generates images and multiple dense annotations with high consistency using only text as input." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 574, + 160, + 588 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 574, + 160, + 588 + ], + "spans": [ + { + "bbox": [ + 55, + 574, + 160, + 588 + ], + "type": "text", + "content": "4.1. Data Preparation" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 54, + 594, + 296, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 594, + 296, + 715 + ], + "spans": [ + { + "bbox": [ + 54, + 594, + 296, + 715 + ], + "type": "text", + "content": "We aim to generate realistic underwater images, corresponding highly consistent depth maps, and semantic masks. However, existing high-quality, dense annotation data primarily consists of mask annotations. 
Therefore, we construct training data around these datasets with semantic masks, as shown in Tab. 1. On this basis, we obtain the corresponding depth map and caption for each image using existing foundation models. Specifically, for each underwater image, the corresponding depth map is obtained by pre-trained Depth Anything [38]. Meanwhile, the caption of" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 315, + 555, + 351 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 315, + 555, + 351 + ], + "spans": [ + { + "bbox": [ + 313, + 315, + 555, + 351 + ], + "type": "text", + "content": "each image is obtained from the pre-trained BLIP2 [18]. We construct approximately 14K quadruples {Image, Depth, Mask, Caption} for TIDE training." + } + ] + } + ], + "index": 10 + }, + { + "type": "table", + "bbox": [ + 316, + 394, + 551, + 448 + ], + "blocks": [ + { + "bbox": [ + 313, + 364, + 555, + 386 + ], + "lines": [ + { + "bbox": [ + 313, + 364, + 555, + 386 + ], + "spans": [ + { + "bbox": [ + 313, + 364, + 555, + 386 + ], + "type": "text", + "content": "Table 1. Segmentation Datasets and Data Splits. \\* denotes the training set of TIDE, while the others are used for evaluation." + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 316, + 394, + 551, + 448 + ], + "lines": [ + { + "bbox": [ + 316, + 394, + 551, + 448 + ], + "spans": [ + { + "bbox": [ + 316, + 394, + 551, + 448 + ], + "type": "table", + "html": "
DatasetsSeg TaskTrainValTest
SUIM [16]Semantic1,488*110*/
UIIS [19]Instance3,937*691/
USIS10K [20]Instance7,442*1,5941,596*
", + "image_path": "e77aefbc828542b09bf7b8055f58e9804fa364f28fe964eb77a803f1be0350a4.jpg" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "table_body" + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 472, + 507, + 484 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 472, + 507, + 484 + ], + "spans": [ + { + "bbox": [ + 313, + 472, + 507, + 484 + ], + "type": "text", + "content": "4.2. Implicit Layout Sharing Mechanism" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 312, + 491, + 555, + 659 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 491, + 555, + 659 + ], + "spans": [ + { + "bbox": [ + 312, + 491, + 555, + 659 + ], + "type": "text", + "content": "In advanced text-to-image models [6, 28], the cross-attention map plays a crucial role in controlling the image layout. Existing methods [21, 35] demonstrate that adjusting the cross-attention map during the text-to-image process can effectively control the layout of the generated image. Therefore, the cross-attention map can be considered as the implicit layout information. Intuitively, sharing the implicit layout between text-to-image and text-to-dense annotations may establish a strong correlation between the generated image and dense annotations. To this end, we propose an Implicit Layout Sharing mechanism to align the generated image and dense annotations. Specifically, cross-attention, as a crucial process for generating implicit layouts in text-to-image/mask/depth model, can first be formulated as:" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 335, + 670, + 529, + 684 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 335, + 670, + 529, + 684 + ], + "spans": [ + { + "bbox": [ + 335, + 670, + 529, + 684 + ], + "type": "interline_equation", + "content": "\\operatorname {A t t n} _ {i} \\left(\\boldsymbol {Q} _ {i}, \\boldsymbol {K} _ {i}, \\boldsymbol {V} _ {i}\\right) = \\operatorname {s o f t m a x} \\left(\\boldsymbol {Q} _ {i} \\boldsymbol {K} _ {i} ^ {\\top} / \\sqrt {c}\\right) \\boldsymbol {V} _ {i},", + "image_path": "c796eb65347c26e1797fd610cd7b836010ac05c14c946e2410f9ae573d3ef08e.jpg" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 332, + 685, + 553, + 700 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 332, + 685, + 553, + 700 + ], + "spans": [ + { + "bbox": [ + 332, + 685, + 553, + 700 + ], + "type": "interline_equation", + "content": "\\operatorname {A t t n} _ {d} \\left(\\boldsymbol {Q} _ {d}, \\boldsymbol {K} _ {d}, \\boldsymbol {V} _ {d}\\right) = \\operatorname {s o f t m a x} \\left(\\boldsymbol {Q} _ {d} \\boldsymbol {K} _ {d} ^ {\\top} / \\sqrt {c}\\right) \\boldsymbol {V} _ {d}, \\tag {3}", + "image_path": "e4e44eac94a333053aef82637fa998509d8ef282156b0e4dd4cddbde415a7f49.jpg" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 320, + 702, + 537, + 716 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 702, + 537, + 716 + ], + "spans": [ + { + "bbox": [ + 320, + 702, + 537, + 716 + ], + "type": "interline_equation", + "content": "\\operatorname {A t t n} _ {m} \\left(\\boldsymbol {Q} _ {m}, \\boldsymbol {K} _ {m}, \\boldsymbol {V} _ {m}\\right) = \\operatorname {s o f t m a x} \\left(\\boldsymbol {Q} _ {m} \\boldsymbol {K} _ {m} ^ {\\top} / \\sqrt {c}\\right) \\boldsymbol {V} _ {m},", + "image_path": "3075a4ab9b7f80e57eb4f56f2c5bb2d307743ce87c5bdf013a5f18286a695ae4.jpg" + } + ] + } + ], + "index": 17 + } + ], + 
"discarded_blocks": [ + { + "bbox": [ + 297, + 749, + 313, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 297, + 749, + 313, + 757 + ], + "spans": [ + { + "bbox": [ + 297, + 749, + 313, + 757 + ], + "type": "text", + "content": "964" + } + ] + } + ], + "index": 18 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 296, + 216 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 296, + 216 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 296, + 216 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 55, + 72, + 296, + 216 + ], + "type": "inline_equation", + "content": "c" + }, + { + "bbox": [ + 55, + 72, + 296, + 216 + ], + "type": "text", + "content": " refers to the feature channel. " + }, + { + "bbox": [ + 55, + 72, + 296, + 216 + ], + "type": "inline_equation", + "content": "Q_{i} / Q_{d} / Q_{m}" + }, + { + "bbox": [ + 55, + 72, + 296, + 216 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 55, + 72, + 296, + 216 + ], + "type": "inline_equation", + "content": "K_{i} / K_{d} / K_{m}" + }, + { + "bbox": [ + 55, + 72, + 296, + 216 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 55, + 72, + 296, + 216 + ], + "type": "inline_equation", + "content": "V_{i} / V_{d} / V_{m}" + }, + { + "bbox": [ + 55, + 72, + 296, + 216 + ], + "type": "text", + "content": " represent the query, key, and value within the text-to-image/depth/mask cross-attention module, respectively. Since text-to-image models are pre-trained on high-quality and large-scale image-caption datasets, they exhibit strong controllability and generalization. Therefore, sharing the implicit layouts from the text-to-image model is the optimal choice to ensure the quality of the generated data. As shown in Fig. 3(a), the implicit layouts from the block in the text-to-image model are shared with the cross-attention in the block of text-to-dense annotation models. The implicit layouts refer to:" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 112, + 220, + 295, + 235 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 112, + 220, + 295, + 235 + ], + "spans": [ + { + "bbox": [ + 112, + 220, + 295, + 235 + ], + "type": "interline_equation", + "content": "\\boldsymbol {M} _ {i} = \\operatorname {s o f t m a x} \\left(\\boldsymbol {Q} _ {i} \\boldsymbol {K} _ {i} ^ {\\top} / \\sqrt {c}\\right). 
\\tag {4}", + "image_path": "7b3a51a7584c15ef89aa3d38683b6c2febaea6aa994313152637169754ce2fd9.jpg" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 239, + 296, + 275 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 239, + 296, + 275 + ], + "spans": [ + { + "bbox": [ + 55, + 239, + 296, + 275 + ], + "type": "text", + "content": "By sharing the implicit layouts from the text-to-image model, the cross-attention of text-to-depth " + }, + { + "bbox": [ + 55, + 239, + 296, + 275 + ], + "type": "inline_equation", + "content": "(\\mathsf{Attn}_d)" + }, + { + "bbox": [ + 55, + 239, + 296, + 275 + ], + "type": "text", + "content": " and text-to-mask " + }, + { + "bbox": [ + 55, + 239, + 296, + 275 + ], + "type": "inline_equation", + "content": "(\\mathsf{Attn}_m)" + }, + { + "bbox": [ + 55, + 239, + 296, + 275 + ], + "type": "text", + "content": " can be simplified as follows:" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 107, + 280, + 253, + 293 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 280, + 253, + 293 + ], + "spans": [ + { + "bbox": [ + 107, + 280, + 253, + 293 + ], + "type": "interline_equation", + "content": "\\operatorname {A t t n} _ {d} \\left(\\boldsymbol {Q} _ {d}, \\boldsymbol {K} _ {d}, \\boldsymbol {V} _ {d}\\right) = \\boldsymbol {M} _ {i} \\times \\boldsymbol {V} _ {d},", + "image_path": "d6ce945bb00d0cd6dd4e991802f41dc2ee2d9fe47ac5c251bc565361dba361da.jpg" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 96, + 291, + 294, + 308 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 96, + 291, + 294, + 308 + ], + "spans": [ + { + "bbox": [ + 96, + 291, + 294, + 308 + ], + "type": "interline_equation", + "content": "\\operatorname {A t t n} _ {m} \\left(\\boldsymbol {Q} _ {m}, \\boldsymbol {K} _ {m}, \\boldsymbol {V} _ {m}\\right) = \\boldsymbol {M} _ {i} \\times \\boldsymbol {V} _ {m}, \\tag {5}", + "image_path": "1079a03a180cacc8539626c1d969ae2d97b12c5eaff39f014e44bc105bdc5a4b.jpg" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 312, + 296, + 384 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 312, + 296, + 384 + ], + "spans": [ + { + "bbox": [ + 55, + 312, + 296, + 384 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 55, + 312, + 296, + 384 + ], + "type": "inline_equation", + "content": "\\times" + }, + { + "bbox": [ + 55, + 312, + 296, + 384 + ], + "type": "text", + "content": " refers to matrix multiplication. Implicit Layout Sharing is an elegant and efficient method that unifies image and dense annotation generation, improving consistency between them. It also reduces the overall generation cost, as there is no need to compute separate cross-attention maps for the text-to-dense annotation models." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 390, + 218, + 403 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 390, + 218, + 403 + ], + "spans": [ + { + "bbox": [ + 55, + 390, + 218, + 403 + ], + "type": "text", + "content": "4.3. 
4.3. Time Adaptive Normalization

Considering the complementary nature of different modality features, we propose a cross-modal feature interaction method called Time Adaptive Normalization (TAN), as shown in Fig. 4.

Figure 4. In TAN, the cross-modal features are first mapped to the modulation parameters $\gamma$ and $\beta$. Then, a time-adaptive confidence $\alpha$ is introduced to control the degree of normalization.

Specifically, TAN adjusts the image layout by leveraging the cross-modal features $x_f$ from the other branches. The cross-modal features are mapped by MLPs to two normalization parameters, $\gamma$ and $\beta$, which are used to control the variation in the image layout. In this context, the features from text-to-depth and text-to-mask serve as cross-modal input features for each other. For instance, in the TAN corresponding to the $i$-th text-to-depth block, the outputs from the $i$-th text-to-depth block and the $i$-th text-to-mask block serve as the input feature $x$ and the cross-modal input feature $x_f$, respectively. A slight difference is that for text-to-image, the features from both text-to-depth and text-to-mask serve as the cross-modal features. In the TAN cross-modal interaction process of text-to-image, two sets of $\gamma$ and $\beta$ are therefore obtained, provided by the different modality features from text-to-depth and text-to-mask; these two sets of parameters are averaged into $\bar{\gamma}$ and $\bar{\beta}$. Then, a time embedding $x_t$ is introduced to adaptively control the influence of the cross-modal features. The normalization can be formalized as follows:

$$x' = \alpha \cdot \gamma x + \alpha \cdot \beta, \quad x^* = x' + x, \tag{6}$$

where $x$, $x'$, and $x^*$ are the input feature, normalized feature, and output feature, respectively. $\alpha$ is the time-adaptive coefficient obtained from $x_t$ through a linear transformation and a sigmoid. TAN is applied not only from the text-to-dense-annotation branches to text-to-image but also between text-to-depth and text-to-mask to improve the consistency among dense annotations. Implicit Layout Sharing and Time Adaptive Normalization are two complementary methods that construct a joint interaction process, optimizing the consistency between the generated image and dense annotations during training.
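Below is a hedged PyTorch sketch of a single TAN module implementing Eq. (6); the MLP depth, hidden width, and parameter names are assumptions, since the paper specifies only the mapping to $\gamma$ and $\beta$ and the sigmoid-activated, time-dependent $\alpha$. For the text-to-image branch, the two $(\gamma, \beta)$ sets produced from the depth and mask features would additionally be averaged into $(\bar{\gamma}, \bar{\beta})$ before this modulation.

```python
# Sketch of Time Adaptive Normalization (TAN) for one branch (Eq. 6).
import torch
import torch.nn as nn

class TAN(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        # Cross-modal feature -> modulation parameters (gamma, beta).
        self.to_gamma_beta = nn.Sequential(
            nn.Linear(dim, dim), nn.SiLU(), nn.Linear(dim, 2 * dim))
        # Time embedding -> time-adaptive confidence alpha.
        self.to_alpha = nn.Linear(dim, dim)

    def forward(self, x, x_f, x_t):
        gamma, beta = self.to_gamma_beta(x_f).chunk(2, dim=-1)
        alpha = torch.sigmoid(self.to_alpha(x_t))   # alpha in (0, 1)
        x_prime = alpha * gamma * x + alpha * beta  # x' = a*gamma*x + a*beta
        return x_prime + x                          # x* = x' + x (residual)

tan = TAN(dim=32)
x   = torch.randn(1, 16, 32)  # input feature of this branch
x_f = torch.randn(1, 16, 32)  # cross-modal feature from the other branch
x_t = torch.randn(1, 1, 32)   # time embedding, broadcast over tokens
x_star = tan(x, x_f, x_t)
```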
4.4. Learning Objective

During training, the learnable parameters include only the proposed TAN module and the LoRA [15] used to fine-tune the pre-trained transformer. The overall loss $\mathcal{L}$ is composed equally of the denoising losses from the three branches: text-to-image, text-to-depth, and text-to-mask:

$$\mathcal{L} = \mathcal{L}_{mse}^{I} + \mathcal{L}_{mse}^{D} + \mathcal{L}_{mse}^{M}. \tag{7}$$
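As a sketch, Eq. (7) is simply the sum of three standard denoising MSE terms, one per branch; the function below is hypothetical and not the released API.

```python
# Hypothetical training-step sketch for Eq. (7): the three branches share
# one diffusion timestep t, and each predicts the noise added to its latent.
import torch.nn.functional as F

def tide_loss(model, noisy_latents, noise_targets, text_emb, t):
    """noisy_latents / noise_targets: dicts keyed by 'image', 'depth',
    'mask' holding the corrupted latents and their Gaussian noise."""
    pred_i, pred_d, pred_m = model(noisy_latents, text_emb, t)
    return (F.mse_loss(pred_i, noise_targets["image"])    # L_mse^I
            + F.mse_loss(pred_d, noise_targets["depth"])  # L_mse^D
            + F.mse_loss(pred_m, noise_targets["mask"]))  # L_mse^M
```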
4.5. Data Synthesis

Thanks to the proposed ILS and TAN, after training, TIDE can generate realistic and highly consistent underwater images and dense annotations using only text conditions, as shown in Fig. 3(b).

We filter out redundant parts from the 14K captions obtained in Sec. 4.1, resulting in approximately 5K non-redundant captions as text conditions. For each caption, we generate ten samples to construct a large-scale synthetic dataset named SynTIDE. Some representative examples are shown in Fig. 1. The SynTIDE dataset is utilized to validate the effectiveness of our method on dense prediction tasks for underwater scenes.

4.6. Analysis

Insights of framework design. In the text-to-image model, the cross-attention map contains the layout information of the image. Thus, the cross-attention map can be viewed as an implicit layout. If two text-to-image models share the same implicit layout and undergo proper fine-tuning, the generated images are likely to exhibit strong layout similarity. Therefore, we share the same implicit layout across multiple text-to-image models. Meanwhile, we use LoRA to fine-tune the multiple text-to-image models [6].

Zero-shot generation ability. Thanks to our training strategy, which fine-tunes the pre-trained text-to-image model using only LoRA, the generalization ability of the text-to-image model is retained to some extent. This enables TIDE to generate underwater images during inference that are not seen during training. Furthermore, due to the proposed Implicit Layout Sharing and Time Adaptive Normalization mechanisms, the generated depth maps align well with these images. Therefore, TIDE has the ability to generate zero-shot underwater image-depth map pairs.
Table 2. Quantitative comparisons on real underwater depth estimation datasets.

<table>
<tr><th>Method</th><th>Fine-tuning Dataset</th><th>Reference</th><th>\( SI_{log} \) ↓</th><th>A.Rel ↓</th><th>\( \log_{10} \) ↓</th><th>RMSE ↓</th><th>S.Rel ↓</th><th>\( RMSE_{log} \) ↓</th><th>\( \delta_1 \) ↑</th><th>\( \delta_2 \) ↑</th><th>\( \delta_3 \) ↑</th></tr>
<tr><td colspan="12">Quantitative comparisons on the D3 and D5 subsets of the Sea-thru [3] dataset.</td></tr>
<tr><td rowspan="2">AdaBins [5]</td><td>Atlantis [41]</td><td>CVPR 24</td><td>38.24</td><td>1.33</td><td>0.12</td><td>1.41</td><td>12.89</td><td>0.39</td><td>0.50</td><td>0.81</td><td>0.92</td></tr>
<tr><td>SynTIDE (Ours)</td><td>-</td><td>26.92 (-11.32)</td><td>1.31 (-0.02)</td><td>0.08 (-0.04)</td><td>1.12 (-0.29)</td><td>15.74 (+2.85)</td><td>0.27 (-0.12)</td><td>0.71 (+0.21)</td><td>0.95 (+0.14)</td><td>0.99 (+0.07)</td></tr>
<tr><td rowspan="2">NewCRFs [40]</td><td>Atlantis [41]</td><td>CVPR 24</td><td>37.10</td><td>1.68</td><td>0.12</td><td>1.44</td><td>14.76</td><td>0.38</td><td>0.48</td><td>0.84</td><td>0.95</td></tr>
<tr><td>SynTIDE (Ours)</td><td>-</td><td>22.37 (-14.73)</td><td>1.50 (-0.18)</td><td>0.06 (-0.06)</td><td>1.24 (-0.20)</td><td>22.50 (+7.74)</td><td>0.23 (-0.15)</td><td>0.84 (+0.36)</td><td>0.97 (+0.13)</td><td>0.99 (+0.04)</td></tr>
<tr><td rowspan="2">PixelFormer [1]</td><td>Atlantis [41]</td><td>CVPR 24</td><td>23.70</td><td>1.34</td><td>0.06</td><td>1.17</td><td>17.29</td><td>0.24</td><td>0.81</td><td>0.97</td><td>0.99</td></tr>
<tr><td>SynTIDE (Ours)</td><td>-</td><td>21.39 (-2.31)</td><td>1.46 (+0.12)</td><td>0.05 (-0.01)</td><td>1.15 (-0.02)</td><td>21.79 (+4.50)</td><td>0.22 (-0.02)</td><td>0.88 (+0.07)</td><td>0.98 (+0.01)</td><td>0.99 (+0.00)</td></tr>
<tr><td rowspan="2">MIM [34]</td><td>Atlantis [41]</td><td>CVPR 24</td><td>37.01</td><td>1.37</td><td>0.11</td><td>1.51</td><td>14.42</td><td>0.38</td><td>0.56</td><td>0.84</td><td>0.94</td></tr>
<tr><td>SynTIDE (Ours)</td><td>-</td><td>22.49 (-14.52)</td><td>1.27 (-0.10)</td><td>0.06 (-0.05)</td><td>1.01 (-0.50)</td><td>16.46 (+2.04)</td><td>0.23 (-0.15)</td><td>0.85 (+0.29)</td><td>0.97 (+0.13)</td><td>0.99 (+0.05)</td></tr>
<tr><td colspan="12">Quantitative comparisons on the SQUID [4] dataset.</td></tr>
<tr><td rowspan="2">AdaBins [5]</td><td>Atlantis [41]</td><td>CVPR 24</td><td>29.56</td><td>0.28</td><td>0.11</td><td>2.24</td><td>0.69</td><td>0.31</td><td>0.56</td><td>0.86</td><td>0.94</td></tr>
<tr><td>SynTIDE (Ours)</td><td>-</td><td>25.63 (-3.93)</td><td>0.23 (-0.05)</td><td>0.09 (-0.02)</td><td>2.69 (+0.45)</td><td>0.92 (+0.23)</td><td>0.27 (-0.04)</td><td>0.67 (+0.11)</td><td>0.90 (+0.04)</td><td>0.97 (+0.03)</td></tr>
<tr><td rowspan="2">NewCRFs [40]</td><td>Atlantis [41]</td><td>CVPR 24</td><td>25.19</td><td>0.23</td><td>0.09</td><td>2.56</td><td>0.83</td><td>0.26</td><td>0.68</td><td>0.90</td><td>0.96</td></tr>
<tr><td>SynTIDE (Ours)</td><td>-</td><td>25.55 (+0.36)</td><td>0.23 (-0.00)</td><td>0.09 (+0.00)</td><td>3.02 (+0.46)</td><td>1.07 (+0.24)</td><td>0.27 (+0.01)</td><td>0.68 (+0.00)</td><td>0.91 (+0.01)</td><td>0.97 (+0.01)</td></tr>
<tr><td rowspan="2">PixelFormer [1]</td><td>Atlantis [41]</td><td>CVPR 24</td><td>21.34</td><td>0.18</td><td>0.07</td><td>1.86</td><td>0.43</td><td>0.22</td><td>0.76</td><td>0.94</td><td>0.98</td></tr>
<tr><td>SynTIDE (Ours)</td><td>-</td><td>19.08 (-2.26)</td><td>0.16 (-0.02)</td><td>0.07 (-0.00)</td><td>1.75 (-0.11)</td><td>0.36 (-0.07)</td><td>0.19 (-0.03)</td><td>0.79 (+0.03)</td><td>0.97 (+0.03)</td><td>0.99 (+0.01)</td></tr>
<tr><td rowspan="2">MIM [34]</td><td>Atlantis [41]</td><td>CVPR 24</td><td>27.45</td><td>0.26</td><td>0.10</td><td>2.14</td><td>0.68</td><td>0.28</td><td>0.61</td><td>0.88</td><td>0.95</td></tr>
<tr><td>SynTIDE (Ours)</td><td>-</td><td>26.98 (-0.47)</td><td>0.25 (-0.01)</td><td>0.09 (-0.01)</td><td>3.04 (+0.90)</td><td>1.11 (+0.43)</td><td>0.28 (-0.00)</td><td>0.65 (+0.04)</td><td>0.89 (+0.01)</td><td>0.96 (+0.01)</td></tr>
</table>
", + "image_path": "95f68ad81dfc45669541655eb1b6bf8315f035e53a8fa702e5c529bbb95ae049.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "bbox": [ + 54, + 376, + 294, + 460 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 376, + 294, + 460 + ], + "spans": [ + { + "bbox": [ + 54, + 376, + 294, + 460 + ], + "type": "text", + "content": "the image. Thus, the cross-attention map can be viewed as an implicit layout. If two text-to-image models share the same implicit layout and undergo proper fine-tuning, the generated images are likely to exhibit strong layout similarity. Therefore, we share the same implicit layout across multiple text-to-image models. Meanwhile, we use LoRA to fine-tune the multiple text-to-image models [6]." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 54, + 460, + 295, + 581 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 460, + 295, + 581 + ], + "spans": [ + { + "bbox": [ + 54, + 460, + 295, + 581 + ], + "type": "text", + "content": "Zero-shot generation ability. Thanks to our training strategy, which fine-tunes the pre-trained text-to-image model using only LoRA, the generalization ability of the text-to-image model is retained to some extent. This enables TIDE to generate underwater images during inference that are not seen during training. Furthermore, due to the proposed Implicit Layout Sharing and Time Adaptive Normalization mechanisms, the generated depth maps align well with these images. Therefore, TIDE has the ability to generate zero-shot underwater image-depth map pairs." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 591, + 137, + 604 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 591, + 137, + 604 + ], + "spans": [ + { + "bbox": [ + 55, + 591, + 137, + 604 + ], + "type": "text", + "content": "5. Experiments" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 610, + 226, + 623 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 610, + 226, + 623 + ], + "spans": [ + { + "bbox": [ + 55, + 610, + 226, + 623 + ], + "type": "text", + "content": "5.1. Dataset and Evaluation Metrics" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 629, + 295, + 700 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 629, + 295, + 700 + ], + "spans": [ + { + "bbox": [ + 55, + 629, + 295, + 700 + ], + "type": "text", + "content": "Underwater Depth Estimation. We follow the work [41], the D3 and D5 subsets of Sea-thru [3], and the SQUID dataset [4] used to evaluate the depth estimation capability in underwater scenes. These datasets include underwater images with depth maps obtained via the Structure-fromMotion (SfM) algorithm." 
The quantitative evaluation metrics include root mean square error (RMSE) and its logarithmic variant ($RMSE_{log}$), absolute error in log-scale ($\log_{10}$), absolute relative error (A.Rel), squared relative error (S.Rel), the percentage of inlier pixels ($\delta_i$) with thresholds of $1.25^i$, and the scale-invariant error in log-scale ($SI_{log}$): $100\sqrt{\mathrm{Var}(\epsilon_{log})}$.

Underwater Semantic Segmentation. The UIIS [19] and USIS10K [20] datasets are chosen to validate the effectiveness of our method on underwater semantic segmentation tasks. Instance masks belonging to the same semantic category are merged to construct semantic segmentation annotations for the UIIS and USIS10K datasets.

We calculate the mean Intersection over Union (mIoU) over six categories (i.e., Fish, Reefs, Aquatic Plants, Wrecks, Human Divers, and Robots) to evaluate the accuracy of the segmentation results.
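For reference, the depth metrics listed above can be computed as follows, assuming `pred` and `gt` are positive NumPy arrays restricted to valid pixels.

```python
# Reference implementations of the standard depth metrics; SI_log follows
# the 100 * sqrt(Var(log(pred) - log(gt))) definition given in the text.
import numpy as np

def depth_metrics(pred, gt):
    err_log = np.log(pred) - np.log(gt)
    ratio = np.maximum(pred / gt, gt / pred)
    return {
        "SI_log": 100.0 * np.sqrt(np.var(err_log)),  # scale-invariant log error
        "A.Rel": np.mean(np.abs(pred - gt) / gt),    # absolute relative error
        "S.Rel": np.mean((pred - gt) ** 2 / gt),     # squared relative error
        "log10": np.mean(np.abs(np.log10(pred) - np.log10(gt))),
        "RMSE": np.sqrt(np.mean((pred - gt) ** 2)),
        "RMSE_log": np.sqrt(np.mean(err_log ** 2)),
        "delta_1": np.mean(ratio < 1.25),            # inlier ratios at 1.25^i
        "delta_2": np.mean(ratio < 1.25 ** 2),
        "delta_3": np.mean(ratio < 1.25 ** 3),
    }
```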
5.1.1 Implementation Details

The training process consists of two parts: pre-training the mini-transformer and training TIDE. In the first stage, the mini-transformer is initialized with the first ten layers of the PixArt-α [6] pre-trained transformer. Then, the mini-transformer is trained for 60K iterations on the text-to-image task with all parameters. The training data consists of the 14K underwater image-caption pairs from Sec. 4.1. In the second stage, the PixArt-α pre-trained transformer and the mini-transformer are used as initial weights for the text-to-image and text-to-dense-annotation models, respectively. Meanwhile, they are fine-tuned using LoRA [15] for 200K iterations with a batch size of 4. The LoRA ranks of the text-to-image/depth/mask branches are 32, 64, and 64, respectively. All experiments are conducted on a server with four NVIDIA 4090 24G GPUs.
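A minimal sketch of the LoRA update used during fine-tuning is given below; the rank values follow the paper (32 for text-to-image, 64 for the depth and mask branches), while the wrapped layer, scaling convention, and initialization details are assumptions.

```python
# Illustrative LoRA wrapper: the pre-trained weight is frozen and a
# low-rank update up @ down is learned on top of it.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int, alpha: float = 1.0):
        super().__init__()
        self.base = base.requires_grad_(False)  # frozen pre-trained weight
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)          # start as an identity update
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.up(self.down(x))

layer = LoRALinear(nn.Linear(512, 512), rank=32)  # e.g., text-to-image branch
out = layer(torch.randn(1, 16, 512))
```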
Table 3. Quantitative results of underwater semantic segmentation.

<table>
<tr><th rowspan="2">Method</th><th rowspan="2">Backbone</th><th colspan="2">Training Data</th><th colspan="2">mIoU</th></tr>
<tr><th>Real</th><th>SynTIDE</th><th>UIIS</th><th>USIS10K</th></tr>
<tr><td rowspan="3">Segformer [33] (NeurIPS 21)</td><td rowspan="3">MiT-B4</td><td>✓</td><td></td><td>70.2</td><td>74.6</td></tr>
<tr><td></td><td>✓</td><td>76.5</td><td>72.8</td></tr>
<tr><td>✓</td><td>✓</td><td>75.4 (+5.2)</td><td>76.1 (+1.5)</td></tr>
<tr><td rowspan="3">Mask2former [8] (CVPR 22)</td><td rowspan="3">Swin-B</td><td>✓</td><td></td><td>72.7</td><td>76.1</td></tr>
<tr><td></td><td>✓</td><td>74.2</td><td>72.9</td></tr>
<tr><td>✓</td><td>✓</td><td>74.3 (+1.6)</td><td>77.1 (+1.0)</td></tr>
<tr><td rowspan="3">ViT-Adapter [7] (ICLR 23)</td><td rowspan="3">ViT-Adapter-B</td><td>✓</td><td></td><td>73.5</td><td>74.6</td></tr>
<tr><td></td><td>✓</td><td>75.7</td><td>72.6</td></tr>
<tr><td>✓</td><td>✓</td><td>75.1 (+1.6)</td><td>76.7 (+2.1)</td></tr>
</table>
", + "image_path": "0200a5a38c2c5778a8d09f9647797d24355f7219dda2eaeb1bdcdc0f638f6aa5.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 267, + 295, + 326 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 267, + 295, + 326 + ], + "spans": [ + { + "bbox": [ + 55, + 267, + 295, + 326 + ], + "type": "text", + "content": "Meanwhile, they are fine-tuned using LoRA [15] for 200K iterations with a batch size of 4. The LoRA ranks of the text-to-image/depth/mask branches are 32, 64, and 64, respectively. All experiments are conducted on a server with four NVIDIA 4090 24G GPUs." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 335, + 138, + 346 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 335, + 138, + 346 + ], + "spans": [ + { + "bbox": [ + 55, + 335, + 138, + 346 + ], + "type": "text", + "content": "5.2. Main results" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 354, + 218, + 367 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 354, + 218, + 367 + ], + "spans": [ + { + "bbox": [ + 55, + 354, + 218, + 367 + ], + "type": "text", + "content": "5.2.1 Underwater Depth Estimation" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 54, + 374, + 295, + 553 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 374, + 295, + 553 + ], + "spans": [ + { + "bbox": [ + 54, + 374, + 295, + 553 + ], + "type": "text", + "content": "We train four representative depth estimation models, Adasbin [5], NewCRFs [29], PixelFormer [1], and MIM [34], to present quantitative results, as shown in Tab. 2. Compared to previous underwater data synthesis work Atlantis [41], depth estimation models trained on our SynTIDE dataset show consistent improvements across most quantitative metrics on two evaluated datasets. Especially on MIM [34], a powerful pre-trained model, our method reduces the " + }, + { + "bbox": [ + 54, + 374, + 295, + 553 + ], + "type": "inline_equation", + "content": "SI_{\\text{log}}" + }, + { + "bbox": [ + 54, + 374, + 295, + 553 + ], + "type": "text", + "content": " metric from " + }, + { + "bbox": [ + 54, + 374, + 295, + 553 + ], + "type": "inline_equation", + "content": "37.01 \\rightarrow 22.49" + }, + { + "bbox": [ + 54, + 374, + 295, + 553 + ], + "type": "text", + "content": " (-14.52) and improves " + }, + { + "bbox": [ + 54, + 374, + 295, + 553 + ], + "type": "inline_equation", + "content": "\\delta_1" + }, + { + "bbox": [ + 54, + 374, + 295, + 553 + ], + "type": "text", + "content": " from " + }, + { + "bbox": [ + 54, + 374, + 295, + 553 + ], + "type": "inline_equation", + "content": "0.56 \\rightarrow 0.85" + }, + { + "bbox": [ + 54, + 374, + 295, + 553 + ], + "type": "text", + "content": " (+0.29) on the D3 and D5 subsets of the Seathru dataset. Meanwhile, on PixelFormer [1], a depth estimation model with outstanding generalization that also performs best for Atlantis, our method achieves better performance across nearly all quantitative metrics on both evaluated underwater depth estimation datasets." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 554, + 295, + 638 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 554, + 295, + 638 + ], + "spans": [ + { + "bbox": [ + 55, + 554, + 295, + 638 + ], + "type": "text", + "content": "These results demonstrate that our method achieves highly competitive consistency compared to Atlantis, which uses stronger dense conditions. 
Furthermore, the data generated by TIDE is closer to natural underwater scenes and shows rich species diversity. Most importantly, TIDE unifies the generation of images and multiple highly consistent dense annotations, capabilities that Atlantis lacks." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 647, + 248, + 659 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 647, + 248, + 659 + ], + "spans": [ + { + "bbox": [ + 55, + 647, + 248, + 659 + ], + "type": "text", + "content": "5.3. Underwater Semantic Segmentation" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 665, + 295, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 665, + 295, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 665, + 295, + 714 + ], + "type": "text", + "content": "In the underwater semantic segmentation task, we validate the effectiveness of our method by pre-training with the SynTIDE dataset in three representative semantic segmentation models, Segformer [33], Mask2former [8], and ViT" + } + ] + } + ], + "index": 8 + }, + { + "type": "table", + "bbox": [ + 316, + 91, + 553, + 142 + ], + "blocks": [ + { + "bbox": [ + 328, + 71, + 539, + 82 + ], + "lines": [ + { + "bbox": [ + 328, + 71, + 539, + 82 + ], + "spans": [ + { + "bbox": [ + 328, + 71, + 539, + 82 + ], + "type": "text", + "content": "Table 4. Ablation on the impact of each TIDE component." + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 316, + 91, + 553, + 142 + ], + "lines": [ + { + "bbox": [ + 316, + 91, + 553, + 142 + ], + "spans": [ + { + "bbox": [ + 316, + 91, + 553, + 142 + ], + "type": "table", + "html": "
<table>
<tr><th>ILS</th><th>TAN</th><th>\( SI_{log} \) ↓</th><th>A.Rel ↓</th><th>\( \delta_1 \) ↑</th><th>mIoU</th></tr>
<tr><td>✓</td><td></td><td>24.46</td><td>1.23</td><td>0.76</td><td>36.8</td></tr>
<tr><td></td><td>✓</td><td>24.59</td><td>1.40</td><td>0.78</td><td>36.2</td></tr>
<tr><td>✓</td><td>✓</td><td>23.71</td><td>1.37</td><td>0.79</td><td>42.1</td></tr>
</table>
", + "image_path": "1ab5913f1600c53e492cc97506b93072f4c8acbb9d39c2959551d276e8ebeb25.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "table_body" + } + ], + "index": 10 + }, + { + "type": "table", + "bbox": [ + 317, + 196, + 553, + 247 + ], + "blocks": [ + { + "bbox": [ + 313, + 154, + 553, + 187 + ], + "lines": [ + { + "bbox": [ + 313, + 154, + 553, + 187 + ], + "spans": [ + { + "bbox": [ + 313, + 154, + 553, + 187 + ], + "type": "text", + "content": "Table 5. Ablation on the impact of the component positions. \" {Start, End}\" indicates the starting and ending positions where the operations are applied, with a step size of 3." + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 317, + 196, + 553, + 247 + ], + "lines": [ + { + "bbox": [ + 317, + 196, + 553, + 247 + ], + "spans": [ + { + "bbox": [ + 317, + 196, + 553, + 247 + ], + "type": "table", + "html": "
<table>
<tr><th>{Start, End}</th><th>\( SI_{log} \) ↓</th><th>A.Rel ↓</th><th>\( \delta_1 \) ↑</th><th>mIoU</th></tr>
<tr><td>{0, 12}</td><td>22.06</td><td>1.46</td><td>0.86</td><td>34.7</td></tr>
<tr><td>{15, 27}</td><td>44.86</td><td>1.09</td><td>0.43</td><td>8.6</td></tr>
<tr><td>{0, 27}</td><td>23.71</td><td>1.37</td><td>0.79</td><td>42.1</td></tr>
</table>
", + "image_path": "e5b74515cb139fe94eaf5850506d08926c46dd30e0ca4b38008bf988a3ab7b35.jpg" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "table_body" + } + ], + "index": 12 + }, + { + "type": "table", + "bbox": [ + 316, + 290, + 553, + 361 + ], + "blocks": [ + { + "bbox": [ + 313, + 258, + 553, + 281 + ], + "lines": [ + { + "bbox": [ + 313, + 258, + 553, + 281 + ], + "spans": [ + { + "bbox": [ + 313, + 258, + 553, + 281 + ], + "type": "text", + "content": "Table 6. Ablation on the impact of scaling synthetic data for underwater dense prediction tasks." + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 316, + 290, + 553, + 361 + ], + "lines": [ + { + "bbox": [ + 316, + 290, + 553, + 361 + ], + "spans": [ + { + "bbox": [ + 316, + 290, + 553, + 361 + ], + "type": "table", + "html": "
<table>
<tr><th>N Sample</th><th>\( SI_{log} \) ↓</th><th>A.Rel ↓</th><th>\( \delta_1 \) ↑</th><th>mIoU</th></tr>
<tr><td>1</td><td>23.49</td><td>1.46</td><td>0.82</td><td>55.3</td></tr>
<tr><td>3</td><td>22.94</td><td>1.54</td><td>0.85</td><td>60.9</td></tr>
<tr><td>6</td><td>22.96</td><td>1.56</td><td>0.85</td><td>63.6</td></tr>
<tr><td>10</td><td>22.37</td><td>1.50</td><td>0.84</td><td>64.2</td></tr>
</table>
", + "image_path": "5a0f5f50021144b5c5cb781663c58a069792b30633c53ec1a01e1d26a09c3477.jpg" + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "table_body" + } + ], + "index": 14 + }, + { + "bbox": [ + 313, + 381, + 553, + 405 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 381, + 553, + 405 + ], + "spans": [ + { + "bbox": [ + 313, + 381, + 553, + 405 + ], + "type": "text", + "content": "Adapter [7]. Following the work [37], we filter the noise in the generated annotations with 1.5 tolerance." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 313, + 406, + 554, + 502 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 406, + 554, + 502 + ], + "spans": [ + { + "bbox": [ + 313, + 406, + 554, + 502 + ], + "type": "text", + "content": "Pre-training on high-quality synthetic datasets is widely recognized as a way to gain strong prior knowledge. On the UIIS dataset, models trained on the SynTIDE dataset consistently achieve superior results compared to real data. On the other larger USIS10K dataset, by further fine-tuning the model on the UIIS10K train set, we achieve notable improvements. Especially on ViT-Adapter, we enhance the performance of the model from " + }, + { + "bbox": [ + 313, + 406, + 554, + 502 + ], + "type": "inline_equation", + "content": "74.6\\% \\rightarrow 76.7\\%" + }, + { + "bbox": [ + 313, + 406, + 554, + 502 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 313, + 502, + 554, + 586 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 502, + 554, + 586 + ], + "spans": [ + { + "bbox": [ + 313, + 502, + 554, + 586 + ], + "type": "text", + "content": "These results show that models pre-trained on the SynTIDE dataset exhibit strong prior knowledge in the underwater semantic segmentation task. Additionally, these results demonstrate that the unified image and dense annotation generation model proposed in this paper can generate highly consistent image-dense annotation pairs, making it suitable for various underwater dense prediction tasks." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 313, + 596, + 416, + 607 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 596, + 416, + 607 + ], + "spans": [ + { + "bbox": [ + 313, + 596, + 416, + 607 + ], + "type": "text", + "content": "5.4. Ablation Studies" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 313, + 614, + 554, + 687 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 614, + 554, + 687 + ], + "spans": [ + { + "bbox": [ + 313, + 614, + 554, + 687 + ], + "type": "text", + "content": "Unless otherwise specified, we conduct ablation studies by training TIDE for 30K iterations. We synthesize three samples for each caption, as described in Sec. 4.5. We conduct ablation studies on the USIS10K dataset with SegFormer-B4 for semantic segmentation and the D3 and D5 subsets of the Sea-thru dataset with NewCRFs for depth estimation." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 313, + 689, + 554, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 689, + 554, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 689, + 554, + 713 + ], + "type": "text", + "content": "Ablation on the effectiveness of each component. 
We first evaluate the contribution of each component within" + } + ] + } + ], + "index": 20 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 297, + 749, + 312, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 297, + 749, + 312, + 757 + ], + "spans": [ + { + "bbox": [ + 297, + 749, + 312, + 757 + ], + "type": "text", + "content": "967" + } + ] + } + ], + "index": 21 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 294, + 239 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 294, + 239 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 294, + 239 + ], + "type": "text", + "content": "TIDE, as shown in Tab. 4. When utilizing only the Implicit Layout Sharing (ILS) mechanism or Time Adaptive Normalization (TAN), the former outperforms the latter in depth estimation and semantic segmentation. Combining both methods results in a significant improvement " + }, + { + "bbox": [ + 55, + 72, + 294, + 239 + ], + "type": "inline_equation", + "content": "(36.8\\% \\rightarrow 42.1\\%)" + }, + { + "bbox": [ + 55, + 72, + 294, + 239 + ], + "type": "text", + "content": " in semantic segmentation. These results indicate that ILS and TAN are complementary methods. By combining them for end-to-end training, the consistency between images and dense annotations can be further optimized. Additionally, we further demonstrate the effectiveness of the time-adaptive operation. As shown in the last row of Tab. 4, without time-adaptive parameters, the quality of the generated data will be varying degrees of degradation, especially for the semantic segmentation task." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 239, + 294, + 358 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 239, + 294, + 358 + ], + "spans": [ + { + "bbox": [ + 55, + 239, + 294, + 358 + ], + "type": "text", + "content": "Ablation on the position of components. We then study the effect of the position of ILS and TAN, as shown in Tab. 5. We find that applying the ILS and TAN mechanisms in the first half of the transformer of text-to-image yields better performance than using them in the second half. This can be attributed to the layout information produced in the first half of the transformer, which is mismatched with the ILS introduced in the latter part. Meanwhile, the results demonstrate that combining both achieves better consistency between the image and dense annotations." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 362, + 295, + 469 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 362, + 295, + 469 + ], + "spans": [ + { + "bbox": [ + 55, + 362, + 295, + 469 + ], + "type": "text", + "content": "Ablation on data scaling. Finally, we synthesize " + }, + { + "bbox": [ + 55, + 362, + 295, + 469 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 55, + 362, + 295, + 469 + ], + "type": "text", + "content": " samples for each caption to validate the impact of synthetic data scale on underwater dense prediction tasks, as shown in Tab. 6. It can be observed that as the amount of synthetic data increases, there is no substantial improvement in the underwater depth estimation task. 
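Under the {Start, End} notation of Tab. 5, the block indices that receive ILS and TAN can be enumerated as below; this helper is hypothetical and assumes 28 transformer blocks indexed 0-27, with the step size of 3 stated in the table caption.

```python
# Hypothetical helper: which transformer blocks apply ILS/TAN for a given
# {start, end} setting, stepping every 3 blocks (Tab. 5).
def ils_tan_block_ids(start: int, end: int, step: int = 3) -> list:
    return list(range(start, end + 1, step))

print(ils_tan_block_ids(0, 27))  # [0, 3, ..., 27] -> the full setting {0, 27}
print(ils_tan_block_ids(0, 12))  # first half only -> {0, 12}
```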
Ablation on data scaling. Finally, we synthesize $N$ samples for each caption to validate the impact of synthetic data scale on underwater dense prediction tasks, as shown in Tab. 6. It can be observed that as the amount of synthetic data increases, there is no substantial improvement in the underwater depth estimation task. However, for the underwater semantic segmentation task, a significant gain is observed in the early stages as $N$ increases, but the improvement begins to flatten after $N = 6$.

5.5. More Challenging Underwater Data Synthesis

We validate whether TIDE can generate more challenging data by adding extra text prompts about underwater lighting or water quality (e.g., low light, turbidity) to the original underwater scene caption. As shown in Fig. 5, the results demonstrate that TIDE can generate more challenging underwater images. While annotating these underwater images may be extremely difficult for humans, TIDE can effortlessly produce highly consistent and accurate dense annotations, which hold great practical value for real-world underwater applications. In addition, to demonstrate the diversity of the generated underwater data, we generate twelve underwater images from the same text prompt, as shown in Fig. 6. It can be observed that, despite sharing the same text prompt, the generated images exhibit rich diversity.

Figure 5. More challenging underwater data generated by TIDE.

Figure 6. Visualization of generated data diversity.

5.6. Limitation

Despite the promising results achieved, our method still has some limitations. First, our approach cannot generate instance-level semantic masks from the generation perspective; relying on text prompts to guide the generation of instance-level masks with semantic annotations remains challenging. Additionally, although TIDE can leverage the powerful priors of pre-trained T2I models to generate highly challenging underwater images (e.g., low light, turbidity), there is still room for improvement. These will be key directions for future work.

6. Conclusion

This paper introduces a unified text-to-image and dense annotation generation model for underwater scenes. The model can generate realistic underwater images and multiple highly consistent dense annotations using only text prompts as input. We validate the effectiveness of our method on underwater depth estimation and semantic segmentation tasks by synthesizing a large-scale underwater dataset containing images along with highly consistent depth and semantic segmentation annotations. In the depth estimation task, extensive experimental results show that our method, using only text as input, achieves highly competitive results compared to previous methods that required stronger dense conditions for underwater depth synthesis. Meanwhile, pre-training with data synthesized by our method further improves model performance on the semantic segmentation task. Our study provides a new perspective for alleviating data scarcity in other fields.

Acknowledgement. This work was supported by the NSFC (Grant U234120202 and 62225603).
+ } + ] + } + ], + "index": 14 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 297, + 749, + 312, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 297, + 749, + 312, + 757 + ], + "spans": [ + { + "bbox": [ + 297, + 749, + 312, + 757 + ], + "type": "text", + "content": "968" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 71, + 115, + 83 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 71, + 115, + 83 + ], + "spans": [ + { + "bbox": [ + 56, + 71, + 115, + 83 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 57, + 91, + 296, + 713 + ], + "type": "list", + "angle": 0, + "index": 14, + "blocks": [ + { + "bbox": [ + 61, + 91, + 296, + 135 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 91, + 296, + 135 + ], + "spans": [ + { + "bbox": [ + 61, + 91, + 296, + 135 + ], + "type": "text", + "content": "[1] Ashutosh Agarwal and Chetan Arora. Attention attention everywhere: Monocular depth prediction with skip attention. In Proc. of IEEE Winter Conf. on Applications of Computer Vision, pages 5861-5870, 2023. 6, 7" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 61, + 137, + 296, + 190 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 137, + 296, + 190 + ], + "spans": [ + { + "bbox": [ + 61, + 137, + 296, + 190 + ], + "type": "text", + "content": "[2] Jiwoon Ahn and Suha Kwak. Learning pixel-level semantic affinity with image-level supervision for weakly supervised semantic segmentation. In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 4981-4990, 2018. 3" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 62, + 194, + 296, + 236 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 194, + 296, + 236 + ], + "spans": [ + { + "bbox": [ + 62, + 194, + 296, + 236 + ], + "type": "text", + "content": "[3] Derya Akkaynak and Tali Treibitz. Sea-thru: A method for removing water from underwater images. In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 1682-1691, 2019. 2, 6" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 62, + 239, + 295, + 292 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 239, + 295, + 292 + ], + "spans": [ + { + "bbox": [ + 62, + 239, + 295, + 292 + ], + "type": "text", + "content": "[4] Dana Berman, Deborah Levy, Shai Avidan, and Tali Treibitz. Underwater single image color restoration using haze-lines and a new quantitative dataset. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(8):2822-2837, 2020. 6" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 62, + 295, + 296, + 338 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 295, + 296, + 338 + ], + "spans": [ + { + "bbox": [ + 62, + 295, + 296, + 338 + ], + "type": "text", + "content": "[5] Shariq Farooq Bhat, Ibrahim Alhashim, and Peter Wonka. Adabins: Depth estimation using adaptive bins. In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 4009-4018, 2021. 
6, 7" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 62, + 341, + 295, + 394 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 341, + 295, + 394 + ], + "spans": [ + { + "bbox": [ + 62, + 341, + 295, + 394 + ], + "type": "text", + "content": "[6] Junsong Chen, Jincheng Yu, Chongjian Ge, Lewei Yao, Enze Xie, Yue Wu, Zhongdao Wang, James Kwok, Ping Luo, Huchuan Lu, and Zhenguo Li. Pixart-α: Fast training of diffusion transformer for photorealistic text-to-image synthesis, 2024. 2, 3, 4, 6" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 62, + 396, + 295, + 441 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 396, + 295, + 441 + ], + "spans": [ + { + "bbox": [ + 62, + 396, + 295, + 441 + ], + "type": "text", + "content": "[7] Zhe Chen, Yuchen Duan, Wenhai Wang, Junjun He, Tong Lu, Jifeng Dai, and Yu Qiao. Vision transformer adapter for dense predictions. In Proc. of Intl. Conf. on Learning Representations, 2023. 2, 7" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 62, + 442, + 296, + 496 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 442, + 296, + 496 + ], + "spans": [ + { + "bbox": [ + 62, + 442, + 296, + 496 + ], + "type": "text", + "content": "[8] Bowen Cheng, Ishan Misra, Alexander G Schwing, Alexander Kirillov, and Rohit Girdhar. Masked-attention mask transformer for universal image segmentation. In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 1290–1299, 2022. 7" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 62, + 498, + 295, + 552 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 498, + 295, + 552 + ], + "spans": [ + { + "bbox": [ + 62, + 498, + 295, + 552 + ], + "type": "text", + "content": "[9] Paulo LJ Drews, Erickson R Nascimento, Silvia SC Botelho, and Mario Fernando Montenegro Campos. Underwater depth estimation and image restoration based on single images. IEEE computer graphics and applications, 36(2):24-35, 2016. 3" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 57, + 555, + 295, + 588 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 555, + 295, + 588 + ], + "spans": [ + { + "bbox": [ + 57, + 555, + 295, + 588 + ], + "type": "text", + "content": "[10] Honey Gupta and Kaushik Mitra. Unsupervised single image underwater depth estimation. In Proc. of IEEE Intl. Conf. on Image Processing, pages 624-628. IEEE, 2019. 3" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 57, + 590, + 295, + 622 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 590, + 295, + 622 + ], + "spans": [ + { + "bbox": [ + 57, + 590, + 295, + 622 + ], + "type": "text", + "content": "[11] Praful Hamberde, Subrahmanyam Murala, and Abhinav Dhall. Uw-gan: Single-image depth estimation and image enhancement for underwater images. 70:1-12, 2021. 3" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 57, + 624, + 295, + 666 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 624, + 295, + 666 + ], + "spans": [ + { + "bbox": [ + 57, + 624, + 295, + 666 + ], + "type": "text", + "content": "[12] Kaiming He, Jian Sun, and Xiaou Tang. Single image haze removal using dark channel prior. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(12):2341-2353, 2010. 
3" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 57, + 670, + 295, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 670, + 295, + 713 + ], + "spans": [ + { + "bbox": [ + 57, + 670, + 295, + 713 + ], + "type": "text", + "content": "[13] Ruifei He, Shuyang Sun, Xin Yu, Chuhui Xue, Wenqing Zhang, Philip Torr, Song Bai, and Xiaojuan Qi. Is synthetic data from generative models ready for image recognition? In Proc. of Intl. Conf. on Learning Representations, 2023. 3" + } + ] + } + ], + "index": 13 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 73, + 554, + 713 + ], + "type": "list", + "angle": 0, + "index": 27, + "blocks": [ + { + "bbox": [ + 316, + 73, + 554, + 106 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 73, + 554, + 106 + ], + "spans": [ + { + "bbox": [ + 316, + 73, + 554, + 106 + ], + "type": "text", + "content": "[14] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Proc. of Advances in Neural Information Processing Systems, 33:6840-6851, 2020. 2, 3" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 317, + 107, + 554, + 150 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 107, + 554, + 150 + ], + "spans": [ + { + "bbox": [ + 317, + 107, + 554, + 150 + ], + "type": "text", + "content": "[15] Edward J Hu, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. Lora: Low-rank adaptation of large language models. In Proc. of Intl. Conf. on Learning Representations, 2022. 5, 7" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 317, + 152, + 554, + 215 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 152, + 554, + 215 + ], + "spans": [ + { + "bbox": [ + 317, + 152, + 554, + 215 + ], + "type": "text", + "content": "[16] Md Jahidul Islam, Chelsey Edge, Yuyang Xiao, Peigen Luo, Muntaqim Mehtaz, Christopher Morse, Sadman Sakib Enan, and Junaed Sattar. Semantic segmentation of underwater imagery: Dataset and benchmark. In Proc. of the IEEE Int. Conf. on Intelligent Robots and Systems, pages 1769-1776. IEEE, 2020. 3, 4" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 316, + 217, + 554, + 270 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 217, + 554, + 270 + ], + "spans": [ + { + "bbox": [ + 316, + 217, + 554, + 270 + ], + "type": "text", + "content": "[17] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment anything. In Proc. of IEEE Intl. Conf. on Computer Vision, pages 4015-4026, 2023. 3" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 317, + 272, + 554, + 326 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 272, + 554, + 326 + ], + "spans": [ + { + "bbox": [ + 317, + 272, + 554, + 326 + ], + "type": "text", + "content": "[18] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In Proc. of Intl. Conf. on Machine Learning, pages 19730-19742. PMLR, 2023. 
4" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 316, + 327, + 554, + 371 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 327, + 554, + 371 + ], + "spans": [ + { + "bbox": [ + 316, + 327, + 554, + 371 + ], + "type": "text", + "content": "[19] Shijie Lian, Hua Li, Runmin Cong, Suqi Li, Wei Zhang, and Sam Kwong. Watermask: Instance segmentation for underwater imagery. In Proc. of IEEE Intl. Conf. on Computer Vision, pages 1305-1315, 2023. 3, 4, 6" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 316, + 372, + 554, + 426 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 372, + 554, + 426 + ], + "spans": [ + { + "bbox": [ + 316, + 372, + 554, + 426 + ], + "type": "text", + "content": "[20] Shijie Lian, Ziyi Zhang, Hua Li, Wenjie Li, Laurence Tianruo Yang, Sam Kwong, and Runmin Cong. Diving into underwater: Segment anything model guided underwater salient instance segmentation and a large-scale dataset. In Proc. of Intl. Conf. on Machine Learning, 2024. 2, 3, 4, 6" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 317, + 427, + 554, + 480 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 427, + 554, + 480 + ], + "spans": [ + { + "bbox": [ + 317, + 427, + 554, + 480 + ], + "type": "text", + "content": "[21] Zhengyao Lv, Yuxiang Wei, Wangmeng Zuo, and Kwan-Yee K Wong. Place: Adaptive layout-semantic fusion for semantic image synthesis. In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 9264-9274, 2024. 3, 4" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 317, + 482, + 554, + 525 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 482, + 554, + 525 + ], + "spans": [ + { + "bbox": [ + 317, + 482, + 554, + 525 + ], + "type": "text", + "content": "[22] Quang Nguyen, Truong Vu, Anh Tran, and Khoi Nguyen. Dataset diffusion: Diffusion-based synthetic data generation for pixel-level semantic segmentation. In Proc. of Advances in Neural Information Processing Systems, 2024. 2, 3" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 317, + 526, + 554, + 591 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 526, + 554, + 591 + ], + "spans": [ + { + "bbox": [ + 317, + 526, + 554, + 591 + ], + "type": "text", + "content": "[23] Alexander Quinn Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, and Mark Chen. Glide: Towards photorealistic image generation and editing with text-guided diffusion models. In Proc. of Intl. Conf. on Machine Learning, pages 16784-16804. PMLR, 2022. 3" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 317, + 592, + 554, + 657 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 592, + 554, + 657 + ], + "spans": [ + { + "bbox": [ + 317, + 592, + 554, + 657 + ], + "type": "text", + "content": "[24] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In Proc. of Intl. Conf. on Machine Learning, pages 8748-8763. PMLR, 2021. 
4" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 317, + 658, + 554, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 658, + 554, + 713 + ], + "spans": [ + { + "bbox": [ + 317, + 658, + 554, + 713 + ], + "type": "text", + "content": "[25] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of machine learning research, 21(140):1-67, 2020. 4" + } + ] + } + ], + "index": 26 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 298, + 749, + 312, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 298, + 749, + 312, + 757 + ], + "spans": [ + { + "bbox": [ + 298, + 749, + 312, + 757 + ], + "type": "text", + "content": "969" + } + ] + } + ], + "index": 28 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 73, + 294, + 712 + ], + "type": "list", + "angle": 0, + "index": 12, + "blocks": [ + { + "bbox": [ + 56, + 73, + 294, + 116 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 73, + 294, + 116 + ], + "spans": [ + { + "bbox": [ + 56, + 73, + 294, + 116 + ], + "type": "text", + "content": "[26] Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125, 1 (2):3, 2022. 3" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 118, + 294, + 173 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 118, + 294, + 173 + ], + "spans": [ + { + "bbox": [ + 56, + 118, + 294, + 173 + ], + "type": "text", + "content": "[27] Nikhila Ravi, Valentin Gabeur, Yuan-Ting Hu, Ronghang Hu, Chaitanya Ryali, Tengyu Ma, Haitham Khedr, Roman Radle, Chloe Rolland, Laura Gustafson, et al. Sam 2: Segment anything in images and videos. In Proc. of Intl. Conf. on Learning Representations, 2025. 3" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 175, + 294, + 228 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 175, + 294, + 228 + ], + "spans": [ + { + "bbox": [ + 56, + 175, + 294, + 228 + ], + "type": "text", + "content": "[28] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 10684-10695, 2022. 2, 3, 4" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 56, + 231, + 294, + 263 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 231, + 294, + 263 + ], + "spans": [ + { + "bbox": [ + 56, + 231, + 294, + 263 + ], + "type": "text", + "content": "[29] Aaron Van Den Oord, Oriol Vinyals, et al. Neural discrete representation learning. In Proc. of Advances in Neural Information Processing Systems, 2017. 3, 7" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 56, + 265, + 294, + 341 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 265, + 294, + 341 + ], + "spans": [ + { + "bbox": [ + 56, + 265, + 294, + 341 + ], + "type": "text", + "content": "[30] Yibo Wang, Ruiyuan Gao, Kai Chen, Kaiqiang Zhou, Yingjie Cai, Lanqing Hong, Zhenguo Li, Lihui Jiang, DitYan Yeung, Qiang Xu, et al. Detdiffusion: Synergizing generative and perceptive models for enhanced data generation and perception. 
In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 7246-7255, 2024. 2, 3" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 56, + 343, + 294, + 408 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 343, + 294, + 408 + ], + "spans": [ + { + "bbox": [ + 56, + 343, + 294, + 408 + ], + "type": "text", + "content": "[31] Weijia Wu, Yuzhong Zhao, Hao Chen, Yuchao Gu, Rui Zhao, Yefei He, Hong Zhou, Mike Zheng Shou, and Chunhua Shen. Datasetdm: Synthesizing data with perception annotations using diffusion models. Proc. of Advances in Neural Information Processing Systems, 36:54683-54695, 2023. 2, 3" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 56, + 411, + 294, + 464 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 411, + 294, + 464 + ], + "spans": [ + { + "bbox": [ + 56, + 411, + 294, + 464 + ], + "type": "text", + "content": "[32] Weijia Wu, Yuzhong Zhao, Mike Zheng Shou, Hong Zhou, and Chunhua Shen. Diffumask: Synthesizing images with pixel-level annotations for semantic segmentation using diffusion models. In Proc. of IEEE Intl. Conf. on Computer Vision, pages 1206-1217, 2023. 3" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 56, + 467, + 294, + 509 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 467, + 294, + 509 + ], + "spans": [ + { + "bbox": [ + 56, + 467, + 294, + 509 + ], + "type": "text", + "content": "[33] Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M Alvarez, and Ping Luo. Segformer: Simple and efficient design for semantic segmentation with transformers. 34:12077-12090, 2021. 7" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 56, + 512, + 294, + 566 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 512, + 294, + 566 + ], + "spans": [ + { + "bbox": [ + 56, + 512, + 294, + 566 + ], + "type": "text", + "content": "[34] Zhenda Xie, Zigang Geng, Jingcheng Hu, Zheng Zhang, Han Hu, and Yue Cao. Revealing the dark secrets of masked image modeling. In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 14475-14485, 2023. 6, 7" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 56, + 568, + 294, + 611 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 568, + 294, + 611 + ], + "spans": [ + { + "bbox": [ + 56, + 568, + 294, + 611 + ], + "type": "text", + "content": "[35] Han Xue, Zhiwu Huang, Qianru Sun, Li Song, and Wenjun Zhang. Freestyle layout-to-image synthesis. In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 14256-14266, 2023. 3, 4" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 56, + 613, + 294, + 656 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 613, + 294, + 656 + ], + "spans": [ + { + "bbox": [ + 56, + 613, + 294, + 656 + ], + "type": "text", + "content": "[36] Tianyu Yan, Zifu Wan, Xinhao Deng, Pingping Zhang, Yang Liu, and Huchuan Lu. Mas-sam: Segment any marine animal with aggregated features. In Proc. of Intl. Joint Conf. on Artificial Intelligence, 2024. 3" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 56, + 658, + 294, + 712 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 658, + 294, + 712 + ], + "spans": [ + { + "bbox": [ + 56, + 658, + 294, + 712 + ], + "type": "text", + "content": "[37] Lihe Yang, Xiaogang Xu, Bingyi Kang, Yinghuan Shi, and Hengshuang Zhao. Freemask: Synthetic images with dense annotations make stronger segmentation models. 
Proc. of Advances in Neural Information Processing Systems, 36, 2023. 2, 3, 7" + } + ] + } + ], + "index": 11 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 73, + 553, + 362 + ], + "type": "list", + "angle": 0, + "index": 19, + "blocks": [ + { + "bbox": [ + 316, + 73, + 553, + 116 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 73, + 553, + 116 + ], + "spans": [ + { + "bbox": [ + 316, + 73, + 553, + 116 + ], + "type": "text", + "content": "[38] Lihe Yang, Bingyi Kang, Zilong Huang, Zhen Zhao, Xiaogang Xu, Jiashi Feng, and Hengshuang Zhao. Depth anything v2. In Proc. of Advances in Neural Information Processing Systems, 2024. 4" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 316, + 118, + 553, + 161 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 118, + 553, + 161 + ], + "spans": [ + { + "bbox": [ + 316, + 118, + 553, + 161 + ], + "type": "text", + "content": "[39] Hanrong Ye, Jason Kuen, Qing Liu, Zhe Lin, Brian Price, and Dan Xu. Seggen: Supercharging segmentation models with text2mask and mask2img synthesis. arXiv preprint arXiv:2311.03355, 2023. 3" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 316, + 163, + 553, + 216 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 163, + 553, + 216 + ], + "spans": [ + { + "bbox": [ + 316, + 163, + 553, + 216 + ], + "type": "text", + "content": "[40] Weihao Yuan, Xiaodong Gu, Zuozhuo Dai, Siyu Zhu, and Ping Tan. Neural window fully-connected crfs for monocular depth estimation. In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 3916-3925, 2022. 2, 6" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 316, + 219, + 553, + 262 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 219, + 553, + 262 + ], + "spans": [ + { + "bbox": [ + 316, + 219, + 553, + 262 + ], + "type": "text", + "content": "[41] Fan Zhang, Shaodi You, Yu Li, and Ying Fu. Atlantis: Enabling underwater depth estimation with stable diffusion. In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 11852-11861, 2024. 2, 3, 6, 7" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 316, + 264, + 553, + 306 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 264, + 553, + 306 + ], + "spans": [ + { + "bbox": [ + 316, + 264, + 553, + 306 + ], + "type": "text", + "content": "[42] Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models. In Proc. of IEEE Intl. Conf. on Computer Vision, pages 3836-3847, 2023. 2, 3" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 316, + 308, + 553, + 362 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 308, + 553, + 362 + ], + "spans": [ + { + "bbox": [ + 316, + 308, + 553, + 362 + ], + "type": "text", + "content": "[43] Pingping Zhang, Tianyu Yan, Yang Liu, and Huchuan Lu. Fantastic animals and where to find them: Segment any marine animal with dual sam. In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 2578-2587, 2024. 
3" + } + ] + } + ], + "index": 18 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 297, + 748, + 313, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 297, + 748, + 313, + 757 + ], + "spans": [ + { + "bbox": [ + 297, + 748, + 313, + 757 + ], + "type": "text", + "content": "970" + } + ] + } + ], + "index": 20 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2025/A Unified Latent Schrodinger Bridge Diffusion Model for Unsupervised Anomaly Detection and Localization/0371bea6-a128-4eff-9f4f-dffd7eab7a85_content_list.json b/2025/A Unified Latent Schrodinger Bridge Diffusion Model for Unsupervised Anomaly Detection and Localization/0371bea6-a128-4eff-9f4f-dffd7eab7a85_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..237cb3fcb6a5593b876c4928958fdab21dbe10a2 --- /dev/null +++ b/2025/A Unified Latent Schrodinger Bridge Diffusion Model for Unsupervised Anomaly Detection and Localization/0371bea6-a128-4eff-9f4f-dffd7eab7a85_content_list.json @@ -0,0 +1,1500 @@ +[ + { + "type": "text", + "text": "A Unified Latent Schrödinger Bridge Diffusion Model for Unsupervised Anomaly Detection and Localization", + "text_level": 1, + "bbox": [ + 137, + 128, + 859, + 176 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Shilhora Akshay $^{1,*}$ Niveditha Lakshmi Narasimhan $^{2}$ Jacob George $^{2}$ Vineeth N Balasubramanian $^{1}$ $^{1}$ Indian Institute of Technology, Hyderabad $^{2}$ KLA Corporation *shilhora.akshay333@gmail.com", + "bbox": [ + 181, + 202, + 815, + 275 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 246, + 308, + 326, + 325 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Anomaly detection and localization remain pivotal challenges in computer vision, with applications ranging from industrial inspection to medical diagnostics. While current supervised methods offer high precision, they are often impractical due to the scarcity of annotated data and the infrequent occurrence of anomalies. Recent advancements in unsupervised approaches, particularly reconstruction-based methods, have addressed these issues by training models exclusively on normal data, enabling them to identify anomalies during inference. However, these methods frequently rely on auxiliary networks or specialized adaptations, which can limit their robustness and practicality. This work introduces the Latent Anomaly Schrödinger Bridge (LASB), a unified unsupervised anomaly detection model that operates entirely in the latent space without requiring additional networks or custom modifications. LASB transforms anomaly images into normal images by preserving structural integrity across varying anomaly classes, lighting, and pose conditions, making it highly robust and versatile. Unlike previous methods, LASB does not focus solely on reconstructing anomaly features, but emphasizes anomaly transformation, achieving smooth anomaly-to-normal image conversions. Our method achieves state-of-the-art performance on both the MVTec-AD and VisA datasets, excelling in detection and localization tasks. Our code is available at https://github.com/ShilhoraAkshayPatel/LASB.", + "bbox": [ + 89, + 340, + 483, + 750 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1. 
Introduction", + "text_level": 1, + "bbox": [ + 91, + 782, + 220, + 799 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Anomaly detection and localization is a critical task in computer vision with several applications in medical [4, 18, 52, 53] and industrial [21, 31, 43, 60], which has attracted significant attention from the research community. The main goal of anomaly detection is to identify and localize abnormal patterns that deviate from those seen in normal", + "bbox": [ + 89, + 809, + 483, + 901 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/348132e01528d7ebd2ebe824dd596fe2dffeeb94379f6c2db810f1dcd844479c.jpg", + "image_caption": [ + "Figure 1. Illustration of the Gaussian and Bridge diffusion processes, with reverse image trajectory from LASB. LASB learns a direct diffusion bridge between anomaly and normal distributions, enhancing interpretability and anomaly-free transformation." + ], + "image_footnote": [], + "bbox": [ + 517, + 305, + 903, + 470 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "instances. In the past few years, the research community has proposed various supervised anomaly detection methods [15, 24, 30, 44, 50, 59]. The extensive need for annotations is expensive, and the infrequent presence of anomalous samples makes these methods unsuitable for practical applications due to their limitations in addressing real-world scenarios.", + "bbox": [ + 511, + 564, + 906, + 669 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Recent research has sought to overcome the limitations of supervised anomaly detection methods by relying solely on normal images during training and localizing abnormal patterns at inference time. Among the various categories within the unsupervised paradigm, reconstruction-based methods have garnered significant attention due to their promising results and strong performance in real-world scenarios. The core concept behind these methods is that the model is trained exclusively on normal images and, during inference, it reconstructs abnormal samples as normal samples. Doing so benefits in two directions: first, one can interpret the model's ability to detect and localize anomalies; second, it aids the model in performing better while reconstructing the anomalies. Recent advancements have introduced novel approaches lever", + "bbox": [ + 511, + 674, + 908, + 902 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "CVF", + "bbox": [ + 106, + 2, + 181, + 42 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore.", + "bbox": [ + 236, + 0, + 810, + 46 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "25528", + "bbox": [ + 478, + 944, + 519, + 957 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "aging AutoEncoder-based (AE) [3, 6, 56-58], Generative Adversarial Networks-based (GANs) [1, 26, 45, 62], and diffusion-based approaches [4, 18, 21, 31, 43, 52, 60]. Our research specifically focuses on various diffusion models tailored for anomaly detection tasks.", + "bbox": [ + 89, + 90, + 480, + 167 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Existing diffusion-based methods primarily focus on developing novel noise-conditioning techniques, often incorporating an additional discriminative sub-network [60]. 
Another approach utilizes a score-based diffusion model to identify anomalies by evaluating how effectively samples can return to the normal data distribution after perturbation, though it fails to deliver competitive performance [43]. Additionally, DiAD [21] proposes novel semantic guidance to specifically understand the anomalies and showcase its robustness. However, earlier diffusion-based methods generally depend on auxiliary networks or tailored diffusion processes to extract anomaly features. Moreover, most of these methods prioritize the extraction and reconstruction of anomaly features.", + "bbox": [ + 89, + 176, + 482, + 387 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In this work, we introduce an intuitive mechanism that neither requires model-specific adaptations to extract anomaly features nor relies on an additional feature extractor during training or inference. By leveraging the capabilities of the Schrödinger Bridge [8, 11, 14, 42, 51], we propose the Latent Anomaly Schrödinger Bridge (LASB), which operates entirely in the latent space, transforming anomalous images into normal ones, regardless of anomaly class, and demonstrating robustness to variations in lighting and pose. In the following, we outline the key contributions of our study.", + "bbox": [ + 89, + 396, + 482, + 561 + ], + "page_idx": 1 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- We propose LASB, an unsupervised bridge-based anomaly detection model that operates in latent space. LASB transforms anomaly images into normal images, regardless of the anomaly class.", + "- Unified Framework: LASB offers a comprehensive framework for both anomaly detection and localization without the need for auxiliary networks. Unlike conventional methods that rely on Gaussian diffusion, LASB employs a bridge-based diffusion process that inherently preserves structural integrity, marking the first application of such a process in a latent diffusion model for anomaly detection.", + "- Efficient and Scalable: By utilizing the Linear Schrödinger Bridge in latent space, LASB significantly reduces training time, memory consumption, and sampling time, enhancing overall efficiency and scalability.", + "- Our method achieves state-of-the-art performance on the MVTec dataset, with an image-level $\\mathrm{AUROC}_{cls} / \\mathrm{AP}_{cls}$ of $99.2\\% / 99.3\\%$ and a pixel-level $\\mathrm{AUROC}_{seg} / \\mathrm{AP}_{seg}$ of $98.6\\% / 78.2\\%$ on the test set, establishing a new benchmark for unsupervised anomaly detection." + ], + "bbox": [ + 89, + 583, + 478, + 898 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2. Related Work", + "text_level": 1, + "bbox": [ + 513, + 89, + 653, + 104 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Reconstruction-based Methods: Reconstruction-based anomaly detection models operate on the premise that networks trained exclusively on normal images will fail to accurately reconstruct anomalous ones due to their unfamiliarity with the abnormal distribution. Notably, autoencoder [6, 10, 28] and Generative Adversarial Network (GAN) [1, 41] frameworks have been widely utilized for this task, where anomaly scores are computed based on the reconstruction error between the input and its generated counterpart. 
Nevertheless, such approaches frequently struggle with direct copy issues and elevated false positive rates due to their limited generalization capacity over anomalous regions or due to the overgeneralization capacity of reconstructive methods. To mitigate these generalization challenges, recent innovations have pivoted towards inpainting strategies [35, 57] or integrating supplementary memory modules [17, 47]. Furthermore, the DRAEM [56] model enhances performance by leveraging pseudoanomalies and coupling autoencoders with a segmentation network, though its effectiveness diminishes when faced with substantial deviations between real and pseudo anomalies.", + "bbox": [ + 511, + 111, + 903, + 441 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Diffusion-based Methods: Diffusion models, known for their high-fidelity image synthesis, are increasingly applied to anomaly detection in both medical and industrial domains [18, 21, 43, 52, 60]. In the medical field, SANO [18] leverages score-based diffusion models to localize skin conditions such as eczema, while AnoDDPM [52] employs the Denoising Diffusion Probabilistic Model (DDPM) [23] to segment tumors, initiating with simplex noise. These methods primarily utilize standard DDPM [23] or Score-based Generative Models (SGMs) [46], yet they often struggle to capture structural features because they rely on starting from pure Gaussian or simplex noise and frequently require external guidance for complex feature integration. In industrial applications, DiffAD [60] introduces a noise interpolation technique coupled with a discriminative sub-network to enhance detection capabilities. AD-SPR [43] emphasizes the significance of perturbed resilience using SGMs [46] to identify anomalies. Additionally, DiAD [21] presents a novel semantic guidance network that directs a stable diffusion model [37] to effectively detect and localize anomalies in both industrial and selected medical imaging applications. However, these models often require class-specific training or additional networks to guide the diffusion process or to perform further segmentation via a discriminative sub-network.", + "bbox": [ + 511, + 453, + 903, + 829 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Unified/Multi-class Methods: Early approaches in the literature developed models on a per-class basis, which limited their generalizability. More recent research has shifted towards unified or multi-class models that are trained on", + "bbox": [ + 511, + 839, + 903, + 898 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "25529", + "bbox": [ + 478, + 945, + 517, + 955 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/4fb7253a32f925bb900a6dde4f0bd8f50e6ac343e6b27b7991abe1f462799f5d.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 98, + 61, + 890, + 193 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/e4ffb779d8ba2249faef708cb0ab8c73765de979b76a5380378ff518a1c2e26e.jpg", + "image_caption": [ + "Figure 2. The LASB model framework consists of two key stages: training and inference. During training, anomaly augmentations are applied to images, introducing distortions that the LASB model learns to remove, ultimately reconstructing a normal image. This iterative process continues until the model effectively filters out the anomalies. In the inference stage, the model processes real anomalous images, reconstructing normal versions. 
The anomaly detection is achieved by computing the difference $(p_B - p_A)$ between the original and reconstructed images, with anomalies visualized via a heatmap." + ], + "image_footnote": [], + "bbox": [ + 98, + 196, + 264, + 306 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/2168f79c8e9cbdaec09dbc1aa9abacddcfda286c1ca5b93866ccc3559d979ead.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 271, + 196, + 888, + 306 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "entire datasets, demonstrating improved robustness [21, 22, 32, 55]. For instance, UniAD [55] utilizes layer-wise query decoding, neighbor-masked attention, and feature jittering techniques to enhance multi-class anomaly detection. In contrast, HVQ-Trans [32] employs a hierarchical vector quantized prototype-oriented Transformer to improve discriminative capacity across multiple classes. Both methods focus on tackling the "identical shortcut" issue. On the other hand, RLR [22] introduces a novel framework using learnable reference representations with locality constraints to explicitly learn normal patterns and circumvent "learning shortcuts", achieving superior results on standard datasets (MVTec-AD and VisA). Additionally, the DiAD [21] model, designed for multi-class settings and based on diffusion-based reconstruction, outperforms UniAD [55], RLR [22], and HVQ-Trans [32] in terms of localization and detection performance.", + "bbox": [ + 88, + 405, + 485, + 662 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3. Proposed Methodology", + "text_level": 1, + "bbox": [ + 89, + 676, + 307, + 694 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.1. Preliminaries", + "text_level": 1, + "bbox": [ + 89, + 700, + 232, + 715 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Before introducing the full methodology, we begin with a preliminary overview of score-based generative models (SGMs). Next, we explain the Schrödinger Bridge concept and discuss its connection to SGMs. Finally, we present the core methodology, detailing how and why the latent Schrödinger Bridge effectively addresses anomaly detection and localization.", + "bbox": [ + 89, + 723, + 483, + 829 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Notations. Here we introduce some notations and use them throughout the work. Let $X_{t}(\in \mathbb{R}^{d})$ denote a stochastic process, where $t\in [0,1]$ is a continuous time step. The intermediate steps are uniformly distributed in the interval", + "bbox": [ + 89, + 839, + 483, + 901 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "$(\mathcal{U}(t \sim [0,1]))$ . The initial distribution is the corrupted data distribution, denoted as $p_A$ , and the terminal distribution is the clean data distribution, denoted as $p_B$ . The Wiener process and its reversed counterpart, adopted from Anderson et al. [2], are represented as $W_t$ and $\overline{W}_t$ , respectively, both in $\mathbb{R}^d$ . $\mathbb{I} \in \mathbb{R}^{d \times d}$ is the identity matrix.", + "bbox": [ + 511, + 405, + 906, + 497 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Score-based Generative Models. Score-based Generative Models (SGMs) perturb the data stochastically across continuous noise scales and then use reverse-time stochastic differential equations (SDEs) to learn the reverse-time diffusion process. 
This reverse-time SDE relies on a score function enabling the reconstruction of any distribution starting from a Gaussian [46]. Given data $X_0 \sim p_A$ , forward and backward SDEs are formulated as:", + "bbox": [ + 511, + 506, + 908, + 626 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\nd X _ {t} = f _ {t} \\left(X _ {t}\\right) d t + \\sqrt {\\beta_ {t}} d W _ {t}, \\quad X _ {0} \\sim p _ {A}, \\tag {1}\n$$\n", + "text_format": "latex", + "bbox": [ + 550, + 637, + 903, + 655 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\nd X _ {t} = \\left[ f _ {t} - \\beta_ {t} \\nabla \\log p \\left(X _ {t}, t\\right) \\right] d t + \\sqrt {\\beta_ {t}} d \\bar {W} _ {t}, \\tag {2}\n$$\n", + "text_format": "latex", + "bbox": [ + 552, + 659, + 903, + 676 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "where $f(\cdot ,t):\mathbb{R}^n\to \mathbb{R}^n$ , and the terminal distributions (i.e., at $t = 1$ ) approach a Gaussian $(X_{1}\sim \mathcal{N}(0,I))$ . To achieve this, the diffusion coefficient $\\beta_{t}\\in \\mathbb{R}$ is carefully tuned, ensuring that the base drift $f_{t}$ is linear in $X_{t}$ . Here, $p$ is the marginal density of equation (2) at time $t$ , and $\\nabla \\log p$ denotes its score [46].", + "bbox": [ + 511, + 688, + 906, + 779 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "SGMs as Schrödinger Bridge. The absence of flexibility in SGMs to transport data to a desired distribution demands a more versatile strategy. The Schrödinger Bridge (SB) is a strategy often applied in optimal transport to find the optimal path measure between two marginal densities, and it is expressed as", + "bbox": [ + 511, + 787, + 908, + 878 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\min _ {Q \\in P \\left(p _ {A}, p _ {B}\\right)} \\operatorname {K L} (\\mathbb {Q} | | \\mathbb {P}), \\tag {3}\n$$\n", + "text_format": "latex", + "bbox": [ + 629, + 878, + 903, + 902 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "25530", + "bbox": [ + 478, + 944, + 519, + 957 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Here, $\mathbb{Q}$ represents a path measure within $\mathbb{P}(p_A,p_B)$ , characterized by having marginal densities $p_A$ and $p_B$ at times $t = 0$ and 1, respectively. We consider $\mathbb{P}$ as the reference measure, specifically chosen as the path measure in equation (1). We will further elaborate on the conditions that define the optimality of the SB (as shown in equation (3)) with specified boundary conditions. The optimality condition for SB is characterized by solving PDEs [8, 9]. 
Let $\\Psi (t,x)$ and $\\widehat{\\Psi} (t,x)$ be the solutions to the following PDEs:", + "bbox": [ + 89, + 90, + 483, + 241 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\frac {\\partial \\Psi (z , t)}{\\partial t} = - \\nabla \\Psi^ {T} f - \\frac {1}{2} \\beta \\Delta \\Psi \\tag {4}\n$$\n", + "text_format": "latex", + "bbox": [ + 171, + 250, + 482, + 281 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\frac {\\partial \\widehat {\\Psi} (z , t)}{\\partial t} = - \\nabla \\cdot (\\widehat {\\Psi} f) + \\frac {1}{2} \\beta \\Delta \\widehat {\\Psi} \\tag {5}\n$$\n", + "text_format": "latex", + "bbox": [ + 174, + 282, + 482, + 316 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "subject to the conditions,", + "bbox": [ + 89, + 327, + 259, + 340 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\Psi (z, 0) \\widehat {\\Psi} (z, 0) = p _ {A} (z), \\quad \\Psi (z, 1) \\widehat {\\Psi} (z, 1) = p _ {B} (z).\n$$\n", + "text_format": "latex", + "bbox": [ + 109, + 351, + 465, + 371 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Then, the solution to optimization (3) can be expressed by the path measure of the following forward equation (6), or equivalently backward equation (7), SDE:", + "bbox": [ + 89, + 383, + 483, + 429 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\nd X _ {t} = \\left[ f _ {t} + \\beta_ {t} \\nabla \\log \\Psi (X _ {t}, t) \\right] d t + \\sqrt {\\beta_ {t}} d W _ {t}, \\tag {6}\n$$\n", + "text_format": "latex", + "bbox": [ + 119, + 440, + 482, + 459 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\nd X _ {t} = \\left[ f _ {t} - \\beta_ {t} \\nabla \\log \\hat {\\Psi} \\left(X _ {t}, t\\right) \\right] d t + \\sqrt {\\beta_ {t}} d \\bar {W} _ {t}, \\tag {7}\n$$\n", + "text_format": "latex", + "bbox": [ + 119, + 462, + 482, + 481 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $\\nabla \\log \\Psi$ and $\\nabla \\log \\widehat{\\Psi}$ represent the non-linear optimal forward and backward drifts for the Schrödinger Bridge. This SB has a non-linear behavior and it generalizes the SGMs with variation in prior data. Note that, forward and backward equations (6, 7) of Schrödinger Bridge are same as SGMs forward-backward equations (1, 2) except the forward drift term ( $\\nabla \\log \\Psi$ ). Thus, from the Nelson's Duality [34] we obtain $\\Psi(x,t)\\widehat{\\Psi}(x,t) = q^{\\mathrm{SB}}(x,t)$ . As the backward drift of Schrödinger Bridge does not act as a score function anymore with two indivisible components. Thus, from equation (7) and Nelson's Duality [34] we obtain $dX_{t} = [f_{t} - \\beta_{t}(\\nabla \\log q^{\\mathrm{SB}}(X_{t},t) - \\nabla \\log \\Psi(X_{t},t))]dt + \\sqrt{\\beta_{t}} d\\overline{W}_{t}$ .", + "bbox": [ + 89, + 494, + 483, + 694 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Linear-SB. The constraints posed above cause the model to be intractable and to overcome this impediment existing literature suggests various lines of strategies such as Iterative Proportional Fitting [11, 25], likelihood-based training of SB [8] etc. These methods help the SGMs to construct non-linear diffusion bridges. But, we can make them tractable by posing some linear conditions. A more detailed examination of SB theory with SGM framework reveals that the nonlinear drifts specified in equations (6, 7) correspond to the score functions described in equation (2). 
Assuming that $\\Psi (\\cdot ,t)$ and $\\widehat{\\Psi} (\\cdot ,t)$ function as probability density functions, we reformulate equations (4, 5) to effectively address the solutions of the Fokker-Planck equation [36].", + "bbox": [ + 89, + 704, + 483, + 901 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "The Schrödinger Bridges in equation (4, 5) are satisfied, the backward and forward drifts, $\\nabla \\log \\widehat{\\Psi}(X_t, t)$ and $\\nabla \\log \\Psi(X_t, t)$ represent the score functions for the following linear SDEs, respectively [36][27]:", + "bbox": [ + 511, + 90, + 905, + 151 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\nd X _ {t} = f _ {t} \\left(X _ {t}\\right) d t + \\sqrt {\\beta_ {t}} d W _ {t}, \\quad X _ {0} \\sim \\Psi (\\cdot , 0), \\tag {8}\n$$\n", + "text_format": "latex", + "bbox": [ + 539, + 162, + 906, + 181 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\nd X _ {t} = f _ {t} \\left(X _ {t}\\right) d t + \\sqrt {\\beta_ {t}} d \\overline {{W}} _ {t}, \\quad X _ {1} \\sim \\widehat {\\Psi} (\\cdot , 1). \\qquad (9)\n$$\n", + "text_format": "latex", + "bbox": [ + 540, + 185, + 906, + 204 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "The key characteristic of the linear SDEs presented in equations (8, 9) lies in their differing boundary conditions, resulting in distinct distributions compared to nonlinear SDEs. Sampling is facilitated by parameterizing $\\nabla \\log \\widehat{\\Psi}$ using a score network and applying established SGM methods. However, the computational complexity arises due to the intractability of the boundary conditions $\\Psi(\\cdot, 0)$ and $\\widehat{\\Psi}(\\cdot, 1)$ . To resolve this, we introduce Dirac delta functions as boundary conditions for these previously intractable terms, thus rendering them manageable. Now to address this we impose Dirac delta boundary conditions [36][27]. Assume $p_A(\\cdot) = \\delta_\\kappa(\\cdot)$ is Dirac delta distribution that is centered at $\\kappa \\in \\mathbb{R}^d$ . Then, the initial distributions of the linear SDEs (equation (8,9)) are:", + "bbox": [ + 511, + 215, + 906, + 429 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\np _ {B} = \\Psi (\\cdot , 1) \\widehat {\\Psi} (\\cdot , 1), \\quad \\widehat {\\Psi} (\\cdot , 0) = \\delta_ {\\kappa} (\\cdot), \\tag {10}\n$$\n", + "text_format": "latex", + "bbox": [ + 575, + 439, + 906, + 459 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "The above equation (10) suggests that the optimal backward drift of the process in equation (8) consistently targets the Dirac delta $\\delta_{\\kappa}(\\cdot)$ , achieving convergence to $\\kappa$ independent of $p_B$ . This concept is integrated into the loss function, where the score is recalculated as $\\nabla \\log p(X_t,t|X_0 = \\kappa)$ for each instance $\\kappa$ . This approach enhances computational efficiency and establishes a robust mathematical basis for training $\\nabla \\log \\widehat{\\Psi} (\\cdot)$ .", + "bbox": [ + 511, + 470, + 905, + 592 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.2. Linear Schrödinger Bridge in the Latent Space", + "text_level": 1, + "bbox": [ + 511, + 601, + 905, + 618 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "We now describe how the above discussion on linear Schrödinger bridges can be used for anomaly detection and localization.", + "bbox": [ + 511, + 623, + 906, + 667 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Overall Pipeline. 
Figure 2 illustrates the entire pipeline of the proposed method. Initially, anomaly augmentation [56] is applied to a normal image, which is then processed by the Latent Anomaly Schrödinger Bridge (LASB) model. The LASB model learns the anomaly statistics and transforms the augmented anomaly image back into a normal image. Crucially, the aim of this step is not to learn the anomalies themselves, but rather to reconstruct the normal image regardless of the anomalies introduced during training. During training, various masks and noise patterns are applied to augment the images. After training, the LASB model is evaluated on anomaly images from the test set, with its weights frozen. During the inference phase, the model is presented with an anomaly image from the test", + "bbox": [ + 511, + 688, + 906, + 900 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "25531", + "bbox": [ + 478, + 944, + 517, + 955 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Algorithm 1 Training Procedure", + "bbox": [ + 91, + 90, + 310, + 106 + ], + "page_idx": 4 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1: Input: normal and anomaly latent $p_A^{(z)}(\cdot), p_B^{(z)}(\cdot | z_0)$", + "2: repeat", + "3: $z_0\sim p_A^{(z)}(z_0),z_1\sim p_B^{(z)}(z_1|z_0)$", + "4: $z_{t}\sim q(z_{t}|z_{0},z_{1})$ according to equation (11)", + "5: calculate gradient $\nabla \epsilon (z_t,t;\theta)$ using equation (12)", + "6: until convergence" + ], + "bbox": [ + 104, + 109, + 434, + 195 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "set, which it then reconstructs as a normal image. Both the original anomaly image and its reconstructed normal image are subsequently passed through a difference block. In this step, multi-scale features are extracted using a pre-trained ImageNet model, and the discrepancies between these features are computed to pinpoint regions of interest (ROI). These differences are then detected and localized, following a strategy akin to DiAD [21].", + "bbox": [ + 89, + 224, + 483, + 345 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Latent Anomaly Schrödinger Bridge. The foundational basis of the Latent Anomaly Schrödinger Bridge (LASB) is inspired by the linear Schrödinger Bridge property in equation (10), i.e., the optimal backward drift in equation (8) consistently aims for the Dirac delta $\delta_{\kappa}(\cdot)$ , ensuring convergence to $\kappa$ regardless of $p_B$ . That means the forward process transforms the noisy anomaly image statistics and learns to reconstruct a normal image irrespective of the terminal distribution. To elucidate the working mechanism of the LASB diffusion model, we consider the design strategy of Chang et al. [7]", + "bbox": [ + 89, + 352, + 483, + 517 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "SGMs and diffusion models operating in pixel space require significant computational resources and memory. To address this challenge, we introduce the LASB model. Before training the Linear-SB, we first train $E_{VQ}$ and $D_{VQ}$ , the encoder and decoder of the VQ-VAE model [49], for perceptual compression, following the approach of stable diffusion [37]. During VQ-VAE training, we input either normal or anomaly-augmented images from the training set into $E_{VQ}$ and aim to reconstruct the input using $D_{VQ}$ . Once the model converges, its weights are frozen, and we proceed to train the Linear-SB in the latent space. 
Unlike Gaussian diffusion models, where noise is progressively added at each time step, leading to the deterioration of image structure at later stages, we adopt a semi-degradation strategy similar to Saharia et al. [40]. This ensures that, during the forward pass, the LASB model performs a smooth transformation while preserving structural information. Figure 1 illustrates the contrast between the Gaussian diffusion process and the Schrödinger Bridge diffusion process when applied to anomaly-free reconstruction. According to Liu et al. [27], for this smooth structural transformation, the posterior for the SB (equations 6, 7)", + "bbox": [ + 89, + 518, + 483, + 853 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Algorithm 2 Inference/Sampling Procedure", + "bbox": [ + 514, + 90, + 805, + 106 + ], + "page_idx": 4 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1: Input: $z_{N} \sim p_{B}^{(z)}(z_{N})$ , trained $\epsilon_{\theta}(\cdot, \cdot)$", + "2: for $n = N$ to 1 do", + "3: Predict $z_0^\epsilon$ using $\epsilon_{\theta}(z_n, t_n)$", + "4: $z_{n - 1}\sim p(z_{n - 1}|z_0^\epsilon ,z_n)$", + "5: end for", + "6: return $z_0$" + ], + "bbox": [ + 524, + 109, + 771, + 191 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "given a boundary pair condition $(X_0, X_1)$ reveals an analytic form. Now, as we inject the image statistics into the latent space via perceptual compression using the VQ-VAE, the analytical form can be formulated as,", + "bbox": [ + 511, + 220, + 905, + 279 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\nq \\left(z _ {t} \\mid z _ {0}, z _ {1}\\right) = \\mathcal {N} \\left(z _ {t}, \\mu_ {t} \\left(z _ {0}, z _ {1}\\right), \\Sigma_ {t}\\right) \\tag {11}\n$$\n", + "text_format": "latex", + "bbox": [ + 586, + 294, + 905, + 311 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $\mu_t = \frac{\bar{\sigma}_t^{(z)2}}{\bar{\sigma}_t^{(z)2} + \sigma_t^{(z)2}} z_0 + \frac{\sigma_t^{(z)2}}{\bar{\sigma}_t^{(z)2} + \sigma_t^{(z)2}} z_1$ , and", + "bbox": [ + 511, + 316, + 905, + 344 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\Sigma_ {t} = \\frac {\\bar {\\sigma} _ {t} ^ {(z) 2} \\sigma_ {t} ^ {(z) 2}}{\\bar {\\sigma} _ {t} ^ {(z) 2} + \\sigma_ {t} ^ {(z) 2}} \\mathbb {I}.\n$$\n", + "text_format": "latex", + "bbox": [ + 514, + 343, + 640, + 371 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "The accumulated variances are $\sigma_t^{(z)2} \coloneqq \int_0^t\beta_\tau d\tau$ and $\overline{\sigma}_t^{(z)2} \coloneqq \int_t^1\beta_\tau d\tau$ .", + "bbox": [ + 513, + 375, + 905, + 414 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "We hence train the proposed LASB using our objective function:", + "bbox": [ + 511, + 426, + 905, + 455 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal {L} _ {L A S B} := \\left| \\left| \\epsilon_ {\\theta} \\left(z _ {t}, t\\right) - \\left(\\frac {z _ {t} - z _ {0}}{\\sigma_ {t} ^ {(z)}}\\right) \\right| \\right| \\tag {12}\n$$\n", + "text_format": "latex", + "bbox": [ + 601, + 462, + 905, + 494 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $\epsilon (\cdot)$ is the network and $\theta$ are the parameters associated with it. The training and the inference procedures are summarized in Algorithms 1 and 2.", + "bbox": [ + 511, + 502, + 905, + 547 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4. 
Experiments and Results", + "text_level": 1, + "bbox": [ + 511, + 560, + 748, + 577 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Datasets. To validate the efficacy of the LASB method, we utilize two challenging datasets in industrial anomaly detection: the MVTec-AD [5] dataset and the VisA [63] dataset. The MVTec-AD dataset comprises 15 categories, including 10 object types and 5 texture types, with a total of 5,354 high-resolution images. Of these, 3,629 anomaly-free images are used for training, while the remaining 1,725 test images include both normal and anomalous samples. The VisA dataset consists of 12 distinct objects organized into 12 subsets, categorized into complex Structures, Multiple Instances, and Single Instance types. It includes 10,821 high-resolution images, with 9,621 normal images and 1,200 anomalous images exhibiting 78 distinct anomalies, offering a comprehensive benchmark for anomaly detection and localization methods. Both datasets provide pixel-level ground truth annotations to facilitate the evaluation of anomaly localization performance.", + "bbox": [ + 511, + 580, + 906, + 838 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Evaluation Metrics. For a rigorous quantitative evaluation of our experimental results in anomaly detection and localization, we employ AUROC (Area Under the Receiver Operating Characteristic Curve), AP (Average Precision),", + "bbox": [ + 511, + 839, + 905, + 901 + ], + "page_idx": 4 + }, + { + "type": "page_footnote", + "text": "1Chang et al. [7] work focuses on explaining diffusion models by signifying the design principles of the training strategy, sampling strategy, and the objective function.", + "bbox": [ + 89, + 862, + 482, + 900 + ], + "page_idx": 4 + }, + { + "type": "footer", + "text": "25532", + "bbox": [ + 478, + 944, + 519, + 955 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/bebe370c000dcd6188296baeead296d86b1951bde9a039801ca88b3425d3a5fb.jpg", + "image_caption": [ + "Figure 3. Comparison of training time (left), training memory (middle), and sampling time (right) across different image resolutions for SGM, SB, DDPM, and LASB models. The LASB model (green) demonstrates consistently lower training time, memory usage, and sampling time, particularly in contrast to the SB model (red), which exhibits the highest resource consumption." + ], + "image_footnote": [], + "bbox": [ + 99, + 80, + 344, + 184 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/e28d21147abc88f15c8b3804ebe7c08beeb9e9b3d821d6fba38cf1c7600684d3.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 346, + 80, + 594, + 184 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/97f2c5035eabf69b86385ba0d9a96a5e665d6104fc16570eb1ad549b60262cdb.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 598, + 80, + 897, + 184 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "and F1max (maximum F1-score) as our primary metrics [21]. Here, $cls$ denotes image-level anomaly detection, while seg pertains to pixel-level anomaly localization. A detailed class-specific comparison and results for all the aforementioned metrics are provided in Section S2 of supplementary material. 
In Table 1, we present the average performance of each model across the evaluated dataset.", + "bbox": [ + 88, + 244, + 480, + 351 + ], + "page_idx": 5 + }, + { + "type": "table", + "img_path": "images/65dbbd6c6ef928e4eab682d0e8102959b00113a6ed59265a7af2d3a780196dd9.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
MethodVenueAdopted MethodMvTecViSA
Class-based
SimpleNet [29]CVPR 2023Embedding-based99.6 / 98.196.2 / 98.5
PatchCore [38]CVPR 2022Embedding-based99.1 / 98.191.0 / 98.1
DSR [58]ECCV 2022-98.2 / 70.2-
PaDiM [12]ICPR 2021Memory Bank97.5 / 92.189.1 / 85.9
CS-Flow [39]WACV 2022Normalization Flow97.2 / 84.5-
CFLOW-AD [19]WACV 2022Normalization Flow96.2 / 97.1-
OCR-GAN [26]TIP 2023GAN98.3 / -97.9 / -
DRAEM [56]ICCV 2021Encoder98.0 / 97.388.7 / 93.5
RD4AD [13]CVPR 2022Embedding-based98.5 / 97.896.9 / 98.3
ADSPR [43]ICCV 2023Diffusion97.67 / 97.36-
DiffAD [60]ICCV 2023Diffusion98.72 / 98.2689.79 / 68.28
D3AD [48]CVPR 2024Diffusion97.15 / 97.4495.51 / 94.27
DDAD [33]Arxiv 2023Diffusion99.84 / 98.0598.9 / 97.58
TransFusion [16]ECCV 2024Diffusion99.24 / 94.3398.53 / 86.26
LASB (Ours)-Diffusion99.66 / 99.1598.52 / 99.06
Unified (multi-class)
DRAEM [56]ICCV 2021Encoder88.1 / 87.279.1 / 91.3
HVQ-Trans [32]NeurIPS 2023Non-Diffusion98.0 / 97.393.2 / 98.7
MambaAD [20]NeurIPS 2024Non-Diffusion98.6 / 97.794.3 / 98.5
OmniAL [61]CVPR 2023Non-Diffusion97.2 / 98.394.2 / 96.0
GLAD [54]ECCV 2024Diffusion97.5 / 97.491.8 / 97.8
UniAD [55]NeurIPS 2022Transformer96.52 / 96.885.5 / 95.925
DiAD [21]AAAI 2024Diffusion97.15 / 96.7686.75 / 96.04
LASB (Ours)-Diffusion99.14 / 98.6694.2 / 98.18
", + "bbox": [ + 89, + 362, + 486, + 633 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Table 1. Comparison of state-of-the-art (SOTA) anomaly detection methods categorized into class-based and unified (multi-class) approaches. The table highlights the adopted methodologies and their class-wise average performance in terms of AUROCcls / AUROCseg metrics on two datasets: MvTec and ViSA respectively. An exhaustive list of metrics along with class specific performance details is comprehensively illustrated in Section S2 of supplementary material.", + "bbox": [ + 88, + 643, + 482, + 755 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Implementation Details. For all our experiments, we resized MVTec-AD images to $256 \\times 256$ resolution. To fine-tune the VQ-VAE (Auto-encoder) utilizing the KL-based method, we initialized the network weights from the Stable Diffusion [37] model, where the image is encoded into a latent vector with a size of $64 \\times 64 \\times 3$ . We chose this latent vector size due to its preservation of details and low FID generation capabilities. After training the auto-encoder, we froze the weights and trained the latent denoising network.", + "bbox": [ + 88, + 763, + 482, + 900 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "We used the U-Net model as in [23] as our latent denoising network. During inference, we pass only the anomaly test image as input to the DDPM sampler [23]. Further details on batch size and various other network hyperparameters are in Section S4 of the appendix.", + "bbox": [ + 511, + 244, + 903, + 320 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Baselines. To benchmark LASB, we provide a detailed comparison with state-of-the-art (SOTA) reconstruction methods, as summarized in Table 1. These methods are categorized into two distinct groups: class-based and unified (multi-class) approaches. Class-based methods train models specific to each class, with the number of models increasing linearly with the number of classes, making scalability a challenge. In contrast, unified approaches handle multiple classes within a single model, offering a more scalable solution. By separating the class-based and unified approaches, we ensure a fair and comprehensive comparison.", + "bbox": [ + 511, + 330, + 906, + 500 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Class-based Methods: Performance Analysis. Among the class-based methods, PaDiM [12], with its memory bank-based architecture, achieves $97.5\\%$ on MVtec-AD and $89.1\\%$ on ViSA. While effective for simpler anomalies, its scalability and ability to model inter-class variations remain limited. In contrast, DiffAD [60] demonstrates strong competitiveness, achieving $98.72\\%$ on MVtec-AD. However, its performance on ViSA drops to $89.79\\%$ , indicating its difficulty in adapting to diverse industrial anomalies. DDAD [33] shows highly competitive results, achieving $99.84\\%$ on MVtec-AD and $98.9\\%$ on ViSA. Its diffusion-based modeling effectively captures structural patterns, making it one of the closest competitors to LASB. However, LASB surpasses DDAD on ViSA by a margin of $0.38\\% \\uparrow$ , demonstrating superior adaptability to multi-class industrial scenarios.", + "bbox": [ + 511, + 508, + 908, + 750 + ], + "page_idx": 5 + }, + { + "type": "table", + "img_path": "images/abf4288f45dcbc8a463f6251f63a7ef76fe45893f3e1a8eccef5af770d17222e.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
<tr><td>Task</td><td>Metrics</td><td colspan="3">Pixel Space</td><td colspan="3">Latent Space</td></tr>
<tr><td></td><td></td><td>DDPM [23]</td><td>DDAD [33]</td><td>GLAD [54]</td><td>LDM [37]</td><td>DiAD [21]</td><td>LASB (Ours)</td></tr>
<tr><td rowspan="3">Detection</td><td>AUROCcls</td><td>71.9</td><td>99.8</td><td>97.5</td><td>76.6</td><td>97.2</td><td>99.2 (-0.6%)</td></tr>
<tr><td>APcls</td><td>81.6</td><td>99.5</td><td>99.1</td><td>87.6</td><td>99.0</td><td>99.3 (-0.2%)</td></tr>
<tr><td>F1maxcls</td><td>86.6</td><td>97.9</td><td>96.6</td><td>88.1</td><td>96.5</td><td>98.5 (+2.0%)</td></tr>
<tr><td rowspan="3">Localization</td><td>AUROCseg</td><td>75.6</td><td>98.0</td><td>97.4</td><td>85.1</td><td>96.8</td><td>98.6 (+0.6%)</td></tr>
<tr><td>APseg</td><td>13.3</td><td>59.0</td><td>60.8</td><td>27.6</td><td>52.6</td><td>78.2 (+17.4%)</td></tr>
<tr><td>F1maxseg</td><td>19.5</td><td>59.4</td><td>60.7</td><td>31.0</td><td>55.5</td><td>70.7 (+10.0%)</td></tr>
", + "bbox": [ + 516, + 763, + 921, + 829 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Table 2. Performance comparison of LASB with diffusion-based models on MVTec-AD using AUROC, AP, and F1max. The best results are in bold; the second-best is underline. Improvements are shown as a percentage over the second-best.", + "bbox": [ + 511, + 840, + 906, + 897 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "25533", + "bbox": [ + 478, + 944, + 517, + 955 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/16e30b7fc9635c0159686acf423a06eb1cae9a146bb93bc63681736cf4af9f66.jpg", + "image_caption": [ + "Figure 4. Visual representation of test samples from the MVtec dataset, depicted through heatmaps for various models for different categories." + ], + "image_footnote": [], + "bbox": [ + 93, + 88, + 480, + 253 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Unified Models: Performance Analysis. For unified approaches, DRAEM [56] emerges as one of the least competitive models, with AUROCcls scores of $88.1\\%$ on MVTec-AD and $79.1\\%$ on ViSA. Its reliance on autoencoder-based reconstruction, combined with limited augmentation strategies, hampers its ability to generalize to diverse and complex anomaly scenarios, particularly in the multi-class ViSA dataset. Similarly, UniAD [55] performs less competitively, achieving $96.52\\%$ on MVTec-AD and $85.5\\%$ on ViSA. Its transformer-based architecture, while robust for certain tasks, struggles to balance efficiency and accuracy in anomaly detection across diverse classes. Similarly, DIAD [21] achieves $97.15\\%$ on MVTec-AD and $86.75\\%$ on ViSA but falls short in handling the intricate class variations present in the datasets. Among the competitive unified methods, MambaAD [20] achieves $98.6\\%$ on MVTec-AD and $94.3\\%$ on ViSA, showcasing its strength in handling multi-class scenarios using state-space models. However, LASB outperforms MambaAD with a score of $99.14\\%$ on MVTec-AD and $94.2\\%$ on ViSA, highlighting its superior capability to balance computational efficiency with detection accuracy. Furthermore, HVQ-Trans [32], with scores of $98.0\\%$ on MVTec-AD and $93.2\\%$ on ViSA, demonstrates solid performance but remains slightly behind LASB in scalability and robustness. Detailed class-specific results are available in Section S2 of supplementary material, and Figure 4 provides qualitative heat-map visualizations of anomaly regions, further demonstrating LASB's efficacy in both class-based and unified settings.", + "bbox": [ + 91, + 323, + 483, + 760 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/07c5be218534ee6eeac16f04697ff4f8758723f638d944b43eaafa4a8ad199ba.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
<tr><td>Task</td><td>Metrics</td><td colspan="2">Non-Diffusion Method</td><td colspan="3">Diffusion-based Method</td></tr>
<tr><td></td><td></td><td>DRAEM [56]</td><td>UniAD [55]</td><td>DiffAD [60]</td><td>DiAD [21]</td><td>LASB (Ours)</td></tr>
<tr><td rowspan="3">Detection</td><td>AUROCcls</td><td>79.1</td><td>85.5</td><td>89.5</td><td>86.8</td><td>94.2</td></tr>
<tr><td>APcls</td><td>81.9</td><td>85.5</td><td>-</td><td>88.3</td><td>92.2</td></tr>
<tr><td>F1maxcls</td><td>78.9</td><td>84.4</td><td>-</td><td>85.1</td><td>94.5</td></tr>
<tr><td rowspan="3">Localization</td><td>AUROCseg</td><td>91.3</td><td>95.9</td><td>71.2</td><td>96.0</td><td>98.2</td></tr>
<tr><td>APseg</td><td>23.5</td><td>21.0</td><td>-</td><td>26.1</td><td>46.4</td></tr>
<tr><td>F1maxseg</td><td>29.5</td><td>27.0</td><td>-</td><td>33.0</td><td>52.6</td></tr>
", + "bbox": [ + 98, + 775, + 478, + 843 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Table 3. Results for multi-class anomaly detection and localization on VisA dataset. The best results are indicated in bold, and the second-best results are denoted with an underline.", + "bbox": [ + 89, + 854, + 482, + 896 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "5. Ablation Studies and Analysis", + "text_level": 1, + "bbox": [ + 513, + 89, + 790, + 107 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "In this section, we first differentiate the proposed LASB method's performance in latent spaces and explain the advantages of utilizing Linear-SB in the latent space. We then assess the LASB model's effectiveness when compared with Stable Diffusion [37] and other models for anomaly localization and detection tasks. Finally, we demonstrate the robustness of our proposed method, demonstrating its ability to deliver precise and consistent outcomes during test-time evaluations.", + "bbox": [ + 511, + 114, + 906, + 252 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "LASB vs Standard Diffusion Models. In the realm of anomaly detection, standard diffusion models such as DDPM [23] and LDM [37] achieve competitive performance in anomaly detection tasks but exhibit significant limitations in localization, as evidenced in Table 2. To mitigate these limitations, DiAD [21] integrates a semantic guidance network to improve LDM's localization by incorporating additional contextual information. Despite these improvements, DiAD [21] still falls short of achieving state-of-the-art (SOTA) results, indicating persistent challenges in fully leveraging diffusion models for comprehensive anomaly detection tasks. In contrast, our LASB employs a novel approach that leverages Linear-SB within the latent space, avoiding the limitations associated with reconstructing from pure Gaussian noise, a common issue in standard diffusion processes when applied to anomaly-free reconstruction. Moreover, LASB semi-degrades latent space to retain structural integrity, facilitating more effective anomaly detection and localization. This method not only enhances the robustness of the detection process but also significantly improves localization accuracy. Consequently, the LASB model surpasses standard diffusion models across multiple metrics, demonstrating superior performance in multi-class anomaly detection, as evidenced in Table 2.", + "bbox": [ + 511, + 265, + 906, + 641 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "We compare the performance of our proposed Latent Anomaly Schrödinger Bridge (LASB) method with the DDPM [23], DDAD [33], GLAD [54], LDM [37], and DiAD [21] models to demonstrate the effectiveness and efficiency of our approach. First, we evaluate the models on the MVTec-AD dataset for both anomaly detection and localization tasks. As shown in Table 2, LASB significantly outperforms all the other models in localization tasks for all the metrics. Specifically, LASB achieves a $0.6\\% \\uparrow$ , $17.4\\% \\uparrow$ , and $10.0\\% \\uparrow$ improvement in AUROC $_{seg}$ , AP $_{seg}$ , F1max $_{seg}$ respectively. Also, it shows $2.0\\% \\uparrow$ enhancement in F1max $_{cls}$ for detection, compared to GLAD [54]. These improvements highlight the robustness and effectiveness of the latent space approach specifically for localization. 
Furthermore, as shown in Figure 3, pixel-space models like DDPM [23], SGM [46], and SB models trained via Iterative Proportional Fitting [11, 25] or likelihood-based methods [8]", + "bbox": [ + 511, + 643, + 908, + 902 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "25534", + "bbox": [ + 478, + 944, + 519, + 955 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/77246e5458f59ae670233d60f8c09452120b8561e5dbcb12113f31cedd9729f7.jpg", + "table_caption": [ + "Table 4. Performance evaluation across varying numbers of function evaluations (NFEs) on the MVTec-AD and VisA datasets. The tabulated metrics, AUROCcls / AUROCseg, provide a comprehensive overview of image-level and pixel-level anomaly detection performance at each NFE. Additionally, the inference time, measured in wall-clock seconds on an NVIDIA V100 GPU, underscores the computational trade-offs associated with increasing NFEs, reflecting the balance between model efficiency and performance." ], + "table_footnote": [], + "table_body": "
<tr><td></td><td>NFE-1</td><td>NFE-2</td><td>NFE-5</td><td>NFE-10</td><td>NFE-100</td><td>NFE-500</td><td>NFE-1000</td></tr>
<tr><td>MVTec-AD</td><td>97.42 / 96.82</td><td>98.16 / 97.84</td><td>98.94 / 98.16</td><td>99.14 / 98.66</td><td>99.14 / 98.66</td><td>99.14 / 98.66</td><td>99.14 / 98.66</td></tr>
<tr><td>VisA</td><td>92.74 / 96.71</td><td>93.21 / 97.15</td><td>93.96 / 97.90</td><td>94.2 / 98.18</td><td>94.2 / 98.18</td><td>94.2 / 98.18</td><td>94.2 / 98.18</td></tr>
<tr><td>Inference Time (secs)</td><td>0.12</td><td>0.25</td><td>0.52</td><td>0.74</td><td>1.5</td><td>7.5</td><td>15</td></tr>
", + "bbox": [ + 96, + 88, + 898, + 157 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/878d2f07c2ce0bfe3510d9e46c9903dbe71a031b9d92c5c95bb1b55e2e476a90.jpg", + "table_caption": [ + "Table 4. Performance evaluation across varying numbers of function evaluations (NFEs) on the MVTec-AD and VisA datasets. The tabulated metrics, AUROCcls / AUROCseg, provide a comprehensive overview of image-level and pixel-level anomaly detection performance at each NFE. Additionally, the inference time, measured in wall-clock seconds on an NVIDIA V100 GPU, underscores the computational trade-offs associated with increasing NFEs, reflecting the balance between model efficiency and performance." + ], + "table_footnote": [], + "table_body": "
<tr><td>Method</td><td>DRAEM [56]</td><td>PatchCore [38]</td><td>DiffAD [60]</td><td>TransFusion [16]</td><td>DiAD [21]</td><td>GLAD [54]</td><td>ADSPR [43]</td><td>DDAD [33]</td><td>LASB (ours)</td></tr>
<tr><td>Inference Time (secs)</td><td>0.15</td><td>0.44</td><td>2.6</td><td>1.2</td><td>2.8</td><td>1.8</td><td>1.6</td><td>1.5</td><td>0.74</td></tr>
", + "bbox": [ + 102, + 238, + 893, + 273 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "require more training time, memory, and exhibit slower sampling. In contrast, latent-space LASB excels in all areas, achieving up to $5 \\times$ and $2 \\times$ faster training, $3 \\times$ and $2 \\times$ memory reduction, and $4 \\times$ and $2 \\times$ faster sampling compared to SB-based models [8, 11, 25] and DDPM [23], respectively.", + "bbox": [ + 89, + 311, + 480, + 388 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/60abe7f53a7023b79d9385b26baef4c74b5ed16a2b886f5d2b25dabea2a98b06.jpg", + "table_caption": [ + "Table 5. Inference times for various methods where the inference time, measured in wall-clock seconds on an NVIDIA V100 GPU." + ], + "table_footnote": [], + "table_body": "
<tr><td>Task</td><td>Metrics</td><td>Sampling-1</td><td>Sampling-2</td><td>Sampling-3</td><td>Sampling-4</td></tr>
<tr><td rowspan="3">Detection</td><td>AUROCcls</td><td>99.05</td><td>99.01</td><td>99.01</td><td>98.98</td></tr>
<tr><td>APcls</td><td>99.15</td><td>99.14</td><td>99.14</td><td>99.11</td></tr>
<tr><td>F1maxcls</td><td>98.59</td><td>98.33</td><td>99.83</td><td>98.32</td></tr>
<tr><td rowspan="3">Localization</td><td>AUROCseg</td><td>98.58</td><td>98.58</td><td>98.59</td><td>98.57</td></tr>
<tr><td>APseg</td><td>78.17</td><td>78.14</td><td>78.14</td><td>78.12</td></tr>
<tr><td>F1maxseg</td><td>70.62</td><td>70.59</td><td>70.61</td><td>70.58</td></tr>
", + "bbox": [ + 106, + 398, + 470, + 494 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Table 6. Stability analysis of LASB on the MVTec-AD dataset for detection and localization tasks by performing the sampling multiple times. Results reported are mean values across classes and multiple samplings (see Section S1 of appendix for more detailed results).", + "bbox": [ + 89, + 503, + 482, + 571 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Generalization and Sampling Stability. In the field of anomaly detection using diffusion models, achieving consistent outcomes during the sampling or inference stage is notably challenging due to the inherent stochastic nature of generative processes. This inconsistency can be particularly problematic in real-world applications where reliable and stable detection is critical. Therefore, assessing the stability of model outputs across multiple inferences is essential. Our approach involves training the model once and then conducting multiple sampling or inference tests to evaluate if the outcomes remain consistent over time. A critical consideration for real-world applicability is achieving fast inference while minimizing computational overhead. The proposed LASB model strikes a balance between performance and efficiency, ensuring its suitability for deployment in practical scenarios.", + "bbox": [ + 89, + 583, + 482, + 824 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Our findings, as detailed in Table 6, reveal that the LASB model exhibits remarkable stability across all evaluation metrics, showing negligible variance across numerous inferences. This consistency is attributed to the model's capability to maintain structural integrity and effectively tran", + "bbox": [ + 89, + 825, + 482, + 901 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "sition from anomalous to normal latent spaces. The LASB model is designed to reconstruct a normal image regardless of the underlying anomalies, compelling it to disregard anomalous features during reconstruction. This process not only ensures that anomalies of various patterns, sizes, and orientations are effectively handled but also enhances the overall reliability of the model.", + "bbox": [ + 511, + 311, + 905, + 417 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Inference Complexity. As shown in Table 4, LASB demonstrates exceptional efficiency with its rapid sampling capabilities, achieving near-optimal performance with as few as 10 NFEs (Number of Function Evaluations) and an inference time of only 0.74 seconds on an NVIDIA V100 GPU. This highlights LASB's practicality for real-world applications, where fast and reliable anomaly detection and localization are crucial. To further illustrate its efficiency, we compare LASB's inference time with existing state-of-the-art (SOTA) models in Table 5. LASB is approximately $1.6 \\times$ faster than TransFusion [16], a leading class-based method, while also outperforming it in both anomaly localization and detection tasks. Compared to DiAD [21], LASB achieves $2 \\times$ faster inference times, delivering significantly stronger performance in both detection and localization metrics. This demonstrates LASB's clear advantage in balancing computational efficiency and robust anomaly detection.", + "bbox": [ + 511, + 429, + 906, + 700 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "6. 
Conclusions and Future Work", + "text_level": 1, + "bbox": [ + 511, + 714, + 790, + 729 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "The proposed Latent Anomaly Schrödinger Bridge (LASB) model demonstrates robust performance in anomaly detection and localization tasks. Its unified nature delimits the need for extra guidance or additional network components. LASB also excels in producing stable inferential results, requires less computational memory, and benefits from faster training and sampling rates. Given these advantages, LASB is well-suited for deployment in real-world industrial applications. Looking ahead, future research could focus on enhancing the robust translation mechanisms specific to anomaly detection and localization tasks.", + "bbox": [ + 511, + 733, + 905, + 900 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "25535", + "bbox": [ + 478, + 945, + 517, + 955 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Acknowledgements. We thank the anonymous reviewers for their valuable feedback that improved the presentation of this paper.", + "bbox": [ + 89, + 90, + 485, + 137 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 91, + 150, + 187, + 166 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[1] Samet Akcay, Amir Atapour-Abarghouei, and Toby P Breckon. Ganomaly: Semi-supervised anomaly detection via adversarial training. In Computer Vision-ACCV 2018. Springer International Publishing, 2019. 2", + "[2] Brian DO Anderson. Reverse-time diffusion equation models. Stochastic Processes and their Applications, 12(3):313-326, 1982. 3", + "[3] Alexander Bauer, Shinichi Nakajima, and Klaus-Robert Müller. Self-supervised autoencoders for visual anomaly detection. Mathematics, 12(24), 2024. 2", + "[4] Finn Behrendt et al. Patched diffusion models for unsupervised anomaly detection in brain migraine. In Medical Imaging with Deep Learning, page PMLR, 2024. 1, 2", + "[5] Paul Bergmann, Michael Fauser, David Sattlegger, and Carsten Steger. Mvtec ad-a comprehensive real-world dataset for unsupervised anomaly detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9592-9600, 2019. 5", + "[6] Paul Bergmann et al. Improving unsupervised defect segmentation by applying structural similarity to autoencoders. arXiv preprint arXiv:1807.02011, 2018. 2", + "[7] Ziyi Chang, George A Koulieris, and Hubert PH Shum. On the design fundamentals of diffusion models: A survey. arXiv preprint arXiv:2306.04542, 2023. 5", + "[8] Tianrong Chen, Guan-Horng Liu, and Evangelos Theodorou. Likelihood training of schrödinger bridge using forward-backward SDEs theory. In International Conference on Learning Representations, 2022. 2, 4, 7, 8", + "[9] Yongxin Chen, Tryphon T Georgiou, and Michele Pavon. Stochastic control liaisons: Richard sinkhorn meets gaspard monge on a schrodinger bridge. Siam Review, 63(2):249-313, 2021. 4", + "[10] Anne-Sophie Collin and Christophe De Vleeschouwer. Improved anomaly detection by training an autoencoder with skip connections on images corrupted with stain-shaped noise. In 2020 25th International Conference on Pattern Recognition (ICPR), pages 7915-7922. IEEE, 2021. 2", + "[11] Valentin De Bortoli, James Thornton, Jeremy Heng, and Arnaud Doucet. Diffusion schrödinger bridge with applications to score-based generative modeling. Advances in Neural Information Processing Systems, 34:17695-17709, 2021. 
2, 4, 7, 8", + "[12] Thomas Defard, Aleksandr Setkov, Angelique Loesch, and Romaric Audigier. Padim: a patch distribution modeling framework for anomaly detection and localization. In International Conference on Pattern Recognition, pages 475-489. Springer, 2021. 6", + "[13] Hanqiu Deng and Xingyu Li. Anomaly detection via reverse distillation from one-class embedding. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9737-9746, 2022. 6" + ], + "bbox": [ + 93, + 175, + 483, + 900 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[14] Wei Deng, Weijian Luo, Yixin Tan, Marin Bilos, Yu Chen, Yuriy Nevmyvaka, and Ricky T. Q. Chen. Variational schrödinger diffusion models. In International Conference on Machine Learning (ICML), 2024. 2", + "[15] Chouro Ding, Guansong Pang, and Chunhua Shen. Catching both gray and black swans: Open-set supervised anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022. 1", + "[16] Matic Fučka, Vitjan Zavrtanik, and Danijel Skočaj. Transfusion-a transparency-based diffusion model for anomaly detection. In European conference on computer vision, pages 91-108. Springer, 2024. 6, 8", + "[17] Dong Gong, Lingqiao Liu, Vuong Le, Budhaditya Saha, Moussa Reda Mansour, Svetha Venkatesh, and Anton van den Hengel. Memorizing normality to detect anomaly: Memory-augmented deep autoencoder for unsupervised anomaly detection. In Proceedings of the IEEE/CVF international conference on computer vision, pages 1705-1714, 2019. 2", + "[18] Alvaro Gonzalez-Jimenez et al. Sano: Score-based diffusion model for anomaly localization in dermatology. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023. 1, 2", + "[19] Denis Gudovskiy, Shun Ishizaka, and Kazuki Kozuka. Cflow-ad: Real-time unsupervised anomaly detection with localization via conditional normalizing flows. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, pages 98-107, 2022. 6", + "[20] Haoyang He, Yuhu Bai, Jiangning Zhang, Qingdong He, Hongxu Chen, Zhenye Gan, Chengjie Wang, Xiangtai Li, Guanzhong Tian, and Lei Xie. MambaAD: Exploring state space models for multi-class unsupervised anomaly detection. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. 6, 7", + "[21] Haoyang He, Jiangning Zhang, Hongxu Chen, Xuhai Chen, Zhishan Li, Xu Chen, Yabiao Wang, Chengjie Wang, and Lei Xie. A diffusion-based framework for multi-class anomaly detection. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 8472-8480, 2024. 1, 2, 3, 5, 6, 7, 8", + "[22] Liren He, Zhengkai Jiang, Jinlong Peng, Liang Liu, Qianggang Du, Xiaobin Hu, Wenbing Zhu, Mingmin Chi, Yabiao Wang, and Chengjie Wang. Learning unified reference representation for unsupervised multi-class anomaly detection. arXiv preprint arXiv:2403.11561, 2024. 3", + "[23] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in neural information processing systems, 33:6840-6851, 2020. 2, 6, 7, 8", + "[24] Bozhen Hu et al. A lightweight spatial and temporal multifeature fusion network for defect detection. IEEE Transactions on Image Processing, 30:472-486, 2020. 1", + "[25] Solomon Kullback. Probability densities with given marginals. The Annals of Mathematical Statistics, 39(4): 1236-1243, 1968. 4, 7, 8", + "[26] Yufei Liang et al. 
Omni-frequency channel-selection representations for unsupervised anomaly detection. IEEE Transactions on Image Processing, 2023. 2, 6" + ], + "bbox": [ + 516, + 92, + 906, + 900 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "25536", + "bbox": [ + 478, + 945, + 519, + 955 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[27] Guan-Horng Liu, Arash Vahdat, De-An Huang, Evangelos A. Theodorou, Weili Nie, and Anima Anandkumar. I2sb: Image-to-image schrödinger bridge. In International Conference on Machine Learning, 2023. 4, 5", + "[28] Wenqian Liu, Runze Li, Meng Zheng, Srikrishna Karanam, Ziyan Wu, Bir Bhanu, Richard J Radke, and Octavia Camps. Towards visually explaining variational autoencoders. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 8642-8651, 2020. 2", + "[29] Zhikang Liu, Yiming Zhou, Yuansheng Xu, and Zilei Wang. Simplenet: A simple network for image anomaly detection and localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20402-20411, 2023. 6", + "[30] Xingming Long et al. Fabric defect detection using tactile information. In 2021 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2021. 1", + "[31] Fanbin Lu et al. Removing anomalies as noises for industrial defect localization. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023. 1, 2", + "[32] Ruiying Lu, YuJie Wu, Long Tian, Dongsheng Wang, Bo Chen, Xiyang Liu, and Ruimin Hu. Hierarchical vector quantized transformer for multi-class unsupervised anomaly detection. Advances in Neural Information Processing Systems, 36:8487-8500, 2023. 3, 6, 7", + "[33] Arian Mousakhan, Thomas Brox, and Jawad Tayyub. Anomaly detection with conditioned denoising diffusion models. arXiv preprint arXiv:2305.15956, 2023. 6, 7, 8", + "[34] Edward Nelson. Dynamical theories of Brownian motion. Princeton university press, 2020. 4", + "[35] Jonathan Pinnay and Keng Chai. Inpainting transformer for anomaly detection. In International Conference on Image Analysis and Processing, pages 394-406. Springer, 2022. 2", + "[36] Hannes Risken and Hannes Risken. Fokker-planck equation. Springer, 1996. 4", + "[37] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684-10695, 2022. 2, 5, 6, 7", + "[38] Karsten Roth, Latha Pemula, Joaquin Zepeda, Bernhard Schölkopf, Thomas Brox, and Peter Gehler. Towards total recall in industrial anomaly detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 14318-14328, 2022. 6, 8", + "[39] Marco Rudolph, Tom Wehrbein, Bodo Rosenhahn, and Bastian Wandt. Fully convolutional cross-scale-flows for image-based defect detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 1088-1097, 2022. 6", + "[40] Chitwan Sahara, William Chan, Huiwen Chang, Chris Lee, Jonathan Ho, Tim Salimans, David Fleet, and Mohammad Norouzi. Palette: Image-to-image diffusion models. In ACM SIGGRAPH 2022 conference proceedings, pages 1-10, 2022. 5", + "[41] Thomas Schlegl, Philipp Seebock, Sebastian M Waldstein, Georg Langs, and Ursula Schmidt-Erfurth. 
f-anogan: Fast" + ], + "bbox": [ + 91, + 92, + 480, + 900 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "unsupervised anomaly detection with generative adversarial networks. Medical image analysis, 54:30-44, 2019. 2", + "[42] Yuyang Shi, Valentin De Bortoli, Andrew Campbell, and Arnaud Doucet. Diffusion schrödinger bridge matching. Advances in Neural Information Processing Systems, 36, 2024. 2", + "[43] Woosang Shin, Jonghyeon Lee, Taehan Lee, Sangmoon Lee, and Jong Pil Yun. Anomaly detection using score-based perturbation resilience. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 23372-23382, 2023. 1, 2, 6, 8", + "[44] Kihyuk Sohn et al. Anomaly clustering: Grouping images into coherent clusters of anomaly types. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2023. 1", + "[45] Jouwon Song et al. Anoseg: anomaly segmentation network using self-supervised learning. arXiv preprint arXiv:2110.03396, 2021. 2", + "[46] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In International Conference on Learning Representations, 2021. 2, 3, 7", + "[47] Daniel Stanley Tan, Yi-Chun Chen, Trista Pei-Chun Chen, and Wei-Chao Chen. Trustmae: A noise-resilient defect classification framework using memory-augmented auto-encoders with trust regions. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, pages 276–285, 2021. 2", + "[48] Justin Tebbe and Jawad Tayyyub. Dynamic addition of noise in a diffusion model for anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pages 3940-3949, 2024. 6", + "[49] Aaron Van Den Oord, Oriol Vinyals, et al. Neural discrete representation learning. Advances in neural information processing systems, 30, 2017. 5", + "[50] Shashanka Venkataramanan et al. Attention guided anomaly localization in images. In European Conference on Computer Vision. Springer International Publishing, 2020. 1", + "[51] Gefei Wang, Yuling Jiao, Qian Xu, Yang Wang, and Can Yang. Deep generative learning via schrödinger bridge. In International conference on machine learning, pages 10794-10804. PMLR, 2021. 2", + "[52] Julian Wyatt et al. Anoddpm: Anomaly detection with denoising diffusion probabilistic models using simplex noise. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022. 1, 2", + "[53] Rui Xu, Yunke Wang, and Bo Du. Maediff: Masked autoencoder-enhanced diffusion models for unsupervised anomaly detection in brain images. arXiv preprint arXiv:2401.10561, 2024. 1", + "[54] Hang Yao, Ming Liu, Zhicun Yin, Zifei Yan, Xiaopeng Hong, and Wangmeng Zuo. Glad: towards better reconstruction with global and local adaptive diffusion models for unsupervised anomaly detection. In European Conference on Computer Vision, pages 1-17. Springer, 2024. 6, 7, 8" + ], + "bbox": [ + 516, + 92, + 906, + 900 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "25537", + "bbox": [ + 478, + 944, + 517, + 955 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[55] Zhiyuan You, Lei Cui, Yujun Shen, Kai Yang, Xin Lu, Yu Zheng, and Xinyi Le. A unified model for multi-class anomaly detection. Advances in Neural Information Processing Systems, 35:4571-4584, 2022. 
3, 6, 7", + "[56] Vitjan Zavrtanik, Matej Kristan, and Danijel Skočaj. Draem-a discriminatively trained reconstruction embedding for surface anomaly detection. In Proceedings of the IEEE/CVF international conference on computer vision, pages 8330-8339, 2021. 2, 4, 6, 7, 8", + "[57] Vitjan Zavrtanik, Matej Kristan, and Danijel Skočaj. Reconstruction by inpainting for visual anomaly detection. Pattern Recognition, 112:107706, 2021. 2", + "[58] Vitjan Zavrtanik, Matej Kristan, and Danijel Skočaj. Dsr-a dual subspace re-projection network for surface anomaly detection. In European conference on computer vision. Springer Nature Switzerland, 2022. 2, 6", + "[59] Zhaoyang Zeng et al. Reference-based defect detection network. IEEE Transactions on Image Processing, 30:6637-6647, 2021. 1", + "[60] Xinyi Zhang, Naiqi Li, Jiawei Li, Tao Dai, Yong Jiang, and Shu-Tao Xia. Unsupervised surface anomaly detection with diffusion probabilistic model. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6782-6791, 2023. 1, 2, 6, 7, 8", + "[61] Ying Zhao. Omnial: A unified cnn framework for unsupervised anomaly localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3924-3933, 2023. 6", + "[62] Zhixuan Zhao et al. A surface defect detection method based on positive samples. In PRICAI 2018: Trends in Artificial Intelligence. Springer International Publishing, 2018. 2", + "[63] Yang Zou, Jongheon Jeong, Latha Pemula, Dongqing Zhang, and Onkar Dabeer. Spot-the-difference self-supervised pretraining for anomaly detection and segmentation. In European Conference on Computer Vision. Springer, Springer Nature Switzerland, 2022. 5" + ], + "bbox": [ + 91, + 90, + 482, + 599 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "25538", + "bbox": [ + 478, + 945, + 517, + 955 + ], + "page_idx": 10 + } +] \ No newline at end of file diff --git a/2025/A Unified Latent Schrodinger Bridge Diffusion Model for Unsupervised Anomaly Detection and Localization/0371bea6-a128-4eff-9f4f-dffd7eab7a85_model.json b/2025/A Unified Latent Schrodinger Bridge Diffusion Model for Unsupervised Anomaly Detection and Localization/0371bea6-a128-4eff-9f4f-dffd7eab7a85_model.json new file mode 100644 index 0000000000000000000000000000000000000000..5c4b04f68adc4e4aff72605e37077c24e7162ec4 --- /dev/null +++ b/2025/A Unified Latent Schrodinger Bridge Diffusion Model for Unsupervised Anomaly Detection and Localization/0371bea6-a128-4eff-9f4f-dffd7eab7a85_model.json @@ -0,0 +1,2301 @@ +[ + [ + { + "type": "header", + "bbox": [ + 0.107, + 0.003, + 0.182, + 0.043 + ], + "angle": 0, + "content": "CVF" + }, + { + "type": "header", + "bbox": [ + 0.237, + 0.001, + 0.812, + 0.047 + ], + "angle": 0, + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." 
+ }, + { + "type": "title", + "bbox": [ + 0.138, + 0.13, + 0.861, + 0.177 + ], + "angle": 0, + "content": "A Unified Latent Schrödinger Bridge Diffusion Model for Unsupervised Anomaly Detection and Localization" + }, + { + "type": "text", + "bbox": [ + 0.182, + 0.203, + 0.816, + 0.276 + ], + "angle": 0, + "content": "Shilhora Akshay\\(^{1,*}\\) Niveditha Lakshmi Narasimhan\\(^{2}\\) Jacob George\\(^{2}\\) Vineeth N Balasubramanian\\(^{1}\\) \n\\(^{1}\\)Indian Institute of Technology, Hyderabad \\(^{2}\\)KLA Corporation *shilhora.akshay333@gmail.com" + }, + { + "type": "title", + "bbox": [ + 0.248, + 0.309, + 0.327, + 0.326 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.342, + 0.485, + 0.751 + ], + "angle": 0, + "content": "Anomaly detection and localization remain pivotal challenges in computer vision, with applications ranging from industrial inspection to medical diagnostics. While current supervised methods offer high precision, they are often impractical due to the scarcity of annotated data and the infrequent occurrence of anomalies. Recent advancements in unsupervised approaches, particularly reconstruction-based methods, have addressed these issues by training models exclusively on normal data, enabling them to identify anomalies during inference. However, these methods frequently rely on auxiliary networks or specialized adaptations, which can limit their robustness and practicality. This work introduces the Latent Anomaly Schrödinger Bridge (LASB), a unified unsupervised anomaly detection model that operates entirely in the latent space without requiring additional networks or custom modifications. LASB transforms anomaly images into normal images by preserving structural integrity across varying anomaly classes, lighting, and pose conditions, making it highly robust and versatile. Unlike previous methods, LASB does not focus solely on reconstructing anomaly features, but emphasizes anomaly transformation, achieving smooth anomaly-to-normal image conversions. Our method achieves state-of-the-art performance on both the MVTec-AD and VisA datasets, excelling in detection and localization tasks. Our code is available at https://github.com/ShilhoraAkshayPatel/LASB." + }, + { + "type": "title", + "bbox": [ + 0.092, + 0.783, + 0.222, + 0.8 + ], + "angle": 0, + "content": "1. Introduction" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.81, + 0.484, + 0.902 + ], + "angle": 0, + "content": "Anomaly detection and localization is a critical task in computer vision with several applications in medical [4, 18, 52, 53] and industrial [21, 31, 43, 60], which has attracted significant attention from the research community. The main goal of anomaly detection is to identify and localize abnormal patterns that are unusual from those seen in normal" + }, + { + "type": "image", + "bbox": [ + 0.518, + 0.306, + 0.905, + 0.472 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.513, + 0.48, + 0.908, + 0.537 + ], + "angle": 0, + "content": "Figure 1. Illustration of the Gaussian and Bridge diffusion processes, with reverse image trajectory from LASB. LASB learns a direct diffusion bridge between anomaly and normal distributions, enhancing interpretability and anomaly-free transformation." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.565, + 0.907, + 0.67 + ], + "angle": 0, + "content": "instances. 
In the past few years, the research community has proposed various supervised anomaly detection methods [15, 24, 30, 44, 50, 59]. However, the extensive need for annotations is expensive, and the infrequent presence of anomalous samples makes these methods unsuitable for practical applications, limiting their use in real-world scenarios." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.675, + 0.909, + 0.903 + ], + "angle": 0, + "content": "Recent research has sought to overcome the limitations of supervised anomaly detection methods by relying solely on normal images during training and localizing abnormal patterns at inference time. Among the various categories within the unsupervised paradigm, reconstruction-based methods have garnered significant attention due to their promising results and strong performance in real-world scenarios. The core concept behind these methods is that the model is trained exclusively on normal images and, during inference, it reconstructs abnormal samples as normal samples. Doing so benefits us in two directions: first, one can interpret the model's ability to detect and localize anomalies; second, it aids the model in performing better while reconstructing the anomalies. Recent advancements have introduced novel approaches lever" + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.521, + 0.958 + ], + "angle": 0, + "content": "25528" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.092, + 0.482, + 0.168 + ], + "angle": 0, + "content": "aging AutoEncoder-based (AE) [3, 6, 56-58], Generative Adversarial Networks-based (GANs) [1, 26, 45, 62], and diffusion-based approaches [4, 18, 21, 31, 43, 52, 60]. Our research specifically focuses on various diffusion models tailored for anomaly detection tasks." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.177, + 0.483, + 0.388 + ], + "angle": 0, + "content": "Existing diffusion-based methods primarily focus on developing novel noise-conditioning techniques, often incorporating an additional discriminative sub-network [60]. Another approach utilizes a score-based diffusion model to identify anomalies by evaluating how effectively samples can return to the normal data distribution after perturbation, though it fails to deliver competitive performance [43]. Additionally, DiAD [21] proposes novel semantic guidance to specifically understand the anomalies and showcases its robustness. However, earlier diffusion-based methods generally depend on auxiliary networks or tailored diffusion processes to extract anomaly features. Moreover, most of these methods prioritize the extraction and reconstruction of anomaly features." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.397, + 0.483, + 0.563 + ], + "angle": 0, + "content": "In this work, we introduce an intuitive mechanism that neither requires model-specific adaptations to extract anomaly features nor relies on an additional feature extractor during training or inference to extract discriminative features. By leveraging the capabilities of the Schrödinger Bridge [8, 11, 14, 42, 51], we propose the Latent Anomaly Schrödinger Bridge (LASB), which operates entirely in the latent space, transforming anomalous images into normal ones, regardless of anomaly class, and demonstrating robustness to variations in lighting and pose. In the following, we outline the key contributions of our study."
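The reconstruction principle described in this passage can be made concrete in a few lines. The following is a minimal, hedged illustration in which `reconstruct` is a hypothetical placeholder for any network trained only on normal images; the actual LASB pipeline scores anomalies from multi-scale feature differences rather than raw pixel residuals.

```python
# Minimal sketch of reconstruction-based anomaly scoring: a model trained
# only on normal images should fail to reproduce anomalous regions, so the
# residual between input and reconstruction highlights them.
# `reconstruct` is a hypothetical placeholder for the trained network.
import numpy as np

def anomaly_heatmap(image: np.ndarray, reconstruct) -> np.ndarray:
    recon = reconstruct(image)             # anomaly-free reconstruction
    residual = np.abs(image - recon)       # per-pixel discrepancy
    return residual.mean(axis=-1)          # collapse channels -> (H, W)

def image_score(heatmap: np.ndarray) -> float:
    # One common convention: score an image by its strongest local response.
    return float(heatmap.max())
```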
+ }, + { + "type": "text", + "bbox": [ + 0.091, + 0.584, + 0.48, + 0.643 + ], + "angle": 0, + "content": "- We propose LASB, an unsupervised bridge-based anomaly detection model that operates in latent space. LASB transforms anomaly images into normal images, regardless of the anomaly class. (See the illustrative sketches later in this document for how the bridge is trained and sampled.)" + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.645, + 0.48, + 0.761 + ], + "angle": 0, + "content": "- Unified Framework: LASB offers a comprehensive framework for both anomaly detection and localization without the need for auxiliary networks. Unlike conventional methods that rely on Gaussian diffusion, LASB employs a bridge-based diffusion process that inherently preserves structural integrity, marking the first application of such a process in a latent diffusion model for anomaly detection." + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.765, + 0.48, + 0.824 + ], + "angle": 0, + "content": "- Efficient and Scalable: By utilizing the Linear Schrödinger Bridge in latent space, LASB significantly reduces training time, memory consumption, and sampling time, enhancing overall efficiency and scalability." + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.826, + 0.48, + 0.899 + ], + "angle": 0, + "content": "- Our method achieves state-of-the-art performance on the MVTec-AD dataset, with an image-level \\(\\mathrm{AUROC}_{cls} / \\mathrm{AP}_{cls}\\) of \\(99.2\\% / 99.3\\%\\) and a pixel-level \\(\\mathrm{AUROC}_{seg} / \\mathrm{AP}_{seg}\\) of \\(98.6\\% / 78.2\\%\\) on the test set, establishing a new benchmark for unsupervised anomaly detection." + }, + { + "type": "list", + "bbox": [ + 0.091, + 0.584, + 0.48, + 0.899 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.09, + 0.655, + 0.105 + ], + "angle": 0, + "content": "2. Related Work" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.112, + 0.905, + 0.442 + ], + "angle": 0, + "content": "Reconstruction-based Methods: Reconstruction-based anomaly detection models operate on the premise that networks trained exclusively on normal images will fail to accurately reconstruct anomalous ones due to their unfamiliarity with the abnormal distribution. Notably, autoencoder [6, 10, 28] and Generative Adversarial Network (GAN) [1, 41] frameworks have been widely utilized for this task, where anomaly scores are computed based on the reconstruction error between the input and its generated counterpart. Nevertheless, such approaches frequently struggle with direct copy issues and elevated false positive rates due to their limited generalization capacity over anomalous regions or due to the overgeneralization capacity of reconstructive methods. To mitigate these generalization challenges, recent innovations have pivoted towards inpainting strategies [35, 57] or integrating supplementary memory modules [17, 47]. Furthermore, the DRAEM [56] model enhances performance by leveraging pseudo-anomalies and coupling autoencoders with a segmentation network, though its effectiveness diminishes when faced with substantial deviations between real and pseudo anomalies." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.454, + 0.905, + 0.83 + ], + "angle": 0, + "content": "Diffusion-based Methods: Diffusion models, known for their high-fidelity image synthesis, are increasingly applied to anomaly detection in both medical and industrial domains [18, 21, 43, 52, 60]. 
In the medical field, SANO [18] leverages score-based diffusion models to localize skin conditions such as eczema, while AnoDDPM [52] employs the Denoising Diffusion Probabilistic Model (DDPM) [23] to segment tumors, initiating with simplex noise. These methods primarily utilize standard DDPM [23] or Score-based Generative Models (SGMs) [46], yet they often struggle to capture structural features because they rely on starting from pure Gaussian or simplex noise and frequently require external guidance for complex feature integration. In industrial applications, DiffAD [60] introduces a noise interpolation technique coupled with a discriminative sub-network to enhance detection capabilities. AD-SPR [43] emphasizes the significance of perturbed resilience using SGMs [46] to identify anomalies. Additionally, DiAD [21] presents a novel semantic guidance network that directs a stable diffusion model [37] to effectively detect and localize anomalies in both industrial and selected medical imaging applications. However, these models often require class-specific training or additional networks to guide the diffusion process or to perform further segmentation via a discriminative sub-network." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.84, + 0.905, + 0.899 + ], + "angle": 0, + "content": "Unified/Multi-class Methods: Early approaches in the literature developed models on a per-class basis, which limited their generalizability. More recent research has shifted towards unified or multi-class models that are trained on" + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.946, + 0.518, + 0.957 + ], + "angle": 0, + "content": "25529" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.099, + 0.063, + 0.891, + 0.194 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.099, + 0.198, + 0.266, + 0.307 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.272, + 0.198, + 0.89, + 0.307 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.089, + 0.31, + 0.907, + 0.382 + ], + "angle": 0, + "content": "Figure 2. The LASB model framework consists of two key stages: training and inference. During training, anomaly augmentations are applied to images, introducing distortions that the LASB model learns to remove, ultimately reconstructing a normal image. This iterative process continues until the model effectively filters out the anomalies. In the inference stage, the model processes real anomalous images, reconstructing normal versions. The anomaly detection is achieved by computing the difference \\((p_B - p_A)\\) between the original and reconstructed images, with anomalies visualized via a heatmap." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.406, + 0.486, + 0.664 + ], + "angle": 0, + "content": "entire datasets, demonstrating improved robustness [21, 22, 32, 55]. For instance, UniAD [55] utilizes layer-wise query decoding, neighbor-masked attention, and feature jittering techniques to enhance multi-class anomaly detection. In contrast, HVQ-Trans [32] employs a hierarchical vector quantized prototype-oriented Transformer to improve discriminative capacity across multiple classes. Both methods focus on tackling the \"identical shortcut\" issue. 
On the other hand, RLR [22] introduces a novel framework using learnable reference representations with locality constraints to explicitly learn normal patterns and circumvent \"learning shortcuts\", achieving superior results on standard datasets (MVTec-AD and VisA). Additionally, the DiAD [21] model, designed for multi-class settings and based on diffusion-based reconstruction, outperforms UniAD [55], RLR [22], and HVQ-Trans [32] in terms of localization and detection performance." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.677, + 0.308, + 0.695 + ], + "angle": 0, + "content": "3. Proposed Methodology" + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.702, + 0.233, + 0.717 + ], + "angle": 0, + "content": "3.1. Preliminaries" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.724, + 0.485, + 0.83 + ], + "angle": 0, + "content": "Before introducing the full methodology, we begin with a preliminary overview of score-based generative models (SGMs). Next, we explain the Schrödinger Bridge concept and discuss its connection to SGMs. Finally, we present the core methodology, detailing how and why the latent Schrödinger Bridge effectively addresses anomaly detection and localization." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.84, + 0.485, + 0.902 + ], + "angle": 0, + "content": "Notations. Here we introduce some notations and use them throughout the work. Let \\( X_{t}(\\in \\mathbb{R}^{d}) \\) denote a stochastic process, where \\( t\\in [0,1] \\) is a continuous time step. The intermediate steps are uniformly distributed in the interval" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.406, + 0.907, + 0.498 + ], + "angle": 0, + "content": "\\((\\mathcal{U}(t \\sim [0,1]))\\). The initial distribution, denoted \\(p_A\\), is the corrupted data distribution, and the terminal distribution, denoted \\(p_B\\), is the clean data distribution. The Wiener process and its reversed counterpart, adopted from Anderson et al. [2], are represented as \\(W_t\\) and \\(\\overline{W}_t\\), respectively, both in \\(\\mathbb{R}^d\\). \\(\\mathbb{I} \\in \\mathbb{R}^{d \\times d}\\) is the identity matrix." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.507, + 0.909, + 0.627 + ], + "angle": 0, + "content": "Score-based Generative Models. Score-based Generative Models (SGMs) perturb the data stochastically across continuous noise scales and then use reverse-time stochastic differential equations (SDEs) to learn the reverse-time diffusion process. This reverse-time SDE relies on a score function enabling the reconstruction of any distribution starting from a Gaussian [46]. Given data \\( X_0 \\sim p_A \\), forward and backward SDEs are formulated as:" + }, + { + "type": "equation", + "bbox": [ + 0.552, + 0.638, + 0.905, + 0.656 + ], + "angle": 0, + "content": "\\[\nd X _ {t} = f _ {t} \\left(X _ {t}\\right) d t + \\sqrt {\\beta_ {t}} d W _ {t}, \\quad X _ {0} \\sim p _ {A}, \\tag {1}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.553, + 0.66, + 0.905, + 0.678 + ], + "angle": 0, + "content": "\\[\nd X _ {t} = \\left[ f _ {t} - \\beta_ {t} \\nabla \\log p \\left(X _ {t}, t\\right) \\right] d t + \\sqrt {\\beta_ {t}} d \\bar {W} _ {t}, \\tag {2}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.689, + 0.907, + 0.78 + ], + "angle": 0, + "content": "where \\(f(\\cdot ,t):\\mathbb{R}^n\\to \\mathbb{R}^n\\) is the base drift, and the terminal distributions (i.e., at \\(t = 1\\)) approach a Gaussian \\((X_{1}\\sim \\mathcal{N}(0,I))\\). 
To achieve this, the diffusion coefficient \\(\\beta_{t}\\in \\mathbb{R}\\) is carefully tuned, ensuring that the base drift \\(f_{t}\\) is linear in \\(X_{t}\\). Here, \\(p\\) is the marginal density of equation (2) at time \\(t\\), and \\(\\nabla \\log p\\) denotes its score [46]." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.789, + 0.909, + 0.879 + ], + "angle": 0, + "content": "SGMs as Schrödinger Bridge. The absence of flexibility in SGMs to transport data to a desired distribution demands a more versatile strategy. The Schrödinger Bridge (SB) is a strategy often applied in optimal transport to find the optimal path measure between two marginal densities, and it is expressed as," + }, + { + "type": "equation", + "bbox": [ + 0.63, + 0.88, + 0.905, + 0.904 + ], + "angle": 0, + "content": "\\[\n\\min _ {Q \\in P \\left(p _ {A}, p _ {B}\\right)} \\operatorname {K L} (\\mathbb {Q} | | \\mathbb {P}), \\tag {3}\n\\]" + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.521, + 0.958 + ], + "angle": 0, + "content": "25530" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.092, + 0.485, + 0.242 + ], + "angle": 0, + "content": "Here, \\(\\mathbb{Q}\\) represents a path measure within \\(\\mathbb{P}(p_A,p_B)\\), characterized by having marginal densities \\(p_A\\) and \\(p_B\\) at times \\(t = 0\\) and 1, respectively. We consider \\(\\mathbb{P}\\) as the reference measure, specifically chosen as the path measure in equation (1). We will further elaborate on the conditions that define the optimality of the SB (as shown in equation (3)) with specified boundary conditions. The optimality condition for SB is characterized by solving PDEs [9][8]. Let \\(\\Psi (t,x)\\) and \\(\\widehat{\\Psi} (t,x)\\) be the solutions to the following PDEs:" + }, + { + "type": "equation", + "bbox": [ + 0.172, + 0.251, + 0.483, + 0.282 + ], + "angle": 0, + "content": "\\[\n\\frac {\\partial \\Psi (z , t)}{\\partial t} = - \\nabla \\Psi^ {T} f - \\frac {1}{2} \\beta \\Delta \\Psi \\tag {4}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.175, + 0.284, + 0.483, + 0.317 + ], + "angle": 0, + "content": "\\[\n\\frac {\\partial \\widehat {\\Psi} (z , t)}{\\partial t} = - \\nabla \\cdot (\\widehat {\\Psi} f) + \\frac {1}{2} \\beta \\Delta \\widehat {\\Psi} \\tag {5}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.328, + 0.26, + 0.342 + ], + "angle": 0, + "content": "subject to the conditions," + }, + { + "type": "equation", + "bbox": [ + 0.11, + 0.352, + 0.466, + 0.372 + ], + "angle": 0, + "content": "\\[\n\\Psi (z, 0) \\widehat {\\Psi} (z, 0) = p _ {A} (z), \\quad \\Psi (z, 1) \\widehat {\\Psi} (z, 1) = p _ {B} (z).\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.384, + 0.485, + 0.43 + ], + "angle": 0, + "content": "Then, the solution to optimization (3) can be expressed by the path measure of the following forward equation (6), or equivalently backward equation (7), SDE:" + }, + { + "type": "equation", + "bbox": [ + 0.12, + 0.441, + 0.483, + 0.46 + ], + "angle": 0, + "content": "\\[\nd X _ {t} = \\left[ f _ {t} + \\beta_ {t} \\nabla \\log \\Psi (X _ {t}, t) \\right] d t + \\sqrt {\\beta_ {t}} d W _ {t}, \\tag {6}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.12, + 0.463, + 0.483, + 0.482 + ], + "angle": 0, + "content": "\\[\nd X _ {t} = \\left[ f _ {t} - \\beta_ {t} \\nabla \\log \\hat {\\Psi} \\left(X _ {t}, t\\right) \\right] d t + \\sqrt {\\beta_ {t}} d \\bar {W} _ {t}, \\tag {7}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.496, + 0.484, + 0.695 + ], 
"angle": 0, + "content": "where \\(\\nabla \\log \\Psi\\) and \\(\\nabla \\log \\widehat{\\Psi}\\) represent the non-linear optimal forward and backward drifts for the Schrödinger Bridge. This SB has a non-linear behavior and it generalizes the SGMs with variation in prior data. Note that, forward and backward equations (6, 7) of Schrödinger Bridge are same as SGMs forward-backward equations (1, 2) except the forward drift term (\\(\\nabla \\log \\Psi\\)). Thus, from the Nelson's Duality [34] we obtain \\(\\Psi(x,t)\\widehat{\\Psi}(x,t) = q^{\\mathrm{SB}}(x,t)\\). As the backward drift of Schrödinger Bridge does not act as a score function anymore with two indivisible components. Thus, from equation (7) and Nelson's Duality [34] we obtain \\(dX_{t} = [f_{t} - \\beta_{t}(\\nabla \\log q^{\\mathrm{SB}}(X_{t},t) - \\nabla \\log \\Psi(X_{t},t))]dt + \\sqrt{\\beta_{t}} d\\overline{W}_{t}\\)." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.705, + 0.484, + 0.902 + ], + "angle": 0, + "content": "Linear-SB. The constraints posed above cause the model to be intractable and to overcome this impediment existing literature suggests various lines of strategies such as Iterative Proportional Fitting [11, 25], likelihood-based training of SB [8] etc. These methods help the SGMs to construct non-linear diffusion bridges. But, we can make them tractable by posing some linear conditions. A more detailed examination of SB theory with SGM framework reveals that the nonlinear drifts specified in equations (6, 7) correspond to the score functions described in equation (2). Assuming that \\(\\Psi (\\cdot ,t)\\) and \\(\\widehat{\\Psi} (\\cdot ,t)\\) function as probability density functions, we reformulate equations (4, 5) to effectively address the solutions of the Fokker-Planck equation [36]." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.092, + 0.906, + 0.152 + ], + "angle": 0, + "content": "The Schrödinger Bridges in equation (4, 5) are satisfied, the backward and forward drifts, \\(\\nabla \\log \\widehat{\\Psi}(X_t, t)\\) and \\(\\nabla \\log \\Psi(X_t, t)\\) represent the score functions for the following linear SDEs, respectively [36][27]:" + }, + { + "type": "equation", + "bbox": [ + 0.54, + 0.164, + 0.907, + 0.183 + ], + "angle": 0, + "content": "\\[\nd X _ {t} = f _ {t} \\left(X _ {t}\\right) d t + \\sqrt {\\beta_ {t}} d W _ {t}, \\quad X _ {0} \\sim \\Psi (\\cdot , 0), \\tag {8}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.541, + 0.186, + 0.907, + 0.205 + ], + "angle": 0, + "content": "\\[\nd X _ {t} = f _ {t} \\left(X _ {t}\\right) d t + \\sqrt {\\beta_ {t}} d \\overline {{W}} _ {t}, \\quad X _ {1} \\sim \\widehat {\\Psi} (\\cdot , 1). \\qquad (9)\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.217, + 0.907, + 0.43 + ], + "angle": 0, + "content": "The key characteristic of the linear SDEs presented in equations (8, 9) lies in their differing boundary conditions, resulting in distinct distributions compared to nonlinear SDEs. Sampling is facilitated by parameterizing \\(\\nabla \\log \\widehat{\\Psi}\\) using a score network and applying established SGM methods. However, the computational complexity arises due to the intractability of the boundary conditions \\(\\Psi(\\cdot, 0)\\) and \\(\\widehat{\\Psi}(\\cdot, 1)\\). To resolve this, we introduce Dirac delta functions as boundary conditions for these previously intractable terms, thus rendering them manageable. Now to address this we impose Dirac delta boundary conditions [36][27]. 
Assume \\(p_A(\\cdot) = \\delta_\\kappa(\\cdot)\\) is Dirac delta distribution that is centered at \\(\\kappa \\in \\mathbb{R}^d\\). Then, the initial distributions of the linear SDEs (equation (8,9)) are:" + }, + { + "type": "equation", + "bbox": [ + 0.576, + 0.44, + 0.907, + 0.46 + ], + "angle": 0, + "content": "\\[\np _ {B} = \\Psi (\\cdot , 1) \\widehat {\\Psi} (\\cdot , 1), \\quad \\widehat {\\Psi} (\\cdot , 0) = \\delta_ {\\kappa} (\\cdot), \\tag {10}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.471, + 0.906, + 0.593 + ], + "angle": 0, + "content": "The above equation (10) suggests that the optimal backward drift of the process in equation (8) consistently targets the Dirac delta \\(\\delta_{\\kappa}(\\cdot)\\), achieving convergence to \\(\\kappa\\) independent of \\(p_B\\). This concept is integrated into the loss function, where the score is recalculated as \\(\\nabla \\log p(X_t,t|X_0 = \\kappa)\\) for each instance \\(\\kappa\\). This approach enhances computational efficiency and establishes a robust mathematical basis for training \\(\\nabla \\log \\widehat{\\Psi} (\\cdot)\\)." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.602, + 0.906, + 0.619 + ], + "angle": 0, + "content": "3.2. Linear Schrödinger Bridge in the Latent Space" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.624, + 0.907, + 0.669 + ], + "angle": 0, + "content": "We now describe how the above discussion on linear Schrödinger bridges can be used for anomaly detection and localization." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.689, + 0.907, + 0.901 + ], + "angle": 0, + "content": "Overall Pipeline. Figure 2 illustrates the entire pipeline of the proposed method. Initially, anomaly augmentation [56] is applied to a normal image, which is then processed by the Latent Anomaly Schrödinger Bridge (LASB) model. The LASB model learns the anomaly statistics and transforms the augmented anomaly image back into a normal image. Crucially, the aim of this step is not to learn the anomalies themselves, but rather to reconstruct the normal image regardless of the anomalies introduced during training. During training, various masks and noise patterns are applied to augment the images. After training, the LASB model is evaluated on anomaly images from the test set, with its weights frozen. 
During the inference phase, the model is presented with an anomaly image from the test" + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.518, + 0.957 + ], + "angle": 0, + "content": "25531" + } + ], + [ + { + "type": "code_caption", + "bbox": [ + 0.093, + 0.091, + 0.312, + 0.107 + ], + "angle": 0, + "content": "Algorithm 1 Training Procedure" + }, + { + "type": "text", + "bbox": [ + 0.105, + 0.11, + 0.432, + 0.127 + ], + "angle": 0, + "content": "1: Input: normal and anomaly latent \\( p_A^{(z)}(\\cdot), p_B^{(z)}(\\cdot | z_0) \\)" + }, + { + "type": "text", + "bbox": [ + 0.105, + 0.128, + 0.165, + 0.139 + ], + "angle": 0, + "content": "2: repeat" + }, + { + "type": "text", + "bbox": [ + 0.106, + 0.14, + 0.326, + 0.156 + ], + "angle": 0, + "content": "3: \\(z_0\\sim p_A^{(z)}(z_0),z_1\\sim p_B^{(z)}(z_1|z_0)\\)" + }, + { + "type": "text", + "bbox": [ + 0.105, + 0.156, + 0.397, + 0.169 + ], + "angle": 0, + "content": "4: \\(z_{t}\\sim q(z_{t}|z_{0},z_{1})\\) according to equation (11)" + }, + { + "type": "text", + "bbox": [ + 0.105, + 0.17, + 0.435, + 0.182 + ], + "angle": 0, + "content": "5: calculate gradient \\(\\nabla \\epsilon (z_t,t;\\theta)\\) using equation (12)" + }, + { + "type": "text", + "bbox": [ + 0.105, + 0.183, + 0.231, + 0.196 + ], + "angle": 0, + "content": "6: until convergence" + }, + { + "type": "list", + "bbox": [ + 0.105, + 0.11, + 0.435, + 0.196 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.225, + 0.484, + 0.347 + ], + "angle": 0, + "content": "set, which it then reconstructs as a normal image. Both the original anomaly image and its reconstructed normal image are subsequently passed through a difference block. In this step, multi-scale features are extracted using a pre-trained ImageNet model, and the discrepancies between these features are computed to pinpoint regions of interest (ROI). These differences are then detected and localized, following a strategy akin to DiAD [21]." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.353, + 0.484, + 0.518 + ], + "angle": 0, + "content": "Latent Anomaly Schrödinger Bridge. The foundational basis of the Latent Anomaly Schrödinger Bridge (LASB) is inspired by the linear Schrödinger Bridge of equation (10), i.e., the optimal backward drift in equation (8) consistently aims for the Dirac delta \\(\\delta_{\\kappa}(\\cdot)\\), ensuring convergence to \\(\\kappa\\) regardless of \\(p_B\\). That means the forward process transforms the noisy anomaly image statistics and learns to reconstruct a normal image irrespective of the terminal distribution. To elucidate the working mechanism of the LASB diffusion model, we follow the design strategy of Chang et al. [7]." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.52, + 0.484, + 0.854 + ], + "angle": 0, + "content": "SGMs and diffusion models operating in pixel space require significant computational resources and memory. To address this challenge, we introduce the LASB model. Before training the Linear-SB, we first train \\( E_{VQ} \\) and \\( D_{VQ} \\), the encoder and decoder of the VQ-VAE model [49], for perceptual compression, following the approach of Stable Diffusion [37]. During VQ-VAE training, we input either normal or anomaly-augmented images from the training set into \\( E_{VQ} \\) and aim to reconstruct the input using \\( D_{VQ} \\). Once the model converges, its weights are frozen, and we proceed to train the Linear-SB in the latent space. 
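A minimal sketch of this two-stage setup is shown below: a frozen autoencoder turns (normal, anomaly-augmented) image pairs into the latent pairs consumed by Algorithm 1. Here `vae` and `augment_anomaly` are hypothetical placeholders for the fine-tuned Stable Diffusion autoencoder and a DRAEM-style anomaly augmentation; the sketch only illustrates the data flow, not the paper's exact code.

```python
# Sketch of stage-1 output feeding stage-2: a frozen autoencoder maps image
# pairs (normal, anomaly-augmented) to latent pairs (z0, z1) for Algorithm 1.
# `vae` and `augment_anomaly` are hypothetical placeholders.
import torch

@torch.no_grad()
def latent_pairs(vae, normal_images, augment_anomaly):
    vae.eval()                                       # weights frozen after stage 1
    z0 = vae.encode(normal_images)                   # clean latents, e.g. 3x64x64
    z1 = vae.encode(augment_anomaly(normal_images))  # anomaly-augmented latents
    return z0, z1
```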
Unlike Gaussian diffusion models, where noise is progressively added at each time step, leading to the deterioration of image structure at later stages, we adopt a semi-degradation strategy similar to Saharia et al. [40]. This ensures that, during the forward pass, the LASB model performs a smooth transformation while preserving structural information. Figure 1 illustrates the contrast between the Gaussian diffusion process and the Schrödinger Bridge diffusion process when applied to anomaly-free reconstruction. According to Liu et al. [27], for this smooth structural transformation, the posterior of the SB (equations (6) and (7))" + }, + { + "type": "code_caption", + "bbox": [ + 0.516, + 0.091, + 0.807, + 0.107 + ], + "angle": 0, + "content": "Algorithm 2 Inference/Sampling Procedure" + }, + { + "type": "text", + "bbox": [ + 0.526, + 0.11, + 0.772, + 0.127 + ], + "angle": 0, + "content": "1: Input: \\( z_{N} \\sim p_{B}^{(z)}(z_{N}) \\), trained \\( \\epsilon_{\\theta}(\\cdot, \\cdot) \\)" + }, + { + "type": "text", + "bbox": [ + 0.528, + 0.128, + 0.658, + 0.139 + ], + "angle": 0, + "content": "2: for \\( n = N \\) to 1 do" + }, + { + "type": "text", + "bbox": [ + 0.528, + 0.14, + 0.718, + 0.154 + ], + "angle": 0, + "content": "3: Predict \\( z_0^\\epsilon \\) using \\( \\epsilon_{\\theta}(z_n, t_n) \\)" + }, + { + "type": "text", + "bbox": [ + 0.528, + 0.155, + 0.702, + 0.168 + ], + "angle": 0, + "content": "4: \\(z_{n - 1}\\sim p(z_{n - 1}|z_0^\\epsilon ,z_n)\\)" + }, + { + "type": "text", + "bbox": [ + 0.528, + 0.169, + 0.592, + 0.18 + ], + "angle": 0, + "content": "5: end for" + }, + { + "type": "text", + "bbox": [ + 0.528, + 0.182, + 0.604, + 0.193 + ], + "angle": 0, + "content": "6: return \\( z_0 \\)" + }, + { + "type": "list", + "bbox": [ + 0.526, + 0.11, + 0.772, + 0.193 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.221, + 0.906, + 0.28 + ], + "angle": 0, + "content": "given a boundary pair condition \\((X_0, X_1)\\), admits an analytic form. Since we inject the image statistics into the latent space via VQ-VAE perceptual compression, this analytical form can be written as" + }, + { + "type": "equation", + "bbox": [ + 0.588, + 0.295, + 0.906, + 0.313 + ], + "angle": 0, + "content": "\\[\nq \\left(z _ {t} \\mid z _ {0}, z _ {1}\\right) = \\mathcal {N} \\left(z _ {t}; \\mu_ {t} \\left(z _ {0}, z _ {1}\\right), \\Sigma_ {t}\\right) \\tag {11}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.317, + 0.906, + 0.345 + ], + "angle": 0, + "content": "where \\(\\mu_t = \\frac{\\bar{\\sigma}_t^{(z)2}}{\\bar{\\sigma}_t^{(z)2} + \\sigma_t^{(z)2}} z_0 + \\frac{\\sigma_t^{(z)2}}{\\bar{\\sigma}_t^{(z)2} + \\sigma_t^{(z)2}} z_1\\), and" + }, + { + "type": "equation", + "bbox": [ + 0.515, + 0.344, + 0.641, + 0.372 + ], + "angle": 0, + "content": "\\[\n\\Sigma_ {t} = \\frac {\\bar {\\sigma} _ {t} ^ {(z) 2} \\sigma_ {t} ^ {(z) 2}}{\\bar {\\sigma} _ {t} ^ {(z) 2} + \\sigma_ {t} ^ {(z) 2}} \\mathbb {I}.\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.376, + 0.906, + 0.415 + ], + "angle": 0, + "content": "The accumulated variances are \\(\\sigma_t^{(z)2} \\coloneqq \\int_0^t \\beta_\\tau \\, d\\tau\\) and \\(\\bar{\\sigma}_t^{(z)2} \\coloneqq \\int_t^1 \\beta_\\tau \\, d\\tau\\)." + },
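To make the posterior in equation (11) concrete, here is a minimal numerical sketch using the accumulated variances just defined; the linear beta schedule, the time grid, and the latent stand-ins are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def _trapezoid(y, x):
    # plain trapezoidal integration, kept explicit to avoid version-specific deps
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))) if len(x) > 1 else 0.0

def accumulated_variances(t, ts, betas):
    # sigma_t^2 = int_0^t beta_tau dtau  and  sigma_bar_t^2 = int_t^1 beta_tau dtau
    fwd, bwd = ts <= t, ts >= t
    return _trapezoid(betas[fwd], ts[fwd]), _trapezoid(betas[bwd], ts[bwd])

def sample_posterior(z0, z1, t, ts, betas, rng):
    # z_t ~ q(z_t | z0, z1) = N(mu_t, Sigma_t), as in equation (11)
    s2, s2b = accumulated_variances(t, ts, betas)
    denom = s2 + s2b
    mu = (s2b / denom) * z0 + (s2 / denom) * z1
    var = s2 * s2b / denom
    return mu + np.sqrt(var) * rng.standard_normal(z0.shape)

rng = np.random.default_rng(0)
ts = np.linspace(0.0, 1.0, 1001)
betas = 0.1 + 0.9 * ts                    # assumed schedule, not the paper's
z0, z1 = np.zeros(4), np.ones(4)          # stand-ins for normal/anomaly latents
zt = sample_posterior(z0, z1, 0.5, ts, betas, rng)
```

Note how the mean interpolates between the two boundary latents while the variance vanishes at both endpoints, which is exactly the structure-preserving behavior the semi-degradation strategy relies on.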
 + { + "type": "text", + "bbox": [ + 0.512, + 0.427, + 0.906, + 0.456 + ], + "angle": 0, + "content": "We hence train the proposed LASB using our objective function:" + }, + { + "type": "equation", + "bbox": [ + 0.602, + 0.463, + 0.906, + 0.495 + ], + "angle": 0, + "content": "\\[\n\\mathcal {L} _ {L A S B} := \\left| \\left| \\epsilon_ {\\theta} \\left(z _ {t}, t\\right) - \\left(\\frac {z _ {t} - z _ {0}}{\\sigma_ {t} ^ {(z)}}\\right) \\right| \\right| \\tag {12}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.503, + 0.906, + 0.549 + ], + "angle": 0, + "content": "where \\(\\epsilon_{\\theta}(\\cdot)\\) is the denoising network and \\(\\theta\\) denotes its parameters. The training and the inference procedures are summarized in Algorithms 1 and 2." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.561, + 0.749, + 0.578 + ], + "angle": 0, + "content": "4. Experiments and Results" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.582, + 0.907, + 0.839 + ], + "angle": 0, + "content": "Datasets. To validate the efficacy of the LASB method, we utilize two challenging datasets in industrial anomaly detection: the MVTec-AD [5] dataset and the VisA [63] dataset. The MVTec-AD dataset comprises 15 categories, including 10 object types and 5 texture types, with a total of 5,354 high-resolution images. Of these, 3,629 anomaly-free images are used for training, while the remaining 1,725 test images include both normal and anomalous samples. The VisA dataset consists of 12 distinct objects organized into 12 subsets, categorized into Complex Structure, Multiple Instances, and Single Instance types. It includes 10,821 high-resolution images, with 9,621 normal images and 1,200 anomalous images exhibiting 78 distinct anomalies, offering a comprehensive benchmark for anomaly detection and localization methods. Both datasets provide pixel-level ground truth annotations to facilitate the evaluation of anomaly localization performance." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.84, + 0.906, + 0.902 + ], + "angle": 0, + "content": "Evaluation Metrics. For a rigorous quantitative evaluation of our experimental results in anomaly detection and localization, we employ AUROC (Area Under the Receiver Operating Characteristic Curve), AP (Average Precision)," + }, + { + "type": "page_footnote", + "bbox": [ + 0.09, + 0.863, + 0.483, + 0.901 + ], + "angle": 0, + "content": "1The work of Chang et al. [7] focuses on explaining diffusion models by laying out the design principles of the training strategy, the sampling strategy, and the objective function." + }, + { + "type": "footer", + "bbox": [ + 0.479, + 0.945, + 0.52, + 0.957 + ], + "angle": 0, + "content": "25532" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.1, + 0.082, + 0.346, + 0.185 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.347, + 0.081, + 0.596, + 0.185 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.599, + 0.082, + 0.898, + 0.185 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.089, + 0.189, + 0.908, + 0.232 + ], + "angle": 0, + "content": "Figure 3. Comparison of training time (left), training memory (middle), and sampling time (right) across different image resolutions for SGM, SB, DDPM, and LASB models. The LASB model (green) demonstrates consistently lower training time, memory usage, and sampling time, particularly in contrast to the SB model (red), which exhibits the highest resource consumption." + },
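Read together, equations (11) and (12) and Algorithms 1 and 2 admit a compact sketch. The network signature `eps_net(z_t, t)`, the precomputed variance tensors, and the analytic Gaussian posterior step (in the spirit of Liu et al. [27]) are schematic assumptions here, not the authors' implementation.

```python
import torch

def lasb_training_step(eps_net, optimizer, z0, z1, sigma2, sigma2_bar):
    """One iteration of Algorithm 1 (sketch): draw a time index, sample
    z_t ~ q(z_t | z0, z1) via equation (11), then regress eps_net(z_t, t)
    onto (z_t - z0) / sigma_t as in equation (12).
    sigma2 / sigma2_bar: assumed precomputed accumulated variances on the grid."""
    n = torch.randint(1, len(sigma2), (z0.shape[0],))        # uniform time step
    shape = (z0.shape[0],) + (1,) * (z0.dim() - 1)
    s2, s2b = sigma2[n].view(shape), sigma2_bar[n].view(shape)
    mu = (s2b * z0 + s2 * z1) / (s2 + s2b)
    zt = mu + (s2 * s2b / (s2 + s2b)).sqrt() * torch.randn_like(z0)
    target = (zt - z0) / s2.sqrt()
    loss = (eps_net(zt, n) - target).flatten(1).norm(dim=1).mean()  # eq. (12)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

@torch.no_grad()
def lasb_sample(eps_net, zN, sigma2):
    """Algorithm 2 (sketch): for n = N..1, predict z0 by inverting equation (12),
    then draw z_{n-1} from the analytic Gaussian posterior p(z_{n-1} | z0, z_n)."""
    z = zN
    for n in range(len(sigma2) - 1, 0, -1):
        t = torch.full((z.shape[0],), n, dtype=torch.long)
        z0_hat = z - sigma2[n].sqrt() * eps_net(z, t)        # invert eq. (12)
        s2_prev, a2 = sigma2[n - 1], sigma2[n] - sigma2[n - 1]
        mu = (a2 * z0_hat + s2_prev * z) / (a2 + s2_prev)
        z = mu + (s2_prev * a2 / (a2 + s2_prev)).sqrt() * torch.randn_like(z)
    return z
```

The final step (n = 1) has zero posterior variance, so sampling ends deterministically at the predicted normal latent, matching the "return z_0" line of Algorithm 2.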
 + { + "type": "text", + "bbox": [ + 0.089, + 0.245, + 0.482, + 0.352 + ], + "angle": 0, + "content": "and F1max (maximum F1-score) as our primary metrics [21]. Here, \\(cls\\) denotes image-level anomaly detection, while \\(seg\\) pertains to pixel-level anomaly localization. A detailed class-specific comparison and results for all the aforementioned metrics are provided in Section S2 of the supplementary material. In Table 1, we present the average performance of each model across the evaluated datasets." + }, + { + "type": "table", + "bbox": [ + 0.091, + 0.363, + 0.487, + 0.634 + ], + "angle": 0, + "content": "
Method | Venue | Adopted Method | MvTec | ViSA
Class-based
SimpleNet [29] | CVPR 2023 | Embedding-based | 99.6 / 98.1 | 96.2 / 98.5
PatchCore [38] | CVPR 2022 | Embedding-based | 99.1 / 98.1 | 91.0 / 98.1
DSR [58] | ECCV 2022 | - | 98.2 / 70.2 | -
PaDiM [12] | ICPR 2021 | Memory Bank | 97.5 / 92.1 | 89.1 / 85.9
CS-Flow [39] | WACV 2022 | Normalization Flow | 97.2 / 84.5 | -
CFLOW-AD [19] | WACV 2022 | Normalization Flow | 96.2 / 97.1 | -
OCR-GAN [26] | TIP 2023 | GAN | 98.3 / - | 97.9 / -
DRAEM [56] | ICCV 2021 | Encoder | 98.0 / 97.3 | 88.7 / 93.5
RD4AD [13] | CVPR 2022 | Embedding-based | 98.5 / 97.8 | 96.9 / 98.3
ADSPR [43] | ICCV 2023 | Diffusion | 97.67 / 97.36 | -
DiffAD [60] | ICCV 2023 | Diffusion | 98.72 / 98.26 | 89.79 / 68.28
D3AD [48] | CVPR 2024 | Diffusion | 97.15 / 97.44 | 95.51 / 94.27
DDAD [33] | arXiv 2023 | Diffusion | 99.84 / 98.05 | 98.9 / 97.58
TransFusion [16] | ECCV 2024 | Diffusion | 99.24 / 94.33 | 98.53 / 86.26
LASB (Ours) | - | Diffusion | 99.66 / 99.15 | 98.52 / 99.06
Unified (multi-class)
DRAEM [56] | ICCV 2021 | Encoder | 88.1 / 87.2 | 79.1 / 91.3
HVQ-Trans [32] | NeurIPS 2023 | Non-Diffusion | 98.0 / 97.3 | 93.2 / 98.7
MambaAD [20] | NeurIPS 2024 | Non-Diffusion | 98.6 / 97.7 | 94.3 / 98.5
OmniAL [61] | CVPR 2023 | Non-Diffusion | 97.2 / 98.3 | 94.2 / 96.0
GLAD [54] | ECCV 2024 | Diffusion | 97.5 / 97.4 | 91.8 / 97.8
UniAD [55] | NeurIPS 2022 | Transformer | 96.52 / 96.8 | 85.5 / 95.925
DIAD [21] | AAAI 2024 | Diffusion | 97.15 / 96.76 | 86.75 / 96.04
LASB (Ours) | - | Diffusion | 99.14 / 98.66 | 94.2 / 98.18
" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.645, + 0.483, + 0.756 + ], + "angle": 0, + "content": "Table 1. Comparison of state-of-the-art (SOTA) anomaly detection methods categorized into class-based and unified (multi-class) approaches. The table highlights the adopted methodologies and their class-wise average performance in terms of AUROCcls / AUROCseg metrics on two datasets: MvTec and ViSA respectively. An exhaustive list of metrics along with class specific performance details is comprehensively illustrated in Section S2 of supplementary material." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.765, + 0.483, + 0.901 + ], + "angle": 0, + "content": "Implementation Details. For all our experiments, we resized MVTec-AD images to \\(256 \\times 256\\) resolution. To fine-tune the VQ-VAE (Auto-encoder) utilizing the KL-based method, we initialized the network weights from the Stable Diffusion [37] model, where the image is encoded into a latent vector with a size of \\(64 \\times 64 \\times 3\\). We chose this latent vector size due to its preservation of details and low FID generation capabilities. After training the auto-encoder, we froze the weights and trained the latent denoising network." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.245, + 0.905, + 0.321 + ], + "angle": 0, + "content": "We used the U-Net model as in [23] as our latent denoising network. During inference, we pass only the anomaly test image as input to the DDPM sampler [23]. Further details on batch size and various other network hyperparameters are in Section S4 of the appendix." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.332, + 0.907, + 0.5 + ], + "angle": 0, + "content": "Baselines. To benchmark LASB, we provide a detailed comparison with state-of-the-art (SOTA) reconstruction methods, as summarized in Table 1. These methods are categorized into two distinct groups: class-based and unified (multi-class) approaches. Class-based methods train models specific to each class, with the number of models increasing linearly with the number of classes, making scalability a challenge. In contrast, unified approaches handle multiple classes within a single model, offering a more scalable solution. By separating the class-based and unified approaches, we ensure a fair and comprehensive comparison." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.51, + 0.909, + 0.751 + ], + "angle": 0, + "content": "Class-based Methods: Performance Analysis. Among the class-based methods, PaDiM [12], with its memory bank-based architecture, achieves \\(97.5\\%\\) on MVtec-AD and \\(89.1\\%\\) on ViSA. While effective for simpler anomalies, its scalability and ability to model inter-class variations remain limited. In contrast, DiffAD [60] demonstrates strong competitiveness, achieving \\(98.72\\%\\) on MVtec-AD. However, its performance on ViSA drops to \\(89.79\\%\\), indicating its difficulty in adapting to diverse industrial anomalies. DDAD [33] shows highly competitive results, achieving \\(99.84\\%\\) on MVtec-AD and \\(98.9\\%\\) on ViSA. Its diffusion-based modeling effectively captures structural patterns, making it one of the closest competitors to LASB. However, LASB surpasses DDAD on ViSA by a margin of \\(0.38\\% \\uparrow\\), demonstrating superior adaptability to multi-class industrial scenarios." + }, + { + "type": "table", + "bbox": [ + 0.517, + 0.764, + 0.922, + 0.83 + ], + "angle": 0, + "content": "
Task | Metrics | Pixel Space | Latent Space
 |  | DDPM [23] | DDAD [33] | GLAD [54] | LDM [37] | DiAD [21] | LASB (Ours)
Detection | AUROCcls | 71.9 | 99.8 | 97.5 | 76.6 | 97.2 | 99.2 (-0.6%)
 | APcls | 81.6 | 99.5 | 99.1 | 87.6 | 99.0 | 99.3 (-0.2%)
 | F1maxcls | 86.6 | 97.9 | 96.6 | 88.1 | 96.5 | 98.5 (+2.0%)
Localization | AUROCseg | 75.6 | 98.0 | 97.4 | 85.1 | 96.8 | 98.6 (+0.60%)
 | APseg | 13.3 | 59.0 | 60.8 | 27.6 | 52.6 | 78.2 (+17.4%)
 | F1maxseg | 19.5 | 59.4 | 60.7 | 31.0 | 55.5 | 70.7 (+10.0%)
" + }, + { + "type": "table_caption", + "bbox": [ + 0.512, + 0.841, + 0.908, + 0.898 + ], + "angle": 0, + "content": "Table 2. Performance comparison of LASB with diffusion-based models on MVTec-AD using AUROC, AP, and F1max. The best results are in bold; the second-best is underline. Improvements are shown as a percentage over the second-best." + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "25533" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.094, + 0.089, + 0.482, + 0.254 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.264, + 0.483, + 0.307 + ], + "angle": 0, + "content": "Figure 4. Visual representation of test samples from the MVtec dataset, depicted through heatmaps for various models for different categories." + }, + { + "type": "text", + "bbox": [ + 0.093, + 0.324, + 0.485, + 0.761 + ], + "angle": 0, + "content": "Unified Models: Performance Analysis. For unified approaches, DRAEM [56] emerges as one of the least competitive models, with AUROCcls scores of \\(88.1\\%\\) on MVTec-AD and \\(79.1\\%\\) on ViSA. Its reliance on autoencoder-based reconstruction, combined with limited augmentation strategies, hampers its ability to generalize to diverse and complex anomaly scenarios, particularly in the multi-class ViSA dataset. Similarly, UniAD [55] performs less competitively, achieving \\(96.52\\%\\) on MVTec-AD and \\(85.5\\%\\) on ViSA. Its transformer-based architecture, while robust for certain tasks, struggles to balance efficiency and accuracy in anomaly detection across diverse classes. Similarly, DIAD [21] achieves \\(97.15\\%\\) on MVTec-AD and \\(86.75\\%\\) on ViSA but falls short in handling the intricate class variations present in the datasets. Among the competitive unified methods, MambaAD [20] achieves \\(98.6\\%\\) on MVTec-AD and \\(94.3\\%\\) on ViSA, showcasing its strength in handling multi-class scenarios using state-space models. However, LASB outperforms MambaAD with a score of \\(99.14\\%\\) on MVTec-AD and \\(94.2\\%\\) on ViSA, highlighting its superior capability to balance computational efficiency with detection accuracy. Furthermore, HVQ-Trans [32], with scores of \\(98.0\\%\\) on MVTec-AD and \\(93.2\\%\\) on ViSA, demonstrates solid performance but remains slightly behind LASB in scalability and robustness. Detailed class-specific results are available in Section S2 of supplementary material, and Figure 4 provides qualitative heat-map visualizations of anomaly regions, further demonstrating LASB's efficacy in both class-based and unified settings." + }, + { + "type": "table", + "bbox": [ + 0.099, + 0.776, + 0.48, + 0.844 + ], + "angle": 0, + "content": "
Task | Metrics | Non-Diffusion Method | Diffusion-based Method
 |  | DRAEM [56] | UniAD [55] | DiffAD [60] | DiAD [21] | LASB (Ours)
Detection | AUROCcls | 79.1 | 85.5 | 89.5 | 86.8 | 94.2
 | APcls | 81.9 | 85.5 | - | 88.3 | 92.2
 | F1maxcls | 78.9 | 84.4 | - | 85.1 | 94.5
Localization | AUROCseg | 91.3 | 95.9 | 71.2 | 96.0 | 98.2
 | APseg | 23.5 | 21.0 | - | 26.1 | 46.4
 | F1maxseg | 29.5 | 27.0 | - | 33.0 | 52.6
" + }, + { + "type": "table_caption", + "bbox": [ + 0.09, + 0.855, + 0.483, + 0.897 + ], + "angle": 0, + "content": "Table 3. Results for multi-class anomaly detection and localization on VisA dataset. The best results are indicated in bold, and the second-best results are denoted with an underline." + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.09, + 0.791, + 0.108 + ], + "angle": 0, + "content": "5. Ablation Studies and Analysis" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.116, + 0.907, + 0.253 + ], + "angle": 0, + "content": "In this section, we first differentiate the proposed LASB method's performance in latent spaces and explain the advantages of utilizing Linear-SB in the latent space. We then assess the LASB model's effectiveness when compared with Stable Diffusion [37] and other models for anomaly localization and detection tasks. Finally, we demonstrate the robustness of our proposed method, demonstrating its ability to deliver precise and consistent outcomes during test-time evaluations." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.266, + 0.908, + 0.642 + ], + "angle": 0, + "content": "LASB vs Standard Diffusion Models. In the realm of anomaly detection, standard diffusion models such as DDPM [23] and LDM [37] achieve competitive performance in anomaly detection tasks but exhibit significant limitations in localization, as evidenced in Table 2. To mitigate these limitations, DiAD [21] integrates a semantic guidance network to improve LDM's localization by incorporating additional contextual information. Despite these improvements, DiAD [21] still falls short of achieving state-of-the-art (SOTA) results, indicating persistent challenges in fully leveraging diffusion models for comprehensive anomaly detection tasks. In contrast, our LASB employs a novel approach that leverages Linear-SB within the latent space, avoiding the limitations associated with reconstructing from pure Gaussian noise, a common issue in standard diffusion processes when applied to anomaly-free reconstruction. Moreover, LASB semi-degrades latent space to retain structural integrity, facilitating more effective anomaly detection and localization. This method not only enhances the robustness of the detection process but also significantly improves localization accuracy. Consequently, the LASB model surpasses standard diffusion models across multiple metrics, demonstrating superior performance in multi-class anomaly detection, as evidenced in Table 2." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.645, + 0.909, + 0.903 + ], + "angle": 0, + "content": "We compare the performance of our proposed Latent Anomaly Schrödinger Bridge (LASB) method with the DDPM [23], DDAD [33], GLAD [54], LDM [37], and DiAD [21] models to demonstrate the effectiveness and efficiency of our approach. First, we evaluate the models on the MVTec-AD dataset for both anomaly detection and localization tasks. As shown in Table 2, LASB significantly outperforms all the other models in localization tasks for all the metrics. Specifically, LASB achieves a \\(0.6\\% \\uparrow\\), \\(17.4\\% \\uparrow\\), and \\(10.0\\% \\uparrow\\) improvement in AUROC\\(_{seg}\\), AP\\(_{seg}\\), F1max\\(_{seg}\\) respectively. Also, it shows \\(2.0\\% \\uparrow\\) enhancement in F1max\\(_{cls}\\) for detection, compared to GLAD [54]. These improvements highlight the robustness and effectiveness of the latent space approach specifically for localization. 
Furthermore, as shown in Figure 3, pixel-space models like DDPM [23], SGM [46], and SB models trained via Iterative Proportional Fitting [11, 25] or likelihood-based methods [8]" + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.945, + 0.52, + 0.957 + ], + "angle": 0, + "content": "25534" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.097, + 0.089, + 0.9, + 0.159 + ], + "angle": 0, + "content": "
 | NFE-1 | NFE-2 | NFE-5 | NFE-10 | NFE-100 | NFE-500 | NFE-1000
MVTec-AD | 97.42 / 96.82 | 98.16 / 97.84 | 98.94 / 98.16 | 99.14 / 98.66 | 99.14 / 98.66 | 99.14 / 98.66 | 99.14 / 98.66
VisA | 92.74 / 96.71 | 93.21 / 97.15 | 93.96 / 97.90 | 94.2 / 98.18 | 94.2 / 98.18 | 94.2 / 98.18 | 94.2 / 98.18
Inference Time (secs) | 0.12 | 0.25 | 0.52 | 0.74 | 1.5 | 7.5 | 15
" + }, + { + "type": "table_caption", + "bbox": [ + 0.09, + 0.169, + 0.905, + 0.227 + ], + "angle": 0, + "content": "Table 4. Performance evaluation across varying numbers of function evaluations (NFEs) on the MVTec-AD and VisA datasets. The tabulated metrics, AUROCcls / AUROCseg, provide a comprehensive overview of image-level and pixel-level anomaly detection performance at each NFE. Additionally, the inference time, measured in wall-clock seconds on an NVIDIA V100 GPU, underscores the computational trade-offs associated with increasing NFEs, reflecting the balance between model efficiency and performance." + }, + { + "type": "table", + "bbox": [ + 0.103, + 0.239, + 0.895, + 0.274 + ], + "angle": 0, + "content": "
Method | DRAEM [56] | PatchCore [38] | DiffAD [60] | TransFusion [16] | DIAD [21] | GLAD [54] | ADSPR [43] | DDAD [33] | LASB (ours)
Inference Time (secs) | 0.15 | 0.44 | 2.6 | 1.2 | 2.8 | 1.8 | 1.6 | 1.5 | 0.74
" + }, + { + "type": "table_caption", + "bbox": [ + 0.11, + 0.284, + 0.885, + 0.298 + ], + "angle": 0, + "content": "Table 5. Inference times for various methods where the inference time, measured in wall-clock seconds on an NVIDIA V100 GPU." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.313, + 0.482, + 0.389 + ], + "angle": 0, + "content": "require more training time, memory, and exhibit slower sampling. In contrast, latent-space LASB excels in all areas, achieving up to \\(5 \\times\\) and \\(2 \\times\\) faster training, \\(3 \\times\\) and \\(2 \\times\\) memory reduction, and \\(4 \\times\\) and \\(2 \\times\\) faster sampling compared to SB-based models [8, 11, 25] and DDPM [23], respectively." + }, + { + "type": "table", + "bbox": [ + 0.107, + 0.399, + 0.472, + 0.495 + ], + "angle": 0, + "content": "
Task | Metrics | Run 1 | Run 2 | Run 3 | Run 4
Detection | AUROCcls | 99.05 | 99.01 | 99.01 | 98.98
 | APcls | 99.15 | 99.14 | 99.14 | 99.11
 | F1maxcls | 98.59 | 98.33 | 99.83 | 98.32
Localization | AUROCseg | 98.58 | 98.58 | 98.59 | 98.57
 | APseg | 78.17 | 78.14 | 78.14 | 78.12
 | F1maxseg | 70.62 | 70.59 | 70.61 | 70.58
" + }, + { + "type": "table_caption", + "bbox": [ + 0.09, + 0.504, + 0.483, + 0.573 + ], + "angle": 0, + "content": "Table 6. Stability analysis of LASB on the MVTec-AD dataset for detection and localization tasks by performing the sampling multiple times. Results reported are mean values across classes and multiple samplings (see Section S1 of appendix for more detailed results)." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.584, + 0.483, + 0.825 + ], + "angle": 0, + "content": "Generalization and Sampling Stability. In the field of anomaly detection using diffusion models, achieving consistent outcomes during the sampling or inference stage is notably challenging due to the inherent stochastic nature of generative processes. This inconsistency can be particularly problematic in real-world applications where reliable and stable detection is critical. Therefore, assessing the stability of model outputs across multiple inferences is essential. Our approach involves training the model once and then conducting multiple sampling or inference tests to evaluate if the outcomes remain consistent over time. A critical consideration for real-world applicability is achieving fast inference while minimizing computational overhead. The proposed LASB model strikes a balance between performance and efficiency, ensuring its suitability for deployment in practical scenarios." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.826, + 0.483, + 0.902 + ], + "angle": 0, + "content": "Our findings, as detailed in Table 6, reveal that the LASB model exhibits remarkable stability across all evaluation metrics, showing negligible variance across numerous inferences. This consistency is attributed to the model's capability to maintain structural integrity and effectively tran" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.313, + 0.906, + 0.418 + ], + "angle": 0, + "content": "sition from anomalous to normal latent spaces. The LASB model is designed to reconstruct a normal image regardless of the underlying anomalies, compelling it to disregard anomalous features during reconstruction. This process not only ensures that anomalies of various patterns, sizes, and orientations are effectively handled but also enhances the overall reliability of the model." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.43, + 0.907, + 0.701 + ], + "angle": 0, + "content": "Inference Complexity. As shown in Table 4, LASB demonstrates exceptional efficiency with its rapid sampling capabilities, achieving near-optimal performance with as few as 10 NFEs (Number of Function Evaluations) and an inference time of only 0.74 seconds on an NVIDIA V100 GPU. This highlights LASB's practicality for real-world applications, where fast and reliable anomaly detection and localization are crucial. To further illustrate its efficiency, we compare LASB's inference time with existing state-of-the-art (SOTA) models in Table 5. LASB is approximately \\(1.6 \\times\\) faster than TransFusion [16], a leading class-based method, while also outperforming it in both anomaly localization and detection tasks. Compared to DiAD [21], LASB achieves \\(2 \\times\\) faster inference times, delivering significantly stronger performance in both detection and localization metrics. This demonstrates LASB's clear advantage in balancing computational efficiency and robust anomaly detection." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.715, + 0.791, + 0.731 + ], + "angle": 0, + "content": "6. 
Conclusions and Future Work" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.734, + 0.906, + 0.901 + ], + "angle": 0, + "content": "The proposed Latent Anomaly Schrödinger Bridge (LASB) model demonstrates robust performance in anomaly detection and localization tasks. Its unified nature delimits the need for extra guidance or additional network components. LASB also excels in producing stable inferential results, requires less computational memory, and benefits from faster training and sampling rates. Given these advantages, LASB is well-suited for deployment in real-world industrial applications. Looking ahead, future research could focus on enhancing the robust translation mechanisms specific to anomaly detection and localization tasks." + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.946, + 0.518, + 0.957 + ], + "angle": 0, + "content": "25535" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.091, + 0.092, + 0.486, + 0.138 + ], + "angle": 0, + "content": "Acknowledgements. We thank the anonymous reviewers for their valuable feedback that improved the presentation of this paper." + }, + { + "type": "title", + "bbox": [ + 0.093, + 0.151, + 0.188, + 0.167 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.176, + 0.483, + 0.233 + ], + "angle": 0, + "content": "[1] Samet Akcay, Amir Atapour-Abarghouei, and Toby P Breckon. Ganomaly: Semi-supervised anomaly detection via adversarial training. In Computer Vision-ACCV 2018. Springer International Publishing, 2019. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.234, + 0.484, + 0.274 + ], + "angle": 0, + "content": "[2] Brian DO Anderson. Reverse-time diffusion equation models. Stochastic Processes and their Applications, 12(3):313-326, 1982. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.276, + 0.483, + 0.319 + ], + "angle": 0, + "content": "[3] Alexander Bauer, Shinichi Nakajima, and Klaus-Robert Müller. Self-supervised autoencoders for visual anomaly detection. Mathematics, 12(24), 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.32, + 0.483, + 0.362 + ], + "angle": 0, + "content": "[4] Finn Behrendt et al. Patched diffusion models for unsupervised anomaly detection in brain migraine. In Medical Imaging with Deep Learning, page PMLR, 2024. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.363, + 0.483, + 0.432 + ], + "angle": 0, + "content": "[5] Paul Bergmann, Michael Fauser, David Sattlegger, and Carsten Steger. Mvtec ad-a comprehensive real-world dataset for unsupervised anomaly detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9592-9600, 2019. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.434, + 0.483, + 0.475 + ], + "angle": 0, + "content": "[6] Paul Bergmann et al. Improving unsupervised defect segmentation by applying structural similarity to autoencoders. arXiv preprint arXiv:1807.02011, 2018. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.477, + 0.482, + 0.518 + ], + "angle": 0, + "content": "[7] Ziyi Chang, George A Koulieris, and Hubert PH Shum. On the design fundamentals of diffusion models: A survey. arXiv preprint arXiv:2306.04542, 2023. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.52, + 0.483, + 0.575 + ], + "angle": 0, + "content": "[8] Tianrong Chen, Guan-Horng Liu, and Evangelos Theodorou. Likelihood training of schrödinger bridge using forward-backward SDEs theory. 
In International Conference on Learning Representations, 2022. 2, 4, 7, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.577, + 0.483, + 0.631 + ], + "angle": 0, + "content": "[9] Yongxin Chen, Tryphon T Georgiou, and Michele Pavon. Stochastic control liaisons: Richard sinkhorn meets gaspard monge on a schrodinger bridge. Siam Review, 63(2):249-313, 2021. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.633, + 0.483, + 0.702 + ], + "angle": 0, + "content": "[10] Anne-Sophie Collin and Christophe De Vleeschouwer. Improved anomaly detection by training an autoencoder with skip connections on images corrupted with stain-shaped noise. In 2020 25th International Conference on Pattern Recognition (ICPR), pages 7915-7922. IEEE, 2021. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.704, + 0.483, + 0.773 + ], + "angle": 0, + "content": "[11] Valentin De Bortoli, James Thornton, Jeremy Heng, and Arnaud Doucet. Diffusion schrödinger bridge with applications to score-based generative modeling. Advances in Neural Information Processing Systems, 34:17695-17709, 2021. 2, 4, 7, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.775, + 0.483, + 0.844 + ], + "angle": 0, + "content": "[12] Thomas Defard, Aleksandr Setkov, Angelique Loesch, and Romaric Audigier. Padim: a patch distribution modeling framework for anomaly detection and localization. In International Conference on Pattern Recognition, pages 475-489. Springer, 2021. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.846, + 0.484, + 0.901 + ], + "angle": 0, + "content": "[13] Hanqiu Deng and Xingyu Li. Anomaly detection via reverse distillation from one-class embedding. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9737-9746, 2022. 6" + }, + { + "type": "list", + "bbox": [ + 0.094, + 0.176, + 0.484, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.093, + 0.905, + 0.148 + ], + "angle": 0, + "content": "[14] Wei Deng, Weijian Luo, Yixin Tan, Marin Bilos, Yu Chen, Yuriy Nevmyvaka, and Ricky T. Q. Chen. Variational schrödinger diffusion models. In International Conference on Machine Learning (ICML), 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.15, + 0.905, + 0.205 + ], + "angle": 0, + "content": "[15] Chouro Ding, Guansong Pang, and Chunhua Shen. Catching both gray and black swans: Open-set supervised anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.207, + 0.905, + 0.263 + ], + "angle": 0, + "content": "[16] Matic Fučka, Vitjan Zavrtanik, and Danijel Skočaj. Transfusion-a transparency-based diffusion model for anomaly detection. In European conference on computer vision, pages 91-108. Springer, 2024. 6, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.264, + 0.905, + 0.358 + ], + "angle": 0, + "content": "[17] Dong Gong, Lingqiao Liu, Vuong Le, Budhaditya Saha, Moussa Reda Mansour, Svetha Venkatesh, and Anton van den Hengel. Memorizing normality to detect anomaly: Memory-augmented deep autoencoder for unsupervised anomaly detection. In Proceedings of the IEEE/CVF international conference on computer vision, pages 1705-1714, 2019. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.362, + 0.905, + 0.417 + ], + "angle": 0, + "content": "[18] Alvaro Gonzalez-Jimenez et al. Sano: Score-based diffusion model for anomaly localization in dermatology. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.42, + 0.907, + 0.488 + ], + "angle": 0, + "content": "[19] Denis Gudovskiy, Shun Ishizaka, and Kazuki Kozuka. Cflow-ad: Real-time unsupervised anomaly detection with localization via conditional normalizing flows. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, pages 98-107, 2022. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.49, + 0.905, + 0.573 + ], + "angle": 0, + "content": "[20] Haoyang He, Yuhu Bai, Jiangning Zhang, Qingdong He, Hongxu Chen, Zhenye Gan, Chengjie Wang, Xiangtai Li, Guanzhong Tian, and Lei Xie. MambaAD: Exploring state space models for multi-class unsupervised anomaly detection. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.575, + 0.905, + 0.655 + ], + "angle": 0, + "content": "[21] Haoyang He, Jiangning Zhang, Hongxu Chen, Xuhai Chen, Zhishan Li, Xu Chen, Yabiao Wang, Chengjie Wang, and Lei Xie. A diffusion-based framework for multi-class anomaly detection. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 8472-8480, 2024. 1, 2, 3, 5, 6, 7, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.659, + 0.905, + 0.728 + ], + "angle": 0, + "content": "[22] Liren He, Zhengkai Jiang, Jinlong Peng, Liang Liu, Qianggang Du, Xiaobin Hu, Wenbing Zhu, Mingmin Chi, Yabiao Wang, and Chengjie Wang. Learning unified reference representation for unsupervised multi-class anomaly detection. arXiv preprint arXiv:2403.11561, 2024. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.73, + 0.905, + 0.772 + ], + "angle": 0, + "content": "[23] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in neural information processing systems, 33:6840-6851, 2020. 2, 6, 7, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.774, + 0.905, + 0.814 + ], + "angle": 0, + "content": "[24] Bozhen Hu et al. A lightweight spatial and temporal multifeature fusion network for defect detection. IEEE Transactions on Image Processing, 30:472-486, 2020. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.817, + 0.905, + 0.857 + ], + "angle": 0, + "content": "[25] Solomon Kullback. Probability densities with given marginals. The Annals of Mathematical Statistics, 39(4): 1236-1243, 1968. 4, 7, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.859, + 0.905, + 0.901 + ], + "angle": 0, + "content": "[26] Yufei Liang et al. Omni-frequency channel-selection representations for unsupervised anomaly detection. IEEE Transactions on Image Processing, 2023. 2, 6" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.093, + 0.907, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.946, + 0.52, + 0.957 + ], + "angle": 0, + "content": "25536" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.093, + 0.482, + 0.148 + ], + "angle": 0, + "content": "[27] Guan-Horng Liu, Arash Vahdat, De-An Huang, Evangelos A. Theodorou, Weili Nie, and Anima Anandkumar. I2sb: Image-to-image schrödinger bridge. In International Conference on Machine Learning, 2023. 
4, 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.149, + 0.482, + 0.219 + ], + "angle": 0, + "content": "[28] Wenqian Liu, Runze Li, Meng Zheng, Srikrishna Karanam, Ziyan Wu, Bir Bhanu, Richard J Radke, and Octavia Camps. Towards visually explaining variational autoencoders. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 8642-8651, 2020. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.22, + 0.482, + 0.288 + ], + "angle": 0, + "content": "[29] Zhikang Liu, Yiming Zhou, Yuansheng Xu, and Zilei Wang. Simplenet: A simple network for image anomaly detection and localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20402-20411, 2023. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.291, + 0.482, + 0.332 + ], + "angle": 0, + "content": "[30] Xingming Long et al. Fabric defect detection using tactile information. In 2021 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2021. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.334, + 0.482, + 0.375 + ], + "angle": 0, + "content": "[31] Fanbin Lu et al. Removing anomalies as noises for industrial defect localization. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.377, + 0.482, + 0.445 + ], + "angle": 0, + "content": "[32] Ruiying Lu, YuJie Wu, Long Tian, Dongsheng Wang, Bo Chen, Xiyang Liu, and Ruimin Hu. Hierarchical vector quantized transformer for multi-class unsupervised anomaly detection. Advances in Neural Information Processing Systems, 36:8487-8500, 2023. 3, 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.447, + 0.482, + 0.489 + ], + "angle": 0, + "content": "[33] Arian Mousakhan, Thomas Brox, and Jawad Tayyub. Anomaly detection with conditioned denoising diffusion models. arXiv preprint arXiv:2305.15956, 2023. 6, 7, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.49, + 0.482, + 0.518 + ], + "angle": 0, + "content": "[34] Edward Nelson. Dynamical theories of Brownian motion. Princeton university press, 2020. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.519, + 0.482, + 0.561 + ], + "angle": 0, + "content": "[35] Jonathan Pinnay and Keng Chai. Inpainting transformer for anomaly detection. In International Conference on Image Analysis and Processing, pages 394-406. Springer, 2022. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.562, + 0.482, + 0.59 + ], + "angle": 0, + "content": "[36] Hannes Risken and Hannes Risken. Fokker-planck equation. Springer, 1996. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.591, + 0.482, + 0.66 + ], + "angle": 0, + "content": "[37] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684-10695, 2022. 2, 5, 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.661, + 0.482, + 0.731 + ], + "angle": 0, + "content": "[38] Karsten Roth, Latha Pemula, Joaquin Zepeda, Bernhard Schölkopf, Thomas Brox, and Peter Gehler. Towards total recall in industrial anomaly detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 14318-14328, 2022. 
6, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.732, + 0.482, + 0.802 + ], + "angle": 0, + "content": "[39] Marco Rudolph, Tom Wehrbein, Bodo Rosenhahn, and Bastian Wandt. Fully convolutional cross-scale-flows for image-based defect detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 1088-1097, 2022. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.803, + 0.482, + 0.87 + ], + "angle": 0, + "content": "[40] Chitwan Sahara, William Chan, Huiwen Chang, Chris Lee, Jonathan Ho, Tim Salimans, David Fleet, and Mohammad Norouzi. Palette: Image-to-image diffusion models. In ACM SIGGRAPH 2022 conference proceedings, pages 1-10, 2022. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.872, + 0.482, + 0.901 + ], + "angle": 0, + "content": "[41] Thomas Schlegl, Philipp Seebock, Sebastian M Waldstein, Georg Langs, and Ursula Schmidt-Erfurth. f-anogan: Fast" + }, + { + "type": "list", + "bbox": [ + 0.093, + 0.093, + 0.482, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.545, + 0.093, + 0.905, + 0.12 + ], + "angle": 0, + "content": "unsupervised anomaly detection with generative adversarial networks. Medical image analysis, 54:30-44, 2019. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.122, + 0.905, + 0.176 + ], + "angle": 0, + "content": "[42] Yuyang Shi, Valentin De Bortoli, Andrew Campbell, and Arnaud Doucet. Diffusion schrödinger bridge matching. Advances in Neural Information Processing Systems, 36, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.179, + 0.905, + 0.247 + ], + "angle": 0, + "content": "[43] Woosang Shin, Jonghyeon Lee, Taehan Lee, Sangmoon Lee, and Jong Pil Yun. Anomaly detection using score-based perturbation resilience. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 23372-23382, 2023. 1, 2, 6, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.249, + 0.907, + 0.305 + ], + "angle": 0, + "content": "[44] Kihyuk Sohn et al. Anomaly clustering: Grouping images into coherent clusters of anomaly types. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2023. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.307, + 0.905, + 0.347 + ], + "angle": 0, + "content": "[45] Jouwon Song et al. Anoseg: anomaly segmentation network using self-supervised learning. arXiv preprint arXiv:2110.03396, 2021. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.349, + 0.905, + 0.417 + ], + "angle": 0, + "content": "[46] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In International Conference on Learning Representations, 2021. 2, 3, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.42, + 0.905, + 0.502 + ], + "angle": 0, + "content": "[47] Daniel Stanley Tan, Yi-Chun Chen, Trista Pei-Chun Chen, and Wei-Chao Chen. Trustmae: A noise-resilient defect classification framework using memory-augmented auto-encoders with trust regions. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, pages 276–285, 2021. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.504, + 0.905, + 0.572 + ], + "angle": 0, + "content": "[48] Justin Tebbe and Jawad Tayyyub. Dynamic addition of noise in a diffusion model for anomaly detection. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pages 3940-3949, 2024. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.575, + 0.905, + 0.616 + ], + "angle": 0, + "content": "[49] Aaron Van Den Oord, Oriol Vinyals, et al. Neural discrete representation learning. Advances in neural information processing systems, 30, 2017. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.618, + 0.905, + 0.659 + ], + "angle": 0, + "content": "[50] Shashanka Venkataramanan et al. Attention guided anomaly localization in images. In European Conference on Computer Vision. Springer International Publishing, 2020. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.661, + 0.905, + 0.716 + ], + "angle": 0, + "content": "[51] Gefei Wang, Yuling Jiao, Qian Xu, Yang Wang, and Can Yang. Deep generative learning via schrödinger bridge. In International conference on machine learning, pages 10794-10804. PMLR, 2021. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.718, + 0.905, + 0.773 + ], + "angle": 0, + "content": "[52] Julian Wyatt et al. Anoddpm: Anomaly detection with denoising diffusion probabilistic models using simplex noise. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.775, + 0.905, + 0.83 + ], + "angle": 0, + "content": "[53] Rui Xu, Yunke Wang, and Bo Du. Maediff: Masked autoencoder-enhanced diffusion models for unsupervised anomaly detection in brain images. arXiv preprint arXiv:2401.10561, 2024. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.832, + 0.905, + 0.901 + ], + "angle": 0, + "content": "[54] Hang Yao, Ming Liu, Zhicun Yin, Zifei Yan, Xiaopeng Hong, and Wangmeng Zuo. Glad: towards better reconstruction with global and local adaptive diffusion models for unsupervised anomaly detection. In European Conference on Computer Vision, pages 1-17. Springer, 2024. 6, 7, 8" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.093, + 0.907, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "25537" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.092, + 0.482, + 0.147 + ], + "angle": 0, + "content": "[55] Zhiyuan You, Lei Cui, Yujun Shen, Kai Yang, Xin Lu, Yu Zheng, and Xinyi Le. A unified model for multi-class anomaly detection. Advances in Neural Information Processing Systems, 35:4571-4584, 2022. 3, 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.149, + 0.483, + 0.218 + ], + "angle": 0, + "content": "[56] Vitjan Zavrtanik, Matej Kristan, and Danijel Skočaj. Draem-a discriminatively trained reconstruction embedding for surface anomaly detection. In Proceedings of the IEEE/CVF international conference on computer vision, pages 8330-8339, 2021. 2, 4, 6, 7, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.22, + 0.482, + 0.261 + ], + "angle": 0, + "content": "[57] Vitjan Zavrtanik, Matej Kristan, and Danijel Skočaj. Reconstruction by inpainting for visual anomaly detection. Pattern Recognition, 112:107706, 2021. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.263, + 0.482, + 0.318 + ], + "angle": 0, + "content": "[58] Vitjan Zavrtanik, Matej Kristan, and Danijel Skočaj. Dsr-a dual subspace re-projection network for surface anomaly detection. In European conference on computer vision. Springer Nature Switzerland, 2022. 
2, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.32, + 0.482, + 0.36 + ], + "angle": 0, + "content": "[59] Zhaoyang Zeng et al. Reference-based defect detection network. IEEE Transactions on Image Processing, 30:6637-6647, 2021. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.362, + 0.482, + 0.432 + ], + "angle": 0, + "content": "[60] Xinyi Zhang, Naiqi Li, Jiawei Li, Tao Dai, Yong Jiang, and Shu-Tao Xia. Unsupervised surface anomaly detection with diffusion probabilistic model. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6782-6791, 2023. 1, 2, 6, 7, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.434, + 0.482, + 0.488 + ], + "angle": 0, + "content": "[61] Ying Zhao. Omnial: A unified cnn framework for unsupervised anomaly localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3924-3933, 2023. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.49, + 0.482, + 0.531 + ], + "angle": 0, + "content": "[62] Zhixuan Zhao et al. A surface defect detection method based on positive samples. In PRICAI 2018: Trends in Artificial Intelligence. Springer International Publishing, 2018. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.532, + 0.482, + 0.6 + ], + "angle": 0, + "content": "[63] Yang Zou, Jongheon Jeong, Latha Pemula, Dongqing Zhang, and Onkar Dabeer. Spot-the-difference self-supervised pretraining for anomaly detection and segmentation. In European Conference on Computer Vision. Springer, Springer Nature Switzerland, 2022. 5" + }, + { + "type": "list", + "bbox": [ + 0.093, + 0.092, + 0.483, + 0.6 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.946, + 0.519, + 0.956 + ], + "angle": 0, + "content": "25538" + } + ] +] \ No newline at end of file diff --git a/2025/A Unified Latent Schrodinger Bridge Diffusion Model for Unsupervised Anomaly Detection and Localization/0371bea6-a128-4eff-9f4f-dffd7eab7a85_origin.pdf b/2025/A Unified Latent Schrodinger Bridge Diffusion Model for Unsupervised Anomaly Detection and Localization/0371bea6-a128-4eff-9f4f-dffd7eab7a85_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..a5cadb94be9f62d160fa37e0fae443d643631223 --- /dev/null +++ b/2025/A Unified Latent Schrodinger Bridge Diffusion Model for Unsupervised Anomaly Detection and Localization/0371bea6-a128-4eff-9f4f-dffd7eab7a85_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1f9e83e531f73a755dd8c90a78d252609456cedd5817431827e72882b0c7cfa0 +size 5409781 diff --git a/2025/A Unified Latent Schrodinger Bridge Diffusion Model for Unsupervised Anomaly Detection and Localization/full.md b/2025/A Unified Latent Schrodinger Bridge Diffusion Model for Unsupervised Anomaly Detection and Localization/full.md new file mode 100644 index 0000000000000000000000000000000000000000..b38a87c67c2bef33ad2541fb014857070481efc4 --- /dev/null +++ b/2025/A Unified Latent Schrodinger Bridge Diffusion Model for Unsupervised Anomaly Detection and Localization/full.md @@ -0,0 +1,321 @@ +# A Unified Latent Schrödinger Bridge Diffusion Model for Unsupervised Anomaly Detection and Localization + +Shilhora Akshay $^{1,*}$ Niveditha Lakshmi Narasimhan $^{2}$ Jacob George $^{2}$ Vineeth N Balasubramanian $^{1}$ $^{1}$ Indian Institute of Technology, Hyderabad $^{2}$ KLA Corporation *shilhora.akshay333@gmail.com + +# Abstract + +Anomaly detection and localization remain pivotal challenges in computer 
vision, with applications ranging from industrial inspection to medical diagnostics. While current supervised methods offer high precision, they are often impractical due to the scarcity of annotated data and the infrequent occurrence of anomalies. Recent advancements in unsupervised approaches, particularly reconstruction-based methods, have addressed these issues by training models exclusively on normal data, enabling them to identify anomalies during inference. However, these methods frequently rely on auxiliary networks or specialized adaptations, which can limit their robustness and practicality. This work introduces the Latent Anomaly Schrödinger Bridge (LASB), a unified unsupervised anomaly detection model that operates entirely in the latent space without requiring additional networks or custom modifications. LASB transforms anomaly images into normal images by preserving structural integrity across varying anomaly classes, lighting, and pose conditions, making it highly robust and versatile. Unlike previous methods, LASB does not focus solely on reconstructing anomaly features, but emphasizes anomaly transformation, achieving smooth anomaly-to-normal image conversions. Our method achieves state-of-the-art performance on both the MVTec-AD and VisA datasets, excelling in detection and localization tasks. Our code is available at https://github.com/ShilhoraAkshayPatel/LASB.

# 1. Introduction

Anomaly detection and localization is a critical task in computer vision with several applications in medical [4, 18, 52, 53] and industrial [21, 31, 43, 60] domains, and it has attracted significant attention from the research community. The main goal of anomaly detection is to identify and localize abnormal patterns that are unusual from those seen in normal

![](images/348132e01528d7ebd2ebe824dd596fe2dffeeb94379f6c2db810f1dcd844479c.jpg)
Figure 1. Illustration of the Gaussian and Bridge diffusion processes, with reverse image trajectory from LASB. LASB learns a direct diffusion bridge between anomaly and normal distributions, enhancing interpretability and anomaly-free transformation.

instances. In the past few years, the research community has proposed various supervised anomaly detection methods [15, 24, 30, 44, 50, 59]. The extensive need for annotations is expensive, and the infrequent occurrence of anomalous samples makes these methods unsuitable for practical, real-world applications.

Recent research has sought to overcome the limitations of supervised anomaly detection methods by relying solely on normal images during training and localizing abnormal patterns at inference time. Among the various categories within the unsupervised paradigm, reconstruction-based methods have garnered significant attention due to their promising results and strong performance in real-world scenarios. The core concept behind these methods is that the model is trained exclusively on normal images, so that during inference it reconstructs abnormal samples as normal samples. This benefits us in two ways: first, one can interpret the model's ability to detect and localize anomalies; second, it helps the model perform better when reconstructing the anomalies. Recent advancements have introduced novel approaches leveraging AutoEncoder-based (AE) [3, 6, 56-58], Generative Adversarial Networks-based (GANs) [1, 26, 45, 62], and diffusion-based approaches [4, 18, 21, 31, 43, 52, 60].
Our research specifically focuses on various diffusion models tailored for anomaly detection tasks. + +Existing diffusion-based methods primarily focus on developing novel noise-conditioning techniques, often incorporating an additional discriminative sub-network [60]. Another approach utilizes a score-based diffusion model to identify anomalies by evaluating how effectively samples can return to the normal data distribution after perturbation, though it fails to deliver competitive performance [43]. Additionally, DiAD [21] proposes novel semantic guidance to specifically understand the anomalies and showcase its robustness. However, earlier diffusion-based methods generally depend on auxiliary networks or tailored diffusion processes to extract anomaly features. Moreover, most of these methods prioritize the extraction and reconstruction of anomaly features. + +In this work, we introduce an intuitive mechanism that neither requires model-specific adaptations to extract anomaly features nor relies on an additional feature extractor for training or inference, to extract discriminative features. By leveraging the capabilities of Schrödinger Bridge [8, 11, 14, 42, 51], we propose the Latent Anomaly Schrödinger Bridge (LASB), which operates entirely in the latent space, transforming anomalous images into normal ones, regardless of anomaly class, and demonstrating robustness to variations in lighting and pose. In the following, we outline the key contributions of our study. + +- We propose LASB, a unsupervised bridge-based anomaly detection model that operates in latent space. LASB transforms anomaly images into normal images, regardless of the anomaly class. +- Unified Framework: LASB offers a comprehensive framework for both anomaly detection and localization without the need for auxiliary networks. Unlike conventional methods that rely on Gaussian diffusion, LASB employs a bridge-based diffusion process that inherently preserves structural integrity, marking the first application of such a process in latent diffusion model for anomaly detection. +- Efficient and Scalable: By utilizing the Linear Schrödinger Bridge in latent space, LASB significantly reduces training time, memory consumption, and sampling speed, enhancing overall efficiency and scalability. +- Our method achieves state-of-the-art performance on the MVtec dataset, with a image-level $\mathrm{AUROC}_{cls} / \mathrm{AP}_{cls}$ of $99.2\% / 99.3\%$ and an pixel-level $\mathrm{AUROC}_{seg} / \mathrm{AP}_{seg}$ of $98.6\% / 78.2\%$ on the test set, establishing a new benchmark for unsupervised anomaly detection. + +# 2. Related Work + +Reconstruction-based Methods: Reconstruction-based anomaly detection models operate on the premise that networks trained exclusively on normal images will fail to accurately reconstruct anomalous ones due to their unfamiliarity with the abnormal distribution. Notably, autoencoder [6, 10, 28] and Generative Adversarial Network (GAN) [1, 41] frameworks have been widely utilized for this task, where anomaly scores are computed based on the reconstruction error between the input and its generated counterpart. Nevertheless, such approaches frequently struggle with direct copy issues and elevated false positive rates due to their limited generalization capacity over anomalous regions or due to the overgeneralization capacity of reconstructive methods. 
To mitigate these generalization challenges, recent innovations have pivoted towards inpainting strategies [35, 57] or integrating supplementary memory modules [17, 47]. Furthermore, the DRAEM [56] model enhances performance by leveraging pseudoanomalies and coupling autoencoders with a segmentation network, though its effectiveness diminishes when faced with substantial deviations between real and pseudo anomalies. + +Diffusion-based Methods: Diffusion models, known for their high-fidelity image synthesis, are increasingly applied to anomaly detection in both medical and industrial domains [18, 21, 43, 52, 60]. In the medical field, SANO [18] leverages score-based diffusion models to localize skin conditions such as eczema, while AnoDDPM [52] employs the Denoising Diffusion Probabilistic Model (DDPM) [23] to segment tumors, initiating with simplex noise. These methods primarily utilize standard DDPM [23] or Score-based Generative Models (SGMs) [46], yet they often struggle to capture structural features because they rely on starting from pure Gaussian or simplex noise and frequently require external guidance for complex feature integration. In industrial applications, DiffAD [60] introduces a noise interpolation technique coupled with a discriminative sub-network to enhance detection capabilities. AD-SPR [43] emphasizes the significance of perturbed resilience using SGMs [46] to identify anomalies. Additionally, DiAD [21] presents a novel semantic guidance network that directs a stable diffusion model [37] to effectively detect and localize anomalies in both industrial and selected medical imaging applications. However, these models often require class-specific training or additional networks to guide the diffusion process or to perform further segmentation via a discriminative sub-network. + +Unified/Multi-class Methods: Early approaches in the literature developed models on a per-class basis, which limited their generalizability. More recent research has shifted towards unified or multi-class models that are trained on + +![](images/4fb7253a32f925bb900a6dde4f0bd8f50e6ac343e6b27b7991abe1f462799f5d.jpg) + +![](images/e4ffb779d8ba2249faef708cb0ab8c73765de979b76a5380378ff518a1c2e26e.jpg) +Figure 2. The LASB model framework consists of two key stages: training and inference. During training, anomaly augmentations are applied to images, introducing distortions that the LASB model learns to remove, ultimately reconstructing a normal image. This iterative process continues until the model effectively filters out the anomalies. In the inference stage, the model processes real anomalous images, reconstructing normal versions. The anomaly detection is achieved by computing the difference $(p_B - p_A)$ between the original and reconstructed images, with anomalies visualized via a heatmap. + +![](images/2168f79c8e9cbdaec09dbc1aa9abacddcfda286c1ca5b93866ccc3559d979ead.jpg) + +entire datasets, demonstrating improved robustness [21, 22, 32, 55]. For instance, UniAD [55] utilizes layer-wise query decoding, neighbor-masked attention, and feature jittering techniques to enhance multi-class anomaly detection. In contrast, HVQ-Trans [32] employs a hierarchical vector quantized prototype-oriented Transformer to improve discriminative capacity across multiple classes. Both methods focus on tackling the "identical shortcut" issue. 
On the other hand, RLR [22] introduces a novel framework using learnable reference representations with locality constraints to explicitly learn normal patterns and circumvent "learning shortcuts", achieving superior results on standard datasets (MVTec-AD and VisA). Additionally, the DiAD [21] model, designed for multi-class settings and based on diffusion-based reconstruction, outperforms UniAD [55], RLR [22], and HVQ-Trans [32] in terms of localization and detection performance.

# 3. Proposed Methodology

# 3.1. Preliminaries

Before introducing the full methodology, we begin with a preliminary overview of score-based generative models (SGMs). Next, we explain the Schrödinger Bridge concept and discuss its connection to SGMs. Finally, we present the core methodology, detailing how and why the latent Schrödinger Bridge effectively addresses anomaly detection and localization.

Notations. We introduce some notation used throughout this work. Let $X_t \in \mathbb{R}^d$ denote a stochastic process, where $t \in [0,1]$ is a continuous time step; intermediate time steps are sampled uniformly, $t \sim \mathcal{U}([0,1])$. The initial distribution, the corrupted (anomalous) data distribution, is denoted $p_A$, and the terminal distribution, the clean (normal) data distribution, is denoted $p_B$. The Wiener process and its reversed counterpart, adopted from Anderson et al. [2], are represented as $W_t$ and $\overline{W}_t$, respectively, both in $\mathbb{R}^d$. $\mathbb{I} \in \mathbb{R}^{d \times d}$ is the identity matrix.

Score-based Generative Models. Score-based Generative Models (SGMs) perturb the data stochastically across continuous noise scales and then use reverse-time stochastic differential equations (SDEs) to learn the reverse-time diffusion process. This reverse-time SDE relies on a score function, enabling the reconstruction of any distribution starting from a Gaussian [46]. Given data $X_0 \sim p_A$, the forward and backward SDEs are formulated as:

$$
dX_t = f_t(X_t)\,dt + \sqrt{\beta_t}\,dW_t, \quad X_0 \sim p_A, \tag{1}
$$

$$
dX_t = \left[f_t - \beta_t \nabla \log p(X_t, t)\right]dt + \sqrt{\beta_t}\,d\overline{W}_t, \tag{2}
$$

where $f(\cdot,t): \mathbb{R}^d \to \mathbb{R}^d$ is the base drift and the terminal distribution (i.e., at $t = 1$) approaches a Gaussian, $X_1 \sim \mathcal{N}(0, \mathbb{I})$. To achieve this, the diffusion coefficient $\beta_t \in \mathbb{R}$ is carefully tuned and the base drift $f_t$ is chosen to be linear in $X_t$. Here, $p$ is the marginal density of equation (2) at time $t$, and $\nabla \log p$ denotes its score [46].

SGMs as Schrödinger Bridge. SGMs lack the flexibility to transport data to an arbitrary target distribution, which demands a more versatile strategy. The Schrödinger Bridge (SB), a strategy often applied in optimal transport, finds the optimal path measure between two marginal densities and is expressed as

$$
\min_{\mathbb{Q} \in \mathbb{P}(p_A, p_B)} \mathrm{KL}(\mathbb{Q} \,\|\, \mathbb{P}), \tag{3}
$$

where $\mathbb{Q}$ represents a path measure within $\mathbb{P}(p_A, p_B)$, characterized by having marginal densities $p_A$ and $p_B$ at times $t = 0$ and $t = 1$, respectively. We consider $\mathbb{P}$ as the reference measure, specifically chosen as the path measure of equation (1).
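For intuition, the sketch below simulates the forward SDE of equation (1) with a simple Euler-Maruyama discretization; running the reverse-time SDE (2) would additionally require the learned score $\nabla \log p$. The linear $\beta_t$ schedule and the VP-style drift $f_t(x) = -\tfrac{1}{2}\beta_t x$ are our own illustrative assumptions (consistent with $f_t$ being linear in $X_t$, but not values specified by the paper).

```python
import numpy as np

def beta(t, beta_min=0.1, beta_max=20.0):
    """Assumed linear noise schedule beta_t over t in [0, 1]."""
    return beta_min + t * (beta_max - beta_min)

def forward_sde(x0, n_steps=1000, seed=0):
    """Euler-Maruyama simulation of equation (1):
    dX_t = f_t(X_t) dt + sqrt(beta_t) dW_t, with assumed f_t(x) = -0.5 * beta_t * x."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / n_steps
    x = np.asarray(x0, dtype=float).copy()
    for i in range(n_steps):
        t = i * dt
        drift = -0.5 * beta(t) * x                           # f_t(X_t), linear in X_t
        noise = np.sqrt(beta(t) * dt) * rng.standard_normal(x.shape)
        x = x + drift * dt + noise                           # one Euler-Maruyama step
    return x                                                 # approx. N(0, I) at t = 1

x0 = np.full((4, 8), 3.0)                      # toy batch standing in for X_0 ~ p_A
x1 = forward_sde(x0)
print(x1.mean().round(2), x1.std().round(2))   # close to 0 and 1
```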
We will further elaborate on the conditions that define the optimality of the SB (as shown in equation (3)) under the specified boundary conditions. The optimality condition for the SB is characterized by solving a pair of PDEs [8, 9]. Let $\Psi(z,t)$ and $\widehat{\Psi}(z,t)$ be the solutions to the following PDEs:

$$
\frac{\partial \Psi(z,t)}{\partial t} = -\nabla \Psi^{\top} f - \frac{1}{2}\beta \Delta \Psi, \tag{4}
$$

$$
\frac{\partial \widehat{\Psi}(z,t)}{\partial t} = -\nabla \cdot (\widehat{\Psi} f) + \frac{1}{2}\beta \Delta \widehat{\Psi}, \tag{5}
$$

subject to the conditions

$$
\Psi(z,0)\,\widehat{\Psi}(z,0) = p_A(z), \quad \Psi(z,1)\,\widehat{\Psi}(z,1) = p_B(z).
$$

Then, the solution to the optimization in (3) is given by the path measure of the following forward SDE (6), or equivalently the backward SDE (7):

$$
dX_t = \left[f_t + \beta_t \nabla \log \Psi(X_t, t)\right]dt + \sqrt{\beta_t}\,dW_t, \tag{6}
$$

$$
dX_t = \left[f_t - \beta_t \nabla \log \widehat{\Psi}(X_t, t)\right]dt + \sqrt{\beta_t}\,d\overline{W}_t, \tag{7}
$$

where $\nabla \log \Psi$ and $\nabla \log \widehat{\Psi}$ are the non-linear optimal forward and backward drifts of the Schrödinger Bridge. The SB therefore behaves non-linearly and generalizes SGMs to varying prior distributions. Note that the forward and backward SB equations (6, 7) coincide with the SGM forward-backward equations (1, 2) except for the additional forward drift term $\nabla \log \Psi$. From Nelson's duality [34] we obtain $\Psi(x,t)\,\widehat{\Psi}(x,t) = q^{\mathrm{SB}}(x,t)$. The backward drift of the Schrödinger Bridge thus no longer acts as a pure score function, but splits into two inseparable components; combining equation (7) with Nelson's duality [34] yields $dX_t = \left[f_t - \beta_t\left(\nabla \log q^{\mathrm{SB}}(X_t,t) - \nabla \log \Psi(X_t,t)\right)\right]dt + \sqrt{\beta_t}\,d\overline{W}_t$.

Linear-SB. The constraints posed above render the model intractable. To overcome this impediment, the existing literature suggests several strategies, such as Iterative Proportional Fitting [11, 25] and likelihood-based training of the SB [8]. These methods enable SGMs to construct non-linear diffusion bridges; however, we can instead obtain tractability by imposing linear conditions. A closer examination of SB theory within the SGM framework reveals that the non-linear drifts specified in equations (6, 7) correspond to the score functions described in equation (2). Assuming that $\Psi(\cdot,t)$ and $\widehat{\Psi}(\cdot,t)$ behave as probability density functions, we reformulate equations (4, 5) as solutions of the Fokker-Planck equation [36].

When the SB conditions in equations (4, 5) are satisfied, the backward and forward drifts, $\nabla \log \widehat{\Psi}(X_t, t)$ and $\nabla \log \Psi(X_t, t)$, are the score functions of the following linear SDEs, respectively [27, 36]:

$$
dX_t = f_t(X_t)\,dt + \sqrt{\beta_t}\,dW_t, \quad X_0 \sim \Psi(\cdot, 0), \tag{8}
$$

$$
dX_t = f_t(X_t)\,dt + \sqrt{\beta_t}\,d\overline{W}_t, \quad X_1 \sim \widehat{\Psi}(\cdot, 1). \tag{9}
$$

The key characteristic of the linear SDEs in equations (8, 9) is that they differ only in their boundary conditions, which nevertheless induce distributions distinct from those of the non-linear SDEs.
Sampling is facilitated by parameterizing $\nabla \log \widehat{\Psi}$ with a score network and applying established SGM methods. However, computational complexity arises from the intractability of the boundary conditions $\Psi(\cdot, 0)$ and $\widehat{\Psi}(\cdot, 1)$. To resolve this, we impose Dirac delta boundary conditions on these previously intractable terms, rendering them manageable [27, 36]. Assume $p_A(\cdot) = \delta_\kappa(\cdot)$ is the Dirac delta distribution centered at $\kappa \in \mathbb{R}^d$. Then the boundary distributions of the linear SDEs (equations (8, 9)) become:

$$
p_B = \Psi(\cdot, 1)\,\widehat{\Psi}(\cdot, 1), \quad \widehat{\Psi}(\cdot, 0) = \delta_\kappa(\cdot). \tag{10}
$$

Equation (10) implies that the optimal backward drift of the process in equation (8) consistently targets the Dirac delta $\delta_\kappa(\cdot)$, achieving convergence to $\kappa$ independent of $p_B$. This concept is integrated into the loss function, where the score is recalculated as $\nabla \log p(X_t, t \mid X_0 = \kappa)$ for each instance $\kappa$. This approach enhances computational efficiency and establishes a robust mathematical basis for training $\nabla \log \widehat{\Psi}(\cdot)$.

# 3.2. Linear Schrödinger Bridge in the Latent Space

We now describe how the above discussion on linear Schrödinger Bridges can be used for anomaly detection and localization.

Overall Pipeline. Figure 2 illustrates the entire pipeline of the proposed method. Initially, anomaly augmentation [56] is applied to a normal image, which is then processed by the Latent Anomaly Schrödinger Bridge (LASB) model. The LASB model learns the anomaly statistics and transforms the augmented anomaly image back into a normal image. Crucially, the aim of this step is not to learn the anomalies themselves, but rather to reconstruct the normal image regardless of the anomalies introduced during training. During training, various masks and noise patterns are applied to augment the images. After training, the LASB model is evaluated on anomaly images from the test set, with its weights frozen. During the inference phase, the model is presented with an anomaly image from the test set, which it then reconstructs as a normal image. Both the original anomaly image and its reconstructed normal image are subsequently passed through a difference block. In this step, multi-scale features are extracted using a pre-trained ImageNet model, and the discrepancies between these features are computed to pinpoint regions of interest (ROI). These differences are then detected and localized, following a strategy akin to DiAD [21].

Algorithm 1 Training Procedure

1: Input: normal and anomaly latents $p_A^{(z)}(\cdot)$, $p_B^{(z)}(\cdot \mid z_0)$
2: repeat
3: $z_0 \sim p_A^{(z)}(z_0)$, $z_1 \sim p_B^{(z)}(z_1 \mid z_0)$
4: $z_t \sim q(z_t \mid z_0, z_1)$ according to equation (11)
5: calculate the gradient $\nabla \epsilon(z_t, t; \theta)$ using equation (12)
6: until convergence

Latent Anomaly Schrödinger Bridge. The foundational basis of the Latent Anomaly Schrödinger Bridge (LASB) is the linear Schrödinger Bridge property in equation (10): the optimal backward drift in equation (8) consistently aims for the Dirac delta $\delta_\kappa(\cdot)$, ensuring convergence to $\kappa$ regardless of $p_B$. That is, the forward process transforms the noisy anomaly image statistics, and the model learns to reconstruct a normal image irrespective of the terminal distribution. To elucidate the working mechanism of the LASB diffusion model, we follow the design strategy of Chang et al. [7].
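For intuition on why the Dirac boundary makes training tractable, consider the drift-free case $f_t \equiv 0$ (a simplifying assumption on our part; the paper only requires $f_t$ to be linear in $X_t$). The marginal of equation (8) started from $\delta_\kappa$ is then an explicit Gaussian, so the training target is available in closed form:

$$
p(X_t, t \mid X_0 = \kappa) = \mathcal{N}\left(\kappa,\, \sigma_t^2\, \mathbb{I}\right), \qquad \sigma_t^2 := \int_0^t \beta_\tau \, d\tau,
$$

$$
\nabla \log p(X_t, t \mid X_0 = \kappa) = -\frac{X_t - \kappa}{\sigma_t^2}.
$$

Up to a factor of $-1/\sigma_t$, this is exactly the residual $(z_t - z_0)/\sigma_t^{(z)}$ that the LASB objective in equation (12) below regresses.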
SGMs and diffusion models operating in pixel space require significant computational resources and memory. To address this challenge, we introduce the LASB model. Before training the Linear-SB, we first train $E_{VQ}$ and $D_{VQ}$, the encoder and decoder of the VQ-VAE model [49], for perceptual compression, following the approach of stable diffusion [37]. During VQ-VAE training, we input either normal or anomaly-augmented images from the training set into $E_{VQ}$ and aim to reconstruct the input using $D_{VQ}$. Once the model converges, its weights are frozen, and we proceed to train the Linear-SB in the latent space. Unlike Gaussian diffusion models, where noise is progressively added at each time step and the image structure deteriorates at later stages, we adopt a semi-degradation strategy similar to Saharia et al. [40]. This ensures that, during the forward pass, the LASB model performs a smooth transformation while preserving structural information. Figure 1 illustrates the contrast between the Gaussian diffusion process and the Schrödinger Bridge diffusion process when applied to anomaly-free reconstruction. According to Liu et al. [27], for this smooth structural transformation the posterior of the SB (equations (6, 7)), conditioned on a boundary pair $(X_0, X_1)$, admits an analytic form. Since we inject the image statistics into the latent space via VQ-VAE perceptual compression, this analytic form can be written as

$$
q(z_t \mid z_0, z_1) = \mathcal{N}\left(z_t;\, \mu_t(z_0, z_1), \Sigma_t\right), \tag{11}
$$

where

$$
\mu_t = \frac{\bar{\sigma}_t^{(z)2}}{\bar{\sigma}_t^{(z)2} + \sigma_t^{(z)2}} z_0 + \frac{\sigma_t^{(z)2}}{\bar{\sigma}_t^{(z)2} + \sigma_t^{(z)2}} z_1, \quad \Sigma_t = \frac{\bar{\sigma}_t^{(z)2} \sigma_t^{(z)2}}{\bar{\sigma}_t^{(z)2} + \sigma_t^{(z)2}} \mathbb{I},
$$

and the accumulated variances are $\sigma_t^{(z)2} := \int_0^t \beta_\tau \, d\tau$ and $\bar{\sigma}_t^{(z)2} := \int_t^1 \beta_\tau \, d\tau$.

We then train the proposed LASB using the objective function

$$
\mathcal{L}_{LASB} := \left\| \epsilon_\theta(z_t, t) - \frac{z_t - z_0}{\sigma_t^{(z)}} \right\|, \tag{12}
$$

where $\epsilon_\theta(\cdot, \cdot)$ is the score network with parameters $\theta$.

Algorithm 2 Inference/Sampling Procedure

1: Input: $z_N \sim p_B^{(z)}(z_N)$, trained $\epsilon_\theta(\cdot, \cdot)$
2: for $n = N$ to 1 do
3: Predict $z_0^\epsilon$ using $\epsilon_\theta(z_n, t_n)$
4: $z_{n-1} \sim p(z_{n-1} \mid z_0^\epsilon, z_n)$
5: end for
6: return $z_0$

The training and inference procedures are summarized in Algorithms 1 and 2.
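To make Algorithms 1 and 2 and equations (11, 12) concrete, below is a minimal PyTorch sketch of one training step and the sampling loop. It is an illustration under our own assumptions, not the paper's code: a linear $\beta$ schedule, a generic `score_net(z, t)` standing in for the latent U-Net, `z0`/`z1` denoting normal and anomaly-augmented latents from the frozen VQ-VAE encoder, and an I2SB-style ancestral update (Liu et al. [27]) as our reading of the posterior step $p(z_{n-1} \mid z_0^\epsilon, z_n)$.

```python
import torch

def integral_beta(a, b, beta_min=0.1, beta_max=20.0):
    """int_a^b beta_tau d tau for an assumed linear schedule
    beta_tau = beta_min + tau * (beta_max - beta_min); a, b may be floats or tensors."""
    return beta_min * (b - a) + 0.5 * (beta_max - beta_min) * (b**2 - a**2)

def lasb_training_step(score_net, z0, z1, optimizer):
    """One step of Algorithm 1: draw z_t from the posterior of equation (11),
    then regress the residual target of equation (12). Latents assumed (B, C, H, W)."""
    bsz = z0.shape[0]
    t = torch.rand(bsz, device=z0.device).clamp(1e-3, 1 - 1e-3).view(-1, 1, 1, 1)
    s2 = integral_beta(0.0, t)                      # sigma_t^2     = int_0^t beta
    bs2 = integral_beta(t, 1.0)                     # bar sigma_t^2 = int_t^1 beta
    mu = (bs2 * z0 + s2 * z1) / (bs2 + s2)          # mu_t(z0, z1) of eq. (11)
    var = bs2 * s2 / (bs2 + s2)                     # diagonal of Sigma_t
    zt = mu + var.sqrt() * torch.randn_like(z0)     # z_t ~ q(z_t | z0, z1)
    target = (zt - z0) / s2.sqrt()                  # residual target of eq. (12)
    loss = (score_net(zt, t.flatten()) - target).pow(2).mean()   # MSE surrogate
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

@torch.no_grad()
def lasb_sample(score_net, z, n_steps=10):
    """Algorithm 2 sketch: starting from an anomaly latent z_N, alternately predict
    z0 and draw z_{n-1} ~ p(z_{n-1} | z0_hat, z_n), reusing the posterior form of
    equation (11) on the sub-interval [t_{n-1}, t_n]."""
    taus = torch.linspace(1.0, 0.0, n_steps + 1).tolist()
    for t, t_prev in zip(taus[:-1], taus[1:]):
        tb = torch.full((z.shape[0],), t, device=z.device)
        z0_hat = z - integral_beta(0.0, t) ** 0.5 * score_net(z, tb)  # invert eq. (12)
        s2, bs2 = integral_beta(0.0, t_prev), integral_beta(t_prev, t)
        mu = (bs2 * z0_hat + s2 * z) / (bs2 + s2)
        var = bs2 * s2 / (bs2 + s2)
        z = mu + var ** 0.5 * torch.randn_like(z) if t_prev > 0 else z0_hat
    return z   # reconstructed "normal" latent, to be decoded by D_VQ
```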
# 4. Experiments and Results

Datasets. To validate the efficacy of the LASB method, we utilize two challenging industrial anomaly detection datasets: MVTec-AD [5] and VisA [63]. The MVTec-AD dataset comprises 15 categories, including 10 object types and 5 texture types, with a total of 5,354 high-resolution images. Of these, 3,629 anomaly-free images are used for training, while the remaining 1,725 test images include both normal and anomalous samples. The VisA dataset consists of 12 distinct objects organized into 12 subsets, categorized into Complex Structure, Multiple Instances, and Single Instance types. It includes 10,821 high-resolution images, with 9,621 normal images and 1,200 anomalous images exhibiting 78 distinct anomaly types, offering a comprehensive benchmark for anomaly detection and localization methods. Both datasets provide pixel-level ground-truth annotations to facilitate the evaluation of anomaly localization performance.

![](images/bebe370c000dcd6188296baeead296d86b1951bde9a039801ca88b3425d3a5fb.jpg)

![](images/e28d21147abc88f15c8b3804ebe7c08beeb9e9b3d821d6fba38cf1c7600684d3.jpg)

![](images/97f2c5035eabf69b86385bf69a96a5e665d6104fc16570eb1ad549b60262cdb.jpg)
Figure 3. Comparison of training time (left), training memory (middle), and sampling time (right) across different image resolutions for SGM, SB, DDPM, and LASB models. The LASB model (green) demonstrates consistently lower training time, memory usage, and sampling time, particularly in contrast to the SB model (red), which exhibits the highest resource consumption.

Evaluation Metrics. For a rigorous quantitative evaluation of our experimental results in anomaly detection and localization, we employ AUROC (Area Under the Receiver Operating Characteristic Curve), AP (Average Precision), and F1max (maximum F1-score) as our primary metrics [21]. Here, $cls$ denotes image-level anomaly detection, while $seg$ pertains to pixel-level anomaly localization. A detailed class-specific comparison and results for all the aforementioned metrics are provided in Section S2 of the supplementary material. In Table 1, we present the average performance of each model across the evaluated datasets.
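For reference, these metrics can be computed with scikit-learn as sketched below; this is generic evaluation code, not the paper's, and the score arrays are assumed inputs (e.g., per-image maxima of the anomaly heatmap for the cls setting, or flattened per-pixel values for the seg setting).

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score, precision_recall_curve

def auroc_ap_f1max(labels, scores):
    """AUROC, AP, and F1max for binary anomaly labels and continuous anomaly scores."""
    auroc = roc_auc_score(labels, scores)
    ap = average_precision_score(labels, scores)
    precision, recall, _ = precision_recall_curve(labels, scores)
    f1 = 2 * precision * recall / np.clip(precision + recall, 1e-12, None)
    return auroc, ap, f1.max()   # F1max: best F1 over all score thresholds

# toy example: 8 images, label 1 = anomalous
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
scores = np.array([0.1, 0.2, 0.15, 0.4, 0.8, 0.7, 0.35, 0.9])
print(auroc_ap_f1max(labels, scores))
```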
| Method | Venue | Adopted Method | MVTec-AD | VisA |
| --- | --- | --- | --- | --- |
| **Class-based** | | | | |
| SimpleNet [29] | CVPR 2023 | Embedding-based | 99.6 / 98.1 | 96.2 / 98.5 |
| PatchCore [38] | CVPR 2022 | Embedding-based | 99.1 / 98.1 | 91.0 / 98.1 |
| DSR [58] | ECCV 2022 | - | 98.2 / 70.2 | - |
| PaDiM [12] | ICPR 2021 | Memory Bank | 97.5 / 92.1 | 89.1 / 85.9 |
| CS-Flow [39] | WACV 2022 | Normalizing Flow | 97.2 / 84.5 | - |
| CFLOW-AD [19] | WACV 2022 | Normalizing Flow | 96.2 / 97.1 | - |
| OCR-GAN [26] | TIP 2023 | GAN | 98.3 / - | 97.9 / - |
| DRAEM [56] | ICCV 2021 | Encoder | 98.0 / 97.3 | 88.7 / 93.5 |
| RD4AD [13] | CVPR 2022 | Embedding-based | 98.5 / 97.8 | 96.9 / 98.3 |
| ADSPR [43] | ICCV 2023 | Diffusion | 97.67 / 97.36 | - |
| DiffAD [60] | ICCV 2023 | Diffusion | 98.72 / 98.26 | 89.79 / 68.28 |
| D3AD [48] | CVPR 2024 | Diffusion | 97.15 / 97.44 | 95.51 / 94.27 |
| DDAD [33] | arXiv 2023 | Diffusion | 99.84 / 98.05 | 98.9 / 97.58 |
| TransFusion [16] | ECCV 2024 | Diffusion | 99.24 / 94.33 | 98.53 / 86.26 |
| LASB (Ours) | - | Diffusion | 99.66 / 99.15 | 98.52 / 99.06 |
| **Unified (multi-class)** | | | | |
| DRAEM [56] | ICCV 2021 | Encoder | 88.1 / 87.2 | 79.1 / 91.3 |
| HVQ-Trans [32] | NeurIPS 2023 | Non-Diffusion | 98.0 / 97.3 | 93.2 / 98.7 |
| MambaAD [20] | NeurIPS 2024 | Non-Diffusion | 98.6 / 97.7 | 94.3 / 98.5 |
| OmniAL [61] | CVPR 2023 | Non-Diffusion | 97.2 / 98.3 | 94.2 / 96.0 |
| GLAD [54] | ECCV 2024 | Diffusion | 97.5 / 97.4 | 91.8 / 97.8 |
| UniAD [55] | NeurIPS 2022 | Transformer | 96.52 / 96.8 | 85.5 / 95.925 |
| DIAD [21] | AAAI 2024 | Diffusion | 97.15 / 96.76 | 86.75 / 96.04 |
| LASB (Ours) | - | Diffusion | 99.14 / 98.66 | 94.2 / 98.18 |

Table 1. Comparison of state-of-the-art (SOTA) anomaly detection methods, categorized into class-based and unified (multi-class) approaches. The table highlights the adopted methodologies and their class-wise average performance in terms of AUROCcls / AUROCseg on the MVTec-AD and VisA datasets, respectively. An exhaustive list of metrics, along with class-specific performance details, is provided in Section S2 of the supplementary material.

Implementation Details. For all our experiments, we resized MVTec-AD images to $256 \times 256$ resolution. To fine-tune the VQ-VAE (auto-encoder) using the KL-based method, we initialized the network weights from the Stable Diffusion [37] model, which encodes each image into a latent of size $64 \times 64 \times 3$. We chose this latent size because it preserves detail and yields low-FID generations. After training the auto-encoder, we froze its weights and trained the latent denoising network.

We used the U-Net model as in [23] as our latent denoising network. During inference, we pass only the anomaly test image as input to the DDPM sampler [23]. Further details on batch size and other network hyperparameters are given in Section S4 of the appendix.

Baselines. To benchmark LASB, we provide a detailed comparison with state-of-the-art (SOTA) reconstruction methods, as summarized in Table 1. These methods fall into two distinct groups: class-based and unified (multi-class) approaches. Class-based methods train a model specific to each class, so the number of models grows linearly with the number of classes, making scalability a challenge. In contrast, unified approaches handle multiple classes within a single model, offering a more scalable solution. By separating the class-based and unified approaches, we ensure a fair and comprehensive comparison.

Class-based Methods: Performance Analysis. Among the class-based methods, PaDiM [12], with its memory-bank-based architecture, achieves $97.5\%$ on MVTec-AD and $89.1\%$ on VisA. While effective for simpler anomalies, its scalability and ability to model inter-class variations remain limited. In contrast, DiffAD [60] demonstrates strong competitiveness, achieving $98.72\%$ on MVTec-AD. However, its performance on VisA drops to $89.79\%$, indicating difficulty in adapting to diverse industrial anomalies. DDAD [33] shows highly competitive results, achieving $99.84\%$ on MVTec-AD and $98.9\%$ on VisA. Its diffusion-based modeling effectively captures structural patterns, making it one of the closest competitors to LASB. However, LASB surpasses DDAD on VisA by a margin of $0.38\% \uparrow$, demonstrating superior adaptability to multi-class industrial scenarios.
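To complement the implementation details above, the sketch below illustrates the difference block of Section 3.2: multi-scale features of the input and its reconstruction are compared with a pre-trained ImageNet backbone and aggregated into a heatmap. ResNet-18, the chosen layers, and the cosine-distance aggregation are our assumptions; the paper states only that multi-scale ImageNet features are differenced, following a strategy akin to DiAD [21].

```python
import torch
import torch.nn.functional as F
import torchvision.models as tvm

@torch.no_grad()
def anomaly_heatmap(x, x_rec, layers=("layer1", "layer2", "layer3")):
    """Multi-scale feature differencing between an image x and its reconstruction
    x_rec, both (B, 3, H, W) and assumed ImageNet-normalized. Backbone and layers
    are illustrative choices (requires torchvision >= 0.13 for the weights API)."""
    net = tvm.resnet18(weights=tvm.ResNet18_Weights.DEFAULT).eval()
    feats = {}
    hooks = [getattr(net, name).register_forward_hook(
        lambda mod, inp, out, name=name: feats.__setitem__(name, out))
        for name in layers]
    def extract(img):
        feats.clear(); net(img); return {n: feats[n] for n in layers}
    fx, fr = extract(x), extract(x_rec)
    for h in hooks:
        h.remove()
    hmap = torch.zeros(x.shape[0], 1, *x.shape[-2:])
    for name in layers:
        d = 1 - F.cosine_similarity(fx[name], fr[name], dim=1, eps=1e-6)  # (B, h, w)
        hmap += F.interpolate(d.unsqueeze(1), size=x.shape[-2:],
                              mode="bilinear", align_corners=False)
    return hmap / len(layers)   # image-level score: hmap.amax(dim=(2, 3))
```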
| Task | Metrics | Pixel Space | | | Latent Space | | |
| --- | --- | --- | --- | --- | --- | --- | --- |
| | | DDPM [23] | DDAD [33] | GLAD [54] | LDM [37] | DiAD [21] | LASB (Ours) |
| Detection | AUROCcls | 71.9 | 99.8 | 97.5 | 76.6 | 97.2 | 99.2 (-0.6%) |
| | APcls | 81.6 | 99.5 | 99.1 | 87.6 | 99.0 | 99.3 (-0.2%) |
| | F1maxcls | 86.6 | 97.9 | 96.6 | 88.1 | 96.5 | 98.5 (+2.0%) |
| Localization | AUROCseg | 75.6 | 98.0 | 97.4 | 85.1 | 96.8 | 98.6 (+0.60%) |
| | APseg | 13.3 | 59.0 | 60.8 | 27.6 | 52.6 | 78.2 (+17.4%) |
| | F1maxseg | 19.5 | 59.4 | 60.7 | 31.0 | 55.5 | 70.7 (+10.0%) |

Table 2. Performance comparison of LASB with diffusion-based models on MVTec-AD using AUROC, AP, and F1max. The best results are in bold; the second-best are underlined. Improvements are shown as percentage gains over the second-best method.

![](images/16e30b7fc9635c0159686acf423a06eb1cae9a146bb93bc63681736cf4af9f66.jpg)
Figure 4. Visual representation of test samples from the MVTec-AD dataset, depicted through heatmaps produced by various models across different categories.

Unified Models: Performance Analysis. For unified approaches, DRAEM [56] emerges as one of the least competitive models, with AUROCcls scores of $88.1\%$ on MVTec-AD and $79.1\%$ on VisA. Its reliance on autoencoder-based reconstruction, combined with limited augmentation strategies, hampers its ability to generalize to diverse and complex anomaly scenarios, particularly on the multi-class VisA dataset. Similarly, UniAD [55] performs less competitively, achieving $96.52\%$ on MVTec-AD and $85.5\%$ on VisA; its transformer-based architecture, while robust for certain tasks, struggles to balance efficiency and accuracy across diverse classes. DIAD [21] achieves $97.15\%$ on MVTec-AD and $86.75\%$ on VisA but falls short in handling the intricate class variations present in the datasets. Among the competitive unified methods, MambaAD [20] achieves $98.6\%$ on MVTec-AD and $94.3\%$ on VisA, showcasing its strength in handling multi-class scenarios using state-space models. LASB surpasses MambaAD on MVTec-AD with a score of $99.14\%$ while remaining highly competitive on VisA at $94.2\%$, highlighting its ability to balance computational efficiency with detection accuracy. Furthermore, HVQ-Trans [32], with scores of $98.0\%$ on MVTec-AD and $93.2\%$ on VisA, demonstrates solid performance but remains slightly behind LASB in scalability and robustness. Detailed class-specific results are available in Section S2 of the supplementary material, and Figure 4 provides qualitative heat-map visualizations of anomaly regions, further demonstrating LASB's efficacy in both class-based and unified settings.
| Task | Metrics | Non-Diffusion Methods | | Diffusion-based Methods | | |
| --- | --- | --- | --- | --- | --- | --- |
| | | DRAEM [56] | UniAD [55] | DiffAD [60] | DiAD [21] | LASB (Ours) |
| Detection | AUROCcls | 79.1 | 85.5 | 89.5 | 86.8 | 94.2 |
| | APcls | 81.9 | 85.5 | - | 88.3 | 92.2 |
| | F1maxcls | 78.9 | 84.4 | - | 85.1 | 94.5 |
| Localization | AUROCseg | 91.3 | 95.9 | 71.2 | 96.0 | 98.2 |
| | APseg | 23.5 | 21.0 | - | 26.1 | 46.4 |
| | F1maxseg | 29.5 | 27.0 | - | 33.0 | 52.6 |

Table 3. Results for multi-class anomaly detection and localization on the VisA dataset. The best results are indicated in bold, and the second-best results are underlined.

# 5. Ablation Studies and Analysis

In this section, we first examine the proposed LASB method's performance in the latent space and explain the advantages of utilizing the Linear-SB there. We then assess the LASB model's effectiveness against Stable Diffusion [37] and other models on anomaly localization and detection tasks. Finally, we demonstrate the robustness of our proposed method, showing its ability to deliver precise and consistent outcomes during test-time evaluations.

LASB vs Standard Diffusion Models. Standard diffusion models such as DDPM [23] and LDM [37] achieve competitive performance in anomaly detection but exhibit significant limitations in localization, as evidenced in Table 2. To mitigate these limitations, DiAD [21] integrates a semantic guidance network to improve LDM's localization by incorporating additional contextual information. Despite these improvements, DiAD [21] still falls short of state-of-the-art (SOTA) results, indicating persistent challenges in fully leveraging diffusion models for comprehensive anomaly detection. In contrast, our LASB leverages the Linear-SB within the latent space, avoiding the limitations of reconstructing from pure Gaussian noise, a common issue when standard diffusion processes are applied to anomaly-free reconstruction. Moreover, LASB semi-degrades the latent space to retain structural integrity, facilitating more effective anomaly detection and localization. This not only enhances the robustness of the detection process but also significantly improves localization accuracy. Consequently, the LASB model surpasses standard diffusion models across multiple metrics, demonstrating superior performance in multi-class anomaly detection, as evidenced in Table 2.

We compare the performance of our proposed Latent Anomaly Schrödinger Bridge (LASB) method with the DDPM [23], DDAD [33], GLAD [54], LDM [37], and DiAD [21] models to demonstrate the effectiveness and efficiency of our approach. First, we evaluate the models on the MVTec-AD dataset for both anomaly detection and localization. As shown in Table 2, LASB significantly outperforms all other models on every localization metric. Specifically, LASB achieves improvements of $0.6\% \uparrow$, $17.4\% \uparrow$, and $10.0\% \uparrow$ in AUROC$_{seg}$, AP$_{seg}$, and F1max$_{seg}$, respectively, and shows a $2.0\% \uparrow$ enhancement in F1max$_{cls}$ for detection, compared to GLAD [54]. These improvements highlight the robustness and effectiveness of the latent-space approach, specifically for localization. Furthermore, as shown in Figure 3, pixel-space models like DDPM [23], SGM [46], and SB models trained via Iterative Proportional Fitting [11, 25] or likelihood-based methods [8] require more training time and memory and exhibit slower sampling. In contrast, the latent-space LASB excels in all areas, achieving up to $5\times$ and $2\times$ faster training, $3\times$ and $2\times$ memory reduction, and $4\times$ and $2\times$ faster sampling compared to SB-based models [8, 11, 25] and DDPM [23], respectively.
| | NFE-1 | NFE-2 | NFE-5 | NFE-10 | NFE-100 | NFE-500 | NFE-1000 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| MVTec-AD | 97.42 / 96.82 | 98.16 / 97.84 | 98.94 / 98.16 | 99.14 / 98.66 | 99.14 / 98.66 | 99.14 / 98.66 | 99.14 / 98.66 |
| VisA | 92.74 / 96.71 | 93.21 / 97.15 | 93.96 / 97.90 | 94.2 / 98.18 | 94.2 / 98.18 | 94.2 / 98.18 | 94.2 / 98.18 |
| Inference Time (secs) | 0.12 | 0.25 | 0.52 | 0.74 | 1.5 | 7.5 | 15 |

Table 4. Performance evaluation across varying numbers of function evaluations (NFEs) on the MVTec-AD and VisA datasets. The tabulated metrics, AUROCcls / AUROCseg, provide a comprehensive overview of image-level and pixel-level anomaly detection performance at each NFE. The inference time, measured in wall-clock seconds on an NVIDIA V100 GPU, underscores the computational trade-off associated with increasing NFEs, reflecting the balance between model efficiency and performance.
| Method | DRAEM [56] | PatchCore [38] | DiffAD [60] | TransFusion [16] | DIAD [21] | GLAD [54] | ADSPR [43] | DDAD [33] | LASB (Ours) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Inference Time (secs) | 0.15 | 0.44 | 2.6 | 1.2 | 2.8 | 1.8 | 1.6 | 1.5 | 0.74 |

Table 5. Inference times for various methods, measured in wall-clock seconds on an NVIDIA V100 GPU.
| Task | Metrics | Sampling 1 | Sampling 2 | Sampling 3 | Sampling 4 |
| --- | --- | --- | --- | --- | --- |
| Detection | AUROCcls | 99.05 | 99.01 | 99.01 | 98.98 |
| | APcls | 99.15 | 99.14 | 99.14 | 99.11 |
| | F1maxcls | 98.59 | 98.33 | 99.83 | 98.32 |
| Localization | AUROCseg | 98.58 | 98.58 | 98.59 | 98.57 |
| | APseg | 78.17 | 78.14 | 78.14 | 78.12 |
| | F1maxseg | 70.62 | 70.59 | 70.61 | 70.58 |

Table 6. Stability analysis of LASB on the MVTec-AD dataset for detection and localization tasks, obtained by performing sampling multiple times. Results reported are mean values across classes and multiple samplings (see Section S1 of the appendix for more detailed results).

Generalization and Sampling Stability. In anomaly detection with diffusion models, achieving consistent outcomes during the sampling or inference stage is notably challenging due to the inherent stochasticity of generative processes. This inconsistency can be particularly problematic in real-world applications where reliable and stable detection is critical. Therefore, assessing the stability of model outputs across multiple inferences is essential. Our approach is to train the model once and then conduct multiple sampling or inference runs to evaluate whether the outcomes remain consistent over time. A further consideration for real-world applicability is achieving fast inference with minimal computational overhead; the proposed LASB model strikes a balance between performance and efficiency, ensuring its suitability for deployment in practical scenarios.

Our findings, detailed in Table 6, reveal that the LASB model exhibits remarkable stability across all evaluation metrics, showing negligible variance across numerous inferences. This consistency is attributed to the model's capability to maintain structural integrity and effectively transition from anomalous to normal latent spaces. The LASB model is designed to reconstruct a normal image regardless of the underlying anomalies, compelling it to disregard anomalous features during reconstruction. This not only ensures that anomalies of various patterns, sizes, and orientations are handled effectively but also enhances the overall reliability of the model.

Inference Complexity. As shown in Table 4, LASB demonstrates exceptional efficiency with its rapid sampling capabilities, achieving near-optimal performance with as few as 10 NFEs (Number of Function Evaluations) and an inference time of only 0.74 seconds on an NVIDIA V100 GPU. This highlights LASB's practicality for real-world applications, where fast and reliable anomaly detection and localization are crucial. To further illustrate its efficiency, we compare LASB's inference time with existing state-of-the-art (SOTA) models in Table 5. LASB is approximately $1.6\times$ faster than TransFusion [16], a leading class-based method, while also outperforming it in both anomaly localization and detection. Compared to DiAD [21], LASB achieves $2\times$ faster inference while delivering significantly stronger performance on both detection and localization metrics. This demonstrates LASB's clear advantage in balancing computational efficiency and robust anomaly detection.

# 6. Conclusions and Future Work

The proposed Latent Anomaly Schrödinger Bridge (LASB) model demonstrates robust performance in anomaly detection and localization tasks. Its unified nature eliminates the need for extra guidance or additional network components. LASB also produces stable inference results, requires less computational memory, and benefits from faster training and sampling. Given these advantages, LASB is well-suited for deployment in real-world industrial applications. Looking ahead, future research could focus on enhancing robust translation mechanisms specific to anomaly detection and localization tasks.

Acknowledgements.
We thank the anonymous reviewers for their valuable feedback that improved the presentation of this paper.

# References

[1] Samet Akcay, Amir Atapour-Abarghouei, and Toby P Breckon. GANomaly: Semi-supervised anomaly detection via adversarial training. In Computer Vision - ACCV 2018. Springer International Publishing, 2019. 2
[2] Brian DO Anderson. Reverse-time diffusion equation models. Stochastic Processes and their Applications, 12(3):313-326, 1982. 3
[3] Alexander Bauer, Shinichi Nakajima, and Klaus-Robert Müller. Self-supervised autoencoders for visual anomaly detection. Mathematics, 12(24), 2024. 2
[4] Finn Behrendt et al. Patched diffusion models for unsupervised anomaly detection in brain MRI. In Medical Imaging with Deep Learning, PMLR, 2024. 1, 2
[5] Paul Bergmann, Michael Fauser, David Sattlegger, and Carsten Steger. MVTec AD: A comprehensive real-world dataset for unsupervised anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9592-9600, 2019. 5
[6] Paul Bergmann et al. Improving unsupervised defect segmentation by applying structural similarity to autoencoders. arXiv preprint arXiv:1807.02011, 2018. 2
[7] Ziyi Chang, George A Koulieris, and Hubert PH Shum. On the design fundamentals of diffusion models: A survey. arXiv preprint arXiv:2306.04542, 2023. 5
[8] Tianrong Chen, Guan-Horng Liu, and Evangelos Theodorou. Likelihood training of Schrödinger bridge using forward-backward SDEs theory. In International Conference on Learning Representations, 2022. 2, 4, 7, 8
[9] Yongxin Chen, Tryphon T Georgiou, and Michele Pavon. Stochastic control liaisons: Richard Sinkhorn meets Gaspard Monge on a Schrödinger bridge. SIAM Review, 63(2):249-313, 2021. 4
[10] Anne-Sophie Collin and Christophe De Vleeschouwer. Improved anomaly detection by training an autoencoder with skip connections on images corrupted with stain-shaped noise. In 2020 25th International Conference on Pattern Recognition (ICPR), pages 7915-7922. IEEE, 2021. 2
[11] Valentin De Bortoli, James Thornton, Jeremy Heng, and Arnaud Doucet. Diffusion Schrödinger bridge with applications to score-based generative modeling. Advances in Neural Information Processing Systems, 34:17695-17709, 2021. 2, 4, 7, 8
[12] Thomas Defard, Aleksandr Setkov, Angelique Loesch, and Romaric Audigier. PaDiM: A patch distribution modeling framework for anomaly detection and localization. In International Conference on Pattern Recognition, pages 475-489. Springer, 2021. 6
[13] Hanqiu Deng and Xingyu Li. Anomaly detection via reverse distillation from one-class embedding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9737-9746, 2022. 6
[14] Wei Deng, Weijian Luo, Yixin Tan, Marin Bilos, Yu Chen, Yuriy Nevmyvaka, and Ricky T. Q. Chen. Variational Schrödinger diffusion models. In International Conference on Machine Learning (ICML), 2024. 2
[15] Choubo Ding, Guansong Pang, and Chunhua Shen. Catching both gray and black swans: Open-set supervised anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022. 1
[16] Matic Fučka, Vitjan Zavrtanik, and Danijel Skočaj. TransFusion: A transparency-based diffusion model for anomaly detection. In European Conference on Computer Vision, pages 91-108. Springer, 2024. 6, 8
[17] Dong Gong, Lingqiao Liu, Vuong Le, Budhaditya Saha, Moussa Reda Mansour, Svetha Venkatesh, and Anton van den Hengel.
Memorizing normality to detect anomaly: Memory-augmented deep autoencoder for unsupervised anomaly detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1705-1714, 2019. 2
[18] Alvaro Gonzalez-Jimenez et al. SANO: Score-based diffusion model for anomaly localization in dermatology. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023. 1, 2
[19] Denis Gudovskiy, Shun Ishizaka, and Kazuki Kozuka. CFLOW-AD: Real-time unsupervised anomaly detection with localization via conditional normalizing flows. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 98-107, 2022. 6
[20] Haoyang He, Yuhu Bai, Jiangning Zhang, Qingdong He, Hongxu Chen, Zhenye Gan, Chengjie Wang, Xiangtai Li, Guanzhong Tian, and Lei Xie. MambaAD: Exploring state space models for multi-class unsupervised anomaly detection. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. 6, 7
[21] Haoyang He, Jiangning Zhang, Hongxu Chen, Xuhai Chen, Zhishan Li, Xu Chen, Yabiao Wang, Chengjie Wang, and Lei Xie. A diffusion-based framework for multi-class anomaly detection. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 8472-8480, 2024. 1, 2, 3, 5, 6, 7, 8
[22] Liren He, Zhengkai Jiang, Jinlong Peng, Liang Liu, Qianggang Du, Xiaobin Hu, Wenbing Zhu, Mingmin Chi, Yabiao Wang, and Chengjie Wang. Learning unified reference representation for unsupervised multi-class anomaly detection. arXiv preprint arXiv:2403.11561, 2024. 3
[23] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840-6851, 2020. 2, 6, 7, 8
[24] Bozhen Hu et al. A lightweight spatial and temporal multi-feature fusion network for defect detection. IEEE Transactions on Image Processing, 30:472-486, 2020. 1
[25] Solomon Kullback. Probability densities with given marginals. The Annals of Mathematical Statistics, 39(4):1236-1243, 1968. 4, 7, 8
[26] Yufei Liang et al. Omni-frequency channel-selection representations for unsupervised anomaly detection. IEEE Transactions on Image Processing, 2023. 2, 6
[27] Guan-Horng Liu, Arash Vahdat, De-An Huang, Evangelos A. Theodorou, Weili Nie, and Anima Anandkumar. I2SB: Image-to-image Schrödinger bridge. In International Conference on Machine Learning, 2023. 4, 5
[28] Wenqian Liu, Runze Li, Meng Zheng, Srikrishna Karanam, Ziyan Wu, Bir Bhanu, Richard J Radke, and Octavia Camps. Towards visually explaining variational autoencoders. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8642-8651, 2020. 2
[29] Zhikang Liu, Yiming Zhou, Yuansheng Xu, and Zilei Wang. SimpleNet: A simple network for image anomaly detection and localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20402-20411, 2023. 6
[30] Xingming Long et al. Fabric defect detection using tactile information. In 2021 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2021. 1
[31] Fanbin Lu et al. Removing anomalies as noises for industrial defect localization. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023. 1, 2
[32] Ruiying Lu, YuJie Wu, Long Tian, Dongsheng Wang, Bo Chen, Xiyang Liu, and Ruimin Hu. Hierarchical vector quantized transformer for multi-class unsupervised anomaly detection. Advances in Neural Information Processing Systems, 36:8487-8500, 2023.
3, 6, 7
[33] Arian Mousakhan, Thomas Brox, and Jawad Tayyub. Anomaly detection with conditioned denoising diffusion models. arXiv preprint arXiv:2305.15956, 2023. 6, 7, 8
[34] Edward Nelson. Dynamical Theories of Brownian Motion. Princeton University Press, 2020. 4
[35] Jonathan Pirnay and Keng Chai. Inpainting transformer for anomaly detection. In International Conference on Image Analysis and Processing, pages 394-406. Springer, 2022. 2
[36] Hannes Risken. The Fokker-Planck Equation. Springer, 1996. 4
[37] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10684-10695, 2022. 2, 5, 6, 7
[38] Karsten Roth, Latha Pemula, Joaquin Zepeda, Bernhard Schölkopf, Thomas Brox, and Peter Gehler. Towards total recall in industrial anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14318-14328, 2022. 6, 8
[39] Marco Rudolph, Tom Wehrbein, Bodo Rosenhahn, and Bastian Wandt. Fully convolutional cross-scale-flows for image-based defect detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 1088-1097, 2022. 6
[40] Chitwan Saharia, William Chan, Huiwen Chang, Chris Lee, Jonathan Ho, Tim Salimans, David Fleet, and Mohammad Norouzi. Palette: Image-to-image diffusion models. In ACM SIGGRAPH 2022 Conference Proceedings, pages 1-10, 2022. 5
[41] Thomas Schlegl, Philipp Seeböck, Sebastian M Waldstein, Georg Langs, and Ursula Schmidt-Erfurth. f-AnoGAN: Fast unsupervised anomaly detection with generative adversarial networks. Medical Image Analysis, 54:30-44, 2019. 2
[42] Yuyang Shi, Valentin De Bortoli, Andrew Campbell, and Arnaud Doucet. Diffusion Schrödinger bridge matching. Advances in Neural Information Processing Systems, 36, 2024. 2
[43] Woosang Shin, Jonghyeon Lee, Taehan Lee, Sangmoon Lee, and Jong Pil Yun. Anomaly detection using score-based perturbation resilience. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 23372-23382, 2023. 1, 2, 6, 8
[44] Kihyuk Sohn et al. Anomaly clustering: Grouping images into coherent clusters of anomaly types. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2023. 1
[45] Jouwon Song et al. AnoSeg: Anomaly segmentation network using self-supervised learning. arXiv preprint arXiv:2110.03396, 2021. 2
[46] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In International Conference on Learning Representations, 2021. 2, 3, 7
[47] Daniel Stanley Tan, Yi-Chun Chen, Trista Pei-Chun Chen, and Wei-Chao Chen. TrustMAE: A noise-resilient defect classification framework using memory-augmented auto-encoders with trust regions. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 276-285, 2021. 2
[48] Justin Tebbe and Jawad Tayyub. Dynamic addition of noise in a diffusion model for anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pages 3940-3949, 2024. 6
[49] Aaron Van Den Oord, Oriol Vinyals, et al. Neural discrete representation learning. Advances in Neural Information Processing Systems, 30, 2017. 5
[50] Shashanka Venkataramanan et al.
Attention guided anomaly localization in images. In European Conference on Computer Vision. Springer International Publishing, 2020. 1
[51] Gefei Wang, Yuling Jiao, Qian Xu, Yang Wang, and Can Yang. Deep generative learning via Schrödinger bridge. In International Conference on Machine Learning, pages 10794-10804. PMLR, 2021. 2
[52] Julian Wyatt et al. AnoDDPM: Anomaly detection with denoising diffusion probabilistic models using simplex noise. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022. 1, 2
[53] Rui Xu, Yunke Wang, and Bo Du. MAEDiff: Masked autoencoder-enhanced diffusion models for unsupervised anomaly detection in brain images. arXiv preprint arXiv:2401.10561, 2024. 1
[54] Hang Yao, Ming Liu, Zhicun Yin, Zifei Yan, Xiaopeng Hong, and Wangmeng Zuo. GLAD: Towards better reconstruction with global and local adaptive diffusion models for unsupervised anomaly detection. In European Conference on Computer Vision, pages 1-17. Springer, 2024. 6, 7, 8
[55] Zhiyuan You, Lei Cui, Yujun Shen, Kai Yang, Xin Lu, Yu Zheng, and Xinyi Le. A unified model for multi-class anomaly detection. Advances in Neural Information Processing Systems, 35:4571-4584, 2022. 3, 6, 7
[56] Vitjan Zavrtanik, Matej Kristan, and Danijel Skočaj. DRAEM: A discriminatively trained reconstruction embedding for surface anomaly detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 8330-8339, 2021. 2, 4, 6, 7, 8
[57] Vitjan Zavrtanik, Matej Kristan, and Danijel Skočaj. Reconstruction by inpainting for visual anomaly detection. Pattern Recognition, 112:107706, 2021. 2
[58] Vitjan Zavrtanik, Matej Kristan, and Danijel Skočaj. DSR: A dual subspace re-projection network for surface anomaly detection. In European Conference on Computer Vision. Springer Nature Switzerland, 2022. 2, 6
[59] Zhaoyang Zeng et al. Reference-based defect detection network. IEEE Transactions on Image Processing, 30:6637-6647, 2021. 1
[60] Xinyi Zhang, Naiqi Li, Jiawei Li, Tao Dai, Yong Jiang, and Shu-Tao Xia. Unsupervised surface anomaly detection with diffusion probabilistic model. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6782-6791, 2023. 1, 2, 6, 7, 8
[61] Ying Zhao. OmniAL: A unified CNN framework for unsupervised anomaly localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3924-3933, 2023. 6
[62] Zhixuan Zhao et al. A surface defect detection method based on positive samples. In PRICAI 2018: Trends in Artificial Intelligence. Springer International Publishing, 2018. 2
[63] Yang Zou, Jongheon Jeong, Latha Pemula, Dongqing Zhang, and Onkar Dabeer. Spot-the-difference self-supervised pretraining for anomaly detection and segmentation. In European Conference on Computer Vision. Springer Nature Switzerland, 2022.
5 \ No newline at end of file diff --git a/2025/A Unified Latent Schrodinger Bridge Diffusion Model for Unsupervised Anomaly Detection and Localization/images.zip b/2025/A Unified Latent Schrodinger Bridge Diffusion Model for Unsupervised Anomaly Detection and Localization/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..01ae7b3a29558436bf6468d473f18190304ef8bf --- /dev/null +++ b/2025/A Unified Latent Schrodinger Bridge Diffusion Model for Unsupervised Anomaly Detection and Localization/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eff38f1a11e72d71feede3b48c1888f7761e8a71f804048cecc8742ad085f172 +size 487990 diff --git a/2025/A Unified Latent Schrodinger Bridge Diffusion Model for Unsupervised Anomaly Detection and Localization/layout.json b/2025/A Unified Latent Schrodinger Bridge Diffusion Model for Unsupervised Anomaly Detection and Localization/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..858ea619c4ffdaae047a125ce958cdc944b578d5 --- /dev/null +++ b/2025/A Unified Latent Schrodinger Bridge Diffusion Model for Unsupervised Anomaly Detection and Localization/layout.json @@ -0,0 +1,9294 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 84, + 102, + 526, + 140 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 84, + 102, + 526, + 140 + ], + "spans": [ + { + "bbox": [ + 84, + 102, + 526, + 140 + ], + "type": "text", + "content": "A Unified Latent Schrödinger Bridge Diffusion Model for Unsupervised Anomaly Detection and Localization" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 111, + 160, + 499, + 218 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 160, + 499, + 218 + ], + "spans": [ + { + "bbox": [ + 111, + 160, + 499, + 218 + ], + "type": "text", + "content": "Shilhora Akshay" + }, + { + "bbox": [ + 111, + 160, + 499, + 218 + ], + "type": "inline_equation", + "content": "^{1,*}" + }, + { + "bbox": [ + 111, + 160, + 499, + 218 + ], + "type": "text", + "content": " Niveditha Lakshmi Narasimhan" + }, + { + "bbox": [ + 111, + 160, + 499, + 218 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 111, + 160, + 499, + 218 + ], + "type": "text", + "content": " Jacob George" + }, + { + "bbox": [ + 111, + 160, + 499, + 218 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 111, + 160, + 499, + 218 + ], + "type": "text", + "content": " Vineeth N Balasubramanian" + }, + { + "bbox": [ + 111, + 160, + 499, + 218 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 111, + 160, + 499, + 218 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 111, + 160, + 499, + 218 + ], + "type": "text", + "content": "Indian Institute of Technology, Hyderabad " + }, + { + "bbox": [ + 111, + 160, + 499, + 218 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 111, + 160, + 499, + 218 + ], + "type": "text", + "content": "KLA Corporation *shilhora.akshay333@gmail.com" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 151, + 244, + 200, + 258 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 151, + 244, + 200, + 258 + ], + "spans": [ + { + "bbox": [ + 151, + 244, + 200, + 258 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 270, + 296, + 594 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 270, + 296, + 594 + ], + "spans": [ + { + 
"bbox": [ + 55, + 270, + 296, + 594 + ], + "type": "text", + "content": "Anomaly detection and localization remain pivotal challenges in computer vision, with applications ranging from industrial inspection to medical diagnostics. While current supervised methods offer high precision, they are often impractical due to the scarcity of annotated data and the infrequent occurrence of anomalies. Recent advancements in unsupervised approaches, particularly reconstruction-based methods, have addressed these issues by training models exclusively on normal data, enabling them to identify anomalies during inference. However, these methods frequently rely on auxiliary networks or specialized adaptations, which can limit their robustness and practicality. This work introduces the Latent Anomaly Schrödinger Bridge (LASB), a unified unsupervised anomaly detection model that operates entirely in the latent space without requiring additional networks or custom modifications. LASB transforms anomaly images into normal images by preserving structural integrity across varying anomaly classes, lighting, and pose conditions, making it highly robust and versatile. Unlike previous methods, LASB does not focus solely on reconstructing anomaly features, but emphasizes anomaly transformation, achieving smooth anomaly-to-normal image conversions. Our method achieves state-of-the-art performance on both the MVTec-AD and VisA datasets, excelling in detection and localization tasks. Our code is available at https://github.com/ShilhoraAkshayPatel/LASB." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 56, + 620, + 135, + 633 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 620, + 135, + 633 + ], + "spans": [ + { + "bbox": [ + 56, + 620, + 135, + 633 + ], + "type": "text", + "content": "1. Introduction" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 641, + 296, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 641, + 296, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 641, + 296, + 714 + ], + "type": "text", + "content": "Anomaly detection and localization is a critical task in computer vision with several applications in medical [4, 18, 52, 53] and industrial [21, 31, 43, 60], which has attracted significant attention from the research community. The main goal of anomaly detection is to identify and localize abnormal patterns that are unusual from those seen in normal" + } + ] + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 317, + 242, + 553, + 373 + ], + "blocks": [ + { + "bbox": [ + 317, + 242, + 553, + 373 + ], + "lines": [ + { + "bbox": [ + 317, + 242, + 553, + 373 + ], + "spans": [ + { + "bbox": [ + 317, + 242, + 553, + 373 + ], + "type": "image", + "image_path": "348132e01528d7ebd2ebe824dd596fe2dffeeb94379f6c2db810f1dcd844479c.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 313, + 380, + 555, + 425 + ], + "lines": [ + { + "bbox": [ + 313, + 380, + 555, + 425 + ], + "spans": [ + { + "bbox": [ + 313, + 380, + 555, + 425 + ], + "type": "text", + "content": "Figure 1. Illustration of the Gaussian and Bridge diffusion processes, with reverse image trajectory from LASB. LASB learns a direct diffusion bridge between anomaly and normal distributions, enhancing interpretability and anomaly-free transformation." 
+ } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_caption" + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 447, + 555, + 530 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 447, + 555, + 530 + ], + "spans": [ + { + "bbox": [ + 313, + 447, + 555, + 530 + ], + "type": "text", + "content": "instances. In the past few years the research community has proposed various supervised anomaly detection methods [15, 24, 30, 44, 50, 59]. The extensive need for annotations is expensive and the infrequent presence of anomalous samples makes these methods unsuitable for practical applications due to their limitations in addressing real-world scenarios." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 534, + 556, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 534, + 556, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 534, + 556, + 715 + ], + "type": "text", + "content": "Recent research has sought to overcome the limitations of supervised anomaly detection methods by relying solely on normal images during training and localizing abnormal patterns at inference time. Among the various categories within the unsupervised paradigm, reconstruction-based methods have garnered significant attention due to their promising results and strong performance in real-world scenarios. The core concept behind these methods is that the model is trained exclusively on normal images and during inference, it reconstructs abnormal samples as normal samples. Thus, doing so benefits in two directions, one can interpret how the model's ability to detect and localize anomalies. Second, it aids the model to perform better while reconstructing the anomalies. Recent advancements have introduced novel approaches lever" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "spans": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "text", + "content": "CVF" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "spans": [ + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "type": "text", + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "text", + "content": "25528" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 294, + 133 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 294, + 133 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 294, + 133 + ], + "type": "text", + "content": "aging AutoEncoder-based (AE) [3, 6, 56-58], Generative Adversarial Networks-based (GANs) [1, 26, 45, 62], and diffusion-based approaches [4, 18, 21, 31, 43, 52, 60]. Our research specifically focuses on various diffusion models tailored for anomaly detection tasks." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 140, + 295, + 307 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 140, + 295, + 307 + ], + "spans": [ + { + "bbox": [ + 55, + 140, + 295, + 307 + ], + "type": "text", + "content": "Existing diffusion-based methods primarily focus on developing novel noise-conditioning techniques, often incorporating an additional discriminative sub-network [60]. Another approach utilizes a score-based diffusion model to identify anomalies by evaluating how effectively samples can return to the normal data distribution after perturbation, though it fails to deliver competitive performance [43]. Additionally, DiAD [21] proposes novel semantic guidance to specifically understand the anomalies and showcase its robustness. However, earlier diffusion-based methods generally depend on auxiliary networks or tailored diffusion processes to extract anomaly features. Moreover, most of these methods prioritize the extraction and reconstruction of anomaly features." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 314, + 295, + 445 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 314, + 295, + 445 + ], + "spans": [ + { + "bbox": [ + 55, + 314, + 295, + 445 + ], + "type": "text", + "content": "In this work, we introduce an intuitive mechanism that neither requires model-specific adaptations to extract anomaly features nor relies on an additional feature extractor for training or inference, to extract discriminative features. By leveraging the capabilities of Schrödinger Bridge [8, 11, 14, 42, 51], we propose the Latent Anomaly Schrödinger Bridge (LASB), which operates entirely in the latent space, transforming anomalous images into normal ones, regardless of anomaly class, and demonstrating robustness to variations in lighting and pose. In the following, we outline the key contributions of our study." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 462, + 293, + 712 + ], + "type": "list", + "angle": 0, + "index": 7, + "blocks": [ + { + "bbox": [ + 55, + 462, + 293, + 509 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 462, + 293, + 509 + ], + "spans": [ + { + "bbox": [ + 55, + 462, + 293, + 509 + ], + "type": "text", + "content": "- We propose LASB, a unsupervised bridge-based anomaly detection model that operates in latent space. LASB transforms anomaly images into normal images, regardless of the anomaly class." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 510, + 293, + 602 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 510, + 293, + 602 + ], + "spans": [ + { + "bbox": [ + 55, + 510, + 293, + 602 + ], + "type": "text", + "content": "- Unified Framework: LASB offers a comprehensive framework for both anomaly detection and localization without the need for auxiliary networks. Unlike conventional methods that rely on Gaussian diffusion, LASB employs a bridge-based diffusion process that inherently preserves structural integrity, marking the first application of such a process in latent diffusion model for anomaly detection." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 605, + 293, + 652 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 605, + 293, + 652 + ], + "spans": [ + { + "bbox": [ + 55, + 605, + 293, + 652 + ], + "type": "text", + "content": "- Efficient and Scalable: By utilizing the Linear Schrödinger Bridge in latent space, LASB significantly reduces training time, memory consumption, and sampling speed, enhancing overall efficiency and scalability." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 654, + 293, + 712 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 654, + 293, + 712 + ], + "spans": [ + { + "bbox": [ + 55, + 654, + 293, + 712 + ], + "type": "text", + "content": "- Our method achieves state-of-the-art performance on the MVtec dataset, with a image-level " + }, + { + "bbox": [ + 55, + 654, + 293, + 712 + ], + "type": "inline_equation", + "content": "\\mathrm{AUROC}_{cls} / \\mathrm{AP}_{cls}" + }, + { + "bbox": [ + 55, + 654, + 293, + 712 + ], + "type": "text", + "content": " of " + }, + { + "bbox": [ + 55, + 654, + 293, + 712 + ], + "type": "inline_equation", + "content": "99.2\\% / 99.3\\%" + }, + { + "bbox": [ + 55, + 654, + 293, + 712 + ], + "type": "text", + "content": " and an pixel-level " + }, + { + "bbox": [ + 55, + 654, + 293, + 712 + ], + "type": "inline_equation", + "content": "\\mathrm{AUROC}_{seg} / \\mathrm{AP}_{seg}" + }, + { + "bbox": [ + 55, + 654, + 293, + 712 + ], + "type": "text", + "content": " of " + }, + { + "bbox": [ + 55, + 654, + 293, + 712 + ], + "type": "inline_equation", + "content": "98.6\\% / 78.2\\%" + }, + { + "bbox": [ + 55, + 654, + 293, + 712 + ], + "type": "text", + "content": " on the test set, establishing a new benchmark for unsupervised anomaly detection." + } + ] + } + ], + "index": 6 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 314, + 71, + 400, + 83 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 71, + 400, + 83 + ], + "spans": [ + { + "bbox": [ + 314, + 71, + 400, + 83 + ], + "type": "text", + "content": "2. Related Work" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 88, + 553, + 350 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 88, + 553, + 350 + ], + "spans": [ + { + "bbox": [ + 313, + 88, + 553, + 350 + ], + "type": "text", + "content": "Reconstruction-based Methods: Reconstruction-based anomaly detection models operate on the premise that networks trained exclusively on normal images will fail to accurately reconstruct anomalous ones due to their unfamiliarity with the abnormal distribution. Notably, autoencoder [6, 10, 28] and Generative Adversarial Network (GAN) [1, 41] frameworks have been widely utilized for this task, where anomaly scores are computed based on the reconstruction error between the input and its generated counterpart. Nevertheless, such approaches frequently struggle with direct copy issues and elevated false positive rates due to their limited generalization capacity over anomalous regions or due to the overgeneralization capacity of reconstructive methods. To mitigate these generalization challenges, recent innovations have pivoted towards inpainting strategies [35, 57] or integrating supplementary memory modules [17, 47]. Furthermore, the DRAEM [56] model enhances performance by leveraging pseudoanomalies and coupling autoencoders with a segmentation network, though its effectiveness diminishes when faced with substantial deviations between real and pseudo anomalies." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 359, + 553, + 657 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 359, + 553, + 657 + ], + "spans": [ + { + "bbox": [ + 313, + 359, + 553, + 657 + ], + "type": "text", + "content": "Diffusion-based Methods: Diffusion models, known for their high-fidelity image synthesis, are increasingly applied to anomaly detection in both medical and industrial domains [18, 21, 43, 52, 60]. In the medical field, SANO [18] leverages score-based diffusion models to localize skin conditions such as eczema, while AnoDDPM [52] employs the Denoising Diffusion Probabilistic Model (DDPM) [23] to segment tumors, initiating with simplex noise. These methods primarily utilize standard DDPM [23] or Score-based Generative Models (SGMs) [46], yet they often struggle to capture structural features because they rely on starting from pure Gaussian or simplex noise and frequently require external guidance for complex feature integration. In industrial applications, DiffAD [60] introduces a noise interpolation technique coupled with a discriminative sub-network to enhance detection capabilities. AD-SPR [43] emphasizes the significance of perturbed resilience using SGMs [46] to identify anomalies. Additionally, DiAD [21] presents a novel semantic guidance network that directs a stable diffusion model [37] to effectively detect and localize anomalies in both industrial and selected medical imaging applications. However, these models often require class-specific training or additional networks to guide the diffusion process or to perform further segmentation via a discriminative sub-network." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 665, + 553, + 712 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 665, + 553, + 712 + ], + "spans": [ + { + "bbox": [ + 313, + 665, + 553, + 712 + ], + "type": "text", + "content": "Unified/Multi-class Methods: Early approaches in the literature developed models on a per-class basis, which limited their generalizability. 
More recent research has shifted towards unified or multi-class models that are trained on" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 749, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 749, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 749, + 317, + 757 + ], + "type": "text", + "content": "25529" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 60, + 49, + 545, + 153 + ], + "blocks": [ + { + "bbox": [ + 60, + 49, + 545, + 153 + ], + "lines": [ + { + "bbox": [ + 60, + 49, + 545, + 153 + ], + "spans": [ + { + "bbox": [ + 60, + 49, + 545, + 153 + ], + "type": "image", + "image_path": "4fb7253a32f925bb900a6dde4f0bd8f50e6ac343e6b27b7991abe1f462799f5d.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 60, + 156, + 162, + 243 + ], + "blocks": [ + { + "bbox": [ + 60, + 156, + 162, + 243 + ], + "lines": [ + { + "bbox": [ + 60, + 156, + 162, + 243 + ], + "spans": [ + { + "bbox": [ + 60, + 156, + 162, + 243 + ], + "type": "image", + "image_path": "e4ffb779d8ba2249faef708cb0ab8c73765de979b76a5380378ff518a1c2e26e.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 54, + 245, + 555, + 302 + ], + "lines": [ + { + "bbox": [ + 54, + 245, + 555, + 302 + ], + "spans": [ + { + "bbox": [ + 54, + 245, + 555, + 302 + ], + "type": "text", + "content": "Figure 2. The LASB model framework consists of two key stages: training and inference. During training, anomaly augmentations are applied to images, introducing distortions that the LASB model learns to remove, ultimately reconstructing a normal image. This iterative process continues until the model effectively filters out the anomalies. In the inference stage, the model processes real anomalous images, reconstructing normal versions. The anomaly detection is achieved by computing the difference " + }, + { + "bbox": [ + 54, + 245, + 555, + 302 + ], + "type": "inline_equation", + "content": "(p_B - p_A)" + }, + { + "bbox": [ + 54, + 245, + 555, + 302 + ], + "type": "text", + "content": " between the original and reconstructed images, with anomalies visualized via a heatmap." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 166, + 156, + 544, + 243 + ], + "blocks": [ + { + "bbox": [ + 166, + 156, + 544, + 243 + ], + "lines": [ + { + "bbox": [ + 166, + 156, + 544, + 243 + ], + "spans": [ + { + "bbox": [ + 166, + 156, + 544, + 243 + ], + "type": "image", + "image_path": "2168f79c8e9cbdaec09dbc1aa9abacddcfda286c1ca5b93866ccc3559d979ead.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 54, + 321, + 297, + 525 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 321, + 297, + 525 + ], + "spans": [ + { + "bbox": [ + 54, + 321, + 297, + 525 + ], + "type": "text", + "content": "entire datasets, demonstrating improved robustness [21, 22, 32, 55]. For instance, UniAD [55] utilizes layer-wise query decoding, neighbor-masked attention, and feature jittering techniques to enhance multi-class anomaly detection. In contrast, HVQ-Trans [32] employs a hierarchical vector quantized prototype-oriented Transformer to improve discriminative capacity across multiple classes. 
Both methods focus on tackling the \"identical shortcut\" issue. On the other hand, RLR [22] introduces a novel framework using learnable reference representations with locality constraints to explicitly learn normal patterns and circumvent \"learning shortcuts\", achieving superior results on standard datasets (MVTec-AD and VisA). Additionally, the DiAD [21] model, designed for multi-class settings and based on diffusion-based reconstruction, outperforms UniAD [55], RLR [22], and HVQ-Trans [32] in terms of localization and detection performance." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 536, + 188, + 550 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 536, + 188, + 550 + ], + "spans": [ + { + "bbox": [ + 55, + 536, + 188, + 550 + ], + "type": "text", + "content": "3. Proposed Methodology" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 555, + 142, + 567 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 555, + 142, + 567 + ], + "spans": [ + { + "bbox": [ + 55, + 555, + 142, + 567 + ], + "type": "text", + "content": "3.1. Preliminaries" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 573, + 296, + 657 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 573, + 296, + 657 + ], + "spans": [ + { + "bbox": [ + 55, + 573, + 296, + 657 + ], + "type": "text", + "content": "Before introducing the full methodology, we begin with a preliminary overview of score-based generative models (SGMs). Next, we explain the Schrödinger Bridge concept and discuss its connection to SGMs. Finally, we present the core methodology, detailing how and why the latent Schrödinger Bridge effectively addresses anomaly detection and localization." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 665, + 296, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 665, + 296, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 665, + 296, + 714 + ], + "type": "text", + "content": "Notations. Here we introduce some notations and use them throughout the work. Let " + }, + { + "bbox": [ + 55, + 665, + 296, + 714 + ], + "type": "inline_equation", + "content": "X_{t}(\\in \\mathbb{R}^{d})" + }, + { + "bbox": [ + 55, + 665, + 296, + 714 + ], + "type": "text", + "content": " denote a stochastic process, where " + }, + { + "bbox": [ + 55, + 665, + 296, + 714 + ], + "type": "inline_equation", + "content": "t\\in [0,1]" + }, + { + "bbox": [ + 55, + 665, + 296, + 714 + ], + "type": "text", + "content": " is a continuous time step. The intermediate steps are uniformly distributed in the interval" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 321, + 555, + 394 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 321, + 555, + 394 + ], + "spans": [ + { + "bbox": [ + 313, + 321, + 555, + 394 + ], + "type": "inline_equation", + "content": "(\\mathcal{U}(t \\sim [0,1]))" + }, + { + "bbox": [ + 313, + 321, + 555, + 394 + ], + "type": "text", + "content": ". 
The initial distribution, the corrupted data distribution, is denoted as " + }, + { + "bbox": [ + 313, + 321, + 555, + 394 + ], + "type": "inline_equation", + "content": "p_A" + }, + { + "bbox": [ + 313, + 321, + 555, + 394 + ], + "type": "text", + "content": ", and the terminal distribution, the clean data distribution, is denoted as " + }, + { + "bbox": [ + 313, + 321, + 555, + 394 + ], + "type": "inline_equation", + "content": "p_B" + }, + { + "bbox": [ + 313, + 321, + 555, + 394 + ], + "type": "text", + "content": ". The Wiener process and its reversed counterpart, adopted from Anderson et al. [2], are represented as " + }, + { + "bbox": [ + 313, + 321, + 555, + 394 + ], + "type": "inline_equation", + "content": "W_t" + }, + { + "bbox": [ + 313, + 321, + 555, + 394 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 321, + 555, + 394 + ], + "type": "inline_equation", + "content": "\overline{W}_t" + }, + { + "bbox": [ + 313, + 321, + 555, + 394 + ], + "type": "text", + "content": ", respectively, both in " + }, + { + "bbox": [ + 313, + 321, + 555, + 394 + ], + "type": "inline_equation", + "content": "\mathbb{R}^d" + }, + { + "bbox": [ + 313, + 321, + 555, + 394 + ], + "type": "text", + "content": ". " + }, + { + "bbox": [ + 313, + 321, + 555, + 394 + ], + "type": "inline_equation", + "content": "\mathbb{I} \in \mathbb{R}^{d \times d}" + }, + { + "bbox": [ + 313, + 321, + 555, + 394 + ], + "type": "text", + "content": " is the identity matrix." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 401, + 556, + 496 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 401, + 556, + 496 + ], + "spans": [ + { + "bbox": [ + 313, + 401, + 556, + 496 + ], + "type": "text", + "content": "Score-based Generative Models. Score-based Generative Models (SGMs) perturb the data stochastically across continuous noise scales and then use reverse-time stochastic differential equations (SDEs) to learn the reverse diffusion process. This reverse-time SDE relies on a score function, enabling the reconstruction of any distribution starting from a Gaussian [46]. 
Given data " + }, + { + "bbox": [ + 313, + 401, + 556, + 496 + ], + "type": "inline_equation", + "content": "X_0 \\sim p_A" + }, + { + "bbox": [ + 313, + 401, + 556, + 496 + ], + "type": "text", + "content": ", forward and backward SDEs are formulated as:" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 337, + 505, + 553, + 519 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 337, + 505, + 553, + 519 + ], + "spans": [ + { + "bbox": [ + 337, + 505, + 553, + 519 + ], + "type": "interline_equation", + "content": "d X _ {t} = f _ {t} \\left(X _ {t}\\right) d t + \\sqrt {\\beta_ {t}} d W _ {t}, \\quad X _ {0} \\sim p _ {A}, \\tag {1}", + "image_path": "d708aa2790896ca02b8eb674e66b67990fa23da5e63afe8cbbd52fb759d4bf46.jpg" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 338, + 522, + 553, + 536 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 338, + 522, + 553, + 536 + ], + "spans": [ + { + "bbox": [ + 338, + 522, + 553, + 536 + ], + "type": "interline_equation", + "content": "d X _ {t} = \\left[ f _ {t} - \\beta_ {t} \\nabla \\log p \\left(X _ {t}, t\\right) \\right] d t + \\sqrt {\\beta_ {t}} d \\bar {W} _ {t}, \\tag {2}", + "image_path": "3882b1142327199ee8c8521bf22d3bbc9a67ea4ef73a492843fa8c90b277bb69.jpg" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 545, + 555, + 617 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 545, + 555, + 617 + ], + "spans": [ + { + "bbox": [ + 313, + 545, + 555, + 617 + ], + "type": "text", + "content": "where, " + }, + { + "bbox": [ + 313, + 545, + 555, + 617 + ], + "type": "inline_equation", + "content": "f(\\cdot ,t):\\mathbb{R}^n\\to \\mathbb{R}^n" + }, + { + "bbox": [ + 313, + 545, + 555, + 617 + ], + "type": "text", + "content": " and the terminal distributions (i.e., at " + }, + { + "bbox": [ + 313, + 545, + 555, + 617 + ], + "type": "inline_equation", + "content": "t = 1" + }, + { + "bbox": [ + 313, + 545, + 555, + 617 + ], + "type": "text", + "content": ") approach Gaussian " + }, + { + "bbox": [ + 313, + 545, + 555, + 617 + ], + "type": "inline_equation", + "content": "(X_{1}\\sim \\mathcal{N}(0,I))" + }, + { + "bbox": [ + 313, + 545, + 555, + 617 + ], + "type": "text", + "content": ". To achieve this, diffusion coefficient " + }, + { + "bbox": [ + 313, + 545, + 555, + 617 + ], + "type": "inline_equation", + "content": "\\beta_{t}\\in \\mathbb{R}" + }, + { + "bbox": [ + 313, + 545, + 555, + 617 + ], + "type": "text", + "content": " is carefully tuned and ensuring that the base drift " + }, + { + "bbox": [ + 313, + 545, + 555, + 617 + ], + "type": "inline_equation", + "content": "f_{t}" + }, + { + "bbox": [ + 313, + 545, + 555, + 617 + ], + "type": "text", + "content": " is linear in " + }, + { + "bbox": [ + 313, + 545, + 555, + 617 + ], + "type": "inline_equation", + "content": "X_{t}" + }, + { + "bbox": [ + 313, + 545, + 555, + 617 + ], + "type": "text", + "content": ". 
Here, " + }, + { + "bbox": [ + 313, + 545, + 555, + 617 + ], + "type": "inline_equation", + "content": "p" + }, + { + "bbox": [ + 313, + 545, + 555, + 617 + ], + "type": "text", + "content": " is the marginal density of equation (2) at time " + }, + { + "bbox": [ + 313, + 545, + 555, + 617 + ], + "type": "inline_equation", + "content": "t" + }, + { + "bbox": [ + 313, + 545, + 555, + 617 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 313, + 545, + 555, + 617 + ], + "type": "inline_equation", + "content": "\\nabla \\log p" + }, + { + "bbox": [ + 313, + 545, + 555, + 617 + ], + "type": "text", + "content": " denotes its score [46]." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 624, + 556, + 696 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 624, + 556, + 696 + ], + "spans": [ + { + "bbox": [ + 313, + 624, + 556, + 696 + ], + "type": "text", + "content": "SGMs as Schrödinger Bridge. The absence of flexibility in SGM's to transport data to desired distribution, demands a versatile strategy. The Schrödinger Bridge (SB) is a strategy often applied in optimal transport to find the optimal path measure between two marginal densities and it is expressed as," + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 385, + 696, + 553, + 715 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 385, + 696, + 553, + 715 + ], + "spans": [ + { + "bbox": [ + 385, + 696, + 553, + 715 + ], + "type": "interline_equation", + "content": "\\min _ {Q \\in P \\left(p _ {A}, p _ {B}\\right)} \\operatorname {K L} (\\mathbb {Q} | | \\mathbb {P}), \\tag {3}", + "image_path": "1afa158deefe408142e42cc04e6301e115f912546f4d8ba94d07916166a5bfeb.jpg" + } + ] + } + ], + "index": 15 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "text", + "content": "25530" + } + ] + } + ], + "index": 16 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 296, + 191 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 296, + 191 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 296, + 191 + ], + "type": "text", + "content": "Here, " + }, + { + "bbox": [ + 55, + 72, + 296, + 191 + ], + "type": "inline_equation", + "content": "\\mathbb{Q}" + }, + { + "bbox": [ + 55, + 72, + 296, + 191 + ], + "type": "text", + "content": " represents a path measure within " + }, + { + "bbox": [ + 55, + 72, + 296, + 191 + ], + "type": "inline_equation", + "content": "\\mathbb{P}(p_A,p_B)" + }, + { + "bbox": [ + 55, + 72, + 296, + 191 + ], + "type": "text", + "content": ", characterized by having marginal densities " + }, + { + "bbox": [ + 55, + 72, + 296, + 191 + ], + "type": "inline_equation", + "content": "p_A" + }, + { + "bbox": [ + 55, + 72, + 296, + 191 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 72, + 296, + 191 + ], + "type": "inline_equation", + "content": "p_B" + }, + { + "bbox": [ + 55, + 72, + 296, + 191 + ], + "type": "text", + "content": " at times " + }, + { + "bbox": [ + 55, + 72, + 296, + 191 + ], + "type": "inline_equation", + "content": "t = 0" + }, + { + "bbox": [ + 55, + 72, + 296, + 191 + ], + "type": "text", + "content": " and 1, respectively. 
We consider " + }, + { + "bbox": [ + 55, + 72, + 296, + 191 + ], + "type": "inline_equation", + "content": "\\mathbb{P}" + }, + { + "bbox": [ + 55, + 72, + 296, + 191 + ], + "type": "text", + "content": " as the reference measure, specifically chosen as the path measure in equation (1). We will further elaborate on the conditions that define the optimality of the SB (as shown in equation (3)) with specified boundary conditions. The optimality condition for SB is characterized by solving PDEs [9][8]. Let " + }, + { + "bbox": [ + 55, + 72, + 296, + 191 + ], + "type": "inline_equation", + "content": "\\Psi (t,x)" + }, + { + "bbox": [ + 55, + 72, + 296, + 191 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 72, + 296, + 191 + ], + "type": "inline_equation", + "content": "\\widehat{\\Psi} (t,x)" + }, + { + "bbox": [ + 55, + 72, + 296, + 191 + ], + "type": "text", + "content": " be the solutions to the following PDEs:" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 105, + 198, + 295, + 223 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 198, + 295, + 223 + ], + "spans": [ + { + "bbox": [ + 105, + 198, + 295, + 223 + ], + "type": "interline_equation", + "content": "\\frac {\\partial \\Psi (z , t)}{\\partial t} = - \\nabla \\Psi^ {T} f - \\frac {1}{2} \\beta \\Delta \\Psi \\tag {4}", + "image_path": "d8d541067eb3d48226353441f67fed94fc5ebdb8e7c86f472dc014bc3a4d6048.jpg" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 107, + 224, + 295, + 251 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 224, + 295, + 251 + ], + "spans": [ + { + "bbox": [ + 107, + 224, + 295, + 251 + ], + "type": "interline_equation", + "content": "\\frac {\\partial \\widehat {\\Psi} (z , t)}{\\partial t} = - \\nabla \\cdot (\\widehat {\\Psi} f) + \\frac {1}{2} \\beta \\Delta \\widehat {\\Psi} \\tag {5}", + "image_path": "5d1545e60b5eb2bb6d0fca1ffe20d1ce10879b7565e96e9b0e65b65af1c0bcb7.jpg" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 259, + 159, + 270 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 259, + 159, + 270 + ], + "spans": [ + { + "bbox": [ + 55, + 259, + 159, + 270 + ], + "type": "text", + "content": "subject to the conditions," + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 278, + 285, + 294 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 278, + 285, + 294 + ], + "spans": [ + { + "bbox": [ + 67, + 278, + 285, + 294 + ], + "type": "interline_equation", + "content": "\\Psi (z, 0) \\widehat {\\Psi} (z, 0) = p _ {A} (z), \\quad \\Psi (z, 1) \\widehat {\\Psi} (z, 1) = p _ {B} (z).", + "image_path": "a7f92a65e3bfb0774145bd2bbc2427462fb9424053a5996092e3bae658bb1d35.jpg" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 304, + 296, + 340 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 304, + 296, + 340 + ], + "spans": [ + { + "bbox": [ + 55, + 304, + 296, + 340 + ], + "type": "text", + "content": "Then, the solution to optimization (3) can be expressed by the path measure of the following forward equation (6), or equivalently backward equation (7), SDE:" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 73, + 349, + 295, + 364 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 73, + 349, + 295, + 364 + ], + "spans": [ + { + "bbox": [ + 73, + 349, + 295, + 364 + ], + "type": "interline_equation", + "content": "d X _ {t} = \\left[ f _ {t} + \\beta_ {t} 
\\nabla \\log \\Psi (X _ {t}, t) \\right] d t + \\sqrt {\\beta_ {t}} d W _ {t}, \\tag {6}", + "image_path": "17f102e1c86c7d5d610c5e2cee9ecbfc8a9d017953804414e4362a2231977e81.jpg" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 73, + 366, + 295, + 381 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 73, + 366, + 295, + 381 + ], + "spans": [ + { + "bbox": [ + 73, + 366, + 295, + 381 + ], + "type": "interline_equation", + "content": "d X _ {t} = \\left[ f _ {t} - \\beta_ {t} \\nabla \\log \\hat {\\Psi} \\left(X _ {t}, t\\right) \\right] d t + \\sqrt {\\beta_ {t}} d \\bar {W} _ {t}, \\tag {7}", + "image_path": "597d1e47be6c29315c0b60cd5d4e90693521d051486682fc29e7a3dfc4d7b61e.jpg" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 392, + 296, + 550 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 392, + 296, + 550 + ], + "spans": [ + { + "bbox": [ + 55, + 392, + 296, + 550 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 55, + 392, + 296, + 550 + ], + "type": "inline_equation", + "content": "\\nabla \\log \\Psi" + }, + { + "bbox": [ + 55, + 392, + 296, + 550 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 392, + 296, + 550 + ], + "type": "inline_equation", + "content": "\\nabla \\log \\widehat{\\Psi}" + }, + { + "bbox": [ + 55, + 392, + 296, + 550 + ], + "type": "text", + "content": " represent the non-linear optimal forward and backward drifts for the Schrödinger Bridge. This SB has a non-linear behavior and it generalizes the SGMs with variation in prior data. Note that, forward and backward equations (6, 7) of Schrödinger Bridge are same as SGMs forward-backward equations (1, 2) except the forward drift term (" + }, + { + "bbox": [ + 55, + 392, + 296, + 550 + ], + "type": "inline_equation", + "content": "\\nabla \\log \\Psi" + }, + { + "bbox": [ + 55, + 392, + 296, + 550 + ], + "type": "text", + "content": "). Thus, from the Nelson's Duality [34] we obtain " + }, + { + "bbox": [ + 55, + 392, + 296, + 550 + ], + "type": "inline_equation", + "content": "\\Psi(x,t)\\widehat{\\Psi}(x,t) = q^{\\mathrm{SB}}(x,t)" + }, + { + "bbox": [ + 55, + 392, + 296, + 550 + ], + "type": "text", + "content": ". As the backward drift of Schrödinger Bridge does not act as a score function anymore with two indivisible components. Thus, from equation (7) and Nelson's Duality [34] we obtain " + }, + { + "bbox": [ + 55, + 392, + 296, + 550 + ], + "type": "inline_equation", + "content": "dX_{t} = [f_{t} - \\beta_{t}(\\nabla \\log q^{\\mathrm{SB}}(X_{t},t) - \\nabla \\log \\Psi(X_{t},t))]dt + \\sqrt{\\beta_{t}} d\\overline{W}_{t}" + }, + { + "bbox": [ + 55, + 392, + 296, + 550 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 55, + 558, + 296, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 558, + 296, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 558, + 296, + 714 + ], + "type": "text", + "content": "Linear-SB. The constraints posed above cause the model to be intractable and to overcome this impediment existing literature suggests various lines of strategies such as Iterative Proportional Fitting [11, 25], likelihood-based training of SB [8] etc. These methods help the SGMs to construct non-linear diffusion bridges. But, we can make them tractable by posing some linear conditions. 
A more detailed examination of SB theory within the SGM framework reveals that the nonlinear drifts specified in equations (6, 7) correspond to the score functions described in equation (2). Assuming that " + }, + { + "bbox": [ + 55, + 558, + 296, + 714 + ], + "type": "inline_equation", + "content": "\Psi (\cdot ,t)" + }, + { + "bbox": [ + 55, + 558, + 296, + 714 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 558, + 296, + 714 + ], + "type": "inline_equation", + "content": "\widehat{\Psi} (\cdot ,t)" + }, + { + "bbox": [ + 55, + 558, + 296, + 714 + ], + "type": "text", + "content": " function as probability density functions, we reformulate equations (4, 5) to effectively address the solutions of the Fokker-Planck equation [36]." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 72, + 554, + 120 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 72, + 554, + 120 + ], + "spans": [ + { + "bbox": [ + 313, + 72, + 554, + 120 + ], + "type": "text", + "content": "When the Schrödinger Bridge PDEs in equations (4, 5) are satisfied, the backward and forward drifts, " + }, + { + "bbox": [ + 313, + 72, + 554, + 120 + ], + "type": "inline_equation", + "content": "\nabla \log \widehat{\Psi}(X_t, t)" + }, + { + "bbox": [ + 313, + 72, + 554, + 120 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 72, + 554, + 120 + ], + "type": "inline_equation", + "content": "\nabla \log \Psi(X_t, t)" + }, + { + "bbox": [ + 313, + 72, + 554, + 120 + ], + "type": "text", + "content": ", represent the score functions of the following linear SDEs, respectively [27, 36]:" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 330, + 129, + 555, + 144 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 330, + 129, + 555, + 144 + ], + "spans": [ + { + "bbox": [ + 330, + 129, + 555, + 144 + ], + "type": "interline_equation", + "content": "d X _ {t} = f _ {t} \left(X _ {t}\right) d t + \sqrt {\beta_ {t}} d W _ {t}, \quad X _ {0} \sim \Psi (\cdot , 0), \tag {8}", + "image_path": "110b38012e5ed6f6b79b5b37734333b72362b5783cdb84ee198552a612ffb332.jpg" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 331, + 147, + 555, + 162 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 331, + 147, + 555, + 162 + ], + "spans": [ + { + "bbox": [ + 331, + 147, + 555, + 162 + ], + "type": "interline_equation", + "content": "d X _ {t} = f _ {t} \left(X _ {t}\right) d t + \sqrt {\beta_ {t}} d \overline {{W}} _ {t}, \quad X _ {1} \sim \widehat {\Psi} (\cdot , 1). \tag {9}", + "image_path": "0b6499f15ecffffdd9a34e28d721ceb6416381f2daf59645128f054e8ea0a0bc.jpg" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 171, + 555, + 340 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 171, + 555, + 340 + ], + "spans": [ + { + "bbox": [ + 313, + 171, + 555, + 340 + ], + "type": "text", + "content": "The key characteristic of the linear SDEs presented in equations (8, 9) lies in their differing boundary conditions, resulting in distinct distributions compared to nonlinear SDEs. Sampling is facilitated by parameterizing " + }, + { + "bbox": [ + 313, + 171, + 555, + 340 + ], + "type": "inline_equation", + "content": "\nabla \log \widehat{\Psi}" + }, + { + "bbox": [ + 313, + 171, + 555, + 340 + ], + "type": "text", + "content": " using a score network and applying established SGM methods. 
However, computational complexity arises from the intractability of the boundary conditions " + }, + { + "bbox": [ + 313, + 171, + 555, + 340 + ], + "type": "inline_equation", + "content": "\Psi(\cdot, 0)" + }, + { + "bbox": [ + 313, + 171, + 555, + 340 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 171, + 555, + 340 + ], + "type": "inline_equation", + "content": "\widehat{\Psi}(\cdot, 1)" + }, + { + "bbox": [ + 313, + 171, + 555, + 340 + ], + "type": "text", + "content": ". To resolve this, we impose Dirac delta boundary conditions on these previously intractable terms, rendering them manageable [27, 36]. Assume " + }, + { + "bbox": [ + 313, + 171, + 555, + 340 + ], + "type": "inline_equation", + "content": "p_A(\cdot) = \delta_\kappa(\cdot)" + }, + { + "bbox": [ + 313, + 171, + 555, + 340 + ], + "type": "text", + "content": " is a Dirac delta distribution centered at " + }, + { + "bbox": [ + 313, + 171, + 555, + 340 + ], + "type": "inline_equation", + "content": "\kappa \in \mathbb{R}^d" + }, + { + "bbox": [ + 313, + 171, + 555, + 340 + ], + "type": "text", + "content": ". Then, the initial distributions of the linear SDEs (equations (8, 9)) are:" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 352, + 348, + 555, + 364 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 352, + 348, + 555, + 364 + ], + "spans": [ + { + "bbox": [ + 352, + 348, + 555, + 364 + ], + "type": "interline_equation", + "content": "p _ {B} = \Psi (\cdot , 1) \widehat {\Psi} (\cdot , 1), \quad \widehat {\Psi} (\cdot , 0) = \delta_ {\kappa} (\cdot), \tag {10}", + "image_path": "7316584c78861dec6fc80cded966ce9bf3e597580ae300c00823ef131a9f908e.jpg" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 313, + 373, + 554, + 469 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 373, + 554, + 469 + ], + "spans": [ + { + "bbox": [ + 313, + 373, + 554, + 469 + ], + "type": "text", + "content": "The above equation (10) suggests that the optimal backward drift of the process in equation (8) consistently targets the Dirac delta " + }, + { + "bbox": [ + 313, + 373, + 554, + 469 + ], + "type": "inline_equation", + "content": "\delta_{\kappa}(\cdot)" + }, + { + "bbox": [ + 313, + 373, + 554, + 469 + ], + "type": "text", + "content": ", achieving convergence to " + }, + { + "bbox": [ + 313, + 373, + 554, + 469 + ], + "type": "inline_equation", + "content": "\kappa" + }, + { + "bbox": [ + 313, + 373, + 554, + 469 + ], + "type": "text", + "content": " independent of " + }, + { + "bbox": [ + 313, + 373, + 554, + 469 + ], + "type": "inline_equation", + "content": "p_B" + }, + { + "bbox": [ + 313, + 373, + 554, + 469 + ], + "type": "text", + "content": ". This concept is integrated into the loss function, where the score is recalculated as " + }, + { + "bbox": [ + 313, + 373, + 554, + 469 + ], + "type": "inline_equation", + "content": "\nabla \log p(X_t,t|X_0 = \kappa)" + }, + { + "bbox": [ + 313, + 373, + 554, + 469 + ], + "type": "text", + "content": " for each instance " + }, + { + "bbox": [ + 313, + 373, + 554, + 469 + ], + "type": "inline_equation", + "content": "\kappa" + }, + { + "bbox": [ + 313, + 373, + 554, + 469 + ], + "type": "text", + "content": ". 
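The per-instance score recalculation described above can be sketched as follows; this assumes, for illustration only, a zero base drift f_t = 0, under which X_t | X_0 = kappa ~ N(kappa, sigma_t^2 I) and the conditional score has a closed form (the helper names are ours, not the paper's):

```python
import numpy as np

def sigma2(t: float, beta, n: int = 1000) -> float:
    """Accumulated variance sigma_t^2 = int_0^t beta_tau d tau (Riemann sum)."""
    dt = t / n
    return sum(beta(i * dt) * dt for i in range(n))

def conditional_score(x, t, kappa, beta):
    """Score nabla log p(X_t | X_0 = kappa), assuming zero base drift f_t = 0.

    Under that assumption X_t | X_0 = kappa ~ N(kappa, sigma_t^2 I), so the
    tractable per-instance training target is -(x - kappa) / sigma_t^2.
    """
    return -(np.asarray(x) - np.asarray(kappa)) / (sigma2(t, beta) + 1e-12)
```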
This approach enhances computational efficiency and establishes a robust mathematical basis for training " + }, + { + "bbox": [ + 313, + 373, + 554, + 469 + ], + "type": "inline_equation", + "content": "\\nabla \\log \\widehat{\\Psi} (\\cdot)" + }, + { + "bbox": [ + 313, + 373, + 554, + 469 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 313, + 476, + 554, + 490 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 476, + 554, + 490 + ], + "spans": [ + { + "bbox": [ + 313, + 476, + 554, + 490 + ], + "type": "text", + "content": "3.2. Linear Schrödinger Bridge in the Latent Space" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 313, + 494, + 555, + 529 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 494, + 555, + 529 + ], + "spans": [ + { + "bbox": [ + 313, + 494, + 555, + 529 + ], + "type": "text", + "content": "We now describe how the above discussion on linear Schrödinger bridges can be used for anomaly detection and localization." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 313, + 545, + 555, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 545, + 555, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 545, + 555, + 713 + ], + "type": "text", + "content": "Overall Pipeline. Figure 2 illustrates the entire pipeline of the proposed method. Initially, anomaly augmentation [56] is applied to a normal image, which is then processed by the Latent Anomaly Schrödinger Bridge (LASB) model. The LASB model learns the anomaly statistics and transforms the augmented anomaly image back into a normal image. Crucially, the aim of this step is not to learn the anomalies themselves, but rather to reconstruct the normal image regardless of the anomalies introduced during training. During training, various masks and noise patterns are applied to augment the images. After training, the LASB model is evaluated on anomaly images from the test set, with its weights frozen. 
During the inference phase, the model is presented with an anomaly image from the test" + } + ] + } + ], + "index": 18 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "text", + "content": "25531" + } + ] + } + ], + "index": 19 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 190, + 84 + ], + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 72, + 190, + 84 + ], + "spans": [ + { + "bbox": [ + 56, + 72, + 190, + 84 + ], + "type": "text", + "content": "Algorithm 1 Training Procedure" + } + ] + } + ], + "index": 0, + "type": "text" + }, + { + "bbox": [ + 64, + 87, + 266, + 155 + ], + "type": "list", + "angle": 0, + "index": 7, + "blocks": [ + { + "bbox": [ + 64, + 87, + 264, + 100 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 64, + 87, + 264, + 100 + ], + "spans": [ + { + "bbox": [ + 64, + 87, + 264, + 100 + ], + "type": "text", + "content": "1: Input: normal and anomaly latent " + }, + { + "bbox": [ + 64, + 87, + 264, + 100 + ], + "type": "inline_equation", + "content": "p_A^{(z)}(\\cdot), p_B^{(z)}(\\cdot | z_0)" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 64, + 101, + 100, + 110 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 64, + 101, + 100, + 110 + ], + "spans": [ + { + "bbox": [ + 64, + 101, + 100, + 110 + ], + "type": "text", + "content": "2: repeat" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 64, + 110, + 199, + 123 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 64, + 110, + 199, + 123 + ], + "spans": [ + { + "bbox": [ + 64, + 110, + 199, + 123 + ], + "type": "text", + "content": "3: " + }, + { + "bbox": [ + 64, + 110, + 199, + 123 + ], + "type": "inline_equation", + "content": "z_0\\sim p_A^{(z)}(z_0),z_1\\sim p_B^{(z)}(z_1|z_0)" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 64, + 123, + 242, + 133 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 64, + 123, + 242, + 133 + ], + "spans": [ + { + "bbox": [ + 64, + 123, + 242, + 133 + ], + "type": "text", + "content": "4: " + }, + { + "bbox": [ + 64, + 123, + 242, + 133 + ], + "type": "inline_equation", + "content": "z_{t}\\sim q(z_{t}|z_{0},z_{1})" + }, + { + "bbox": [ + 64, + 123, + 242, + 133 + ], + "type": "text", + "content": " according to equation (11)" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 64, + 134, + 266, + 144 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 64, + 134, + 266, + 144 + ], + "spans": [ + { + "bbox": [ + 64, + 134, + 266, + 144 + ], + "type": "text", + "content": "5: calculate gradient " + }, + { + "bbox": [ + 64, + 134, + 266, + 144 + ], + "type": "inline_equation", + "content": "\\nabla \\epsilon (z_t,t;\\theta)" + }, + { + "bbox": [ + 64, + 134, + 266, + 144 + ], + "type": "text", + "content": " using equation (12)" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 64, + 144, + 141, + 155 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 64, + 144, + 141, + 155 + ], + "spans": [ + { + "bbox": [ + 64, + 144, + 141, + 155 + ], + "type": "text", + "content": "6: until convergence" + } + ] + } + ], + "index": 6 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 55, + 178, + 296, + 274 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 178, + 296, + 274 + ], + 
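A minimal sketch of the inner steps of Algorithm 1 (lines 3-5), assuming the latents z_0 and z_1 have already been drawn from the two boundary distributions (e.g., encoded by the frozen VQ-VAE encoder); the Gaussian posterior of eq. (11) and the objective of eq. (12) are implemented directly, while the network, its gradient step, and the optimizer are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def bridge_posterior_sample(z0, z1, sig2_t, bar_sig2_t):
    """Line 4 of Algorithm 1: draw z_t ~ q(z_t | z0, z1) = N(mu_t, Sigma_t), eq. (11).

    sig2_t = int_0^t beta_tau d tau and bar_sig2_t = int_t^1 beta_tau d tau
    are the accumulated variances defined in the text.
    """
    denom = sig2_t + bar_sig2_t
    mu_t = (bar_sig2_t / denom) * z0 + (sig2_t / denom) * z1
    var_t = (bar_sig2_t * sig2_t) / denom
    return mu_t + np.sqrt(var_t) * rng.standard_normal(np.shape(z0))

def lasb_loss(eps_pred, z_t, z0, sig_t):
    """Line 5 of Algorithm 1: the eq. (12) objective ||eps - (z_t - z0)/sigma_t||."""
    return float(np.linalg.norm(eps_pred - (z_t - z0) / sig_t))
```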
"spans": [ + { + "bbox": [ + 55, + 178, + 296, + 274 + ], + "type": "text", + "content": "set, which it then reconstructs as a normal image. Both the original anomaly image and its reconstructed normal image are subsequently passed through a difference block. In this step, multi-scale features are extracted using a pre-trained ImageNet model, and the discrepancies between these features are computed to pinpoint regions of interest (ROI). These differences are then detected and localized, following a strategy akin to DiAD [21]." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 55, + 279, + 296, + 410 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 279, + 296, + 410 + ], + "spans": [ + { + "bbox": [ + 55, + 279, + 296, + 410 + ], + "type": "text", + "content": "Latent Anomaly Schrödinger Bridge. The foundational basis of Latent Anomaly Schrödinger Bridge (LASB) is inspired by linear Schrödinger Bridge's from equation (10) i.e., the optimal backward drift in equation (8) consistently aims for the Dirac delta " + }, + { + "bbox": [ + 55, + 279, + 296, + 410 + ], + "type": "inline_equation", + "content": "\\delta_{\\kappa}(\\cdot)" + }, + { + "bbox": [ + 55, + 279, + 296, + 410 + ], + "type": "text", + "content": ", ensuring convergence to " + }, + { + "bbox": [ + 55, + 279, + 296, + 410 + ], + "type": "inline_equation", + "content": "\\kappa" + }, + { + "bbox": [ + 55, + 279, + 296, + 410 + ], + "type": "text", + "content": " regardless of " + }, + { + "bbox": [ + 55, + 279, + 296, + 410 + ], + "type": "inline_equation", + "content": "p_B" + }, + { + "bbox": [ + 55, + 279, + 296, + 410 + ], + "type": "text", + "content": ". That means, the forward process transforms the noisy anomaly image statistics and learns to reconstruct a normal image irrespective of terminal distribution. Now to elucidate the working mechanism of the LASB diffusion model which will consider a Chang et al. [7] design strategy" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 55, + 411, + 296, + 676 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 411, + 296, + 676 + ], + "spans": [ + { + "bbox": [ + 55, + 411, + 296, + 676 + ], + "type": "text", + "content": "SGMs and diffusion models operating in pixel space require significant computational resources and memory. To address this challenge, we introduce the LASB model. Before training the Linear-SB, we first train " + }, + { + "bbox": [ + 55, + 411, + 296, + 676 + ], + "type": "inline_equation", + "content": "E_{VQ}" + }, + { + "bbox": [ + 55, + 411, + 296, + 676 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 411, + 296, + 676 + ], + "type": "inline_equation", + "content": "D_{VQ}" + }, + { + "bbox": [ + 55, + 411, + 296, + 676 + ], + "type": "text", + "content": ", the encoder and decoder of the VQ-VAE model [49], for perceptual compression, following the approach of stable diffusion [37]. During VQ-VAE training, we input either normal or anomaly-augmented images from the training set into " + }, + { + "bbox": [ + 55, + 411, + 296, + 676 + ], + "type": "inline_equation", + "content": "E_{VQ}" + }, + { + "bbox": [ + 55, + 411, + 296, + 676 + ], + "type": "text", + "content": " and aim to reconstruct the input using " + }, + { + "bbox": [ + 55, + 411, + 296, + 676 + ], + "type": "inline_equation", + "content": "D_{VQ}" + }, + { + "bbox": [ + 55, + 411, + 296, + 676 + ], + "type": "text", + "content": ". 
Once the model converges, its weights are frozen, and we proceed to train the Linear-SB in the latent space. Unlike Gaussian diffusion models, where noise is progressively added at each time step, leading to the deterioration of image structure at later stages, we adopt a semi-degradation strategy similar to Saharia et al. [40]. This ensures that, during the forward pass, the LASB model performs a smooth transformation while preserving structural information. Figure 1 illustrates the contrast between the Gaussian diffusion process and the Schrödinger Bridge diffusion process when applied to anomaly-free reconstruction. According to Liu et al. [27], for this smooth structural transformation the posterior of the SB (equations (6, 7))" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 315, + 72, + 493, + 84 + ], + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 72, + 493, + 84 + ], + "spans": [ + { + "bbox": [ + 315, + 72, + 493, + 84 + ], + "type": "text", + "content": "Algorithm 2 Inference/Sampling Procedure" + } + ] + } + ], + "index": 11, + "type": "text" + }, + { + "bbox": [ + 321, + 87, + 472, + 152 + ], + "type": "list", + "angle": 0, + "index": 18, + "blocks": [ + { + "bbox": [ + 321, + 87, + 472, + 100 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 321, + 87, + 472, + 100 + ], + "spans": [ + { + "bbox": [ + 321, + 87, + 472, + 100 + ], + "type": "text", + "content": "1: Input: " + }, + { + "bbox": [ + 321, + 87, + 472, + 100 + ], + "type": "inline_equation", + "content": "z_{N} \sim p_{B}^{(z)}(z_{N})" + }, + { + "bbox": [ + 321, + 87, + 472, + 100 + ], + "type": "text", + "content": ", trained " + }, + { + "bbox": [ + 321, + 87, + 472, + 100 + ], + "type": "inline_equation", + "content": "\epsilon_{\theta}(\cdot, \cdot)" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 323, + 101, + 402, + 110 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 323, + 101, + 402, + 110 + ], + "spans": [ + { + "bbox": [ + 323, + 101, + 402, + 110 + ], + "type": "text", + "content": "2: for " + }, + { + "bbox": [ + 323, + 101, + 402, + 110 + ], + "type": "inline_equation", + "content": "n = N" + }, + { + "bbox": [ + 323, + 101, + 402, + 110 + ], + "type": "text", + "content": " to 1 do" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 323, + 110, + 439, + 121 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 323, + 110, + 439, + 121 + ], + "spans": [ + { + "bbox": [ + 323, + 110, + 439, + 121 + ], + "type": "text", + "content": "3: Predict " + }, + { + "bbox": [ + 323, + 110, + 439, + 121 + ], + "type": "inline_equation", + "content": "z_0^\epsilon" + }, + { + "bbox": [ + 323, + 110, + 439, + 121 + ], + "type": "text", + "content": " using " + }, + { + "bbox": [ + 323, + 110, + 439, + 121 + ], + "type": "inline_equation", + "content": "\epsilon_{\theta}(z_n, t_n)" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 323, + 122, + 429, + 133 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 323, + 122, + 429, + 133 + ], + "spans": [ + { + "bbox": [ + 323, + 122, + 429, + 133 + ], + "type": "text", + "content": "4: " + }, + { + "bbox": [ + 323, + 122, + 429, + 133 + ], + "type": "inline_equation", + "content": "z_{n - 1}\sim p(z_{n - 1}|z_0^\epsilon ,z_n)" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 323, + 133, + 362, + 142 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 323, + 133, + 362, + 142 + ], + "spans": [ + { + "bbox": [ + 323, + 133, + 362, + 142 + ], + "type": "text",
+ "content": "5: end for" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 323, + 144, + 369, + 152 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 323, + 144, + 369, + 152 + ], + "spans": [ + { + "bbox": [ + 323, + 144, + 369, + 152 + ], + "type": "text", + "content": "6: return " + }, + { + "bbox": [ + 323, + 144, + 369, + 152 + ], + "type": "inline_equation", + "content": "z_0" + } + ] + } + ], + "index": 17 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 313, + 175, + 554, + 221 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 175, + 554, + 221 + ], + "spans": [ + { + "bbox": [ + 313, + 175, + 554, + 221 + ], + "type": "text", + "content": "given a boundary pair condition " + }, + { + "bbox": [ + 313, + 175, + 554, + 221 + ], + "type": "inline_equation", + "content": "(X_0, X_1)" + }, + { + "bbox": [ + 313, + 175, + 554, + 221 + ], + "type": "text", + "content": " reveals an analytic. Now as we inject the image statistics via perceptual compression using VQ-VAE into latent space, the analytical form can be formulated as," + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 359, + 233, + 554, + 247 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 359, + 233, + 554, + 247 + ], + "spans": [ + { + "bbox": [ + 359, + 233, + 554, + 247 + ], + "type": "interline_equation", + "content": "q \\left(z _ {t} \\mid z _ {0}, z _ {1}\\right) = \\mathcal {N} \\left(z _ {t}, \\mu_ {t} \\left(z _ {0}, z _ {1}\\right), \\Sigma_ {t}\\right) \\tag {11}", + "image_path": "39b6c7e9aaf885424cfdfddd5e98c70997a7a310a72a3c89126f91d3cd5d4ec1.jpg" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 313, + 251, + 554, + 273 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 251, + 554, + 273 + ], + "spans": [ + { + "bbox": [ + 313, + 251, + 554, + 273 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 251, + 554, + 273 + ], + "type": "inline_equation", + "content": "\\mu_t = \\frac{\\bar{\\sigma}_t^{(z)2}}{\\bar{\\sigma}_t^{(z)2} + \\sigma_t^{(z)2}} z_0 + \\frac{\\sigma_t^{(z)2}}{\\bar{\\sigma}_t^{(z)2} + \\sigma_t^{(z)2}} z_1" + }, + { + "bbox": [ + 313, + 251, + 554, + 273 + ], + "type": "text", + "content": ", and" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 315, + 272, + 392, + 294 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 272, + 392, + 294 + ], + "spans": [ + { + "bbox": [ + 315, + 272, + 392, + 294 + ], + "type": "interline_equation", + "content": "\\Sigma_ {t} = \\frac {\\bar {\\sigma} _ {t} ^ {(z) 2} \\sigma_ {t} ^ {(z) 2}}{\\bar {\\sigma} _ {t} ^ {(z) 2} + \\sigma_ {t} ^ {(z) 2}} \\mathbb {I}.", + "image_path": "c6685d8264cc6a16579d47c1e8a647f11fe62806209109f51ab64d532c7ef524.jpg" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 314, + 297, + 554, + 328 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 297, + 554, + 328 + ], + "spans": [ + { + "bbox": [ + 314, + 297, + 554, + 328 + ], + "type": "text", + "content": "The accumulated variances are, " + }, + { + "bbox": [ + 314, + 297, + 554, + 328 + ], + "type": "inline_equation", + "content": "\\sigma_t^{(z)2}" + }, + { + "bbox": [ + 314, + 297, + 554, + 328 + ], + "type": "inline_equation", + "content": "\\begin{array}{r}\\int_0^t\\beta_\\tau d\\tau \\quad \\mathrm{and}\\quad \\overline{\\sigma}_t^2 \\coloneqq \\int_t^1\\beta_\\tau d\\tau . 
\\end{array}" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 313, + 338, + 554, + 361 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 338, + 554, + 361 + ], + "spans": [ + { + "bbox": [ + 313, + 338, + 554, + 361 + ], + "type": "text", + "content": "We hence train the proposed LASB using our objective function:" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 368, + 366, + 554, + 392 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 368, + 366, + 554, + 392 + ], + "spans": [ + { + "bbox": [ + 368, + 366, + 554, + 392 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {L A S B} := \\left| \\left| \\epsilon_ {\\theta} \\left(z _ {t}, t\\right) - \\left(\\frac {z _ {t} - z _ {0}}{\\sigma_ {t} ^ {(z)}}\\right) \\right| \\right| \\tag {12}", + "image_path": "a845152d654202642134370078f6488e580fcee302824f3cb7d46aaf04fcebf8.jpg" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 313, + 398, + 554, + 434 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 398, + 554, + 434 + ], + "spans": [ + { + "bbox": [ + 313, + 398, + 554, + 434 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 398, + 554, + 434 + ], + "type": "inline_equation", + "content": "\\epsilon (\\cdot)" + }, + { + "bbox": [ + 313, + 398, + 554, + 434 + ], + "type": "text", + "content": " is the network and " + }, + { + "bbox": [ + 313, + 398, + 554, + 434 + ], + "type": "inline_equation", + "content": "\\theta" + }, + { + "bbox": [ + 313, + 398, + 554, + 434 + ], + "type": "text", + "content": " are the parameters associated with it. The training and the inference procedures are summarized in Algorithms 1 and 2." + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 313, + 444, + 458, + 457 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 444, + 458, + 457 + ], + "spans": [ + { + "bbox": [ + 313, + 444, + 458, + 457 + ], + "type": "text", + "content": "4. Experiments and Results" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 313, + 460, + 555, + 664 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 460, + 555, + 664 + ], + "spans": [ + { + "bbox": [ + 313, + 460, + 555, + 664 + ], + "type": "text", + "content": "Datasets. To validate the efficacy of the LASB method, we utilize two challenging datasets in industrial anomaly detection: the MVTec-AD [5] dataset and the VisA [63] dataset. The MVTec-AD dataset comprises 15 categories, including 10 object types and 5 texture types, with a total of 5,354 high-resolution images. Of these, 3,629 anomaly-free images are used for training, while the remaining 1,725 test images include both normal and anomalous samples. The VisA dataset consists of 12 distinct objects organized into 12 subsets, categorized into complex Structures, Multiple Instances, and Single Instance types. It includes 10,821 high-resolution images, with 9,621 normal images and 1,200 anomalous images exhibiting 78 distinct anomalies, offering a comprehensive benchmark for anomaly detection and localization methods. Both datasets provide pixel-level ground truth annotations to facilitate the evaluation of anomaly localization performance." + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 313, + 665, + 554, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 665, + 554, + 714 + ], + "spans": [ + { + "bbox": [ + 313, + 665, + 554, + 714 + ], + "type": "text", + "content": "Evaluation Metrics. 
For a rigorous quantitative evaluation of our experimental results in anomaly detection and localization, we employ AUROC (Area Under the Receiver Operating Characteristic Curve), AP (Average Precision)," + } + ] + } + ], + "index": 29 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 55, + 683, + 295, + 713 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 683, + 295, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 683, + 295, + 713 + ], + "type": "text", + "content": "1Chang et al. [7] work focuses on explaining diffusion models by signifying the design principles of the training strategy, sampling strategy, and the objective function." + } + ] + } + ], + "index": 30 + }, + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "type": "text", + "content": "25532" + } + ] + } + ], + "index": 31 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 61, + 64, + 211, + 146 + ], + "blocks": [ + { + "bbox": [ + 61, + 64, + 211, + 146 + ], + "lines": [ + { + "bbox": [ + 61, + 64, + 211, + 146 + ], + "spans": [ + { + "bbox": [ + 61, + 64, + 211, + 146 + ], + "type": "image", + "image_path": "bebe370c000dcd6188296baeead296d86b1951bde9a039801ca88b3425d3a5fb.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 54, + 149, + 555, + 183 + ], + "lines": [ + { + "bbox": [ + 54, + 149, + 555, + 183 + ], + "spans": [ + { + "bbox": [ + 54, + 149, + 555, + 183 + ], + "type": "text", + "content": "Figure 3. Comparison of training time (left), training memory (middle), and sampling time (right) across different image resolutions for SGM, SB, DDPM, and LASB models. The LASB model (green) demonstrates consistently lower training time, memory usage, and sampling time, particularly in contrast to the SB model (red), which exhibits the highest resource consumption." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 212, + 64, + 364, + 146 + ], + "blocks": [ + { + "bbox": [ + 212, + 64, + 364, + 146 + ], + "lines": [ + { + "bbox": [ + 212, + 64, + 364, + 146 + ], + "spans": [ + { + "bbox": [ + 212, + 64, + 364, + 146 + ], + "type": "image", + "image_path": "e28d21147abc88f15c8b3804ebe7c08beeb9e9b3d821d6fba38cf1c7600684d3.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 366, + 64, + 549, + 146 + ], + "blocks": [ + { + "bbox": [ + 366, + 64, + 549, + 146 + ], + "lines": [ + { + "bbox": [ + 366, + 64, + 549, + 146 + ], + "spans": [ + { + "bbox": [ + 366, + 64, + 549, + 146 + ], + "type": "image", + "image_path": "97f2c5035eabf69b86385ba0d9a96a5e665d6104fc16570eb1ad549b60262cdb.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 54, + 194, + 294, + 278 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 194, + 294, + 278 + ], + "spans": [ + { + "bbox": [ + 54, + 194, + 294, + 278 + ], + "type": "text", + "content": "and F1max (maximum F1-score) as our primary metrics [21]. 
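For reference, the three metrics can be computed with standard scikit-learn calls; this evaluation sketch (the function name is ours) applies unchanged to image-level (cls) scores and, with flattened per-pixel scores, to pixel-level (seg) localization:

```python
import numpy as np
from sklearn.metrics import (average_precision_score, precision_recall_curve,
                             roc_auc_score)

def detection_metrics(y_true, scores):
    """AUROC, AP, and F1max from binary labels and continuous anomaly scores."""
    auroc = roc_auc_score(y_true, scores)
    ap = average_precision_score(y_true, scores)
    prec, rec, _ = precision_recall_curve(y_true, scores)
    f1max = float(np.max(2 * prec * rec / np.clip(prec + rec, 1e-8, None)))
    return auroc, ap, f1max
```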
Here, " + }, + { + "bbox": [ + 54, + 194, + 294, + 278 + ], + "type": "inline_equation", + "content": "cls" + }, + { + "bbox": [ + 54, + 194, + 294, + 278 + ], + "type": "text", + "content": " denotes image-level anomaly detection, while seg pertains to pixel-level anomaly localization. A detailed class-specific comparison and results for all the aforementioned metrics are provided in Section S2 of supplementary material. In Table 1, we present the average performance of each model across the evaluated dataset." + } + ] + } + ], + "index": 4 + }, + { + "type": "table", + "bbox": [ + 55, + 287, + 298, + 502 + ], + "blocks": [ + { + "bbox": [ + 55, + 287, + 298, + 502 + ], + "lines": [ + { + "bbox": [ + 55, + 287, + 298, + 502 + ], + "spans": [ + { + "bbox": [ + 55, + 287, + 298, + 502 + ], + "type": "table", + "html": "
<tr><td>Method</td><td>Venue</td><td>Adopted Method</td><td>MvTec</td><td>ViSA</td></tr>
<tr><td colspan="5">Class-based</td></tr>
<tr><td>SimpleNet [29]</td><td>CVPR 2023</td><td>Embedding-based</td><td>99.6 / 98.1</td><td>96.2 / 98.5</td></tr>
<tr><td>PatchCore [38]</td><td>CVPR 2022</td><td>Embedding-based</td><td>99.1 / 98.1</td><td>91.0 / 98.1</td></tr>
<tr><td>DSR [58]</td><td>ECCV 2022</td><td>-</td><td>98.2 / 70.2</td><td>-</td></tr>
<tr><td>PaDiM [12]</td><td>ICPR 2021</td><td>Memory Bank</td><td>97.5 / 92.1</td><td>89.1 / 85.9</td></tr>
<tr><td>CS-Flow [39]</td><td>WACV 2022</td><td>Normalizing Flow</td><td>97.2 / 84.5</td><td>-</td></tr>
<tr><td>CFLOW-AD [19]</td><td>WACV 2022</td><td>Normalizing Flow</td><td>96.2 / 97.1</td><td>-</td></tr>
<tr><td>OCR-GAN [26]</td><td>TIP 2023</td><td>GAN</td><td>98.3 / -</td><td>97.9 / -</td></tr>
<tr><td>DRAEM [56]</td><td>ICCV 2021</td><td>Encoder</td><td>98.0 / 97.3</td><td>88.7 / 93.5</td></tr>
<tr><td>RD4AD [13]</td><td>CVPR 2022</td><td>Embedding-based</td><td>98.5 / 97.8</td><td>96.9 / 98.3</td></tr>
<tr><td>ADSPR [43]</td><td>ICCV 2023</td><td>Diffusion</td><td>97.67 / 97.36</td><td>-</td></tr>
<tr><td>DiffAD [60]</td><td>ICCV 2023</td><td>Diffusion</td><td>98.72 / 98.26</td><td>89.79 / 68.28</td></tr>
<tr><td>D3AD [48]</td><td>CVPR 2024</td><td>Diffusion</td><td>97.15 / 97.44</td><td>95.51 / 94.27</td></tr>
<tr><td>DDAD [33]</td><td>arXiv 2023</td><td>Diffusion</td><td>99.84 / 98.05</td><td>98.9 / 97.58</td></tr>
<tr><td>TransFusion [16]</td><td>ECCV 2024</td><td>Diffusion</td><td>99.24 / 94.33</td><td>98.53 / 86.26</td></tr>
<tr><td>LASB (Ours)</td><td>-</td><td>Diffusion</td><td>99.66 / 99.15</td><td>98.52 / 99.06</td></tr>
<tr><td colspan="5">Unified (multi-class)</td></tr>
<tr><td>DRAEM [56]</td><td>ICCV 2021</td><td>Encoder</td><td>88.1 / 87.2</td><td>79.1 / 91.3</td></tr>
<tr><td>HVQ-Trans [32]</td><td>NeurIPS 2023</td><td>Non-Diffusion</td><td>98.0 / 97.3</td><td>93.2 / 98.7</td></tr>
<tr><td>MambaAD [20]</td><td>NeurIPS 2024</td><td>Non-Diffusion</td><td>98.6 / 97.7</td><td>94.3 / 98.5</td></tr>
<tr><td>OmniAL [61]</td><td>CVPR 2023</td><td>Non-Diffusion</td><td>97.2 / 98.3</td><td>94.2 / 96.0</td></tr>
<tr><td>GLAD [54]</td><td>ECCV 2024</td><td>Diffusion</td><td>97.5 / 97.4</td><td>91.8 / 97.8</td></tr>
<tr><td>UniAD [55]</td><td>NeurIPS 2022</td><td>Transformer</td><td>96.52 / 96.8</td><td>85.5 / 95.925</td></tr>
<tr><td>DIAD [21]</td><td>AAAI 2024</td><td>Diffusion</td><td>97.15 / 96.76</td><td>86.75 / 96.04</td></tr>
<tr><td>LASB (Ours)</td><td>-</td><td>Diffusion</td><td>99.14 / 98.66</td><td>94.2 / 98.18</td></tr>
", + "image_path": "65dbbd6c6ef928e4eab682d0e8102959b00113a6ed59265a7af2d3a780196dd9.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "table_body" + } + ], + "index": 5 + }, + { + "bbox": [ + 54, + 510, + 295, + 598 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 510, + 295, + 598 + ], + "spans": [ + { + "bbox": [ + 54, + 510, + 295, + 598 + ], + "type": "text", + "content": "Table 1. Comparison of state-of-the-art (SOTA) anomaly detection methods categorized into class-based and unified (multi-class) approaches. The table highlights the adopted methodologies and their class-wise average performance in terms of AUROCcls / AUROCseg metrics on two datasets: MvTec and ViSA respectively. An exhaustive list of metrics along with class specific performance details is comprehensively illustrated in Section S2 of supplementary material." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 54, + 605, + 295, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 605, + 295, + 713 + ], + "spans": [ + { + "bbox": [ + 54, + 605, + 295, + 713 + ], + "type": "text", + "content": "Implementation Details. For all our experiments, we resized MVTec-AD images to " + }, + { + "bbox": [ + 54, + 605, + 295, + 713 + ], + "type": "inline_equation", + "content": "256 \\times 256" + }, + { + "bbox": [ + 54, + 605, + 295, + 713 + ], + "type": "text", + "content": " resolution. To fine-tune the VQ-VAE (Auto-encoder) utilizing the KL-based method, we initialized the network weights from the Stable Diffusion [37] model, where the image is encoded into a latent vector with a size of " + }, + { + "bbox": [ + 54, + 605, + 295, + 713 + ], + "type": "inline_equation", + "content": "64 \\times 64 \\times 3" + }, + { + "bbox": [ + 54, + 605, + 295, + 713 + ], + "type": "text", + "content": ". We chose this latent vector size due to its preservation of details and low FID generation capabilities. After training the auto-encoder, we froze the weights and trained the latent denoising network." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 194, + 553, + 254 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 194, + 553, + 254 + ], + "spans": [ + { + "bbox": [ + 313, + 194, + 553, + 254 + ], + "type": "text", + "content": "We used the U-Net model as in [23] as our latent denoising network. During inference, we pass only the anomaly test image as input to the DDPM sampler [23]. Further details on batch size and various other network hyperparameters are in Section S4 of the appendix." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 262, + 555, + 396 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 262, + 555, + 396 + ], + "spans": [ + { + "bbox": [ + 313, + 262, + 555, + 396 + ], + "type": "text", + "content": "Baselines. To benchmark LASB, we provide a detailed comparison with state-of-the-art (SOTA) reconstruction methods, as summarized in Table 1. These methods are categorized into two distinct groups: class-based and unified (multi-class) approaches. Class-based methods train models specific to each class, with the number of models increasing linearly with the number of classes, making scalability a challenge. In contrast, unified approaches handle multiple classes within a single model, offering a more scalable solution. By separating the class-based and unified approaches, we ensure a fair and comprehensive comparison." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 403, + 556, + 594 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 403, + 556, + 594 + ], + "spans": [ + { + "bbox": [ + 313, + 403, + 556, + 594 + ], + "type": "text", + "content": "Class-based Methods: Performance Analysis. Among the class-based methods, PaDiM [12], with its memory bank-based architecture, achieves " + }, + { + "bbox": [ + 313, + 403, + 556, + 594 + ], + "type": "inline_equation", + "content": "97.5\\%" + }, + { + "bbox": [ + 313, + 403, + 556, + 594 + ], + "type": "text", + "content": " on MVtec-AD and " + }, + { + "bbox": [ + 313, + 403, + 556, + 594 + ], + "type": "inline_equation", + "content": "89.1\\%" + }, + { + "bbox": [ + 313, + 403, + 556, + 594 + ], + "type": "text", + "content": " on ViSA. While effective for simpler anomalies, its scalability and ability to model inter-class variations remain limited. In contrast, DiffAD [60] demonstrates strong competitiveness, achieving " + }, + { + "bbox": [ + 313, + 403, + 556, + 594 + ], + "type": "inline_equation", + "content": "98.72\\%" + }, + { + "bbox": [ + 313, + 403, + 556, + 594 + ], + "type": "text", + "content": " on MVtec-AD. However, its performance on ViSA drops to " + }, + { + "bbox": [ + 313, + 403, + 556, + 594 + ], + "type": "inline_equation", + "content": "89.79\\%" + }, + { + "bbox": [ + 313, + 403, + 556, + 594 + ], + "type": "text", + "content": ", indicating its difficulty in adapting to diverse industrial anomalies. DDAD [33] shows highly competitive results, achieving " + }, + { + "bbox": [ + 313, + 403, + 556, + 594 + ], + "type": "inline_equation", + "content": "99.84\\%" + }, + { + "bbox": [ + 313, + 403, + 556, + 594 + ], + "type": "text", + "content": " on MVtec-AD and " + }, + { + "bbox": [ + 313, + 403, + 556, + 594 + ], + "type": "inline_equation", + "content": "98.9\\%" + }, + { + "bbox": [ + 313, + 403, + 556, + 594 + ], + "type": "text", + "content": " on ViSA. Its diffusion-based modeling effectively captures structural patterns, making it one of the closest competitors to LASB. However, LASB surpasses DDAD on ViSA by a margin of " + }, + { + "bbox": [ + 313, + 403, + 556, + 594 + ], + "type": "inline_equation", + "content": "0.38\\% \\uparrow" + }, + { + "bbox": [ + 313, + 403, + 556, + 594 + ], + "type": "text", + "content": ", demonstrating superior adaptability to multi-class industrial scenarios." + } + ] + } + ], + "index": 10 + }, + { + "type": "table", + "bbox": [ + 316, + 605, + 564, + 657 + ], + "blocks": [ + { + "bbox": [ + 316, + 605, + 564, + 657 + ], + "lines": [ + { + "bbox": [ + 316, + 605, + 564, + 657 + ], + "spans": [ + { + "bbox": [ + 316, + 605, + 564, + 657 + ], + "type": "table", + "html": "
<tr><td>Task</td><td>Metrics</td><td colspan="2">Pixel Space</td><td colspan="4">Latent Space</td></tr>
<tr><td></td><td></td><td>DDPM [23]</td><td>DDAD [33]</td><td>GLAD [54]</td><td>LDM [37]</td><td>DiAD [21]</td><td>LASB (Ours)</td></tr>
<tr><td rowspan="3">Detection</td><td>AUROCcls</td><td>71.9</td><td>99.8</td><td>97.5</td><td>76.6</td><td>97.2</td><td>99.2 (-0.6%)</td></tr>
<tr><td>APcls</td><td>81.6</td><td>99.5</td><td>99.1</td><td>87.6</td><td>99.0</td><td>99.3 (-0.2%)</td></tr>
<tr><td>F1maxcls</td><td>86.6</td><td>97.9</td><td>96.6</td><td>88.1</td><td>96.5</td><td>98.5 (+2.0%)</td></tr>
<tr><td rowspan="3">Localization</td><td>AUROCseg</td><td>75.6</td><td>98.0</td><td>97.4</td><td>85.1</td><td>96.8</td><td>98.6 (+0.60%)</td></tr>
<tr><td>APseg</td><td>13.3</td><td>59.0</td><td>60.8</td><td>27.6</td><td>52.6</td><td>78.2 (+17.4%)</td></tr>
<tr><td>F1maxseg</td><td>19.5</td><td>59.4</td><td>60.7</td><td>31.0</td><td>55.5</td><td>70.7 (+10.0%)</td></tr>
", + "image_path": "abf4288f45dcbc8a463f6251f63a7ef76fe45893f3e1a8eccef5af770d17222e.jpg" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "table_body" + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 666, + 555, + 711 + ], + "lines": [ + { + "bbox": [ + 313, + 666, + 555, + 711 + ], + "spans": [ + { + "bbox": [ + 313, + 666, + 555, + 711 + ], + "type": "text", + "content": "Table 2. Performance comparison of LASB with diffusion-based models on MVTec-AD using AUROC, AP, and F1max. The best results are in bold; the second-best is underline. Improvements are shown as a percentage over the second-best." + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "text", + "content": "25533" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 57, + 70, + 294, + 201 + ], + "blocks": [ + { + "bbox": [ + 57, + 70, + 294, + 201 + ], + "lines": [ + { + "bbox": [ + 57, + 70, + 294, + 201 + ], + "spans": [ + { + "bbox": [ + 57, + 70, + 294, + 201 + ], + "type": "image", + "image_path": "16e30b7fc9635c0159686acf423a06eb1cae9a146bb93bc63681736cf4af9f66.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 55, + 209, + 295, + 243 + ], + "lines": [ + { + "bbox": [ + 55, + 209, + 295, + 243 + ], + "spans": [ + { + "bbox": [ + 55, + 209, + 295, + 243 + ], + "type": "text", + "content": "Figure 4. Visual representation of test samples from the MVtec dataset, depicted through heatmaps for various models for different categories." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 256, + 296, + 602 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 256, + 296, + 602 + ], + "spans": [ + { + "bbox": [ + 56, + 256, + 296, + 602 + ], + "type": "text", + "content": "Unified Models: Performance Analysis. For unified approaches, DRAEM [56] emerges as one of the least competitive models, with AUROCcls scores of " + }, + { + "bbox": [ + 56, + 256, + 296, + 602 + ], + "type": "inline_equation", + "content": "88.1\\%" + }, + { + "bbox": [ + 56, + 256, + 296, + 602 + ], + "type": "text", + "content": " on MVTec-AD and " + }, + { + "bbox": [ + 56, + 256, + 296, + 602 + ], + "type": "inline_equation", + "content": "79.1\\%" + }, + { + "bbox": [ + 56, + 256, + 296, + 602 + ], + "type": "text", + "content": " on ViSA. Its reliance on autoencoder-based reconstruction, combined with limited augmentation strategies, hampers its ability to generalize to diverse and complex anomaly scenarios, particularly in the multi-class ViSA dataset. Similarly, UniAD [55] performs less competitively, achieving " + }, + { + "bbox": [ + 56, + 256, + 296, + 602 + ], + "type": "inline_equation", + "content": "96.52\\%" + }, + { + "bbox": [ + 56, + 256, + 296, + 602 + ], + "type": "text", + "content": " on MVTec-AD and " + }, + { + "bbox": [ + 56, + 256, + 296, + 602 + ], + "type": "inline_equation", + "content": "85.5\\%" + }, + { + "bbox": [ + 56, + 256, + 296, + 602 + ], + "type": "text", + "content": " on ViSA. 
Its transformer-based architecture, while robust for certain tasks, struggles to balance efficiency and accuracy in anomaly detection across diverse classes. Similarly, DIAD [21] achieves " + }, + { + "bbox": [ + 56, + 256, + 296, + 602 + ], + "type": "inline_equation", + "content": "97.15\\%" + }, + { + "bbox": [ + 56, + 256, + 296, + 602 + ], + "type": "text", + "content": " on MVTec-AD and " + }, + { + "bbox": [ + 56, + 256, + 296, + 602 + ], + "type": "inline_equation", + "content": "86.75\\%" + }, + { + "bbox": [ + 56, + 256, + 296, + 602 + ], + "type": "text", + "content": " on ViSA but falls short in handling the intricate class variations present in the datasets. Among the competitive unified methods, MambaAD [20] achieves " + }, + { + "bbox": [ + 56, + 256, + 296, + 602 + ], + "type": "inline_equation", + "content": "98.6\\%" + }, + { + "bbox": [ + 56, + 256, + 296, + 602 + ], + "type": "text", + "content": " on MVTec-AD and " + }, + { + "bbox": [ + 56, + 256, + 296, + 602 + ], + "type": "inline_equation", + "content": "94.3\\%" + }, + { + "bbox": [ + 56, + 256, + 296, + 602 + ], + "type": "text", + "content": " on ViSA, showcasing its strength in handling multi-class scenarios using state-space models. However, LASB outperforms MambaAD with a score of " + }, + { + "bbox": [ + 56, + 256, + 296, + 602 + ], + "type": "inline_equation", + "content": "99.14\\%" + }, + { + "bbox": [ + 56, + 256, + 296, + 602 + ], + "type": "text", + "content": " on MVTec-AD and " + }, + { + "bbox": [ + 56, + 256, + 296, + 602 + ], + "type": "inline_equation", + "content": "94.2\\%" + }, + { + "bbox": [ + 56, + 256, + 296, + 602 + ], + "type": "text", + "content": " on ViSA, highlighting its superior capability to balance computational efficiency with detection accuracy. Furthermore, HVQ-Trans [32], with scores of " + }, + { + "bbox": [ + 56, + 256, + 296, + 602 + ], + "type": "inline_equation", + "content": "98.0\\%" + }, + { + "bbox": [ + 56, + 256, + 296, + 602 + ], + "type": "text", + "content": " on MVTec-AD and " + }, + { + "bbox": [ + 56, + 256, + 296, + 602 + ], + "type": "inline_equation", + "content": "93.2\\%" + }, + { + "bbox": [ + 56, + 256, + 296, + 602 + ], + "type": "text", + "content": " on ViSA, demonstrates solid performance but remains slightly behind LASB in scalability and robustness. Detailed class-specific results are available in Section S2 of supplementary material, and Figure 4 provides qualitative heat-map visualizations of anomaly regions, further demonstrating LASB's efficacy in both class-based and unified settings." + } + ] + } + ], + "index": 2 + }, + { + "type": "table", + "bbox": [ + 60, + 614, + 293, + 668 + ], + "blocks": [ + { + "bbox": [ + 60, + 614, + 293, + 668 + ], + "lines": [ + { + "bbox": [ + 60, + 614, + 293, + 668 + ], + "spans": [ + { + "bbox": [ + 60, + 614, + 293, + 668 + ], + "type": "table", + "html": "
<tr><td>Task</td><td>Metrics</td><td colspan="2">Non-Diffusion Method</td><td colspan="3">Diffusion-based Method</td></tr>
<tr><td></td><td></td><td>DRAEM [56]</td><td>UniAD [55]</td><td>DiffAD [60]</td><td>DiAD [21]</td><td>LASB (Ours)</td></tr>
<tr><td rowspan="3">Detection</td><td>AUROCcls</td><td>79.1</td><td>85.5</td><td>89.5</td><td>86.8</td><td>94.2</td></tr>
<tr><td>APcls</td><td>81.9</td><td>85.5</td><td>-</td><td>88.3</td><td>92.2</td></tr>
<tr><td>F1maxcls</td><td>78.9</td><td>84.4</td><td>-</td><td>85.1</td><td>94.5</td></tr>
<tr><td rowspan="3">Localization</td><td>AUROCseg</td><td>91.3</td><td>95.9</td><td>71.2</td><td>96.0</td><td>98.2</td></tr>
<tr><td>APseg</td><td>23.5</td><td>21.0</td><td>-</td><td>26.1</td><td>46.4</td></tr>
<tr><td>F1maxseg</td><td>29.5</td><td>27.0</td><td>-</td><td>33.0</td><td>52.6</td></tr>
", + "image_path": "07c5be218534ee6eeac16f04697ff4f8758723f638d944b43eaafa4a8ad199ba.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 677, + 295, + 710 + ], + "lines": [ + { + "bbox": [ + 55, + 677, + 295, + 710 + ], + "spans": [ + { + "bbox": [ + 55, + 677, + 295, + 710 + ], + "type": "text", + "content": "Table 3. Results for multi-class anomaly detection and localization on VisA dataset. The best results are indicated in bold, and the second-best results are denoted with an underline." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 314, + 71, + 484, + 85 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 71, + 484, + 85 + ], + "spans": [ + { + "bbox": [ + 314, + 71, + 484, + 85 + ], + "type": "text", + "content": "5. Ablation Studies and Analysis" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 313, + 91, + 555, + 200 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 91, + 555, + 200 + ], + "spans": [ + { + "bbox": [ + 313, + 91, + 555, + 200 + ], + "type": "text", + "content": "In this section, we first differentiate the proposed LASB method's performance in latent spaces and explain the advantages of utilizing Linear-SB in the latent space. We then assess the LASB model's effectiveness when compared with Stable Diffusion [37] and other models for anomaly localization and detection tasks. Finally, we demonstrate the robustness of our proposed method, demonstrating its ability to deliver precise and consistent outcomes during test-time evaluations." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 313, + 210, + 555, + 508 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 210, + 555, + 508 + ], + "spans": [ + { + "bbox": [ + 313, + 210, + 555, + 508 + ], + "type": "text", + "content": "LASB vs Standard Diffusion Models. In the realm of anomaly detection, standard diffusion models such as DDPM [23] and LDM [37] achieve competitive performance in anomaly detection tasks but exhibit significant limitations in localization, as evidenced in Table 2. To mitigate these limitations, DiAD [21] integrates a semantic guidance network to improve LDM's localization by incorporating additional contextual information. Despite these improvements, DiAD [21] still falls short of achieving state-of-the-art (SOTA) results, indicating persistent challenges in fully leveraging diffusion models for comprehensive anomaly detection tasks. In contrast, our LASB employs a novel approach that leverages Linear-SB within the latent space, avoiding the limitations associated with reconstructing from pure Gaussian noise, a common issue in standard diffusion processes when applied to anomaly-free reconstruction. Moreover, LASB semi-degrades latent space to retain structural integrity, facilitating more effective anomaly detection and localization. This method not only enhances the robustness of the detection process but also significantly improves localization accuracy. Consequently, the LASB model surpasses standard diffusion models across multiple metrics, demonstrating superior performance in multi-class anomaly detection, as evidenced in Table 2." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 510, + 556, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 510, + 556, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 510, + 556, + 715 + ], + "type": "text", + "content": "We compare the performance of our proposed Latent Anomaly Schrödinger Bridge (LASB) method with the DDPM [23], DDAD [33], GLAD [54], LDM [37], and DiAD [21] models to demonstrate the effectiveness and efficiency of our approach. First, we evaluate the models on the MVTec-AD dataset for both anomaly detection and localization tasks. As shown in Table 2, LASB significantly outperforms all the other models in localization tasks for all the metrics. Specifically, LASB achieves a " + }, + { + "bbox": [ + 313, + 510, + 556, + 715 + ], + "type": "inline_equation", + "content": "0.6\\% \\uparrow" + }, + { + "bbox": [ + 313, + 510, + 556, + 715 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 313, + 510, + 556, + 715 + ], + "type": "inline_equation", + "content": "17.4\\% \\uparrow" + }, + { + "bbox": [ + 313, + 510, + 556, + 715 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 313, + 510, + 556, + 715 + ], + "type": "inline_equation", + "content": "10.0\\% \\uparrow" + }, + { + "bbox": [ + 313, + 510, + 556, + 715 + ], + "type": "text", + "content": " improvement in AUROC" + }, + { + "bbox": [ + 313, + 510, + 556, + 715 + ], + "type": "inline_equation", + "content": "_{seg}" + }, + { + "bbox": [ + 313, + 510, + 556, + 715 + ], + "type": "text", + "content": ", AP" + }, + { + "bbox": [ + 313, + 510, + 556, + 715 + ], + "type": "inline_equation", + "content": "_{seg}" + }, + { + "bbox": [ + 313, + 510, + 556, + 715 + ], + "type": "text", + "content": ", F1max" + }, + { + "bbox": [ + 313, + 510, + 556, + 715 + ], + "type": "inline_equation", + "content": "_{seg}" + }, + { + "bbox": [ + 313, + 510, + 556, + 715 + ], + "type": "text", + "content": " respectively. Also, it shows " + }, + { + "bbox": [ + 313, + 510, + 556, + 715 + ], + "type": "inline_equation", + "content": "2.0\\% \\uparrow" + }, + { + "bbox": [ + 313, + 510, + 556, + 715 + ], + "type": "text", + "content": " enhancement in F1max" + }, + { + "bbox": [ + 313, + 510, + 556, + 715 + ], + "type": "inline_equation", + "content": "_{cls}" + }, + { + "bbox": [ + 313, + 510, + 556, + 715 + ], + "type": "text", + "content": " for detection, compared to GLAD [54]. These improvements highlight the robustness and effectiveness of the latent space approach specifically for localization. Furthermore, as shown in Figure 3, pixel-space models like DDPM [23], SGM [46], and SB models trained via Iterative Proportional Fitting [11, 25] or likelihood-based methods [8] re" + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "type": "text", + "content": "25534" + } + ] + } + ], + "index": 9 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 59, + 70, + 550, + 125 + ], + "blocks": [ + { + "bbox": [ + 59, + 70, + 550, + 125 + ], + "lines": [ + { + "bbox": [ + 59, + 70, + 550, + 125 + ], + "spans": [ + { + "bbox": [ + 59, + 70, + 550, + 125 + ], + "type": "table", + "html": "
<tr><td></td><td>NFE-1</td><td>NFE-2</td><td>NFE-5</td><td>NFE-10</td><td>NFE-100</td><td>NFE-500</td><td>NFE-1000</td></tr>
<tr><td>MVTec-AD</td><td>97.42 / 96.82</td><td>98.16 / 97.84</td><td>98.94 / 98.16</td><td>99.14 / 98.66</td><td>99.14 / 98.66</td><td>99.14 / 98.66</td><td>99.14 / 98.66</td></tr>
<tr><td>VisA</td><td>92.74 / 96.71</td><td>93.21 / 97.15</td><td>93.96 / 97.90</td><td>94.2 / 98.18</td><td>94.2 / 98.18</td><td>94.2 / 98.18</td><td>94.2 / 98.18</td></tr>
<tr><td>Inference Time (secs)</td><td>0.12</td><td>0.25</td><td>0.52</td><td>0.74</td><td>1.5</td><td>7.5</td><td>15</td></tr>
", + "image_path": "77246e5458f59ae670233d60f8c09452120b8561e5dbcb12113f31cedd9729f7.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "type": "table", + "bbox": [ + 63, + 189, + 547, + 217 + ], + "blocks": [ + { + "bbox": [ + 55, + 133, + 553, + 179 + ], + "lines": [ + { + "bbox": [ + 55, + 133, + 553, + 179 + ], + "spans": [ + { + "bbox": [ + 55, + 133, + 553, + 179 + ], + "type": "text", + "content": "Table 4. Performance evaluation across varying numbers of function evaluations (NFEs) on the MVTec-AD and VisA datasets. The tabulated metrics, AUROCcls / AUROCseg, provide a comprehensive overview of image-level and pixel-level anomaly detection performance at each NFE. Additionally, the inference time, measured in wall-clock seconds on an NVIDIA V100 GPU, underscores the computational trade-offs associated with increasing NFEs, reflecting the balance between model efficiency and performance." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 63, + 189, + 547, + 217 + ], + "lines": [ + { + "bbox": [ + 63, + 189, + 547, + 217 + ], + "spans": [ + { + "bbox": [ + 63, + 189, + 547, + 217 + ], + "type": "table", + "html": "
<tr><td>Method</td><td>DRAEM [56]</td><td>PatchCore [38]</td><td>DiffAD [60]</td><td>TransFusion [16]</td><td>DIAD [21]</td><td>GLAD [54]</td><td>ADSPR [43]</td><td>DDAD [33]</td><td>LASB (ours)</td></tr>
<tr><td>Inference Time (secs)</td><td>0.15</td><td>0.44</td><td>2.6</td><td>1.2</td><td>2.8</td><td>1.8</td><td>1.6</td><td>1.5</td><td>0.74</td></tr>
", + "image_path": "878d2f07c2ce0bfe3510d9e46c9903dbe71a031b9d92c5c95bb1b55e2e476a90.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 247, + 294, + 308 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 247, + 294, + 308 + ], + "spans": [ + { + "bbox": [ + 55, + 247, + 294, + 308 + ], + "type": "text", + "content": "require more training time, memory, and exhibit slower sampling. In contrast, latent-space LASB excels in all areas, achieving up to " + }, + { + "bbox": [ + 55, + 247, + 294, + 308 + ], + "type": "inline_equation", + "content": "5 \\times" + }, + { + "bbox": [ + 55, + 247, + 294, + 308 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 247, + 294, + 308 + ], + "type": "inline_equation", + "content": "2 \\times" + }, + { + "bbox": [ + 55, + 247, + 294, + 308 + ], + "type": "text", + "content": " faster training, " + }, + { + "bbox": [ + 55, + 247, + 294, + 308 + ], + "type": "inline_equation", + "content": "3 \\times" + }, + { + "bbox": [ + 55, + 247, + 294, + 308 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 247, + 294, + 308 + ], + "type": "inline_equation", + "content": "2 \\times" + }, + { + "bbox": [ + 55, + 247, + 294, + 308 + ], + "type": "text", + "content": " memory reduction, and " + }, + { + "bbox": [ + 55, + 247, + 294, + 308 + ], + "type": "inline_equation", + "content": "4 \\times" + }, + { + "bbox": [ + 55, + 247, + 294, + 308 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 247, + 294, + 308 + ], + "type": "inline_equation", + "content": "2 \\times" + }, + { + "bbox": [ + 55, + 247, + 294, + 308 + ], + "type": "text", + "content": " faster sampling compared to SB-based models [8, 11, 25] and DDPM [23], respectively." + } + ] + } + ], + "index": 4 + }, + { + "type": "table", + "bbox": [ + 65, + 316, + 288, + 392 + ], + "blocks": [ + { + "bbox": [ + 67, + 224, + 541, + 236 + ], + "lines": [ + { + "bbox": [ + 67, + 224, + 541, + 236 + ], + "spans": [ + { + "bbox": [ + 67, + 224, + 541, + 236 + ], + "type": "text", + "content": "Table 5. Inference times for various methods where the inference time, measured in wall-clock seconds on an NVIDIA V100 GPU." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 65, + 316, + 288, + 392 + ], + "lines": [ + { + "bbox": [ + 65, + 316, + 288, + 392 + ], + "spans": [ + { + "bbox": [ + 65, + 316, + 288, + 392 + ], + "type": "table", + "html": "
<tr><td>Task</td><td>Metrics</td><td colspan="4">Sampling runs</td></tr>
<tr><td rowspan="3">Detection</td><td>AUROCcls</td><td>99.05</td><td>99.01</td><td>99.01</td><td>98.98</td></tr>
<tr><td>APcls</td><td>99.15</td><td>99.14</td><td>99.14</td><td>99.11</td></tr>
<tr><td>F1maxcls</td><td>98.59</td><td>98.33</td><td>99.83</td><td>98.32</td></tr>
<tr><td rowspan="3">Localization</td><td>AUROCseg</td><td>98.58</td><td>98.58</td><td>98.59</td><td>98.57</td></tr>
<tr><td>APseg</td><td>78.17</td><td>78.14</td><td>78.14</td><td>78.12</td></tr>
<tr><td>F1maxseg</td><td>70.62</td><td>70.59</td><td>70.61</td><td>70.58</td></tr>
", + "image_path": "60abe7f53a7023b79d9385b26baef4c74b5ed16a2b886f5d2b25dabea2a98b06.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "table_body" + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 399, + 295, + 453 + ], + "lines": [ + { + "bbox": [ + 55, + 399, + 295, + 453 + ], + "spans": [ + { + "bbox": [ + 55, + 399, + 295, + 453 + ], + "type": "text", + "content": "Table 6. Stability analysis of LASB on the MVTec-AD dataset for detection and localization tasks by performing the sampling multiple times. Results reported are mean values across classes and multiple samplings (see Section S1 of appendix for more detailed results)." + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 55, + 462, + 295, + 653 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 462, + 295, + 653 + ], + "spans": [ + { + "bbox": [ + 55, + 462, + 295, + 653 + ], + "type": "text", + "content": "Generalization and Sampling Stability. In the field of anomaly detection using diffusion models, achieving consistent outcomes during the sampling or inference stage is notably challenging due to the inherent stochastic nature of generative processes. This inconsistency can be particularly problematic in real-world applications where reliable and stable detection is critical. Therefore, assessing the stability of model outputs across multiple inferences is essential. Our approach involves training the model once and then conducting multiple sampling or inference tests to evaluate if the outcomes remain consistent over time. A critical consideration for real-world applicability is achieving fast inference while minimizing computational overhead. The proposed LASB model strikes a balance between performance and efficiency, ensuring its suitability for deployment in practical scenarios." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 654, + 295, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 654, + 295, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 654, + 295, + 714 + ], + "type": "text", + "content": "Our findings, as detailed in Table 6, reveal that the LASB model exhibits remarkable stability across all evaluation metrics, showing negligible variance across numerous inferences. This consistency is attributed to the model's capability to maintain structural integrity and effectively tran" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 247, + 554, + 331 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 247, + 554, + 331 + ], + "spans": [ + { + "bbox": [ + 313, + 247, + 554, + 331 + ], + "type": "text", + "content": "sition from anomalous to normal latent spaces. The LASB model is designed to reconstruct a normal image regardless of the underlying anomalies, compelling it to disregard anomalous features during reconstruction. This process not only ensures that anomalies of various patterns, sizes, and orientations are effectively handled but also enhances the overall reliability of the model." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 340, + 555, + 555 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 340, + 555, + 555 + ], + "spans": [ + { + "bbox": [ + 313, + 340, + 555, + 555 + ], + "type": "text", + "content": "Inference Complexity. 
As shown in Table 4, LASB demonstrates exceptional efficiency with its rapid sampling capabilities, achieving near-optimal performance with as few as 10 NFEs (Number of Function Evaluations) and an inference time of only 0.74 seconds on an NVIDIA V100 GPU. This highlights LASB's practicality for real-world applications, where fast and reliable anomaly detection and localization are crucial. To further illustrate its efficiency, we compare LASB's inference time with existing state-of-the-art (SOTA) models in Table 5. LASB is approximately " + }, + { + "bbox": [ + 313, + 340, + 555, + 555 + ], + "type": "inline_equation", + "content": "1.6 \\times" + }, + { + "bbox": [ + 313, + 340, + 555, + 555 + ], + "type": "text", + "content": " faster than TransFusion [16], a leading class-based method, while also outperforming it in both anomaly localization and detection tasks. Compared to DiAD [21], LASB achieves " + }, + { + "bbox": [ + 313, + 340, + 555, + 555 + ], + "type": "inline_equation", + "content": "2 \\times" + }, + { + "bbox": [ + 313, + 340, + 555, + 555 + ], + "type": "text", + "content": " faster inference times, delivering significantly stronger performance in both detection and localization metrics. This demonstrates LASB's clear advantage in balancing computational efficiency and robust anomaly detection." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 566, + 484, + 578 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 566, + 484, + 578 + ], + "spans": [ + { + "bbox": [ + 313, + 566, + 484, + 578 + ], + "type": "text", + "content": "6. Conclusions and Future Work" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 581, + 554, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 581, + 554, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 581, + 554, + 713 + ], + "type": "text", + "content": "The proposed Latent Anomaly Schrödinger Bridge (LASB) model demonstrates robust performance in anomaly detection and localization tasks. Its unified nature delimits the need for extra guidance or additional network components. LASB also excels in producing stable inferential results, requires less computational memory, and benefits from faster training and sampling rates. Given these advantages, LASB is well-suited for deployment in real-world industrial applications. Looking ahead, future research could focus on enhancing the robust translation mechanisms specific to anomaly detection and localization tasks." + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 749, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 749, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 749, + 317, + 757 + ], + "type": "text", + "content": "25535" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 297, + 109 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 297, + 109 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 297, + 109 + ], + "type": "text", + "content": "Acknowledgements. We thank the anonymous reviewers for their valuable feedback that improved the presentation of this paper." 
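For reference, the AUROC / AP / F1max values reported throughout Tables 1-6 can be computed from raw anomaly scores and binary ground-truth labels along the following lines (a scikit-learn sketch; the authors' exact evaluation script may differ). For the seg variants, flatten the pixel-level score maps and ground-truth masks before calling it.

```python
# Standard threshold-free metrics from scores and {0,1} labels.
import numpy as np
from sklearn.metrics import (average_precision_score,
                             precision_recall_curve, roc_auc_score)

def auroc_ap_f1max(labels: np.ndarray, scores: np.ndarray):
    auroc = roc_auc_score(labels, scores)             # ranking quality (AUROC)
    ap = average_precision_score(labels, scores)      # area under the PR curve (AP)
    prec, rec, _ = precision_recall_curve(labels, scores)
    f1 = 2 * prec * rec / np.clip(prec + rec, 1e-12, None)
    return auroc, ap, float(f1.max())                 # F1max: best F1 over all thresholds
```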
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 119, + 115, + 132 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 119, + 115, + 132 + ], + "spans": [ + { + "bbox": [ + 56, + 119, + 115, + 132 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 57, + 139, + 296, + 713 + ], + "type": "list", + "angle": 0, + "index": 15, + "blocks": [ + { + "bbox": [ + 61, + 139, + 295, + 184 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 139, + 295, + 184 + ], + "spans": [ + { + "bbox": [ + 61, + 139, + 295, + 184 + ], + "type": "text", + "content": "[1] Samet Akcay, Amir Atapour-Abarghouei, and Toby P Breckon. Ganomaly: Semi-supervised anomaly detection via adversarial training. In Computer Vision-ACCV 2018. Springer International Publishing, 2019. 2" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 62, + 185, + 296, + 217 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 185, + 296, + 217 + ], + "spans": [ + { + "bbox": [ + 62, + 185, + 296, + 217 + ], + "type": "text", + "content": "[2] Brian DO Anderson. Reverse-time diffusion equation models. Stochastic Processes and their Applications, 12(3):313-326, 1982. 3" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 62, + 218, + 295, + 252 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 218, + 295, + 252 + ], + "spans": [ + { + "bbox": [ + 62, + 218, + 295, + 252 + ], + "type": "text", + "content": "[3] Alexander Bauer, Shinichi Nakajima, and Klaus-Robert Müller. Self-supervised autoencoders for visual anomaly detection. Mathematics, 12(24), 2024. 2" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 62, + 253, + 295, + 286 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 253, + 295, + 286 + ], + "spans": [ + { + "bbox": [ + 62, + 253, + 295, + 286 + ], + "type": "text", + "content": "[4] Finn Behrendt et al. Patched diffusion models for unsupervised anomaly detection in brain migraine. In Medical Imaging with Deep Learning, page PMLR, 2024. 1, 2" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 62, + 287, + 295, + 342 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 287, + 295, + 342 + ], + "spans": [ + { + "bbox": [ + 62, + 287, + 295, + 342 + ], + "type": "text", + "content": "[5] Paul Bergmann, Michael Fauser, David Sattlegger, and Carsten Steger. Mvtec ad-a comprehensive real-world dataset for unsupervised anomaly detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9592-9600, 2019. 5" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 62, + 343, + 295, + 376 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 343, + 295, + 376 + ], + "spans": [ + { + "bbox": [ + 62, + 343, + 295, + 376 + ], + "type": "text", + "content": "[6] Paul Bergmann et al. Improving unsupervised defect segmentation by applying structural similarity to autoencoders. arXiv preprint arXiv:1807.02011, 2018. 2" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 62, + 377, + 294, + 410 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 377, + 294, + 410 + ], + "spans": [ + { + "bbox": [ + 62, + 377, + 294, + 410 + ], + "type": "text", + "content": "[7] Ziyi Chang, George A Koulieris, and Hubert PH Shum. On the design fundamentals of diffusion models: A survey. arXiv preprint arXiv:2306.04542, 2023. 
5" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 62, + 411, + 295, + 455 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 411, + 295, + 455 + ], + "spans": [ + { + "bbox": [ + 62, + 411, + 295, + 455 + ], + "type": "text", + "content": "[8] Tianrong Chen, Guan-Horng Liu, and Evangelos Theodorou. Likelihood training of schrödinger bridge using forward-backward SDEs theory. In International Conference on Learning Representations, 2022. 2, 4, 7, 8" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 62, + 456, + 295, + 499 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 456, + 295, + 499 + ], + "spans": [ + { + "bbox": [ + 62, + 456, + 295, + 499 + ], + "type": "text", + "content": "[9] Yongxin Chen, Tryphon T Georgiou, and Michele Pavon. Stochastic control liaisons: Richard sinkhorn meets gaspard monge on a schrodinger bridge. Siam Review, 63(2):249-313, 2021. 4" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 57, + 501, + 295, + 555 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 501, + 295, + 555 + ], + "spans": [ + { + "bbox": [ + 57, + 501, + 295, + 555 + ], + "type": "text", + "content": "[10] Anne-Sophie Collin and Christophe De Vleeschouwer. Improved anomaly detection by training an autoencoder with skip connections on images corrupted with stain-shaped noise. In 2020 25th International Conference on Pattern Recognition (ICPR), pages 7915-7922. IEEE, 2021. 2" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 57, + 557, + 295, + 612 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 557, + 295, + 612 + ], + "spans": [ + { + "bbox": [ + 57, + 557, + 295, + 612 + ], + "type": "text", + "content": "[11] Valentin De Bortoli, James Thornton, Jeremy Heng, and Arnaud Doucet. Diffusion schrödinger bridge with applications to score-based generative modeling. Advances in Neural Information Processing Systems, 34:17695-17709, 2021. 2, 4, 7, 8" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 57, + 613, + 295, + 668 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 613, + 295, + 668 + ], + "spans": [ + { + "bbox": [ + 57, + 613, + 295, + 668 + ], + "type": "text", + "content": "[12] Thomas Defard, Aleksandr Setkov, Angelique Loesch, and Romaric Audigier. Padim: a patch distribution modeling framework for anomaly detection and localization. In International Conference on Pattern Recognition, pages 475-489. Springer, 2021. 6" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 57, + 670, + 296, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 670, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 57, + 670, + 296, + 713 + ], + "type": "text", + "content": "[13] Hanqiu Deng and Xingyu Li. Anomaly detection via reverse distillation from one-class embedding. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9737-9746, 2022. 6" + } + ] + } + ], + "index": 14 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 73, + 555, + 713 + ], + "type": "list", + "angle": 0, + "index": 29, + "blocks": [ + { + "bbox": [ + 316, + 73, + 553, + 117 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 73, + 553, + 117 + ], + "spans": [ + { + "bbox": [ + 316, + 73, + 553, + 117 + ], + "type": "text", + "content": "[14] Wei Deng, Weijian Luo, Yixin Tan, Marin Bilos, Yu Chen, Yuriy Nevmyvaka, and Ricky T. Q. Chen. 
Variational schrödinger diffusion models. In International Conference on Machine Learning (ICML), 2024. 2" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 317, + 118, + 553, + 162 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 118, + 553, + 162 + ], + "spans": [ + { + "bbox": [ + 317, + 118, + 553, + 162 + ], + "type": "text", + "content": "[15] Chouro Ding, Guansong Pang, and Chunhua Shen. Catching both gray and black swans: Open-set supervised anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022. 1" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 317, + 163, + 553, + 208 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 163, + 553, + 208 + ], + "spans": [ + { + "bbox": [ + 317, + 163, + 553, + 208 + ], + "type": "text", + "content": "[16] Matic Fučka, Vitjan Zavrtanik, and Danijel Skočaj. Transfusion-a transparency-based diffusion model for anomaly detection. In European conference on computer vision, pages 91-108. Springer, 2024. 6, 8" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 317, + 209, + 553, + 283 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 209, + 553, + 283 + ], + "spans": [ + { + "bbox": [ + 317, + 209, + 553, + 283 + ], + "type": "text", + "content": "[17] Dong Gong, Lingqiao Liu, Vuong Le, Budhaditya Saha, Moussa Reda Mansour, Svetha Venkatesh, and Anton van den Hengel. Memorizing normality to detect anomaly: Memory-augmented deep autoencoder for unsupervised anomaly detection. In Proceedings of the IEEE/CVF international conference on computer vision, pages 1705-1714, 2019. 2" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 316, + 286, + 553, + 330 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 286, + 553, + 330 + ], + "spans": [ + { + "bbox": [ + 316, + 286, + 553, + 330 + ], + "type": "text", + "content": "[18] Alvaro Gonzalez-Jimenez et al. Sano: Score-based diffusion model for anomaly localization in dermatology. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023. 1, 2" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 316, + 332, + 555, + 386 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 332, + 555, + 386 + ], + "spans": [ + { + "bbox": [ + 316, + 332, + 555, + 386 + ], + "type": "text", + "content": "[19] Denis Gudovskiy, Shun Ishizaka, and Kazuki Kozuka. Cflow-ad: Real-time unsupervised anomaly detection with localization via conditional normalizing flows. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, pages 98-107, 2022. 6" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 317, + 388, + 553, + 453 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 388, + 553, + 453 + ], + "spans": [ + { + "bbox": [ + 317, + 388, + 553, + 453 + ], + "type": "text", + "content": "[20] Haoyang He, Yuhu Bai, Jiangning Zhang, Qingdong He, Hongxu Chen, Zhenye Gan, Chengjie Wang, Xiangtai Li, Guanzhong Tian, and Lei Xie. MambaAD: Exploring state space models for multi-class unsupervised anomaly detection. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. 
6, 7" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 317, + 455, + 553, + 518 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 455, + 553, + 518 + ], + "spans": [ + { + "bbox": [ + 317, + 455, + 553, + 518 + ], + "type": "text", + "content": "[21] Haoyang He, Jiangning Zhang, Hongxu Chen, Xuhai Chen, Zhishan Li, Xu Chen, Yabiao Wang, Chengjie Wang, and Lei Xie. A diffusion-based framework for multi-class anomaly detection. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 8472-8480, 2024. 1, 2, 3, 5, 6, 7, 8" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 317, + 521, + 553, + 576 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 521, + 553, + 576 + ], + "spans": [ + { + "bbox": [ + 317, + 521, + 553, + 576 + ], + "type": "text", + "content": "[22] Liren He, Zhengkai Jiang, Jinlong Peng, Liang Liu, Qianggang Du, Xiaobin Hu, Wenbing Zhu, Mingmin Chi, Yabiao Wang, and Chengjie Wang. Learning unified reference representation for unsupervised multi-class anomaly detection. arXiv preprint arXiv:2403.11561, 2024. 3" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 317, + 578, + 553, + 611 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 578, + 553, + 611 + ], + "spans": [ + { + "bbox": [ + 317, + 578, + 553, + 611 + ], + "type": "text", + "content": "[23] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in neural information processing systems, 33:6840-6851, 2020. 2, 6, 7, 8" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 317, + 613, + 553, + 644 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 613, + 553, + 644 + ], + "spans": [ + { + "bbox": [ + 317, + 613, + 553, + 644 + ], + "type": "text", + "content": "[24] Bozhen Hu et al. A lightweight spatial and temporal multifeature fusion network for defect detection. IEEE Transactions on Image Processing, 30:472-486, 2020. 1" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 316, + 647, + 553, + 678 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 647, + 553, + 678 + ], + "spans": [ + { + "bbox": [ + 316, + 647, + 553, + 678 + ], + "type": "text", + "content": "[25] Solomon Kullback. Probability densities with given marginals. The Annals of Mathematical Statistics, 39(4): 1236-1243, 1968. 4, 7, 8" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 316, + 680, + 553, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 680, + 553, + 713 + ], + "spans": [ + { + "bbox": [ + 316, + 680, + 553, + 713 + ], + "type": "text", + "content": "[26] Yufei Liang et al. Omni-frequency channel-selection representations for unsupervised anomaly detection. IEEE Transactions on Image Processing, 2023. 
2, 6" + } + ] + } + ], + "index": 28 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 749, + 318, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 749, + 318, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 749, + 318, + 757 + ], + "type": "text", + "content": "25536" + } + ] + } + ], + "index": 30 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 73, + 294, + 713 + ], + "type": "list", + "angle": 0, + "index": 15, + "blocks": [ + { + "bbox": [ + 56, + 73, + 294, + 117 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 73, + 294, + 117 + ], + "spans": [ + { + "bbox": [ + 56, + 73, + 294, + 117 + ], + "type": "text", + "content": "[27] Guan-Horng Liu, Arash Vahdat, De-An Huang, Evangelos A. Theodorou, Weili Nie, and Anima Anandkumar. I2sb: Image-to-image schrödinger bridge. In International Conference on Machine Learning, 2023. 4, 5" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 118, + 294, + 173 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 118, + 294, + 173 + ], + "spans": [ + { + "bbox": [ + 56, + 118, + 294, + 173 + ], + "type": "text", + "content": "[28] Wenqian Liu, Runze Li, Meng Zheng, Srikrishna Karanam, Ziyan Wu, Bir Bhanu, Richard J Radke, and Octavia Camps. Towards visually explaining variational autoencoders. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 8642-8651, 2020. 2" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 174, + 294, + 228 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 174, + 294, + 228 + ], + "spans": [ + { + "bbox": [ + 56, + 174, + 294, + 228 + ], + "type": "text", + "content": "[29] Zhikang Liu, Yiming Zhou, Yuansheng Xu, and Zilei Wang. Simplenet: A simple network for image anomaly detection and localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20402-20411, 2023. 6" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 56, + 230, + 294, + 262 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 230, + 294, + 262 + ], + "spans": [ + { + "bbox": [ + 56, + 230, + 294, + 262 + ], + "type": "text", + "content": "[30] Xingming Long et al. Fabric defect detection using tactile information. In 2021 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2021. 1" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 56, + 264, + 294, + 297 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 264, + 294, + 297 + ], + "spans": [ + { + "bbox": [ + 56, + 264, + 294, + 297 + ], + "type": "text", + "content": "[31] Fanbin Lu et al. Removing anomalies as noises for industrial defect localization. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023. 1, 2" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 56, + 298, + 294, + 352 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 298, + 294, + 352 + ], + "spans": [ + { + "bbox": [ + 56, + 298, + 294, + 352 + ], + "type": "text", + "content": "[32] Ruiying Lu, YuJie Wu, Long Tian, Dongsheng Wang, Bo Chen, Xiyang Liu, and Ruimin Hu. Hierarchical vector quantized transformer for multi-class unsupervised anomaly detection. Advances in Neural Information Processing Systems, 36:8487-8500, 2023. 
3, 6, 7" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 56, + 354, + 294, + 387 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 354, + 294, + 387 + ], + "spans": [ + { + "bbox": [ + 56, + 354, + 294, + 387 + ], + "type": "text", + "content": "[33] Arian Mousakhan, Thomas Brox, and Jawad Tayyub. Anomaly detection with conditioned denoising diffusion models. arXiv preprint arXiv:2305.15956, 2023. 6, 7, 8" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 56, + 388, + 294, + 410 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 388, + 294, + 410 + ], + "spans": [ + { + "bbox": [ + 56, + 388, + 294, + 410 + ], + "type": "text", + "content": "[34] Edward Nelson. Dynamical theories of Brownian motion. Princeton university press, 2020. 4" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 56, + 411, + 294, + 444 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 411, + 294, + 444 + ], + "spans": [ + { + "bbox": [ + 56, + 411, + 294, + 444 + ], + "type": "text", + "content": "[35] Jonathan Pinnay and Keng Chai. Inpainting transformer for anomaly detection. In International Conference on Image Analysis and Processing, pages 394-406. Springer, 2022. 2" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 56, + 445, + 294, + 467 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 445, + 294, + 467 + ], + "spans": [ + { + "bbox": [ + 56, + 445, + 294, + 467 + ], + "type": "text", + "content": "[36] Hannes Risken and Hannes Risken. Fokker-planck equation. Springer, 1996. 4" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 56, + 468, + 294, + 522 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 468, + 294, + 522 + ], + "spans": [ + { + "bbox": [ + 56, + 468, + 294, + 522 + ], + "type": "text", + "content": "[37] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684-10695, 2022. 2, 5, 6, 7" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 56, + 523, + 294, + 578 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 523, + 294, + 578 + ], + "spans": [ + { + "bbox": [ + 56, + 523, + 294, + 578 + ], + "type": "text", + "content": "[38] Karsten Roth, Latha Pemula, Joaquin Zepeda, Bernhard Schölkopf, Thomas Brox, and Peter Gehler. Towards total recall in industrial anomaly detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 14318-14328, 2022. 6, 8" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 56, + 579, + 294, + 635 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 579, + 294, + 635 + ], + "spans": [ + { + "bbox": [ + 56, + 579, + 294, + 635 + ], + "type": "text", + "content": "[39] Marco Rudolph, Tom Wehrbein, Bodo Rosenhahn, and Bastian Wandt. Fully convolutional cross-scale-flows for image-based defect detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 1088-1097, 2022. 
6" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 56, + 635, + 294, + 689 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 635, + 294, + 689 + ], + "spans": [ + { + "bbox": [ + 56, + 635, + 294, + 689 + ], + "type": "text", + "content": "[40] Chitwan Sahara, William Chan, Huiwen Chang, Chris Lee, Jonathan Ho, Tim Salimans, David Fleet, and Mohammad Norouzi. Palette: Image-to-image diffusion models. In ACM SIGGRAPH 2022 conference proceedings, pages 1-10, 2022. 5" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 56, + 690, + 294, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 690, + 294, + 713 + ], + "spans": [ + { + "bbox": [ + 56, + 690, + 294, + 713 + ], + "type": "text", + "content": "[41] Thomas Schlegl, Philipp Seebock, Sebastian M Waldstein, Georg Langs, and Ursula Schmidt-Erfurth. f-anogan: Fast" + } + ] + } + ], + "index": 14 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 73, + 555, + 713 + ], + "type": "list", + "angle": 0, + "index": 30, + "blocks": [ + { + "bbox": [ + 333, + 73, + 553, + 95 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 333, + 73, + 553, + 95 + ], + "spans": [ + { + "bbox": [ + 333, + 73, + 553, + 95 + ], + "type": "text", + "content": "unsupervised anomaly detection with generative adversarial networks. Medical image analysis, 54:30-44, 2019. 2" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 316, + 96, + 553, + 139 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 96, + 553, + 139 + ], + "spans": [ + { + "bbox": [ + 316, + 96, + 553, + 139 + ], + "type": "text", + "content": "[42] Yuyang Shi, Valentin De Bortoli, Andrew Campbell, and Arnaud Doucet. Diffusion schrödinger bridge matching. Advances in Neural Information Processing Systems, 36, 2024. 2" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 316, + 141, + 553, + 195 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 141, + 553, + 195 + ], + "spans": [ + { + "bbox": [ + 316, + 141, + 553, + 195 + ], + "type": "text", + "content": "[43] Woosang Shin, Jonghyeon Lee, Taehan Lee, Sangmoon Lee, and Jong Pil Yun. Anomaly detection using score-based perturbation resilience. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 23372-23382, 2023. 1, 2, 6, 8" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 316, + 197, + 555, + 241 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 197, + 555, + 241 + ], + "spans": [ + { + "bbox": [ + 316, + 197, + 555, + 241 + ], + "type": "text", + "content": "[44] Kihyuk Sohn et al. Anomaly clustering: Grouping images into coherent clusters of anomaly types. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2023. 1" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 316, + 243, + 553, + 274 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 243, + 553, + 274 + ], + "spans": [ + { + "bbox": [ + 316, + 243, + 553, + 274 + ], + "type": "text", + "content": "[45] Jouwon Song et al. Anoseg: anomaly segmentation network using self-supervised learning. arXiv preprint arXiv:2110.03396, 2021. 
2" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 316, + 276, + 553, + 330 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 276, + 553, + 330 + ], + "spans": [ + { + "bbox": [ + 316, + 276, + 553, + 330 + ], + "type": "text", + "content": "[46] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In International Conference on Learning Representations, 2021. 2, 3, 7" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 316, + 332, + 553, + 397 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 332, + 553, + 397 + ], + "spans": [ + { + "bbox": [ + 316, + 332, + 553, + 397 + ], + "type": "text", + "content": "[47] Daniel Stanley Tan, Yi-Chun Chen, Trista Pei-Chun Chen, and Wei-Chao Chen. Trustmae: A noise-resilient defect classification framework using memory-augmented auto-encoders with trust regions. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, pages 276–285, 2021. 2" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 316, + 399, + 553, + 453 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 399, + 553, + 453 + ], + "spans": [ + { + "bbox": [ + 316, + 399, + 553, + 453 + ], + "type": "text", + "content": "[48] Justin Tebbe and Jawad Tayyyub. Dynamic addition of noise in a diffusion model for anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pages 3940-3949, 2024. 6" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 316, + 455, + 553, + 487 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 455, + 553, + 487 + ], + "spans": [ + { + "bbox": [ + 316, + 455, + 553, + 487 + ], + "type": "text", + "content": "[49] Aaron Van Den Oord, Oriol Vinyals, et al. Neural discrete representation learning. Advances in neural information processing systems, 30, 2017. 5" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 316, + 489, + 553, + 521 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 489, + 553, + 521 + ], + "spans": [ + { + "bbox": [ + 316, + 489, + 553, + 521 + ], + "type": "text", + "content": "[50] Shashanka Venkataramanan et al. Attention guided anomaly localization in images. In European Conference on Computer Vision. Springer International Publishing, 2020. 1" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 316, + 523, + 553, + 567 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 523, + 553, + 567 + ], + "spans": [ + { + "bbox": [ + 316, + 523, + 553, + 567 + ], + "type": "text", + "content": "[51] Gefei Wang, Yuling Jiao, Qian Xu, Yang Wang, and Can Yang. Deep generative learning via schrödinger bridge. In International conference on machine learning, pages 10794-10804. PMLR, 2021. 2" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 316, + 568, + 553, + 612 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 568, + 553, + 612 + ], + "spans": [ + { + "bbox": [ + 316, + 568, + 553, + 612 + ], + "type": "text", + "content": "[52] Julian Wyatt et al. Anoddpm: Anomaly detection with denoising diffusion probabilistic models using simplex noise. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022. 
1, 2" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 316, + 613, + 553, + 657 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 613, + 553, + 657 + ], + "spans": [ + { + "bbox": [ + 316, + 613, + 553, + 657 + ], + "type": "text", + "content": "[53] Rui Xu, Yunke Wang, and Bo Du. Maediff: Masked autoencoder-enhanced diffusion models for unsupervised anomaly detection in brain images. arXiv preprint arXiv:2401.10561, 2024. 1" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 316, + 658, + 553, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 658, + 553, + 713 + ], + "spans": [ + { + "bbox": [ + 316, + 658, + 553, + 713 + ], + "type": "text", + "content": "[54] Hang Yao, Ming Liu, Zhicun Yin, Zifei Yan, Xiaopeng Hong, and Wangmeng Zuo. Glad: towards better reconstruction with global and local adaptive diffusion models for unsupervised anomaly detection. In European Conference on Computer Vision, pages 1-17. Springer, 2024. 6, 7, 8" + } + ] + } + ], + "index": 29 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "text", + "content": "25537" + } + ] + } + ], + "index": 31 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 295, + 475 + ], + "type": "list", + "angle": 0, + "index": 9, + "blocks": [ + { + "bbox": [ + 56, + 72, + 294, + 116 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 72, + 294, + 116 + ], + "spans": [ + { + "bbox": [ + 56, + 72, + 294, + 116 + ], + "type": "text", + "content": "[55] Zhiyuan You, Lei Cui, Yujun Shen, Kai Yang, Xin Lu, Yu Zheng, and Xinyi Le. A unified model for multi-class anomaly detection. Advances in Neural Information Processing Systems, 35:4571-4584, 2022. 3, 6, 7" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 118, + 295, + 172 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 118, + 295, + 172 + ], + "spans": [ + { + "bbox": [ + 56, + 118, + 295, + 172 + ], + "type": "text", + "content": "[56] Vitjan Zavrtanik, Matej Kristan, and Danijel Skočaj. Draem-a discriminatively trained reconstruction embedding for surface anomaly detection. In Proceedings of the IEEE/CVF international conference on computer vision, pages 8330-8339, 2021. 2, 4, 6, 7, 8" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 174, + 294, + 206 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 174, + 294, + 206 + ], + "spans": [ + { + "bbox": [ + 56, + 174, + 294, + 206 + ], + "type": "text", + "content": "[57] Vitjan Zavrtanik, Matej Kristan, and Danijel Skočaj. Reconstruction by inpainting for visual anomaly detection. Pattern Recognition, 112:107706, 2021. 2" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 56, + 208, + 294, + 251 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 208, + 294, + 251 + ], + "spans": [ + { + "bbox": [ + 56, + 208, + 294, + 251 + ], + "type": "text", + "content": "[58] Vitjan Zavrtanik, Matej Kristan, and Danijel Skočaj. Dsr-a dual subspace re-projection network for surface anomaly detection. In European conference on computer vision. Springer Nature Switzerland, 2022. 
2, 6" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 56, + 253, + 294, + 285 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 253, + 294, + 285 + ], + "spans": [ + { + "bbox": [ + 56, + 253, + 294, + 285 + ], + "type": "text", + "content": "[59] Zhaoyang Zeng et al. Reference-based defect detection network. IEEE Transactions on Image Processing, 30:6637-6647, 2021. 1" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 56, + 286, + 294, + 342 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 286, + 294, + 342 + ], + "spans": [ + { + "bbox": [ + 56, + 286, + 294, + 342 + ], + "type": "text", + "content": "[60] Xinyi Zhang, Naiqi Li, Jiawei Li, Tao Dai, Yong Jiang, and Shu-Tao Xia. Unsupervised surface anomaly detection with diffusion probabilistic model. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6782-6791, 2023. 1, 2, 6, 7, 8" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 56, + 343, + 294, + 386 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 343, + 294, + 386 + ], + "spans": [ + { + "bbox": [ + 56, + 343, + 294, + 386 + ], + "type": "text", + "content": "[61] Ying Zhao. Omnial: A unified cnn framework for unsupervised anomaly localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3924-3933, 2023. 6" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 56, + 388, + 294, + 420 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 388, + 294, + 420 + ], + "spans": [ + { + "bbox": [ + 56, + 388, + 294, + 420 + ], + "type": "text", + "content": "[62] Zhixuan Zhao et al. A surface defect detection method based on positive samples. In PRICAI 2018: Trends in Artificial Intelligence. Springer International Publishing, 2018. 2" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 56, + 421, + 294, + 475 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 421, + 294, + 475 + ], + "spans": [ + { + "bbox": [ + 56, + 421, + 294, + 475 + ], + "type": "text", + "content": "[63] Yang Zou, Jongheon Jeong, Latha Pemula, Dongqing Zhang, and Onkar Dabeer. Spot-the-difference self-supervised pretraining for anomaly detection and segmentation. In European Conference on Computer Vision. Springer, Springer Nature Switzerland, 2022. 
5" + } + ] + } + ], + "index": 8 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 749, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 749, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 749, + 317, + 757 + ], + "type": "text", + "content": "25538" + } + ] + } + ], + "index": 10 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 10 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2025/A Unified Model for Compressed Sensing MRI Across Undersampling Patterns/e66975d9-0abf-4453-9e0f-887ea6234025_content_list.json b/2025/A Unified Model for Compressed Sensing MRI Across Undersampling Patterns/e66975d9-0abf-4453-9e0f-887ea6234025_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..99cf007ea1ed4ba5919c1b6dee4700fda0552580 --- /dev/null +++ b/2025/A Unified Model for Compressed Sensing MRI Across Undersampling Patterns/e66975d9-0abf-4453-9e0f-887ea6234025_content_list.json @@ -0,0 +1,1513 @@ +[ + { + "type": "text", + "text": "A Unified Model for Compressed Sensing MRI Across Undersampling Patterns", + "text_level": 1, + "bbox": [ + 86, + 130, + 883, + 152 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Armeet Singh Jatyani* Miguel Liu-Schiaffini", + "bbox": [ + 158, + 180, + 339, + 215 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Jiayun Wang* Aditi Chandrashekar Zihui Wu Bahareh Tolooshams Anima Anandkumar Ifornia Institute of Technology", + "bbox": [ + 377, + 181, + 810, + 234 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "{armeet,peterw,ajchandr,zwu2,mliuschi,btoloosh,ana}@caltech.edu", + "bbox": [ + 197, + 237, + 772, + 250 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 233, + 286, + 313, + 301 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Compressed Sensing MRI reconstructs images of the body's internal anatomy from undersampled measurements, thereby reducing scan time—the time subjects need to remain still. Recently, deep learning has shown great potential for reconstructing high-fidelity images from highly undersampled measurements. However, one needs to train multiple models for different undersampling patterns and desired output image resolutions, since most networks operate on a fixed discretization. Such approaches are highly impractical in clinical settings, where undersampling patterns and image resolutions are frequently changed to accommodate different real-time imaging and diagnostic requirements.", + "bbox": [ + 75, + 318, + 470, + 500 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "We propose a unified MRI reconstruction model robust to various measurement undersampling patterns and image resolutions. Our approach uses neural operators—a discretization-agnostic architecture applied in both image and measurement spaces—to capture local and global features. Empirically, our model improves SSIM by $11\\%$ and PSNR by 4 dB over a state-of-the-art CNN (End-to-End VarNet), with $600\\times$ faster inference than diffusion methods. The resolution-agnostic design also enables zero-shot super-resolution and extended field-of-view reconstruction, offering a versatile and efficient solution for clinical MR imaging. 
Our unified model offers a versatile solution for MRI, adapting seamlessly to various measurement undersampling and imaging resolutions, making it highly effective for flexible and reliable clinical imaging. Our code is available at https://armeet.ca/nomri.", + "bbox": [ + 75, + 501, + 473, + 743 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1. Introduction", + "text_level": 1, + "bbox": [ + 76, + 773, + 209, + 789 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Magnetic Resonance Imaging (MRI) is a popular noninvasive imaging technology, used in numerous medical and scientific applications such as neurosurgery [38], clinical oncology [20], diagnostic testing [16], neuroscience [22], and pharmaceutical research [36]. MRI is greatly limited by a", + "bbox": [ + 75, + 799, + 470, + 876 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "slow data acquisition process, which sometimes requires patients to remain still for an hour [4, 39]. Hence, accelerating MRI scan has garnered tremendous attention [11, 18, 28].", + "bbox": [ + 500, + 287, + 893, + 333 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Compressed Sensing (CS) [9] enables MRI at sub-Nyquist rates and reduces acquisition time for greater clinical utility. This is framed as an ill-posed inverse problem [12], where prior knowledge about MR images is crucial for reconstruction. Traditional Compressed Sensing MRI assumes a sparse prior in a transform domain (e.g., wavelets [3]). Recent deep learning methods learn underlying data structures to achieve superior performance [5, 40]. Current state-of-the-art models establish an end-to-end mapping [15, 40] from undersampled measurements to image reconstruction in both image and frequency domains. However, these models often struggle with generalization across varying resolutions, a critical need in clinical practice where flexible resolution adjustments are necessary. A unified model that is agnostic to discretizations would greatly improve efficiency.", + "bbox": [ + 496, + 335, + 893, + 561 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Neural Operators (NOs) [21] are a deep learning framework that learns mappings between infinite-dimensional function spaces, making them agnostic to discretizations (resolutions). This property makes them suitable for tasks with data at varying resolutions, such as partial differential equations (PDEs) [21, 25, 34] and PDE-related applications [32, 35]. NOs could also be suitable for compressed sensing MRI due to measurements with multiple undersampling patterns. Various NO architectures [24, 26, 34] have been proposed. Recently, discrete-continuous (DISCO) convolutions [26, 31] have emerged as an efficient neural operator that captures local features and leverages GPU acceleration for standard convolutions. Due to the similarity to standard convolutions, the building blocks of many existing MRI deep learning models [5, 40], DISCO is a good candidate for resolution-agnostic MRI reconstruction.", + "bbox": [ + 496, + 565, + 895, + 806 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Our approach: We propose a unified model based on NOs, that is robust to different undersampling patterns and image resolutions in compressed sensing MRI (Fig. 1a). Our model follows an unrolled network design [15, 40] with DISCO [26, 31]. 
As the image resolution increases, DISCO maintains a resolution-agnostic kernel with a consistent con", + "bbox": [ + 496, + 809, + 895, + 900 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "CVF", + "bbox": [ + 106, + 2, + 181, + 42 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore.", + "bbox": [ + 236, + 0, + 810, + 46 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "*Equal contribution.", + "bbox": [ + 94, + 887, + 205, + 898 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "26004", + "bbox": [ + 478, + 944, + 519, + 957 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/79bd6ab2cc51219f5cbccc1f37da7e35cd83009cc487046729e06a66ec617a2f.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 84, + 92, + 483, + 270 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/fac92d5640d352f6c9f5be132c40c252f7c8960477bf38717cfcc6057fcf75af.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 483, + 93, + 866, + 284 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/4c2e564d9eecb317644849f140a69641654d87616d351ad9b836f7c25fa1567f.jpg", + "image_caption": [ + "Figure 1. (a) We propose a unified model for MRI reconstruction, called neural operator (NO), which works across various measurement undersampling patterns, overcoming the resolution dependency limit of CNN-based methods like [40] that require a specific model for each pattern. (b) NO achieves consistent performance across undersampling patterns and outperforms CNN architectures such as [40] (for $2 \\times$ acceleration with one unrolled network cascade). (c) NO is resolution-agnostic. As image resolution increases, it maintains a consistent kernel size for alias-free rescaling, unlike CNNs with variable kernel sizes that risk aliasing. (d) NO enhances zero-shot super-resolution MRI reconstruction, outperforming CNNs [40]." + ], + "image_footnote": [], + "bbox": [ + 84, + 273, + 478, + 443 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/bd4d3c4cd2c6b2889be5b0e7e58493399d38bcf4d297f231bf96b7ad58336866.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 480, + 297, + 867, + 445 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "volution patch size, while the regular convolution kernel contracts to a point (Fig. 1c). The DISCO operators learn in both measurement/frequency $\\mathbf{k}$ space $(\\mathrm{NO_k})$ and image space $(\\mathrm{NO_i})$ . $\\mathrm{NO_k}$ makes our framework agnostic to different measurement undersampling patterns, and $\\mathrm{NO_i}$ makes the framework agnostic to different image resolutions. Additionally, the learning in both frequency and image space allows the model to capture both local and global features of images due to the duality of the Fourier transform that connects the frequency and image space. The resolution-agnostic design also enables super-resolution in both frequency and image space, allowing the extended field of view (FOV) and super-resolution of the reconstructed MR images.", + "bbox": [ + 75, + 556, + 468, + 752 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "We empirically demonstrate that our model is robust to different measurement undersampling rates and patterns (Fig. 
1a). Our model performs consistently across these pattern variations, whereas the existing method drops in performance (Fig. 1b). We achieve up to $4 \\times$ lower NMSE and 5 dB PSNR improvement from the baseline when evaluating on different undersampling patterns. The model is efficient and $600 \\times$ faster than the diffusion baseline [5, 17, 43]. We also show that our model outperforms the state-of-the-art in", + "bbox": [ + 75, + 763, + 470, + 900 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "zero-shot super-resolution inference (Fig. 1d) and extended FOV reconstruction (Fig. 5).", + "bbox": [ + 498, + 556, + 890, + 587 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Our work has two main contributions: 1) We propose a unified neural operator model that learns in function space and shows robust performance across different undersampling patterns and image resolutions in compressed sensing MRI. To the best of our knowledge, this is the first resolution-agnostic framework for MRI reconstruction. 2) Our model demonstrates empirical robustness across measurement undersampling rates and patterns, reconstructing MR images with zero-shot higher resolutions and a larger field of view.", + "bbox": [ + 496, + 588, + 892, + 723 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2. Related Works", + "text_level": 1, + "bbox": [ + 500, + 738, + 648, + 753 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Accelerated MRI. One way to accelerate MRI scan speed is parallel imaging, in which multiple receiver coils acquire different views of the object of interest simultaneously, and then combine them into a single image [11, 30, 37]. When MRI reconstruction is paired with compressed sensing, predefined priors or regularization filters can be leveraged to improve reconstruction quality [27, 28]. Recent works have shown that learned deep-learning priors outperform handcrafted priors in reconstruction fidelity. Convolutional neural", + "bbox": [ + 496, + 763, + 893, + 900 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "26005", + "bbox": [ + 478, + 944, + 517, + 955 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "networks (CNNs) [8, 15, 18, 40], variational networks (based on variational minimization) [15, 40], and generative adversarial networks (GANs) [7, 18] have all demonstrated superior performance than traditional optimization approach for compressed sensing MRI reconstruction from undersampled measurements. However, unlike conventional compressed sensing which operates in the function space and is agnostic to measurement undersampling patterns, the aforementioned deep learning methods operate on a fixed resolution. As a result, changes in resolution lead to degradation in performance, and multiple models are needed for different settings. We propose a resolution-agnostic unified model.", + "bbox": [ + 75, + 90, + 472, + 271 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Discretization-Agnostic Learning and Neural Operators. Empirically, diffusion models have shown relatively consistent performance with different measurement undersampling patterns in accelerated MRI [14]. However, diffusion models usually take more runtime at inference and need extensive hyperparameter tuning for good performance (Section 4.5). Additionally, they are not fundamentally discretization-agnostic by design. 
Neural operators [1, 21] are deep learning architectures specifically designed to learn mappings between infinite-dimensional function spaces. They are discretization-agnostic, allowing evaluation at any resolution, and converge to a desired operator as the resolution approaches infinity. Neural operators have empirically achieved good performance as surrogate models of numerical solutions to partial differential equations (PDEs) [21, 25, 34] with various applications, such as material science [35], weather forecasting [32], and photoacoustic imaging [13]. The design of neural operators often depends on the application at hand. For example, the Fourier neural operator (FNO) [24], which performs global convolutions, has shown consistent discretization-agnostic performance in various applications [1]. Other designs of neural operators [23, 26] rely on integration with locally-supported kernels to capture local features, which has shown to be useful in applications where local features are important, such as modeling turbulent fluids [23]. Additionally, neural operators with local integrals can be made efficient with parallel computing compared to those requiring global integrals. Our MRI framework, based on neural operators with local integrals, is agnostic to undersampling patterns and output image resolutions.", + "bbox": [ + 75, + 272, + 472, + 724 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3. Methods", + "text_level": 1, + "bbox": [ + 76, + 739, + 174, + 753 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "We first discuss the background of compressed sensing MRI and the unrolled network framework we use. We then discuss how we can extend the existing network building block, standard convolution, to resolution-agnostic neural operators. We also introduce DISCO [31], a neural operator design we adopt, and we capture global and local image features with DISCO. We conclude the section with the super-resolution designs. We call the measurement or frequency space $\\mathbf{k}$ -space, and physical or spatial space image space hereafter.", + "bbox": [ + 75, + 763, + 472, + 902 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.1. MRI Reconstruction with Unrolled Networks", + "text_level": 1, + "bbox": [ + 498, + 90, + 883, + 106 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Background. In MRI, anatomical images $\\mathbf{x}$ of the patient are reconstructed by acquiring frequency-domain measurements $\\mathbf{k}$ , where the relationship is defined as:", + "bbox": [ + 498, + 113, + 890, + 159 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {k} := \\mathcal {F} (\\mathbf {x}) + \\epsilon \\tag {1}\n$$\n", + "text_format": "latex", + "bbox": [ + 642, + 167, + 893, + 185 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "where $\\epsilon$ is the measurement noise and $\\mathcal{F}$ is the Fourier transform. In this paper, we consider the parallel imaging setting with multiple receiver coils [19, 44], where each coil captures a different region of the anatomy. The forward process of the $i^{\\mathrm{th}}$ coil measures $\\mathbf{k}_i\\coloneqq \\mathcal{F}(S_i\\mathbf{x}) + \\epsilon_i$ where $S_{i}$ is a position-dependent sensitivity map for the $i^{\\mathrm{th}}$ coil. To speed up the imaging process, measurements are undersampled as $\\tilde{\\mathbf{k}} = M\\mathbf{k}$ in the compressed sensing MRI setting, where $M$ is a binary mask that selects a subset of the k-space points. 
Classical compressed sensing methods reconstruct the image $\hat{\mathbf{x}}$ by solving an optimization problem", + "bbox": [ + 496, + 194, + 893, + 361 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\hat {\mathbf {x}} = \operatorname {argmin} _ {\mathbf {x}} \frac {1}{2} \sum_ {i} \left\| \mathcal {A} (\mathbf {x}) - \tilde {\mathbf {k}} \right\| _ {2} ^ {2} + \lambda \Psi (\mathbf {x}) \tag {2}\n$$\n", + "text_format": "latex", + "bbox": [ + 547, + 367, + 892, + 404 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "where $i$ is the coil index, $\mathcal{A}(\cdot) \coloneqq MFS_{i}(\cdot)$ is the linear forward operator, and $\Psi(\mathbf{x})$ is a regularization term. The optimization objective can be considered as a combination of a physics constraint and a prior. While the above optimization can be solved using classical optimization toolboxes, an increasing line of works uses deep neural networks to learn data priors and shows improved reconstruction performance [15, 40]. Among them, unrolled networks [15, 40] have gained popularity as they incorporate the known forward model, resulting in state-of-the-art performance. Unrolling, which started with the seminal work of LISTA [10], proposes to design networks using iterations of an optimization algorithm to solve inverse problems. This approach incorporates domain knowledge (i.e., the forward model) and leverages deep learning to learn implicit priors from data [29, 41]. In the context of MRI and assuming a differentiable regularization term, the optimization problem is expanded to iterative gradient descent steps with injected CNN-based data priors. Each layer mimics the gradient descent step from $\mathbf{x}^t$ to $\mathbf{x}^{t+1}$ :", + "bbox": [ + 496, + 412, + 895, + 700 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\mathbf {x} ^ {t + 1} \leftarrow \mathbf {x} ^ {t} - \eta^ {t} \mathcal {A} ^ {*} (\mathcal {A} (\mathbf {x} ^ {t}) - \tilde {\mathbf {k}}) + \lambda^ {t} \operatorname {CNN} (\mathbf {x} ^ {t}) \tag {3}\n$$\n", + "text_format": "latex", + "bbox": [ + 534, + 722, + 892, + 741 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "where $\eta^t$ controls the weight of the data consistency term and $\lambda^t$ controls that of the data-driven prior term. The data consistency term samples the data in the frequency domain, hence it is applicable to any spatial resolution. However, the prior term only operates on a specific resolution with CNNs. This means when changing the undersampling patterns, one needs another CNN trained for that setting, which greatly limits the flexibility of the reconstruction system.", + "bbox": [ + 496, + 750, + 893, + 869 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Extending to Neural Operators. We learn the prior in function space via discretization-agnostic neural operators", + "bbox": [ + 498, + 869, + 893, + 901 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "26006", + "bbox": [ + 478, + 944, + 519, + 957 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/1cfb141d7675bddef93de0b6f015a4dc0b22a2793f8b945af4a3bae97a2d8b20.jpg", + "image_caption": [ + "Figure 2. MRI reconstruction pipeline. NO learns data priors in function space with infinite resolution. 
Specifically we propose NOs in the k (frequency) space $\\mathrm{NO_k}$ (k space NO) and image space $\\mathrm{NO_i}$ (image space NO), which capture both global and local image features, due to the duality between physical and frequency space. $\\mathcal{F}^{-1}$ refers to the inverse Fourier transform. We provide the framework design details in Section 3.1 and NO design details in Section 3.2." + ], + "image_footnote": [], + "bbox": [ + 83, + 78, + 869, + 203 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "in $\\mathbf{k}$ space $(\\mathrm{NO_k})$ and image space $(\\mathrm{NO_i})$ . Specifically, we first use a $\\mathbf{k}$ space neural operator $\\mathrm{NO_k}$ to learn $\\mathbf{k}$ space prior and then apply a cascade of unrolled layers, each of which features a data consistency loss and the image space $\\mathrm{NO_i}$ for image prior learning:", + "bbox": [ + 75, + 291, + 470, + 367 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {x} ^ {0} \\leftarrow \\mathcal {F} ^ {- 1} \\left(\\mathrm {N O} _ {\\mathbf {k}} (\\tilde {\\mathbf {k}})\\right) \\tag {4}\n$$\n", + "text_format": "latex", + "bbox": [ + 132, + 377, + 468, + 395 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {x} ^ {t + 1} \\leftarrow \\mathbf {x} ^ {t} - \\eta^ {t} \\mathcal {A} ^ {*} (\\mathcal {A} (\\mathbf {x} ^ {t}) - \\tilde {\\mathbf {k}}) + \\lambda^ {t} \\mathrm {N O} _ {\\mathbf {i}} ^ {t} (\\mathbf {x} ^ {t}) \\tag {5}\n$$\n", + "text_format": "latex", + "bbox": [ + 117, + 398, + 468, + 417 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $\\mathrm{NO}_{\\mathbf{i}}^{t}$ refers to the image-space NO at cascade $t$ . We follow existing works [15, 40] and only have one $\\mathrm{NO}_{\\mathbf{k}}$ for the first cascade. Our framework flexibly works for different resolutions with the design details in Section 3.2.", + "bbox": [ + 75, + 429, + 468, + 489 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Framework Overview. Fig. 2 depicts the pipeline of our neural operator framework for MRI reconstruction. The undersampled measurement $\\tilde{\\mathbf{k}}$ is first fed to a neural operator $\\mathrm{NO_k}$ which operates in measurement $\\mathbf{k}$ space to learn global image features and then inverse Fourier transformed to get an image. Following Eqn. 4 and 5, we iterate a few cascades of unrolled layers, consisting of a neural operator $\\mathrm{NO_i}$ which operates in image $\\mathbf{x}$ space and a data consistency update.", + "bbox": [ + 75, + 489, + 468, + 612 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.2. Neural Operator Design", + "text_level": 1, + "bbox": [ + 76, + 619, + 299, + 636 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Neural operators, which learn mappings between function spaces, offer a unified approach to discretization-agnostic MRI reconstruction. Given that accurate MRI reconstruction depends on capturing both local and global image features, we propose a neural operator architecture that incorporates both global and local inductive biases. We first discuss how we learn local features with local integration operators.", + "bbox": [ + 75, + 643, + 468, + 750 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Local Features via Local Integration Operator. Historically, the most common method of embedding a local inductive bias into deep neural networks has been by using locally-supported convolutional kernels, as in convolutional neural networks (CNNs). 
However, standard discrete convolutional kernels used in CNNs do not satisfy the resolution-agnostic properties of neural operators. Specifically, Liu et al. [26] show that CNN-style convolutional kernels converge to pointwise linear operators as the resolution is increased, instead of the desired local integration in the limit of infinite", + "bbox": [ + 75, + 750, + 470, + 901 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "resolution. For a kernel $\kappa$ and input function $g$ defined over some compact subset $D\subset \mathbb{R}^d$ , the local convolution operator in a standard convolution layer, which maps input positions $u$ to output positions $v$ , is given by", + "bbox": [ + 498, + 291, + 893, + 353 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n(\kappa \star g) (v) = \int_ {D} \kappa (u - v) \cdot g (u) d u. \tag {6}\n$$\n", + "text_format": "latex", + "bbox": [ + 571, + 357, + 890, + 390 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Given a particular set of input points $(u_{j})_{j = 1}^{m}\subset D$ with corresponding quadrature weights $q_{j}$ and output positions $v_{i}\in D$ , we adopt the discrete-continuous convolutions (DISCO) framework for operator learning [26, 31] and approximate the continuous convolution (Eqn. 6) as", + "bbox": [ + 498, + 395, + 893, + 470 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n(\kappa \star g) (v _ {i}) \approx \sum_ {j = 1} ^ {m} \kappa \left(u _ {j} - v _ {i}\right) \cdot g \left(u _ {j}\right) q _ {j}. \tag {7}\n$$\n", + "text_format": "latex", + "bbox": [ + 563, + 476, + 892, + 517 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "We follow [26] and parameterize $\kappa$ as a linear combination of predefined basis functions $\kappa^{\ell}$ : $\kappa = \sum_{\ell=1}^{L} \theta^{\ell} \cdot \kappa^{\ell}$ , where $\theta^{\ell}$ are learnable parameters. We choose the linear piecewise basis from [26] as this achieves the best empirical results (see Sections B.3 & E of the supplementary). The convolutional kernel is thus parameterized by a finite number of parameters, independently of the grid on which the kernel is evaluated. The kernel is resolution-agnostic because we disentangle the resolution-agnostic basis and discrete learnable parameters. The basis $\kappa^{\ell}$ is defined in the function space, and will be discretized at the desired resolution; discrete parameters $\theta^{\ell}$ can be learned with gradient descent. Since we are operating on an equidistant grid on a compact subset of $\mathbb{R}^2$ , we follow [26] and implement Eqn. 7 using standard convolutional kernels (thus enjoying the benefits of acceleration on GPUs using standard deep learning libraries) with two crucial modifications: 1) the kernel itself is defined as a linear combination of basis functions $\kappa^{\ell}$ , and 2) the size of the kernel scales with the input resolution so as to remain a fixed size w.r.t. the input domain. We adopt the same basis functions as [26] in our experiments, and we use the local integration operator as the resolution-agnostic building block for the measurement space and image space operators.", + "bbox": [ + 496, + 523, + 893, + 869 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "DISCO vs Standard 2D Convolution with Varying Resolutions. 
As the input resolution increases (the discretization", + "bbox": [ + 498, + 869, + 893, + 900 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "26007", + "bbox": [ + 478, + 944, + 517, + 955 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "becomes denser), DISCO [31] maintains the kernel size for each convolution and finally converges to a local integral. The standard 2D convolution kernel, however, gets increasingly smaller and finally converges to a point-wise operator (Fig. 1c). Although one could alleviate the issue of standard convolutions by interpolating the convolutional kernel shape to match with corresponding convolution patch sizes for different resolutions, the interpolated kernel will have artifacts that affect performance at new resolutions (Fig. 1d). DISCO, however, is agnostic to resolution changes as the kernel is in the function space.", + "bbox": [ + 75, + 90, + 472, + 256 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Global Features. A common neural operator architecture for learning global features is the Fourier neural operator (FNO) [24]. FNO takes the Fourier transform of the input, truncates the result beyond some fixed number of modes, and pointwise multiplies the result with a learned weight tensor, which is equivalent to a global convolution on the input by the convolution theorem. Interestingly, the forward process of MRI is a Fourier transformation, which means that local operations in measurement $\\mathbf{k}$ space are equivalent to global operators in image $\\mathbf{x}$ space and vice versa, due to their duality. Following FNO, we could apply a pointwise multiplication between the measurement $\\mathbf{k}$ and a learned weight tensor to capture global image features. However, FNO truncates high frequencies, which are crucial for MRI reconstruction. To address this, we directly apply the DISCO local integration operator on the measurement space to capture global image features without feature map truncation.", + "bbox": [ + 75, + 257, + 472, + 513 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "UDNO: the Building Block. Without loss of generality, we make both the image-space $\\mathrm{NO}_{\\mathrm{i}}$ and $\\mathbf{k}$ space $\\mathrm{NO}_{\\mathbf{k}}$ be local neural operators that capture local features in the corresponding domain. Such a design learns both global and local image features due to domain duality. Motivations for adopting the U-shaped architecture are in Fig. 7 and Section A of the Supplementary. Each operator consists of multiple sub-layers, to which we refer as the U-Shaped DISCO Neural Operator, or UDNO. The motivation is that multi-scale designs have shown great success in capturing features at different scales in images and that U-shaped networks are among the most popular architectures in computer vision, demonstrating strong performance in various applications from medical imaging to diffusion [6, 33, 37]. Further, UDNO makes our framework very similar to an existing state-of-the-art E2E-VN [40], with the difference being standard convolutions replaced by DISCO operators. The UDNO follows the encoder/decoder architecture of the U-Net [37], replacing regular convolutions with DISCO layers.", + "bbox": [ + 75, + 513, + 472, + 800 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Loss. 
The parameters of the proposed neural operator are estimated from the training data by minimizing the structural similarity loss between the reconstruction $\\mathbf{x}$ and the ground truth image $\\mathbf{x}^*$ (the same as the E2E-VN [40]):", + "bbox": [ + 75, + 801, + 470, + 861 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal {L} (\\hat {\\mathbf {x}}, \\mathbf {x} ^ {*}) = - \\operatorname {S S I M} (\\hat {\\mathbf {x}}, \\mathbf {x} ^ {*}), \\tag {8}\n$$\n", + "text_format": "latex", + "bbox": [ + 176, + 862, + 468, + 878 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where SSIM is the Structural Similarity Index Measure [42].", + "bbox": [ + 76, + 885, + 472, + 901 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/16a9f889ff062d7a55042de1d4ad45ad5bd849d690d5af4a4369881e0fcda26e.jpg", + "image_caption": [ + "Figure 3. Super resolution (denser discretization) in $\\mathbf{k}$ space or image space increases the FOV or resolution of the reconstructed image. With denser discretization, NO maintains a resolution-agnostic kernel while CNN kernels become relatively smaller in size. Empirically our NO outperforms CNNs [40] (Section 4.4)." + ], + "image_footnote": [], + "bbox": [ + 501, + 89, + 883, + 266 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.3. Super-Resolution", + "text_level": 1, + "bbox": [ + 500, + 382, + 669, + 398 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Neural operators enable zero-shot super-resolution. As shown in Fig. 3, increasing resolution corresponds to denser discretization between fixed minimum and maximum values, while the overall domain range remains constant. Due to the dual nature of frequency and image space, enhancing resolution in $\\mathbf{k}$ space extends the field of view (FOV) in the reconstructed image, whereas increasing resolution in image space enhances the image's detail. Our proposed NO framework includes resolution-agnostic neural operators for both $\\mathbf{k}$ space $(\\mathrm{NO_k})$ and image space $(\\mathrm{NO_i})$ , facilitating zero-shot super-resolution in both domains. We present empirical zero-shot super-resolution results in Section 4.4, comparing our NO framework to E2E-VN [40], a CNN-based architecture with a similar design.", + "bbox": [ + 496, + 405, + 893, + 617 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4. Experiments", + "text_level": 1, + "bbox": [ + 500, + 630, + 633, + 647 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "We discuss the datasets and experimental setup, followed by comparisons of our and baseline methods with different k undersampling rates and patterns. We conclude the section with zero-shot super-resolution and additional analysis.", + "bbox": [ + 496, + 655, + 893, + 717 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4.1. Dataset and Setup", + "text_level": 1, + "bbox": [ + 500, + 726, + 676, + 742 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Datasets: The fastMRI dataset [44] is a large and open dataset of knee and brain fully-sampled MRIs.", + "bbox": [ + 500, + 750, + 890, + 779 + ], + "page_idx": 4 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- fastMRI knee: We use the multi-coil knee reconstruction dataset with 34,742 slices for training and 7,135 slices for evaluation. 
All samples contain data from 15 coils.", + "- fastMRI brain: We use the T2 contrast subset of the multi-coil brain reconstruction dataset with 6,262 training slices and 502 evaluation slices. We filter for samples with data from 16 coils." + ], + "bbox": [ + 500, + 780, + 890, + 883 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Undersampling Patterns and Rates. We use equispaced,", + "bbox": [ + 500, + 885, + 893, + 901 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "26008", + "bbox": [ + 478, + 944, + 517, + 955 + ], + "page_idx": 4 + }, + { + "type": "table", + "img_path": "images/a49b0bbff4d866cd57f8890ef7355a48f215d9daf65d26c1ff6d4d42f0f60fc2.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
<table>
<tr><td rowspan="2">Category</td><td rowspan="2">Method</td><td colspan="2">fastMRI knee</td><td colspan="2">fastMRI brain</td></tr>
<tr><td>PSNR (dB)</td><td>SSIM</td><td>PSNR (dB)</td><td>SSIM</td></tr>
<tr><td rowspan="2">Learning-free</td><td>Zero-filled</td><td>31.00±3.33</td><td>0.7848±0.0616</td><td>30.86±1.73</td><td>0.8065±0.0376</td></tr>
<tr><td>ℓ1-Wavelet [27]</td><td>25.67±3.91</td><td>0.5667±0.2001</td><td>28.68±1.31</td><td>0.6783±0.0684</td></tr>
<tr><td rowspan="3">Diffusion</td><td>CSGM [17]</td><td>26.52±3.21</td><td>0.6789±0.1220</td><td>-</td><td>-</td></tr>
<tr><td>ScoreMRI [5]</td><td>25.72±1.80</td><td>0.5789±0.0910</td><td>-</td><td>-</td></tr>
<tr><td>PnP-DM [43]</td><td>26.52±3.14</td><td>0.6383±0.1320</td><td>-</td><td>-</td></tr>
<tr><td rowspan="3">End-to-end</td><td>U-Net [37]</td><td>37.07±2.47</td><td>0.8803±0.0504</td><td>37.27±1.76</td><td>0.9211±0.0272</td></tr>
<tr><td>E2E-VN [40]</td><td>38.33±3.06</td><td>0.9048±0.0732</td><td>38.06±2.70</td><td>0.9620±0.0107</td></tr>
<tr><td>Ours</td><td>39.14±2.93</td><td>0.9219±0.0724</td><td>38.82±2.77</td><td>0.9621±0.0086</td></tr>
</table>
", + "bbox": [ + 81, + 88, + 467, + 195 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Table 1. MRI reconstruction performance on $4 \\times$ equispaced undersampling. NO outperforms existing methods (classical, diffusion, and end-to-end). NO also shows consistent performance across k space undersampling patterns (Section 4.3). Zero-filled refers to reconstructing the image from zero-filled k space.", + "bbox": [ + 75, + 205, + 470, + 275 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "random, magic, Gaussian, radial, and Poisson undersampling patterns [5, 44] and $2\\mathrm{x}$ , $4\\times$ , $6\\times$ , and $8\\times$ undersampling rates (visualizations are in Fig. 8 in the Supplementary). Higher rates result in sparser $\\mathbf{k}$ space samples and shorter imaging time at the cost of a more ill-posed/harder inversion process. Section B in the Supplementary provides additional undersampling details along with mask visualizations.", + "bbox": [ + 75, + 305, + 468, + 411 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Neural Operator Model. NO follows Fig. 2. The $\\mathrm{NO_k}$ (k space neural operator) and $\\mathrm{NO_i}$ (image space neural operator) are implemented as UDNOs with 2 input and output channels. This is because complex numbers, commonly used in MRI data, are represented using two channels: one for the real part and one for the imaginary part. We provide UDNO details, DISCO kernel basis configurations and training hyper-parameters in Section B of the Supplementary.", + "bbox": [ + 75, + 411, + 468, + 532 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Baseline: Compressed Sensing. We compare with a learning-free compressed sensing method with wavelet $\\ell_1$ regularization for a classical comparison [27].", + "bbox": [ + 76, + 534, + 468, + 579 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Baselines: Unrolled Networks. We compare with the E2E-VN (End-to-End VarNet) [40], which shares a similar network structure with our approach, but uses the standard CNNs with resolution-dependent convolutions. Since E2E-VN [40] is only trained on specific resolution, we also consider E2E-VN++, where we train [40] with multiple-patterns that match our NO's training data for fair comparisons. Number of cascades $t$ is set to 12 following [40].", + "bbox": [ + 75, + 580, + 468, + 700 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Baselines: Diffusion. Diffusion models have shown strong performance on inverse problems such as MRI reconstruction. We compare our approach to three prominent diffusion-based methods that leverage these capabilities: Score-based diffusion models for accelerated MRI (ScoreMRI) [5], Compressive Sensing using Generative Models (CSGM) [17], and Plug-and-Play Diffusion Models (PnP-DM) [43]. We replicate the experimental settings described in their respective papers. While they report results on MVUE targets, we evaluate metrics on RSS targets at inference for a fair comparison with our methods.", + "bbox": [ + 75, + 702, + 468, + 868 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Hardware and Training. While models can be trained on a single RTX 4090 GPU, we accelerate the training of our", + "bbox": [ + 76, + 869, + 468, + 900 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "model and baselines with a batch size of 16 across 4 A100 (40G) GPUs. 
We follow baseline settings for comparison.", + "bbox": [ + 498, + 90, + 890, + 119 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Evaluation Protocols. We evaluate image reconstruction performance using normalized mean square error (NMSE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM), which are standard for the fastMRI dataset and MRI [44].", + "bbox": [ + 498, + 121, + 890, + 196 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.2. Reconstruction with Different k Space Undersampling Patterns", + "text_level": 1, + "bbox": [ + 500, + 207, + 795, + 238 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "We train our NO model, E2E-VN and E2E-VN++ on $4 \times$ equispaced samples for 50 epochs. The performance on the single $4 \times$ equispaced undersampling pattern is reported in Table 1. We further fine-tune NO and E2E-VN++ for an additional 20 epochs on a small dataset (3,474 samples) of equispaced, random, magic, Gaussian, radial, and Poisson samples.", + "bbox": [ + 496, + 244, + 890, + 335 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "fastMRI Knee. We also provide detailed metric results in Table 2, with a line plot in Fig. 6a, where our NO achieves consistent performance across different patterns. Across all patterns, we achieve an average improvement of $4.17\mathrm{dB}$ PSNR and $8.4\%$ SSIM over the E2E-VN. On rectilinear patterns (equispaced, magic, random), our performance remains comparable to $\mathrm{E2E - VN + + }$ (0.3 dB PSNR gain). Across the irregular patterns (radial, Gaussian, Poisson), we achieve a 0.6 dB PSNR improvement over the improved baseline $(\mathrm{E2E - VN + + })$.", + "bbox": [ + 496, + 337, + 890, + 487 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "fastMRI Brain. On irregular patterns, we achieve an average improvement of 4.7 dB PSNR and $10\%$ SSIM over the E2E-VN. On rectilinear patterns (equispaced, magic, random), our performance remains comparable to the E2E-VN. Detailed numbers are reported in Table 7 of the Supplementary. Visualization. We observe visual improvements in reconstruction integrity (see Fig. 4). Our model is robust to inference across multiple patterns. We highlight important local regions where our NO is better.", + "bbox": [ + 496, + 488, + 892, + 623 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Training on multiple patterns together corresponds to the common clinical setting where the undersampling patterns are known. We also consider the setting where undersampling patterns are unknown. Zero-shot evaluations of the equispaced-trained $(4\times)$ model across different patterns show that our NO achieves a 1.8 dB PSNR gain over E2E-VN.", + "bbox": [ + 496, + 625, + 890, + 715 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.3. Reconstruction with Different k Space Undersampling Rates", + "text_level": 1, + "bbox": [ + 500, + 724, + 774, + 756 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "We train our NO model, E2E-VN and E2E-VN++ on $4 \times$ equispaced samples for 50 epochs. We further fine-tune NO and E2E-VN++ for an additional 20 epochs on a small dataset (3,474 samples) of $4 \times$ , $6 \times$ , $8 \times$ , and $16 \times$ equispaced samples.", + "bbox": [ + 496, + 763, + 890, + 839 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "For fastMRI Knee, we report the multi-rate performance in Fig. 6b and Table 6 of the Supplementary. 
For fastMRI Brain, we report the multi-rate performance in Table 8 of the Supplementary. Our neural operator model consistently out", + "bbox": [ + 496, + 839, + 890, + 900 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "26009", + "bbox": [ + 478, + 944, + 517, + 955 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/98e9d07e8faf4877e2f41511c26d15c331bbe65b9534d219ae2fdae8975dafea.jpg", + "image_caption": [ + "Figure 4. MRI reconstructions with different undersampling patterns of various methods: NO (ours), E2E-VN++, E2E-VN [40], L1-Wavelet (learning-free compressed sensing) [27], and CSGM (diffusion) [17]. NO reconstructs high-fidelity images across various downsampling patterns. Zoom-in view in the lower right of each image. Row 1: $4 \\times$ Equispaced undersampling. Row 2: $4 \\times$ Gaussian 2d undersampling. Row 3: $4 \\times$ Radial 2d undersampling." + ], + "image_footnote": [], + "bbox": [ + 86, + 79, + 883, + 354 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/3c0c5d0e839c218fb5d08b01dd075e31ba51b384bb945c84bb069d9078d7a133.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
<table>
<tr><td colspan="2" rowspan="2">Pattern</td><td colspan="3">PSNR (dB) ↑</td><td colspan="3">SSIM ↑</td><td colspan="3">NMSE ↓</td></tr>
<tr><td>NO (ours)</td><td>E2E-VN++</td><td>E2E-VN [40]</td><td>NO (ours)</td><td>E2E-VN++</td><td>E2E-VN [40]</td><td>NO (ours)</td><td>E2E-VN++</td><td>E2E-VN [40]</td></tr>
<tr><td rowspan="3">In-domain</td><td>Equispaced</td><td>37.40 ± 2.61</td><td>37.50 ± 2.79</td><td>38.35 ± 3.05</td><td>0.899 ± 0.072</td><td>0.900 ± 0.072</td><td>0.905 ± 0.073</td><td>0.007 ± 0.006</td><td>0.007 ± 0.006</td><td>0.006 ± 0.006</td></tr>
<tr><td>Random</td><td>36.66 ± 2.48</td><td>36.79 ± 2.65</td><td>37.34 ± 2.75</td><td>0.891 ± 0.070</td><td>0.892 ± 0.072</td><td>0.897 ± 0.071</td><td>0.008 ± 0.006</td><td>0.008 ± 0.007</td><td>0.007 ± 0.005</td></tr>
<tr><td>Magic</td><td>38.46 ± 2.99</td><td>38.34 ± 3.06</td><td>38.94 ± 3.55</td><td>0.914 ± 0.070</td><td>0.914 ± 0.069</td><td>0.917 ± 0.071</td><td>0.006 ± 0.006</td><td>0.007 ± 0.006</td><td>0.006 ± 0.006</td></tr>
<tr><td rowspan="3">OOD</td><td>Radial</td><td>36.23 ± 2.21</td><td>35.50 ± 2.24</td><td>27.02 ± 3.92</td><td>0.900 ± 0.071</td><td>0.892 ± 0.070</td><td>0.764 ± 0.070</td><td>0.009 ± 0.006</td><td>0.011 ± 0.009</td><td>0.069 ± 0.030</td></tr>
<tr><td>Poisson</td><td>33.42 ± 2.34</td><td>33.01 ± 2.67</td><td>23.61 ± 3.85</td><td>0.878 ± 0.062</td><td>0.873 ± 0.060</td><td>0.687 ± 0.083</td><td>0.016 ± 0.008</td><td>0.017 ± 0.008</td><td>0.152 ± 0.068</td></tr>
<tr><td>Gaussian</td><td>31.25 ± 2.70</td><td>30.65 ± 2.55</td><td>23.14 ± 3.95</td><td>0.863 ± 0.058</td><td>0.851 ± 0.059</td><td>0.673 ± 0.088</td><td>0.024 ± 0.005</td><td>0.028 ± 0.007</td><td>0.170 ± 0.073</td></tr>
</table>
", + "bbox": [ + 80, + 434, + 890, + 534 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Table 2. MRI reconstruction performance across different undersampling patterns. Across multiple patterns, NO maintains reconstruction performance, while baselines do not perform well on out-of-domain (OOD) undersampling patterns (Poisson, radial, Gaussian). Metrics are calculated for the fastMRI knee dataset with a fixed $4 \\times$ acceleration rate. We observe that the E2E-VN overfits to rectilinear patterns, and drops off heavily when evaluated on the irregular patterns (Poisson, radial, Gaussian).", + "bbox": [ + 75, + 544, + 893, + 602 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "performs the E2E-VN [40], achieving 3.2 dB higher PSNR and $5.8\\%$ higher SSIM on fastMRI knee and 2.0 dB higher PSNR and $7.5\\%$ higher SSIM on fastMRI brain.", + "bbox": [ + 75, + 626, + 470, + 672 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.4. Zero-Shot Super-Resolution", + "text_level": 1, + "bbox": [ + 76, + 681, + 328, + 698 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "We study $\\mathrm{NO_i}$ and $\\mathrm{NO_k}$ zero-shot super-resolution performance and compare them with E2E-VN [40].", + "bbox": [ + 75, + 704, + 470, + 733 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Higher MRI Resolution with $\\mathrm{NO_i}$ super-resolution. We train our NO model and the E2E-VN models on $320\\times 320$ knee samples. We then keep the $\\mathrm{NO_k}$ unchanged and use bilinear interpolation to increase the input to $\\mathrm{NO_i}$ to $640\\times$ 640. We directly evaluate models without fine-tuning against fully sampled $640\\times 640$ bilinear interpolated ground truth reconstructions. For [40] relying on CNNs, the absolute kernel size stays the same, and the ratio of kernel size over feature map is halved, while the ratio of NO stays the same. Compared to our NO model, the CNN-based E2E-VN [40] produces reconstructions with noticeable artifacts and higher", + "bbox": [ + 75, + 734, + 470, + 901 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "PSNR and image reconstruction quality (Fig. 1d and Fig. 5b). Larger MRI FOV with $\\mathrm{NO_k}$ super-resolution. k space super-resolution expands the MRI reconstruction field of view (FOV). To validate model performance, we design a proof-of-concept FOV experiment. Our NO model and the E2E-VN [40] train on $160\\times 160$ downsampled k space brain slice samples, where sparse k space sampling results in a reduced FOV in image space. We then perform zero-shot inference on $320\\times 320$ full-FOV k space data. Although neither model encounters data outside the $160\\times 160$ FOV during training, our NO model reconstructs features in this extended region with significantly fewer artifacts compared to E2E-VN [40] (visualizations in Fig. 5a).", + "bbox": [ + 496, + 626, + 893, + 824 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.5. Additional Analysis", + "text_level": 1, + "bbox": [ + 500, + 832, + 689, + 848 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Model Inference and Tuning Time. In Table 3, we compare the model development and inference times of our end-to-end neural operator (NO) with diffusion models. 
We", + "bbox": [ + 496, + 854, + 893, + 900 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "26010", + "bbox": [ + 478, + 944, + 519, + 957 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/25101edd51d9fb68dfa6f411ff3bd607dcc3f746b05a3e90d3175f2d27b83c82.jpg", + "image_caption": [ + "Figure 5. Zero-shot super-resolution results in both extended FOV $(\\mathrm{NO_k})$ and high-resolution image space $(\\mathrm{NO_i})$ . (a) Zero-shot extended FOV reconstructions: Our NO model shows fewer artifacts and higher PSNR in the reconstructed brain slices compared to the CNN-based E2E-VN [40] on $4\\times$ Gaussian, despite neither model seeing data outside the initial $160\\times 160$ FOV during training. (b) Zero-shot super-resolution reconstructions in image space on $2\\times$ radial: with input resolution increased to $640\\times 640$ through bilinear interpolation, our NO model preserves reconstruction quality, while E2E-VN [40] produces visible artifacts." + ], + "image_footnote": [], + "bbox": [ + 78, + 80, + 483, + 275 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/71aa9ec5616091411c146c9cf0c72104a31cc5d81296e6b8db73d4be9855c4ae.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 491, + 80, + 890, + 272 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/2a7e541d6cb095f704283590550c5580ca00bf0900a101a82cd573f0b0f4a467.jpg", + "image_caption": [ + "Figure 6. Performance across different undersampling patterns and rates of ours and baseline methods: end-to-end [40], diffusion [17] and learning-free [27]. Our NO remains relatively consistent in performance when evaluated at different undersampling patterns and rates. Note that a high undersampling rate makes the task more difficult and thus a worse score is expected." + ], + "image_footnote": [], + "bbox": [ + 81, + 383, + 285, + 491 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/fc1d912141782f9bd93595af6b448723774b8130bc00713c1c085f57cff68ee5.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 285, + 383, + 464, + 491 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/cdc28e0877c20cf6bc1d85bcc88539302a708e97f17bcb445902b156978771c6.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
<table>
<tr><td>Category</td><td>Method</td><td>Inference Time (s)</td><td>Tuning* Required</td></tr>
<tr><td>Learning-free</td><td>$\ell_1$-Wavelet [27]</td><td>5.45</td><td>✓</td></tr>
<tr><td rowspan="3">Diffusion</td><td>CSGM [17]</td><td>93.84</td><td>✓</td></tr>
<tr><td>PnP-DM [43]</td><td>84.46</td><td>✓</td></tr>
<tr><td>ScoreMRI [5]</td><td>96.16</td><td>✓</td></tr>
<tr><td rowspan="2">Variational</td><td>E2E-VN [40]</td><td>0.104</td><td>✗</td></tr>
<tr><td>NO (ours)</td><td>0.158</td><td>✗</td></tr>
</table>
", + "bbox": [ + 80, + 603, + 467, + 704 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Table 3. Inference and tuning time of methods tested on NVIDIA A100. NO is approximately $600 \\times$ faster than diffusion, and $35 \\times$ faster than the classical baseline based on learning-free compressed sensing methods. *Tuning refers to the $\\mathbf{k}$ undersampling pattern-specific hyperparameter tuning during inference/after model training. Both the $\\ell_{1}$ -Wavelet [27] ( $\\sim 0.5$ hrs per pattern) and diffusion methods ( $\\sim 6$ hrs per pattern) require pattern-specific tuning, while our NO is trained once for all patterns.", + "bbox": [ + 75, + 713, + 470, + 827 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "observe that diffusion models require pattern-specific hyperparameter tuning and are over 600 times slower in inference. MRI-diffusion models [5, 17, 43] are unconditionally trained and undersampling patterns are not available during training.", + "bbox": [ + 75, + 839, + 472, + 902 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Thus, we empirically tune hyperparameters such as learning rate and guidance scale for each downsampling pattern for approximately 6 hours each time. Traditional learning-free methods like $\\ell_1$ -Wavelet [27] still require hyperparameter tuning for specific $\\mathbf{k}$ undersampling patterns during optimization. Consequently, end-to-end methods, e.g. NO, are significantly more efficient.", + "bbox": [ + 496, + 387, + 893, + 492 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Performance Under Same Parameter Size. We show our NO outperforms baseline unrolled network E2E-VN [40] on different patterns and rates with a similar architecture and number of parameters in the Supplementary.", + "bbox": [ + 496, + 493, + 893, + 554 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "5. Conclusion", + "text_level": 1, + "bbox": [ + 500, + 566, + 617, + 583 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Our unified model for compressed sensing MRI addresses the need to train multiple models for different measurement undersampling patterns and image resolutions, a common clinical issue. By leveraging discretization-agnostic neural operators, the model captures both local and global features, enabling flexible MRI reconstruction. With extensive experiments on fastMRI knee and brain datasets, our model maintains consistent performance across undersampling patterns and outperforms state-of-the-art methods in accuracy and robustness. It also enhances zero-shot super-resolution and extended FOV (field of view). The work has some limitations: 1) We only explore one neural operator design, DISCO, and future work could explore other operator learning architectures for MRI. 
2) We only benchmark the image reconstruction performance without diagnostic accuracy, which is of more clinical relevance.", + "bbox": [ + 496, + 592, + 893, + 833 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "In short, our approach offers a versatile solution for efficient MRI, with significant utility in clinical settings where flexibility and adaptability to varying undersampling patterns and image resolutions are crucial.", + "bbox": [ + 496, + 834, + 893, + 893 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "26011", + "bbox": [ + 478, + 944, + 517, + 957 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Acknowledgement", + "text_level": 1, + "bbox": [ + 76, + 90, + 236, + 107 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "This work is supported in part by ONR (MURI grant N000142312654 and N000142012786). J.W. is supported in part by Schmidt Sciences. A.S.J. and A.C. are supported in part by the Undergraduate Research Fellowships (SURF) at Caltech. Z.W. is supported in part by the Amazon AI4Science Fellowship. B.T. is supported in part by the Swartz Foundation Fellowship. M.L.-S. is supported in part by the Mellon Mays Undergraduate Fellowship. A.A. is supported in part by Bren endowed chair and the AI2050 senior fellow program at Schmidt Sciences.", + "bbox": [ + 75, + 113, + 472, + 255 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 78, + 281, + 174, + 296 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[1] Kamyar Azizzadenesheli, Nikola Kovachki, Zongyi Li, Miguel Liu-Schiaffini, Jean Kossaifi, and Anima Anandkumar. Neural operators for accelerating scientific simulations and design. Nature Reviews Physics, pages 1-9, 2024. 3", + "[2] Max Born and Emil Wolf. Principles of optics: electromagnetic theory of propagation, interference and diffraction of light. Elsevier, 2013. 15", + "[3] Scott Shaobing Chen, David L Donoho, and Michael A Saunders. Atomic decomposition by basis pursuit. SIAM review, 43(1):129-159, 2001. 1", + "[4] Yutong Chen, Carola-Bibiane Schonlieb, Pietro Liò, Tim Leiner, Pier Luigi Dragotti, Ge Wang, Daniel Rueckert, David Firmin, and Guang Yang. AI-based reconstruction for fast mri—a systematic review and meta-analysis. Proceedings of the IEEE, 110(2):224-245, 2022. 1", + "[5] Hyungjin Chung and Jong Chul Ye. Score-based diffusion models for accelerated mri. Medical Image Analysis, 80: 102479, 2022. 1, 2, 6, 8", + "[6] Florinel-Alin Croitoru, Vlad Hondru, Radu Tudor Ionescu, and Mubarak Shah. Diffusion models in vision: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(9):10850-10869, 2023. 5", + "[7] Salman UH Dar, Mahmut Yurt, Mohammad Shahdloo, Muhammed Emrullah Ildiz, Berk Tinaz, and Tolga Cukur. Prior-guided image reconstruction for accelerated multicontrast mri via generative adversarial networks. IEEE Journal of Selected Topics in Signal Processing, 14(6):1072-1087, 2020. 3", + "[8] Mohammad Zalbagi Darestani and Reinhard Heckel. Accelerated mri with un-trained neural networks. IEEE Transactions on Computational Imaging, 7:724-733, 2021. 3", + "[9] D.L. Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289-1306, 2006. 1", + "[10] Karol Gregor and Yann LeCun. Learning fast approximations of sparse coding. In Proceedings of the 27th International Conference on Machine Learning, pages 399-406, 2010. 
3", + "[11] Mark A Griswold, Peter M Jakob, Robin M Heidemann, Mathias Nittka, Vladimir Jellus, Jianmin Wang, Berthold Kiefer, and Axel Haase. Generalized autocalibrating partially parallel acquisitions (grappa). Magnetic Resonance in Medicine: An Official Journal of the International Society for Magnetic Resonance in Medicine, 47(6):1202-1210, 2002. 1, 2" + ], + "bbox": [ + 78, + 305, + 472, + 898 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[12] Charles W Groetsch and CW Groetsch. Inverse problems in the mathematical sciences. Springer, 1993. 1", + "[13] Steven Guan, Ko-Tsung Hsu, and Parag V Chitnis. Fourier neural operator network for fast photoacoustic wave simulations. Algorithms, 16(2):124, 2023. 3", + "[14] Alper Güngör, Salman UH Dar, Şaban Öztürk, Yilmaz Korkmaz, Hasan A Bedel, Gokberk Elmas, Muzaffer Ozbey, and Tolga Çukur. Adaptive diffusion priors for accelerated mri reconstruction. Medical image analysis, 88:102872, 2023. 3", + "[15] Kerstin Hammernik, Teresa Klatzer, Erich Kobler, Michael P Recht, Daniel K Sodickson, Thomas Pock, and Florian Knoll. Learning a variational network for reconstruction of accelerated mri data. Magnetic resonance in medicine, 79(6): 3055-3071, 2018. 1, 3, 4", + "[16] DJ Husband, KA Grant, and CS Romaniuk. Mri in the diagnosis and treatment of suspected malignant spinal cord compression. The British journal of radiology, 74:15-23, 2001. 1", + "[17] Ajil Jalal, Marius Arvinte, Giannis Daras, Eric Price, Alexandros G Dimakis, and Jonathan I Tamir. Robust compressed sensing mri with deep generative priors. Advances in Neural Information Processing Systems, 2021. 2, 6, 7, 8", + "[18] Patricia M Johnson and Maria Drangova. Conditional generative adversarial network for 3d rigid-body motion correction in mri. Magnetic resonance in medicine, 82(3):901-910, 2019. 1, 3", + "[19] Christoph Juchem, Omar M Nahnass, Terence W Nixon, and Robin A de Graaf. Multi-slice mri with the dynamic multicoil technique. NMR in Biomedicine, 28(11):1526-1534, 2015. 3", + "[20] Dow-Mu Koh and David J Collins. Diffusion-weighted mri in the body: applications and challenges in oncology. American Journal of Roentgenology, 188(6):1622-1635, 2007. 1", + "[21] Nikola Kovachki, Zongyi Li, Burigede Liu, Kamyar Azizzadeneheshi, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Neural operator: Learning maps between function spaces with applications to pdes. Journal of Machine Learning Research, 24(89):1-97, 2023. 1, 3", + "[22] Denis Le Bihan. Looking into the functional architecture of the brain with diffusion mri. Nature reviews neuroscience, 4 (6):469-480, 2003. 1", + "[23] Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Neural operator: Graph kernel network for partial differential equations. 2020. 3", + "[24] Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Fourier neural operator for parametric partial differential equations. 2021. 1, 3, 5, 11", + "[25] Zongyi Li, Hongkai Zheng, Nikola Kovachki, David Jin, Haoxuan Chen, Burigede Liu, Kamyar Azizzadenesheli, and Anima Anandkumar. Physics-informed neural operator for learning partial differential equations. ACM/JMS Journal of Data Science, 1(3):1-27, 2024. 1, 3", + "[26] Miguel Liu-Schiaffini, Julius Berner, Boris Bonev, Thorsten Kurth, Kamyar Azizzadenesheli, and Anima Anandkumar." 
+ ], + "bbox": [ + 501, + 92, + 893, + 900 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "26012", + "bbox": [ + 478, + 944, + 519, + 955 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Neural operators with localized integral and differential kernels. In *Forty-first International Conference on Machine Learning*, 2024. 1, 3, 4, 11, 14", + "[27] Michael Lustig, David Donoho, and John M Pauly. Sparse MRI: The application of compressed sensing for rapid MR imaging. Magn. Reson. Med., 58(6):1182-1195, 2007. 2, 6, 7, 8", + "[28] Michael Lustig, David L Donoho, Juan M Santos, and John M Pauly. Compressed sensing mri. IEEE signal processing magazine, 25(2):72-82, 2008. 1, 2", + "[29] Morteza Mardani, Qingyun Sun, David Donoho, Vardan Papyan, Hatef Monajemi, Shreyas Vasanawala, and John Pauly. Neural proximal gradient descent for compressive imaging. Advances in Neural Information Processing Systems, 31, 2018. 3", + "[30] Mark Murphy, Marcus Alley, James Demmel, Kurt Keutzer, Shreyas Vasanawala, and Michael Lustig. Fast 11-spirit compressed sensing parallel imaging mri: scalable parallel implementation and clinically feasible runtime. IEEE transactions on medical imaging, 31(6):1250-1262, 2012. 2", + "[31] Jeremy Ocampo, Matthew A Price, and Jason D McEwen. Scalable and equivariant spherical cnns by discrete-continuous (disco) convolutions. arXiv preprint arXiv:2209.13603, 2022. 1, 3, 4, 5, 12, 14", + "[32] Jaideep Pathak, Shashank Subramanian, Peter Harrington, Sanjeev Raja, Ashesh Chattopadhyay, Morteza Mardani, Thorsten Kurth, David Hall, Zongyi Li, Kamyar Azizzadenesheli, et al. Fourcastnet: A global data-driven high-resolution weather model using adaptive fourier neural operators. arXiv preprint arXiv:2202.11214, 2022. 1, 3", + "[33] William Peebles and Saining Xie. Scalable diffusion models with transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4195-4205, 2023. 5", + "[34] Bogdan Raonic, Roberto Molinaro, Tim De Ryck, Tobias Rohner, Francesca Bartolucci, Rima Alaifari, Siddhartha Mishra, and Emmanuel de Bezenac. Convolutional neural operators for robust and accurate learning of pdes. Advances in Neural Information Processing Systems, 36, 2024. 1, 3", + "[35] Meer Mehran Rashid, Tanu Pittie, Souvik Chakraborty, and NM Anoop Krishnan. Learning the stress-strain fields in digital composites using fourier neural operator. Iscience, 25 (11), 2022. 1, 3", + "[36] J Craig Richardson, Richard W Bowtell, Karsten Mäder, and Colin D Melia. Pharmaceutical applications of magnetic resonance imaging (mri). Advanced drug delivery reviews, 57 (8):1191-1209, 2005. 1", + "[37] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In Medical image computing and computer-assisted intervention-MICCAI 2015: 18th international conference, Munich, Germany, October 5-9, 2015, proceedings, part III 18, pages 234-241. Springer, 2015. 2, 5, 6, 11", + "[38] V Seifert, M Zimmermann, C Trantakis, H-E Vitzthum, K Kühnel, A Raabe, F Bootz, J-P Schneider, F Schmidt, and J Dietrich. Open mri-guided neurosurgery. Acta neurochirurgica, 141:455-464, 1999. 1" + ], + "bbox": [ + 78, + 90, + 470, + 900 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[39] Dilbag Singh, Anmol Monga, Hector L de Moura, Xiaoxia Zhang, Marcelo VW Zibetti, and Ravinder R Regatte. 
Emerging trends in fast mri using deep-learning reconstruction on undersampled k-space data: a systematic review. Bioengineering, 10(9):1012, 2023. 1", + "[40] Anuroop Sriram, Jure Zbontar, Tullie Murrell, Aaron Defazio, C Lawrence Zitnick, Nafissa Yakubova, Florian Knoll, and Patricia Johnson. End-to-end variational networks for accelerated mri reconstruction. In Medical Image Computing and Computer Assisted Intervention-MICCAI 2020: 23rd International Conference, Lima, Peru, October 4-8, 2020, Proceedings, Part II 23, pages 64-73. Springer, 2020. 1, 2, 3, 4, 5, 6, 7, 8, 11, 12, 13", + "[41] Jian Sun, Huibin Li, Zongben Xu, et al. Deep admm-net for compressive sensing mri. Advances in neural information processing systems, 29, 2016. 3", + "[42] Zhou Wang, Eero P Simoncelli, and Alan C Bovik. Multiscale structural similarity for image quality assessment. In Asilomar Conference on Signals, Systems & Computers, 2003. 5", + "[43] Zihui Wu, Yu Sun, Yifan Chen, Bingliang Zhang, Yisong Yue, and Katherine Bouman. Principled probabilistic imaging using diffusion models as plug-and-play priors. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. 2, 6, 8", + "[44] Jure Zbontar, Florian Knoll, Anuroop Sriram, Tullie Murrell, Zhengnan Huang, Matthew J Muckley, Aaron Defazio, et al. fastmri: An open dataset and benchmarks for accelerated mri. arXiv preprint arXiv:1811.08839, 2018. 3, 5, 6, 12" + ], + "bbox": [ + 501, + 92, + 893, + 486 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "26013", + "bbox": [ + 478, + 944, + 519, + 955 + ], + "page_idx": 9 + } +] \ No newline at end of file diff --git a/2025/A Unified Model for Compressed Sensing MRI Across Undersampling Patterns/e66975d9-0abf-4453-9e0f-887ea6234025_model.json b/2025/A Unified Model for Compressed Sensing MRI Across Undersampling Patterns/e66975d9-0abf-4453-9e0f-887ea6234025_model.json new file mode 100644 index 0000000000000000000000000000000000000000..0b364e5fc39a30381692f07e601949423ac5b99c --- /dev/null +++ b/2025/A Unified Model for Compressed Sensing MRI Across Undersampling Patterns/e66975d9-0abf-4453-9e0f-887ea6234025_model.json @@ -0,0 +1,1991 @@ +[ + [ + { + "type": "header", + "bbox": [ + 0.107, + 0.003, + 0.182, + 0.043 + ], + "angle": 0, + "content": "CVF" + }, + { + "type": "header", + "bbox": [ + 0.237, + 0.001, + 0.812, + 0.047 + ], + "angle": 0, + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." 
+ }, + { + "type": "title", + "bbox": [ + 0.088, + 0.131, + 0.885, + 0.154 + ], + "angle": 0, + "content": "A Unified Model for Compressed Sensing MRI Across Undersampling Patterns" + }, + { + "type": "text", + "bbox": [ + 0.16, + 0.181, + 0.34, + 0.217 + ], + "angle": 0, + "content": "Armeet Singh Jatyani* Miguel Liu-Schiaffini" + }, + { + "type": "text", + "bbox": [ + 0.379, + 0.182, + 0.811, + 0.235 + ], + "angle": 0, + "content": "Jiayun Wang* Aditi Chandrashekar Zihui Wu Bahareh Tolooshams Anima Anandkumar Ifornia Institute of Technology" + }, + { + "type": "text", + "bbox": [ + 0.198, + 0.238, + 0.773, + 0.251 + ], + "angle": 0, + "content": "{armeet,peterw,ajchandr,zwu2,mliuschi,btoloosh,ana}@caltech.edu" + }, + { + "type": "title", + "bbox": [ + 0.235, + 0.287, + 0.314, + 0.303 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.076, + 0.319, + 0.471, + 0.501 + ], + "angle": 0, + "content": "Compressed Sensing MRI reconstructs images of the body's internal anatomy from undersampled measurements, thereby reducing scan time—the time subjects need to remain still. Recently, deep learning has shown great potential for reconstructing high-fidelity images from highly undersampled measurements. However, one needs to train multiple models for different undersampling patterns and desired output image resolutions, since most networks operate on a fixed discretization. Such approaches are highly impractical in clinical settings, where undersampling patterns and image resolutions are frequently changed to accommodate different real-time imaging and diagnostic requirements." + }, + { + "type": "text", + "bbox": [ + 0.076, + 0.502, + 0.474, + 0.744 + ], + "angle": 0, + "content": "We propose a unified MRI reconstruction model robust to various measurement undersampling patterns and image resolutions. Our approach uses neural operators—a discretization-agnostic architecture applied in both image and measurement spaces—to capture local and global features. Empirically, our model improves SSIM by \\(11\\%\\) and PSNR by 4 dB over a state-of-the-art CNN (End-to-End VarNet), with \\(600\\times\\) faster inference than diffusion methods. The resolution-agnostic design also enables zero-shot super-resolution and extended field-of-view reconstruction, offering a versatile and efficient solution for clinical MR imaging. Our unified model offers a versatile solution for MRI, adapting seamlessly to various measurement undersampling and imaging resolutions, making it highly effective for flexible and reliable clinical imaging. Our code is available at https://armeet.ca/nomri." + }, + { + "type": "title", + "bbox": [ + 0.078, + 0.774, + 0.21, + 0.79 + ], + "angle": 0, + "content": "1. Introduction" + }, + { + "type": "text", + "bbox": [ + 0.076, + 0.8, + 0.472, + 0.877 + ], + "angle": 0, + "content": "Magnetic Resonance Imaging (MRI) is a popular noninvasive imaging technology, used in numerous medical and scientific applications such as neurosurgery [38], clinical oncology [20], diagnostic testing [16], neuroscience [22], and pharmaceutical research [36]. MRI is greatly limited by a" + }, + { + "type": "text", + "bbox": [ + 0.5, + 0.288, + 0.895, + 0.334 + ], + "angle": 0, + "content": "slow data acquisition process, which sometimes requires patients to remain still for an hour [4, 39]. Hence, accelerating MRI scan has garnered tremendous attention [11, 18, 28]." 
+ }, + { + "type": "text", + "bbox": [ + 0.498, + 0.336, + 0.895, + 0.563 + ], + "angle": 0, + "content": "Compressed Sensing (CS) [9] enables MRI at sub-Nyquist rates and reduces acquisition time for greater clinical utility. This is framed as an ill-posed inverse problem [12], where prior knowledge about MR images is crucial for reconstruction. Traditional Compressed Sensing MRI assumes a sparse prior in a transform domain (e.g., wavelets [3]). Recent deep learning methods learn underlying data structures to achieve superior performance [5, 40]. Current state-of-the-art models establish an end-to-end mapping [15, 40] from undersampled measurements to image reconstruction in both image and frequency domains. However, these models often struggle with generalization across varying resolutions, a critical need in clinical practice where flexible resolution adjustments are necessary. A unified model that is agnostic to discretizations would greatly improve efficiency." + }, + { + "type": "text", + "bbox": [ + 0.498, + 0.566, + 0.896, + 0.807 + ], + "angle": 0, + "content": "Neural Operators (NOs) [21] are a deep learning framework that learns mappings between infinite-dimensional function spaces, making them agnostic to discretizations (resolutions). This property makes them suitable for tasks with data at varying resolutions, such as partial differential equations (PDEs) [21, 25, 34] and PDE-related applications [32, 35]. NOs could also be suitable for compressed sensing MRI due to measurements with multiple undersampling patterns. Various NO architectures [24, 26, 34] have been proposed. Recently, discrete-continuous (DISCO) convolutions [26, 31] have emerged as an efficient neural operator that captures local features and leverages GPU acceleration for standard convolutions. Due to the similarity to standard convolutions, the building blocks of many existing MRI deep learning models [5, 40], DISCO is a good candidate for resolution-agnostic MRI reconstruction." + }, + { + "type": "text", + "bbox": [ + 0.498, + 0.81, + 0.897, + 0.901 + ], + "angle": 0, + "content": "Our approach: We propose a unified model based on NOs, that is robust to different undersampling patterns and image resolutions in compressed sensing MRI (Fig. 1a). Our model follows an unrolled network design [15, 40] with DISCO [26, 31]. As the image resolution increases, DISCO maintains a resolution-agnostic kernel with a consistent con" + }, + { + "type": "page_footnote", + "bbox": [ + 0.096, + 0.888, + 0.206, + 0.9 + ], + "angle": 0, + "content": "*Equal contribution." + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.521, + 0.958 + ], + "angle": 0, + "content": "26004" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.086, + 0.093, + 0.484, + 0.271 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.484, + 0.094, + 0.867, + 0.285 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.086, + 0.274, + 0.48, + 0.444 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.482, + 0.299, + 0.868, + 0.446 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.076, + 0.459, + 0.892, + 0.543 + ], + "angle": 0, + "content": "Figure 1. (a) We propose a unified model for MRI reconstruction, called neural operator (NO), which works across various measurement undersampling patterns, overcoming the resolution dependency limit of CNN-based methods like [40] that require a specific model for each pattern. 
(b) NO achieves consistent performance across undersampling patterns and outperforms CNN architectures such as [40] (for \\(2 \\times\\) acceleration with one unrolled network cascade). (c) NO is resolution-agnostic. As image resolution increases, it maintains a consistent kernel size for alias-free rescaling, unlike CNNs with variable kernel sizes that risk aliasing. (d) NO enhances zero-shot super-resolution MRI reconstruction, outperforming CNNs [40]." + }, + { + "type": "text", + "bbox": [ + 0.076, + 0.557, + 0.47, + 0.753 + ], + "angle": 0, + "content": "volution patch size, while the regular convolution kernel contracts to a point (Fig. 1c). The DISCO operators learn in both measurement/frequency \\(\\mathbf{k}\\) space \\((\\mathrm{NO_k})\\) and image space \\((\\mathrm{NO_i})\\). \\(\\mathrm{NO_k}\\) makes our framework agnostic to different measurement undersampling patterns, and \\(\\mathrm{NO_i}\\) makes the framework agnostic to different image resolutions. Additionally, the learning in both frequency and image space allows the model to capture both local and global features of images due to the duality of the Fourier transform that connects the frequency and image space. The resolution-agnostic design also enables super-resolution in both frequency and image space, allowing the extended field of view (FOV) and super-resolution of the reconstructed MR images." + }, + { + "type": "text", + "bbox": [ + 0.076, + 0.765, + 0.471, + 0.901 + ], + "angle": 0, + "content": "We empirically demonstrate that our model is robust to different measurement undersampling rates and patterns (Fig. 1a). Our model performs consistently across these pattern variations, whereas the existing method drops in performance (Fig. 1b). We achieve up to \\(4 \\times\\) lower NMSE and 5 dB PSNR improvement from the baseline when evaluating on different undersampling patterns. The model is efficient and \\(600 \\times\\) faster than the diffusion baseline [5, 17, 43]. We also show that our model outperforms the state-of-the-art in" + }, + { + "type": "text", + "bbox": [ + 0.499, + 0.557, + 0.892, + 0.588 + ], + "angle": 0, + "content": "zero-shot super-resolution inference (Fig. 1d) and extended FOV reconstruction (Fig. 5)." + }, + { + "type": "text", + "bbox": [ + 0.498, + 0.589, + 0.893, + 0.724 + ], + "angle": 0, + "content": "Our work has two main contributions: 1) We propose a unified neural operator model that learns in function space and shows robust performance across different undersampling patterns and image resolutions in compressed sensing MRI. To the best of our knowledge, this is the first resolution-agnostic framework for MRI reconstruction. 2) Our model demonstrates empirical robustness across measurement undersampling rates and patterns, reconstructing MR images with zero-shot higher resolutions and a larger field of view." + }, + { + "type": "title", + "bbox": [ + 0.5, + 0.739, + 0.65, + 0.755 + ], + "angle": 0, + "content": "2. Related Works" + }, + { + "type": "text", + "bbox": [ + 0.498, + 0.765, + 0.895, + 0.901 + ], + "angle": 0, + "content": "Accelerated MRI. One way to accelerate MRI scan speed is parallel imaging, in which multiple receiver coils acquire different views of the object of interest simultaneously, and then combine them into a single image [11, 30, 37]. When MRI reconstruction is paired with compressed sensing, predefined priors or regularization filters can be leveraged to improve reconstruction quality [27, 28]. 
Recent works have shown that learned deep-learning priors outperform handcrafted priors in reconstruction fidelity. Convolutional neural" + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.945, + 0.518, + 0.957 + ], + "angle": 0, + "content": "26005" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.076, + 0.092, + 0.473, + 0.272 + ], + "angle": 0, + "content": "networks (CNNs) [8, 15, 18, 40], variational networks (based on variational minimization) [15, 40], and generative adversarial networks (GANs) [7, 18] have all demonstrated performance superior to traditional optimization approaches for compressed sensing MRI reconstruction from undersampled measurements. However, unlike conventional compressed sensing, which operates in the function space and is agnostic to measurement undersampling patterns, the aforementioned deep learning methods operate on a fixed resolution. As a result, changes in resolution lead to degradation in performance, and multiple models are needed for different settings. We propose a resolution-agnostic unified model." + }, + { + "type": "text", + "bbox": [ + 0.076, + 0.273, + 0.473, + 0.726 + ], + "angle": 0, + "content": "Discretization-Agnostic Learning and Neural Operators. Empirically, diffusion models have shown relatively consistent performance with different measurement undersampling patterns in accelerated MRI [14]. However, diffusion models usually require more runtime at inference and need extensive hyperparameter tuning for good performance (Section 4.5). Additionally, they are not fundamentally discretization-agnostic by design. Neural operators [1, 21] are deep learning architectures specifically designed to learn mappings between infinite-dimensional function spaces. They are discretization-agnostic, allowing evaluation at any resolution, and converge to a desired operator as the resolution approaches infinity. Neural operators have empirically achieved good performance as surrogate models of numerical solutions to partial differential equations (PDEs) [21, 25, 34] with various applications, such as materials science [35], weather forecasting [32], and photoacoustic imaging [13]. The design of neural operators often depends on the application at hand. For example, the Fourier neural operator (FNO) [24], which performs global convolutions, has shown consistent discretization-agnostic performance in various applications [1]. Other designs of neural operators [23, 26] rely on integration with locally-supported kernels to capture local features, which has been shown to be useful in applications where local features are important, such as modeling turbulent fluids [23]. Additionally, neural operators with local integrals can be made efficient with parallel computing compared to those requiring global integrals. Our MRI framework, based on neural operators with local integrals, is agnostic to undersampling patterns and output image resolutions." + }, + { + "type": "title", + "bbox": [ + 0.077, + 0.74, + 0.176, + 0.755 + ], + "angle": 0, + "content": "3. Methods" + }, + { + "type": "text", + "bbox": [ + 0.076, + 0.765, + 0.473, + 0.903 + ], + "angle": 0, + "content": "We first discuss the background of compressed sensing MRI and the unrolled network framework we use. We then discuss how we can extend the existing network building block, standard convolution, to resolution-agnostic neural operators. We also introduce DISCO [31], a neural operator design we adopt, and we capture global and local image features with DISCO. 
We conclude the section with the super-resolution designs. Hereafter, we call the measurement or frequency space \\(\\mathbf{k}\\)-space, and the physical or spatial space image space." + }, + { + "type": "title", + "bbox": [ + 0.499, + 0.091, + 0.885, + 0.107 + ], + "angle": 0, + "content": "3.1. MRI Reconstruction with Unrolled Networks" + }, + { + "type": "text", + "bbox": [ + 0.499, + 0.114, + 0.892, + 0.16 + ], + "angle": 0, + "content": "Background. In MRI, anatomical images \\(\\mathbf{x}\\) of the patient are reconstructed by acquiring frequency-domain measurements \\(\\mathbf{k}\\), where the relationship is defined as:" + }, + { + "type": "equation", + "bbox": [ + 0.643, + 0.169, + 0.894, + 0.186 + ], + "angle": 0, + "content": "\\[\n\\mathbf{k} := \\mathcal{F}(\\mathbf{x}) + \\epsilon \\tag{1}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.498, + 0.195, + 0.895, + 0.362 + ], + "angle": 0, + "content": "where \\(\\epsilon\\) is the measurement noise and \\(\\mathcal{F}\\) is the Fourier transform. In this paper, we consider the parallel imaging setting with multiple receiver coils [19, 44], where each coil captures a different region of the anatomy. The forward process of the \\(i^{\\mathrm{th}}\\) coil measures \\(\\mathbf{k}_i\\coloneqq \\mathcal{F}(S_i\\mathbf{x}) + \\epsilon_i\\), where \\(S_{i}\\) is a position-dependent sensitivity map for the \\(i^{\\mathrm{th}}\\) coil. To speed up the imaging process, measurements are undersampled as \\(\\tilde{\\mathbf{k}} = M\\mathbf{k}\\) in the compressed sensing MRI setting, where \\(M\\) is a binary mask that selects a subset of the k-space points. Classical compressed sensing methods reconstruct the image \\(\\hat{\\mathbf{x}}\\) by solving an optimization problem" + }, + { + "type": "equation", + "bbox": [ + 0.548, + 0.368, + 0.893, + 0.405 + ], + "angle": 0, + "content": "\\[\n\\hat{\\mathbf{x}} = \\operatorname{argmin}_{\\mathbf{x}} \\frac{1}{2} \\sum_{i} \\left\\| \\mathcal{A}(\\mathbf{x}) - \\tilde{\\mathbf{k}} \\right\\|_{2}^{2} + \\lambda \\Psi(\\mathbf{x}) \\tag{2}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.498, + 0.413, + 0.897, + 0.701 + ], + "angle": 0, + "content": "where \\(i\\) is the coil index, \\(\\mathcal{A}(\\cdot) \\coloneqq M\\mathcal{F}S_{i}(\\cdot)\\) is the linear forward operator, and \\(\\Psi(\\mathbf{x})\\) is a regularization term. The optimization objective can be considered as a combination of a physics constraint and a prior. While the above optimization can be solved using classical optimization toolboxes, an increasing line of works uses deep neural networks to learn data priors and shows improved reconstruction performance [15, 40]. Among them, unrolled networks [15, 40] have gained popularity as they incorporate the known forward model, resulting in state-of-the-art performance. Unrolling, which started with the seminal work of LISTA [10], proposes to design networks using iterations of an optimization algorithm to solve inverse problems. This approach incorporates domain knowledge (i.e., the forward model) and leverages deep learning to learn implicit priors from data [29, 41]. In the context of MRI and assuming a differentiable regularization term, the optimization problem is expanded to iterative gradient descent steps with injected CNN-based data priors. 
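The forward operator \\(\\mathcal{A}\\) defined above is straightforward to realize in code. A minimal PyTorch sketch follows, assuming complex-valued tensors and centered, orthonormal FFTs; the function names, shapes, and normalization are illustrative choices, not the authors' released implementation.

```python
import torch

def forward_op(x, smaps, mask):
    """A(x) = M F(S_i x). x: (H, W) complex image; smaps: (C, H, W) coil
    sensitivity maps S_i; mask: (H, W) binary undersampling mask M."""
    coil_imgs = smaps * x  # S_i x for each coil i (broadcast over coils)
    k = torch.fft.fftshift(
        torch.fft.fft2(torch.fft.ifftshift(coil_imgs, dim=(-2, -1)),
                       norm="ortho"),
        dim=(-2, -1))
    return mask * k  # keep only the sampled k-space locations

def adjoint_op(k, smaps, mask):
    """Adjoint A*: masked multi-coil k-space (C, H, W) -> complex image (H, W)."""
    coil_imgs = torch.fft.fftshift(
        torch.fft.ifft2(torch.fft.ifftshift(mask * k, dim=(-2, -1)),
                        norm="ortho"),
        dim=(-2, -1))
    return (smaps.conj() * coil_imgs).sum(dim=0)  # S_i^H coil combination
```

With the orthonormal FFT, \\(\\mathcal{F}\\) is unitary, so the adjoint of \\(M\\mathcal{F}S_i\\) is exactly \\(S_i^{H}\\mathcal{F}^{-1}M\\), which is what `adjoint_op` computes.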
Each layer mimics the gradient descent step from \\(\\mathbf{x}^t\\) to \\(\\mathbf{x}^{t+1}\\):" + }, + { + "type": "equation", + "bbox": [ + 0.535, + 0.723, + 0.893, + 0.742 + ], + "angle": 0, + "content": "\\[\n\\mathbf {x} ^ {t + 1} \\leftarrow \\mathbf {x} ^ {t} - \\eta^ {t} \\mathcal {A} ^ {*} (\\mathcal {A} (\\mathbf {x} ^ {t}) - \\tilde {\\mathbf {k}}) + \\lambda^ {t} \\operatorname {C N N} (\\mathbf {x} ^ {t}) \\tag {3}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.498, + 0.75, + 0.895, + 0.87 + ], + "angle": 0, + "content": "where \\(\\eta^t\\) controls the weight of data consistency term and \\(\\lambda^t\\) controls that of the data-driven prior term. The data consistency term samples the data in the frequency domain, hence it is applicable to any spatial resolution. However, the prior term only operates on a specific resolution with CNNs. This means when changing the undersampling patterns, one needs another CNN trained for that setting, which greatly limits the flexibility of the reconstruction system." + }, + { + "type": "text", + "bbox": [ + 0.499, + 0.871, + 0.894, + 0.902 + ], + "angle": 0, + "content": "Extending to Neural Operators. We learn the prior in function space via discretization-agnostic neural operators" + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.521, + 0.958 + ], + "angle": 0, + "content": "26006" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.084, + 0.079, + 0.87, + 0.204 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.076, + 0.222, + 0.895, + 0.28 + ], + "angle": 0, + "content": "Figure 2. MRI reconstruction pipeline. NO learns data priors in function space with infinite resolution. Specifically we propose NOs in the k (frequency) space \\(\\mathrm{NO_k}\\) (k space NO) and image space \\(\\mathrm{NO_i}\\) (image space NO), which capture both global and local image features, due to the duality between physical and frequency space. \\(\\mathcal{F}^{-1}\\) refers to the inverse Fourier transform. We provide the framework design details in Section 3.1 and NO design details in Section 3.2." + }, + { + "type": "text", + "bbox": [ + 0.076, + 0.292, + 0.471, + 0.368 + ], + "angle": 0, + "content": "in \\(\\mathbf{k}\\) space \\((\\mathrm{NO_k})\\) and image space \\((\\mathrm{NO_i})\\). Specifically, we first use a \\(\\mathbf{k}\\) space neural operator \\(\\mathrm{NO_k}\\) to learn \\(\\mathbf{k}\\) space prior and then apply a cascade of unrolled layers, each of which features a data consistency loss and the image space \\(\\mathrm{NO_i}\\) for image prior learning:" + }, + { + "type": "equation", + "bbox": [ + 0.133, + 0.378, + 0.47, + 0.396 + ], + "angle": 0, + "content": "\\[\n\\mathbf {x} ^ {0} \\leftarrow \\mathcal {F} ^ {- 1} \\left(\\mathrm {N O} _ {\\mathbf {k}} (\\tilde {\\mathbf {k}})\\right) \\tag {4}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.119, + 0.4, + 0.47, + 0.418 + ], + "angle": 0, + "content": "\\[\n\\mathbf {x} ^ {t + 1} \\leftarrow \\mathbf {x} ^ {t} - \\eta^ {t} \\mathcal {A} ^ {*} (\\mathcal {A} (\\mathbf {x} ^ {t}) - \\tilde {\\mathbf {k}}) + \\lambda^ {t} \\mathrm {N O} _ {\\mathbf {i}} ^ {t} (\\mathbf {x} ^ {t}) \\tag {5}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.076, + 0.43, + 0.47, + 0.49 + ], + "angle": 0, + "content": "where \\(\\mathrm{NO}_{\\mathbf{i}}^{t}\\) refers to the image-space NO at cascade \\(t\\). We follow existing works [15, 40] and only have one \\(\\mathrm{NO}_{\\mathbf{k}}\\) for the first cascade. 
Our framework flexibly works for different resolutions with the design details in Section 3.2." + }, + { + "type": "text", + "bbox": [ + 0.076, + 0.491, + 0.47, + 0.613 + ], + "angle": 0, + "content": "Framework Overview. Fig. 2 depicts the pipeline of our neural operator framework for MRI reconstruction. The undersampled measurement \\(\\tilde{\\mathbf{k}}\\) is first fed to a neural operator \\(\\mathrm{NO_k}\\) which operates in measurement \\(\\mathbf{k}\\) space to learn global image features and then inverse Fourier transformed to get an image. Following Eqn. 4 and 5, we iterate a few cascades of unrolled layers, consisting of a neural operator \\(\\mathrm{NO_i}\\) which operates in image \\(\\mathbf{x}\\) space and a data consistency update." + }, + { + "type": "title", + "bbox": [ + 0.077, + 0.621, + 0.3, + 0.637 + ], + "angle": 0, + "content": "3.2. Neural Operator Design" + }, + { + "type": "text", + "bbox": [ + 0.076, + 0.644, + 0.47, + 0.75 + ], + "angle": 0, + "content": "Neural operators, which learn mappings between function spaces, offer a unified approach to discretization-agnostic MRI reconstruction. Given that accurate MRI reconstruction depends on capturing both local and global image features, we propose a neural operator architecture that incorporates both global and local inductive biases. We first discuss how we learn local features with local integration operators." + }, + { + "type": "text", + "bbox": [ + 0.076, + 0.75, + 0.471, + 0.902 + ], + "angle": 0, + "content": "Local Features via Local Integration Operator. Historically, the most common method of embedding a local inductive bias into deep neural networks has been by using locally-supported convolutional kernels, as in convolutional neural networks (CNNs). However, standard discrete convolutional kernels used in CNNs do not satisfy the resolution-agnostic properties of neural operators. Specifically, Liu et al. [26] show that CNN-style convolutional kernels converge to pointwise linear operators as the resolution is increased, instead of the desired local integration in the limit of infinite" + }, + { + "type": "text", + "bbox": [ + 0.499, + 0.292, + 0.895, + 0.354 + ], + "angle": 0, + "content": "resolution. For a kernel \\(\\kappa\\) and input function \\(g\\) defined over some compact subset \\(D\\subset \\mathbb{R}^d\\), the local convolution operator in a standard convolution layer, which transforms input \\(u\\) to output \\(v\\), is given by" + }, + { + "type": "equation", + "bbox": [ + 0.573, + 0.358, + 0.892, + 0.391 + ], + "angle": 0, + "content": "\\[\n(k \\star g) (v) = \\int_ {D} \\kappa (u - v) \\cdot g (u) d u. \\tag {6}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.499, + 0.396, + 0.895, + 0.472 + ], + "angle": 0, + "content": "Given a particular set of input points \\((u_{j})_{j = 1}^{m}\\subset D\\) with corresponding quadrature weights \\(q_{j}\\) and output positions \\(v_{i}\\in D\\), we adopt the discrete-continuous convolutions (DISCO) framework for operator learning [26, 31] and approximate the continuous convolution (Eqn. 6) as" + }, + { + "type": "equation", + "bbox": [ + 0.564, + 0.477, + 0.893, + 0.518 + ], + "angle": 0, + "content": "\\[\n(k \\star g) (v _ {i}) \\approx \\sum_ {j = 1} ^ {m} \\kappa \\left(u _ {j} - v _ {i}\\right) \\cdot g \\left(x _ {j}\\right) q _ {j}. 
\\tag {7}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.498, + 0.524, + 0.895, + 0.871 + ], + "angle": 0, + "content": "Following [26], we parameterize \\(\\kappa\\) as a linear combination of predefined basis functions \\(\\kappa^{\\ell}\\): \\(\\kappa = \\sum_{\\ell=1}^{L} \\theta^{\\ell} \\cdot \\kappa^{\\ell}\\), where \\(\\theta^{\\ell}\\) are learnable parameters. We choose the linear piecewise basis from [26] as this achieves the best empirical results (see Sections B.3 & E of the supplementary). The convolutional kernel is thus parameterized by a finite number of parameters, independently of the grid on which the kernel is evaluated. The kernel is resolution-agnostic because we disentangle the resolution-agnostic basis and the discrete learnable parameters. The basis \\(\\kappa^{\\ell}\\) is defined in the function space, and will be discretized at the desired resolution; the discrete parameters \\(\\theta^{\\ell}\\) can be learned with gradient descent. Since we are operating on an equidistant grid on a compact subset of \\(\\mathbb{R}^2\\), we follow [26] and implement Eqn. 7 using standard convolutional kernels (thus enjoying the benefits of acceleration on GPUs using standard deep learning libraries) with two crucial modifications: 1) the kernel itself is defined as a linear combination of basis functions \\(\\kappa^{\\ell}\\), and 2) the size of the kernel scales with the input resolution so as to remain a fixed size w.r.t. the input domain. We adopt the same basis functions as [26] in our experiments, and we use the local integration operator as the resolution-agnostic building block for the measurement space and image space operators." + }, + { + "type": "text", + "bbox": [ + 0.499, + 0.871, + 0.895, + 0.901 + ], + "angle": 0, + "content": "DISCO vs Standard 2D Convolution with Varying Resolutions. As the input resolution increases (the discretization" + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "26007" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.076, + 0.091, + 0.473, + 0.257 + ], + "angle": 0, + "content": "becomes denser), DISCO [31] maintains the kernel size for each convolution and finally converges to a local integral. The standard 2D convolution kernel, however, gets increasingly smaller and finally converges to a point-wise operator (Fig. 1c). Although one could alleviate the issue of standard convolutions by interpolating the convolutional kernel shape to match the corresponding convolution patch sizes for different resolutions, the interpolated kernel will have artifacts that affect performance at new resolutions (Fig. 1d). DISCO, however, is agnostic to resolution changes as the kernel is in the function space." + }, + { + "type": "text", + "bbox": [ + 0.076, + 0.258, + 0.473, + 0.514 + ], + "angle": 0, + "content": "Global Features. A common neural operator architecture for learning global features is the Fourier neural operator (FNO) [24]. FNO takes the Fourier transform of the input, truncates the result beyond some fixed number of modes, and pointwise multiplies the result with a learned weight tensor, which is equivalent to a global convolution on the input by the convolution theorem. Interestingly, the forward process of MRI is a Fourier transformation, which means that local operations in measurement \\(\\mathbf{k}\\) space are equivalent to global operations in image \\(\\mathbf{x}\\) space and vice versa, due to their duality. 
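To make the parameterization in Eqn. 7 concrete, here is a small sketch of a DISCO-style kernel: fixed basis functions are combined with learnable coefficients and re-discretized at whatever grid spacing the input uses. The piecewise-linear radial basis below is an illustrative choice, not necessarily the exact basis used in the paper.

```python
import torch

def disco_kernel(theta, radius, resolution):
    """theta: (L,) learnable coefficients; radius: kernel support in domain
    units; resolution: grid points per domain unit. Returns a discretized 2D
    kernel whose physical support stays fixed as resolution grows."""
    L = theta.numel()
    size = int(2 * radius * resolution) + 1           # pixels spanned by the support
    coords = torch.linspace(-radius, radius, size)
    yy, xx = torch.meshgrid(coords, coords, indexing="ij")
    r = torch.sqrt(xx**2 + yy**2)
    kernel = torch.zeros(size, size)
    for l in range(L):                                # piecewise-linear "hat" basis
        center = radius * l / max(L - 1, 1)
        width = radius / max(L - 1, 1)
        kernel += theta[l] * torch.clamp(1 - (r - center).abs() / width, min=0)
    return kernel / resolution**2                     # quadrature weights q_j
```

Doubling `resolution` doubles the pixel footprint of the sampled kernel while its physical support is unchanged, which is exactly the behavior contrasted with fixed-size CNN kernels in Fig. 1c.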
Following FNO, we could apply a pointwise multiplication between the measurement \\(\\mathbf{k}\\) and a learned weight tensor to capture global image features. However, FNO truncates high frequencies, which are crucial for MRI reconstruction. To address this, we directly apply the DISCO local integration operator on the measurement space to capture global image features without feature map truncation." + }, + { + "type": "text", + "bbox": [ + 0.076, + 0.515, + 0.473, + 0.801 + ], + "angle": 0, + "content": "UDNO: the Building Block. Without loss of generality, we make both the image-space \\(\\mathrm{NO}_{\\mathrm{i}}\\) and \\(\\mathbf{k}\\) space \\(\\mathrm{NO}_{\\mathbf{k}}\\) be local neural operators that capture local features in the corresponding domain. Such a design learns both global and local image features due to domain duality. Motivations for adopting the U-shaped architecture are in Fig. 7 and Section A of the Supplementary. Each operator consists of multiple sub-layers, to which we refer as the U-Shaped DISCO Neural Operator, or UDNO. The motivation is that multi-scale designs have shown great success in capturing features at different scales in images and that U-shaped networks are among the most popular architectures in computer vision, demonstrating strong performance in various applications from medical imaging to diffusion [6, 33, 37]. Further, UDNO makes our framework very similar to an existing state-of-the-art E2E-VN [40], with the difference being standard convolutions replaced by DISCO operators. The UDNO follows the encoder/decoder architecture of the U-Net [37], replacing regular convolutions with DISCO layers." + }, + { + "type": "text", + "bbox": [ + 0.076, + 0.802, + 0.472, + 0.862 + ], + "angle": 0, + "content": "Loss. The parameters of the proposed neural operator are estimated from the training data by minimizing the structural similarity loss between the reconstruction \\(\\mathbf{x}\\) and the ground truth image \\(\\mathbf{x}^*\\) (the same as the E2E-VN [40]):" + }, + { + "type": "equation", + "bbox": [ + 0.178, + 0.863, + 0.47, + 0.88 + ], + "angle": 0, + "content": "\\[\n\\mathcal {L} (\\hat {\\mathbf {x}}, \\mathbf {x} ^ {*}) = - \\operatorname {S S I M} (\\hat {\\mathbf {x}}, \\mathbf {x} ^ {*}), \\tag {8}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.077, + 0.886, + 0.473, + 0.902 + ], + "angle": 0, + "content": "where SSIM is the Structural Similarity Index Measure [42]." + }, + { + "type": "image", + "bbox": [ + 0.503, + 0.09, + 0.885, + 0.267 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.498, + 0.286, + 0.895, + 0.358 + ], + "angle": 0, + "content": "Figure 3. Super resolution (denser discretization) in \\(\\mathbf{k}\\) space or image space increases the FOV or resolution of the reconstructed image. With denser discretization, NO maintains a resolution-agnostic kernel while CNN kernels become relatively smaller in size. Empirically our NO outperforms CNNs [40] (Section 4.4)." + }, + { + "type": "title", + "bbox": [ + 0.5, + 0.383, + 0.671, + 0.399 + ], + "angle": 0, + "content": "3.3. Super-Resolution" + }, + { + "type": "text", + "bbox": [ + 0.498, + 0.406, + 0.895, + 0.618 + ], + "angle": 0, + "content": "Neural operators enable zero-shot super-resolution. As shown in Fig. 3, increasing resolution corresponds to denser discretization between fixed minimum and maximum values, while the overall domain range remains constant. 
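In code, the image-space zero-shot evaluation described later in Section 4.4 reduces to handing the operator a denser discretization of the same domain; a hedged sketch, where `no_i` stands in for the trained image-space UDNO and the two-channel real/imaginary layout is assumed:

```python
import torch.nn.functional as F

def zero_shot_sr(x_low, no_i):
    """Evaluate a 320x320-trained image-space operator at 640x640 without
    fine-tuning. x_low: (B, 2, 320, 320) real/imaginary channels."""
    x_high = F.interpolate(x_low, scale_factor=2, mode="bilinear",
                           align_corners=False)  # denser grid, same domain
    # DISCO layers re-discretize their kernels on the finer grid, keeping a
    # fixed physical support; the same learned weights are reused as-is.
    return no_i(x_high)
```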
Due to the dual nature of frequency and image space, enhancing resolution in \\(\\mathbf{k}\\) space extends the field of view (FOV) in the reconstructed image, whereas increasing resolution in image space enhances the image's detail. Our proposed NO framework includes resolution-agnostic neural operators for both \\(\\mathbf{k}\\) space \\((\\mathrm{NO_k})\\) and image space \\((\\mathrm{NO_i})\\), facilitating zero-shot super-resolution in both domains. We present empirical zero-shot super-resolution results in Section 4.4, comparing our NO framework to E2E-VN [40], a CNN-based architecture with a similar design." + }, + { + "type": "title", + "bbox": [ + 0.5, + 0.631, + 0.634, + 0.648 + ], + "angle": 0, + "content": "4. Experiments" + }, + { + "type": "text", + "bbox": [ + 0.498, + 0.656, + 0.894, + 0.718 + ], + "angle": 0, + "content": "We discuss the datasets and experimental setup, followed by comparisons of our and baseline methods with different k undersampling rates and patterns. We conclude the section with zero-shot super-resolution and additional analysis." + }, + { + "type": "title", + "bbox": [ + 0.5, + 0.727, + 0.677, + 0.743 + ], + "angle": 0, + "content": "4.1. Dataset and Setup" + }, + { + "type": "text", + "bbox": [ + 0.5, + 0.75, + 0.892, + 0.78 + ], + "angle": 0, + "content": "Datasets: The fastMRI dataset [44] is a large and open dataset of knee and brain fully-sampled MRIs." + }, + { + "type": "text", + "bbox": [ + 0.5, + 0.781, + 0.892, + 0.825 + ], + "angle": 0, + "content": "- fastMRI knee: We use the multi-coil knee reconstruction dataset with 34,742 slices for training and 7,135 slices for evaluation. All samples contain data from 15 coils." + }, + { + "type": "text", + "bbox": [ + 0.5, + 0.826, + 0.892, + 0.884 + ], + "angle": 0, + "content": "- fastMRI brain: We use the T2 contrast subset of the multi-coil brain reconstruction dataset with 6,262 training slices and 502 evaluation slices. We filter for samples with data from 16 coils." + }, + { + "type": "list", + "bbox": [ + 0.5, + 0.781, + 0.892, + 0.884 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.5, + 0.886, + 0.894, + 0.902 + ], + "angle": 0, + "content": "Undersampling Patterns and Rates. We use equispaced," + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "26008" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.082, + 0.089, + 0.468, + 0.196 + ], + "angle": 0, + "content": "
<table><thead><tr><td rowspan="2">Category</td><td rowspan="2">Method</td><td colspan="2">fastMRI knee</td><td colspan="2">fastMRI brain</td></tr>
<tr><td>PSNR (dB)</td><td>SSIM</td><td>PSNR (dB)</td><td>SSIM</td></tr></thead><tbody>
<tr><td rowspan="2">Learning-free</td><td>Zero-filled</td><td>31.00±3.33</td><td>0.7848±0.0616</td><td>30.86±1.73</td><td>0.8065±0.0376</td></tr>
<tr><td>ℓ1-Wavelet [27]</td><td>25.67±3.91</td><td>0.5667±0.2001</td><td>28.68±1.31</td><td>0.6783±0.0684</td></tr>
<tr><td rowspan="3">Diffusion</td><td>CSGM [17]</td><td>26.52±3.21</td><td>0.6789±0.1220</td><td>-</td><td>-</td></tr>
<tr><td>ScoreMRI [5]</td><td>25.72±1.80</td><td>0.5789±0.0910</td><td>-</td><td>-</td></tr>
<tr><td>PnP-DM [43]</td><td>26.52±3.14</td><td>0.6383±0.1320</td><td>-</td><td>-</td></tr>
<tr><td rowspan="3">End-to-end</td><td>U-Net [37]</td><td>37.07±2.47</td><td>0.8803±0.0504</td><td>37.27±1.76</td><td>0.9211±0.0272</td></tr>
<tr><td>E2E-VN [40]</td><td>38.33±3.06</td><td>0.9048±0.0732</td><td>38.06±2.70</td><td>0.9620±0.0107</td></tr>
<tr><td>Ours</td><td>39.14±2.93</td><td>0.9219±0.0724</td><td>38.82±2.77</td><td>0.9621±0.0086</td></tr>
</tbody></table>
" + }, + { + "type": "table_caption", + "bbox": [ + 0.076, + 0.207, + 0.471, + 0.276 + ], + "angle": 0, + "content": "Table 1. MRI reconstruction performance on \\(4 \\times\\) equispaced undersampling. NO outperforms existing methods (classical, diffusion, and end-to-end). NO also shows consistent performance across k space undersampling patterns (Section 4.3). Zero-filled refers to reconstructing the image from zero-filled k space." + }, + { + "type": "text", + "bbox": [ + 0.076, + 0.306, + 0.47, + 0.412 + ], + "angle": 0, + "content": "random, magic, Gaussian, radial, and Poisson undersampling patterns [5, 44] and \\(2\\mathrm{x}\\), \\(4\\times\\), \\(6\\times\\), and \\(8\\times\\) undersampling rates (visualizations are in Fig. 8 in the Supplementary). Higher rates result in sparser \\(\\mathbf{k}\\) space samples and shorter imaging time at the cost of a more ill-posed/harder inversion process. Section B in the Supplementary provides additional undersampling details along with mask visualizations." + }, + { + "type": "text", + "bbox": [ + 0.076, + 0.412, + 0.47, + 0.534 + ], + "angle": 0, + "content": "Neural Operator Model. NO follows Fig. 2. The \\(\\mathrm{NO_k}\\) (k space neural operator) and \\(\\mathrm{NO_i}\\) (image space neural operator) are implemented as UDNOs with 2 input and output channels. This is because complex numbers, commonly used in MRI data, are represented using two channels: one for the real part and one for the imaginary part. We provide UDNO details, DISCO kernel basis configurations and training hyper-parameters in Section B of the Supplementary." + }, + { + "type": "text", + "bbox": [ + 0.077, + 0.535, + 0.469, + 0.58 + ], + "angle": 0, + "content": "Baseline: Compressed Sensing. We compare with a learning-free compressed sensing method with wavelet \\(\\ell_1\\) regularization for a classical comparison [27]." + }, + { + "type": "text", + "bbox": [ + 0.076, + 0.581, + 0.47, + 0.702 + ], + "angle": 0, + "content": "Baselines: Unrolled Networks. We compare with the E2E-VN (End-to-End VarNet) [40], which shares a similar network structure with our approach, but uses the standard CNNs with resolution-dependent convolutions. Since E2E-VN [40] is only trained on specific resolution, we also consider E2E-VN++, where we train [40] with multiple-patterns that match our NO's training data for fair comparisons. Number of cascades \\( t \\) is set to 12 following [40]." + }, + { + "type": "text", + "bbox": [ + 0.076, + 0.703, + 0.47, + 0.869 + ], + "angle": 0, + "content": "Baselines: Diffusion. Diffusion models have shown strong performance on inverse problems such as MRI reconstruction. We compare our approach to three prominent diffusion-based methods that leverage these capabilities: Score-based diffusion models for accelerated MRI (ScoreMRI) [5], Compressive Sensing using Generative Models (CSGM) [17], and Plug-and-Play Diffusion Models (PnP-DM) [43]. We replicate the experimental settings described in their respective papers. While they report results on MVUE targets, we evaluate metrics on RSS targets at inference for a fair comparison with our methods." + }, + { + "type": "text", + "bbox": [ + 0.077, + 0.871, + 0.469, + 0.901 + ], + "angle": 0, + "content": "Hardware and Training. While models can be trained on a single RTX 4090 GPU, we accelerate the training of our" + }, + { + "type": "text", + "bbox": [ + 0.499, + 0.092, + 0.892, + 0.121 + ], + "angle": 0, + "content": "model and baselines with a batch size of 16 across 4 A100 (40G) GPUs. 
We follow baseline settings for comparison." + }, + { + "type": "text", + "bbox": [ + 0.499, + 0.122, + 0.892, + 0.198 + ], + "angle": 0, + "content": "Evaluation Protocols. We evaluate image reconstruction performance using normalized mean square error (NMSE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM), which are standard for the fastMRI dataset and MRI [44]." + }, + { + "type": "title", + "bbox": [ + 0.5, + 0.208, + 0.796, + 0.239 + ], + "angle": 0, + "content": "4.2. Reconstruction with Different k Space Undersampling Patterns" + }, + { + "type": "text", + "bbox": [ + 0.498, + 0.246, + 0.892, + 0.337 + ], + "angle": 0, + "content": "We train our NO model, E2E-VN and E2E-VN++ on \\(4 \\times\\) equispaced samples for 50 epochs. The performance on the single \\(4 \\times\\) equispaced undersampling pattern is reported in Table 1. We further fine-tune NO and E2E-VN++ for an additional 20 epochs on a small dataset (3,474 samples) of equispaced, random, magic, Gaussian, radial, and Poisson samples." + }, + { + "type": "text", + "bbox": [ + 0.498, + 0.338, + 0.892, + 0.488 + ], + "angle": 0, + "content": "fastMRI Knee. We also provide detailed metric results in Table 2, with a line plot in Fig. 6a, where our NO achieves consistent performance across different patterns. Across all patterns, we achieve an average improvement of \\(4.17\\mathrm{dB}\\) PSNR and \\(8.4\\%\\) SSIM over the E2E-VN. On rectilinear patterns (equispaced, magic, random), our performance remains comparable to E2E-VN++ (0.3 dB PSNR gain). Across the irregular patterns (radial, Gaussian, Poisson), we achieve a 0.6 dB PSNR improvement over the improved baseline (E2E-VN++)." + }, + { + "type": "text", + "bbox": [ + 0.498, + 0.489, + 0.893, + 0.624 + ], + "angle": 0, + "content": "fastMRI Brain. On irregular patterns, we achieve an average improvement of 4.7 dB PSNR and \\(10\\%\\) SSIM over the E2E-VN. On rectilinear patterns (equispaced, magic, random), our performance remains comparable to the E2E-VN. Detailed numbers are reported in Table 7 of the Supplementary. Visualization. We observe visual improvements in reconstruction integrity (see Fig. 4). Our model is robust to inference across multiple patterns. We highlight important local regions where our NO is better." + }, + { + "type": "text", + "bbox": [ + 0.498, + 0.625, + 0.892, + 0.716 + ], + "angle": 0, + "content": "The setting here, where multiple patterns are trained together, reflects a common clinical setting in which the undersampling patterns are known. We also consider the setting where undersampling patterns are unknown. Zero-shot evaluations of the equispaced-trained \\((4\\times)\\) model across different patterns show that our NO achieves a 1.8 dB PSNR gain over E2E-VN." + }, + { + "type": "title", + "bbox": [ + 0.5, + 0.726, + 0.776, + 0.757 + ], + "angle": 0, + "content": "4.3. Reconstruction with Different k Space Undersampling Rates" + }, + { + "type": "text", + "bbox": [ + 0.498, + 0.765, + 0.892, + 0.84 + ], + "angle": 0, + "content": "We train our NO model, E2E-VN and E2E-VN++ on \\(4 \\times\\) equispaced samples for 50 epochs. We further fine-tune NO and E2E-VN++ for an additional 20 epochs on a small dataset (3,474 samples) of \\(4 \\times\\), \\(6 \\times\\), \\(8 \\times\\), and \\(16 \\times\\) equispaced samples." + }, + { + "type": "text", + "bbox": [ + 0.498, + 0.84, + 0.892, + 0.901 + ], + "angle": 0, + "content": "For fastMRI Knee, we report the multi-rate performance in Fig. 
6b and Table 6 of the Supplementary. For fastMRI Brain, we report the multi-rate performance in Table 8 of the Supplementary. Our neural operator model consistently out" + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "26009" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.087, + 0.08, + 0.885, + 0.356 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.076, + 0.365, + 0.893, + 0.422 + ], + "angle": 0, + "content": "Figure 4. MRI reconstructions with different undersampling patterns of various methods: NO (ours), E2E-VN++, E2E-VN [40], L1-Wavelet (learning-free compressed sensing) [27], and CSGM (diffusion) [17]. NO reconstructs high-fidelity images across various downsampling patterns. Zoom-in view in the lower right of each image. Row 1: \\(4 \\times\\) Equispaced undersampling. Row 2: \\(4 \\times\\) Gaussian 2d undersampling. Row 3: \\(4 \\times\\) Radial 2d undersampling." + }, + { + "type": "table", + "bbox": [ + 0.081, + 0.435, + 0.891, + 0.535 + ], + "angle": 0, + "content": "
<table><thead><tr><td colspan="2" rowspan="2">Pattern</td><td colspan="3">PSNR (dB) ↑</td><td colspan="3">SSIM ↑</td><td colspan="3">NMSE ↓</td></tr>
<tr><td>NO (ours)</td><td>E2E-VN++</td><td>E2E-VN [40]</td><td>NO (ours)</td><td>E2E-VN++</td><td>E2E-VN [40]</td><td>NO (ours)</td><td>E2E-VN++</td><td>E2E-VN [40]</td></tr></thead><tbody>
<tr><td rowspan="3">In-domain</td><td>Equispaced</td><td>37.40 ± 2.61</td><td>37.50 ± 2.79</td><td>38.35 ± 3.05</td><td>0.899 ± 0.072</td><td>0.900 ± 0.072</td><td>0.905 ± 0.073</td><td>0.007 ± 0.006</td><td>0.007 ± 0.006</td><td>0.006 ± 0.006</td></tr>
<tr><td>Random</td><td>36.66 ± 2.48</td><td>36.79 ± 2.65</td><td>37.34 ± 2.75</td><td>0.891 ± 0.070</td><td>0.892 ± 0.072</td><td>0.897 ± 0.071</td><td>0.008 ± 0.006</td><td>0.008 ± 0.007</td><td>0.007 ± 0.005</td></tr>
<tr><td>Magic</td><td>38.46 ± 2.99</td><td>38.34 ± 3.06</td><td>38.94 ± 3.55</td><td>0.914 ± 0.070</td><td>0.914 ± 0.069</td><td>0.917 ± 0.071</td><td>0.006 ± 0.006</td><td>0.007 ± 0.006</td><td>0.006 ± 0.006</td></tr>
<tr><td rowspan="3">OOD</td><td>Radial</td><td>36.23 ± 2.21</td><td>35.50 ± 2.24</td><td>27.02 ± 3.92</td><td>0.900 ± 0.071</td><td>0.892 ± 0.070</td><td>0.764 ± 0.070</td><td>0.009 ± 0.006</td><td>0.011 ± 0.009</td><td>0.069 ± 0.030</td></tr>
<tr><td>Poisson</td><td>33.42 ± 2.34</td><td>33.01 ± 2.67</td><td>23.61 ± 3.85</td><td>0.878 ± 0.062</td><td>0.873 ± 0.060</td><td>0.687 ± 0.083</td><td>0.016 ± 0.008</td><td>0.017 ± 0.008</td><td>0.152 ± 0.068</td></tr>
<tr><td>Gaussian</td><td>31.25 ± 2.70</td><td>30.65 ± 2.55</td><td>23.14 ± 3.95</td><td>0.863 ± 0.058</td><td>0.851 ± 0.059</td><td>0.673 ± 0.088</td><td>0.024 ± 0.005</td><td>0.028 ± 0.007</td><td>0.170 ± 0.073</td></tr>
</tbody></table>
" + }, + { + "type": "table_caption", + "bbox": [ + 0.076, + 0.545, + 0.895, + 0.603 + ], + "angle": 0, + "content": "Table 2. MRI reconstruction performance across different undersampling patterns. Across multiple patterns, NO maintains reconstruction performance, while baselines do not perform well on out-of-domain (OOD) undersampling patterns (Poisson, radial, Gaussian). Metrics are calculated for the fastMRI knee dataset with a fixed \\(4 \\times\\) acceleration rate. We observe that the E2E-VN overfits to rectilinear patterns, and drops off heavily when evaluated on the irregular patterns (Poisson, radial, Gaussian)." + }, + { + "type": "text", + "bbox": [ + 0.076, + 0.627, + 0.471, + 0.673 + ], + "angle": 0, + "content": "performs the E2E-VN [40], achieving 3.2 dB higher PSNR and \\(5.8\\%\\) higher SSIM on fastMRI knee and 2.0 dB higher PSNR and \\(7.5\\%\\) higher SSIM on fastMRI brain." + }, + { + "type": "title", + "bbox": [ + 0.077, + 0.682, + 0.33, + 0.699 + ], + "angle": 0, + "content": "4.4. Zero-Shot Super-Resolution" + }, + { + "type": "text", + "bbox": [ + 0.076, + 0.705, + 0.471, + 0.734 + ], + "angle": 0, + "content": "We study \\(\\mathrm{NO_i}\\) and \\(\\mathrm{NO_k}\\) zero-shot super-resolution performance and compare them with E2E-VN [40]." + }, + { + "type": "text", + "bbox": [ + 0.076, + 0.735, + 0.471, + 0.902 + ], + "angle": 0, + "content": "Higher MRI Resolution with \\(\\mathrm{NO_i}\\) super-resolution. We train our NO model and the E2E-VN models on \\(320\\times 320\\) knee samples. We then keep the \\(\\mathrm{NO_k}\\) unchanged and use bilinear interpolation to increase the input to \\(\\mathrm{NO_i}\\) to \\(640\\times\\) 640. We directly evaluate models without fine-tuning against fully sampled \\(640\\times 640\\) bilinear interpolated ground truth reconstructions. For [40] relying on CNNs, the absolute kernel size stays the same, and the ratio of kernel size over feature map is halved, while the ratio of NO stays the same. Compared to our NO model, the CNN-based E2E-VN [40] produces reconstructions with noticeable artifacts and higher" + }, + { + "type": "text", + "bbox": [ + 0.498, + 0.627, + 0.895, + 0.825 + ], + "angle": 0, + "content": "PSNR and image reconstruction quality (Fig. 1d and Fig. 5b). Larger MRI FOV with \\(\\mathrm{NO_k}\\) super-resolution. k space super-resolution expands the MRI reconstruction field of view (FOV). To validate model performance, we design a proof-of-concept FOV experiment. Our NO model and the E2E-VN [40] train on \\(160\\times 160\\) downsampled k space brain slice samples, where sparse k space sampling results in a reduced FOV in image space. We then perform zero-shot inference on \\(320\\times 320\\) full-FOV k space data. Although neither model encounters data outside the \\(160\\times 160\\) FOV during training, our NO model reconstructs features in this extended region with significantly fewer artifacts compared to E2E-VN [40] (visualizations in Fig. 5a)." + }, + { + "type": "title", + "bbox": [ + 0.5, + 0.833, + 0.691, + 0.849 + ], + "angle": 0, + "content": "4.5. Additional Analysis" + }, + { + "type": "text", + "bbox": [ + 0.498, + 0.856, + 0.895, + 0.901 + ], + "angle": 0, + "content": "Model Inference and Tuning Time. In Table 3, we compare the model development and inference times of our end-to-end neural operator (NO) with diffusion models. 
We" + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.521, + 0.958 + ], + "angle": 0, + "content": "26010" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.08, + 0.081, + 0.484, + 0.276 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.493, + 0.081, + 0.892, + 0.273 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.076, + 0.291, + 0.897, + 0.364 + ], + "angle": 0, + "content": "Figure 5. Zero-shot super-resolution results in both extended FOV \\((\\mathrm{NO_k})\\) and high-resolution image space \\((\\mathrm{NO_i})\\). (a) Zero-shot extended FOV reconstructions: Our NO model shows fewer artifacts and higher PSNR in the reconstructed brain slices compared to the CNN-based E2E-VN [40] on \\(4\\times\\) Gaussian, despite neither model seeing data outside the initial \\(160\\times 160\\) FOV during training. (b) Zero-shot super-resolution reconstructions in image space on \\(2\\times\\) radial: with input resolution increased to \\(640\\times 640\\) through bilinear interpolation, our NO model preserves reconstruction quality, while E2E-VN [40] produces visible artifacts." + }, + { + "type": "image", + "bbox": [ + 0.083, + 0.385, + 0.287, + 0.492 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.287, + 0.384, + 0.465, + 0.492 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.076, + 0.507, + 0.471, + 0.592 + ], + "angle": 0, + "content": "Figure 6. Performance across different undersampling patterns and rates of ours and baseline methods: end-to-end [40], diffusion [17] and learning-free [27]. Our NO remains relatively consistent in performance when evaluated at different undersampling patterns and rates. Note that a high undersampling rate makes the task more difficult and thus a worse score is expected." + }, + { + "type": "table", + "bbox": [ + 0.081, + 0.604, + 0.468, + 0.705 + ], + "angle": 0, + "content": "
Category | Method | Inference Time (s) | Tuning* Required
Learning-free | \( \ell_1 \)-Wavelet [27] | 5.45 | Yes
Diffusion | CSGM [17] | 93.84 | Yes
 | PnP-DM [43] | 84.46 | Yes
 | ScoreMRI [5] | 96.16 | Yes
Variational | E2E-VN [40] | 0.104 | No
 | NO (ours) | 0.158 | No
" + }, + { + "type": "text", + "bbox": [ + 0.076, + 0.714, + 0.472, + 0.828 + ], + "angle": 0, + "content": "Table 3. Inference and tuning time of methods tested on NVIDIA A100. NO is approximately \\(600 \\times\\) faster than diffusion, and \\(35 \\times\\) faster than the classical baseline based on learning-free compressed sensing methods. *Tuning refers to the \\(\\mathbf{k}\\) undersampling pattern-specific hyperparameter tuning during inference/after model training. Both the \\(\\ell_{1}\\)-Wavelet [27] (\\(\\sim 0.5\\) hrs per pattern) and diffusion methods (\\(\\sim 6\\) hrs per pattern) require pattern-specific tuning, while our NO is trained once for all patterns." + }, + { + "type": "text", + "bbox": [ + 0.076, + 0.84, + 0.473, + 0.903 + ], + "angle": 0, + "content": "observe that diffusion models require pattern-specific hyperparameter tuning and are over 600 times slower in inference. MRI-diffusion models [5, 17, 43] are unconditionally trained and undersampling patterns are not available during training." + }, + { + "type": "text", + "bbox": [ + 0.498, + 0.388, + 0.895, + 0.493 + ], + "angle": 0, + "content": "Thus, we empirically tune hyperparameters such as learning rate and guidance scale for each downsampling pattern for approximately 6 hours each time. Traditional learning-free methods like \\(\\ell_1\\)-Wavelet [27] still require hyperparameter tuning for specific \\(\\mathbf{k}\\) undersampling patterns during optimization. Consequently, end-to-end methods, e.g. NO, are significantly more efficient." + }, + { + "type": "text", + "bbox": [ + 0.498, + 0.494, + 0.895, + 0.555 + ], + "angle": 0, + "content": "Performance Under Same Parameter Size. We show our NO outperforms baseline unrolled network E2E-VN [40] on different patterns and rates with a similar architecture and number of parameters in the Supplementary." + }, + { + "type": "title", + "bbox": [ + 0.5, + 0.568, + 0.619, + 0.584 + ], + "angle": 0, + "content": "5. Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.498, + 0.593, + 0.895, + 0.834 + ], + "angle": 0, + "content": "Our unified model for compressed sensing MRI addresses the need to train multiple models for different measurement undersampling patterns and image resolutions, a common clinical issue. By leveraging discretization-agnostic neural operators, the model captures both local and global features, enabling flexible MRI reconstruction. With extensive experiments on fastMRI knee and brain datasets, our model maintains consistent performance across undersampling patterns and outperforms state-of-the-art methods in accuracy and robustness. It also enhances zero-shot super-resolution and extended FOV (field of view). The work has some limitations: 1) We only explore one neural operator design, DISCO, and future work could explore other operator learning architectures for MRI. 2) We only benchmark the image reconstruction performance without diagnostic accuracy, which is of more clinical relevance." + }, + { + "type": "text", + "bbox": [ + 0.498, + 0.835, + 0.895, + 0.895 + ], + "angle": 0, + "content": "In short, our approach offers a versatile solution for efficient MRI, with significant utility in clinical settings where flexibility and adaptability to varying undersampling patterns and image resolutions are crucial." 
+ }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.518, + 0.958 + ], + "angle": 0, + "content": "26011" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.078, + 0.091, + 0.238, + 0.108 + ], + "angle": 0, + "content": "Acknowledgement" + }, + { + "type": "text", + "bbox": [ + 0.076, + 0.114, + 0.473, + 0.256 + ], + "angle": 0, + "content": "This work is supported in part by ONR (MURI grant N000142312654 and N000142012786). J.W. is supported in part by Schmidt Sciences. A.S.J. and A.C. are supported in part by the Undergraduate Research Fellowships (SURF) at Caltech. Z.W. is supported in part by the Amazon AI4Science Fellowship. B.T. is supported in part by the Swartz Foundation Fellowship. M.L.-S. is supported in part by the Mellon Mays Undergraduate Fellowship. A.A. is supported in part by Bren endowed chair and the AI2050 senior fellow program at Schmidt Sciences." + }, + { + "type": "title", + "bbox": [ + 0.079, + 0.282, + 0.175, + 0.297 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.087, + 0.306, + 0.472, + 0.363 + ], + "angle": 0, + "content": "[1] Kamyar Azizzadenesheli, Nikola Kovachki, Zongyi Li, Miguel Liu-Schiaffini, Jean Kossaifi, and Anima Anandkumar. Neural operators for accelerating scientific simulations and design. Nature Reviews Physics, pages 1-9, 2024. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.087, + 0.364, + 0.473, + 0.405 + ], + "angle": 0, + "content": "[2] Max Born and Emil Wolf. Principles of optics: electromagnetic theory of propagation, interference and diffraction of light. Elsevier, 2013. 15" + }, + { + "type": "ref_text", + "bbox": [ + 0.087, + 0.406, + 0.472, + 0.447 + ], + "angle": 0, + "content": "[3] Scott Shaobing Chen, David L Donoho, and Michael A Saunders. Atomic decomposition by basis pursuit. SIAM review, 43(1):129-159, 2001. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.087, + 0.449, + 0.472, + 0.517 + ], + "angle": 0, + "content": "[4] Yutong Chen, Carola-Bibiane Schonlieb, Pietro Liò, Tim Leiner, Pier Luigi Dragotti, Ge Wang, Daniel Rueckert, David Firmin, and Guang Yang. AI-based reconstruction for fast mri—a systematic review and meta-analysis. Proceedings of the IEEE, 110(2):224-245, 2022. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.087, + 0.519, + 0.472, + 0.56 + ], + "angle": 0, + "content": "[5] Hyungjin Chung and Jong Chul Ye. Score-based diffusion models for accelerated mri. Medical Image Analysis, 80: 102479, 2022. 1, 2, 6, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.087, + 0.562, + 0.472, + 0.617 + ], + "angle": 0, + "content": "[6] Florinel-Alin Croitoru, Vlad Hondru, Radu Tudor Ionescu, and Mubarak Shah. Diffusion models in vision: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(9):10850-10869, 2023. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.087, + 0.619, + 0.472, + 0.701 + ], + "angle": 0, + "content": "[7] Salman UH Dar, Mahmut Yurt, Mohammad Shahdloo, Muhammed Emrullah Ildiz, Berk Tinaz, and Tolga Cukur. Prior-guided image reconstruction for accelerated multicontrast mri via generative adversarial networks. IEEE Journal of Selected Topics in Signal Processing, 14(6):1072-1087, 2020. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.087, + 0.703, + 0.472, + 0.745 + ], + "angle": 0, + "content": "[8] Mohammad Zalbagi Darestani and Reinhard Heckel. Accelerated mri with un-trained neural networks. IEEE Transactions on Computational Imaging, 7:724-733, 2021. 
3" + }, + { + "type": "ref_text", + "bbox": [ + 0.087, + 0.746, + 0.472, + 0.773 + ], + "angle": 0, + "content": "[9] D.L. Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289-1306, 2006. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.08, + 0.775, + 0.472, + 0.817 + ], + "angle": 0, + "content": "[10] Karol Gregor and Yann LeCun. Learning fast approximations of sparse coding. In Proceedings of the 27th International Conference on Machine Learning, pages 399-406, 2010. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.08, + 0.818, + 0.472, + 0.9 + ], + "angle": 0, + "content": "[11] Mark A Griswold, Peter M Jakob, Robin M Heidemann, Mathias Nittka, Vladimir Jellus, Jianmin Wang, Berthold Kiefer, and Axel Haase. Generalized autocalibrating partially parallel acquisitions (grappa). Magnetic Resonance in Medicine: An Official Journal of the International Society for Magnetic Resonance in Medicine, 47(6):1202-1210, 2002. 1, 2" + }, + { + "type": "list", + "bbox": [ + 0.08, + 0.306, + 0.473, + 0.9 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.093, + 0.892, + 0.121 + ], + "angle": 0, + "content": "[12] Charles W Groetsch and CW Groetsch. Inverse problems in the mathematical sciences. Springer, 1993. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.123, + 0.894, + 0.164 + ], + "angle": 0, + "content": "[13] Steven Guan, Ko-Tsung Hsu, and Parag V Chitnis. Fourier neural operator network for fast photoacoustic wave simulations. Algorithms, 16(2):124, 2023. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.166, + 0.894, + 0.222 + ], + "angle": 0, + "content": "[14] Alper Güngör, Salman UH Dar, Şaban Öztürk, Yilmaz Korkmaz, Hasan A Bedel, Gokberk Elmas, Muzaffer Ozbey, and Tolga Çukur. Adaptive diffusion priors for accelerated mri reconstruction. Medical image analysis, 88:102872, 2023. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.224, + 0.894, + 0.293 + ], + "angle": 0, + "content": "[15] Kerstin Hammernik, Teresa Klatzer, Erich Kobler, Michael P Recht, Daniel K Sodickson, Thomas Pock, and Florian Knoll. Learning a variational network for reconstruction of accelerated mri data. Magnetic resonance in medicine, 79(6): 3055-3071, 2018. 1, 3, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.296, + 0.894, + 0.35 + ], + "angle": 0, + "content": "[16] DJ Husband, KA Grant, and CS Romaniuk. Mri in the diagnosis and treatment of suspected malignant spinal cord compression. The British journal of radiology, 74:15-23, 2001. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.354, + 0.894, + 0.409 + ], + "angle": 0, + "content": "[17] Ajil Jalal, Marius Arvinte, Giannis Daras, Eric Price, Alexandros G Dimakis, and Jonathan I Tamir. Robust compressed sensing mri with deep generative priors. Advances in Neural Information Processing Systems, 2021. 2, 6, 7, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.412, + 0.894, + 0.466 + ], + "angle": 0, + "content": "[18] Patricia M Johnson and Maria Drangova. Conditional generative adversarial network for 3d rigid-body motion correction in mri. Magnetic resonance in medicine, 82(3):901-910, 2019. 1, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.469, + 0.894, + 0.523 + ], + "angle": 0, + "content": "[19] Christoph Juchem, Omar M Nahnass, Terence W Nixon, and Robin A de Graaf. Multi-slice mri with the dynamic multicoil technique. NMR in Biomedicine, 28(11):1526-1534, 2015. 
3" + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.526, + 0.892, + 0.569 + ], + "angle": 0, + "content": "[20] Dow-Mu Koh and David J Collins. Diffusion-weighted mri in the body: applications and challenges in oncology. American Journal of Roentgenology, 188(6):1622-1635, 2007. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.571, + 0.894, + 0.64 + ], + "angle": 0, + "content": "[21] Nikola Kovachki, Zongyi Li, Burigede Liu, Kamyar Azizzadeneheshi, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Neural operator: Learning maps between function spaces with applications to pdes. Journal of Machine Learning Research, 24(89):1-97, 2023. 1, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.642, + 0.894, + 0.683 + ], + "angle": 0, + "content": "[22] Denis Le Bihan. Looking into the functional architecture of the brain with diffusion mri. Nature reviews neuroscience, 4 (6):469-480, 2003. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.686, + 0.894, + 0.741 + ], + "angle": 0, + "content": "[23] Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Neural operator: Graph kernel network for partial differential equations. 2020. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.743, + 0.894, + 0.799 + ], + "angle": 0, + "content": "[24] Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Fourier neural operator for parametric partial differential equations. 2021. 1, 3, 5, 11" + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.801, + 0.894, + 0.87 + ], + "angle": 0, + "content": "[25] Zongyi Li, Hongkai Zheng, Nikola Kovachki, David Jin, Haoxuan Chen, Burigede Liu, Kamyar Azizzadenesheli, and Anima Anandkumar. Physics-informed neural operator for learning partial differential equations. ACM/JMS Journal of Data Science, 1(3):1-27, 2024. 1, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.873, + 0.894, + 0.901 + ], + "angle": 0, + "content": "[26] Miguel Liu-Schiaffini, Julius Berner, Boris Bonev, Thorsten Kurth, Kamyar Azizzadenesheli, and Anima Anandkumar." + }, + { + "type": "list", + "bbox": [ + 0.503, + 0.093, + 0.894, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.521, + 0.957 + ], + "angle": 0, + "content": "26012" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.108, + 0.092, + 0.472, + 0.134 + ], + "angle": 0, + "content": "Neural operators with localized integral and differential kernels. In *Forty-first International Conference on Machine Learning*, 2024. 1, 3, 4, 11, 14" + }, + { + "type": "ref_text", + "bbox": [ + 0.08, + 0.136, + 0.472, + 0.19 + ], + "angle": 0, + "content": "[27] Michael Lustig, David Donoho, and John M Pauly. Sparse MRI: The application of compressed sensing for rapid MR imaging. Magn. Reson. Med., 58(6):1182-1195, 2007. 2, 6, 7, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.08, + 0.192, + 0.471, + 0.234 + ], + "angle": 0, + "content": "[28] Michael Lustig, David L Donoho, Juan M Santos, and John M Pauly. Compressed sensing mri. IEEE signal processing magazine, 25(2):72-82, 2008. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.08, + 0.236, + 0.472, + 0.305 + ], + "angle": 0, + "content": "[29] Morteza Mardani, Qingyun Sun, David Donoho, Vardan Papyan, Hatef Monajemi, Shreyas Vasanawala, and John Pauly. Neural proximal gradient descent for compressive imaging. 
Advances in Neural Information Processing Systems, 31, 2018. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.08, + 0.307, + 0.472, + 0.376 + ], + "angle": 0, + "content": "[30] Mark Murphy, Marcus Alley, James Demmel, Kurt Keutzer, Shreyas Vasanawala, and Michael Lustig. Fast 11-spirit compressed sensing parallel imaging mri: scalable parallel implementation and clinically feasible runtime. IEEE transactions on medical imaging, 31(6):1250-1262, 2012. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.08, + 0.377, + 0.471, + 0.433 + ], + "angle": 0, + "content": "[31] Jeremy Ocampo, Matthew A Price, and Jason D McEwen. Scalable and equivariant spherical cnns by discrete-continuous (disco) convolutions. arXiv preprint arXiv:2209.13603, 2022. 1, 3, 4, 5, 12, 14" + }, + { + "type": "ref_text", + "bbox": [ + 0.08, + 0.435, + 0.472, + 0.517 + ], + "angle": 0, + "content": "[32] Jaideep Pathak, Shashank Subramanian, Peter Harrington, Sanjeev Raja, Ashesh Chattopadhyay, Morteza Mardani, Thorsten Kurth, David Hall, Zongyi Li, Kamyar Azizzadenesheli, et al. Fourcastnet: A global data-driven high-resolution weather model using adaptive fourier neural operators. arXiv preprint arXiv:2202.11214, 2022. 1, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.08, + 0.519, + 0.472, + 0.573 + ], + "angle": 0, + "content": "[33] William Peebles and Saining Xie. Scalable diffusion models with transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4195-4205, 2023. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.08, + 0.576, + 0.472, + 0.646 + ], + "angle": 0, + "content": "[34] Bogdan Raonic, Roberto Molinaro, Tim De Ryck, Tobias Rohner, Francesca Bartolucci, Rima Alaifari, Siddhartha Mishra, and Emmanuel de Bezenac. Convolutional neural operators for robust and accurate learning of pdes. Advances in Neural Information Processing Systems, 36, 2024. 1, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.08, + 0.647, + 0.471, + 0.702 + ], + "angle": 0, + "content": "[35] Meer Mehran Rashid, Tanu Pittie, Souvik Chakraborty, and NM Anoop Krishnan. Learning the stress-strain fields in digital composites using fourier neural operator. Iscience, 25 (11), 2022. 1, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.08, + 0.704, + 0.471, + 0.759 + ], + "angle": 0, + "content": "[36] J Craig Richardson, Richard W Bowtell, Karsten Mäder, and Colin D Melia. Pharmaceutical applications of magnetic resonance imaging (mri). Advanced drug delivery reviews, 57 (8):1191-1209, 2005. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.08, + 0.761, + 0.472, + 0.844 + ], + "angle": 0, + "content": "[37] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In Medical image computing and computer-assisted intervention-MICCAI 2015: 18th international conference, Munich, Germany, October 5-9, 2015, proceedings, part III 18, pages 234-241. Springer, 2015. 2, 5, 6, 11" + }, + { + "type": "ref_text", + "bbox": [ + 0.08, + 0.846, + 0.472, + 0.901 + ], + "angle": 0, + "content": "[38] V Seifert, M Zimmermann, C Trantakis, H-E Vitzthum, K Kühnel, A Raabe, F Bootz, J-P Schneider, F Schmidt, and J Dietrich. Open mri-guided neurosurgery. Acta neurochirurgica, 141:455-464, 1999. 
1" + }, + { + "type": "list", + "bbox": [ + 0.08, + 0.092, + 0.472, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.093, + 0.894, + 0.162 + ], + "angle": 0, + "content": "[39] Dilbag Singh, Anmol Monga, Hector L de Moura, Xiaoxia Zhang, Marcelo VW Zibetti, and Ravinder R Regatte. Emerging trends in fast mri using deep-learning reconstruction on undersampled k-space data: a systematic review. Bioengineering, 10(9):1012, 2023. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.164, + 0.895, + 0.274 + ], + "angle": 0, + "content": "[40] Anuroop Sriram, Jure Zbontar, Tullie Murrell, Aaron Defazio, C Lawrence Zitnick, Nafissa Yakubova, Florian Knoll, and Patricia Johnson. End-to-end variational networks for accelerated mri reconstruction. In Medical Image Computing and Computer Assisted Intervention-MICCAI 2020: 23rd International Conference, Lima, Peru, October 4-8, 2020, Proceedings, Part II 23, pages 64-73. Springer, 2020. 1, 2, 3, 4, 5, 6, 7, 8, 11, 12, 13" + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.276, + 0.894, + 0.317 + ], + "angle": 0, + "content": "[41] Jian Sun, Huibin Li, Zongben Xu, et al. Deep admm-net for compressive sensing mri. Advances in neural information processing systems, 29, 2016. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.318, + 0.894, + 0.36 + ], + "angle": 0, + "content": "[42] Zhou Wang, Eero P Simoncelli, and Alan C Bovik. Multiscale structural similarity for image quality assessment. In Asilomar Conference on Signals, Systems & Computers, 2003. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.504, + 0.361, + 0.895, + 0.43 + ], + "angle": 0, + "content": "[43] Zihui Wu, Yu Sun, Yifan Chen, Bingliang Zhang, Yisong Yue, and Katherine Bouman. Principled probabilistic imaging using diffusion models as plug-and-play priors. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. 2, 6, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.504, + 0.431, + 0.895, + 0.487 + ], + "angle": 0, + "content": "[44] Jure Zbontar, Florian Knoll, Anuroop Sriram, Tullie Murrell, Zhengnan Huang, Matthew J Muckley, Aaron Defazio, et al. fastmri: An open dataset and benchmarks for accelerated mri. arXiv preprint arXiv:1811.08839, 2018. 
3, 5, 6, 12" + }, + { + "type": "list", + "bbox": [ + 0.503, + 0.093, + 0.895, + 0.487 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.945, + 0.52, + 0.957 + ], + "angle": 0, + "content": "26013" + } + ] +] \ No newline at end of file diff --git a/2025/A Unified Model for Compressed Sensing MRI Across Undersampling Patterns/e66975d9-0abf-4453-9e0f-887ea6234025_origin.pdf b/2025/A Unified Model for Compressed Sensing MRI Across Undersampling Patterns/e66975d9-0abf-4453-9e0f-887ea6234025_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..6b726884ed3439d298cd5842e70bd3d63b00731e --- /dev/null +++ b/2025/A Unified Model for Compressed Sensing MRI Across Undersampling Patterns/e66975d9-0abf-4453-9e0f-887ea6234025_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a8013bc7eaa660a729d2339109d2a2d9d4c18921b65fc91433759a8bfa051472 +size 9829469 diff --git a/2025/A Unified Model for Compressed Sensing MRI Across Undersampling Patterns/full.md b/2025/A Unified Model for Compressed Sensing MRI Across Undersampling Patterns/full.md new file mode 100644 index 0000000000000000000000000000000000000000..4f189c7f547e3ea37c2c8a06bf71a81974d086b5 --- /dev/null +++ b/2025/A Unified Model for Compressed Sensing MRI Across Undersampling Patterns/full.md @@ -0,0 +1,289 @@ +# A Unified Model for Compressed Sensing MRI Across Undersampling Patterns + +Armeet Singh Jatyani* Miguel Liu-Schiaffini + +Jiayun Wang* Aditi Chandrashekar Zihui Wu Bahareh Tolooshams Anima Anandkumar Ifornia Institute of Technology + +{armeet,peterw,ajchandr,zwu2,mliuschi,btoloosh,ana}@caltech.edu + +# Abstract + +Compressed Sensing MRI reconstructs images of the body's internal anatomy from undersampled measurements, thereby reducing scan time—the time subjects need to remain still. Recently, deep learning has shown great potential for reconstructing high-fidelity images from highly undersampled measurements. However, one needs to train multiple models for different undersampling patterns and desired output image resolutions, since most networks operate on a fixed discretization. Such approaches are highly impractical in clinical settings, where undersampling patterns and image resolutions are frequently changed to accommodate different real-time imaging and diagnostic requirements. + +We propose a unified MRI reconstruction model robust to various measurement undersampling patterns and image resolutions. Our approach uses neural operators—a discretization-agnostic architecture applied in both image and measurement spaces—to capture local and global features. Empirically, our model improves SSIM by $11\%$ and PSNR by 4 dB over a state-of-the-art CNN (End-to-End VarNet), with $600\times$ faster inference than diffusion methods. The resolution-agnostic design also enables zero-shot super-resolution and extended field-of-view reconstruction, offering a versatile and efficient solution for clinical MR imaging. Our unified model offers a versatile solution for MRI, adapting seamlessly to various measurement undersampling and imaging resolutions, making it highly effective for flexible and reliable clinical imaging. Our code is available at https://armeet.ca/nomri. + +# 1. 
Introduction

Magnetic Resonance Imaging (MRI) is a popular noninvasive imaging technology, used in numerous medical and scientific applications such as neurosurgery [38], clinical oncology [20], diagnostic testing [16], neuroscience [22], and pharmaceutical research [36]. MRI is greatly limited by a slow data acquisition process, which sometimes requires patients to remain still for an hour [4, 39]. Hence, accelerating MRI scans has garnered tremendous attention [11, 18, 28].

Compressed Sensing (CS) [9] enables MRI at sub-Nyquist rates and reduces acquisition time for greater clinical utility. This is framed as an ill-posed inverse problem [12], where prior knowledge about MR images is crucial for reconstruction. Traditional Compressed Sensing MRI assumes a sparse prior in a transform domain (e.g., wavelets [3]). Recent deep learning methods learn underlying data structures to achieve superior performance [5, 40]. Current state-of-the-art models establish an end-to-end mapping [15, 40] from undersampled measurements to image reconstruction in both image and frequency domains. However, these models often struggle to generalize across varying resolutions, a critical need in clinical practice where flexible resolution adjustments are necessary. A unified model that is agnostic to discretizations would greatly improve efficiency.

Neural Operators (NOs) [21] are a deep learning framework that learns mappings between infinite-dimensional function spaces, making them agnostic to discretizations (resolutions). This property makes them suitable for tasks with data at varying resolutions, such as partial differential equations (PDEs) [21, 25, 34] and PDE-related applications [32, 35]. NOs are also well suited to compressed sensing MRI, where measurements come with multiple undersampling patterns. Various NO architectures [24, 26, 34] have been proposed. Recently, discrete-continuous (DISCO) convolutions [26, 31] have emerged as an efficient neural operator that captures local features and leverages GPU acceleration for standard convolutions. Because DISCO resembles standard convolutions, the building block of many existing MRI deep learning models [5, 40], it is a good candidate for resolution-agnostic MRI reconstruction.

Our approach: We propose a unified model based on NOs that is robust to different undersampling patterns and image resolutions in compressed sensing MRI (Fig. 1a). Our model follows an unrolled network design [15, 40] with DISCO [26, 31]. As the image resolution increases, DISCO maintains a resolution-agnostic kernel with a consistent convolution patch size, while the regular convolution kernel contracts to a point (Fig. 1c).

![](images/79bd6ab2cc51219f5cbccc1f37da7e35cd83009cc487046729e06a66ec617a2f.jpg)

![](images/fac92d5640d352f6c9f5be132c40c252f7c8960477bf38717cfcc6057fcf75af.jpg)

![](images/4c2e564d9eecb317644849f140a69641654d87616d351ad9b836f7c25fa1567f.jpg)
Figure 1. (a) We propose a unified model for MRI reconstruction, called neural operator (NO), which works across various measurement undersampling patterns, overcoming the resolution dependency limit of CNN-based methods like [40] that require a specific model for each pattern. (b) NO achieves consistent performance across undersampling patterns and outperforms CNN architectures such as [40] (for $2 \times$ acceleration with one unrolled network cascade). (c) NO is resolution-agnostic. As image resolution increases, it maintains a consistent kernel size for alias-free rescaling, unlike CNNs with variable kernel sizes that risk aliasing.
(d) NO enhances zero-shot super-resolution MRI reconstruction, outperforming CNNs [40].

![](images/bd4d3c4cd2c6b2889be5b0e7e58493399d38bcf4d297f231bf96b7ad58336866.jpg)

The DISCO operators learn in both measurement/frequency $\mathbf{k}$ space $(\mathrm{NO_k})$ and image space $(\mathrm{NO_i})$. $\mathrm{NO_k}$ makes our framework agnostic to different measurement undersampling patterns, and $\mathrm{NO_i}$ makes the framework agnostic to different image resolutions. Additionally, learning in both frequency and image space allows the model to capture both local and global image features, due to the duality of the Fourier transform that connects the two spaces. The resolution-agnostic design also enables super-resolution in both frequency and image space, allowing an extended field of view (FOV) and super-resolution of the reconstructed MR images.

We empirically demonstrate that our model is robust to different measurement undersampling rates and patterns (Fig. 1a). Our model performs consistently across these pattern variations, whereas the existing method drops in performance (Fig. 1b). We achieve up to $4 \times$ lower NMSE and a 5 dB PSNR improvement over the baseline when evaluating on different undersampling patterns. The model is efficient and $600 \times$ faster than the diffusion baselines [5, 17, 43]. We also show that our model outperforms the state-of-the-art in zero-shot super-resolution inference (Fig. 1d) and extended FOV reconstruction (Fig. 5).

Our work has two main contributions: 1) We propose a unified neural operator model that learns in function space and shows robust performance across different undersampling patterns and image resolutions in compressed sensing MRI. To the best of our knowledge, this is the first resolution-agnostic framework for MRI reconstruction. 2) Our model demonstrates empirical robustness across measurement undersampling rates and patterns, reconstructing MR images with zero-shot higher resolutions and a larger field of view.

# 2. Related Works

Accelerated MRI. One way to accelerate MRI scan speed is parallel imaging, in which multiple receiver coils simultaneously acquire different views of the object of interest, which are then combined into a single image [11, 30, 37]. When MRI reconstruction is paired with compressed sensing, predefined priors or regularization filters can be leveraged to improve reconstruction quality [27, 28]. Recent works have shown that learned deep-learning priors outperform handcrafted priors in reconstruction fidelity. Convolutional neural networks (CNNs) [8, 15, 18, 40], variational networks (based on variational minimization) [15, 40], and generative adversarial networks (GANs) [7, 18] have all demonstrated performance superior to traditional optimization approaches for compressed sensing MRI reconstruction from undersampled measurements. However, unlike conventional compressed sensing, which operates in function space and is agnostic to measurement undersampling patterns, the aforementioned deep learning methods operate on a fixed resolution. As a result, changes in resolution lead to degradation in performance, and multiple models are needed for different settings. We propose a resolution-agnostic unified model.

Discretization-Agnostic Learning and Neural Operators.
Empirically, diffusion models have shown relatively consistent performance across different measurement undersampling patterns in accelerated MRI [14]. However, diffusion models usually require more runtime at inference and extensive hyperparameter tuning for good performance (Section 4.5). Additionally, they are not fundamentally discretization-agnostic by design. Neural operators [1, 21] are deep learning architectures specifically designed to learn mappings between infinite-dimensional function spaces. They are discretization-agnostic, allowing evaluation at any resolution, and converge to a desired operator as the resolution approaches infinity. Neural operators have empirically achieved good performance as surrogate models of numerical solutions to partial differential equations (PDEs) [21, 25, 34], with various applications such as material science [35], weather forecasting [32], and photoacoustic imaging [13]. The design of neural operators often depends on the application at hand. For example, the Fourier neural operator (FNO) [24], which performs global convolutions, has shown consistent discretization-agnostic performance in various applications [1]. Other designs of neural operators [23, 26] rely on integration with locally-supported kernels to capture local features, which has proven useful in applications where local features are important, such as modeling turbulent fluids [23]. Additionally, neural operators with local integrals can be made efficient with parallel computing, compared to those requiring global integrals. Our MRI framework, based on neural operators with local integrals, is agnostic to undersampling patterns and output image resolutions.

# 3. Methods

We first discuss the background of compressed sensing MRI and the unrolled network framework we use. We then discuss how we extend the existing network building block, the standard convolution, to resolution-agnostic neural operators. We also introduce DISCO [31], the neural operator design we adopt, and describe how we capture global and local image features with it. We conclude the section with the super-resolution designs. Hereafter, we call the measurement (frequency) space $\mathbf{k}$-space and the physical (spatial) space image space.

# 3.1. MRI Reconstruction with Unrolled Networks

Background. In MRI, anatomical images $\mathbf{x}$ of the patient are reconstructed by acquiring frequency-domain measurements $\mathbf{k}$, where the relationship is defined as:

$$
\mathbf{k} := \mathcal{F}(\mathbf{x}) + \epsilon \tag{1}
$$

where $\epsilon$ is the measurement noise and $\mathcal{F}$ is the Fourier transform. In this paper, we consider the parallel imaging setting with multiple receiver coils [19, 44], where each coil captures a different region of the anatomy. The forward process of the $i^{\mathrm{th}}$ coil measures $\mathbf{k}_i\coloneqq \mathcal{F}(S_i\mathbf{x}) + \epsilon_i$, where $S_{i}$ is a position-dependent sensitivity map for the $i^{\mathrm{th}}$ coil. To speed up the imaging process, measurements are undersampled as $\tilde{\mathbf{k}} = M\mathbf{k}$ in the compressed sensing MRI setting, where $M$ is a binary mask that selects a subset of the k-space points.
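To make the forward model concrete, below is a minimal NumPy sketch of the undersampled multi-coil acquisition $\tilde{\mathbf{k}}_i = M\mathcal{F}(S_i\mathbf{x})$; the function name, array shapes, and the equispaced mask are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def forward_operator(x, sens_maps, mask):
    """Apply A(x) = M F(S_i x) for every coil.

    x: complex image, shape (H, W)
    sens_maps: coil sensitivity maps S_i, shape (num_coils, H, W)
    mask: binary k-space mask M, shape (H, W)
    """
    coil_images = sens_maps * x[None]                # S_i x, per coil
    k_full = np.fft.fft2(coil_images, norm="ortho")  # F(S_i x) over the last two axes
    return mask[None] * k_full                       # keep only sampled k-space points

# Illustrative 4x equispaced mask on a 320 x 320 grid: keep every 4th column.
H = W = 320
mask = np.zeros((H, W))
mask[:, ::4] = 1.0
```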
Classical compressed sensing methods reconstruct the image $\hat{\mathbf{x}}$ by solving an optimization problem

$$
\hat{\mathbf{x}} = \operatorname{argmin}_{\mathbf{x}} \frac{1}{2} \sum_{i} \left\| \mathcal{A}_i(\mathbf{x}) - \tilde{\mathbf{k}}_i \right\|_{2}^{2} + \lambda \Psi(\mathbf{x}) \tag{2}
$$

where $i$ is the coil index, $\mathcal{A}_i(\cdot) \coloneqq M\mathcal{F}S_{i}(\cdot)$ is the linear forward operator for the $i^{\mathrm{th}}$ coil, and $\Psi(\mathbf{x})$ is a regularization term. The optimization objective can be viewed as a combination of a physics constraint and a prior. While the above optimization can be solved using classical optimization toolboxes, a growing line of work uses deep neural networks to learn data priors and shows improved reconstruction performance [15, 40]. Among them, unrolled networks [15, 40] have gained popularity as they incorporate the known forward model, resulting in state-of-the-art performance. Unrolling, which started with the seminal work of LISTA [10], proposes to design networks using iterations of an optimization algorithm to solve inverse problems. This approach incorporates domain knowledge (i.e., the forward model) and leverages deep learning to learn implicit priors from data [29, 41]. In the context of MRI and assuming a differentiable regularization term, the optimization problem is expanded into iterative gradient descent steps with injected CNN-based data priors. Each layer mimics the gradient descent step from $\mathbf{x}^t$ to $\mathbf{x}^{t+1}$:

$$
\mathbf{x}^{t+1} \leftarrow \mathbf{x}^{t} - \eta^{t} \mathcal{A}^{*}(\mathcal{A}(\mathbf{x}^{t}) - \tilde{\mathbf{k}}) + \lambda^{t} \operatorname{CNN}(\mathbf{x}^{t}) \tag{3}
$$

where $\eta^t$ controls the weight of the data consistency term and $\lambda^t$ controls that of the data-driven prior term. The data consistency term samples the data in the frequency domain, hence it is applicable to any spatial resolution. However, the prior term only operates on a specific resolution with CNNs. This means that when the undersampling pattern changes, one needs another CNN trained for that setting, which greatly limits the flexibility of the reconstruction system.

Extending to Neural Operators. We learn the prior in function space via discretization-agnostic neural operators

![](images/1cfb141d7675bddef93de0b6f015a4dc0b22a2793f8b945af4a3bae97a2d8b20.jpg)
Figure 2. MRI reconstruction pipeline. NO learns data priors in function space with infinite resolution. Specifically, we propose NOs in the k (frequency) space $\mathrm{NO_k}$ (k space NO) and image space $\mathrm{NO_i}$ (image space NO), which capture both global and local image features, due to the duality between physical and frequency space. $\mathcal{F}^{-1}$ refers to the inverse Fourier transform. We provide the framework design details in Section 3.1 and NO design details in Section 3.2.

in $\mathbf{k}$ space $(\mathrm{NO_k})$ and image space $(\mathrm{NO_i})$.
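As a concrete illustration of Eqn. 3, here is a minimal PyTorch sketch of one unrolled cascade with a single-coil forward model ($\mathcal{A} = M\mathcal{F}$) for brevity; `prior_net`, the step sizes, and the single-coil simplification are illustrative assumptions rather than the paper's exact code. The same structure is reused below with the image-space neural operator as the prior.

```python
import torch

def unrolled_cascade(x, k_under, mask, prior_net, eta=0.5, lam=1.0):
    """One unrolled layer: x - eta * A*(A(x) - k~) + lam * prior(x).

    Single-coil simplification: A = M F, so A* = F^{-1} M.
    x: complex image (B, H, W); k_under: undersampled k-space (B, H, W);
    mask: binary (H, W); prior_net: a learned denoiser (a CNN in [15, 40],
    NO_i in this work), assumed here to accept complex images directly
    (in practice, real and imaginary parts are stacked as two channels).
    """
    k_pred = mask * torch.fft.fft2(x, norm="ortho")                     # A(x)
    grad_dc = torch.fft.ifft2(mask * (k_pred - k_under), norm="ortho")  # A*(A(x) - k~)
    return x - eta * grad_dc + lam * prior_net(x)
```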
Specifically, we first use a $\mathbf{k}$ space neural operator $\mathrm{NO_k}$ to learn a $\mathbf{k}$ space prior and then apply a cascade of unrolled layers, each of which features a data consistency update and the image space $\mathrm{NO_i}$ for image prior learning:

$$
\mathbf{x}^{0} \leftarrow \mathcal{F}^{-1}\left(\mathrm{NO}_{\mathbf{k}}(\tilde{\mathbf{k}})\right) \tag{4}
$$

$$
\mathbf{x}^{t+1} \leftarrow \mathbf{x}^{t} - \eta^{t} \mathcal{A}^{*}(\mathcal{A}(\mathbf{x}^{t}) - \tilde{\mathbf{k}}) + \lambda^{t} \mathrm{NO}_{\mathbf{i}}^{t}(\mathbf{x}^{t}) \tag{5}
$$

where $\mathrm{NO}_{\mathbf{i}}^{t}$ refers to the image-space NO at cascade $t$. We follow existing works [15, 40] and only have one $\mathrm{NO}_{\mathbf{k}}$ for the first cascade. Our framework flexibly works across resolutions, with design details in Section 3.2.

Framework Overview. Fig. 2 depicts the pipeline of our neural operator framework for MRI reconstruction. The undersampled measurement $\tilde{\mathbf{k}}$ is first fed to a neural operator $\mathrm{NO_k}$, which operates in measurement $\mathbf{k}$ space to learn global image features, and then inverse Fourier transformed to get an image. Following Eqn. 4 and 5, we iterate a few cascades of unrolled layers, each consisting of a neural operator $\mathrm{NO_i}$, which operates in image $\mathbf{x}$ space, and a data consistency update.

# 3.2. Neural Operator Design

Neural operators, which learn mappings between function spaces, offer a unified approach to discretization-agnostic MRI reconstruction. Given that accurate MRI reconstruction depends on capturing both local and global image features, we propose a neural operator architecture that incorporates both global and local inductive biases. We first discuss how we learn local features with local integration operators.

Local Features via Local Integration Operator. Historically, the most common method of embedding a local inductive bias into deep neural networks has been using locally-supported convolutional kernels, as in convolutional neural networks (CNNs). However, standard discrete convolutional kernels used in CNNs do not satisfy the resolution-agnostic properties of neural operators. Specifically, Liu et al. [26] show that CNN-style convolutional kernels converge to pointwise linear operators as the resolution is increased, instead of the desired local integration in the limit of infinite resolution. For a kernel $\kappa$ and input function $g$ defined over some compact subset $D\subset \mathbb{R}^d$, the local convolution operator in a standard convolution layer, which maps input positions $u$ to output positions $v$, is given by

$$
(\kappa \star g)(v) = \int_{D} \kappa(u - v) \cdot g(u) \, du. \tag{6}
$$

Given a particular set of input points $(u_{j})_{j=1}^{m}\subset D$ with corresponding quadrature weights $q_{j}$ and output positions $v_{i}\in D$, we adopt the discrete-continuous convolutions (DISCO) framework for operator learning [26, 31] and approximate the continuous convolution (Eqn. 6) as

$$
(\kappa \star g)(v_{i}) \approx \sum_{j=1}^{m} \kappa\left(u_{j} - v_{i}\right) \cdot g\left(u_{j}\right) q_{j}. \tag{7}
$$

Following [26], we parameterize $\kappa$ as a linear combination of predefined basis functions $\kappa^{\ell}$: $\kappa = \sum_{\ell=1}^{L} \theta^{\ell} \cdot \kappa^{\ell}$, where $\theta^{\ell}$ are learnable parameters.
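To illustrate this parameterization, the sketch below evaluates $\kappa = \sum_{\ell} \theta^{\ell} \kappa^{\ell}$ on a grid whose tap count scales with resolution, while the learnable $\theta^{\ell}$ and the kernel's physical support stay fixed; the radial Gaussian-bump basis is an illustrative stand-in, not the basis actually used.

```python
import torch

def disco_kernel(theta, radius=0.05, resolution=320):
    """Evaluate kappa = sum_l theta_l * kappa_l on a resolution-matched grid.

    The kernel always covers the same physical radius, so the number of
    taps grows with resolution while `theta` is shared across resolutions.
    Basis kappa_l: radial bumps (a stand-in for the basis of [26]).
    """
    taps = max(3, int(2 * radius * resolution)) | 1        # odd tap count
    coords = torch.linspace(-radius, radius, taps)
    yy, xx = torch.meshgrid(coords, coords, indexing="ij")
    r = torch.sqrt(xx ** 2 + yy ** 2)
    centers = torch.linspace(0, radius, theta.numel())
    width = radius / theta.numel()
    basis = torch.exp(-(((r[None] - centers[:, None, None]) / width) ** 2))
    return (theta[:, None, None] * basis).sum(dim=0)       # (taps, taps) kernel

theta = torch.randn(4, requires_grad=True)   # the same parameters serve...
k320 = disco_kernel(theta, resolution=320)   # ...a 33 x 33 kernel at 320 x 320
k640 = disco_kernel(theta, resolution=640)   # ...a 65 x 65 kernel at 640 x 640
```

The resulting kernel can then be applied with a standard (GPU-accelerated) convolution, which is why the scheme inherits CNN efficiency while remaining resolution-agnostic.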
We choose the piecewise-linear basis from [26], as it achieves the best empirical results (see Sections B.3 & E of the supplementary). The convolutional kernel is thus parameterized by a finite number of parameters, independently of the grid on which the kernel is evaluated. The kernel is resolution-agnostic because we disentangle the resolution-agnostic basis from the discrete learnable parameters. The basis $\kappa^{\ell}$ is defined in function space and is discretized at the desired resolution; the discrete parameters $\theta^{\ell}$ can be learned with gradient descent. Since we are operating on an equidistant grid on a compact subset of $\mathbb{R}^2$, we follow [26] and implement Eqn. 7 using standard convolutional kernels (thus enjoying the benefits of GPU acceleration in standard deep learning libraries) with two crucial modifications: 1) the kernel itself is defined as a linear combination of basis functions $\kappa^{\ell}$, and 2) the size of the kernel scales with the input resolution so as to remain a fixed size w.r.t. the input domain. We adopt the same basis functions as [26] in our experiments, and we use the local integration operator as the resolution-agnostic building block for the measurement space and image space operators.

DISCO vs Standard 2D Convolution with Varying Resolutions. As the input resolution increases (the discretization becomes denser), DISCO [31] maintains the kernel size for each convolution and ultimately converges to a local integral. The standard 2D convolution kernel, however, becomes increasingly small and converges to a pointwise operator (Fig. 1c). Although one could alleviate this issue of standard convolutions by interpolating the convolutional kernel shape to match the corresponding convolution patch sizes for different resolutions, the interpolated kernel will have artifacts that affect performance at new resolutions (Fig. 1d). DISCO, however, is agnostic to resolution changes, as the kernel is defined in function space.

Global Features. A common neural operator architecture for learning global features is the Fourier neural operator (FNO) [24]. FNO takes the Fourier transform of the input, truncates the result beyond some fixed number of modes, and pointwise multiplies the result with a learned weight tensor, which is equivalent to a global convolution on the input by the convolution theorem. Interestingly, the forward process of MRI is a Fourier transformation, which means that local operations in measurement $\mathbf{k}$ space are equivalent to global operators in image $\mathbf{x}$ space and vice versa, due to their duality. Following FNO, we could apply a pointwise multiplication between the measurement $\mathbf{k}$ and a learned weight tensor to capture global image features. However, FNO truncates high frequencies, which are crucial for MRI reconstruction. To address this, we directly apply the DISCO local integration operator in the measurement space to capture global image features without feature map truncation.

UDNO: the Building Block. Without loss of generality, we take both the image-space $\mathrm{NO}_{\mathrm{i}}$ and $\mathbf{k}$ space $\mathrm{NO}_{\mathbf{k}}$ to be local neural operators that capture local features in the corresponding domain. Such a design learns both global and local image features due to domain duality. Motivations for adopting the U-shaped architecture are in Fig. 7 and Section A of the Supplementary.
Each operator consists of multiple sub-layers, which we refer to collectively as the U-Shaped DISCO Neural Operator, or UDNO. The motivation is that multi-scale designs have shown great success in capturing features at different scales in images, and that U-shaped networks are among the most popular architectures in computer vision, demonstrating strong performance in various applications from medical imaging to diffusion [6, 33, 37]. Further, the UDNO makes our framework very similar to the existing state-of-the-art E2E-VN [40], with the difference being that standard convolutions are replaced by DISCO operators. The UDNO follows the encoder/decoder architecture of the U-Net [37], replacing regular convolutions with DISCO layers.

Loss. The parameters of the proposed neural operator are estimated from the training data by minimizing the structural similarity loss between the reconstruction $\hat{\mathbf{x}}$ and the ground truth image $\mathbf{x}^*$ (the same as the E2E-VN [40]):

$$
\mathcal{L}(\hat{\mathbf{x}}, \mathbf{x}^{*}) = -\operatorname{SSIM}(\hat{\mathbf{x}}, \mathbf{x}^{*}), \tag{8}
$$

where SSIM is the Structural Similarity Index Measure [42].

![](images/16a9f889ff062d7a55042de1d4ad45ad5bd849d690d5af4a4369881e0fcda26e.jpg)
Figure 3. Super-resolution (denser discretization) in $\mathbf{k}$ space or image space increases the FOV or resolution of the reconstructed image. With denser discretization, NO maintains a resolution-agnostic kernel while CNN kernels become relatively smaller in size. Empirically, our NO outperforms CNNs [40] (Section 4.4).

# 3.3. Super-Resolution

Neural operators enable zero-shot super-resolution. As shown in Fig. 3, increasing resolution corresponds to denser discretization between fixed minimum and maximum values, while the overall domain range remains constant. Due to the dual nature of frequency and image space, enhancing resolution in $\mathbf{k}$ space extends the field of view (FOV) in the reconstructed image, whereas increasing resolution in image space enhances the image's detail. Our proposed NO framework includes resolution-agnostic neural operators for both $\mathbf{k}$ space $(\mathrm{NO_k})$ and image space $(\mathrm{NO_i})$, facilitating zero-shot super-resolution in both domains. We present empirical zero-shot super-resolution results in Section 4.4, comparing our NO framework to E2E-VN [40], a CNN-based architecture with a similar design.

# 4. Experiments

We discuss the datasets and experimental setup, followed by comparisons of our method and baselines under different k undersampling rates and patterns. We conclude the section with zero-shot super-resolution and additional analysis.

# 4.1. Dataset and Setup

Datasets: The fastMRI dataset [44] is a large, open dataset of fully-sampled knee and brain MRIs.

- fastMRI knee: We use the multi-coil knee reconstruction dataset with 34,742 slices for training and 7,135 slices for evaluation. All samples contain data from 15 coils.
- fastMRI brain: We use the T2 contrast subset of the multi-coil brain reconstruction dataset with 6,262 training slices and 502 evaluation slices. We filter for samples with data from 16 coils.

Undersampling Patterns and Rates. We use equispaced,
random, magic, Gaussian, radial, and Poisson undersampling patterns [5, 44] and $2\times$, $4\times$, $6\times$, and $8\times$ undersampling rates (visualizations are in Fig. 8 in the Supplementary). Higher rates result in sparser $\mathbf{k}$ space samples and shorter imaging time, at the cost of a more ill-posed/harder inversion process. Section B in the Supplementary provides additional undersampling details along with mask visualizations.

| Category | Method | fastMRI knee PSNR (dB) | fastMRI knee SSIM | fastMRI brain PSNR (dB) | fastMRI brain SSIM |
| --- | --- | --- | --- | --- | --- |
| Learning-free | Zero-filled | 31.00 ± 3.33 | 0.7848 ± 0.0616 | 30.86 ± 1.73 | 0.8065 ± 0.0376 |
| | $\ell_1$-Wavelet [27] | 25.67 ± 3.91 | 0.5667 ± 0.2001 | 28.68 ± 1.31 | 0.6783 ± 0.0684 |
| Diffusion | CSGM [17] | 26.52 ± 3.21 | 0.6789 ± 0.1220 | - | - |
| | ScoreMRI [5] | 25.72 ± 1.80 | 0.5789 ± 0.0910 | - | - |
| | PnP-DM [43] | 26.52 ± 3.14 | 0.6383 ± 0.1320 | - | - |
| End-to-end | U-Net [37] | 37.07 ± 2.47 | 0.8803 ± 0.0504 | 37.27 ± 1.76 | 0.9211 ± 0.0272 |
| | E2E-VN [40] | 38.33 ± 3.06 | 0.9048 ± 0.0732 | 38.06 ± 2.70 | 0.9620 ± 0.0107 |
| | Ours | 39.14 ± 2.93 | 0.9219 ± 0.0724 | 38.82 ± 2.77 | 0.9621 ± 0.0086 |

Table 1. MRI reconstruction performance on $4 \times$ equispaced undersampling. NO outperforms existing methods (classical, diffusion, and end-to-end). NO also shows consistent performance across k space undersampling patterns (Section 4.3). Zero-filled refers to reconstructing the image from zero-filled k space.

Neural Operator Model. NO follows Fig. 2. The $\mathrm{NO_k}$ (k space neural operator) and $\mathrm{NO_i}$ (image space neural operator) are implemented as UDNOs with 2 input and output channels. This is because complex numbers, commonly used in MRI data, are represented using two channels: one for the real part and one for the imaginary part. We provide UDNO details, DISCO kernel basis configurations, and training hyper-parameters in Section B of the Supplementary.

Baseline: Compressed Sensing. We compare with a learning-free compressed sensing method with wavelet $\ell_1$ regularization for a classical comparison [27].

Baselines: Unrolled Networks. We compare with the E2E-VN (End-to-End VarNet) [40], which shares a similar network structure with our approach but uses standard CNNs with resolution-dependent convolutions. Since E2E-VN [40] is only trained on a specific resolution, we also consider E2E-VN++, where we train [40] with multiple patterns that match our NO's training data for fair comparisons. The number of cascades $t$ is set to 12, following [40].

Baselines: Diffusion. Diffusion models have shown strong performance on inverse problems such as MRI reconstruction. We compare our approach to three prominent diffusion-based methods that leverage these capabilities: Score-based diffusion models for accelerated MRI (ScoreMRI) [5], Compressive Sensing using Generative Models (CSGM) [17], and Plug-and-Play Diffusion Models (PnP-DM) [43]. We replicate the experimental settings described in their respective papers. While they report results on MVUE targets, we evaluate metrics on RSS targets at inference for a fair comparison with our methods.

Hardware and Training. While the models can be trained on a single RTX 4090 GPU, we accelerate the training of our model and baselines with a batch size of 16 across 4 A100 (40G) GPUs. We follow baseline settings for comparison.

Evaluation Protocols. We evaluate image reconstruction performance using normalized mean square error (NMSE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM), which are standard for the fastMRI dataset and MRI [44].

# 4.2. Reconstruction with Different k Space Undersampling Patterns

We train our NO model, E2E-VN, and E2E-VN++ on $4 \times$ equispaced samples for 50 epochs. The performance on the single $4 \times$ equispaced undersampling pattern is reported in Table 1. We further fine-tune NO and E2E-VN++ for an additional 20 epochs on a small dataset (3,474 samples) of equispaced, random, magic, Gaussian, radial, and Poisson samples.

fastMRI Knee. We also provide detailed metric results in Table 2, with a line plot in Fig. 6a, where our NO achieves consistent performance across different patterns.
Across all patterns, we achieve an average improvement of 4.17 dB PSNR and $8.4\%$ SSIM over the E2E-VN. On rectilinear patterns (equispaced, magic, random), our performance remains comparable to E2E-VN++ (0.3 dB PSNR gain). Across the irregular patterns (radial, Gaussian, Poisson), we achieve a 0.6 dB PSNR improvement over the improved baseline (E2E-VN++).

fastMRI Brain. On irregular patterns, we achieve an average improvement of 4.7 dB PSNR and $10\%$ SSIM over the E2E-VN. On rectilinear patterns (equispaced, magic, random), our performance remains comparable to the E2E-VN. Detailed numbers are reported in Table 7 of the Supplementary.

Visualization. We observe visual improvements in reconstruction integrity (see Fig. 4). Our model is robust to inference across multiple patterns. We highlight important local regions where our NO performs better.

Training on multiple patterns together, as done here, reflects a common clinical setting where the undersampling patterns are known. We also consider the setting where undersampling patterns are unknown. Zero-shot evaluations of the equispaced-trained $(4\times)$ model across different patterns show that our NO achieves a 1.8 dB PSNR gain over E2E-VN.

# 4.3. Reconstruction with Different k Space Undersampling Rates

We train our NO model, E2E-VN, and E2E-VN++ on $4 \times$ equispaced samples for 50 epochs. We further fine-tune NO and E2E-VN++ for an additional 20 epochs on a small dataset (3,474 samples) of $4 \times$, $6 \times$, $8 \times$, and $16 \times$ equispaced samples.

For fastMRI Knee, we report the multi-rate performance in Fig. 6b and Table 6 of the Supplementary. For fastMRI Brain, we report the multi-rate performance in Table 8 of the Supplementary. Our neural operator model consistently outperforms the E2E-VN [40], achieving 3.2 dB higher PSNR and $5.8\%$ higher SSIM on fastMRI knee and 2.0 dB higher PSNR and $7.5\%$ higher SSIM on fastMRI brain.

![](images/98e9d07e8faf4877e2f41511c26d15c331bbe65b9534d219ae2fdae8975dafea.jpg)
Figure 4. MRI reconstructions with different undersampling patterns from various methods: NO (ours), E2E-VN++, E2E-VN [40], L1-Wavelet (learning-free compressed sensing) [27], and CSGM (diffusion) [17]. NO reconstructs high-fidelity images across various undersampling patterns. Zoom-in view in the lower right of each image. Row 1: $4 \times$ Equispaced undersampling. Row 2: $4 \times$ Gaussian 2d undersampling. Row 3: $4 \times$ Radial 2d undersampling.
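For reference, below is a minimal sketch of how the three reported metrics are commonly computed for 2D magnitude images; the normalization conventions here are assumptions and may differ in detail from the paper's evaluation code.

```python
import numpy as np
from skimage.metrics import structural_similarity

def nmse(gt, pred):
    """Normalized mean squared error."""
    return float(np.linalg.norm(gt - pred) ** 2 / np.linalg.norm(gt) ** 2)

def psnr(gt, pred):
    """Peak signal-to-noise ratio in dB, with the ground-truth max as peak."""
    mse = np.mean((gt - pred) ** 2)
    return float(10 * np.log10(gt.max() ** 2 / mse))

def ssim(gt, pred):
    """Structural similarity on 2D magnitude images."""
    return structural_similarity(gt, pred, data_range=float(gt.max() - gt.min()))
```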
| Domain | Pattern | PSNR (dB) ↑ NO (ours) | PSNR E2E-VN++ | PSNR E2E-VN [40] | SSIM ↑ NO (ours) | SSIM E2E-VN++ | SSIM E2E-VN [40] | NMSE ↓ NO (ours) | NMSE E2E-VN++ | NMSE E2E-VN [40] |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| In-domain | Equispaced | 37.40 ± 2.61 | 37.50 ± 2.79 | 38.35 ± 3.05 | 0.899 ± 0.072 | 0.900 ± 0.072 | 0.905 ± 0.073 | 0.007 ± 0.006 | 0.007 ± 0.006 | 0.006 ± 0.006 |
| | Random | 36.66 ± 2.48 | 36.79 ± 2.65 | 37.34 ± 2.75 | 0.891 ± 0.070 | 0.892 ± 0.072 | 0.897 ± 0.071 | 0.008 ± 0.006 | 0.008 ± 0.007 | 0.007 ± 0.005 |
| | Magic | 38.46 ± 2.99 | 38.34 ± 3.06 | 38.94 ± 3.55 | 0.914 ± 0.070 | 0.914 ± 0.069 | 0.917 ± 0.071 | 0.006 ± 0.006 | 0.007 ± 0.006 | 0.006 ± 0.006 |
| OOD | Radial | 36.23 ± 2.21 | 35.50 ± 2.24 | 27.02 ± 3.92 | 0.900 ± 0.071 | 0.892 ± 0.070 | 0.764 ± 0.070 | 0.009 ± 0.006 | 0.011 ± 0.009 | 0.069 ± 0.030 |
| | Poisson | 33.42 ± 2.34 | 33.01 ± 2.67 | 23.61 ± 3.85 | 0.878 ± 0.062 | 0.873 ± 0.060 | 0.687 ± 0.083 | 0.016 ± 0.008 | 0.017 ± 0.008 | 0.152 ± 0.068 |
| | Gaussian | 31.25 ± 2.70 | 30.65 ± 2.55 | 23.14 ± 3.95 | 0.863 ± 0.058 | 0.851 ± 0.059 | 0.673 ± 0.088 | 0.024 ± 0.005 | 0.028 ± 0.007 | 0.170 ± 0.073 |

Table 2. MRI reconstruction performance across different undersampling patterns. Across multiple patterns, NO maintains reconstruction performance, while baselines do not perform well on out-of-domain (OOD) undersampling patterns (Poisson, radial, Gaussian). Metrics are calculated for the fastMRI knee dataset with a fixed $4 \times$ acceleration rate. We observe that the E2E-VN overfits to rectilinear patterns, and drops off heavily when evaluated on the irregular patterns (Poisson, radial, Gaussian).

# 4.4. Zero-Shot Super-Resolution

We study $\mathrm{NO_i}$ and $\mathrm{NO_k}$ zero-shot super-resolution performance and compare them with E2E-VN [40].

Higher MRI Resolution with $\mathrm{NO_i}$ super-resolution. We train our NO model and the E2E-VN models on $320\times 320$ knee samples. We then keep the $\mathrm{NO_k}$ unchanged and use bilinear interpolation to increase the input to $\mathrm{NO_i}$ to $640\times 640$. We directly evaluate models without fine-tuning against fully sampled $640\times 640$ bilinear interpolated ground truth reconstructions. For [40], which relies on CNNs, the absolute kernel size stays the same, so the ratio of kernel size to feature map size is halved, while this ratio stays the same for NO. Compared to our NO model, the CNN-based E2E-VN [40] produces reconstructions with noticeable artifacts and lower PSNR and image reconstruction quality (Fig. 1d and Fig. 5b).

Larger MRI FOV with $\mathrm{NO_k}$ super-resolution. k space super-resolution expands the MRI reconstruction field of view (FOV). To validate model performance, we design a proof-of-concept FOV experiment. Our NO model and the E2E-VN [40] are trained on $160\times 160$ downsampled k space brain slice samples, where sparse k space sampling results in a reduced FOV in image space. We then perform zero-shot inference on $320\times 320$ full-FOV k space data. Although neither model encounters data outside the $160\times 160$ FOV during training, our NO model reconstructs features in this extended region with significantly fewer artifacts compared to E2E-VN [40] (visualizations in Fig. 5a).

# 4.5. Additional Analysis

Model Inference and Tuning Time. In Table 3, we compare the model development and inference times of our end-to-end neural operator (NO) with diffusion models.

![](images/25101edd51d9fb68dfa6f411ff3bd607dcc3f746b05a3e90d3175f2d27b83c82.jpg)
Figure 5. Zero-shot super-resolution results in both extended FOV $(\mathrm{NO_k})$ and high-resolution image space $(\mathrm{NO_i})$. (a) Zero-shot extended FOV reconstructions: Our NO model shows fewer artifacts and higher PSNR in the reconstructed brain slices compared to the CNN-based E2E-VN [40] on $4\times$ Gaussian, despite neither model seeing data outside the initial $160\times 160$ FOV during training. (b) Zero-shot super-resolution reconstructions in image space on $2\times$ radial: with input resolution increased to $640\times 640$ through bilinear interpolation, our NO model preserves reconstruction quality, while E2E-VN [40] produces visible artifacts.
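To make the zero-shot protocol of Section 4.4 concrete, below is a minimal sketch of the $\mathrm{NO_i}$ image-space super-resolution evaluation; `model` and the tensor layout (real/imaginary parts as two channels) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def zero_shot_super_resolution(model, x_low, scale=2):
    """Evaluate a trained, resolution-agnostic model at a higher resolution.

    x_low: (B, 2, 320, 320) real/imaginary image input at the training
    resolution. The same weights are reused without any fine-tuning:
    only the input discretization changes.
    """
    x_high = F.interpolate(x_low, scale_factor=scale, mode="bilinear",
                           align_corners=False)   # 320 x 320 -> 640 x 640
    return model(x_high)
```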
![](images/71aa9ec5616091411c146c9cf0c72104a31cc5d81296e6b8db73d4be9855c4ae.jpg)

![](images/2a7e541d6cb095f704283590550c5580ca00bf0900a101a82cd573f0b0f4a467.jpg)
Figure 6. Performance across different undersampling patterns and rates for our method and the baselines: end-to-end [40], diffusion [17], and learning-free [27]. Our NO remains relatively consistent in performance when evaluated at different undersampling patterns and rates. Note that a higher undersampling rate makes the task more difficult, so a worse score is expected.

![](images/fc1d912141782f9bd93595af6b448723774b8130bc00713c1c085f57cff68ee5.jpg)
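Table 2 and Figure 6 report PSNR, SSIM, and NMSE. These are standard image-quality metrics; a minimal sketch of how they are commonly computed on magnitude MR images follows (the normalization choices, e.g. the data range, are common fastMRI-style conventions assumed here, not the paper's exact evaluation code).

```python
import numpy as np
from skimage.metrics import structural_similarity

def nmse(gt, pred):
    # Normalized mean squared error: ||gt - pred||^2 / ||gt||^2.
    return float(np.linalg.norm(gt - pred) ** 2 / np.linalg.norm(gt) ** 2)

def psnr(gt, pred):
    # Peak signal-to-noise ratio in dB, with the peak taken from gt.
    mse = np.mean((gt - pred) ** 2)
    return float(10 * np.log10(gt.max() ** 2 / mse))

def ssim(gt, pred):
    # Structural similarity on 2D magnitude images.
    return float(structural_similarity(gt, pred,
                                       data_range=gt.max() - gt.min()))
```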
| Category | Method | Inference Time (s) | Tuning* Required |
| --- | --- | --- | --- |
| Learning-free | $\ell_1$-Wavelet [27] | 5.45 | Yes |
| Diffusion | CSGM [17] | 93.84 | Yes |
| Diffusion | PnP-DM [43] | 84.46 | Yes |
| Diffusion | ScoreMRI [5] | 96.16 | Yes |
| Variational | E2E-VN [40] | 0.104 | No |
| Variational | NO (ours) | 0.158 | No |

Table 3. Inference and tuning time of methods tested on an NVIDIA A100. NO is approximately $600\times$ faster than diffusion and $35\times$ faster than the classical baseline based on learning-free compressed sensing. *Tuning refers to the $\mathbf{k}$-space undersampling-pattern-specific hyperparameter tuning performed during inference/after model training. Both $\ell_1$-Wavelet [27] ($\sim 0.5$ hrs per pattern) and the diffusion methods ($\sim 6$ hrs per pattern) require pattern-specific tuning, while our NO is trained once for all patterns.
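Because CUDA kernels launch asynchronously, per-inference wall-clock numbers like those in Table 3 need explicit synchronization to be meaningful. Below is a minimal timing-harness sketch; `model`, `ktilde`, and `mask` are placeholders, and this is one common measurement pattern rather than the paper's actual benchmarking code.

```python
import time
import torch

@torch.no_grad()
def time_inference(model, ktilde, mask, n_warmup=5, n_runs=20):
    # Warm-up passes exclude one-time costs (allocator, kernel caching).
    for _ in range(n_warmup):
        model(ktilde, mask)
    torch.cuda.synchronize()  # drain queued GPU work before starting the clock
    t0 = time.perf_counter()
    for _ in range(n_runs):
        model(ktilde, mask)
    torch.cuda.synchronize()  # include all queued work in the measurement
    return (time.perf_counter() - t0) / n_runs
```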
We observe that diffusion models require pattern-specific hyperparameter tuning and are over 600 times slower at inference. MRI diffusion models [5, 17, 43] are unconditionally trained, so undersampling patterns are not available during training.

Thus, we empirically tune hyperparameters such as the learning rate and guidance scale for each undersampling pattern, which takes approximately 6 hours per pattern. Traditional learning-free methods like $\ell_1$-Wavelet [27] still require hyperparameter tuning for specific $\mathbf{k}$-space undersampling patterns during optimization. Consequently, end-to-end methods such as our NO are significantly more efficient.

Performance Under the Same Parameter Size. In the Supplementary, we show that our NO outperforms the baseline unrolled network E2E-VN [40] across different patterns and rates with a similar architecture and number of parameters.

# 5. Conclusion

Our unified model for compressed sensing MRI addresses the need to train multiple models for different measurement undersampling patterns and image resolutions, a common issue in clinical practice. By leveraging discretization-agnostic neural operators, the model captures both local and global features, enabling flexible MRI reconstruction. In extensive experiments on the fastMRI knee and brain datasets, our model maintains consistent performance across undersampling patterns and outperforms state-of-the-art methods in accuracy and robustness. It also enables zero-shot super-resolution and extended field-of-view (FOV) reconstruction. The work has some limitations: 1) we explore only one neural operator design, DISCO, and future work could explore other operator-learning architectures for MRI; 2) we benchmark only image reconstruction performance, not diagnostic accuracy, which is of greater clinical relevance.

In short, our approach offers a versatile solution for efficient MRI, with significant utility in clinical settings where flexibility and adaptability to varying undersampling patterns and image resolutions are crucial.

# Acknowledgement

This work is supported in part by ONR (MURI grants N000142312654 and N000142012786). J.W. is supported in part by Schmidt Sciences. A.S.J. and A.C. are supported in part by the Summer Undergraduate Research Fellowships (SURF) at Caltech. Z.W. is supported in part by the Amazon AI4Science Fellowship. B.T. is supported in part by the Swartz Foundation Fellowship. M.L.-S. is supported in part by the Mellon Mays Undergraduate Fellowship. A.A. is supported in part by the Bren endowed chair and the AI2050 Senior Fellow program at Schmidt Sciences.

# References

[1] Kamyar Azizzadenesheli, Nikola Kovachki, Zongyi Li, Miguel Liu-Schiaffini, Jean Kossaifi, and Anima Anandkumar. Neural operators for accelerating scientific simulations and design. Nature Reviews Physics, pages 1-9, 2024. 3
[2] Max Born and Emil Wolf. Principles of optics: electromagnetic theory of propagation, interference and diffraction of light. Elsevier, 2013. 15
[3] Scott Shaobing Chen, David L Donoho, and Michael A Saunders. Atomic decomposition by basis pursuit. SIAM Review, 43(1):129-159, 2001. 1
[4] Yutong Chen, Carola-Bibiane Schönlieb, Pietro Liò, Tim Leiner, Pier Luigi Dragotti, Ge Wang, Daniel Rueckert, David Firmin, and Guang Yang. AI-based reconstruction for fast MRI—a systematic review and meta-analysis. Proceedings of the IEEE, 110(2):224-245, 2022. 1
[5] Hyungjin Chung and Jong Chul Ye. Score-based diffusion models for accelerated MRI. Medical Image Analysis, 80:102479, 2022. 1, 2, 6, 8
[6] Florinel-Alin Croitoru, Vlad Hondru, Radu Tudor Ionescu, and Mubarak Shah. Diffusion models in vision: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(9):10850-10869, 2023. 5
[7] Salman UH Dar, Mahmut Yurt, Mohammad Shahdloo, Muhammed Emrullah Ildiz, Berk Tinaz, and Tolga Cukur. Prior-guided image reconstruction for accelerated multi-contrast MRI via generative adversarial networks. IEEE Journal of Selected Topics in Signal Processing, 14(6):1072-1087, 2020. 3
[8] Mohammad Zalbagi Darestani and Reinhard Heckel. Accelerated MRI with un-trained neural networks. IEEE Transactions on Computational Imaging, 7:724-733, 2021. 3
[9] David L Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289-1306, 2006. 1
[10] Karol Gregor and Yann LeCun. Learning fast approximations of sparse coding. In Proceedings of the 27th International Conference on Machine Learning, pages 399-406, 2010. 3
[11] Mark A Griswold, Peter M Jakob, Robin M Heidemann, Mathias Nittka, Vladimir Jellus, Jianmin Wang, Berthold Kiefer, and Axel Haase. Generalized autocalibrating partially parallel acquisitions (GRAPPA). Magnetic Resonance in Medicine: An Official Journal of the International Society for Magnetic Resonance in Medicine, 47(6):1202-1210, 2002. 1, 2
[12] Charles W Groetsch and CW Groetsch. Inverse problems in the mathematical sciences. Springer, 1993. 1
[13] Steven Guan, Ko-Tsung Hsu, and Parag V Chitnis. Fourier neural operator network for fast photoacoustic wave simulations. Algorithms, 16(2):124, 2023. 3
[14] Alper Güngör, Salman UH Dar, Şaban Öztürk, Yilmaz Korkmaz, Hasan A Bedel, Gokberk Elmas, Muzaffer Ozbey, and Tolga Çukur. Adaptive diffusion priors for accelerated MRI reconstruction. Medical Image Analysis, 88:102872, 2023. 3
[15] Kerstin Hammernik, Teresa Klatzer, Erich Kobler, Michael P Recht, Daniel K Sodickson, Thomas Pock, and Florian Knoll. Learning a variational network for reconstruction of accelerated MRI data. Magnetic Resonance in Medicine, 79(6):3055-3071, 2018. 1, 3, 4
[16] DJ Husband, KA Grant, and CS Romaniuk. MRI in the diagnosis and treatment of suspected malignant spinal cord compression. The British Journal of Radiology, 74:15-23, 2001. 1
[17] Ajil Jalal, Marius Arvinte, Giannis Daras, Eric Price, Alexandros G Dimakis, and Jonathan I Tamir. Robust compressed sensing MRI with deep generative priors. Advances in Neural Information Processing Systems, 2021. 2, 6, 7, 8
[18] Patricia M Johnson and Maria Drangova. Conditional generative adversarial network for 3D rigid-body motion correction in MRI. Magnetic Resonance in Medicine, 82(3):901-910, 2019. 1, 3
[19] Christoph Juchem, Omar M Nahnass, Terence W Nixon, and Robin A de Graaf. Multi-slice MRI with the dynamic multi-coil technique. NMR in Biomedicine, 28(11):1526-1534, 2015. 3
[20] Dow-Mu Koh and David J Collins. Diffusion-weighted MRI in the body: applications and challenges in oncology.
American Journal of Roentgenology, 188(6):1622-1635, 2007. 1
[21] Nikola Kovachki, Zongyi Li, Burigede Liu, Kamyar Azizzadenesheli, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Neural operator: Learning maps between function spaces with applications to PDEs. Journal of Machine Learning Research, 24(89):1-97, 2023. 1, 3
[22] Denis Le Bihan. Looking into the functional architecture of the brain with diffusion MRI. Nature Reviews Neuroscience, 4(6):469-480, 2003. 1
[23] Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Neural operator: Graph kernel network for partial differential equations. 2020. 3
[24] Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Fourier neural operator for parametric partial differential equations. 2021. 1, 3, 5, 11
[25] Zongyi Li, Hongkai Zheng, Nikola Kovachki, David Jin, Haoxuan Chen, Burigede Liu, Kamyar Azizzadenesheli, and Anima Anandkumar. Physics-informed neural operator for learning partial differential equations. ACM/IMS Journal of Data Science, 1(3):1-27, 2024. 1, 3
[26] Miguel Liu-Schiaffini, Julius Berner, Boris Bonev, Thorsten Kurth, Kamyar Azizzadenesheli, and Anima Anandkumar. Neural operators with localized integral and differential kernels. In Forty-first International Conference on Machine Learning, 2024. 1, 3, 4, 11, 14
[27] Michael Lustig, David Donoho, and John M Pauly. Sparse MRI: The application of compressed sensing for rapid MR imaging. Magnetic Resonance in Medicine, 58(6):1182-1195, 2007. 2, 6, 7, 8
[28] Michael Lustig, David L Donoho, Juan M Santos, and John M Pauly. Compressed sensing MRI. IEEE Signal Processing Magazine, 25(2):72-82, 2008. 1, 2
[29] Morteza Mardani, Qingyun Sun, David Donoho, Vardan Papyan, Hatef Monajemi, Shreyas Vasanawala, and John Pauly. Neural proximal gradient descent for compressive imaging. Advances in Neural Information Processing Systems, 31, 2018. 3
[30] Mark Murphy, Marcus Alley, James Demmel, Kurt Keutzer, Shreyas Vasanawala, and Michael Lustig. Fast $\ell_1$-SPIRiT compressed sensing parallel imaging MRI: scalable parallel implementation and clinically feasible runtime. IEEE Transactions on Medical Imaging, 31(6):1250-1262, 2012. 2
[31] Jeremy Ocampo, Matthew A Price, and Jason D McEwen. Scalable and equivariant spherical CNNs by discrete-continuous (DISCO) convolutions. arXiv preprint arXiv:2209.13603, 2022. 1, 3, 4, 5, 12, 14
[32] Jaideep Pathak, Shashank Subramanian, Peter Harrington, Sanjeev Raja, Ashesh Chattopadhyay, Morteza Mardani, Thorsten Kurth, David Hall, Zongyi Li, Kamyar Azizzadenesheli, et al. FourCastNet: A global data-driven high-resolution weather model using adaptive Fourier neural operators. arXiv preprint arXiv:2202.11214, 2022. 1, 3
[33] William Peebles and Saining Xie. Scalable diffusion models with transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4195-4205, 2023. 5
[34] Bogdan Raonic, Roberto Molinaro, Tim De Ryck, Tobias Rohner, Francesca Bartolucci, Rima Alaifari, Siddhartha Mishra, and Emmanuel de Bezenac. Convolutional neural operators for robust and accurate learning of PDEs. Advances in Neural Information Processing Systems, 36, 2024. 1, 3
[35] Meer Mehran Rashid, Tanu Pittie, Souvik Chakraborty, and NM Anoop Krishnan. Learning the stress-strain fields in digital composites using Fourier neural operator. iScience, 25(11), 2022.
1, 3
[36] J Craig Richardson, Richard W Bowtell, Karsten Mäder, and Colin D Melia. Pharmaceutical applications of magnetic resonance imaging (MRI). Advanced Drug Delivery Reviews, 57(8):1191-1209, 2005. 1
[37] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention - MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18, pages 234-241. Springer, 2015. 2, 5, 6, 11
[38] V Seifert, M Zimmermann, C Trantakis, H-E Vitzthum, K Kühnel, A Raabe, F Bootz, J-P Schneider, F Schmidt, and J Dietrich. Open MRI-guided neurosurgery. Acta Neurochirurgica, 141:455-464, 1999. 1
[39] Dilbag Singh, Anmol Monga, Hector L de Moura, Xiaoxia Zhang, Marcelo VW Zibetti, and Ravinder R Regatte. Emerging trends in fast MRI using deep-learning reconstruction on undersampled k-space data: a systematic review. Bioengineering, 10(9):1012, 2023. 1
[40] Anuroop Sriram, Jure Zbontar, Tullie Murrell, Aaron Defazio, C Lawrence Zitnick, Nafissa Yakubova, Florian Knoll, and Patricia Johnson. End-to-end variational networks for accelerated MRI reconstruction. In Medical Image Computing and Computer Assisted Intervention - MICCAI 2020: 23rd International Conference, Lima, Peru, October 4-8, 2020, Proceedings, Part II 23, pages 64-73. Springer, 2020. 1, 2, 3, 4, 5, 6, 7, 8, 11, 12, 13
[41] Jian Sun, Huibin Li, Zongben Xu, et al. Deep ADMM-Net for compressive sensing MRI. Advances in Neural Information Processing Systems, 29, 2016. 3
[42] Zhou Wang, Eero P Simoncelli, and Alan C Bovik. Multiscale structural similarity for image quality assessment. In Asilomar Conference on Signals, Systems & Computers, 2003. 5
[43] Zihui Wu, Yu Sun, Yifan Chen, Bingliang Zhang, Yisong Yue, and Katherine Bouman. Principled probabilistic imaging using diffusion models as plug-and-play priors. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. 2, 6, 8
[44] Jure Zbontar, Florian Knoll, Anuroop Sriram, Tullie Murrell, Zhengnan Huang, Matthew J Muckley, Aaron Defazio, et al. fastMRI: An open dataset and benchmarks for accelerated MRI. arXiv preprint arXiv:1811.08839, 2018.
3, 5, 6, 12 \ No newline at end of file diff --git a/2025/A Unified Model for Compressed Sensing MRI Across Undersampling Patterns/images.zip b/2025/A Unified Model for Compressed Sensing MRI Across Undersampling Patterns/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..f86f6a7e5eb4b99f09ecefaffe1c4e36732c3a78 --- /dev/null +++ b/2025/A Unified Model for Compressed Sensing MRI Across Undersampling Patterns/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:73ca3059e9bc2955ece2ec6a942b7a73e6debdbe4ed748968f9c0c211b4a8981 +size 644852 diff --git a/2025/A Unified Model for Compressed Sensing MRI Across Undersampling Patterns/layout.json b/2025/A Unified Model for Compressed Sensing MRI Across Undersampling Patterns/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..09c03c53dd746c9dc60e73b883acf4c6477934cc --- /dev/null +++ b/2025/A Unified Model for Compressed Sensing MRI Across Undersampling Patterns/layout.json @@ -0,0 +1,8717 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 53, + 103, + 541, + 121 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 53, + 103, + 541, + 121 + ], + "spans": [ + { + "bbox": [ + 53, + 103, + 541, + 121 + ], + "type": "text", + "content": "A Unified Model for Compressed Sensing MRI Across Undersampling Patterns" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 97, + 143, + 208, + 171 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 97, + 143, + 208, + 171 + ], + "spans": [ + { + "bbox": [ + 97, + 143, + 208, + 171 + ], + "type": "text", + "content": "Armeet Singh Jatyani* Miguel Liu-Schiaffini" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 231, + 144, + 496, + 186 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 231, + 144, + 496, + 186 + ], + "spans": [ + { + "bbox": [ + 231, + 144, + 496, + 186 + ], + "type": "text", + "content": "Jiayun Wang* Aditi Chandrashekar Zihui Wu Bahareh Tolooshams Anima Anandkumar Ifornia Institute of Technology" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 121, + 188, + 473, + 198 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 121, + 188, + 473, + 198 + ], + "spans": [ + { + "bbox": [ + 121, + 188, + 473, + 198 + ], + "type": "text", + "content": "{armeet,peterw,ajchandr,zwu2,mliuschi,btoloosh,ana}@caltech.edu" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 143, + 227, + 192, + 239 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 143, + 227, + 192, + 239 + ], + "spans": [ + { + "bbox": [ + 143, + 227, + 192, + 239 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 46, + 252, + 288, + 396 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 46, + 252, + 288, + 396 + ], + "spans": [ + { + "bbox": [ + 46, + 252, + 288, + 396 + ], + "type": "text", + "content": "Compressed Sensing MRI reconstructs images of the body's internal anatomy from undersampled measurements, thereby reducing scan time—the time subjects need to remain still. Recently, deep learning has shown great potential for reconstructing high-fidelity images from highly undersampled measurements. However, one needs to train multiple models for different undersampling patterns and desired output image resolutions, since most networks operate on a fixed discretization. 
Such approaches are highly impractical in clinical settings, where undersampling patterns and image resolutions are frequently changed to accommodate different real-time imaging and diagnostic requirements." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 46, + 397, + 290, + 589 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 46, + 397, + 290, + 589 + ], + "spans": [ + { + "bbox": [ + 46, + 397, + 290, + 589 + ], + "type": "text", + "content": "We propose a unified MRI reconstruction model robust to various measurement undersampling patterns and image resolutions. Our approach uses neural operators—a discretization-agnostic architecture applied in both image and measurement spaces—to capture local and global features. Empirically, our model improves SSIM by " + }, + { + "bbox": [ + 46, + 397, + 290, + 589 + ], + "type": "inline_equation", + "content": "11\\%" + }, + { + "bbox": [ + 46, + 397, + 290, + 589 + ], + "type": "text", + "content": " and PSNR by 4 dB over a state-of-the-art CNN (End-to-End VarNet), with " + }, + { + "bbox": [ + 46, + 397, + 290, + 589 + ], + "type": "inline_equation", + "content": "600\\times" + }, + { + "bbox": [ + 46, + 397, + 290, + 589 + ], + "type": "text", + "content": " faster inference than diffusion methods. The resolution-agnostic design also enables zero-shot super-resolution and extended field-of-view reconstruction, offering a versatile and efficient solution for clinical MR imaging. Our unified model offers a versatile solution for MRI, adapting seamlessly to various measurement undersampling and imaging resolutions, making it highly effective for flexible and reliable clinical imaging. Our code is available at https://armeet.ca/nomri." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 47, + 613, + 128, + 625 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 47, + 613, + 128, + 625 + ], + "spans": [ + { + "bbox": [ + 47, + 613, + 128, + 625 + ], + "type": "text", + "content": "1. Introduction" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 46, + 633, + 288, + 694 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 46, + 633, + 288, + 694 + ], + "spans": [ + { + "bbox": [ + 46, + 633, + 288, + 694 + ], + "type": "text", + "content": "Magnetic Resonance Imaging (MRI) is a popular noninvasive imaging technology, used in numerous medical and scientific applications such as neurosurgery [38], clinical oncology [20], diagnostic testing [16], neuroscience [22], and pharmaceutical research [36]. MRI is greatly limited by a" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 306, + 228, + 547, + 264 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 306, + 228, + 547, + 264 + ], + "spans": [ + { + "bbox": [ + 306, + 228, + 547, + 264 + ], + "type": "text", + "content": "slow data acquisition process, which sometimes requires patients to remain still for an hour [4, 39]. Hence, accelerating MRI scan has garnered tremendous attention [11, 18, 28]." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 304, + 266, + 547, + 445 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 266, + 547, + 445 + ], + "spans": [ + { + "bbox": [ + 304, + 266, + 547, + 445 + ], + "type": "text", + "content": "Compressed Sensing (CS) [9] enables MRI at sub-Nyquist rates and reduces acquisition time for greater clinical utility. This is framed as an ill-posed inverse problem [12], where prior knowledge about MR images is crucial for reconstruction. 
Traditional Compressed Sensing MRI assumes a sparse prior in a transform domain (e.g., wavelets [3]). Recent deep learning methods learn underlying data structures to achieve superior performance [5, 40]. Current state-of-the-art models establish an end-to-end mapping [15, 40] from undersampled measurements to image reconstruction in both image and frequency domains. However, these models often struggle with generalization across varying resolutions, a critical need in clinical practice where flexible resolution adjustments are necessary. A unified model that is agnostic to discretizations would greatly improve efficiency." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 304, + 448, + 548, + 639 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 448, + 548, + 639 + ], + "spans": [ + { + "bbox": [ + 304, + 448, + 548, + 639 + ], + "type": "text", + "content": "Neural Operators (NOs) [21] are a deep learning framework that learns mappings between infinite-dimensional function spaces, making them agnostic to discretizations (resolutions). This property makes them suitable for tasks with data at varying resolutions, such as partial differential equations (PDEs) [21, 25, 34] and PDE-related applications [32, 35]. NOs could also be suitable for compressed sensing MRI due to measurements with multiple undersampling patterns. Various NO architectures [24, 26, 34] have been proposed. Recently, discrete-continuous (DISCO) convolutions [26, 31] have emerged as an efficient neural operator that captures local features and leverages GPU acceleration for standard convolutions. Due to the similarity to standard convolutions, the building blocks of many existing MRI deep learning models [5, 40], DISCO is a good candidate for resolution-agnostic MRI reconstruction." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 304, + 641, + 548, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 641, + 548, + 713 + ], + "spans": [ + { + "bbox": [ + 304, + 641, + 548, + 713 + ], + "type": "text", + "content": "Our approach: We propose a unified model based on NOs, that is robust to different undersampling patterns and image resolutions in compressed sensing MRI (Fig. 1a). Our model follows an unrolled network design [15, 40] with DISCO [26, 31]. As the image resolution increases, DISCO maintains a resolution-agnostic kernel with a consistent con" + } + ] + } + ], + "index": 14 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "spans": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "text", + "content": "CVF" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "spans": [ + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "type": "text", + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 58, + 703, + 126, + 712 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 58, + 703, + 126, + 712 + ], + "spans": [ + { + "bbox": [ + 58, + 703, + 126, + 712 + ], + "type": "text", + "content": "*Equal contribution." 
+ } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "text", + "content": "26004" + } + ] + } + ], + "index": 16 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 52, + 73, + 296, + 214 + ], + "blocks": [ + { + "bbox": [ + 52, + 73, + 296, + 214 + ], + "lines": [ + { + "bbox": [ + 52, + 73, + 296, + 214 + ], + "spans": [ + { + "bbox": [ + 52, + 73, + 296, + 214 + ], + "type": "image", + "image_path": "79bd6ab2cc51219f5cbccc1f37da7e35cd83009cc487046729e06a66ec617a2f.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 296, + 74, + 530, + 225 + ], + "blocks": [ + { + "bbox": [ + 296, + 74, + 530, + 225 + ], + "lines": [ + { + "bbox": [ + 296, + 74, + 530, + 225 + ], + "spans": [ + { + "bbox": [ + 296, + 74, + 530, + 225 + ], + "type": "image", + "image_path": "fac92d5640d352f6c9f5be132c40c252f7c8960477bf38717cfcc6057fcf75af.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 52, + 217, + 293, + 351 + ], + "blocks": [ + { + "bbox": [ + 52, + 217, + 293, + 351 + ], + "lines": [ + { + "bbox": [ + 52, + 217, + 293, + 351 + ], + "spans": [ + { + "bbox": [ + 52, + 217, + 293, + 351 + ], + "type": "image", + "image_path": "4c2e564d9eecb317644849f140a69641654d87616d351ad9b836f7c25fa1567f.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 46, + 363, + 545, + 430 + ], + "lines": [ + { + "bbox": [ + 46, + 363, + 545, + 430 + ], + "spans": [ + { + "bbox": [ + 46, + 363, + 545, + 430 + ], + "type": "text", + "content": "Figure 1. (a) We propose a unified model for MRI reconstruction, called neural operator (NO), which works across various measurement undersampling patterns, overcoming the resolution dependency limit of CNN-based methods like [40] that require a specific model for each pattern. (b) NO achieves consistent performance across undersampling patterns and outperforms CNN architectures such as [40] (for " + }, + { + "bbox": [ + 46, + 363, + 545, + 430 + ], + "type": "inline_equation", + "content": "2 \\times" + }, + { + "bbox": [ + 46, + 363, + 545, + 430 + ], + "type": "text", + "content": " acceleration with one unrolled network cascade). (c) NO is resolution-agnostic. As image resolution increases, it maintains a consistent kernel size for alias-free rescaling, unlike CNNs with variable kernel sizes that risk aliasing. (d) NO enhances zero-shot super-resolution MRI reconstruction, outperforming CNNs [40]." 
+ } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 294, + 236, + 531, + 353 + ], + "blocks": [ + { + "bbox": [ + 294, + 236, + 531, + 353 + ], + "lines": [ + { + "bbox": [ + 294, + 236, + 531, + 353 + ], + "spans": [ + { + "bbox": [ + 294, + 236, + 531, + 353 + ], + "type": "image", + "image_path": "bd4d3c4cd2c6b2889be5b0e7e58493399d38bcf4d297f231bf96b7ad58336866.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + } + ], + "index": 3 + }, + { + "bbox": [ + 46, + 441, + 287, + 596 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 46, + 441, + 287, + 596 + ], + "spans": [ + { + "bbox": [ + 46, + 441, + 287, + 596 + ], + "type": "text", + "content": "volution patch size, while the regular convolution kernel contracts to a point (Fig. 1c). The DISCO operators learn in both measurement/frequency " + }, + { + "bbox": [ + 46, + 441, + 287, + 596 + ], + "type": "inline_equation", + "content": "\\mathbf{k}" + }, + { + "bbox": [ + 46, + 441, + 287, + 596 + ], + "type": "text", + "content": " space " + }, + { + "bbox": [ + 46, + 441, + 287, + 596 + ], + "type": "inline_equation", + "content": "(\\mathrm{NO_k})" + }, + { + "bbox": [ + 46, + 441, + 287, + 596 + ], + "type": "text", + "content": " and image space " + }, + { + "bbox": [ + 46, + 441, + 287, + 596 + ], + "type": "inline_equation", + "content": "(\\mathrm{NO_i})" + }, + { + "bbox": [ + 46, + 441, + 287, + 596 + ], + "type": "text", + "content": ". " + }, + { + "bbox": [ + 46, + 441, + 287, + 596 + ], + "type": "inline_equation", + "content": "\\mathrm{NO_k}" + }, + { + "bbox": [ + 46, + 441, + 287, + 596 + ], + "type": "text", + "content": " makes our framework agnostic to different measurement undersampling patterns, and " + }, + { + "bbox": [ + 46, + 441, + 287, + 596 + ], + "type": "inline_equation", + "content": "\\mathrm{NO_i}" + }, + { + "bbox": [ + 46, + 441, + 287, + 596 + ], + "type": "text", + "content": " makes the framework agnostic to different image resolutions. Additionally, the learning in both frequency and image space allows the model to capture both local and global features of images due to the duality of the Fourier transform that connects the frequency and image space. The resolution-agnostic design also enables super-resolution in both frequency and image space, allowing the extended field of view (FOV) and super-resolution of the reconstructed MR images." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 46, + 605, + 288, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 46, + 605, + 288, + 713 + ], + "spans": [ + { + "bbox": [ + 46, + 605, + 288, + 713 + ], + "type": "text", + "content": "We empirically demonstrate that our model is robust to different measurement undersampling rates and patterns (Fig. 1a). Our model performs consistently across these pattern variations, whereas the existing method drops in performance (Fig. 1b). We achieve up to " + }, + { + "bbox": [ + 46, + 605, + 288, + 713 + ], + "type": "inline_equation", + "content": "4 \\times" + }, + { + "bbox": [ + 46, + 605, + 288, + 713 + ], + "type": "text", + "content": " lower NMSE and 5 dB PSNR improvement from the baseline when evaluating on different undersampling patterns. 
The model is efficient and " + }, + { + "bbox": [ + 46, + 605, + 288, + 713 + ], + "type": "inline_equation", + "content": "600 \\times" + }, + { + "bbox": [ + 46, + 605, + 288, + 713 + ], + "type": "text", + "content": " faster than the diffusion baseline [5, 17, 43]. We also show that our model outperforms the state-of-the-art in" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 305, + 441, + 545, + 465 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 441, + 545, + 465 + ], + "spans": [ + { + "bbox": [ + 305, + 441, + 545, + 465 + ], + "type": "text", + "content": "zero-shot super-resolution inference (Fig. 1d) and extended FOV reconstruction (Fig. 5)." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 304, + 466, + 546, + 573 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 466, + 546, + 573 + ], + "spans": [ + { + "bbox": [ + 304, + 466, + 546, + 573 + ], + "type": "text", + "content": "Our work has two main contributions: 1) We propose a unified neural operator model that learns in function space and shows robust performance across different undersampling patterns and image resolutions in compressed sensing MRI. To the best of our knowledge, this is the first resolution-agnostic framework for MRI reconstruction. 2) Our model demonstrates empirical robustness across measurement undersampling rates and patterns, reconstructing MR images with zero-shot higher resolutions and a larger field of view." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 306, + 585, + 397, + 597 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 306, + 585, + 397, + 597 + ], + "spans": [ + { + "bbox": [ + 306, + 585, + 397, + 597 + ], + "type": "text", + "content": "2. Related Works" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 304, + 605, + 547, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 605, + 547, + 713 + ], + "spans": [ + { + "bbox": [ + 304, + 605, + 547, + 713 + ], + "type": "text", + "content": "Accelerated MRI. One way to accelerate MRI scan speed is parallel imaging, in which multiple receiver coils acquire different views of the object of interest simultaneously, and then combine them into a single image [11, 30, 37]. When MRI reconstruction is paired with compressed sensing, predefined priors or regularization filters can be leveraged to improve reconstruction quality [27, 28]. Recent works have shown that learned deep-learning priors outperform handcrafted priors in reconstruction fidelity. Convolutional neural" + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "text", + "content": "26005" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "bbox": [ + 46, + 72, + 289, + 215 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 46, + 72, + 289, + 215 + ], + "spans": [ + { + "bbox": [ + 46, + 72, + 289, + 215 + ], + "type": "text", + "content": "networks (CNNs) [8, 15, 18, 40], variational networks (based on variational minimization) [15, 40], and generative adversarial networks (GANs) [7, 18] have all demonstrated superior performance than traditional optimization approach for compressed sensing MRI reconstruction from undersampled measurements. 
However, unlike conventional compressed sensing which operates in the function space and is agnostic to measurement undersampling patterns, the aforementioned deep learning methods operate on a fixed resolution. As a result, changes in resolution lead to degradation in performance, and multiple models are needed for different settings. We propose a resolution-agnostic unified model." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 46, + 216, + 289, + 574 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 46, + 216, + 289, + 574 + ], + "spans": [ + { + "bbox": [ + 46, + 216, + 289, + 574 + ], + "type": "text", + "content": "Discretization-Agnostic Learning and Neural Operators. Empirically, diffusion models have shown relatively consistent performance with different measurement undersampling patterns in accelerated MRI [14]. However, diffusion models usually take more runtime at inference and need extensive hyperparameter tuning for good performance (Section 4.5). Additionally, they are not fundamentally discretization-agnostic by design. Neural operators [1, 21] are deep learning architectures specifically designed to learn mappings between infinite-dimensional function spaces. They are discretization-agnostic, allowing evaluation at any resolution, and converge to a desired operator as the resolution approaches infinity. Neural operators have empirically achieved good performance as surrogate models of numerical solutions to partial differential equations (PDEs) [21, 25, 34] with various applications, such as material science [35], weather forecasting [32], and photoacoustic imaging [13]. The design of neural operators often depends on the application at hand. For example, the Fourier neural operator (FNO) [24], which performs global convolutions, has shown consistent discretization-agnostic performance in various applications [1]. Other designs of neural operators [23, 26] rely on integration with locally-supported kernels to capture local features, which has shown to be useful in applications where local features are important, such as modeling turbulent fluids [23]. Additionally, neural operators with local integrals can be made efficient with parallel computing compared to those requiring global integrals. Our MRI framework, based on neural operators with local integrals, is agnostic to undersampling patterns and output image resolutions." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 47, + 586, + 107, + 597 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 47, + 586, + 107, + 597 + ], + "spans": [ + { + "bbox": [ + 47, + 586, + 107, + 597 + ], + "type": "text", + "content": "3. Methods" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 46, + 605, + 289, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 46, + 605, + 289, + 715 + ], + "spans": [ + { + "bbox": [ + 46, + 605, + 289, + 715 + ], + "type": "text", + "content": "We first discuss the background of compressed sensing MRI and the unrolled network framework we use. We then discuss how we can extend the existing network building block, standard convolution, to resolution-agnostic neural operators. We also introduce DISCO [31], a neural operator design we adopt, and we capture global and local image features with DISCO. We conclude the section with the super-resolution designs. 
We call the measurement or frequency space " + }, + { + "bbox": [ + 46, + 605, + 289, + 715 + ], + "type": "inline_equation", + "content": "\\mathbf{k}" + }, + { + "bbox": [ + 46, + 605, + 289, + 715 + ], + "type": "text", + "content": "-space, and physical or spatial space image space hereafter." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 305, + 72, + 541, + 84 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 72, + 541, + 84 + ], + "spans": [ + { + "bbox": [ + 305, + 72, + 541, + 84 + ], + "type": "text", + "content": "3.1. MRI Reconstruction with Unrolled Networks" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 305, + 90, + 545, + 126 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 90, + 545, + 126 + ], + "spans": [ + { + "bbox": [ + 305, + 90, + 545, + 126 + ], + "type": "text", + "content": "Background. In MRI, anatomical images " + }, + { + "bbox": [ + 305, + 90, + 545, + 126 + ], + "type": "inline_equation", + "content": "\\mathbf{x}" + }, + { + "bbox": [ + 305, + 90, + 545, + 126 + ], + "type": "text", + "content": " of the patient are reconstructed by acquiring frequency-domain measurements " + }, + { + "bbox": [ + 305, + 90, + 545, + 126 + ], + "type": "inline_equation", + "content": "\\mathbf{k}" + }, + { + "bbox": [ + 305, + 90, + 545, + 126 + ], + "type": "text", + "content": ", where the relationship is defined as:" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 393, + 133, + 547, + 147 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 393, + 133, + 547, + 147 + ], + "spans": [ + { + "bbox": [ + 393, + 133, + 547, + 147 + ], + "type": "interline_equation", + "content": "\\mathbf {k} := \\mathcal {F} (\\mathbf {x}) + \\epsilon \\tag {1}", + "image_path": "c6dd10124736af1e610574d26d0b80b2ce9b426d84a8133bd08bb3ef98dccd3b.jpg" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 304, + 154, + 547, + 286 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 154, + 547, + 286 + ], + "spans": [ + { + "bbox": [ + 304, + 154, + 547, + 286 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 304, + 154, + 547, + 286 + ], + "type": "inline_equation", + "content": "\\epsilon" + }, + { + "bbox": [ + 304, + 154, + 547, + 286 + ], + "type": "text", + "content": " is the measurement noise and " + }, + { + "bbox": [ + 304, + 154, + 547, + 286 + ], + "type": "inline_equation", + "content": "\\mathcal{F}" + }, + { + "bbox": [ + 304, + 154, + 547, + 286 + ], + "type": "text", + "content": " is the Fourier transform. In this paper, we consider the parallel imaging setting with multiple receiver coils [19, 44], where each coil captures a different region of the anatomy. 
The forward process of the " + }, + { + "bbox": [ + 304, + 154, + 547, + 286 + ], + "type": "inline_equation", + "content": "i^{\\mathrm{th}}" + }, + { + "bbox": [ + 304, + 154, + 547, + 286 + ], + "type": "text", + "content": " coil measures " + }, + { + "bbox": [ + 304, + 154, + 547, + 286 + ], + "type": "inline_equation", + "content": "\\mathbf{k}_i\\coloneqq \\mathcal{F}(S_i\\mathbf{x}) + \\epsilon_i" + }, + { + "bbox": [ + 304, + 154, + 547, + 286 + ], + "type": "text", + "content": " where " + }, + { + "bbox": [ + 304, + 154, + 547, + 286 + ], + "type": "inline_equation", + "content": "S_{i}" + }, + { + "bbox": [ + 304, + 154, + 547, + 286 + ], + "type": "text", + "content": " is a position-dependent sensitivity map for the " + }, + { + "bbox": [ + 304, + 154, + 547, + 286 + ], + "type": "inline_equation", + "content": "i^{\\mathrm{th}}" + }, + { + "bbox": [ + 304, + 154, + 547, + 286 + ], + "type": "text", + "content": " coil. To speed up the imaging process, measurements are undersampled as " + }, + { + "bbox": [ + 304, + 154, + 547, + 286 + ], + "type": "inline_equation", + "content": "\\tilde{\\mathbf{k}} = M\\mathbf{k}" + }, + { + "bbox": [ + 304, + 154, + 547, + 286 + ], + "type": "text", + "content": " in the compressed sensing MRI setting, where " + }, + { + "bbox": [ + 304, + 154, + 547, + 286 + ], + "type": "inline_equation", + "content": "M" + }, + { + "bbox": [ + 304, + 154, + 547, + 286 + ], + "type": "text", + "content": " is a binary mask that selects a subset of the k-space points. Classical compressed sensing methods reconstruct the image " + }, + { + "bbox": [ + 304, + 154, + 547, + 286 + ], + "type": "inline_equation", + "content": "\\hat{\\mathbf{x}}" + }, + { + "bbox": [ + 304, + 154, + 547, + 286 + ], + "type": "text", + "content": " by solving an optimization problem" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 335, + 291, + 546, + 320 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 335, + 291, + 546, + 320 + ], + "spans": [ + { + "bbox": [ + 335, + 291, + 546, + 320 + ], + "type": "interline_equation", + "content": "\\hat {\\mathbf {x}} = \\operatorname {a r g m i n} _ {\\mathbf {x}} \\frac {1}{2} \\sum_ {i} \\left\\| \\mathcal {A} (\\mathbf {x}) - \\tilde {\\mathbf {k}} \\right\\| _ {2} ^ {2} + \\lambda \\Psi (\\mathbf {x}) \\tag {2}", + "image_path": "699311f61010aa06f3df191d11e82f6d494817e2433b6e8f1b96682bb4340a2e.jpg" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 304, + 327, + 548, + 555 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 327, + 548, + 555 + ], + "spans": [ + { + "bbox": [ + 304, + 327, + 548, + 555 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 304, + 327, + 548, + 555 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 304, + 327, + 548, + 555 + ], + "type": "text", + "content": " is the coil index, " + }, + { + "bbox": [ + 304, + 327, + 548, + 555 + ], + "type": "inline_equation", + "content": "\\mathcal{A}(\\cdot) \\coloneqq MFS_{i}(\\cdot)" + }, + { + "bbox": [ + 304, + 327, + 548, + 555 + ], + "type": "text", + "content": " is the linear forward operator, and " + }, + { + "bbox": [ + 304, + 327, + 548, + 555 + ], + "type": "inline_equation", + "content": "\\Psi(\\mathbf{x})" + }, + { + "bbox": [ + 304, + 327, + 548, + 555 + ], + "type": "text", + "content": " is a regularization term. The optimization objective can be considered as a combination of physics constraint and prior. 
While the above optimization can be solved using classical optimization toolboxes, an increasing line of works uses deep neural networks to learn data priors and show improved reconstruction performance [15, 40]. Among them, unrolled networks [15, 40] have gained popularity as they incorporate the known forward model, resulting in state-of-the-art performance. Unrolling, which started with the nominal work of LISTA [10], proposes to design networks using iterations of an optimization algorithm to solve inverse problems. This approach incorporates domain knowledge (i.e., the forward model) and leverages deep learning to learn implicit priors from data [29, 41]. In the context of MRI and assuming a differential regularization term, the optimization problem is expanded to iterative gradient descent steps with injected CNN-based data priors. Each layer mimics the gradient descent step from " + }, + { + "bbox": [ + 304, + 327, + 548, + 555 + ], + "type": "inline_equation", + "content": "\\mathbf{x}^t" + }, + { + "bbox": [ + 304, + 327, + 548, + 555 + ], + "type": "text", + "content": " to " + }, + { + "bbox": [ + 304, + 327, + 548, + 555 + ], + "type": "inline_equation", + "content": "\\mathbf{x}^{t+1}" + }, + { + "bbox": [ + 304, + 327, + 548, + 555 + ], + "type": "text", + "content": ":" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 327, + 572, + 546, + 587 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 327, + 572, + 546, + 587 + ], + "spans": [ + { + "bbox": [ + 327, + 572, + 546, + 587 + ], + "type": "interline_equation", + "content": "\\mathbf {x} ^ {t + 1} \\leftarrow \\mathbf {x} ^ {t} - \\eta^ {t} \\mathcal {A} ^ {*} (\\mathcal {A} (\\mathbf {x} ^ {t}) - \\tilde {\\mathbf {k}}) + \\lambda^ {t} \\operatorname {C N N} (\\mathbf {x} ^ {t}) \\tag {3}", + "image_path": "1049e4ba96ab035341c5fa42dc7f918ae913cb29158ee77cdedb601a3d306812.jpg" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 304, + 594, + 547, + 689 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 594, + 547, + 689 + ], + "spans": [ + { + "bbox": [ + 304, + 594, + 547, + 689 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 304, + 594, + 547, + 689 + ], + "type": "inline_equation", + "content": "\\eta^t" + }, + { + "bbox": [ + 304, + 594, + 547, + 689 + ], + "type": "text", + "content": " controls the weight of data consistency term and " + }, + { + "bbox": [ + 304, + 594, + 547, + 689 + ], + "type": "inline_equation", + "content": "\\lambda^t" + }, + { + "bbox": [ + 304, + 594, + 547, + 689 + ], + "type": "text", + "content": " controls that of the data-driven prior term. The data consistency term samples the data in the frequency domain, hence it is applicable to any spatial resolution. However, the prior term only operates on a specific resolution with CNNs. This means when changing the undersampling patterns, one needs another CNN trained for that setting, which greatly limits the flexibility of the reconstruction system." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 305, + 689, + 547, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 689, + 547, + 714 + ], + "spans": [ + { + "bbox": [ + 305, + 689, + 547, + 714 + ], + "type": "text", + "content": "Extending to Neural Operators. 
We learn the prior in function space via discretization-agnostic neural operators" + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "text", + "content": "26006" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 51, + 62, + 532, + 161 + ], + "blocks": [ + { + "bbox": [ + 51, + 62, + 532, + 161 + ], + "lines": [ + { + "bbox": [ + 51, + 62, + 532, + 161 + ], + "spans": [ + { + "bbox": [ + 51, + 62, + 532, + 161 + ], + "type": "image", + "image_path": "1cfb141d7675bddef93de0b6f015a4dc0b22a2793f8b945af4a3bae97a2d8b20.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 46, + 175, + 547, + 221 + ], + "lines": [ + { + "bbox": [ + 46, + 175, + 547, + 221 + ], + "spans": [ + { + "bbox": [ + 46, + 175, + 547, + 221 + ], + "type": "text", + "content": "Figure 2. MRI reconstruction pipeline. NO learns data priors in function space with infinite resolution. Specifically we propose NOs in the k (frequency) space " + }, + { + "bbox": [ + 46, + 175, + 547, + 221 + ], + "type": "inline_equation", + "content": "\\mathrm{NO_k}" + }, + { + "bbox": [ + 46, + 175, + 547, + 221 + ], + "type": "text", + "content": " (k space NO) and image space " + }, + { + "bbox": [ + 46, + 175, + 547, + 221 + ], + "type": "inline_equation", + "content": "\\mathrm{NO_i}" + }, + { + "bbox": [ + 46, + 175, + 547, + 221 + ], + "type": "text", + "content": " (image space NO), which capture both global and local image features, due to the duality between physical and frequency space. " + }, + { + "bbox": [ + 46, + 175, + 547, + 221 + ], + "type": "inline_equation", + "content": "\\mathcal{F}^{-1}" + }, + { + "bbox": [ + 46, + 175, + 547, + 221 + ], + "type": "text", + "content": " refers to the inverse Fourier transform. We provide the framework design details in Section 3.1 and NO design details in Section 3.2." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 46, + 231, + 288, + 291 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 46, + 231, + 288, + 291 + ], + "spans": [ + { + "bbox": [ + 46, + 231, + 288, + 291 + ], + "type": "text", + "content": "in " + }, + { + "bbox": [ + 46, + 231, + 288, + 291 + ], + "type": "inline_equation", + "content": "\\mathbf{k}" + }, + { + "bbox": [ + 46, + 231, + 288, + 291 + ], + "type": "text", + "content": " space " + }, + { + "bbox": [ + 46, + 231, + 288, + 291 + ], + "type": "inline_equation", + "content": "(\\mathrm{NO_k})" + }, + { + "bbox": [ + 46, + 231, + 288, + 291 + ], + "type": "text", + "content": " and image space " + }, + { + "bbox": [ + 46, + 231, + 288, + 291 + ], + "type": "inline_equation", + "content": "(\\mathrm{NO_i})" + }, + { + "bbox": [ + 46, + 231, + 288, + 291 + ], + "type": "text", + "content": ". 
Specifically, we first use a " + }, + { + "bbox": [ + 46, + 231, + 288, + 291 + ], + "type": "inline_equation", + "content": "\\mathbf{k}" + }, + { + "bbox": [ + 46, + 231, + 288, + 291 + ], + "type": "text", + "content": " space neural operator " + }, + { + "bbox": [ + 46, + 231, + 288, + 291 + ], + "type": "inline_equation", + "content": "\\mathrm{NO_k}" + }, + { + "bbox": [ + 46, + 231, + 288, + 291 + ], + "type": "text", + "content": " to learn " + }, + { + "bbox": [ + 46, + 231, + 288, + 291 + ], + "type": "inline_equation", + "content": "\\mathbf{k}" + }, + { + "bbox": [ + 46, + 231, + 288, + 291 + ], + "type": "text", + "content": " space prior and then apply a cascade of unrolled layers, each of which features a data consistency loss and the image space " + }, + { + "bbox": [ + 46, + 231, + 288, + 291 + ], + "type": "inline_equation", + "content": "\\mathrm{NO_i}" + }, + { + "bbox": [ + 46, + 231, + 288, + 291 + ], + "type": "text", + "content": " for image prior learning:" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 81, + 299, + 287, + 313 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 81, + 299, + 287, + 313 + ], + "spans": [ + { + "bbox": [ + 81, + 299, + 287, + 313 + ], + "type": "interline_equation", + "content": "\\mathbf {x} ^ {0} \\leftarrow \\mathcal {F} ^ {- 1} \\left(\\mathrm {N O} _ {\\mathbf {k}} (\\tilde {\\mathbf {k}})\\right) \\tag {4}", + "image_path": "b43cb7aa58284213e5c9d0144e63d9fac69edb78d5fb28ae807fcd3efe2559b2.jpg" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 72, + 316, + 287, + 331 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 72, + 316, + 287, + 331 + ], + "spans": [ + { + "bbox": [ + 72, + 316, + 287, + 331 + ], + "type": "interline_equation", + "content": "\\mathbf {x} ^ {t + 1} \\leftarrow \\mathbf {x} ^ {t} - \\eta^ {t} \\mathcal {A} ^ {*} (\\mathcal {A} (\\mathbf {x} ^ {t}) - \\tilde {\\mathbf {k}}) + \\lambda^ {t} \\mathrm {N O} _ {\\mathbf {i}} ^ {t} (\\mathbf {x} ^ {t}) \\tag {5}", + "image_path": "bc420c2d1075627c9cf5fc074e9b671656ff4bb36a9c01052c9d3420d433d597.jpg" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 46, + 340, + 287, + 388 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 46, + 340, + 287, + 388 + ], + "spans": [ + { + "bbox": [ + 46, + 340, + 287, + 388 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 46, + 340, + 287, + 388 + ], + "type": "inline_equation", + "content": "\\mathrm{NO}_{\\mathbf{i}}^{t}" + }, + { + "bbox": [ + 46, + 340, + 287, + 388 + ], + "type": "text", + "content": " refers to the image-space NO at cascade " + }, + { + "bbox": [ + 46, + 340, + 287, + 388 + ], + "type": "inline_equation", + "content": "t" + }, + { + "bbox": [ + 46, + 340, + 287, + 388 + ], + "type": "text", + "content": ". We follow existing works [15, 40] and only have one " + }, + { + "bbox": [ + 46, + 340, + 287, + 388 + ], + "type": "inline_equation", + "content": "\\mathrm{NO}_{\\mathbf{k}}" + }, + { + "bbox": [ + 46, + 340, + 287, + 388 + ], + "type": "text", + "content": " for the first cascade. Our framework flexibly works for different resolutions with the design details in Section 3.2." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 46, + 388, + 287, + 485 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 46, + 388, + 287, + 485 + ], + "spans": [ + { + "bbox": [ + 46, + 388, + 287, + 485 + ], + "type": "text", + "content": "Framework Overview. Fig. 
2 depicts the pipeline of our neural operator framework for MRI reconstruction. The undersampled measurement " + }, + { + "bbox": [ + 46, + 388, + 287, + 485 + ], + "type": "inline_equation", + "content": "\\tilde{\\mathbf{k}}" + }, + { + "bbox": [ + 46, + 388, + 287, + 485 + ], + "type": "text", + "content": " is first fed to a neural operator " + }, + { + "bbox": [ + 46, + 388, + 287, + 485 + ], + "type": "inline_equation", + "content": "\\mathrm{NO_k}" + }, + { + "bbox": [ + 46, + 388, + 287, + 485 + ], + "type": "text", + "content": " which operates in measurement " + }, + { + "bbox": [ + 46, + 388, + 287, + 485 + ], + "type": "inline_equation", + "content": "\\mathbf{k}" + }, + { + "bbox": [ + 46, + 388, + 287, + 485 + ], + "type": "text", + "content": " space to learn global image features and then inverse Fourier transformed to get an image. Following Eqn. 4 and 5, we iterate a few cascades of unrolled layers, consisting of a neural operator " + }, + { + "bbox": [ + 46, + 388, + 287, + 485 + ], + "type": "inline_equation", + "content": "\\mathrm{NO_i}" + }, + { + "bbox": [ + 46, + 388, + 287, + 485 + ], + "type": "text", + "content": " which operates in image " + }, + { + "bbox": [ + 46, + 388, + 287, + 485 + ], + "type": "inline_equation", + "content": "\\mathbf{x}" + }, + { + "bbox": [ + 46, + 388, + 287, + 485 + ], + "type": "text", + "content": " space and a data consistency update." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 47, + 491, + 183, + 504 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 47, + 491, + 183, + 504 + ], + "spans": [ + { + "bbox": [ + 47, + 491, + 183, + 504 + ], + "type": "text", + "content": "3.2. Neural Operator Design" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 46, + 510, + 287, + 594 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 46, + 510, + 287, + 594 + ], + "spans": [ + { + "bbox": [ + 46, + 510, + 287, + 594 + ], + "type": "text", + "content": "Neural operators, which learn mappings between function spaces, offer a unified approach to discretization-agnostic MRI reconstruction. Given that accurate MRI reconstruction depends on capturing both local and global image features, we propose a neural operator architecture that incorporates both global and local inductive biases. We first discuss how we learn local features with local integration operators." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 46, + 594, + 288, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 46, + 594, + 288, + 714 + ], + "spans": [ + { + "bbox": [ + 46, + 594, + 288, + 714 + ], + "type": "text", + "content": "Local Features via Local Integration Operator. Historically, the most common method of embedding a local inductive bias into deep neural networks has been by using locally-supported convolutional kernels, as in convolutional neural networks (CNNs). However, standard discrete convolutional kernels used in CNNs do not satisfy the resolution-agnostic properties of neural operators. Specifically, Liu et al. [26] show that CNN-style convolutional kernels converge to pointwise linear operators as the resolution is increased, instead of the desired local integration in the limit of infinite" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 305, + 231, + 547, + 280 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 231, + 547, + 280 + ], + "spans": [ + { + "bbox": [ + 305, + 231, + 547, + 280 + ], + "type": "text", + "content": "resolution. 
For a kernel " + }, + { + "bbox": [ + 305, + 231, + 547, + 280 + ], + "type": "inline_equation", + "content": "\\kappa" + }, + { + "bbox": [ + 305, + 231, + 547, + 280 + ], + "type": "text", + "content": " and input function " + }, + { + "bbox": [ + 305, + 231, + 547, + 280 + ], + "type": "inline_equation", + "content": "g" + }, + { + "bbox": [ + 305, + 231, + 547, + 280 + ], + "type": "text", + "content": " defined over some compact subset " + }, + { + "bbox": [ + 305, + 231, + 547, + 280 + ], + "type": "inline_equation", + "content": "D\\subset \\mathbb{R}^d" + }, + { + "bbox": [ + 305, + 231, + 547, + 280 + ], + "type": "text", + "content": ", the local convolution operator in a standard convolution layer, which transforms input " + }, + { + "bbox": [ + 305, + 231, + 547, + 280 + ], + "type": "inline_equation", + "content": "u" + }, + { + "bbox": [ + 305, + 231, + 547, + 280 + ], + "type": "text", + "content": " to output " + }, + { + "bbox": [ + 305, + 231, + 547, + 280 + ], + "type": "inline_equation", + "content": "v" + }, + { + "bbox": [ + 305, + 231, + 547, + 280 + ], + "type": "text", + "content": ", is given by" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 350, + 283, + 545, + 309 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 350, + 283, + 545, + 309 + ], + "spans": [ + { + "bbox": [ + 350, + 283, + 545, + 309 + ], + "type": "interline_equation", + "content": "(k \\star g) (v) = \\int_ {D} \\kappa (u - v) \\cdot g (u) d u. \\tag {6}", + "image_path": "67a381478f41550282821217b2fde865698c79496a9607441ca39c60c6ae5063.jpg" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 305, + 313, + 547, + 373 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 313, + 547, + 373 + ], + "spans": [ + { + "bbox": [ + 305, + 313, + 547, + 373 + ], + "type": "text", + "content": "Given a particular set of input points " + }, + { + "bbox": [ + 305, + 313, + 547, + 373 + ], + "type": "inline_equation", + "content": "(u_{j})_{j = 1}^{m}\\subset D" + }, + { + "bbox": [ + 305, + 313, + 547, + 373 + ], + "type": "text", + "content": " with corresponding quadrature weights " + }, + { + "bbox": [ + 305, + 313, + 547, + 373 + ], + "type": "inline_equation", + "content": "q_{j}" + }, + { + "bbox": [ + 305, + 313, + 547, + 373 + ], + "type": "text", + "content": " and output positions " + }, + { + "bbox": [ + 305, + 313, + 547, + 373 + ], + "type": "inline_equation", + "content": "v_{i}\\in D" + }, + { + "bbox": [ + 305, + 313, + 547, + 373 + ], + "type": "text", + "content": ", we adopt the discrete-continuous convolutions (DISCO) framework for operator learning [26, 31] and approximate the continuous convolution (Eqn. 6) as" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 345, + 377, + 546, + 410 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 345, + 377, + 546, + 410 + ], + "spans": [ + { + "bbox": [ + 345, + 377, + 546, + 410 + ], + "type": "interline_equation", + "content": "(k \\star g) (v _ {i}) \\approx \\sum_ {j = 1} ^ {m} \\kappa \\left(u _ {j} - v _ {i}\\right) \\cdot g \\left(x _ {j}\\right) q _ {j}. 
Following [26], we parameterize $\kappa$ as a linear combination of predefined basis functions $\kappa^{\ell}$: $\kappa = \sum_{\ell=1}^{L} \theta^{\ell} \cdot \kappa^{\ell}$, where the $\theta^{\ell}$ are learnable parameters. We choose the linear piecewise basis from [26], as it achieves the best empirical results (see Sections B.3 and E of the Supplementary). The convolutional kernel is thus parameterized by a finite number of parameters, independently of the grid on which the kernel is evaluated. The kernel is resolution-agnostic because we disentangle the resolution-agnostic basis from the discrete learnable parameters: the basis $\kappa^{\ell}$ is defined in function space and is discretized at the desired resolution, while the discrete parameters $\theta^{\ell}$ are learned with gradient descent. Since we operate on an equidistant grid over a compact subset of $\mathbb{R}^2$, we follow [26] and implement Eqn. 7 using standard convolutional kernels (thus enjoying GPU acceleration in standard deep learning libraries) with two crucial modifications: 1) the kernel itself is defined as a linear combination of the basis functions $\kappa^{\ell}$, and 2) the kernel size scales with the input resolution so as to remain a fixed size w.r.t. the input domain. We adopt the same basis functions as [26] in our experiments, and we use the local integration operator as the resolution-agnostic building block for the measurement-space and image-space operators.
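To make the construction concrete, the following is a minimal PyTorch sketch of a DISCO-style layer: the kernel is a learnable linear combination of fixed basis kernels discretized on the current grid (Eqn. 7). The class name `DiscoConv2d` and the radial hat-function basis are our illustrative stand-ins, not the paper's implementation, which uses the linear piecewise basis of [26].

```python
import torch
import torch.nn.functional as F

class DiscoConv2d(torch.nn.Module):
    """Sketch of a DISCO-style convolution (Eqn. 7): a conv kernel built as a
    learnable linear combination of fixed basis kernels discretized on the grid.
    Illustrative only; radial hat functions stand in for the basis of [26]."""

    def __init__(self, in_ch, out_ch, num_basis=4, kernel_size=5):
        super().__init__()
        ks = kernel_size
        yy, xx = torch.meshgrid(torch.linspace(-1, 1, ks),
                                torch.linspace(-1, 1, ks), indexing="ij")
        r = torch.sqrt(xx ** 2 + yy ** 2)
        radii = torch.linspace(0, 1, num_basis)
        width = 1.0 / num_basis
        # basis[l] is a piecewise-linear "hat" bump centered at radius radii[l]
        basis = torch.clamp(1 - (r[None] - radii[:, None, None]).abs() / width, min=0)
        self.register_buffer("basis", basis)            # (L, ks, ks), fixed
        self.theta = torch.nn.Parameter(0.1 * torch.randn(out_ch, in_ch, num_basis))

    def forward(self, x):
        # kernel = sum_l theta^l * kappa^l  ->  (out_ch, in_ch, ks, ks)
        kernel = torch.einsum("oil,lhw->oihw", self.theta, self.basis)
        return F.conv2d(x, kernel, padding=kernel.shape[-1] // 2)
```

To evaluate at a denser grid, one would re-discretize `basis` at a proportionally larger `kernel_size` while keeping `theta` fixed; this is what makes the learned kernel a function-space object rather than a fixed pixel stencil.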
DISCO vs. Standard 2D Convolution with Varying Resolutions. As the input resolution increases (the discretization becomes denser), DISCO [31] maintains the kernel size of each convolution relative to the domain and converges to a local integral. The standard 2D convolution kernel, however, becomes relatively smaller and converges to a pointwise operator (Fig. 1c). Although one could alleviate this issue by interpolating the convolutional kernel to match the corresponding patch size at each resolution, the interpolated kernel exhibits artifacts that hurt performance at new resolutions (Fig. 1d). DISCO, in contrast, is agnostic to resolution changes because its kernel is defined in function space.

Global Features. A common neural operator architecture for learning global features is the Fourier neural operator (FNO) [24]. FNO takes the Fourier transform of the input, truncates the result beyond some fixed number of modes, and pointwise multiplies the result with a learned weight tensor, which is equivalent to a global convolution on the input by the convolution theorem. Interestingly, the forward process of MRI is itself a Fourier transformation, which means that local operations in measurement ($\mathbf{k}$) space are equivalent to global operations in image ($\mathbf{x}$) space and vice versa, due to their duality. Following FNO, we could apply a pointwise multiplication between the measurement $\mathbf{k}$ and a learned weight tensor to capture global image features. However, FNO truncates high frequencies, which are crucial for MRI reconstruction. To address this, we directly apply the DISCO local integration operator in measurement space, capturing global image features without feature-map truncation.
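The duality argument can be checked numerically. The snippet below verifies the convolution theorem that underlies it: a pointwise multiplication in k space equals a circular, hence global, convolution in image space; a small local kernel applied in k space likewise mixes information across the entire image.

```python
import torch

# Convolution theorem sanity check: multiplying in k space is a *global*
# (circular) convolution in image space.
x = torch.randn(8, 8, dtype=torch.complex64)      # image
w = torch.randn(8, 8, dtype=torch.complex64)      # k-space weights
via_k = torch.fft.ifft2(torch.fft.fft2(x) * w)    # pointwise multiply in k space

h = torch.fft.ifft2(w)                            # equivalent image-space kernel
direct = torch.zeros_like(x)
for a in range(8):                                # explicit circular convolution
    for b in range(8):
        direct += h[a, b] * torch.roll(x, shifts=(a, b), dims=(0, 1))

torch.testing.assert_close(via_k, direct, rtol=1e-3, atol=1e-3)
```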
UDNO: the Building Block. Without loss of generality, we make both the image-space operator $\mathrm{NO_i}$ and the $\mathbf{k}$-space operator $\mathrm{NO_k}$ local neural operators that capture local features in their respective domains; by the domain duality above, such a design learns both global and local image features. Motivations for adopting the U-shaped architecture are given in Fig. 7 and Section A of the Supplementary. Each operator consists of multiple sub-layers and is referred to as the U-Shaped DISCO Neural Operator, or UDNO. The motivation is that multi-scale designs have shown great success in capturing image features at different scales, and U-shaped networks are among the most popular architectures in computer vision, demonstrating strong performance in applications from medical imaging to diffusion [6, 33, 37]. Further, the UDNO makes our framework very similar to the existing state-of-the-art E2E-VN [40], the difference being that standard convolutions are replaced by DISCO operators. The UDNO follows the encoder/decoder architecture of the U-Net [37], replacing regular convolutions with DISCO layers.
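As a rough structural sketch (reusing the hypothetical `DiscoConv2d` above), a two-level UDNO-like network might look as follows; the real UDNO's depth, widths, and normalization are specified in the Supplementary, not here.

```python
import torch
import torch.nn.functional as F

class TinyUDNO(torch.nn.Module):
    """Two-level U-shaped encoder/decoder with DISCO layers in place of
    regular convolutions (an illustrative sketch, not the paper's network).
    Uses 2 input/output channels for the real and imaginary parts."""

    def __init__(self, ch=2, width=32):
        super().__init__()
        self.enc1 = DiscoConv2d(ch, width)
        self.enc2 = DiscoConv2d(width, 2 * width)
        self.dec1 = DiscoConv2d(2 * width, width)
        self.out = DiscoConv2d(2 * width, ch)   # input is skip-concatenated
        self.act = torch.nn.GELU()

    def forward(self, x):
        e1 = self.act(self.enc1(x))                          # full resolution
        e2 = self.act(self.enc2(F.avg_pool2d(e1, 2)))        # half resolution
        d1 = F.interpolate(self.act(self.dec1(e2)), scale_factor=2,
                           mode="bilinear", align_corners=False)
        return self.out(torch.cat([d1, e1], dim=1))          # skip connection
```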
Loss. The parameters of the proposed neural operator are estimated from the training data by minimizing the structural similarity loss between the reconstruction $\hat{\mathbf{x}}$ and the ground-truth image $\mathbf{x}^{*}$ (the same as the E2E-VN [40]):

$$\mathcal{L}(\hat{\mathbf{x}}, \mathbf{x}^{*}) = -\operatorname{SSIM}(\hat{\mathbf{x}}, \mathbf{x}^{*}), \tag{8}$$

where SSIM is the Structural Similarity Index Measure [42].
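Putting the pieces together, here is a hedged sketch of the unrolled reconstruction loop from Fig. 2. Eqns. 4 and 5 are outside this excerpt, so the data consistency step below assumes a standard gradient-style form; `no_k` and `no_i` are any 2-channel modules such as the `TinyUDNO` sketch, and single-coil data is assumed for brevity (the real pipeline handles multiple coils).

```python
import torch

def c2r(z):                                   # complex (B,H,W) -> real (B,2,H,W)
    return torch.stack([z.real, z.imag], dim=1)

def r2c(t):                                   # real (B,2,H,W) -> complex (B,H,W)
    return torch.complex(t[:, 0], t[:, 1])

def data_consistency(x, k_meas, mask, lam=1.0):
    # Assumed gradient-style DC step (the paper's Eqns. 4-5 are not shown in
    # this excerpt); mask is a {0,1} float tensor of sampled k-space locations.
    k = torch.fft.fft2(x)
    return torch.fft.ifft2(k - lam * mask * (k - k_meas))

def reconstruct(k_meas, mask, no_k, no_i, cascades=12):
    """Fig. 2 pipeline sketch: k-space operator, inverse FFT, then cascades
    of an image-space operator followed by a data consistency update."""
    x = torch.fft.ifft2(r2c(no_k(c2r(k_meas))))    # global features via k space
    for _ in range(cascades):
        x = r2c(no_i(c2r(x)))                      # image-space refinement
        x = data_consistency(x, k_meas, mask)      # enforce measured samples
    return x

# Training would minimize -SSIM(|x_hat|, x_star) per Eqn. 8, e.g. with a
# library SSIM such as torchmetrics' StructuralSimilarityIndexMeasure.
```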
Figure 3. Super-resolution (denser discretization) in $\mathbf{k}$ space or image space increases the FOV or resolution of the reconstructed image. With denser discretization, NO maintains a resolution-agnostic kernel while CNN kernels become relatively smaller. Empirically, our NO outperforms CNNs [40] (Section 4.4).

3.3. Super-Resolution

Neural operators enable zero-shot super-resolution. As shown in Fig. 3, increasing resolution corresponds to a denser discretization between fixed minimum and maximum values, while the overall domain range remains constant. Due to the duality of frequency and image space, increasing resolution in $\mathbf{k}$ space extends the field of view (FOV) of the reconstructed image, whereas increasing resolution in image space enhances the image's detail. Our proposed NO framework includes resolution-agnostic neural operators for both $\mathbf{k}$ space ($\mathrm{NO_k}$) and image space ($\mathrm{NO_i}$), facilitating zero-shot super-resolution in both domains. We present empirical zero-shot super-resolution results in Section 4.4, comparing our NO framework to E2E-VN [40], a CNN-based architecture with a similar design.

4. Experiments

We discuss the datasets and experimental setup, followed by comparisons of our method and the baselines under different k space undersampling rates and patterns. We conclude the section with zero-shot super-resolution and additional analysis.

4.1. Dataset and Setup

Datasets. The fastMRI dataset [44] is a large, open dataset of fully sampled knee and brain MRIs.

- fastMRI knee: We use the multi-coil knee reconstruction dataset, with 34,742 slices for training and 7,135 slices for evaluation. All samples contain data from 15 coils.
- fastMRI brain: We use the T2-contrast subset of the multi-coil brain reconstruction dataset, with 6,262 training slices and 502 evaluation slices. We filter for samples with data from 16 coils.

Undersampling Patterns and Rates. We use equispaced, random, magic, Gaussian, radial, and Poisson undersampling patterns [5, 44] and 2×, 4×, 6×, and 8× undersampling rates (visualizations are in Fig. 8 of the Supplementary; a sketch of an equispaced mask follows below). Higher rates result in sparser k space samples and shorter imaging time, at the cost of a more ill-posed (harder) inversion process. Section B of the Supplementary provides additional undersampling details along with mask visualizations.
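For concreteness, the sketch below builds an equispaced mask in the usual fastMRI style: a fully sampled low-frequency band plus every `acceleration`-th phase-encode line. This follows common convention; the paper's exact mask generation (and the other five patterns) may differ in detail.

```python
import torch

def equispaced_mask(width, acceleration=4, center_fraction=0.08):
    """Equispaced k space undersampling mask over phase-encode columns
    (a common fastMRI-style convention; details here are assumptions)."""
    mask = torch.zeros(width, dtype=torch.bool)
    num_low = round(width * center_fraction)
    pad = (width - num_low) // 2
    mask[pad:pad + num_low] = True       # fully sampled low-frequency center
    mask[::acceleration] = True          # equispaced lines elsewhere
    return mask                          # broadcast along the readout axis

# e.g. equispaced_mask(320) keeps the central 8% of columns plus every 4th.
```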
| Category | Method | fastMRI knee PSNR (dB) | fastMRI knee SSIM | fastMRI brain PSNR (dB) | fastMRI brain SSIM |
|---|---|---|---|---|---|
| Learning-free | Zero-filled | 31.00 ± 3.33 | 0.7848 ± 0.0616 | 30.86 ± 1.73 | 0.8065 ± 0.0376 |
| Learning-free | ℓ1-Wavelet [27] | 25.67 ± 3.91 | 0.5667 ± 0.2001 | 28.68 ± 1.31 | 0.6783 ± 0.0684 |
| Diffusion | CSGM [17] | 26.52 ± 3.21 | 0.6789 ± 0.1220 | – | – |
| Diffusion | ScoreMRI [5] | 25.72 ± 1.80 | 0.5789 ± 0.0910 | – | – |
| Diffusion | PnP-DM [43] | 26.52 ± 3.14 | 0.6383 ± 0.1320 | – | – |
| End-to-end | U-Net [37] | 37.07 ± 2.47 | 0.8803 ± 0.0504 | 37.27 ± 1.76 | 0.9211 ± 0.0272 |
| End-to-end | E2E-VN [40] | 38.33 ± 3.06 | 0.9048 ± 0.0732 | 38.06 ± 2.70 | 0.9620 ± 0.0107 |
| End-to-end | Ours | 39.14 ± 2.93 | 0.9219 ± 0.0724 | 38.82 ± 2.77 | 0.9621 ± 0.0086 |

Table 1. MRI reconstruction performance on 4× equispaced undersampling. NO outperforms existing methods (classical, diffusion, and end-to-end) and also shows consistent performance across k space undersampling patterns (Section 4.3). Zero-filled refers to reconstructing the image from zero-filled k space.
Neural Operator Model. NO follows Fig. 2. The $\mathrm{NO_k}$ (k space neural operator) and $\mathrm{NO_i}$ (image space neural operator) are implemented as UDNOs with 2 input and output channels, because the complex numbers in MRI data are represented with two channels: one for the real part and one for the imaginary part. We provide UDNO details, DISCO kernel basis configurations, and training hyper-parameters in Section B of the Supplementary.

Baseline: Compressed Sensing. For a classical comparison, we compare with a learning-free compressed sensing method with wavelet $\ell_1$ regularization [27].

Baselines: Unrolled Networks. We compare with the E2E-VN (End-to-End VarNet) [40], which shares a similar network structure with our approach but uses standard CNNs with resolution-dependent convolutions. Since E2E-VN [40] is trained only at a specific resolution, we also consider E2E-VN++, where we train [40] with multiple patterns that match our NO's training data for a fair comparison. The number of cascades $t$ is set to 12, following [40].

Baselines: Diffusion. Diffusion models have shown strong performance on inverse problems such as MRI reconstruction. We compare our approach to three prominent diffusion-based methods: score-based diffusion models for accelerated MRI (ScoreMRI) [5], Compressive Sensing using Generative Models (CSGM) [17], and Plug-and-Play Diffusion Models (PnP-DM) [43]. We replicate the experimental settings described in their respective papers. While they report results on MVUE targets, we evaluate metrics on RSS targets at inference for a fair comparison with our methods.

Hardware and Training. While the models can be trained on a single RTX 4090 GPU, we accelerate the training of our model and the baselines with a batch size of 16 across 4 A100 (40G) GPUs. We follow the baselines' settings for comparison.
Evaluation Protocols. We evaluate image reconstruction performance using normalized mean squared error (NMSE), peak signal-to-noise ratio (PSNR), and the structural similarity index measure (SSIM), which are standard for the fastMRI dataset and MRI [44].
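For reference, minimal implementations of the two distortion metrics are sketched below. The conventions are assumed rather than taken from the paper; fastMRI computes PSNR per volume using the target volume's maximum as the data range, and SSIM is best taken from a library implementation.

```python
import torch

def nmse(x_hat, x):
    """Normalized mean squared error: ||x_hat - x||^2 / ||x||^2."""
    return (torch.norm(x_hat - x) ** 2 / torch.norm(x) ** 2).item()

def psnr(x_hat, x, data_range=None):
    """PSNR in dB; fastMRI uses the target volume's max as data_range."""
    if data_range is None:
        data_range = x.max()
    mse = torch.mean((x_hat - x) ** 2)
    return (10 * torch.log10(data_range ** 2 / mse)).item()

# SSIM involves windowed local statistics; in practice we would call a
# library implementation, e.g. skimage.metrics.structural_similarity.
```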
4.2. Reconstruction with Different k Space Undersampling Patterns

We train our NO model, E2E-VN, and E2E-VN++ on 4× equispaced samples for 50 epochs. Performance on the single 4× equispaced undersampling pattern is reported in Table 1. We further fine-tune NO and E2E-VN++ for an additional 20 epochs on a small dataset (3,474 samples) of equispaced, random, magic, Gaussian, radial, and Poisson samples.

fastMRI Knee. We also provide detailed metric results in Table 2, with a line plot in Fig. 6a; our NO achieves consistent performance across different patterns. Across all patterns, we achieve an average improvement of 4.17 dB PSNR and 8.4% SSIM over the E2E-VN. On rectilinear patterns (equispaced, magic, random), our performance remains comparable to E2E-VN++ (0.3 dB PSNR gain). Across the irregular patterns (radial, Gaussian, Poisson), we achieve a 0.6 dB PSNR improvement over the improved baseline (E2E-VN++).

fastMRI Brain. On irregular patterns, we achieve an average improvement of 4.7 dB PSNR and 10% SSIM over the E2E-VN. On rectilinear patterns (equispaced, magic, random), our performance remains comparable to the E2E-VN. Detailed numbers are reported in Table 7 of the Supplementary.

Visualization. We observe visual improvements in reconstruction integrity (see Fig. 4). Our model is robust to inference across multiple patterns; we highlight important local regions where our NO is better.

The setting above, where multiple patterns are trained together, is a common clinical setting in which the undersampling patterns are known. We also consider the setting where the undersampling patterns are unknown: zero-shot evaluations of the equispaced-trained (4×) model across different patterns show that our NO achieves a 1.8 dB PSNR gain over E2E-VN.

4.3. Reconstruction with Different k Space Undersampling Rates

We train our NO model, E2E-VN, and E2E-VN++ on 4× equispaced samples for 50 epochs. We further fine-tune NO and E2E-VN++ for an additional 20 epochs on a small dataset (3,474 samples) of 4×, 6×, 8×, and 16× equispaced samples.
For fastMRI knee, we report the multi-rate performance in Fig. 6b and Table 6 of the Supplementary; for fastMRI brain, in Table 8 of the Supplementary.

Figure 4. MRI reconstructions with different undersampling patterns for various methods: NO (ours), E2E-VN++, E2E-VN [40], ℓ1-Wavelet (learning-free compressed sensing) [27], and CSGM (diffusion) [17]. NO reconstructs high-fidelity images across the undersampling patterns; zoom-in view in the lower right of each image. Row 1: 4× equispaced undersampling. Row 2: 4× Gaussian 2D undersampling. Row 3: 4× radial 2D undersampling.
| Pattern | NO (ours) PSNR (dB) ↑ | E2E-VN++ PSNR (dB) ↑ | E2E-VN [40] PSNR (dB) ↑ | NO (ours) SSIM ↑ | E2E-VN++ SSIM ↑ | E2E-VN [40] SSIM ↑ | NO (ours) NMSE ↓ | E2E-VN++ NMSE ↓ | E2E-VN [40] NMSE ↓ |
|---|---|---|---|---|---|---|---|---|---|
| In-domain: Equispaced | 37.40 ± 2.61 | 37.50 ± 2.79 | 38.35 ± 3.05 | 0.899 ± 0.072 | 0.900 ± 0.072 | 0.905 ± 0.073 | 0.007 ± 0.006 | 0.007 ± 0.006 | 0.006 ± 0.006 |
| In-domain: Random | 36.66 ± 2.48 | 36.79 ± 2.65 | 37.34 ± 2.75 | 0.891 ± 0.070 | 0.892 ± 0.072 | 0.897 ± 0.071 | 0.008 ± 0.006 | 0.008 ± 0.007 | 0.007 ± 0.005 |
| In-domain: Magic | 38.46 ± 2.99 | 38.34 ± 3.06 | 38.94 ± 3.55 | 0.914 ± 0.070 | 0.914 ± 0.069 | 0.917 ± 0.071 | 0.006 ± 0.006 | 0.007 ± 0.006 | 0.006 ± 0.006 |
| OOD: Radial | 36.23 ± 2.21 | 35.50 ± 2.24 | 27.02 ± 3.92 | 0.900 ± 0.071 | 0.892 ± 0.070 | 0.764 ± 0.070 | 0.009 ± 0.006 | 0.011 ± 0.009 | 0.069 ± 0.030 |
| OOD: Poisson | 33.42 ± 2.34 | 33.01 ± 2.67 | 23.61 ± 3.85 | 0.878 ± 0.062 | 0.873 ± 0.060 | 0.687 ± 0.083 | 0.016 ± 0.008 | 0.017 ± 0.008 | 0.152 ± 0.068 |
| OOD: Gaussian | 31.25 ± 2.70 | 30.65 ± 2.55 | 23.14 ± 3.95 | 0.863 ± 0.058 | 0.851 ± 0.059 | 0.673 ± 0.088 | 0.024 ± 0.005 | 0.028 ± 0.007 | 0.170 ± 0.073 |

Table 2. MRI reconstruction performance across different undersampling patterns. NO maintains reconstruction performance across multiple patterns, while the baselines do not perform well on out-of-domain (OOD) undersampling patterns (Poisson, radial, Gaussian). Metrics are computed on the fastMRI knee dataset with a fixed 4× acceleration rate. We observe that the E2E-VN overfits to rectilinear patterns and drops off heavily when evaluated on the irregular patterns.

Our neural operator model consistently outperforms the E2E-VN [40], achieving 3.2 dB higher PSNR and 5.8% higher SSIM on fastMRI knee, and 2.0 dB higher PSNR and 7.5% higher SSIM on fastMRI brain.

4.4. Zero-Shot Super-Resolution

We study the zero-shot super-resolution performance of $\mathrm{NO_i}$ and $\mathrm{NO_k}$ and compare them with E2E-VN [40].
Higher MRI Resolution with $\mathrm{NO_i}$ Super-Resolution. We train our NO model and the E2E-VN models on 320×320 knee samples. We then keep $\mathrm{NO_k}$ unchanged and use bilinear interpolation to increase the input to $\mathrm{NO_i}$ to 640×640. We evaluate the models directly, without fine-tuning, against fully sampled 640×640 bilinearly interpolated ground-truth reconstructions. For [40], which relies on CNNs, the absolute kernel size stays the same, so the ratio of kernel size to feature-map size is halved, whereas this ratio stays the same for NO. Compared to our NO model, the CNN-based E2E-VN [40] produces reconstructions with noticeable artifacts and lower PSNR and image quality (Fig. 1d and Fig. 5b).

Larger MRI FOV with $\mathrm{NO_k}$ Super-Resolution. k space super-resolution expands the MRI reconstruction field of view (FOV). To validate model performance, we design a proof-of-concept FOV experiment. Our NO model and the E2E-VN [40] train on 160×160 downsampled k space brain slice samples, where sparse k space sampling results in a reduced FOV in image space. We then perform zero-shot inference on 320×320 full-FOV k space data. Although neither model encounters data outside the 160×160 FOV during training, our NO model reconstructs features in this extended region with significantly fewer artifacts compared to E2E-VN [40] (visualizations in Fig. 5a).
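In terms of the earlier sketches, zero-shot evaluation at a new discretization amounts to re-discretizing the fixed DISCO bases at a proportionally larger kernel size while reusing the learned coefficients, then running the unchanged model on the denser input. The helpers below are illustrative and assume the hypothetical `DiscoConv2d` above.

```python
import torch
import torch.nn.functional as F

def rediscretize(disco, new_kernel_size):
    """Rebuild a DiscoConv2d's fixed basis at a new kernel size, reusing the
    learned theta: the same function-space kernel, sampled on a denser grid."""
    out_ch, in_ch, num_basis = disco.theta.shape
    fresh = DiscoConv2d(in_ch, out_ch, num_basis=num_basis,
                        kernel_size=new_kernel_size)
    fresh.theta = disco.theta            # identical learnable coefficients
    return fresh

def eval_2x(model, x):
    """Zero-shot 2x super-resolution sketch: bilinearly upsample the input
    and run the (re-discretized) model without any fine-tuning."""
    x2 = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)
    with torch.no_grad():
        return model(x2)
```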
4.5. Additional Analysis

Model Inference and Tuning Time. In Table 3, we compare the model development and inference times of our end-to-end neural operator (NO) with those of diffusion models.

Figure 5. Zero-shot super-resolution results in both extended FOV ($\mathrm{NO_k}$) and high-resolution image space ($\mathrm{NO_i}$). (a) Zero-shot extended-FOV reconstructions: our NO model shows fewer artifacts and higher PSNR in the reconstructed brain slices compared to the CNN-based E2E-VN [40] on 4× Gaussian, despite neither model seeing data outside the initial 160×160 FOV during training. (b) Zero-shot super-resolution reconstructions in image space on 2× radial: with the input resolution increased to 640×640 through bilinear interpolation, our NO model preserves reconstruction quality, while E2E-VN [40] produces visible artifacts.
Figure 6. Performance across different undersampling patterns and rates for our method and the baselines: end-to-end [40], diffusion [17], and learning-free [27]. Our NO remains relatively consistent in performance when evaluated at different undersampling patterns and rates. Note that a higher undersampling rate makes the task more difficult, so a worse score is expected.
| Category | Method | Inference Time (s) | Tuning* Required |
|---|---|---|---|
| Learning-free | ℓ1-Wavelet [27] | 5.45 | Yes |
| Diffusion | CSGM [17] | 93.84 | Yes |
| Diffusion | PnP-DM [43] | 84.46 | Yes |
| Diffusion | ScoreMRI [5] | 96.16 | Yes |
| Variational | E2E-VN [40] | 0.104 | No |
| Variational | NO (ours) | 0.158 | No |

Table 3. Inference and tuning time of the methods, tested on an NVIDIA A100. NO is approximately 600× faster than diffusion and 35× faster than the classical baseline based on learning-free compressed sensing. *Tuning refers to the k undersampling pattern-specific hyperparameter tuning performed at inference, after model training. Both the ℓ1-Wavelet [27] (~0.5 hrs per pattern) and the diffusion methods (~6 hrs per pattern) require pattern-specific tuning, while our NO is trained once for all patterns.

We observe that diffusion models require pattern-specific hyperparameter tuning and are over 600 times slower at inference. The MRI diffusion models [5, 17, 43] are trained unconditionally, so undersampling patterns are not available during training. Thus, we empirically tune hyperparameters such as the learning rate and guidance scale for each undersampling pattern, which takes approximately 6 hours each time. Traditional learning-free methods like ℓ1-Wavelet [27] likewise require hyperparameter tuning for specific k undersampling patterns during optimization.
Consequently, end-to-end methods such as NO are significantly more efficient.

Performance Under the Same Parameter Size. In the Supplementary, we show that our NO outperforms the baseline unrolled network E2E-VN [40] on different patterns and rates with a similar architecture and number of parameters.

5. Conclusion

Our unified model for compressed sensing MRI addresses the need to train multiple models for different measurement undersampling patterns and image resolutions, a common clinical issue. By leveraging discretization-agnostic neural operators, the model captures both local and global features, enabling flexible MRI reconstruction. In extensive experiments on the fastMRI knee and brain datasets, our model maintains consistent performance across undersampling patterns and outperforms state-of-the-art methods in accuracy and robustness. It also enables zero-shot super-resolution and extended field of view (FOV). The work has some limitations: 1) we explore only one neural operator design, DISCO, and future work could explore other operator-learning architectures for MRI; 2) we benchmark only image reconstruction performance, not diagnostic accuracy, which is of greater clinical relevance.

In short, our approach offers a versatile solution for efficient MRI, with significant utility in clinical settings where flexibility and adaptability to varying undersampling patterns and image resolutions are crucial.

Acknowledgement

This work is supported in part by ONR (MURI grants N000142312654 and N000142012786). J.W.
is supported in part by Schmidt Sciences. A.S.J. and A.C. are supported in part by the Summer Undergraduate Research Fellowships (SURF) program at Caltech. Z.W. is supported in part by the Amazon AI4Science Fellowship. B.T. is supported in part by the Swartz Foundation Fellowship. M.L.-S. is supported in part by the Mellon Mays Undergraduate Fellowship. A.A. is supported in part by the Bren endowed chair and the AI2050 Senior Fellow program at Schmidt Sciences.

References

[1] Kamyar Azizzadenesheli, Nikola Kovachki, Zongyi Li, Miguel Liu-Schiaffini, Jean Kossaifi, and Anima Anandkumar. Neural operators for accelerating scientific simulations and design. Nature Reviews Physics, pages 1-9, 2024.
[2] Max Born and Emil Wolf. Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light. Elsevier, 2013.
[3] Scott Shaobing Chen, David L. Donoho, and Michael A. Saunders. Atomic decomposition by basis pursuit. SIAM Review, 43(1):129-159, 2001.
[4] Yutong Chen, Carola-Bibiane Schönlieb, Pietro Liò, Tim Leiner, Pier Luigi Dragotti, Ge Wang, Daniel Rueckert, David Firmin, and Guang Yang. AI-based reconstruction for fast MRI—a systematic review and meta-analysis. Proceedings of the IEEE, 110(2):224-245, 2022.
[5] Hyungjin Chung and Jong Chul Ye. Score-based diffusion models for accelerated MRI. Medical Image Analysis, 80:102479, 2022.
[6] Florinel-Alin Croitoru, Vlad Hondru, Radu Tudor Ionescu, and Mubarak Shah. Diffusion models in vision: A survey.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(9):10850-10869, 2023. 5" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 53, + 490, + 288, + 555 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 53, + 490, + 288, + 555 + ], + "spans": [ + { + "bbox": [ + 53, + 490, + 288, + 555 + ], + "type": "text", + "content": "[7] Salman UH Dar, Mahmut Yurt, Mohammad Shahdloo, Muhammed Emrullah Ildiz, Berk Tinaz, and Tolga Cukur. Prior-guided image reconstruction for accelerated multicontrast mri via generative adversarial networks. IEEE Journal of Selected Topics in Signal Processing, 14(6):1072-1087, 2020. 3" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 53, + 556, + 288, + 590 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 53, + 556, + 288, + 590 + ], + "spans": [ + { + "bbox": [ + 53, + 556, + 288, + 590 + ], + "type": "text", + "content": "[8] Mohammad Zalbagi Darestani and Reinhard Heckel. Accelerated mri with un-trained neural networks. IEEE Transactions on Computational Imaging, 7:724-733, 2021. 3" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 53, + 590, + 288, + 612 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 53, + 590, + 288, + 612 + ], + "spans": [ + { + "bbox": [ + 53, + 590, + 288, + 612 + ], + "type": "text", + "content": "[9] D.L. Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289-1306, 2006. 1" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 48, + 613, + 288, + 647 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 613, + 288, + 647 + ], + "spans": [ + { + "bbox": [ + 48, + 613, + 288, + 647 + ], + "type": "text", + "content": "[10] Karol Gregor and Yann LeCun. Learning fast approximations of sparse coding. In Proceedings of the 27th International Conference on Machine Learning, pages 399-406, 2010. 3" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 48, + 647, + 288, + 712 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 647, + 288, + 712 + ], + "spans": [ + { + "bbox": [ + 48, + 647, + 288, + 712 + ], + "type": "text", + "content": "[11] Mark A Griswold, Peter M Jakob, Robin M Heidemann, Mathias Nittka, Vladimir Jellus, Jianmin Wang, Berthold Kiefer, and Axel Haase. Generalized autocalibrating partially parallel acquisitions (grappa). Magnetic Resonance in Medicine: An Official Journal of the International Society for Magnetic Resonance in Medicine, 47(6):1202-1210, 2002. 1, 2" + } + ] + } + ], + "index": 13 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 307, + 73, + 547, + 713 + ], + "type": "list", + "angle": 0, + "index": 30, + "blocks": [ + { + "bbox": [ + 307, + 73, + 545, + 95 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 73, + 545, + 95 + ], + "spans": [ + { + "bbox": [ + 307, + 73, + 545, + 95 + ], + "type": "text", + "content": "[12] Charles W Groetsch and CW Groetsch. Inverse problems in the mathematical sciences. Springer, 1993. 1" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 307, + 97, + 547, + 129 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 97, + 547, + 129 + ], + "spans": [ + { + "bbox": [ + 307, + 97, + 547, + 129 + ], + "type": "text", + "content": "[13] Steven Guan, Ko-Tsung Hsu, and Parag V Chitnis. Fourier neural operator network for fast photoacoustic wave simulations. Algorithms, 16(2):124, 2023. 
3" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 307, + 131, + 547, + 175 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 131, + 547, + 175 + ], + "spans": [ + { + "bbox": [ + 307, + 131, + 547, + 175 + ], + "type": "text", + "content": "[14] Alper Güngör, Salman UH Dar, Şaban Öztürk, Yilmaz Korkmaz, Hasan A Bedel, Gokberk Elmas, Muzaffer Ozbey, and Tolga Çukur. Adaptive diffusion priors for accelerated mri reconstruction. Medical image analysis, 88:102872, 2023. 3" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 307, + 177, + 547, + 232 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 177, + 547, + 232 + ], + "spans": [ + { + "bbox": [ + 307, + 177, + 547, + 232 + ], + "type": "text", + "content": "[15] Kerstin Hammernik, Teresa Klatzer, Erich Kobler, Michael P Recht, Daniel K Sodickson, Thomas Pock, and Florian Knoll. Learning a variational network for reconstruction of accelerated mri data. Magnetic resonance in medicine, 79(6): 3055-3071, 2018. 1, 3, 4" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 307, + 234, + 547, + 277 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 234, + 547, + 277 + ], + "spans": [ + { + "bbox": [ + 307, + 234, + 547, + 277 + ], + "type": "text", + "content": "[16] DJ Husband, KA Grant, and CS Romaniuk. Mri in the diagnosis and treatment of suspected malignant spinal cord compression. The British journal of radiology, 74:15-23, 2001. 1" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 307, + 280, + 547, + 323 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 280, + 547, + 323 + ], + "spans": [ + { + "bbox": [ + 307, + 280, + 547, + 323 + ], + "type": "text", + "content": "[17] Ajil Jalal, Marius Arvinte, Giannis Daras, Eric Price, Alexandros G Dimakis, and Jonathan I Tamir. Robust compressed sensing mri with deep generative priors. Advances in Neural Information Processing Systems, 2021. 2, 6, 7, 8" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 307, + 326, + 547, + 369 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 326, + 547, + 369 + ], + "spans": [ + { + "bbox": [ + 307, + 326, + 547, + 369 + ], + "type": "text", + "content": "[18] Patricia M Johnson and Maria Drangova. Conditional generative adversarial network for 3d rigid-body motion correction in mri. Magnetic resonance in medicine, 82(3):901-910, 2019. 1, 3" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 307, + 371, + 547, + 414 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 371, + 547, + 414 + ], + "spans": [ + { + "bbox": [ + 307, + 371, + 547, + 414 + ], + "type": "text", + "content": "[19] Christoph Juchem, Omar M Nahnass, Terence W Nixon, and Robin A de Graaf. Multi-slice mri with the dynamic multicoil technique. NMR in Biomedicine, 28(11):1526-1534, 2015. 3" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 307, + 416, + 545, + 450 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 416, + 545, + 450 + ], + "spans": [ + { + "bbox": [ + 307, + 416, + 545, + 450 + ], + "type": "text", + "content": "[20] Dow-Mu Koh and David J Collins. Diffusion-weighted mri in the body: applications and challenges in oncology. American Journal of Roentgenology, 188(6):1622-1635, 2007. 
1" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 307, + 452, + 547, + 506 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 452, + 547, + 506 + ], + "spans": [ + { + "bbox": [ + 307, + 452, + 547, + 506 + ], + "type": "text", + "content": "[21] Nikola Kovachki, Zongyi Li, Burigede Liu, Kamyar Azizzadeneheshi, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Neural operator: Learning maps between function spaces with applications to pdes. Journal of Machine Learning Research, 24(89):1-97, 2023. 1, 3" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 307, + 508, + 547, + 540 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 508, + 547, + 540 + ], + "spans": [ + { + "bbox": [ + 307, + 508, + 547, + 540 + ], + "type": "text", + "content": "[22] Denis Le Bihan. Looking into the functional architecture of the brain with diffusion mri. Nature reviews neuroscience, 4 (6):469-480, 2003. 1" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 307, + 543, + 547, + 586 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 543, + 547, + 586 + ], + "spans": [ + { + "bbox": [ + 307, + 543, + 547, + 586 + ], + "type": "text", + "content": "[23] Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Neural operator: Graph kernel network for partial differential equations. 2020. 3" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 307, + 588, + 547, + 632 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 588, + 547, + 632 + ], + "spans": [ + { + "bbox": [ + 307, + 588, + 547, + 632 + ], + "type": "text", + "content": "[24] Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Fourier neural operator for parametric partial differential equations. 2021. 1, 3, 5, 11" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 307, + 634, + 547, + 689 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 634, + 547, + 689 + ], + "spans": [ + { + "bbox": [ + 307, + 634, + 547, + 689 + ], + "type": "text", + "content": "[25] Zongyi Li, Hongkai Zheng, Nikola Kovachki, David Jin, Haoxuan Chen, Burigede Liu, Kamyar Azizzadenesheli, and Anima Anandkumar. Physics-informed neural operator for learning partial differential equations. ACM/JMS Journal of Data Science, 1(3):1-27, 2024. 1, 3" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 307, + 691, + 547, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 691, + 547, + 713 + ], + "spans": [ + { + "bbox": [ + 307, + 691, + 547, + 713 + ], + "type": "text", + "content": "[26] Miguel Liu-Schiaffini, Julius Berner, Boris Bonev, Thorsten Kurth, Kamyar Azizzadenesheli, and Anima Anandkumar." 
+ } + ] + } + ], + "index": 29 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "type": "text", + "content": "26012" + } + ] + } + ], + "index": 31 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 48, + 72, + 288, + 713 + ], + "type": "list", + "angle": 0, + "index": 13, + "blocks": [ + { + "bbox": [ + 66, + 72, + 288, + 106 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 72, + 288, + 106 + ], + "spans": [ + { + "bbox": [ + 66, + 72, + 288, + 106 + ], + "type": "text", + "content": "Neural operators with localized integral and differential kernels. In *Forty-first International Conference on Machine Learning*, 2024. 1, 3, 4, 11, 14" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 48, + 107, + 288, + 150 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 107, + 288, + 150 + ], + "spans": [ + { + "bbox": [ + 48, + 107, + 288, + 150 + ], + "type": "text", + "content": "[27] Michael Lustig, David Donoho, and John M Pauly. Sparse MRI: The application of compressed sensing for rapid MR imaging. Magn. Reson. Med., 58(6):1182-1195, 2007. 2, 6, 7, 8" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 48, + 152, + 288, + 185 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 152, + 288, + 185 + ], + "spans": [ + { + "bbox": [ + 48, + 152, + 288, + 185 + ], + "type": "text", + "content": "[28] Michael Lustig, David L Donoho, Juan M Santos, and John M Pauly. Compressed sensing mri. IEEE signal processing magazine, 25(2):72-82, 2008. 1, 2" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 48, + 186, + 288, + 241 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 186, + 288, + 241 + ], + "spans": [ + { + "bbox": [ + 48, + 186, + 288, + 241 + ], + "type": "text", + "content": "[29] Morteza Mardani, Qingyun Sun, David Donoho, Vardan Papyan, Hatef Monajemi, Shreyas Vasanawala, and John Pauly. Neural proximal gradient descent for compressive imaging. Advances in Neural Information Processing Systems, 31, 2018. 3" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 48, + 243, + 288, + 297 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 243, + 288, + 297 + ], + "spans": [ + { + "bbox": [ + 48, + 243, + 288, + 297 + ], + "type": "text", + "content": "[30] Mark Murphy, Marcus Alley, James Demmel, Kurt Keutzer, Shreyas Vasanawala, and Michael Lustig. Fast 11-spirit compressed sensing parallel imaging mri: scalable parallel implementation and clinically feasible runtime. IEEE transactions on medical imaging, 31(6):1250-1262, 2012. 2" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 48, + 298, + 288, + 342 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 298, + 288, + 342 + ], + "spans": [ + { + "bbox": [ + 48, + 298, + 288, + 342 + ], + "type": "text", + "content": "[31] Jeremy Ocampo, Matthew A Price, and Jason D McEwen. Scalable and equivariant spherical cnns by discrete-continuous (disco) convolutions. arXiv preprint arXiv:2209.13603, 2022. 
1, 3, 4, 5, 12, 14" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 48, + 344, + 288, + 409 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 344, + 288, + 409 + ], + "spans": [ + { + "bbox": [ + 48, + 344, + 288, + 409 + ], + "type": "text", + "content": "[32] Jaideep Pathak, Shashank Subramanian, Peter Harrington, Sanjeev Raja, Ashesh Chattopadhyay, Morteza Mardani, Thorsten Kurth, David Hall, Zongyi Li, Kamyar Azizzadenesheli, et al. Fourcastnet: A global data-driven high-resolution weather model using adaptive fourier neural operators. arXiv preprint arXiv:2202.11214, 2022. 1, 3" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 48, + 411, + 288, + 453 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 411, + 288, + 453 + ], + "spans": [ + { + "bbox": [ + 48, + 411, + 288, + 453 + ], + "type": "text", + "content": "[33] William Peebles and Saining Xie. Scalable diffusion models with transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4195-4205, 2023. 5" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 48, + 456, + 288, + 511 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 456, + 288, + 511 + ], + "spans": [ + { + "bbox": [ + 48, + 456, + 288, + 511 + ], + "type": "text", + "content": "[34] Bogdan Raonic, Roberto Molinaro, Tim De Ryck, Tobias Rohner, Francesca Bartolucci, Rima Alaifari, Siddhartha Mishra, and Emmanuel de Bezenac. Convolutional neural operators for robust and accurate learning of pdes. Advances in Neural Information Processing Systems, 36, 2024. 1, 3" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 48, + 512, + 288, + 555 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 512, + 288, + 555 + ], + "spans": [ + { + "bbox": [ + 48, + 512, + 288, + 555 + ], + "type": "text", + "content": "[35] Meer Mehran Rashid, Tanu Pittie, Souvik Chakraborty, and NM Anoop Krishnan. Learning the stress-strain fields in digital composites using fourier neural operator. Iscience, 25 (11), 2022. 1, 3" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 48, + 557, + 288, + 601 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 557, + 288, + 601 + ], + "spans": [ + { + "bbox": [ + 48, + 557, + 288, + 601 + ], + "type": "text", + "content": "[36] J Craig Richardson, Richard W Bowtell, Karsten Mäder, and Colin D Melia. Pharmaceutical applications of magnetic resonance imaging (mri). Advanced drug delivery reviews, 57 (8):1191-1209, 2005. 1" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 48, + 602, + 288, + 668 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 602, + 288, + 668 + ], + "spans": [ + { + "bbox": [ + 48, + 602, + 288, + 668 + ], + "type": "text", + "content": "[37] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In Medical image computing and computer-assisted intervention-MICCAI 2015: 18th international conference, Munich, Germany, October 5-9, 2015, proceedings, part III 18, pages 234-241. Springer, 2015. 
2, 5, 6, 11" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 48, + 670, + 288, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 670, + 288, + 713 + ], + "spans": [ + { + "bbox": [ + 48, + 670, + 288, + 713 + ], + "type": "text", + "content": "[38] V Seifert, M Zimmermann, C Trantakis, H-E Vitzthum, K Kühnel, A Raabe, F Bootz, J-P Schneider, F Schmidt, and J Dietrich. Open mri-guided neurosurgery. Acta neurochirurgica, 141:455-464, 1999. 1" + } + ] + } + ], + "index": 12 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 307, + 73, + 547, + 385 + ], + "type": "list", + "angle": 0, + "index": 20, + "blocks": [ + { + "bbox": [ + 307, + 73, + 547, + 128 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 73, + 547, + 128 + ], + "spans": [ + { + "bbox": [ + 307, + 73, + 547, + 128 + ], + "type": "text", + "content": "[39] Dilbag Singh, Anmol Monga, Hector L de Moura, Xiaoxia Zhang, Marcelo VW Zibetti, and Ravinder R Regatte. Emerging trends in fast mri using deep-learning reconstruction on undersampled k-space data: a systematic review. Bioengineering, 10(9):1012, 2023. 1" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 307, + 129, + 547, + 217 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 129, + 547, + 217 + ], + "spans": [ + { + "bbox": [ + 307, + 129, + 547, + 217 + ], + "type": "text", + "content": "[40] Anuroop Sriram, Jure Zbontar, Tullie Murrell, Aaron Defazio, C Lawrence Zitnick, Nafissa Yakubova, Florian Knoll, and Patricia Johnson. End-to-end variational networks for accelerated mri reconstruction. In Medical Image Computing and Computer Assisted Intervention-MICCAI 2020: 23rd International Conference, Lima, Peru, October 4-8, 2020, Proceedings, Part II 23, pages 64-73. Springer, 2020. 1, 2, 3, 4, 5, 6, 7, 8, 11, 12, 13" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 307, + 218, + 547, + 251 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 218, + 547, + 251 + ], + "spans": [ + { + "bbox": [ + 307, + 218, + 547, + 251 + ], + "type": "text", + "content": "[41] Jian Sun, Huibin Li, Zongben Xu, et al. Deep admm-net for compressive sensing mri. Advances in neural information processing systems, 29, 2016. 3" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 307, + 251, + 547, + 285 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 251, + 547, + 285 + ], + "spans": [ + { + "bbox": [ + 307, + 251, + 547, + 285 + ], + "type": "text", + "content": "[42] Zhou Wang, Eero P Simoncelli, and Alan C Bovik. Multiscale structural similarity for image quality assessment. In Asilomar Conference on Signals, Systems & Computers, 2003. 5" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 308, + 285, + 547, + 340 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 285, + 547, + 340 + ], + "spans": [ + { + "bbox": [ + 308, + 285, + 547, + 340 + ], + "type": "text", + "content": "[43] Zihui Wu, Yu Sun, Yifan Chen, Bingliang Zhang, Yisong Yue, and Katherine Bouman. Principled probabilistic imaging using diffusion models as plug-and-play priors. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. 
2, 6, 8" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 308, + 341, + 547, + 385 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 341, + 547, + 385 + ], + "spans": [ + { + "bbox": [ + 308, + 341, + 547, + 385 + ], + "type": "text", + "content": "[44] Jure Zbontar, Florian Knoll, Anuroop Sriram, Tullie Murrell, Zhengnan Huang, Matthew J Muckley, Aaron Defazio, et al. fastmri: An open dataset and benchmarks for accelerated mri. arXiv preprint arXiv:1811.08839, 2018. 3, 5, 6, 12" + } + ] + } + ], + "index": 19 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "type": "text", + "content": "26013" + } + ] + } + ], + "index": 21 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2025/A Unified, Resilient, and Explainable Adversarial Patch Detector/b9c12ba3-81c5-4427-ad44-e661e361941c_content_list.json b/2025/A Unified, Resilient, and Explainable Adversarial Patch Detector/b9c12ba3-81c5-4427-ad44-e661e361941c_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..dc9204e7ecac20906b9741635b68bd6251295e21 --- /dev/null +++ b/2025/A Unified, Resilient, and Explainable Adversarial Patch Detector/b9c12ba3-81c5-4427-ad44-e661e361941c_content_list.json @@ -0,0 +1,1314 @@ +[ + { + "type": "text", + "text": "A Unified, Resilient, and Explainable Adversarial Patch Detector", + "text_level": 1, + "bbox": [ + 171, + 130, + 826, + 151 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Vishesh Kumar, Akshay Agarwal \nTrustworthy BiometraVision Lab, IISER Bhopal, India", + "bbox": [ + 281, + 181, + 715, + 215 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "{vishesh22,akagarwal}@iiserb.ac.in", + "bbox": [ + 346, + 219, + 647, + 233 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 246, + 268, + 326, + 284 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Deep Neural Networks (DNNs), backbone architecture in 'almost' every computer vision task, are vulnerable to adversarial attacks, particularly physical out-of-distribution (OOD) adversarial patches. Existing defense models often struggle with interpreting these attacks in ways that align with human visual perception. Our proposed AdvPatchXAI approach introduces a generalized, robust, and explainable defense algorithm designed to defend DNNs against physical adversarial threats. AdvPatchXAI employs a novel patch decorrelation loss that reduces feature redundancy and enhances the distinctiveness of patch representations, enabling better generalization across unseen adversarial scenarios. It learns prototypical parts self-supervised, enhancing interpretability and correlation with human vision. The model utilizes a sparse linear layer for classification, making the decision process globally interpretable through a set of learned prototypes and locally explainable by pinpointing relevant prototypes within an image. 
Our comprehensive evaluation shows that AdvPatchXAI closes the \"semantic\" gap between latent space and pixel space and effectively handles unseen adversarial patches even perturbed with unseen corruptions, thereby significantly advancing DNN robustness in practical settings1.", + "bbox": [ + 89, + 300, + 485, + 648 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1. Introduction", + "text_level": 1, + "bbox": [ + 91, + 675, + 220, + 690 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Besides being dominant in computer vision over the years, deep neural networks (DNNs) have been found vulnerable to adversarial attacks [1, 19]. Early adversarial attacks on DNNs tricked models using slight, barely noticeable noise [18, 52]. While the above attack perturbs an entire image, a few attacks often target the main object in the scene naturally [29, 63]. Another type of adversaries are poisoning attacks, misled models during training by introducing incorrect patterns, such as in poison frogs and backdoor attacks [3, 9, 50]. Surprisingly, many of these minute adversarial attacks are ineffective in the physical world due to", + "bbox": [ + 89, + 700, + 482, + 867 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "several unconstrained environmental factors, including rotation and translation properties of the objects. This led to the development of adversarial patch attacks, where a patterned sub-image is placed over the input image to deceive the model [8]. Due to expectation over transformation (EoT) constraints while learning the patches, they are highly effective in real-world scenarios. They can fool any possible deep networks, including vision transformers [5, 12, 20, 37, 53, 64].", + "bbox": [ + 511, + 268, + 906, + 407 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Therefore, securing DNN-based systems against stealthy and practical attacks is vital, especially in safety-critical domains such as autonomous driving, robotics, smart homes/cities, smart industries, video surveillance, and healthcare. Researchers are constantly developing new defenses or protection strategies [17, 21, 24, 39, 57] to tackle the limitations of DNNs against physical adversarial patches, but understanding their decision-making processes and evaluating the resiliency of the defense algorithms is increasingly important [42, 47]. It is to be noted that physical adversarial patches are visible to the human eye; however, they need extra precautions since the added patch can also be a real-world object. Due to this, automated detection of patches is challenging and leads to several false rejections. It is essential to align human and machine vision to address this issue. One way can be to align the functional properties of human and machine vision [16]. This alignment will help improve the machine's ability to detect and respond to out-of-distribution (OOD) adversarial patches more effectively in several generalized settings, such as unseen patches and unseen perturbation, ensuring better security and performance. 
\"To the best of our knowledge, no existing defense provides both high generalizability and explainability against adversarial patch attacks.\"", + "bbox": [ + 511, + 411, + 908, + 773 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "For the first time, we propose a generalized, robust, and explainable adversarial patch detector, namely AdvPatchXAI, by adding the model explainability since the beginning of the development of the network in contrast to the traditional methods, which plug in the explainability module once the model is thoroughly trained. Proposed AdvPatchXAI uses a sparse linear layer that connects learned prototypical parts to classes. This setup allows a user to", + "bbox": [ + 511, + 780, + 910, + 902 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "CVF", + "bbox": [ + 106, + 2, + 181, + 42 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore.", + "bbox": [ + 236, + 0, + 810, + 46 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "", + "bbox": [ + 89, + 875, + 482, + 900 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "30387", + "bbox": [ + 478, + 944, + 517, + 955 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "interpret the model by inspecting the prototypes and their relation to the classes. The weights of the linear layer are restricted to non-negative, ensuring that the presence of a class-relevant prototype increases the evidence for that class. This layer functions like a scoring sheet: the score for a class is the sum of all present prototypes multiplied by their weights. Using this interpretable and predictive linear layer, AdvPatchXAI ensures a direct relation between the prototypes and the classification. This approach improves the interpretability of the model's decisions and enhances its ability to detect and respond to adversarial patches. The significant strength of the proposed approach is that it can be plugged in with any deep learning architecture, including CNN and transformer-based architecture. In brief, the contributions of this research are:", + "bbox": [ + 89, + 90, + 480, + 316 + ], + "page_idx": 1 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- We have proposed a generalized, robust, and explainable patch detector (AdvPatchXAI), which effectively detects unseen and out-of-distribution adversarial patches.", + "- For the first time in the literature, we have evaluated the robustness of adversarial patch detectors against several common corruptions, ensuring their practicality in the unconstrained physical world.", + "- Extensive experimental comparison with benchmark and state-of-the-art works demonstrate the proposed defense algorithm's effectiveness, generalizability, and explainability." + ], + "bbox": [ + 89, + 316, + 480, + 482 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2. Literature review", + "text_level": 1, + "bbox": [ + 89, + 497, + 261, + 513 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "We identify three main philosophies for developing robust defenses against adversarial patch attacks. 
These approaches are not mutually exclusive and can be adopted together.", + "bbox": [ + 89, + 523, + 480, + 582 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Adversarial Patched Dataset: The first time Brown et al.[8] introduced the concept of adversarial patches to fool the object detectors. Since then, several advancements have been made to create more effective and stealthy adversarial patches, including LaVAN (focusing on weaknesses) [28], Adversarial QR codes (appearing less suspicious) [10, 11], PS-GAN (improved quality) [34], and DiAP (data-independent) [61]. A survey [51] reveals the vulnerability of various state-of-the-art pre-trained models YOLOv4 [7], ViT-B/16, Unet++ [62], YOLOv3 [45], YOLOv2 [46], YOLOv5 against physical adversarial patch attacks. Our first philosophy lies in the lack of a standardized dataset. While several effective adversarial patch generation algorithms exist [10, 32, 38, 61, 64], a lack of standardized datasets hinders the development of robust defense mechanisms. Recent efforts by Pintor et al. [44] and Ojaswee et al.[43] propose benchmark datasets specifically for adversarial patches but missing the effect of natural noises in real-world scenarios [2, 23]. While Kumar & Agarwal [31], for the first time, explored the combined effect of both adversarial patches and natural noises, they have not", + "bbox": [ + 89, + 584, + 482, + 900 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "proposed any novel defense algorithm. Our Second philosophy lies with the defense algorithm. It is also to be noted that minimal defense works exist that can effectively detect adversarial patch attacks in several generalized settings such as unseen datasets, unseen adversarial patches, and unseen threat model [25, 27, 32, 43] and also lack explainability. To address these gaps, in our research, we have regenerated a large-scale dataset followed by [31] containing 10 different adversarial patches [44] and three natural noises (gaussian noise, shot noise, and impulse noise); the detailed description of the dataset given in subsection 4.1. Our third philosophy concerns the lack of a generalized and explainable patch defense algorithm. Generalized & Explainable Patch Defense: While several studies have been explored for adversarial example detection in image classification task [6, 15, 59, 60], very few studies talk about adversarial patch detection in image classification. Pintor et al. [44] introduced an ImageNet-Patch dataset to benchmark machine learning models' robustness against adversarial patches. This dataset includes patches optimized to generalize across different models, allowing for a more efficient and transferable robustness evaluation. The dataset's utility has been demonstrated by testing its effectiveness against 127 models, showcasing its potential as a standard benchmark for assessing and improving model robustness against adversarial patches. Further, Ojaswee et al. [43], using subsets of ImageNet [13] and COCO [33] datasets, developed benchmark datasets and generalized the effect of these patches by finetuning several state-of-the-art DNNs in different generalized settings such as (i) seen patch settings (same patches during testing and training), (ii) unseen patch setting (different patches during training and testing), (iii) seen patch + unseen dataset, and (iv) unseen patch + unseen dataset. 
Again, Kumar & Agarwal [31] extended this work by training traditional machine learning algorithm using the features extracted by state-of-the-art DNNs in several conditions such as (i) seen patch, (ii) seen patch + natural noises, (iii) unseen patch, and (iv) unseen patch + noise. These findings indicate that defending against adversarial patches in unseen settings is challenging, as the effectiveness of defenses is closely tied to the attributes of the patches used during training. This means detectors have a lower detection rate for new, unseen patches. It has been noticed that none of these studies propose novel algorithms for patch detection. Moreover, to our knowledge, existing detectors do not provide sufficient explanations for the alerts they raise, leaving the reasoning behind their decisions unclear. This research primarily focused on unseen settings such as (i) unseen patch setting, (ii) unseen patch + noise, (iii) Unseen patch + unseen dataset, and (iv) unseen patch + unseen dataset + noise. Generalized better than existing defenses and provided detailed explanations through the proposed AdvPatchXAI patch detector.", + "bbox": [ + 511, + 90, + 906, + 892 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "30388", + "bbox": [ + 478, + 944, + 517, + 955 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/9b6d36f85b9743b72767020bbec29aaae3ac69a31a7f32306720f1d190696f12.jpg", + "image_caption": [ + "Figure 1. Overview of our proposed method. OOD Adversarial patches used in this research show on top of the proposed architecture. Patch Decorrelation: A novel patch decorrelation loss $L_{PD}$ followed the contrastive learning approach applying to the backbone features along with loss $L_{A}$ applying on softmax output to assign the same prototype of two representations of patches for an image pair. To avoid trivial solutions and encourage the utilization of all prototypes, a tanh-loss $L_{T}$ is applied during the self-supervised pretraining phase. Connections between learned part-prototypes and classes are established through a sparse linear layer. The standard loss used is negative log-likelihood, denoted $L_{C}$ . Model outputs remain unnormalized during testing, allowing them to serve as straightforward scoring metrics." + ], + "image_footnote": [], + "bbox": [ + 178, + 92, + 820, + 441 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3. Proposed Framework for AdvPatchXAI", + "text_level": 1, + "bbox": [ + 89, + 555, + 449, + 571 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "This section first briefly describes the binary classification problem of adversarial patch detection, followed by a comprehensive discussion of the proposed explainable adversarial patch detector, i.e., AdvPatchXAI.", + "bbox": [ + 89, + 580, + 482, + 642 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.1. Problem Statement & Notations", + "text_level": 1, + "bbox": [ + 89, + 651, + 372, + 666 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Given a binary classification problem with training dataset $D_{tr} = \\{(x,y), x \\in X_{tr}, y \\in Y_{tr}\\}$ containing known classes $C_2 = \\{c_1 \\text{ (real)}, c_2 \\text{ (patched)}\\}$ and a testing dataset $D_{te} = \\{(x', y'), x' \\in X_{te}, y' \\in Y_{te}\\}$ , where, $X_{tr}$ and $X_{te}$ represent the input images of the training and testing dataset, respectively, while $Y_{tr}$ and $Y_{te}$ are the corresponding class labels. 
We aim to learn interpretable prototypes that can be used as input features for the model explainability. The backbone of our model consists of pre-trained CNNs and ViT, which learn an interpretable, 1-dimensional image encoding $p$ that indicates the presence or absence of prototypical parts in an image. These prototypical parts (prototypes) are then connected to classes through a sparse linear layer.", + "bbox": [ + 89, + 672, + 482, + 883 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Our framework introduces two steps to effectively iden-", + "bbox": [ + 109, + 885, + 482, + 900 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "tify and classify the given input accurately: (i) data augmentation followed by TrivialAugment [40] to perform self-supervised pretraining of learned prototype patches and (ii) training AdvPatchXAI for effective and explainable adversarial patch detection.", + "bbox": [ + 511, + 556, + 906, + 632 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.2. Self-Supervised Pretraining of Prototypes", + "text_level": 1, + "bbox": [ + 511, + 648, + 867, + 666 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Following prior self-supervised learning methods [26], we generate a positive pair, denoted as $x'$ and $x''$ by using TrivialAugment [40]. This recently introduced augmentation strategy is efficient, requiring no hyperparameter tuning, and applies a single augmentation per image. However, in contrast to the standard approach, we applied TrivialAugment twice, as shown in Figure 1. The first augmentation operation applies location-related transformations or focuses on spatial transformations, including shearing, rotation, and translation. This augmented image is then used as input to another TrivialAugment operation, which involves color alterations, including brightness, sharpness, hue, and contrast. In our approach, we followed grayscale conversion in the second augmentation stage. We also experimented with RGB and YCbCr conversion but found a less", + "bbox": [ + 511, + 674, + 906, + 900 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "30389", + "bbox": [ + 478, + 944, + 517, + 955 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "effective strategy than grayscale (see subsection 6.1).", + "bbox": [ + 89, + 90, + 442, + 107 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.2.1. Patch Decoration Loss", + "text_level": 1, + "bbox": [ + 89, + 119, + 313, + 132 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "To reduce redundancy and enhance the distinctiveness of the patch features, we propose a patch decorrelation loss applied to patch features. We adopt a contrastive learning strategy similar to [56] to achieve alignment and uniformity of patch representations. However, unlike their image-level approach, we focus on patch-level alignment. The Patch Decorrelation Module plays a critical role in our method by promoting the similarity between features extracted from different views of the same patch while simultaneously reducing redundancy across different feature dimensions. An input image is first forwarded through the backbone DNN $f$ . The resulting output $z = f(x; w_f)$ consists of $D$ two-dimensional $(H \\times W)$ feature maps, where $w_f$ denotes the trainable parameters of $f$ . So, the feature maps for two different views of $x$ , $z' = f(x'; w_f)$ and $z'' = f(x''; w_f)$ . 
Since the feature dimensions of each are initially $(B, D, H, W)$ , where $B$ denotes the batch size and $D$ represents the total number of feature maps of dimension $H \\times W$ . We have converted it to $(B \\times H \\times W, D)$ , representing each image patch's features within the batch. Compute the cross-correlation matrix using the converted feature maps for both $x'$ and $x''$ as $[D_{cr}] = [d_{ij}]_{(D,D)} = \\frac{1}{B \\times H \\times W} [x']_{(B \\times H \\times W,D)}^{T} [x'']_{(B \\times H \\times W,D)}$ , where $i, j = 1, 2, \\dots, D$ . Consider an identity matrix $I = [I_{ij}]_{(D,D)}$ . Finally, the patch decorrelation loss (i.e., $L_{PD}$ ) is defined as:", + "bbox": [ + 89, + 138, + 483, + 532 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\nL _ {P D} = \\frac {\\sum_ {i = 1} ^ {D} \\left(d _ {i i} - I _ {i i}\\right) ^ {2} + \\lambda \\times \\left(\\left(d _ {i j} - I _ {i j}\\right) _ {i \\neq j}\\right) ^ {2}}{D} \\tag {1}\n$$\n", + "text_format": "latex", + "bbox": [ + 106, + 546, + 482, + 579 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Here, $\\lambda = 5 \\cdot e^{-3}$ is a trade-off parameter of the loss function. Our objective revolves around diagonalizing this cross-correlation matrix, which mitigates the risk of trivial solutions, such as activating the same prototype for all patches and promoting utilizing the entire prototype space.", + "bbox": [ + 89, + 594, + 483, + 672 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.2.2. Alignment Loss on Softmax Outputs", + "text_level": 1, + "bbox": [ + 89, + 683, + 393, + 698 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "After obtaining raw features aligned using the Patch Decorrelation Module, we further refine the alignment using a softmax-based alignment loss. This alignment loss, denoted as $L_{A}$ [41], ensures that the softmax-normalized feature vectors of corresponding patches from two views are closely aligned. Apply a softmax over $D$ such that $\\sum_{i=1}^{D} s_{h,w,i} = 1$ , ensuring that a patch located at $h, w \\in H \\times W$ corresponds to prototype $i$ . Ideally, $s_{h,w,:}$ is a one-hot encoded vector indicating a perfect assignment to one prototype. To measure the similarity between corresponding patches from two augmented views, we calculate the dot product between their latent representations, $(s_{h,w,:}^{\\prime})$ and $s_{h,w,:}^{\\prime \\prime}$ :", + "bbox": [ + 89, + 704, + 483, + 904 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\nL _ {A} = - \\frac {1}{H W} \\sum_ {(h, w) \\in H \\times W} \\log \\left(s _ {h, w,:} ^ {\\prime} \\cdot s _ {h, w,:} ^ {\\prime \\prime}\\right) \\tag {2}\n$$\n", + "text_format": "latex", + "bbox": [ + 558, + 113, + 906, + 152 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Our goal is to identify the presence or absence of prototypical parts within an image, for which a max-pooling operation across each feature map dimension (denoted as $s::,d$ ) is applied. This results in a presence score tensor, $p \\in [0,1]^D$ , where each element $p_d$ represents the strength of the $d$ th prototype's presence in the image. We introduce the tanh-based regularization loss $L_{T}$ [41] to prevent trivial solutions. The tanh-loss encourages the presence of each prototype at least once in a mini-batch. 
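For concreteness, a minimal PyTorch-style sketch of the two losses defined so far (Eq. 1 and Eq. 2) is given below. This is our illustrative reconstruction, not the authors' released code: the tensor names are hypothetical, the small eps is added for numerical stability, and we read the off-diagonal term of Eq. 1 as a sum over all i != j:

    import torch
    import torch.nn.functional as F

    def patch_decorrelation_loss(z1, z2, lam=5e-3):
        # Eq. (1): z1, z2 are backbone feature maps of the two views, (B, D, H, W)
        B, D, H, W = z1.shape
        f1 = z1.permute(0, 2, 3, 1).reshape(-1, D)   # one D-dim row per image patch
        f2 = z2.permute(0, 2, 3, 1).reshape(-1, D)
        d_cr = (f1.T @ f2) / f1.shape[0]             # cross-correlation matrix D_cr
        diff = (d_cr - torch.eye(D, device=d_cr.device)) ** 2
        on_diag = diff.diagonal().sum()              # (d_ii - 1)^2 terms
        off_diag = diff.sum() - on_diag              # d_ij^2 terms for i != j
        return (on_diag + lam * off_diag) / D

    def alignment_loss(z1, z2, eps=1e-8):
        # Eq. (2): align soft prototype assignments of corresponding patches
        s1 = F.softmax(z1, dim=1)                    # softmax over the D prototypes
        s2 = F.softmax(z2, dim=1)
        dot = (s1 * s2).sum(dim=1)                   # per-patch dot product, (B, H, W)
        return -torch.log(dot + eps).mean()          # average over batch and H x W
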
The tanh loss is defined as:", + "bbox": [ + 511, + 162, + 906, + 311 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\nL _ {T} (p) = - \\frac {1}{D} \\sum_ {i = 1} ^ {D} \\log (\\tanh (\\sum_ {b = 1} ^ {B} p _ {b}) + \\epsilon), \\tag {3}\n$$\n", + "text_format": "latex", + "bbox": [ + 571, + 323, + 906, + 367 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Where tanh and log are element-wise operations, $B$ is the number of samples in a mini-batch, $D$ is the number of prototypes, and $\\epsilon$ is a small number for numerical stability. This loss ensures that each prototype is utilized across the mini-batch, preventing any prototype from dominating and promoting a balanced representation across all prototypes.", + "bbox": [ + 511, + 371, + 905, + 462 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "The combination of patch decorrelation loss, alignment loss, and tanh loss enables our model to achieve robust patch-level alignment and uniformity. The patch decorrelation loss $(L_{PD})$ enhances the raw feature alignment, the alignment loss $(L_A)$ ensures the softmax-normalized features are closely matched, and the tanh loss $(L_T)$ prevents trivial solutions by promoting diversity in prototype assignments. The final objective of our pre-training phase of AdvPatchXAI is: $\\lambda_{PD}L_{PD} + \\lambda_{A}L_{A} + \\lambda_{T}L_{T}$ .", + "bbox": [ + 511, + 462, + 905, + 599 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.3. Training AdvPatchXAI", + "text_level": 1, + "bbox": [ + 511, + 606, + 730, + 622 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "After the pretraining phase of the prototypes, the patch presence score tensor, $p$ , is fed into a linear classification layer having non-negative weights, $w_{c} \\in \\mathbb{R}^{(D \\times C)} \\geq 0$ . It looks for only positive or true class evidence in an input image to which the image belongs. These weights connect prototypes to classes $(C)$ to ensure that the presence of prototypical parts contributes positively to the evidence for their associated class, enhancing interpretability. The bias term adjusts the classifier's decision threshold and is independent of the prototype's contributions. This separation ensures that the model remains interpretable, as each prototype's influence on the decision is straightforward and non-contradictory. The output score for each class is calculated by summing the element-wise product of the presence scores and the corresponding class weights from the linear layer. We incorporate a classification loss term, $L_{C}$ , to optimize model performance. This loss is calculated as the standard negative log-likelihood between the predicted", + "bbox": [ + 511, + 628, + 906, + 902 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "30390", + "bbox": [ + 478, + 944, + 519, + 957 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "class probabilities $\\hat{y}$ and the one-hot encoded ground truth label $y$ . While $L_{C}$ primarily influences the weights of the linear layer, it also fine-tunes the prototypes to discriminate features relevant to the classification task better. The overall objective for the second training phase of AdvPatchXAI is: $\\lambda_{PD}L_{PD} + \\lambda_{A}L_{A} + \\lambda_{T}L_{T} + \\lambda_{C}L_{C}$ .", + "bbox": [ + 89, + 90, + 480, + 183 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4. 
Experimental Result and Analysis", + "text_level": 1, + "bbox": [ + 89, + 195, + 401, + 212 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "The dataset used in this research for evaluating the proposed and existing defenses is discussed in subsection 4.1, followed by the implementation details of the proposed algorithm. Finally, the analysis of the proposed defense algorithm is discussed in detail.", + "bbox": [ + 89, + 220, + 483, + 295 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4.1. Out-of-Distribution Adversarial Patch Dataset", + "text_level": 1, + "bbox": [ + 89, + 304, + 483, + 319 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Following the procedure outlined by Kumar & Agarwal [31], this paper introduces two datasets focusing on physical adversarial patches and natural noises using ImageNet and COCO datasets. Images are attacked with 10 different styles of physical adversarial patches and three types of natural noises (Gaussian, Shot, and Impulse noise). For COCO, we randomly selected 2,000 clean images from the validation set. These images served as a clean subset. Another set of 2,000 images from the COCO validation set is selected, and 10 different adversarial patches are applied to each, generating 20,000 adversarial patch images. Clean and patched images are divided into training and testing sets in a 3:2 ratio. For example, 2,000 clean images of COCO and 2,000 images with a single patch are divided into 800 test images and 1200 train images. Also, applying three natural noises resulted in 2,400 noisy test images for both clean and patched sets. Since our primary focus is generalizability, we use ImageNet to test our model when trained on the COCO patched train subset. For ImageNet, we randomly selected 800 clean images from the validation set. Another set of 800 images is selected, and 10 different adversarial patches are applied to each image, generating 8000 adversarial patch images. Again, three different types of natural noise applied to each patch give 24,00 noisy images for clean and patched test sets. In total, the dataset includes 2,800 clean images, 28,000 adversarial patch images (20,000 from COCO and 8,000 from ImageNet), 4,800 noisy test images, and 48,000 images with both adversarial patches and natural noise.", + "bbox": [ + 89, + 325, + 483, + 763 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4.2. Implementation Details", + "text_level": 1, + "bbox": [ + 89, + 773, + 307, + 787 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "We utilize three convolutional backbones in AdvPatchXAI: ResNet50 [22] (R), ConvNext-tiny [36] (C), and MobileNetV2 [48] (M). A transformer-based network ViT [14] is also incorporated. The pre-trained models are used, but the strides of the last layers are modified from 2 to 1 to increase the width $(W)$ and height $(H)$ of the output feature maps (from $7 \\times 7$ to $28 \\times 28$ for ResNet and MobileNet,", + "bbox": [ + 89, + 794, + 480, + 900 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "$26 \\times 26$ for ConvNext, and reshaped the output feature into $14 \\times 14$ for ViT). This adjustment results in a finer-grained patch grid $z$ , improving patch similarity optimization. The backbone $f$ is fine-tuned using Adam with a learning rate 0.0005 and a cosine annealing schedule. The linear layer is trained with a learning rate of 0.05. Loss weights are set to $\\lambda_{C} = \\lambda_{T} = 2$ and $\\lambda_{PD} = \\lambda_{A} = 5$ . 
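To make these settings concrete, the following sketch (our assumption-laden reconstruction, not the authors' released code) wires a generic backbone to the presence-score pooling and the non-negative linear head of Section 3.3, using the loss weights stated above; patch_decorrelation_loss and alignment_loss are as sketched earlier, tanh_loss implements Eq. 3, and clamping after each optimizer step is one plausible way, assumed here, of keeping the class-evidence weights non-negative:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    D, C = 768, 2                      # number of prototypes / classes (illustrative)
    head = nn.Linear(D, C)             # linear layer connecting prototypes to classes

    def presence_scores(z):
        # max-pool soft assignments over H x W -> p in [0,1]^D per image
        return F.softmax(z, dim=1).amax(dim=(2, 3))

    def tanh_loss(p, eps=1e-8):
        # Eq. (3): every prototype should fire at least once per mini-batch
        return -torch.log(torch.tanh(p.sum(dim=0)) + eps).mean()

    def training_step(backbone, opt, x1, x2, y):
        z1, z2 = backbone(x1), backbone(x2)           # two TrivialAugment views
        p = presence_scores(z1)
        logits = head(p)
        loss = (5 * patch_decorrelation_loss(z1, z2)  # lambda_PD = 5
                + 5 * alignment_loss(z1, z2)          # lambda_A = 5
                + 2 * tanh_loss(p)                    # lambda_T = 2
                + 2 * F.cross_entropy(logits, y))     # lambda_C = 2 (neg. log-likelihood)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            head.weight.clamp_(min=0)                 # assumed non-negativity mechanism
        return loss.item()

The optimizer can then be built with the two stated learning rates, for example torch.optim.Adam([{"params": backbone.parameters(), "lr": 5e-4}, {"params": head.parameters(), "lr": 0.05}]).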
Prototypes are trained for 10 epochs, followed by training AdvPatchXAI for an additional 60 epochs. Images are resized to $224 \\times 224$ and augmented with TrivialAugment [40] in two stages: the first stage is related to location, and the second stage is related to color transformation (grayscale). Experiments are performed with seed value one to ensure reproducibility.", + "bbox": [ + 511, + 90, + 903, + 287 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4.3. Result Analysis", + "text_level": 1, + "bbox": [ + 511, + 295, + 668, + 311 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "We evaluated our proposed method, AdvPatchXAI, employed various backbone DNNs using a comprehensive set of experiments. We have trained our network on the COCO training subset for all 10 patches separately and evaluated them under several generalized zero-shot settings. These settings included scenarios with seen datasets with unseen patches and unseen datasets with unseen patches, both with and without natural noises. The effectiveness of AdvPatchXAI can be assessed through two key metrics: the robustness of the chosen backbone DNN and the success of the training patch in identifying unseen patches. The presence of natural noises during the testing ensures the detector's robustness in a black-box setting. To further strengthen the effectiveness of the proposed defense algorithm, we have evaluated its resiliency against adaptive attacks (discussed in supplementary).", + "bbox": [ + 511, + 316, + 903, + 559 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4.3.1. Zero-Shot OOD Patch Detection in the Absence of Noise", + "text_level": 1, + "bbox": [ + 511, + 564, + 906, + 594 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Table 1 and Table 2 present the average classification accuracy (Mean) of our proposed AdvPatchXAI on COCO and ImageNet datasets, respectively, combined with OOD unseen patches in silent (without noise) settings. Based on our findings, it can be seen that AdvPatchXAI-R (i.e., with backbone R) achieved the highest mean accuracy of $99.33\\%$ and $98.94\\%$ , particularly with Patch-9 on COCO and ImageNet datasets, respectively, demonstrating exceptional performance and robustness. AdvPatchXAI-ViT (i.e., with backbone ViT) also performed well, especially for Patch-4 on COCO $96.42\\%$ and Patch-0 on ImageNet $94.31\\%$ , showing reliability with moderate standard deviation (SD) (see supplementary for more detail) but found more vulnerable on Patch-5. However, AdvPatchXAI-M (MobileNet) and AdvPatchXAI-C (ConvNeXt) exhibited higher variability, with significant accuracy drops for specific patches such as Patch-3. Figure 2 gives a clearer picture of the robustness of AdvPatchXAI-R and AdvPatchXAI-ViT on both the COCO and ImageNet datasets. For example, the performance of AdvPatchXAI-R on COCO and ImageNet, $91.8\\%$", + "bbox": [ + 511, + 599, + 903, + 900 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "30391", + "bbox": [ + 478, + 944, + 517, + 955 + ], + "page_idx": 4 + }, + { + "type": "table", + "img_path": "images/c4975fa67b7054f405952c701bfd7f6943de2a9fbbf4a6c31de04b8be9efe9de.jpg", + "table_caption": [ + "Table 1. Adversarial patch detection accuracy of the proposed AdvPatchXAI with different backbone on COCO (seen dataset) subset in several generalized settings such as Silent (unseen patch without any noise), Noisy (unseen patch+noise). The results are reported as mean. 
Patch-{0-9}\\{1} indicates models are trained on Patch-1 and tested on all other patches except Patch-1. R, M, ViT, and C represent ResNet50, MobileNet, Vision Transformer, and ConvNeXt backbones, respectively. The best mean values are highlighted." + ], + "table_footnote": [], + "table_body": "
Method | Test | Patch-{0-9}\{0} | Patch-{0-9}\{1} | Patch-{0-9}\{2} | Patch-{0-9}\{3} | Patch-{0-9}\{4} | Patch-{0-9}\{5} | Patch-{0-9}\{6} | Patch-{0-9}\{7} | Patch-{0-9}\{8} | Patch-{0-9}\{9}
AdvPatchXAI-R | Silent | 94.09 | 95.57 | 84.69 | 83.30 | 97.99 | 79.65 | 85.53 | 96.91 | 94.76 | 99.33
AdvPatchXAI-R | Noisy | 85.81 | 88.86 | 78.07 | 72.01 | 96.09 | 69.05 | 78.95 | 94.59 | 83.79 | 95.92
AdvPatchXAI-M | Silent | 94.17 | 97.45 | 77.33 | 64.88 | 98.47 | 72.28 | 93.90 | 98.61 | 92.90 | 97.24
AdvPatchXAI-M | Noisy | 76.09 | 70.82 | 56.91 | 50.24 | 89.63 | 51.34 | 70.45 | 83.65 | 71.13 | 68.11
AdvPatchXAI-ViT | Silent | 95.84 | 95.19 | 93.20 | 81.74 | 96.42 | 67.27 | 89.08 | 93.90 | 89.00 | 94.32
AdvPatchXAI-ViT | Noisy | 92.31 | 92.61 | 88.91 | 78.91 | 92.95 | 64.57 | 85.74 | 90.52 | 80.56 | 93.13
AdvPatchXAI-C | Silent | 90.08 | 85.29 | 86.08 | 75.94 | 96.27 | 82.42 | 78.63 | 87.18 | 92.73 | 95.28
AdvPatchXAI-C | Noisy | 79.16 | 79.88 | 74.62 | 64.69 | 86.57 | 59.72 | 72.74 | 71.09 | 76.41 | 84.06
", + "bbox": [ + 93, + 146, + 898, + 228 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "and $90.91\\%$ , respectively, is at least ranges in $[2.2 - 4.81]\\%$ better than AdvPatchXAI with other backbones on both datasets. It can be seen that there is only a minor difference $[0.66 - 1.95]\\%$ in performance when both the dataset and patches are unknown to the model.", + "bbox": [ + 89, + 238, + 480, + 315 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.3.2. Zero-Shot OOD Patch Detection Under Noise Perturbation", + "text_level": 1, + "bbox": [ + 89, + 321, + 482, + 353 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "To demonstrate the robustness capabilities of our defense, we evaluated its resiliency when trained on clean images (without exposure to any noise during training) using COCO subsets. This approach is crucial because natural noises are inherent in the environment [4], and training on every possible type of noise is impractical. Therefore, detectors must be resilient enough to handle unseen natural noises.", + "bbox": [ + 88, + 356, + 482, + 474 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "The resiliency results of AdvPatchXAI in unseen patch detection in noisy (applying all three noise Gaussian, Shot, and Impulse with severity $= 2$ separately on test subset and taking the average patch wise) settings on COCO and ImageNet datasets are shown in Figure 2. While a drop in detection performance is expected, AdvPatchXAI-ViT exhibits only a marginal reduction. For example, its performance drops from $89.6\\%$ to $86.02\\%$ on COCO and from $87.65\\%$ to $83.97\\%$ on ImageNet. In contrast, other backbones suffer more significant decreases in accuracy. For instance, the performance of AdvPatchXAI-R drops by $7.49\\%$ on COCO and $7.73\\%$ on ImageNet, whereas the accuracy of AdvPatchXAI-M drops by $19.88\\%$ on COCO and $19.47\\%$ on ImageNet. Tables 1 and 2 provide detailed average 10-fold cross-validation performance in the unseen patch noise evaluation setting for COCO and ImageNet datasets, respectively. Notably, in zero-shot evaluations where images (clean and patched) are perturbed, AdvPatchXAI with ViT outperforms other backbones by a significant margin. Patch-4 proves to be more effective in 9 out of 16 evaluations, exhibiting higher mean accuracy. The performance difference between AdvPatch-ViT with the best-performing patch, and Patch-4 is less than $1.4\\%$ on both datasets and in each setting (silent, noisy), yet Patch-4 shows very low SD (see supplementary for more detail), indicating its higher effectiveness. Extensive experimental evaluation reveals that AdvPatchXAI-ViT with Patch-4 generalizes well in unseen patch settings and maintains high resiliency when images", + "bbox": [ + 91, + 477, + 483, + 901 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/31943b222b2e80e932f30eb179d708ca89e5be58e983a359f22bb6ea15b3fb6c.jpg", + "image_caption": [ + "Figure 2. Comparison with SOTA in terms of average adversarial patch detection accuracy for unseen patch and unseen patch + unseen noise detection on both COCO and ImageNet datasets." + ], + "image_footnote": [], + "bbox": [ + 517, + 237, + 901, + 366 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "are perturbed by noise. Therefore, in real-world applications, we recommend using AdvPatchXAI-ViT with Patch-4 to defend against adversarial patches.", + "bbox": [ + 511, + 424, + 906, + 470 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "5. 
Explainability and Comparison with SOTA", + "text_level": 1, + "bbox": [ + 511, + 481, + 898, + 501 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "To demonstrate the effectiveness of the proposed patch detector, we have performed an extensive comparison with state-of-the-art algorithms, which is discussed first; we then discuss the explainability of the proposed approach.", + "bbox": [ + 511, + 507, + 905, + 583 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "5.1. Comparison with SOTA and Baseline", + "text_level": 1, + "bbox": [ + 511, + 590, + 836, + 607 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "To demonstrate the effectiveness of our proposed model, we compared it with recent state-of-the-art (SOTA) methods: Ojaswee et al. [43] and Kumar & Agarwal [31]. For a fair comparison, we followed the same experimental protocol we used to evaluate the proposed algorithm. As shown in Figure 2, our method outperforms all SOTA methods, except in the case of AdvPatchXAI-M on both the COCO and ImageNet datasets when clean and patched images are perturbed with noise. Specifically, our AdvPatchXAI-R exceeds the performance of [43] by $13.35\\%$ and [31] by $11.9\\%$ on the COCO dataset. On the ImageNet dataset, AdvPatchXAI-R outperforms [43] by $12.03\\%$ and [31] by $14.01\\%$ when clean and unseen patched images are used for evaluation. Moreover, when clean and patched images are perturbed with noise, AdvPatchXAI-ViT surpasses [43] by $13.64\\%$ and [31] by $12.22\\%$ on the COCO dataset. On the ImageNet dataset, AdvPatchXAI-ViT outperforms [43] by $13.33\\%$ and [31] by $12.67\\%$. Our proposed model, based on prototypical parts, is unique in its approach to OOD adver", + "bbox": [ + 511, + 613, + 906, + 902 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "30392", + "bbox": [ + 478, + 944, + 519, + 957 + ], + "page_idx": 5 + }, + { + "type": "table", + "img_path": "images/519cf6a10e8509a0fa02ebd462317148f14ed926ec89ca19a7d126ffae92e577.jpg", + "table_caption": [ + "Table 2. Adversarial patch detection accuracy of the proposed AdvPatchXAI with different backbones on the ImageNet (unseen dataset) subset in several generalized settings such as Silent (unseen patch without any noise) and Noisy (unseen patch+noise). The results are reported as mean. Patch-{0-9}\\{3} indicates models are trained on Patch-3 and tested on all other patches except Patch-3. R, M, ViT, and C represent ResNet50, MobileNet, Vision Transformer, and ConvNeXt backbones, respectively. The best mean values are highlighted." + ], + "table_footnote": [], + "table_body": "
<table><tr><td>Method</td><td>Test</td><td>Patch-{0-9}\{0}</td><td>Patch-{0-9}\{1}</td><td>Patch-{0-9}\{2}</td><td>Patch-{0-9}\{3}</td><td>Patch-{0-9}\{4}</td><td>Patch-{0-9}\{5}</td><td>Patch-{0-9}\{6}</td><td>Patch-{0-9}\{7}</td><td>Patch-{0-9}\{8}</td><td>Patch-{0-9}\{9}</td></tr>
<tr><td rowspan="2">AdvPatchXAI-R</td><td>Silent</td><td>92.96</td><td>95.12</td><td>84.49</td><td>82.56</td><td>98.17</td><td>79.40</td><td>84.68</td><td>97.18</td><td>95.60</td><td>98.94</td></tr>
<tr><td>Noisy</td><td>84.39</td><td>88.51</td><td>77.81</td><td>71.42</td><td>95.67</td><td>68.34</td><td>78.30</td><td>93.72</td><td>78.88</td><td>94.76</td></tr>
<tr><td rowspan="2">AdvPatchXAI-M</td><td>Silent</td><td>93.68</td><td>95.28</td><td>77.30</td><td>64.13</td><td>95.71</td><td>70.72</td><td>91.93</td><td>96.81</td><td>91.90</td><td>95.92</td></tr>
<tr><td>Noisy</td><td>75.10</td><td>70.16</td><td>56.88</td><td>50.22</td><td>87.74</td><td>51.15</td><td>69.73</td><td>82.00</td><td>68.55</td><td>67.21</td></tr>
<tr><td rowspan="2">AdvPatchXAI-ViT</td><td>Silent</td><td>94.31</td><td>92.67</td><td>92.03</td><td>81.37</td><td>92.94</td><td>66.41</td><td>86.71</td><td>91.77</td><td>87.37</td><td>90.94</td></tr>
<tr><td>Noisy</td><td>90.18</td><td>90.14</td><td>87.98</td><td>78.88</td><td>89.19</td><td>64.22</td><td>83.73</td><td>87.19</td><td>78.16</td><td>90.04</td></tr>
<tr><td rowspan="2">AdvPatchXAI-C</td><td>Silent</td><td>89.41</td><td>85.26</td><td>85.12</td><td>75.91</td><td>96.23</td><td>81.21</td><td>77.30</td><td>86.67</td><td>91.53</td><td>94.65</td></tr>
<tr><td>Noisy</td><td>78.14</td><td>79.55</td><td>73.29</td><td>63.66</td><td>85.20</td><td>59.13</td><td>71.54</td><td>70.16</td><td>75.21</td><td>83.41</td></tr></table>
", + "bbox": [ + 94, + 146, + 898, + 228 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/570ab6025cc4a362d9a1d19834b7d40e6b32f91ac3f74eca5ee847cfa0dc827e.jpg", + "image_caption": [ + "Figure 3. t-SNE visualization of feature space of proposed AdvPatchXAI with backbone ViT and ConvNeXt trained on the most effective patch, Patch-4, and tested on other patches, Patch-0, Patch-1, and Patch-2. Red and blue represent patched and real class, respectively." + ], + "image_footnote": [], + "bbox": [ + 104, + 233, + 472, + 425 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "sarial patch detection. To demonstrate that, we compared it to PIP-Net [41], a recent prototypical-parts-based model that is the closest architecture to our proposed method. Since PIP-Net is designed for a convolutional backbone, we integrated ViT into PIP-Net and evaluated results on the COCO subset in an unseen patch (silent) setting. The average patch detection accuracy of PIP-Net with backbone R, M, and ViT are $87.07\\%$ , $81.82\\%$ , and $89.32\\%$ , respectively. In comparison, our AdvPatchXAI method achieved accuracies of $91.8\\%$ , $88.72\\%$ , and $89.6\\%$ with the identical respective backbones. In other words, proposed AdvPatchXAI shows $4.73\\%$ , $6.9\\%$ , and $0.28\\%$ improvement over the baseline PIP-Net with backbone R, M, and ViT, respectively.", + "bbox": [ + 88, + 515, + 482, + 727 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Apart from the above datasets, we have benchmarked our proposed model on other standard datasets CUB-200-2011 [55] and Stanford Cars [30] (results discussed in supplementary). These results highlight the robustness and effectiveness of our proposed model in detecting adversarial patches, particularly when utilizing ViT and ResNet backbones.", + "bbox": [ + 89, + 728, + 483, + 833 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "5.2. Explainable AdvPatchXAI", + "text_level": 1, + "bbox": [ + 89, + 847, + 333, + 863 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "While quantitative analysis is essential for assessing a model's effectiveness and robustness, explainability is", + "bbox": [ + 89, + 869, + 483, + 902 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/cf1dc040629defedebb46d9d81b4b42a389e5a3c4e98a56e4df1bacdfd645dab.jpg", + "image_caption": [ + "Figure 4. Heat map visualization of the proposed AdvPatchXAI trained on Patch-4 and visualized on Patch-0 on the COCO dataset under both silent (without noise) and noisy (gaussian noise with severity=2) settings. The same image has been taken under both silent and noisy settings for a fair comparison." + ], + "image_footnote": [], + "bbox": [ + 517, + 233, + 903, + 539 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "equally vital in supporting these findings. To address this, we performed several explainability analyses on our proposed AdvPatchXAI, including feature visualization, prototype prediction, prototype visualization, and Grad-CAM [49] visualization. Feature visualization helps in understanding the clusters formed by network features, leading to image classification or misclassification. Figure 3 shows the t-SNE [54] plot of the proposed model using an attention backbone (ViT) and a convolutional backbone (ConvNeXt) when trained on Patch-4 and tested on Patches 0 to 2. The clear separation of feature clusters supports the quantitative effectiveness of our proposed model. 
We also performed prototype prediction and visualization, along with Grad-CAM visualizations, as shown in Figure 4. The first column shows the unseen patched image and the unseen patched+noisy image, and the second column highlights relevant prototypes within the image using yellow boxes, providing a", + "bbox": [ + 511, + 643, + 906, + 902 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "30393", + "bbox": [ + 478, + 944, + 517, + 955 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/0ee4ce8a1377700f4b10f3fccfad03bf1a009dbc96fa1866d58a14e8b1c96b29.jpg", + "table_caption": [ + "Table 3. Mean adversarial patch detection accuracy of the proposed AdvPatchXAI across different color spaces (RGB, YCbCr, and grayscale) with ViT and ResNet50 backbones on COCO under the silent (unseen patch without any noise) setting." + ], + "table_footnote": [], + "table_body": "
<table><tr><td>Model (Channel)</td><td>Patch-{0-9}\{0}</td><td>Patch-{0-9}\{1}</td><td>Patch-{0-9}\{2}</td><td>Patch-{0-9}\{3}</td><td>Patch-{0-9}\{4}</td></tr>
<tr><td>ViT (RGB)</td><td>64.78</td><td>80.90</td><td>73.13</td><td>62.22</td><td>74.11</td></tr>
<tr><td>ViT (YCbCr)</td><td>65.57</td><td>90.50</td><td>84.36</td><td>66.72</td><td>78.44</td></tr>
<tr><td>ViT (Grayscale)</td><td>92.31</td><td>92.61</td><td>88.91</td><td>78.91</td><td>92.95</td></tr>
<tr><td>R (RGB)</td><td>63.71</td><td>77.49</td><td>61.44</td><td>62.95</td><td>81.26</td></tr>
<tr><td>R (YCbCr)</td><td>70.56</td><td>87.01</td><td>79.64</td><td>70.33</td><td>78.27</td></tr>
<tr><td>R (Grayscale)</td><td>85.81</td><td>88.86</td><td>78.07</td><td>72.01</td><td>96.09</td></tr></table>
", + "bbox": [ + 91, + 146, + 491, + 212 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "local explanation. The third column collects these relevant prototypes, further connecting them to the classes with sparse linear layers to predict the correct image class. It can be seen that when noise comes with a patched image, the prototype prediction is compromised, and the extracted prototypes are not as fine-grained and less relevant as clean images. The fourth column presents the Grad-CAM for each prototypical part, illustrating the image regions that the network focuses on while determining its class. This detailed insight into the model's decision-making process supports the robustness and effectiveness of AdvPatchXAI in detecting adversarial patches (further discussed in supplementary).", + "bbox": [ + 88, + 229, + 485, + 428 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "6. Ablation Studies", + "text_level": 1, + "bbox": [ + 89, + 441, + 256, + 458 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "In this section, we have presented ablation studies highlighting the impact of color channels, real-world robustness, and different terms used in the loss function. To ensure fairness, experiments are conducted using the pre-defined experimental protocol.", + "bbox": [ + 89, + 467, + 483, + 544 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "6.1. Effects of Different Color Augmentations", + "text_level": 1, + "bbox": [ + 89, + 556, + 442, + 571 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Table 3 showcases the advantage of converting images into grayscale. Since unseen patches and datasets can have diverse color distributions, having color information makes the system biased toward learning color information rather than focusing on patch information. We assert that suppressing non-useful information can make the system generalized, which can also be visible from the results. For example, when AdvPatchXAI-ViT is trained on Patch 4 using grayscale images, it yields at least $14.51\\%$ higher accuracy than RGB & YCbCr color channel-trained models.", + "bbox": [ + 89, + 578, + 485, + 729 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "6.2. Physical-World Effectiveness and Robustness Against Adversarial Patch Attack", + "text_level": 1, + "bbox": [ + 89, + 741, + 483, + 772 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "We have further evaluated the robustness of AdvPatchXAI for real-world adaptation by printing and applying the adversarial patches in the real world, as demonstrated by Pinton et al. [44]. Our proposed defense algorithm is robust in handling the attacks in the physical world and yields an average accuracy of $91.11\\%$ and $88.88\\%$ when the proposed ViT defense is trained with Patch 0 and Patch 4, respectively.", + "bbox": [ + 89, + 779, + 485, + 902 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/6850e2e5b14fdfe821bce4a75e6d454b51996f8e61926f70e8409ceff8a762ef.jpg", + "table_caption": [ + "Table 4. Ablation study concerning loss terms reflecting the total number of relevant prototypes with at least one non-zero weight present in our proposed AdvPatchXAI algorithm. The mean accuracy demonstrates the advantage of a combined loss function." + ], + "table_footnote": [], + "table_body": "
<table><tr><td rowspan="2">Method</td><td rowspan="2">Backbone</td><td colspan="5">Number of Prototypes</td><td rowspan="2">Mean Acc. ↑</td></tr>
<tr><td>Patch-0</td><td>Patch-1</td><td>Patch-2</td><td>Patch-3</td><td>Patch-4</td></tr>
<tr><td rowspan="3">AdvPatchXAI {LPD, LA, LT, LC}</td><td>R</td><td>263</td><td>220</td><td>182</td><td>209</td><td>232</td><td>91.13</td></tr>
<tr><td>ViT</td><td>187</td><td>168</td><td>203</td><td>166</td><td>203</td><td>92.48</td></tr>
<tr><td>M</td><td>82</td><td>55</td><td>79</td><td>87</td><td>65</td><td>88.46</td></tr>
<tr><td rowspan="3">AdvPatchXAI {LA, LT, LC}</td><td>R</td><td>370</td><td>284</td><td>251</td><td>252</td><td>347</td><td>86.44</td></tr>
<tr><td>ViT</td><td>195</td><td>151</td><td>161</td><td>152</td><td>161</td><td>92.66</td></tr>
<tr><td>M</td><td>67</td><td>61</td><td>100</td><td>91</td><td>101</td><td>77.71</td></tr></table>
", + "bbox": [ + 516, + 146, + 913, + 232 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "We generated perturbation-based PGD patch attacks followed by [35, 58] for a fair comparison to evaluate the adversarial robustness of our proposed defense model. We wanted to highlight that the performance of our proposed AdvPatchXAI-R and AdvPatchXAI-ViT yields $98.04\\%$ and $95.67\\%$ average detection accuracy, respectively, in a silent setting when a patch attack based on PGD perturbation is available for evaluation. Even in noisy settings, our proposed network is better robust against PGD perturbation-based patch attacks with average detection performances of $95.55\\%$ and $93.05\\%$ with the same respective backbone.", + "bbox": [ + 511, + 260, + 908, + 428 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "6.3. Effect of Proposed Loss Function", + "text_level": 1, + "bbox": [ + 511, + 440, + 805, + 458 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "We further examine the impact of the proposed algorithm's different linear combinations of loss functions. The corresponding outcomes presented in Table 4 reveal that the linear combination of our proposed patch decorrelation loss $L_{PD}$ with other loss functions improves interpretability. In other words, it reduces the number of relevant prototypes for a class and enhances the detection accuracy.", + "bbox": [ + 511, + 463, + 908, + 570 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "7. Conclusion", + "text_level": 1, + "bbox": [ + 511, + 587, + 633, + 604 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "In this research, we present AdvPatchXAI to significantly advance the development of a generalized, robust, and explainable adversarial patch detector. By incorporating prototypical parts and a novel patch decorrelation module, our model achieves unprecedented accuracy in physical adversarial patch detection, particularly in zero-shot settings with and without unseen noise perturbations. Furthermore, the explainable nature of AdvPatchXAI provides deep insights into the decision-making process, enabling better understanding and trust in AI systems. Future work will focus on enhancing the scalability of this approach and exploring its applicability to other forms of adversarial attacks, aiming to fortify AI defenses in increasingly complex applications.", + "bbox": [ + 511, + 614, + 908, + 811 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Acknowledgement", + "text_level": 1, + "bbox": [ + 511, + 828, + 671, + 845 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "V. Kumar is partially supported through the Visvesvaraya PhD Fellowship. A. Agarwal is partially funded through the ANRF PMECRG grant of Govt. of India.", + "bbox": [ + 511, + 854, + 906, + 902 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "30394", + "bbox": [ + 478, + 944, + 519, + 957 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 91, + 90, + 187, + 104 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[1] Akshay Agarwal, Nalini Ratha, Mayank Vatsa, and Richa Singh. Crafting adversarial perturbations via transformed image component swapping. IEEE Transactions on Image Processing, 31:7338-7349, 2022. 1", + "[2] Akshay Agarwal, Richa Singh, Mayank Vatsa, and Nalini Ratha. Image transformation-based defense against adversarial perturbation on deep learning models. 
IEEE Transactions on Dependable and Secure Computing, 18(5):2106-2121, 2020. 2", + "[3] Akshay Agarwal, Richa Singh, Mayank Vatsa, and Nalini Ratha. IBAttack: Being cautious about data labels. IEEE Transactions on Artificial Intelligence, 4(6):1484-1493, 2022. 1", + "[4] Akshay Agarwal, Mayank Vatsa, Richa Singh, and Nalini K Ratha. Noise is inside me! Generating adversarial perturbations with noise derived from natural filters. In IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 774-775, 2020. 6", + "[5] Naveed Akhtar and Ajmal Mian. Threat of adversarial attacks on deep learning in computer vision: A survey. IEEE Access, 6:14410-14430, 2018. 1", + "[6] Ahmed Aldahdooh, Wassim Hamidouche, Sid Ahmed Fezza, and Olivier Déforges. Adversarial example detection for dnn models: A review and experimental comparison. Artificial Intelligence Review, 55(6):4403-4462, 2022. 2", + "[7] Alexey Bochkovskiy, Chien-Yao Wang, and Hong-Yuan Mark Liao. Yolov4: Optimal speed and accuracy of object detection. arXiv preprint arXiv:2004.10934, 2020. 2", + "[8] Tom B Brown, Dandelion Mane, Aurko Roy, Martin Abadi, and Justin Gilmer. Adversarial patch. arXiv preprint arXiv:1712.09665, 2017. 1, 2", + "[9] Xinyun Chen, Chang Liu, Bo Li, Kimberly Lu, and Dawn Song. Targeted backdoor attacks on deep learning systems using data poisoning. arXiv preprint arXiv:1712.05526, 2017. 1", + "[10] Aran Chindaudom, Prarinya Siritanawan, Karin Sumongkayothin, and Kazunori Kotani. Adversarialqr: An adversarial patch in qr code format. In International Conference on Imaging, Vision & Pattern Recognition, pages 1-6, 2020. 2", + "[11] Aran Chindaudom, Prarinya Siritanawan, Karin Sumongkayothin, and Kazunori Kotani. Surreptitious adversarial examples through functioning qr code. Journal of Imaging, 8(5):122, 2022. 2", + "[12] Kenneth T Co, Luis Muñoz-González, Leslie Kanthan, and Emil C Lupu. Real-time detection of practical universal adversarial perturbations. arXiv preprint arXiv:2105.07334, 2021. 1", + "[13] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition, pages 248-255, 2009. 2", + "[14] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint" + ], + "bbox": [ + 93, + 114, + 483, + 887 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "arXiv:2010.11929, 2020. 5", + "[15] Gil Fidel, Ron Bitton, and Asaf Shabtai. When explainability meets adversarial learning: Detecting adversarial examples using shap signatures. In International Joint Conference on Neural Networks (IJCNN), pages 1-8. IEEE, 2020. 2", + "[16] Christina M Funke, Judy Borowski, Karolina Stosio, Wieland Brendel, Thomas SA Wallis, and Matthias Bethge. Five points to check when comparing visual perception in humans and machines. Journal of Vision, 21(3):16-16, 2021. 1", + "[17] Thomas Gittings, Steve Schneider, and John Collomosse. Vax-a-net: Training-time defence against adversarial patch attacks. In Asian Conference on Computer Vision, 2020. 1", + "[18] Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014. 
1", + "[19] Gaurav Goswami, Nalini Ratha, Akshay Agarwal, Richa Singh, and Mayank Vatsa. Unravelling robustness of deep learning based face recognition against adversarial attacks. In AAAI Conference on Artificial Intelligence, volume 32, 2018. 1", + "[20] Jindong Gu, Volker Tresp, and Yao Qin. Are vision transformers robust to patch perturbations? In European Conference on Computer Vision, pages 404-421, 2022. 1", + "[21] Jamie Hayes. On visible adversarial perturbations & digital watermarking. In IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 1597-1604, 2018. 1", + "[22] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2016. 5", + "[23] Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt, and Dawn Song. Natural adversarial examples. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15262-15271, 2021. 2", + "[24] Yuheng Huang and Yuanchun Li. Zero-shot certified defense against adversarial patches with vision transformers. arXiv preprint arXiv:2111.10481, 2021. 1", + "[25] Nan Ji, YanFei Feng, Haidong Xie, Xueshuang Xiang, and Naijin Liu. Adversarial yolo: Defense human detection patch attacks via detecting adversarial patches. arXiv preprint arXiv:2103.08860, 2021. 2", + "[26] Longlong Jing and Yingli Tian. Self-supervised visual feature learning with deep neural networks: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(11):4037-4058, 2020. 3", + "[27] Melanie Jutas, Ethan Liang, Sara Leary, Chris Ward, and Keith Manville. Detecting physical adversarial patch attacks with object detectors. In IEEE Applied Imagery Pattern Recognition Workshop, pages 1-7, 2022. 2", + "[28] Danny Karmon, Daniel Zoran, and Yoav Goldberg. Lavan: Localized and visible adversarial noise. In International Conference on Machine Learning, pages 2507-2515. PMLR, 2018. 2", + "[29] Adam Kortylewski, Qing Liu, Huiyu Wang, Zhishuai Zhang, and Alan Yuille. Combining compositional models and deep networks for robust object classification under occlusion. In IEEE/CVF Winter Conference on Applications of Computer Vision, pages 1333-1341, 2020. 1" + ], + "bbox": [ + 516, + 92, + 903, + 890 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "30395", + "bbox": [ + 478, + 944, + 517, + 955 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[30] Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3d object representations for fine-grained categorization. In IEEE International Conference on Computer Vision Workshops, pages 554–561, 2013. 7", + "[31] Vishesh Kumar and Akshay Agarwal. The unseen adversaries: Robust and generalized defense against adversarial patches. Available at SSRN 4772716, 2023. 2, 5, 6", + "[32] Juncheng Li, Frank Schmidt, and Zico Kolter. Adversarial camera stickers: A physical camera-based attack on deep learning systems. In International Conference on Machine Learning, pages 3896-3904. PMLR, 2019. 2", + "[33] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dólar, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European Conference on Computer Vision, pages 740-755, 2014. 2", + "[34] Aishan Liu, Xianglong Liu, Jiaxin Fan, Yuqing Ma, Anlan Zhang, Huiyuan Xie, and Dacheng Tao. Perceptual-sensitive gan for generating adversarial patches. 
In AAAI Conference on Artificial Intelligence, volume 33, pages 1028-1035, 2019. 2", + "[35] Jiang Liu, Alexander Levine, Chun Pong Lau, Rama Chellappa, and Soheil Feizi. Segment and complete: Defending object detectors against adversarial patch attacks with robust patch detection. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14973-14982, 2022. 8", + "[36] Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, and Saining Xie. A convnet for the 2020s. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11976-11986, 2022. 5", + "[37] Giulio Lovisotto, Nicole Finnie, Maurizio Munoz, Chaithanya Kumar Mummadi, and Jan Hendrik Metzen. Give me your attention: Dot-product attention considered harmful for adversarial patch robustness. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15234-15243, 2022. 1", + "[38] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017. 2", + "[39] Michael McCoyd, Won Park, Steven Chen, Neil Shah, Ryan Roggenkemper, Minjune Hwang, Jason Xinyu Liu, and David Wagner. Minority reports defense: Defending against adversarial patches. In International Conference on Applied Cryptography and Network Security, pages 564-582, 2020. 1", + "[40] Samuel G Müller and Frank Hutter. Trivialaugment: Tuning-free yet state-of-the-art data augmentation. In IEEE/CVF International Conference on Computer Vision, pages 774-782, 2021. 3, 5", + "[41] Meike Nauta, Jörg Schlötterer, Maurice van Keulen, and Christin Seifert. Pip-net: Patch-based intuitive prototypes for interpretable image classification. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023. 4, 7", + "[42] Meike Nauta, Jan Trienes, Shreyasi Pathak, Elisa Nguyen, Michelle Peters, Yasmin Schmitt, Jorg Schlötterer, Maurice van Keulen, and Christin Seifert. From anecdotal evidence to quantitative evaluation methods: A systematic review on evaluating explainable ai. ACM Computing Surveys, 55(13s):1-42, 2023. 1" + ], + "bbox": [ + 91, + 90, + 482, + 893 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[43] Ojaswee Ojaswee, Akshay Agarwal, and Nalini Ratha. Benchmarking image classifiers for physical out-of-distribution examples detection. In IEEE/CVF International Conference on Computer Vision, pages 4427-4435, 2023. 2, 6", + "[44] Maura Pintor, Daniele Angioni, Angelo Sotgiu, Luca Demetrio, Ambra Demontis, Battista Biggio, and Fabio Roli. Imagenet-patch: A dataset for benchmarking machine learning robustness against adversarial patches. Pattern Recognition, 134:109064, 2023. 2, 8", + "[45] Joseph Redmon. Yolov3: An incremental improvement. arXiv preprint arXiv:1804.02767, 2018. 2", + "[46] Joseph Redmon and Ali Farhadi. Yolo9000: Better, faster, stronger. In IEEE Conference on Computer Vision and Pattern Recognition, pages 7263-7271, 2017. 2", + "[47] Wojciech Samek, Grégoire Montavon, Sebastian Lapuschkin, Christopher J Anders, and Klaus-Robert Müller. Explaining deep neural networks and beyond: A review of methods and applications. Proceedings of the IEEE, 109(3):247-278, 2021. 1", + "[48] Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. Mobilenetv2: Inverted residuals and linear bottlenecks. In IEEE Conference on Computer Vision and Pattern Recognition, pages 4510-4520, 2018. 
5", + "[49] Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. In IEEE International Conference on Computer Vision, pages 618-626, 2017. 7", + "[50] Ali Shafahi, W Ronny Huang, Mahyar Najibi, Octavian Suciu, Christoph Studer, Tudor Dumitras, and Tom Goldstein. Poison frogs! targeted clean-label poisoning attacks on neural networks. Advances in Neural Information Processing Systems, 31, 2018. 1", + "[51] Abhijith Sharma, Yijun Bian, Phil Munz, and Apurva Narayan. Adversarial patch attacks and defences in vision-based tasks: A survey. arXiv preprint arXiv:2206.08304, 2022.2", + "[52] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013. 1", + "[53] Tung Tran, Issam Aib, Ehab Al-Shaer, and Raouf Boutaba. An evasive attack on snort flowbits. In IEEE Network Operations and Management Symposium, pages 351-358, 2012. 1", + "[54] Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of Machine Learning Research, 9(11), 2008. 7", + "[55] Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. The caltech-ucsd birds-200-2011 dataset. 2011. 7", + "[56] Tongzhou Wang and Phillip Isola. Understanding contrastive representation learning through alignment and uniformity on the hypersphere. In International Conference on Machine Learning, pages 9929-9939. PMLR, 2020. 4", + "[57] Chong Xiang, Arjun Nitin Bhagoji, Vikash Sehwag, and Prateek Mittal. {PatchGuard}: A provably robust defense against adversarial patches via small receptive fields and" + ], + "bbox": [ + 516, + 90, + 903, + 891 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "30396", + "bbox": [ + 478, + 944, + 519, + 955 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "masking. In USENIX Security Symposium, pages 2237-2254, 2021. 1", + "[58] Ke Xu, Yao Xiao, Zhaoheng Zheng, Kaijie Cai, and Ram Nevatia. Patchzero: Defending against adversarial patch attacks by detecting and zeroing the patch. In IEEE/CVF Winter Conference on Applications of Computer Vision, pages 4632-4641, 2023. 8", + "[59] Weilin Xu, David Evans, and Yanjun Qi. Feature squeezing: Detecting adversarial examples in deep neural networks. arXiv preprint arXiv:1704.01155, 2017. 2", + "[60] Puyudi Yang, Jianbo Chen, Cho-Jui Hsieh, Jane-Ling Wang, and Michael Jordan. Ml-loo: Detecting adversarial examples with feature attribution. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 6639-6647, 2020. 2", + "[61] Xingyu Zhou, Zhisong Pan, Yexin Duan, Jin Zhang, and Shuaihui Wang. A data independent approach to generate adversarial patches. Machine Vision and Applications, 32(3):67, 2021. 2", + "[62] Zongwei Zhou, Md Mahfuzur Rahman Siddiquee, Nima Tajbakhsh, and Jianming Liang. Unet++: A nested u-net architecture for medical image segmentation. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support: International Workshop and International Workshop, pages 3-11, 2018. 2", + "[63] Hongru Zhu, Peng Tang, Jeongho Park, Soojin Park, and Alan Yuille. Robustness of object recognition under extreme occlusion in humans and computational models. arXiv preprint arXiv:1905.04598, 2019. 
1", + "[64] Alon Zolfi, Moshe Kravchik, Yuval Elovici, and Asaf Shabtai. The translucent patch: A physical and universal attack on object detectors. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15232-15241, 2021. 1, 2" + ], + "bbox": [ + 91, + 90, + 482, + 551 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "30397", + "bbox": [ + 478, + 944, + 517, + 955 + ], + "page_idx": 10 + } +] \ No newline at end of file diff --git a/2025/A Unified, Resilient, and Explainable Adversarial Patch Detector/b9c12ba3-81c5-4427-ad44-e661e361941c_model.json b/2025/A Unified, Resilient, and Explainable Adversarial Patch Detector/b9c12ba3-81c5-4427-ad44-e661e361941c_model.json new file mode 100644 index 0000000000000000000000000000000000000000..b7d73c2bc8a723203706dba19c2f171f07c9b642 --- /dev/null +++ b/2025/A Unified, Resilient, and Explainable Adversarial Patch Detector/b9c12ba3-81c5-4427-ad44-e661e361941c_model.json @@ -0,0 +1,2037 @@ +[ + [ + { + "type": "header", + "bbox": [ + 0.107, + 0.003, + 0.182, + 0.043 + ], + "angle": 0, + "content": "CVF" + }, + { + "type": "header", + "bbox": [ + 0.237, + 0.001, + 0.812, + 0.047 + ], + "angle": 0, + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.131, + 0.828, + 0.152 + ], + "angle": 0, + "content": "A Unified, Resilient, and Explainable Adversarial Patch Detector" + }, + { + "type": "text", + "bbox": [ + 0.282, + 0.182, + 0.717, + 0.217 + ], + "angle": 0, + "content": "Vishesh Kumar, Akshay Agarwal \nTrustworthy BiometraVision Lab, IISER Bhopal, India" + }, + { + "type": "text", + "bbox": [ + 0.347, + 0.22, + 0.648, + 0.234 + ], + "angle": 0, + "content": "{vishesh22,akagarwal}@iiserb.ac.in" + }, + { + "type": "title", + "bbox": [ + 0.248, + 0.269, + 0.327, + 0.285 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.301, + 0.486, + 0.649 + ], + "angle": 0, + "content": "Deep Neural Networks (DNNs), backbone architecture in 'almost' every computer vision task, are vulnerable to adversarial attacks, particularly physical out-of-distribution (OOD) adversarial patches. Existing defense models often struggle with interpreting these attacks in ways that align with human visual perception. Our proposed AdvPatchXAI approach introduces a generalized, robust, and explainable defense algorithm designed to defend DNNs against physical adversarial threats. AdvPatchXAI employs a novel patch decorrelation loss that reduces feature redundancy and enhances the distinctiveness of patch representations, enabling better generalization across unseen adversarial scenarios. It learns prototypical parts self-supervised, enhancing interpretability and correlation with human vision. The model utilizes a sparse linear layer for classification, making the decision process globally interpretable through a set of learned prototypes and locally explainable by pinpointing relevant prototypes within an image. Our comprehensive evaluation shows that AdvPatchXAI closes the \"semantic\" gap between latent space and pixel space and effectively handles unseen adversarial patches even perturbed with unseen corruptions, thereby significantly advancing DNN robustness in practical settings1." 
+ }, + { + "type": "title", + "bbox": [ + 0.092, + 0.676, + 0.222, + 0.691 + ], + "angle": 0, + "content": "1. Introduction" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.701, + 0.483, + 0.868 + ], + "angle": 0, + "content": "Besides being dominant in computer vision over the years, deep neural networks (DNNs) have been found vulnerable to adversarial attacks [1, 19]. Early adversarial attacks on DNNs tricked models using slight, barely noticeable noise [18, 52]. While the above attack perturbs an entire image, a few attacks often target the main object in the scene naturally [29, 63]. Another type of adversaries are poisoning attacks, misled models during training by introducing incorrect patterns, such as in poison frogs and backdoor attacks [3, 9, 50]. Surprisingly, many of these minute adversarial attacks are ineffective in the physical world due to" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.27, + 0.908, + 0.408 + ], + "angle": 0, + "content": "several unconstrained environmental factors, including rotation and translation properties of the objects. This led to the development of adversarial patch attacks, where a patterned sub-image is placed over the input image to deceive the model [8]. Due to expectation over transformation (EoT) constraints while learning the patches, they are highly effective in real-world scenarios. They can fool any possible deep networks, including vision transformers [5, 12, 20, 37, 53, 64]." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.412, + 0.909, + 0.775 + ], + "angle": 0, + "content": "Therefore, securing DNN-based systems against stealthy and practical attacks is vital, especially in safety-critical domains such as autonomous driving, robotics, smart homes/cities, smart industries, video surveillance, and healthcare. Researchers are constantly developing new defenses or protection strategies [17, 21, 24, 39, 57] to tackle the limitations of DNNs against physical adversarial patches, but understanding their decision-making processes and evaluating the resiliency of the defense algorithms is increasingly important [42, 47]. It is to be noted that physical adversarial patches are visible to the human eye; however, they need extra precautions since the added patch can also be a real-world object. Due to this, automated detection of patches is challenging and leads to several false rejections. It is essential to align human and machine vision to address this issue. One way can be to align the functional properties of human and machine vision [16]. This alignment will help improve the machine's ability to detect and respond to out-of-distribution (OOD) adversarial patches more effectively in several generalized settings, such as unseen patches and unseen perturbation, ensuring better security and performance. \"To the best of our knowledge, no existing defense provides both high generalizability and explainability against adversarial patch attacks.\"" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.781, + 0.911, + 0.903 + ], + "angle": 0, + "content": "For the first time, we propose a generalized, robust, and explainable adversarial patch detector, namely AdvPatchXAI, by adding the model explainability since the beginning of the development of the network in contrast to the traditional methods, which plug in the explainability module once the model is thoroughly trained. Proposed AdvPatchXAI uses a sparse linear layer that connects learned prototypical parts to classes. 
This setup allows a user to" + }, + { + "type": "page_footnote", + "bbox": [ + 0.091, + 0.875, + 0.483, + 0.901 + ], + "angle": 0, + "content": "" + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "30387" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.092, + 0.482, + 0.317 + ], + "angle": 0, + "content": "interpret the model by inspecting the prototypes and their relation to the classes. The weights of the linear layer are restricted to be non-negative, ensuring that the presence of a class-relevant prototype increases the evidence for that class. This layer functions like a scoring sheet: the score for a class is the sum of all present prototypes multiplied by their weights. Using this interpretable and predictive linear layer, AdvPatchXAI ensures a direct relation between the prototypes and the classification. This approach improves the interpretability of the model's decisions and enhances its ability to detect and respond to adversarial patches. The significant strength of the proposed approach is that it can be plugged in with any deep learning architecture, including CNN- and transformer-based architectures. In brief, the contributions of this research are:" + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.318, + 0.482, + 0.362 + ], + "angle": 0, + "content": "- We have proposed a generalized, robust, and explainable patch detector (AdvPatchXAI), which effectively detects unseen and out-of-distribution adversarial patches." + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.364, + 0.482, + 0.422 + ], + "angle": 0, + "content": "- For the first time in the literature, we have evaluated the robustness of adversarial patch detectors against several common corruptions, ensuring their practicality in the unconstrained physical world." + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.424, + 0.482, + 0.483 + ], + "angle": 0, + "content": "- Extensive experimental comparison with benchmark and state-of-the-art works demonstrates the proposed defense algorithm's effectiveness, generalizability, and explainability." + }, + { + "type": "list", + "bbox": [ + 0.091, + 0.318, + 0.482, + 0.483 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.498, + 0.262, + 0.515 + ], + "angle": 0, + "content": "2. Literature review" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.524, + 0.482, + 0.583 + ], + "angle": 0, + "content": "We identify three main philosophies for developing robust defenses against adversarial patch attacks. These approaches are not mutually exclusive and can be adopted together." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.585, + 0.483, + 0.901 + ], + "angle": 0, + "content": "Adversarial Patched Dataset: Brown et al. [8] first introduced the concept of adversarial patches to fool object detectors. Since then, several advancements have been made to create more effective and stealthy adversarial patches, including LaVAN (focusing on weaknesses) [28], Adversarial QR codes (appearing less suspicious) [10, 11], PS-GAN (improved quality) [34], and DiAP (data-independent) [61]. A survey [51] reveals the vulnerability of various state-of-the-art pre-trained models (YOLOv4 [7], ViT-B/16, Unet++ [62], YOLOv3 [45], YOLOv2 [46], and YOLOv5) against physical adversarial patch attacks. Our first philosophy lies in the lack of a standardized dataset. 
While several effective adversarial patch generation algorithms exist [10, 32, 38, 61, 64], a lack of standardized datasets hinders the development of robust defense mechanisms. Recent efforts by Pintor et al. [44] and Ojaswee et al. [43] propose benchmark datasets specifically for adversarial patches but miss the effect of natural noises in real-world scenarios [2, 23]. While Kumar & Agarwal [31], for the first time, explored the combined effect of both adversarial patches and natural noises, they have not" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.092, + 0.908, + 0.893 + ], + "angle": 0, + "content": "proposed any novel defense algorithm. Our second philosophy lies with the defense algorithm. It is also to be noted that only minimal defense works exist that can effectively detect adversarial patch attacks in several generalized settings, such as unseen datasets, unseen adversarial patches, and unseen threat models [25, 27, 32, 43], and these works also lack explainability. To address these gaps, in our research, we have regenerated a large-scale dataset following [31], containing 10 different adversarial patches [44] and three natural noises (Gaussian noise, shot noise, and impulse noise); a detailed description of the dataset is given in subsection 4.1. Our third philosophy concerns the lack of a generalized and explainable patch defense algorithm. Generalized & Explainable Patch Defense: While several studies have explored adversarial example detection in the image classification task [6, 15, 59, 60], very few studies talk about adversarial patch detection in image classification. Pintor et al. [44] introduced an ImageNet-Patch dataset to benchmark machine learning models' robustness against adversarial patches. This dataset includes patches optimized to generalize across different models, allowing for a more efficient and transferable robustness evaluation. The dataset's utility has been demonstrated by testing its effectiveness against 127 models, showcasing its potential as a standard benchmark for assessing and improving model robustness against adversarial patches. Further, Ojaswee et al. [43], using subsets of the ImageNet [13] and COCO [33] datasets, developed benchmark datasets and generalized the effect of these patches by finetuning several state-of-the-art DNNs in different generalized settings such as (i) seen patch settings (same patches during testing and training), (ii) unseen patch setting (different patches during training and testing), (iii) seen patch + unseen dataset, and (iv) unseen patch + unseen dataset. Again, Kumar & Agarwal [31] extended this work by training traditional machine learning algorithms using the features extracted by state-of-the-art DNNs in several conditions such as (i) seen patch, (ii) seen patch + natural noises, (iii) unseen patch, and (iv) unseen patch + noise. These findings indicate that defending against adversarial patches in unseen settings is challenging, as the effectiveness of defenses is closely tied to the attributes of the patches used during training. This means detectors have a lower detection rate for new, unseen patches. It has been noticed that none of these studies propose novel algorithms for patch detection. Moreover, to our knowledge, existing detectors do not provide sufficient explanations for the alerts they raise, leaving the reasoning behind their decisions unclear. 
This research primarily focuses on unseen settings such as (i) unseen patch, (ii) unseen patch + noise, (iii) unseen patch + unseen dataset, and (iv) unseen patch + unseen dataset + noise. The proposed AdvPatchXAI patch detector generalizes better than existing defenses and provides detailed explanations for its decisions." + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.945, + 0.519, + 0.956 + ], + "angle": 0, + "content": "30388" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.179, + 0.093, + 0.821, + 0.442 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.447, + 0.907, + 0.531 + ], + "angle": 0, + "content": "Figure 1. Overview of our proposed method. OOD adversarial patches used in this research are shown on top of the proposed architecture. Patch Decorrelation: a novel patch decorrelation loss \\( L_{PD} \\), following a contrastive learning approach, is applied to the backbone features, along with a loss \\( L_{A} \\) applied to the softmax output to assign the same prototype to the two representations of a patch in an image pair. To avoid trivial solutions and encourage the utilization of all prototypes, a tanh-loss \\( L_{T} \\) is applied during the self-supervised pretraining phase. Connections between learned part-prototypes and classes are established through a sparse linear layer. The standard loss used is the negative log-likelihood, denoted \\( L_{C} \\). Model outputs remain unnormalized during testing, allowing them to serve as straightforward scoring metrics." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.556, + 0.45, + 0.573 + ], + "angle": 0, + "content": "3. Proposed Framework for AdvPatchXAI" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.581, + 0.483, + 0.643 + ], + "angle": 0, + "content": "This section first briefly describes the binary classification problem of adversarial patch detection, followed by a comprehensive discussion of the proposed explainable adversarial patch detector, i.e., AdvPatchXAI." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.652, + 0.373, + 0.667 + ], + "angle": 0, + "content": "3.1. Problem Statement & Notations" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.674, + 0.483, + 0.885 + ], + "angle": 0, + "content": "Consider a binary classification problem with a training dataset \\( D_{tr} = \\{(x,y), x \\in X_{tr}, y \\in Y_{tr}\\} \\) containing known classes \\( C_2 = \\{c_1 \\text{ (real)}, c_2 \\text{ (patched)}\\} \\) and a testing dataset \\( D_{te} = \\{(x', y'), x' \\in X_{te}, y' \\in Y_{te}\\} \\), where \\( X_{tr} \\) and \\( X_{te} \\) represent the input images of the training and testing dataset, respectively, while \\( Y_{tr} \\) and \\( Y_{te} \\) are the corresponding class labels. We aim to learn interpretable prototypes that can be used as input features for model explainability. The backbone of our model consists of pre-trained CNNs and ViT, which learn an interpretable, 1-dimensional image encoding \\( p \\) that indicates the presence or absence of prototypical parts in an image. These prototypical parts (prototypes) are then connected to classes through a sparse linear layer."
+ }, + { + "type": "text", + "bbox": [ + 0.11, + 0.886, + 0.483, + 0.901 + ], + "angle": 0, + "content": "Our framework introduces two steps to effectively iden-" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.557, + 0.907, + 0.633 + ], + "angle": 0, + "content": "tify and classify the given input accurately: (i) data augmentation followed by TrivialAugment [40] to perform self-supervised pretraining of learned prototype patches and (ii) training AdvPatchXAI for effective and explainable adversarial patch detection." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.65, + 0.869, + 0.667 + ], + "angle": 0, + "content": "3.2. Self-Supervised Pretraining of Prototypes" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.675, + 0.907, + 0.901 + ], + "angle": 0, + "content": "Following prior self-supervised learning methods [26], we generate a positive pair, denoted as \\( x' \\) and \\( x'' \\) by using TrivialAugment [40]. This recently introduced augmentation strategy is efficient, requiring no hyperparameter tuning, and applies a single augmentation per image. However, in contrast to the standard approach, we applied TrivialAugment twice, as shown in Figure 1. The first augmentation operation applies location-related transformations or focuses on spatial transformations, including shearing, rotation, and translation. This augmented image is then used as input to another TrivialAugment operation, which involves color alterations, including brightness, sharpness, hue, and contrast. In our approach, we followed grayscale conversion in the second augmentation stage. We also experimented with RGB and YCbCr conversion but found a less" + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "30389" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.091, + 0.092, + 0.443, + 0.108 + ], + "angle": 0, + "content": "effective strategy than grayscale (see subsection 6.1)." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.12, + 0.314, + 0.133 + ], + "angle": 0, + "content": "3.2.1. Patch Decoration Loss" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.14, + 0.485, + 0.534 + ], + "angle": 0, + "content": "To reduce redundancy and enhance the distinctiveness of the patch features, we propose a patch decorrelation loss applied to patch features. We adopt a contrastive learning strategy similar to [56] to achieve alignment and uniformity of patch representations. However, unlike their image-level approach, we focus on patch-level alignment. The Patch Decorrelation Module plays a critical role in our method by promoting the similarity between features extracted from different views of the same patch while simultaneously reducing redundancy across different feature dimensions. An input image is first forwarded through the backbone DNN \\( f \\). The resulting output \\( z = f(x; w_f) \\) consists of \\( D \\) two-dimensional \\( (H \\times W) \\) feature maps, where \\( w_f \\) denotes the trainable parameters of \\( f \\). So, the feature maps for two different views of \\( x \\), \\( z' = f(x'; w_f) \\) and \\( z'' = f(x''; w_f) \\). Since the feature dimensions of each are initially \\( (B, D, H, W) \\), where \\( B \\) denotes the batch size and \\( D \\) represents the total number of feature maps of dimension \\( H \\times W \\). We have converted it to \\( (B \\times H \\times W, D) \\), representing each image patch's features within the batch. 
+ { + "type": "title", + "bbox": [ + 0.091, + 0.12, + 0.314, + 0.133 + ], + "angle": 0, + "content": "3.2.1. Patch Decorrelation Loss" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.14, + 0.485, + 0.534 + ], + "angle": 0, + "content": "To reduce redundancy and enhance the distinctiveness of the patch features, we propose a patch decorrelation loss applied at the patch level. We adopt a contrastive learning strategy similar to [56] to achieve alignment and uniformity of patch representations. However, unlike their image-level approach, we focus on patch-level alignment. The Patch Decorrelation Module plays a critical role in our method by promoting the similarity between features extracted from different views of the same patch while simultaneously reducing redundancy across different feature dimensions. An input image is first forwarded through the backbone DNN \\( f \\). The resulting output \\( z = f(x; w_f) \\) consists of \\( D \\) two-dimensional \\( (H \\times W) \\) feature maps, where \\( w_f \\) denotes the trainable parameters of \\( f \\). The feature maps for the two views of \\( x \\) are therefore \\( z' = f(x'; w_f) \\) and \\( z'' = f(x''; w_f) \\). Since the feature dimensions of each view are initially \\( (B, D, H, W) \\), where \\( B \\) denotes the batch size and \\( D \\) represents the total number of feature maps of dimension \\( H \\times W \\), we reshape them to \\( (B \\times H \\times W, D) \\), representing each image patch's features within the batch. We compute the cross-correlation matrix from the reshaped feature maps of the two views as \\( [D_{cr}] = [d_{ij}]_{(D, D)} = \\frac{1}{B \\times H \\times W} [z']_{(B \\times H \\times W, D)}^{T} [z'']_{(B \\times H \\times W, D)} \\), where \\( i, j = 1, 2, \\dots, D \\). Consider an identity matrix \\( I = [I_{ij}]_{(D, D)} \\). Finally, the patch decorrelation loss (i.e., \\( L_{PD} \\)) is defined as:" + }, + { + "type": "equation", + "bbox": [ + 0.107, + 0.547, + 0.483, + 0.58 + ], + "angle": 0, + "content": "\\[\nL _ {P D} = \\frac {\\sum_ {i = 1} ^ {D} \\left(d _ {i i} - I _ {i i}\\right) ^ {2} + \\lambda \\times \\sum_ {i \\neq j} \\left(d _ {i j} - I _ {i j}\\right) ^ {2}}{D} \\tag {1}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.595, + 0.484, + 0.673 + ], + "angle": 0, + "content": "Here, \\( \\lambda = 5 \\times 10^{-3} \\) is a trade-off parameter of the loss function. Our objective revolves around diagonalizing this cross-correlation matrix, which mitigates the risk of trivial solutions, such as activating the same prototype for all patches, and promotes utilizing the entire prototype space." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.684, + 0.394, + 0.699 + ], + "angle": 0, + "content": "3.2.2. Alignment Loss on Softmax Outputs" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.705, + 0.484, + 0.905 + ], + "angle": 0, + "content": "After obtaining raw features aligned using the Patch Decorrelation Module, we further refine the alignment using a softmax-based alignment loss. This alignment loss, denoted as \\( L_{A} \\) [41], ensures that the softmax-normalized feature vectors of corresponding patches from the two views are closely aligned. We apply a softmax over \\( D \\) such that \\( \\sum_{i=1}^{D} s_{h,w,i} = 1 \\), where \\( s_{h,w,i} \\) indicates the degree to which the patch located at \\( (h, w) \\in H \\times W \\) corresponds to prototype \\( i \\). Ideally, \\( s_{h,w,:} \\) is a one-hot encoded vector indicating a perfect assignment to one prototype. To measure the similarity between corresponding patches from the two augmented views, we calculate the dot product between their latent representations \\( s_{h,w,:}^{\\prime} \\) and \\( s_{h,w,:}^{\\prime \\prime} \\):" + }, + { + "type": "equation", + "bbox": [ + 0.56, + 0.114, + 0.907, + 0.153 + ], + "angle": 0, + "content": "\\[\nL _ {A} = - \\frac {1}{H W} \\sum_ {(h, w) \\in H \\times W} \\log \\left(s _ {h, w,:} ^ {\\prime} \\cdot s _ {h, w,:} ^ {\\prime \\prime}\\right) \\tag {2}\n\\]" + },
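+ { + "type": "text", + "bbox": [], + "angle": 0, + "content": "A minimal PyTorch sketch of Eqs. (1) and (2) under the definitions above; tensor shapes follow the text, and the additional averaging over the batch in the alignment term is an assumption rather than part of the stated formula.
import torch
import torch.nn.functional as F

def patch_decorrelation_loss(z1, z2, lam=5e-3):
    # z1, z2: (B, D, H, W) feature maps of the two views
    B, D, H, W = z1.shape
    f1 = z1.permute(0, 2, 3, 1).reshape(-1, D)  # (B*H*W, D) per-patch features
    f2 = z2.permute(0, 2, 3, 1).reshape(-1, D)
    c = f1.t() @ f2 / f1.shape[0]               # (D, D) cross-correlation matrix
    eye = torch.eye(D, device=c.device)
    on_diag = ((torch.diagonal(c) - 1.0) ** 2).sum()  # pull d_ii toward 1
    off_diag = lam * ((c * (1.0 - eye)) ** 2).sum()   # push d_ij (i != j) toward 0
    return (on_diag + off_diag) / D                   # Eq. (1)

def alignment_loss(z1, z2, eps=1e-8):
    s1 = F.softmax(z1, dim=1)  # per-patch prototype assignments over D
    s2 = F.softmax(z2, dim=1)
    return -torch.log((s1 * s2).sum(dim=1) + eps).mean()  # Eq. (2)
" + },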
+ { + "type": "text", + "bbox": [ + 0.512, + 0.163, + 0.907, + 0.313 + ], + "angle": 0, + "content": "Our goal is to identify the presence or absence of prototypical parts within an image, for which a max-pooling operation across each feature map dimension (denoted as \\( s_{:,:,d} \\)) is applied. This results in a presence score tensor, \\( p \\in [0,1]^D \\), where each element \\( p_d \\) represents the strength of the \\( d \\)th prototype's presence in the image. We introduce the tanh-based regularization loss \\( L_{T} \\) [41] to prevent trivial solutions. The tanh-loss encourages the presence of each prototype at least once in a mini-batch. The tanh loss is defined as:" + }, + { + "type": "equation", + "bbox": [ + 0.572, + 0.324, + 0.907, + 0.368 + ], + "angle": 0, + "content": "\\[\nL _ {T} (p) = - \\frac {1}{D} \\sum_ {i = 1} ^ {D} \\log (\\tanh (\\sum_ {b = 1} ^ {B} p _ {b, i}) + \\epsilon), \\tag {3}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.372, + 0.906, + 0.463 + ], + "angle": 0, + "content": "where tanh and log are element-wise operations, \\( B \\) is the number of samples in a mini-batch, \\( D \\) is the number of prototypes, and \\( \\epsilon \\) is a small number for numerical stability. This loss ensures that each prototype is utilized across the mini-batch, preventing any prototype from dominating and promoting a balanced representation across all prototypes." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.463, + 0.906, + 0.6 + ], + "angle": 0, + "content": "The combination of the patch decorrelation loss, the alignment loss, and the tanh loss enables our model to achieve robust patch-level alignment and uniformity. The patch decorrelation loss \\( (L_{PD}) \\) enhances the raw feature alignment, the alignment loss \\( (L_A) \\) ensures the softmax-normalized features are closely matched, and the tanh loss \\( (L_T) \\) prevents trivial solutions by promoting diversity in prototype assignments. The final objective of the pre-training phase of AdvPatchXAI is: \\( \\lambda_{PD}L_{PD} + \\lambda_{A}L_{A} + \\lambda_{T}L_{T} \\)." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.607, + 0.731, + 0.623 + ], + "angle": 0, + "content": "3.3. Training AdvPatchXAI" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.629, + 0.907, + 0.903 + ], + "angle": 0, + "content": "After the pretraining phase of the prototypes, the patch presence score tensor, \\( p \\), is fed into a linear classification layer having non-negative weights, \\( w_{c} \\in \\mathbb{R}^{(D \\times C)} \\geq 0 \\), so that the layer looks only for positive, true-class evidence in an input image. These weights connect prototypes to classes \\( (C) \\) to ensure that the presence of prototypical parts contributes positively to the evidence for their associated class, enhancing interpretability. The bias term adjusts the classifier's decision threshold and is independent of the prototypes' contributions. This separation ensures that the model remains interpretable, as each prototype's influence on the decision is straightforward and non-contradictory. The output score for each class is calculated by summing the element-wise product of the presence scores and the corresponding class weights from the linear layer. We incorporate a classification loss term, \\( L_{C} \\), to optimize model performance. This loss is calculated as the standard negative log-likelihood between the predicted" + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.945, + 0.521, + 0.958 + ], + "angle": 0, + "content": "30390" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.092, + 0.482, + 0.184 + ], + "angle": 0, + "content": "class probabilities \\( \\hat{y} \\) and the one-hot encoded ground truth label \\( y \\). While \\( L_{C} \\) primarily influences the weights of the linear layer, it also fine-tunes the prototypes to better discriminate features relevant to the classification task. The overall objective for the second training phase of AdvPatchXAI is: \\( \\lambda_{PD}L_{PD} + \\lambda_{A}L_{A} + \\lambda_{T}L_{T} + \\lambda_{C}L_{C} \\)." + },
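+ { + "type": "text", + "bbox": [], + "angle": 0, + "content": "A minimal sketch of the presence pooling, the tanh loss of Eq. (3), and the non-negative scoring layer described above; the class name and the use of a clamp to enforce non-negativity are assumptions, not the authors' exact implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScoringHead(nn.Module):
    def __init__(self, num_prototypes, num_classes=2):
        super().__init__()
        self.linear = nn.Linear(num_prototypes, num_classes)

    def forward(self, z):
        s = F.softmax(z, dim=1)                # (B, D, H, W) prototype assignments
        p = s.flatten(2).max(dim=2).values     # (B, D) presence scores via max-pooling
        w = self.linear.weight.clamp(min=0)    # keep only positive class evidence
        scores = p @ w.t() + self.linear.bias  # scoring-sheet sum; unnormalized at test time
        return p, scores

def tanh_loss(p, eps=1e-8):
    # Eq. (3): every prototype should fire at least once per mini-batch
    return -torch.log(torch.tanh(p.sum(dim=0)) + eps).mean()
" + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.196, + 0.403, + 0.213 + ], + "angle": 0, + "content": "4. 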
Experimental Results and Analysis" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.221, + 0.485, + 0.296 + ], + "angle": 0, + "content": "The dataset used in this research for evaluating the proposed and existing defenses is discussed in subsection 4.1, followed by the implementation details of the proposed algorithm. Finally, the analysis of the proposed defense algorithm is discussed in detail." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.305, + 0.484, + 0.32 + ], + "angle": 0, + "content": "4.1. Out-of-Distribution Adversarial Patch Dataset" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.327, + 0.485, + 0.765 + ], + "angle": 0, + "content": "Following the procedure outlined by Kumar & Agarwal [31], this paper introduces two datasets focusing on physical adversarial patches and natural noises using the ImageNet and COCO datasets. Images are attacked with 10 different styles of physical adversarial patches and three types of natural noises (Gaussian, Shot, and Impulse noise). For COCO, we randomly selected 2,000 clean images from the validation set. These images served as a clean subset. Another set of 2,000 images from the COCO validation set is selected, and 10 different adversarial patches are applied to each, generating 20,000 adversarial patch images. Clean and patched images are divided into training and testing sets in a 3:2 ratio. For example, 2,000 clean images of COCO and 2,000 images with a single patch are divided into 800 test images and 1,200 train images. Also, applying the three natural noises resulted in 2,400 noisy test images for both the clean and patched sets. Since our primary focus is generalizability, we use ImageNet to test our model when trained on the COCO patched train subset. For ImageNet, we randomly selected 800 clean images from the validation set. Another set of 800 images is selected, and 10 different adversarial patches are applied to each image, generating 8,000 adversarial patch images. Again, applying the three types of natural noise gives 2,400 noisy images each for the clean and patched test sets. In total, the dataset includes 2,800 clean images, 28,000 adversarial patch images (20,000 from COCO and 8,000 from ImageNet), 4,800 noisy test images, and 48,000 images with both adversarial patches and natural noise." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.774, + 0.308, + 0.789 + ], + "angle": 0, + "content": "4.2. Implementation Details" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.795, + 0.482, + 0.901 + ], + "angle": 0, + "content": "We utilize three convolutional backbones in AdvPatchXAI: ResNet50 [22] (R), ConvNext-tiny [36] (C), and MobileNetV2 [48] (M). A transformer-based network, ViT [14], is also incorporated. The pre-trained models are used, but the strides of the last layers are modified from 2 to 1 to increase the width \\( (W) \\) and height \\( (H) \\) of the output feature maps (from \\( 7 \\times 7 \\) to \\( 28 \\times 28 \\) for ResNet and MobileNet," + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.092, + 0.905, + 0.289 + ], + "angle": 0, + "content": "\\( 26 \\times 26 \\) for ConvNext, and reshaped the output feature into \\( 14 \\times 14 \\) for ViT). This adjustment results in a finer-grained patch grid \\( z \\), improving patch similarity optimization. The backbone \\( f \\) is fine-tuned using Adam with a learning rate of 0.0005 and a cosine annealing schedule. The linear layer is trained with a learning rate of 0.05. Loss weights are set to \\( \\lambda_{C} = \\lambda_{T} = 2 \\) and \\( \\lambda_{PD} = \\lambda_{A} = 5 \\). Prototypes are trained for 10 epochs, followed by training AdvPatchXAI for an additional 60 epochs. Images are resized to \\( 224 \\times 224 \\) and augmented with TrivialAugment [40] in two stages: the first stage is related to location, and the second stage is related to color transformation (grayscale). Experiments are performed with seed value one to ensure reproducibility." + },
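+ { + "type": "text", + "bbox": [], + "angle": 0, + "content": "The optimization setup above can be summarized in a short, hedged sketch; backbone and head are hypothetical handles for \\( f \\) and the linear layer, and the loss values are assumed to come from the loss functions of Section 3.
import torch

def build_optimizers(backbone, head):
    backbone_opt = torch.optim.Adam(backbone.parameters(), lr=5e-4)
    head_opt = torch.optim.Adam(head.parameters(), lr=0.05)
    sched = torch.optim.lr_scheduler.CosineAnnealingLR(backbone_opt, T_max=60)
    return backbone_opt, head_opt, sched

def total_loss(l_pd, l_a, l_t, l_c, lam_pd=5.0, lam_a=5.0, lam_t=2.0, lam_c=2.0):
    # second-phase objective: lambda_PD*L_PD + lambda_A*L_A + lambda_T*L_T + lambda_C*L_C
    return lam_pd * l_pd + lam_a * l_a + lam_t * l_t + lam_c * l_c
" + },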
Loss weights are set to \\(\\lambda_{C} = \\lambda_{T} = 2\\) and \\(\\lambda_{PD} = \\lambda_{A} = 5\\). Prototypes are trained for 10 epochs, followed by training AdvPatchXAI for an additional 60 epochs. Images are resized to \\(224 \\times 224\\) and augmented with TrivialAugment [40] in two stages: the first stage is related to location, and the second stage is related to color transformation (grayscale). Experiments are performed with seed value one to ensure reproducibility." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.296, + 0.669, + 0.312 + ], + "angle": 0, + "content": "4.3. Result Analysis" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.318, + 0.905, + 0.56 + ], + "angle": 0, + "content": "We evaluated our proposed method, AdvPatchXAI, employed various backbone DNNs using a comprehensive set of experiments. We have trained our network on the COCO training subset for all 10 patches separately and evaluated them under several generalized zero-shot settings. These settings included scenarios with seen datasets with unseen patches and unseen datasets with unseen patches, both with and without natural noises. The effectiveness of AdvPatchXAI can be assessed through two key metrics: the robustness of the chosen backbone DNN and the success of the training patch in identifying unseen patches. The presence of natural noises during the testing ensures the detector's robustness in a black-box setting. To further strengthen the effectiveness of the proposed defense algorithm, we have evaluated its resiliency against adaptive attacks (discussed in supplementary)." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.565, + 0.907, + 0.595 + ], + "angle": 0, + "content": "4.3.1. Zero-Shot OOD Patch Detection in the Absence of Noise" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.6, + 0.905, + 0.901 + ], + "angle": 0, + "content": "Table 1 and Table 2 present the average classification accuracy (Mean) of our proposed AdvPatchXAI on COCO and ImageNet datasets, respectively, combined with OOD unseen patches in silent (without noise) settings. Based on our findings, it can be seen that AdvPatchXAI-R (i.e., with backbone R) achieved the highest mean accuracy of \\(99.33\\%\\) and \\(98.94\\%\\), particularly with Patch-9 on COCO and ImageNet datasets, respectively, demonstrating exceptional performance and robustness. AdvPatchXAI-ViT (i.e., with backbone ViT) also performed well, especially for Patch-4 on COCO \\(96.42\\%\\) and Patch-0 on ImageNet \\(94.31\\%\\), showing reliability with moderate standard deviation (SD) (see supplementary for more detail) but found more vulnerable on Patch-5. However, AdvPatchXAI-M (MobileNet) and AdvPatchXAI-C (ConvNeXt) exhibited higher variability, with significant accuracy drops for specific patches such as Patch-3. Figure 2 gives a clearer picture of the robustness of AdvPatchXAI-R and AdvPatchXAI-ViT on both the COCO and ImageNet datasets. For example, the performance of AdvPatchXAI-R on COCO and ImageNet, \\(91.8\\%\\)" + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.945, + 0.518, + 0.957 + ], + "angle": 0, + "content": "30391" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.09, + 0.089, + 0.907, + 0.146 + ], + "angle": 0, + "content": "Table 1. Adversarial patch detection accuracy of the proposed AdvPatchXAI with different backbone on COCO (seen dataset) subset in several generalized settings such as Silent (unseen patch without any noise), Noisy (unseen patch+noise). The results are reported as mean. 
Patch-{0-9}\\{1} indicates models are trained on Patch-1 and tested on all other patches except Patch-1. R, M, ViT, and C represent ResNet50, MobileNet, Vision Transformer, and ConvNeXt backbones, respectively. The best mean values are highlighted." + }, + { + "type": "table", + "bbox": [ + 0.094, + 0.147, + 0.899, + 0.229 + ], + "angle": 0, + "content": "
| Method | Test | Patch-{0-9}\{0} | Patch-{0-9}\{1} | Patch-{0-9}\{2} | Patch-{0-9}\{3} | Patch-{0-9}\{4} | Patch-{0-9}\{5} | Patch-{0-9}\{6} | Patch-{0-9}\{7} | Patch-{0-9}\{8} | Patch-{0-9}\{9} |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| AdvPatchXAI-R | Silent | 94.09 | 95.57 | 84.69 | 83.30 | 97.99 | 79.65 | 85.53 | 96.91 | 94.76 | 99.33 |
| AdvPatchXAI-R | Noisy | 85.81 | 88.86 | 78.07 | 72.01 | 96.09 | 69.05 | 78.95 | 94.59 | 83.79 | 95.92 |
| AdvPatchXAI-M | Silent | 94.17 | 97.45 | 77.33 | 64.88 | 98.47 | 72.28 | 93.90 | 98.61 | 92.90 | 97.24 |
| AdvPatchXAI-M | Noisy | 76.09 | 70.82 | 56.91 | 50.24 | 89.63 | 51.34 | 70.45 | 83.65 | 71.13 | 68.11 |
| AdvPatchXAI-ViT | Silent | 95.84 | 95.19 | 93.20 | 81.74 | 96.42 | 67.27 | 89.08 | 93.90 | 89.00 | 94.32 |
| AdvPatchXAI-ViT | Noisy | 92.31 | 92.61 | 88.91 | 78.91 | 92.95 | 64.57 | 85.74 | 90.52 | 80.56 | 93.13 |
| AdvPatchXAI-C | Silent | 90.08 | 85.29 | 86.08 | 75.94 | 96.27 | 82.42 | 78.63 | 87.18 | 92.73 | 95.28 |
| AdvPatchXAI-C | Noisy | 79.16 | 79.88 | 74.62 | 64.69 | 86.57 | 59.72 | 72.74 | 71.09 | 76.41 | 84.06 |
" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.239, + 0.482, + 0.316 + ], + "angle": 0, + "content": "and \\(90.91\\%\\), respectively, is at least ranges in \\([2.2 - 4.81]\\%\\) better than AdvPatchXAI with other backbones on both datasets. It can be seen that there is only a minor difference \\([0.66 - 1.95]\\%\\) in performance when both the dataset and patches are unknown to the model." + }, + { + "type": "title", + "bbox": [ + 0.09, + 0.323, + 0.483, + 0.354 + ], + "angle": 0, + "content": "4.3.2. Zero-Shot OOD Patch Detection Under Noise Perturbation" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.357, + 0.483, + 0.476 + ], + "angle": 0, + "content": "To demonstrate the robustness capabilities of our defense, we evaluated its resiliency when trained on clean images (without exposure to any noise during training) using COCO subsets. This approach is crucial because natural noises are inherent in the environment [4], and training on every possible type of noise is impractical. Therefore, detectors must be resilient enough to handle unseen natural noises." + }, + { + "type": "text", + "bbox": [ + 0.093, + 0.478, + 0.485, + 0.902 + ], + "angle": 0, + "content": "The resiliency results of AdvPatchXAI in unseen patch detection in noisy (applying all three noise Gaussian, Shot, and Impulse with severity \\(= 2\\) separately on test subset and taking the average patch wise) settings on COCO and ImageNet datasets are shown in Figure 2. While a drop in detection performance is expected, AdvPatchXAI-ViT exhibits only a marginal reduction. For example, its performance drops from \\(89.6\\%\\) to \\(86.02\\%\\) on COCO and from \\(87.65\\%\\) to \\(83.97\\%\\) on ImageNet. In contrast, other backbones suffer more significant decreases in accuracy. For instance, the performance of AdvPatchXAI-R drops by \\(7.49\\%\\) on COCO and \\(7.73\\%\\) on ImageNet, whereas the accuracy of AdvPatchXAI-M drops by \\(19.88\\%\\) on COCO and \\(19.47\\%\\) on ImageNet. Tables 1 and 2 provide detailed average 10-fold cross-validation performance in the unseen patch noise evaluation setting for COCO and ImageNet datasets, respectively. Notably, in zero-shot evaluations where images (clean and patched) are perturbed, AdvPatchXAI with ViT outperforms other backbones by a significant margin. Patch-4 proves to be more effective in 9 out of 16 evaluations, exhibiting higher mean accuracy. The performance difference between AdvPatch-ViT with the best-performing patch, and Patch-4 is less than \\(1.4\\%\\) on both datasets and in each setting (silent, noisy), yet Patch-4 shows very low SD (see supplementary for more detail), indicating its higher effectiveness. Extensive experimental evaluation reveals that AdvPatchXAI-ViT with Patch-4 generalizes well in unseen patch settings and maintains high resiliency when images" + }, + { + "type": "image", + "bbox": [ + 0.518, + 0.238, + 0.903, + 0.367 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.513, + 0.375, + 0.907, + 0.418 + ], + "angle": 0, + "content": "Figure 2. Comparison with SOTA in terms of average adversarial patch detection accuracy for unseen patch and unseen patch + unseen noise detection on both COCO and ImageNet datasets." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.425, + 0.907, + 0.471 + ], + "angle": 0, + "content": "are perturbed by noise. Therefore, in real-world applications, we recommend using AdvPatchXAI-ViT with Patch-4 to defend against adversarial patches." 
+ }, + { + "type": "title", + "bbox": [ + 0.513, + 0.482, + 0.899, + 0.502 + ], + "angle": 0, + "content": "5. Explainability and Comparison with SOTA" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.508, + 0.906, + 0.584 + ], + "angle": 0, + "content": "To demonstrate the effectiveness of the proposed patch detector, we have performed an extensive comparison with state-of-the-art algorithms, which we are going to discuss first. Then, we will discuss the explainability of the proposed approach." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.592, + 0.837, + 0.608 + ], + "angle": 0, + "content": "5.1. Comparison with SOTA and Baseline" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.614, + 0.907, + 0.903 + ], + "angle": 0, + "content": "To demonstrate the effectiveness of our proposed model, we compared it with recent state-of-the-art (SOTA) methods: Ojaswee et al. [43] and Kumar & Agarwal [31]. For a fair comparison, we followed the same experimental protocol we used to evaluate the proposed algorithm. As shown in Figure 2, our method outperforms all SOTA methods except in the case of AdvPatchXAI-M on both the COCO and ImageNet datasets when clean and patched images are perturbed with noise. Specifically, our AdvPatchXAI-R exceeds the performance of [43] by \\(13.35\\%\\) and [31] by \\(11.9\\%\\) on the COCO dataset. On the ImageNet dataset, AdvPatchXAI-R outperforms [43] by \\(12.03\\%\\) and [31] by \\(14.01\\%\\) when clean and unseen patched images are used for evaluation. Moreover, when clean and patched images are perturbed with noise, AdvPatchXAI-ViT surpasses [43] by \\(13.64\\%\\) and [31] by \\(12.22\\%\\) on the COCO dataset. On the ImageNet dataset, AdvPatchXAI-ViT outperforms [43] by \\(13.33\\%\\) and [31] by \\(12.67\\%\\). Our proposed model, based on prototypical parts, is unique in its approach to OOD adver" + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.945, + 0.52, + 0.958 + ], + "angle": 0, + "content": "30392" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.09, + 0.089, + 0.907, + 0.146 + ], + "angle": 0, + "content": "Table 2. Adversarial patch detection accuracy of the proposed AdvPatchXAI with different backbone on ImageNet (unseen dataset) subset in several generalized settings such as Silent (unseen patch without any noise), Noisy (unseen patch+noise). The results are reported as mean. Patch-{0-9} \\{3} indicates models are trained on Patch-3 and tested on all other patches except Patch-3. R, M, ViT, and C represent ResNet50, MobileNet, Vision Transformer, and ConvNeXt backbones, respectively. The best mean values are highlighted." + }, + { + "type": "table", + "bbox": [ + 0.095, + 0.147, + 0.899, + 0.229 + ], + "angle": 0, + "content": "
| Method | Test | Patch-{0-9}\{0} | Patch-{0-9}\{1} | Patch-{0-9}\{2} | Patch-{0-9}\{3} | Patch-{0-9}\{4} | Patch-{0-9}\{5} | Patch-{0-9}\{6} | Patch-{0-9}\{7} | Patch-{0-9}\{8} | Patch-{0-9}\{9} |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| AdvPatchXAI-R | Silent | 92.96 | 95.12 | 84.49 | 82.56 | 98.17 | 79.40 | 84.68 | 97.18 | 95.60 | 98.94 |
| AdvPatchXAI-R | Noisy | 84.39 | 88.51 | 77.81 | 71.42 | 95.67 | 68.34 | 78.30 | 93.72 | 78.88 | 94.76 |
| AdvPatchXAI-M | Silent | 93.68 | 95.28 | 77.30 | 64.13 | 95.71 | 70.72 | 91.93 | 96.81 | 91.90 | 95.92 |
| AdvPatchXAI-M | Noisy | 75.10 | 70.16 | 56.88 | 50.22 | 87.74 | 51.15 | 69.73 | 82.00 | 68.55 | 67.21 |
| AdvPatchXAI-ViT | Silent | 94.31 | 92.67 | 92.03 | 81.37 | 92.94 | 66.41 | 86.71 | 91.77 | 87.37 | 90.94 |
| AdvPatchXAI-ViT | Noisy | 90.18 | 90.14 | 87.98 | 78.88 | 89.19 | 64.22 | 83.73 | 87.19 | 78.16 | 90.04 |
| AdvPatchXAI-C | Silent | 89.41 | 85.26 | 85.12 | 75.91 | 96.23 | 81.21 | 77.30 | 86.67 | 91.53 | 94.65 |
| AdvPatchXAI-C | Noisy | 78.14 | 79.55 | 73.29 | 63.66 | 85.20 | 59.13 | 71.54 | 70.16 | 75.21 | 83.41 |
" + }, + { + "type": "image", + "bbox": [ + 0.105, + 0.234, + 0.473, + 0.426 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.431, + 0.484, + 0.502 + ], + "angle": 0, + "content": "Figure 3. t-SNE visualization of feature space of proposed AdvPatchXAI with backbone ViT and ConvNeXt trained on the most effective patch, Patch-4, and tested on other patches, Patch-0, Patch-1, and Patch-2. Red and blue represent patched and real class, respectively." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.516, + 0.483, + 0.728 + ], + "angle": 0, + "content": "sarial patch detection. To demonstrate that, we compared it to PIP-Net [41], a recent prototypical-parts-based model that is the closest architecture to our proposed method. Since PIP-Net is designed for a convolutional backbone, we integrated ViT into PIP-Net and evaluated results on the COCO subset in an unseen patch (silent) setting. The average patch detection accuracy of PIP-Net with backbone R, M, and ViT are \\(87.07\\%\\), \\(81.82\\%\\), and \\(89.32\\%\\), respectively. In comparison, our AdvPatchXAI method achieved accuracies of \\(91.8\\%\\), \\(88.72\\%\\), and \\(89.6\\%\\) with the identical respective backbones. In other words, proposed AdvPatchXAI shows \\(4.73\\%\\), \\(6.9\\%\\), and \\(0.28\\%\\) improvement over the baseline PIP-Net with backbone R, M, and ViT, respectively." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.729, + 0.484, + 0.834 + ], + "angle": 0, + "content": "Apart from the above datasets, we have benchmarked our proposed model on other standard datasets CUB-200-2011 [55] and Stanford Cars [30] (results discussed in supplementary). These results highlight the robustness and effectiveness of our proposed model in detecting adversarial patches, particularly when utilizing ViT and ResNet backbones." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.848, + 0.334, + 0.864 + ], + "angle": 0, + "content": "5.2. Explainable AdvPatchXAI" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.871, + 0.484, + 0.903 + ], + "angle": 0, + "content": "While quantitative analysis is essential for assessing a model's effectiveness and robustness, explainability is" + }, + { + "type": "image", + "bbox": [ + 0.518, + 0.234, + 0.905, + 0.54 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.513, + 0.542, + 0.907, + 0.613 + ], + "angle": 0, + "content": "Figure 4. Heat map visualization of the proposed AdvPatchXAI trained on Patch-4 and visualized on Patch-0 on the COCO dataset under both silent (without noise) and noisy (gaussian noise with severity=2) settings. The same image has been taken under both silent and noisy settings for a fair comparison." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.644, + 0.907, + 0.903 + ], + "angle": 0, + "content": "equally vital in supporting these findings. To address this, we performed several explainability analyses on our proposed AdvPatchXAI, including feature visualization, prototype prediction, prototype visualization, and Grad-CAM [49] visualization. Feature visualization helps in understanding the clusters formed by network features, leading to image classification or misclassification. Figure 3 shows the t-SNE [54] plot of the proposed model using an attention backbone (ViT) and a convolutional backbone (ConvNeXt) when trained on Patch-4 and tested on Patches 0 to 2. The clear separation of feature clusters supports the quantitative effectiveness of our proposed model. 
We also performed prototype prediction and visualization, along with Grad-CAM visualizations, as shown in Figure 4. The first column shows the unseen patch images and unseen patched+noisy image, and the second column highlights relevant prototypes within the image using yellow boxes, providing a" + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "30393" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.09, + 0.089, + 0.486, + 0.146 + ], + "angle": 0, + "content": "Table 3. Mean adversarial patch detection accuracy of the proposed AdvPatchXAI across different color channels RGB, YCbCr, and grayscale with backbone ViT and ResNet50 on COCO under silent (unseen patch without any noise) setting." + }, + { + "type": "table", + "bbox": [ + 0.093, + 0.147, + 0.492, + 0.213 + ], + "angle": 0, + "content": "
| Model (Channel) | Patch-{0-9}\{0} | Patch-{0-9}\{1} | Patch-{0-9}\{2} | Patch-{0-9}\{3} | Patch-{0-9}\{4} |
| --- | --- | --- | --- | --- | --- |
| ViT (RGB) | 64.78 | 80.90 | 73.13 | 62.22 | 74.11 |
| ViT (YCbCr) | 65.57 | 90.50 | 84.36 | 66.72 | 78.44 |
| ViT (Grayscale) | 92.31 | 92.61 | 88.91 | 78.91 | 92.95 |
| R (RGB) | 63.71 | 77.49 | 61.44 | 62.95 | 81.26 |
| R (YCbCr) | 70.56 | 87.01 | 79.64 | 70.33 | 78.27 |
| R (Grayscale) | 85.81 | 88.86 | 78.07 | 72.01 | 96.09 |
" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.23, + 0.486, + 0.429 + ], + "angle": 0, + "content": "local explanation. The third column collects these relevant prototypes, further connecting them to the classes with sparse linear layers to predict the correct image class. It can be seen that when noise comes with a patched image, the prototype prediction is compromised, and the extracted prototypes are not as fine-grained and less relevant as clean images. The fourth column presents the Grad-CAM for each prototypical part, illustrating the image regions that the network focuses on while determining its class. This detailed insight into the model's decision-making process supports the robustness and effectiveness of AdvPatchXAI in detecting adversarial patches (further discussed in supplementary)." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.442, + 0.258, + 0.459 + ], + "angle": 0, + "content": "6. Ablation Studies" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.468, + 0.485, + 0.545 + ], + "angle": 0, + "content": "In this section, we have presented ablation studies highlighting the impact of color channels, real-world robustness, and different terms used in the loss function. To ensure fairness, experiments are conducted using the pre-defined experimental protocol." + }, + { + "type": "title", + "bbox": [ + 0.09, + 0.557, + 0.444, + 0.573 + ], + "angle": 0, + "content": "6.1. Effects of Different Color Augmentations" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.579, + 0.486, + 0.731 + ], + "angle": 0, + "content": "Table 3 showcases the advantage of converting images into grayscale. Since unseen patches and datasets can have diverse color distributions, having color information makes the system biased toward learning color information rather than focusing on patch information. We assert that suppressing non-useful information can make the system generalized, which can also be visible from the results. For example, when AdvPatchXAI-ViT is trained on Patch 4 using grayscale images, it yields at least \\(14.51\\%\\) higher accuracy than RGB & YCbCr color channel-trained models." + }, + { + "type": "title", + "bbox": [ + 0.09, + 0.742, + 0.485, + 0.773 + ], + "angle": 0, + "content": "6.2. Physical-World Effectiveness and Robustness Against Adversarial Patch Attack" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.78, + 0.486, + 0.903 + ], + "angle": 0, + "content": "We have further evaluated the robustness of AdvPatchXAI for real-world adaptation by printing and applying the adversarial patches in the real world, as demonstrated by Pinton et al. [44]. Our proposed defense algorithm is robust in handling the attacks in the physical world and yields an average accuracy of \\(91.11\\%\\) and \\(88.88\\%\\) when the proposed ViT defense is trained with Patch 0 and Patch 4, respectively." + }, + { + "type": "table_caption", + "bbox": [ + 0.513, + 0.089, + 0.907, + 0.146 + ], + "angle": 0, + "content": "Table 4. Ablation study concerning loss terms reflecting the total number of relevant prototypes with at least one non-zero weight present in our proposed AdvPatchXAI algorithm. The mean accuracy demonstrates the advantage of a combined loss function." + }, + { + "type": "table", + "bbox": [ + 0.517, + 0.147, + 0.915, + 0.233 + ], + "angle": 0, + "content": "
Number of Prototypes
| Method | Backbone | Patch-0 | Patch-1 | Patch-2 | Patch-3 | Patch-4 | Mean Acc. ↑ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| AdvPatchXAI {L_PD, L_A, L_T, L_C} | R | 263 | 220 | 182 | 209 | 232 | 91.13 |
| | ViT | 187 | 168 | 203 | 166 | 203 | 92.48 |
| | M | 82 | 55 | 79 | 87 | 65 | 88.46 |
| AdvPatchXAI {L_A, L_T, L_C} | R | 370 | 284 | 251 | 252 | 347 | 86.44 |
| | ViT | 195 | 151 | 161 | 152 | 161 | 92.66 |
| | M | 67 | 61 | 100 | 91 | 101 | 77.71 |
" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.261, + 0.909, + 0.429 + ], + "angle": 0, + "content": "We generated perturbation-based PGD patch attacks followed by [35, 58] for a fair comparison to evaluate the adversarial robustness of our proposed defense model. We wanted to highlight that the performance of our proposed AdvPatchXAI-R and AdvPatchXAI-ViT yields \\(98.04\\%\\) and \\(95.67\\%\\) average detection accuracy, respectively, in a silent setting when a patch attack based on PGD perturbation is available for evaluation. Even in noisy settings, our proposed network is better robust against PGD perturbation-based patch attacks with average detection performances of \\(95.55\\%\\) and \\(93.05\\%\\) with the same respective backbone." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.441, + 0.807, + 0.459 + ], + "angle": 0, + "content": "6.3. Effect of Proposed Loss Function" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.464, + 0.909, + 0.571 + ], + "angle": 0, + "content": "We further examine the impact of the proposed algorithm's different linear combinations of loss functions. The corresponding outcomes presented in Table 4 reveal that the linear combination of our proposed patch decorrelation loss \\( L_{PD} \\) with other loss functions improves interpretability. In other words, it reduces the number of relevant prototypes for a class and enhances the detection accuracy." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.588, + 0.634, + 0.605 + ], + "angle": 0, + "content": "7. Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.615, + 0.909, + 0.813 + ], + "angle": 0, + "content": "In this research, we present AdvPatchXAI to significantly advance the development of a generalized, robust, and explainable adversarial patch detector. By incorporating prototypical parts and a novel patch decorrelation module, our model achieves unprecedented accuracy in physical adversarial patch detection, particularly in zero-shot settings with and without unseen noise perturbations. Furthermore, the explainable nature of AdvPatchXAI provides deep insights into the decision-making process, enabling better understanding and trust in AI systems. Future work will focus on enhancing the scalability of this approach and exploring its applicability to other forms of adversarial attacks, aiming to fortify AI defenses in increasingly complex applications." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.829, + 0.673, + 0.846 + ], + "angle": 0, + "content": "Acknowledgement" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.856, + 0.908, + 0.903 + ], + "angle": 0, + "content": "V. Kumar is partially supported through the Visvesvaraya PhD Fellowship. A. Agarwal is partially funded through the ANRF PMECRG grant of Govt. of India." + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.945, + 0.521, + 0.958 + ], + "angle": 0, + "content": "30394" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.093, + 0.091, + 0.188, + 0.106 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.115, + 0.484, + 0.169 + ], + "angle": 0, + "content": "[1] Akshay Agarwal, Nalini Ratha, Mayank Vatsa, and Richa Singh. Crafting adversarial perturbations via transformed image component swapping. IEEE Transactions on Image Processing, 31:7338-7349, 2022. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.17, + 0.484, + 0.237 + ], + "angle": 0, + "content": "[2] Akshay Agarwal, Richa Singh, Mayank Vatsa, and Nalini Ratha. 
Image transformation-based defense against adversarial perturbation on deep learning models. IEEE Transactions on Dependable and Secure Computing, 18(5):2106-2121, 2020. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.239, + 0.484, + 0.291 + ], + "angle": 0, + "content": "[3] Akshay Agarwal, Richa Singh, Mayank Vatsa, and Nalini Ratha. IBAttack: Being cautious about data labels. IEEE Transactions on Artificial Intelligence, 4(6):1484-1493, 2022. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.293, + 0.484, + 0.36 + ], + "angle": 0, + "content": "[4] Akshay Agarwal, Mayank Vatsa, Richa Singh, and Nalini K Ratha. Noise is inside me! generating adversarial perturbations with noise derived from natural filters. In IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 774-775, 2020. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.361, + 0.483, + 0.4 + ], + "angle": 0, + "content": "[5] Naveed Akhtar and Ajmal Mian. Threat of adversarial attacks on deep learning in computer vision: A survey. IEEE Access, 6:14410-14430, 2018. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.401, + 0.483, + 0.455 + ], + "angle": 0, + "content": "[6] Ahmed Aldahdooh, Wassim Hamidouche, Sid Ahmed Fezza, and Olivier Déforges. Adversarial example detection for dnn models: A review and experimental comparison. Artificial Intelligence Review, 55(6):4403-4462, 2022. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.456, + 0.483, + 0.495 + ], + "angle": 0, + "content": "[7] Alexey Bochkovskiy, Chien-Yao Wang, and Hong-Yuan Mark Liao. Yolov4: Optimal speed and accuracy of object detection. arXiv preprint arXiv:2004.10934, 2020. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.496, + 0.483, + 0.535 + ], + "angle": 0, + "content": "[8] Tom B Brown, Dandelion Mane, Aurko Roy, Martin Abadi, and Justin Gilmer. Adversarial patch. arXiv preprint arXiv:1712.09665, 2017. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.536, + 0.483, + 0.588 + ], + "angle": 0, + "content": "[9] Xinyun Chen, Chang Liu, Bo Li, Kimberly Lu, and Dawn Song. Targeted backdoor attacks on deep learning systems using data poisoning. arXiv preprint arXiv:1712.05526, 2017. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.59, + 0.483, + 0.657 + ], + "angle": 0, + "content": "[10] Aran Chindaudom, Prarinya Siritanawan, Karin Sumongkayothin, and Kazunori Kotani. Adversarialqr: An adversarial patch in qr code format. In International Conference on Imaging, Vision & Pattern Recognition, pages 1-6, 2020. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.658, + 0.483, + 0.711 + ], + "angle": 0, + "content": "[11] Aran Chindaudom, Prarinya Sritananawan, Karin Sumongkayothin, and Kazunori Kotani. Surreptitious adversarial examples through functioning qr code. Journal of Imaging, 8(5):122, 2022. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.712, + 0.483, + 0.764 + ], + "angle": 0, + "content": "[12] Kenneth T Co, Luis Muñoz-González, Leslie Kanthan, and Emil C Lupu. Real-time detection of practical universal adversarial perturbations. arXiv preprint arXiv:2105.07334, 2021. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.765, + 0.483, + 0.82 + ], + "angle": 0, + "content": "[13] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition, pages 248-255, 2009. 
2" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.82, + 0.483, + 0.888 + ], + "angle": 0, + "content": "[14] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint" + }, + { + "type": "list", + "bbox": [ + 0.094, + 0.115, + 0.484, + 0.888 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.548, + 0.093, + 0.712, + 0.106 + ], + "angle": 0, + "content": "arXiv:2010.11929,2020.5" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.106, + 0.905, + 0.16 + ], + "angle": 0, + "content": "[15] Gil Fidel, Ron Bitton, and Asaf Shabtai. When explainability meets adversarial learning: Detecting adversarial examples using shap signatures. In 2020 international joint conference on neural networks (IJCNN), pages 1-8. IEEE, 2020. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.161, + 0.905, + 0.227 + ], + "angle": 0, + "content": "[16] Christina M Funke, Judy Borowski, Karolina Stosio, Wieland Brendel, Thomas SA Wallis, and Matthias Bethge. Five points to check when comparing visual perception in humans and machines. Journal of Vision, 21(3):16-16, 2021. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.228, + 0.905, + 0.269 + ], + "angle": 0, + "content": "[17] Thomas Gittings, Steve Schneider, and John Collomosse. Vax-a-net: Training-time defence against adversarial patch attacks. In Asian Conference on Computer Vision, 2020. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.27, + 0.905, + 0.309 + ], + "angle": 0, + "content": "[18] Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.31, + 0.905, + 0.376 + ], + "angle": 0, + "content": "[19] Gaurav Goswami, Nalini Ratha, Akshay Agarwal, Richa Singh, and Mayank Vatsa. Unravelling robustness of deep learning based face recognition against adversarial attacks. In AAAI Conference on Artificial Intelligence, volume 32, 2018. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.377, + 0.905, + 0.417 + ], + "angle": 0, + "content": "[20] Jindong Gu, Volker Tresp, and Yao Qin. Are vision transformers robust to patch perturbations? In European Conference on Computer Vision, pages 404-421, 2022. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.418, + 0.905, + 0.458 + ], + "angle": 0, + "content": "[21] Jamie Hayes. On visible adversarial perturbations & digital watermarking. In IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 1597-1604, 2018. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.459, + 0.905, + 0.511 + ], + "angle": 0, + "content": "[22] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2016. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.511, + 0.905, + 0.566 + ], + "angle": 0, + "content": "[23] Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt, and Dawn Song. Natural adversarial examples. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15262-15271, 2021. 
2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.566, + 0.905, + 0.606 + ], + "angle": 0, + "content": "[24] Yuheng Huang and Yuanchun Li. Zero-shot certified defense against adversarial patches with vision transformers. arXiv preprint arXiv:2111.10481, 2021. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.607, + 0.905, + 0.659 + ], + "angle": 0, + "content": "[25] Nan Ji, YanFei Feng, Haidong Xie, Xueshuang Xiang, and Naijin Liu. Adversarial yolo: Defense human detection patch attacks via detecting adversarial patches. arXiv preprint arXiv:2103.08860, 2021. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.66, + 0.905, + 0.713 + ], + "angle": 0, + "content": "[26] Longlong Jing and Yingli Tian. Self-supervised visual feature learning with deep neural networks: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(11):4037-4058, 2020. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.714, + 0.905, + 0.768 + ], + "angle": 0, + "content": "[27] Melanie Jutas, Ethan Liang, Sara Leary, Chris Ward, and Keith Manville. Detecting physical adversarial patch attacks with object detectors. In IEEE Applied Imagery Pattern Recognition Workshop, pages 1-7, 2022. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.768, + 0.905, + 0.821 + ], + "angle": 0, + "content": "[28] Danny Karmon, Daniel Zoran, and Yoav Goldberg. Lavan: Localized and visible adversarial noise. In International Conference on Machine Learning, pages 2507-2515. PMLR, 2018. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.822, + 0.905, + 0.891 + ], + "angle": 0, + "content": "[29] Adam Kortylewski, Qing Liu, Huiyu Wang, Zhishuai Zhang, and Alan Yuille. Combining compositional models and deep networks for robust object classification under occlusion. In IEEE/CVF Winter Conference on Applications of Computer Vision, pages 1333-1341, 2020. 1" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.093, + 0.905, + 0.891 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "30395" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.092, + 0.482, + 0.147 + ], + "angle": 0, + "content": "[30] Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3d object representations for fine-grained categorization. In IEEE International Conference on Computer Vision Workshops, pages 554–561, 2013. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.148, + 0.483, + 0.188 + ], + "angle": 0, + "content": "[31] Vishesh Kumar and Akshay Agarwal. The unseen adversaries: Robust and generalized defense against adversarial patches. Available at SSRN 4772716, 2023. 2, 5, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.188, + 0.482, + 0.242 + ], + "angle": 0, + "content": "[32] Juncheng Li, Frank Schmidt, and Zico Kolter. Adversarial camera stickers: A physical camera-based attack on deep learning systems. In International Conference on Machine Learning, pages 3896-3904. PMLR, 2019. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.242, + 0.482, + 0.309 + ], + "angle": 0, + "content": "[33] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dólar, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European Conference on Computer Vision, pages 740-755, 2014. 
2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.31, + 0.482, + 0.376 + ], + "angle": 0, + "content": "[34] Aishan Liu, Xianglong Liu, Jiaxin Fan, Yuqing Ma, Anlan Zhang, Huiyuan Xie, and Dacheng Tao. Perceptual-sensitive gan for generating adversarial patches. In AAAI Conference on Artificial Intelligence, volume 33, pages 1028-1035, 2019. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.378, + 0.482, + 0.446 + ], + "angle": 0, + "content": "[35] Jiang Liu, Alexander Levine, Chun Pong Lau, Rama Chellappa, and Soheil Feizi. Segment and complete: Defending object detectors against adversarial patch attacks with robust patch detection. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14973-14982, 2022. 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.446, + 0.482, + 0.5 + ], + "angle": 0, + "content": "[36] Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, and Saining Xie. A convnet for the 2020s. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11976-11986, 2022. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.5, + 0.482, + 0.581 + ], + "angle": 0, + "content": "[37] Giulio Lovisotto, Nicole Finnie, Maurizio Munoz, Chaithanya Kumar Mummadi, and Jan Hendrik Metzen. Give me your attention: Dot-product attention considered harmful for adversarial patch robustness. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15234-15243, 2022. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.581, + 0.482, + 0.635 + ], + "angle": 0, + "content": "[38] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.635, + 0.482, + 0.717 + ], + "angle": 0, + "content": "[39] Michael McCoyd, Won Park, Steven Chen, Neil Shah, Ryan Roggenkemper, Minjune Hwang, Jason Xinyu Liu, and David Wagner. Minority reports defense: Defending against adversarial patches. In International Conference on Applied Cryptography and Network Security, pages 564-582, 2020. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.717, + 0.482, + 0.771 + ], + "angle": 0, + "content": "[40] Samuel G Müller and Frank Hutter. Trivialaugment: Tuning-free yet state-of-the-art data augmentation. In IEEE/CVF International Conference on Computer Vision, pages 774-782, 2021. 3, 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.771, + 0.482, + 0.812 + ], + "angle": 0, + "content": "[41] Meike Nauta, Jörg Schlötterer, Maurice van Keulen, and Christin Seifert. Pip-net: Patch-based intuitive prototypes for interpretable image classification. 2023. 4, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.812, + 0.482, + 0.894 + ], + "angle": 0, + "content": "[42] Meike Nauta, Jan Trienes, Shreyasi Pathak, Elisa Nguyen, Michelle Peters, Yasmin Schmitt, Jorg Schlötterer, Maurice van Keulen, and Christin Seifert. From anecdotal evidence to quantitative evaluation methods: A systematic review on evaluating explainable ai. ACM Computing Surveys, 55(13s):1-42, 2023. 1" + }, + { + "type": "list", + "bbox": [ + 0.093, + 0.092, + 0.483, + 0.894 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.092, + 0.905, + 0.16 + ], + "angle": 0, + "content": "[43] Ojaswee Ojaswee, Akshay Agarwal, and Nalini Ratha. 
Benchmarking image classifiers for physical out-of-distribution examples detection. In IEEE/CVF International Conference on Computer Vision, pages 4427-4435, 2023. 2, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.161, + 0.905, + 0.229 + ], + "angle": 0, + "content": "[44] Maura Pintor, Daniele Angioni, Angelo Sotgiu, Luca Demetrio, Ambra Demontis, Battista Biggio, and Fabio Roli. Imagenet-patch: A dataset for benchmarking machine learning robustness against adversarial patches. Pattern Recognition, 134:109064, 2023. 2, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.229, + 0.904, + 0.256 + ], + "angle": 0, + "content": "[45] Joseph Redmon. Yolov3: An incremental improvement. arXiv preprint arXiv:1804.02767, 2018. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.256, + 0.904, + 0.296 + ], + "angle": 0, + "content": "[46] Joseph Redmon and Ali Farhadi. Yolo9000: better, faster, stronger. In IEEE Conference on Computer Vision and Pattern Recognition, pages 7263-7271, 2017. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.296, + 0.905, + 0.35 + ], + "angle": 0, + "content": "[47] Wojciech Samek, Grégoire Montavon, Sebastian Lapischkin, Christopher J Anders, and Klaus-Robert Müller. Explaining deep neural networks and beyond: A review of methods and applications. IEEE, 109(3):247-278, 2021. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.35, + 0.905, + 0.417 + ], + "angle": 0, + "content": "[48] Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. *Mobilenetv2: Inverted residuals and linear bottlenecks*. In IEEE Conference on Computer Vision and Pattern Recognition, pages 4510-4520, 2018. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.418, + 0.905, + 0.486 + ], + "angle": 0, + "content": "[49] Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. In IEEE International Conference on Computer Vision, pages 618-626, 2017. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.486, + 0.905, + 0.554 + ], + "angle": 0, + "content": "[50] Ali Shafahi, W Ronny Huang, Mahyar Najibi, Octavian Suciu, Christoph Studer, Tudor Dumitras, and Tom Goldstein. Poison frogs! targeted clean-label poisoning attacks on neural networks. Advances in Neural Information Processing Systems, 31, 2018. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.554, + 0.905, + 0.607 + ], + "angle": 0, + "content": "[51] Abhijith Sharma, Yijun Bian, Phil Munz, and Apurva Narayan. Adversarial patch attacks and defences in vision-based tasks: A survey. arXiv preprint arXiv:2206.08304, 2022.2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.608, + 0.905, + 0.661 + ], + "angle": 0, + "content": "[52] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.662, + 0.905, + 0.715 + ], + "angle": 0, + "content": "[53] Tung Tran, Issam Aib, Ehab Al-Shaer, and Raouf Boutaba. An evasive attack on snort flowbits. In IEEE Network Operations and Management Symposium, pages 351-358, 2012. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.716, + 0.905, + 0.756 + ], + "angle": 0, + "content": "[54] Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. 
Journal of Machine Learning Research, 9(11), 2008. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.756, + 0.905, + 0.796 + ], + "angle": 0, + "content": "[55] Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. The caltech-ucsd birds-200-2011 dataset. 2011. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.797, + 0.905, + 0.851 + ], + "angle": 0, + "content": "[56] Tongzhou Wang and Phillip Isola. Understanding contrastive representation learning through alignment and uniformity on the hypersphere. In International Conference on Machine Learning, pages 9929-9939. PMLR, 2020. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.851, + 0.905, + 0.892 + ], + "angle": 0, + "content": "[57] Chong Xiang, Arjun Nitin Bhagoji, Vikash Sehwag, and Prateek Mittal. {PatchGuard}: A provably robust defense against adversarial patches via small receptive fields and" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.092, + 0.905, + 0.892 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.945, + 0.52, + 0.957 + ], + "angle": 0, + "content": "30396" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.125, + 0.092, + 0.482, + 0.119 + ], + "angle": 0, + "content": "masking. In USENIX Security Symposium, pages 2237-2254, 2021. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.12, + 0.483, + 0.187 + ], + "angle": 0, + "content": "[58] Ke Xu, Yao Xiao, Zhaoheng Zheng, Kaijie Cai, and Ram Nevatia. Patchzero: Defending against adversarial patch attacks by detecting and zeroing the patch. In IEEE/CVF Winter Conference on Applications of Computer Vision, pages 4632-4641, 2023. 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.187, + 0.482, + 0.227 + ], + "angle": 0, + "content": "[59] Weilin Xu, David Evans, and Yanjun Qi. Feature squeezing: Detecting adversarial examples in deep neural networks. arXiv preprint arXiv:1704.01155, 2017. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.228, + 0.482, + 0.295 + ], + "angle": 0, + "content": "[60] Puyudi Yang, Jianbo Chen, Cho-Jui Hsieh, Jane-Ling Wang, and Michael Jordan. Ml-loo: Detecting adversarial examples with feature attribution. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 6639-6647, 2020. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.296, + 0.482, + 0.349 + ], + "angle": 0, + "content": "[61] Xingyu Zhou, Zhisong Pan, Yexin Duan, Jin Zhang, and Shuaihui Wang. A data independent approach to generate adversarial patches. Machine Vision and Applications, 32(3):67, 2021. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.35, + 0.482, + 0.432 + ], + "angle": 0, + "content": "[62] Zongwei Zhou, Md Mahfuzur Rahman Siddiquee, Nima Tajbakhsh, and Jianming Liang. Unet++: A nested u-net architecture for medical image segmentation. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support: International Workshop and International Workshop, pages 3-11, 2018. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.432, + 0.482, + 0.486 + ], + "angle": 0, + "content": "[63] Hongru Zhu, Peng Tang, Jeongho Park, Soojin Park, and Alan Yuille. Robustness of object recognition under extreme occlusion in humans and computational models. arXiv preprint arXiv:1905.04598, 2019. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.486, + 0.482, + 0.553 + ], + "angle": 0, + "content": "[64] Alon Zolfi, Moshe Kravchik, Yuval Elovici, and Asaf Shabtai. 
The translucent patch: A physical and universal attack on object detectors. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15232-15241, 2021. 1, 2" + }, + { + "type": "list", + "bbox": [ + 0.093, + 0.092, + 0.483, + 0.553 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "30397" + } + ] +] \ No newline at end of file diff --git a/2025/A Unified, Resilient, and Explainable Adversarial Patch Detector/b9c12ba3-81c5-4427-ad44-e661e361941c_origin.pdf b/2025/A Unified, Resilient, and Explainable Adversarial Patch Detector/b9c12ba3-81c5-4427-ad44-e661e361941c_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..1a5b362756782d7de53627f125ad0de06983c0e8 --- /dev/null +++ b/2025/A Unified, Resilient, and Explainable Adversarial Patch Detector/b9c12ba3-81c5-4427-ad44-e661e361941c_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b8101487851081778f1e01d847c648570b179a6c3fe365fbe5d8725617706fa7 +size 6288621 diff --git a/2025/A Unified, Resilient, and Explainable Adversarial Patch Detector/full.md b/2025/A Unified, Resilient, and Explainable Adversarial Patch Detector/full.md new file mode 100644 index 0000000000000000000000000000000000000000..7973eda7016c4d47dfc38d5efeac6a6bb496d4f1 --- /dev/null +++ b/2025/A Unified, Resilient, and Explainable Adversarial Patch Detector/full.md @@ -0,0 +1,265 @@ +# A Unified, Resilient, and Explainable Adversarial Patch Detector + +Vishesh Kumar, Akshay Agarwal +Trustworthy BiometraVision Lab, IISER Bhopal, India + +{vishesh22,akagarwal}@iiserb.ac.in + +# Abstract + +Deep Neural Networks (DNNs), backbone architecture in 'almost' every computer vision task, are vulnerable to adversarial attacks, particularly physical out-of-distribution (OOD) adversarial patches. Existing defense models often struggle with interpreting these attacks in ways that align with human visual perception. Our proposed AdvPatchXAI approach introduces a generalized, robust, and explainable defense algorithm designed to defend DNNs against physical adversarial threats. AdvPatchXAI employs a novel patch decorrelation loss that reduces feature redundancy and enhances the distinctiveness of patch representations, enabling better generalization across unseen adversarial scenarios. It learns prototypical parts self-supervised, enhancing interpretability and correlation with human vision. The model utilizes a sparse linear layer for classification, making the decision process globally interpretable through a set of learned prototypes and locally explainable by pinpointing relevant prototypes within an image. Our comprehensive evaluation shows that AdvPatchXAI closes the "semantic" gap between latent space and pixel space and effectively handles unseen adversarial patches even perturbed with unseen corruptions, thereby significantly advancing DNN robustness in practical settings1. + +# 1. Introduction + +Besides being dominant in computer vision over the years, deep neural networks (DNNs) have been found vulnerable to adversarial attacks [1, 19]. Early adversarial attacks on DNNs tricked models using slight, barely noticeable noise [18, 52]. While the above attack perturbs an entire image, a few attacks often target the main object in the scene naturally [29, 63]. 
Another type of adversary is the poisoning attack, which misleads models during training by introducing incorrect patterns, as in poison frogs and backdoor attacks [3, 9, 50]. Surprisingly, many of these minute adversarial attacks are ineffective in the physical world due to several unconstrained environmental factors, including the rotation and translation properties of objects. This led to the development of adversarial patch attacks, where a patterned sub-image is placed over the input image to deceive the model [8]. Because the patches are learned under expectation-over-transformation (EoT) constraints, they are highly effective in real-world scenarios and can fool virtually any deep network, including vision transformers [5, 12, 20, 37, 53, 64].

Therefore, securing DNN-based systems against stealthy and practical attacks is vital, especially in safety-critical domains such as autonomous driving, robotics, smart homes/cities, smart industries, video surveillance, and healthcare. Researchers are constantly developing new defenses and protection strategies [17, 21, 24, 39, 57] to tackle the limitations of DNNs against physical adversarial patches, but understanding their decision-making processes and evaluating the resiliency of the defense algorithms is increasingly important [42, 47]. Note that although physical adversarial patches are visible to the human eye, they still demand extra caution, since the added patch can also be an ordinary real-world object. This makes automated patch detection challenging and leads to many false rejections. Addressing this issue requires aligning human and machine vision, for instance by aligning their functional properties [16]. Such alignment improves a machine's ability to detect and respond to out-of-distribution (OOD) adversarial patches in several generalized settings, such as unseen patches and unseen perturbations, ensuring better security and performance. "To the best of our knowledge, no existing defense provides both high generalizability and explainability against adversarial patch attacks."

For the first time, we propose a generalized, robust, and explainable adversarial patch detector, namely AdvPatchXAI, that builds explainability into the network from the beginning of its development, in contrast to traditional methods that plug in an explainability module only after the model is fully trained. The proposed AdvPatchXAI uses a sparse linear layer that connects learned prototypical parts to classes. This setup allows a user to interpret the model by inspecting the prototypes and their relation to the classes. The weights of the linear layer are restricted to be non-negative, ensuring that the presence of a class-relevant prototype increases the evidence for that class. This layer functions like a scoring sheet: the score for a class is the sum of all present prototypes multiplied by their weights. Using this interpretable and predictive linear layer, AdvPatchXAI ensures a direct relation between the prototypes and the classification, which improves the interpretability of the model's decisions and enhances its ability to detect and respond to adversarial patches. A significant strength of the proposed approach is that it can be plugged into any deep learning architecture, both CNN- and transformer-based.
In brief, the contributions of this research are:

- We propose a generalized, robust, and explainable patch detector (AdvPatchXAI) that effectively detects unseen and out-of-distribution adversarial patches.
- For the first time in the literature, we evaluate the robustness of adversarial patch detectors against several common corruptions, ensuring their practicality in the unconstrained physical world.
- Extensive experimental comparison with benchmark and state-of-the-art works demonstrates the proposed defense algorithm's effectiveness, generalizability, and explainability.

# 2. Literature review

We identify three main philosophies for developing robust defenses against adversarial patch attacks. These approaches are not mutually exclusive and can be adopted together.

Adversarial Patched Dataset: Brown et al. [8] first introduced the concept of adversarial patches to fool object detectors. Since then, several advancements have been made to create more effective and stealthy adversarial patches, including LaVAN (focusing on weaknesses) [28], adversarial QR codes (appearing less suspicious) [10, 11], PS-GAN (improved quality) [34], and DiAP (data-independent) [61]. A survey [51] reveals the vulnerability of various state-of-the-art pre-trained models, including YOLOv4 [7], ViT-B/16, UNet++ [62], YOLOv3 [45], YOLOv2 [46], and YOLOv5, to physical adversarial patch attacks. Our first philosophy concerns the lack of a standardized dataset. While several effective adversarial patch generation algorithms exist [10, 32, 38, 61, 64], the absence of standardized datasets hinders the development of robust defense mechanisms. Recent efforts by Pintor et al. [44] and Ojaswee et al. [43] propose benchmark datasets specifically for adversarial patches, but they miss the effect of natural noise in real-world scenarios [2, 23]. While Kumar & Agarwal [31] explored, for the first time, the combined effect of adversarial patches and natural noises, they did not propose any novel defense algorithm. Our second philosophy concerns the defense algorithm itself. Very few defense works can effectively detect adversarial patch attacks in generalized settings such as unseen datasets, unseen adversarial patches, and unseen threat models [25, 27, 32, 43], and those that exist also lack explainability. To address these gaps, we have regenerated a large-scale dataset following [31], containing 10 different adversarial patches [44] and three natural noises (Gaussian, shot, and impulse noise); a detailed description of the dataset is given in subsection 4.1. Our third philosophy concerns the lack of a generalized and explainable patch defense algorithm.

Generalized & Explainable Patch Defense: While adversarial example detection for image classification has been studied extensively [6, 15, 59, 60], very few studies address adversarial patch detection in image classification. Pintor et al. [44] introduced the ImageNet-Patch dataset to benchmark machine learning models' robustness against adversarial patches. This dataset includes patches optimized to generalize across different models, allowing for a more efficient and transferable robustness evaluation. The dataset's utility has been demonstrated by testing its effectiveness against 127 models, showcasing its potential as a standard benchmark for assessing and improving model robustness against adversarial patches. Further, Ojaswee et al.
[43], using subsets of the ImageNet [13] and COCO [33] datasets, developed benchmark datasets and generalized the effect of these patches by fine-tuning several state-of-the-art DNNs in different generalized settings: (i) seen patches (the same patches during training and testing), (ii) unseen patches (different patches during training and testing), (iii) seen patches + unseen dataset, and (iv) unseen patches + unseen dataset. Kumar & Agarwal [31] extended this work by training traditional machine learning algorithms on features extracted by state-of-the-art DNNs under several conditions: (i) seen patch, (ii) seen patch + natural noise, (iii) unseen patch, and (iv) unseen patch + noise. These findings indicate that defending against adversarial patches in unseen settings is challenging, as the effectiveness of defenses is closely tied to the attributes of the patches used during training; detectors therefore have a lower detection rate on new, unseen patches. Notably, none of these studies propose novel algorithms for patch detection. Moreover, to our knowledge, existing detectors do not provide sufficient explanations for the alerts they raise, leaving the reasoning behind their decisions unclear. This research primarily focuses on unseen settings: (i) unseen patch, (ii) unseen patch + noise, (iii) unseen patch + unseen dataset, and (iv) unseen patch + unseen dataset + noise. The proposed AdvPatchXAI patch detector generalizes better than existing defenses and provides detailed explanations for its decisions.

![](images/9b6d36f85b9743b72767020bbec29aaae3ac69a31a7f32306720f1d190696f12.jpg)
Figure 1. Overview of our proposed method. The OOD adversarial patches used in this research are shown on top of the proposed architecture. Patch Decorrelation: a novel patch decorrelation loss $L_{PD}$, following a contrastive learning approach, is applied to the backbone features, along with a loss $L_{A}$ applied to the softmax output to assign the same prototype to the two representations of a patch in an image pair. To avoid trivial solutions and encourage the utilization of all prototypes, a tanh-loss $L_{T}$ is applied during the self-supervised pretraining phase. Connections between learned part-prototypes and classes are established through a sparse linear layer. The standard loss used is the negative log-likelihood, denoted $L_{C}$. Model outputs remain unnormalized during testing, allowing them to serve as straightforward scoring metrics.

# 3. Proposed Framework for AdvPatchXAI

This section first briefly describes the binary classification problem of adversarial patch detection, followed by a comprehensive discussion of the proposed explainable adversarial patch detector, i.e., AdvPatchXAI.

# 3.1. Problem Statement & Notations

Consider a binary classification problem with a training dataset $D_{tr} = \{(x,y), x \in X_{tr}, y \in Y_{tr}\}$ containing the known classes $C_2 = \{c_1 \text{ (real)}, c_2 \text{ (patched)}\}$ and a testing dataset $D_{te} = \{(x', y'), x' \in X_{te}, y' \in Y_{te}\}$, where $X_{tr}$ and $X_{te}$ represent the input images of the training and testing datasets, respectively, and $Y_{tr}$ and $Y_{te}$ are the corresponding class labels. We aim to learn interpretable prototypes that can be used as input features for model explainability.
The backbone of our model consists of pre-trained CNNs and a ViT, which learn an interpretable, one-dimensional image encoding $p$ that indicates the presence or absence of prototypical parts in an image. These prototypical parts (prototypes) are then connected to the classes through a sparse linear layer.

Our framework introduces two steps to identify and classify a given input accurately: (i) data augmentation based on TrivialAugment [40] to perform self-supervised pretraining of the learned prototype patches, and (ii) training AdvPatchXAI for effective and explainable adversarial patch detection.

# 3.2. Self-Supervised Pretraining of Prototypes

Following prior self-supervised learning methods [26], we generate a positive pair, denoted $x'$ and $x''$, using TrivialAugment [40]. This recently introduced augmentation strategy is efficient, requires no hyperparameter tuning, and applies a single augmentation per image. In contrast to the standard approach, however, we apply TrivialAugment twice, as shown in Figure 1. The first augmentation operation applies location-related (spatial) transformations, including shearing, rotation, and translation. The augmented image is then used as input to a second TrivialAugment operation that applies color alterations, including brightness, sharpness, hue, and contrast. We follow the color stage with grayscale conversion; we also experimented with RGB and YCbCr conversion but found both less effective than grayscale (see subsection 6.1).
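For concreteness, the sketch below composes one such two-stage view from standard torchvision operations. It is a minimal illustration, not the authors' released pipeline: the operation pools and magnitude ranges are our own assumptions in the spirit of TrivialAugment, which samples a single operation and strength per image.

```python
import random
import torchvision.transforms.functional as TF

# Stage 1: one randomly chosen spatial transformation per draw.
SPATIAL = [
    lambda im: TF.rotate(im, angle=random.uniform(-30, 30)),
    lambda im: TF.affine(im, angle=0, translate=(random.randint(-20, 20),
                         random.randint(-20, 20)), scale=1.0, shear=0.0),
    lambda im: TF.affine(im, angle=0, translate=(0, 0), scale=1.0,
                         shear=random.uniform(-15, 15)),
]

# Stage 2: one randomly chosen color alteration, then grayscale conversion.
COLOR = [
    lambda im: TF.adjust_brightness(im, random.uniform(0.5, 1.5)),
    lambda im: TF.adjust_sharpness(im, random.uniform(0.5, 1.5)),
    lambda im: TF.adjust_hue(im, random.uniform(-0.1, 0.1)),
    lambda im: TF.adjust_contrast(im, random.uniform(0.5, 1.5)),
]

def augment_view(img):
    """One augmented view: spatial op -> color op -> grayscale."""
    img = random.choice(SPATIAL)(img)
    img = random.choice(COLOR)(img)
    # Keep three channels so pre-trained backbones still accept the input.
    return TF.rgb_to_grayscale(img, num_output_channels=3)

# A positive pair (x', x'') is two independent draws from the same image:
# x1, x2 = augment_view(img), augment_view(img)
```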
# 3.2.1. Patch Decorrelation Loss

To reduce redundancy and enhance the distinctiveness of patch features, we propose a patch decorrelation loss applied to the patch features. We adopt a contrastive learning strategy similar to [56] to achieve alignment and uniformity of patch representations; however, unlike their image-level approach, we focus on patch-level alignment. The Patch Decorrelation Module plays a critical role in our method by promoting similarity between features extracted from different views of the same patch while simultaneously reducing redundancy across feature dimensions. An input image is first forwarded through the backbone DNN $f$. The resulting output $z = f(x; w_f)$ consists of $D$ two-dimensional $(H \times W)$ feature maps, where $w_f$ denotes the trainable parameters of $f$; the feature maps for the two views of $x$ are $z' = f(x'; w_f)$ and $z'' = f(x''; w_f)$. The feature tensors initially have dimensions $(B, D, H, W)$, where $B$ denotes the batch size and $D$ the number of feature maps of size $H \times W$; we reshape them to $(B \times H \times W, D)$ so that each row holds the features of one image patch within the batch. The cross-correlation matrix is computed from the reshaped feature maps of the two views as $[D_{cr}] = [d_{ij}]_{(D,D)} = \frac{1}{B \times H \times W} [z']^{T}_{(B \times H \times W, D)} [z'']_{(B \times H \times W, D)}$, where $i, j = 1, 2, \dots, D$. Let $I = [I_{ij}]_{(D,D)}$ be the identity matrix. The patch decorrelation loss $L_{PD}$ is then defined as:

$$
L_{PD} = \frac{1}{D}\left(\sum_{i=1}^{D}\left(d_{ii} - I_{ii}\right)^{2} + \lambda \sum_{i=1}^{D}\sum_{j \neq i}\left(d_{ij} - I_{ij}\right)^{2}\right) \tag{1}
$$

Here, $\lambda = 5 \times 10^{-3}$ is a trade-off parameter of the loss function. Our objective is to diagonalize this cross-correlation matrix, which mitigates the risk of trivial solutions, such as activating the same prototype for all patches, and promotes utilization of the entire prototype space.

# 3.2.2. Alignment Loss on Softmax Outputs

After aligning the raw features with the Patch Decorrelation Module, we further refine the alignment using a softmax-based alignment loss $L_{A}$ [41], which ensures that the softmax-normalized feature vectors of corresponding patches from the two views are closely aligned. A softmax is applied over the $D$ prototype dimension such that $\sum_{i=1}^{D} s_{h,w,i} = 1$, where $s_{h,w,i}$ denotes the degree to which the patch located at $(h, w) \in H \times W$ corresponds to prototype $i$. Ideally, $s_{h,w,:}$ is a one-hot encoded vector indicating a perfect assignment to one prototype. To measure the similarity between corresponding patches from the two augmented views, we calculate the dot product between their latent representations $s'_{h,w,:}$ and $s''_{h,w,:}$:

$$
L_{A} = -\frac{1}{HW} \sum_{(h, w) \in H \times W} \log\left(s'_{h,w,:} \cdot s''_{h,w,:}\right) \tag{2}
$$

Our goal is to identify the presence or absence of prototypical parts within an image, for which a max-pooling operation is applied across each feature map $s_{:,:,d}$. This results in a presence score tensor $p \in [0,1]^{D}$, where each element $p_{d}$ represents the strength of the $d$-th prototype's presence in the image. We introduce a tanh-based regularization loss $L_{T}$ [41] to prevent trivial solutions; it encourages each prototype to be present at least once in a mini-batch. The tanh loss is defined as:

$$
L_{T}(p) = -\frac{1}{D} \sum_{i=1}^{D} \log\left(\tanh\left(\sum_{b=1}^{B} p_{b,i}\right) + \epsilon\right) \tag{3}
$$

where $\tanh$ and $\log$ are element-wise operations, $B$ is the number of samples in a mini-batch, $D$ is the number of prototypes, and $\epsilon$ is a small constant for numerical stability. This loss ensures that each prototype is utilized across the mini-batch, preventing any prototype from dominating and promoting a balanced representation across all prototypes.

The combination of patch decorrelation loss, alignment loss, and tanh loss enables our model to achieve robust patch-level alignment and uniformity. The patch decorrelation loss $L_{PD}$ enhances the raw feature alignment, the alignment loss $L_{A}$ ensures the softmax-normalized features are closely matched, and the tanh loss $L_{T}$ prevents trivial solutions by promoting diversity in prototype assignments. The final objective of the pre-training phase of AdvPatchXAI is: $\lambda_{PD}L_{PD} + \lambda_{A}L_{A} + \lambda_{T}L_{T}$.
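The following PyTorch sketch implements Eqs. (1)-(3) under the conventions stated above (feature maps of shape $(B, D, H, W)$, softmax over the prototype dimension, max-pooled presence scores). It is a minimal illustration, not the authors' code: the function name is hypothetical, the small `eps` inside the alignment log is our own numerical-stability addition, and the tanh loss is computed on the first view only for brevity.

```python
import torch

def pretraining_losses(z1, z2, lam=5e-3, eps=1e-8):
    """z1, z2: backbone feature maps of two views, shape (B, D, H, W)."""
    B, D, H, W = z1.shape

    # Patch decorrelation loss (Eq. 1): every spatial location is one patch.
    f1 = z1.permute(0, 2, 3, 1).reshape(-1, D)          # (B*H*W, D)
    f2 = z2.permute(0, 2, 3, 1).reshape(-1, D)
    corr = f1.T @ f2 / f1.shape[0]                      # cross-correlation (D, D)
    diff = (corr - torch.eye(D, device=corr.device)) ** 2
    off_diag = diff.sum() - diff.diagonal().sum()
    l_pd = (diff.diagonal().sum() + lam * off_diag) / D

    # Alignment loss (Eq. 2): dot product of softmax prototype assignments.
    s1 = torch.softmax(z1, dim=1)                       # softmax over D prototypes
    s2 = torch.softmax(z2, dim=1)
    l_a = -torch.log((s1 * s2).sum(dim=1) + eps).mean()

    # Tanh loss (Eq. 3): presence scores p via max-pooling over H and W.
    p = s1.amax(dim=(2, 3))                             # (B, D)
    l_t = -torch.log(torch.tanh(p.sum(dim=0)) + eps).mean()

    return l_pd, l_a, l_t

# With the loss weights given in subsection 4.2, the pretraining objective is:
# loss = 5 * l_pd + 5 * l_a + 2 * l_t
```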
# 3.3. Training AdvPatchXAI

After the pretraining phase of the prototypes, the patch presence score tensor $p$ is fed into a linear classification layer with non-negative weights $w_{c} \in \mathbb{R}_{\geq 0}^{D \times C}$, so the layer looks only for positive (true-class) evidence in an input image. These weights connect prototypes to the classes $C$, ensuring that the presence of prototypical parts contributes positively to the evidence for their associated class, which enhances interpretability. The bias term adjusts the classifier's decision threshold and is independent of the prototypes' contributions. This separation keeps the model interpretable, as each prototype's influence on the decision is straightforward and non-contradictory. The output score for each class is calculated by summing the element-wise product of the presence scores and the corresponding class weights of the linear layer. We incorporate a classification loss term $L_{C}$ to optimize model performance, computed as the standard negative log-likelihood between the predicted class probabilities $\hat{y}$ and the one-hot encoded ground-truth label $y$. While $L_{C}$ primarily influences the weights of the linear layer, it also fine-tunes the prototypes to better discriminate features relevant to the classification task. The overall objective for the second training phase of AdvPatchXAI is: $\lambda_{PD}L_{PD} + \lambda_{A}L_{A} + \lambda_{T}L_{T} + \lambda_{C}L_{C}$.
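A minimal sketch of such a scoring layer is given below. The paper does not specify the exact mechanism used to keep the weights non-negative; clamping inside the forward pass, as here, is one common way to realize the constraint, and the feature dimension is a placeholder.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonNegativeLinear(nn.Linear):
    """Scoring layer: a present prototype can only ADD evidence for a class,
    so each class score is a sum of non-negative prototype contributions."""

    def forward(self, p):                  # p: (B, D) prototype presence scores
        return F.linear(p, self.weight.clamp(min=0), self.bias)

# Hypothetical dimensions: D prototypes -> C = 2 classes (real vs. patched).
classifier = NonNegativeLinear(in_features=768, out_features=2)
scores = classifier(torch.rand(4, 768))    # unnormalized class evidence
```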
# 4. Experimental Result and Analysis

The dataset used to evaluate the proposed and existing defenses is discussed in subsection 4.1, followed by the implementation details of the proposed algorithm. Finally, the proposed defense algorithm is analyzed in detail.

# 4.1. Out-of-Distribution Adversarial Patch Dataset

Following the procedure outlined by Kumar & Agarwal [31], this paper introduces two datasets focusing on physical adversarial patches and natural noises, built from the ImageNet and COCO datasets. Images are attacked with 10 different styles of physical adversarial patches and three types of natural noise (Gaussian, shot, and impulse noise). For COCO, we randomly selected 2,000 clean images from the validation set to serve as the clean subset. Another 2,000 images from the COCO validation set are selected, and the 10 adversarial patches are applied to each, generating 20,000 adversarial patch images. Clean and patched images are divided into training and testing sets in a 3:2 ratio; for example, the 2,000 clean COCO images and the 2,000 images carrying a single patch are each divided into 1,200 training and 800 test images. Applying the three natural noises then yields 2,400 noisy test images for both the clean and patched sets. Since our primary focus is generalizability, we use ImageNet to test the model trained on the COCO patched training subset. For ImageNet, we randomly selected 800 clean images from the validation set; another 800 images are selected, and the 10 adversarial patches are applied to each, generating 8,000 adversarial patch images. Again, applying the three types of natural noise gives 2,400 noisy images for the clean and patched test sets. In total, the dataset includes 2,800 clean images, 28,000 adversarial patch images (20,000 from COCO and 8,000 from ImageNet), 4,800 noisy test images, and 48,000 images with both adversarial patches and natural noise.

# 4.2. Implementation Details

We utilize three convolutional backbones in AdvPatchXAI: ResNet50 [22] (R), ConvNeXt-tiny [36] (C), and MobileNetV2 [48] (M). A transformer-based network, ViT [14], is also incorporated. Pre-trained models are used, but the strides of the last layers are modified from 2 to 1 to increase the width $(W)$ and height $(H)$ of the output feature maps (from $7 \times 7$ to $28 \times 28$ for ResNet and MobileNet, to $26 \times 26$ for ConvNeXt; the output is reshaped to $14 \times 14$ for ViT). This adjustment results in a finer-grained patch grid $z$, improving patch similarity optimization. The backbone $f$ is fine-tuned using Adam with a learning rate of 0.0005 and a cosine annealing schedule; the linear layer is trained with a learning rate of 0.05. Loss weights are set to $\lambda_{C} = \lambda_{T} = 2$ and $\lambda_{PD} = \lambda_{A} = 5$. Prototypes are pretrained for 10 epochs, followed by training AdvPatchXAI for an additional 60 epochs. Images are resized to $224 \times 224$ and augmented with TrivialAugment [40] in two stages: the first stage applies location-related transformations, and the second applies color transformations followed by grayscale conversion. Experiments are performed with a fixed seed of one to ensure reproducibility.

# 4.3. Result Analysis

We evaluated our proposed method, AdvPatchXAI, with various backbone DNNs using a comprehensive set of experiments. We trained the network on the COCO training subset for each of the 10 patches separately and evaluated it under several generalized zero-shot settings, including a seen dataset with unseen patches and an unseen dataset with unseen patches, both with and without natural noises. The effectiveness of AdvPatchXAI can be assessed through two key factors: the robustness of the chosen backbone DNN and the success of the training patch in identifying unseen patches. The presence of natural noises during testing probes the detector's robustness in a black-box setting. To further strengthen the case for the proposed defense algorithm, we have evaluated its resiliency against adaptive attacks (discussed in the supplementary).

# 4.3.1. Zero-Shot OOD Patch Detection in the Absence of Noise

Tables 1 and 2 present the average classification accuracy (mean) of the proposed AdvPatchXAI on the COCO and ImageNet datasets, respectively, combined with OOD unseen patches in the silent (noise-free) setting. AdvPatchXAI-R (i.e., with backbone R) achieved the highest mean accuracies of $99.33\%$ and $98.94\%$, obtained with Patch-9 on the COCO and ImageNet datasets, respectively, demonstrating exceptional performance and robustness. AdvPatchXAI-ViT (i.e., with backbone ViT) also performed well, especially for Patch-4 on COCO ($96.42\%$) and Patch-0 on ImageNet ($94.31\%$), showing reliability with moderate standard deviation (SD) (see supplementary for more detail), but it was found more vulnerable on Patch-5. In contrast, AdvPatchXAI-M (MobileNet) and AdvPatchXAI-C (ConvNeXt) exhibited higher variability, with significant accuracy drops for specific patches such as Patch-3. Figure 2 gives a clearer picture of the robustness of AdvPatchXAI-R and AdvPatchXAI-ViT on both the COCO and ImageNet datasets.

Table 1. Adversarial patch detection accuracy of the proposed AdvPatchXAI with different backbones on the COCO (seen dataset) subset in several generalized settings: Silent (unseen patch without any noise) and Noisy (unseen patch + noise). The results are reported as means. Patch-{0-9}\{1} indicates models trained on Patch-1 and tested on all other patches except Patch-1. R, M, ViT, and C represent the ResNet50, MobileNet, Vision Transformer, and ConvNeXt backbones, respectively. The best mean values per setting are highlighted.
| Method | Test | Patch-{0-9}\{0} | Patch-{0-9}\{1} | Patch-{0-9}\{2} | Patch-{0-9}\{3} | Patch-{0-9}\{4} | Patch-{0-9}\{5} | Patch-{0-9}\{6} | Patch-{0-9}\{7} | Patch-{0-9}\{8} | Patch-{0-9}\{9} |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| AdvPatchXAI-R | Silent | 94.09 | 95.57 | 84.69 | **83.30** | 97.99 | 79.65 | 85.53 | 96.91 | **94.76** | **99.33** |
| | Noisy | 85.81 | 88.86 | 78.07 | 72.01 | **96.09** | **69.05** | 78.95 | **94.59** | **83.79** | **95.92** |
| AdvPatchXAI-M | Silent | 94.17 | **97.45** | 77.33 | 64.88 | **98.47** | 72.28 | **93.90** | **98.61** | 92.90 | 97.24 |
| | Noisy | 76.09 | 70.82 | 56.91 | 50.24 | 89.63 | 51.34 | 70.45 | 83.65 | 71.13 | 68.11 |
| AdvPatchXAI-ViT | Silent | **95.84** | 95.19 | **93.20** | 81.74 | 96.42 | 67.27 | 89.08 | 93.90 | 89.00 | 94.32 |
| | Noisy | **92.31** | **92.61** | **88.91** | **78.91** | 92.95 | 64.57 | **85.74** | 90.52 | 80.56 | 93.13 |
| AdvPatchXAI-C | Silent | 90.08 | 85.29 | 86.08 | 75.94 | 96.27 | **82.42** | 78.63 | 87.18 | 92.73 | 95.28 |
| | Noisy | 79.16 | 79.88 | 74.62 | 64.69 | 86.57 | 59.72 | 72.74 | 71.09 | 76.41 | 84.06 |
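The column protocol of Table 1 (train on one patch, test on the remaining nine) can be summarized by the schematic loop below; `train_detector` and `evaluate` are hypothetical stand-ins for our training and evaluation routines, so this shows the protocol rather than runnable end-to-end code.

```python
patches = list(range(10))
for k in patches:
    # Train AdvPatchXAI on images carrying only Patch-k (plus clean images).
    detector = train_detector(coco_train, train_patch=k)
    # Evaluate on the nine patches never seen during training.
    unseen = [p for p in patches if p != k]
    accs = [evaluate(detector, coco_test, patch=p) for p in unseen]
    print(f"Patch-{{0-9}}\\{{{k}}}: mean accuracy = {sum(accs) / len(accs):.2f}")
```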
As Figure 2 shows, the average performance of AdvPatchXAI-R on COCO and ImageNet ($91.8\%$ and $90.91\%$, respectively) is between $2.2\%$ and $4.81\%$ better than AdvPatchXAI with the other backbones on both datasets. Only a minor difference of $0.66$-$1.95\%$ in performance is observed when both the dataset and the patches are unknown to the model.

# 4.3.2. Zero-Shot OOD Patch Detection Under Noise Perturbation

To demonstrate the robustness of our defense, we evaluated its resiliency when trained on clean images (without exposure to any noise during training) using the COCO subsets. This setting is crucial because natural noises are inherent in the environment [4], and training on every possible type of noise is impractical; detectors must therefore be resilient to unseen natural noises.
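For illustration, the three noise families can be simulated with scikit-image's `random_noise`; the mapping from a severity level (we test at severity 2) to noise strength below is our own assumption, not the exact corruption recipe used to build the test sets.

```python
from skimage.util import random_noise

def add_natural_noise(img, kind, severity=2):
    """img: float array in [0, 1]. Returns a noisy copy."""
    amount = 0.04 * severity                 # assumed severity-to-strength mapping
    if kind == "gaussian":
        return random_noise(img, mode="gaussian", var=amount ** 2)
    if kind == "shot":                       # shot (Poisson) noise; skimage's
        return random_noise(img, mode="poisson")  # poisson mode has no strength knob
    if kind == "impulse":                    # impulse noise as salt-and-pepper
        return random_noise(img, mode="s&p", amount=amount)
    raise ValueError(f"unknown noise kind: {kind}")
```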
The resiliency results of AdvPatchXAI for unseen patch detection in the noisy setting (applying each of the three noises, Gaussian, shot, and impulse, with severity $= 2$ separately to the test subset and averaging patch-wise) on the COCO and ImageNet datasets are shown in Figure 2. While a drop in detection performance is expected, AdvPatchXAI-ViT exhibits only a marginal reduction; for example, its performance drops from $89.6\%$ to $86.02\%$ on COCO and from $87.65\%$ to $83.97\%$ on ImageNet. In contrast, the other backbones suffer larger decreases in accuracy: AdvPatchXAI-R drops by $7.49\%$ on COCO and $7.73\%$ on ImageNet, whereas AdvPatchXAI-M drops by $19.88\%$ on COCO and $19.47\%$ on ImageNet. Tables 1 and 2 provide the detailed average 10-fold cross-validation performance in the unseen patch + noise evaluation setting for the COCO and ImageNet datasets, respectively. Notably, in zero-shot evaluations where the images (clean and patched) are perturbed, AdvPatchXAI with ViT outperforms the other backbones by a significant margin. Patch-4 proves more effective in 9 out of 16 evaluations, exhibiting higher mean accuracy. The performance difference between AdvPatchXAI-ViT with the best-performing patch and with Patch-4 is less than $1.4\%$ on both datasets in each setting (silent, noisy), yet Patch-4 shows very low SD (see supplementary for more detail), indicating its higher effectiveness. Extensive experimental evaluation reveals that AdvPatchXAI-ViT with Patch-4 generalizes well in unseen patch settings and maintains high resiliency when images are perturbed by noise. Therefore, in real-world applications, we recommend using AdvPatchXAI-ViT with Patch-4 to defend against adversarial patches.

![](images/31943b222b2e80e932f30eb179d708ca89e5be58e983a359f22bb6ea15b3fb6c.jpg)
Figure 2. Comparison with SOTA in terms of average adversarial patch detection accuracy for unseen patch and unseen patch + unseen noise detection on both the COCO and ImageNet datasets.

# 5. Explainability and Comparison with SOTA

To demonstrate the effectiveness of the proposed patch detector, we first present an extensive comparison with state-of-the-art algorithms and then discuss the explainability of the proposed approach.

# 5.1. Comparison with SOTA and Baseline

To demonstrate the effectiveness of our proposed model, we compared it with recent state-of-the-art (SOTA) methods: Ojaswee et al. [43] and Kumar & Agarwal [31]. For a fair comparison, we followed the same experimental protocol used to evaluate the proposed algorithm. As shown in Figure 2, our method outperforms all SOTA methods, except in the case of AdvPatchXAI-M on both the COCO and ImageNet datasets when clean and patched images are perturbed with noise. Specifically, AdvPatchXAI-R exceeds the performance of [43] by $13.35\%$ and [31] by $11.9\%$ on the COCO dataset; on the ImageNet dataset, it outperforms [43] by $12.03\%$ and [31] by $14.01\%$ when clean and unseen patched images are used for evaluation. Moreover, when clean and patched images are perturbed with noise, AdvPatchXAI-ViT surpasses [43] by $13.64\%$ and [31] by $12.22\%$ on the COCO dataset, and outperforms [43] by $13.33\%$ and [31] by $12.67\%$ on the ImageNet dataset.

Table 2. Adversarial patch detection accuracy of the proposed AdvPatchXAI with different backbones on the ImageNet (unseen dataset) subset in several generalized settings: Silent (unseen patch without any noise) and Noisy (unseen patch + noise). The results are reported as means. Patch-{0-9}\{3} indicates models trained on Patch-3 and tested on all other patches except Patch-3. R, M, ViT, and C represent the ResNet50, MobileNet, Vision Transformer, and ConvNeXt backbones, respectively. The best mean values per setting are highlighted.
| Method | Test | Patch-{0-9}\{0} | Patch-{0-9}\{1} | Patch-{0-9}\{2} | Patch-{0-9}\{3} | Patch-{0-9}\{4} | Patch-{0-9}\{5} | Patch-{0-9}\{6} | Patch-{0-9}\{7} | Patch-{0-9}\{8} | Patch-{0-9}\{9} |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| AdvPatchXAI-R | Silent | 92.96 | 95.12 | 84.49 | **82.56** | **98.17** | 79.40 | 84.68 | **97.18** | **95.60** | **98.94** |
| | Noisy | 84.39 | 88.51 | 77.81 | 71.42 | **95.67** | **68.34** | 78.30 | **93.72** | **78.88** | **94.76** |
| AdvPatchXAI-M | Silent | 93.68 | **95.28** | 77.30 | 64.13 | 95.71 | 70.72 | **91.93** | 96.81 | 91.90 | 95.92 |
| | Noisy | 75.10 | 70.16 | 56.88 | 50.22 | 87.74 | 51.15 | 69.73 | 82.00 | 68.55 | 67.21 |
| AdvPatchXAI-ViT | Silent | **94.31** | 92.67 | **92.03** | 81.37 | 92.94 | 66.41 | 86.71 | 91.77 | 87.37 | 90.94 |
| | Noisy | **90.18** | **90.14** | **87.98** | **78.88** | 89.19 | 64.22 | **83.73** | 87.19 | 78.16 | 90.04 |
| AdvPatchXAI-C | Silent | 89.41 | 85.26 | 85.12 | 75.91 | 96.23 | **81.21** | 77.30 | 86.67 | 91.53 | 94.65 |
| | Noisy | 78.14 | 79.55 | 73.29 | 63.66 | 85.20 | 59.13 | 71.54 | 70.16 | 75.21 | 83.41 |
Our proposed model, based on prototypical parts, is unique in its approach to OOD adversarial patch detection. To demonstrate this, we compared it to PIP-Net [41], a recent prototypical-parts-based model that is the architecture closest to our proposed method. Since PIP-Net is designed for convolutional backbones, we integrated ViT into PIP-Net and evaluated the results on the COCO subset in the unseen patch (silent) setting. The average patch detection accuracies of PIP-Net with backbones R, M, and ViT are $87.07\%$, $81.82\%$, and $89.32\%$, respectively. In comparison, AdvPatchXAI achieves $91.8\%$, $88.72\%$, and $89.6\%$ with the same respective backbones, i.e., improvements of $4.73\%$, $6.9\%$, and $0.28\%$ over the PIP-Net baseline.

Apart from the above datasets, we have benchmarked our proposed model on two other standard datasets, CUB-200-2011 [55] and Stanford Cars [30] (results discussed in the supplementary). These results highlight the robustness and effectiveness of our proposed model in detecting adversarial patches, particularly when utilizing the ViT and ResNet backbones.

![](images/570ab6025cc4a362d9a1d19834b7d40e6b32f91ac3f74eca5ee847cfa0dc827e.jpg)
Figure 3. t-SNE visualization of the feature space of the proposed AdvPatchXAI with the ViT and ConvNeXt backbones, trained on the most effective patch, Patch-4, and tested on other patches, Patch-0, Patch-1, and Patch-2. Red and blue represent the patched and real classes, respectively.

# 5.2. Explainable AdvPatchXAI

While quantitative analysis is essential for assessing a model's effectiveness and robustness, explainability is equally vital in supporting these findings. To address this, we performed several explainability analyses on the proposed AdvPatchXAI, including feature visualization, prototype prediction, prototype visualization, and Grad-CAM [49] visualization. Feature visualization helps in understanding the clusters formed by network features, which lead to image classification or misclassification. Figure 3 shows the t-SNE [54] plot of the proposed model using an attention backbone (ViT) and a convolutional backbone (ConvNeXt) when trained on Patch-4 and tested on Patches 0 to 2. The clear separation of the feature clusters supports the quantitative effectiveness of our proposed model. We also performed prototype prediction and visualization, along with Grad-CAM visualizations, as shown in Figure 4. The first column shows the unseen patched image and the unseen patched + noisy image, and the second column highlights the relevant prototypes within the image using yellow boxes, providing a local explanation.

![](images/cf1dc040629defedebb46d9d81b4b42a389e5a3c4e98a56e4df1bacdfd645dab.jpg)
Figure 4. Heat map visualization of the proposed AdvPatchXAI trained on Patch-4 and visualized on Patch-0 on the COCO dataset under both silent (without noise) and noisy (Gaussian noise with severity = 2) settings. The same image is used under both settings for a fair comparison.

Table 3. Mean adversarial patch detection accuracy of the proposed AdvPatchXAI across different color channels (RGB, YCbCr, and grayscale) with the ViT and ResNet50 backbones on COCO under the silent (unseen patch without any noise) setting.
| Model (Channel) | Patch-{0-9}\{0} | Patch-{0-9}\{1} | Patch-{0-9}\{2} | Patch-{0-9}\{3} | Patch-{0-9}\{4} |
| --- | --- | --- | --- | --- | --- |
| ViT (RGB) | 64.78 | 80.90 | 73.13 | 62.22 | 74.11 |
| ViT (YCbCr) | 65.57 | 90.50 | 84.36 | 66.72 | 78.44 |
| ViT (Grayscale) | 92.31 | 92.61 | 88.91 | 78.91 | 92.95 |
| R (RGB) | 63.71 | 77.49 | 61.44 | 62.95 | 81.26 |
| R (YCbCr) | 70.56 | 87.01 | 79.64 | 70.33 | 78.27 |
| R (Grayscale) | 85.81 | 88.86 | 78.07 | 72.01 | 96.09 |
The third column collects these relevant prototypes and connects them to the classes through the sparse linear layer to predict the correct image class. When noise accompanies a patched image, the prototype prediction is compromised: the extracted prototypes are less fine-grained and less relevant than those for clean images. The fourth column presents the Grad-CAM for each prototypical part, illustrating the image regions the network focuses on while determining the class. This detailed insight into the model's decision-making process supports the robustness and effectiveness of AdvPatchXAI in detecting adversarial patches (further discussed in the supplementary).

# 6. Ablation Studies

In this section, we present ablation studies highlighting the impact of color channels, real-world robustness, and the different terms of the loss function. To ensure fairness, all experiments are conducted using the pre-defined experimental protocol.

# 6.1. Effects of Different Color Augmentations

Table 3 showcases the advantage of converting images to grayscale. Since unseen patches and datasets can have diverse color distributions, retaining color information biases the system toward learning color cues rather than patch-specific information. We assert that suppressing such non-useful information makes the system more generalizable, which is also visible in the results. For example, when AdvPatchXAI-ViT is trained on Patch-4 using grayscale images, it yields at least $14.51\%$ higher accuracy than the RGB- and YCbCr-trained models.

# 6.2. Physical-World Effectiveness and Robustness Against Adversarial Patch Attack

We further evaluated the robustness of AdvPatchXAI for real-world deployment by printing and applying the adversarial patches in the physical world, as demonstrated by Pintor et al. [44]. Our proposed defense algorithm handles attacks in the physical world robustly, yielding average accuracies of $91.11\%$ and $88.88\%$ when the proposed ViT defense is trained with Patch-0 and Patch-4, respectively.

Table 4. Ablation study of the loss terms, reporting the total number of relevant prototypes (those with at least one non-zero weight) in the proposed AdvPatchXAI algorithm. The mean accuracy demonstrates the advantage of the combined loss function.
Number of Prototypes

| Method | Backbone | Patch-0 | Patch-1 | Patch-2 | Patch-3 | Patch-4 | Mean Acc. ↑ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| AdvPatchXAI $\{L_{PD}, L_{A}, L_{T}, L_{C}\}$ | R | 263 | 220 | 182 | 209 | 232 | 91.13 |
| | ViT | 187 | 168 | 203 | 166 | 203 | 92.48 |
| | M | 82 | 55 | 79 | 87 | 65 | 88.46 |
| AdvPatchXAI $\{L_{A}, L_{T}, L_{C}\}$ | R | 370 | 284 | 251 | 252 | 347 | 86.44 |
| | ViT | 195 | 151 | 161 | 152 | 161 | 92.66 |
| | M | 67 | 61 | 100 | 91 | 101 | 77.71 |
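The prototype counts in Table 4 can be read directly off the trained sparse layer; a minimal sketch, assuming the hypothetical `NonNegativeLinear` classifier sketched in Section 3.3:

```python
# A prototype is "relevant" if it keeps a non-zero weight for at least one class.
w = classifier.weight.detach().clamp(min=0)   # shape (C, D)
relevant = int((w > 0).any(dim=0).sum())
print(f"relevant prototypes: {relevant} / {w.shape[1]}")
```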
To evaluate the adversarial robustness of our proposed defense model, we generated perturbation-based PGD patch attacks following [35, 58] for a fair comparison. We highlight that AdvPatchXAI-R and AdvPatchXAI-ViT yield $98.04\%$ and $95.67\%$ average detection accuracy, respectively, in the silent setting when a PGD perturbation-based patch attack is used for evaluation. Even in the noisy setting, the proposed network remains robust against PGD perturbation-based patch attacks, with average detection performances of $95.55\%$ and $93.05\%$ for the same respective backbones.

# 6.3. Effect of Proposed Loss Function

We further examine the impact of different linear combinations of the proposed algorithm's loss functions. The outcomes presented in Table 4 reveal that the linear combination of our proposed patch decorrelation loss $L_{PD}$ with the other loss functions improves interpretability: it reduces the number of relevant prototypes per class while enhancing detection accuracy.

# 7. Conclusion

In this research, we presented AdvPatchXAI, which significantly advances the development of generalized, robust, and explainable adversarial patch detectors. By incorporating prototypical parts and a novel patch decorrelation module, our model achieves unprecedented accuracy in physical adversarial patch detection, particularly in zero-shot settings with and without unseen noise perturbations. Furthermore, the explainable nature of AdvPatchXAI provides deep insight into the decision-making process, enabling better understanding of and trust in AI systems. Future work will focus on enhancing the scalability of this approach and exploring its applicability to other forms of adversarial attack, aiming to fortify AI defenses in increasingly complex applications.

# Acknowledgement

V. Kumar is partially supported through the Visvesvaraya PhD Fellowship. A. Agarwal is partially funded through the ANRF PMECRG grant of the Govt. of India.

# References

[1] Akshay Agarwal, Nalini Ratha, Mayank Vatsa, and Richa Singh. Crafting adversarial perturbations via transformed image component swapping. IEEE Transactions on Image Processing, 31:7338-7349, 2022. 1
[2] Akshay Agarwal, Richa Singh, Mayank Vatsa, and Nalini Ratha. Image transformation-based defense against adversarial perturbation on deep learning models. IEEE Transactions on Dependable and Secure Computing, 18(5):2106-2121, 2020. 2
[3] Akshay Agarwal, Richa Singh, Mayank Vatsa, and Nalini Ratha. IBAttack: Being cautious about data labels. IEEE Transactions on Artificial Intelligence, 4(6):1484-1493, 2022. 1
[4] Akshay Agarwal, Mayank Vatsa, Richa Singh, and Nalini K Ratha. Noise is inside me! Generating adversarial perturbations with noise derived from natural filters. In IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 774-775, 2020. 6
[5] Naveed Akhtar and Ajmal Mian. Threat of adversarial attacks on deep learning in computer vision: A survey. IEEE Access, 6:14410-14430, 2018. 1
[6] Ahmed Aldahdooh, Wassim Hamidouche, Sid Ahmed Fezza, and Olivier Déforges. Adversarial example detection for DNN models: A review and experimental comparison. Artificial Intelligence Review, 55(6):4403-4462, 2022. 2
[7] Alexey Bochkovskiy, Chien-Yao Wang, and Hong-Yuan Mark Liao. Yolov4: Optimal speed and accuracy of object detection. arXiv preprint arXiv:2004.10934, 2020. 2
[8] Tom B Brown, Dandelion Mane, Aurko Roy, Martin Abadi, and Justin Gilmer. Adversarial patch. arXiv preprint arXiv:1712.09665, 2017. 1, 2
[9] Xinyun Chen, Chang Liu, Bo Li, Kimberly Lu, and Dawn Song. Targeted backdoor attacks on deep learning systems using data poisoning. arXiv preprint arXiv:1712.05526, 2017. 1
[10] Aran Chindaudom, Prarinya Siritanawan, Karin Sumongkayothin, and Kazunori Kotani. AdversarialQR: An adversarial patch in QR code format. In International Conference on Imaging, Vision & Pattern Recognition, pages 1-6, 2020. 2
[11] Aran Chindaudom, Prarinya Siritanawan, Karin Sumongkayothin, and Kazunori Kotani. Surreptitious adversarial examples through functioning QR code. Journal of Imaging, 8(5):122, 2022. 2
[12] Kenneth T Co, Luis Muñoz-González, Leslie Kanthan, and Emil C Lupu. Real-time detection of practical universal adversarial perturbations. arXiv preprint arXiv:2105.07334, 2021. 1
[13] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition, pages 248-255, 2009. 2
[14] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020. 5
[15] Gil Fidel, Ron Bitton, and Asaf Shabtai. When explainability meets adversarial learning: Detecting adversarial examples using SHAP signatures. In International Joint Conference on Neural Networks (IJCNN), pages 1-8, 2020. 2
[16] Christina M Funke, Judy Borowski, Karolina Stosio, Wieland Brendel, Thomas SA Wallis, and Matthias Bethge. Five points to check when comparing visual perception in humans and machines. Journal of Vision, 21(3):16-16, 2021. 1
[17] Thomas Gittings, Steve Schneider, and John Collomosse. Vax-a-net: Training-time defence against adversarial patch attacks. In Asian Conference on Computer Vision, 2020. 1
[18] Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014. 1
[19] Gaurav Goswami, Nalini Ratha, Akshay Agarwal, Richa Singh, and Mayank Vatsa. Unravelling robustness of deep learning based face recognition against adversarial attacks. In AAAI Conference on Artificial Intelligence, volume 32, 2018. 1
[20] Jindong Gu, Volker Tresp, and Yao Qin. Are vision transformers robust to patch perturbations? In European Conference on Computer Vision, pages 404-421, 2022. 1
[21] Jamie Hayes. On visible adversarial perturbations & digital watermarking. In IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 1597-1604, 2018. 1
[22] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2016. 5
[23] Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt, and Dawn Song. Natural adversarial examples. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15262-15271, 2021. 2
[24] Yuheng Huang and Yuanchun Li. Zero-shot certified defense against adversarial patches with vision transformers. arXiv preprint arXiv:2111.10481, 2021. 1
[25] Nan Ji, YanFei Feng, Haidong Xie, Xueshuang Xiang, and Naijin Liu.
Adversarial YOLO: Defense human detection patch attacks via detecting adversarial patches. arXiv preprint arXiv:2103.08860, 2021. 2
[26] Longlong Jing and Yingli Tian. Self-supervised visual feature learning with deep neural networks: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(11):4037-4058, 2020. 3
[27] Melanie Jutas, Ethan Liang, Sara Leary, Chris Ward, and Keith Manville. Detecting physical adversarial patch attacks with object detectors. In IEEE Applied Imagery Pattern Recognition Workshop, pages 1-7, 2022. 2
[28] Danny Karmon, Daniel Zoran, and Yoav Goldberg. LaVAN: Localized and visible adversarial noise. In International Conference on Machine Learning, pages 2507-2515. PMLR, 2018. 2
[29] Adam Kortylewski, Qing Liu, Huiyu Wang, Zhishuai Zhang, and Alan Yuille. Combining compositional models and deep networks for robust object classification under occlusion. In IEEE/CVF Winter Conference on Applications of Computer Vision, pages 1333-1341, 2020. 1
[30] Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3D object representations for fine-grained categorization. In IEEE International Conference on Computer Vision Workshops, pages 554-561, 2013. 7
[31] Vishesh Kumar and Akshay Agarwal. The unseen adversaries: Robust and generalized defense against adversarial patches. Available at SSRN 4772716, 2023. 2, 5, 6
[32] Juncheng Li, Frank Schmidt, and Zico Kolter. Adversarial camera stickers: A physical camera-based attack on deep learning systems. In International Conference on Machine Learning, pages 3896-3904. PMLR, 2019. 2
[33] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft COCO: Common objects in context. In European Conference on Computer Vision, pages 740-755, 2014. 2
[34] Aishan Liu, Xianglong Liu, Jiaxin Fan, Yuqing Ma, Anlan Zhang, Huiyuan Xie, and Dacheng Tao. Perceptual-sensitive GAN for generating adversarial patches. In AAAI Conference on Artificial Intelligence, volume 33, pages 1028-1035, 2019. 2
[35] Jiang Liu, Alexander Levine, Chun Pong Lau, Rama Chellappa, and Soheil Feizi. Segment and complete: Defending object detectors against adversarial patch attacks with robust patch detection. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14973-14982, 2022. 8
[36] Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, and Saining Xie. A convnet for the 2020s. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11976-11986, 2022. 5
[37] Giulio Lovisotto, Nicole Finnie, Maurizio Munoz, Chaithanya Kumar Mummadi, and Jan Hendrik Metzen. Give me your attention: Dot-product attention considered harmful for adversarial patch robustness. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15234-15243, 2022. 1
[38] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017. 2
[39] Michael McCoyd, Won Park, Steven Chen, Neil Shah, Ryan Roggenkemper, Minjune Hwang, Jason Xinyu Liu, and David Wagner. Minority reports defense: Defending against adversarial patches. In International Conference on Applied Cryptography and Network Security, pages 564-582, 2020. 1
[40] Samuel G Müller and Frank Hutter. TrivialAugment: Tuning-free yet state-of-the-art data augmentation.
In IEEE/CVF International Conference on Computer Vision, pages 774-782, 2021. 3, 5
[41] Meike Nauta, Jörg Schlötterer, Maurice van Keulen, and Christin Seifert. PIP-Net: Patch-based intuitive prototypes for interpretable image classification. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023. 4, 7
[42] Meike Nauta, Jan Trienes, Shreyasi Pathak, Elisa Nguyen, Michelle Peters, Yasmin Schmitt, Jörg Schlötterer, Maurice van Keulen, and Christin Seifert. From anecdotal evidence to quantitative evaluation methods: A systematic review on evaluating explainable AI. ACM Computing Surveys, 55(13s):1-42, 2023. 1
[43] Ojaswee Ojaswee, Akshay Agarwal, and Nalini Ratha. Benchmarking image classifiers for physical out-of-distribution examples detection. In IEEE/CVF International Conference on Computer Vision, pages 4427-4435, 2023. 2, 6
[44] Maura Pintor, Daniele Angioni, Angelo Sotgiu, Luca Demetrio, Ambra Demontis, Battista Biggio, and Fabio Roli. ImageNet-Patch: A dataset for benchmarking machine learning robustness against adversarial patches. Pattern Recognition, 134:109064, 2023. 2, 8
[45] Joseph Redmon. Yolov3: An incremental improvement. arXiv preprint arXiv:1804.02767, 2018. 2
[46] Joseph Redmon and Ali Farhadi. Yolo9000: Better, faster, stronger. In IEEE Conference on Computer Vision and Pattern Recognition, pages 7263-7271, 2017. 2
[47] Wojciech Samek, Grégoire Montavon, Sebastian Lapuschkin, Christopher J Anders, and Klaus-Robert Müller. Explaining deep neural networks and beyond: A review of methods and applications. Proceedings of the IEEE, 109(3):247-278, 2021. 1
[48] Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. Mobilenetv2: Inverted residuals and linear bottlenecks. In IEEE Conference on Computer Vision and Pattern Recognition, pages 4510-4520, 2018. 5
[49] Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In IEEE International Conference on Computer Vision, pages 618-626, 2017. 7
[50] Ali Shafahi, W Ronny Huang, Mahyar Najibi, Octavian Suciu, Christoph Studer, Tudor Dumitras, and Tom Goldstein. Poison frogs! Targeted clean-label poisoning attacks on neural networks. Advances in Neural Information Processing Systems, 31, 2018. 1
[51] Abhijith Sharma, Yijun Bian, Phil Munz, and Apurva Narayan. Adversarial patch attacks and defences in vision-based tasks: A survey. arXiv preprint arXiv:2206.08304, 2022. 2
[52] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013. 1
[53] Tung Tran, Issam Aib, Ehab Al-Shaer, and Raouf Boutaba. An evasive attack on snort flowbits. In IEEE Network Operations and Management Symposium, pages 351-358, 2012. 1
[54] Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(11), 2008. 7
[55] Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. The Caltech-UCSD Birds-200-2011 dataset. Technical report, California Institute of Technology, 2011. 7
[56] Tongzhou Wang and Phillip Isola. Understanding contrastive representation learning through alignment and uniformity on the hypersphere. In International Conference on Machine Learning, pages 9929-9939. PMLR, 2020. 4
[57] Chong Xiang, Arjun Nitin Bhagoji, Vikash Sehwag, and Prateek Mittal. PatchGuard: A provably robust defense against adversarial patches via small receptive fields and masking.
In USENIX Security Symposium, pages 2237-2254, 2021. 1
[58] Ke Xu, Yao Xiao, Zhaoheng Zheng, Kaijie Cai, and Ram Nevatia. Patchzero: Defending against adversarial patch attacks by detecting and zeroing the patch. In IEEE/CVF Winter Conference on Applications of Computer Vision, pages 4632-4641, 2023. 8
[59] Weilin Xu, David Evans, and Yanjun Qi. Feature squeezing: Detecting adversarial examples in deep neural networks. arXiv preprint arXiv:1704.01155, 2017. 2
[60] Puyudi Yang, Jianbo Chen, Cho-Jui Hsieh, Jane-Ling Wang, and Michael Jordan. ML-LOO: Detecting adversarial examples with feature attribution. In AAAI Conference on Artificial Intelligence, volume 34, pages 6639-6647, 2020. 2
[61] Xingyu Zhou, Zhisong Pan, Yexin Duan, Jin Zhang, and Shuaihui Wang. A data independent approach to generate adversarial patches. Machine Vision and Applications, 32(3):67, 2021. 2
[62] Zongwei Zhou, Md Mahfuzur Rahman Siddiquee, Nima Tajbakhsh, and Jianming Liang. Unet++: A nested u-net architecture for medical image segmentation. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, pages 3-11, 2018. 2
[63] Hongru Zhu, Peng Tang, Jeongho Park, Soojin Park, and Alan Yuille. Robustness of object recognition under extreme occlusion in humans and computational models. arXiv preprint arXiv:1905.04598, 2019. 1
[64] Alon Zolfi, Moshe Kravchik, Yuval Elovici, and Asaf Shabtai. The translucent patch: A physical and universal attack on object detectors. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15232-15241, 2021. 1, 2
212, + 174, + 396, + 185 + ], + "type": "text", + "content": "{vishesh22,akagarwal}@iiserb.ac.in" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 151, + 213, + 200, + 225 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 151, + 213, + 200, + 225 + ], + "spans": [ + { + "bbox": [ + 151, + 213, + 200, + 225 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 238, + 297, + 514 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 238, + 297, + 514 + ], + "spans": [ + { + "bbox": [ + 55, + 238, + 297, + 514 + ], + "type": "text", + "content": "Deep Neural Networks (DNNs), backbone architecture in 'almost' every computer vision task, are vulnerable to adversarial attacks, particularly physical out-of-distribution (OOD) adversarial patches. Existing defense models often struggle with interpreting these attacks in ways that align with human visual perception. Our proposed AdvPatchXAI approach introduces a generalized, robust, and explainable defense algorithm designed to defend DNNs against physical adversarial threats. AdvPatchXAI employs a novel patch decorrelation loss that reduces feature redundancy and enhances the distinctiveness of patch representations, enabling better generalization across unseen adversarial scenarios. It learns prototypical parts self-supervised, enhancing interpretability and correlation with human vision. The model utilizes a sparse linear layer for classification, making the decision process globally interpretable through a set of learned prototypes and locally explainable by pinpointing relevant prototypes within an image. Our comprehensive evaluation shows that AdvPatchXAI closes the \"semantic\" gap between latent space and pixel space and effectively handles unseen adversarial patches even perturbed with unseen corruptions, thereby significantly advancing DNN robustness in practical settings1." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 56, + 535, + 135, + 547 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 535, + 135, + 547 + ], + "spans": [ + { + "bbox": [ + 56, + 535, + 135, + 547 + ], + "type": "text", + "content": "1. Introduction" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 555, + 295, + 687 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 555, + 295, + 687 + ], + "spans": [ + { + "bbox": [ + 55, + 555, + 295, + 687 + ], + "type": "text", + "content": "Besides being dominant in computer vision over the years, deep neural networks (DNNs) have been found vulnerable to adversarial attacks [1, 19]. Early adversarial attacks on DNNs tricked models using slight, barely noticeable noise [18, 52]. While the above attack perturbs an entire image, a few attacks often target the main object in the scene naturally [29, 63]. Another type of adversaries are poisoning attacks, misled models during training by introducing incorrect patterns, such as in poison frogs and backdoor attacks [3, 9, 50]. Surprisingly, many of these minute adversarial attacks are ineffective in the physical world due to" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 213, + 555, + 323 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 213, + 555, + 323 + ], + "spans": [ + { + "bbox": [ + 313, + 213, + 555, + 323 + ], + "type": "text", + "content": "several unconstrained environmental factors, including rotation and translation properties of the objects. 
This led to the development of adversarial patch attacks, where a patterned sub-image is placed over the input image to deceive the model [8]. Due to expectation over transformation (EoT) constraints while learning the patches, they are highly effective in real-world scenarios. They can fool any possible deep networks, including vision transformers [5, 12, 20, 37, 53, 64]." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 326, + 556, + 613 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 326, + 556, + 613 + ], + "spans": [ + { + "bbox": [ + 313, + 326, + 556, + 613 + ], + "type": "text", + "content": "Therefore, securing DNN-based systems against stealthy and practical attacks is vital, especially in safety-critical domains such as autonomous driving, robotics, smart homes/cities, smart industries, video surveillance, and healthcare. Researchers are constantly developing new defenses or protection strategies [17, 21, 24, 39, 57] to tackle the limitations of DNNs against physical adversarial patches, but understanding their decision-making processes and evaluating the resiliency of the defense algorithms is increasingly important [42, 47]. It is to be noted that physical adversarial patches are visible to the human eye; however, they need extra precautions since the added patch can also be a real-world object. Due to this, automated detection of patches is challenging and leads to several false rejections. It is essential to align human and machine vision to address this issue. One way can be to align the functional properties of human and machine vision [16]. This alignment will help improve the machine's ability to detect and respond to out-of-distribution (OOD) adversarial patches more effectively in several generalized settings, such as unseen patches and unseen perturbation, ensuring better security and performance. \"To the best of our knowledge, no existing defense provides both high generalizability and explainability against adversarial patch attacks.\"" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 618, + 557, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 618, + 557, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 618, + 557, + 715 + ], + "type": "text", + "content": "For the first time, we propose a generalized, robust, and explainable adversarial patch detector, namely AdvPatchXAI, by adding the model explainability since the beginning of the development of the network in contrast to the traditional methods, which plug in the explainability module once the model is thoroughly trained. Proposed AdvPatchXAI uses a sparse linear layer that connects learned prototypical parts to classes. This setup allows a user to" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "spans": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "text", + "content": "CVF" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "spans": [ + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "type": "text", + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 693, + 295, + 713 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 693, + 295, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 693, + 295, + 713 + ], + "type": "text", + "content": "" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "text", + "content": "30387" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 294, + 251 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 294, + 251 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 294, + 251 + ], + "type": "text", + "content": "interpret the model by inspecting the prototypes and their relation to the classes. The weights of the linear layer are restricted to non-negative, ensuring that the presence of a class-relevant prototype increases the evidence for that class. This layer functions like a scoring sheet: the score for a class is the sum of all present prototypes multiplied by their weights. Using this interpretable and predictive linear layer, AdvPatchXAI ensures a direct relation between the prototypes and the classification. This approach improves the interpretability of the model's decisions and enhances its ability to detect and respond to adversarial patches. The significant strength of the proposed approach is that it can be plugged in with any deep learning architecture, including CNN and transformer-based architecture. In brief, the contributions of this research are:" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 251, + 294, + 382 + ], + "type": "list", + "angle": 0, + "index": 4, + "blocks": [ + { + "bbox": [ + 55, + 251, + 294, + 286 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 251, + 294, + 286 + ], + "spans": [ + { + "bbox": [ + 55, + 251, + 294, + 286 + ], + "type": "text", + "content": "- We have proposed a generalized, robust, and explainable patch detector (AdvPatchXAI), which effectively detects unseen and out-of-distribution adversarial patches." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 288, + 294, + 334 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 288, + 294, + 334 + ], + "spans": [ + { + "bbox": [ + 55, + 288, + 294, + 334 + ], + "type": "text", + "content": "- For the first time in the literature, we have evaluated the robustness of adversarial patch detectors against several common corruptions, ensuring their practicality in the unconstrained physical world." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 335, + 294, + 382 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 335, + 294, + 382 + ], + "spans": [ + { + "bbox": [ + 55, + 335, + 294, + 382 + ], + "type": "text", + "content": "- Extensive experimental comparison with benchmark and state-of-the-art works demonstrate the proposed defense algorithm's effectiveness, generalizability, and explainability." + } + ] + } + ], + "index": 3 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 55, + 394, + 160, + 407 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 394, + 160, + 407 + ], + "spans": [ + { + "bbox": [ + 55, + 394, + 160, + 407 + ], + "type": "text", + "content": "2. 
Literature review" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 415, + 294, + 461 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 415, + 294, + 461 + ], + "spans": [ + { + "bbox": [ + 55, + 415, + 294, + 461 + ], + "type": "text", + "content": "We identify three main philosophies for developing robust defenses against adversarial patch attacks. These approaches are not mutually exclusive and can be adopted together." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 463, + 295, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 463, + 295, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 463, + 295, + 713 + ], + "type": "text", + "content": "Adversarial Patched Dataset: The first time Brown et al.[8] introduced the concept of adversarial patches to fool the object detectors. Since then, several advancements have been made to create more effective and stealthy adversarial patches, including LaVAN (focusing on weaknesses) [28], Adversarial QR codes (appearing less suspicious) [10, 11], PS-GAN (improved quality) [34], and DiAP (data-independent) [61]. A survey [51] reveals the vulnerability of various state-of-the-art pre-trained models YOLOv4 [7], ViT-B/16, Unet++ [62], YOLOv3 [45], YOLOv2 [46], YOLOv5 against physical adversarial patch attacks. Our first philosophy lies in the lack of a standardized dataset. While several effective adversarial patch generation algorithms exist [10, 32, 38, 61, 64], a lack of standardized datasets hinders the development of robust defense mechanisms. Recent efforts by Pintor et al. [44] and Ojaswee et al.[43] propose benchmark datasets specifically for adversarial patches but missing the effect of natural noises in real-world scenarios [2, 23]. While Kumar & Agarwal [31], for the first time, explored the combined effect of both adversarial patches and natural noises, they have not" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 72, + 555, + 707 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 72, + 555, + 707 + ], + "spans": [ + { + "bbox": [ + 313, + 72, + 555, + 707 + ], + "type": "text", + "content": "proposed any novel defense algorithm. Our Second philosophy lies with the defense algorithm. It is also to be noted that minimal defense works exist that can effectively detect adversarial patch attacks in several generalized settings such as unseen datasets, unseen adversarial patches, and unseen threat model [25, 27, 32, 43] and also lack explainability. To address these gaps, in our research, we have regenerated a large-scale dataset followed by [31] containing 10 different adversarial patches [44] and three natural noises (gaussian noise, shot noise, and impulse noise); the detailed description of the dataset given in subsection 4.1. Our third philosophy concerns the lack of a generalized and explainable patch defense algorithm. Generalized & Explainable Patch Defense: While several studies have been explored for adversarial example detection in image classification task [6, 15, 59, 60], very few studies talk about adversarial patch detection in image classification. Pintor et al. [44] introduced an ImageNet-Patch dataset to benchmark machine learning models' robustness against adversarial patches. This dataset includes patches optimized to generalize across different models, allowing for a more efficient and transferable robustness evaluation. 
The dataset's utility has been demonstrated by testing its effectiveness against 127 models, showcasing its potential as a standard benchmark for assessing and improving model robustness against adversarial patches. Further, Ojaswee et al. [43], using subsets of ImageNet [13] and COCO [33] datasets, developed benchmark datasets and generalized the effect of these patches by finetuning several state-of-the-art DNNs in different generalized settings such as (i) seen patch settings (same patches during testing and training), (ii) unseen patch setting (different patches during training and testing), (iii) seen patch + unseen dataset, and (iv) unseen patch + unseen dataset. Again, Kumar & Agarwal [31] extended this work by training traditional machine learning algorithms using the features extracted by state-of-the-art DNNs in several conditions such as (i) seen patch, (ii) seen patch + natural noises, (iii) unseen patch, and (iv) unseen patch + noise. These findings indicate that defending against adversarial patches in unseen settings is challenging, as the effectiveness of defenses is closely tied to the attributes of the patches used during training. This means detectors have a lower detection rate for new, unseen patches. It has been noticed that none of these studies propose novel algorithms for patch detection. Moreover, to our knowledge, existing detectors do not provide sufficient explanations for the alerts they raise, leaving the reasoning behind their decisions unclear. This research primarily focuses on unseen settings such as (i) the unseen patch setting, (ii) unseen patch + noise, (iii) unseen patch + unseen dataset, and (iv) unseen patch + unseen dataset + noise. It generalizes better than existing defenses and provides detailed explanations through the proposed AdvPatchXAI patch detector." + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "text", + "content": "30388" + } + ] + } + ], + "index": 9 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 109, + 73, + 502, + 350 + ], + "blocks": [ + { + "bbox": [ + 109, + 73, + 502, + 350 + ], + "lines": [ + { + "bbox": [ + 109, + 73, + 502, + 350 + ], + "spans": [ + { + "bbox": [ + 109, + 73, + 502, + 350 + ], + "type": "image", + "image_path": "9b6d36f85b9743b72767020bbec29aaae3ac69a31a7f32306720f1d190696f12.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 55, + 354, + 555, + 420 + ], + "lines": [ + { + "bbox": [ + 55, + 354, + 555, + 420 + ], + "spans": [ + { + "bbox": [ + 55, + 354, + 555, + 420 + ], + "type": "text", + "content": "Figure 1. Overview of our proposed method. OOD adversarial patches used in this research are shown on top of the proposed architecture. 
Patch Decorrelation: A novel patch decorrelation loss " + }, + { + "bbox": [ + 55, + 354, + 555, + 420 + ], + "type": "inline_equation", + "content": "L_{PD}" + }, + { + "bbox": [ + 55, + 354, + 555, + 420 + ], + "type": "text", + "content": ", following a contrastive learning approach, is applied to the backbone features, along with a loss " + }, + { + "bbox": [ + 55, + 354, + 555, + 420 + ], + "type": "inline_equation", + "content": "L_{A}" + }, + { + "bbox": [ + 55, + 354, + 555, + 420 + ], + "type": "text", + "content": " applied to the softmax output to assign the same prototype to two representations of patches for an image pair. To avoid trivial solutions and encourage the utilization of all prototypes, a tanh-loss " + }, + { + "bbox": [ + 55, + 354, + 555, + 420 + ], + "type": "inline_equation", + "content": "L_{T}" + }, + { + "bbox": [ + 55, + 354, + 555, + 420 + ], + "type": "text", + "content": " is applied during the self-supervised pretraining phase. Connections between learned part-prototypes and classes are established through a sparse linear layer. The standard loss used is negative log-likelihood, denoted " + }, + { + "bbox": [ + 55, + 354, + 555, + 420 + ], + "type": "inline_equation", + "content": "L_{C}" + }, + { + "bbox": [ + 55, + 354, + 555, + 420 + ], + "type": "text", + "content": ". Model outputs remain unnormalized during testing, allowing them to serve as straightforward scoring metrics." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 440, + 275, + 453 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 440, + 275, + 453 + ], + "spans": [ + { + "bbox": [ + 55, + 440, + 275, + 453 + ], + "type": "text", + "content": "3. Proposed Framework for AdvPatchXAI" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 460, + 295, + 509 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 460, + 295, + 509 + ], + "spans": [ + { + "bbox": [ + 55, + 460, + 295, + 509 + ], + "type": "text", + "content": "This section first briefly describes the binary classification problem of adversarial patch detection, followed by a comprehensive discussion of the proposed explainable adversarial patch detector, i.e., AdvPatchXAI." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 516, + 228, + 528 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 516, + 228, + 528 + ], + "spans": [ + { + "bbox": [ + 55, + 516, + 228, + 528 + ], + "type": "text", + "content": "3.1. 
Problem Statement & Notations" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 533, + 295, + 700 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 533, + 295, + 700 + ], + "spans": [ + { + "bbox": [ + 55, + 533, + 295, + 700 + ], + "type": "text", + "content": "Given a binary classification problem with training dataset " + }, + { + "bbox": [ + 55, + 533, + 295, + 700 + ], + "type": "inline_equation", + "content": "D_{tr} = \\{(x,y), x \\in X_{tr}, y \\in Y_{tr}\\}" + }, + { + "bbox": [ + 55, + 533, + 295, + 700 + ], + "type": "text", + "content": " containing known classes " + }, + { + "bbox": [ + 55, + 533, + 295, + 700 + ], + "type": "inline_equation", + "content": "C_2 = \\{c_1 \\text{ (real)}, c_2 \\text{ (patched)}\\}" + }, + { + "bbox": [ + 55, + 533, + 295, + 700 + ], + "type": "text", + "content": " and a testing dataset " + }, + { + "bbox": [ + 55, + 533, + 295, + 700 + ], + "type": "inline_equation", + "content": "D_{te} = \\{(x', y'), x' \\in X_{te}, y' \\in Y_{te}\\}" + }, + { + "bbox": [ + 55, + 533, + 295, + 700 + ], + "type": "text", + "content": ", where, " + }, + { + "bbox": [ + 55, + 533, + 295, + 700 + ], + "type": "inline_equation", + "content": "X_{tr}" + }, + { + "bbox": [ + 55, + 533, + 295, + 700 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 533, + 295, + 700 + ], + "type": "inline_equation", + "content": "X_{te}" + }, + { + "bbox": [ + 55, + 533, + 295, + 700 + ], + "type": "text", + "content": " represent the input images of the training and testing dataset, respectively, while " + }, + { + "bbox": [ + 55, + 533, + 295, + 700 + ], + "type": "inline_equation", + "content": "Y_{tr}" + }, + { + "bbox": [ + 55, + 533, + 295, + 700 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 533, + 295, + 700 + ], + "type": "inline_equation", + "content": "Y_{te}" + }, + { + "bbox": [ + 55, + 533, + 295, + 700 + ], + "type": "text", + "content": " are the corresponding class labels. We aim to learn interpretable prototypes that can be used as input features for the model explainability. The backbone of our model consists of pre-trained CNNs and ViT, which learn an interpretable, 1-dimensional image encoding " + }, + { + "bbox": [ + 55, + 533, + 295, + 700 + ], + "type": "inline_equation", + "content": "p" + }, + { + "bbox": [ + 55, + 533, + 295, + 700 + ], + "type": "text", + "content": " that indicates the presence or absence of prototypical parts in an image. These prototypical parts (prototypes) are then connected to classes through a sparse linear layer." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 701, + 295, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 701, + 295, + 713 + ], + "spans": [ + { + "bbox": [ + 67, + 701, + 295, + 713 + ], + "type": "text", + "content": "Our framework introduces two steps to effectively iden-" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 313, + 441, + 555, + 501 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 441, + 555, + 501 + ], + "spans": [ + { + "bbox": [ + 313, + 441, + 555, + 501 + ], + "type": "text", + "content": "tify and classify the given input accurately: (i) data augmentation followed by TrivialAugment [40] to perform self-supervised pretraining of learned prototype patches and (ii) training AdvPatchXAI for effective and explainable adversarial patch detection." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 514, + 531, + 528 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 514, + 531, + 528 + ], + "spans": [ + { + "bbox": [ + 313, + 514, + 531, + 528 + ], + "type": "text", + "content": "3.2. Self-Supervised Pretraining of Prototypes" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 534, + 555, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 534, + 555, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 534, + 555, + 713 + ], + "type": "text", + "content": "Following prior self-supervised learning methods [26], we generate a positive pair, denoted as " + }, + { + "bbox": [ + 313, + 534, + 555, + 713 + ], + "type": "inline_equation", + "content": "x'" + }, + { + "bbox": [ + 313, + 534, + 555, + 713 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 534, + 555, + 713 + ], + "type": "inline_equation", + "content": "x''" + }, + { + "bbox": [ + 313, + 534, + 555, + 713 + ], + "type": "text", + "content": " by using TrivialAugment [40]. This recently introduced augmentation strategy is efficient, requiring no hyperparameter tuning, and applies a single augmentation per image. However, in contrast to the standard approach, we applied TrivialAugment twice, as shown in Figure 1. The first augmentation operation focuses on spatial (location-related) transformations, including shearing, rotation, and translation. This augmented image is then used as input to another TrivialAugment operation, which involves color alterations, including brightness, sharpness, hue, and contrast. In our approach, we applied grayscale conversion in the second augmentation stage. We also experimented with RGB and YCbCr conversion but found them less" + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "text", + "content": "30389" + } + ] + } + ], + "index": 10 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 271, + 85 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 271, + 85 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 271, + 85 + ], + "type": "text", + "content": "effective than grayscale (see subsection 6.1)." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 95, + 192, + 105 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 95, + 192, + 105 + ], + "spans": [ + { + "bbox": [ + 55, + 95, + 192, + 105 + ], + "type": "text", + "content": "3.2.1. Patch Decorrelation Loss" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 110, + 296, + 422 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 110, + 296, + 422 + ], + "spans": [ + { + "bbox": [ + 55, + 110, + 296, + 422 + ], + "type": "text", + "content": "To reduce redundancy and enhance the distinctiveness of the patch features, we propose a patch decorrelation loss applied to patch features. We adopt a contrastive learning strategy similar to [56] to achieve alignment and uniformity of patch representations. However, unlike their image-level approach, we focus on patch-level alignment. 
The Patch Decorrelation Module plays a critical role in our method by promoting the similarity between features extracted from different views of the same patch while simultaneously reducing redundancy across different feature dimensions. An input image is first forwarded through the backbone DNN " + }, + { + "bbox": [ + 55, + 110, + 296, + 422 + ], + "type": "inline_equation", + "content": "f" + }, + { + "bbox": [ + 55, + 110, + 296, + 422 + ], + "type": "text", + "content": ". The resulting output " + }, + { + "bbox": [ + 55, + 110, + 296, + 422 + ], + "type": "inline_equation", + "content": "z = f(x; w_f)" + }, + { + "bbox": [ + 55, + 110, + 296, + 422 + ], + "type": "text", + "content": " consists of " + }, + { + "bbox": [ + 55, + 110, + 296, + 422 + ], + "type": "inline_equation", + "content": "D" + }, + { + "bbox": [ + 55, + 110, + 296, + 422 + ], + "type": "text", + "content": " two-dimensional " + }, + { + "bbox": [ + 55, + 110, + 296, + 422 + ], + "type": "inline_equation", + "content": "(H \\times W)" + }, + { + "bbox": [ + 55, + 110, + 296, + 422 + ], + "type": "text", + "content": " feature maps, where " + }, + { + "bbox": [ + 55, + 110, + 296, + 422 + ], + "type": "inline_equation", + "content": "w_f" + }, + { + "bbox": [ + 55, + 110, + 296, + 422 + ], + "type": "text", + "content": " denotes the trainable parameters of " + }, + { + "bbox": [ + 55, + 110, + 296, + 422 + ], + "type": "inline_equation", + "content": "f" + }, + { + "bbox": [ + 55, + 110, + 296, + 422 + ], + "type": "text", + "content": ". So, the feature maps for the two different views of " + }, + { + "bbox": [ + 55, + 110, + 296, + 422 + ], + "type": "inline_equation", + "content": "x" + }, + { + "bbox": [ + 55, + 110, + 296, + 422 + ], + "type": "text", + "content": " are " + }, + { + "bbox": [ + 55, + 110, + 296, + 422 + ], + "type": "inline_equation", + "content": "z' = f(x'; w_f)" + }, + { + "bbox": [ + 55, + 110, + 296, + 422 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 110, + 296, + 422 + ], + "type": "inline_equation", + "content": "z'' = f(x''; w_f)" + }, + { + "bbox": [ + 55, + 110, + 296, + 422 + ], + "type": "text", + "content": ". The feature dimensions of each are initially " + }, + { + "bbox": [ + 55, + 110, + 296, + 422 + ], + "type": "inline_equation", + "content": "(B, D, H, W)" + }, + { + "bbox": [ + 55, + 110, + 296, + 422 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 55, + 110, + 296, + 422 + ], + "type": "inline_equation", + "content": "B" + }, + { + "bbox": [ + 55, + 110, + 296, + 422 + ], + "type": "text", + "content": " denotes the batch size and " + }, + { + "bbox": [ + 55, + 110, + 296, + 422 + ], + "type": "inline_equation", + "content": "D" + }, + { + "bbox": [ + 55, + 110, + 296, + 422 + ], + "type": "text", + "content": " represents the total number of feature maps of dimension " + }, + { + "bbox": [ + 55, + 110, + 296, + 422 + ], + "type": "inline_equation", + "content": "H \\times W" + }, + { + "bbox": [ + 55, + 110, + 296, + 422 + ], + "type": "text", + "content": ". We convert them to " + }, + { + "bbox": [ + 55, + 110, + 296, + 422 + ], + "type": "inline_equation", + "content": "(B \\times H \\times W, D)" + }, + { + "bbox": [ + 55, + 110, + 296, + 422 + ], + "type": "text", + "content": ", representing each image patch's features within the batch. 
We compute the cross-correlation matrix using the converted feature maps for the two views " + }, + { + "bbox": [ + 55, + 110, + 296, + 422 + ], + "type": "inline_equation", + "content": "x'" + }, + { + "bbox": [ + 55, + 110, + 296, + 422 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 110, + 296, + 422 + ], + "type": "inline_equation", + "content": "x''" + }, + { + "bbox": [ + 55, + 110, + 296, + 422 + ], + "type": "text", + "content": " as " + }, + { + "bbox": [ + 55, + 110, + 296, + 422 + ], + "type": "inline_equation", + "content": "[D_{cr}] = [d_{ij}]_{(D,D)} = \\frac{1}{B \\times H \\times W} [z']_{(B \\times H \\times W,D)}^{T} [z'']_{(B \\times H \\times W,D)}" + }, + { + "bbox": [ + 55, + 110, + 296, + 422 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 55, + 110, + 296, + 422 + ], + "type": "inline_equation", + "content": "i, j = 1, 2, \\dots, D" + }, + { + "bbox": [ + 55, + 110, + 296, + 422 + ], + "type": "text", + "content": ". Consider an identity matrix " + }, + { + "bbox": [ + 55, + 110, + 296, + 422 + ], + "type": "inline_equation", + "content": "I = [I_{ij}]_{(D,D)}" + }, + { + "bbox": [ + 55, + 110, + 296, + 422 + ], + "type": "text", + "content": ". Finally, the patch decorrelation loss (i.e., " + }, + { + "bbox": [ + 55, + 110, + 296, + 422 + ], + "type": "inline_equation", + "content": "L_{PD}" + }, + { + "bbox": [ + 55, + 110, + 296, + 422 + ], + "type": "text", + "content": ") is defined as:" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 65, + 433, + 295, + 459 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 433, + 295, + 459 + ], + "spans": [ + { + "bbox": [ + 65, + 433, + 295, + 459 + ], + "type": "interline_equation", + "content": "L_{PD} = \\frac{\\sum_{i=1}^{D} \\left(d_{ii} - I_{ii}\\right)^{2} + \\lambda \\sum_{i \\neq j} \\left(d_{ij} - I_{ij}\\right)^{2}}{D} \\tag{1}", + "image_path": "4eb9446f2ffd3636f57731dc508d77250de41bc897b5ccd0ada0aa80ed120811.jpg" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 471, + 296, + 533 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 471, + 296, + 533 + ], + "spans": [ + { + "bbox": [ + 55, + 471, + 296, + 533 + ], + "type": "text", + "content": "Here, " + }, + { + "bbox": [ + 55, + 471, + 296, + 533 + ], + "type": "inline_equation", + "content": "\\lambda = 5 \\times 10^{-3}" + }, + { + "bbox": [ + 55, + 471, + 296, + 533 + ], + "type": "text", + "content": " is a trade-off parameter of the loss function. Our objective revolves around diagonalizing this cross-correlation matrix, which mitigates the risk of trivial solutions, such as activating the same prototype for all patches, and promotes utilizing the entire prototype space." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 541, + 241, + 553 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 541, + 241, + 553 + ], + "spans": [ + { + "bbox": [ + 55, + 541, + 241, + 553 + ], + "type": "text", + "content": "3.2.2. Alignment Loss on Softmax Outputs" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 558, + 296, + 716 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 558, + 296, + 716 + ], + "spans": [ + { + "bbox": [ + 55, + 558, + 296, + 716 + ], + "type": "text", + "content": "After obtaining raw features aligned using the Patch Decorrelation Module, we further refine the alignment using a softmax-based alignment loss. 
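Before elaborating on the alignment loss, here is a minimal PyTorch sketch of the patch decorrelation loss of Eq. (1); the function name is ours, and per-dimension feature standardization (common in Barlow Twins-style objectives) is deliberately omitted.

```python
import torch

def patch_decorrelation_loss(z1, z2, lam=5e-3):
    """Hedged sketch of Eq. (1); z1, z2 are (B, D, H, W) feature maps of two views."""
    B, D, H, W = z1.shape
    # One row per image patch in the batch: (B*H*W, D), as described above.
    f1 = z1.permute(0, 2, 3, 1).reshape(-1, D)
    f2 = z2.permute(0, 2, 3, 1).reshape(-1, D)
    d = (f1.T @ f2) / (B * H * W)                 # cross-correlation matrix (D, D)
    on_diag = (torch.diagonal(d) - 1.0).pow(2).sum()
    off_diag = (d - torch.diag(torch.diagonal(d))).pow(2).sum()  # i != j terms
    return (on_diag + lam * off_diag) / D
```

We now return to the softmax-based alignment loss L_A.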
This alignment loss, denoted as " + }, + { + "bbox": [ + 55, + 558, + 296, + 716 + ], + "type": "inline_equation", + "content": "L_{A}" + }, + { + "bbox": [ + 55, + 558, + 296, + 716 + ], + "type": "text", + "content": " [41], ensures that the softmax-normalized feature vectors of corresponding patches from two views are closely aligned. A softmax is applied over " + }, + { + "bbox": [ + 55, + 558, + 296, + 716 + ], + "type": "inline_equation", + "content": "D" + }, + { + "bbox": [ + 55, + 558, + 296, + 716 + ], + "type": "text", + "content": " such that " + }, + { + "bbox": [ + 55, + 558, + 296, + 716 + ], + "type": "inline_equation", + "content": "\\sum_{i=1}^{D} s_{h,w,i} = 1" + }, + { + "bbox": [ + 55, + 558, + 296, + 716 + ], + "type": "text", + "content": ", so that each patch located at " + }, + { + "bbox": [ + 55, + 558, + 296, + 716 + ], + "type": "inline_equation", + "content": "h, w \\in H \\times W" + }, + { + "bbox": [ + 55, + 558, + 296, + 716 + ], + "type": "text", + "content": " carries a soft assignment over prototypes, indexed by " + }, + { + "bbox": [ + 55, + 558, + 296, + 716 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 55, + 558, + 296, + 716 + ], + "type": "text", + "content": ". Ideally, " + }, + { + "bbox": [ + 55, + 558, + 296, + 716 + ], + "type": "inline_equation", + "content": "s_{h,w,:}" + }, + { + "bbox": [ + 55, + 558, + 296, + 716 + ], + "type": "text", + "content": " is a one-hot encoded vector indicating a perfect assignment to one prototype. To measure the similarity between corresponding patches from two augmented views, we calculate the dot product between their latent representations, " + }, + { + "bbox": [ + 55, + 558, + 296, + 716 + ], + "type": "inline_equation", + "content": "s_{h,w,:}^{\\prime}" + }, + { + "bbox": [ + 55, + 558, + 296, + 716 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 558, + 296, + 716 + ], + "type": "inline_equation", + "content": "s_{h,w,:}^{\\prime \\prime}" + }, + { + "bbox": [ + 55, + 558, + 296, + 716 + ], + "type": "text", + "content": ":" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 342, + 90, + 555, + 121 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 342, + 90, + 555, + 121 + ], + "spans": [ + { + "bbox": [ + 342, + 90, + 555, + 121 + ], + "type": "interline_equation", + "content": "L_{A} = -\\frac{1}{HW} \\sum_{(h, w) \\in H \\times W} \\log \\left(s_{h,w,:}^{\\prime} \\cdot s_{h,w,:}^{\\prime \\prime}\\right) \\tag{2}", + "image_path": "f809a638ac892dd898892f54262aba0e2258b414b504e0e5547b9d43169cdc03.jpg" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 129, + 555, + 247 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 129, + 555, + 247 + ], + "spans": [ + { + "bbox": [ + 313, + 129, + 555, + 247 + ], + "type": "text", + "content": "Our goal is to identify the presence or absence of prototypical parts within an image, for which a max-pooling operation across each feature map dimension (denoted as " + }, + { + "bbox": [ + 313, + 129, + 555, + 247 + ], + "type": "inline_equation", + "content": "s_{:,:,d}" + }, + { + "bbox": [ + 313, + 129, + 555, + 247 + ], + "type": "text", + "content": ") is applied. 
This results in a presence score tensor, " + }, + { + "bbox": [ + 313, + 129, + 555, + 247 + ], + "type": "inline_equation", + "content": "p \\in [0,1]^D" + }, + { + "bbox": [ + 313, + 129, + 555, + 247 + ], + "type": "text", + "content": ", where each element " + }, + { + "bbox": [ + 313, + 129, + 555, + 247 + ], + "type": "inline_equation", + "content": "p_d" + }, + { + "bbox": [ + 313, + 129, + 555, + 247 + ], + "type": "text", + "content": " represents the strength of the " + }, + { + "bbox": [ + 313, + 129, + 555, + 247 + ], + "type": "inline_equation", + "content": "d" + }, + { + "bbox": [ + 313, + 129, + 555, + 247 + ], + "type": "text", + "content": "th prototype's presence in the image. We introduce the tanh-based regularization loss " + }, + { + "bbox": [ + 313, + 129, + 555, + 247 + ], + "type": "inline_equation", + "content": "L_{T}" + }, + { + "bbox": [ + 313, + 129, + 555, + 247 + ], + "type": "text", + "content": " [41] to prevent trivial solutions. The tanh-loss encourages the presence of each prototype at least once in a mini-batch. The tanh loss is defined as:" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 350, + 256, + 555, + 291 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 350, + 256, + 555, + 291 + ], + "spans": [ + { + "bbox": [ + 350, + 256, + 555, + 291 + ], + "type": "interline_equation", + "content": "L_{T}(p) = -\\frac{1}{D} \\sum_{i=1}^{D} \\log \\left(\\tanh \\left(\\sum_{b=1}^{B} p_{b,i}\\right) + \\epsilon\\right), \\tag{3}", + "image_path": "a68271896a0f029d219e442d021da9e619a19f1cb4067c2c309894cddbf76066.jpg" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 294, + 554, + 366 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 294, + 554, + 366 + ], + "spans": [ + { + "bbox": [ + 313, + 294, + 554, + 366 + ], + "type": "text", + "content": "where tanh and log are element-wise operations, " + }, + { + "bbox": [ + 313, + 294, + 554, + 366 + ], + "type": "inline_equation", + "content": "B" + }, + { + "bbox": [ + 313, + 294, + 554, + 366 + ], + "type": "text", + "content": " is the number of samples in a mini-batch, " + }, + { + "bbox": [ + 313, + 294, + 554, + 366 + ], + "type": "inline_equation", + "content": "D" + }, + { + "bbox": [ + 313, + 294, + 554, + 366 + ], + "type": "text", + "content": " is the number of prototypes, and " + }, + { + "bbox": [ + 313, + 294, + 554, + 366 + ], + "type": "inline_equation", + "content": "\\epsilon" + }, + { + "bbox": [ + 313, + 294, + 554, + 366 + ], + "type": "text", + "content": " is a small number for numerical stability. This loss ensures that each prototype is utilized across the mini-batch, preventing any prototype from dominating and promoting a balanced representation across all prototypes." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 366, + 554, + 475 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 366, + 554, + 475 + ], + "spans": [ + { + "bbox": [ + 313, + 366, + 554, + 475 + ], + "type": "text", + "content": "The combination of patch decorrelation loss, alignment loss, and tanh loss enables our model to achieve robust patch-level alignment and uniformity. 
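The two remaining pretraining losses admit an equally short sketch. The following hedged PyTorch fragment implements Eq. (2) and Eq. (3) under the same assumptions (hypothetical function names; softmax taken over the prototype dimension D):

```python
import torch
import torch.nn.functional as F

def alignment_loss(z1, z2):
    """Hedged sketch of Eq. (2): align per-patch prototype assignments of two views."""
    s1 = F.softmax(z1, dim=1)                     # (B, D, H, W); sums to 1 over D
    s2 = F.softmax(z2, dim=1)
    sim = (s1 * s2).sum(dim=1).clamp_min(1e-8)    # dot product per (b, h, w)
    return -sim.log().mean()                      # mean over batch and H*W

def tanh_loss(s, eps=1e-8):
    """Hedged sketch of Eq. (3): every prototype should fire somewhere in the batch."""
    p = s.amax(dim=(2, 3))                        # (B, D) presence scores via max-pooling
    return -torch.log(torch.tanh(p.sum(dim=0)) + eps).mean()

# Pretraining objective: 5 * L_PD + 5 * L_A + 2 * L_T (loss weights from Sec. 4.2).
```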
The patch decorrelation loss " + }, + { + "bbox": [ + 313, + 366, + 554, + 475 + ], + "type": "inline_equation", + "content": "(L_{PD})" + }, + { + "bbox": [ + 313, + 366, + 554, + 475 + ], + "type": "text", + "content": " enhances the raw feature alignment, the alignment loss " + }, + { + "bbox": [ + 313, + 366, + 554, + 475 + ], + "type": "inline_equation", + "content": "(L_A)" + }, + { + "bbox": [ + 313, + 366, + 554, + 475 + ], + "type": "text", + "content": " ensures the softmax-normalized features are closely matched, and the tanh loss " + }, + { + "bbox": [ + 313, + 366, + 554, + 475 + ], + "type": "inline_equation", + "content": "(L_T)" + }, + { + "bbox": [ + 313, + 366, + 554, + 475 + ], + "type": "text", + "content": " prevents trivial solutions by promoting diversity in prototype assignments. The final objective of our pre-training phase of AdvPatchXAI is: " + }, + { + "bbox": [ + 313, + 366, + 554, + 475 + ], + "type": "inline_equation", + "content": "\\lambda_{PD}L_{PD} + \\lambda_{A}L_{A} + \\lambda_{T}L_{T}" + }, + { + "bbox": [ + 313, + 366, + 554, + 475 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 480, + 447, + 493 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 480, + 447, + 493 + ], + "spans": [ + { + "bbox": [ + 313, + 480, + 447, + 493 + ], + "type": "text", + "content": "3.3. Training AdvPatchXAI" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 498, + 555, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 498, + 555, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 498, + 555, + 715 + ], + "type": "text", + "content": "After the pretraining phase of the prototypes, the patch presence score tensor, " + }, + { + "bbox": [ + 313, + 498, + 555, + 715 + ], + "type": "inline_equation", + "content": "p" + }, + { + "bbox": [ + 313, + 498, + 555, + 715 + ], + "type": "text", + "content": ", is fed into a linear classification layer having non-negative weights, " + }, + { + "bbox": [ + 313, + 498, + 555, + 715 + ], + "type": "inline_equation", + "content": "w_{c} \\in \\mathbb{R}^{(D \\times C)} \\geq 0" + }, + { + "bbox": [ + 313, + 498, + 555, + 715 + ], + "type": "text", + "content": ". The layer thus looks only for positive evidence for the class to which an input image belongs. These weights connect prototypes to classes " + }, + { + "bbox": [ + 313, + 498, + 555, + 715 + ], + "type": "inline_equation", + "content": "(C)" + }, + { + "bbox": [ + 313, + 498, + 555, + 715 + ], + "type": "text", + "content": " to ensure that the presence of prototypical parts contributes positively to the evidence for their associated class, enhancing interpretability. The bias term adjusts the classifier's decision threshold and is independent of the prototype's contributions. This separation ensures that the model remains interpretable, as each prototype's influence on the decision is straightforward and non-contradictory. The output score for each class is calculated by summing the element-wise product of the presence scores and the corresponding class weights from the linear layer. We incorporate a classification loss term, " + }, + { + "bbox": [ + 313, + 498, + 555, + 715 + ], + "type": "inline_equation", + "content": "L_{C}" + }, + { + "bbox": [ + 313, + 498, + 555, + 715 + ], + "type": "text", + "content": ", to optimize model performance. 
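Before defining L_C, a minimal sketch of the scoring layer just described may help; the class name is ours, and clamping is only one plausible way to enforce non-negativity, since the paper does not spell out the mechanism. Sparsity of the weights would additionally have to be encouraged, e.g., through regularization.

```python
import torch
import torch.nn as nn

class PrototypeScoringHead(nn.Module):
    """Hedged sketch: non-negative linear map from D presence scores to C classes."""
    def __init__(self, num_prototypes, num_classes):
        super().__init__()
        self.weight = nn.Parameter(torch.rand(num_prototypes, num_classes))
        self.bias = nn.Parameter(torch.zeros(num_classes))

    def forward(self, p):                  # p: (B, D) presence scores in [0, 1]
        w = self.weight.clamp(min=0.0)     # presence can only add class evidence
        return p @ w + self.bias           # class score = sum_d p_d * w_{d,c}
```

We return to the classification loss L_C next.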
This loss is calculated as the standard negative log-likelihood between the predicted" + } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "text", + "content": "30390" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 294, + 145 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 294, + 145 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 294, + 145 + ], + "type": "text", + "content": "class probabilities " + }, + { + "bbox": [ + 55, + 72, + 294, + 145 + ], + "type": "inline_equation", + "content": "\\hat{y}" + }, + { + "bbox": [ + 55, + 72, + 294, + 145 + ], + "type": "text", + "content": " and the one-hot encoded ground truth label " + }, + { + "bbox": [ + 55, + 72, + 294, + 145 + ], + "type": "inline_equation", + "content": "y" + }, + { + "bbox": [ + 55, + 72, + 294, + 145 + ], + "type": "text", + "content": ". While " + }, + { + "bbox": [ + 55, + 72, + 294, + 145 + ], + "type": "inline_equation", + "content": "L_{C}" + }, + { + "bbox": [ + 55, + 72, + 294, + 145 + ], + "type": "text", + "content": " primarily influences the weights of the linear layer, it also fine-tunes the prototypes to discriminate features relevant to the classification task better. The overall objective for the second training phase of AdvPatchXAI is: " + }, + { + "bbox": [ + 55, + 72, + 294, + 145 + ], + "type": "inline_equation", + "content": "\\lambda_{PD}L_{PD} + \\lambda_{A}L_{A} + \\lambda_{T}L_{T} + \\lambda_{C}L_{C}" + }, + { + "bbox": [ + 55, + 72, + 294, + 145 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 155, + 246, + 168 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 155, + 246, + 168 + ], + "spans": [ + { + "bbox": [ + 55, + 155, + 246, + 168 + ], + "type": "text", + "content": "4. Experimental Result and Analysis" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 175, + 296, + 234 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 175, + 296, + 234 + ], + "spans": [ + { + "bbox": [ + 55, + 175, + 296, + 234 + ], + "type": "text", + "content": "The dataset used in this research for evaluating the proposed and existing defenses is discussed in subsection 4.1, followed by the implementation details of the proposed algorithm. Finally, the analysis of the proposed defense algorithm is discussed in detail." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 241, + 296, + 253 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 241, + 296, + 253 + ], + "spans": [ + { + "bbox": [ + 55, + 241, + 296, + 253 + ], + "type": "text", + "content": "4.1. Out-of-Distribution Adversarial Patch Dataset" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 258, + 296, + 605 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 258, + 296, + 605 + ], + "spans": [ + { + "bbox": [ + 55, + 258, + 296, + 605 + ], + "type": "text", + "content": "Following the procedure outlined by Kumar & Agarwal [31], this paper introduces two datasets focusing on physical adversarial patches and natural noises using ImageNet and COCO datasets. 
Images are attacked with 10 different styles of physical adversarial patches and three types of natural noises (Gaussian, shot, and impulse noise). For COCO, we randomly selected 2,000 clean images from the validation set. These images served as a clean subset. Another set of 2,000 images from the COCO validation set is selected, and 10 different adversarial patches are applied to each, generating 20,000 adversarial patch images. Clean and patched images are divided into training and testing sets in a 3:2 ratio. For example, 2,000 clean images of COCO and 2,000 images with a single patch are divided into 800 test images and 1,200 train images. Also, applying three natural noises resulted in 2,400 noisy test images for both clean and patched sets. Since our primary focus is generalizability, we use ImageNet to test our model when trained on the COCO patched train subset. For ImageNet, we randomly selected 800 clean images from the validation set. Another set of 800 images is selected, and 10 different adversarial patches are applied to each image, generating 8,000 adversarial patch images. Again, applying the three types of natural noise gives 2,400 noisy images for the clean and patched test sets. In total, the dataset includes 2,800 clean images, 28,000 adversarial patch images (20,000 from COCO and 8,000 from ImageNet), 4,800 noisy test images, and 48,000 images with both adversarial patches and natural noise." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 613, + 188, + 624 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 613, + 188, + 624 + ], + "spans": [ + { + "bbox": [ + 55, + 613, + 188, + 624 + ], + "type": "text", + "content": "4.2. Implementation Details" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 629, + 294, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 629, + 294, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 629, + 294, + 713 + ], + "type": "text", + "content": "We utilize three convolutional backbones in AdvPatchXAI: ResNet50 [22] (R), ConvNext-tiny [36] (C), and MobileNetV2 [48] (M). A transformer-based network ViT [14] is also incorporated. 
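As a concrete illustration of the backbone preparation described next (reducing the strides of the last stages so that the feature grid grows from 7 x 7 to 28 x 28), here is a hedged torchvision sketch for ResNet50; exactly which layers carry the stride is our assumption.

```python
import torch
from torchvision.models import resnet50

# Hedged sketch: pre-trained ResNet50 whose last two stages use stride 1
# instead of 2, enlarging the final feature grid from 7x7 to 28x28 at 224x224.
backbone = resnet50(weights="IMAGENET1K_V2")
for stage in (backbone.layer3, backbone.layer4):
    stage[0].conv2.stride = (1, 1)             # bottleneck conv carrying the stride
    stage[0].downsample[0].stride = (1, 1)     # matching shortcut projection
features = torch.nn.Sequential(*list(backbone.children())[:-2])  # drop pool/fc
print(features(torch.randn(1, 3, 224, 224)).shape)  # -> (1, 2048, 28, 28)
```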
The pre-trained models are used, but the strides of the last layers are modified from 2 to 1 to increase the width " + }, + { + "bbox": [ + 55, + 629, + 294, + 713 + ], + "type": "inline_equation", + "content": "(W)" + }, + { + "bbox": [ + 55, + 629, + 294, + 713 + ], + "type": "text", + "content": " and height " + }, + { + "bbox": [ + 55, + 629, + 294, + 713 + ], + "type": "inline_equation", + "content": "(H)" + }, + { + "bbox": [ + 55, + 629, + 294, + 713 + ], + "type": "text", + "content": " of the output feature maps (from " + }, + { + "bbox": [ + 55, + 629, + 294, + 713 + ], + "type": "inline_equation", + "content": "7 \\times 7" + }, + { + "bbox": [ + 55, + 629, + 294, + 713 + ], + "type": "text", + "content": " to " + }, + { + "bbox": [ + 55, + 629, + 294, + 713 + ], + "type": "inline_equation", + "content": "28 \\times 28" + }, + { + "bbox": [ + 55, + 629, + 294, + 713 + ], + "type": "text", + "content": " for ResNet and MobileNet," + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 313, + 72, + 553, + 228 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 72, + 553, + 228 + ], + "spans": [ + { + "bbox": [ + 313, + 72, + 553, + 228 + ], + "type": "inline_equation", + "content": "26 \\times 26" + }, + { + "bbox": [ + 313, + 72, + 553, + 228 + ], + "type": "text", + "content": " for ConvNext, and reshaped the output feature into " + }, + { + "bbox": [ + 313, + 72, + 553, + 228 + ], + "type": "inline_equation", + "content": "14 \\times 14" + }, + { + "bbox": [ + 313, + 72, + 553, + 228 + ], + "type": "text", + "content": " for ViT). This adjustment results in a finer-grained patch grid " + }, + { + "bbox": [ + 313, + 72, + 553, + 228 + ], + "type": "inline_equation", + "content": "z" + }, + { + "bbox": [ + 313, + 72, + 553, + 228 + ], + "type": "text", + "content": ", improving patch similarity optimization. The backbone " + }, + { + "bbox": [ + 313, + 72, + 553, + 228 + ], + "type": "inline_equation", + "content": "f" + }, + { + "bbox": [ + 313, + 72, + 553, + 228 + ], + "type": "text", + "content": " is fine-tuned using Adam with a learning rate 0.0005 and a cosine annealing schedule. The linear layer is trained with a learning rate of 0.05. Loss weights are set to " + }, + { + "bbox": [ + 313, + 72, + 553, + 228 + ], + "type": "inline_equation", + "content": "\\lambda_{C} = \\lambda_{T} = 2" + }, + { + "bbox": [ + 313, + 72, + 553, + 228 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 72, + 553, + 228 + ], + "type": "inline_equation", + "content": "\\lambda_{PD} = \\lambda_{A} = 5" + }, + { + "bbox": [ + 313, + 72, + 553, + 228 + ], + "type": "text", + "content": ". Prototypes are trained for 10 epochs, followed by training AdvPatchXAI for an additional 60 epochs. Images are resized to " + }, + { + "bbox": [ + 313, + 72, + 553, + 228 + ], + "type": "inline_equation", + "content": "224 \\times 224" + }, + { + "bbox": [ + 313, + 72, + 553, + 228 + ], + "type": "text", + "content": " and augmented with TrivialAugment [40] in two stages: the first stage is related to location, and the second stage is related to color transformation (grayscale). Experiments are performed with seed value one to ensure reproducibility." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 234, + 409, + 247 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 234, + 409, + 247 + ], + "spans": [ + { + "bbox": [ + 313, + 234, + 409, + 247 + ], + "type": "text", + "content": "4.3. 
Result Analysis" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 251, + 553, + 443 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 251, + 553, + 443 + ], + "spans": [ + { + "bbox": [ + 313, + 251, + 553, + 443 + ], + "type": "text", + "content": "We evaluated our proposed method, AdvPatchXAI, employed various backbone DNNs using a comprehensive set of experiments. We have trained our network on the COCO training subset for all 10 patches separately and evaluated them under several generalized zero-shot settings. These settings included scenarios with seen datasets with unseen patches and unseen datasets with unseen patches, both with and without natural noises. The effectiveness of AdvPatchXAI can be assessed through two key metrics: the robustness of the chosen backbone DNN and the success of the training patch in identifying unseen patches. The presence of natural noises during the testing ensures the detector's robustness in a black-box setting. To further strengthen the effectiveness of the proposed defense algorithm, we have evaluated its resiliency against adaptive attacks (discussed in supplementary)." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 447, + 555, + 471 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 447, + 555, + 471 + ], + "spans": [ + { + "bbox": [ + 313, + 447, + 555, + 471 + ], + "type": "text", + "content": "4.3.1. Zero-Shot OOD Patch Detection in the Absence of Noise" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 475, + 553, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 475, + 553, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 475, + 553, + 713 + ], + "type": "text", + "content": "Table 1 and Table 2 present the average classification accuracy (Mean) of our proposed AdvPatchXAI on COCO and ImageNet datasets, respectively, combined with OOD unseen patches in silent (without noise) settings. Based on our findings, it can be seen that AdvPatchXAI-R (i.e., with backbone R) achieved the highest mean accuracy of " + }, + { + "bbox": [ + 313, + 475, + 553, + 713 + ], + "type": "inline_equation", + "content": "99.33\\%" + }, + { + "bbox": [ + 313, + 475, + 553, + 713 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 475, + 553, + 713 + ], + "type": "inline_equation", + "content": "98.94\\%" + }, + { + "bbox": [ + 313, + 475, + 553, + 713 + ], + "type": "text", + "content": ", particularly with Patch-9 on COCO and ImageNet datasets, respectively, demonstrating exceptional performance and robustness. AdvPatchXAI-ViT (i.e., with backbone ViT) also performed well, especially for Patch-4 on COCO " + }, + { + "bbox": [ + 313, + 475, + 553, + 713 + ], + "type": "inline_equation", + "content": "96.42\\%" + }, + { + "bbox": [ + 313, + 475, + 553, + 713 + ], + "type": "text", + "content": " and Patch-0 on ImageNet " + }, + { + "bbox": [ + 313, + 475, + 553, + 713 + ], + "type": "inline_equation", + "content": "94.31\\%" + }, + { + "bbox": [ + 313, + 475, + 553, + 713 + ], + "type": "text", + "content": ", showing reliability with moderate standard deviation (SD) (see supplementary for more detail) but found more vulnerable on Patch-5. However, AdvPatchXAI-M (MobileNet) and AdvPatchXAI-C (ConvNeXt) exhibited higher variability, with significant accuracy drops for specific patches such as Patch-3. Figure 2 gives a clearer picture of the robustness of AdvPatchXAI-R and AdvPatchXAI-ViT on both the COCO and ImageNet datasets. 
For example, the performance of AdvPatchXAI-R on COCO and ImageNet, " + }, + { + "bbox": [ + 313, + 475, + 553, + 713 + ], + "type": "inline_equation", + "content": "91.8\\%" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "text", + "content": "30391" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 57, + 116, + 550, + 181 + ], + "blocks": [ + { + "bbox": [ + 55, + 70, + 555, + 115 + ], + "lines": [ + { + "bbox": [ + 55, + 70, + 555, + 115 + ], + "spans": [ + { + "bbox": [ + 55, + 70, + 555, + 115 + ], + "type": "text", + "content": "Table 1. Adversarial patch detection accuracy of the proposed AdvPatchXAI with different backbone on COCO (seen dataset) subset in several generalized settings such as Silent (unseen patch without any noise), Noisy (unseen patch+noise). The results are reported as mean. Patch-{0-9}\\{1} indicates models are trained on Patch-1 and tested on all other patches except Patch-1. R, M, ViT, and C represent ResNet50, MobileNet, Vision Transformer, and ConvNeXt backbones, respectively. The best mean values are highlighted." + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 57, + 116, + 550, + 181 + ], + "lines": [ + { + "bbox": [ + 57, + 116, + 550, + 181 + ], + "spans": [ + { + "bbox": [ + 57, + 116, + 550, + 181 + ], + "type": "table", + "html": "
MethodTestPatch-{0-9}\\{0}Patch-{0-9}\\{1}Patch-{0-9}\\{2}Patch-{0-9}\\{3}Patch-{0-9}\\{4}Patch-{0-9}\\{5}Patch-{0-9}\\{6}Patch-{0-9}\\{7}Patch-{0-9}\\{8}Patch-{0-9}\\{9}
AdvPatchXAI-RSilent94.0995.5784.6983.3097.9979.6585.5396.9194.7699.33
Noisy85.8188.8678.0772.0196.0969.0578.9594.5983.7995.92
AdvPatchXAI-MSilent94.1797.4577.3364.8898.4772.2893.9098.6192.9097.24
Noisy76.0970.8256.9150.2489.6351.3470.4583.6571.1368.11
AdvPatchXAI-ViTSilent95.8495.1993.2081.7496.4267.2789.0893.9089.0094.32
Noisy92.3192.6188.9178.9192.9564.5785.7490.5280.5693.13
AdvPatchXAI-CSilent90.0885.2986.0875.9496.2782.4278.6387.1892.7395.28
Noisy79.1679.8874.6264.6986.5759.7272.7471.0976.4184.06
", + "image_path": "c4975fa67b7054f405952c701bfd7f6943de2a9fbbf4a6c31de04b8be9efe9de.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 189, + 294, + 250 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 189, + 294, + 250 + ], + "spans": [ + { + "bbox": [ + 55, + 189, + 294, + 250 + ], + "type": "text", + "content": "and " + }, + { + "bbox": [ + 55, + 189, + 294, + 250 + ], + "type": "inline_equation", + "content": "90.91\\%" + }, + { + "bbox": [ + 55, + 189, + 294, + 250 + ], + "type": "text", + "content": ", respectively, is at least ranges in " + }, + { + "bbox": [ + 55, + 189, + 294, + 250 + ], + "type": "inline_equation", + "content": "[2.2 - 4.81]\\%" + }, + { + "bbox": [ + 55, + 189, + 294, + 250 + ], + "type": "text", + "content": " better than AdvPatchXAI with other backbones on both datasets. It can be seen that there is only a minor difference " + }, + { + "bbox": [ + 55, + 189, + 294, + 250 + ], + "type": "inline_equation", + "content": "[0.66 - 1.95]\\%" + }, + { + "bbox": [ + 55, + 189, + 294, + 250 + ], + "type": "text", + "content": " in performance when both the dataset and patches are unknown to the model." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 255, + 295, + 280 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 255, + 295, + 280 + ], + "spans": [ + { + "bbox": [ + 55, + 255, + 295, + 280 + ], + "type": "text", + "content": "4.3.2. Zero-Shot OOD Patch Detection Under Noise Perturbation" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 54, + 282, + 295, + 376 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 282, + 295, + 376 + ], + "spans": [ + { + "bbox": [ + 54, + 282, + 295, + 376 + ], + "type": "text", + "content": "To demonstrate the robustness capabilities of our defense, we evaluated its resiliency when trained on clean images (without exposure to any noise during training) using COCO subsets. This approach is crucial because natural noises are inherent in the environment [4], and training on every possible type of noise is impractical. Therefore, detectors must be resilient enough to handle unseen natural noises." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 56, + 378, + 296, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 378, + 296, + 714 + ], + "spans": [ + { + "bbox": [ + 56, + 378, + 296, + 714 + ], + "type": "text", + "content": "The resiliency results of AdvPatchXAI in unseen patch detection in noisy (applying all three noise Gaussian, Shot, and Impulse with severity " + }, + { + "bbox": [ + 56, + 378, + 296, + 714 + ], + "type": "inline_equation", + "content": "= 2" + }, + { + "bbox": [ + 56, + 378, + 296, + 714 + ], + "type": "text", + "content": " separately on test subset and taking the average patch wise) settings on COCO and ImageNet datasets are shown in Figure 2. While a drop in detection performance is expected, AdvPatchXAI-ViT exhibits only a marginal reduction. 
For example, its performance drops from " + }, + { + "bbox": [ + 56, + 378, + 296, + 714 + ], + "type": "inline_equation", + "content": "89.6\\%" + }, + { + "bbox": [ + 56, + 378, + 296, + 714 + ], + "type": "text", + "content": " to " + }, + { + "bbox": [ + 56, + 378, + 296, + 714 + ], + "type": "inline_equation", + "content": "86.02\\%" + }, + { + "bbox": [ + 56, + 378, + 296, + 714 + ], + "type": "text", + "content": " on COCO and from " + }, + { + "bbox": [ + 56, + 378, + 296, + 714 + ], + "type": "inline_equation", + "content": "87.65\\%" + }, + { + "bbox": [ + 56, + 378, + 296, + 714 + ], + "type": "text", + "content": " to " + }, + { + "bbox": [ + 56, + 378, + 296, + 714 + ], + "type": "inline_equation", + "content": "83.97\\%" + }, + { + "bbox": [ + 56, + 378, + 296, + 714 + ], + "type": "text", + "content": " on ImageNet. In contrast, other backbones suffer more significant decreases in accuracy. For instance, the performance of AdvPatchXAI-R drops by " + }, + { + "bbox": [ + 56, + 378, + 296, + 714 + ], + "type": "inline_equation", + "content": "7.49\\%" + }, + { + "bbox": [ + 56, + 378, + 296, + 714 + ], + "type": "text", + "content": " on COCO and " + }, + { + "bbox": [ + 56, + 378, + 296, + 714 + ], + "type": "inline_equation", + "content": "7.73\\%" + }, + { + "bbox": [ + 56, + 378, + 296, + 714 + ], + "type": "text", + "content": " on ImageNet, whereas the accuracy of AdvPatchXAI-M drops by " + }, + { + "bbox": [ + 56, + 378, + 296, + 714 + ], + "type": "inline_equation", + "content": "19.88\\%" + }, + { + "bbox": [ + 56, + 378, + 296, + 714 + ], + "type": "text", + "content": " on COCO and " + }, + { + "bbox": [ + 56, + 378, + 296, + 714 + ], + "type": "inline_equation", + "content": "19.47\\%" + }, + { + "bbox": [ + 56, + 378, + 296, + 714 + ], + "type": "text", + "content": " on ImageNet. Tables 1 and 2 provide detailed average 10-fold cross-validation performance in the unseen patch + noise evaluation setting for COCO and ImageNet datasets, respectively. Notably, in zero-shot evaluations where images (clean and patched) are perturbed, AdvPatchXAI with ViT outperforms other backbones by a significant margin. Patch-4 proves to be more effective in 9 out of 16 evaluations, exhibiting higher mean accuracy. The performance difference between AdvPatchXAI-ViT with the best-performing patch and with Patch-4 is less than " + }, + { + "bbox": [ + 56, + 378, + 296, + 714 + ], + "type": "inline_equation", + "content": "1.4\\%" + }, + { + "bbox": [ + 56, + 378, + 296, + 714 + ], + "type": "text", + "content": " on both datasets and in each setting (silent, noisy), yet Patch-4 shows very low SD (see supplementary for more detail), indicating its higher effectiveness. Extensive experimental evaluation reveals that AdvPatchXAI-ViT with Patch-4 generalizes well in unseen patch settings and maintains high resiliency when images" + } + ] + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 317, + 188, + 552, + 290 + ], + "blocks": [ + { + "bbox": [ + 317, + 188, + 552, + 290 + ], + "lines": [ + { + "bbox": [ + 317, + 188, + 552, + 290 + ], + "spans": [ + { + "bbox": [ + 317, + 188, + 552, + 290 + ], + "type": "image", + "image_path": "31943b222b2e80e932f30eb179d708ca89e5be58e983a359f22bb6ea15b3fb6c.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 313, + 297, + 555, + 331 + ], + "lines": [ + { + "bbox": [ + 313, + 297, + 555, + 331 + ], + "spans": [ + { + "bbox": [ + 313, + 297, + 555, + 331 + ], + "type": "text", + "content": "Figure 2. 
Comparison with SOTA in terms of average adversarial patch detection accuracy for unseen patch and unseen patch + unseen noise detection on both COCO and ImageNet datasets." + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 6 + }, + { + "bbox": [ + 313, + 336, + 555, + 373 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 336, + 555, + 373 + ], + "spans": [ + { + "bbox": [ + 313, + 336, + 555, + 373 + ], + "type": "text", + "content": "are perturbed by noise. Therefore, in real-world applications, we recommend using AdvPatchXAI-ViT with Patch-4 to defend against adversarial patches." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 381, + 550, + 397 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 381, + 550, + 397 + ], + "spans": [ + { + "bbox": [ + 313, + 381, + 550, + 397 + ], + "type": "text", + "content": "5. Explainability and Comparison with SOTA" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 402, + 554, + 462 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 402, + 554, + 462 + ], + "spans": [ + { + "bbox": [ + 313, + 402, + 554, + 462 + ], + "type": "text", + "content": "To demonstrate the effectiveness of the proposed patch detector, we have performed an extensive comparison with state-of-the-art algorithms, which we are going to discuss first. Then, we will discuss the explainability of the proposed approach." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 468, + 512, + 481 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 468, + 512, + 481 + ], + "spans": [ + { + "bbox": [ + 313, + 468, + 512, + 481 + ], + "type": "text", + "content": "5.1. Comparison with SOTA and Baseline" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 486, + 555, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 486, + 555, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 486, + 555, + 715 + ], + "type": "text", + "content": "To demonstrate the effectiveness of our proposed model, we compared it with recent state-of-the-art (SOTA) methods: Ojaswee et al. [43] and Kumar & Agarwal [31]. For a fair comparison, we followed the same experimental protocol we used to evaluate the proposed algorithm. As shown in Figure 2, our method outperforms all SOTA methods except in the case of AdvPatchXAI-M on both the COCO and ImageNet datasets when clean and patched images are perturbed with noise. Specifically, our AdvPatchXAI-R exceeds the performance of [43] by " + }, + { + "bbox": [ + 313, + 486, + 555, + 715 + ], + "type": "inline_equation", + "content": "13.35\\%" + }, + { + "bbox": [ + 313, + 486, + 555, + 715 + ], + "type": "text", + "content": " and [31] by " + }, + { + "bbox": [ + 313, + 486, + 555, + 715 + ], + "type": "inline_equation", + "content": "11.9\\%" + }, + { + "bbox": [ + 313, + 486, + 555, + 715 + ], + "type": "text", + "content": " on the COCO dataset. On the ImageNet dataset, AdvPatchXAI-R outperforms [43] by " + }, + { + "bbox": [ + 313, + 486, + 555, + 715 + ], + "type": "inline_equation", + "content": "12.03\\%" + }, + { + "bbox": [ + 313, + 486, + 555, + 715 + ], + "type": "text", + "content": " and [31] by " + }, + { + "bbox": [ + 313, + 486, + 555, + 715 + ], + "type": "inline_equation", + "content": "14.01\\%" + }, + { + "bbox": [ + 313, + 486, + 555, + 715 + ], + "type": "text", + "content": " when clean and unseen patched images are used for evaluation. 
Moreover, when clean and patched images are perturbed with noise, AdvPatchXAI-ViT surpasses [43] by " + }, + { + "bbox": [ + 313, + 486, + 555, + 715 + ], + "type": "inline_equation", + "content": "13.64\\%" + }, + { + "bbox": [ + 313, + 486, + 555, + 715 + ], + "type": "text", + "content": " and [31] by " + }, + { + "bbox": [ + 313, + 486, + 555, + 715 + ], + "type": "inline_equation", + "content": "12.22\\%" + }, + { + "bbox": [ + 313, + 486, + 555, + 715 + ], + "type": "text", + "content": " on the COCO dataset. On the ImageNet dataset, AdvPatchXAI-ViT outperforms [43] by " + }, + { + "bbox": [ + 313, + 486, + 555, + 715 + ], + "type": "inline_equation", + "content": "13.33\\%" + }, + { + "bbox": [ + 313, + 486, + 555, + 715 + ], + "type": "text", + "content": " and [31] by " + }, + { + "bbox": [ + 313, + 486, + 555, + 715 + ], + "type": "inline_equation", + "content": "12.67\\%" + }, + { + "bbox": [ + 313, + 486, + 555, + 715 + ], + "type": "text", + "content": ". Our proposed model, based on prototypical parts, is unique in its approach to OOD adver" + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "text", + "content": "30392" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 58, + 116, + 550, + 181 + ], + "blocks": [ + { + "bbox": [ + 55, + 70, + 555, + 115 + ], + "lines": [ + { + "bbox": [ + 55, + 70, + 555, + 115 + ], + "spans": [ + { + "bbox": [ + 55, + 70, + 555, + 115 + ], + "type": "text", + "content": "Table 2. Adversarial patch detection accuracy of the proposed AdvPatchXAI with different backbone on ImageNet (unseen dataset) subset in several generalized settings such as Silent (unseen patch without any noise), Noisy (unseen patch+noise). The results are reported as mean. Patch-{0-9} \\{3} indicates models are trained on Patch-3 and tested on all other patches except Patch-3. R, M, ViT, and C represent ResNet50, MobileNet, Vision Transformer, and ConvNeXt backbones, respectively. The best mean values are highlighted." + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 58, + 116, + 550, + 181 + ], + "lines": [ + { + "bbox": [ + 58, + 116, + 550, + 181 + ], + "spans": [ + { + "bbox": [ + 58, + 116, + 550, + 181 + ], + "type": "table", + "html": "
MethodTestPatch-{0-}\\{0}Patch-{0-}\\{1}Patch-{0-}\\{2}Patch-{0-}\\{3}Patch-{0-}\\{4}Patch-{0-}\\{5}Patch-{0-}\\{6}Patch-{0-}\\{7}Patch-{0-}\\{8}Patch-{0-}\\{9}
AdvPatchXAI-RSilent92.9695.1284.4982.5698.1779.4084.6897.1895.6098.94
Noisy84.3988.5177.8171.4295.6768.3478.3093.7278.8894.76
AdvPatchXAI-MSilent93.6895.2877.3064.1395.7170.7291.9396.8191.9095.92
Noisy75.1070.1656.8850.2287.7451.1569.7382.0068.5567.21
AdvPatchXAI-ViTSilent94.3192.6792.0381.3792.9466.4186.7191.7787.3790.94
Noisy90.1890.1487.9878.8889.1964.2283.7387.1978.1690.04
AdvPatchXAI-CSilent89.4185.2685.1275.9196.2381.2177.3086.6791.5394.65
Noisy78.1479.5573.2963.6685.2059.1371.5470.1675.2183.41
", + "image_path": "519cf6a10e8509a0fa02ebd462317148f14ed926ec89ca19a7d126ffae92e577.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 64, + 185, + 289, + 337 + ], + "blocks": [ + { + "bbox": [ + 64, + 185, + 289, + 337 + ], + "lines": [ + { + "bbox": [ + 64, + 185, + 289, + 337 + ], + "spans": [ + { + "bbox": [ + 64, + 185, + 289, + 337 + ], + "type": "image", + "image_path": "570ab6025cc4a362d9a1d19834b7d40e6b32f91ac3f74eca5ee847cfa0dc827e.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 55, + 341, + 296, + 397 + ], + "lines": [ + { + "bbox": [ + 55, + 341, + 296, + 397 + ], + "spans": [ + { + "bbox": [ + 55, + 341, + 296, + 397 + ], + "type": "text", + "content": "Figure 3. t-SNE visualization of feature space of proposed AdvPatchXAI with backbone ViT and ConvNeXt trained on the most effective patch, Patch-4, and tested on other patches, Patch-0, Patch-1, and Patch-2. Red and blue represent patched and real class, respectively." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "bbox": [ + 54, + 408, + 295, + 576 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 408, + 295, + 576 + ], + "spans": [ + { + "bbox": [ + 54, + 408, + 295, + 576 + ], + "type": "text", + "content": "sarial patch detection. To demonstrate that, we compared it to PIP-Net [41], a recent prototypical-parts-based model that is the closest architecture to our proposed method. Since PIP-Net is designed for a convolutional backbone, we integrated ViT into PIP-Net and evaluated results on the COCO subset in an unseen patch (silent) setting. The average patch detection accuracy of PIP-Net with backbone R, M, and ViT are " + }, + { + "bbox": [ + 54, + 408, + 295, + 576 + ], + "type": "inline_equation", + "content": "87.07\\%" + }, + { + "bbox": [ + 54, + 408, + 295, + 576 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 54, + 408, + 295, + 576 + ], + "type": "inline_equation", + "content": "81.82\\%" + }, + { + "bbox": [ + 54, + 408, + 295, + 576 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 54, + 408, + 295, + 576 + ], + "type": "inline_equation", + "content": "89.32\\%" + }, + { + "bbox": [ + 54, + 408, + 295, + 576 + ], + "type": "text", + "content": ", respectively. In comparison, our AdvPatchXAI method achieved accuracies of " + }, + { + "bbox": [ + 54, + 408, + 295, + 576 + ], + "type": "inline_equation", + "content": "91.8\\%" + }, + { + "bbox": [ + 54, + 408, + 295, + 576 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 54, + 408, + 295, + 576 + ], + "type": "inline_equation", + "content": "88.72\\%" + }, + { + "bbox": [ + 54, + 408, + 295, + 576 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 54, + 408, + 295, + 576 + ], + "type": "inline_equation", + "content": "89.6\\%" + }, + { + "bbox": [ + 54, + 408, + 295, + 576 + ], + "type": "text", + "content": " with the identical respective backbones. 
In other words, the proposed AdvPatchXAI shows " + }, + { + "bbox": [ + 54, + 408, + 295, + 576 + ], + "type": "inline_equation", + "content": "4.73\\%" + }, + { + "bbox": [ + 54, + 408, + 295, + 576 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 54, + 408, + 295, + 576 + ], + "type": "inline_equation", + "content": "6.9\\%" + }, + { + "bbox": [ + 54, + 408, + 295, + 576 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 54, + 408, + 295, + 576 + ], + "type": "inline_equation", + "content": "0.28\\%" + }, + { + "bbox": [ + 54, + 408, + 295, + 576 + ], + "type": "text", + "content": " improvement over the baseline PIP-Net with backbone R, M, and ViT, respectively." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 577, + 296, + 660 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 577, + 296, + 660 + ], + "spans": [ + { + "bbox": [ + 55, + 577, + 296, + 660 + ], + "type": "text", + "content": "Apart from the above datasets, we have benchmarked our proposed model on other standard datasets, CUB-200-2011 [55] and Stanford Cars [30] (results discussed in the supplementary). These results highlight the robustness and effectiveness of our proposed model in detecting adversarial patches, particularly when utilizing ViT and ResNet backbones." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 671, + 204, + 684 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 671, + 204, + 684 + ], + "spans": [ + { + "bbox": [ + 55, + 671, + 204, + 684 + ], + "type": "text", + "content": "5.2. Explainable AdvPatchXAI" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 689, + 296, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 689, + 296, + 715 + ], + "spans": [ + { + "bbox": [ + 55, + 689, + 296, + 715 + ], + "type": "text", + "content": "While quantitative analysis is essential for assessing a model's effectiveness and robustness, explainability is" + } + ] + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 317, + 185, + 553, + 427 + ], + "blocks": [ + { + "bbox": [ + 317, + 185, + 553, + 427 + ], + "lines": [ + { + "bbox": [ + 317, + 185, + 553, + 427 + ], + "spans": [ + { + "bbox": [ + 317, + 185, + 553, + 427 + ], + "type": "image", + "image_path": "cf1dc040629defedebb46d9d81b4b42a389e5a3c4e98a56e4df1bacdfd645dab.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 313, + 429, + 555, + 485 + ], + "lines": [ + { + "bbox": [ + 313, + 429, + 555, + 485 + ], + "spans": [ + { + "bbox": [ + 313, + 429, + 555, + 485 + ], + "type": "text", + "content": "Figure 4. Heat map visualization of the proposed AdvPatchXAI trained on Patch-4 and visualized on Patch-0 on the COCO dataset under both silent (without noise) and noisy (Gaussian noise with severity=2) settings. The same image is used under both silent and noisy settings for a fair comparison." + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_caption" + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 510, + 555, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 510, + 555, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 510, + 555, + 715 + ], + "type": "text", + "content": "equally vital in supporting these findings. To address this, we performed several explainability analyses on our proposed AdvPatchXAI, including feature visualization, prototype prediction, prototype visualization, and Grad-CAM [49] visualization. 
Feature visualization helps in understanding the clusters formed by network features, leading to image classification or misclassification. Figure 3 shows the t-SNE [54] plot of the proposed model using an attention backbone (ViT) and a convolutional backbone (ConvNeXt) when trained on Patch-4 and tested on Patches 0 to 2. The clear separation of feature clusters supports the quantitative effectiveness of our proposed model. We also performed prototype prediction and visualization, along with Grad-CAM visualizations, as shown in Figure 4. The first column shows the unseen patch images and unseen patched+noisy image, and the second column highlights relevant prototypes within the image using yellow boxes, providing a" + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "text", + "content": "30393" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 56, + 116, + 301, + 168 + ], + "blocks": [ + { + "bbox": [ + 55, + 70, + 297, + 115 + ], + "lines": [ + { + "bbox": [ + 55, + 70, + 297, + 115 + ], + "spans": [ + { + "bbox": [ + 55, + 70, + 297, + 115 + ], + "type": "text", + "content": "Table 3. Mean adversarial patch detection accuracy of the proposed AdvPatchXAI across different color channels RGB, YCbCr, and grayscale with backbone ViT and ResNet50 on COCO under silent (unseen patch without any noise) setting." + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 56, + 116, + 301, + 168 + ], + "lines": [ + { + "bbox": [ + 56, + 116, + 301, + 168 + ], + "spans": [ + { + "bbox": [ + 56, + 116, + 301, + 168 + ], + "type": "table", + "html": "
Model (Channel)Patch- \\{0-9\\} \\\\{0}Patch- \\{0-9\\} \\\\{1}Patch- \\{0-9\\} \\\\{2}Patch- \\{0-9\\} \\\\{3}Patch- \\{0-9\\} \\\\{4}
ViT (RGB)64.7880.9073.1362.2274.11
ViT (YCbCr)65.5790.5084.3666.7278.44
ViT (Grayscale)92.3192.6188.9178.9192.95
R (RGB)63.7177.4961.4462.9581.26
R (YCbCr)70.5687.0179.6470.3378.27
R (Grayscale)85.8188.8678.0772.0196.09
", + "image_path": "0ee4ce8a1377700f4b10f3fccfad03bf1a009dbc96fa1866d58a14e8b1c96b29.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "bbox": [ + 54, + 182, + 297, + 339 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 182, + 297, + 339 + ], + "spans": [ + { + "bbox": [ + 54, + 182, + 297, + 339 + ], + "type": "text", + "content": "local explanation. The third column collects these relevant prototypes, further connecting them to the classes with sparse linear layers to predict the correct image class. It can be seen that when noise comes with a patched image, the prototype prediction is compromised, and the extracted prototypes are not as fine-grained and less relevant as clean images. The fourth column presents the Grad-CAM for each prototypical part, illustrating the image regions that the network focuses on while determining its class. This detailed insight into the model's decision-making process supports the robustness and effectiveness of AdvPatchXAI in detecting adversarial patches (further discussed in supplementary)." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 350, + 157, + 363 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 350, + 157, + 363 + ], + "spans": [ + { + "bbox": [ + 55, + 350, + 157, + 363 + ], + "type": "text", + "content": "6. Ablation Studies" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 370, + 296, + 431 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 370, + 296, + 431 + ], + "spans": [ + { + "bbox": [ + 55, + 370, + 296, + 431 + ], + "type": "text", + "content": "In this section, we have presented ablation studies highlighting the impact of color channels, real-world robustness, and different terms used in the loss function. To ensure fairness, experiments are conducted using the pre-defined experimental protocol." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 441, + 271, + 453 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 441, + 271, + 453 + ], + "spans": [ + { + "bbox": [ + 55, + 441, + 271, + 453 + ], + "type": "text", + "content": "6.1. Effects of Different Color Augmentations" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 458, + 297, + 578 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 458, + 297, + 578 + ], + "spans": [ + { + "bbox": [ + 55, + 458, + 297, + 578 + ], + "type": "text", + "content": "Table 3 showcases the advantage of converting images into grayscale. Since unseen patches and datasets can have diverse color distributions, having color information makes the system biased toward learning color information rather than focusing on patch information. We assert that suppressing non-useful information can make the system generalized, which can also be visible from the results. For example, when AdvPatchXAI-ViT is trained on Patch 4 using grayscale images, it yields at least " + }, + { + "bbox": [ + 55, + 458, + 297, + 578 + ], + "type": "inline_equation", + "content": "14.51\\%" + }, + { + "bbox": [ + 55, + 458, + 297, + 578 + ], + "type": "text", + "content": " higher accuracy than RGB & YCbCr color channel-trained models." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 587, + 296, + 612 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 587, + 296, + 612 + ], + "spans": [ + { + "bbox": [ + 55, + 587, + 296, + 612 + ], + "type": "text", + "content": "6.2. 
Physical-World Effectiveness and Robustness Against Adversarial Patch Attack" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 617, + 297, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 617, + 297, + 715 + ], + "spans": [ + { + "bbox": [ + 55, + 617, + 297, + 715 + ], + "type": "text", + "content": "We have further evaluated the robustness of AdvPatchXAI for real-world adaptation by printing and applying the adversarial patches in the real world, as demonstrated by Pintor et al. [44]. Our proposed defense algorithm is robust in handling attacks in the physical world and yields an average accuracy of " + }, + { + "bbox": [ + 55, + 617, + 297, + 715 + ], + "type": "inline_equation", + "content": "91.11\\%" + }, + { + "bbox": [ + 55, + 617, + 297, + 715 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 617, + 297, + 715 + ], + "type": "inline_equation", + "content": "88.88\\%" + }, + { + "bbox": [ + 55, + 617, + 297, + 715 + ], + "type": "text", + "content": " when the proposed ViT defense is trained with Patch 0 and Patch 4, respectively." + } + ] + } + ], + "index": 8 + }, + { + "type": "table", + "bbox": [ + 316, + 116, + 559, + 184 + ], + "blocks": [ + { + "bbox": [ + 313, + 70, + 555, + 115 + ], + "lines": [ + { + "bbox": [ + 313, + 70, + 555, + 115 + ], + "spans": [ + { + "bbox": [ + 313, + 70, + 555, + 115 + ], + "type": "text", + "content": "Table 4. Ablation study concerning loss terms, reflecting the total number of relevant prototypes with at least one non-zero weight present in our proposed AdvPatchXAI algorithm. The mean accuracy demonstrates the advantage of a combined loss function." + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 316, + 116, + 559, + 184 + ], + "lines": [ + { + "bbox": [ + 316, + 116, + 559, + 184 + ], + "spans": [ + { + "bbox": [ + 316, + 116, + 559, + 184 + ], + "type": "table", + "html": "<table><tr><td colspan=\"7\">
Number of Prototypes
MethodBackbonePatch-0Patch-1Patch-2Patch-3Patch-4Mean Acc. ↑
AdvPatchXAIR26322018220923291.13
ViT18716820316620392.48
{LPD, LA, LT, LC}M825579876588.46
AdvPatchXAIR37028425125234786.44
ViT19515116115216192.66
{LA, LT, LC}M67611009110177.71
", + "image_path": "6850e2e5b14fdfe821bce4a75e6d454b51996f8e61926f70e8409ceff8a762ef.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "table_body" + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 206, + 556, + 339 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 206, + 556, + 339 + ], + "spans": [ + { + "bbox": [ + 313, + 206, + 556, + 339 + ], + "type": "text", + "content": "We generated perturbation-based PGD patch attacks followed by [35, 58] for a fair comparison to evaluate the adversarial robustness of our proposed defense model. We wanted to highlight that the performance of our proposed AdvPatchXAI-R and AdvPatchXAI-ViT yields " + }, + { + "bbox": [ + 313, + 206, + 556, + 339 + ], + "type": "inline_equation", + "content": "98.04\\%" + }, + { + "bbox": [ + 313, + 206, + 556, + 339 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 206, + 556, + 339 + ], + "type": "inline_equation", + "content": "95.67\\%" + }, + { + "bbox": [ + 313, + 206, + 556, + 339 + ], + "type": "text", + "content": " average detection accuracy, respectively, in a silent setting when a patch attack based on PGD perturbation is available for evaluation. Even in noisy settings, our proposed network is better robust against PGD perturbation-based patch attacks with average detection performances of " + }, + { + "bbox": [ + 313, + 206, + 556, + 339 + ], + "type": "inline_equation", + "content": "95.55\\%" + }, + { + "bbox": [ + 313, + 206, + 556, + 339 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 206, + 556, + 339 + ], + "type": "inline_equation", + "content": "93.05\\%" + }, + { + "bbox": [ + 313, + 206, + 556, + 339 + ], + "type": "text", + "content": " with the same respective backbone." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 349, + 493, + 363 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 349, + 493, + 363 + ], + "spans": [ + { + "bbox": [ + 313, + 349, + 493, + 363 + ], + "type": "text", + "content": "6.3. Effect of Proposed Loss Function" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 367, + 556, + 452 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 367, + 556, + 452 + ], + "spans": [ + { + "bbox": [ + 313, + 367, + 556, + 452 + ], + "type": "text", + "content": "We further examine the impact of the proposed algorithm's different linear combinations of loss functions. The corresponding outcomes presented in Table 4 reveal that the linear combination of our proposed patch decorrelation loss " + }, + { + "bbox": [ + 313, + 367, + 556, + 452 + ], + "type": "inline_equation", + "content": "L_{PD}" + }, + { + "bbox": [ + 313, + 367, + 556, + 452 + ], + "type": "text", + "content": " with other loss functions improves interpretability. In other words, it reduces the number of relevant prototypes for a class and enhances the detection accuracy." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 465, + 388, + 479 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 465, + 388, + 479 + ], + "spans": [ + { + "bbox": [ + 313, + 465, + 388, + 479 + ], + "type": "text", + "content": "7. 
Conclusion" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 313, + 487, + 556, + 643 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 487, + 556, + 643 + ], + "spans": [ + { + "bbox": [ + 313, + 487, + 556, + 643 + ], + "type": "text", + "content": "In this research, we present AdvPatchXAI to significantly advance the development of a generalized, robust, and explainable adversarial patch detector. By incorporating prototypical parts and a novel patch decorrelation module, our model achieves unprecedented accuracy in physical adversarial patch detection, particularly in zero-shot settings with and without unseen noise perturbations. Furthermore, the explainable nature of AdvPatchXAI provides deep insights into the decision-making process, enabling better understanding and trust in AI systems. Future work will focus on enhancing the scalability of this approach and exploring its applicability to other forms of adversarial attacks, aiming to fortify AI defenses in increasingly complex applications." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 313, + 656, + 411, + 670 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 656, + 411, + 670 + ], + "spans": [ + { + "bbox": [ + 313, + 656, + 411, + 670 + ], + "type": "text", + "content": "Acknowledgement" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 313, + 677, + 555, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 677, + 555, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 677, + 555, + 715 + ], + "type": "text", + "content": "V. Kumar is partially supported through the Visvesvaraya PhD Fellowship. A. Agarwal is partially funded through the ANRF PMECRG grant of Govt. of India." + } + ] + } + ], + "index": 17 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "text", + "content": "30394" + } + ] + } + ], + "index": 18 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 115, + 83 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 72, + 115, + 83 + ], + "spans": [ + { + "bbox": [ + 56, + 72, + 115, + 83 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 57, + 91, + 296, + 703 + ], + "type": "list", + "angle": 0, + "index": 15, + "blocks": [ + { + "bbox": [ + 61, + 91, + 296, + 133 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 91, + 296, + 133 + ], + "spans": [ + { + "bbox": [ + 61, + 91, + 296, + 133 + ], + "type": "text", + "content": "[1] Akshay Agarwal, Nalini Ratha, Mayank Vatsa, and Richa Singh. Crafting adversarial perturbations via transformed image component swapping. IEEE Transactions on Image Processing, 31:7338-7349, 2022. 1" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 61, + 134, + 296, + 187 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 134, + 296, + 187 + ], + "spans": [ + { + "bbox": [ + 61, + 134, + 296, + 187 + ], + "type": "text", + "content": "[2] Akshay Agarwal, Richa Singh, Mayank Vatsa, and Nalini Ratha. Image transformation-based defense against adversarial perturbation on deep learning models. IEEE Transactions on Dependable and Secure Computing, 18(5):2106-2121, 2020. 
2" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 62, + 189, + 296, + 230 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 189, + 296, + 230 + ], + "spans": [ + { + "bbox": [ + 62, + 189, + 296, + 230 + ], + "type": "text", + "content": "[3] Akshay Agarwal, Richa Singh, Mayank Vatsa, and Nalini Ratha. IBAttack: Being cautious about data labels. IEEE Transactions on Artificial Intelligence, 4(6):1484-1493, 2022. 1" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 62, + 232, + 296, + 285 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 232, + 296, + 285 + ], + "spans": [ + { + "bbox": [ + 62, + 232, + 296, + 285 + ], + "type": "text", + "content": "[4] Akshay Agarwal, Mayank Vatsa, Richa Singh, and Nalini K Ratha. Noise is inside me! generating adversarial perturbations with noise derived from natural filters. In IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 774-775, 2020. 6" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 62, + 285, + 295, + 316 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 285, + 295, + 316 + ], + "spans": [ + { + "bbox": [ + 62, + 285, + 295, + 316 + ], + "type": "text", + "content": "[5] Naveed Akhtar and Ajmal Mian. Threat of adversarial attacks on deep learning in computer vision: A survey. IEEE Access, 6:14410-14430, 2018. 1" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 62, + 317, + 295, + 360 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 317, + 295, + 360 + ], + "spans": [ + { + "bbox": [ + 62, + 317, + 295, + 360 + ], + "type": "text", + "content": "[6] Ahmed Aldahdooh, Wassim Hamidouche, Sid Ahmed Fezza, and Olivier Déforges. Adversarial example detection for dnn models: A review and experimental comparison. Artificial Intelligence Review, 55(6):4403-4462, 2022. 2" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 62, + 361, + 295, + 392 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 361, + 295, + 392 + ], + "spans": [ + { + "bbox": [ + 62, + 361, + 295, + 392 + ], + "type": "text", + "content": "[7] Alexey Bochkovskiy, Chien-Yao Wang, and Hong-Yuan Mark Liao. Yolov4: Optimal speed and accuracy of object detection. arXiv preprint arXiv:2004.10934, 2020. 2" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 62, + 392, + 295, + 423 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 392, + 295, + 423 + ], + "spans": [ + { + "bbox": [ + 62, + 392, + 295, + 423 + ], + "type": "text", + "content": "[8] Tom B Brown, Dandelion Mane, Aurko Roy, Martin Abadi, and Justin Gilmer. Adversarial patch. arXiv preprint arXiv:1712.09665, 2017. 1, 2" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 62, + 424, + 295, + 465 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 424, + 295, + 465 + ], + "spans": [ + { + "bbox": [ + 62, + 424, + 295, + 465 + ], + "type": "text", + "content": "[9] Xinyun Chen, Chang Liu, Bo Li, Kimberly Lu, and Dawn Song. Targeted backdoor attacks on deep learning systems using data poisoning. arXiv preprint arXiv:1712.05526, 2017. 1" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 57, + 467, + 295, + 520 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 467, + 295, + 520 + ], + "spans": [ + { + "bbox": [ + 57, + 467, + 295, + 520 + ], + "type": "text", + "content": "[10] Aran Chindaudom, Prarinya Siritanawan, Karin Sumongkayothin, and Kazunori Kotani. 
Adversarialqr: An adversarial patch in qr code format. In International Conference on Imaging, Vision & Pattern Recognition, pages 1-6, 2020. 2" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 57, + 521, + 295, + 563 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 521, + 295, + 563 + ], + "spans": [ + { + "bbox": [ + 57, + 521, + 295, + 563 + ], + "type": "text", + "content": "[11] Aran Chindaudom, Prarinya Siritanawan, Karin Sumongkayothin, and Kazunori Kotani. Surreptitious adversarial examples through functioning qr code. Journal of Imaging, 8(5):122, 2022. 2" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 57, + 563, + 295, + 605 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 563, + 295, + 605 + ], + "spans": [ + { + "bbox": [ + 57, + 563, + 295, + 605 + ], + "type": "text", + "content": "[12] Kenneth T Co, Luis Muñoz-González, Leslie Kanthan, and Emil C Lupu. Real-time detection of practical universal adversarial perturbations. arXiv preprint arXiv:2105.07334, 2021. 1" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 57, + 605, + 295, + 649 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 605, + 295, + 649 + ], + "spans": [ + { + "bbox": [ + 57, + 605, + 295, + 649 + ], + "type": "text", + "content": "[13] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition, pages 248-255, 2009. 2" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 57, + 649, + 295, + 703 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 649, + 295, + 703 + ], + "spans": [ + { + "bbox": [ + 57, + 649, + 295, + 703 + ], + "type": "text", + "content": "[14] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint" + } + ] + } + ], + "index": 14 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 73, + 553, + 705 + ], + "type": "list", + "angle": 0, + "index": 32, + "blocks": [ + { + "bbox": [ + 335, + 73, + 435, + 83 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 335, + 73, + 435, + 83 + ], + "spans": [ + { + "bbox": [ + 335, + 73, + 435, + 83 + ], + "type": "text", + "content": "arXiv:2010.11929, 2020. 5" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 316, + 83, + 553, + 126 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 83, + 553, + 126 + ], + "spans": [ + { + "bbox": [ + 316, + 83, + 553, + 126 + ], + "type": "text", + "content": "[15] Gil Fidel, Ron Bitton, and Asaf Shabtai. When explainability meets adversarial learning: Detecting adversarial examples using shap signatures. In 2020 International Joint Conference on Neural Networks (IJCNN), pages 1-8. IEEE, 2020. 2" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 317, + 127, + 553, + 179 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 127, + 553, + 179 + ], + "spans": [ + { + "bbox": [ + 317, + 127, + 553, + 179 + ], + "type": "text", + "content": "[16] Christina M Funke, Judy Borowski, Karolina Stosio, Wieland Brendel, Thomas SA Wallis, and Matthias Bethge. Five points to check when comparing visual perception in humans and machines. Journal of Vision, 21(3):16-16, 2021. 
1" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 316, + 180, + 553, + 213 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 180, + 553, + 213 + ], + "spans": [ + { + "bbox": [ + 316, + 180, + 553, + 213 + ], + "type": "text", + "content": "[17] Thomas Gittings, Steve Schneider, and John Collomosse. Vax-a-net: Training-time defence against adversarial patch attacks. In Asian Conference on Computer Vision, 2020. 1" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 316, + 213, + 553, + 244 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 213, + 553, + 244 + ], + "spans": [ + { + "bbox": [ + 316, + 213, + 553, + 244 + ], + "type": "text", + "content": "[18] Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014. 1" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 316, + 245, + 553, + 297 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 245, + 553, + 297 + ], + "spans": [ + { + "bbox": [ + 316, + 245, + 553, + 297 + ], + "type": "text", + "content": "[19] Gaurav Goswami, Nalini Ratha, Akshay Agarwal, Richa Singh, and Mayank Vatsa. Unravelling robustness of deep learning based face recognition against adversarial attacks. In AAAI Conference on Artificial Intelligence, volume 32, 2018. 1" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 316, + 298, + 553, + 330 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 298, + 553, + 330 + ], + "spans": [ + { + "bbox": [ + 316, + 298, + 553, + 330 + ], + "type": "text", + "content": "[20] Jindong Gu, Volker Tresp, and Yao Qin. Are vision transformers robust to patch perturbations? In European Conference on Computer Vision, pages 404-421, 2022. 1" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 316, + 331, + 553, + 362 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 331, + 553, + 362 + ], + "spans": [ + { + "bbox": [ + 316, + 331, + 553, + 362 + ], + "type": "text", + "content": "[21] Jamie Hayes. On visible adversarial perturbations & digital watermarking. In IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 1597-1604, 2018. 1" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 316, + 363, + 553, + 404 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 363, + 553, + 404 + ], + "spans": [ + { + "bbox": [ + 316, + 363, + 553, + 404 + ], + "type": "text", + "content": "[22] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2016. 5" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 316, + 404, + 553, + 448 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 404, + 553, + 448 + ], + "spans": [ + { + "bbox": [ + 316, + 404, + 553, + 448 + ], + "type": "text", + "content": "[23] Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt, and Dawn Song. Natural adversarial examples. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15262-15271, 2021. 2" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 316, + 448, + 553, + 479 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 448, + 553, + 479 + ], + "spans": [ + { + "bbox": [ + 316, + 448, + 553, + 479 + ], + "type": "text", + "content": "[24] Yuheng Huang and Yuanchun Li. 
Zero-shot certified defense against adversarial patches with vision transformers. arXiv preprint arXiv:2111.10481, 2021. 1" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 316, + 480, + 553, + 521 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 480, + 553, + 521 + ], + "spans": [ + { + "bbox": [ + 316, + 480, + 553, + 521 + ], + "type": "text", + "content": "[25] Nan Ji, YanFei Feng, Haidong Xie, Xueshuang Xiang, and Naijin Liu. Adversarial yolo: Defense human detection patch attacks via detecting adversarial patches. arXiv preprint arXiv:2103.08860, 2021. 2" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 316, + 522, + 553, + 564 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 522, + 553, + 564 + ], + "spans": [ + { + "bbox": [ + 316, + 522, + 553, + 564 + ], + "type": "text", + "content": "[26] Longlong Jing and Yingli Tian. Self-supervised visual feature learning with deep neural networks: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(11):4037-4058, 2020. 3" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 316, + 565, + 553, + 608 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 565, + 553, + 608 + ], + "spans": [ + { + "bbox": [ + 316, + 565, + 553, + 608 + ], + "type": "text", + "content": "[27] Melanie Jutas, Ethan Liang, Sara Leary, Chris Ward, and Keith Manville. Detecting physical adversarial patch attacks with object detectors. In IEEE Applied Imagery Pattern Recognition Workshop, pages 1-7, 2022. 2" + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 316, + 608, + 553, + 650 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 608, + 553, + 650 + ], + "spans": [ + { + "bbox": [ + 316, + 608, + 553, + 650 + ], + "type": "text", + "content": "[28] Danny Karmon, Daniel Zoran, and Yoav Goldberg. Lavan: Localized and visible adversarial noise. In International Conference on Machine Learning, pages 2507-2515. PMLR, 2018. 2" + } + ] + } + ], + "index": 30 + }, + { + "bbox": [ + 316, + 651, + 553, + 705 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 651, + 553, + 705 + ], + "spans": [ + { + "bbox": [ + 316, + 651, + 553, + 705 + ], + "type": "text", + "content": "[29] Adam Kortylewski, Qing Liu, Huiyu Wang, Zhishuai Zhang, and Alan Yuille. Combining compositional models and deep networks for robust object classification under occlusion. In IEEE/CVF Winter Conference on Applications of Computer Vision, pages 1333-1341, 2020. 1" + } + ] + } + ], + "index": 31 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "text", + "content": "30395" + } + ] + } + ], + "index": 33 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 295, + 708 + ], + "type": "list", + "angle": 0, + "index": 13, + "blocks": [ + { + "bbox": [ + 56, + 72, + 294, + 116 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 72, + 294, + 116 + ], + "spans": [ + { + "bbox": [ + 56, + 72, + 294, + 116 + ], + "type": "text", + "content": "[30] Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3d object representations for fine-grained categorization. 
In IEEE International Conference on Computer Vision Workshops, pages 554-561, 2013. 7" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 117, + 295, + 148 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 117, + 295, + 148 + ], + "spans": [ + { + "bbox": [ + 56, + 117, + 295, + 148 + ], + "type": "text", + "content": "[31] Vishesh Kumar and Akshay Agarwal. The unseen adversaries: Robust and generalized defense against adversarial patches. Available at SSRN 4772716, 2023. 2, 5, 6" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 148, + 294, + 191 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 148, + 294, + 191 + ], + "spans": [ + { + "bbox": [ + 56, + 148, + 294, + 191 + ], + "type": "text", + "content": "[32] Juncheng Li, Frank Schmidt, and Zico Kolter. Adversarial camera stickers: A physical camera-based attack on deep learning systems. In International Conference on Machine Learning, pages 3896-3904. PMLR, 2019. 2" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 56, + 191, + 294, + 244 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 191, + 294, + 244 + ], + "spans": [ + { + "bbox": [ + 56, + 191, + 294, + 244 + ], + "type": "text", + "content": "[33] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European Conference on Computer Vision, pages 740-755, 2014. 2" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 56, + 245, + 294, + 297 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 245, + 294, + 297 + ], + "spans": [ + { + "bbox": [ + 56, + 245, + 294, + 297 + ], + "type": "text", + "content": "[34] Aishan Liu, Xianglong Liu, Jiaxin Fan, Yuqing Ma, Anlan Zhang, Huiyuan Xie, and Dacheng Tao. Perceptual-sensitive gan for generating adversarial patches. In AAAI Conference on Artificial Intelligence, volume 33, pages 1028-1035, 2019. 2" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 56, + 299, + 294, + 353 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 299, + 294, + 353 + ], + "spans": [ + { + "bbox": [ + 56, + 299, + 294, + 353 + ], + "type": "text", + "content": "[35] Jiang Liu, Alexander Levine, Chun Pong Lau, Rama Chellappa, and Soheil Feizi. Segment and complete: Defending object detectors against adversarial patch attacks with robust patch detection. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14973-14982, 2022. 8" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 56, + 353, + 294, + 396 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 353, + 294, + 396 + ], + "spans": [ + { + "bbox": [ + 56, + 353, + 294, + 396 + ], + "type": "text", + "content": "[36] Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, and Saining Xie. A convnet for the 2020s. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11976-11986, 2022. 5" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 56, + 396, + 294, + 460 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 396, + 294, + 460 + ], + "spans": [ + { + "bbox": [ + 56, + 396, + 294, + 460 + ], + "type": "text", + "content": "[37] Giulio Lovisotto, Nicole Finnie, Maurizio Munoz, Chaithanya Kumar Mummadi, and Jan Hendrik Metzen. Give me your attention: Dot-product attention considered harmful for adversarial patch robustness. 
In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15234-15243, 2022. 1" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 56, + 460, + 294, + 502 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 460, + 294, + 502 + ], + "spans": [ + { + "bbox": [ + 56, + 460, + 294, + 502 + ], + "type": "text", + "content": "[38] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017. 2" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 56, + 502, + 294, + 567 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 502, + 294, + 567 + ], + "spans": [ + { + "bbox": [ + 56, + 502, + 294, + 567 + ], + "type": "text", + "content": "[39] Michael McCoyd, Won Park, Steven Chen, Neil Shah, Ryan Roggenkemper, Minjune Hwang, Jason Xinyu Liu, and David Wagner. Minority reports defense: Defending against adversarial patches. In International Conference on Applied Cryptography and Network Security, pages 564-582, 2020. 1" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 56, + 567, + 294, + 610 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 567, + 294, + 610 + ], + "spans": [ + { + "bbox": [ + 56, + 567, + 294, + 610 + ], + "type": "text", + "content": "[40] Samuel G Müller and Frank Hutter. Trivialaugment: Tuning-free yet state-of-the-art data augmentation. In IEEE/CVF International Conference on Computer Vision, pages 774-782, 2021. 3, 5" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 56, + 610, + 294, + 643 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 610, + 294, + 643 + ], + "spans": [ + { + "bbox": [ + 56, + 610, + 294, + 643 + ], + "type": "text", + "content": "[41] Meike Nauta, Jörg Schlötterer, Maurice van Keulen, and Christin Seifert. Pip-net: Patch-based intuitive prototypes for interpretable image classification. 2023. 4, 7" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 56, + 643, + 294, + 708 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 643, + 294, + 708 + ], + "spans": [ + { + "bbox": [ + 56, + 643, + 294, + 708 + ], + "type": "text", + "content": "[42] Meike Nauta, Jan Trienes, Shreyasi Pathak, Elisa Nguyen, Michelle Peters, Yasmin Schmitt, Jorg Schlötterer, Maurice van Keulen, and Christin Seifert. From anecdotal evidence to quantitative evaluation methods: A systematic review on evaluating explainable ai. ACM Computing Surveys, 55(13s):1-42, 2023. 1" + } + ] + } + ], + "index": 12 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 72, + 553, + 706 + ], + "type": "list", + "angle": 0, + "index": 29, + "blocks": [ + { + "bbox": [ + 316, + 72, + 553, + 126 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 72, + 553, + 126 + ], + "spans": [ + { + "bbox": [ + 316, + 72, + 553, + 126 + ], + "type": "text", + "content": "[43] Ojaswee Ojaswee, Akshay Agarwal, and Nalini Ratha. Benchmarking image classifiers for physical out-of-distribution examples detection. In IEEE/CVF International Conference on Computer Vision, pages 4427-4435, 2023. 
2, 6" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 316, + 127, + 553, + 181 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 127, + 553, + 181 + ], + "spans": [ + { + "bbox": [ + 316, + 127, + 553, + 181 + ], + "type": "text", + "content": "[44] Maura Pintor, Daniele Angioni, Angelo Sotgiu, Luca Demetrio, Ambra Demontis, Battista Biggio, and Fabio Roli. Imagenet-patch: A dataset for benchmarking machine learning robustness against adversarial patches. Pattern Recognition, 134:109064, 2023. 2, 8" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 316, + 181, + 553, + 202 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 181, + 553, + 202 + ], + "spans": [ + { + "bbox": [ + 316, + 181, + 553, + 202 + ], + "type": "text", + "content": "[45] Joseph Redmon. Yolov3: An incremental improvement. arXiv preprint arXiv:1804.02767, 2018. 2" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 316, + 202, + 553, + 234 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 202, + 553, + 234 + ], + "spans": [ + { + "bbox": [ + 316, + 202, + 553, + 234 + ], + "type": "text", + "content": "[46] Joseph Redmon and Ali Farhadi. Yolo9000: better, faster, stronger. In IEEE Conference on Computer Vision and Pattern Recognition, pages 7263-7271, 2017. 2" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 316, + 234, + 553, + 277 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 234, + 553, + 277 + ], + "spans": [ + { + "bbox": [ + 316, + 234, + 553, + 277 + ], + "type": "text", + "content": "[47] Wojciech Samek, Grégoire Montavon, Sebastian Lapischkin, Christopher J Anders, and Klaus-Robert Müller. Explaining deep neural networks and beyond: A review of methods and applications. IEEE, 109(3):247-278, 2021. 1" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 316, + 277, + 553, + 330 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 277, + 553, + 330 + ], + "spans": [ + { + "bbox": [ + 316, + 277, + 553, + 330 + ], + "type": "text", + "content": "[48] Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. *Mobilenetv2: Inverted residuals and linear bottlenecks*. In IEEE Conference on Computer Vision and Pattern Recognition, pages 4510-4520, 2018. 5" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 316, + 331, + 553, + 384 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 331, + 553, + 384 + ], + "spans": [ + { + "bbox": [ + 316, + 331, + 553, + 384 + ], + "type": "text", + "content": "[49] Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. In IEEE International Conference on Computer Vision, pages 618-626, 2017. 7" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 316, + 384, + 553, + 438 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 384, + 553, + 438 + ], + "spans": [ + { + "bbox": [ + 316, + 384, + 553, + 438 + ], + "type": "text", + "content": "[50] Ali Shafahi, W Ronny Huang, Mahyar Najibi, Octavian Suciu, Christoph Studer, Tudor Dumitras, and Tom Goldstein. Poison frogs! targeted clean-label poisoning attacks on neural networks. Advances in Neural Information Processing Systems, 31, 2018. 
1" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 316, + 438, + 553, + 480 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 438, + 553, + 480 + ], + "spans": [ + { + "bbox": [ + 316, + 438, + 553, + 480 + ], + "type": "text", + "content": "[51] Abhijith Sharma, Yijun Bian, Phil Munz, and Apurva Narayan. Adversarial patch attacks and defences in vision-based tasks: A survey. arXiv preprint arXiv:2206.08304, 2022.2" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 316, + 481, + 553, + 523 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 481, + 553, + 523 + ], + "spans": [ + { + "bbox": [ + 316, + 481, + 553, + 523 + ], + "type": "text", + "content": "[52] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013. 1" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 316, + 524, + 553, + 566 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 524, + 553, + 566 + ], + "spans": [ + { + "bbox": [ + 316, + 524, + 553, + 566 + ], + "type": "text", + "content": "[53] Tung Tran, Issam Aib, Ehab Al-Shaer, and Raouf Boutaba. An evasive attack on snort flowbits. In IEEE Network Operations and Management Symposium, pages 351-358, 2012. 1" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 316, + 567, + 553, + 598 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 567, + 553, + 598 + ], + "spans": [ + { + "bbox": [ + 316, + 567, + 553, + 598 + ], + "type": "text", + "content": "[54] Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of Machine Learning Research, 9(11), 2008. 7" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 316, + 598, + 553, + 630 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 598, + 553, + 630 + ], + "spans": [ + { + "bbox": [ + 316, + 598, + 553, + 630 + ], + "type": "text", + "content": "[55] Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. The caltech-ucsd birds-200-2011 dataset. 2011. 7" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 316, + 631, + 553, + 673 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 631, + 553, + 673 + ], + "spans": [ + { + "bbox": [ + 316, + 631, + 553, + 673 + ], + "type": "text", + "content": "[56] Tongzhou Wang and Phillip Isola. Understanding contrastive representation learning through alignment and uniformity on the hypersphere. In International Conference on Machine Learning, pages 9929-9939. PMLR, 2020. 4" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 316, + 673, + 553, + 706 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 673, + 553, + 706 + ], + "spans": [ + { + "bbox": [ + 316, + 673, + 553, + 706 + ], + "type": "text", + "content": "[57] Chong Xiang, Arjun Nitin Bhagoji, Vikash Sehwag, and Prateek Mittal. 
PatchGuard: A provably robust defense against adversarial patches via small receptive fields and" + } + ] + } + ], + "index": 28 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "type": "text", + "content": "30396" + } + ] + } + ], + "index": 30 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 295, + 437 + ], + "type": "list", + "angle": 0, + "index": 8, + "blocks": [ + { + "bbox": [ + 76, + 72, + 294, + 94 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 76, + 72, + 294, + 94 + ], + "spans": [ + { + "bbox": [ + 76, + 72, + 294, + 94 + ], + "type": "text", + "content": "masking. In USENIX Security Symposium, pages 2237-2254, 2021. 1" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 95, + 295, + 148 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 95, + 295, + 148 + ], + "spans": [ + { + "bbox": [ + 56, + 95, + 295, + 148 + ], + "type": "text", + "content": "[58] Ke Xu, Yao Xiao, Zhaoheng Zheng, Kaijie Cai, and Ram Nevatia. Patchzero: Defending against adversarial patch attacks by detecting and zeroing the patch. In IEEE/CVF Winter Conference on Applications of Computer Vision, pages 4632-4641, 2023. 8" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 148, + 294, + 179 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 148, + 294, + 179 + ], + "spans": [ + { + "bbox": [ + 56, + 148, + 294, + 179 + ], + "type": "text", + "content": "[59] Weilin Xu, David Evans, and Yanjun Qi. Feature squeezing: Detecting adversarial examples in deep neural networks. arXiv preprint arXiv:1704.01155, 2017. 2" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 56, + 180, + 294, + 233 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 180, + 294, + 233 + ], + "spans": [ + { + "bbox": [ + 56, + 180, + 294, + 233 + ], + "type": "text", + "content": "[60] Puyudi Yang, Jianbo Chen, Cho-Jui Hsieh, Jane-Ling Wang, and Michael Jordan. Ml-loo: Detecting adversarial examples with feature attribution. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 6639-6647, 2020. 2" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 56, + 234, + 294, + 276 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 234, + 294, + 276 + ], + "spans": [ + { + "bbox": [ + 56, + 234, + 294, + 276 + ], + "type": "text", + "content": "[61] Xingyu Zhou, Zhisong Pan, Yexin Duan, Jin Zhang, and Shuaihui Wang. A data independent approach to generate adversarial patches. Machine Vision and Applications, 32(3):67, 2021. 2" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 56, + 277, + 294, + 342 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 277, + 294, + 342 + ], + "spans": [ + { + "bbox": [ + 56, + 277, + 294, + 342 + ], + "type": "text", + "content": "[62] Zongwei Zhou, Md Mahfuzur Rahman Siddiquee, Nima Tajbakhsh, and Jianming Liang. Unet++: A nested u-net architecture for medical image segmentation. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, pages 3-11, 2018. 
2" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 56, + 342, + 294, + 384 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 342, + 294, + 384 + ], + "spans": [ + { + "bbox": [ + 56, + 342, + 294, + 384 + ], + "type": "text", + "content": "[63] Hongru Zhu, Peng Tang, Jeongho Park, Soojin Park, and Alan Yuille. Robustness of object recognition under extreme occlusion in humans and computational models. arXiv preprint arXiv:1905.04598, 2019. 1" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 56, + 384, + 294, + 437 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 384, + 294, + 437 + ], + "spans": [ + { + "bbox": [ + 56, + 384, + 294, + 437 + ], + "type": "text", + "content": "[64] Alon Zolfi, Moshe Kravchik, Yuval Elovici, and Asaf Shabtai. The translucent patch: A physical and universal attack on object detectors. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15232-15241, 2021. 1, 2" + } + ] + } + ], + "index": 7 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "text", + "content": "30397" + } + ] + } + ], + "index": 9 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 10 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2025/A Universal Scale-Adaptive Deformable Transformer for Image Restoration across Diverse Artifacts/36bcd46e-32c4-4da4-aff4-67b68f83d335_content_list.json b/2025/A Universal Scale-Adaptive Deformable Transformer for Image Restoration across Diverse Artifacts/36bcd46e-32c4-4da4-aff4-67b68f83d335_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..1c84db8d7dc43b6b229cb2e6e429cbde3ba5d2f3 --- /dev/null +++ b/2025/A Universal Scale-Adaptive Deformable Transformer for Image Restoration across Diverse Artifacts/36bcd46e-32c4-4da4-aff4-67b68f83d335_content_list.json @@ -0,0 +1,2103 @@ +[ + { + "type": "text", + "text": "A Universal Scale-Adaptive Deformable Transformer for Image Restoration across Diverse Artifacts", + "text_level": 1, + "bbox": [ + 116, + 128, + 880, + 174 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Xuyi He", + "bbox": [ + 254, + 202, + 334, + 220 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Yuhui Quan", + "bbox": [ + 377, + 203, + 480, + 220 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "$\\mathbf{R}$ uotao $\\mathrm{Xu^{2,3}*}$", + "bbox": [ + 526, + 203, + 645, + 220 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Hui Ji", + "bbox": [ + 681, + 203, + 738, + 219 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "$^{1}$ School of Computer Science and Engineering, South China University of Technology $^{2}$ Institute for Super Robotics, South China University of Technology $^{3}$ Key Laboratory of Large-Model Embodied-Intelligent Humanoid Robot $^{4}$ Department of Mathematics, National University of Singapore", + "bbox": [ + 158, + 220, + 836, + 292 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "csxuyihe@mail.scut.edu.cn, csyhquan@scut.edu.cn, rtxu@superobots.com, matjh@nus.edu.sg", + "bbox": [ + 125, + 295, + 864, + 309 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 246, + 344, + 326, + 359 + ], + "page_idx": 0 + }, + 
{ + "type": "text", + "text": "Structured artifacts are semi-regular, repetitive patterns that closely intertwine with genuine image content, making their removal highly challenging. In this paper, we introduce the Scale-Adaptive Deformable Transformer, an network architecture specifically designed to eliminate such artifacts from images. The proposed network features two key components: a scale-enhanced deformable convolution module for modeling scale-varying patterns with abundant orientations and potential distortions, and a scale-adaptive deformable attention mechanism for capturing long-range relationships among repetitive patterns with different sizes and non-uniform spatial distributions. Extensive experiments show that our network consistently outperforms state-of-the-art methods in diverse artifact removal tasks, including image deraining, image demoiring, and image debanding.", + "bbox": [ + 86, + 377, + 485, + 604 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1. Introduction", + "text_level": 1, + "bbox": [ + 91, + 636, + 220, + 651 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Structured artifacts, in contrast to random noise, are repetitive patterns with similar appearance. Examples include Moiré patterns caused by overlapping pattern interference, rain streaks formed during image acquisition in rainy conditions, and banding effects resulting from color quantization. See Figure 1 for an illustration. These artifacts often display similar appearance that repeat in a quasi-periodic way over large image regions. However, they can differ in size, intensity, orientation, and may exhibit certain shape distortions. Additionally, their characteristics can vary across different images due to changes in image content and capturing.", + "bbox": [ + 89, + 662, + 483, + 829 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Removing structured artifacts has many applications. Eliminating moiré patterns improves digital photography", + "bbox": [ + 89, + 830, + 483, + 861 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/18490bcb9b39b556463d9de3aff215467f7618319c3a6d439533beca73b749a7.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 517, + 345, + 625, + 428 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/27b3199a8bdeb0a4f74b5c24a3e6ece83d9938a1ad95db60b6d44d2f2727a682.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 633, + 345, + 712, + 428 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/f3f4ae252b636f6bb503193f28a39639a40ecec227b60b97af9aff387b63015b.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 720, + 345, + 901, + 428 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/13e0c606cbef0d8ed6d80aa3e24beb844257e273d8c4ad2523b9759e31aa7fab.jpg", + "image_caption": [ + "Figure 1. Structured artifacts with varying orientations, scales, and may involve warping effect. From left to right: moiré, rain, band." 
+ ], + "image_footnote": [], + "bbox": [ + 517, + 431, + 625, + 501 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/d7781c63dc311025e1dd22964f9b94bee05f07bb835318904eb3cf5687c57596.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 632, + 431, + 712, + 501 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/cbf684e07eb7050c2158b2e3123ed491437caf1152495299b503b95199bce24b.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 720, + 431, + 903, + 501 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "and screen captures. Rain streak removal improves the reliability of outdoor vision systems in rainy conditions, i.e., those used in autonomous driving and video surveillance. Addressing banding artifacts is necessary in professional imaging and printing to achieve smooth gradients and high dynamic range. Additionally, structured artifact removal finds applications in medical imaging, industrial quality inspection, and consumer electronics, enhancing image quality and reliability. All these highlight a broad impact of structured artifacts removal across many real-world applications.", + "bbox": [ + 511, + 566, + 906, + 718 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Structured artifact removal presents significant challenges that differ from random noise removal. The primary difficulties arise due to the quasi-periodic nature of these artifacts and their resemblance to authentic repeating image patterns (e.g., textures). These artifacts often intertwine with natural image features, creating complex overlapping structures that are difficult to disentangle from genuine image content. For instance, moiré patterns can closely mimic textile textures, and banding effects may align with natural linear features. Additionally, these artifacts often display noticeable variations in pattern size and geometric shape, further complicating their identification and removal.", + "bbox": [ + 511, + 719, + 908, + 900 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "CVF", + "bbox": [ + 106, + 2, + 181, + 42 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore.", + "bbox": [ + 236, + 0, + 810, + 46 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "*Corresponding author.", + "bbox": [ + 109, + 875, + 235, + 887 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "Code is available at: https://github.com/csxyhe/SADT", + "bbox": [ + 114, + 888, + 462, + 898 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "12731", + "bbox": [ + 480, + 944, + 517, + 955 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1.1. Challenges and Existing Works", + "text_level": 1, + "bbox": [ + 91, + 90, + 366, + 107 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Traditional approaches, relying on handcrafted priors to distinguish structured artifacts from genuine image features, often struggle to handle variations in these artifacts, resulting in limited effectiveness for artifacts with noticeable variations. While deep learning methods have shown promising performance in removing these artifacts, they also face notable challenges. 
Early approaches are based on convolutional neural networks (CNNs) [1, 12, 23, 28, 29, 33, 48], which have limited capacity for capturing long-range relationships due to their local receptive fields. However, the quasi-periodicity of local patterns with certain similarities is a crucial cue for identifying structured artifacts. Thus, an NN for structured artifact removal must be capable of effectively exploiting long-range relationships among local structures to accurately identify these artifacts.", + "bbox": [ + 89, + 112, + 485, + 338 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Recent works have addressed this limitation by leveraging transformers with self-attention mechanisms [5, 6, 51]. However, challenges remain. The significant variations in the geometric shapes and pattern sizes of structured artifacts make it challenging to synthesize sufficient training data with comprehensive coverage of all possible variations. In addition, training NNs on very large datasets can also be computationally expensive. Consequently, relying solely on extensive datasets may not lead to robust generalization. To conclude, an effective transformer-based NN for structured artifact removal should be designed with deformable capacity and scale adaptability, enabling it to generalize effectively to unseen data even with limited training samples.", + "bbox": [ + 89, + 339, + 483, + 535 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "1.2. Main Idea", + "text_level": 1, + "bbox": [ + 91, + 542, + 207, + 556 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "An effective NN for structured artifact removal requires specialized mechanisms that can efficiently adapt to the varying scales and distortions of these artifacts, as well as exploit long-range dependencies among local structures. In this paper, we propose a transformer-based NN specifically optimized for handling the scale variations and distortions of structured artifacts. Our proposed architecture, the Scale-Adaptive Deformable Transformer (SADT), leverages the transformer's capacity to capture long-range dependencies while incorporating new modules for scale adaptation and deformable transformations, enabling effective removal of structured artifacts with noticeable variations.", + "bbox": [ + 89, + 564, + 485, + 744 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Scale-enhanced deformable convolution (SEDC): Structured artifacts are semi-regular, exhibiting repetitive patterns with similar appearances but noticeable variations in size, orientation, and shape distortion. Classic convolutions with fixed geometric structure struggle to capture such variable patterns effectively. Existing deformable convolutions offer some flexibility through spatial adaptations but remain ineffective in handling large scale and orientation variations. To address these issues, we introduce the SEDC module, a convolution module designed to better manage these variations.", + "bbox": [ + 89, + 750, + 485, + 900 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "The cascaded deformable convolutions are used to generate intermediate features from different perspectives, scales, and orientations, enabling flexible sampling and effective modeling of warped patterns. Furthermore, a subsequent Spatial-Channel Mixing Convolution (SCMC) with a large-scale convolution kernel is introduced, which helps capture broader contextual information while preserving positional cues. 
Finally, the parallel features are adaptively aggregated for modeling scale-varying and deformable patterns.", + "bbox": [ + 511, + 90, + 906, + 227 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Scale-adaptive deformable attention (SADA): Structured artifacts are repetitive patterns spanning large image regions. These patterns are quasi-periodic with non-uniform spatial distribution and size variation. While transformers are effective in capturing such repetitive patterns through attention mechanisms, the significant variations in size and the non-uniform spatial distribution of these patterns demand better scale adaptability and more effective sampling within the attention mechanisms.", + "bbox": [ + 511, + 231, + 908, + 366 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "This motivates us to propose Scale-Adaptive Deformable Attention (SADA), which assigns a multi-scale multi-head mechanism for capturing long-range dependencies within a semantic layer. Moreover, SADA introduces deformable sampling offsets to effectively handle the non-uniform spatial distribution of repetitive patterns. These two techniques enable local self-attention to better capture long-range quasi-periodic patterns through selective information aggregation.", + "bbox": [ + 511, + 367, + 908, + 489 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "1.3. Our Contributions", + "text_level": 1, + "bbox": [ + 513, + 500, + 694, + 513 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In summary, our contributions are listed as follows:", + "bbox": [ + 511, + 522, + 852, + 536 + ], + "page_idx": 1 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- We propose the SEDC module to model scale-varying artifacts with abundant orientations and potential distortions.", + "- We introduce the SADA module, which enhances local self-attention with a multi-scale multi-head mechanism and a deformable sampling technique, to capture long-range contextual information of quasi-periodic patterns.", + "- Building on SEDC and SADA, we construct the transformer-based SADT for structured artifact removal." + ], + "bbox": [ + 511, + 537, + 906, + 657 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Extensive experiments on diverse artifact removal tasks, including image demoiréing, deraining, and debanding, showed that SADT achieves state-of-the-art performance.", + "bbox": [ + 511, + 659, + 906, + 704 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2. Related Work", + "text_level": 1, + "bbox": [ + 513, + 719, + 653, + 734 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Non-learning structured artifact removal methods: Traditional methods rely on pre-defined priors on structured artifacts to distinguish them from genuine image content. For moiré pattern removal from texture images, Liu et al. [22] proposed using a low-rank prior for textures and a sparse prior on Moiré patterns in the discrete cosine transform domain to separate the two. For rain streak removal, Luo et al. [27] separate the rain streak and background layers via discriminative sparse coding. Li et al. [19] introduce a patch-based approach leveraging Gaussian mixture model priors to separate", + "bbox": [ + 511, + 750, + 908, + 902 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "12732", + "bbox": [ + 480, + 944, + 517, + 955 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "
These handcrafted priors are often overly simplistic and tend to fail in removing artifacts with large variations in scale, orientation, and shape.", + "bbox": [ + 89, + 90, + 485, + 137 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Multi-scale deep learning methods for structured artifact removal: There is extensive literature on deep learning for structured artifact removal. Here, we focus exclusively on the most relevant works employing multi-scale strategies to address artifact patterns of varying scales. To remove Moiré patterns from images, Sun et al. [33] proposed a multiresolution encoder-decoder architecture designed to model Moiré patterns at varying scales. Yu et al. [48] introduced a semantic-aligned scale-aware module that combines multiscale features through an attention mechanism to reduce scale discrepancies in Moiré patterns. Nguyen et al. [28] developed a multiscale guided restoration block targeting both low- and high-frequency noise. For deraining, Wang et al. [38] proposed a wavelet-inspired multi-level module for rain removal. For debanding, Liu et al. [23] introduced a dual-branch depthwise group fusion module to capture both inter-scale and intra-scale correlations of banding patterns. Quan et al. [30] introduced a cross-Scale invertible NN with deformable convolutions to handle scale variations of banding artifacts. For shadow removal, built upon the Retinex decomposition model, Huang et al. [15] proposed a neural network with a multi-scale structure. These methods have shown promising results in structured artifact removal tasks. However, these methods neither fully exploit the repetitive nature of artifacts for accurate separation from genuine image structures, nor do they effectively address the significant orientation or shape variations of artifacts in their designs.", + "bbox": [ + 91, + 141, + 485, + 547 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Transformer for removing structured artifacts: Transformer is an effective architecture for modeling global and local relationship of local structures. In image processing, an image is partitioned into small patches to fit the transformer architecture [3]. Many transformer methods have been proposed for different structured artifact removal tasks. For deraining, Xiao et al. [44] introduced a combination of window-based and global self-attention mechanisms. Chen et al. [5] developed sparse channel self-attention to selectively retain key values. For image debanding, Wen et al. [42] utilized local self-attention with varying window sizes across different heads to capture features at multiple scales.", + "bbox": [ + 89, + 551, + 485, + 733 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "However, these transformer methods have not fully leveraged self-attention's ability to capture long-range dependencies of repetitive patterns. Such weakness arises from the high computational costs of standard transformers due to quadratic scaling with patch numbers. To reduce costs, these transformers rely on techniques such as window-based self-attention [21, 41], channel-only self-attention in Restormer [51], and local self-attention (LSA) [18, 31]. While computationally efficient, these methods' limited spatial perceptive fields weaken their ability to capture long-range information. 
To address these limitations, the proposed SADT extends the", + "bbox": [ + 89, + 734, + 485, + 901 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "LSA with scale-adaptive deformable sampling to effectively capture long-range dependencies of repetitive artifacts with varying scales, orientations, and distortions. This design enables SADT to model structured artifacts more effectively.", + "bbox": [ + 511, + 90, + 906, + 151 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Deformable convolution and attention: Deformable convolutions [10, 40, 45, 56] introduce learnable offsets to convolution kernels, enabling adaptive feature sampling to handle geometric variations and spatial transformations. Deformable attention [2, 43, 57] extends this adaptability to the self-attention mechanism, allowing transformers to focus on relevant spatial locations and better capture complex patterns. Zhu et al. [57] proposed a multi-scale deformable attention module for generating a feature pyramid and allocating several keys per query to capture multi-scale features. Xia et al. [43] introduced the deformable attention transformer, which learns shared deformed points for efficient computations. Cao et al. [2] developed a reference-based deformable attention module to enhance low-resolution feature representations using multiple relevant features.", + "bbox": [ + 511, + 155, + 908, + 381 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "The design of deformable self-attention in our work differs significantly from that of [43]. Xia et al. [43] combines self-attention with deformable convolution, computing the relevance between input and reference images for offset prediction. In contrast, our method integrates deformable sampling within self-attention by first predicting offsets and then calculating the similarity between predicted keys and the query. It is more effective in modeling long-range dependence of patterns with varying appearances. Furthermore, we enhance the scale adaptability of deformable convolution and attention for handle scale variations among artifacts.", + "bbox": [ + 511, + 382, + 910, + 549 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3. Methodology", + "text_level": 1, + "bbox": [ + 511, + 559, + 650, + 575 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "In this section, we introduce the two key components of the proposed transformer. SEDC, an advanced deformable convolution module designed to capture local patterns with large-scale and orientation variations of local patterns. SADA, a deformable attention module that exploits long-range relationships among repetitive patterns with varying sizes and non-uniform spatial distributions. Subsequently, detail present the proposed transformer, SADT, which integrates SEDC and SADA within a multi-in multi-out framework, together with the training loss.", + "bbox": [ + 511, + 584, + 908, + 736 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.1. Scale-Enhanced Deformable Convolution", + "text_level": 1, + "bbox": [ + 511, + 742, + 867, + 757 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "SEDC is for effectively capturing scale-varying and deformable patterns. The module consists of two key components: the cascaded DCNs [56] for modeling complex patterns with distortions, and the parallel Spatial-Channel Mixing Convolution (SCMC) layers for capturing patterns with large scale variations. 
The pipeline of SEDC is illustrated in Figure 2.", + "bbox": [ + 511, + 763, + 908, + 869 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "To capture patterns of large size, we introduce the SCMC layer to expand the receptive field. For the $i$-th", + "bbox": [ + 511, + 869, + 906, + 900 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "12733", + "bbox": [ + 480, + 944, + 519, + 955 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/e12412d60de57c41d2a4242b11273d89a189d572ddabc87593d66f7b1eed0f6b.jpg", + "image_caption": [ + "Figure 2. SEDC configuration." + ], + "image_footnote": [], + "bbox": [ + 116, + 88, + 460, + 171 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "channel of an input feature $\mathbf{X} \in \mathbb{R}^{H \times W \times C}$, SCMC first partitions each feature map into several non-overlapping patches of size $K \times K$, and subsequently reshapes them into a matrix $\mathbf{X}_i \in \mathbb{R}^{K^2 \times \frac{HW}{K^2}}$. Denote such an operation by", + "bbox": [ + 89, + 218, + 483, + 285 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\Psi : \\boldsymbol {X} \\in \\mathbb {R} ^ {H \\times W \\times C} \\to \\left\\{\\boldsymbol {X} _ {i} \\in \\mathbb {R} ^ {K ^ {2} \\times \\frac {H W}{K ^ {2}}}, \\forall i \\right\\},\n$$\n", + "text_format": "latex", + "bbox": [ + 130, + 292, + 442, + 313 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Then, a learnable linear transform $\mathbf{W} \in \mathbb{R}^{K^2 \times K^2}$ is applied to each $\mathbf{X}_i$:", + "bbox": [ + 89, + 323, + 483, + 356 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\Phi_ {\\boldsymbol {W}}: \\boldsymbol {X} _ {i} \\in \\mathbb {R} ^ {K ^ {2} \\times \\frac {H W}{K ^ {2}}} \\to \\boldsymbol {W} \\boldsymbol {X} _ {i} \\in \\mathbb {R} ^ {K ^ {2} \\times \\frac {H W}{K ^ {2}}},\n$$\n", + "text_format": "latex", + "bbox": [ + 130, + 363, + 442, + 383 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $\mathbf{W}$ is shared across the different $\mathbf{X}_i$'s. Afterward, the output of $\Phi_W$ is reshaped and concatenated back to the original dimension $(H,W,C)$ using the inverse of $\Psi$. A gating mechanism is then applied, performing an element-wise product between the original input and the processed feature:", + "bbox": [ + 89, + 393, + 483, + 470 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\operatorname {S C M C} (\\boldsymbol {X}) = \\boldsymbol {X} \\odot \\Psi^ {- 1} \\left(\\Phi_ {\\boldsymbol {W}} \\left(\\Psi (\\boldsymbol {X})\\right)\\right). \\tag {1}\n$$\n", + "text_format": "latex", + "bbox": [ + 151, + 479, + 482, + 498 + ], + "page_idx": 3 + }, +
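Eq. (1) is compact enough to implement directly. Below is a minimal PyTorch sketch of an SCMC layer under the definitions above; the class name, the (B, C, H, W) tensor layout, and the bias-free linear map are our own illustrative choices rather than details taken from the released code.

```python
import torch
import torch.nn as nn

class SCMC(nn.Module):
    """Spatial-Channel Mixing Convolution, a sketch of Eq. (1):
    SCMC(X) = X ⊙ Ψ⁻¹(Φ_W(Ψ(X))), with a K²×K² transform W shared
    across all channels and patches."""
    def __init__(self, patch: int = 8):
        super().__init__()
        self.K = patch
        self.mix = nn.Linear(patch * patch, patch * patch, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, C, H, W = x.shape
        K = self.K
        assert H % K == 0 and W % K == 0, "H, W must be multiples of K"
        # Ψ: partition each channel into non-overlapping K×K patches and
        # flatten every patch into a K²-vector.
        p = x.reshape(B, C, H // K, K, W // K, K)
        p = p.permute(0, 1, 2, 4, 3, 5).reshape(B, C, -1, K * K)
        # Φ_W: the shared linear map mixes the K² positions of each patch.
        p = self.mix(p)
        # Ψ⁻¹: undo the partitioning back to (B, C, H, W).
        p = p.reshape(B, C, H // K, W // K, K, K)
        p = p.permute(0, 1, 2, 4, 3, 5).reshape(B, C, H, W)
        # Gating: element-wise product with the original input.
        return x * p

if __name__ == "__main__":
    y = SCMC(patch=8)(torch.randn(2, 32, 64, 64))
    print(y.shape)  # torch.Size([2, 32, 64, 64])
```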
{ + "type": "text", + "text": "SCMC can effectively capture large patterns with minimal computation overhead. However, similar to other MLP-based architectures (e.g., MLP-Mixer [35]), its non-overlapping partitions may struggle with patterns that span across regions, particularly when intersected by partition boundaries. This issue is mitigated by the cascaded DCNs, which help aggregate relevant non-local information into local windows and provide features at different scales to the parallel SCMCs. The efficient DCN utilizes a PConv operation [4] followed by a pointwise convolution to generate deformable offsets and modulation scalars.", + "bbox": [ + 89, + 507, + 483, + 672 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "The complete pipeline of SEDC is shown in Fig. 2. In the main branch, input feature channels are first reduced by a factor of 4. The cascaded DCN processes the corresponding features, while parallel SCMCs handle the original input features. Finally, features with different receptive field shapes are merged via pointwise convolution. Overall, this design effectively captures local patterns with varying sizes, orientations, geometric distortions, and warping effects.", + "bbox": [ + 89, + 672, + 483, + 794 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.2. Scale-Adaptive Deformable Attention", + "text_level": 1, + "bbox": [ + 89, + 803, + 415, + 819 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "SADA integrates the multi-scale multi-head mechanism with deformable sampling offsets to effectively model long-range dependencies in repetitive artifact patterns. Given the input feature map $\mathbf{X} \in \mathbb{R}^{H \times W \times C}$, we first generate query $Q$, key $K$ and value $V$ projections, enriched with local context", + "bbox": [ + 89, + 825, + 483, + 900 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "by applying pointwise convolutions to aggregate pixel-wise cross-channel context, followed by $3 \times 3$ depth-wise convolutions to encode channel-wise spatial context. A multi-head mechanism is then used, where $Q, K, V$ are split into $S$ parts along channels for subsequent processing.", + "bbox": [ + 511, + 90, + 906, + 167 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Revisiting standard LSA: Omitting the head index, let $V_{u}$, $Q_{u}$, $K_{u}$ denote the $u$-th feature vector in $V$, $Q$, $K$, where $u$ denotes the related spatial location. In the standard LSA [31], for query $Q_{u}$, the output $Y_{u}$ is defined as", + "bbox": [ + 511, + 170, + 906, + 231 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\boldsymbol {Y} _ {u} = \\sum_ {v \\in \\mathcal {N} (u)} \\omega_ {u \\rightarrow v} \\boldsymbol {V} _ {v}, \\quad \\omega_ {u \\rightarrow v} = \\frac {\\exp \\left(\\boldsymbol {Q} _ {u} ^ {\\top} \\boldsymbol {K} _ {v}\\right)}{\\sum_ {w \\in \\mathcal {N} (u)} \\exp \\left(\\boldsymbol {Q} _ {u} ^ {\\top} \\boldsymbol {K} _ {w}\\right)}, \\tag {2}\n$$\n", + "text_format": "latex", + "bbox": [ + 511, + 239, + 911, + 292 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $\mathcal{N}(u) = u + \{-1,0,1\}^2$ denotes the index set of points neighboring the position $u$. The size of $\mathcal{N}(u)$ is rather small, resulting in poor capability to capture long-range dependencies at varying locations.", + "bbox": [ + 511, + 292, + 906, + 354 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Multi-scale multi-head mechanism for LSA: To leverage the long-range dependencies of patterns, we introduce multi-dilation initial grids, denoted as $\Delta^{(s)} = \{-M_s, 0, +M_s\}^2$, to construct a multi-scale neighborhood. Specifically, for the $s$-th head, the neighborhood is denoted as $\mathcal{N}^{s}(u) = u + \Delta^{(s)}$. By assigning different sampling scales to each attention head, a broad and sparse receptive field, as shown in Figure 3(c), is formed through the aggregation of information across heads, enabling the layer to capture long-range contextual information more effectively.", + "bbox": [ + 511, + 358, + 908, + 508 + ], + "page_idx": 3 + }, +
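To make the multi-scale neighborhood concrete, here is a small PyTorch sketch of one SADA head before any deformable offsets are added, i.e., Eq. (2) evaluated over the dilated grid $\mathcal{N}^{s}(u)$; the unfold-based neighbor gathering and the zero padding at image borders are our assumptions, not details stated in the paper.

```python
import torch
import torch.nn.functional as F

def dilated_lsa_head(q, k, v, M: int):
    """One SADA head without deformable offsets: the local self-attention
    of Eq. (2) over the dilated 3x3 neighborhood u + {-M, 0, +M}^2.
    q, k, v: (B, d, H, W) tensors of a single head; M is the dilation M_s."""
    B, d, H, W = q.shape
    # Gather the 9 dilated neighbors of every position (zero padding at
    # the borders); shapes become (B, d, 9, H*W).
    k_n = F.unfold(k, kernel_size=3, dilation=M, padding=M).view(B, d, 9, -1)
    v_n = F.unfold(v, kernel_size=3, dilation=M, padding=M).view(B, d, 9, -1)
    q_c = q.view(B, d, 1, -1)                    # central queries Q_u
    attn = (q_c * k_n).sum(dim=1, keepdim=True)  # Q_u^T K_v, (B, 1, 9, HW)
    attn = attn.softmax(dim=2)                   # softmax over the neighbors
    out = (attn * v_n).sum(dim=2)                # weighted sum of values
    return out.view(B, d, H, W)

# Heads with M_s in {1, 3, 5, 7} jointly cover the broad, sparse
# receptive field of Fig. 3(c).
y = dilated_lsa_head(torch.randn(1, 8, 32, 32),
                     torch.randn(1, 8, 32, 32),
                     torch.randn(1, 8, 32, 32), M=3)
```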
{ + "type": "text", + "text": "Deformable-sampling offset for LSA: To handle the non-uniform distribution of structured artifacts, we integrate deformable sampling into LSA by predicting offsets $\delta_u^s$ for each position $u$ and scale $s$. The neighboring set is given by", + "bbox": [ + 511, + 512, + 908, + 589 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\tilde {\\mathcal {N}} ^ {s} (u) = u + \\Delta^ {(s)} + \\delta_ {u} ^ {s},\n$$\n", + "text_format": "latex", + "bbox": [ + 620, + 597, + 795, + 616 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Let $D_u^s$ denote $\delta_u^s$ in vector form, and let $D^s$ combine all $D_u^s$. Then, the offsets $D^s$ are extracted from $Q$ as", + "bbox": [ + 513, + 625, + 906, + 656 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\boldsymbol {D} ^ {s} = \\phi^ {s} (\\boldsymbol {Q}) \\in \\mathbb {R} ^ {H \\times W \\times 2 | \\Delta^ {(s)} |}, \\tag {3}\n$$\n", + "text_format": "latex", + "bbox": [ + 599, + 665, + 906, + 686 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Here, $\phi^s$ is implemented using PConv [4] followed by a pointwise convolution. Instead of a single head $Q^s$, the full $Q$ is fed to $\phi^s$ for a holistic query view, enabling consistent head-specific adaptations. The learnable deformable offsets $D^s$ adjust sampling locations within a head, enhancing the relevance of the sampled keys and values to the central query and improving the layer's handling of complex patterns.", + "bbox": [ + 511, + 695, + 906, + 800 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Finally, the $s$-th head output of SADA is calculated by", + "bbox": [ + 531, + 801, + 890, + 816 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\boldsymbol {Y} _ {u} ^ {s} = \\sum_ {v \\in \\tilde {\\mathcal {N}} ^ {s} (u)} \\omega_ {u \\rightarrow v} \\boldsymbol {V} _ {v} ^ {s}. \\tag {4}\n$$\n", + "text_format": "latex", + "bbox": [ + 622, + 825, + 903, + 861 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "To mitigate the negative effects caused by the statistics-based offset generator $\phi^s$, the low-relevance points, referred to", + "bbox": [ + 511, + 869, + 906, + 901 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "12734", + "bbox": [ + 480, + 944, + 517, + 955 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/26969461c2da252e50c7dc2d873906a68b609d3aad27c7861c0c1a3cf449eef7.jpg", + "image_caption": [ + "(a) Overall SADA." + ], + "image_footnote": [], + "bbox": [ + 138, + 95, + 367, + 367 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/366547a4407cb33e037e3dbe47b51e0544fe3af5a66f089ad586396273162cb8.jpg", + "image_caption": [ + "(b) Deformable sampling process of the $s$-th head." + ], + "image_footnote": [], + "bbox": [ + 372, + 104, + 651, + 268 + ], + "page_idx": 4 + }, +
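The deformable part of SADA (Eqs. (3)-(4)) can be sketched as follows: offsets are predicted from the full query map, and keys/values are read at the deformed locations with bilinear interpolation. A plain 3x3 convolution stands in for the PConv-based generator $\phi^s$, so treat this as an illustrative approximation rather than the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformableNeighborSampler(nn.Module):
    """Sketch of Eqs. (3)-(4): predict D^s = phi^s(Q) and bilinearly sample
    a feature map (K or V) at the deformed neighbors u + Delta^(s) + delta_u^s."""
    def __init__(self, q_channels: int, M: int):
        super().__init__()
        d = torch.tensor([-float(M), 0.0, float(M)])
        gy, gx = torch.meshgrid(d, d, indexing="ij")
        # Delta^(s): the 9 initial dilated grid offsets, stored as (x, y).
        self.register_buffer("delta", torch.stack([gx, gy], -1).reshape(9, 2))
        self.phi = nn.Conv2d(q_channels, 2 * 9, 3, padding=1)  # Eq. (3)

    def forward(self, q_full, feat):
        B, C, H, W = feat.shape
        off = self.phi(q_full).view(B, 9, 2, H, W)          # delta_u^s
        gy, gx = torch.meshgrid(torch.arange(H, device=feat.device),
                                torch.arange(W, device=feat.device),
                                indexing="ij")
        base = torch.stack([gx, gy], 0).float()             # (2, H, W)
        loc = base.view(1, 1, 2, H, W) + self.delta.view(1, 9, 2, 1, 1) + off
        # Normalize absolute pixel locations to [-1, 1] for grid_sample.
        gx_n = 2.0 * loc[:, :, 0] / (W - 1) - 1.0
        gy_n = 2.0 * loc[:, :, 1] / (H - 1) - 1.0
        grid = torch.stack([gx_n, gy_n], dim=-1)            # (B, 9, H, W, 2)
        out = F.grid_sample(feat, grid.reshape(B, 9 * H, W, 2),
                            mode="bilinear", padding_mode="zeros",
                            align_corners=True)
        return out.view(B, C, 9, H, W)  # 9 deformed neighbors per position
```

The sampled keys and values returned here then replace the fixed unfold-based neighbors in the softmax of Eq. (4).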
+ ], + "image_footnote": [], + "bbox": [ + 372, + 104, + 651, + 268 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/780f209f31a2ec7d82446a21bb5d6000393155851241e63cb4cd702a586b842f.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 370, + 287, + 660, + 362 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/7c0e7d3ca0817fa01603efc28366f9e06617052167e5bd6ba41df3116162ee16.jpg", + "image_caption": [ + "(c) Initial sampling grids $\\{\\Delta^{(s)}\\}$" + ], + "image_footnote": [], + "bbox": [ + 666, + 104, + 859, + 205 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/0b220c97b6564a4020272459b3f394be20e6f26d481b3ac2dad432dd4f0a1683.jpg", + "image_caption": [ + "(d) The process instance for one head.", + "Figure 3. Overview of SADA: (a) SADA employs multi-scale multi-head mechanism to aggregate (c) a broad, sparse receptive field for capturing long-range context, while utilizing (b) deformable sampling offsets to model complex patterns. (d) A process instance of SADA within a head." + ], + "image_footnote": [], + "bbox": [ + 666, + 220, + 859, + 369 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "invalid points, are filtered out in the weighted sum with a low $\\omega_{u\\rightarrow v}$ in Eq. 4. The remaining points are considered valid. The output $Y^{s}$ s are combined into $\\pmb{Y}$ along channels and fed into a feed-forward network (FFN) for subsequent processing. The FFN is implemented by FRFN [55]. See Fig. 3 for an illustration.", + "bbox": [ + 89, + 459, + 483, + 551 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.3. SADT for Structured Artifact Removal", + "text_level": 1, + "bbox": [ + 89, + 560, + 426, + 575 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Building on SEDC and SADA, we propose a transformer, called SADT, for structured artifact removal. SADT employs a multi-input multi-output (MIMO) strategy, as shown in Fig. 4. Given a degraded image $\\mathbf{Y} \\in \\mathbb{R}^{W \\times H \\times 3}$ , SADT initially extracts shallow features of size $H \\times W \\times C$ via a convolution layer. These features then pass through 4 encoder blocks, capturing fine to coarse scales, with each block comprising $N_{i}$ cascaded SADA layers to flexibly model long-range dependencies of repetitive artifacts and a pair of SEDC layers in the end for capturing scale-varying features.", + "bbox": [ + 89, + 582, + 483, + 733 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Within the encoder, channels are expanded and spatial resolution is reduced. Following [8, 9], features from downsampled degraded images are merged into the main path through a fusion module and subsequently processed by 3 decoders to progressively reconstruct the image from coarse to fine scales. Features in the decoder are concatenated with those from the encoder via skip connections, followed by a convolutional layer for channel dimension adjustment. The convolutional layer followed the decoder generates a residual image, which is added to the corresponding downsampled input to restore the image. The image at the finest scale", + "bbox": [ + 89, + 734, + 483, + 901 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "serves as the final output, while other scales also contribute to the loss function during training.", + "bbox": [ + 511, + 459, + 906, + 491 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.4. 
{ + "type": "text", + "text": "3.4. Loss Function for NN Training", + "text_level": 1, + "bbox": [ + 511, + 507, + 785, + 523 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "A multi-scale loss is used for training. Let $O_{t}$ be the output from the $t$-th scale decoder block, and let $X^{gt}$ be the ground truth. Besides the standard $L_{1}$ fitting loss, we also include an auxiliary loss that minimizes the distance between the output and the ground truth in a feature space, tailored to each task.", + "bbox": [ + 511, + 531, + 905, + 607 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Image demoiréing: The perceptual loss [16] $\mathcal{L}_p$ is used as an auxiliary loss, and the overall loss is", + "bbox": [ + 511, + 613, + 905, + 643 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal {L} = \\sum_ {t = 1} ^ {3} \\mathcal {L} _ {1} \\left(\\boldsymbol {O} _ {t}, \\boldsymbol {X} _ {\\downarrow 2 ^ {t - 1}} ^ {g t}\\right) + \\lambda_ {p} \\cdot \\mathcal {L} _ {p} \\left(\\boldsymbol {O} _ {t}, \\boldsymbol {X} _ {\\downarrow 2 ^ {t - 1}} ^ {g t}\\right), \\tag {5}\n$$\n", + "text_format": "latex", + "bbox": [ + 532, + 659, + 905, + 700 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $\lambda_p > 0$ is a hyper-parameter, set to 1 in experiments.", + "bbox": [ + 511, + 715, + 903, + 733 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Image debanding/deraining: Unlike moiré patterns, banding effects and rain streaks exhibit more pronounced stripe-like appearances, yielding strong responses in the frequency domain. Thus, the frequency loss $\mathcal{L}_f$ is introduced as an additional loss term, and the overall loss is", + "bbox": [ + 511, + 737, + 906, + 811 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal {L} = \\sum_ {t = 1} ^ {3} \\mathcal {L} _ {1} \\left(\\boldsymbol {O} _ {t}, \\boldsymbol {X} _ {\\downarrow 2 ^ {t - 1}} ^ {g t}\\right) + \\lambda_ {f} \\cdot \\mathcal {L} _ {f} \\left(\\boldsymbol {O} _ {t}, \\boldsymbol {X} _ {\\downarrow 2 ^ {t - 1}} ^ {g t}\\right), \\tag {6}\n$$\n", + "text_format": "latex", + "bbox": [ + 531, + 829, + 903, + 869 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $\lambda_{f} > 0$ is a hyper-parameter, set to 0.1 in experiments.", + "bbox": [ + 511, + 885, + 906, + 901 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "12735", + "bbox": [ + 480, + 944, + 517, + 955 + ], + "page_idx": 4 + }, +
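A minimal sketch of this multi-scale loss is given below. The $L_1$ term follows Eqs. (5)-(6) directly; since the paper does not spell out $\mathcal{L}_f$, we assume here the common choice of an L1 distance between FFT amplitudes, and for Eq. (5) the frequency term would simply be swapped for a VGG-style perceptual distance.

```python
import torch
import torch.nn.functional as F

def frequency_l1(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Assumed form of L_f: L1 distance between 2D FFT amplitudes."""
    return (torch.fft.fft2(a).abs() - torch.fft.fft2(b).abs()).abs().mean()

def multiscale_loss(outputs, gt, lam_f: float = 0.1):
    """Eq. (6): sum over 3 scales of L1 + lambda_f * L_f, with the ground
    truth bilinearly downsampled by 2^(t-1) at scale t (outputs[0] is the
    finest scale, matching the MIMO decoder)."""
    loss = 0.0
    for t, o_t in enumerate(outputs):
        g_t = F.interpolate(gt, scale_factor=0.5 ** t, mode="bilinear",
                            align_corners=False) if t else gt
        loss = loss + F.l1_loss(o_t, g_t) + lam_f * frequency_l1(o_t, g_t)
    return loss
```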
{ + "type": "image", + "img_path": "images/3a9cacf7fc8ba0171f708675a83945402a6028153ea077f50e4f02ae7094cc26.jpg", + "image_caption": [ + "Figure 4. The proposed network employs a multi-scale hierarchical encoder-decoder architecture. Each block comprises $N$ cascaded SADA layers while utilizing a pair of SEDC layers at the ends." + ], + "image_footnote": [], + "bbox": [ + 135, + 88, + 861, + 340 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4. Experiments", + "text_level": 1, + "bbox": [ + 89, + 401, + 225, + 419 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "In this section, we first discuss the experimental settings, followed by the evaluation of the effectiveness of our SADT on several structured artifact removal tasks, including image demoiréing, debanding, and deraining. Ablation studies are then conducted to assess each component of SADT. More results can be found in the supplementary material.", + "bbox": [ + 89, + 428, + 483, + 518 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.1. Experimental Settings", + "text_level": 1, + "bbox": [ + 89, + 534, + 299, + 550 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "In our SADT, the number of channels is set to $C = 32$ while $\{N_0, N_1, N_2, N_3\}$ are set to $\{0, 4, 4, 4\}$. The partition size in SCMC is $K = 8$. 4 heads are used in SADA, and $M_s$ is set to $\{1, 3, 5, 7\}$. Model training employs the Adam optimizer [17] with $\beta_1 = 0.9$ and $\beta_2 = 0.999$. Code will be released upon acceptance. Experimental datasets and implementation details are listed below:", + "bbox": [ + 89, + 556, + 483, + 662 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Image demoiréing: Three datasets are used for benchmarking: TIP2018 [33], FHDMi [13], and LCDMoiré [49]. The training employs cyclic cosine annealing [26], with an initial learning rate of $2 \times 10^{-4}$ decaying to $10^{-6}$ over each cycle. For the two high-definition datasets, FHDMi and LCDMoiré, we randomly crop $512 \times 512$ patches from the images, and train the model for 150 epochs with an annealing cycle of 50 and a batch size of 2. For the TIP2018 dataset, the model is trained for 70 epochs with an annealing cycle of 10. Consistent with [48], no data augmentation is utilized in training.", + "bbox": [ + 89, + 667, + 483, + 819 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Image debanding: The DID dataset [54] is used for benchmarking, which contains 51,490 pairs of banded and latent image patches of size $256\times 256$ (30,829 training pairs, 10,237 validation pairs, and 10,354 testing pairs). The initial learning rate is 0.0002, decreased to $10^{-6}$ over 300K iterations", + "bbox": [ + 89, + 824, + 483, + 900 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "with the cosine annealing scheme [26]. The batch size is 4. Data augmentations include horizontal/vertical flipping and rotations of $0^{\circ}$, $90^{\circ}$, $180^{\circ}$ and $270^{\circ}$.", + "bbox": [ + 511, + 402, + 906, + 448 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Image deraining: The large-scale real-world dataset, SPAD [39], is used for benchmarking the task of rain streak removal. It contains 638,492 image pairs for training and 1,000 for testing. The batch size is 32 and the total iteration number is $300\mathrm{K}$. The initial learning rate is fixed as $3\times 10^{-4}$ for the first 92K iterations, and then decreased to $1\times 10^{-6}$ over the remaining 208K iterations with the cosine annealing scheme [26]. Random vertical/horizontal flips are used in data augmentation.", + "bbox": [ + 511, + 452, + 908, + 573 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Two common quantitative evaluation metrics are used for all tasks: PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity Index). Consistent with existing deraining methods [5, 44], PSNR/SSIM scores are computed using the Y channel in YCbCr color space. For image demoiréing and debanding, where periodic patterns significantly impact perception, the LPIPS metric [52] is also used.", + "bbox": [ + 511, + 575, + 908, + 681 + ], + "page_idx": 5 + }, +
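These optimizer and schedule choices map onto standard PyTorch components. The sketch below shows the demoiréing configuration, reading "cyclic cosine annealing [26]" as a restarting cosine schedule; the stand-in model and the training-loop ellipsis are ours.

```python
import torch

model = torch.nn.Conv2d(3, 3, 3, padding=1)   # stand-in for SADT
opt = torch.optim.Adam(model.parameters(), lr=2e-4, betas=(0.9, 0.999))
# Cyclic cosine annealing: lr decays 2e-4 -> 1e-6 within each 50-epoch
# cycle, then restarts (the FHDMi/LCDMoiré setting described above).
sched = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(
    opt, T_0=50, eta_min=1e-6)

for epoch in range(150):                      # 150 epochs = 3 cycles
    # ... one training epoch over 512x512 crops with batch size 2 ...
    sched.step()
```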
{ + "type": "text", + "text": "4.2. Quantitative and Qualitative Evaluation", + "text_level": 1, + "bbox": [ + 511, + 691, + 857, + 709 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Image demoiréing: See Tab. 1 for the comparison of different methods. SADT outperforms all compared methods on the PSNR metric and achieves the best SSIM scores on both the TIP2018 and FHDMi datasets. While many methods, including SADT, achieve comparable SSIM scores on the LCDMoiré dataset, SADT maintains its superiority with the highest PSNR performance. The substantial performance gains on these datasets demonstrate SADT's effectiveness in handling moiré artifacts. Qualitative analysis also reveals that SADT achieves a remarkable improvement in moiré pattern removal, particularly in challenging scenarios such as patterns intertwined with hair details and those distributed", + "bbox": [ + 511, + 719, + 908, + 900 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "12736", + "bbox": [ + 480, + 945, + 519, + 955 + ], + "page_idx": 5 + }, + { + "type": "table", + "img_path": "images/e34758b91ec6cc38cebbb3185d9415d18dbd83137bb646e6b77056fdebe01e5e.jpg", + "table_caption": [ + "Table 1. Quantitative results for image demoiréing. The best and second-best results are boldfaced and underlined, respectively." + ], + "table_footnote": [], + "table_body": "<table>
<tr><td rowspan="2">Method</td><td colspan="3">TIP2018 [33]</td><td colspan="3">FHDMi [13]</td><td colspan="3">LCDMoiré [49]</td></tr>
<tr><td>LPIPS↓</td><td>PSNR↑</td><td>SSIM↑</td><td>LPIPS↓</td><td>PSNR↑</td><td>SSIM↑</td><td>LPIPS↓</td><td>PSNR↑</td><td>SSIM↑</td></tr>
<tr><td>MopNet [12]</td><td>-</td><td>27.75</td><td>0.895</td><td>0.1794</td><td>22.76</td><td>0.7958</td><td>-</td><td>-</td><td>-</td></tr>
<tr><td>FHDe2Net [13]</td><td>-</td><td>27.78</td><td>0.896</td><td>0.1688</td><td>22.93</td><td>0.7885</td><td>-</td><td>41.40</td><td>-</td></tr>
<tr><td>WDNet [24]</td><td>-</td><td>28.08</td><td>0.904</td><td>-</td><td>-</td><td>-</td><td>-</td><td>29.66</td><td>0.9670</td></tr>
<tr><td>MBCNN [53]</td><td>-</td><td>30.03</td><td>0.893</td><td>0.1980</td><td>22.31</td><td>0.8095</td><td>-</td><td>44.04</td><td>0.9948</td></tr>
<tr><td>ESDNet [48]</td><td>0.0816</td><td>29.81</td><td>0.916</td><td>0.1354</td><td>24.50</td><td>0.8351</td><td>0.0097</td><td>44.83</td><td>0.9963</td></tr>
<tr><td>CDDF [37]</td><td>-</td><td>28.87</td><td>0.894</td><td>0.1610</td><td>23.63</td><td>0.8040</td><td>-</td><td>44.10</td><td>-</td></tr>
<tr><td>RVDNet [7]</td><td>-</td><td>-</td><td>-</td><td>-</td><td>24.29</td><td>0.8352</td><td>-</td><td>44.54</td><td>0.9932</td></tr>
<tr><td>RRID [46]</td><td>-</td><td>-</td><td>-</td><td>-</td><td>24.39</td><td>0.8300</td><td>-</td><td>-</td><td>-</td></tr>
<tr><td>Ours</td><td>0.0608</td><td>30.77</td><td>0.926</td><td>0.1238</td><td>24.96</td><td>0.8463</td><td>0.0065</td><td>46.43</td><td>0.9923</td></tr>
</table>", + "bbox": [ + 93, + 88, + 637, + 242 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/d3f07b330ecf7b2fd610e44f54cb597534e043c94477c899e7980e7a3a7c84d3.jpg", + "table_caption": [ + "Table 2. Quantitative results of image debanding on the DID dataset [54]." + ], + "table_footnote": [], + "table_body": "<table>
<tr><td>Method</td><td>PSNR↑</td><td>SSIM↑</td><td>LPIPS↓</td></tr>
<tr><td>FCDR [14]</td><td>25.73</td><td>0.7170</td><td>0.3766</td></tr>
<tr><td>FFmpeg [11]</td><td>35.33</td><td>0.9352</td><td>0.0622</td></tr>
<tr><td>AdaDeband [36]</td><td>35.35</td><td>0.9392</td><td>0.0639</td></tr>
<tr><td>BitNet [1]</td><td>38.24</td><td>0.9633</td><td>0.0505</td></tr>
<tr><td>ADNet [34]</td><td>38.29</td><td>0.9612</td><td>0.0499</td></tr>
<tr><td>MWCNN [25]</td><td>39.24</td><td>0.9688</td><td>0.4854</td></tr>
<tr><td>MPRNet [50]</td><td>39.42</td><td>0.9697</td><td>0.0461</td></tr>
<tr><td>Restormer [51]</td><td>39.50</td><td>0.9709</td><td>0.0478</td></tr>
<tr><td>Ours</td><td>39.78</td><td>0.9729</td><td>0.0453</td></tr>
</table>
", + "bbox": [ + 640, + 88, + 905, + 242 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/09e4b09302f65d399bb59c1de75c81247fdd7ac51fd43a7afd0f0df09fcca9fc.jpg", + "image_caption": [ + "Figure 5. Visual inspection of the results from different image demoiring methods on sample images; see zoom-in box for details inspection." + ], + "image_footnote": [], + "bbox": [ + 127, + 295, + 869, + 407 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "across flat clothing regions, as shown in Fig. 5.", + "bbox": [ + 89, + 455, + 401, + 469 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Image debanding: Given the limited availability of open-source deep learning methods specifically for debanding, we extended our evaluation to include methods adapted from related tasks. As shown in Tab.2, SADT demonstrates superior performance across all evaluation metrics. See Fig. 6 for visual comparison, where SADT achieves optimal results with minimal banding artifacts and superior color fidelity.", + "bbox": [ + 89, + 474, + 483, + 580 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Image deraining: The SPAD dataset [39] is used for benchmarking. As shown in Tab. 3, SADT outperformed all other methods, in PSNR and SSIM. Notably, SADT achieved a $0.23\\mathrm{dB}$ improvement over the second best performer, NeRD-Rain-S [6], with $42\\%$ fewer parameter count. (See Tab. 4). As shown in Fig. 7, the visual comparison further validate our method's effectiveness. While exhibits residual streaks and NeRD-Rain-S produces discontinuous blocks in their outputs, SADT effectively removes rain streaks while maintaining high visual quality.", + "bbox": [ + 89, + 585, + 483, + 737 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Complexity comparison: See Tab. 4 for the comparison of different methods in terms of FLOPs and parameter numbers. Our model, SADT, maintains relatively low FLOPs and a small number of parameters, while achieving SOTA performance as evidenced in Tab. $1\\sim 3$ . This shows that the effectiveness of our model is from its design, not model size.", + "bbox": [ + 89, + 742, + 483, + 833 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.3. Ablation Study", + "text_level": 1, + "bbox": [ + 89, + 847, + 243, + 862 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "This study evaluates the contribution of key designs, SADA, SEDC, and MIMO architecture, toward performance gain", + "bbox": [ + 89, + 869, + 483, + 900 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/ec4462c56b1b870a8b532f666e8e9035a08b14bb87cd99e5f734635f3df647f7.jpg", + "table_caption": [ + "Table 1. Quantitative results for image demoiring. The best and second-best results are Table 2. Quantitative results of image deboldfaced and underlined, respectively. banding on the DID dataset [54]." + ], + "table_footnote": [], + "table_body": "
<tr><td>Method</td><td>Source</td><td>PSNR↑</td><td>SSIM↑</td></tr>
<tr><td>PReLU [32]</td><td>CVPR'19</td><td>40.16</td><td>0.9816</td></tr>
<tr><td>RCDNet [38]</td><td>CVPR'20</td><td>43.36</td><td>0.9831</td></tr>
<tr><td>SPDNet [47]</td><td>ICCV'21</td><td>43.55</td><td>0.9875</td></tr>
<tr><td>MPRNet [50]</td><td>CVPR'21</td><td>45.00</td><td>0.9897</td></tr>
<tr><td>ECNet [20]</td><td>WACV'22</td><td>44.32</td><td>0.9913</td></tr>
<tr><td>IDT [44]</td><td>TPAMI'22</td><td>47.34</td><td>0.9929</td></tr>
<tr><td>Uformer-B [41]</td><td>CVPR'22</td><td>47.84</td><td>0.9925</td></tr>
<tr><td>Restormer [51]</td><td>CVPR'22</td><td>47.98</td><td>0.9921</td></tr>
<tr><td>DRSformer [5]</td><td>CVPR'23</td><td>48.53</td><td>0.9924</td></tr>
<tr><td>NeRD-Rain-S [6]</td><td>CVPR'24</td><td>48.90</td><td>0.9936</td></tr>
<tr><td>Ours</td><td>-</td><td>49.13</td><td>0.9939</td></tr>
</table>", + "bbox": [ + 570, + 452, + 849, + 612 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/8a6774e8788b5d2b7f5bf29d563e2e05dfc541214ebd4defa377e6855f110d9e.jpg", + "table_caption": [ + "Table 4. Complexity comparison with SOTA methods in different tasks. The test image is of size $256 \times 256$ ." + ], + "table_footnote": [], + "table_body": "<table>
<tr><td>Task</td><td>Method</td><td>#Params (M)</td><td>#FLOPs (G)</td></tr>
<tr><td rowspan="2">Demoiréing</td><td>ESDNet [48]</td><td>5.93</td><td>17.6</td></tr>
<tr><td>MBCNN [53]</td><td>13.9</td><td>-</td></tr>
<tr><td rowspan="2">Debanding</td><td>Restormer [51]</td><td>26.13</td><td>150.0</td></tr>
<tr><td>MPRNet [50]</td><td>20.13</td><td>777.0</td></tr>
<tr><td rowspan="2">Deraining</td><td>NeRD-Rain-S [6]</td><td>10.53</td><td>79.2</td></tr>
<tr><td>DRSformer [5]</td><td>33.65</td><td>242.9</td></tr>
<tr><td></td><td>Ours</td><td>6.13</td><td>39.7</td></tr>
</table>", + "bbox": [ + 527, + 662, + 893, + 790 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "of our model. For SADA, we removed the multi-scale head mechanism and the deformable sampling offset, reducing it to multi-head LSA. For SEDC, we substituted it with a single", + "bbox": [ + 511, + 854, + 905, + 900 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "12737", + "bbox": [ + 480, + 944, + 517, + 955 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/d67476ce008b6e90254367f5b920789fb20385f4661dc52e9b60162aefaf1bc0.jpg", + "image_caption": [ + "Degraded"
+ ], + "image_footnote": [], + "bbox": [ + 122, + 253, + 230, + 339 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/000aa278105d67f7a69eb4a7bc1f826e098f25661e4e789987ff51bc309fe272.jpg", + "image_caption": [ + "Reference" + ], + "image_footnote": [], + "bbox": [ + 232, + 253, + 338, + 339 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/e15a3c2430e39fd7123886b5a1710ae9ba1da945d5fb288896c4355e5607ee8f.jpg", + "image_caption": [ + "SPDNet" + ], + "image_footnote": [], + "bbox": [ + 339, + 253, + 444, + 339 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/a049252f83c7c347e0e67fa0e7a302e4aec2ee6636b18d23b4a0409918faf8b3.jpg", + "image_caption": [ + "Restormer" + ], + "image_footnote": [], + "bbox": [ + 444, + 253, + 552, + 339 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/62cfc5aadba427b59bf2a9f566536de2746a6d7030ddf4a0a4850b7c7b62825f.jpg", + "image_caption": [ + "DRSformer" + ], + "image_footnote": [], + "bbox": [ + 552, + 253, + 658, + 339 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/5d9797410ae695ae22fb6444ec2d21f72bc0478f1a0746ada3611a1140555cde.jpg", + "image_caption": [ + "NeRD-Rain-S" + ], + "image_footnote": [], + "bbox": [ + 660, + 253, + 764, + 339 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/992b7a7a872e1f181b9a0d38b0728510fdee449bdacd466939f2ff0b3110199b.jpg", + "image_caption": [ + "Ours" + ], + "image_footnote": [], + "bbox": [ + 766, + 253, + 874, + 339 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "SCMC. To ensure a fair comparison, all models were adjusted for comparable sizes by modifying channel numbers, and are trained on the DID [54] dataset for 300K iterations. As shown in Tab. 5, each component makes significant contribution to performance improvement. Notably, SADA contributes a PSNR gain of $0.78\\mathrm{dB}$ and an SSIM improvement of 0.003, while the inclusion of SEDC yields an additional PSNR increase of $0.17\\mathrm{dB}$ .", + "bbox": [ + 89, + 400, + 483, + 521 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/22f245c44512667d3c6a0fbdd35f9281723496ae77dc2a6c5af9f1c88fbff2ef.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
<tr><td>SADA</td><td>SEDC</td><td>MIMO</td><td>PSNR(dB)↑</td><td>SSIM↑</td><td>LPIPS↓</td></tr>
<tr><td>-</td><td>-</td><td>-</td><td>38.62</td><td>0.9663</td><td>0.0587</td></tr>
<tr><td>-</td><td>-</td><td>✓</td><td>38.82</td><td>0.9678</td><td>0.0533</td></tr>
<tr><td>-</td><td>✓</td><td>✓</td><td>39.05</td><td>0.9692</td><td>0.0521</td></tr>
<tr><td>✓</td><td>-</td><td>✓</td><td>39.61</td><td>0.9712</td><td>0.0480</td></tr>
<tr><td>✓</td><td>✓</td><td>✓</td><td>39.78</td><td>0.9729</td><td>0.0453</td></tr>
</table>
", + "bbox": [ + 102, + 534, + 470, + 623 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Impact of the grid dilation: In this study, we investigate the performance benefits of the multi-head mechanism in SADA by comparing different settings of $M_{s}$ on the DID dataset. Tab. 6 shows that the multi-dilation offset grid improves performance with similar model size. Compared to assigning each head same small dilation (1,1,1,1), our setting (1,3,5,7) better exploits long-range dependencies, resulting in a PSNR gain of 0.2dB and a LPIPS reduction of 0.0039.", + "bbox": [ + 89, + 667, + 482, + 789 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Visualization of deformable offsets: Refer to Fig. 8 for a comparison of the sampling points between the standard LSA and our SADA. The purple star indicates the central point, while the yellow points and red circles represent the initial and final sampling positions, respectively. This shows SADA can better capture long-range and multi-scale context, which proves advantageous for the ensuing weighted aggregation.", + "bbox": [ + 89, + 794, + 483, + 902 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/4c5b660f0b30bec78612d5df9889306406088b86b27206e014cae09cafbbe41b.jpg", + "table_caption": [ + "Table 5. Results of ablation studies. Boldfaced: best values." + ], + "table_footnote": [], + "table_body": "
<tr><td>Ms</td><td>PSNR(dB)↑</td><td>SSIM↑</td><td>LPIPS↓</td></tr>
<tr><td>{1,1,1,1}</td><td>39.58</td><td>0.9708</td><td>0.0492</td></tr>
<tr><td>{1,2,3,4}</td><td>39.70</td><td>0.9723</td><td>0.0463</td></tr>
<tr><td>{1,3,5,7}</td><td>39.78</td><td>0.9729</td><td>0.0453</td></tr>
</table>
", + "bbox": [ + 573, + 398, + 846, + 462 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/bfd953a627a2abc4954138fa3debdc2328a09d9327ee929ea53cae079735db6f.jpg", + "image_caption": [ + "Table 6. Results using different setting of dilation $M_{s}$ ." + ], + "image_footnote": [], + "bbox": [ + 560, + 496, + 674, + 585 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/6d76e442b18716801e65ee1cd73101a25e62b04883e1cf54f954d2eb3ce53086.jpg", + "image_caption": [ + "0-th head", + "Figure 8. Visualization of sampling points for LSA and SADA." + ], + "image_footnote": [], + "bbox": [ + 521, + 599, + 606, + 665 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/73d98c9d86f7edf79ab57523a9982fd5945b3bbc2a7af6ed44c4a69f34719144.jpg", + "image_caption": [ + "1-st head", + "(c) Sampling points for each SADA head." + ], + "image_footnote": [], + "bbox": [ + 617, + 599, + 704, + 665 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/dd646ab5fd9eefd96601511c31171cc032884f794f17d8c47f51ad61d9b7c460.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 746, + 497, + 862, + 585 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "(b) Sampling points for each LSA head.", + "bbox": [ + 715, + 587, + 895, + 598 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/2a0b1a3470f6a21ffa7906f139fa5646057af9a95b030c7e5c441f09400aeeeb.jpg", + "image_caption": [ + "2-nd head" + ], + "image_footnote": [], + "bbox": [ + 715, + 599, + 802, + 665 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/59a0015c909e693453bcbbf9c95f4fea10a30d1297d9446ee47f6ea50a835e12.jpg", + "image_caption": [ + "(a) Corresponding down-sampled image. (b) Sampling points for each LSA head.", + "3-rd head" + ], + "image_footnote": [], + "bbox": [ + 813, + 598, + 898, + 665 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Conclusion", + "text_level": 1, + "bbox": [ + 513, + 729, + 612, + 746 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "In this paper, we presented SADT, a universal transformer-based architecture for image restoration across diverse artifacts. Our approach integrates the SEDC module to capture scale-varying patterns with abundant orientations and potential distortions, and the SADA module to model long-range relationships among repetitive patterns with diverse sizes and non-uniform distributions. Extensive experiments showed that SADT consistently outperforms SOTA methods in the tasks including image demoiréing, debanding, and deraining.", + "bbox": [ + 511, + 756, + 906, + 893 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "12738", + "bbox": [ + 480, + 944, + 517, + 955 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Acknowledgements.", + "text_level": 1, + "bbox": [ + 91, + 90, + 263, + 107 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "This work was supported by National Key R&D Pogram of China (2023YFA1011601), the Basic and Applied Basic Research Foundation of Guangdong Province (2024A1515012287), Science and Technology Key Program of Guangzhou, China (2023B03J1388), National Natural Science Foundation of China (62372186), Natural Science Foundation of Guangdong Province (2023A1515012841), Fundamental Research Funds for the Central Universities (x2jsD2230220), National Natural Science Foundation of China (62106077), Natural Science Foundation of Guangdong Province (2022A1515011087) and Singapore MOE AcRF Tier 1 (Grant No. 
A-8000981-00-00).", + "bbox": [ + 89, + 114, + 485, + 297 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 91, + 310, + 187, + 325 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[1] Junyoung Byun, Kyujin Shim, and Changick Kim. Bitnet: Learning-based bit-depth expansion. In Proceedings of the Asian Conference on Computer Vision, pages 67-82. Springer, 2019. 2, 7", + "[2] Jiezhang Cao, Jingyun Liang, Kai Zhang, Yawei Li, Yulun Zhang, Wenguan Wang, and Luc Van Gool. Reference-based image super-resolution with deformable attention transformer. In European conference on computer vision, pages 325-342. Springer, 2022. 3", + "[3] Hanting Chen, Yunhe Wang, Tianyu Guo, Chang Xu, Yiping Deng, Zhenhua Liu, Siwei Ma, Chunjing Xu, Chao Xu, and Wen Gao. Pre-trained image processing transformer. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 12299-12310, 2021. 3", + "[4] Jierun Chen, Shiu-hong Kao, Hao He, Weipeng Zhuo, Song Wen, Chul-Ho Lee, and S-H Gary Chan. Run, don't walk: chasing higher flops for faster neural networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 12021-12031, 2023. 4", + "[5] Xiang Chen, Hao Li, Mingqiang Li, and Jinshan Pan. Learning a sparse transformer network for effective image deraining. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5896-5905, 2023. 2, 3, 6, 7", + "[6] Xiang Chen, Jinshan Pan, and Jiangxin Dong. Bidirectional multi-scale implicit neural representations for image deraining. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 25627-25636, 2024. 2, 7", + "[7] Yijia Cheng, Xin Liu, and Jingyu Yang. Recaptured raw screen image and video demoiring via channel and spatial modulations. Advances in Neural Information Processing Systems, 36, 2024. 7", + "[8] Sung-Jin Cho, Seo-Won Ji, Jun-Pyo Hong, Seung-Won Jung, and Sung-Jea Ko. Rethinking coarse-to-fine approach in single image deblurring. In Proceedings of the IEEE/CVF international conference on computer vision, pages 4641-4650, 2021. 5", + "[9] Yuning Cui, Wenqi Ren, Sining Yang, Xiaochun Cao, and Alois Knoll. Irnext: Rethinking convolutional network design" + ], + "bbox": [ + 99, + 335, + 485, + 901 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "for image restoration. In International conference on machine learning, 2023. 5", + "[10] Jifeng Dai, Haozhi Qi, Yuwen Xiong, Yi Li, Guodong Zhang, Han Hu, and Yichen Wei. Deformable convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 764-773, 2017. 3", + "[11] FFmpeg Filters deband. Accessed: Aug. 31, 2021. [online]. available:: https://ffmpeg.org/ffmpeg-filters.html#deband, 2021.7", + "[12] Bin He, Ce Wang, Boxin Shi, and Ling-Yu Duan. Mop moiré patterns using mopnet. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2424-2432, 2019. 2, 7", + "[13] Bin He, Ce Wang, Boxin Shi, and Ling-Yu Duan. Fhde 2 net: Full high definition demoiring network. In Proceedings of the European Conference on Computer Vision, pages 713-729. Springer, 2020. 6, 7", + "[14] Qin Huang, Hui Yong Kim, Wen-Jiin Tsai, Se Yoon Jeong, Jin Soo Choi, and C-C Jay Kuo. Understanding and removal of false contour in hevc compressed images. 
IEEE Transactions on Circuits and Systems for Video Technology, 28(2): 378-391, 2016. 7", + "[15] Yan Huang, Xinchang Lu, Yuhui Quan, Yong Xu, and Hui Ji. Image shadow removal via multi-scale deep retina decomposition. Pattern Recognition, 159:111126, 2025. 3", + "[16] Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In Proceedings of the European Conference on Computer Vision, pages 694–711. Springer, 2016. 5", + "[17] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. 6", + "[18] Gang Li, Di Xu, Xing Cheng, Lingyu Si, and Changwen Zheng. Simvit: Exploring a simple vision transformer with sliding windows. In 2022 IEEE International Conference on Multimedia and Expo (ICME), pages 1-6. IEEE, 2022. 3", + "[19] Yu Li, Robby T Tan, Xiaojie Guo, Jiangbo Lu, and Michael S Brown. Rain streak removal using layer priors. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2736-2744, 2016. 2", + "[20] Yizhou Li, Yusuke Monno, and Masatoshi Okutomi. Single image deraining network with rain embedding consistency and layered LSTM. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, pages 4060-4069, 2022. 7", + "[21] Jingyun Liang, Jiezhang Cao, Guolei Sun, Kai Zhang, Luc Van Gool, and Radu Timofte. Swinir: Image restoration using swin transformer. In Proceedings of the IEEE/CVF international conference on computer vision, pages 1833-1844, 2021. 3", + "[22] Fanglei Liu, Jingyu Yang, and Huanjing Yue. Moiré pattern removal from texture images via low-rank and sparse matrix decomposition. In 2015 Visual Communications and Image Processing, pages 1-4. IEEE, 2015. 2", + "[23] Jing Liu, Xin Wen, Weizhi Nie, Yuting Su, Peiguang Jing, and Xiaokang Yang. Residual-guided multiscale fusion network for bit-depth enhancement. IEEE Transactions on Circuits" + ], + "bbox": [ + 516, + 92, + 908, + 900 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "12739", + "bbox": [ + 480, + 944, + 519, + 955 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "and Systems for Video Technology, 32(5):2773-2786, 2021. 2, 3", + "[24] Lin Liu, Jianzhuang Liu, Shanxin Yuan, Gregory Slabaugh, Ales Leonardis, Wengang Zhou, and Qi Tian. Wavelet-based dual-branch network for image demoiring. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XIII 16, pages 86–102. Springer, 2020. 7", + "[25] Pengju Liu, Hongzhi Zhang, Kai Zhang, Liang Lin, and Wangmeng Zuo. Multi-level wavelet-cnn for image restoration. In Proceedings of the IEEE conference on computer vision and pattern recognition workshops, pages 773-782, 2018. 7", + "[26] Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983, 2016. 6", + "[27] Yu Luo, Yong Xu, and Hui Ji. Removing rain from a single image via discriminative sparse coding. In Proceedings of the IEEE international conference on computer vision, pages 3397-3405, 2015. 2", + "[28] Duong Hai Nguyen, Se-Ho Lee, and Chul Lee. Multiscale coarse-to-fine guided screenshot demoiring. IEEE Signal Processing Letters, 2023. 2, 3", + "[29] Yuhui Quan, Shijie Deng, Yixin Chen, and Hui Ji. Deep learning for seeing through window with raindrops. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2463-2471, 2019. 
2", + "[30] Yuhui Quan, Xuyi He, Ruotao Xu, Yong Xu, and Hui Ji. Image debanding using cross-scale invertible networks with banded deformable convolutions. Neural Networks, page 107270, 2025. 3", + "[31] Prajit Ramachandran, Niki Parmar, Ashish Vaswani, Irwan Bello, Anselm Levskaya, and Jon Shlens. Stand-alone self-attention in vision models. Advances in neural information processing systems, 32, 2019. 3, 4", + "[32] Dongwei Ren, Wangmeng Zuo, Qinghua Hu, Pengfei Zhu, and Deyu Meng. Progressive image deraining networks: A better and simpler baseline. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 3937-3946, 2019. 7", + "[33] Yujing Sun, Yizhou Yu, and Wenping Wang. Moiré photo restoration using multiresolution convolutional neural networks. IEEE Transactions on Image Processing, 27(8):4160-4172, 2018. 2, 3, 6, 7", + "[34] Chunwei Tian, Yong Xu, Zuoyong Li, Wangmeng Zuo, Lunke Fei, and Hong Liu. Attention-guided cnn for image denoising. Neural Networks, 124:117-129, 2020. 7", + "[35] Ilya O Tolstikhin, Neil Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Thomas Unterthiner, Jessica Yung, Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, et al. Mlp-mixer: An all-mlp architecture for vision. Advances in neural information processing systems, 34:24261–24272, 2021. 4", + "[36] Zhengzhong Tu, Jessie Lin, Yilin Wang, Balu Adsumilli, and Alan C Bovik. Adaptive debanding filter. IEEE Signal Processing Letters, 27:1715-1719, 2020. 7", + "[37] Ce Wang, Bin He, Shengsen Wu, Renjie Wan, Boxin Shi, and Ling-Yu Duan. Coarse-to-fine disentangling demoiréing framework for recaptured screen images. IEEE Transactions" + ], + "bbox": [ + 91, + 90, + 483, + 900 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "on Pattern Analysis and Machine Intelligence, 45(8):9439-9453, 2023. 7", + "[38] Hong Wang, Qi Xie, Qian Zhao, and Deyu Meng. A model-driven deep neural network for single image rain removal. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 3103-3112, 2020. 3, 7", + "[39] Tianyu Wang, Xin Yang, Ke Xu, Shaozhe Chen, Qiang Zhang, and Rynson WH Lau. Spatial attentive single-image deraining with a high quality real rain dataset. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 12270-12279, 2019. 6, 7", + "[40] Wenhai Wang, Jifeng Dai, Zhe Chen, Zhenhang Huang, Zhiqi Li, Xizhou Zhu, Xiaowei Hu, Tong Lu, Lewei Lu, Hongsheng Li, et al. Internimage: Exploring large-scale vision foundation models with deformable convolutions. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 14408-14419, 2023. 3", + "[41] Zhendong Wang, Xiaodong Cun, Jianmin Bao, Wengang Zhou, Jianzhuang Liu, and Houqiang Li. Uformer: A general u-shaped transformer for image restoration. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 17683-17693, 2022. 3, 7", + "[42] Xin Wen, Weizhi Nie, Jing Liu, and Yuting Su. Mrft: Multiscale recurrent fusion transformer based prior knowledge for bit-depth enhancement. IEEE Transactions on Circuits and Systems for Video Technology, 33(10):5562-5575, 2023. 3", + "[43] Zhuofan Xia, Xuran Pan, Shiji Song, Li Erran Li, and Gao Huang. Vision transformer with deformable attention. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 4794-4803, 2022. 
3", + "[44] Jie Xiao, Xueyang Fu, Aiping Liu, Feng Wu, and Zheng-Jun Zha. Image de-raining transformer. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(11):12978-12995, 2022. 3, 6, 7", + "[45] Yuwen Xiong, Zhiqi Li, Yuntao Chen, Feng Wang, Xizhou Zhu, Jiapeng Luo, Wenhai Wang, Tong Lu, Hongsheng Li, Yu Qiao, et al. Efficient deformable convnets: Rethinking dynamic and sparse operator for vision applications. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5652-5661, 2024. 3", + "[46] Shuning Xu, Binbin Song, Xiangyu Chen, Xina Liu, and Jiantao Zhou. Image demoiring in raw and srgb domains. In European Conference on Computer Vision, pages 108-124. Springer, 2025. 7", + "[47] Qiaosi Yi, Juncheng Li, Qinyan Dai, Faming Fang, Guixu Zhang, and Tieyong Zeng. Structure-preserving deraining with residue channel prior guidance. In Proceedings of the IEEE/CVF international conference on computer vision, pages 4238-4247, 2021. 7", + "[48] Xin Yu, Peng Dai, Wenbo Li, Lan Ma, Jiajun Shen, Jia Li, and Xiaojuan Qi. Towards efficient and scale-robust ultra-high-definition image demoiring. In Proceedings of the European Conference on Computer Vision, pages 646–662. Springer, 2022. 2, 3, 6, 7", + "[49] Shanxin Yuan, Radu Timofte, Gregory Slabaugh, Ales Leonardis, Bolun Zheng, Xin Ye, Xiang Tian, Yaowu Chen, Xi Cheng, Zhenyong Fu, et al. Aim 2019 challenge on image" + ], + "bbox": [ + 516, + 92, + 906, + 900 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "12740", + "bbox": [ + 480, + 944, + 519, + 955 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "demoireing: Methods and results. In 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pages 3534-3545. IEEE, 2019. 6, 7", + "[50] Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Ming-Hsuan Yang, and Ling Shao. Multi-stage progressive image restoration. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 14821-14831, 2021. 7", + "[51] Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, and Ming-Hsuan Yang. Restormer: Efficient transformer for high-resolution image restoration. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5728-5739, 2022. 2, 3, 7", + "[52] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pages 586-595, 2018. 6", + "[53] Bolun Zheng, Shanxin Yuan, Gregory Slabaugh, and Ales Leonardis. Image demoiring with learnable bandpass filters. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3636-3645, 2020. 7", + "[54] Raymond Zhou, Shahrukh Athar, Zhongling Wang, and Zhou Wang. Deep image debanding. In Proceedings of the IEEE International Conference on Image Processing, pages 1951-1955. IEEE, 2022. 6, 7, 8", + "[55] Shihao Zhou, Duosheng Chen, Jinshan Pan, Jinglei Shi, and Jufeng Yang. Adapt or perish: Adaptive sparse transformer with attentive feature refinement for image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2952-2963, 2024. 5", + "[56] Xizhou Zhu, Han Hu, Stephen Lin, and Jifeng Dai. Deformable convnets v2: More deformable, better results. 
In Proceedings of the IEEE/CVF conference on Computer Vision and Pattern Recognition, pages 9308-9316, 2019. 3", + "[57] Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, and Jifeng Dai. Deformable detr: Deformable transformers for end-to-end object detection. arXiv preprint arXiv:2010.04159, 2020. 3" + ], + "bbox": [ + 91, + 90, + 483, + 654 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "12741", + "bbox": [ + 480, + 945, + 517, + 955 + ], + "page_idx": 10 + } +] \ No newline at end of file diff --git a/2025/A Universal Scale-Adaptive Deformable Transformer for Image Restoration across Diverse Artifacts/36bcd46e-32c4-4da4-aff4-67b68f83d335_model.json b/2025/A Universal Scale-Adaptive Deformable Transformer for Image Restoration across Diverse Artifacts/36bcd46e-32c4-4da4-aff4-67b68f83d335_model.json new file mode 100644 index 0000000000000000000000000000000000000000..1420e3f94a14e720b756b2787c968f7f508205c0 --- /dev/null +++ b/2025/A Universal Scale-Adaptive Deformable Transformer for Image Restoration across Diverse Artifacts/36bcd46e-32c4-4da4-aff4-67b68f83d335_model.json @@ -0,0 +1,2983 @@ +[ + [ + { + "type": "header", + "bbox": [ + 0.107, + 0.003, + 0.182, + 0.043 + ], + "angle": 0, + "content": "CVF" + }, + { + "type": "header", + "bbox": [ + 0.237, + 0.001, + 0.812, + 0.047 + ], + "angle": 0, + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." + }, + { + "type": "title", + "bbox": [ + 0.117, + 0.13, + 0.882, + 0.175 + ], + "angle": 0, + "content": "A Universal Scale-Adaptive Deformable Transformer for Image Restoration across Diverse Artifacts" + }, + { + "type": "text", + "bbox": [ + 0.255, + 0.203, + 0.335, + 0.221 + ], + "angle": 0, + "content": "Xuyi He" + }, + { + "type": "text", + "bbox": [ + 0.379, + 0.204, + 0.482, + 0.221 + ], + "angle": 0, + "content": "Yuhui Quan" + }, + { + "type": "text", + "bbox": [ + 0.527, + 0.204, + 0.646, + 0.221 + ], + "angle": 0, + "content": "Ruotao Xu\(^{2,3}\)*" + }, + { + "type": "text", + "bbox": [ + 0.682, + 0.204, + 0.74, + 0.22 + ], + "angle": 0, + "content": "Hui Ji" + }, + { + "type": "text", + "bbox": [ + 0.159, + 0.222, + 0.838, + 0.293 + ], + "angle": 0, + "content": "\(^{1}\)School of Computer Science and Engineering, South China University of Technology \(^{2}\)Institute for Super Robotics, South China University of Technology \(^{3}\)Key Laboratory of Large-Model Embodied-Intelligent Humanoid Robot \(^{4}\)Department of Mathematics, National University of Singapore" + }, + { + "type": "text", + "bbox": [ + 0.127, + 0.296, + 0.865, + 0.31 + ], + "angle": 0, + "content": "csxuyihe@mail.scut.edu.cn, csyhquan@scut.edu.cn, rtxu@superobots.com, matjh@nus.edu.sg" + }, + { + "type": "title", + "bbox": [ + 0.248, + 0.345, + 0.327, + 0.361 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.378, + 0.486, + 0.605 + ], + "angle": 0, + "content": "Structured artifacts are semi-regular, repetitive patterns that closely intertwine with genuine image content, making their removal highly challenging. In this paper, we introduce the Scale-Adaptive Deformable Transformer, a network architecture specifically designed to eliminate such artifacts from images. 
The proposed network features two key components: a scale-enhanced deformable convolution module for modeling scale-varying patterns with abundant orientations and potential distortions, and a scale-adaptive deformable attention mechanism for capturing long-range relationships among repetitive patterns with different sizes and non-uniform spatial distributions. Extensive experiments show that our network consistently outperforms state-of-the-art methods in diverse artifact removal tasks, including image deraining, image demoiring, and image debanding." + }, + { + "type": "title", + "bbox": [ + 0.092, + 0.637, + 0.222, + 0.652 + ], + "angle": 0, + "content": "1. Introduction" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.664, + 0.484, + 0.83 + ], + "angle": 0, + "content": "Structured artifacts, in contrast to random noise, are repetitive patterns with similar appearance. Examples include Moiré patterns caused by overlapping pattern interference, rain streaks formed during image acquisition in rainy conditions, and banding effects resulting from color quantization. See Figure 1 for an illustration. These artifacts often display similar appearance that repeat in a quasi-periodic way over large image regions. However, they can differ in size, intensity, orientation, and may exhibit certain shape distortions. Additionally, their characteristics can vary across different images due to changes in image content and capturing." + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.831, + 0.484, + 0.862 + ], + "angle": 0, + "content": "Removing structured artifacts has many applications. Eliminating moiré patterns improves digital photography" + }, + { + "type": "image", + "bbox": [ + 0.518, + 0.346, + 0.626, + 0.429 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.634, + 0.346, + 0.714, + 0.429 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.722, + 0.346, + 0.903, + 0.429 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.518, + 0.432, + 0.627, + 0.502 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.633, + 0.432, + 0.714, + 0.502 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.722, + 0.432, + 0.904, + 0.502 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.513, + 0.512, + 0.906, + 0.54 + ], + "angle": 0, + "content": "Figure 1. Structured artifacts with varying orientations, scales, and may involve warping effect. From left to right: moiré, rain, band." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.568, + 0.907, + 0.719 + ], + "angle": 0, + "content": "and screen captures. Rain streak removal improves the reliability of outdoor vision systems in rainy conditions, i.e., those used in autonomous driving and video surveillance. Addressing banding artifacts is necessary in professional imaging and printing to achieve smooth gradients and high dynamic range. Additionally, structured artifact removal finds applications in medical imaging, industrial quality inspection, and consumer electronics, enhancing image quality and reliability. All these highlight a broad impact of structured artifacts removal across many real-world applications." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.72, + 0.909, + 0.901 + ], + "angle": 0, + "content": "Structured artifact removal presents significant challenges that differ from random noise removal. 
The primary difficulties arise due to the quasi-periodic nature of these artifacts and their resemblance to authentic repeating image patterns (e.g., textures). These artifacts often intertwine with natural image features, creating complex overlapping structures that are difficult to disentangle from genuine image content. For instance, moiré patterns can closely mimic textile textures, and banding effects may align with natural linear features. Additionally, these artifacts often display noticeable variations in pattern size and geometric shape, further complicating their identification and removal." + }, + { + "type": "page_footnote", + "bbox": [ + 0.11, + 0.876, + 0.236, + 0.888 + ], + "angle": 0, + "content": "*Corresponding author." + }, + { + "type": "page_footnote", + "bbox": [ + 0.116, + 0.889, + 0.463, + 0.9 + ], + "angle": 0, + "content": "Code is available at: https://github.com/csxyhe/SADT" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.518, + 0.957 + ], + "angle": 0, + "content": "12731" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.092, + 0.091, + 0.367, + 0.108 + ], + "angle": 0, + "content": "1.1. Challenges and Existing Works" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.113, + 0.486, + 0.339 + ], + "angle": 0, + "content": "Traditional approaches, relying on handcrafted priors to distinguish structured artifacts from genuine image features, often struggle to handle variations in these artifacts, resulting in limited effectiveness for artifacts with noticeable variations. While deep learning methods have shown promising performance in removing these artifacts, they also face notable challenges. Early approaches are based on convolutional neural networks (CNNs) [1, 12, 23, 28, 29, 33, 48], which have limited capacity on capturing long-range relationships due to their local receptive fields. However, the quasi-periodicity of local patterns with certain similarities is a crucial cue for identifying structured artifacts. Thus, the design of the NN for removing structured artifacts removal must be capable of effectively exploiting long-range relationships among local structures to accurately identify these artifacts." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.34, + 0.485, + 0.536 + ], + "angle": 0, + "content": "Recent works addressed this limitation by leveraging transformers with self-attention mechanisms [5, 6, 51]. However, challenges remain. The significant variations in the geometric shapes and pattern sizes of structured artifacts make it challenging to synthesize sufficient training data to have a comprehensive coverage of all possible variations. In addition, training NNs on very large datasets can be also computationally expensive. Consequently, relying solely on extensive datasets may not lead to robust generalization. To conclude, an effective transformer-based NN for structured artifacts removal should be designed with deformable capacity and scale adaptability, enabling them to generalize effectively to unseen data even with limited training samples." + }, + { + "type": "title", + "bbox": [ + 0.092, + 0.543, + 0.208, + 0.558 + ], + "angle": 0, + "content": "1.2. Main Idea" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.565, + 0.486, + 0.746 + ], + "angle": 0, + "content": "An effective NN for structured artifact removal requires specialized mechanisms that can efficiently adapt to the varying scales and distortions of these artifacts, as well as exploit long-range dependencies among local structures. 
In this paper, we propose a transformer-based NN designed with specific optimization for handling scale variations and distortions of structured artifacts. Our proposed architecture, the Scale-Adaptive Deformable Transformer (SADT), leverages the transformer's capacity to capture long-range dependencies while incorporating new modules for scale adaptation and deformable transformations, enabling effective removal of structured artifacts with noticeable variations." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.75, + 0.486, + 0.901 + ], + "angle": 0, + "content": "Scale-enhanced deformable convolution (SEDC): Structured artifacts are semi-regular, exhibiting repetitive patterns with similar appearances but noticeable variations in size, orientation, and shape distortion. Classic convolution with fixed geometric structures struggle to capture such variable patterns effectively. Existing deformable convolutions offer some flexibility through spatial adaptations but remain ineffective in handling large-scale and orientation variations. To address these issues, we introduce the SEDC module, a convolution module designed to better manage these variations." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.092, + 0.908, + 0.228 + ], + "angle": 0, + "content": "The cascaded deformable convolutions are used to generate intermediate features from different perspectives, scales, and orientations, enabling flexible sampling and effective modeling of warped patterns. Furthermore, the subsequent Spatial-Channel Mixing Convolution (SCMC) with large-scale convolution kernel is introduced, which helps capture broader contextual information while preserving positional cues. Finally, the parallel features are adaptively aggregated for modeling scale-varying and deformable patterns." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.232, + 0.909, + 0.367 + ], + "angle": 0, + "content": "Scale-adaptive deformable attention (SADA): Structured artifacts are repetitive patterns spanning over large image regions. These patterns are quasi-periodic with non-uniform spatial distribution and size variation. While transformer is effective in capturing such repetitive patterns through attention mechanisms, the significant variations in size and non-uniform spatial distribution of these patterns demand better scale adaptability and more effective sampling within the attention mechanisms." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.368, + 0.909, + 0.49 + ], + "angle": 0, + "content": "This motivates us to propose Scale-Adaptive Deformable Attention (SADA), which assigns multi-scale multi-head mechanism for capturing long-range dependencies within a semantic layer. Moreover, SADA introduces deformable-sampling offset to effectively handle the non-uniform spatial distribution of repetitive patterns. These two techniques enable local self-attention to better capture long-range quasi-periodic patterns through selective information aggregation." + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.5, + 0.695, + 0.515 + ], + "angle": 0, + "content": "1.3. Our Contributions" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.523, + 0.854, + 0.537 + ], + "angle": 0, + "content": "In summary, our contributions are listed as follows:" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.539, + 0.907, + 0.568 + ], + "angle": 0, + "content": "- We propose the SEDC module to model scale-varying artifacts with abundant orientations and potential distortions." 
+ }, + { + "type": "text", + "bbox": [ + 0.513, + 0.569, + 0.907, + 0.629 + ], + "angle": 0, + "content": "- We introduce the SADA module, enhancing local selfattention by incorporating a multi-scale multi-head mechanism and deformable technique, to capture long-range contextual information of quasi-periodic patterns." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.63, + 0.905, + 0.659 + ], + "angle": 0, + "content": "- Leveraging on SEDC and SADA, we construct the transformer-based SADT for structured artifact removal." + }, + { + "type": "list", + "bbox": [ + 0.513, + 0.539, + 0.907, + 0.659 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.66, + 0.907, + 0.705 + ], + "angle": 0, + "content": "Extensive experiments on diverse artifact removal tasks, including image demoiring, deraining, and debanding, showed that SADT achieves the state-of-the-art performance." + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.72, + 0.655, + 0.736 + ], + "angle": 0, + "content": "2. Related Work" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.75, + 0.909, + 0.903 + ], + "angle": 0, + "content": "Non-learning structured artifact removal method: Traditional methods rely on pre-defined priors on structured artifacts to distinguish them from genuine image content. For moiré pattern removal from texture images, Liu et al. [22] proposed using a low rank prior for textures and a sparse prior of Moiré patterns in the discrete cosine transform to separate two. For removing rain streak, Luo et al. [27] separates the rain streak and background layers via discriminative sparse coding. Li et al. [19] introduces a patch-based approach leveraging Gaussian mixture model priors to separate" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "12732" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.092, + 0.486, + 0.138 + ], + "angle": 0, + "content": "rain streak and background layers. These handcrafted priors are often overly simplistic and tend to fail in removing artifacts with large variations in scale, orientation, and shape." + }, + { + "type": "text", + "bbox": [ + 0.093, + 0.142, + 0.486, + 0.549 + ], + "angle": 0, + "content": "Multi-scale deep learning methods for structured artifact removal: There is extensive literature on deep learning for structured artifact removal. Here, we focus exclusively on the most relevant works employing multi-scale strategies to address artifact patterns of varying scales. To remove Moiré patterns from images, Sun et al. [33] proposed a multiresolution encoder-decoder architecture designed to model Moiré patterns at varying scales. Yu et al. [48] introduced a semantic-aligned scale-aware module that combines multiscale features through an attention mechanism to reduce scale discrepancies in Moiré patterns. Nguyen et al. [28] developed a multiscale guided restoration block targeting both low- and high-frequency noise. For deraining, Wang et al. [38] proposed a wavelet-inspired multi-level module for rain removal. For debanding, Liu et al. [23] introduced a dual-branch depthwise group fusion module to capture both inter-scale and intra-scale correlations of banding patterns. Quan et al. [30] introduced a cross-Scale invertible NN with deformable convolutions to handle scale variations of banding artifacts. For shadow removal, built upon the Retinex decomposition model, Huang et al. [15] proposed a neural network with a multi-scale structure. 
These methods have shown promising results in structured artifact removal tasks. However, these methods neither fully exploit the repetitive nature of artifacts for accurate separation from genuine image structures, nor do they effectively address the significant orientation or shape variations of artifacts in their designs." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.553, + 0.486, + 0.734 + ], + "angle": 0, + "content": "Transformer for removing structured artifacts: Transformer is an effective architecture for modeling global and local relationship of local structures. In image processing, an image is partitioned into small patches to fit the transformer architecture [3]. Many transformer methods have been proposed for different structured artifact removal tasks. For deraining, Xiao et al. [44] introduced a combination of window-based and global self-attention mechanisms. Chen et al. [5] developed sparse channel self-attention to selectively retain key values. For image debanding, Wen et al. [42] utilized local self-attention with varying window sizes across different heads to capture features at multiple scales." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.735, + 0.486, + 0.902 + ], + "angle": 0, + "content": "However, these transformer methods have not fully leveraged self-attention's ability to capture long-range dependencies of repetitive patterns. Such weakness arises from the high computational costs of standard transformers due to quadratic scaling with patch numbers. To reduce costs, these transformers rely on techniques such as window-based self-attention [21, 41], channel-only self-attention in Restormer [51], and local self-attention (LSA) [18, 31]. While computationally efficient, these methods' limited spatial perceptive fields weaken their ability to capture long-range information. To address these limitations, the proposed SADT extends the" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.092, + 0.908, + 0.152 + ], + "angle": 0, + "content": "LSA with scale-adaptive deformable sampling to effectively capture long-range dependencies of repetitive artifacts with varying scales, orientations, and distortions. This design enables SADT to model structured artifacts more effectively." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.156, + 0.909, + 0.382 + ], + "angle": 0, + "content": "Deformable convolution and attention: Deformable convolutions [10, 40, 45, 56] introduce learnable offsets to convolution kernels, enabling adaptive feature sampling to handle geometric variations and spatial transformations. Deformable attention [2, 43, 57] extends this adaptability to the self-attention mechanism, allowing transformers to focus on relevant spatial locations and better capture complex patterns. Zhu et al. [57] proposed a multi-scale deformable attention module for generating a feature pyramid and allocating several keys per query to capture multi-scale features. Xia et al. [43] introduced the deformable attention transformer, which learns shared deformed points for efficient computations. Cao et al. [2] developed a reference-based deformable attention module to enhance low-resolution feature representations using multiple relevant features." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.383, + 0.911, + 0.55 + ], + "angle": 0, + "content": "The design of deformable self-attention in our work differs significantly from that of [43]. Xia et al. 
[43] combines self-attention with deformable convolution, computing the relevance between input and reference images for offset prediction. In contrast, our method integrates deformable sampling within self-attention by first predicting offsets and then calculating the similarity between predicted keys and the query. It is more effective in modeling long-range dependence of patterns with varying appearances. Furthermore, we enhance the scale adaptability of deformable convolution and attention to handle scale variations among artifacts." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.56, + 0.651, + 0.577 + ], + "angle": 0, + "content": "3. Methodology" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.585, + 0.909, + 0.737 + ], + "angle": 0, + "content": "In this section, we introduce the two key components of the proposed transformer: SEDC, an advanced deformable convolution module designed to capture local patterns with large scale and orientation variations, and SADA, a deformable attention module that exploits long-range relationships among repetitive patterns with varying sizes and non-uniform spatial distributions. We then present the proposed transformer, SADT, which integrates SEDC and SADA within a multi-input multi-output framework, together with the training loss." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.743, + 0.868, + 0.758 + ], + "angle": 0, + "content": "3.1. Scale-Enhanced Deformable Convolution" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.765, + 0.909, + 0.87 + ], + "angle": 0, + "content": "SEDC is designed to effectively capture scale-varying and deformable patterns. The module consists of two key components: the cascaded DCNs [56] for modeling complex patterns with distortions, and the parallel Spatial-Channel Mixing Convolution (SCMC) layers for capturing patterns with large scale variations. The pipeline of SEDC is illustrated in Figure 2." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.871, + 0.908, + 0.901 + ], + "angle": 0, + "content": "To capture patterns of large size, we introduce the SCMC layer to expand the receptive field. For each \(i\)-th" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.52, + 0.957 + ], + "angle": 0, + "content": "12733" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.117, + 0.089, + 0.462, + 0.172 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.194, + 0.181, + 0.381, + 0.196 + ], + "angle": 0, + "content": "Figure 2. SEDC configuration." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.219, + 0.484, + 0.286 + ], + "angle": 0, + "content": "channel of an input feature \( \mathbf{X} \in \mathbb{R}^{H \times W \times C} \), SCMC first partitions each feature map into several non-overlapping patches of size \( K \times K \), and subsequently reshapes it into a matrix \( \mathbf{X}_i \in \mathbb{R}^{K^2 \times \frac{HW}{K^2}} \). 
Denote such an operation by" + }, + { + "type": "equation", + "bbox": [ + 0.131, + 0.293, + 0.443, + 0.314 + ], + "angle": 0, + "content": "\\[\n\\Psi : \\boldsymbol {X} \\in \\mathbb {R} ^ {H \\times W \\times C} \\to \\left\\{\\boldsymbol {X} _ {i} \\in \\mathbb {R} ^ {K ^ {2} \\times \\frac {H W}{K ^ {2}}}, \\forall i \\right\\},\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.324, + 0.484, + 0.357 + ], + "angle": 0, + "content": "Then, a learnable linear transform \\( \\mathbf{W} \\in \\mathbb{R}^{K^2 \\times K^2} \\) is applied to each \\( \\mathbf{X}_i \\): for each \\( \\mathbf{X}_i \\)," + }, + { + "type": "equation", + "bbox": [ + 0.132, + 0.364, + 0.443, + 0.385 + ], + "angle": 0, + "content": "\\[\n\\Phi_ {\\boldsymbol {W}}: \\boldsymbol {X} _ {i} \\in \\mathbb {R} ^ {K ^ {2} \\times \\frac {H W}{K ^ {2}}} \\to W \\boldsymbol {X} _ {i} \\in \\mathbb {R} ^ {K ^ {2} \\times \\frac {H W}{K ^ {2}}},\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.395, + 0.484, + 0.471 + ], + "angle": 0, + "content": "where \\(W\\) is shared across different \\(X_{i}\\) s. Afterward, the output of \\(\\Phi_W\\) is reshaped and concatenated to the original dimension \\((H,W,C)\\), using the inversion of \\(\\Psi\\). A gating mechanism is then applied, performing an element-wise product between the original input and processed feature:" + }, + { + "type": "equation", + "bbox": [ + 0.152, + 0.48, + 0.483, + 0.499 + ], + "angle": 0, + "content": "\\[\n\\operatorname {S C M C} (\\boldsymbol {X}) = \\boldsymbol {X} \\odot \\Psi^ {- 1} \\left(\\Phi_ {\\boldsymbol {W}} \\left(\\Psi (\\boldsymbol {X})\\right)\\right). \\tag {1}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.508, + 0.485, + 0.674 + ], + "angle": 0, + "content": "SCMC can effectively capture large pattern with minimal computation overhead. However, similar to other MLP-based architectures (e.g., MLP Mixer [35]), its non-overlapping partitions may struggle with patterns that span across regions, particularly when intersected by partition boundaries. This issue will be mitigated by the cascaded DCNs, which helps the aggregation of relevant non-local information into local windows, and provides features with different scales into parallel SCMCs. The efficient DCN utilizes a PConv operation [4] followed by a pointwise convolution to generate deformable offsets and modulation scalars." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.674, + 0.484, + 0.795 + ], + "angle": 0, + "content": "The complete pipeline of SEDC is shown in Fig. 2. In the main branch, input feature channels are first reduced by a factor of 4. The cascaded DCN processes corresponding features, while parallel SCMCs handle the original input features. Finally, features with different receptive field shapes are merged via pointwise convolution. Overall, this design effectively captures local patterns with varying sizes, orientations, geometric distortions, and warping effects." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.804, + 0.416, + 0.82 + ], + "angle": 0, + "content": "3.2. Scale-Adaptive Deformable Attention" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.826, + 0.484, + 0.901 + ], + "angle": 0, + "content": "SADA integrates the multi-scale multi-head mechanism with deformable-sampled offsets to effectively model long-range dependencies in repetitive artifact patterns. 
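Before detailing SADA, the SCMC operation of Eq. (1) above can be made concrete with a short PyTorch sketch. This is a minimal illustration written from the definitions of \(\Psi\), \(\Phi_W\), and the gating product, not the released implementation; modeling the shared transform \(W\) with nn.Linear, applied identically to every channel, is an assumption of this sketch.

```python
import torch
import torch.nn as nn

class SCMC(nn.Module):
    """Minimal sketch of Eq. (1): SCMC(X) = X * Psi^{-1}(Phi_W(Psi(X)))."""
    def __init__(self, K: int = 8):
        super().__init__()
        self.K = K
        # Shared learnable transform W in R^{K^2 x K^2}, reused for every
        # patch and every channel.
        self.W = nn.Linear(K * K, K * K, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, C, H, W = x.shape
        K = self.K
        assert H % K == 0 and W % K == 0, "H and W must be divisible by K"
        # Psi: partition each channel map into non-overlapping KxK patches
        # and flatten every patch into a K^2-dimensional vector.
        p = x.reshape(B, C, H // K, K, W // K, K).permute(0, 1, 2, 4, 3, 5)
        p = p.reshape(B, C, (H // K) * (W // K), K * K)
        # Phi_W: mix the K^2 intra-patch positions with the shared W.
        p = self.W(p)
        # Psi^{-1}: fold the transformed patches back to the (H, W) layout.
        p = p.reshape(B, C, H // K, W // K, K, K).permute(0, 1, 2, 4, 3, 5)
        p = p.reshape(B, C, H, W)
        # Gating: element-wise product with the original input.
        return x * p

# Quick shape check with the paper's partition size K = 8.
y = SCMC(K=8)(torch.randn(1, 32, 64, 64))
assert y.shape == (1, 32, 64, 64)
```

Because W only mixes positions inside a patch, patterns crossing patch boundaries are not covered by this layer alone, which is exactly the limitation the cascaded DCNs are said to mitigate.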
Given the input feature map \( \mathbf{X} \in \mathbb{R}^{H \times W \times C} \), we first generate query \( Q \), key \( K \) and value \( V \) projections, enriched with local context" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.092, + 0.907, + 0.168 + ], + "angle": 0, + "content": "by applying pointwise convolutions to aggregate pixel-wise cross-channel context, followed by \(3 \times 3\) depth-wise convolutions to encode channel-wise spatial context. A multi-head mechanism is then used, where \(Q, K, V\) are split into \(S\) parts along channels for subsequent processing." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.171, + 0.907, + 0.232 + ], + "angle": 0, + "content": "Revisiting standard LSA: Omitting the head index, let \( V_{u} \), \( Q_{u} \), \( K_{u} \) denote the \( u \)-th feature vector in \( V \), \( Q \), \( K \), and \( u \) denote the related spatial location. In the standard LSA [31], for query \( Q_{u} \), the output \( Y_{u} \) is defined as" + }, + { + "type": "equation", + "bbox": [ + 0.513, + 0.24, + 0.913, + 0.294 + ], + "angle": 0, + "content": "\[\n\boldsymbol {Y} _ {u} = \sum_ {v \in \mathcal {N} (u)} \omega_ {u \rightarrow v} \boldsymbol {V} _ {v}, \quad \omega_ {u \rightarrow v} = \frac {\exp \left(\boldsymbol {Q} _ {u} ^ {\top} \boldsymbol {K} _ {v}\right)}{\sum_ {w \in \mathcal {N} (u)} \exp \left(\boldsymbol {Q} _ {u} ^ {\top} \boldsymbol {K} _ {w}\right)}, \tag {2}\n\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.294, + 0.907, + 0.355 + ], + "angle": 0, + "content": "where \(\mathcal{N}(u) = u + \{-1,0,1\}^2\) denotes the index set of neighboring points of the position \(u\). The size of \(\mathcal{N}(u)\) is rather small, which results in poor capability to capture long-range dependencies at varying locations." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.359, + 0.909, + 0.51 + ], + "angle": 0, + "content": "Multi-scale multi-head mechanism for LSA: To leverage the long-range dependencies of patterns, we introduce multi-dilation initial grids, denoted as \(\Delta^{(s)} = \{-M_s, 0, +M_s\}^2\), to construct a multi-scale neighborhood. Specifically, for the \(s\)-th head, the neighborhood is denoted as \(\mathcal{N}^s(u) = u + \Delta^{(s)}\). By assigning different sampling scales to each attention head, a broad and sparse receptive field, as shown in Figure 3(c), is formed through the aggregation of information across heads, enabling the layer to capture long-range contextual information more effectively." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.513, + 0.909, + 0.59 + ], + "angle": 0, + "content": "Deformable-sampling offset for LSA: To handle the non-uniform distribution of structured artifacts, we integrate deformable sampling into LSA by predicting offsets \(\delta_u^s\) for each position \(u\) and each scale \(s\). The neighboring set is given by" + }, + { + "type": "equation", + "bbox": [ + 0.622, + 0.598, + 0.796, + 0.617 + ], + "angle": 0, + "content": "\[\n\bar {\mathcal {N}} ^ {s} (u) = u + \Delta^ {(s)} + \delta_ {u} ^ {s},\n\]" + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.625, + 0.907, + 0.657 + ], + "angle": 0, + "content": "Let \( D_u^s \) denote \( \delta_u^s \) in vector form, and let \( D^s \) combine all \( D_u^s \). 
Then, the offsets \( D^s \) are extracted from \( Q \) as" + }, + { + "type": "equation", + "bbox": [ + 0.601, + 0.666, + 0.907, + 0.687 + ], + "angle": 0, + "content": "\[\n\boldsymbol {D} ^ {s} = \phi^ {s} (\boldsymbol {Q}) \in \mathbb {R} ^ {H \times W \times 2 | \Delta^ {(s)} |}, \tag {3}\n\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.696, + 0.907, + 0.801 + ], + "angle": 0, + "content": "Here, \(\phi^s\) is implemented using PConv [4] followed by a pointwise convolution. Instead of a single head \(Q_s\), the full \(Q\) is fed to \(\phi^s\) for a holistic query view, enabling consistent head-specific adaptations. Learnable deformable offsets \(D^s\) adjust sampling locations within a head, enhancing the relevance of sampled key and value to the central query and improving the layer's handling of complex patterns." + }, + { + "type": "text", + "bbox": [ + 0.532, + 0.802, + 0.892, + 0.817 + ], + "angle": 0, + "content": "Finally, the \(s\)-th head output of SADA is calculated by" + }, + { + "type": "equation", + "bbox": [ + 0.624, + 0.826, + 0.905, + 0.862 + ], + "angle": 0, + "content": "\[\n\boldsymbol {Y} _ {u} ^ {s} = \sum_ {v \in \bar {\mathcal {N}} ^ {s} (u)} \omega_ {u \rightarrow v} \boldsymbol {V} _ {v} ^ {s}. \tag {4}\n\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.871, + 0.907, + 0.902 + ], + "angle": 0, + "content": "To mitigate the negative effects caused by the statistics-based offset generator \(\phi^s\), the low-relevance points, referred to" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "12734" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.14, + 0.096, + 0.368, + 0.368 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.214, + 0.371, + 0.303, + 0.381 + ], + "angle": 0, + "content": "(a) Overall SADA." + }, + { + "type": "image", + "bbox": [ + 0.373, + 0.105, + 0.653, + 0.27 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.396, + 0.272, + 0.631, + 0.283 + ], + "angle": 0, + "content": "(b) Deformable sampling process of the \(s\)-th head." + }, + { + "type": "image", + "bbox": [ + 0.371, + 0.288, + 0.661, + 0.363 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.668, + 0.105, + 0.861, + 0.207 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.689, + 0.208, + 0.836, + 0.219 + ], + "angle": 0, + "content": "(c) Initial sampling grids \(\{\Delta^{(s)}\}\)" + }, + { + "type": "image", + "bbox": [ + 0.668, + 0.221, + 0.86, + 0.37 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.674, + 0.371, + 0.849, + 0.381 + ], + "angle": 0, + "content": "(d) The process instance for one head." + }, + { + "type": "image_caption", + "bbox": [ + 0.089, + 0.391, + 0.908, + 0.434 + ], + "angle": 0, + "content": "Figure 3. Overview of SADA: (a) SADA employs a multi-scale multi-head mechanism to aggregate (c) a broad, sparse receptive field for capturing long-range context, while utilizing (b) deformable sampling offsets to model complex patterns. (d) A process instance of SADA within a head." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.46, + 0.484, + 0.552 + ], + "angle": 0, + "content": "as invalid points, are filtered out in the weighted sum with a low \(\omega_{u\rightarrow v}\) in Eq. 4. The remaining points are considered valid. 
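To make the sampling procedure of Eqs. (2)-(4) concrete, below is a self-contained PyTorch sketch of what a single SADA head computes. It is an illustration under simplifying assumptions: the offset predictor \(\phi^s\) is a plain depthwise-plus-pointwise convolution rather than the PConv-based module used in the paper, bilinear interpolation via grid_sample stands in for sampling keys and values at the non-integer positions \(u + \Delta^{(s)} + \delta_u^s\), and invalid points are only down-weighted by the softmax rather than explicitly filtered.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SADAHead(nn.Module):
    """Sketch of one SADA head: a 3x3 grid dilated by M_s, shifted by
    predicted offsets, samples 9 keys/values around every query (Eq. 4)."""
    def __init__(self, dim_full: int, dim_head: int, M_s: int):
        super().__init__()
        # Initial grid Delta^(s) = {-M_s, 0, +M_s}^2, stored as 9 (x, y) points.
        g = torch.tensor([-float(M_s), 0.0, float(M_s)])
        gy, gx = torch.meshgrid(g, g, indexing="ij")
        self.register_buffer("grid", torch.stack([gx, gy], -1).reshape(9, 2))
        # phi^s: predicts 2*|Delta^(s)| offset channels from the *full* query Q.
        self.phi = nn.Sequential(
            nn.Conv2d(dim_full, dim_full, 3, padding=1, groups=dim_full),
            nn.Conv2d(dim_full, 2 * 9, 1),
        )
        self.scale = dim_head ** -0.5

    def forward(self, q_full, q, k, v):          # q, k, v: (B, dim_head, H, W)
        B, C, H, W = q.shape
        offs = self.phi(q_full).reshape(B, 9, 2, H, W)        # delta_u^s (Eq. 3)
        # Absolute sampling positions u + Delta^(s) + delta_u^s, in pixels.
        ys = torch.arange(H, device=q.device).view(1, 1, H, 1)
        xs = torch.arange(W, device=q.device).view(1, 1, 1, W)
        locx = xs + self.grid[:, 0].view(1, 9, 1, 1) + offs[:, :, 0]
        locy = ys + self.grid[:, 1].view(1, 9, 1, 1) + offs[:, :, 1]
        # Normalize to [-1, 1] and bilinearly sample keys/values there.
        gridn = torch.stack([locx / (W - 1) * 2 - 1,
                             locy / (H - 1) * 2 - 1], dim=-1)  # (B, 9, H, W, 2)
        gridn = gridn.reshape(B, 9 * H, W, 2)
        ks = F.grid_sample(k, gridn, align_corners=True).reshape(B, C, 9, H, W)
        vs = F.grid_sample(v, gridn, align_corners=True).reshape(B, C, 9, H, W)
        # omega_{u->v}: softmax over the 9 sampled keys of each query (cf. Eq. 2).
        att = (q.unsqueeze(2) * ks).sum(dim=1, keepdim=True) * self.scale
        att = att.softmax(dim=2)
        return (att * vs).sum(dim=2)                           # Y^s (Eq. 4)

# Example: one head's slice of Q/K/V (dim_head = 8) plus the full query.
q = k = v = torch.randn(1, 8, 32, 32)
q_full = torch.randn(1, 32, 32, 32)
out = SADAHead(dim_full=32, dim_head=8, M_s=3)(q_full, q, k, v)
```

Running four such heads with M_s in {1, 3, 5, 7} and concatenating their outputs along channels yields the broad, sparse aggregate receptive field of Figure 3(c).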
The output \\(Y^{s}\\) s are combined into \\(\\pmb{Y}\\) along channels and fed into a feed-forward network (FFN) for subsequent processing. The FFN is implemented by FRFN [55]. See Fig. 3 for an illustration." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.561, + 0.428, + 0.576 + ], + "angle": 0, + "content": "3.3. SADT for Structured Artifact Removal" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.583, + 0.484, + 0.734 + ], + "angle": 0, + "content": "Building on SEDC and SADA, we propose a transformer, called SADT, for structured artifact removal. SADT employs a multi-input multi-output (MIMO) strategy, as shown in Fig. 4. Given a degraded image \\( \\mathbf{Y} \\in \\mathbb{R}^{W \\times H \\times 3} \\), SADT initially extracts shallow features of size \\( H \\times W \\times C \\) via a convolution layer. These features then pass through 4 encoder blocks, capturing fine to coarse scales, with each block comprising \\( N_{i} \\) cascaded SADA layers to flexibly model long-range dependencies of repetitive artifacts and a pair of SEDC layers in the end for capturing scale-varying features." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.735, + 0.484, + 0.902 + ], + "angle": 0, + "content": "Within the encoder, channels are expanded and spatial resolution is reduced. Following [8, 9], features from downsampled degraded images are merged into the main path through a fusion module and subsequently processed by 3 decoders to progressively reconstruct the image from coarse to fine scales. Features in the decoder are concatenated with those from the encoder via skip connections, followed by a convolutional layer for channel dimension adjustment. The convolutional layer followed the decoder generates a residual image, which is added to the corresponding downsampled input to restore the image. The image at the finest scale" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.46, + 0.907, + 0.492 + ], + "angle": 0, + "content": "serves as the final output, while other scales also contribute to the loss function during training." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.508, + 0.787, + 0.525 + ], + "angle": 0, + "content": "3.4. Loss Function for NN Training" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.532, + 0.906, + 0.608 + ], + "angle": 0, + "content": "A multi-scale loss is used for training. Let \\( O_{t} \\) be the output from the \\( t \\)-th scale decoder block, while \\( X^{gt} \\) be the ground truth. Besides standard \\( L_{1} \\) fitting loss, we also include an auxiliary loss function that minimizes the distance between input and output in feature space, tailed for each task." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.614, + 0.906, + 0.644 + ], + "angle": 0, + "content": "Image demoiréing: The perceptual loss [16] \\(\\mathcal{L}_p\\) is used as an auxiliary loss, and the overall loss is" + }, + { + "type": "equation", + "bbox": [ + 0.534, + 0.66, + 0.906, + 0.702 + ], + "angle": 0, + "content": "\\[\n\\mathcal {L} = \\sum_ {t = 1} ^ {3} \\mathcal {L} _ {1} \\left(\\boldsymbol {O} _ {t}, \\boldsymbol {X} _ {\\downarrow 2 ^ {t - 1}} ^ {g t}\\right) + \\lambda_ {p} \\cdot \\mathcal {L} _ {p} \\left(\\boldsymbol {O} _ {t}, \\boldsymbol {X} _ {\\downarrow 2 ^ {t - 1}} ^ {g t}\\right), \\tag {5}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.717, + 0.904, + 0.734 + ], + "angle": 0, + "content": "where \\(\\lambda_p > 0\\) is a hyper-parameter, set to 1 in experiments." 
+ }, + { + "type": "text", + "bbox": [ + 0.512, + 0.738, + 0.907, + 0.813 + ], + "angle": 0, + "content": "Image debanding/deraining: Unlike moiré patterns, band effects and rain streaks exhibit more pronounced stripe-like appearances, yielding strong responses in the frequency domain. Thus, the frequency loss \\(\\mathcal{L}_f\\) is introduced as an additional loss term, and the overall loss is" + }, + { + "type": "equation", + "bbox": [ + 0.532, + 0.83, + 0.905, + 0.87 + ], + "angle": 0, + "content": "\\[\n\\mathcal {L} = \\sum_ {t = 1} ^ {3} \\mathcal {L} _ {1} \\left(\\boldsymbol {O} _ {t}, \\boldsymbol {X} _ {\\downarrow 2 ^ {t - 1}} ^ {g t}\\right) + \\lambda_ {f} \\cdot \\mathcal {L} _ {f} \\left(\\boldsymbol {O} _ {t}, \\boldsymbol {X} _ {\\downarrow 2 ^ {t - 1}} ^ {g t}\\right), \\tag {6}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.886, + 0.908, + 0.902 + ], + "angle": 0, + "content": "where \\(\\lambda_{f} > 0\\) is a hyper-parameter, set to 0.1 in experiments." + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "12735" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.137, + 0.089, + 0.862, + 0.341 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.348, + 0.908, + 0.378 + ], + "angle": 0, + "content": "Figure 4. The proposed network employs a multiscale hierarchical encoder-decoder architecture. Each block comprises \\(N\\) cascaded SADA layers while utilizing a pair of SEDC layers at the ends." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.402, + 0.226, + 0.42 + ], + "angle": 0, + "content": "4. Experiments" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.429, + 0.484, + 0.52 + ], + "angle": 0, + "content": "In this section, we first discuss the experimental setting, followed by the evaluation of the effectiveness of our SADT for several structured artifact removal tasks, including image demoiring, debanding and deraining. Ablation studies are then conducted to assess each component of SADT. More results can be found in supplementary material." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.535, + 0.3, + 0.551 + ], + "angle": 0, + "content": "4.1. Experimental Settings" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.558, + 0.484, + 0.664 + ], + "angle": 0, + "content": "In our SADT, The number of channels is set to \\( C = 32 \\) while \\( \\{N_0, N_1, N_2, N_3\\} \\) are set to \\( \\{0, 4, 4, 4\\} \\). The partition size in SCMC is \\( K = 8 \\). 4 heads are used in SADA, and \\( M_s \\) is set as \\( \\{1, 3, 5, 7\\} \\). Model training employs the Adam optimizer [17] with \\( \\beta_1 = 0.9 \\) and \\( \\beta_2 = 0.999 \\). Code will be released upon acceptance. Experimental datasets and implementation details are listed below:" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.669, + 0.484, + 0.82 + ], + "angle": 0, + "content": "Image demoiréing: Three datasets are used for benchmarking: TIP2018 [33], FHDMi [13], and LCDMoiré [49]. The training employs cyclic cosine annealing [26], with an initial learning rate of \\( 2 \\times 10^{-4} \\) decaying to \\( 10^{-6} \\) over each cycle. For two high-definition datasets, FHDMi and LCDMoiré, we randomly crop \\( 512 \\times 512 \\) patches from the images, and train the model for 150 epochs with the annealing cycle 50 and batch size of 2. For TIP2018 dataset, the model is trained for 70 epochs with annealing cycle 10. 
Consistent with [48], no data augmentation is utilized in training." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.825, + 0.484, + 0.901 + ], + "angle": 0, + "content": "Image debanding: The DID dataset [54] is used for benchmarking, which contains 51,490 pairs of banded and latent image patches with size \(256\times 256\) (30,829 training pairs, 10,237 validation pairs and 10,354 testing pairs). The initial learning rate is 0.0002, and decreased to \(10^{-6}\) for 300K iter" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.404, + 0.907, + 0.449 + ], + "angle": 0, + "content": "ations with the cosine annealing scheme [26]. Batch size is 4. Data augmentations include horizontal/vertical flipping and rotations of \(0^{\circ}\), \(90^{\circ}\), \(180^{\circ}\) and \(270^{\circ}\)." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.453, + 0.909, + 0.574 + ], + "angle": 0, + "content": "Image deraining: The large-scale real-world dataset, SPAD [39], is used for benchmarking the task of rain streak removal. It contains 638,492 image pairs for training and 1,000 for testing. Batch size is 32 and the total iteration number is \(300\mathrm{K}\). The initial learning rate is fixed as \(3\times 10^{-4}\) for the first 92K iterations, and then decreased to \(1\times 10^{-6}\) for 208K iterations with the cosine annealing scheme [26]. Random vertical/horizontal flips are used in data augmentation." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.576, + 0.909, + 0.682 + ], + "angle": 0, + "content": "Two common quantitative evaluation metrics are used for all tasks: PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity Index). Consistent with existing deraining methods [5, 44], PSNR/SSIM scores are computed using the Y channel in YCbCr color space. For image demoiréing and debanding, where periodic patterns significantly impact perception, the LPIPS metric [52] is also used." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.693, + 0.859, + 0.71 + ], + "angle": 0, + "content": "4.2. Quantitative and Qualitative Evaluation" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.72, + 0.909, + 0.901 + ], + "angle": 0, + "content": "Image demoiréing: See Tab. 1 for the comparison of different methods. SADT outperforms all compared methods on the PSNR metric and achieves the best SSIM scores on both TIP2018 and FHDMi datasets. While many methods, including SADT, achieve comparable SSIM scores on the LCDMoiré dataset, SADT maintains its superiority with the highest PSNR performance. The substantial performance gains on these datasets demonstrate SADT's effectiveness in handling moiré artifacts. Qualitative analysis also reveals that SADT exhibits remarkable improvement in moiré pattern removal, particularly in challenging scenarios such as patterns intertwined with hair details and those distributed" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.946, + 0.52, + 0.957 + ], + "angle": 0, + "content": "12736" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.094, + 0.089, + 0.638, + 0.243 + ], + "angle": 0, + "content": "
Method | TIP2018 [33] (LPIPS↓ / PSNR↑ / SSIM↑) | FHDMi [13] (LPIPS↓ / PSNR↑ / SSIM↑) | LCDMoiré [49] (LPIPS↓ / PSNR↑ / SSIM↑)
MopNet [12] | - / 27.75 / 0.895 | 0.1794 / 22.76 / 0.7958 | - / - / -
FHDe2Net [13] | - / 27.78 / 0.896 | 0.1688 / 22.93 / 0.7885 | - / 41.40 / -
WDNet [24] | - / 28.08 / 0.904 | - / - / - | - / 29.66 / 0.9670
MBCNN [53] | - / 30.03 / 0.893 | 0.1980 / 22.31 / 0.8095 | - / 44.04 / 0.9948
ESDNet [48] | 0.0816 / 29.81 / 0.916 | 0.1354 / 24.50 / 0.8351 | 0.0097 / 44.83 / 0.9963
CDDF [37] | - / 28.87 / 0.894 | 0.1610 / 23.63 / 0.8040 | - / 44.10 / -
RVDNet [7] | - / - / - | - / 24.29 / 0.8352 | - / 44.54 / 0.9932
RRID [46] | - / - / - | - / 24.39 / 0.8300 | - / - / -
Ours | 0.0608 / 30.77 / 0.926 | 0.1238 / 24.96 / 0.8463 | 0.0065 / 46.43 / 0.9923
" + }, + { + "type": "table", + "bbox": [ + 0.642, + 0.089, + 0.906, + 0.243 + ], + "angle": 0, + "content": "
Method | PSNR↑ | SSIM↑ | LPIPS↓
FCDR [14] | 25.73 | 0.7170 | 0.3766
FFmpeg [11] | 35.33 | 0.9352 | 0.0622
AdaDeband [36] | 35.35 | 0.9392 | 0.0639
BitNet [1] | 38.24 | 0.9633 | 0.0505
ADNet [34] | 38.29 | 0.9612 | 0.0499
MWCNN [25] | 39.24 | 0.9688 | 0.4854
MPRNet [50] | 39.42 | 0.9697 | 0.0461
Restormer [51] | 39.50 | 0.9709 | 0.0478
Ours | 39.78 | 0.9729 | 0.0453
" + }, + { + "type": "table_caption", + "bbox": [ + 0.09, + 0.254, + 0.908, + 0.281 + ], + "angle": 0, + "content": "Table 1. Quantitative results for image demoiring. The best and second-best results are Table 2. Quantitative results of image deboldfaced and underlined, respectively. banding on the DID dataset [54]." + }, + { + "type": "image", + "bbox": [ + 0.129, + 0.296, + 0.87, + 0.409 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.415, + 0.904, + 0.429 + ], + "angle": 0, + "content": "Figure 5. Visual inspection of the results from different image demoiring methods on sample images; see zoom-in box for details inspection." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.456, + 0.402, + 0.47 + ], + "angle": 0, + "content": "across flat clothing regions, as shown in Fig. 5." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.476, + 0.484, + 0.581 + ], + "angle": 0, + "content": "Image debanding: Given the limited availability of open-source deep learning methods specifically for debanding, we extended our evaluation to include methods adapted from related tasks. As shown in Tab.2, SADT demonstrates superior performance across all evaluation metrics. See Fig. 6 for visual comparison, where SADT achieves optimal results with minimal banding artifacts and superior color fidelity." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.587, + 0.484, + 0.738 + ], + "angle": 0, + "content": "Image deraining: The SPAD dataset [39] is used for benchmarking. As shown in Tab. 3, SADT outperformed all other methods, in PSNR and SSIM. Notably, SADT achieved a \\(0.23\\mathrm{dB}\\) improvement over the second best performer, NeRD-Rain-S [6], with \\(42\\%\\) fewer parameter count. (See Tab. 4). As shown in Fig. 7, the visual comparison further validate our method's effectiveness. While exhibits residual streaks and NeRD-Rain-S produces discontinuous blocks in their outputs, SADT effectively removes rain streaks while maintaining high visual quality." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.743, + 0.484, + 0.834 + ], + "angle": 0, + "content": "Complexity comparison: See Tab. 4 for the comparison of different methods in terms of FLOPs and parameter numbers. Our model, SADT, maintains relatively low FLOPs and a small number of parameters, while achieving SOTA performance as evidenced in Tab. \\(1\\sim 3\\). This shows that the effectiveness of our model is from its design, not model size." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.848, + 0.245, + 0.863 + ], + "angle": 0, + "content": "4.3. Ablation Study" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.871, + 0.484, + 0.901 + ], + "angle": 0, + "content": "This study evaluates the contribution of key designs, SADA, SEDC, and MIMO architecture, toward performance gain" + }, + { + "type": "table", + "bbox": [ + 0.571, + 0.453, + 0.851, + 0.613 + ], + "angle": 0, + "content": "
Method | Source | PSNR↑ | SSIM↑
PReNet [32] | CVPR'19 | 40.16 | 0.9816
RCDNet [38] | CVPR'20 | 43.36 | 0.9831
SPDNet [47] | ICCV'21 | 43.55 | 0.9875
MPRNet [50] | CVPR'21 | 45.00 | 0.9897
ECNet [20] | WACV'22 | 44.32 | 0.9913
IDT [44] | TPAMI'22 | 47.34 | 0.9929
Uformer-B [41] | CVPR'22 | 47.84 | 0.9925
Restormer [51] | CVPR'22 | 47.98 | 0.9921
DRSformer [5] | CVPR'23 | 48.53 | 0.9924
NeRD-Rain-S [6] | CVPR'24 | 48.90 | 0.9936
Ours | - | 49.13 | 0.9939
" + }, + { + "type": "table_caption", + "bbox": [ + 0.513, + 0.62, + 0.906, + 0.648 + ], + "angle": 0, + "content": "Table 3. Image deraining results on the SPAD dataset [39]. The best and second-best results are boldfaced and underlined, respectively." + }, + { + "type": "table", + "bbox": [ + 0.528, + 0.663, + 0.894, + 0.791 + ], + "angle": 0, + "content": "
Task | Method | #Params (M) | #FLOPs (G)
Demoiréing | ESDNet [48] | 5.93 | 17.6
Demoiréing | MBCNN [53] | 13.9 | -
Debanding | Restormer [51] | 26.13 | 150.0
Debanding | MPRNet [50] | 20.13 | 777.0
Deraining | NeRD-Rain-S [6] | 10.53 | 79.2
Deraining | DRSformer [5] | 33.65 | 242.9
- | Ours | 6.13 | 39.7
" + }, + { + "type": "table_caption", + "bbox": [ + 0.513, + 0.799, + 0.906, + 0.827 + ], + "angle": 0, + "content": "Table 4. Complexity comparison with SOTA methods in different tasks. The test image is of size \\(256 \\times 256\\)." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.856, + 0.906, + 0.901 + ], + "angle": 0, + "content": "of our model. For SADA, we removed the multi-scale head mechanism and deformable sampling offset, reducing it to multi-head LSA. For SEDC, we substituted it with a single" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.518, + 0.957 + ], + "angle": 0, + "content": "12737" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.127, + 0.091, + 0.273, + 0.206 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.173, + 0.206, + 0.227, + 0.217 + ], + "angle": 0, + "content": "Degraded" + }, + { + "type": "image", + "bbox": [ + 0.276, + 0.091, + 0.348, + 0.206 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.283, + 0.207, + 0.34, + 0.217 + ], + "angle": 0, + "content": "Reference" + }, + { + "type": "image", + "bbox": [ + 0.349, + 0.091, + 0.422, + 0.206 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.363, + 0.207, + 0.409, + 0.218 + ], + "angle": 0, + "content": "FFmpeg" + }, + { + "type": "image", + "bbox": [ + 0.423, + 0.092, + 0.498, + 0.206 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.442, + 0.207, + 0.479, + 0.217 + ], + "angle": 0, + "content": "BitNet" + }, + { + "type": "image", + "bbox": [ + 0.498, + 0.092, + 0.574, + 0.206 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.515, + 0.207, + 0.555, + 0.217 + ], + "angle": 0, + "content": "ADNet" + }, + { + "type": "image", + "bbox": [ + 0.575, + 0.092, + 0.648, + 0.206 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.584, + 0.207, + 0.636, + 0.217 + ], + "angle": 0, + "content": "MWCNN" + }, + { + "type": "image", + "bbox": [ + 0.648, + 0.092, + 0.721, + 0.206 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.661, + 0.207, + 0.709, + 0.217 + ], + "angle": 0, + "content": "MPRNet" + }, + { + "type": "image", + "bbox": [ + 0.722, + 0.092, + 0.797, + 0.206 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.731, + 0.207, + 0.788, + 0.217 + ], + "angle": 0, + "content": "Restormer" + }, + { + "type": "image", + "bbox": [ + 0.798, + 0.092, + 0.872, + 0.206 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.821, + 0.207, + 0.848, + 0.216 + ], + "angle": 0, + "content": "Ours" + }, + { + "type": "image_caption", + "bbox": [ + 0.091, + 0.224, + 0.907, + 0.24 + ], + "angle": 0, + "content": "Figure 6. 
Visual inspection of the results from different image debanding methods on sample images; see the zoom-in boxes for detailed inspection." + }, + { + "type": "image", + "bbox": [ + 0.124, + 0.255, + 0.232, + 0.34 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.152, + 0.342, + 0.205, + 0.353 + ], + "angle": 0, + "content": "Degraded" + }, + { + "type": "image", + "bbox": [ + 0.233, + 0.255, + 0.339, + 0.34 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.257, + 0.342, + 0.313, + 0.352 + ], + "angle": 0, + "content": "Reference" + }, + { + "type": "image", + "bbox": [ + 0.34, + 0.255, + 0.446, + 0.34 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.37, + 0.342, + 0.415, + 0.352 + ], + "angle": 0, + "content": "SPDNet" + }, + { + "type": "image", + "bbox": [ + 0.446, + 0.255, + 0.553, + 0.34 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.471, + 0.342, + 0.528, + 0.352 + ], + "angle": 0, + "content": "Restormer" + }, + { + "type": "image", + "bbox": [ + 0.553, + 0.255, + 0.66, + 0.34 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.575, + 0.342, + 0.639, + 0.352 + ], + "angle": 0, + "content": "DRSformer" + }, + { + "type": "image", + "bbox": [ + 0.661, + 0.255, + 0.766, + 0.34 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.675, + 0.342, + 0.752, + 0.352 + ], + "angle": 0, + "content": "NeRD-Rain-S" + }, + { + "type": "image", + "bbox": [ + 0.767, + 0.255, + 0.875, + 0.34 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.807, + 0.342, + 0.834, + 0.352 + ], + "angle": 0, + "content": "Ours" + }, + { + "type": "image_caption", + "bbox": [ + 0.2, + 0.36, + 0.796, + 0.375 + ], + "angle": 0, + "content": "Figure 7. Visual inspection of the results from different image deraining methods on sample images." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.401, + 0.484, + 0.522 + ], + "angle": 0, + "content": "SCMC. To ensure a fair comparison, all models were adjusted to comparable sizes by modifying the channel numbers, and were trained on the DID [54] dataset for 300K iterations. As shown in Tab. 5, each component makes a significant contribution to the performance improvement. Notably, SADA contributes a PSNR gain of \\(0.78\\mathrm{dB}\\) and an SSIM improvement of 0.003, while the inclusion of SEDC yields an additional PSNR increase of \\(0.17\\mathrm{dB}\\)." + }, + { + "type": "table", + "bbox": [ + 0.104, + 0.535, + 0.472, + 0.624 + ], + "angle": 0, + "content": "
SADA | SEDC | MIMO | PSNR (dB)↑ | SSIM↑ | LPIPS↓
- | - | - | 38.62 | 0.9663 | 0.0587
- | - | ✓ | 38.82 | 0.9678 | 0.0533
- | ✓ | ✓ | 39.05 | 0.9692 | 0.0521
✓ | - | ✓ | 39.61 | 0.9712 | 0.0480
✓ | ✓ | ✓ | 39.78 | 0.9729 | 0.0453
" + }, + { + "type": "table_caption", + "bbox": [ + 0.109, + 0.63, + 0.465, + 0.644 + ], + "angle": 0, + "content": "Table 5. Results of ablation studies. Boldfaced: best values." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.669, + 0.483, + 0.79 + ], + "angle": 0, + "content": "Impact of the grid dilation: In this study, we investigate the performance benefits of the multi-head mechanism in SADA by comparing different settings of \\( M_{s} \\) on the DID dataset. Tab. 6 shows that the multi-dilation offset grid improves performance with similar model size. Compared to assigning each head same small dilation (1,1,1,1), our setting (1,3,5,7) better exploits long-range dependencies, resulting in a PSNR gain of 0.2dB and a LPIPS reduction of 0.0039." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.795, + 0.484, + 0.903 + ], + "angle": 0, + "content": "Visualization of deformable offsets: Refer to Fig. 8 for a comparison of the sampling points between the standard LSA and our SADA. The purple star indicates the central point, while the yellow points and red circles represent the initial and final sampling positions, respectively. This shows SADA can better capture long-range and multi-scale context, which proves advantageous for the ensuing weighted aggregation." + }, + { + "type": "table", + "bbox": [ + 0.575, + 0.399, + 0.848, + 0.463 + ], + "angle": 0, + "content": "
M_s | PSNR (dB)↑ | SSIM↑ | LPIPS↓
{1,1,1,1} | 39.58 | 0.9708 | 0.0492
{1,2,3,4} | 39.70 | 0.9723 | 0.0463
{1,3,5,7} | 39.78 | 0.9729 | 0.0453
" + }, + { + "type": "image_caption", + "bbox": [ + 0.546, + 0.47, + 0.871, + 0.484 + ], + "angle": 0, + "content": "Table 6. Results using different setting of dilation \\(M_{s}\\)." + }, + { + "type": "image", + "bbox": [ + 0.561, + 0.497, + 0.676, + 0.586 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.523, + 0.588, + 0.898, + 0.599 + ], + "angle": 0, + "content": "(a) Corresponding down-sampled image. (b) Sampling points for each LSA head." + }, + { + "type": "image", + "bbox": [ + 0.522, + 0.6, + 0.607, + 0.666 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.541, + 0.667, + 0.589, + 0.677 + ], + "angle": 0, + "content": "0-th head" + }, + { + "type": "image", + "bbox": [ + 0.619, + 0.6, + 0.705, + 0.666 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.638, + 0.667, + 0.681, + 0.676 + ], + "angle": 0, + "content": "1-st head" + }, + { + "type": "image", + "bbox": [ + 0.748, + 0.498, + 0.864, + 0.587 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.716, + 0.588, + 0.897, + 0.599 + ], + "angle": 0, + "content": "(b) Sampling points for each LSA head." + }, + { + "type": "image", + "bbox": [ + 0.716, + 0.6, + 0.803, + 0.666 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.732, + 0.667, + 0.78, + 0.676 + ], + "angle": 0, + "content": "2-nd head" + }, + { + "type": "image", + "bbox": [ + 0.815, + 0.599, + 0.899, + 0.666 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.832, + 0.667, + 0.877, + 0.675 + ], + "angle": 0, + "content": "3-rd head" + }, + { + "type": "image_caption", + "bbox": [ + 0.615, + 0.677, + 0.806, + 0.687 + ], + "angle": 0, + "content": "(c) Sampling points for each SADA head." + }, + { + "type": "image_caption", + "bbox": [ + 0.521, + 0.7, + 0.896, + 0.715 + ], + "angle": 0, + "content": "Figure 8. Visualization of sampling points for LSA and SADA." + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.731, + 0.613, + 0.747 + ], + "angle": 0, + "content": "Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.757, + 0.908, + 0.894 + ], + "angle": 0, + "content": "In this paper, we presented SADT, a universal transformer-based architecture for image restoration across diverse artifacts. Our approach integrates the SEDC module to capture scale-varying patterns with abundant orientations and potential distortions, and the SADA module to model long-range relationships among repetitive patterns with diverse sizes and non-uniform distributions. Extensive experiments showed that SADT consistently outperforms SOTA methods in the tasks including image demoiréing, debanding, and deraining." + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "12738" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.092, + 0.091, + 0.264, + 0.108 + ], + "angle": 0, + "content": "Acknowledgements." 
+ }, + { + "type": "text", + "bbox": [ + 0.09, + 0.115, + 0.486, + 0.298 + ], + "angle": 0, + "content": "This work was supported by National Key R&D Pogram of China (2023YFA1011601), the Basic and Applied Basic Research Foundation of Guangdong Province (2024A1515012287), Science and Technology Key Program of Guangzhou, China (2023B03J1388), National Natural Science Foundation of China (62372186), Natural Science Foundation of Guangdong Province (2023A1515012841), Fundamental Research Funds for the Central Universities (x2jsD2230220), National Natural Science Foundation of China (62106077), Natural Science Foundation of Guangdong Province (2022A1515011087) and Singapore MOE AcRF Tier 1 (Grant No. A-8000981-00-00)." + }, + { + "type": "title", + "bbox": [ + 0.093, + 0.311, + 0.188, + 0.326 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.336, + 0.486, + 0.39 + ], + "angle": 0, + "content": "[1] Junyoung Byun, Kyujin Shim, and Changick Kim. Bitnet: Learning-based bit-depth expansion. In Proceedings of the Asian Conference on Computer Vision, pages 67-82. Springer, 2019. 2, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.393, + 0.484, + 0.463 + ], + "angle": 0, + "content": "[2] Jiezhang Cao, Jingyun Liang, Kai Zhang, Yawei Li, Yulun Zhang, Wenguan Wang, and Luc Van Gool. Reference-based image super-resolution with deformable attention transformer. In European conference on computer vision, pages 325-342. Springer, 2022. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.464, + 0.484, + 0.533 + ], + "angle": 0, + "content": "[3] Hanting Chen, Yunhe Wang, Tianyu Guo, Chang Xu, Yiping Deng, Zhenhua Liu, Siwei Ma, Chunjing Xu, Chao Xu, and Wen Gao. Pre-trained image processing transformer. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 12299-12310, 2021. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.534, + 0.485, + 0.604 + ], + "angle": 0, + "content": "[4] Jierun Chen, Shiu-hong Kao, Hao He, Weipeng Zhuo, Song Wen, Chul-Ho Lee, and S-H Gary Chan. Run, don't walk: chasing higher flops for faster neural networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 12021-12031, 2023. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.605, + 0.485, + 0.673 + ], + "angle": 0, + "content": "[5] Xiang Chen, Hao Li, Mingqiang Li, and Jinshan Pan. Learning a sparse transformer network for effective image deraining. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5896-5905, 2023. 2, 3, 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.676, + 0.485, + 0.744 + ], + "angle": 0, + "content": "[6] Xiang Chen, Jinshan Pan, and Jiangxin Dong. Bidirectional multi-scale implicit neural representations for image deraining. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 25627-25636, 2024. 2, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.746, + 0.484, + 0.802 + ], + "angle": 0, + "content": "[7] Yijia Cheng, Xin Liu, and Jingyu Yang. Recaptured raw screen image and video demoiring via channel and spatial modulations. Advances in Neural Information Processing Systems, 36, 2024. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.803, + 0.485, + 0.871 + ], + "angle": 0, + "content": "[8] Sung-Jin Cho, Seo-Won Ji, Jun-Pyo Hong, Seung-Won Jung, and Sung-Jea Ko. Rethinking coarse-to-fine approach in single image deblurring. 
In Proceedings of the IEEE/CVF international conference on computer vision, pages 4641-4650, 2021. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.873, + 0.484, + 0.902 + ], + "angle": 0, + "content": "[9] Yuning Cui, Wenqi Ren, Sining Yang, Xiaochun Cao, and Alois Knoll. Irnext: Rethinking convolutional network design" + }, + { + "type": "list", + "bbox": [ + 0.101, + 0.336, + 0.486, + 0.902 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.545, + 0.093, + 0.907, + 0.12 + ], + "angle": 0, + "content": "for image restoration. In International conference on machine learning, 2023. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.122, + 0.908, + 0.178 + ], + "angle": 0, + "content": "[10] Jifeng Dai, Haozhi Qi, Yuwen Xiong, Yi Li, Guodong Zhang, Han Hu, and Yichen Wei. Deformable convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 764-773, 2017. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.179, + 0.909, + 0.22 + ], + "angle": 0, + "content": "[11] FFmpeg Filters deband. Accessed: Aug. 31, 2021. [online]. available:: https://ffmpeg.org/ffmpeg-filters.html#deband, 2021.7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.222, + 0.908, + 0.276 + ], + "angle": 0, + "content": "[12] Bin He, Ce Wang, Boxin Shi, and Ling-Yu Duan. Mop moiré patterns using mopnet. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2424-2432, 2019. 2, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.278, + 0.907, + 0.333 + ], + "angle": 0, + "content": "[13] Bin He, Ce Wang, Boxin Shi, and Ling-Yu Duan. Fhde 2 net: Full high definition demoiring network. In Proceedings of the European Conference on Computer Vision, pages 713-729. Springer, 2020. 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.335, + 0.908, + 0.403 + ], + "angle": 0, + "content": "[14] Qin Huang, Hui Yong Kim, Wen-Jiin Tsai, Se Yoon Jeong, Jin Soo Choi, and C-C Jay Kuo. Understanding and removal of false contour in hevc compressed images. IEEE Transactions on Circuits and Systems for Video Technology, 28(2): 378-391, 2016. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.406, + 0.908, + 0.448 + ], + "angle": 0, + "content": "[15] Yan Huang, Xinchang Lu, Yuhui Quan, Yong Xu, and Hui Ji. Image shadow removal via multi-scale deep retina decomposition. Pattern Recognition, 159:111126, 2025. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.449, + 0.907, + 0.504 + ], + "angle": 0, + "content": "[16] Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In Proceedings of the European Conference on Computer Vision, pages 694–711. Springer, 2016. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.505, + 0.907, + 0.545 + ], + "angle": 0, + "content": "[17] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.548, + 0.906, + 0.604 + ], + "angle": 0, + "content": "[18] Gang Li, Di Xu, Xing Cheng, Lingyu Si, and Changwen Zheng. Simvit: Exploring a simple vision transformer with sliding windows. In 2022 IEEE International Conference on Multimedia and Expo (ICME), pages 1-6. IEEE, 2022. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.605, + 0.908, + 0.66 + ], + "angle": 0, + "content": "[19] Yu Li, Robby T Tan, Xiaojie Guo, Jiangbo Lu, and Michael S Brown. 
Rain streak removal using layer priors. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2736-2744, 2016. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.661, + 0.908, + 0.731 + ], + "angle": 0, + "content": "[20] Yizhou Li, Yusuke Monno, and Masatoshi Okutomi. Single image deraining network with rain embedding consistency and layered LSTM. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, pages 4060-4069, 2022. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.732, + 0.908, + 0.8 + ], + "angle": 0, + "content": "[21] Jingyun Liang, Jiezhang Cao, Guolei Sun, Kai Zhang, Luc Van Gool, and Radu Timofte. Swinir: Image restoration using swin transformer. In Proceedings of the IEEE/CVF international conference on computer vision, pages 1833-1844, 2021. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.802, + 0.907, + 0.858 + ], + "angle": 0, + "content": "[22] Fanglei Liu, Jingyu Yang, and Huanjing Yue. Moiré pattern removal from texture images via low-rank and sparse matrix decomposition. In 2015 Visual Communications and Image Processing, pages 1-4. IEEE, 2015. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.86, + 0.906, + 0.901 + ], + "angle": 0, + "content": "[23] Jing Liu, Xin Wen, Weizhi Nie, Yuting Su, Peiguang Jing, and Xiaokang Yang. Residual-guided multiscale fusion network for bit-depth enhancement. IEEE Transactions on Circuits" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.093, + 0.909, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.52, + 0.957 + ], + "angle": 0, + "content": "12739" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.126, + 0.092, + 0.484, + 0.119 + ], + "angle": 0, + "content": "and Systems for Video Technology, 32(5):2773-2786, 2021. 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.122, + 0.484, + 0.205 + ], + "angle": 0, + "content": "[24] Lin Liu, Jianzhuang Liu, Shanxin Yuan, Gregory Slabaugh, Ales Leonardis, Wengang Zhou, and Qi Tian. Wavelet-based dual-branch network for image demoiring. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XIII 16, pages 86–102. Springer, 2020. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.207, + 0.484, + 0.262 + ], + "angle": 0, + "content": "[25] Pengju Liu, Hongzhi Zhang, Kai Zhang, Liang Lin, and Wangmeng Zuo. Multi-level wavelet-cnn for image restoration. In Proceedings of the IEEE conference on computer vision and pattern recognition workshops, pages 773-782, 2018. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.264, + 0.484, + 0.303 + ], + "angle": 0, + "content": "[26] Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983, 2016. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.305, + 0.484, + 0.36 + ], + "angle": 0, + "content": "[27] Yu Luo, Yong Xu, and Hui Ji. Removing rain from a single image via discriminative sparse coding. In Proceedings of the IEEE international conference on computer vision, pages 3397-3405, 2015. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.362, + 0.484, + 0.404 + ], + "angle": 0, + "content": "[28] Duong Hai Nguyen, Se-Ho Lee, and Chul Lee. Multiscale coarse-to-fine guided screenshot demoiring. IEEE Signal Processing Letters, 2023. 
2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.406, + 0.484, + 0.461 + ], + "angle": 0, + "content": "[29] Yuhui Quan, Shijie Deng, Yixin Chen, and Hui Ji. Deep learning for seeing through window with raindrops. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2463-2471, 2019. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.462, + 0.484, + 0.517 + ], + "angle": 0, + "content": "[30] Yuhui Quan, Xuyi He, Ruotao Xu, Yong Xu, and Hui Ji. Image debanding using cross-scale invertible networks with banded deformable convolutions. Neural Networks, page 107270, 2025. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.519, + 0.484, + 0.574 + ], + "angle": 0, + "content": "[31] Prajit Ramachandran, Niki Parmar, Ashish Vaswani, Irwan Bello, Anselm Levskaya, and Jon Shlens. Stand-alone self-attention in vision models. Advances in neural information processing systems, 32, 2019. 3, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.576, + 0.484, + 0.644 + ], + "angle": 0, + "content": "[32] Dongwei Ren, Wangmeng Zuo, Qinghua Hu, Pengfei Zhu, and Deyu Meng. Progressive image deraining networks: A better and simpler baseline. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 3937-3946, 2019. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.646, + 0.484, + 0.701 + ], + "angle": 0, + "content": "[33] Yujing Sun, Yizhou Yu, and Wenping Wang. Moiré photo restoration using multiresolution convolutional neural networks. IEEE Transactions on Image Processing, 27(8):4160-4172, 2018. 2, 3, 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.703, + 0.484, + 0.744 + ], + "angle": 0, + "content": "[34] Chunwei Tian, Yong Xu, Zuoyong Li, Wangmeng Zuo, Lunke Fei, and Hong Liu. Attention-guided cnn for image denoising. Neural Networks, 124:117-129, 2020. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.746, + 0.484, + 0.815 + ], + "angle": 0, + "content": "[35] Ilya O Tolstikhin, Neil Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Thomas Unterthiner, Jessica Yung, Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, et al. Mlp-mixer: An all-mlp architecture for vision. Advances in neural information processing systems, 34:24261–24272, 2021. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.817, + 0.484, + 0.858 + ], + "angle": 0, + "content": "[36] Zhengzhong Tu, Jessie Lin, Yilin Wang, Balu Adsumilli, and Alan C Bovik. Adaptive debanding filter. IEEE Signal Processing Letters, 27:1715-1719, 2020. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.86, + 0.484, + 0.901 + ], + "angle": 0, + "content": "[37] Ce Wang, Bin He, Shengsen Wu, Renjie Wan, Boxin Shi, and Ling-Yu Duan. Coarse-to-fine disentangling demoiréing framework for recaptured screen images. IEEE Transactions" + }, + { + "type": "list", + "bbox": [ + 0.093, + 0.092, + 0.484, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.545, + 0.093, + 0.908, + 0.12 + ], + "angle": 0, + "content": "on Pattern Analysis and Machine Intelligence, 45(8):9439-9453, 2023. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.122, + 0.908, + 0.178 + ], + "angle": 0, + "content": "[38] Hong Wang, Qi Xie, Qian Zhao, and Deyu Meng. A model-driven deep neural network for single image rain removal. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 3103-3112, 2020. 
3, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.179, + 0.908, + 0.248 + ], + "angle": 0, + "content": "[39] Tianyu Wang, Xin Yang, Ke Xu, Shaozhe Chen, Qiang Zhang, and Rynson WH Lau. Spatial attentive single-image deraining with a high quality real rain dataset. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 12270-12279, 2019. 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.25, + 0.908, + 0.332 + ], + "angle": 0, + "content": "[40] Wenhai Wang, Jifeng Dai, Zhe Chen, Zhenhang Huang, Zhiqi Li, Xizhou Zhu, Xiaowei Hu, Tong Lu, Lewei Lu, Hongsheng Li, et al. Internimage: Exploring large-scale vision foundation models with deformable convolutions. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 14408-14419, 2023. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.334, + 0.908, + 0.404 + ], + "angle": 0, + "content": "[41] Zhendong Wang, Xiaodong Cun, Jianmin Bao, Wengang Zhou, Jianzhuang Liu, and Houqiang Li. Uformer: A general u-shaped transformer for image restoration. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 17683-17693, 2022. 3, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.405, + 0.908, + 0.461 + ], + "angle": 0, + "content": "[42] Xin Wen, Weizhi Nie, Jing Liu, and Yuting Su. Mrft: Multiscale recurrent fusion transformer based prior knowledge for bit-depth enhancement. IEEE Transactions on Circuits and Systems for Video Technology, 33(10):5562-5575, 2023. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.462, + 0.908, + 0.517 + ], + "angle": 0, + "content": "[43] Zhuofan Xia, Xuran Pan, Shiji Song, Li Erran Li, and Gao Huang. Vision transformer with deformable attention. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 4794-4803, 2022. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.519, + 0.908, + 0.574 + ], + "angle": 0, + "content": "[44] Jie Xiao, Xueyang Fu, Aiping Liu, Feng Wu, and Zheng-Jun Zha. Image de-raining transformer. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(11):12978-12995, 2022. 3, 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.576, + 0.908, + 0.659 + ], + "angle": 0, + "content": "[45] Yuwen Xiong, Zhiqi Li, Yuntao Chen, Feng Wang, Xizhou Zhu, Jiapeng Luo, Wenhai Wang, Tong Lu, Hongsheng Li, Yu Qiao, et al. Efficient deformable convnets: Rethinking dynamic and sparse operator for vision applications. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5652-5661, 2024. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.661, + 0.908, + 0.716 + ], + "angle": 0, + "content": "[46] Shuning Xu, Binbin Song, Xiangyu Chen, Xina Liu, and Jiantao Zhou. Image demoiring in raw and srgb domains. In European Conference on Computer Vision, pages 108-124. Springer, 2025. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.718, + 0.908, + 0.786 + ], + "angle": 0, + "content": "[47] Qiaosi Yi, Juncheng Li, Qinyan Dai, Faming Fang, Guixu Zhang, and Tieyong Zeng. Structure-preserving deraining with residue channel prior guidance. In Proceedings of the IEEE/CVF international conference on computer vision, pages 4238-4247, 2021. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.788, + 0.908, + 0.857 + ], + "angle": 0, + "content": "[48] Xin Yu, Peng Dai, Wenbo Li, Lan Ma, Jiajun Shen, Jia Li, and Xiaojuan Qi. 
Towards efficient and scale-robust ultra-high-definition image demoiring. In Proceedings of the European Conference on Computer Vision, pages 646–662. Springer, 2022. 2, 3, 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.859, + 0.908, + 0.901 + ], + "angle": 0, + "content": "[49] Shanxin Yuan, Radu Timofte, Gregory Slabaugh, Ales Leonardis, Bolun Zheng, Xin Ye, Xiang Tian, Yaowu Chen, Xi Cheng, Zhenyong Fu, et al. Aim 2019 challenge on image" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.093, + 0.908, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.52, + 0.957 + ], + "angle": 0, + "content": "12740" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.125, + 0.092, + 0.484, + 0.134 + ], + "angle": 0, + "content": "demoireing: Methods and results. In 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pages 3534-3545. IEEE, 2019. 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.136, + 0.484, + 0.205 + ], + "angle": 0, + "content": "[50] Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Ming-Hsuan Yang, and Ling Shao. Multi-stage progressive image restoration. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 14821-14831, 2021. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.207, + 0.484, + 0.288 + ], + "angle": 0, + "content": "[51] Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, and Ming-Hsuan Yang. Restormer: Efficient transformer for high-resolution image restoration. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5728-5739, 2022. 2, 3, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.29, + 0.484, + 0.359 + ], + "angle": 0, + "content": "[52] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pages 586-595, 2018. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.361, + 0.484, + 0.416 + ], + "angle": 0, + "content": "[53] Bolun Zheng, Shanxin Yuan, Gregory Slabaugh, and Ales Leonardis. Image demoiring with learnable bandpass filters. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3636-3645, 2020. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.418, + 0.484, + 0.471 + ], + "angle": 0, + "content": "[54] Raymond Zhou, Shahrukh Athar, Zhongling Wang, and Zhou Wang. Deep image debanding. In Proceedings of the IEEE International Conference on Image Processing, pages 1951-1955. IEEE, 2022. 6, 7, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.474, + 0.484, + 0.543 + ], + "angle": 0, + "content": "[55] Shihao Zhou, Duosheng Chen, Jinshan Pan, Jinglei Shi, and Jufeng Yang. Adapt or perish: Adaptive sparse transformer with attentive feature refinement for image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2952-2963, 2024. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.545, + 0.484, + 0.599 + ], + "angle": 0, + "content": "[56] Xizhou Zhu, Han Hu, Stephen Lin, and Jifeng Dai. Deformable convnets v2: More deformable, better results. In Proceedings of the IEEE/CVF conference on Computer Vision and Pattern Recognition, pages 9308-9316, 2019. 
3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.601, + 0.484, + 0.655 + ], + "angle": 0, + "content": "[57] Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, and Jifeng Dai. Deformable detr: Deformable transformers for end-to-end object detection. arXiv preprint arXiv:2010.04159, 2020.3" + }, + { + "type": "list", + "bbox": [ + 0.093, + 0.092, + 0.484, + 0.655 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.946, + 0.518, + 0.956 + ], + "angle": 0, + "content": "12741" + } + ] +] \ No newline at end of file diff --git a/2025/A Universal Scale-Adaptive Deformable Transformer for Image Restoration across Diverse Artifacts/36bcd46e-32c4-4da4-aff4-67b68f83d335_origin.pdf b/2025/A Universal Scale-Adaptive Deformable Transformer for Image Restoration across Diverse Artifacts/36bcd46e-32c4-4da4-aff4-67b68f83d335_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..ad1937607b67a35710e0964fab1dd6d87f776097 --- /dev/null +++ b/2025/A Universal Scale-Adaptive Deformable Transformer for Image Restoration across Diverse Artifacts/36bcd46e-32c4-4da4-aff4-67b68f83d335_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:750993438fcf6195342532f8f7d676d736b6ef522b94e74cbc9fe1b6617cbfec +size 6995588 diff --git a/2025/A Universal Scale-Adaptive Deformable Transformer for Image Restoration across Diverse Artifacts/full.md b/2025/A Universal Scale-Adaptive Deformable Transformer for Image Restoration across Diverse Artifacts/full.md new file mode 100644 index 0000000000000000000000000000000000000000..088893da2a9697a0172dd6d48b3acd91322c85d0 --- /dev/null +++ b/2025/A Universal Scale-Adaptive Deformable Transformer for Image Restoration across Diverse Artifacts/full.md @@ -0,0 +1,420 @@ +# A Universal Scale-Adaptive Deformable Transformer for Image Restoration across Diverse Artifacts + +Xuyi He + +Yuhui Quan + +$\mathbf{R}$ uotao $\mathrm{Xu^{2,3}*}$ + +Hui Ji + +$^{1}$ School of Computer Science and Engineering, South China University of Technology $^{2}$ Institute for Super Robotics, South China University of Technology $^{3}$ Key Laboratory of Large-Model Embodied-Intelligent Humanoid Robot $^{4}$ Department of Mathematics, National University of Singapore + +csxuyihe@mail.scut.edu.cn, csyhquan@scut.edu.cn, rtxu@superobots.com, matjh@nus.edu.sg + +# Abstract + +Structured artifacts are semi-regular, repetitive patterns that closely intertwine with genuine image content, making their removal highly challenging. In this paper, we introduce the Scale-Adaptive Deformable Transformer, an network architecture specifically designed to eliminate such artifacts from images. The proposed network features two key components: a scale-enhanced deformable convolution module for modeling scale-varying patterns with abundant orientations and potential distortions, and a scale-adaptive deformable attention mechanism for capturing long-range relationships among repetitive patterns with different sizes and non-uniform spatial distributions. Extensive experiments show that our network consistently outperforms state-of-the-art methods in diverse artifact removal tasks, including image deraining, image demoiring, and image debanding. + +# 1. Introduction + +Structured artifacts, in contrast to random noise, are repetitive patterns with similar appearance. 
Examples include Moiré patterns caused by overlapping pattern interference, rain streaks formed during image acquisition in rainy conditions, and banding effects resulting from color quantization. See Figure 1 for an illustration. These artifacts often display similar appearances that repeat in a quasi-periodic way over large image regions. However, they can differ in size, intensity, and orientation, and may exhibit certain shape distortions. Additionally, their characteristics can vary across different images due to changes in image content and capturing conditions.

Removing structured artifacts has many applications. Eliminating moiré patterns improves digital photography and screen captures. Rain streak removal improves the reliability of outdoor vision systems in rainy conditions, e.g., those used in autonomous driving and video surveillance. Addressing banding artifacts is necessary in professional imaging and printing to achieve smooth gradients and high dynamic range. Additionally, structured artifact removal finds applications in medical imaging, industrial quality inspection, and consumer electronics, enhancing image quality and reliability. All these highlight the broad impact of structured artifact removal across many real-world applications.

![](images/18490bcb9b39b556463d9de3aff215467f7618319c3a6d439533beca73b749a7.jpg)

![](images/27b3199a8bdeb0a4f74b5c24a3e6ece83d9938a1ad95db60b6d44d2f2727a682.jpg)

![](images/f3f4ae252b636f6bb503193f28a39639a40ecec227b60b97af9aff387b63015b.jpg)

![](images/13e0c606cbef0d8ed6d80aa3e24beb844257e273d8c4ad2523b9759e31aa7fab.jpg)

![](images/d7781c63dc311025e1dd22964f9b94bee05f07bb835318904eb3cf5687c57596.jpg)

![](images/cbf684e07eb7050c2158b2e3123ed491437caf1152495299b503b95199bce24b.jpg)
Figure 1. Structured artifacts with varying orientations and scales, possibly involving warping effects. From left to right: moiré, rain, band.

Structured artifact removal presents significant challenges that differ from random noise removal. The primary difficulties arise from the quasi-periodic nature of these artifacts and their resemblance to authentic repeating image patterns (e.g., textures). These artifacts often intertwine with natural image features, creating complex overlapping structures that are difficult to disentangle from genuine image content. For instance, moiré patterns can closely mimic textile textures, and banding effects may align with natural linear features. Additionally, these artifacts often display noticeable variations in pattern size and geometric shape, further complicating their identification and removal.

# 1.1. Challenges and Existing Works

Traditional approaches, relying on handcrafted priors to distinguish structured artifacts from genuine image features, often struggle to handle variations in these artifacts, resulting in limited effectiveness for artifacts with noticeable variations. While deep learning methods have shown promising performance in removing these artifacts, they also face notable challenges. Early approaches are based on convolutional neural networks (CNNs) [1, 12, 23, 28, 29, 33, 48], which have limited capacity for capturing long-range relationships due to their local receptive fields. However, the quasi-periodicity of local patterns with certain similarities is a crucial cue for identifying structured artifacts.
Thus, the design of an NN for removing structured artifacts must be capable of effectively exploiting long-range relationships among local structures to accurately identify these artifacts.

Recent works addressed this limitation by leveraging transformers with self-attention mechanisms [5, 6, 51]. However, challenges remain. The significant variations in the geometric shapes and pattern sizes of structured artifacts make it challenging to synthesize training data with comprehensive coverage of all possible variations. In addition, training NNs on very large datasets can also be computationally expensive. Consequently, relying solely on extensive datasets may not lead to robust generalization. To conclude, an effective transformer-based NN for structured artifact removal should be designed with deformable capacity and scale adaptability, enabling it to generalize effectively to unseen data even with limited training samples.

# 1.2. Main Idea

An effective NN for structured artifact removal requires specialized mechanisms that can efficiently adapt to the varying scales and distortions of these artifacts, as well as exploit long-range dependencies among local structures. In this paper, we propose a transformer-based NN specifically optimized for handling the scale variations and distortions of structured artifacts. Our proposed architecture, the Scale-Adaptive Deformable Transformer (SADT), leverages the transformer's capacity to capture long-range dependencies while incorporating new modules for scale adaptation and deformable transformations, enabling effective removal of structured artifacts with noticeable variations.

Scale-enhanced deformable convolution (SEDC): Structured artifacts are semi-regular, exhibiting repetitive patterns with similar appearances but noticeable variations in size, orientation, and shape distortion. Classic convolutions with fixed geometric structures struggle to capture such variable patterns effectively. Existing deformable convolutions offer some flexibility through spatial adaptations but remain ineffective in handling large scale and orientation variations. To address these issues, we introduce the SEDC module, a convolution module designed to better manage these variations.

The cascaded deformable convolutions are used to generate intermediate features from different perspectives, scales, and orientations, enabling flexible sampling and effective modeling of warped patterns. Furthermore, the subsequent Spatial-Channel Mixing Convolution (SCMC) with a large convolution kernel is introduced, which helps capture broader contextual information while preserving positional cues. Finally, the parallel features are adaptively aggregated for modeling scale-varying and deformable patterns.

Scale-adaptive deformable attention (SADA): Structured artifacts are repetitive patterns spanning large image regions. These patterns are quasi-periodic with non-uniform spatial distributions and size variations. While transformers are effective in capturing such repetitive patterns through attention mechanisms, the significant variations in size and non-uniform spatial distribution of these patterns demand better scale adaptability and more effective sampling within the attention mechanisms.

This motivates us to propose Scale-Adaptive Deformable Attention (SADA), which employs a multi-scale multi-head mechanism for capturing long-range dependencies within a semantic layer.
Moreover, SADA introduces deformable sampling offsets to effectively handle the non-uniform spatial distribution of repetitive patterns. These two techniques enable local self-attention to better capture long-range quasi-periodic patterns through selective information aggregation.

# 1.3. Our Contributions

In summary, our contributions are listed as follows:

- We propose the SEDC module to model scale-varying artifacts with abundant orientations and potential distortions.
- We introduce the SADA module, which enhances local self-attention by incorporating a multi-scale multi-head mechanism and a deformable sampling technique, to capture long-range contextual information of quasi-periodic patterns.
- Leveraging SEDC and SADA, we construct the transformer-based SADT for structured artifact removal.

Extensive experiments on diverse artifact removal tasks, including image demoiréing, deraining, and debanding, showed that SADT achieves state-of-the-art performance.

# 2. Related Work

Non-learning structured artifact removal methods: Traditional methods rely on pre-defined priors on structured artifacts to distinguish them from genuine image content. For moiré pattern removal from texture images, Liu et al. [22] proposed using a low-rank prior for textures and a sparsity prior on Moiré patterns in the discrete cosine transform domain to separate the two. For rain streak removal, Luo et al. [27] separate the rain streak and background layers via discriminative sparse coding. Li et al. [19] introduce a patch-based approach leveraging Gaussian mixture model priors to separate rain streak and background layers. These handcrafted priors are often overly simplistic and tend to fail in removing artifacts with large variations in scale, orientation, and shape.

Multi-scale deep learning methods for structured artifact removal: There is extensive literature on deep learning for structured artifact removal. Here, we focus exclusively on the most relevant works employing multi-scale strategies to address artifact patterns of varying scales. To remove Moiré patterns from images, Sun et al. [33] proposed a multiresolution encoder-decoder architecture designed to model Moiré patterns at varying scales. Yu et al. [48] introduced a semantic-aligned scale-aware module that combines multiscale features through an attention mechanism to reduce scale discrepancies in Moiré patterns. Nguyen et al. [28] developed a multiscale guided restoration block targeting both low- and high-frequency noise. For deraining, Wang et al. [38] proposed a wavelet-inspired multi-level module for rain removal. For debanding, Liu et al. [23] introduced a dual-branch depthwise group fusion module to capture both inter-scale and intra-scale correlations of banding patterns. Quan et al. [30] introduced a cross-scale invertible NN with deformable convolutions to handle scale variations of banding artifacts. For shadow removal, built upon the Retinex decomposition model, Huang et al. [15] proposed a neural network with a multi-scale structure. These methods have shown promising results in structured artifact removal tasks. However, they neither fully exploit the repetitive nature of artifacts for accurate separation from genuine image structures, nor do they effectively address the significant orientation or shape variations of artifacts in their designs.

Transformers for removing structured artifacts: The transformer is an effective architecture for modeling global and local relationships among local structures.
In image processing, an image is partitioned into small patches to fit the transformer architecture [3]. Many transformer methods have been proposed for different structured artifact removal tasks. For deraining, Xiao et al. [44] introduced a combination of window-based and global self-attention mechanisms. Chen et al. [5] developed sparse channel self-attention to selectively retain key values. For image debanding, Wen et al. [42] utilized local self-attention with varying window sizes across different heads to capture features at multiple scales.

However, these transformer methods have not fully leveraged self-attention's ability to capture long-range dependencies of repetitive patterns. This weakness arises from the high computational cost of standard transformers, which scales quadratically with the number of patches. To reduce this cost, these transformers rely on techniques such as window-based self-attention [21, 41], channel-only self-attention in Restormer [51], and local self-attention (LSA) [18, 31]. While computationally efficient, these methods' limited spatial receptive fields weaken their ability to capture long-range information. To address these limitations, the proposed SADT extends LSA with scale-adaptive deformable sampling to effectively capture long-range dependencies of repetitive artifacts with varying scales, orientations, and distortions. This design enables SADT to model structured artifacts more effectively.

Deformable convolution and attention: Deformable convolutions [10, 40, 45, 56] introduce learnable offsets to convolution kernels, enabling adaptive feature sampling to handle geometric variations and spatial transformations. Deformable attention [2, 43, 57] extends this adaptability to the self-attention mechanism, allowing transformers to focus on relevant spatial locations and better capture complex patterns. Zhu et al. [57] proposed a multi-scale deformable attention module for generating a feature pyramid and allocating several keys per query to capture multi-scale features. Xia et al. [43] introduced the deformable attention transformer, which learns shared deformed points for efficient computation. Cao et al. [2] developed a reference-based deformable attention module to enhance low-resolution feature representations using multiple relevant features.

The design of deformable self-attention in our work differs significantly from that of [43]. Xia et al. [43] combine self-attention with deformable convolution, computing the relevance between input and reference images for offset prediction. In contrast, our method integrates deformable sampling within self-attention by first predicting offsets and then calculating the similarity between the predicted keys and the query. It is more effective in modeling long-range dependencies of patterns with varying appearances. Furthermore, we enhance the scale adaptability of deformable convolution and attention to handle scale variations among artifacts.

# 3. Methodology

In this section, we introduce the two key components of the proposed transformer: SEDC, an advanced deformable convolution module designed to capture local patterns with large scale and orientation variations; and SADA, a deformable attention module that exploits long-range relationships among repetitive patterns with varying sizes and non-uniform spatial distributions.
Subsequently, we present in detail the proposed transformer, SADT, which integrates SEDC and SADA within a multi-input multi-output framework, together with the training loss.

# 3.1. Scale-Enhanced Deformable Convolution

SEDC is designed for effectively capturing scale-varying and deformable patterns. The module consists of two key components: the cascaded DCNs [56] for modeling complex patterns with distortions, and the parallel Spatial-Channel Mixing Convolution (SCMC) layers for capturing patterns with large scale variations. The pipeline of SEDC is illustrated in Figure 2.

![](images/e12412d60de57c41d2a4242b11273d89a189d572ddabc87593d66f7b1eed0f6b.jpg)
Figure 2. SEDC configuration.

To capture patterns of large size, we introduce the SCMC layer to expand the receptive field. For the $i$-th channel of an input feature $\mathbf{X} \in \mathbb{R}^{H \times W \times C}$, SCMC first partitions the channel's feature map into several non-overlapping patches of size $K \times K$ and subsequently reshapes them into a matrix $\mathbf{X}_i \in \mathbb{R}^{K^2 \times \frac{HW}{K^2}}$. Denote such an operation by

$$
\Psi : \boldsymbol{X} \in \mathbb{R}^{H \times W \times C} \to \left\{\boldsymbol{X}_{i} \in \mathbb{R}^{K^{2} \times \frac{HW}{K^{2}}}, \forall i \right\}.
$$

Then, a learnable linear transform $\boldsymbol{W} \in \mathbb{R}^{K^2 \times K^2}$ is applied to each $\boldsymbol{X}_i$:

$$
\Phi_{\boldsymbol{W}}: \boldsymbol{X}_{i} \in \mathbb{R}^{K^{2} \times \frac{HW}{K^{2}}} \to \boldsymbol{W} \boldsymbol{X}_{i} \in \mathbb{R}^{K^{2} \times \frac{HW}{K^{2}}},
$$

where $\boldsymbol{W}$ is shared across the different $\boldsymbol{X}_{i}$'s. Afterward, the output of $\Phi_{\boldsymbol{W}}$ is reshaped and concatenated back to the original dimension $(H,W,C)$ using the inverse of $\Psi$. A gating mechanism is then applied, performing an element-wise product between the original input and the processed feature:

$$
\operatorname{SCMC}(\boldsymbol{X}) = \boldsymbol{X} \odot \Psi^{-1}\left(\Phi_{\boldsymbol{W}}\left(\Psi(\boldsymbol{X})\right)\right). \tag{1}
$$

SCMC can effectively capture large patterns with minimal computational overhead. However, similar to other MLP-based architectures (e.g., MLP-Mixer [35]), its non-overlapping partitions may struggle with patterns that span across regions, particularly when intersected by partition boundaries. This issue is mitigated by the cascaded DCNs, which help aggregate relevant non-local information into local windows and provide features at different scales to the parallel SCMCs. The efficient DCN utilizes a PConv operation [4] followed by a pointwise convolution to generate the deformable offsets and modulation scalars.

The complete pipeline of SEDC is shown in Fig. 2. In the main branch, the input feature channels are first reduced by a factor of 4. The cascaded DCNs process the corresponding features, while the parallel SCMCs handle the original input features. Finally, features with different receptive field shapes are merged via pointwise convolution. Overall, this design effectively captures local patterns with varying sizes, orientations, geometric distortions, and warping effects.
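To make the SCMC operation of Eq. (1) concrete, a minimal PyTorch-style sketch is given below; the class name, the default partition size, and the use of `unfold`/`fold` to realize $\Psi$ and $\Psi^{-1}$ are our own illustrative choices, not the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SCMC(nn.Module):
    """Sketch of the Spatial-Channel Mixing Convolution of Eq. (1)."""

    def __init__(self, patch_size: int = 8):
        super().__init__()
        self.k = patch_size
        # Learnable K^2 x K^2 transform W, shared over channels and patches.
        self.mix = nn.Linear(patch_size ** 2, patch_size ** 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, H, W)
        b, c, h, w = x.shape
        k = self.k
        assert h % k == 0 and w % k == 0, "H and W must be multiples of K"
        # Psi: split every channel into non-overlapping K x K patches,
        # giving a (B*C, K^2, HW/K^2) tensor.
        patches = F.unfold(x.reshape(b * c, 1, h, w), kernel_size=k, stride=k)
        # Phi_W: mix the K^2 pixels of each patch with the shared transform.
        mixed = self.mix(patches.transpose(1, 2)).transpose(1, 2)
        # Psi^{-1}: the patches are disjoint, so fold restores the layout.
        y = F.fold(mixed, output_size=(h, w), kernel_size=k, stride=k)
        # Gating: element-wise product with the original input.
        return x * y.reshape(b, c, h, w)
```

Because the patches are non-overlapping, `fold` is an exact inverse of `unfold` here, so the sketch follows Eq. (1) directly.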
# 3.2. Scale-Adaptive Deformable Attention

SADA integrates a multi-scale multi-head mechanism with deformable sampling offsets to effectively model long-range dependencies in repetitive artifact patterns. Given the input feature map $\mathbf{X} \in \mathbb{R}^{H \times W \times C}$, we first generate query $Q$, key $K$, and value $V$ projections, enriched with local context by applying pointwise convolutions to aggregate pixel-wise cross-channel context, followed by $3 \times 3$ depth-wise convolutions to encode channel-wise spatial context. A multi-head mechanism is then used, where $Q, K, V$ are split into $S$ parts along the channel dimension for subsequent processing.

Revisiting standard LSA: Omitting the head index, let $V_{u}$, $Q_{u}$, $K_{u}$ denote the $u$-th feature vector in $V$, $Q$, $K$, where $u$ denotes the related spatial location. In the standard LSA [31], for a query $Q_{u}$, the output $Y_{u}$ is defined as

$$
\boldsymbol{Y}_{u} = \sum_{v \in \mathcal{N}(u)} \omega_{u \rightarrow v} \boldsymbol{V}_{v}, \quad \omega_{u \rightarrow v} = \frac{\exp\left(\boldsymbol{Q}_{u}^{\top} \boldsymbol{K}_{v}\right)}{\sum_{w \in \mathcal{N}(u)} \exp\left(\boldsymbol{Q}_{u}^{\top} \boldsymbol{K}_{w}\right)}, \tag{2}
$$

where $\mathcal{N}(u) = u + \{-1,0,1\}^2$ denotes the index set of the neighboring points of the position $u$. The size of $\mathcal{N}(u)$ is rather small, which results in a poor capability to capture long-range dependencies across varying locations.

Multi-scale multi-head mechanism for LSA: To leverage the long-range dependencies of patterns, we introduce multi-dilation initial grids, denoted as $\Delta^{(s)} = \{-M_s, 0, +M_s\}^2$, to construct a multi-scale neighborhood. Specifically, for the $s$-th head, the neighborhood is denoted as $\mathcal{N}^{s}(u) = u + \Delta^{(s)}$. By assigning different sampling scales to each attention head, a broad and sparse receptive field, as shown in Figure 3(c), is formed through the aggregation of information across heads, enabling the layer to capture long-range contextual information more effectively.

Deformable-sampling offset for LSA: To handle the non-uniform distribution of structured artifacts, we integrate deformable sampling into LSA by predicting offsets $\delta_u^s$ for each position $u$ and each scale $s$. The neighboring set is given by

$$
\bar{\mathcal{N}}^{s}(u) = u + \Delta^{(s)} + \delta_{u}^{s}.
$$

Let $D_u^s$ denote $\delta_u^s$ in vector form, and let $D^s$ collect all the $D_u^s$. Then, the offsets $D^s$ are extracted from $Q$ as

$$
\boldsymbol{D}^{s} = \phi^{s}(\boldsymbol{Q}) \in \mathbb{R}^{H \times W \times 2|\Delta^{(s)}|}. \tag{3}
$$

Here, $\phi^s$ is implemented using a PConv [4] followed by a pointwise convolution. Instead of a single head $Q_s$, the full $Q$ is fed to $\phi^s$ for a holistic query view, enabling consistent head-specific adaptations. The learnable deformable offsets $D^s$ adjust the sampling locations within a head, enhancing the relevance of the sampled keys and values to the central query and improving the layer's handling of complex patterns.

Finally, the $s$-th head output of SADA is calculated by

$$
\boldsymbol{Y}_{u}^{s} = \sum_{v \in \bar{\mathcal{N}}^{s}(u)} \omega_{u \rightarrow v} \boldsymbol{V}_{v}^{s}. \tag{4}
$$

To mitigate the negative effects caused by the statistics-based offset generator $\phi^s$, low-relevance points, referred to as invalid points, are filtered out of the weighted sum in Eq. (4) via a low $\omega_{u \rightarrow v}$; the remaining points are considered valid. The outputs $\boldsymbol{Y}^{s}$ are combined into $\boldsymbol{Y}$ along the channel dimension and fed into a feed-forward network (FFN) for subsequent processing. The FFN is implemented by FRFN [55]. See Fig. 3 for an illustration.
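The sampling-and-attention step of Eqs. (2)-(4) can be sketched for a single head as below, assuming PyTorch; the module name, the convolutional offset predictor standing in for $\phi^s$, and the softmax temperature are illustrative assumptions rather than the exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SADAHead(nn.Module):
    """Single-head sketch of scale-adaptive deformable attention."""

    def __init__(self, dim: int, dilation: int = 3):
        super().__init__()
        self.m = dilation
        # Stand-in for phi^s: predicts 2 offsets per grid point (9 points).
        self.offset = nn.Conv2d(dim, 2 * 9, kernel_size=3, padding=1)

    def forward(self, q, k, v):  # each: (B, C, H, W)
        b, c, h, w = q.shape
        # Initial dilated grid Delta^(s) = {-M_s, 0, +M_s}^2.
        d = torch.tensor([-self.m, 0, self.m], device=q.device, dtype=q.dtype)
        grid = torch.stack(torch.meshgrid(d, d, indexing="ij"), -1).reshape(9, 2)
        # Deformable offsets delta_u^s predicted from the query map (Eq. 3).
        offs = self.offset(q).reshape(b, 9, 2, h, w).permute(0, 1, 3, 4, 2)
        # Absolute sampling positions in pixel coordinates.
        yy, xx = torch.meshgrid(torch.arange(h, device=q.device),
                                torch.arange(w, device=q.device), indexing="ij")
        base = torch.stack((yy, xx), -1).to(q.dtype)       # (H, W, 2)
        pos = base + grid.view(9, 1, 1, 2) + offs          # (B, 9, H, W, 2)
        # Normalize to [-1, 1] for grid_sample (x before y).
        norm = torch.stack((2 * pos[..., 1] / (w - 1) - 1,
                            2 * pos[..., 0] / (h - 1) - 1), -1)
        norm = norm.reshape(b, 9 * h, w, 2)
        # Bilinearly sample keys/values at the deformed neighbors.
        k_s = F.grid_sample(k, norm, align_corners=True).reshape(b, c, 9, h, w)
        v_s = F.grid_sample(v, norm, align_corners=True).reshape(b, c, 9, h, w)
        # omega_{u->v}: softmax over the 9 sampled keys, then aggregate (Eq. 4).
        attn = torch.softmax((q.unsqueeze(2) * k_s).sum(1) / c ** 0.5, dim=1)
        return (v_s * attn.unsqueeze(1)).sum(2)
```

With four such heads using $M_s \in \{1,3,5,7\}$ and their outputs concatenated along channels, one obtains the multi-scale multi-head behavior described above.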
![](images/26969461da252e50c7dc2d873906a68b609d3aad27c7861c0c1a3cf449eef7.jpg)
(a) Overall SADA.

![](images/366547a4407cb33e037e3dbe47b51e0544fe3af5a66f089ad586396273162cb8.jpg)
(b) Deformable sampling process of the $s$-th head.

![](images/780f209f31a24469ddd0a586cb7e6dcd355851241e63cb4cd702a586b842f.jpg)

![](images/7c0e7d3ca0817fa01603efc28366f9e06617052167e5bd6ba41df3116162ee16.jpg)
(c) Initial sampling grids $\{\Delta^{(s)}\}$.

![](images/0b220c97b6564a4020272459b3f394be20e6f26d481b3ac2dad432dd4f0a1683.jpg)
(d) The process instance for one head.
Figure 3. Overview of SADA: (a) SADA employs a multi-scale multi-head mechanism to aggregate (c) a broad, sparse receptive field for capturing long-range context, while utilizing (b) deformable sampling offsets to model complex patterns. (d) A process instance of SADA within one head.

# 3.3. SADT for Structured Artifact Removal

Building on SEDC and SADA, we propose a transformer, called SADT, for structured artifact removal. SADT employs a multi-input multi-output (MIMO) strategy, as shown in Fig. 4. Given a degraded image $\mathbf{Y} \in \mathbb{R}^{W \times H \times 3}$, SADT initially extracts shallow features of size $H \times W \times C$ via a convolution layer. These features then pass through 4 encoder blocks, capturing fine to coarse scales, with each block comprising $N_{i}$ cascaded SADA layers to flexibly model long-range dependencies of repetitive artifacts and a pair of SEDC layers at the end for capturing scale-varying features.

Within the encoder, channels are expanded and the spatial resolution is reduced. Following [8, 9], features from downsampled degraded images are merged into the main path through a fusion module and subsequently processed by 3 decoders to progressively reconstruct the image from coarse to fine scales. Features in the decoder are concatenated with those from the encoder via skip connections, followed by a convolutional layer for channel dimension adjustment. The convolutional layer following each decoder generates a residual image, which is added to the corresponding downsampled input to restore the image. The image at the finest scale serves as the final output, while the other scales also contribute to the loss function during training.

# 3.4. Loss Function for NN Training

A multi-scale loss is used for training. Let $O_{t}$ be the output from the $t$-th scale decoder block, and let $X^{gt}$ be the ground truth. Besides the standard $L_{1}$ fitting loss, we also include an auxiliary loss function, tailored for each task, that minimizes the distance between the output and the ground truth in feature space.

Image demoiréing: The perceptual loss [16] $\mathcal{L}_p$ is used as an auxiliary loss, and the overall loss is

$$
\mathcal{L} = \sum_{t=1}^{3} \mathcal{L}_{1}\left(\boldsymbol{O}_{t}, \boldsymbol{X}_{\downarrow 2^{t-1}}^{gt}\right) + \lambda_{p} \cdot \mathcal{L}_{p}\left(\boldsymbol{O}_{t}, \boldsymbol{X}_{\downarrow 2^{t-1}}^{gt}\right), \tag{5}
$$

where $\lambda_p > 0$ is a hyper-parameter, set to 1 in experiments.
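A minimal sketch of how Eq. (5) can be evaluated is given below, assuming PyTorch; the function name, the average-pooling downsampler used for $\boldsymbol{X}^{gt}_{\downarrow 2^{t-1}}$, and the `perceptual_loss` callable standing in for $\mathcal{L}_p$ [16] are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def multiscale_demoireing_loss(outputs, gt, perceptual_loss, lambda_p=1.0):
    """Sketch of the multi-scale training loss of Eq. (5).

    `outputs` holds the three decoder outputs O_1..O_3 from fine to coarse;
    the ground truth is matched to each scale by average-pool downsampling
    (an assumption; any antialiased resize would do).
    """
    loss = 0.0
    for t, o_t in enumerate(outputs):  # t = 0, 1, 2 for scales 1, 1/2, 1/4
        gt_t = F.avg_pool2d(gt, 2 ** t) if t > 0 else gt
        loss = loss + F.l1_loss(o_t, gt_t) + lambda_p * perceptual_loss(o_t, gt_t)
    return loss
```

The debanding/deraining loss below has the same structure, with the frequency term swapped in for the perceptual term.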
Image debanding/deraining: Unlike moiré patterns, banding artifacts and rain streaks exhibit more pronounced stripe-like appearances, yielding strong responses in the frequency domain. Thus, the frequency loss $\mathcal{L}_f$ is introduced as the additional loss term, and the overall loss is

$$
\mathcal{L} = \sum_{t = 1}^{3} \mathcal{L}_{1}\left(\boldsymbol{O}_{t}, \boldsymbol{X}_{\downarrow 2^{t-1}}^{gt}\right) + \lambda_{f} \cdot \mathcal{L}_{f}\left(\boldsymbol{O}_{t}, \boldsymbol{X}_{\downarrow 2^{t-1}}^{gt}\right), \tag{6}
$$

where $\lambda_{f} > 0$ is a hyper-parameter, set to 0.1 in our experiments.

![](images/3a9cacf7fc8ba0171f708675a83945402a6028153ea077f50e4f02ae7094cc26.jpg)
Figure 4. The proposed network employs a multi-scale hierarchical encoder-decoder architecture. Each block comprises $N$ cascaded SADA layers and a pair of SEDC layers at its ends.

# 4. Experiments

In this section, we first describe the experimental settings, and then evaluate the effectiveness of our SADT on several structured artifact removal tasks, including image demoiréing, debanding, and deraining. Ablation studies are then conducted to assess each component of SADT. More results can be found in the supplementary material.

# 4.1. Experimental Settings

In our SADT, the number of channels is set to $C = 32$, while $\{N_0, N_1, N_2, N_3\}$ are set to $\{0, 4, 4, 4\}$. The partition size in SCMC is $K = 8$. 4 heads are used in SADA, with $M_s$ set to $\{1, 3, 5, 7\}$. Model training employs the Adam optimizer [17] with $\beta_1 = 0.9$ and $\beta_2 = 0.999$. Code will be released upon acceptance. The experimental datasets and implementation details are listed below.

Image demoiréing: Three datasets are used for benchmarking: TIP2018 [33], FHDMi [13], and LCDMoiré [49]. Training employs cyclic cosine annealing [26], with an initial learning rate of $2 \times 10^{-4}$ decaying to $10^{-6}$ over each cycle. For the two high-definition datasets, FHDMi and LCDMoiré, we randomly crop $512 \times 512$ patches from the images and train the model for 150 epochs with an annealing cycle of 50 epochs and a batch size of 2. For the TIP2018 dataset, the model is trained for 70 epochs with an annealing cycle of 10 epochs. Consistent with [48], no data augmentation is used in training.

Image debanding: The DID dataset [54] is used for benchmarking. It contains 51,490 pairs of banded and latent image patches of size $256 \times 256$ (30,829 training pairs, 10,237 validation pairs, and 10,354 testing pairs). The initial learning rate is $2 \times 10^{-4}$, decreased to $10^{-6}$ over 300K iterations with the cosine annealing scheme [26]. The batch size is 4. Data augmentations include horizontal/vertical flipping and rotations of $0^{\circ}$, $90^{\circ}$, $180^{\circ}$, and $270^{\circ}$.

Image deraining: The large-scale real-world SPAD dataset [39] is used for benchmarking rain streak removal. It contains 638,492 image pairs for training and 1,000 for testing. The batch size is 32 and the total number of iterations is 300K. The initial learning rate is fixed at $3 \times 10^{-4}$ for the first 92K iterations and then decreased to $1 \times 10^{-6}$ over the remaining 208K iterations with the cosine annealing scheme [26]. Random vertical/horizontal flips are used for data augmentation.

Two quantitative evaluation metrics are used for all tasks: PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity Index). Consistent with existing deraining methods [5, 44], PSNR/SSIM scores are computed on the Y channel of the YCbCr color space.
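Since the scores are reported on the luma channel, the following small sketch (ours, not the authors' evaluation script) shows a typical way to compute Y-channel PSNR for 8-bit images using the BT.601 conversion:

```python
import numpy as np

def rgb_to_y(img: np.ndarray) -> np.ndarray:
    """BT.601 luma (Y of YCbCr) from an RGB image with values in [0, 255]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return 16.0 + 0.257 * r + 0.504 * g + 0.098 * b

def psnr_y(pred: np.ndarray, gt: np.ndarray) -> float:
    """PSNR between the Y channels of two 8-bit RGB images."""
    mse = np.mean((rgb_to_y(pred.astype(np.float64))
                   - rgb_to_y(gt.astype(np.float64))) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)
```

SSIM is computed analogously on the Y channel, e.g., with `skimage.metrics.structural_similarity`.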
For image demoiréing and debanding, where periodic patterns significantly impact perception, the LPIPS metric [52] is also used (see the usage sketch below).

# 4.2. Quantitative and Qualitative Evaluation

Image demoiréing: See Tab. 1 for the comparison of different methods. SADT outperforms all compared methods on the PSNR metric and achieves the best SSIM scores on both the TIP2018 and FHDMi datasets. While many methods, including SADT, achieve comparable SSIM scores on the LCDMoiré dataset, SADT maintains its superiority with the highest PSNR performance. The substantial performance gains on these datasets demonstrate SADT's effectiveness in handling moiré artifacts. Qualitative analysis also reveals that SADT achieves remarkable improvements in moiré pattern removal, particularly in challenging scenarios such as patterns intertwined with hair details and patterns distributed across flat clothing regions, as shown in Fig. 5.
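Regarding the perceptual metric mentioned above, LPIPS is typically computed with the reference implementation of [52] (the `lpips` package); below is a minimal usage sketch, whose evaluation details may differ from the paper's exact protocol:

```python
import lpips
import torch

# AlexNet-backbone LPIPS, a common choice for perceptual evaluation.
loss_fn = lpips.LPIPS(net='alex')

# Images as (B, 3, H, W) tensors scaled to [-1, 1]; random data for demo.
restored = torch.rand(1, 3, 256, 256) * 2 - 1
reference = torch.rand(1, 3, 256, 256) * 2 - 1
print(loss_fn(restored, reference).item())  # lower is better
```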
| Method | TIP2018 LPIPS↓ | TIP2018 PSNR↑ | TIP2018 SSIM↑ | FHDMi LPIPS↓ | FHDMi PSNR↑ | FHDMi SSIM↑ | LCDMoiré LPIPS↓ | LCDMoiré PSNR↑ | LCDMoiré SSIM↑ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MopNet [12] | - | 27.75 | 0.895 | 0.1794 | 22.76 | 0.7958 | - | - | - |
| FHDe2Net [13] | - | 27.78 | 0.896 | 0.1688 | 22.93 | 0.7885 | - | 41.40 | - |
| WDNet [24] | - | 28.08 | 0.904 | - | - | - | - | 29.66 | 0.9670 |
| MBCNN [53] | - | 30.03 | 0.893 | 0.1980 | 22.31 | 0.8095 | - | 44.04 | 0.9948 |
| ESDNet [48] | 0.0816 | 29.81 | 0.916 | 0.1354 | 24.50 | 0.8351 | 0.0097 | 44.83 | **0.9963** |
| CDDF [37] | - | 28.87 | 0.894 | 0.1610 | 23.63 | 0.8040 | - | 44.10 | - |
| RVDNet [7] | - | - | - | - | 24.29 | 0.8352 | - | 44.54 | 0.9932 |
| RRID [46] | - | - | - | - | 24.39 | 0.8300 | - | - | - |
| Ours | **0.0608** | **30.77** | **0.926** | **0.1238** | **24.96** | **0.8463** | **0.0065** | **46.43** | 0.9923 |

Table 1. Quantitative results for image demoiréing on TIP2018 [33], FHDMi [13], and LCDMoiré [49]. The best results are boldfaced.
| Method | PSNR↑ | SSIM↑ | LPIPS↓ |
| --- | --- | --- | --- |
| FCDR [14] | 25.73 | 0.7170 | 0.3766 |
| FFmpeg [11] | 35.33 | 0.9352 | 0.0622 |
| AdaDeband [36] | 35.35 | 0.9392 | 0.0639 |
| BitNet [1] | 38.24 | 0.9633 | 0.0505 |
| ADNet [34] | 38.29 | 0.9612 | 0.0499 |
| MWCNN [25] | 39.24 | 0.9688 | 0.4854 |
| MPRNet [50] | 39.42 | 0.9697 | 0.0461 |
| Restormer [51] | 39.50 | 0.9709 | 0.0478 |
| Ours | **39.78** | **0.9729** | **0.0453** |

Table 2. Quantitative results of image debanding on the DID dataset [54]. The best results are boldfaced.
![](images/09e4b09302f65d399bb59c1de75c81247fdd7ac51fd43a7afd0f0df09fcca9fc.jpg)
Figure 5. Visual inspection of the results from different image demoiréing methods on sample images; see the zoom-in boxes for detailed inspection.

Image debanding: Given the limited availability of open-source deep learning methods specifically for debanding, we extended our evaluation to include methods adapted from related tasks. As shown in Tab. 2, SADT demonstrates superior performance across all evaluation metrics. See Fig. 6 for a visual comparison, where SADT achieves optimal results with minimal banding artifacts and superior color fidelity.

Image deraining: The SPAD dataset [39] is used for benchmarking. As shown in Tab. 3, SADT outperforms all other methods in PSNR and SSIM. Notably, SADT achieves a 0.23dB improvement over the second-best performer, NeRD-Rain-S [6], with a 42% smaller parameter count (see Tab. 4). As shown in Fig. 7, the visual comparison further validates our method's effectiveness. While some competing methods leave residual streaks and NeRD-Rain-S produces discontinuous blocks in its outputs, SADT effectively removes rain streaks while maintaining high visual quality.

Complexity comparison: See Tab. 4 for a comparison of different methods in terms of FLOPs and parameter counts. Our model, SADT, maintains relatively low FLOPs and a small number of parameters while achieving SOTA performance, as evidenced in Tabs. 1-3. This indicates that the effectiveness of our model stems from its design rather than its size.

# 4.3. Ablation Study

This study evaluates the contribution of the key designs, SADA, SEDC, and the MIMO architecture, to the performance gain of our model.
| Method | Source | PSNR↑ | SSIM↑ |
| --- | --- | --- | --- |
| PReNet [32] | CVPR'19 | 40.16 | 0.9816 |
| RCDNet [38] | CVPR'20 | 43.36 | 0.9831 |
| SPDNet [47] | ICCV'21 | 43.55 | 0.9875 |
| MPRNet [50] | CVPR'21 | 45.00 | 0.9897 |
| ECNet [20] | WACV'22 | 44.32 | 0.9913 |
| IDT [44] | TPAMI'22 | 47.34 | 0.9929 |
| Uformer-B [41] | CVPR'22 | 47.84 | 0.9925 |
| Restormer [51] | CVPR'22 | 47.98 | 0.9921 |
| DRSformer [5] | CVPR'23 | 48.53 | 0.9924 |
| NeRD-Rain-S [6] | CVPR'24 | 48.90 | 0.9936 |
| Ours | - | **49.13** | **0.9939** |
Table 3. Image deraining results on the SPAD dataset [39]. The best results are boldfaced.
| Task | Method | #Params (M) | #FLOPs (G) |
| --- | --- | --- | --- |
| Demoiréing | ESDNet [48] | 5.93 | 17.6 |
| Demoiréing | MBCNN [53] | 13.9 | - |
| Debanding | Restormer [51] | 26.13 | 150.0 |
| Debanding | MPRNet [50] | 20.13 | 777.0 |
| Deraining | NeRD-Rain-S [6] | 10.53 | 79.2 |
| Deraining | DRSformer [5] | 33.65 | 242.9 |
| - | Ours | 6.13 | 39.7 |
Table 4. Complexity comparison with SOTA methods across different tasks. The test image is of size $256 \times 256$.

For SADA, we removed the multi-scale multi-head mechanism and the deformable sampling offsets, reducing it to multi-head LSA. For SEDC, we substituted it with a single SCMC. To ensure a fair comparison, all models were adjusted to comparable sizes by modifying the channel numbers, and all were trained on the DID dataset [54] for 300K iterations. As shown in Tab. 5, each component makes a significant contribution to the performance improvement. Notably, SADA contributes a PSNR gain of 0.78dB and an SSIM improvement of 0.003, while the inclusion of SEDC yields an additional PSNR increase of 0.17dB.

![](images/d67476ce008b6e90254367f5b920789fb20385f4661dc52e9b60162aefaf1bc0.jpg)
Degraded

![](images/d691ae42742b29842e54b1ccbbb3c974c519c71e6701947492c1e7047b82148f.jpg)
Reference

![](images/c833da0ab580a2d86d53810a244d54d169308991c112581edf1020cb5e8288e9.jpg)
FFmpeg

![](images/831fc324be9d9600d55e81fae8bc6e030c271a484ab588f89aaad949967ee31a.jpg)
BitNet

![](images/476347858e28260e4c3cb099275709896347ac097ea0a1281ba1cfd7ce90e429.jpg)
ADNet

![](images/308722b10de25de6940078184cbd85d0fdd10b2f66790cda603f15ffc611d5d2.jpg)
MWCNN

![](images/9fc412018d2df07dc4983bc954290d12761dac8eee708cefa2aef5bbe09b7368.jpg)
MPRNet

![](images/ee0ccb03a30074b1912753b00796a13c6b83dbe013effe654294fb47b065aad3.jpg)
Restormer

![](images/51caff34d730129e89291a9db1e8e6b8930a7ad08b9100a4ab4fbe28705f1665.jpg)
Ours

Figure 6. Visual inspection of the results from different image debanding methods on sample images; see the zoom-in boxes for detailed inspection.

![](images/ecfe37fee3dcaf0a42e963ab705ad66f1c3f2d3fac6f0279c8bdc5d8c1302290.jpg)
Degraded

![](images/000aa278105d67f7a69eb4a7bc1f826e098f25661e4e789987ff51bc309fe272.jpg)
Reference

![](images/e15a3c2430e39fd7123886b5a1710ae9ba1da945d5fb288896c4355e5607ee8f.jpg)
SPDNet

![](images/a049252f83c7c347e0e67fa0e7a302e4aec2ee6636b18d23b4a0409918faf8b3.jpg)
Restormer

![](images/62cfc5aadba427b59bf2a9f566536de2746a6d7030ddf4a0a4850b7c7b62825f.jpg)
DRSformer

![](images/5d9797410ae695ae22fb6444ec2d21f72bc0478f1a0746ada3611a1140555cde.jpg)
NeRD-Rain-S

![](images/992b7a7a872e1f181b9a0d38b0728510fdee449bdacd466939f2ff0b3110199b.jpg)
Ours

Figure 7. Visual inspection of the results from different image deraining methods on sample images.
| SADA | SEDC | MIMO | PSNR (dB)↑ | SSIM↑ | LPIPS↓ |
| --- | --- | --- | --- | --- | --- |
| - | - | - | 38.62 | 0.9663 | 0.0587 |
| - | - | ✓ | 38.82 | 0.9678 | 0.0533 |
| - | ✓ | ✓ | 39.05 | 0.9692 | 0.0521 |
| ✓ | - | ✓ | 39.61 | 0.9712 | 0.0480 |
| ✓ | ✓ | ✓ | **39.78** | **0.9729** | **0.0453** |

Table 5. Results of ablation studies. Boldfaced: best values.
Impact of the grid dilation: In this study, we investigate the performance benefit of the multi-head mechanism in SADA by comparing different settings of $M_{s}$ on the DID dataset. Tab. 6 shows that the multi-dilation offset grid improves performance at a similar model size. Compared to assigning every head the same small dilation $\{1,1,1,1\}$, our setting $\{1,3,5,7\}$ better exploits long-range dependencies, resulting in a PSNR gain of 0.2dB and an LPIPS reduction of 0.0039.

Visualization of deformable offsets: Refer to Fig. 8 for a comparison of the sampling points between the standard LSA and our SADA. The purple star indicates the central point, while the yellow points and red circles represent the initial and final sampling positions, respectively. The comparison shows that SADA better captures long-range and multi-scale context, which proves advantageous for the ensuing weighted aggregation.
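Returning to the dilation settings above, the following small enumeration (ours, for illustration only) makes the "broad and sparse" receptive field explicit: the union of the initial grids $\{\Delta^{(s)}\}$ over the four heads covers a $15 \times 15$ window with only 33 distinct sampling offsets.

```python
# Union of the initial grids {-M_s, 0, +M_s}^2 across heads with
# M_s in {1, 3, 5, 7}: 9 points per head, the origin shared by all.
points = {(dx * M, dy * M)
          for M in (1, 3, 5, 7)
          for dx in (-1, 0, 1)
          for dy in (-1, 0, 1)}
print(len(points))                      # 33 distinct offsets
print(max(abs(x) for x, _ in points))   # span reaches +/-7, i.e. a 15x15 window
```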
| $M_s$ | PSNR (dB)↑ | SSIM↑ | LPIPS↓ |
| --- | --- | --- | --- |
| {1,1,1,1} | 39.58 | 0.9708 | 0.0492 |
| {1,2,3,4} | 39.70 | 0.9723 | 0.0463 |
| {1,3,5,7} | 39.78 | 0.9729 | 0.0453 |
![](images/bfd953a627a2abc4954138fa3debdc2328a09d9327ee929ea53cae079735db6f.jpg)
Table 6. Results using different settings of the dilation $M_{s}$.

![](images/6d76e442b18716801e65ee1cd73101a25e62b04883e1cf54f954d2eb3ce53086.jpg)

![](images/73d98c9d86f7edf79ab57523a9982fd5945b3bbc2a7af6ed44c4a69f34719144.jpg)

![](images/dd646ab5fd9eefd96601511c31171cc032884f794f17d8c47f51ad61d9b7c460.jpg)

![](images/2a0b1a3470f6a21ffa7906f139fa5646057af9a95b030c7e5c441f09400aeeeb.jpg)

![](images/59a0015c909e693453bcbbf9c95f4fea10a30d1297d9446ee47f6ea50a835e12.jpg)

Figure 8. Visualization of sampling points for LSA and SADA: (a) the corresponding down-sampled image; (b) sampling points for each LSA head; (c) sampling points for each SADA head (0-th to 3-rd head).

# 5. Conclusion

In this paper, we presented SADT, a universal transformer-based architecture for image restoration across diverse artifacts. Our approach integrates the SEDC module to capture scale-varying patterns with abundant orientations and potential distortions, and the SADA module to model long-range relationships among repetitive patterns with diverse sizes and non-uniform distributions. Extensive experiments showed that SADT consistently outperforms SOTA methods in tasks including image demoiréing, debanding, and deraining.

# Acknowledgements

This work was supported by the National Key R&D Program of China (2023YFA1011601), the Basic and Applied Basic Research Foundation of Guangdong Province (2024A1515012287), the Science and Technology Key Program of Guangzhou, China (2023B03J1388), the National Natural Science Foundation of China (62372186), the Natural Science Foundation of Guangdong Province (2023A1515012841), the Fundamental Research Funds for the Central Universities (x2jsD2230220), the National Natural Science Foundation of China (62106077), the Natural Science Foundation of Guangdong Province (2022A1515011087), and Singapore MOE AcRF Tier 1 (Grant No. A-8000981-00-00).

# References

[1] Junyoung Byun, Kyujin Shim, and Changick Kim. Bitnet: Learning-based bit-depth expansion. In Proceedings of the Asian Conference on Computer Vision, pages 67-82. Springer, 2019. 2, 7
[2] Jiezhang Cao, Jingyun Liang, Kai Zhang, Yawei Li, Yulun Zhang, Wenguan Wang, and Luc Van Gool. Reference-based image super-resolution with deformable attention transformer. In European conference on computer vision, pages 325-342. Springer, 2022. 3
[3] Hanting Chen, Yunhe Wang, Tianyu Guo, Chang Xu, Yiping Deng, Zhenhua Liu, Siwei Ma, Chunjing Xu, Chao Xu, and Wen Gao. Pre-trained image processing transformer. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 12299-12310, 2021. 3
[4] Jierun Chen, Shiu-hong Kao, Hao He, Weipeng Zhuo, Song Wen, Chul-Ho Lee, and S-H Gary Chan. Run, don't walk: chasing higher flops for faster neural networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 12021-12031, 2023. 4
[5] Xiang Chen, Hao Li, Mingqiang Li, and Jinshan Pan. Learning a sparse transformer network for effective image deraining. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5896-5905, 2023. 2, 3, 6, 7
[6] Xiang Chen, Jinshan Pan, and Jiangxin Dong. Bidirectional multi-scale implicit neural representations for image deraining. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 25627-25636, 2024. 2, 7
[7] Yijia Cheng, Xin Liu, and Jingyu Yang. Recaptured raw screen image and video demoiring via channel and spatial modulations. Advances in Neural Information Processing Systems, 36, 2024. 7
[8] Sung-Jin Cho, Seo-Won Ji, Jun-Pyo Hong, Seung-Won Jung, and Sung-Jea Ko. Rethinking coarse-to-fine approach in single image deblurring. In Proceedings of the IEEE/CVF international conference on computer vision, pages 4641-4650, 2021. 5
[9] Yuning Cui, Wenqi Ren, Sining Yang, Xiaochun Cao, and Alois Knoll. Irnext: Rethinking convolutional network design for image restoration. In International conference on machine learning, 2023. 5
[10] Jifeng Dai, Haozhi Qi, Yuwen Xiong, Yi Li, Guodong Zhang, Han Hu, and Yichen Wei. Deformable convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 764-773, 2017. 3
[11] FFmpeg filters: deband. [Online]. Available: https://ffmpeg.org/ffmpeg-filters.html#deband, 2021. Accessed: Aug. 31, 2021. 7
[12] Bin He, Ce Wang, Boxin Shi, and Ling-Yu Duan. Mop moiré patterns using mopnet. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2424-2432, 2019. 2, 7
[13] Bin He, Ce Wang, Boxin Shi, and Ling-Yu Duan. FHDe2Net: Full high definition demoiréing network. In Proceedings of the European Conference on Computer Vision, pages 713-729. Springer, 2020. 6, 7
[14] Qin Huang, Hui Yong Kim, Wen-Jiin Tsai, Se Yoon Jeong, Jin Soo Choi, and C-C Jay Kuo. Understanding and removal of false contour in hevc compressed images. IEEE Transactions on Circuits and Systems for Video Technology, 28(2):378-391, 2016. 7
[15] Yan Huang, Xinchang Lu, Yuhui Quan, Yong Xu, and Hui Ji. Image shadow removal via multi-scale deep retina decomposition. Pattern Recognition, 159:111126, 2025. 3
[16] Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In Proceedings of the European Conference on Computer Vision, pages 694-711. Springer, 2016. 5
[17] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. 6
[18] Gang Li, Di Xu, Xing Cheng, Lingyu Si, and Changwen Zheng. Simvit: Exploring a simple vision transformer with sliding windows. In 2022 IEEE International Conference on Multimedia and Expo (ICME), pages 1-6. IEEE, 2022. 3
[19] Yu Li, Robby T Tan, Xiaojie Guo, Jiangbo Lu, and Michael S Brown. Rain streak removal using layer priors. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2736-2744, 2016. 2
[20] Yizhou Li, Yusuke Monno, and Masatoshi Okutomi. Single image deraining network with rain embedding consistency and layered LSTM. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, pages 4060-4069, 2022. 7
[21] Jingyun Liang, Jiezhang Cao, Guolei Sun, Kai Zhang, Luc Van Gool, and Radu Timofte. Swinir: Image restoration using swin transformer. In Proceedings of the IEEE/CVF international conference on computer vision, pages 1833-1844, 2021. 3
[22] Fanglei Liu, Jingyu Yang, and Huanjing Yue. Moiré pattern removal from texture images via low-rank and sparse matrix decomposition. In 2015 Visual Communications and Image Processing, pages 1-4. IEEE, 2015. 2
[23] Jing Liu, Xin Wen, Weizhi Nie, Yuting Su, Peiguang Jing, and Xiaokang Yang. Residual-guided multiscale fusion network for bit-depth enhancement. IEEE Transactions on Circuits and Systems for Video Technology, 32(5):2773-2786, 2021. 2, 3
[24] Lin Liu, Jianzhuang Liu, Shanxin Yuan, Gregory Slabaugh, Ales Leonardis, Wengang Zhou, and Qi Tian. Wavelet-based dual-branch network for image demoiring. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XIII 16, pages 86-102. Springer, 2020. 7
[25] Pengju Liu, Hongzhi Zhang, Kai Zhang, Liang Lin, and Wangmeng Zuo. Multi-level wavelet-cnn for image restoration. In Proceedings of the IEEE conference on computer vision and pattern recognition workshops, pages 773-782, 2018. 7
[26] Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983, 2016. 6
[27] Yu Luo, Yong Xu, and Hui Ji. Removing rain from a single image via discriminative sparse coding. In Proceedings of the IEEE international conference on computer vision, pages 3397-3405, 2015. 2
[28] Duong Hai Nguyen, Se-Ho Lee, and Chul Lee. Multiscale coarse-to-fine guided screenshot demoiring. IEEE Signal Processing Letters, 2023. 2, 3
[29] Yuhui Quan, Shijie Deng, Yixin Chen, and Hui Ji. Deep learning for seeing through window with raindrops. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2463-2471, 2019. 2
[30] Yuhui Quan, Xuyi He, Ruotao Xu, Yong Xu, and Hui Ji. Image debanding using cross-scale invertible networks with banded deformable convolutions. Neural Networks, page 107270, 2025. 3
[31] Prajit Ramachandran, Niki Parmar, Ashish Vaswani, Irwan Bello, Anselm Levskaya, and Jon Shlens. Stand-alone self-attention in vision models. Advances in neural information processing systems, 32, 2019. 3, 4
[32] Dongwei Ren, Wangmeng Zuo, Qinghua Hu, Pengfei Zhu, and Deyu Meng. Progressive image deraining networks: A better and simpler baseline. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 3937-3946, 2019. 7
[33] Yujing Sun, Yizhou Yu, and Wenping Wang. Moiré photo restoration using multiresolution convolutional neural networks. IEEE Transactions on Image Processing, 27(8):4160-4172, 2018. 2, 3, 6, 7
[34] Chunwei Tian, Yong Xu, Zuoyong Li, Wangmeng Zuo, Lunke Fei, and Hong Liu. Attention-guided cnn for image denoising. Neural Networks, 124:117-129, 2020. 7
[35] Ilya O Tolstikhin, Neil Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Thomas Unterthiner, Jessica Yung, Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, et al. Mlp-mixer: An all-mlp architecture for vision. Advances in neural information processing systems, 34:24261-24272, 2021. 4
[36] Zhengzhong Tu, Jessie Lin, Yilin Wang, Balu Adsumilli, and Alan C Bovik. Adaptive debanding filter. IEEE Signal Processing Letters, 27:1715-1719, 2020. 7
[37] Ce Wang, Bin He, Shengsen Wu, Renjie Wan, Boxin Shi, and Ling-Yu Duan. Coarse-to-fine disentangling demoiréing framework for recaptured screen images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(8):9439-9453, 2023. 7
[38] Hong Wang, Qi Xie, Qian Zhao, and Deyu Meng. A model-driven deep neural network for single image rain removal. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 3103-3112, 2020. 3, 7
[39] Tianyu Wang, Xin Yang, Ke Xu, Shaozhe Chen, Qiang Zhang, and Rynson WH Lau. Spatial attentive single-image deraining with a high quality real rain dataset. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 12270-12279, 2019. 6, 7
[40] Wenhai Wang, Jifeng Dai, Zhe Chen, Zhenhang Huang, Zhiqi Li, Xizhou Zhu, Xiaowei Hu, Tong Lu, Lewei Lu, Hongsheng Li, et al. Internimage: Exploring large-scale vision foundation models with deformable convolutions. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 14408-14419, 2023. 3
[41] Zhendong Wang, Xiaodong Cun, Jianmin Bao, Wengang Zhou, Jianzhuang Liu, and Houqiang Li. Uformer: A general u-shaped transformer for image restoration. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 17683-17693, 2022. 3, 7
[42] Xin Wen, Weizhi Nie, Jing Liu, and Yuting Su. Mrft: Multiscale recurrent fusion transformer based prior knowledge for bit-depth enhancement. IEEE Transactions on Circuits and Systems for Video Technology, 33(10):5562-5575, 2023. 3
[43] Zhuofan Xia, Xuran Pan, Shiji Song, Li Erran Li, and Gao Huang. Vision transformer with deformable attention. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 4794-4803, 2022. 3
[44] Jie Xiao, Xueyang Fu, Aiping Liu, Feng Wu, and Zheng-Jun Zha. Image de-raining transformer. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(11):12978-12995, 2022. 3, 6, 7
[45] Yuwen Xiong, Zhiqi Li, Yuntao Chen, Feng Wang, Xizhou Zhu, Jiapeng Luo, Wenhai Wang, Tong Lu, Hongsheng Li, Yu Qiao, et al. Efficient deformable convnets: Rethinking dynamic and sparse operator for vision applications. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5652-5661, 2024. 3
[46] Shuning Xu, Binbin Song, Xiangyu Chen, Xina Liu, and Jiantao Zhou. Image demoiring in raw and srgb domains. In European Conference on Computer Vision, pages 108-124. Springer, 2025. 7
[47] Qiaosi Yi, Juncheng Li, Qinyan Dai, Faming Fang, Guixu Zhang, and Tieyong Zeng. Structure-preserving deraining with residue channel prior guidance. In Proceedings of the IEEE/CVF international conference on computer vision, pages 4238-4247, 2021. 7
[48] Xin Yu, Peng Dai, Wenbo Li, Lan Ma, Jiajun Shen, Jia Li, and Xiaojuan Qi. Towards efficient and scale-robust ultra-high-definition image demoiring. In Proceedings of the European Conference on Computer Vision, pages 646-662. Springer, 2022. 2, 3, 6, 7
[49] Shanxin Yuan, Radu Timofte, Gregory Slabaugh, Ales Leonardis, Bolun Zheng, Xin Ye, Xiang Tian, Yaowu Chen, Xi Cheng, Zhenyong Fu, et al. Aim 2019 challenge on image demoireing: Methods and results. In 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pages 3534-3545. IEEE, 2019. 6, 7
[50] Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Ming-Hsuan Yang, and Ling Shao. Multi-stage progressive image restoration. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 14821-14831, 2021. 7
[51] Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, and Ming-Hsuan Yang. Restormer: Efficient transformer for high-resolution image restoration. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5728-5739, 2022. 2, 3, 7
[52] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pages 586-595, 2018. 6
[53] Bolun Zheng, Shanxin Yuan, Gregory Slabaugh, and Ales Leonardis. Image demoiréing with learnable bandpass filters. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3636-3645, 2020. 7
[54] Raymond Zhou, Shahrukh Athar, Zhongling Wang, and Zhou Wang. Deep image debanding. In Proceedings of the IEEE International Conference on Image Processing, pages 1951-1955. IEEE, 2022. 6, 7, 8
[55] Shihao Zhou, Duosheng Chen, Jinshan Pan, Jinglei Shi, and Jufeng Yang. Adapt or perish: Adaptive sparse transformer with attentive feature refinement for image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2952-2963, 2024. 5
[56] Xizhou Zhu, Han Hu, Stephen Lin, and Jifeng Dai. Deformable convnets v2: More deformable, better results. In Proceedings of the IEEE/CVF conference on Computer Vision and Pattern Recognition, pages 9308-9316, 2019. 3
[57] Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, and Jifeng Dai. Deformable detr: Deformable transformers for end-to-end object detection. arXiv preprint arXiv:2010.04159, 2020. 3
}, + { + "bbox": [ + 322, + 161, + 395, + 175 + ], + "type": "inline_equation", + "content": "\\mathrm{Xu^{2,3}*}" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 417, + 161, + 452, + 174 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 417, + 161, + 452, + 174 + ], + "spans": [ + { + "bbox": [ + 417, + 161, + 452, + 174 + ], + "type": "text", + "content": "Hui Ji" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 97, + 175, + 512, + 232 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 97, + 175, + 512, + 232 + ], + "spans": [ + { + "bbox": [ + 97, + 175, + 512, + 232 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 97, + 175, + 512, + 232 + ], + "type": "text", + "content": "School of Computer Science and Engineering, South China University of Technology " + }, + { + "bbox": [ + 97, + 175, + 512, + 232 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 97, + 175, + 512, + 232 + ], + "type": "text", + "content": "Institute for Super Robotics, South China University of Technology " + }, + { + "bbox": [ + 97, + 175, + 512, + 232 + ], + "type": "inline_equation", + "content": "^{3}" + }, + { + "bbox": [ + 97, + 175, + 512, + 232 + ], + "type": "text", + "content": "Key Laboratory of Large-Model Embodied-Intelligent Humanoid Robot " + }, + { + "bbox": [ + 97, + 175, + 512, + 232 + ], + "type": "inline_equation", + "content": "^{4}" + }, + { + "bbox": [ + 97, + 175, + 512, + 232 + ], + "type": "text", + "content": "Department of Mathematics, National University of Singapore" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 77, + 234, + 529, + 245 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 77, + 234, + 529, + 245 + ], + "spans": [ + { + "bbox": [ + 77, + 234, + 529, + 245 + ], + "type": "text", + "content": "csxuyihe@mail.scut.edu.cn, csyhquan@scut.edu.cn, rtxu@superobots.com, matjh@nus.edu.sg" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 151, + 273, + 200, + 285 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 151, + 273, + 200, + 285 + ], + "spans": [ + { + "bbox": [ + 151, + 273, + 200, + 285 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 53, + 299, + 297, + 479 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 53, + 299, + 297, + 479 + ], + "spans": [ + { + "bbox": [ + 53, + 299, + 297, + 479 + ], + "type": "text", + "content": "Structured artifacts are semi-regular, repetitive patterns that closely intertwine with genuine image content, making their removal highly challenging. In this paper, we introduce the Scale-Adaptive Deformable Transformer, an network architecture specifically designed to eliminate such artifacts from images. The proposed network features two key components: a scale-enhanced deformable convolution module for modeling scale-varying patterns with abundant orientations and potential distortions, and a scale-adaptive deformable attention mechanism for capturing long-range relationships among repetitive patterns with different sizes and non-uniform spatial distributions. Extensive experiments show that our network consistently outperforms state-of-the-art methods in diverse artifact removal tasks, including image deraining, image demoiring, and image debanding." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 56, + 504, + 135, + 516 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 504, + 135, + 516 + ], + "spans": [ + { + "bbox": [ + 56, + 504, + 135, + 516 + ], + "type": "text", + "content": "1. Introduction" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 55, + 525, + 296, + 657 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 525, + 296, + 657 + ], + "spans": [ + { + "bbox": [ + 55, + 525, + 296, + 657 + ], + "type": "text", + "content": "Structured artifacts, in contrast to random noise, are repetitive patterns with similar appearance. Examples include Moiré patterns caused by overlapping pattern interference, rain streaks formed during image acquisition in rainy conditions, and banding effects resulting from color quantization. See Figure 1 for an illustration. These artifacts often display similar appearance that repeat in a quasi-periodic way over large image regions. However, they can differ in size, intensity, orientation, and may exhibit certain shape distortions. Additionally, their characteristics can vary across different images due to changes in image content and capturing." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 55, + 658, + 296, + 682 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 658, + 296, + 682 + ], + "spans": [ + { + "bbox": [ + 55, + 658, + 296, + 682 + ], + "type": "text", + "content": "Removing structured artifacts has many applications. Eliminating moiré patterns improves digital photography" + } + ] + } + ], + "index": 13 + }, + { + "type": "image", + "bbox": [ + 317, + 274, + 383, + 339 + ], + "blocks": [ + { + "bbox": [ + 317, + 274, + 383, + 339 + ], + "lines": [ + { + "bbox": [ + 317, + 274, + 383, + 339 + ], + "spans": [ + { + "bbox": [ + 317, + 274, + 383, + 339 + ], + "type": "image", + "image_path": "18490bcb9b39b556463d9de3aff215467f7618319c3a6d439533beca73b749a7.jpg" + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_body" + } + ], + "index": 14 + }, + { + "type": "image", + "bbox": [ + 388, + 274, + 436, + 339 + ], + "blocks": [ + { + "bbox": [ + 388, + 274, + 436, + 339 + ], + "lines": [ + { + "bbox": [ + 388, + 274, + 436, + 339 + ], + "spans": [ + { + "bbox": [ + 388, + 274, + 436, + 339 + ], + "type": "image", + "image_path": "27b3199a8bdeb0a4f74b5c24a3e6ece83d9938a1ad95db60b6d44d2f2727a682.jpg" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_body" + } + ], + "index": 15 + }, + { + "type": "image", + "bbox": [ + 441, + 274, + 552, + 339 + ], + "blocks": [ + { + "bbox": [ + 441, + 274, + 552, + 339 + ], + "lines": [ + { + "bbox": [ + 441, + 274, + 552, + 339 + ], + "spans": [ + { + "bbox": [ + 441, + 274, + 552, + 339 + ], + "type": "image", + "image_path": "f3f4ae252b636f6bb503193f28a39639a40ecec227b60b97af9aff387b63015b.jpg" + } + ] + } + ], + "index": 16, + "angle": 0, + "type": "image_body" + } + ], + "index": 16 + }, + { + "type": "image", + "bbox": [ + 317, + 342, + 383, + 397 + ], + "blocks": [ + { + "bbox": [ + 317, + 342, + 383, + 397 + ], + "lines": [ + { + "bbox": [ + 317, + 342, + 383, + 397 + ], + "spans": [ + { + "bbox": [ + 317, + 342, + 383, + 397 + ], + "type": "image", + "image_path": "13e0c606cbef0d8ed6d80aa3e24beb844257e273d8c4ad2523b9759e31aa7fab.jpg" + } + ] + } + ], + "index": 17, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 313, + 405, + 554, + 427 + ], + "lines": [ + { + "bbox": [ + 313, + 405, + 554, + 427 + ], + "spans": [ + { + "bbox": 
[ + 313, + 405, + 554, + 427 + ], + "type": "text", + "content": "Figure 1. Structured artifacts with varying orientations, scales, and may involve warping effect. From left to right: moiré, rain, band." + } + ] + } + ], + "index": 20, + "angle": 0, + "type": "image_caption" + } + ], + "index": 17 + }, + { + "type": "image", + "bbox": [ + 387, + 342, + 436, + 397 + ], + "blocks": [ + { + "bbox": [ + 387, + 342, + 436, + 397 + ], + "lines": [ + { + "bbox": [ + 387, + 342, + 436, + 397 + ], + "spans": [ + { + "bbox": [ + 387, + 342, + 436, + 397 + ], + "type": "image", + "image_path": "d7781c63dc311025e1dd22964f9b94bee05f07bb835318904eb3cf5687c57596.jpg" + } + ] + } + ], + "index": 18, + "angle": 0, + "type": "image_body" + } + ], + "index": 18 + }, + { + "type": "image", + "bbox": [ + 441, + 342, + 553, + 397 + ], + "blocks": [ + { + "bbox": [ + 441, + 342, + 553, + 397 + ], + "lines": [ + { + "bbox": [ + 441, + 342, + 553, + 397 + ], + "spans": [ + { + "bbox": [ + 441, + 342, + 553, + 397 + ], + "type": "image", + "image_path": "cbf684e07eb7050c2158b2e3123ed491437caf1152495299b503b95199bce24b.jpg" + } + ] + } + ], + "index": 19, + "angle": 0, + "type": "image_body" + } + ], + "index": 19 + }, + { + "bbox": [ + 313, + 449, + 555, + 569 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 449, + 555, + 569 + ], + "spans": [ + { + "bbox": [ + 313, + 449, + 555, + 569 + ], + "type": "text", + "content": "and screen captures. Rain streak removal improves the reliability of outdoor vision systems in rainy conditions, i.e., those used in autonomous driving and video surveillance. Addressing banding artifacts is necessary in professional imaging and printing to achieve smooth gradients and high dynamic range. Additionally, structured artifact removal finds applications in medical imaging, industrial quality inspection, and consumer electronics, enhancing image quality and reliability. All these highlight a broad impact of structured artifacts removal across many real-world applications." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 313, + 570, + 556, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 570, + 556, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 570, + 556, + 713 + ], + "type": "text", + "content": "Structured artifact removal presents significant challenges that differ from random noise removal. The primary difficulties arise due to the quasi-periodic nature of these artifacts and their resemblance to authentic repeating image patterns (e.g., textures). These artifacts often intertwine with natural image features, creating complex overlapping structures that are difficult to disentangle from genuine image content. For instance, moiré patterns can closely mimic textile textures, and banding effects may align with natural linear features. Additionally, these artifacts often display noticeable variations in pattern size and geometric shape, further complicating their identification and removal." 
+ } + ] + } + ], + "index": 22 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "spans": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "text", + "content": "CVF" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "spans": [ + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "type": "text", + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 693, + 144, + 703 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 693, + 144, + 703 + ], + "spans": [ + { + "bbox": [ + 67, + 693, + 144, + 703 + ], + "type": "text", + "content": "*Corresponding author." + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 70, + 704, + 283, + 712 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 70, + 704, + 283, + 712 + ], + "spans": [ + { + "bbox": [ + 70, + 704, + 283, + 712 + ], + "type": "text", + "content": "Code is available at: https://github.com/csxyhe/SADT" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "text", + "content": "12731" + } + ] + } + ], + "index": 25 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 224, + 85 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 72, + 224, + 85 + ], + "spans": [ + { + "bbox": [ + 56, + 72, + 224, + 85 + ], + "type": "text", + "content": "1.1. Challenges and Existing Works" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 89, + 297, + 268 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 89, + 297, + 268 + ], + "spans": [ + { + "bbox": [ + 55, + 89, + 297, + 268 + ], + "type": "text", + "content": "Traditional approaches, relying on handcrafted priors to distinguish structured artifacts from genuine image features, often struggle to handle variations in these artifacts, resulting in limited effectiveness for artifacts with noticeable variations. While deep learning methods have shown promising performance in removing these artifacts, they also face notable challenges. Early approaches are based on convolutional neural networks (CNNs) [1, 12, 23, 28, 29, 33, 48], which have limited capacity on capturing long-range relationships due to their local receptive fields. However, the quasi-periodicity of local patterns with certain similarities is a crucial cue for identifying structured artifacts. Thus, the design of the NN for removing structured artifacts removal must be capable of effectively exploiting long-range relationships among local structures to accurately identify these artifacts." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 269, + 296, + 424 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 269, + 296, + 424 + ], + "spans": [ + { + "bbox": [ + 55, + 269, + 296, + 424 + ], + "type": "text", + "content": "Recent works addressed this limitation by leveraging transformers with self-attention mechanisms [5, 6, 51]. However, challenges remain. The significant variations in the geometric shapes and pattern sizes of structured artifacts make it challenging to synthesize sufficient training data to have a comprehensive coverage of all possible variations. In addition, training NNs on very large datasets can be also computationally expensive. Consequently, relying solely on extensive datasets may not lead to robust generalization. To conclude, an effective transformer-based NN for structured artifacts removal should be designed with deformable capacity and scale adaptability, enabling them to generalize effectively to unseen data even with limited training samples." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 56, + 430, + 127, + 441 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 430, + 127, + 441 + ], + "spans": [ + { + "bbox": [ + 56, + 430, + 127, + 441 + ], + "type": "text", + "content": "1.2. Main Idea" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 447, + 297, + 590 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 447, + 297, + 590 + ], + "spans": [ + { + "bbox": [ + 55, + 447, + 297, + 590 + ], + "type": "text", + "content": "An effective NN for structured artifact removal requires specialized mechanisms that can efficiently adapt to the varying scales and distortions of these artifacts, as well as exploit long-range dependencies among local structures. In this paper, we propose a transformer-based NN designed with specific optimization for handling scale variations and distortions of structured artifacts. Our proposed architecture, the Scale-Adaptive Deformable Transformer (SADT), leverages the transformer's capacity to capture long-range dependencies while incorporating new modules for scale adaptation and deformable transformations, enabling effective removal of structured artifacts with noticeable variations." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 594, + 297, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 594, + 297, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 594, + 297, + 713 + ], + "type": "text", + "content": "Scale-enhanced deformable convolution (SEDC): Structured artifacts are semi-regular, exhibiting repetitive patterns with similar appearances but noticeable variations in size, orientation, and shape distortion. Classic convolution with fixed geometric structures struggle to capture such variable patterns effectively. Existing deformable convolutions offer some flexibility through spatial adaptations but remain ineffective in handling large-scale and orientation variations. To address these issues, we introduce the SEDC module, a convolution module designed to better manage these variations." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 313, + 72, + 555, + 180 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 72, + 555, + 180 + ], + "spans": [ + { + "bbox": [ + 313, + 72, + 555, + 180 + ], + "type": "text", + "content": "The cascaded deformable convolutions are used to generate intermediate features from different perspectives, scales, and orientations, enabling flexible sampling and effective modeling of warped patterns. Furthermore, the subsequent Spatial-Channel Mixing Convolution (SCMC) with large-scale convolution kernel is introduced, which helps capture broader contextual information while preserving positional cues. Finally, the parallel features are adaptively aggregated for modeling scale-varying and deformable patterns." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 313, + 183, + 556, + 290 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 183, + 556, + 290 + ], + "spans": [ + { + "bbox": [ + 313, + 183, + 556, + 290 + ], + "type": "text", + "content": "Scale-adaptive deformable attention (SADA): Structured artifacts are repetitive patterns spanning over large image regions. These patterns are quasi-periodic with non-uniform spatial distribution and size variation. While transformer is effective in capturing such repetitive patterns through attention mechanisms, the significant variations in size and non-uniform spatial distribution of these patterns demand better scale adaptability and more effective sampling within the attention mechanisms." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 291, + 556, + 388 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 291, + 556, + 388 + ], + "spans": [ + { + "bbox": [ + 313, + 291, + 556, + 388 + ], + "type": "text", + "content": "This motivates us to propose Scale-Adaptive Deformable Attention (SADA), which assigns multi-scale multi-head mechanism for capturing long-range dependencies within a semantic layer. Moreover, SADA introduces deformable-sampling offset to effectively handle the non-uniform spatial distribution of repetitive patterns. These two techniques enable local self-attention to better capture long-range quasi-periodic patterns through selective information aggregation." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 314, + 396, + 425, + 407 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 396, + 425, + 407 + ], + "spans": [ + { + "bbox": [ + 314, + 396, + 425, + 407 + ], + "type": "text", + "content": "1.3. Our Contributions" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 414, + 522, + 425 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 414, + 522, + 425 + ], + "spans": [ + { + "bbox": [ + 313, + 414, + 522, + 425 + ], + "type": "text", + "content": "In summary, our contributions are listed as follows:" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 426, + 555, + 521 + ], + "type": "list", + "angle": 0, + "index": 14, + "blocks": [ + { + "bbox": [ + 313, + 426, + 555, + 449 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 426, + 555, + 449 + ], + "spans": [ + { + "bbox": [ + 313, + 426, + 555, + 449 + ], + "type": "text", + "content": "- We propose the SEDC module to model scale-varying artifacts with abundant orientations and potential distortions." 
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 450, + 555, + 498 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 450, + 555, + 498 + ], + "spans": [ + { + "bbox": [ + 313, + 450, + 555, + 498 + ], + "type": "text", + "content": "- We introduce the SADA module, enhancing local selfattention by incorporating a multi-scale multi-head mechanism and deformable technique, to capture long-range contextual information of quasi-periodic patterns." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 498, + 553, + 521 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 498, + 553, + 521 + ], + "spans": [ + { + "bbox": [ + 313, + 498, + 553, + 521 + ], + "type": "text", + "content": "- Leveraging on SEDC and SADA, we construct the transformer-based SADT for structured artifact removal." + } + ] + } + ], + "index": 13 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 313, + 522, + 555, + 558 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 522, + 555, + 558 + ], + "spans": [ + { + "bbox": [ + 313, + 522, + 555, + 558 + ], + "type": "text", + "content": "Extensive experiments on diverse artifact removal tasks, including image demoiring, deraining, and debanding, showed that SADT achieves the state-of-the-art performance." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 314, + 570, + 400, + 582 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 570, + 400, + 582 + ], + "spans": [ + { + "bbox": [ + 314, + 570, + 400, + 582 + ], + "type": "text", + "content": "2. Related Work" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 313, + 594, + 556, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 594, + 556, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 594, + 556, + 715 + ], + "type": "text", + "content": "Non-learning structured artifact removal method: Traditional methods rely on pre-defined priors on structured artifacts to distinguish them from genuine image content. For moiré pattern removal from texture images, Liu et al. [22] proposed using a low rank prior for textures and a sparse prior of Moiré patterns in the discrete cosine transform to separate two. For removing rain streak, Luo et al. [27] separates the rain streak and background layers via discriminative sparse coding. Li et al. [19] introduces a patch-based approach leveraging Gaussian mixture model priors to separate" + } + ] + } + ], + "index": 17 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "text", + "content": "12732" + } + ] + } + ], + "index": 18 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 297, + 109 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 297, + 109 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 297, + 109 + ], + "type": "text", + "content": "rain streak and background layers. These handcrafted priors are often overly simplistic and tend to fail in removing artifacts with large variations in scale, orientation, and shape." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 112, + 297, + 434 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 112, + 297, + 434 + ], + "spans": [ + { + "bbox": [ + 56, + 112, + 297, + 434 + ], + "type": "text", + "content": "Multi-scale deep learning methods for structured artifact removal: There is extensive literature on deep learning for structured artifact removal. Here, we focus exclusively on the most relevant works employing multi-scale strategies to address artifact patterns of varying scales. To remove Moiré patterns from images, Sun et al. [33] proposed a multiresolution encoder-decoder architecture designed to model Moiré patterns at varying scales. Yu et al. [48] introduced a semantic-aligned scale-aware module that combines multiscale features through an attention mechanism to reduce scale discrepancies in Moiré patterns. Nguyen et al. [28] developed a multiscale guided restoration block targeting both low- and high-frequency noise. For deraining, Wang et al. [38] proposed a wavelet-inspired multi-level module for rain removal. For debanding, Liu et al. [23] introduced a dual-branch depthwise group fusion module to capture both inter-scale and intra-scale correlations of banding patterns. Quan et al. [30] introduced a cross-Scale invertible NN with deformable convolutions to handle scale variations of banding artifacts. For shadow removal, built upon the Retinex decomposition model, Huang et al. [15] proposed a neural network with a multi-scale structure. These methods have shown promising results in structured artifact removal tasks. However, these methods neither fully exploit the repetitive nature of artifacts for accurate separation from genuine image structures, nor do they effectively address the significant orientation or shape variations of artifacts in their designs." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 437, + 297, + 581 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 437, + 297, + 581 + ], + "spans": [ + { + "bbox": [ + 55, + 437, + 297, + 581 + ], + "type": "text", + "content": "Transformer for removing structured artifacts: Transformer is an effective architecture for modeling global and local relationship of local structures. In image processing, an image is partitioned into small patches to fit the transformer architecture [3]. Many transformer methods have been proposed for different structured artifact removal tasks. For deraining, Xiao et al. [44] introduced a combination of window-based and global self-attention mechanisms. Chen et al. [5] developed sparse channel self-attention to selectively retain key values. For image debanding, Wen et al. [42] utilized local self-attention with varying window sizes across different heads to capture features at multiple scales." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 582, + 297, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 582, + 297, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 582, + 297, + 714 + ], + "type": "text", + "content": "However, these transformer methods have not fully leveraged self-attention's ability to capture long-range dependencies of repetitive patterns. Such weakness arises from the high computational costs of standard transformers due to quadratic scaling with patch numbers. 
To reduce costs, these transformers rely on techniques such as window-based self-attention [21, 41], channel-only self-attention in Restormer [51], and local self-attention (LSA) [18, 31]. While computationally efficient, these methods' limited spatial perceptive fields weaken their ability to capture long-range information. To address these limitations, the proposed SADT extends the" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 313, + 72, + 555, + 120 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 72, + 555, + 120 + ], + "spans": [ + { + "bbox": [ + 313, + 72, + 555, + 120 + ], + "type": "text", + "content": "LSA with scale-adaptive deformable sampling to effectively capture long-range dependencies of repetitive artifacts with varying scales, orientations, and distortions. This design enables SADT to model structured artifacts more effectively." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 313, + 123, + 556, + 302 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 123, + 556, + 302 + ], + "spans": [ + { + "bbox": [ + 313, + 123, + 556, + 302 + ], + "type": "text", + "content": "Deformable convolution and attention: Deformable convolutions [10, 40, 45, 56] introduce learnable offsets to convolution kernels, enabling adaptive feature sampling to handle geometric variations and spatial transformations. Deformable attention [2, 43, 57] extends this adaptability to the self-attention mechanism, allowing transformers to focus on relevant spatial locations and better capture complex patterns. Zhu et al. [57] proposed a multi-scale deformable attention module for generating a feature pyramid and allocating several keys per query to capture multi-scale features. Xia et al. [43] introduced the deformable attention transformer, which learns shared deformed points for efficient computations. Cao et al. [2] developed a reference-based deformable attention module to enhance low-resolution feature representations using multiple relevant features." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 313, + 303, + 557, + 435 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 303, + 557, + 435 + ], + "spans": [ + { + "bbox": [ + 313, + 303, + 557, + 435 + ], + "type": "text", + "content": "The design of deformable self-attention in our work differs significantly from that of [43]. Xia et al. [43] combines self-attention with deformable convolution, computing the relevance between input and reference images for offset prediction. In contrast, our method integrates deformable sampling within self-attention by first predicting offsets and then calculating the similarity between predicted keys and the query. It is more effective in modeling long-range dependence of patterns with varying appearances. Furthermore, we enhance the scale adaptability of deformable convolution and attention for handle scale variations among artifacts." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 313, + 443, + 398, + 456 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 443, + 398, + 456 + ], + "spans": [ + { + "bbox": [ + 313, + 443, + 398, + 456 + ], + "type": "text", + "content": "3. 
Methodology" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 463, + 556, + 583 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 463, + 556, + 583 + ], + "spans": [ + { + "bbox": [ + 313, + 463, + 556, + 583 + ], + "type": "text", + "content": "In this section, we introduce the two key components of the proposed transformer. SEDC, an advanced deformable convolution module designed to capture local patterns with large-scale and orientation variations of local patterns. SADA, a deformable attention module that exploits long-range relationships among repetitive patterns with varying sizes and non-uniform spatial distributions. Subsequently, detail present the proposed transformer, SADT, which integrates SEDC and SADA within a multi-in multi-out framework, together with the training loss." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 588, + 531, + 600 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 588, + 531, + 600 + ], + "spans": [ + { + "bbox": [ + 313, + 588, + 531, + 600 + ], + "type": "text", + "content": "3.1. Scale-Enhanced Deformable Convolution" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 605, + 556, + 689 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 605, + 556, + 689 + ], + "spans": [ + { + "bbox": [ + 313, + 605, + 556, + 689 + ], + "type": "text", + "content": "SEDC is for effectively capturing scale-varying and deformable patterns. The module consists of two key components: the cascaded DCNs [56] for modeling complex patterns with distortions, and the parallel Spatial-Channel Mixing Convolution (SCMC) layers for capturing patterns with large scale variations. The pipeline of SEDC is illustrated in Figure 2." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 689, + 555, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 689, + 555, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 689, + 555, + 713 + ], + "type": "text", + "content": "To capture patterns of large size, we introduce the SCMC layer for the expansion of receptive field. For each " + }, + { + "bbox": [ + 313, + 689, + 555, + 713 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 313, + 689, + 555, + 713 + ], + "type": "text", + "content": "-th" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "type": "text", + "content": "12733" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 71, + 70, + 282, + 136 + ], + "blocks": [ + { + "bbox": [ + 71, + 70, + 282, + 136 + ], + "lines": [ + { + "bbox": [ + 71, + 70, + 282, + 136 + ], + "spans": [ + { + "bbox": [ + 71, + 70, + 282, + 136 + ], + "type": "image", + "image_path": "e12412d60de57c41d2a4242b11273d89a189d572ddabc87593d66f7b1eed0f6b.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 118, + 143, + 233, + 155 + ], + "lines": [ + { + "bbox": [ + 118, + 143, + 233, + 155 + ], + "spans": [ + { + "bbox": [ + 118, + 143, + 233, + 155 + ], + "type": "text", + "content": "Figure 2. SEDC configuration." 
+ } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 173, + 296, + 226 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 173, + 296, + 226 + ], + "spans": [ + { + "bbox": [ + 55, + 173, + 296, + 226 + ], + "type": "text", + "content": "channel of an input feature " + }, + { + "bbox": [ + 55, + 173, + 296, + 226 + ], + "type": "inline_equation", + "content": "\\mathbf{X} \\in \\mathbb{R}^{H \\times W \\times C}" + }, + { + "bbox": [ + 55, + 173, + 296, + 226 + ], + "type": "text", + "content": ", SCMC first partitions each feature map into several non-overlapping patches of size " + }, + { + "bbox": [ + 55, + 173, + 296, + 226 + ], + "type": "inline_equation", + "content": "K \\times K" + }, + { + "bbox": [ + 55, + 173, + 296, + 226 + ], + "type": "text", + "content": ", and subsequently reshapes it into a matrix " + }, + { + "bbox": [ + 55, + 173, + 296, + 226 + ], + "type": "inline_equation", + "content": "\\mathbf{X}_i \\in \\mathbb{R}^{K^2 \\times \\frac{HW}{K^2}}" + }, + { + "bbox": [ + 55, + 173, + 296, + 226 + ], + "type": "text", + "content": ". Denote such an operation by" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 80, + 232, + 271, + 248 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 80, + 232, + 271, + 248 + ], + "spans": [ + { + "bbox": [ + 80, + 232, + 271, + 248 + ], + "type": "interline_equation", + "content": "\\Psi : \\boldsymbol {X} \\in \\mathbb {R} ^ {H \\times W \\times C} \\to \\left\\{\\boldsymbol {X} _ {i} \\in \\mathbb {R} ^ {K ^ {2} \\times \\frac {H W}{K ^ {2}}}, \\forall i \\right\\},", + "image_path": "81d4d708433be53f8d01568b0287289753e1a5a5a90798dc2a75e89d9dc186aa.jpg" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 256, + 296, + 282 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 256, + 296, + 282 + ], + "spans": [ + { + "bbox": [ + 55, + 256, + 296, + 282 + ], + "type": "text", + "content": "Then, a learnable linear transform " + }, + { + "bbox": [ + 55, + 256, + 296, + 282 + ], + "type": "inline_equation", + "content": "\\mathbf{W} \\in \\mathbb{R}^{K^2 \\times K^2}" + }, + { + "bbox": [ + 55, + 256, + 296, + 282 + ], + "type": "text", + "content": " is applied to each " + }, + { + "bbox": [ + 55, + 256, + 296, + 282 + ], + "type": "inline_equation", + "content": "\\mathbf{X}_i" + }, + { + "bbox": [ + 55, + 256, + 296, + 282 + ], + "type": "text", + "content": ": for each " + }, + { + "bbox": [ + 55, + 256, + 296, + 282 + ], + "type": "inline_equation", + "content": "\\mathbf{X}_i" + }, + { + "bbox": [ + 55, + 256, + 296, + 282 + ], + "type": "text", + "content": "," + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 80, + 288, + 271, + 304 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 80, + 288, + 271, + 304 + ], + "spans": [ + { + "bbox": [ + 80, + 288, + 271, + 304 + ], + "type": "interline_equation", + "content": "\\Phi_ {\\boldsymbol {W}}: \\boldsymbol {X} _ {i} \\in \\mathbb {R} ^ {K ^ {2} \\times \\frac {H W}{K ^ {2}}} \\to \\boldsymbol {W} \\boldsymbol {X} _ {i} \\in \\mathbb {R} ^ {K ^ {2} \\times \\frac {H W}{K ^ {2}}},", + "image_path": "a4c64f2531bbad55d4554818d6626a323b3320add633115d905b2780b9458cee.jpg" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 312, + 296, + 373 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 312, + 296, + 373 + ], + "spans": [ + { + "bbox": [ + 55, + 312, + 296, + 373 + ], + "type": "text", 
+ "content": "where " + }, + { + "bbox": [ + 55, + 312, + 296, + 373 + ], + "type": "inline_equation", + "content": "W" + }, + { + "bbox": [ + 55, + 312, + 296, + 373 + ], + "type": "text", + "content": " is shared across different " + }, + { + "bbox": [ + 55, + 312, + 296, + 373 + ], + "type": "inline_equation", + "content": "X_{i}" + }, + { + "bbox": [ + 55, + 312, + 296, + 373 + ], + "type": "text", + "content": " s. Afterward, the output of " + }, + { + "bbox": [ + 55, + 312, + 296, + 373 + ], + "type": "inline_equation", + "content": "\\Phi_W" + }, + { + "bbox": [ + 55, + 312, + 296, + 373 + ], + "type": "text", + "content": " is reshaped and concatenated to the original dimension " + }, + { + "bbox": [ + 55, + 312, + 296, + 373 + ], + "type": "inline_equation", + "content": "(H,W,C)" + }, + { + "bbox": [ + 55, + 312, + 296, + 373 + ], + "type": "text", + "content": ", using the inversion of " + }, + { + "bbox": [ + 55, + 312, + 296, + 373 + ], + "type": "inline_equation", + "content": "\\Psi" + }, + { + "bbox": [ + 55, + 312, + 296, + 373 + ], + "type": "text", + "content": ". A gating mechanism is then applied, performing an element-wise product between the original input and processed feature:" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 93, + 380, + 295, + 395 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 93, + 380, + 295, + 395 + ], + "spans": [ + { + "bbox": [ + 93, + 380, + 295, + 395 + ], + "type": "interline_equation", + "content": "\\operatorname {S C M C} (\\boldsymbol {X}) = \\boldsymbol {X} \\odot \\Psi^ {- 1} \\left(\\Phi_ {\\boldsymbol {W}} \\left(\\Psi (\\boldsymbol {X})\\right)\\right). \\tag {1}", + "image_path": "229bcb7267ad6dce6fa6ab3adbe346671232ff4a6910046061fb13523884ffea.jpg" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 402, + 296, + 533 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 402, + 296, + 533 + ], + "spans": [ + { + "bbox": [ + 55, + 402, + 296, + 533 + ], + "type": "text", + "content": "SCMC can effectively capture large pattern with minimal computation overhead. However, similar to other MLP-based architectures (e.g., MLP Mixer [35]), its non-overlapping partitions may struggle with patterns that span across regions, particularly when intersected by partition boundaries. This issue will be mitigated by the cascaded DCNs, which helps the aggregation of relevant non-local information into local windows, and provides features with different scales into parallel SCMCs. The efficient DCN utilizes a PConv operation [4] followed by a pointwise convolution to generate deformable offsets and modulation scalars." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 55, + 533, + 296, + 629 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 533, + 296, + 629 + ], + "spans": [ + { + "bbox": [ + 55, + 533, + 296, + 629 + ], + "type": "text", + "content": "The complete pipeline of SEDC is shown in Fig. 2. In the main branch, input feature channels are first reduced by a factor of 4. The cascaded DCN processes corresponding features, while parallel SCMCs handle the original input features. Finally, features with different receptive field shapes are merged via pointwise convolution. Overall, this design effectively captures local patterns with varying sizes, orientations, geometric distortions, and warping effects." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 55, + 636, + 254, + 649 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 636, + 254, + 649 + ], + "spans": [ + { + "bbox": [ + 55, + 636, + 254, + 649 + ], + "type": "text", + "content": "3.2. Scale-Adaptive Deformable Attention" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 55, + 654, + 296, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 654, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 654, + 296, + 713 + ], + "type": "text", + "content": "SADA integrates the multi-scale multi-head mechanism with deformable sampling offsets to effectively model long-range dependencies in repetitive artifact patterns. Given the input feature map " + }, + { + "bbox": [ + 55, + 654, + 296, + 713 + ], + "type": "inline_equation", + "content": "\\mathbf{X} \\in \\mathbb{R}^{H \\times W \\times C}" + }, + { + "bbox": [ + 55, + 654, + 296, + 713 + ], + "type": "text", + "content": ", we first generate query " + }, + { + "bbox": [ + 55, + 654, + 296, + 713 + ], + "type": "inline_equation", + "content": "Q" + }, + { + "bbox": [ + 55, + 654, + 296, + 713 + ], + "type": "text", + "content": ", key " + }, + { + "bbox": [ + 55, + 654, + 296, + 713 + ], + "type": "inline_equation", + "content": "K" + }, + { + "bbox": [ + 55, + 654, + 296, + 713 + ], + "type": "text", + "content": ", and value " + }, + { + "bbox": [ + 55, + 654, + 296, + 713 + ], + "type": "inline_equation", + "content": "V" + }, + { + "bbox": [ + 55, + 654, + 296, + 713 + ], + "type": "text", + "content": " projections, enriched with local context" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 72, + 555, + 133 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 72, + 555, + 133 + ], + "spans": [ + { + "bbox": [ + 313, + 72, + 555, + 133 + ], + "type": "text", + "content": "by applying pointwise convolutions to aggregate pixel-wise cross-channel context, followed by " + }, + { + "bbox": [ + 313, + 72, + 555, + 133 + ], + "type": "inline_equation", + "content": "3 \\times 3" + }, + { + "bbox": [ + 313, + 72, + 555, + 133 + ], + "type": "text", + "content": " depth-wise convolutions to encode channel-wise spatial context. A multi-head mechanism is then used, where " + }, + { + "bbox": [ + 313, + 72, + 555, + 133 + ], + "type": "inline_equation", + "content": "Q, K, V" + }, + { + "bbox": [ + 313, + 72, + 555, + 133 + ], + "type": "text", + "content": " are split into " + }, + { + "bbox": [ + 313, + 72, + 555, + 133 + ], + "type": "inline_equation", + "content": "S" + }, + { + "bbox": [ + 313, + 72, + 555, + 133 + ], + "type": "text", + "content": " parts along channels for subsequent processing." 
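A sketch of this Q, K, V generation under our own naming, with the pointwise and depth-wise convolutions and the split into S head groups exactly as described above:

```python
import torch.nn as nn

C, S = 32, 4  # channel width and head count (values from Sec. 4.1)

to_qkv = nn.Sequential(
    # pointwise conv: aggregates pixel-wise cross-channel context
    nn.Conv2d(C, 3 * C, kernel_size=1),
    # 3x3 depth-wise conv: encodes channel-wise spatial context
    nn.Conv2d(3 * C, 3 * C, kernel_size=3, padding=1, groups=3 * C),
)

def make_heads(x):
    q, k, v = to_qkv(x).chunk(3, dim=1)        # each (B, C, H, W)
    B, _, H, W = q.shape
    # split along channels into S heads; head s later gets dilation M_s
    shape = (B, S, C // S, H, W)
    return q.reshape(shape), k.reshape(shape), v.reshape(shape)
```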
+ } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 135, + 555, + 183 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 135, + 555, + 183 + ], + "spans": [ + { + "bbox": [ + 313, + 135, + 555, + 183 + ], + "type": "text", + "content": "Revisiting standard LSA: Omitting head index, let " + }, + { + "bbox": [ + 313, + 135, + 555, + 183 + ], + "type": "inline_equation", + "content": "V_{u}" + }, + { + "bbox": [ + 313, + 135, + 555, + 183 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 313, + 135, + 555, + 183 + ], + "type": "inline_equation", + "content": "Q_{u}" + }, + { + "bbox": [ + 313, + 135, + 555, + 183 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 313, + 135, + 555, + 183 + ], + "type": "inline_equation", + "content": "K_{u}" + }, + { + "bbox": [ + 313, + 135, + 555, + 183 + ], + "type": "text", + "content": " denote the " + }, + { + "bbox": [ + 313, + 135, + 555, + 183 + ], + "type": "inline_equation", + "content": "u" + }, + { + "bbox": [ + 313, + 135, + 555, + 183 + ], + "type": "text", + "content": "-th feature vector in " + }, + { + "bbox": [ + 313, + 135, + 555, + 183 + ], + "type": "inline_equation", + "content": "V" + }, + { + "bbox": [ + 313, + 135, + 555, + 183 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 313, + 135, + 555, + 183 + ], + "type": "inline_equation", + "content": "Q" + }, + { + "bbox": [ + 313, + 135, + 555, + 183 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 313, + 135, + 555, + 183 + ], + "type": "inline_equation", + "content": "K" + }, + { + "bbox": [ + 313, + 135, + 555, + 183 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 313, + 135, + 555, + 183 + ], + "type": "inline_equation", + "content": "u" + }, + { + "bbox": [ + 313, + 135, + 555, + 183 + ], + "type": "text", + "content": " denote the related spatial location. 
In the standard LSA [31], for query " + }, + { + "bbox": [ + 313, + 135, + 555, + 183 + ], + "type": "inline_equation", + "content": "Q_{u}" + }, + { + "bbox": [ + 313, + 135, + 555, + 183 + ], + "type": "text", + "content": ", the output " + }, + { + "bbox": [ + 313, + 135, + 555, + 183 + ], + "type": "inline_equation", + "content": "Y_{u}" + }, + { + "bbox": [ + 313, + 135, + 555, + 183 + ], + "type": "text", + "content": " is defined as" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 190, + 558, + 232 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 190, + 558, + 232 + ], + "spans": [ + { + "bbox": [ + 313, + 190, + 558, + 232 + ], + "type": "interline_equation", + "content": "\\boldsymbol {Y} _ {u} = \\sum_ {v \\in \\mathcal {N} (u)} \\omega_ {u \\rightarrow v} \\boldsymbol {V} _ {v}, \\omega_ {u \\rightarrow v} = \\frac {\\exp \\left(\\boldsymbol {Q} _ {u} ^ {\\top} \\boldsymbol {K} _ {v}\\right)}{\\sum_ {w \\in \\mathcal {N} (u)} \\exp \\left(\\boldsymbol {Q} _ {u} ^ {\\top} \\boldsymbol {K} _ {w}\\right)}, \\tag {2}", + "image_path": "e1fb0e60f4daea3b6a26d3385a5181d24830bf29a76ba2be944237c7407c2444.jpg" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 313, + 232, + 555, + 281 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 232, + 555, + 281 + ], + "spans": [ + { + "bbox": [ + 313, + 232, + 555, + 281 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 232, + 555, + 281 + ], + "type": "inline_equation", + "content": "\\mathcal{N}(u) = u + \\{-1,0,1\\}^2" + }, + { + "bbox": [ + 313, + 232, + 555, + 281 + ], + "type": "text", + "content": " denotes the index set of neighboring points of the position " + }, + { + "bbox": [ + 313, + 232, + 555, + 281 + ], + "type": "inline_equation", + "content": "u" + }, + { + "bbox": [ + 313, + 232, + 555, + 281 + ], + "type": "text", + "content": ". The size of " + }, + { + "bbox": [ + 313, + 232, + 555, + 281 + ], + "type": "inline_equation", + "content": "\\mathcal{N}(u)" + }, + { + "bbox": [ + 313, + 232, + 555, + 281 + ], + "type": "text", + "content": " is rather small, resulting in a poor capability to capture long-range dependencies across varying locations." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 313, + 284, + 556, + 403 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 284, + 556, + 403 + ], + "spans": [ + { + "bbox": [ + 313, + 284, + 556, + 403 + ], + "type": "text", + "content": "Multi-scale multi-head mechanism for LSA: To leverage the long-range dependencies of patterns, we introduce multi-dilation initial grids, denoted as " + }, + { + "bbox": [ + 313, + 284, + 556, + 403 + ], + "type": "inline_equation", + "content": "\\Delta^{(s)} = \\{-M_s, 0, +M_s\\}^2" + }, + { + "bbox": [ + 313, + 284, + 556, + 403 + ], + "type": "text", + "content": ", to construct a multi-scale neighborhood. Specifically, for the " + }, + { + "bbox": [ + 313, + 284, + 556, + 403 + ], + "type": "inline_equation", + "content": "s" + }, + { + "bbox": [ + 313, + 284, + 556, + 403 + ], + "type": "text", + "content": "-th head, the neighborhood is denoted as " + }, + { + "bbox": [ + 313, + 284, + 556, + 403 + ], + "type": "inline_equation", + "content": "\\mathcal{N}^s(u) = u + \\Delta^{(s)}" + }, + { + "bbox": [ + 313, + 284, + 556, + 403 + ], + "type": "text", + "content": ". 
By assigning different sampling scales to each attention head, a broad and sparse receptive field, as shown in Figure 3(c), is formed through the aggregation of information across heads, enabling the layer to capture long-range contextual information more effectively." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 313, + 406, + 556, + 467 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 406, + 556, + 467 + ], + "spans": [ + { + "bbox": [ + 313, + 406, + 556, + 467 + ], + "type": "text", + "content": "Deformable-sampling offset for LSA: To handle the non-uniform distribution of structured artifacts, we integrate deformable sampling into LSA by predicting offsets " + }, + { + "bbox": [ + 313, + 406, + 556, + 467 + ], + "type": "inline_equation", + "content": "\\delta_u^s" + }, + { + "bbox": [ + 313, + 406, + 556, + 467 + ], + "type": "text", + "content": " for each position " + }, + { + "bbox": [ + 313, + 406, + 556, + 467 + ], + "type": "inline_equation", + "content": "u" + }, + { + "bbox": [ + 313, + 406, + 556, + 467 + ], + "type": "text", + "content": " and each scale " + }, + { + "bbox": [ + 313, + 406, + 556, + 467 + ], + "type": "inline_equation", + "content": "s" + }, + { + "bbox": [ + 313, + 406, + 556, + 467 + ], + "type": "text", + "content": ". The neighborhood is then given by" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 380, + 473, + 487, + 488 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 380, + 473, + 487, + 488 + ], + "spans": [ + { + "bbox": [ + 380, + 473, + 487, + 488 + ], + "type": "interline_equation", + "content": "\\tilde {\\mathcal {N}} ^ {s} (u) = u + \\Delta^ {(s)} + \\delta_ {u} ^ {s},", + "image_path": "1dde7a4eca88889232b4a44d76cd6743a2ab8817d3f6777e108f81efa0c3a0d3.jpg" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 314, + 495, + 555, + 520 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 495, + 555, + 520 + ], + "spans": [ + { + "bbox": [ + 314, + 495, + 555, + 520 + ], + "type": "text", + "content": "Let " + }, + { + "bbox": [ + 314, + 495, + 555, + 520 + ], + "type": "inline_equation", + "content": "D_u^s" + }, + { + "bbox": [ + 314, + 495, + 555, + 520 + ], + "type": "text", + "content": " denote " + }, + { + "bbox": [ + 314, + 495, + 555, + 520 + ], + "type": "inline_equation", + "content": "\\delta_u^s" + }, + { + "bbox": [ + 314, + 495, + 555, + 520 + ], + "type": "text", + "content": " in vector form, and let " + }, + { + "bbox": [ + 314, + 495, + 555, + 520 + ], + "type": "inline_equation", + "content": "D^s" + }, + { + "bbox": [ + 314, + 495, + 555, + 520 + ], + "type": "text", + "content": " combine all " + }, + { + "bbox": [ + 314, + 495, + 555, + 520 + ], + "type": "inline_equation", + "content": "D_u^s" + }, + { + "bbox": [ + 314, + 495, + 555, + 520 + ], + "type": "text", + "content": ". 
Then, the offsets " + }, + { + "bbox": [ + 314, + 495, + 555, + 520 + ], + "type": "inline_equation", + "content": "D^s" + }, + { + "bbox": [ + 314, + 495, + 555, + 520 + ], + "type": "text", + "content": " are extracted from " + }, + { + "bbox": [ + 314, + 495, + 555, + 520 + ], + "type": "inline_equation", + "content": "Q" + }, + { + "bbox": [ + 314, + 495, + 555, + 520 + ], + "type": "text", + "content": " as" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 367, + 527, + 555, + 544 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 367, + 527, + 555, + 544 + ], + "spans": [ + { + "bbox": [ + 367, + 527, + 555, + 544 + ], + "type": "interline_equation", + "content": "\\boldsymbol {D} ^ {s} = \\phi^ {s} (\\boldsymbol {Q}) \\in \\mathbb {R} ^ {H \\times W \\times 2 | \\Delta^ {(s)} |}, \\tag {3}", + "image_path": "68fc778ab99ac2166abbdd82ea16d39b6efb5d643d78d8809ef9f5a2166dff90.jpg" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 313, + 551, + 555, + 634 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 551, + 555, + 634 + ], + "spans": [ + { + "bbox": [ + 313, + 551, + 555, + 634 + ], + "type": "text", + "content": "Here, " + }, + { + "bbox": [ + 313, + 551, + 555, + 634 + ], + "type": "inline_equation", + "content": "\\phi^s" + }, + { + "bbox": [ + 313, + 551, + 555, + 634 + ], + "type": "text", + "content": " is implemented using PConv [4] followed by a pointwise convolution. Instead of a single head " + }, + { + "bbox": [ + 313, + 551, + 555, + 634 + ], + "type": "inline_equation", + "content": "Q_s" + }, + { + "bbox": [ + 313, + 551, + 555, + 634 + ], + "type": "text", + "content": ", the full " + }, + { + "bbox": [ + 313, + 551, + 555, + 634 + ], + "type": "inline_equation", + "content": "Q" + }, + { + "bbox": [ + 313, + 551, + 555, + 634 + ], + "type": "text", + "content": " is fed to " + }, + { + "bbox": [ + 313, + 551, + 555, + 634 + ], + "type": "inline_equation", + "content": "\\phi^s" + }, + { + "bbox": [ + 313, + 551, + 555, + 634 + ], + "type": "text", + "content": " for a holistic query view, enabling consistent head-specific adaptations. Learnable deformable offsets " + }, + { + "bbox": [ + 313, + 551, + 555, + 634 + ], + "type": "inline_equation", + "content": "D^s" + }, + { + "bbox": [ + 313, + 551, + 555, + 634 + ], + "type": "text", + "content": " adjust sampling locations within a head, enhancing the relevance of the sampled keys and values to the central query and improving the layer's handling of complex patterns." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 325, + 635, + 545, + 647 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 325, + 635, + 545, + 647 + ], + "spans": [ + { + "bbox": [ + 325, + 635, + 545, + 647 + ], + "type": "text", + "content": "Finally, the " + }, + { + "bbox": [ + 325, + 635, + 545, + 647 + ], + "type": "inline_equation", + "content": "s" + }, + { + "bbox": [ + 325, + 635, + 545, + 647 + ], + "type": "text", + "content": "-th head output of SADA is calculated by" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 381, + 654, + 553, + 682 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 381, + 654, + 553, + 682 + ], + "spans": [ + { + "bbox": [ + 381, + 654, + 553, + 682 + ], + "type": "interline_equation", + "content": "\\boldsymbol {Y} _ {u} ^ {s} = \\sum_ {v \\in \\tilde {\\mathcal {N}} ^ {s} (u)} \\omega_ {u \\rightarrow v} \\boldsymbol {V} _ {v} ^ {s}. 
\\tag {4}", + "image_path": "869f5e50091f8ed5fd1c6920ac123137eca6b7068c7f23ee51feb5a2a1d5af66.jpg" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 313, + 689, + 555, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 689, + 555, + 714 + ], + "spans": [ + { + "bbox": [ + 313, + 689, + 555, + 714 + ], + "type": "text", + "content": "To mitigate the negative effects caused by statistical-based offset generator " + }, + { + "bbox": [ + 313, + 689, + 555, + 714 + ], + "type": "inline_equation", + "content": "\\phi^s" + }, + { + "bbox": [ + 313, + 689, + 555, + 714 + ], + "type": "text", + "content": ", the low-relevance points, referred to" + } + ] + } + ], + "index": 24 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "text", + "content": "12734" + } + ] + } + ], + "index": 25 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 85, + 76, + 225, + 291 + ], + "blocks": [ + { + "bbox": [ + 85, + 76, + 225, + 291 + ], + "lines": [ + { + "bbox": [ + 85, + 76, + 225, + 291 + ], + "spans": [ + { + "bbox": [ + 85, + 76, + 225, + 291 + ], + "type": "image", + "image_path": "26969461c2da252e50c7dc2d873906a68b609d3aad27c7861c0c1a3cf449eef7.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 130, + 293, + 185, + 301 + ], + "lines": [ + { + "bbox": [ + 130, + 293, + 185, + 301 + ], + "spans": [ + { + "bbox": [ + 130, + 293, + 185, + 301 + ], + "type": "text", + "content": "(a) Overall SADA." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 228, + 83, + 399, + 213 + ], + "blocks": [ + { + "bbox": [ + 228, + 83, + 399, + 213 + ], + "lines": [ + { + "bbox": [ + 228, + 83, + 399, + 213 + ], + "spans": [ + { + "bbox": [ + 228, + 83, + 399, + 213 + ], + "type": "image", + "image_path": "366547a4407cb33e037e3dbe47b51e0544fe3af5a66f089ad586396273162cb8.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 242, + 215, + 386, + 224 + ], + "lines": [ + { + "bbox": [ + 242, + 215, + 386, + 224 + ], + "spans": [ + { + "bbox": [ + 242, + 215, + 386, + 224 + ], + "type": "text", + "content": "(b) Deformable sampling process of the " + }, + { + "bbox": [ + 242, + 215, + 386, + 224 + ], + "type": "inline_equation", + "content": "s" + }, + { + "bbox": [ + 242, + 215, + 386, + 224 + ], + "type": "text", + "content": "-th head." 
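Read together, Eqs. (2)-(4) say: each head s builds the dilated grid Δ^(s), shifts it by the predicted offsets δ_u^s, bilinearly samples keys and values at the resulting off-grid locations, and takes a softmax-weighted sum. A minimal single-head sketch under our own conventions — the offset tensor layout, sampling via grid_sample, and a 1/√C temperature that Eq. (2) omits are all assumptions:

```python
import torch
import torch.nn.functional as F

def sada_head(q, k, v, offsets, M):
    """One SADA head. q, k, v: (B, C, H, W); offsets: (B, 18, H, W), the
    per-position shifts D^s of Eq. (3) for the 9 points of Δ^(s) = {-M, 0, M}²."""
    B, C, H, W = q.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).to(q)                   # (H, W, 2), (x, y)
    d = torch.tensor([-M, 0.0, M], device=q.device)
    grid = torch.stack(torch.meshgrid(d, d, indexing="ij"), -1).reshape(9, 2)
    off = offsets.reshape(B, 9, 2, H, W).permute(0, 1, 3, 4, 2)  # δ_u^s (our layout)
    loc = base + grid.view(1, 9, 1, 1, 2) + off                  # Ñ^s(u) of the text
    loc = loc / q.new_tensor([W - 1, H - 1]) * 2 - 1             # normalize to [-1, 1]
    loc = loc.reshape(B, 9 * H, W, 2)
    k9 = F.grid_sample(k, loc, align_corners=True).reshape(B, C, 9, H, W)
    v9 = F.grid_sample(v, loc, align_corners=True).reshape(B, C, 9, H, W)
    logits = (q.unsqueeze(2) * k9).sum(1, keepdim=True) / C ** 0.5
    w = logits.softmax(dim=2)                                    # ω_{u→v}, Eq. (2)
    return (w * v9).sum(dim=2)                                   # Y^s, Eq. (4)
```

The full layer runs S such heads with dilations M_s ∈ {1, 3, 5, 7}, concatenates the outputs Y^s along channels, and feeds them to the FRFN feed-forward block, as described in the text.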
+ } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 227, + 228, + 404, + 287 + ], + "blocks": [ + { + "bbox": [ + 227, + 228, + 404, + 287 + ], + "lines": [ + { + "bbox": [ + 227, + 228, + 404, + 287 + ], + "spans": [ + { + "bbox": [ + 227, + 228, + 404, + 287 + ], + "type": "image", + "image_path": "780f209f31a2ec7d82446a21bb5d6000393155851241e63cb4cd702a586b842f.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 408, + 83, + 526, + 163 + ], + "blocks": [ + { + "bbox": [ + 408, + 83, + 526, + 163 + ], + "lines": [ + { + "bbox": [ + 408, + 83, + 526, + 163 + ], + "spans": [ + { + "bbox": [ + 408, + 83, + 526, + 163 + ], + "type": "image", + "image_path": "7c0e7d3ca0817fa01603efc28366f9e06617052167e5bd6ba41df3116162ee16.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 421, + 164, + 511, + 173 + ], + "lines": [ + { + "bbox": [ + 421, + 164, + 511, + 173 + ], + "spans": [ + { + "bbox": [ + 421, + 164, + 511, + 173 + ], + "type": "text", + "content": "(c) Initial sampling grids " + }, + { + "bbox": [ + 421, + 164, + 511, + 173 + ], + "type": "inline_equation", + "content": "\\{\\Delta^{(s)}\\}" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 408, + 175, + 526, + 293 + ], + "blocks": [ + { + "bbox": [ + 408, + 175, + 526, + 293 + ], + "lines": [ + { + "bbox": [ + 408, + 175, + 526, + 293 + ], + "spans": [ + { + "bbox": [ + 408, + 175, + 526, + 293 + ], + "type": "image", + "image_path": "0b220c97b6564a4020272459b3f394be20e6f26d481b3ac2dad432dd4f0a1683.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 412, + 293, + 519, + 301 + ], + "lines": [ + { + "bbox": [ + 412, + 293, + 519, + 301 + ], + "spans": [ + { + "bbox": [ + 412, + 293, + 519, + 301 + ], + "type": "text", + "content": "(d) The process instance for one head." + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 54, + 309, + 555, + 343 + ], + "lines": [ + { + "bbox": [ + 54, + 309, + 555, + 343 + ], + "spans": [ + { + "bbox": [ + 54, + 309, + 555, + 343 + ], + "type": "text", + "content": "Figure 3. Overview of SADA: (a) SADA employs multi-scale multi-head mechanism to aggregate (c) a broad, sparse receptive field for capturing long-range context, while utilizing (b) deformable sampling offsets to model complex patterns. (d) A process instance of SADA within a head." + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_caption" + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 364, + 296, + 437 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 364, + 296, + 437 + ], + "spans": [ + { + "bbox": [ + 55, + 364, + 296, + 437 + ], + "type": "text", + "content": "invalid points, are filtered out in the weighted sum with a low " + }, + { + "bbox": [ + 55, + 364, + 296, + 437 + ], + "type": "inline_equation", + "content": "\\omega_{u\\rightarrow v}" + }, + { + "bbox": [ + 55, + 364, + 296, + 437 + ], + "type": "text", + "content": " in Eq. 4. The remaining points are considered valid. 
The outputs " + }, + { + "bbox": [ + 55, + 364, + 296, + 437 + ], + "type": "inline_equation", + "content": "Y^{s}" + }, + { + "bbox": [ + 55, + 364, + 296, + 437 + ], + "type": "text", + "content": " are combined into " + }, + { + "bbox": [ + 55, + 364, + 296, + 437 + ], + "type": "inline_equation", + "content": "\\pmb{Y}" + }, + { + "bbox": [ + 55, + 364, + 296, + 437 + ], + "type": "text", + "content": " along channels and fed into a feed-forward network (FFN) for subsequent processing. The FFN is implemented by FRFN [55]. See Fig. 3 for an illustration." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 55, + 444, + 261, + 456 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 444, + 261, + 456 + ], + "spans": [ + { + "bbox": [ + 55, + 444, + 261, + 456 + ], + "type": "text", + "content": "3.3. SADT for Structured Artifact Removal" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 55, + 461, + 296, + 581 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 461, + 296, + 581 + ], + "spans": [ + { + "bbox": [ + 55, + 461, + 296, + 581 + ], + "type": "text", + "content": "Building on SEDC and SADA, we propose a transformer, called SADT, for structured artifact removal. SADT employs a multi-input multi-output (MIMO) strategy, as shown in Fig. 4. Given a degraded image " + }, + { + "bbox": [ + 55, + 461, + 296, + 581 + ], + "type": "inline_equation", + "content": "\\mathbf{Y} \\in \\mathbb{R}^{W \\times H \\times 3}" + }, + { + "bbox": [ + 55, + 461, + 296, + 581 + ], + "type": "text", + "content": ", SADT initially extracts shallow features of size " + }, + { + "bbox": [ + 55, + 461, + 296, + 581 + ], + "type": "inline_equation", + "content": "H \\times W \\times C" + }, + { + "bbox": [ + 55, + 461, + 296, + 581 + ], + "type": "text", + "content": " via a convolution layer. These features then pass through 4 encoder blocks, capturing fine to coarse scales, with each block comprising " + }, + { + "bbox": [ + 55, + 461, + 296, + 581 + ], + "type": "inline_equation", + "content": "N_{i}" + }, + { + "bbox": [ + 55, + 461, + 296, + 581 + ], + "type": "text", + "content": " cascaded SADA layers to flexibly model long-range dependencies of repetitive artifacts and a pair of SEDC layers at the end for capturing scale-varying features." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 55, + 582, + 296, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 582, + 296, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 582, + 296, + 714 + ], + "type": "text", + "content": "Within the encoder, channels are expanded and spatial resolution is reduced. Following [8, 9], features from downsampled degraded images are merged into the main path through a fusion module and subsequently processed by 3 decoders to progressively reconstruct the image from coarse to fine scales. Features in the decoder are concatenated with those from the encoder via skip connections, followed by a convolutional layer for channel dimension adjustment. The convolutional layer following each decoder generates a residual image, which is added to the corresponding downsampled input to restore the image. 
The image at the finest scale" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 364, + 555, + 389 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 364, + 555, + 389 + ], + "spans": [ + { + "bbox": [ + 313, + 364, + 555, + 389 + ], + "type": "text", + "content": "serves as the final output, while the other scales also contribute to the loss function during training." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 313, + 402, + 481, + 415 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 402, + 481, + 415 + ], + "spans": [ + { + "bbox": [ + 313, + 402, + 481, + 415 + ], + "type": "text", + "content": "3.4. Loss Function for NN Training" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 313, + 421, + 554, + 481 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 421, + 554, + 481 + ], + "spans": [ + { + "bbox": [ + 313, + 421, + 554, + 481 + ], + "type": "text", + "content": "A multi-scale loss is used for training. Let " + }, + { + "bbox": [ + 313, + 421, + 554, + 481 + ], + "type": "inline_equation", + "content": "O_{t}" + }, + { + "bbox": [ + 313, + 421, + 554, + 481 + ], + "type": "text", + "content": " be the output from the " + }, + { + "bbox": [ + 313, + 421, + 554, + 481 + ], + "type": "inline_equation", + "content": "t" + }, + { + "bbox": [ + 313, + 421, + 554, + 481 + ], + "type": "text", + "content": "-th scale decoder block, and let " + }, + { + "bbox": [ + 313, + 421, + 554, + 481 + ], + "type": "inline_equation", + "content": "X^{gt}" + }, + { + "bbox": [ + 313, + 421, + 554, + 481 + ], + "type": "text", + "content": " be the ground truth. Besides the standard " + }, + { + "bbox": [ + 313, + 421, + 554, + 481 + ], + "type": "inline_equation", + "content": "L_{1}" + }, + { + "bbox": [ + 313, + 421, + 554, + 481 + ], + "type": "text", + "content": " fitting loss, we also include an auxiliary loss function that minimizes the distance between the output and the ground truth in feature space, tailored for each task." 
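Both task-specific objectives that follow (Eqs. (5) and (6)) instantiate the same pattern: an L1 term plus a weighted auxiliary term at every decoder scale. A sketch with the auxiliary loss passed in as a callable; the bilinear downsampling of the ground truth is our assumption:

```python
import torch.nn.functional as F

def multiscale_loss(outputs, gt, aux, lam):
    """outputs: decoder outputs [O_1, O_2, O_3], finest scale first; gt: X^gt;
    aux: auxiliary loss (perceptual L_p or frequency L_f); lam: λ_p or λ_f."""
    total = 0.0
    for t, o in enumerate(outputs):
        g = gt if t == 0 else F.interpolate(
            gt, scale_factor=0.5 ** t, mode="bilinear", align_corners=False)
        total = total + F.l1_loss(o, g) + lam * aux(o, g)
    return total
```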
+ } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 313, + 486, + 554, + 510 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 486, + 554, + 510 + ], + "spans": [ + { + "bbox": [ + 313, + 486, + 554, + 510 + ], + "type": "text", + "content": "Image demoiréing: The perceptual loss [16] " + }, + { + "bbox": [ + 313, + 486, + 554, + 510 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_p" + }, + { + "bbox": [ + 313, + 486, + 554, + 510 + ], + "type": "text", + "content": " is used as an auxiliary loss, and the overall loss is" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 326, + 522, + 554, + 555 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 326, + 522, + 554, + 555 + ], + "spans": [ + { + "bbox": [ + 326, + 522, + 554, + 555 + ], + "type": "interline_equation", + "content": "\\mathcal {L} = \\sum_ {t = 1} ^ {3} \\mathcal {L} _ {1} \\left(\\boldsymbol {O} _ {t}, \\boldsymbol {X} _ {\\downarrow 2 ^ {t - 1}} ^ {g t}\\right) + \\lambda_ {p} \\cdot \\mathcal {L} _ {p} \\left(\\boldsymbol {O} _ {t}, \\boldsymbol {X} _ {\\downarrow 2 ^ {t - 1}} ^ {g t}\\right), \\tag {5}", + "image_path": "f81a30fd6b2b7c37503f05339de73158d316961e778ac1b9f192034e6eccd84f.jpg" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 313, + 567, + 553, + 581 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 567, + 553, + 581 + ], + "spans": [ + { + "bbox": [ + 313, + 567, + 553, + 581 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 567, + 553, + 581 + ], + "type": "inline_equation", + "content": "\\lambda_p > 0" + }, + { + "bbox": [ + 313, + 567, + 553, + 581 + ], + "type": "text", + "content": " is a hyper-parameter, set to 1 in experiments." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 313, + 584, + 555, + 643 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 584, + 555, + 643 + ], + "spans": [ + { + "bbox": [ + 313, + 584, + 555, + 643 + ], + "type": "text", + "content": "Image debanding/deraining: Unlike moiré patterns, band effects and rain streaks exhibit more pronounced stripe-like appearances, yielding strong responses in the frequency domain. 
Thus, the frequency loss " + }, + { + "bbox": [ + 313, + 584, + 555, + 643 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_f" + }, + { + "bbox": [ + 313, + 584, + 555, + 643 + ], + "type": "text", + "content": " is introduced as an additional loss term, and the overall loss is" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 325, + 657, + 553, + 689 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 325, + 657, + 553, + 689 + ], + "spans": [ + { + "bbox": [ + 325, + 657, + 553, + 689 + ], + "type": "interline_equation", + "content": "\\mathcal {L} = \\sum_ {t = 1} ^ {3} \\mathcal {L} _ {1} \\left(\\boldsymbol {O} _ {t}, \\boldsymbol {X} _ {\\downarrow 2 ^ {t - 1}} ^ {g t}\\right) + \\lambda_ {f} \\cdot \\mathcal {L} _ {f} \\left(\\boldsymbol {O} _ {t}, \\boldsymbol {X} _ {\\downarrow 2 ^ {t - 1}} ^ {g t}\\right), \\tag {6}", + "image_path": "917e0a605d893256656b47ab20877a9d1634393a71dd948354426926f5022931.jpg" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 313, + 701, + 555, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 701, + 555, + 714 + ], + "spans": [ + { + "bbox": [ + 313, + 701, + 555, + 714 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 701, + 555, + 714 + ], + "type": "inline_equation", + "content": "\\lambda_{f} > 0" + }, + { + "bbox": [ + 313, + 701, + 555, + 714 + ], + "type": "text", + "content": " is a hyper-parameter, set to 0.1 in experiments." + } + ] + } + ], + "index": 22 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "text", + "content": "12735" + } + ] + } + ], + "index": 23 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 83, + 70, + 527, + 270 + ], + "blocks": [ + { + "bbox": [ + 83, + 70, + 527, + 270 + ], + "lines": [ + { + "bbox": [ + 83, + 70, + 527, + 270 + ], + "spans": [ + { + "bbox": [ + 83, + 70, + 527, + 270 + ], + "type": "image", + "image_path": "3a9cacf7fc8ba0171f708675a83945402a6028153ea077f50e4f02ae7094cc26.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 55, + 275, + 555, + 299 + ], + "lines": [ + { + "bbox": [ + 55, + 275, + 555, + 299 + ], + "spans": [ + { + "bbox": [ + 55, + 275, + 555, + 299 + ], + "type": "text", + "content": "Figure 4. The proposed network employs a multiscale hierarchical encoder-decoder architecture. Each block comprises " + }, + { + "bbox": [ + 55, + 275, + 555, + 299 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 55, + 275, + 555, + 299 + ], + "type": "text", + "content": " cascaded SADA layers while utilizing a pair of SEDC layers at the ends." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 318, + 138, + 332 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 318, + 138, + 332 + ], + "spans": [ + { + "bbox": [ + 55, + 318, + 138, + 332 + ], + "type": "text", + "content": "4. 
Experiments" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 339, + 296, + 411 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 339, + 296, + 411 + ], + "spans": [ + { + "bbox": [ + 55, + 339, + 296, + 411 + ], + "type": "text", + "content": "In this section, we first discuss the experimental setting, followed by the evaluation of the effectiveness of our SADT for several structured artifact removal tasks, including image demoiring, debanding and deraining. Ablation studies are then conducted to assess each component of SADT. More results can be found in supplementary material." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 423, + 183, + 436 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 423, + 183, + 436 + ], + "spans": [ + { + "bbox": [ + 55, + 423, + 183, + 436 + ], + "type": "text", + "content": "4.1. Experimental Settings" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 441, + 296, + 525 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 441, + 296, + 525 + ], + "spans": [ + { + "bbox": [ + 55, + 441, + 296, + 525 + ], + "type": "text", + "content": "In our SADT, The number of channels is set to " + }, + { + "bbox": [ + 55, + 441, + 296, + 525 + ], + "type": "inline_equation", + "content": "C = 32" + }, + { + "bbox": [ + 55, + 441, + 296, + 525 + ], + "type": "text", + "content": " while " + }, + { + "bbox": [ + 55, + 441, + 296, + 525 + ], + "type": "inline_equation", + "content": "\\{N_0, N_1, N_2, N_3\\}" + }, + { + "bbox": [ + 55, + 441, + 296, + 525 + ], + "type": "text", + "content": " are set to " + }, + { + "bbox": [ + 55, + 441, + 296, + 525 + ], + "type": "inline_equation", + "content": "\\{0, 4, 4, 4\\}" + }, + { + "bbox": [ + 55, + 441, + 296, + 525 + ], + "type": "text", + "content": ". The partition size in SCMC is " + }, + { + "bbox": [ + 55, + 441, + 296, + 525 + ], + "type": "inline_equation", + "content": "K = 8" + }, + { + "bbox": [ + 55, + 441, + 296, + 525 + ], + "type": "text", + "content": ". 4 heads are used in SADA, and " + }, + { + "bbox": [ + 55, + 441, + 296, + 525 + ], + "type": "inline_equation", + "content": "M_s" + }, + { + "bbox": [ + 55, + 441, + 296, + 525 + ], + "type": "text", + "content": " is set as " + }, + { + "bbox": [ + 55, + 441, + 296, + 525 + ], + "type": "inline_equation", + "content": "\\{1, 3, 5, 7\\}" + }, + { + "bbox": [ + 55, + 441, + 296, + 525 + ], + "type": "text", + "content": ". Model training employs the Adam optimizer [17] with " + }, + { + "bbox": [ + 55, + 441, + 296, + 525 + ], + "type": "inline_equation", + "content": "\\beta_1 = 0.9" + }, + { + "bbox": [ + 55, + 441, + 296, + 525 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 441, + 296, + 525 + ], + "type": "inline_equation", + "content": "\\beta_2 = 0.999" + }, + { + "bbox": [ + 55, + 441, + 296, + 525 + ], + "type": "text", + "content": ". Code will be released upon acceptance. Experimental datasets and implementation details are listed below:" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 529, + 296, + 649 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 529, + 296, + 649 + ], + "spans": [ + { + "bbox": [ + 55, + 529, + 296, + 649 + ], + "type": "text", + "content": "Image demoiréing: Three datasets are used for benchmarking: TIP2018 [33], FHDMi [13], and LCDMoiré [49]. 
The training employs cyclic cosine annealing [26], with an initial learning rate of " + }, + { + "bbox": [ + 55, + 529, + 296, + 649 + ], + "type": "inline_equation", + "content": "2 \\times 10^{-4}" + }, + { + "bbox": [ + 55, + 529, + 296, + 649 + ], + "type": "text", + "content": " decaying to " + }, + { + "bbox": [ + 55, + 529, + 296, + 649 + ], + "type": "inline_equation", + "content": "10^{-6}" + }, + { + "bbox": [ + 55, + 529, + 296, + 649 + ], + "type": "text", + "content": " over each cycle. For the two high-definition datasets, FHDMi and LCDMoiré, we randomly crop " + }, + { + "bbox": [ + 55, + 529, + 296, + 649 + ], + "type": "inline_equation", + "content": "512 \\times 512" + }, + { + "bbox": [ + 55, + 529, + 296, + 649 + ], + "type": "text", + "content": " patches from the images, and train the model for 150 epochs with an annealing cycle of 50 and a batch size of 2. For the TIP2018 dataset, the model is trained for 70 epochs with an annealing cycle of 10. Consistent with [48], no data augmentation is utilized in training." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 653, + 296, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 653, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 653, + 296, + 713 + ], + "type": "text", + "content": "Image debanding: The DID dataset [54] is used for benchmarking, which contains 51,490 pairs of banded and latent image patches of size " + }, + { + "bbox": [ + 55, + 653, + 296, + 713 + ], + "type": "inline_equation", + "content": "256\\times 256" + }, + { + "bbox": [ + 55, + 653, + 296, + 713 + ], + "type": "text", + "content": " (30,829 training pairs, 10,237 validation pairs, and 10,354 testing pairs). The initial learning rate is 0.0002 and is decreased to " + }, + { + "bbox": [ + 55, + 653, + 296, + 713 + ], + "type": "inline_equation", + "content": "10^{-6}" + }, + { + "bbox": [ + 55, + 653, + 296, + 713 + ], + "type": "text", + "content": " over 300K iter" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 319, + 555, + 355 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 319, + 555, + 355 + ], + "spans": [ + { + "bbox": [ + 313, + 319, + 555, + 355 + ], + "type": "text", + "content": "ations with the cosine annealing scheme [26]. The batch size is 4. Data augmentations include horizontal/vertical flipping and rotations of " + }, + { + "bbox": [ + 313, + 319, + 555, + 355 + ], + "type": "inline_equation", + "content": "0^{\\circ}" + }, + { + "bbox": [ + 313, + 319, + 555, + 355 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 313, + 319, + 555, + 355 + ], + "type": "inline_equation", + "content": "90^{\\circ}" + }, + { + "bbox": [ + 313, + 319, + 555, + 355 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 313, + 319, + 555, + 355 + ], + "type": "inline_equation", + "content": "180^{\\circ}" + }, + { + "bbox": [ + 313, + 319, + 555, + 355 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 313, + 319, + 555, + 355 + ], + "type": "inline_equation", + "content": "270^{\\circ}" + }, + { + "bbox": [ + 313, + 319, + 555, + 355 + ], + "type": "text", + "content": "."
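These optimizer and schedule choices map directly onto standard PyTorch components. A configuration sketch for the FHDMi/LCDMoiré runs, where `model` stands for the SADT network and the restart period follows the 50-epoch annealing cycle above:

```python
import torch

# `model` is assumed to be the SADT network defined elsewhere
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4, betas=(0.9, 0.999))
# cyclic cosine annealing [26]: decay 2e-4 -> 1e-6 within each 50-epoch cycle
scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(
    optimizer, T_0=50, eta_min=1e-6)

for epoch in range(150):
    # ... one epoch over randomly cropped 512x512 patches, batch size 2 ...
    scheduler.step()
```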
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 358, + 556, + 454 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 358, + 556, + 454 + ], + "spans": [ + { + "bbox": [ + 313, + 358, + 556, + 454 + ], + "type": "text", + "content": "Image deraining: The large-scale real-world dataset, SPAD [39], is used for benchmarking the task of rain streak removal. It contains 638,492 image pairs for training and 1,000 for testing. The batch size is 32 and the total iteration number is " + }, + { + "bbox": [ + 313, + 358, + 556, + 454 + ], + "type": "inline_equation", + "content": "300\\mathrm{K}" + }, + { + "bbox": [ + 313, + 358, + 556, + 454 + ], + "type": "text", + "content": ". The initial learning rate is fixed at " + }, + { + "bbox": [ + 313, + 358, + 556, + 454 + ], + "type": "inline_equation", + "content": "3\\times 10^{-4}" + }, + { + "bbox": [ + 313, + 358, + 556, + 454 + ], + "type": "text", + "content": " for the first 92K iterations, and then decreased to " + }, + { + "bbox": [ + 313, + 358, + 556, + 454 + ], + "type": "inline_equation", + "content": "1\\times 10^{-6}" + }, + { + "bbox": [ + 313, + 358, + 556, + 454 + ], + "type": "text", + "content": " over the remaining 208K iterations with the cosine annealing scheme [26]. Random vertical/horizontal flips are used in data augmentation." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 456, + 556, + 540 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 456, + 556, + 540 + ], + "spans": [ + { + "bbox": [ + 313, + 456, + 556, + 540 + ], + "type": "text", + "content": "Two common quantitative evaluation metrics are used for all tasks: PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity Index). Consistent with existing deraining methods [5, 44], PSNR/SSIM scores are computed using the Y channel in the YCbCr color space. For image demoiréing and debanding, where periodic patterns significantly impact perception, the LPIPS metric [52] is also used." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 548, + 525, + 562 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 548, + 525, + 562 + ], + "spans": [ + { + "bbox": [ + 313, + 548, + 525, + 562 + ], + "type": "text", + "content": "4.2. Quantitative and Qualitative Evaluation" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 570, + 556, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 570, + 556, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 570, + 556, + 713 + ], + "type": "text", + "content": "Image demoiréing: See Tab. 1 for the comparison of different methods. SADT outperforms all compared methods on the PSNR metric and achieves the best SSIM scores on both the TIP2018 and FHDMi datasets. While many methods, including SADT, achieve comparable SSIM scores on the LCDMoiré dataset, SADT maintains its superiority with the highest PSNR performance. The substantial performance gains on these datasets demonstrate SADT's effectiveness in handling moiré artifacts. 
Qualitative analysis also reveals that SADT exhibits remarkable improvement in moiré pattern removal, particularly in challenging scenarios such as patterns intertwined with hair details and those distributed" + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 749, + 318, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 749, + 318, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 749, + 318, + 757 + ], + "type": "text", + "content": "12736" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 57, + 70, + 390, + 192 + ], + "blocks": [ + { + "bbox": [ + 57, + 70, + 390, + 192 + ], + "lines": [ + { + "bbox": [ + 57, + 70, + 390, + 192 + ], + "spans": [ + { + "bbox": [ + 57, + 70, + 390, + 192 + ], + "type": "table", + "html": "
<table><tr><td rowspan="2">Method</td><td colspan="3">TIP2018 [33]</td><td colspan="3">FHDMi [13]</td><td colspan="3">LCDMoiré [49]</td></tr>
<tr><td>LPIPS↓</td><td>PSNR↑</td><td>SSIM↑</td><td>LPIPS↓</td><td>PSNR↑</td><td>SSIM↑</td><td>LPIPS↓</td><td>PSNR↑</td><td>SSIM↑</td></tr>
<tr><td>MopNet [12]</td><td>-</td><td>27.75</td><td>0.895</td><td>0.1794</td><td>22.76</td><td>0.7958</td><td>-</td><td>-</td><td>-</td></tr>
<tr><td>FHDe2Net [13]</td><td>-</td><td>27.78</td><td>0.896</td><td>0.1688</td><td>22.93</td><td>0.7885</td><td>-</td><td>41.40</td><td>-</td></tr>
<tr><td>WDNet [24]</td><td>-</td><td>28.08</td><td>0.904</td><td>-</td><td>-</td><td>-</td><td>-</td><td>29.66</td><td>0.9670</td></tr>
<tr><td>MBCNN [53]</td><td>-</td><td>30.03</td><td>0.893</td><td>0.1980</td><td>22.31</td><td>0.8095</td><td>-</td><td>44.04</td><td>0.9948</td></tr>
<tr><td>ESDNet [48]</td><td>0.0816</td><td>29.81</td><td>0.916</td><td>0.1354</td><td>24.50</td><td>0.8351</td><td>0.0097</td><td>44.83</td><td>0.9963</td></tr>
<tr><td>CDDF [37]</td><td>-</td><td>28.87</td><td>0.894</td><td>0.1610</td><td>23.63</td><td>0.8040</td><td>-</td><td>44.10</td><td>-</td></tr>
<tr><td>RVDNet [7]</td><td>-</td><td>-</td><td>-</td><td>-</td><td>24.29</td><td>0.8352</td><td>-</td><td>44.54</td><td>0.9932</td></tr>
<tr><td>RRID [46]</td><td>-</td><td>-</td><td>-</td><td>-</td><td>24.39</td><td>0.8300</td><td>-</td><td>-</td><td>-</td></tr>
<tr><td>Ours</td><td>0.0608</td><td>30.77</td><td>0.926</td><td>0.1238</td><td>24.96</td><td>0.8463</td><td>0.0065</td><td>46.43</td><td>0.9923</td></tr></table>
", + "image_path": "e34758b91ec6cc38cebbb3185d9415d18dbd83137bb646e6b77056fdebe01e5e.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "type": "table", + "bbox": [ + 392, + 70, + 554, + 192 + ], + "blocks": [ + { + "bbox": [ + 392, + 70, + 554, + 192 + ], + "lines": [ + { + "bbox": [ + 392, + 70, + 554, + 192 + ], + "spans": [ + { + "bbox": [ + 392, + 70, + 554, + 192 + ], + "type": "table", + "html": "
<table><tr><td>Method</td><td>PSNR↑</td><td>SSIM↑</td><td>LPIPS↓</td></tr>
<tr><td>FCDR [14]</td><td>25.73</td><td>0.7170</td><td>0.3766</td></tr>
<tr><td>FFmpeg [11]</td><td>35.33</td><td>0.9352</td><td>0.0622</td></tr>
<tr><td>AdaDeband [36]</td><td>35.35</td><td>0.9392</td><td>0.0639</td></tr>
<tr><td>BitNet [1]</td><td>38.24</td><td>0.9633</td><td>0.0505</td></tr>
<tr><td>ADNet [34]</td><td>38.29</td><td>0.9612</td><td>0.0499</td></tr>
<tr><td>MWCNN [25]</td><td>39.24</td><td>0.9688</td><td>0.4854</td></tr>
<tr><td>MPRNet [50]</td><td>39.42</td><td>0.9697</td><td>0.0461</td></tr>
<tr><td>Restormer [51]</td><td>39.50</td><td>0.9709</td><td>0.0478</td></tr>
<tr><td>Ours</td><td>39.78</td><td>0.9729</td><td>0.0453</td></tr></table>
", + "image_path": "d3f07b330ecf7b2fd610e44f54cb597534e043c94477c899e7980e7a3a7c84d3.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 78, + 234, + 532, + 323 + ], + "blocks": [ + { + "bbox": [ + 78, + 234, + 532, + 323 + ], + "lines": [ + { + "bbox": [ + 78, + 234, + 532, + 323 + ], + "spans": [ + { + "bbox": [ + 78, + 234, + 532, + 323 + ], + "type": "image", + "image_path": "09e4b09302f65d399bb59c1de75c81247fdd7ac51fd43a7afd0f0df09fcca9fc.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 55, + 328, + 553, + 339 + ], + "lines": [ + { + "bbox": [ + 55, + 328, + 553, + 339 + ], + "spans": [ + { + "bbox": [ + 55, + 328, + 553, + 339 + ], + "type": "text", + "content": "Figure 5. Visual inspection of the results from different image demoiring methods on sample images; see zoom-in box for details inspection." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 361, + 246, + 372 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 361, + 246, + 372 + ], + "spans": [ + { + "bbox": [ + 55, + 361, + 246, + 372 + ], + "type": "text", + "content": "across flat clothing regions, as shown in Fig. 5." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 376, + 296, + 460 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 376, + 296, + 460 + ], + "spans": [ + { + "bbox": [ + 55, + 376, + 296, + 460 + ], + "type": "text", + "content": "Image debanding: Given the limited availability of open-source deep learning methods specifically for debanding, we extended our evaluation to include methods adapted from related tasks. As shown in Tab.2, SADT demonstrates superior performance across all evaluation metrics. See Fig. 6 for visual comparison, where SADT achieves optimal results with minimal banding artifacts and superior color fidelity." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 464, + 296, + 584 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 464, + 296, + 584 + ], + "spans": [ + { + "bbox": [ + 55, + 464, + 296, + 584 + ], + "type": "text", + "content": "Image deraining: The SPAD dataset [39] is used for benchmarking. As shown in Tab. 3, SADT outperformed all other methods, in PSNR and SSIM. Notably, SADT achieved a " + }, + { + "bbox": [ + 55, + 464, + 296, + 584 + ], + "type": "inline_equation", + "content": "0.23\\mathrm{dB}" + }, + { + "bbox": [ + 55, + 464, + 296, + 584 + ], + "type": "text", + "content": " improvement over the second best performer, NeRD-Rain-S [6], with " + }, + { + "bbox": [ + 55, + 464, + 296, + 584 + ], + "type": "inline_equation", + "content": "42\\%" + }, + { + "bbox": [ + 55, + 464, + 296, + 584 + ], + "type": "text", + "content": " fewer parameter count. (See Tab. 4). As shown in Fig. 7, the visual comparison further validate our method's effectiveness. While exhibits residual streaks and NeRD-Rain-S produces discontinuous blocks in their outputs, SADT effectively removes rain streaks while maintaining high visual quality." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 588, + 296, + 660 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 588, + 296, + 660 + ], + "spans": [ + { + "bbox": [ + 55, + 588, + 296, + 660 + ], + "type": "text", + "content": "Complexity comparison: See Tab. 
4 for the comparison of different methods in terms of FLOPs and parameter counts. Our model, SADT, maintains relatively low FLOPs and a small number of parameters, while achieving SOTA performance as evidenced in Tabs. " + }, + { + "bbox": [ + 55, + 588, + 296, + 660 + ], + "type": "inline_equation", + "content": "1\\sim 3" + }, + { + "bbox": [ + 55, + 588, + 296, + 660 + ], + "type": "text", + "content": ". This shows that the effectiveness of our model stems from its design, not its size." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 55, + 671, + 149, + 683 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 671, + 149, + 683 + ], + "spans": [ + { + "bbox": [ + 55, + 671, + 149, + 683 + ], + "type": "text", + "content": "4.3. Ablation Study" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 55, + 689, + 296, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 689, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 689, + 296, + 713 + ], + "type": "text", + "content": "This study evaluates the contribution of the key designs, SADA, SEDC, and the MIMO architecture, toward the performance gain" + } + ] + } + ], + "index": 10 + }, + { + "type": "table", + "bbox": [ + 349, + 358, + 520, + 485 + ], + "blocks": [ + { + "bbox": [ + 55, + 201, + 555, + 222 + ], + "lines": [ + { + "bbox": [ + 55, + 201, + 555, + 222 + ], + "spans": [ + { + "bbox": [ + 55, + 201, + 555, + 222 + ], + "type": "text", + "content": "Table 1. Quantitative results for image demoiréing. The best and second-best results are boldfaced and underlined, respectively. Table 2. Quantitative results of image debanding on the DID dataset [54]." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 349, + 358, + 520, + 485 + ], + "lines": [ + { + "bbox": [ + 349, + 358, + 520, + 485 + ], + "spans": [ + { + "bbox": [ + 349, + 358, + 520, + 485 + ], + "type": "table", + "html": "
<table><tr><td>Method</td><td>Source</td><td>PSNR↑</td><td>SSIM↑</td></tr>
<tr><td>PReNet [32]</td><td>CVPR'19</td><td>40.16</td><td>0.9816</td></tr>
<tr><td>RCDNet [38]</td><td>CVPR'20</td><td>43.36</td><td>0.9831</td></tr>
<tr><td>SPDNet [47]</td><td>ICCV'21</td><td>43.55</td><td>0.9875</td></tr>
<tr><td>MPRNet [50]</td><td>CVPR'21</td><td>45.00</td><td>0.9897</td></tr>
<tr><td>ECNet [20]</td><td>WACV'22</td><td>44.32</td><td>0.9913</td></tr>
<tr><td>IDT [44]</td><td>TPAMI'22</td><td>47.34</td><td>0.9929</td></tr>
<tr><td>Uformer-B [41]</td><td>CVPR'22</td><td>47.84</td><td>0.9925</td></tr>
<tr><td>Restormer [51]</td><td>CVPR'22</td><td>47.98</td><td>0.9921</td></tr>
<tr><td>DRSformer [5]</td><td>CVPR'23</td><td>48.53</td><td>0.9924</td></tr>
<tr><td>NeRD-Rain-S [6]</td><td>CVPR'24</td><td>48.90</td><td>0.9936</td></tr>
<tr><td>Ours</td><td>-</td><td>49.13</td><td>0.9939</td></tr></table>
", + "image_path": "ec4462c56b1b870a8b532f666e8e9035a08b14bb87cd99e5f734635f3df647f7.jpg" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "table_body" + } + ], + "index": 11 + }, + { + "type": "table", + "bbox": [ + 323, + 525, + 547, + 626 + ], + "blocks": [ + { + "bbox": [ + 313, + 491, + 554, + 513 + ], + "lines": [ + { + "bbox": [ + 313, + 491, + 554, + 513 + ], + "spans": [ + { + "bbox": [ + 313, + 491, + 554, + 513 + ], + "type": "text", + "content": "Table 3. Image deraining results on the SPAD dataset [39]. The best and second-best results are boldfaced and underlined, respectively." + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 323, + 525, + 547, + 626 + ], + "lines": [ + { + "bbox": [ + 323, + 525, + 547, + 626 + ], + "spans": [ + { + "bbox": [ + 323, + 525, + 547, + 626 + ], + "type": "table", + "html": "
<table><tr><td>Task</td><td>Method</td><td>#Params(M)</td><td>#FLOPs(G)</td></tr>
<tr><td rowspan="2">Demoiréing</td><td>ESDNet [48]</td><td>5.93</td><td>17.6</td></tr>
<tr><td>MBCNN [53]</td><td>13.9</td><td>-</td></tr>
<tr><td rowspan="2">Debanding</td><td>Restormer [51]</td><td>26.13</td><td>150.0</td></tr>
<tr><td>MPRNet [50]</td><td>20.13</td><td>777.0</td></tr>
<tr><td rowspan="2">Deraining</td><td>NeRD-Rain-S [6]</td><td>10.53</td><td>79.2</td></tr>
<tr><td>DRSformer [5]</td><td>33.65</td><td>242.9</td></tr>
<tr><td></td><td>Ours</td><td>6.13</td><td>39.7</td></tr></table>
", + "image_path": "8a6774e8788b5d2b7f5bf29d563e2e05dfc541214ebd4defa377e6855f110d9e.jpg" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "table_body" + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 632, + 554, + 654 + ], + "lines": [ + { + "bbox": [ + 313, + 632, + 554, + 654 + ], + "spans": [ + { + "bbox": [ + 313, + 632, + 554, + 654 + ], + "type": "text", + "content": "Table 4. Complexity comparison with SOTA methods in different tasks. The test image is of size " + }, + { + "bbox": [ + 313, + 632, + 554, + 654 + ], + "type": "inline_equation", + "content": "256 \\times 256" + }, + { + "bbox": [ + 313, + 632, + 554, + 654 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 313, + 677, + 554, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 677, + 554, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 677, + 554, + 713 + ], + "type": "text", + "content": "of our model. For SADA, we removed the multi-scale head mechanism and deformable sampling offset, reducing it to multi-head LSA. For SEDC, we substituted it with a single" + } + ] + } + ], + "index": 15 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "text", + "content": "12737" + } + ] + } + ], + "index": 16 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 77, + 72, + 167, + 163 + ], + "blocks": [ + { + "bbox": [ + 77, + 72, + 167, + 163 + ], + "lines": [ + { + "bbox": [ + 77, + 72, + 167, + 163 + ], + "spans": [ + { + "bbox": [ + 77, + 72, + 167, + 163 + ], + "type": "image", + "image_path": "d67476ce008b6e90254367f5b920789fb20385f4661dc52e9b60162aefaf1bc0.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 105, + 163, + 138, + 171 + ], + "lines": [ + { + "bbox": [ + 105, + 163, + 138, + 171 + ], + "spans": [ + { + "bbox": [ + 105, + 163, + 138, + 171 + ], + "type": "text", + "content": "Degraded" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 168, + 72, + 212, + 163 + ], + "blocks": [ + { + "bbox": [ + 168, + 72, + 212, + 163 + ], + "lines": [ + { + "bbox": [ + 168, + 72, + 212, + 163 + ], + "spans": [ + { + "bbox": [ + 168, + 72, + 212, + 163 + ], + "type": "image", + "image_path": "d691ae42742b29842e54b1ccbbb3c974c519c71e6701947492c1e7047b82148f.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 173, + 163, + 208, + 171 + ], + "lines": [ + { + "bbox": [ + 173, + 163, + 208, + 171 + ], + "spans": [ + { + "bbox": [ + 173, + 163, + 208, + 171 + ], + "type": "text", + "content": "Reference" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 213, + 72, + 258, + 163 + ], + "blocks": [ + { + "bbox": [ + 213, + 72, + 258, + 163 + ], + "lines": [ + { + "bbox": [ + 213, + 72, + 258, + 163 + ], + "spans": [ + { + "bbox": [ + 213, + 72, + 258, + 163 + ], + "type": "image", + "image_path": "c833da0ab580a2d86d53810a244d54d169308991c112581edf1020cb5e8288e9.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 222, + 163, + 250, + 172 + ], + "lines": [ + { + 
"bbox": [ + 222, + 163, + 250, + 172 + ], + "spans": [ + { + "bbox": [ + 222, + 163, + 250, + 172 + ], + "type": "text", + "content": "FFmpeg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 258, + 72, + 304, + 163 + ], + "blocks": [ + { + "bbox": [ + 258, + 72, + 304, + 163 + ], + "lines": [ + { + "bbox": [ + 258, + 72, + 304, + 163 + ], + "spans": [ + { + "bbox": [ + 258, + 72, + 304, + 163 + ], + "type": "image", + "image_path": "831fc324be9d9600d55e81fae8bc6e030c271a484ab588f89aaad949967ee31a.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 270, + 163, + 293, + 171 + ], + "lines": [ + { + "bbox": [ + 270, + 163, + 293, + 171 + ], + "spans": [ + { + "bbox": [ + 270, + 163, + 293, + 171 + ], + "type": "text", + "content": "BitNet" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 304, + 72, + 351, + 163 + ], + "blocks": [ + { + "bbox": [ + 304, + 72, + 351, + 163 + ], + "lines": [ + { + "bbox": [ + 304, + 72, + 351, + 163 + ], + "spans": [ + { + "bbox": [ + 304, + 72, + 351, + 163 + ], + "type": "image", + "image_path": "476347858e28260e4c3cb099275709896347ac097ea0a1281ba1cfd7ce90e429.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 315, + 163, + 339, + 171 + ], + "lines": [ + { + "bbox": [ + 315, + 163, + 339, + 171 + ], + "spans": [ + { + "bbox": [ + 315, + 163, + 339, + 171 + ], + "type": "text", + "content": "ADNet" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_caption" + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 351, + 72, + 396, + 163 + ], + "blocks": [ + { + "bbox": [ + 351, + 72, + 396, + 163 + ], + "lines": [ + { + "bbox": [ + 351, + 72, + 396, + 163 + ], + "spans": [ + { + "bbox": [ + 351, + 72, + 396, + 163 + ], + "type": "image", + "image_path": "308722b10de25de6940078184cbd85d0fdd10b2f66790cda603f15ffc611d5d2.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 357, + 163, + 389, + 171 + ], + "lines": [ + { + "bbox": [ + 357, + 163, + 389, + 171 + ], + "spans": [ + { + "bbox": [ + 357, + 163, + 389, + 171 + ], + "type": "text", + "content": "MWCNN" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_caption" + } + ], + "index": 10 + }, + { + "type": "image", + "bbox": [ + 396, + 72, + 441, + 163 + ], + "blocks": [ + { + "bbox": [ + 396, + 72, + 441, + 163 + ], + "lines": [ + { + "bbox": [ + 396, + 72, + 441, + 163 + ], + "spans": [ + { + "bbox": [ + 396, + 72, + 441, + 163 + ], + "type": "image", + "image_path": "9fc412018d2df07dc4983bc954290d12761dac8eee708cefa2aef5bbe09b7368.jpg" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 404, + 163, + 433, + 171 + ], + "lines": [ + { + "bbox": [ + 404, + 163, + 433, + 171 + ], + "spans": [ + { + "bbox": [ + 404, + 163, + 433, + 171 + ], + "type": "text", + "content": "MPRNet" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_caption" + } + ], + "index": 12 + }, + { + "type": "image", + "bbox": [ + 441, + 72, + 487, + 163 + ], + "blocks": [ + { + "bbox": [ + 441, + 72, + 487, + 163 + ], + "lines": [ + { + "bbox": [ + 441, + 72, + 487, + 163 + ], + "spans": [ + { + "bbox": [ + 441, + 72, + 487, + 163 + ], + "type": "image", + "image_path": "ee0ccb03a30074b1912753b00796a13c6b83dbe013effe654294fb47b065aad3.jpg" + } + ] + } + ], + 
"index": 14, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 447, + 163, + 482, + 171 + ], + "lines": [ + { + "bbox": [ + 447, + 163, + 482, + 171 + ], + "spans": [ + { + "bbox": [ + 447, + 163, + 482, + 171 + ], + "type": "text", + "content": "Restormer" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_caption" + } + ], + "index": 14 + }, + { + "type": "image", + "bbox": [ + 488, + 72, + 533, + 163 + ], + "blocks": [ + { + "bbox": [ + 488, + 72, + 533, + 163 + ], + "lines": [ + { + "bbox": [ + 488, + 72, + 533, + 163 + ], + "spans": [ + { + "bbox": [ + 488, + 72, + 533, + 163 + ], + "type": "image", + "image_path": "51caff34d730129e89291a9db1e8e6b8930a7ad08b9100a4ab4fbe28705f1665.jpg" + } + ] + } + ], + "index": 16, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 502, + 163, + 518, + 171 + ], + "lines": [ + { + "bbox": [ + 502, + 163, + 518, + 171 + ], + "spans": [ + { + "bbox": [ + 502, + 163, + 518, + 171 + ], + "type": "text", + "content": "Ours" + } + ] + } + ], + "index": 17, + "angle": 0, + "type": "image_caption" + } + ], + "index": 16 + }, + { + "type": "image", + "bbox": [ + 75, + 201, + 141, + 269 + ], + "blocks": [ + { + "bbox": [ + 55, + 177, + 555, + 190 + ], + "lines": [ + { + "bbox": [ + 55, + 177, + 555, + 190 + ], + "spans": [ + { + "bbox": [ + 55, + 177, + 555, + 190 + ], + "type": "text", + "content": "Figure 6. Visual inspection of the results from different image debanding methods on sample images; see zoom-in box for details inspection" + } + ] + } + ], + "index": 18, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 75, + 201, + 141, + 269 + ], + "lines": [ + { + "bbox": [ + 75, + 201, + 141, + 269 + ], + "spans": [ + { + "bbox": [ + 75, + 201, + 141, + 269 + ], + "type": "image", + "image_path": "ecfe37fee3dcaf0a42e963ab705ad66f1c3f2d3fac6f0279c8bdc5d8c1302290.jpg" + } + ] + } + ], + "index": 19, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 93, + 270, + 125, + 279 + ], + "lines": [ + { + "bbox": [ + 93, + 270, + 125, + 279 + ], + "spans": [ + { + "bbox": [ + 93, + 270, + 125, + 279 + ], + "type": "text", + "content": "Degraded" + } + ] + } + ], + "index": 20, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 122, + 285, + 487, + 297 + ], + "lines": [ + { + "bbox": [ + 122, + 285, + 487, + 297 + ], + "spans": [ + { + "bbox": [ + 122, + 285, + 487, + 297 + ], + "type": "text", + "content": "Figure 7. Visual inspection of the results from different image deraining methods on sampled images." 
+ } + ] + } + ], + "index": 33, + "angle": 0, + "type": "image_caption" + } + ], + "index": 19 + }, + { + "type": "image", + "bbox": [ + 142, + 201, + 207, + 269 + ], + "blocks": [ + { + "bbox": [ + 142, + 201, + 207, + 269 + ], + "lines": [ + { + "bbox": [ + 142, + 201, + 207, + 269 + ], + "spans": [ + { + "bbox": [ + 142, + 201, + 207, + 269 + ], + "type": "image", + "image_path": "000aa278105d67f7a69eb4a7bc1f826e098f25661e4e789987ff51bc309fe272.jpg" + } + ] + } + ], + "index": 21, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 157, + 270, + 191, + 278 + ], + "lines": [ + { + "bbox": [ + 157, + 270, + 191, + 278 + ], + "spans": [ + { + "bbox": [ + 157, + 270, + 191, + 278 + ], + "type": "text", + "content": "Reference" + } + ] + } + ], + "index": 22, + "angle": 0, + "type": "image_caption" + } + ], + "index": 21 + }, + { + "type": "image", + "bbox": [ + 208, + 201, + 272, + 269 + ], + "blocks": [ + { + "bbox": [ + 208, + 201, + 272, + 269 + ], + "lines": [ + { + "bbox": [ + 208, + 201, + 272, + 269 + ], + "spans": [ + { + "bbox": [ + 208, + 201, + 272, + 269 + ], + "type": "image", + "image_path": "e15a3c2430e39fd7123886b5a1710ae9ba1da945d5fb288896c4355e5607ee8f.jpg" + } + ] + } + ], + "index": 23, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 226, + 270, + 253, + 278 + ], + "lines": [ + { + "bbox": [ + 226, + 270, + 253, + 278 + ], + "spans": [ + { + "bbox": [ + 226, + 270, + 253, + 278 + ], + "type": "text", + "content": "SPDNet" + } + ] + } + ], + "index": 24, + "angle": 0, + "type": "image_caption" + } + ], + "index": 23 + }, + { + "type": "image", + "bbox": [ + 272, + 201, + 338, + 269 + ], + "blocks": [ + { + "bbox": [ + 272, + 201, + 338, + 269 + ], + "lines": [ + { + "bbox": [ + 272, + 201, + 338, + 269 + ], + "spans": [ + { + "bbox": [ + 272, + 201, + 338, + 269 + ], + "type": "image", + "image_path": "a049252f83c7c347e0e67fa0e7a302e4aec2ee6636b18d23b4a0409918faf8b3.jpg" + } + ] + } + ], + "index": 25, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 288, + 270, + 323, + 278 + ], + "lines": [ + { + "bbox": [ + 288, + 270, + 323, + 278 + ], + "spans": [ + { + "bbox": [ + 288, + 270, + 323, + 278 + ], + "type": "text", + "content": "Restormer" + } + ] + } + ], + "index": 26, + "angle": 0, + "type": "image_caption" + } + ], + "index": 25 + }, + { + "type": "image", + "bbox": [ + 338, + 201, + 403, + 269 + ], + "blocks": [ + { + "bbox": [ + 338, + 201, + 403, + 269 + ], + "lines": [ + { + "bbox": [ + 338, + 201, + 403, + 269 + ], + "spans": [ + { + "bbox": [ + 338, + 201, + 403, + 269 + ], + "type": "image", + "image_path": "62cfc5aadba427b59bf2a9f566536de2746a6d7030ddf4a0a4850b7c7b62825f.jpg" + } + ] + } + ], + "index": 27, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 351, + 270, + 391, + 278 + ], + "lines": [ + { + "bbox": [ + 351, + 270, + 391, + 278 + ], + "spans": [ + { + "bbox": [ + 351, + 270, + 391, + 278 + ], + "type": "text", + "content": "DRSformer" + } + ] + } + ], + "index": 28, + "angle": 0, + "type": "image_caption" + } + ], + "index": 27 + }, + { + "type": "image", + "bbox": [ + 404, + 201, + 468, + 269 + ], + "blocks": [ + { + "bbox": [ + 404, + 201, + 468, + 269 + ], + "lines": [ + { + "bbox": [ + 404, + 201, + 468, + 269 + ], + "spans": [ + { + "bbox": [ + 404, + 201, + 468, + 269 + ], + "type": "image", + "image_path": "5d9797410ae695ae22fb6444ec2d21f72bc0478f1a0746ada3611a1140555cde.jpg" + } + ] + } + ], + "index": 29, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 413, + 270, + 460, + 278 + ], + 
"lines": [ + { + "bbox": [ + 413, + 270, + 460, + 278 + ], + "spans": [ + { + "bbox": [ + 413, + 270, + 460, + 278 + ], + "type": "text", + "content": "NeRD-Rain-S" + } + ] + } + ], + "index": 30, + "angle": 0, + "type": "image_caption" + } + ], + "index": 29 + }, + { + "type": "image", + "bbox": [ + 469, + 201, + 535, + 269 + ], + "blocks": [ + { + "bbox": [ + 469, + 201, + 535, + 269 + ], + "lines": [ + { + "bbox": [ + 469, + 201, + 535, + 269 + ], + "spans": [ + { + "bbox": [ + 469, + 201, + 535, + 269 + ], + "type": "image", + "image_path": "992b7a7a872e1f181b9a0d38b0728510fdee449bdacd466939f2ff0b3110199b.jpg" + } + ] + } + ], + "index": 31, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 493, + 270, + 510, + 278 + ], + "lines": [ + { + "bbox": [ + 493, + 270, + 510, + 278 + ], + "spans": [ + { + "bbox": [ + 493, + 270, + 510, + 278 + ], + "type": "text", + "content": "Ours" + } + ] + } + ], + "index": 32, + "angle": 0, + "type": "image_caption" + } + ], + "index": 31 + }, + { + "bbox": [ + 55, + 317, + 296, + 413 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 317, + 296, + 413 + ], + "spans": [ + { + "bbox": [ + 55, + 317, + 296, + 413 + ], + "type": "text", + "content": "SCMC. To ensure a fair comparison, all models were adjusted for comparable sizes by modifying channel numbers, and are trained on the DID [54] dataset for 300K iterations. As shown in Tab. 5, each component makes significant contribution to performance improvement. Notably, SADA contributes a PSNR gain of " + }, + { + "bbox": [ + 55, + 317, + 296, + 413 + ], + "type": "inline_equation", + "content": "0.78\\mathrm{dB}" + }, + { + "bbox": [ + 55, + 317, + 296, + 413 + ], + "type": "text", + "content": " and an SSIM improvement of 0.003, while the inclusion of SEDC yields an additional PSNR increase of " + }, + { + "bbox": [ + 55, + 317, + 296, + 413 + ], + "type": "inline_equation", + "content": "0.17\\mathrm{dB}" + }, + { + "bbox": [ + 55, + 317, + 296, + 413 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 34 + }, + { + "type": "table", + "bbox": [ + 63, + 423, + 288, + 494 + ], + "blocks": [ + { + "bbox": [ + 63, + 423, + 288, + 494 + ], + "lines": [ + { + "bbox": [ + 63, + 423, + 288, + 494 + ], + "spans": [ + { + "bbox": [ + 63, + 423, + 288, + 494 + ], + "type": "table", + "html": "
<table><tr><td>SADA</td><td>SEDC</td><td>MIMO</td><td>PSNR(dB)↑</td><td>SSIM↑</td><td>LPIPS↓</td></tr>
<tr><td>-</td><td>-</td><td>-</td><td>38.62</td><td>0.9663</td><td>0.0587</td></tr>
<tr><td>-</td><td>-</td><td>✓</td><td>38.82</td><td>0.9678</td><td>0.0533</td></tr>
<tr><td>-</td><td>✓</td><td>✓</td><td>39.05</td><td>0.9692</td><td>0.0521</td></tr>
<tr><td>✓</td><td>-</td><td>✓</td><td>39.61</td><td>0.9712</td><td>0.0480</td></tr>
<tr><td>✓</td><td>✓</td><td>✓</td><td>39.78</td><td>0.9729</td><td>0.0453</td></tr></table>
", + "image_path": "22f245c44512667d3c6a0fbdd35f9281723496ae77dc2a6c5af9f1c88fbff2ef.jpg" + } + ] + } + ], + "index": 35, + "angle": 0, + "type": "table_body" + } + ], + "index": 35 + }, + { + "bbox": [ + 55, + 529, + 295, + 625 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 529, + 295, + 625 + ], + "spans": [ + { + "bbox": [ + 55, + 529, + 295, + 625 + ], + "type": "text", + "content": "Impact of the grid dilation: In this study, we investigate the performance benefits of the multi-head mechanism in SADA by comparing different settings of " + }, + { + "bbox": [ + 55, + 529, + 295, + 625 + ], + "type": "inline_equation", + "content": "M_{s}" + }, + { + "bbox": [ + 55, + 529, + 295, + 625 + ], + "type": "text", + "content": " on the DID dataset. Tab. 6 shows that the multi-dilation offset grid improves performance with similar model size. Compared to assigning each head same small dilation (1,1,1,1), our setting (1,3,5,7) better exploits long-range dependencies, resulting in a PSNR gain of 0.2dB and a LPIPS reduction of 0.0039." + } + ] + } + ], + "index": 37 + }, + { + "bbox": [ + 55, + 629, + 296, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 629, + 296, + 715 + ], + "spans": [ + { + "bbox": [ + 55, + 629, + 296, + 715 + ], + "type": "text", + "content": "Visualization of deformable offsets: Refer to Fig. 8 for a comparison of the sampling points between the standard LSA and our SADA. The purple star indicates the central point, while the yellow points and red circles represent the initial and final sampling positions, respectively. This shows SADA can better capture long-range and multi-scale context, which proves advantageous for the ensuing weighted aggregation." + } + ] + } + ], + "index": 38 + }, + { + "type": "table", + "bbox": [ + 351, + 316, + 518, + 366 + ], + "blocks": [ + { + "bbox": [ + 66, + 498, + 284, + 510 + ], + "lines": [ + { + "bbox": [ + 66, + 498, + 284, + 510 + ], + "spans": [ + { + "bbox": [ + 66, + 498, + 284, + 510 + ], + "type": "text", + "content": "Table 5. Results of ablation studies. Boldfaced: best values." + } + ] + } + ], + "index": 36, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 351, + 316, + 518, + 366 + ], + "lines": [ + { + "bbox": [ + 351, + 316, + 518, + 366 + ], + "spans": [ + { + "bbox": [ + 351, + 316, + 518, + 366 + ], + "type": "table", + "html": "
<table><tr><td>Ms</td><td>PSNR(dB)↑</td><td>SSIM↑</td><td>LPIPS↓</td></tr>
<tr><td>{1,1,1,1}</td><td>39.58</td><td>0.9708</td><td>0.0492</td></tr>
<tr><td>{1,2,3,4}</td><td>39.70</td><td>0.9723</td><td>0.0463</td></tr>
<tr><td>{1,3,5,7}</td><td>39.78</td><td>0.9729</td><td>0.0453</td></tr></table>
", + "image_path": "4c5b660f0b30bec78612d5df9889306406088b86b27206e014cae09cafbbe41b.jpg" + } + ] + } + ], + "index": 39, + "angle": 0, + "type": "table_body" + } + ], + "index": 39 + }, + { + "type": "image", + "bbox": [ + 343, + 393, + 413, + 464 + ], + "blocks": [ + { + "bbox": [ + 334, + 372, + 533, + 383 + ], + "lines": [ + { + "bbox": [ + 334, + 372, + 533, + 383 + ], + "spans": [ + { + "bbox": [ + 334, + 372, + 533, + 383 + ], + "type": "text", + "content": "Table 6. Results using different setting of dilation " + }, + { + "bbox": [ + 334, + 372, + 533, + 383 + ], + "type": "inline_equation", + "content": "M_{s}" + }, + { + "bbox": [ + 334, + 372, + 533, + 383 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 40, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 343, + 393, + 413, + 464 + ], + "lines": [ + { + "bbox": [ + 343, + 393, + 413, + 464 + ], + "spans": [ + { + "bbox": [ + 343, + 393, + 413, + 464 + ], + "type": "image", + "image_path": "bfd953a627a2abc4954138fa3debdc2328a09d9327ee929ea53cae079735db6f.jpg" + } + ] + } + ], + "index": 41, + "angle": 0, + "type": "image_body" + } + ], + "index": 41 + }, + { + "type": "image", + "bbox": [ + 319, + 475, + 371, + 527 + ], + "blocks": [ + { + "bbox": [ + 319, + 475, + 371, + 527 + ], + "lines": [ + { + "bbox": [ + 319, + 475, + 371, + 527 + ], + "spans": [ + { + "bbox": [ + 319, + 475, + 371, + 527 + ], + "type": "image", + "image_path": "6d76e442b18716801e65ee1cd73101a25e62b04883e1cf54f954d2eb3ce53086.jpg" + } + ] + } + ], + "index": 43, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 331, + 528, + 360, + 536 + ], + "lines": [ + { + "bbox": [ + 331, + 528, + 360, + 536 + ], + "spans": [ + { + "bbox": [ + 331, + 528, + 360, + 536 + ], + "type": "text", + "content": "0-th head" + } + ] + } + ], + "index": 44, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 318, + 554, + 548, + 566 + ], + "lines": [ + { + "bbox": [ + 318, + 554, + 548, + 566 + ], + "spans": [ + { + "bbox": [ + 318, + 554, + 548, + 566 + ], + "type": "text", + "content": "Figure 8. Visualization of sampling points for LSA and SADA." + } + ] + } + ], + "index": 54, + "angle": 0, + "type": "image_caption" + } + ], + "index": 43 + }, + { + "type": "image", + "bbox": [ + 378, + 475, + 431, + 527 + ], + "blocks": [ + { + "bbox": [ + 378, + 475, + 431, + 527 + ], + "lines": [ + { + "bbox": [ + 378, + 475, + 431, + 527 + ], + "spans": [ + { + "bbox": [ + 378, + 475, + 431, + 527 + ], + "type": "image", + "image_path": "73d98c9d86f7edf79ab57523a9982fd5945b3bbc2a7af6ed44c4a69f34719144.jpg" + } + ] + } + ], + "index": 45, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 390, + 528, + 416, + 535 + ], + "lines": [ + { + "bbox": [ + 390, + 528, + 416, + 535 + ], + "spans": [ + { + "bbox": [ + 390, + 528, + 416, + 535 + ], + "type": "text", + "content": "1-st head" + } + ] + } + ], + "index": 46, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 376, + 536, + 493, + 544 + ], + "lines": [ + { + "bbox": [ + 376, + 536, + 493, + 544 + ], + "spans": [ + { + "bbox": [ + 376, + 536, + 493, + 544 + ], + "type": "text", + "content": "(c) Sampling points for each SADA head." 
+ } + ] + } + ], + "index": 53, + "angle": 0, + "type": "image_caption" + } + ], + "index": 45 + }, + { + "type": "image", + "bbox": [ + 457, + 394, + 528, + 464 + ], + "blocks": [ + { + "bbox": [ + 457, + 394, + 528, + 464 + ], + "lines": [ + { + "bbox": [ + 457, + 394, + 528, + 464 + ], + "spans": [ + { + "bbox": [ + 457, + 394, + 528, + 464 + ], + "type": "image", + "image_path": "dd646ab5fd9eefd96601511c31171cc032884f794f17d8c47f51ad61d9b7c460.jpg" + } + ] + } + ], + "index": 47, + "angle": 0, + "type": "image_body" + } + ], + "index": 47 + }, + { + "bbox": [ + 438, + 465, + 548, + 474 + ], + "angle": 0, + "lines": [ + { + "bbox": [ + 438, + 465, + 548, + 474 + ], + "spans": [ + { + "bbox": [ + 438, + 465, + 548, + 474 + ], + "type": "text", + "content": "(b) Sampling points for each LSA head." + } + ] + } + ], + "index": 48, + "type": "text" + }, + { + "type": "image", + "bbox": [ + 438, + 475, + 491, + 527 + ], + "blocks": [ + { + "bbox": [ + 438, + 475, + 491, + 527 + ], + "lines": [ + { + "bbox": [ + 438, + 475, + 491, + 527 + ], + "spans": [ + { + "bbox": [ + 438, + 475, + 491, + 527 + ], + "type": "image", + "image_path": "2a0b1a3470f6a21ffa7906f139fa5646057af9a95b030c7e5c441f09400aeeeb.jpg" + } + ] + } + ], + "index": 49, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 447, + 528, + 477, + 535 + ], + "lines": [ + { + "bbox": [ + 447, + 528, + 477, + 535 + ], + "spans": [ + { + "bbox": [ + 447, + 528, + 477, + 535 + ], + "type": "text", + "content": "2-nd head" + } + ] + } + ], + "index": 50, + "angle": 0, + "type": "image_caption" + } + ], + "index": 49 + }, + { + "type": "image", + "bbox": [ + 498, + 474, + 550, + 527 + ], + "blocks": [ + { + "bbox": [ + 320, + 465, + 549, + 474 + ], + "lines": [ + { + "bbox": [ + 320, + 465, + 549, + 474 + ], + "spans": [ + { + "bbox": [ + 320, + 465, + 549, + 474 + ], + "type": "text", + "content": "(a) Corresponding down-sampled image. (b) Sampling points for each LSA head." + } + ] + } + ], + "index": 42, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 498, + 474, + 550, + 527 + ], + "lines": [ + { + "bbox": [ + 498, + 474, + 550, + 527 + ], + "spans": [ + { + "bbox": [ + 498, + 474, + 550, + 527 + ], + "type": "image", + "image_path": "59a0015c909e693453bcbbf9c95f4fea10a30d1297d9446ee47f6ea50a835e12.jpg" + } + ] + } + ], + "index": 51, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 509, + 528, + 536, + 534 + ], + "lines": [ + { + "bbox": [ + 509, + 528, + 536, + 534 + ], + "spans": [ + { + "bbox": [ + 509, + 528, + 536, + 534 + ], + "type": "text", + "content": "3-rd head" + } + ] + } + ], + "index": 52, + "angle": 0, + "type": "image_caption" + } + ], + "index": 51 + }, + { + "bbox": [ + 314, + 578, + 375, + 591 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 578, + 375, + 591 + ], + "spans": [ + { + "bbox": [ + 314, + 578, + 375, + 591 + ], + "type": "text", + "content": "Conclusion" + } + ] + } + ], + "index": 55 + }, + { + "bbox": [ + 313, + 599, + 555, + 708 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 599, + 555, + 708 + ], + "spans": [ + { + "bbox": [ + 313, + 599, + 555, + 708 + ], + "type": "text", + "content": "In this paper, we presented SADT, a universal transformer-based architecture for image restoration across diverse artifacts. 
Our approach integrates the SEDC module to capture scale-varying patterns with abundant orientations and potential distortions, and the SADA module to model long-range relationships among repetitive patterns with diverse sizes and non-uniform distributions. Extensive experiments show that SADT consistently outperforms SOTA methods in tasks including image demoiréing, debanding, and deraining." + } + ] + } + ], + "index": 56 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "text", + "content": "12738" + } + ] + } + ], + "index": 57 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 161, + 85 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 72, + 161, + 85 + ], + "spans": [ + { + "bbox": [ + 56, + 72, + 161, + 85 + ], + "type": "text", + "content": "Acknowledgements." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 91, + 297, + 236 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 91, + 297, + 236 + ], + "spans": [ + { + "bbox": [ + 55, + 91, + 297, + 236 + ], + "type": "text", + "content": "This work was supported by National Key R&D Program of China (2023YFA1011601), the Basic and Applied Basic Research Foundation of Guangdong Province (2024A1515012287), Science and Technology Key Program of Guangzhou, China (2023B03J1388), National Natural Science Foundation of China (62372186), Natural Science Foundation of Guangdong Province (2023A1515012841), Fundamental Research Funds for the Central Universities (x2jsD2230220), National Natural Science Foundation of China (62106077), Natural Science Foundation of Guangdong Province (2022A1515011087) and Singapore MOE AcRF Tier 1 (Grant No. A-8000981-00-00)." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 246, + 115, + 258 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 246, + 115, + 258 + ], + "spans": [ + { + "bbox": [ + 56, + 246, + 115, + 258 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 61, + 266, + 297, + 714 + ], + "type": "list", + "angle": 0, + "index": 12, + "blocks": [ + { + "bbox": [ + 61, + 266, + 297, + 308 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 266, + 297, + 308 + ], + "spans": [ + { + "bbox": [ + 61, + 266, + 297, + 308 + ], + "type": "text", + "content": "[1] Junyoung Byun, Kyujin Shim, and Changick Kim. Bitnet: Learning-based bit-depth expansion. In Proceedings of the Asian Conference on Computer Vision, pages 67-82. Springer, 2019. 2, 7" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 61, + 311, + 296, + 366 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 311, + 296, + 366 + ], + "spans": [ + { + "bbox": [ + 61, + 311, + 296, + 366 + ], + "type": "text", + "content": "[2] Jiezhang Cao, Jingyun Liang, Kai Zhang, Yawei Li, Yulun Zhang, Wenguan Wang, and Luc Van Gool. Reference-based image super-resolution with deformable attention transformer. In European conference on computer vision, pages 325-342. Springer, 2022. 
3" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 62, + 367, + 296, + 422 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 367, + 296, + 422 + ], + "spans": [ + { + "bbox": [ + 62, + 367, + 296, + 422 + ], + "type": "text", + "content": "[3] Hanting Chen, Yunhe Wang, Tianyu Guo, Chang Xu, Yiping Deng, Zhenhua Liu, Siwei Ma, Chunjing Xu, Chao Xu, and Wen Gao. Pre-trained image processing transformer. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 12299-12310, 2021. 3" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 62, + 422, + 296, + 478 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 422, + 296, + 478 + ], + "spans": [ + { + "bbox": [ + 62, + 422, + 296, + 478 + ], + "type": "text", + "content": "[4] Jierun Chen, Shiu-hong Kao, Hao He, Weipeng Zhuo, Song Wen, Chul-Ho Lee, and S-H Gary Chan. Run, don't walk: chasing higher flops for faster neural networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 12021-12031, 2023. 4" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 62, + 479, + 296, + 533 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 479, + 296, + 533 + ], + "spans": [ + { + "bbox": [ + 62, + 479, + 296, + 533 + ], + "type": "text", + "content": "[5] Xiang Chen, Hao Li, Mingqiang Li, and Jinshan Pan. Learning a sparse transformer network for effective image deraining. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5896-5905, 2023. 2, 3, 6, 7" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 62, + 535, + 296, + 589 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 535, + 296, + 589 + ], + "spans": [ + { + "bbox": [ + 62, + 535, + 296, + 589 + ], + "type": "text", + "content": "[6] Xiang Chen, Jinshan Pan, and Jiangxin Dong. Bidirectional multi-scale implicit neural representations for image deraining. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 25627-25636, 2024. 2, 7" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 62, + 590, + 296, + 635 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 590, + 296, + 635 + ], + "spans": [ + { + "bbox": [ + 62, + 590, + 296, + 635 + ], + "type": "text", + "content": "[7] Yijia Cheng, Xin Liu, and Jingyu Yang. Recaptured raw screen image and video demoiring via channel and spatial modulations. Advances in Neural Information Processing Systems, 36, 2024. 7" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 62, + 635, + 296, + 689 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 635, + 296, + 689 + ], + "spans": [ + { + "bbox": [ + 62, + 635, + 296, + 689 + ], + "type": "text", + "content": "[8] Sung-Jin Cho, Seo-Won Ji, Jun-Pyo Hong, Seung-Won Jung, and Sung-Jea Ko. Rethinking coarse-to-fine approach in single image deblurring. In Proceedings of the IEEE/CVF international conference on computer vision, pages 4641-4650, 2021. 5" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 62, + 691, + 296, + 714 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 691, + 296, + 714 + ], + "spans": [ + { + "bbox": [ + 62, + 691, + 296, + 714 + ], + "type": "text", + "content": "[9] Yuning Cui, Wenqi Ren, Sining Yang, Xiaochun Cao, and Alois Knoll. 
IRNeXt: Rethinking convolutional network design" + } + ] + } + ], + "index": 11 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 73, + 556, + 713 + ], + "type": "list", + "angle": 0, + "index": 28, + "blocks": [ + { + "bbox": [ + 333, + 73, + 555, + 95 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 333, + 73, + 555, + 95 + ], + "spans": [ + { + "bbox": [ + 333, + 73, + 555, + 95 + ], + "type": "text", + "content": "for image restoration. In International conference on machine learning, 2023. 5" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 316, + 96, + 555, + 140 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 96, + 555, + 140 + ], + "spans": [ + { + "bbox": [ + 316, + 96, + 555, + 140 + ], + "type": "text", + "content": "[10] Jifeng Dai, Haozhi Qi, Yuwen Xiong, Yi Li, Guodong Zhang, Han Hu, and Yichen Wei. Deformable convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 764-773, 2017. 3" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 316, + 141, + 556, + 174 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 141, + 556, + 174 + ], + "spans": [ + { + "bbox": [ + 316, + 141, + 556, + 174 + ], + "type": "text", + "content": "[11] FFmpeg Filters deband. Accessed: Aug. 31, 2021. [Online]. Available: https://ffmpeg.org/ffmpeg-filters.html#deband, 2021. 7" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 316, + 175, + 555, + 218 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 175, + 555, + 218 + ], + "spans": [ + { + "bbox": [ + 316, + 175, + 555, + 218 + ], + "type": "text", + "content": "[12] Bin He, Ce Wang, Boxin Shi, and Ling-Yu Duan. Mop moiré patterns using MopNet. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2424-2432, 2019. 2, 7" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 316, + 220, + 555, + 263 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 220, + 555, + 263 + ], + "spans": [ + { + "bbox": [ + 316, + 220, + 555, + 263 + ], + "type": "text", + "content": "[13] Bin He, Ce Wang, Boxin Shi, and Ling-Yu Duan. FHDe2Net: Full high definition demoiréing network. In Proceedings of the European Conference on Computer Vision, pages 713-729. Springer, 2020. 6, 7" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 316, + 265, + 555, + 319 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 265, + 555, + 319 + ], + "spans": [ + { + "bbox": [ + 316, + 265, + 555, + 319 + ], + "type": "text", + "content": "[14] Qin Huang, Hui Yong Kim, Wen-Jiin Tsai, Se Yoon Jeong, Jin Soo Choi, and C-C Jay Kuo. Understanding and removal of false contour in HEVC compressed images. IEEE Transactions on Circuits and Systems for Video Technology, 28(2): 378-391, 2016. 7" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 316, + 321, + 555, + 354 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 321, + 555, + 354 + ], + "spans": [ + { + "bbox": [ + 316, + 321, + 555, + 354 + ], + "type": "text", + "content": "[15] Yan Huang, Xinchang Lu, Yuhui Quan, Yong Xu, and Hui Ji. Image shadow removal via multi-scale deep retina decomposition. Pattern Recognition, 159:111126, 2025. 
3" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 316, + 355, + 555, + 399 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 355, + 555, + 399 + ], + "spans": [ + { + "bbox": [ + 316, + 355, + 555, + 399 + ], + "type": "text", + "content": "[16] Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In Proceedings of the European Conference on Computer Vision, pages 694–711. Springer, 2016. 5" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 316, + 399, + 555, + 431 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 399, + 555, + 431 + ], + "spans": [ + { + "bbox": [ + 316, + 399, + 555, + 431 + ], + "type": "text", + "content": "[17] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. 6" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 316, + 434, + 554, + 478 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 434, + 554, + 478 + ], + "spans": [ + { + "bbox": [ + 316, + 434, + 554, + 478 + ], + "type": "text", + "content": "[18] Gang Li, Di Xu, Xing Cheng, Lingyu Si, and Changwen Zheng. Simvit: Exploring a simple vision transformer with sliding windows. In 2022 IEEE International Conference on Multimedia and Expo (ICME), pages 1-6. IEEE, 2022. 3" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 316, + 479, + 555, + 522 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 479, + 555, + 522 + ], + "spans": [ + { + "bbox": [ + 316, + 479, + 555, + 522 + ], + "type": "text", + "content": "[19] Yu Li, Robby T Tan, Xiaojie Guo, Jiangbo Lu, and Michael S Brown. Rain streak removal using layer priors. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2736-2744, 2016. 2" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 316, + 523, + 555, + 578 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 523, + 555, + 578 + ], + "spans": [ + { + "bbox": [ + 316, + 523, + 555, + 578 + ], + "type": "text", + "content": "[20] Yizhou Li, Yusuke Monno, and Masatoshi Okutomi. Single image deraining network with rain embedding consistency and layered LSTM. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, pages 4060-4069, 2022. 7" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 316, + 579, + 555, + 633 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 579, + 555, + 633 + ], + "spans": [ + { + "bbox": [ + 316, + 579, + 555, + 633 + ], + "type": "text", + "content": "[21] Jingyun Liang, Jiezhang Cao, Guolei Sun, Kai Zhang, Luc Van Gool, and Radu Timofte. Swinir: Image restoration using swin transformer. In Proceedings of the IEEE/CVF international conference on computer vision, pages 1833-1844, 2021. 3" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 316, + 635, + 555, + 679 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 635, + 555, + 679 + ], + "spans": [ + { + "bbox": [ + 316, + 635, + 555, + 679 + ], + "type": "text", + "content": "[22] Fanglei Liu, Jingyu Yang, and Huanjing Yue. Moiré pattern removal from texture images via low-rank and sparse matrix decomposition. In 2015 Visual Communications and Image Processing, pages 1-4. IEEE, 2015. 
2" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 316, + 681, + 554, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 681, + 554, + 713 + ], + "spans": [ + { + "bbox": [ + 316, + 681, + 554, + 713 + ], + "type": "text", + "content": "[23] Jing Liu, Xin Wen, Weizhi Nie, Yuting Su, Peiguang Jing, and Xiaokang Yang. Residual-guided multiscale fusion network for bit-depth enhancement. IEEE Transactions on Circuits" + } + ] + } + ], + "index": 27 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "type": "text", + "content": "12739" + } + ] + } + ], + "index": 29 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 296, + 713 + ], + "type": "list", + "angle": 0, + "index": 15, + "blocks": [ + { + "bbox": [ + 77, + 72, + 296, + 94 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 77, + 72, + 296, + 94 + ], + "spans": [ + { + "bbox": [ + 77, + 72, + 296, + 94 + ], + "type": "text", + "content": "and Systems for Video Technology, 32(5):2773-2786, 2021. 2, 3" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 96, + 296, + 162 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 96, + 296, + 162 + ], + "spans": [ + { + "bbox": [ + 56, + 96, + 296, + 162 + ], + "type": "text", + "content": "[24] Lin Liu, Jianzhuang Liu, Shanxin Yuan, Gregory Slabaugh, Ales Leonardis, Wengang Zhou, and Qi Tian. Wavelet-based dual-branch network for image demoiring. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XIII 16, pages 86–102. Springer, 2020. 7" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 163, + 296, + 207 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 163, + 296, + 207 + ], + "spans": [ + { + "bbox": [ + 56, + 163, + 296, + 207 + ], + "type": "text", + "content": "[25] Pengju Liu, Hongzhi Zhang, Kai Zhang, Liang Lin, and Wangmeng Zuo. Multi-level wavelet-cnn for image restoration. In Proceedings of the IEEE conference on computer vision and pattern recognition workshops, pages 773-782, 2018. 7" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 56, + 209, + 296, + 239 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 209, + 296, + 239 + ], + "spans": [ + { + "bbox": [ + 56, + 209, + 296, + 239 + ], + "type": "text", + "content": "[26] Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983, 2016. 6" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 56, + 241, + 296, + 285 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 241, + 296, + 285 + ], + "spans": [ + { + "bbox": [ + 56, + 241, + 296, + 285 + ], + "type": "text", + "content": "[27] Yu Luo, Yong Xu, and Hui Ji. Removing rain from a single image via discriminative sparse coding. In Proceedings of the IEEE international conference on computer vision, pages 3397-3405, 2015. 
2" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 56, + 286, + 296, + 319 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 286, + 296, + 319 + ], + "spans": [ + { + "bbox": [ + 56, + 286, + 296, + 319 + ], + "type": "text", + "content": "[28] Duong Hai Nguyen, Se-Ho Lee, and Chul Lee. Multiscale coarse-to-fine guided screenshot demoiring. IEEE Signal Processing Letters, 2023. 2, 3" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 56, + 321, + 296, + 365 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 321, + 296, + 365 + ], + "spans": [ + { + "bbox": [ + 56, + 321, + 296, + 365 + ], + "type": "text", + "content": "[29] Yuhui Quan, Shijie Deng, Yixin Chen, and Hui Ji. Deep learning for seeing through window with raindrops. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2463-2471, 2019. 2" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 56, + 365, + 296, + 409 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 365, + 296, + 409 + ], + "spans": [ + { + "bbox": [ + 56, + 365, + 296, + 409 + ], + "type": "text", + "content": "[30] Yuhui Quan, Xuyi He, Ruotao Xu, Yong Xu, and Hui Ji. Image debanding using cross-scale invertible networks with banded deformable convolutions. Neural Networks, page 107270, 2025. 3" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 56, + 411, + 296, + 454 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 411, + 296, + 454 + ], + "spans": [ + { + "bbox": [ + 56, + 411, + 296, + 454 + ], + "type": "text", + "content": "[31] Prajit Ramachandran, Niki Parmar, Ashish Vaswani, Irwan Bello, Anselm Levskaya, and Jon Shlens. Stand-alone self-attention in vision models. Advances in neural information processing systems, 32, 2019. 3, 4" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 56, + 456, + 296, + 510 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 456, + 296, + 510 + ], + "spans": [ + { + "bbox": [ + 56, + 456, + 296, + 510 + ], + "type": "text", + "content": "[32] Dongwei Ren, Wangmeng Zuo, Qinghua Hu, Pengfei Zhu, and Deyu Meng. Progressive image deraining networks: A better and simpler baseline. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 3937-3946, 2019. 7" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 56, + 511, + 296, + 555 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 511, + 296, + 555 + ], + "spans": [ + { + "bbox": [ + 56, + 511, + 296, + 555 + ], + "type": "text", + "content": "[33] Yujing Sun, Yizhou Yu, and Wenping Wang. Moiré photo restoration using multiresolution convolutional neural networks. IEEE Transactions on Image Processing, 27(8):4160-4172, 2018. 2, 3, 6, 7" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 56, + 556, + 296, + 589 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 556, + 296, + 589 + ], + "spans": [ + { + "bbox": [ + 56, + 556, + 296, + 589 + ], + "type": "text", + "content": "[34] Chunwei Tian, Yong Xu, Zuoyong Li, Wangmeng Zuo, Lunke Fei, and Hong Liu. Attention-guided cnn for image denoising. Neural Networks, 124:117-129, 2020. 
7" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 56, + 590, + 296, + 645 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 590, + 296, + 645 + ], + "spans": [ + { + "bbox": [ + 56, + 590, + 296, + 645 + ], + "type": "text", + "content": "[35] Ilya O Tolstikhin, Neil Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Thomas Unterthiner, Jessica Yung, Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, et al. Mlp-mixer: An all-mlp architecture for vision. Advances in neural information processing systems, 34:24261–24272, 2021. 4" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 56, + 647, + 296, + 679 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 647, + 296, + 679 + ], + "spans": [ + { + "bbox": [ + 56, + 647, + 296, + 679 + ], + "type": "text", + "content": "[36] Zhengzhong Tu, Jessie Lin, Yilin Wang, Balu Adsumilli, and Alan C Bovik. Adaptive debanding filter. IEEE Signal Processing Letters, 27:1715-1719, 2020. 7" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 56, + 681, + 296, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 681, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 56, + 681, + 296, + 713 + ], + "type": "text", + "content": "[37] Ce Wang, Bin He, Shengsen Wu, Renjie Wan, Boxin Shi, and Ling-Yu Duan. Coarse-to-fine disentangling demoiréing framework for recaptured screen images. IEEE Transactions" + } + ] + } + ], + "index": 14 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 73, + 555, + 713 + ], + "type": "list", + "angle": 0, + "index": 29, + "blocks": [ + { + "bbox": [ + 333, + 73, + 555, + 95 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 333, + 73, + 555, + 95 + ], + "spans": [ + { + "bbox": [ + 333, + 73, + 555, + 95 + ], + "type": "text", + "content": "on Pattern Analysis and Machine Intelligence, 45(8):9439-9453, 2023. 7" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 316, + 96, + 555, + 140 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 96, + 555, + 140 + ], + "spans": [ + { + "bbox": [ + 316, + 96, + 555, + 140 + ], + "type": "text", + "content": "[38] Hong Wang, Qi Xie, Qian Zhao, and Deyu Meng. A model-driven deep neural network for single image rain removal. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 3103-3112, 2020. 3, 7" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 316, + 141, + 555, + 196 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 141, + 555, + 196 + ], + "spans": [ + { + "bbox": [ + 316, + 141, + 555, + 196 + ], + "type": "text", + "content": "[39] Tianyu Wang, Xin Yang, Ke Xu, Shaozhe Chen, Qiang Zhang, and Rynson WH Lau. Spatial attentive single-image deraining with a high quality real rain dataset. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 12270-12279, 2019. 6, 7" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 316, + 198, + 555, + 262 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 198, + 555, + 262 + ], + "spans": [ + { + "bbox": [ + 316, + 198, + 555, + 262 + ], + "type": "text", + "content": "[40] Wenhai Wang, Jifeng Dai, Zhe Chen, Zhenhang Huang, Zhiqi Li, Xizhou Zhu, Xiaowei Hu, Tong Lu, Lewei Lu, Hongsheng Li, et al. Internimage: Exploring large-scale vision foundation models with deformable convolutions. 
In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 14408-14419, 2023. 3" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 316, + 264, + 555, + 319 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 264, + 555, + 319 + ], + "spans": [ + { + "bbox": [ + 316, + 264, + 555, + 319 + ], + "type": "text", + "content": "[41] Zhendong Wang, Xiaodong Cun, Jianmin Bao, Wengang Zhou, Jianzhuang Liu, and Houqiang Li. Uformer: A general u-shaped transformer for image restoration. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 17683-17693, 2022. 3, 7" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 316, + 320, + 555, + 365 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 320, + 555, + 365 + ], + "spans": [ + { + "bbox": [ + 316, + 320, + 555, + 365 + ], + "type": "text", + "content": "[42] Xin Wen, Weizhi Nie, Jing Liu, and Yuting Su. Mrft: Multiscale recurrent fusion transformer based prior knowledge for bit-depth enhancement. IEEE Transactions on Circuits and Systems for Video Technology, 33(10):5562-5575, 2023. 3" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 316, + 365, + 555, + 409 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 365, + 555, + 409 + ], + "spans": [ + { + "bbox": [ + 316, + 365, + 555, + 409 + ], + "type": "text", + "content": "[43] Zhuofan Xia, Xuran Pan, Shiji Song, Li Erran Li, and Gao Huang. Vision transformer with deformable attention. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 4794-4803, 2022. 3" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 316, + 411, + 555, + 454 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 411, + 555, + 454 + ], + "spans": [ + { + "bbox": [ + 316, + 411, + 555, + 454 + ], + "type": "text", + "content": "[44] Jie Xiao, Xueyang Fu, Aiping Liu, Feng Wu, and Zheng-Jun Zha. Image de-raining transformer. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(11):12978-12995, 2022. 3, 6, 7" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 316, + 456, + 555, + 521 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 456, + 555, + 521 + ], + "spans": [ + { + "bbox": [ + 316, + 456, + 555, + 521 + ], + "type": "text", + "content": "[45] Yuwen Xiong, Zhiqi Li, Yuntao Chen, Feng Wang, Xizhou Zhu, Jiapeng Luo, Wenhai Wang, Tong Lu, Hongsheng Li, Yu Qiao, et al. Efficient deformable convnets: Rethinking dynamic and sparse operator for vision applications. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5652-5661, 2024. 3" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 316, + 523, + 555, + 567 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 523, + 555, + 567 + ], + "spans": [ + { + "bbox": [ + 316, + 523, + 555, + 567 + ], + "type": "text", + "content": "[46] Shuning Xu, Binbin Song, Xiangyu Chen, Xina Liu, and Jiantao Zhou. Image demoiring in raw and srgb domains. In European Conference on Computer Vision, pages 108-124. Springer, 2025. 
7" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 316, + 568, + 555, + 622 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 568, + 555, + 622 + ], + "spans": [ + { + "bbox": [ + 316, + 568, + 555, + 622 + ], + "type": "text", + "content": "[47] Qiaosi Yi, Juncheng Li, Qinyan Dai, Faming Fang, Guixu Zhang, and Tieyong Zeng. Structure-preserving deraining with residue channel prior guidance. In Proceedings of the IEEE/CVF international conference on computer vision, pages 4238-4247, 2021. 7" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 316, + 624, + 555, + 678 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 624, + 555, + 678 + ], + "spans": [ + { + "bbox": [ + 316, + 624, + 555, + 678 + ], + "type": "text", + "content": "[48] Xin Yu, Peng Dai, Wenbo Li, Lan Ma, Jiajun Shen, Jia Li, and Xiaojuan Qi. Towards efficient and scale-robust ultra-high-definition image demoiring. In Proceedings of the European Conference on Computer Vision, pages 646–662. Springer, 2022. 2, 3, 6, 7" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 316, + 680, + 555, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 680, + 555, + 713 + ], + "spans": [ + { + "bbox": [ + 316, + 680, + 555, + 713 + ], + "type": "text", + "content": "[49] Shanxin Yuan, Radu Timofte, Gregory Slabaugh, Ales Leonardis, Bolun Zheng, Xin Ye, Xiang Tian, Yaowu Chen, Xi Cheng, Zhenyong Fu, et al. Aim 2019 challenge on image" + } + ] + } + ], + "index": 28 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "type": "text", + "content": "12740" + } + ] + } + ], + "index": 30 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 296, + 518 + ], + "type": "list", + "angle": 0, + "index": 9, + "blocks": [ + { + "bbox": [ + 76, + 72, + 296, + 106 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 76, + 72, + 296, + 106 + ], + "spans": [ + { + "bbox": [ + 76, + 72, + 296, + 106 + ], + "type": "text", + "content": "demoireing: Methods and results. In 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pages 3534-3545. IEEE, 2019. 6, 7" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 107, + 296, + 162 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 107, + 296, + 162 + ], + "spans": [ + { + "bbox": [ + 56, + 107, + 296, + 162 + ], + "type": "text", + "content": "[50] Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Ming-Hsuan Yang, and Ling Shao. Multi-stage progressive image restoration. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 14821-14831, 2021. 7" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 163, + 296, + 228 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 163, + 296, + 228 + ], + "spans": [ + { + "bbox": [ + 56, + 163, + 296, + 228 + ], + "type": "text", + "content": "[51] Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, and Ming-Hsuan Yang. Restormer: Efficient transformer for high-resolution image restoration. 
In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5728-5739, 2022. 2, 3, 7" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 56, + 229, + 296, + 284 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 229, + 296, + 284 + ], + "spans": [ + { + "bbox": [ + 56, + 229, + 296, + 284 + ], + "type": "text", + "content": "[52] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pages 586-595, 2018. 6" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 56, + 285, + 296, + 329 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 285, + 296, + 329 + ], + "spans": [ + { + "bbox": [ + 56, + 285, + 296, + 329 + ], + "type": "text", + "content": "[53] Bolun Zheng, Shanxin Yuan, Gregory Slabaugh, and Ales Leonardis. Image demoiring with learnable bandpass filters. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3636-3645, 2020. 7" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 56, + 331, + 296, + 373 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 331, + 296, + 373 + ], + "spans": [ + { + "bbox": [ + 56, + 331, + 296, + 373 + ], + "type": "text", + "content": "[54] Raymond Zhou, Shahrukh Athar, Zhongling Wang, and Zhou Wang. Deep image debanding. In Proceedings of the IEEE International Conference on Image Processing, pages 1951-1955. IEEE, 2022. 6, 7, 8" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 56, + 375, + 296, + 430 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 375, + 296, + 430 + ], + "spans": [ + { + "bbox": [ + 56, + 375, + 296, + 430 + ], + "type": "text", + "content": "[55] Shihao Zhou, Duosheng Chen, Jinshan Pan, Jinglei Shi, and Jufeng Yang. Adapt or perish: Adaptive sparse transformer with attentive feature refinement for image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2952-2963, 2024. 5" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 56, + 431, + 296, + 474 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 431, + 296, + 474 + ], + "spans": [ + { + "bbox": [ + 56, + 431, + 296, + 474 + ], + "type": "text", + "content": "[56] Xizhou Zhu, Han Hu, Stephen Lin, and Jifeng Dai. Deformable convnets v2: More deformable, better results. In Proceedings of the IEEE/CVF conference on Computer Vision and Pattern Recognition, pages 9308-9316, 2019. 3" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 56, + 475, + 296, + 518 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 475, + 296, + 518 + ], + "spans": [ + { + "bbox": [ + 56, + 475, + 296, + 518 + ], + "type": "text", + "content": "[57] Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, and Jifeng Dai. Deformable detr: Deformable transformers for end-to-end object detection. 
arXiv preprint arXiv:2010.04159, 2020.3" + } + ] + } + ], + "index": 8 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 749, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 749, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 749, + 317, + 757 + ], + "type": "text", + "content": "12741" + } + ] + } + ], + "index": 10 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 10 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2025/A3_ Few-shot Prompt Learning of Unlearnable Examples with Cross-Modal Adversarial Feature Alignment/641deceb-eb2b-45b8-97bb-8e2f7fe97a5e_content_list.json b/2025/A3_ Few-shot Prompt Learning of Unlearnable Examples with Cross-Modal Adversarial Feature Alignment/641deceb-eb2b-45b8-97bb-8e2f7fe97a5e_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..ba895e43a46110020a212cdfa3e68994e128aaa4 --- /dev/null +++ b/2025/A3_ Few-shot Prompt Learning of Unlearnable Examples with Cross-Modal Adversarial Feature Alignment/641deceb-eb2b-45b8-97bb-8e2f7fe97a5e_content_list.json @@ -0,0 +1,1739 @@ +[ + { + "type": "text", + "text": "A3: Few-shot Prompt Learning of Unlearnable Examples with Cross-Modal Adversarial Feature Alignment", + "text_level": 1, + "bbox": [ + 212, + 128, + 784, + 176 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Xuan Wang*", + "bbox": [ + 181, + 204, + 285, + 220 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Anhui Key Lab of CSSAE, National University of Defense Technology wangxuan21d@nudt.edu.cn", + "bbox": [ + 114, + 220, + 344, + 253 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Tianrui Qin", + "bbox": [ + 194, + 258, + 290, + 275 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "OPPO Research Institute qintianrui123@gmail.com", + "bbox": [ + 151, + 276, + 334, + 297 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Xitong Gao*†", + "bbox": [ + 468, + 203, + 576, + 220 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Shenzhen Institutes of Advanced Technology, CAS; Shenzhen University of Advanced Technology xt.gao@siat.ac.cn", + "bbox": [ + 382, + 220, + 656, + 253 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Yu-liang Lu†", + "bbox": [ + 442, + 258, + 545, + 276 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Anhui Key Lab of CSSAE, National University of Defense Technology publicLuYL@126.com", + "bbox": [ + 377, + 276, + 609, + 308 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Dongping Liao", + "bbox": [ + 727, + 204, + 849, + 220 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "State Key Lab of IoTSC, CIS Dept, University of Macau yb97428@um.edu.mo", + "bbox": [ + 694, + 220, + 882, + 253 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Cheng-zhong Xu", + "bbox": [ + 679, + 258, + 818, + 275 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "State Key Lab of IoTSC, CIS Dept, University of Macau czxu@um.edu, mo", + "bbox": [ + 653, + 275, + 841, + 308 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 246, + 340, + 326, + 356 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "In the age of pervasive machine learning applications, protecting digital content from unauthorized use has become a pressing concern. 
Unlearnable examples (UEs)—data modified with imperceptible perturbations to inhibit model training while preserving human usability—have emerged as a promising approach. However, existing UE methods assume unauthorized trainers have extensive exposure to UEs or that models are trained from scratch, which may not hold in practical scenarios. This paper investigates the effectiveness of UEs under the few-shot learning paradigm, pitching it against prompt learning (PL) models that leverage pretrained vision-language models (VLMs), like CLIP, capable of generalizing to new classes with minimal data. To address this, we introduce an adaptive UE framework to generate unlearnable examples that specifically target the PL process. In addition, we propose a novel UE countermeasure, $A^3$ , with cross-modal adversarial feature alignment, specifically designed to circumvent UEs under few-shot PL. Experimental evaluations on 7 datasets show that $A^3$ outperforms existing PL methods, achieving up to $33\\%$ higher performance in learning from UEs. For example, in the scenario involving $\\ell_{\\infty}$ -bounded EM perturbations, $A^3$ has an average harmonic mean accuracy across 7 datasets of $82.43\\%$ , compared to CoCoOp's baseline of $65.47\\%$ . Our findings highlight the limitations of existing UEs against PL and lay the foundation for future data protection mechanisms.", + "bbox": [ + 89, + 373, + 483, + 765 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1. Introduction", + "text_level": 1, + "bbox": [ + 91, + 794, + 220, + 809 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "In the era of pervasive machine learning applications, protecting digital content from unauthorized use is an escalating concern. An emerging solution involves unlearnable ex", + "bbox": [ + 89, + 819, + 485, + 864 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "amples (UEs) [9, 11, 29, 36, 40]. — data modified with imperceptible perturbations that prevent machine learning models from effectively learning and generalizing from it, while preserving its utility for human observers. Unlike traditional data poisoning attacks intended for malicious use, UEs serve content creators by providing a way to inhibit unauthorized model training. Beyond this, UEs can also be used to shed light on vulnerabilities and learning preferences [36] of deep learning models, and prevent unlawful use of personal features [24]. However, existing UE methods are primarily designed for models trained from scratch, and make strong assumptions where all or a large proportion of training data is used unknowingly by unauthorized trainers. These assumptions may not hold in the wilderness, for several reasons: (a) Creators may release limited data. (b) Unauthorized trainers may have limited exposure to the UEs: they may curate their training data from various sources and may only use a small fraction of the creator's data. (c) Trainers may leverage pretrained models to improve training efficiency, and to generalize well to new classes and contexts.", + "bbox": [ + 511, + 342, + 906, + 645 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "In this paper, we found that recent advances in prompt learning (PL) with pretrained vision-language models (VLMs) can indeed challenge the robustness of UEs. VLMs use contrastive learning to align images and text features, enabling strong zero-shot and downstream tasks [15, 30, 32, 35]. 
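For intuition, zero-shot classification with such a VLM reduces to a similarity lookup between one image embedding and a set of class-prompt embeddings. The following is a minimal sketch against the interface of the open-source CLIP package; the backbone name, prompt template, and label set here are illustrative choices, not this paper's configuration:

```python
import clip
import torch
from PIL import Image

# Load a released CLIP backbone and its paired preprocessing transform.
model, preprocess = clip.load("ViT-B/16", device="cpu")

class_names = ["cat", "dog"]  # hypothetical label set
text = clip.tokenize([f"a photo of a {c}" for c in class_names])
image = preprocess(Image.new("RGB", (224, 224))).unsqueeze(0)  # placeholder image

with torch.no_grad():
    img_feat = model.encode_image(image)
    txt_feat = model.encode_text(text)
    # Cosine similarity in the shared embedding space decides the class.
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
    predicted = (img_feat @ txt_feat.T).argmax(dim=-1).item()
```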
PL further adapts CLIP by fine-tuning prompts instead of model weights, making it ideal for data-limited scenarios, and novel tasks and classes. This paper thus investigates a central question: Are UEs effective in protecting data against PL-enabled models? This question has profound implications from both the content creator's and the unauthorized trainer's perspectives.", + "bbox": [ + 511, + 651, + 908, + 833 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "For content creators, understanding this question is crucial for several reasons: (a) To effectively prevent unauthorized usage, creators need to know the minimum amount of modified data required to maintain protection. (b) Content", + "bbox": [ + 511, + 839, + 908, + 900 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "CVF", + "bbox": [ + 106, + 2, + 181, + 42 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore.", + "bbox": [ + 236, + 0, + 810, + 46 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "*Equal contribution.", + "bbox": [ + 107, + 875, + 218, + 887 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "$\\dagger$ Correspondence to Xitong Gao and Yu-liang Lu.", + "bbox": [ + 109, + 887, + 372, + 900 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "9507", + "bbox": [ + 480, + 944, + 514, + 955 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "creators typically control and release a limited quantity of data, making it impractical to assume access to large datasets. This constraint naturally leads to a few-shot scenario, which is the focus of this study.", + "bbox": [ + 89, + 90, + 483, + 151 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "For unauthorized trainers, PL represents an appealing tool to bypass UEs, as PL exploits the generalization strengths of VLMs: (a) The pretrained encoders of VLMs enables it to generalize well to novel classes, potentially circumventing perturbations that would normally deter training from scratch. (b) While existing methods to circumvent UEs typically involve adversarial training [19], or image augmentations [16, 25], which affect only the image seen by the model. PL may be able to enhance its robustness by incorporating text augmentations, offering a broader strategy to bypass unlearnability protections.", + "bbox": [ + 89, + 152, + 483, + 318 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "To address these challenges, we propose an adaptive framework that targets UEs in the few-shot PL setting. The contributions of this paper are as follows:", + "bbox": [ + 89, + 319, + 483, + 364 + ], + "page_idx": 1 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- We introduced a scenario designed to examine the effectiveness of UEs against prompt learning, particularly in a few-shot context where data availability is constrained. Beyond existing UE methods, we introduced an adaptive UE framework that incorporates PL-specific considerations for surrogate-based UEs, generating stronger UEs that are more effective against PL.", + "- We propose a novel method, $\\mathrm{A}^3$ , which employs cross-modal adversarial augmented feature alignment to enhance PL's ability to generalize when learned from UEs. 
This method adversarially aligns diversely-augmented image and text augmentations to make PL robust against UEs.", + "- Experimental results demonstrate that $\\mathrm{A}^3$ achieves significant performance gains over existing methods, proving more effective against other UE methods in few-shot scenarios, even when faced with larger perturbations, and partial poisoning. $\\mathrm{A}^3$ also generalizes well to novel classes." + ], + "bbox": [ + 89, + 366, + 483, + 622 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "This work offers new insights into the capabilities and limitations of UEs against PL, laying a foundation for more robust data protection strategies in the era of knowledge transfer with large pretrained models and multimodal machine learning.", + "bbox": [ + 89, + 623, + 483, + 699 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2. Related Work & Preliminaries", + "text_level": 1, + "bbox": [ + 89, + 715, + 370, + 731 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2.1. Unlearnable Examples", + "text_level": 1, + "bbox": [ + 89, + 741, + 302, + 758 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "The primary goal of unlearnable examples [9, 11, 29, 33, 36, 40] is to safeguard the privacy and copyright of content providers by adding small, human-imperceptible perturbations to data. These perturbations prevent machine learning models from effectively generalizing to the data's original distribution. Unlike traditional data poisoning attacks [10], which aim to introduce backdoor patterns into a model, unlearnable examples are not intended for malicious purposes but solely to protect data from unauthorized use.", + "bbox": [ + 89, + 763, + 485, + 900 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Definition of Unlearnable Examples Consider a dataset with $N$ clean samples $\\mathcal{D}_{\\mathrm{clean}} = \\{(x_i,y_i)\\}_{i = 1}^N$ , where $\\mathbf{x}_i\\in \\mathcal{X} = [0,1]^{C\\times H\\times W}$ and $y_{i}\\in \\mathcal{V} = \\{1,\\dots ,K\\}$ represent the $i^{\\mathrm{th}}$ input sample with $C$ channels and $H\\times W$ spatial dimensions, and its corresponding true label. Each sample is drawn from a distribution $\\mathcal{S}$ . The content provider aims to add small perturbations $\\delta_{i}\\in \\mathcal{B}_{p}(\\mathbf{x}_{i},\\epsilon)$ to the clean samples in $\\mathbf{x}_i\\in \\mathcal{D}_{\\mathrm{clean}}$ to generate unlearnable examples $\\mathcal{D}_{\\mathrm{ue}}(\\boldsymbol {\\delta})\\triangleq \\{(\\mathbf{x}_i + \\boldsymbol {\\delta}_i,y_i)\\mid (\\mathbf{x}_i,y_i)\\in \\mathcal{D}_{\\mathrm{clean}}\\}$ . The set $\\mathcal{B}_p(\\mathbf{x}_i,\\epsilon)$ is:", + "bbox": [ + 511, + 90, + 906, + 229 + ], + "page_idx": 1 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal {B} _ {p} \\left(\\mathbf {x} _ {i}, \\epsilon\\right) \\triangleq \\left\\{\\mathbf {d} \\mid \\| \\mathbf {d} \\| _ {p} \\leq \\epsilon , \\mathbf {x} _ {i} + \\mathbf {d} \\in \\mathcal {X} \\right\\}. \\tag {1}\n$$\n", + "text_format": "latex", + "bbox": [ + 566, + 237, + 906, + 256 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "It bounds the noise $\\delta_{i}$ of each sample $\\mathbf{x}_i$ within the $\\epsilon$ -ball of $\\ell_p$ -distance with respect to the sample, and the perturbed sample $\\mathbf{x}_i + \\delta_i$ remain within the input domain $\\mathcal{X}$ . 
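In practice, membership in $\mathcal{B}_p(\mathbf{x}_i, \epsilon)$ is enforced by projecting the noise back into the ball after every update step. The following is a minimal PyTorch-style sketch of this projection for $p \in \{2, \infty\}$; the function and variable names are ours, not the paper's:

```python
import torch

def project_to_ball(x, delta, eps, p="inf"):
    """Project delta onto B_p(x, eps) from Eq. (1): bound its l_p norm by eps
    and keep the perturbed image x + delta inside the input domain [0, 1]."""
    if p == "inf":
        delta = delta.clamp(-eps, eps)                    # ||delta||_inf <= eps
    elif p == 2:
        flat = delta.flatten(start_dim=1)                 # one vector per sample
        norms = flat.norm(p=2, dim=1, keepdim=True).clamp_min(1e-12)
        scale = (eps / norms).clamp(max=1.0)              # shrink only if outside
        delta = (flat * scale).view_as(delta)
    return (x + delta).clamp(0.0, 1.0) - x                # x + delta stays in X
```

The same projection is reused by every optimization-based method below; only the objective producing the gradient on $\boldsymbol{\delta}$ changes.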
A small $\\epsilon$ is crucial to ensure that the perturbations do not significantly alter the original content, thus preserving the data's utility, and typically $p \\in \\{0,2,\\infty\\}$ .", + "bbox": [ + 511, + 262, + 906, + 352 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "When this perturbed dataset is used for training, the goal is for the resulting model to generalize poorly to the original distribution $S$ . The optimization for the noise can be formulated as the following bi-level optimization problem to solve for the bounded perturbations $\\delta \\triangleq \\{\\delta_i \\in \\mathcal{B}_p(\\mathbf{x}_i, \\epsilon)\\}_{i=1}^N$ :", + "bbox": [ + 511, + 353, + 906, + 429 + ], + "page_idx": 1 + }, + { + "type": "equation", + "text": "\n$$\n\\max _ {\\delta} \\mathbb {E} _ {\\left(\\mathbf {x} _ {i}, y _ {i}\\right) \\sim \\mathcal {S}} \\left[ \\mathcal {L} \\left(f _ {\\boldsymbol {\\theta} ^ {*} (\\delta)} \\left(\\mathbf {x} _ {i}\\right), y _ {i}\\right) \\right], \\tag {2}\n$$\n", + "text_format": "latex", + "bbox": [ + 586, + 436, + 906, + 454 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "where $f_{\\theta}:\\mathcal{X}\\to \\mathbb{R}^{K}$ denotes the model with parameters $\\pmb{\\theta}$ , $\\mathcal{L}:\\mathbb{R}^K\\times \\mathcal{V}\\rightarrow \\mathbb{R}$ is the loss function (typically cross-entropy), and $\\pmb{\\theta}^{\\star}$ represents the model parameters optimized on the perturbed images:", + "bbox": [ + 511, + 459, + 906, + 521 + ], + "page_idx": 1 + }, + { + "type": "equation", + "text": "\n$$\n\\boldsymbol {\\theta} ^ {\\star} (\\boldsymbol {\\delta}) = \\operatorname {a r g m i n} _ {\\boldsymbol {\\theta}} \\mathbb {E} _ {(\\mathbf {x} _ {i}, y _ {i}) \\sim \\mathcal {D} _ {\\text {c l e a n}}} [ \\mathcal {L} (f _ {\\boldsymbol {\\theta}} (\\mathbf {x} _ {i} + \\boldsymbol {\\delta} _ {i}), y _ {i}) ]. \\tag {3}\n$$\n", + "text_format": "latex", + "bbox": [ + 524, + 527, + 906, + 547 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "As the above problem is intractable, many works have proposed alternative methods to approximate the solution, commonly involving surrogate models:", + "bbox": [ + 511, + 553, + 906, + 598 + ], + "page_idx": 1 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Hypocritical perturbations (HYPO) [33] assumes a surrogate model $g_{\\theta} : \\mathcal{X} \\to \\mathbb{R}^{K}$ with pretrained weights $\\theta$ learned on samples from $S$ , and directly finds the perturbations $\\delta$ that makes the model easily produce correct predictions for $\\mathcal{D}_{\\mathrm{clean}}$ images.", + "- Error maximization (EM) [11] further considers a randomly-initialized surrogate $g_{\\theta}$ , and optimizes the noise $\\delta$ , and the surrogate model $g_{\\theta}$ simultaneously.", + "- Robust error maximization (REM) [9] extends EM to optimize the surrogate $g_{\\theta}$ under adversarial training [19], where the adversarial noise is also bounded within the $\\epsilon$ -ball of $\\ell_p$ -norm, and optimized via projected gradient descent (PGD) [19]. This helps to improve the effectiveness of the perturbations even under adversarial training." + ], + "bbox": [ + 513, + 599, + 906, + 809 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Interestingly, recent works have shown that unlearnable examples can also be curated without the need for optimization, where the perturbations form a linearly-separable subspace that can be learned easily by the model. 
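Before turning to these optimization-free alternatives, the inner perturbation update shared by the surrogate-based methods above can be sketched as a signed-gradient (PGD-style) loop. This illustrates the common pattern rather than any author's implementation: EM-style error-minimizing noise descends the surrogate's loss, while the adversarial inner step of REM ascends it:

```python
import torch

def update_noise(model, loss_fn, x, y, delta, eps,
                 alpha=2 / 255, steps=10, minimize=True):
    """Sketch of the inner PGD loop over the UE noise delta (l_inf case)."""
    direction = -1.0 if minimize else 1.0
    for _ in range(steps):
        delta = delta.detach().requires_grad_(True)
        loss = loss_fn(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = delta + direction * alpha * grad.sign()   # signed gradient step
        delta = delta.clamp(-eps, eps)                    # project onto the eps-ball
        delta = (x + delta).clamp(0.0, 1.0) - x           # stay in the input domain
    return delta.detach()
```

In EM and REM this loop alternates with ordinary training steps of the surrogate $g_{\boldsymbol{\theta}}$ on the currently perturbed data.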
This bias is so strong that it makes the underlying features less learnable by model training algorithms:", + "bbox": [ + 511, + 810, + 906, + 900 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "9508", + "bbox": [ + 480, + 944, + 514, + 955 + ], + "page_idx": 1 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Linearly-separable perturbations (LSP) [40] generates random color patches as perturbations, and apply them to the images while ensuring the added noise is bounded within a small $\\ell_2$ -distance from the original image. This simple method can enable strong unlearnable examples without the expensive optimization process and the need for surrogate models.", + "- Autoregressive Poisoning (AR) [29] Similar to LSP, AR prescribes a simple perturbation strategy which first fills all channels of the first 2 rows and columns of the image with Gaussian noise, then uses an autoregressive process to fill the remaining pixels with a $3 \\times 3$ sliding window. It then re-scales the perturbations to be within the noise bound $\\mathcal{B}_p(\\mathbf{x}_i, \\epsilon)$ , before adding them to the image $\\mathbf{x}_i$ .", + "- One-pixel Shortcuts (OPS) [36] For each image belonging to a specific class, OPS searches for an optimal pixel and color value for the class, such that it results in the largest change in the pixel's color value for all perturbed images. This constitutes a simple $\\ell_0$ -bounded perturbation where only one pixel is modified for each image. Surprisingly, when training models from scratch, OPS can generate even stronger unlearnable examples than EM with a 8/255 noise budget [25, 36]. It also resists even $\\ell_{\\{2,\\infty\\}}$ -bounded adversarial training, as the noise added by adversarial training cannot effectively perturb the pixel values for the erasure of $\\ell_0$ -perturbations." + ], + "bbox": [ + 89, + 90, + 480, + 483 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "As $\\mathsf{A}^3$ considers the problem of learning from unlearnable examples from the perspective of few-shot learning with prompt learning (PL), we provide a framework that adapts the above methods to this setting, by making the surrogate models $g$ our prompt learners. It also shows that PL can be in a certain degree effective against unlearnable examples produced by these methods.", + "bbox": [ + 89, + 484, + 482, + 590 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "2.2. Learning from Unlearnable Examples", + "text_level": 1, + "bbox": [ + 89, + 603, + 416, + 619 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "The emergence of unlearnable examples prompts investigation into the mechanisms that unauthorized trainers might exploit to extract useful features.", + "bbox": [ + 89, + 626, + 482, + 671 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Adversarial Training (AT) [2, 19] involves the generation of adversarial examples [4, 12, 41] specifically tailored to the model under training, which are in turn used to train the model to enhance the model's robustness. It also has been known to be an effective approach to improve the model generalization when trained on unlearnable examples [11]. 
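As a rough picture of what this entails, one $\ell_{\infty}$-bounded adversarial training step can be sketched as follows; the hyperparameters and names are illustrative, not prescriptive:

```python
import torch

def adversarial_training_step(model, optimizer, loss_fn, x, y,
                              eps=8 / 255, alpha=2 / 255, k=7):
    """Sketch: fit the model on worst-case eps-bounded views of the batch."""
    eta = torch.zeros_like(x)
    for _ in range(k):                                    # inner maximization (PGD)
        eta = eta.detach().requires_grad_(True)
        loss = loss_fn(model((x + eta).clamp(0, 1)), y)
        grad, = torch.autograd.grad(loss, eta)
        eta = (eta + alpha * grad.sign()).clamp(-eps, eps)
    optimizer.zero_grad()
    loss = loss_fn(model((x + eta.detach()).clamp(0, 1)), y)  # outer minimization
    loss.backward()
    optimizer.step()
    return loss.item()
```

The $k$ extra forward-backward passes per batch are the main source of the computational cost discussed next.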
However, adversarial training is known to be computationally expensive, and also affects the model's performance on clean data [38], especially when the sample size is small [5].", + "bbox": [ + 89, + 672, + 482, + 808 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "For this reason, Image Shortcut Squeezing (ISS) [16] introduces a suite of simple image processing methods can show surprising effectiveness in mitigating the impact of unlearnable examples, without the costs associated with adversarial training. Grayscale removes the color information from the training images, and JPEG Compression", + "bbox": [ + 89, + 810, + 482, + 900 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "(JPEG) instead performs a lossy JPEG with high compression rate on the training images. However, it was recently discovered [26] that simple image processing is much less effective against adaptively-optimized unlearnable examples. Building upon this idea, UEraser [25] further proposes a stochastic augmentation pipeline with a wider range of transformations, and uses a simple adversarial augmentation to optimize models only on augmented images with the maximum loss. This allows the model to learn the underlying features without being affected by the easily learnable shortcuts in unlearnable examples. While all these methods have shown effectiveness against unlearnable examples, they consider the problem of training models from scratch rather than leveraging pretrained models.", + "bbox": [ + 511, + 90, + 906, + 303 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "2.3. Prompt Learning", + "text_level": 1, + "bbox": [ + 511, + 311, + 684, + 327 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Vision-language pretrained models (VLMs), such as CLIP [27] and ALIGN [13], represent a significant advancement in multi-modal learning. Trained on extensive image-text pair datasets, The VLM have two main components: an image encoder $f_{\\mathrm{im}}: \\mathcal{X} \\to \\mathbb{R}^d$ and a text encoder $f_{\\mathrm{tx}}: \\mathcal{W} \\to \\mathbb{R}^d$ , where $\\mathcal{X}$ and $\\mathcal{W}$ are the input image and text, respectively, and learn a shared embedding space $\\mathbb{R}^d$ for images and texts. Image and text pairs that are semantically similar will have similar embeddings in this space, and vice versa. This makes them versatile for a wide range of downstream tasks, including image classification [27], captioning [1], retrieval [18], and providing guidance for image generation [7, 28]. Notably, VLMs show impressive zero-shot performance, where they can perform well on new tasks without task-specific training, showcasing their generalization capabilities.", + "bbox": [ + 511, + 333, + 906, + 559 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Prompt Engineering for VLMs seeks to use VLMs for image classification tasks by constructing class-specific text prompts, e.g., \"a photo of a c1s\" for the class c1s, and comparing the model's similarity scores between the features of these prompts and the target image. The class yielding the highest similarity score is selected as the predicted class for the image. Formally, prompt engineering creates $M$ text prompts, embedded as a tensor $\\mathbf{V} = \\{\\mathbf{v}_m\\}_{m=1}^M \\in \\mathbb{R}^{M \\times T \\times h}$ , where $\\mathbf{v}_m$ denotes an embedded prompt prefix sequence of $T$ tokens. 
By appending each prefix $\\mathbf{v}_i$ with $\\mathbf{c}_k \\in \\mathbb{R}^h$ , the constant embedding vector of the $k^{\\text{th}}$ class, the model can thus construct a holistic classifier $h_\\phi: \\mathcal{X} \\to \\mathbb{R}^K$ by averaging the similarity scores across all $M$ prompts, where $\\phi = \\{\\theta, \\mathbf{V}, \\ldots\\}$ denotes all parameters in $h_\\phi$ , consisting of the pretrained CLIP weights $\\pmb{\\theta}$ , a manually-designed prompt embedding $\\mathbf{V}$ , and other potential parameters used by the prompt learning algorithm. For the $k^{\\text{th}}$ class, we can obtain its logit as follows:", + "bbox": [ + 511, + 560, + 908, + 832 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\nh _ {\\phi} (\\mathbf {x}) _ {k} = \\frac {1}{M} \\sum_ {m = 1} ^ {M} \\operatorname {s i m} \\left(f _ {\\mathrm {i m}} (\\mathbf {x}), f _ {\\mathrm {t x}} \\left(\\left[ \\mathbf {v} _ {m}, \\mathbf {c} _ {k} \\right]\\right)\\right). \\tag {4}\n$$\n", + "text_format": "latex", + "bbox": [ + 547, + 840, + 906, + 861 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Here, sim denotes the similarity function used to compute the closeness between the image and text features, typically", + "bbox": [ + 511, + 869, + 906, + 900 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "9509", + "bbox": [ + 480, + 944, + 514, + 955 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "the cosine similarity. The $k^{\\mathrm{th}}$ class probabilities can thus be computed using the softmax function, where $\\tau$ is the softmax temperature and is usually set to 1:", + "bbox": [ + 89, + 90, + 485, + 137 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\np (y = k \\mid \\mathbf {x}, \\phi) = e ^ {h _ {\\phi} (\\mathbf {x}) _ {k} / \\tau} / \\sum_ {j = 1} ^ {K} e ^ {h _ {\\phi} (x) _ {j} / \\tau}. \\quad (5)\n$$\n", + "text_format": "latex", + "bbox": [ + 125, + 146, + 483, + 167 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Prompt Learning (PL) While the above zero-shot method is effective for many tasks, it is limited by the quality of the manually-designed prompts and may require extensive labor and expert knowledge to construct. In contrast, PL aims to automatically learn the prompt embeddings $\\mathbf{V}$ , and possibly other parameters in $\\phi$ , by optimizing them during training. To improve downstream task performance, CoOp [43] learns the prompt embeddings $\\mathbf{V}$ , show that even with few-shot examples, can generate better prompts than manual designs, and generalize well to unseen tasks. CoCoOp [42] builds upon CoOp by introducing a trainable meta-net to learn to generate prompt embeddings from the extracted image features, in order to improve the model's performance on unseen tasks. KgCoOp [39] further regularizes the prompt embeddings to be close to the initial handcrafted prompts, showing that by retaining proximity to the original prompts, unseen tasks can be generalized better. Finally, ProDA [17] uses Gaussian to model the prompt embedding distribution, and encourages orthogonality among the prompt embeddings. This paper presents the first work to highlight that prompt learning can be effective in learning useful features from unlearnable examples, even under few-shot scenarios.", + "bbox": [ + 91, + 176, + 483, + 508 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3. 
The $\\mathbf{A}^3$ Method", + "text_level": 1, + "bbox": [ + 89, + 522, + 243, + 539 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.1. Adaptive UEs Targeting PL", + "text_level": 1, + "bbox": [ + 89, + 547, + 339, + 565 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Surrogate-based methods (EM [11], REM [9], and HYPO [33]) for synthesizing UEs typically train models from scratch, and do not assume low data availability, which makes them an unsuitable choice for affecting the PL process of the pretrained CLIP model (Table 1). To address this limitation, we first introduce an adaptive approach for all methods that uses the PL process as the surrogate model. To make the UEs stronger, we also assume that the UEs are synthesized with the same PL method using the same pretrained CLIP model. We implemented adaptive-variations of EM, REM, and HYPO. As they are stronger UEs than the original methods, our experiments by default use these adaptive variants. Recall REM [9] in Section 2, we adapt its objective to PL, specifically CoCoOp [42] in this paper, by using the holistic classifier $h_{\\phi}$ in (4) as the surrogate model. The REM objective to seek $\\delta$ under PL is thus:", + "bbox": [ + 89, + 570, + 485, + 811 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\min _ {(\\boldsymbol {\\delta}, \\mathbf {V}, \\dots)} \\max _ {\\boldsymbol {\\eta}} \\mathbb {E} _ {(\\mathbf {x} _ {i}, y _ {i}) \\sim \\mathcal {D} _ {\\text {c l e a n}}} [ \\mathcal {L} (h _ {\\phi} (\\mathbf {x} _ {i} + \\boldsymbol {\\delta} _ {i} + \\boldsymbol {\\eta}), y _ {i}) ], \\tag {6}\n$$\n", + "text_format": "latex", + "bbox": [ + 94, + 823, + 483, + 854 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $\\pmb {\\eta}\\in \\mathcal{B}_p(\\mathbf{x}_i,\\epsilon)$ is the $\\epsilon$ -bounded $\\ell_p$ -norm noise.", + "bbox": [ + 89, + 854, + 441, + 869 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "In this adaptive context, EM [11] simplifies the above REM objective by removing the inner maximization over $\\pmb{\\eta}$", + "bbox": [ + 89, + 869, + 483, + 901 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "and $\\eta = 0$ . Similarly, HYPO [33] further assumes that $\\mathbf{V}$ is kept constant to its initial manual design, and searches for the perturbations $\\delta$ directly.", + "bbox": [ + 511, + 90, + 906, + 136 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Surrogate-free methods such as LSP [40], AR [29] and OPS [36] do not rely on a surrogate model training process and directly prescribe the perturbations for a given set of clean examples. Therefore, they do not have an adaptive counterpart. In our experiments, we examine the performance of these methods by directly applying them to the training data used by the prompt learning process.", + "bbox": [ + 511, + 136, + 908, + 243 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.2. An Overview of $\\mathbf{A}^3$", + "text_level": 1, + "bbox": [ + 511, + 252, + 691, + 268 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Figure 1 provides an overview of $\\mathrm{A}^3$ . The overall algorithm is in Algorithm 1 of Appendix A.3. $\\mathrm{A}^3$ provides a pool of diverse image and text augmentation strategies $\\mathcal{A}_{\\mathrm{im}}$ and $\\mathcal{A}_{\\mathrm{tx}}$ . 
For each image-class training pair, it first samples $K_{\\mathrm{im}}$ and $K_{\\mathrm{tx}}$ different image and text augmentation strategies, and applies them to the image and text sample respectively. This results in $K_{\\mathrm{im}} \\times K_{\\mathrm{tx}}$ distinct augmented pairs for each training sample. Following the prompt learning technique of CoCoOp [42], it then optimizes the prompt embeddings $\\mathbf{V}$ and the meta-net weights $\\psi$ to align the pair of augmented samples with the minimum similarity. Intuitively, training prompts with the most dissimilar augmented pairs of image and text features forces the model to learn from the underlying features rather than fixating on spurious correlations typically exploited by UEs.", + "bbox": [ + 511, + 273, + 908, + 502 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.3. Augmentations for Image and Text Modalities", + "text_level": 1, + "bbox": [ + 511, + 512, + 901, + 529 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Image Augmentations As noted by Qin et al. [26], the effectiveness of simple augmentation strategies, such as Grayscale and JPEG compression, is greatly diminished in the context of adaptively synthesized UEs. To address this, we follow the approach of UEraser [25], which proposes an extensive set of image augmentation strategies, including not only standard techniques (e.g., random cropping, rotation, etc.), and more complex strategies including fractal-based transformations [21], and TrivialAugment [20].", + "bbox": [ + 511, + 534, + 908, + 671 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Text Augmentations Since CLIP models can leverage not only image features but also text embeddings, which makes it particularly effective in zero-shot classification tasks, and few-shot learning facilitated by PL algorithms. In our context of using PL as an effective defense against UEs, we also introduce a set of text augmentation strategies, which include techniques such as random token masking and reordering that operates in the discrete token space of the text input, and small random rotations of the text embeddings.", + "bbox": [ + 511, + 671, + 908, + 806 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "For the details of the augmentation strategies used in $\\mathrm{A}^3$ for both image and text, please refer to Appendix A.2.", + "bbox": [ + 511, + 806, + 906, + 838 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.4. Cross-modal Adversarial Feature Alignment", + "text_level": 1, + "bbox": [ + 511, + 848, + 890, + 864 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Given an image $\\mathbf{x}$ and its corresponding label $y$ , we can use the above augmentation strategies to find $K_{\\mathrm{im}}$ augmented", + "bbox": [ + 511, + 869, + 906, + 901 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "9510", + "bbox": [ + 480, + 944, + 514, + 955 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/55935ae4039ef7df53c264ca9a4b6db363a90b7a5a54779c9db26691094a1d0e.jpg", + "image_caption": [ + "Figure 1. An overview of $A^3$ . For each image-class training pair, $A^3$ respectively sample $K_{\\mathrm{im}}$ and $K_{\\mathrm{tx}}$ different image and text augmentation strategies ( $a_{\\mathrm{im}} \\sim \\mathcal{A}_{\\mathrm{im}}$ and $a_{\\mathrm{tx}} \\sim \\mathcal{A}_{\\mathrm{tx}}$ ). 
It then optimizes the prompt embeddings $\\mathbf{V}$ and the meta-net $m_{\\psi}$ for empirical risk minimization by aligning pairs of augmented samples with the minimum similarity (i.e., maximum loss) between the image and text features." + ], + "image_footnote": [], + "bbox": [ + 94, + 88, + 903, + 314 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "images and $K_{\\mathrm{tx}}$ text embeddings, by drawing from the sets of image $a_{\\mathrm{im}} \\sim \\mathcal{A}_{\\mathrm{im}}$ and text $a_{\\mathrm{tx}} \\sim \\mathcal{A}_{\\mathrm{tx}}$ augmentation strategies, and applying them to the image and text embeddings respectively, forming a set of augmented images $\\tilde{\\mathbf{x}} \\triangleq \\{\\tilde{\\mathbf{x}}_i\\}_{i=1}^{K_{\\mathrm{im}}}$ and a set of text embeddings $\\tilde{\\mathbf{t}} \\triangleq \\{\\tilde{\\mathbf{t}}_j\\}_{j=1}^{K_{\\mathrm{tx}}}$ :", + "bbox": [ + 89, + 386, + 483, + 467 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\tilde {\\mathbf {x}} _ {i} = a _ {\\mathrm {i m}} (\\mathbf {x}), \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad a _ {\\mathrm {i m}} \\sim \\mathcal {A} _ {\\mathrm {i m}}, \\text {f o r} i \\in [ 1, \\dots , K _ {\\mathrm {i m}} ],\n$$\n", + "text_format": "latex", + "bbox": [ + 89, + 477, + 483, + 494 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\tilde {\\mathbf {t}} _ {j} = a _ {\\mathrm {t x}} \\left(\\left[ \\mathbf {v} _ {j \\bmod M}, \\mathbf {c} _ {y} \\right]\\right), a _ {\\mathrm {t x}} \\sim \\mathcal {A} _ {\\mathrm {t x}}, \\quad \\text {f o r} j \\in [ 1, \\dots , K _ {\\mathrm {t x}} ], \\tag {7}\n$$\n", + "text_format": "latex", + "bbox": [ + 91, + 496, + 483, + 526 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Here, we note that the text embedding before augmentation $[\\mathbf{v}_{j\\mathrm{mod}M},\\mathbf{c}_y]$ is a concatenation of the prompt embedding $\\mathbf{v}_{j\\mathrm{mod}M}$ and the class embedding $\\mathbf{c}_y$ . Recall that $M$ is the number of prompt embeddings, and the modulus operation $j\\mathrm{mod}M$ is to ensure that the prompt embedding is selected cyclically, if $K_{\\mathrm{tx}}$ exceeds $M$ .", + "bbox": [ + 89, + 527, + 483, + 617 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Using the augmented samples, we can compute the similarity between the image and text features for each pair of augmented samples, assuming $S(\\tilde{\\mathbf{x}},\\tilde{\\mathbf{t}})\\in [-1,1]^{K_{\\mathrm{im}}\\times K_{\\mathrm{tx}}}$ is the (cosine) similarity matrix containing the similarity between each image-text pair of augmented samples. Namely, for the $\\tilde{\\mathbf{x}}_i$ and $\\tilde{\\mathbf{t}}_j$ pair, we have:", + "bbox": [ + 89, + 617, + 483, + 708 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal {S} (\\tilde {\\mathbf {x}}, \\tilde {\\mathbf {t}}) _ {i j} = \\operatorname {s i m} \\left(f _ {\\mathrm {i m}} \\left(\\tilde {\\mathbf {x}} _ {i}\\right), f _ {\\mathrm {t x}} \\left(\\tilde {\\mathbf {t}} _ {j}\\right)\\right), \\tag {8}\n$$\n", + "text_format": "latex", + "bbox": [ + 171, + 718, + 483, + 738 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "and we optimize the trainable weights in $\\phi$ by maximizing the similarity alignment between the least similar image and text augmented features. 
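In code, this hardest-pair selection over the $K_{\mathrm{im}} \times K_{\mathrm{tx}}$ similarity matrix of Eq. (8) is compact. The sketch below assumes pre-computed features for the true class only; the actual objective also scores all other classes through the softmax loss, which is omitted here for brevity:

```python
import torch
import torch.nn.functional as F

def hardest_pair(image_feats, text_feats):
    """Return indices (i, j) of the least-similar augmented image-text pair.

    image_feats: [K_im, d]; text_feats: [K_tx, d]. A simplified view of
    Eq. (8) with cosine similarity."""
    img = F.normalize(image_feats, dim=-1)
    txt = F.normalize(text_feats, dim=-1)
    sims = img @ txt.T                       # S in [-1, 1]^{K_im x K_tx}
    i, j = divmod(int(sims.argmin()), sims.shape[1])
    return i, j                              # minimum similarity ~ maximum loss
```

Only the selected pair contributes to the parameter update, which is what makes the alignment adversarial over augmentations rather than an average over all of them.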
Putting it all together, we have the following min-max problem, which can be optimized using mini-batch stochastic gradient descent (SGD):", + "bbox": [ + 89, + 748, + 483, + 824 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\min _ {\\phi} \\mathbb {E} _ {(\\mathbf {x}, y) \\sim \\mathcal {D} _ {\\mathrm {u e}}} \\left[ \\max _ {(i, j)} \\mathcal {L} \\left(\\mathcal {S} (\\tilde {\\mathbf {x}}, \\tilde {\\mathbf {t}}) _ {i j}, y\\right) \\right], \\tag {9}\n$$\n", + "text_format": "latex", + "bbox": [ + 137, + 834, + 483, + 861 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $\\mathcal{L}$ is the softmax cross-entropy loss, and $\\mathcal{D}_{\\mathrm{ue}}$ is the set of unlearnable training examples.", + "bbox": [ + 89, + 869, + 483, + 901 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.5. More Augmentation Diversity with Meta-Net", + "text_level": 1, + "bbox": [ + 511, + 386, + 895, + 402 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "To further enhance the diversity of the augmented samples, we introduce a meta-net $m_{\\psi}:\\mathbb{R}^d\\to \\mathbb{R}^{M\\times (T - 1)\\times h}$ with trainable weights $\\psi$ , which is a small neural network that learns to predict prompt embeddings $\\mathbf{V}$ from $f_{\\mathrm{im}}(\\tilde{\\mathbf{x}})$ , i.e., the feature extracted from an augmented image by the image encoder. This allows us to generate more diverse augmented text embeddings in addition to the text augmentation strategies. While this can yield $K_{\\mathrm{im}}\\times K_{\\mathrm{tx}}$ different augmented prompt features, with the computational cost of CLIP's feature extraction, we only use the meta-net for a random augmented image for each augmented text embedding. Using the meta-net, the similarity matrix in (8) thus becomes:", + "bbox": [ + 511, + 409, + 906, + 590 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal {S} (\\tilde {\\mathbf {x}}, \\tilde {\\mathbf {t}}) _ {i j} = \\operatorname {s i m} \\left(\\mathbf {p} _ {i}, \\mathbf {q} _ {j}\\right),\n$$\n", + "text_format": "latex", + "bbox": [ + 540, + 604, + 709, + 622 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $\\mathbf{p}_i = f_{\\mathrm{im}}(\\tilde{\\mathbf{x}}_i)$ (10)", + "bbox": [ + 531, + 625, + 906, + 640 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {q} _ {j} = f _ {\\mathrm {t x}} \\left(\\tilde {\\mathbf {t}} _ {j} + m _ {\\psi} \\left(\\mathbf {p} _ {k}\\right)\\right), k \\sim \\mathcal {U} \\{1, K _ {\\mathrm {i m}} \\}.\n$$\n", + "text_format": "latex", + "bbox": [ + 584, + 643, + 862, + 660 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4. Experiments", + "text_level": 1, + "bbox": [ + 511, + 675, + 643, + 693 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Datasets For the main experiments, We evaluate $A^3$ under 7 datasets, including ImageNet [6] Caltech-101 [8], Oxford Flowers-102 [22], Food-101 [3], Oxford-Pets [23], and UCF-101 [31]. These datasets cover various recognition tasks, including classification of generic objects, fine-grained classification, and action recognition. 
We also proportionally resized and cropped all images to $224 \\times 224$ , the input size for the image encoder $f_{\\mathrm{im}}$ .", + "bbox": [ + 511, + 702, + 906, + 824 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Models In our experiments, unless otherwise specified, we use either the ViT-B/16 or ResNet-50 as the backbone for the image encoder $f_{\\mathrm{im}}$ , and the text encoder $f_{\\mathrm{tx}}$ is a Transformer-based model [34]. All pretrained models are obtained from the official CLIP repository [27].", + "bbox": [ + 511, + 825, + 906, + 901 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "9511", + "bbox": [ + 480, + 944, + 513, + 955 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Evaluation For all experiments in this section, unless otherwise specified, we consider a common few-shot learning setup with $S$ labeled training examples per class, i.e., $S$ -shot learning, where $S = 16$ by default. We used a context length $T$ of 4, and $M = 1$ number of prompt embeddings. For all surrogate-based attacks we optimized perturbations $\\delta$ for 15 epochs with the cosine annealing scheduler and a learning rate 0.002. For learning, we adopted the SGD optimizer with a momentum of 0.9 and a weight decay of $5 \\times 10^{-4}$ . We consider the following dataset split protocols:", + "bbox": [ + 89, + 90, + 483, + 241 + ], + "page_idx": 5 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Standard We used the standard train-test splits from [42, 43] to ensure reproducibility. In this setting, all classes are included in the training phase, where each class contains $S$ labeled examples.", + "- Base-to-Novel To better evaluate the model's generalization ability in few-shot scenarios, we also followed [42] to divide each datasets into two equal and non-overlapping groups of classes, where the first group (base) is used for training the prompt learning model and validation, and the second group (novel) which contains unseen classes is also used for performance testing. For this protocol, we reported the test accuracies on the base classes $\\alpha_{\\mathrm{b}}$ , the novel classes $\\alpha_{\\mathrm{n}}$ , and also their harmonic mean $\\alpha_{\\mathrm{h}} = 2 / (\\alpha_{\\mathrm{b}}^{-1} + \\alpha_{\\mathrm{n}}^{-1})$ ." + ], + "bbox": [ + 89, + 243, + 483, + 455 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Unlearnable Example Methods Different methods consider distinct perturbation types and perturbation budgets:", + "bbox": [ + 89, + 455, + 483, + 484 + ], + "page_idx": 5 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Surrogate-based methods such as EM [11], REM [9], and HYPO [33], consider $\\ell_{\\infty}$ -bounded perturbations, with a perturbation budget of $8/255$ . As these methods require surrogate models, we used Section 3.1 to adapt them to the prompt learning setting.", + "- LSP [40] and AR [29] both use the $\\ell_2$ -norm perturbations, but their perturbation budgets are different due to their original setups. LSP assumes a perturbation budget of 1.30, while AR uses 1.00.", + "- OPS [36] is model-agnostic, and uses the $\\ell_0$ -norm perturbations with a perturbation budget of 1 by default." + ], + "bbox": [ + 89, + 486, + 483, + 652 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "For additional details regarding the experimental setup, please refer to Appendix B. We will now present the results and main findings of our experiments below. 
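Before the results, a quick sanity check on the base-to-novel metric defined above; $\alpha_{\mathrm{h}}$ is computed directly from a pair of reported accuracies:

```python
def harmonic_mean(acc_base: float, acc_novel: float) -> float:
    """alpha_h = 2 / (1/alpha_b + 1/alpha_n) from the base-to-novel protocol."""
    return 2.0 / (1.0 / acc_base + 1.0 / acc_novel)

# Zero-shot ImageNet entries reported later in Table 2:
print(round(harmonic_mean(67.50, 60.20), 2))  # -> 63.64, matching the table
```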
Appendix C provides additional results including sensitivity analysis of the hyperparameters, and more adaptive variants.", + "bbox": [ + 89, + 652, + 483, + 729 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.1. Prompt Learning under UEs", + "text_level": 1, + "bbox": [ + 89, + 741, + 346, + 757 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Prompt learning generalizes well to existing UEs. Table 1 compares EM-0 and EM, where EM-0 synthesizes UEs by training ResNet-18 surrogate models from scratch on the Caltech-101 dataset, and EM adapts the EM method to prompt learning using Section 3.1. Notably, we found that UE methods that can effectively thwart [26] supervised learning of small models (e.g., ResNet-18) on small datasets (e.g., CIFAR-10 [14]) are much less effective when transferred to CLIP-based prompt learning algorithms.", + "bbox": [ + 89, + 763, + 483, + 900 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Prompt learning generalizes better with increasing number of shots $S$ . Table 1 also shows that the performance of all prompt learning algorithms increases with the number of shots $S$ , even when all shots are UEs. This trend can be observed across all UE methods, but the adaptive EM method consistently suppresses the performance gains from increasing $S$ the most.", + "bbox": [ + 511, + 90, + 903, + 196 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "CoCoOp and ProDA show increased robustness against UEs, while KgCoOp is the most fragile in Table 1. We speculate that this is because the meta-net used in CoCoOp may be able to absorb the shortcut present in the UEs, and ProDA models the Gaussian distribution of prompt embeddings, making it more robust to input-side perturbations. While trained on clean data, KgCoOp [42] exhibits the best performance, as shown in the \"Clean\" row of Table 1. However, it is notably prone to UEs, showing the largest performance drop when UEs are introduced, especially when using the adaptive EM method. This suggests that KgCoOp, guided by the regularization to be in close proximity to the initial manual prompts, cannot effectively evade crafted UEs by EM based on the initial prompts. Because of the robustness of CoCoOp under our default number of shots ( $S = 16$ ), we chose it as the baseline algorithm for the subsequent experiments.", + "bbox": [ + 511, + 198, + 906, + 455 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Without proper defenses, it is better off not learning from UEs. It is interesting to note that usually, the performance of CoCoOp when trained with UEs is worse than the zero-shot performance. This suggests that UEs can indeed be harmful to the model's performance. This behavior can be observed in Tables 1 and 2.", + "bbox": [ + 511, + 455, + 908, + 546 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.2. Prompt Learning with $\\mathbf{A}^3$", + "text_level": 1, + "bbox": [ + 511, + 556, + 743, + 574 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "$\\mathbf{A}^3$ is very effective (up to $33\\%$ better than CoCoOp) in mitigating UEs. 
In Table 2, we observe that $\\mathrm{A}^3$ consistently outperforms CoCoOp across all datasets, producing $15\\%$ to $33\\%$ higher $\\alpha_{\\mathrm{h}}$ than CoCoOp for surrogate-based methods (EM, REM, HYPO), and $6\\%$ to $30\\%$ higher for surrogate-free methods (LSP, AR, OPS).", + "bbox": [ + 511, + 580, + 905, + 671 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "The image and text augmentations in $\\mathbf{A}^3$ are both crucial for its effectiveness. Table 3 performs an ablation study on the individual contributions of image and text augmentations. We note that using either image-only or text-only augmentations can improve the model's performance, but the combination of both is the most effective, giving large accuracy gains over CoCoOp.", + "bbox": [ + 511, + 671, + 906, + 777 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Simple augmentation strategies fall short of $\\mathbf{A}^3$ 's performance. We also highlight in Table 3 that applying simple augmentation strategies such as Grayscale and JPEG compression on CoCoOp can certainly gain improvement over the CoCoOp baseline. However, they are still outperformed by $\\mathbf{A}^3$ , sometimes with a large margin ( $\\geq 25\\%$ ).", + "bbox": [ + 511, + 777, + 906, + 869 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "A large arsenal of image augmentation strategies also fall short of $\\mathrm{A}^3$ 's performance. Table 3 also shows that while", + "bbox": [ + 511, + 869, + 905, + 900 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "9512", + "bbox": [ + 480, + 944, + 514, + 955 + ], + "page_idx": 5 + }, + { + "type": "table", + "img_path": "images/011bae5734cdd6190c8860c2c423a1bb6d23c0da25c496ba7e50090047bcbd02.jpg", + "table_caption": [ + "Table 1. Test accuracies $(\\%)$ of prompt learning algorithms on an unlearnability-poisoned Caltech-101 dataset under varying number of shots $S$ . Note that \"EM-0\" is the original EM [11] method, and \"EM\" is our adaptive variant (Section 3.1). The image encoder of the CLIP model is ResNet-50, and the zero-shot accuracy is $86.00\\%$ . We also report the average accuracy across unlearnability methods (the \"Avg.\" rows) and prompt learning algorithms (the \"Avg.\" column). We highlight the best prompt learning algorithm against each unlearnability method in bold, and underline the strongest unlearnability methods for each prompt learning algorithm." + ], + "table_footnote": [], + "table_body": "
S | Method | CoOp | CoCoOp | ProDA | KgCoOp | Avg.
2 | EM-0 | 78.23 | 80.45 | 81.91 | 77.86 | 79.61
| EM | 61.78 | 64.13 | 65.05 | 59.26 | 62.56
| OPS | 74.40 | 76.91 | 75.75 | 73.10 | 75.04
| AR | 80.63 | 81.83 | 80.74 | 79.89 | 80.77
| Avg. | 73.76 | 75.83 | 75.86 | 72.53 | 74.50
4 | EM-0 | 83.29 | 85.19 | 86.02 | 80.83 | 83.83
| EM | 68.41 | 70.45 | 70.14 | 64.91 | 68.48
| OPS | 78.80 | 80.88 | 80.15 | 78.07 | 79.48
| AR | 82.54 | 83.70 | 84.41 | 82.66 | 83.33
| Avg. | 78.26 | 80.06 | 80.18 | 76.62 | 78.78
8 | EM-0 | 86.18 | 89.44 | 90.64 | 85.42 | 87.92
| EM | 70.50 | 72.28 | 73.03 | 67.07 | 70.72
| OPS | 80.74 | 83.35 | 82.63 | 79.31 | 81.51
| AR | 85.81 | 88.41 | 88.50 | 83.77 | 86.62
| Avg. | 80.81 | 83.37 | 83.70 | 78.89 | 81.69
16 | EM-0 | 90.76 | 90.85 | 90.48 | 89.60 | 90.42
| EM | 71.42 | 72.83 | 72.10 | 69.34 | 71.42
| OPS | 82.40 | 84.09 | 84.43 | 80.02 | 82.74
| AR | 88.63 | 90.74 | 90.08 | 86.46 | 88.98
| Avg. | 83.30 | 84.63 | 84.27 | 81.36 | 83.39
16 | Clean | 91.20 | 91.70 | 91.60 | 91.80 | 91.58
", + "bbox": [ + 91, + 227, + 486, + 513 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "UEraser [25] is the most effective strategy to learn from UEs among all tested existing methods, the performance of UEraser is still inferior to $\\mathrm{A}^3$ , particularly on HYPO, LSP and AR, with increased perturbation budgets. To preserve image semantics while maximizing augmentation diversity, such image augmentation strategies are often designed with a balance between the two. This trade-off choice may limit the ability to suppress UE perturbations in images. While this is also true for $\\mathrm{A}^3$ , but the additional text augmentation strategies of $\\mathrm{A}^3$ can help to work around this limitation.", + "bbox": [ + 89, + 532, + 483, + 683 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Adversarial training (AT) may not be the most effective defense against UEs. As the settings of AT in Table 3 assume $\\ell_{\\infty}$ -bounded perturbations with $\\epsilon = 8 / 255$ , it notably struggles against large $\\ell_{\\infty}$ perturbation budgets ( $\\epsilon = 16 / 255$ ), and other types of perturbation norm-bounds ( $\\ell_{2}$ and $\\ell_{0}$ ), as it is not designed to handle them. There may also be an intricate balance between accuracy and robustness [38], which could result in a seesaw effect in the performance as the perturbation budget increases.", + "bbox": [ + 89, + 686, + 483, + 821 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Increasing the number of shots $S$ improves $\\mathbf{A}^3$ 's performance. In Table 4, we found that while CoCoOp's performance metrics continue to improve with increasing $S$ , it never surpassed the performance of zero-shot CLIP. This echoes the findings in Table 1. On the other hand, $\\mathbf{A}^3$ im", + "bbox": [ + 89, + 824, + 483, + 900 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/15837c8e21f004714a7e8b20eca8fb9ee5070b5d62224268676d035ff5fbc19f.jpg", + "table_caption": [ + "Table 2. Base-to-novel prompt learning accuracies (\\%) for CoCoOp and $\\mathrm{A}^3$ trained with unlearnable examples under $\\ell_{\\infty}$ -bounded attacks. Rows with \"+\" indicate when $\\mathrm{A}^3$ is applied. $\\alpha_{\\mathrm{b}}$ refers to the model accuracy on poisoned data, while $\\alpha_{\\mathrm{n}}$ refers to the model accuracy on novel classes, excluding the poisoned classes. We also report the harmonic mean $\\alpha_{\\mathrm{h}} = 2 / (\\alpha_{\\mathrm{b}}^{-1} + \\alpha_{\\mathrm{n}}^{-1})$ . For the \"Δ\" column, we report the test accuracy drop for the CoCoOp baseline from the clean training setting, and accuracy gain for $\\mathrm{A}^3$ over CoCoOp. The backbone is VIT-B/16." + ], + "table_footnote": [], + "table_body": "
| ImNet | Caltech | Pets | Flowers | Food | SUN | UCF | Avg. | Δ
Zero-Shot [27]
αb | 67.50 | 92.60 | 87.40 | 67.90 | 82.90 | 64.80 | 66.10 | 75.60
αn | 60.20 | 87.40 | 83.70 | 60.80 | 75.50 | 56.90 | 59.50 | 69.14
αh | 63.64 | 89.00 | 85.52 | 64.80 | 79.05 | 60.60 | 62.81 | 72.20
Baseline (CoCoOp [42])
αb | 75.25 | 96.30 | 94.35 | 92.86 | 90.18 | 78.38 | 80.53 | 86.84
αn | 69.43 | 93.23 | 96.88 | 70.17 | 90.80 | 74.72 | 73.28 | 81.22
αh | 72.39 | 94.74 | 95.58 | 79.74 | 90.34 | 76.28 | 76.61 | 83.67
+αb | 76.03 | 97.95 | 94.06 | 92.41 | 90.68 | 77.59 | 80.25 | 87.00
+αn | 70.47 | 93.80 | 93.18 | 70.29 | 91.29 | 74.83 | 71.89 | 80.82
+αh | 73.09 | 95.73 | 93.63 | 79.74 | 90.98 | 76.14 | 75.88 | 83.60
EM [11] (l∞, ε = 8/255)
αb | 56.47 | 80.68 | 78.44 | 74.40 | 79.93 | 58.90 | 63.71 | 70.36 | -16.48
αn | 43.27 | 74.90 | 76.38 | 51.66 | 78.15 | 53.33 | 52.09 | 61.40 | -19.82
αh | 48.77 | 77.54 | 77.40 | 61.03 | 79.04 | 56.01 | 57.38 | 65.31 | -18.36
+αb | 73.48 | 95.49 | 93.71 | 91.84 | 89.38 | 77.65 | 79.34 | 85.82 | 15.46
+αn | 67.85 | 93.07 | 93.53 | 68.08 | 88.00 | 73.17 | 71.69 | 79.30 | 17.90
+αh | 70.41 | 94.34 | 93.74 | 78.20 | 88.87 | 75.18 | 75.12 | 82.27 | 16.96
REM [9] (l∞, ε = 8/255)
αb | 43.51 | 62.63 | 64.60 | 61.98 | 60.21 | 48.29 | 50.53 | 55.94 | -30.90
αn | 30.49 | 54.77 | 60.12 | 46.64 | 58.08 | 44.30 | 41.98 | 48.05 | -33.16
αh | 35.74 | 58.39 | 62.25 | 53.17 | 59.06 | 46.18 | 46.02 | 51.54 | -32.12
+αb | 72.76 | 94.52 | 93.28 | 91.00 | 88.06 | 76.94 | 78.46 | 85.00 | 29.06
+αn | 66.42 | 92.16 | 91.94 | 66.74 | 87.34 | 71.98 | 71.22 | 78.26 | 30.21
+αh | 69.37 | 93.69 | 92.74 | 76.73 | 87.54 | 74.43 | 74.94 | 81.35 | 29.80
HYPO [33] (l∞, ε = 8/255)
αb | 40.08 | 58.59 | 59.83 | 57.33 | 56.50 | 44.11 | 47.66 | 52.01 | -34.83
αn | 30.64 | 54.82 | 57.11 | 44.76 | 56.90 | 41.25 | 43.58 | 47.01 | -34.21
αh | 34.68 | 56.57 | 58.46 | 50.30 | 56.55 | 42.47 | 45.45 | 49.21 | -34.46
+αb | 71.80 | 94.28 | 93.59 | 90.75 | 88.55 | 77.67 | 78.92 | 85.08 | 33.07
+αn | 65.29 | 90.37 | 92.01 | 65.25 | 84.93 | 70.04 | 69.85 | 76.82 | 29.81
+αh | 68.39 | 92.13 | 93.05 | 76.20 | 86.69 | 73.73 | 74.43 | 80.66 | 31.45
LSP [40] (l2, ε = 1.30)
αb | 49.72 | 68.49 | 65.99 | 63.33 | 61.88 | 50.69 | 51.47 | 58.80 | -28.04
αn | 36.04 | 56.29 | 63.35 | 50.62 | 59.41 | 46.20 | 48.88 | 51.54 | -29.67
αh | 41.79 | 61.84 | 64.47 | 56.18 | 60.53 | 48.43 | 50.04 | 54.75 | -28.91
+αb | 72.53 | 94.97 | 94.02 | 91.29 | 88.64 | 77.04 | 78.17 | 85.24 | 26.44
+αn | 67.37 | 92.44 | 93.80 | 67.52 | 87.40 | 72.21 | 71.11 | 78.84 | 27.30
+αh | 69.97 | 93.74 | 94.27 | 77.09 | 87.77 | 74.47 | 74.54 | 81.69 | 26.94
AR [29] (l2, ε = 1.00)
αb | 42.33 | 61.73 | 60.58 | 59.01 | 57.20 | 46.23 | 49.06 | 53.73 | -33.11
αn | 31.67 | 56.42 | 58.08 | 44.54 | 55.96 | 43.30 | 42.69 | 47.52 | -33.69
αh | 36.29 | 59.00 | 59.27 | 50.63 | 56.52 | 44.74 | 45.67 | 50.30 | -33.37
+αb | 71.68 | 94.38 | 92.55 | 90.82 | 87.77 | 76.67 | 77.58 | 84.49 | 30.76
+αn | 66.09 | 91.59 | 91.67 | 66.06 | 86.25 | 71.19 | 70.86 | 77.67 | 30.15
+αh | 68.74 | 92.98 | 92.10 | 78.40 | 87.00 | 73.93 | 74.21 | 81.05 | 30.75
OPS [36] (l0, ε = 1)
αb | 68.28 | 88.60 | 86.20 | 81.59 | 85.65 | 65.60 | 68.98 | 77.84 | -9.00
αn | 54.50 | 80.21 | 80.83 | 58.70 | 79.06 | 60.49 | 60.32 | 67.73 | -13.49
αh | 60.57 | 84.05 | 83.45 | 68.28 | 82.24 | 62.89 | 64.39 | 72.37 | -11.40
+αb | 73.08 | 95.10 | 93.39 | 91.50 | 89.56 | 76.97 | 78.13 | 83.96 | 7.55
+αn | 67.20 | 92.63 | 93.88 | 68.03 | 88.01 | 72.64 | 71.28 | 79.10 | 11.37
+αh | 70.06 | 93.86 | 93.63 | 78.05 | 88.87 | 74.69 | 74.50 | 81.42 | 9.68
", + "bbox": [ + 511, + 213, + 906, + 898 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "9513", + "bbox": [ + 480, + 944, + 514, + 955 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/5fcb750b676d4983519eecab0cb047a3ebd53e763bde4b73ef3bbef34d380c7d.jpg", + "table_caption": [ + "Table 3. Clean test accuracies (\\%) on Caltech-101 datasets (16-shots, standard protocol). \"UEr\" means UEraser. Baseline and compared methods are adapted to CoCoOp [42]. Image encoder backbone is ResNet-50. \"Text\" and \"Image\" refer to the augmented modalities, \"Full\" includes both." + ], + "table_footnote": [], + "table_body": "
Method | ε | Baseline | Gray | JPEG | AT | UEr | Text | Image | Full
EM | 8/255 | 75.53 | 74.28 | 79.21 | 82.84 | 90.37 | 84.69 | 92.51 | 94.28
| 16/255 | 59.36 | 60.96 | 64.58 | 77.03 | 88.79 | 82.45 | 90.40 | 91.86
REM | 8/255 | 52.33 | 63.71 | 70.22 | 78.96 | 90.73 | 86.89 | 92.32 | 93.88
| 16/255 | 37.97 | 59.54 | 66.78 | 74.10 | 86.96 | 83.33 | 89.35 | 90.25
HYPO | 8/255 | 47.36 | 50.02 | 64.56 | 78.89 | 87.74 | 84.07 | 91.47 | 93.21
| 16/255 | 27.18 | 34.67 | 59.11 | 72.26 | 82.48 | 79.27 | 86.09 | 89.33
LSP | 1.30 | 43.23 | 64.70 | 80.01 | 81.58 | 90.66 | 83.37 | 92.83 | 94.02
| 1.74 | 25.41 | 42.58 | 68.30 | 76.29 | 83.05 | 80.77 | 87.24 | 91.62
AR | 1.00 | 50.18 | 53.24 | 81.40 | 76.73 | 90.12 | 84.96 | 91.63 | 93.41
| 1.30 | 32.65 | 35.57 | 69.26 | 64.22 | 82.89 | 81.17 | 86.61 | 90.06
OPS | 1 | 86.17 | 86.52 | 89.73 | 83.16 | 88.61 | 89.93 | 90.50 | 93.86
| 4 | 73.24 | 73.34 | 80.23 | 69.59 | 80.12 | 81.37 | 84.07 | 87.10
", + "bbox": [ + 91, + 145, + 558, + 329 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/64c10adeb61a466aca34a2397c622418c4d4c1d5ce286ef8ce3ea6af56ddb772.jpg", + "table_caption": [ + "Table 4. Base-to-novel metrics for different number of shots $S \\in \\{ 0,2,4,8,{16}\\}$ . The image encoder backbone is ResNet-50." + ], + "table_footnote": [], + "table_body": "
SCaltechFoodImageNet
A3-+-+-+
0αb86.778.162.7
αn78.474.952.8
αh82.676.457.2
2αb69.3488.5166.1184.3444.0466.04
αn61.7883.4560.2981.7730.9060.03
αh65.3485.9163.0783.0436.3262.89
4αb76.0991.2375.2187.8150.4368.10
αn67.5187.9366.1986.1536.6162.90
αh71.5489.5570.4186.9742.4265.40
8αb77.8692.8975.8588.0351.6569.01
αn70.1989.1170.0986.4439.2264.58
αh73.8390.9672.8687.2344.5866.72
16αb77.5293.3676.6788.5253.7271.08
αn72.4391.0273.3487.7640.7464.95
αh74.8992.1874.9788.1446.3467.88
", + "bbox": [ + 586, + 132, + 906, + 327 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/3e63b6b5e9254872d7c39ebffcb88b9b4640ae4b812eb8255b8919a5c846b150.jpg", + "image_caption": [], + "image_footnote": [ + "αoriginal αunlearn 01 αtest αtrain" + ], + "bbox": [ + 94, + 361, + 285, + 444 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/cd91b39d42053d417575a3e791bbf2c683504123f56e5f0e61be118cd4fe24c9.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 292, + 361, + 486, + 444 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/0e966e46ae20f2d8b7aafbc0fd37033660a7ccc32b79ee38a6907e2f24abd3a5.jpg", + "image_caption": [ + "(a) $\\mathrm{EM} + \\mathrm{CoCoOp}$", + "(c) $\\mathrm{REM} + \\mathrm{CoCoOp}$" + ], + "image_footnote": [], + "bbox": [ + 94, + 455, + 287, + 540 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/13a94f1e41a87b7f18ea40d518aa2cd3057bf3ccb934147e291bfc18ae11ba04.jpg", + "image_caption": [ + "(b) $\\mathrm{EM} + \\mathrm{A}^3$", + "(d) $\\mathrm{REM} + \\mathrm{A}^3$" + ], + "image_footnote": [], + "bbox": [ + 292, + 455, + 486, + 540 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/81616334ed89d10775a3865fa15e1d3ed0af05dedecca17cf98b084e1cdb7d74.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 94, + 553, + 285, + 637 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/a6ad827e2de157735f90f2d0d34f9bc8deb05ba662f5c66c99a57b3fa52a03e9.jpg", + "image_caption": [ + "(f) $\\mathrm{AR} + \\mathrm{A}^3$" + ], + "image_footnote": [], + "bbox": [ + 292, + 553, + 486, + 637 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/2a809476949a81417905ada8d5f012a84bf314fba9e658888963850b1dcbf5e6.jpg", + "image_caption": [ + "(e) $\\mathrm{AR} + \\mathrm{CoCoOp}$", + "(g) $\\mathrm{LSP} + \\mathrm{CoCoOp}$", + "Figure 2. CoCoOp vs. A3 under partial poisoning with rates $R \\in \\{\\frac{1}{8}, \\frac{1}{4}, \\frac{1}{2}, 1\\}$ (x-axis, %). Accuracy metrics (y-axis, %): $\\alpha_{\\text{unlearn}} = \\text{UEs}$ in the training set; $\\alpha_{\\text{original}} = \\text{original clean images of the UEs}$ ; $\\alpha_{\\text{test}} = \\text{clean images in the test set}$ ; $\\alpha_{\\text{train}} = \\text{clean images in the training set}$ . The image encoder backbone is ResNet-50." + ], + "image_footnote": [], + "bbox": [ + 94, + 648, + 287, + 733 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/9c16c1e3695b1908ad6ede841fafda072772a77904d57be754a7105f372b4987.jpg", + "image_caption": [ + "(h) $\\mathrm{LSP} + \\mathrm{A}^3$" + ], + "image_footnote": [], + "bbox": [ + 292, + 648, + 486, + 732 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "proves notably with increasing $S$ , where CoCoOp falls behind while $A^3$ leads by a large margin.", + "bbox": [ + 89, + 824, + 483, + 854 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "With partially poisoned datasets, $\\mathbf{A}^3$ learns the underlying features while CoCoOp likely does not. In practice, model trainers may curate datasets from a variety of sources,", + "bbox": [ + 89, + 854, + 485, + 900 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "and only a portion of the data may contain UE perturbations. Training on such partially poisoned datasets typically result in minimal performance loss over the clean dataset, and is not indicative of the model's ability to learn the underlying features of the UEs. 
In Figure 2, we thus investigate whether CoCoOp and $\\mathbf{A}^3$ can learn such features when trained on partially poisoned datasets. We report accuracies on the unlearnable part ($\\alpha_{\\mathrm{unlearn}}$), the original images of the unlearnable part ($\\alpha_{\\mathrm{original}}$, i.e., before perturbation), the samples from the test set ($\\alpha_{\\mathrm{test}}$), and the clean part of the training set ($\\alpha_{\\mathrm{train}}$). Importantly, if the model can learn the underlying features, then $\\alpha_{\\mathrm{original}}$ should be close to $\\alpha_{\\mathrm{train}}$; otherwise, it should be close to $\\alpha_{\\mathrm{test}}$. It is evident that CoCoOp struggles to learn useful features from the UEs, as $\\alpha_{\\mathrm{original}}$ closely tracks $\\alpha_{\\mathrm{test}}$, while $\\alpha_{\\mathrm{unlearn}}$ is higher than $\\alpha_{\\mathrm{train}}$. This suggests that CoCoOp is likely overfitting to the UE perturbations, even more so than to the clean training data. In contrast, $\\mathbf{A}^3$ shows that $\\alpha_{\\mathrm{original}}$ follows $\\alpha_{\\mathrm{train}}$ closely, and $\\alpha_{\\mathrm{unlearn}}$ is close to $\\alpha_{\\mathrm{test}}$, hinting that $\\mathbf{A}^3$ is learning the underlying features instead of the UE-crafted perturbations.", + "bbox": [ + 511, + 343, + 908, + 659 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "5. Conclusion", + "text_level": 1, + "bbox": [ + 513, + 676, + 633, + 694 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "First, the generalization ability of PL, combined with $A^3$, makes it a challenging adversary for traditional UEs. Second, augmenting PL with diverse image and text perturbations significantly improves its resilience against UEs, pointing to the need for multimodal considerations in both UEs and their countermeasures. Third, compared to simpler augmentations or adversarial training, $A^3$'s cross-modal adversarial feature alignment proved especially effective in mitigating the impact of UEs on PL, outperforming preexisting learning methods. Finally, we emphasize the need for adaptive, multimodal approaches to UEs, and open pathways toward more sophisticated protections against unauthorized training in an era of large multimodal and pretrained models.", + "bbox": [ + 511, + 703, + 908, + 900 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "9514", + "bbox": [ + 482, + 944, + 514, + 955 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Acknowledgment", + "text_level": 1, + "bbox": [ + 91, + 90, + 240, + 107 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "This work is supported in part by National Natural Science Foundation of China (62376263, 62372443 and 62271496), Guangdong Basic and Applied Basic Research Foundation (2023B1515130002), Natural Science Foundation of Guangdong (2024A1515030209 and 2024A1515011970), Shenzhen Science and Technology Innovation Commission (JCYJ20230807140507015 and JCYJ20220531100804009), and Yu-Liang Lu's Project Team Development Funding (KY23A102). This work was carried out in part at SICC, which is supported by SKL-IOTSC, University of Macau.", + "bbox": [ + 89, + 113, + 485, + 253 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 91, + 277, + 187, + 292 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[1] Manuele Barraco, Marcella Cornia, Silvia Cascianelli, Lorenzo Baraldi, and Rita Cucchiara. 
The unreasonable effectiveness of clip features for image captioning: An experimental analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pages 4662-4670, 2022. 3", + "[2] Battista Biggio and Fabio Roli. Wild patterns: Ten years after the rise of adversarial machine learning. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, pages 2154-2156, 2018. 3", + "[3] Lukas Bossard, Matthieu Guillaumin, and Luc Van Gool. Food-101 – mining discriminative components with random forests. In Computer Vision – ECCV 2014, pages 446–461. Springer International Publishing, 2014. 5, 11", + "[4] Xinquan Chen, Xitong Gao, Juanjuan Zhao, Kejiang Ye, and Cheng-Zhong Xu. AdvDiffuser: Natural adversarial example synthesis with diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4562-4572, 2023. 3", + "[5] Jacob Clarysse, Julia Hörrmann, and Fanny Yang. Why adversarial training can hurt robust accuracy. In The Eleventh International Conference on Learning Representations, 2023. 3", + "[6] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248-255. IEEE, 2009. 5, 11", + "[7] Prafulla Dhariwal and Alexander Nichol. Diffusion models beat GANs on image synthesis. In Advances in Neural Information Processing Systems, pages 8780-8794. Curran Associates, Inc., 2021. 3", + "[8] Li Fei-Fei, Rob Fergus, and Pietro Perona. Learning generative visual models from few training examples: An incremental Bayesian approach tested on 101 object categories. In 2004 conference on computer vision and pattern recognition workshop, pages 178-178. IEEE, 2004. 5, 11", + "[9] Shaopeng Fu, Fengxiang He, Yang Liu, Li Shen, and Dacheng Tao. Robust unlearnable examples: Protecting data against adversarial learning. In International Conference on Learning Representations, 2022. 1, 2, 4, 6, 7", + "[10] Micah Goldblum, Dimitris Tsipras, Chulin Xie, et al. Dataset security for machine learning: Data poisoning, backdoor attacks, and defenses. IEEE Transactions on Pattern" + ], + "bbox": [ + 93, + 303, + 485, + 898 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Analysis and Machine Intelligence, 45(2):1563-1580, 2022. 2", + "[11] Hanxun Huang, Xingjun Ma, Sarah Monazam Erfani, James Bailey, and Yisen Wang. Unlearnable examples: Making personal data unexploitable. In International Conference on Learning Representations, 2021. 1, 2, 3, 4, 6, 7", + "[12] Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Madry. Adversarial examples are not bugs, they are features. Advances in neural information processing systems, 32, 2019. 3", + "[13] Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. Scaling up visual and vision-language representation learning with noisy text supervision. In International conference on machine learning, pages 4904-4916. PMLR, 2021. 3", + "[14] Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. The CIFAR-10 and CIFAR-100 datasets, 2014. Available at: http://www.cs.toronto.edu/~kriz/cifar.html. 6", + "[15] Ziyi Lin, Shijie Geng, Renrui Zhang, Peng Gao, Gerard De Melo, Xiaogang Wang, Jifeng Dai, Yu Qiao, and Hongsheng Li. Frozen clip models are efficient video learners. 
In European Conference on Computer Vision, pages 388-404. Springer, 2022. 1", + "[16] Zhuoran Liu, Zhengyu Zhao, and Martha Larson. Image shortcut squeezing: Countering perturbative availability poisons with compression. In International conference on machine learning, 2023. 2, 3, 11", + "[17] Yuning Lu, Jianzhuang Liu, Yonggang Zhang, Yajing Liu, and Xinmei Tian. Prompt distribution learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2022. 4", + "[18] Yiwei Ma, Guohai Xu, Xiaoshuai Sun, Ming Yan, Ji Zhang, and Rongrong Ji. X-clip: End-to-end multi-grained contrastive learning for video-text retrieval. In Proceedings of the 30th ACM International Conference on Multimedia, page 638–647, New York, NY, USA, 2022. Association for Computing Machinery. 3", + "[19] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017. 2, 3, 11", + "[20] Samuel G Müller and Frank Hutter. Trivialaugment: Tuning-free yet state-of-the-art data augmentation. In Proceedings of the IEEE/CVF international conference on computer vision, pages 774-782, 2021. 4, 11", + "[21] Anguelos Nicolaou, Vincent Christlein, Edgar Riba, Jian Shi, Georg Vogeler, and Mathias Seuret. Tormentor: Deterministic dynamic-path, data augmentations with fractals. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pages 2707-2711, 2022. 4, 11", + "[22] Maria-Elena Nilsback and Andrew Zisserman. Automated flower classification over a large number of classes. In 2008 Sixth Indian conference on computer vision, graphics & image processing, pages 722-729. IEEE, 2008. 5, 11" + ], + "bbox": [ + 516, + 92, + 908, + 901 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "9515", + "bbox": [ + 482, + 944, + 514, + 955 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[23] Omkar M Parkhi, Andrea Vedaldi, Andrew Zisserman, and CV Jawahar. Cats and dogs. In 2012 IEEE conference on computer vision and pattern recognition, pages 3498-3505. IEEE, 2012. 5, 11", + "[24] Tianrui Qin, Xitong Gao, Juanjuan Zhao, and Kejiang Ye. Destruction-restoration suppresses data protection perturbations against diffusion models. In 2023 IEEE 35th International Conference on Tools with Artificial Intelligence (ICTAI), pages 586-594. IEEE, 2023. 1", + "[25] Tianrui Qin, Xitong Gao, Juanjuan Zhao, Kejiang Ye, and Cheng-Zhong Xu. Learning the unlearnable: Adversarial augmentations suppress unlearnable example attacks. In 4th Workshop on Adversarial Robustness In the Real World (AROW), ICCV 2023, 2023. 2, 3, 4, 7, 12", + "[26] Tianrui Qin, Xitong Gao, Juanjuan Zhao, Kejiang Ye, and Cheng zhong Xu. APBench: A unified availability poisoning attack and defenses benchmark. Transactions on Machine Learning Research, 2024. 3, 4, 6", + "[27] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748-8763. PMLR, 2021. 3, 5, 7", + "[28] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10684-10695, 2022. 
3", + "[29] Pedro Sandoval-Segura, Vasu Singla, Jonas Geiping, Micah Goldblum, Tom Goldstein, and David Jacobs. Autoregressive perturbations for data poisoning. Advances in Neural Information Processing Systems, 35:27374-27386, 2022. 1, 2, 3, 4, 6, 7", + "[30] Mainak Singha, Harsh Pal, Ankit Jha, and Biplab Banerjee. Ad-clip: Adapting domains in prompt space using clip. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4355–4364, 2023. 1", + "[31] Khurram Soomro, Amir Roshan Zamir, and Mubarak Shah. UCF101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402, 2012. 5, 11", + "[32] Zeyi Sun, Ye Fang, Tong Wu, Pan Zhang, Yuhang Zang, Shu Kong, Yuanjun Xiong, Dahua Lin, and Jiaqi Wang. Alphaclip: A clip model focusing on wherever you want. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 13019-13029, 2024. 1", + "[33] Lue Tao, Lei Feng, Jinfeng Yi, Sheng-Jun Huang, and Song-can Chen. Better safe than sorry: Preventing delusive adversaries with adversarial training. Advances in Neural Information Processing Systems, 34:16209-16225, 2021. 2, 4, 6, 7", + "[34] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017. 5", + "[35] Jianyi Wang, Kelvin CK Chan, and Chen Change Loy. Exploring clip for assessing the look and feel of images. In" + ], + "bbox": [ + 91, + 92, + 485, + 901 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Proceedings of the AAAI Conference on Artificial Intelligence, pages 2555-2563, 2023. 1", + "[36] Shutong Wu, Sizhe Chen, Cihang Xie, and Xiaolin Huang. One-pixel shortcut: on the learning preference of deep neural networks. In International Conference on Learning Representations, 2023. 1, 2, 3, 4, 6, 7", + "[37] Jianxiong Xiao, James Hays, Krista A. Ehinger, Aude Oliva, and Antonio Torralba. Sun database: Large-scale scene recognition from abbey to zoo. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 3485-3492, 2010. 11", + "[38] Yao-Yuan Yang, Cyrus Rashtchian, Hongyang Zhang, Russ R Salakhutdinov, and Kamalika Chaudhuri. A closer look at accuracy vs. robustness. In Advances in Neural Information Processing Systems, pages 8588-8601. Curran Associates, Inc., 2020. 3, 7", + "[39] Hantao Yao, Rui Zhang, and Changsheng Xu. Visual-language prompt tuning with knowledge-guided context optimization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6757-6767, 2023. 4", + "[40] Da Yu, Huishuai Zhang, Wei Chen, Jian Yin, and Tie-Yan Liu. Availability attacks create shortcuts. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 2367-2376, 2022. 1, 2, 3, 4, 6, 7", + "[41] Yunrui Yu, Xitong Gao, and Cheng-Zhong Xu. LAFIT: Efficient and reliable evaluation of adversarial defenses with latent features. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 46(1):354-369, 2024. 3", + "[42] Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. Conditional prompt learning for vision-language models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2022. 4, 6, 7, 8", + "[43] Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. Learning to prompt for vision-language models. 
International Journal of Computer Vision, 130(9):2337-2348, 2022. 4, 6" + ], + "bbox": [ + 516, + 92, + 906, + 613 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "9516", + "bbox": [ + 482, + 945, + 514, + 955 + ], + "page_idx": 9 + } +] \ No newline at end of file diff --git a/2025/A3_ Few-shot Prompt Learning of Unlearnable Examples with Cross-Modal Adversarial Feature Alignment/641deceb-eb2b-45b8-97bb-8e2f7fe97a5e_model.json b/2025/A3_ Few-shot Prompt Learning of Unlearnable Examples with Cross-Modal Adversarial Feature Alignment/641deceb-eb2b-45b8-97bb-8e2f7fe97a5e_model.json new file mode 100644 index 0000000000000000000000000000000000000000..2267220a5a50b6d307d6902a2d4526f0dd1e5969 --- /dev/null +++ b/2025/A3_ Few-shot Prompt Learning of Unlearnable Examples with Cross-Modal Adversarial Feature Alignment/641deceb-eb2b-45b8-97bb-8e2f7fe97a5e_model.json @@ -0,0 +1,2420 @@ +[ + [ + { + "type": "header", + "bbox": [ + 0.107, + 0.003, + 0.182, + 0.043 + ], + "angle": 0, + "content": "CVF" + }, + { + "type": "header", + "bbox": [ + 0.237, + 0.001, + 0.812, + 0.047 + ], + "angle": 0, + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." + }, + { + "type": "title", + "bbox": [ + 0.214, + 0.13, + 0.785, + 0.177 + ], + "angle": 0, + "content": "A3: Few-shot Prompt Learning of Unlearnable Examples with Cross-Modal Adversarial Feature Alignment" + }, + { + "type": "text", + "bbox": [ + 0.182, + 0.205, + 0.286, + 0.221 + ], + "angle": 0, + "content": "Xuan Wang*" + }, + { + "type": "text", + "bbox": [ + 0.115, + 0.222, + 0.346, + 0.254 + ], + "angle": 0, + "content": "Anhui Key Lab of CSSAE, National University of Defense Technology wangxuan21d@nudt.edu.cn" + }, + { + "type": "text", + "bbox": [ + 0.195, + 0.26, + 0.292, + 0.276 + ], + "angle": 0, + "content": "Tianrui Qin" + }, + { + "type": "text", + "bbox": [ + 0.153, + 0.277, + 0.336, + 0.299 + ], + "angle": 0, + "content": "OPPO Research Institute qintianrui123@gmail.com" + }, + { + "type": "text", + "bbox": [ + 0.469, + 0.204, + 0.578, + 0.221 + ], + "angle": 0, + "content": "Xitong Gao*†" + }, + { + "type": "text", + "bbox": [ + 0.383, + 0.221, + 0.658, + 0.254 + ], + "angle": 0, + "content": "Shenzhen Institutes of Advanced Technology, CAS; Shenzhen University of Advanced Technology xt.gao@siat.ac.cn" + }, + { + "type": "text", + "bbox": [ + 0.443, + 0.26, + 0.546, + 0.277 + ], + "angle": 0, + "content": "Yu-liang Lu†" + }, + { + "type": "text", + "bbox": [ + 0.379, + 0.277, + 0.61, + 0.309 + ], + "angle": 0, + "content": "Anhui Key Lab of CSSAE, National University of Defense Technology publicLuYL@126.com" + }, + { + "type": "text", + "bbox": [ + 0.728, + 0.205, + 0.85, + 0.222 + ], + "angle": 0, + "content": "Dongping Liao" + }, + { + "type": "text", + "bbox": [ + 0.695, + 0.222, + 0.883, + 0.254 + ], + "angle": 0, + "content": "State Key Lab of IoTSC, CIS Dept, University of Macau yb97428@um.edu.mo" + }, + { + "type": "text", + "bbox": [ + 0.681, + 0.26, + 0.819, + 0.276 + ], + "angle": 0, + "content": "Cheng-zhong Xu" + }, + { + "type": "text", + "bbox": [ + 0.655, + 0.276, + 0.842, + 0.309 + ], + "angle": 0, + "content": "State Key Lab of IoTSC, CIS Dept, University of Macau czxu@um.edu, mo" + }, + { + "type": "title", + "bbox": [ + 0.248, + 0.342, + 0.327, + 0.357 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": 
"text", + "bbox": [ + 0.09, + 0.374, + 0.485, + 0.766 + ], + "angle": 0, + "content": "In the age of pervasive machine learning applications, protecting digital content from unauthorized use has become a pressing concern. Unlearnable examples (UEs)—data modified with imperceptible perturbations to inhibit model training while preserving human usability—have emerged as a promising approach. However, existing UE methods assume unauthorized trainers have extensive exposure to UEs or that models are trained from scratch, which may not hold in practical scenarios. This paper investigates the effectiveness of UEs under the few-shot learning paradigm, pitching it against prompt learning (PL) models that leverage pretrained vision-language models (VLMs), like CLIP, capable of generalizing to new classes with minimal data. To address this, we introduce an adaptive UE framework to generate unlearnable examples that specifically target the PL process. In addition, we propose a novel UE countermeasure, \\( A^3 \\), with cross-modal adversarial feature alignment, specifically designed to circumvent UEs under few-shot PL. Experimental evaluations on 7 datasets show that \\( A^3 \\) outperforms existing PL methods, achieving up to \\( 33\\% \\) higher performance in learning from UEs. For example, in the scenario involving \\( \\ell_{\\infty} \\)-bounded EM perturbations, \\( A^3 \\) has an average harmonic mean accuracy across 7 datasets of \\( 82.43\\% \\), compared to CoCoOp's baseline of \\( 65.47\\% \\). Our findings highlight the limitations of existing UEs against PL and lay the foundation for future data protection mechanisms." + }, + { + "type": "title", + "bbox": [ + 0.092, + 0.795, + 0.222, + 0.81 + ], + "angle": 0, + "content": "1. Introduction" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.82, + 0.486, + 0.866 + ], + "angle": 0, + "content": "In the era of pervasive machine learning applications, protecting digital content from unauthorized use is an escalating concern. An emerging solution involves unlearnable ex" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.343, + 0.908, + 0.646 + ], + "angle": 0, + "content": "amples (UEs) [9, 11, 29, 36, 40]. — data modified with imperceptible perturbations that prevent machine learning models from effectively learning and generalizing from it, while preserving its utility for human observers. Unlike traditional data poisoning attacks intended for malicious use, UEs serve content creators by providing a way to inhibit unauthorized model training. Beyond this, UEs can also be used to shed light on vulnerabilities and learning preferences [36] of deep learning models, and prevent unlawful use of personal features [24]. However, existing UE methods are primarily designed for models trained from scratch, and make strong assumptions where all or a large proportion of training data is used unknowingly by unauthorized trainers. These assumptions may not hold in the wilderness, for several reasons: (a) Creators may release limited data. (b) Unauthorized trainers may have limited exposure to the UEs: they may curate their training data from various sources and may only use a small fraction of the creator's data. (c) Trainers may leverage pretrained models to improve training efficiency, and to generalize well to new classes and contexts." 
+ }, + { + "type": "text", + "bbox": [ + 0.512, + 0.652, + 0.909, + 0.834 + ], + "angle": 0, + "content": "In this paper, we found that recent advances in prompt learning (PL) with pretrained vision-language models (VLMs) can indeed challenge the robustness of UEs. VLMs use contrastive learning to align images and text features, enabling strong zero-shot and downstream tasks [15, 30, 32, 35]. PL further adapts CLIP by fine-tuning prompts instead of model weights, making it ideal for data-limited scenarios, and novel tasks and classes. This paper thus investigates a central question: Are UEs effective in protecting data against PL-enabled models? This question has profound implications from both the content creator's and the unauthorized trainer's perspectives." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.84, + 0.91, + 0.901 + ], + "angle": 0, + "content": "For content creators, understanding this question is crucial for several reasons: (a) To effectively prevent unauthorized usage, creators need to know the minimum amount of modified data required to maintain protection. (b) Content" + }, + { + "type": "page_footnote", + "bbox": [ + 0.109, + 0.875, + 0.22, + 0.888 + ], + "angle": 0, + "content": "*Equal contribution." + }, + { + "type": "page_footnote", + "bbox": [ + 0.111, + 0.888, + 0.373, + 0.901 + ], + "angle": 0, + "content": "\\(\\dagger\\) Correspondence to Xitong Gao and Yu-liang Lu." + }, + { + "type": "list", + "bbox": [ + 0.109, + 0.875, + 0.373, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.482, + 0.945, + 0.516, + 0.957 + ], + "angle": 0, + "content": "9507" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.092, + 0.485, + 0.152 + ], + "angle": 0, + "content": "creators typically control and release a limited quantity of data, making it impractical to assume access to large datasets. This constraint naturally leads to a few-shot scenario, which is the focus of this study." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.153, + 0.484, + 0.319 + ], + "angle": 0, + "content": "For unauthorized trainers, PL represents an appealing tool to bypass UEs, as PL exploits the generalization strengths of VLMs: (a) The pretrained encoders of VLMs enables it to generalize well to novel classes, potentially circumventing perturbations that would normally deter training from scratch. (b) While existing methods to circumvent UEs typically involve adversarial training [19], or image augmentations [16, 25], which affect only the image seen by the model. PL may be able to enhance its robustness by incorporating text augmentations, offering a broader strategy to bypass unlearnability protections." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.32, + 0.484, + 0.366 + ], + "angle": 0, + "content": "To address these challenges, we propose an adaptive framework that targets UEs in the few-shot PL setting. The contributions of this paper are as follows:" + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.367, + 0.484, + 0.472 + ], + "angle": 0, + "content": "- We introduced a scenario designed to examine the effectiveness of UEs against prompt learning, particularly in a few-shot context where data availability is constrained. Beyond existing UE methods, we introduced an adaptive UE framework that incorporates PL-specific considerations for surrogate-based UEs, generating stronger UEs that are more effective against PL." 
+ }, + { + "type": "text", + "bbox": [ + 0.091, + 0.473, + 0.484, + 0.547 + ], + "angle": 0, + "content": "- We propose a novel method, \\( \\mathrm{A}^3 \\), which employs cross-modal adversarial augmented feature alignment to enhance PL's ability to generalize when learned from UEs. This method adversarially aligns diversely-augmented image and text augmentations to make PL robust against UEs." + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.548, + 0.484, + 0.623 + ], + "angle": 0, + "content": "- Experimental results demonstrate that \\( \\mathrm{A}^3 \\) achieves significant performance gains over existing methods, proving more effective against other UE methods in few-shot scenarios, even when faced with larger perturbations, and partial poisoning. \\( \\mathrm{A}^3 \\) also generalizes well to novel classes." + }, + { + "type": "list", + "bbox": [ + 0.091, + 0.367, + 0.484, + 0.623 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.624, + 0.485, + 0.7 + ], + "angle": 0, + "content": "This work offers new insights into the capabilities and limitations of UEs against PL, laying a foundation for more robust data protection strategies in the era of knowledge transfer with large pretrained models and multimodal machine learning." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.716, + 0.371, + 0.732 + ], + "angle": 0, + "content": "2. Related Work & Preliminaries" + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.742, + 0.303, + 0.759 + ], + "angle": 0, + "content": "2.1. Unlearnable Examples" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.765, + 0.486, + 0.901 + ], + "angle": 0, + "content": "The primary goal of unlearnable examples [9, 11, 29, 33, 36, 40] is to safeguard the privacy and copyright of content providers by adding small, human-imperceptible perturbations to data. These perturbations prevent machine learning models from effectively generalizing to the data's original distribution. Unlike traditional data poisoning attacks [10], which aim to introduce backdoor patterns into a model, unlearnable examples are not intended for malicious purposes but solely to protect data from unauthorized use." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.092, + 0.907, + 0.231 + ], + "angle": 0, + "content": "Definition of Unlearnable Examples Consider a dataset with \\(N\\) clean samples \\(\\mathcal{D}_{\\mathrm{clean}} = \\{(x_i,y_i)\\}_{i = 1}^N\\), where \\(\\mathbf{x}_i\\in \\mathcal{X} = [0,1]^{C\\times H\\times W}\\) and \\(y_{i}\\in \\mathcal{V} = \\{1,\\dots ,K\\}\\) represent the \\(i^{\\mathrm{th}}\\) input sample with \\(C\\) channels and \\(H\\times W\\) spatial dimensions, and its corresponding true label. Each sample is drawn from a distribution \\(\\mathcal{S}\\). The content provider aims to add small perturbations \\(\\delta_{i}\\in \\mathcal{B}_{p}(\\mathbf{x}_{i},\\epsilon)\\) to the clean samples in \\(\\mathbf{x}_i\\in \\mathcal{D}_{\\mathrm{clean}}\\) to generate unlearnable examples \\(\\mathcal{D}_{\\mathrm{ue}}(\\boldsymbol {\\delta})\\triangleq \\{(\\mathbf{x}_i + \\boldsymbol {\\delta}_i,y_i)\\mid (\\mathbf{x}_i,y_i)\\in \\mathcal{D}_{\\mathrm{clean}}\\}\\). 
The set \\(\\mathcal{B}_p(\\mathbf{x}_i,\\epsilon)\\) is:" + }, + { + "type": "equation", + "bbox": [ + 0.567, + 0.238, + 0.907, + 0.257 + ], + "angle": 0, + "content": "\\[\n\\mathcal{B}_p(\\mathbf{x}_i, \\epsilon) \\triangleq \\left\\{\\mathbf{d} \\mid \\|\\mathbf{d}\\|_p \\leq \\epsilon,\\ \\mathbf{x}_i + \\mathbf{d} \\in \\mathcal{X}\\right\\}. \\tag{1}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.263, + 0.907, + 0.353 + ], + "angle": 0, + "content": "It bounds the noise \\(\\delta_{i}\\) of each sample \\(\\mathbf{x}_i\\) within the \\(\\epsilon\\)-ball of \\(\\ell_p\\)-distance with respect to the sample, and the perturbed sample \\(\\mathbf{x}_i + \\delta_i\\) remains within the input domain \\(\\mathcal{X}\\). A small \\(\\epsilon\\) is crucial to ensure that the perturbations do not significantly alter the original content, thus preserving the data's utility, and typically \\(p \\in \\{0,2,\\infty\\}\\)." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.354, + 0.907, + 0.43 + ], + "angle": 0, + "content": "When this perturbed dataset is used for training, the goal is for the resulting model to generalize poorly to the original distribution \\(\\mathcal{S}\\). The optimization for the noise can be formulated as the following bi-level optimization problem to solve for the bounded perturbations \\(\\boldsymbol{\\delta} \\triangleq \\{\\boldsymbol{\\delta}_i \\in \\mathcal{B}_p(\\mathbf{x}_i, \\epsilon)\\}_{i=1}^N\\):" + }, + { + "type": "equation", + "bbox": [ + 0.588, + 0.437, + 0.907, + 0.455 + ], + "angle": 0, + "content": "\\[\n\\max_{\\boldsymbol{\\delta}} \\mathbb{E}_{(\\mathbf{x}_i, y_i) \\sim \\mathcal{S}}\\left[\\mathcal{L}\\left(f_{\\boldsymbol{\\theta}^{\\star}(\\boldsymbol{\\delta})}(\\mathbf{x}_i), y_i\\right)\\right], \\tag{2}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.46, + 0.907, + 0.522 + ], + "angle": 0, + "content": "where \\(f_{\\theta}:\\mathcal{X}\\to \\mathbb{R}^{K}\\) denotes the model with parameters \\(\\pmb{\\theta}\\), \\(\\mathcal{L}:\\mathbb{R}^K\\times \\mathcal{V}\\rightarrow \\mathbb{R}\\) is the loss function (typically cross-entropy), and \\(\\pmb{\\theta}^{\\star}\\) represents the model parameters optimized on the perturbed images:" + }, + { + "type": "equation", + "bbox": [ + 0.525, + 0.529, + 0.907, + 0.548 + ], + "angle": 0, + "content": "\\[\n\\boldsymbol{\\theta}^{\\star}(\\boldsymbol{\\delta}) = \\operatorname{argmin}_{\\boldsymbol{\\theta}} \\mathbb{E}_{(\\mathbf{x}_i, y_i) \\sim \\mathcal{D}_{\\mathrm{clean}}}[\\mathcal{L}(f_{\\boldsymbol{\\theta}}(\\mathbf{x}_i + \\boldsymbol{\\delta}_i), y_i)]. \\tag{3}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.554, + 0.907, + 0.599 + ], + "angle": 0, + "content": "As the above problem is intractable, many works have proposed alternative methods to approximate the solution, commonly involving surrogate models:" + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.6, + 0.907, + 0.675 + ], + "angle": 0, + "content": "- Hypocritical perturbations (HYPO) [33] assumes a surrogate model \\( g_{\\theta} : \\mathcal{X} \\to \\mathbb{R}^{K} \\) with pretrained weights \\( \\theta \\) learned on samples from \\( \\mathcal{S} \\), and directly finds the perturbations \\( \\delta \\) that make the model easily produce correct predictions for \\( \\mathcal{D}_{\\mathrm{clean}} \\) images."
+ }, + { + "type": "text", + "bbox": [ + 0.514, + 0.675, + 0.906, + 0.719 + ], + "angle": 0, + "content": "- Error maximization (EM) [11] further considers a randomly-initialized surrogate \\( g_{\\theta} \\), and optimizes the noise \\( \\delta \\), and the surrogate model \\( g_{\\theta} \\) simultaneously." + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.72, + 0.907, + 0.81 + ], + "angle": 0, + "content": "- Robust error maximization (REM) [9] extends EM to optimize the surrogate \\( g_{\\theta} \\) under adversarial training [19], where the adversarial noise is also bounded within the \\( \\epsilon \\)-ball of \\( \\ell_p \\)-norm, and optimized via projected gradient descent (PGD) [19]. This helps to improve the effectiveness of the perturbations even under adversarial training." + }, + { + "type": "list", + "bbox": [ + 0.514, + 0.6, + 0.907, + 0.81 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.811, + 0.907, + 0.901 + ], + "angle": 0, + "content": "Interestingly, recent works have shown that unlearnable examples can also be curated without the need for optimization, where the perturbations form a linearly-separable subspace that can be learned easily by the model. This bias is so strong that it makes the underlying features less learnable by model training algorithms:" + }, + { + "type": "page_number", + "bbox": [ + 0.482, + 0.945, + 0.516, + 0.957 + ], + "angle": 0, + "content": "9508" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.091, + 0.092, + 0.482, + 0.196 + ], + "angle": 0, + "content": "- Linearly-separable perturbations (LSP) [40] generates random color patches as perturbations, and apply them to the images while ensuring the added noise is bounded within a small \\(\\ell_2\\)-distance from the original image. This simple method can enable strong unlearnable examples without the expensive optimization process and the need for surrogate models." + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.197, + 0.482, + 0.303 + ], + "angle": 0, + "content": "- Autoregressive Poisoning (AR) [29] Similar to LSP, AR prescribes a simple perturbation strategy which first fills all channels of the first 2 rows and columns of the image with Gaussian noise, then uses an autoregressive process to fill the remaining pixels with a \\(3 \\times 3\\) sliding window. It then re-scales the perturbations to be within the noise bound \\(\\mathcal{B}_p(\\mathbf{x}_i, \\epsilon)\\), before adding them to the image \\(\\mathbf{x}_i\\)." + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.304, + 0.482, + 0.484 + ], + "angle": 0, + "content": "- One-pixel Shortcuts (OPS) [36] For each image belonging to a specific class, OPS searches for an optimal pixel and color value for the class, such that it results in the largest change in the pixel's color value for all perturbed images. This constitutes a simple \\(\\ell_0\\)-bounded perturbation where only one pixel is modified for each image. Surprisingly, when training models from scratch, OPS can generate even stronger unlearnable examples than EM with a 8/255 noise budget [25, 36]. It also resists even \\(\\ell_{\\{2,\\infty\\}}\\)-bounded adversarial training, as the noise added by adversarial training cannot effectively perturb the pixel values for the erasure of \\(\\ell_0\\)-perturbations." 
+ }, + { + "type": "list", + "bbox": [ + 0.091, + 0.092, + 0.482, + 0.484 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.486, + 0.483, + 0.591 + ], + "angle": 0, + "content": "As \\( \\mathsf{A}^3 \\) considers the problem of learning from unlearnable examples from the perspective of few-shot learning with prompt learning (PL), we provide a framework that adapts the above methods to this setting, by making the surrogate models \\( g \\) our prompt learners. It also shows that PL can be in a certain degree effective against unlearnable examples produced by these methods." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.604, + 0.418, + 0.62 + ], + "angle": 0, + "content": "2.2. Learning from Unlearnable Examples" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.627, + 0.483, + 0.672 + ], + "angle": 0, + "content": "The emergence of unlearnable examples prompts investigation into the mechanisms that unauthorized trainers might exploit to extract useful features." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.673, + 0.483, + 0.809 + ], + "angle": 0, + "content": "Adversarial Training (AT) [2, 19] involves the generation of adversarial examples [4, 12, 41] specifically tailored to the model under training, which are in turn used to train the model to enhance the model's robustness. It also has been known to be an effective approach to improve the model generalization when trained on unlearnable examples [11]. However, adversarial training is known to be computationally expensive, and also affects the model's performance on clean data [38], especially when the sample size is small [5]." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.811, + 0.483, + 0.901 + ], + "angle": 0, + "content": "For this reason, Image Shortcut Squeezing (ISS) [16] introduces a suite of simple image processing methods can show surprising effectiveness in mitigating the impact of unlearnable examples, without the costs associated with adversarial training. Grayscale removes the color information from the training images, and JPEG Compression" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.092, + 0.908, + 0.304 + ], + "angle": 0, + "content": "(JPEG) instead performs a lossy JPEG with high compression rate on the training images. However, it was recently discovered [26] that simple image processing is much less effective against adaptively-optimized unlearnable examples. Building upon this idea, UEraser [25] further proposes a stochastic augmentation pipeline with a wider range of transformations, and uses a simple adversarial augmentation to optimize models only on augmented images with the maximum loss. This allows the model to learn the underlying features without being affected by the easily learnable shortcuts in unlearnable examples. While all these methods have shown effectiveness against unlearnable examples, they consider the problem of training models from scratch rather than leveraging pretrained models." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.312, + 0.685, + 0.328 + ], + "angle": 0, + "content": "2.3. Prompt Learning" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.334, + 0.907, + 0.56 + ], + "angle": 0, + "content": "Vision-language pretrained models (VLMs), such as CLIP [27] and ALIGN [13], represent a significant advancement in multi-modal learning. 
Trained on extensive image-text pair datasets, VLMs have two main components: an image encoder \\( f_{\\mathrm{im}}: \\mathcal{X} \\to \\mathbb{R}^d \\) and a text encoder \\( f_{\\mathrm{tx}}: \\mathcal{W} \\to \\mathbb{R}^d \\), where \\( \\mathcal{X} \\) and \\( \\mathcal{W} \\) are the input image and text domains, respectively; the two encoders learn a shared embedding space \\( \\mathbb{R}^d \\) for images and texts. Image and text pairs that are semantically similar will have similar embeddings in this space, and vice versa. This makes them versatile for a wide range of downstream tasks, including image classification [27], captioning [1], retrieval [18], and providing guidance for image generation [7, 28]. Notably, VLMs show impressive zero-shot performance, where they can perform well on new tasks without task-specific training, showcasing their generalization capabilities." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.561, + 0.909, + 0.833 + ], + "angle": 0, + "content": "Prompt Engineering for VLMs seeks to use VLMs for image classification tasks by constructing class-specific text prompts, e.g., \"a photo of a cls\" for the class cls, and comparing the model's similarity scores between the features of these prompts and the target image. The class yielding the highest similarity score is selected as the predicted class for the image. Formally, prompt engineering creates \\(M\\) text prompts, embedded as a tensor \\(\\mathbf{V} = \\{\\mathbf{v}_m\\}_{m=1}^M \\in \\mathbb{R}^{M \\times T \\times h}\\), where \\(\\mathbf{v}_m\\) denotes an embedded prompt prefix sequence of \\(T\\) tokens. By appending each prefix \\(\\mathbf{v}_m\\) with \\(\\mathbf{c}_k \\in \\mathbb{R}^h\\), the constant embedding vector of the \\(k^{\\text{th}}\\) class, the model can thus construct a holistic classifier \\(h_\\phi: \\mathcal{X} \\to \\mathbb{R}^K\\) by averaging the similarity scores across all \\(M\\) prompts, where \\(\\phi = \\{\\theta, \\mathbf{V}, \\ldots\\}\\) denotes all parameters in \\(h_\\phi\\), consisting of the pretrained CLIP weights \\(\\pmb{\\theta}\\), a manually-designed prompt embedding \\(\\mathbf{V}\\), and other potential parameters used by the prompt learning algorithm. For the \\(k^{\\text{th}}\\) class, we can obtain its logit as follows:" + }, + { + "type": "equation", + "bbox": [ + 0.549, + 0.841, + 0.907, + 0.862 + ], + "angle": 0, + "content": "\\[\nh_{\\phi}(\\mathbf{x})_k = \\frac{1}{M} \\sum_{m=1}^{M} \\operatorname{sim}\\left(f_{\\mathrm{im}}(\\mathbf{x}), f_{\\mathrm{tx}}\\left(\\left[\\mathbf{v}_m, \\mathbf{c}_k\\right]\\right)\\right). \\tag{4}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.871, + 0.908, + 0.901 + ], + "angle": 0, + "content": "Here, sim denotes the similarity function used to compute the closeness between the image and text features, typically
The \\(k^{\\mathrm{th}}\\) class probabilities can thus be computed using the softmax function, where \\(\\tau\\) is the softmax temperature and is usually set to 1:" + }, + { + "type": "equation", + "bbox": [ + 0.127, + 0.147, + 0.484, + 0.169 + ], + "angle": 0, + "content": "\\[\np (y = k \\mid \\mathbf {x}, \\phi) = e ^ {h _ {\\phi} (\\mathbf {x}) _ {k} / \\tau} / \\sum_ {j = 1} ^ {K} e ^ {h _ {\\phi} (x) _ {j} / \\tau}. \\quad (5)\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.093, + 0.178, + 0.485, + 0.51 + ], + "angle": 0, + "content": "Prompt Learning (PL) While the above zero-shot method is effective for many tasks, it is limited by the quality of the manually-designed prompts and may require extensive labor and expert knowledge to construct. In contrast, PL aims to automatically learn the prompt embeddings \\(\\mathbf{V}\\), and possibly other parameters in \\(\\phi\\), by optimizing them during training. To improve downstream task performance, CoOp [43] learns the prompt embeddings \\(\\mathbf{V}\\), show that even with few-shot examples, can generate better prompts than manual designs, and generalize well to unseen tasks. CoCoOp [42] builds upon CoOp by introducing a trainable meta-net to learn to generate prompt embeddings from the extracted image features, in order to improve the model's performance on unseen tasks. KgCoOp [39] further regularizes the prompt embeddings to be close to the initial handcrafted prompts, showing that by retaining proximity to the original prompts, unseen tasks can be generalized better. Finally, ProDA [17] uses Gaussian to model the prompt embedding distribution, and encourages orthogonality among the prompt embeddings. This paper presents the first work to highlight that prompt learning can be effective in learning useful features from unlearnable examples, even under few-shot scenarios." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.523, + 0.245, + 0.54 + ], + "angle": 0, + "content": "3. The \\(\\mathbf{A}^3\\) Method" + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.549, + 0.34, + 0.566 + ], + "angle": 0, + "content": "3.1. Adaptive UEs Targeting PL" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.571, + 0.486, + 0.813 + ], + "angle": 0, + "content": "Surrogate-based methods (EM [11], REM [9], and HYPO [33]) for synthesizing UEs typically train models from scratch, and do not assume low data availability, which makes them an unsuitable choice for affecting the PL process of the pretrained CLIP model (Table 1). To address this limitation, we first introduce an adaptive approach for all methods that uses the PL process as the surrogate model. To make the UEs stronger, we also assume that the UEs are synthesized with the same PL method using the same pretrained CLIP model. We implemented adaptive-variations of EM, REM, and HYPO. As they are stronger UEs than the original methods, our experiments by default use these adaptive variants. Recall REM [9] in Section 2, we adapt its objective to PL, specifically CoCoOp [42] in this paper, by using the holistic classifier \\( h_{\\phi} \\) in (4) as the surrogate model. 
The REM objective to seek \\( \\delta \\) under PL is thus:" + }, + { + "type": "equation", + "bbox": [ + 0.095, + 0.824, + 0.484, + 0.855 + ], + "angle": 0, + "content": "\\[\n\\min_{(\\boldsymbol{\\delta}, \\mathbf{V}, \\dots)} \\max_{\\boldsymbol{\\eta}} \\mathbb{E}_{(\\mathbf{x}_i, y_i) \\sim \\mathcal{D}_{\\mathrm{clean}}}[\\mathcal{L}(h_{\\phi}(\\mathbf{x}_i + \\boldsymbol{\\delta}_i + \\boldsymbol{\\eta}), y_i)], \\tag{6}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.856, + 0.442, + 0.871 + ], + "angle": 0, + "content": "where \\(\\pmb{\\eta}\\in \\mathcal{B}_p(\\mathbf{x}_i,\\epsilon)\\) is the \\(\\epsilon\\)-bounded \\(\\ell_p\\)-norm noise." + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.871, + 0.485, + 0.902 + ], + "angle": 0, + "content": "In this adaptive context, EM [11] simplifies the above REM objective by removing the inner maximization over \\(\\pmb{\\eta}\\)" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.092, + 0.907, + 0.137 + ], + "angle": 0, + "content": "and setting \\(\\pmb{\\eta} = 0\\). Similarly, HYPO [33] further assumes that \\(\\mathbf{V}\\) is kept fixed at its initial manual design, and searches for the perturbations \\(\\delta\\) directly." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.137, + 0.909, + 0.244 + ], + "angle": 0, + "content": "Surrogate-free methods such as LSP [40], AR [29], and OPS [36] do not rely on a surrogate model training process and directly prescribe the perturbations for a given set of clean examples. Therefore, they do not have an adaptive counterpart. In our experiments, we examine the performance of these methods by directly applying them to the training data used by the prompt learning process." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.253, + 0.692, + 0.269 + ], + "angle": 0, + "content": "3.2. An Overview of \\(\\mathbf{A}^3\\)" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.275, + 0.909, + 0.503 + ], + "angle": 0, + "content": "Figure 1 provides an overview of \\(\\mathrm{A}^3\\). The overall algorithm is in Algorithm 1 of Appendix A.3. \\(\\mathrm{A}^3\\) provides a pool of diverse image and text augmentation strategies \\(\\mathcal{A}_{\\mathrm{im}}\\) and \\(\\mathcal{A}_{\\mathrm{tx}}\\). For each image-class training pair, it first samples \\(K_{\\mathrm{im}}\\) and \\(K_{\\mathrm{tx}}\\) different image and text augmentation strategies, and applies them to the image and text sample respectively. This results in \\(K_{\\mathrm{im}} \\times K_{\\mathrm{tx}}\\) distinct augmented pairs for each training sample. Following the prompt learning technique of CoCoOp [42], it then optimizes the prompt embeddings \\(\\mathbf{V}\\) and the meta-net weights \\(\\psi\\) to align the pair of augmented samples with the minimum similarity. Intuitively, training prompts with the most dissimilar augmented pairs of image and text features forces the model to learn from the underlying features rather than fixating on spurious correlations typically exploited by UEs." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.513, + 0.902, + 0.53 + ], + "angle": 0, + "content": "3.3. Augmentations for Image and Text Modalities" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.535, + 0.909, + 0.672 + ], + "angle": 0, + "content": "Image Augmentations As noted by Qin et al. [26], the effectiveness of simple augmentation strategies, such as Grayscale and JPEG compression, is greatly diminished in the context of adaptively synthesized UEs. 
To address this, we follow the approach of UEraser [25], which proposes an extensive set of image augmentation strategies, including not only standard techniques (e.g., random cropping and rotation), but also more complex strategies such as fractal-based transformations [21] and TrivialAugment [20]." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.672, + 0.909, + 0.807 + ], + "angle": 0, + "content": "Text Augmentations CLIP models can leverage not only image features but also text embeddings, which makes them particularly effective in zero-shot classification tasks and in few-shot learning facilitated by PL algorithms. In our context of using PL as an effective defense against UEs, we therefore also introduce a set of text augmentation strategies, including random token masking and reordering, which operate in the discrete token space of the text input, and small random rotations of the text embeddings." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.807, + 0.908, + 0.839 + ], + "angle": 0, + "content": "For the details of the augmentation strategies used in \\( \\mathrm{A}^3 \\) for both image and text, please refer to Appendix A.2." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.849, + 0.892, + 0.866 + ], + "angle": 0, + "content": "3.4. Cross-modal Adversarial Feature Alignment" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.871, + 0.908, + 0.902 + ], + "angle": 0, + "content": "Given an image \\( \\mathbf{x} \\) and its corresponding label \\( y \\), we can use the above augmentation strategies to find \\( K_{\\mathrm{im}} \\) augmented" + }, + { + "type": "page_number", + "bbox": [ + 0.482, + 0.945, + 0.516, + 0.957 + ], + "angle": 0, + "content": "9510" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.095, + 0.089, + 0.905, + 0.315 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.318, + 0.908, + 0.364 + ], + "angle": 0, + "content": "Figure 1. An overview of \\( A^3 \\). For each image-class training pair, \\( A^3 \\) samples \\( K_{\\mathrm{im}} \\) and \\( K_{\\mathrm{tx}} \\) different image and text augmentation strategies, respectively (\\( a_{\\mathrm{im}} \\sim \\mathcal{A}_{\\mathrm{im}} \\) and \\( a_{\\mathrm{tx}} \\sim \\mathcal{A}_{\\mathrm{tx}} \\)). It then optimizes the prompt embeddings \\( \\mathbf{V} \\) and the meta-net \\( m_{\\psi} \\) for empirical risk minimization by aligning pairs of augmented samples with the minimum similarity (i.e., maximum loss) between the image and text features."
+ }, + { + "type": "text", + "bbox": [ + 0.09, + 0.387, + 0.484, + 0.468 + ], + "angle": 0, + "content": "images and \( K_{\mathrm{tx}} \) text embeddings, by drawing from the sets of image \( a_{\mathrm{im}} \sim \mathcal{A}_{\mathrm{im}} \) and text \( a_{\mathrm{tx}} \sim \mathcal{A}_{\mathrm{tx}} \) augmentation strategies, and applying them to the image and the text embeddings, respectively, forming a set of augmented images \( \tilde{\mathbf{x}} \triangleq \{\tilde{\mathbf{x}}_i\}_{i=1}^{K_{\mathrm{im}}} \) and a set of text embeddings \( \tilde{\mathbf{t}} \triangleq \{\tilde{\mathbf{t}}_j\}_{j=1}^{K_{\mathrm{tx}}} \):" + }, + { + "type": "equation", + "bbox": [ + 0.09, + 0.478, + 0.484, + 0.495 + ], + "angle": 0, + "content": "\[\n\tilde {\mathbf {x}} _ {i} = a _ {\mathrm {im}} (\mathbf {x}), \quad a _ {\mathrm {im}} \sim \mathcal {A} _ {\mathrm {im}}, \quad \text {for } i \in [ 1, \dots , K _ {\mathrm {im}} ],\n\]" + }, + { + "type": "equation", + "bbox": [ + 0.093, + 0.497, + 0.484, + 0.527 + ], + "angle": 0, + "content": "\[\n\tilde {\mathbf {t}} _ {j} = a _ {\mathrm {tx}} \left(\left[ \mathbf {v} _ {j \bmod M}, \mathbf {c} _ {y} \right]\right), \quad a _ {\mathrm {tx}} \sim \mathcal {A} _ {\mathrm {tx}}, \quad \text {for } j \in [ 1, \dots , K _ {\mathrm {tx}} ], \tag {7}\n\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.528, + 0.484, + 0.618 + ], + "angle": 0, + "content": "Here, we note that the text embedding before augmentation \([\mathbf{v}_{j \bmod M},\mathbf{c}_y]\) is a concatenation of the prompt embedding \(\mathbf{v}_{j \bmod M}\) and the class embedding \(\mathbf{c}_y\). Recall that \(M\) is the number of prompt embeddings, and the modulus operation \(j \bmod M\) ensures that the prompt embedding is selected cyclically if \(K_{\mathrm{tx}}\) exceeds \(M\)." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.618, + 0.484, + 0.709 + ], + "angle": 0, + "content": "Using the augmented samples, we can compute the similarity between the image and text features for each pair of augmented samples. Let \( \mathcal{S}(\tilde{\mathbf{x}},\tilde{\mathbf{t}})\in [-1,1]^{K_{\mathrm{im}}\times K_{\mathrm{tx}}} \) denote the (cosine) similarity matrix over all augmented image-text pairs. Namely, for the \( \tilde{\mathbf{x}}_i \) and \( \tilde{\mathbf{t}}_j \) pair, we have:" + }, + { + "type": "equation", + "bbox": [ + 0.173, + 0.719, + 0.484, + 0.739 + ], + "angle": 0, + "content": "\[\n\mathcal {S} (\tilde {\mathbf {x}}, \tilde {\mathbf {t}}) _ {i j} = \operatorname {sim} \left(f _ {\mathrm {im}} \left(\tilde {\mathbf {x}} _ {i}\right), f _ {\mathrm {tx}} \left(\tilde {\mathbf {t}} _ {j}\right)\right), \tag {8}\n\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.749, + 0.484, + 0.825 + ], + "angle": 0, + "content": "and we optimize the trainable weights in \(\phi\) by maximizing the similarity alignment between the least similar pair of augmented image and text features. 
Putting it all together, we have the following min-max problem, which can be optimized using mini-batch stochastic gradient descent (SGD):" + }, + { + "type": "equation", + "bbox": [ + 0.138, + 0.835, + 0.484, + 0.862 + ], + "angle": 0, + "content": "\[\n\min _ {\phi} \mathbb {E} _ {(\mathbf {x}, y) \sim \mathcal {D} _ {\mathrm {ue}}} \left[ \max _ {(i, j)} \mathcal {L} \left(\mathcal {S} (\tilde {\mathbf {x}}, \tilde {\mathbf {t}}) _ {i j}, y\right) \right], \tag {9}\n\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.871, + 0.484, + 0.902 + ], + "angle": 0, + "content": "where \(\mathcal{L}\) is the softmax cross-entropy loss, and \(\mathcal{D}_{\mathrm{ue}}\) is the set of unlearnable training examples." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.387, + 0.896, + 0.403 + ], + "angle": 0, + "content": "3.5. More Augmentation Diversity with Meta-Net" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.41, + 0.908, + 0.591 + ], + "angle": 0, + "content": "To further enhance the diversity of the augmented samples, we introduce a meta-net \(m_{\psi}:\mathbb{R}^d\to \mathbb{R}^{M\times (T - 1)\times h}\) with trainable weights \(\psi\), which is a small neural network that learns to predict prompt embeddings \(\mathbf{V}\) from \(f_{\mathrm{im}}(\tilde{\mathbf{x}})\), i.e., the feature extracted from an augmented image by the image encoder. This allows us to generate more diverse augmented text embeddings in addition to the text augmentation strategies. While this could yield \(K_{\mathrm{im}}\times K_{\mathrm{tx}}\) different augmented prompt features, given the computational cost of CLIP's feature extraction, we only apply the meta-net to one randomly chosen augmented image for each augmented text embedding. Using the meta-net, the similarity matrix in (8) thus becomes:" + }, + { + "type": "equation", + "bbox": [ + 0.542, + 0.605, + 0.71, + 0.623 + ], + "angle": 0, + "content": "\[\n\mathcal {S} (\tilde {\mathbf {x}}, \tilde {\mathbf {t}}) _ {i j} = \operatorname {sim} \left(\mathbf {p} _ {i}, \mathbf {q} _ {j}\right), \tag {10}\n\]" + }, + { + "type": "text", + "bbox": [ + 0.532, + 0.626, + 0.907, + 0.641 + ], + "angle": 0, + "content": "where \(\mathbf{p}_i = f_{\mathrm{im}}(\tilde{\mathbf{x}}_i)\) and" + }, + { + "type": "equation", + "bbox": [ + 0.585, + 0.644, + 0.863, + 0.661 + ], + "angle": 0, + "content": "\[\n\mathbf {q} _ {j} = f _ {\mathrm {tx}} \left(\tilde {\mathbf {t}} _ {j} + m _ {\psi} \left(\mathbf {p} _ {k}\right)\right), \quad k \sim \mathcal {U} \{1, K _ {\mathrm {im}} \}.\n\]" + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.676, + 0.645, + 0.694 + ], + "angle": 0, + "content": "4. Experiments" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.703, + 0.907, + 0.825 + ], + "angle": 0, + "content": "Datasets For the main experiments, we evaluate \(A^3\) on 7 datasets: ImageNet [6], Caltech-101 [8], Oxford Flowers-102 [22], Food-101 [3], Oxford-Pets [23], SUN397 [37], and UCF-101 [31]. These datasets cover various recognition tasks, including classification of generic objects, fine-grained classification, and action recognition. We also proportionally resized and cropped all images to \(224 \times 224\), the input size for the image encoder \(f_{\mathrm{im}}\)."
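To make the min-max objective in Eq. (9) concrete, here is a minimal PyTorch-style sketch (not the authors' code) of the inner maximization over augmented pairs; `f_im` and `f_tx` stand in for the frozen CLIP encoders, and a practical implementation would batch the \( K_{\mathrm{im}} \times K_{\mathrm{tx}} \) pairs rather than looping:

```python
import torch
import torch.nn.functional as F

def worst_pair_loss(f_im, f_tx, aug_images, aug_texts, label):
    """Inner maximization of Eq. (9): the largest cross-entropy loss over
    all K_im x K_tx augmented image-text pairs.

    aug_images: list of K_im augmented image tensors.
    aug_texts:  list of K_tx augmented prompt tensors; f_tx(t) is assumed
                to return one (num_classes, d) text feature matrix.
    """
    worst = None
    for x in aug_images:                          # i = 1, ..., K_im
        p = F.normalize(f_im(x), dim=-1)          # image feature p_i
        for t in aug_texts:                       # j = 1, ..., K_tx
            q = F.normalize(f_tx(t), dim=-1)      # per-class text features
            logits = q @ p                        # cosine similarities, Eq. (8)
            loss = F.cross_entropy(logits.unsqueeze(0),
                                   torch.tensor([label]))
            worst = loss if worst is None else torch.maximum(worst, loss)
    return worst  # minimize w.r.t. the prompt/meta-net parameters by SGD
```

Minimizing this worst-pair loss with mini-batch SGD over the prompt embeddings \( \mathbf{V} \) and the meta-net weights \( \psi \) realizes the outer minimization in Eq. (9).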
+ }, + { + "type": "text", + "bbox": [ + 0.512, + 0.826, + 0.907, + 0.902 + ], + "angle": 0, + "content": "Models In our experiments, unless otherwise specified, we use either the ViT-B/16 or ResNet-50 as the backbone for the image encoder \( f_{\mathrm{im}} \), and the text encoder \( f_{\mathrm{tx}} \) is a Transformer-based model [34]. All pretrained models are obtained from the official CLIP repository [27]." + }, + { + "type": "page_number", + "bbox": [ + 0.482, + 0.945, + 0.514, + 0.957 + ], + "angle": 0, + "content": "9511" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.092, + 0.484, + 0.242 + ], + "angle": 0, + "content": "Evaluation For all experiments in this section, unless otherwise specified, we consider a common few-shot learning setup with \( S \) labeled training examples per class, i.e., \( S \)-shot learning, where \( S = 16 \) by default. We used a context length \( T \) of 4 and \( M = 1 \) prompt embedding. For all surrogate-based attacks, we optimized perturbations \( \delta \) for 15 epochs with the cosine annealing scheduler and a learning rate of 0.002. For learning, we adopted the SGD optimizer with a momentum of 0.9 and a weight decay of \( 5 \times 10^{-4} \). We consider the following dataset split protocols:" + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.244, + 0.484, + 0.303 + ], + "angle": 0, + "content": "- Standard We used the standard train-test splits from [42, 43] to ensure reproducibility. In this setting, all classes are included in the training phase, where each class contains \(S\) labeled examples." + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.304, + 0.484, + 0.456 + ], + "angle": 0, + "content": "- Base-to-Novel To better evaluate the model's generalization ability in few-shot scenarios, we also followed [42] to divide each dataset into two equal and non-overlapping groups of classes, where the first group (base) is used for training the prompt learning model and for validation, and the second group (novel), which contains unseen classes, is used for performance testing. For this protocol, we reported the test accuracies on the base classes \(\alpha_{\mathrm{b}}\), the novel classes \(\alpha_{\mathrm{n}}\), and also their harmonic mean \(\alpha_{\mathrm{h}} = 2 / (\alpha_{\mathrm{b}}^{-1} + \alpha_{\mathrm{n}}^{-1})\)." + }, + { + "type": "list", + "bbox": [ + 0.091, + 0.244, + 0.484, + 0.456 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.457, + 0.484, + 0.486 + ], + "angle": 0, + "content": "Unlearnable Example Methods Different methods consider distinct perturbation types and perturbation budgets:" + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.487, + 0.483, + 0.563 + ], + "angle": 0, + "content": "- Surrogate-based methods such as EM [11], REM [9], and HYPO [33] consider \(\ell_{\infty}\)-bounded perturbations, with a perturbation budget of \(8/255\). As these methods require surrogate models, we used the framework in Section 3.1 to adapt them to the prompt learning setting." + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.563, + 0.484, + 0.621 + ], + "angle": 0, + "content": "- LSP [40] and AR [29] both use \(\ell_2\)-norm perturbations, but their perturbation budgets differ due to their original setups. LSP assumes a perturbation budget of 1.30, while AR uses 1.00."
+ }, + { + "type": "text", + "bbox": [ + 0.091, + 0.623, + 0.484, + 0.653 + ], + "angle": 0, + "content": "- OPS [36] is model-agnostic, and uses \(\ell_0\)-norm perturbations with a perturbation budget of 1 by default." + }, + { + "type": "list", + "bbox": [ + 0.091, + 0.487, + 0.484, + 0.653 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.654, + 0.484, + 0.73 + ], + "angle": 0, + "content": "For additional details regarding the experimental setup, please refer to Appendix B. We will now present the results and main findings of our experiments below. Appendix C provides additional results, including a sensitivity analysis of the hyperparameters and more adaptive variants." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.742, + 0.348, + 0.758 + ], + "angle": 0, + "content": "4.1. Prompt Learning under UEs" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.765, + 0.484, + 0.901 + ], + "angle": 0, + "content": "Prompt learning generalizes well to existing UEs. Table 1 compares EM-0 and EM, where EM-0 synthesizes UEs by training ResNet-18 surrogate models from scratch on the Caltech-101 dataset, and EM adapts the EM method to prompt learning using the framework in Section 3.1. Notably, we found that UE methods that can effectively thwart [26] supervised learning of small models (e.g., ResNet-18) on small datasets (e.g., CIFAR-10 [14]) are much less effective when transferred to CLIP-based prompt learning algorithms." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.092, + 0.905, + 0.198 + ], + "angle": 0, + "content": "Prompt learning generalizes better with an increasing number of shots \( S \). Table 1 also shows that the performance of all prompt learning algorithms increases with the number of shots \( S \), even when all shots are UEs. This trend can be observed across all UE methods, but the adaptive EM method consistently suppresses the performance gains from increasing \( S \) the most." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.199, + 0.907, + 0.456 + ], + "angle": 0, + "content": "CoCoOp and ProDA show increased robustness against UEs, while KgCoOp is the most fragile in Table 1. We speculate that this is because the meta-net used in CoCoOp may be able to absorb the shortcut present in the UEs, and ProDA models the Gaussian distribution of prompt embeddings, making it more robust to input-side perturbations. When trained on clean data, KgCoOp [39] exhibits the best performance, as shown in the "Clean" row of Table 1. However, it is notably prone to UEs, showing the largest performance drop when UEs are introduced, especially when using the adaptive EM method. This suggests that KgCoOp, guided by the regularization to stay in close proximity to the initial manual prompts, cannot effectively evade UEs crafted by EM based on those same initial prompts. Because of the robustness of CoCoOp under our default number of shots (\(S = 16\)), we chose it as the baseline algorithm for the subsequent experiments." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.456, + 0.909, + 0.547 + ], + "angle": 0, + "content": "Without proper defenses, it is better not to learn from UEs at all. Interestingly, the performance of CoCoOp trained on UEs is usually worse than its zero-shot performance. This suggests that UEs can indeed be harmful to the model's performance. This behavior can be observed in Tables 1 and 2." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.558, + 0.744, + 0.575 + ], + "angle": 0, + "content": "4.2. 
Prompt Learning with \(\mathbf{A}^3\)" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.581, + 0.906, + 0.672 + ], + "angle": 0, + "content": "\(\mathbf{A}^3\) is very effective (up to \(33\%\) better than CoCoOp) in mitigating UEs. In Table 2, we observe that \(\mathrm{A}^3\) consistently outperforms CoCoOp across all datasets, producing \(15\%\) to \(33\%\) higher \(\alpha_{\mathrm{h}}\) than CoCoOp for surrogate-based methods (EM, REM, HYPO), and \(6\%\) to \(30\%\) higher for surrogate-free methods (LSP, AR, OPS)." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.672, + 0.907, + 0.779 + ], + "angle": 0, + "content": "The image and text augmentations in \(\mathbf{A}^3\) are both crucial for its effectiveness. Table 3 presents an ablation study on the individual contributions of image and text augmentations. We note that using either image-only or text-only augmentations can improve the model's performance, but the combination of both is the most effective, giving large accuracy gains over CoCoOp." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.779, + 0.907, + 0.87 + ], + "angle": 0, + "content": "Simple augmentation strategies fall short of \(\mathbf{A}^3\)'s performance. We also highlight in Table 3 that applying simple augmentation strategies such as Grayscale and JPEG compression to CoCoOp can certainly improve over the CoCoOp baseline. However, they are still outperformed by \(\mathbf{A}^3\), sometimes by a large margin (\(\geq 25\%\))." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.871, + 0.906, + 0.901 + ], + "angle": 0, + "content": "A large arsenal of image augmentation strategies also falls short of \( \mathrm{A}^3 \)'s performance. Table 3 also shows that while" + }, + { + "type": "page_number", + "bbox": [ + 0.482, + 0.945, + 0.516, + 0.957 + ], + "angle": 0, + "content": "9512" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.09, + 0.089, + 0.485, + 0.228 + ], + "angle": 0, + "content": "Table 1. Test accuracies \((\%)\) of prompt learning algorithms on an unlearnability-poisoned Caltech-101 dataset under a varying number of shots \(S\). Note that "EM-0" is the original EM [11] method, and "EM" is our adaptive variant (Section 3.1). The image encoder of the CLIP model is ResNet-50, and the zero-shot accuracy is \(86.00\%\). We also report the average accuracy across unlearnability methods (the "Avg." rows) and prompt learning algorithms (the "Avg." column). We highlight the best prompt learning algorithm against each unlearnability method in bold, and underline the strongest unlearnability methods for each prompt learning algorithm." + }, + { + "type": "table", + "bbox": [ + 0.093, + 0.228, + 0.487, + 0.515 + ], + "angle": 0, + "content": "
SMethodCoOpCoCoOpProDAKgCoOpAvg.
2EM-078.2380.4581.9177.8679.61
EM61.7864.1365.0559.2662.56
OPS74.4076.9175.7573.1075.04
AR80.6381.8380.7479.8980.77
Avg.73.7675.8375.8672.5374.50
4EM-083.2985.1986.0280.8383.83
EM68.4170.4570.1464.9168.48
OPS78.8080.8880.1578.0779.48
AR82.5483.7084.4182.6683.33
Avg.78.2680.0680.1876.6278.78
8EM-086.1889.4490.6485.4287.92
EM70.5072.2873.0367.0770.72
OPS80.7483.3582.6379.3181.51
AR85.8188.4188.5083.7786.62
Avg.80.8183.3783.7078.8981.69
16EM-090.7690.8590.4889.6090.42
EM71.4272.8372.1069.3471.42
OPS82.4084.0984.4380.0282.74
AR88.6390.7490.0886.4688.98
Avg.83.3084.6384.2781.3683.39
16Clean91.2091.7091.6091.8091.58
" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.534, + 0.484, + 0.684 + ], + "angle": 0, + "content": "UEraser [25] is the most effective strategy to learn from UEs among all tested existing methods, the performance of UEraser is still inferior to \\( \\mathrm{A}^3 \\), particularly on HYPO, LSP and AR, with increased perturbation budgets. To preserve image semantics while maximizing augmentation diversity, such image augmentation strategies are often designed with a balance between the two. This trade-off choice may limit the ability to suppress UE perturbations in images. While this is also true for \\( \\mathrm{A}^3 \\), but the additional text augmentation strategies of \\( \\mathrm{A}^3 \\) can help to work around this limitation." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.687, + 0.484, + 0.822 + ], + "angle": 0, + "content": "Adversarial training (AT) may not be the most effective defense against UEs. As the settings of AT in Table 3 assume \\(\\ell_{\\infty}\\)-bounded perturbations with \\(\\epsilon = 8 / 255\\), it notably struggles against large \\(\\ell_{\\infty}\\) perturbation budgets (\\(\\epsilon = 16 / 255\\)), and other types of perturbation norm-bounds (\\(\\ell_{2}\\) and \\(\\ell_{0}\\)), as it is not designed to handle them. There may also be an intricate balance between accuracy and robustness [38], which could result in a seesaw effect in the performance as the perturbation budget increases." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.825, + 0.485, + 0.901 + ], + "angle": 0, + "content": "Increasing the number of shots \\(S\\) improves \\(\\mathbf{A}^3\\)'s performance. In Table 4, we found that while CoCoOp's performance metrics continue to improve with increasing \\(S\\), it never surpassed the performance of zero-shot CLIP. This echoes the findings in Table 1. On the other hand, \\(\\mathbf{A}^3\\) im" + }, + { + "type": "table_caption", + "bbox": [ + 0.512, + 0.089, + 0.908, + 0.214 + ], + "angle": 0, + "content": "Table 2. Base-to-novel prompt learning accuracies (\\%) for CoCoOp and \\( \\mathrm{A}^3 \\) trained with unlearnable examples under \\( \\ell_{\\infty} \\)-bounded attacks. Rows with \"+\" indicate when \\( \\mathrm{A}^3 \\) is applied. \\( \\alpha_{\\mathrm{b}} \\) refers to the model accuracy on poisoned data, while \\( \\alpha_{\\mathrm{n}} \\) refers to the model accuracy on novel classes, excluding the poisoned classes. We also report the harmonic mean \\( \\alpha_{\\mathrm{h}} = 2 / (\\alpha_{\\mathrm{b}}^{-1} + \\alpha_{\\mathrm{n}}^{-1}) \\). For the \"Δ\" column, we report the test accuracy drop for the CoCoOp baseline from the clean training setting, and accuracy gain for \\( \\mathrm{A}^3 \\) over CoCoOp. The backbone is VIT-B/16." + }, + { + "type": "table", + "bbox": [ + 0.513, + 0.214, + 0.907, + 0.899 + ], + "angle": 0, + "content": "
ImNetCaltechPetsFlowersFoodSUNUCFAvg.Δ
Zero-Shot [27]
αb67.5092.6087.4067.9082.9064.8066.1075.60
αn60.2087.4083.7060.8075.5056.9059.5069.14
αh63.6489.0085.5264.8079.0560.6062.8172.20
Baseline (CoCoOp [42])
αb75.2596.3094.3592.8690.1878.3880.5386.84
αn69.4393.2396.8870.1790.8074.7273.2881.22
αh72.3994.7495.5879.7490.3476.2876.6183.67
+αb76.0397.9594.0692.4190.6877.5980.2587.00
+αn70.4793.8093.1870.2991.2974.8371.8980.82
+αh73.0995.7393.6379.7490.9876.1475.8883.60
EM [11] (l∞, ε = 8/255)
αb56.4780.6878.4474.4079.9358.9063.7170.36-16.48
αn43.2774.9076.3851.6678.1553.3352.0961.40-19.82
αh48.7777.5477.4061.0379.0456.0157.3865.31-18.36
+αb73.4895.4993.7191.8489.3877.6579.3485.8215.46
+αn67.8593.0793.5368.0888.0073.1771.6979.3017.90
+αh70.4194.3493.7478.2088.8775.1875.1282.2716.96
REM [9] (l∞, ε = 8/255)
αb43.5162.6364.6061.9860.2148.2950.5355.94-30.90
αn30.4954.7760.1246.6458.0844.3041.9848.05-33.16
αh35.7458.3962.2553.1759.0646.1846.0251.54-32.12
+αb72.7694.5293.2891.0088.0676.9478.4685.0029.06
+αn66.4292.1691.9466.7487.3471.9871.2278.2630.21
+αh69.3793.6992.7476.7387.5474.4374.9481.3529.80
HYPO [33] (l∞, ε = 8/255)
αb40.0858.5959.8357.3356.5044.1147.6652.01-34.83
αn30.6454.8257.1144.7656.9041.2543.5847.01-34.21
αh34.6856.5758.4650.3056.5542.4745.4549.21-34.46
+αb71.8094.2893.5990.7588.5577.6778.9285.0833.07
+αn65.2990.3792.0165.2584.9370.0469.8576.8229.81
+αh68.3992.1393.0576.2086.6973.7374.4380.6631.45
LSP [40] (l2, ε = 1.30)
αb49.7268.4965.9963.3361.8850.6951.4758.80-28.04
αn36.0456.2963.3550.6259.4146.2048.8851.54-29.67
αh41.7961.8464.4756.1860.5348.4350.0454.75-28.91
+αb72.5394.9794.0291.2988.6477.0478.1785.2426.44
+αn67.3792.4493.8067.5287.4072.2171.1178.8427.30
+αh69.9793.7494.2777.0987.7774.4774.5481.6926.94
AR [29] (l2, ε = 1.00)
αb42.3361.7360.5859.0157.2046.2349.0653.73-33.11
αn31.6756.4258.0844.5455.9643.3042.6947.52-33.69
αh36.2959.0059.2750.6356.5244.7445.6750.30-33.37
+αb71.6894.3892.5590.8287.7776.6777.5884.4930.76
+αn66.0991.5991.6766.0686.2571.1970.8677.6730.15
+αh68.7492.9892.1078.4087.0073.9374.2181.0530.75
OPS [36] (l0, ε = 1)
αb68.2888.6086.2081.5985.6565.6068.9877.84-9.00
αn54.5080.2180.8358.7079.0660.4960.3267.73-13.49
αh60.5784.0583.4568.2882.2462.8964.3972.37-11.40
+αb73.0895.1093.3991.5089.5676.9778.1383.967.55
+αn67.2092.6393.8868.0388.0172.6471.2879.1011.37
+αh70.0693.8693.6378.0588.8774.6974.5081.429.68
" + }, + { + "type": "page_number", + "bbox": [ + 0.482, + 0.945, + 0.516, + 0.957 + ], + "angle": 0, + "content": "9513" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.09, + 0.089, + 0.56, + 0.145 + ], + "angle": 0, + "content": "Table 3. Clean test accuracies (\\%) on Caltech-101 datasets (16-shots, standard protocol). \"UEr\" means UEraser. Baseline and compared methods are adapted to CoCoOp [42]. Image encoder backbone is ResNet-50. \"Text\" and \"Image\" refer to the augmented modalities, \"Full\" includes both." + }, + { + "type": "table", + "bbox": [ + 0.092, + 0.146, + 0.56, + 0.33 + ], + "angle": 0, + "content": "
BaselineGrayJPEGATUErTextImageFull
EM8/25575.5374.2879.2182.8490.3784.6992.5194.28
16/25559.3660.9664.5877.0388.7982.4590.4091.86
REM8/25552.3363.7170.2278.9690.7386.8992.3293.88
16/25537.9759.5466.7874.1086.9683.3389.3590.25
HYPO8/25547.3650.0264.5678.8987.7484.0791.4793.21
16/25527.1834.6759.1172.2682.4879.2786.0989.33
LSP1.3043.2364.7080.0181.5890.6683.3792.8394.02
1.7425.4142.5868.3076.2983.0580.7787.2491.62
AR1.0050.1853.2481.4076.7390.1284.9691.6393.41
1.3032.6535.5769.2664.2282.8981.1786.6190.06
OPS186.1786.5289.7383.1688.6189.9390.5093.86
473.2473.3480.2369.5980.1281.3784.0787.10
" + }, + { + "type": "table_caption", + "bbox": [ + 0.584, + 0.091, + 0.908, + 0.131 + ], + "angle": 0, + "content": "Table 4. Base-to-novel metrics for different number of shots \\( S \\in \\{ 0,2,4,8,{16}\\} \\) . The image encoder backbone is ResNet-50." + }, + { + "type": "table", + "bbox": [ + 0.587, + 0.133, + 0.907, + 0.328 + ], + "angle": 0, + "content": "
SCaltechFoodImageNet
A3-+-+-+
0αb86.778.162.7
αn78.474.952.8
αh82.676.457.2
2αb69.3488.5166.1184.3444.0466.04
αn61.7883.4560.2981.7730.960.03
αh65.3485.9163.0783.0436.3262.89
4αb76.0991.2375.2187.8150.4368.1
αn67.5187.9366.1986.1536.6162.9
αh71.5489.5570.4186.9742.4265.4
8αb77.8692.8975.8588.0351.6569.01
αn70.1989.1170.0986.4439.2264.58
αh73.8390.9672.8687.2344.5866.72
16αb77.5293.3676.6788.5253.7271.08
αn72.4391.0273.3487.7640.7464.95
αh74.8992.1874.9788.1446.3467.88
" + }, + { + "type": "image_footnote", + "bbox": [ + 0.13, + 0.344, + 0.447, + 0.359 + ], + "angle": 0, + "content": "αoriginal αunlearn 01 αtest αtrain" + }, + { + "type": "image", + "bbox": [ + 0.095, + 0.362, + 0.287, + 0.445 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.144, + 0.446, + 0.246, + 0.457 + ], + "angle": 0, + "content": "(a) \\(\\mathrm{EM} + \\mathrm{CoCoOp}\\)" + }, + { + "type": "image", + "bbox": [ + 0.293, + 0.362, + 0.487, + 0.445 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.36, + 0.446, + 0.43, + 0.456 + ], + "angle": 0, + "content": "(b) \\(\\mathrm{EM} + \\mathrm{A}^3\\)" + }, + { + "type": "image", + "bbox": [ + 0.095, + 0.457, + 0.288, + 0.541 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.14, + 0.542, + 0.25, + 0.553 + ], + "angle": 0, + "content": "(c) \\(\\mathrm{REM} + \\mathrm{CoCoOp}\\)" + }, + { + "type": "image", + "bbox": [ + 0.294, + 0.457, + 0.488, + 0.541 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.356, + 0.542, + 0.434, + 0.553 + ], + "angle": 0, + "content": "(d) \\(\\mathrm{REM} + \\mathrm{A}^3\\)" + }, + { + "type": "image", + "bbox": [ + 0.095, + 0.554, + 0.287, + 0.638 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.144, + 0.639, + 0.245, + 0.649 + ], + "angle": 0, + "content": "(e) \\(\\mathrm{AR} + \\mathrm{CoCoOp}\\)" + }, + { + "type": "image", + "bbox": [ + 0.293, + 0.554, + 0.488, + 0.638 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.362, + 0.638, + 0.428, + 0.649 + ], + "angle": 0, + "content": "(f) \\(\\mathrm{AR} + \\mathrm{A}^3\\)" + }, + { + "type": "image", + "bbox": [ + 0.095, + 0.65, + 0.288, + 0.734 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.142, + 0.734, + 0.248, + 0.746 + ], + "angle": 0, + "content": "(g) \\(\\mathrm{LSP} + \\mathrm{CoCoOp}\\)" + }, + { + "type": "image", + "bbox": [ + 0.293, + 0.65, + 0.488, + 0.733 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.359, + 0.734, + 0.43, + 0.745 + ], + "angle": 0, + "content": "(h) \\(\\mathrm{LSP} + \\mathrm{A}^3\\)" + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.748, + 0.483, + 0.818 + ], + "angle": 0, + "content": "Figure 2. CoCoOp vs. A3 under partial poisoning with rates \\( R \\in \\{\\frac{1}{8}, \\frac{1}{4}, \\frac{1}{2}, 1\\} \\) (x-axis, %). Accuracy metrics (y-axis, %): \\( \\alpha_{\\text{unlearn}} = \\text{UEs} \\) in the training set; \\( \\alpha_{\\text{original}} = \\text{original clean images of the UEs} \\); \\( \\alpha_{\\text{test}} = \\text{clean images in the test set} \\); \\( \\alpha_{\\text{train}} = \\text{clean images in the training set} \\). The image encoder backbone is ResNet-50." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.825, + 0.485, + 0.856 + ], + "angle": 0, + "content": "proves notably with increasing \\( S \\), where CoCoOp falls behind while \\( A^3 \\) leads by a large margin." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.856, + 0.486, + 0.901 + ], + "angle": 0, + "content": "With partially poisoned datasets, \\(\\mathbf{A}^3\\) learns the underlying features while CoCoOp likely does not. 
In practice, model trainers may curate datasets from a variety of sources," + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.344, + 0.909, + 0.66 + ], + "angle": 0, + "content": "and only a portion of the data may contain UE perturbations. Training on such partially poisoned datasets typically result in minimal performance loss over the clean dataset, and is not indicative of the model's ability to learn the underlying features of the UEs. In Figure 2, we thus investigate whether CoCoOp and \\(\\mathbf{A}^3\\) can learn such features when trained on partially poisoned datasets. It evaluates the accuracy metrics of the unlearnable part (\\(\\alpha_{\\mathrm{unlearn}}\\)), the original images of the unlearnable part (\\(\\alpha_{\\mathrm{original}}\\), i.e., before perturbation), the samples from the test set (\\(\\alpha_{\\mathrm{test}}\\)), and the clean part of the train set (\\(\\alpha_{\\mathrm{train}}\\)). Importantly, if the model can learn the underlying features, then the \\(\\alpha_{\\mathrm{original}}\\) should be close to \\(\\alpha_{\\mathrm{train}}\\), otherwise, it should be close to \\(\\alpha_{\\mathrm{test}}\\). It is evident that CoCoOp actually struggles to learn useful features from the UEs, as \\(\\alpha_{\\mathrm{original}}\\) closely tracks \\(\\alpha_{\\mathrm{test}}\\), while \\(\\alpha_{\\mathrm{unlearn}}\\) is higher than \\(\\alpha_{\\mathrm{train}}\\). This suggests that CoCoOp is likely overfitting to the UE perturbations, even more so than the clean training data. In contrast, \\(\\mathbf{A}^3\\) shows that \\(\\alpha_{\\mathrm{original}}\\) follows \\(\\alpha_{\\mathrm{train}}\\) closely, and \\(\\alpha_{\\mathrm{unlearn}}\\) is close to \\(\\alpha_{\\mathrm{test}}\\), hinting that \\(\\mathbf{A}^3\\) is learning the underlying features instead of the UE-crafted perturbations." + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.678, + 0.634, + 0.695 + ], + "angle": 0, + "content": "5. Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.704, + 0.909, + 0.901 + ], + "angle": 0, + "content": "First, PLs' generalization combined with \\( A^3 \\) make them challenging adversaries for traditional UEs. Second, Augmenting PL with diverse image and text perturbations significantly improves their resilience against UEs, pointing to the need for multimodal considerations in both UEs and countermeasures. Third, compared to simpler augmentations or adversarial training, \\( A^3 \\)'s cross-modal feature alignment proved especially effective in mitigating PL's adaptation to UEs than preexisting learning methods. Finally, we emphasize the need for adaptive, multimodal approaches in UEs and open pathways toward more sophisticated protections against unauthorized training in an era of large multimodal and pretrained models." 
+ }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.945, + 0.516, + 0.957 + ], + "angle": 0, + "content": "9514" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.092, + 0.091, + 0.241, + 0.108 + ], + "angle": 0, + "content": "Acknowledgment" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.114, + 0.486, + 0.254 + ], + "angle": 0, + "content": "This work is supported in part by National Natural Science Foundation of China (62376263, 62372443 and 62271496), Guangdong Basic and Applied Basic Research Foundation (2023B1515130002), Natural Science Foundation of Guangdong (2024A1515030209 and 2024A1515011970), Shenzhen Science and Technology Innovation Commission (JCYJ20230807140507015 and JCYJ20220531100804009), and Yu-Liang Lu's Project Team Development Funding (KY23A102). This work was carried out in part at SICC, which is supported by SKL-IOTSC, University of Macau." + }, + { + "type": "title", + "bbox": [ + 0.093, + 0.279, + 0.188, + 0.294 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.304, + 0.486, + 0.387 + ], + "angle": 0, + "content": "[1] Manuele Barraco, Marcella Cornia, Silvia Cascianelli, Lorenzo Baraldi, and Rita Cucchiara. The unreasonable effectiveness of clip features for image captioning: An experimental analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pages 4662-4670, 2022. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.388, + 0.484, + 0.442 + ], + "angle": 0, + "content": "[2] Battista Biggio and Fabio Roli. Wild patterns: Ten years after the rise of adversarial machine learning. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, pages 2154-2156, 2018. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.444, + 0.484, + 0.498 + ], + "angle": 0, + "content": "[3] Lukas Bossard, Matthieu Guillaumin, and Luc Van Gool. Food-101 – mining discriminative components with random forests. In Computer Vision – ECCV 2014, pages 446–461. Springer International Publishing, 2014. 5, 11" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.5, + 0.484, + 0.567 + ], + "angle": 0, + "content": "[4] Xinquan Chen, Xitong Gao, Juanjuan Zhao, Kejiang Ye, and Cheng-Zhong Xu. AdvDiffuser: Natural adversarial example synthesis with diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4562-4572, 2023. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.568, + 0.484, + 0.621 + ], + "angle": 0, + "content": "[5] Jacob Clarysse, Julia Hörrmann, and Fanny Yang. Why adversarial training can hurt robust accuracy. In The Eleventh International Conference on Learning Representations, 2023. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.624, + 0.484, + 0.679 + ], + "angle": 0, + "content": "[6] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248-255. IEEE, 2009. 5, 11" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.68, + 0.484, + 0.734 + ], + "angle": 0, + "content": "[7] Prafulla Dhariwal and Alexander Nichol. Diffusion models beat GANs on image synthesis. In Advances in Neural Information Processing Systems, pages 8780-8794. Curran Associates, Inc., 2021. 
3" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.735, + 0.484, + 0.804 + ], + "angle": 0, + "content": "[8] Li Fei-Fei, Rob Fergus, and Pietro Perona. Learning generative visual models from few training examples: An incremental Bayesian approach tested on 101 object categories. In 2004 conference on computer vision and pattern recognition workshop, pages 178-178. IEEE, 2004. 5, 11" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.805, + 0.484, + 0.859 + ], + "angle": 0, + "content": "[9] Shaopeng Fu, Fengxiang He, Yang Liu, Li Shen, and Dacheng Tao. Robust unlearnable examples: Protecting data against adversarial learning. In International Conference on Learning Representations, 2022. 1, 2, 4, 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.86, + 0.484, + 0.9 + ], + "angle": 0, + "content": "[10] Micah Goldblum, Dimitris Tsipras, Chulin Xie, and et al. Dataset security for machine learning: Data poisoning, backdoor attacks, and defenses. IEEE Transactions on Pattern" + }, + { + "type": "list", + "bbox": [ + 0.094, + 0.304, + 0.486, + 0.9 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.545, + 0.093, + 0.908, + 0.119 + ], + "angle": 0, + "content": "Analysis and Machine Intelligence, 45(2):1563-1580, 2022. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.122, + 0.907, + 0.178 + ], + "angle": 0, + "content": "[11] Hanxun Huang, Xingjun Ma, Sarah Monazam Erfani, James Bailey, and Yisen Wang. Unlearnable examples: Making personal data unexploitable. In International Conference on Learning Representations, 2021. 1, 2, 3, 4, 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.179, + 0.907, + 0.234 + ], + "angle": 0, + "content": "[12] Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Madry. Adversarial examples are not bugs, they are features. Advances in neural information processing systems, 32, 2019. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.236, + 0.908, + 0.317 + ], + "angle": 0, + "content": "[13] Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. Scaling up visual and vision-language representation learning with noisy text supervision. In International conference on machine learning, pages 4904-4916. PMLR, 2021. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.321, + 0.909, + 0.375 + ], + "angle": 0, + "content": "[14] Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. The CIFAR-10 and CIFAR-100 datasets, 2014. Available at: http://www.cs.toronto.edu/~kriz/cifar.html.6" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.378, + 0.908, + 0.446 + ], + "angle": 0, + "content": "[15] Ziyi Lin, Shijie Geng, Renrui Zhang, Peng Gao, Gerard De Melo, Xiaogang Wang, Jifeng Dai, Yu Qiao, and Hongsheng Li. Frozen clip models are efficient video learners. In European Conference on Computer Vision, pages 388-404. Springer, 2022. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.449, + 0.907, + 0.504 + ], + "angle": 0, + "content": "[16] Zhuoran Liu, Zhengyu Zhao, and Martha Larson. Image shortcut squeezing: Countering perturbative availability poisons with compression. In International conference on machine learning, 2023. 2, 3, 11" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.505, + 0.908, + 0.56 + ], + "angle": 0, + "content": "[17] Yuning Lu, Jianzhuang Liu, Yonggang Zhang, Yajing Liu, and Xinmei Tian. Prompt distribution learning. 
In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2022. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.562, + 0.908, + 0.646 + ], + "angle": 0, + "content": "[18] Yiwei Ma, Guohai Xu, Xiaoshuai Sun, Ming Yan, Ji Zhang, and Rongrong Ji. X-clip: End-to-end multi-grained contrastive learning for video-text retrieval. In Proceedings of the 30th ACM International Conference on Multimedia, page 638–647, New York, NY, USA, 2022. Association for Computing Machinery. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.647, + 0.908, + 0.702 + ], + "angle": 0, + "content": "[19] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017. 2, 3, 11" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.704, + 0.908, + 0.759 + ], + "angle": 0, + "content": "[20] Samuel G Müller and Frank Hutter. Trivialaugment: Tuning-free yet state-of-the-art data augmentation. In Proceedings of the IEEE/CVF international conference on computer vision, pages 774-782, 2021. 4, 11" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.761, + 0.908, + 0.843 + ], + "angle": 0, + "content": "[21] Anguelos Nicolaou, Vincent Christlein, Edgar Riba, Jian Shi, Georg Vogeler, and Mathias Seuret. Tormentor: Deterministic dynamic-path, data augmentations with fractals. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pages 2707-2711, 2022. 4, 11" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.845, + 0.907, + 0.902 + ], + "angle": 0, + "content": "[22] Maria-Elena Nilsback and Andrew Zisserman. Automated flower classification over a large number of classes. In 2008 Sixth Indian conference on computer vision, graphics & image processing, pages 722-729. IEEE, 2008. 5, 11" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.093, + 0.909, + 0.902 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.945, + 0.516, + 0.957 + ], + "angle": 0, + "content": "9515" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.093, + 0.486, + 0.147 + ], + "angle": 0, + "content": "[23] Omkar M Parkhi, Andrea Vedaldi, Andrew Zisserman, and CV Jawahar. Cats and dogs. In 2012 IEEE conference on computer vision and pattern recognition, pages 3498-3505. IEEE, 2012. 5, 11" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.149, + 0.485, + 0.218 + ], + "angle": 0, + "content": "[24] Tianrui Qin, Xitong Gao, Juanjuan Zhao, and Kejiang Ye. Destruction-restoration suppresses data protection perturbations against diffusion models. In 2023 IEEE 35th International Conference on Tools with Artificial Intelligence (ICTAI), pages 586-594. IEEE, 2023. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.219, + 0.483, + 0.287 + ], + "angle": 0, + "content": "[25] Tianrui Qin, Xitong Gao, Juanjuan Zhao, Kejiang Ye, and Cheng-Zhong Xu. Learning the unlearnable: Adversarial augmentations suppress unlearnable example attacks. In 4th Workshop on Adversarial Robustness In the Real World (AROW), ICCV 2023, 2023. 2, 3, 4, 7, 12" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.288, + 0.483, + 0.343 + ], + "angle": 0, + "content": "[26] Tianrui Qin, Xitong Gao, Juanjuan Zhao, Kejiang Ye, and Cheng zhong Xu. APBench: A unified availability poisoning attack and defenses benchmark. Transactions on Machine Learning Research, 2024. 
3, 4, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.344, + 0.483, + 0.426 + ], + "angle": 0, + "content": "[27] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748-8763. PMLR, 2021. 3, 5, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.428, + 0.485, + 0.496 + ], + "angle": 0, + "content": "[28] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10684-10695, 2022. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.497, + 0.485, + 0.564 + ], + "angle": 0, + "content": "[29] Pedro Sandoval-Segura, Vasu Singla, Jonas Geiping, Micah Goldblum, Tom Goldstein, and David Jacobs. Autoregressive perturbations for data poisoning. Advances in Neural Information Processing Systems, 35:27374-27386, 2022. 1, 2, 3, 4, 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.567, + 0.483, + 0.621 + ], + "angle": 0, + "content": "[30] Mainak Singha, Harsh Pal, Ankit Jha, and Biplab Banerjee. Ad-clip: Adapting domains in prompt space using clip. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4355–4364, 2023. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.622, + 0.483, + 0.663 + ], + "angle": 0, + "content": "[31] Khurram Soomro, Amir Roshan Zamir, and Mubarak Shah. UCF101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402, 2012. 5, 11" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.665, + 0.483, + 0.746 + ], + "angle": 0, + "content": "[32] Zeyi Sun, Ye Fang, Tong Wu, Pan Zhang, Yuhang Zang, Shu Kong, Yuanjun Xiong, Dahua Lin, and Jiaqi Wang. Alphaclip: A clip model focusing on wherever you want. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 13019-13029, 2024. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.748, + 0.483, + 0.815 + ], + "angle": 0, + "content": "[33] Lue Tao, Lei Feng, Jinfeng Yi, Sheng-Jun Huang, and Song-can Chen. Better safe than sorry: Preventing delusive adversaries with adversarial training. Advances in Neural Information Processing Systems, 34:16209-16225, 2021. 2, 4, 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.818, + 0.483, + 0.872 + ], + "angle": 0, + "content": "[34] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.873, + 0.483, + 0.902 + ], + "angle": 0, + "content": "[35] Jianyi Wang, Kelvin CK Chan, and Chen Change Loy. Exploring clip for assessing the look and feel of images. In" + }, + { + "type": "list", + "bbox": [ + 0.093, + 0.093, + 0.486, + 0.902 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.545, + 0.093, + 0.908, + 0.12 + ], + "angle": 0, + "content": "Proceedings of the AAAI Conference on Artificial Intelligence, pages 2555-2563, 2023. 
1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.122, + 0.908, + 0.177 + ], + "angle": 0, + "content": "[36] Shutong Wu, Sizhe Chen, Cihang Xie, and Xiaolin Huang. One-pixel shortcut: on the learning preference of deep neural networks. In International Conference on Learning Representations, 2023. 1, 2, 3, 4, 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.179, + 0.907, + 0.247 + ], + "angle": 0, + "content": "[37] Jianxiong Xiao, James Hays, Krista A. Ehinger, Aude Oliva, and Antonio Torralba. Sun database: Large-scale scene recognition from abbey to zoo. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 3485-3492, 2010. 11" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.249, + 0.907, + 0.317 + ], + "angle": 0, + "content": "[38] Yao-Yuan Yang, Cyrus Rashtchian, Hongyang Zhang, Russ R Salakhutdinov, and Kamalika Chaudhuri. A closer look at accuracy vs. robustness. In Advances in Neural Information Processing Systems, pages 8588-8601. Curran Associates, Inc., 2020. 3, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.319, + 0.908, + 0.387 + ], + "angle": 0, + "content": "[39] Hantao Yao, Rui Zhang, and Changsheng Xu. Visual-language prompt tuning with knowledge-guided context optimization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6757-6767, 2023. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.39, + 0.908, + 0.445 + ], + "angle": 0, + "content": "[40] Da Yu, Huishuai Zhang, Wei Chen, Jian Yin, and Tie-Yan Liu. Availability attacks create shortcuts. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 2367-2376, 2022. 1, 2, 3, 4, 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.447, + 0.907, + 0.501 + ], + "angle": 0, + "content": "[41] Yunrui Yu, Xitong Gao, and Cheng-Zhong Xu. LAFIT: Efficient and reliable evaluation of adversarial defenses with latent features. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 46(1):354-369, 2024. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.503, + 0.907, + 0.558 + ], + "angle": 0, + "content": "[42] Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. Conditional prompt learning for vision-language models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2022. 4, 6, 7, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.56, + 0.907, + 0.614 + ], + "angle": 0, + "content": "[43] Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. Learning to prompt for vision-language models. International Journal of Computer Vision, 130(9):2337-2348, 2022. 
4, 6" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.093, + 0.908, + 0.614 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.946, + 0.516, + 0.956 + ], + "angle": 0, + "content": "9516" + } + ] +] \ No newline at end of file diff --git a/2025/A3_ Few-shot Prompt Learning of Unlearnable Examples with Cross-Modal Adversarial Feature Alignment/641deceb-eb2b-45b8-97bb-8e2f7fe97a5e_origin.pdf b/2025/A3_ Few-shot Prompt Learning of Unlearnable Examples with Cross-Modal Adversarial Feature Alignment/641deceb-eb2b-45b8-97bb-8e2f7fe97a5e_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..597836518b3de8a170d672283a52e605224af0b2 --- /dev/null +++ b/2025/A3_ Few-shot Prompt Learning of Unlearnable Examples with Cross-Modal Adversarial Feature Alignment/641deceb-eb2b-45b8-97bb-8e2f7fe97a5e_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7d3efb044320f70b95c005dab39b5818b1ee294d239a3bd17e21f7813edd92ba +size 1096234 diff --git a/2025/A3_ Few-shot Prompt Learning of Unlearnable Examples with Cross-Modal Adversarial Feature Alignment/full.md b/2025/A3_ Few-shot Prompt Learning of Unlearnable Examples with Cross-Modal Adversarial Feature Alignment/full.md new file mode 100644 index 0000000000000000000000000000000000000000..4f9e4b7d398c838691d4752dce2d87472189426d --- /dev/null +++ b/2025/A3_ Few-shot Prompt Learning of Unlearnable Examples with Cross-Modal Adversarial Feature Alignment/full.md @@ -0,0 +1,350 @@ +# A3: Few-shot Prompt Learning of Unlearnable Examples with Cross-Modal Adversarial Feature Alignment + +Xuan Wang* + +Anhui Key Lab of CSSAE, National University of Defense Technology wangxuan21d@nudt.edu.cn + +Tianrui Qin + +OPPO Research Institute qintianrui123@gmail.com + +Xitong Gao*† + +Shenzhen Institutes of Advanced Technology, CAS; Shenzhen University of Advanced Technology xt.gao@siat.ac.cn + +Yu-liang Lu† + +Anhui Key Lab of CSSAE, National University of Defense Technology publicLuYL@126.com + +Dongping Liao + +State Key Lab of IoTSC, CIS Dept, University of Macau yb97428@um.edu.mo + +Cheng-zhong Xu + +State Key Lab of IoTSC, CIS Dept, University of Macau czxu@um.edu, mo + +# Abstract + +In the age of pervasive machine learning applications, protecting digital content from unauthorized use has become a pressing concern. Unlearnable examples (UEs)—data modified with imperceptible perturbations to inhibit model training while preserving human usability—have emerged as a promising approach. However, existing UE methods assume unauthorized trainers have extensive exposure to UEs or that models are trained from scratch, which may not hold in practical scenarios. This paper investigates the effectiveness of UEs under the few-shot learning paradigm, pitching it against prompt learning (PL) models that leverage pretrained vision-language models (VLMs), like CLIP, capable of generalizing to new classes with minimal data. To address this, we introduce an adaptive UE framework to generate unlearnable examples that specifically target the PL process. In addition, we propose a novel UE countermeasure, $A^3$ , with cross-modal adversarial feature alignment, specifically designed to circumvent UEs under few-shot PL. Experimental evaluations on 7 datasets show that $A^3$ outperforms existing PL methods, achieving up to $33\%$ higher performance in learning from UEs. 
For example, in the scenario involving $\ell_{\infty}$-bounded EM perturbations, $A^3$ has an average harmonic mean accuracy across 7 datasets of $82.43\%$, compared to CoCoOp's baseline of $65.47\%$. Our findings highlight the limitations of existing UEs against PL and lay the foundation for future data protection mechanisms.

# 1. Introduction

In the era of pervasive machine learning applications, protecting digital content from unauthorized use is an escalating concern. An emerging solution involves unlearnable examples (UEs) [9, 11, 29, 36, 40] — data modified with imperceptible perturbations that prevent machine learning models from effectively learning and generalizing from it, while preserving its utility for human observers. Unlike traditional data poisoning attacks intended for malicious use, UEs serve content creators by providing a way to inhibit unauthorized model training. Beyond this, UEs can also be used to shed light on vulnerabilities and learning preferences [36] of deep learning models, and prevent unlawful use of personal features [24]. However, existing UE methods are primarily designed for models trained from scratch, and make the strong assumption that all or a large proportion of the training data is used unknowingly by unauthorized trainers. These assumptions may not hold in the wild, for several reasons: (a) Creators may release limited data. (b) Unauthorized trainers may have limited exposure to the UEs: they may curate their training data from various sources and may only use a small fraction of the creator's data. (c) Trainers may leverage pretrained models to improve training efficiency, and to generalize well to new classes and contexts.

In this paper, we found that recent advances in prompt learning (PL) with pretrained vision-language models (VLMs) can indeed challenge the robustness of UEs. VLMs use contrastive learning to align image and text features, enabling strong zero-shot and downstream task performance [15, 30, 32, 35]. PL further adapts CLIP by fine-tuning prompts instead of model weights, making it ideal for data-limited scenarios, and novel tasks and classes. This paper thus investigates a central question: Are UEs effective in protecting data against PL-enabled models? This question has profound implications from both the content creator's and the unauthorized trainer's perspectives.

For content creators, understanding this question is crucial for several reasons: (a) To effectively prevent unauthorized usage, creators need to know the minimum amount of modified data required to maintain protection. (b) Content creators typically control and release a limited quantity of data, making it impractical to assume access to large datasets. This constraint naturally leads to a few-shot scenario, which is the focus of this study.

For unauthorized trainers, PL represents an appealing tool to bypass UEs, as PL exploits the generalization strengths of VLMs: (a) The pretrained encoders of VLMs enable PL to generalize well to novel classes, potentially circumventing perturbations that would normally deter training from scratch. (b) While existing methods to circumvent UEs typically involve adversarial training [19] or image augmentations [16, 25], which affect only the image seen by the model, PL may be able to enhance its robustness by incorporating text augmentations, offering a broader strategy to bypass unlearnability protections. 
+ To address these challenges, we propose an adaptive framework that targets UEs in the few-shot PL setting. The contributions of this paper are as follows:

- We introduce a scenario designed to examine the effectiveness of UEs against prompt learning, particularly in a few-shot context where data availability is constrained. Beyond existing UE methods, we introduce an adaptive UE framework that incorporates PL-specific considerations for surrogate-based UEs, generating stronger UEs that are more effective against PL.
- We propose a novel method, $\mathrm{A}^3$, which employs cross-modal adversarial augmented feature alignment to enhance PL's ability to generalize when learned from UEs. This method adversarially aligns diversely augmented image and text features to make PL robust against UEs.
- Experimental results demonstrate that $\mathrm{A}^3$ achieves significant performance gains over existing methods, proving more effective against other UE methods in few-shot scenarios, even when faced with larger perturbations and partial poisoning. $\mathrm{A}^3$ also generalizes well to novel classes.

This work offers new insights into the capabilities and limitations of UEs against PL, laying a foundation for more robust data protection strategies in the era of knowledge transfer with large pretrained models and multimodal machine learning.

# 2. Related Work & Preliminaries

# 2.1. Unlearnable Examples

The primary goal of unlearnable examples [9, 11, 29, 33, 36, 40] is to safeguard the privacy and copyright of content providers by adding small, human-imperceptible perturbations to data. These perturbations prevent machine learning models from effectively generalizing to the data's original distribution. Unlike traditional data poisoning attacks [10], which aim to introduce backdoor patterns into a model, unlearnable examples are not intended for malicious purposes but solely to protect data from unauthorized use.

Definition of Unlearnable Examples Consider a dataset with $N$ clean samples $\mathcal{D}_{\mathrm{clean}} = \{(\mathbf{x}_i,y_i)\}_{i = 1}^N$, where $\mathbf{x}_i\in \mathcal{X} = [0,1]^{C\times H\times W}$ and $y_{i}\in \mathcal{V} = \{1,\dots ,K\}$ represent the $i^{\mathrm{th}}$ input sample with $C$ channels and $H\times W$ spatial dimensions, and its corresponding true label. Each sample is drawn from a distribution $\mathcal{S}$. The content provider aims to add small perturbations $\delta_{i}\in \mathcal{B}_{p}(\mathbf{x}_{i},\epsilon)$ to the clean samples $\mathbf{x}_i\in \mathcal{D}_{\mathrm{clean}}$ to generate unlearnable examples $\mathcal{D}_{\mathrm{ue}}(\boldsymbol {\delta})\triangleq \{(\mathbf{x}_i + \boldsymbol {\delta}_i,y_i)\mid (\mathbf{x}_i,y_i)\in \mathcal{D}_{\mathrm{clean}}\}$. The set $\mathcal{B}_p(\mathbf{x}_i,\epsilon)$ is:

$$
\mathcal {B} _ {p} \left(\mathbf {x} _ {i}, \epsilon\right) \triangleq \left\{\mathbf {d} \mid \| \mathbf {d} \| _ {p} \leq \epsilon , \mathbf {x} _ {i} + \mathbf {d} \in \mathcal {X} \right\}. \tag {1}
$$

It bounds the noise $\delta_{i}$ of each sample $\mathbf{x}_i$ within the $\epsilon$-ball of $\ell_p$-distance with respect to the sample, and the perturbed sample $\mathbf{x}_i + \delta_i$ remains within the input domain $\mathcal{X}$. A small $\epsilon$ is crucial to ensure that the perturbations do not significantly alter the original content, thus preserving the data's utility, and typically $p \in \{0,2,\infty\}$. 
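As a minimal illustration of the constraint set in Eq. (1), the NumPy sketch below projects a candidate perturbation onto $\mathcal{B}_p(\mathbf{x}, \epsilon)$ for the common cases $p \in \{2, \infty\}$; the function name and interface are ours for illustration, not from the papers cited above:

```python
import numpy as np

def project_perturbation(delta, x, eps, p):
    """Project delta onto the set B_p(x, eps) of Eq. (1):
    ||delta||_p <= eps, with x + delta kept inside X = [0, 1]."""
    delta = np.asarray(delta, dtype=np.float64)
    if p == np.inf:
        delta = np.clip(delta, -eps, eps)          # l_inf ball
    elif p == 2:
        norm = np.linalg.norm(delta)
        if norm > eps:
            delta = delta * (eps / norm)           # rescale onto l_2 ball
    else:
        raise NotImplementedError("sketch covers p in {2, inf} only")
    # keep the perturbed sample within the valid input domain
    return np.clip(x + delta, 0.0, 1.0) - x
```

Optimization-based UE methods typically interleave a projection of this kind with each gradient step on the perturbation.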
When this perturbed dataset is used for training, the goal is for the resulting model to generalize poorly to the original distribution $\mathcal{S}$. The optimization for the noise can be formulated as the following bi-level optimization problem to solve for the bounded perturbations $\delta \triangleq \{\delta_i \in \mathcal{B}_p(\mathbf{x}_i, \epsilon)\}_{i=1}^N$:

$$
\max _ {\delta} \mathbb {E} _ {\left(\mathbf {x} _ {i}, y _ {i}\right) \sim \mathcal {S}} \left[ \mathcal {L} \left(f _ {\boldsymbol {\theta} ^ {*} (\delta)} \left(\mathbf {x} _ {i}\right), y _ {i}\right) \right], \tag {2}
$$

where $f_{\theta}:\mathcal{X}\to \mathbb{R}^{K}$ denotes the model with parameters $\pmb{\theta}$, $\mathcal{L}:\mathbb{R}^K\times \mathcal{V}\rightarrow \mathbb{R}$ is the loss function (typically cross-entropy), and $\pmb{\theta}^{\star}$ represents the model parameters optimized on the perturbed images:

$$
\boldsymbol {\theta} ^ {\star} (\boldsymbol {\delta}) = \operatorname {argmin} _ {\boldsymbol {\theta}} \mathbb {E} _ {(\mathbf {x} _ {i}, y _ {i}) \sim \mathcal {D} _ {\text {clean}}} [ \mathcal {L} (f _ {\boldsymbol {\theta}} (\mathbf {x} _ {i} + \boldsymbol {\delta} _ {i}), y _ {i}) ]. \tag {3}
$$

As the above problem is intractable, many works have proposed alternative methods to approximate the solution, commonly involving surrogate models:

- Hypocritical perturbations (HYPO) [33] assumes a surrogate model $g_{\theta} : \mathcal{X} \to \mathbb{R}^{K}$ with pretrained weights $\theta$ learned on samples from $\mathcal{S}$, and directly finds the perturbations $\delta$ that make the model easily produce correct predictions for $\mathcal{D}_{\mathrm{clean}}$ images.
- Error maximization (EM) [11] further considers a randomly-initialized surrogate $g_{\theta}$, and optimizes the noise $\delta$ and the surrogate model $g_{\theta}$ simultaneously.
- Robust error maximization (REM) [9] extends EM to optimize the surrogate $g_{\theta}$ under adversarial training [19], where the adversarial noise is also bounded within the $\epsilon$-ball of $\ell_p$-norm, and optimized via projected gradient descent (PGD) [19]. This helps to improve the effectiveness of the perturbations even under adversarial training.

Interestingly, recent works have shown that unlearnable examples can also be curated without the need for optimization, where the perturbations form a linearly-separable subspace that can be learned easily by the model. This bias is so strong that it makes the underlying features less learnable by model training algorithms:

- Linearly-separable perturbations (LSP) [40] generates random color patches as perturbations, and applies them to the images while ensuring the added noise is bounded within a small $\ell_2$-distance from the original image. This simple method can enable strong unlearnable examples without the expensive optimization process and the need for surrogate models.
- Autoregressive Poisoning (AR) [29] Similar to LSP, AR prescribes a simple perturbation strategy which first fills all channels of the first 2 rows and columns of the image with Gaussian noise, then uses an autoregressive process to fill the remaining pixels with a $3 \times 3$ sliding window. It then re-scales the perturbations to be within the noise bound $\mathcal{B}_p(\mathbf{x}_i, \epsilon)$, before adding them to the image $\mathbf{x}_i$. 
As $\mathrm{A}^3$ considers the problem of learning from unlearnable examples from the perspective of few-shot learning with prompt learning (PL), we provide a framework that adapts the above methods to this setting by making our prompt learners the surrogate models $g$. It also shows that PL can, to a certain degree, be effective against unlearnable examples produced by these methods.

# 2.2. Learning from Unlearnable Examples

The emergence of unlearnable examples prompts investigation into the mechanisms that unauthorized trainers might exploit to extract useful features.

Adversarial Training (AT) [2, 19] generates adversarial examples [4, 12, 41] specifically tailored to the model under training, which are in turn used as training data to enhance the model's robustness. It is also known to be an effective approach for improving model generalization when training on unlearnable examples [11]. However, adversarial training is computationally expensive, and it degrades the model's performance on clean data [38], especially when the sample size is small [5].

For this reason, Image Shortcut Squeezing (ISS) [16] introduces a suite of simple image processing methods that show surprising effectiveness in mitigating the impact of unlearnable examples, without the costs associated with adversarial training. Grayscale removes the color information from the training images, while JPEG compression (JPEG) instead applies lossy JPEG encoding with a high compression rate to the training images. However, it was recently discovered [26] that simple image processing is much less effective against adaptively optimized unlearnable examples. Building upon this idea, UEraser [25] proposes a stochastic augmentation pipeline with a wider range of transformations, and uses a simple adversarial augmentation scheme that optimizes the model only on the augmented images with the maximum loss. This allows the model to learn the underlying features without being misled by the easily learnable shortcuts in unlearnable examples. While all these methods have shown effectiveness against unlearnable examples, they consider the problem of training models from scratch rather than leveraging pretrained models.
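To illustrate how lightweight the ISS-style countermeasures above are, here is a minimal sketch of grayscale and JPEG squeezing using PIL and torchvision (our own example; the `quality` value is an assumption rather than the setting used in [16]):

```python
import io
from PIL import Image
from torchvision import transforms

def jpeg_squeeze(img: Image.Image, quality: int = 10) -> Image.Image:
    """Lossy JPEG round-trip at a high compression rate (low quality),
    intended to squeeze out high-frequency UE perturbations."""
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).copy()

# Grayscale squeezing: discard color (3 output channels keep CLIP-compatible).
gray_squeeze = transforms.Grayscale(num_output_channels=3)
```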
# 2.3. Prompt Learning

Vision-language pretrained models (VLMs), such as CLIP [27] and ALIGN [13], represent a significant advancement in multi-modal learning. Trained on extensive image-text pair datasets, a VLM has two main components, an image encoder $f_{\mathrm{im}}: \mathcal{X} \to \mathbb{R}^d$ and a text encoder $f_{\mathrm{tx}}: \mathcal{W} \to \mathbb{R}^d$, where $\mathcal{X}$ and $\mathcal{W}$ are the input image and text spaces, respectively, and learns a shared embedding space $\mathbb{R}^d$ for images and texts. Image and text pairs that are semantically similar have similar embeddings in this space, and vice versa. This makes VLMs versatile for a wide range of downstream tasks, including image classification [27], captioning [1], retrieval [18], and guidance for image generation [7, 28]. Notably, VLMs show impressive zero-shot performance: they can perform well on new tasks without task-specific training, showcasing their generalization capabilities.

Prompt Engineering for VLMs uses VLMs for image classification by constructing class-specific text prompts, e.g., "a photo of a cls" for the class cls, and comparing the model's similarity scores between the features of these prompts and the target image. The class yielding the highest similarity score is selected as the predicted class for the image. Formally, prompt engineering creates $M$ text prompts, embedded as a tensor $\mathbf{V} = \{\mathbf{v}_m\}_{m=1}^M \in \mathbb{R}^{M \times T \times h}$, where $\mathbf{v}_m$ denotes an embedded prompt prefix sequence of $T$ tokens. By appending each prefix $\mathbf{v}_m$ with $\mathbf{c}_k \in \mathbb{R}^h$, the constant embedding vector of the $k^{\text{th}}$ class, the model can construct a holistic classifier $h_\phi: \mathcal{X} \to \mathbb{R}^K$ by averaging the similarity scores across all $M$ prompts, where $\phi = \{\boldsymbol{\theta}, \mathbf{V}, \ldots\}$ denotes all parameters in $h_\phi$, consisting of the pretrained CLIP weights $\boldsymbol{\theta}$, a manually designed prompt embedding $\mathbf{V}$, and other potential parameters used by the prompt learning algorithm. For the $k^{\text{th}}$ class, we can obtain its logit as follows:

$$
h_\phi(\mathbf{x})_k = \frac{1}{M} \sum_{m=1}^M \operatorname{sim}\left(f_{\mathrm{im}}(\mathbf{x}), f_{\mathrm{tx}}\left(\left[\mathbf{v}_m, \mathbf{c}_k\right]\right)\right). \tag{4}
$$

Here, $\operatorname{sim}$ denotes the similarity function used to compute the closeness between the image and text features, typically the cosine similarity. The $k^{\text{th}}$ class probability can then be computed with the softmax function, where $\tau$ is the softmax temperature, usually set to 1:

$$
p(y = k \mid \mathbf{x}, \phi) = e^{h_\phi(\mathbf{x})_k / \tau} \Big/ \sum_{j=1}^K e^{h_\phi(\mathbf{x})_j / \tau}. \tag{5}
$$

Prompt Learning (PL). While the above zero-shot method is effective for many tasks, it is limited by the quality of the manually designed prompts, which may require extensive labor and expert knowledge to construct. In contrast, PL automatically learns the prompt embeddings $\mathbf{V}$, and possibly other parameters in $\phi$, by optimizing them during training. To improve downstream task performance, CoOp [43] learns the prompt embeddings $\mathbf{V}$, showing that even with few-shot examples it can generate better prompts than manual designs and generalize well to unseen tasks. CoCoOp [42] builds upon CoOp by introducing a trainable meta-net that learns to generate prompt embeddings from the extracted image features, improving the model's performance on unseen tasks. KgCoOp [39] further regularizes the prompt embeddings to remain close to the initial handcrafted prompts, showing that retaining proximity to the original prompts improves generalization to unseen tasks. Finally, ProDA [17] models the prompt embedding distribution with a Gaussian, and encourages orthogonality among the prompt embeddings.
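To make Eqs. (4) and (5) concrete, here is a minimal zero-shot sketch with a single handcrafted prompt ($M = 1$), using the official CLIP package; the class names and prompt text are placeholders of our own choosing:

```python
import clip
import torch

model, preprocess = clip.load("RN50")        # pretrained f_im and f_tx
classnames = ["cat", "dog", "car"]           # placeholder classes

@torch.no_grad()
def class_probabilities(images, prompt="a photo of a", tau=1.0):
    """Eq. (4) with one prompt: cosine similarity between the image feature
    and each class's text feature; Eq. (5): softmax with temperature tau."""
    tokens = clip.tokenize([f"{prompt} {c}" for c in classnames])
    t = model.encode_text(tokens)
    v = model.encode_image(images)           # `images`: preprocessed batch
    t = t / t.norm(dim=-1, keepdim=True)     # normalize so that the dot
    v = v / v.norm(dim=-1, keepdim=True)     # product is cosine similarity
    logits = v @ t.T                         # h_phi(x)_k for each class k
    return (logits / tau).softmax(dim=-1)
```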
This paper presents the first work to highlight that prompt learning can be effective in learning useful features from unlearnable examples, even under few-shot scenarios.

# 3. The $\mathbf{A}^3$ Method

# 3.1. Adaptive UEs Targeting PL

Surrogate-based methods (EM [11], REM [9], and HYPO [33]) for synthesizing UEs typically train surrogate models from scratch and do not assume low data availability, which makes them an unsuitable choice for attacking the PL process of a pretrained CLIP model (Table 1). To address this limitation, we first introduce an adaptive approach that uses the PL process itself as the surrogate model. To make the UEs stronger, we further assume that the UEs are synthesized with the same PL method and the same pretrained CLIP model. We implemented adaptive variants of EM, REM, and HYPO; as they are stronger than the original methods, our experiments use these adaptive variants by default. Recalling REM [9] from Section 2, we adapt its objective to PL, specifically CoCoOp [42] in this paper, by using the holistic classifier $h_\phi$ in (4) as the surrogate model. The REM objective to seek $\boldsymbol{\delta}$ under PL is thus:

$$
\min_{(\boldsymbol{\delta}, \mathbf{V}, \dots)} \max_{\boldsymbol{\eta}} \ \mathbb{E}_{(\mathbf{x}_i, y_i) \sim \mathcal{D}_{\mathrm{clean}}} \left[ \mathcal{L}\left(h_\phi(\mathbf{x}_i + \boldsymbol{\delta}_i + \boldsymbol{\eta}), y_i\right) \right], \tag{6}
$$

where $\boldsymbol{\eta} \in \mathcal{B}_p(\mathbf{x}_i, \epsilon)$ is the $\epsilon$-bounded $\ell_p$-norm adversarial noise.

In this adaptive context, EM [11] simplifies the above REM objective by removing the inner maximization and setting $\boldsymbol{\eta} = \mathbf{0}$. Similarly, HYPO [33] further keeps $\mathbf{V}$ constant at its initial manual design, and searches for the perturbations $\boldsymbol{\delta}$ directly.

Surrogate-free methods such as LSP [40], AR [29], and OPS [36] do not rely on a surrogate model training process and directly prescribe the perturbations for a given set of clean examples. Therefore, they do not have adaptive counterparts. In our experiments, we examine the performance of these methods by directly applying them to the training data used by the prompt learning process.

# 3.2. An Overview of $\mathbf{A}^3$

Figure 1 provides an overview of $\mathrm{A}^3$; the overall algorithm is given in Algorithm 1 of Appendix A.3. $\mathrm{A}^3$ provides a pool of diverse image and text augmentation strategies $\mathcal{A}_{\mathrm{im}}$ and $\mathcal{A}_{\mathrm{tx}}$. For each image-class training pair, it first samples $K_{\mathrm{im}}$ image and $K_{\mathrm{tx}}$ text augmentation strategies, and applies them to the image and text sample respectively, yielding $K_{\mathrm{im}} \times K_{\mathrm{tx}}$ distinct augmented pairs per training sample. Following the prompt learning technique of CoCoOp [42], it then optimizes the prompt embeddings $\mathbf{V}$ and the meta-net weights $\psi$ to align the pair of augmented samples with the minimum similarity. Intuitively, training prompts with the most dissimilar augmented pairs of image and text features forces the model to learn from the underlying features rather than fixating on the spurious correlations typically exploited by UEs.
# 3.3. Augmentations for Image and Text Modalities

Image Augmentations. As noted by Qin et al. [26], the effectiveness of simple augmentation strategies, such as grayscale and JPEG compression, is greatly diminished against adaptively synthesized UEs. To address this, we follow the approach of UEraser [25], which proposes an extensive set of image augmentation strategies, including not only standard techniques (e.g., random cropping and rotation) but also more complex strategies such as fractal-based transformations [21] and TrivialAugment [20].

Text Augmentations. CLIP models can leverage not only image features but also text embeddings, which makes them particularly effective in zero-shot classification and in few-shot learning facilitated by PL algorithms. In our context of using PL as an effective defense against UEs, we therefore also introduce a set of text augmentation strategies, including random token masking and reordering, which operate in the discrete token space of the text input, and small random rotations of the text embeddings.

For the details of the augmentation strategies used in $\mathrm{A}^3$ for both image and text, please refer to Appendix A.2.

# 3.4. Cross-modal Adversarial Feature Alignment

![](images/55935ae4039ef7df53c264ca9a4b6db363a90b7a5a54779c9db26691094a1d0e.jpg)
Figure 1. An overview of $\mathrm{A}^3$. For each image-class training pair, $\mathrm{A}^3$ samples $K_{\mathrm{im}}$ image and $K_{\mathrm{tx}}$ text augmentation strategies ($a_{\mathrm{im}} \sim \mathcal{A}_{\mathrm{im}}$ and $a_{\mathrm{tx}} \sim \mathcal{A}_{\mathrm{tx}}$), respectively. It then optimizes the prompt embeddings $\mathbf{V}$ and the meta-net $m_\psi$ for empirical risk minimization by aligning the pair of augmented samples with the minimum similarity (i.e., maximum loss) between the image and text features.

Given an image $\mathbf{x}$ and its corresponding label $y$, we can use the above augmentation strategies to form $K_{\mathrm{im}}$ augmented images and $K_{\mathrm{tx}}$ augmented text embeddings, by drawing from the sets of image ($a_{\mathrm{im}} \sim \mathcal{A}_{\mathrm{im}}$) and text ($a_{\mathrm{tx}} \sim \mathcal{A}_{\mathrm{tx}}$) augmentation strategies and applying them to the image and the text embeddings respectively, giving a set of augmented images $\tilde{\mathbf{x}} \triangleq \{\tilde{\mathbf{x}}_i\}_{i=1}^{K_{\mathrm{im}}}$ and a set of text embeddings $\tilde{\mathbf{t}} \triangleq \{\tilde{\mathbf{t}}_j\}_{j=1}^{K_{\mathrm{tx}}}$:

$$
\tilde{\mathbf{x}}_i = a_{\mathrm{im}}(\mathbf{x}), \quad a_{\mathrm{im}} \sim \mathcal{A}_{\mathrm{im}}, \quad \text{for } i \in [1, \dots, K_{\mathrm{im}}],
$$
$$
\tilde{\mathbf{t}}_j = a_{\mathrm{tx}}\left(\left[\mathbf{v}_{j \bmod M}, \mathbf{c}_y\right]\right), \quad a_{\mathrm{tx}} \sim \mathcal{A}_{\mathrm{tx}}, \quad \text{for } j \in [1, \dots, K_{\mathrm{tx}}]. \tag{7}
$$

Here, the text embedding before augmentation, $[\mathbf{v}_{j \bmod M}, \mathbf{c}_y]$, is the concatenation of the prompt embedding $\mathbf{v}_{j \bmod M}$ and the class embedding $\mathbf{c}_y$. Recall that $M$ is the number of prompt embeddings; the modulus $j \bmod M$ ensures that prompt embeddings are selected cyclically if $K_{\mathrm{tx}}$ exceeds $M$.
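A minimal sketch of forming the augmented sets in Eq. (7), with two toy text augmentations standing in for the pool detailed in Appendix A.2 (the specific operations and magnitudes are our own placeholders):

```python
import random
import torch

def mask_tokens(t, p=0.1):
    """Toy token masking: randomly zero out rows of a (T+1) x h embedding."""
    keep = (torch.rand(t.shape[0], 1) > p).to(t.dtype)
    return t * keep

def jitter_embedding(t, scale=0.01):
    """Toy stand-in for a small random rotation: add scaled Gaussian noise
    and rescale each row back to its original norm."""
    noisy = t + scale * torch.randn_like(t)
    return noisy * (t.norm(dim=-1, keepdim=True) / noisy.norm(dim=-1, keepdim=True))

def augmented_pairs(x, V, c_y, A_im, A_tx, K_im, K_tx):
    """Eq. (7): draw K_im image and K_tx text augmentations; prompt prefixes
    are chosen cyclically via j % M when K_tx exceeds M."""
    xs = [random.choice(A_im)(x) for _ in range(K_im)]
    ts = [random.choice(A_tx)(torch.cat([V[j % len(V)], c_y]))
          for j in range(K_tx)]               # c_y: (1, h) class embedding
    return xs, ts
```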
Using the augmented samples, we compute the similarity between the image and text features for each pair of augmented samples: let $\mathcal{S}(\tilde{\mathbf{x}}, \tilde{\mathbf{t}}) \in [-1, 1]^{K_{\mathrm{im}} \times K_{\mathrm{tx}}}$ be the (cosine) similarity matrix containing the similarity between each image-text pair of augmented samples. Namely, for the $\tilde{\mathbf{x}}_i$ and $\tilde{\mathbf{t}}_j$ pair, we have:

$$
\mathcal{S}(\tilde{\mathbf{x}}, \tilde{\mathbf{t}})_{ij} = \operatorname{sim}\left(f_{\mathrm{im}}(\tilde{\mathbf{x}}_i), f_{\mathrm{tx}}(\tilde{\mathbf{t}}_j)\right), \tag{8}
$$

and we optimize the trainable weights in $\phi$ by maximizing the alignment (i.e., minimizing the loss) of the least similar pair of augmented image and text features. Putting it all together, we have the following min-max problem, which can be optimized using mini-batch stochastic gradient descent (SGD):

$$
\min_{\phi} \ \mathbb{E}_{(\mathbf{x}, y) \sim \mathcal{D}_{\mathrm{ue}}} \left[ \max_{(i, j)} \mathcal{L}\left(\mathcal{S}(\tilde{\mathbf{x}}, \tilde{\mathbf{t}})_{ij}, y\right) \right], \tag{9}
$$

where $\mathcal{L}$ is the softmax cross-entropy loss, and $\mathcal{D}_{\mathrm{ue}}$ is the set of unlearnable training examples.

# 3.5. More Augmentation Diversity with Meta-Net

To further enhance the diversity of the augmented samples, we introduce a meta-net $m_\psi: \mathbb{R}^d \to \mathbb{R}^{M \times (T-1) \times h}$ with trainable weights $\psi$: a small neural network that learns to predict prompt embeddings $\mathbf{V}$ from $f_{\mathrm{im}}(\tilde{\mathbf{x}})$, i.e., the feature extracted from an augmented image by the image encoder. This generates more diverse augmented text embeddings in addition to the text augmentation strategies. While this could yield $K_{\mathrm{im}} \times K_{\mathrm{tx}}$ different augmented prompt features, to limit the computational cost of CLIP's feature extraction we apply the meta-net to only one randomly chosen augmented image for each augmented text embedding. With the meta-net, the similarity matrix in (8) becomes:

$$
\mathcal{S}(\tilde{\mathbf{x}}, \tilde{\mathbf{t}})_{ij} = \operatorname{sim}(\mathbf{p}_i, \mathbf{q}_j), \quad \text{where } \mathbf{p}_i = f_{\mathrm{im}}(\tilde{\mathbf{x}}_i), \quad \mathbf{q}_j = f_{\mathrm{tx}}\left(\tilde{\mathbf{t}}_j + m_\psi(\mathbf{p}_k)\right), \quad k \sim \mathcal{U}\{1, K_{\mathrm{im}}\}. \tag{10}
$$
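Putting Eqs. (7)-(10) together, the following is a minimal single-sample sketch of one $\mathrm{A}^3$ update (our own illustration, not the released implementation; `f_im`, `f_tx`, `meta_net`, and the per-class augmented text embeddings `ts_all` are assumed to exist, with shapes simplified relative to Section 3.5):

```python
import random
import torch
import torch.nn.functional as F

def a3_update(f_im, f_tx, meta_net, xs, ts_all, y, optimizer):
    """One A^3 step: embed all augmented image-text pairs (Eq. (8), with the
    meta-net shift of Eq. (10)), then minimize the loss of the *worst*
    (least similar, maximum-loss) pair for the true class y (Eq. (9))."""
    p = torch.stack([f_im(x) for x in xs])              # (K_im, d)
    k = random.randrange(len(xs))                       # Eq. (10): random image
    shift = meta_net(p[k])                              # assumed to broadcast
    q = torch.stack([torch.stack([f_tx(t + shift) for t in ts])
                     for ts in ts_all])                 # (K classes, K_tx, d)
    p = F.normalize(p, dim=-1)
    q = F.normalize(q, dim=-1)
    logits = torch.einsum("id,cjd->ijc", p, q)          # (K_im, K_tx, K)
    targets = torch.full(logits.shape[:2], y, dtype=torch.long)
    losses = F.cross_entropy(logits.flatten(0, 1),      # per-pair class loss
                             targets.flatten(), reduction="none")
    loss = losses.max()                                 # adversarial pair, Eq. (9)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```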
# 4. Experiments

Datasets. For the main experiments, we evaluate $\mathrm{A}^3$ on 7 datasets: ImageNet [6], Caltech-101 [8], Oxford Flowers-102 [22], Food-101 [3], Oxford-Pets [23], SUN397 [37], and UCF-101 [31]. These datasets cover various recognition tasks, including classification of generic objects, fine-grained classification, and action recognition. We proportionally resized and cropped all images to $224 \times 224$, the input size of the image encoder $f_{\mathrm{im}}$.

Models. In our experiments, unless otherwise specified, we use either ViT-B/16 or ResNet-50 as the backbone of the image encoder $f_{\mathrm{im}}$, and the text encoder $f_{\mathrm{tx}}$ is a Transformer-based model [34]. All pretrained models are obtained from the official CLIP repository [27].

Evaluation. For all experiments in this section, unless otherwise specified, we consider a common few-shot learning setup with $S$ labeled training examples per class, i.e., $S$-shot learning, where $S = 16$ by default. We used a context length $T$ of 4 and $M = 1$ prompt embedding. For all surrogate-based attacks, we optimized the perturbations $\boldsymbol{\delta}$ for 15 epochs with a cosine annealing schedule and a learning rate of 0.002. For prompt learning, we adopted the SGD optimizer with a momentum of 0.9 and a weight decay of $5 \times 10^{-4}$. We consider the following dataset split protocols:

- Standard: We used the standard train-test splits from [42, 43] to ensure reproducibility. In this setting, all classes are included in the training phase, where each class contains $S$ labeled examples.
- Base-to-Novel: To better evaluate the model's generalization ability in few-shot scenarios, we also followed [42] to divide each dataset into two equal, non-overlapping groups of classes: the first group (base) is used for training and validating the prompt learning model, and the second group (novel), which contains unseen classes, is used for testing. For this protocol, we report the test accuracies on the base classes $\alpha_{\mathrm{b}}$ and the novel classes $\alpha_{\mathrm{n}}$, as well as their harmonic mean $\alpha_{\mathrm{h}} = 2 / (\alpha_{\mathrm{b}}^{-1} + \alpha_{\mathrm{n}}^{-1})$.

Unlearnable Example Methods. Different methods consider distinct perturbation types and perturbation budgets:

- Surrogate-based methods, such as EM [11], REM [9], and HYPO [33], consider $\ell_\infty$-bounded perturbations with a budget of $8/255$. As these methods require surrogate models, we used the approach of Section 3.1 to adapt them to the prompt learning setting.
- LSP [40] and AR [29] both use $\ell_2$-norm perturbations, but with different budgets inherited from their original setups: LSP assumes a perturbation budget of 1.30, while AR uses 1.00.
- OPS [36] is model-agnostic, and uses $\ell_0$-norm perturbations with a budget of 1 by default.

For additional details regarding the experimental setup, please refer to Appendix B. We now present the results and main findings of our experiments. Appendix C provides additional results, including a sensitivity analysis of the hyperparameters and more adaptive variants.

# 4.1. Prompt Learning under UEs

Prompt learning generalizes well to existing UEs. Table 1 compares EM-0 and EM, where EM-0 synthesizes UEs by training ResNet-18 surrogate models from scratch on the Caltech-101 dataset, and EM adapts the EM method to prompt learning using Section 3.1. Notably, we found that UE methods that effectively thwart [26] supervised learning of small models (e.g., ResNet-18) on small datasets (e.g., CIFAR-10 [14]) are much less effective when transferred to CLIP-based prompt learning algorithms.

Prompt learning generalizes better with an increasing number of shots $S$. Table 1 also shows that the performance of all prompt learning algorithms increases with the number of shots $S$, even when all shots are UEs. This trend holds across all UE methods, but the adaptive EM method consistently suppresses the performance gains from increasing $S$ the most.

CoCoOp and ProDA show increased robustness against UEs, while KgCoOp is the most fragile in Table 1.
We speculate that this is because the meta-net used in CoCoOp may be able to absorb the shortcuts present in the UEs, while ProDA models a Gaussian distribution over prompt embeddings, making it more robust to input-side perturbations. When trained on clean data, KgCoOp [39] exhibits the best performance, as shown in the "Clean" row of Table 1. However, it is notably prone to UEs, showing the largest performance drop when UEs are introduced, especially under the adaptive EM method. This suggests that KgCoOp, guided by the regularization to remain close to the initial manual prompts, cannot effectively evade UEs that EM crafts based on those initial prompts. Because of the robustness of CoCoOp under our default number of shots ($S = 16$), we chose it as the baseline algorithm for the subsequent experiments.

Without proper defenses, it is better not to learn from UEs at all. Interestingly, the performance of CoCoOp when trained with UEs is usually worse than the zero-shot performance, suggesting that UEs can indeed be harmful to the model's performance. This behavior can be observed in Tables 1 and 2.

# 4.2. Prompt Learning with $\mathbf{A}^3$

$\mathbf{A}^3$ is very effective (up to $33\%$ better than CoCoOp) in mitigating UEs. In Table 2, we observe that $\mathrm{A}^3$ consistently outperforms CoCoOp across all datasets, producing $15\%$ to $33\%$ higher $\alpha_{\mathrm{h}}$ than CoCoOp for surrogate-based methods (EM, REM, HYPO), and $6\%$ to $30\%$ higher for surrogate-free methods (LSP, AR, OPS).

The image and text augmentations in $\mathbf{A}^3$ are both crucial for its effectiveness. Table 3 presents an ablation study on the individual contributions of image and text augmentations. Using either image-only or text-only augmentations improves the model's performance, but the combination of both is the most effective, giving large accuracy gains over CoCoOp.

Simple augmentation strategies fall short of $\mathbf{A}^3$'s performance. Table 3 also shows that applying simple augmentation strategies such as grayscale and JPEG compression to CoCoOp certainly improves over the CoCoOp baseline. However, they are still outperformed by $\mathrm{A}^3$, sometimes by a large margin ($\geq 25\%$).

A large arsenal of image augmentation strategies also falls short of $\mathrm{A}^3$'s performance.

Table 1. Test accuracies ($\%$) of prompt learning algorithms on an unlearnability-poisoned Caltech-101 dataset under varying numbers of shots $S$. Note that "EM-0" is the original EM [11] method, and "EM" is our adaptive variant (Section 3.1). The image encoder of the CLIP model is ResNet-50, and the zero-shot accuracy is $86.00\%$. We also report the average accuracy across unlearnability methods (the "Avg." rows) and prompt learning algorithms (the "Avg." column). We highlight the best prompt learning algorithm against each unlearnability method in bold, and underline the strongest unlearnability method for each prompt learning algorithm.
| $S$ | Method | CoOp | CoCoOp | ProDA | KgCoOp | Avg. |
| --- | --- | --- | --- | --- | --- | --- |
| 2 | EM-0 | 78.23 | 80.45 | 81.91 | 77.86 | 79.61 |
| 2 | EM | 61.78 | 64.13 | 65.05 | 59.26 | 62.56 |
| 2 | OPS | 74.40 | 76.91 | 75.75 | 73.10 | 75.04 |
| 2 | AR | 80.63 | 81.83 | 80.74 | 79.89 | 80.77 |
| 2 | Avg. | 73.76 | 75.83 | 75.86 | 72.53 | 74.50 |
| 4 | EM-0 | 83.29 | 85.19 | 86.02 | 80.83 | 83.83 |
| 4 | EM | 68.41 | 70.45 | 70.14 | 64.91 | 68.48 |
| 4 | OPS | 78.80 | 80.88 | 80.15 | 78.07 | 79.48 |
| 4 | AR | 82.54 | 83.70 | 84.41 | 82.66 | 83.33 |
| 4 | Avg. | 78.26 | 80.06 | 80.18 | 76.62 | 78.78 |
| 8 | EM-0 | 86.18 | 89.44 | 90.64 | 85.42 | 87.92 |
| 8 | EM | 70.50 | 72.28 | 73.03 | 67.07 | 70.72 |
| 8 | OPS | 80.74 | 83.35 | 82.63 | 79.31 | 81.51 |
| 8 | AR | 85.81 | 88.41 | 88.50 | 83.77 | 86.62 |
| 8 | Avg. | 80.81 | 83.37 | 83.70 | 78.89 | 81.69 |
| 16 | EM-0 | 90.76 | 90.85 | 90.48 | 89.60 | 90.42 |
| 16 | EM | 71.42 | 72.83 | 72.10 | 69.34 | 71.42 |
| 16 | OPS | 82.40 | 84.09 | 84.43 | 80.02 | 82.74 |
| 16 | AR | 88.63 | 90.74 | 90.08 | 86.46 | 88.98 |
| 16 | Avg. | 83.30 | 84.63 | 84.27 | 81.36 | 83.39 |
| 16 | Clean | 91.20 | 91.70 | 91.60 | 91.80 | 91.58 |
Table 3 also shows that while UEraser [25] is the most effective existing strategy for learning from UEs among all tested methods, its performance is still inferior to $\mathrm{A}^3$, particularly on HYPO, LSP, and AR with increased perturbation budgets. To preserve image semantics while maximizing augmentation diversity, image augmentation strategies must strike a balance between the two, and this trade-off may limit their ability to suppress UE perturbations in images. While this is also true for $\mathrm{A}^3$, its additional text augmentation strategies help to work around this limitation.

Adversarial training (AT) may not be the most effective defense against UEs. As the AT settings in Table 3 assume $\ell_\infty$-bounded perturbations with $\epsilon = 8/255$, AT notably struggles against larger $\ell_\infty$ perturbation budgets ($\epsilon = 16/255$) and other perturbation norm-bounds ($\ell_2$ and $\ell_0$), as it is not designed to handle them. There may also be an intricate balance between accuracy and robustness [38], which could result in a seesaw effect in performance as the perturbation budget increases.

Increasing the number of shots $S$ improves $\mathbf{A}^3$'s performance. In Table 4, we found that while CoCoOp's performance metrics continue to improve with increasing $S$, it never surpasses the performance of zero-shot CLIP; this echoes the findings in Table 1.

Table 2. Base-to-novel prompt learning accuracies (%) for CoCoOp and $\mathrm{A}^3$ trained on unlearnable examples. Rows prefixed with "+" indicate that $\mathrm{A}^3$ is applied. $\alpha_{\mathrm{b}}$ refers to the model accuracy on the (poisoned) base classes, while $\alpha_{\mathrm{n}}$ refers to the accuracy on novel classes, which exclude the poisoned classes. We also report the harmonic mean $\alpha_{\mathrm{h}} = 2 / (\alpha_{\mathrm{b}}^{-1} + \alpha_{\mathrm{n}}^{-1})$. In the "Δ" column, we report the test accuracy drop of the CoCoOp baseline from the clean training setting, and the accuracy gain of $\mathrm{A}^3$ over CoCoOp. The backbone is ViT-B/16.
| Metric | ImNet | Caltech | Pets | Flowers | Food | SUN | UCF | Avg. | Δ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **Zero-Shot [27]** | | | | | | | | | |
| αb | 67.50 | 92.60 | 87.40 | 67.90 | 82.90 | 64.80 | 66.10 | 75.60 | |
| αn | 60.20 | 87.40 | 83.70 | 60.80 | 75.50 | 56.90 | 59.50 | 69.14 | |
| αh | 63.64 | 89.00 | 85.52 | 64.80 | 79.05 | 60.60 | 62.81 | 72.20 | |
| **Baseline (CoCoOp [42])** | | | | | | | | | |
| αb | 75.25 | 96.30 | 94.35 | 92.86 | 90.18 | 78.38 | 80.53 | 86.84 | |
| αn | 69.43 | 93.23 | 96.88 | 70.17 | 90.80 | 74.72 | 73.28 | 81.22 | |
| αh | 72.39 | 94.74 | 95.58 | 79.74 | 90.34 | 76.28 | 76.61 | 83.67 | |
| +αb | 76.03 | 97.95 | 94.06 | 92.41 | 90.68 | 77.59 | 80.25 | 87.00 | |
| +αn | 70.47 | 93.80 | 93.18 | 70.29 | 91.29 | 74.83 | 71.89 | 80.82 | |
| +αh | 73.09 | 95.73 | 93.63 | 79.74 | 90.98 | 76.14 | 75.88 | 83.60 | |
| **EM [11] ($\ell_\infty$, $\epsilon = 8/255$)** | | | | | | | | | |
| αb | 56.47 | 80.68 | 78.44 | 74.40 | 79.93 | 58.90 | 63.71 | 70.36 | -16.48 |
| αn | 43.27 | 74.90 | 76.38 | 51.66 | 78.15 | 53.33 | 52.09 | 61.40 | -19.82 |
| αh | 48.77 | 77.54 | 77.40 | 61.03 | 79.04 | 56.01 | 57.38 | 65.31 | -18.36 |
| +αb | 73.48 | 95.49 | 93.71 | 91.84 | 89.38 | 77.65 | 79.34 | 85.82 | 15.46 |
| +αn | 67.85 | 93.07 | 93.53 | 68.08 | 88.00 | 73.17 | 71.69 | 79.30 | 17.90 |
| +αh | 70.41 | 94.34 | 93.74 | 78.20 | 88.87 | 75.18 | 75.12 | 82.27 | 16.96 |
| **REM [9] ($\ell_\infty$, $\epsilon = 8/255$)** | | | | | | | | | |
| αb | 43.51 | 62.63 | 64.60 | 61.98 | 60.21 | 48.29 | 50.53 | 55.94 | -30.90 |
| αn | 30.49 | 54.77 | 60.12 | 46.64 | 58.08 | 44.30 | 41.98 | 48.05 | -33.16 |
| αh | 35.74 | 58.39 | 62.25 | 53.17 | 59.06 | 46.18 | 46.02 | 51.54 | -32.12 |
| +αb | 72.76 | 94.52 | 93.28 | 91.00 | 88.06 | 76.94 | 78.46 | 85.00 | 29.06 |
| +αn | 66.42 | 92.16 | 91.94 | 66.74 | 87.34 | 71.98 | 71.22 | 78.26 | 30.21 |
| +αh | 69.37 | 93.69 | 92.74 | 76.73 | 87.54 | 74.43 | 74.94 | 81.35 | 29.80 |
| **HYPO [33] ($\ell_\infty$, $\epsilon = 8/255$)** | | | | | | | | | |
| αb | 40.08 | 58.59 | 59.83 | 57.33 | 56.50 | 44.11 | 47.66 | 52.01 | -34.83 |
| αn | 30.64 | 54.82 | 57.11 | 44.76 | 56.90 | 41.25 | 43.58 | 47.01 | -34.21 |
| αh | 34.68 | 56.57 | 58.46 | 50.30 | 56.55 | 42.47 | 45.45 | 49.21 | -34.46 |
| +αb | 71.80 | 94.28 | 93.59 | 90.75 | 88.55 | 77.67 | 78.92 | 85.08 | 33.07 |
| +αn | 65.29 | 90.37 | 92.01 | 65.25 | 84.93 | 70.04 | 69.85 | 76.82 | 29.81 |
| +αh | 68.39 | 92.13 | 93.05 | 76.20 | 86.69 | 73.73 | 74.43 | 80.66 | 31.45 |
| **LSP [40] ($\ell_2$, $\epsilon = 1.30$)** | | | | | | | | | |
| αb | 49.72 | 68.49 | 65.99 | 63.33 | 61.88 | 50.69 | 51.47 | 58.80 | -28.04 |
| αn | 36.04 | 56.29 | 63.35 | 50.62 | 59.41 | 46.20 | 48.88 | 51.54 | -29.67 |
| αh | 41.79 | 61.84 | 64.47 | 56.18 | 60.53 | 48.43 | 50.04 | 54.75 | -28.91 |
| +αb | 72.53 | 94.97 | 94.02 | 91.29 | 88.64 | 77.04 | 78.17 | 85.24 | 26.44 |
| +αn | 67.37 | 92.44 | 93.80 | 67.52 | 87.40 | 72.21 | 71.11 | 78.84 | 27.30 |
| +αh | 69.97 | 93.74 | 94.27 | 77.09 | 87.77 | 74.47 | 74.54 | 81.69 | 26.94 |
| **AR [29] ($\ell_2$, $\epsilon = 1.00$)** | | | | | | | | | |
| αb | 42.33 | 61.73 | 60.58 | 59.01 | 57.20 | 46.23 | 49.06 | 53.73 | -33.11 |
| αn | 31.67 | 56.42 | 58.08 | 44.54 | 55.96 | 43.30 | 42.69 | 47.52 | -33.69 |
| αh | 36.29 | 59.00 | 59.27 | 50.63 | 56.52 | 44.74 | 45.67 | 50.30 | -33.37 |
| +αb | 71.68 | 94.38 | 92.55 | 90.82 | 87.77 | 76.67 | 77.58 | 84.49 | 30.76 |
| +αn | 66.09 | 91.59 | 91.67 | 66.06 | 86.25 | 71.19 | 70.86 | 77.67 | 30.15 |
| +αh | 68.74 | 92.98 | 92.10 | 78.40 | 87.00 | 73.93 | 74.21 | 81.05 | 30.75 |
| **OPS [36] ($\ell_0$, $\epsilon = 1$)** | | | | | | | | | |
| αb | 68.28 | 88.60 | 86.20 | 81.59 | 85.65 | 65.60 | 68.98 | 77.84 | -9.00 |
| αn | 54.50 | 80.21 | 80.83 | 58.70 | 79.06 | 60.49 | 60.32 | 67.73 | -13.49 |
| αh | 60.57 | 84.05 | 83.45 | 68.28 | 82.24 | 62.89 | 64.39 | 72.37 | -11.40 |
| +αb | 73.08 | 95.10 | 93.39 | 91.50 | 89.56 | 76.97 | 78.13 | 83.96 | 7.55 |
| +αn | 67.20 | 92.63 | 93.88 | 68.03 | 88.01 | 72.64 | 71.28 | 79.10 | 11.37 |
| +αh | 70.06 | 93.86 | 93.63 | 78.05 | 88.87 | 74.69 | 74.50 | 81.42 | 9.68 |
Table 3. Clean test accuracies (%) on the Caltech-101 dataset (16 shots, standard protocol). "UEr" means UEraser. The baseline and compared methods are adapted to CoCoOp [42]. The image encoder backbone is ResNet-50. "Text" and "Image" refer to the augmented modalities; "Full" includes both.
| Attack | Budget $\epsilon$ | Baseline | Gray | JPEG | AT | UEr | Text | Image | Full |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| EM | 8/255 | 75.53 | 74.28 | 79.21 | 82.84 | 90.37 | 84.69 | 92.51 | 94.28 |
| EM | 16/255 | 59.36 | 60.96 | 64.58 | 77.03 | 88.79 | 82.45 | 90.40 | 91.86 |
| REM | 8/255 | 52.33 | 63.71 | 70.22 | 78.96 | 90.73 | 86.89 | 92.32 | 93.88 |
| REM | 16/255 | 37.97 | 59.54 | 66.78 | 74.10 | 86.96 | 83.33 | 89.35 | 90.25 |
| HYPO | 8/255 | 47.36 | 50.02 | 64.56 | 78.89 | 87.74 | 84.07 | 91.47 | 93.21 |
| HYPO | 16/255 | 27.18 | 34.67 | 59.11 | 72.26 | 82.48 | 79.27 | 86.09 | 89.33 |
| LSP | 1.30 | 43.23 | 64.70 | 80.01 | 81.58 | 90.66 | 83.37 | 92.83 | 94.02 |
| LSP | 1.74 | 25.41 | 42.58 | 68.30 | 76.29 | 83.05 | 80.77 | 87.24 | 91.62 |
| AR | 1.00 | 50.18 | 53.24 | 81.40 | 76.73 | 90.12 | 84.96 | 91.63 | 93.41 |
| AR | 1.30 | 32.65 | 35.57 | 69.26 | 64.22 | 82.89 | 81.17 | 86.61 | 90.06 |
| OPS | 1 | 86.17 | 86.52 | 89.73 | 83.16 | 88.61 | 89.93 | 90.50 | 93.86 |
| OPS | 4 | 73.24 | 73.34 | 80.23 | 69.59 | 80.12 | 81.37 | 84.07 | 87.10 |
Table 4. Base-to-novel metrics for different numbers of shots $S \in \{0, 2, 4, 8, 16\}$. The image encoder backbone is ResNet-50. "−" and "+" denote CoCoOp without and with $\mathrm{A}^3$, respectively; the $S = 0$ rows report zero-shot accuracy, to which $\mathrm{A}^3$ does not apply.
| $S$ | Metric | Caltech (−) | Caltech (+) | Food (−) | Food (+) | ImageNet (−) | ImageNet (+) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | αb | 86.7 | | 78.1 | | 62.7 | |
| 0 | αn | 78.4 | | 74.9 | | 52.8 | |
| 0 | αh | 82.6 | | 76.4 | | 57.2 | |
| 2 | αb | 69.34 | 88.51 | 66.11 | 84.34 | 44.04 | 66.04 |
| 2 | αn | 61.78 | 83.45 | 60.29 | 81.77 | 30.9 | 60.03 |
| 2 | αh | 65.34 | 85.91 | 63.07 | 83.04 | 36.32 | 62.89 |
| 4 | αb | 76.09 | 91.23 | 75.21 | 87.81 | 50.43 | 68.1 |
| 4 | αn | 67.51 | 87.93 | 66.19 | 86.15 | 36.61 | 62.9 |
| 4 | αh | 71.54 | 89.55 | 70.41 | 86.97 | 42.42 | 65.4 |
| 8 | αb | 77.86 | 92.89 | 75.85 | 88.03 | 51.65 | 69.01 |
| 8 | αn | 70.19 | 89.11 | 70.09 | 86.44 | 39.22 | 64.58 |
| 8 | αh | 73.83 | 90.96 | 72.86 | 87.23 | 44.58 | 66.72 |
| 16 | αb | 77.52 | 93.36 | 76.67 | 88.52 | 53.72 | 71.08 |
| 16 | αn | 72.43 | 91.02 | 73.34 | 87.76 | 40.74 | 64.95 |
| 16 | αh | 74.89 | 92.18 | 74.97 | 88.14 | 46.34 | 67.88 |
![](images/3e63b6b5e9254872d7c39ebffcb88b9b4640ae4b812eb8255b8919a5c846b150.jpg)
![](images/cd91b39d42053d417575a3e791bbf2c683504123f56e5f0e61be118cd4fe24c9.jpg)
![](images/0e966e46ae20f2d8b7aafbc0fd37033660a7ccc32b79ee38a6907e2f24abd3a5.jpg)
![](images/13a94f1e41a87b7f18ea40d518aa2cd3057bf3ccb934147e291bfc18ae11ba04.jpg)
![](images/81616334ed89d10775a3865fa15e1d3ed0af05dedecca17cf98b084e1cdb7d74.jpg)
![](images/a6ad827e2de157735f90f2d0d34f9bc8deb05ba662f5c66c99a57b3fa52a03e9.jpg)
![](images/2a809476949a81417905ada8d5f012a84bf314fba9e658888963850b1dcbf5e6.jpg)
![](images/9c16c1e3695b1908ad6ede841fafda072772a77904d57be754a7105f372b4987.jpg)
Figure 2. CoCoOp vs. $\mathrm{A}^3$ under partial poisoning with rates $R \in \{\frac{1}{8}, \frac{1}{4}, \frac{1}{2}, 1\}$ (x-axis, %). Accuracy metrics (y-axis, %): $\alpha_{\text{unlearn}}$, UEs in the training set; $\alpha_{\text{original}}$, original clean images of the UEs; $\alpha_{\text{test}}$, clean images in the test set; $\alpha_{\text{train}}$, clean images in the training set. Panels: (a) EM + CoCoOp, (b) EM + $\mathrm{A}^3$, (c) REM + CoCoOp, (d) REM + $\mathrm{A}^3$, (e) AR + CoCoOp, (f) AR + $\mathrm{A}^3$, (g) LSP + CoCoOp, (h) LSP + $\mathrm{A}^3$. The image encoder backbone is ResNet-50.

On the other hand, $\mathrm{A}^3$ improves notably with increasing $S$ in Table 4: where CoCoOp falls behind, $\mathrm{A}^3$ leads by a large margin.

With partially poisoned datasets, $\mathbf{A}^3$ learns the underlying features while CoCoOp likely does not. In practice, model trainers may curate datasets from a variety of sources, and only a portion of the data may contain UE perturbations. Training on such partially poisoned datasets typically results in minimal performance loss relative to the clean dataset, which by itself is not indicative of the model's ability to learn the underlying features of the UEs. In Figure 2, we therefore investigate whether CoCoOp and $\mathrm{A}^3$ can learn such features when trained on partially poisoned datasets. It evaluates the accuracy on the unlearnable part ($\alpha_{\mathrm{unlearn}}$), on the original images of the unlearnable part ($\alpha_{\mathrm{original}}$, i.e., before perturbation), on samples from the test set ($\alpha_{\mathrm{test}}$), and on the clean part of the training set ($\alpha_{\mathrm{train}}$). Importantly, if the model learns the underlying features, then $\alpha_{\mathrm{original}}$ should be close to $\alpha_{\mathrm{train}}$; otherwise, it should be close to $\alpha_{\mathrm{test}}$. It is evident that CoCoOp struggles to learn useful features from the UEs, as $\alpha_{\mathrm{original}}$ closely tracks $\alpha_{\mathrm{test}}$, while $\alpha_{\mathrm{unlearn}}$ is higher than $\alpha_{\mathrm{train}}$. This suggests that CoCoOp is likely overfitting to the UE perturbations, even more so than to the clean training data. In contrast, for $\mathrm{A}^3$, $\alpha_{\mathrm{original}}$ follows $\alpha_{\mathrm{train}}$ closely and $\alpha_{\mathrm{unlearn}}$ is close to $\alpha_{\mathrm{test}}$, indicating that $\mathrm{A}^3$ learns the underlying features instead of the UE-crafted perturbations.

# 5. Conclusion

This work draws four main conclusions. First, the generalization ability of PL, combined with $\mathrm{A}^3$, makes it a challenging adversary for traditional UEs.
Second, augmenting PL with diverse image and text perturbations significantly improves its resilience against UEs, pointing to the need for multimodal considerations in both UEs and their countermeasures. Third, compared with simpler augmentations or adversarial training, $\mathrm{A}^3$'s cross-modal feature alignment proved especially effective in mitigating the impact of UEs on PL, outperforming preexisting learning methods. Finally, we emphasize the need for adaptive, multimodal approaches to UEs, and open pathways toward more sophisticated protections against unauthorized training in an era of large multimodal and pretrained models.

# Acknowledgment

This work is supported in part by the National Natural Science Foundation of China (62376263, 62372443 and 62271496), the Guangdong Basic and Applied Basic Research Foundation (2023B1515130002), the Natural Science Foundation of Guangdong (2024A1515030209 and 2024A1515011970), the Shenzhen Science and Technology Innovation Commission (JCYJ20230807140507015 and JCYJ20220531100804009), and Yu-Liang Lu's Project Team Development Funding (KY23A102). This work was carried out in part at SICC, which is supported by SKL-IOTSC, University of Macau.

# References

[1] Manuele Barraco, Marcella Cornia, Silvia Cascianelli, Lorenzo Baraldi, and Rita Cucchiara. The unreasonable effectiveness of CLIP features for image captioning: An experimental analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pages 4662-4670, 2022. 3
[2] Battista Biggio and Fabio Roli. Wild patterns: Ten years after the rise of adversarial machine learning. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, pages 2154-2156, 2018. 3
[3] Lukas Bossard, Matthieu Guillaumin, and Luc Van Gool. Food-101 – mining discriminative components with random forests. In Computer Vision – ECCV 2014, pages 446–461. Springer International Publishing, 2014. 5, 11
[4] Xinquan Chen, Xitong Gao, Juanjuan Zhao, Kejiang Ye, and Cheng-Zhong Xu. AdvDiffuser: Natural adversarial example synthesis with diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4562-4572, 2023. 3
[5] Jacob Clarysse, Julia Hörrmann, and Fanny Yang. Why adversarial training can hurt robust accuracy. In The Eleventh International Conference on Learning Representations, 2023. 3
[6] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248-255. IEEE, 2009. 5, 11
[7] Prafulla Dhariwal and Alexander Nichol. Diffusion models beat GANs on image synthesis. In Advances in Neural Information Processing Systems, pages 8780-8794. Curran Associates, Inc., 2021. 3
[8] Li Fei-Fei, Rob Fergus, and Pietro Perona. Learning generative visual models from few training examples: An incremental Bayesian approach tested on 101 object categories. In 2004 Conference on Computer Vision and Pattern Recognition Workshop, pages 178-178. IEEE, 2004. 5, 11
[9] Shaopeng Fu, Fengxiang He, Yang Liu, Li Shen, and Dacheng Tao. Robust unlearnable examples: Protecting data against adversarial learning. In International Conference on Learning Representations, 2022. 1, 2, 4, 6, 7
[10] Micah Goldblum, Dimitris Tsipras, Chulin Xie, et al. Dataset security for machine learning: Data poisoning, backdoor attacks, and defenses.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(2):1563-1580, 2022. 2
[11] Hanxun Huang, Xingjun Ma, Sarah Monazam Erfani, James Bailey, and Yisen Wang. Unlearnable examples: Making personal data unexploitable. In International Conference on Learning Representations, 2021. 1, 2, 3, 4, 6, 7
[12] Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Madry. Adversarial examples are not bugs, they are features. Advances in Neural Information Processing Systems, 32, 2019. 3
[13] Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. Scaling up visual and vision-language representation learning with noisy text supervision. In International Conference on Machine Learning, pages 4904-4916. PMLR, 2021. 3
[14] Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. The CIFAR-10 and CIFAR-100 datasets, 2014. Available at: http://www.cs.toronto.edu/~kriz/cifar.html. 6
[15] Ziyi Lin, Shijie Geng, Renrui Zhang, Peng Gao, Gerard De Melo, Xiaogang Wang, Jifeng Dai, Yu Qiao, and Hongsheng Li. Frozen CLIP models are efficient video learners. In European Conference on Computer Vision, pages 388-404. Springer, 2022. 1
[16] Zhuoran Liu, Zhengyu Zhao, and Martha Larson. Image shortcut squeezing: Countering perturbative availability poisons with compression. In International Conference on Machine Learning, 2023. 2, 3, 11
[17] Yuning Lu, Jianzhuang Liu, Yonggang Zhang, Yajing Liu, and Xinmei Tian. Prompt distribution learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022. 4
[18] Yiwei Ma, Guohai Xu, Xiaoshuai Sun, Ming Yan, Ji Zhang, and Rongrong Ji. X-CLIP: End-to-end multi-grained contrastive learning for video-text retrieval. In Proceedings of the 30th ACM International Conference on Multimedia, pages 638–647, New York, NY, USA, 2022. Association for Computing Machinery. 3
[19] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017. 2, 3, 11
[20] Samuel G Müller and Frank Hutter. TrivialAugment: Tuning-free yet state-of-the-art data augmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 774-782, 2021. 4, 11
[21] Anguelos Nicolaou, Vincent Christlein, Edgar Riba, Jian Shi, Georg Vogeler, and Mathias Seuret. Tormentor: Deterministic dynamic-path, data augmentations with fractals. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pages 2707-2711, 2022. 4, 11
[22] Maria-Elena Nilsback and Andrew Zisserman. Automated flower classification over a large number of classes. In 2008 Sixth Indian Conference on Computer Vision, Graphics & Image Processing, pages 722-729. IEEE, 2008. 5, 11
[23] Omkar M Parkhi, Andrea Vedaldi, Andrew Zisserman, and CV Jawahar. Cats and dogs. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, pages 3498-3505. IEEE, 2012. 5, 11
[24] Tianrui Qin, Xitong Gao, Juanjuan Zhao, and Kejiang Ye. Destruction-restoration suppresses data protection perturbations against diffusion models. In 2023 IEEE 35th International Conference on Tools with Artificial Intelligence (ICTAI), pages 586-594. IEEE, 2023. 1
[25] Tianrui Qin, Xitong Gao, Juanjuan Zhao, Kejiang Ye, and Cheng-Zhong Xu. Learning the unlearnable: Adversarial augmentations suppress unlearnable example attacks.
In 4th Workshop on Adversarial Robustness In the Real World (AROW), ICCV 2023, 2023. 2, 3, 4, 7, 12
[26] Tianrui Qin, Xitong Gao, Juanjuan Zhao, Kejiang Ye, and Cheng-Zhong Xu. APBench: A unified availability poisoning attack and defenses benchmark. Transactions on Machine Learning Research, 2024. 3, 4, 6
[27] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748-8763. PMLR, 2021. 3, 5, 7
[28] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10684-10695, 2022. 3
[29] Pedro Sandoval-Segura, Vasu Singla, Jonas Geiping, Micah Goldblum, Tom Goldstein, and David Jacobs. Autoregressive perturbations for data poisoning. Advances in Neural Information Processing Systems, 35:27374-27386, 2022. 1, 2, 3, 4, 6, 7
[30] Mainak Singha, Harsh Pal, Ankit Jha, and Biplab Banerjee. AD-CLIP: Adapting domains in prompt space using CLIP. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4355–4364, 2023. 1
[31] Khurram Soomro, Amir Roshan Zamir, and Mubarak Shah. UCF101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402, 2012. 5, 11
[32] Zeyi Sun, Ye Fang, Tong Wu, Pan Zhang, Yuhang Zang, Shu Kong, Yuanjun Xiong, Dahua Lin, and Jiaqi Wang. Alpha-CLIP: A CLIP model focusing on wherever you want. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 13019-13029, 2024. 1
[33] Lue Tao, Lei Feng, Jinfeng Yi, Sheng-Jun Huang, and Songcan Chen. Better safe than sorry: Preventing delusive adversaries with adversarial training. Advances in Neural Information Processing Systems, 34:16209-16225, 2021. 2, 4, 6, 7
[34] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017. 5
[35] Jianyi Wang, Kelvin CK Chan, and Chen Change Loy. Exploring CLIP for assessing the look and feel of images. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 2555-2563, 2023. 1
[36] Shutong Wu, Sizhe Chen, Cihang Xie, and Xiaolin Huang. One-pixel shortcut: On the learning preference of deep neural networks. In International Conference on Learning Representations, 2023. 1, 2, 3, 4, 6, 7
[37] Jianxiong Xiao, James Hays, Krista A. Ehinger, Aude Oliva, and Antonio Torralba. SUN database: Large-scale scene recognition from abbey to zoo. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 3485-3492, 2010. 11
[38] Yao-Yuan Yang, Cyrus Rashtchian, Hongyang Zhang, Russ R Salakhutdinov, and Kamalika Chaudhuri. A closer look at accuracy vs. robustness. In Advances in Neural Information Processing Systems, pages 8588-8601. Curran Associates, Inc., 2020. 3, 7
[39] Hantao Yao, Rui Zhang, and Changsheng Xu. Visual-language prompt tuning with knowledge-guided context optimization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6757-6767, 2023. 4
[40] Da Yu, Huishuai Zhang, Wei Chen, Jian Yin, and Tie-Yan Liu. Availability attacks create shortcuts.
In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 2367-2376, 2022. 1, 2, 3, 4, 6, 7 +[41] Yunrui Yu, Xitong Gao, and Cheng-Zhong Xu. LAFIT: Efficient and reliable evaluation of adversarial defenses with latent features. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 46(1):354-369, 2024. 3 +[42] Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. Conditional prompt learning for vision-language models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2022. 4, 6, 7, 8 +[43] Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. Learning to prompt for vision-language models. International Journal of Computer Vision, 130(9):2337-2348, 2022. 4, 6 \ No newline at end of file diff --git a/2025/A3_ Few-shot Prompt Learning of Unlearnable Examples with Cross-Modal Adversarial Feature Alignment/images.zip b/2025/A3_ Few-shot Prompt Learning of Unlearnable Examples with Cross-Modal Adversarial Feature Alignment/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..9487b2bf093f4e295c644499bf849124c4b4a5da --- /dev/null +++ b/2025/A3_ Few-shot Prompt Learning of Unlearnable Examples with Cross-Modal Adversarial Feature Alignment/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ca7db1aa9b5767fe76f20977e6a47ceae33a868218cbf17b6b43f1321b78582d +size 695140 diff --git a/2025/A3_ Few-shot Prompt Learning of Unlearnable Examples with Cross-Modal Adversarial Feature Alignment/layout.json b/2025/A3_ Few-shot Prompt Learning of Unlearnable Examples with Cross-Modal Adversarial Feature Alignment/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..911149dc2040e31898823e94cee66f74c251f8cb --- /dev/null +++ b/2025/A3_ Few-shot Prompt Learning of Unlearnable Examples with Cross-Modal Adversarial Feature Alignment/layout.json @@ -0,0 +1,11848 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 130, + 102, + 480, + 140 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 102, + 480, + 140 + ], + "spans": [ + { + "bbox": [ + 130, + 102, + 480, + 140 + ], + "type": "text", + "content": "A3: Few-shot Prompt Learning of Unlearnable Examples with Cross-Modal Adversarial Feature Alignment" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 111, + 162, + 175, + 175 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 162, + 175, + 175 + ], + "spans": [ + { + "bbox": [ + 111, + 162, + 175, + 175 + ], + "type": "text", + "content": "Xuan Wang*" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 70, + 175, + 211, + 201 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 70, + 175, + 211, + 201 + ], + "spans": [ + { + "bbox": [ + 70, + 175, + 211, + 201 + ], + "type": "text", + "content": "Anhui Key Lab of CSSAE, National University of Defense Technology wangxuan21d@nudt.edu.cn" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 119, + 205, + 178, + 218 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 119, + 205, + 178, + 218 + ], + "spans": [ + { + "bbox": [ + 119, + 205, + 178, + 218 + ], + "type": "text", + "content": "Tianrui Qin" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 93, + 219, + 205, + 236 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 93, + 219, + 205, + 236 + ], + "spans": [ + { + "bbox": [ + 93, + 219, + 205, + 236 + ], + "type": "text", + "content": "OPPO Research Institute 
qintianrui123@gmail.com" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 287, + 161, + 353, + 175 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 287, + 161, + 353, + 175 + ], + "spans": [ + { + "bbox": [ + 287, + 161, + 353, + 175 + ], + "type": "text", + "content": "Xitong Gao*†" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 234, + 175, + 402, + 201 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 234, + 175, + 402, + 201 + ], + "spans": [ + { + "bbox": [ + 234, + 175, + 402, + 201 + ], + "type": "text", + "content": "Shenzhen Institutes of Advanced Technology, CAS; Shenzhen University of Advanced Technology xt.gao@siat.ac.cn" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 271, + 205, + 334, + 219 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 271, + 205, + 334, + 219 + ], + "spans": [ + { + "bbox": [ + 271, + 205, + 334, + 219 + ], + "type": "text", + "content": "Yu-liang Lu†" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 231, + 219, + 373, + 244 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 231, + 219, + 373, + 244 + ], + "spans": [ + { + "bbox": [ + 231, + 219, + 373, + 244 + ], + "type": "text", + "content": "Anhui Key Lab of CSSAE, National University of Defense Technology publicLuYL@126.com" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 445, + 162, + 520, + 175 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 445, + 162, + 520, + 175 + ], + "spans": [ + { + "bbox": [ + 445, + 162, + 520, + 175 + ], + "type": "text", + "content": "Dongping Liao" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 425, + 175, + 540, + 201 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 425, + 175, + 540, + 201 + ], + "spans": [ + { + "bbox": [ + 425, + 175, + 540, + 201 + ], + "type": "text", + "content": "State Key Lab of IoTSC, CIS Dept, University of Macau yb97428@um.edu.mo" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 416, + 205, + 501, + 218 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 416, + 205, + 501, + 218 + ], + "spans": [ + { + "bbox": [ + 416, + 205, + 501, + 218 + ], + "type": "text", + "content": "Cheng-zhong Xu" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 400, + 218, + 515, + 244 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 400, + 218, + 515, + 244 + ], + "spans": [ + { + "bbox": [ + 400, + 218, + 515, + 244 + ], + "type": "text", + "content": "State Key Lab of IoTSC, CIS Dept, University of Macau czxu@um.edu, mo" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 151, + 270, + 200, + 282 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 151, + 270, + 200, + 282 + ], + "spans": [ + { + "bbox": [ + 151, + 270, + 200, + 282 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 55, + 296, + 296, + 606 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 296, + 296, + 606 + ], + "spans": [ + { + "bbox": [ + 55, + 296, + 296, + 606 + ], + "type": "text", + "content": "In the age of pervasive machine learning applications, protecting digital content from unauthorized use has become a pressing concern. Unlearnable examples (UEs)—data modified with imperceptible perturbations to inhibit model training while preserving human usability—have emerged as a promising approach. 
However, existing UE methods assume unauthorized trainers have extensive exposure to UEs or that models are trained from scratch, which may not hold in practical scenarios. This paper investigates the effectiveness of UEs under the few-shot learning paradigm, pitching it against prompt learning (PL) models that leverage pretrained vision-language models (VLMs), like CLIP, capable of generalizing to new classes with minimal data. To address this, we introduce an adaptive UE framework to generate unlearnable examples that specifically target the PL process. In addition, we propose a novel UE countermeasure, " + }, + { + "bbox": [ + 55, + 296, + 296, + 606 + ], + "type": "inline_equation", + "content": "A^3" + }, + { + "bbox": [ + 55, + 296, + 296, + 606 + ], + "type": "text", + "content": ", with cross-modal adversarial feature alignment, specifically designed to circumvent UEs under few-shot PL. Experimental evaluations on 7 datasets show that " + }, + { + "bbox": [ + 55, + 296, + 296, + 606 + ], + "type": "inline_equation", + "content": "A^3" + }, + { + "bbox": [ + 55, + 296, + 296, + 606 + ], + "type": "text", + "content": " outperforms existing PL methods, achieving up to " + }, + { + "bbox": [ + 55, + 296, + 296, + 606 + ], + "type": "inline_equation", + "content": "33\\%" + }, + { + "bbox": [ + 55, + 296, + 296, + 606 + ], + "type": "text", + "content": " higher performance in learning from UEs. For example, in the scenario involving " + }, + { + "bbox": [ + 55, + 296, + 296, + 606 + ], + "type": "inline_equation", + "content": "\\ell_{\\infty}" + }, + { + "bbox": [ + 55, + 296, + 296, + 606 + ], + "type": "text", + "content": "-bounded EM perturbations, " + }, + { + "bbox": [ + 55, + 296, + 296, + 606 + ], + "type": "inline_equation", + "content": "A^3" + }, + { + "bbox": [ + 55, + 296, + 296, + 606 + ], + "type": "text", + "content": " has an average harmonic mean accuracy across 7 datasets of " + }, + { + "bbox": [ + 55, + 296, + 296, + 606 + ], + "type": "inline_equation", + "content": "82.43\\%" + }, + { + "bbox": [ + 55, + 296, + 296, + 606 + ], + "type": "text", + "content": ", compared to CoCoOp's baseline of " + }, + { + "bbox": [ + 55, + 296, + 296, + 606 + ], + "type": "inline_equation", + "content": "65.47\\%" + }, + { + "bbox": [ + 55, + 296, + 296, + 606 + ], + "type": "text", + "content": ". Our findings highlight the limitations of existing UEs against PL and lay the foundation for future data protection mechanisms." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 56, + 629, + 135, + 641 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 629, + 135, + 641 + ], + "spans": [ + { + "bbox": [ + 56, + 629, + 135, + 641 + ], + "type": "text", + "content": "1. Introduction" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 55, + 649, + 297, + 685 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 649, + 297, + 685 + ], + "spans": [ + { + "bbox": [ + 55, + 649, + 297, + 685 + ], + "type": "text", + "content": "In the era of pervasive machine learning applications, protecting digital content from unauthorized use is an escalating concern. An emerging solution involves unlearnable ex" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 313, + 271, + 555, + 511 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 271, + 555, + 511 + ], + "spans": [ + { + "bbox": [ + 313, + 271, + 555, + 511 + ], + "type": "text", + "content": "amples (UEs) [9, 11, 29, 36, 40]. 
— data modified with imperceptible perturbations that prevent machine learning models from effectively learning and generalizing from it, while preserving its utility for human observers. Unlike traditional data poisoning attacks intended for malicious use, UEs serve content creators by providing a way to inhibit unauthorized model training. Beyond this, UEs can also be used to shed light on vulnerabilities and learning preferences [36] of deep learning models, and prevent unlawful use of personal features [24]. However, existing UE methods are primarily designed for models trained from scratch, and make strong assumptions where all or a large proportion of training data is used unknowingly by unauthorized trainers. These assumptions may not hold in the wilderness, for several reasons: (a) Creators may release limited data. (b) Unauthorized trainers may have limited exposure to the UEs: they may curate their training data from various sources and may only use a small fraction of the creator's data. (c) Trainers may leverage pretrained models to improve training efficiency, and to generalize well to new classes and contexts." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 313, + 516, + 556, + 660 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 516, + 556, + 660 + ], + "spans": [ + { + "bbox": [ + 313, + 516, + 556, + 660 + ], + "type": "text", + "content": "In this paper, we found that recent advances in prompt learning (PL) with pretrained vision-language models (VLMs) can indeed challenge the robustness of UEs. VLMs use contrastive learning to align images and text features, enabling strong zero-shot and downstream tasks [15, 30, 32, 35]. PL further adapts CLIP by fine-tuning prompts instead of model weights, making it ideal for data-limited scenarios, and novel tasks and classes. This paper thus investigates a central question: Are UEs effective in protecting data against PL-enabled models? This question has profound implications from both the content creator's and the unauthorized trainer's perspectives." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 313, + 665, + 556, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 665, + 556, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 665, + 556, + 713 + ], + "type": "text", + "content": "For content creators, understanding this question is crucial for several reasons: (a) To effectively prevent unauthorized usage, creators need to know the minimum amount of modified data required to maintain protection. (b) Content" + } + ] + } + ], + "index": 21 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "spans": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "text", + "content": "CVF" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "spans": [ + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "type": "text", + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." 
creators typically control and release a limited quantity of data, making it impractical to assume access to large datasets. This constraint naturally leads to a few-shot scenario, which is the focus of this study.

For unauthorized trainers, PL represents an appealing tool to bypass UEs, as it exploits the generalization strengths of VLMs: (a) The pretrained encoders of VLMs enable it to generalize well to novel classes, potentially circumventing perturbations that would normally deter training from scratch. (b) While existing methods to circumvent UEs typically involve adversarial training [19] or image augmentations [16, 25], which affect only the image seen by the model, PL may be able to enhance its robustness by also incorporating text augmentations, offering a broader strategy to bypass unlearnability protections.

To address these challenges, we propose an adaptive framework that targets UEs in the few-shot PL setting. The contributions of this paper are as follows:

- We introduce a scenario designed to examine the effectiveness of UEs against prompt learning, particularly in a few-shot context where data availability is constrained.
Beyond existing UE methods, we introduce an adaptive UE framework that incorporates PL-specific considerations for surrogate-based UEs, generating stronger UEs that are more effective against PL.
- We propose a novel method, $\mathrm{A}^3$, which employs cross-modal adversarial augmented feature alignment to enhance PL's ability to generalize when learning from UEs. This method adversarially aligns diversely-augmented image and text features to make PL robust against UEs.
- Experimental results demonstrate that $\mathrm{A}^3$ achieves significant performance gains over existing methods, proving more effective against other UE methods in few-shot scenarios, even when faced with larger perturbations and partial poisoning. $\mathrm{A}^3$ also generalizes well to novel classes.

This work offers new insights into the capabilities and limitations of UEs against PL, laying a foundation for more robust data protection strategies in the era of knowledge transfer with large pretrained models and multimodal machine learning.

2. Related Work & Preliminaries

2.1. Unlearnable Examples

The primary goal of unlearnable examples [9, 11, 29, 33, 36, 40] is to safeguard the privacy and copyright of content providers by adding small, human-imperceptible perturbations to data.
These perturbations prevent machine learning models from effectively generalizing to the data's original distribution. Unlike traditional data poisoning attacks [10], which aim to introduce backdoor patterns into a model, unlearnable examples are not intended for malicious purposes but solely to protect data from unauthorized use.

Definition of Unlearnable Examples  Consider a dataset with $N$ clean samples $\mathcal{D}_{\mathrm{clean}} = \{(\mathbf{x}_i, y_i)\}_{i=1}^N$, where $\mathbf{x}_i \in \mathcal{X} = [0,1]^{C \times H \times W}$ and $y_i \in \mathcal{Y} = \{1, \dots, K\}$ represent the $i^{\mathrm{th}}$ input sample, with $C$ channels and $H \times W$ spatial dimensions, and its corresponding true label. Each sample is drawn from a distribution $\mathcal{S}$. The content provider aims to add small perturbations $\boldsymbol{\delta}_i \in \mathcal{B}_p(\mathbf{x}_i, \epsilon)$ to the clean samples $\mathbf{x}_i \in \mathcal{D}_{\mathrm{clean}}$ to generate unlearnable examples $\mathcal{D}_{\mathrm{ue}}(\boldsymbol{\delta}) \triangleq \{(\mathbf{x}_i + \boldsymbol{\delta}_i, y_i) \mid (\mathbf{x}_i, y_i) \in \mathcal{D}_{\mathrm{clean}}\}$.
The set $\mathcal{B}_p(\mathbf{x}_i, \epsilon)$ is:

$$\mathcal{B}_p(\mathbf{x}_i, \epsilon) \triangleq \left\{\mathbf{d} \mid \|\mathbf{d}\|_p \leq \epsilon,\ \mathbf{x}_i + \mathbf{d} \in \mathcal{X}\right\}. \tag{1}$$

It bounds the noise $\boldsymbol{\delta}_i$ of each sample $\mathbf{x}_i$ within the $\epsilon$-ball of $\ell_p$-distance with respect to the sample, and ensures the perturbed sample $\mathbf{x}_i + \boldsymbol{\delta}_i$ remains within the input domain $\mathcal{X}$. A small $\epsilon$ is crucial to ensure that the perturbations do not significantly alter the original content, thus preserving the data's utility; typically, $p \in \{0, 2, \infty\}$.
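To make the feasibility set in (1) concrete, here is a minimal sketch (ours, not the authors' code) of projecting a raw perturbation onto $\mathcal{B}_\infty(\mathbf{x}, \epsilon)$ for inputs in $[0,1]^{C \times H \times W}$; the helper name `project_linf` and the PyTorch setting are assumptions.

```python
import torch

def project_linf(x: torch.Tensor, delta: torch.Tensor, eps: float) -> torch.Tensor:
    """Project a perturbation onto B_inf(x, eps) as in Eq. (1)."""
    delta = delta.clamp(-eps, eps)           # enforce ||d||_inf <= eps
    delta = (x + delta).clamp(0.0, 1.0) - x  # keep x + d inside X = [0, 1]^CxHxW
    return delta

# Example with the common 8/255 l_inf budget for unlearnable examples.
x = torch.rand(3, 32, 32)
delta = project_linf(x, torch.randn_like(x) * 0.1, eps=8 / 255)
assert delta.abs().max() <= 8 / 255 and (x + delta).min() >= 0 and (x + delta).max() <= 1
```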
When this perturbed dataset is used for training, the goal is for the resulting model to generalize poorly to the original distribution $\mathcal{S}$. The optimization of the noise can be formulated as the following bi-level problem, solving for the bounded perturbations $\boldsymbol{\delta} \triangleq \{\boldsymbol{\delta}_i \in \mathcal{B}_p(\mathbf{x}_i, \epsilon)\}_{i=1}^N$:

$$\max_{\boldsymbol{\delta}}\ \mathbb{E}_{(\mathbf{x}_i, y_i) \sim \mathcal{S}} \left[\mathcal{L}\left(f_{\boldsymbol{\theta}^\star(\boldsymbol{\delta})}(\mathbf{x}_i), y_i\right)\right], \tag{2}$$

where $f_{\boldsymbol{\theta}}: \mathcal{X} \to \mathbb{R}^K$ denotes the model with parameters $\boldsymbol{\theta}$, $\mathcal{L}: \mathbb{R}^K \times \mathcal{Y} \to \mathbb{R}$ is the loss function (typically cross-entropy), and $\boldsymbol{\theta}^\star$ represents the model parameters optimized on the perturbed images:

$$\boldsymbol{\theta}^\star(\boldsymbol{\delta}) = \operatorname*{argmin}_{\boldsymbol{\theta}}\ \mathbb{E}_{(\mathbf{x}_i, y_i) \sim \mathcal{D}_{\mathrm{clean}}} \left[\mathcal{L}\left(f_{\boldsymbol{\theta}}(\mathbf{x}_i + \boldsymbol{\delta}_i), y_i\right)\right]. \tag{3}$$
As the above problem is intractable, many works have proposed alternative methods to approximate the solution, commonly involving surrogate models (a minimal sketch of the EM-style alternation follows the list):

- Hypocritical perturbations (HYPO) [33] assumes a surrogate model $g_{\boldsymbol{\theta}}: \mathcal{X} \to \mathbb{R}^K$ with pretrained weights $\boldsymbol{\theta}$ learned on samples from $\mathcal{S}$, and directly finds the perturbations $\boldsymbol{\delta}$ that make the model easily produce correct predictions for the $\mathcal{D}_{\mathrm{clean}}$ images.
- Error-minimizing noise (EM) [11] further considers a randomly-initialized surrogate $g_{\boldsymbol{\theta}}$, and optimizes the noise $\boldsymbol{\delta}$ and the surrogate model $g_{\boldsymbol{\theta}}$ simultaneously.
- Robust error-minimizing noise (REM) [9] extends EM to optimize the surrogate $g_{\boldsymbol{\theta}}$ under adversarial training [19], where the adversarial noise is also bounded within the $\epsilon$-ball of the $\ell_p$-norm and optimized via projected gradient descent (PGD) [19]. This helps the perturbations remain effective even under adversarial training.
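The following is a minimal sketch of the EM-style min-min alternation: the surrogate is trained on the currently perturbed data, then the noise is updated by PGD-style steps that minimize the surrogate's loss within $\mathcal{B}_\infty$. The toy surrogate, step sizes, and schedule are illustrative assumptions, not the reference implementations of [11] or [9].

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def em_noise_step(surrogate, x, y, delta, eps, alpha=2 / 255):
    """One PGD-style step that *minimizes* the surrogate loss w.r.t. the noise."""
    delta = delta.detach().requires_grad_(True)
    loss = F.cross_entropy(surrogate(x + delta), y)
    (grad,) = torch.autograd.grad(loss, delta)
    with torch.no_grad():
        delta = (delta - alpha * grad.sign()).clamp(-eps, eps)  # descend, project
        delta = (x + delta).clamp(0, 1) - x                     # stay in [0, 1]
    return delta

# Toy min-min alternation on one batch: train the surrogate on the perturbed
# data (min over theta), then update the error-minimizing noise against it.
surrogate = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
opt = torch.optim.SGD(surrogate.parameters(), lr=0.1)
x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
delta = torch.zeros_like(x)
for _ in range(5):
    opt.zero_grad()
    F.cross_entropy(surrogate(x + delta), y).backward()
    opt.step()
    for _ in range(3):
        delta = em_noise_step(surrogate, x, y, delta, eps=8 / 255)
```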
Interestingly, recent works have shown that unlearnable examples can also be crafted without optimization, where the perturbations form a linearly-separable subspace that the model learns easily. This bias is so strong that it makes the underlying features less learnable by model training algorithms (a sketch of such patch-based noise follows the list):

- Linearly-separable perturbations (LSP) [40] generates random color patches as perturbations and applies them to the images, while ensuring the added noise is bounded within a small $\ell_2$-distance from the original image. This simple method can produce strong unlearnable examples without the expensive optimization process and the need for surrogate models.
- Autoregressive Poisoning (AR) [29], similar to LSP, prescribes a simple perturbation strategy: it first fills all channels of the first 2 rows and columns of the image with Gaussian noise, then uses an autoregressive process with a $3 \times 3$ sliding window to fill the remaining pixels. It then re-scales the perturbations to lie within the noise bound $\mathcal{B}_p(\mathbf{x}_i, \epsilon)$ before adding them to the image $\mathbf{x}_i$.
- One-pixel Shortcuts (OPS) [36] searches, for each class, for an optimal pixel location and color value, such that perturbing it results in the largest change in the pixel's color value across all perturbed images of the class. This constitutes a simple $\ell_0$-bounded perturbation in which only one pixel is modified per image. Surprisingly, when training models from scratch, OPS can generate even stronger unlearnable examples than EM with an $8/255$ noise budget [25, 36]. It also resists $\ell_{\{2,\infty\}}$-bounded adversarial training, as the noise added by adversarial training cannot effectively perturb the pixel values to erase $\ell_0$-perturbations.
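As an illustration of the surrogate-free idea, the sketch below generates LSP-style class-wise color-patch noise rescaled to an $\ell_2$ budget; the patch size, budget, and helper names are our assumptions rather than the exact procedure of [40].

```python
import torch

def lsp_style_noise(shape=(3, 32, 32), patch=8, eps_l2=1.0):
    """Sketch of LSP-style noise: random color patches under an l2 budget.

    Each (patch x patch) block receives one random color per channel,
    yielding the kind of linearly-separable shortcut described above.
    """
    c, h, w = shape
    blocks = torch.rand(c, h // patch, w // patch) - 0.5
    noise = blocks.repeat_interleave(patch, dim=1).repeat_interleave(patch, dim=2)
    return noise * (eps_l2 / noise.flatten().norm(p=2))  # ||noise||_2 == eps_l2

# One shared patch pattern per class makes the shortcut class-consistent.
class_noise = {k: lsp_style_noise() for k in range(10)}
x, y = torch.rand(3, 32, 32), 4
x_ue = (x + class_noise[y]).clamp(0, 1)
```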
As $\mathrm{A}^3$ considers the problem of learning from unlearnable examples from the perspective of few-shot learning with prompt learning (PL), we provide a framework that adapts the above methods to this setting by making our prompt learners the surrogate models $g$. It also shows that PL can, to a certain degree, be effective against unlearnable examples produced by these methods.

2.2. Learning from Unlearnable Examples

The emergence of unlearnable examples prompts investigation into the mechanisms that unauthorized trainers might exploit to extract useful features.

Adversarial Training (AT) [2, 19] involves generating adversarial examples [4, 12, 41] specifically tailored to the model under training, which are in turn used to train the model to enhance its robustness. It has also been known to be an effective approach to improving model generalization when training on unlearnable examples [11]. However, adversarial training is computationally expensive, and also degrades the model's performance on clean data [38], especially when the sample size is small [5].

For this reason, Image Shortcut Squeezing (ISS) [16] introduces a suite of simple image processing methods that show surprising effectiveness in mitigating the impact of unlearnable examples, without the costs associated with adversarial training. Grayscale removes the color information from the training images, while JPEG compression
(JPEG) instead performs lossy compression at a high compression rate on the training images. However, it was recently discovered [26] that simple image processing is much less effective against adaptively-optimized unlearnable examples. Building upon this idea, UEraser [25] proposes a stochastic augmentation pipeline with a wider range of transformations, and uses a simple adversarial augmentation scheme that optimizes the model only on the augmented images with the maximum loss (sketched below). This allows the model to learn the underlying features without being affected by the easily learnable shortcuts in unlearnable examples. While all these methods have shown effectiveness against unlearnable examples, they consider the problem of training models from scratch rather than leveraging pretrained models.
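A minimal sketch of UEraser-style adversarial augmentation follows: each sample is augmented $k$ times and, per sample, the view with the maximum current loss is kept for training. The augmentation pool here is deliberately small and illustrative; UEraser's actual pipeline is far wider.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.transforms as T

# Deliberately small augmentation pool (illustrative only).
AUGS = T.Compose([T.RandomResizedCrop(32, scale=(0.5, 1.0)),
                  T.RandomHorizontalFlip()])

def max_loss_views(model, x, y, k=4):
    """For each sample, keep the augmented view with the maximum current loss."""
    views = torch.stack([AUGS(x) for _ in range(k)])             # (k, B, C, H, W)
    losses = torch.stack([F.cross_entropy(model(v), y, reduction="none")
                          for v in views])                       # (k, B)
    worst = losses.argmax(dim=0)                                 # hardest view per sample
    return views[worst, torch.arange(x.size(0))]                 # (B, C, H, W)

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
batch = max_loss_views(model, x, y)   # train on `batch` instead of `x`
```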
2.3. Prompt Learning

Vision-language pretrained models (VLMs), such as CLIP [27] and ALIGN [13], represent a significant advancement in multi-modal learning. Trained on extensive image-text pair datasets, VLMs have two main components: an image encoder $f_{\mathrm{im}}: \mathcal{X} \to \mathbb{R}^d$ and a text encoder $f_{\mathrm{tx}}: \mathcal{W} \to \mathbb{R}^d$, where $\mathcal{X}$ and $\mathcal{W}$ are the input image and text domains respectively; the two encoders learn a shared embedding space $\mathbb{R}^d$ for images and texts. Image and text pairs that are semantically similar have similar embeddings in this space, and vice versa. This makes VLMs versatile for a wide range of downstream tasks, including image classification [27], captioning [1], retrieval [18], and providing guidance for image generation [7, 28]. Notably, VLMs show impressive zero-shot performance: they can perform well on new tasks without task-specific training, showcasing their generalization capabilities.
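To illustrate the dual-encoder design, the toy sketch below maps an image and a tokenized caption into a shared $\mathbb{R}^d$ space and scores them by cosine similarity; the two tiny encoders are stand-ins for CLIP's towers, purely for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d = 64  # dimensionality of the shared embedding space R^d

# Toy stand-ins for f_im : X -> R^d and f_tx : W -> R^d; a real VLM uses a
# ViT/ResNet image tower and a transformer text tower trained contrastively.
f_im = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, d))
f_tx = nn.EmbeddingBag(1000, d)   # bag-of-tokens "text encoder" (illustrative)

image = torch.rand(1, 3, 32, 32)
tokens = torch.tensor([[12, 7, 404]])            # a tokenized caption
z_im = F.normalize(f_im(image), dim=-1)          # unit-norm image feature
z_tx = F.normalize(f_tx(tokens), dim=-1)         # unit-norm text feature
similarity = (z_im * z_tx).sum(dim=-1)           # cosine similarity in [-1, 1]
```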
Prompt Engineering for VLMs seeks to use VLMs for image classification by constructing class-specific text prompts, e.g., "a photo of a {cls}" for the class cls, and comparing the model's similarity scores between the features of these prompts and those of the target image. The class yielding the highest similarity score is selected as the predicted class for the image. Formally, prompt engineering creates $M$ text prompts, embedded as a tensor $\mathbf{V} = \{\mathbf{v}_m\}_{m=1}^M \in \mathbb{R}^{M \times T \times h}$, where $\mathbf{v}_m$ denotes an embedded prompt prefix sequence of $T$ tokens. By appending to each prefix $\mathbf{v}_m$ the constant embedding vector $\mathbf{c}_k \in \mathbb{R}^h$ of the $k^{\mathrm{th}}$ class, the model can construct a holistic classifier $h_{\boldsymbol{\phi}}: \mathcal{X} \to \mathbb{R}^K$ by averaging the similarity scores across all $M$ prompts, where $\boldsymbol{\phi} = \{\boldsymbol{\theta}, \mathbf{V}, \ldots\}$ denotes all parameters in $h_{\boldsymbol{\phi}}$, consisting of the pretrained CLIP weights $\boldsymbol{\theta}$, a manually-designed prompt embedding $\mathbf{V}$, and other potential parameters used by the prompt learning algorithm. For the $k^{\mathrm{th}}$ class, we can obtain its logit as follows:

$$h_{\boldsymbol{\phi}}(\mathbf{x})_k = \frac{1}{M} \sum_{m=1}^{M} \operatorname{sim}\left(f_{\mathrm{im}}(\mathbf{x}), f_{\mathrm{tx}}\left(\left[\mathbf{v}_m, \mathbf{c}_k\right]\right)\right). \tag{4}$$

Here, $\operatorname{sim}$ denotes the similarity function used to compute the closeness between the image and text features, typically the cosine similarity. The $k^{\mathrm{th}}$ class probability can thus be computed using the softmax function, where $\tau$ is the softmax temperature and is usually set to 1:

$$p(y = k \mid \mathbf{x}, \boldsymbol{\phi}) = \frac{e^{h_{\boldsymbol{\phi}}(\mathbf{x})_k / \tau}}{\sum_{j=1}^{K} e^{h_{\boldsymbol{\phi}}(\mathbf{x})_j / \tau}}. \tag{5}$$
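A sketch of the classifier in (4) and the probabilities in (5) follows, assuming stand-in encoders for $f_{\mathrm{im}}$ and $f_{\mathrm{tx}}$; the function name `prompt_logits` and the toy encoders are our assumptions.

```python
import torch
import torch.nn.functional as F

def prompt_logits(f_im, f_tx, x, V, C):
    """Eq. (4): average cosine similarity over M prompt prefixes per class.

    V: (M, T, h) prompt prefix embeddings; C: (K, h) class token embeddings.
    """
    M, K = V.size(0), C.size(0)
    z_im = F.normalize(f_im(x), dim=-1)                        # (B, d)
    prompts = torch.cat(                                       # build [v_m, c_k]
        [V.unsqueeze(1).expand(M, K, -1, -1),
         C.unsqueeze(0).unsqueeze(2).expand(M, K, 1, -1)], dim=2)
    z_tx = F.normalize(f_tx(prompts.flatten(0, 1)), dim=-1)    # (M*K, d)
    sims = z_im @ z_tx.t()                                     # (B, M*K)
    return sims.view(-1, M, K).mean(dim=1)                     # h_phi(x): (B, K)

B, M, K, T, h = 2, 4, 10, 16, 512
W = torch.randn(3 * 32 * 32, h)
f_im = lambda imgs: imgs.flatten(1) @ W        # toy image encoder -> R^h
f_tx = lambda prompts: prompts.mean(dim=1)     # toy text encoder  -> R^h
logits = prompt_logits(f_im, f_tx, torch.rand(B, 3, 32, 32),
                       torch.randn(M, T, h), torch.randn(K, h))
probs = F.softmax(logits / 1.0, dim=-1)        # Eq. (5) with tau = 1
```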
Prompt Learning (PL)  While the above zero-shot method is effective for many tasks, it is limited by the quality of the manually-designed prompts, which may require extensive labor and expert knowledge to construct. In contrast, PL aims to automatically learn the prompt embeddings $\mathbf{V}$, and possibly other parameters in $\boldsymbol{\phi}$, by optimizing them during training. To improve downstream task performance, CoOp [43] learns the prompt embeddings $\mathbf{V}$, showing that even with few-shot examples it can generate better prompts than manual designs and generalize well to unseen tasks. CoCoOp [42] builds upon CoOp by introducing a trainable meta-net that learns to generate prompt embeddings from the extracted image features, in order to improve the model's performance on unseen tasks. KgCoOp [39] further regularizes the prompt embeddings to be close to the initial handcrafted prompts, showing that retaining proximity to the original prompts improves generalization to unseen tasks. Finally, ProDA [17] models the prompt embedding distribution as a Gaussian and encourages orthogonality among the prompt embeddings. This paper presents the first work to highlight that prompt learning can be effective in learning useful features from unlearnable examples, even under few-shot scenarios.

3. The $\mathrm{A}^3$ Method

3.1. Adaptive UEs Targeting PL

Surrogate-based methods (EM [11], REM [9], and HYPO [33]) for synthesizing UEs typically train models from scratch and do not assume low data availability, which makes them an unsuitable choice for affecting the PL process of the pretrained CLIP model (Table 1).
To address this limitation, we first introduce an adaptive approach for all of these methods that uses the PL process as the surrogate model. To make the UEs stronger, we also assume that the UEs are synthesized with the same PL method and the same pretrained CLIP model. We implement adaptive variants of EM, REM, and HYPO; as they are stronger UEs than the original methods, our experiments use these adaptive variants by default. Recalling REM [9] from Section 2, we adapt its objective to PL, specifically CoCoOp [42] in this paper, by using the holistic classifier $h_{\boldsymbol{\phi}}$ in (4) as the surrogate model. The REM objective for seeking $\boldsymbol{\delta}$ under PL is thus:

$$\min_{(\boldsymbol{\delta}, \mathbf{V}, \dots)} \max_{\boldsymbol{\eta}}\ \mathbb{E}_{(\mathbf{x}_i, y_i) \sim \mathcal{D}_{\mathrm{clean}}} \left[\mathcal{L}\left(h_{\boldsymbol{\phi}}(\mathbf{x}_i + \boldsymbol{\delta}_i + \boldsymbol{\eta}), y_i\right)\right], \tag{6}$$

where $\boldsymbol{\eta} \in \mathcal{B}_p(\mathbf{x}_i, \epsilon)$ is the $\epsilon$-bounded $\ell_p$-norm adversarial noise.
In this adaptive context, EM [11] simplifies the above REM objective by removing the inner maximization over $\boldsymbol{\eta}$ and setting $\boldsymbol{\eta} = \mathbf{0}$. Similarly, HYPO [33] further assumes that $\mathbf{V}$ is kept constant at its initial manual design, and searches for the perturbations $\boldsymbol{\delta}$ directly. A sketch of the adaptive objective (6) is given below.

Surrogate-free methods such as LSP [40], AR [29] and OPS [36] do not rely on a surrogate model training process, and directly prescribe the perturbations for a given set of clean examples. Therefore, they do not have an adaptive counterpart. In our experiments, we examine the performance of these methods by directly applying them to the training data used by the prompt learning process.
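Under the assumptions above, one step of the min-max objective (6) might look like the following sketch: an inner PGD ascent over the adversarial noise $\boldsymbol{\eta}$, followed by a descent step on the unlearnable noise $\boldsymbol{\delta}$; updating the prompt-learner parameters $(\mathbf{V}, \dots)$ is left to their own optimizer. Step sizes and loop counts are illustrative.

```python
import torch
import torch.nn.functional as F

def adaptive_ue_step(h_phi, x, y, delta, eps, alpha=2 / 255, inner_steps=3):
    """One illustrative step of the min-max objective in Eq. (6)."""
    eta = torch.zeros_like(x)
    for _ in range(inner_steps):                          # max over eta
        eta.requires_grad_(True)
        loss = F.cross_entropy(h_phi(x + delta + eta), y)
        (grad,) = torch.autograd.grad(loss, eta)
        with torch.no_grad():
            eta = (eta + alpha * grad.sign()).clamp(-eps, eps)
    delta = delta.detach().requires_grad_(True)           # min over delta
    loss = F.cross_entropy(h_phi(x + delta + eta), y)
    (grad,) = torch.autograd.grad(loss, delta)
    with torch.no_grad():
        delta = (delta - alpha * grad.sign()).clamp(-eps, eps)
        delta = (x + delta).clamp(0, 1) - x               # stay in X = [0, 1]
    return delta

# Toy usage with a stand-in h_phi; a real run would use the CoCoOp learner.
h_phi = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x, y = torch.rand(4, 3, 32, 32), torch.randint(0, 10, (4,))
delta = adaptive_ue_step(h_phi, x, y, torch.zeros_like(x), eps=8 / 255)
```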
" + }, + { + "bbox": [ + 313, + 217, + 556, + 398 + ], + "type": "inline_equation", + "content": "\\mathrm{A}^3" + }, + { + "bbox": [ + 313, + 217, + 556, + 398 + ], + "type": "text", + "content": " provides a pool of diverse image and text augmentation strategies " + }, + { + "bbox": [ + 313, + 217, + 556, + 398 + ], + "type": "inline_equation", + "content": "\\mathcal{A}_{\\mathrm{im}}" + }, + { + "bbox": [ + 313, + 217, + 556, + 398 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 217, + 556, + 398 + ], + "type": "inline_equation", + "content": "\\mathcal{A}_{\\mathrm{tx}}" + }, + { + "bbox": [ + 313, + 217, + 556, + 398 + ], + "type": "text", + "content": ". For each image-class training pair, it first samples " + }, + { + "bbox": [ + 313, + 217, + 556, + 398 + ], + "type": "inline_equation", + "content": "K_{\\mathrm{im}}" + }, + { + "bbox": [ + 313, + 217, + 556, + 398 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 217, + 556, + 398 + ], + "type": "inline_equation", + "content": "K_{\\mathrm{tx}}" + }, + { + "bbox": [ + 313, + 217, + 556, + 398 + ], + "type": "text", + "content": " different image and text augmentation strategies, and applies them to the image and text sample respectively. This results in " + }, + { + "bbox": [ + 313, + 217, + 556, + 398 + ], + "type": "inline_equation", + "content": "K_{\\mathrm{im}} \\times K_{\\mathrm{tx}}" + }, + { + "bbox": [ + 313, + 217, + 556, + 398 + ], + "type": "text", + "content": " distinct augmented pairs for each training sample. Following the prompt learning technique of CoCoOp [42], it then optimizes the prompt embeddings " + }, + { + "bbox": [ + 313, + 217, + 556, + 398 + ], + "type": "inline_equation", + "content": "\\mathbf{V}" + }, + { + "bbox": [ + 313, + 217, + 556, + 398 + ], + "type": "text", + "content": " and the meta-net weights " + }, + { + "bbox": [ + 313, + 217, + 556, + 398 + ], + "type": "inline_equation", + "content": "\\psi" + }, + { + "bbox": [ + 313, + 217, + 556, + 398 + ], + "type": "text", + "content": " to align the pair of augmented samples with the minimum similarity. Intuitively, training prompts with the most dissimilar augmented pairs of image and text features forces the model to learn from the underlying features rather than fixating on spurious correlations typically exploited by UEs." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 406, + 552, + 419 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 406, + 552, + 419 + ], + "spans": [ + { + "bbox": [ + 313, + 406, + 552, + 419 + ], + "type": "text", + "content": "3.3. Augmentations for Image and Text Modalities" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 423, + 556, + 532 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 423, + 556, + 532 + ], + "spans": [ + { + "bbox": [ + 313, + 423, + 556, + 532 + ], + "type": "text", + "content": "Image Augmentations As noted by Qin et al. [26], the effectiveness of simple augmentation strategies, such as Grayscale and JPEG compression, is greatly diminished in the context of adaptively synthesized UEs. To address this, we follow the approach of UEraser [25], which proposes an extensive set of image augmentation strategies, including not only standard techniques (e.g., random cropping, rotation, etc.), and more complex strategies including fractal-based transformations [21], and TrivialAugment [20]." 
3.3. Augmentations for Image and Text Modalities

Image Augmentations  As noted by Qin et al. [26], the effectiveness of simple augmentation strategies, such as grayscale conversion and JPEG compression, is greatly diminished in the context of adaptively synthesized UEs. To address this, we follow the approach of UEraser [25], which proposes an extensive set of image augmentation strategies, including not only standard techniques (e.g., random cropping and rotation), but also more complex strategies such as fractal-based transformations [21] and TrivialAugment [20].

Text Augmentations  CLIP models can leverage not only image features but also text embeddings, which makes them particularly effective in zero-shot classification tasks and in few-shot learning facilitated by PL algorithms. In our context of using PL as a defense against UEs, we therefore also introduce a set of text augmentation strategies, including techniques such as random token masking and reordering, which operate in the discrete token space of the text input, and small random rotations of the text embeddings; a sketch follows below.

For the details of the augmentation strategies used in $\mathrm{A}^3$ for both image and text, please refer to Appendix A.2.
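A few of the text augmentations named above, sketched under our own assumptions (the mask token id, masking rate, and rotation bound are illustrative choices, not the paper's exact settings):

```python
import torch

def mask_tokens(tokens, p=0.15, mask_id=0):
    """Randomly mask a fraction of discrete prompt tokens."""
    keep = torch.rand(tokens.shape) >= p
    return torch.where(keep, tokens, torch.full_like(tokens, mask_id))

def shuffle_tokens(tokens):
    """Randomly reorder a 1D token sequence."""
    return tokens[torch.randperm(tokens.numel())]

def rotate_embedding(t, max_deg=5.0):
    """Small random rotation in a random 2D plane of a text embedding.

    A cheap stand-in for 'small random rotations of the text embeddings'.
    """
    i, j = torch.randperm(t.numel())[:2]
    theta = torch.deg2rad(torch.rand(()) * max_deg)
    c, s = torch.cos(theta), torch.sin(theta)
    t = t.clone()
    ti, tj = t[i].clone(), t[j].clone()
    t[i], t[j] = c * ti - s * tj, s * ti + c * tj
    return t

tokens = torch.tensor([49406, 320, 1125, 539, 49407])  # toy token ids
masked, shuffled = mask_tokens(tokens), shuffle_tokens(tokens)
rotated = rotate_embedding(torch.randn(512))
```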
+ ], + "lines": [ + { + "bbox": [ + 55, + 251, + 555, + 288 + ], + "spans": [ + { + "bbox": [ + 55, + 251, + 555, + 288 + ], + "type": "text", + "content": "Figure 1. An overview of " + }, + { + "bbox": [ + 55, + 251, + 555, + 288 + ], + "type": "inline_equation", + "content": "A^3" + }, + { + "bbox": [ + 55, + 251, + 555, + 288 + ], + "type": "text", + "content": ". For each image-class training pair, " + }, + { + "bbox": [ + 55, + 251, + 555, + 288 + ], + "type": "inline_equation", + "content": "A^3" + }, + { + "bbox": [ + 55, + 251, + 555, + 288 + ], + "type": "text", + "content": " respectively sample " + }, + { + "bbox": [ + 55, + 251, + 555, + 288 + ], + "type": "inline_equation", + "content": "K_{\\mathrm{im}}" + }, + { + "bbox": [ + 55, + 251, + 555, + 288 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 251, + 555, + 288 + ], + "type": "inline_equation", + "content": "K_{\\mathrm{tx}}" + }, + { + "bbox": [ + 55, + 251, + 555, + 288 + ], + "type": "text", + "content": " different image and text augmentation strategies (" + }, + { + "bbox": [ + 55, + 251, + 555, + 288 + ], + "type": "inline_equation", + "content": "a_{\\mathrm{im}} \\sim \\mathcal{A}_{\\mathrm{im}}" + }, + { + "bbox": [ + 55, + 251, + 555, + 288 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 251, + 555, + 288 + ], + "type": "inline_equation", + "content": "a_{\\mathrm{tx}} \\sim \\mathcal{A}_{\\mathrm{tx}}" + }, + { + "bbox": [ + 55, + 251, + 555, + 288 + ], + "type": "text", + "content": "). It then optimizes the prompt embeddings " + }, + { + "bbox": [ + 55, + 251, + 555, + 288 + ], + "type": "inline_equation", + "content": "\\mathbf{V}" + }, + { + "bbox": [ + 55, + 251, + 555, + 288 + ], + "type": "text", + "content": " and the meta-net " + }, + { + "bbox": [ + 55, + 251, + 555, + 288 + ], + "type": "inline_equation", + "content": "m_{\\psi}" + }, + { + "bbox": [ + 55, + 251, + 555, + 288 + ], + "type": "text", + "content": " for empirical risk minimization by aligning pairs of augmented samples with the minimum similarity (i.e., maximum loss) between the image and text features." 
+ } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 306, + 296, + 370 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 306, + 296, + 370 + ], + "spans": [ + { + "bbox": [ + 55, + 306, + 296, + 370 + ], + "type": "text", + "content": "images and " + }, + { + "bbox": [ + 55, + 306, + 296, + 370 + ], + "type": "inline_equation", + "content": "K_{\mathrm{tx}}" + }, + { + "bbox": [ + 55, + 306, + 296, + 370 + ], + "type": "text", + "content": " text embeddings, by drawing from the sets of image " + }, + { + "bbox": [ + 55, + 306, + 296, + 370 + ], + "type": "inline_equation", + "content": "a_{\mathrm{im}} \sim \mathcal{A}_{\mathrm{im}}" + }, + { + "bbox": [ + 55, + 306, + 296, + 370 + ], + "type": "text", + "content": " and text " + }, + { + "bbox": [ + 55, + 306, + 296, + 370 + ], + "type": "inline_equation", + "content": "a_{\mathrm{tx}} \sim \mathcal{A}_{\mathrm{tx}}" + }, + { + "bbox": [ + 55, + 306, + 296, + 370 + ], + "type": "text", + "content": " augmentation strategies, and applying them to the image and the text embedding, respectively, forming a set of augmented images " + }, + { + "bbox": [ + 55, + 306, + 296, + 370 + ], + "type": "inline_equation", + "content": "\tilde{\mathbf{x}} \triangleq \{\tilde{\mathbf{x}}_i\}_{i=1}^{K_{\mathrm{im}}}" + }, + { + "bbox": [ + 55, + 306, + 296, + 370 + ], + "type": "text", + "content": " and a set of text embeddings " + }, + { + "bbox": [ + 55, + 306, + 296, + 370 + ], + "type": "inline_equation", + "content": "\tilde{\mathbf{t}} \triangleq \{\tilde{\mathbf{t}}_j\}_{j=1}^{K_{\mathrm{tx}}}" + }, + { + "bbox": [ + 55, + 306, + 296, + 370 + ], + "type": "text", + "content": ":" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 378, + 296, + 392 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 378, + 296, + 392 + ], + "spans": [ + { + "bbox": [ + 55, + 378, + 296, + 392 + ], + "type": "interline_equation", + "content": "\tilde{\mathbf{x}}_i = a_{\mathrm{im}}(\mathbf{x}), \quad a_{\mathrm{im}} \sim \mathcal{A}_{\mathrm{im}}, \quad \text{for } i \in [1, \dots, K_{\mathrm{im}}],", + "image_path": "61f5795dfb883e34d17c63b03bc95b92f3600b69c18bda16d26920fe0fa35a13.jpg" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 56, + 393, + 296, + 417 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 393, + 296, + 417 + ], + "spans": [ + { + "bbox": [ + 56, + 393, + 296, + 417 + ], + "type": "interline_equation", + "content": "\tilde{\mathbf{t}}_j = a_{\mathrm{tx}}\left(\left[\mathbf{v}_{j \bmod M}, \mathbf{c}_y\right]\right), \quad a_{\mathrm{tx}} \sim \mathcal{A}_{\mathrm{tx}}, \quad \text{for } j \in [1, \dots, K_{\mathrm{tx}}], \tag{7}", + "image_path": "e3e6114d683bb08fff88b3a507b3f8f46f5d4353c0e265dcb3e216d50c35bd00.jpg" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 418, + 296, + 489 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 418, + 296, + 489 + ], + "spans": [ + { + "bbox": [ + 55, + 418, + 296, + 489 + ], + "type": "text", + "content": "Here, we note that the text embedding before augmentation " + }, + { + "bbox": [ + 55, + 418, + 296, + 489 + ], + "type": "inline_equation", + "content": "[\mathbf{v}_{j \bmod M}, \mathbf{c}_y]" + }, + { + "bbox": [ + 55, + 418, + 296, + 489 + ], + "type": "text", + "content": " is a concatenation of the prompt embedding " + }, + { + "bbox": [ + 55, + 418, + 296, + 489 + ], + "type": "inline_equation", + "content": "\mathbf{v}_{j \bmod M}" + }, + { + "bbox": [ + 55, + 418, + 296, + 489 + ], + "type": "text", + "content": " and the class embedding " + }, + { + "bbox": [ + 55, + 418, + 296, + 489 + ], + "type": "inline_equation", + "content": "\mathbf{c}_y" + }, + { + "bbox": [ + 55, + 418, + 296, + 489 + ], + "type": "text", + "content": ". Recall that " + }, + { + "bbox": [ + 55, + 418, + 296, + 489 + ], + "type": "inline_equation", + "content": "M" + }, + { + "bbox": [ + 55, + 418, + 296, + 489 + ], + "type": "text", + "content": " is the number of prompt embeddings, and the modulus operation " + }, + { + "bbox": [ + 55, + 418, + 296, + 489 + ], + "type": "inline_equation", + "content": "j \bmod M" + }, + { + "bbox": [ + 55, + 418, + 296, + 489 + ], + "type": "text", + "content": " ensures that prompt embeddings are selected cyclically if " + }, + { + "bbox": [ + 55, + 418, + 296, + 489 + ], + "type": "inline_equation", + "content": "K_{\mathrm{tx}}" + }, + { + "bbox": [ + 55, + 418, + 296, + 489 + ], + "type": "text", + "content": " exceeds " + }, + { + "bbox": [ + 55, + 418, + 296, + 489 + ], + "type": "inline_equation", + "content": "M" + }, + { + "bbox": [ + 55, + 418, + 296, + 489 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 5 + },
+ { + "bbox": [ + 55, + 489, + 296, + 561 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 489, + 296, + 561 + ], + "spans": [ + { + "bbox": [ + 55, + 489, + 296, + 561 + ], + "type": "text", + "content": "Using the augmented samples, we can compute the similarity between the image and text features for each pair of augmented samples, where " + }, + { + "bbox": [ + 55, + 489, + 296, + 561 + ], + "type": "inline_equation", + "content": "\mathcal{S}(\tilde{\mathbf{x}}, \tilde{\mathbf{t}}) \in [-1, 1]^{K_{\mathrm{im}} \times K_{\mathrm{tx}}}" + }, + { + "bbox": [ + 55, + 489, + 296, + 561 + ], + "type": "text", + "content": " is the (cosine) similarity matrix containing the similarity of each image-text pair of augmented samples. Namely, for the " + }, + { + "bbox": [ + 55, + 489, + 296, + 561 + ], + "type": "inline_equation", + "content": "\tilde{\mathbf{x}}_i" + }, + { + "bbox": [ + 55, + 489, + 296, + 561 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 489, + 296, + 561 + ], + "type": "inline_equation", + "content": "\tilde{\mathbf{t}}_j" + }, + { + "bbox": [ + 55, + 489, + 296, + 561 + ], + "type": "text", + "content": " pair, we have:" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 105, + 569, + 296, + 585 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 569, + 296, + 585 + ], + "spans": [ + { + "bbox": [ + 105, + 569, + 296, + 585 + ], + "type": "interline_equation", + "content": "\mathcal{S}(\tilde{\mathbf{x}}, \tilde{\mathbf{t}})_{ij} = \operatorname{sim}\left(f_{\mathrm{im}}(\tilde{\mathbf{x}}_i), f_{\mathrm{tx}}(\tilde{\mathbf{t}}_j)\right), \tag{8}", + "image_path": "f517d38a984f7fd3676f473da9b46112699ec4111f998d50cdd249b6687b1c39.jpg" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 593, + 296, + 653 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 593, + 296, + 653 + ], + "spans": [ + { + "bbox": [ + 55, + 593, + 296, + 653 + ], + "type": "text", + "content": "and we optimize the trainable weights " + }, + { + "bbox": [ + 55, + 593, + 296, + 653 + ], + "type": "inline_equation", + "content": "\phi" + }, + { + "bbox": [ + 55, + 593, + 296, + 653 + ], + "type": "text", + "content": " by maximizing the alignment between the least similar pair of augmented image and text features. Putting it all together, we have the following min-max problem, which can be optimized using mini-batch stochastic gradient descent (SGD):" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 84, + 661, + 296, + 682 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 84, + 661, + 296, + 682 + ], + "spans": [ + { + "bbox": [ + 84, + 661, + 296, + 682 + ], + "type": "interline_equation", + "content": "\min_{\phi} \mathbb{E}_{(\mathbf{x}, y) \sim \mathcal{D}_{\mathrm{ue}}}\left[\max_{(i, j)} \mathcal{L}\left(\mathcal{S}(\tilde{\mathbf{x}}, \tilde{\mathbf{t}})_{ij}, y\right)\right], \tag{9}", + "image_path": "f1f1910ada0f6e853e4e84c5376ead14dbca882da022d89a6fc587e2875ad3bf.jpg" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 55, + 689, + 296, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 689, + 296, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 689, + 296, + 714 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 55, + 689, + 296, + 714 + ], + "type": "inline_equation", + "content": "\mathcal{L}" + }, + { + "bbox": [ + 55, + 689, + 296, + 714 + ], + "type": "text", + "content": " is the softmax cross-entropy loss, and " + }, + { + "bbox": [ + 55, + 689, + 296, + 714 + ], + "type": "inline_equation", + "content": "\mathcal{D}_{\mathrm{ue}}" + }, + { + "bbox": [ + 55, + 689, + 296, + 714 + ], + "type": "text", + "content": " is the set of unlearnable training examples." + } + ] + } + ], + "index": 10 + },
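The min-max step in (8)-(9) can be sketched as follows. This is an illustrative reading, not the authors' released code: it assumes PyTorch, encoders `f_im`/`f_tx` defined elsewhere, and augmented text embeddings available for every class so the softmax cross-entropy has one logit per class.

```python
# Hedged sketch of eqs. (8)-(9): compute the K_im x K_tx loss surface
# and align the worst (maximum-loss) image/text augmentation pair.
import torch
import torch.nn.functional as F

def worst_pair_loss(x_tilde, t_tilde_per_class, y, f_im, f_tx):
    """x_tilde: list of K_im augmented images;
    t_tilde_per_class: [C][K_tx] augmented text embeddings per class;
    y: ground-truth class index."""
    img = F.normalize(torch.stack([f_im(x) for x in x_tilde]), dim=-1)   # [K_im, d]
    C = len(t_tilde_per_class)
    losses = []
    for j in range(len(t_tilde_per_class[0])):                           # over K_tx
        txt = F.normalize(torch.stack(
            [f_tx(t_tilde_per_class[c][j]) for c in range(C)]), dim=-1)  # [C, d]
        logits = img @ txt.t()                                           # cosine similarities
        target = torch.full((img.shape[0],), y, dtype=torch.long)
        losses.append(F.cross_entropy(logits, target, reduction="none"))  # [K_im]
    return torch.stack(losses, dim=1).max()  # inner max over (i, j) in eq. (9)
```

The outer minimization over φ in (9) is then ordinary mini-batch SGD on this worst-pair loss (`loss.backward(); optimizer.step()`).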
+ { + "bbox": [ + 313, + 306, + 548, + 319 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 306, + 548, + 319 + ], + "spans": [ + { + "bbox": [ + 313, + 306, + 548, + 319 + ], + "type": "text", + "content": "3.5. More Augmentation Diversity with Meta-Net" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 324, + 555, + 468 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 324, + 555, + 468 + ], + "spans": [ + { + "bbox": [ + 313, + 324, + 555, + 468 + ], + "type": "text", + "content": "To further enhance the diversity of the augmented samples, we introduce a meta-net " + }, + { + "bbox": [ + 313, + 324, + 555, + 468 + ], + "type": "inline_equation", + "content": "m_{\psi}:\mathbb{R}^d\to \mathbb{R}^{M\times (T - 1)\times h}" + }, + { + "bbox": [ + 313, + 324, + 555, + 468 + ], + "type": "text", + "content": " with trainable weights " + }, + { + "bbox": [ + 313, + 324, + 555, + 468 + ], + "type": "inline_equation", + "content": "\psi" + }, + { + "bbox": [ + 313, + 324, + 555, + 468 + ], + "type": "text", + "content": ", which is a small neural network that learns to predict prompt embeddings " + }, + { + "bbox": [ + 313, + 324, + 555, + 468 + ], + "type": "inline_equation", + "content": "\mathbf{V}" + }, + { + "bbox": [ + 313, + 324, + 555, + 468 + ], + "type": "text", + "content": " from " + }, + { + "bbox": [ + 313, + 324, + 555, + 468 + ], + "type": "inline_equation", + "content": "f_{\mathrm{im}}(\tilde{\mathbf{x}})" + }, + { + "bbox": [ + 313, + 324, + 555, + 468 + ], + "type": "text", + "content": ", i.e., the feature extracted from an augmented image by the image encoder. This allows us to generate more diverse augmented text embeddings in addition to the text augmentation strategies. While this could yield " + }, + { + "bbox": [ + 313, + 324, + 555, + 468 + ], + "type": "inline_equation", + "content": "K_{\mathrm{im}}\times K_{\mathrm{tx}}" + }, + { + "bbox": [ + 313, + 324, + 555, + 468 + ], + "type": "text", + "content": " different augmented prompt features, doing so would multiply the computational cost of CLIP's feature extraction, so we only apply the meta-net to one randomly chosen augmented image for each augmented text embedding. Using the meta-net, the similarity matrix in (8) thus becomes:" + } + ] + } + ], + "index": 12 + },
+ { + "bbox": [ + 331, + 479, + 434, + 493 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 331, + 479, + 434, + 493 + ], + "spans": [ + { + "bbox": [ + 331, + 479, + 434, + 493 + ], + "type": "interline_equation", + "content": "\mathcal{S}(\tilde{\mathbf{x}}, \tilde{\mathbf{t}})_{ij} = \operatorname{sim}(\mathbf{p}_i, \mathbf{q}_j), \tag{10}", + "image_path": "e1e587528e369257436ced5022a2e7807455bf4121938c0315a65ca90874c454.jpg" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 325, + 495, + 555, + 507 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 325, + 495, + 555, + 507 + ], + "spans": [ + { + "bbox": [ + 325, + 495, + 555, + 507 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 325, + 495, + 555, + 507 + ], + "type": "inline_equation", + "content": "\mathbf{p}_i = f_{\mathrm{im}}(\tilde{\mathbf{x}}_i)" + }, + { + "bbox": [ + 325, + 495, + 555, + 507 + ], + "type": "text", + "content": " and" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 358, + 510, + 528, + 523 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 358, + 510, + 528, + 523 + ], + "spans": [ + { + "bbox": [ + 358, + 510, + 528, + 523 + ], + "type": "interline_equation", + "content": "\mathbf{q}_j = f_{\mathrm{tx}}\left(\tilde{\mathbf{t}}_j + m_{\psi}(\mathbf{p}_k)\right), \quad k \sim \mathcal{U}\{1, K_{\mathrm{im}}\}.", + "image_path": "bff85665429893c29f678dabf5e83ac7ce9476a65ae80bd147d312dd00d0b0e6.jpg" + } + ] + } + ], + "index": 15 + },
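A hedged sketch of the meta-net step in (10) follows; the `MetaNet` architecture and dimensions are placeholders (the paper only specifies the mapping R^d → R^{M×(T−1)×h}; here M = 1, the default used later), and t̃_j is assumed to be a sequence of prompt tokens followed by the class embedding.

```python
# Illustrative sketch (not the authors' code) of eq. (10): the meta-net
# m_psi shifts each augmented text embedding using one randomly chosen
# augmented image feature p_k, k ~ U{1, K_im}.
import torch
import torch.nn as nn

class MetaNet(nn.Module):
    """Maps an image feature (dim d) to a shift for the (T-1) prompt
    tokens of width h; a single linear layer as a placeholder."""
    def __init__(self, d, ctx_len, h):
        super().__init__()
        self.net = nn.Linear(d, ctx_len * h)
        self.ctx_len, self.h = ctx_len, h

    def forward(self, p):                      # p: [d] image feature
        return self.net(p).view(self.ctx_len, self.h)

def augmented_text_feature(f_tx, t_tilde_j, m_psi, image_feats):
    """t_tilde_j: [ctx_len + 1, h] (prompt tokens + class token)."""
    k = torch.randint(len(image_feats), (1,)).item()
    shift = m_psi(image_feats[k])              # [ctx_len, h]
    shifted = t_tilde_j.clone()
    shifted[:-1] = shifted[:-1] + shift        # shift only the prompt tokens
    return f_tx(shifted)                       # q_j in eq. (10)
```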
+ { + "bbox": [ + 313, + 535, + 394, + 549 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 535, + 394, + 549 + ], + "spans": [ + { + "bbox": [ + 313, + 535, + 394, + 549 + ], + "type": "text", + "content": "4. Experiments" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 313, + 556, + 555, + 653 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 556, + 555, + 653 + ], + "spans": [ + { + "bbox": [ + 313, + 556, + 555, + 653 + ], + "type": "text", + "content": "Datasets For the main experiments, we evaluate " + }, + { + "bbox": [ + 313, + 556, + 555, + 653 + ], + "type": "inline_equation", + "content": "A^3" + }, + { + "bbox": [ + 313, + 556, + 555, + 653 + ], + "type": "text", + "content": " on 7 datasets, including ImageNet [6], Caltech-101 [8], Oxford Flowers-102 [22], Food-101 [3], Oxford-Pets [23], and UCF-101 [31]. These datasets cover various recognition tasks, including classification of generic objects, fine-grained classification, and action recognition. We proportionally resize and crop all images to " + }, + { + "bbox": [ + 313, + 556, + 555, + 653 + ], + "type": "inline_equation", + "content": "224 \times 224" + }, + { + "bbox": [ + 313, + 556, + 555, + 653 + ], + "type": "text", + "content": ", the input size for the image encoder " + }, + { + "bbox": [ + 313, + 556, + 555, + 653 + ], + "type": "inline_equation", + "content": "f_{\mathrm{im}}" + }, + { + "bbox": [ + 313, + 556, + 555, + 653 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 313, + 654, + 555, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 654, + 555, + 714 + ], + "spans": [ + { + "bbox": [ + 313, + 654, + 555, + 714 + ], + "type": "text", + "content": "Models In our experiments, unless otherwise specified, we use either ViT-B/16 or ResNet-50 as the backbone for the image encoder " + }, + { + "bbox": [ + 313, + 654, + 555, + 714 + ], + "type": "inline_equation", + "content": "f_{\mathrm{im}}" + }, + { + "bbox": [ + 313, + 654, + 555, + 714 + ], + "type": "text", + "content": ", and the text encoder " + }, + { + "bbox": [ + 313, + 654, + 555, + 714 + ], + "type": "inline_equation", + "content": "f_{\mathrm{tx}}" + }, + { + "bbox": [ + 313, + 654, + 555, + 714 + ], + "type": "text", + "content": " is a Transformer-based model [34]. All pretrained models are obtained from the official CLIP repository [27]." + } + ] + } + ], + "index": 18 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 314, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 314, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 314, + 757 + ], + "type": "text", + "content": "9511" + } + ] + } + ], + "index": 19 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 296, + 191 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 296, + 191 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 296, + 191 + ], + "type": "text", + "content": "Evaluation For all experiments in this section, unless otherwise specified, we consider a common few-shot learning setup with " + }, + { + "bbox": [ + 55, + 72, + 296, + 191 + ], + "type": "inline_equation", + "content": "S" + }, + { + "bbox": [ + 55, + 72, + 296, + 191 + ], + "type": "text", + "content": " labeled training examples per class, i.e., " + }, + { + "bbox": [ + 55, + 72, + 296, + 191 + ], + "type": "inline_equation", + "content": "S" + }, + { + "bbox": [ + 55, + 72, + 296, + 191 + ], + "type": "text", + "content": "-shot learning, where " + }, + { + "bbox": [ + 55, + 72, + 296, + 191 + ], + "type": "inline_equation", + "content": "S = 16" + }, + { + "bbox": [ + 55, + 72, + 296, + 191 + ], + "type": "text", + "content": " by default. We used a context length " + }, + { + "bbox": [ + 55, + 72, + 296, + 191 + ], + "type": "inline_equation", + "content": "T" + }, + { + "bbox": [ + 55, + 72, + 296, + 191 + ], + "type": "text", + "content": " of 4, and " + }, + { + "bbox": [ + 55, + 72, + 296, + 191 + ], + "type": "inline_equation", + "content": "M = 1" + }, + { + "bbox": [ + 55, + 72, + 296, + 191 + ], + "type": "text", + "content": " prompt embedding. For all surrogate-based attacks, we optimized perturbations " + }, + { + "bbox": [ + 55, + 72, + 296, + 191 + ], + "type": "inline_equation", + "content": "\delta" + }, + { + "bbox": [ + 55, + 72, + 296, + 191 + ], + "type": "text", + "content": " for 15 epochs with a cosine annealing scheduler and a learning rate of 0.002. For learning, we adopted the SGD optimizer with a momentum of 0.9 and a weight decay of " + }, + { + "bbox": [ + 55, + 72, + 296, + 191 + ], + "type": "inline_equation", + "content": "5 \times 10^{-4}" + }, + { + "bbox": [ + 55, + 72, + 296, + 191 + ], + "type": "text", + "content": ". We consider the following dataset split protocols:" + } + ] + } + ], + "index": 0 + },
+ { + "bbox": [ + 55, + 193, + 296, + 361 + ], + "type": "list", + "angle": 0, + "index": 3, + "blocks": [ + { + "bbox": [ + 55, + 193, + 296, + 239 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 193, + 296, + 239 + ], + "spans": [ + { + "bbox": [ + 55, + 193, + 296, + 239 + ], + "type": "text", + "content": "- Standard We used the standard train-test splits from [42, 43] to ensure reproducibility. In this setting, all classes are included in the training phase, where each class contains " + }, + { + "bbox": [ + 55, + 193, + 296, + 239 + ], + "type": "inline_equation", + "content": "S" + }, + { + "bbox": [ + 55, + 193, + 296, + 239 + ], + "type": "text", + "content": " labeled examples." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 240, + 296, + 361 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 240, + 296, + 361 + ], + "spans": [ + { + "bbox": [ + 55, + 240, + 296, + 361 + ], + "type": "text", + "content": "- Base-to-Novel To better evaluate the model's generalization ability in few-shot scenarios, we also followed [42] to divide each dataset into two equal and non-overlapping groups of classes, where the first group (base) is used for training the prompt learning model and for validation, and the second group (novel), which contains unseen classes, is used for performance testing. For this protocol, we reported the test accuracies on the base classes " + }, + { + "bbox": [ + 55, + 240, + 296, + 361 + ], + "type": "inline_equation", + "content": "\alpha_{\mathrm{b}}" + }, + { + "bbox": [ + 55, + 240, + 296, + 361 + ], + "type": "text", + "content": ", the novel classes " + }, + { + "bbox": [ + 55, + 240, + 296, + 361 + ], + "type": "inline_equation", + "content": "\alpha_{\mathrm{n}}" + }, + { + "bbox": [ + 55, + 240, + 296, + 361 + ], + "type": "text", + "content": ", and also their harmonic mean " + }, + { + "bbox": [ + 55, + 240, + 296, + 361 + ], + "type": "inline_equation", + "content": "\alpha_{\mathrm{h}} = 2 / (\alpha_{\mathrm{b}}^{-1} + \alpha_{\mathrm{n}}^{-1})" + }, + { + "bbox": [ + 55, + 240, + 296, + 361 + ], + "type": "text", + "content": " (a worked example is given below)." + } + ] + } + ], + "index": 2 + } + ], + "sub_type": "text" + },
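As a quick sanity check of the harmonic-mean metric, the following worked example reproduces a number reported later in Table 2:

```python
# Worked example of the base-to-novel harmonic mean
# alpha_h = 2 / (1/alpha_b + 1/alpha_n).
def harmonic_mean(alpha_b: float, alpha_n: float) -> float:
    return 2.0 / (1.0 / alpha_b + 1.0 / alpha_n)

# Zero-shot ImageNet row of Table 2: alpha_b = 67.50, alpha_n = 60.20.
print(round(harmonic_mean(67.50, 60.20), 2))  # -> 63.64, matching alpha_h
```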
+ { + "bbox": [ + 55, + 361, + 296, + 384 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 361, + 296, + 384 + ], + "spans": [ + { + "bbox": [ + 55, + 361, + 296, + 384 + ], + "type": "text", + "content": "Unlearnable Example Methods Different methods consider distinct perturbation types and perturbation budgets (a sketch of these norm constraints follows this list):" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 385, + 296, + 517 + ], + "type": "list", + "angle": 0, + "index": 8, + "blocks": [ + { + "bbox": [ + 55, + 385, + 295, + 445 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 385, + 295, + 445 + ], + "spans": [ + { + "bbox": [ + 55, + 385, + 295, + 445 + ], + "type": "text", + "content": "- Surrogate-based methods such as EM [11], REM [9], and HYPO [33] consider " + }, + { + "bbox": [ + 55, + 385, + 295, + 445 + ], + "type": "inline_equation", + "content": "\ell_{\infty}" + }, + { + "bbox": [ + 55, + 385, + 295, + 445 + ], + "type": "text", + "content": "-bounded perturbations, with a perturbation budget of " + }, + { + "bbox": [ + 55, + 385, + 295, + 445 + ], + "type": "inline_equation", + "content": "8/255" + }, + { + "bbox": [ + 55, + 385, + 295, + 445 + ], + "type": "text", + "content": ". As these methods require surrogate models, we used the approach in Section 3.1 to adapt them to the prompt learning setting." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 445, + 296, + 491 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 445, + 296, + 491 + ], + "spans": [ + { + "bbox": [ + 55, + 445, + 296, + 491 + ], + "type": "text", + "content": "- LSP [40] and AR [29] both use " + }, + { + "bbox": [ + 55, + 445, + 296, + 491 + ], + "type": "inline_equation", + "content": "\ell_2" + }, + { + "bbox": [ + 55, + 445, + 296, + 491 + ], + "type": "text", + "content": "-norm perturbations, but their perturbation budgets differ due to their original setups: LSP assumes a perturbation budget of 1.30, while AR uses 1.00." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 493, + 296, + 517 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 493, + 296, + 517 + ], + "spans": [ + { + "bbox": [ + 55, + 493, + 296, + 517 + ], + "type": "text", + "content": "- OPS [36] is model-agnostic and uses " + }, + { + "bbox": [ + 55, + 493, + 296, + 517 + ], + "type": "inline_equation", + "content": "\ell_0" + }, + { + "bbox": [ + 55, + 493, + 296, + 517 + ], + "type": "text", + "content": "-norm perturbations with a perturbation budget of 1 by default." + } + ] + } + ], + "index": 7 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 55, + 517, + 296, + 578 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 517, + 296, + 578 + ], + "spans": [ + { + "bbox": [ + 55, + 517, + 296, + 578 + ], + "type": "text", + "content": "For additional details regarding the experimental setup, please refer to Appendix B. We will now present the results and main findings of our experiments below. Appendix C provides additional results, including a sensitivity analysis of the hyperparameters and more adaptive variants." + } + ] + } + ], + "index": 9 + },
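To illustrate how these budget types differ, here is a hedged NumPy sketch, not code from any of the cited methods, of projecting a perturbation δ onto each norm ball; for OPS, the ℓ0 "budget of 1" is approximated by keeping the single largest-magnitude entry.

```python
# Illustrative projection of a perturbation delta onto the three
# budget types discussed above (all thresholds are the paper's budgets).
import numpy as np

def project(delta, norm, eps):
    if norm == "linf":                       # e.g. EM/REM/HYPO, eps = 8/255
        return np.clip(delta, -eps, eps)
    if norm == "l2":                         # e.g. LSP (1.30), AR (1.00)
        n = np.linalg.norm(delta)
        return delta if n <= eps else delta * (eps / n)
    if norm == "l0":                         # e.g. OPS, eps = 1
        flat = np.abs(delta).reshape(-1)
        keep = np.argsort(flat)[-int(eps):]  # keep the eps largest entries
        mask = np.zeros_like(flat)
        mask[keep] = 1.0
        return (delta.reshape(-1) * mask).reshape(delta.shape)
    raise ValueError(norm)
```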
Prompt Learning under UEs" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 55, + 605, + 296, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 605, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 605, + 296, + 713 + ], + "type": "text", + "content": "Prompt learning generalizes well to existing UEs. Table 1 compares EM-0 and EM, where EM-0 synthesizes UEs by training ResNet-18 surrogate models from scratch on the Caltech-101 dataset, and EM adapts the EM method to prompt learning using Section 3.1. Notably, we found that UE methods that can effectively thwart [26] supervised learning of small models (e.g., ResNet-18) on small datasets (e.g., CIFAR-10 [14]) are much less effective when transferred to CLIP-based prompt learning algorithms." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 72, + 553, + 156 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 72, + 553, + 156 + ], + "spans": [ + { + "bbox": [ + 313, + 72, + 553, + 156 + ], + "type": "text", + "content": "Prompt learning generalizes better with increasing number of shots " + }, + { + "bbox": [ + 313, + 72, + 553, + 156 + ], + "type": "inline_equation", + "content": "S" + }, + { + "bbox": [ + 313, + 72, + 553, + 156 + ], + "type": "text", + "content": ". Table 1 also shows that the performance of all prompt learning algorithms increases with the number of shots " + }, + { + "bbox": [ + 313, + 72, + 553, + 156 + ], + "type": "inline_equation", + "content": "S" + }, + { + "bbox": [ + 313, + 72, + 553, + 156 + ], + "type": "text", + "content": ", even when all shots are UEs. This trend can be observed across all UE methods, but the adaptive EM method consistently suppresses the performance gains from increasing " + }, + { + "bbox": [ + 313, + 72, + 553, + 156 + ], + "type": "inline_equation", + "content": "S" + }, + { + "bbox": [ + 313, + 72, + 553, + 156 + ], + "type": "text", + "content": " the most." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 157, + 555, + 361 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 157, + 555, + 361 + ], + "spans": [ + { + "bbox": [ + 313, + 157, + 555, + 361 + ], + "type": "text", + "content": "CoCoOp and ProDA show increased robustness against UEs, while KgCoOp is the most fragile in Table 1. We speculate that this is because the meta-net used in CoCoOp may be able to absorb the shortcut present in the UEs, and ProDA models the Gaussian distribution of prompt embeddings, making it more robust to input-side perturbations. While trained on clean data, KgCoOp [42] exhibits the best performance, as shown in the \"Clean\" row of Table 1. However, it is notably prone to UEs, showing the largest performance drop when UEs are introduced, especially when using the adaptive EM method. This suggests that KgCoOp, guided by the regularization to be in close proximity to the initial manual prompts, cannot effectively evade crafted UEs by EM based on the initial prompts. Because of the robustness of CoCoOp under our default number of shots (" + }, + { + "bbox": [ + 313, + 157, + 555, + 361 + ], + "type": "inline_equation", + "content": "S = 16" + }, + { + "bbox": [ + 313, + 157, + 555, + 361 + ], + "type": "text", + "content": "), we chose it as the baseline algorithm for the subsequent experiments." 
+ } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 361, + 556, + 433 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 361, + 556, + 433 + ], + "spans": [ + { + "bbox": [ + 313, + 361, + 556, + 433 + ], + "type": "text", + "content": "Without proper defenses, the model is better off not learning from UEs at all. Interestingly, the performance of CoCoOp trained on UEs is usually worse than its zero-shot performance. This suggests that UEs can indeed be harmful to the model's performance. This behavior can be observed in Tables 1 and 2." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 313, + 441, + 455, + 455 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 441, + 455, + 455 + ], + "spans": [ + { + "bbox": [ + 313, + 441, + 455, + 455 + ], + "type": "text", + "content": "4.2. Prompt Learning with " + }, + { + "bbox": [ + 313, + 441, + 455, + 455 + ], + "type": "inline_equation", + "content": "\mathbf{A}^3" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 313, + 460, + 554, + 532 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 460, + 554, + 532 + ], + "spans": [ + { + "bbox": [ + 313, + 460, + 554, + 532 + ], + "type": "inline_equation", + "content": "\mathbf{A}^3" + }, + { + "bbox": [ + 313, + 460, + 554, + 532 + ], + "type": "text", + "content": " is very effective (up to " + }, + { + "bbox": [ + 313, + 460, + 554, + 532 + ], + "type": "inline_equation", + "content": "33\%" + }, + { + "bbox": [ + 313, + 460, + 554, + 532 + ], + "type": "text", + "content": " better than CoCoOp) in mitigating UEs. In Table 2, we observe that " + }, + { + "bbox": [ + 313, + 460, + 554, + 532 + ], + "type": "inline_equation", + "content": "\mathrm{A}^3" + }, + { + "bbox": [ + 313, + 460, + 554, + 532 + ], + "type": "text", + "content": " consistently outperforms CoCoOp across all datasets, producing " + }, + { + "bbox": [ + 313, + 460, + 554, + 532 + ], + "type": "inline_equation", + "content": "15\%" + }, + { + "bbox": [ + 313, + 460, + 554, + 532 + ], + "type": "text", + "content": " to " + }, + { + "bbox": [ + 313, + 460, + 554, + 532 + ], + "type": "inline_equation", + "content": "33\%" + }, + { + "bbox": [ + 313, + 460, + 554, + 532 + ], + "type": "text", + "content": " higher " + }, + { + "bbox": [ + 313, + 460, + 554, + 532 + ], + "type": "inline_equation", + "content": "\alpha_{\mathrm{h}}" + }, + { + "bbox": [ + 313, + 460, + 554, + 532 + ], + "type": "text", + "content": " than CoCoOp for surrogate-based methods (EM, REM, HYPO), and " + }, + { + "bbox": [ + 313, + 460, + 554, + 532 + ], + "type": "inline_equation", + "content": "6\%" + }, + { + "bbox": [ + 313, + 460, + 554, + 532 + ], + "type": "text", + "content": " to " + }, + { + "bbox": [ + 313, + 460, + 554, + 532 + ], + "type": "inline_equation", + "content": "30\%" + }, + { + "bbox": [ + 313, + 460, + 554, + 532 + ], + "type": "text", + "content": " higher for surrogate-free methods (LSP, AR, OPS)."
+ } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 313, + 532, + 555, + 616 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 532, + 555, + 616 + ], + "spans": [ + { + "bbox": [ + 313, + 532, + 555, + 616 + ], + "type": "text", + "content": "The image and text augmentations in " + }, + { + "bbox": [ + 313, + 532, + 555, + 616 + ], + "type": "inline_equation", + "content": "\mathbf{A}^3" + }, + { + "bbox": [ + 313, + 532, + 555, + 616 + ], + "type": "text", + "content": " are both crucial for its effectiveness. Table 3 performs an ablation study on the individual contributions of image and text augmentations. We note that using either image-only or text-only augmentations can improve the model's performance, but the combination of both is the most effective, giving large accuracy gains over CoCoOp." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 313, + 616, + 555, + 689 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 616, + 555, + 689 + ], + "spans": [ + { + "bbox": [ + 313, + 616, + 555, + 689 + ], + "type": "text", + "content": "Simple augmentation strategies fall short of " + }, + { + "bbox": [ + 313, + 616, + 555, + 689 + ], + "type": "inline_equation", + "content": "\mathbf{A}^3" + }, + { + "bbox": [ + 313, + 616, + 555, + 689 + ], + "type": "text", + "content": "'s performance. We also highlight in Table 3 that applying simple augmentation strategies such as Grayscale and JPEG compression on CoCoOp can certainly improve over the CoCoOp baseline. However, they are still outperformed by " + }, + { + "bbox": [ + 313, + 616, + 555, + 689 + ], + "type": "inline_equation", + "content": "\mathbf{A}^3" + }, + { + "bbox": [ + 313, + 616, + 555, + 689 + ], + "type": "text", + "content": ", sometimes with a large margin (" + }, + { + "bbox": [ + 313, + 616, + 555, + 689 + ], + "type": "inline_equation", + "content": "\geq 25\%" + }, + { + "bbox": [ + 313, + 616, + 555, + 689 + ], + "type": "text", + "content": ")." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 313, + 689, + 554, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 689, + 554, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 689, + 554, + 713 + ], + "type": "text", + "content": "A large arsenal of image augmentation strategies also falls short of " + }, + { + "bbox": [ + 313, + 689, + 554, + 713 + ], + "type": "inline_equation", + "content": "\mathrm{A}^3" + }, + { + "bbox": [ + 313, + 689, + 554, + 713 + ], + "type": "text", + "content": "'s performance. Table 3 also shows that while" + } + ] + } + ], + "index": 19 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 315, + 757 + ], + "type": "text", + "content": "9512" + } + ] + } + ], + "index": 20 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 56, + 180, + 298, + 407 + ], + "blocks": [ + { + "bbox": [ + 55, + 70, + 296, + 180 + ], + "lines": [ + { + "bbox": [ + 55, + 70, + 296, + 180 + ], + "spans": [ + { + "bbox": [ + 55, + 70, + 296, + 180 + ], + "type": "text", + "content": "Table 1. 
Test accuracies " + }, + { + "bbox": [ + 55, + 70, + 296, + 180 + ], + "type": "inline_equation", + "content": "(\\%)" + }, + { + "bbox": [ + 55, + 70, + 296, + 180 + ], + "type": "text", + "content": " of prompt learning algorithms on an unlearnability-poisoned Caltech-101 dataset under varying number of shots " + }, + { + "bbox": [ + 55, + 70, + 296, + 180 + ], + "type": "inline_equation", + "content": "S" + }, + { + "bbox": [ + 55, + 70, + 296, + 180 + ], + "type": "text", + "content": ". Note that \"EM-0\" is the original EM [11] method, and \"EM\" is our adaptive variant (Section 3.1). The image encoder of the CLIP model is ResNet-50, and the zero-shot accuracy is " + }, + { + "bbox": [ + 55, + 70, + 296, + 180 + ], + "type": "inline_equation", + "content": "86.00\\%" + }, + { + "bbox": [ + 55, + 70, + 296, + 180 + ], + "type": "text", + "content": ". We also report the average accuracy across unlearnability methods (the \"Avg.\" rows) and prompt learning algorithms (the \"Avg.\" column). We highlight the best prompt learning algorithm against each unlearnability method in bold, and underline the strongest unlearnability methods for each prompt learning algorithm." + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 56, + 180, + 298, + 407 + ], + "lines": [ + { + "bbox": [ + 56, + 180, + 298, + 407 + ], + "spans": [ + { + "bbox": [ + 56, + 180, + 298, + 407 + ], + "type": "table", + "html": "
<table><tr><th>S</th><th>Method</th><th>CoOp</th><th>CoCoOp</th><th>ProDA</th><th>KgCoOp</th><th>Avg.</th></tr>
<tr><td>2</td><td>EM-0</td><td>78.23</td><td>80.45</td><td>81.91</td><td>77.86</td><td>79.61</td></tr>
<tr><td>2</td><td>EM</td><td>61.78</td><td>64.13</td><td>65.05</td><td>59.26</td><td>62.56</td></tr>
<tr><td>2</td><td>OPS</td><td>74.40</td><td>76.91</td><td>75.75</td><td>73.10</td><td>75.04</td></tr>
<tr><td>2</td><td>AR</td><td>80.63</td><td>81.83</td><td>80.74</td><td>79.89</td><td>80.77</td></tr>
<tr><td>2</td><td>Avg.</td><td>73.76</td><td>75.83</td><td>75.86</td><td>72.53</td><td>74.50</td></tr>
<tr><td>4</td><td>EM-0</td><td>83.29</td><td>85.19</td><td>86.02</td><td>80.83</td><td>83.83</td></tr>
<tr><td>4</td><td>EM</td><td>68.41</td><td>70.45</td><td>70.14</td><td>64.91</td><td>68.48</td></tr>
<tr><td>4</td><td>OPS</td><td>78.80</td><td>80.88</td><td>80.15</td><td>78.07</td><td>79.48</td></tr>
<tr><td>4</td><td>AR</td><td>82.54</td><td>83.70</td><td>84.41</td><td>82.66</td><td>83.33</td></tr>
<tr><td>4</td><td>Avg.</td><td>78.26</td><td>80.06</td><td>80.18</td><td>76.62</td><td>78.78</td></tr>
<tr><td>8</td><td>EM-0</td><td>86.18</td><td>89.44</td><td>90.64</td><td>85.42</td><td>87.92</td></tr>
<tr><td>8</td><td>EM</td><td>70.50</td><td>72.28</td><td>73.03</td><td>67.07</td><td>70.72</td></tr>
<tr><td>8</td><td>OPS</td><td>80.74</td><td>83.35</td><td>82.63</td><td>79.31</td><td>81.51</td></tr>
<tr><td>8</td><td>AR</td><td>85.81</td><td>88.41</td><td>88.50</td><td>83.77</td><td>86.62</td></tr>
<tr><td>8</td><td>Avg.</td><td>80.81</td><td>83.37</td><td>83.70</td><td>78.89</td><td>81.69</td></tr>
<tr><td>16</td><td>EM-0</td><td>90.76</td><td>90.85</td><td>90.48</td><td>89.60</td><td>90.42</td></tr>
<tr><td>16</td><td>EM</td><td>71.42</td><td>72.83</td><td>72.10</td><td>69.34</td><td>71.42</td></tr>
<tr><td>16</td><td>OPS</td><td>82.40</td><td>84.09</td><td>84.43</td><td>80.02</td><td>82.74</td></tr>
<tr><td>16</td><td>AR</td><td>88.63</td><td>90.74</td><td>90.08</td><td>86.46</td><td>88.98</td></tr>
<tr><td>16</td><td>Avg.</td><td>83.30</td><td>84.63</td><td>84.27</td><td>81.36</td><td>83.39</td></tr>
<tr><td>16</td><td>Clean</td><td>91.20</td><td>91.70</td><td>91.60</td><td>91.80</td><td>91.58</td></tr>
</table>
", + "image_path": "011bae5734cdd6190c8860c2c423a1bb6d23c0da25c496ba7e50090047bcbd02.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 422, + 296, + 541 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 422, + 296, + 541 + ], + "spans": [ + { + "bbox": [ + 55, + 422, + 296, + 541 + ], + "type": "text", + "content": "UEraser [25] is the most effective strategy to learn from UEs among all tested existing methods, the performance of UEraser is still inferior to " + }, + { + "bbox": [ + 55, + 422, + 296, + 541 + ], + "type": "inline_equation", + "content": "\\mathrm{A}^3" + }, + { + "bbox": [ + 55, + 422, + 296, + 541 + ], + "type": "text", + "content": ", particularly on HYPO, LSP and AR, with increased perturbation budgets. To preserve image semantics while maximizing augmentation diversity, such image augmentation strategies are often designed with a balance between the two. This trade-off choice may limit the ability to suppress UE perturbations in images. While this is also true for " + }, + { + "bbox": [ + 55, + 422, + 296, + 541 + ], + "type": "inline_equation", + "content": "\\mathrm{A}^3" + }, + { + "bbox": [ + 55, + 422, + 296, + 541 + ], + "type": "text", + "content": ", but the additional text augmentation strategies of " + }, + { + "bbox": [ + 55, + 422, + 296, + 541 + ], + "type": "inline_equation", + "content": "\\mathrm{A}^3" + }, + { + "bbox": [ + 55, + 422, + 296, + 541 + ], + "type": "text", + "content": " can help to work around this limitation." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 544, + 296, + 651 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 544, + 296, + 651 + ], + "spans": [ + { + "bbox": [ + 55, + 544, + 296, + 651 + ], + "type": "text", + "content": "Adversarial training (AT) may not be the most effective defense against UEs. As the settings of AT in Table 3 assume " + }, + { + "bbox": [ + 55, + 544, + 296, + 651 + ], + "type": "inline_equation", + "content": "\\ell_{\\infty}" + }, + { + "bbox": [ + 55, + 544, + 296, + 651 + ], + "type": "text", + "content": "-bounded perturbations with " + }, + { + "bbox": [ + 55, + 544, + 296, + 651 + ], + "type": "inline_equation", + "content": "\\epsilon = 8 / 255" + }, + { + "bbox": [ + 55, + 544, + 296, + 651 + ], + "type": "text", + "content": ", it notably struggles against large " + }, + { + "bbox": [ + 55, + 544, + 296, + 651 + ], + "type": "inline_equation", + "content": "\\ell_{\\infty}" + }, + { + "bbox": [ + 55, + 544, + 296, + 651 + ], + "type": "text", + "content": " perturbation budgets (" + }, + { + "bbox": [ + 55, + 544, + 296, + 651 + ], + "type": "inline_equation", + "content": "\\epsilon = 16 / 255" + }, + { + "bbox": [ + 55, + 544, + 296, + 651 + ], + "type": "text", + "content": "), and other types of perturbation norm-bounds (" + }, + { + "bbox": [ + 55, + 544, + 296, + 651 + ], + "type": "inline_equation", + "content": "\\ell_{2}" + }, + { + "bbox": [ + 55, + 544, + 296, + 651 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 544, + 296, + 651 + ], + "type": "inline_equation", + "content": "\\ell_{0}" + }, + { + "bbox": [ + 55, + 544, + 296, + 651 + ], + "type": "text", + "content": "), as it is not designed to handle them. There may also be an intricate balance between accuracy and robustness [38], which could result in a seesaw effect in the performance as the perturbation budget increases." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 653, + 296, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 653, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 653, + 296, + 713 + ], + "type": "text", + "content": "Increasing the number of shots " + }, + { + "bbox": [ + 55, + 653, + 296, + 713 + ], + "type": "inline_equation", + "content": "S" + }, + { + "bbox": [ + 55, + 653, + 296, + 713 + ], + "type": "text", + "content": " improves " + }, + { + "bbox": [ + 55, + 653, + 296, + 713 + ], + "type": "inline_equation", + "content": "\mathbf{A}^3" + }, + { + "bbox": [ + 55, + 653, + 296, + 713 + ], + "type": "text", + "content": "'s performance. In Table 4, we found that while CoCoOp's performance metrics continue to improve with increasing " + }, + { + "bbox": [ + 55, + 653, + 296, + 713 + ], + "type": "inline_equation", + "content": "S" + }, + { + "bbox": [ + 55, + 653, + 296, + 713 + ], + "type": "text", + "content": ", they never surpass the performance of zero-shot CLIP. This echoes the findings in Table 1. On the other hand, " + }, + { + "bbox": [ + 55, + 653, + 296, + 713 + ], + "type": "inline_equation", + "content": "\mathbf{A}^3" + }, + { + "bbox": [ + 55, + 653, + 296, + 713 + ], + "type": "text", + "content": " im" + } + ] + } + ], + "index": 4 + }, + { + "type": "table", + "bbox": [ + 313, + 169, + 555, + 712 + ], + "blocks": [ + { + "bbox": [ + 313, + 70, + 555, + 169 + ], + "lines": [ + { + "bbox": [ + 313, + 70, + 555, + 169 + ], + "spans": [ + { + "bbox": [ + 313, + 70, + 555, + 169 + ], + "type": "text", + "content": "Table 2. Base-to-novel prompt learning accuracies (\\%) for CoCoOp and " + }, + { + "bbox": [ + 313, + 70, + 555, + 169 + ], + "type": "inline_equation", + "content": "\mathrm{A}^3" + }, + { + "bbox": [ + 313, + 70, + 555, + 169 + ], + "type": "text", + "content": " trained with unlearnable examples under " + }, + { + "bbox": [ + 313, + 70, + 555, + 169 + ], + "type": "inline_equation", + "content": "\ell_{\infty}" + }, + { + "bbox": [ + 313, + 70, + 555, + 169 + ], + "type": "text", + "content": "-bounded attacks. Rows with \"+\" indicate when " + }, + { + "bbox": [ + 313, + 70, + 555, + 169 + ], + "type": "inline_equation", + "content": "\mathrm{A}^3" + }, + { + "bbox": [ + 313, + 70, + 555, + 169 + ], + "type": "text", + "content": " is applied. " + }, + { + "bbox": [ + 313, + 70, + 555, + 169 + ], + "type": "inline_equation", + "content": "\alpha_{\mathrm{b}}" + }, + { + "bbox": [ + 313, + 70, + 555, + 169 + ], + "type": "text", + "content": " refers to the model accuracy on poisoned data, while " + }, + { + "bbox": [ + 313, + 70, + 555, + 169 + ], + "type": "inline_equation", + "content": "\alpha_{\mathrm{n}}" + }, + { + "bbox": [ + 313, + 70, + 555, + 169 + ], + "type": "text", + "content": " refers to the model accuracy on novel classes, excluding the poisoned classes. We also report the harmonic mean " + }, + { + "bbox": [ + 313, + 70, + 555, + 169 + ], + "type": "inline_equation", + "content": "\alpha_{\mathrm{h}} = 2 / (\alpha_{\mathrm{b}}^{-1} + \alpha_{\mathrm{n}}^{-1})" + }, + { + "bbox": [ + 313, + 70, + 555, + 169 + ], + "type": "text", + "content": ". For the \"Δ\" column, we report the test accuracy drop for the CoCoOp baseline from the clean training setting, and the accuracy gain for " + }, + { + "bbox": [ + 313, + 70, + 555, + 169 + ], + "type": "inline_equation", + "content": "\mathrm{A}^3" + }, + { + "bbox": [ + 313, + 70, + 555, + 169 + ], + "type": "text", + "content": " over CoCoOp. The backbone is ViT-B/16." + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 313, + 169, + 555, + 712 + ], + "lines": [ + { + "bbox": [ + 313, + 169, + 555, + 712 + ], + "spans": [ + { + "bbox": [ + 313, + 169, + 555, + 712 + ], + "type": "table", + "html": "
<table><tr><th></th><th>ImNet</th><th>Caltech</th><th>Pets</th><th>Flowers</th><th>Food</th><th>SUN</th><th>UCF</th><th>Avg.</th><th>Δ</th></tr>
<tr><td colspan="10">Zero-Shot [27]</td></tr>
<tr><td>αb</td><td>67.50</td><td>92.60</td><td>87.40</td><td>67.90</td><td>82.90</td><td>64.80</td><td>66.10</td><td>75.60</td><td></td></tr>
<tr><td>αn</td><td>60.20</td><td>87.40</td><td>83.70</td><td>60.80</td><td>75.50</td><td>56.90</td><td>59.50</td><td>69.14</td><td></td></tr>
<tr><td>αh</td><td>63.64</td><td>89.00</td><td>85.52</td><td>64.80</td><td>79.05</td><td>60.60</td><td>62.81</td><td>72.20</td><td></td></tr>
<tr><td colspan="10">Baseline (CoCoOp [42])</td></tr>
<tr><td>αb</td><td>75.25</td><td>96.30</td><td>94.35</td><td>92.86</td><td>90.18</td><td>78.38</td><td>80.53</td><td>86.84</td><td></td></tr>
<tr><td>αn</td><td>69.43</td><td>93.23</td><td>96.88</td><td>70.17</td><td>90.80</td><td>74.72</td><td>73.28</td><td>81.22</td><td></td></tr>
<tr><td>αh</td><td>72.39</td><td>94.74</td><td>95.58</td><td>79.74</td><td>90.34</td><td>76.28</td><td>76.61</td><td>83.67</td><td></td></tr>
<tr><td>+αb</td><td>76.03</td><td>97.95</td><td>94.06</td><td>92.41</td><td>90.68</td><td>77.59</td><td>80.25</td><td>87.00</td><td></td></tr>
<tr><td>+αn</td><td>70.47</td><td>93.80</td><td>93.18</td><td>70.29</td><td>91.29</td><td>74.83</td><td>71.89</td><td>80.82</td><td></td></tr>
<tr><td>+αh</td><td>73.09</td><td>95.73</td><td>93.63</td><td>79.74</td><td>90.98</td><td>76.14</td><td>75.88</td><td>83.60</td><td></td></tr>
<tr><td colspan="10">EM [11] (l∞, ε = 8/255)</td></tr>
<tr><td>αb</td><td>56.47</td><td>80.68</td><td>78.44</td><td>74.40</td><td>79.93</td><td>58.90</td><td>63.71</td><td>70.36</td><td>-16.48</td></tr>
<tr><td>αn</td><td>43.27</td><td>74.90</td><td>76.38</td><td>51.66</td><td>78.15</td><td>53.33</td><td>52.09</td><td>61.40</td><td>-19.82</td></tr>
<tr><td>αh</td><td>48.77</td><td>77.54</td><td>77.40</td><td>61.03</td><td>79.04</td><td>56.01</td><td>57.38</td><td>65.31</td><td>-18.36</td></tr>
<tr><td>+αb</td><td>73.48</td><td>95.49</td><td>93.71</td><td>91.84</td><td>89.38</td><td>77.65</td><td>79.34</td><td>85.82</td><td>15.46</td></tr>
<tr><td>+αn</td><td>67.85</td><td>93.07</td><td>93.53</td><td>68.08</td><td>88.00</td><td>73.17</td><td>71.69</td><td>79.30</td><td>17.90</td></tr>
<tr><td>+αh</td><td>70.41</td><td>94.34</td><td>93.74</td><td>78.20</td><td>88.87</td><td>75.18</td><td>75.12</td><td>82.27</td><td>16.96</td></tr>
<tr><td colspan="10">REM [9] (l∞, ε = 8/255)</td></tr>
<tr><td>αb</td><td>43.51</td><td>62.63</td><td>64.60</td><td>61.98</td><td>60.21</td><td>48.29</td><td>50.53</td><td>55.94</td><td>-30.90</td></tr>
<tr><td>αn</td><td>30.49</td><td>54.77</td><td>60.12</td><td>46.64</td><td>58.08</td><td>44.30</td><td>41.98</td><td>48.05</td><td>-33.16</td></tr>
<tr><td>αh</td><td>35.74</td><td>58.39</td><td>62.25</td><td>53.17</td><td>59.06</td><td>46.18</td><td>46.02</td><td>51.54</td><td>-32.12</td></tr>
<tr><td>+αb</td><td>72.76</td><td>94.52</td><td>93.28</td><td>91.00</td><td>88.06</td><td>76.94</td><td>78.46</td><td>85.00</td><td>29.06</td></tr>
<tr><td>+αn</td><td>66.42</td><td>92.16</td><td>91.94</td><td>66.74</td><td>87.34</td><td>71.98</td><td>71.22</td><td>78.26</td><td>30.21</td></tr>
<tr><td>+αh</td><td>69.37</td><td>93.69</td><td>92.74</td><td>76.73</td><td>87.54</td><td>74.43</td><td>74.94</td><td>81.35</td><td>29.80</td></tr>
<tr><td colspan="10">HYPO [33] (l∞, ε = 8/255)</td></tr>
<tr><td>αb</td><td>40.08</td><td>58.59</td><td>59.83</td><td>57.33</td><td>56.50</td><td>44.11</td><td>47.66</td><td>52.01</td><td>-34.83</td></tr>
<tr><td>αn</td><td>30.64</td><td>54.82</td><td>57.11</td><td>44.76</td><td>56.90</td><td>41.25</td><td>43.58</td><td>47.01</td><td>-34.21</td></tr>
<tr><td>αh</td><td>34.68</td><td>56.57</td><td>58.46</td><td>50.30</td><td>56.55</td><td>42.47</td><td>45.45</td><td>49.21</td><td>-34.46</td></tr>
<tr><td>+αb</td><td>71.80</td><td>94.28</td><td>93.59</td><td>90.75</td><td>88.55</td><td>77.67</td><td>78.92</td><td>85.08</td><td>33.07</td></tr>
<tr><td>+αn</td><td>65.29</td><td>90.37</td><td>92.01</td><td>65.25</td><td>84.93</td><td>70.04</td><td>69.85</td><td>76.82</td><td>29.81</td></tr>
<tr><td>+αh</td><td>68.39</td><td>92.13</td><td>93.05</td><td>76.20</td><td>86.69</td><td>73.73</td><td>74.43</td><td>80.66</td><td>31.45</td></tr>
<tr><td colspan="10">LSP [40] (l2, ε = 1.30)</td></tr>
<tr><td>αb</td><td>49.72</td><td>68.49</td><td>65.99</td><td>63.33</td><td>61.88</td><td>50.69</td><td>51.47</td><td>58.80</td><td>-28.04</td></tr>
<tr><td>αn</td><td>36.04</td><td>56.29</td><td>63.35</td><td>50.62</td><td>59.41</td><td>46.20</td><td>48.88</td><td>51.54</td><td>-29.67</td></tr>
<tr><td>αh</td><td>41.79</td><td>61.84</td><td>64.47</td><td>56.18</td><td>60.53</td><td>48.43</td><td>50.04</td><td>54.75</td><td>-28.91</td></tr>
<tr><td>+αb</td><td>72.53</td><td>94.97</td><td>94.02</td><td>91.29</td><td>88.64</td><td>77.04</td><td>78.17</td><td>85.24</td><td>26.44</td></tr>
<tr><td>+αn</td><td>67.37</td><td>92.44</td><td>93.80</td><td>67.52</td><td>87.40</td><td>72.21</td><td>71.11</td><td>78.84</td><td>27.30</td></tr>
<tr><td>+αh</td><td>69.97</td><td>93.74</td><td>94.27</td><td>77.09</td><td>87.77</td><td>74.47</td><td>74.54</td><td>81.69</td><td>26.94</td></tr>
<tr><td colspan="10">AR [29] (l2, ε = 1.00)</td></tr>
<tr><td>αb</td><td>42.33</td><td>61.73</td><td>60.58</td><td>59.01</td><td>57.20</td><td>46.23</td><td>49.06</td><td>53.73</td><td>-33.11</td></tr>
<tr><td>αn</td><td>31.67</td><td>56.42</td><td>58.08</td><td>44.54</td><td>55.96</td><td>43.30</td><td>42.69</td><td>47.52</td><td>-33.69</td></tr>
<tr><td>αh</td><td>36.29</td><td>59.00</td><td>59.27</td><td>50.63</td><td>56.52</td><td>44.74</td><td>45.67</td><td>50.30</td><td>-33.37</td></tr>
<tr><td>+αb</td><td>71.68</td><td>94.38</td><td>92.55</td><td>90.82</td><td>87.77</td><td>76.67</td><td>77.58</td><td>84.49</td><td>30.76</td></tr>
<tr><td>+αn</td><td>66.09</td><td>91.59</td><td>91.67</td><td>66.06</td><td>86.25</td><td>71.19</td><td>70.86</td><td>77.67</td><td>30.15</td></tr>
<tr><td>+αh</td><td>68.74</td><td>92.98</td><td>92.10</td><td>78.40</td><td>87.00</td><td>73.93</td><td>74.21</td><td>81.05</td><td>30.75</td></tr>
<tr><td colspan="10">OPS [36] (l0, ε = 1)</td></tr>
<tr><td>αb</td><td>68.28</td><td>88.60</td><td>86.20</td><td>81.59</td><td>85.65</td><td>65.60</td><td>68.98</td><td>77.84</td><td>-9.00</td></tr>
<tr><td>αn</td><td>54.50</td><td>80.21</td><td>80.83</td><td>58.70</td><td>79.06</td><td>60.49</td><td>60.32</td><td>67.73</td><td>-13.49</td></tr>
<tr><td>αh</td><td>60.57</td><td>84.05</td><td>83.45</td><td>68.28</td><td>82.24</td><td>62.89</td><td>64.39</td><td>72.37</td><td>-11.40</td></tr>
<tr><td>+αb</td><td>73.08</td><td>95.10</td><td>93.39</td><td>91.50</td><td>89.56</td><td>76.97</td><td>78.13</td><td>83.96</td><td>7.55</td></tr>
<tr><td>+αn</td><td>67.20</td><td>92.63</td><td>93.88</td><td>68.03</td><td>88.01</td><td>72.64</td><td>71.28</td><td>79.10</td><td>11.37</td></tr>
<tr><td>+αh</td><td>70.06</td><td>93.86</td><td>93.63</td><td>78.05</td><td>88.87</td><td>74.69</td><td>74.50</td><td>81.42</td><td>9.68</td></tr>
</table>
", + "image_path": "15837c8e21f004714a7e8b20eca8fb9ee5070b5d62224268676d035ff5fbc19f.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "table_body" + } + ], + "index": 6 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 315, + 757 + ], + "type": "text", + "content": "9513" + } + ] + } + ], + "index": 7 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 56, + 115, + 342, + 261 + ], + "blocks": [ + { + "bbox": [ + 55, + 70, + 342, + 114 + ], + "lines": [ + { + "bbox": [ + 55, + 70, + 342, + 114 + ], + "spans": [ + { + "bbox": [ + 55, + 70, + 342, + 114 + ], + "type": "text", + "content": "Table 3. Clean test accuracies (\\%) on Caltech-101 datasets (16-shots, standard protocol). \"UEr\" means UEraser. Baseline and compared methods are adapted to CoCoOp [42]. Image encoder backbone is ResNet-50. \"Text\" and \"Image\" refer to the augmented modalities, \"Full\" includes both." + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 56, + 115, + 342, + 261 + ], + "lines": [ + { + "bbox": [ + 56, + 115, + 342, + 261 + ], + "spans": [ + { + "bbox": [ + 56, + 115, + 342, + 261 + ], + "type": "table", + "html": "
<table><tr><th>Method</th><th>ε</th><th>Baseline</th><th>Gray</th><th>JPEG</th><th>AT</th><th>UEr</th><th>Text</th><th>Image</th><th>Full</th></tr>
<tr><td>EM</td><td>8/255</td><td>75.53</td><td>74.28</td><td>79.21</td><td>82.84</td><td>90.37</td><td>84.69</td><td>92.51</td><td>94.28</td></tr>
<tr><td>EM</td><td>16/255</td><td>59.36</td><td>60.96</td><td>64.58</td><td>77.03</td><td>88.79</td><td>82.45</td><td>90.40</td><td>91.86</td></tr>
<tr><td>REM</td><td>8/255</td><td>52.33</td><td>63.71</td><td>70.22</td><td>78.96</td><td>90.73</td><td>86.89</td><td>92.32</td><td>93.88</td></tr>
<tr><td>REM</td><td>16/255</td><td>37.97</td><td>59.54</td><td>66.78</td><td>74.10</td><td>86.96</td><td>83.33</td><td>89.35</td><td>90.25</td></tr>
<tr><td>HYPO</td><td>8/255</td><td>47.36</td><td>50.02</td><td>64.56</td><td>78.89</td><td>87.74</td><td>84.07</td><td>91.47</td><td>93.21</td></tr>
<tr><td>HYPO</td><td>16/255</td><td>27.18</td><td>34.67</td><td>59.11</td><td>72.26</td><td>82.48</td><td>79.27</td><td>86.09</td><td>89.33</td></tr>
<tr><td>LSP</td><td>1.30</td><td>43.23</td><td>64.70</td><td>80.01</td><td>81.58</td><td>90.66</td><td>83.37</td><td>92.83</td><td>94.02</td></tr>
<tr><td>LSP</td><td>1.74</td><td>25.41</td><td>42.58</td><td>68.30</td><td>76.29</td><td>83.05</td><td>80.77</td><td>87.24</td><td>91.62</td></tr>
<tr><td>AR</td><td>1.00</td><td>50.18</td><td>53.24</td><td>81.40</td><td>76.73</td><td>90.12</td><td>84.96</td><td>91.63</td><td>93.41</td></tr>
<tr><td>AR</td><td>1.30</td><td>32.65</td><td>35.57</td><td>69.26</td><td>64.22</td><td>82.89</td><td>81.17</td><td>86.61</td><td>90.06</td></tr>
<tr><td>OPS</td><td>1</td><td>86.17</td><td>86.52</td><td>89.73</td><td>83.16</td><td>88.61</td><td>89.93</td><td>90.50</td><td>93.86</td></tr>
<tr><td>OPS</td><td>4</td><td>73.24</td><td>73.34</td><td>80.23</td><td>69.59</td><td>80.12</td><td>81.37</td><td>84.07</td><td>87.10</td></tr>
</table>
", + "image_path": "5fcb750b676d4983519eecab0cb047a3ebd53e763bde4b73ef3bbef34d380c7d.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "type": "table", + "bbox": [ + 359, + 105, + 555, + 259 + ], + "blocks": [ + { + "bbox": [ + 357, + 72, + 555, + 103 + ], + "lines": [ + { + "bbox": [ + 357, + 72, + 555, + 103 + ], + "spans": [ + { + "bbox": [ + 357, + 72, + 555, + 103 + ], + "type": "text", + "content": "Table 4. Base-to-novel metrics for different number of shots " + }, + { + "bbox": [ + 357, + 72, + 555, + 103 + ], + "type": "inline_equation", + "content": "S \\in \\{ 0,2,4,8,{16}\\}" + }, + { + "bbox": [ + 357, + 72, + 555, + 103 + ], + "type": "text", + "content": " . The image encoder backbone is ResNet-50." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 359, + 105, + 555, + 259 + ], + "lines": [ + { + "bbox": [ + 359, + 105, + 555, + 259 + ], + "spans": [ + { + "bbox": [ + 359, + 105, + 555, + 259 + ], + "type": "table", + "html": "
<table><tr><th rowspan="2">S</th><th rowspan="2">A3</th><th colspan="2">Caltech</th><th colspan="2">Food</th><th colspan="2">ImageNet</th></tr>
<tr><th>−</th><th>+</th><th>−</th><th>+</th><th>−</th><th>+</th></tr>
<tr><td>0</td><td>αb</td><td colspan="2">86.7</td><td colspan="2">78.1</td><td colspan="2">62.7</td></tr>
<tr><td>0</td><td>αn</td><td colspan="2">78.4</td><td colspan="2">74.9</td><td colspan="2">52.8</td></tr>
<tr><td>0</td><td>αh</td><td colspan="2">82.6</td><td colspan="2">76.4</td><td colspan="2">57.2</td></tr>
<tr><td>2</td><td>αb</td><td>69.34</td><td>88.51</td><td>66.11</td><td>84.34</td><td>44.04</td><td>66.04</td></tr>
<tr><td>2</td><td>αn</td><td>61.78</td><td>83.45</td><td>60.29</td><td>81.77</td><td>30.9</td><td>60.03</td></tr>
<tr><td>2</td><td>αh</td><td>65.34</td><td>85.91</td><td>63.07</td><td>83.04</td><td>36.32</td><td>62.89</td></tr>
<tr><td>4</td><td>αb</td><td>76.09</td><td>91.23</td><td>75.21</td><td>87.81</td><td>50.43</td><td>68.1</td></tr>
<tr><td>4</td><td>αn</td><td>67.51</td><td>87.93</td><td>66.19</td><td>86.15</td><td>36.61</td><td>62.9</td></tr>
<tr><td>4</td><td>αh</td><td>71.54</td><td>89.55</td><td>70.41</td><td>86.97</td><td>42.42</td><td>65.4</td></tr>
<tr><td>8</td><td>αb</td><td>77.86</td><td>92.89</td><td>75.85</td><td>88.03</td><td>51.65</td><td>69.01</td></tr>
<tr><td>8</td><td>αn</td><td>70.19</td><td>89.11</td><td>70.09</td><td>86.44</td><td>39.22</td><td>64.58</td></tr>
<tr><td>8</td><td>αh</td><td>73.83</td><td>90.96</td><td>72.86</td><td>87.23</td><td>44.58</td><td>66.72</td></tr>
<tr><td>16</td><td>αb</td><td>77.52</td><td>93.36</td><td>76.67</td><td>88.52</td><td>53.72</td><td>71.08</td></tr>
<tr><td>16</td><td>αn</td><td>72.43</td><td>91.02</td><td>73.34</td><td>87.76</td><td>40.74</td><td>64.95</td></tr>
<tr><td>16</td><td>αh</td><td>74.89</td><td>92.18</td><td>74.97</td><td>88.14</td><td>46.34</td><td>67.88</td></tr>
</table>
", + "image_path": "64c10adeb61a466aca34a2397c622418c4d4c1d5ce286ef8ce3ea6af56ddb772.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 58, + 286, + 175, + 352 + ], + "blocks": [ + { + "bbox": [ + 79, + 272, + 273, + 284 + ], + "lines": [ + { + "bbox": [ + 79, + 272, + 273, + 284 + ], + "spans": [ + { + "bbox": [ + 79, + 272, + 273, + 284 + ], + "type": "text", + "content": "αoriginal αunlearn 01 αtest αtrain" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_footnote" + }, + { + "bbox": [ + 58, + 286, + 175, + 352 + ], + "lines": [ + { + "bbox": [ + 58, + 286, + 175, + 352 + ], + "spans": [ + { + "bbox": [ + 58, + 286, + 175, + 352 + ], + "type": "image", + "image_path": "3e63b6b5e9254872d7c39ebffcb88b9b4640ae4b812eb8255b8919a5c846b150.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 179, + 286, + 298, + 352 + ], + "blocks": [ + { + "bbox": [ + 179, + 286, + 298, + 352 + ], + "lines": [ + { + "bbox": [ + 179, + 286, + 298, + 352 + ], + "spans": [ + { + "bbox": [ + 179, + 286, + 298, + 352 + ], + "type": "image", + "image_path": "cd91b39d42053d417575a3e791bbf2c683504123f56e5f0e61be118cd4fe24c9.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 58, + 361, + 176, + 428 + ], + "blocks": [ + { + "bbox": [ + 88, + 353, + 150, + 361 + ], + "lines": [ + { + "bbox": [ + 88, + 353, + 150, + 361 + ], + "spans": [ + { + "bbox": [ + 88, + 353, + 150, + 361 + ], + "type": "text", + "content": "(a) " + }, + { + "bbox": [ + 88, + 353, + 150, + 361 + ], + "type": "inline_equation", + "content": "\\mathrm{EM} + \\mathrm{CoCoOp}" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 58, + 361, + 176, + 428 + ], + "lines": [ + { + "bbox": [ + 58, + 361, + 176, + 428 + ], + "spans": [ + { + "bbox": [ + 58, + 361, + 176, + 428 + ], + "type": "image", + "image_path": "0e966e46ae20f2d8b7aafbc0fd37033660a7ccc32b79ee38a6907e2f24abd3a5.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 85, + 429, + 153, + 437 + ], + "lines": [ + { + "bbox": [ + 85, + 429, + 153, + 437 + ], + "spans": [ + { + "bbox": [ + 85, + 429, + 153, + 437 + ], + "type": "text", + "content": "(c) " + }, + { + "bbox": [ + 85, + 429, + 153, + 437 + ], + "type": "inline_equation", + "content": "\\mathrm{REM} + \\mathrm{CoCoOp}" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_caption" + } + ], + "index": 9 + }, + { + "type": "image", + "bbox": [ + 179, + 361, + 298, + 428 + ], + "blocks": [ + { + "bbox": [ + 220, + 353, + 263, + 361 + ], + "lines": [ + { + "bbox": [ + 220, + 353, + 263, + 361 + ], + "spans": [ + { + "bbox": [ + 220, + 353, + 263, + 361 + ], + "type": "text", + "content": "(b) " + }, + { + "bbox": [ + 220, + 353, + 263, + 361 + ], + "type": "inline_equation", + "content": "\\mathrm{EM} + \\mathrm{A}^3" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 179, + 361, + 298, + 428 + ], + "lines": [ + { + "bbox": [ + 179, + 361, + 298, + 428 + ], + "spans": [ + { + "bbox": [ + 179, + 361, + 298, + 428 + ], + "type": "image", + "image_path": "13a94f1e41a87b7f18ea40d518aa2cd3057bf3ccb934147e291bfc18ae11ba04.jpg" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 217, + 429, + 265, + 437 + ], 
+ "lines": [ + { + "bbox": [ + 217, + 429, + 265, + 437 + ], + "spans": [ + { + "bbox": [ + 217, + 429, + 265, + 437 + ], + "type": "text", + "content": "(d) " + }, + { + "bbox": [ + 217, + 429, + 265, + 437 + ], + "type": "inline_equation", + "content": "\\mathrm{REM} + \\mathrm{A}^3" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_caption" + } + ], + "index": 11 + }, + { + "type": "image", + "bbox": [ + 58, + 438, + 175, + 505 + ], + "blocks": [ + { + "bbox": [ + 58, + 438, + 175, + 505 + ], + "lines": [ + { + "bbox": [ + 58, + 438, + 175, + 505 + ], + "spans": [ + { + "bbox": [ + 58, + 438, + 175, + 505 + ], + "type": "image", + "image_path": "81616334ed89d10775a3865fa15e1d3ed0af05dedecca17cf98b084e1cdb7d74.jpg" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_body" + } + ], + "index": 13 + }, + { + "type": "image", + "bbox": [ + 179, + 438, + 298, + 505 + ], + "blocks": [ + { + "bbox": [ + 179, + 438, + 298, + 505 + ], + "lines": [ + { + "bbox": [ + 179, + 438, + 298, + 505 + ], + "spans": [ + { + "bbox": [ + 179, + 438, + 298, + 505 + ], + "type": "image", + "image_path": "a6ad827e2de157735f90f2d0d34f9bc8deb05ba662f5c66c99a57b3fa52a03e9.jpg" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 221, + 505, + 261, + 514 + ], + "lines": [ + { + "bbox": [ + 221, + 505, + 261, + 514 + ], + "spans": [ + { + "bbox": [ + 221, + 505, + 261, + 514 + ], + "type": "text", + "content": "(f) " + }, + { + "bbox": [ + 221, + 505, + 261, + 514 + ], + "type": "inline_equation", + "content": "\\mathrm{AR} + \\mathrm{A}^3" + } + ] + } + ], + "index": 16, + "angle": 0, + "type": "image_caption" + } + ], + "index": 15 + }, + { + "type": "image", + "bbox": [ + 58, + 514, + 176, + 581 + ], + "blocks": [ + { + "bbox": [ + 88, + 506, + 149, + 514 + ], + "lines": [ + { + "bbox": [ + 88, + 506, + 149, + 514 + ], + "spans": [ + { + "bbox": [ + 88, + 506, + 149, + 514 + ], + "type": "text", + "content": "(e) " + }, + { + "bbox": [ + 88, + 506, + 149, + 514 + ], + "type": "inline_equation", + "content": "\\mathrm{AR} + \\mathrm{CoCoOp}" + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 58, + 514, + 176, + 581 + ], + "lines": [ + { + "bbox": [ + 58, + 514, + 176, + 581 + ], + "spans": [ + { + "bbox": [ + 58, + 514, + 176, + 581 + ], + "type": "image", + "image_path": "2a809476949a81417905ada8d5f012a84bf314fba9e658888963850b1dcbf5e6.jpg" + } + ] + } + ], + "index": 17, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 86, + 581, + 151, + 590 + ], + "lines": [ + { + "bbox": [ + 86, + 581, + 151, + 590 + ], + "spans": [ + { + "bbox": [ + 86, + 581, + 151, + 590 + ], + "type": "text", + "content": "(g) " + }, + { + "bbox": [ + 86, + 581, + 151, + 590 + ], + "type": "inline_equation", + "content": "\\mathrm{LSP} + \\mathrm{CoCoOp}" + } + ] + } + ], + "index": 18, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 55, + 592, + 295, + 647 + ], + "lines": [ + { + "bbox": [ + 55, + 592, + 295, + 647 + ], + "spans": [ + { + "bbox": [ + 55, + 592, + 295, + 647 + ], + "type": "text", + "content": "Figure 2. CoCoOp vs. A3 under partial poisoning with rates " + }, + { + "bbox": [ + 55, + 592, + 295, + 647 + ], + "type": "inline_equation", + "content": "R \\in \\{\\frac{1}{8}, \\frac{1}{4}, \\frac{1}{2}, 1\\}" + }, + { + "bbox": [ + 55, + 592, + 295, + 647 + ], + "type": "text", + "content": " (x-axis, %). 
Accuracy metrics (y-axis, %): " + }, + { + "bbox": [ + 55, + 592, + 295, + 647 + ], + "type": "inline_equation", + "content": "\\alpha_{\\text{unlearn}} = \\text{UEs}" + }, + { + "bbox": [ + 55, + 592, + 295, + 647 + ], + "type": "text", + "content": " in the training set; " + }, + { + "bbox": [ + 55, + 592, + 295, + 647 + ], + "type": "inline_equation", + "content": "\\alpha_{\\text{original}} = \\text{original clean images of the UEs}" + }, + { + "bbox": [ + 55, + 592, + 295, + 647 + ], + "type": "text", + "content": "; " + }, + { + "bbox": [ + 55, + 592, + 295, + 647 + ], + "type": "inline_equation", + "content": "\\alpha_{\\text{test}} = \\text{clean images in the test set}" + }, + { + "bbox": [ + 55, + 592, + 295, + 647 + ], + "type": "text", + "content": "; " + }, + { + "bbox": [ + 55, + 592, + 295, + 647 + ], + "type": "inline_equation", + "content": "\\alpha_{\\text{train}} = \\text{clean images in the training set}" + }, + { + "bbox": [ + 55, + 592, + 295, + 647 + ], + "type": "text", + "content": ". The image encoder backbone is ResNet-50." + } + ] + } + ], + "index": 21, + "angle": 0, + "type": "image_caption" + } + ], + "index": 17 + }, + { + "type": "image", + "bbox": [ + 179, + 514, + 298, + 580 + ], + "blocks": [ + { + "bbox": [ + 179, + 514, + 298, + 580 + ], + "lines": [ + { + "bbox": [ + 179, + 514, + 298, + 580 + ], + "spans": [ + { + "bbox": [ + 179, + 514, + 298, + 580 + ], + "type": "image", + "image_path": "9c16c1e3695b1908ad6ede841fafda072772a77904d57be754a7105f372b4987.jpg" + } + ] + } + ], + "index": 19, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 219, + 581, + 263, + 590 + ], + "lines": [ + { + "bbox": [ + 219, + 581, + 263, + 590 + ], + "spans": [ + { + "bbox": [ + 219, + 581, + 263, + 590 + ], + "type": "text", + "content": "(h) " + }, + { + "bbox": [ + 219, + 581, + 263, + 590 + ], + "type": "inline_equation", + "content": "\\mathrm{LSP} + \\mathrm{A}^3" + } + ] + } + ], + "index": 20, + "angle": 0, + "type": "image_caption" + } + ], + "index": 19 + }, + { + "bbox": [ + 55, + 653, + 296, + 677 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 653, + 296, + 677 + ], + "spans": [ + { + "bbox": [ + 55, + 653, + 296, + 677 + ], + "type": "text", + "content": "proves notably with increasing " + }, + { + "bbox": [ + 55, + 653, + 296, + 677 + ], + "type": "inline_equation", + "content": "S" + }, + { + "bbox": [ + 55, + 653, + 296, + 677 + ], + "type": "text", + "content": ", where CoCoOp falls behind while " + }, + { + "bbox": [ + 55, + 653, + 296, + 677 + ], + "type": "inline_equation", + "content": "A^3" + }, + { + "bbox": [ + 55, + 653, + 296, + 677 + ], + "type": "text", + "content": " leads by a large margin." + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 55, + 677, + 297, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 677, + 297, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 677, + 297, + 713 + ], + "type": "text", + "content": "With partially poisoned datasets, " + }, + { + "bbox": [ + 55, + 677, + 297, + 713 + ], + "type": "inline_equation", + "content": "\\mathbf{A}^3" + }, + { + "bbox": [ + 55, + 677, + 297, + 713 + ], + "type": "text", + "content": " learns the underlying features while CoCoOp likely does not. 
In practice, model trainers may curate datasets from a variety of sources," + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 313, + 272, + 556, + 522 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 272, + 556, + 522 + ], + "spans": [ + { + "bbox": [ + 313, + 272, + 556, + 522 + ], + "type": "text", + "content": "and only a portion of the data may contain UE perturbations. Training on such partially poisoned datasets typically result in minimal performance loss over the clean dataset, and is not indicative of the model's ability to learn the underlying features of the UEs. In Figure 2, we thus investigate whether CoCoOp and " + }, + { + "bbox": [ + 313, + 272, + 556, + 522 + ], + "type": "inline_equation", + "content": "\\mathbf{A}^3" + }, + { + "bbox": [ + 313, + 272, + 556, + 522 + ], + "type": "text", + "content": " can learn such features when trained on partially poisoned datasets. It evaluates the accuracy metrics of the unlearnable part (" + }, + { + "bbox": [ + 313, + 272, + 556, + 522 + ], + "type": "inline_equation", + "content": "\\alpha_{\\mathrm{unlearn}}" + }, + { + "bbox": [ + 313, + 272, + 556, + 522 + ], + "type": "text", + "content": "), the original images of the unlearnable part (" + }, + { + "bbox": [ + 313, + 272, + 556, + 522 + ], + "type": "inline_equation", + "content": "\\alpha_{\\mathrm{original}}" + }, + { + "bbox": [ + 313, + 272, + 556, + 522 + ], + "type": "text", + "content": ", i.e., before perturbation), the samples from the test set (" + }, + { + "bbox": [ + 313, + 272, + 556, + 522 + ], + "type": "inline_equation", + "content": "\\alpha_{\\mathrm{test}}" + }, + { + "bbox": [ + 313, + 272, + 556, + 522 + ], + "type": "text", + "content": "), and the clean part of the train set (" + }, + { + "bbox": [ + 313, + 272, + 556, + 522 + ], + "type": "inline_equation", + "content": "\\alpha_{\\mathrm{train}}" + }, + { + "bbox": [ + 313, + 272, + 556, + 522 + ], + "type": "text", + "content": "). Importantly, if the model can learn the underlying features, then the " + }, + { + "bbox": [ + 313, + 272, + 556, + 522 + ], + "type": "inline_equation", + "content": "\\alpha_{\\mathrm{original}}" + }, + { + "bbox": [ + 313, + 272, + 556, + 522 + ], + "type": "text", + "content": " should be close to " + }, + { + "bbox": [ + 313, + 272, + 556, + 522 + ], + "type": "inline_equation", + "content": "\\alpha_{\\mathrm{train}}" + }, + { + "bbox": [ + 313, + 272, + 556, + 522 + ], + "type": "text", + "content": ", otherwise, it should be close to " + }, + { + "bbox": [ + 313, + 272, + 556, + 522 + ], + "type": "inline_equation", + "content": "\\alpha_{\\mathrm{test}}" + }, + { + "bbox": [ + 313, + 272, + 556, + 522 + ], + "type": "text", + "content": ". 
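Concretely, this diagnostic amounts to comparing four held-out accuracies. Below is a minimal sketch, assuming a trained classifier and four hypothetical dataloaders (one per partition); all names here are illustrative, not taken from the paper:

import torch

@torch.no_grad()
def accuracy(model, loader, device="cuda"):
    # Fraction of correctly classified samples in one partition.
    model.eval()
    correct = total = 0
    for images, labels in loader:
        preds = model(images.to(device)).argmax(dim=-1)
        correct += (preds == labels.to(device)).sum().item()
        total += labels.numel()
    return correct / total

def poisoning_diagnostic(model, loaders):
    # loaders: dict with hypothetical keys "unlearn", "original", "test",
    # "train", mirroring the four partitions described above.
    alpha = {name: accuracy(model, dl) for name, dl in loaders.items()}
    # Underlying features were learned if alpha_original tracks alpha_train
    # more closely than it tracks alpha_test.
    learned = abs(alpha["original"] - alpha["train"]) < abs(alpha["original"] - alpha["test"])
    return alpha, learned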
It is evident that CoCoOp actually struggles to learn useful features from the UEs, as " + }, + { + "bbox": [ + 313, + 272, + 556, + 522 + ], + "type": "inline_equation", + "content": "\\alpha_{\\mathrm{original}}" + }, + { + "bbox": [ + 313, + 272, + 556, + 522 + ], + "type": "text", + "content": " closely tracks " + }, + { + "bbox": [ + 313, + 272, + 556, + 522 + ], + "type": "inline_equation", + "content": "\\alpha_{\\mathrm{test}}" + }, + { + "bbox": [ + 313, + 272, + 556, + 522 + ], + "type": "text", + "content": ", while " + }, + { + "bbox": [ + 313, + 272, + 556, + 522 + ], + "type": "inline_equation", + "content": "\\alpha_{\\mathrm{unlearn}}" + }, + { + "bbox": [ + 313, + 272, + 556, + 522 + ], + "type": "text", + "content": " is higher than " + }, + { + "bbox": [ + 313, + 272, + 556, + 522 + ], + "type": "inline_equation", + "content": "\\alpha_{\\mathrm{train}}" + }, + { + "bbox": [ + 313, + 272, + 556, + 522 + ], + "type": "text", + "content": ". This suggests that CoCoOp is likely overfitting to the UE perturbations, even more so than the clean training data. In contrast, " + }, + { + "bbox": [ + 313, + 272, + 556, + 522 + ], + "type": "inline_equation", + "content": "\\mathbf{A}^3" + }, + { + "bbox": [ + 313, + 272, + 556, + 522 + ], + "type": "text", + "content": " shows that " + }, + { + "bbox": [ + 313, + 272, + 556, + 522 + ], + "type": "inline_equation", + "content": "\\alpha_{\\mathrm{original}}" + }, + { + "bbox": [ + 313, + 272, + 556, + 522 + ], + "type": "text", + "content": " follows " + }, + { + "bbox": [ + 313, + 272, + 556, + 522 + ], + "type": "inline_equation", + "content": "\\alpha_{\\mathrm{train}}" + }, + { + "bbox": [ + 313, + 272, + 556, + 522 + ], + "type": "text", + "content": " closely, and " + }, + { + "bbox": [ + 313, + 272, + 556, + 522 + ], + "type": "inline_equation", + "content": "\\alpha_{\\mathrm{unlearn}}" + }, + { + "bbox": [ + 313, + 272, + 556, + 522 + ], + "type": "text", + "content": " is close to " + }, + { + "bbox": [ + 313, + 272, + 556, + 522 + ], + "type": "inline_equation", + "content": "\\alpha_{\\mathrm{test}}" + }, + { + "bbox": [ + 313, + 272, + 556, + 522 + ], + "type": "text", + "content": ", hinting that " + }, + { + "bbox": [ + 313, + 272, + 556, + 522 + ], + "type": "inline_equation", + "content": "\\mathbf{A}^3" + }, + { + "bbox": [ + 313, + 272, + 556, + 522 + ], + "type": "text", + "content": " is learning the underlying features instead of the UE-crafted perturbations." + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 314, + 536, + 388, + 550 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 536, + 388, + 550 + ], + "spans": [ + { + "bbox": [ + 314, + 536, + 388, + 550 + ], + "type": "text", + "content": "5. Conclusion" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 313, + 557, + 556, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 557, + 556, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 557, + 556, + 713 + ], + "type": "text", + "content": "First, PLs' generalization combined with " + }, + { + "bbox": [ + 313, + 557, + 556, + 713 + ], + "type": "inline_equation", + "content": "A^3" + }, + { + "bbox": [ + 313, + 557, + 556, + 713 + ], + "type": "text", + "content": " make them challenging adversaries for traditional UEs. Second, Augmenting PL with diverse image and text perturbations significantly improves their resilience against UEs, pointing to the need for multimodal considerations in both UEs and countermeasures. 
Third, compared to simpler augmentations or adversarial training, " + }, + { + "bbox": [ + 313, + 557, + 556, + 713 + ], + "type": "inline_equation", + "content": "A^3" + }, + { + "bbox": [ + 313, + 557, + 556, + 713 + ], + "type": "text", + "content": "'s cross-modal feature alignment proved especially effective in mitigating PL's adaptation to UEs than preexisting learning methods. Finally, we emphasize the need for adaptive, multimodal approaches in UEs and open pathways toward more sophisticated protections against unauthorized training in an era of large multimodal and pretrained models." + } + ] + } + ], + "index": 26 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "text", + "content": "9514" + } + ] + } + ], + "index": 27 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 147, + 85 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 72, + 147, + 85 + ], + "spans": [ + { + "bbox": [ + 56, + 72, + 147, + 85 + ], + "type": "text", + "content": "Acknowledgment" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 90, + 297, + 201 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 90, + 297, + 201 + ], + "spans": [ + { + "bbox": [ + 55, + 90, + 297, + 201 + ], + "type": "text", + "content": "This work is supported in part by National Natural Science Foundation of China (62376263, 62372443 and 62271496), Guangdong Basic and Applied Basic Research Foundation (2023B1515130002), Natural Science Foundation of Guangdong (2024A1515030209 and 2024A1515011970), Shenzhen Science and Technology Innovation Commission (JCYJ20230807140507015 and JCYJ20220531100804009), and Yu-Liang Lu's Project Team Development Funding (KY23A102). This work was carried out in part at SICC, which is supported by SKL-IOTSC, University of Macau." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 220, + 115, + 232 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 220, + 115, + 232 + ], + "spans": [ + { + "bbox": [ + 56, + 220, + 115, + 232 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 57, + 240, + 297, + 712 + ], + "type": "list", + "angle": 0, + "index": 13, + "blocks": [ + { + "bbox": [ + 61, + 240, + 297, + 306 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 240, + 297, + 306 + ], + "spans": [ + { + "bbox": [ + 61, + 240, + 297, + 306 + ], + "type": "text", + "content": "[1] Manuele Barraco, Marcella Cornia, Silvia Cascianelli, Lorenzo Baraldi, and Rita Cucchiara. The unreasonable effectiveness of clip features for image captioning: An experimental analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pages 4662-4670, 2022. 3" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 61, + 307, + 296, + 350 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 307, + 296, + 350 + ], + "spans": [ + { + "bbox": [ + 61, + 307, + 296, + 350 + ], + "type": "text", + "content": "[2] Battista Biggio and Fabio Roli. Wild patterns: Ten years after the rise of adversarial machine learning. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, pages 2154-2156, 2018. 
3" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 62, + 351, + 296, + 394 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 351, + 296, + 394 + ], + "spans": [ + { + "bbox": [ + 62, + 351, + 296, + 394 + ], + "type": "text", + "content": "[3] Lukas Bossard, Matthieu Guillaumin, and Luc Van Gool. Food-101 – mining discriminative components with random forests. In Computer Vision – ECCV 2014, pages 446–461. Springer International Publishing, 2014. 5, 11" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 62, + 396, + 296, + 449 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 396, + 296, + 449 + ], + "spans": [ + { + "bbox": [ + 62, + 396, + 296, + 449 + ], + "type": "text", + "content": "[4] Xinquan Chen, Xitong Gao, Juanjuan Zhao, Kejiang Ye, and Cheng-Zhong Xu. AdvDiffuser: Natural adversarial example synthesis with diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4562-4572, 2023. 3" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 62, + 449, + 296, + 491 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 449, + 296, + 491 + ], + "spans": [ + { + "bbox": [ + 62, + 449, + 296, + 491 + ], + "type": "text", + "content": "[5] Jacob Clarysse, Julia Hörrmann, and Fanny Yang. Why adversarial training can hurt robust accuracy. In The Eleventh International Conference on Learning Representations, 2023. 3" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 62, + 494, + 296, + 537 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 494, + 296, + 537 + ], + "spans": [ + { + "bbox": [ + 62, + 494, + 296, + 537 + ], + "type": "text", + "content": "[6] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248-255. IEEE, 2009. 5, 11" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 62, + 538, + 296, + 581 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 538, + 296, + 581 + ], + "spans": [ + { + "bbox": [ + 62, + 538, + 296, + 581 + ], + "type": "text", + "content": "[7] Prafulla Dhariwal and Alexander Nichol. Diffusion models beat GANs on image synthesis. In Advances in Neural Information Processing Systems, pages 8780-8794. Curran Associates, Inc., 2021. 3" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 62, + 582, + 296, + 636 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 582, + 296, + 636 + ], + "spans": [ + { + "bbox": [ + 62, + 582, + 296, + 636 + ], + "type": "text", + "content": "[8] Li Fei-Fei, Rob Fergus, and Pietro Perona. Learning generative visual models from few training examples: An incremental Bayesian approach tested on 101 object categories. In 2004 conference on computer vision and pattern recognition workshop, pages 178-178. IEEE, 2004. 5, 11" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 62, + 637, + 296, + 680 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 637, + 296, + 680 + ], + "spans": [ + { + "bbox": [ + 62, + 637, + 296, + 680 + ], + "type": "text", + "content": "[9] Shaopeng Fu, Fengxiang He, Yang Liu, Li Shen, and Dacheng Tao. Robust unlearnable examples: Protecting data against adversarial learning. In International Conference on Learning Representations, 2022. 
1, 2, 4, 6, 7" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 57, + 681, + 296, + 712 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 681, + 296, + 712 + ], + "spans": [ + { + "bbox": [ + 57, + 681, + 296, + 712 + ], + "type": "text", + "content": "[10] Micah Goldblum, Dimitris Tsipras, Chulin Xie, and et al. Dataset security for machine learning: Data poisoning, backdoor attacks, and defenses. IEEE Transactions on Pattern" + } + ] + } + ], + "index": 12 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 73, + 556, + 714 + ], + "type": "list", + "angle": 0, + "index": 27, + "blocks": [ + { + "bbox": [ + 333, + 73, + 555, + 94 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 333, + 73, + 555, + 94 + ], + "spans": [ + { + "bbox": [ + 333, + 73, + 555, + 94 + ], + "type": "text", + "content": "Analysis and Machine Intelligence, 45(2):1563-1580, 2022. 2" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 316, + 96, + 555, + 140 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 96, + 555, + 140 + ], + "spans": [ + { + "bbox": [ + 316, + 96, + 555, + 140 + ], + "type": "text", + "content": "[11] Hanxun Huang, Xingjun Ma, Sarah Monazam Erfani, James Bailey, and Yisen Wang. Unlearnable examples: Making personal data unexploitable. In International Conference on Learning Representations, 2021. 1, 2, 3, 4, 6, 7" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 316, + 141, + 555, + 185 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 141, + 555, + 185 + ], + "spans": [ + { + "bbox": [ + 316, + 141, + 555, + 185 + ], + "type": "text", + "content": "[12] Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Madry. Adversarial examples are not bugs, they are features. Advances in neural information processing systems, 32, 2019. 3" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 317, + 186, + 555, + 251 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 186, + 555, + 251 + ], + "spans": [ + { + "bbox": [ + 317, + 186, + 555, + 251 + ], + "type": "text", + "content": "[13] Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. Scaling up visual and vision-language representation learning with noisy text supervision. In International conference on machine learning, pages 4904-4916. PMLR, 2021. 3" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 316, + 254, + 556, + 297 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 254, + 556, + 297 + ], + "spans": [ + { + "bbox": [ + 316, + 254, + 556, + 297 + ], + "type": "text", + "content": "[14] Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. The CIFAR-10 and CIFAR-100 datasets, 2014. Available at: http://www.cs.toronto.edu/~kriz/cifar.html.6" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 316, + 299, + 555, + 353 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 299, + 555, + 353 + ], + "spans": [ + { + "bbox": [ + 316, + 299, + 555, + 353 + ], + "type": "text", + "content": "[15] Ziyi Lin, Shijie Geng, Renrui Zhang, Peng Gao, Gerard De Melo, Xiaogang Wang, Jifeng Dai, Yu Qiao, and Hongsheng Li. Frozen clip models are efficient video learners. In European Conference on Computer Vision, pages 388-404. Springer, 2022. 
1" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 316, + 355, + 555, + 399 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 355, + 555, + 399 + ], + "spans": [ + { + "bbox": [ + 316, + 355, + 555, + 399 + ], + "type": "text", + "content": "[16] Zhuoran Liu, Zhengyu Zhao, and Martha Larson. Image shortcut squeezing: Countering perturbative availability poisons with compression. In International conference on machine learning, 2023. 2, 3, 11" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 317, + 399, + 555, + 443 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 399, + 555, + 443 + ], + "spans": [ + { + "bbox": [ + 317, + 399, + 555, + 443 + ], + "type": "text", + "content": "[17] Yuning Lu, Jianzhuang Liu, Yonggang Zhang, Yajing Liu, and Xinmei Tian. Prompt distribution learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2022. 4" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 317, + 445, + 555, + 511 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 445, + 555, + 511 + ], + "spans": [ + { + "bbox": [ + 317, + 445, + 555, + 511 + ], + "type": "text", + "content": "[18] Yiwei Ma, Guohai Xu, Xiaoshuai Sun, Ming Yan, Ji Zhang, and Rongrong Ji. X-clip: End-to-end multi-grained contrastive learning for video-text retrieval. In Proceedings of the 30th ACM International Conference on Multimedia, page 638–647, New York, NY, USA, 2022. Association for Computing Machinery. 3" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 317, + 512, + 555, + 555 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 512, + 555, + 555 + ], + "spans": [ + { + "bbox": [ + 317, + 512, + 555, + 555 + ], + "type": "text", + "content": "[19] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017. 2, 3, 11" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 317, + 557, + 555, + 601 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 557, + 555, + 601 + ], + "spans": [ + { + "bbox": [ + 317, + 557, + 555, + 601 + ], + "type": "text", + "content": "[20] Samuel G Müller and Frank Hutter. Trivialaugment: Tuning-free yet state-of-the-art data augmentation. In Proceedings of the IEEE/CVF international conference on computer vision, pages 774-782, 2021. 4, 11" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 317, + 602, + 555, + 667 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 602, + 555, + 667 + ], + "spans": [ + { + "bbox": [ + 317, + 602, + 555, + 667 + ], + "type": "text", + "content": "[21] Anguelos Nicolaou, Vincent Christlein, Edgar Riba, Jian Shi, Georg Vogeler, and Mathias Seuret. Tormentor: Deterministic dynamic-path, data augmentations with fractals. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pages 2707-2711, 2022. 4, 11" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 316, + 669, + 555, + 714 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 669, + 555, + 714 + ], + "spans": [ + { + "bbox": [ + 316, + 669, + 555, + 714 + ], + "type": "text", + "content": "[22] Maria-Elena Nilsback and Andrew Zisserman. Automated flower classification over a large number of classes. 
In 2008 Sixth Indian conference on computer vision, graphics & image processing, pages 722-729. IEEE, 2008. 5, 11" + } + ] + } + ], + "index": 26 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "text", + "content": "9515" + } + ] + } + ], + "index": 28 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 73, + 297, + 714 + ], + "type": "list", + "angle": 0, + "index": 13, + "blocks": [ + { + "bbox": [ + 56, + 73, + 297, + 116 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 73, + 297, + 116 + ], + "spans": [ + { + "bbox": [ + 56, + 73, + 297, + 116 + ], + "type": "text", + "content": "[23] Omkar M Parkhi, Andrea Vedaldi, Andrew Zisserman, and CV Jawahar. Cats and dogs. In 2012 IEEE conference on computer vision and pattern recognition, pages 3498-3505. IEEE, 2012. 5, 11" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 118, + 296, + 172 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 118, + 296, + 172 + ], + "spans": [ + { + "bbox": [ + 56, + 118, + 296, + 172 + ], + "type": "text", + "content": "[24] Tianrui Qin, Xitong Gao, Juanjuan Zhao, and Kejiang Ye. Destruction-restoration suppresses data protection perturbations against diffusion models. In 2023 IEEE 35th International Conference on Tools with Artificial Intelligence (ICTAI), pages 586-594. IEEE, 2023. 1" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 173, + 295, + 227 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 173, + 295, + 227 + ], + "spans": [ + { + "bbox": [ + 56, + 173, + 295, + 227 + ], + "type": "text", + "content": "[25] Tianrui Qin, Xitong Gao, Juanjuan Zhao, Kejiang Ye, and Cheng-Zhong Xu. Learning the unlearnable: Adversarial augmentations suppress unlearnable example attacks. In 4th Workshop on Adversarial Robustness In the Real World (AROW), ICCV 2023, 2023. 2, 3, 4, 7, 12" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 56, + 228, + 295, + 271 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 228, + 295, + 271 + ], + "spans": [ + { + "bbox": [ + 56, + 228, + 295, + 271 + ], + "type": "text", + "content": "[26] Tianrui Qin, Xitong Gao, Juanjuan Zhao, Kejiang Ye, and Cheng zhong Xu. APBench: A unified availability poisoning attack and defenses benchmark. Transactions on Machine Learning Research, 2024. 3, 4, 6" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 56, + 272, + 295, + 337 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 272, + 295, + 337 + ], + "spans": [ + { + "bbox": [ + 56, + 272, + 295, + 337 + ], + "type": "text", + "content": "[27] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748-8763. PMLR, 2021. 
3, 5, 7" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 56, + 338, + 296, + 392 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 338, + 296, + 392 + ], + "spans": [ + { + "bbox": [ + 56, + 338, + 296, + 392 + ], + "type": "text", + "content": "[28] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10684-10695, 2022. 3" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 56, + 393, + 296, + 446 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 393, + 296, + 446 + ], + "spans": [ + { + "bbox": [ + 56, + 393, + 296, + 446 + ], + "type": "text", + "content": "[29] Pedro Sandoval-Segura, Vasu Singla, Jonas Geiping, Micah Goldblum, Tom Goldstein, and David Jacobs. Autoregressive perturbations for data poisoning. Advances in Neural Information Processing Systems, 35:27374-27386, 2022. 1, 2, 3, 4, 6, 7" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 56, + 449, + 295, + 491 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 449, + 295, + 491 + ], + "spans": [ + { + "bbox": [ + 56, + 449, + 295, + 491 + ], + "type": "text", + "content": "[30] Mainak Singha, Harsh Pal, Ankit Jha, and Biplab Banerjee. Ad-clip: Adapting domains in prompt space using clip. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4355–4364, 2023. 1" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 56, + 492, + 295, + 525 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 492, + 295, + 525 + ], + "spans": [ + { + "bbox": [ + 56, + 492, + 295, + 525 + ], + "type": "text", + "content": "[31] Khurram Soomro, Amir Roshan Zamir, and Mubarak Shah. UCF101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402, 2012. 5, 11" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 56, + 526, + 295, + 590 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 526, + 295, + 590 + ], + "spans": [ + { + "bbox": [ + 56, + 526, + 295, + 590 + ], + "type": "text", + "content": "[32] Zeyi Sun, Ye Fang, Tong Wu, Pan Zhang, Yuhang Zang, Shu Kong, Yuanjun Xiong, Dahua Lin, and Jiaqi Wang. Alphaclip: A clip model focusing on wherever you want. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 13019-13029, 2024. 1" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 56, + 592, + 295, + 645 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 592, + 295, + 645 + ], + "spans": [ + { + "bbox": [ + 56, + 592, + 295, + 645 + ], + "type": "text", + "content": "[33] Lue Tao, Lei Feng, Jinfeng Yi, Sheng-Jun Huang, and Song-can Chen. Better safe than sorry: Preventing delusive adversaries with adversarial training. Advances in Neural Information Processing Systems, 34:16209-16225, 2021. 2, 4, 6, 7" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 56, + 647, + 295, + 690 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 647, + 295, + 690 + ], + "spans": [ + { + "bbox": [ + 56, + 647, + 295, + 690 + ], + "type": "text", + "content": "[34] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017. 
5" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 56, + 691, + 295, + 714 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 691, + 295, + 714 + ], + "spans": [ + { + "bbox": [ + 56, + 691, + 295, + 714 + ], + "type": "text", + "content": "[35] Jianyi Wang, Kelvin CK Chan, and Chen Change Loy. Exploring clip for assessing the look and feel of images. In" + } + ] + } + ], + "index": 12 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 73, + 555, + 486 + ], + "type": "list", + "angle": 0, + "index": 23, + "blocks": [ + { + "bbox": [ + 333, + 73, + 555, + 95 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 333, + 73, + 555, + 95 + ], + "spans": [ + { + "bbox": [ + 333, + 73, + 555, + 95 + ], + "type": "text", + "content": "Proceedings of the AAAI Conference on Artificial Intelligence, pages 2555-2563, 2023. 1" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 316, + 96, + 555, + 140 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 96, + 555, + 140 + ], + "spans": [ + { + "bbox": [ + 316, + 96, + 555, + 140 + ], + "type": "text", + "content": "[36] Shutong Wu, Sizhe Chen, Cihang Xie, and Xiaolin Huang. One-pixel shortcut: on the learning preference of deep neural networks. In International Conference on Learning Representations, 2023. 1, 2, 3, 4, 6, 7" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 316, + 141, + 555, + 195 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 141, + 555, + 195 + ], + "spans": [ + { + "bbox": [ + 316, + 141, + 555, + 195 + ], + "type": "text", + "content": "[37] Jianxiong Xiao, James Hays, Krista A. Ehinger, Aude Oliva, and Antonio Torralba. Sun database: Large-scale scene recognition from abbey to zoo. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 3485-3492, 2010. 11" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 316, + 197, + 555, + 251 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 197, + 555, + 251 + ], + "spans": [ + { + "bbox": [ + 316, + 197, + 555, + 251 + ], + "type": "text", + "content": "[38] Yao-Yuan Yang, Cyrus Rashtchian, Hongyang Zhang, Russ R Salakhutdinov, and Kamalika Chaudhuri. A closer look at accuracy vs. robustness. In Advances in Neural Information Processing Systems, pages 8588-8601. Curran Associates, Inc., 2020. 3, 7" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 316, + 252, + 555, + 306 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 252, + 555, + 306 + ], + "spans": [ + { + "bbox": [ + 316, + 252, + 555, + 306 + ], + "type": "text", + "content": "[39] Hantao Yao, Rui Zhang, and Changsheng Xu. Visual-language prompt tuning with knowledge-guided context optimization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6757-6767, 2023. 4" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 316, + 308, + 555, + 352 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 308, + 555, + 352 + ], + "spans": [ + { + "bbox": [ + 316, + 308, + 555, + 352 + ], + "type": "text", + "content": "[40] Da Yu, Huishuai Zhang, Wei Chen, Jian Yin, and Tie-Yan Liu. Availability attacks create shortcuts. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 2367-2376, 2022. 
1, 2, 3, 4, 6, 7" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 316, + 354, + 555, + 396 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 354, + 555, + 396 + ], + "spans": [ + { + "bbox": [ + 316, + 354, + 555, + 396 + ], + "type": "text", + "content": "[41] Yunrui Yu, Xitong Gao, and Cheng-Zhong Xu. LAFIT: Efficient and reliable evaluation of adversarial defenses with latent features. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 46(1):354-369, 2024. 3" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 316, + 398, + 555, + 441 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 398, + 555, + 441 + ], + "spans": [ + { + "bbox": [ + 316, + 398, + 555, + 441 + ], + "type": "text", + "content": "[42] Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. Conditional prompt learning for vision-language models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2022. 4, 6, 7, 8" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 316, + 443, + 555, + 486 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 443, + 555, + 486 + ], + "spans": [ + { + "bbox": [ + 316, + 443, + 555, + 486 + ], + "type": "text", + "content": "[43] Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. Learning to prompt for vision-language models. International Journal of Computer Vision, 130(9):2337-2348, 2022. 4, 6" + } + ] + } + ], + "index": 22 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 295, + 749, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 749, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 749, + 315, + 757 + ], + "type": "text", + "content": "9516" + } + ] + } + ], + "index": 24 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2025/A4A_ Adapter for Adapter Transfer via All-for-All Mapping for Cross-Architecture Models/66a56aa1-a030-49e2-8d93-323c4a070c6f_content_list.json b/2025/A4A_ Adapter for Adapter Transfer via All-for-All Mapping for Cross-Architecture Models/66a56aa1-a030-49e2-8d93-323c4a070c6f_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..21b17742c53a1686124051baa602a307cc0a48e3 --- /dev/null +++ b/2025/A4A_ Adapter for Adapter Transfer via All-for-All Mapping for Cross-Architecture Models/66a56aa1-a030-49e2-8d93-323c4a070c6f_content_list.json @@ -0,0 +1,1942 @@ +[ + { + "type": "text", + "text": "A4A: Adapter for Adapter Transfer via All-for-All Mapping for Cross-Architecture Models", + "text_level": 1, + "bbox": [ + 178, + 130, + 818, + 174 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Keyu Tu, Mengqi Huang, Zhuowei Chen, Zhendong Mao* \nUniversity of Science and Technology of China", + "bbox": [ + 243, + 203, + 743, + 239 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "{kytu,huangmq,chenzw01}@mail.ustc.edu.cn,{zdmao}@ustc.edu.cn", + "bbox": [ + 218, + 241, + 772, + 257 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 246, + 291, + 326, + 306 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Large-scale text-to-image models evolve rapidly in size and architecture. The existing adapters struggle to keep pace with these models, requiring extensive retraining. 
This paper proposes a novel adapter transfer framework, A4A (Adapter for Adapter), which uses an all-for-all mapping approach to seamlessly transfer attention-based adapters across different model architectures (e.g., U-Net to transformer). The framework consists of Coupling Space Projection and Upgraded Space Mapping. During Coupling Space Projection, all attention features of the pretrained adapter are aggregated to fully capture the coupling relationship before being projected into a unified space. The unified space maintains coupling features in a consistent dimension, effectively and efficiently addressing feature scale discrepancies arising from the base model's architecture. In the Upgraded Space Mapping Module, randomly initialized learnable features are introduced to connect the unified and upgraded spaces by integrating reference features via the attention mechanism. The learned features are adaptively injected into the upgrade model through the Alignment module, which bridges the discrepancies between the models using the all-for-all mapping. Experimental results on personalized image generation tasks demonstrate that A4A outperforms previous methods in transferring adapters while being the first to achieve adapter transfer across model architectures.", + "bbox": [ + 91, + 323, + 483, + 715 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1. Introduction", + "text_level": 1, + "bbox": [ + 91, + 744, + 220, + 762 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Recent advancements in large-scale text-to-image diffusion models [5, 11, 23, 26] have significantly improved their ability to generate high-quality, realistic images based on user-friendly textual prompts. Building on these generative capabilities, numerous adapters have been developed upon these pretrained models to further endow them with new control conditions, such as pose and human identity control,", + "bbox": [ + 89, + 771, + 482, + 878 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "thereby fostering the growth of downstream real-world applications like personalized image creation. As pretrained models rapidly evolve, with increasing parameters (e.g., from SD1.5's 860M to SDXL's 2600M) and developing architectures (e.g., from the convolution U-Net [27] to the transformer [22]), the original adapters built on base models require substantial resources for retraining and significant effort for redesign to accommodate upgraded models. This leads to a lag in adapter development compared to the progression of upgraded models1. Therefore, the adapter transfer task, i.e., effectively and efficiently transferring existing adapters from base models to upgraded models to leverage the strong control capabilities of the original well-developed adapters and the superior generative abilities of the upgraded models, has become an increasingly important and urgent requirement in both academia and industry.", + "bbox": [ + 511, + 292, + 906, + 532 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Given the substantial potential benefits of adapter transfer, several prior studies have been conducted in this field. For instance, Ctrl-Adapter [19] has been proposed to transfer the ControlNet [41] architecture by fusing the output of the zero-convolution from the pretrained ControlNet into the corresponding layer in the upgraded model. 
Concurrently, X-Adapter [25] has been explored for mapping the latent from the base model's decoder block and adding them to the corresponding location within the upgraded model's decoder. In summary, existing adapter transfer methods primarily focus on addition-based adapters (i.e., the control conditions are injected into the pretrained models by simple addition, typically, ControlNet), through a layer-by-layer mapping, i.e., the output of each layer in the base model's adapters are mapped to the semantically equivalent layer in the upgraded model.", + "bbox": [ + 511, + 536, + 908, + 776 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "However, in this study, we argue that the existing layer-by-layer mapping fails to fully exploit the coupling between original adapters and base models to effectively bridge the", + "bbox": [ + 511, + 777, + 906, + 825 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "CVF", + "bbox": [ + 106, + 2, + 181, + 42 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore.", + "bbox": [ + 236, + 0, + 810, + 46 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "1In this study, we define \"base models\" as the original pretrained text-to-image models with well-developed adapters (e.g., SD1.5), and \"upgraded models\" as those pretrained models that require adapter transfer (e.g., SDXL). Upgraded models typically have more parameters and advanced architectural designs compared to the base models.", + "bbox": [ + 511, + 838, + 906, + 902 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "*Zhendong Mao is the corresponding author.", + "bbox": [ + 107, + 886, + 346, + 900 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "18476", + "bbox": [ + 480, + 944, + 519, + 955 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "discrepancies between base models and upgraded models. Here, coupling refers to the already well-trained compatibility between the original adapters and base models, while differences refer to the architectural discrepancies between base models and upgraded models. The reason behind this is that the original adapters and base models function as an integrated whole, and the layer-by-layer mapping disrupts this holistic consistency by isolating the output of each layer. Consequently, during the subsequent mapping to the upgraded models, this overall consistency cannot be effectively utilized. As a result, existing adapter transfer methods suffer from limited transfer scopes. On the one hand, they can only transfer addition-based adapters (typically, ControlNet) but fail to generalize to the attention-based adapters, which are more widely used. This is technically more challenging because attention-based adapters (i.e. the control conditions are injected into pretrained models by attention mechanisms) involve more complex interactions and dependencies across different layers of the model. On the other hand, these methods are limited to transferring between similar pretrained model architectures (e.g., from a U-Net model to another U-Net model) but fail when transferring from a U-Net model to a transformer model. 
This limitation is particularly critical, as the latest state-of-the-art pretrained text-to-image models [1, 6] are predominantly transformer-based, while the current most mature adapters remain developed on the U-Net architecture.", + "bbox": [ + 93, + 90, + 480, + 496 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "To address this challenge, we propose a novel adapter transfer framework, A4A (Adapter for Adapter), which utilizes an innovative all-for-all mapping approach to seamlessly transfer the intrinsic coupling between all layers of the original adapters and base models to all layers of the upgraded models, thereby enabling the transfer of more difficult and widely used attention-based adapters and facilitating cross-architecture model transfers. Specifically, A4A achieves all-for-all mapping through the Coupling Space Projection and Upgraded Space Mapping. In the Coupling Space Projection phase, all attention features of the pretrained adapter are collected, capturing the complete coupling relationship between the adapter and the base model, and then projected into a unified space. The coupling relationship treats the features of all layers of the adapter as a unified whole, conveying a continuous representation of the new control conditions throughout the generation process, distinguishing it from isolated layer mappings. Upgraded space refers to the coupled feature space that corresponds to the upgraded model, where we randomly initialize learnable attention features to transfer the coupling relationship from the base model to the upgraded model. By integrating the reference features through the attention mechanism and aligning them with the upgraded architecture, the learnable features bridge the discrepancies between the models.", + "bbox": [ + 93, + 503, + 480, + 878 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "The contributions of this work are as follows:", + "bbox": [ + 114, + 886, + 410, + 898 + ], + "page_idx": 1 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1. To alleviate the limited transfer scopes, we introduce a novel all-for-all mapping approach that enables the transfer of attention-based adapters and facilitates cross-architecture model transfers.", + "2. A4A projects the adapter's complete features into the unified coupling space and bridges it with the upgraded space by fusing these features with randomly initialized learnable features through the attention mechanism.", + "3. Experiments in various types of personalized image creation demonstrate that A4A is an effective attention-based adapter transfer approach for cross-architecture models, achieving better performance than the pretrained adapter from the upgraded model with minimal training costs." + ], + "bbox": [ + 516, + 90, + 903, + 301 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2. Related work", + "text_level": 1, + "bbox": [ + 516, + 319, + 648, + 334 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2.1. Latent Diffusion Models", + "text_level": 1, + "bbox": [ + 516, + 344, + 733, + 358 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Recent diffusion-based text-to-image models [11, 23, 26, 29] have received wide acclaim for their outstanding image fidelity and diversity. Ho et al. [11] introduce the denoising diffusion process into generation models in the seminal work DDPM. Diffusion models learn the generation process through iterative denoising steps. 
Latent Diffusion Model (LDM) [26] proposed to perform the diffusion process in the latent space of a Variational Autoencoder [31]. Under the LDM architecture, two primary backbone models are employed: the convolutional U-Net [27] and the transformer [22]. While these models share a similar generation process, there are significant differences between the two models. The most notable example of U-Net models is the StableDiffusion [23, 26] series, excluding SDv3.0 [6], which consists of the symmetric encoder and decoder. The encoder is composed of multiple blocks of diverse dimensions, interconnected by down-sampling layers. Each block incorporates several attention layers [32] to fuse latents and conditions. Another series of models [4, 6, 9, 28] adopts DiT blocks [22] for denoising. The transformer model has been widely used in recent video generation tasks [20], where it has achieved state-of-the-art results. After patchification, the resulting features are progressively processed through a series of DiT blocks. In the DiT blocks, the features are scaled and shifted using AdaLN, maintaining the same dimension, which distinguishes them from U-Net-based models. Transformer-based models, with their flexible structure and impressive generative capabilities, have garnered significant attention due to their great potential.", + "bbox": [ + 516, + 367, + 903, + 804 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2.2. Adapters for Text-to-Image Diffusion Models", + "text_level": 1, + "bbox": [ + 516, + 816, + 893, + 832 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Given the inefficiencies of fine-tuning large pretrained models, an alternative strategy is to utilize adapters, which introduce a limited number of trainable parameters while keeping the pretrained model frozen. Due to their flexibility and", + "bbox": [ + 516, + 840, + 903, + 898 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "18477", + "bbox": [ + 480, + 945, + 517, + 955 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "greater efficiency compared to fine-tuning, adapters have gained significant interest. By designing conditional modules, an adapter can introduce new control conditions, such as personalized characters, objects, layouts, and style information, to a pretrained text-to-image model. Downstream tasks for these adapters include personalized character generation (ID customization) [7, 34, 40, 42, 44], personalized object generation (IP customization) [2, 7, 13, 21, 30, 36, 37, 40, 42], attribute and layout control [3, 16, 18, 35, 38, 43], and stylization [10, 39]. As discussed in Sec. 1, these adapters can be categorized into two main types: attention-based and addition-based. In the addition-based adapter (typically, ControlNet [41]), the encoded new conditions are directly added to the output of the sub-blocks of the generation model. In contrast, the attention-based adapter processes the encoded conditions through attention layers, modifying the original attention values based on both the new conditions and the text prompts. Attention-based adapters [7, 10, 30, 34, 36-40, 42-44] have received widespread attention due to their efficient and precise condition control capabilities, dominating the field of adapters.", + "bbox": [ + 89, + 90, + 483, + 409 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "2.3. 
Adapter Transferring", + "text_level": 1, + "bbox": [ + 89, + 417, + 294, + 431 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "The rapid evolution and diverse architectures of text-to-image generation models place constraints on transferring the aforementioned adapters. Consequently, to accommodate new models, adapters often require retraining from scratch on these pretrained diffusion models. X-Adapter [25] designates models equipped with the well-trained adapter as base models, with the upgraded version referred to as the upgraded model. It establishes manual connections between decoder blocks of the same dimensions in both the base and upgraded models. Specifically, the decoder of SD1.5 and SDXL consists of blocks with dimensions 1280, 640, and 320. X-Adapter [25] maps the output of the base model's blocks to the corresponding blocks of identical dimensions in the upgraded model. As a result, X-Adapter is specifically suited for SD-series models and faces challenges when transferring to transformer-based models [4, 6, 9, 22, 28]. Ctrl-Adapter [19] aims to transfer the addition-based adapter, ControlNet [41], for video generation models or upgraded image generation models. It connects the output of the zero-convolutional layers of both models through mappers to transfer the ControlNet.", + "bbox": [ + 89, + 439, + 483, + 758 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3. Method", + "text_level": 1, + "bbox": [ + 89, + 768, + 181, + 784 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "We propose a novel framework for transferring well-trained adapters from the base model to the upgraded model with architectural discrepancies, specifically between a U-Net model and a transformer model. For the sake of brevity, we define PTA as the Pre-Trained Adapter. Specifically, pretrained refers to the version that has been officially published. Additionally, we denote $M_{base}$ and $M_{up}$ as the previously mentioned", + "bbox": [ + 89, + 794, + 483, + 902 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "base model and upgraded model. A4A first projects the extracted attention features into the unified coupling space to maintain the coupling relationship between PTAs and $M_{base}$. Then, the coupling space is mapped to the upgraded space, where learnable features integrate the reference coupling features with attention layers. These learned features are then adaptively aligned with $M_{up}$ through the Alignment component.", + "bbox": [ + 511, + 90, + 906, + 212 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.1. Preliminaries", + "text_level": 1, + "bbox": [ + 511, + 222, + 653, + 236 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Before presenting our method, we introduce diffusion models with various backbones. The Latent Diffusion Model (LDM), which performs noise addition and denoising in the latent space $z$ of the VAE encoder, underpins the most prominent open-source community for text-to-image generation. 
The objective of training LDMs is:", + "bbox": [ + 511, + 243, + 906, + 335 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\min_{\\theta} \\mathcal{L}_{LDM} = \\mathbb{E}_{z, \\epsilon \\sim \\mathcal{N}(0, I), t} \\| \\epsilon - \\epsilon_{\\theta}(z_{t}, t, E_{t}(y_{t})) \\|_{2}^{2}, \\tag{1}\n$$\n", + "text_format": "latex", + "bbox": [ + 524, + 345, + 906, + 369 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "where $t$ is uniformly sampled from the time steps $\\{1, \\dots, T\\}$, $y_{t}$ denotes the conditional text prompt, and $E_{t}$ represents the text encoder. The parameterized denoising network, denoted as $\\epsilon_{\\theta}$, may take the form of either a U-Net model or a transformer model. The U-Net architecture consists of blocks with varying dimensions. Each U-Net block includes down-sampling or up-sampling layers along with attention layers. The transformer model primarily consists of multiple DiT blocks grounded on the transformer architecture. Since our method aims to transfer the widely used attention-based adapters, we define the cross-attention process and its associated signals as follows:", + "bbox": [ + 511, + 378, + 908, + 561 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\boldsymbol{q} = \\boldsymbol{W}^{q} \\cdot \\boldsymbol{i}, \\quad \\boldsymbol{k} = \\boldsymbol{W}^{k} \\cdot \\boldsymbol{c}, \\quad \\text{and} \\quad \\boldsymbol{v} = \\boldsymbol{W}^{v} \\cdot \\boldsymbol{c} \\tag{2}\n$$\n", + "text_format": "latex", + "bbox": [ + 531, + 571, + 906, + 590 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "where $i$ denotes the latents of the image, and $c$ signifies the embeddings of the condition, such as the text prompt in the original model. Additionally, $W$ represents the weights for attention projection. Attention is conducted as follows:", + "bbox": [ + 511, + 602, + 905, + 662 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\operatorname{Attention}(\\boldsymbol{q}, \\boldsymbol{k}, \\boldsymbol{v}) = \\operatorname{Softmax}\\left(\\frac{\\boldsymbol{q} \\cdot \\boldsymbol{k}^{T}}{\\sqrt{d}}\\right) \\boldsymbol{v} \\tag{3}\n$$\n", + "text_format": "latex", + "bbox": [ + 568, + 672, + 906, + 710 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "where $d$ represents the dimensions of $\\pmb{k}$ and $\\pmb{v}$. Through cross-attention, the image latents and condition embeddings are comprehensively integrated.", + "bbox": [ + 511, + 719, + 905, + 765 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.2. Coupling Space Projection", + "text_level": 1, + "bbox": [ + 511, + 773, + 754, + 791 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Condition Encoder of PTA. For new control conditions $y_{n}$ beyond the original text prompt, adapters typically incorporate a condition encoder as illustrated in Fig. 1. 
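For reference, the cross-attention of Eqs. (2) and (3) can be sketched in a few lines of PyTorch; this is a minimal single-head version with illustrative tensor shapes, not the pretrained models' actual implementation:

import torch
import torch.nn.functional as F

def cross_attention(i, c, W_q, W_k, W_v):
    # i: image latents (n_img_tokens, d_model); c: condition embeddings
    # (n_cond_tokens, d_cond); W_*: projection matrices as in Eq. (2).
    q, k, v = i @ W_q, c @ W_k, c @ W_v
    d = k.shape[-1]
    attn = F.softmax(q @ k.transpose(-1, -2) / d ** 0.5, dim=-1)  # Eq. (3)
    return attn @ v  # fused latents, same token count as q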
We denote $E_{n}$ to distinguish it from the original text encoder $E_{t}$ of the pretrained large-scale T2I models:", + "bbox": [ + 511, + 796, + 905, + 872 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\boldsymbol {c} _ {n} = E _ {n} \\left(y _ {n}\\right), \\tag {4}\n$$\n", + "text_format": "latex", + "bbox": [ + 658, + 885, + 906, + 901 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "18478", + "bbox": [ + 480, + 944, + 519, + 955 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/a86d0fff88187fdbf456d2a9bebc8d42078a61a7a43735c068e5321b344f0df4.jpg", + "image_caption": [ + "Figure 1. The illustration of the Adapter for Adapter (A4A) framework. Both the base model and the upgraded model are kept frozen. (a) Coupling Space Projection: The pretrained Adapter, consisting of the condition encoder and attention layers (highlighted in pink), is loaded. The adapter features $k_{i}$ and $v_{i}$ are projected into a unified coupling space, reshaping them as $K$ and $V$ . (b) Upgraded Space Mapping: Randomly initialized learnable upgraded features, $\\bar{K}$ and $\\bar{V}$ , are concatenated with $K$ and $V$ as references. The learning process of $\\bar{K}$ and $\\bar{V}$ bridges the discrepancy between the base model and the upgraded model. These features are then aligned with the original cross-attention layers of the upgraded model through Alignment, which can be a U-Net or Transformer model. Best viewed in color." + ], + "image_footnote": [], + "bbox": [ + 91, + 85, + 903, + 281 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $c_{n}$ denotes the new condition embeddings. Taking IP-Adapter [40] as an example, the Image Encoder external to the original generation model serves as the condition encoder. And we directly integrate the pretrained condition encoder $E_{n}$ from the adapter to efficiently transfer the well-trained adapter to the upgraded model.", + "bbox": [ + 88, + 401, + 480, + 491 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Attention Layers of PTA. As illustrated in Fig. 1, the encoded new condition embeddings $c_{n}$ are processed through the attention mechanism coupling with the base model. We explicitly depict the attention layers associated with the pretrained adapter in the figure, denoting the weights of the $i$ -th attention layer of the adapter as $W_{A,i}$ , distinguishing them from the original attention weights $W$ in the base model. Subscript $A$ represents the adapter, and $i$ represents the $i$ -th adapter attention layer. For instance, the fine-tuned cross-attention layers in the Decoupled Cross-Attention module of IP-Adapter [40] exemplify this. 
$c_{n}$ are sequentially fed into the aforementioned adapter's attention layers $W_{A,i}$:", + "bbox": [ + 88, + 491, + 482, + 674 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\boldsymbol{k}_{i} = \\boldsymbol{W}_{A, i}^{k}\\left(\\boldsymbol{c}_{n}\\right), \\quad \\boldsymbol{k}_{i} \\in \\mathbb{R}^{N \\times d_{i}}, \\tag{5}\n$$\n", + "text_format": "latex", + "bbox": [ + 174, + 681, + 482, + 702 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\boldsymbol{v}_{i} = \\boldsymbol{W}_{A, i}^{v}\\left(\\boldsymbol{c}_{n}\\right), \\quad \\boldsymbol{v}_{i} \\in \\mathbb{R}^{N \\times d_{i}}, \\tag{6}\n$$\n", + "text_format": "latex", + "bbox": [ + 174, + 710, + 482, + 729 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $d_{i}$ is the dimension of the feature, and $\\mathbf{N}$ denotes the number of tokens for features $k_{i}$ and $\\boldsymbol{v}_{i}$. To extract the attention features from $\\boldsymbol{c}_{n}$, we employ the attention layers of the adapter rather than utilizing the entire base model.", + "bbox": [ + 89, + 734, + 482, + 795 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Projection onto the Coupling Space. The features obtained from multiple layers form a sequence of length $l$, $[k_1, k_2, \\dots, k_l]$ and $[v_1, v_2, \\dots, v_l]$, where $l$ represents the number of attention layers of PTA. The dimensions $d_i$ of the features vary, as shown in Fig. 1. To achieve the all-for-all mapping for the attention-based adapter from $M_{base}$ to $M_{up}$, the features of all cross-attention layers need to be", + "bbox": [ + 89, + 795, + 482, + 901 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "projected into a unified coupling space $S_{co}$, which is defined by the least common multiple $d_{scm}$ of all dimensions $d_{i}$:", + "bbox": [ + 511, + 401, + 906, + 446 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\boldsymbol{K} = \\operatorname{Proj}([\\boldsymbol{k}_{1}, \\boldsymbol{k}_{2}, \\dots, \\boldsymbol{k}_{l}]), \\quad \\boldsymbol{K} \\in \\mathbb{R}^{l \\times N \\times d_{scm}}, \\tag{7}\n$$\n", + "text_format": "latex", + "bbox": [ + 531, + 458, + 906, + 479 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\boldsymbol{V} = \\operatorname{Proj}([\\boldsymbol{v}_{1}, \\boldsymbol{v}_{2}, \\dots, \\boldsymbol{v}_{l}]), \\quad \\boldsymbol{V} \\in \\mathbb{R}^{l \\times N \\times d_{scm}}. \\tag{8}\n$$\n", + "text_format": "latex", + "bbox": [ + 534, + 489, + 906, + 511 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "The projection module consists of several linear layers designed to map dimensions $d_{i}$ to $d_{scm}$. After the sequences are projected to $S_{co}$, they are reshaped into matrices $K$ and $V$. Mapping to $S_{co}$ defined by $d_{scm}$ strikes the best balance between efficiency and effectiveness. Sec. 7.3 in the supplementary material demonstrates this through experiments. It reduces computational complexity by aligning features to a common dimension, avoiding the overhead of larger spaces. At the same time, it maintains sufficient representational capacity, preventing the loss of important information, which can happen with smaller spaces. This ensures effective feature alignment without excessive resource usage, making it an optimal choice for both computational efficiency and model performance.", + "bbox": [ + 511, + 518, + 906, + 731 + ], + "page_idx": 3 + },
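As a concrete reading of Eqs. (5)-(8), the projection into the coupling space can be sketched as below; the class name, the way the frozen adapter projections are passed in, and all shapes are illustrative assumptions, not the authors' released code:

import math
import torch
import torch.nn as nn

class CouplingSpaceProjection(nn.Module):
    # Sketch of Eqs. (5)-(8): the condition embeddings c_n pass through the
    # pretrained adapter's per-layer projections (W_A,i^k, W_A,i^v), and each
    # d_i-dimensional feature is lifted to the unified width d_scm.
    def __init__(self, adapter_kv_layers, dims):
        super().__init__()
        # List of frozen (to_k, to_v) projection pairs from the PTA; kept in a
        # plain list so they are not registered as trainable submodules.
        self.adapter_kv_layers = adapter_kv_layers
        self.d_scm = math.lcm(*dims)                # least common multiple of all d_i
        self.proj_k = nn.ModuleList([nn.Linear(d, self.d_scm) for d in dims])
        self.proj_v = nn.ModuleList([nn.Linear(d, self.d_scm) for d in dims])

    def forward(self, c_n):                         # c_n: (N, d_cond) condition tokens
        ks, vs = [], []
        for (to_k, to_v), pk, pv in zip(self.adapter_kv_layers, self.proj_k, self.proj_v):
            ks.append(pk(to_k(c_n)))                # Eq. (5), then Eq. (7)
            vs.append(pv(to_v(c_n)))                # Eq. (6), then Eq. (8)
        return torch.stack(ks), torch.stack(vs)     # K, V: (l, N, d_scm)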
This ensures effective feature alignment without excessive resource usage, making it an optimal choice for both computational efficiency and model performance.", + "bbox": [ + 511, + 518, + 906, + 731 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.3. Upgraded Space Mapping", + "text_level": 1, + "bbox": [ + 511, + 741, + 751, + 758 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Similarly, we define the number of cross-attention layers in the upgraded model as $\bar{l}$ , and the space for upgraded models as $S_{up}$ . To transfer the unified adapter features to the upgraded space, inspired by BLIP-2 [17], we randomly initialize two learnable parameters $\bar{K} \in \mathbb{R}^{\bar{l} \times N \times d_{\text{scm}}}$ and $\bar{V} \in \mathbb{R}^{\bar{l} \times N \times d_{\text{scm}}}$ . Given that the attention layer is effective for integrating features, we adopt this architecture to learn the aforementioned parameters. Consider the learning of $\bar{K}$ as an example. To enhance robustness, we first", + "bbox": [ + 511, + 763, + 906, + 902 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "18479", + "bbox": [ + 480, + 944, + 517, + 955 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "normalize $\bar{\pmb{K}}$ and $\pmb{K}$ using layer normalization. Then, the following operation is performed:", + "bbox": [ + 89, + 90, + 482, + 121 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\bar {\boldsymbol {K}} = \operatorname {FFN} \left[ \boldsymbol {W} ^ {out} \cdot \operatorname {Attention} \left( \bar {\boldsymbol {K}} \boldsymbol {W} _ {1} ^ {k}, [ \boldsymbol {K}, \bar {\boldsymbol {K}} ] \boldsymbol {W} _ {2} ^ {k}, [ \boldsymbol {K}, \bar {\boldsymbol {K}} ] \boldsymbol {W} _ {3} ^ {k} \right) \right], \tag {9}\n$$\n", + "text_format": "latex", + "bbox": [ + 133, + 127, + 480, + 167 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $[\pmb{K},\bar{\pmb{K}} ]$ denotes the concatenation of $\pmb{K}$ and $\bar{\pmb{K}}$ along dimension $N$ . The projection weights $\pmb{W}_1^k,\pmb{W}_2^k,\pmb{W}_3^k\in \mathbb{R}^{d_{scm}\times d_{in}}$ map the features to the intermediate space with dimension $d_{in}$ : $\pmb{W}_1^k$ projects the query $\bar{\pmb{K}}$ , while $\pmb{W}_2^k$ and $\pmb{W}_3^k$ project the concatenated reference $[\pmb{K},\bar{\pmb{K}} ]$ into keys and values. After the features are processed by the Attention operation described in Eq. (3), they are transformed back to the original space via $\pmb{W}^{out}$ . The Feed Forward Network (FFN) is composed of layers arranged sequentially, including layer normalization, linear transformations, GELU activation, and additional linear layers. The aforementioned process of Eq. (9) is iterated $R$ times, with $R$ serving as a hyperparameter. The learning process of $\bar{\pmb{V}}$ is similar to that of $\bar{\pmb{K}}$ :", + "bbox": [ + 89, + 174, + 483, + 357 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\bar {\boldsymbol {V}} = \operatorname {FFN} \left[ \boldsymbol {W} ^ {out} \cdot \operatorname {Attention} \left( \bar {\boldsymbol {V}} \boldsymbol {W} _ {1} ^ {v}, [ \boldsymbol {V}, \bar {\boldsymbol {V}} ] \boldsymbol {W} _ {2} ^ {v}, [ \boldsymbol {V}, \bar {\boldsymbol {V}} ] \boldsymbol {W} _ {3} ^ {v} \right) \right]. \tag {10}\n$$\n", + "text_format": "latex", + "bbox": [ + 124, + 363, + 480, + 398 + ], + "page_idx": 4 + },
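A single iteration of Eq. (9) can be sketched in PyTorch as below, with the layer axis flattened into the token axis so that all rows attend jointly; the intermediate width, the FFN width, and all shapes are our illustrative assumptions. The update for $\bar{V}$ in Eq. (10) would use a second module of identical structure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# One iteration of Eq. (9) under our own naming; sizes are illustrative.
d, d_in, l, l_up, N = 1280, 512, 3, 4, 4
K = torch.randn(l * N, d)                         # reference coupling features
K_bar = nn.Parameter(torch.randn(l_up * N, d))    # learnable upgraded features

norm_ref, norm_bar = nn.LayerNorm(d), nn.LayerNorm(d)  # pre-normalization
W1, W2, W3 = (nn.Linear(d, d_in, bias=False) for _ in range(3))  # W^k_1..3
W_out = nn.Linear(d_in, d)
ffn = nn.Sequential(nn.LayerNorm(d), nn.Linear(d, 4 * d),
                    nn.GELU(), nn.Linear(4 * d, d))

ref = torch.cat([norm_ref(K), norm_bar(K_bar)], dim=0)  # [K, K_bar] along N
attn = F.scaled_dot_product_attention(
    W1(norm_bar(K_bar)).unsqueeze(0),     # queries from K_bar
    W2(ref).unsqueeze(0),                 # keys from [K, K_bar]
    W3(ref).unsqueeze(0))                 # values from [K, K_bar]
K_bar_next = ffn(W_out(attn.squeeze(0)))  # repeated R times in the paper
```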
 + { + "type": "text", + "text": "We clarify that two distinct modules with an identical structure are responsible for learning $\bar{K}$ and $\bar{V}$ , respectively. Throughout this process, the learnable features $\bar{K}$ and $\bar{V}$ are seamlessly integrated with the adapter features $K$ and $V$ of the PTA, effectively bridging the unified coupling space to the upgraded space.", + "bbox": [ + 89, + 406, + 483, + 496 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Alignment with the Upgraded Model. The learned features in the upgraded space should be aligned with the attention layers of $M_{up}$ to fully leverage the adapter's capabilities. Specifically, we first fetch the dimensions $\bar{d}_i$ of the cross-attention layers within the original upgraded model. Then, we employ a linear layer to align the $i$ -th row of the matrix $\bar{\mathbf{K}}$ to $\bar{d}_i$ dimensions through Alignment; we denote the result as $\bar{k}_i$ :", + "bbox": [ + 89, + 497, + 483, + 617 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n[ \bar {\boldsymbol {k}} _ {1}, \dots , \bar {\boldsymbol {k}} _ {\bar {l}} ] = \operatorname {Align} (\bar {\boldsymbol {K}}). \tag {11}\n$$\n", + "text_format": "latex", + "bbox": [ + 196, + 625, + 480, + 643 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Similarly, the identical operation is applied to the $\bar{\mathbf{V}}$ matrix in order to derive the vector $\bar{\pmb{v}}_i$ :", + "bbox": [ + 89, + 648, + 482, + 679 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n[ \bar {\boldsymbol {v}} _ {1}, \dots , \bar {\boldsymbol {v}} _ {\bar {l}} ] = \operatorname {Align} (\bar {\boldsymbol {V}}). \tag {12}\n$$\n", + "text_format": "latex", + "bbox": [ + 197, + 686, + 480, + 705 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "For the $i$ -th cross-attention layer, let $\pmb{q}_i^{up}$ , $\pmb{k}_i^{up}$ , and $\pmb{v}_i^{up}$ be the original features. The extracted features, $\bar{\pmb{k}}_i$ and $\bar{\pmb{v}}_i$ , are combined, through linear weighting and summation, with the prior values of the upgraded model using the Attention operation of Eq. (3):", + "bbox": [ + 89, + 710, + 482, + 772 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\bar {\boldsymbol {Z}} = \operatorname {Attention} \left( \boldsymbol {q} _ {i} ^ {up}, \boldsymbol {k} _ {i} ^ {up}, \boldsymbol {v} _ {i} ^ {up} \right) + \lambda \operatorname {Attention} \left( \boldsymbol {q} _ {i} ^ {up}, \bar {\boldsymbol {k}} _ {i}, \bar {\boldsymbol {v}} _ {i} \right). \tag {13}\n$$\n", + "text_format": "latex", + "bbox": [ + 145, + 777, + 480, + 816 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "The result of Eq. (13), $\bar{Z}$ , serves as the output for the $i$ -th cross-attention layer and will be forwarded to the next layer of the upgraded model. The parameter $\lambda$ serves as a balancing factor, fixed at 1.0 during training and subsequently adjusted for downstream tasks during inference.", + "bbox": [ + 89, + 824, + 483, + 901 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.4. 
Optimization Loss Function", + "text_level": 1, + "bbox": [ + 513, + 90, + 764, + 107 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "To optimize the A4A framework, we employ the loss function $\mathcal{L}_{LDM}$ of the upgraded model $M_{up}$ as defined in Eq. (1):", + "bbox": [ + 511, + 112, + 903, + 157 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\mathcal {L} _ {LDM} = \mathbb {E} _ {z, \epsilon \sim \mathcal {N} (0, I), t} \| \epsilon - \epsilon _ {\theta} ^ {up} \left(z _ {t}, t, c ^ {up}\right) \| _ {2} ^ {2}. \tag {14}\n$$\n", + "text_format": "latex", + "bbox": [ + 542, + 165, + 903, + 184 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "The condition $c^{up}$ includes the original text prompt $y_{t}$ and new control conditions $y_{n}$ . The upgraded model $\epsilon_{\theta}^{up}$ acquires new control conditions and capabilities through the injection of the learned adapter features.", + "bbox": [ + 511, + 191, + 903, + 252 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "The base model and upgraded model remain frozen. The loss function is exclusively used to update the A4A module, which consists of the network with learnable features and the PTA for fine-tuning. To optimize model training, given the varying numbers of parameters in each trainable module, we adopt an asynchronous training strategy. Specifically, for training the projection in $S_{co}$ and alignment in $S_{up}$ , a learning rate of $1 \times 10^{-5}$ is employed to avoid overfitting, while a learning rate of $1 \times 10^{-4}$ is applied to the other components. If the pretrained adapter is fine-tuned, a smaller learning rate of $1 \times 10^{-6}$ is used to effectively retain PTA's conditional encoding capabilities.", + "bbox": [ + 511, + 252, + 906, + 434 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4. Experiments", + "text_level": 1, + "bbox": [ + 513, + 446, + 645, + 463 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4.1. Experimental Settings", + "text_level": 1, + "bbox": [ + 511, + 470, + 720, + 487 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Datasets. In our study, we utilize the CelebAMask-HQ dataset [15], which consists of approximately 20,000 high-quality facial images, each with a resolution of $1024 \times 1024$ pixels, and the OpenImages dataset [14], which offers a diverse collection of images featuring a wide range of clearly identifiable objects. We employ the BLIP-2 model [17] to generate captions for the aforementioned datasets, which serve as text prompts paired with the images. For validation, we randomly select 100 images from CelebAMask-HQ, ensuring they are distinct from those in the training set, and generate four images for each reference image.", + "bbox": [ + 511, + 492, + 906, + 657 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Implementation Details. We implement A4A using SD1.5 [26] as the base model, and SDXL [23] and Pixart-Alpha (XL) [4] as the upgraded models. Both SDXL and Pixart-Alpha are significantly larger text-to-image models compared to SD1.5, and we use them as upgraded models for the U-Net and transformer architectures, respectively. In this paper, we utilize the IP-Adapter [40] as our pretrained adapter. The IP-Adapter series has recently demonstrated remarkable capabilities, garnering significant interest for its ability to enhance personalization in generative models. 
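Stepping back to the optimization recipe of Sec. 3.4, the asynchronous learning rates can be expressed as optimizer parameter groups. The modules below are placeholders for the A4A components, and the choice of AdamW is our assumption; only the learning rates come from the text.

```python
import torch
import torch.nn as nn

# Placeholder modules standing in for the A4A components of Sec. 3.4.
projection, alignment = nn.Linear(8, 8), nn.Linear(8, 8)  # S_co / S_up ends
mapping_net = nn.Linear(8, 8)   # learnable K_bar / V_bar machinery
pta = nn.Linear(8, 8)           # optional PTA fine-tuning

optimizer = torch.optim.AdamW([
    {"params": projection.parameters(), "lr": 1e-5},   # avoid overfitting
    {"params": alignment.parameters(),  "lr": 1e-5},
    {"params": mapping_net.parameters(), "lr": 1e-4},  # other components
    {"params": pta.parameters(),        "lr": 1e-6},   # retain PTA encoding
])
```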
Its performance across a variety of tasks highlights its increasing potential to advance text-to-image generation. Due to variations in GPU types across compared methods and the absence of comprehensive GPU hour reports, we use Sample Count (SC), which represents the number of samples processed up to a specific time point, as a metric", + "bbox": [ + 511, + 659, + 908, + 901 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "18480", + "bbox": [ + 480, + 944, + 519, + 955 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/66b93cd27e2503d438b30bfa123b5769814d45f9d50523f09cd82bea5f899aa3.jpg", + "image_caption": [ + "Figure 2. The visualization of personalized human generation using SDXL with U-Net architecture as the upgraded model. A4A(ours) compares with the previous work X-Adapter and pretrained adapter from the upgraded model (PTA-UM)." + ], + "image_footnote": [], + "bbox": [ + 91, + 85, + 903, + 359 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/9ce9673e3ffcd9fd32f55633093dce31a2d23a1288de851c5e4c5d47441de51a.jpg", + "image_caption": [ + "Figure 3. The visualization of personalized object generation using SDXL with U-Net architecture as the upgraded model. A4A(ours) compares with the previous work X-Adapter and pretrained adapter from the upgraded model (PTA-UM)." + ], + "image_footnote": [], + "bbox": [ + 94, + 407, + 908, + 683 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "for evaluating training costs and efficiency. For example, the officially published PTA has an SC of 64M (i.e., the PTA is trained with 8 V100 GPUs for 1M steps with a batch size of 8 per GPU). The terms in the following charts are defined as: (1) A4A (ours): transferring the PTA from the base model to the upgraded model using our method A4A; (2) X-Adapter: transferring the PTA using the published X-Adapter [25] checkpoint; (3) PTA-UM: the officially published pretrained adapter from the upgraded model.", + "bbox": [ + 89, + 747, + 482, + 883 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Evaluation Metrics. To verify the effectiveness and effi", + "bbox": [ + 89, + 885, + 482, + 900 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "ciency of our method, we evaluate A4A on two tasks: personalized human generation (ID customization) and personalized object generation (IP customization). For ID customization, we utilize IDentity Alignment scores (IDA) to measure the similarity between generated and reference facial features, alongside the OMG method [12]. Specifically, we employ the Antelopev2 model from the InsightFace library [8] to detect faces and extract facial embeddings from both reference and generated images. 
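As a sketch of how such embedding-similarity scores are typically computed (our reading, not the authors' released evaluation code), random tensors below stand in for the Antelopev2 face embeddings used for IDA; the same pattern applies to CLIP image embeddings for the CLIP-score.

```python
import torch
import torch.nn.functional as F

# Mean cosine similarity between a reference embedding and the embeddings of
# the generated images; embeddings here are random stand-ins.
emb_ref = F.normalize(torch.randn(1, 512), dim=-1)   # reference image
emb_gen = F.normalize(torch.randn(4, 512), dim=-1)   # 4 generated images
score = (emb_gen @ emb_ref.T).mean().item()          # IDA-style score
```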
For IP customization, we extract image embeddings using pretrained CLIP", + "bbox": [ + 511, + 747, + 906, + 898 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "18481", + "bbox": [ + 480, + 944, + 517, + 955 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "A4A(ours)", + "text_level": 1, + "bbox": [ + 217, + 90, + 290, + 104 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Photo of a man/woman standing in a garden, dressed in casual clothing, dressed in casual clothing", + "bbox": [ + 106, + 114, + 403, + 137 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/fe64143bd4cf19a61c1bf3199a22b1d70b415ba7436a7f604f6aa56a82aa538b.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 91, + 143, + 169, + 208 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/7cbebc0524ffe155f0ad73ec4d8ecaf31faee6c9492bb38ad5ee3223746d5a13.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 93, + 208, + 169, + 268 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/47ac2fc31e6a7d274633709f46eda1e61a85864123096e5ec72d3fa992b45b5b.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 93, + 268, + 169, + 330 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/f3980b481ad7f811b9edbfb4db3d972ef35f68e0d8074f2b15b4ad6cbd0330fb.jpg", + "image_caption": [ + "Figure 4. The visualization of transferring the IP-Adapter from SD1.5 (U-Net architecture) to Pixart-Alpha (Transformer architecture). The middle line, framed in orange, serves as the reference for comparison. The left side shows the A4A effect, which closely resembles the reference, while the right side (IP-Adapter*) displays the results of training IP-Adapter from scratch." + ], + "image_footnote": [], + "bbox": [ + 93, + 330, + 169, + 393 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/8f7780d3c305c94afee09681906fd201028bf1dd539e67ff149a9b2f836f12cd.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 171, + 145, + 254, + 208 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/a1041bd4031e5f914ab1d88e8219f1e2c2444a34b4fcaa721a492d8addd44746.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 171, + 208, + 253, + 268 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/83158e4f5833c57823086f319547980fdf46aba07f0b41f5b6e9a324039fc58c.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 171, + 268, + 253, + 330 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/337e5402090e1bbfd29065a07a30ca3b9b53cb40e8f89e1bbd30d525117e5a58.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 171, + 330, + 253, + 393 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/7c531b3162fde19d76daede2e22a3e8be96995b3a1790e2b383e8ef6e3c15251.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 254, + 145, + 336, + 208 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/16528312b9c32bd01ab2a8f9b83c5bed68a88e60f9c21df5e58aaa5feb0d3df2.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 256, + 208, + 336, + 268 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/4b72913cfc36aad22e8606f661f048f5b333a606e398c6ee29790df710b5a8b4.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 256, + 268, + 336, + 330 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": 
"images/af99a3e34f50e28b99e049bc0a3f27ba947b4b200d5f87b2ef7e203bb92bf519.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 256, + 330, + 336, + 393 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/57ae4f7e95072d572b606bb37c9f5f6a14a0dc165cb0830d7edfeab86e59d2a4.jpg", + "image_caption": [ + "Text Prompt" + ], + "image_footnote": [], + "bbox": [ + 338, + 145, + 419, + 208 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/61d26fa82a7bb677a72b24b19ad76424bc57ba2a107e45ecbfd3bcfaafcb8239.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 338, + 208, + 419, + 268 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/848236a53755e1139544513f3d308a4305297bcda1f6f10336ca8cbcd9ca02f7.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 338, + 268, + 419, + 330 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/ae32af3ce002090ac247c4c2d85168234bc82d85ca77bd6053d2fbd1a25003e5.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 338, + 330, + 419, + 393 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/c7bc67cba504b80415a83eee1f1a51e7e2f5d6ef8950e1d978c7e39bfc1f22b1.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 457, + 143, + 540, + 208 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/5bc2b7e4ea6ee70730c32a70883a7d7d20f799edb14bd29fd180c609dd7bf10c.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 457, + 208, + 540, + 268 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/5fb976c3468f42757a85f83aa4eca62f05cc02e23375216a8f167825dda78ce2.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 457, + 268, + 540, + 330 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/a2283328ee278da6f9fba1185c9d3a8d7829de61d95a5a08045866e2e02d252e.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 457, + 330, + 540, + 393 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "IP-Adapter\\*", + "text_level": 1, + "bbox": [ + 692, + 90, + 779, + 106 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Photo of a man/woman standing in a garden, dressed in casual clothing, dressed in casual clothing", + "bbox": [ + 586, + 114, + 883, + 137 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/962e45ecfd2217bb41ba0a074ccdef3c37c8c0136df0fae9dcd51d9bad7d0de9.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 571, + 143, + 651, + 207 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/959c47cf92a3b815ce1eb45301729c6de4eedf17943a5249b17aaf16d2d7bb73.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 571, + 207, + 651, + 268 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/fdba5f17f5d669645cd571225b302c820053f927e6bf0ff0efe5b08c90a39b6d.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 571, + 268, + 651, + 330 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/6e44ce8e6807b350b95d84a69ae27961cdec95add122ba0a5be6afb69f5efa02.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 571, + 330, + 651, + 392 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/3e7d6088f62cdd2d630e57fd4eda834eac73b2b2fc292729dab221070bc55960.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 653, + 145, + 733, + 207 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": 
"images/a25eb9338a194802f6bf62585c6c47b963d3c8212afc35a53caf27593f4eabd4.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 653, + 207, + 733, + 268 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/53512d3eae80263c154286529562d1c7bfc6e7d3016850b84a731822613a82cc.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 653, + 268, + 733, + 330 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/79f36e8373c041b56e98430c85e0e398b5791190220a2d660decdd554635447d.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 653, + 330, + 733, + 392 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/20bcbc720f725f45c4fdc3a1f29bd471a6bbcbcd51a9460863ffd252192a1dac.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 736, + 145, + 818, + 207 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/128dbbaf9f45590515898394dc2828d87dbc506efaa05e4fa0279c7ad9cc5c68.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 736, + 207, + 818, + 268 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/37f8a1f3a2e0e035f59c33aec3b06ab7a3c6a4afeb366f6d6b7038b88bb12ca3.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 736, + 268, + 818, + 330 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/079969bf3c7337de7a2f70e422babc77e6788101ecb5c08954fd16130ca45eea.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 736, + 330, + 818, + 392 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/55081e8dd368ae1851a919b8f7d61a0dbf81c0a74aa4f2d73f79c8620cdd262d.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 821, + 145, + 901, + 207 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/e51493f7821e830e5deca317a74b0dae8367f4398fe4c3ed78b7114f2b73bd39.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 821, + 207, + 901, + 268 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/21b4bfc819dbbaa88e276dcbdde80a291423726dc2c681a93e83bcdc22838a8a.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 821, + 268, + 901, + 330 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/4a1e99966750ed40efee9a98d7b465d20557986942b83b8fcb7631d2cf81ac04.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 821, + 330, + 901, + 392 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "model [24] and report the embedding similarity between generated and reference images, referred to as the CLIP-score. In the graph, we use the previously defined SC as the horizontal axis to represent the training process.", + "bbox": [ + 89, + 474, + 482, + 536 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.2. Transferring to the U-Net model", + "text_level": 1, + "bbox": [ + 89, + 547, + 375, + 564 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "We use SDXL as the upgraded model and compare our results with the officially published PTA-UM [40] (represented by the green star in Fig. 5). To facilitate comparison, we extend a horizontal line from this point to represent the performance of PTA-UM, rather than the actual training curve. As illustrated in Fig. 5, transferring the PTA using A4A achieves an IDA comparable to that of the PTA-UM at approximately $\\mathrm{SC}0.5\\mathrm{M}^2$ . Furthermore, as shown in Fig. 
2, the lines \"A4A (ours)\" and \"PTA-UM\" demonstrate that A4A not only preserves the intellectual property of the characters but also maintains the editing capabilities of the upgraded model with minimal training cost.", + "bbox": [ + 89, + 571, + 482, + 752 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "For personalized object generation (IP customization), as shown in Fig. 6, transferring the PTA using A4A achieves a CLIP-score comparable to that of PTA-UM at approximately 2.5M SC, compared to 64M SC for PTA-UM. As illustrated in Fig. 3, the lines labeled \"A4A (ours)\" and \"PTA-UM\" demonstrate that control ability, as guided by the text prompt, is also preserved. When the attention-based adapter is transferred to the upgraded U-Net model, A4A ef", + "bbox": [ + 89, + 753, + 482, + 875 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/aa1bc97f37480ed25043a456cef3b14af810c985f912776542fd091739d160a3.jpg", + "image_caption": [ + "Figure 5. The graph shows the IDA of ID preservation when transferring to SDXL, compared to the pretrained adapter from SDXL (PTA-UM). The horizontal axis is in units of M." + ], + "image_footnote": [], + "bbox": [ + 522, + 477, + 898, + 589 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/c93ac3380b00ea9ed059918b0e1a147a89f8ee44b8013b744a536d2b44511e41.jpg", + "image_caption": [ + "Figure 6. The graph shows the CLIP-score of personalized object generation when transferring to SDXL, compared to PTA-UM. The horizontal axis is in units of M." + ], + "image_footnote": [], + "bbox": [ + 521, + 669, + 898, + 781 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "fectively retains and transfers the adapter's capabilities with minimal training cost. The data of the quantitative indica", + "bbox": [ + 511, + 869, + 906, + 901 + ], + "page_idx": 6 + }, + { + "type": "page_footnote", + "text": "$^{2}$ A4A is trained on 2 A100 GPUs with a batch size of 4 per GPU.", + "bbox": [ + 107, + 886, + 455, + 900 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "18482", + "bbox": [ + 480, + 944, + 517, + 955 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "tors corresponding to the line chart are shown in Tab. 1.", + "bbox": [ + 89, + 90, + 457, + 107 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "4.3. Transferring to Transformer model", + "text_level": 1, + "bbox": [ + 89, + 127, + 401, + 142 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "We use Pixart-Alpha [4] with the transformer architecture as the upgraded model. Given the absence of a corresponding published version of the IP-Adapter for Pixart-Alpha, we train it from scratch using the CelebAMask-HQ dataset as our baseline, represented by the orange line labeled \"IP-Adapter*\" in Fig. 7 and Fig. 4. As illustrated in Fig. 7, employing A4A to transfer pretrained adapters to transformer models offers a significant advantage over training adapters directly on the transformer models. It is worth noting that our work is the first to transfer the adapter from the U-Net model to the transformer model, achieving strong results. Fig. 4 presents a visualization comparing our method with training the IP-Adapter from scratch, both at 30k steps with a batch size of 8. 
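For clarity, a worked example of the Sample Count metric defined in Sec. 4.1; the single-GPU assumption for the 30k-step comparison is ours, since the GPU count is not stated there.

```python
# Sample Count (SC) per Sec. 4.1: samples processed = GPUs * steps * batch/GPU.
def sample_count(gpus: int, steps: int, batch_per_gpu: int) -> int:
    return gpus * steps * batch_per_gpu

assert sample_count(8, 1_000_000, 8) == 64_000_000  # published PTA: SC 64M
assert sample_count(2, 62_500, 4) == 500_000        # footnote setup hits SC 0.5M
assert sample_count(1, 30_000, 8) == 240_000        # Fig. 4 runs (GPU count assumed 1)
```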
The images generated by A4A show significant facial similarity to the reference image, while the images on the right do not yield comparable results.", + "bbox": [ + 88, + 152, + 485, + 396 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/576d606fe6975be35450003a2984ee536f2e2db225ca61a8629aba5825bef934.jpg", + "image_caption": [ + "Figure 7. The IDA of transferring the pretrained IP-Adapter from SD1.5 to Pixart-Alpha using A4A (red line with dots), compared to training the IP-Adapter from scratch (orange line with stars). The horizontal axis is in units of K. Best viewed in color." + ], + "image_footnote": [], + "bbox": [ + 114, + 419, + 454, + 530 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "4.4. Ablation Study", + "text_level": 1, + "bbox": [ + 89, + 643, + 243, + 660 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "In the previous section, we adopted the A4A paradigm, which includes training our modules, namely Coupling Space Projection and Upgraded Space Mapping, along with fine-tuning the PTA. To demonstrate that fine-tuning is not the core driving force of our method, we conducted the following experiment. As shown in Fig. 8, the two curves are very close, with the fine-tuning paradigm showing only a slight improvement over the non-fine-tuning version.", + "bbox": [ + 89, + 670, + 482, + 791 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "We present experiments on the hyperparameter settings of the learning rates for each component in Sec. 7. It is worth mentioning that, since Projection and Alignment have similar numbers of parameters, we group them together. The ablation study of the two core modules is presented in Sec. 8, which demonstrates that our design achieves a satisfactory transfer effect with an efficient structure.", + "bbox": [ + 89, + 795, + 483, + 902 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/973239fcf626e413ae96724f8697f9c406221073a8f100f655bb295998e01bc3.jpg", + "image_caption": [ + "Figure 8. The yellow line labeled \"A4A(ours w/o fine-tuning)\" represents training A4A without fine-tuning the PTA, while the red line represents the full A4A approach with fine-tuning the PTA." + ], + "image_footnote": [], + "bbox": [ + 537, + 93, + 870, + 204 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/a907c943b41c74b3690bbf147ae7e908690fd6d25bed4ff2bbd41d252b8d490b.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
PTA: IP-AdapterIDA↑CLIP-score↑
X-Adapter0.0620.7894
PTA-UM0.45310.9124
A4A(ours)0.51270.9154
", + "bbox": [ + 570, + 277, + 849, + 351 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Table 1. The evaluation metrics for IP customization (CLIP-score) and ID customization (IDA) are presented. Using SDXL as the upgraded model and IP-Adapter as the pretrained adapter, A4A (ours) is compared with the transfer method X-Adapter and the pretrained adapter from the upgraded model (PTA-UM).", + "bbox": [ + 511, + 361, + 906, + 431 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "4.5. Comparison with X-Adapter", + "text_level": 1, + "bbox": [ + 511, + 458, + 769, + 474 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "The previous work X-Adapter [25] is designed specifically for transferring adapters from U-Net models. To demonstrate the effectiveness of A4A, we also compare it with X-Adapter. As shown in Tab. 1, our method achieves better results in generation using the transferred adapter for both IP and ID customization. The visualizations in Figure 2 and Fig. 3, particularly the row labeled \"X-Adapter\", further substantiate this when compared to the adjacent rows. It is also worth noting that our method requires only the adapter for training and inference, without the need for denoising using the base model as in X-Adapter, which makes it more efficient.", + "bbox": [ + 511, + 481, + 908, + 661 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "5. Conclusion", + "text_level": 1, + "bbox": [ + 511, + 678, + 633, + 694 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "We propose A4A (Adapter for Adapter), a novel framework designed to address the challenges of transferring pretrained adapters across rapidly evolving model architectures. By employing an all-for-all mapping approach, A4A seamlessly transfers attention-based adapters from U-Net to transformer models without the need for extensive retraining. The framework's two key components, Coupling Space Projection and Upgraded Space Mapping, enable effective bridging of adapter features with upgraded model structures. Our experimental results demonstrate that A4A preserves both the generative power of upgraded models and the controllability of the original adapters. This work offers a scalable solution for cross-architecture adapter transfer.", + "bbox": [ + 511, + 704, + 908, + 902 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "18483", + "bbox": [ + 480, + 944, + 517, + 955 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "6. Acknowledgment", + "text_level": 1, + "bbox": [ + 89, + 90, + 263, + 107 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "This research is supported by Artificial Intelligence National Science and Technology Major Project 2023ZD0121200, and National Natural Science Foundation of China under Grant 62222212 and 623B2094.", + "bbox": [ + 89, + 113, + 485, + 170 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 91, + 195, + 187, + 210 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[1] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023. 2", + "[2] Yogesh Balaji, Seungjun Nah, Xun Huang, Arash Vahdat, Jiaming Song, Qinsheng Zhang, Karsten Kreis, Miika Aittala, Timo Aila, Samuli Laine, et al. 
ediff-i: Text-to-image diffusion models with an ensemble of expert denoisers. arXiv preprint arXiv:2211.01324, 2022. 3", + "[3] Marco Bellagente, Manuel Brack, Hannah Teufel, Felix Friedrich, Björn Deiseroth, Constantin Eichenberg, Andrew M Dai, Robert Baldock, Souradeep Nanda, Koen Oostermeijer, et al. Multifusion: Fusing pre-trained models for multi-lingual, multi-modal image generation. Advances in Neural Information Processing Systems, 36:59502-59521, 2023. 3", + "[4] Junsong Chen, Jincheng Yu, Chongjian Ge, Lewei Yao, Enze Xie, Yue Wu, Zhongdao Wang, James Kwok, Ping Luo, Huchuan Lu, et al. Pixart-alpha: Fast training of diffusion transformer for photorealistic text-to-image synthesis. arXiv preprint arXiv:2310.00426, 2023. 2, 3, 5, 8", + "[5] Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. Advances in neural information processing systems, 34:8780-8794, 2021. 1", + "[6] Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas Müller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, et al. Scaling rectified flow transformers for high-resolution image synthesis. In Forty-first International Conference on Machine Learning, 2024. 2, 3", + "[7] Rinon Gal, Moab Arar, Yuval Atzmon, Amit H Bermano, Gal Chechik, and Daniel Cohen-Or. Encoder-based domain tuning for fast personalization of text-to-image models. ACM Transactions on Graphics (TOG), 42(4):1-13, 2023. 3", + "[8] Jia Guo, Jiankang Deng, Xiang An, Jack Yu, and Baris Gecer. Insightface: 2d and 3d face analysis project, 2019. 6", + "[9] Thomas A Halgren, Robert B Murphy, Richard A Friesner, Hege S Beard, Leah L Frye, W Thomas Pollard, and Jay L Banks. Glide: a new approach for rapid, accurate docking and scoring. 2. enrichment factors in database screening. Journal of medicinal chemistry, 47(7):1750-1759, 2004. 2, 3", + "[10] Amir Hertz, Andrey Voynov, Shlomi Fruchter, and Daniel Cohen-Or. Style aligned image generation via shared attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4775–4785, 2024. 3" + ], + "bbox": [ + 93, + 220, + 483, + 898 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[11] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in neural information processing systems, 33:6840-6851, 2020. 1, 2", + "[12] Zhe Kong, Yong Zhang, Tianyu Yang, Tao Wang, Kaihao Zhang, Bizhu Wu, Guanying Chen, Wei Liu, and Wenhan Luo. Omg: Occlusion-friendly personalized multi-concept generation in diffusion models. arXiv preprint arXiv:2403.10983, 2024.6", + "[13] Nupur Kumari, Bingliang Zhang, Richard Zhang, Eli Shechtman, and Jun-Yan Zhu. Multi-concept customization of text-to-image diffusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1931-1941, 2023. 3", + "[14] Alina Kuznetsova, Hassan Rom, Neil Alldrin, Jasper Uijlings, Ivan Krasin, Jordi Pont-Tuset, Shahab Kamali, Stefan Popov, Matteo Malloci, Alexander Kolesnikov, et al. The open images dataset v4: Unified image classification, object detection, and visual relationship detection at scale. International Journal of Computer Vision, 128(7):1956-1981, 2020. 5", + "[15] Cheng-Han Lee, Ziwei Liu, Lingyun Wu, and Ping Luo. Maskgan: Towards diverse and interactive facial image manipulation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 5", + "[16] Yuseung Lee and Minhyuk Sung. 
Reground: Improving textual and spatial grounding at no cost. arXiv preprint arXiv:2403.13589, 2024. 3", + "[17] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In International conference on machine learning, pages 19730–19742. PMLR, 2023. 4, 5", + "[18] Yuheng Li, Haotian Liu, Qingyang Wu, Fangzhou Mu, Jianwei Yang, Jianfeng Gao, Chunyuan Li, and Yong Jae Lee. Gligen: Open-set grounded text-to-image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22511-22521, 2023. 3", + "[19] Han Lin, Jaemin Cho, Abhay Zala, and Mohit Bansal. Ctrl-adapter: An efficient and versatile framework for adapting diverse controls to any diffusion model. arXiv preprint arXiv:2404.09967, 2024. 1, 3", + "[20] Yixin Liu, Kai Zhang, Yuan Li, Zhiling Yan, Chujie Gao, Ruoxi Chen, Zhengqing Yuan, Yue Huang, Hanchi Sun, Jianfeng Gao, et al. Sora: A review on background, technology, limitations, and opportunities of large vision models. arXiv preprint arXiv:2402.17177, 2024. 2", + "[21] Xichen Pan, Li Dong, Shaohan Huang, Zhiliang Peng, Wenhu Chen, and Furu Wei. Kosmos-g: Generating images in context with multimodal large language models. arXiv preprint arXiv:2310.02992, 2023. 3", + "[22] William Peebles and Saining Xie. Scalable diffusion models with transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4195-4205, 2023. 1, 2, 3", + "[23] Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. Sdxl: Improving latent diffusion mod" + ], + "bbox": [ + 516, + 92, + 906, + 900 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "18484", + "bbox": [ + 480, + 944, + 519, + 955 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "els for high-resolution image synthesis. arXiv preprint arXiv:2307.01952, 2023. 1, 2, 5", + "[24] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748-8763. PMLR, 2021. 7", + "[25] Lingmin Ran, Xiaodong Cun, Jia-Wei Liu, Rui Zhao, Song Zijie, Xintao Wang, Jussi Keppo, and Mike Zheng Shou. X-adapter: Adding universal compatibility of plugins for upgraded diffusion model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8775-8784, 2024. 1, 3, 6, 8", + "[26] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684-10695, 2022. 1, 2, 5", + "[27] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In Medical image computing and computer-assisted intervention-MICCAI 2015: 18th international conference, Munich, Germany, October 5-9, 2015, proceedings, part III 18, pages 234-241. Springer, 2015. 1, 2", + "[28] Chitwan Sahara, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al. Photorealistic text-to-image diffusion models with deep language understanding. 
Advances in neural information processing systems, 35:36479-36494, 2022. 2, 3", + "[29] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. arXiv preprint arXiv:2011.13456, 2020. 2", + "[30] Yoad Tewel, Rinon Gal, Gal Chechik, and Yuval Atzmon. Key-locked rank one editing for text-to-image personalization. In ACM SIGGRAPH 2023 Conference Proceedings, pages 1-11, 2023. 3", + "[31] Aaron Van Den Oord, Oriol Vinyals, et al. Neural discrete representation learning. Advances in neural information processing systems, 30, 2017. 2", + "[32] A Vaswani. Attention is all you need. Advances in Neural Information Processing Systems, 2017. 2", + "[33] Haofan Wang, Matteo Spinelli, Qixun Wang, Xu Bai, Zekui Qin, and Anthony Chen. Instantstyle: Free lunch towards style-preserving in text-to-image generation. arXiv preprint arXiv:2404.02733, 2024. 3", + "[34] Qixun Wang, Xu Bai, Haofan Wang, Zekui Qin, Anthony Chen, Huaxia Li, Xu Tang, and Yao Hu. Instantid: Zero-shot identity-preserving generation in seconds. arXiv preprint arXiv:2401.07519, 2024. 3", + "[35] Xudong Wang, Trevor Darrell, Sai Saketh Rambhatla, Rohit Girdhar, and Ishan Misra. Instancediffusion: Instance-level control for image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6232-6242, 2024. 3" + ], + "bbox": [ + 91, + 90, + 483, + 900 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[36] Yuxiang Wei, Yabo Zhang, Zhilong Ji, Jinfeng Bai, Lei Zhang, and Wangmeng Zuo. Elite: Encoding visual concepts into textual embeddings for customized text-to-image generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 15943-15953, 2023. 3", + "[37] Zhichao Wei, Qingkun Su, Long Qin, and Weizhi Wang. Mm-diff: High-fidelity image personalization via multi-modal condition integration. arXiv preprint arXiv:2403.15059, 2024. 3", + "[38] Yinwei Wu, Xianpan Zhou, Bing Ma, Xuefeng Su, Kai Ma, and Xinchao Wang. Ifadapter: Instance feature control for grounded text-to-image generation. arXiv preprint arXiv:2409.08240, 2024. 3", + "[39] Peng Xing, Haofan Wang, Yanpeng Sun, Qixun Wang, Xu Bai, Hao Ai, Renyuan Huang, and Zechao Li. Csgo: Content-style composition in text-to-image generation. arXiv preprint arXiv:2408.16766, 2024. 3", + "[40] Hu Ye, Jun Zhang, Sibo Liu, Xiao Han, and Wei Yang. Ip-adapter: Text compatible image prompt adapter for text-to-image diffusion models. arXiv preprint arXiv:2308.06721, 2023. 3, 4, 5, 7, 1", + "[41] Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3836-3847, 2023. 1, 3", + "[42] Yuxuan Zhang, Yiren Song, Jiaming Liu, Rui Wang, Jinpeng Yu, Hao Tang, Huaxia Li, Xu Tang, Yao Hu, Han Pan, et al. Ssr-encoder: Encoding selective subject representation for subject-driven generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8069–8078, 2024. 3", + "[43] Dewei Zhou, You Li, Fan Ma, Xiaoting Zhang, and Yi Yang. Mige: Multi-instance generation controller for text-to-image synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6818-6828, 2024. 3", + "[44] Zhengguang Zhou, Jing Li, Huaxia Li, Nemo Chen, and Xu Tang. 
Storymaker: Towards holistic consistent characters in text-to-image generation. arXiv preprint arXiv:2409.12576, 2024.3" + ], + "bbox": [ + 516, + 90, + 905, + 667 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "18485", + "bbox": [ + 480, + 944, + 517, + 955 + ], + "page_idx": 9 + } +] \ No newline at end of file diff --git a/2025/A4A_ Adapter for Adapter Transfer via All-for-All Mapping for Cross-Architecture Models/66a56aa1-a030-49e2-8d93-323c4a070c6f_model.json b/2025/A4A_ Adapter for Adapter Transfer via All-for-All Mapping for Cross-Architecture Models/66a56aa1-a030-49e2-8d93-323c4a070c6f_model.json new file mode 100644 index 0000000000000000000000000000000000000000..4efa754e8d2019a9c597d3abffaf483a2aa891f9 --- /dev/null +++ b/2025/A4A_ Adapter for Adapter Transfer via All-for-All Mapping for Cross-Architecture Models/66a56aa1-a030-49e2-8d93-323c4a070c6f_model.json @@ -0,0 +1,2387 @@ +[ + [ + { + "type": "header", + "bbox": [ + 0.107, + 0.003, + 0.182, + 0.043 + ], + "angle": 0, + "content": "CVF" + }, + { + "type": "header", + "bbox": [ + 0.237, + 0.001, + 0.812, + 0.047 + ], + "angle": 0, + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." + }, + { + "type": "title", + "bbox": [ + 0.179, + 0.131, + 0.82, + 0.175 + ], + "angle": 0, + "content": "A4A: Adapter for Adapter Transfer via All-for-All Mapping for Cross-Architecture Models" + }, + { + "type": "text", + "bbox": [ + 0.245, + 0.204, + 0.744, + 0.24 + ], + "angle": 0, + "content": "Keyu Tu, Mengqi Huang, Zhuowei Chen, Zhendong Mao* \nUniversity of Science and Technology of China" + }, + { + "type": "text", + "bbox": [ + 0.22, + 0.242, + 0.773, + 0.258 + ], + "angle": 0, + "content": "{kytu,huangmq,chenzw01}@mail.ustc.edu.cn,{zdmao}@ustc.edu.cn" + }, + { + "type": "title", + "bbox": [ + 0.248, + 0.292, + 0.327, + 0.308 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.324, + 0.485, + 0.716 + ], + "angle": 0, + "content": "Large-scale text-to-image models evolve rapidly in size and architecture. The existing adapters struggle to keep pace with these models, requiring extensive retraining. This paper proposes a novel adapter transfer framework, A4A (Adapter for Adapter), which uses an all-for-all mapping approach to seamlessly transfer attention-based adapters across different model architectures (e.g., U-Net to transformer). The framework consists of Coupling Space Projection and Upgraded Space Mapping. During Coupling Space Projection, all attention features of the pretrained adapter are aggregated to fully capture the coupling relationship before being projected into a unified space. The unified space maintains coupling features in a consistent dimension, effectively and efficiently addressing feature scale discrepancies arising from the base model's architecture. In the Upgraded Space Mapping Module, randomly initialized learnable features are introduced to connect the unified and upgraded spaces by integrating reference features via the attention mechanism. The learned features are adaptively injected into the upgrade model through the Alignment module, which bridges the discrepancies between the models using the all-for-all mapping. 
Experimental results on personalized image generation tasks demonstrate that A4A outperforms previous methods in transferring adapters while being the first to achieve adapter transfer across model architectures." + }, + { + "type": "title", + "bbox": [ + 0.092, + 0.746, + 0.222, + 0.763 + ], + "angle": 0, + "content": "1. Introduction" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.772, + 0.483, + 0.879 + ], + "angle": 0, + "content": "Recent advancements in large-scale text-to-image diffusion models [5, 11, 23, 26] have significantly improved their ability to generate high-quality, realistic images based on user-friendly textual prompts. Building on these generative capabilities, numerous adapters have been developed upon these pretrained models to further endow them with new control conditions, such as pose and human identity control," + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.293, + 0.908, + 0.534 + ], + "angle": 0, + "content": "thereby fostering the growth of downstream real-world applications like personalized image creation. As pretrained models rapidly evolve, with increasing parameters (e.g., from SD1.5's 860M to SDXL's 2600M) and developing architectures (e.g., from the convolution U-Net [27] to the transformer [22]), the original adapters built on base models require substantial resources for retraining and significant effort for redesign to accommodate upgraded models. This leads to a lag in adapter development compared to the progression of upgraded models1. Therefore, the adapter transfer task, i.e., effectively and efficiently transferring existing adapters from base models to upgraded models to leverage the strong control capabilities of the original well-developed adapters and the superior generative abilities of the upgraded models, has become an increasingly important and urgent requirement in both academia and industry." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.537, + 0.909, + 0.777 + ], + "angle": 0, + "content": "Given the substantial potential benefits of adapter transfer, several prior studies have been conducted in this field. For instance, Ctrl-Adapter [19] has been proposed to transfer the ControlNet [41] architecture by fusing the output of the zero-convolution from the pretrained ControlNet into the corresponding layer in the upgraded model. Concurrently, X-Adapter [25] has been explored for mapping the latent from the base model's decoder block and adding them to the corresponding location within the upgraded model's decoder. In summary, existing adapter transfer methods primarily focus on addition-based adapters (i.e., the control conditions are injected into the pretrained models by simple addition, typically, ControlNet), through a layer-by-layer mapping, i.e., the output of each layer in the base model's adapters are mapped to the semantically equivalent layer in the upgraded model." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.779, + 0.908, + 0.827 + ], + "angle": 0, + "content": "However, in this study, we argue that the existing layer-by-layer mapping fails to fully exploit the coupling between original adapters and base models to effectively bridge the" + }, + { + "type": "page_footnote", + "bbox": [ + 0.512, + 0.839, + 0.908, + 0.903 + ], + "angle": 0, + "content": "1In this study, we define \"base models\" as the original pretrained text-to-image models with well-developed adapters (e.g., SD1.5), and \"upgraded models\" as those pretrained models that require adapter transfer (e.g., SDXL). 
Upgraded models typically have more parameters and advanced architectural designs compared to the base models." + }, + { + "type": "page_footnote", + "bbox": [ + 0.109, + 0.887, + 0.348, + 0.901 + ], + "angle": 0, + "content": "*Zhendong Mao is the corresponding author." + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.52, + 0.957 + ], + "angle": 0, + "content": "18476" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.094, + 0.092, + 0.482, + 0.497 + ], + "angle": 0, + "content": "discrepancies between base models and upgraded models. Here, coupling refers to the already well-trained compatibility between the original adapters and base models, while differences refer to the architectural discrepancies between base models and upgraded models. The reason behind this is that the original adapters and base models function as an integrated whole, and the layer-by-layer mapping disrupts this holistic consistency by isolating the output of each layer. Consequently, during the subsequent mapping to the upgraded models, this overall consistency cannot be effectively utilized. As a result, existing adapter transfer methods suffer from limited transfer scopes. On the one hand, they can only transfer addition-based adapters (typically, ControlNet) but fail to generalize to the attention-based adapters, which are more widely used. This is technically more challenging because attention-based adapters (i.e. the control conditions are injected into pretrained models by attention mechanisms) involve more complex interactions and dependencies across different layers of the model. On the other hand, these methods are limited to transferring between similar pretrained model architectures (e.g., from a U-Net model to another U-Net model) but fail when transferring from a U-Net model to a transformer model. This limitation is particularly critical, as the latest state-of-the-art pretrained text-to-image models [1, 6] are predominantly transformer-based, while the current most mature adapters remain developed on the U-Net architecture." + }, + { + "type": "text", + "bbox": [ + 0.094, + 0.505, + 0.482, + 0.88 + ], + "angle": 0, + "content": "To address this challenge, we propose a novel adapter transfer framework, A4A (Adapter for Adapter), which utilizes an innovative all-for-all mapping approach to seamlessly transfer the intrinsic coupling between all layers of the original adapters and base models to all layers of the upgraded models, thereby enabling the transfer of more difficult and widely used attention-based adapters and facilitating cross-architecture model transfers. Specifically, A4A achieves all-for-all mapping through the Coupling Space Projection and Upgraded Space Mapping. In the Coupling Space Projection phase, all attention features of the pretrained adapter are collected, capturing the complete coupling relationship between the adapter and the base model, and then projected into a unified space. The coupling relationship treats the features of all layers of the adapter as a unified whole, conveying a continuous representation of the new control conditions throughout the generation process, distinguishing it from isolated layer mappings. Upgraded space refers to the coupled feature space that corresponds to the upgraded model, where we randomly initialize learnable attention features to transfer the coupling relationship from the base model to the upgraded model. 
By integrating the reference features through the attention mechanism and aligning them with the upgraded architecture, the learnable features bridge the discrepancies between the models." + }, + { + "type": "text", + "bbox": [ + 0.115, + 0.887, + 0.411, + 0.899 + ], + "angle": 0, + "content": "The contributions of this work are as follows:" + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.092, + 0.905, + 0.15 + ], + "angle": 0, + "content": "1. To alleviate the limited transfer scopes, we introduce a novel all-for-all mapping approach that enables the transfer of attention-based adapters and facilitates cross-architecture model transfers." + }, + { + "type": "text", + "bbox": [ + 0.517, + 0.152, + 0.905, + 0.211 + ], + "angle": 0, + "content": "2. A4A projects the adapter's complete features into the unified coupling space and bridges it with the upgraded space by fusing these features with randomly initialized learnable features through the attention mechanism." + }, + { + "type": "text", + "bbox": [ + 0.517, + 0.213, + 0.905, + 0.302 + ], + "angle": 0, + "content": "3. Experiments in various types of personalized image creation demonstrate that A4A is an effective attention-based adapter transfer approach for cross-architecture models, achieving better performance than the pretrained adapter from the upgraded model with minimal training costs." + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.092, + 0.905, + 0.302 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.517, + 0.32, + 0.65, + 0.335 + ], + "angle": 0, + "content": "2. Related work" + }, + { + "type": "title", + "bbox": [ + 0.517, + 0.345, + 0.735, + 0.359 + ], + "angle": 0, + "content": "2.1. Latent Diffusion Models" + }, + { + "type": "text", + "bbox": [ + 0.517, + 0.368, + 0.905, + 0.805 + ], + "angle": 0, + "content": "Recent diffusion-based text-to-image models [11, 23, 26, 29] have received wide acclaim for their outstanding image fidelity and diversity. Ho et al. [11] introduce the denoising diffusion process into generation models in the seminal work DDPM. Diffusion models learn the generation process through iterative denoising steps. Latent Diffusion Model (LDM) [26] proposed to perform the diffusion process in the latent space of a Variational Autoencoder [31]. Under the LDM architecture, two primary backbone models are employed: the convolutional U-Net [27] and the transformer [22]. While these models share a similar generation process, there are significant differences between the two models. The most notable example of U-Net models is the StableDiffusion [23, 26] series, excluding SDv3.0 [6], which consists of the symmetric encoder and decoder. The encoder is composed of multiple blocks of diverse dimensions, interconnected by down-sampling layers. Each block incorporates several attention layers [32] to fuse latents and conditions. Another series of models [4, 6, 9, 28] adopts DiT blocks [22] for denoising. The transformer model has been widely used in recent video generation tasks [20], where it has achieved state-of-the-art results. After patchification, the resulting features are progressively processed through a series of DiT blocks. In the DiT blocks, the features are scaled and shifted using AdaLN, maintaining the same dimension, which distinguishes them from U-Net-based models. Transformer-based models, with their flexible structure and impressive generative capabilities, have garnered significant attention due to their great potential." 
+ }, + { + "type": "title", + "bbox": [ + 0.517, + 0.818, + 0.895, + 0.833 + ], + "angle": 0, + "content": "2.2. Adapters for Text-to-Image Diffusion Models" + }, + { + "type": "text", + "bbox": [ + 0.517, + 0.841, + 0.905, + 0.9 + ], + "angle": 0, + "content": "Given the inefficiencies of fine-tuning large pretrained models, an alternative strategy is to utilize adapters, which introduce a limited number of trainable parameters while keeping the pretrained model frozen. Due to their flexibility and" + }, + { + "type": "page_number", + "bbox": [ + 0.482, + 0.946, + 0.518, + 0.956 + ], + "angle": 0, + "content": "18477" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.092, + 0.485, + 0.41 + ], + "angle": 0, + "content": "greater efficiency compared to fine-tuning, adapters have gained significant interest. By designing conditional modules, an adapter can introduce new control conditions, such as personalized characters, objects, layouts, and style information, to a pretrained text-to-image model. Downstream tasks for these adapters include personalized character generation (ID customization) [7, 34, 40, 42, 44], personalized object generation (IP customization) [2, 7, 13, 21, 30, 36, 37, 40, 42], attribute and layout control [3, 16, 18, 35, 38, 43], and stylization [10, 39]. As discussed in Sec. 1, these adapters can be categorized into two main types: attention-based and addition-based. In the addition-based adapter (typically, ControlNet [41]), the encoded new conditions are directly added to the output of the sub-blocks of the generation model. In contrast, the attention-based adapter processes the encoded conditions through attention layers, modifying the original attention values based on both the new conditions and the text prompts. Attention-based adapters [7, 10, 30, 34, 36-40, 42-44] have received widespread attention due to their efficient and precise condition control capabilities, dominating the field of adapters." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.418, + 0.295, + 0.433 + ], + "angle": 0, + "content": "2.3. Adapter Transferring" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.44, + 0.484, + 0.759 + ], + "angle": 0, + "content": "The rapid evolution and diverse architectures of text-to-image generation models place constraints on transferring the aforementioned adapters. Consequently, to accommodate new models, adapters often require retraining from scratch on these pretrained diffusion models. X-Adapter [25] designates models equipped with the well-trained adapter as base models, with the upgraded version referred to as the upgraded model. It establishes manual connections between decoder blocks of the same dimensions in both the base and upgraded models. Specifically, the decoder of SD1.5 and SDXL consists of blocks with dimensions 1280, 640, and 320. X-Adapter [25] maps the output of the base model's blocks to the corresponding blocks of identical dimensions in the upgraded model. As a result, X-Adapter is specifically suited for SD-series models and faces challenges when transferring to transformer-based models [4, 6, 9, 22, 28]. Ctrl-Adapter [19] aims to transfer the addition-based adapter, ControlNet [41], for video generation models or upgraded image generation models. It connects the output of the zero-convolutional layers of both models through mappers to transfer the ControlNet." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.77, + 0.182, + 0.785 + ], + "angle": 0, + "content": "3. 
Method" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.795, + 0.484, + 0.903 + ], + "angle": 0, + "content": "We propose a novel framework for transferring well-trained adapters from the base model to the upgraded model with architectural discrepancies, specifically U-Net model and transformer model. For the sake of brevity, we define PTA as the Pre-Trained Adapter. Specifically, pretrained refers to the version that has been officially published. Additionally, we denote \\( M_{base} \\) and \\( M_{up} \\) as the previously mentioned" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.092, + 0.907, + 0.213 + ], + "angle": 0, + "content": "base model and upgraded model. A4A first projects the extracted attention features into the unified coupling to maintain the coupling relationship between PTAs and \\( M_{base} \\). Then, the coupling space is mapped to the upgraded space, where learnable features integrate the reference coupling features with attention layers. These learned features are then adaptively aligned with \\( M_{up} \\) through the Alignment component." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.223, + 0.655, + 0.237 + ], + "angle": 0, + "content": "3.1. Preliminaries" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.244, + 0.907, + 0.336 + ], + "angle": 0, + "content": "Before presenting our method, we introduce the diffusion model with various backbones. The Latent Diffusion Model (LDM), which perform noise addition and denoising in the latent space \\(z\\) of the VAE encoder, is the most prominent open-source community for text-to-image generation. The objective of training LDMs is:" + }, + { + "type": "equation", + "bbox": [ + 0.525, + 0.347, + 0.907, + 0.37 + ], + "angle": 0, + "content": "\\[\n\\min _ {\\theta} \\mathcal {L} _ {L D M} = \\mathbb {E} _ {z, \\epsilon \\sim \\mathcal {N} (0, I), t} | | \\epsilon - \\epsilon_ {\\theta} (z _ {t}, t, E _ {t} (y _ {t})) | | _ {2} ^ {2}, \\tag {1}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.38, + 0.909, + 0.562 + ], + "angle": 0, + "content": "where \\( t \\) is uniformly sampled from the time steps \\( \\{1, \\dots, T\\} \\), \\( y_{t} \\) denotes the conditional text prompt, and \\( E_{t} \\) represents the text encoder. The parameterized denoising network denoted as \\( \\epsilon_{\\theta} \\), may take the form of either a U-Net model or a transformer model. The U-Net architecture consists of blocks with varying dimensions. Each U-Net block includes down-sampling or up-sampling layers along with attention layers. The transformer model primarily consists of multiple DiT blocks grounded on the transformer architecture. Since our method aims to transfer the widely used attention-based adapters, we define the cross-attention process and its associated signals as follows:" + }, + { + "type": "equation", + "bbox": [ + 0.532, + 0.573, + 0.907, + 0.591 + ], + "angle": 0, + "content": "\\[\n\\boldsymbol {q} = \\boldsymbol {W} ^ {q} \\cdot \\boldsymbol {i}, \\quad \\boldsymbol {k} = \\boldsymbol {W} ^ {k} \\cdot \\boldsymbol {c}, \\quad a n d \\quad \\boldsymbol {v} = \\boldsymbol {W} ^ {v} \\cdot \\boldsymbol {c} \\tag {2}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.603, + 0.906, + 0.664 + ], + "angle": 0, + "content": "where \\( i \\) denotes the latents of the image, and \\( c \\) signifies the embeddings of the condition, such as the text prompt in the original model. Additionally, \\( W \\) represents the weights for attention projection. 
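As a concrete reading of Eq. (2), the sketch below projects image latents to queries and condition embeddings to keys/values, then applies the scaled dot-product attention that Eq. (3) defines next; it assumes a single head, no batching, and weight matrices stored as (d, d_in).

```python
import math
import torch

def cross_attention(i: torch.Tensor, c: torch.Tensor,
                    W_q: torch.Tensor, W_k: torch.Tensor,
                    W_v: torch.Tensor) -> torch.Tensor:
    # i: image latents (N_i, d_img); c: condition embeddings (N_c, d_cond)
    q = i @ W_q.T                              # (N_i, d), Eq. (2)
    k = c @ W_k.T                              # (N_c, d)
    v = c @ W_v.T                              # (N_c, d)
    scores = q @ k.T / math.sqrt(k.shape[-1])  # scaled dot product, Eq. (3)
    return torch.softmax(scores, dim=-1) @ v   # (N_i, d)
```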
Attention is conducted as follows:" + }, + { + "type": "equation", + "bbox": [ + 0.57, + 0.674, + 0.907, + 0.711 + ], + "angle": 0, + "content": "\\[\n\\operatorname {A t t e n t i o n} (\\boldsymbol {q}, \\boldsymbol {k}, \\boldsymbol {v}) = \\operatorname {S o f t m a x} \\left(\\frac {\\boldsymbol {q} \\cdot \\boldsymbol {k} ^ {T}}{\\sqrt {d}}\\right) \\boldsymbol {v} \\tag {3}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.72, + 0.906, + 0.766 + ], + "angle": 0, + "content": "where \\(d\\) represents the dimensions of \\(\\pmb{k}\\) and \\(\\pmb{v}\\). Through cross-attention, the image latents and condition embeddings are comprehensively integrated." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.775, + 0.756, + 0.792 + ], + "angle": 0, + "content": "3.2. Coupling Space Projection" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.797, + 0.906, + 0.873 + ], + "angle": 0, + "content": "Condition Encoder of PTA. For new control conditions \\( y_{n} \\) beyond the original text prompt, adapters typically incorporate a condition encoder as illustrated in Fig. 1. We denote \\( E_{n} \\) to distinguish it from the original text encoder \\( E_{t} \\) of the pretrained large-scale T2I models:" + }, + { + "type": "equation", + "bbox": [ + 0.66, + 0.886, + 0.907, + 0.902 + ], + "angle": 0, + "content": "\\[\n\\boldsymbol {c} _ {n} = E _ {n} \\left(y _ {n}\\right), \\tag {4}\n\\]" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.52, + 0.957 + ], + "angle": 0, + "content": "18478" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.092, + 0.087, + 0.905, + 0.282 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.292, + 0.907, + 0.377 + ], + "angle": 0, + "content": "Figure 1. The illustration of the Adapter for Adapter (A4A) framework. Both the base model and the upgraded model are kept frozen. (a) Coupling Space Projection: The pretrained Adapter, consisting of the condition encoder and attention layers (highlighted in pink), is loaded. The adapter features \\( k_{i} \\) and \\( v_{i} \\) are projected into a unified coupling space, reshaping them as \\( K \\) and \\( V \\). (b) Upgraded Space Mapping: Randomly initialized learnable upgraded features, \\( \\bar{K} \\) and \\( \\bar{V} \\), are concatenated with \\( K \\) and \\( V \\) as references. The learning process of \\( \\bar{K} \\) and \\( \\bar{V} \\) bridges the discrepancy between the base model and the upgraded model. These features are then aligned with the original cross-attention layers of the upgraded model through Alignment, which can be a U-Net or Transformer model. Best viewed in color." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.402, + 0.482, + 0.492 + ], + "angle": 0, + "content": "where \\( c_{n} \\) denotes the new condition embeddings. Taking IP-Adapter [40] as an example, the Image Encoder external to the original generation model serves as the condition encoder. And we directly integrate the pretrained condition encoder \\( E_{n} \\) from the adapter to efficiently transfer the well-trained adapter to the upgraded model." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.492, + 0.483, + 0.675 + ], + "angle": 0, + "content": "Attention Layers of PTA. As illustrated in Fig. 1, the encoded new condition embeddings \\( c_{n} \\) are processed through the attention mechanism coupling with the base model. 
We explicitly depict the attention layers associated with the pretrained adapter in the figure, denoting the weights of the \\( i \\)-th attention layer of the adapter as \\( W_{A,i} \\), distinguishing them from the original attention weights \\( W \\) in the base model. Subscript \\( A \\) represents the adapter, and \\( i \\) represents the \\( i \\)-th adapter attention layer. For instance, the fine-tuned cross-attention layers in the Decoupled Cross-Attention module of IP-Adapter [40] exemplify this. \\( c_{n} \\) are sequentially fed into the aforementioned adapter's attention layers \\( W_{A,i} \\):" + }, + { + "type": "equation", + "bbox": [ + 0.175, + 0.683, + 0.483, + 0.703 + ], + "angle": 0, + "content": "\\[\n\\boldsymbol {k} _ {i} = \\boldsymbol {W} _ {A, i} ^ {k} \\left(\\boldsymbol {c} _ {n}\\right), \\quad \\boldsymbol {k} _ {i} \\in \\mathbb {R} ^ {N \\times d _ {i}}, \\tag {5}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.176, + 0.711, + 0.483, + 0.731 + ], + "angle": 0, + "content": "\\[\n\\boldsymbol {v} _ {i} = \\boldsymbol {W} _ {A, i} ^ {v} \\left(\\boldsymbol {c} _ {n}\\right), \\quad \\boldsymbol {v} _ {i} \\in \\mathbb {R} ^ {N \\times d _ {i}}, \\tag {6}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.735, + 0.483, + 0.796 + ], + "angle": 0, + "content": "where \\(d_{i}\\) is the dimension of the feature, and \\(\\mathbf{N}\\) denotes the number of tokens for feature \\(k_{i}\\) and \\(\\boldsymbol{v}_{i}\\). To extract the attention features from \\(\\boldsymbol{c}_{n}\\), we employ the attention layers of the adapter rather than utilizing the entire base model." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.796, + 0.483, + 0.902 + ], + "angle": 0, + "content": "Projection onto the Coupling Space. The features obtained from multiple layers form a sequence of length \\(l\\), \\([k_1, k_2, \\dots, k_l]\\) and \\([v_1, v_2, \\dots, v_l]\\), where \\(l\\) represents the number of attention layers of PTA. The dimensions \\(d_i\\) of the features vary, as shown in Fig. 1. To achieve the all-for-all mapping for the attention-based adapter from \\(M_{base}\\) to \\(M_{up}\\), the features of all cross-attention layers need to be" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.402, + 0.907, + 0.448 + ], + "angle": 0, + "content": "projected into a unified coupling space \\( S_{co} \\) which is defined by the smallest common multiple \\( d_{scm} \\) of all dimensions \\( d_{i} \\):" + }, + { + "type": "equation", + "bbox": [ + 0.532, + 0.459, + 0.907, + 0.48 + ], + "angle": 0, + "content": "\\[\n\\boldsymbol {K} = \\operatorname {P r o j} ([ \\boldsymbol {k} _ {1}, \\boldsymbol {k} _ {2}, \\dots , \\boldsymbol {k} _ {l} ]), \\quad \\boldsymbol {K} \\in \\mathbb {R} ^ {l \\times N \\times d _ {s c m}}, \\quad (7)\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.535, + 0.491, + 0.907, + 0.512 + ], + "angle": 0, + "content": "\\[\n\\pmb {V} = \\mathrm {P r o j} ([ \\pmb {v} _ {1}, \\pmb {v} _ {2}, \\dots , \\pmb {v} _ {l} ]), \\quad \\pmb {V} \\in \\mathbb {R} ^ {l \\times N \\times d _ {s c m}}. \\quad (8)\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.519, + 0.907, + 0.732 + ], + "angle": 0, + "content": "The projection module consists of several linear layers designed to map dimensions \\( d_{i} \\) to \\( d_{scm} \\). After the sequences are projected to \\( S_{co} \\), they are reshaped into matrices \\( K \\), and \\( V \\). 
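A minimal sketch of the projection in Eqs. (7)-(8), assuming the module is simply one linear layer per adapter attention layer (the class and variable names are ours): each per-layer feature of dimension \(d_i\) is mapped to \(d_{scm}\), and the projected sequence is stacked into the \((l, N, d_{scm})\) matrix \(K\) (and likewise \(V\) with separate parameters).

```python
import torch
import torch.nn as nn

class CouplingSpaceProjection(nn.Module):
    """Sketch of Eqs. (7)-(8): per-layer linear maps from d_i to d_scm,
    followed by stacking into a single (l, N, d_scm) coupling matrix."""
    def __init__(self, dims: list[int], d_scm: int):
        super().__init__()
        self.proj = nn.ModuleList([nn.Linear(d, d_scm) for d in dims])

    def forward(self, feats: list[torch.Tensor]) -> torch.Tensor:
        # feats[i]: (N, dims[i]) -- the k_i (or v_i) from Eqs. (5)-(6)
        return torch.stack([p(f) for p, f in zip(self.proj, feats)], dim=0)
```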
Mapping to \\( S_{co} \\) defined by \\( d_{scm} \\) strikes the best balance between efficiency and effectiveness. Sec. 7.3 in the supplementary material demonstrates this through experiments. It reduces computational complexity by aligning features to a common dimension, avoiding the overhead of larger spaces. At the same time, it maintains sufficient representational capacity, preventing the loss of important information, which can happen with smaller spaces. This ensures effective feature alignment without excessive resource usage, making it an optimal choice for both computational efficiency and model performance." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.742, + 0.753, + 0.76 + ], + "angle": 0, + "content": "3.3. Upgraded Space Mapping" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.765, + 0.907, + 0.903 + ], + "angle": 0, + "content": "Similarly, we define the number of the cross-attention layers in the upgraded model as \\(\\bar{l}\\), and the space for upgraded models as \\(S_{up}\\). To transfer the unified adapter features to the upgraded space, inspired by BLIP-2 [17], we randomly initialize two learnable parameters \\(\\bar{K} \\in \\mathbb{R}^{\\bar{l} \\times N \\times d_{\\text{scm}}}\\) and \\(\\bar{V} \\in \\mathbb{R}^{\\bar{l} \\times N \\times d_{\\text{scm}}}\\). Given that the attention layer is effective for integrating features, we adopt this architecture to learn the aforementioned parameters. Consider the learning of \\(\\bar{K}\\) as an example. To enhance the robustness, we first" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "18479" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.091, + 0.483, + 0.122 + ], + "angle": 0, + "content": "normalize \\(\\bar{\\pmb{K}}\\) and \\(\\pmb{K}\\) using layer normalization. And then, the following operation is performed:" + }, + { + "type": "equation", + "bbox": [ + 0.134, + 0.128, + 0.482, + 0.169 + ], + "angle": 0, + "content": "\\[\n\\begin{array}{l} \\bar {\\boldsymbol {K}} = \\operatorname {F F N} \\left[ \\boldsymbol {W} ^ {o u t} \\cdot \\text {A t t e n t i o n} \\left(\\bar {\\boldsymbol {K}} \\boldsymbol {W} _ {1} ^ {k}, \\right. \\right. \\tag {9} \\\\ [ \\boldsymbol {K}, \\bar {\\boldsymbol {K}} ] W _ {2} ^ {k}, [ \\boldsymbol {K}, \\bar {\\boldsymbol {K}} ] W _ {3} ^ {k}) ]. \\\\ \\end{array}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.175, + 0.484, + 0.358 + ], + "angle": 0, + "content": "where \\([\\pmb{K},\\bar{\\pmb{K}} ]\\) denotes the concatenation of \\(\\pmb{K}\\) and \\(\\bar{\\pmb{K}}\\) along dimension \\(N\\). The projection weights \\(\\pmb{W}_1^k,\\pmb{W}_2^k,\\pmb{W}_3^k\\in \\mathbb{R}^{d_{scm}\\times d_{in}}\\) map \\(\\bar{\\pmb{K}}\\) and \\(\\pmb{K}\\), respectively, to the intermediate space with dimension \\(d_{in}\\). Following the processing of features using the Attention as described in Eq. (3), they are subsequently transformed back to the original space via the \\(\\pmb{W}^{out}\\). The Feed Forward Network (FFN) is composed of layers arranged sequentially, including layer normalization, linear transformations, GELU activation, and additional linear layers. The aforementioned process of Eq. (9) is iterated \\(\\mathbf{R}\\) times, with \\(\\mathbf{R}\\) serving as a hyperparameter. 
The learning process of \\(\\bar{\\pmb{V}}\\) is similar to \\(\\bar{\\pmb{K}}\\):" + }, + { + "type": "equation", + "bbox": [ + 0.125, + 0.364, + 0.482, + 0.399 + ], + "angle": 0, + "content": "\\[\n\\bar {\\boldsymbol {V}} = \\operatorname {F F N} \\left[ \\boldsymbol {W} ^ {\\text {o u t}} \\cdot \\text {A t t e n t i o n} \\left(\\bar {\\boldsymbol {V}} \\boldsymbol {W} _ {1} ^ {v}, \\right. \\right. \\\\ \\left. [ \\boldsymbol {V}, \\bar {\\boldsymbol {V}} ] \\boldsymbol {W} _ {2} ^ {v}, [ \\boldsymbol {V}, \\bar {\\boldsymbol {V}} ] \\boldsymbol {W} _ {3} ^ {v}) \\right]. \\tag {10}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.407, + 0.484, + 0.497 + ], + "angle": 0, + "content": "We clarify that two distinct modules with an identical structure are responsible for learning \\(\\bar{K}\\) and \\(\\bar{V}\\), respectively. Throughout this process, the learnable features \\(\\bar{\\bar{K}}\\) and \\(\\bar{V}\\) are seamlessly integrated with the adapter features \\(K\\) and \\(V\\) of the PTA, effectively bridging the unified coupling space to the upgraded space." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.498, + 0.484, + 0.618 + ], + "angle": 0, + "content": "Alignment with the Upgraded Model. The learned features in the upgraded space should be aligned with the attention layers of \\( M_{up} \\) to fully leverage the adapter's capabilities. Specifically, we first fetch the dimensions \\( \\bar{d}_i \\) of the cross-attention layers within the original upgraded model. Then, we employ the linear layer to align the \\( i \\)-th row of the matrix \\( \\bar{\\mathbf{K}} \\) to \\( \\bar{d}_i \\) dimensions through Alignment, which we denote as \\( \\bar{k}_i \\):" + }, + { + "type": "equation", + "bbox": [ + 0.197, + 0.625, + 0.482, + 0.645 + ], + "angle": 0, + "content": "\\[\n[ \\bar {\\boldsymbol {k}} _ {1}, \\dots , \\bar {\\boldsymbol {k}} _ {\\bar {l}} ] = \\operatorname {A l i g n} (\\bar {\\boldsymbol {K}}). \\tag {11}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.65, + 0.483, + 0.68 + ], + "angle": 0, + "content": "Similarly, the identical operation is applied to the \\(\\bar{\\mathbf{V}}\\) matrix in order to derive the vector \\(\\bar{\\pmb{v}}_i\\):" + }, + { + "type": "equation", + "bbox": [ + 0.198, + 0.687, + 0.482, + 0.706 + ], + "angle": 0, + "content": "\\[\n[ \\bar {\\boldsymbol {v}} _ {1}, \\dots , \\bar {\\boldsymbol {v}} _ {\\bar {l}} ] = \\operatorname {A l i g n} (\\bar {\\boldsymbol {V}}). \\tag {12}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.712, + 0.483, + 0.773 + ], + "angle": 0, + "content": "For the \\(i\\)-th cross-attention layer, let \\(\\pmb{q}_i^{up}\\), \\(\\pmb{k}_i^{up}\\), and \\(\\pmb{v}_i^{up}\\) be the original features. The extracted features, \\(\\bar{\\pmb{k}}_i\\) and \\(\\bar{\\pmb{v}}_i\\), are combined through linear weighting and summation with the prior values of the upgraded model using Attention Eq. (3):" + }, + { + "type": "equation", + "bbox": [ + 0.146, + 0.779, + 0.482, + 0.817 + ], + "angle": 0, + "content": "\\[\n\\begin{array}{l} \\bar {\\boldsymbol {Z}} = \\operatorname {A t t e n t i o n} \\left(\\boldsymbol {q} _ {i} ^ {u p}, \\boldsymbol {k} _ {i} ^ {u p}, \\boldsymbol {v} _ {i} ^ {u p}\\right) + \\tag {13} \\\\ \\lambda \\text {A t t e n t i o n} \\left(\\boldsymbol {q} _ {i} ^ {u p}, \\bar {\\boldsymbol {k}} _ {i}, \\bar {\\boldsymbol {v}} _ {i}\\right). 
\\\\ \\end{array}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.825, + 0.484, + 0.902 + ], + "angle": 0, + "content": "The result of Eq. (13), \\(\\bar{Z}\\), serves as the output for the \\(i\\)-th cross-attention layer and will be forwarded to the next layer of the upgraded model. The parameter \\(\\lambda\\) serves as a balancing factor, fixed at 1.0 during training and subsequently adjusted for downstream tasks during inference." + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.091, + 0.766, + 0.108 + ], + "angle": 0, + "content": "3.4. Optimization Loss Function" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.113, + 0.905, + 0.159 + ], + "angle": 0, + "content": "To optimize the A4A framework, we employ the loss function \\(\\mathcal{L}_{LDM}\\) of the upgraded model \\(M_{up}\\) as defined in Equation Eq. (1):" + }, + { + "type": "equation", + "bbox": [ + 0.543, + 0.166, + 0.905, + 0.185 + ], + "angle": 0, + "content": "\\[\n\\mathcal {L} _ {L D M} = \\mathbb {E} _ {z, \\epsilon \\sim \\mathcal {N} (0, I), t} \\| \\epsilon - \\epsilon_ {\\theta} ^ {u p} \\left(z _ {t}, t, c ^ {u p}\\right) \\| _ {2} ^ {2}, \\tag {14}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.192, + 0.905, + 0.253 + ], + "angle": 0, + "content": "The condition \\( c^{up} \\) includes the original text prompt \\( y_{t} \\) and new control conditions \\( y_{n} \\). The upgraded model \\( \\epsilon_{\\theta}^{up} \\) acquires new control conditions and capabilities by injecting learned adapter features." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.253, + 0.907, + 0.435 + ], + "angle": 0, + "content": "The base model and upgraded model remain frozen. The loss function is exclusively used to update the A4A module, which consists of the network with learnable features and the PTA for fine-tuning. To optimize model training, given the varying numbers of parameters in each trainable module, we adopt an asynchronous training strategy. Specifically, for training the projection in \\( S_{co} \\) and alignment in \\( S_{up} \\), a learning rate of \\( 1 \\times 10^{-5} \\) is employed to avoid overfitting, while a learning rate of \\( 1 \\times 10^{-4} \\) is applied to the other components. If the pretrained adapter is fine-tuned, a smaller learning rate of \\( 1 \\times 10^{-6} \\) is used to effectively retain PTA's conditional encoding capabilities." + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.447, + 0.646, + 0.464 + ], + "angle": 0, + "content": "4. Experiments" + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.472, + 0.722, + 0.488 + ], + "angle": 0, + "content": "4.1. Experimental Settings" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.493, + 0.907, + 0.659 + ], + "angle": 0, + "content": "Datasets. In our study, we utilize the CelebAMask-HQ dataset [15], which consists of approximately 20,000 high-quality facial images, each with a resolution of \\(1024 \\times 1024\\) pixels, and the OpenImages dataset [14], which offers a diverse collection of images featuring a wide range of clearly identifiable objects. We employ the BLIP-2 model [17] to generate captions for the aforementioned datasets, which serve as text prompts paired with the images. For validation, we randomly select 100 images from CelebAMask-HQ, ensuring they are distinct from those in the training set, and generate four images for each reference image." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.66, + 0.909, + 0.902 + ], + "angle": 0, + "content": "Implementation Details. 
We implement A4A using SD1.5 [26] as the base model, and SDXL [23] and Pixart-Alpha (XL) [4] as the upgraded models. Both SDXL and Pixart-Alpha are significantly larger text-to-image models compared to SD1.5, and we use them as upgraded models for the U-Net and transformer architectures, respectively. In this paper, we utilize the IP-Adapter [40] as our pretrained adapter. The IP-Adapter series has recently demonstrated remarkable capabilities, garnering significant interest for its ability to enhance personalization in generative models. Its performance across a variety of tasks highlights its increasing potential to advance text-to-image generation. Due to variations in GPU types across compared methods and the absence of comprehensive GPU hour reports, we use Sample Count (SC), which represents the number of samples processed up to a specific time point, as a metric" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.52, + 0.957 + ], + "angle": 0, + "content": "18480" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.092, + 0.087, + 0.905, + 0.36 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.368, + 0.908, + 0.397 + ], + "angle": 0, + "content": "Figure 2. The visualization of personalized human generation using SDXL with U-Net architecture as the upgraded model. A4A(ours) compares with the previous work X-Adapter and pretrained adapter from the upgraded model (PTA-UM)." + }, + { + "type": "image", + "bbox": [ + 0.095, + 0.408, + 0.909, + 0.684 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.694, + 0.908, + 0.723 + ], + "angle": 0, + "content": "Figure 3. The visualization of personalized object generation using SDXL with U-Net architecture as the upgraded model. A4A(ours) compares with the previous work X-Adapter and pretrained adapter from the upgraded model (PTA-UM)." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.748, + 0.483, + 0.884 + ], + "angle": 0, + "content": "for evaluating training costs and efficiency. For example, the officially published PTA has an SC of 64M (i.e., the PTA is trained with 8 V100 GPUs for 1M steps with a batch size of 8 per GPU). The terms in the following charts are defined as: (1) A4A (ours): transferring the PTA from the base model to the upgraded model using our method A4A; (2) X-Adapter: transferring the PTA using the published X-Adapter [25] checkpoint; (3) PTA-UM: the officially published pretrained adapter from the upgraded model." + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.886, + 0.483, + 0.901 + ], + "angle": 0, + "content": "Evaluation Metrics. To verify the effectiveness and effi" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.748, + 0.907, + 0.9 + ], + "angle": 0, + "content": "ciency of our method, we evaluate A4A on two tasks: personalized human generation (ID customization) and personalized object generation (IP customization). For ID customization, we utilize IDentity Alignment scores (IDA) to measure the similarity between generated and reference facial features, alongside the OMG method [12]. Specifically, we employ the Antelopev2 model from the InsightFace library [8] to detect faces and extract facial embeddings from both reference and generated images. 
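A sketch of how such an IDA score could be computed, assuming InsightFace's FaceAnalysis interface with the antelopev2 model pack and assuming IDA is the cosine similarity of the L2-normalized face embeddings; the exact formula is not spelled out here, so treat this as illustrative.

```python
import cv2
import numpy as np
from insightface.app import FaceAnalysis

# Assumes the antelopev2 pack is available locally; names are illustrative.
app = FaceAnalysis(name="antelopev2")
app.prepare(ctx_id=0, det_size=(640, 640))

def ida_score(ref_path: str, gen_path: str) -> float:
    """Cosine similarity between face embeddings of the reference and
    generated images (our reading of the IDA metric)."""
    embs = []
    for path in (ref_path, gen_path):
        faces = app.get(cv2.imread(path))
        if not faces:
            return 0.0                      # no face detected
        embs.append(faces[0].normed_embedding)  # L2-normalized embedding
    return float(np.dot(embs[0], embs[1]))
```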
For IP customization, we extract image embeddings using pretrained CLIP" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.518, + 0.957 + ], + "angle": 0, + "content": "18481" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.218, + 0.091, + 0.292, + 0.105 + ], + "angle": 0, + "content": "A4A(ours)" + }, + { + "type": "text", + "bbox": [ + 0.107, + 0.115, + 0.404, + 0.138 + ], + "angle": 0, + "content": "Photo of a man/woman standing in a garden, dressed in casual clothing, dressed in casual clothing" + }, + { + "type": "image", + "bbox": [ + 0.093, + 0.145, + 0.171, + 0.209 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.094, + 0.209, + 0.171, + 0.269 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.094, + 0.27, + 0.171, + 0.332 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.094, + 0.332, + 0.171, + 0.394 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.172, + 0.146, + 0.255, + 0.209 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.173, + 0.209, + 0.254, + 0.269 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.173, + 0.27, + 0.254, + 0.332 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.173, + 0.332, + 0.254, + 0.394 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.256, + 0.146, + 0.337, + 0.209 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.257, + 0.209, + 0.337, + 0.269 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.257, + 0.27, + 0.337, + 0.332 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.257, + 0.332, + 0.337, + 0.394 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.339, + 0.146, + 0.42, + 0.209 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.339, + 0.209, + 0.42, + 0.269 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.339, + 0.27, + 0.42, + 0.332 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.339, + 0.332, + 0.42, + 0.394 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.456, + 0.12, + 0.543, + 0.134 + ], + "angle": 0, + "content": "Text Prompt" + }, + { + "type": "image", + "bbox": [ + 0.458, + 0.144, + 0.542, + 0.209 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.458, + 0.209, + 0.541, + 0.269 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.458, + 0.27, + 0.541, + 0.332 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.458, + 0.332, + 0.541, + 0.394 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.693, + 0.092, + 0.781, + 0.107 + ], + "angle": 0, + "content": "IP-Adapter\\*" + }, + { + "type": "text", + "bbox": [ + 0.588, + 0.115, + 0.885, + 0.138 + ], + "angle": 0, + "content": "Photo of a man/woman standing in a garden, dressed in casual clothing, dressed in casual clothing" + }, + { + "type": "image", + "bbox": [ + 0.572, + 0.145, + 0.652, + 0.208 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.572, + 0.208, + 0.652, + 0.269 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.572, + 0.27, + 0.652, + 0.332 + ], + "angle": 0, + "content": null + }, 
+ { + "type": "image", + "bbox": [ + 0.572, + 0.332, + 0.652, + 0.393 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.655, + 0.146, + 0.735, + 0.208 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.655, + 0.208, + 0.735, + 0.269 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.655, + 0.27, + 0.735, + 0.332 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.655, + 0.332, + 0.735, + 0.393 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.738, + 0.146, + 0.819, + 0.208 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.738, + 0.208, + 0.819, + 0.269 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.738, + 0.27, + 0.819, + 0.332 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.738, + 0.332, + 0.819, + 0.393 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.822, + 0.146, + 0.902, + 0.208 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.822, + 0.208, + 0.902, + 0.269 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.822, + 0.27, + 0.902, + 0.332 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.822, + 0.332, + 0.902, + 0.393 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.407, + 0.907, + 0.451 + ], + "angle": 0, + "content": "Figure 4. The visualization of transferring the IP-Adapter from SD1.5 (U-Net architecture) to Pixart-Alpha (Transformer architecture). The middle line, framed in orange, serves as the reference for comparison. The left side shows the A4A effect, which closely resembles the reference, while the right side (IP-Adapter*) displays the results of training IP-Adapter from scratch." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.476, + 0.483, + 0.537 + ], + "angle": 0, + "content": "model [24] and report the embedding similarity between generated and reference images, referred to as the CLIP-score. In the graph, we use the previously defined SC as the horizontal axis to represent the training process." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.548, + 0.377, + 0.565 + ], + "angle": 0, + "content": "4.2. Transferring to the U-Net model" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.572, + 0.483, + 0.753 + ], + "angle": 0, + "content": "We use SDXL as the upgraded model and compare our results with the officially published PTA-UM [40] (represented by the green star in Fig. 5). To facilitate comparison, we extend a horizontal line from this point to represent the performance of PTA-UM, rather than the actual training curve. As illustrated in Fig. 5, transferring the PTA using A4A achieves an IDA comparable to that of the PTA-UM at approximately \\(\\mathrm{SC}0.5\\mathrm{M}^2\\). Furthermore, as shown in Fig. 2, the lines \"A4A (ours)\" and \"PTA-UM\" demonstrate that A4A not only preserves the intellectual property of the characters but also maintains the editing capabilities of the upgraded model with minimal training cost." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.754, + 0.483, + 0.875 + ], + "angle": 0, + "content": "For personalized object generation (IP customization), as shown in Fig. 6, transferring the PTA using A4A achieves a CLIP-score comparable to that of PTA-UM at approximately 2.5M SC, compared to 64M SC for PTA-UM. 
As illustrated in Fig. 3, the lines labeled \"A4A (ours)\" and \"PTA-UM\" demonstrate that control ability, as guided by the text prompt, is also preserved. When the attention-based adapter is transferred to the upgraded U-Net model, A4A ef" + }, + { + "type": "image", + "bbox": [ + 0.523, + 0.478, + 0.899, + 0.59 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.513, + 0.606, + 0.907, + 0.648 + ], + "angle": 0, + "content": "Figure 5. The graph shows the IDA of ID preservation when transferring to SDXL, compared to the pretrained adapter from SDXL (PTA-UM). The horizontal axis is in units of M." + }, + { + "type": "image", + "bbox": [ + 0.522, + 0.67, + 0.9, + 0.782 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.513, + 0.798, + 0.907, + 0.84 + ], + "angle": 0, + "content": "Figure 6. The graph shows the CLIP-score of personalized object generation when transferring to SDXL, compared to PTA-UM. The horizontal axis is in units of M." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.871, + 0.907, + 0.902 + ], + "angle": 0, + "content": "fectively retains and transfers the adapter's capabilities with minimal training cost. The data of the quantitative indica" + }, + { + "type": "page_footnote", + "bbox": [ + 0.108, + 0.887, + 0.456, + 0.901 + ], + "angle": 0, + "content": "\\(^{2}\\)A4A is trained on 2 A100 GPUs with a batch size of 4 per GPU." + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "18482" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.091, + 0.092, + 0.459, + 0.108 + ], + "angle": 0, + "content": "tors corresponding to the line chart are shown in Tab. 1." + }, + { + "type": "title", + "bbox": [ + 0.09, + 0.128, + 0.403, + 0.143 + ], + "angle": 0, + "content": "4.3. Transferring to Transformer model" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.153, + 0.486, + 0.397 + ], + "angle": 0, + "content": "We use Pixart-Alpha [4] with the transformer architecture as the upgraded model. Given the absence of a corresponding published version of the IP-Adapter for Pixart-Alpha, we train it from scratch using the CelebAMask-HQ dataset as our baseline, represented by the orange line labeled \"IP-Adapter*\" in Fig. 7 and Fig. 4. As illustrated in Fig. 7, employing A4A to transfer pretrained adapters to transformer models offers a significant advantage over training adapters directly on the transformer models. It is worth noting that our work is the first to transfer the adapter from the U-Net model to the transformer model, achieving strong results. Fig. 4 presents a visualization comparing our method with training the IP-Adapter from scratch, both at 30k steps with a batch size of 8. The images generated by A4A show significant facial similarity to the reference image, while the image on the right does not yield comparable results." + }, + { + "type": "image", + "bbox": [ + 0.116, + 0.42, + 0.455, + 0.531 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.548, + 0.485, + 0.605 + ], + "angle": 0, + "content": "Figure 7. The IDA of transferring the pretrained IP-Adapter from SD1.5 to Pixart-Alpha using A4A (red line with dots), compared to training the IP-Adapter from scratch (orange line with stars). The horizontal axis is in units of K. Best viewed in color." + }, + { + "type": "title", + "bbox": [ + 0.09, + 0.645, + 0.245, + 0.661 + ], + "angle": 0, + "content": "4.4. 
Ablation Study" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.671, + 0.483, + 0.792 + ], + "angle": 0, + "content": "In the previous section, we adopted the A4A paradigm, which includes training our modules, namely Coupling Space Projection and Upgraded Space Mapping, along with fine-tuning the PTA. To demonstrate that fine-tuning is not the core driving force of our method, we conducted the following experiment. As shown, the two curves are very close, with the fine-tuning paradigm showing only a slight improvement over the non-fine-tuning version." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.796, + 0.484, + 0.903 + ], + "angle": 0, + "content": "We present experiments on the hyperparameter settings of the learning rates for each component in Sec. 7. It is worth mentioning that, since Projection and Alignment have similar numbers of parameters, we group them together. The ablation study of the two core modules is presented in Sec. 8, which demonstrates that our design achieves a satisfactory transfer effect with an efficient structure." + }, + { + "type": "image", + "bbox": [ + 0.539, + 0.094, + 0.872, + 0.205 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.513, + 0.221, + 0.908, + 0.265 + ], + "angle": 0, + "content": "Figure 8. The yellow line labeled \"A4A(ours w/fine-tuning)\" represents training A4A without fine-tuning the PTA, while the red line represents the full A4A approach with fine-tuning the PTA." + }, + { + "type": "table", + "bbox": [ + 0.571, + 0.278, + 0.851, + 0.352 + ], + "angle": 0, + "content": "
<table><tr><td>PTA: IP-Adapter</td><td>IDA↑</td><td>CLIP-score↑</td></tr>
<tr><td>X-Adapter</td><td>0.062</td><td>0.7894</td></tr>
<tr><td>PTA-UM</td><td>0.4531</td><td>0.9124</td></tr>
<tr><td>A4A(ours)</td><td>0.5127</td><td>0.9154</td></tr></table>
" + }, + { + "type": "table_caption", + "bbox": [ + 0.513, + 0.362, + 0.907, + 0.432 + ], + "angle": 0, + "content": "Table 1. The evaluation metrics for IP customization (CLIP-score) and ID customization (IDA) are presented. Using SDXL as the upgraded model and IP-Adapter as the pretrained adapter, A4A (ours) is compared with the transfer method X-Adapter and the pretrained adapter from the upgraded model (PTA-UM)." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.459, + 0.771, + 0.475 + ], + "angle": 0, + "content": "4.5. Comparison with X-Adapter" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.482, + 0.909, + 0.662 + ], + "angle": 0, + "content": "The previous work X-Adapter [25] is designed specifically for transferring adapters from U-Net models. To demonstrate the effectiveness of A4A, we also compare it with X-Adapter. As shown in Tab. 1, our method achieves better results in generation using the transferred adapter for both IP and ID customization. The visualizations in Figure 2 and Fig. 3, particularly the row labeled \"X-Adapter\", further substantiate this when compared to the adjacent rows. It is also worth noting that our method requires only the adapter for training and inference, without the need for denoising using the base model as in X-Adapter, which makes it more efficient." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.679, + 0.634, + 0.695 + ], + "angle": 0, + "content": "5. Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.705, + 0.909, + 0.903 + ], + "angle": 0, + "content": "We propose A4A (Adapter for Adapter), a novel framework designed to address the challenges of transferring pretrained adapters across rapidly evolving model architectures. By employing an all-for-all mapping approach, A4A seamlessly transfers attention-based adapters from U-Net to transformer models without the need for extensive retraining. The framework's two key components, Coupling Space Projection and Upgraded Space Mapping, enable effective bridging of adapter features with upgraded model structures. Our experimental results demonstrate that A4A preserves both the generative power of upgraded models and the controllability of the original adapters. This work offers a scalable solution for cross-architecture adapter transfer." + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "18483" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.091, + 0.091, + 0.264, + 0.108 + ], + "angle": 0, + "content": "6. Acknowledgment" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.114, + 0.486, + 0.171 + ], + "angle": 0, + "content": "This research is supported by Artificial Intelligence National Science and Technology Major Project 2023ZD0121200, and National Natural Science Foundation of China under Grant 62222212 and 623B2094." + }, + { + "type": "title", + "bbox": [ + 0.093, + 0.196, + 0.188, + 0.212 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.221, + 0.484, + 0.289 + ], + "angle": 0, + "content": "[1] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023. 
2" + }, + { + "type": "ref_text", + "bbox": [ + 0.1, + 0.291, + 0.484, + 0.36 + ], + "angle": 0, + "content": "[2] Yogesh Balaji, Seungjun Nah, Xun Huang, Arash Vahdat, Jiaming Song, Qinsheng Zhang, Karsten Kreis, Miika Aittala, Timo Aila, Samuli Laine, et al. ediff-i: Text-to-image diffusion models with an ensemble of expert denoisers. arXiv preprint arXiv:2211.01324, 2022. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.361, + 0.484, + 0.456 + ], + "angle": 0, + "content": "[3] Marco Bellagente, Manuel Brack, Hannah Teufel, Felix Friedrich, Björn Deiseroth, Constantin Eichenberg, Andrew M Dai, Robert Baldock, Souradeep Nanda, Koen Oostermeijer, et al. Multifusion: Fusing pre-trained models for multi-lingual, multi-modal image generation. Advances in Neural Information Processing Systems, 36:59502-59521, 2023. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.457, + 0.485, + 0.526 + ], + "angle": 0, + "content": "[4] Junsong Chen, Jincheng Yu, Chongjian Ge, Lewei Yao, Enze Xie, Yue Wu, Zhongdao Wang, James Kwok, Ping Luo, Huchuan Lu, et al. Pixart-alpha: Fast training of diffusion transformer for photorealistic text-to-image synthesis. arXiv preprint arXiv:2310.00426, 2023. 2, 3, 5, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.527, + 0.484, + 0.568 + ], + "angle": 0, + "content": "[5] Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. Advances in neural information processing systems, 34:8780-8794, 2021. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.568, + 0.484, + 0.651 + ], + "angle": 0, + "content": "[6] Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas Müller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, et al. Scaling rectified flow transformers for high-resolution image synthesis. In Forty-first International Conference on Machine Learning, 2024. 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.652, + 0.484, + 0.707 + ], + "angle": 0, + "content": "[7] Rinon Gal, Moab Arar, Yuval Atzmon, Amit H Bermano, Gal Chechik, and Daniel Cohen-Or. Encoder-based domain tuning for fast personalization of text-to-image models. ACM Transactions on Graphics (TOG), 42(4):1-13, 2023. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.708, + 0.484, + 0.747 + ], + "angle": 0, + "content": "[8] Jia Guo, Jiankang Deng, Xiang An, Jack Yu, and Baris Gecer. Insightface: 2d and 3d face analysis project, 2019. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.749, + 0.484, + 0.829 + ], + "angle": 0, + "content": "[9] Thomas A Halgren, Robert B Murphy, Richard A Friesner, Hege S Beard, Leah L Frye, W Thomas Pollard, and Jay L Banks. Glide: a new approach for rapid, accurate docking and scoring. 2. enrichment factors in database screening. Journal of medicinal chemistry, 47(7):1750-1759, 2004. 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.831, + 0.485, + 0.9 + ], + "angle": 0, + "content": "[10] Amir Hertz, Andrey Voynov, Shlomi Fruchter, and Daniel Cohen-Or. Style aligned image generation via shared attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4775–4785, 2024. 3" + }, + { + "type": "list", + "bbox": [ + 0.094, + 0.221, + 0.485, + 0.9 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.093, + 0.907, + 0.135 + ], + "angle": 0, + "content": "[11] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. 
Advances in neural information processing systems, 33:6840-6851, 2020. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.137, + 0.907, + 0.205 + ], + "angle": 0, + "content": "[12] Zhe Kong, Yong Zhang, Tianyu Yang, Tao Wang, Kaihao Zhang, Bizhu Wu, Guanying Chen, Wei Liu, and Wenhan Luo. Omg: Occlusion-friendly personalized multi-concept generation in diffusion models. arXiv preprint arXiv:2403.10983, 2024.6" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.207, + 0.906, + 0.276 + ], + "angle": 0, + "content": "[13] Nupur Kumari, Bingliang Zhang, Richard Zhang, Eli Shechtman, and Jun-Yan Zhu. Multi-concept customization of text-to-image diffusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1931-1941, 2023. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.278, + 0.906, + 0.373 + ], + "angle": 0, + "content": "[14] Alina Kuznetsova, Hassan Rom, Neil Alldrin, Jasper Uijlings, Ivan Krasin, Jordi Pont-Tuset, Shahab Kamali, Stefan Popov, Matteo Malloci, Alexander Kolesnikov, et al. The open images dataset v4: Unified image classification, object detection, and visual relationship detection at scale. International Journal of Computer Vision, 128(7):1956-1981, 2020. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.376, + 0.906, + 0.432 + ], + "angle": 0, + "content": "[15] Cheng-Han Lee, Ziwei Liu, Lingyun Wu, and Ping Luo. Maskgan: Towards diverse and interactive facial image manipulation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.434, + 0.906, + 0.473 + ], + "angle": 0, + "content": "[16] Yuseung Lee and Minhyuk Sung. Reground: Improving textual and spatial grounding at no cost. arXiv preprint arXiv:2403.13589, 2024. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.476, + 0.906, + 0.545 + ], + "angle": 0, + "content": "[17] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In International conference on machine learning, pages 19730–19742. PMLR, 2023. 4, 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.547, + 0.906, + 0.616 + ], + "angle": 0, + "content": "[18] Yuheng Li, Haotian Liu, Qingyang Wu, Fangzhou Mu, Jianwei Yang, Jianfeng Gao, Chunyuan Li, and Yong Jae Lee. Gligen: Open-set grounded text-to-image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22511-22521, 2023. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.617, + 0.906, + 0.673 + ], + "angle": 0, + "content": "[19] Han Lin, Jaemin Cho, Abhay Zala, and Mohit Bansal. Ctrl-adapter: An efficient and versatile framework for adapting diverse controls to any diffusion model. arXiv preprint arXiv:2404.09967, 2024. 1, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.675, + 0.906, + 0.744 + ], + "angle": 0, + "content": "[20] Yixin Liu, Kai Zhang, Yuan Li, Zhiling Yan, Chujie Gao, Ruoxi Chen, Zhengqing Yuan, Yue Huang, Hanchi Sun, Jianfeng Gao, et al. Sora: A review on background, technology, limitations, and opportunities of large vision models. arXiv preprint arXiv:2402.17177, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.745, + 0.906, + 0.801 + ], + "angle": 0, + "content": "[21] Xichen Pan, Li Dong, Shaohan Huang, Zhiliang Peng, Wenhu Chen, and Furu Wei. Kosmos-g: Generating images in context with multimodal large language models. 
arXiv preprint arXiv:2310.02992, 2023. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.802, + 0.906, + 0.857 + ], + "angle": 0, + "content": "[22] William Peebles and Saining Xie. Scalable diffusion models with transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4195-4205, 2023. 1, 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.859, + 0.906, + 0.901 + ], + "angle": 0, + "content": "[23] Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. Sdxl: Improving latent diffusion mod" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.093, + 0.907, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.52, + 0.957 + ], + "angle": 0, + "content": "18484" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.125, + 0.092, + 0.484, + 0.12 + ], + "angle": 0, + "content": "els for high-resolution image synthesis. arXiv preprint arXiv:2307.01952, 2023. 1, 2, 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.122, + 0.484, + 0.205 + ], + "angle": 0, + "content": "[24] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748-8763. PMLR, 2021. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.207, + 0.484, + 0.29 + ], + "angle": 0, + "content": "[25] Lingmin Ran, Xiaodong Cun, Jia-Wei Liu, Rui Zhao, Song Zijie, Xintao Wang, Jussi Keppo, and Mike Zheng Shou. X-adapter: Adding universal compatibility of plugins for upgraded diffusion model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8775-8784, 2024. 1, 3, 6, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.292, + 0.484, + 0.36 + ], + "angle": 0, + "content": "[26] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684-10695, 2022. 1, 2, 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.362, + 0.484, + 0.445 + ], + "angle": 0, + "content": "[27] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In Medical image computing and computer-assisted intervention-MICCAI 2015: 18th international conference, Munich, Germany, October 5-9, 2015, proceedings, part III 18, pages 234-241. Springer, 2015. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.447, + 0.484, + 0.53 + ], + "angle": 0, + "content": "[28] Chitwan Sahara, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al. Photorealistic text-to-image diffusion models with deep language understanding. Advances in neural information processing systems, 35:36479-36494, 2022. 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.532, + 0.484, + 0.587 + ], + "angle": 0, + "content": "[29] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. arXiv preprint arXiv:2011.13456, 2020. 
2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.589, + 0.484, + 0.643 + ], + "angle": 0, + "content": "[30] Yoad Tewel, Rinon Gal, Gal Chechik, and Yuval Atzmon. Key-locked rank one editing for text-to-image personalization. In ACM SIGGRAPH 2023 Conference Proceedings, pages 1-11, 2023. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.646, + 0.484, + 0.686 + ], + "angle": 0, + "content": "[31] Aaron Van Den Oord, Oriol Vinyals, et al. Neural discrete representation learning. Advances in neural information processing systems, 30, 2017. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.689, + 0.484, + 0.716 + ], + "angle": 0, + "content": "[32] A Vaswani. Attention is all you need. Advances in Neural Information Processing Systems, 2017. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.718, + 0.484, + 0.772 + ], + "angle": 0, + "content": "[33] Haofan Wang, Matteo Spinelli, Qixun Wang, Xu Bai, Zekui Qin, and Anthony Chen. Instantstyle: Free lunch towards style-preserving in text-to-image generation. arXiv preprint arXiv:2404.02733, 2024. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.775, + 0.484, + 0.829 + ], + "angle": 0, + "content": "[34] Qixun Wang, Xu Bai, Haofan Wang, Zekui Qin, Anthony Chen, Huaxia Li, Xu Tang, and Yao Hu. Instantid: Zero-shot identity-preserving generation in seconds. arXiv preprint arXiv:2401.07519, 2024. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.832, + 0.484, + 0.901 + ], + "angle": 0, + "content": "[35] Xudong Wang, Trevor Darrell, Sai Saketh Rambhatla, Rohit Girdhar, and Ishan Misra. Instancediffusion: Instance-level control for image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6232-6242, 2024. 3" + }, + { + "type": "list", + "bbox": [ + 0.093, + 0.092, + 0.484, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.092, + 0.906, + 0.174 + ], + "angle": 0, + "content": "[36] Yuxiang Wei, Yabo Zhang, Zhilong Ji, Jinfeng Bai, Lei Zhang, and Wangmeng Zuo. Elite: Encoding visual concepts into textual embeddings for customized text-to-image generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 15943-15953, 2023. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.178, + 0.906, + 0.232 + ], + "angle": 0, + "content": "[37] Zhichao Wei, Qingkun Su, Long Qin, and Weizhi Wang. Mm-diff: High-fidelity image personalization via multi-modal condition integration. arXiv preprint arXiv:2403.15059, 2024. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.234, + 0.906, + 0.288 + ], + "angle": 0, + "content": "[38] Yinwei Wu, Xianpan Zhou, Bing Ma, Xuefeng Su, Kai Ma, and Xinchao Wang. Ifadapter: Instance feature control for grounded text-to-image generation. arXiv preprint arXiv:2409.08240, 2024. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.291, + 0.906, + 0.345 + ], + "angle": 0, + "content": "[39] Peng Xing, Haofan Wang, Yanpeng Sun, Qixun Wang, Xu Bai, Hao Ai, Renyuan Huang, and Zechao Li. Csgo: Content-style composition in text-to-image generation. arXiv preprint arXiv:2408.16766, 2024. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.347, + 0.906, + 0.401 + ], + "angle": 0, + "content": "[40] Hu Ye, Jun Zhang, Sibo Liu, Xiao Han, and Wei Yang. Ip-adapter: Text compatible image prompt adapter for text-to-image diffusion models. arXiv preprint arXiv:2308.06721, 2023. 
3, 4, 5, 7, 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.404, + 0.906, + 0.459 + ], + "angle": 0, + "content": "[41] Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3836-3847, 2023. 1, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.461, + 0.906, + 0.543 + ], + "angle": 0, + "content": "[42] Yuxuan Zhang, Yiren Song, Jiaming Liu, Rui Wang, Jinpeng Yu, Hao Tang, Huaxia Li, Xu Tang, Yao Hu, Han Pan, et al. Ssr-encoder: Encoding selective subject representation for subject-driven generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8069–8078, 2024. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.545, + 0.906, + 0.613 + ], + "angle": 0, + "content": "[43] Dewei Zhou, You Li, Fan Ma, Xiaoting Zhang, and Yi Yang. Mige: Multi-instance generation controller for text-to-image synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6818-6828, 2024. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.615, + 0.906, + 0.668 + ], + "angle": 0, + "content": "[44] Zhengguang Zhou, Jing Li, Huaxia Li, Nemo Chen, and Xu Tang. Storymaker: Towards holistic consistent characters in text-to-image generation. arXiv preprint arXiv:2409.12576, 2024.3" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.092, + 0.906, + 0.668 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "18485" + } + ] +] \ No newline at end of file diff --git a/2025/A4A_ Adapter for Adapter Transfer via All-for-All Mapping for Cross-Architecture Models/66a56aa1-a030-49e2-8d93-323c4a070c6f_origin.pdf b/2025/A4A_ Adapter for Adapter Transfer via All-for-All Mapping for Cross-Architecture Models/66a56aa1-a030-49e2-8d93-323c4a070c6f_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..88907bcbf6c0591236d52248c003e2fb4f80f2ff --- /dev/null +++ b/2025/A4A_ Adapter for Adapter Transfer via All-for-All Mapping for Cross-Architecture Models/66a56aa1-a030-49e2-8d93-323c4a070c6f_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:11448f03981dbf4a38c2b1eabc496959672b1c7ef0158567dffb1a54354b7e8c +size 2289615 diff --git a/2025/A4A_ Adapter for Adapter Transfer via All-for-All Mapping for Cross-Architecture Models/full.md b/2025/A4A_ Adapter for Adapter Transfer via All-for-All Mapping for Cross-Architecture Models/full.md new file mode 100644 index 0000000000000000000000000000000000000000..73dd1f4b5c3e7e806bd5380ebe1fa3f807e69ac8 --- /dev/null +++ b/2025/A4A_ Adapter for Adapter Transfer via All-for-All Mapping for Cross-Architecture Models/full.md @@ -0,0 +1,366 @@ +# A4A: Adapter for Adapter Transfer via All-for-All Mapping for Cross-Architecture Models + +Keyu Tu, Mengqi Huang, Zhuowei Chen, Zhendong Mao* +University of Science and Technology of China + +{kytu,huangmq,chenzw01}@mail.ustc.edu.cn,{zdmao}@ustc.edu.cn + +# Abstract + +Large-scale text-to-image models evolve rapidly in size and architecture. The existing adapters struggle to keep pace with these models, requiring extensive retraining. 
This paper proposes a novel adapter transfer framework, A4A (Adapter for Adapter), which uses an all-for-all mapping approach to seamlessly transfer attention-based adapters across different model architectures (e.g., U-Net to transformer). The framework consists of Coupling Space Projection and Upgraded Space Mapping. During Coupling Space Projection, all attention features of the pretrained adapter are aggregated to fully capture the coupling relationship before being projected into a unified space. The unified space maintains coupling features in a consistent dimension, effectively and efficiently addressing the feature scale discrepancies arising from the base model's architecture. In the Upgraded Space Mapping module, randomly initialized learnable features are introduced to connect the unified and upgraded spaces by integrating reference features via the attention mechanism. The learned features are adaptively injected into the upgraded model through the Alignment module, which bridges the discrepancies between the models using the all-for-all mapping. Experimental results on personalized image generation tasks demonstrate that A4A outperforms previous methods in transferring adapters while being the first to achieve adapter transfer across model architectures.

# 1. Introduction

Recent advancements in large-scale text-to-image diffusion models [5, 11, 23, 26] have significantly improved their ability to generate high-quality, realistic images based on user-friendly textual prompts. Building on these generative capabilities, numerous adapters have been developed upon these pretrained models to further endow them with new control conditions, such as pose and human identity control, thereby fostering the growth of downstream real-world applications like personalized image creation. As pretrained models rapidly evolve, with increasing parameters (e.g., from SD1.5's 860M to SDXL's 2600M) and evolving architectures (e.g., from the convolutional U-Net [27] to the transformer [22]), the original adapters built on base models require substantial resources for retraining and significant effort for redesign to accommodate upgraded models. This leads to a lag in adapter development compared to the progression of upgraded models.[^1] Therefore, the adapter transfer task, i.e., effectively and efficiently transferring existing adapters from base models to upgraded models to leverage the strong control capabilities of the original well-developed adapters and the superior generative abilities of the upgraded models, has become an increasingly important and urgent requirement in both academia and industry.

[^1]: In this study, we define "base models" as the original pretrained text-to-image models with well-developed adapters (e.g., SD1.5), and "upgraded models" as those pretrained models that require adapter transfer (e.g., SDXL). Upgraded models typically have more parameters and advanced architectural designs compared to the base models.

Given the substantial potential benefits of adapter transfer, several prior studies have been conducted in this field. For instance, Ctrl-Adapter [19] has been proposed to transfer the ControlNet [41] architecture by fusing the outputs of the zero-convolution layers of the pretrained ControlNet into the corresponding layers of the upgraded model. Concurrently, X-Adapter [25] has been explored for mapping the latents from the base model's decoder blocks and adding them to the corresponding locations within the upgraded model's decoder. In summary, existing adapter transfer methods primarily focus on addition-based adapters (i.e., adapters whose control conditions are injected into the pretrained model by simple addition; ControlNet is the typical example), through a layer-by-layer mapping, i.e., the output of each layer in the base model's adapter is mapped to the semantically equivalent layer in the upgraded model, as sketched below.
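To make the layer-by-layer, addition-based paradigm concrete before critiquing it, here is a minimal schematic sketch. It is a reading aid only: the class name, the per-layer linear mappers, and all dimensions are illustrative assumptions, not the published Ctrl-Adapter or X-Adapter implementations.

```python
# Schematic sketch of layer-by-layer, addition-based adapter transfer.
# Every name and shape here is illustrative, not taken from any codebase.
import torch
import torch.nn as nn

class LayerByLayerMapper(nn.Module):
    """Fuses each base-adapter layer output into one manually matched
    layer of the upgraded model by simple addition."""

    def __init__(self, base_dims, up_dims):
        super().__init__()
        # one mapper per matched pair: the matching itself is hand-crafted,
        # which is the core limitation discussed in the text
        self.mappers = nn.ModuleList(
            nn.Linear(b, u) for b, u in zip(base_dims, up_dims)
        )

    def forward(self, adapter_feats, up_feats):
        # adapter_feats[i]: output of layer i of the pretrained adapter
        # up_feats[i]: hidden state of the matched upgraded-model layer
        return [u + m(a) for m, a, u in
                zip(self.mappers, adapter_feats, up_feats)]

# toy usage with three matched layer pairs of differing widths
mapper = LayerByLayerMapper([320, 640, 1280], [640, 1280, 2560])
a_feats = [torch.randn(1, 16, d) for d in (320, 640, 1280)]
u_feats = [torch.randn(1, 16, d) for d in (640, 1280, 2560)]
print([f.shape for f in mapper(a_feats, u_feats)])
```

Each matched pair is fused in isolation, with no pathway through which the layers can act as a whole; the argument below starts from exactly this property.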
+ +However, in this study, we argue that the existing layer-by-layer mapping fails to fully exploit the coupling between original adapters and base models to effectively bridge the + +discrepancies between base models and upgraded models. Here, coupling refers to the already well-trained compatibility between the original adapters and base models, while differences refer to the architectural discrepancies between base models and upgraded models. The reason behind this is that the original adapters and base models function as an integrated whole, and the layer-by-layer mapping disrupts this holistic consistency by isolating the output of each layer. Consequently, during the subsequent mapping to the upgraded models, this overall consistency cannot be effectively utilized. As a result, existing adapter transfer methods suffer from limited transfer scopes. On the one hand, they can only transfer addition-based adapters (typically, ControlNet) but fail to generalize to the attention-based adapters, which are more widely used. This is technically more challenging because attention-based adapters (i.e. the control conditions are injected into pretrained models by attention mechanisms) involve more complex interactions and dependencies across different layers of the model. On the other hand, these methods are limited to transferring between similar pretrained model architectures (e.g., from a U-Net model to another U-Net model) but fail when transferring from a U-Net model to a transformer model. This limitation is particularly critical, as the latest state-of-the-art pretrained text-to-image models [1, 6] are predominantly transformer-based, while the current most mature adapters remain developed on the U-Net architecture. + +To address this challenge, we propose a novel adapter transfer framework, A4A (Adapter for Adapter), which utilizes an innovative all-for-all mapping approach to seamlessly transfer the intrinsic coupling between all layers of the original adapters and base models to all layers of the upgraded models, thereby enabling the transfer of more difficult and widely used attention-based adapters and facilitating cross-architecture model transfers. Specifically, A4A achieves all-for-all mapping through the Coupling Space Projection and Upgraded Space Mapping. In the Coupling Space Projection phase, all attention features of the pretrained adapter are collected, capturing the complete coupling relationship between the adapter and the base model, and then projected into a unified space. The coupling relationship treats the features of all layers of the adapter as a unified whole, conveying a continuous representation of the new control conditions throughout the generation process, distinguishing it from isolated layer mappings. Upgraded space refers to the coupled feature space that corresponds to the upgraded model, where we randomly initialize learnable attention features to transfer the coupling relationship from the base model to the upgraded model. By integrating the reference features through the attention mechanism and aligning them with the upgraded architecture, the learnable features bridge the discrepancies between the models. + +The contributions of this work are as follows: + +1. To alleviate the limited transfer scopes, we introduce a novel all-for-all mapping approach that enables the transfer of attention-based adapters and facilitates cross-architecture model transfers. +2. 
A4A projects the adapter's complete features into the unified coupling space and bridges it with the upgraded space by fusing these features with randomly initialized learnable features through the attention mechanism.
3. Experiments on various types of personalized image creation demonstrate that A4A is an effective attention-based adapter transfer approach for cross-architecture models, achieving better performance than the pretrained adapter from the upgraded model with minimal training costs.

# 2. Related work

# 2.1. Latent Diffusion Models

Recent diffusion-based text-to-image models [11, 23, 26, 29] have received wide acclaim for their outstanding image fidelity and diversity. Ho et al. [11] introduced the denoising diffusion process into generation models in the seminal work DDPM; diffusion models learn the generation process through iterative denoising steps. The Latent Diffusion Model (LDM) [26] was proposed to perform the diffusion process in the latent space of a Variational Autoencoder [31]. Under the LDM architecture, two primary backbone models are employed: the convolutional U-Net [27] and the transformer [22]. While these models share a similar generation process, there are significant differences between the two. The most notable example of U-Net models is the StableDiffusion series [23, 26] (excluding SDv3.0 [6]), which consists of a symmetric encoder and decoder. The encoder is composed of multiple blocks of diverse dimensions, interconnected by down-sampling layers, and each block incorporates several attention layers [32] to fuse latents and conditions. Another series of models [4, 6, 9, 28] adopts DiT blocks [22] for denoising; the transformer model has also been widely used in recent video generation tasks [20], where it has achieved state-of-the-art results. After patchification, the resulting features are progressively processed through a series of DiT blocks, in which the features are scaled and shifted using AdaLN while maintaining the same dimension, which distinguishes them from U-Net-based models. Transformer-based models, with their flexible structure and impressive generative capabilities, have garnered significant attention due to their great potential.

# 2.2. Adapters for Text-to-Image Diffusion Models

Given the inefficiencies of fine-tuning large pretrained models, an alternative strategy is to utilize adapters, which introduce a limited number of trainable parameters while keeping the pretrained model frozen. Due to their flexibility and greater efficiency compared to fine-tuning, adapters have gained significant interest. By designing conditional modules, an adapter can introduce new control conditions, such as personalized characters, objects, layouts, and style information, to a pretrained text-to-image model. Downstream tasks for these adapters include personalized character generation (ID customization) [7, 34, 40, 42, 44], personalized object generation (IP customization) [2, 7, 13, 21, 30, 36, 37, 40, 42], attribute and layout control [3, 16, 18, 35, 38, 43], and stylization [10, 39]. As discussed in Sec. 1, these adapters can be categorized into two main types: attention-based and addition-based. In the addition-based adapter (typically, ControlNet [41]), the encoded new conditions are directly added to the outputs of the sub-blocks of the generation model.
In contrast, the attention-based adapter processes the encoded conditions through attention layers, modifying the original attention values based on both the new conditions and the text prompts. Attention-based adapters [7, 10, 30, 34, 36-40, 42-44] have received widespread attention due to their efficient and precise condition control capabilities, dominating the field of adapters.

# 2.3. Adapter Transferring

The rapid evolution and diverse architectures of text-to-image generation models place constraints on transferring the aforementioned adapters. Consequently, to accommodate new models, adapters often require retraining from scratch on these pretrained diffusion models. X-Adapter [25] designates models equipped with the well-trained adapter as base models, with the upgraded version referred to as the upgraded model. It establishes manual connections between decoder blocks of the same dimensions in both the base and upgraded models. Specifically, the decoders of SD1.5 and SDXL consist of blocks with dimensions 1280, 640, and 320, and X-Adapter [25] maps the output of the base model's blocks to the corresponding blocks of identical dimensions in the upgraded model. As a result, X-Adapter is specifically suited for SD-series models and faces challenges when transferring to transformer-based models [4, 6, 9, 22, 28]. Ctrl-Adapter [19] aims to transfer the addition-based adapter, ControlNet [41], to video generation models or upgraded image generation models. It connects the outputs of the zero-convolutional layers of both models through mappers to transfer the ControlNet.

# 3. Method

We propose a novel framework for transferring well-trained adapters from the base model to the upgraded model across architectural discrepancies, specifically between a U-Net model and a transformer model. For the sake of brevity, we define PTA as the Pre-Trained Adapter, where pretrained refers to the version that has been officially published. Additionally, we denote $M_{base}$ and $M_{up}$ as the previously mentioned base model and upgraded model. A4A first projects the extracted attention features into the unified coupling space to maintain the coupling relationship between PTAs and $M_{base}$. Then, the coupling space is mapped to the upgraded space, where learnable features integrate the reference coupling features with attention layers. These learned features are then adaptively aligned with $M_{up}$ through the Alignment component.

# 3.1. Preliminaries

Before presenting our method, we introduce the diffusion model with various backbones. The Latent Diffusion Model (LDM), which performs noise addition and denoising in the latent space $z$ of the VAE encoder, is the most prominent formulation in the open-source text-to-image community. The objective of training LDMs is:

$$
\min_{\theta} \mathcal{L}_{LDM} = \mathbb{E}_{z, \epsilon \sim \mathcal{N}(0, I), t} \left\| \epsilon - \epsilon_{\theta}(z_{t}, t, E_{t}(y_{t})) \right\|_{2}^{2}, \tag{1}
$$

where $t$ is uniformly sampled from the time steps $\{1, \dots, T\}$, $y_{t}$ denotes the conditional text prompt, and $E_{t}$ represents the text encoder. The parameterized denoising network, denoted as $\epsilon_{\theta}$, may take the form of either a U-Net model or a transformer model. The U-Net architecture consists of blocks with varying dimensions. Each U-Net block includes down-sampling or up-sampling layers along with attention layers.
The transformer model, in contrast, primarily consists of multiple DiT blocks. Since our method aims to transfer the widely used attention-based adapters, we define the cross-attention process and its associated signals as follows:

$$
\boldsymbol{q} = \boldsymbol{W}^{q} \cdot \boldsymbol{i}, \quad \boldsymbol{k} = \boldsymbol{W}^{k} \cdot \boldsymbol{c}, \quad \text{and} \quad \boldsymbol{v} = \boldsymbol{W}^{v} \cdot \boldsymbol{c}, \tag{2}
$$

where $\boldsymbol{i}$ denotes the latents of the image, and $\boldsymbol{c}$ signifies the embeddings of the condition, such as the text prompt in the original model. Additionally, $\boldsymbol{W}$ represents the weights for attention projection. Attention is conducted as follows:

$$
\operatorname{Attention}(\boldsymbol{q}, \boldsymbol{k}, \boldsymbol{v}) = \operatorname{Softmax}\left(\frac{\boldsymbol{q} \cdot \boldsymbol{k}^{T}}{\sqrt{d}}\right) \boldsymbol{v}, \tag{3}
$$

where $d$ represents the dimension of $\boldsymbol{k}$ and $\boldsymbol{v}$. Through cross-attention, the image latents and condition embeddings are comprehensively integrated.

# 3.2. Coupling Space Projection

Condition Encoder of PTA. For new control conditions $y_{n}$ beyond the original text prompt, adapters typically incorporate a condition encoder, as illustrated in Fig. 1. We denote it $E_{n}$ to distinguish it from the original text encoder $E_{t}$ of the pretrained large-scale T2I models:

$$
\boldsymbol{c}_{n} = E_{n}(y_{n}), \tag{4}
$$

![](images/a86d0fff88187fdbf456d2a9bebc8d42078a61a7a43735c068e5321b344f0df4.jpg)
Figure 1. The illustration of the Adapter for Adapter (A4A) framework. Both the base model and the upgraded model are kept frozen. (a) Coupling Space Projection: The pretrained adapter, consisting of the condition encoder and attention layers (highlighted in pink), is loaded. The adapter features $k_{i}$ and $v_{i}$ are projected into a unified coupling space, reshaping them as $K$ and $V$. (b) Upgraded Space Mapping: Randomly initialized learnable upgraded features, $\bar{K}$ and $\bar{V}$, are concatenated with $K$ and $V$ as references. The learning process of $\bar{K}$ and $\bar{V}$ bridges the discrepancy between the base model and the upgraded model. These features are then aligned, through Alignment, with the original cross-attention layers of the upgraded model, which can be a U-Net or transformer model. Best viewed in color.

where $\boldsymbol{c}_{n}$ denotes the new condition embeddings. Taking IP-Adapter [40] as an example, the image encoder external to the original generation model serves as the condition encoder. We directly integrate the pretrained condition encoder $E_{n}$ from the adapter to efficiently transfer the well-trained adapter to the upgraded model.

Attention Layers of PTA. As illustrated in Fig. 1, the encoded new condition embeddings $\boldsymbol{c}_{n}$ are processed through the attention layers that couple with the base model. We explicitly depict the attention layers associated with the pretrained adapter in the figure, denoting the weights of the $i$-th attention layer of the adapter as $W_{A,i}$ to distinguish them from the original attention weights $W$ in the base model; subscript $A$ represents the adapter, and $i$ indexes the adapter's attention layers. For instance, the fine-tuned cross-attention layers in the Decoupled Cross-Attention module of IP-Adapter [40] exemplify this; a schematic sketch follows.
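As a reading aid for Eqs. (2)-(4), the following minimal sketch shows attention-based injection in the decoupled cross-attention style of IP-Adapter; the shapes and the random, untrained weights are illustrative assumptions rather than the official implementation.

```python
# Minimal sketch of decoupled cross-attention (IP-Adapter style).
# Shapes and weights are illustrative assumptions.
import torch
import torch.nn.functional as F

def attention(q, k, v):
    # Eq. (3): scaled dot-product attention
    return F.softmax(q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5,
                     dim=-1) @ v

torch.manual_seed(0)
d = 64                          # shared feature dimension
i = torch.randn(16, d)          # image latents
c_txt = torch.randn(8, d)       # text-prompt embeddings c
c_n = torch.randn(4, d)         # new condition embeddings c_n, Eq. (4)

# frozen base-model projections (Eq. (2))
W_q, W_k, W_v = (torch.randn(d, d) for _ in range(3))
# adapter-owned projections W_A^k, W_A^v for the new condition
W_Ak, W_Av = torch.randn(d, d), torch.randn(d, d)

q = i @ W_q
z_text = attention(q, c_txt @ W_k, c_txt @ W_v)  # original text branch
z_cond = attention(q, c_n @ W_Ak, c_n @ W_Av)    # adapter condition branch
z = z_text + z_cond                              # decoupled sum of branches
print(z.shape)                                   # torch.Size([16, 64])
```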
$c_{n}$ are sequentially fed into the aforementioned adapter's attention layers $W_{A,i}$:

$$
\boldsymbol{k}_{i} = \boldsymbol{W}_{A,i}^{k}(\boldsymbol{c}_{n}), \quad \boldsymbol{k}_{i} \in \mathbb{R}^{N \times d_{i}}, \tag{5}
$$

$$
\boldsymbol{v}_{i} = \boldsymbol{W}_{A,i}^{v}(\boldsymbol{c}_{n}), \quad \boldsymbol{v}_{i} \in \mathbb{R}^{N \times d_{i}}, \tag{6}
$$

where $d_{i}$ is the dimension of the feature, and $N$ denotes the number of tokens for the features $\boldsymbol{k}_{i}$ and $\boldsymbol{v}_{i}$. To extract the attention features from $\boldsymbol{c}_{n}$, we employ the attention layers of the adapter rather than utilizing the entire base model.

Projection onto the Coupling Space. The features obtained from multiple layers form two sequences of length $l$, $[\boldsymbol{k}_1, \boldsymbol{k}_2, \dots, \boldsymbol{k}_l]$ and $[\boldsymbol{v}_1, \boldsymbol{v}_2, \dots, \boldsymbol{v}_l]$, where $l$ represents the number of attention layers of the PTA. The dimensions $d_i$ of the features vary, as shown in Fig. 1. To achieve the all-for-all mapping for the attention-based adapter from $M_{base}$ to $M_{up}$, the features of all cross-attention layers need to be projected into a unified coupling space $S_{co}$, which is defined by the smallest common multiple $d_{scm}$ of all dimensions $d_{i}$:

$$
\boldsymbol{K} = \operatorname{Proj}([\boldsymbol{k}_{1}, \boldsymbol{k}_{2}, \dots, \boldsymbol{k}_{l}]), \quad \boldsymbol{K} \in \mathbb{R}^{l \times N \times d_{scm}}, \tag{7}
$$

$$
\boldsymbol{V} = \operatorname{Proj}([\boldsymbol{v}_{1}, \boldsymbol{v}_{2}, \dots, \boldsymbol{v}_{l}]), \quad \boldsymbol{V} \in \mathbb{R}^{l \times N \times d_{scm}}. \tag{8}
$$

The projection module consists of several linear layers designed to map the dimensions $d_{i}$ to $d_{scm}$. After the sequences are projected into $S_{co}$, they are reshaped into the matrices $\boldsymbol{K}$ and $\boldsymbol{V}$. Mapping to the $S_{co}$ defined by $d_{scm}$ strikes the best balance between efficiency and effectiveness; Sec. 7.3 in the supplementary material demonstrates this through experiments. It reduces computational complexity by aligning features to a common dimension, avoiding the overhead of larger spaces, while maintaining sufficient representational capacity to prevent the loss of important information that can occur with smaller spaces. This ensures effective feature alignment without excessive resource usage, making it an optimal choice for both computational efficiency and model performance.

# 3.3. Upgraded Space Mapping

Similarly, we define the number of cross-attention layers in the upgraded model as $\bar{l}$, and the space for upgraded models as $S_{up}$. To transfer the unified adapter features to the upgraded space, inspired by BLIP-2 [17], we randomly initialize two learnable parameters $\bar{\boldsymbol{K}} \in \mathbb{R}^{\bar{l} \times N \times d_{scm}}$ and $\bar{\boldsymbol{V}} \in \mathbb{R}^{\bar{l} \times N \times d_{scm}}$. Given that the attention layer is effective for integrating features, we adopt this architecture to learn these parameters. Consider the learning of $\bar{\boldsymbol{K}}$ as an example. To enhance robustness, we first normalize $\bar{\boldsymbol{K}}$ and $\boldsymbol{K}$ using layer normalization. Then, the following operation is performed:

$$
\bar{\boldsymbol{K}} = \operatorname{FFN}\Big[\boldsymbol{W}^{out} \cdot \operatorname{Attention}\big(\bar{\boldsymbol{K}}\boldsymbol{W}_{1}^{k},\; [\boldsymbol{K}, \bar{\boldsymbol{K}}]\boldsymbol{W}_{2}^{k},\; [\boldsymbol{K}, \bar{\boldsymbol{K}}]\boldsymbol{W}_{3}^{k}\big)\Big], \tag{9}
$$

where $[\boldsymbol{K}, \bar{\boldsymbol{K}}]$ denotes the concatenation of $\boldsymbol{K}$ and $\bar{\boldsymbol{K}}$ along dimension $N$. The projection weights $\boldsymbol{W}_{1}^{k}, \boldsymbol{W}_{2}^{k}, \boldsymbol{W}_{3}^{k} \in \mathbb{R}^{d_{scm} \times d_{in}}$ map $\bar{\boldsymbol{K}}$ and $\boldsymbol{K}$, respectively, to the intermediate space with dimension $d_{in}$. Following the processing of the features using the Attention operation described in Eq. (3), they are subsequently transformed back to the original space via $\boldsymbol{W}^{out}$. The Feed-Forward Network (FFN) is composed of layer normalization, linear transformations, GELU activation, and additional linear layers arranged sequentially. The process of Eq. (9) is iterated $R$ times, with $R$ serving as a hyperparameter. The learning process of $\bar{\boldsymbol{V}}$ is similar to that of $\bar{\boldsymbol{K}}$:

$$
\bar{\boldsymbol{V}} = \operatorname{FFN}\Big[\boldsymbol{W}^{out} \cdot \operatorname{Attention}\big(\bar{\boldsymbol{V}}\boldsymbol{W}_{1}^{v},\; [\boldsymbol{V}, \bar{\boldsymbol{V}}]\boldsymbol{W}_{2}^{v},\; [\boldsymbol{V}, \bar{\boldsymbol{V}}]\boldsymbol{W}_{3}^{v}\big)\Big]. \tag{10}
$$

We clarify that two distinct modules with an identical structure are responsible for learning $\bar{\boldsymbol{K}}$ and $\bar{\boldsymbol{V}}$, respectively. Throughout this process, the learnable features $\bar{\boldsymbol{K}}$ and $\bar{\boldsymbol{V}}$ are seamlessly integrated with the adapter features $\boldsymbol{K}$ and $\boldsymbol{V}$ of the PTA, effectively bridging the unified coupling space to the upgraded space.

Alignment with the Upgraded Model. The learned features in the upgraded space should be aligned with the attention layers of $M_{up}$ to fully leverage the adapter's capabilities. Specifically, we first fetch the dimensions $\bar{d}_i$ of the cross-attention layers within the original upgraded model. Then, we employ a linear layer to align the $i$-th row of the matrix $\bar{\boldsymbol{K}}$ to $\bar{d}_i$ dimensions through Alignment, which we denote as $\bar{\boldsymbol{k}}_i$:

$$
[\bar{\boldsymbol{k}}_{1}, \dots, \bar{\boldsymbol{k}}_{\bar{l}}] = \operatorname{Align}(\bar{\boldsymbol{K}}). \tag{11}
$$

Similarly, the identical operation is applied to the $\bar{\boldsymbol{V}}$ matrix in order to derive the vectors $\bar{\boldsymbol{v}}_i$:

$$
[\bar{\boldsymbol{v}}_{1}, \dots, \bar{\boldsymbol{v}}_{\bar{l}}] = \operatorname{Align}(\bar{\boldsymbol{V}}). \tag{12}
$$

For the $i$-th cross-attention layer, let $\boldsymbol{q}_i^{up}$, $\boldsymbol{k}_i^{up}$, and $\boldsymbol{v}_i^{up}$ be the original features. The extracted features, $\bar{\boldsymbol{k}}_i$ and $\bar{\boldsymbol{v}}_i$, are combined through linear weighting and summation with the prior values of the upgraded model using the Attention operation of Eq. (3):

$$
\bar{\boldsymbol{Z}} = \operatorname{Attention}(\boldsymbol{q}_{i}^{up}, \boldsymbol{k}_{i}^{up}, \boldsymbol{v}_{i}^{up}) + \lambda \operatorname{Attention}(\boldsymbol{q}_{i}^{up}, \bar{\boldsymbol{k}}_{i}, \bar{\boldsymbol{v}}_{i}). \tag{13}
$$

The result of Eq. (13), $\bar{\boldsymbol{Z}}$, serves as the output of the $i$-th cross-attention layer and is forwarded to the next layer of the upgraded model.
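Putting Eqs. (5)-(13) together, the compact sketch below traces the $\boldsymbol{K}$ branch (the $\boldsymbol{V}$ branch is symmetric). Since no reference code is published with the paper, the layer counts, dimensions, single-head attention, and the flattening of the $l$ and $\bar{l}$ axes into one token axis are all illustrative assumptions.

```python
# Compact sketch of A4A's K branch (Eqs. (5)-(13)). All dimensions,
# the axis flattening, and the module layout are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

l, l_up = 3, 4                # adapter / upgraded-model cross-attn layers
N, d_scm, d_in = 4, 128, 64   # tokens, unified dim, intermediate attn dim
dims = [32, 64, 128]          # adapter feature dims d_i
dims_up = [48, 96, 96, 192]   # upgraded-model cross-attn dims

def attention(q, k, v):       # Eq. (3)
    return F.softmax(q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5,
                     dim=-1) @ v

class A4AKBranch(nn.Module):
    def __init__(self):
        super().__init__()
        self.proj = nn.ModuleList(nn.Linear(d, d_scm) for d in dims)  # Eq. (7)
        self.K_bar = nn.Parameter(torch.randn(l_up, N, d_scm))        # learnable K-bar
        self.ln_ref = nn.LayerNorm(d_scm)
        self.ln_bar = nn.LayerNorm(d_scm)
        self.W1 = nn.Linear(d_scm, d_in, bias=False)   # W_1^k
        self.W2 = nn.Linear(d_scm, d_in, bias=False)   # W_2^k
        self.W3 = nn.Linear(d_scm, d_in, bias=False)   # W_3^k
        self.W_out = nn.Linear(d_in, d_scm, bias=False)
        self.ffn = nn.Sequential(nn.LayerNorm(d_scm),
                                 nn.Linear(d_scm, d_scm), nn.GELU(),
                                 nn.Linear(d_scm, d_scm))
        self.align = nn.ModuleList(nn.Linear(d_scm, d) for d in dims_up)  # Eq. (11)

    def forward(self, k_feats, R=2):
        # k_feats[i]: adapter feature k_i of shape (N, d_i), Eq. (5)
        K = torch.stack([p(f) for p, f in zip(self.proj, k_feats)])  # (l, N, d_scm)
        ref = self.ln_ref(K).reshape(-1, d_scm)   # flatten l x N reference tokens
        bar = self.K_bar.reshape(-1, d_scm)       # flatten l_up x N learned tokens
        for _ in range(R):                        # Eq. (9), iterated R times
            q = self.ln_bar(bar)
            ctx = torch.cat([ref, q])             # [K, K-bar] concatenation
            bar = self.ffn(self.W_out(
                attention(self.W1(q), self.W2(ctx), self.W3(ctx))))
        bar = bar.view(l_up, N, d_scm)
        return [a(bar[i]) for i, a in enumerate(self.align)]  # per-layer k-bar_i

k_bars = A4AKBranch()([torch.randn(N, d) for d in dims])
print([k.shape for k in k_bars])   # one (N, d_i) tensor per upgraded layer
```

At inference, each aligned pair $(\bar{\boldsymbol{k}}_i, \bar{\boldsymbol{v}}_i)$ would feed the second attention term of Eq. (13), scaled by $\lambda$, inside the $i$-th cross-attention layer of $M_{up}$.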
The parameter $\lambda$ serves as a balancing factor, fixed at 1.0 during training and subsequently adjusted for downstream tasks during inference.

# 3.4. Optimization Loss Function

To optimize the A4A framework, we employ the loss function $\mathcal{L}_{LDM}$ of the upgraded model $M_{up}$ as defined in Eq. (1):

$$
\mathcal{L}_{LDM} = \mathbb{E}_{z, \epsilon \sim \mathcal{N}(0, I), t} \left\| \epsilon - \epsilon_{\theta}^{up}(z_{t}, t, c^{up}) \right\|_{2}^{2}, \tag{14}
$$

where the condition $c^{up}$ includes the original text prompt $y_{t}$ and the new control conditions $y_{n}$. The upgraded model $\epsilon_{\theta}^{up}$ acquires new control conditions and capabilities through the injected learned adapter features.

The base model and upgraded model remain frozen. The loss function is exclusively used to update the A4A module, which consists of the network with learnable features and, optionally, the PTA for fine-tuning. To optimize model training, given the varying numbers of parameters in each trainable module, we adopt an asynchronous training strategy with module-specific learning rates. Specifically, for training the projection in $S_{co}$ and the alignment in $S_{up}$, a learning rate of $1 \times 10^{-5}$ is employed to avoid overfitting, while a learning rate of $1 \times 10^{-4}$ is applied to the other components. If the pretrained adapter is fine-tuned, a smaller learning rate of $1 \times 10^{-6}$ is used to effectively retain the PTA's conditional encoding capabilities.

# 4. Experiments

# 4.1. Experimental Settings

Datasets. In our study, we utilize the CelebAMask-HQ dataset [15], which consists of approximately 20,000 high-quality facial images, each with a resolution of $1024 \times 1024$ pixels, and the OpenImages dataset [14], which offers a diverse collection of images featuring a wide range of clearly identifiable objects. We employ the BLIP-2 model [17] to generate captions for the aforementioned datasets, which serve as text prompts paired with the images. For validation, we randomly select 100 images from CelebAMask-HQ, ensuring they are distinct from those in the training set, and generate four images for each reference image.

Implementation Details. We implement A4A using SD1.5 [26] as the base model, and SDXL [23] and Pixart-Alpha (XL) [4] as the upgraded models. Both SDXL and Pixart-Alpha are significantly larger text-to-image models compared to SD1.5, and we use them as upgraded models for the U-Net and transformer architectures, respectively. In this paper, we utilize the IP-Adapter [40] as our pretrained adapter. The IP-Adapter series has recently demonstrated remarkable capabilities, garnering significant interest for its ability to enhance personalization in generative models, and its performance across a variety of tasks highlights its potential to advance text-to-image generation. Due to variations in GPU types across compared methods and the absence of comprehensive GPU-hour reports, we use Sample Count (SC), which represents the number of samples processed up to a specific time point, as a metric for evaluating training costs and efficiency. For example, the officially published PTA has an SC of 64M (i.e., the PTA is trained with 8 V100 GPUs for 1M steps with a batch size of 8 per GPU). The terms in the following charts are defined as: (1) A4A (ours): transferring the PTA from the base model to the upgraded model using our method A4A; (2) X-Adapter: transferring the PTA using the published X-Adapter [25] checkpoint; (3) PTA-UM: the officially published pretrained adapter from the upgraded model.

![](images/66b93cd27e2503d438b30bfa123b5769814d45f9d50523f09cd82bea5f899aa3.jpg)
Figure 2. The visualization of personalized human generation using SDXL with the U-Net architecture as the upgraded model. A4A (ours) is compared with the previous work X-Adapter and the pretrained adapter from the upgraded model (PTA-UM).

![](images/9ce9673e3ffcd9fd32f55633093dce31a2d23a1288de851c5e4c5d47441de51a.jpg)
Figure 3. The visualization of personalized object generation using SDXL with the U-Net architecture as the upgraded model. A4A (ours) is compared with the previous work X-Adapter and the pretrained adapter from the upgraded model (PTA-UM).

Evaluation Metrics. To verify the effectiveness and efficiency of our method, we evaluate A4A on two tasks: personalized human generation (ID customization) and personalized object generation (IP customization). For ID customization, we utilize IDentity Alignment scores (IDA) to measure the similarity between generated and reference facial features, following the OMG method [12]. Specifically, we employ the Antelopev2 model from the InsightFace library [8] to detect faces and extract facial embeddings from both reference and generated images. For IP customization, we extract image embeddings using the pretrained CLIP model [24] and report the embedding similarity between generated and reference images, referred to as the CLIP-score. In the graphs, we use the previously defined SC as the horizontal axis to represent the training process.

![](images/fe64143bd4cf19a61c1bf3199a22b1d70b415ba7436a7f604f6aa56a82aa538b.jpg)
Figure 4. The visualization of transferring the IP-Adapter from SD1.5 (U-Net architecture) to Pixart-Alpha (transformer architecture). The middle line, framed in orange, serves as the reference for comparison. The left side shows the A4A effect, which closely resembles the reference, while the right side (IP-Adapter*) displays the results of training IP-Adapter from scratch. (Both the "A4A(ours)" and "IP-Adapter*" panels use the prompt "Photo of a man/woman standing in a garden, dressed in casual clothing.")

# 4.2. Transferring to the U-Net model

We use SDXL as the upgraded model and compare our results with the officially published PTA-UM [40] (represented by the green star in Fig. 5). To facilitate comparison, we extend a horizontal line from this point to represent the performance of PTA-UM, rather than the actual training curve. As illustrated in Fig. 5, transferring the PTA using A4A achieves an IDA comparable to that of the PTA-UM at approximately SC 0.5M.
Furthermore, as shown in Fig. 2, the rows "A4A (ours)" and "PTA-UM" demonstrate that A4A not only preserves the identity of the characters but also maintains the editing capabilities of the upgraded model with minimal training cost.

For personalized object generation (IP customization), as shown in Fig. 6, transferring the PTA using A4A achieves a CLIP-score comparable to that of PTA-UM at approximately 2.5M SC, compared to 64M SC for PTA-UM. As illustrated in Fig. 3, the rows labeled "A4A (ours)" and "PTA-UM" demonstrate that the control ability guided by the text prompt is also preserved. When the attention-based adapter is transferred to the upgraded U-Net model, A4A effectively retains and transfers the adapter's capabilities with minimal training cost. The quantitative indicators corresponding to the line charts are shown in Tab. 1.

![](images/aa1bc97f37480ed25043a456cef3b14af810c985f912776542fd091739d160a3.jpg)
Figure 5. The graph shows the IDA of ID preservation when transferring to SDXL, compared to the pretrained adapter from SDXL (PTA-UM). The horizontal axis is in units of M.

![](images/c93ac3380b00ea9ed059918b0e1a147a89f8ee44b8013b744a536d2b44511e41.jpg)
Figure 6. The graph shows the CLIP-score of personalized object generation when transferring to SDXL, compared to PTA-UM. The horizontal axis is in units of M.

# 4.3. Transferring to the Transformer model

We use Pixart-Alpha [4] with the transformer architecture as the upgraded model. Given the absence of a corresponding published version of the IP-Adapter for Pixart-Alpha, we train it from scratch using the CelebAMask-HQ dataset as our baseline, represented by the orange line labeled "IP-Adapter*" in Fig. 7 and Fig. 4. As illustrated in Fig. 7, employing A4A to transfer pretrained adapters to transformer models offers a significant advantage over training adapters directly on the transformer models. It is worth noting that our work is the first to transfer an adapter from a U-Net model to a transformer model, achieving strong results. Fig. 4 presents a visualization comparing our method with training the IP-Adapter from scratch, both at 30k steps with a batch size of 8. The images generated by A4A show significant facial similarity to the reference image, while the images on the right do not yield comparable results.

![](images/576d606fe6975be35450003a2984ee536f2e2db225ca61a8629aba5825bef934.jpg)
Figure 7. The IDA of transferring the pretrained IP-Adapter from SD1.5 to Pixart-Alpha using A4A (red line with dots), compared to training the IP-Adapter from scratch (orange line with stars). The horizontal axis is in units of K. Best viewed in color.

# 4.4. Ablation Study

In the previous sections, we adopted the A4A paradigm, which includes training our modules, namely Coupling Space Projection and Upgraded Space Mapping, along with fine-tuning the PTA. To demonstrate that fine-tuning is not the core driving force of our method, we conducted the following experiment. As shown in Fig. 8, the two curves are very close, with the fine-tuning paradigm showing only a slight improvement over the non-fine-tuning version.

We present experiments on the hyperparameter settings of the learning rates for each component in Sec. 7 of the supplementary material. It is worth mentioning that, since Projection and Alignment have similar numbers of parameters, we group them together. The ablation study of the two core modules is presented in Sec.
8 of the supplementary material, which demonstrates that our design achieves a satisfactory transfer effect with an efficient structure.

![](images/973239fcf626e413ae96724f8697f9c406221073a8f100f655bb295998e01bc3.jpg)
Figure 8. The yellow line, labeled "A4A (ours w/o fine-tuning)", represents training A4A without fine-tuning the PTA, while the red line represents the full A4A approach with fine-tuning the PTA.
| PTA: IP-Adapter | IDA↑ | CLIP-score↑ |
| --- | --- | --- |
| X-Adapter | 0.062 | 0.7894 |
| PTA-UM | 0.4531 | 0.9124 |
| A4A(ours) | 0.5127 | 0.9154 |
+ +Table 1. The evaluation metrics for IP customization (CLIP-score) and ID customization (IDA) are presented. Using SDXL as the upgraded model and IP-Adapter as the pretrained adapter, A4A (ours) is compared with the transfer method X-Adapter and the pretrained adapter from the upgraded model (PTA-UM). + +# 4.5. Comparison with X-Adapter + +The previous work X-Adapter [25] is designed specifically for transferring adapters from U-Net models. To demonstrate the effectiveness of A4A, we also compare it with X-Adapter. As shown in Tab. 1, our method achieves better results in generation using the transferred adapter for both IP and ID customization. The visualizations in Figure 2 and Fig. 3, particularly the row labeled "X-Adapter", further substantiate this when compared to the adjacent rows. It is also worth noting that our method requires only the adapter for training and inference, without the need for denoising using the base model as in X-Adapter, which makes it more efficient. + +# 5. Conclusion + +We propose A4A (Adapter for Adapter), a novel framework designed to address the challenges of transferring pretrained adapters across rapidly evolving model architectures. By employing an all-for-all mapping approach, A4A seamlessly transfers attention-based adapters from U-Net to transformer models without the need for extensive retraining. The framework's two key components, Coupling Space Projection and Upgraded Space Mapping, enable effective bridging of adapter features with upgraded model structures. Our experimental results demonstrate that A4A preserves both the generative power of upgraded models and the controllability of the original adapters. This work offers a scalable solution for cross-architecture adapter transfer. + +# 6. Acknowledgment + +This research is supported by Artificial Intelligence National Science and Technology Major Project 2023ZD0121200, and National Natural Science Foundation of China under Grant 62222212 and 623B2094. + +# References + +[1] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023. 2 +[2] Yogesh Balaji, Seungjun Nah, Xun Huang, Arash Vahdat, Jiaming Song, Qinsheng Zhang, Karsten Kreis, Miika Aittala, Timo Aila, Samuli Laine, et al. ediff-i: Text-to-image diffusion models with an ensemble of expert denoisers. arXiv preprint arXiv:2211.01324, 2022. 3 +[3] Marco Bellagente, Manuel Brack, Hannah Teufel, Felix Friedrich, Björn Deiseroth, Constantin Eichenberg, Andrew M Dai, Robert Baldock, Souradeep Nanda, Koen Oostermeijer, et al. Multifusion: Fusing pre-trained models for multi-lingual, multi-modal image generation. Advances in Neural Information Processing Systems, 36:59502-59521, 2023. 3 +[4] Junsong Chen, Jincheng Yu, Chongjian Ge, Lewei Yao, Enze Xie, Yue Wu, Zhongdao Wang, James Kwok, Ping Luo, Huchuan Lu, et al. Pixart-alpha: Fast training of diffusion transformer for photorealistic text-to-image synthesis. arXiv preprint arXiv:2310.00426, 2023. 2, 3, 5, 8 +[5] Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. Advances in neural information processing systems, 34:8780-8794, 2021. 1 +[6] Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas Müller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, et al. Scaling rectified flow transformers for high-resolution image synthesis. 
In Forty-first International Conference on Machine Learning, 2024. 2, 3 +[7] Rinon Gal, Moab Arar, Yuval Atzmon, Amit H Bermano, Gal Chechik, and Daniel Cohen-Or. Encoder-based domain tuning for fast personalization of text-to-image models. ACM Transactions on Graphics (TOG), 42(4):1-13, 2023. 3 +[8] Jia Guo, Jiankang Deng, Xiang An, Jack Yu, and Baris Gecer. Insightface: 2d and 3d face analysis project, 2019. 6 +[9] Thomas A Halgren, Robert B Murphy, Richard A Friesner, Hege S Beard, Leah L Frye, W Thomas Pollard, and Jay L Banks. Glide: a new approach for rapid, accurate docking and scoring. 2. enrichment factors in database screening. Journal of medicinal chemistry, 47(7):1750-1759, 2004. 2, 3 +[10] Amir Hertz, Andrey Voynov, Shlomi Fruchter, and Daniel Cohen-Or. Style aligned image generation via shared attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4775–4785, 2024. 3 + +[11] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in neural information processing systems, 33:6840-6851, 2020. 1, 2 +[12] Zhe Kong, Yong Zhang, Tianyu Yang, Tao Wang, Kaihao Zhang, Bizhu Wu, Guanying Chen, Wei Liu, and Wenhan Luo. Omg: Occlusion-friendly personalized multi-concept generation in diffusion models. arXiv preprint arXiv:2403.10983, 2024.6 +[13] Nupur Kumari, Bingliang Zhang, Richard Zhang, Eli Shechtman, and Jun-Yan Zhu. Multi-concept customization of text-to-image diffusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1931-1941, 2023. 3 +[14] Alina Kuznetsova, Hassan Rom, Neil Alldrin, Jasper Uijlings, Ivan Krasin, Jordi Pont-Tuset, Shahab Kamali, Stefan Popov, Matteo Malloci, Alexander Kolesnikov, et al. The open images dataset v4: Unified image classification, object detection, and visual relationship detection at scale. International Journal of Computer Vision, 128(7):1956-1981, 2020. 5 +[15] Cheng-Han Lee, Ziwei Liu, Lingyun Wu, and Ping Luo. Maskgan: Towards diverse and interactive facial image manipulation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 5 +[16] Yuseung Lee and Minhyuk Sung. Reground: Improving textual and spatial grounding at no cost. arXiv preprint arXiv:2403.13589, 2024. 3 +[17] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In International conference on machine learning, pages 19730–19742. PMLR, 2023. 4, 5 +[18] Yuheng Li, Haotian Liu, Qingyang Wu, Fangzhou Mu, Jianwei Yang, Jianfeng Gao, Chunyuan Li, and Yong Jae Lee. Gligen: Open-set grounded text-to-image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22511-22521, 2023. 3 +[19] Han Lin, Jaemin Cho, Abhay Zala, and Mohit Bansal. Ctrl-adapter: An efficient and versatile framework for adapting diverse controls to any diffusion model. arXiv preprint arXiv:2404.09967, 2024. 1, 3 +[20] Yixin Liu, Kai Zhang, Yuan Li, Zhiling Yan, Chujie Gao, Ruoxi Chen, Zhengqing Yuan, Yue Huang, Hanchi Sun, Jianfeng Gao, et al. Sora: A review on background, technology, limitations, and opportunities of large vision models. arXiv preprint arXiv:2402.17177, 2024. 2 +[21] Xichen Pan, Li Dong, Shaohan Huang, Zhiliang Peng, Wenhu Chen, and Furu Wei. Kosmos-g: Generating images in context with multimodal large language models. arXiv preprint arXiv:2310.02992, 2023. 3 +[22] William Peebles and Saining Xie. 
Scalable diffusion models with transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4195-4205, 2023. 1, 2, 3
[23] Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. Sdxl: Improving latent diffusion models for high-resolution image synthesis. arXiv preprint arXiv:2307.01952, 2023. 1, 2, 5
[24] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748-8763. PMLR, 2021. 7
[25] Lingmin Ran, Xiaodong Cun, Jia-Wei Liu, Rui Zhao, Song Zijie, Xintao Wang, Jussi Keppo, and Mike Zheng Shou. X-adapter: Adding universal compatibility of plugins for upgraded diffusion model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8775-8784, 2024. 1, 3, 6, 8
[26] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684-10695, 2022. 1, 2, 5
[27] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In Medical image computing and computer-assisted intervention-MICCAI 2015: 18th international conference, Munich, Germany, October 5-9, 2015, proceedings, part III 18, pages 234-241. Springer, 2015. 1, 2
[28] Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al. Photorealistic text-to-image diffusion models with deep language understanding. Advances in neural information processing systems, 35:36479-36494, 2022. 2, 3
[29] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. arXiv preprint arXiv:2011.13456, 2020. 2
[30] Yoad Tewel, Rinon Gal, Gal Chechik, and Yuval Atzmon. Key-locked rank one editing for text-to-image personalization. In ACM SIGGRAPH 2023 Conference Proceedings, pages 1-11, 2023. 3
[31] Aaron Van Den Oord, Oriol Vinyals, et al. Neural discrete representation learning. Advances in neural information processing systems, 30, 2017. 2
[32] Ashish Vaswani et al. Attention is all you need. Advances in Neural Information Processing Systems, 2017. 2
[33] Haofan Wang, Matteo Spinelli, Qixun Wang, Xu Bai, Zekui Qin, and Anthony Chen. Instantstyle: Free lunch towards style-preserving in text-to-image generation. arXiv preprint arXiv:2404.02733, 2024. 3
[34] Qixun Wang, Xu Bai, Haofan Wang, Zekui Qin, Anthony Chen, Huaxia Li, Xu Tang, and Yao Hu. Instantid: Zero-shot identity-preserving generation in seconds. arXiv preprint arXiv:2401.07519, 2024. 3
[35] Xudong Wang, Trevor Darrell, Sai Saketh Rambhatla, Rohit Girdhar, and Ishan Misra. Instancediffusion: Instance-level control for image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6232-6242, 2024. 3
[36] Yuxiang Wei, Yabo Zhang, Zhilong Ji, Jinfeng Bai, Lei Zhang, and Wangmeng Zuo. Elite: Encoding visual concepts into textual embeddings for customized text-to-image generation.
In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 15943-15953, 2023. 3 +[37] Zhichao Wei, Qingkun Su, Long Qin, and Weizhi Wang. Mm-diff: High-fidelity image personalization via multi-modal condition integration. arXiv preprint arXiv:2403.15059, 2024. 3 +[38] Yinwei Wu, Xianpan Zhou, Bing Ma, Xuefeng Su, Kai Ma, and Xinchao Wang. Ifadapter: Instance feature control for grounded text-to-image generation. arXiv preprint arXiv:2409.08240, 2024. 3 +[39] Peng Xing, Haofan Wang, Yanpeng Sun, Qixun Wang, Xu Bai, Hao Ai, Renyuan Huang, and Zechao Li. Csgo: Content-style composition in text-to-image generation. arXiv preprint arXiv:2408.16766, 2024. 3 +[40] Hu Ye, Jun Zhang, Sibo Liu, Xiao Han, and Wei Yang. Ip-adapter: Text compatible image prompt adapter for text-to-image diffusion models. arXiv preprint arXiv:2308.06721, 2023. 3, 4, 5, 7, 1 +[41] Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3836-3847, 2023. 1, 3 +[42] Yuxuan Zhang, Yiren Song, Jiaming Liu, Rui Wang, Jinpeng Yu, Hao Tang, Huaxia Li, Xu Tang, Yao Hu, Han Pan, et al. Ssr-encoder: Encoding selective subject representation for subject-driven generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8069–8078, 2024. 3 +[43] Dewei Zhou, You Li, Fan Ma, Xiaoting Zhang, and Yi Yang. Mige: Multi-instance generation controller for text-to-image synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6818-6828, 2024. 3 +[44] Zhengguang Zhou, Jing Li, Huaxia Li, Nemo Chen, and Xu Tang. Storymaker: Towards holistic consistent characters in text-to-image generation. 
arXiv preprint arXiv:2409.12576, 2024.3 \ No newline at end of file diff --git a/2025/A4A_ Adapter for Adapter Transfer via All-for-All Mapping for Cross-Architecture Models/images.zip b/2025/A4A_ Adapter for Adapter Transfer via All-for-All Mapping for Cross-Architecture Models/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..bd28620a0895cb48281e00f63514201cd0a879ad --- /dev/null +++ b/2025/A4A_ Adapter for Adapter Transfer via All-for-All Mapping for Cross-Architecture Models/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d3bb4a67c61ce16809a95ff9b26493bd7e1e1ef60b0e7076ed31b6b1410d9850 +size 744922 diff --git a/2025/A4A_ Adapter for Adapter Transfer via All-for-All Mapping for Cross-Architecture Models/layout.json b/2025/A4A_ Adapter for Adapter Transfer via All-for-All Mapping for Cross-Architecture Models/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..2dbf61b18f84fc8c8625f919ef53f1df62da1c15 --- /dev/null +++ b/2025/A4A_ Adapter for Adapter Transfer via All-for-All Mapping for Cross-Architecture Models/layout.json @@ -0,0 +1,10029 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 109, + 103, + 501, + 138 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 109, + 103, + 501, + 138 + ], + "spans": [ + { + "bbox": [ + 109, + 103, + 501, + 138 + ], + "type": "text", + "content": "A4A: Adapter for Adapter Transfer via All-for-All Mapping for Cross-Architecture Models" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 149, + 161, + 455, + 190 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 149, + 161, + 455, + 190 + ], + "spans": [ + { + "bbox": [ + 149, + 161, + 455, + 190 + ], + "type": "text", + "content": "Keyu Tu, Mengqi Huang, Zhuowei Chen, Zhendong Mao* \nUniversity of Science and Technology of China" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 134, + 191, + 473, + 204 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 134, + 191, + 473, + 204 + ], + "spans": [ + { + "bbox": [ + 134, + 191, + 473, + 204 + ], + "type": "text", + "content": "{kytu,huangmq,chenzw01}@mail.ustc.edu.cn,{zdmao}@ustc.edu.cn" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 151, + 231, + 200, + 243 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 151, + 231, + 200, + 243 + ], + "spans": [ + { + "bbox": [ + 151, + 231, + 200, + 243 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 56, + 256, + 296, + 567 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 256, + 296, + 567 + ], + "spans": [ + { + "bbox": [ + 56, + 256, + 296, + 567 + ], + "type": "text", + "content": "Large-scale text-to-image models evolve rapidly in size and architecture. The existing adapters struggle to keep pace with these models, requiring extensive retraining. This paper proposes a novel adapter transfer framework, A4A (Adapter for Adapter), which uses an all-for-all mapping approach to seamlessly transfer attention-based adapters across different model architectures (e.g., U-Net to transformer). The framework consists of Coupling Space Projection and Upgraded Space Mapping. During Coupling Space Projection, all attention features of the pretrained adapter are aggregated to fully capture the coupling relationship before being projected into a unified space. 
The unified space maintains coupling features in a consistent dimension, effectively and efficiently addressing feature scale discrepancies arising from the base model's architecture. In the Upgraded Space Mapping Module, randomly initialized learnable features are introduced to connect the unified and upgraded spaces by integrating reference features via the attention mechanism. The learned features are adaptively injected into the upgrade model through the Alignment module, which bridges the discrepancies between the models using the all-for-all mapping. Experimental results on personalized image generation tasks demonstrate that A4A outperforms previous methods in transferring adapters while being the first to achieve adapter transfer across model architectures." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 56, + 590, + 135, + 604 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 590, + 135, + 604 + ], + "spans": [ + { + "bbox": [ + 56, + 590, + 135, + 604 + ], + "type": "text", + "content": "1. Introduction" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 611, + 295, + 696 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 611, + 295, + 696 + ], + "spans": [ + { + "bbox": [ + 55, + 611, + 295, + 696 + ], + "type": "text", + "content": "Recent advancements in large-scale text-to-image diffusion models [5, 11, 23, 26] have significantly improved their ability to generate high-quality, realistic images based on user-friendly textual prompts. Building on these generative capabilities, numerous adapters have been developed upon these pretrained models to further endow them with new control conditions, such as pose and human identity control," + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 232, + 555, + 422 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 232, + 555, + 422 + ], + "spans": [ + { + "bbox": [ + 313, + 232, + 555, + 422 + ], + "type": "text", + "content": "thereby fostering the growth of downstream real-world applications like personalized image creation. As pretrained models rapidly evolve, with increasing parameters (e.g., from SD1.5's 860M to SDXL's 2600M) and developing architectures (e.g., from the convolution U-Net [27] to the transformer [22]), the original adapters built on base models require substantial resources for retraining and significant effort for redesign to accommodate upgraded models. This leads to a lag in adapter development compared to the progression of upgraded models1. Therefore, the adapter transfer task, i.e., effectively and efficiently transferring existing adapters from base models to upgraded models to leverage the strong control capabilities of the original well-developed adapters and the superior generative abilities of the upgraded models, has become an increasingly important and urgent requirement in both academia and industry." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 425, + 556, + 615 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 425, + 556, + 615 + ], + "spans": [ + { + "bbox": [ + 313, + 425, + 556, + 615 + ], + "type": "text", + "content": "Given the substantial potential benefits of adapter transfer, several prior studies have been conducted in this field. For instance, Ctrl-Adapter [19] has been proposed to transfer the ControlNet [41] architecture by fusing the output of the zero-convolution from the pretrained ControlNet into the corresponding layer in the upgraded model. 
Concurrently, X-Adapter [25] has been explored for mapping the latents from the base model's decoder blocks and adding them to the corresponding location within the upgraded model's decoder. In summary, existing adapter transfer methods primarily focus on addition-based adapters (i.e., the control conditions are injected into the pretrained models by simple addition, typically ControlNet), through a layer-by-layer mapping, i.e., the output of each layer in the base model's adapter is mapped to the semantically equivalent layer in the upgraded model." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 616, + 555, + 654 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 616, + 555, + 654 + ], + "spans": [ + { + "bbox": [ + 313, + 616, + 555, + 654 + ], + "type": "text", + "content": "However, in this study, we argue that the existing layer-by-layer mapping fails to fully exploit the coupling between original adapters and base models to effectively bridge the" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "spans": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "text", + "content": "CVF" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "spans": [ + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "type": "text", + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 313, + 664, + 555, + 715 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 664, + 555, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 664, + 555, + 715 + ], + "type": "text", + "content": "1In this study, we define \"base models\" as the original pretrained text-to-image models with well-developed adapters (e.g., SD1.5), and \"upgraded models\" as those pretrained models that require adapter transfer (e.g., SDXL). Upgraded models typically have more parameters and advanced architectural designs compared to the base models." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 66, + 702, + 212, + 713 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 702, + 212, + 713 + ], + "spans": [ + { + "bbox": [ + 66, + 702, + 212, + 713 + ], + "type": "text", + "content": "*Zhendong Mao is the corresponding author." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "type": "text", + "content": "18476" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "bbox": [ + 57, + 72, + 294, + 393 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 72, + 294, + 393 + ], + "spans": [ + { + "bbox": [ + 57, + 72, + 294, + 393 + ], + "type": "text", + "content": "discrepancies between base models and upgraded models. 
Here, coupling refers to the already well-trained compatibility between the original adapters and base models, while differences refer to the architectural discrepancies between base models and upgraded models. The reason behind this is that the original adapters and base models function as an integrated whole, and the layer-by-layer mapping disrupts this holistic consistency by isolating the output of each layer. Consequently, during the subsequent mapping to the upgraded models, this overall consistency cannot be effectively utilized. As a result, existing adapter transfer methods suffer from limited transfer scopes. On the one hand, they can only transfer addition-based adapters (typically, ControlNet) but fail to generalize to the attention-based adapters, which are more widely used. This is technically more challenging because attention-based adapters (i.e. the control conditions are injected into pretrained models by attention mechanisms) involve more complex interactions and dependencies across different layers of the model. On the other hand, these methods are limited to transferring between similar pretrained model architectures (e.g., from a U-Net model to another U-Net model) but fail when transferring from a U-Net model to a transformer model. This limitation is particularly critical, as the latest state-of-the-art pretrained text-to-image models [1, 6] are predominantly transformer-based, while the current most mature adapters remain developed on the U-Net architecture." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 57, + 399, + 294, + 696 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 399, + 294, + 696 + ], + "spans": [ + { + "bbox": [ + 57, + 399, + 294, + 696 + ], + "type": "text", + "content": "To address this challenge, we propose a novel adapter transfer framework, A4A (Adapter for Adapter), which utilizes an innovative all-for-all mapping approach to seamlessly transfer the intrinsic coupling between all layers of the original adapters and base models to all layers of the upgraded models, thereby enabling the transfer of more difficult and widely used attention-based adapters and facilitating cross-architecture model transfers. Specifically, A4A achieves all-for-all mapping through the Coupling Space Projection and Upgraded Space Mapping. In the Coupling Space Projection phase, all attention features of the pretrained adapter are collected, capturing the complete coupling relationship between the adapter and the base model, and then projected into a unified space. The coupling relationship treats the features of all layers of the adapter as a unified whole, conveying a continuous representation of the new control conditions throughout the generation process, distinguishing it from isolated layer mappings. Upgraded space refers to the coupled feature space that corresponds to the upgraded model, where we randomly initialize learnable attention features to transfer the coupling relationship from the base model to the upgraded model. By integrating the reference features through the attention mechanism and aligning them with the upgraded architecture, the learnable features bridge the discrepancies between the models." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 70, + 702, + 251, + 712 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 70, + 702, + 251, + 712 + ], + "spans": [ + { + "bbox": [ + 70, + 702, + 251, + 712 + ], + "type": "text", + "content": "The contributions of this work are as follows:" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 316, + 72, + 553, + 239 + ], + "type": "list", + "angle": 0, + "index": 6, + "blocks": [ + { + "bbox": [ + 317, + 72, + 553, + 118 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 72, + 553, + 118 + ], + "spans": [ + { + "bbox": [ + 317, + 72, + 553, + 118 + ], + "type": "text", + "content": "1. To alleviate the limited transfer scopes, we introduce a novel all-for-all mapping approach that enables the transfer of attention-based adapters and facilitates cross-architecture model transfers." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 316, + 120, + 553, + 167 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 120, + 553, + 167 + ], + "spans": [ + { + "bbox": [ + 316, + 120, + 553, + 167 + ], + "type": "text", + "content": "2. A4A projects the adapter's complete features into the unified coupling space and bridges it with the upgraded space by fusing these features with randomly initialized learnable features through the attention mechanism." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 316, + 168, + 553, + 239 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 168, + 553, + 239 + ], + "spans": [ + { + "bbox": [ + 316, + 168, + 553, + 239 + ], + "type": "text", + "content": "3. Experiments in various types of personalized image creation demonstrate that A4A is an effective attention-based adapter transfer approach for cross-architecture models, achieving better performance than the pretrained adapter from the upgraded model with minimal training costs." + } + ] + } + ], + "index": 5 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 316, + 253, + 397, + 265 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 253, + 397, + 265 + ], + "spans": [ + { + "bbox": [ + 316, + 253, + 397, + 265 + ], + "type": "text", + "content": "2. Related work" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 316, + 273, + 449, + 284 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 273, + 449, + 284 + ], + "spans": [ + { + "bbox": [ + 316, + 273, + 449, + 284 + ], + "type": "text", + "content": "2.1. Latent Diffusion Models" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 316, + 291, + 553, + 637 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 291, + 553, + 637 + ], + "spans": [ + { + "bbox": [ + 316, + 291, + 553, + 637 + ], + "type": "text", + "content": "Recent diffusion-based text-to-image models [11, 23, 26, 29] have received wide acclaim for their outstanding image fidelity and diversity. Ho et al. [11] introduce the denoising diffusion process into generation models in the seminal work DDPM. Diffusion models learn the generation process through iterative denoising steps. Latent Diffusion Model (LDM) [26] proposed to perform the diffusion process in the latent space of a Variational Autoencoder [31]. Under the LDM architecture, two primary backbone models are employed: the convolutional U-Net [27] and the transformer [22]. While these models share a similar generation process, there are significant differences between the two models. 
The most notable example of U-Net models is the StableDiffusion [23, 26] series, excluding SDv3.0 [6], which consists of the symmetric encoder and decoder. The encoder is composed of multiple blocks of diverse dimensions, interconnected by down-sampling layers. Each block incorporates several attention layers [32] to fuse latents and conditions. Another series of models [4, 6, 9, 28] adopts DiT blocks [22] for denoising. The transformer model has been widely used in recent video generation tasks [20], where it has achieved state-of-the-art results. After patchification, the resulting features are progressively processed through a series of DiT blocks. In the DiT blocks, the features are scaled and shifted using AdaLN, maintaining the same dimension, which distinguishes them from U-Net-based models. Transformer-based models, with their flexible structure and impressive generative capabilities, have garnered significant attention due to their great potential." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 316, + 647, + 547, + 659 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 647, + 547, + 659 + ], + "spans": [ + { + "bbox": [ + 316, + 647, + 547, + 659 + ], + "type": "text", + "content": "2.2. Adapters for Text-to-Image Diffusion Models" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 316, + 666, + 553, + 712 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 666, + 553, + 712 + ], + "spans": [ + { + "bbox": [ + 316, + 666, + 553, + 712 + ], + "type": "text", + "content": "Given the inefficiencies of fine-tuning large pretrained models, an alternative strategy is to utilize adapters, which introduce a limited number of trainable parameters while keeping the pretrained model frozen. Due to their flexibility and" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 749, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 749, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 749, + 317, + 757 + ], + "type": "text", + "content": "18477" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 296, + 324 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 296, + 324 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 296, + 324 + ], + "type": "text", + "content": "greater efficiency compared to fine-tuning, adapters have gained significant interest. By designing conditional modules, an adapter can introduce new control conditions, such as personalized characters, objects, layouts, and style information, to a pretrained text-to-image model. Downstream tasks for these adapters include personalized character generation (ID customization) [7, 34, 40, 42, 44], personalized object generation (IP customization) [2, 7, 13, 21, 30, 36, 37, 40, 42], attribute and layout control [3, 16, 18, 35, 38, 43], and stylization [10, 39]. As discussed in Sec. 1, these adapters can be categorized into two main types: attention-based and addition-based. In the addition-based adapter (typically, ControlNet [41]), the encoded new conditions are directly added to the output of the sub-blocks of the generation model. In contrast, the attention-based adapter processes the encoded conditions through attention layers, modifying the original attention values based on both the new conditions and the text prompts. 
Attention-based adapters [7, 10, 30, 34, 36-40, 42-44] have received widespread attention due to their efficient and precise condition control capabilities, dominating the field of adapters." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 331, + 180, + 342 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 331, + 180, + 342 + ], + "spans": [ + { + "bbox": [ + 55, + 331, + 180, + 342 + ], + "type": "text", + "content": "2.3. Adapter Transferring" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 348, + 296, + 601 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 348, + 296, + 601 + ], + "spans": [ + { + "bbox": [ + 55, + 348, + 296, + 601 + ], + "type": "text", + "content": "The rapid evolution and diverse architectures of text-to-image generation models place constraints on transferring the aforementioned adapters. Consequently, to accommodate new models, adapters often require retraining from scratch on these pretrained diffusion models. X-Adapter [25] designates models equipped with the well-trained adapter as base models, with the upgraded version referred to as the upgraded model. It establishes manual connections between decoder blocks of the same dimensions in both the base and upgraded models. Specifically, the decoder of SD1.5 and SDXL consists of blocks with dimensions 1280, 640, and 320. X-Adapter [25] maps the output of the base model's blocks to the corresponding blocks of identical dimensions in the upgraded model. As a result, X-Adapter is specifically suited for SD-series models and faces challenges when transferring to transformer-based models [4, 6, 9, 22, 28]. Ctrl-Adapter [19] aims to transfer the addition-based adapter, ControlNet [41], for video generation models or upgraded image generation models. It connects the output of the zero-convolutional layers of both models through mappers to transfer the ControlNet." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 609, + 111, + 621 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 609, + 111, + 621 + ], + "spans": [ + { + "bbox": [ + 55, + 609, + 111, + 621 + ], + "type": "text", + "content": "3. Method" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 629, + 296, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 629, + 296, + 715 + ], + "spans": [ + { + "bbox": [ + 55, + 629, + 296, + 715 + ], + "type": "text", + "content": "We propose a novel framework for transferring well-trained adapters from the base model to the upgraded model with architectural discrepancies, specifically U-Net model and transformer model. For the sake of brevity, we define PTA as the Pre-Trained Adapter. Specifically, pretrained refers to the version that has been officially published. 
Additionally, we denote " + }, + { + "bbox": [ + 55, + 629, + 296, + 715 + ], + "type": "inline_equation", + "content": "M_{base}" + }, + { + "bbox": [ + 55, + 629, + 296, + 715 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 629, + 296, + 715 + ], + "type": "inline_equation", + "content": "M_{up}" + }, + { + "bbox": [ + 55, + 629, + 296, + 715 + ], + "type": "text", + "content": " as the previously mentioned" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 313, + 72, + 555, + 168 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 72, + 555, + 168 + ], + "spans": [ + { + "bbox": [ + 313, + 72, + 555, + 168 + ], + "type": "text", + "content": "base model and upgraded model. A4A first projects the extracted attention features into the unified coupling space to maintain the coupling relationship between PTAs and " + }, + { + "bbox": [ + 313, + 72, + 555, + 168 + ], + "type": "inline_equation", + "content": "M_{base}" + }, + { + "bbox": [ + 313, + 72, + 555, + 168 + ], + "type": "text", + "content": ". Then, the coupling space is mapped to the upgraded space, where learnable features integrate the reference coupling features with attention layers. These learned features are then adaptively aligned with " + }, + { + "bbox": [ + 313, + 72, + 555, + 168 + ], + "type": "inline_equation", + "content": "M_{up}" + }, + { + "bbox": [ + 313, + 72, + 555, + 168 + ], + "type": "text", + "content": " through the Alignment component." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 313, + 176, + 400, + 187 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 176, + 400, + 187 + ], + "spans": [ + { + "bbox": [ + 313, + 176, + 400, + 187 + ], + "type": "text", + "content": "3.1. Preliminaries" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 313, + 193, + 555, + 266 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 193, + 555, + 266 + ], + "spans": [ + { + "bbox": [ + 313, + 193, + 555, + 266 + ], + "type": "text", + "content": "Before presenting our method, we introduce the diffusion model with various backbones. The Latent Diffusion Model (LDM), which performs noise addition and denoising in the latent space " + }, + { + "bbox": [ + 313, + 193, + 555, + 266 + ], + "type": "inline_equation", + "content": "z" + }, + { + "bbox": [ + 313, + 193, + 555, + 266 + ], + "type": "text", + "content": " of the VAE encoder, is the most prominent approach in the open-source text-to-image generation community. 
The objective of training LDMs is:" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 321, + 274, + 555, + 293 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 321, + 274, + 555, + 293 + ], + "spans": [ + { + "bbox": [ + 321, + 274, + 555, + 293 + ], + "type": "interline_equation", + "content": "\\min_{\\theta} \\mathcal{L}_{LDM} = \\mathbb{E}_{z, \\epsilon \\sim \\mathcal{N}(0, I), t} \\| \\epsilon - \\epsilon_{\\theta}(z_{t}, t, E_{t}(y_{t})) \\|_{2}^{2}, \\tag{1}", + "image_path": "57b8acaf33be762eea0d1e60b413f7d24e0e68576092769ad138586c629d84a2.jpg" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 300, + 556, + 445 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 300, + 556, + 445 + ], + "spans": [ + { + "bbox": [ + 313, + 300, + 556, + 445 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 300, + 556, + 445 + ], + "type": "inline_equation", + "content": "t" + }, + { + "bbox": [ + 313, + 300, + 556, + 445 + ], + "type": "text", + "content": " is uniformly sampled from the time steps " + }, + { + "bbox": [ + 313, + 300, + 556, + 445 + ], + "type": "inline_equation", + "content": "\\{1, \\dots, T\\}" + }, + { + "bbox": [ + 313, + 300, + 556, + 445 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 313, + 300, + 556, + 445 + ], + "type": "inline_equation", + "content": "y_{t}" + }, + { + "bbox": [ + 313, + 300, + 556, + 445 + ], + "type": "text", + "content": " denotes the conditional text prompt, and " + }, + { + "bbox": [ + 313, + 300, + 556, + 445 + ], + "type": "inline_equation", + "content": "E_{t}" + }, + { + "bbox": [ + 313, + 300, + 556, + 445 + ], + "type": "text", + "content": " represents the text encoder. The parameterized denoising network, denoted as " + }, + { + "bbox": [ + 313, + 300, + 556, + 445 + ], + "type": "inline_equation", + "content": "\\epsilon_{\\theta}" + }, + { + "bbox": [ + 313, + 300, + 556, + 445 + ], + "type": "text", + "content": ", may take the form of either a U-Net model or a transformer model. The U-Net architecture consists of blocks with varying dimensions. Each U-Net block includes down-sampling or up-sampling layers along with attention layers. The transformer model primarily consists of multiple DiT blocks grounded on the transformer architecture. 
Since our method aims to transfer the widely used attention-based adapters, we define the cross-attention process and its associated signals as follows:" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 325, + 453, + 555, + 468 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 325, + 453, + 555, + 468 + ], + "spans": [ + { + "bbox": [ + 325, + 453, + 555, + 468 + ], + "type": "interline_equation", + "content": "\\boldsymbol{q} = \\boldsymbol{W}^{q} \\cdot \\boldsymbol{i}, \\quad \\boldsymbol{k} = \\boldsymbol{W}^{k} \\cdot \\boldsymbol{c}, \\quad \\text{and} \\quad \\boldsymbol{v} = \\boldsymbol{W}^{v} \\cdot \\boldsymbol{c} \\tag{2}", + "image_path": "38e5ec156ba1d1f79047c634afcf18ed857eca3e4a106afb22cfb57b11a610ce.jpg" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 477, + 554, + 525 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 477, + 554, + 525 + ], + "spans": [ + { + "bbox": [ + 313, + 477, + 554, + 525 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 477, + 554, + 525 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 313, + 477, + 554, + 525 + ], + "type": "text", + "content": " denotes the latents of the image, and " + }, + { + "bbox": [ + 313, + 477, + 554, + 525 + ], + "type": "inline_equation", + "content": "c" + }, + { + "bbox": [ + 313, + 477, + 554, + 525 + ], + "type": "text", + "content": " signifies the embeddings of the condition, such as the text prompt in the original model. Additionally, " + }, + { + "bbox": [ + 313, + 477, + 554, + 525 + ], + "type": "inline_equation", + "content": "W" + }, + { + "bbox": [ + 313, + 477, + 554, + 525 + ], + "type": "text", + "content": " represents the weights for attention projection. Attention is conducted as follows:" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 348, + 533, + 555, + 563 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 348, + 533, + 555, + 563 + ], + "spans": [ + { + "bbox": [ + 348, + 533, + 555, + 563 + ], + "type": "interline_equation", + "content": "\\operatorname{Attention}(\\boldsymbol{q}, \\boldsymbol{k}, \\boldsymbol{v}) = \\operatorname{Softmax}\\left(\\frac{\\boldsymbol{q} \\cdot \\boldsymbol{k}^{T}}{\\sqrt{d}}\\right) \\boldsymbol{v} \\tag{3}", + "image_path": "740172dc590fee37b5b191e4a85000cc67af6ef27d3b0c52f901b5d6f821dfa1.jpg" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 570, + 554, + 606 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 570, + 554, + 606 + ], + "spans": [ + { + "bbox": [ + 313, + 570, + 554, + 606 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 570, + 554, + 606 + ], + "type": "inline_equation", + "content": "d" + }, + { + "bbox": [ + 313, + 570, + 554, + 606 + ], + "type": "text", + "content": " represents the dimensions of " + }, + { + "bbox": [ + 313, + 570, + 554, + 606 + ], + "type": "inline_equation", + "content": "\\pmb{k}" + }, + { + "bbox": [ + 313, + 570, + 554, + 606 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 570, + 554, + 606 + ], + "type": "inline_equation", + "content": "\\pmb{v}" + }, + { + "bbox": [ + 313, + 570, + 554, + 606 + ], + "type": "text", + "content": ". Through cross-attention, the image latents and condition embeddings are comprehensively integrated."
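As a reading aid for Eqs. (2) and (3), the following is a minimal, single-head PyTorch sketch of the cross-attention they define; the class and argument names are our own illustration, not the authors' released code.

```python
import torch
import torch.nn as nn

class CrossAttention(nn.Module):
    """Minimal single-head sketch of Eqs. (2)-(3): q is computed from the
    image latents i, while k and v are computed from the condition c."""
    def __init__(self, dim_i: int, dim_c: int, dim_attn: int):
        super().__init__()
        self.w_q = nn.Linear(dim_i, dim_attn, bias=False)  # W^q
        self.w_k = nn.Linear(dim_c, dim_attn, bias=False)  # W^k
        self.w_v = nn.Linear(dim_c, dim_attn, bias=False)  # W^v

    def forward(self, i: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
        q, k, v = self.w_q(i), self.w_k(c), self.w_v(c)         # Eq. (2)
        scores = q @ k.transpose(-2, -1) / (k.shape[-1] ** 0.5)
        return torch.softmax(scores, dim=-1) @ v                # Eq. (3)
```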
+ } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 613, + 462, + 627 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 613, + 462, + 627 + ], + "spans": [ + { + "bbox": [ + 313, + 613, + 462, + 627 + ], + "type": "text", + "content": "3.2. Coupling Space Projection" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 313, + 631, + 554, + 691 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 631, + 554, + 691 + ], + "spans": [ + { + "bbox": [ + 313, + 631, + 554, + 691 + ], + "type": "text", + "content": "Condition Encoder of PTA. For new control conditions " + }, + { + "bbox": [ + 313, + 631, + 554, + 691 + ], + "type": "inline_equation", + "content": "y_{n}" + }, + { + "bbox": [ + 313, + 631, + 554, + 691 + ], + "type": "text", + "content": " beyond the original text prompt, adapters typically incorporate a condition encoder as illustrated in Fig. 1. We denote " + }, + { + "bbox": [ + 313, + 631, + 554, + 691 + ], + "type": "inline_equation", + "content": "E_{n}" + }, + { + "bbox": [ + 313, + 631, + 554, + 691 + ], + "type": "text", + "content": " to distinguish it from the original text encoder " + }, + { + "bbox": [ + 313, + 631, + 554, + 691 + ], + "type": "inline_equation", + "content": "E_{t}" + }, + { + "bbox": [ + 313, + 631, + 554, + 691 + ], + "type": "text", + "content": " of the pretrained large-scale T2I models:" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 403, + 701, + 555, + 714 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 403, + 701, + 555, + 714 + ], + "spans": [ + { + "bbox": [ + 403, + 701, + 555, + 714 + ], + "type": "interline_equation", + "content": "\\boldsymbol {c} _ {n} = E _ {n} \\left(y _ {n}\\right), \\tag {4}", + "image_path": "468a4f77daae0568a244cfae6a719a43b53f9a03d22ee97c56bd48d45690be29.jpg" + } + ] + } + ], + "index": 16 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "type": "text", + "content": "18478" + } + ] + } + ], + "index": 17 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 56, + 68, + 553, + 223 + ], + "blocks": [ + { + "bbox": [ + 56, + 68, + 553, + 223 + ], + "lines": [ + { + "bbox": [ + 56, + 68, + 553, + 223 + ], + "spans": [ + { + "bbox": [ + 56, + 68, + 553, + 223 + ], + "type": "image", + "image_path": "a86d0fff88187fdbf456d2a9bebc8d42078a61a7a43735c068e5321b344f0df4.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 55, + 231, + 555, + 298 + ], + "lines": [ + { + "bbox": [ + 55, + 231, + 555, + 298 + ], + "spans": [ + { + "bbox": [ + 55, + 231, + 555, + 298 + ], + "type": "text", + "content": "Figure 1. The illustration of the Adapter for Adapter (A4A) framework. Both the base model and the upgraded model are kept frozen. (a) Coupling Space Projection: The pretrained Adapter, consisting of the condition encoder and attention layers (highlighted in pink), is loaded. 
The adapter features " + }, + { + "bbox": [ + 55, + 231, + 555, + 298 + ], + "type": "inline_equation", + "content": "k_{i}" + }, + { + "bbox": [ + 55, + 231, + 555, + 298 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 231, + 555, + 298 + ], + "type": "inline_equation", + "content": "v_{i}" + }, + { + "bbox": [ + 55, + 231, + 555, + 298 + ], + "type": "text", + "content": " are projected into a unified coupling space, reshaping them as " + }, + { + "bbox": [ + 55, + 231, + 555, + 298 + ], + "type": "inline_equation", + "content": "K" + }, + { + "bbox": [ + 55, + 231, + 555, + 298 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 231, + 555, + 298 + ], + "type": "inline_equation", + "content": "V" + }, + { + "bbox": [ + 55, + 231, + 555, + 298 + ], + "type": "text", + "content": ". (b) Upgraded Space Mapping: Randomly initialized learnable upgraded features, " + }, + { + "bbox": [ + 55, + 231, + 555, + 298 + ], + "type": "inline_equation", + "content": "\\bar{K}" + }, + { + "bbox": [ + 55, + 231, + 555, + 298 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 231, + 555, + 298 + ], + "type": "inline_equation", + "content": "\\bar{V}" + }, + { + "bbox": [ + 55, + 231, + 555, + 298 + ], + "type": "text", + "content": ", are concatenated with " + }, + { + "bbox": [ + 55, + 231, + 555, + 298 + ], + "type": "inline_equation", + "content": "K" + }, + { + "bbox": [ + 55, + 231, + 555, + 298 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 231, + 555, + 298 + ], + "type": "inline_equation", + "content": "V" + }, + { + "bbox": [ + 55, + 231, + 555, + 298 + ], + "type": "text", + "content": " as references. The learning process of " + }, + { + "bbox": [ + 55, + 231, + 555, + 298 + ], + "type": "inline_equation", + "content": "\\bar{K}" + }, + { + "bbox": [ + 55, + 231, + 555, + 298 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 231, + 555, + 298 + ], + "type": "inline_equation", + "content": "\\bar{V}" + }, + { + "bbox": [ + 55, + 231, + 555, + 298 + ], + "type": "text", + "content": " bridges the discrepancy between the base model and the upgraded model. These features are then aligned with the original cross-attention layers of the upgraded model through Alignment, which can be a U-Net or Transformer model. Best viewed in color." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 54, + 318, + 294, + 389 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 318, + 294, + 389 + ], + "spans": [ + { + "bbox": [ + 54, + 318, + 294, + 389 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 54, + 318, + 294, + 389 + ], + "type": "inline_equation", + "content": "c_{n}" + }, + { + "bbox": [ + 54, + 318, + 294, + 389 + ], + "type": "text", + "content": " denotes the new condition embeddings. Taking IP-Adapter [40] as an example, the Image Encoder external to the original generation model serves as the condition encoder. And we directly integrate the pretrained condition encoder " + }, + { + "bbox": [ + 54, + 318, + 294, + 389 + ], + "type": "inline_equation", + "content": "E_{n}" + }, + { + "bbox": [ + 54, + 318, + 294, + 389 + ], + "type": "text", + "content": " from the adapter to efficiently transfer the well-trained adapter to the upgraded model." 
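A brief sketch of Eq. (4) as it might look when the PTA's pretrained condition encoder is reused unchanged; the function, its freezing behavior, and the encoder object are illustrative assumptions rather than the released implementation.

```python
import torch

def encode_condition(encoder: torch.nn.Module, y_n: torch.Tensor) -> torch.Tensor:
    """Eq. (4): c_n = E_n(y_n). For IP-Adapter, E_n would be its image encoder.
    Shown frozen for simplicity; the paper optionally fine-tunes the PTA with a
    small learning rate instead."""
    encoder.requires_grad_(False)
    with torch.no_grad():
        return encoder(y_n)
```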
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 54, + 389, + 295, + 534 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 389, + 295, + 534 + ], + "spans": [ + { + "bbox": [ + 54, + 389, + 295, + 534 + ], + "type": "text", + "content": "Attention Layers of PTA. As illustrated in Fig. 1, the encoded new condition embeddings " + }, + { + "bbox": [ + 54, + 389, + 295, + 534 + ], + "type": "inline_equation", + "content": "c_{n}" + }, + { + "bbox": [ + 54, + 389, + 295, + 534 + ], + "type": "text", + "content": " are processed through the attention mechanism coupling with the base model. We explicitly depict the attention layers associated with the pretrained adapter in the figure, denoting the weights of the " + }, + { + "bbox": [ + 54, + 389, + 295, + 534 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 54, + 389, + 295, + 534 + ], + "type": "text", + "content": "-th attention layer of the adapter as " + }, + { + "bbox": [ + 54, + 389, + 295, + 534 + ], + "type": "inline_equation", + "content": "W_{A,i}" + }, + { + "bbox": [ + 54, + 389, + 295, + 534 + ], + "type": "text", + "content": ", distinguishing them from the original attention weights " + }, + { + "bbox": [ + 54, + 389, + 295, + 534 + ], + "type": "inline_equation", + "content": "W" + }, + { + "bbox": [ + 54, + 389, + 295, + 534 + ], + "type": "text", + "content": " in the base model. Subscript " + }, + { + "bbox": [ + 54, + 389, + 295, + 534 + ], + "type": "inline_equation", + "content": "A" + }, + { + "bbox": [ + 54, + 389, + 295, + 534 + ], + "type": "text", + "content": " represents the adapter, and " + }, + { + "bbox": [ + 54, + 389, + 295, + 534 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 54, + 389, + 295, + 534 + ], + "type": "text", + "content": " represents the " + }, + { + "bbox": [ + 54, + 389, + 295, + 534 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 54, + 389, + 295, + 534 + ], + "type": "text", + "content": "-th adapter attention layer. For instance, the fine-tuned cross-attention layers in the Decoupled Cross-Attention module of IP-Adapter [40] exemplify this. 
" + }, + { + "bbox": [ + 54, + 389, + 295, + 534 + ], + "type": "inline_equation", + "content": "c_{n}" + }, + { + "bbox": [ + 54, + 389, + 295, + 534 + ], + "type": "text", + "content": " are sequentially fed into the aforementioned adapter's attention layers " + }, + { + "bbox": [ + 54, + 389, + 295, + 534 + ], + "type": "inline_equation", + "content": "W_{A,i}" + }, + { + "bbox": [ + 54, + 389, + 295, + 534 + ], + "type": "text", + "content": ":" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 107, + 540, + 295, + 556 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 540, + 295, + 556 + ], + "spans": [ + { + "bbox": [ + 107, + 540, + 295, + 556 + ], + "type": "interline_equation", + "content": "\\boldsymbol {k} _ {i} = \\boldsymbol {W} _ {A, i} ^ {k} \\left(\\boldsymbol {c} _ {n}\\right), \\quad \\boldsymbol {k} _ {i} \\in \\mathbb {R} ^ {N \\times d _ {i}}, \\tag {5}", + "image_path": "b35546fb456680290312e249a36215ce1a8e1617c5c78f128cf42c19a7fef940.jpg" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 107, + 563, + 295, + 578 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 563, + 295, + 578 + ], + "spans": [ + { + "bbox": [ + 107, + 563, + 295, + 578 + ], + "type": "interline_equation", + "content": "\\boldsymbol {v} _ {i} = \\boldsymbol {W} _ {A, i} ^ {v} \\left(\\boldsymbol {c} _ {n}\\right), \\quad \\boldsymbol {v} _ {i} \\in \\mathbb {R} ^ {N \\times d _ {i}}, \\tag {6}", + "image_path": "55f4e28fae9674228560c2f41c5f483d57050bcdac631d21bbb0b67b1716f596.jpg" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 582, + 295, + 630 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 582, + 295, + 630 + ], + "spans": [ + { + "bbox": [ + 55, + 582, + 295, + 630 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 55, + 582, + 295, + 630 + ], + "type": "inline_equation", + "content": "d_{i}" + }, + { + "bbox": [ + 55, + 582, + 295, + 630 + ], + "type": "text", + "content": " is the dimension of the feature, and " + }, + { + "bbox": [ + 55, + 582, + 295, + 630 + ], + "type": "inline_equation", + "content": "\\mathbf{N}" + }, + { + "bbox": [ + 55, + 582, + 295, + 630 + ], + "type": "text", + "content": " denotes the number of tokens for feature " + }, + { + "bbox": [ + 55, + 582, + 295, + 630 + ], + "type": "inline_equation", + "content": "k_{i}" + }, + { + "bbox": [ + 55, + 582, + 295, + 630 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 582, + 295, + 630 + ], + "type": "inline_equation", + "content": "\\boldsymbol{v}_{i}" + }, + { + "bbox": [ + 55, + 582, + 295, + 630 + ], + "type": "text", + "content": ". To extract the attention features from " + }, + { + "bbox": [ + 55, + 582, + 295, + 630 + ], + "type": "inline_equation", + "content": "\\boldsymbol{c}_{n}" + }, + { + "bbox": [ + 55, + 582, + 295, + 630 + ], + "type": "text", + "content": ", we employ the attention layers of the adapter rather than utilizing the entire base model." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 630, + 295, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 630, + 295, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 630, + 295, + 714 + ], + "type": "text", + "content": "Projection onto the Coupling Space. 
The features obtained from multiple layers form a sequence of length " + }, + { + "bbox": [ + 55, + 630, + 295, + 714 + ], + "type": "inline_equation", + "content": "l" + }, + { + "bbox": [ + 55, + 630, + 295, + 714 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 55, + 630, + 295, + 714 + ], + "type": "inline_equation", + "content": "[k_1, k_2, \\dots, k_l]" + }, + { + "bbox": [ + 55, + 630, + 295, + 714 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 630, + 295, + 714 + ], + "type": "inline_equation", + "content": "[v_1, v_2, \\dots, v_l]" + }, + { + "bbox": [ + 55, + 630, + 295, + 714 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 55, + 630, + 295, + 714 + ], + "type": "inline_equation", + "content": "l" + }, + { + "bbox": [ + 55, + 630, + 295, + 714 + ], + "type": "text", + "content": " represents the number of attention layers of PTA. The dimensions " + }, + { + "bbox": [ + 55, + 630, + 295, + 714 + ], + "type": "inline_equation", + "content": "d_i" + }, + { + "bbox": [ + 55, + 630, + 295, + 714 + ], + "type": "text", + "content": " of the features vary, as shown in Fig. 1. To achieve the all-for-all mapping for the attention-based adapter from " + }, + { + "bbox": [ + 55, + 630, + 295, + 714 + ], + "type": "inline_equation", + "content": "M_{base}" + }, + { + "bbox": [ + 55, + 630, + 295, + 714 + ], + "type": "text", + "content": " to " + }, + { + "bbox": [ + 55, + 630, + 295, + 714 + ], + "type": "inline_equation", + "content": "M_{up}" + }, + { + "bbox": [ + 55, + 630, + 295, + 714 + ], + "type": "text", + "content": ", the features of all cross-attention layers need to be" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 318, + 555, + 354 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 318, + 555, + 354 + ], + "spans": [ + { + "bbox": [ + 313, + 318, + 555, + 354 + ], + "type": "text", + "content": "projected into a unified coupling space " + }, + { + "bbox": [ + 313, + 318, + 555, + 354 + ], + "type": "inline_equation", + "content": "S_{co}" + }, + { + "bbox": [ + 313, + 318, + 555, + 354 + ], + "type": "text", + "content": " which is defined by the least common multiple " + }, + { + "bbox": [ + 313, + 318, + 555, + 354 + ], + "type": "inline_equation", + "content": "d_{scm}" + }, + { + "bbox": [ + 313, + 318, + 555, + 354 + ], + "type": "text", + "content": " of all dimensions " + }, + { + "bbox": [ + 313, + 318, + 555, + 354 + ], + "type": "inline_equation", + "content": "d_{i}" + }, + { + "bbox": [ + 313, + 318, + 555, + 354 + ], + "type": "text", + "content": ":" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 325, + 363, + 555, + 380 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 325, + 363, + 555, + 380 + ], + "spans": [ + { + "bbox": [ + 325, + 363, + 555, + 380 + ], + "type": "interline_equation", + "content": "\\boldsymbol{K} = \\operatorname{Proj}([\\boldsymbol{k}_{1}, \\boldsymbol{k}_{2}, \\dots, \\boldsymbol{k}_{l}]), \\quad \\boldsymbol{K} \\in \\mathbb{R}^{l \\times N \\times d_{scm}}, \\tag{7}", + "image_path": "e5caf5dfc7a27cc2b56eb662be75401e76b7255d4fb2b5f0366359fbfa359c22.jpg" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 327, + 388, + 555, + 405 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 327, + 388, + 555, + 405 + ], + "spans": [ + { + "bbox": [ + 327, + 388, + 555, + 405 + ], + "type": "interline_equation", + "content": "\\pmb{V} = \\operatorname{Proj}([\\pmb{v}_{1}, \\pmb{v}_{2}, \\dots, \\pmb{v}_{l}]), \\quad \\pmb{V} \\in \\mathbb{R}^{l \\times N \\times d_{scm}}. \\tag{8}", + "image_path": "f11d7f054fbcec294c8537f89b7b42cad395a14235bdc369ae04ecd43c6e9648.jpg" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 411, + 555, + 579 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 411, + 555, + 579 + ], + "spans": [ + { + "bbox": [ + 313, + 411, + 555, + 579 + ], + "type": "text", + "content": "The projection module consists of several linear layers designed to map dimensions " + }, + { + "bbox": [ + 313, + 411, + 555, + 579 + ], + "type": "inline_equation", + "content": "d_{i}" + }, + { + "bbox": [ + 313, + 411, + 555, + 579 + ], + "type": "text", + "content": " to " + }, + { + "bbox": [ + 313, + 411, + 555, + 579 + ], + "type": "inline_equation", + "content": "d_{scm}" + }, + { + "bbox": [ + 313, + 411, + 555, + 579 + ], + "type": "text", + "content": ". After the sequences are projected to " + }, + { + "bbox": [ + 313, + 411, + 555, + 579 + ], + "type": "inline_equation", + "content": "S_{co}" + }, + { + "bbox": [ + 313, + 411, + 555, + 579 + ], + "type": "text", + "content": ", they are reshaped into matrices " + }, + { + "bbox": [ + 313, + 411, + 555, + 579 + ], + "type": "inline_equation", + "content": "K" + }, + { + "bbox": [ + 313, + 411, + 555, + 579 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 411, + 555, + 579 + ], + "type": "inline_equation", + "content": "V" + }, + { + "bbox": [ + 313, + 411, + 555, + 579 + ], + "type": "text", + "content": ". Mapping to " + }, + { + "bbox": [ + 313, + 411, + 555, + 579 + ], + "type": "inline_equation", + "content": "S_{co}" + }, + { + "bbox": [ + 313, + 411, + 555, + 579 + ], + "type": "text", + "content": " defined by " + }, + { + "bbox": [ + 313, + 411, + 555, + 579 + ], + "type": "inline_equation", + "content": "d_{scm}" + }, + { + "bbox": [ + 313, + 411, + 555, + 579 + ], + "type": "text", + "content": " strikes the best balance between efficiency and effectiveness. Sec. 7.3 in the supplementary material demonstrates this through experiments. It reduces computational complexity by aligning features to a common dimension, avoiding the overhead of larger spaces. At the same time, it maintains sufficient representational capacity, preventing the loss of important information, which can happen with smaller spaces. This ensures effective feature alignment without excessive resource usage, making it an optimal choice for both computational efficiency and model performance." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 587, + 460, + 601 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 587, + 460, + 601 + ], + "spans": [ + { + "bbox": [ + 313, + 587, + 460, + 601 + ], + "type": "text", + "content": "3.3. 
Upgraded Space Mapping" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 605, + 555, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 605, + 555, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 605, + 555, + 715 + ], + "type": "text", + "content": "Similarly, we define the number of the cross-attention layers in the upgraded model as " + }, + { + "bbox": [ + 313, + 605, + 555, + 715 + ], + "type": "inline_equation", + "content": "\\bar{l}" + }, + { + "bbox": [ + 313, + 605, + 555, + 715 + ], + "type": "text", + "content": ", and the space for upgraded models as " + }, + { + "bbox": [ + 313, + 605, + 555, + 715 + ], + "type": "inline_equation", + "content": "S_{up}" + }, + { + "bbox": [ + 313, + 605, + 555, + 715 + ], + "type": "text", + "content": ". To transfer the unified adapter features to the upgraded space, inspired by BLIP-2 [17], we randomly initialize two learnable parameters " + }, + { + "bbox": [ + 313, + 605, + 555, + 715 + ], + "type": "inline_equation", + "content": "\\bar{K} \\in \\mathbb{R}^{\\bar{l} \\times N \\times d_{\\text{scm}}}" + }, + { + "bbox": [ + 313, + 605, + 555, + 715 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 605, + 555, + 715 + ], + "type": "inline_equation", + "content": "\\bar{V} \\in \\mathbb{R}^{\\bar{l} \\times N \\times d_{\\text{scm}}}" + }, + { + "bbox": [ + 313, + 605, + 555, + 715 + ], + "type": "text", + "content": ". Given that the attention layer is effective for integrating features, we adopt this architecture to learn the aforementioned parameters. Consider the learning of " + }, + { + "bbox": [ + 313, + 605, + 555, + 715 + ], + "type": "inline_equation", + "content": "\\bar{K}" + }, + { + "bbox": [ + 313, + 605, + 555, + 715 + ], + "type": "text", + "content": " as an example. To enhance the robustness, we first" + } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "text", + "content": "18479" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 295, + 96 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 295, + 96 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 295, + 96 + ], + "type": "text", + "content": "normalize " + }, + { + "bbox": [ + 55, + 72, + 295, + 96 + ], + "type": "inline_equation", + "content": "\\bar{\\pmb{K}}" + }, + { + "bbox": [ + 55, + 72, + 295, + 96 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 72, + 295, + 96 + ], + "type": "inline_equation", + "content": "\\pmb{K}" + }, + { + "bbox": [ + 55, + 72, + 295, + 96 + ], + "type": "text", + "content": " using layer normalization. Then, the following operation is performed:" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 82, + 101, + 294, + 133 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 82, + 101, + 294, + 133 + ], + "spans": [ + { + "bbox": [ + 82, + 101, + 294, + 133 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} \\bar{\\boldsymbol{K}} = \\operatorname{FFN}\\left[\\boldsymbol{W}^{out} \\cdot \\operatorname{Attention}\\left(\\bar{\\boldsymbol{K}} \\boldsymbol{W}_{1}^{k},\\right.\\right. \\\\ \\left.\\left. [\\boldsymbol{K}, \\bar{\\boldsymbol{K}}] \\boldsymbol{W}_{2}^{k}, [\\boldsymbol{K}, \\bar{\\boldsymbol{K}}] \\boldsymbol{W}_{3}^{k}\\right)\\right]. \\end{array} \\tag{9}", + "image_path": "150e7821eeb73f34a0f41371e0efc1b14f46f8fe35c29ba98fa450ad8ae6211f.jpg" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 138, + 296, + 283 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 138, + 296, + 283 + ], + "spans": [ + { + "bbox": [ + 55, + 138, + 296, + 283 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 55, + 138, + 296, + 283 + ], + "type": "inline_equation", + "content": "[\\pmb{K},\\bar{\\pmb{K}} ]" + }, + { + "bbox": [ + 55, + 138, + 296, + 283 + ], + "type": "text", + "content": " denotes the concatenation of " + }, + { + "bbox": [ + 55, + 138, + 296, + 283 + ], + "type": "inline_equation", + "content": "\\pmb{K}" + }, + { + "bbox": [ + 55, + 138, + 296, + 283 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 138, + 296, + 283 + ], + "type": "inline_equation", + "content": "\\bar{\\pmb{K}}" + }, + { + "bbox": [ + 55, + 138, + 296, + 283 + ], + "type": "text", + "content": " along dimension " + }, + { + "bbox": [ + 55, + 138, + 296, + 283 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 55, + 138, + 296, + 283 + ], + "type": "text", + "content": ". The projection weights " + }, + { + "bbox": [ + 55, + 138, + 296, + 283 + ], + "type": "inline_equation", + "content": "\\pmb{W}_1^k,\\pmb{W}_2^k,\\pmb{W}_3^k\\in \\mathbb{R}^{d_{scm}\\times d_{in}}" + }, + { + "bbox": [ + 55, + 138, + 296, + 283 + ], + "type": "text", + "content": " map " + }, + { + "bbox": [ + 55, + 138, + 296, + 283 + ], + "type": "inline_equation", + "content": "\\bar{\\pmb{K}}" + }, + { + "bbox": [ + 55, + 138, + 296, + 283 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 138, + 296, + 283 + ], + "type": "inline_equation", + "content": "\\pmb{K}" + }, + { + "bbox": [ + 55, + 138, + 296, + 283 + ], + "type": "text", + "content": ", respectively, to the intermediate space with dimension " + }, + { + "bbox": [ + 55, + 138, + 296, + 283 + ], + "type": "inline_equation", + "content": "d_{in}" + }, + { + "bbox": [ + 55, + 138, + 296, + 283 + ], + "type": "text", + "content": ". After the features are processed with the Attention operation described in Eq. (3), they are transformed back to the original space via " + }, + { + "bbox": [ + 55, + 138, + 296, + 283 + ], + "type": "inline_equation", + "content": "\\pmb{W}^{out}" + }, + { + "bbox": [ + 55, + 138, + 296, + 283 + ], + "type": "text", + "content": ". The Feed Forward Network (FFN) is composed of layers arranged sequentially, including layer normalization, linear transformations, GELU activation, and additional linear layers. The aforementioned process of Eq. (9) is iterated " + }, + { + "bbox": [ + 55, + 138, + 296, + 283 + ], + "type": "inline_equation", + "content": "\\mathbf{R}" + }, + { + "bbox": [ + 55, + 138, + 296, + 283 + ], + "type": "text", + "content": " times, with " + }, + { + "bbox": [ + 55, + 138, + 296, + 283 + ], + "type": "inline_equation", + "content": "\\mathbf{R}" + }, + { + "bbox": [ + 55, + 138, + 296, + 283 + ], + "type": "text", + "content": " serving as a hyperparameter. 
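Putting Eqs. (5) to (9) together, a hedged PyTorch sketch of the coupling-space projection and the upgraded-space mapping follows. The single-head attention, the token-stacking reading of the concatenation [K, K̄], the default R, and all module names are our simplifications, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

def attention(q, k, v):
    # Single-head instance of Eq. (3), kept simple for brevity.
    return torch.softmax(q @ k.transpose(-2, -1) / (k.shape[-1] ** 0.5), dim=-1) @ v

class CouplingProjection(nn.Module):
    """Eqs. (5)-(8): project each layer's k_i (or v_i) of width d_i into the
    unified coupling space S_co of width d_scm, then stack into K (or V)."""
    def __init__(self, dims, d_scm):
        super().__init__()
        self.proj = nn.ModuleList(nn.Linear(d, d_scm) for d in dims)

    def forward(self, feats):  # feats[i]: (N, d_i) from the PTA's i-th attention layer
        return torch.stack([p(f) for p, f in zip(self.proj, feats)])  # (l, N, d_scm)

class UpgradedSpaceMapper(nn.Module):
    """Eq. (9): learnable features K_bar query the concatenation [K, K_bar].
    We read the concatenation as stacking reference and learnable tokens,
    one plausible interpretation of 'along dimension N'. Iterated R times."""
    def __init__(self, l_up, n_tokens, d_scm, d_in, R=2):
        super().__init__()
        self.K_bar = nn.Parameter(torch.randn(l_up, n_tokens, d_scm))
        self.norm = nn.LayerNorm(d_scm)
        self.w1 = nn.Linear(d_scm, d_in, bias=False)     # W_1^k (queries)
        self.w2 = nn.Linear(d_scm, d_in, bias=False)     # W_2^k (keys)
        self.w3 = nn.Linear(d_scm, d_in, bias=False)     # W_3^k (values)
        self.w_out = nn.Linear(d_in, d_scm, bias=False)  # W^out
        self.ffn = nn.Sequential(nn.LayerNorm(d_scm), nn.Linear(d_scm, d_scm),
                                 nn.GELU(), nn.Linear(d_scm, d_scm))
        self.R = R

    def forward(self, K):                  # K: (l, N, d_scm) from CouplingProjection
        K_bar = self.K_bar                 # (l_up, N, d_scm)
        for _ in range(self.R):
            ref = torch.cat([self.norm(K), self.norm(K_bar)], dim=0)  # [K, K_bar]
            q = self.w1(self.norm(K_bar)).flatten(0, 1)
            out = attention(q, self.w2(ref).flatten(0, 1), self.w3(ref).flatten(0, 1))
            K_bar = self.ffn(self.w_out(out).view_as(K_bar))
        return K_bar
```

A second instance of the same two modules with identical structure would handle the value path of Eq. (10).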
The learning process of " + }, + { + "bbox": [ + 55, + 138, + 296, + 283 + ], + "type": "inline_equation", + "content": "\\bar{\\pmb{V}}" + }, + { + "bbox": [ + 55, + 138, + 296, + 283 + ], + "type": "text", + "content": " is similar to " + }, + { + "bbox": [ + 55, + 138, + 296, + 283 + ], + "type": "inline_equation", + "content": "\\bar{\\pmb{K}}" + }, + { + "bbox": [ + 55, + 138, + 296, + 283 + ], + "type": "text", + "content": ":" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 76, + 288, + 294, + 316 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 76, + 288, + 294, + 316 + ], + "spans": [ + { + "bbox": [ + 76, + 288, + 294, + 316 + ], + "type": "interline_equation", + "content": "\\bar{\\boldsymbol{V}} = \\operatorname{FFN}\\left[\\boldsymbol{W}^{out} \\cdot \\operatorname{Attention}\\left(\\bar{\\boldsymbol{V}} \\boldsymbol{W}_{1}^{v},\\right.\\right. \\\\ \\left.\\left. [\\boldsymbol{V}, \\bar{\\boldsymbol{V}}] \\boldsymbol{W}_{2}^{v}, [\\boldsymbol{V}, \\bar{\\boldsymbol{V}}] \\boldsymbol{W}_{3}^{v}\\right)\\right]. \\tag{10}", + "image_path": "1b0730889d8039f8f67aa579229df48ba5db2228771c1ce87cf5de2597d05da8.jpg" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 322, + 296, + 393 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 322, + 296, + 393 + ], + "spans": [ + { + "bbox": [ + 55, + 322, + 296, + 393 + ], + "type": "text", + "content": "We clarify that two distinct modules with an identical structure are responsible for learning " + }, + { + "bbox": [ + 55, + 322, + 296, + 393 + ], + "type": "inline_equation", + "content": "\\bar{K}" + }, + { + "bbox": [ + 55, + 322, + 296, + 393 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 322, + 296, + 393 + ], + "type": "inline_equation", + "content": "\\bar{V}" + }, + { + "bbox": [ + 55, + 322, + 296, + 393 + ], + "type": "text", + "content": ", respectively. Throughout this process, the learnable features " + }, + { + "bbox": [ + 55, + 322, + 296, + 393 + ], + "type": "inline_equation", + "content": "\\bar{K}" + }, + { + "bbox": [ + 55, + 322, + 296, + 393 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 322, + 296, + 393 + ], + "type": "inline_equation", + "content": "\\bar{V}" + }, + { + "bbox": [ + 55, + 322, + 296, + 393 + ], + "type": "text", + "content": " are seamlessly integrated with the adapter features " + }, + { + "bbox": [ + 55, + 322, + 296, + 393 + ], + "type": "inline_equation", + "content": "K" + }, + { + "bbox": [ + 55, + 322, + 296, + 393 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 322, + 296, + 393 + ], + "type": "inline_equation", + "content": "V" + }, + { + "bbox": [ + 55, + 322, + 296, + 393 + ], + "type": "text", + "content": " of the PTA, effectively bridging the unified coupling space to the upgraded space." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 394, + 296, + 489 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 394, + 296, + 489 + ], + "spans": [ + { + "bbox": [ + 55, + 394, + 296, + 489 + ], + "type": "text", + "content": "Alignment with the Upgraded Model. 
The learned features in the upgraded space should be aligned with the attention layers of " + }, + { + "bbox": [ + 55, + 394, + 296, + 489 + ], + "type": "inline_equation", + "content": "M_{up}" + }, + { + "bbox": [ + 55, + 394, + 296, + 489 + ], + "type": "text", + "content": " to fully leverage the adapter's capabilities. Specifically, we first obtain the dimensions " + }, + { + "bbox": [ + 55, + 394, + 296, + 489 + ], + "type": "inline_equation", + "content": "\\bar{d}_i" + }, + { + "bbox": [ + 55, + 394, + 296, + 489 + ], + "type": "text", + "content": " of the cross-attention layers within the original upgraded model. Then, we employ a linear layer to align the " + }, + { + "bbox": [ + 55, + 394, + 296, + 489 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 55, + 394, + 296, + 489 + ], + "type": "text", + "content": "-th row of the matrix " + }, + { + "bbox": [ + 55, + 394, + 296, + 489 + ], + "type": "inline_equation", + "content": "\\bar{\\mathbf{K}}" + }, + { + "bbox": [ + 55, + 394, + 296, + 489 + ], + "type": "text", + "content": " to " + }, + { + "bbox": [ + 55, + 394, + 296, + 489 + ], + "type": "inline_equation", + "content": "\\bar{d}_i" + }, + { + "bbox": [ + 55, + 394, + 296, + 489 + ], + "type": "text", + "content": " dimensions through Alignment, which we denote as " + }, + { + "bbox": [ + 55, + 394, + 296, + 489 + ], + "type": "inline_equation", + "content": "\\bar{k}_i" + }, + { + "bbox": [ + 55, + 394, + 296, + 489 + ], + "type": "text", + "content": ":" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 120, + 495, + 294, + 510 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 120, + 495, + 294, + 510 + ], + "spans": [ + { + "bbox": [ + 120, + 495, + 294, + 510 + ], + "type": "interline_equation", + "content": "[\\bar{\\boldsymbol{k}}_{1}, \\dots, \\bar{\\boldsymbol{k}}_{\\bar{l}}] = \\operatorname{Align}(\\bar{\\boldsymbol{K}}). \\tag{11}", + "image_path": "e7965165b0f5d0846ba8e4d21e8ef312b0f8f07213e7bd0d7e997735b8f14e02.jpg" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 514, + 295, + 538 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 514, + 295, + 538 + ], + "spans": [ + { + "bbox": [ + 55, + 514, + 295, + 538 + ], + "type": "text", + "content": "Similarly, the identical operation is applied to the " + }, + { + "bbox": [ + 55, + 514, + 295, + 538 + ], + "type": "inline_equation", + "content": "\\bar{\\mathbf{V}}" + }, + { + "bbox": [ + 55, + 514, + 295, + 538 + ], + "type": "text", + "content": " matrix in order to derive the vector " + }, + { + "bbox": [ + 55, + 514, + 295, + 538 + ], + "type": "inline_equation", + "content": "\\bar{\\pmb{v}}_i" + }, + { + "bbox": [ + 55, + 514, + 295, + 538 + ], + "type": "text", + "content": ":" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 121, + 544, + 294, + 559 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 121, + 544, + 294, + 559 + ], + "spans": [ + { + "bbox": [ + 121, + 544, + 294, + 559 + ], + "type": "interline_equation", + "content": "[\\bar{\\boldsymbol{v}}_{1}, \\dots, \\bar{\\boldsymbol{v}}_{\\bar{l}}] = \\operatorname{Align}(\\bar{\\boldsymbol{V}}). \\tag{12}", + "image_path": "d7ded03db72de74a8e7cfa354867a0997397cd312bdbb8077c79b01fb1460765.jpg" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 55, + 563, + 295, + 612 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 563, + 295, + 612 + ], + "spans": [ + { + "bbox": [ + 55, + 563, + 295, + 612 + ], + "type": "text", + "content": "For the " + }, + { + "bbox": [ + 55, + 563, + 295, + 612 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 55, + 563, + 295, + 612 + ], + "type": "text", + "content": "-th cross-attention layer, let " + }, + { + "bbox": [ + 55, + 563, + 295, + 612 + ], + "type": "inline_equation", + "content": "\\pmb{q}_i^{up}" + }, + { + "bbox": [ + 55, + 563, + 295, + 612 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 55, + 563, + 295, + 612 + ], + "type": "inline_equation", + "content": "\\pmb{k}_i^{up}" + }, + { + "bbox": [ + 55, + 563, + 295, + 612 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 55, + 563, + 295, + 612 + ], + "type": "inline_equation", + "content": "\\pmb{v}_i^{up}" + }, + { + "bbox": [ + 55, + 563, + 295, + 612 + ], + "type": "text", + "content": " be the original features. The extracted features, " + }, + { + "bbox": [ + 55, + 563, + 295, + 612 + ], + "type": "inline_equation", + "content": "\\bar{\\pmb{k}}_i" + }, + { + "bbox": [ + 55, + 563, + 295, + 612 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 563, + 295, + 612 + ], + "type": "inline_equation", + "content": "\\bar{\\pmb{v}}_i" + }, + { + "bbox": [ + 55, + 563, + 295, + 612 + ], + "type": "text", + "content": ", are combined through linear weighting and summation with the prior values of the upgraded model using the Attention operation in Eq. (3):" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 89, + 616, + 294, + 647 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 89, + 616, + 294, + 647 + ], + "spans": [ + { + "bbox": [ + 89, + 616, + 294, + 647 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} \\bar{\\boldsymbol{Z}} = \\operatorname{Attention}\\left(\\boldsymbol{q}_{i}^{up}, \\boldsymbol{k}_{i}^{up}, \\boldsymbol{v}_{i}^{up}\\right) + \\\\ \\lambda \\operatorname{Attention}\\left(\\boldsymbol{q}_{i}^{up}, \\bar{\\boldsymbol{k}}_{i}, \\bar{\\boldsymbol{v}}_{i}\\right). \\end{array} \\tag{13}", + "image_path": "487b40ef252258c8289612e5ee6d025d667f2cea1858a8ab09e88b1ed7ce3b8a.jpg" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 55, + 653, + 296, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 653, + 296, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 653, + 296, + 714 + ], + "type": "text", + "content": "The result of Eq. (13), " + }, + { + "bbox": [ + 55, + 653, + 296, + 714 + ], + "type": "inline_equation", + "content": "\\bar{Z}" + }, + { + "bbox": [ + 55, + 653, + 296, + 714 + ], + "type": "text", + "content": ", serves as the output for the " + }, + { + "bbox": [ + 55, + 653, + 296, + 714 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 55, + 653, + 296, + 714 + ], + "type": "text", + "content": "-th cross-attention layer and will be forwarded to the next layer of the upgraded model. 
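The alignment and injection steps of Eqs. (11) to (13) admit a similarly compact sketch; again a simplified single-head illustration with invented module names, not the authors' code.

```python
import torch
import torch.nn as nn

def attention(q, k, v):
    # Single-head instance of Eq. (3).
    return torch.softmax(q @ k.transpose(-2, -1) / (k.shape[-1] ** 0.5), dim=-1) @ v

class AlignAndInject(nn.Module):
    """Eqs. (11)-(13): align row i of K_bar/V_bar to the width of the upgraded
    model's i-th cross-attention layer, then add a lambda-weighted branch."""
    def __init__(self, d_scm, up_dims):  # up_dims[i] plays the role of d-bar_i
        super().__init__()
        self.align_k = nn.ModuleList(nn.Linear(d_scm, d) for d in up_dims)
        self.align_v = nn.ModuleList(nn.Linear(d_scm, d) for d in up_dims)

    def forward(self, i, q_up, k_up, v_up, K_bar, V_bar, lam=1.0):
        k_i = self.align_k[i](K_bar[i])  # Eq. (11)
        v_i = self.align_v[i](V_bar[i])  # Eq. (12)
        # Eq. (13): the frozen layer's own attention plus the injected branch.
        return attention(q_up, k_up, v_up) + lam * attention(q_up, k_i, v_i)
```

Here lam stands in for λ and, per the surrounding text, can be retuned for downstream tasks at inference time.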
3.4. Optimization Loss Function

To optimize the A4A framework, we employ the loss function $\mathcal{L}_{LDM}$ of the upgraded model $M_{up}$, as defined in Eq. (1):

$$\mathcal{L}_{LDM} = \mathbb{E}_{z,\, \epsilon \sim \mathcal{N}(0, I),\, t}\, \big\| \epsilon - \epsilon_{\theta}^{up}\left(z_{t}, t, c^{up}\right) \big\|_{2}^{2}. \tag{14}$$

The condition $c^{up}$ includes the original text prompt $y_{t}$ and the new control conditions $y_{n}$. The upgraded model $\epsilon_{\theta}^{up}$ acquires the new control conditions and capabilities through the injected adapter features.
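A minimal training-step sketch of Eq. (14), assuming a standard DDPM-style noise-prediction objective; `eps_theta_up` stands for the frozen upgraded model with adapter features injected, and all names are illustrative rather than the authors' code.

```python
import torch
import torch.nn.functional as F

def ldm_loss(eps_theta_up, z0, c_up, alphas_cumprod):
    """One Monte-Carlo sample of Eq. (14) for a batch of 4-D latents z0.

    alphas_cumprod: 1-D tensor of cumulative alpha products on z0.device.
    """
    b = z0.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (b,), device=z0.device)
    eps = torch.randn_like(z0)                        # epsilon ~ N(0, I)
    a_t = alphas_cumprod[t].view(b, 1, 1, 1)
    z_t = a_t.sqrt() * z0 + (1.0 - a_t).sqrt() * eps  # forward diffusion
    # c_up bundles the text prompt y_t and the new control conditions y_n.
    return F.mse_loss(eps_theta_up(z_t, t, c_up), eps)
```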
The base model and the upgraded model remain frozen; the loss function is used exclusively to update the A4A module, which consists of the network with learnable features and the PTA being fine-tuned. Because the trainable modules vary considerably in parameter count, we adopt an asynchronous training strategy. Specifically, the projection in $S_{co}$ and the alignment in $S_{up}$ are trained with a learning rate of $1 \times 10^{-5}$ to avoid overfitting, while a learning rate of $1 \times 10^{-4}$ is applied to the other components. If the pretrained adapter is fine-tuned, a smaller learning rate of $1 \times 10^{-6}$ is used to retain the PTA's conditional encoding capabilities.
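This per-module schedule maps directly onto optimizer parameter groups. The sketch below uses placeholder `nn.Linear` modules for the actual trainable components, and the optimizer choice (AdamW) is our assumption; only the learning rates are taken from the text.

```python
import torch
import torch.nn as nn

# Placeholders standing in for the real trainable components of A4A.
projection = nn.Linear(768, 768)    # projection in S_co
alignment  = nn.Linear(768, 1024)   # alignment in S_up
other      = nn.Linear(1024, 1024)  # remaining A4A components
pta        = nn.Linear(768, 768)    # pretrained adapter (optional fine-tuning)

optimizer = torch.optim.AdamW([
    {"params": projection.parameters(), "lr": 1e-5},
    {"params": alignment.parameters(),  "lr": 1e-5},
    {"params": other.parameters(),      "lr": 1e-4},
    {"params": pta.parameters(),        "lr": 1e-6},
])
```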
4. Experiments

4.1. Experimental Settings

Datasets. In our study, we utilize the CelebAMask-HQ dataset [15], which consists of approximately 20,000 high-quality facial images at a resolution of $1024 \times 1024$ pixels, and the OpenImages dataset [14], which offers a diverse collection of images featuring a wide range of clearly identifiable objects. We employ the BLIP-2 model [17] to generate captions for both datasets, which serve as the text prompts paired with the images. For validation, we randomly select 100 images from CelebAMask-HQ, ensuring they are distinct from the training set, and generate four images for each reference image.

Implementation Details. We implement A4A using SD1.5 [26] as the base model, and SDXL [23] and Pixart-Alpha (XL) [4] as the upgraded models. Both SDXL and Pixart-Alpha are significantly larger text-to-image models than SD1.5, and we use them as the upgraded models for the U-Net and transformer architectures, respectively. We adopt the IP-Adapter [40] as our pretrained adapter; the IP-Adapter series has recently demonstrated remarkable personalization capabilities across a variety of tasks, highlighting its potential to advance text-to-image generation. Because GPU types vary across the compared methods and comprehensive GPU-hour reports are unavailable, we use Sample Count (SC), the number of samples processed up to a given point in training, as the metric for training cost and efficiency. For example, the officially published PTA has an SC of 64M (i.e., it is trained on 8 V100 GPUs for 1M steps with a batch size of 8 per GPU). The terms in the following charts are defined as: (1) A4A (ours): transferring the PTA from the base model to the upgraded model using our method; (2) X-Adapter: transferring the PTA using the published X-Adapter [25] checkpoint; (3) PTA-UM: the officially published pretrained adapter from the upgraded model.
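SC is a hardware-agnostic product of parallelism, step count, and per-device batch size; the quoted 64M figure for the published PTA works out as follows.

```python
# Sample Count (SC) = #GPUs x #steps x batch size per GPU.
num_gpus, steps, batch_per_gpu = 8, 1_000_000, 8
sc = num_gpus * steps * batch_per_gpu
print(f"SC = {sc:,}")  # SC = 64,000,000 -> reported as "SC 64M"
```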
Figure 2. Visualization of personalized human generation using SDXL (U-Net architecture) as the upgraded model. A4A (ours) is compared with the previous work X-Adapter and the pretrained adapter from the upgraded model (PTA-UM).

Figure 3. Visualization of personalized object generation using SDXL (U-Net architecture) as the upgraded model. A4A (ours) is compared with the previous work X-Adapter and the pretrained adapter from the upgraded model (PTA-UM).

Evaluation Metrics. To verify the effectiveness and efficiency of our method, we evaluate A4A on two tasks: personalized human generation (ID customization) and personalized object generation (IP customization). For ID customization, we use IDentity Alignment scores (IDA) to measure the similarity between generated and reference facial features, following the OMG method [12]. Specifically, we employ the Antelopev2 model from the InsightFace library [8] to detect faces and extract facial embeddings from both reference and generated images. For IP customization, we extract image embeddings using a pretrained CLIP model [24] and report the embedding similarity between generated and reference images, referred to as the CLIP-score. In the graphs, the previously defined SC serves as the horizontal axis representing the training process.
Figure 4. Visualization of transferring the IP-Adapter from SD1.5 (U-Net architecture) to Pixart-Alpha (transformer architecture). Panel headings: A4A (ours), Text Prompt, IP-Adapter*; the shared prompt is "Photo of a man/woman standing in a garden, dressed in casual clothing". The middle column, framed in orange, serves as the reference for comparison. The left side shows the A4A results, which closely resemble the reference, while the right side (IP-Adapter*) displays the results of training the IP-Adapter from scratch.

4.2. Transferring to the U-Net model

We use SDXL as the upgraded model and compare our results with the officially published PTA-UM [40], represented by the green star in Fig. 5. To facilitate comparison, we extend a horizontal line from this point to represent the performance of PTA-UM, rather than its actual training curve. As illustrated in Fig. 5, transferring the PTA using A4A achieves an IDA comparable to that of PTA-UM at approximately SC 0.5M (A4A is trained on 2 A100 GPUs with a batch size of 4 per GPU). Furthermore, as shown in Fig. 2, the rows "A4A (ours)" and "PTA-UM" demonstrate that A4A not only preserves the identity of the characters but also maintains the editing capabilities of the upgraded model at minimal training cost.
For personalized object generation (IP customization), as shown in Fig. 6, transferring the PTA using A4A achieves a CLIP-score comparable to that of PTA-UM at approximately 2.5M SC, versus 64M SC for PTA-UM. As illustrated in Fig. 3, the rows labeled "A4A (ours)" and "PTA-UM" demonstrate that the control ability guided by the text prompt is also preserved. When the attention-based adapter is transferred to the upgraded U-Net model, A4A effectively retains and transfers the adapter's capabilities at minimal training cost. The quantitative indicators corresponding to the line charts are reported in Tab. 1.

Figure 5. IDA of ID preservation when transferring to SDXL, compared to the pretrained adapter from SDXL (PTA-UM). The horizontal axis is in units of M.

Figure 6. CLIP-score of personalized object generation when transferring to SDXL, compared to PTA-UM. The horizontal axis is in units of M.

4.3. Transferring to the Transformer model

We use Pixart-Alpha [4], with the transformer architecture, as the upgraded model. Given the absence of a published IP-Adapter for Pixart-Alpha, we train one from scratch on the CelebAMask-HQ dataset as our baseline, represented by the orange line labeled "IP-Adapter*" in Fig. 7 and Fig. 4. As illustrated in Fig. 7, employing A4A to transfer pretrained adapters to transformer models offers a significant advantage over training adapters directly on the transformer model. It is worth noting that our work is the first to transfer an adapter from a U-Net model to a transformer model, achieving strong results. Fig. 4 presents a visual comparison between our method and training the IP-Adapter from scratch, both at 30k steps with a batch size of 8. The images generated by A4A show significant facial similarity to the reference image, while the from-scratch results on the right do not reach comparable quality.

Figure 7. IDA of transferring the pretrained IP-Adapter from SD1.5 to Pixart-Alpha using A4A (red line with dots), compared to training the IP-Adapter from scratch (orange line with stars). The horizontal axis is in units of K. Best viewed in color.
4.4. Ablation Study

In the previous sections, we adopted the full A4A paradigm, which trains our modules, namely Coupling Space Projection and Upgraded Space Mapping, and additionally fine-tunes the PTA. To demonstrate that fine-tuning is not the core driving force of our method, we conducted the following experiment. As shown in Fig. 8, the two curves are very close, with the fine-tuning paradigm showing only a slight improvement over the non-fine-tuning version.

Figure 8. The yellow line, labeled "A4A (ours w/o fine-tuning)", represents training A4A without fine-tuning the PTA, while the red line represents the full A4A approach with fine-tuning the PTA.

We present experiments on the hyperparameter settings of the learning rates for each component in Sec. 7; since the Projection and Alignment have similar parameter counts, we group them together. The ablation study of the two core modules is presented in Sec. 8, which demonstrates that our design achieves a satisfactory transfer effect with an efficient structure.
| PTA: IP-Adapter | IDA↑ | CLIP-score↑ |
|---|---|---|
| X-Adapter | 0.062 | 0.7894 |
| PTA-UM | 0.4531 | 0.9124 |
| A4A (ours) | 0.5127 | 0.9154 |

Table 1. Evaluation metrics for IP customization (CLIP-score) and ID customization (IDA). Using SDXL as the upgraded model and the IP-Adapter as the pretrained adapter, A4A (ours) is compared with the transfer method X-Adapter and the pretrained adapter from the upgraded model (PTA-UM).

4.5. Comparison with X-Adapter

The previous work X-Adapter [25] is designed specifically for transferring adapters from U-Net models. To demonstrate the effectiveness of A4A, we also compare against it. As shown in Tab. 1, our method achieves better generation results with the transferred adapter for both IP and ID customization. The visualizations in Fig. 2 and Fig. 3, particularly the rows labeled "X-Adapter", further substantiate this when compared to the adjacent rows. It is also worth noting that our method requires only the adapter for training and inference, without the need to denoise with the base model as in X-Adapter, which makes it more efficient.

5. Conclusion

We propose A4A (Adapter for Adapter), a novel framework designed to address the challenges of transferring pretrained adapters across rapidly evolving model architectures. By employing an all-for-all mapping approach, A4A seamlessly transfers attention-based adapters from U-Net to transformer models without extensive retraining. The framework's two key components, Coupling Space Projection and Upgraded Space Mapping, effectively bridge the adapter features with the structures of the upgraded model. Our experimental results demonstrate that A4A preserves both the generative power of the upgraded models and the controllability of the original adapters. This work offers a scalable solution for cross-architecture adapter transfer.
6. Acknowledgment

This research is supported by the Artificial Intelligence National Science and Technology Major Project 2023ZD0121200 and the National Natural Science Foundation of China under Grants 62222212 and 623B2094.

References

[1] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
[2] Yogesh Balaji, Seungjun Nah, Xun Huang, Arash Vahdat, Jiaming Song, Qinsheng Zhang, Karsten Kreis, Miika Aittala, Timo Aila, Samuli Laine, et al. eDiff-I: Text-to-image diffusion models with an ensemble of expert denoisers. arXiv preprint arXiv:2211.01324, 2022.
[3] Marco Bellagente, Manuel Brack, Hannah Teufel, Felix Friedrich, Björn Deiseroth, Constantin Eichenberg, Andrew M Dai, Robert Baldock, Souradeep Nanda, Koen Oostermeijer, et al. MultiFusion: Fusing pre-trained models for multi-lingual, multi-modal image generation. Advances in Neural Information Processing Systems, 36:59502-59521, 2023.
3" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 62, + 361, + 296, + 416 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 361, + 296, + 416 + ], + "spans": [ + { + "bbox": [ + 62, + 361, + 296, + 416 + ], + "type": "text", + "content": "[4] Junsong Chen, Jincheng Yu, Chongjian Ge, Lewei Yao, Enze Xie, Yue Wu, Zhongdao Wang, James Kwok, Ping Luo, Huchuan Lu, et al. Pixart-alpha: Fast training of diffusion transformer for photorealistic text-to-image synthesis. arXiv preprint arXiv:2310.00426, 2023. 2, 3, 5, 8" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 62, + 417, + 296, + 449 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 417, + 296, + 449 + ], + "spans": [ + { + "bbox": [ + 62, + 417, + 296, + 449 + ], + "type": "text", + "content": "[5] Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. Advances in neural information processing systems, 34:8780-8794, 2021. 1" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 62, + 449, + 296, + 515 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 449, + 296, + 515 + ], + "spans": [ + { + "bbox": [ + 62, + 449, + 296, + 515 + ], + "type": "text", + "content": "[6] Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas Müller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, et al. Scaling rectified flow transformers for high-resolution image synthesis. In Forty-first International Conference on Machine Learning, 2024. 2, 3" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 62, + 516, + 296, + 559 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 516, + 296, + 559 + ], + "spans": [ + { + "bbox": [ + 62, + 516, + 296, + 559 + ], + "type": "text", + "content": "[7] Rinon Gal, Moab Arar, Yuval Atzmon, Amit H Bermano, Gal Chechik, and Daniel Cohen-Or. Encoder-based domain tuning for fast personalization of text-to-image models. ACM Transactions on Graphics (TOG), 42(4):1-13, 2023. 3" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 62, + 560, + 296, + 591 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 560, + 296, + 591 + ], + "spans": [ + { + "bbox": [ + 62, + 560, + 296, + 591 + ], + "type": "text", + "content": "[8] Jia Guo, Jiankang Deng, Xiang An, Jack Yu, and Baris Gecer. Insightface: 2d and 3d face analysis project, 2019. 6" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 62, + 593, + 296, + 656 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 593, + 296, + 656 + ], + "spans": [ + { + "bbox": [ + 62, + 593, + 296, + 656 + ], + "type": "text", + "content": "[9] Thomas A Halgren, Robert B Murphy, Richard A Friesner, Hege S Beard, Leah L Frye, W Thomas Pollard, and Jay L Banks. Glide: a new approach for rapid, accurate docking and scoring. 2. enrichment factors in database screening. Journal of medicinal chemistry, 47(7):1750-1759, 2004. 2, 3" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 57, + 658, + 296, + 712 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 658, + 296, + 712 + ], + "spans": [ + { + "bbox": [ + 57, + 658, + 296, + 712 + ], + "type": "text", + "content": "[10] Amir Hertz, Andrey Voynov, Shlomi Fruchter, and Daniel Cohen-Or. Style aligned image generation via shared attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4775–4785, 2024. 
3" + } + ] + } + ], + "index": 12 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 73, + 555, + 713 + ], + "type": "list", + "angle": 0, + "index": 27, + "blocks": [ + { + "bbox": [ + 316, + 73, + 555, + 106 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 73, + 555, + 106 + ], + "spans": [ + { + "bbox": [ + 316, + 73, + 555, + 106 + ], + "type": "text", + "content": "[11] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in neural information processing systems, 33:6840-6851, 2020. 1, 2" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 317, + 108, + 555, + 162 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 108, + 555, + 162 + ], + "spans": [ + { + "bbox": [ + 317, + 108, + 555, + 162 + ], + "type": "text", + "content": "[12] Zhe Kong, Yong Zhang, Tianyu Yang, Tao Wang, Kaihao Zhang, Bizhu Wu, Guanying Chen, Wei Liu, and Wenhan Luo. Omg: Occlusion-friendly personalized multi-concept generation in diffusion models. arXiv preprint arXiv:2403.10983, 2024.6" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 316, + 163, + 554, + 218 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 163, + 554, + 218 + ], + "spans": [ + { + "bbox": [ + 316, + 163, + 554, + 218 + ], + "type": "text", + "content": "[13] Nupur Kumari, Bingliang Zhang, Richard Zhang, Eli Shechtman, and Jun-Yan Zhu. Multi-concept customization of text-to-image diffusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1931-1941, 2023. 3" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 316, + 220, + 554, + 295 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 220, + 554, + 295 + ], + "spans": [ + { + "bbox": [ + 316, + 220, + 554, + 295 + ], + "type": "text", + "content": "[14] Alina Kuznetsova, Hassan Rom, Neil Alldrin, Jasper Uijlings, Ivan Krasin, Jordi Pont-Tuset, Shahab Kamali, Stefan Popov, Matteo Malloci, Alexander Kolesnikov, et al. The open images dataset v4: Unified image classification, object detection, and visual relationship detection at scale. International Journal of Computer Vision, 128(7):1956-1981, 2020. 5" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 316, + 297, + 554, + 342 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 297, + 554, + 342 + ], + "spans": [ + { + "bbox": [ + 316, + 297, + 554, + 342 + ], + "type": "text", + "content": "[15] Cheng-Han Lee, Ziwei Liu, Lingyun Wu, and Ping Luo. Maskgan: Towards diverse and interactive facial image manipulation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 5" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 316, + 343, + 554, + 374 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 343, + 554, + 374 + ], + "spans": [ + { + "bbox": [ + 316, + 343, + 554, + 374 + ], + "type": "text", + "content": "[16] Yuseung Lee and Minhyuk Sung. Reground: Improving textual and spatial grounding at no cost. arXiv preprint arXiv:2403.13589, 2024. 3" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 316, + 376, + 554, + 431 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 376, + 554, + 431 + ], + "spans": [ + { + "bbox": [ + 316, + 376, + 554, + 431 + ], + "type": "text", + "content": "[17] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. 
[18] Yuheng Li, Haotian Liu, Qingyang Wu, Fangzhou Mu, Jianwei Yang, Jianfeng Gao, Chunyuan Li, and Yong Jae Lee. GLIGEN: Open-set grounded text-to-image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22511-22521, 2023.
[19] Han Lin, Jaemin Cho, Abhay Zala, and Mohit Bansal. Ctrl-Adapter: An efficient and versatile framework for adapting diverse controls to any diffusion model. arXiv preprint arXiv:2404.09967, 2024.
[20] Yixin Liu, Kai Zhang, Yuan Li, Zhiling Yan, Chujie Gao, Ruoxi Chen, Zhengqing Yuan, Yue Huang, Hanchi Sun, Jianfeng Gao, et al. Sora: A review on background, technology, limitations, and opportunities of large vision models. arXiv preprint arXiv:2402.17177, 2024.
[21] Xichen Pan, Li Dong, Shaohan Huang, Zhiliang Peng, Wenhu Chen, and Furu Wei. Kosmos-G: Generating images in context with multimodal large language models. arXiv preprint arXiv:2310.02992, 2023.
[22] William Peebles and Saining Xie. Scalable diffusion models with transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4195-4205, 2023.
[23] Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. SDXL: Improving latent diffusion models for high-resolution image synthesis. arXiv preprint arXiv:2307.01952, 2023.
Sdxl: Improving latent diffusion mod" + } + ] + } + ], + "index": 26 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "type": "text", + "content": "18484" + } + ] + } + ], + "index": 28 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 296, + 713 + ], + "type": "list", + "angle": 0, + "index": 13, + "blocks": [ + { + "bbox": [ + 76, + 72, + 296, + 95 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 76, + 72, + 296, + 95 + ], + "spans": [ + { + "bbox": [ + 76, + 72, + 296, + 95 + ], + "type": "text", + "content": "els for high-resolution image synthesis. arXiv preprint arXiv:2307.01952, 2023. 1, 2, 5" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 96, + 296, + 162 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 96, + 296, + 162 + ], + "spans": [ + { + "bbox": [ + 56, + 96, + 296, + 162 + ], + "type": "text", + "content": "[24] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748-8763. PMLR, 2021. 7" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 163, + 296, + 229 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 163, + 296, + 229 + ], + "spans": [ + { + "bbox": [ + 56, + 163, + 296, + 229 + ], + "type": "text", + "content": "[25] Lingmin Ran, Xiaodong Cun, Jia-Wei Liu, Rui Zhao, Song Zijie, Xintao Wang, Jussi Keppo, and Mike Zheng Shou. X-adapter: Adding universal compatibility of plugins for upgraded diffusion model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8775-8784, 2024. 1, 3, 6, 8" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 56, + 231, + 296, + 285 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 231, + 296, + 285 + ], + "spans": [ + { + "bbox": [ + 56, + 231, + 296, + 285 + ], + "type": "text", + "content": "[26] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684-10695, 2022. 1, 2, 5" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 56, + 286, + 296, + 352 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 286, + 296, + 352 + ], + "spans": [ + { + "bbox": [ + 56, + 286, + 296, + 352 + ], + "type": "text", + "content": "[27] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In Medical image computing and computer-assisted intervention-MICCAI 2015: 18th international conference, Munich, Germany, October 5-9, 2015, proceedings, part III 18, pages 234-241. Springer, 2015. 
1, 2" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 56, + 354, + 296, + 419 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 354, + 296, + 419 + ], + "spans": [ + { + "bbox": [ + 56, + 354, + 296, + 419 + ], + "type": "text", + "content": "[28] Chitwan Sahara, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al. Photorealistic text-to-image diffusion models with deep language understanding. Advances in neural information processing systems, 35:36479-36494, 2022. 2, 3" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 56, + 421, + 296, + 464 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 421, + 296, + 464 + ], + "spans": [ + { + "bbox": [ + 56, + 421, + 296, + 464 + ], + "type": "text", + "content": "[29] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. arXiv preprint arXiv:2011.13456, 2020. 2" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 56, + 466, + 296, + 509 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 466, + 296, + 509 + ], + "spans": [ + { + "bbox": [ + 56, + 466, + 296, + 509 + ], + "type": "text", + "content": "[30] Yoad Tewel, Rinon Gal, Gal Chechik, and Yuval Atzmon. Key-locked rank one editing for text-to-image personalization. In ACM SIGGRAPH 2023 Conference Proceedings, pages 1-11, 2023. 3" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 56, + 511, + 296, + 543 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 511, + 296, + 543 + ], + "spans": [ + { + "bbox": [ + 56, + 511, + 296, + 543 + ], + "type": "text", + "content": "[31] Aaron Van Den Oord, Oriol Vinyals, et al. Neural discrete representation learning. Advances in neural information processing systems, 30, 2017. 2" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 56, + 545, + 296, + 567 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 545, + 296, + 567 + ], + "spans": [ + { + "bbox": [ + 56, + 545, + 296, + 567 + ], + "type": "text", + "content": "[32] A Vaswani. Attention is all you need. Advances in Neural Information Processing Systems, 2017. 2" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 56, + 568, + 296, + 611 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 568, + 296, + 611 + ], + "spans": [ + { + "bbox": [ + 56, + 568, + 296, + 611 + ], + "type": "text", + "content": "[33] Haofan Wang, Matteo Spinelli, Qixun Wang, Xu Bai, Zekui Qin, and Anthony Chen. Instantstyle: Free lunch towards style-preserving in text-to-image generation. arXiv preprint arXiv:2404.02733, 2024. 3" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 56, + 613, + 296, + 656 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 613, + 296, + 656 + ], + "spans": [ + { + "bbox": [ + 56, + 613, + 296, + 656 + ], + "type": "text", + "content": "[34] Qixun Wang, Xu Bai, Haofan Wang, Zekui Qin, Anthony Chen, Huaxia Li, Xu Tang, and Yao Hu. Instantid: Zero-shot identity-preserving generation in seconds. arXiv preprint arXiv:2401.07519, 2024. 
3" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 56, + 658, + 296, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 658, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 56, + 658, + 296, + 713 + ], + "type": "text", + "content": "[35] Xudong Wang, Trevor Darrell, Sai Saketh Rambhatla, Rohit Girdhar, and Ishan Misra. Instancediffusion: Instance-level control for image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6232-6242, 2024. 3" + } + ] + } + ], + "index": 12 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 72, + 554, + 529 + ], + "type": "list", + "angle": 0, + "index": 23, + "blocks": [ + { + "bbox": [ + 316, + 72, + 554, + 137 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 72, + 554, + 137 + ], + "spans": [ + { + "bbox": [ + 316, + 72, + 554, + 137 + ], + "type": "text", + "content": "[36] Yuxiang Wei, Yabo Zhang, Zhilong Ji, Jinfeng Bai, Lei Zhang, and Wangmeng Zuo. Elite: Encoding visual concepts into textual embeddings for customized text-to-image generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 15943-15953, 2023. 3" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 316, + 140, + 554, + 183 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 140, + 554, + 183 + ], + "spans": [ + { + "bbox": [ + 316, + 140, + 554, + 183 + ], + "type": "text", + "content": "[37] Zhichao Wei, Qingkun Su, Long Qin, and Weizhi Wang. Mm-diff: High-fidelity image personalization via multi-modal condition integration. arXiv preprint arXiv:2403.15059, 2024. 3" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 316, + 185, + 554, + 228 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 185, + 554, + 228 + ], + "spans": [ + { + "bbox": [ + 316, + 185, + 554, + 228 + ], + "type": "text", + "content": "[38] Yinwei Wu, Xianpan Zhou, Bing Ma, Xuefeng Su, Kai Ma, and Xinchao Wang. Ifadapter: Instance feature control for grounded text-to-image generation. arXiv preprint arXiv:2409.08240, 2024. 3" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 316, + 230, + 554, + 273 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 230, + 554, + 273 + ], + "spans": [ + { + "bbox": [ + 316, + 230, + 554, + 273 + ], + "type": "text", + "content": "[39] Peng Xing, Haofan Wang, Yanpeng Sun, Qixun Wang, Xu Bai, Hao Ai, Renyuan Huang, and Zechao Li. Csgo: Content-style composition in text-to-image generation. arXiv preprint arXiv:2408.16766, 2024. 3" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 316, + 274, + 554, + 317 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 274, + 554, + 317 + ], + "spans": [ + { + "bbox": [ + 316, + 274, + 554, + 317 + ], + "type": "text", + "content": "[40] Hu Ye, Jun Zhang, Sibo Liu, Xiao Han, and Wei Yang. Ip-adapter: Text compatible image prompt adapter for text-to-image diffusion models. arXiv preprint arXiv:2308.06721, 2023. 3, 4, 5, 7, 1" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 316, + 319, + 554, + 363 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 319, + 554, + 363 + ], + "spans": [ + { + "bbox": [ + 316, + 319, + 554, + 363 + ], + "type": "text", + "content": "[41] Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models. 
In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3836-3847, 2023. 1, 3" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 316, + 365, + 554, + 430 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 365, + 554, + 430 + ], + "spans": [ + { + "bbox": [ + 316, + 365, + 554, + 430 + ], + "type": "text", + "content": "[42] Yuxuan Zhang, Yiren Song, Jiaming Liu, Rui Wang, Jinpeng Yu, Hao Tang, Huaxia Li, Xu Tang, Yao Hu, Han Pan, et al. Ssr-encoder: Encoding selective subject representation for subject-driven generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8069–8078, 2024. 3" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 316, + 431, + 554, + 485 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 431, + 554, + 485 + ], + "spans": [ + { + "bbox": [ + 316, + 431, + 554, + 485 + ], + "type": "text", + "content": "[43] Dewei Zhou, You Li, Fan Ma, Xiaoting Zhang, and Yi Yang. Mige: Multi-instance generation controller for text-to-image synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6818-6828, 2024. 3" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 316, + 487, + 554, + 529 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 487, + 554, + 529 + ], + "spans": [ + { + "bbox": [ + 316, + 487, + 554, + 529 + ], + "type": "text", + "content": "[44] Zhengguang Zhou, Jing Li, Huaxia Li, Nemo Chen, and Xu Tang. Storymaker: Towards holistic consistent characters in text-to-image generation. arXiv preprint arXiv:2409.12576, 2024.3" + } + ] + } + ], + "index": 22 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "text", + "content": "18485" + } + ] + } + ], + "index": 24 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2025/AA-CLIP_ Enhancing Zero-Shot Anomaly Detection via Anomaly-Aware CLIP/1a15f5c2-189a-4bf7-936d-11fcae85b257_content_list.json b/2025/AA-CLIP_ Enhancing Zero-Shot Anomaly Detection via Anomaly-Aware CLIP/1a15f5c2-189a-4bf7-936d-11fcae85b257_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..0c9376a8193c5042f2b75453b344d2d2306f7812 --- /dev/null +++ b/2025/AA-CLIP_ Enhancing Zero-Shot Anomaly Detection via Anomaly-Aware CLIP/1a15f5c2-189a-4bf7-936d-11fcae85b257_content_list.json @@ -0,0 +1,1728 @@ +[ + { + "type": "text", + "text": "AA-CLIP: Enhancing Zero-Shot Anomaly Detection via Anomaly-Aware CLIP", + "text_level": 1, + "bbox": [ + 99, + 130, + 898, + 152 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Wenxin Ma $^{1,2}$ Xu Zhang $^{1,2}$ Qingsong Yao $^{5}$ Fenghe Tang $^{1,2}$ Chenxu Wu $^{1,2}$", + "bbox": [ + 133, + 178, + 864, + 198 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Yingtai Li $^{1,2}$ Rui Yan $^{1,2}$ Zihang Jiang $^{1,2*}$ S.Kevin Zhou $^{1,2,3,4*}$", + "bbox": [ + 205, + 198, + 797, + 217 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1 School of Biomedical Engineering, Division of Life Sciences and Medicine, USTC", + "bbox": [ + 163, + 215, + 833, + 233 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "$^{2}$ 
MIRACLE Center, Suzhou Institute for Advanced Research, USTC", + "bbox": [ + 228, + 233, + 766, + 250 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "3 Key Laboratory of Intelligent Information Processing of CAS, ICT, CAS", + "bbox": [ + 205, + 250, + 790, + 268 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "$^{4}$ State Key Laboratory of Precision and Intelligent Chemistry, USTC", + "bbox": [ + 225, + 268, + 771, + 287 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "5 Stanford University", + "bbox": [ + 413, + 287, + 583, + 304 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "wxma@mail.ustc.edu.cn jzh0103@ustc.edu.cn s.kevin.zhou@gmail.com", + "bbox": [ + 210, + 306, + 779, + 321 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/c262008dc6a5c352ce4c87a0c0b8d241c611605d740b4021175addac6a5ab112.jpg", + "image_caption": [ + "Features from Original CLIP", + "(Left)", + "Figure 1. (Left) CLIP's anomaly unawareness: Category-level image-text alignment in pre-training leads to CLIP's vague distinctions in anomaly/normal semantics and inaccurate patch-text alignment. (Middle) Our two-stage adaptation strategy: In Stage1, anomaly and normal text features are disentangled as anchors in text space; in Stage2, patch-level visual features are trained to align to these anchors, forming Anomaly-Aware CLIP. (Right) Generalizable anomaly awareness: Our method enables CLIP with generalizable anomaly awareness for both known and unseen classes." + ], + "image_footnote": [], + "bbox": [ + 91, + 345, + 285, + 500 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/405427f90a30740df75eebe906d8559fa16b9540f06778b3baeee3d49ac5bb62.jpg", + "image_caption": [ + "Stage1: Disentangling Anomaly-Aware Text Anchors", + "(Middle)" + ], + "image_footnote": [], + "bbox": [ + 287, + 345, + 486, + 498 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/199d6c14ef078069f92c1d73c20758998ba76787d4d6aa8c8ad9f11f37685ab5.jpg", + "image_caption": [ + "Stage2: Aligning Patch Features According to Text Anchors" + ], + "image_footnote": [], + "bbox": [ + 486, + 345, + 684, + 498 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/f86c1c21560c68444a6a9dbb56d8d95195a458263ac431a206d76551bc0659ef.jpg", + "image_caption": [ + "Features from our Anomaly-Aware CLIP", + "(Right)" + ], + "image_footnote": [], + "bbox": [ + 696, + 345, + 895, + 498 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 246, + 619, + 326, + 635 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Anomaly detection (AD) identifies outliers for applications like defect and lesion detection. While CLIP shows promise for zero-shot AD tasks due to its strong generalization capabilities, its inherent Anomaly-Unawareness leads to limited discrimination between normal and abnormal features. To address this problem, we propose Anomaly-Aware CLIP (AA-CLIP), which enhances CLIP's anomaly discrimination ability in both text and visual spaces while preserving its generalization capability. AA-CLIP is achieved through a straightforward yet effective two-stage approach: it first creates anomaly-aware text anchors to differentiate normal and abnormal semantics clearly, then aligns patch-level visual features with these anchors for precise anomaly localization. 
This two-stage strategy, with the help of residual adapters, gradually adapts CLIP in a controlled man", + "bbox": [ + 88, + 651, + 485, + 878 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "ner, achieving effective AD while maintaining CLIP's class knowledge. Extensive experiments validate AA-CLIP as a resource-efficient solution for zero-shot AD tasks, achieving state-of-the-art results in industrial and medical applications. The code is available at https://github.com/Mwxinnn/AA-CLIP.", + "bbox": [ + 511, + 621, + 906, + 712 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1. Introduction", + "text_level": 1, + "bbox": [ + 513, + 729, + 645, + 744 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Anomaly detection (AD) involves modeling the distribution of a dataset to identify outliers, such as defects in industrial products [2] or lesions in medical images [13]. Despite that previous AD frameworks [10, 11, 15, 23, 31, 58] effectively detect anomalies when sufficient labeled data is available for specific classes, their high resource demands often limit their generalization ability to novel and rare classes. This limitation is particularly challenging in real-world scenarios where collecting comprehensive labeled datasets for AD is often infeasible, necessitating the exploration of low-shot", + "bbox": [ + 509, + 750, + 906, + 902 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "CVF", + "bbox": [ + 106, + 2, + 181, + 42 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore.", + "bbox": [ + 236, + 0, + 810, + 46 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "*Corresponding author.", + "bbox": [ + 109, + 887, + 235, + 900 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "4744", + "bbox": [ + 482, + 944, + 514, + 955 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "learning and transfer learning approaches.", + "bbox": [ + 89, + 90, + 370, + 106 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Contrastive Language-Image Pretraining (CLIP) model has emerged as a promising solution, demonstrating remarkable generalization capabilities across various zero-shot tasks [24-26, 42]. Building upon CLIP's success, several recent studies have adapted CLIP for few/zero-shot AD tasks by utilizing anomaly-related descriptions to guide the detection of anomalous regions. Specifically, the vision encoder is trained to map anomaly images to visual features that align more closely with text features of abnormal descriptions than with those of normal descriptions [29, 30, 49, 60]. Further works [6, 7, 17, 41] have focused on enhancing CLIP's patch-level feature representations to achieve better alignment with text features, resulting in improved anomaly localization performance.", + "bbox": [ + 89, + 108, + 483, + 319 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "These methods depend on text features that need to be anomaly-aware to effectively differentiate abnormalities. However, recent studies highlight CLIP's limitations in fine-grained semantic perception and reasoning [21, 22, 36, 38, 45, 46]. 
Upon exploring CLIP's text features for AD, we observe that while CLIP's text encoder effectively captures object-level information, it struggles to reliably distinguish between normal and abnormal semantics. As shown in conceptual visualization Fig. 1(left) and sampled examples in Fig. 2, CLIP has the intrinsic Anomaly-Unawareness problem: the overlap of normal and abnormal text features hampers the precision of text-guided anomaly detection. We argue that making CLIP anomaly-aware — by establishing clearer distinctions between normal and abnormal semantics in the text space — is essential for guiding the vision encoder to precisely detect and localize anomalies.", + "bbox": [ + 89, + 321, + 483, + 564 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "This observation drives us to improve CLIP-based zero-shot AD through enhancing anomaly discrimination in text space, achieved with our method Anomaly-Aware CLIP (AA-CLIP) — a CLIP model with anomaly-aware information encoded. AA-CLIP is implemented through a novel two-stage adaptation approach. In the first stage, AA-CLIP adapts the text encoder with frozen visual encoder, creating "anchors" for anomaly-aware semantics within the text space for each trained class. As illustrated in Fig. 1(middle), each class's text features are disentangled to distinct anchors, with clear abnormality discrimination. Notably, this disentanglement also applies to novel, unseen classes, supporting effective zero-shot inference in AD tasks (refer to Fig. 1(right)). In the second stage, AA-CLIP aligns patch-level visual features with these specially adapted text anchors, guiding CLIP's visual encoder to concentrate on anomaly-relevant regions. This two-stage approach ensures a focused and precise anomaly detection framework.", + "bbox": [ + 89, + 566, + 482, + 838 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Importantly, as CLIP is extensively trained on massive data, to preserve its pre-trained knowledge, we utilize simple-structured Residual Adapters in both stages. This design enables a controlled adaptation of CLIP while en", + "bbox": [ + 89, + 839, + 483, + 901 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "hancing its capability to handle fine-grained AD tasks without sacrificing its generalization ability.", + "bbox": [ + 511, + 90, + 903, + 121 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Our extensive experiments in both industrial and medical domains demonstrate that our straightforward approach equips CLIP with improved zero-shot AD ability, even in data-limited scenarios. By training with a minimal sample — such as one normal sample and one anomaly sample (2-shot) per class — and testing across unseen datasets, our method achieves zero-shot performance comparable to other CLIP-based AD techniques. With only 64-shot of each class seen in the training set, our method reaches state-of-the-art (SOTA) results in cross-dataset zero-shot testing, validating our method's ability to maximize CLIP's potential for AD with a minimal data requirement.", + "bbox": [ + 511, + 122, + 906, + 301 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Our contributions are summarized as follows:", + "bbox": [ + 531, + 303, + 834, + 316 + ], + "page_idx": 1 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1. Anomaly-Aware CLIP with enhanced and generalizable anomaly-discriminative ability. 
We introduce AA-CLIP, which is more sensitive to anomalies sequentially in text and visual spaces, encoding anomaly-aware information into the original CLIP.", + "2. Efficient adaptation using residual adapters. We implement simple residual adapters to boost zero-shot anomaly detection performance without compromising the model's generalization ability.", + "3. SOTA performance with high training efficiency. Our method achieves SOTA results across diverse datasets, showing robust anomaly detection capabilities even with limited training samples." + ], + "bbox": [ + 514, + 318, + 903, + 513 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2. Related Work", + "text_level": 1, + "bbox": [ + 511, + 527, + 653, + 542 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Traditional Anomaly Detection in images involves modeling the normal data distribution to detect rare and diverse unexpected signals within visual data [44, 52, 53, 61]. Reconstruction-based [11, 16, 32, 33, 54, 56], augmentation-based [31, 44, 48, 55, 58] and discriminative [10, 15, 23, 31, 43, 61] methods are typically used to facilitate better modeling. Despite the huge progress of traditional anomaly detection methods, their effectiveness relies heavily on a well-modeled normal data distribution. Without sufficient normal data, their ability to accurately detect anomalies is significantly reduced.", + "bbox": [ + 511, + 547, + 906, + 715 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "CLIP, trained on a vast amount of image-text data, leverages contrastive learning alongside powerful language models and visual feature encoders to capture robust concepts. This combination enables CLIP to achieve impressive zero-shot performance on image classification, as it can generalize well to new categories without requiring task-specific training [24-27, 42, 51]. More recently, numerous studies [9, 12, 35, 39] have explored ways to transfer the knowledge embedded in CLIP models to a variety of downstream tasks, yielding promising results in fields like image captioning, image-text retrieval, and image generation. These efforts demonstrate CLIP's versatility and potential to drive", + "bbox": [ + 511, + 719, + 908, + 901 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "4745", + "bbox": [ + 482, + 944, + 514, + 955 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "advancements across diverse applications.", + "bbox": [ + 89, + 90, + 372, + 104 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Despite the rapid advancements achieved by CLIP, numerous studies have highlighted persistent limitations in the features it extracts. While CLIP demonstrates strong generalization across various tasks, it often struggles to capture nuanced details and essential spatial relationships, which are crucial for tasks demanding precise boundary delineation and fine-grained feature extraction. This limitation results in suboptimal performance in downstream applications, especially those that require high levels of detail, such as object detection, scene segmentation, or tasks in medical imaging [14, 29, 30, 37, 49, 50, 57, 60]. 
As a result, leveraging CLIP for fine-granular tasks frequently necessitates task-specific adaptations to bridge the gap between its generalized feature extraction and the precision required for specialized applications.", + "bbox": [ + 89, + 106, + 483, + 333 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "CLIP-based Anomaly Detection There have been several efforts to leverage CLIP for AD tasks. One of the pioneering approaches, WinCLIP [19], proposes a method for extracting and aggregating visual features from multiple levels to align with text features, demonstrating the potential of CLIP in this context. Subsequent research investigates various adaptation methods to bridge the gap between natural domains and the AD domain, resulting in performance improvements. For instance, [7, 8, 18] focus on refining visual features by employing adapters to enhance patch-level visual representations. However, these approaches often rely on text embeddings from the original CLIP model as soft supervision and overlook a critical limitation of CLIP in AD: itsunclearness in distinguishing between anomalous and normal semantics, particularly within the text encoder, resulting in suboptimal performance. Other works have employed prompt-learning-based methods[5, 6, 41, 59], introducing learnable embeddings into the text encoder to better represent abnormality. However, the class information in CLIP can be damaged, potentially degrading generalization, especially in data-limited and zero-shot settings.", + "bbox": [ + 89, + 335, + 483, + 654 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Different from previous methods, we are the first to investigate CLIP's inherent limitation in capturing anomaly-aware information, specifically in differentiating between normal and anomalous semantics in text prompts. Rather than relying solely on the original anomaly-unaware text embeddings or unaltered feature spaces, our method is able to refine the embeddings to actively incorporate anomaly-discriminative representations.", + "bbox": [ + 89, + 654, + 483, + 776 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3. Method", + "text_level": 1, + "bbox": [ + 89, + 789, + 181, + 804 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.1. Overview", + "text_level": 1, + "bbox": [ + 89, + 814, + 200, + 828 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.1.1. Problem Formulation", + "text_level": 1, + "bbox": [ + 89, + 835, + 287, + 849 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Zero-shot AD models are trained to identify anomalous samples whose categories may be unseen in the training dataset. Specifically, the model is expected to learn", + "bbox": [ + 89, + 854, + 483, + 900 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/1f4b294e7d4def9bcc526a78197fc3385e97f3cb55a77e48738270adecb8c5f1.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 544, + 95, + 629, + 161 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/b75f5d93de627136e49c895ecdf83d74a7eab7e2e9f221c69e97c6c8bbd33721.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 555, + 164, + 622, + 226 + ], + "page_idx": 2 + }, + { + "type": "table", + "img_path": "images/cdad65d46b81837292f32eee6ab9f89dc599629d25def5594226a42493289766.jpg", + "table_caption": [ + "\"This is a [ ] carpet.\"" + ], + "table_footnote": [], + "table_body": "
SemanticsSimilarityProbabilityτ=0.01
broken0.180.22
normal0.190.78
", + "bbox": [ + 640, + 109, + 875, + 157 + ], + "page_idx": 2 + }, + { + "type": "table", + "img_path": "images/7a8e14cba207c72c91d6fd37dac789f1a0ac3750789fc0aaedd3930ddbe323e5.jpg", + "table_caption": [ + "\"This is a [ ] zipper.\"" + ], + "table_footnote": [], + "table_body": "
SemanticsSimilarityProbabilityτ=0.01
broken0.200.38
normal0.210.62
", + "bbox": [ + 640, + 176, + 875, + 226 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/467ee38e87dd5ff7fa4576a42be09710e9d3a22d802523419eec73f0820be9d3.jpg", + "image_caption": [ + "Figure 2. (Top) Examples illustrating CLIP's Anomaly Unawareness. Despite the obvious anomalies present in the images, image features have higher similarities to normal descriptions, rather than anomaly descriptions, mistakenly. This problem is enlarged with a low temperature $\\tau$ . (Bottom) Text Feature Similarity Heatmap among Normal and Anomaly Descriptions: Original CLIP vs. After Text Adaptation. Red indicates high similarity. In original CLIP, normal features exhibit strong similarity with anomaly features, whereas text adaptation successfully separates them, clarifying the semantic distinctions between normal and anomaly descriptions." + ], + "image_footnote": [], + "bbox": [ + 514, + 227, + 901, + 338 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "both normal and abnormal patterns that are shared across different classes given a training set $\\mathcal{D}_{train}$ with normal or anomalous samples, in order to be capable of performing AD tasks on a series of different test datasets $\\{\\mathcal{D}_{test}^{1},\\mathcal{D}_{test}^{2},\\dots,\\mathcal{D}_{test}^{n}\\}$ , where each $\\mathcal{D}_{test}^{i}$ is distinct from $\\mathcal{D}_{train}$ . Image-level AD can be formally defined as a binary classification problem, where the model aims to classify samples $x\\in \\mathcal{D}$ as either normal ( $y = 0$ ) or anomalous ( $y = 1$ ). Anomaly segmentation extends this concept to pixel-level with mask $S$ , aiming to identify anomalous regions by highlighting pixels associated with anomalies.", + "bbox": [ + 511, + 510, + 905, + 676 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.1.2. Current Challenges", + "text_level": 1, + "bbox": [ + 511, + 685, + 696, + 700 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Anomaly Unawareness in CLIP: The CLIP-based AD method classifies visual features as \"anomalies\" if they exhibit greater similarities to anomaly prompt embeddings than to normal prompt embeddings, thus requiring well-defined boundaries between these two kinds of prompts. However, in real applications, CLIP's text embeddings often lack the clear separability needed to reliably distinguish between normal and anomaly classes.", + "bbox": [ + 511, + 704, + 905, + 824 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "We observe that, despite the visible defects in example images from the MVTec-AD [2], their features exhibit higher cosine similarity with \"normal\" prompts than with correct \"anomaly\" descriptions (see Fig. 2 (top)), indicating CLIP's inaccurate semantic understanding. Without adap", + "bbox": [ + 511, + 825, + 905, + 900 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "4746", + "bbox": [ + 482, + 944, + 514, + 955 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/85d153d5a9c1ba27ae30deddc2e6b7643a16c6f2889c919f03a81988d171c738.jpg", + "image_caption": [ + "Figure 3. t-SNE Visualization of Text Features from Original CLIP vs. AA-CLIP. Each point represents a text feature encoded from a prompt. Original CLIP's normal and anomaly text features are intertwined, while our method effectively disentangles them. This disentanglement is generalizable to novel classes, validating the anomaly-awareness of our model." 
+ ], + "image_footnote": [], + "bbox": [ + 98, + 93, + 480, + 388 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "tation, there persists a high similarity between the normal and abnormal text embeddings of a single class, as shown in Fig. 2 (bottom), suggesting a potential entanglement of normal and anomaly semantics within text space. We term this limitation Anomaly Unawareness and attribute it to the training process of CLIP: it is primarily trained on general, non-anomalous datasets and lacks specific guidance on defect detection. Consequently, it is challenging to rely on original CLIP embeddings to detect subtle or context-specific anomalies.", + "bbox": [ + 89, + 498, + 483, + 648 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "This issue remains evident across different categories in our t-SNE analysis: as shown in Fig. 3 (top), only subtle separations are observed within an object cluster, where text embeddings for both normal and abnormal semantics are intermixed. This entangled pattern may potentially lead to anomaly-unaware text-image alignment, which reinforces the necessity of adapting CLIP to enhance its anomaly-awareness.", + "bbox": [ + 89, + 648, + 483, + 772 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Embedding Adaptation Dilemma: The discussion above renders the adaptation of CLIP essential for effective AD. However, since CLIP's embeddings are already optimized through extensive pretraining, the model could be susceptible to overfitting to a new dataset during adaptation. Overfitting convergence leads to minimized intra-class distinctions in the training data, often at the expense of the feature separability needed for effective generalization to unseen data.", + "bbox": [ + 89, + 779, + 483, + 902 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "To address this, a carefully controlled refinement is crucial to preserve CLIP's generalization capabilities while enhancing its sensitivity to anomalies.", + "bbox": [ + 511, + 90, + 905, + 137 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.1.3. Overview of Our Solution", + "text_level": 1, + "bbox": [ + 511, + 143, + 741, + 159 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Motivated by Sec. 3.1.2, we propose Anomaly-Aware CLIP (AA-CLIP) with improved anomaly awareness. As shown in Fig. 4, AA-CLIP is trained through a two-stage training strategy that sequentially adapts the semantic-rich text space and detail-focused visual space, with original CLIP parameters remaining frozen. In the first stage (see Fig. 4 (Top)), we incorporate Residual Adapters into the shallow layers of the text encoder, and the visual features from the fixed image encoder serve as a stable reference for optimization. A Disentangle Loss is proposed to enforce effective discrimination by ensuring independence between normal and anomaly embeddings. In the second stage, we integrate Residual Adapters into the shallow layers of the visual encoder to align patch-level features with the fixed, specially adapted text features from the text encoder (see Fig. 4 (Bottom)). Ultimately, our AA-CLIP succeeds in equipping CLIP with anomaly awareness across seen and unseen classes, as shown in Fig. 3 (bottom).", + "bbox": [ + 511, + 164, + 906, + 436 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.2. AA-CLIP with Two-Stage Adaptation Strategy", + "text_level": 1, + "bbox": [ + 511, + 445, + 905, + 462 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.2.1. 
Residual Adapter", + "text_level": 1, + "bbox": [ + 511, + 468, + 683, + 483 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "To preserve CLIP's pre-trained knowledge while enabling targeted adaptation, we introduce lightweight Residual Adapters in the shallow layers (up to layer $K$ ) of both text and vision encoders.", + "bbox": [ + 511, + 487, + 905, + 546 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "The output feature $x^{i} \in \mathbb{R}^{N \times d}$ of CLIP's $i$-th ( $i \leq K$ ) transformer layer is fed into the $i$-th adapter, outputting adapted feature $x_{residual}^{i}$ , as shown in Eq. (1),", + "bbox": [ + 511, + 547, + 905, + 594 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\nx_{\text{residual}}^{i} = \operatorname{Norm}\left(\operatorname{Act}\left(W^{i} x^{i}\right)\right), \tag{1}\n$$\n", + "text_format": "latex", + "bbox": [ + 586, + 604, + 903, + 622 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $W^{i} \in \mathbb{R}^{d \times d}$ is the trainable linear weight of the $i$-th adapter, $Act(\cdot)$ is an activation function, and $Norm(\cdot)$ is a normalizing function. The original feature $x^{i}$ and the enhanced feature $x_{residual}^{i}$ are fused in a weighted manner, generating $x_{enhanced}^{i}$ , the input to the next transformer layer, as shown in Eq. (2),", + "bbox": [ + 511, + 633, + 905, + 727 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\nx_{\text{enhanced}}^{i} = \lambda x_{\text{residual}}^{i} + (1 - \lambda) x^{i}, \tag{2}\n$$\n", + "text_format": "latex", + "bbox": [ + 578, + 737, + 903, + 756 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $\lambda$ is a hyper-parameter that controls the residual ratio, adjusting how strongly AD-specific knowledge is fused in order to preserve the original CLIP's generalization ability while improving performance.", + "bbox": [ + 511, + 767, + 905, + 828 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.2.2. Two-Stage Training Strategy", + "text_level": 1, + "bbox": [ + 511, + 835, + 761, + 852 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Disentangling Anomaly-Aware Text Anchors: In the first stage, our objective is to learn anomaly-discriminative text anchors by adapting the text encoder while keeping the", + "bbox": [ + 511, + 854, + 905, + 901 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "4747", + "bbox": [ + 480, + 944, + 514, + 955 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/a4a610b1167236135a85d221e41e21c181b00e2ab31d2c098f0a54df24da772a.jpg", + "image_caption": [ + "Stage1: Disentangling Anomaly-Aware Text Anchors" + ], + "image_footnote": [], + "bbox": [ + 127, + 107, + 867, + 252 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/b1ece9f7bcb30903f6b874d348410a737150e0d213c2bd10ec011036cf8438ca.jpg", + "image_caption": [ + "Stage2: Aligning Patch Features According to Text Anchors", + "Figure 4. The Two-Stage Training Pipeline of Anomaly-Aware CLIP. In the first stage, the text encoder of AA-CLIP is trained to identify anomaly-related semantics, helped by a disentangle loss. In the second stage, patch features are aligned with these text anchors. Both stages are achieved by the integration of Residual Adapters into the shallow layers of CLIP's backbone. 
This controlled adaptation enables CLIP to effectively distinguish anomalies, which forms our Anomaly-Aware CLIP." + ], + "image_footnote": [], + "bbox": [ + 124, + 276, + 869, + 460 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "image encoder fixed. We incorporate Residual Adapters into the first $K_{T}$ layers of the CLIP text encoder, as illustrated in Fig. 4 (Top), and set the final projector in the text encoder to be learnable to facilitate improved alignment.", + "bbox": [ + 88, + 542, + 482, + 602 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Using prompts designed to encapsulate both normal and anomalous semantics (as detailed in the Appendix), the text encoder generates corresponding high-level embeddings. The average embeddings of the normal and anomaly prompts serve as our initial text anchors, denoted as $T_{N}$ and $T_{A} \in \mathbb{R}^{d}$ , respectively. These anchors are refined by being aligned with visual features extracted from an enhanced CLIP visual encoder, as in [28, 59]. Alignment is conducted at both image and patch levels to incorporate both global and local semantics. By calculating the cosine similarity between these anchors and the image features $V_{image} \in \mathbb{R}^{d}$ or patch features $V_{patch} \in \mathbb{R}^{N \times d}$ , as shown in Eq. (3),", + "bbox": [ + 89, + 602, + 483, + 785 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\np_{cls} = \operatorname{CosSim}\left(V_{\text{image}}, \left[T_{N}, T_{A}\right]\right), \tag{3}\n$$\n", + "text_format": "latex", + "bbox": [ + 165, + 792, + 482, + 816 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\np_{seg}^{o} = \operatorname{CosSim}(V_{patch}, [T_{N}, T_{A}]),\n$$\n", + "text_format": "latex", + "bbox": [ + 165, + 811, + 403, + 830 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $[\cdot, \cdot]$ denotes the concatenation operation, we obtain the classification prediction $p_{cls} \in \mathbb{R}^2$ and the segmentation prediction $p_{seg}^{o} \in \mathbb{R}^{N \times 2}$ . The segmentation prediction $p_{seg}^{o}$ is then reshaped and upsampled to $p_{seg} \in \mathbb{R}^{H \times W \times 2}$ to align", + "bbox": [ + 89, + 838, + 483, + 902 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "with the height $H$ and width $W$ of segmentation mask $S$ . Following previous works [6, 7, 18, 59], we compute the classification loss $\mathcal{L}_{cls}$ and segmentation loss $\mathcal{L}_{seg}$ to optimize parameters, as specified in Eq. (4). 
Specifically, the classification loss is a binary cross-entropy that compares classification predictions with ground-truth labels $y$ , and the segmentation loss is a combination of dice loss and focal loss applied to segmentation predictions and the anomaly segmentation mask $S$ .", + "bbox": [ + 511, + 542, + 906, + 678 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\mathcal{L}_{cls} = \operatorname{BCE}(p_{cls}, y),\n$$\n", + "text_format": "latex", + "bbox": [ + 575, + 684, + 720, + 700 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\mathcal{L}_{\text{seg}} = \operatorname{Dice}\left(p_{\text{seg}}, S\right) + \operatorname{Focal}\left(p_{\text{seg}}, S\right), \tag{4}\n$$\n", + "text_format": "latex", + "bbox": [ + 576, + 703, + 903, + 719 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\mathcal{L}_{\text{align}} = \mathcal{L}_{\text{cls}} + \mathcal{L}_{\text{seg}}.\n$$\n", + "text_format": "latex", + "bbox": [ + 576, + 722, + 728, + 739 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "To enhance the separation between normal and anomaly text embeddings, we introduce a Disentangle Loss encouraging orthogonality between $T_{N}$ and $T_{A}$ to minimize correlation, as in Eq. (5):", + "bbox": [ + 511, + 743, + 906, + 804 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\mathcal{L}_{\text{dis}} = |\langle T_{N}, T_{A} \rangle|^{2}. \tag{5}\n$$\n", + "text_format": "latex", + "bbox": [ + 624, + 809, + 903, + 825 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "The Disentangle Loss $\mathcal{L}_{dis}$ is incorporated into the alignment loss $\mathcal{L}_{align}$ as a regularization term, weighted by a factor $\gamma$ , which forms the total loss, as in Eq. (6):", + "bbox": [ + 511, + 833, + 906, + 878 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\mathcal{L}_{\text{total}} = \mathcal{L}_{\text{align}} + \gamma \mathcal{L}_{\text{dis}}. \tag{6}\n$$\n", + "text_format": "latex", + "bbox": [ + 622, + 885, + 903, + 902 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "4748", + "bbox": [ + 482, + 944, + 514, + 955 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "In this stage, the distinction between normal and anomaly semantics is embedded into CLIP's text encoder while its original object-recognition capability is preserved. Figure 3 indicates that this ability of anomaly-awareness is robust and generalizable to novel classes.", + "bbox": [ + 89, + 90, + 480, + 167 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Aligning Patch Features According to Text Anchors: Anomaly-aware semantic anchors can facilitate the adaptation of patch features, thereby improving the effectiveness and generalizability of anomaly localization. To achieve alignment between patch features and anchors from the previous stage, we introduce trainable Residual Adapters within the initial $K_{I}$ layers of the CLIP visual encoder.", + "bbox": [ + 89, + 172, + 482, + 277 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Multi-granularity features are utilized to enhance segmentation [7, 19, 59]. Specifically, as shown in Fig. 4 (bottom), the intermediate output features $F^i$ are extracted from four distinct granularities. 
These multi-granularity features are then projected to align with the channel dimension of the text anchors via a trainable projector $Proj_i(\cdot)$ , yielding $V_{patch}^i$ at four distinct levels of granularity. The aggregated output $V_{patch}$ is computed by summing individual $V_{patch}^i$ outputs, as in Eq. (7):", + "bbox": [ + 89, + 279, + 483, + 415 + ], + "page_idx": 5 + }, + { + "type": "equation", + "text": "\n$$\n\begin{array}{l} V_{patch}^{i} = \operatorname{Proj}_{i}\left(F^{i}\right), \; i \in \{1, 2, 3, 4\} \\ V_{patch} = \sum_{i=1}^{4} V_{patch}^{i}. \tag{7} \\ \end{array}\n$$\n", + "text_format": "latex", + "bbox": [ + 161, + 422, + 480, + 484 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "The cosine similarity scores between the aggregated $V_{patch}$ and the text anchors are calculated to generate patch-level predictions as Eq. (3), resulting in the prediction maps.", + "bbox": [ + 89, + 493, + 482, + 539 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "During training, alignment is guided by the loss function defined in Eq. (4), facilitating both global and local alignment. During inference, anomaly prediction maps and corresponding anomaly scores are derived by comparing the similarity scores of visual features against normal and anomaly text embeddings.", + "bbox": [ + 89, + 539, + 483, + 630 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4. Experiments", + "text_level": 1, + "bbox": [ + 89, + 642, + 223, + 659 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.1. Experiment Setups", + "text_level": 1, + "bbox": [ + 89, + 667, + 272, + 683 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Datasets We evaluate our model on 11 widely used benchmarks, as in previous AD works [6, 7, 18, 19, 59], with distinct foreground objects spanning a variety of modalities, including photography, endoscopy, CT, MRI, and OCT. For the industrial domain, we use MVTec-AD [2], VisA [62], BTAD [34] and MPDD [20]. For the medical domain, we use brain MRI, liver CT and retina OCT from BMAD [1], and four different colon polyp detection datasets with different views (CVC-ClinicDB [4], CVC-ColonDB [3], Kvasir-SEG [40] and CVC-300 [47]). Each dataset has both image-level labels and pixel-level masks for evaluation.", + "bbox": [ + 89, + 688, + 482, + 854 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "We train our model on a real-world industrial AD dataset - VisA [62] - in which objects are different from other datasets. Results of VisA are obtained using MVTec-AD as", + "bbox": [ + 89, + 854, + 483, + 900 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "the training dataset. To demonstrate adaptation efficiency, we conduct training under various data levels: 2-shot per class, 16-shot per class, 64-shot per class, and full-shot. The corresponding number of samples is randomly selected from each class, while maintaining a consistent 1:1 ratio between normal and anomaly samples.", + "bbox": [ + 511, + 90, + 903, + 181 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Metrics Following [6, 7, 10, 11, 18, 19, 43, 55], we use the Area Under the Receiver Operating Characteristic Curve (AUROC) as the metric. We compute AUROC at both the image and pixel levels to comprehensively assess the model's effectiveness in detecting and localizing anomalies. 
Implementation Details Following [6, 7, 41, 59], we use OpenCLIP with the ViT-L/14 architecture as the backbone, and input images are resized to $518 \\times 518$ . All parameters of CLIP remain frozen. We set $\\lambda$ to 0.1, $K_{T}$ to 3, $K_{I}$ to 6, and $\\gamma$ to 0.1. For multi-level feature extraction, we utilize outputs from the 6-th, 12-th, 18-th, and 24-th layers of the visual encoder to compose the overall output. For the first stage, we train the model for 5 epochs with a learning rate of $1 \\times 10^{-5}$ . For the second stage, we continue training for 20 epochs, adjusting the learning rate to $5 \\times 10^{-4}$ . Parameters are updated by Adam optimizers. All experiments are conducted on a single NVIDIA GeForce RTX 3090 GPU. More details are available in Appendix.", + "bbox": [ + 511, + 181, + 903, + 455 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.2. Comparison with SOTA Methods", + "text_level": 1, + "bbox": [ + 511, + 468, + 805, + 484 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "We compare our method against CLIP and several recent SOTA models. Among them, WinCLIP [19], VAND [7] and MVFA-AD [18] use original CLIP text encoder, and AnomalyCLIP [59] and AdaCLIP [6] incorporate learnable prompts. To ensure a fair comparison, we re-train models that are originally trained on different datasets to match the dataset settings of other approaches (detailed in Appendix).", + "bbox": [ + 511, + 489, + 905, + 595 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Quantitative results are presented in Tab. 1 and Tab. 2. Although adapting only the patch feature with original text embeddings has made progress in AD, the superior performance of AA-CLIP highlights its effective disentanglement of anomaly-discriminative semantics, leading to further progress. Notably, even in data-limited situations, our method consistently demonstrates top performance. At the pixel level, with only 2 shots per class used for training, our method achieves improved average zero-shot performance compared to previous methods. With the full dataset, we set a new pixel-level SOTA with an AUROC of $93.4\\%$ . At the image level, our method is competitive with just 2 shots for training and establishes a new SOTA of $83.1\\%$ with 64 shots per class.", + "bbox": [ + 511, + 598, + 905, + 808 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Unlike previous methods, our approach does not rely heavily on data resources to achieve top-tier performance. Comparison under different levels of data available, as shown in Fig. 5, reveals that our approach consistently outperforms other methods in general. Even with limited data, our model reaches competitive results, while other methods", + "bbox": [ + 511, + 810, + 905, + 900 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "4749", + "bbox": [ + 482, + 944, + 514, + 955 + ], + "page_idx": 5 + }, + { + "type": "table", + "img_path": "images/0b66cec7ce5cdae20101e758991082072c08f5ac429d228d5ab084a42e23b08f.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
DomainDatasetCLIP*WinCLIP*VAND*MVFA-ADAnomalyCLIP*AdaCLIPOurs
OpenCLIPCVPR 2023CVPRw 2023CVPR 2024ICLR2024ECCV2024-
Available training shots--fullfullfullfull21664full
IndustrialBTAD30.632.891.190.193.390.892.894.496.597.0
MPDD62.195.294.994.596.296.696.396.596.396.7
MVTec-AD38.485.187.684.991.189.991.091.291.691.9
VisA46.679.694.293.495.495.593.493.894.095.5
MedicalBrain MRI68.386.094.595.696.293.996.396.496.595.5
Liver CT90.596.295.696.893.994.597.397.797.797.8
Retina OCT21.380.688.590.992.688.594.295.194.495.5
ColonDB49.551.278.278.482.980.083.983.584.784.0
ClinicDB47.570.385.183.985.085.989.287.687.889.9
Kvasir44.669.780.381.981.986.482.184.685.287.2
CVC-30049.9-92.882.695.492.996.097.496.096.4
Average49.974.789.388.591.390.492.092.692.893.4
", + "bbox": [ + 94, + 88, + 906, + 297 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/66c762401f393f3ab750145aee57a896577c599ee8bdc764d4a3ac1dce09cbb8.jpg", + "table_caption": [ + "Table 1. Pixel-level AUROC of zero-shot AD methods in Industrial and Medical domains. Method sources and the number of shots used for training are noted. Results of methods with * are copied from the papers or inferred from official weight. Best results are highlighted as first, second and third." + ], + "table_footnote": [], + "table_body": "
DomainDatasetCLIP&VAND*WinCLIP*MVFA-ADAnomalyCLIP*AdaCLIPOurs
OpenCLIPCVPR 2023CVPR 2024ICLR2024ECCV2024-
Available training shots--fullfullfull21664full
IndustrialBTAD73.668.294.385.390.988.090.994.794.8
MPDD73.063.670.973.772.163.678.375.775.1
MVTec-AD86.191.886.690.990.085.989.792.090.5
VisA66.478.076.582.184.378.484.084.184.6
MedicalBrain MRI58.866.570.983.380.284.380.483.480.2
Liver CT54.764.263.061.664.269.468.169.269.7
Retina OCT65.642.577.375.782.777.481.082.982.7
Average68.367.877.178.480.678.181.883.182.5
", + "bbox": [ + 94, + 353, + 906, + 522 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "display signs of underfitting. As data increases, our method maintains its lead, establishing a new SOTA at both pixel and image levels.", + "bbox": [ + 89, + 585, + 482, + 631 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.3. Visualization", + "text_level": 1, + "bbox": [ + 89, + 642, + 227, + 657 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "To illustrate the alignment intuitively, we present visualization examples in Fig. 6 with original configuration for previous works. Although previous methods with can detect anomalous regions, our AA-CLIP demonstrates fewer false-negative predictions in both industrial and medical domains, accurately highlighting the correct anomaly regions.", + "bbox": [ + 89, + 664, + 482, + 755 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.4. Ablations Analysis", + "text_level": 1, + "bbox": [ + 89, + 766, + 271, + 782 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "We conduct thorough ablation experiments of our refinement of both visual and text space, as shown in Tab. 3 and Fig. 7. The second row in Tab. 3, which mirrors the structure of VAND [7], serves as our baseline.", + "bbox": [ + 89, + 789, + 482, + 849 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Image Space: As shown in Tab. 3 line “2,” inserting the vallina linear adapter into transformer layers results in a significant decline in zero-shot performance, indicating the", + "bbox": [ + 89, + 854, + 483, + 902 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/310b5e0e3ab578af9dc8c5345eb476cb4ecfb653e3f92ce809ac3f021c9ea9e0.jpg", + "table_caption": [ + "Table 2. Image-level AUROC of zero-shot AD methods in Industrial and Medical domains. Method sources and the number of shots used for training are noted. Results of methods with * are copied from the papers or inferred from official weight. Best results are highlighted as first, second and third." + ], + "table_footnote": [], + "table_body": "
Method | Avg. AUROC
| Pixel-Level | Image-Level
CLIP | 50.3 | 69.3
Image | 1. + Linear Proj. (VAND [7]) | 88.9 | 69.3
2. + Adapter | 48.9 (-40.0) | 53.4 (-15.9)
3. + Residual Adapter | 91.3 (+2.4) | 80.7 (+11.4)
Text | 4. + Residual Adapter | 92.1 (+3.2) | 82.6 (+13.3)
5. + Disentangle Loss | 92.7 (+3.8) | 83.3 (+14.0)
", + "bbox": [ + 517, + 583, + 903, + 700 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Table 3. Ablation Study of Our Training Strategy with VisA-Trained 64-Shot Setup. Our contributions are bold. While VAND uses linear projectors to improve AD performance, incorporating Residual Adapters further refines patch feature adaptation. Moreover, integrating our Disentangle Loss yields the best overall results.", + "bbox": [ + 511, + 703, + 906, + 787 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "damage of the original generalization ability of CLIP. Incorporating our Residual Adapters mitigates this issue (shown in line \"3\"). enhancing performance while preserving original information stored in CLIP.", + "bbox": [ + 511, + 803, + 905, + 863 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Text Space: The last two rows in Tab. 3 highlight the impact of our approach in equipping CLIP's encoder with", + "bbox": [ + 511, + 869, + 906, + 902 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "4750", + "bbox": [ + 482, + 944, + 514, + 955 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/7ed1b930e4e1be716f6dffd2599518a4b01f6650e935501bb949bd4c8306b1b1.jpg", + "image_caption": [ + "Figure 5. Average Results (Top) and Results on BTAD (Bottom) of Different methods Trained on 2-, 16-, 64-shot per Class and Full Data of VisA. Our method shows high fitting efficiency, achieving strong results across all data scales." + ], + "image_footnote": [], + "bbox": [ + 119, + 89, + 282, + 354 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/4bd703a8e390087c44ea087922519b276bf75f577a1ad93c32da10a282ef7e44.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 287, + 89, + 450, + 354 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/54bc6d0e000fbbc5adb9f62b15ea2babb5449548126f172d62748e47bc6428f7.jpg", + "image_caption": [ + "Figure 6. Visualization of Anomaly Localization Results of Original CLIP [42], AnomalyCLIP [59], VAND [7] and our AA-CLIP. Compared to previous methods, AA-CLIP demonstrates more reliable prediction capabilities in localizing anomaly." + ], + "image_footnote": [], + "bbox": [ + 99, + 431, + 470, + 632 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "anomaly-aware semantics. Line \"4.\" validates that, with AA-CLIP, the model's ability to discriminate anomalies further improves, as the AA-CLIP's text encoder provides a more precise semantic foundation. Adding Disentangle Loss leads to an additional improvement (shown in Line \"5\"), especially at image-level, validating the necessity of independence between normal and anomaly anchors. These results underscore the crucial role of text space refinement in improved anomaly localization and classification.", + "bbox": [ + 89, + 712, + 482, + 849 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Two-Stage Training: To validate the necessity of two-stage training, we adapt both text and image encoders together within one stage (also adopted by AdaCLIP). As shown", + "bbox": [ + 89, + 854, + 482, + 900 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/b940ce809bb20f102d5ab0c763d469f6ba1df9bd728863b2e2e5aa8edcb03ef5.jpg", + "image_caption": [ + "Figure 7. Visualization of Text Space from One-Stage Training and from AdaCLIP. During one-stage training, class information collapses easily, leading to damaged zero-shot performance." 
+ ], + "image_footnote": [], + "bbox": [ + 524, + 94, + 890, + 233 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "in Fig. 7, one-stage model can easily exaggerate anomaly semantics and forget class information embedded in CLIP, damaging the model's generalization ability. The two-stage training strategy allows controlled adaptation, preserving CLIP's class-relevant knowledge in one end while adapting the other, as shown in Fig. 3.", + "bbox": [ + 511, + 308, + 906, + 398 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "5. Conclusion and Discussion", + "text_level": 1, + "bbox": [ + 513, + 417, + 761, + 431 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "To our knowledge, this is the first work to explicitly analyze the intrinsic Anomaly Unawareness problem in CLIP. To tackle this issue, we propose a simple yet effective two-stage training strategy to embed anomaly-aware information into CLIP, enabling clear disentanglement of anomaly representations across both seen and novel classes. By leveraging residual adapters, our method preserves CLIP's strong generalization ability, achieving outstanding zero-shot performance across multiple datasets.", + "bbox": [ + 511, + 444, + 906, + 580 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Our adapted AA-CLIP, developed through this two-stage adaptation strategy, reveals the potential of refining CLIP's feature space for improved performance in downstream applications. Beyond addressing anomaly unawareness, our work also provides a potential foundation for tackling other \"unawareness\" issues within CLIP. These may include limitations in context-awareness or specificity to domain-relevant nuances, suggesting further applications of our method in expanding CLIP's adaptability across diverse tasks. Additionally, we observe signs of overfitting with full-shot training, suggesting potential saturation during CLIP adaptation and warranting further investigation.", + "bbox": [ + 511, + 582, + 908, + 765 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Acknowledgement", + "text_level": 1, + "bbox": [ + 513, + 782, + 671, + 799 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "This work is supported by Natural Science Foundation of China under Grant 62271465, Suzhou Basic Research Program under Grant SYG202338, Open Fund Project of Guangdong Academy of Medical Sciences, China (No. YKY-KF202206), and Jiangsu Province Science Foundation for Youths (NO. BK20240464).", + "bbox": [ + 511, + 809, + 906, + 898 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "4751", + "bbox": [ + 482, + 944, + 513, + 955 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 91, + 90, + 187, + 104 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[1] Jinan Bao, Hanshi Sun, Hanqiu Deng, Yinsheng He, Zhaoxiang Zhang, and Xingyu Li. Bmad: Benchmarks for medical anomaly detection, 2024. 6", + "[2] Paul Bergmann, Michael Fauser, David Sattlegger, and Carsten Steger. Mvtec ad-a comprehensive real-world dataset for unsupervised anomaly detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9592-9600, 2019. 1, 3, 6", + "[3] Jorge Bernal, Javier Sánchez, and Fernando Vilarino. Towards automatic polyp detection with a polyp appearance model. Pattern Recognition, 45(9):3166-3182, 2012. 
6", + "[4] Jorge Bernal, F Javier Sánchez, Gloria Fernández-Esparrach, Debora Gil, Cristina Rodríguez, and Fernando Vilarino. Wm-dova maps for accurate polyp highlighting in colonoscopy: Validation vs. saliency maps from physicians. Computerized medical imaging and graphics, 43:99-111, 2015. 6", + "[5] Yunkang Cao, Xiaohao Xu, Yuqi Cheng, Chen Sun, Zongwei Du, Liang Gao, and Weiming Shen. Personalizing vision-language models with hybrid prompts for zero-shot anomaly detection. IEEE Transactions on Cybernetics, 2025. 3", + "[6] Yunkang Cao, Jiangning Zhang, Luca Frittoli, Yuqi Cheng, Weiming Shen, and Giacomo Boracchi. Adaclip: Adapting clip with hybrid learnable prompts for zero-shot anomaly detection. In European Conference on Computer Vision, pages 55-72. Springer, 2025. 2, 3, 5, 6", + "[7] Xuhai Chen, Yue Han, and Jiangning Zhang. April-gan: A zero-/few-shot anomaly classification and segmentation method for cvpr 2023 vand workshop challenge tracks 1&2: 1st place on zero-shot ad and 4th place on few-shot ad. arXiv preprint arXiv:2305.17382, 2023. 2, 3, 5, 6, 7, 8", + "[8] Xuhai Chen, Jiangning Zhang, Guanzhong Tian, Haoyang He, Wuhao Zhang, Yabiao Wang, Chengjie Wang, and Yong Liu. Clip-ad: A language-guided staged dual-path model for zero-shot anomaly detection. arXiv preprint arXiv:2311.00453, 2023. 3", + "[9] Jaemin Cho, Seunghyun Yoon, Ajinkya Kale, Franck Dernoncourt, Trung Bui, and Mohit Bansal. Fine-grained image captioning with clip reward. arXiv preprint arXiv:2205.13115, 2022. 2", + "[10] Thomas Defard, Aleksandr Setkov, Angelique Loesch, and Romaric Audigier. Padim: a patch distribution modeling framework for anomaly detection and localization. In International Conference on Pattern Recognition, pages 475-489. Springer, 2021. 1, 2, 6", + "[11] Hanqiu Deng and Xingyu Li. Anomaly detection via reverse distillation from one-class embedding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9737-9746, 2022. 1, 2, 6", + "[12] Han Fang, Pengfei Xiong, Luhui Xu, and Yu Chen. Clip2video: Mastering video-text retrieval via image clip. arXiv preprint arXiv:2106.11097, 2021. 2", + "[13] Tharindu Fernando, Harshala Gammulle, Simon Denman, Sridha Sridharan, and Clinton Fookes. Deep learning for medical anomaly detection-a survey. ACM Computing Surveys (CSUR), 54(7):1-37, 2021. 1" + ], + "bbox": [ + 93, + 114, + 483, + 900 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[14] Peng Gao, Shijie Geng, Renrui Zhang, Teli Ma, Rongyao Fang, Yongfeng Zhang, Hongsheng Li, and Yu Qiao. Clip-adapter: Better vision-language models with feature adapters. International Journal of Computer Vision, 132(2): 581-595, 2024. 3", + "[15] Denis Gudovskiy, Shun Ishizaka, and Kazuki Kozuka. Cflow-ad: Real-time unsupervised anomaly detection with localization via conditional normalizing flows. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 98-107, 2022. 1, 2", + "[16] Haoyang He, Jiangning Zhang, Hongxu Chen, Xuhai Chen, Zhishan Li, Xu Chen, Yabiao Wang, Chengjie Wang, and Lei Xie. Diad: A diffusion-based framework for multi-class anomaly detection. arXiv preprint arXiv:2312.06607, 2023. 2", + "[17] Chaoqin Huang, Aofan Jiang, Jinghao Feng, Ya Zhang, Xin chao Wang, and Yanfeng Wang. Adapting visual-language models for generalizable anomaly detection in medical images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 11375-11385, 2024. 
2", + "[18] Chaoqin Huang, Aofan Jiang, Jinghao Feng, Ya Zhang, Xin chao Wang, and Yanfeng Wang. Adapting visual-language models for generalizable anomaly detection in medical images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11375-11385, 2024. 3, 5, 6", + "[19] Jongheon Jeong, Yang Zou, Taewan Kim, Dongqing Zhang, Avinash Ravichandran, and Onkar Dabeer. Winclip: Zero/few-shot anomaly classification and segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 19606-19616, 2023. 3, 6", + "[20] Stepan Jezek, Martin Jonak, Radim Burget, Pavel Dvorak, and Milos Skotak. Deep learning-based defect detection of metal parts: evaluating current methods in complex conditions. In 2021 13th International congress on ultra modern telecommunications and control systems and workshops (ICUMT), pages 66-71. IEEE, 2021. 6", + "[21] Ruixiang Jiang, Lingbo Liu, and Changwen Chen. Clipcount: Towards text-guided zero-shot object counting. In Proceedings of the 31st ACM International Conference on Multimedia, pages 4535-4545, 2023. 2", + "[22] Zeeshan Khan, Makarand Tapaswi, et al. Figclip: Fine-grained clip adaptation via densely annotated videos. arXiv preprint arXiv:2401.07669, 2024. 2", + "[23] Daehyun Kim, Sungyong Baik, and Tae Hyun Kim. Sanflow: Semantic-aware normalizing flow for anomaly detection. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. 1, 2", + "[24] Junnan Li, Ramprasaath Selvaraju, Akhilesh Gotmare, Shafiq Joty, Caiming Xiong, and Steven Chu Hong Hoi. Align before fuse: Vision and language representation learning with momentum distillation. Advances in neural information processing systems, 34:9694-9705, 2021. 2", + "[25] Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In Interna" + ], + "bbox": [ + 516, + 92, + 906, + 900 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "4752", + "bbox": [ + 482, + 944, + 514, + 955 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "tional conference on machine learning, pages 12888-12900. PMLR, 2022.", + "[26] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In International conference on machine learning, pages 19730-19742. PMLR, 2023. 2", + "[27] Xin Li, Dongze Lian, Zhihe Lu, Jiawang Bai, Zhibo Chen, and Xinchao Wang. Graphadapter: Tuning vision-language models with dual knowledge graph. Advances in Neural Information Processing Systems, 36, 2024. 2", + "[28] Yi Li, Hualiang Wang, Yiqun Duan, and Xiaomeng Li. Clip surgery for better explainability with enhancement in open-vocabulary tasks. arXiv preprint arXiv:2304.05653, 2023. 5", + "[29] Yuqi Lin, Minghao Chen, Wenxiao Wang, Boxi Wu, Ke Li, Binbin Lin, Haifeng Liu, and Xiaofei He. Clip is also an efficient segmenter: A text-driven approach for weakly supervised semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15305-15314, 2023. 2, 3", + "[30] Jie Liu, Yixiao Zhang, Jie-Neng Chen, Junfei Xiao, Yongyi Lu, Bennett A Landman, Yixuan Yuan, Alan Yuille, Yucheng Tang, and Zongwei Zhou. Clip-driven universal model for organ segmentation and tumor detection. 
In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 21152-21164, 2023. 2, 3", + "[31] Zhikang Liu, Yiming Zhou, Yuansheng Xu, and Zilei Wang. Simplenet: A simple network for image anomaly detection and localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20402-20411, 2023. 1, 2", + "[32] Ruiying Lu, YuJie Wu, Long Tian, Dongsheng Wang, Bo Chen, Xiyang Liu, and Ruimin Hu. Hierarchical vector quantized transformer for multi-class unsupervised anomaly detection. arXiv preprint arXiv:2310.14228, 2023. 2", + "[33] Wenxin Ma, Qingsong Yao, Xiang Zhang, Zhelong Huang, Zihang Jiang, and S.Kevin Zhou. Towards accurate unified anomaly segmentation. In Proceedings of the Winter Conference on Applications of Computer Vision (WACV), pages 1342-1352, 2025. 2", + "[34] Pankaj Mishra, Riccardo Verk, Daniele Fornasier, Claudio Piciarelli, and Gian Luca Foresti. Vt-adt: A vision transformer network for image anomaly detection and localization. In 2021 IEEE 30th International Symposium on Industrial Electronics (ISIE), pages 01–06. IEEE, 2021. 6", + "[35] Ron Mokady, Amir Hertz, and Amit H Bermano. Clipcap: Clip prefix for image captioning. arXiv preprint arXiv:2111.09734, 2021. 2", + "[36] Liliane Momeni, Mathilde Caron, Arsha Nagrani, Andrew Zisserman, and Cordelia Schmid. Verbs in action: Improving verb understanding in video-language models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 15579-15591, 2023. 2", + "[37] Amin Karimi Monsefi, Kishore Prakash Sailaja, Ali Alilooee, Ser-Nam Lim, and Rajiv Ramnath. Detailclip: Detail-oriented clip for fine-grained tasks. arXiv preprint arXiv:2409.06809, 2024. 3" + ], + "bbox": [ + 91, + 90, + 482, + 898 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[38] Roni Paiss, Ariel Ephrat, Omer Tov, Shiran Zada, Inbar Mosseri, Michal Irani, and Tali Dekel. Teaching clip to count to ten. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3170-3180, 2023. 2", + "[39] Or Patashnik, Zongze Wu, Eli Shechtman, Daniel Cohen-Or, and Dani Lischinski. Styleclip: Text-driven manipulation of stylegan imagery. In Proceedings of the IEEE/CVF international conference on computer vision, pages 2085–2094, 2021. 2", + "[40] Konstantin Pogorelov, Kristin Ranheim Randel, Carsten Griwodz, Sigrun Losada Eskeland, Thomas de Lange, Dag Johansen, Concetto Spampinato, Duc-Tien Dang-Nguyen, Mathias Lux, Peter Thelin Schmidt, Michael Riegler, and Pål Halvorsen. Kvasir: A multi-class image dataset for computer aided gastrointestinal disease detection. In Proceedings of the 8th ACM on Multimedia Systems Conference, pages 164-169, New York, NY, USA, 2017. ACM. 6", + "[41] Zhen Qu, Xian Tao, Mukesh Prasad, Fei Shen, Zhengtao Zhang, Xinyi Gong, and Guiguang Ding. Vcp-clip: A visual context prompting model for zero-shot anomaly segmentation. arXiv preprint arXiv:2407.12276, 2024. 2, 3, 6", + "[42] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748-8763. PMLR, 2021. 2, 8", + "[43] Karsten Roth, Latha Pemula, Joaquin Zepeda, Bernhard Schölkopf, Thomas Brox, and Peter Gehler. Towards total recall in industrial anomaly detection. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14318-14328, 2022. 2, 6", + "[44] Thomas Schlegl, Philipp Seebock, Sebastian M Waldstein, Ursula Schmidt-Erfurth, and Georg Langs. Unsupervised anomaly detection with generative adversarial networks to guide marker discovery. In International conference on information processing in medical imaging, pages 146-157. Springer, 2017. 2", + "[45] Sanjay Subramanian, William Merrill, Trevor Darrell, Matt Gardner, Sameer Singh, and Anna Rohrbach. Reclip: A strong zero-shot baseline for referring expression comprehension. arXiv preprint arXiv:2204.05991, 2022. 2", + "[46] Yingtian Tang, Yutaro Yamada, Yoyo Zhang, and Ilker Yildirim. When are lemons purple? the concept association bias of vision-language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 14333-14348, 2023. 2", + "[47] David Vázquez, Jorge Bernal, F Javier Sánchez, Gloria Fernández-Esparrach, Antonio M López, Adriana Romero, Michal Drozdzal, and Aaron Courville. A benchmark for endoluminal scene segmentation of colonoscopy images. Journal of healthcare engineering, 2017(1):4037190, 2017. 6", + "[48] Guodong Wang, Shumin Han, Errui Ding, and Di Huang. Student-teacher feature pyramid matching for anomaly detection. arXiv preprint arXiv:2103.04257, 2021. 2", + "[49] Zhaoqing Wang, Yu Lu, Qiang Li, Xunqiang Tao, Yandong Guo, Mingming Gong, and Tongliang Liu. Cris: Clipdriven referring image segmentation. In Proceedings of" + ], + "bbox": [ + 516, + 92, + 903, + 900 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "4753", + "bbox": [ + 482, + 945, + 514, + 955 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "the IEEE/CVF conference on computer vision and pattern recognition, pages 11686-11695, 2022. 2, 3", + "[50] Mengde Xu, Zheng Zhang, Fangyun Wei, Han Hu, and Xiang Bai. Side adapter network for open-vocabulary semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2945-2954, 2023. 3", + "[51] Xingyi Yang and Xinchao Wang. Diffusion model as representation learner. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 18938-18949, 2023. 2", + "[52] Qingsong Yao, Li Xiao, Peihang Liu, and S Kevin Zhou. Label-free segmentation of Covid-19 lesions in lung ct. IEEE transactions on medical imaging, 40(10):2808-2819, 2021. 2", + "[53] Qingsong Yao, Zecheng He, Yuexiang Li, Yi Lin, Kai Ma, Yefeng Zheng, and S Kevin Zhou. Adversarial medical image with hierarchical feature hiding. IEEE Transactions on Medical Imaging, 43(4):1296-1307, 2023. 2", + "[54] Xincheng Yao, Chongyang Zhang, Ruoqi Li, Jun Sun, and Zhenyu Liu. One-for-all: Proposal masked cross-class anomaly detection. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 4792-4800, 2023. 2", + "[55] Zhiyuan You, Lei Cui, Yujun Shen, Kai Yang, Xin Lu, Yu Zheng, and Xinyi Le. A unified model for multi-class anomaly detection. Advances in Neural Information Processing Systems, 35:4571-4584, 2022. 2, 6", + "[56] Zhiyuan You, Kai Yang, Wenhan Luo, Lei Cui, Yu Zheng, and Xinyi Le. Adtr: Anomaly detection transformer with feature reconstruction. In International Conference on Neural Information Processing, pages 298-310. Springer, 2022. 2", + "[57] Renrui Zhang, Wei Zhang, Rongyao Fang, Peng Gao, Kunchang Li, Jifeng Dai, Yu Qiao, and Hongsheng Li. 
Tip-adapter: Training-free adaption of clip for few-shot classification. In European conference on computer vision, pages 493-510. Springer, 2022. 3", + "[58] Xuan Zhang, Shiyu Li, Xi Li, Ping Huang, Jiulong Shan, and Ting Chen. Destseg: Segmentation guided denoising student-teacher for anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3914–3923, 2023. 1, 2", + "[59] Qihang Zhou, Guansong Pang, Yu Tian, Shibo He, and Jiming Chen. Anomalyclip: Object-agnostic prompt learning for zero-shot anomaly detection. arXiv preprint arXiv:2310.18961, 2023. 3, 5, 6, 8", + "[60] Ziqin Zhou, Yinjie Lei, Bowen Zhang, Lingqiao Liu, and Yifan Liu. Zegclip: Towards adapting clip for zero-shot semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11175-11185, 2023. 2, 3", + "[61] Bo Zong, Qi Song, Martin Renqiang Min, Wei Cheng, Cristian Lumezanu, Daeki Cho, and Haifeng Chen. Deep autoencoding gaussian mixture model for unsupervised anomaly detection. In International conference on learning representations, 2018. 2" + ], + "bbox": [ + 91, + 90, + 482, + 898 + ], + "page_idx": 10 + }, + { + "type": "ref_text", + "text": "[62] Yang Zou, Jongheon Jeong, Latha Pemula, Dongqing Zhang, and Onkar Dabeer. Spot-the-difference self-supervised pretraining for anomaly detection and segmentation. In European Conference on Computer Vision, pages 392-408. Springer, 2022. 6", + "bbox": [ + 516, + 90, + 903, + 160 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "4754", + "bbox": [ + 482, + 945, + 514, + 955 + ], + "page_idx": 10 + } +] \ No newline at end of file diff --git a/2025/AA-CLIP_ Enhancing Zero-Shot Anomaly Detection via Anomaly-Aware CLIP/1a15f5c2-189a-4bf7-936d-11fcae85b257_model.json b/2025/AA-CLIP_ Enhancing Zero-Shot Anomaly Detection via Anomaly-Aware CLIP/1a15f5c2-189a-4bf7-936d-11fcae85b257_model.json new file mode 100644 index 0000000000000000000000000000000000000000..37b36e4b924dd1a93d0fcc1650630f2bc9148b20 --- /dev/null +++ b/2025/AA-CLIP_ Enhancing Zero-Shot Anomaly Detection via Anomaly-Aware CLIP/1a15f5c2-189a-4bf7-936d-11fcae85b257_model.json @@ -0,0 +1,2510 @@ +[ + [ + { + "type": "header", + "bbox": [ + 0.107, + 0.003, + 0.182, + 0.044 + ], + "angle": 0, + "content": "CVF" + }, + { + "type": "header", + "bbox": [ + 0.237, + 0.001, + 0.812, + 0.047 + ], + "angle": 0, + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." 
+ }, + { + "type": "title", + "bbox": [ + 0.101, + 0.131, + 0.9, + 0.154 + ], + "angle": 0, + "content": "AA-CLIP: Enhancing Zero-Shot Anomaly Detection via Anomaly-Aware CLIP" + }, + { + "type": "text", + "bbox": [ + 0.134, + 0.179, + 0.865, + 0.199 + ], + "angle": 0, + "content": "Wenxin Ma\\(^{1,2}\\) Xu Zhang\\(^{1,2}\\) Qingsong Yao\\(^{5}\\) Fenghe Tang\\(^{1,2}\\) Chenxu Wu\\(^{1,2}\\)" + }, + { + "type": "text", + "bbox": [ + 0.207, + 0.199, + 0.798, + 0.218 + ], + "angle": 0, + "content": "Yingtai Li\\(^{1,2}\\) Rui Yan\\(^{1,2}\\) Zihang Jiang\\(^{1,2*}\\) S.Kevin Zhou\\(^{1,2,3,4*}\\)" + }, + { + "type": "text", + "bbox": [ + 0.165, + 0.216, + 0.834, + 0.234 + ], + "angle": 0, + "content": "1 School of Biomedical Engineering, Division of Life Sciences and Medicine, USTC" + }, + { + "type": "text", + "bbox": [ + 0.23, + 0.234, + 0.767, + 0.251 + ], + "angle": 0, + "content": "\\(^{2}\\) MIRACLE Center, Suzhou Institute for Advance Research, USTC" + }, + { + "type": "text", + "bbox": [ + 0.207, + 0.251, + 0.792, + 0.27 + ], + "angle": 0, + "content": "3 Key Laboratory of Intelligent Information Processing of CAS, ICT, CAS" + }, + { + "type": "text", + "bbox": [ + 0.226, + 0.269, + 0.772, + 0.288 + ], + "angle": 0, + "content": "\\(^{4}\\) State Key Laboratory of Precision and Intelligent Chemistry, USTC" + }, + { + "type": "text", + "bbox": [ + 0.414, + 0.288, + 0.584, + 0.305 + ], + "angle": 0, + "content": "5 Stanford University" + }, + { + "type": "text", + "bbox": [ + 0.212, + 0.308, + 0.78, + 0.322 + ], + "angle": 0, + "content": "wxma@mail.ustc.edu.cn jzh0103@ustc.edu.cn s.kevin.zhou@gmail.com" + }, + { + "type": "image", + "bbox": [ + 0.092, + 0.346, + 0.287, + 0.5 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.128, + 0.5, + 0.261, + 0.512 + ], + "angle": 0, + "content": "Features from Original CLIP" + }, + { + "type": "image_caption", + "bbox": [ + 0.17, + 0.518, + 0.213, + 0.532 + ], + "angle": 0, + "content": "(Left)" + }, + { + "type": "image", + "bbox": [ + 0.288, + 0.346, + 0.488, + 0.499 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.304, + 0.499, + 0.478, + 0.519 + ], + "angle": 0, + "content": "Stage1: Disentangling Anomaly-Aware Text Anchors" + }, + { + "type": "image", + "bbox": [ + 0.488, + 0.346, + 0.686, + 0.499 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.511, + 0.499, + 0.652, + 0.52 + ], + "angle": 0, + "content": "Stage2: Aligning Patch Features According to Text Anchors" + }, + { + "type": "image_caption", + "bbox": [ + 0.458, + 0.518, + 0.521, + 0.533 + ], + "angle": 0, + "content": "(Middle)" + }, + { + "type": "image", + "bbox": [ + 0.697, + 0.346, + 0.897, + 0.499 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.701, + 0.5, + 0.889, + 0.511 + ], + "angle": 0, + "content": "Features from our Anomaly-Aware CLIP" + }, + { + "type": "image_caption", + "bbox": [ + 0.772, + 0.517, + 0.824, + 0.533 + ], + "angle": 0, + "content": "(Right)" + }, + { + "type": "image_caption", + "bbox": [ + 0.089, + 0.538, + 0.907, + 0.608 + ], + "angle": 0, + "content": "Figure 1. (Left) CLIP's anomaly unawareness: Category-level image-text alignment in pre-training leads to CLIP's vague distinctions in anomaly/normal semantics and inaccurate patch-text alignment. 
(Middle) Our two-stage adaptation strategy: In Stage1, anomaly and normal text features are disentangled as anchors in text space; in Stage2, patch-level visual features are trained to align to these anchors, forming Anomaly-Aware CLIP. (Right) Generalizable anomaly awareness: Our method enables CLIP with generalizable anomaly awareness for both known and unseen classes." + }, + { + "type": "title", + "bbox": [ + 0.248, + 0.621, + 0.327, + 0.636 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.652, + 0.486, + 0.879 + ], + "angle": 0, + "content": "Anomaly detection (AD) identifies outliers for applications like defect and lesion detection. While CLIP shows promise for zero-shot AD tasks due to its strong generalization capabilities, its inherent Anomaly-Unawareness leads to limited discrimination between normal and abnormal features. To address this problem, we propose Anomaly-Aware CLIP (AA-CLIP), which enhances CLIP's anomaly discrimination ability in both text and visual spaces while preserving its generalization capability. AA-CLIP is achieved through a straightforward yet effective two-stage approach: it first creates anomaly-aware text anchors to differentiate normal and abnormal semantics clearly, then aligns patch-level visual features with these anchors for precise anomaly localization. This two-stage strategy, with the help of residual adapters, gradually adapts CLIP in a controlled man" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.622, + 0.907, + 0.713 + ], + "angle": 0, + "content": "ner, achieving effective AD while maintaining CLIP's class knowledge. Extensive experiments validate AA-CLIP as a resource-efficient solution for zero-shot AD tasks, achieving state-of-the-art results in industrial and medical applications. The code is available at https://github.com/Mwxinnn/AA-CLIP." + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.73, + 0.646, + 0.745 + ], + "angle": 0, + "content": "1. Introduction" + }, + { + "type": "text", + "bbox": [ + 0.511, + 0.75, + 0.907, + 0.903 + ], + "angle": 0, + "content": "Anomaly detection (AD) involves modeling the distribution of a dataset to identify outliers, such as defects in industrial products [2] or lesions in medical images [13]. Despite that previous AD frameworks [10, 11, 15, 23, 31, 58] effectively detect anomalies when sufficient labeled data is available for specific classes, their high resource demands often limit their generalization ability to novel and rare classes. This limitation is particularly challenging in real-world scenarios where collecting comprehensive labeled datasets for AD is often infeasible, necessitating the exploration of low-shot" + }, + { + "type": "page_footnote", + "bbox": [ + 0.11, + 0.888, + 0.236, + 0.901 + ], + "angle": 0, + "content": "*Corresponding author." + }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.945, + 0.516, + 0.957 + ], + "angle": 0, + "content": "4744" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.091, + 0.092, + 0.371, + 0.107 + ], + "angle": 0, + "content": "learning and transfer learning approaches." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.109, + 0.484, + 0.32 + ], + "angle": 0, + "content": "Contrastive Language-Image Pretraining (CLIP) model has emerged as a promising solution, demonstrating remarkable generalization capabilities across various zero-shot tasks [24-26, 42]. 
Building upon CLIP's success, several recent studies have adapted CLIP for few/zero-shot AD tasks by utilizing anomaly-related descriptions to guide the detection of anomalous regions. Specifically, the vision encoder is trained to map anomaly images to visual features that align more closely with text features of abnormal descriptions than with those of normal descriptions [29, 30, 49, 60]. Further works [6, 7, 17, 41] have focused on enhancing CLIP's patch-level feature representations to achieve better alignment with text features, resulting in improved anomaly localization performance." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.322, + 0.484, + 0.565 + ], + "angle": 0, + "content": "These methods depend on text features that need to be anomaly-aware to effectively differentiate abnormalities. However, recent studies highlight CLIP's limitations in fine-grained semantic perception and reasoning [21, 22, 36, 38, 45, 46]. Upon exploring CLIP's text features for AD, we observe that while CLIP's text encoder effectively captures object-level information, it struggles to reliably distinguish between normal and abnormal semantics. As shown in the conceptual visualization in Fig. 1(left) and the sampled examples in Fig. 2, CLIP has the intrinsic Anomaly-Unawareness problem: the overlap of normal and abnormal text features hampers the precision of text-guided anomaly detection. We argue that making CLIP anomaly-aware — by establishing clearer distinctions between normal and abnormal semantics in the text space — is essential for guiding the vision encoder to precisely detect and localize anomalies." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.567, + 0.483, + 0.839 + ], + "angle": 0, + "content": "This observation drives us to improve CLIP-based zero-shot AD through enhancing anomaly discrimination in text space, achieved with our method Anomaly-Aware CLIP (AA-CLIP) — a CLIP model with anomaly-aware information encoded. AA-CLIP is implemented through a novel two-stage adaptation approach. In the first stage, AA-CLIP adapts the text encoder with a frozen visual encoder, creating \"anchors\" for anomaly-aware semantics within the text space for each trained class. As illustrated in Fig. 1(middle), each class's text features are disentangled into distinct anchors, with clear abnormality discrimination. Notably, this disentanglement also applies to novel, unseen classes, supporting effective zero-shot inference in AD tasks (refer to Fig. 1(right)). In the second stage, AA-CLIP aligns patch-level visual features with these specially adapted text anchors, guiding CLIP's visual encoder to concentrate on anomaly-relevant regions. This two-stage approach ensures a focused and precise anomaly detection framework." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.84, + 0.484, + 0.902 + ], + "angle": 0, + "content": "Importantly, as CLIP is extensively trained on massive data, to preserve its pre-trained knowledge, we utilize simple-structured Residual Adapters in both stages. This design enables a controlled adaptation of CLIP while en" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.092, + 0.905, + 0.122 + ], + "angle": 0, + "content": "
+ }, + { + "type": "text", + "bbox": [ + 0.512, + 0.123, + 0.907, + 0.303 + ], + "angle": 0, + "content": "Our extensive experiments in both industrial and medical domains demonstrate that our straightforward approach equips CLIP with improved zero-shot AD ability, even in data-limited scenarios. By training with a minimal sample — such as one normal sample and one anomaly sample (2-shot) per class — and testing across unseen datasets, our method achieves zero-shot performance comparable to other CLIP-based AD techniques. With only 64-shot of each class seen in the training set, our method reaches state-of-the-art (SOTA) results in cross-dataset zero-shot testing, validating our method's ability to maximize the CLIP's potential for AD with a minimal data requirement." + }, + { + "type": "text", + "bbox": [ + 0.532, + 0.304, + 0.836, + 0.317 + ], + "angle": 0, + "content": "Our contributions are summarized as follows:" + }, + { + "type": "text", + "bbox": [ + 0.515, + 0.319, + 0.905, + 0.395 + ], + "angle": 0, + "content": "1. Anomaly-Aware CLIP with enhanced and generalizable anomaly-discriminative ability. We introduce AA-CLIP which is more sensitive to anomalies sequentially in text and visual spaces, encoding anomaly-aware information into the original CLIP." + }, + { + "type": "text", + "bbox": [ + 0.515, + 0.395, + 0.905, + 0.455 + ], + "angle": 0, + "content": "2. Efficient adaptation using residual adapters. We implement simple residual adapters to boost zero-shot anomaly detection performance without compromising the model's generalization ability." + }, + { + "type": "text", + "bbox": [ + 0.515, + 0.455, + 0.905, + 0.515 + ], + "angle": 0, + "content": "3. SOTA performance with high training efficiency. Our method achieves SOTA results across diverse datasets, showing robust anomaly detection capabilities even with limited training samples." + }, + { + "type": "list", + "bbox": [ + 0.515, + 0.319, + 0.905, + 0.515 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.529, + 0.655, + 0.544 + ], + "angle": 0, + "content": "2. Related Work" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.549, + 0.907, + 0.716 + ], + "angle": 0, + "content": "Traditional Anomaly Detection in images involves modeling the normal data distribution to detect rare and diverse unexpected signals within visual data [44, 52, 53, 61]. Reconstruction-based [11, 16, 32, 33, 54, 56], augmentation-based [31, 44, 48, 55, 58] and discriminative [10, 15, 23, 31, 43, 61] methods are typically used to facilitate better modeling. Despite the huge progress of traditional anomaly detection methods, their effectiveness relies heavily on a well-modeled normal data distribution. Without sufficient normal data, their ability to accurately detect anomalies is significantly reduced." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.72, + 0.909, + 0.902 + ], + "angle": 0, + "content": "CLIP, trained on a vast amount of image-text data, leverages contrastive learning alongside powerful language models and visual feature encoders to capture robust concepts. This combination enables CLIP to achieve impressive zero-shot performance on image classification, as it can generalize well to new categories without requiring task-specific training [24-27, 42, 51]. 
More recently, numerous studies [9, 12, 35, 39] have explored ways to transfer the knowledge embedded in CLIP models to a variety of downstream tasks, yielding promising results in fields like image captioning, image-text retrieval, and image generation. These efforts demonstrate CLIP's versatility and potential to drive" + }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.945, + 0.516, + 0.956 + ], + "angle": 0, + "content": "4745" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.091, + 0.092, + 0.373, + 0.106 + ], + "angle": 0, + "content": "advancements across diverse applications." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.107, + 0.484, + 0.334 + ], + "angle": 0, + "content": "Despite the rapid advancements achieved by CLIP, numerous studies have highlighted persistent limitations in the features it extracts. While CLIP demonstrates strong generalization across various tasks, it often struggles to capture nuanced details and essential spatial relationships, which are crucial for tasks demanding precise boundary delineation and fine-grained feature extraction. This limitation results in suboptimal performance in downstream applications, especially those that require high levels of detail, such as object detection, scene segmentation, or tasks in medical imaging [14, 29, 30, 37, 49, 50, 57, 60]. As a result, leveraging CLIP for fine-grained tasks frequently necessitates task-specific adaptations to bridge the gap between its generalized feature extraction and the precision required for specialized applications." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.337, + 0.484, + 0.655 + ], + "angle": 0, + "content": "CLIP-based Anomaly Detection There have been several efforts to leverage CLIP for AD tasks. One of the pioneering approaches, WinCLIP [19], proposes a method for extracting and aggregating visual features from multiple levels to align with text features, demonstrating the potential of CLIP in this context. Subsequent research investigates various adaptation methods to bridge the gap between natural domains and the AD domain, resulting in performance improvements. For instance, [7, 8, 18] focus on refining visual features by employing adapters to enhance patch-level visual representations. However, these approaches often rely on text embeddings from the original CLIP model as soft supervision and overlook a critical limitation of CLIP in AD: its unclearness in distinguishing between anomalous and normal semantics, particularly within the text encoder, resulting in suboptimal performance. Other works have employed prompt-learning-based methods [5, 6, 41, 59], introducing learnable embeddings into the text encoder to better represent abnormality. However, the class information in CLIP can be damaged, potentially degrading generalization, especially in data-limited and zero-shot settings." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.655, + 0.484, + 0.777 + ], + "angle": 0, + "content": "Different from previous methods, we are the first to investigate CLIP's inherent limitation in capturing anomaly-aware information, specifically in differentiating between normal and anomalous semantics in text prompts. Rather than relying solely on the original anomaly-unaware text embeddings or unaltered feature spaces, our method is able to refine the embeddings to actively incorporate anomaly-discriminative representations." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.79, + 0.182, + 0.805 + ], + "angle": 0, + "content": "3. 
Method" + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.815, + 0.201, + 0.829 + ], + "angle": 0, + "content": "3.1. Overview" + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.837, + 0.288, + 0.851 + ], + "angle": 0, + "content": "3.1.1. Problem Formulation" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.856, + 0.484, + 0.901 + ], + "angle": 0, + "content": "Zero-shot AD models are trained to identify anomalous samples whose categories may be unseen in the training dataset. Specifically, the model is expected to learn" + }, + { + "type": "image", + "bbox": [ + 0.545, + 0.096, + 0.63, + 0.162 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.557, + 0.165, + 0.623, + 0.227 + ], + "angle": 0, + "content": null + }, + { + "type": "table_caption", + "bbox": [ + 0.645, + 0.094, + 0.777, + 0.108 + ], + "angle": 0, + "content": "\"This is a [ ] carpet.\"" + }, + { + "type": "table", + "bbox": [ + 0.641, + 0.11, + 0.877, + 0.159 + ], + "angle": 0, + "content": "
Semantics | Similarity | Probability (τ=0.01)
broken | 0.18 | 0.22
normal | 0.19 | 0.78
" + }, + { + "type": "table_caption", + "bbox": [ + 0.642, + 0.163, + 0.774, + 0.177 + ], + "angle": 0, + "content": "\"This is a [ ] zipper.\"" + }, + { + "type": "table", + "bbox": [ + 0.641, + 0.178, + 0.877, + 0.227 + ], + "angle": 0, + "content": "
Semantics | Similarity | Probability (τ=0.01)
broken | 0.20 | 0.38
normal | 0.21 | 0.62
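The probabilities in the two prompt tables above are consistent with a temperature-scaled softmax over the similarity scores. A small arithmetic sketch (ours, not the paper's code) shows how τ = 0.01 turns a 0.01 similarity gap into a confident, and here wrong, "normal" prediction; the rounded similarities give slightly different probabilities than the table's unrounded values.

```python
import math

def prompt_probabilities(sim_normal: float, sim_anomaly: float, tau: float = 0.01):
    """Softmax over two prompt similarities with temperature tau."""
    z_n = math.exp(sim_normal / tau)
    z_a = math.exp(sim_anomaly / tau)
    return z_n / (z_n + z_a), z_a / (z_n + z_a)

# Carpet example: normal similarity 0.19, broken similarity 0.18.
# Rounded inputs give roughly (0.73, 0.27); the table's (0.78, 0.22)
# follows from the unrounded similarity values.
print(prompt_probabilities(0.19, 0.18))
```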
" + }, + { + "type": "image", + "bbox": [ + 0.516, + 0.228, + 0.903, + 0.339 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.512, + 0.34, + 0.906, + 0.493 + ], + "angle": 0, + "content": "Figure 2. (Top) Examples illustrating CLIP's Anomaly Unawareness. Despite the obvious anomalies present in the images, image features have higher similarities to normal descriptions, rather than anomaly descriptions, mistakenly. This problem is enlarged with a low temperature \\(\\tau\\). (Bottom) Text Feature Similarity Heatmap among Normal and Anomaly Descriptions: Original CLIP vs. After Text Adaptation. Red indicates high similarity. In original CLIP, normal features exhibit strong similarity with anomaly features, whereas text adaptation successfully separates them, clarifying the semantic distinctions between normal and anomaly descriptions." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.511, + 0.906, + 0.678 + ], + "angle": 0, + "content": "both normal and abnormal patterns that are shared across different classes given a training set \\(\\mathcal{D}_{train}\\) with normal or anomalous samples, in order to be capable of performing AD tasks on a series of different test datasets \\(\\{\\mathcal{D}_{test}^{1},\\mathcal{D}_{test}^{2},\\dots,\\mathcal{D}_{test}^{n}\\}\\), where each \\(\\mathcal{D}_{test}^{i}\\) is distinct from \\(\\mathcal{D}_{train}\\). Image-level AD can be formally defined as a binary classification problem, where the model aims to classify samples \\(x\\in \\mathcal{D}\\) as either normal (\\(y = 0\\)) or anomalous (\\(y = 1\\)). Anomaly segmentation extends this concept to pixel-level with mask \\(S\\), aiming to identify anomalous regions by highlighting pixels associated with anomalies." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.686, + 0.697, + 0.701 + ], + "angle": 0, + "content": "3.1.2. Current Challenges" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.705, + 0.906, + 0.825 + ], + "angle": 0, + "content": "Anomaly Unawareness in CLIP: The CLIP-based AD method classifies visual features as \"anomalies\" if they exhibit greater similarities to anomaly prompt embeddings than to normal prompt embeddings, thus requiring well-defined boundaries between these two kinds of prompts. However, in real applications, CLIP's text embeddings often lack the clear separability needed to reliably distinguish between normal and anomaly classes." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.826, + 0.906, + 0.901 + ], + "angle": 0, + "content": "We observe that, despite the visible defects in example images from the MVTec-AD [2], their features exhibit higher cosine similarity with \"normal\" prompts than with correct \"anomaly\" descriptions (see Fig. 2 (top)), indicating CLIP's inaccurate semantic understanding. Without adap" + }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.945, + 0.516, + 0.956 + ], + "angle": 0, + "content": "4746" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.099, + 0.094, + 0.482, + 0.389 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.4, + 0.485, + 0.485 + ], + "angle": 0, + "content": "Figure 3. t-SNE Visualization of Text Features from Original CLIP vs. AA-CLIP. Each point represents a text feature encoded from a prompt. Original CLIP's normal and anomaly text features are intertwined, while our method effectively disentangles them. This disentanglement is generalizable to novel classes, validating the anomaly-awareness of our model." 
+ }, + { + "type": "text", + "bbox": [ + 0.09, + 0.499, + 0.484, + 0.649 + ], + "angle": 0, + "content": "tation, there persists a high similarity between the normal and abnormal text embeddings of a single class, as shown in Fig. 2 (bottom), suggesting a potential entanglement of normal and anomaly semantics within text space. We term this limitation Anomaly Unawareness and attribute it to the training process of CLIP: it is primarily trained on general, non-anomalous datasets and lacks specific guidance on defect detection. Consequently, it is challenging to rely on original CLIP embeddings to detect subtle or context-specific anomalies." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.65, + 0.484, + 0.773 + ], + "angle": 0, + "content": "This issue remains evident across different categories in our t-SNE analysis: as shown in Fig. 3 (top), only subtle separations are observed within an object cluster, where text embeddings for both normal and abnormal semantics are intermixed. This entangled pattern may potentially lead to anomaly-unaware text-image alignment, which reinforces the necessity of adapting CLIP to enhance its anomaly-awareness." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.78, + 0.484, + 0.903 + ], + "angle": 0, + "content": "Embedding Adaptation Dilemma: The discussion above renders the adaptation of CLIP essential for effective AD. However, since CLIP's embeddings are already optimized through extensive pretraining, it could be susceptible to overfitting to the new dataset during adaptation. Overfitting convergence leads to minimized intra-class distinctions in the training data, often at the expense of the feature separability needed for effective generalization to unseen data." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.092, + 0.906, + 0.138 + ], + "angle": 0, + "content": "To address this, a carefully controlled refinement is crucial to preserve CLIP's generalization capabilities while enhancing its sensitivity to anomalies." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.145, + 0.742, + 0.16 + ], + "angle": 0, + "content": "3.1.3. Overview of Our Solution" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.165, + 0.907, + 0.438 + ], + "angle": 0, + "content": "Motivated by Sec. 3.1.2, we propose Anomaly-Aware CLIP (AA-CLIP) with improved anomaly awareness. As shown in Fig. 4, AA-CLIP is trained through a two-stage training strategy that sequentially adapts the semantic-rich text space and detail-focused visual space, with original CLIP parameters remaining frozen. In the first stage (see Fig. 4 (Top)), we incorporate Residual Adapters into the shallow layers of the text encoder, and the visual features from the fixed image encoder serve as a stable reference for optimization. A Disentangle Loss is proposed to enforce effective discrimination by ensuring independence between normal and anomaly embeddings. In the second stage, we integrate Residual Adapters into the shallow layers of the visual encoder to align patch-level features with the specially adapted text features from the fixed text encoder (see Fig. 4 (Bottom)). Ultimately, our AA-CLIP succeeds in equipping CLIP with anomaly awareness across seen and unseen classes, as shown in Fig. 3 (bottom)." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.446, + 0.906, + 0.463 + ], + "angle": 0, + "content": "3.2. AA-CLIP with Two-Stage Adaptation Strategy" + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.469, + 0.684, + 0.484 + ], + "angle": 0, + "content": "3.2.1. 
Residual Adapter" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.488, + 0.906, + 0.547 + ], + "angle": 0, + "content": "To preserve CLIP's pre-trained knowledge while enabling targeted adaptation, we introduce lightweight Residual Adapters in the shallow layers (up to layer \\( K \\)) of both text and vision encoders." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.548, + 0.906, + 0.595 + ], + "angle": 0, + "content": "The output feature \\( x^{i} \\in \\mathbb{R}^{N \\times d} \\) of CLIP's \\( i \\)-th (\\( i \\leq K \\)) transformer layer is fed into the \\( i \\)-th adapter, outputting adapted feature \\( x_{residual}^{i} \\), as shown in Eq. (1)," + }, + { + "type": "equation", + "bbox": [ + 0.588, + 0.605, + 0.905, + 0.623 + ], + "angle": 0, + "content": "\\[\nx _ {\\text {r e s i d u a l}} ^ {i} = \\operatorname {N o r m} \\left(\\operatorname {A c t} \\left(W ^ {i} x ^ {i}\\right)\\right), \\tag {1}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.635, + 0.906, + 0.728 + ], + "angle": 0, + "content": "where \\( W^{i} \\in \\mathbb{R}^{d \\times d} \\) is the trainable linear weight of \\( i \\)-th adapter, \\( Act(\\cdot) \\) is an activation function, and \\( Norm(\\cdot) \\) is a normalizing function. The original feature \\( x^{i} \\) and the enhanced feature \\( x_{residual}^{i} \\) are fused in a weighted manner, generating \\( x_{enhanced}^{i} \\), the input to the next transformer layer, as shown in Eq. (2)," + }, + { + "type": "equation", + "bbox": [ + 0.579, + 0.738, + 0.905, + 0.757 + ], + "angle": 0, + "content": "\\[\nx _ {\\text {e n h a n c e d}} ^ {i} = \\lambda x _ {\\text {r e s i d u a l}} ^ {i} + (1 - \\lambda) x ^ {i}, \\tag {2}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.768, + 0.906, + 0.829 + ], + "angle": 0, + "content": "where \\(\\lambda\\) is a hyper-parameter to control the residual ratio, adjusting the fusing degree of AD-specific knowledge for preserving the original CLIP's generalization ability and improved performance." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.837, + 0.762, + 0.853 + ], + "angle": 0, + "content": "3.2.2. Two-Stage Training Strategy" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.856, + 0.906, + 0.902 + ], + "angle": 0, + "content": "Disentangling Anomaly-Aware Text Anchors: In the first stage, our objective is to learn anomaly-discriminative text anchors by adapting the text encoder while keeping the" + }, + { + "type": "page_number", + "bbox": [ + 0.482, + 0.945, + 0.516, + 0.957 + ], + "angle": 0, + "content": "4747" + } + ], + [ + { + "type": "image_caption", + "bbox": [ + 0.132, + 0.088, + 0.5, + 0.104 + ], + "angle": 0, + "content": "Stage1: Disentangling Anomaly-Aware Text Anchors" + }, + { + "type": "image", + "bbox": [ + 0.129, + 0.108, + 0.868, + 0.253 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.137, + 0.258, + 0.548, + 0.274 + ], + "angle": 0, + "content": "Stage2: Aligning Patch Features According to Text Anchors" + }, + { + "type": "image", + "bbox": [ + 0.125, + 0.277, + 0.87, + 0.462 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.089, + 0.469, + 0.908, + 0.526 + ], + "angle": 0, + "content": "Figure 4. The Two-Stage Training Pipeline of Anomaly-Aware CLIP. In the first stage, the text encoder of AA-CLIP is trained to identify anomaly-related semantics, helped by a disentangle loss. In the second stage, patch features are aligned with these text anchors. 
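The adapter update in Eqs. (1)-(2) is compact, so a minimal PyTorch sketch of it may help; GELU for Act(·) and L2 normalization for Norm(·) are our assumptions (the paper only names the function classes), while the residual ratio λ = 0.1 follows the implementation details in Sec. 4.1.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualAdapter(nn.Module):
    """One adapter attached to a shallow CLIP transformer layer.

    Implements x_residual = Norm(Act(W x))                    (Eq. 1)
    and        x_enhanced = lam * x_residual + (1 - lam) * x  (Eq. 2).
    Act = GELU and Norm = L2 normalization are assumptions.
    """

    def __init__(self, d: int, lam: float = 0.1):
        super().__init__()
        self.W = nn.Linear(d, d, bias=False)  # trainable weight W^i
        self.lam = lam

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x_residual = F.normalize(F.gelu(self.W(x)), dim=-1)   # Eq. (1)
        return self.lam * x_residual + (1.0 - self.lam) * x   # Eq. (2)
```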
Both stages are achieved by the integration of Residual Adapters into the shallow layers of CLIP's backbone. This controlled adaptation enables CLIP to effectively distinguish anomalies, which forms our Anomaly-Aware CLIP." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.543, + 0.483, + 0.603 + ], + "angle": 0, + "content": "image encoder fixed. We incorporate Residual Adapters into the first \\( K_{T} \\) layers of the CLIP text encoder, as illustrated in Fig. 4 (Top), and set the final projector in the text encoder to be learnable to facilitate improved alignment." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.603, + 0.484, + 0.786 + ], + "angle": 0, + "content": "Using prompts designed to encapsulate both normal and anomalous semantics (as detailed in Appendix), text encoder generates corresponding high-level embeddings. The average embeddings of the normal and anomaly prompts serve as our initial text anchors, denoted as \\( T_{N} \\) and \\( T_{A} \\in \\mathbb{R}^{d} \\), respectively. These anchors are refined by being aligned with visual features extracted from an enhanced CLIP visual encoder, as [28, 59]. Alignment is conducted at both image and patch levels to incorporate both global and local semantics. By calculating the cosine similarity between these anchors and the image features \\( V_{image} \\in \\mathbb{R}^{d} \\) or patch features \\( V_{patch} \\in \\mathbb{R}^{N \\times d} \\), as shown in Eq. (3)," + }, + { + "type": "equation", + "bbox": [ + 0.166, + 0.794, + 0.483, + 0.818 + ], + "angle": 0, + "content": "\\[\np _ {c l s} = \\operatorname {C o s S i m} \\left(V _ {\\text {i m a g e}}, \\left[ T _ {N}, T _ {A} \\right]\\right), \\tag {3}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.166, + 0.813, + 0.405, + 0.831 + ], + "angle": 0, + "content": "\\[\np _ {s e g} ^ {o} = \\operatorname {C o s S i m} (V _ {p a t c h}, [ T _ {N}, T _ {A} ]),\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.839, + 0.484, + 0.903 + ], + "angle": 0, + "content": "where \\([\\cdot, \\cdot]\\) means concatenate operation, we obtain the classification prediction \\(p_{cls} \\in \\mathbb{R}^2\\) and the segmentation prediction \\(p_{seg}^{o} \\in \\mathbb{R}^{N \\times 2}\\). The segmentation prediction \\(p_{seg}^{o}\\) is then reshaped and upsampled to \\(p_{seg} \\in \\mathbb{R}^{H \\times W \\times 2}\\) to align" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.543, + 0.907, + 0.679 + ], + "angle": 0, + "content": "with the height \\( H \\) and width \\( W \\) of segmentation mask \\( S \\). Following previous works [6, 7, 18, 59], we compute the classification loss \\( \\mathcal{L}_{cls} \\) and segmentation loss \\( \\mathcal{L}_{seg} \\) to optimize parameters, as specified in Eq. (4). Specifically, the classification loss is a binary cross-entropy that compares classification predictions with ground-truth labels \\( y \\), and the segmentation loss is a combination of dice loss and focal loss applied to segmentation predictions and the anomaly segmentation mask \\( S \\)." 
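To make the anchor construction concrete, a hedged sketch of the Eq. (3) similarities and the Eq. (5) disentangle regularizer follows; the BCE/dice/focal terms of Eq. (4) are standard and omitted here, and the function names are ours, not the authors'.

```python
import torch
import torch.nn.functional as F

def anchor_predictions(v_image, v_patch, t_n, t_a):
    """Eq. (3): cosine similarities of image/patch features to the
    normal (T_N) and anomaly (T_A) text anchors.

    Shapes: v_image (d,), v_patch (N, d), t_n and t_a (d,).
    Returns p_cls (2,) and p_seg_o (N, 2).
    """
    anchors = F.normalize(torch.stack([t_n, t_a]), dim=-1)  # (2, d)
    p_cls = F.normalize(v_image, dim=-1) @ anchors.T
    p_seg_o = F.normalize(v_patch, dim=-1) @ anchors.T
    return p_cls, p_seg_o

def disentangle_loss(t_n, t_a):
    """Eq. (5): squared inner product, pushing the two anchors toward
    orthogonality; added to the alignment loss with weight gamma."""
    return (t_n @ t_a).pow(2)
```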
+ }, + { + "type": "equation", + "bbox": [ + 0.576, + 0.685, + 0.722, + 0.701 + ], + "angle": 0, + "content": "\\[\n\\mathcal {L} _ {c l s} = \\operatorname {B C E} (p _ {c l s}, y),\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.577, + 0.704, + 0.905, + 0.72 + ], + "angle": 0, + "content": "\\[\n\\mathcal {L} _ {\\text {s e g}} = \\operatorname {D i c e} \\left(p _ {\\text {s e g}}, S\\right) + \\operatorname {F o c a l} \\left(p _ {\\text {s e g}}, S\\right), \\tag {4}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.577, + 0.723, + 0.73, + 0.74 + ], + "angle": 0, + "content": "\\[\n\\mathcal {L} _ {\\text {a l i g n}} = \\mathcal {L} _ {\\text {c l s}} + \\mathcal {L} _ {\\text {s e g}}.\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.744, + 0.907, + 0.805 + ], + "angle": 0, + "content": "To enhance the separation between normal and anomaly text embeddings, we introduce a Disentangle Loss encouraging orthogonality between \\( T_{N} \\) and \\( T_{A} \\) to minimize correlation, as in Eq. (5):" + }, + { + "type": "equation", + "bbox": [ + 0.625, + 0.81, + 0.905, + 0.827 + ], + "angle": 0, + "content": "\\[\n\\mathcal {L} _ {\\text {d i s}} = | < T _ {N}, T _ {A} > | ^ {2}. \\tag {5}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.834, + 0.907, + 0.879 + ], + "angle": 0, + "content": "The Disentangle Loss \\(\\mathcal{L}_{dis}\\) is incorporated into the alignment loss \\(\\mathcal{L}_{\\text {align }}\\) as a regularization term, weighted by a factor \\(\\gamma\\), which forms the total loss, as in Eq. (6):" + }, + { + "type": "equation", + "bbox": [ + 0.624, + 0.886, + 0.905, + 0.903 + ], + "angle": 0, + "content": "\\[\n\\mathcal {L} _ {\\text {t o t a l}} = \\mathcal {L} _ {\\text {a l i g n}} + \\gamma \\mathcal {L} _ {\\text {d i s}}. \\tag {6}\n\\]" + }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.945, + 0.516, + 0.956 + ], + "angle": 0, + "content": "4748" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.092, + 0.482, + 0.168 + ], + "angle": 0, + "content": "In this stage, the distinction between normal and anomaly semantics is embedded into CLIP's text encoder while its original object-recognition capability is preserved. Figure 3 indicates that this ability of anomaly-awareness is robust and generalizable to novel classes." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.173, + 0.483, + 0.278 + ], + "angle": 0, + "content": "Aligning Patch Features According to Text Anchors: Anomaly-aware semantic anchors can facilitate the adaptation of patch features, thereby improving the effectiveness and generalizability of anomaly localization. To achieve alignment between patch features and anchors from the previous stage, we introduce trainable Residual Adapters within the initial \\( K_{I} \\) layers of the CLIP visual encoder." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.28, + 0.485, + 0.416 + ], + "angle": 0, + "content": "Features with multi-granularities are utilized to enhance segmentation [7, 19, 59]. Specifically, as shown in Fig. 4 (bottom), the intermediate output feature \\( F^i \\) are extracted from four distinct granularities. These multi-granularity features are then projected to align with the channel of text anchors via a trainable projector \\( Proj_i(\\cdot) \\), yielding \\( V_{patch}^i \\) at four distinct levels of granularity. The aggregated output \\( V_{patch} \\) is computed by summing individual \\( V_{patch}^i \\) outputs, as in Eq. 
(7):" + }, + { + "type": "equation", + "bbox": [ + 0.162, + 0.424, + 0.482, + 0.486 + ], + "angle": 0, + "content": "\\[\n\\begin{array}{l} V _ {p a t c h} ^ {i} = \\operatorname {P r o j} _ {i} \\left(F ^ {i}\\right), i \\in \\{1, 2, 3, 4 \\} \\\\ V _ {p a t c h} = \\sum_ {i = 1} ^ {4} V _ {p a t c h} ^ {i}. \\tag {7} \\\\ \\end{array}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.494, + 0.483, + 0.54 + ], + "angle": 0, + "content": "The cosine similarity scores between the aggregated \\( V_{patch} \\) and the text anchors are calculated to generate patch-level predictions as Eq. (3), resulting in the prediction maps." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.54, + 0.484, + 0.631 + ], + "angle": 0, + "content": "During training, alignment is guided by the loss function defined in Eq. (4), facilitating both global and local alignment. During inference, anomaly prediction maps and corresponding anomaly scores are derived by comparing the similarity scores of visual features against normal and anomaly text embeddings." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.643, + 0.224, + 0.66 + ], + "angle": 0, + "content": "4. Experiments" + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.668, + 0.274, + 0.684 + ], + "angle": 0, + "content": "4.1. Experiment Setups" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.689, + 0.483, + 0.855 + ], + "angle": 0, + "content": "Datasets We evaluate our model on 11 widely used benchmarks, as previous AD works [6, 7, 18, 19, 59], with distinct foreground objects spanning a variety of modalities, including photography, endoscopy, CT, MRI, and OCT. For the industrial domain, we use MVtec AD [2], VisA [62], BTAD [34] and MPDD [20]. For medical domain, we use brain MRI, liver CT and retina OCT from BMAD [1], and four different colon polyp detection datasets with different views (CVC-ClinicDB [4], CVC-ColonDB [3], Kvasir-SEG [40] and CVC-300 [47]). Each dataset has both image-level labels and pixel-level masks for evaluation." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.856, + 0.484, + 0.901 + ], + "angle": 0, + "content": "We train our model on a real-world industrial AD dataset - VisA [62] - in which objects are different from other datasets. Results of VisA are obtained using MVtec-AD as" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.092, + 0.905, + 0.182 + ], + "angle": 0, + "content": "the training dataset. To demonstrate adaptation efficiency, we conduct training under various data levels: 2-shot per class, 16-shot per class, 64-shot per class, and full-shot. The corresponding number of samples are randomly selected from each class, while maintaining a consistent 1:1 ratio between normal and anomaly samples." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.183, + 0.905, + 0.457 + ], + "angle": 0, + "content": "Metrics Following [6, 7, 10, 11, 18, 19, 43, 55], we use the Area Under the Receiver Operating Characteristic Curve (AUROC) as the metric. We compute AUROC at both the image and pixel levels to comprehensively assess the model's effectiveness in detecting and localizing anomalies. Implementation Details Following [6, 7, 41, 59], we use OpenCLIP with the ViT-L/14 architecture as the backbone, and input images are resized to \\(518 \\times 518\\). All parameters of CLIP remain frozen. We set \\(\\lambda\\) to 0.1, \\(K_{T}\\) to 3, \\(K_{I}\\) to 6, and \\(\\gamma\\) to 0.1. 
For multi-level feature extraction, we utilize outputs from the 6-th, 12-th, 18-th, and 24-th layers of the visual encoder to compose the overall output. For the first stage, we train the model for 5 epochs with a learning rate of \\(1 \\times 10^{-5}\\). For the second stage, we continue training for 20 epochs, adjusting the learning rate to \\(5 \\times 10^{-4}\\). Parameters are updated by Adam optimizers. All experiments are conducted on a single NVIDIA GeForce RTX 3090 GPU. More details are available in Appendix." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.469, + 0.807, + 0.485 + ], + "angle": 0, + "content": "4.2. Comparison with SOTA Methods" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.491, + 0.906, + 0.597 + ], + "angle": 0, + "content": "We compare our method against CLIP and several recent SOTA models. Among them, WinCLIP [19], VAND [7] and MVFA-AD [18] use original CLIP text encoder, and AnomalyCLIP [59] and AdaCLIP [6] incorporate learnable prompts. To ensure a fair comparison, we re-train models that are originally trained on different datasets to match the dataset settings of other approaches (detailed in Appendix)." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.599, + 0.906, + 0.809 + ], + "angle": 0, + "content": "Quantitative results are presented in Tab. 1 and Tab. 2. Although adapting only the patch feature with original text embeddings has made progress in AD, the superior performance of AA-CLIP highlights its effective disentanglement of anomaly-discriminative semantics, leading to further progress. Notably, even in data-limited situations, our method consistently demonstrates top performance. At the pixel level, with only 2 shots per class used for training, our method achieves improved average zero-shot performance compared to previous methods. With the full dataset, we set a new pixel-level SOTA with an AUROC of \\(93.4\\%\\). At the image level, our method is competitive with just 2 shots for training and establishes a new SOTA of \\(83.1\\%\\) with 64 shots per class." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.811, + 0.906, + 0.901 + ], + "angle": 0, + "content": "Unlike previous methods, our approach does not rely heavily on data resources to achieve top-tier performance. Comparison under different levels of data available, as shown in Fig. 5, reveals that our approach consistently outperforms other methods in general. Even with limited data, our model reaches competitive results, while other methods" + }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.945, + 0.516, + 0.956 + ], + "angle": 0, + "content": "4749" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.095, + 0.089, + 0.907, + 0.299 + ], + "angle": 0, + "content": "
| Domain | Dataset | CLIP* | WinCLIP* | VAND* | MVFA-AD | AnomalyCLIP* | AdaCLIP | Ours | Ours | Ours | Ours |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | Source | OpenCLIP | CVPR 2023 | CVPRw 2023 | CVPR 2024 | ICLR 2024 | ECCV 2024 | - | - | - | - |
| | Training shots | - | - | full | full | full | full | 2 | 16 | 64 | full |
| Industrial | BTAD | 30.6 | 32.8 | 91.1 | 90.1 | 93.3 | 90.8 | 92.8 | 94.4 | 96.5 | 97.0 |
| | MPDD | 62.1 | 95.2 | 94.9 | 94.5 | 96.2 | 96.6 | 96.3 | 96.5 | 96.3 | 96.7 |
| | MVTec-AD | 38.4 | 85.1 | 87.6 | 84.9 | 91.1 | 89.9 | 91.0 | 91.2 | 91.6 | 91.9 |
| | VisA | 46.6 | 79.6 | 94.2 | 93.4 | 95.4 | 95.5 | 93.4 | 93.8 | 94.0 | 95.5 |
| Medical | Brain MRI | 68.3 | 86.0 | 94.5 | 95.6 | 96.2 | 93.9 | 96.3 | 96.4 | 96.5 | 95.5 |
| | Liver CT | 90.5 | 96.2 | 95.6 | 96.8 | 93.9 | 94.5 | 97.3 | 97.7 | 97.7 | 97.8 |
| | Retina OCT | 21.3 | 80.6 | 88.5 | 90.9 | 92.6 | 88.5 | 94.2 | 95.1 | 94.4 | 95.5 |
| | ColonDB | 49.5 | 51.2 | 78.2 | 78.4 | 82.9 | 80.0 | 83.9 | 83.5 | 84.7 | 84.0 |
| | ClinicDB | 47.5 | 70.3 | 85.1 | 83.9 | 85.0 | 85.9 | 89.2 | 87.6 | 87.8 | 89.9 |
| | Kvasir | 44.6 | 69.7 | 80.3 | 81.9 | 81.9 | 86.4 | 82.1 | 84.6 | 85.2 | 87.2 |
| | CVC-300 | 49.9 | - | 92.8 | 82.6 | 95.4 | 92.9 | 96.0 | 97.4 | 96.0 | 96.4 |
| | Average | 49.9 | 74.7 | 89.3 | 88.5 | 91.3 | 90.4 | 92.0 | 92.6 | 92.8 | 93.4 |
" + }, + { + "type": "table_caption", + "bbox": [ + 0.09, + 0.301, + 0.907, + 0.346 + ], + "angle": 0, + "content": "Table 1. Pixel-level AUROC of zero-shot AD methods in Industrial and Medical domains. Method sources and the number of shots used for training are noted. Results of methods with * are copied from the papers or inferred from official weight. Best results are highlighted as first, second and third." + }, + { + "type": "table", + "bbox": [ + 0.095, + 0.354, + 0.907, + 0.523 + ], + "angle": 0, + "content": "
| Domain | Dataset | CLIP | VAND* | WinCLIP* | MVFA-AD | AnomalyCLIP* | AdaCLIP | Ours | Ours | Ours | Ours |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | Source | OpenCLIP | CVPRw 2023 | CVPR 2023 | CVPR 2024 | ICLR 2024 | ECCV 2024 | - | - | - | - |
| | Training shots | - | - | - | full | full | full | 2 | 16 | 64 | full |
| Industrial | BTAD | 73.6 | - | 68.2 | 94.3 | 85.3 | 90.9 | 88.0 | 90.9 | 94.7 | 94.8 |
| | MPDD | 73.0 | - | 63.6 | 70.9 | 73.7 | 72.1 | 63.6 | 78.3 | 75.7 | 75.1 |
| | MVTec-AD | 86.1 | - | 91.8 | 86.6 | 90.9 | 90.0 | 85.9 | 89.7 | 92.0 | 90.5 |
| | VisA | 66.4 | - | 78.0 | 76.5 | 82.1 | 84.3 | 78.4 | 84.0 | 84.1 | 84.6 |
| Medical | Brain MRI | 58.8 | - | 66.5 | 70.9 | 83.3 | 80.2 | 84.3 | 80.4 | 83.4 | 80.2 |
| | Liver CT | 54.7 | - | 64.2 | 63.0 | 61.6 | 64.2 | 69.4 | 68.1 | 69.2 | 69.7 |
| | Retina OCT | 65.6 | - | 42.5 | 77.3 | 75.7 | 82.7 | 77.4 | 81.0 | 82.9 | 82.7 |
| | Average | 68.3 | - | 67.8 | 77.1 | 78.4 | 80.6 | 78.1 | 81.8 | 83.1 | 82.5 |
" + }, + { + "type": "table_caption", + "bbox": [ + 0.09, + 0.525, + 0.907, + 0.569 + ], + "angle": 0, + "content": "Table 2. Image-level AUROC of zero-shot AD methods in Industrial and Medical domains. Method sources and the number of shots used for training are noted. Results of methods with * are copied from the papers or inferred from official weight. Best results are highlighted as first, second and third." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.586, + 0.483, + 0.632 + ], + "angle": 0, + "content": "display signs of underfitting. As data increases, our method maintains its lead, establishing a new SOTA at both pixel and image levels." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.643, + 0.228, + 0.658 + ], + "angle": 0, + "content": "4.3. Visualization" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.665, + 0.483, + 0.756 + ], + "angle": 0, + "content": "To illustrate the alignment intuitively, we present visualization examples in Fig. 6 with original configuration for previous works. Although previous methods with can detect anomalous regions, our AA-CLIP demonstrates fewer false-negative predictions in both industrial and medical domains, accurately highlighting the correct anomaly regions." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.767, + 0.272, + 0.783 + ], + "angle": 0, + "content": "4.4. Ablations Analysis" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.79, + 0.483, + 0.85 + ], + "angle": 0, + "content": "We conduct thorough ablation experiments of our refinement of both visual and text space, as shown in Tab. 3 and Fig. 7. The second row in Tab. 3, which mirrors the structure of VAND [7], serves as our baseline." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.856, + 0.484, + 0.903 + ], + "angle": 0, + "content": "Image Space: As shown in Tab. 3 line “2,” inserting the vallina linear adapter into transformer layers results in a significant decline in zero-shot performance, indicating the" + }, + { + "type": "table", + "bbox": [ + 0.518, + 0.584, + 0.905, + 0.702 + ], + "angle": 0, + "content": "
| | Method | Avg. AUROC (Pixel-Level) | Avg. AUROC (Image-Level) |
| --- | --- | --- | --- |
| | CLIP | 50.3 | 69.3 |
| Image | 1. + Linear Proj. (VAND [7]) | 88.9 | 69.3 |
| | 2. + Adapter | 48.9 (-40.0) | 53.4 (-15.9) |
| | 3. + Residual Adapter | 91.3 (+2.4) | 80.7 (+11.4) |
| Text | 4. + Residual Adapter | 92.1 (+3.2) | 82.6 (+13.3) |
| | 5. + Disentangle Loss | 92.7 (+3.8) | 83.3 (+14.0) |
" + }, + { + "type": "table_caption", + "bbox": [ + 0.513, + 0.704, + 0.907, + 0.788 + ], + "angle": 0, + "content": "Table 3. Ablation Study of Our Training Strategy with VisA-Trained 64-Shot Setup. Our contributions are bold. While VAND uses linear projectors to improve AD performance, incorporating Residual Adapters further refines patch feature adaptation. Moreover, integrating our Disentangle Loss yields the best overall results." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.804, + 0.906, + 0.864 + ], + "angle": 0, + "content": "damage of the original generalization ability of CLIP. Incorporating our Residual Adapters mitigates this issue (shown in line \"3\"). enhancing performance while preserving original information stored in CLIP." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.871, + 0.907, + 0.903 + ], + "angle": 0, + "content": "Text Space: The last two rows in Tab. 3 highlight the impact of our approach in equipping CLIP's encoder with" + }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.945, + 0.516, + 0.957 + ], + "angle": 0, + "content": "4750" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.12, + 0.09, + 0.283, + 0.356 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.288, + 0.09, + 0.451, + 0.356 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.365, + 0.483, + 0.421 + ], + "angle": 0, + "content": "Figure 5. Average Results (Top) and Results on BTAD (Bottom) of Different methods Trained on 2-, 16-, 64-shot per Class and Full Data of VisA. Our method shows high fitting efficiency, achieving strong results across all data scales." + }, + { + "type": "image", + "bbox": [ + 0.101, + 0.433, + 0.472, + 0.633 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.639, + 0.483, + 0.696 + ], + "angle": 0, + "content": "Figure 6. Visualization of Anomaly Localization Results of Original CLIP [42], AnomalyCLIP [59], VAND [7] and our AA-CLIP. Compared to previous methods, AA-CLIP demonstrates more reliable prediction capabilities in localizing anomaly." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.713, + 0.483, + 0.85 + ], + "angle": 0, + "content": "anomaly-aware semantics. Line \"4.\" validates that, with AA-CLIP, the model's ability to discriminate anomalies further improves, as the AA-CLIP's text encoder provides a more precise semantic foundation. Adding Disentangle Loss leads to an additional improvement (shown in Line \"5\"), especially at image-level, validating the necessity of independence between normal and anomaly anchors. These results underscore the crucial role of text space refinement in improved anomaly localization and classification." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.856, + 0.483, + 0.901 + ], + "angle": 0, + "content": "Two-Stage Training: To validate the necessity of two-stage training, we adapt both text and image encoders together within one stage (also adopted by AdaCLIP). As shown" + }, + { + "type": "image", + "bbox": [ + 0.525, + 0.095, + 0.892, + 0.234 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.513, + 0.243, + 0.907, + 0.286 + ], + "angle": 0, + "content": "Figure 7. Visualization of Text Space from One-Stage Training and from AdaCLIP. During one-stage training, class information collapses easily, leading to damaged zero-shot performance." 
+ }, + { + "type": "text", + "bbox": [ + 0.512, + 0.309, + 0.907, + 0.4 + ], + "angle": 0, + "content": "in Fig. 7, one-stage model can easily exaggerate anomaly semantics and forget class information embedded in CLIP, damaging the model's generalization ability. The two-stage training strategy allows controlled adaptation, preserving CLIP's class-relevant knowledge in one end while adapting the other, as shown in Fig. 3." + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.418, + 0.763, + 0.433 + ], + "angle": 0, + "content": "5. Conclusion and Discussion" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.445, + 0.907, + 0.581 + ], + "angle": 0, + "content": "To our knowledge, this is the first work to explicitly analyze the intrinsic Anomaly Unawareness problem in CLIP. To tackle this issue, we propose a simple yet effective two-stage training strategy to embed anomaly-aware information into CLIP, enabling clear disentanglement of anomaly representations across both seen and novel classes. By leveraging residual adapters, our method preserves CLIP's strong generalization ability, achieving outstanding zero-shot performance across multiple datasets." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.583, + 0.909, + 0.766 + ], + "angle": 0, + "content": "Our adapted AA-CLIP, developed through this two-stage adaptation strategy, reveals the potential of refining CLIP's feature space for improved performance in downstream applications. Beyond addressing anomaly unawareness, our work also provides a potential foundation for tackling other \"unawareness\" issues within CLIP. These may include limitations in context-awareness or specificity to domain-relevant nuances, suggesting further applications of our method in expanding CLIP's adaptability across diverse tasks. Additionally, we observe signs of overfitting with full-shot training, suggesting potential saturation during CLIP adaptation and warranting further investigation." + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.783, + 0.673, + 0.8 + ], + "angle": 0, + "content": "Acknowledgement" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.81, + 0.907, + 0.9 + ], + "angle": 0, + "content": "This work is supported by Natural Science Foundation of China under Grant 62271465, Suzhou Basic Research Program under Grant SYG202338, Open Fund Project of Guangdong Academy of Medical Sciences, China (No. YKY-KF202206), and Jiangsu Province Science Foundation for Youths (NO. BK20240464)." + }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.945, + 0.514, + 0.957 + ], + "angle": 0, + "content": "4751" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.093, + 0.091, + 0.188, + 0.106 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.115, + 0.484, + 0.156 + ], + "angle": 0, + "content": "[1] Jinan Bao, Hanshi Sun, Hanqiu Deng, Yinsheng He, Zhaoxiang Zhang, and Xingyu Li. Bmad: Benchmarks for medical anomaly detection, 2024. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.158, + 0.484, + 0.227 + ], + "angle": 0, + "content": "[2] Paul Bergmann, Michael Fauser, David Sattlegger, and Carsten Steger. Mvtec ad-a comprehensive real-world dataset for unsupervised anomaly detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9592-9600, 2019. 1, 3, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.228, + 0.483, + 0.269 + ], + "angle": 0, + "content": "[3] Jorge Bernal, Javier Sánchez, and Fernando Vilarino. 
Towards automatic polyp detection with a polyp appearance model. Pattern Recognition, 45(9):3166-3182, 2012. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.271, + 0.483, + 0.352 + ], + "angle": 0, + "content": "[4] Jorge Bernal, F Javier Sánchez, Gloria Fernández-Esparrach, Debora Gil, Cristina Rodríguez, and Fernando Vilarino. Wm-dova maps for accurate polyp highlighting in colonoscopy: Validation vs. saliency maps from physicians. Computerized medical imaging and graphics, 43:99-111, 2015. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.354, + 0.482, + 0.409 + ], + "angle": 0, + "content": "[5] Yunkang Cao, Xiaohao Xu, Yuqi Cheng, Chen Sun, Zongwei Du, Liang Gao, and Weiming Shen. Personalizing vision-language models with hybrid prompts for zero-shot anomaly detection. IEEE Transactions on Cybernetics, 2025. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.411, + 0.483, + 0.479 + ], + "angle": 0, + "content": "[6] Yunkang Cao, Jiangning Zhang, Luca Frittoli, Yuqi Cheng, Weiming Shen, and Giacomo Boracchi. Adaclip: Adapting clip with hybrid learnable prompts for zero-shot anomaly detection. In European Conference on Computer Vision, pages 55-72. Springer, 2025. 2, 3, 5, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.481, + 0.483, + 0.549 + ], + "angle": 0, + "content": "[7] Xuhai Chen, Yue Han, and Jiangning Zhang. April-gan: A zero-/few-shot anomaly classification and segmentation method for cvpr 2023 vand workshop challenge tracks 1&2: 1st place on zero-shot ad and 4th place on few-shot ad. arXiv preprint arXiv:2305.17382, 2023. 2, 3, 5, 6, 7, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.551, + 0.483, + 0.619 + ], + "angle": 0, + "content": "[8] Xuhai Chen, Jiangning Zhang, Guanzhong Tian, Haoyang He, Wuhao Zhang, Yabiao Wang, Chengjie Wang, and Yong Liu. Clip-ad: A language-guided staged dual-path model for zero-shot anomaly detection. arXiv preprint arXiv:2311.00453, 2023. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.621, + 0.483, + 0.675 + ], + "angle": 0, + "content": "[9] Jaemin Cho, Seunghyun Yoon, Ajinkya Kale, Franck Dernoncourt, Trung Bui, and Mohit Bansal. Fine-grained image captioning with clip reward. arXiv preprint arXiv:2205.13115, 2022. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.677, + 0.483, + 0.746 + ], + "angle": 0, + "content": "[10] Thomas Defard, Aleksandr Setkov, Angelique Loesch, and Romaric Audigier. Padim: a patch distribution modeling framework for anomaly detection and localization. In International Conference on Pattern Recognition, pages 475-489. Springer, 2021. 1, 2, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.747, + 0.483, + 0.802 + ], + "angle": 0, + "content": "[11] Hanqiu Deng and Xingyu Li. Anomaly detection via reverse distillation from one-class embedding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9737-9746, 2022. 1, 2, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.803, + 0.483, + 0.845 + ], + "angle": 0, + "content": "[12] Han Fang, Pengfei Xiong, Luhui Xu, and Yu Chen. Clip2video: Mastering video-text retrieval via image clip. arXiv preprint arXiv:2106.11097, 2021. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.846, + 0.483, + 0.901 + ], + "angle": 0, + "content": "[13] Tharindu Fernando, Harshala Gammulle, Simon Denman, Sridha Sridharan, and Clinton Fookes. Deep learning for medical anomaly detection-a survey. ACM Computing Surveys (CSUR), 54(7):1-37, 2021. 
1" + }, + { + "type": "list", + "bbox": [ + 0.094, + 0.115, + 0.484, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.093, + 0.906, + 0.161 + ], + "angle": 0, + "content": "[14] Peng Gao, Shijie Geng, Renrui Zhang, Teli Ma, Rongyao Fang, Yongfeng Zhang, Hongsheng Li, and Yu Qiao. Clip-adapter: Better vision-language models with feature adapters. International Journal of Computer Vision, 132(2): 581-595, 2024. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.164, + 0.907, + 0.233 + ], + "angle": 0, + "content": "[15] Denis Gudovskiy, Shun Ishizaka, and Kazuki Kozuka. Cflow-ad: Real-time unsupervised anomaly detection with localization via conditional normalizing flows. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 98-107, 2022. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.235, + 0.906, + 0.302 + ], + "angle": 0, + "content": "[16] Haoyang He, Jiangning Zhang, Hongxu Chen, Xuhai Chen, Zhishan Li, Xu Chen, Yabiao Wang, Chengjie Wang, and Lei Xie. Diad: A diffusion-based framework for multi-class anomaly detection. arXiv preprint arXiv:2312.06607, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.306, + 0.906, + 0.387 + ], + "angle": 0, + "content": "[17] Chaoqin Huang, Aofan Jiang, Jinghao Feng, Ya Zhang, Xin chao Wang, and Yanfeng Wang. Adapting visual-language models for generalizable anomaly detection in medical images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 11375-11385, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.39, + 0.906, + 0.473 + ], + "angle": 0, + "content": "[18] Chaoqin Huang, Aofan Jiang, Jinghao Feng, Ya Zhang, Xin chao Wang, and Yanfeng Wang. Adapting visual-language models for generalizable anomaly detection in medical images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11375-11385, 2024. 3, 5, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.476, + 0.906, + 0.545 + ], + "angle": 0, + "content": "[19] Jongheon Jeong, Yang Zou, Taewan Kim, Dongqing Zhang, Avinash Ravichandran, and Onkar Dabeer. Winclip: Zero/few-shot anomaly classification and segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 19606-19616, 2023. 3, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.547, + 0.906, + 0.629 + ], + "angle": 0, + "content": "[20] Stepan Jezek, Martin Jonak, Radim Burget, Pavel Dvorak, and Milos Skotak. Deep learning-based defect detection of metal parts: evaluating current methods in complex conditions. In 2021 13th International congress on ultra modern telecommunications and control systems and workshops (ICUMT), pages 66-71. IEEE, 2021. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.631, + 0.906, + 0.686 + ], + "angle": 0, + "content": "[21] Ruixiang Jiang, Lingbo Liu, and Changwen Chen. Clipcount: Towards text-guided zero-shot object counting. In Proceedings of the 31st ACM International Conference on Multimedia, pages 4535-4545, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.688, + 0.906, + 0.73 + ], + "angle": 0, + "content": "[22] Zeeshan Khan, Makarand Tapaswi, et al. Figclip: Fine-grained clip adaptation via densely annotated videos. arXiv preprint arXiv:2401.07669, 2024. 
2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.732, + 0.906, + 0.787 + ], + "angle": 0, + "content": "[23] Daehyun Kim, Sungyong Baik, and Tae Hyun Kim. Sanflow: Semantic-aware normalizing flow for anomaly detection. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.789, + 0.906, + 0.857 + ], + "angle": 0, + "content": "[24] Junnan Li, Ramprasaath Selvaraju, Akhilesh Gotmare, Shafiq Joty, Caiming Xiong, and Steven Chu Hong Hoi. Align before fuse: Vision and language representation learning with momentum distillation. Advances in neural information processing systems, 34:9694-9705, 2021. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.859, + 0.906, + 0.901 + ], + "angle": 0, + "content": "[25] Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In Interna" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.093, + 0.907, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.945, + 0.516, + 0.956 + ], + "angle": 0, + "content": "4752" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.126, + 0.092, + 0.482, + 0.119 + ], + "angle": 0, + "content": "tional conference on machine learning, pages 12888-12900. PMLR, 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.122, + 0.483, + 0.189 + ], + "angle": 0, + "content": "[26] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In International conference on machine learning, pages 19730-19742. PMLR, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.191, + 0.482, + 0.245 + ], + "angle": 0, + "content": "[27] Xin Li, Dongze Lian, Zhihe Lu, Jiawang Bai, Zhibo Chen, and Xinchao Wang. Graphadapter: Tuning vision-language models with dual knowledge graph. Advances in Neural Information Processing Systems, 36, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.247, + 0.482, + 0.3 + ], + "angle": 0, + "content": "[28] Yi Li, Hualiang Wang, Yiqun Duan, and Xiaomeng Li. Clip surgery for better explainability with enhancement in open-vocabulary tasks. arXiv preprint arXiv:2304.05653, 2023. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.302, + 0.482, + 0.384 + ], + "angle": 0, + "content": "[29] Yuqi Lin, Minghao Chen, Wenxiao Wang, Boxi Wu, Ke Li, Binbin Lin, Haifeng Liu, and Xiaofei He. Clip is also an efficient segmenter: A text-driven approach for weakly supervised semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15305-15314, 2023. 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.386, + 0.482, + 0.468 + ], + "angle": 0, + "content": "[30] Jie Liu, Yixiao Zhang, Jie-Neng Chen, Junfei Xiao, Yongyi Lu, Bennett A Landman, Yixuan Yuan, Alan Yuille, Yucheng Tang, and Zongwei Zhou. Clip-driven universal model for organ segmentation and tumor detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 21152-21164, 2023. 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.47, + 0.482, + 0.537 + ], + "angle": 0, + "content": "[31] Zhikang Liu, Yiming Zhou, Yuansheng Xu, and Zilei Wang. Simplenet: A simple network for image anomaly detection and localization. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20402-20411, 2023. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.539, + 0.482, + 0.594 + ], + "angle": 0, + "content": "[32] Ruiying Lu, YuJie Wu, Long Tian, Dongsheng Wang, Bo Chen, Xiyang Liu, and Ruimin Hu. Hierarchical vector quantized transformer for multi-class unsupervised anomaly detection. arXiv preprint arXiv:2310.14228, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.595, + 0.482, + 0.662 + ], + "angle": 0, + "content": "[33] Wenxin Ma, Qingsong Yao, Xiang Zhang, Zhelong Huang, Zihang Jiang, and S.Kevin Zhou. Towards accurate unified anomaly segmentation. In Proceedings of the Winter Conference on Applications of Computer Vision (WACV), pages 1342-1352, 2025. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.665, + 0.482, + 0.734 + ], + "angle": 0, + "content": "[34] Pankaj Mishra, Riccardo Verk, Daniele Fornasier, Claudio Piciarelli, and Gian Luca Foresti. Vt-adt: A vision transformer network for image anomaly detection and localization. In 2021 IEEE 30th International Symposium on Industrial Electronics (ISIE), pages 01–06. IEEE, 2021. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.735, + 0.482, + 0.774 + ], + "angle": 0, + "content": "[35] Ron Mokady, Amir Hertz, and Amit H Bermano. Clipcap: Clip prefix for image captioning. arXiv preprint arXiv:2111.09734, 2021. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.776, + 0.482, + 0.845 + ], + "angle": 0, + "content": "[36] Liliane Momeni, Mathilde Caron, Arsha Nagrani, Andrew Zisserman, and Cordelia Schmid. Verbs in action: Improving verb understanding in video-language models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 15579-15591, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.846, + 0.482, + 0.9 + ], + "angle": 0, + "content": "[37] Amin Karimi Monsefi, Kishore Prakash Sailaja, Ali Alilooee, Ser-Nam Lim, and Rajiv Ramnath. Detailclip: Detail-oriented clip for fine-grained tasks. arXiv preprint arXiv:2409.06809, 2024. 3" + }, + { + "type": "list", + "bbox": [ + 0.093, + 0.092, + 0.483, + 0.9 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.093, + 0.905, + 0.148 + ], + "angle": 0, + "content": "[38] Roni Paiss, Ariel Ephrat, Omer Tov, Shiran Zada, Inbar Mosseri, Michal Irani, and Tali Dekel. Teaching clip to count to ten. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3170-3180, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.149, + 0.905, + 0.216 + ], + "angle": 0, + "content": "[39] Or Patashnik, Zongze Wu, Eli Shechtman, Daniel Cohen-Or, and Dani Lischinski. Styleclip: Text-driven manipulation of stylegan imagery. In Proceedings of the IEEE/CVF international conference on computer vision, pages 2085–2094, 2021. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.218, + 0.905, + 0.327 + ], + "angle": 0, + "content": "[40] Konstantin Pogorelov, Kristin Ranheim Randel, Carsten Griwodz, Sigrun Losada Eskeland, Thomas de Lange, Dag Johansen, Concetto Spampinato, Duc-Tien Dang-Nguyen, Mathias Lux, Peter Thelin Schmidt, Michael Riegler, and Pål Halvorsen. Kvasir: A multi-class image dataset for computer aided gastrointestinal disease detection. In Proceedings of the 8th ACM on Multimedia Systems Conference, pages 164-169, New York, NY, USA, 2017. ACM. 
6" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.329, + 0.905, + 0.384 + ], + "angle": 0, + "content": "[41] Zhen Qu, Xian Tao, Mukesh Prasad, Fei Shen, Zhengtao Zhang, Xinyi Gong, and Guiguang Ding. Vcp-clip: A visual context prompting model for zero-shot anomaly segmentation. arXiv preprint arXiv:2407.12276, 2024. 2, 3, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.385, + 0.905, + 0.468 + ], + "angle": 0, + "content": "[42] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748-8763. PMLR, 2021. 2, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.469, + 0.905, + 0.537 + ], + "angle": 0, + "content": "[43] Karsten Roth, Latha Pemula, Joaquin Zepeda, Bernhard Schölkopf, Thomas Brox, and Peter Gehler. Towards total recall in industrial anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14318-14328, 2022. 2, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.539, + 0.905, + 0.621 + ], + "angle": 0, + "content": "[44] Thomas Schlegl, Philipp Seebock, Sebastian M Waldstein, Ursula Schmidt-Erfurth, and Georg Langs. Unsupervised anomaly detection with generative adversarial networks to guide marker discovery. In International conference on information processing in medical imaging, pages 146-157. Springer, 2017. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.622, + 0.905, + 0.677 + ], + "angle": 0, + "content": "[45] Sanjay Subramanian, William Merrill, Trevor Darrell, Matt Gardner, Sameer Singh, and Anna Rohrbach. Reclip: A strong zero-shot baseline for referring expression comprehension. arXiv preprint arXiv:2204.05991, 2022. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.679, + 0.905, + 0.747 + ], + "angle": 0, + "content": "[46] Yingtian Tang, Yutaro Yamada, Yoyo Zhang, and Ilker Yildirim. When are lemons purple? the concept association bias of vision-language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 14333-14348, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.748, + 0.905, + 0.816 + ], + "angle": 0, + "content": "[47] David Vázquez, Jorge Bernal, F Javier Sánchez, Gloria Fernández-Esparrach, Antonio M López, Adriana Romero, Michal Drozdzal, and Aaron Courville. A benchmark for endoluminal scene segmentation of colonoscopy images. Journal of healthcare engineering, 2017(1):4037190, 2017. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.818, + 0.905, + 0.859 + ], + "angle": 0, + "content": "[48] Guodong Wang, Shumin Han, Errui Ding, and Di Huang. Student-teacher feature pyramid matching for anomaly detection. arXiv preprint arXiv:2103.04257, 2021. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.86, + 0.905, + 0.901 + ], + "angle": 0, + "content": "[49] Zhaoqing Wang, Yu Lu, Qiang Li, Xunqiang Tao, Yandong Guo, Mingming Gong, and Tongliang Liu. Cris: Clipdriven referring image segmentation. 
In Proceedings of" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.093, + 0.905, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.946, + 0.515, + 0.956 + ], + "angle": 0, + "content": "4753" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.126, + 0.092, + 0.482, + 0.121 + ], + "angle": 0, + "content": "the IEEE/CVF conference on computer vision and pattern recognition, pages 11686-11695, 2022. 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.124, + 0.483, + 0.191 + ], + "angle": 0, + "content": "[50] Mengde Xu, Zheng Zhang, Fangyun Wei, Han Hu, and Xiang Bai. Side adapter network for open-vocabulary semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2945-2954, 2023. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.195, + 0.482, + 0.248 + ], + "angle": 0, + "content": "[51] Xingyi Yang and Xinchao Wang. Diffusion model as representation learner. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 18938-18949, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.254, + 0.482, + 0.307 + ], + "angle": 0, + "content": "[52] Qingsong Yao, Li Xiao, Peihang Liu, and S Kevin Zhou. Label-free segmentation of Covid-19 lesions in lung ct. IEEE transactions on medical imaging, 40(10):2808-2819, 2021. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.312, + 0.482, + 0.366 + ], + "angle": 0, + "content": "[53] Qingsong Yao, Zecheng He, Yuexiang Li, Yi Lin, Kai Ma, Yefeng Zheng, and S Kevin Zhou. Adversarial medical image with hierarchical feature hiding. IEEE Transactions on Medical Imaging, 43(4):1296-1307, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.37, + 0.482, + 0.425 + ], + "angle": 0, + "content": "[54] Xincheng Yao, Chongyang Zhang, Ruoqi Li, Jun Sun, and Zhenyu Liu. One-for-all: Proposal masked cross-class anomaly detection. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 4792-4800, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.428, + 0.482, + 0.482 + ], + "angle": 0, + "content": "[55] Zhiyuan You, Lei Cui, Yujun Shen, Kai Yang, Xin Lu, Yu Zheng, and Xinyi Le. A unified model for multi-class anomaly detection. Advances in Neural Information Processing Systems, 35:4571-4584, 2022. 2, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.486, + 0.482, + 0.553 + ], + "angle": 0, + "content": "[56] Zhiyuan You, Kai Yang, Wenhan Luo, Lei Cui, Yu Zheng, and Xinyi Le. Adtr: Anomaly detection transformer with feature reconstruction. In International Conference on Neural Information Processing, pages 298-310. Springer, 2022. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.558, + 0.482, + 0.626 + ], + "angle": 0, + "content": "[57] Renrui Zhang, Wei Zhang, Rongyao Fang, Peng Gao, Kunchang Li, Jifeng Dai, Yu Qiao, and Hongsheng Li. Tip-adapter: Training-free adaption of clip for few-shot classification. In European conference on computer vision, pages 493-510. Springer, 2022. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.63, + 0.482, + 0.699 + ], + "angle": 0, + "content": "[58] Xuan Zhang, Shiyu Li, Xi Li, Ping Huang, Jiulong Shan, and Ting Chen. Destseg: Segmentation guided denoising student-teacher for anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3914–3923, 2023. 
1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.702, + 0.482, + 0.756 + ], + "angle": 0, + "content": "[59] Qihang Zhou, Guansong Pang, Yu Tian, Shibo He, and Jiming Chen. Anomalyclip: Object-agnostic prompt learning for zero-shot anomaly detection. arXiv preprint arXiv:2310.18961, 2023. 3, 5, 6, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.76, + 0.482, + 0.828 + ], + "angle": 0, + "content": "[60] Ziqin Zhou, Yinjie Lei, Bowen Zhang, Lingqiao Liu, and Yifan Liu. Zegclip: Towards adapting clip for zero-shot semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11175-11185, 2023. 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.832, + 0.482, + 0.899 + ], + "angle": 0, + "content": "[61] Bo Zong, Qi Song, Martin Renqiang Min, Wei Cheng, Cristian Lumezanu, Daeki Cho, and Haifeng Chen. Deep autoencoding gaussian mixture model for unsupervised anomaly detection. In International conference on learning representations, 2018. 2" + }, + { + "type": "list", + "bbox": [ + 0.093, + 0.092, + 0.483, + 0.899 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.092, + 0.905, + 0.161 + ], + "angle": 0, + "content": "[62] Yang Zou, Jongheon Jeong, Latha Pemula, Dongqing Zhang, and Onkar Dabeer. Spot-the-difference self-supervised pretraining for anomaly detection and segmentation. In European Conference on Computer Vision, pages 392-408. Springer, 2022. 6" + }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.946, + 0.515, + 0.956 + ], + "angle": 0, + "content": "4754" + } + ] +] \ No newline at end of file diff --git a/2025/AA-CLIP_ Enhancing Zero-Shot Anomaly Detection via Anomaly-Aware CLIP/1a15f5c2-189a-4bf7-936d-11fcae85b257_origin.pdf b/2025/AA-CLIP_ Enhancing Zero-Shot Anomaly Detection via Anomaly-Aware CLIP/1a15f5c2-189a-4bf7-936d-11fcae85b257_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..ed571a6f7f57c990129a3b4c3a3eb1d413110d3e --- /dev/null +++ b/2025/AA-CLIP_ Enhancing Zero-Shot Anomaly Detection via Anomaly-Aware CLIP/1a15f5c2-189a-4bf7-936d-11fcae85b257_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ba2141ceaca14dac2da806efcd7c27dbeb81d810cd59a9bed1e75580d5823810 +size 2586543 diff --git a/2025/AA-CLIP_ Enhancing Zero-Shot Anomaly Detection via Anomaly-Aware CLIP/full.md b/2025/AA-CLIP_ Enhancing Zero-Shot Anomaly Detection via Anomaly-Aware CLIP/full.md new file mode 100644 index 0000000000000000000000000000000000000000..c9997dac6b2d5ed64370c1ad8d8a0512a8b86976 --- /dev/null +++ b/2025/AA-CLIP_ Enhancing Zero-Shot Anomaly Detection via Anomaly-Aware CLIP/full.md @@ -0,0 +1,355 @@ +# AA-CLIP: Enhancing Zero-Shot Anomaly Detection via Anomaly-Aware CLIP + +Wenxin Ma $^{1,2}$ Xu Zhang $^{1,2}$ Qingsong Yao $^{5}$ Fenghe Tang $^{1,2}$ Chenxu Wu $^{1,2}$ + +Yingtai Li $^{1,2}$ Rui Yan $^{1,2}$ Zihang Jiang $^{1,2*}$ S.Kevin Zhou $^{1,2,3,4*}$ + +1 School of Biomedical Engineering, Division of Life Sciences and Medicine, USTC + +$^{2}$ MIRACLE Center, Suzhou Institute for Advance Research, USTC + +3 Key Laboratory of Intelligent Information Processing of CAS, ICT, CAS + +$^{4}$ State Key Laboratory of Precision and Intelligent Chemistry, USTC + +5 Stanford University + +wxma@mail.ustc.edu.cn jzh0103@ustc.edu.cn s.kevin.zhou@gmail.com + +![](images/c262008dc6a5c352ce4c87a0c0b8d241c611605d740b4021175addac6a5ab112.jpg) +Features from Original CLIP +(Left) +Figure 1. 
(Left) CLIP's anomaly unawareness: Category-level image-text alignment in pre-training leaves CLIP with vague distinctions between anomaly and normal semantics and inaccurate patch-text alignment. (Middle) Our two-stage adaptation strategy: In Stage 1, anomaly and normal text features are disentangled as anchors in the text space; in Stage 2, patch-level visual features are trained to align to these anchors, forming Anomaly-Aware CLIP. (Right) Generalizable anomaly awareness: Our method equips CLIP with generalizable anomaly awareness for both known and unseen classes.

![](images/405427f90a30740df75eebe906d8559fa16b9540f06778b3baeee3d49ac5bb62.jpg)
Stage1: Disentangling Anomaly-Aware Text Anchors
(Middle)

![](images/199d6c14ef078069f92c1d73c20758998ba76787d4d6aa8c8ad9f11f37685ab5.jpg)
Stage2: Aligning Patch Features According to Text Anchors

![](images/f86c1c21560c68444a6a9dbb56d8d95195a458263ac431a206d76551bc0659ef.jpg)
Features from our Anomaly-Aware CLIP
(Right)

# Abstract

Anomaly detection (AD) identifies outliers for applications like defect and lesion detection. While CLIP shows promise for zero-shot AD tasks due to its strong generalization capabilities, its inherent Anomaly-Unawareness leads to limited discrimination between normal and abnormal features. To address this problem, we propose Anomaly-Aware CLIP (AA-CLIP), which enhances CLIP's anomaly discrimination ability in both text and visual spaces while preserving its generalization capability. AA-CLIP is achieved through a straightforward yet effective two-stage approach: it first creates anomaly-aware text anchors to differentiate normal and abnormal semantics clearly, then aligns patch-level visual features with these anchors for precise anomaly localization. This two-stage strategy, with the help of residual adapters, gradually adapts CLIP in a controlled manner, achieving effective AD while maintaining CLIP's class knowledge. Extensive experiments validate AA-CLIP as a resource-efficient solution for zero-shot AD tasks, achieving state-of-the-art results in industrial and medical applications. The code is available at https://github.com/Mwxinnn/AA-CLIP.

# 1. Introduction

Anomaly detection (AD) involves modeling the distribution of a dataset to identify outliers, such as defects in industrial products [2] or lesions in medical images [13]. Although previous AD frameworks [10, 11, 15, 23, 31, 58] effectively detect anomalies when sufficient labeled data is available for specific classes, their high resource demands often limit their generalization ability to novel and rare classes. This limitation is particularly challenging in real-world scenarios where collecting comprehensive labeled datasets for AD is often infeasible, necessitating the exploration of low-shot learning and transfer learning approaches.

The Contrastive Language-Image Pretraining (CLIP) model has emerged as a promising solution, demonstrating remarkable generalization capabilities across various zero-shot tasks [24-26, 42]. Building upon CLIP's success, several recent studies have adapted CLIP for few/zero-shot AD tasks by utilizing anomaly-related descriptions to guide the detection of anomalous regions. Specifically, the vision encoder is trained to map anomalous images to visual features that align more closely with text features of abnormal descriptions than with those of normal descriptions [29, 30, 49, 60].
Further works [6, 7, 17, 41] have focused on enhancing CLIP's patch-level feature representations to achieve better alignment with text features, resulting in improved anomaly localization performance.

These methods depend on text features that need to be anomaly-aware to effectively differentiate abnormalities. However, recent studies highlight CLIP's limitations in fine-grained semantic perception and reasoning [21, 22, 36, 38, 45, 46]. Upon exploring CLIP's text features for AD, we observe that while CLIP's text encoder effectively captures object-level information, it struggles to reliably distinguish between normal and abnormal semantics. As shown in the conceptual visualization in Fig. 1 (left) and the sampled examples in Fig. 2, CLIP has an intrinsic Anomaly-Unawareness problem: the overlap of normal and abnormal text features hampers the precision of text-guided anomaly detection. We argue that making CLIP anomaly-aware, by establishing clearer distinctions between normal and abnormal semantics in the text space, is essential for guiding the vision encoder to precisely detect and localize anomalies.

This observation drives us to improve CLIP-based zero-shot AD by enhancing anomaly discrimination in the text space, achieved with our method Anomaly-Aware CLIP (AA-CLIP), a CLIP model with anomaly-aware information encoded. AA-CLIP is implemented through a novel two-stage adaptation approach. In the first stage, AA-CLIP adapts the text encoder with a frozen visual encoder, creating "anchors" for anomaly-aware semantics within the text space for each trained class. As illustrated in Fig. 1 (middle), each class's text features are disentangled into distinct anchors with clear abnormality discrimination. Notably, this disentanglement also applies to novel, unseen classes, supporting effective zero-shot inference in AD tasks (refer to Fig. 1 (right)). In the second stage, AA-CLIP aligns patch-level visual features with these specially adapted text anchors, guiding CLIP's visual encoder to concentrate on anomaly-relevant regions. This two-stage approach ensures a focused and precise anomaly detection framework.

Importantly, as CLIP is extensively trained on massive data, we utilize simple-structured Residual Adapters in both stages to preserve its pre-trained knowledge. This design enables a controlled adaptation of CLIP while enhancing its capability to handle fine-grained AD tasks without sacrificing its generalization ability.

Our extensive experiments in both industrial and medical domains demonstrate that our straightforward approach equips CLIP with improved zero-shot AD ability, even in data-limited scenarios. By training with minimal samples, such as one normal and one anomalous sample (2-shot) per class, and testing across unseen datasets, our method achieves zero-shot performance comparable to other CLIP-based AD techniques. With only 64 shots of each class seen in the training set, our method reaches state-of-the-art (SOTA) results in cross-dataset zero-shot testing, validating our method's ability to maximize CLIP's potential for AD with minimal data requirements.

Our contributions are summarized as follows:

1. Anomaly-Aware CLIP with enhanced and generalizable anomaly-discriminative ability. We introduce AA-CLIP, which is made sensitive to anomalies sequentially in the text and visual spaces, encoding anomaly-aware information into the original CLIP.
2. Efficient adaptation using residual adapters. 
We implement simple residual adapters to boost zero-shot anomaly detection performance without compromising the model's generalization ability.
3. SOTA performance with high training efficiency. Our method achieves SOTA results across diverse datasets, showing robust anomaly detection capabilities even with limited training samples.

# 2. Related Work

Traditional Anomaly Detection in images involves modeling the normal data distribution to detect rare and diverse unexpected signals within visual data [44, 52, 53, 61]. Reconstruction-based [11, 16, 32, 33, 54, 56], augmentation-based [31, 44, 48, 55, 58] and discriminative [10, 15, 23, 31, 43, 61] methods are typically used to facilitate better modeling. Despite the substantial progress of traditional anomaly detection methods, their effectiveness relies heavily on a well-modeled normal data distribution. Without sufficient normal data, their ability to accurately detect anomalies is significantly reduced.

CLIP, trained on a vast amount of image-text data, leverages contrastive learning alongside powerful language models and visual feature encoders to capture robust concepts. This combination enables CLIP to achieve impressive zero-shot performance on image classification, as it can generalize well to new categories without requiring task-specific training [24-27, 42, 51]. More recently, numerous studies [9, 12, 35, 39] have explored ways to transfer the knowledge embedded in CLIP models to a variety of downstream tasks, yielding promising results in fields like image captioning, image-text retrieval, and image generation. These efforts demonstrate CLIP's versatility and potential to drive advancements across diverse applications.

Despite the rapid advancements achieved by CLIP, numerous studies have highlighted persistent limitations in the features it extracts. While CLIP demonstrates strong generalization across various tasks, it often struggles to capture nuanced details and essential spatial relationships, which are crucial for tasks demanding precise boundary delineation and fine-grained feature extraction. This limitation results in suboptimal performance in downstream applications, especially those that require high levels of detail, such as object detection, scene segmentation, or tasks in medical imaging [14, 29, 30, 37, 49, 50, 57, 60]. As a result, leveraging CLIP for fine-grained tasks frequently necessitates task-specific adaptations to bridge the gap between its generalized feature extraction and the precision required for specialized applications.

CLIP-based Anomaly Detection There have been several efforts to leverage CLIP for AD tasks. One of the pioneering approaches, WinCLIP [19], proposes a method for extracting and aggregating visual features from multiple levels to align with text features, demonstrating the potential of CLIP in this context. Subsequent research investigates various adaptation methods to bridge the gap between natural domains and the AD domain, resulting in performance improvements. For instance, [7, 8, 18] focus on refining visual features by employing adapters to enhance patch-level visual representations. However, these approaches often rely on text embeddings from the original CLIP model as soft supervision and overlook a critical limitation of CLIP in AD: its unclear distinction between anomalous and normal semantics, particularly within the text encoder, resulting in suboptimal performance.
Other works have employed prompt-learning-based methods [5, 6, 41, 59], introducing learnable embeddings into the text encoder to better represent abnormality. However, the class information in CLIP can be damaged, potentially degrading generalization, especially in data-limited and zero-shot settings.

Different from previous methods, we are the first to investigate CLIP's inherent limitation in capturing anomaly-aware information, specifically in differentiating between normal and anomalous semantics in text prompts. Rather than relying solely on the original anomaly-unaware text embeddings or unaltered feature spaces, our method refines the embeddings to actively incorporate anomaly-discriminative representations.

# 3. Method

# 3.1. Overview

# 3.1.1. Problem Formulation

Zero-shot AD models are trained to identify anomalous samples whose categories may be unseen in the training dataset. Specifically, given a training set $\mathcal{D}_{train}$ with normal or anomalous samples, the model is expected to learn both normal and abnormal patterns that are shared across different classes, in order to perform AD tasks on a series of different test datasets $\{\mathcal{D}_{test}^{1},\mathcal{D}_{test}^{2},\dots,\mathcal{D}_{test}^{n}\}$, where each $\mathcal{D}_{test}^{i}$ is distinct from $\mathcal{D}_{train}$. Image-level AD can be formally defined as a binary classification problem, where the model aims to classify samples $x\in \mathcal{D}$ as either normal ($y = 0$) or anomalous ($y = 1$). Anomaly segmentation extends this concept to the pixel level with a mask $S$, aiming to identify anomalous regions by highlighting pixels associated with anomalies.

![](images/1f4b294e7d4def9bcc526a78197fc3385e97f3cb55a77e48738270adecb8c5f1.jpg)

![](images/b75f5d93de627136e49c895ecdf83d74a7eab7e2e9f221c69e97c6c8bbd33721.jpg)

"This is a [ ] carpet."
| Semantics | Similarity | Probability ($\tau = 0.01$) |
| --- | --- | --- |
| broken | 0.18 | 0.22 |
| normal | 0.19 | 0.78 |

"This is a [ ] zipper."
| Semantics | Similarity | Probability ($\tau = 0.01$) |
| --- | --- | --- |
| broken | 0.20 | 0.38 |
| normal | 0.21 | 0.62 |

![](images/467ee38e87dd5ff7fa4576a42be09710e9d3a22d802523419eec73f0820be9d3.jpg)
Figure 2. (Top) Examples illustrating CLIP's Anomaly Unawareness. Despite the obvious anomalies present in the images, the image features mistakenly have higher similarities to normal descriptions than to anomaly descriptions. This problem is enlarged at a low temperature $\tau$. (Bottom) Text Feature Similarity Heatmap among Normal and Anomaly Descriptions: Original CLIP vs. After Text Adaptation. Red indicates high similarity. In original CLIP, normal features exhibit strong similarity with anomaly features, whereas text adaptation successfully separates them, clarifying the semantic distinctions between normal and anomaly descriptions.

# 3.1.2. Current Challenges

**Anomaly Unawareness in CLIP:** CLIP-based AD methods classify visual features as "anomalies" if they exhibit greater similarity to anomaly prompt embeddings than to normal prompt embeddings, thus requiring a well-defined boundary between these two kinds of prompts. However, in real applications, CLIP's text embeddings often lack the clear separability needed to reliably distinguish between normal and anomaly classes.

We observe that, despite the visible defects in example images from MVTec-AD [2], their features exhibit higher cosine similarity with "normal" prompts than with the correct "anomaly" descriptions (see Fig. 2 (top)), indicating CLIP's inaccurate semantic understanding. Without adaptation, there persists a high similarity between the normal and abnormal text embeddings of a single class, as shown in Fig. 2 (bottom), suggesting a potential entanglement of normal and anomaly semantics within the text space. We term this limitation Anomaly Unawareness and attribute it to the training process of CLIP: it is primarily trained on general, non-anomalous datasets and lacks specific guidance on defect detection. Consequently, it is challenging to rely on original CLIP embeddings to detect subtle or context-specific anomalies.

![](images/85d153d5a9c1ba27ae30deddc2e6b7643a16c6f2889c919f03a81988d171c738.jpg)
Figure 3. t-SNE Visualization of Text Features from Original CLIP vs. AA-CLIP. Each point represents a text feature encoded from a prompt. Original CLIP's normal and anomaly text features are intertwined, while our method effectively disentangles them. This disentanglement is generalizable to novel classes, validating the anomaly-awareness of our model.

This issue remains evident across different categories in our t-SNE analysis: as shown in Fig. 3 (top), only subtle separations are observed within an object cluster, where text embeddings for both normal and abnormal semantics are intermixed. This entangled pattern may potentially lead to anomaly-unaware text-image alignment, which reinforces the necessity of adapting CLIP to enhance its anomaly awareness.
**Embedding Adaptation Dilemma:** The discussion above renders the adaptation of CLIP essential for effective AD. However, since CLIP's embeddings are already optimized through extensive pretraining, the model can be susceptible to overfitting to a new dataset during adaptation. Converging to an overfitted solution minimizes intra-class distinctions on the training data, often at the expense of the feature separability needed for effective generalization to unseen data.

To address this, a carefully controlled refinement is crucial to preserve CLIP's generalization capabilities while enhancing its sensitivity to anomalies.

# 3.1.3. Overview of Our Solution

Motivated by Sec. 3.1.2, we propose Anomaly-Aware CLIP (AA-CLIP) with improved anomaly awareness. As shown in Fig. 4, AA-CLIP is trained through a two-stage training strategy that sequentially adapts the semantically rich text space and the detail-focused visual space, with the original CLIP parameters remaining frozen. In the first stage (see Fig. 4 (Top)), we incorporate Residual Adapters into the shallow layers of the text encoder, and the visual features from the fixed image encoder serve as a stable reference for optimization. A Disentangle Loss is proposed to enforce effective discrimination by ensuring independence between normal and anomaly embeddings. In the second stage, we integrate Residual Adapters into the shallow layers of the visual encoder to align patch-level features with the specially adapted text features from the fixed text encoder (see Fig. 4 (Bottom)). Ultimately, our AA-CLIP succeeds in equipping CLIP with anomaly awareness across seen and unseen classes, as shown in Fig. 3 (bottom).

# 3.2. AA-CLIP with Two-Stage Adaptation Strategy

# 3.2.1. Residual Adapter

To preserve CLIP's pre-trained knowledge while enabling targeted adaptation, we introduce lightweight Residual Adapters in the shallow layers (up to layer $K$) of both the text and vision encoders.

The output feature $x^{i} \in \mathbb{R}^{N \times d}$ of CLIP's $i$-th ($i \leq K$) transformer layer is fed into the $i$-th adapter, which outputs the adapted feature $x_{residual}^{i}$, as shown in Eq. (1),

$$
x_{\text{residual}}^{i} = \operatorname{Norm}\left(\operatorname{Act}\left(W^{i} x^{i}\right)\right), \tag{1}
$$

where $W^{i} \in \mathbb{R}^{d \times d}$ is the trainable linear weight of the $i$-th adapter, $Act(\cdot)$ is an activation function, and $Norm(\cdot)$ is a normalizing function. The original feature $x^{i}$ and the enhanced feature $x_{residual}^{i}$ are fused in a weighted manner, generating $x_{enhanced}^{i}$, the input to the next transformer layer, as shown in Eq. (2),

$$
x_{\text{enhanced}}^{i} = \lambda x_{\text{residual}}^{i} + (1 - \lambda) x^{i}, \tag{2}
$$

where $\lambda$ is a hyper-parameter controlling the residual ratio, which adjusts how strongly AD-specific knowledge is fused in, preserving the original CLIP's generalization ability while improving performance.
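To make this concrete, the following is a minimal PyTorch sketch of Eqs. (1)-(2). The text above specifies only a trainable linear weight with generic $Act(\cdot)$ and $Norm(\cdot)$, so the GELU activation, L2 normalization, and dimensions below are our assumptions:

```python
# Minimal sketch of the Residual Adapter (Eqs. (1)-(2)); activation,
# normalization, and dimensions are assumptions, not the exact recipe.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualAdapter(nn.Module):
    def __init__(self, dim: int, ratio: float = 0.1):
        super().__init__()
        self.linear = nn.Linear(dim, dim)  # trainable weight W^i
        self.ratio = ratio                 # residual ratio lambda

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Eq. (1): x_residual = Norm(Act(W x))
        x_residual = F.normalize(F.gelu(self.linear(x)), dim=-1)
        # Eq. (2): weighted fusion with the frozen branch
        return self.ratio * x_residual + (1 - self.ratio) * x

# Usage: wrap the output of the i-th frozen transformer layer (i <= K)
# before it is fed to layer i+1.
adapter = ResidualAdapter(dim=768)
tokens = torch.randn(1, 77, 768)  # e.g., a batch of text tokens
enhanced = adapter(tokens)        # same shape as the input
```

Because the adapter output is blended with weight $\lambda$ (set to 0.1 in the experiments), the frozen branch dominates and the pre-trained representation is only gently perturbed.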
# 3.2.2. Two-Stage Training Strategy

**Disentangling Anomaly-Aware Text Anchors:** In the first stage, our objective is to learn anomaly-discriminative text anchors by adapting the text encoder while keeping the image encoder fixed.

![](images/a4a610b1167236135a85d221e41e21c181b00e2ab31d2c098f0a54df24da772a.jpg)
Stage1: Disentangling Anomaly-Aware Text Anchors

![](images/b1ece9f7bcb30903f6b874d348410a737150e0d213c2bd10ec011036cf8438ca.jpg)
Stage2: Aligning Patch Features According to Text Anchors
Figure 4. The Two-Stage Training Pipeline of Anomaly-Aware CLIP. In the first stage, the text encoder of AA-CLIP is trained to identify anomaly-related semantics, helped by a disentangle loss. In the second stage, patch features are aligned with these text anchors. Both stages are achieved by integrating Residual Adapters into the shallow layers of CLIP's backbone. This controlled adaptation enables CLIP to effectively distinguish anomalies, which forms our Anomaly-Aware CLIP.

We incorporate Residual Adapters into the first $K_{T}$ layers of the CLIP text encoder, as illustrated in Fig. 4 (Top), and set the final projector in the text encoder to be learnable to facilitate improved alignment.

Using prompts designed to encapsulate both normal and anomalous semantics (as detailed in the Appendix), the text encoder generates the corresponding high-level embeddings. The average embeddings of the normal and anomaly prompts serve as our initial text anchors, denoted as $T_{N}$ and $T_{A} \in \mathbb{R}^{d}$, respectively. These anchors are refined by being aligned with visual features extracted from an enhanced CLIP visual encoder, following [28, 59]. Alignment is conducted at both the image and patch levels to incorporate both global and local semantics. By calculating the cosine similarity between these anchors and the image features $V_{image} \in \mathbb{R}^{d}$ or patch features $V_{patch} \in \mathbb{R}^{N \times d}$, as shown in Eq. (3),

$$
p_{cls} = \mathrm{CosSim}\left(V_{image}, [T_{N}, T_{A}]\right), \tag{3}
$$

$$
p_{seg}^{o} = \mathrm{CosSim}\left(V_{patch}, [T_{N}, T_{A}]\right),
$$

where $[\cdot, \cdot]$ denotes the concatenation operation, we obtain the classification prediction $p_{cls} \in \mathbb{R}^2$ and the segmentation prediction $p_{seg}^{o} \in \mathbb{R}^{N \times 2}$. The segmentation prediction $p_{seg}^{o}$ is then reshaped and upsampled to $p_{seg} \in \mathbb{R}^{H \times W \times 2}$ to align with the height $H$ and width $W$ of the segmentation mask $S$. Following previous works [6, 7, 18, 59], we compute the classification loss $\mathcal{L}_{cls}$ and segmentation loss $\mathcal{L}_{seg}$ to optimize parameters, as specified in Eq. (4). Specifically, the classification loss is a binary cross-entropy that compares classification predictions with ground-truth labels $y$, and the segmentation loss is a combination of dice loss and focal loss applied to segmentation predictions and the anomaly segmentation mask $S$:

$$
\mathcal{L}_{cls} = \mathrm{BCE}(p_{cls}, y),
$$

$$
\mathcal{L}_{seg} = \mathrm{Dice}(p_{seg}, S) + \mathrm{Focal}(p_{seg}, S), \tag{4}
$$

$$
\mathcal{L}_{align} = \mathcal{L}_{cls} + \mathcal{L}_{seg}.
$$
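The snippet below sketches Eqs. (3)-(4) in PyTorch. The simple soft-Dice and focal implementations (with focal exponent 2), and applying the losses at patch resolution rather than after upsampling, are assumptions made to keep the sketch short.

```python
import torch
import torch.nn.functional as F

def alignment_loss(v_image, v_patch, t_n, t_a, y, mask):
    """Sketch of Eqs. (3)-(4). v_image: (B, d) image features; v_patch: (B, N, d)
    patch features; t_n, t_a: (d,) text anchors; y: (B,) image labels;
    mask: (B, N) ground-truth masks flattened to patch resolution."""
    mask = mask.float()
    anchors = F.normalize(torch.stack([t_n, t_a]), dim=-1)      # [T_N, T_A]
    p_cls = F.normalize(v_image, dim=-1) @ anchors.T            # (B, 2), Eq. (3)
    p_seg = F.normalize(v_patch, dim=-1) @ anchors.T            # (B, N, 2)

    prob_cls = p_cls.softmax(dim=-1)[:, 1]                      # P(anomaly) per image
    prob_seg = p_seg.softmax(dim=-1)[..., 1]                    # P(anomaly) per patch

    l_cls = F.binary_cross_entropy(prob_cls, y.float())         # BCE term
    inter = (prob_seg * mask).sum()
    l_dice = 1 - (2 * inter + 1) / (prob_seg.sum() + mask.sum() + 1)  # soft Dice
    bce = F.binary_cross_entropy(prob_seg, mask, reduction="none")
    pt = torch.where(mask.bool(), prob_seg, 1 - prob_seg)
    l_focal = ((1 - pt) ** 2 * bce).mean()                      # focal term, exponent 2 assumed
    return l_cls + l_dice + l_focal                             # L_align
```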
To enhance the separation between normal and anomaly text embeddings, we introduce a Disentangle Loss that encourages orthogonality between $T_{N}$ and $T_{A}$ to minimize their correlation, as in Eq. (5):

$$
\mathcal{L}_{dis} = |\langle T_{N}, T_{A} \rangle|^{2}. \tag{5}
$$

The Disentangle Loss $\mathcal{L}_{dis}$ is incorporated into the alignment loss $\mathcal{L}_{align}$ as a regularization term, weighted by a factor $\gamma$, which forms the total loss, as in Eq. (6):

$$
\mathcal{L}_{total} = \mathcal{L}_{align} + \gamma \mathcal{L}_{dis}. \tag{6}
$$
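In code, Eqs. (5)-(6) reduce to a few lines; this sketch assumes the anchors are already L2-normalized, so their inner product is exactly the cosine similarity being suppressed.

```python
import torch

def disentangle_loss(t_n: torch.Tensor, t_a: torch.Tensor) -> torch.Tensor:
    """Eq. (5): squared inner product between the two (normalized) anchors;
    minimizing it pushes T_N and T_A toward orthogonality."""
    return torch.dot(t_n, t_a) ** 2

# Stage-1 objective, Eq. (6), with gamma = 0.1 as in Sec. 4.1:
# l_total = l_align + 0.1 * disentangle_loss(t_n, t_a)
```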
In this stage, the distinction between normal and anomaly semantics is embedded into CLIP's text encoder while its original object-recognition capability is preserved. Figure 3 indicates that this anomaly awareness is robust and generalizes to novel classes.

**Aligning Patch Features According to Text Anchors:** Anomaly-aware semantic anchors can facilitate the adaptation of patch features, thereby improving the effectiveness and generalizability of anomaly localization. To align patch features with the anchors from the previous stage, we introduce trainable Residual Adapters within the initial $K_{I}$ layers of the CLIP visual encoder.

Features of multiple granularities are utilized to enhance segmentation [7, 19, 59]. Specifically, as shown in Fig. 4 (bottom), intermediate output features $F^i$ are extracted at four distinct granularities. These multi-granularity features are then projected to match the channel dimension of the text anchors via trainable projectors $\mathrm{Proj}_i(\cdot)$, yielding $V_{patch}^i$ at four distinct levels of granularity. The aggregated output $V_{patch}$ is computed by summing the individual $V_{patch}^i$ outputs, as in Eq. (7):

$$
V_{patch}^{i} = \mathrm{Proj}_{i}\left(F^{i}\right), \quad i \in \{1, 2, 3, 4\},
$$

$$
V_{patch} = \sum_{i=1}^{4} V_{patch}^{i}. \tag{7}
$$

The cosine similarity scores between the aggregated $V_{patch}$ and the text anchors are calculated to generate patch-level predictions as in Eq. (3), resulting in the prediction maps.
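A minimal sketch of Eq. (7) follows; the feature width (1024 for a ViT-L/14 trunk) and embedding dimension (768) are assumptions based on the backbone named in Sec. 4.1, and plain linear layers stand in for the projectors.

```python
import torch
import torch.nn as nn

class MultiGranularityPatchHead(nn.Module):
    """Sketch of Eq. (7): project intermediate features from four encoder depths
    to the text-anchor dimension and sum them. Dimensions are illustrative."""

    def __init__(self, feat_dim: int = 1024, embed_dim: int = 768):
        super().__init__()
        self.projs = nn.ModuleList(nn.Linear(feat_dim, embed_dim) for _ in range(4))

    def forward(self, feats):
        # feats: list of four (B, N, feat_dim) tensors, e.g. from layers 6/12/18/24
        v_patch_levels = [proj(f) for proj, f in zip(self.projs, feats)]  # V_patch^i
        return torch.stack(v_patch_levels).sum(dim=0)                     # V_patch
```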
During training, alignment is guided by the loss function defined in Eq. (4), facilitating both global and local alignment. During inference, anomaly prediction maps and the corresponding anomaly scores are derived by comparing the similarity scores of visual features against the normal and anomaly text embeddings.

# 4. Experiments

# 4.1. Experiment Setups

**Datasets** We evaluate our model on 11 widely used benchmarks, following previous AD works [6, 7, 18, 19, 59], with distinct foreground objects spanning a variety of modalities, including photography, endoscopy, CT, MRI, and OCT. For the industrial domain, we use MVTec-AD [2], VisA [62], BTAD [34] and MPDD [20]. For the medical domain, we use brain MRI, liver CT and retina OCT from BMAD [1], and four colon polyp detection datasets with different views (CVC-ClinicDB [4], CVC-ColonDB [3], Kvasir-SEG [40] and CVC-300 [47]). Each dataset has both image-level labels and pixel-level masks for evaluation.

We train our model on a real-world industrial AD dataset, VisA [62], whose objects differ from those in the other datasets. Results on VisA are obtained using MVTec-AD as the training dataset. To demonstrate adaptation efficiency, we conduct training under various data levels: 2-shot per class, 16-shot per class, 64-shot per class, and full-shot. The corresponding number of samples is randomly selected from each class, while maintaining a consistent 1:1 ratio between normal and anomaly samples.

**Metrics** Following [6, 7, 10, 11, 18, 19, 43, 55], we use the Area Under the Receiver Operating Characteristic Curve (AUROC) as the metric. We compute AUROC at both the image and pixel levels to comprehensively assess the model's effectiveness in detecting and localizing anomalies.
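Both metrics are standard ROC-AUC computations; the sketch below uses scikit-learn, with the array shapes as illustrative assumptions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def image_and_pixel_auroc(image_scores, image_labels, anomaly_maps, masks):
    """Image-level AUROC over per-image anomaly scores, and pixel-level AUROC
    over flattened anomaly maps vs. ground-truth masks.
    image_scores, image_labels: (num_images,); anomaly_maps, masks: (num_images, H, W)."""
    img_auroc = roc_auc_score(np.asarray(image_labels), np.asarray(image_scores))
    px_auroc = roc_auc_score(
        np.asarray(masks).reshape(-1).astype(int),
        np.asarray(anomaly_maps).reshape(-1),
    )
    return img_auroc, px_auroc
```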
**Implementation Details** Following [6, 7, 41, 59], we use OpenCLIP with the ViT-L/14 architecture as the backbone, and input images are resized to $518 \times 518$. All parameters of CLIP remain frozen. We set $\lambda$ to 0.1, $K_{T}$ to 3, $K_{I}$ to 6, and $\gamma$ to 0.1. For multi-level feature extraction, we utilize outputs from the 6-th, 12-th, 18-th, and 24-th layers of the visual encoder to compose the overall output. In the first stage, we train the model for 5 epochs with a learning rate of $1 \times 10^{-5}$. In the second stage, we continue training for 20 epochs, adjusting the learning rate to $5 \times 10^{-4}$. Parameters are updated with Adam optimizers. All experiments are conducted on a single NVIDIA GeForce RTX 3090 GPU. More details are available in the Appendix.
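The two-stage schedule above can be summarized by the skeleton below; the module and loader names are hypothetical, and only adapter/projector parameters are handed to the optimizer since the CLIP backbone stays frozen.

```python
import torch

def run_stage(trainable_modules, loader, loss_fn, epochs, lr):
    # Only adapter/projector parameters are optimized; the CLIP backbone is frozen.
    params = [p for m in trainable_modules for p in m.parameters()]
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        for batch in loader:
            opt.zero_grad()
            loss_fn(batch).backward()
            opt.step()

# Stage 1: adapt the text side (text adapters + final projector), image encoder fixed.
# run_stage([text_adapters, text_projector], train_loader, stage1_loss, epochs=5, lr=1e-5)
# Stage 2: adapt the visual side against the now-fixed text anchors.
# run_stage([visual_adapters, patch_projectors], train_loader, stage2_loss, epochs=20, lr=5e-4)
```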
# 4.2. Comparison with SOTA Methods

We compare our method against CLIP and several recent SOTA models. Among them, WinCLIP [19], VAND [7] and MVFA-AD [18] use the original CLIP text encoder, while AnomalyCLIP [59] and AdaCLIP [6] incorporate learnable prompts. To ensure a fair comparison, we re-train models that were originally trained on different datasets to match the dataset settings of the other approaches (detailed in the Appendix).

Quantitative results are presented in Tab. 1 and Tab. 2. Although adapting only the patch features while keeping the original text embeddings has made progress in AD, the superior performance of AA-CLIP highlights its effective disentanglement of anomaly-discriminative semantics, leading to further progress. Notably, even in data-limited situations, our method consistently demonstrates top performance. At the pixel level, with only 2 shots per class used for training, our method achieves improved average zero-shot performance compared to previous methods. With the full dataset, we set a new pixel-level SOTA with an AUROC of $93.4\%$. At the image level, our method is competitive with just 2 shots for training and establishes a new SOTA of $83.1\%$ with 64 shots per class.

Unlike previous methods, our approach does not rely heavily on data resources to achieve top-tier performance. A comparison under different levels of data availability, as shown in Fig. 5, reveals that our approach consistently outperforms other methods in general. Even with limited data, our model reaches competitive results, while other methods display signs of underfitting. As data increases, our method maintains its lead, establishing a new SOTA at both pixel and image levels.

| Domain | Dataset | CLIP* | WinCLIP* | VAND* | MVFA-AD | AnomalyCLIP* | AdaCLIP | Ours (2) | Ours (16) | Ours (64) | Ours (full) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| | Source | OpenCLIP | CVPR 2023 | CVPRw 2023 | CVPR 2024 | ICLR 2024 | ECCV 2024 | - | - | - | - |
| | Available training shots | - | - | full | full | full | full | 2 | 16 | 64 | full |
| Industrial | BTAD | 30.6 | 32.8 | 91.1 | 90.1 | 93.3 | 90.8 | 92.8 | 94.4 | 96.5 | 97.0 |
| | MPDD | 62.1 | 95.2 | 94.9 | 94.5 | 96.2 | 96.6 | 96.3 | 96.5 | 96.3 | 96.7 |
| | MVTec-AD | 38.4 | 85.1 | 87.6 | 84.9 | 91.1 | 89.9 | 91.0 | 91.2 | 91.6 | 91.9 |
| | VisA | 46.6 | 79.6 | 94.2 | 93.4 | 95.4 | 95.5 | 93.4 | 93.8 | 94.0 | 95.5 |
| Medical | Brain MRI | 68.3 | 86.0 | 94.5 | 95.6 | 96.2 | 93.9 | 96.3 | 96.4 | 96.5 | 95.5 |
| | Liver CT | 90.5 | 96.2 | 95.6 | 96.8 | 93.9 | 94.5 | 97.3 | 97.7 | 97.7 | 97.8 |
| | Retina OCT | 21.3 | 80.6 | 88.5 | 90.9 | 92.6 | 88.5 | 94.2 | 95.1 | 94.4 | 95.5 |
| | ColonDB | 49.5 | 51.2 | 78.2 | 78.4 | 82.9 | 80.0 | 83.9 | 83.5 | 84.7 | 84.0 |
| | ClinicDB | 47.5 | 70.3 | 85.1 | 83.9 | 85.0 | 85.9 | 89.2 | 87.6 | 87.8 | 89.9 |
| | Kvasir | 44.6 | 69.7 | 80.3 | 81.9 | 81.9 | 86.4 | 82.1 | 84.6 | 85.2 | 87.2 |
| | CVC-300 | 49.9 | - | 92.8 | 82.6 | 95.4 | 92.9 | 96.0 | 97.4 | 96.0 | 96.4 |
| | Average | 49.9 | 74.7 | 89.3 | 88.5 | 91.3 | 90.4 | 92.0 | 92.6 | 92.8 | 93.4 |

Table 1. Pixel-level AUROC of zero-shot AD methods in the Industrial and Medical domains. Method sources and the number of shots used for training are noted. Results of methods marked with * are copied from the papers or inferred from official weights. Best results are highlighted as first, second and third.
| Domain | Dataset | CLIP | WinCLIP* | MVFA-AD | AnomalyCLIP* | AdaCLIP | Ours (2) | Ours (16) | Ours (64) | Ours (full) |
|---|---|---|---|---|---|---|---|---|---|---|
| | Source | OpenCLIP | CVPR 2023 | CVPR 2024 | ICLR 2024 | ECCV 2024 | - | - | - | - |
| | Available training shots | - | - | full | full | full | 2 | 16 | 64 | full |
| Industrial | BTAD | 73.6 | 68.2 | 94.3 | 85.3 | 90.9 | 88.0 | 90.9 | 94.7 | 94.8 |
| | MPDD | 73.0 | 63.6 | 70.9 | 73.7 | 72.1 | 63.6 | 78.3 | 75.7 | 75.1 |
| | MVTec-AD | 86.1 | 91.8 | 86.6 | 90.9 | 90.0 | 85.9 | 89.7 | 92.0 | 90.5 |
| | VisA | 66.4 | 78.0 | 76.5 | 82.1 | 84.3 | 78.4 | 84.0 | 84.1 | 84.6 |
| Medical | Brain MRI | 58.8 | 66.5 | 70.9 | 83.3 | 80.2 | 84.3 | 80.4 | 83.4 | 80.2 |
| | Liver CT | 54.7 | 64.2 | 63.0 | 61.6 | 64.2 | 69.4 | 68.1 | 69.2 | 69.7 |
| | Retina OCT | 65.6 | 42.5 | 77.3 | 75.7 | 82.7 | 77.4 | 81.0 | 82.9 | 82.7 |
| | Average | 68.3 | 67.8 | 77.1 | 78.4 | 80.6 | 78.1 | 81.8 | 83.1 | 82.5 |

Table 2. Image-level AUROC of zero-shot AD methods in the Industrial and Medical domains. Method sources and the number of shots used for training are noted. Results of methods marked with * are copied from the papers or inferred from official weights. Best results are highlighted as first, second and third.

# 4.3. Visualization

To illustrate the alignment intuitively, we present visualization examples in Fig. 6, with previous works run in their original configurations. Although previous methods can detect anomalous regions, our AA-CLIP demonstrates fewer false-negative predictions in both industrial and medical domains, accurately highlighting the correct anomaly regions.

# 4.4. Ablation Analysis

We conduct thorough ablation experiments on our refinement of both the visual and text spaces, as shown in Tab. 3 and Fig. 7. The second row in Tab. 3, which mirrors the structure of VAND [7], serves as our baseline.
| | Method | Avg. Pixel-Level AUROC | Avg. Image-Level AUROC |
|---|---|---|---|
| | CLIP | 50.3 | 69.3 |
| Image | 1. + Linear Proj. (VAND [7]) | 88.9 | 69.3 |
| | 2. + Adapter | 48.9 (-40.0) | 53.4 (-15.9) |
| | 3. + **Residual Adapter** | 91.3 (+2.4) | 80.7 (+11.4) |
| Text | 4. + **Residual Adapter** | 92.1 (+3.2) | 82.6 (+13.3) |
| | 5. + **Disentangle Loss** | 92.7 (+3.8) | 83.3 (+14.0) |
Table 3. Ablation Study of Our Training Strategy with the VisA-Trained 64-Shot Setup. Our contributions are bold. While VAND uses linear projectors to improve AD performance, incorporating Residual Adapters further refines patch feature adaptation. Moreover, integrating our Disentangle Loss yields the best overall results.

**Image Space:** As shown in Tab. 3 line "2", inserting a vanilla linear adapter into the transformer layers results in a significant decline in zero-shot performance, indicating damage to the original generalization ability of CLIP. Incorporating our Residual Adapters mitigates this issue (line "3"), enhancing performance while preserving the original information stored in CLIP.

**Text Space:** The last two rows in Tab. 3 highlight the impact of our approach in equipping CLIP's text encoder with anomaly-aware semantics. Line "4" validates that, with AA-CLIP, the model's ability to discriminate anomalies further improves, as AA-CLIP's text encoder provides a more precise semantic foundation. Adding the Disentangle Loss leads to an additional improvement (line "5"), especially at the image level, validating the necessity of independence between the normal and anomaly anchors. These results underscore the crucial role of text space refinement in improving anomaly localization and classification.

![](images/7ed1b930e4e1be716f6dffd2599518a4b01f6650e935501bb949bd4c8306b1b1.jpg)
Figure 5. Average Results (Top) and Results on BTAD (Bottom) of Different Methods Trained on 2-, 16-, 64-shot per Class and Full Data of VisA. Our method shows high fitting efficiency, achieving strong results across all data scales.

![](images/4bd703a8e390087c44ea087922519b276bf75f577a1ad93c32da10a282ef7e44.jpg)

![](images/54bc6d0e000fbbc5adb9f62b15ea2babb5449548126f172d62748e47bc6428f7.jpg)
Figure 6. Visualization of Anomaly Localization Results of Original CLIP [42], AnomalyCLIP [59], VAND [7] and our AA-CLIP. Compared to previous methods, AA-CLIP demonstrates more reliable prediction capabilities in localizing anomalies.

**Two-Stage Training:** To validate the necessity of two-stage training, we adapt both the text and image encoders together within one stage (a scheme also adopted by AdaCLIP). As shown in Fig. 7, the one-stage model can easily exaggerate anomaly semantics and forget the class information embedded in CLIP, damaging the model's generalization ability. The two-stage training strategy allows controlled adaptation, preserving CLIP's class-relevant knowledge at one end while adapting the other, as shown in Fig. 3.

![](images/b940ce809bb20f102d5ab0c763d469f6ba1df9bd728863b2e2e5aa8edcb03ef5.jpg)
Figure 7. Visualization of the Text Space from One-Stage Training and from AdaCLIP. During one-stage training, class information collapses easily, leading to damaged zero-shot performance.

# 5. Conclusion and Discussion

To our knowledge, this is the first work to explicitly analyze the intrinsic Anomaly Unawareness problem in CLIP. To tackle this issue, we propose a simple yet effective two-stage training strategy to embed anomaly-aware information into CLIP, enabling clear disentanglement of anomaly representations across both seen and novel classes. By leveraging residual adapters, our method preserves CLIP's strong generalization ability, achieving outstanding zero-shot performance across multiple datasets.

Our adapted AA-CLIP, developed through this two-stage adaptation strategy, reveals the potential of refining CLIP's feature space for improved performance in downstream applications. Beyond addressing anomaly unawareness, our work also provides a potential foundation for tackling other "unawareness" issues within CLIP.
These may include limitations in context-awareness or specificity to domain-relevant nuances, suggesting further applications of our method in expanding CLIP's adaptability across diverse tasks. Additionally, we observe signs of overfitting with full-shot training, suggesting potential saturation during CLIP adaptation and warranting further investigation. + +# Acknowledgement + +This work is supported by Natural Science Foundation of China under Grant 62271465, Suzhou Basic Research Program under Grant SYG202338, Open Fund Project of Guangdong Academy of Medical Sciences, China (No. YKY-KF202206), and Jiangsu Province Science Foundation for Youths (NO. BK20240464). + +# References + +[1] Jinan Bao, Hanshi Sun, Hanqiu Deng, Yinsheng He, Zhaoxiang Zhang, and Xingyu Li. Bmad: Benchmarks for medical anomaly detection, 2024. 6 +[2] Paul Bergmann, Michael Fauser, David Sattlegger, and Carsten Steger. Mvtec ad-a comprehensive real-world dataset for unsupervised anomaly detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9592-9600, 2019. 1, 3, 6 +[3] Jorge Bernal, Javier Sánchez, and Fernando Vilarino. Towards automatic polyp detection with a polyp appearance model. Pattern Recognition, 45(9):3166-3182, 2012. 6 +[4] Jorge Bernal, F Javier Sánchez, Gloria Fernández-Esparrach, Debora Gil, Cristina Rodríguez, and Fernando Vilarino. Wm-dova maps for accurate polyp highlighting in colonoscopy: Validation vs. saliency maps from physicians. Computerized medical imaging and graphics, 43:99-111, 2015. 6 +[5] Yunkang Cao, Xiaohao Xu, Yuqi Cheng, Chen Sun, Zongwei Du, Liang Gao, and Weiming Shen. Personalizing vision-language models with hybrid prompts for zero-shot anomaly detection. IEEE Transactions on Cybernetics, 2025. 3 +[6] Yunkang Cao, Jiangning Zhang, Luca Frittoli, Yuqi Cheng, Weiming Shen, and Giacomo Boracchi. Adaclip: Adapting clip with hybrid learnable prompts for zero-shot anomaly detection. In European Conference on Computer Vision, pages 55-72. Springer, 2025. 2, 3, 5, 6 +[7] Xuhai Chen, Yue Han, and Jiangning Zhang. April-gan: A zero-/few-shot anomaly classification and segmentation method for cvpr 2023 vand workshop challenge tracks 1&2: 1st place on zero-shot ad and 4th place on few-shot ad. arXiv preprint arXiv:2305.17382, 2023. 2, 3, 5, 6, 7, 8 +[8] Xuhai Chen, Jiangning Zhang, Guanzhong Tian, Haoyang He, Wuhao Zhang, Yabiao Wang, Chengjie Wang, and Yong Liu. Clip-ad: A language-guided staged dual-path model for zero-shot anomaly detection. arXiv preprint arXiv:2311.00453, 2023. 3 +[9] Jaemin Cho, Seunghyun Yoon, Ajinkya Kale, Franck Dernoncourt, Trung Bui, and Mohit Bansal. Fine-grained image captioning with clip reward. arXiv preprint arXiv:2205.13115, 2022. 2 +[10] Thomas Defard, Aleksandr Setkov, Angelique Loesch, and Romaric Audigier. Padim: a patch distribution modeling framework for anomaly detection and localization. In International Conference on Pattern Recognition, pages 475-489. Springer, 2021. 1, 2, 6 +[11] Hanqiu Deng and Xingyu Li. Anomaly detection via reverse distillation from one-class embedding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9737-9746, 2022. 1, 2, 6 +[12] Han Fang, Pengfei Xiong, Luhui Xu, and Yu Chen. Clip2video: Mastering video-text retrieval via image clip. arXiv preprint arXiv:2106.11097, 2021. 2 +[13] Tharindu Fernando, Harshala Gammulle, Simon Denman, Sridha Sridharan, and Clinton Fookes. Deep learning for medical anomaly detection-a survey. 
ACM Computing Surveys (CSUR), 54(7):1-37, 2021. 1 + +[14] Peng Gao, Shijie Geng, Renrui Zhang, Teli Ma, Rongyao Fang, Yongfeng Zhang, Hongsheng Li, and Yu Qiao. Clip-adapter: Better vision-language models with feature adapters. International Journal of Computer Vision, 132(2): 581-595, 2024. 3 +[15] Denis Gudovskiy, Shun Ishizaka, and Kazuki Kozuka. Cflow-ad: Real-time unsupervised anomaly detection with localization via conditional normalizing flows. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 98-107, 2022. 1, 2 +[16] Haoyang He, Jiangning Zhang, Hongxu Chen, Xuhai Chen, Zhishan Li, Xu Chen, Yabiao Wang, Chengjie Wang, and Lei Xie. Diad: A diffusion-based framework for multi-class anomaly detection. arXiv preprint arXiv:2312.06607, 2023. 2 +[17] Chaoqin Huang, Aofan Jiang, Jinghao Feng, Ya Zhang, Xin chao Wang, and Yanfeng Wang. Adapting visual-language models for generalizable anomaly detection in medical images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 11375-11385, 2024. 2 +[18] Chaoqin Huang, Aofan Jiang, Jinghao Feng, Ya Zhang, Xin chao Wang, and Yanfeng Wang. Adapting visual-language models for generalizable anomaly detection in medical images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11375-11385, 2024. 3, 5, 6 +[19] Jongheon Jeong, Yang Zou, Taewan Kim, Dongqing Zhang, Avinash Ravichandran, and Onkar Dabeer. Winclip: Zero/few-shot anomaly classification and segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 19606-19616, 2023. 3, 6 +[20] Stepan Jezek, Martin Jonak, Radim Burget, Pavel Dvorak, and Milos Skotak. Deep learning-based defect detection of metal parts: evaluating current methods in complex conditions. In 2021 13th International congress on ultra modern telecommunications and control systems and workshops (ICUMT), pages 66-71. IEEE, 2021. 6 +[21] Ruixiang Jiang, Lingbo Liu, and Changwen Chen. Clipcount: Towards text-guided zero-shot object counting. In Proceedings of the 31st ACM International Conference on Multimedia, pages 4535-4545, 2023. 2 +[22] Zeeshan Khan, Makarand Tapaswi, et al. Figclip: Fine-grained clip adaptation via densely annotated videos. arXiv preprint arXiv:2401.07669, 2024. 2 +[23] Daehyun Kim, Sungyong Baik, and Tae Hyun Kim. Sanflow: Semantic-aware normalizing flow for anomaly detection. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. 1, 2 +[24] Junnan Li, Ramprasaath Selvaraju, Akhilesh Gotmare, Shafiq Joty, Caiming Xiong, and Steven Chu Hong Hoi. Align before fuse: Vision and language representation learning with momentum distillation. Advances in neural information processing systems, 34:9694-9705, 2021. 2 +[25] Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In Interna + +tional conference on machine learning, pages 12888-12900. PMLR, 2022. +[26] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In International conference on machine learning, pages 19730-19742. PMLR, 2023. 2 +[27] Xin Li, Dongze Lian, Zhihe Lu, Jiawang Bai, Zhibo Chen, and Xinchao Wang. Graphadapter: Tuning vision-language models with dual knowledge graph. Advances in Neural Information Processing Systems, 36, 2024. 
2 +[28] Yi Li, Hualiang Wang, Yiqun Duan, and Xiaomeng Li. Clip surgery for better explainability with enhancement in open-vocabulary tasks. arXiv preprint arXiv:2304.05653, 2023. 5 +[29] Yuqi Lin, Minghao Chen, Wenxiao Wang, Boxi Wu, Ke Li, Binbin Lin, Haifeng Liu, and Xiaofei He. Clip is also an efficient segmenter: A text-driven approach for weakly supervised semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15305-15314, 2023. 2, 3 +[30] Jie Liu, Yixiao Zhang, Jie-Neng Chen, Junfei Xiao, Yongyi Lu, Bennett A Landman, Yixuan Yuan, Alan Yuille, Yucheng Tang, and Zongwei Zhou. Clip-driven universal model for organ segmentation and tumor detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 21152-21164, 2023. 2, 3 +[31] Zhikang Liu, Yiming Zhou, Yuansheng Xu, and Zilei Wang. Simplenet: A simple network for image anomaly detection and localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20402-20411, 2023. 1, 2 +[32] Ruiying Lu, YuJie Wu, Long Tian, Dongsheng Wang, Bo Chen, Xiyang Liu, and Ruimin Hu. Hierarchical vector quantized transformer for multi-class unsupervised anomaly detection. arXiv preprint arXiv:2310.14228, 2023. 2 +[33] Wenxin Ma, Qingsong Yao, Xiang Zhang, Zhelong Huang, Zihang Jiang, and S.Kevin Zhou. Towards accurate unified anomaly segmentation. In Proceedings of the Winter Conference on Applications of Computer Vision (WACV), pages 1342-1352, 2025. 2 +[34] Pankaj Mishra, Riccardo Verk, Daniele Fornasier, Claudio Piciarelli, and Gian Luca Foresti. Vt-adt: A vision transformer network for image anomaly detection and localization. In 2021 IEEE 30th International Symposium on Industrial Electronics (ISIE), pages 01–06. IEEE, 2021. 6 +[35] Ron Mokady, Amir Hertz, and Amit H Bermano. Clipcap: Clip prefix for image captioning. arXiv preprint arXiv:2111.09734, 2021. 2 +[36] Liliane Momeni, Mathilde Caron, Arsha Nagrani, Andrew Zisserman, and Cordelia Schmid. Verbs in action: Improving verb understanding in video-language models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 15579-15591, 2023. 2 +[37] Amin Karimi Monsefi, Kishore Prakash Sailaja, Ali Alilooee, Ser-Nam Lim, and Rajiv Ramnath. Detailclip: Detail-oriented clip for fine-grained tasks. arXiv preprint arXiv:2409.06809, 2024. 3 + +[38] Roni Paiss, Ariel Ephrat, Omer Tov, Shiran Zada, Inbar Mosseri, Michal Irani, and Tali Dekel. Teaching clip to count to ten. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3170-3180, 2023. 2 +[39] Or Patashnik, Zongze Wu, Eli Shechtman, Daniel Cohen-Or, and Dani Lischinski. Styleclip: Text-driven manipulation of stylegan imagery. In Proceedings of the IEEE/CVF international conference on computer vision, pages 2085–2094, 2021. 2 +[40] Konstantin Pogorelov, Kristin Ranheim Randel, Carsten Griwodz, Sigrun Losada Eskeland, Thomas de Lange, Dag Johansen, Concetto Spampinato, Duc-Tien Dang-Nguyen, Mathias Lux, Peter Thelin Schmidt, Michael Riegler, and Pål Halvorsen. Kvasir: A multi-class image dataset for computer aided gastrointestinal disease detection. In Proceedings of the 8th ACM on Multimedia Systems Conference, pages 164-169, New York, NY, USA, 2017. ACM. 6 +[41] Zhen Qu, Xian Tao, Mukesh Prasad, Fei Shen, Zhengtao Zhang, Xinyi Gong, and Guiguang Ding. Vcp-clip: A visual context prompting model for zero-shot anomaly segmentation. 
arXiv preprint arXiv:2407.12276, 2024. 2, 3, 6 +[42] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748-8763. PMLR, 2021. 2, 8 +[43] Karsten Roth, Latha Pemula, Joaquin Zepeda, Bernhard Schölkopf, Thomas Brox, and Peter Gehler. Towards total recall in industrial anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14318-14328, 2022. 2, 6 +[44] Thomas Schlegl, Philipp Seebock, Sebastian M Waldstein, Ursula Schmidt-Erfurth, and Georg Langs. Unsupervised anomaly detection with generative adversarial networks to guide marker discovery. In International conference on information processing in medical imaging, pages 146-157. Springer, 2017. 2 +[45] Sanjay Subramanian, William Merrill, Trevor Darrell, Matt Gardner, Sameer Singh, and Anna Rohrbach. Reclip: A strong zero-shot baseline for referring expression comprehension. arXiv preprint arXiv:2204.05991, 2022. 2 +[46] Yingtian Tang, Yutaro Yamada, Yoyo Zhang, and Ilker Yildirim. When are lemons purple? the concept association bias of vision-language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 14333-14348, 2023. 2 +[47] David Vázquez, Jorge Bernal, F Javier Sánchez, Gloria Fernández-Esparrach, Antonio M López, Adriana Romero, Michal Drozdzal, and Aaron Courville. A benchmark for endoluminal scene segmentation of colonoscopy images. Journal of healthcare engineering, 2017(1):4037190, 2017. 6 +[48] Guodong Wang, Shumin Han, Errui Ding, and Di Huang. Student-teacher feature pyramid matching for anomaly detection. arXiv preprint arXiv:2103.04257, 2021. 2 +[49] Zhaoqing Wang, Yu Lu, Qiang Li, Xunqiang Tao, Yandong Guo, Mingming Gong, and Tongliang Liu. Cris: Clipdriven referring image segmentation. In Proceedings of + +the IEEE/CVF conference on computer vision and pattern recognition, pages 11686-11695, 2022. 2, 3 +[50] Mengde Xu, Zheng Zhang, Fangyun Wei, Han Hu, and Xiang Bai. Side adapter network for open-vocabulary semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2945-2954, 2023. 3 +[51] Xingyi Yang and Xinchao Wang. Diffusion model as representation learner. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 18938-18949, 2023. 2 +[52] Qingsong Yao, Li Xiao, Peihang Liu, and S Kevin Zhou. Label-free segmentation of Covid-19 lesions in lung ct. IEEE transactions on medical imaging, 40(10):2808-2819, 2021. 2 +[53] Qingsong Yao, Zecheng He, Yuexiang Li, Yi Lin, Kai Ma, Yefeng Zheng, and S Kevin Zhou. Adversarial medical image with hierarchical feature hiding. IEEE Transactions on Medical Imaging, 43(4):1296-1307, 2023. 2 +[54] Xincheng Yao, Chongyang Zhang, Ruoqi Li, Jun Sun, and Zhenyu Liu. One-for-all: Proposal masked cross-class anomaly detection. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 4792-4800, 2023. 2 +[55] Zhiyuan You, Lei Cui, Yujun Shen, Kai Yang, Xin Lu, Yu Zheng, and Xinyi Le. A unified model for multi-class anomaly detection. Advances in Neural Information Processing Systems, 35:4571-4584, 2022. 2, 6 +[56] Zhiyuan You, Kai Yang, Wenhan Luo, Lei Cui, Yu Zheng, and Xinyi Le. Adtr: Anomaly detection transformer with feature reconstruction. 
In International Conference on Neural Information Processing, pages 298-310. Springer, 2022. 2
[57] Renrui Zhang, Wei Zhang, Rongyao Fang, Peng Gao, Kunchang Li, Jifeng Dai, Yu Qiao, and Hongsheng Li. Tip-adapter: Training-free adaption of clip for few-shot classification. In European conference on computer vision, pages 493-510. Springer, 2022. 3
[58] Xuan Zhang, Shiyu Li, Xi Li, Ping Huang, Jiulong Shan, and Ting Chen. Destseg: Segmentation guided denoising student-teacher for anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3914-3923, 2023. 1, 2
[59] Qihang Zhou, Guansong Pang, Yu Tian, Shibo He, and Jiming Chen. Anomalyclip: Object-agnostic prompt learning for zero-shot anomaly detection. arXiv preprint arXiv:2310.18961, 2023. 3, 5, 6, 8
[60] Ziqin Zhou, Yinjie Lei, Bowen Zhang, Lingqiao Liu, and Yifan Liu. Zegclip: Towards adapting clip for zero-shot semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11175-11185, 2023. 2, 3
[61] Bo Zong, Qi Song, Martin Renqiang Min, Wei Cheng, Cristian Lumezanu, Daeki Cho, and Haifeng Chen. Deep autoencoding gaussian mixture model for unsupervised anomaly detection. In International conference on learning representations, 2018. 2
[62] Yang Zou, Jongheon Jeong, Latha Pemula, Dongqing Zhang, and Onkar Dabeer. Spot-the-difference self-supervised pretraining for anomaly detection and segmentation. In European Conference on Computer Vision, pages 392-408. Springer, 2022. 6
"type": "text", + "content": " Qingsong Yao" + }, + { + "bbox": [ + 82, + 141, + 529, + 157 + ], + "type": "inline_equation", + "content": "^{5}" + }, + { + "bbox": [ + 82, + 141, + 529, + 157 + ], + "type": "text", + "content": " Fenghe Tang" + }, + { + "bbox": [ + 82, + 141, + 529, + 157 + ], + "type": "inline_equation", + "content": "^{1,2}" + }, + { + "bbox": [ + 82, + 141, + 529, + 157 + ], + "type": "text", + "content": " Chenxu Wu" + }, + { + "bbox": [ + 82, + 141, + 529, + 157 + ], + "type": "inline_equation", + "content": "^{1,2}" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 126, + 157, + 488, + 172 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 126, + 157, + 488, + 172 + ], + "spans": [ + { + "bbox": [ + 126, + 157, + 488, + 172 + ], + "type": "text", + "content": "Yingtai Li" + }, + { + "bbox": [ + 126, + 157, + 488, + 172 + ], + "type": "inline_equation", + "content": "^{1,2}" + }, + { + "bbox": [ + 126, + 157, + 488, + 172 + ], + "type": "text", + "content": " Rui Yan" + }, + { + "bbox": [ + 126, + 157, + 488, + 172 + ], + "type": "inline_equation", + "content": "^{1,2}" + }, + { + "bbox": [ + 126, + 157, + 488, + 172 + ], + "type": "text", + "content": " Zihang Jiang" + }, + { + "bbox": [ + 126, + 157, + 488, + 172 + ], + "type": "inline_equation", + "content": "^{1,2*}" + }, + { + "bbox": [ + 126, + 157, + 488, + 172 + ], + "type": "text", + "content": " S.Kevin Zhou" + }, + { + "bbox": [ + 126, + 157, + 488, + 172 + ], + "type": "inline_equation", + "content": "^{1,2,3,4*}" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 100, + 171, + 510, + 185 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 100, + 171, + 510, + 185 + ], + "spans": [ + { + "bbox": [ + 100, + 171, + 510, + 185 + ], + "type": "text", + "content": "1 School of Biomedical Engineering, Division of Life Sciences and Medicine, USTC" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 140, + 185, + 469, + 198 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 140, + 185, + 469, + 198 + ], + "spans": [ + { + "bbox": [ + 140, + 185, + 469, + 198 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 140, + 185, + 469, + 198 + ], + "type": "text", + "content": " MIRACLE Center, Suzhou Institute for Advance Research, USTC" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 126, + 198, + 484, + 213 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 126, + 198, + 484, + 213 + ], + "spans": [ + { + "bbox": [ + 126, + 198, + 484, + 213 + ], + "type": "text", + "content": "3 Key Laboratory of Intelligent Information Processing of CAS, ICT, CAS" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 138, + 213, + 472, + 228 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 213, + 472, + 228 + ], + "spans": [ + { + "bbox": [ + 138, + 213, + 472, + 228 + ], + "type": "inline_equation", + "content": "^{4}" + }, + { + "bbox": [ + 138, + 213, + 472, + 228 + ], + "type": "text", + "content": " State Key Laboratory of Precision and Intelligent Chemistry, USTC" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 253, + 228, + 357, + 241 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 253, + 228, + 357, + 241 + ], + "spans": [ + { + "bbox": [ + 253, + 228, + 357, + 241 + ], + "type": "text", + "content": "5 Stanford University" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 129, + 243, + 477, + 255 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 129, + 
243, + 477, + 255 + ], + "spans": [ + { + "bbox": [ + 129, + 243, + 477, + 255 + ], + "type": "text", + "content": "wxma@mail.ustc.edu.cn jzh0103@ustc.edu.cn s.kevin.zhou@gmail.com" + } + ] + } + ], + "index": 10 + }, + { + "type": "image", + "bbox": [ + 56, + 274, + 175, + 396 + ], + "blocks": [ + { + "bbox": [ + 56, + 274, + 175, + 396 + ], + "lines": [ + { + "bbox": [ + 56, + 274, + 175, + 396 + ], + "spans": [ + { + "bbox": [ + 56, + 274, + 175, + 396 + ], + "type": "image", + "image_path": "c262008dc6a5c352ce4c87a0c0b8d241c611605d740b4021175addac6a5ab112.jpg" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 78, + 396, + 159, + 405 + ], + "lines": [ + { + "bbox": [ + 78, + 396, + 159, + 405 + ], + "spans": [ + { + "bbox": [ + 78, + 396, + 159, + 405 + ], + "type": "text", + "content": "Features from Original CLIP" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 104, + 410, + 130, + 421 + ], + "lines": [ + { + "bbox": [ + 104, + 410, + 130, + 421 + ], + "spans": [ + { + "bbox": [ + 104, + 410, + 130, + 421 + ], + "type": "text", + "content": "(Left)" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 54, + 426, + 555, + 481 + ], + "lines": [ + { + "bbox": [ + 54, + 426, + 555, + 481 + ], + "spans": [ + { + "bbox": [ + 54, + 426, + 555, + 481 + ], + "type": "text", + "content": "Figure 1. (Left) CLIP's anomaly unawareness: Category-level image-text alignment in pre-training leads to CLIP's vague distinctions in anomaly/normal semantics and inaccurate patch-text alignment. (Middle) Our two-stage adaptation strategy: In Stage1, anomaly and normal text features are disentangled as anchors in text space; in Stage2, patch-level visual features are trained to align to these anchors, forming Anomaly-Aware CLIP. (Right) Generalizable anomaly awareness: Our method enables CLIP with generalizable anomaly awareness for both known and unseen classes." 
+ } + ] + } + ], + "index": 22, + "angle": 0, + "type": "image_caption" + } + ], + "index": 11 + }, + { + "type": "image", + "bbox": [ + 176, + 274, + 298, + 395 + ], + "blocks": [ + { + "bbox": [ + 176, + 274, + 298, + 395 + ], + "lines": [ + { + "bbox": [ + 176, + 274, + 298, + 395 + ], + "spans": [ + { + "bbox": [ + 176, + 274, + 298, + 395 + ], + "type": "image", + "image_path": "405427f90a30740df75eebe906d8559fa16b9540f06778b3baeee3d49ac5bb62.jpg" + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 186, + 395, + 292, + 411 + ], + "lines": [ + { + "bbox": [ + 186, + 395, + 292, + 411 + ], + "spans": [ + { + "bbox": [ + 186, + 395, + 292, + 411 + ], + "type": "text", + "content": "Stage1: Disentangling Anomaly-Aware Text Anchors" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 280, + 410, + 318, + 422 + ], + "lines": [ + { + "bbox": [ + 280, + 410, + 318, + 422 + ], + "spans": [ + { + "bbox": [ + 280, + 410, + 318, + 422 + ], + "type": "text", + "content": "(Middle)" + } + ] + } + ], + "index": 18, + "angle": 0, + "type": "image_caption" + } + ], + "index": 14 + }, + { + "type": "image", + "bbox": [ + 298, + 274, + 419, + 395 + ], + "blocks": [ + { + "bbox": [ + 298, + 274, + 419, + 395 + ], + "lines": [ + { + "bbox": [ + 298, + 274, + 419, + 395 + ], + "spans": [ + { + "bbox": [ + 298, + 274, + 419, + 395 + ], + "type": "image", + "image_path": "199d6c14ef078069f92c1d73c20758998ba76787d4d6aa8c8ad9f11f37685ab5.jpg" + } + ] + } + ], + "index": 16, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 312, + 395, + 399, + 411 + ], + "lines": [ + { + "bbox": [ + 312, + 395, + 399, + 411 + ], + "spans": [ + { + "bbox": [ + 312, + 395, + 399, + 411 + ], + "type": "text", + "content": "Stage2: Aligning Patch Features According to Text Anchors" + } + ] + } + ], + "index": 17, + "angle": 0, + "type": "image_caption" + } + ], + "index": 16 + }, + { + "type": "image", + "bbox": [ + 426, + 274, + 548, + 395 + ], + "blocks": [ + { + "bbox": [ + 426, + 274, + 548, + 395 + ], + "lines": [ + { + "bbox": [ + 426, + 274, + 548, + 395 + ], + "spans": [ + { + "bbox": [ + 426, + 274, + 548, + 395 + ], + "type": "image", + "image_path": "f86c1c21560c68444a6a9dbb56d8d95195a458263ac431a206d76551bc0659ef.jpg" + } + ] + } + ], + "index": 19, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 429, + 396, + 544, + 404 + ], + "lines": [ + { + "bbox": [ + 429, + 396, + 544, + 404 + ], + "spans": [ + { + "bbox": [ + 429, + 396, + 544, + 404 + ], + "type": "text", + "content": "Features from our Anomaly-Aware CLIP" + } + ] + } + ], + "index": 20, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 472, + 409, + 504, + 422 + ], + "lines": [ + { + "bbox": [ + 472, + 409, + 504, + 422 + ], + "spans": [ + { + "bbox": [ + 472, + 409, + 504, + 422 + ], + "type": "text", + "content": "(Right)" + } + ] + } + ], + "index": 21, + "angle": 0, + "type": "image_caption" + } + ], + "index": 19 + }, + { + "bbox": [ + 151, + 491, + 200, + 503 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 151, + 491, + 200, + 503 + ], + "spans": [ + { + "bbox": [ + 151, + 491, + 200, + 503 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 54, + 516, + 297, + 696 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 516, + 297, + 696 + ], + "spans": [ + { + "bbox": [ + 54, + 516, + 297, + 696 + ], + "type": "text", + "content": "Anomaly detection 
(AD) identifies outliers for applications like defect and lesion detection. While CLIP shows promise for zero-shot AD tasks due to its strong generalization capabilities, its inherent Anomaly-Unawareness leads to limited discrimination between normal and abnormal features. To address this problem, we propose Anomaly-Aware CLIP (AA-CLIP), which enhances CLIP's anomaly discrimination ability in both text and visual spaces while preserving its generalization capability. AA-CLIP is achieved through a straightforward yet effective two-stage approach: it first creates anomaly-aware text anchors to differentiate normal and abnormal semantics clearly, then aligns patch-level visual features with these anchors for precise anomaly localization. This two-stage strategy, with the help of residual adapters, gradually adapts CLIP in a controlled man" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 313, + 492, + 555, + 564 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 492, + 555, + 564 + ], + "spans": [ + { + "bbox": [ + 313, + 492, + 555, + 564 + ], + "type": "text", + "content": "ner, achieving effective AD while maintaining CLIP's class knowledge. Extensive experiments validate AA-CLIP as a resource-efficient solution for zero-shot AD tasks, achieving state-of-the-art results in industrial and medical applications. The code is available at https://github.com/Mwxinnn/AA-CLIP." + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 314, + 578, + 395, + 590 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 578, + 395, + 590 + ], + "spans": [ + { + "bbox": [ + 314, + 578, + 395, + 590 + ], + "type": "text", + "content": "1. Introduction" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 312, + 594, + 555, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 594, + 555, + 715 + ], + "spans": [ + { + "bbox": [ + 312, + 594, + 555, + 715 + ], + "type": "text", + "content": "Anomaly detection (AD) involves modeling the distribution of a dataset to identify outliers, such as defects in industrial products [2] or lesions in medical images [13]. Despite that previous AD frameworks [10, 11, 15, 23, 31, 58] effectively detect anomalies when sufficient labeled data is available for specific classes, their high resource demands often limit their generalization ability to novel and rare classes. This limitation is particularly challenging in real-world scenarios where collecting comprehensive labeled datasets for AD is often infeasible, necessitating the exploration of low-shot" + } + ] + } + ], + "index": 27 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "spans": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "text", + "content": "CVF" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "spans": [ + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "type": "text", + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 703, + 144, + 713 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 703, + 144, + 713 + ], + "spans": [ + { + "bbox": [ + 67, + 703, + 144, + 713 + ], + "type": "text", + "content": "*Corresponding author." + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "text", + "content": "4744" + } + ] + } + ], + "index": 29 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 227, + 84 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 227, + 84 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 227, + 84 + ], + "type": "text", + "content": "learning and transfer learning approaches." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 86, + 296, + 253 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 86, + 296, + 253 + ], + "spans": [ + { + "bbox": [ + 55, + 86, + 296, + 253 + ], + "type": "text", + "content": "Contrastive Language-Image Pretraining (CLIP) model has emerged as a promising solution, demonstrating remarkable generalization capabilities across various zero-shot tasks [24-26, 42]. Building upon CLIP's success, several recent studies have adapted CLIP for few/zero-shot AD tasks by utilizing anomaly-related descriptions to guide the detection of anomalous regions. Specifically, the vision encoder is trained to map anomaly images to visual features that align more closely with text features of abnormal descriptions than with those of normal descriptions [29, 30, 49, 60]. Further works [6, 7, 17, 41] have focused on enhancing CLIP's patch-level feature representations to achieve better alignment with text features, resulting in improved anomaly localization performance." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 255, + 296, + 447 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 255, + 296, + 447 + ], + "spans": [ + { + "bbox": [ + 55, + 255, + 296, + 447 + ], + "type": "text", + "content": "These methods depend on text features that need to be anomaly-aware to effectively differentiate abnormalities. However, recent studies highlight CLIP's limitations in fine-grained semantic perception and reasoning [21, 22, 36, 38, 45, 46]. Upon exploring CLIP's texture features for AD, we observe that while CLIP's text encoder effectively captures object-level information, it struggles to reliably distinguish between normal and abnormal semantics. As shown in conceptual visualization Fig. 1(left) and sampled examples in Fig. 2, CLIP has the intrinsic Anomaly-Unawareness problem: the overlap of normal and abnormal texture features hampers the precision of text-guided anomaly detection. We argue that making CLIP anomaly-aware — by establishing clearer distinctions between normal and abnormal semantics in the text space — is essential for guiding the vision encoder to precisely detect and localize anomalies." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 449, + 295, + 664 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 449, + 295, + 664 + ], + "spans": [ + { + "bbox": [ + 55, + 449, + 295, + 664 + ], + "type": "text", + "content": "This observation drives us to improve CLIP-based zero-shot AD through enhancing anomaly discrimination in text space, achieved with our method Anomaly-Aware CLIP (AA-CLIP) — a CLIP model with anomaly-aware information encoded. AA-CLIP is implemented through a novel two-stage adaptation approach. In the first stage, AA-CLIP adapts the text encoder with frozen visual encoder, creating \"anchors\" for anomaly-aware semantics within the text space for each trained class. As illustrated in Fig. 1(middle), each class's text features are disentangled to distinct anchors, with clear abnormality discrimination. Notably, this disentanglement also applies to novel, unseen classes, supporting effective zero-shot inference in AD tasks (refer to Fig. 1(right)). In the second stage, AA-CLIP aligns patch-level visual features with these specially adapted texture anchors, guiding CLIP's visual encoder to concentrate on anomaly-relevant regions. This two-stage approach ensures a focused and precise anomaly detection framework." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 665, + 296, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 665, + 296, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 665, + 296, + 714 + ], + "type": "text", + "content": "Importantly, as CLIP is extensively trained on massive data, to preserve its pre-trained knowledge, we utilize simple-structured Residual Adapters in both stages. This design enables a controlled adaptation of CLIP while en" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 313, + 72, + 553, + 96 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 72, + 553, + 96 + ], + "spans": [ + { + "bbox": [ + 313, + 72, + 553, + 96 + ], + "type": "text", + "content": "hancing its capability to handle fine-grained AD tasks without sacrificing its generalization ability." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 313, + 97, + 555, + 239 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 97, + 555, + 239 + ], + "spans": [ + { + "bbox": [ + 313, + 97, + 555, + 239 + ], + "type": "text", + "content": "Our extensive experiments in both industrial and medical domains demonstrate that our straightforward approach equips CLIP with improved zero-shot AD ability, even in data-limited scenarios. By training with a minimal sample — such as one normal sample and one anomaly sample (2-shot) per class — and testing across unseen datasets, our method achieves zero-shot performance comparable to other CLIP-based AD techniques. With only 64-shot of each class seen in the training set, our method reaches state-of-the-art (SOTA) results in cross-dataset zero-shot testing, validating our method's ability to maximize the CLIP's potential for AD with a minimal data requirement." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 325, + 240, + 511, + 251 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 325, + 240, + 511, + 251 + ], + "spans": [ + { + "bbox": [ + 325, + 240, + 511, + 251 + ], + "type": "text", + "content": "Our contributions are summarized as follows:" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 315, + 252, + 553, + 407 + ], + "type": "list", + "angle": 0, + "index": 11, + "blocks": [ + { + "bbox": [ + 315, + 252, + 553, + 312 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 252, + 553, + 312 + ], + "spans": [ + { + "bbox": [ + 315, + 252, + 553, + 312 + ], + "type": "text", + "content": "1. Anomaly-Aware CLIP with enhanced and generalizable anomaly-discriminative ability. We introduce AA-CLIP which is more sensitive to anomalies sequentially in text and visual spaces, encoding anomaly-aware information into the original CLIP." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 315, + 312, + 553, + 360 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 312, + 553, + 360 + ], + "spans": [ + { + "bbox": [ + 315, + 312, + 553, + 360 + ], + "type": "text", + "content": "2. Efficient adaptation using residual adapters. We implement simple residual adapters to boost zero-shot anomaly detection performance without compromising the model's generalization ability." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 315, + 360, + 553, + 407 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 360, + 553, + 407 + ], + "spans": [ + { + "bbox": [ + 315, + 360, + 553, + 407 + ], + "type": "text", + "content": "3. SOTA performance with high training efficiency. Our method achieves SOTA results across diverse datasets, showing robust anomaly detection capabilities even with limited training samples." + } + ] + } + ], + "index": 10 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 313, + 418, + 400, + 430 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 418, + 400, + 430 + ], + "spans": [ + { + "bbox": [ + 313, + 418, + 400, + 430 + ], + "type": "text", + "content": "2. Related Work" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 434, + 555, + 567 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 434, + 555, + 567 + ], + "spans": [ + { + "bbox": [ + 313, + 434, + 555, + 567 + ], + "type": "text", + "content": "Traditional Anomaly Detection in images involves modeling the normal data distribution to detect rare and diverse unexpected signals within visual data [44, 52, 53, 61]. Reconstruction-based [11, 16, 32, 33, 54, 56], augmentation-based [31, 44, 48, 55, 58] and discriminative [10, 15, 23, 31, 43, 61] methods are typically used to facilitate better modeling. Despite the huge progress of traditional anomaly detection methods, their effectiveness relies heavily on a well-modeled normal data distribution. Without sufficient normal data, their ability to accurately detect anomalies is significantly reduced." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 570, + 556, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 570, + 556, + 714 + ], + "spans": [ + { + "bbox": [ + 313, + 570, + 556, + 714 + ], + "type": "text", + "content": "CLIP, trained on a vast amount of image-text data, leverages contrastive learning alongside powerful language models and visual feature encoders to capture robust concepts. 
This combination enables CLIP to achieve impressive zero-shot performance on image classification, as it can generalize well to new categories without requiring task-specific training [24-27, 42, 51]. More recently, numerous studies [9, 12, 35, 39] have explored ways to transfer the knowledge embedded in CLIP models to a variety of downstream tasks, yielding promising results in fields like image captioning, image-text retrieval, and image generation. These efforts demonstrate CLIP's versatility and potential to drive" + } + ] + } + ], + "index": 14 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "text", + "content": "4745" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 228, + 83 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 228, + 83 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 228, + 83 + ], + "type": "text", + "content": "advancements across diverse applications." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 84, + 296, + 264 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 84, + 296, + 264 + ], + "spans": [ + { + "bbox": [ + 55, + 84, + 296, + 264 + ], + "type": "text", + "content": "Despite the rapid advancements achieved by CLIP, numerous studies have highlighted persistent limitations in the features it extracts. While CLIP demonstrates strong generalization across various tasks, it often struggles to capture nuanced details and essential spatial relationships, which are crucial for tasks demanding precise boundary delineation and fine-grained feature extraction. This limitation results in suboptimal performance in downstream applications, especially that require high levels of detail, such as object detection, scene segmentation, or tasks in medical imaging [14, 29, 30, 37, 49, 50, 57, 60]. As a result, leveraging CLIP for fine-granular tasks frequently necessitates task-specific adaptations to bridge the gap between its generalized feature extraction and the precision required for specialized applications." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 266, + 296, + 518 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 266, + 296, + 518 + ], + "spans": [ + { + "bbox": [ + 55, + 266, + 296, + 518 + ], + "type": "text", + "content": "CLIP-based Anomaly Detection There have been several efforts to leverage CLIP for AD tasks. One of the pioneering approaches, WinCLIP [19], proposes a method for extracting and aggregating visual features from multiple levels to align with text features, demonstrating the potential of CLIP in this context. Subsequent research investigates various adaptation methods to bridge the gap between natural domains and the AD domain, resulting in performance improvements. For instance, [7, 8, 18] focus on refining visual features by employing adapters to enhance patch-level visual representations. However, these approaches often rely on text embeddings from the original CLIP model as soft supervision and overlook a critical limitation of CLIP in AD: itsunclearness in distinguishing between anomalous and normal semantics, particularly within the text encoder, resulting in suboptimal performance. 
Other works have employed prompt-learning-based methods [5, 6, 41, 59], introducing learnable embeddings into the text encoder to better represent abnormality. However, the class information in CLIP can be damaged in this process, potentially degrading generalization, especially in data-limited and zero-shot settings." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 518, + 296, + 615 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 518, + 296, + 615 + ], + "spans": [ + { + "bbox": [ + 55, + 518, + 296, + 615 + ], + "type": "text", + "content": "Different from previous methods, we are the first to investigate CLIP's inherent limitation in capturing anomaly-aware information, specifically in differentiating between normal and anomalous semantics in text prompts. Rather than relying solely on the original anomaly-unaware text embeddings or unaltered feature spaces, our method refines the embeddings to actively incorporate anomaly-discriminative representations." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 625, + 111, + 637 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 625, + 111, + 637 + ], + "spans": [ + { + "bbox": [ + 55, + 625, + 111, + 637 + ], + "type": "text", + "content": "3. Method" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 645, + 123, + 656 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 645, + 123, + 656 + ], + "spans": [ + { + "bbox": [ + 55, + 645, + 123, + 656 + ], + "type": "text", + "content": "3.1. Overview" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 662, + 176, + 673 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 662, + 176, + 673 + ], + "spans": [ + { + "bbox": [ + 55, + 662, + 176, + 673 + ], + "type": "text", + "content": "3.1.1. Problem Formulation" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 677, + 296, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 677, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 677, + 296, + 713 + ], + "type": "text", + "content": "Zero-shot AD models are trained to identify anomalous samples whose categories may be unseen in the training dataset. 
Specifically, the model is expected to learn" + } + ] + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 333, + 76, + 385, + 128 + ], + "blocks": [ + { + "bbox": [ + 333, + 76, + 385, + 128 + ], + "lines": [ + { + "bbox": [ + 333, + 76, + 385, + 128 + ], + "spans": [ + { + "bbox": [ + 333, + 76, + 385, + 128 + ], + "type": "image", + "image_path": "1f4b294e7d4def9bcc526a78197fc3385e97f3cb55a77e48738270adecb8c5f1.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 340, + 130, + 381, + 179 + ], + "blocks": [ + { + "bbox": [ + 340, + 130, + 381, + 179 + ], + "lines": [ + { + "bbox": [ + 340, + 130, + 381, + 179 + ], + "spans": [ + { + "bbox": [ + 340, + 130, + 381, + 179 + ], + "type": "image", + "image_path": "b75f5d93de627136e49c895ecdf83d74a7eab7e2e9f221c69e97c6c8bbd33721.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + } + ], + "index": 9 + }, + { + "type": "table", + "bbox": [ + 392, + 87, + 536, + 125 + ], + "blocks": [ + { + "bbox": [ + 394, + 74, + 475, + 85 + ], + "lines": [ + { + "bbox": [ + 394, + 74, + 475, + 85 + ], + "spans": [ + { + "bbox": [ + 394, + 74, + 475, + 85 + ], + "type": "text", + "content": "\"This is a [ ] carpet.\"" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 392, + 87, + 536, + 125 + ], + "lines": [ + { + "bbox": [ + 392, + 87, + 536, + 125 + ], + "spans": [ + { + "bbox": [ + 392, + 87, + 536, + 125 + ], + "type": "table", + "html": "
<table><tr><td>Semantics</td><td>Similarity</td><td>Probability (τ=0.01)</td></tr>
<tr><td>broken</td><td>0.18</td><td>0.22</td></tr>
<tr><td>normal</td><td>0.19</td><td>0.78</td></tr></table>
", + "image_path": "cdad65d46b81837292f32eee6ab9f89dc599629d25def5594226a42493289766.jpg" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "table_body" + } + ], + "index": 11 + }, + { + "type": "table", + "bbox": [ + 392, + 140, + 536, + 179 + ], + "blocks": [ + { + "bbox": [ + 392, + 129, + 473, + 140 + ], + "lines": [ + { + "bbox": [ + 392, + 129, + 473, + 140 + ], + "spans": [ + { + "bbox": [ + 392, + 129, + 473, + 140 + ], + "type": "text", + "content": "\"This is a [ ] zipper.\"" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 392, + 140, + 536, + 179 + ], + "lines": [ + { + "bbox": [ + 392, + 140, + 536, + 179 + ], + "spans": [ + { + "bbox": [ + 392, + 140, + 536, + 179 + ], + "type": "table", + "html": "
<table><tr><td>Semantics</td><td>Similarity</td><td>Probability (τ=0.01)</td></tr>
<tr><td>broken</td><td>0.20</td><td>0.38</td></tr>
<tr><td>normal</td><td>0.21</td><td>0.62</td></tr></table>
", + "image_path": "7a8e14cba207c72c91d6fd37dac789f1a0ac3750789fc0aaedd3930ddbe323e5.jpg" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "table_body" + } + ], + "index": 13 + }, + { + "type": "image", + "bbox": [ + 315, + 180, + 552, + 268 + ], + "blocks": [ + { + "bbox": [ + 315, + 180, + 552, + 268 + ], + "lines": [ + { + "bbox": [ + 315, + 180, + 552, + 268 + ], + "spans": [ + { + "bbox": [ + 315, + 180, + 552, + 268 + ], + "type": "image", + "image_path": "467ee38e87dd5ff7fa4576a42be09710e9d3a22d802523419eec73f0820be9d3.jpg" + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 313, + 269, + 554, + 390 + ], + "lines": [ + { + "bbox": [ + 313, + 269, + 554, + 390 + ], + "spans": [ + { + "bbox": [ + 313, + 269, + 554, + 390 + ], + "type": "text", + "content": "Figure 2. (Top) Examples illustrating CLIP's Anomaly Unawareness. Despite the obvious anomalies present in the images, image features have higher similarities to normal descriptions, rather than anomaly descriptions, mistakenly. This problem is enlarged with a low temperature " + }, + { + "bbox": [ + 313, + 269, + 554, + 390 + ], + "type": "inline_equation", + "content": "\\tau" + }, + { + "bbox": [ + 313, + 269, + 554, + 390 + ], + "type": "text", + "content": ". (Bottom) Text Feature Similarity Heatmap among Normal and Anomaly Descriptions: Original CLIP vs. After Text Adaptation. Red indicates high similarity. In original CLIP, normal features exhibit strong similarity with anomaly features, whereas text adaptation successfully separates them, clarifying the semantic distinctions between normal and anomaly descriptions." + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_caption" + } + ], + "index": 14 + }, + { + "bbox": [ + 313, + 404, + 554, + 536 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 404, + 554, + 536 + ], + "spans": [ + { + "bbox": [ + 313, + 404, + 554, + 536 + ], + "type": "text", + "content": "both normal and abnormal patterns that are shared across different classes given a training set " + }, + { + "bbox": [ + 313, + 404, + 554, + 536 + ], + "type": "inline_equation", + "content": "\\mathcal{D}_{train}" + }, + { + "bbox": [ + 313, + 404, + 554, + 536 + ], + "type": "text", + "content": " with normal or anomalous samples, in order to be capable of performing AD tasks on a series of different test datasets " + }, + { + "bbox": [ + 313, + 404, + 554, + 536 + ], + "type": "inline_equation", + "content": "\\{\\mathcal{D}_{test}^{1},\\mathcal{D}_{test}^{2},\\dots,\\mathcal{D}_{test}^{n}\\}" + }, + { + "bbox": [ + 313, + 404, + 554, + 536 + ], + "type": "text", + "content": ", where each " + }, + { + "bbox": [ + 313, + 404, + 554, + 536 + ], + "type": "inline_equation", + "content": "\\mathcal{D}_{test}^{i}" + }, + { + "bbox": [ + 313, + 404, + 554, + 536 + ], + "type": "text", + "content": " is distinct from " + }, + { + "bbox": [ + 313, + 404, + 554, + 536 + ], + "type": "inline_equation", + "content": "\\mathcal{D}_{train}" + }, + { + "bbox": [ + 313, + 404, + 554, + 536 + ], + "type": "text", + "content": ". 
Image-level AD can be formally defined as a binary classification problem, where the model aims to classify samples " + }, + { + "bbox": [ + 313, + 404, + 554, + 536 + ], + "type": "inline_equation", + "content": "x\\in \\mathcal{D}" + }, + { + "bbox": [ + 313, + 404, + 554, + 536 + ], + "type": "text", + "content": " as either normal (" + }, + { + "bbox": [ + 313, + 404, + 554, + 536 + ], + "type": "inline_equation", + "content": "y = 0" + }, + { + "bbox": [ + 313, + 404, + 554, + 536 + ], + "type": "text", + "content": ") or anomalous (" + }, + { + "bbox": [ + 313, + 404, + 554, + 536 + ], + "type": "inline_equation", + "content": "y = 1" + }, + { + "bbox": [ + 313, + 404, + 554, + 536 + ], + "type": "text", + "content": "). Anomaly segmentation extends this concept to the pixel level with a mask " + }, + { + "bbox": [ + 313, + 404, + 554, + 536 + ], + "type": "inline_equation", + "content": "S" + }, + { + "bbox": [ + 313, + 404, + 554, + 536 + ], + "type": "text", + "content": ", aiming to identify anomalous regions by highlighting pixels associated with anomalies." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 313, + 543, + 426, + 555 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 543, + 426, + 555 + ], + "spans": [ + { + "bbox": [ + 313, + 543, + 426, + 555 + ], + "type": "text", + "content": "3.1.2. Current Challenges" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 313, + 558, + 554, + 653 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 558, + 554, + 653 + ], + "spans": [ + { + "bbox": [ + 313, + 558, + 554, + 653 + ], + "type": "text", + "content": "Anomaly Unawareness in CLIP: CLIP-based AD methods classify visual features as \"anomalies\" if they exhibit greater similarity to anomaly prompt embeddings than to normal prompt embeddings, thus requiring well-defined boundaries between these two kinds of prompts. However, in real applications, CLIP's text embeddings often lack the clear separability needed to reliably distinguish between normal and anomaly classes." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 313, + 654, + 554, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 654, + 554, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 654, + 554, + 713 + ], + "type": "text", + "content": "We observe that, despite the visible defects in example images from MVTec-AD [2], their features exhibit higher cosine similarity with \"normal\" prompts than with correct \"anomaly\" descriptions (see Fig. 2 (top)), indicating CLIP's inaccurate semantic understanding. Without adap" + } + ] + } + ], + "index": 19 + }
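As a rough, self-contained illustration of the temperature effect visible in Fig. 2 (a sketch, not the authors' code; the inputs are the figure's rounded similarities, so the outputs only approximate the reported 0.22/0.78 split):

```python
import numpy as np

def prompt_probabilities(similarities, tau):
    """Softmax over cosine similarities at temperature tau."""
    logits = np.asarray(similarities, dtype=np.float64) / tau
    exp = np.exp(logits - logits.max())  # shift by max for numerical stability
    return exp / exp.sum()

# Rounded similarities from Fig. 2 (top) for the defective carpet:
# "broken" prompt = 0.18, "normal" prompt = 0.19.
print(prompt_probabilities([0.18, 0.19], tau=1.0))   # ~[0.498, 0.502]: ambiguous
print(prompt_probabilities([0.18, 0.19], tau=0.01))  # ~[0.27, 0.73]: confidently "normal"
```

A gap of 0.01 that is negligible at τ = 1 turns into a confident, and here wrong, prediction at τ = 0.01.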
+ ], + "discarded_blocks": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "text", + "content": "4746" + } + ] + } + ], + "index": 20 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 60, + 74, + 294, + 308 + ], + "blocks": [ + { + "bbox": [ + 60, + 74, + 294, + 308 + ], + "lines": [ + { + "bbox": [ + 60, + 74, + 294, + 308 + ], + "spans": [ + { + "bbox": [ + 60, + 74, + 294, + 308 + ], + "type": "image", + "image_path": "85d153d5a9c1ba27ae30deddc2e6b7643a16c6f2889c919f03a81988d171c738.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 55, + 316, + 296, + 384 + ], + "lines": [ + { + "bbox": [ + 55, + 316, + 296, + 384 + ], + "spans": [ + { + "bbox": [ + 55, + 316, + 296, + 384 + ], + "type": "text", + "content": "Figure 3. t-SNE Visualization of Text Features from Original CLIP vs. AA-CLIP. Each point represents a text feature encoded from a prompt. Original CLIP's normal and anomaly text features are intertwined, while our method effectively disentangles them. This disentanglement is generalizable to novel classes, validating the anomaly-awareness of our model." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 395, + 296, + 514 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 395, + 296, + 514 + ], + "spans": [ + { + "bbox": [ + 55, + 395, + 296, + 514 + ], + "type": "text", + "content": "tation, there persists a high similarity between the normal and abnormal text embeddings of a single class, as shown in Fig. 2 (bottom), suggesting a potential entanglement of normal and anomaly semantics within the text space. We term this limitation Anomaly Unawareness and attribute it to the training process of CLIP: it is primarily trained on general, non-anomalous datasets and lacks specific guidance on defect detection. Consequently, it is challenging to rely on original CLIP embeddings to detect subtle or context-specific anomalies." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 514, + 296, + 612 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 514, + 296, + 612 + ], + "spans": [ + { + "bbox": [ + 55, + 514, + 296, + 612 + ], + "type": "text", + "content": "This issue remains evident across different categories in our t-SNE analysis: as shown in Fig. 3 (top), only subtle separations are observed within an object cluster, where text embeddings for both normal and abnormal semantics are intermixed. This entangled pattern may potentially lead to anomaly-unaware text-image alignment, which reinforces the necessity to adapt CLIP to enhance its anomaly-awareness." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 617, + 296, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 617, + 296, + 715 + ], + "spans": [ + { + "bbox": [ + 55, + 617, + 296, + 715 + ], + "type": "text", + "content": "Embedding Adaptation Dilemma: The discussion above renders the adaptation of CLIP essential for effective AD. However, since CLIP's embeddings are already optimized through extensive pretraining, it could be susceptible to overfitting to a new dataset during adaptation. 
Convergence to an overfitted solution minimizes intra-class distinctions in the training data, often at the expense of the feature separability needed for effective generalization to unseen data." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 313, + 72, + 554, + 109 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 72, + 554, + 109 + ], + "spans": [ + { + "bbox": [ + 313, + 72, + 554, + 109 + ], + "type": "text", + "content": "To address this, a carefully controlled refinement is crucial to preserve CLIP's generalization capabilities while enhancing its sensitivity to anomalies." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 313, + 114, + 454, + 126 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 114, + 454, + 126 + ], + "spans": [ + { + "bbox": [ + 313, + 114, + 454, + 126 + ], + "type": "text", + "content": "3.1.3. Overview of Our Solution" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 313, + 130, + 555, + 346 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 130, + 555, + 346 + ], + "spans": [ + { + "bbox": [ + 313, + 130, + 555, + 346 + ], + "type": "text", + "content": "Motivated by Sec. 3.1.2, we propose Anomaly-Aware CLIP (AA-CLIP) with improved anomaly awareness. As shown in Fig. 4, AA-CLIP is trained through a two-stage training strategy that sequentially adapts the semantic-rich text space and detail-focused visual space, with original CLIP parameters remaining frozen. In the first stage (see Fig. 4 (Top)), we incorporate Residual Adapters into the shallow layers of the text encoder, and the visual features from the fixed image encoder serve as a stable reference for optimization. A Disentangle Loss is proposed to enforce effective discrimination by ensuring independence between normal and anomaly embeddings. In the second stage, we integrate Residual Adapters into the shallow layers of the visual encoder to align patch-level features with the specially adapted text features from the now-fixed text encoder (see Fig. 4 (Bottom)). Ultimately, our AA-CLIP succeeds in equipping CLIP with anomaly awareness across seen and unseen classes, as shown in Fig. 3 (bottom)." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 353, + 554, + 366 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 353, + 554, + 366 + ], + "spans": [ + { + "bbox": [ + 313, + 353, + 554, + 366 + ], + "type": "text", + "content": "3.2. AA-CLIP with Two-Stage Adaptation Strategy" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 371, + 418, + 383 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 371, + 418, + 383 + ], + "spans": [ + { + "bbox": [ + 313, + 371, + 418, + 383 + ], + "type": "text", + "content": "3.2.1. Residual Adapter" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 386, + 554, + 433 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 386, + 554, + 433 + ], + "spans": [ + { + "bbox": [ + 313, + 386, + 554, + 433 + ], + "type": "text", + "content": "To preserve CLIP's pre-trained knowledge while enabling targeted adaptation, we introduce lightweight Residual Adapters in the shallow layers (up to layer " + }, + { + "bbox": [ + 313, + 386, + 554, + 433 + ], + "type": "inline_equation", + "content": "K" + }, + { + "bbox": [ + 313, + 386, + 554, + 433 + ], + "type": "text", + "content": ") of both text and vision encoders." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 434, + 554, + 471 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 434, + 554, + 471 + ], + "spans": [ + { + "bbox": [ + 313, + 434, + 554, + 471 + ], + "type": "text", + "content": "The output feature " + }, + { + "bbox": [ + 313, + 434, + 554, + 471 + ], + "type": "inline_equation", + "content": "x^{i} \\in \\mathbb{R}^{N \\times d}" + }, + { + "bbox": [ + 313, + 434, + 554, + 471 + ], + "type": "text", + "content": " of CLIP's " + }, + { + "bbox": [ + 313, + 434, + 554, + 471 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 313, + 434, + 554, + 471 + ], + "type": "text", + "content": "-th (" + }, + { + "bbox": [ + 313, + 434, + 554, + 471 + ], + "type": "inline_equation", + "content": "i \\leq K" + }, + { + "bbox": [ + 313, + 434, + 554, + 471 + ], + "type": "text", + "content": ") transformer layer is fed into the " + }, + { + "bbox": [ + 313, + 434, + 554, + 471 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 313, + 434, + 554, + 471 + ], + "type": "text", + "content": "-th adapter, outputting adapted feature " + }, + { + "bbox": [ + 313, + 434, + 554, + 471 + ], + "type": "inline_equation", + "content": "x_{residual}^{i}" + }, + { + "bbox": [ + 313, + 434, + 554, + 471 + ], + "type": "text", + "content": ", as shown in Eq. (1)," + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 359, + 479, + 553, + 493 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 359, + 479, + 553, + 493 + ], + "spans": [ + { + "bbox": [ + 359, + 479, + 553, + 493 + ], + "type": "interline_equation", + "content": "x _ {\\text {r e s i d u a l}} ^ {i} = \\operatorname {N o r m} \\left(\\operatorname {A c t} \\left(W ^ {i} x ^ {i}\\right)\\right), \\tag {1}", + "image_path": "3e9bdbd98ad66893474a59b92aef089c8fdb4de4e5dea8afb6dbef1c9551352e.jpg" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 502, + 554, + 576 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 502, + 554, + 576 + ], + "spans": [ + { + "bbox": [ + 313, + 502, + 554, + 576 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 502, + 554, + 576 + ], + "type": "inline_equation", + "content": "W^{i} \\in \\mathbb{R}^{d \\times d}" + }, + { + "bbox": [ + 313, + 502, + 554, + 576 + ], + "type": "text", + "content": " is the trainable linear weight of " + }, + { + "bbox": [ + 313, + 502, + 554, + 576 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 313, + 502, + 554, + 576 + ], + "type": "text", + "content": "-th adapter, " + }, + { + "bbox": [ + 313, + 502, + 554, + 576 + ], + "type": "inline_equation", + "content": "Act(\\cdot)" + }, + { + "bbox": [ + 313, + 502, + 554, + 576 + ], + "type": "text", + "content": " is an activation function, and " + }, + { + "bbox": [ + 313, + 502, + 554, + 576 + ], + "type": "inline_equation", + "content": "Norm(\\cdot)" + }, + { + "bbox": [ + 313, + 502, + 554, + 576 + ], + "type": "text", + "content": " is a normalizing function. 
The original feature " + }, + { + "bbox": [ + 313, + 502, + 554, + 576 + ], + "type": "inline_equation", + "content": "x^{i}" + }, + { + "bbox": [ + 313, + 502, + 554, + 576 + ], + "type": "text", + "content": " and the enhanced feature " + }, + { + "bbox": [ + 313, + 502, + 554, + 576 + ], + "type": "inline_equation", + "content": "x_{residual}^{i}" + }, + { + "bbox": [ + 313, + 502, + 554, + 576 + ], + "type": "text", + "content": " are fused in a weighted manner, generating " + }, + { + "bbox": [ + 313, + 502, + 554, + 576 + ], + "type": "inline_equation", + "content": "x_{enhanced}^{i}" + }, + { + "bbox": [ + 313, + 502, + 554, + 576 + ], + "type": "text", + "content": ", the input to the next transformer layer, as shown in Eq. (2)," + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 354, + 584, + 553, + 599 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 354, + 584, + 553, + 599 + ], + "spans": [ + { + "bbox": [ + 354, + 584, + 553, + 599 + ], + "type": "interline_equation", + "content": "x_{\\text{enhanced}}^{i} = \\lambda x_{\\text{residual}}^{i} + (1 - \\lambda) x^{i}, \\tag{2}", + "image_path": "efd25e23af6a7f37b54ee140623f7505b935376355be311eaf973e1c69b81d61.jpg" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 313, + 608, + 554, + 656 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 608, + 554, + 656 + ], + "spans": [ + { + "bbox": [ + 313, + 608, + 554, + 656 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 608, + 554, + 656 + ], + "type": "inline_equation", + "content": "\\lambda" + }, + { + "bbox": [ + 313, + 608, + 554, + 656 + ], + "type": "text", + "content": " is a hyper-parameter controlling the residual ratio, i.e., how much AD-specific knowledge is fused in, so that the original CLIP's generalization ability is preserved while performance improves." + } + ] + } + ], + "index": 15 + },
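A minimal PyTorch sketch of the Residual Adapter in Eqs. (1)-(2) (an illustration, not the authors' code: the paper does not pin down Act and Norm at this point, so GELU and L2 normalization are assumptions; the default ratio mirrors the lambda = 0.1 reported in the implementation details):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualAdapter(nn.Module):
    """Eq. (1): x_residual = Norm(Act(W x)); Eq. (2): weighted fusion."""

    def __init__(self, dim: int, lam: float = 0.1):
        super().__init__()
        self.linear = nn.Linear(dim, dim, bias=False)  # W^i in Eq. (1)
        self.lam = lam                                 # lambda in Eq. (2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Assumed choices: GELU activation and L2 normalization.
        residual = F.normalize(F.gelu(self.linear(x)), dim=-1)
        # A small lam keeps the output close to the frozen CLIP feature,
        # which is what preserves the pre-trained knowledge.
        return self.lam * residual + (1.0 - self.lam) * x
```

+ { + "bbox": [ + 313, + 662, + 466, + 675 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 662, + 466, + 675 + ], + "spans": [ + { + "bbox": [ + 313, + 662, + 466, + 675 + ], + "type": "text", + "content": "3.2.2. 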
Two-Stage Training Strategy" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 313, + 677, + 554, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 677, + 554, + 714 + ], + "spans": [ + { + "bbox": [ + 313, + 677, + 554, + 714 + ], + "type": "text", + "content": "Disentangling Anomaly-Aware Text Anchors: In the first stage, our objective is to learn anomaly-discriminative text anchors by adapting the text encoder while keeping the" + } + ] + } + ], + "index": 17 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 315, + 757 + ], + "type": "text", + "content": "4747" + } + ] + } + ], + "index": 18 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 78, + 85, + 531, + 200 + ], + "blocks": [ + { + "bbox": [ + 80, + 69, + 306, + 82 + ], + "lines": [ + { + "bbox": [ + 80, + 69, + 306, + 82 + ], + "spans": [ + { + "bbox": [ + 80, + 69, + 306, + 82 + ], + "type": "text", + "content": "Stage1: Disentangling Anomaly-Aware Text Anchors" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 78, + 85, + 531, + 200 + ], + "lines": [ + { + "bbox": [ + 78, + 85, + 531, + 200 + ], + "spans": [ + { + "bbox": [ + 78, + 85, + 531, + 200 + ], + "type": "image", + "image_path": "a4a610b1167236135a85d221e41e21c181b00e2ab31d2c098f0a54df24da772a.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 76, + 219, + 532, + 365 + ], + "blocks": [ + { + "bbox": [ + 83, + 204, + 335, + 217 + ], + "lines": [ + { + "bbox": [ + 83, + 204, + 335, + 217 + ], + "spans": [ + { + "bbox": [ + 83, + 204, + 335, + 217 + ], + "type": "text", + "content": "Stage2: Aligning Patch Features According to Text Anchors" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 76, + 219, + 532, + 365 + ], + "lines": [ + { + "bbox": [ + 76, + 219, + 532, + 365 + ], + "spans": [ + { + "bbox": [ + 76, + 219, + 532, + 365 + ], + "type": "image", + "image_path": "b1ece9f7bcb30903f6b874d348410a737150e0d213c2bd10ec011036cf8438ca.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 54, + 371, + 555, + 416 + ], + "lines": [ + { + "bbox": [ + 54, + 371, + 555, + 416 + ], + "spans": [ + { + "bbox": [ + 54, + 371, + 555, + 416 + ], + "type": "text", + "content": "Figure 4. The Two-Stage Training Pipeline of Anomaly-Aware CLIP. In the first stage, the text encoder of AA-CLIP is trained to identify anomaly-related semantics, helped by a disentangle loss. In the second stage, patch features are aligned with these text anchors. Both stages are achieved by the integration of Residual Adapters into the shallow layers of CLIP's backbone. This controlled adaptation enables CLIP to effectively distinguish anomalies, which forms our Anomaly-Aware CLIP." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + } + ], + "index": 3 + }, + { + "bbox": [ + 54, + 430, + 295, + 477 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 430, + 295, + 477 + ], + "spans": [ + { + "bbox": [ + 54, + 430, + 295, + 477 + ], + "type": "text", + "content": "image encoder fixed. 
We incorporate Residual Adapters into the first " + }, + { + "bbox": [ + 54, + 430, + 295, + 477 + ], + "type": "inline_equation", + "content": "K_{T}" + }, + { + "bbox": [ + 54, + 430, + 295, + 477 + ], + "type": "text", + "content": " layers of the CLIP text encoder, as illustrated in Fig. 4 (Top), and set the final projector in the text encoder to be learnable to facilitate improved alignment." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 477, + 296, + 622 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 477, + 296, + 622 + ], + "spans": [ + { + "bbox": [ + 55, + 477, + 296, + 622 + ], + "type": "text", + "content": "Using prompts designed to encapsulate both normal and anomalous semantics (as detailed in the Appendix), the text encoder generates corresponding high-level embeddings. The average embeddings of the normal and anomaly prompts serve as our initial text anchors, denoted as " + }, + { + "bbox": [ + 55, + 477, + 296, + 622 + ], + "type": "inline_equation", + "content": "T_{N}" + }, + { + "bbox": [ + 55, + 477, + 296, + 622 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 477, + 296, + 622 + ], + "type": "inline_equation", + "content": "T_{A} \\in \\mathbb{R}^{d}" + }, + { + "bbox": [ + 55, + 477, + 296, + 622 + ], + "type": "text", + "content": ", respectively. These anchors are refined by being aligned with visual features extracted from an enhanced CLIP visual encoder, following [28, 59]. Alignment is conducted at both image and patch levels to incorporate both global and local semantics. By calculating the cosine similarity between these anchors and the image features " + }, + { + "bbox": [ + 55, + 477, + 296, + 622 + ], + "type": "inline_equation", + "content": "V_{image} \\in \\mathbb{R}^{d}" + }, + { + "bbox": [ + 55, + 477, + 296, + 622 + ], + "type": "text", + "content": " or patch features " + }, + { + "bbox": [ + 55, + 477, + 296, + 622 + ], + "type": "inline_equation", + "content": "V_{patch} \\in \\mathbb{R}^{N \\times d}" + }, + { + "bbox": [ + 55, + 477, + 296, + 622 + ], + "type": "text", + "content": ", as shown in Eq. 
(3)," + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 101, + 628, + 295, + 647 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 101, + 628, + 295, + 647 + ], + "spans": [ + { + "bbox": [ + 101, + 628, + 295, + 647 + ], + "type": "interline_equation", + "content": "p _ {c l s} = \\operatorname {C o s S i m} \\left(V _ {\\text {i m a g e}}, \\left[ T _ {N}, T _ {A} \\right]\\right), \\tag {3}", + "image_path": "345dbd68ad4867901ddee89cddffca8aaefc3804b1732692175689ee3536bdf7.jpg" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 101, + 643, + 247, + 658 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 101, + 643, + 247, + 658 + ], + "spans": [ + { + "bbox": [ + 101, + 643, + 247, + 658 + ], + "type": "interline_equation", + "content": "p _ {s e g} ^ {o} = \\operatorname {C o s S i m} (V _ {p a t c h}, [ T _ {N}, T _ {A} ]),", + "image_path": "ac7e2dc3009139e11df2d118fb917be9cb00128fb3192ea4d0cb3945b27fd73b.jpg" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 55, + 664, + 296, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 664, + 296, + 715 + ], + "spans": [ + { + "bbox": [ + 55, + 664, + 296, + 715 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 55, + 664, + 296, + 715 + ], + "type": "inline_equation", + "content": "[\\cdot, \\cdot]" + }, + { + "bbox": [ + 55, + 664, + 296, + 715 + ], + "type": "text", + "content": " means concatenate operation, we obtain the classification prediction " + }, + { + "bbox": [ + 55, + 664, + 296, + 715 + ], + "type": "inline_equation", + "content": "p_{cls} \\in \\mathbb{R}^2" + }, + { + "bbox": [ + 55, + 664, + 296, + 715 + ], + "type": "text", + "content": " and the segmentation prediction " + }, + { + "bbox": [ + 55, + 664, + 296, + 715 + ], + "type": "inline_equation", + "content": "p_{seg}^{o} \\in \\mathbb{R}^{N \\times 2}" + }, + { + "bbox": [ + 55, + 664, + 296, + 715 + ], + "type": "text", + "content": ". The segmentation prediction " + }, + { + "bbox": [ + 55, + 664, + 296, + 715 + ], + "type": "inline_equation", + "content": "p_{seg}^{o}" + }, + { + "bbox": [ + 55, + 664, + 296, + 715 + ], + "type": "text", + "content": " is then reshaped and upsampled to " + }, + { + "bbox": [ + 55, + 664, + 296, + 715 + ], + "type": "inline_equation", + "content": "p_{seg} \\in \\mathbb{R}^{H \\times W \\times 2}" + }, + { + "bbox": [ + 55, + 664, + 296, + 715 + ], + "type": "text", + "content": " to align" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 430, + 555, + 537 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 430, + 555, + 537 + ], + "spans": [ + { + "bbox": [ + 313, + 430, + 555, + 537 + ], + "type": "text", + "content": "with the height " + }, + { + "bbox": [ + 313, + 430, + 555, + 537 + ], + "type": "inline_equation", + "content": "H" + }, + { + "bbox": [ + 313, + 430, + 555, + 537 + ], + "type": "text", + "content": " and width " + }, + { + "bbox": [ + 313, + 430, + 555, + 537 + ], + "type": "inline_equation", + "content": "W" + }, + { + "bbox": [ + 313, + 430, + 555, + 537 + ], + "type": "text", + "content": " of segmentation mask " + }, + { + "bbox": [ + 313, + 430, + 555, + 537 + ], + "type": "inline_equation", + "content": "S" + }, + { + "bbox": [ + 313, + 430, + 555, + 537 + ], + "type": "text", + "content": ". 
Following previous works [6, 7, 18, 59], we compute the classification loss " + }, + { + "bbox": [ + 313, + 430, + 555, + 537 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{cls}" + }, + { + "bbox": [ + 313, + 430, + 555, + 537 + ], + "type": "text", + "content": " and segmentation loss " + }, + { + "bbox": [ + 313, + 430, + 555, + 537 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{seg}" + }, + { + "bbox": [ + 313, + 430, + 555, + 537 + ], + "type": "text", + "content": " to optimize parameters, as specified in Eq. (4). Specifically, the classification loss is a binary cross-entropy that compares classification predictions with ground-truth labels " + }, + { + "bbox": [ + 313, + 430, + 555, + 537 + ], + "type": "inline_equation", + "content": "y" + }, + { + "bbox": [ + 313, + 430, + 555, + 537 + ], + "type": "text", + "content": ", and the segmentation loss is a combination of dice loss and focal loss applied to segmentation predictions and the anomaly segmentation mask " + }, + { + "bbox": [ + 313, + 430, + 555, + 537 + ], + "type": "inline_equation", + "content": "S" + }, + { + "bbox": [ + 313, + 430, + 555, + 537 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 352, + 542, + 441, + 555 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 352, + 542, + 441, + 555 + ], + "spans": [ + { + "bbox": [ + 352, + 542, + 441, + 555 + ], + "type": "interline_equation", + "content": "\\mathcal{L}_{cls} = \\operatorname{BCE}(p_{cls}, y),", + "image_path": "7490859b8c24ac25d0e987722362b1ef1130b443dfc3b0f19d27b9fb6fbdd555.jpg" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 353, + 557, + 553, + 570 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 353, + 557, + 553, + 570 + ], + "spans": [ + { + "bbox": [ + 353, + 557, + 553, + 570 + ], + "type": "interline_equation", + "content": "\\mathcal{L}_{\\text{seg}} = \\operatorname{Dice}\\left(p_{\\text{seg}}, S\\right) + \\operatorname{Focal}\\left(p_{\\text{seg}}, S\\right), \\tag{4}", + "image_path": "284e20962fbee0185cbda90d49f4122b67eb86e16d01e78e5c4d5175f96b76bc.jpg" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 353, + 572, + 446, + 586 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 353, + 572, + 446, + 586 + ], + "spans": [ + { + "bbox": [ + 353, + 572, + 446, + 586 + ], + "type": "interline_equation", + "content": "\\mathcal{L}_{\\text{align}} = \\mathcal{L}_{\\text{cls}} + \\mathcal{L}_{\\text{seg}}.", + "image_path": "fe09995ff1cf75d772b2c193d5bf75a9334ddeafd69e00cfad15d4671b28b544.jpg" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 589, + 555, + 637 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 589, + 555, + 637 + ], + "spans": [ + { + "bbox": [ + 313, + 589, + 555, + 637 + ], + "type": "text", + "content": "To enhance the separation between normal and anomaly text embeddings, we introduce a Disentangle Loss encouraging orthogonality between " + }, + { + "bbox": [ + 313, + 589, + 555, + 637 + ], + "type": "inline_equation", + "content": "T_{N}" + }, + { + "bbox": [ + 313, + 589, + 555, + 637 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 589, + 555, + 637 + ], + "type": "inline_equation", + "content": "T_{A}" + }, + { + "bbox": [ + 313, + 589, + 555, + 637 + ], + "type": "text", + "content": " to minimize correlation, as in Eq. 
(5):" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 382, + 641, + 553, + 654 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 382, + 641, + 553, + 654 + ], + "spans": [ + { + "bbox": [ + 382, + 641, + 553, + 654 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {\\text {d i s}} = | < T _ {N}, T _ {A} > | ^ {2}. \\tag {5}", + "image_path": "b278201ba578ca0b14081aef00ce3af1d0dbf566406b0ae1208d3af17aaf01d8.jpg" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 313, + 660, + 555, + 696 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 660, + 555, + 696 + ], + "spans": [ + { + "bbox": [ + 313, + 660, + 555, + 696 + ], + "type": "text", + "content": "The Disentangle Loss " + }, + { + "bbox": [ + 313, + 660, + 555, + 696 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{dis}" + }, + { + "bbox": [ + 313, + 660, + 555, + 696 + ], + "type": "text", + "content": " is incorporated into the alignment loss " + }, + { + "bbox": [ + 313, + 660, + 555, + 696 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{\\text {align }}" + }, + { + "bbox": [ + 313, + 660, + 555, + 696 + ], + "type": "text", + "content": " as a regularization term, weighted by a factor " + }, + { + "bbox": [ + 313, + 660, + 555, + 696 + ], + "type": "inline_equation", + "content": "\\gamma" + }, + { + "bbox": [ + 313, + 660, + 555, + 696 + ], + "type": "text", + "content": ", which forms the total loss, as in Eq. (6):" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 381, + 701, + 553, + 715 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 381, + 701, + 553, + 715 + ], + "spans": [ + { + "bbox": [ + 381, + 701, + 553, + 715 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {\\text {t o t a l}} = \\mathcal {L} _ {\\text {a l i g n}} + \\gamma \\mathcal {L} _ {\\text {d i s}}. \\tag {6}", + "image_path": "257f94d56104e1bd7526b76666af1184201f0ef7ca4ebe169d2013e8daf43913.jpg" + } + ] + } + ], + "index": 17 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "text", + "content": "4748" + } + ] + } + ], + "index": 18 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 294, + 133 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 294, + 133 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 294, + 133 + ], + "type": "text", + "content": "In this stage, the distinction between normal and anomaly semantics is embedded into CLIP's text encoder while its original object-recognition capability is preserved. Figure 3 indicates that this ability of anomaly-awareness is robust and generalizable to novel classes." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 137, + 295, + 220 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 137, + 295, + 220 + ], + "spans": [ + { + "bbox": [ + 55, + 137, + 295, + 220 + ], + "type": "text", + "content": "Aligning Patch Features According to Text Anchors: Anomaly-aware semantic anchors can facilitate the adaptation of patch features, thereby improving the effectiveness and generalizability of anomaly localization. 
+ { + "bbox": [ + 55, + 137, + 295, + 220 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 137, + 295, + 220 + ], + "spans": [ + { + "bbox": [ + 55, + 137, + 295, + 220 + ], + "type": "text", + "content": "Aligning Patch Features According to Text Anchors: Anomaly-aware semantic anchors can facilitate the adaptation of patch features, thereby improving the effectiveness and generalizability of anomaly localization. To achieve alignment between patch features and anchors from the previous stage, we introduce trainable Residual Adapters within the initial " + }, + { + "bbox": [ + 55, + 137, + 295, + 220 + ], + "type": "inline_equation", + "content": "K_{I}" + }, + { + "bbox": [ + 55, + 137, + 295, + 220 + ], + "type": "text", + "content": " layers of the CLIP visual encoder." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 221, + 296, + 329 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 221, + 296, + 329 + ], + "spans": [ + { + "bbox": [ + 55, + 221, + 296, + 329 + ], + "type": "text", + "content": "Features at multiple granularities are utilized to enhance segmentation [7, 19, 59]. Specifically, as shown in Fig. 4 (bottom), the intermediate output features " + }, + { + "bbox": [ + 55, + 221, + 296, + 329 + ], + "type": "inline_equation", + "content": "F^i" + }, + { + "bbox": [ + 55, + 221, + 296, + 329 + ], + "type": "text", + "content": " are extracted at four distinct granularities. These multi-granularity features are then projected to align with the channel dimension of the text anchors via a trainable projector " + }, + { + "bbox": [ + 55, + 221, + 296, + 329 + ], + "type": "inline_equation", + "content": "Proj_i(\\cdot)" + }, + { + "bbox": [ + 55, + 221, + 296, + 329 + ], + "type": "text", + "content": ", yielding " + }, + { + "bbox": [ + 55, + 221, + 296, + 329 + ], + "type": "inline_equation", + "content": "V_{patch}^i" + }, + { + "bbox": [ + 55, + 221, + 296, + 329 + ], + "type": "text", + "content": " at four distinct levels of granularity. The aggregated output " + }, + { + "bbox": [ + 55, + 221, + 296, + 329 + ], + "type": "inline_equation", + "content": "V_{patch}" + }, + { + "bbox": [ + 55, + 221, + 296, + 329 + ], + "type": "text", + "content": " is computed by summing individual " + }, + { + "bbox": [ + 55, + 221, + 296, + 329 + ], + "type": "inline_equation", + "content": "V_{patch}^i" + }, + { + "bbox": [ + 55, + 221, + 296, + 329 + ], + "type": "text", + "content": " outputs, as in Eq. (7):" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 99, + 335, + 294, + 384 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 99, + 335, + 294, + 384 + ], + "spans": [ + { + "bbox": [ + 99, + 335, + 294, + 384 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} V_{patch}^{i} = \\operatorname{Proj}_{i}\\left(F^{i}\\right), \\; i \\in \\{1, 2, 3, 4\\}, \\\\ V_{patch} = \\sum_{i=1}^{4} V_{patch}^{i}. \\tag{7} \\end{array}", + "image_path": "f9bd2cafec1ea7f7c50f3b7a6f103af52e28b124710528079750284894de1f8b.jpg" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 391, + 295, + 427 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 391, + 295, + 427 + ], + "spans": [ + { + "bbox": [ + 55, + 391, + 295, + 427 + ], + "type": "text", + "content": "The cosine similarity scores between the aggregated " + }, + { + "bbox": [ + 55, + 391, + 295, + 427 + ], + "type": "inline_equation", + "content": "V_{patch}" + }, + { + "bbox": [ + 55, + 391, + 295, + 427 + ], + "type": "text", + "content": " and the text anchors are calculated to generate patch-level predictions as in Eq. (3), resulting in the prediction maps." + } + ] + } + ], + "index": 4 + },
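A sketch of how Eqs. (3) and (7) produce the prediction maps just described; the shapes, the projector modules, and the square-grid assumption are illustrative rather than the authors' implementation:

```python
import torch
import torch.nn.functional as F

def patch_anomaly_map(feats, projs, t_n, t_a, out_size):
    """Project multi-granularity patch features (Eq. (7)), score them
    against the text anchors by cosine similarity (Eq. (3)), and
    upsample to the segmentation-mask resolution."""
    v_patch = sum(proj(f) for proj, f in zip(projs, feats))   # (N, d)
    anchors = torch.stack([t_n, t_a])                          # (2, d): [T_N, T_A]
    p_seg = F.cosine_similarity(v_patch.unsqueeze(1),
                                anchors.unsqueeze(0), dim=-1)  # (N, 2)
    side = int(p_seg.shape[0] ** 0.5)                          # assume a square patch grid
    maps = p_seg.t().reshape(1, 2, side, side)
    return F.interpolate(maps, size=out_size, mode="bilinear",
                         align_corners=False)                  # (1, 2, H, W)
```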
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 427, + 296, + 499 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 427, + 296, + 499 + ], + "spans": [ + { + "bbox": [ + 55, + 427, + 296, + 499 + ], + "type": "text", + "content": "During training, alignment is guided by the loss function defined in Eq. (4), facilitating both global and local alignment. During inference, anomaly prediction maps and corresponding anomaly scores are derived by comparing the similarity scores of visual features against normal and anomaly text embeddings." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 509, + 137, + 522 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 509, + 137, + 522 + ], + "spans": [ + { + "bbox": [ + 55, + 509, + 137, + 522 + ], + "type": "text", + "content": "4. Experiments" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 529, + 167, + 541 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 529, + 167, + 541 + ], + "spans": [ + { + "bbox": [ + 55, + 529, + 167, + 541 + ], + "type": "text", + "content": "4.1. Experiment Setups" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 545, + 295, + 677 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 545, + 295, + 677 + ], + "spans": [ + { + "bbox": [ + 55, + 545, + 295, + 677 + ], + "type": "text", + "content": "Datasets We evaluate our model on 11 widely used benchmarks, as previous AD works [6, 7, 18, 19, 59], with distinct foreground objects spanning a variety of modalities, including photography, endoscopy, CT, MRI, and OCT. For the industrial domain, we use MVtec AD [2], VisA [62], BTAD [34] and MPDD [20]. For medical domain, we use brain MRI, liver CT and retina OCT from BMAD [1], and four different colon polyp detection datasets with different views (CVC-ClinicDB [4], CVC-ColonDB [3], Kvasir-SEG [40] and CVC-300 [47]). Each dataset has both image-level labels and pixel-level masks for evaluation." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 55, + 677, + 296, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 677, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 677, + 296, + 713 + ], + "type": "text", + "content": "We train our model on a real-world industrial AD dataset - VisA [62] - in which objects are different from other datasets. Results of VisA are obtained using MVtec-AD as" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 72, + 553, + 144 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 72, + 553, + 144 + ], + "spans": [ + { + "bbox": [ + 313, + 72, + 553, + 144 + ], + "type": "text", + "content": "the training dataset. To demonstrate adaptation efficiency, we conduct training under various data levels: 2-shot per class, 16-shot per class, 64-shot per class, and full-shot. The corresponding number of samples are randomly selected from each class, while maintaining a consistent 1:1 ratio between normal and anomaly samples." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 144, + 553, + 361 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 144, + 553, + 361 + ], + "spans": [ + { + "bbox": [ + 313, + 144, + 553, + 361 + ], + "type": "text", + "content": "Metrics Following [6, 7, 10, 11, 18, 19, 43, 55], we use the Area Under the Receiver Operating Characteristic Curve (AUROC) as the metric. 
We compute AUROC at both the image and pixel levels to comprehensively assess the model's effectiveness in detecting and localizing anomalies. Implementation Details Following [6, 7, 41, 59], we use OpenCLIP with the ViT-L/14 architecture as the backbone, and input images are resized to " + }, + { + "bbox": [ + 313, + 144, + 553, + 361 + ], + "type": "inline_equation", + "content": "518 \\times 518" + }, + { + "bbox": [ + 313, + 144, + 553, + 361 + ], + "type": "text", + "content": ". All parameters of CLIP remain frozen. We set " + }, + { + "bbox": [ + 313, + 144, + 553, + 361 + ], + "type": "inline_equation", + "content": "\\lambda" + }, + { + "bbox": [ + 313, + 144, + 553, + 361 + ], + "type": "text", + "content": " to 0.1, " + }, + { + "bbox": [ + 313, + 144, + 553, + 361 + ], + "type": "inline_equation", + "content": "K_{T}" + }, + { + "bbox": [ + 313, + 144, + 553, + 361 + ], + "type": "text", + "content": " to 3, " + }, + { + "bbox": [ + 313, + 144, + 553, + 361 + ], + "type": "inline_equation", + "content": "K_{I}" + }, + { + "bbox": [ + 313, + 144, + 553, + 361 + ], + "type": "text", + "content": " to 6, and " + }, + { + "bbox": [ + 313, + 144, + 553, + 361 + ], + "type": "inline_equation", + "content": "\\gamma" + }, + { + "bbox": [ + 313, + 144, + 553, + 361 + ], + "type": "text", + "content": " to 0.1. For multi-level feature extraction, we utilize outputs from the 6th, 12th, 18th, and 24th layers of the visual encoder to compose the overall output. For the first stage, we train the model for 5 epochs with a learning rate of " + }, + { + "bbox": [ + 313, + 144, + 553, + 361 + ], + "type": "inline_equation", + "content": "1 \\times 10^{-5}" + }, + { + "bbox": [ + 313, + 144, + 553, + 361 + ], + "type": "text", + "content": ". For the second stage, we continue training for 20 epochs, adjusting the learning rate to " + }, + { + "bbox": [ + 313, + 144, + 553, + 361 + ], + "type": "inline_equation", + "content": "5 \\times 10^{-4}" + }, + { + "bbox": [ + 313, + 144, + 553, + 361 + ], + "type": "text", + "content": ". Parameters are updated with the Adam optimizer. All experiments are conducted on a single NVIDIA GeForce RTX 3090 GPU. More details are available in the Appendix." + } + ] + } + ], + "index": 11 + },
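For quick reference, the stated settings collected into one illustrative configuration (the key names are ours, not the authors' code):

```python
AA_CLIP_CONFIG = {
    "backbone": "OpenCLIP ViT-L/14",    # CLIP parameters stay frozen
    "image_size": 518,
    "lambda": 0.1,                      # residual ratio in Eq. (2)
    "K_T": 3,                           # adapted text-encoder layers
    "K_I": 6,                           # adapted visual-encoder layers
    "gamma": 0.1,                       # Disentangle Loss weight in Eq. (6)
    "feature_layers": [6, 12, 18, 24],  # multi-level visual features
    "stage1": {"epochs": 5, "lr": 1e-5},
    "stage2": {"epochs": 20, "lr": 5e-4},
    "optimizer": "Adam",
}
```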
+ { + "bbox": [ + 313, + 371, + 493, + 384 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 371, + 493, + 384 + ], + "spans": [ + { + "bbox": [ + 313, + 371, + 493, + 384 + ], + "type": "text", + "content": "4.2. Comparison with SOTA Methods" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 388, + 554, + 472 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 388, + 554, + 472 + ], + "spans": [ + { + "bbox": [ + 313, + 388, + 554, + 472 + ], + "type": "text", + "content": "We compare our method against CLIP and several recent SOTA models. Among them, WinCLIP [19], VAND [7] and MVFA-AD [18] use the original CLIP text encoder, and AnomalyCLIP [59] and AdaCLIP [6] incorporate learnable prompts. To ensure a fair comparison, we re-train models that are originally trained on different datasets to match the dataset settings of other approaches (detailed in the Appendix)." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 474, + 554, + 640 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 474, + 554, + 640 + ], + "spans": [ + { + "bbox": [ + 313, + 474, + 554, + 640 + ], + "type": "text", + "content": "Quantitative results are presented in Tab. 1 and Tab. 2. Although adapting only the patch features with the original text embeddings has made progress in AD, the superior performance of AA-CLIP highlights that its effective disentanglement of anomaly-discriminative semantics leads to further progress. Notably, even in data-limited situations, our method consistently demonstrates top performance. At the pixel level, with only 2 shots per class used for training, our method achieves improved average zero-shot performance compared to previous methods. With the full dataset, we set a new pixel-level SOTA with an AUROC of " + }, + { + "bbox": [ + 313, + 474, + 554, + 640 + ], + "type": "inline_equation", + "content": "93.4\\%" + }, + { + "bbox": [ + 313, + 474, + 554, + 640 + ], + "type": "text", + "content": ". At the image level, our method is competitive with just 2 shots for training and establishes a new SOTA of " + }, + { + "bbox": [ + 313, + 474, + 554, + 640 + ], + "type": "inline_equation", + "content": "83.1\\%" + }, + { + "bbox": [ + 313, + 474, + 554, + 640 + ], + "type": "text", + "content": " with 64 shots per class." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 313, + 642, + 554, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 642, + 554, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 642, + 554, + 713 + ], + "type": "text", + "content": "Unlike previous methods, our approach does not rely heavily on data resources to achieve top-tier performance. Comparisons under different levels of data availability, as shown in Fig. 5, reveal that our approach consistently outperforms other methods in general. Even with limited data, our model reaches competitive results, while other methods" + } + ] + } + ], + "index": 15 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "text", + "content": "4749" + } + ] + } + ], + "index": 16 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 58, + 70, + 555, + 236 + ], + "blocks": [ + { + "bbox": [ + 58, + 70, + 555, + 236 + ], + "lines": [ + { + "bbox": [ + 58, + 70, + 555, + 236 + ], + "spans": [ + { + "bbox": [ + 58, + 70, + 555, + 236 + ], + "type": "table", + "html": "
<table>
<tr><td rowspan="2">Domain</td><td rowspan="2">Dataset</td><td>CLIP*</td><td>WinCLIP*</td><td>VAND*</td><td>MVFA-AD</td><td>AnomalyCLIP*</td><td>AdaCLIP</td><td colspan="4">Ours</td></tr>
<tr><td>OpenCLIP</td><td>CVPR 2023</td><td>CVPRw 2023</td><td>CVPR 2024</td><td>ICLR 2024</td><td>ECCV 2024</td><td colspan="4">-</td></tr>
<tr><td colspan="2">Available training shots</td><td>-</td><td>-</td><td>full</td><td>full</td><td>full</td><td>full</td><td>2</td><td>16</td><td>64</td><td>full</td></tr>
<tr><td rowspan="4">Industrial</td><td>BTAD</td><td>30.6</td><td>32.8</td><td>91.1</td><td>90.1</td><td>93.3</td><td>90.8</td><td>92.8</td><td>94.4</td><td>96.5</td><td>97.0</td></tr>
<tr><td>MPDD</td><td>62.1</td><td>95.2</td><td>94.9</td><td>94.5</td><td>96.2</td><td>96.6</td><td>96.3</td><td>96.5</td><td>96.3</td><td>96.7</td></tr>
<tr><td>MVTec-AD</td><td>38.4</td><td>85.1</td><td>87.6</td><td>84.9</td><td>91.1</td><td>89.9</td><td>91.0</td><td>91.2</td><td>91.6</td><td>91.9</td></tr>
<tr><td>VisA</td><td>46.6</td><td>79.6</td><td>94.2</td><td>93.4</td><td>95.4</td><td>95.5</td><td>93.4</td><td>93.8</td><td>94.0</td><td>95.5</td></tr>
<tr><td rowspan="7">Medical</td><td>Brain MRI</td><td>68.3</td><td>86.0</td><td>94.5</td><td>95.6</td><td>96.2</td><td>93.9</td><td>96.3</td><td>96.4</td><td>96.5</td><td>95.5</td></tr>
<tr><td>Liver CT</td><td>90.5</td><td>96.2</td><td>95.6</td><td>96.8</td><td>93.9</td><td>94.5</td><td>97.3</td><td>97.7</td><td>97.7</td><td>97.8</td></tr>
<tr><td>Retina OCT</td><td>21.3</td><td>80.6</td><td>88.5</td><td>90.9</td><td>92.6</td><td>88.5</td><td>94.2</td><td>95.1</td><td>94.4</td><td>95.5</td></tr>
<tr><td>ColonDB</td><td>49.5</td><td>51.2</td><td>78.2</td><td>78.4</td><td>82.9</td><td>80.0</td><td>83.9</td><td>83.5</td><td>84.7</td><td>84.0</td></tr>
<tr><td>ClinicDB</td><td>47.5</td><td>70.3</td><td>85.1</td><td>83.9</td><td>85.0</td><td>85.9</td><td>89.2</td><td>87.6</td><td>87.8</td><td>89.9</td></tr>
<tr><td>Kvasir</td><td>44.6</td><td>69.7</td><td>80.3</td><td>81.9</td><td>81.9</td><td>86.4</td><td>82.1</td><td>84.6</td><td>85.2</td><td>87.2</td></tr>
<tr><td>CVC-300</td><td>49.9</td><td>-</td><td>92.8</td><td>82.6</td><td>95.4</td><td>92.9</td><td>96.0</td><td>97.4</td><td>96.0</td><td>96.4</td></tr>
<tr><td colspan="2">Average</td><td>49.9</td><td>74.7</td><td>89.3</td><td>88.5</td><td>91.3</td><td>90.4</td><td>92.0</td><td>92.6</td><td>92.8</td><td>93.4</td></tr>
</table>
", + "image_path": "0b66cec7ce5cdae20101e758991082072c08f5ac429d228d5ab084a42e23b08f.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "type": "table", + "bbox": [ + 58, + 280, + 555, + 414 + ], + "blocks": [ + { + "bbox": [ + 55, + 238, + 555, + 274 + ], + "lines": [ + { + "bbox": [ + 55, + 238, + 555, + 274 + ], + "spans": [ + { + "bbox": [ + 55, + 238, + 555, + 274 + ], + "type": "text", + "content": "Table 1. Pixel-level AUROC of zero-shot AD methods in Industrial and Medical domains. Method sources and the number of shots used for training are noted. Results of methods with * are copied from the papers or inferred from official weight. Best results are highlighted as first, second and third." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 58, + 280, + 555, + 414 + ], + "lines": [ + { + "bbox": [ + 58, + 280, + 555, + 414 + ], + "spans": [ + { + "bbox": [ + 58, + 280, + 555, + 414 + ], + "type": "table", + "html": "
DomainDatasetCLIP&VAND*WinCLIP*MVFA-ADAnomalyCLIP*AdaCLIPOurs
OpenCLIPCVPR 2023CVPR 2024ICLR2024ECCV2024-
Available training shots--fullfullfull21664full
IndustrialBTAD73.668.294.385.390.988.090.994.794.8
MPDD73.063.670.973.772.163.678.375.775.1
MVTec-AD86.191.886.690.990.085.989.792.090.5
VisA66.478.076.582.184.378.484.084.184.6
MedicalBrain MRI58.866.570.983.380.284.380.483.480.2
Liver CT54.764.263.061.664.269.468.169.269.7
Retina OCT65.642.577.375.782.777.481.082.982.7
Average68.367.877.178.480.678.181.883.182.5
", + "image_path": "66c762401f393f3ab750145aee57a896577c599ee8bdc764d4a3ac1dce09cbb8.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 464, + 295, + 500 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 464, + 295, + 500 + ], + "spans": [ + { + "bbox": [ + 55, + 464, + 295, + 500 + ], + "type": "text", + "content": "display signs of underfitting. As data increases, our method maintains its lead, establishing a new SOTA at both pixel and image levels." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 509, + 139, + 521 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 509, + 139, + 521 + ], + "spans": [ + { + "bbox": [ + 55, + 509, + 139, + 521 + ], + "type": "text", + "content": "4.3. Visualization" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 526, + 295, + 598 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 526, + 295, + 598 + ], + "spans": [ + { + "bbox": [ + 55, + 526, + 295, + 598 + ], + "type": "text", + "content": "To illustrate the alignment intuitively, we present visualization examples in Fig. 6 with original configuration for previous works. Although previous methods with can detect anomalous regions, our AA-CLIP demonstrates fewer false-negative predictions in both industrial and medical domains, accurately highlighting the correct anomaly regions." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 607, + 166, + 620 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 607, + 166, + 620 + ], + "spans": [ + { + "bbox": [ + 55, + 607, + 166, + 620 + ], + "type": "text", + "content": "4.4. Ablations Analysis" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 625, + 295, + 673 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 625, + 295, + 673 + ], + "spans": [ + { + "bbox": [ + 55, + 625, + 295, + 673 + ], + "type": "text", + "content": "We conduct thorough ablation experiments of our refinement of both visual and text space, as shown in Tab. 3 and Fig. 7. The second row in Tab. 3, which mirrors the structure of VAND [7], serves as our baseline." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 55, + 677, + 296, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 677, + 296, + 715 + ], + "spans": [ + { + "bbox": [ + 55, + 677, + 296, + 715 + ], + "type": "text", + "content": "Image Space: As shown in Tab. 3 line “2,” inserting the vallina linear adapter into transformer layers results in a significant decline in zero-shot performance, indicating the" + } + ] + } + ], + "index": 9 + }, + { + "type": "table", + "bbox": [ + 317, + 462, + 553, + 555 + ], + "blocks": [ + { + "bbox": [ + 55, + 415, + 555, + 450 + ], + "lines": [ + { + "bbox": [ + 55, + 415, + 555, + 450 + ], + "spans": [ + { + "bbox": [ + 55, + 415, + 555, + 450 + ], + "type": "text", + "content": "Table 2. Image-level AUROC of zero-shot AD methods in Industrial and Medical domains. Method sources and the number of shots used for training are noted. Results of methods with * are copied from the papers or inferred from official weight. Best results are highlighted as first, second and third." 
+ } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 317, + 462, + 553, + 555 + ], + "lines": [ + { + "bbox": [ + 317, + 462, + 553, + 555 + ], + "spans": [ + { + "bbox": [ + 317, + 462, + 553, + 555 + ], + "type": "table", + "html": "
MethodAvg. AUROC
Pixel-LevelImage-Level
CLIP50.369.3
Image1. + Linear Proj. (VAND [7])88.969.3
2. + Adapter48.9(-40.0)53.4(-15.9)
3. + Residual Adapter91.3(+2.4)80.7(+11.4)
Text4. + Residual Adapter92.1(+3.2)82.6(+13.3)
5. + Disentangle Loss92.7(+3.8)83.3(+14.0)
", + "image_path": "310b5e0e3ab578af9dc8c5345eb476cb4ecfb653e3f92ce809ac3f021c9ea9e0.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "table_body" + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 557, + 555, + 624 + ], + "lines": [ + { + "bbox": [ + 313, + 557, + 555, + 624 + ], + "spans": [ + { + "bbox": [ + 313, + 557, + 555, + 624 + ], + "type": "text", + "content": "Table 3. Ablation Study of Our Training Strategy with VisA-Trained 64-Shot Setup. Our contributions are bold. While VAND uses linear projectors to improve AD performance, incorporating Residual Adapters further refines patch feature adaptation. Moreover, integrating our Disentangle Loss yields the best overall results." + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 313, + 636, + 554, + 684 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 636, + 554, + 684 + ], + "spans": [ + { + "bbox": [ + 313, + 636, + 554, + 684 + ], + "type": "text", + "content": "damage of the original generalization ability of CLIP. Incorporating our Residual Adapters mitigates this issue (shown in line \"3\"). enhancing performance while preserving original information stored in CLIP." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 689, + 555, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 689, + 555, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 689, + 555, + 715 + ], + "type": "text", + "content": "Text Space: The last two rows in Tab. 3 highlight the impact of our approach in equipping CLIP's encoder with" + } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "text", + "content": "4750" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 73, + 71, + 173, + 281 + ], + "blocks": [ + { + "bbox": [ + 73, + 71, + 173, + 281 + ], + "lines": [ + { + "bbox": [ + 73, + 71, + 173, + 281 + ], + "spans": [ + { + "bbox": [ + 73, + 71, + 173, + 281 + ], + "type": "image", + "image_path": "7ed1b930e4e1be716f6dffd2599518a4b01f6650e935501bb949bd4c8306b1b1.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 55, + 289, + 295, + 333 + ], + "lines": [ + { + "bbox": [ + 55, + 289, + 295, + 333 + ], + "spans": [ + { + "bbox": [ + 55, + 289, + 295, + 333 + ], + "type": "text", + "content": "Figure 5. Average Results (Top) and Results on BTAD (Bottom) of Different methods Trained on 2-, 16-, 64-shot per Class and Full Data of VisA. Our method shows high fitting efficiency, achieving strong results across all data scales." 
+ } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 176, + 71, + 276, + 281 + ], + "blocks": [ + { + "bbox": [ + 176, + 71, + 276, + 281 + ], + "lines": [ + { + "bbox": [ + 176, + 71, + 276, + 281 + ], + "spans": [ + { + "bbox": [ + 176, + 71, + 276, + 281 + ], + "type": "image", + "image_path": "4bd703a8e390087c44ea087922519b276bf75f577a1ad93c32da10a282ef7e44.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 61, + 342, + 288, + 501 + ], + "blocks": [ + { + "bbox": [ + 61, + 342, + 288, + 501 + ], + "lines": [ + { + "bbox": [ + 61, + 342, + 288, + 501 + ], + "spans": [ + { + "bbox": [ + 61, + 342, + 288, + 501 + ], + "type": "image", + "image_path": "54bc6d0e000fbbc5adb9f62b15ea2babb5449548126f172d62748e47bc6428f7.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 55, + 506, + 295, + 551 + ], + "lines": [ + { + "bbox": [ + 55, + 506, + 295, + 551 + ], + "spans": [ + { + "bbox": [ + 55, + 506, + 295, + 551 + ], + "type": "text", + "content": "Figure 6. Visualization of Anomaly Localization Results of Original CLIP [42], AnomalyCLIP [59], VAND [7] and our AA-CLIP. Compared to previous methods, AA-CLIP demonstrates more reliable prediction capabilities in localizing anomaly." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 564, + 295, + 673 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 564, + 295, + 673 + ], + "spans": [ + { + "bbox": [ + 55, + 564, + 295, + 673 + ], + "type": "text", + "content": "anomaly-aware semantics. Line \"4.\" validates that, with AA-CLIP, the model's ability to discriminate anomalies further improves, as the AA-CLIP's text encoder provides a more precise semantic foundation. Adding Disentangle Loss leads to an additional improvement (shown in Line \"5\"), especially at image-level, validating the necessity of independence between normal and anomaly anchors. These results underscore the crucial role of text space refinement in improved anomaly localization and classification." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 677, + 295, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 677, + 295, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 677, + 295, + 713 + ], + "type": "text", + "content": "Two-Stage Training: To validate the necessity of two-stage training, we adapt both text and image encoders together within one stage (also adopted by AdaCLIP). As shown" + } + ] + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 321, + 75, + 545, + 185 + ], + "blocks": [ + { + "bbox": [ + 321, + 75, + 545, + 185 + ], + "lines": [ + { + "bbox": [ + 321, + 75, + 545, + 185 + ], + "spans": [ + { + "bbox": [ + 321, + 75, + 545, + 185 + ], + "type": "image", + "image_path": "b940ce809bb20f102d5ab0c763d469f6ba1df9bd728863b2e2e5aa8edcb03ef5.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 313, + 192, + 555, + 226 + ], + "lines": [ + { + "bbox": [ + 313, + 192, + 555, + 226 + ], + "spans": [ + { + "bbox": [ + 313, + 192, + 555, + 226 + ], + "type": "text", + "content": "Figure 7. Visualization of Text Space from One-Stage Training and from AdaCLIP. During one-stage training, class information collapses easily, leading to damaged zero-shot performance." 
+ } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_caption" + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 244, + 555, + 316 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 244, + 555, + 316 + ], + "spans": [ + { + "bbox": [ + 313, + 244, + 555, + 316 + ], + "type": "text", + "content": "in Fig. 7, one-stage model can easily exaggerate anomaly semantics and forget class information embedded in CLIP, damaging the model's generalization ability. The two-stage training strategy allows controlled adaptation, preserving CLIP's class-relevant knowledge in one end while adapting the other, as shown in Fig. 3." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 314, + 331, + 466, + 342 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 331, + 466, + 342 + ], + "spans": [ + { + "bbox": [ + 314, + 331, + 466, + 342 + ], + "type": "text", + "content": "5. Conclusion and Discussion" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 352, + 555, + 460 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 352, + 555, + 460 + ], + "spans": [ + { + "bbox": [ + 313, + 352, + 555, + 460 + ], + "type": "text", + "content": "To our knowledge, this is the first work to explicitly analyze the intrinsic Anomaly Unawareness problem in CLIP. To tackle this issue, we propose a simple yet effective two-stage training strategy to embed anomaly-aware information into CLIP, enabling clear disentanglement of anomaly representations across both seen and novel classes. By leveraging residual adapters, our method preserves CLIP's strong generalization ability, achieving outstanding zero-shot performance across multiple datasets." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 461, + 556, + 606 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 461, + 556, + 606 + ], + "spans": [ + { + "bbox": [ + 313, + 461, + 556, + 606 + ], + "type": "text", + "content": "Our adapted AA-CLIP, developed through this two-stage adaptation strategy, reveals the potential of refining CLIP's feature space for improved performance in downstream applications. Beyond addressing anomaly unawareness, our work also provides a potential foundation for tackling other \"unawareness\" issues within CLIP. These may include limitations in context-awareness or specificity to domain-relevant nuances, suggesting further applications of our method in expanding CLIP's adaptability across diverse tasks. Additionally, we observe signs of overfitting with full-shot training, suggesting potential saturation during CLIP adaptation and warranting further investigation." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 314, + 620, + 411, + 633 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 620, + 411, + 633 + ], + "spans": [ + { + "bbox": [ + 314, + 620, + 411, + 633 + ], + "type": "text", + "content": "Acknowledgement" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 641, + 555, + 712 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 641, + 555, + 712 + ], + "spans": [ + { + "bbox": [ + 313, + 641, + 555, + 712 + ], + "type": "text", + "content": "This work is supported by Natural Science Foundation of China under Grant 62271465, Suzhou Basic Research Program under Grant SYG202338, Open Fund Project of Guangdong Academy of Medical Sciences, China (No. YKY-KF202206), and Jiangsu Province Science Foundation for Youths (NO. BK20240464)." 
+ } + ] + } + ], + "index": 14 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 295, + 748, + 314, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 748, + 314, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 748, + 314, + 757 + ], + "type": "text", + "content": "4751" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 115, + 83 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 72, + 115, + 83 + ], + "spans": [ + { + "bbox": [ + 56, + 72, + 115, + 83 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 57, + 91, + 296, + 713 + ], + "type": "list", + "angle": 0, + "index": 14, + "blocks": [ + { + "bbox": [ + 61, + 91, + 296, + 123 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 91, + 296, + 123 + ], + "spans": [ + { + "bbox": [ + 61, + 91, + 296, + 123 + ], + "type": "text", + "content": "[1] Jinan Bao, Hanshi Sun, Hanqiu Deng, Yinsheng He, Zhaoxiang Zhang, and Xingyu Li. Bmad: Benchmarks for medical anomaly detection, 2024. 6" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 61, + 125, + 296, + 179 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 125, + 296, + 179 + ], + "spans": [ + { + "bbox": [ + 61, + 125, + 296, + 179 + ], + "type": "text", + "content": "[2] Paul Bergmann, Michael Fauser, David Sattlegger, and Carsten Steger. Mvtec ad-a comprehensive real-world dataset for unsupervised anomaly detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9592-9600, 2019. 1, 3, 6" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 62, + 180, + 295, + 213 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 180, + 295, + 213 + ], + "spans": [ + { + "bbox": [ + 62, + 180, + 295, + 213 + ], + "type": "text", + "content": "[3] Jorge Bernal, Javier Sánchez, and Fernando Vilarino. Towards automatic polyp detection with a polyp appearance model. Pattern Recognition, 45(9):3166-3182, 2012. 6" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 62, + 214, + 295, + 278 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 214, + 295, + 278 + ], + "spans": [ + { + "bbox": [ + 62, + 214, + 295, + 278 + ], + "type": "text", + "content": "[4] Jorge Bernal, F Javier Sánchez, Gloria Fernández-Esparrach, Debora Gil, Cristina Rodríguez, and Fernando Vilarino. Wm-dova maps for accurate polyp highlighting in colonoscopy: Validation vs. saliency maps from physicians. Computerized medical imaging and graphics, 43:99-111, 2015. 6" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 62, + 280, + 294, + 323 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 280, + 294, + 323 + ], + "spans": [ + { + "bbox": [ + 62, + 280, + 294, + 323 + ], + "type": "text", + "content": "[5] Yunkang Cao, Xiaohao Xu, Yuqi Cheng, Chen Sun, Zongwei Du, Liang Gao, and Weiming Shen. Personalizing vision-language models with hybrid prompts for zero-shot anomaly detection. IEEE Transactions on Cybernetics, 2025. 
3" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 62, + 325, + 295, + 379 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 325, + 295, + 379 + ], + "spans": [ + { + "bbox": [ + 62, + 325, + 295, + 379 + ], + "type": "text", + "content": "[6] Yunkang Cao, Jiangning Zhang, Luca Frittoli, Yuqi Cheng, Weiming Shen, and Giacomo Boracchi. Adaclip: Adapting clip with hybrid learnable prompts for zero-shot anomaly detection. In European Conference on Computer Vision, pages 55-72. Springer, 2025. 2, 3, 5, 6" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 62, + 380, + 295, + 434 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 380, + 295, + 434 + ], + "spans": [ + { + "bbox": [ + 62, + 380, + 295, + 434 + ], + "type": "text", + "content": "[7] Xuhai Chen, Yue Han, and Jiangning Zhang. April-gan: A zero-/few-shot anomaly classification and segmentation method for cvpr 2023 vand workshop challenge tracks 1&2: 1st place on zero-shot ad and 4th place on few-shot ad. arXiv preprint arXiv:2305.17382, 2023. 2, 3, 5, 6, 7, 8" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 62, + 436, + 295, + 490 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 436, + 295, + 490 + ], + "spans": [ + { + "bbox": [ + 62, + 436, + 295, + 490 + ], + "type": "text", + "content": "[8] Xuhai Chen, Jiangning Zhang, Guanzhong Tian, Haoyang He, Wuhao Zhang, Yabiao Wang, Chengjie Wang, and Yong Liu. Clip-ad: A language-guided staged dual-path model for zero-shot anomaly detection. arXiv preprint arXiv:2311.00453, 2023. 3" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 62, + 491, + 295, + 534 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 491, + 295, + 534 + ], + "spans": [ + { + "bbox": [ + 62, + 491, + 295, + 534 + ], + "type": "text", + "content": "[9] Jaemin Cho, Seunghyun Yoon, Ajinkya Kale, Franck Dernoncourt, Trung Bui, and Mohit Bansal. Fine-grained image captioning with clip reward. arXiv preprint arXiv:2205.13115, 2022. 2" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 57, + 536, + 295, + 590 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 536, + 295, + 590 + ], + "spans": [ + { + "bbox": [ + 57, + 536, + 295, + 590 + ], + "type": "text", + "content": "[10] Thomas Defard, Aleksandr Setkov, Angelique Loesch, and Romaric Audigier. Padim: a patch distribution modeling framework for anomaly detection and localization. In International Conference on Pattern Recognition, pages 475-489. Springer, 2021. 1, 2, 6" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 57, + 591, + 295, + 635 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 591, + 295, + 635 + ], + "spans": [ + { + "bbox": [ + 57, + 591, + 295, + 635 + ], + "type": "text", + "content": "[11] Hanqiu Deng and Xingyu Li. Anomaly detection via reverse distillation from one-class embedding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9737-9746, 2022. 1, 2, 6" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 57, + 635, + 295, + 669 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 635, + 295, + 669 + ], + "spans": [ + { + "bbox": [ + 57, + 635, + 295, + 669 + ], + "type": "text", + "content": "[12] Han Fang, Pengfei Xiong, Luhui Xu, and Yu Chen. Clip2video: Mastering video-text retrieval via image clip. arXiv preprint arXiv:2106.11097, 2021. 
2" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 57, + 670, + 295, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 670, + 295, + 713 + ], + "spans": [ + { + "bbox": [ + 57, + 670, + 295, + 713 + ], + "type": "text", + "content": "[13] Tharindu Fernando, Harshala Gammulle, Simon Denman, Sridha Sridharan, and Clinton Fookes. Deep learning for medical anomaly detection-a survey. ACM Computing Surveys (CSUR), 54(7):1-37, 2021. 1" + } + ] + } + ], + "index": 13 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 73, + 555, + 713 + ], + "type": "list", + "angle": 0, + "index": 27, + "blocks": [ + { + "bbox": [ + 316, + 73, + 554, + 127 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 73, + 554, + 127 + ], + "spans": [ + { + "bbox": [ + 316, + 73, + 554, + 127 + ], + "type": "text", + "content": "[14] Peng Gao, Shijie Geng, Renrui Zhang, Teli Ma, Rongyao Fang, Yongfeng Zhang, Hongsheng Li, and Yu Qiao. Clip-adapter: Better vision-language models with feature adapters. International Journal of Computer Vision, 132(2): 581-595, 2024. 3" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 317, + 129, + 555, + 184 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 129, + 555, + 184 + ], + "spans": [ + { + "bbox": [ + 317, + 129, + 555, + 184 + ], + "type": "text", + "content": "[15] Denis Gudovskiy, Shun Ishizaka, and Kazuki Kozuka. Cflow-ad: Real-time unsupervised anomaly detection with localization via conditional normalizing flows. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 98-107, 2022. 1, 2" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 317, + 186, + 554, + 239 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 186, + 554, + 239 + ], + "spans": [ + { + "bbox": [ + 317, + 186, + 554, + 239 + ], + "type": "text", + "content": "[16] Haoyang He, Jiangning Zhang, Hongxu Chen, Xuhai Chen, Zhishan Li, Xu Chen, Yabiao Wang, Chengjie Wang, and Lei Xie. Diad: A diffusion-based framework for multi-class anomaly detection. arXiv preprint arXiv:2312.06607, 2023. 2" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 316, + 242, + 554, + 306 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 242, + 554, + 306 + ], + "spans": [ + { + "bbox": [ + 316, + 242, + 554, + 306 + ], + "type": "text", + "content": "[17] Chaoqin Huang, Aofan Jiang, Jinghao Feng, Ya Zhang, Xin chao Wang, and Yanfeng Wang. Adapting visual-language models for generalizable anomaly detection in medical images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 11375-11385, 2024. 2" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 316, + 308, + 554, + 374 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 308, + 554, + 374 + ], + "spans": [ + { + "bbox": [ + 316, + 308, + 554, + 374 + ], + "type": "text", + "content": "[18] Chaoqin Huang, Aofan Jiang, Jinghao Feng, Ya Zhang, Xin chao Wang, and Yanfeng Wang. Adapting visual-language models for generalizable anomaly detection in medical images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11375-11385, 2024. 
3, 5, 6" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 317, + 376, + 554, + 431 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 376, + 554, + 431 + ], + "spans": [ + { + "bbox": [ + 317, + 376, + 554, + 431 + ], + "type": "text", + "content": "[19] Jongheon Jeong, Yang Zou, Taewan Kim, Dongqing Zhang, Avinash Ravichandran, and Onkar Dabeer. Winclip: Zero/few-shot anomaly classification and segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 19606-19616, 2023. 3, 6" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 317, + 433, + 554, + 498 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 433, + 554, + 498 + ], + "spans": [ + { + "bbox": [ + 317, + 433, + 554, + 498 + ], + "type": "text", + "content": "[20] Stepan Jezek, Martin Jonak, Radim Burget, Pavel Dvorak, and Milos Skotak. Deep learning-based defect detection of metal parts: evaluating current methods in complex conditions. In 2021 13th International congress on ultra modern telecommunications and control systems and workshops (ICUMT), pages 66-71. IEEE, 2021. 6" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 316, + 499, + 554, + 543 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 499, + 554, + 543 + ], + "spans": [ + { + "bbox": [ + 316, + 499, + 554, + 543 + ], + "type": "text", + "content": "[21] Ruixiang Jiang, Lingbo Liu, and Changwen Chen. Clipcount: Towards text-guided zero-shot object counting. In Proceedings of the 31st ACM International Conference on Multimedia, pages 4535-4545, 2023. 2" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 316, + 544, + 554, + 578 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 544, + 554, + 578 + ], + "spans": [ + { + "bbox": [ + 316, + 544, + 554, + 578 + ], + "type": "text", + "content": "[22] Zeeshan Khan, Makarand Tapaswi, et al. Figclip: Fine-grained clip adaptation via densely annotated videos. arXiv preprint arXiv:2401.07669, 2024. 2" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 316, + 579, + 554, + 623 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 579, + 554, + 623 + ], + "spans": [ + { + "bbox": [ + 316, + 579, + 554, + 623 + ], + "type": "text", + "content": "[23] Daehyun Kim, Sungyong Baik, and Tae Hyun Kim. Sanflow: Semantic-aware normalizing flow for anomaly detection. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. 1, 2" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 316, + 624, + 554, + 678 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 624, + 554, + 678 + ], + "spans": [ + { + "bbox": [ + 316, + 624, + 554, + 678 + ], + "type": "text", + "content": "[24] Junnan Li, Ramprasaath Selvaraju, Akhilesh Gotmare, Shafiq Joty, Caiming Xiong, and Steven Chu Hong Hoi. Align before fuse: Vision and language representation learning with momentum distillation. Advances in neural information processing systems, 34:9694-9705, 2021. 2" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 316, + 680, + 554, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 680, + 554, + 713 + ], + "spans": [ + { + "bbox": [ + 316, + 680, + 554, + 713 + ], + "type": "text", + "content": "[25] Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. 
In Interna" + } + ] + } + ], + "index": 26 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "text", + "content": "4752" + } + ] + } + ], + "index": 28 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 295, + 712 + ], + "type": "list", + "angle": 0, + "index": 13, + "blocks": [ + { + "bbox": [ + 77, + 72, + 294, + 94 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 77, + 72, + 294, + 94 + ], + "spans": [ + { + "bbox": [ + 77, + 72, + 294, + 94 + ], + "type": "text", + "content": "tional conference on machine learning, pages 12888-12900. PMLR, 2022." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 96, + 295, + 149 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 96, + 295, + 149 + ], + "spans": [ + { + "bbox": [ + 56, + 96, + 295, + 149 + ], + "type": "text", + "content": "[26] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In International conference on machine learning, pages 19730-19742. PMLR, 2023. 2" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 151, + 294, + 194 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 151, + 294, + 194 + ], + "spans": [ + { + "bbox": [ + 56, + 151, + 294, + 194 + ], + "type": "text", + "content": "[27] Xin Li, Dongze Lian, Zhihe Lu, Jiawang Bai, Zhibo Chen, and Xinchao Wang. Graphadapter: Tuning vision-language models with dual knowledge graph. Advances in Neural Information Processing Systems, 36, 2024. 2" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 56, + 195, + 294, + 237 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 195, + 294, + 237 + ], + "spans": [ + { + "bbox": [ + 56, + 195, + 294, + 237 + ], + "type": "text", + "content": "[28] Yi Li, Hualiang Wang, Yiqun Duan, and Xiaomeng Li. Clip surgery for better explainability with enhancement in open-vocabulary tasks. arXiv preprint arXiv:2304.05653, 2023. 5" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 56, + 239, + 294, + 304 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 239, + 294, + 304 + ], + "spans": [ + { + "bbox": [ + 56, + 239, + 294, + 304 + ], + "type": "text", + "content": "[29] Yuqi Lin, Minghao Chen, Wenxiao Wang, Boxi Wu, Ke Li, Binbin Lin, Haifeng Liu, and Xiaofei He. Clip is also an efficient segmenter: A text-driven approach for weakly supervised semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15305-15314, 2023. 2, 3" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 56, + 305, + 294, + 370 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 305, + 294, + 370 + ], + "spans": [ + { + "bbox": [ + 56, + 305, + 294, + 370 + ], + "type": "text", + "content": "[30] Jie Liu, Yixiao Zhang, Jie-Neng Chen, Junfei Xiao, Yongyi Lu, Bennett A Landman, Yixuan Yuan, Alan Yuille, Yucheng Tang, and Zongwei Zhou. Clip-driven universal model for organ segmentation and tumor detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 21152-21164, 2023. 
2, 3" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 56, + 372, + 294, + 425 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 372, + 294, + 425 + ], + "spans": [ + { + "bbox": [ + 56, + 372, + 294, + 425 + ], + "type": "text", + "content": "[31] Zhikang Liu, Yiming Zhou, Yuansheng Xu, and Zilei Wang. Simplenet: A simple network for image anomaly detection and localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20402-20411, 2023. 1, 2" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 56, + 426, + 294, + 470 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 426, + 294, + 470 + ], + "spans": [ + { + "bbox": [ + 56, + 426, + 294, + 470 + ], + "type": "text", + "content": "[32] Ruiying Lu, YuJie Wu, Long Tian, Dongsheng Wang, Bo Chen, Xiyang Liu, and Ruimin Hu. Hierarchical vector quantized transformer for multi-class unsupervised anomaly detection. arXiv preprint arXiv:2310.14228, 2023. 2" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 56, + 471, + 294, + 524 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 471, + 294, + 524 + ], + "spans": [ + { + "bbox": [ + 56, + 471, + 294, + 524 + ], + "type": "text", + "content": "[33] Wenxin Ma, Qingsong Yao, Xiang Zhang, Zhelong Huang, Zihang Jiang, and S.Kevin Zhou. Towards accurate unified anomaly segmentation. In Proceedings of the Winter Conference on Applications of Computer Vision (WACV), pages 1342-1352, 2025. 2" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 56, + 526, + 294, + 581 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 526, + 294, + 581 + ], + "spans": [ + { + "bbox": [ + 56, + 526, + 294, + 581 + ], + "type": "text", + "content": "[34] Pankaj Mishra, Riccardo Verk, Daniele Fornasier, Claudio Piciarelli, and Gian Luca Foresti. Vt-adt: A vision transformer network for image anomaly detection and localization. In 2021 IEEE 30th International Symposium on Industrial Electronics (ISIE), pages 01–06. IEEE, 2021. 6" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 56, + 582, + 294, + 613 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 582, + 294, + 613 + ], + "spans": [ + { + "bbox": [ + 56, + 582, + 294, + 613 + ], + "type": "text", + "content": "[35] Ron Mokady, Amir Hertz, and Amit H Bermano. Clipcap: Clip prefix for image captioning. arXiv preprint arXiv:2111.09734, 2021. 2" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 56, + 614, + 294, + 669 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 614, + 294, + 669 + ], + "spans": [ + { + "bbox": [ + 56, + 614, + 294, + 669 + ], + "type": "text", + "content": "[36] Liliane Momeni, Mathilde Caron, Arsha Nagrani, Andrew Zisserman, and Cordelia Schmid. Verbs in action: Improving verb understanding in video-language models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 15579-15591, 2023. 2" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 56, + 670, + 294, + 712 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 670, + 294, + 712 + ], + "spans": [ + { + "bbox": [ + 56, + 670, + 294, + 712 + ], + "type": "text", + "content": "[37] Amin Karimi Monsefi, Kishore Prakash Sailaja, Ali Alilooee, Ser-Nam Lim, and Rajiv Ramnath. Detailclip: Detail-oriented clip for fine-grained tasks. arXiv preprint arXiv:2409.06809, 2024. 
3" + } + ] + } + ], + "index": 12 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 73, + 553, + 713 + ], + "type": "list", + "angle": 0, + "index": 26, + "blocks": [ + { + "bbox": [ + 316, + 73, + 553, + 117 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 73, + 553, + 117 + ], + "spans": [ + { + "bbox": [ + 316, + 73, + 553, + 117 + ], + "type": "text", + "content": "[38] Roni Paiss, Ariel Ephrat, Omer Tov, Shiran Zada, Inbar Mosseri, Michal Irani, and Tali Dekel. Teaching clip to count to ten. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3170-3180, 2023. 2" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 316, + 118, + 553, + 171 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 118, + 553, + 171 + ], + "spans": [ + { + "bbox": [ + 316, + 118, + 553, + 171 + ], + "type": "text", + "content": "[39] Or Patashnik, Zongze Wu, Eli Shechtman, Daniel Cohen-Or, and Dani Lischinski. Styleclip: Text-driven manipulation of stylegan imagery. In Proceedings of the IEEE/CVF international conference on computer vision, pages 2085–2094, 2021. 2" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 316, + 172, + 553, + 258 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 172, + 553, + 258 + ], + "spans": [ + { + "bbox": [ + 316, + 172, + 553, + 258 + ], + "type": "text", + "content": "[40] Konstantin Pogorelov, Kristin Ranheim Randel, Carsten Griwodz, Sigrun Losada Eskeland, Thomas de Lange, Dag Johansen, Concetto Spampinato, Duc-Tien Dang-Nguyen, Mathias Lux, Peter Thelin Schmidt, Michael Riegler, and Pål Halvorsen. Kvasir: A multi-class image dataset for computer aided gastrointestinal disease detection. In Proceedings of the 8th ACM on Multimedia Systems Conference, pages 164-169, New York, NY, USA, 2017. ACM. 6" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 316, + 260, + 553, + 304 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 260, + 553, + 304 + ], + "spans": [ + { + "bbox": [ + 316, + 260, + 553, + 304 + ], + "type": "text", + "content": "[41] Zhen Qu, Xian Tao, Mukesh Prasad, Fei Shen, Zhengtao Zhang, Xinyi Gong, and Guiguang Ding. Vcp-clip: A visual context prompting model for zero-shot anomaly segmentation. arXiv preprint arXiv:2407.12276, 2024. 2, 3, 6" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 316, + 304, + 553, + 370 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 304, + 553, + 370 + ], + "spans": [ + { + "bbox": [ + 316, + 304, + 553, + 370 + ], + "type": "text", + "content": "[42] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748-8763. PMLR, 2021. 2, 8" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 316, + 371, + 553, + 425 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 371, + 553, + 425 + ], + "spans": [ + { + "bbox": [ + 316, + 371, + 553, + 425 + ], + "type": "text", + "content": "[43] Karsten Roth, Latha Pemula, Joaquin Zepeda, Bernhard Schölkopf, Thomas Brox, and Peter Gehler. Towards total recall in industrial anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14318-14328, 2022. 
2, 6" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 316, + 426, + 553, + 491 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 426, + 553, + 491 + ], + "spans": [ + { + "bbox": [ + 316, + 426, + 553, + 491 + ], + "type": "text", + "content": "[44] Thomas Schlegl, Philipp Seebock, Sebastian M Waldstein, Ursula Schmidt-Erfurth, and Georg Langs. Unsupervised anomaly detection with generative adversarial networks to guide marker discovery. In International conference on information processing in medical imaging, pages 146-157. Springer, 2017. 2" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 316, + 492, + 553, + 536 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 492, + 553, + 536 + ], + "spans": [ + { + "bbox": [ + 316, + 492, + 553, + 536 + ], + "type": "text", + "content": "[45] Sanjay Subramanian, William Merrill, Trevor Darrell, Matt Gardner, Sameer Singh, and Anna Rohrbach. Reclip: A strong zero-shot baseline for referring expression comprehension. arXiv preprint arXiv:2204.05991, 2022. 2" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 316, + 537, + 553, + 591 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 537, + 553, + 591 + ], + "spans": [ + { + "bbox": [ + 316, + 537, + 553, + 591 + ], + "type": "text", + "content": "[46] Yingtian Tang, Yutaro Yamada, Yoyo Zhang, and Ilker Yildirim. When are lemons purple? the concept association bias of vision-language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 14333-14348, 2023. 2" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 316, + 592, + 553, + 646 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 592, + 553, + 646 + ], + "spans": [ + { + "bbox": [ + 316, + 592, + 553, + 646 + ], + "type": "text", + "content": "[47] David Vázquez, Jorge Bernal, F Javier Sánchez, Gloria Fernández-Esparrach, Antonio M López, Adriana Romero, Michal Drozdzal, and Aaron Courville. A benchmark for endoluminal scene segmentation of colonoscopy images. Journal of healthcare engineering, 2017(1):4037190, 2017. 6" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 316, + 647, + 553, + 680 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 647, + 553, + 680 + ], + "spans": [ + { + "bbox": [ + 316, + 647, + 553, + 680 + ], + "type": "text", + "content": "[48] Guodong Wang, Shumin Han, Errui Ding, and Di Huang. Student-teacher feature pyramid matching for anomaly detection. arXiv preprint arXiv:2103.04257, 2021. 2" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 316, + 681, + 553, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 681, + 553, + 713 + ], + "spans": [ + { + "bbox": [ + 316, + 681, + 553, + 713 + ], + "type": "text", + "content": "[49] Zhaoqing Wang, Yu Lu, Qiang Li, Xunqiang Tao, Yandong Guo, Mingming Gong, and Tongliang Liu. Cris: Clipdriven referring image segmentation. 
In Proceedings of" + } + ] + } + ], + "index": 25 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 295, + 749, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 749, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 749, + 315, + 757 + ], + "type": "text", + "content": "4753" + } + ] + } + ], + "index": 27 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 295, + 712 + ], + "type": "list", + "angle": 0, + "index": 13, + "blocks": [ + { + "bbox": [ + 77, + 72, + 294, + 95 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 77, + 72, + 294, + 95 + ], + "spans": [ + { + "bbox": [ + 77, + 72, + 294, + 95 + ], + "type": "text", + "content": "the IEEE/CVF conference on computer vision and pattern recognition, pages 11686-11695, 2022. 2, 3" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 98, + 295, + 151 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 98, + 295, + 151 + ], + "spans": [ + { + "bbox": [ + 56, + 98, + 295, + 151 + ], + "type": "text", + "content": "[50] Mengde Xu, Zheng Zhang, Fangyun Wei, Han Hu, and Xiang Bai. Side adapter network for open-vocabulary semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2945-2954, 2023. 3" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 154, + 294, + 196 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 154, + 294, + 196 + ], + "spans": [ + { + "bbox": [ + 56, + 154, + 294, + 196 + ], + "type": "text", + "content": "[51] Xingyi Yang and Xinchao Wang. Diffusion model as representation learner. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 18938-18949, 2023. 2" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 56, + 201, + 294, + 243 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 201, + 294, + 243 + ], + "spans": [ + { + "bbox": [ + 56, + 201, + 294, + 243 + ], + "type": "text", + "content": "[52] Qingsong Yao, Li Xiao, Peihang Liu, and S Kevin Zhou. Label-free segmentation of Covid-19 lesions in lung ct. IEEE transactions on medical imaging, 40(10):2808-2819, 2021. 2" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 56, + 247, + 294, + 289 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 247, + 294, + 289 + ], + "spans": [ + { + "bbox": [ + 56, + 247, + 294, + 289 + ], + "type": "text", + "content": "[53] Qingsong Yao, Zecheng He, Yuexiang Li, Yi Lin, Kai Ma, Yefeng Zheng, and S Kevin Zhou. Adversarial medical image with hierarchical feature hiding. IEEE Transactions on Medical Imaging, 43(4):1296-1307, 2023. 2" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 56, + 293, + 294, + 336 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 293, + 294, + 336 + ], + "spans": [ + { + "bbox": [ + 56, + 293, + 294, + 336 + ], + "type": "text", + "content": "[54] Xincheng Yao, Chongyang Zhang, Ruoqi Li, Jun Sun, and Zhenyu Liu. One-for-all: Proposal masked cross-class anomaly detection. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 4792-4800, 2023. 
2" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 56, + 338, + 294, + 381 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 338, + 294, + 381 + ], + "spans": [ + { + "bbox": [ + 56, + 338, + 294, + 381 + ], + "type": "text", + "content": "[55] Zhiyuan You, Lei Cui, Yujun Shen, Kai Yang, Xin Lu, Yu Zheng, and Xinyi Le. A unified model for multi-class anomaly detection. Advances in Neural Information Processing Systems, 35:4571-4584, 2022. 2, 6" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 56, + 384, + 294, + 437 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 384, + 294, + 437 + ], + "spans": [ + { + "bbox": [ + 56, + 384, + 294, + 437 + ], + "type": "text", + "content": "[56] Zhiyuan You, Kai Yang, Wenhan Luo, Lei Cui, Yu Zheng, and Xinyi Le. Adtr: Anomaly detection transformer with feature reconstruction. In International Conference on Neural Information Processing, pages 298-310. Springer, 2022. 2" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 56, + 441, + 294, + 495 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 441, + 294, + 495 + ], + "spans": [ + { + "bbox": [ + 56, + 441, + 294, + 495 + ], + "type": "text", + "content": "[57] Renrui Zhang, Wei Zhang, Rongyao Fang, Peng Gao, Kunchang Li, Jifeng Dai, Yu Qiao, and Hongsheng Li. Tip-adapter: Training-free adaption of clip for few-shot classification. In European conference on computer vision, pages 493-510. Springer, 2022. 3" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 56, + 498, + 294, + 553 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 498, + 294, + 553 + ], + "spans": [ + { + "bbox": [ + 56, + 498, + 294, + 553 + ], + "type": "text", + "content": "[58] Xuan Zhang, Shiyu Li, Xi Li, Ping Huang, Jiulong Shan, and Ting Chen. Destseg: Segmentation guided denoising student-teacher for anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3914–3923, 2023. 1, 2" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 56, + 555, + 294, + 598 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 555, + 294, + 598 + ], + "spans": [ + { + "bbox": [ + 56, + 555, + 294, + 598 + ], + "type": "text", + "content": "[59] Qihang Zhou, Guansong Pang, Yu Tian, Shibo He, and Jiming Chen. Anomalyclip: Object-agnostic prompt learning for zero-shot anomaly detection. arXiv preprint arXiv:2310.18961, 2023. 3, 5, 6, 8" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 56, + 601, + 294, + 655 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 601, + 294, + 655 + ], + "spans": [ + { + "bbox": [ + 56, + 601, + 294, + 655 + ], + "type": "text", + "content": "[60] Ziqin Zhou, Yinjie Lei, Bowen Zhang, Lingqiao Liu, and Yifan Liu. Zegclip: Towards adapting clip for zero-shot semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11175-11185, 2023. 2, 3" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 56, + 658, + 294, + 712 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 658, + 294, + 712 + ], + "spans": [ + { + "bbox": [ + 56, + 658, + 294, + 712 + ], + "type": "text", + "content": "[61] Bo Zong, Qi Song, Martin Renqiang Min, Wei Cheng, Cristian Lumezanu, Daeki Cho, and Haifeng Chen. Deep autoencoding gaussian mixture model for unsupervised anomaly detection. 
In International conference on learning representations, 2018. 2" + } + ] + } + ], + "index": 12 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 72, + 553, + 127 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 72, + 553, + 127 + ], + "spans": [ + { + "bbox": [ + 316, + 72, + 553, + 127 + ], + "type": "text", + "content": "[62] Yang Zou, Jongheon Jeong, Latha Pemula, Dongqing Zhang, and Onkar Dabeer. Spot-the-difference self-supervised pretraining for anomaly detection and segmentation. In European Conference on Computer Vision, pages 392-408. Springer, 2022. 6" + } + ] + } + ], + "index": 14 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 295, + 749, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 749, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 749, + 315, + 757 + ], + "type": "text", + "content": "4754" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 10 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2025/ABBSPO_ Adaptive Bounding Box Scaling and Symmetric Prior based Orientation Prediction for Detecting Aerial Image Objects/4a21cd8b-8cbb-4db3-b4a2-0a741c570b46_content_list.json b/2025/ABBSPO_ Adaptive Bounding Box Scaling and Symmetric Prior based Orientation Prediction for Detecting Aerial Image Objects/4a21cd8b-8cbb-4db3-b4a2-0a741c570b46_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..a5e73cbda8d2c658dced14db9b6671ae1061e383 --- /dev/null +++ b/2025/ABBSPO_ Adaptive Bounding Box Scaling and Symmetric Prior based Orientation Prediction for Detecting Aerial Image Objects/4a21cd8b-8cbb-4db3-b4a2-0a741c570b46_content_list.json @@ -0,0 +1,1897 @@ +[ + { + "type": "text", + "text": "ABBSPO: Adaptive Bounding Box Scaling and Symmetric Prior based Orientation Prediction for Detecting Aerial Image Objects", + "text_level": 1, + "bbox": [ + 142, + 130, + 854, + 176 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Woojin Lee $^{1*}$ Hyugjae Chang $^{1*}$ Jaeho Moon $^{1}$ Jaehyup Lee $^{2\\dagger}$ Munchurl Kim $^{1\\dagger}$ $^{1}$ KAIST ${}^{2}$ KNU", + "bbox": [ + 169, + 202, + 833, + 237 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "{woojin412, hmnc97, jaeho.moon, mkimee}@kaist.ac.kr jaehyuplee@knu.ac.kr", + "bbox": [ + 253, + 239, + 743, + 257 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "https://kaist-viclab.github.io/ABBSPO_site/", + "bbox": [ + 305, + 258, + 691, + 273 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/568859ed62ff0b20ad4e6ee353770fe3878e44cbdbe8d43e11f475ee89f78213.jpg", + "image_caption": [ + "① GT RBox ② GT C-HBox" + ], + "image_footnote": [], + "bbox": [ + 112, + 304, + 256, + 407 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/b27b775cfa0792a81a90726212506fafdc20232a590b12f4e10a34d0a8bb840b.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 261, + 295, + 382, + 419 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/4036c351f70e6be817546f6ab541a1e7b674cd0d755dd3972a0e99308263a526.jpg", + "image_caption": [ + "(a) Two types of GT HBox", + "(b) Visual comparison of HBox-supervised oriented detectors", + "Figure 1. Performance comparison of HBox-supervised orientated detectors. (a) Top: A coarse horizontal bounding box (C-HBox) $(②)$ and its corresponding rotated bounding box (RBox) $(①)$ . 
Bottom: A tight horizontal bounding box (T-HBox)( $(2)$ ) and its corresponding RBox $(①)$ . (b) Our ABBSPO is capable of accurately detecting both orientations and scales for GT C-HBoxes and T-HBoxes. (c) Average Precision $\\left(\\mathrm{AP}_{50}\\right)$ for H2RBox [42], H2RBox-v2 [48], and our ABBSPO. $3-\\mathrm{AP}_{50}$ represents the mean $\\mathrm{AP}_{50}$ for three complex shaped objects: (i) DIOR: 'airplane', 'expressway service area', and 'overpass' and (ii) DOTA: 'plane', 'swimming pool', and 'helicopter'." + ], + "image_footnote": [], + "bbox": [ + 261, + 421, + 380, + 534 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/abbe6ce2f4ccfae50e61911adbfc36c43a0599bc320175b91a448662945033d5.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 383, + 295, + 504, + 419 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/d935d857a5b65cffbee17d0414115fdc61d65805e5ef4c398351ff5540f95a21.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 385, + 420, + 504, + 532 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/8bd69b0bee5866f4cba10b95e82ac4c8cc5b57f2cd8ba8783c90a33616727935.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 509, + 295, + 629, + 419 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/e955c6c9f48296b82376f41da38d1df88096b41e872664228316cfcd675731db.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 509, + 420, + 629, + 532 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/949e5f4d898a521f3773f2cdc29482ebb42f4bc6bc2189ba275e0d71a3b33682.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 635, + 311, + 883, + 425 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/a5b098d46dbb0f780bcd7fc46372d3c5816887a0ef6a2fc8e0c422f150b54626.jpg", + "image_caption": [ + "H2RBox H2RBox-v2 ABBSPO) Performance overview 3-AP50 AP50" + ], + "image_footnote": [], + "bbox": [ + 635, + 431, + 883, + 534 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 246, + 651, + 326, + 667 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Weakly supervised Oriented Object Detection (WS-OOD) has gained attention as a cost-effective alternative to fully supervised methods, providing efficiency and high accuracy. Among weakly supervised approaches, horizontal bounding box (HBox) supervised OOD stands out for its ability to directly leverage existing HBox annotations while achieving the highest accuracy under weak supervision settings. This paper introduces adaptive bounding box scaling and symmetry-prior-based orientation prediction, called ABBSPO that is a framework for WS-OOD. Our ABB-", + "bbox": [ + 89, + 690, + 483, + 843 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "SPO addresses the limitations of previous HBox-supervised OOD methods, which compare ground truth (GT) HBoxes directly with predicted RBoxes' minimum circumscribed rectangles, often leading to inaccuracies. 
To overcome this, we propose: (i) Adaptive Bounding Box Scaling (ABBS) that appropriately scales the GT HBoxes to optimize for the size of each predicted RBox, ensuring more accurate prediction for RBoxes' scales; and (ii) a Symmetric Prior Angle (SPA) loss that uses the inherent symmetry of aerial objects for self-supervised learning, addressing the issue in previous methods where learning fails if they consistently make incorrect predictions for all three augmented views (original, rotated, and flipped). Extensive experimental results demonstrate that our ABBsPO achieves state-of-the-art results, outperforming existing methods.", + "bbox": [ + 511, + 652, + 906, + 880 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "CVF", + "bbox": [ + 106, + 2, + 181, + 42 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore.", + "bbox": [ + 236, + 0, + 810, + 46 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "*Co-first authors (equal contribution).", + "bbox": [ + 107, + 849, + 310, + 863 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "† Co-corresponding authors.", + "bbox": [ + 109, + 863, + 256, + 875 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "$^{1}$ Korea Advanced Institute of Science and Technology (KAIST).", + "bbox": [ + 109, + 875, + 449, + 887 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "$^{2}$ Kyungpook National University (KNU).", + "bbox": [ + 109, + 887, + 328, + 898 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "8848", + "bbox": [ + 482, + 944, + 514, + 955 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1. Introduction", + "text_level": 1, + "bbox": [ + 91, + 89, + 222, + 104 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Object detection often leverages supervised learning with ground truth horizontal bounding box labels (GT HBoxes) to locate the objects of interest. However, the usage of GT HBoxes limits the precise localization of the objects with their orientations and tight surrounding boundaries, especially for objects such as airplanes and ships of various orientations in aerial images. To handle object detection as an oriented object detection problem, more precise rotated bounding box labels (GT RBoxes) are required, which is very costly to generate [49]. So, to mitigate this challenge, previous methods [17, 23, 42, 47-49] have explored weakly supervised oriented object detection (OOD) that utilizes less expensive forms of annotations, such as image-level, point and HBox annotations. Among these, the use of HBoxes is the most popular due to their widespread availability in existing public datasets [4, 9, 12, 21, 30, 31] to predict the RBoxes for objects of interest. So, this approach can detour the costly process of generating GT RBoxes.", + "bbox": [ + 89, + 114, + 485, + 387 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "The previous weakly supervised (WS) learning of OOD [22, 27, 42, 48] utilizes GT HBoxes in the forms of coarse HBoxes, called GT C-HBoxes, as supervision to compare with the HBoxes derived as the minimum circumscribed rectangles from the predicted RBoxes by their OOD models. As shown in the upper figure of Fig. 
1-(a), the GT C-HBoxes are defined as coarse horizontal bounding boxes that loosely encompass the boundaries of objects (not tightly bounded). The GT HBoxes of the DOTA [29] dataset are in the forms of C-HBoxes which are derived as the minimum circumscribed horizontal bounding boxes of their GT RBoxes. However, when the previous OOD methods [42, 48] are supervised with the other GT HBoxes that are in the form of tight HBoxes, called GT T-HBoxes (e.g. DIOR dataset [13]), as shown in the bottom figure of Fig. 1-(a), we found that their performances are significantly degraded because GT T-HBoxes tend to have different scales, compared to those of GT C-HBoxes (see Fig. 1-(c)). As shown in Fig. 1-(b), this causes the previous methods to predict either RBoxes with accurate orientations but inaccurate scales smaller than the sizes of their corresponding objects, or the RBoxes with inaccurate (close to horizontal) orientations but somewhat accurate scales (almost the same as HBoxes).", + "bbox": [ + 89, + 393, + 483, + 743 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "To overcome the above limitations of the previous WS-OOD methods, we propose an adaptive bounding box scaling and symmetry-prior-based orientation prediction, called as ABBSPO, as a WS-OOD framework that can be effectively trained with either GT C-HBoxes or GT T-Hboxes for aerial images. For this, (i) a novel Adaptive Bounding Box Scaling (ABBS) module is designed to have the flexibility of adjusting the GT HBoxes for each object into random sizes and then selecting the optimal scaled GT HBoxes that allow it to encompass the predicted RBoxes. Note that", + "bbox": [ + 89, + 750, + 485, + 901 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "the previous methods are not possible to have such flexibility for the adjustment of GT HBoxes; (ii) An angle learning module is proposed in a self-supervised manner that utilizes the symmetric priors of the objects that open appear in top-down views of aerial images. As shown in Figs 1-(b) and (c), Our proposed method predicts accurate orientation and surrounding boxes of objects for both cases of using GT C-HBoxes and GT T-HBoxes, outperforming the previous methods in angle accuracy and localization in terms of average precision (AP). Our contributions are summarized as:", + "bbox": [ + 511, + 90, + 906, + 243 + ], + "page_idx": 1 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- To the best of our knowledge, our work is the first to address the limitations of previous weakly supervised OOD learning methods with T-HBoxes as GT. To overcome this, we propose a novel weakly supervised OOD method that can be effectively trained with T-HBoxes or C-HBoxes that can be cheaply annotated as GT;", + "- The adaptive bounding box scaling (ABBS) module is proposed to flexibly adjust the HBox (GT) for each object toward an appropriately scaled HBox. This allows part of the predicted RBoxes to place outside the T-HBox (GT), yielding precise RBox prediction;", + "- A symmetric prior angle (SPA) loss is presented to enhance the orientation prediction accuracy by leveraging the symmetric priors of the objects in aerial images;", + "- Our method significantly outperforms the state-of-the-art OOD methods using weakly supervised learning with HBoxes (GT) for aerial datasets." + ], + "bbox": [ + 519, + 258, + 906, + 515 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2. 
Related Work", + "text_level": 1, + "bbox": [ + 513, + 535, + 653, + 550 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2.1. RBox-supervised Oriented Object Detection", + "text_level": 1, + "bbox": [ + 511, + 560, + 885, + 575 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Oriented Object Detection (OOD) has gained significant attention, leading to extensive research in RBox-supervised methods (using GT RBoxes) such as Rotated RetinaNet [20], Rotated FCOS [24], $\\mathrm{R}^3\\mathrm{Det}$ [37], ROI Transformer [6], ReDet [8], and $\\mathrm{S}^2\\mathrm{A}$ -Net [7]. Rotated FCOS [24] improves OOD performance by introducing center-ness, which assigns weights to samples based on their proposal locations, thereby emphasizing well-positioned proposals. OrientedRepPoints methods [3, 14, 43], in contrast, utilize flexible receptive fields to extract key object points. However, a common challenge in RBox-supervised OOD methods is the boundary discontinuity problem that arises from the definition and prediction of angle parameters $(\\theta)$ [33, 34]. To address this, several methods modified the ways of defining the RBox representations, such as Gaussian distributions [25, 26, 35, 36, 38-41], thereby avoiding straightforward regression of angle parameters. On the other hand, in weakly supervised learning, the boundary discontinuity issue does not arise thanks to the absence of direct RBox supervision, allowing for more stable angle predictions without the need for complex mitigation strategies.", + "bbox": [ + 511, + 583, + 908, + 902 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "8849", + "bbox": [ + 482, + 944, + 514, + 955 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/9058721b81e8835b11068c548c9030cde9f80c85472d688e79cb7a9fc205dd43.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 94, + 89, + 903, + 232 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/3b91cfd3d439130d68778ce5b060bd3e80c614824624b643e61de1a383877e2f.jpg", + "image_caption": [ + "Figure 2. Overall pipeline of our ABBsPO framework. Our ABBsPO leverages weakly supervised learning from HBox annotations to accurately predict RBoxes. The framework incorporates the Orientation Learning Branch (OLB) for precise angle estimation, using the Symmetric Prior Angle (SPA) loss, and the Scale Learning Branch (SLB) for optimal scale adjustment via the Adaptive Bounding Box Scaling (ABBS) module. The framework supports both C-HBox and T-HBox ground truths, ensuring robust and accurate predictions." + ], + "image_footnote": [], + "bbox": [ + 94, + 231, + 403, + 393 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/55d9e3d1694b0495f9e982e4c5a131893a68faaf37d1a22e07983bd4262d8421.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 408, + 231, + 901, + 396 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "2.2. Weakly-supervised Orientd Object Detection", + "text_level": 1, + "bbox": [ + 89, + 477, + 472, + 493 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Weakly supervised OOD methods learn to predict RBoxes without directly utilizing GT RBoxes. The approaches in this domain are primarily categorized based on the types of labels they employ: image-based [23], point-based [17, 47], and HBox-based [42, 48].", + "bbox": [ + 89, + 500, + 483, + 575 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Image-based supervision. 
WSODet [23] aims to generate pseudo-RBoxes without explicit localization supervision, thus encountering significant limitations when relying solely on image labels, especially for scenes with numerous and diverse object types.", + "bbox": [ + 89, + 589, + 483, + 664 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Point-based supervision. By leveraging one representative point at the center location as a label for each object, point label-based methods offer the advantage of being cost-effective [1, 2, 11, 16, 19, 44]. PointOBB [17] estimates angles from geometric relationships across original, rotated, and flipped views, and determines scales by analyzing proposal distributions between original and scaled input images. PointOBB-v2 [18] improves single point supervision by refining pseudo-label generation, leading to enhanced efficiency and accuracy. Point2RBox [47] employed fundamental patterns as priors to guide the regression of RBoxes. Point-based methods are cost-effective and straightforward, but still struggle with limited supervision.", + "bbox": [ + 89, + 676, + 482, + 872 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "HBox-based supervision. As annotating HBoxes is more", + "bbox": [ + 89, + 885, + 482, + 900 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "straightforward than RBoxes, HBox-supervised OOD has gained increasing attention in recent studies. H2RBox [42] utilized rotated views derived from the original view and provided self-supervision of object orientations without requiring the GT angles. H2RBox-v2 [48] expanded the use of geometric relationships between views by adding flipped views. These methods learn to predict RBoxes by converting them into their minimum circumscribed HBoxes and directly comparing the IoU of these with the GT HBoxes, thereby enabling HBox-supervised OOD. However, these methods only guarantee performance when trained with GT C-HBoxes that are derived from GT RBoxes. They struggle to learn precise OOD when trained with GT T-HBoxes, because of the significant gap between the GT T-HBoxes and the HBoxes derived from the predicted RBoxes. To address this issue, we propose an ABBS module to effectively handle both types of GTs (C-HBoxes and T-HBoxes).", + "bbox": [ + 511, + 478, + 906, + 750 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3. Method", + "text_level": 1, + "bbox": [ + 513, + 762, + 604, + 776 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.1. Overall Pipeline", + "text_level": 1, + "bbox": [ + 511, + 786, + 673, + 801 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Weakly supervised OOD aims to predict RBoxes using less expensive annotations such as HBoxes. Existing methods such as H2RBox [42] and its improved version H2RBox-v2 [48] have laid the foundation for directly predicting RBoxes from HBoxes. Our proposed pipeline builds upon the H2RBox-v2 [48] framework to effectively enable weakly", + "bbox": [ + 511, + 809, + 906, + 900 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "8850", + "bbox": [ + 482, + 944, + 514, + 955 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "supervised OOD from either C-HBoxes or T-HBoxes. Figure 2 depicts the conceptual framework of our weakly supervised OOD method. 
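As inherited from H2RBox-v2 [48], the self-supervision described next compares predictions across an original, a rotated, and a flipped view of each training image. A minimal sketch of this view generation, assuming torchvision is available; the 30° angle is only an illustrative stand-in for the rotation R applied during training:

```python
import torch
import torchvision.transforms.functional as TF

def make_views(img: torch.Tensor, rot_deg: float = 30.0):
    """Build the three views (I_ori, I_rot, I_flp) used for self-supervision.

    img: a (C, H, W) image tensor; `rot_deg` is an illustrative choice of R.
    """
    i_ori = img
    i_rot = TF.rotate(img, rot_deg)        # rotated view (interpolated)
    i_flp = torch.flip(img, dims=(-1,))    # horizontally flipped view
    return i_ori, i_rot, i_flp
```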
Given an input image $\mathrm{I}_{\mathrm{ori}}$ and its rotated and flipped versions $\mathrm{I}_{\mathrm{rot}}$ and $\mathrm{I}_{\mathrm{flp}}$ , our proposed pipeline obtains the RBox for each input view ( $\mathrm{I}_{\mathrm{ori}}$ , $\mathrm{I}_{\mathrm{rot}}$ , $\mathrm{I}_{\mathrm{flp}}$ ), including the center position $(x,y)$ , size $(w,h)$ , angle $(\theta)$ , class scores $(p)$ , and the center-ness $(cn)$ . To classify each detected object, we follow FCOS [24] by supervising both the classification $(p)$ and the center-ness $(cn)$ . The angle $(\theta)$ prediction is obtained by using the method proposed in PSC [46]. Our contribution mainly lies in the supervision for localization, consisting of two branches: a scale learning branch (SLB) and an orientation learning branch (OLB).", + "bbox": [ + 89, + 90, + 483, + 287 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "In the SLB, the adaptive bounding box scaling (ABBS) module addresses the relationships between GT HBoxes and predicted RBoxes. The ABBS module provides a proper minimum circumscribed rectangle for accurately predicted RBoxes by adaptively scaling the HBoxes within a predefined scale range. The OLB guides accurate prediction of object orientation by utilizing three input views $(\mathrm{I}_{\mathrm{ori}},\mathrm{I}_{\mathrm{rot}},\mathrm{I}_{\mathrm{flp}})$ , following H2RBox-v2 [48]. Additionally, the OLB utilizes these orientation predictions for our symmetric prior angle (SPA) loss, which leverages the inherent left-right symmetry of objects in aerial images. The SPA loss further adjusts the orientations of the predicted RBoxes to align with the orientations of symmetric objects such as airplanes, ships, and ground track fields.", + "bbox": [ + 89, + 294, + 483, + 507 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.2. Adaptive Bounding Box Scaling Module", + "text_level": 1, + "bbox": [ + 89, + 513, + 434, + 530 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "In Fig. 2, 'Scale Learning Branch' illustrates the conceptual process of our adaptive bounding box scaling (ABBS) module. In HBox-supervised OOD learning, the predicted RBoxes $(RB^{\mathrm{pred}})$ must be compared with the GT HBoxes $(HB^{\mathrm{gt}})$ . Since they cannot be directly compared, $RB^{\mathrm{pred}}$ is first converted to $HB^{\mathrm{pred}}$ , defined as the minimum circumscribed HBox of $RB^{\mathrm{pred}}$ as:", + "bbox": [ + 89, + 537, + 483, + 642 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\nHB^{\text{pred}} = MCR(RB^{\text{pred}}), \tag{1}\n$$\n", + "text_format": "latex", + "bbox": [ + 204, + 646, + 480, + 664 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $MCR(\cdot)$ is an operator converting $RB$ to the minimum circumscribed $HB$ , allowing $HB^{\mathrm{pred}}$ to be compared with $HB^{\mathrm{gt}}$ . 
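To make the conversion concrete, here is a minimal sketch of an $MCR(\cdot)$ implementation for a single box, using the standard $|\cos\theta|/|\sin\theta|$ projection that Eqs. (2)-(3) below formalize; the function name and list layout are illustrative:

```python
import math

def mcr(rbox):
    """Minimum circumscribed HBox of an RBox [x, y, w, h, theta] (Eq. (1))."""
    x, y, w, h, theta = rbox
    w_hb = w * abs(math.cos(theta)) + h * abs(math.sin(theta))
    h_hb = w * abs(math.sin(theta)) + h * abs(math.cos(theta))
    # The circumscribed HBox shares the RBox's center point.
    return [x, y, w_hb, h_hb]
```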
$HB^{\\mathrm{pred}}$ and $RB^{\\mathrm{pred}}$ are given by:", + "bbox": [ + 89, + 667, + 483, + 714 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\nR B ^ {\\text {p r e d}} = \\left[ x _ {r b} ^ {\\text {p r e d}}, y _ {r b} ^ {\\text {p r e d}}, w _ {r b} ^ {\\text {p r e d}}, h _ {r b} ^ {\\text {p r e d}}, \\theta_ {r b} ^ {\\text {p r e d}} \\right], \\tag {2}\n$$\n", + "text_format": "latex", + "bbox": [ + 153, + 719, + 483, + 747 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\nH B ^ {\\text {p r e d}} = \\left[ x _ {h b} ^ {\\text {p r e d}}, y _ {h b} ^ {\\text {p r e d}}, w _ {h b} ^ {\\text {p r e d}}, h _ {h b} ^ {\\text {p r e d}} \\right],\n$$\n", + "text_format": "latex", + "bbox": [ + 156, + 743, + 382, + 762 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $(x_{rb}^{\\mathrm{pred}},y_{rb}^{\\mathrm{pred}})$ and $(x_{hb}^{\\mathrm{pred}},y_{hb}^{\\mathrm{pred}})$ are the centers of $RB^{\\mathrm{pred}}$ and $HB^{\\mathrm{pred}}$ , respectively. The width $w$ and height $h$ of $HB^{\\mathrm{pred}}$ can be computed as:", + "bbox": [ + 89, + 766, + 483, + 814 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\nw _ {h b} ^ {\\text {p r e d}} = w _ {r b} ^ {\\text {p r e d}} \\left| \\cos \\theta_ {r b} ^ {\\text {p r e d}} \\right| + h _ {r b} ^ {\\text {p r e d}} \\left| \\sin \\theta_ {r b} ^ {\\text {p r e d}} \\right|, \\tag {3}\n$$\n", + "text_format": "latex", + "bbox": [ + 148, + 820, + 483, + 847 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\nh _ {h b} ^ {\\text {p r e d}} = w _ {r b} ^ {\\text {p r e d}} | \\sin \\theta_ {r b} ^ {\\text {p r e d}} | + h _ {r b} ^ {\\text {p r e d}} | \\cos \\theta_ {r b} ^ {\\text {p r e d}} |.\n$$\n", + "text_format": "latex", + "bbox": [ + 151, + 843, + 419, + 862 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "If $RB^{\\mathrm{opt}}$ is defined as the tightly surrounding object boundary RBox with the precise orientation, then we have", + "bbox": [ + 89, + 869, + 483, + 901 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/0f0e3f67749295fd52092e18a0c6894c7b925e01576410d11a332bc47f5de4d9.jpg", + "image_caption": [ + "(a)" + ], + "image_footnote": [], + "bbox": [ + 552, + 88, + 647, + 179 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/325bb1fc626aaefc2c05d0140a6886eacafedfc82231735440732d56a2dee2c6.jpg", + "image_caption": [ + "(b)" + ], + "image_footnote": [], + "bbox": [ + 651, + 88, + 741, + 179 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/5c549cac7a8d8f46a869cf8c9dfc6cfb6146911f9a57937bfa47698c6a67e05b.jpg", + "image_caption": [ + "(c)" + ], + "image_footnote": [], + "bbox": [ + 746, + 88, + 864, + 179 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/11ae87346cdf7ff7e0c88c472005c62c18768baf3a886fa74174d5159e1b3c45.jpg", + "image_caption": [ + "(d)" + ], + "image_footnote": [], + "bbox": [ + 553, + 194, + 648, + 268 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/c71e66550e76ebe42e0f7ba49bef5c0a389b45ee411cfcf5cca25d30fde80b8f.jpg", + "image_caption": [ + "(e)" + ], + "image_footnote": [], + "bbox": [ + 651, + 194, + 763, + 268 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/d75f200c7a4f589524d78653290a80f31a6c6d2a6006a1a73ed76d29a5d60139.jpg", + "image_caption": [ + "(f)", + "Figure 3. Analysis of scale adjustment function $(f(\\cdot))$ based on the shape and angle of objects. 
(a) rectangular shape, (b) rounded rectangular shape, (c) complex shape, (d) horizontal orientation, (e) slightly tilted orientation, (f) diagonal orientation. The cyan solid box, green solid box and red dotted box represent GT T-HBox, RBox and adjusted GT HBox, respectively." + ], + "image_footnote": [], + "bbox": [ + 764, + 194, + 862, + 268 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "$HB^{\mathrm{opt}} = MCR(RB^{\mathrm{opt}})$ for the 'Predicted RBox Projection Process' block in the SLB of Fig. 2. When the size of $HB^{\mathrm{gt}}$ is larger or smaller than that of $HB^{\mathrm{opt}}$ , the model needs to adaptively adjust and find the optimal scale within a predefined range of scale variations. We propose an ABBS module that estimates $RB^{\mathrm{opt}}$ by adaptively adjusting the scale of $HB^{\mathrm{gt}}$ . Notably, even if $RB^{\mathrm{pred}}$ is accurately estimated, its $HB^{\mathrm{pred}}$ may not overlap well with $HB^{\mathrm{gt}}$ , leading to a low Intersection over Union (IoU) value. Enforcing $HB^{\mathrm{pred}}$ to match $HB^{\mathrm{gt}}$ can cause a misalignment with $RB^{\mathrm{pred}}$ because $HB^{\mathrm{gt}}$ may not be ideal in estimating $RB^{\mathrm{opt}}$ . To address this, our ABBS module adaptively scales and adjusts $HB^{\mathrm{gt}}$ in the context of $RB^{\mathrm{pred}}$ , rather than forcing $HB^{\mathrm{pred}}$ to match $HB^{\mathrm{gt}}$ .", + "bbox": [ + 511, + 385, + 906, + 582 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "For the detailed explanation of our ABBS module, we first define a set of scaled versions of $HB^{\mathrm{gt}}$ for the 'Scaled GT HBoxes Generation Process' in the SLB of Fig. 2 as:", + "bbox": [ + 511, + 590, + 906, + 636 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\mathbf{HB}_{\mathrm{s}}^{\mathrm{gt}} = \left\{ HB_{\mathrm{s},1}^{\mathrm{gt}}, HB_{\mathrm{s},2}^{\mathrm{gt}}, \dots, HB_{\mathrm{s},K}^{\mathrm{gt}} \right\}, \tag{4}\n$$\n", + "text_format": "latex", + "bbox": [ + 589, + 645, + 903, + 666 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $HB_{s,k}^{\mathrm{gt}}$ is the $k$-th scaled version of $HB^{\mathrm{gt}}$ and $K$ is the total number of scaled variations of $HB^{\mathrm{gt}}$ . $\mathbf{HB}_{\mathrm{s}}^{\mathrm{gt}}$ is determined as the combinations of angle-adjusted width and height scale factors, $\{s_{adj,i}^{w}\}_{i=1}^{N_s}$ and $\{s_{adj,j}^{h}\}_{j=1}^{N_s}$ , that are transformed from the basic width and height scale factors, $\{s_i^{w}\}_{i=1}^{N_s}$ and $\{s_j^{h}\}_{j=1}^{N_s}$ , by considering the angle prediction. Basic scale factors are uniformly spaced in a predefined scale range as:", + "bbox": [ + 511, + 676, + 906, + 787 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\nS_w = \left\{ s_1^w, s_2^w, \dots, s_{N_s}^w \right\}, \quad S_h = \left\{ s_1^h, s_2^h, \dots, s_{N_s}^h \right\}, \tag{5}\n$$\n", + "text_format": "latex", + "bbox": [ + 534, + 797, + 903, + 816 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $S_w$ and $S_h$ are the sets of basic width and height scale factors, respectively. 
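For intuition, the uniform spacing that Eq. (6) below writes out in closed form is simply a linspace over the predefined range; the 1.0-1.5 range with interval 0.1 (i.e., $N_s = 6$) used here is the setting that Table 6 later reports as best for DIOR's T-HBoxes:

```python
import numpy as np

# Basic scale factors, uniformly spaced over [s_1, s_{N_s}] (cf. Eq. (6) below).
s_min, s_max, n_s = 1.0, 1.5, 6   # the range/interval Table 6 finds best for T-HBoxes
S_w = np.linspace(s_min, s_max, n_s)   # array([1. , 1.1, 1.2, 1.3, 1.4, 1.5])
S_h = np.linspace(s_min, s_max, n_s)
```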
$s_i^w$ and $s_j^h$ are calculated as:", + "bbox": [ + 511, + 829, + 903, + 856 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\ns_i^w = s_1^w + \left(s_{N_s}^w - s_1^w\right) / \left(N_s - 1\right) \cdot (i - 1),\n$$\n", + "text_format": "latex", + "bbox": [ + 566, + 864, + 849, + 882 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\ns_j^h = s_1^h + \left(s_{N_s}^h - s_1^h\right) / \left(N_s - 1\right) \cdot (j - 1), \tag{6}\n$$\n", + "text_format": "latex", + "bbox": [ + 568, + 885, + 903, + 904 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "8851", + "bbox": [ + 482, + 944, + 513, + 955 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $s_{N_s}^w = s_{N_s}^h$ is the predefined largest basic scale factor for both width and height of $HB^{\mathrm{gt}}$ , and $N_{s}$ is the number of uniform quantization steps for both ranges $[s_1^w, s_{N_s}^w]$ and $[s_1^h, s_{N_s}^h]$ . In order to generate $HB_{s,k}^{\mathrm{gt}}$ , we transform the basic width and height scale factors, $\{s_i^w\}_{i=1}^{N_s}$ and $\{s_j^h\}_{j=1}^{N_s}$ , into angle-adjusted width and height scale factors, $\{s_{adj,i}^w\}_{i=1}^{N_s}$ and $\{s_{adj,j}^h\}_{j=1}^{N_s}$ , using the predicted angle $\theta^{\mathrm{pred}}$ through the scale adjustment function $f$ :", + "bbox": [ + 89, + 90, + 483, + 222 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\ns_{adj,i}^w = f\left(\theta_{rb}^{\text{pred}}, s_i^w\right), \quad s_{adj,j}^h = f\left(\theta_{rb}^{\text{pred}}, s_j^h\right). \tag{7}\n$$\n", + "text_format": "latex", + "bbox": [ + 151, + 224, + 482, + 246 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "To define $f(\cdot)$ , it is essential to consider the object types and rotation angles. Fig. 3 shows the effect of scale adjustments on T-HBoxes for three object types: (i) For rectangular objects like tennis courts (Fig. 3-(a)), the adjusted T-HBox (red dotted box) aligns precisely with the GT T-HBox (cyan solid box) and tightly circumscribes the optimal RBox (green solid box); (ii) For rounded rectangular objects (Fig. 3-(b)), the optimal RBox slightly exceeds the GT T-HBox; (iii) Complex shapes like airplanes (Fig. 3-(c)) show a larger discrepancy, with parts of the optimal RBox lying outside the GT T-HBox. Furthermore, scale adjustments also depend on rotation angles: (i) In Fig. 3-(d), for a vertically (or horizontally) aligned airplane, the GT T-HBox and optimal RBox are identical; (ii) For Fig. 3-(e) with a small rotation angle, they differ slightly; (iii) For Fig. 3-(f) with a larger angle, the difference is more pronounced. Therefore, to take the object's shape type and orientation degree into account when scaling the widths and heights of T-HBoxes, $f(\cdot)$ in Eq. 7 is defined as:", + "bbox": [ + 91, + 253, + 483, + 537 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\nf(\theta, s) = \begin{cases} \frac{4}{\pi}(s - 1) \cdot \theta + 1, & \text{if } 0 \leq \theta < \frac{\pi}{4}, \\ \frac{4}{\pi}(1 - s) \cdot \theta + (2s - 1), & \text{if } \frac{\pi}{4} \leq \theta < \frac{\pi}{2}, \end{cases} \tag{8}\n$$\n", + "text_format": "latex", + "bbox": [ + 98, + 542, + 482, + 584 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where the angle range is set to $\theta \in [0,\pi /2)$ due to the periodicity of the angle. According to $f(\cdot)$ and Eq. 7, $HB_{s,k}^{\mathrm{gt}}$ in $\mathbf{HB}_s^{\mathrm{gt}}$ can be expressed as:", + "bbox": [ + 89, + 592, + 482, + 638 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\nHB_{s,k}^{\mathrm{gt}} = \left[ x^{\mathrm{gt}}, y^{\mathrm{gt}}, w^{\mathrm{gt}} \cdot s_{adj,i}^w, h^{\mathrm{gt}} \cdot s_{adj,j}^h \right], \tag{9}\n$$\n", + "text_format": "latex", + "bbox": [ + 158, + 648, + 482, + 667 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $(x^{\mathrm{gt}},y^{\mathrm{gt}})$ is the center point, and $w^{\mathrm{gt}}$ and $h^{\mathrm{gt}}$ are the width and height of $HB^{\mathrm{gt}}$ . As shown in the 'IoU Calculation' and 'Optimal Scale Learning' blocks in the SLB of Fig. 2, $HB^{\mathrm{opt}}$ can be determined among $\{HB_{s,k}^{\mathrm{gt}}\}_{k=1}^{K}$ as the one that minimizes the IoU loss for all proposals via an ABBS loss:", + "bbox": [ + 89, + 670, + 483, + 746 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\mathcal{L}_{\mathrm{as}} = \frac{1}{N_p} \sum_{l=1}^{N_p} \min_{s_i^w \in S_w,\, s_j^h \in S_h} \mathcal{L}_{\mathrm{IoU}}\left(HB_l^{\text{pred}}, HB_{s,k}^{\text{gt},l}\left(s_i^w, s_j^h\right)\right), \tag{10}\n$$\n", + "text_format": "latex", + "bbox": [ + 99, + 756, + 482, + 805 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $N_{p}$ is the total number of proposals for input I. $HB_{l}^{\mathrm{pred}}$ is $HB^{\mathrm{pred}}$ for the $l$ -th proposal, and $HB_{s,k}^{\mathrm{gt},l}(s_i^w,s_j^h)$ is the $k$ -th scaled $HB^{\mathrm{gt}}$ , i.e., $HB_{s,k}^{\mathrm{gt}}$ whose width and height are scaled for the $l$ -th proposal according to Eqs. 7 to 9. Finally, by adding a regularization term using the IoU loss between $HB^{\mathrm{pred}}$ and the non-scaled $HB^{\mathrm{gt}}$ , we form the regression loss as:", + "bbox": [ + 89, + 804, + 483, + 900 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/20f8eef00637ab7600c03ee878ea6683fc07db4f9171afee28e7c45469de0d08.jpg", + "image_caption": [ + "Airplane" + ], + "image_footnote": [], + "bbox": [ + 516, + 101, + 629, + 176 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/45fc781940dddcb8806fd987a3b42f61e4fd13f1476d76fda1771d4abaa461d1.jpg", + "image_caption": [ + "Ship" + ], + "image_footnote": [], + "bbox": [ + 630, + 101, + 720, + 176 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/a2d1e58850e2cd6b5de90fa8fe55dff6b4f19935e977bc1b59efc803fba9d08a.jpg", + "image_caption": [ + "Tennis court" + ], + "image_footnote": [], + "bbox": [ + 725, + 101, + 777, + 176 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/ee66c9f2b21acbe6db6b232718f3f1b68a60af53fcaff428182869ab07575ae3.jpg", + "image_caption": [ + "Vehicle", + "Figure 4. Examples of symmetric objects in aerial images. In SPA loss, the $x,y$ coordinates and angle $\theta$ are used to define the symmetry axis, splitting the object into two parts for comparison." 
+ ], + "image_footnote": [], + "bbox": [ + 777, + 101, + 903, + 178 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\mathcal{L}_{\mathrm{reg}} = \mathcal{L}_{\mathrm{as}} + \alpha \cdot \left(1/N_p\right) \sum_{l=1}^{N_p} \mathcal{L}_{\mathrm{IoU}}\left(HB_l^{\text{pred}}, HB^{\mathrm{gt},l}\right), \tag{11}\n$$\n", + "text_format": "latex", + "bbox": [ + 521, + 243, + 906, + 268 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $\alpha$ is a hyperparameter which is set to 0.01 by default.", + "bbox": [ + 513, + 280, + 903, + 295 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.3. Symmetric Prior Angle Loss", + "text_level": 1, + "bbox": [ + 511, + 321, + 767, + 338 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "In aerial images, objects such as airplanes, ships, tennis courts, and vehicles are often captured from top-down viewpoints, where most of these objects exhibit symmetries in their appearance, as shown in Fig. 4.", + "bbox": [ + 511, + 344, + 905, + 405 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "In the previous pipelines [42, 48], both the regression loss associated with the bounding box's center point, width, and height, and the angle loss for accurate angle prediction were trained in a balanced manner. However, these pipelines tended to predict inaccurate angles while maximizing the bounding boxes' IoUs. This issue stems from the fact that, since the angles could not be directly supervised due to the absence of angle annotations, the angles were indirectly supervised from augmented views with rotations and flips. This is problematic because, when the difference between the two predicted angles for the same object in the original view and its rotated view is equal to the rotation angle applied to the original view, the angle loss is zero even though the predicted angles are inaccurate.", + "bbox": [ + 511, + 414, + 906, + 625 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "To mitigate such predicted-angle ambiguity, we propose a symmetric prior angle (SPA) loss. Based on the SPA loss, the model can be trained to predict precise angles by indirectly utilizing the object's symmetric characteristics. As shown in Fig. 4, the detected objects are symmetric against the symmetry axes (blue-dotted lines) passing through the center points of their RBoxes. That is, the pixel contents in the two parts divided by the symmetry axis of the RBox are compared for similarity, and their difference is used as supervision for our SPA loss. 
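Before turning to the details of the SPA supervision, the scale-search side of Sec. 3.2 can be summarized in code. A minimal sketch of Eqs. (7)-(11) for one proposal, assuming center-format boxes and a simple 1-IoU surrogate for $\mathcal{L}_{\mathrm{IoU}}$ (the paper uses the IoU-based loss of [45]); all names are illustrative:

```python
import math

def f_adjust(theta, s):
    """Angle-aware scale adjustment f(theta, s) of Eq. (8), theta in [0, pi/2)."""
    if theta < math.pi / 4:
        return (4 / math.pi) * (s - 1.0) * theta + 1.0
    return (4 / math.pi) * (1.0 - s) * theta + (2.0 * s - 1.0)

def hbox_iou(a, b):
    """IoU of two axis-aligned boxes in center format [x, y, w, h]."""
    ax1, ay1, ax2, ay2 = a[0] - a[2] / 2, a[1] - a[3] / 2, a[0] + a[2] / 2, a[1] + a[3] / 2
    bx1, by1, bx2, by2 = b[0] - b[2] / 2, b[1] - b[3] / 2, b[0] + b[2] / 2, b[1] + b[3] / 2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    return inter / (a[2] * a[3] + b[2] * b[3] - inter + 1e-9)

def abbs_reg_loss(hb_pred, hb_gt, theta_pred, S_w, S_h, alpha=0.01):
    """Per-proposal L_as of Eq. (10) plus the regularizer of Eq. (11)."""
    x, y, w, h = hb_gt
    l_as = min(
        1.0 - hbox_iou(hb_pred,
                       [x, y, w * f_adjust(theta_pred, sw), h * f_adjust(theta_pred, sh)])
        for sw in S_w for sh in S_h   # search over all scaled GT HBoxes (Eq. (9))
    )
    return l_as + alpha * (1.0 - hbox_iou(hb_pred, hb_gt))
```

With the scale branch sketched, we return to the SPA supervision.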
It is noted that our SPA loss utilizes only symmetric objects, incorporating the symmetry prior from GT class labels for proposals identified as symmetric, such as 'airplane,' 'ship,' 'vehicle,' and 'tennis court.'", + "bbox": [ + 511, + 635, + 906, + 830 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "To avoid applying the SPA loss when $RB^{pred}$ are inaccurate for the respective objects, we first check the fidelity scores of the proposals and sample the Top- $k$ proposals as supervision in the SPA loss as:", + "bbox": [ + 511, + 839, + 905, + 898 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "8852", + "bbox": [ + 482, + 944, + 514, + 955 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\left\{ RB_n^{pred} \right\}_{n=1}^{N_{\mathrm{spa}}} = \text{Top-}k \left( \left\{ RB_l^{pred} \right\}_{l=1}^{N_p} \,\middle|\, \mathrm{sc}_{\mathrm{cls}}^{l} + \mathrm{sc}_{\mathrm{loc}}^{l} \right) \tag{12}\n$$\n", + "text_format": "latex", + "bbox": [ + 99, + 88, + 482, + 113 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "where $\mathrm{sc}_{\mathrm{cls}}^{(l)}$ and $\mathrm{sc}_{\mathrm{loc}}^{(l)}$ are the classification and localization scores for the $l$ -th $RB^{pred}$ , and $N_{p}$ is the total number of proposals. From Eq. 12, the selected $N_{spa}$ proposals are considered in the SPA loss, by which the predicted angles of $RB^{pred}(\theta_{rb}^{pred})$ are enforced to align with the orientations of objects in the sense of maximizing the similarity, the Structural Similarity Index (SSIM [28]), between the pixel contents in the two parts of each $RB^{pred}$ . It should be noted that, even in cases where symmetric objects may not appear perfectly symmetric due to contextual factors like shadows or asymmetrical cargo arrangements, the comparison remains informative thanks to the inherent structural symmetry between the two parts. Our SPA loss is defined as:", + "bbox": [ + 89, + 119, + 483, + 318 + ], + "page_idx": 5 + }, + { + "type": "equation", + "text": "\n$$\n\mathcal{L}_{\mathrm{SPA}} = \left(1/N_{\mathrm{spa}}\right) \sum_{n=1}^{N_{\mathrm{spa}}} \left(1 - \mathrm{SSIM}\left(I_{p1}^{(n)}, I_{p2}^{(n)}\right)\right) \tag{13}\n$$\n", + "text_format": "latex", + "bbox": [ + 109, + 324, + 482, + 348 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "To remove the influence of object sizes in the $\mathcal{L}_{\mathrm{SPA}}$ computation, the proposals $(RB^{pred})$ are projected onto a fixed-size grid of $50 \times 50$ . Then, the pixel content $(I_{p1})$ in one part of the proposal's projection is compared with that $(I_{p2})$ of the other part, which is flipped before the comparison.", + "bbox": [ + 89, + 348, + 483, + 422 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "3.4. Loss Functions", + "text_level": 1, + "bbox": [ + 89, + 429, + 243, + 444 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "In the orientation learning branch (OLB), two angle-based losses [48], $\mathcal{L}_{\mathrm{rot}}$ and $\mathcal{L}_{\mathrm{flp}}$ , are adopted to leverage the consistency between the original, rotated, and flipped views of each object proposal. 
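Before the remaining angle losses, a sketch of the SPA supervision just defined (Eqs. (12)-(13)). It assumes the Top-$k$ selection of Eq. (12) has been done and each selected proposal has already been resampled onto the fixed $50 \times 50$ grid in its own RBox frame, so the symmetry axis is the vertical midline; SSIM is reduced to a single global window for brevity, whereas [28] uses local windows:

```python
import numpy as np

def global_ssim(a, b, c1=0.01 ** 2, c2=0.03 ** 2):
    """Whole-patch SSIM (single window); inputs are float arrays in [0, 1]."""
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

def spa_loss(patches):
    """Eq. (13): patches has shape (N_spa, 50, 50), one crop per Top-k proposal."""
    losses = []
    for p in patches:
        i_p1 = p[:, :25]            # one half of the projected proposal
        i_p2 = p[:, 25:][:, ::-1]   # the other half, flipped before comparison
        losses.append(1.0 - global_ssim(i_p1, i_p2))
    return float(np.mean(losses))
```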
For the rotated and flipped views, $\mathcal{L}_{\mathrm{rot}}$ and $\mathcal{L}_{\mathrm{flp}}$ are computed by comparing with the predicted angle $\theta$ in the original view $(\mathrm{I}_{\mathrm{ori}})$ :", + "bbox": [ + 89, + 452, + 483, + 542 + ], + "page_idx": 5 + }, + { + "type": "equation", + "text": "\n$$\n\mathcal{L}_{\mathrm{rot}} = l_s\left(\theta_{\mathrm{rot}} - \theta, R\right), \quad \mathcal{L}_{\mathrm{flp}} = l_s\left(\theta_{\mathrm{flp}} + \theta, 0\right), \tag{14}\n$$\n", + "text_format": "latex", + "bbox": [ + 129, + 549, + 482, + 566 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "where $l_{s}$ denotes a smooth L1 loss-based snap loss [48], and $R$ denotes the rotation angle applied to $\mathbf{I}_{ori}$ . The final angle loss is:", + "bbox": [ + 89, + 571, + 482, + 602 + ], + "page_idx": 5 + }, + { + "type": "equation", + "text": "\n$$\n\mathcal{L}_{\mathrm{ang}} = \beta\left(\lambda_r \mathcal{L}_{\mathrm{rot}} + \lambda_f \mathcal{L}_{\mathrm{flp}}\right) + \gamma \mathcal{L}_{\mathrm{SPA}}, \tag{15}\n$$\n", + "text_format": "latex", + "bbox": [ + 163, + 609, + 480, + 626 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "where $\lambda_r = 1.0, \lambda_f = 0.05, \beta = 0.6$ , and $\gamma = 0.05$ are empirically determined for all our experiments. In the scale learning branch (SLB), we use our IoU-based [45] regression loss $\mathcal{L}_{\mathrm{reg}}$ in Eq. 11. The overall loss is defined as:", + "bbox": [ + 89, + 631, + 483, + 691 + ], + "page_idx": 5 + }, + { + "type": "equation", + "text": "\n$$\n\mathcal{L}_{\mathrm{total}} = \lambda_{\mathrm{ang}} \mathcal{L}_{\mathrm{ang}} + \lambda_{\mathrm{reg}} \mathcal{L}_{\mathrm{reg}} + \lambda_{\mathrm{cn}} \mathcal{L}_{\mathrm{cn}} + \lambda_{\mathrm{cls}} \mathcal{L}_{\mathrm{cls}} \tag{16}\n$$\n", + "text_format": "latex", + "bbox": [ + 114, + 698, + 482, + 715 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "where $\mathcal{L}_{\mathrm{cn}}$ is the center-ness loss [24], and $\mathcal{L}_{\mathrm{cls}}$ is the classification loss based on the focal loss [20]. The weighting factors $\lambda_{\mathrm{ang}}, \lambda_{\mathrm{reg}}, \lambda_{\mathrm{cn}}$ , and $\lambda_{\mathrm{cls}}$ are all set to 1.", + "bbox": [ + 89, + 720, + 483, + 767 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4. Experiments", + "text_level": 1, + "bbox": [ + 89, + 777, + 223, + 792 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.1. Datasets", + "text_level": 1, + "bbox": [ + 89, + 801, + 192, + 816 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "We trained and tested all the methods across four different datasets: DIOR [5, 13], DOTA-v1.0 [29], SIMD [9], and NWPU VHR-10 [4], which are summarized in Table 1. The details of the datasets and the results for SIMD and NWPU are provided in the Suppl.", + "bbox": [ + 89, + 824, + 483, + 901 + ], + "page_idx": 5 + }, + { + "type": "table", + "img_path": "images/89f028fbb66ac8d2765e35fe42d4b8fac947c3f837363afadf3c1e460ca86577.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
| Datasets | # of Images | Image Widths | # of Objects | # of Classes | Annotation Types |
| --- | --- | --- | --- | --- | --- |
| DIOR [13] | 22,463 | 800 | 190,288 | 20 | T-HBox |
| DIOR-R [5] | 22,463 | 800 | 190,288 | 20 | RBox |
| DOTA-v1.0 [29] | 2,806 | 800 ~ 4K | 188,282 | 15 | C-HBox, RBox |
| SIMD [9] | 5,000 | 1024 | 45,096 | 15 | T-HBox |
| NWPU VHR-10 [4] | 800 | ~1000 | 3,775 | 10 | T-HBox |
", + "bbox": [ + 519, + 89, + 901, + 175 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Table 1. Characteristics of datasets used for experiments", + "bbox": [ + 540, + 179, + 877, + 191 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.2. Implementation Details", + "text_level": 1, + "bbox": [ + 511, + 213, + 730, + 229 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Our proposed ABBSPO pipeline adopts the FCOS [24] detector as the baseline architecture, utilizing a ResNet-50 [10] backbone and an FPN [15] neck, based on the H2RBox-v2 [48] framework. To ensure fairness, all models are configured with the ResNet-50 [10] backbone and trained for 12 epochs on NVIDIA RTX3090 GPUs.", + "bbox": [ + 511, + 236, + 905, + 327 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.3. Experimental Results", + "text_level": 1, + "bbox": [ + 511, + 338, + 717, + 354 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.3.1 Quantitative Comparison", + "text_level": 1, + "bbox": [ + 511, + 361, + 746, + 376 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "It should be noted that objects such as round-shaped pools have orientation ambiguities regardless of their annotations (RBoxes) [42]. In order to avoid confusion in orientation learning, annotations are modified as having horizontal orientations if the objects belong to the following categories: (i) DIOR-R: 'baseball field', 'chimney', 'golf field', 'stadium', 'storage tank', 'windmill'; and (ii) DOTA-v1.0: 'baseball diamond', 'stadium', 'roundabout'. Accordingly, their orientation learning is enforced to predict the horizontal orientations, similar to previous works [42, 48].", + "bbox": [ + 511, + 385, + 905, + 536 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Results on DIOR-R. Table 2 shows the OOD results. In addition to $\\mathrm{AP}_{50}$ metric, we use $3\\text{-AP}_{50}$ that focuses on the detection performance of the three complex-shaped object categories: 'airplane' (APL), 'expressway service area' (ESA), and 'overpass' (OP). As shown, our ABBSPO outperforms all weakly supervised OOD methods. Especially, in terms of $3\\text{-AP}_{50}$ , our ABBSPO is superior to the HBox-supervised SOTA methods, H2RBox and H2RBox-v2, with large margins of average $12.9\\%$ -point and average $9.1\\%$ -point improvements. In overall $\\mathrm{AP}_{50}$ performance, our ABBSPO surpasses H2RBox by $5.13\\%$ -point and the H2RBox-v2 by $3.03\\%$ -point. It is noted that our ABBSPO not only surpasses our base detector (H2RBox-v2 [48]) but also performs comparably to other RBox-supervised OOD methods, such as FCOS [24] and Oriented R-CNN [32]. It is worth noting that, compared to the RBox-supervised OOD methods, our ABBSPO shows even superior performance with large margins from $6.5\\%$ -point to $11.7\\%$ -point, especially on the 'airplane' that has the most complex shape. Notably, the ABBS module is less effective for rectangular objects, such as 'tennis court' (TC) and 'vehicle' (VE), as scaling is often unnecessary. However, it proves highly beneficial for complex-shaped objects, such as the ESA. 
The SPA loss is applied only to symmetric categories and helps", + "bbox": [ + 511, + 537, + 906, + 902 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "8853", + "bbox": [ + 482, + 944, + 514, + 955 + ], + "page_idx": 5 + }, + { + "type": "table", + "img_path": "images/b9411b1d163988c411741008f80198c6584b65223b04239509fe345f733a2d69.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
Methods\\( \\underline{\\mathbf{APL}} \\)\\( \\mathbf{APO} \\)\\( \\mathbf{BF} \\)\\( \\mathbf{{BC}} \\)\\( \\mathbf{{BR}} \\)\\( \\mathbf{{CH}} \\)\\( \\underline{\\mathbf{ESA}} \\)\\( \\mathbf{{ETS}} \\)\\( \\mathbf{{DAM}} \\)\\( \\mathbf{{GF}} \\)\\( \\mathbf{{GTF}} \\)\\( \\mathbf{{HA}} \\)\\( \\mathbf{{OP}} \\)\\( \\mathbf{{SH}} \\)\\( \\mathbf{{STA}} \\)\\( \\mathbf{{STO}} \\)\\( \\mathbf{{TC}} \\)\\( \\mathbf{{TS}} \\)\\( \\mathbf{{WE}} \\)\\( \\mathbf{{WM}} \\)\\( 3-AP_{50} \\)\\( AP_{50} \\)
\\( S_R \\)RetinaNet [20]59.819.369.781.317.272.768.749.418.469.571.333.334.175.867.159.681.044.138.062.554.2054.64
FCOS [24]62.137.974.681.232.972.175.361.827.469.178.734.450.680.168.668.181.349.143.464.562.6760.66
Oriented R-CNN [32]63.036.771.981.641.172.677.865.524.872.982.140.956.581.273.462.481.553.365.665.7762.41
GWD [38] (RetinaNet)61.523.673.681.117.472.768.347.220.771.273.233.934.377.664.757.580.942.139.760.254.7055.07
KLD [39] (RetinaNet)57.822.671.581.216.972.768.952.120.673.571.033.733.277.168.959.980.943.939.160.953.3055.32
KFIoU [41] (RetinaNet)60.636.673.680.927.072.673.456.525.473.972.032.945.875.865.257.680.048.040.158.859.9357.84
\\( \\underline{\\mathbf{S_I}} \\)\\( WSODet^† \\)[23]20.729.063.267.30.265.50.40.10.349.028.90.31.51.253.416.440.00.16.10.17.5322.20
\\( S_P \\)\\( PointOBB^† \\)[17]58.215.370.578.60.172.269.61.83.70.377.316.740.479.239.632.429.616.833.627.756.0738.08
Point2RBox-SK [47]41.99.162.952.810.872.23.043.95.59.725.19.121.024.020.425.171.74.516.116.321.9727.26
\\( S_H \\)H2RBox [42]57.114.472.282.617.571.256.555.21467.777.93140.776.366.263.481.550.43857.651.4354.57
H2RBox-v2 [48]55.517.876.980.527.772.263.058.624.473.980.333.947.277.458.760.981.448.141.153.955.2356.67
ABBSPO (Ours)69.515.776.287.529.972.375.361.228.174.181.734.748.279.367.461.481.554.741.553.864.3359.70
", + "bbox": [ + 94, + 77, + 903, + 237 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/70da91bf82d89ba4065aedda657eaef932fcf70ec09bc42e7af6258b288a4e32.jpg", + "table_caption": [ + "Table 2. Quantitative results of each category on the DIOR-R [5] test dataset for RBox-supervised $(S_R)$ , Image-supervised $(S_I)$ , Point-supervised $(S_P)$ and HBox-supervised $(S_H)$ methods. The $3-\\mathrm{AP}_{50}$ represents the mean $\\mathrm{AP}_{50}$ scores for three complex-shaped object categories: 'airplane' (APL), 'expressway service area' (ESA), and 'overpass' (OP). The notation $\\dagger$ indicates its results in the paper [17]." + ], + "table_footnote": [], + "table_body": "
| | Methods | PL | BD | BR | GTF | SV | LV | SH | TC | BC | ST | SBF | RA | HA | SP | HC | 3-AP50 | AP50 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $S_R$ | RetinaNet [20] | 87.5 | 75.1 | 39.9 | 59.6 | 66.3 | 66.3 | 78.2 | 90.5 | 55.0 | 62.7 | 47.1 | 63.6 | 59.4 | 55.1 | 43.0 | 61.87 | 63.3 |
| | FCOS [24] | 88.8 | 74.0 | 46.8 | 59.1 | 70.1 | 81.4 | 87.7 | 90.7 | 67.7 | 68.3 | 60.2 | 66.1 | 64.9 | 58.7 | 44.0 | 63.83 | 68.6 |
| | Oriented R-CNN [32] | 89.3 | 76.1 | 53.8 | 78.7 | 68.6 | 84.9 | 89.3 | 90.8 | 74.3 | 62.8 | 66.3 | 66.5 | 74.7 | 58.6 | 46.8 | 64.90 | 72.1 |
| | Oriented RepPoints [14] | 89.7 | 80.1 | 50.5 | 74.4 | 75.0 | 82.0 | 88.7 | 90.4 | 64.0 | 70.0 | 45.7 | 60.6 | 73.6 | 60.4 | 42.8 | 64.30 | 69.86 |
| | GWD [38] (RetinaNet) | 88.2 | 74.9 | 41.3 | 60.5 | 66.7 | 68.1 | 85.8 | 90.5 | 50.4 | 66.8 | 45.8 | 65.1 | 60.7 | 52.9 | 38.9 | 60.0 | 63.77 |
| | KLD [39] (RetinaNet) | 88.4 | 75.8 | 41.4 | 60.0 | 66.1 | 68.8 | 84.7 | 90.6 | 56.8 | 60.4 | 50.4 | 70.1 | 60.0 | 50.5 | 45.7 | 61.53 | 64.65 |
| | KFIoU [41] (RetinaNet) | 84.4 | 74.3 | 40.7 | 55.2 | 57.9 | 56.9 | 76.4 | 71.2 | 46.1 | 64.8 | 54.3 | 65.0 | 58.3 | 48.7 | 42.9 | 58.67 | 59.81 |
| $S_P$ | PointOBB [17]+FCOS | 32.4 | 67.3 | 0.8 | 53.6 | 2.3 | 9.7 | 18.8 | 0.3 | 9.9 | 12.8 | 0.5 | 54.0 | 11.0 | 34.1 | 11.4 | 25.97 | 21.26 |
| | Point2RBox-SK [47] | 50.1 | 63.7 | 1.6 | 44.7 | 23.9 | 34.7 | 32.7 | 78.8 | 41.2 | 32.2 | 2.1 | 34.3 | 20.8 | 42.5 | 7.2 | 33.27 | 34.03 |
| $S_H$ | H2RBox [42] | 89.5 | 73.1 | 37.3 | 55.1 | 70.7 | 76.4 | 85.4 | 90.3 | 66.5 | 67.3 | 59.6 | 64.9 | 60.6 | 57.9 | 36.5 | 61.30 | 66.07 |
| | H2RBox-v2 [48] | 89.4 | 74.8 | 45.4 | 56.0 | 70.3 | 76.6 | 87.9 | 90.5 | 69.3 | 67.5 | 56.7 | 64.7 | 65.3 | 55.5 | 45.5 | 63.47 | 67.69 |
| | ABBSPO (Ours) | 89.2 | 75.6 | 47.4 | 52.8 | 70.3 | 77.6 | 88.2 | 90.5 | 67.9 | 66.8 | 68.2 | 66.2 | 71.6 | 55.6 | 51.0 | 65.27 | 69.26 |
", + "bbox": [ + 91, + 291, + 903, + 441 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Table 3. Quantitative results of each category on the DOTA-v1.0 [29] validation dataset for $S_R$ , $S_I$ , $S_P$ and $S_H$ methods. The $\\underline{3 - AP}_{50}$ represents the mean $\\mathrm{AP}_{50}$ scores for three complex-shaped object categories: plane (PL), swimming pool (SP), and helicopter (HC). All the methods are re-trained using only train dataset for fair comparison.", + "bbox": [ + 89, + 443, + 906, + 484 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "improve their performance, except for the categories with orientation ambiguities, such as 'storage tank' (STO). Since the predicted angles learned through the SPA loss are also utilized in the ABBS module for scale adjustment, both the SPA loss and the ABBS module jointly contribute to performance improvement in symmetric categories. This joint effect is particularly evident in complex-shaped symmetric categories, such as APL and ESA, where performance gains are more significant. Nevertheless, the performance gains for the two symmetric and rectangular categories, TC and VE, are marginal. This is mainly because the ABBS module has limited impact on rectangular shapes, and the small object sizes lead to an insufficient number of pixels for reliably determining the symmetry axis via the SPA loss.", + "bbox": [ + 88, + 493, + 480, + 705 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Results on DOTA-v1.0. Table 3 shows the detection performance results on the DOTA-v1.0 [29]. Due to the nonresponsiveness of the DOTA evaluation server, we report our experimental results on the validation dataset (458 images) instead of the test dataset (937 images). It should be noted that the validation dataset was not used for training all the methods for fair comparison. We use $3\\mathrm{-AP}_{50}$ that measures the detection performance for the three complex-shaped object categories: 'plane', 'swimming pool' and 'helicopter'. Our ABBsPO achieves SOTA performance, outperforming H2RBox by $3.19\\%$ -point and H2RBox-v2 by $1.57\\%$ -point improvements. Moreover, our ABBsPO", + "bbox": [ + 89, + 705, + 482, + 887 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "even surpasses the FCOS Baseline by $0.66\\%$ -point lift.", + "bbox": [ + 511, + 494, + 877, + 508 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.3.2 Qualitative Comparison", + "text_level": 1, + "bbox": [ + 511, + 527, + 735, + 542 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Results on DIOR. As shown in the figures of the first row in Fig. 5, our ABBSPO is the only method that accurately captures both the orientation and scale of the airplane. Since DIOR annotations provide GT in T-HBox format, direct usage of T-HBoxes as GT for training to predict RBox leads to degradations in orientation and scale prediction accuracy for the existing HBox-supervised OOD methods as shown in the figures of columns 2 and 3 in Fig. 5. In contrast, our ABBSPO avoids such degradation by utilizing the ABBS module that optimally scales the GT HBox sizes for precise RBox prediction during training. It is also worthwhile to mention that the predicted orientations by our ABBSPO are more precisely obtained via our SPA loss. 
Furthermore, it should be noted that compared to the RBox-supervised baseline method (Rotated FCOS [24]), our approach demonstrates superior visual results, even under weakly supervised learning.", + "bbox": [ + 509, + 551, + 906, + 810 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Results on DOTA-v1.0. As shown in the figures of the second row in Fig. 5, ABBSPO very accurately predicts both the orientation and scale of the swimming pool, achieving similar accuracy for tennis court. Interestingly, only ABBSPO successfully detects the two tennis courts that are partially occluded by trees (red solid circle) while the other", + "bbox": [ + 511, + 810, + 908, + 902 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "8854", + "bbox": [ + 482, + 944, + 514, + 955 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/9e6b6a3079290eb1c61eae8063f2493eadaa18e74151467d1fea92894d2034ad.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
| Module: ABBS | Module: SPA | DIOR-R 3-AP50 | DIOR-R AP50 | DOTA-v1.0 AP50 |
| --- | --- | --- | --- | --- |
|  |  | 55.23 | 56.67 | 67.69 |
| ✓ |  | 62.13 | 58.35 | 68.59 |
|  | ✓ | 58.77 | 58.99 | 69.16 |
| ✓ | ✓ | 64.33 | 59.70 | 69.26 |
", + "bbox": [ + 102, + 88, + 346, + 171 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/19bf724e1698fae97cbb3bbcebebb512f03da67ee83be18d5195a971c825f065.jpg", + "table_caption": [ + "Table 4. Ablation results on ABBS module and SPA loss $(\\mathcal{L}_{\\mathrm{SPA}})$" + ], + "table_footnote": [], + "table_body": "
| Sampling: $\mathcal{L}_{\mathrm{SPA}}$ | Sampling: Others | DIOR-R 3-AP50 | DIOR-R AP50 |
| --- | --- | --- | --- |
|  |  | 61.67 | 58.93 |
| ✓ |  | 64.33 | 59.70 |
|  | ✓ | 43.63 | 50.51 |
| ✓ | ✓ | 45.1 | 50.91 |
", + "bbox": [ + 379, + 88, + 588, + 171 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/03510db1c1a69acae27a4fb3dd44a73dffdd0f6e941572bea494f9fb3004f152.jpg", + "table_caption": [ + "Table 5. Ablation results on proposal sampling in ${\\mathcal{L}}_{\\mathrm{{SPA}}}$ and other components." + ], + "table_footnote": [], + "table_body": "
| Min (scale) | Max (scale) | Interval | DIOR-R 3-AP50 | DIOR-R AP50 | DOTA-v1.0 AP50 |
| --- | --- | --- | --- | --- | --- |
| 0.9 | 1.1 | 0.05 | 57.97 | 58.15 | 69.26 |
| 0.5 | 1.5 | 0.1 | 61.67 | 59.62 | 68.8 |
| 1.0 | 1.5 | 0.1 | 64.33 | 59.70 | 68.9 |
| 1.0 | 2.0 | 0.1 | 56.07 | 55.46 | 66.55 |
", + "bbox": [ + 617, + 88, + 900, + 171 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Table 6. Ablation results on scale range in ABBS module.", + "bbox": [ + 614, + 176, + 877, + 203 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/f838ad4df2e507e0e2ecfce0c5614d593565459003deab0ecdc911cd4cdab3c7.jpg", + "image_caption": [ + "Figure 5. Qualitative results on DIOR [5, 13] and DOTA-v1.0 [29]. Zoom-in for better visualization. Rotated FCOS was trained only with GT RBoxes, while H2RBox, H2RBox-v2 and our ABBsPO were trained with GT T-HBoxes (1st row) and GT C-HBoxes (2nd row)." + ], + "image_footnote": [], + "bbox": [ + 117, + 210, + 888, + 503 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "methods failed. These results visually support the effectiveness of our ABBS module and SPA loss in learning the scales and orientations of objects accurately.", + "bbox": [ + 89, + 550, + 482, + 595 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "4.4. Ablation Studies", + "text_level": 1, + "bbox": [ + 89, + 604, + 254, + 619 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Ablation study on SPA loss and ABBS module. As shown in Table 4, both components contribute to performance improvements. The ABBS module effectively scales the GT HBoxes, leading to an increase in $\\mathrm{AP}_{50}$ performance on the DIOR dataset. Notably, it has a greater effect on complex-shaped object categories, resulting in a significant improvement in $3\\text{-AP}_{50}$ . Similarly, the SPA loss enhances angle prediction accuracy, also bringing an improvement in $\\mathrm{AP}_{50}$ .", + "bbox": [ + 89, + 628, + 482, + 750 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Ablation study on proposal sampling. As shown in Table 5, applying Top- $k$ proposal sampling exclusively to the SPA loss $(\\mathcal{L}_{\\mathrm{SPA}})$ yields the highest $\\mathrm{AP}_{50}$ performance, as the symmetric proposals of high-quality benefits $\\mathcal{L}_{\\mathrm{SPA}}$ . But, additional proposal sampling to the others $(\\mathcal{L}_{\\mathrm{rot}},\\mathcal{L}_{\\mathrm{flp}},\\mathcal{L}_{\\mathrm{reg}},\\mathcal{L}_{\\mathrm{cn}},\\mathcal{L}_{\\mathrm{cls}})$ significantly lowers the performance.", + "bbox": [ + 89, + 750, + 482, + 840 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Ablation study on scale range in the ABBS module. As shown in Table 6, the optimal scale range is influenced by the type of GT HBoxes. For DIOR's T-HBoxes, a scale range of 1 to 1.5 works well because it ensures that the", + "bbox": [ + 89, + 840, + 483, + 900 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "predicted RBoxes fully cover the objects boundary. On the other hand, for DOTA's C-HBoxes, which are already close to the optimal HBoxes, the optimal scale range is closer to 1. By adjusting the scale range based on the type of HBoxes, the ABBS module achieves high accuracy in predicting RBoxes for both datasets.", + "bbox": [ + 511, + 550, + 906, + 642 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "5. Conclusion", + "text_level": 1, + "bbox": [ + 511, + 664, + 633, + 679 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Our ABBSPO, a weakly supervised OOD framework, effectively learns RBox prediction regardless of the type of HBox annotations (T-HBox and C-HBox). 
With our proposed Adaptive Bounding Box Scaling (ABBS) and Symmetric Prior Angle (SPA) loss, we achieved enhanced orientation and scale accuracy for OOD, which is comparable to or even better than RBox-supervised methods. Extensive experimental results underscore the superiority of our approach, surpassing state-of-the-art HBox-supervised methods. Our method effectively bridges the gap between weakly supervised OOD and fully supervised OOD, making it a promising solution for applications requiring efficient and accurate object detection via training with relatively cheap annotations of HBoxes compared to RBoxes.", + "bbox": [ + 509, + 688, + 906, + 900 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "8855", + "bbox": [ + 482, + 944, + 514, + 955 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Acknowledgement", + "text_level": 1, + "bbox": [ + 91, + 90, + 250, + 107 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "This research was supported by Korea Institute of Marine Science & Technology Promotion (KIMST) funded by the Korea Coast Guard (RS-2023-00238652, Integrated Satellite-based Applications Development for Korea Coast Guard, $100\\%$ ).", + "bbox": [ + 89, + 113, + 483, + 185 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 91, + 210, + 187, + 227 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[1] Liangyu Chen, Tong Yang, Xiangyu Zhang, Wei Zhang, and Jian Sun. Points as queries: Weakly semi-supervised object detection by points. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 8823-8832, 2021. 3", + "[2] Pengfei Chen, Xuehui Yu, Xumeng Han, Najmul Hassan, Kai Wang, Jiachen Li, Jian Zhao, Humphrey Shi, Zhenjun Han, and Qixiang Ye. Point-to-box network for accurate object detection via single point supervision. In European Conference on Computer Vision, pages 51-67. Springer, 2022. 3", + "[3] Yihong Chen, Zheng Zhang, Yue Cao, Liwei Wang, Stephen Lin, and Han Hu. Reppoints v2: Verification meets regression for object detection. In Advances in Neural Information Processing Systems, pages 5621-5631. Curran Associates, Inc., 2020. 2", + "[4] Gong Cheng, Peicheng Zhou, and Junwei Han. Learning rotation-invariant convolutional neural networks for object detection in vhr optical remote sensing images. IEEE transactions on geoscience and remote sensing, 54(12):7405-7415, 2016. 2, 6", + "[5] Gong Cheng, Jiabao Wang, Ke Li, Xingxing Xie, Chunbo Lang, Yanqing Yao, and Junwei Han. Anchor-free oriented proposal generator for object detection. IEEE Transactions on Geoscience and Remote Sensing, 60:1-11, 2022. 6, 7, 8", + "[6] Jian Ding, Nan Xue, Yang Long, Gui-Song Xia, and Qikai Lu. Learning roi transformer for oriented object detection in aerial images. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2849-2858, 2019. 2", + "[7] Jiaming Han, Jian Ding, Jie Li, and Gui-Song Xia. Align deep features for oriented object detection. IEEE transactions on geoscience and remote sensing, 60:1-11, 2021. 2", + "[8] Jiaming Han, Jian Ding, Nan Xue, and Gui-Song Xia. Redet: A rotation-equivariant detector for aerial object detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2786-2795, 2021. 2", + "[9] Muhammad Haroon, Muhammad Shahzad, and Muhammad Moazam Fraz. Multisized object detection using spaceborne optical imagery. 
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 13:3032-3046, 2020. 2, 6", + "[10] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016. 6", + "[11] Shitian He, Huanxin Zou, Yingqian Wang, Boyang Li, Xu Cao, and Ning Jing. Learning remote sensing object detect" + ], + "bbox": [ + 93, + 234, + 483, + 901 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "tion with single point supervision. IEEE Transactions on Geoscience and Remote Sensing, 2023. 3", + "[12] Darius Lam, Richard Kuzma, Kevin McGee, Samuel Dooley, Michael Laielli, Matthew Klaric, Yaroslav Bulatov, and Brendan McCord. xview: Objects in context in overhead imagery. arXiv preprint arXiv:1802.07856, 2018. 2", + "[13] Ke Li, Gang Wan, Gong Cheng, Liqui Meng, and Junwei Han. Object detection in optical remote sensing images: A survey and a new benchmark. ISPRS journal of photogrammetry and remote sensing, 159:296-307, 2020. 2, 6, 8", + "[14] Wentong Li, Yijie Chen, Kaixuan Hu, and Jianke Zhu. Oriented reppoints for aerial object detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 1829-1838, 2022. 2, 7", + "[15] Tsung-Yi Lin, Piotr Dólar, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2117-2125, 2017. 6", + "[16] Xuebo Liu, Ding Liang, Shi Yan, Dagui Chen, Yu Qiao, and Junjie Yan. Fots: Fast oriented text spotting with a unified network. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5676-5685, 2018. 3", + "[17] Junwei Luo, Xue Yang, Yi Yu, Qingyun Li, Junchi Yan, and Yansheng Li. Pointobb: Learning oriented object detection via single point supervision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16730-16740, 2024. 2, 3, 7", + "[18] Botao Ren, Xue Yang, Yi Yu, Junwei Luo, and Zhidong Deng. Pointobb-v2: Towards simpler, faster, and stronger single point supervised oriented object detection. arXiv preprint arXiv:2410.08210, 2024. 3", + "[19] Zhongzheng Ren, Zhiding Yu, Xiaodong Yang, Ming-Yu Liu, Alexander G Schwing, and Jan Kautz. Ufo 2: A unified framework towards omni-supervised object detection. In European conference on computer vision, pages 288-313. Springer, 2020. 3", + "[20] T-YLPG Ross and GKHP Dólar. Focal loss for dense object detection. In proceedings of the IEEE conference on computer vision and pattern recognition, pages 2980-2988, 2017. 2, 6, 7", + "[21] Xian Sun, Peijin Wang, Zhiyuan Yan, Feng Xu, Ruiping Wang, Wenhui Diao, Jin Chen, Jihao Li, Yingchao Feng, Tao Xu, et al. Fair1m: A benchmark dataset for fine-grained object recognition in high-resolution remote sensing imagery. ISPRS Journal of Photogrammetry and Remote Sensing, 184:116-130, 2022. 2", + "[22] Yongqing Sun, Jie Ran, Feng Yang, Chenqiang Gao, Takayuki Kurozumi, Hideaki Kimata, and Ziqi Ye. Oriented object detection for remote sensing images based on weakly supervised learning. In 2021 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), pages 1-6. IEEE, 2021. 2", + "[23] Zhiwen Tan, Zhiguo Jiang, Chen Guo, and Haopeng Zhang. Wsodet: A weakly supervised oriented detector for aerial object detection. 
IEEE Transactions on Geoscience and Remote Sensing, 61:1-12, 2023. 2, 3, 7" + ], + "bbox": [ + 516, + 92, + 906, + 900 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "8856", + "bbox": [ + 482, + 944, + 514, + 955 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[24] Zhi Tian, Xiangxiang Chu, Xiaoming Wang, Xiaolin Wei, and Chunhua Shen. Fully convolutional one-stage 3d object detection on lidar range images. Advances in Neural Information Processing Systems, 35:34899-34911, 2022. 2, 4, 6, 7", + "[25] Hao Wang, Zhanchao Huang, Zhengchao Chen, Ying Song, and Wei Li. Multigrained angle representation for remote-sensing object detection. IEEE Transactions on Geoscience and Remote Sensing, 60:1-13, 2022. 2", + "[26] Jian Wang, Fan Li, and Haixia Bi. Gaussian focal loss: Learning distribution polarized angle prediction for rotated object detection in aerial images. IEEE Transactions on Geoscience and Remote Sensing, 60:1-13, 2022. 2", + "[27] Linfei Wang, Yibing Zhan, Xu Lin, Baosheng Yu, Liang Ding, Jianqing Zhu, and Dapeng Tao. Explicit and implicit box equivariance learning for weakly-supervised rotated object detection. IEEE Transactions on Emerging Topics in Computational Intelligence, 2024. 2", + "[28] Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4):600-612, 2004. 6", + "[29] Gui-Song Xia, Xiang Bai, Jian Ding, Zhen Zhu, Serge Belongie, Jiebo Luo, Mihai Datcu, Marcello Pelillo, and Liangpei Zhang. Dota: A large-scale dataset for object detection in aerial images. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3974-3983, 2018. 2, 6, 7, 8", + "[30] SUN Xian, WANG Zhirui, SUN Yuanrui, DIAO Wenhui, ZHANG Yue, and FU Kun. Air-sarship-1.0: High-resolution sar ship detection dataset. , 8(6):852–863, 2019. 2", + "[31] Zhifeng Xiao, Qing Liu, Gefu Tang, and Xiaofang Zhai. Elliptic fourier transformation-based histograms of oriented gradients for rotationally invariant object detection in remote-sensing images. International Journal of Remote Sensing, 36(2):618-644, 2015. 2", + "[32] Xingxing Xie, Gong Cheng, Jiabao Wang, Xiwen Yao, and Junwei Han. Oriented r-cnn for object detection. In Proceedings of the IEEE/CVF international conference on computer vision, pages 3520-3529, 2021. 6, 7", + "[33] Hang Xu, Xinyuan Liu, Haonan Xu, Yike Ma, Zunjie Zhu, Chenggang Yan, and Feng Dai. Rethinking boundary discontinuity problem for oriented object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 17406-17415, 2024. 2", + "[34] Xue Yang and Junchi Yan. Arbitrary-oriented object detection with circular smooth label. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part VIII 16, pages 677-694. Springer, 2020. 2", + "[35] Xue Yang and Junchi Yan. Arbitrary-oriented object detection with circular smooth label. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part VIII 16, pages 677-694. Springer, 2020. 2", + "[36] Xue Yang, Liping Hou, Yue Zhou, Wentao Wang, and Junchi Yan. Dense label encoding for boundary discontinuity free" + ], + "bbox": [ + 91, + 90, + 485, + 902 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "rotation detection. 
In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 15819-15829, 2021. 2", + "[37] Xue Yang, Junchi Yan, Ziming Feng, and Tao He. R3Det: Refined single-stage detector with feature refinement for rotating object. Proceedings of the AAAI Conference on Artificial Intelligence, 35(4):3163-3171, 2021. 2", + "[38] Xue Yang, Junchi Yan, Qi Ming, Wentao Wang, Xiaopeng Zhang, and Qi Tian. Rethinking rotated object detection with Gaussian Wasserstein distance loss. In International conference on machine learning, pages 11830-11841. PMLR, 2021. 2, 7", + "[39] Xue Yang, Xiaojiang Yang, Jirui Yang, Qi Ming, Wentao Wang, Qi Tian, and Junchi Yan. Learning high-precision bounding box for rotated object detection via Kullback-Leibler divergence. Advances in Neural Information Processing Systems, 34:18381-18394, 2021. 7", + "[40] Xue Yang, Gefan Zhang, Xiaojiang Yang, Yue Zhou, Wentao Wang, Jin Tang, Tao He, and Junchi Yan. Detecting rotated objects as Gaussian distributions and its 3-D generalization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(4):4335-4354, 2022.", + "[41] Xue Yang, Yue Zhou, Gefan Zhang, Jirui Yang, Wentao Wang, Junchi Yan, Xiaopeng Zhang, and Qi Tian. The KFIoU loss for rotated object detection. arXiv preprint arXiv:2201.12558, 2022. 2, 7", + "[42] Xue Yang, Gefan Zhang, Wentong Li, Yue Zhou, Xuehui Wang, and Junchi Yan. H2RBox: Horizontal box annotation is all you need for oriented object detection. In The Eleventh International Conference on Learning Representations, 2023. 1, 2, 3, 5, 6, 7", + "[43] Ze Yang, Shaohui Liu, Han Hu, Liwei Wang, and Stephen Lin. RepPoints: Point set representation for object detection. In Proceedings of the IEEE/CVF international conference on computer vision, pages 9657-9666, 2019. 2", + "[44] Xinyi Ying, Li Liu, Yingqian Wang, Ruojing Li, Nuo Chen, Zaiping Lin, Weidong Sheng, and Shilin Zhou. Mapping degeneration meets label evolution: Learning infrared small target detection with single point supervision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15528-15538, 2023. 3", + "[45] Jiahui Yu, Yuning Jiang, Zhangyang Wang, Zhimin Cao, and Thomas Huang. UnitBox: An advanced object detection network. In Proceedings of the 24th ACM international conference on Multimedia, pages 516-520, 2016. 6", + "[46] Yi Yu and Feipeng Da. Phase-shifting coder: Predicting accurate orientation in oriented object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13354-13363, 2023. 4", + "[47] Yi Yu, Xue Yang, Qingyun Li, Feipeng Da, Jifeng Dai, Yu Qiao, and Junchi Yan. Point2RBox: Combine knowledge from synthetic visual patterns for end-to-end oriented object detection with single point supervision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16783-16793, 2024. 2, 3, 7", + "[48] Yi Yu, Xue Yang, Qingyun Li, Yue Zhou, Feipeng Da, and Junchi Yan. H2RBox-v2: Incorporating symmetry for boost-" + ], + "bbox": [ + 516, + 92, + 903, + 902 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "8857", + "bbox": [ + 482, + 944, + 514, + 955 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "ing horizontal box supervised oriented object detection. Advances in Neural Information Processing Systems, 36, 2024. 1, 2, 3, 4, 5, 6, 7", + "[49] Tingxuan Yue, Yanmei Zhang, Jin Wang, Yanbing Xu, and Pengyun Liu.
A weak supervision learning paradigm for oriented ship detection in sar image. IEEE Transactions on Geoscience and Remote Sensing, 2024. 2" + ], + "bbox": [ + 91, + 90, + 480, + 189 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "8858", + "bbox": [ + 482, + 945, + 514, + 955 + ], + "page_idx": 10 + } +] \ No newline at end of file diff --git a/2025/ABBSPO_ Adaptive Bounding Box Scaling and Symmetric Prior based Orientation Prediction for Detecting Aerial Image Objects/4a21cd8b-8cbb-4db3-b4a2-0a741c570b46_model.json b/2025/ABBSPO_ Adaptive Bounding Box Scaling and Symmetric Prior based Orientation Prediction for Detecting Aerial Image Objects/4a21cd8b-8cbb-4db3-b4a2-0a741c570b46_model.json new file mode 100644 index 0000000000000000000000000000000000000000..ed97df704051dfa92c2623a0aa55f250d6900857 --- /dev/null +++ b/2025/ABBSPO_ Adaptive Bounding Box Scaling and Symmetric Prior based Orientation Prediction for Detecting Aerial Image Objects/4a21cd8b-8cbb-4db3-b4a2-0a741c570b46_model.json @@ -0,0 +1,2576 @@ +[ + [ + { + "type": "header", + "bbox": [ + 0.107, + 0.003, + 0.182, + 0.044 + ], + "angle": 0, + "content": "CVF" + }, + { + "type": "header", + "bbox": [ + 0.237, + 0.0, + 0.812, + 0.047 + ], + "angle": 0, + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." + }, + { + "type": "title", + "bbox": [ + 0.143, + 0.131, + 0.856, + 0.177 + ], + "angle": 0, + "content": "ABBSPO: Adaptive Bounding Box Scaling and Symmetric Prior based Orientation Prediction for Detecting Aerial Image Objects" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.203, + 0.834, + 0.238 + ], + "angle": 0, + "content": "Woojin Lee\\(^{1*}\\) Hyugjae Chang\\(^{1*}\\) Jaeho Moon\\(^{1}\\) Jaehyup Lee\\(^{2\\dagger}\\) Munchurl Kim\\(^{1\\dagger}\\) \n\\(^{1}\\)KAIST \\({}^{2}\\)KNU" + }, + { + "type": "text", + "bbox": [ + 0.254, + 0.241, + 0.745, + 0.258 + ], + "angle": 0, + "content": "{woojin412, hmnc97, jaeho.moon, mkimee}@kaist.ac.kr jaehyuplee@knu.ac.kr" + }, + { + "type": "text", + "bbox": [ + 0.307, + 0.26, + 0.692, + 0.275 + ], + "angle": 0, + "content": "https://kaist-viclab.github.io/ABBSPO_site/" + }, + { + "type": "image", + "bbox": [ + 0.114, + 0.305, + 0.258, + 0.408 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.117, + 0.41, + 0.255, + 0.421 + ], + "angle": 0, + "content": "① GT RBox ② GT C-HBox" + }, + { + "type": "image", + "bbox": [ + 0.262, + 0.296, + 0.383, + 0.42 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.263, + 0.422, + 0.382, + 0.535 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.385, + 0.296, + 0.506, + 0.42 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.387, + 0.421, + 0.506, + 0.534 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.51, + 0.296, + 0.63, + 0.42 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.51, + 0.421, + 0.63, + 0.534 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.636, + 0.313, + 0.885, + 0.426 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.636, + 0.433, + 0.885, + 0.535 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.636, + 0.536, + 0.895, + 0.558 + 
], + "angle": 0, + "content": "(c) Performance overview: 3-AP50 and AP50 for H2RBox, H2RBox-v2, and ABBSPO" + }, + { + "type": "image_caption", + "bbox": [ + 0.1, + 0.545, + 0.255, + 0.558 + ], + "angle": 0, + "content": "(a) Two types of GT HBox" + }, + { + "type": "image_caption", + "bbox": [ + 0.262, + 0.545, + 0.61, + 0.558 + ], + "angle": 0, + "content": "(b) Visual comparison of HBox-supervised oriented detectors" + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.569, + 0.907, + 0.64 + ], + "angle": 0, + "content": "Figure 1. Performance comparison of HBox-supervised oriented detectors. (a) Top: A coarse horizontal bounding box (C-HBox) \\((②)\\) and its corresponding rotated bounding box (RBox) \\((①)\\). Bottom: A tight horizontal bounding box (T-HBox) \\((②)\\) and its corresponding RBox \\((①)\\). (b) Our ABBSPO is capable of accurately detecting both orientations and scales for GT C-HBoxes and T-HBoxes. (c) Average Precision \\(\\left(\\mathrm{AP}_{50}\\right)\\) for H2RBox [42], H2RBox-v2 [48], and our ABBSPO. \\(3-\\mathrm{AP}_{50}\\) represents the mean \\(\\mathrm{AP}_{50}\\) for three complex-shaped objects: (i) DIOR: 'airplane', 'expressway service area', and 'overpass' and (ii) DOTA: 'plane', 'swimming pool', and 'helicopter'." + }, + { + "type": "title", + "bbox": [ + 0.248, + 0.652, + 0.327, + 0.669 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.691, + 0.484, + 0.844 + ], + "angle": 0, + "content": "Weakly supervised Oriented Object Detection (WS-OOD) has gained attention as a cost-effective alternative to fully supervised methods, providing efficiency and high accuracy. Among weakly supervised approaches, horizontal bounding box (HBox) supervised OOD stands out for its ability to directly leverage existing HBox annotations while achieving the highest accuracy under weak supervision settings. This paper introduces adaptive bounding box scaling and symmetry-prior-based orientation prediction, called ABBSPO, a framework for WS-OOD. Our ABB-" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.654, + 0.907, + 0.881 + ], + "angle": 0, + "content": "SPO addresses the limitations of previous HBox-supervised OOD methods, which compare ground truth (GT) HBoxes directly with predicted RBoxes' minimum circumscribed rectangles, often leading to inaccuracies. To overcome this, we propose: (i) Adaptive Bounding Box Scaling (ABBS) that appropriately scales the GT HBoxes to optimize for the size of each predicted RBox, ensuring more accurate prediction of RBoxes' scales; and (ii) a Symmetric Prior Angle (SPA) loss that uses the inherent symmetry of aerial objects for self-supervised learning, addressing the issue in previous methods where learning fails if they consistently make incorrect predictions for all three augmented views (original, rotated, and flipped). Extensive experimental results demonstrate that our ABBSPO achieves state-of-the-art results, outperforming existing methods." + }, + { + "type": "page_footnote", + "bbox": [ + 0.109, + 0.851, + 0.312, + 0.864 + ], + "angle": 0, + "content": "*Co-first authors (equal contribution)." + }, + { + "type": "page_footnote", + "bbox": [ + 0.111, + 0.864, + 0.258, + 0.876 + ], + "angle": 0, + "content": "† Co-corresponding authors." + }, + { + "type": "page_footnote", + "bbox": [ + 0.111, + 0.876, + 0.45, + 0.888 + ], + "angle": 0, + "content": "\\(^{1}\\) Korea Advanced Institute of Science and Technology (KAIST)."
+ }, + { + "type": "page_footnote", + "bbox": [ + 0.111, + 0.888, + 0.33, + 0.9 + ], + "angle": 0, + "content": "\\(^{2}\\)Kyungpook National University (KNU)." + }, + { + "type": "list", + "bbox": [ + 0.109, + 0.851, + 0.45, + 0.9 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.945, + 0.516, + 0.957 + ], + "angle": 0, + "content": "8848" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.093, + 0.09, + 0.223, + 0.106 + ], + "angle": 0, + "content": "1. Introduction" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.115, + 0.486, + 0.388 + ], + "angle": 0, + "content": "Object detection often leverages supervised learning with ground truth horizontal bounding box labels (GT HBoxes) to locate the objects of interest. However, the usage of GT HBoxes limits the precise localization of the objects with their orientations and tight surrounding boundaries, especially for objects such as airplanes and ships of various orientations in aerial images. To handle object detection as an oriented object detection problem, more precise rotated bounding box labels (GT RBoxes) are required, which is very costly to generate [49]. So, to mitigate this challenge, previous methods [17, 23, 42, 47-49] have explored weakly supervised oriented object detection (OOD) that utilizes less expensive forms of annotations, such as image-level, point and HBox annotations. Among these, the use of HBoxes is the most popular due to their widespread availability in existing public datasets [4, 9, 12, 21, 30, 31] to predict the RBoxes for objects of interest. So, this approach can detour the costly process of generating GT RBoxes." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.395, + 0.485, + 0.744 + ], + "angle": 0, + "content": "The previous weakly supervised (WS) learning of OOD [22, 27, 42, 48] utilizes GT HBoxes in the forms of coarse HBoxes, called GT C-HBoxes, as supervision to compare with the HBoxes derived as the minimum circumscribed rectangles from the predicted RBoxes by their OOD models. As shown in the upper figure of Fig. 1-(a), the GT C-HBoxes are defined as coarse horizontal bounding boxes that loosely encompass the boundaries of objects (not tightly bounded). The GT HBoxes of the DOTA [29] dataset are in the forms of C-HBoxes which are derived as the minimum circumscribed horizontal bounding boxes of their GT RBoxes. However, when the previous OOD methods [42, 48] are supervised with the other GT HBoxes that are in the form of tight HBoxes, called GT T-HBoxes (e.g. DIOR dataset [13]), as shown in the bottom figure of Fig. 1-(a), we found that their performances are significantly degraded because GT T-HBoxes tend to have different scales, compared to those of GT C-HBoxes (see Fig. 1-(c)). As shown in Fig. 1-(b), this causes the previous methods to predict either RBoxes with accurate orientations but inaccurate scales smaller than the sizes of their corresponding objects, or the RBoxes with inaccurate (close to horizontal) orientations but somewhat accurate scales (almost the same as HBoxes)." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.75, + 0.486, + 0.902 + ], + "angle": 0, + "content": "To overcome the above limitations of the previous WS-OOD methods, we propose an adaptive bounding box scaling and symmetry-prior-based orientation prediction, called as ABBSPO, as a WS-OOD framework that can be effectively trained with either GT C-HBoxes or GT T-Hboxes for aerial images. 
For this, (i) a novel Adaptive Bounding Box Scaling (ABBS) module is designed to have the flexibility of adjusting the GT HBoxes for each object to various sizes and then selecting the optimally scaled GT HBoxes that encompass the predicted RBoxes. Note that" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.092, + 0.908, + 0.244 + ], + "angle": 0, + "content": "the previous methods cannot offer such flexibility in adjusting GT HBoxes; (ii) An angle learning module is proposed in a self-supervised manner that utilizes the symmetric priors of the objects that often appear in top-down views of aerial images. As shown in Figs. 1-(b) and (c), our proposed method predicts accurate orientations and surrounding boxes of objects for both cases of using GT C-HBoxes and GT T-HBoxes, outperforming the previous methods in angle accuracy and localization in terms of average precision (AP). Our contributions are summarized as:" + }, + { + "type": "text", + "bbox": [ + 0.52, + 0.259, + 0.907, + 0.35 + ], + "angle": 0, + "content": "- To the best of our knowledge, our work is the first to address the limitations of previous weakly supervised OOD learning methods with T-HBoxes as GT. To overcome this, we propose a novel weakly supervised OOD method that can be effectively trained with T-HBoxes or C-HBoxes that can be cheaply annotated as GT;" + }, + { + "type": "text", + "bbox": [ + 0.521, + 0.351, + 0.907, + 0.426 + ], + "angle": 0, + "content": "- The adaptive bounding box scaling (ABBS) module is proposed to flexibly adjust the HBox (GT) for each object toward an appropriately scaled HBox. This allows part of the predicted RBoxes to lie outside the T-HBox (GT), yielding precise RBox prediction;" + }, + { + "type": "text", + "bbox": [ + 0.521, + 0.427, + 0.906, + 0.47 + ], + "angle": 0, + "content": "- A symmetric prior angle (SPA) loss is presented to enhance the orientation prediction accuracy by leveraging the symmetric priors of the objects in aerial images;" + }, + { + "type": "text", + "bbox": [ + 0.521, + 0.471, + 0.906, + 0.516 + ], + "angle": 0, + "content": "- Our method significantly outperforms the state-of-the-art OOD methods using weakly supervised learning with HBoxes (GT) for aerial datasets." + }, + { + "type": "list", + "bbox": [ + 0.52, + 0.259, + 0.907, + 0.516 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.536, + 0.655, + 0.551 + ], + "angle": 0, + "content": "2. Related Work" + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.561, + 0.887, + 0.577 + ], + "angle": 0, + "content": "2.1. RBox-supervised Oriented Object Detection" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.584, + 0.909, + 0.903 + ], + "angle": 0, + "content": "Oriented Object Detection (OOD) has gained significant attention, leading to extensive research in RBox-supervised methods (using GT RBoxes) such as Rotated RetinaNet [20], Rotated FCOS [24], \\(\\mathrm{R}^3\\mathrm{Det}\\) [37], ROI Transformer [6], ReDet [8], and \\(\\mathrm{S}^2\\mathrm{A}\\)-Net [7]. Rotated FCOS [24] improves OOD performance by introducing center-ness, which assigns weights to samples based on their proposal locations, thereby emphasizing well-positioned proposals. Oriented RepPoints methods [3, 14, 43], in contrast, utilize flexible receptive fields to extract key object points.
However, a common challenge in RBox-supervised OOD methods is the boundary discontinuity problem that arises from the definition and prediction of angle parameters \\((\\theta)\\) [33, 34]. To address this, several methods modified the way RBox representations are defined, e.g., as Gaussian distributions [25, 26, 35, 36, 38-41], thereby avoiding straightforward regression of angle parameters. On the other hand, in weakly supervised learning, the boundary discontinuity issue does not arise thanks to the absence of direct RBox supervision, allowing for more stable angle predictions without the need for complex mitigation strategies." + }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.945, + 0.516, + 0.956 + ], + "angle": 0, + "content": "8849" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.095, + 0.09, + 0.904, + 0.233 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.095, + 0.232, + 0.405, + 0.395 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.41, + 0.232, + 0.902, + 0.397 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.407, + 0.907, + 0.464 + ], + "angle": 0, + "content": "Figure 2. Overall pipeline of our ABBSPO framework. Our ABBSPO leverages weakly supervised learning from HBox annotations to accurately predict RBoxes. The framework incorporates the Orientation Learning Branch (OLB) for precise angle estimation, using the Symmetric Prior Angle (SPA) loss, and the Scale Learning Branch (SLB) for optimal scale adjustment via the Adaptive Bounding Box Scaling (ABBS) module. The framework supports both C-HBox and T-HBox ground truths, ensuring robust and accurate predictions." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.478, + 0.473, + 0.494 + ], + "angle": 0, + "content": "2.2. Weakly-supervised Oriented Object Detection" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.501, + 0.484, + 0.577 + ], + "angle": 0, + "content": "Weakly supervised OOD methods learn to predict RBoxes without directly utilizing GT RBoxes. The approaches in this domain are primarily categorized based on the types of labels they employ: image-based [23], point-based [17, 47], and HBox-based [42, 48]." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.59, + 0.484, + 0.665 + ], + "angle": 0, + "content": "Image-based supervision. WSODet [23] aims to generate pseudo-RBoxes without explicit localization supervision, thus encountering significant limitations when relying solely on image labels, especially for scenes with numerous and diverse object types." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.678, + 0.483, + 0.873 + ], + "angle": 0, + "content": "Point-based supervision. By leveraging one representative point at the center location as a label for each object, point label-based methods offer the advantage of being cost-effective [1, 2, 11, 16, 19, 44]. PointOBB [17] estimates angles from geometric relationships across original, rotated, and flipped views, and determines scales by analyzing proposal distributions between original and scaled input images. PointOBB-v2 [18] improves single point supervision by refining pseudo-label generation, leading to enhanced efficiency and accuracy. Point2RBox [47] employs fundamental patterns as priors to guide the regression of RBoxes. Point-based methods are cost-effective and straightforward, but still struggle with limited supervision."
+ }, + { + "type": "text", + "bbox": [ + 0.091, + 0.886, + 0.483, + 0.901 + ], + "angle": 0, + "content": "HBox-based supervision. As annotating HBoxes is more" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.479, + 0.907, + 0.75 + ], + "angle": 0, + "content": "straightforward than RBoxes, the HBox-supervised OOD has gained increasing attention in recent studies. H2RBox [42] utilized rotated views from the original view and provided self-supervision of object orientations without requiring the GT angles. H2RBox-v2 [48] expanded the use of geometric relationships between views by adding flipped views. These methods learn to predict RBoxes by converting minimum circumscribed HBoxes that encompass the predicted RBoxes to directly compare IoU with the GT HBoxes, thereby enabling HBox-supervised OOD. However, these methods only guarantee performance when trained with GT C-HBoxes that are derived from GT RBoxes. These methods struggle to learn precise OOD when being trained with GT T-HBoxes, because of the significant gap between the GT T-HBoxes and the HBoxes derived from predicted RBoxes. To address the issue, we propose an ABBS module to effectively handle both types of GTs (C-HBoxes and T-HBoxes)." + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.763, + 0.605, + 0.777 + ], + "angle": 0, + "content": "3. Method" + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.787, + 0.674, + 0.803 + ], + "angle": 0, + "content": "3.1. Overall Pipeline" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.81, + 0.907, + 0.901 + ], + "angle": 0, + "content": "Weakly supervised OOD aims to predict RBoxes using less expensive annotations such as HBoxes. Existing methods such as H2RBox [42] and its improved version H2RBox-v2 [48] have laid the foundation for directly predicting RBoxes from HBoxes. Our proposed pipeline builds upon the H2RBox-v2 [48] framework to effectively enable weakly" + }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.945, + 0.516, + 0.956 + ], + "angle": 0, + "content": "8850" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.091, + 0.485, + 0.289 + ], + "angle": 0, + "content": "supervised OOD from either C-HBoxes or T-Boxes. Figure 2 depicts the conceptual framework of our weakly supervised OOD method. Given an input image \\(\\mathrm{I}_{\\mathrm{ori}}\\) and its rotated and flipped version \\(\\mathrm{I}_{\\mathrm{rot}}\\) and \\(\\mathrm{I}_{\\mathrm{flp}}\\), our proposed pipeline obtains the RBox for each input view (\\(\\mathrm{I}_{\\mathrm{ori}}\\), \\(\\mathrm{I}_{\\mathrm{rot}}\\), \\(\\mathrm{I}_{\\mathrm{flp}}\\)), including the center position \\((x,y)\\), size \\((w,h)\\), angle \\((\\theta)\\), class scores \\((p)\\), and the center-ness \\((cn)\\). To classify each detected object, we follow FCOS [24] by supervising both the classification \\((p)\\) and the center-ness \\((cn)\\). The angle \\((\\theta)\\) prediction is obtained by using the method proposed in PSC [46]. Our contribution mainly lies in the supervision for localization, consisting of two branches: a scale learning branch (SLB) and an orientation learning branch (OLB)." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.295, + 0.484, + 0.508 + ], + "angle": 0, + "content": "In the SLB, the adaptive bounding box scaling (ABBS) module addresses the relationships between GT HBoxes and predicted RBoxes. This ABBS module provides proper minimum circumscribed rectangle for the accurately predicted RBoxes, by adaptively scaling the HBoxes based on a predefined scale range. 
The OLB guides accurate prediction of object orientation by utilizing three input views \\((\\mathrm{I}_{\\mathrm{ori}},\\mathrm{I}_{\\mathrm{rot}},\\mathrm{I}_{\\mathrm{flp}})\\), following H2RBox-v2 [48]. Additionally, the OLB utilizes these orientation predictions for our symmetric prior angle (SPA) loss, which leverages the inherent left-right symmetry of objects in aerial images. The SPA loss enforces further adjustment of the orientations of the predicted RBoxes so that they align with the orientations of symmetric objects such as airplanes, ships, and ground track fields." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.515, + 0.435, + 0.531 + ], + "angle": 0, + "content": "3.2. Adaptive Bounding Box Scaling Module" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.538, + 0.484, + 0.643 + ], + "angle": 0, + "content": "In Fig. 2, 'Scale Learning Branch' illustrates the conceptual process of our adaptive bounding box scaling (ABBS) module. In HBox-supervised OOD learning, the predicted RBoxes \\((RB^{\\mathrm{pred}})\\) must be compared with the GT HBoxes \\((HB^{\\mathrm{gt}})\\). Since they cannot be directly compared, \\(RB^{\\mathrm{pred}}\\) is first converted to \\(HB^{\\mathrm{pred}}\\), defined as the minimum circumscribed HBox of \\(RB^{\\mathrm{pred}}\\) as:" + }, + { + "type": "equation", + "bbox": [ + 0.205, + 0.647, + 0.482, + 0.665 + ], + "angle": 0, + "content": "\\[\nHB^{\\mathrm{pred}} = MCR(RB^{\\mathrm{pred}}), \\tag{1}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.669, + 0.485, + 0.715 + ], + "angle": 0, + "content": "where \\( MCR(\\cdot) \\) is an operator converting \\( RB \\) to the minimum circumscribed \\( HB \\), allowing \\( HB^{\\mathrm{pred}} \\) to be compared with \\( HB^{\\mathrm{gt}} \\). \\( HB^{\\mathrm{pred}} \\) and \\( RB^{\\mathrm{pred}} \\) are given by:" + }, + { + "type": "equation", + "bbox": [ + 0.154, + 0.72, + 0.484, + 0.748 + ], + "angle": 0, + "content": "\\[\nRB^{\\mathrm{pred}} = \\left[ x_{rb}^{\\mathrm{pred}}, y_{rb}^{\\mathrm{pred}}, w_{rb}^{\\mathrm{pred}}, h_{rb}^{\\mathrm{pred}}, \\theta_{rb}^{\\mathrm{pred}} \\right], \\tag{2}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.157, + 0.744, + 0.383, + 0.763 + ], + "angle": 0, + "content": "\\[\nHB^{\\mathrm{pred}} = \\left[ x_{hb}^{\\mathrm{pred}}, y_{hb}^{\\mathrm{pred}}, w_{hb}^{\\mathrm{pred}}, h_{hb}^{\\mathrm{pred}} \\right],\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.767, + 0.484, + 0.815 + ], + "angle": 0, + "content": "where \\((x_{rb}^{\\mathrm{pred}},y_{rb}^{\\mathrm{pred}})\\) and \\((x_{hb}^{\\mathrm{pred}},y_{hb}^{\\mathrm{pred}})\\) are the centers of \\(RB^{\\mathrm{pred}}\\) and \\(HB^{\\mathrm{pred}}\\), respectively.
The width \\(w\\) and height \\(h\\) of \\(HB^{\\mathrm{pred}}\\) can be computed as:" + }, + { + "type": "equation", + "bbox": [ + 0.149, + 0.821, + 0.484, + 0.848 + ], + "angle": 0, + "content": "\\[\nw _ {h b} ^ {\\text {p r e d}} = w _ {r b} ^ {\\text {p r e d}} \\left| \\cos \\theta_ {r b} ^ {\\text {p r e d}} \\right| + h _ {r b} ^ {\\text {p r e d}} \\left| \\sin \\theta_ {r b} ^ {\\text {p r e d}} \\right|, \\tag {3}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.152, + 0.844, + 0.42, + 0.863 + ], + "angle": 0, + "content": "\\[\nh _ {h b} ^ {\\text {p r e d}} = w _ {r b} ^ {\\text {p r e d}} | \\sin \\theta_ {r b} ^ {\\text {p r e d}} | + h _ {r b} ^ {\\text {p r e d}} | \\cos \\theta_ {r b} ^ {\\text {p r e d}} |.\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.87, + 0.484, + 0.902 + ], + "angle": 0, + "content": "If \\( RB^{\\mathrm{opt}} \\) is defined as the tightly surrounding object boundary RBox with the precise orientation, then we have" + }, + { + "type": "image", + "bbox": [ + 0.553, + 0.089, + 0.648, + 0.18 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.592, + 0.182, + 0.61, + 0.194 + ], + "angle": 0, + "content": "(a)" + }, + { + "type": "image", + "bbox": [ + 0.652, + 0.089, + 0.743, + 0.18 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.688, + 0.182, + 0.706, + 0.193 + ], + "angle": 0, + "content": "(b)" + }, + { + "type": "image", + "bbox": [ + 0.748, + 0.089, + 0.866, + 0.18 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.798, + 0.182, + 0.815, + 0.193 + ], + "angle": 0, + "content": "(c)" + }, + { + "type": "image", + "bbox": [ + 0.555, + 0.195, + 0.649, + 0.269 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.595, + 0.273, + 0.613, + 0.285 + ], + "angle": 0, + "content": "(d)" + }, + { + "type": "image", + "bbox": [ + 0.653, + 0.195, + 0.764, + 0.27 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.699, + 0.273, + 0.716, + 0.285 + ], + "angle": 0, + "content": "(e)" + }, + { + "type": "image", + "bbox": [ + 0.766, + 0.195, + 0.864, + 0.269 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.808, + 0.273, + 0.824, + 0.285 + ], + "angle": 0, + "content": "(f)" + }, + { + "type": "image_caption", + "bbox": [ + 0.513, + 0.286, + 0.907, + 0.37 + ], + "angle": 0, + "content": "Figure 3. Analysis of scale adjustment function \\((f(\\cdot))\\) based on the shape and angle of objects. (a) rectangular shape, (b) rounded rectangular shape, (c) complex shape, (d) horizontal orientation, (e) slightly tilted orientation, (f) diagonal orientation. The cyan solid box, green solid box and red dotted box represent GT T-HBox, RBox and adjusted GT HBox, respectively." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.386, + 0.907, + 0.583 + ], + "angle": 0, + "content": "\\(HB^{\\mathrm{opt}} = MCR(RB^{\\mathrm{opt}})\\) for the 'Predicted RBox Projection Process' block in the SLB of Fig. 2. When the size of \\(HB^{\\mathrm{gt}}\\) is larger or smaller than that of \\(HB^{\\mathrm{opt}}\\), the model needs to adaptively adjust and find the optimal scale within a predefined range of scale variations. We propose an ABBS module that estimates \\(RB^{\\mathrm{opt}}\\) by adaptively adjusting the scale of \\(HB^{\\mathrm{gt}}\\). 
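To make the \(MCR(\cdot)\) projection of Eqs. 1-3 concrete, here is a minimal Python sketch (ours, not the authors' released code); the function name and the radians convention for the angle are illustrative assumptions:

```python
import math

def mcr(x, y, w, h, theta):
    """Minimum circumscribed HBox of an RBox (Eqs. 1-3).

    (x, y): RBox center; (w, h): RBox width/height;
    theta: rotation angle in radians.
    Returns the HBox as (x, y, w_hb, h_hb) per Eq. 3.
    """
    w_hb = w * abs(math.cos(theta)) + h * abs(math.sin(theta))
    h_hb = w * abs(math.sin(theta)) + h * abs(math.cos(theta))
    return x, y, w_hb, h_hb

# E.g., a 10x4 RBox rotated by 45 degrees circumscribes to ~9.9 x 9.9:
print(mcr(0.0, 0.0, 10.0, 4.0, math.pi / 4))
```

Note that the projection changes only the width and height: by construction, the minimum circumscribed HBox shares the RBox's center.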
Notably, even if \\(RB^{\\mathrm{pred}}\\) is accurately estimated, its \\(HB^{\\mathrm{pred}}\\) may not overlap well with \\(HB^{\\mathrm{gt}}\\), leading to a low Intersection over Union (IoU) value. Enforcing \\(HB^{\\mathrm{pred}}\\) to match \\(HB^{\\mathrm{gt}}\\) can cause a misalignment with \\(RB^{\\mathrm{pred}}\\) because \\(HB^{\\mathrm{gt}}\\) may not be ideal in estimating \\(RB^{\\mathrm{opt}}\\). To address this, our ABBS module adaptively scales and adjusts \\(HB^{\\mathrm{gt}}\\) in the context of \\(RB^{\\mathrm{pred}}\\), rather than forcing \\(HB^{\\mathrm{pred}}\\) to match \\(HB^{\\mathrm{gt}}\\)." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.591, + 0.907, + 0.637 + ], + "angle": 0, + "content": "For the detailed explanation of our ABBS module, we first define a set of scaled versions of \\(HB^{\\mathrm{gt}}\\) for the 'Scaled GT HBoxes Generation Process' in the SLB of Fig. 2 as:" + }, + { + "type": "equation", + "bbox": [ + 0.591, + 0.646, + 0.905, + 0.667 + ], + "angle": 0, + "content": "\\[\n\\mathbf {H B} _ {\\mathrm {s}} ^ {\\mathrm {g t}} = \\left\\{H B _ {\\mathrm {s}, 1} ^ {\\mathrm {g t}}, H B _ {\\mathrm {s}, 2} ^ {\\mathrm {g t}}, \\dots , H B _ {\\mathrm {s}, \\mathrm {K}} ^ {\\mathrm {g t}} \\right\\}, \\tag {4}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.678, + 0.907, + 0.788 + ], + "angle": 0, + "content": "where \\(HB_{s,k}^{\\mathrm{gt}}\\) is the k-th scaled version of \\(HB^{\\mathrm{gt}}\\) and \\(\\mathbf{K}\\) is the total number of scaled variations of \\(HB^{\\mathrm{gt}}\\). \\(\\mathbf{HB}_{\\mathbf{S}}^{\\mathrm{gt}}\\) is determined as the combinations of angle-adjusted width and height scale factors, \\(\\{s_{adj,i}^{w}\\}_{i=1}^{N_s}\\) and \\(\\{s_{adj,j}^{h}\\}_{j=1}^{N_s}\\), that are transformed from basic width and height scale factors, \\(\\{s_i^{w}\\}_{i=1}^{N_s}\\) and \\(\\{s_j^{h}\\}_{j=1}^{N_s}\\) by considering angle prediction. Basic scale factors are uniformly spaced in a predefined scale range as:" + }, + { + "type": "equation", + "bbox": [ + 0.535, + 0.798, + 0.905, + 0.817 + ], + "angle": 0, + "content": "\\[\nS _ {w} = \\left\\{s _ {1} ^ {w}, s _ {2} ^ {w}, \\dots , s _ {N _ {s}} ^ {w} \\right\\}, S _ {h} = \\left\\{s _ {1} ^ {h}, s _ {2} ^ {h}, \\dots , s _ {N _ {s}} ^ {h} \\right\\}, \\tag {5}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.83, + 0.905, + 0.857 + ], + "angle": 0, + "content": "where \\( S_w \\) and \\( S_h \\) are the sets of basic width and height scale factors, respectively. 
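Anticipating the formal definitions in Eqs. 6-9 just below, the following hedged sketch shows how the scaled GT HBox set might be enumerated. The [1.0, 1.5] range with interval 0.1 is taken from the best DIOR-R setting reported later in Table 6; every name here is illustrative rather than the authors' implementation:

```python
import numpy as np

# Basic scale factors, uniformly spaced over a predefined range (Eqs. 5-6).
# With range [1.0, 1.5] and interval 0.1, N_s = 6 factors per side.
S_w = np.round(np.arange(1.0, 1.5 + 1e-9, 0.1), 2)
S_h = S_w.copy()

def f(theta, s):
    """Angle-adjusted scale factor (Eq. 8), theta in [0, pi/2):
    no scaling at theta = 0 or pi/2, full scaling s at theta = pi/4."""
    if theta < np.pi / 4:
        return 4.0 / np.pi * (s - 1.0) * theta + 1.0
    return 4.0 / np.pi * (1.0 - s) * theta + (2.0 * s - 1.0)

def scaled_gt_hboxes(x, y, w, h, theta_pred):
    """Enumerate the K = N_s * N_s scaled variants of one GT HBox (Eqs. 4, 7, 9)."""
    return [(x, y, w * f(theta_pred, sw), h * f(theta_pred, sh))
            for sw in S_w for sh in S_h]
```

Per Eq. 10 below, training then keeps, for each proposal, the scaled variant with the smallest IoU loss against \(HB^{\mathrm{pred}}\), so a well-oriented prediction is not penalized merely because the raw GT HBox is tight.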
\\( s_i^w \\) and \\( s_i^h \\) are calculated as:" + }, + { + "type": "equation", + "bbox": [ + 0.568, + 0.865, + 0.85, + 0.883 + ], + "angle": 0, + "content": "\\[\ns _ {i} ^ {w} = s _ {1} ^ {w} + \\left(s _ {N _ {s}} ^ {w} - s _ {1} ^ {w}\\right) / \\left(N _ {s} - 1\\right) \\cdot (i - 1),\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.57, + 0.886, + 0.905, + 0.905 + ], + "angle": 0, + "content": "\\[\ns _ {j} ^ {h} = s _ {1} ^ {h} + \\left(s _ {N _ {s}} ^ {h} - s _ {1} ^ {h}\\right) / \\left(N _ {s} - 1\\right) \\cdot (j - 1), \\tag {6}\n\\]" + }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.945, + 0.514, + 0.957 + ], + "angle": 0, + "content": "8851" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.091, + 0.485, + 0.223 + ], + "angle": 0, + "content": "where \\( s_{N_s}^w = s_{N_s}^h \\) is the predefined largest basic scale factor for both width and height of \\( HB^{\\mathrm{gt}} \\), and \\( N_{s} \\) is the number of uniform quantization for the both range \\( [s_1^w..s_{N_s}^w] \\) and \\( [s_1^h..s_{N_s}^h] \\). In order to generate \\( HB_{s,i}^{\\mathrm{gt}} \\), we transform the basic width and height scale factors, \\( \\{s_i^w\\}_{i=1}^{N_s} \\) and \\( \\{s_j^h\\}_{j=1}^{N_s} \\), into angle-adjusted width and height scale factors, \\( \\{s_{adj,i}^w\\}_{i=1}^{N_s} \\) and \\( \\{s_{adj,j}^h\\}_{j=1}^{N_s} \\), using the predicted angle \\( \\theta^{\\mathrm{pred}} \\) through the scale adjustment function \\( f \\):" + }, + { + "type": "equation", + "bbox": [ + 0.152, + 0.225, + 0.483, + 0.247 + ], + "angle": 0, + "content": "\\[\ns _ {a d j, i} ^ {w} = f \\left(\\theta_ {r b} ^ {\\text {p r e d}}, s _ {i} ^ {w}\\right), s _ {a d j, j} ^ {h} = f \\left(\\theta_ {r b} ^ {\\text {p r e d}}, s _ {j} ^ {h}\\right). \\tag {7}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.093, + 0.254, + 0.485, + 0.538 + ], + "angle": 0, + "content": "To define \\( f(\\cdot) \\), it's essential to consider the object types and rotation angles. Fig. 3 shows the effect of scale adjustments on T-HBoxes for three object types: (i) For rectangular objects like tennis courts (Fig. 3-(a)), the adjusted T-HBox (red dotted box) aligns precisely with the GT T-HBox (cyan solid box) and tightly circumscribes the optimal RBox (green solid box); (ii) For rounded rectangular objects (Fig. 3-(b)), the optimal RBox slightly exceeds the GT T-HBox; (iii) Complex shapes like airplanes (Fig. 3-(c)) show a larger discrepancy, with parts of the optimal RBox lying outside the GT T-HBox. Furthermore, scale adjustments also depend on rotation angles: (i) Fig. 3-(d) for a vertically (or horizontally) aligned airplane, the GT T-HBox and optimal RBox are identical; (ii) For Fig. 3-(e) with a small rotation angle, they differ slightly; (iii) For Fig. 3-(f) with a larger angle, the difference is more pronounced. Therefore, to take the object's shape types and orientation degrees into account for the scale adjustment for the widths and heights of T-HBoxes, \\( f(\\cdot) \\) in Eq. 7 is defined as:" + }, + { + "type": "equation", + "bbox": [ + 0.099, + 0.544, + 0.483, + 0.585 + ], + "angle": 0, + "content": "\\[\nf (\\theta , s) = \\left\\{ \\begin{array}{l l} \\frac {4}{\\pi} (s - 1) \\cdot \\theta + 1, & \\text {i f} 0 \\leq \\theta < \\frac {\\pi}{4}, \\\\ \\frac {4}{\\pi} (1 - s) \\cdot \\theta + (2 s - 1), & \\text {i f} \\frac {\\pi}{4} \\leq \\theta < \\frac {\\pi}{2}, \\end{array} \\right. 
\\tag {8}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.593, + 0.483, + 0.64 + ], + "angle": 0, + "content": "where the angle range is set to \\(\\theta \\in [0,\\pi /2)\\) due to the periodicity of the angle. According to \\(f(\\cdot)\\) and Eq. 7, \\(HB_{s,k}^{\\mathrm{gt}}\\) in \\(\\mathbf{HB}_s^{\\mathrm{gt}}\\) can be expressed as:" + }, + { + "type": "equation", + "bbox": [ + 0.16, + 0.65, + 0.483, + 0.669 + ], + "angle": 0, + "content": "\\[\nH B _ {s, k} ^ {\\mathrm {g t}} = \\left[ x ^ {\\mathrm {g t}}, y ^ {\\mathrm {g t}}, w ^ {\\mathrm {g t}} \\cdot s _ {a d j, i} ^ {w}, h ^ {\\mathrm {g t}} \\cdot s _ {a d j, j} ^ {h} \\right], \\tag {9}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.671, + 0.484, + 0.747 + ], + "angle": 0, + "content": "where \\((x^{\\mathrm{gt}},y^{\\mathrm{gt}})\\) is the center point, and \\(w^{\\mathrm{gt}}\\) and \\(h^{\\mathrm{gt}}\\) are width and height of \\(HB^{\\mathrm{gt}}\\). As shown in the 'IoU Calculation' and 'Optimal Scale Learning' blocks in the SLB of Fig. 2, \\(HB^{\\mathrm{opt}}\\) among \\(\\{HB_{s,k}^{\\mathrm{gt}}\\}_{k=1}^{K}\\) can be determined which minimizes the IoU loss for all proposals by an ABBS loss as:" + }, + { + "type": "equation", + "bbox": [ + 0.1, + 0.757, + 0.483, + 0.806 + ], + "angle": 0, + "content": "\\[\n\\mathcal {L} _ {\\mathrm {a s}} = \\frac {1}{N _ {p}} \\sum_ {l = 1} ^ {N _ {p}} \\min _ {s _ {i} ^ {w} \\in S _ {w}, \\atop s _ {j} ^ {h} \\in S _ {h}} \\mathcal {L} _ {\\mathrm {I o U}} \\left(H B _ {l} ^ {\\text {p r e d}}, H B _ {s, k} ^ {\\text {g t}, l} \\left(s _ {i} ^ {w}, s _ {j} ^ {h}\\right)\\right), \\tag {10}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.805, + 0.484, + 0.901 + ], + "angle": 0, + "content": "where \\(N_{p}\\) is the total number of proposals for input I. \\(HB_{l}^{\\mathrm{pred}}\\) is \\(HB^{\\mathrm{pred}}\\) for \\(l\\)-th proposal, and \\(HB_{s,k}^{\\mathrm{gt},l}(s_i^w,s_j^h)\\) is \\(k\\)-th scaled \\(HB^{\\mathrm{gt}}\\), as \\(HB_{s,k}^{\\mathrm{gt}}\\), whose width and height are scaled for \\(l\\)-th proposal according to Eq. 7 to Eq. 9. Finally, by adding a regularization term using the IoU loss between \\(HB^{\\mathrm{pred}}\\) and non-scaled \\(HB^{\\mathrm{gt}}\\), we formed the regression loss as:" + }, + { + "type": "image_caption", + "bbox": [ + 0.549, + 0.089, + 0.601, + 0.101 + ], + "angle": 0, + "content": "Airplane" + }, + { + "type": "image", + "bbox": [ + 0.517, + 0.102, + 0.63, + 0.178 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.664, + 0.089, + 0.692, + 0.101 + ], + "angle": 0, + "content": "Ship" + }, + { + "type": "image", + "bbox": [ + 0.632, + 0.102, + 0.721, + 0.178 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.715, + 0.089, + 0.784, + 0.101 + ], + "angle": 0, + "content": "Tennis court" + }, + { + "type": "image", + "bbox": [ + 0.726, + 0.102, + 0.778, + 0.178 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.819, + 0.089, + 0.865, + 0.101 + ], + "angle": 0, + "content": "Vehicle" + }, + { + "type": "image", + "bbox": [ + 0.779, + 0.102, + 0.905, + 0.179 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.513, + 0.187, + 0.907, + 0.23 + ], + "angle": 0, + "content": "Figure 4. Examples of symmetric objects in aerial images. 
In SPA loss, the \\( x,y \\) coordinates and angle \\( \\theta \\) are used to define the symmetry axis, splitting the object into two parts for comparison." + }, + { + "type": "equation", + "bbox": [ + 0.522, + 0.244, + 0.907, + 0.27 + ], + "angle": 0, + "content": "\\[\n\\mathcal{L}_{\\mathrm{reg}} = \\mathcal{L}_{\\mathrm{as}} + \\alpha \\cdot \\left(1/N_{p}\\right) \\sum_{l=1}^{N_{p}} \\mathcal{L}_{\\mathrm{IoU}}\\left(HB_{l}^{\\mathrm{pred}}, HB^{\\mathrm{gt},l}\\right), \\tag{11}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.281, + 0.905, + 0.296 + ], + "angle": 0, + "content": "where \\(\\alpha\\) is a hyperparameter which is set to 0.01 by default." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.322, + 0.769, + 0.339 + ], + "angle": 0, + "content": "3.3. Symmetric Prior Angle Loss" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.345, + 0.906, + 0.406 + ], + "angle": 0, + "content": "In aerial images, objects such as airplanes, ships, tennis courts, and vehicles are often captured from top-down viewpoints, where most of these objects exhibit symmetries in their appearance, as shown in Fig. 4." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.415, + 0.907, + 0.626 + ], + "angle": 0, + "content": "In the previous pipelines [42, 48], both the regression loss associated with the bounding box's center point, width, and height, and the angle loss for accurate angle prediction were trained in a balanced manner. However, they tended to predict inaccurate angles while simultaneously maximizing the bounding boxes' IoUs. This issue stems from the fact that, since the angles could not be directly supervised due to the absence of angle annotations, the angles were indirectly supervised from augmented views with rotations and flips. This is problematic because, when the difference between two predicted angles for the same object in the original view and its rotated view is equal to the rotation angle applied to the original view, the angle loss is zero even though the predicted angles are inaccurate." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.636, + 0.907, + 0.832 + ], + "angle": 0, + "content": "To mitigate such a predicted angle ambiguity, we propose a symmetric prior angle (SPA) loss. Based on the SPA loss, the model can be trained to predict precise angles by indirectly utilizing the object's symmetric characteristics. As shown in Fig. 4, the detected objects are symmetric about the symmetry axes (blue dotted lines) passing through the center points of their RBoxes. That is, the pixel contents of the two parts divided by the symmetry axis of the RBox are compared, and their dissimilarity is used as supervision for our SPA loss.
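A minimal sketch of this two-part comparison (formalized as Eq. 13 below) follows. It assumes each selected proposal has already been resampled onto the paper's fixed 50x50 grid with the predicted symmetry axis as the vertical center line; that resampling step is omitted, and all names are ours, not the authors':

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def spa_loss(patches):
    """Sketch of the SPA loss (Eq. 13). Each entry of `patches` is a
    50x50 grayscale crop of one proposal, values assumed in [0, 1],
    resampled so the predicted symmetry axis is the vertical midline.
    One half is mirrored and compared with the other via SSIM."""
    losses = []
    for p in patches:
        left, right = p[:, :25], p[:, 25:]
        mirrored = np.ascontiguousarray(right[:, ::-1])
        losses.append(1.0 - ssim(left, mirrored, data_range=1.0))
    return float(np.mean(losses))
```

A correctly oriented proposal yields near-identical halves (loss near 0), while a tilted axis mixes object and background pixels and raises the loss, which is the gradient signal the branch exploits.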
It is noted that our SPA loss utilizes only symmetric objects, incorporating the symmetry prior from GT class labels for proposals identified as symmetric, such as 'airplane,' 'ship,' 'vehicle,' and 'tennis court'." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.84, + 0.906, + 0.9 + ], + "angle": 0, + "content": "To avoid applying the SPA loss when \\( RB^{pred} \\) is inaccurate for the respective objects, we first check the fidelity scores of the proposals and sample the Top- \\( k \\) proposals as supervision in the SPA loss as:" + }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.945, + 0.516, + 0.957 + ], + "angle": 0, + "content": "8852" + } + ], + [ + { + "type": "equation", + "bbox": [ + 0.101, + 0.089, + 0.483, + 0.114 + ], + "angle": 0, + "content": "\\[\n\\left\\{RB_{n}^{\\mathrm{pred}}\\right\\}_{n=1}^{N_{\\mathrm{spa}}} = \\operatorname{Top-}k\\left(\\left\\{RB_{l}^{\\mathrm{pred}}\\right\\}_{l=1}^{N_{p}} \\;\\middle|\\; \\mathrm{sc}_{\\mathrm{cls}}^{(l)} + \\mathrm{sc}_{\\mathrm{loc}}^{(l)}\\right) \\tag{12}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.12, + 0.484, + 0.319 + ], + "angle": 0, + "content": "where \\(\\mathrm{sc}_{\\mathrm{cls}}^{(l)}\\) and \\(\\mathrm{sc}_{\\mathrm{loc}}^{(l)}\\) are the classification and localization scores for the \\(l\\)-th \\(RB^{pred}\\), and \\(N_{p}\\) is the total number of proposals. From Eq. 12, the selected \\(N_{spa}\\) proposals are considered in the SPA loss, by which the predicted angles of \\(RB^{pred}(\\theta_{rb}^{pred})\\) are enforced to align with the objects' orientations by maximizing the similarity, measured by the Structural Similarity Index (SSIM [28]), between the pixel contents in the two parts of each \\(RB^{pred}\\). It should be noted that, even in cases where symmetric objects may not appear perfectly symmetric due to contextual factors like shadows or asymmetrical cargo arrangements, their symmetry is still maintained by the inherent structural symmetry between the two parts. Our SPA loss is defined as:" + }, + { + "type": "equation", + "bbox": [ + 0.11, + 0.325, + 0.483, + 0.349 + ], + "angle": 0, + "content": "\\[\n\\mathcal{L}_{\\mathrm{SPA}} = \\left(1/N_{\\mathrm{spa}}\\right) \\sum_{n=1}^{N_{\\mathrm{spa}}} \\left(1 - \\operatorname{SSIM}\\left(I_{p1}^{(n)}, I_{p2}^{(n)}\\right)\\right) \\tag{13}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.349, + 0.484, + 0.424 + ], + "angle": 0, + "content": "To remove the influence of object sizes in \\( L_{\\mathrm{SPA}} \\) computation, the proposals \\( (RB^{pred}) \\) are projected onto a fixed-size grid of \\( 50 \\times 50 \\). Then, the pixel content \\( (I_{p1}) \\) in one part of the proposal's projection is compared with that \\( (I_{p2}) \\) of the other part that is flipped before the comparison." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.43, + 0.245, + 0.445 + ], + "angle": 0, + "content": "3.4. Loss Functions" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.453, + 0.484, + 0.544 + ], + "angle": 0, + "content": "In the orientation learning branch (OLB), two angle-based losses [48], \\(\\mathcal{L}_{\\mathrm{rot}}\\) and \\(\\mathcal{L}_{\\mathrm{flp}}\\), are adopted to leverage the consistency between the original, rotated, and flipped views of each object proposal.
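Before the formal statement in Eq. 14 just below, a short sketch of what this view consistency amounts to; the exact snapping rule inside the smooth-L1 "snap" loss of H2RBox-v2 is our assumption, not the authors' code:

```python
import torch
import torch.nn.functional as F

def snap_loss(delta, target, period=torch.pi):
    """Smooth-L1 'snap' loss in the spirit of l_s from H2RBox-v2 [48]:
    the error is wrapped to the nearest multiple of the angle period,
    so theta and theta + pi count as the same orientation.
    The wrapping rule is an assumption made for this sketch."""
    err = delta - target
    err = err - torch.round(err / period) * period
    return F.smooth_l1_loss(err, torch.zeros_like(err))

def view_consistency(theta, theta_rot, theta_flp, R):
    """Eq. 14: rotated-view angles should differ from the original by the
    applied rotation R; flipped-view angles should be the negation."""
    L_rot = snap_loss(theta_rot - theta, torch.full_like(theta, R))
    L_flp = snap_loss(theta_flp + theta, torch.zeros_like(theta))
    return L_rot, L_flp
```

As the surrounding text notes, these two terms alone leave a constant-offset ambiguity, which is exactly what the SPA loss is introduced to remove.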
For the rotated and flipped views, \\(\\mathcal{L}_{\\mathrm{rot}}\\) and \\(\\mathcal{L}_{\\mathrm{flp}}\\) are computed by comparing with the predicted angle \\(\\theta\\) in the original view \\((\\mathrm{I}_{\\mathrm{ori}})\\):" + }, + { + "type": "equation", + "bbox": [ + 0.13, + 0.55, + 0.483, + 0.567 + ], + "angle": 0, + "content": "\\[\n\\mathcal{L}_{\\mathrm{rot}} = l_{s}\\left(\\theta_{\\mathrm{rot}} - \\theta, R\\right), \\quad \\mathcal{L}_{\\mathrm{flp}} = l_{s}\\left(\\theta_{\\mathrm{flp}} + \\theta, 0\\right), \\tag{14}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.573, + 0.483, + 0.603 + ], + "angle": 0, + "content": "where \\( l_{s} \\) denotes a smooth L1 loss-based snap loss [48], and \\( R \\) denotes the angle applied to \\( \\mathbf{I}_{ori} \\). The final angle loss is:" + }, + { + "type": "equation", + "bbox": [ + 0.164, + 0.61, + 0.482, + 0.627 + ], + "angle": 0, + "content": "\\[\n\\mathcal{L}_{\\mathrm{ang}} = \\beta\\left(\\lambda_{r}\\mathcal{L}_{\\mathrm{rot}} + \\lambda_{f}\\mathcal{L}_{\\mathrm{flp}}\\right) + \\gamma\\mathcal{L}_{\\mathrm{SPA}}, \\tag{15}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.632, + 0.484, + 0.693 + ], + "angle": 0, + "content": "where \\(\\lambda_r = 1.0, \\lambda_f = 0.05, \\beta = 0.6\\), and \\(\\gamma = 0.05\\) are empirically determined for all our experiments. In the scale learning branch (SLB), we use our IoU-based [45] regression loss \\(\\mathcal{L}_{\\mathrm{reg}}\\) in Eq. 11. The overall loss is defined as:" + }, + { + "type": "equation", + "bbox": [ + 0.115, + 0.699, + 0.483, + 0.716 + ], + "angle": 0, + "content": "\\[\n\\mathcal{L}_{\\mathrm{total}} = \\lambda_{\\mathrm{ang}}\\mathcal{L}_{\\mathrm{ang}} + \\lambda_{\\mathrm{reg}}\\mathcal{L}_{\\mathrm{reg}} + \\lambda_{\\mathrm{cn}}\\mathcal{L}_{\\mathrm{cn}} + \\lambda_{\\mathrm{cls}}\\mathcal{L}_{\\mathrm{cls}} \\tag{16}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.721, + 0.484, + 0.768 + ], + "angle": 0, + "content": "where \\(\\mathcal{L}_{\\mathrm{cn}}\\) is the center-ness loss [24], and \\(\\mathcal{L}_{\\mathrm{cls}}\\) is the classification loss based on the focal loss [20]. The weighting factors, \\(\\lambda_{\\mathrm{ang}}, \\lambda_{\\mathrm{reg}}, \\lambda_{\\mathrm{cn}}\\), and \\(\\lambda_{\\mathrm{cls}}\\) are all set to 1." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.778, + 0.224, + 0.794 + ], + "angle": 0, + "content": "4. Experiments" + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.802, + 0.194, + 0.817 + ], + "angle": 0, + "content": "4.1. Datasets" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.825, + 0.484, + 0.902 + ], + "angle": 0, + "content": "We trained and tested all the methods across four different datasets: DIOR [5, 13], DOTA-v1.0 [29], SIMD [9], and NWPU VHR-10 [4], which are summarized in Table 1. The details for the datasets and results for SIMD and NWPU are described in Suppl." + }, + { + "type": "table", + "bbox": [ + 0.521, + 0.09, + 0.903, + 0.176 + ], + "angle": 0, + "content": "
Datasets | # of Images | Image Widths | # of Objects | # of Classes | Annotation Types
DIOR [13] | 22,463 | 800 | 190,288 | 20 | T-HBox
DIOR-R [5] | 22,463 | 800 | 190,288 | 20 | RBox
DOTA-v1.0 [29] | 2,806 | 800 ~ 4K | 188,282 | 15 | C-HBox, RBox
SIMD [9] | 5,000 | 1024 | 45,096 | 15 | T-HBox
NWPU VHR-10 [4] | 800 | ~1000 | 3,775 | 10 | T-HBox
" + }, + { + "type": "table_caption", + "bbox": [ + 0.541, + 0.18, + 0.878, + 0.193 + ], + "angle": 0, + "content": "Table 1. Characteristics of datasets used for experiments" + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.214, + 0.731, + 0.23 + ], + "angle": 0, + "content": "4.2. Implementation Details" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.237, + 0.906, + 0.328 + ], + "angle": 0, + "content": "Our proposed ABBSPO pipeline adopts the FCOS [24] detector as the baseline architecture, utilizing a ResNet-50 [10] backbone and an FPN [15] neck, based on the H2RBox-v2 [48] framework. To ensure fairness, all models are configured with the ResNet-50 [10] backbone and trained for 12 epochs on NVIDIA RTX3090 GPUs." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.339, + 0.718, + 0.355 + ], + "angle": 0, + "content": "4.3. Experimental Results" + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.362, + 0.747, + 0.377 + ], + "angle": 0, + "content": "4.3.1 Quantitative Comparison" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.386, + 0.906, + 0.537 + ], + "angle": 0, + "content": "It should be noted that objects such as round-shaped pools have orientation ambiguities regardless of their annotations (RBoxes) [42]. In order to avoid confusion in orientation learning, annotations are modified as having horizontal orientations if the objects belong to the following categories: (i) DIOR-R: 'baseball field', 'chimney', 'golf field', 'stadium', 'storage tank', 'windmill'; and (ii) DOTA-v1.0: 'baseball diamond', 'stadium', 'roundabout'. Accordingly, their orientation learning is enforced to predict the horizontal orientations, similar to previous works [42, 48]." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.539, + 0.907, + 0.903 + ], + "angle": 0, + "content": "Results on DIOR-R. Table 2 shows the OOD results. In addition to \\(\\mathrm{AP}_{50}\\) metric, we use \\(3\\text{-AP}_{50}\\) that focuses on the detection performance of the three complex-shaped object categories: 'airplane' (APL), 'expressway service area' (ESA), and 'overpass' (OP). As shown, our ABBSPO outperforms all weakly supervised OOD methods. Especially, in terms of \\(3\\text{-AP}_{50}\\), our ABBSPO is superior to the HBox-supervised SOTA methods, H2RBox and H2RBox-v2, with large margins of average \\(12.9\\%\\) -point and average \\(9.1\\%\\) -point improvements. In overall \\(\\mathrm{AP}_{50}\\) performance, our ABBSPO surpasses H2RBox by \\(5.13\\%\\) -point and the H2RBox-v2 by \\(3.03\\%\\) -point. It is noted that our ABBSPO not only surpasses our base detector (H2RBox-v2 [48]) but also performs comparably to other RBox-supervised OOD methods, such as FCOS [24] and Oriented R-CNN [32]. It is worth noting that, compared to the RBox-supervised OOD methods, our ABBSPO shows even superior performance with large margins from \\(6.5\\%\\) -point to \\(11.7\\%\\) -point, especially on the 'airplane' that has the most complex shape. Notably, the ABBS module is less effective for rectangular objects, such as 'tennis court' (TC) and 'vehicle' (VE), as scaling is often unnecessary. However, it proves highly beneficial for complex-shaped objects, such as the ESA. The SPA loss is applied only to symmetric categories and helps" + }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.945, + 0.516, + 0.957 + ], + "angle": 0, + "content": "8853" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.095, + 0.078, + 0.904, + 0.238 + ], + "angle": 0, + "content": "
Methods | APL | APO | BF | BC | BR | CH | ESA | ETS | DAM | GF | GTF | HA | OP | SH | STA | STO | TC | TS | VE | WM | 3-AP50 | AP50
S_R | RetinaNet [20] | 59.8 | 19.3 | 69.7 | 81.3 | 17.2 | 72.7 | 68.7 | 49.4 | 18.4 | 69.5 | 71.3 | 33.3 | 34.1 | 75.8 | 67.1 | 59.6 | 81.0 | 44.1 | 38.0 | 62.5 | 54.20 | 54.64
 | FCOS [24] | 62.1 | 37.9 | 74.6 | 81.2 | 32.9 | 72.1 | 75.3 | 61.8 | 27.4 | 69.1 | 78.7 | 34.4 | 50.6 | 80.1 | 68.6 | 68.1 | 81.3 | 49.1 | 43.4 | 64.5 | 62.67 | 60.66
 | Oriented R-CNN [32] | 63.0 | 36.7 | 71.9 | 81.6 | 41.1 | 72.6 | 77.8 | 65.5 | 24.8 | 72.9 | 82.1 | 40.9 | 56.5 | 81.2 | 73.4 | 62.4 | 81.5 | 53.3 | – | 65.6 | 65.77 | 62.41
 | GWD [38] (RetinaNet) | 61.5 | 23.6 | 73.6 | 81.1 | 17.4 | 72.7 | 68.3 | 47.2 | 20.7 | 71.2 | 73.2 | 33.9 | 34.3 | 77.6 | 64.7 | 57.5 | 80.9 | 42.1 | 39.7 | 60.2 | 54.70 | 55.07
 | KLD [39] (RetinaNet) | 57.8 | 22.6 | 71.5 | 81.2 | 16.9 | 72.7 | 68.9 | 52.1 | 20.6 | 73.5 | 71.0 | 33.7 | 33.2 | 77.1 | 68.9 | 59.9 | 80.9 | 43.9 | 39.1 | 60.9 | 53.30 | 55.32
 | KFIoU [41] (RetinaNet) | 60.6 | 36.6 | 73.6 | 80.9 | 27.0 | 72.6 | 73.4 | 56.5 | 25.4 | 73.9 | 72.0 | 32.9 | 45.8 | 75.8 | 65.2 | 57.6 | 80.0 | 48.0 | 40.1 | 58.8 | 59.93 | 57.84
S_I | WSODet† [23] | 20.7 | 29.0 | 63.2 | 67.3 | 0.2 | 65.5 | 0.4 | 0.1 | 0.3 | 49.0 | 28.9 | 0.3 | 1.5 | 1.2 | 53.4 | 16.4 | 40.0 | 0.1 | 6.1 | 0.1 | 7.53 | 22.20
S_P | PointOBB† [17] | 58.2 | 15.3 | 70.5 | 78.6 | 0.1 | 72.2 | 69.6 | 1.8 | 3.7 | 0.3 | 77.3 | 16.7 | 40.4 | 79.2 | 39.6 | 32.4 | 29.6 | 16.8 | 33.6 | 27.7 | 56.07 | 38.08
 | Point2RBox-SK [47] | 41.9 | 9.1 | 62.9 | 52.8 | 10.8 | 72.2 | 3.0 | 43.9 | 5.5 | 9.7 | 25.1 | 9.1 | 21.0 | 24.0 | 20.4 | 25.1 | 71.7 | 4.5 | 16.1 | 16.3 | 21.97 | 27.26
S_H | H2RBox [42] | 57.1 | 14.4 | 72.2 | 82.6 | 17.5 | 71.2 | 56.5 | 55.2 | 14 | 67.7 | 77.9 | 31 | 40.7 | 76.3 | 66.2 | 63.4 | 81.5 | 50.4 | 38 | 57.6 | 51.43 | 54.57
 | H2RBox-v2 [48] | 55.5 | 17.8 | 76.9 | 80.5 | 27.7 | 72.2 | 63.0 | 58.6 | 24.4 | 73.9 | 80.3 | 33.9 | 47.2 | 77.4 | 58.7 | 60.9 | 81.4 | 48.1 | 41.1 | 53.9 | 55.23 | 56.67
 | ABBSPO (Ours) | 69.5 | 15.7 | 76.2 | 87.5 | 29.9 | 72.3 | 75.3 | 61.2 | 28.1 | 74.1 | 81.7 | 34.7 | 48.2 | 79.3 | 67.4 | 61.4 | 81.5 | 54.7 | 41.5 | 53.8 | 64.33 | 59.70
" + }, + { + "type": "table_caption", + "bbox": [ + 0.09, + 0.24, + 0.905, + 0.283 + ], + "angle": 0, + "content": "Table 2. Quantitative results of each category on the DIOR-R [5] test dataset for RBox-supervised \\((S_R)\\), Image-supervised \\((S_I)\\), Point-supervised \\((S_P)\\) and HBox-supervised \\((S_H)\\) methods. The \\(3-\\mathrm{AP}_{50}\\) represents the mean \\(\\mathrm{AP}_{50}\\) scores for three complex-shaped object categories: 'airplane' (APL), 'expressway service area' (ESA), and 'overpass' (OP). The notation \\(\\dagger\\) indicates its results in the paper [17]." + }, + { + "type": "table", + "bbox": [ + 0.093, + 0.292, + 0.905, + 0.442 + ], + "angle": 0, + "content": "
Methods | PL | BD | BR | GTF | SV | LV | SH | TC | BC | ST | SBF | RA | HA | SP | HC | 3-AP50 | AP50
S_R | RetinaNet [20] | 87.5 | 75.1 | 39.9 | 59.6 | 66.3 | 66.3 | 78.2 | 90.5 | 55.0 | 62.7 | 47.1 | 63.6 | 59.4 | 55.1 | 43.0 | 61.87 | 63.3
 | FCOS [24] | 88.8 | 74.0 | 46.8 | 59.1 | 70.1 | 81.4 | 87.7 | 90.7 | 67.7 | 68.3 | 60.2 | 66.1 | 64.9 | 58.7 | 44.0 | 63.83 | 68.6
 | Oriented R-CNN [32] | 89.3 | 76.1 | 53.8 | 78.7 | 68.6 | 84.9 | 89.3 | 90.8 | 74.3 | 62.8 | 66.3 | 66.5 | 74.7 | 58.6 | 46.8 | 64.90 | 72.1
 | Oriented RepPoints [14] | 89.7 | 80.1 | 50.5 | 74.4 | 75.0 | 82.0 | 88.7 | 90.4 | 64.0 | 70.0 | 45.7 | 60.6 | 73.6 | 60.4 | 42.8 | 64.30 | 69.86
 | GWD [38] (RetinaNet) | 88.2 | 74.9 | 41.3 | 60.5 | 66.7 | 68.1 | 85.8 | 90.5 | 50.4 | 66.8 | 45.8 | 65.1 | 60.7 | 52.9 | 38.9 | 60.00 | 63.77
 | KLD [39] (RetinaNet) | 88.4 | 75.8 | 41.4 | 60.0 | 66.1 | 68.8 | 84.7 | 90.6 | 56.8 | 60.4 | 50.4 | 70.1 | 60.0 | 50.5 | 45.7 | 61.53 | 64.65
 | KFIoU [41] (RetinaNet) | 84.4 | 74.3 | 40.7 | 55.2 | 57.9 | 56.9 | 76.4 | 71.2 | 46.1 | 64.8 | 54.3 | 65.0 | 58.3 | 48.7 | 42.9 | 58.67 | 59.81
S_P | PointOBB [17]+FCOS | 32.4 | 67.3 | 0.8 | 53.6 | 2.3 | 9.7 | 18.8 | 0.3 | 9.9 | 12.8 | 0.5 | 54.0 | 11.0 | 34.1 | 11.4 | 25.97 | 21.26
 | Point2RBox-SK [47] | 50.1 | 63.7 | 1.6 | 44.7 | 23.9 | 34.7 | 32.7 | 78.8 | 41.2 | 32.2 | 2.1 | 34.3 | 20.8 | 42.5 | 7.2 | 33.27 | 34.03
S_H | H2RBox [42] | 89.5 | 73.1 | 37.3 | 55.1 | 70.7 | 76.4 | 85.4 | 90.3 | 66.5 | 67.3 | 59.6 | 64.9 | 60.6 | 57.9 | 36.5 | 61.30 | 66.07
 | H2RBox-v2 [48] | 89.4 | 74.8 | 45.4 | 56.0 | 70.3 | 76.6 | 87.9 | 90.5 | 69.3 | 67.5 | 56.7 | 64.7 | 65.3 | 55.5 | 45.5 | 63.47 | 67.69
 | ABBSPO (Ours) | 89.2 | 75.6 | 47.4 | 52.8 | 70.3 | 77.6 | 88.2 | 90.5 | 67.9 | 66.8 | 68.2 | 66.2 | 71.6 | 55.6 | 51.0 | 65.27 | 69.26
" + }, + { + "type": "table_caption", + "bbox": [ + 0.09, + 0.444, + 0.907, + 0.486 + ], + "angle": 0, + "content": "Table 3. Quantitative results of each category on the DOTA-v1.0 [29] validation dataset for \\(S_R\\), \\(S_I\\), \\(S_P\\) and \\(S_H\\) methods. The \\(\\underline{3 - AP}_{50}\\) represents the mean \\(\\mathrm{AP}_{50}\\) scores for three complex-shaped object categories: plane (PL), swimming pool (SP), and helicopter (HC). All the methods are re-trained using only train dataset for fair comparison." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.494, + 0.482, + 0.706 + ], + "angle": 0, + "content": "improve their performance, except for the categories with orientation ambiguities, such as 'storage tank' (STO). Since the predicted angles learned through the SPA loss are also utilized in the ABBS module for scale adjustment, both the SPA loss and the ABBS module jointly contribute to performance improvement in symmetric categories. This joint effect is particularly evident in complex-shaped symmetric categories, such as APL and ESA, where performance gains are more significant. Nevertheless, the performance gains for the two symmetric and rectangular categories, TC and VE, are marginal. This is mainly because the ABBS module has limited impact on rectangular shapes, and the small object sizes lead to an insufficient number of pixels for reliably determining the symmetry axis via the SPA loss." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.707, + 0.483, + 0.888 + ], + "angle": 0, + "content": "Results on DOTA-v1.0. Table 3 shows the detection performance results on the DOTA-v1.0 [29]. Due to the nonresponsiveness of the DOTA evaluation server, we report our experimental results on the validation dataset (458 images) instead of the test dataset (937 images). It should be noted that the validation dataset was not used for training all the methods for fair comparison. We use \\(3\\mathrm{-AP}_{50}\\) that measures the detection performance for the three complex-shaped object categories: 'plane', 'swimming pool' and 'helicopter'. Our ABBsPO achieves SOTA performance, outperforming H2RBox by \\(3.19\\%\\) -point and H2RBox-v2 by \\(1.57\\%\\) -point improvements. Moreover, our ABBsPO" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.495, + 0.878, + 0.51 + ], + "angle": 0, + "content": "even surpasses the FCOS Baseline by \\(0.66\\%\\) -point lift." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.528, + 0.736, + 0.543 + ], + "angle": 0, + "content": "4.3.2 Qualitative Comparison" + }, + { + "type": "text", + "bbox": [ + 0.511, + 0.553, + 0.907, + 0.811 + ], + "angle": 0, + "content": "Results on DIOR. As shown in the figures of the first row in Fig. 5, our ABBSPO is the only method that accurately captures both the orientation and scale of the airplane. Since DIOR annotations provide GT in T-HBox format, direct usage of T-HBoxes as GT for training to predict RBox leads to degradations in orientation and scale prediction accuracy for the existing HBox-supervised OOD methods as shown in the figures of columns 2 and 3 in Fig. 5. In contrast, our ABBSPO avoids such degradation by utilizing the ABBS module that optimally scales the GT HBox sizes for precise RBox prediction during training. It is also worthwhile to mention that the predicted orientations by our ABBSPO are more precisely obtained via our SPA loss. 
Furthermore, it should be noted that, compared to the RBox-supervised baseline method (Rotated FCOS [24]), our approach demonstrates superior visual results even under weakly supervised learning." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.811, + 0.909, + 0.903 + ], + "angle": 0, + "content": "Results on DOTA-v1.0. As shown in the second row of Fig. 5, ABBSPO very accurately predicts both the orientation and scale of the swimming pool, and achieves similar accuracy for the tennis courts. Interestingly, only ABBSPO successfully detects the two tennis courts that are partially occluded by trees (red solid circle) while the other" + }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.945, + 0.516, + 0.956 + ], + "angle": 0, + "content": "8854" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.104, + 0.089, + 0.348, + 0.172 + ], + "angle": 0, + "content": "
Module | DIOR-R | DOTA-v1.0
ABBS | SPA | 3-AP50 | AP50 | AP50
 |  | 55.23 | 56.67 | 67.69
✓ |  | 62.13 | 58.35 | 68.59
 | ✓ | 58.77 | 58.99 | 69.16
✓ | ✓ | 64.33 | 59.70 | 69.26
" + }, + { + "type": "table_caption", + "bbox": [ + 0.095, + 0.176, + 0.359, + 0.203 + ], + "angle": 0, + "content": "Table 4. Ablation results on ABBS module and SPA loss \\((\\mathcal{L}_{\\mathrm{SPA}})\\)" + }, + { + "type": "table", + "bbox": [ + 0.38, + 0.089, + 0.589, + 0.172 + ], + "angle": 0, + "content": "
Sampling | DIOR-R
\\(\\mathcal{L}_{\\mathrm{SPA}}\\) | Others | 3-AP50 | AP50
 |  | 61.67 | 58.93
✓ |  | 64.33 | 59.70
 | ✓ | 43.63 | 50.51
✓ | ✓ | 45.10 | 50.91
" + }, + { + "type": "table_caption", + "bbox": [ + 0.362, + 0.176, + 0.608, + 0.204 + ], + "angle": 0, + "content": "Table 5. Ablation results on proposal sampling in \\( {\\mathcal{L}}_{\\mathrm{{SPA}}} \\) and other components." + }, + { + "type": "table", + "bbox": [ + 0.619, + 0.089, + 0.901, + 0.172 + ], + "angle": 0, + "content": "
Scale Range | DIOR-R | DOTA-v1.0
Min | Max | Interval | 3-AP50 | AP50 | AP50
0.9 | 1.1 | 0.05 | 57.97 | 58.15 | 69.26
0.5 | 1.5 | 0.1 | 61.67 | 59.62 | 68.80
1.0 | 1.5 | 0.1 | 64.33 | 59.70 | 68.90
1.0 | 2.0 | 0.1 | 56.07 | 55.46 | 66.55
" + }, + { + "type": "table_caption", + "bbox": [ + 0.616, + 0.177, + 0.879, + 0.204 + ], + "angle": 0, + "content": "Table 6. Ablation results on scale range in ABBS module." + }, + { + "type": "image", + "bbox": [ + 0.118, + 0.212, + 0.889, + 0.504 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.509, + 0.908, + 0.539 + ], + "angle": 0, + "content": "Figure 5. Qualitative results on DIOR [5, 13] and DOTA-v1.0 [29]. Zoom-in for better visualization. Rotated FCOS was trained only with GT RBoxes, while H2RBox, H2RBox-v2 and our ABBsPO were trained with GT T-HBoxes (1st row) and GT C-HBoxes (2nd row)." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.551, + 0.483, + 0.597 + ], + "angle": 0, + "content": "methods failed. These results visually support the effectiveness of our ABBS module and SPA loss in learning the scales and orientations of objects accurately." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.606, + 0.255, + 0.621 + ], + "angle": 0, + "content": "4.4. Ablation Studies" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.629, + 0.483, + 0.75 + ], + "angle": 0, + "content": "Ablation study on SPA loss and ABBS module. As shown in Table 4, both components contribute to performance improvements. The ABBS module effectively scales the GT HBoxes, leading to an increase in \\(\\mathrm{AP}_{50}\\) performance on the DIOR dataset. Notably, it has a greater effect on complex-shaped object categories, resulting in a significant improvement in \\(3\\text{-AP}_{50}\\). Similarly, the SPA loss enhances angle prediction accuracy, also bringing an improvement in \\(\\mathrm{AP}_{50}\\)." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.751, + 0.483, + 0.841 + ], + "angle": 0, + "content": "Ablation study on proposal sampling. As shown in Table 5, applying Top- \\(k\\) proposal sampling exclusively to the SPA loss \\((\\mathcal{L}_{\\mathrm{SPA}})\\) yields the highest \\(\\mathrm{AP}_{50}\\) performance, as the symmetric proposals of high-quality benefits \\(\\mathcal{L}_{\\mathrm{SPA}}\\). But, additional proposal sampling to the others \\((\\mathcal{L}_{\\mathrm{rot}},\\mathcal{L}_{\\mathrm{flp}},\\mathcal{L}_{\\mathrm{reg}},\\mathcal{L}_{\\mathrm{cn}},\\mathcal{L}_{\\mathrm{cls}})\\) significantly lowers the performance." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.841, + 0.484, + 0.901 + ], + "angle": 0, + "content": "Ablation study on scale range in the ABBS module. As shown in Table 6, the optimal scale range is influenced by the type of GT HBoxes. For DIOR's T-HBoxes, a scale range of 1 to 1.5 works well because it ensures that the" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.551, + 0.907, + 0.643 + ], + "angle": 0, + "content": "predicted RBoxes fully cover the objects boundary. On the other hand, for DOTA's C-HBoxes, which are already close to the optimal HBoxes, the optimal scale range is closer to 1. By adjusting the scale range based on the type of HBoxes, the ABBS module achieves high accuracy in predicting RBoxes for both datasets." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.665, + 0.634, + 0.68 + ], + "angle": 0, + "content": "5. Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.511, + 0.689, + 0.907, + 0.901 + ], + "angle": 0, + "content": "Our ABBSPO, a weakly supervised OOD framework, effectively learns RBox prediction regardless of the type of HBox annotations (T-HBox and C-HBox). 
With our proposed Adaptive Bounding Box Scaling (ABBS) module and Symmetric Prior Angle (SPA) loss, we achieve enhanced orientation and scale accuracy for OOD that is comparable to, or even better than, that of RBox-supervised methods. Extensive experimental results underscore the superiority of our approach, which surpasses state-of-the-art HBox-supervised methods. Our method effectively bridges the gap between weakly supervised OOD and fully supervised OOD, making it a promising solution for applications requiring efficient and accurate object detection via training with relatively cheap HBox annotations compared to RBoxes." + }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.945, + 0.516, + 0.957 + ], + "angle": 0, + "content": "8855" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.092, + 0.091, + 0.251, + 0.108 + ], + "angle": 0, + "content": "Acknowledgement" + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.114, + 0.484, + 0.186 + ], + "angle": 0, + "content": "This research was supported by the Korea Institute of Marine Science & Technology Promotion (KIMST), funded by the Korea Coast Guard (RS-2023-00238652, Integrated Satellite-based Applications Development for Korea Coast Guard, \\(100\\%\\))." + }, + { + "type": "title", + "bbox": [ + 0.093, + 0.212, + 0.188, + 0.228 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.236, + 0.484, + 0.306 + ], + "angle": 0, + "content": "[1] Liangyu Chen, Tong Yang, Xiangyu Zhang, Wei Zhang, and Jian Sun. Points as queries: Weakly semi-supervised object detection by points. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 8823-8832, 2021. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.308, + 0.484, + 0.377 + ], + "angle": 0, + "content": "[2] Pengfei Chen, Xuehui Yu, Xumeng Han, Najmul Hassan, Kai Wang, Jiachen Li, Jian Zhao, Humphrey Shi, Zhenjun Han, and Qixiang Ye. Point-to-box network for accurate object detection via single point supervision. In European Conference on Computer Vision, pages 51-67. Springer, 2022. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.379, + 0.483, + 0.447 + ], + "angle": 0, + "content": "[3] Yihong Chen, Zheng Zhang, Yue Cao, Liwei Wang, Stephen Lin, and Han Hu. RepPoints v2: Verification meets regression for object detection. In Advances in Neural Information Processing Systems, pages 5621-5631. Curran Associates, Inc., 2020. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.449, + 0.484, + 0.517 + ], + "angle": 0, + "content": "[4] Gong Cheng, Peicheng Zhou, and Junwei Han. Learning rotation-invariant convolutional neural networks for object detection in VHR optical remote sensing images. IEEE Transactions on Geoscience and Remote Sensing, 54(12):7405-7415, 2016. 2, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.519, + 0.484, + 0.575 + ], + "angle": 0, + "content": "[5] Gong Cheng, Jiabao Wang, Ke Li, Xingxing Xie, Chunbo Lang, Yanqing Yao, and Junwei Han. Anchor-free oriented proposal generator for object detection. IEEE Transactions on Geoscience and Remote Sensing, 60:1-11, 2022. 6, 7, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.577, + 0.484, + 0.645 + ], + "angle": 0, + "content": "[6] Jian Ding, Nan Xue, Yang Long, Gui-Song Xia, and Qikai Lu. Learning RoI transformer for oriented object detection in aerial images. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2849-2858, 2019. 
2" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.647, + 0.483, + 0.688 + ], + "angle": 0, + "content": "[7] Jiaming Han, Jian Ding, Jie Li, and Gui-Song Xia. Align deep features for oriented object detection. IEEE transactions on geoscience and remote sensing, 60:1-11, 2021. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.69, + 0.484, + 0.745 + ], + "angle": 0, + "content": "[8] Jiaming Han, Jian Ding, Nan Xue, and Gui-Song Xia. Redet: A rotation-equivariant detector for aerial object detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2786-2795, 2021. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.747, + 0.484, + 0.815 + ], + "angle": 0, + "content": "[9] Muhammad Haroon, Muhammad Shahzad, and Muhammad Moazam Fraz. Multisized object detection using spaceborne optical imagery. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 13:3032-3046, 2020. 2, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.817, + 0.483, + 0.873 + ], + "angle": 0, + "content": "[10] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.874, + 0.483, + 0.902 + ], + "angle": 0, + "content": "[11] Shitian He, Huanxin Zou, Yingqian Wang, Boyang Li, Xu Cao, and Ning Jing. Learning remote sensing object detect" + }, + { + "type": "list", + "bbox": [ + 0.094, + 0.236, + 0.484, + 0.902 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.545, + 0.093, + 0.906, + 0.121 + ], + "angle": 0, + "content": "tion with single point supervision. IEEE Transactions on Geoscience and Remote Sensing, 2023. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.122, + 0.906, + 0.176 + ], + "angle": 0, + "content": "[12] Darius Lam, Richard Kuzma, Kevin McGee, Samuel Dooley, Michael Laielli, Matthew Klaric, Yaroslav Bulatov, and Brendan McCord. xview: Objects in context in overhead imagery. arXiv preprint arXiv:1802.07856, 2018. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.178, + 0.906, + 0.232 + ], + "angle": 0, + "content": "[13] Ke Li, Gang Wan, Gong Cheng, Liqui Meng, and Junwei Han. Object detection in optical remote sensing images: A survey and a new benchmark. ISPRS journal of photogrammetry and remote sensing, 159:296-307, 2020. 2, 6, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.234, + 0.906, + 0.288 + ], + "angle": 0, + "content": "[14] Wentong Li, Yijie Chen, Kaixuan Hu, and Jianke Zhu. Oriented reppoints for aerial object detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 1829-1838, 2022. 2, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.289, + 0.906, + 0.357 + ], + "angle": 0, + "content": "[15] Tsung-Yi Lin, Piotr Dólar, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2117-2125, 2017. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.359, + 0.906, + 0.426 + ], + "angle": 0, + "content": "[16] Xuebo Liu, Ding Liang, Shi Yan, Dagui Chen, Yu Qiao, and Junjie Yan. Fots: Fast oriented text spotting with a unified network. 
In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5676-5685, 2018. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.428, + 0.908, + 0.497 + ], + "angle": 0, + "content": "[17] Junwei Luo, Xue Yang, Yi Yu, Qingyun Li, Junchi Yan, and Yansheng Li. PointOBB: Learning oriented object detection via single point supervision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16730-16740, 2024. 2, 3, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.498, + 0.906, + 0.553 + ], + "angle": 0, + "content": "[18] Botao Ren, Xue Yang, Yi Yu, Junwei Luo, and Zhidong Deng. PointOBB-v2: Towards simpler, faster, and stronger single point supervised oriented object detection. arXiv preprint arXiv:2410.08210, 2024. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.554, + 0.906, + 0.622 + ], + "angle": 0, + "content": "[19] Zhongzheng Ren, Zhiding Yu, Xiaodong Yang, Ming-Yu Liu, Alexander G Schwing, and Jan Kautz. UFO2: A unified framework towards omni-supervised object detection. In European conference on computer vision, pages 288-313. Springer, 2020. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.624, + 0.906, + 0.678 + ], + "angle": 0, + "content": "[20] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2980-2988, 2017. 2, 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.679, + 0.906, + 0.761 + ], + "angle": 0, + "content": "[21] Xian Sun, Peijin Wang, Zhiyuan Yan, Feng Xu, Ruiping Wang, Wenhui Diao, Jin Chen, Jihao Li, Yingchao Feng, Tao Xu, et al. FAIR1M: A benchmark dataset for fine-grained object recognition in high-resolution remote sensing imagery. ISPRS Journal of Photogrammetry and Remote Sensing, 184:116-130, 2022. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.763, + 0.906, + 0.845 + ], + "angle": 0, + "content": "[22] Yongqing Sun, Jie Ran, Feng Yang, Chenqiang Gao, Takayuki Kurozumi, Hideaki Kimata, and Ziqi Ye. Oriented object detection for remote sensing images based on weakly supervised learning. In 2021 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), pages 1-6. IEEE, 2021. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.846, + 0.906, + 0.901 + ], + "angle": 0, + "content": "[23] Zhiwen Tan, Zhiguo Jiang, Chen Guo, and Haopeng Zhang. WSODet: A weakly supervised oriented detector for aerial object detection. IEEE Transactions on Geoscience and Remote Sensing, 61:1-12, 2023. 2, 3, 7" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.093, + 0.908, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.945, + 0.516, + 0.956 + ], + "angle": 0, + "content": "8856" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.092, + 0.486, + 0.161 + ], + "angle": 0, + "content": "[24] Zhi Tian, Xiangxiang Chu, Xiaoming Wang, Xiaolin Wei, and Chunhua Shen. Fully convolutional one-stage 3d object detection on lidar range images. Advances in Neural Information Processing Systems, 35:34899-34911, 2022. 2, 4, 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.163, + 0.485, + 0.22 + ], + "angle": 0, + "content": "[25] Hao Wang, Zhanchao Huang, Zhengchao Chen, Ying Song, and Wei Li. Multigrained angle representation for remote-sensing object detection. IEEE Transactions on Geoscience and Remote Sensing, 60:1-13, 2022. 
2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.221, + 0.484, + 0.276 + ], + "angle": 0, + "content": "[26] Jian Wang, Fan Li, and Haixia Bi. Gaussian focal loss: Learning distribution polarized angle prediction for rotated object detection in aerial images. IEEE Transactions on Geoscience and Remote Sensing, 60:1-13, 2022. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.278, + 0.484, + 0.348 + ], + "angle": 0, + "content": "[27] Linfei Wang, Yibing Zhan, Xu Lin, Baosheng Yu, Liang Ding, Jianqing Zhu, and Dapeng Tao. Explicit and implicit box equivariance learning for weakly-supervised rotated object detection. IEEE Transactions on Emerging Topics in Computational Intelligence, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.348, + 0.484, + 0.404 + ], + "angle": 0, + "content": "[28] Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4):600-612, 2004. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.405, + 0.484, + 0.489 + ], + "angle": 0, + "content": "[29] Gui-Song Xia, Xiang Bai, Jian Ding, Zhen Zhu, Serge Belongie, Jiebo Luo, Mihai Datcu, Marcello Pelillo, and Liangpei Zhang. Dota: A large-scale dataset for object detection in aerial images. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3974-3983, 2018. 2, 6, 7, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.49, + 0.484, + 0.532 + ], + "angle": 0, + "content": "[30] SUN Xian, WANG Zhirui, SUN Yuanrui, DIAO Wenhui, ZHANG Yue, and FU Kun. Air-sarship-1.0: High-resolution sar ship detection dataset. , 8(6):852–863, 2019. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.533, + 0.484, + 0.603 + ], + "angle": 0, + "content": "[31] Zhifeng Xiao, Qing Liu, Gefu Tang, and Xiaofang Zhai. Elliptic fourier transformation-based histograms of oriented gradients for rotationally invariant object detection in remote-sensing images. International Journal of Remote Sensing, 36(2):618-644, 2015. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.604, + 0.484, + 0.659 + ], + "angle": 0, + "content": "[32] Xingxing Xie, Gong Cheng, Jiabao Wang, Xiwen Yao, and Junwei Han. Oriented r-cnn for object detection. In Proceedings of the IEEE/CVF international conference on computer vision, pages 3520-3529, 2021. 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.66, + 0.484, + 0.731 + ], + "angle": 0, + "content": "[33] Hang Xu, Xinyuan Liu, Haonan Xu, Yike Ma, Zunjie Zhu, Chenggang Yan, and Feng Dai. Rethinking boundary discontinuity problem for oriented object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 17406-17415, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.731, + 0.484, + 0.802 + ], + "angle": 0, + "content": "[34] Xue Yang and Junchi Yan. Arbitrary-oriented object detection with circular smooth label. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part VIII 16, pages 677-694. Springer, 2020. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.803, + 0.484, + 0.872 + ], + "angle": 0, + "content": "[35] Xue Yang and Junchi Yan. Arbitrary-oriented object detection with circular smooth label. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part VIII 16, pages 677-694. Springer, 2020. 
2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.873, + 0.484, + 0.903 + ], + "angle": 0, + "content": "[36] Xue Yang, Liping Hou, Yue Zhou, Wentao Wang, and Junchi Yan. Dense label encoding for boundary discontinuity free" + }, + { + "type": "list", + "bbox": [ + 0.093, + 0.092, + 0.486, + 0.903 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.545, + 0.093, + 0.905, + 0.134 + ], + "angle": 0, + "content": "rotation detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 15819-15829, 2021. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.136, + 0.905, + 0.192 + ], + "angle": 0, + "content": "[37] Xue Yang, Junchi Yan, Ziming Feng, and Tao He. R3det: Refined single-stage detector with feature refinement for rotating object. Proceedings of the AAAI Conference on Artificial Intelligence, 35(4):3163-3171, 2021. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.192, + 0.905, + 0.262 + ], + "angle": 0, + "content": "[38] Xue Yang, Junchi Yan, Qi Ming, Wentao Wang, Xiaopeng Zhang, and Qi Tian. Rethinking rotated object detection with gaussian Wasserstein distance loss. In International conference on machine learning, pages 11830-11841. PMLR, 2021. 2, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.263, + 0.905, + 0.333 + ], + "angle": 0, + "content": "[39] Xue Yang, Xiaojiang Yang, Jirui Yang, Qi Ming, Wentao Wang, Qi Tian, and Junchi Yan. Learning high-precision bounding box for rotated object detection via kullback-Leibler divergence. Advances in Neural Information Processing Systems, 34:18381-18394, 2021. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.334, + 0.905, + 0.404 + ], + "angle": 0, + "content": "[40] Xue Yang, Gefan Zhang, Xiaojiang Yang, Yue Zhou, Wentao Wang, Jin Tang, Tao He, and Junchi Yan. Detecting rotated objects as gaussian distributions and its 3-d generalization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(4):4335-4354, 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.405, + 0.905, + 0.462 + ], + "angle": 0, + "content": "[41] Xue Yang, Yue Zhou, Gefan Zhang, Jirui Yang, Wentao Wang, Junchi Yan, Xiaopeng Zhang, and Qi Tian. The kfiou loss for rotated object detection. arXiv preprint arXiv:2201.12558, 2022. 2, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.462, + 0.905, + 0.532 + ], + "angle": 0, + "content": "[42] Xue Yang, Gefan Zhang, Wentong Li, Yue Zhou, Xuehui Wang, and Junchi Yan. H2RBox: Horizontal box annotation is all you need for oriented object detection. In The Eleventh International Conference on Learning Representations, 2023. 1, 2, 3, 5, 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.533, + 0.905, + 0.589 + ], + "angle": 0, + "content": "[43] Ze Yang, Shaohui Liu, Han Hu, Liwei Wang, and Stephen Lin. Reppoints: Point set representation for object detection. In Proceedings of the IEEE/CVF international conference on computer vision, pages 9657-9666, 2019. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.59, + 0.905, + 0.673 + ], + "angle": 0, + "content": "[44] Xinyi Ying, Li Liu, Yingqian Wang, Ruojing Li, Nuo Chen, Zaiping Lin, Weidong Sheng, and Shilin Zhou. Mapping degeneration meets label evolution: Learning infrared small target detection with single point supervision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15528-15538, 2023. 
3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.674, + 0.905, + 0.731 + ], + "angle": 0, + "content": "[45] Jiahui Yu, Yuning Jiang, Zhangyang Wang, Zhimin Cao, and Thomas Huang. Unitbox: An advanced object detection network. In Proceedings of the 24th ACM international conference on Multimedia, pages 516-520, 2016. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.731, + 0.905, + 0.788 + ], + "angle": 0, + "content": "[46] Yi Yu and Feipeng Da. Phase-shifting coder: Predicting accurate orientation in oriented object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13354-13363, 2023. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.788, + 0.905, + 0.872 + ], + "angle": 0, + "content": "[47] Yi Yu, Xue Yang, Qingyun Li, Feipeng Da, Jifeng Dai, Yu Qiao, and Junchi Yan. Point2rbox: Combine knowledge from synthetic visual patterns for end-to-end oriented object detection with single point supervision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16783-16793, 2024. 2, 3, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.873, + 0.905, + 0.903 + ], + "angle": 0, + "content": "[48] Yi Yu, Xue Yang, Qingyun Li, Yue Zhou, Feipeng Da, and Junchi Yan. H2rbox-v2: Incorporating symmetry for boost-" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.093, + 0.905, + 0.903 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.945, + 0.516, + 0.956 + ], + "angle": 0, + "content": "8857" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.127, + 0.092, + 0.482, + 0.134 + ], + "angle": 0, + "content": "ing horizontal box supervised oriented object detection. Advances in Neural Information Processing Systems, 36, 2024. 1, 2, 3, 4, 5, 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.136, + 0.482, + 0.19 + ], + "angle": 0, + "content": "[49] Tingxuan Yue, Yanmei Zhang, Jin Wang, Yanbing Xu, and Pengyun Liu. A weak supervision learning paradigm for oriented ship detection in sar image. IEEE Transactions on Geoscience and Remote Sensing, 2024. 
2" + }, + { + "type": "list", + "bbox": [ + 0.093, + 0.092, + 0.482, + 0.19 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.946, + 0.515, + 0.956 + ], + "angle": 0, + "content": "8858" + } + ] +] \ No newline at end of file diff --git a/2025/ABBSPO_ Adaptive Bounding Box Scaling and Symmetric Prior based Orientation Prediction for Detecting Aerial Image Objects/4a21cd8b-8cbb-4db3-b4a2-0a741c570b46_origin.pdf b/2025/ABBSPO_ Adaptive Bounding Box Scaling and Symmetric Prior based Orientation Prediction for Detecting Aerial Image Objects/4a21cd8b-8cbb-4db3-b4a2-0a741c570b46_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..b7e7eb55559e839be9272103ceb565b5e3bfa685 --- /dev/null +++ b/2025/ABBSPO_ Adaptive Bounding Box Scaling and Symmetric Prior based Orientation Prediction for Detecting Aerial Image Objects/4a21cd8b-8cbb-4db3-b4a2-0a741c570b46_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cbe3b89c02fbca2dd021bd06dd56bed265496e0c8dcc2f1e52a4762bbfb8c19c +size 3829922 diff --git a/2025/ABBSPO_ Adaptive Bounding Box Scaling and Symmetric Prior based Orientation Prediction for Detecting Aerial Image Objects/full.md b/2025/ABBSPO_ Adaptive Bounding Box Scaling and Symmetric Prior based Orientation Prediction for Detecting Aerial Image Objects/full.md new file mode 100644 index 0000000000000000000000000000000000000000..ce95503dade0224e8e6dc250aa827aef37d378c0 --- /dev/null +++ b/2025/ABBSPO_ Adaptive Bounding Box Scaling and Symmetric Prior based Orientation Prediction for Detecting Aerial Image Objects/full.md @@ -0,0 +1,384 @@ +# ABBSPO: Adaptive Bounding Box Scaling and Symmetric Prior based Orientation Prediction for Detecting Aerial Image Objects + +Woojin Lee $^{1*}$ Hyugjae Chang $^{1*}$ Jaeho Moon $^{1}$ Jaehyup Lee $^{2\dagger}$ Munchurl Kim $^{1\dagger}$ $^{1}$ KAIST ${}^{2}$ KNU + +{woojin412, hmnc97, jaeho.moon, mkimee}@kaist.ac.kr jaehyuplee@knu.ac.kr + +https://kaist-viclab.github.io/ABBSPO_site/ + +![](images/568859ed62ff0b20ad4e6ee353770fe3878e44cbdbe8d43e11f475ee89f78213.jpg) +① GT RBox ② GT C-HBox + +![](images/b27b775cfa0792a81a90726212506fafdc20232a590b12f4e10a34d0a8bb840b.jpg) + +![](images/4036c351f70e6be817546f6ab541a1e7b674cd0d755dd3972a0e99308263a526.jpg) +(a) Two types of GT HBox +(b) Visual comparison of HBox-supervised oriented detectors +Figure 1. Performance comparison of HBox-supervised orientated detectors. (a) Top: A coarse horizontal bounding box (C-HBox) $(②)$ and its corresponding rotated bounding box (RBox) $(①)$ . Bottom: A tight horizontal bounding box (T-HBox)( $(2)$ ) and its corresponding RBox $(①)$ . (b) Our ABBSPO is capable of accurately detecting both orientations and scales for GT C-HBoxes and T-HBoxes. (c) Average Precision $\left(\mathrm{AP}_{50}\right)$ for H2RBox [42], H2RBox-v2 [48], and our ABBSPO. $3-\mathrm{AP}_{50}$ represents the mean $\mathrm{AP}_{50}$ for three complex shaped objects: (i) DIOR: 'airplane', 'expressway service area', and 'overpass' and (ii) DOTA: 'plane', 'swimming pool', and 'helicopter'. 
+ +![](images/abbe6ce2f4ccfae50e61911adbfc36c43a0599bc320175b91a448662945033d5.jpg) + +![](images/d935d857a5b65cffbee17d0414115fdc61d65805e5ef4c398351ff5540f95a21.jpg) + +![](images/8bd69b0bee5866f4cba10b95e82ac4c8cc5b57f2cd8ba8783c90a33616727935.jpg) + +![](images/e955c6c9f48296b82376f41da38d1df88096b41e872664228316cfcd675731db.jpg) + +![](images/949e5f4d898a521f3773f2cdc29482ebb42f4bc6bc2189ba275e0d71a3b33682.jpg) + +![](images/a5b098d46dbb0f780bcd7fc46372d3c5816887a0ef6a2fc8e0c422f150b54626.jpg) +H2RBox H2RBox-v2 ABBSPO) Performance overview 3-AP50 AP50 + +# Abstract + +Weakly supervised Oriented Object Detection (WS-OOD) has gained attention as a cost-effective alternative to fully supervised methods, providing efficiency and high accuracy. Among weakly supervised approaches, horizontal bounding box (HBox) supervised OOD stands out for its ability to directly leverage existing HBox annotations while achieving the highest accuracy under weak supervision settings. This paper introduces adaptive bounding box scaling and symmetry-prior-based orientation prediction, called ABBSPO that is a framework for WS-OOD. Our ABB- + +SPO addresses the limitations of previous HBox-supervised OOD methods, which compare ground truth (GT) HBoxes directly with predicted RBoxes' minimum circumscribed rectangles, often leading to inaccuracies. To overcome this, we propose: (i) Adaptive Bounding Box Scaling (ABBS) that appropriately scales the GT HBoxes to optimize for the size of each predicted RBox, ensuring more accurate prediction for RBoxes' scales; and (ii) a Symmetric Prior Angle (SPA) loss that uses the inherent symmetry of aerial objects for self-supervised learning, addressing the issue in previous methods where learning fails if they consistently make incorrect predictions for all three augmented views (original, rotated, and flipped). Extensive experimental results demonstrate that our ABBsPO achieves state-of-the-art results, outperforming existing methods. + +# 1. Introduction + +Object detection often leverages supervised learning with ground truth horizontal bounding box labels (GT HBoxes) to locate the objects of interest. However, the usage of GT HBoxes limits the precise localization of the objects with their orientations and tight surrounding boundaries, especially for objects such as airplanes and ships of various orientations in aerial images. To handle object detection as an oriented object detection problem, more precise rotated bounding box labels (GT RBoxes) are required, which is very costly to generate [49]. So, to mitigate this challenge, previous methods [17, 23, 42, 47-49] have explored weakly supervised oriented object detection (OOD) that utilizes less expensive forms of annotations, such as image-level, point and HBox annotations. Among these, the use of HBoxes is the most popular due to their widespread availability in existing public datasets [4, 9, 12, 21, 30, 31] to predict the RBoxes for objects of interest. So, this approach can detour the costly process of generating GT RBoxes. + +The previous weakly supervised (WS) learning of OOD [22, 27, 42, 48] utilizes GT HBoxes in the forms of coarse HBoxes, called GT C-HBoxes, as supervision to compare with the HBoxes derived as the minimum circumscribed rectangles from the predicted RBoxes by their OOD models. As shown in the upper figure of Fig. 1-(a), the GT C-HBoxes are defined as coarse horizontal bounding boxes that loosely encompass the boundaries of objects (not tightly bounded). 
The GT HBoxes of the DOTA [29] dataset are in the form of C-HBoxes, which are derived as the minimum circumscribed horizontal bounding boxes of their GT RBoxes. However, when the previous OOD methods [42, 48] are supervised with the other type of GT HBoxes, i.e., tight HBoxes called GT T-HBoxes (e.g., the DIOR dataset [13]), as shown in the bottom figure of Fig. 1-(a), we found that their performance is significantly degraded because GT T-HBoxes tend to have different scales compared to those of GT C-HBoxes (see Fig. 1-(c)). As shown in Fig. 1-(b), this causes the previous methods to predict either RBoxes with accurate orientations but inaccurate scales smaller than the sizes of their corresponding objects, or RBoxes with inaccurate (close to horizontal) orientations but somewhat accurate scales (almost the same as the HBoxes).

To overcome the above limitations of the previous WS-OOD methods, we propose ABBSPO, an adaptive bounding box scaling and symmetry-prior-based orientation prediction framework for WS-OOD that can be effectively trained with either GT C-HBoxes or GT T-HBoxes for aerial images. For this, (i) a novel Adaptive Bounding Box Scaling (ABBS) module is designed with the flexibility of adjusting the GT HBoxes of each object to various sizes and then selecting the optimally scaled GT HBoxes that encompass the predicted RBoxes. Note that the previous methods do not have such flexibility in adjusting the GT HBoxes; (ii) an angle learning module is proposed that is trained in a self-supervised manner by utilizing the symmetric priors of the objects that often appear in top-down views of aerial images. As shown in Figs. 1-(b) and (c), our proposed method predicts accurate orientations and surrounding boxes of objects for both cases of using GT C-HBoxes and GT T-HBoxes, outperforming the previous methods in angle accuracy and localization in terms of average precision (AP). Our contributions are summarized as follows:

- To the best of our knowledge, our work is the first to address the limitations of previous weakly supervised OOD learning methods with T-HBoxes as GT. To overcome them, we propose a novel weakly supervised OOD method that can be effectively trained with T-HBoxes or C-HBoxes, which can be cheaply annotated as GT;
- The adaptive bounding box scaling (ABBS) module is proposed to flexibly adjust the HBox (GT) for each object toward an appropriately scaled HBox. This allows part of the predicted RBoxes to lie outside the T-HBox (GT), yielding precise RBox prediction;
- A symmetric prior angle (SPA) loss is presented to enhance the orientation prediction accuracy by leveraging the symmetric priors of the objects in aerial images;
- Our method significantly outperforms the state-of-the-art OOD methods using weakly supervised learning with HBoxes (GT) on aerial datasets.

# 2. Related Work

# 2.1. RBox-supervised Oriented Object Detection

Oriented Object Detection (OOD) has gained significant attention, leading to extensive research in RBox-supervised methods (using GT RBoxes) such as Rotated RetinaNet [20], Rotated FCOS [24], $\mathrm{R}^3\mathrm{Det}$ [37], ROI Transformer [6], ReDet [8], and $\mathrm{S}^2\mathrm{A}$-Net [7]. Rotated FCOS [24] improves OOD performance by introducing center-ness, which assigns weights to samples based on their proposal locations, thereby emphasizing well-positioned proposals. RepPoints-based methods [3, 14, 43], in contrast, utilize flexible receptive fields to extract key object points.
However, a common challenge in RBox-supervised OOD methods is the boundary discontinuity problem that arises from the definition and prediction of the angle parameter $(\theta)$ [33, 34]. To address this, several methods modified the RBox representation, e.g., using Gaussian distributions [25, 26, 35, 36, 38-41], thereby avoiding straightforward regression of the angle parameter. On the other hand, in weakly supervised learning, the boundary discontinuity issue does not arise thanks to the absence of direct RBox supervision, allowing for more stable angle predictions without the need for complex mitigation strategies.

![](images/9058721b81e8835b11068c548c9030cde9f80c85472d688e79cb7a9fc205dd43.jpg)

![](images/3b91cfd3d439130d68778ce5b060bd3e80c614824624b643e61de1a383877e2f.jpg)
Figure 2. Overall pipeline of our ABBSPO framework. Our ABBSPO leverages weakly supervised learning from HBox annotations to accurately predict RBoxes. The framework incorporates the Orientation Learning Branch (OLB) for precise angle estimation, using the Symmetric Prior Angle (SPA) loss, and the Scale Learning Branch (SLB) for optimal scale adjustment via the Adaptive Bounding Box Scaling (ABBS) module. The framework supports both C-HBox and T-HBox ground truths, ensuring robust and accurate predictions.

![](images/55d9e3d1694b0495f9e982e4c5a131893a68faaf37d1a22e07983bd4262d8421.jpg)

# 2.2. Weakly-supervised Oriented Object Detection

Weakly supervised OOD methods learn to predict RBoxes without directly utilizing GT RBoxes. The approaches in this domain are primarily categorized based on the types of labels they employ: image-based [23], point-based [17, 47], and HBox-based [42, 48].

Image-based supervision. WSODet [23] aims to generate pseudo-RBoxes without explicit localization supervision, and thus encounters significant limitations when relying solely on image labels, especially for scenes with numerous and diverse object types.

Point-based supervision. By leveraging one representative point at the center location as the label for each object, point-label-based methods offer the advantage of being cost-effective [1, 2, 11, 16, 19, 44]. PointOBB [17] estimates angles from geometric relationships across original, rotated, and flipped views, and determines scales by analyzing proposal distributions between original and scaled input images. PointOBB-v2 [18] improves single point supervision by refining pseudo-label generation, leading to enhanced efficiency and accuracy. Point2RBox [47] employed fundamental patterns as priors to guide the regression of RBoxes. Point-based methods are cost-effective and straightforward, but still struggle with limited supervision.

HBox-based supervision. As annotating HBoxes is more straightforward than RBoxes, HBox-supervised OOD has gained increasing attention in recent studies. H2RBox [42] utilized views rotated from the original view and provided self-supervision of object orientations without requiring the GT angles. H2RBox-v2 [48] expanded the use of geometric relationships between views by adding flipped views. These methods learn to predict RBoxes by converting the predicted RBoxes into their minimum circumscribed HBoxes, whose IoU with the GT HBoxes is then directly computed, thereby enabling HBox-supervised OOD. However, these methods only guarantee performance when trained with GT C-HBoxes that are derived from GT RBoxes.
These methods struggle to learn precise OOD when trained with GT T-HBoxes, because of the significant gap between the GT T-HBoxes and the HBoxes derived from the predicted RBoxes. To address this issue, we propose an ABBS module to effectively handle both types of GTs (C-HBoxes and T-HBoxes).

# 3. Method

# 3.1. Overall Pipeline

Weakly supervised OOD aims to predict RBoxes using less expensive annotations such as HBoxes. Existing methods such as H2RBox [42] and its improved version H2RBox-v2 [48] have laid the foundation for directly predicting RBoxes from HBoxes. Our proposed pipeline builds upon the H2RBox-v2 [48] framework to effectively enable weakly supervised OOD from either C-HBoxes or T-HBoxes. Figure 2 depicts the conceptual framework of our weakly supervised OOD method. Given an input image $\mathrm{I}_{\mathrm{ori}}$ and its rotated and flipped versions $\mathrm{I}_{\mathrm{rot}}$ and $\mathrm{I}_{\mathrm{flp}}$, our proposed pipeline obtains the RBox for each input view ($\mathrm{I}_{\mathrm{ori}}$, $\mathrm{I}_{\mathrm{rot}}$, $\mathrm{I}_{\mathrm{flp}}$), including the center position $(x,y)$, size $(w,h)$, angle $(\theta)$, class scores $(p)$, and the center-ness $(cn)$. To classify each detected object, we follow FCOS [24] by supervising both the classification $(p)$ and the center-ness $(cn)$. The angle $(\theta)$ prediction is obtained using the method proposed in PSC [46]. Our contribution mainly lies in the supervision for localization, which consists of two branches: a scale learning branch (SLB) and an orientation learning branch (OLB).

In the SLB, the adaptive bounding box scaling (ABBS) module addresses the relationship between GT HBoxes and predicted RBoxes. The ABBS module provides a proper minimum circumscribed rectangle for accurately predicted RBoxes by adaptively scaling the HBoxes within a predefined scale range. The OLB guides accurate prediction of object orientation by utilizing the three input views $(\mathrm{I}_{\mathrm{ori}},\mathrm{I}_{\mathrm{rot}},\mathrm{I}_{\mathrm{flp}})$, following H2RBox-v2 [48]. Additionally, the OLB utilizes these orientation predictions for our symmetric prior angle (SPA) loss, which leverages the inherent left-right symmetry of objects in aerial images. The SPA loss further enforces the orientations of the predicted RBoxes to align with the orientations of symmetric objects such as airplanes, ships, and ground track fields.

# 3.2. Adaptive Bounding Box Scaling Module

In Fig. 2, 'Scale Learning Branch' illustrates the conceptual process of our adaptive bounding box scaling (ABBS) module. In HBox-supervised OOD learning, the predicted RBoxes $(RB^{\mathrm{pred}})$ must be compared with the GT HBoxes $(HB^{\mathrm{gt}})$. Since they cannot be directly compared, $RB^{\mathrm{pred}}$ is first converted to $HB^{\mathrm{pred}}$, defined as the minimum circumscribed HBox of $RB^{\mathrm{pred}}$:

$$
HB^{\mathrm{pred}} = MCR(RB^{\mathrm{pred}}), \tag{1}
$$

where $MCR(\cdot)$ is an operator converting an $RB$ to its minimum circumscribed $HB$, allowing $HB^{\mathrm{pred}}$ to be compared with $HB^{\mathrm{gt}}$.
$HB^{\mathrm{pred}}$ and $RB^{\mathrm{pred}}$ are given by:

$$
RB^{\mathrm{pred}} = \left[ x_{rb}^{\mathrm{pred}}, y_{rb}^{\mathrm{pred}}, w_{rb}^{\mathrm{pred}}, h_{rb}^{\mathrm{pred}}, \theta_{rb}^{\mathrm{pred}} \right], \tag{2}
$$

$$
HB^{\mathrm{pred}} = \left[ x_{hb}^{\mathrm{pred}}, y_{hb}^{\mathrm{pred}}, w_{hb}^{\mathrm{pred}}, h_{hb}^{\mathrm{pred}} \right],
$$

where $(x_{rb}^{\mathrm{pred}},y_{rb}^{\mathrm{pred}})$ and $(x_{hb}^{\mathrm{pred}},y_{hb}^{\mathrm{pred}})$ are the centers of $RB^{\mathrm{pred}}$ and $HB^{\mathrm{pred}}$, respectively. The width $w$ and height $h$ of $HB^{\mathrm{pred}}$ can be computed as:

$$
w_{hb}^{\mathrm{pred}} = w_{rb}^{\mathrm{pred}} \left| \cos \theta_{rb}^{\mathrm{pred}} \right| + h_{rb}^{\mathrm{pred}} \left| \sin \theta_{rb}^{\mathrm{pred}} \right|, \tag{3}
$$

$$
h_{hb}^{\mathrm{pred}} = w_{rb}^{\mathrm{pred}} \left| \sin \theta_{rb}^{\mathrm{pred}} \right| + h_{rb}^{\mathrm{pred}} \left| \cos \theta_{rb}^{\mathrm{pred}} \right|.
$$

![](images/0f0e3f67749295fd52092e18a0c6894c7b925e01576410d11a332bc47f5de4d9.jpg)
(a)

![](images/325bb1fc626aaefc2c05d0140a6886eacafedfc82231735440732d56a2dee2c6.jpg)
(b)

![](images/5c549cac7a8d8f46a869cf8c9dfc6cfb6146911f9a57937bfa47698c6a67e05b.jpg)
(c)

![](images/11ae87346cdf7ff7e0c88c472005c62c18768baf3a886fa74174d5159e1b3c45.jpg)
(d)

![](images/c71e66550e76ebe42e0f7ba49bef5c0a389b45e411cfcf5cca25d30fde80b8f.jpg)
(e)

![](images/d75f200c7a4f589524d78653290a80f31a6c6d2a6006a1a73ed76d29a5d60139.jpg)
(f)
Figure 3. Analysis of the scale adjustment function $(f(\cdot))$ based on the shape and angle of objects. (a) rectangular shape, (b) rounded rectangular shape, (c) complex shape, (d) horizontal orientation, (e) slightly tilted orientation, (f) diagonal orientation. The cyan solid box, green solid box, and red dotted box represent the GT T-HBox, the RBox, and the adjusted GT HBox, respectively.

If $RB^{\mathrm{opt}}$ is defined as the RBox that tightly surrounds the object boundary with the precise orientation, then we have $HB^{\mathrm{opt}} = MCR(RB^{\mathrm{opt}})$ for the 'Predicted RBox Projection Process' block in the SLB of Fig. 2. When the size of $HB^{\mathrm{gt}}$ is larger or smaller than that of $HB^{\mathrm{opt}}$, the model needs to adaptively adjust and find the optimal scale within a predefined range of scale variations. We propose the ABBS module, which estimates $RB^{\mathrm{opt}}$ by adaptively adjusting the scale of $HB^{\mathrm{gt}}$. Notably, even if $RB^{\mathrm{pred}}$ is accurately estimated, its $HB^{\mathrm{pred}}$ may not overlap well with $HB^{\mathrm{gt}}$, leading to a low Intersection over Union (IoU) value. Enforcing $HB^{\mathrm{pred}}$ to match $HB^{\mathrm{gt}}$ can cause a misalignment with $RB^{\mathrm{pred}}$ because $HB^{\mathrm{gt}}$ may not be ideal for estimating $RB^{\mathrm{opt}}$. To address this, our ABBS module adaptively scales and adjusts $HB^{\mathrm{gt}}$ in the context of $RB^{\mathrm{pred}}$, rather than forcing $HB^{\mathrm{pred}}$ to match $HB^{\mathrm{gt}}$.
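For concreteness, the following is a minimal NumPy sketch of the RBox-to-HBox projection in Eqs. (1)-(3); the `[x, y, w, h, theta]` array layout mirrors Eq. (2), and the function name `mcr` is ours for illustration, not the authors' implementation:

```python
import numpy as np

def mcr(rboxes: np.ndarray) -> np.ndarray:
    """Project RBoxes [x, y, w, h, theta] (N x 5) onto their minimum
    circumscribed HBoxes [x, y, w, h] (N x 4), following Eqs. (1)-(3)."""
    x, y, w, h, theta = rboxes.T
    cos_t, sin_t = np.abs(np.cos(theta)), np.abs(np.sin(theta))
    w_hb = w * cos_t + h * sin_t  # Eq. (3): circumscribed width
    h_hb = w * sin_t + h * cos_t  # Eq. (3): circumscribed height
    return np.stack([x, y, w_hb, h_hb], axis=-1)  # the center is preserved

# A 100x40 RBox rotated by 30 degrees -> roughly a 106.6x84.6 HBox.
print(mcr(np.array([[50.0, 50.0, 100.0, 40.0, np.pi / 6]])))
```

Note that $MCR(\cdot)$ is many-to-one: RBoxes with different angles can share the same circumscribed HBox, which is precisely why forcing $HB^{\mathrm{pred}}$ to match $HB^{\mathrm{gt}}$ alone cannot identify $RB^{\mathrm{opt}}$.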
For a detailed explanation of our ABBS module, we first define a set of scaled versions of $HB^{\mathrm{gt}}$ for the 'Scaled GT HBoxes Generation Process' in the SLB of Fig. 2 as:

$$
\mathbf{HB}_{\mathrm{s}}^{\mathrm{gt}} = \left\{ HB_{\mathrm{s},1}^{\mathrm{gt}}, HB_{\mathrm{s},2}^{\mathrm{gt}}, \dots, HB_{\mathrm{s},\mathrm{K}}^{\mathrm{gt}} \right\}, \tag{4}
$$

where $HB_{s,k}^{\mathrm{gt}}$ is the $k$-th scaled version of $HB^{\mathrm{gt}}$ and $\mathrm{K}$ is the total number of scaled variations of $HB^{\mathrm{gt}}$. $\mathbf{HB}_{\mathrm{s}}^{\mathrm{gt}}$ is determined by the combinations of angle-adjusted width and height scale factors, $\{s_{adj,i}^{w}\}_{i=1}^{N_s}$ and $\{s_{adj,j}^{h}\}_{j=1}^{N_s}$, which are transformed from the basic width and height scale factors, $\{s_i^{w}\}_{i=1}^{N_s}$ and $\{s_j^{h}\}_{j=1}^{N_s}$, by considering the predicted angle. The basic scale factors are uniformly spaced in a predefined scale range as:

$$
S_{w} = \left\{ s_{1}^{w}, s_{2}^{w}, \dots, s_{N_{s}}^{w} \right\}, \quad S_{h} = \left\{ s_{1}^{h}, s_{2}^{h}, \dots, s_{N_{s}}^{h} \right\}, \tag{5}
$$

where $S_w$ and $S_h$ are the sets of basic width and height scale factors, respectively. $s_i^w$ and $s_j^h$ are calculated as:

$$
s_{i}^{w} = s_{1}^{w} + \left(s_{N_{s}}^{w} - s_{1}^{w}\right) / \left(N_{s} - 1\right) \cdot (i - 1),
$$

$$
s_{j}^{h} = s_{1}^{h} + \left(s_{N_{s}}^{h} - s_{1}^{h}\right) / \left(N_{s} - 1\right) \cdot (j - 1), \tag{6}
$$

where $s_{N_s}^w = s_{N_s}^h$ is the predefined largest basic scale factor for both the width and height of $HB^{\mathrm{gt}}$, and $N_{s}$ is the number of uniform quantization levels for both ranges $[s_1^w, s_{N_s}^w]$ and $[s_1^h, s_{N_s}^h]$. To generate $HB_{s,k}^{\mathrm{gt}}$, we transform the basic width and height scale factors, $\{s_i^w\}_{i=1}^{N_s}$ and $\{s_j^h\}_{j=1}^{N_s}$, into the angle-adjusted width and height scale factors, $\{s_{adj,i}^w\}_{i=1}^{N_s}$ and $\{s_{adj,j}^h\}_{j=1}^{N_s}$, using the predicted angle $\theta_{rb}^{\mathrm{pred}}$ through the scale adjustment function $f$:

$$
s_{adj,i}^{w} = f\left(\theta_{rb}^{\mathrm{pred}}, s_{i}^{w}\right), \quad s_{adj,j}^{h} = f\left(\theta_{rb}^{\mathrm{pred}}, s_{j}^{h}\right). \tag{7}
$$

To define $f(\cdot)$, it is essential to consider the object types and rotation angles. Fig. 3 shows the effect of scale adjustments on T-HBoxes for three object types: (i) For rectangular objects like tennis courts (Fig. 3-(a)), the adjusted T-HBox (red dotted box) aligns precisely with the GT T-HBox (cyan solid box) and tightly circumscribes the optimal RBox (green solid box); (ii) For rounded rectangular objects (Fig. 3-(b)), the optimal RBox slightly exceeds the GT T-HBox; (iii) Complex shapes like airplanes (Fig. 3-(c)) show a larger discrepancy, with parts of the optimal RBox lying outside the GT T-HBox. Furthermore, the scale adjustments also depend on the rotation angles: (i) For a vertically (or horizontally) aligned airplane (Fig. 3-(d)), the GT T-HBox and the optimal RBox are identical; (ii) For a small rotation angle (Fig. 3-(e)), they differ slightly; (iii) For a larger angle (Fig. 3-(f)), the difference is more pronounced.
Therefore, to take the object's shape type and orientation into account when scaling the widths and heights of T-HBoxes, $f(\cdot)$ in Eq. 7 is defined as:

$$
f(\theta, s) = \begin{cases} \frac{4}{\pi}(s - 1) \cdot \theta + 1, & \text{if } 0 \leq \theta < \frac{\pi}{4}, \\ \frac{4}{\pi}(1 - s) \cdot \theta + (2s - 1), & \text{if } \frac{\pi}{4} \leq \theta < \frac{\pi}{2}, \end{cases} \tag{8}
$$

where the angle range is set to $\theta \in [0,\pi/2)$ due to the periodicity of the angle. According to $f(\cdot)$ and Eq. 7, $HB_{s,k}^{\mathrm{gt}}$ in $\mathbf{HB}_s^{\mathrm{gt}}$ can be expressed as:

$$
HB_{s,k}^{\mathrm{gt}} = \left[ x^{\mathrm{gt}}, y^{\mathrm{gt}}, w^{\mathrm{gt}} \cdot s_{adj,i}^{w}, h^{\mathrm{gt}} \cdot s_{adj,j}^{h} \right], \tag{9}
$$

where $(x^{\mathrm{gt}},y^{\mathrm{gt}})$ is the center point, and $w^{\mathrm{gt}}$ and $h^{\mathrm{gt}}$ are the width and height of $HB^{\mathrm{gt}}$. As shown in the 'IoU Calculation' and 'Optimal Scale Learning' blocks in the SLB of Fig. 2, $HB^{\mathrm{opt}}$ is determined among $\{HB_{s,k}^{\mathrm{gt}}\}_{k=1}^{K}$ as the one that minimizes the IoU loss over all proposals via the ABBS loss:

$$
\mathcal{L}_{\mathrm{as}} = \frac{1}{N_{p}} \sum_{l=1}^{N_{p}} \min_{\substack{s_{i}^{w} \in S_{w}, \\ s_{j}^{h} \in S_{h}}} \mathcal{L}_{\mathrm{IoU}} \left( HB_{l}^{\mathrm{pred}}, HB_{s,k}^{\mathrm{gt},l} \left( s_{i}^{w}, s_{j}^{h} \right) \right), \tag{10}
$$

where $N_{p}$ is the total number of proposals for the input $\mathrm{I}$, $HB_{l}^{\mathrm{pred}}$ is $HB^{\mathrm{pred}}$ for the $l$-th proposal, and $HB_{s,k}^{\mathrm{gt},l}(s_i^w,s_j^h)$ is the $k$-th scaled $HB^{\mathrm{gt}}$, i.e., $HB_{s,k}^{\mathrm{gt}}$, whose width and height are scaled for the $l$-th proposal according to Eqs. 7-9. Finally, by adding a regularization term using the IoU loss between $HB^{\mathrm{pred}}$ and the non-scaled $HB^{\mathrm{gt}}$, we form the regression loss as:

![](images/20f8eef00637ab7600c03ee878ea6683fc07db4f9171afee28e7c45469de0d08.jpg)
Airplane

![](images/45fc781940dddcb8806fd987a3b42f61e4fd13f1476d76fda1771d4abaa461d1.jpg)
Ship

![](images/a2d1e58850e2cd6b5de90fa8fe55dff6b4f19935e977bc1b59efc803fba9d08a.jpg)
Tennis court

![](images/ee66c9f2b21acbe6db6b232718f3f1b68a60af53fcaff428182869ab07575ae3.jpg)
Vehicle
Figure 4. Examples of symmetric objects in aerial images. In the SPA loss, the $x, y$ coordinates and angle $\theta$ are used to define the symmetry axis, splitting the object into two parts for comparison.

$$
\mathcal{L}_{\mathrm{reg}} = \mathcal{L}_{\mathrm{as}} + \alpha \cdot \left(1 / N_{p}\right) \sum_{l=1}^{N_{p}} \mathcal{L}_{\mathrm{IoU}} \left( HB_{l}^{\mathrm{pred}}, HB^{\mathrm{gt},l} \right), \tag{11}
$$

where $\alpha$ is a hyperparameter, which is set to 0.01 by default.
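To make the ABBS search concrete, here is a minimal NumPy sketch of Eqs. (5)-(10) for a single proposal. The helper names (`f_adjust`, `hbox_iou`, `abbs_best_iou`) and the plain axis-aligned IoU in center format are our illustrative assumptions; a training implementation would use differentiable tensor ops instead. Note the tent shape of Eq. (8): $f(0, s) = f(\pi/2, s) = 1$ and $f(\pi/4, s) = s$, so the adjustment is strongest for diagonal orientations, matching Fig. 3-(d)-(f).

```python
import numpy as np

def f_adjust(theta: float, s: float) -> float:
    """Scale adjustment of Eq. (8): equals 1 at axis-aligned angles and s at pi/4."""
    theta = theta % (np.pi / 2)  # angle periodicity, so theta lies in [0, pi/2)
    if theta < np.pi / 4:
        return 4.0 / np.pi * (s - 1.0) * theta + 1.0
    return 4.0 / np.pi * (1.0 - s) * theta + (2.0 * s - 1.0)

def hbox_iou(a, b):
    """IoU of two axis-aligned boxes [x, y, w, h] in center format."""
    ix = min(a[0] + a[2] / 2, b[0] + b[2] / 2) - max(a[0] - a[2] / 2, b[0] - b[2] / 2)
    iy = min(a[1] + a[3] / 2, b[1] + b[3] / 2) - max(a[1] - a[3] / 2, b[1] - b[3] / 2)
    inter = max(ix, 0.0) * max(iy, 0.0)
    return inter / (a[2] * a[3] + b[2] * b[3] - inter + 1e-9)

def abbs_best_iou(hb_pred, hb_gt, theta_pred, s_min=1.0, s_max=1.5, n_s=6):
    """Scan the scale grid of Eqs. (5)-(6); the per-proposal minimum inside
    Eq. (10) is then 1 minus the returned best IoU."""
    scales = np.linspace(s_min, s_max, n_s)  # S_w = S_h here, as in Eq. (5)
    best = 0.0
    for s_w in scales:  # every (s_i^w, s_j^h) combination, i.e., K = n_s**2
        for s_h in scales:
            hb_s = [hb_gt[0], hb_gt[1],
                    hb_gt[2] * f_adjust(theta_pred, s_w),  # Eqs. (7) and (9)
                    hb_gt[3] * f_adjust(theta_pred, s_h)]
            best = max(best, hbox_iou(hb_pred, hb_s))
    return best
```

The defaults `s_min=1.0, s_max=1.5, n_s=6` correspond to the DIOR setting of Table 6 (scale range 1.0 to 1.5 with interval 0.1).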
# 3.3. Symmetric Prior Angle Loss

In aerial images, objects such as airplanes, ships, tennis courts, and vehicles are often captured from top-down viewpoints, where most of these objects exhibit symmetries in their appearance, as shown in Fig. 4.

In the previous pipelines [42, 48], both the regression loss associated with the bounding box's center point, width, and height, and the angle loss for accurate angle prediction were trained in a balanced manner. However, they tended to inaccurately predict the angles while maximizing the bounding boxes' IoUs at the same time. This issue stems from the fact that, since the angles could not be directly supervised due to the absence of angle annotations, the angles were indirectly supervised from augmented views with rotations and flips. This is problematic because, when the difference between the two predicted angles for the same object in the original view and its rotated view is equal to the rotation angle applied to the original view, the angle loss is zero even though the predicted angles are inaccurate.

To mitigate such predicted angle ambiguity, we propose a symmetric prior angle (SPA) loss. Based on the SPA loss, the model can be trained to predict precise angles by indirectly utilizing the objects' symmetric characteristics. As shown in Fig. 4, the detected objects are symmetric about the symmetry axes (blue dotted lines) passing through the center points of their RBoxes. That is, the pixel contents in the two parts divided by the symmetry axis of the RBox are compared for similarity, and their difference is used as supervision for our SPA loss. It is noted that our SPA loss utilizes only symmetric objects, incorporating the symmetry prior from the GT class labels for proposals identified as symmetric, such as 'airplane,' 'ship,' 'vehicle,' and 'tennis court'.

To avoid applying the SPA loss when the $RB^{\mathrm{pred}}$ are inaccurate for the respective objects, we first check the fidelity scores of the proposals and sample the Top-$k$ proposals as supervision targets in the SPA loss:

$$
\left\{ RB_{n}^{\mathrm{pred}} \right\}_{n=1}^{N_{\mathrm{spa}}} = \operatorname{Top-}k \left( \left\{ RB_{l}^{\mathrm{pred}} \right\}_{l=1}^{N_{p}} \,\Big|\, \mathrm{sc}_{\mathrm{cls}}^{(l)} + \mathrm{sc}_{\mathrm{loc}}^{(l)} \right), \tag{12}
$$

where $\mathrm{sc}_{\mathrm{cls}}^{(l)}$ and $\mathrm{sc}_{\mathrm{loc}}^{(l)}$ are the classification and localization scores for the $l$-th $RB^{\mathrm{pred}}$, and $N_{p}$ is the total number of proposals. From Eq. 12, the selected $N_{\mathrm{spa}}$ proposals are considered in the SPA loss, by which the predicted angles of the $RB^{\mathrm{pred}}$ ($\theta_{rb}^{\mathrm{pred}}$) are enforced to align with the orientations of objects in the sense of maximizing the similarity, measured by the Structural Similarity Index (SSIM [28]), between the pixel contents in the two parts of each $RB^{\mathrm{pred}}$. It should be noted that, even in cases where symmetric objects may not appear perfectly symmetric due to contextual factors like shadows or asymmetrical cargo arrangements, their symmetry is still largely maintained by the inherent structural symmetry between the two parts. Our SPA loss is defined as:

$$
\mathcal{L}_{\mathrm{SPA}} = \left(1 / N_{\mathrm{spa}}\right) \sum_{n=1}^{N_{\mathrm{spa}}} \left(1 - \operatorname{SSIM} \left(I_{p1}^{(n)}, I_{p2}^{(n)}\right)\right). \tag{13}
$$

To remove the influence of object sizes in the $\mathcal{L}_{\mathrm{SPA}}$ computation, the proposals $(RB^{\mathrm{pred}})$ are projected onto a fixed-size grid of $50 \times 50$. Then, the pixel content $(I_{p1})$ in one part of the proposal's projection is compared with that $(I_{p2})$ of the other part, which is flipped before the comparison.
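A minimal sketch of the SPA computation in Eqs. (12)-(13) follows, assuming a hypothetical helper `crop_rotated` that warps an RBox region onto the fixed $50 \times 50$ grid aligned with the predicted angle (e.g., via an affine transform), `k` as a free hyperparameter, and skimage's SSIM; the actual loss must be differentiable during training, so this only illustrates the arithmetic:

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def spa_loss(image, rboxes, scores, k=4, grid=50):
    """Eq. (12): keep the Top-k proposals by sc_cls + sc_loc, then
    Eq. (13): compare the two halves of each proposal across its
    predicted symmetry axis and average 1 - SSIM."""
    top = np.argsort(scores)[-k:]  # indices of the Top-k fidelity scores
    losses = []
    for i in top:
        # `crop_rotated` is an assumed helper: it resamples the RBox region
        # into a (grid, grid) patch whose vertical center line is the
        # predicted symmetry axis (cf. Fig. 4).
        patch = crop_rotated(image, rboxes[i], out_size=grid)
        half1 = patch[:, : grid // 2]
        half2 = patch[:, grid // 2 :][:, ::-1]  # mirror the second half
        rng = float(patch.max() - patch.min()) or 1.0
        losses.append(1.0 - ssim(half1, half2, data_range=rng))
    return float(np.mean(losses))
```

An accurate angle aligns the axis with the object's true symmetry, making the mirrored halves nearly identical (SSIM close to 1, loss close to 0); a wrong angle breaks this correspondence and raises the loss, which is exactly the self-supervisory signal the SPA loss exploits.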
# 3.4. Loss Functions

In the orientation learning branch (OLB), two angle-based losses [48], $\mathcal{L}_{\mathrm{rot}}$ and $\mathcal{L}_{\mathrm{flp}}$, are adopted to leverage the consistency between the original, rotated, and flipped views of each object proposal. For the rotated and flipped views, $\mathcal{L}_{\mathrm{rot}}$ and $\mathcal{L}_{\mathrm{flp}}$ are computed by comparing with the predicted angle $\theta$ in the original view $(\mathrm{I}_{\mathrm{ori}})$:

$$
\mathcal{L}_{\mathrm{rot}} = l_{s}\left(\theta_{\mathrm{rot}} - \theta, R\right), \quad \mathcal{L}_{\mathrm{flp}} = l_{s}\left(\theta_{\mathrm{flp}} + \theta, 0\right), \tag{14}
$$

where $l_{s}$ denotes a smooth-L1-based snap loss [48], and $R$ denotes the rotation angle applied to $\mathrm{I}_{\mathrm{ori}}$. The final angle loss is:

$$
\mathcal{L}_{\mathrm{ang}} = \beta\left(\lambda_{r} \mathcal{L}_{\mathrm{rot}} + \lambda_{f} \mathcal{L}_{\mathrm{flp}}\right) + \gamma \mathcal{L}_{\mathrm{SPA}}, \tag{15}
$$

where $\lambda_r = 1.0$, $\lambda_f = 0.05$, $\beta = 0.6$, and $\gamma = 0.05$ are empirically determined for all our experiments. In the scale learning branch (SLB), we use our IoU-based [45] regression loss $\mathcal{L}_{\mathrm{reg}}$ in Eq. 11. The overall loss is defined as:

$$
\mathcal{L}_{\mathrm{total}} = \lambda_{\mathrm{ang}} \mathcal{L}_{\mathrm{ang}} + \lambda_{\mathrm{reg}} \mathcal{L}_{\mathrm{reg}} + \lambda_{\mathrm{cn}} \mathcal{L}_{\mathrm{cn}} + \lambda_{\mathrm{cls}} \mathcal{L}_{\mathrm{cls}}, \tag{16}
$$

where $\mathcal{L}_{\mathrm{cn}}$ is the center-ness loss [24], and $\mathcal{L}_{\mathrm{cls}}$ is the classification loss based on the focal loss [20]. The weighting factors $\lambda_{\mathrm{ang}}$, $\lambda_{\mathrm{reg}}$, $\lambda_{\mathrm{cn}}$, and $\lambda_{\mathrm{cls}}$ are all set to 1.

# 4. Experiments

# 4.1. Datasets

We trained and tested all the methods across four different datasets: DIOR [5, 13], DOTA-v1.0 [29], SIMD [9] and NWPU VHR-10 [4], which are summarized in Table 1. The details of the datasets and the results for SIMD and NWPU are described in the Suppl.
# 4. Experiments

# 4.1. Datasets

We trained and tested all the methods on four datasets: DIOR [5, 13], DOTA-v1.0 [29], SIMD [9], and NWPU VHR-10 [4], which are summarized in Table 1. The details of the datasets and the results on SIMD and NWPU VHR-10 are provided in the Suppl.

| Datasets | # of Images | Image Widths | # of Objects | # of Classes | Annotation Types |
|---|---|---|---|---|---|
| DIOR [13] | 22,463 | 800 | 190,288 | 20 | T-HBox |
| DIOR-R [5] | 22,463 | 800 | 190,288 | 20 | RBox |
| DOTA-v1.0 [29] | 2,806 | 800~4K | 188,282 | 15 | C-HBox, RBox |
| SIMD [9] | 5,000 | 1024 | 45,096 | 15 | T-HBox |
| NWPU VHR-10 [4] | 800 | ~1000 | 3,775 | 10 | T-HBox |
Table 1. Characteristics of the datasets used in our experiments.

# 4.2. Implementation Details

Our proposed ABBSPO pipeline adopts the FCOS [24] detector as the baseline architecture, with a ResNet-50 [10] backbone and an FPN [15] neck, built on the H2RBox-v2 [48] framework. To ensure fairness, all compared models are configured with the ResNet-50 [10] backbone and trained for 12 epochs on NVIDIA RTX 3090 GPUs.
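For readers who want to reproduce a comparable setup, the fragment below sketches this configuration in an mmdetection-style Python config; the type strings and field names are illustrative assumptions for orientation only, not the authors' released configuration.

```python
# Schematic training configuration (illustrative names, not an official release).
model = dict(
    detector='FCOS',            # baseline one-stage detector [24]
    framework='H2RBox-v2',      # weakly supervised OOD framework [48]
    backbone=dict(type='ResNet', depth=50),  # ResNet-50 [10]
    neck=dict(type='FPN'),                   # feature pyramid network [15]
)
train = dict(
    epochs=12,                  # schedule shared by all compared models
    hardware='NVIDIA RTX 3090',
)
```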
# 4.3. Experimental Results

# 4.3.1 Quantitative Comparison

It should be noted that objects such as round-shaped pools have orientation ambiguities regardless of their annotations (RBoxes) [42]. To avoid confusion in orientation learning, the annotations are modified to have horizontal orientations if the objects belong to the following categories: (i) DIOR-R: 'baseball field', 'chimney', 'golf field', 'stadium', 'storage tank', 'windmill'; and (ii) DOTA-v1.0: 'baseball diamond', 'stadium', 'roundabout'. Accordingly, their orientation learning is enforced to predict horizontal orientations, as in previous works [42, 48].

Results on DIOR-R. Table 2 shows the OOD results. In addition to the $\mathrm{AP}_{50}$ metric, we use $3\text{-}\mathrm{AP}_{50}$, which focuses on the detection performance for the three complex-shaped object categories: 'airplane' (APL), 'expressway service area' (ESA), and 'overpass' (OP). As shown, our ABBSPO outperforms all weakly supervised OOD methods. In particular, in terms of $3\text{-}\mathrm{AP}_{50}$, our ABBSPO surpasses the HBox-supervised SOTA methods, H2RBox and H2RBox-v2, by large average margins of $12.9\%$-point and $9.1\%$-point, respectively. In overall $\mathrm{AP}_{50}$, our ABBSPO surpasses H2RBox by $5.13\%$-point and H2RBox-v2 by $3.03\%$-point. Note that our ABBSPO not only surpasses our base detector (H2RBox-v2 [48]) but also performs comparably to RBox-supervised OOD methods such as FCOS [24] and Oriented R-CNN [32]. It is worth noting that, compared to the RBox-supervised OOD methods, our ABBSPO shows even superior performance with large margins of $6.5\%$-point to $11.7\%$-point on 'airplane', which has the most complex shape. Notably, the ABBS module is less effective for rectangular objects, such as 'tennis court' (TC) and 'vehicle' (VE), for which scaling is often unnecessary. However, it proves highly beneficial for complex-shaped objects such as the ESA. The SPA loss is applied only to symmetric categories and helps improve their performance, except for categories with orientation ambiguities, such as 'storage tank' (STO). Since the predicted angles learned through the SPA loss are also utilized in the ABBS module for scale adjustment, the SPA loss and the ABBS module jointly contribute to the performance improvement in symmetric categories. This joint effect is particularly evident in complex-shaped symmetric categories, such as APL and ESA, where the performance gains are larger. Nevertheless, the gains for the two symmetric but rectangular categories, TC and VE, are marginal. This is mainly because the ABBS module has limited impact on rectangular shapes, and the small object sizes leave an insufficient number of pixels for reliably determining the symmetry axis via the SPA loss.

| Sup. | Methods | APL | APO | BF | BC | BR | CH | ESA | ETS | DAM | GF | GTF | HA | OP | SH | STA | STO | TC | TS | VE | WM | $3\text{-}\mathrm{AP}_{50}$ | $\mathrm{AP}_{50}$ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| $S_R$ | RetinaNet [20] | 59.8 | 19.3 | 69.7 | 81.3 | 17.2 | 72.7 | 68.7 | 49.4 | 18.4 | 69.5 | 71.3 | 33.3 | 34.1 | 75.8 | 67.1 | 59.6 | 81.0 | 44.1 | 38.0 | 62.5 | 54.20 | 54.64 |
| | FCOS [24] | 62.1 | 37.9 | 74.6 | 81.2 | 32.9 | 72.1 | 75.3 | 61.8 | 27.4 | 69.1 | 78.7 | 34.4 | 50.6 | 80.1 | 68.6 | 68.1 | 81.3 | 49.1 | 43.4 | 64.5 | 62.67 | 60.66 |
| | Oriented R-CNN [32] | 63.0 | 36.7 | 71.9 | 81.6 | 41.1 | 72.6 | 77.8 | 65.5 | 24.8 | 72.9 | 82.1 | 40.9 | 56.5 | 81.2 | 73.4 | 62.4 | 81.5 | 53.3 | | 65.6 | 65.77 | 62.41 |
| | GWD [38] (RetinaNet) | 61.5 | 23.6 | 73.6 | 81.1 | 17.4 | 72.7 | 68.3 | 47.2 | 20.7 | 71.2 | 73.2 | 33.9 | 34.3 | 77.6 | 64.7 | 57.5 | 80.9 | 42.1 | 39.7 | 60.2 | 54.70 | 55.07 |
| | KLD [39] (RetinaNet) | 57.8 | 22.6 | 71.5 | 81.2 | 16.9 | 72.7 | 68.9 | 52.1 | 20.6 | 73.5 | 71.0 | 33.7 | 33.2 | 77.1 | 68.9 | 59.9 | 80.9 | 43.9 | 39.1 | 60.9 | 53.30 | 55.32 |
| | KFIoU [41] (RetinaNet) | 60.6 | 36.6 | 73.6 | 80.9 | 27.0 | 72.6 | 73.4 | 56.5 | 25.4 | 73.9 | 72.0 | 32.9 | 45.8 | 75.8 | 65.2 | 57.6 | 80.0 | 48.0 | 40.1 | 58.8 | 59.93 | 57.84 |
| $S_I$ | WSODet† [23] | 20.7 | 29.0 | 63.2 | 67.3 | 0.2 | 65.5 | 0.4 | 0.1 | 0.3 | 49.0 | 28.9 | 0.3 | 1.5 | 1.2 | 53.4 | 16.4 | 40.0 | 0.1 | 6.1 | 0.1 | 7.53 | 22.20 |
| $S_P$ | PointOBB† [17] | 58.2 | 15.3 | 70.5 | 78.6 | 0.1 | 72.2 | 69.6 | 1.8 | 3.7 | 0.3 | 77.3 | 16.7 | 40.4 | 79.2 | 39.6 | 32.4 | 29.6 | 16.8 | 33.6 | 27.7 | 56.07 | 38.08 |
| | Point2RBox-SK [47] | 41.9 | 9.1 | 62.9 | 52.8 | 10.8 | 72.2 | 3.0 | 43.9 | 5.5 | 9.7 | 25.1 | 9.1 | 21.0 | 24.0 | 20.4 | 25.1 | 71.7 | 4.5 | 16.1 | 16.3 | 21.97 | 27.26 |
| $S_H$ | H2RBox [42] | 57.1 | 14.4 | 72.2 | 82.6 | 17.5 | 71.2 | 56.5 | 55.2 | 14.0 | 67.7 | 77.9 | 31.0 | 40.7 | 76.3 | 66.2 | 63.4 | 81.5 | 50.4 | 38.0 | 57.6 | 51.43 | 54.57 |
| | H2RBox-v2 [48] | 55.5 | 17.8 | 76.9 | 80.5 | 27.7 | 72.2 | 63.0 | 58.6 | 24.4 | 73.9 | 80.3 | 33.9 | 47.2 | 77.4 | 58.7 | 60.9 | 81.4 | 48.1 | 41.1 | 53.9 | 55.23 | 56.67 |
| | ABBSPO (Ours) | 69.5 | 15.7 | 76.2 | 87.5 | 29.9 | 72.3 | 75.3 | 61.2 | 28.1 | 74.1 | 81.7 | 34.7 | 48.2 | 79.3 | 67.4 | 61.4 | 81.5 | 54.7 | 41.5 | 53.8 | 64.33 | 59.70 |

Table 2. Quantitative results for each category on the DIOR-R [5] test dataset for RBox-supervised $(S_R)$, Image-supervised $(S_I)$, Point-supervised $(S_P)$, and HBox-supervised $(S_H)$ methods. The $3\text{-}\mathrm{AP}_{50}$ represents the mean $\mathrm{AP}_{50}$ score over three complex-shaped object categories: 'airplane' (APL), 'expressway service area' (ESA), and 'overpass' (OP). The notation † indicates results taken from the paper [17].
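As a concrete reading of the $3\text{-}\mathrm{AP}_{50}$ metric, the entry for ABBSPO in Table 2 is simply the mean of its $\mathrm{AP}_{50}$ scores on the three complex-shaped categories:

$$
3\text{-}\mathrm{AP}_{50} = \frac{\mathrm{AP}_{50}^{\mathrm{APL}} + \mathrm{AP}_{50}^{\mathrm{ESA}} + \mathrm{AP}_{50}^{\mathrm{OP}}}{3} = \frac{69.5 + 75.3 + 48.2}{3} \approx 64.33.
$$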
| Sup. | Methods | PL | BD | BR | GTF | SV | LV | SH | TC | BC | ST | SBF | RA | HA | SP | HC | $3\text{-}\mathrm{AP}_{50}$ | $\mathrm{AP}_{50}$ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| $S_R$ | RetinaNet [20] | 87.5 | 75.1 | 39.9 | 59.6 | 66.3 | 66.3 | 78.2 | 90.5 | 55.0 | 62.7 | 47.1 | 63.6 | 59.4 | 55.1 | 43.0 | 61.87 | 63.3 |
| | FCOS [24] | 88.8 | 74.0 | 46.8 | 59.1 | 70.1 | 81.4 | 87.7 | 90.7 | 67.7 | 68.3 | 60.2 | 66.1 | 64.9 | 58.7 | 44.0 | 63.83 | 68.6 |
| | Oriented R-CNN [32] | 89.3 | 76.1 | 53.8 | 78.7 | 68.6 | 84.9 | 89.3 | 90.8 | 74.3 | 62.8 | 66.3 | 66.5 | 74.7 | 58.6 | 46.8 | 64.90 | 72.1 |
| | Oriented RepPoints [14] | 89.7 | 80.1 | 50.5 | 74.4 | 75.0 | 82.0 | 88.7 | 90.4 | 64.0 | 70.0 | 45.7 | 60.6 | 73.6 | 60.4 | 42.8 | 64.30 | 69.86 |
| | GWD [38] (RetinaNet) | 88.2 | 74.9 | 41.3 | 60.5 | 66.7 | 68.1 | 85.8 | 90.5 | 50.4 | 66.8 | 45.8 | 65.1 | 60.7 | 52.9 | 38.9 | 60.00 | 63.77 |
| | KLD [39] (RetinaNet) | 88.4 | 75.8 | 41.4 | 60.0 | 66.1 | 68.8 | 84.7 | 90.6 | 56.8 | 60.4 | 50.4 | 70.1 | 60.0 | 50.5 | 45.7 | 61.53 | 64.65 |
| | KFIoU [41] (RetinaNet) | 84.4 | 74.3 | 40.7 | 55.2 | 57.9 | 56.9 | 76.4 | 71.2 | 46.1 | 64.8 | 54.3 | 65.0 | 58.3 | 48.7 | 42.9 | 58.67 | 59.81 |
| $S_P$ | PointOBB [17] + FCOS | 32.4 | 67.3 | 0.8 | 53.6 | 2.3 | 9.7 | 18.8 | 0.3 | 9.9 | 12.8 | 0.5 | 54.0 | 11.0 | 34.1 | 11.4 | 25.97 | 21.26 |
| | Point2RBox-SK [47] | 50.1 | 63.7 | 1.6 | 44.7 | 23.9 | 34.7 | 32.7 | 78.8 | 41.2 | 32.2 | 2.1 | 34.3 | 20.8 | 42.5 | 7.2 | 33.27 | 34.03 |
| $S_H$ | H2RBox [42] | 89.5 | 73.1 | 37.3 | 55.1 | 70.7 | 76.4 | 85.4 | 90.3 | 66.5 | 67.3 | 59.6 | 64.9 | 60.6 | 57.9 | 36.5 | 61.30 | 66.07 |
| | H2RBox-v2 [48] | 89.4 | 74.8 | 45.4 | 56.0 | 70.3 | 76.6 | 87.9 | 90.5 | 69.3 | 67.5 | 56.7 | 64.7 | 65.3 | 55.5 | 45.5 | 63.47 | 67.69 |
| | ABBSPO (Ours) | 89.2 | 75.6 | 47.4 | 52.8 | 70.3 | 77.6 | 88.2 | 90.5 | 67.9 | 66.8 | 68.2 | 66.2 | 71.6 | 55.6 | 51.0 | 65.27 | 69.26 |
Table 3. Quantitative results for each category on the DOTA-v1.0 [29] validation dataset for $S_R$, $S_P$, and $S_H$ methods. The $3\text{-}\mathrm{AP}_{50}$ represents the mean $\mathrm{AP}_{50}$ score over three complex-shaped object categories: plane (PL), swimming pool (SP), and helicopter (HC). All the methods are re-trained using only the train dataset for a fair comparison.

Results on DOTA-v1.0. Table 3 shows the detection performance on DOTA-v1.0 [29]. Due to the non-responsiveness of the DOTA evaluation server, we report our experimental results on the validation dataset (458 images) instead of the test dataset (937 images). Note that, for a fair comparison, the validation dataset was not used for training any of the methods. We use $3\text{-}\mathrm{AP}_{50}$, which measures the detection performance for the three complex-shaped object categories: 'plane', 'swimming pool', and 'helicopter'. Our ABBSPO achieves SOTA performance, outperforming H2RBox by $3.19\%$-point and H2RBox-v2 by $1.57\%$-point. Moreover, our ABBSPO even surpasses the FCOS baseline by $0.66\%$-point.

# 4.3.2 Qualitative Comparison

Results on DIOR. As shown in the first row of Fig. 5, our ABBSPO is the only method that accurately captures both the orientation and scale of the airplane. Since DIOR annotations provide GT in the T-HBox format, directly using T-HBoxes as GT for training to predict RBoxes degrades the orientation and scale prediction accuracy of the existing HBox-supervised OOD methods, as shown in columns 2 and 3 of Fig. 5. In contrast, our ABBSPO avoids such degradation by utilizing the ABBS module, which optimally scales the GT HBox sizes for precise RBox prediction during training. It is also worth mentioning that the orientations predicted by our ABBSPO are more precise thanks to our SPA loss. Furthermore, compared to the RBox-supervised baseline (Rotated FCOS [24]), our approach demonstrates superior visual results even under weakly supervised learning.

Results on DOTA-v1.0. As shown in the second row of Fig. 5, ABBSPO predicts both the orientation and scale of the swimming pool very accurately, achieving similar accuracy for the tennis courts. Interestingly, only ABBSPO successfully detects the two tennis courts that are partially occluded by trees (red solid circle), while the other methods failed. These results visually support the effectiveness of our ABBS module and SPA loss in learning the scales and orientations of objects accurately.
| ABBS | SPA | $3\text{-}\mathrm{AP}_{50}$ (DIOR-R) | $\mathrm{AP}_{50}$ (DIOR-R) | $\mathrm{AP}_{50}$ (DOTA-v1.0) |
|:---:|:---:|:---:|:---:|:---:|
| | | 55.23 | 56.67 | 67.69 |
| ✓ | | 62.13 | 58.35 | 68.59 |
| | ✓ | 58.77 | 58.99 | 69.16 |
| ✓ | ✓ | 64.33 | 59.70 | 69.26 |

Table 4. Ablation results on the ABBS module and the SPA loss $(\mathcal{L}_{\mathrm{SPA}})$.
| Sampling on $\mathcal{L}_{\mathrm{SPA}}$ | Sampling on others | $3\text{-}\mathrm{AP}_{50}$ | $\mathrm{AP}_{50}$ |
|:---:|:---:|:---:|:---:|
| | | 61.67 | 58.93 |
| ✓ | | 64.33 | 59.70 |
| | ✓ | 43.63 | 50.51 |
| ✓ | ✓ | 45.10 | 50.91 |

Table 5. Ablation results (on DIOR-R) on proposal sampling in $\mathcal{L}_{\mathrm{SPA}}$ and in the other loss components.
| Min | Max | Interval | $3\text{-}\mathrm{AP}_{50}$ (DIOR-R) | $\mathrm{AP}_{50}$ (DIOR-R) | $\mathrm{AP}_{50}$ (DOTA-v1.0) |
|:---:|:---:|:---:|:---:|:---:|:---:|
| 0.9 | 1.1 | 0.05 | 57.97 | 58.15 | 69.26 |
| 0.5 | 1.5 | 0.1 | 61.67 | 59.62 | 68.8 |
| 1.0 | 1.5 | 0.1 | 64.33 | 59.70 | 68.9 |
| 1.0 | 2.0 | 0.1 | 56.07 | 55.46 | 66.55 |

Table 6. Ablation results on the scale range in the ABBS module.
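To make the scale-range ablation concrete, the sketch below enumerates the candidate scale factors the ABBS module searches over for a given (min, max, interval) setting and scales a GT HBox accordingly. The selection criterion shown, keeping the scaled HBox that best matches the predicted RBox's minimum circumscribed rectangle, is our reading of the module's description, not the authors' exact implementation; `circumscribe` and `iou` are assumed helpers.

```python
import numpy as np

def candidate_scales(s_min, s_max, interval):
    """Candidate scale factors, e.g. (1.0, 1.5, 0.1) for DIOR's T-HBoxes or
    (0.9, 1.1, 0.05) for DOTA's C-HBoxes (cf. Table 6)."""
    # +interval/2 guards against floating-point loss of the endpoint.
    return np.arange(s_min, s_max + interval / 2, interval)

def scale_hbox(hbox, s):
    """Scale an HBox (cx, cy, w, h) about its center by factor s."""
    cx, cy, w, h = hbox
    return (cx, cy, w * s, h * s)

def best_scaled_hbox(hbox_gt, rbox_pred, s_min, s_max, interval, circumscribe, iou):
    """Illustrative selection: pick the scaled GT HBox closest (by IoU) to the
    minimum circumscribed rectangle of the predicted RBox."""
    cands = [scale_hbox(hbox_gt, s) for s in candidate_scales(s_min, s_max, interval)]
    hb_pred = circumscribe(rbox_pred)  # minimum circumscribed rectangle of the RBox
    return max(cands, key=lambda hb: iou(hb, hb_pred))
```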
![](images/f838ad4df2e507e0e2ecfce0c5614d593565459003deab0ecdc911cd4cdab3c7.jpg)
Figure 5. Qualitative results on DIOR [5, 13] and DOTA-v1.0 [29]. Zoom in for better visualization. Rotated FCOS was trained only with GT RBoxes, while H2RBox, H2RBox-v2, and our ABBSPO were trained with GT T-HBoxes (1st row) and GT C-HBoxes (2nd row).

# 4.4. Ablation Studies

Ablation study on the SPA loss and the ABBS module. As shown in Table 4, both components contribute to the performance improvements. The ABBS module effectively scales the GT HBoxes, leading to an increase in $\mathrm{AP}_{50}$ on the DIOR dataset. Notably, it has a greater effect on complex-shaped object categories, resulting in a significant improvement in $3\text{-}\mathrm{AP}_{50}$. Similarly, the SPA loss enhances angle prediction accuracy, also bringing an improvement in $\mathrm{AP}_{50}$.

Ablation study on proposal sampling. As shown in Table 5, applying Top-$k$ proposal sampling exclusively to the SPA loss $(\mathcal{L}_{\mathrm{SPA}})$ yields the highest $\mathrm{AP}_{50}$, as high-quality symmetric proposals benefit $\mathcal{L}_{\mathrm{SPA}}$. However, additionally applying proposal sampling to the other losses $(\mathcal{L}_{\mathrm{rot}}, \mathcal{L}_{\mathrm{flp}}, \mathcal{L}_{\mathrm{reg}}, \mathcal{L}_{\mathrm{cn}}, \mathcal{L}_{\mathrm{cls}})$ significantly lowers the performance.

Ablation study on the scale range in the ABBS module. As shown in Table 6, the optimal scale range depends on the type of GT HBoxes. For DIOR's T-HBoxes, a scale range of 1 to 1.5 works well because it ensures that the predicted RBoxes fully cover the objects' boundaries. In contrast, for DOTA's C-HBoxes, which are already close to the optimal HBoxes, the optimal scale range is closer to 1. By adjusting the scale range to the type of HBoxes, the ABBS module achieves high accuracy in predicting RBoxes on both datasets.

# 5. Conclusion

Our ABBSPO, a weakly supervised OOD framework, effectively learns RBox prediction regardless of the type of HBox annotation (T-HBox or C-HBox). With our proposed Adaptive Bounding Box Scaling (ABBS) module and Symmetric Prior Angle (SPA) loss, we achieved enhanced orientation and scale accuracy for OOD that is comparable to, or even better than, that of RBox-supervised methods. Extensive experimental results underscore the superiority of our approach, which surpasses state-of-the-art HBox-supervised methods. Our method effectively bridges the gap between weakly supervised and fully supervised OOD, making it a promising solution for applications requiring efficient and accurate object detection trained with HBox annotations, which are relatively cheap compared to RBoxes.

# Acknowledgement

This research was supported by the Korea Institute of Marine Science & Technology Promotion (KIMST), funded by the Korea Coast Guard (RS-2023-00238652, Integrated Satellite-based Applications Development for Korea Coast Guard, $100\%$).

# References

[1] Liangyu Chen, Tong Yang, Xiangyu Zhang, Wei Zhang, and Jian Sun. Points as queries: Weakly semi-supervised object detection by points. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8823-8832, 2021. 3
[2] Pengfei Chen, Xuehui Yu, Xumeng Han, Najmul Hassan, Kai Wang, Jiachen Li, Jian Zhao, Humphrey Shi, Zhenjun Han, and Qixiang Ye. Point-to-box network for accurate object detection via single point supervision. In European Conference on Computer Vision, pages 51-67. Springer, 2022. 3
[3] Yihong Chen, Zheng Zhang, Yue Cao, Liwei Wang, Stephen Lin, and Han Hu. RepPoints v2: Verification meets regression for object detection. In Advances in Neural Information Processing Systems, pages 5621-5631. Curran Associates, Inc., 2020. 2
[4] Gong Cheng, Peicheng Zhou, and Junwei Han. Learning rotation-invariant convolutional neural networks for object detection in VHR optical remote sensing images. IEEE Transactions on Geoscience and Remote Sensing, 54(12):7405-7415, 2016. 2, 6
[5] Gong Cheng, Jiabao Wang, Ke Li, Xingxing Xie, Chunbo Lang, Yanqing Yao, and Junwei Han. Anchor-free oriented proposal generator for object detection. IEEE Transactions on Geoscience and Remote Sensing, 60:1-11, 2022. 6, 7, 8
[6] Jian Ding, Nan Xue, Yang Long, Gui-Song Xia, and Qikai Lu. Learning RoI transformer for oriented object detection in aerial images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2849-2858, 2019. 2
[7] Jiaming Han, Jian Ding, Jie Li, and Gui-Song Xia. Align deep features for oriented object detection. IEEE Transactions on Geoscience and Remote Sensing, 60:1-11, 2021. 2
[8] Jiaming Han, Jian Ding, Nan Xue, and Gui-Song Xia. ReDet: A rotation-equivariant detector for aerial object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2786-2795, 2021. 2
[9] Muhammad Haroon, Muhammad Shahzad, and Muhammad Moazam Fraz. Multisized object detection using spaceborne optical imagery. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 13:3032-3046, 2020. 2, 6
[10] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2016. 6
[11] Shitian He, Huanxin Zou, Yingqian Wang, Boyang Li, Xu Cao, and Ning Jing. Learning remote sensing object detection with single point supervision. IEEE Transactions on Geoscience and Remote Sensing, 2023. 3
[12] Darius Lam, Richard Kuzma, Kevin McGee, Samuel Dooley, Michael Laielli, Matthew Klaric, Yaroslav Bulatov, and Brendan McCord. xView: Objects in context in overhead imagery. arXiv preprint arXiv:1802.07856, 2018. 2
[13] Ke Li, Gang Wan, Gong Cheng, Liqiu Meng, and Junwei Han. Object detection in optical remote sensing images: A survey and a new benchmark. ISPRS Journal of Photogrammetry and Remote Sensing, 159:296-307, 2020. 2, 6, 8
[14] Wentong Li, Yijie Chen, Kaixuan Hu, and Jianke Zhu. Oriented RepPoints for aerial object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1829-1838, 2022. 2, 7
[15] Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2117-2125, 2017. 6
[16] Xuebo Liu, Ding Liang, Shi Yan, Dagui Chen, Yu Qiao, and Junjie Yan. FOTS: Fast oriented text spotting with a unified network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5676-5685, 2018. 3
[17] Junwei Luo, Xue Yang, Yi Yu, Qingyun Li, Junchi Yan, and Yansheng Li. PointOBB: Learning oriented object detection via single point supervision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16730-16740, 2024. 2, 3, 7
[18] Botao Ren, Xue Yang, Yi Yu, Junwei Luo, and Zhidong Deng. PointOBB-v2: Towards simpler, faster, and stronger single point supervised oriented object detection. arXiv preprint arXiv:2410.08210, 2024. 3
[19] Zhongzheng Ren, Zhiding Yu, Xiaodong Yang, Ming-Yu Liu, Alexander G. Schwing, and Jan Kautz. UFO²: A unified framework towards omni-supervised object detection. In European Conference on Computer Vision, pages 288-313. Springer, 2020. 3
[20] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, pages 2980-2988, 2017. 2, 6, 7
[21] Xian Sun, Peijin Wang, Zhiyuan Yan, Feng Xu, Ruiping Wang, Wenhui Diao, Jin Chen, Jihao Li, Yingchao Feng, Tao Xu, et al. FAIR1M: A benchmark dataset for fine-grained object recognition in high-resolution remote sensing imagery. ISPRS Journal of Photogrammetry and Remote Sensing, 184:116-130, 2022. 2
[22] Yongqing Sun, Jie Ran, Feng Yang, Chenqiang Gao, Takayuki Kurozumi, Hideaki Kimata, and Ziqi Ye. Oriented object detection for remote sensing images based on weakly supervised learning. In 2021 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), pages 1-6. IEEE, 2021. 2
[23] Zhiwen Tan, Zhiguo Jiang, Chen Guo, and Haopeng Zhang. WSODet: A weakly supervised oriented detector for aerial object detection. IEEE Transactions on Geoscience and Remote Sensing, 61:1-12, 2023. 2, 3, 7
[24] Zhi Tian, Xiangxiang Chu, Xiaoming Wang, Xiaolin Wei, and Chunhua Shen. Fully convolutional one-stage 3D object detection on LiDAR range images. Advances in Neural Information Processing Systems, 35:34899-34911, 2022. 2, 4, 6, 7
[25] Hao Wang, Zhanchao Huang, Zhengchao Chen, Ying Song, and Wei Li. Multigrained angle representation for remote-sensing object detection. IEEE Transactions on Geoscience and Remote Sensing, 60:1-13, 2022. 2
[26] Jian Wang, Fan Li, and Haixia Bi. Gaussian focal loss: Learning distribution polarized angle prediction for rotated object detection in aerial images. IEEE Transactions on Geoscience and Remote Sensing, 60:1-13, 2022. 2
[27] Linfei Wang, Yibing Zhan, Xu Lin, Baosheng Yu, Liang Ding, Jianqing Zhu, and Dapeng Tao. Explicit and implicit box equivariance learning for weakly-supervised rotated object detection. IEEE Transactions on Emerging Topics in Computational Intelligence, 2024. 2
[28] Zhou Wang, Alan C. Bovik, Hamid R. Sheikh, and Eero P. Simoncelli. Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600-612, 2004. 6
[29] Gui-Song Xia, Xiang Bai, Jian Ding, Zhen Zhu, Serge Belongie, Jiebo Luo, Mihai Datcu, Marcello Pelillo, and Liangpei Zhang. DOTA: A large-scale dataset for object detection in aerial images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3974-3983, 2018. 2, 6, 7, 8
[30] Xian Sun, Zhirui Wang, Yuanrui Sun, Wenhui Diao, Yue Zhang, and Kun Fu. AIR-SARShip-1.0: High-resolution SAR ship detection dataset. Journal of Radars, 8(6):852-863, 2019. 2
[31] Zhifeng Xiao, Qing Liu, Gefu Tang, and Xiaofang Zhai. Elliptic Fourier transformation-based histograms of oriented gradients for rotationally invariant object detection in remote-sensing images. International Journal of Remote Sensing, 36(2):618-644, 2015. 2
[32] Xingxing Xie, Gong Cheng, Jiabao Wang, Xiwen Yao, and Junwei Han. Oriented R-CNN for object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3520-3529, 2021. 6, 7
[33] Hang Xu, Xinyuan Liu, Haonan Xu, Yike Ma, Zunjie Zhu, Chenggang Yan, and Feng Dai. Rethinking boundary discontinuity problem for oriented object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 17406-17415, 2024. 2
[34] Xue Yang and Junchi Yan. Arbitrary-oriented object detection with circular smooth label. In Computer Vision - ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part VIII 16, pages 677-694. Springer, 2020. 2
[35] Xue Yang and Junchi Yan. Arbitrary-oriented object detection with circular smooth label. In Computer Vision - ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part VIII 16, pages 677-694. Springer, 2020. 2
[36] Xue Yang, Liping Hou, Yue Zhou, Wentao Wang, and Junchi Yan. Dense label encoding for boundary discontinuity free rotation detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15819-15829, 2021. 2
[37] Xue Yang, Junchi Yan, Ziming Feng, and Tao He. R3Det: Refined single-stage detector with feature refinement for rotating object. Proceedings of the AAAI Conference on Artificial Intelligence, 35(4):3163-3171, 2021. 2
[38] Xue Yang, Junchi Yan, Qi Ming, Wentao Wang, Xiaopeng Zhang, and Qi Tian. Rethinking rotated object detection with Gaussian Wasserstein distance loss. In International Conference on Machine Learning, pages 11830-11841. PMLR, 2021. 2, 7
[39] Xue Yang, Xiaojiang Yang, Jirui Yang, Qi Ming, Wentao Wang, Qi Tian, and Junchi Yan. Learning high-precision bounding box for rotated object detection via Kullback-Leibler divergence. Advances in Neural Information Processing Systems, 34:18381-18394, 2021. 7
[40] Xue Yang, Gefan Zhang, Xiaojiang Yang, Yue Zhou, Wentao Wang, Jin Tang, Tao He, and Junchi Yan. Detecting rotated objects as Gaussian distributions and its 3-D generalization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(4):4335-4354, 2022.
[41] Xue Yang, Yue Zhou, Gefan Zhang, Jirui Yang, Wentao Wang, Junchi Yan, Xiaopeng Zhang, and Qi Tian. The KFIoU loss for rotated object detection. arXiv preprint arXiv:2201.12558, 2022. 2, 7
[42] Xue Yang, Gefan Zhang, Wentong Li, Yue Zhou, Xuehui Wang, and Junchi Yan. H2RBox: Horizontal box annotation is all you need for oriented object detection. In The Eleventh International Conference on Learning Representations, 2023. 1, 2, 3, 5, 6, 7
[43] Ze Yang, Shaohui Liu, Han Hu, Liwei Wang, and Stephen Lin. RepPoints: Point set representation for object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9657-9666, 2019. 2
[44] Xinyi Ying, Li Liu, Yingqian Wang, Ruojing Li, Nuo Chen, Zaiping Lin, Weidong Sheng, and Shilin Zhou. Mapping degeneration meets label evolution: Learning infrared small target detection with single point supervision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15528-15538, 2023. 3
[45] Jiahui Yu, Yuning Jiang, Zhangyang Wang, Zhimin Cao, and Thomas Huang. UnitBox: An advanced object detection network. In Proceedings of the 24th ACM International Conference on Multimedia, pages 516-520, 2016. 6
[46] Yi Yu and Feipeng Da. Phase-shifting coder: Predicting accurate orientation in oriented object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13354-13363, 2023. 4
[47] Yi Yu, Xue Yang, Qingyun Li, Feipeng Da, Jifeng Dai, Yu Qiao, and Junchi Yan. Point2RBox: Combine knowledge from synthetic visual patterns for end-to-end oriented object detection with single point supervision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16783-16793, 2024. 2, 3, 7
[48] Yi Yu, Xue Yang, Qingyun Li, Yue Zhou, Feipeng Da, and Junchi Yan. H2RBox-v2: Incorporating symmetry for boosting horizontal box supervised oriented object detection. Advances in Neural Information Processing Systems, 36, 2024. 1, 2, 3, 4, 5, 6, 7
[49] Tingxuan Yue, Yanmei Zhang, Jin Wang, Yanbing Xu, and Pengyun Liu. A weak supervision learning paradigm for oriented ship detection in SAR image. IEEE Transactions on Geoscience and Remote Sensing, 2024. 2
\ No newline at end of file
diff --git a/2025/ABBSPO_ Adaptive Bounding Box Scaling and Symmetric Prior based Orientation Prediction for Detecting Aerial Image Objects/images.zip b/2025/ABBSPO_ Adaptive Bounding Box Scaling and Symmetric Prior based Orientation Prediction for Detecting Aerial Image Objects/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..f2b7a08ca22cbd660942124406b071fcef66eaa3
--- /dev/null
+++ b/2025/ABBSPO_ Adaptive Bounding Box Scaling and Symmetric Prior based Orientation Prediction for Detecting Aerial Image Objects/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:985fb7df8e46c19fcd216503aa446ea9cd47f93a641c441dc82240f26319f790
+size 957418
188 + ], + "type": "text", + "content": " Jaehyup Lee" + }, + { + "bbox": [ + 104, + 160, + 510, + 188 + ], + "type": "inline_equation", + "content": "^{2\\dagger}" + }, + { + "bbox": [ + 104, + 160, + 510, + 188 + ], + "type": "text", + "content": " Munchurl Kim" + }, + { + "bbox": [ + 104, + 160, + 510, + 188 + ], + "type": "inline_equation", + "content": "^{1\\dagger}" + }, + { + "bbox": [ + 104, + 160, + 510, + 188 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 104, + 160, + 510, + 188 + ], + "type": "text", + "content": "KAIST " + }, + { + "bbox": [ + 104, + 160, + 510, + 188 + ], + "type": "inline_equation", + "content": "{}^{2}" + }, + { + "bbox": [ + 104, + 160, + 510, + 188 + ], + "type": "text", + "content": "KNU" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 155, + 190, + 455, + 204 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 155, + 190, + 455, + 204 + ], + "spans": [ + { + "bbox": [ + 155, + 190, + 455, + 204 + ], + "type": "text", + "content": "{woojin412, hmnc97, jaeho.moon, mkimee}@kaist.ac.kr jaehyuplee@knu.ac.kr" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 187, + 205, + 423, + 217 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 187, + 205, + 423, + 217 + ], + "spans": [ + { + "bbox": [ + 187, + 205, + 423, + 217 + ], + "type": "text", + "content": "https://kaist-viclab.github.io/ABBSPO_site/" + } + ] + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 69, + 241, + 157, + 323 + ], + "blocks": [ + { + "bbox": [ + 69, + 241, + 157, + 323 + ], + "lines": [ + { + "bbox": [ + 69, + 241, + 157, + 323 + ], + "spans": [ + { + "bbox": [ + 69, + 241, + 157, + 323 + ], + "type": "image", + "image_path": "568859ed62ff0b20ad4e6ee353770fe3878e44cbdbe8d43e11f475ee89f78213.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 71, + 324, + 156, + 333 + ], + "lines": [ + { + "bbox": [ + 71, + 324, + 156, + 333 + ], + "spans": [ + { + "bbox": [ + 71, + 324, + 156, + 333 + ], + "type": "text", + "content": "① GT RBox ② GT C-HBox" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 160, + 234, + 234, + 332 + ], + "blocks": [ + { + "bbox": [ + 160, + 234, + 234, + 332 + ], + "lines": [ + { + "bbox": [ + 160, + 234, + 234, + 332 + ], + "spans": [ + { + "bbox": [ + 160, + 234, + 234, + 332 + ], + "type": "image", + "image_path": "b27b775cfa0792a81a90726212506fafdc20232a590b12f4e10a34d0a8bb840b.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 160, + 334, + 233, + 423 + ], + "blocks": [ + { + "bbox": [ + 160, + 334, + 233, + 423 + ], + "lines": [ + { + "bbox": [ + 160, + 334, + 233, + 423 + ], + "spans": [ + { + "bbox": [ + 160, + 334, + 233, + 423 + ], + "type": "image", + "image_path": "4036c351f70e6be817546f6ab541a1e7b674cd0d755dd3972a0e99308263a526.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 61, + 431, + 156, + 441 + ], + "lines": [ + { + "bbox": [ + 61, + 431, + 156, + 441 + ], + "spans": [ + { + "bbox": [ + 61, + 431, + 156, + 441 + ], + "type": "text", + "content": "(a) Two types of GT HBox" + } + ] + } + ], + "index": 17, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 160, + 431, + 373, + 441 + ], + "lines": [ + { + "bbox": [ + 160, + 431, + 373, + 441 + ], + "spans": [ + { + "bbox": [ + 160, + 431, + 373, + 441 + 
], + "type": "text", + "content": "(b) Visual comparison of HBox-supervised oriented detectors" + } + ] + } + ], + "index": 18, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 55, + 450, + 555, + 506 + ], + "lines": [ + { + "bbox": [ + 55, + 450, + 555, + 506 + ], + "spans": [ + { + "bbox": [ + 55, + 450, + 555, + 506 + ], + "type": "text", + "content": "Figure 1. Performance comparison of HBox-supervised orientated detectors. (a) Top: A coarse horizontal bounding box (C-HBox) " + }, + { + "bbox": [ + 55, + 450, + 555, + 506 + ], + "type": "inline_equation", + "content": "(②)" + }, + { + "bbox": [ + 55, + 450, + 555, + 506 + ], + "type": "text", + "content": " and its corresponding rotated bounding box (RBox) " + }, + { + "bbox": [ + 55, + 450, + 555, + 506 + ], + "type": "inline_equation", + "content": "(①)" + }, + { + "bbox": [ + 55, + 450, + 555, + 506 + ], + "type": "text", + "content": ". Bottom: A tight horizontal bounding box (T-HBox)(" + }, + { + "bbox": [ + 55, + 450, + 555, + 506 + ], + "type": "inline_equation", + "content": "(2)" + }, + { + "bbox": [ + 55, + 450, + 555, + 506 + ], + "type": "text", + "content": ") and its corresponding RBox " + }, + { + "bbox": [ + 55, + 450, + 555, + 506 + ], + "type": "inline_equation", + "content": "(①)" + }, + { + "bbox": [ + 55, + 450, + 555, + 506 + ], + "type": "text", + "content": ". (b) Our ABBSPO is capable of accurately detecting both orientations and scales for GT C-HBoxes and T-HBoxes. (c) Average Precision " + }, + { + "bbox": [ + 55, + 450, + 555, + 506 + ], + "type": "inline_equation", + "content": "\\left(\\mathrm{AP}_{50}\\right)" + }, + { + "bbox": [ + 55, + 450, + 555, + 506 + ], + "type": "text", + "content": " for H2RBox [42], H2RBox-v2 [48], and our ABBSPO. " + }, + { + "bbox": [ + 55, + 450, + 555, + 506 + ], + "type": "inline_equation", + "content": "3-\\mathrm{AP}_{50}" + }, + { + "bbox": [ + 55, + 450, + 555, + 506 + ], + "type": "text", + "content": " represents the mean " + }, + { + "bbox": [ + 55, + 450, + 555, + 506 + ], + "type": "inline_equation", + "content": "\\mathrm{AP}_{50}" + }, + { + "bbox": [ + 55, + 450, + 555, + 506 + ], + "type": "text", + "content": " for three complex shaped objects: (i) DIOR: 'airplane', 'expressway service area', and 'overpass' and (ii) DOTA: 'plane', 'swimming pool', and 'helicopter'." 
+ } + ] + } + ], + "index": 19, + "angle": 0, + "type": "image_caption" + } + ], + "index": 9 + }, + { + "type": "image", + "bbox": [ + 235, + 234, + 309, + 332 + ], + "blocks": [ + { + "bbox": [ + 235, + 234, + 309, + 332 + ], + "lines": [ + { + "bbox": [ + 235, + 234, + 309, + 332 + ], + "spans": [ + { + "bbox": [ + 235, + 234, + 309, + 332 + ], + "type": "image", + "image_path": "abbe6ce2f4ccfae50e61911adbfc36c43a0599bc320175b91a448662945033d5.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + } + ], + "index": 10 + }, + { + "type": "image", + "bbox": [ + 236, + 333, + 309, + 422 + ], + "blocks": [ + { + "bbox": [ + 236, + 333, + 309, + 422 + ], + "lines": [ + { + "bbox": [ + 236, + 333, + 309, + 422 + ], + "spans": [ + { + "bbox": [ + 236, + 333, + 309, + 422 + ], + "type": "image", + "image_path": "d935d857a5b65cffbee17d0414115fdc61d65805e5ef4c398351ff5540f95a21.jpg" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_body" + } + ], + "index": 11 + }, + { + "type": "image", + "bbox": [ + 312, + 234, + 385, + 332 + ], + "blocks": [ + { + "bbox": [ + 312, + 234, + 385, + 332 + ], + "lines": [ + { + "bbox": [ + 312, + 234, + 385, + 332 + ], + "spans": [ + { + "bbox": [ + 312, + 234, + 385, + 332 + ], + "type": "image", + "image_path": "8bd69b0bee5866f4cba10b95e82ac4c8cc5b57f2cd8ba8783c90a33616727935.jpg" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_body" + } + ], + "index": 12 + }, + { + "type": "image", + "bbox": [ + 312, + 333, + 385, + 422 + ], + "blocks": [ + { + "bbox": [ + 312, + 333, + 385, + 422 + ], + "lines": [ + { + "bbox": [ + 312, + 333, + 385, + 422 + ], + "spans": [ + { + "bbox": [ + 312, + 333, + 385, + 422 + ], + "type": "image", + "image_path": "e955c6c9f48296b82376f41da38d1df88096b41e872664228316cfcd675731db.jpg" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_body" + } + ], + "index": 13 + }, + { + "type": "image", + "bbox": [ + 389, + 247, + 541, + 337 + ], + "blocks": [ + { + "bbox": [ + 389, + 247, + 541, + 337 + ], + "lines": [ + { + "bbox": [ + 389, + 247, + 541, + 337 + ], + "spans": [ + { + "bbox": [ + 389, + 247, + 541, + 337 + ], + "type": "image", + "image_path": "949e5f4d898a521f3773f2cdc29482ebb42f4bc6bc2189ba275e0d71a3b33682.jpg" + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_body" + } + ], + "index": 14 + }, + { + "type": "image", + "bbox": [ + 389, + 342, + 541, + 423 + ], + "blocks": [ + { + "bbox": [ + 389, + 342, + 541, + 423 + ], + "lines": [ + { + "bbox": [ + 389, + 342, + 541, + 423 + ], + "spans": [ + { + "bbox": [ + 389, + 342, + 541, + 423 + ], + "type": "image", + "image_path": "a5b098d46dbb0f780bcd7fc46372d3c5816887a0ef6a2fc8e0c422f150b54626.jpg" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 389, + 424, + 547, + 441 + ], + "lines": [ + { + "bbox": [ + 389, + 424, + 547, + 441 + ], + "spans": [ + { + "bbox": [ + 389, + 424, + 547, + 441 + ], + "type": "text", + "content": "H2RBox H2RBox-v2 ABBSPO) Performance overview 3-AP50 AP50" + } + ] + } + ], + "index": 16, + "angle": 0, + "type": "image_caption" + } + ], + "index": 15 + }, + { + "bbox": [ + 151, + 516, + 200, + 529 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 151, + 516, + 200, + 529 + ], + "spans": [ + { + "bbox": [ + 151, + 516, + 200, + 529 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 55, + 547, + 296, + 668 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 
55, + 547, + 296, + 668 + ], + "spans": [ + { + "bbox": [ + 55, + 547, + 296, + 668 + ], + "type": "text", + "content": "Weakly supervised Oriented Object Detection (WS-OOD) has gained attention as a cost-effective alternative to fully supervised methods, providing efficiency and high accuracy. Among weakly supervised approaches, horizontal bounding box (HBox) supervised OOD stands out for its ability to directly leverage existing HBox annotations while achieving the highest accuracy under weak supervision settings. This paper introduces adaptive bounding box scaling and symmetry-prior-based orientation prediction, called ABBSPO that is a framework for WS-OOD. Our ABB-" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 313, + 517, + 555, + 697 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 517, + 555, + 697 + ], + "spans": [ + { + "bbox": [ + 313, + 517, + 555, + 697 + ], + "type": "text", + "content": "SPO addresses the limitations of previous HBox-supervised OOD methods, which compare ground truth (GT) HBoxes directly with predicted RBoxes' minimum circumscribed rectangles, often leading to inaccuracies. To overcome this, we propose: (i) Adaptive Bounding Box Scaling (ABBS) that appropriately scales the GT HBoxes to optimize for the size of each predicted RBox, ensuring more accurate prediction for RBoxes' scales; and (ii) a Symmetric Prior Angle (SPA) loss that uses the inherent symmetry of aerial objects for self-supervised learning, addressing the issue in previous methods where learning fails if they consistently make incorrect predictions for all three augmented views (original, rotated, and flipped). Extensive experimental results demonstrate that our ABBsPO achieves state-of-the-art results, outperforming existing methods." + } + ] + } + ], + "index": 22 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "spans": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "text", + "content": "CVF" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "spans": [ + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "type": "text", + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 66, + 673, + 190, + 684 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 673, + 190, + 684 + ], + "spans": [ + { + "bbox": [ + 66, + 673, + 190, + 684 + ], + "type": "text", + "content": "*Co-first authors (equal contribution)." + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 67, + 684, + 157, + 693 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 684, + 157, + 693 + ], + "spans": [ + { + "bbox": [ + 67, + 684, + 157, + 693 + ], + "type": "text", + "content": "† Co-corresponding authors." 
+ } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 67, + 693, + 275, + 703 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 693, + 275, + 703 + ], + "spans": [ + { + "bbox": [ + 67, + 693, + 275, + 703 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 67, + 693, + 275, + 703 + ], + "type": "text", + "content": " Korea Advanced Institute of Science and Technology (KAIST)." + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 67, + 703, + 201, + 712 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 703, + 201, + 712 + ], + "spans": [ + { + "bbox": [ + 67, + 703, + 201, + 712 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 67, + 703, + 201, + 712 + ], + "type": "text", + "content": "Kyungpook National University (KNU)." + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "text", + "content": "8848" + } + ] + } + ], + "index": 28 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 71, + 136, + 83 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 71, + 136, + 83 + ], + "spans": [ + { + "bbox": [ + 56, + 71, + 136, + 83 + ], + "type": "text", + "content": "1. Introduction" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 91, + 297, + 307 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 91, + 297, + 307 + ], + "spans": [ + { + "bbox": [ + 55, + 91, + 297, + 307 + ], + "type": "text", + "content": "Object detection often leverages supervised learning with ground truth horizontal bounding box labels (GT HBoxes) to locate the objects of interest. However, the usage of GT HBoxes limits the precise localization of the objects with their orientations and tight surrounding boundaries, especially for objects such as airplanes and ships of various orientations in aerial images. To handle object detection as an oriented object detection problem, more precise rotated bounding box labels (GT RBoxes) are required, which is very costly to generate [49]. So, to mitigate this challenge, previous methods [17, 23, 42, 47-49] have explored weakly supervised oriented object detection (OOD) that utilizes less expensive forms of annotations, such as image-level, point and HBox annotations. Among these, the use of HBoxes is the most popular due to their widespread availability in existing public datasets [4, 9, 12, 21, 30, 31] to predict the RBoxes for objects of interest. So, this approach can detour the costly process of generating GT RBoxes." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 312, + 296, + 589 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 312, + 296, + 589 + ], + "spans": [ + { + "bbox": [ + 55, + 312, + 296, + 589 + ], + "type": "text", + "content": "The previous weakly supervised (WS) learning of OOD [22, 27, 42, 48] utilizes GT HBoxes in the forms of coarse HBoxes, called GT C-HBoxes, as supervision to compare with the HBoxes derived as the minimum circumscribed rectangles from the predicted RBoxes by their OOD models. As shown in the upper figure of Fig. 1-(a), the GT C-HBoxes are defined as coarse horizontal bounding boxes that loosely encompass the boundaries of objects (not tightly bounded). 
The GT HBoxes of the DOTA [29] dataset are in the forms of C-HBoxes which are derived as the minimum circumscribed horizontal bounding boxes of their GT RBoxes. However, when the previous OOD methods [42, 48] are supervised with the other GT HBoxes that are in the form of tight HBoxes, called GT T-HBoxes (e.g. DIOR dataset [13]), as shown in the bottom figure of Fig. 1-(a), we found that their performances are significantly degraded because GT T-HBoxes tend to have different scales, compared to those of GT C-HBoxes (see Fig. 1-(c)). As shown in Fig. 1-(b), this causes the previous methods to predict either RBoxes with accurate orientations but inaccurate scales smaller than the sizes of their corresponding objects, or the RBoxes with inaccurate (close to horizontal) orientations but somewhat accurate scales (almost the same as HBoxes)." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 594, + 297, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 594, + 297, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 594, + 297, + 714 + ], + "type": "text", + "content": "To overcome the above limitations of the previous WS-OOD methods, we propose an adaptive bounding box scaling and symmetry-prior-based orientation prediction, called as ABBSPO, as a WS-OOD framework that can be effectively trained with either GT C-HBoxes or GT T-Hboxes for aerial images. For this, (i) a novel Adaptive Bounding Box Scaling (ABBS) module is designed to have the flexibility of adjusting the GT HBoxes for each object into random sizes and then selecting the optimal scaled GT HBoxes that allow it to encompass the predicted RBoxes. Note that" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 313, + 72, + 555, + 193 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 72, + 555, + 193 + ], + "spans": [ + { + "bbox": [ + 313, + 72, + 555, + 193 + ], + "type": "text", + "content": "the previous methods are not possible to have such flexibility for the adjustment of GT HBoxes; (ii) An angle learning module is proposed in a self-supervised manner that utilizes the symmetric priors of the objects that open appear in top-down views of aerial images. As shown in Figs 1-(b) and (c), Our proposed method predicts accurate orientation and surrounding boxes of objects for both cases of using GT C-HBoxes and GT T-HBoxes, outperforming the previous methods in angle accuracy and localization in terms of average precision (AP). Our contributions are summarized as:" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 318, + 205, + 555, + 408 + ], + "type": "list", + "angle": 0, + "index": 9, + "blocks": [ + { + "bbox": [ + 318, + 205, + 555, + 277 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 318, + 205, + 555, + 277 + ], + "spans": [ + { + "bbox": [ + 318, + 205, + 555, + 277 + ], + "type": "text", + "content": "- To the best of our knowledge, our work is the first to address the limitations of previous weakly supervised OOD learning methods with T-HBoxes as GT. 
To overcome this, we propose a novel weakly supervised OOD method that can be effectively trained with T-HBoxes or C-HBoxes that can be cheaply annotated as GT;" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 318, + 277, + 555, + 337 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 318, + 277, + 555, + 337 + ], + "spans": [ + { + "bbox": [ + 318, + 277, + 555, + 337 + ], + "type": "text", + "content": "- The adaptive bounding box scaling (ABBS) module is proposed to flexibly adjust the HBox (GT) for each object toward an appropriately scaled HBox. This allows part of the predicted RBoxes to place outside the T-HBox (GT), yielding precise RBox prediction;" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 318, + 338, + 554, + 372 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 318, + 338, + 554, + 372 + ], + "spans": [ + { + "bbox": [ + 318, + 338, + 554, + 372 + ], + "type": "text", + "content": "- A symmetric prior angle (SPA) loss is presented to enhance the orientation prediction accuracy by leveraging the symmetric priors of the objects in aerial images;" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 318, + 373, + 554, + 408 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 318, + 373, + 554, + 408 + ], + "spans": [ + { + "bbox": [ + 318, + 373, + 554, + 408 + ], + "type": "text", + "content": "- Our method significantly outperforms the state-of-the-art OOD methods using weakly supervised learning with HBoxes (GT) for aerial datasets." + } + ] + } + ], + "index": 8 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 314, + 424, + 400, + 436 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 424, + 400, + 436 + ], + "spans": [ + { + "bbox": [ + 314, + 424, + 400, + 436 + ], + "type": "text", + "content": "2. Related Work" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 444, + 542, + 456 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 444, + 542, + 456 + ], + "spans": [ + { + "bbox": [ + 313, + 444, + 542, + 456 + ], + "type": "text", + "content": "2.1. RBox-supervised Oriented Object Detection" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 462, + 556, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 462, + 556, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 462, + 556, + 715 + ], + "type": "text", + "content": "Oriented Object Detection (OOD) has gained significant attention, leading to extensive research in RBox-supervised methods (using GT RBoxes) such as Rotated RetinaNet [20], Rotated FCOS [24], " + }, + { + "bbox": [ + 313, + 462, + 556, + 715 + ], + "type": "inline_equation", + "content": "\\mathrm{R}^3\\mathrm{Det}" + }, + { + "bbox": [ + 313, + 462, + 556, + 715 + ], + "type": "text", + "content": " [37], ROI Transformer [6], ReDet [8], and " + }, + { + "bbox": [ + 313, + 462, + 556, + 715 + ], + "type": "inline_equation", + "content": "\\mathrm{S}^2\\mathrm{A}" + }, + { + "bbox": [ + 313, + 462, + 556, + 715 + ], + "type": "text", + "content": "-Net [7]. Rotated FCOS [24] improves OOD performance by introducing center-ness, which assigns weights to samples based on their proposal locations, thereby emphasizing well-positioned proposals. OrientedRepPoints methods [3, 14, 43], in contrast, utilize flexible receptive fields to extract key object points. 
However, a common challenge in RBox-supervised OOD methods is the boundary discontinuity problem that arises from the definition and prediction of angle parameters " + }, + { + "bbox": [ + 313, + 462, + 556, + 715 + ], + "type": "inline_equation", + "content": "(\\theta)" + }, + { + "bbox": [ + 313, + 462, + 556, + 715 + ], + "type": "text", + "content": " [33, 34]. To address this, several methods modified the ways of defining the RBox representations, such as Gaussian distributions [25, 26, 35, 36, 38-41], thereby avoiding straightforward regression of angle parameters. On the other hand, in weakly supervised learning, the boundary discontinuity issue does not arise thanks to the absence of direct RBox supervision, allowing for more stable angle predictions without the need for complex mitigation strategies." + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "text", + "content": "8849" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 58, + 71, + 553, + 184 + ], + "blocks": [ + { + "bbox": [ + 58, + 71, + 553, + 184 + ], + "lines": [ + { + "bbox": [ + 58, + 71, + 553, + 184 + ], + "spans": [ + { + "bbox": [ + 58, + 71, + 553, + 184 + ], + "type": "image", + "image_path": "9058721b81e8835b11068c548c9030cde9f80c85472d688e79cb7a9fc205dd43.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 58, + 183, + 247, + 312 + ], + "blocks": [ + { + "bbox": [ + 58, + 183, + 247, + 312 + ], + "lines": [ + { + "bbox": [ + 58, + 183, + 247, + 312 + ], + "spans": [ + { + "bbox": [ + 58, + 183, + 247, + 312 + ], + "type": "image", + "image_path": "3b91cfd3d439130d68778ce5b060bd3e80c614824624b643e61de1a383877e2f.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 55, + 322, + 555, + 367 + ], + "lines": [ + { + "bbox": [ + 55, + 322, + 555, + 367 + ], + "spans": [ + { + "bbox": [ + 55, + 322, + 555, + 367 + ], + "type": "text", + "content": "Figure 2. Overall pipeline of our ABBsPO framework. Our ABBsPO leverages weakly supervised learning from HBox annotations to accurately predict RBoxes. The framework incorporates the Orientation Learning Branch (OLB) for precise angle estimation, using the Symmetric Prior Angle (SPA) loss, and the Scale Learning Branch (SLB) for optimal scale adjustment via the Adaptive Bounding Box Scaling (ABBS) module. The framework supports both C-HBox and T-HBox ground truths, ensuring robust and accurate predictions." 
+ } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 250, + 183, + 552, + 314 + ], + "blocks": [ + { + "bbox": [ + 250, + 183, + 552, + 314 + ], + "lines": [ + { + "bbox": [ + 250, + 183, + 552, + 314 + ], + "spans": [ + { + "bbox": [ + 250, + 183, + 552, + 314 + ], + "type": "image", + "image_path": "55d9e3d1694b0495f9e982e4c5a131893a68faaf37d1a22e07983bd4262d8421.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 378, + 289, + 391 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 378, + 289, + 391 + ], + "spans": [ + { + "bbox": [ + 55, + 378, + 289, + 391 + ], + "type": "text", + "content": "2.2. Weakly-supervised Orientd Object Detection" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 396, + 296, + 456 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 396, + 296, + 456 + ], + "spans": [ + { + "bbox": [ + 55, + 396, + 296, + 456 + ], + "type": "text", + "content": "Weakly supervised OOD methods learn to predict RBoxes without directly utilizing GT RBoxes. The approaches in this domain are primarily categorized based on the types of labels they employ: image-based [23], point-based [17, 47], and HBox-based [42, 48]." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 467, + 296, + 526 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 467, + 296, + 526 + ], + "spans": [ + { + "bbox": [ + 55, + 467, + 296, + 526 + ], + "type": "text", + "content": "Image-based supervision. WSODet [23], aims to generate pseudo-RBoxes without explicit localization supervision, thus encountering significant limitations when relying solely on image labels, especially for the scenes with numerous and diverse object types." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 536, + 295, + 691 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 536, + 295, + 691 + ], + "spans": [ + { + "bbox": [ + 55, + 536, + 295, + 691 + ], + "type": "text", + "content": "Point-based supervision. By leveraging one representative point at the center location as a label for each object, point label-based methods offer the advantage of being cost-effective [1, 2, 11, 16, 19, 44]. PointOBB [17] estimates angles from geometric relationships across original, rotated, and flipped views, and determines scales by analyzing proposal distributions between original and scaled input images. PointOBB-v2 [18] improves single point supervision by refining pseudo-label generation, leading to enhanced efficiency and accuracy. Point2RBox [47] employed fundamental patterns as priors to guide the regression of RBoxes. Point-based methods are cost-effective and straightforward, but still struggle with limited supervision." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 701, + 295, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 701, + 295, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 701, + 295, + 713 + ], + "type": "text", + "content": "HBox-based supervision. 
As annotating HBoxes is more" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 379, + 555, + 594 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 379, + 555, + 594 + ], + "spans": [ + { + "bbox": [ + 313, + 379, + 555, + 594 + ], + "type": "text", + "content": "straightforward than RBoxes, the HBox-supervised OOD has gained increasing attention in recent studies. H2RBox [42] utilized rotated views from the original view and provided self-supervision of object orientations without requiring the GT angles. H2RBox-v2 [48] expanded the use of geometric relationships between views by adding flipped views. These methods learn to predict RBoxes by converting minimum circumscribed HBoxes that encompass the predicted RBoxes to directly compare IoU with the GT HBoxes, thereby enabling HBox-supervised OOD. However, these methods only guarantee performance when trained with GT C-HBoxes that are derived from GT RBoxes. These methods struggle to learn precise OOD when being trained with GT T-HBoxes, because of the significant gap between the GT T-HBoxes and the HBoxes derived from predicted RBoxes. To address the issue, we propose an ABBS module to effectively handle both types of GTs (C-HBoxes and T-HBoxes)." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 314, + 604, + 370, + 615 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 604, + 370, + 615 + ], + "spans": [ + { + "bbox": [ + 314, + 604, + 370, + 615 + ], + "type": "text", + "content": "3. Method" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 623, + 412, + 635 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 623, + 412, + 635 + ], + "spans": [ + { + "bbox": [ + 313, + 623, + 412, + 635 + ], + "type": "text", + "content": "3.1. Overall Pipeline" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 641, + 555, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 641, + 555, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 641, + 555, + 713 + ], + "type": "text", + "content": "Weakly supervised OOD aims to predict RBoxes using less expensive annotations such as HBoxes. Existing methods such as H2RBox [42] and its improved version H2RBox-v2 [48] have laid the foundation for directly predicting RBoxes from HBoxes. Our proposed pipeline builds upon the H2RBox-v2 [48] framework to effectively enable weakly" + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "text", + "content": "8850" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 296, + 228 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 296, + 228 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 296, + 228 + ], + "type": "text", + "content": "supervised OOD from either C-HBoxes or T-Boxes. Figure 2 depicts the conceptual framework of our weakly supervised OOD method. 
Given an input image " + }, + { + "bbox": [ + 55, + 72, + 296, + 228 + ], + "type": "inline_equation", + "content": "\\mathrm{I}_{\\mathrm{ori}}" + }, + { + "bbox": [ + 55, + 72, + 296, + 228 + ], + "type": "text", + "content": " and its rotated and flipped versions " + }, + { + "bbox": [ + 55, + 72, + 296, + 228 + ], + "type": "inline_equation", + "content": "\\mathrm{I}_{\\mathrm{rot}}" + }, + { + "bbox": [ + 55, + 72, + 296, + 228 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 72, + 296, + 228 + ], + "type": "inline_equation", + "content": "\\mathrm{I}_{\\mathrm{flp}}" + }, + { + "bbox": [ + 55, + 72, + 296, + 228 + ], + "type": "text", + "content": ", our proposed pipeline obtains the RBox for each input view (" + }, + { + "bbox": [ + 55, + 72, + 296, + 228 + ], + "type": "inline_equation", + "content": "\\mathrm{I}_{\\mathrm{ori}}" + }, + { + "bbox": [ + 55, + 72, + 296, + 228 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 55, + 72, + 296, + 228 + ], + "type": "inline_equation", + "content": "\\mathrm{I}_{\\mathrm{rot}}" + }, + { + "bbox": [ + 55, + 72, + 296, + 228 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 55, + 72, + 296, + 228 + ], + "type": "inline_equation", + "content": "\\mathrm{I}_{\\mathrm{flp}}" + }, + { + "bbox": [ + 55, + 72, + 296, + 228 + ], + "type": "text", + "content": "), including the center position " + }, + { + "bbox": [ + 55, + 72, + 296, + 228 + ], + "type": "inline_equation", + "content": "(x,y)" + }, + { + "bbox": [ + 55, + 72, + 296, + 228 + ], + "type": "text", + "content": ", size " + }, + { + "bbox": [ + 55, + 72, + 296, + 228 + ], + "type": "inline_equation", + "content": "(w,h)" + }, + { + "bbox": [ + 55, + 72, + 296, + 228 + ], + "type": "text", + "content": ", angle " + }, + { + "bbox": [ + 55, + 72, + 296, + 228 + ], + "type": "inline_equation", + "content": "(\\theta)" + }, + { + "bbox": [ + 55, + 72, + 296, + 228 + ], + "type": "text", + "content": ", class scores " + }, + { + "bbox": [ + 55, + 72, + 296, + 228 + ], + "type": "inline_equation", + "content": "(p)" + }, + { + "bbox": [ + 55, + 72, + 296, + 228 + ], + "type": "text", + "content": ", and the center-ness " + }, + { + "bbox": [ + 55, + 72, + 296, + 228 + ], + "type": "inline_equation", + "content": "(cn)" + }, + { + "bbox": [ + 55, + 72, + 296, + 228 + ], + "type": "text", + "content": ". To classify each detected object, we follow FCOS [24] by supervising both the classification " + }, + { + "bbox": [ + 55, + 72, + 296, + 228 + ], + "type": "inline_equation", + "content": "(p)" + }, + { + "bbox": [ + 55, + 72, + 296, + 228 + ], + "type": "text", + "content": " and the center-ness " + }, + { + "bbox": [ + 55, + 72, + 296, + 228 + ], + "type": "inline_equation", + "content": "(cn)" + }, + { + "bbox": [ + 55, + 72, + 296, + 228 + ], + "type": "text", + "content": ". The angle " + }, + { + "bbox": [ + 55, + 72, + 296, + 228 + ], + "type": "inline_equation", + "content": "(\\theta)" + }, + { + "bbox": [ + 55, + 72, + 296, + 228 + ], + "type": "text", + "content": " prediction is obtained using the method proposed in PSC [46]. Our contribution mainly lies in the supervision for localization, consisting of two branches: a scale learning branch (SLB) and an orientation learning branch (OLB)."
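The head outputs listed above map naturally onto a small record type. The following is a minimal, hypothetical Python sketch of the per-proposal prediction described in Sec. 3.1; the class and field names are ours, not from the authors' code:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProposalPrediction:
    """Per-view head outputs of Sec. 3.1; one instance per proposal
    and per input view (I_ori, I_rot, or I_flp)."""
    x: float                 # center position (x, y)
    y: float
    w: float                 # size (w, h)
    h: float
    theta: float             # angle, decoded with a PSC-style angle coder
    p: List[float] = field(default_factory=list)  # class scores (FCOS-style)
    cn: float = 0.0          # center-ness (FCOS-style)
```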
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 233, + 296, + 402 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 233, + 296, + 402 + ], + "spans": [ + { + "bbox": [ + 55, + 233, + 296, + 402 + ], + "type": "text", + "content": "In the SLB, the adaptive bounding box scaling (ABBS) module addresses the relationships between GT HBoxes and predicted RBoxes. This ABBS module provides a proper minimum circumscribed rectangle for accurately predicted RBoxes by adaptively scaling the HBoxes based on a predefined scale range. The OLB guides accurate prediction of object orientation by utilizing three input views " + }, + { + "bbox": [ + 55, + 233, + 296, + 402 + ], + "type": "inline_equation", + "content": "(\\mathrm{I}_{\\mathrm{ori}},\\mathrm{I}_{\\mathrm{rot}},\\mathrm{I}_{\\mathrm{flp}})" + }, + { + "bbox": [ + 55, + 233, + 296, + 402 + ], + "type": "text", + "content": ", following H2RBox-v2 [48]. Additionally, the OLB utilizes these orientation predictions for our symmetric prior angle (SPA) loss, which leverages the inherent left-right symmetry of objects in aerial images. The SPA loss further adjusts the orientations of the predicted RBoxes to align with the orientations of symmetric objects such as airplanes, ships, and ground track fields." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 407, + 266, + 420 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 407, + 266, + 420 + ], + "spans": [ + { + "bbox": [ + 55, + 407, + 266, + 420 + ], + "type": "text", + "content": "3.2. Adaptive Bounding Box Scaling Module" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 426, + 296, + 509 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 426, + 296, + 509 + ], + "spans": [ + { + "bbox": [ + 55, + 426, + 296, + 509 + ], + "type": "text", + "content": "In Fig. 2, 'Scale Learning Branch' illustrates the conceptual process of our adaptive bounding box scaling (ABBS) module. In HBox-supervised OOD learning, the predicted RBoxes " + }, + { + "bbox": [ + 55, + 426, + 296, + 509 + ], + "type": "inline_equation", + "content": "(RB^{\\mathrm{pred}})" + }, + { + "bbox": [ + 55, + 426, + 296, + 509 + ], + "type": "text", + "content": " must be compared with the GT HBoxes " + }, + { + "bbox": [ + 55, + 426, + 296, + 509 + ], + "type": "inline_equation", + "content": "(HB^{\\mathrm{gt}})" + }, + { + "bbox": [ + 55, + 426, + 296, + 509 + ], + "type": "text", + "content": ". 
Since they cannot be directly compared, " + }, + { + "bbox": [ + 55, + 426, + 296, + 509 + ], + "type": "inline_equation", + "content": "RB^{\\mathrm{pred}}" + }, + { + "bbox": [ + 55, + 426, + 296, + 509 + ], + "type": "text", + "content": " is first converted to " + }, + { + "bbox": [ + 55, + 426, + 296, + 509 + ], + "type": "inline_equation", + "content": "HB^{\\mathrm{pred}}" + }, + { + "bbox": [ + 55, + 426, + 296, + 509 + ], + "type": "text", + "content": ", defined as the minimum circumscribed HBox of " + }, + { + "bbox": [ + 55, + 426, + 296, + 509 + ], + "type": "inline_equation", + "content": "RB^{\\mathrm{pred}}" + }, + { + "bbox": [ + 55, + 426, + 296, + 509 + ], + "type": "text", + "content": " as:" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 125, + 512, + 294, + 526 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 125, + 512, + 294, + 526 + ], + "spans": [ + { + "bbox": [ + 125, + 512, + 294, + 526 + ], + "type": "interline_equation", + "content": "HB^{\\mathrm{pred}} = MCR(RB^{\\mathrm{pred}}), \\tag{1}", + "image_path": "3f57982b49a1b2c8e034871eca96e808208e1f3db5cd964b4ec7991ce2a16777.jpg" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 529, + 296, + 566 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 529, + 296, + 566 + ], + "spans": [ + { + "bbox": [ + 55, + 529, + 296, + 566 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 55, + 529, + 296, + 566 + ], + "type": "inline_equation", + "content": "MCR(\\cdot)" + }, + { + "bbox": [ + 55, + 529, + 296, + 566 + ], + "type": "text", + "content": " is an operator converting " + }, + { + "bbox": [ + 55, + 529, + 296, + 566 + ], + "type": "inline_equation", + "content": "RB" + }, + { + "bbox": [ + 55, + 529, + 296, + 566 + ], + "type": "text", + "content": " to the minimum circumscribed " + }, + { + "bbox": [ + 55, + 529, + 296, + 566 + ], + "type": "inline_equation", + "content": "HB" + }, + { + "bbox": [ + 55, + 529, + 296, + 566 + ], + "type": "text", + "content": ", allowing " + }, + { + "bbox": [ + 55, + 529, + 296, + 566 + ], + "type": "inline_equation", + "content": "HB^{\\mathrm{pred}}" + }, + { + "bbox": [ + 55, + 529, + 296, + 566 + ], + "type": "text", + "content": " to be compared with " + }, + { + "bbox": [ + 55, + 529, + 296, + 566 + ], + "type": "inline_equation", + "content": "HB^{\\mathrm{gt}}" + }, + { + "bbox": [ + 55, + 529, + 296, + 566 + ], + "type": "text", + "content": ". " + }, + { + "bbox": [ + 55, + 529, + 296, + 566 + ], + "type": "inline_equation", + "content": "HB^{\\mathrm{pred}}" + }, + { + "bbox": [ + 55, + 529, + 296, + 566 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 529, + 296, + 566 + ], + "type": "inline_equation", + "content": "RB^{\\mathrm{pred}}" + }, + { + "bbox": [ + 55, + 529, + 296, + 566 + ], + "type": "text", + "content": " are given by:" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 94, + 570, + 296, + 592 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 94, + 570, + 296, + 592 + ], + "spans": [ + { + "bbox": [ + 94, + 570, + 296, + 592 + ], + "type": "interline_equation", + "content": "RB^{\\mathrm{pred}} = \\left[ x_{rb}^{\\mathrm{pred}}, y_{rb}^{\\mathrm{pred}}, w_{rb}^{\\mathrm{pred}}, h_{rb}^{\\mathrm{pred}}, \\theta_{rb}^{\\mathrm{pred}} \\right], \\tag{2}", + "image_path": "e842f1d56f7bbabe593c6d03900c1bd6dd81b9d02e11d7f7aef7cb9b6dd49653.jpg" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 96, + 589, + 234, + 604 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 96, + 589, + 234, + 604 + ], + "spans": [ + { + "bbox": [ + 96, + 589, + 234, + 604 + ], + "type": "interline_equation", + "content": "HB^{\\mathrm{pred}} = \\left[ x_{hb}^{\\mathrm{pred}}, y_{hb}^{\\mathrm{pred}}, w_{hb}^{\\mathrm{pred}}, h_{hb}^{\\mathrm{pred}} \\right],", + "image_path": "eeb6b991b29dc1b95ca2b8fd4a2251ace229e3a29210c7f5395679ffeac7ce6a.jpg" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 607, + 296, + 645 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 607, + 296, + 645 + ], + "spans": [ + { + "bbox": [ + 55, + 607, + 296, + 645 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 55, + 607, + 296, + 645 + ], + "type": "inline_equation", + "content": "(x_{rb}^{\\mathrm{pred}},y_{rb}^{\\mathrm{pred}})" + }, + { + "bbox": [ + 55, + 607, + 296, + 645 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 607, + 296, + 645 + ], + "type": "inline_equation", + "content": "(x_{hb}^{\\mathrm{pred}},y_{hb}^{\\mathrm{pred}})" + }, + { + "bbox": [ + 55, + 607, + 296, + 645 + ], + "type": "text", + "content": " are the centers of " + }, + { + "bbox": [ + 55, + 607, + 296, + 645 + ], + "type": "inline_equation", + "content": "RB^{\\mathrm{pred}}" + }, + { + "bbox": [ + 55, + 607, + 296, + 645 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 607, + 296, + 645 + ], + "type": "inline_equation", + "content": "HB^{\\mathrm{pred}}" + }, + { + "bbox": [ + 55, + 607, + 296, + 645 + ], + "type": "text", + "content": ", respectively. The width " + }, + { + "bbox": [ + 55, + 607, + 296, + 645 + ], + "type": "inline_equation", + "content": "w" + }, + { + "bbox": [ + 55, + 607, + 296, + 645 + ], + "type": "text", + "content": " and height " + }, + { + "bbox": [ + 55, + 607, + 296, + 645 + ], + "type": "inline_equation", + "content": "h" + }, + { + "bbox": [ + 55, + 607, + 296, + 645 + ], + "type": "text", + "content": " of " + }, + { + "bbox": [ + 55, + 607, + 296, + 645 + ], + "type": "inline_equation", + "content": "HB^{\\mathrm{pred}}" + }, + { + "bbox": [ + 55, + 607, + 296, + 645 + ], + "type": "text", + "content": " can be computed as:" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 91, + 650, + 296, + 671 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 91, + 650, + 296, + 671 + ], + "spans": [ + { + "bbox": [ + 91, + 650, + 296, + 671 + ], + "type": "interline_equation", + "content": "w_{hb}^{\\mathrm{pred}} = w_{rb}^{\\mathrm{pred}} \\left| \\cos \\theta_{rb}^{\\mathrm{pred}} \\right| + h_{rb}^{\\mathrm{pred}} \\left| \\sin \\theta_{rb}^{\\mathrm{pred}} \\right|, \\tag{3}", + "image_path": "a2f8936fbcfb6915a87ae0ca02f95afe0df802a30d6f3d8c6f53d46150eaebb5.jpg" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 93, + 668, + 257, + 683 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 93, + 668, + 257, + 683 + ], + "spans": [ + { + "bbox": [ + 93, + 668, + 257, + 683 + ], + "type": "interline_equation", + "content": "h_{hb}^{\\mathrm{pred}} = w_{rb}^{\\mathrm{pred}} \\left| \\sin \\theta_{rb}^{\\mathrm{pred}} \\right| + h_{rb}^{\\mathrm{pred}} \\left| \\cos \\theta_{rb}^{\\mathrm{pred}} \\right|.", + "image_path": "55985f8eafa52f86b53e3a7d884e96b1b1d39d1769025c2222ef262f09ea3cbe.jpg" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 55, + 689, + 296, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 689, + 296, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 689, + 296, + 714 + ], + "type": "text", + "content": "If " + }, + { + "bbox": [ + 55, + 689, + 296, + 714 + ], + "type": "inline_equation", + "content": "RB^{\\mathrm{opt}}" + }, + { + "bbox": [ + 55, + 689, + 296, + 714 + ], + "type": "text", + "content": " is defined as the RBox that tightly surrounds the object boundary with the precise orientation, then we have" + } + ] + } + ], + "index": 11 + }, + { + "type": "image", + "bbox": [ + 338, + 70, + 396, + 142 + ], + "blocks": [ + { + "bbox": [ + 338, + 70, + 396, + 142 + ], + "lines": [ + { + "bbox": [ + 338, + 70, + 396, + 142 + ], + "spans": [ + { + "bbox": [ + 338, + 70, + 396, + 142 + ], + "type": "image", + "image_path": "0f0e3f67749295fd52092e18a0c6894c7b925e01576410d11a332bc47f5de4d9.jpg" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 362, + 144, + 373, + 153 + ], + "lines": [ + { + "bbox": [ + 362, + 144, + 373, + 153 + ], + "spans": [ + { + "bbox": [ + 362, + 144, + 373, + 153 + ], + "type": "text", + "content": "(a)" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_caption" + } + ], + "index": 12 + }, + { + "type": "image", + "bbox": [ + 399, + 70, + 454, + 142 + ], + "blocks": [ + { + "bbox": [ + 399, + 70, + 454, + 142 + ], + "lines": [ + { + "bbox": [ + 399, + 70, + 454, + 142 + ], + "spans": [ + { + "bbox": [ + 399, + 70, + 454, + 142 + ], + "type": "image", + "image_path": "325bb1fc626aaefc2c05d0140a6886eacafedfc82231735440732d56a2dee2c6.jpg" + } + ] + } + ], + "index": 14, + "angle": 0, + "type": 
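Eqs. (1)-(3) amount to a closed-form conversion from a predicted RBox to its minimum circumscribed HBox. A minimal NumPy sketch of MCR(·) under these equations (the function name mirrors the paper's operator; the code itself is ours):

```python
import numpy as np

def mcr(rbox):
    """Minimum circumscribed HBox of an RBox, per Eqs. (1)-(3).

    rbox: (x, y, w, h, theta) with theta in radians.
    Returns (x, y, w_hb, h_hb); the center point is unchanged.
    """
    x, y, w, h, theta = rbox
    c, s = abs(np.cos(theta)), abs(np.sin(theta))
    return (x, y, w * c + h * s, w * s + h * c)  # Eq. (3) for both sides

# A 10x10 square rotated by 45 degrees circumscribes to ~14.14 x 14.14.
print(mcr((0.0, 0.0, 10.0, 10.0, np.pi / 4)))
```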
"image_body" + }, + { + "bbox": [ + 421, + 144, + 432, + 152 + ], + "lines": [ + { + "bbox": [ + 421, + 144, + 432, + 152 + ], + "spans": [ + { + "bbox": [ + 421, + 144, + 432, + 152 + ], + "type": "text", + "content": "(b)" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_caption" + } + ], + "index": 14 + }, + { + "type": "image", + "bbox": [ + 457, + 70, + 529, + 142 + ], + "blocks": [ + { + "bbox": [ + 457, + 70, + 529, + 142 + ], + "lines": [ + { + "bbox": [ + 457, + 70, + 529, + 142 + ], + "spans": [ + { + "bbox": [ + 457, + 70, + 529, + 142 + ], + "type": "image", + "image_path": "5c549cac7a8d8f46a869cf8c9dfc6cfb6146911f9a57937bfa47698c6a67e05b.jpg" + } + ] + } + ], + "index": 16, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 488, + 144, + 498, + 152 + ], + "lines": [ + { + "bbox": [ + 488, + 144, + 498, + 152 + ], + "spans": [ + { + "bbox": [ + 488, + 144, + 498, + 152 + ], + "type": "text", + "content": "(c)" + } + ] + } + ], + "index": 17, + "angle": 0, + "type": "image_caption" + } + ], + "index": 16 + }, + { + "type": "image", + "bbox": [ + 339, + 154, + 397, + 213 + ], + "blocks": [ + { + "bbox": [ + 339, + 154, + 397, + 213 + ], + "lines": [ + { + "bbox": [ + 339, + 154, + 397, + 213 + ], + "spans": [ + { + "bbox": [ + 339, + 154, + 397, + 213 + ], + "type": "image", + "image_path": "11ae87346cdf7ff7e0c88c472005c62c18768baf3a886fa74174d5159e1b3c45.jpg" + } + ] + } + ], + "index": 18, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 364, + 216, + 375, + 225 + ], + "lines": [ + { + "bbox": [ + 364, + 216, + 375, + 225 + ], + "spans": [ + { + "bbox": [ + 364, + 216, + 375, + 225 + ], + "type": "text", + "content": "(d)" + } + ] + } + ], + "index": 19, + "angle": 0, + "type": "image_caption" + } + ], + "index": 18 + }, + { + "type": "image", + "bbox": [ + 399, + 154, + 467, + 213 + ], + "blocks": [ + { + "bbox": [ + 399, + 154, + 467, + 213 + ], + "lines": [ + { + "bbox": [ + 399, + 154, + 467, + 213 + ], + "spans": [ + { + "bbox": [ + 399, + 154, + 467, + 213 + ], + "type": "image", + "image_path": "c71e66550e76ebe42e0f7ba49bef5c0a389b45ee411cfcf5cca25d30fde80b8f.jpg" + } + ] + } + ], + "index": 20, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 427, + 216, + 438, + 225 + ], + "lines": [ + { + "bbox": [ + 427, + 216, + 438, + 225 + ], + "spans": [ + { + "bbox": [ + 427, + 216, + 438, + 225 + ], + "type": "text", + "content": "(e)" + } + ] + } + ], + "index": 21, + "angle": 0, + "type": "image_caption" + } + ], + "index": 20 + }, + { + "type": "image", + "bbox": [ + 468, + 154, + 528, + 213 + ], + "blocks": [ + { + "bbox": [ + 468, + 154, + 528, + 213 + ], + "lines": [ + { + "bbox": [ + 468, + 154, + 528, + 213 + ], + "spans": [ + { + "bbox": [ + 468, + 154, + 528, + 213 + ], + "type": "image", + "image_path": "d75f200c7a4f589524d78653290a80f31a6c6d2a6006a1a73ed76d29a5d60139.jpg" + } + ] + } + ], + "index": 22, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 494, + 216, + 504, + 225 + ], + "lines": [ + { + "bbox": [ + 494, + 216, + 504, + 225 + ], + "spans": [ + { + "bbox": [ + 494, + 216, + 504, + 225 + ], + "type": "text", + "content": "(f)" + } + ] + } + ], + "index": 23, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 313, + 226, + 555, + 293 + ], + "lines": [ + { + "bbox": [ + 313, + 226, + 555, + 293 + ], + "spans": [ + { + "bbox": [ + 313, + 226, + 555, + 293 + ], + "type": "text", + "content": "Figure 3. 
Analysis of scale adjustment function " + }, + { + "bbox": [ + 313, + 226, + 555, + 293 + ], + "type": "inline_equation", + "content": "(f(\\cdot))" + }, + { + "bbox": [ + 313, + 226, + 555, + 293 + ], + "type": "text", + "content": " based on the shape and angle of objects. (a) rectangular shape, (b) rounded rectangular shape, (c) complex shape, (d) horizontal orientation, (e) slightly tilted orientation, (f) diagonal orientation. The cyan solid box, green solid box and red dotted box represent GT T-HBox, RBox and adjusted GT HBox, respectively." + } + ] + } + ], + "index": 24, + "angle": 0, + "type": "image_caption" + } + ], + "index": 22 + }, + { + "bbox": [ + 313, + 305, + 555, + 461 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 305, + 555, + 461 + ], + "spans": [ + { + "bbox": [ + 313, + 305, + 555, + 461 + ], + "type": "inline_equation", + "content": "HB^{\\mathrm{opt}} = MCR(RB^{\\mathrm{opt}})" + }, + { + "bbox": [ + 313, + 305, + 555, + 461 + ], + "type": "text", + "content": " for the 'Predicted RBox Projection Process' block in the SLB of Fig. 2. When the size of " + }, + { + "bbox": [ + 313, + 305, + 555, + 461 + ], + "type": "inline_equation", + "content": "HB^{\\mathrm{gt}}" + }, + { + "bbox": [ + 313, + 305, + 555, + 461 + ], + "type": "text", + "content": " is larger or smaller than that of " + }, + { + "bbox": [ + 313, + 305, + 555, + 461 + ], + "type": "inline_equation", + "content": "HB^{\\mathrm{opt}}" + }, + { + "bbox": [ + 313, + 305, + 555, + 461 + ], + "type": "text", + "content": ", the model needs to adaptively adjust and find the optimal scale within a predefined range of scale variations. We propose an ABBS module that estimates " + }, + { + "bbox": [ + 313, + 305, + 555, + 461 + ], + "type": "inline_equation", + "content": "RB^{\\mathrm{opt}}" + }, + { + "bbox": [ + 313, + 305, + 555, + 461 + ], + "type": "text", + "content": " by adaptively adjusting the scale of " + }, + { + "bbox": [ + 313, + 305, + 555, + 461 + ], + "type": "inline_equation", + "content": "HB^{\\mathrm{gt}}" + }, + { + "bbox": [ + 313, + 305, + 555, + 461 + ], + "type": "text", + "content": ". Notably, even if " + }, + { + "bbox": [ + 313, + 305, + 555, + 461 + ], + "type": "inline_equation", + "content": "RB^{\\mathrm{pred}}" + }, + { + "bbox": [ + 313, + 305, + 555, + 461 + ], + "type": "text", + "content": " is accurately estimated, its " + }, + { + "bbox": [ + 313, + 305, + 555, + 461 + ], + "type": "inline_equation", + "content": "HB^{\\mathrm{pred}}" + }, + { + "bbox": [ + 313, + 305, + 555, + 461 + ], + "type": "text", + "content": " may not overlap well with " + }, + { + "bbox": [ + 313, + 305, + 555, + 461 + ], + "type": "inline_equation", + "content": "HB^{\\mathrm{gt}}" + }, + { + "bbox": [ + 313, + 305, + 555, + 461 + ], + "type": "text", + "content": ", leading to a low Intersection over Union (IoU) value. 
Enforcing " + }, + { + "bbox": [ + 313, + 305, + 555, + 461 + ], + "type": "inline_equation", + "content": "HB^{\\mathrm{pred}}" + }, + { + "bbox": [ + 313, + 305, + 555, + 461 + ], + "type": "text", + "content": " to match " + }, + { + "bbox": [ + 313, + 305, + 555, + 461 + ], + "type": "inline_equation", + "content": "HB^{\\mathrm{gt}}" + }, + { + "bbox": [ + 313, + 305, + 555, + 461 + ], + "type": "text", + "content": " can cause a misalignment with " + }, + { + "bbox": [ + 313, + 305, + 555, + 461 + ], + "type": "inline_equation", + "content": "RB^{\\mathrm{pred}}" + }, + { + "bbox": [ + 313, + 305, + 555, + 461 + ], + "type": "text", + "content": " because " + }, + { + "bbox": [ + 313, + 305, + 555, + 461 + ], + "type": "inline_equation", + "content": "HB^{\\mathrm{gt}}" + }, + { + "bbox": [ + 313, + 305, + 555, + 461 + ], + "type": "text", + "content": " may not be ideal in estimating " + }, + { + "bbox": [ + 313, + 305, + 555, + 461 + ], + "type": "inline_equation", + "content": "RB^{\\mathrm{opt}}" + }, + { + "bbox": [ + 313, + 305, + 555, + 461 + ], + "type": "text", + "content": ". To address this, our ABBS module adaptively scales and adjusts " + }, + { + "bbox": [ + 313, + 305, + 555, + 461 + ], + "type": "inline_equation", + "content": "HB^{\\mathrm{gt}}" + }, + { + "bbox": [ + 313, + 305, + 555, + 461 + ], + "type": "text", + "content": " in the context of " + }, + { + "bbox": [ + 313, + 305, + 555, + 461 + ], + "type": "inline_equation", + "content": "RB^{\\mathrm{pred}}" + }, + { + "bbox": [ + 313, + 305, + 555, + 461 + ], + "type": "text", + "content": ", rather than forcing " + }, + { + "bbox": [ + 313, + 305, + 555, + 461 + ], + "type": "inline_equation", + "content": "HB^{\\mathrm{pred}}" + }, + { + "bbox": [ + 313, + 305, + 555, + 461 + ], + "type": "text", + "content": " to match " + }, + { + "bbox": [ + 313, + 305, + 555, + 461 + ], + "type": "inline_equation", + "content": "HB^{\\mathrm{gt}}" + }, + { + "bbox": [ + 313, + 305, + 555, + 461 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 313, + 468, + 555, + 504 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 468, + 555, + 504 + ], + "spans": [ + { + "bbox": [ + 313, + 468, + 555, + 504 + ], + "type": "text", + "content": "For the detailed explanation of our ABBS module, we first define a set of scaled versions of " + }, + { + "bbox": [ + 313, + 468, + 555, + 504 + ], + "type": "inline_equation", + "content": "HB^{\\mathrm{gt}}" + }, + { + "bbox": [ + 313, + 468, + 555, + 504 + ], + "type": "text", + "content": " for the 'Scaled GT HBoxes Generation Process' in the SLB of Fig. 
2 as:" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 361, + 511, + 553, + 528 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 361, + 511, + 553, + 528 + ], + "spans": [ + { + "bbox": [ + 361, + 511, + 553, + 528 + ], + "type": "interline_equation", + "content": "\\mathbf {H B} _ {\\mathrm {s}} ^ {\\mathrm {g t}} = \\left\\{H B _ {\\mathrm {s}, 1} ^ {\\mathrm {g t}}, H B _ {\\mathrm {s}, 2} ^ {\\mathrm {g t}}, \\dots , H B _ {\\mathrm {s}, \\mathrm {K}} ^ {\\mathrm {g t}} \\right\\}, \\tag {4}", + "image_path": "cf44a17c0214cf2f74c56e77b151cefe0d08fb2f8484430f20be3c5491a13211.jpg" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 313, + 536, + 555, + 624 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 536, + 555, + 624 + ], + "spans": [ + { + "bbox": [ + 313, + 536, + 555, + 624 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 536, + 555, + 624 + ], + "type": "inline_equation", + "content": "HB_{s,k}^{\\mathrm{gt}}" + }, + { + "bbox": [ + 313, + 536, + 555, + 624 + ], + "type": "text", + "content": " is the k-th scaled version of " + }, + { + "bbox": [ + 313, + 536, + 555, + 624 + ], + "type": "inline_equation", + "content": "HB^{\\mathrm{gt}}" + }, + { + "bbox": [ + 313, + 536, + 555, + 624 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 536, + 555, + 624 + ], + "type": "inline_equation", + "content": "\\mathbf{K}" + }, + { + "bbox": [ + 313, + 536, + 555, + 624 + ], + "type": "text", + "content": " is the total number of scaled variations of " + }, + { + "bbox": [ + 313, + 536, + 555, + 624 + ], + "type": "inline_equation", + "content": "HB^{\\mathrm{gt}}" + }, + { + "bbox": [ + 313, + 536, + 555, + 624 + ], + "type": "text", + "content": ". " + }, + { + "bbox": [ + 313, + 536, + 555, + 624 + ], + "type": "inline_equation", + "content": "\\mathbf{HB}_{\\mathbf{S}}^{\\mathrm{gt}}" + }, + { + "bbox": [ + 313, + 536, + 555, + 624 + ], + "type": "text", + "content": " is determined as the combinations of angle-adjusted width and height scale factors, " + }, + { + "bbox": [ + 313, + 536, + 555, + 624 + ], + "type": "inline_equation", + "content": "\\{s_{adj,i}^{w}\\}_{i=1}^{N_s}" + }, + { + "bbox": [ + 313, + 536, + 555, + 624 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 536, + 555, + 624 + ], + "type": "inline_equation", + "content": "\\{s_{adj,j}^{h}\\}_{j=1}^{N_s}" + }, + { + "bbox": [ + 313, + 536, + 555, + 624 + ], + "type": "text", + "content": ", that are transformed from basic width and height scale factors, " + }, + { + "bbox": [ + 313, + 536, + 555, + 624 + ], + "type": "inline_equation", + "content": "\\{s_i^{w}\\}_{i=1}^{N_s}" + }, + { + "bbox": [ + 313, + 536, + 555, + 624 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 536, + 555, + 624 + ], + "type": "inline_equation", + "content": "\\{s_j^{h}\\}_{j=1}^{N_s}" + }, + { + "bbox": [ + 313, + 536, + 555, + 624 + ], + "type": "text", + "content": " by considering angle prediction. 
Basic scale factors are uniformly spaced in a predefined scale range as:" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 327, + 632, + 553, + 647 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 327, + 632, + 553, + 647 + ], + "spans": [ + { + "bbox": [ + 327, + 632, + 553, + 647 + ], + "type": "interline_equation", + "content": "S _ {w} = \\left\\{s _ {1} ^ {w}, s _ {2} ^ {w}, \\dots , s _ {N _ {s}} ^ {w} \\right\\}, S _ {h} = \\left\\{s _ {1} ^ {h}, s _ {2} ^ {h}, \\dots , s _ {N _ {s}} ^ {h} \\right\\}, \\tag {5}", + "image_path": "b2fe0aaf7f20b89945f1c540a3b8c4598b80de28db9eb412ea2f1205d9260930.jpg" + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 313, + 657, + 553, + 678 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 657, + 553, + 678 + ], + "spans": [ + { + "bbox": [ + 313, + 657, + 553, + 678 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 657, + 553, + 678 + ], + "type": "inline_equation", + "content": "S_w" + }, + { + "bbox": [ + 313, + 657, + 553, + 678 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 657, + 553, + 678 + ], + "type": "inline_equation", + "content": "S_h" + }, + { + "bbox": [ + 313, + 657, + 553, + 678 + ], + "type": "text", + "content": " are the sets of basic width and height scale factors, respectively. " + }, + { + "bbox": [ + 313, + 657, + 553, + 678 + ], + "type": "inline_equation", + "content": "s_i^w" + }, + { + "bbox": [ + 313, + 657, + 553, + 678 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 657, + 553, + 678 + ], + "type": "inline_equation", + "content": "s_i^h" + }, + { + "bbox": [ + 313, + 657, + 553, + 678 + ], + "type": "text", + "content": " are calculated as:" + } + ] + } + ], + "index": 30 + }, + { + "bbox": [ + 347, + 685, + 520, + 699 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 347, + 685, + 520, + 699 + ], + "spans": [ + { + "bbox": [ + 347, + 685, + 520, + 699 + ], + "type": "interline_equation", + "content": "s _ {i} ^ {w} = s _ {1} ^ {w} + \\left(s _ {N _ {s}} ^ {w} - s _ {1} ^ {w}\\right) / \\left(N _ {s} - 1\\right) \\cdot (i - 1),", + "image_path": "04b9db50f5a0b83051ff03a109277b9cf524a1cab74af5cbecc271a8bf108c16.jpg" + } + ] + } + ], + "index": 31 + }, + { + "bbox": [ + 348, + 701, + 553, + 716 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 348, + 701, + 553, + 716 + ], + "spans": [ + { + "bbox": [ + 348, + 701, + 553, + 716 + ], + "type": "interline_equation", + "content": "s _ {j} ^ {h} = s _ {1} ^ {h} + \\left(s _ {N _ {s}} ^ {h} - s _ {1} ^ {h}\\right) / \\left(N _ {s} - 1\\right) \\cdot (j - 1), \\tag {6}", + "image_path": "c22115d1fe03e6d1d42eae559a454f15cbf145e310f8bd1e481a1fe5557d8948.jpg" + } + ] + } + ], + "index": 32 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 295, + 748, + 314, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 748, + 314, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 748, + 314, + 757 + ], + "type": "text", + "content": "8851" + } + ] + } + ], + "index": 33 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 296, + 176 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 296, + 176 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 296, + 176 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 55, + 72, + 296, + 176 + ], + "type": 
"inline_equation", + "content": "s_{N_s}^w = s_{N_s}^h" + }, + { + "bbox": [ + 55, + 72, + 296, + 176 + ], + "type": "text", + "content": " is the predefined largest basic scale factor for both width and height of " + }, + { + "bbox": [ + 55, + 72, + 296, + 176 + ], + "type": "inline_equation", + "content": "HB^{\\mathrm{gt}}" + }, + { + "bbox": [ + 55, + 72, + 296, + 176 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 55, + 72, + 296, + 176 + ], + "type": "inline_equation", + "content": "N_{s}" + }, + { + "bbox": [ + 55, + 72, + 296, + 176 + ], + "type": "text", + "content": " is the number of uniform quantization for the both range " + }, + { + "bbox": [ + 55, + 72, + 296, + 176 + ], + "type": "inline_equation", + "content": "[s_1^w..s_{N_s}^w]" + }, + { + "bbox": [ + 55, + 72, + 296, + 176 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 72, + 296, + 176 + ], + "type": "inline_equation", + "content": "[s_1^h..s_{N_s}^h]" + }, + { + "bbox": [ + 55, + 72, + 296, + 176 + ], + "type": "text", + "content": ". In order to generate " + }, + { + "bbox": [ + 55, + 72, + 296, + 176 + ], + "type": "inline_equation", + "content": "HB_{s,i}^{\\mathrm{gt}}" + }, + { + "bbox": [ + 55, + 72, + 296, + 176 + ], + "type": "text", + "content": ", we transform the basic width and height scale factors, " + }, + { + "bbox": [ + 55, + 72, + 296, + 176 + ], + "type": "inline_equation", + "content": "\\{s_i^w\\}_{i=1}^{N_s}" + }, + { + "bbox": [ + 55, + 72, + 296, + 176 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 72, + 296, + 176 + ], + "type": "inline_equation", + "content": "\\{s_j^h\\}_{j=1}^{N_s}" + }, + { + "bbox": [ + 55, + 72, + 296, + 176 + ], + "type": "text", + "content": ", into angle-adjusted width and height scale factors, " + }, + { + "bbox": [ + 55, + 72, + 296, + 176 + ], + "type": "inline_equation", + "content": "\\{s_{adj,i}^w\\}_{i=1}^{N_s}" + }, + { + "bbox": [ + 55, + 72, + 296, + 176 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 72, + 296, + 176 + ], + "type": "inline_equation", + "content": "\\{s_{adj,j}^h\\}_{j=1}^{N_s}" + }, + { + "bbox": [ + 55, + 72, + 296, + 176 + ], + "type": "text", + "content": ", using the predicted angle " + }, + { + "bbox": [ + 55, + 72, + 296, + 176 + ], + "type": "inline_equation", + "content": "\\theta^{\\mathrm{pred}}" + }, + { + "bbox": [ + 55, + 72, + 296, + 176 + ], + "type": "text", + "content": " through the scale adjustment function " + }, + { + "bbox": [ + 55, + 72, + 296, + 176 + ], + "type": "inline_equation", + "content": "f" + }, + { + "bbox": [ + 55, + 72, + 296, + 176 + ], + "type": "text", + "content": ":" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 93, + 178, + 295, + 195 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 93, + 178, + 295, + 195 + ], + "spans": [ + { + "bbox": [ + 93, + 178, + 295, + 195 + ], + "type": "interline_equation", + "content": "s _ {a d j, i} ^ {w} = f \\left(\\theta_ {r b} ^ {\\text {p r e d}}, s _ {i} ^ {w}\\right), s _ {a d j, j} ^ {h} = f \\left(\\theta_ {r b} ^ {\\text {p r e d}}, s _ {j} ^ {h}\\right). 
\\tag {7}", + "image_path": "503acbaf15ee701787a21cee43a8a8ac9ad623e0179a494eadcc69a852bb3c91.jpg" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 201, + 296, + 426 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 201, + 296, + 426 + ], + "spans": [ + { + "bbox": [ + 56, + 201, + 296, + 426 + ], + "type": "text", + "content": "To define " + }, + { + "bbox": [ + 56, + 201, + 296, + 426 + ], + "type": "inline_equation", + "content": "f(\\cdot)" + }, + { + "bbox": [ + 56, + 201, + 296, + 426 + ], + "type": "text", + "content": ", it's essential to consider the object types and rotation angles. Fig. 3 shows the effect of scale adjustments on T-HBoxes for three object types: (i) For rectangular objects like tennis courts (Fig. 3-(a)), the adjusted T-HBox (red dotted box) aligns precisely with the GT T-HBox (cyan solid box) and tightly circumscribes the optimal RBox (green solid box); (ii) For rounded rectangular objects (Fig. 3-(b)), the optimal RBox slightly exceeds the GT T-HBox; (iii) Complex shapes like airplanes (Fig. 3-(c)) show a larger discrepancy, with parts of the optimal RBox lying outside the GT T-HBox. Furthermore, scale adjustments also depend on rotation angles: (i) Fig. 3-(d) for a vertically (or horizontally) aligned airplane, the GT T-HBox and optimal RBox are identical; (ii) For Fig. 3-(e) with a small rotation angle, they differ slightly; (iii) For Fig. 3-(f) with a larger angle, the difference is more pronounced. Therefore, to take the object's shape types and orientation degrees into account for the scale adjustment for the widths and heights of T-HBoxes, " + }, + { + "bbox": [ + 56, + 201, + 296, + 426 + ], + "type": "inline_equation", + "content": "f(\\cdot)" + }, + { + "bbox": [ + 56, + 201, + 296, + 426 + ], + "type": "text", + "content": " in Eq. 7 is defined as:" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 60, + 430, + 295, + 463 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 60, + 430, + 295, + 463 + ], + "spans": [ + { + "bbox": [ + 60, + 430, + 295, + 463 + ], + "type": "interline_equation", + "content": "f (\\theta , s) = \\left\\{ \\begin{array}{l l} \\frac {4}{\\pi} (s - 1) \\cdot \\theta + 1, & \\text {i f} 0 \\leq \\theta < \\frac {\\pi}{4}, \\\\ \\frac {4}{\\pi} (1 - s) \\cdot \\theta + (2 s - 1), & \\text {i f} \\frac {\\pi}{4} \\leq \\theta < \\frac {\\pi}{2}, \\end{array} \\right. \\tag {8}", + "image_path": "720db26f35722853057a78e03177a7eb54ce93a5d159edd8a198ed06e1a7114b.jpg" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 469, + 295, + 506 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 469, + 295, + 506 + ], + "spans": [ + { + "bbox": [ + 55, + 469, + 295, + 506 + ], + "type": "text", + "content": "where the angle range is set to " + }, + { + "bbox": [ + 55, + 469, + 295, + 506 + ], + "type": "inline_equation", + "content": "\\theta \\in [0,\\pi /2)" + }, + { + "bbox": [ + 55, + 469, + 295, + 506 + ], + "type": "text", + "content": " due to the periodicity of the angle. According to " + }, + { + "bbox": [ + 55, + 469, + 295, + 506 + ], + "type": "inline_equation", + "content": "f(\\cdot)" + }, + { + "bbox": [ + 55, + 469, + 295, + 506 + ], + "type": "text", + "content": " and Eq. 
7, " + }, + { + "bbox": [ + 55, + 469, + 295, + 506 + ], + "type": "inline_equation", + "content": "HB_{s,k}^{\\mathrm{gt}}" + }, + { + "bbox": [ + 55, + 469, + 295, + 506 + ], + "type": "text", + "content": " in " + }, + { + "bbox": [ + 55, + 469, + 295, + 506 + ], + "type": "inline_equation", + "content": "\\mathbf{HB}_s^{\\mathrm{gt}}" + }, + { + "bbox": [ + 55, + 469, + 295, + 506 + ], + "type": "text", + "content": " can be expressed as:" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 97, + 514, + 295, + 529 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 97, + 514, + 295, + 529 + ], + "spans": [ + { + "bbox": [ + 97, + 514, + 295, + 529 + ], + "type": "interline_equation", + "content": "H B _ {s, k} ^ {\\mathrm {g t}} = \\left[ x ^ {\\mathrm {g t}}, y ^ {\\mathrm {g t}}, w ^ {\\mathrm {g t}} \\cdot s _ {a d j, i} ^ {w}, h ^ {\\mathrm {g t}} \\cdot s _ {a d j, j} ^ {h} \\right], \\tag {9}", + "image_path": "3ae4f67d1ddf49741875c4015433776ff079f47fc5ea1079964f5db84994ac34.jpg" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 531, + 296, + 591 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 531, + 296, + 591 + ], + "spans": [ + { + "bbox": [ + 55, + 531, + 296, + 591 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 55, + 531, + 296, + 591 + ], + "type": "inline_equation", + "content": "(x^{\\mathrm{gt}},y^{\\mathrm{gt}})" + }, + { + "bbox": [ + 55, + 531, + 296, + 591 + ], + "type": "text", + "content": " is the center point, and " + }, + { + "bbox": [ + 55, + 531, + 296, + 591 + ], + "type": "inline_equation", + "content": "w^{\\mathrm{gt}}" + }, + { + "bbox": [ + 55, + 531, + 296, + 591 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 531, + 296, + 591 + ], + "type": "inline_equation", + "content": "h^{\\mathrm{gt}}" + }, + { + "bbox": [ + 55, + 531, + 296, + 591 + ], + "type": "text", + "content": " are width and height of " + }, + { + "bbox": [ + 55, + 531, + 296, + 591 + ], + "type": "inline_equation", + "content": "HB^{\\mathrm{gt}}" + }, + { + "bbox": [ + 55, + 531, + 296, + 591 + ], + "type": "text", + "content": ". As shown in the 'IoU Calculation' and 'Optimal Scale Learning' blocks in the SLB of Fig. 
2, " + }, + { + "bbox": [ + 55, + 531, + 296, + 591 + ], + "type": "inline_equation", + "content": "HB^{\\mathrm{opt}}" + }, + { + "bbox": [ + 55, + 531, + 296, + 591 + ], + "type": "text", + "content": " among " + }, + { + "bbox": [ + 55, + 531, + 296, + 591 + ], + "type": "inline_equation", + "content": "\\{HB_{s,k}^{\\mathrm{gt}}\\}_{k=1}^{K}" + }, + { + "bbox": [ + 55, + 531, + 296, + 591 + ], + "type": "text", + "content": " can be determined which minimizes the IoU loss for all proposals by an ABBS loss as:" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 61, + 599, + 295, + 638 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 599, + 295, + 638 + ], + "spans": [ + { + "bbox": [ + 61, + 599, + 295, + 638 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {\\mathrm {a s}} = \\frac {1}{N _ {p}} \\sum_ {l = 1} ^ {N _ {p}} \\min _ {s _ {i} ^ {w} \\in S _ {w}, \\atop s _ {j} ^ {h} \\in S _ {h}} \\mathcal {L} _ {\\mathrm {I o U}} \\left(H B _ {l} ^ {\\text {p r e d}}, H B _ {s, k} ^ {\\text {g t}, l} \\left(s _ {i} ^ {w}, s _ {j} ^ {h}\\right)\\right), \\tag {10}", + "image_path": "131e6d0efc32fbbd1aa324fe56b2aca67f967c6f27dc018711c69ecc7777c3ea.jpg" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 637, + 296, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 637, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 637, + 296, + 713 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 55, + 637, + 296, + 713 + ], + "type": "inline_equation", + "content": "N_{p}" + }, + { + "bbox": [ + 55, + 637, + 296, + 713 + ], + "type": "text", + "content": " is the total number of proposals for input I. " + }, + { + "bbox": [ + 55, + 637, + 296, + 713 + ], + "type": "inline_equation", + "content": "HB_{l}^{\\mathrm{pred}}" + }, + { + "bbox": [ + 55, + 637, + 296, + 713 + ], + "type": "text", + "content": " is " + }, + { + "bbox": [ + 55, + 637, + 296, + 713 + ], + "type": "inline_equation", + "content": "HB^{\\mathrm{pred}}" + }, + { + "bbox": [ + 55, + 637, + 296, + 713 + ], + "type": "text", + "content": " for " + }, + { + "bbox": [ + 55, + 637, + 296, + 713 + ], + "type": "inline_equation", + "content": "l" + }, + { + "bbox": [ + 55, + 637, + 296, + 713 + ], + "type": "text", + "content": "-th proposal, and " + }, + { + "bbox": [ + 55, + 637, + 296, + 713 + ], + "type": "inline_equation", + "content": "HB_{s,k}^{\\mathrm{gt},l}(s_i^w,s_j^h)" + }, + { + "bbox": [ + 55, + 637, + 296, + 713 + ], + "type": "text", + "content": " is " + }, + { + "bbox": [ + 55, + 637, + 296, + 713 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 55, + 637, + 296, + 713 + ], + "type": "text", + "content": "-th scaled " + }, + { + "bbox": [ + 55, + 637, + 296, + 713 + ], + "type": "inline_equation", + "content": "HB^{\\mathrm{gt}}" + }, + { + "bbox": [ + 55, + 637, + 296, + 713 + ], + "type": "text", + "content": ", as " + }, + { + "bbox": [ + 55, + 637, + 296, + 713 + ], + "type": "inline_equation", + "content": "HB_{s,k}^{\\mathrm{gt}}" + }, + { + "bbox": [ + 55, + 637, + 296, + 713 + ], + "type": "text", + "content": ", whose width and height are scaled for " + }, + { + "bbox": [ + 55, + 637, + 296, + 713 + ], + "type": "inline_equation", + "content": "l" + }, + { + "bbox": [ + 55, + 637, + 296, + 713 + ], + "type": "text", + "content": "-th proposal according to Eq. 7 to Eq. 9. 
Finally, by adding a regularization term using the IoU loss between " + }, + { + "bbox": [ + 55, + 637, + 296, + 713 + ], + "type": "inline_equation", + "content": "HB^{\\mathrm{pred}}" + }, + { + "bbox": [ + 55, + 637, + 296, + 713 + ], + "type": "text", + "content": " and non-scaled " + }, + { + "bbox": [ + 55, + 637, + 296, + 713 + ], + "type": "inline_equation", + "content": "HB^{\\mathrm{gt}}" + }, + { + "bbox": [ + 55, + 637, + 296, + 713 + ], + "type": "text", + "content": ", we form the regression loss as:" + } + ] + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 316, + 80, + 385, + 140 + ], + "blocks": [ + { + "bbox": [ + 335, + 70, + 367, + 79 + ], + "lines": [ + { + "bbox": [ + 335, + 70, + 367, + 79 + ], + "spans": [ + { + "bbox": [ + 335, + 70, + 367, + 79 + ], + "type": "text", + "content": "Airplane" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 316, + 80, + 385, + 140 + ], + "lines": [ + { + "bbox": [ + 316, + 80, + 385, + 140 + ], + "spans": [ + { + "bbox": [ + 316, + 80, + 385, + 140 + ], + "type": "image", + "image_path": "20f8eef00637ab7600c03ee878ea6683fc07db4f9171afee28e7c45469de0d08.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + } + ], + "index": 10 + }, + { + "type": "image", + "bbox": [ + 386, + 80, + 441, + 140 + ], + "blocks": [ + { + "bbox": [ + 406, + 70, + 423, + 79 + ], + "lines": [ + { + "bbox": [ + 406, + 70, + 423, + 79 + ], + "spans": [ + { + "bbox": [ + 406, + 70, + 423, + 79 + ], + "type": "text", + "content": "Ship" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 386, + 80, + 441, + 140 + ], + "lines": [ + { + "bbox": [ + 386, + 80, + 441, + 140 + ], + "spans": [ + { + "bbox": [ + 386, + 80, + 441, + 140 + ], + "type": "image", + "image_path": "45fc781940dddcb8806fd987a3b42f61e4fd13f1476d76fda1771d4abaa461d1.jpg" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_body" + } + ], + "index": 12 + }, + { + "type": "image", + "bbox": [ + 444, + 80, + 476, + 140 + ], + "blocks": [ + { + "bbox": [ + 437, + 70, + 479, + 79 + ], + "lines": [ + { + "bbox": [ + 437, + 70, + 479, + 79 + ], + "spans": [ + { + "bbox": [ + 437, + 70, + 479, + 79 + ], + "type": "text", + "content": "Tennis court" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 444, + 80, + 476, + 140 + ], + "lines": [ + { + "bbox": [ + 444, + 80, + 476, + 140 + ], + "spans": [ + { + "bbox": [ + 444, + 80, + 476, + 140 + ], + "type": "image", + "image_path": "a2d1e58850e2cd6b5de90fa8fe55dff6b4f19935e977bc1b59efc803fba9d08a.jpg" + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_body" + } + ], + "index": 14 + }, + { + "type": "image", + "bbox": [ + 476, + 80, + 553, + 141 + ], + "blocks": [ + { + "bbox": [ + 501, + 70, + 529, + 79 + ], + "lines": [ + { + "bbox": [ + 501, + 70, + 529, + 79 + ], + "spans": [ + { + "bbox": [ + 501, + 70, + 529, + 79 + ], + "type": "text", + "content": "Vehicle" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 476, + 80, + 553, + 141 + ], + "lines": [ + { + "bbox": [ + 476, + 80, + 553, + 141 + ], + "spans": [ + { + "bbox": [ + 476, + 80, + 553, + 141 + ], + "type": "image", + "image_path": "ee66c9f2b21acbe6db6b232718f3f1b68a60af53fcaff428182869ab07575ae3.jpg" + } + ] + } + ], + "index": 16, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 313, + 148, + 555, + 182 + ], + "lines": [ + { + "bbox": [ + 313, + 148, + 555, + 182 + ], + "spans": [ + { + "bbox": [ + 313, + 148, + 555, + 182 + ], + "type": "text", + "content": "Figure 4. Examples of symmetric objects in aerial images. In the SPA loss, the " + }, + { + "bbox": [ + 313, + 148, + 555, + 182 + ], + "type": "inline_equation", + "content": "x,y" + }, + { + "bbox": [ + 313, + 148, + 555, + 182 + ], + "type": "text", + "content": " coordinates and angle " + }, + { + "bbox": [ + 313, + 148, + 555, + 182 + ], + "type": "inline_equation", + "content": "\\theta" + }, + { + "bbox": [ + 313, + 148, + 555, + 182 + ], + "type": "text", + "content": " are used to define the symmetry axis, splitting the object into two parts for comparison." + } + ] + } + ], + "index": 17, + "angle": 0, + "type": "image_caption" + } + ], + "index": 16 + }, + { + "bbox": [ + 319, + 193, + 555, + 213 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 319, + 193, + 555, + 213 + ], + "spans": [ + { + "bbox": [ + 319, + 193, + 555, + 213 + ], + "type": "interline_equation", + "content": "\\mathcal{L}_{\\mathrm{reg}} = \\mathcal{L}_{\\mathrm{as}} + \\alpha \\cdot \\left(1 / N_{p}\\right) \\sum_{l=1}^{N_{p}} \\mathcal{L}_{\\mathrm{IoU}} \\left(HB_{l}^{\\mathrm{pred}}, HB^{\\mathrm{gt},l}\\right), \\tag{11}", + "image_path": "c47278f4d6750e96bd2f5b1dc27f8a1c05512acfca4f50c837fa381ba4363141.jpg" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 314, + 222, + 553, + 234 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 222, + 553, + 234 + ], + "spans": [ + { + "bbox": [ + 314, + 222, + 553, + 234 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 314, + 222, + 553, + 234 + ], + "type": "inline_equation", + "content": "\\alpha" + }, + { + "bbox": [ + 314, + 222, + 553, + 234 + ], + "type": "text", + "content": " is a hyperparameter which is set to 0.01 by default." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 313, + 255, + 470, + 268 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 255, + 470, + 268 + ], + "spans": [ + { + "bbox": [ + 313, + 255, + 470, + 268 + ], + "type": "text", + "content": "3.3. Symmetric Prior Angle Loss" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 313, + 273, + 554, + 321 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 273, + 554, + 321 + ], + "spans": [ + { + "bbox": [ + 313, + 273, + 554, + 321 + ], + "type": "text", + "content": "In aerial images, objects such as airplanes, ships, tennis courts, and vehicles are often captured from top-down viewpoints, where most of these objects exhibit symmetries in their appearance, as shown in Fig. 4." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 313, + 328, + 555, + 495 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 328, + 555, + 495 + ], + "spans": [ + { + "bbox": [ + 313, + 328, + 555, + 495 + ], + "type": "text", + "content": "In the previous pipelines [42, 48], both the regression loss associated with the bounding box's center point, width, and height, and the angle loss for accurate angle prediction were trained in a balanced manner. However, they tended to inaccurately predict the angles while maximizing the bounding box's IoUs at the same time. This issue stems from the fact that, since the angles could not be directly supervised due to the absence of angle annotations, the angles were indirectly supervised from augmented views with rotations and flips. This is problematic because, when the difference between two predicted angles for the same object in the original view and its rotated view is equal to the rotation angle applied to the original view, the angle loss is zero although the predicted angles are inaccurate." + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 313, + 503, + 555, + 658 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 503, + 555, + 658 + ], + "spans": [ + { + "bbox": [ + 313, + 503, + 555, + 658 + ], + "type": "text", + "content": "To mitigate this predicted angle ambiguity, we propose a symmetric prior angle (SPA) loss. Based on the SPA loss, the model can be trained to predict precise angles by indirectly utilizing the object's symmetric characteristics. As shown in Fig. 4, the detected objects are symmetric about the symmetry axes (blue-dotted lines) passing through the center points of their RBoxes. That is, the pixel contents in the two parts divided by the symmetry axis of the RBox are compared for similarity, and their difference is used as supervision for our SPA loss. It is noted that our SPA loss utilizes only symmetric objects, incorporating the symmetry prior from GT class labels for proposals identified as symmetric, such as 'airplane,' 'ship,' 'vehicle,' and 'tennis court'." + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 313, + 665, + 554, + 712 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 665, + 554, + 712 + ], + "spans": [ + { + "bbox": [ + 313, + 665, + 554, + 712 + ], + "type": "text", + "content": "To avoid applying the SPA loss when " + }, + { + "bbox": [ + 313, + 665, + 554, + 712 + ], + "type": "inline_equation", + "content": "RB^{pred}" + }, + { + "bbox": [ + 313, + 665, + 554, + 712 + ], + "type": "text", + "content": " are inaccurate for the respective objects, we first check the fidelity scores of the proposals, and sample the Top- " + }, + { + "bbox": [ + 313, + 665, + 554, + 712 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 313, + 665, + 554, + 712 + ], + "type": "text", + "content": " proposals as supervision in the SPA loss as:" + } + ] + } + ], + "index": 24 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "text", + "content": "8852" + } + ] + } + ], + "index": 25 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "bbox": [ + 61, + 70, + 295, + 90 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 70, + 295, + 90 + ], + "spans": [ + { + "bbox": [ + 61, + 70, + 295, + 90 + ], + "type": "interline_equation", + "content": "\\left\\{ RB_{n}^{\\mathrm{pred}} \\right\\}_{n=1}^{N_{\\mathrm{spa}}} = \\operatorname{Top-k}\\left(\\left\\{ RB_{l}^{\\mathrm{pred}} \\right\\}_{l=1}^{N_{p}} \\mid \\mathrm{sc}_{\\mathrm{cls}}^{l} + \\mathrm{sc}_{\\mathrm{loc}}^{l}\\right) \\tag{12}", + "image_path": "04de425e8ce5b6a9691d6b4ccb35acdccbb92c391c71f82eaa7c01b00bb2636b.jpg" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 95, + 296, + 252 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 95, + 296, + 252 + ], + "spans": [ + { + "bbox": [ + 55, + 95, + 296, + 252 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 55, + 95, + 296, + 252 + ], + "type": "inline_equation", + "content": "\\mathrm{sc}_{\\mathrm{cls}}^{l}" + }, + { + "bbox": [ + 55, + 95, + 296, + 252 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 95, + 296, + 252 + ], + "type": "inline_equation", + "content": "\\mathrm{sc}_{\\mathrm{loc}}^{l}" + }, + { + "bbox": [ + 55, + 95, + 296, + 252 + ], + "type": "text", + "content": " are the classification and localization scores for the " + }, + { + "bbox": [ + 55, + 95, + 296, + 252 + ], + "type": "inline_equation", + "content": "l" + }, + { + "bbox": [ + 55, + 95, + 296, + 252 + ], + "type": "text", + "content": "-th " + }, + { + "bbox": [ + 55, + 95, + 296, + 252 + ], + "type": "inline_equation", + "content": "RB^{pred}" + }, + { + "bbox": [ + 55, + 95, + 296, + 252 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 55, + 95, + 296, + 252 + ], + "type": "inline_equation", + "content": "N_{p}" + }, + { + "bbox": [ + 55, + 95, + 296, + 252 + ], + "type": "text", + "content": " is the total number of proposals. From Eq. 12, the selected " + }, + { + "bbox": [ + 55, + 95, + 296, + 252 + ], + "type": "inline_equation", + "content": "N_{spa}" + }, + { + "bbox": [ + 55, + 95, + 296, + 252 + ], + "type": "text", + "content": " proposals are considered in the SPA loss by which the predicted angles of " + }, + { + "bbox": [ + 55, + 95, + 296, + 252 + ], + "type": "inline_equation", + "content": "RB^{pred}(\\theta_{rb}^{pred})" + }, + { + "bbox": [ + 55, + 95, + 296, + 252 + ], + "type": "text", + "content": " are enforced to align with the orientations of objects in the sense of maximizing the similarity, measured by the Structural Similarity Index (SSIM [28]), between the pixel contents in the two parts of each " + }, + { + "bbox": [ + 55, + 95, + 296, + 252 + ], + "type": "inline_equation", + "content": "RB^{pred}" + }, + { + "bbox": [ + 55, + 95, + 296, + 252 + ], + "type": "text", + "content": ". It should be noted that, even in cases where symmetric objects may not appear perfectly symmetric due to contextual factors like shadows or asymmetrical cargo arrangements, their symmetry is still maintained by the inherent structural symmetry between the two parts. 
Our SPA loss is defined as:" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 257, + 295, + 276 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 257, + 295, + 276 + ], + "spans": [ + { + "bbox": [ + 67, + 257, + 295, + 276 + ], + "type": "interline_equation", + "content": "\\mathcal{L}_{\\mathrm{SPA}} = \\left(1 / N_{\\mathrm{spa}}\\right) \\sum_{n=1}^{N_{\\mathrm{spa}}} \\left(1 - \\operatorname{SSIM}\\left(I_{p1}^{(n)}, I_{p2}^{(n)}\\right)\\right) \\tag{13}", + "image_path": "3d050ef6cebfdb5479c3a8f562399719fbb616f36fbad27851f233d674c69f68.jpg" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 276, + 296, + 335 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 276, + 296, + 335 + ], + "spans": [ + { + "bbox": [ + 55, + 276, + 296, + 335 + ], + "type": "text", + "content": "To remove the influence of object sizes in " + }, + { + "bbox": [ + 55, + 276, + 296, + 335 + ], + "type": "inline_equation", + "content": "L_{\\mathrm{SPA}}" + }, + { + "bbox": [ + 55, + 276, + 296, + 335 + ], + "type": "text", + "content": " computation, the proposals " + }, + { + "bbox": [ + 55, + 276, + 296, + 335 + ], + "type": "inline_equation", + "content": "(RB^{pred})" + }, + { + "bbox": [ + 55, + 276, + 296, + 335 + ], + "type": "text", + "content": " are projected onto a fixed-size grid of " + }, + { + "bbox": [ + 55, + 276, + 296, + 335 + ], + "type": "inline_equation", + "content": "50 \\times 50" + }, + { + "bbox": [ + 55, + 276, + 296, + 335 + ], + "type": "text", + "content": ". Then, the pixel content " + }, + { + "bbox": [ + 55, + 276, + 296, + 335 + ], + "type": "inline_equation", + "content": "(I_{p1})" + }, + { + "bbox": [ + 55, + 276, + 296, + 335 + ], + "type": "text", + "content": " in one part of the proposal's projection is compared with that " + }, + { + "bbox": [ + 55, + 276, + 296, + 335 + ], + "type": "inline_equation", + "content": "(I_{p2})" + }, + { + "bbox": [ + 55, + 276, + 296, + 335 + ], + "type": "text", + "content": " of the other part that is flipped before the comparison." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 340, + 149, + 352 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 340, + 149, + 352 + ], + "spans": [ + { + "bbox": [ + 55, + 340, + 149, + 352 + ], + "type": "text", + "content": "3.4. Loss Functions" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 358, + 296, + 430 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 358, + 296, + 430 + ], + "spans": [ + { + "bbox": [ + 55, + 358, + 296, + 430 + ], + "type": "text", + "content": "In the orientation learning branch (OLB), two angle-based losses [48], " + }, + { + "bbox": [ + 55, + 358, + 296, + 430 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{\\mathrm{rot}}" + }, + { + "bbox": [ + 55, + 358, + 296, + 430 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 358, + 296, + 430 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{\\mathrm{flp}}" + }, + { + "bbox": [ + 55, + 358, + 296, + 430 + ], + "type": "text", + "content": ", are adopted to leverage the consistency between the original, rotated, and flipped views of each object proposal. 
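Eqs. (12)-(13) can be sketched in a few lines. Here scikit-image's structural_similarity stands in for SSIM [28], the fixed 50x50 grid from the text is assumed to be computed upstream, and the function names are ours:

```python
import numpy as np
from skimage.metrics import structural_similarity

def top_k(proposals, sc_cls, sc_loc, k):
    """Eq. (12): keep the k proposals with the highest fidelity score
    sc_cls + sc_loc; only these contribute to the SPA loss."""
    order = sorted(range(len(proposals)),
                   key=lambda l: sc_cls[l] + sc_loc[l], reverse=True)
    return [proposals[l] for l in order[:k]]

def spa_loss(crops):
    """Eq. (13): mean of 1 - SSIM between the two halves of each crop.

    crops: RBox interiors already projected onto a fixed 50x50 grid so
    that object size does not affect the loss; each crop is split along
    the symmetry axis and one half is mirrored before the comparison.
    """
    total = 0.0
    for c in crops:
        p1 = c[:, :25]            # one side of the symmetry axis
        p2 = c[:, 25:][:, ::-1]   # the other side, flipped back
        total += 1.0 - structural_similarity(p1, p2, data_range=1.0)
    return total / len(crops)

# The two most confident of four hypothetical proposals survive Top-k...
print(top_k(["rb0", "rb1", "rb2", "rb3"],
            [0.9, 0.2, 0.7, 0.4], [0.8, 0.1, 0.9, 0.3], k=2))  # ['rb0', 'rb2']
# ...and a mirror-symmetric crop gives SSIM ~ 1, i.e. a near-zero SPA loss.
half = np.random.rand(50, 25)
print(round(spa_loss([np.concatenate([half, half[:, ::-1]], axis=1)]), 6))
```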
For the rotated and flipped views, " + }, + { + "bbox": [ + 55, + 358, + 296, + 430 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{\\mathrm{rot}}" + }, + { + "bbox": [ + 55, + 358, + 296, + 430 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 358, + 296, + 430 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{\\mathrm{flp}}" + }, + { + "bbox": [ + 55, + 358, + 296, + 430 + ], + "type": "text", + "content": " are computed by comparing with the predicted angle " + }, + { + "bbox": [ + 55, + 358, + 296, + 430 + ], + "type": "inline_equation", + "content": "\\theta" + }, + { + "bbox": [ + 55, + 358, + 296, + 430 + ], + "type": "text", + "content": " in the original view " + }, + { + "bbox": [ + 55, + 358, + 296, + 430 + ], + "type": "inline_equation", + "content": "(\\mathrm{I}_{\\mathrm{ori}})" + }, + { + "bbox": [ + 55, + 358, + 296, + 430 + ], + "type": "text", + "content": ":" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 79, + 435, + 295, + 449 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 79, + 435, + 295, + 449 + ], + "spans": [ + { + "bbox": [ + 79, + 435, + 295, + 449 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {\\mathrm {r o t}} = l _ {s} \\left(\\theta_ {\\mathrm {r o t}} - \\theta , R\\right), \\mathcal {L} _ {\\mathrm {f l p}} = l _ {s} \\left(\\theta_ {\\mathrm {f l p}} + \\theta , 0\\right), \\tag {14}", + "image_path": "3a0a126f9b4c657eaf3bb5c34553c8f25b38dfee6c4dc275663dd4c6a1bb0d81.jpg" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 453, + 295, + 477 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 453, + 295, + 477 + ], + "spans": [ + { + "bbox": [ + 55, + 453, + 295, + 477 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 55, + 453, + 295, + 477 + ], + "type": "inline_equation", + "content": "l_{s}" + }, + { + "bbox": [ + 55, + 453, + 295, + 477 + ], + "type": "text", + "content": " denotes a smooth L1 loss-based snap loss [48], and " + }, + { + "bbox": [ + 55, + 453, + 295, + 477 + ], + "type": "inline_equation", + "content": "R" + }, + { + "bbox": [ + 55, + 453, + 295, + 477 + ], + "type": "text", + "content": " denotes the angle applied to " + }, + { + "bbox": [ + 55, + 453, + 295, + 477 + ], + "type": "inline_equation", + "content": "\\mathbf{I}_{ori}" + }, + { + "bbox": [ + 55, + 453, + 295, + 477 + ], + "type": "text", + "content": ". 
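The two consistency terms of Eq. 14 can be sketched as follows. The "snap" behavior of \(l_s\) is our reading of [48]: the angular residual is snapped to its nearest multiple of \(\pi\) (an RBox is unchanged by \(\pi\) rotations) before the smooth-L1 penalty. Treat that snapping rule as an assumption, not a quotation.

```python
import math
import torch
import torch.nn.functional as F

def snap_loss(residual: torch.Tensor) -> torch.Tensor:
    """Smooth-L1 'snap' loss l_s (cf. [48]), folding the target into the residual:
    the residual is wrapped to (-pi/2, pi/2] so that pi-equivalent angles agree."""
    r = residual - torch.round(residual / math.pi) * math.pi
    return F.smooth_l1_loss(r, torch.zeros_like(r))

# Eq. 14, given per-proposal angles predicted in the three views: theta (original
# I_ori), theta_rot (I_ori rotated by R), and theta_flp (flipped I_ori).
def view_consistency_losses(theta, theta_rot, theta_flp, R: float):
    l_rot = snap_loss(theta_rot - theta - R)  # rotating by R should shift the angle by R
    l_flp = snap_loss(theta_flp + theta)      # flipping should negate the angle
    return l_rot, l_flp
```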
The final angle loss is:" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 100, + 483, + 294, + 496 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 100, + 483, + 294, + 496 + ], + "spans": [ + { + "bbox": [ + 100, + 483, + 294, + 496 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {\\text {a n g}} = \\beta \\left(\\lambda_ {r} \\mathcal {L} _ {\\text {r o t}} + \\lambda_ {f} \\mathcal {L} _ {\\text {f l p}}\\right) + \\gamma \\mathcal {L} _ {\\text {S P A}}, \\tag {15}", + "image_path": "bdc85f0d7241e86dcb462b323f3bb9bb617222cd989760b06aec4aa83747c754.jpg" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 55, + 500, + 296, + 548 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 500, + 296, + 548 + ], + "spans": [ + { + "bbox": [ + 55, + 500, + 296, + 548 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 55, + 500, + 296, + 548 + ], + "type": "inline_equation", + "content": "\\lambda_r = 1.0, \\lambda_f = 0.05, \\beta = 0.6" + }, + { + "bbox": [ + 55, + 500, + 296, + 548 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 55, + 500, + 296, + 548 + ], + "type": "inline_equation", + "content": "\\gamma = 0.05" + }, + { + "bbox": [ + 55, + 500, + 296, + 548 + ], + "type": "text", + "content": " are empirically determined for all our experiments. In the shape learning branch (SLB), we use our IoU-based [45] regression loss " + }, + { + "bbox": [ + 55, + 500, + 296, + 548 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{\\mathrm{reg}}" + }, + { + "bbox": [ + 55, + 500, + 296, + 548 + ], + "type": "text", + "content": " in Eq. 11. The overall loss is defined as:" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 70, + 553, + 295, + 567 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 70, + 553, + 295, + 567 + ], + "spans": [ + { + "bbox": [ + 70, + 553, + 295, + 567 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {\\text {t o t a l}} = \\lambda_ {\\text {a n g}} \\mathcal {L} _ {\\text {a n g}} + \\lambda_ {\\text {r e g}} \\mathcal {L} _ {\\text {r e g}} + \\lambda_ {\\text {c n}} \\mathcal {L} _ {\\text {c n}} + \\lambda_ {\\text {c l s}} \\mathcal {L} _ {\\text {c l s}} \\tag {16}", + "image_path": "dfd977c53b529f1f644bb6b127e506b9d3056bc61c7d23358e53bf1bbf26d1b6.jpg" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 55, + 571, + 296, + 608 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 571, + 296, + 608 + ], + "spans": [ + { + "bbox": [ + 55, + 571, + 296, + 608 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 55, + 571, + 296, + 608 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{\\mathrm{cn}}" + }, + { + "bbox": [ + 55, + 571, + 296, + 608 + ], + "type": "text", + "content": " is the center-ness loss [24], and " + }, + { + "bbox": [ + 55, + 571, + 296, + 608 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{\\mathrm{cls}}" + }, + { + "bbox": [ + 55, + 571, + 296, + 608 + ], + "type": "text", + "content": " is the classification loss based on the focal loss [20]. 
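Eqs. 15 and 16 then reduce to a weighted sum; a minimal sketch using the weights quoted above, with the four \(\lambda\)'s of Eq. 16 (all set to 1, as stated in the sentence that follows) exposed as defaults.

```python
# Sketch of Eq. 15 (angle loss) and Eq. 16 (overall loss); the component losses
# are assumed to be precomputed scalar tensors from the respective branches.
def total_loss(l_rot, l_flp, l_spa, l_reg, l_cn, l_cls,
               lam_r=1.0, lam_f=0.05, beta=0.6, gamma=0.05,
               lam_ang=1.0, lam_reg=1.0, lam_cn=1.0, lam_cls=1.0):
    l_ang = beta * (lam_r * l_rot + lam_f * l_flp) + gamma * l_spa              # Eq. 15
    return lam_ang * l_ang + lam_reg * l_reg + lam_cn * l_cn + lam_cls * l_cls  # Eq. 16
```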
The weighting factors, " + }, + { + "bbox": [ + 55, + 571, + 296, + 608 + ], + "type": "inline_equation", + "content": "\\lambda_{\\mathrm{ang}}, \\lambda_{\\mathrm{reg}}, \\lambda_{\\mathrm{cn}}" + }, + { + "bbox": [ + 55, + 571, + 296, + 608 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 55, + 571, + 296, + 608 + ], + "type": "inline_equation", + "content": "\\lambda_{\\mathrm{cls}}" + }, + { + "bbox": [ + 55, + 571, + 296, + 608 + ], + "type": "text", + "content": " are all set to 1." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 55, + 616, + 137, + 628 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 616, + 137, + 628 + ], + "spans": [ + { + "bbox": [ + 55, + 616, + 137, + 628 + ], + "type": "text", + "content": "4. Experiments" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 55, + 635, + 118, + 647 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 635, + 118, + 647 + ], + "spans": [ + { + "bbox": [ + 55, + 635, + 118, + 647 + ], + "type": "text", + "content": "4.1. Datasets" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 55, + 653, + 296, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 653, + 296, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 653, + 296, + 714 + ], + "type": "text", + "content": "We trained and tested all the methods across four different datasets: DIOR [5, 13], DOTA-v1.0 [29], SIMD [9] and NWPU VHR-10 [4], which are summarized in Table 1. The details for the datasets and results for SIMD and NWPU are described in Suppl." + } + ] + } + ], + "index": 14 + }, + { + "type": "table", + "bbox": [ + 318, + 71, + 552, + 139 + ], + "blocks": [ + { + "bbox": [ + 318, + 71, + 552, + 139 + ], + "lines": [ + { + "bbox": [ + 318, + 71, + 552, + 139 + ], + "spans": [ + { + "bbox": [ + 318, + 71, + 552, + 139 + ], + "type": "table", + "html": "
Datasets | # of Images | Image Widths | # of Objects | # of Classes | Annotation Types
DIOR [13] | 22,463 | 800 | 190,288 | 20 | T-HBox
DIOR-R [5] | 22,463 | 800 | 190,288 | 20 | RBox
DOTA-v1.0 [29] | 2,806 | 800 ~ 4K | 188,282 | 15 | C-HBox, RBox
SIMD [9] | 5,000 | 1024 | 45,096 | 15 | T-HBox
NWPU VHR-10 [4] | 800 | ~1000 | 3,775 | 10 | T-HBox
", + "image_path": "89f028fbb66ac8d2765e35fe42d4b8fac947c3f837363afadf3c1e460ca86577.jpg" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "table_body" + } + ], + "index": 15 + }, + { + "bbox": [ + 331, + 142, + 537, + 152 + ], + "lines": [ + { + "bbox": [ + 331, + 142, + 537, + 152 + ], + "spans": [ + { + "bbox": [ + 331, + 142, + 537, + 152 + ], + "type": "text", + "content": "Table 1. Characteristics of datasets used for experiments" + } + ] + } + ], + "index": 16, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 313, + 169, + 447, + 182 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 169, + 447, + 182 + ], + "spans": [ + { + "bbox": [ + 313, + 169, + 447, + 182 + ], + "type": "text", + "content": "4.2. Implementation Details" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 313, + 187, + 554, + 259 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 187, + 554, + 259 + ], + "spans": [ + { + "bbox": [ + 313, + 187, + 554, + 259 + ], + "type": "text", + "content": "Our proposed ABBSPO pipeline adopts the FCOS [24] detector as the baseline architecture, utilizing a ResNet-50 [10] backbone and an FPN [15] neck, based on the H2RBox-v2 [48] framework. To ensure fairness, all models are configured with the ResNet-50 [10] backbone and trained for 12 epochs on NVIDIA RTX3090 GPUs." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 313, + 268, + 439, + 281 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 268, + 439, + 281 + ], + "spans": [ + { + "bbox": [ + 313, + 268, + 439, + 281 + ], + "type": "text", + "content": "4.3. Experimental Results" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 313, + 286, + 457, + 298 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 286, + 457, + 298 + ], + "spans": [ + { + "bbox": [ + 313, + 286, + 457, + 298 + ], + "type": "text", + "content": "4.3.1 Quantitative Comparison" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 313, + 305, + 554, + 425 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 305, + 554, + 425 + ], + "spans": [ + { + "bbox": [ + 313, + 305, + 554, + 425 + ], + "type": "text", + "content": "It should be noted that objects such as round-shaped pools have orientation ambiguities regardless of their annotations (RBoxes) [42]. In order to avoid confusion in orientation learning, annotations are modified as having horizontal orientations if the objects belong to the following categories: (i) DIOR-R: 'baseball field', 'chimney', 'golf field', 'stadium', 'storage tank', 'windmill'; and (ii) DOTA-v1.0: 'baseball diamond', 'stadium', 'roundabout'. Accordingly, their orientation learning is enforced to predict the horizontal orientations, similar to previous works [42, 48]." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 313, + 426, + 555, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 426, + 555, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 426, + 555, + 715 + ], + "type": "text", + "content": "Results on DIOR-R. Table 2 shows the OOD results. 
In addition to the " + }, + { + "bbox": [ + 313, + 426, + 555, + 715 + ], + "type": "inline_equation", + "content": "\\mathrm{AP}_{50}" + }, + { + "bbox": [ + 313, + 426, + 555, + 715 + ], + "type": "text", + "content": " metric, we use " + }, + { + "bbox": [ + 313, + 426, + 555, + 715 + ], + "type": "inline_equation", + "content": "3\\text{-AP}_{50}" + }, + { + "bbox": [ + 313, + 426, + 555, + 715 + ], + "type": "text", + "content": ", which focuses on the detection performance of the three complex-shaped object categories: 'airplane' (APL), 'expressway service area' (ESA), and 'overpass' (OP). As shown, our ABBSPO outperforms all weakly supervised OOD methods. In particular, in terms of " + }, + { + "bbox": [ + 313, + 426, + 555, + 715 + ], + "type": "inline_equation", + "content": "3\\text{-AP}_{50}" + }, + { + "bbox": [ + 313, + 426, + 555, + 715 + ], + "type": "text", + "content": ", our ABBSPO surpasses the HBox-supervised SOTA methods, H2RBox and H2RBox-v2, by large average margins of " + }, + { + "bbox": [ + 313, + 426, + 555, + 715 + ], + "type": "inline_equation", + "content": "12.9\\%" + }, + { + "bbox": [ + 313, + 426, + 555, + 715 + ], + "type": "text", + "content": "-point and " + }, + { + "bbox": [ + 313, + 426, + 555, + 715 + ], + "type": "inline_equation", + "content": "9.1\\%" + }, + { + "bbox": [ + 313, + 426, + 555, + 715 + ], + "type": "text", + "content": "-point, respectively. In overall " + }, + { + "bbox": [ + 313, + 426, + 555, + 715 + ], + "type": "inline_equation", + "content": "\\mathrm{AP}_{50}" + }, + { + "bbox": [ + 313, + 426, + 555, + 715 + ], + "type": "text", + "content": " performance, our ABBSPO surpasses H2RBox by " + }, + { + "bbox": [ + 313, + 426, + 555, + 715 + ], + "type": "inline_equation", + "content": "5.13\\%" + }, + { + "bbox": [ + 313, + 426, + 555, + 715 + ], + "type": "text", + "content": "-point and H2RBox-v2 by " + }, + { + "bbox": [ + 313, + 426, + 555, + 715 + ], + "type": "inline_equation", + "content": "3.03\\%" + }, + { + "bbox": [ + 313, + 426, + 555, + 715 + ], + "type": "text", + "content": "-point. It is noted that our ABBSPO not only surpasses our base detector (H2RBox-v2 [48]) but also performs comparably to RBox-supervised OOD methods, such as FCOS [24] and Oriented R-CNN [32]. Moreover, on 'airplane', the category with the most complex shape, our ABBSPO even surpasses the RBox-supervised OOD methods by large margins of " + }, + { + "bbox": [ + 313, + 426, + 555, + 715 + ], + "type": "inline_equation", + "content": "6.5\\%" + }, + { + "bbox": [ + 313, + 426, + 555, + 715 + ], + "type": "text", + "content": "-point to " + }, + { + "bbox": [ + 313, + 426, + 555, + 715 + ], + "type": "inline_equation", + "content": "11.7\\%" + }, + { + "bbox": [ + 313, + 426, + 555, + 715 + ], + "type": "text", + "content": "-point. Notably, the ABBS module is less effective for rectangular objects, such as 'tennis court' (TC) and 'vehicle' (VE), as scaling is often unnecessary. However, it proves highly beneficial for complex-shaped objects, such as the ESA. 
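As a quick sanity check of the metric, \(3\text{-AP}_{50}\) is simply the mean \(\mathrm{AP}_{50}\) over the three designated categories; e.g., for the ABBSPO row of Table 2:

```latex
3\text{-}\mathrm{AP}_{50}
  = \tfrac{1}{3}\left(\underbrace{69.5}_{\text{APL}} + \underbrace{75.3}_{\text{ESA}} + \underbrace{48.2}_{\text{OP}}\right)
  = \tfrac{193.0}{3} \approx 64.33
```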
The SPA loss is applied only to symmetric categories and helps" + } + ] + } + ], + "index": 22 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "text", + "content": "8853" + } + ] + } + ], + "index": 23 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 58, + 61, + 553, + 188 + ], + "blocks": [ + { + "bbox": [ + 58, + 61, + 553, + 188 + ], + "lines": [ + { + "bbox": [ + 58, + 61, + 553, + 188 + ], + "spans": [ + { + "bbox": [ + 58, + 61, + 553, + 188 + ], + "type": "table", + "html": "
Methods | \( \underline{\mathbf{APL}} \) | \( \mathbf{APO} \) | \( \mathbf{BF} \) | \( \mathbf{BC} \) | \( \mathbf{BR} \) | \( \mathbf{CH} \) | \( \underline{\mathbf{ESA}} \) | \( \mathbf{ETS} \) | \( \mathbf{DAM} \) | \( \mathbf{GF} \) | \( \mathbf{GTF} \) | \( \mathbf{HA} \) | \( \underline{\mathbf{OP}} \) | \( \mathbf{SH} \) | \( \mathbf{STA} \) | \( \mathbf{STO} \) | \( \mathbf{TC} \) | \( \mathbf{TS} \) | \( \mathbf{VE} \) | \( \mathbf{WM} \) | \( 3\text{-}AP_{50} \) | \( AP_{50} \)
\( S_R \) RetinaNet [20] | 59.8 | 19.3 | 69.7 | 81.3 | 17.2 | 72.7 | 68.7 | 49.4 | 18.4 | 69.5 | 71.3 | 33.3 | 34.1 | 75.8 | 67.1 | 59.6 | 81.0 | 44.1 | 38.0 | 62.5 | 54.20 | 54.64
FCOS [24] | 62.1 | 37.9 | 74.6 | 81.2 | 32.9 | 72.1 | 75.3 | 61.8 | 27.4 | 69.1 | 78.7 | 34.4 | 50.6 | 80.1 | 68.6 | 68.1 | 81.3 | 49.1 | 43.4 | 64.5 | 62.67 | 60.66
Oriented R-CNN [32] | 63.0 | 36.7 | 71.9 | 81.6 | 41.1 | 72.6 | 77.8 | 65.5 | 24.8 | 72.9 | 82.1 | 40.9 | 56.5 | 81.2 | 73.4 | 62.4 | 81.5 | 53.3 | | 65.6 | 65.77 | 62.41
GWD [38] (RetinaNet) | 61.5 | 23.6 | 73.6 | 81.1 | 17.4 | 72.7 | 68.3 | 47.2 | 20.7 | 71.2 | 73.2 | 33.9 | 34.3 | 77.6 | 64.7 | 57.5 | 80.9 | 42.1 | 39.7 | 60.2 | 54.70 | 55.07
KLD [39] (RetinaNet) | 57.8 | 22.6 | 71.5 | 81.2 | 16.9 | 72.7 | 68.9 | 52.1 | 20.6 | 73.5 | 71.0 | 33.7 | 33.2 | 77.1 | 68.9 | 59.9 | 80.9 | 43.9 | 39.1 | 60.9 | 53.30 | 55.32
KFIoU [41] (RetinaNet) | 60.6 | 36.6 | 73.6 | 80.9 | 27.0 | 72.6 | 73.4 | 56.5 | 25.4 | 73.9 | 72.0 | 32.9 | 45.8 | 75.8 | 65.2 | 57.6 | 80.0 | 48.0 | 40.1 | 58.8 | 59.93 | 57.84
\( S_I \) \( WSODet^† \) [23] | 20.7 | 29.0 | 63.2 | 67.3 | 0.2 | 65.5 | 0.4 | 0.1 | 0.3 | 49.0 | 28.9 | 0.3 | 1.5 | 1.2 | 53.4 | 16.4 | 40.0 | 0.1 | 6.1 | 0.1 | 7.53 | 22.20
\( S_P \) \( PointOBB^† \) [17] | 58.2 | 15.3 | 70.5 | 78.6 | 0.1 | 72.2 | 69.6 | 1.8 | 3.7 | 0.3 | 77.3 | 16.7 | 40.4 | 79.2 | 39.6 | 32.4 | 29.6 | 16.8 | 33.6 | 27.7 | 56.07 | 38.08
Point2RBox-SK [47] | 41.9 | 9.1 | 62.9 | 52.8 | 10.8 | 72.2 | 3.0 | 43.9 | 5.5 | 9.7 | 25.1 | 9.1 | 21.0 | 24.0 | 20.4 | 25.1 | 71.7 | 4.5 | 16.1 | 16.3 | 21.97 | 27.26
\( S_H \) H2RBox [42] | 57.1 | 14.4 | 72.2 | 82.6 | 17.5 | 71.2 | 56.5 | 55.2 | 14 | 67.7 | 77.9 | 31 | 40.7 | 76.3 | 66.2 | 63.4 | 81.5 | 50.4 | 38 | 57.6 | 51.43 | 54.57
H2RBox-v2 [48] | 55.5 | 17.8 | 76.9 | 80.5 | 27.7 | 72.2 | 63.0 | 58.6 | 24.4 | 73.9 | 80.3 | 33.9 | 47.2 | 77.4 | 58.7 | 60.9 | 81.4 | 48.1 | 41.1 | 53.9 | 55.23 | 56.67
ABBSPO (Ours) | 69.5 | 15.7 | 76.2 | 87.5 | 29.9 | 72.3 | 75.3 | 61.2 | 28.1 | 74.1 | 81.7 | 34.7 | 48.2 | 79.3 | 67.4 | 61.4 | 81.5 | 54.7 | 41.5 | 53.8 | 64.33 | 59.70
", + "image_path": "b9411b1d163988c411741008f80198c6584b65223b04239509fe345f733a2d69.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "type": "table", + "bbox": [ + 56, + 231, + 553, + 350 + ], + "blocks": [ + { + "bbox": [ + 55, + 190, + 553, + 224 + ], + "lines": [ + { + "bbox": [ + 55, + 190, + 553, + 224 + ], + "spans": [ + { + "bbox": [ + 55, + 190, + 553, + 224 + ], + "type": "text", + "content": "Table 2. Quantitative results of each category on the DIOR-R [5] test dataset for RBox-supervised " + }, + { + "bbox": [ + 55, + 190, + 553, + 224 + ], + "type": "inline_equation", + "content": "(S_R)" + }, + { + "bbox": [ + 55, + 190, + 553, + 224 + ], + "type": "text", + "content": ", Image-supervised " + }, + { + "bbox": [ + 55, + 190, + 553, + 224 + ], + "type": "inline_equation", + "content": "(S_I)" + }, + { + "bbox": [ + 55, + 190, + 553, + 224 + ], + "type": "text", + "content": ", Point-supervised " + }, + { + "bbox": [ + 55, + 190, + 553, + 224 + ], + "type": "inline_equation", + "content": "(S_P)" + }, + { + "bbox": [ + 55, + 190, + 553, + 224 + ], + "type": "text", + "content": " and HBox-supervised " + }, + { + "bbox": [ + 55, + 190, + 553, + 224 + ], + "type": "inline_equation", + "content": "(S_H)" + }, + { + "bbox": [ + 55, + 190, + 553, + 224 + ], + "type": "text", + "content": " methods. The " + }, + { + "bbox": [ + 55, + 190, + 553, + 224 + ], + "type": "inline_equation", + "content": "3-\\mathrm{AP}_{50}" + }, + { + "bbox": [ + 55, + 190, + 553, + 224 + ], + "type": "text", + "content": " represents the mean " + }, + { + "bbox": [ + 55, + 190, + 553, + 224 + ], + "type": "inline_equation", + "content": "\\mathrm{AP}_{50}" + }, + { + "bbox": [ + 55, + 190, + 553, + 224 + ], + "type": "text", + "content": " scores for three complex-shaped object categories: 'airplane' (APL), 'expressway service area' (ESA), and 'overpass' (OP). The notation " + }, + { + "bbox": [ + 55, + 190, + 553, + 224 + ], + "type": "inline_equation", + "content": "\\dagger" + }, + { + "bbox": [ + 55, + 190, + 553, + 224 + ], + "type": "text", + "content": " indicates its results in the paper [17]." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 56, + 231, + 553, + 350 + ], + "lines": [ + { + "bbox": [ + 56, + 231, + 553, + 350 + ], + "spans": [ + { + "bbox": [ + 56, + 231, + 553, + 350 + ], + "type": "table", + "html": "
Methods | PL | BD | BR | GTF | SV | LV | SH | TC | BC | ST | SBF | RA | HA | SP | HC | 3-AP50 | AP50
S_R RetinaNet [20] | 87.5 | 75.1 | 39.9 | 59.6 | 66.3 | 66.3 | 78.2 | 90.5 | 55.0 | 62.7 | 47.1 | 63.6 | 59.4 | 55.1 | 43.0 | 61.87 | 63.3
FCOS [24] | 88.8 | 74.0 | 46.8 | 59.1 | 70.1 | 81.4 | 87.7 | 90.7 | 67.7 | 68.3 | 60.2 | 66.1 | 64.9 | 58.7 | 44.0 | 63.83 | 68.6
Oriented R-CNN [32] | 89.3 | 76.1 | 53.8 | 78.7 | 68.6 | 84.9 | 89.3 | 90.8 | 74.3 | 62.8 | 66.3 | 66.5 | 74.7 | 58.6 | 46.8 | 64.90 | 72.1
Oriented RepPoints [14] | 89.7 | 80.1 | 50.5 | 74.4 | 75.0 | 82.0 | 88.7 | 90.4 | 64.0 | 70.0 | 45.7 | 60.6 | 73.6 | 60.4 | 42.8 | 64.30 | 69.86
GWD [38] (RetinaNet) | 88.2 | 74.9 | 41.3 | 60.5 | 66.7 | 68.1 | 85.8 | 90.5 | 50.4 | 66.8 | 45.8 | 65.1 | 60.7 | 52.9 | 38.9 | 60.0 | 63.77
KLD [39] (RetinaNet) | 88.4 | 75.8 | 41.4 | 60.0 | 66.1 | 68.8 | 84.7 | 90.6 | 56.8 | 60.4 | 50.4 | 70.1 | 60.0 | 50.5 | 45.7 | 61.53 | 64.65
KFIoU [41] (RetinaNet) | 84.4 | 74.3 | 40.7 | 55.2 | 57.9 | 56.9 | 76.4 | 71.2 | 46.1 | 64.8 | 54.3 | 65.0 | 58.3 | 48.7 | 42.9 | 58.67 | 59.81
S_P PointOBB [17]+FCOS | 32.4 | 67.3 | 0.8 | 53.6 | 2.3 | 9.7 | 18.8 | 0.3 | 9.9 | 12.8 | 0.5 | 54.0 | 11.0 | 34.1 | 11.4 | 25.97 | 21.26
Point2RBox-SK [47] | 50.1 | 63.7 | 1.6 | 44.7 | 23.9 | 34.7 | 32.7 | 78.8 | 41.2 | 32.2 | 2.1 | 34.3 | 20.8 | 42.5 | 7.2 | 33.27 | 34.03
S_H H2RBox [42] | 89.5 | 73.1 | 37.3 | 55.1 | 70.7 | 76.4 | 85.4 | 90.3 | 66.5 | 67.3 | 59.6 | 64.9 | 60.6 | 57.9 | 36.5 | 61.30 | 66.07
H2RBox-v2 [48] | 89.4 | 74.8 | 45.4 | 56.0 | 70.3 | 76.6 | 87.9 | 90.5 | 69.3 | 67.5 | 56.7 | 64.7 | 65.3 | 55.5 | 45.5 | 63.47 | 67.69
ABBSPO (Ours) | 89.2 | 75.6 | 47.4 | 52.8 | 70.3 | 77.6 | 88.2 | 90.5 | 67.9 | 66.8 | 68.2 | 66.2 | 71.6 | 55.6 | 51.0 | 65.27 | 69.26
", + "image_path": "70da91bf82d89ba4065aedda657eaef932fcf70ec09bc42e7af6258b288a4e32.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 351, + 555, + 384 + ], + "lines": [ + { + "bbox": [ + 55, + 351, + 555, + 384 + ], + "spans": [ + { + "bbox": [ + 55, + 351, + 555, + 384 + ], + "type": "text", + "content": "Table 3. Quantitative results of each category on the DOTA-v1.0 [29] validation dataset for " + }, + { + "bbox": [ + 55, + 351, + 555, + 384 + ], + "type": "inline_equation", + "content": "S_R" + }, + { + "bbox": [ + 55, + 351, + 555, + 384 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 55, + 351, + 555, + 384 + ], + "type": "inline_equation", + "content": "S_I" + }, + { + "bbox": [ + 55, + 351, + 555, + 384 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 55, + 351, + 555, + 384 + ], + "type": "inline_equation", + "content": "S_P" + }, + { + "bbox": [ + 55, + 351, + 555, + 384 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 351, + 555, + 384 + ], + "type": "inline_equation", + "content": "S_H" + }, + { + "bbox": [ + 55, + 351, + 555, + 384 + ], + "type": "text", + "content": " methods. The " + }, + { + "bbox": [ + 55, + 351, + 555, + 384 + ], + "type": "inline_equation", + "content": "\\underline{3 - AP}_{50}" + }, + { + "bbox": [ + 55, + 351, + 555, + 384 + ], + "type": "text", + "content": " represents the mean " + }, + { + "bbox": [ + 55, + 351, + 555, + 384 + ], + "type": "inline_equation", + "content": "\\mathrm{AP}_{50}" + }, + { + "bbox": [ + 55, + 351, + 555, + 384 + ], + "type": "text", + "content": " scores for three complex-shaped object categories: plane (PL), swimming pool (SP), and helicopter (HC). All the methods are re-trained using only train dataset for fair comparison." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 54, + 391, + 294, + 559 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 391, + 294, + 559 + ], + "spans": [ + { + "bbox": [ + 54, + 391, + 294, + 559 + ], + "type": "text", + "content": "improve their performance, except for the categories with orientation ambiguities, such as 'storage tank' (STO). Since the predicted angles learned through the SPA loss are also utilized in the ABBS module for scale adjustment, both the SPA loss and the ABBS module jointly contribute to performance improvement in symmetric categories. This joint effect is particularly evident in complex-shaped symmetric categories, such as APL and ESA, where performance gains are more significant. Nevertheless, the performance gains for the two symmetric and rectangular categories, TC and VE, are marginal. This is mainly because the ABBS module has limited impact on rectangular shapes, and the small object sizes lead to an insufficient number of pixels for reliably determining the symmetry axis via the SPA loss." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 559, + 295, + 703 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 559, + 295, + 703 + ], + "spans": [ + { + "bbox": [ + 55, + 559, + 295, + 703 + ], + "type": "text", + "content": "Results on DOTA-v1.0. Table 3 shows the detection performance results on the DOTA-v1.0 [29]. Due to the nonresponsiveness of the DOTA evaluation server, we report our experimental results on the validation dataset (458 images) instead of the test dataset (937 images). 
It should be noted that the validation dataset was not used to train any of the methods, ensuring a fair comparison. We use " + }, + { + "bbox": [ + 55, + 559, + 295, + 703 + ], + "type": "inline_equation", + "content": "3\\mathrm{-AP}_{50}" + }, + { + "bbox": [ + 55, + 559, + 295, + 703 + ], + "type": "text", + "content": ", which measures the detection performance for the three complex-shaped object categories: 'plane', 'swimming pool' and 'helicopter'. Our ABBSPO achieves SOTA performance, outperforming H2RBox by " + }, + { + "bbox": [ + 55, + 559, + 295, + 703 + ], + "type": "inline_equation", + "content": "3.19\\%" + }, + { + "bbox": [ + 55, + 559, + 295, + 703 + ], + "type": "text", + "content": "-point and H2RBox-v2 by " + }, + { + "bbox": [ + 55, + 559, + 295, + 703 + ], + "type": "inline_equation", + "content": "1.57\\%" + }, + { + "bbox": [ + 55, + 559, + 295, + 703 + ], + "type": "text", + "content": "-point. Moreover, our ABBSPO" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 313, + 392, + 537, + 403 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 392, + 537, + 403 + ], + "spans": [ + { + "bbox": [ + 313, + 392, + 537, + 403 + ], + "type": "text", + "content": "even surpasses the FCOS baseline by " + }, + { + "bbox": [ + 313, + 392, + 537, + 403 + ], + "type": "inline_equation", + "content": "0.66\\%" + }, + { + "bbox": [ + 313, + 392, + 537, + 403 + ], + "type": "text", + "content": "-point." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 313, + 418, + 450, + 430 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 418, + 450, + 430 + ], + "spans": [ + { + "bbox": [ + 313, + 418, + 450, + 430 + ], + "type": "text", + "content": "4.3.2 Qualitative Comparison" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 312, + 437, + 555, + 642 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 437, + 555, + 642 + ], + "spans": [ + { + "bbox": [ + 312, + 437, + 555, + 642 + ], + "type": "text", + "content": "Results on DIOR. As shown in the first row of Fig. 5, our ABBSPO is the only method that accurately captures both the orientation and scale of the airplane. Since DIOR annotations provide GT in T-HBox format, directly using T-HBoxes as GT to train RBox prediction degrades the orientation and scale accuracy of the existing HBox-supervised OOD methods, as shown in columns 2 and 3 of Fig. 5. In contrast, our ABBSPO avoids such degradation by utilizing the ABBS module, which optimally scales the GT HBox sizes for precise RBox prediction during training. It is also worth mentioning that the orientations predicted by our ABBSPO are more precise thanks to our SPA loss. Furthermore, compared to the RBox-supervised baseline (Rotated FCOS [24]), our approach demonstrates superior visual results even under weakly supervised learning." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 642, + 556, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 642, + 556, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 642, + 556, + 715 + ], + "type": "text", + "content": "Results on DOTA-v1.0. As shown in the second row of Fig. 5, ABBSPO very accurately predicts both the orientation and scale of the swimming pool, and achieves similar accuracy for the tennis court. 
Interestingly, only ABBSPO successfully detects the two tennis courts that are partially occluded by trees (red solid circle) while the other" + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "text", + "content": "8854" + } + ] + } + ], + "index": 10 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 63, + 70, + 212, + 136 + ], + "blocks": [ + { + "bbox": [ + 63, + 70, + 212, + 136 + ], + "lines": [ + { + "bbox": [ + 63, + 70, + 212, + 136 + ], + "spans": [ + { + "bbox": [ + 63, + 70, + 212, + 136 + ], + "type": "table", + "html": "
Module | DIOR-R | DOTA-v1.0
ABBS | SPA | 3-AP50 | AP50 | AP50
– | – | 55.23 | 56.67 | 67.69
✓ | – | 62.13 | 58.35 | 68.59
– | ✓ | 58.77 | 58.99 | 69.16
✓ | ✓ | 64.33 | 59.70 | 69.26
", + "image_path": "9e6b6a3079290eb1c61eae8063f2493eadaa18e74151467d1fea92894d2034ad.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "type": "table", + "bbox": [ + 232, + 70, + 360, + 136 + ], + "blocks": [ + { + "bbox": [ + 58, + 139, + 219, + 160 + ], + "lines": [ + { + "bbox": [ + 58, + 139, + 219, + 160 + ], + "spans": [ + { + "bbox": [ + 58, + 139, + 219, + 160 + ], + "type": "text", + "content": "Table 4. Ablation results on ABBS module and SPA loss " + }, + { + "bbox": [ + 58, + 139, + 219, + 160 + ], + "type": "inline_equation", + "content": "(\\mathcal{L}_{\\mathrm{SPA}})" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 232, + 70, + 360, + 136 + ], + "lines": [ + { + "bbox": [ + 232, + 70, + 360, + 136 + ], + "spans": [ + { + "bbox": [ + 232, + 70, + 360, + 136 + ], + "type": "table", + "html": "
Sampling | DIOR-R
L_SPA | Others | 3-AP50 | AP50
– | – | 61.67 | 58.93
✓ | – | 64.33 | 59.70
– | ✓ | 43.63 | 50.51
✓ | ✓ | 45.1 | 50.91
", + "image_path": "19bf724e1698fae97cbb3bbcebebb512f03da67ee83be18d5195a971c825f065.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "type": "table", + "bbox": [ + 378, + 70, + 551, + 136 + ], + "blocks": [ + { + "bbox": [ + 221, + 139, + 372, + 161 + ], + "lines": [ + { + "bbox": [ + 221, + 139, + 372, + 161 + ], + "spans": [ + { + "bbox": [ + 221, + 139, + 372, + 161 + ], + "type": "text", + "content": "Table 5. Ablation results on proposal sampling in " + }, + { + "bbox": [ + 221, + 139, + 372, + 161 + ], + "type": "inline_equation", + "content": "{\\mathcal{L}}_{\\mathrm{{SPA}}}" + }, + { + "bbox": [ + 221, + 139, + 372, + 161 + ], + "type": "text", + "content": " and other components." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 378, + 70, + 551, + 136 + ], + "lines": [ + { + "bbox": [ + 378, + 70, + 551, + 136 + ], + "spans": [ + { + "bbox": [ + 378, + 70, + 551, + 136 + ], + "type": "table", + "html": "
Scale Range | DIOR-R | DOTA-v1.0
Min | Max | Interval | 3-AP50 | AP50 | AP50
0.9 | 1.1 | 0.05 | 57.97 | 58.15 | 69.26
0.5 | 1.5 | 0.1 | 61.67 | 59.62 | 68.8
1.0 | 1.5 | 0.1 | 64.33 | 59.70 | 68.9
1.0 | 2.0 | 0.1 | 56.07 | 55.46 | 66.55
", + "image_path": "03510db1c1a69acae27a4fb3dd44a73dffdd0f6e941572bea494f9fb3004f152.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "table_body" + } + ], + "index": 4 + }, + { + "bbox": [ + 376, + 140, + 537, + 161 + ], + "lines": [ + { + "bbox": [ + 376, + 140, + 537, + 161 + ], + "spans": [ + { + "bbox": [ + 376, + 140, + 537, + 161 + ], + "type": "text", + "content": "Table 6. Ablation results on scale range in ABBS module." + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "text" + }, + { + "type": "image", + "bbox": [ + 72, + 167, + 544, + 399 + ], + "blocks": [ + { + "bbox": [ + 72, + 167, + 544, + 399 + ], + "lines": [ + { + "bbox": [ + 72, + 167, + 544, + 399 + ], + "spans": [ + { + "bbox": [ + 72, + 167, + 544, + 399 + ], + "type": "image", + "image_path": "f838ad4df2e507e0e2ecfce0c5614d593565459003deab0ecdc911cd4cdab3c7.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 55, + 403, + 555, + 426 + ], + "lines": [ + { + "bbox": [ + 55, + 403, + 555, + 426 + ], + "spans": [ + { + "bbox": [ + 55, + 403, + 555, + 426 + ], + "type": "text", + "content": "Figure 5. Qualitative results on DIOR [5, 13] and DOTA-v1.0 [29]. Zoom-in for better visualization. Rotated FCOS was trained only with GT RBoxes, while H2RBox, H2RBox-v2 and our ABBsPO were trained with GT T-HBoxes (1st row) and GT C-HBoxes (2nd row)." + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 436, + 295, + 472 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 436, + 295, + 472 + ], + "spans": [ + { + "bbox": [ + 55, + 436, + 295, + 472 + ], + "type": "text", + "content": "methods failed. These results visually support the effectiveness of our ABBS module and SPA loss in learning the scales and orientations of objects accurately." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 55, + 479, + 156, + 491 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 479, + 156, + 491 + ], + "spans": [ + { + "bbox": [ + 55, + 479, + 156, + 491 + ], + "type": "text", + "content": "4.4. Ablation Studies" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 55, + 498, + 295, + 594 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 498, + 295, + 594 + ], + "spans": [ + { + "bbox": [ + 55, + 498, + 295, + 594 + ], + "type": "text", + "content": "Ablation study on SPA loss and ABBS module. As shown in Table 4, both components contribute to performance improvements. The ABBS module effectively scales the GT HBoxes, leading to an increase in " + }, + { + "bbox": [ + 55, + 498, + 295, + 594 + ], + "type": "inline_equation", + "content": "\\mathrm{AP}_{50}" + }, + { + "bbox": [ + 55, + 498, + 295, + 594 + ], + "type": "text", + "content": " performance on the DIOR dataset. Notably, it has a greater effect on complex-shaped object categories, resulting in a significant improvement in " + }, + { + "bbox": [ + 55, + 498, + 295, + 594 + ], + "type": "inline_equation", + "content": "3\\text{-AP}_{50}" + }, + { + "bbox": [ + 55, + 498, + 295, + 594 + ], + "type": "text", + "content": ". Similarly, the SPA loss enhances angle prediction accuracy, also bringing an improvement in " + }, + { + "bbox": [ + 55, + 498, + 295, + 594 + ], + "type": "inline_equation", + "content": "\\mathrm{AP}_{50}" + }, + { + "bbox": [ + 55, + 498, + 295, + 594 + ], + "type": "text", + "content": "." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 55, + 594, + 295, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 594, + 295, + 666 + ], + "spans": [ + { + "bbox": [ + 55, + 594, + 295, + 666 + ], + "type": "text", + "content": "Ablation study on proposal sampling. As shown in Table 5, applying Top-" + }, + { + "bbox": [ + 55, + 594, + 295, + 666 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 55, + 594, + 295, + 666 + ], + "type": "text", + "content": " proposal sampling exclusively to the SPA loss " + }, + { + "bbox": [ + 55, + 594, + 295, + 666 + ], + "type": "inline_equation", + "content": "(\\mathcal{L}_{\\mathrm{SPA}})" + }, + { + "bbox": [ + 55, + 594, + 295, + 666 + ], + "type": "text", + "content": " yields the highest " + }, + { + "bbox": [ + 55, + 594, + 295, + 666 + ], + "type": "inline_equation", + "content": "\\mathrm{AP}_{50}" + }, + { + "bbox": [ + 55, + 594, + 295, + 666 + ], + "type": "text", + "content": " performance, as high-quality symmetric proposals benefit " + }, + { + "bbox": [ + 55, + 594, + 295, + 666 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{\\mathrm{SPA}}" + }, + { + "bbox": [ + 55, + 594, + 295, + 666 + ], + "type": "text", + "content": ". However, additionally applying proposal sampling to the other losses " + }, + { + "bbox": [ + 55, + 594, + 295, + 666 + ], + "type": "inline_equation", + "content": "(\\mathcal{L}_{\\mathrm{rot}},\\mathcal{L}_{\\mathrm{flp}},\\mathcal{L}_{\\mathrm{reg}},\\mathcal{L}_{\\mathrm{cn}},\\mathcal{L}_{\\mathrm{cls}})" + }, + { + "bbox": [ + 55, + 594, + 295, + 666 + ], + "type": "text", + "content": " significantly lowers performance." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 55, + 666, + 296, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 666, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 666, + 296, + 713 + ], + "type": "text", + "content": "Ablation study on scale range in the ABBS module. As shown in Table 6, the optimal scale range is influenced by the type of GT HBoxes. For DIOR's T-HBoxes, a scale range of 1 to 1.5 works well because it ensures that the" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 436, + 555, + 509 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 436, + 555, + 509 + ], + "spans": [ + { + "bbox": [ + 313, + 436, + 555, + 509 + ], + "type": "text", + "content": "predicted RBoxes fully cover the objects' boundaries. On the other hand, for DOTA's C-HBoxes, which are already close to the optimal HBoxes, the optimal scale range is closer to 1. By adjusting the scale range based on the type of HBoxes, the ABBS module achieves high accuracy in predicting RBoxes for both datasets." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 526, + 388, + 538 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 526, + 388, + 538 + ], + "spans": [ + { + "bbox": [ + 313, + 526, + 388, + 538 + ], + "type": "text", + "content": "5. Conclusion" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 312, + 545, + 555, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 545, + 555, + 713 + ], + "spans": [ + { + "bbox": [ + 312, + 545, + 555, + 713 + ], + "type": "text", + "content": "Our ABBSPO, a weakly supervised OOD framework, effectively learns RBox prediction regardless of the type of HBox annotations (T-HBox and C-HBox). 
With our proposed Adaptive Bounding Box Scaling (ABBS) and Symmetric Prior Angle (SPA) loss, we achieved enhanced orientation and scale accuracy for OOD, which is comparable to or even better than RBox-supervised methods. Extensive experimental results underscore the superiority of our approach, surpassing state-of-the-art HBox-supervised methods. Our method effectively bridges the gap between weakly supervised OOD and fully supervised OOD, making it a promising solution for applications requiring efficient and accurate object detection via training with relatively cheap annotations of HBoxes compared to RBoxes." + } + ] + } + ], + "index": 15 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "text", + "content": "8855" + } + ] + } + ], + "index": 16 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 153, + 85 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 72, + 153, + 85 + ], + "spans": [ + { + "bbox": [ + 56, + 72, + 153, + 85 + ], + "type": "text", + "content": "Acknowledgement" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 90, + 296, + 147 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 90, + 296, + 147 + ], + "spans": [ + { + "bbox": [ + 55, + 90, + 296, + 147 + ], + "type": "text", + "content": "This research was supported by Korea Institute of Marine Science & Technology Promotion (KIMST) funded by the Korea Coast Guard (RS-2023-00238652, Integrated Satellite-based Applications Development for Korea Coast Guard, " + }, + { + "bbox": [ + 55, + 90, + 296, + 147 + ], + "type": "inline_equation", + "content": "100\\%" + }, + { + "bbox": [ + 55, + 90, + 296, + 147 + ], + "type": "text", + "content": ")." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 167, + 115, + 180 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 167, + 115, + 180 + ], + "spans": [ + { + "bbox": [ + 56, + 167, + 115, + 180 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 57, + 186, + 296, + 714 + ], + "type": "list", + "angle": 0, + "index": 14, + "blocks": [ + { + "bbox": [ + 61, + 186, + 296, + 242 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 186, + 296, + 242 + ], + "spans": [ + { + "bbox": [ + 61, + 186, + 296, + 242 + ], + "type": "text", + "content": "[1] Liangyu Chen, Tong Yang, Xiangyu Zhang, Wei Zhang, and Jian Sun. Points as queries: Weakly semi-supervised object detection by points. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 8823-8832, 2021. 3" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 61, + 243, + 296, + 298 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 243, + 296, + 298 + ], + "spans": [ + { + "bbox": [ + 61, + 243, + 296, + 298 + ], + "type": "text", + "content": "[2] Pengfei Chen, Xuehui Yu, Xumeng Han, Najmul Hassan, Kai Wang, Jiachen Li, Jian Zhao, Humphrey Shi, Zhenjun Han, and Qixiang Ye. Point-to-box network for accurate object detection via single point supervision. In European Conference on Computer Vision, pages 51-67. Springer, 2022. 
3" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 62, + 300, + 295, + 354 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 300, + 295, + 354 + ], + "spans": [ + { + "bbox": [ + 62, + 300, + 295, + 354 + ], + "type": "text", + "content": "[3] Yihong Chen, Zheng Zhang, Yue Cao, Liwei Wang, Stephen Lin, and Han Hu. Reppoints v2: Verification meets regression for object detection. In Advances in Neural Information Processing Systems, pages 5621-5631. Curran Associates, Inc., 2020. 2" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 62, + 355, + 296, + 409 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 355, + 296, + 409 + ], + "spans": [ + { + "bbox": [ + 62, + 355, + 296, + 409 + ], + "type": "text", + "content": "[4] Gong Cheng, Peicheng Zhou, and Junwei Han. Learning rotation-invariant convolutional neural networks for object detection in vhr optical remote sensing images. IEEE transactions on geoscience and remote sensing, 54(12):7405-7415, 2016. 2, 6" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 62, + 411, + 296, + 455 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 411, + 296, + 455 + ], + "spans": [ + { + "bbox": [ + 62, + 411, + 296, + 455 + ], + "type": "text", + "content": "[5] Gong Cheng, Jiabao Wang, Ke Li, Xingxing Xie, Chunbo Lang, Yanqing Yao, and Junwei Han. Anchor-free oriented proposal generator for object detection. IEEE Transactions on Geoscience and Remote Sensing, 60:1-11, 2022. 6, 7, 8" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 62, + 456, + 296, + 510 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 456, + 296, + 510 + ], + "spans": [ + { + "bbox": [ + 62, + 456, + 296, + 510 + ], + "type": "text", + "content": "[6] Jian Ding, Nan Xue, Yang Long, Gui-Song Xia, and Qikai Lu. Learning roi transformer for oriented object detection in aerial images. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2849-2858, 2019. 2" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 62, + 512, + 295, + 544 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 512, + 295, + 544 + ], + "spans": [ + { + "bbox": [ + 62, + 512, + 295, + 544 + ], + "type": "text", + "content": "[7] Jiaming Han, Jian Ding, Jie Li, and Gui-Song Xia. Align deep features for oriented object detection. IEEE transactions on geoscience and remote sensing, 60:1-11, 2021. 2" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 62, + 546, + 296, + 590 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 546, + 296, + 590 + ], + "spans": [ + { + "bbox": [ + 62, + 546, + 296, + 590 + ], + "type": "text", + "content": "[8] Jiaming Han, Jian Ding, Nan Xue, and Gui-Song Xia. Redet: A rotation-equivariant detector for aerial object detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2786-2795, 2021. 2" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 62, + 591, + 296, + 645 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 591, + 296, + 645 + ], + "spans": [ + { + "bbox": [ + 62, + 591, + 296, + 645 + ], + "type": "text", + "content": "[9] Muhammad Haroon, Muhammad Shahzad, and Muhammad Moazam Fraz. Multisized object detection using spaceborne optical imagery. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 13:3032-3046, 2020. 
2, 6" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 57, + 647, + 295, + 691 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 647, + 295, + 691 + ], + "spans": [ + { + "bbox": [ + 57, + 647, + 295, + 691 + ], + "type": "text", + "content": "[10] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016. 6" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 57, + 692, + 295, + 714 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 692, + 295, + 714 + ], + "spans": [ + { + "bbox": [ + 57, + 692, + 295, + 714 + ], + "type": "text", + "content": "[11] Shitian He, Huanxin Zou, Yingqian Wang, Boyang Li, Xu Cao, and Ning Jing. Learning remote sensing object detect" + } + ] + } + ], + "index": 13 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 73, + 555, + 713 + ], + "type": "list", + "angle": 0, + "index": 28, + "blocks": [ + { + "bbox": [ + 333, + 73, + 554, + 95 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 333, + 73, + 554, + 95 + ], + "spans": [ + { + "bbox": [ + 333, + 73, + 554, + 95 + ], + "type": "text", + "content": "tion with single point supervision. IEEE Transactions on Geoscience and Remote Sensing, 2023. 3" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 316, + 96, + 554, + 139 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 96, + 554, + 139 + ], + "spans": [ + { + "bbox": [ + 316, + 96, + 554, + 139 + ], + "type": "text", + "content": "[12] Darius Lam, Richard Kuzma, Kevin McGee, Samuel Dooley, Michael Laielli, Matthew Klaric, Yaroslav Bulatov, and Brendan McCord. xView: Objects in context in overhead imagery. arXiv preprint arXiv:1802.07856, 2018. 2" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 317, + 140, + 554, + 183 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 140, + 554, + 183 + ], + "spans": [ + { + "bbox": [ + 317, + 140, + 554, + 183 + ], + "type": "text", + "content": "[13] Ke Li, Gang Wan, Gong Cheng, Liqiu Meng, and Junwei Han. Object detection in optical remote sensing images: A survey and a new benchmark. ISPRS Journal of Photogrammetry and Remote Sensing, 159:296-307, 2020. 2, 6, 8" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 317, + 185, + 554, + 228 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 185, + 554, + 228 + ], + "spans": [ + { + "bbox": [ + 317, + 185, + 554, + 228 + ], + "type": "text", + "content": "[14] Wentong Li, Yijie Chen, Kaixuan Hu, and Jianke Zhu. Oriented reppoints for aerial object detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 1829-1838, 2022. 2, 7" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 317, + 228, + 554, + 282 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 228, + 554, + 282 + ], + "spans": [ + { + "bbox": [ + 317, + 228, + 554, + 282 + ], + "type": "text", + "content": "[15] Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2117-2125, 2017. 
6" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 316, + 284, + 554, + 337 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 284, + 554, + 337 + ], + "spans": [ + { + "bbox": [ + 316, + 284, + 554, + 337 + ], + "type": "text", + "content": "[16] Xuebo Liu, Ding Liang, Shi Yan, Dagui Chen, Yu Qiao, and Junjie Yan. Fots: Fast oriented text spotting with a unified network. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5676-5685, 2018. 3" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 317, + 338, + 555, + 393 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 338, + 555, + 393 + ], + "spans": [ + { + "bbox": [ + 317, + 338, + 555, + 393 + ], + "type": "text", + "content": "[17] Junwei Luo, Xue Yang, Yi Yu, Qingyun Li, Junchi Yan, and Yansheng Li. Pointobb: Learning oriented object detection via single point supervision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16730-16740, 2024. 2, 3, 7" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 317, + 394, + 554, + 437 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 394, + 554, + 437 + ], + "spans": [ + { + "bbox": [ + 317, + 394, + 554, + 437 + ], + "type": "text", + "content": "[18] Botao Ren, Xue Yang, Yi Yu, Junwei Luo, and Zhidong Deng. Pointobb-v2: Towards simpler, faster, and stronger single point supervised oriented object detection. arXiv preprint arXiv:2410.08210, 2024. 3" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 317, + 438, + 554, + 492 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 438, + 554, + 492 + ], + "spans": [ + { + "bbox": [ + 317, + 438, + 554, + 492 + ], + "type": "text", + "content": "[19] Zhongzheng Ren, Zhiding Yu, Xiaodong Yang, Ming-Yu Liu, Alexander G Schwing, and Jan Kautz. Ufo 2: A unified framework towards omni-supervised object detection. In European conference on computer vision, pages 288-313. Springer, 2020. 3" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 317, + 494, + 554, + 536 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 494, + 554, + 536 + ], + "spans": [ + { + "bbox": [ + 317, + 494, + 554, + 536 + ], + "type": "text", + "content": "[20] T-YLPG Ross and GKHP Dólar. Focal loss for dense object detection. In proceedings of the IEEE conference on computer vision and pattern recognition, pages 2980-2988, 2017. 2, 6, 7" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 317, + 537, + 554, + 602 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 537, + 554, + 602 + ], + "spans": [ + { + "bbox": [ + 317, + 537, + 554, + 602 + ], + "type": "text", + "content": "[21] Xian Sun, Peijin Wang, Zhiyuan Yan, Feng Xu, Ruiping Wang, Wenhui Diao, Jin Chen, Jihao Li, Yingchao Feng, Tao Xu, et al. Fair1m: A benchmark dataset for fine-grained object recognition in high-resolution remote sensing imagery. ISPRS Journal of Photogrammetry and Remote Sensing, 184:116-130, 2022. 2" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 317, + 604, + 554, + 669 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 604, + 554, + 669 + ], + "spans": [ + { + "bbox": [ + 317, + 604, + 554, + 669 + ], + "type": "text", + "content": "[22] Yongqing Sun, Jie Ran, Feng Yang, Chenqiang Gao, Takayuki Kurozumi, Hideaki Kimata, and Ziqi Ye. 
Oriented object detection for remote sensing images based on weakly supervised learning. In 2021 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), pages 1-6. IEEE, 2021. 2" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 316, + 670, + 554, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 670, + 554, + 713 + ], + "spans": [ + { + "bbox": [ + 316, + 670, + 554, + 713 + ], + "type": "text", + "content": "[23] Zhiwen Tan, Zhiguo Jiang, Chen Guo, and Haopeng Zhang. Wsodet: A weakly supervised oriented detector for aerial object detection. IEEE Transactions on Geoscience and Remote Sensing, 61:1-12, 2023. 2, 3, 7" + } + ] + } + ], + "index": 27 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "text", + "content": "8856" + } + ] + } + ], + "index": 29 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 297, + 715 + ], + "type": "list", + "angle": 0, + "index": 13, + "blocks": [ + { + "bbox": [ + 56, + 72, + 297, + 127 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 72, + 297, + 127 + ], + "spans": [ + { + "bbox": [ + 56, + 72, + 297, + 127 + ], + "type": "text", + "content": "[24] Zhi Tian, Xiangxiang Chu, Xiaoming Wang, Xiaolin Wei, and Chunhua Shen. Fully convolutional one-stage 3d object detection on lidar range images. Advances in Neural Information Processing Systems, 35:34899-34911, 2022. 2, 4, 6, 7" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 129, + 296, + 174 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 129, + 296, + 174 + ], + "spans": [ + { + "bbox": [ + 56, + 129, + 296, + 174 + ], + "type": "text", + "content": "[25] Hao Wang, Zhanchao Huang, Zhengchao Chen, Ying Song, and Wei Li. Multigrained angle representation for remote-sensing object detection. IEEE Transactions on Geoscience and Remote Sensing, 60:1-13, 2022. 2" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 175, + 296, + 218 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 175, + 296, + 218 + ], + "spans": [ + { + "bbox": [ + 56, + 175, + 296, + 218 + ], + "type": "text", + "content": "[26] Jian Wang, Fan Li, and Haixia Bi. Gaussian focal loss: Learning distribution polarized angle prediction for rotated object detection in aerial images. IEEE Transactions on Geoscience and Remote Sensing, 60:1-13, 2022. 2" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 56, + 220, + 296, + 275 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 220, + 296, + 275 + ], + "spans": [ + { + "bbox": [ + 56, + 220, + 296, + 275 + ], + "type": "text", + "content": "[27] Linfei Wang, Yibing Zhan, Xu Lin, Baosheng Yu, Liang Ding, Jianqing Zhu, and Dapeng Tao. Explicit and implicit box equivariance learning for weakly-supervised rotated object detection. IEEE Transactions on Emerging Topics in Computational Intelligence, 2024. 
2" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 56, + 275, + 296, + 319 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 275, + 296, + 319 + ], + "spans": [ + { + "bbox": [ + 56, + 275, + 296, + 319 + ], + "type": "text", + "content": "[28] Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4):600-612, 2004. 6" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 56, + 320, + 296, + 387 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 320, + 296, + 387 + ], + "spans": [ + { + "bbox": [ + 56, + 320, + 296, + 387 + ], + "type": "text", + "content": "[29] Gui-Song Xia, Xiang Bai, Jian Ding, Zhen Zhu, Serge Belongie, Jiebo Luo, Mihai Datcu, Marcello Pelillo, and Liangpei Zhang. Dota: A large-scale dataset for object detection in aerial images. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3974-3983, 2018. 2, 6, 7, 8" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 56, + 388, + 296, + 421 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 388, + 296, + 421 + ], + "spans": [ + { + "bbox": [ + 56, + 388, + 296, + 421 + ], + "type": "text", + "content": "[30] SUN Xian, WANG Zhirui, SUN Yuanrui, DIAO Wenhui, ZHANG Yue, and FU Kun. Air-sarship-1.0: High-resolution sar ship detection dataset. , 8(6):852–863, 2019. 2" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 56, + 422, + 296, + 477 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 422, + 296, + 477 + ], + "spans": [ + { + "bbox": [ + 56, + 422, + 296, + 477 + ], + "type": "text", + "content": "[31] Zhifeng Xiao, Qing Liu, Gefu Tang, and Xiaofang Zhai. Elliptic fourier transformation-based histograms of oriented gradients for rotationally invariant object detection in remote-sensing images. International Journal of Remote Sensing, 36(2):618-644, 2015. 2" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 56, + 478, + 296, + 521 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 478, + 296, + 521 + ], + "spans": [ + { + "bbox": [ + 56, + 478, + 296, + 521 + ], + "type": "text", + "content": "[32] Xingxing Xie, Gong Cheng, Jiabao Wang, Xiwen Yao, and Junwei Han. Oriented r-cnn for object detection. In Proceedings of the IEEE/CVF international conference on computer vision, pages 3520-3529, 2021. 6, 7" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 56, + 522, + 296, + 578 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 522, + 296, + 578 + ], + "spans": [ + { + "bbox": [ + 56, + 522, + 296, + 578 + ], + "type": "text", + "content": "[33] Hang Xu, Xinyuan Liu, Haonan Xu, Yike Ma, Zunjie Zhu, Chenggang Yan, and Feng Dai. Rethinking boundary discontinuity problem for oriented object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 17406-17415, 2024. 2" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 56, + 578, + 296, + 635 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 578, + 296, + 635 + ], + "spans": [ + { + "bbox": [ + 56, + 578, + 296, + 635 + ], + "type": "text", + "content": "[34] Xue Yang and Junchi Yan. Arbitrary-oriented object detection with circular smooth label. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part VIII 16, pages 677-694. 
Springer, 2020. 2" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 56, + 635, + 296, + 690 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 635, + 296, + 690 + ], + "spans": [ + { + "bbox": [ + 56, + 635, + 296, + 690 + ], + "type": "text", + "content": "[35] Xue Yang and Junchi Yan. Arbitrary-oriented object detection with circular smooth label. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part VIII 16, pages 677-694. Springer, 2020. 2" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 56, + 691, + 296, + 715 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 691, + 296, + 715 + ], + "spans": [ + { + "bbox": [ + 56, + 691, + 296, + 715 + ], + "type": "text", + "content": "[36] Xue Yang, Liping Hou, Yue Zhou, Wentao Wang, and Junchi Yan. Dense label encoding for boundary discontinuity free" + } + ] + } + ], + "index": 12 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 73, + 553, + 715 + ], + "type": "list", + "angle": 0, + "index": 27, + "blocks": [ + { + "bbox": [ + 333, + 73, + 553, + 106 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 333, + 73, + 553, + 106 + ], + "spans": [ + { + "bbox": [ + 333, + 73, + 553, + 106 + ], + "type": "text", + "content": "rotation detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 15819-15829, 2021. 2" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 316, + 107, + 553, + 152 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 107, + 553, + 152 + ], + "spans": [ + { + "bbox": [ + 316, + 107, + 553, + 152 + ], + "type": "text", + "content": "[37] Xue Yang, Junchi Yan, Ziming Feng, and Tao He. R3det: Refined single-stage detector with feature refinement for rotating object. Proceedings of the AAAI Conference on Artificial Intelligence, 35(4):3163-3171, 2021. 2" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 316, + 152, + 553, + 207 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 152, + 553, + 207 + ], + "spans": [ + { + "bbox": [ + 316, + 152, + 553, + 207 + ], + "type": "text", + "content": "[38] Xue Yang, Junchi Yan, Qi Ming, Wentao Wang, Xiaopeng Zhang, and Qi Tian. Rethinking rotated object detection with gaussian Wasserstein distance loss. In International conference on machine learning, pages 11830-11841. PMLR, 2021. 2, 7" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 316, + 208, + 553, + 263 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 208, + 553, + 263 + ], + "spans": [ + { + "bbox": [ + 316, + 208, + 553, + 263 + ], + "type": "text", + "content": "[39] Xue Yang, Xiaojiang Yang, Jirui Yang, Qi Ming, Wentao Wang, Qi Tian, and Junchi Yan. Learning high-precision bounding box for rotated object detection via kullback-Leibler divergence. Advances in Neural Information Processing Systems, 34:18381-18394, 2021. 7" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 316, + 264, + 553, + 319 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 264, + 553, + 319 + ], + "spans": [ + { + "bbox": [ + 316, + 264, + 553, + 319 + ], + "type": "text", + "content": "[40] Xue Yang, Gefan Zhang, Xiaojiang Yang, Yue Zhou, Wentao Wang, Jin Tang, Tao He, and Junchi Yan. Detecting rotated objects as gaussian distributions and its 3-d generalization. 
IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(4):4335-4354, 2022." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 316, + 320, + 553, + 365 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 320, + 553, + 365 + ], + "spans": [ + { + "bbox": [ + 316, + 320, + 553, + 365 + ], + "type": "text", + "content": "[41] Xue Yang, Yue Zhou, Gefan Zhang, Jirui Yang, Wentao Wang, Junchi Yan, Xiaopeng Zhang, and Qi Tian. The kfiou loss for rotated object detection. arXiv preprint arXiv:2201.12558, 2022. 2, 7" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 316, + 365, + 553, + 421 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 365, + 553, + 421 + ], + "spans": [ + { + "bbox": [ + 316, + 365, + 553, + 421 + ], + "type": "text", + "content": "[42] Xue Yang, Gefan Zhang, Wentong Li, Yue Zhou, Xuehui Wang, and Junchi Yan. H2RBox: Horizontal box annotation is all you need for oriented object detection. In The Eleventh International Conference on Learning Representations, 2023. 1, 2, 3, 5, 6, 7" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 316, + 422, + 553, + 466 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 422, + 553, + 466 + ], + "spans": [ + { + "bbox": [ + 316, + 422, + 553, + 466 + ], + "type": "text", + "content": "[43] Ze Yang, Shaohui Liu, Han Hu, Liwei Wang, and Stephen Lin. Reppoints: Point set representation for object detection. In Proceedings of the IEEE/CVF international conference on computer vision, pages 9657-9666, 2019. 2" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 316, + 467, + 553, + 533 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 467, + 553, + 533 + ], + "spans": [ + { + "bbox": [ + 316, + 467, + 553, + 533 + ], + "type": "text", + "content": "[44] Xinyi Ying, Li Liu, Yingqian Wang, Ruojing Li, Nuo Chen, Zaiping Lin, Weidong Sheng, and Shilin Zhou. Mapping degeneration meets label evolution: Learning infrared small target detection with single point supervision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15528-15538, 2023. 3" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 316, + 533, + 553, + 578 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 533, + 553, + 578 + ], + "spans": [ + { + "bbox": [ + 316, + 533, + 553, + 578 + ], + "type": "text", + "content": "[45] Jiahui Yu, Yuning Jiang, Zhangyang Wang, Zhimin Cao, and Thomas Huang. Unitbox: An advanced object detection network. In Proceedings of the 24th ACM international conference on Multimedia, pages 516-520, 2016. 6" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 316, + 578, + 553, + 624 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 578, + 553, + 624 + ], + "spans": [ + { + "bbox": [ + 316, + 578, + 553, + 624 + ], + "type": "text", + "content": "[46] Yi Yu and Feipeng Da. Phase-shifting coder: Predicting accurate orientation in oriented object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13354-13363, 2023. 4" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 316, + 624, + 553, + 690 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 624, + 553, + 690 + ], + "spans": [ + { + "bbox": [ + 316, + 624, + 553, + 690 + ], + "type": "text", + "content": "[47] Yi Yu, Xue Yang, Qingyun Li, Feipeng Da, Jifeng Dai, Yu Qiao, and Junchi Yan. 
Point2rbox: Combine knowledge from synthetic visual patterns for end-to-end oriented object detection with single point supervision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16783-16793, 2024. 2, 3, 7" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 316, + 691, + 553, + 715 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 691, + 553, + 715 + ], + "spans": [ + { + "bbox": [ + 316, + 691, + 553, + 715 + ], + "type": "text", + "content": "[48] Yi Yu, Xue Yang, Qingyun Li, Yue Zhou, Feipeng Da, and Junchi Yan. H2rbox-v2: Incorporating symmetry for boost-" + } + ] + } + ], + "index": 26 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "text", + "content": "8857" + } + ] + } + ], + "index": 28 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 294, + 150 + ], + "type": "list", + "angle": 0, + "index": 2, + "blocks": [ + { + "bbox": [ + 77, + 72, + 294, + 106 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 77, + 72, + 294, + 106 + ], + "spans": [ + { + "bbox": [ + 77, + 72, + 294, + 106 + ], + "type": "text", + "content": "ing horizontal box supervised oriented object detection. Advances in Neural Information Processing Systems, 36, 2024. 1, 2, 3, 4, 5, 6, 7" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 107, + 294, + 150 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 107, + 294, + 150 + ], + "spans": [ + { + "bbox": [ + 56, + 107, + 294, + 150 + ], + "type": "text", + "content": "[49] Tingxuan Yue, Yanmei Zhang, Jin Wang, Yanbing Xu, and Pengyun Liu. A weak supervision learning paradigm for oriented ship detection in sar image. IEEE Transactions on Geoscience and Remote Sensing, 2024. 
2" + } + ] + } + ], + "index": 1 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 295, + 749, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 749, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 749, + 315, + 757 + ], + "type": "text", + "content": "8858" + } + ] + } + ], + "index": 3 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 10 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2025/ABC-Former_ Auxiliary Bimodal Cross-domain Transformer with Interactive Channel Attention for White Balance/531c4970-08d2-402e-8972-1a2e62e92c0a_content_list.json b/2025/ABC-Former_ Auxiliary Bimodal Cross-domain Transformer with Interactive Channel Attention for White Balance/531c4970-08d2-402e-8972-1a2e62e92c0a_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..eed8341b63c4ec74052f7f7609acaa22a188cf18 --- /dev/null +++ b/2025/ABC-Former_ Auxiliary Bimodal Cross-domain Transformer with Interactive Channel Attention for White Balance/531c4970-08d2-402e-8972-1a2e62e92c0a_content_list.json @@ -0,0 +1,1503 @@ +[ + { + "type": "text", + "text": "ABC-Former: Auxiliary Bimodal Cross-domain Transformer with Interactive Channel Attention for White Balance", + "text_level": 1, + "bbox": [ + 107, + 128, + 890, + 174 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Yu-Cheng Chiu* Guan-Rong Chen* Zihao Chen Yan-Tsung Peng† National Chengchi University", + "bbox": [ + 259, + 203, + 800, + 239 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "No. 64, Section 2, Zhinan Rd, Wenshan District, Taipei City, 116", + "bbox": [ + 272, + 239, + 789, + 256 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "111753202@nccu.edu.tw 111753139@nccu.edu.tw 113761501@nccu.edu.tw ytpeng@cs.nccu.edu.tw", + "bbox": [ + 101, + 256, + 955, + 273 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 246, + 308, + 326, + 324 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "The primary goal of white balance (WB) for sRGB images is to correct inaccurate color temperatures, ensuring that images display natural, neutral colors. While existing WB methods yield reasonable results, their effectiveness is limited. They either focus solely on global color adjustments applied before the camera-specific image signal processing pipeline or rely on end-to-end models that generate WB outputs without accounting for global color trends, leading to suboptimal correction. To address these limitations, we propose an Auxiliary Bimodal Cross-domain Transformer (ABC-Former) that enhances WB correction by leveraging complementary knowledge from global color information from CIELab and RGB histograms alongside sRGB inputs. By introducing an Interactive Channel Attention (ICA) module to facilitate cross-modality global knowledge transfer, ABC-Former achieves more precise WB correction. Experimental results on benchmark WB datasets show that ABC-Former performs favorably against state-of-the-art WB methods. The source code is available at https://github.com/ytpeng-aimlab/ABC-Former.", + "bbox": [ + 89, + 340, + 483, + 643 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1. 
Introduction", + "text_level": 1, + "bbox": [ + 89, + 672, + 220, + 686 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "White balance (WB) correction ensures consistent and accurate color production across varying lighting conditions. However, camera Image Signal Processing (ISP) can introduce color casts in sRGB images due to inaccurate or customized WB settings applied to raw-RGB inputs. These distortions can degrade the accuracy of tasks like image classification and segmentation, where precise color is crucial [10, 12, 23]. Consequently, WB correction has gained significant research interest.", + "bbox": [ + 89, + 696, + 482, + 833 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Significant effort has been made to improve WB within the camera's ISP pipeline. Raw-WB methods estimate the", + "bbox": [ + 89, + 834, + 483, + 864 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/dc3a462b8e9f99c49514e17bdd64749d9d84856929305498c0a53ba146854fdf.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 517, + 309, + 901, + 401 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/d84a832deedf8ac4ac147fb047a3056f07187691fd7a0655c9afd11b305027a2.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 517, + 401, + 901, + 479 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/3ce855a07bd457cdced9502f45450bc79508178e0c0b9ea05bbe7a156d755c83.jpg", + "image_caption": [ + "Figure 1. (a) Traditional raw-WB methods predict the scene's illuminant, perform Illumination Correction (IC), and generate the sRGB output via camera-specific ISP. (b) DNN-based sRGB-WB methods apply end-to-end models directly to sRGB images for WB correction. (c) ABC-Former improves WB accuracy by converting the input into multiple modalities, enhancing illumination correction through auxiliary and primary models." + ], + "image_footnote": [], + "bbox": [ + 517, + 479, + 901, + 592 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "scene's illuminant to correct color shifts in raw images before further processing. However, due to the non-linear transformations applied by the ISP during rendering, these corrections may not fully compensate for color shifts in the final sRGB output [1].", + "bbox": [ + 511, + 746, + 906, + 823 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Several sRGB-WB methods address color shifts caused by imprecise WB in the camera's ISP, categorized into exemplar-based and DNN-based approaches. Exemplar-based methods like KNN [3] classify images from the Rendered WB dataset and apply the best-matching nonlinear", + "bbox": [ + 511, + 824, + 908, + 902 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "CVF", + "bbox": [ + 106, + 2, + 181, + 42 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. 
Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore.", + "bbox": [ + 236, + 0, + 810, + 46 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "* equal contribution", + "bbox": [ + 112, + 875, + 292, + 887 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "† corresponding author", + "bbox": [ + 112, + 887, + 310, + 900 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "21258", + "bbox": [ + 478, + 944, + 519, + 955 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "mappings for correction. DNN-based methods such as DEEP-WB [2] use CNNs for single-illuminant color correction, while WBFlow [18] extracts pseudo-raw features via a reversible flow model for sRGB correction. SWBNet [19] employs a transformer in the DCT domain to refine color-sensitive features. While effective, these methods fail to fully integrate global color trends and scene information for more comprehensive WB correction.", + "bbox": [ + 89, + 90, + 480, + 210 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Our work utilizes alternative modalities, such as color histograms, to learn global color temperature for effective WB correction. Unlike images, which encode spatial and color information, histograms capture color distribution across channels without spatial details. While per-channel histograms do not fully preserve color relationships between pixels, we can explore both sRGB and CIELab color histograms to go beyond sRGB-input images, extracting global color information. Inspired by [26], we propose ABC-Former, an Auxiliary Bimodal Cross-domain Transformer architecture that integrates global color information from both sRGB and CIELab histograms. It consists of two auxiliary models for histogram-based learning and a target model for sRGB image WB correction, where the sRGB histogram supervises raw color intensity, and CIELab ensures perceptually uniform color distribution. To enhance WB correction in the target sRGB model, we introduce the Interactive Channel Attention (ICA) module, which transfers modality-complementary knowledge using re-parameterization, allowing the target model to adaptively reweight image features for improved accuracy. Our key contributions are listed as follows:", + "bbox": [ + 91, + 212, + 483, + 542 + ], + "page_idx": 1 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- We propose ABC-Former, which leverages histogram-based global color features via auxiliary models to refine WB correction in the target sRGB model.", + "- We introduce the ICA module to facilitate effective cross-modality knowledge transfer, optimizing sRGB feature reweighting for better WB results.", + "- Extensive experiments on benchmark WB datasets demonstrate that ABC-Former outperforms state-of-the-art (SOTA) sRGB-WB methods." + ], + "bbox": [ + 89, + 546, + 482, + 680 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2. Related Works", + "text_level": 1, + "bbox": [ + 89, + 694, + 238, + 709 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Raw-WB Approaches. The WB module in a camera's ISP corrects raw images for accurate color temperature, compensating for lighting variations. Traditional WB methods estimate the global light source color and apply uniform gain coefficients for illumination correction. However, they assume a consistent color temperature across the scene, making them ineffective under mixed lighting. 
Additionally, these methods irreversibly alter raw images, limiting precise sRGB adjustments in post-processing [4, 7-9, 15, 20, 21, 24].", + "bbox": [ + 89, + 719, + 480, + 869 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "sRGB-WB Approaches. To address the shortcomings of traditional WB methods, recent research has explored", + "bbox": [ + 89, + 869, + 480, + 900 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "sRGB-WB approaches, which refine color correction beyond the ISP stage. These methods can be classified into exemplar-based methods [3, 5] and DNN-based methods [18, 19]. Exemplar-based methods apply trained nonlinear mappings for color correction. For example, Affi et al. [3] use histogram features to find images with similar color distributions and derive a correction matrix to adjust colors accordingly. Mixed-WB [5] generates multiple WB versions of an image, averaging their weighting maps to achieve optimal correction. However, these methods rely heavily on predefined training data, making them less adaptable to diverse lighting conditions.", + "bbox": [ + 511, + 90, + 903, + 271 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "DNN-based methods, such as WBFlow [18], use neural flow for reversible color correction, mapping color-cast sRGB images to a pseudo-raw feature space for linear WB. WBFlow also incorporates a camera transformation module for few-shot adaptation, improving generalization ability. SWBNet [19] suppresses temperature-sensitive low-frequency information and employs a contrast loss to align scene features across varying temperatures, enhancing WB stability. It also uses adaptive weights to correct multiple color shifts under mixed lighting. Although these methods are effective, they remain constrained by the sRGB image modality.", + "bbox": [ + 511, + 272, + 903, + 452 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Multimodal Training Approaches. Unimodal training uses data from a single modality, limiting its ability to generalize across different modalities. In contrast, multimodal training enables models to learn from multiple modalities, which can be strongly correlated (paired data) or weakly correlated (irrelevant data). CLIP [22] exemplifies strongly correlated multimodal data, using contrastive learning to align image and text features, requiring paired data. In contrast, M2PT [26] integrates irrelevant data from different modalities by leveraging re-parameterization to transfer knowledge from pre-trained auxiliary models. While this enhances cross-modal learning, it increases data collection costs and pre-training demands.", + "bbox": [ + 511, + 454, + 903, + 648 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Inspired by M2PT, we adopt multimodal training to capture global color information from sRGB images to enhance WB correction. Unlike M2PT, which uses irrelevant data, we train sRGB images to learn corrected histogram-based color features from their color and CIELab histograms. Figure 1 compares conventional DNN-based raw-WB and sRGB-WB methods with the proposed ABC-Former.", + "bbox": [ + 511, + 650, + 903, + 753 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "3. 
Proposed Method", + "text_level": 1, + "bbox": [ + 511, + 768, + 686, + 786 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "The proposed ABC-Former consists of three transformer models: two auxiliary transformers that learn to correct color and CIELab histograms, and a primary transformer that processes the input sRGB image for the final WB correction. Unlike M2PT [26], which tokenizes irrelevant multimodal data for unified processing, our approach leverages sRGB images with their strongly related color information.", + "bbox": [ + 511, + 795, + 903, + 900 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "21259", + "bbox": [ + 478, + 944, + 519, + 955 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "To efficiently transfer complementary color knowledge, we introduce the Interactive Channel Attention (ICA) module, which utilizes condensed histogram-based features to enhance color temperature correction, improving accuracy and visual quality. The overall architecture is shown in Figure 2.", + "bbox": [ + 89, + 90, + 480, + 181 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.1. Auxiliary Model — PDFformer", + "text_level": 1, + "bbox": [ + 89, + 189, + 367, + 205 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Most prior sRGB-WB works [2, 3, 5, 18] focus on local pixel information within the sRGB domain, often overlooking global color temperature and perceptual color relationships, which are crucial for effective WB correction. Therefore, integrating global color information from alternative modalities can enhance accuracy by providing a broader context for color adjustments. Given an input sRGB image $\\mathbf{I} \\in \\mathbb{R}^{H \\times W \\times 3}$ , where $H$ and $W$ denote the image's height and width, and the three channels represent the RGB color components, we first convert it to the CIELab color space, represented as $\\mathbf{I}_{\\mathrm{Lab}} \\in \\mathbb{R}^{H \\times W \\times 3}$ , where the three channels correspond to the $L^*$ , $a^*$ , and $b^*$ components. To efficiently capture global color temperature while maintaining low model complexity, we convert $\\mathbf{I}$ into its probability density function (PDF) representation, $\\mathbf{H}_{\\mathrm{sRGB}} \\in \\mathbb{R}^{L \\times 3}$ , where $L = 256$ represents the number of histogram bins per channel. Similarly, we transform $\\mathbf{I}_{\\mathrm{Lab}}$ into its PDF form, referred to as $\\mathbf{H}_{\\mathrm{Lab}} \\in \\mathbb{R}^{L \\times 3}$ , with $L = 256$ , providing a histogram-based representation for each color space.", + "bbox": [ + 89, + 210, + 483, + 498 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "The ABC-Former framework incorporates two auxiliary models to enhance the target model's performance. These models process global color PDF data, $\\mathbf{H}^{\\mathrm{sRGB}}$ and $\\mathbf{H}^{\\mathrm{Lab}}$ , which serve as distinct inputs. Each auxiliary model, called PDFformer, employs a 1D transformer architecture with a shared structure but separate training. Initially, the input histograms, $\\mathbf{H}^{\\mathrm{sRGB}}$ or $\\mathbf{H}^{\\mathrm{Lab}}$ , pass through a one-dimensional convolutional layer to produce histogram features $\\mathbf{H}_0^{\\mathbf{A}} \\in \\mathbb{R}^{L \\times C}$ , where $\\mathbf{A} \\in [\\mathbf{sRGB}, \\mathbf{Lab}]$ , and $C$ is the feature dimension. 
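As a concrete illustration of the histogram preparation described above, here is a minimal sketch assuming 8-bit inputs and OpenCV's RGB-to-CIELab conversion (the paper does not name a specific conversion library); each channel is binned into L = 256 bins and normalized into a probability density:

```python
import cv2  # assumed; any RGB -> CIELab conversion would do
import numpy as np

def to_pdf_histograms(img_rgb: np.ndarray, bins: int = 256):
    """Build the per-channel PDF histograms H_sRGB and H_Lab, each of
    shape (bins, 3), from an 8-bit sRGB image of shape (H, W, 3)."""
    # OpenCV's 8-bit Lab output scales all three channels into [0, 255]
    img_lab = cv2.cvtColor(img_rgb, cv2.COLOR_RGB2LAB)

    def pdf(img: np.ndarray) -> np.ndarray:
        cols = []
        for c in range(3):
            counts, _ = np.histogram(img[..., c], bins=bins, range=(0, 256))
            cols.append(counts / max(counts.sum(), 1))  # normalize to a PDF
        return np.stack(cols, axis=-1)

    return pdf(img_rgb), pdf(img_lab)
```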
These features are subsequently processed through a U-shape transformer structure with PDFformer blocks that facilitate upsampling and downsampling in the encoding and decoding paths. Each block consists of two sequences of Layer Normalization (LN), Channel Attention (CA) [14], and a feed-forward Multilayer Perceptron (MLP) [11], arranged in the following order:", + "bbox": [ + 89, + 500, + 483, + 741 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\hat {\\mathbf {H}} _ {\\mathbf {i}} ^ {\\mathbf {A}} = \\operatorname {C A} \\left(\\operatorname {L N} \\left(\\mathbf {H} _ {\\mathbf {i} - 1} ^ {\\mathbf {A}}\\right)\\right) + \\mathbf {H} _ {\\mathbf {i} - 1} ^ {\\mathbf {A}}; \\tag {1}\n$$\n", + "text_format": "latex", + "bbox": [ + 151, + 747, + 482, + 773 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {H} _ {\\mathbf {i}} ^ {\\mathbf {A}} = \\operatorname {G E L U} (\\operatorname {M L P} (\\operatorname {L N} (\\hat {\\mathbf {H}} _ {\\mathbf {i}} ^ {\\mathbf {A}}))) + \\hat {\\mathbf {H}} _ {\\mathbf {i}} ^ {\\mathbf {A}},\n$$\n", + "text_format": "latex", + "bbox": [ + 153, + 768, + 419, + 787 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "where $\\mathrm{GELU}(\\cdot)$ denotes the GELU activation function, and $\\mathbf{i}$ is the block index, starting from 1. PDFformer has $K$ PDFformer blocks in the encoding path, followed by a bottleneck stage, which is also a PDFformer block. These blocks are interconnected via downsampling, implemented through a $4\\times 1$ convolution with a stride of 2 and channel doubling. This process yields an output of $\\mathbf{H}_{\\mathbf{K} + 1}^{\\mathbf{A}}\\in$", + "bbox": [ + 89, + 794, + 483, + 902 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "$\\mathbb{R}^{\\frac{L}{2K}\\times 2^K C}$ . Following the bottleneck, the decoding path contains $K$ PDFformer blocks, with upsampling and channel reduction applied between blocks. The first two blocks halve the number of channels, while the remaining blocks quarter them, as in [25]. Upsampling is achieved using a $2\\times 1$ transposed convolution with a stride of 2. Additionally, each block in the decoding path takes the output from the previous block and concatenates it with the corresponding output from the encoding path of the same spatial size. The final output, $\\mathbf{H}^{\\mathbf{A}}\\in \\mathbb{R}^{L\\times 2C}$ , passes through a convolutional layer with a residual connection to the input histograms and a softmax function to produce the corrected color or CIELab histograms $\\mathbf{H}_{\\mathbf{c}}^{\\mathbf{A}}\\in \\mathbb{R}^{L\\times 3}$ as:", + "bbox": [ + 511, + 88, + 903, + 290 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {H} _ {\\mathrm {c}} ^ {\\mathbf {A}} = \\operatorname {S O F T M A X} \\left(\\operatorname {C o n v} _ {3 \\times 1} \\left(\\mathbf {H} _ {2 \\mathrm {K} + 1} ^ {\\mathbf {A}}\\right) + \\mathbf {H} _ {0} ^ {\\mathbf {A}}\\right). \\tag {2}\n$$\n", + "text_format": "latex", + "bbox": [ + 527, + 297, + 903, + 316 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.2. Target model — sRGBformer", + "text_level": 1, + "bbox": [ + 511, + 325, + 777, + 340 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "To achieve a color-calibrated correction, we employ sRGBformer, a vision transformer designed to address color deviations. It systematically processes input features $\\mathbf{X}_0$ to generate the final WB-corrected image. 
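A minimal PyTorch sketch of one PDFformer block following Eq. (1); the squeeze-and-excitation form of the channel attention and the MLP expansion ratio are illustrative assumptions, since only the LN, CA, MLP ordering and the residual connections are specified:

```python
import torch
import torch.nn as nn

class PDFformerBlock(nn.Module):
    """One PDFformer block over histogram features of shape (B, L, C):
    h' = CA(LN(h)) + h, then h = GELU(MLP(LN(h'))) + h' (Eq. 1)."""
    def __init__(self, dim: int, reduction: int = 4, expansion: int = 2):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        # Channel attention: global average pool -> bottleneck convs -> sigmoid
        self.ca = nn.Sequential(
            nn.AdaptiveAvgPool1d(1),
            nn.Conv1d(dim, dim // reduction, kernel_size=1),
            nn.GELU(),
            nn.Conv1d(dim // reduction, dim, kernel_size=1),
            nn.Sigmoid(),
        )
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim * expansion),
            nn.Linear(dim * expansion, dim),
        )
        self.act = nn.GELU()

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        x = self.norm1(h).transpose(1, 2)             # (B, C, L) for 1D pooling/convs
        h = (x * self.ca(x)).transpose(1, 2) + h      # first residual branch
        return self.act(self.mlp(self.norm2(h))) + h  # second residual branch
```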
Similar to PDFformer, sRGBformer employs a U-shaped transformer architecture with upsampling and downsampling. However, it uniquely integrates cross-modality knowledge from auxiliary models, using their corrected global color information as guidance. To enable this, we introduce our proposed Interactive", + "bbox": [ + 511, + 347, + 906, + 481 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Channel Attention (ICA) in each sRGBformer block, incorporating auxiliary model knowledge through dedicated pathways. Each sRGBformer block consists of two sets of LN, ICA, and an MLP, structured as follows:", + "bbox": [ + 511, + 483, + 906, + 542 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\hat {\\mathbf {X}} _ {\\mathbf {i}} = \\operatorname {I C A} (\\ln (\\mathbf {X} _ {\\mathbf {i} - \\mathbf {1}})) + \\mathbf {X} _ {\\mathbf {i} - \\mathbf {1}}; \\tag {3}\n$$\n", + "text_format": "latex", + "bbox": [ + 584, + 554, + 903, + 580 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {X} _ {\\mathbf {i}} = \\operatorname {G E L U} (\\operatorname {M L P} (\\ln (\\hat {\\mathbf {X}} _ {\\mathbf {i}}))) + \\hat {\\mathbf {X}} _ {\\mathbf {i}},\n$$\n", + "text_format": "latex", + "bbox": [ + 588, + 575, + 831, + 593 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "where $\\mathbf{X_i}$ is the image features produced by the sRGBformer block at $i$ -th level. The encoder and decoder each consist of $K$ sRGBformer blocks, connected via a bottleneck stage with one sRGBformer block. In the encoder, downsampling is achieved using a $4\\times 4$ convolution with a stride of 2 and channel doubling, while the decoder applies upsampling via a $2\\times 2$ transposed convolution with a stride of 2 and channel halving. As in PDFformer, each decoder block receives the output from the previous block, concatenated with the corresponding output from the encoder of the same spatial size. The final decoder output $\\mathbf{X_{2K + 1}}\\in \\mathbb{R}^{H\\times W\\times 2C}$ passes through a $3\\times 3$ convolutional layer with a residual connection to the input, producing the final WB-corrected image, $\\mathbf{X_c}\\in \\mathbb{R}^{H\\times W\\times 3}$ .", + "bbox": [ + 511, + 599, + 905, + 809 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Interactive Channel Attention. The goal of ICA is to facilitate knowledge transfer from auxiliary models to sRGBformer, enhancing WB correction. In sRGBformer, each encoder and decoder block is equipped with an ICA module to correspond to the respective blocks in the encoders and decoders of the auxiliary models. First, $\\mathbf{X}_{\\mathrm{i}}$ is condensed", + "bbox": [ + 511, + 810, + 905, + 898 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "21260", + "bbox": [ + 478, + 944, + 519, + 955 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/cbb3ff0b83c41d00a1bf7c82b0173d3179f0e6e159f7b222f16f183168a5fcbd.jpg", + "image_caption": [ + "Figure 2. The ABC-Former framework consists of two key components:: Auxiliary models (PDFformers) and a Target model (sRG-Bformer). The auxiliary models process sRGB and CIELab histograms to learn color features from different modalities, while the ICA module in the target model integrates this information to generate the final WB correction." 
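A hedged sketch of the ICA module: the target branch computes its own channel-weight vector as in Eq. (4) below, the auxiliary CA vectors from the two PDFformers are scaled by the learnable parameters lambda_i^sRGB and lambda_i^Lab, and the combined weights rescale the image features as in Eq. (5). Tensor shapes and the 1x1 convolution are assumptions for illustration:

```python
import torch
import torch.nn as nn

class InteractiveChannelAttention(nn.Module):
    """ICA: blend the target channel weights with auxiliary CA vectors
    via learnable per-level scalars, then reweight the sRGB features."""
    def __init__(self, dim: int):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)           # Avg(.) over spatial dims
        self.conv = nn.Conv2d(dim, dim, kernel_size=1)
        self.lam_srgb = nn.Parameter(torch.zeros(1))  # lambda_i^sRGB
        self.lam_lab = nn.Parameter(torch.zeros(1))   # lambda_i^Lab

    def forward(self, x, w_srgb, w_lab):
        # x: (B, C, H, W); w_srgb, w_lab: (B, C, 1, 1) CA vectors from PDFformers
        w_t = torch.sigmoid(self.conv(self.pool(x)))                 # Eq. (4)
        w_total = w_t + self.lam_lab * w_lab + self.lam_srgb * w_srgb
        return x * w_total                                           # Eq. (5)
```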
+ ], + "image_footnote": [], + "bbox": [ + 96, + 88, + 888, + 510 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "into a vector with one value per channel by global average pooling (Avg.), followed by a convolution and sigmoid activation, generating a channel-wise weighting vector $\\mathbf{W}_{\\mathrm{i}}^{\\mathrm{T}} \\in \\mathbb{R}^{1 \\times 1 \\times C}$ at the $i$ -th level of the sRGBformer block. Similarly, weighting vectors $\\mathbf{W}_{\\mathrm{i}}^{\\mathrm{sRGB}}$ , $\\mathbf{W}_{\\mathrm{i}}^{\\mathrm{Lab}} \\in \\mathbb{R}^{1 \\times 1 \\times C}$ are extracted by feeding $\\mathbf{H}_{\\mathrm{i}}^{\\mathrm{sRGB}}$ and $\\mathbf{H}_{\\mathrm{i}}^{\\mathrm{Lab}}$ into the CA module within their corresponding PDFformer's blocks, which include the average pooling $\\mathrm{Avg}(\\cdot)$ , convolution, the unsqueeze operation, and the excitation operation (sigmoid activation) at the $i$ -th level. Next, to integrate cross-modal knowledge, we apply cross-modal re-parameterization [26], introducing learnable parameters $\\lambda_{\\mathrm{i}}^{\\mathrm{sRGB}}$ and $\\lambda_{\\mathrm{i}}^{\\mathrm{Lab}}$ to adjust $\\mathbf{W}_{\\mathrm{i}}^{\\mathrm{sRGB}}$ and $\\mathbf{W}_{\\mathrm{i}}^{\\mathrm{Lab}}$ , respectively. These are then combined with $\\mathbf{W}_{\\mathrm{i}}^{\\mathrm{T}}$ along the channel dimension to obtain $\\mathbf{W}_{\\mathrm{total}}$ as:", + "bbox": [ + 88, + 604, + 486, + 832 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {W} _ {\\mathbf {i}} ^ {\\mathbf {T}} = \\text {S i g m o i d} (\\mathbf {C o n v} (\\operatorname {A v g} (\\mathbf {X} _ {\\mathbf {i}}))) \\tag {4}\n$$\n", + "text_format": "latex", + "bbox": [ + 127, + 837, + 482, + 861 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {W} _ {\\mathrm {i}} ^ {\\text {t o t a l}} = \\mathbf {W} _ {\\mathrm {i}} ^ {\\mathrm {T}} + \\lambda_ {\\mathrm {i}} ^ {\\text {L a b}} \\mathbf {W} _ {\\mathrm {i}} ^ {\\text {L a b}} + \\lambda_ {\\mathrm {i}} ^ {\\text {s R G B}} \\mathbf {W} _ {\\mathrm {i}} ^ {\\text {s R G B}}.\n$$\n", + "text_format": "latex", + "bbox": [ + 109, + 857, + 480, + 876 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "At last, $\\mathbf{X_i}$ is channel-wise re-weighted by $\\mathbf{W_i^{total}}$ to gen-", + "bbox": [ + 89, + 883, + 482, + 902 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "erate the refined sRGB features $\\tilde{\\mathbf{X}}_{\\mathrm{i}}$ as:", + "bbox": [ + 511, + 603, + 769, + 619 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\tilde {\\mathbf {X}} _ {\\mathbf {i}} = \\mathbb {F} _ {\\text {s c a l e}} \\left(\\mathbf {W} _ {\\mathbf {i}} ^ {\\text {t o t a l}}, \\mathbf {X} _ {\\mathbf {i}}\\right), \\tag {5}\n$$\n", + "text_format": "latex", + "bbox": [ + 617, + 631, + 906, + 651 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $\\mathbb{F}_{scale}(\\cdot, \\cdot)$ represents the channel-wise multiplication function, applying scalar weights to the corresponding feature maps. Through ICA, we leverage calibrated global color information from modality-specific histogram-based features, enabling effective knowledge transfer for improved WB correction.", + "bbox": [ + 511, + 662, + 906, + 755 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.3. 
Loss Function", + "text_level": 1, + "bbox": [ + 511, + 767, + 660, + 781 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "The proposed framework optimizes two auxiliary models and one target model. Specifically, two auxiliary PDF-formers process histogram inputs in a PDF format from either the sRGB or CIELab domains, while the target model, sRGBformer, is responsible for the final WB sRGB output. The cooperative interaction between the auxiliary and target models is crucial for achieving accurate WB in the output sRGB image. To train the auxiliary models, we use L2 loss", + "bbox": [ + 511, + 787, + 908, + 902 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "21261", + "bbox": [ + 478, + 944, + 517, + 955 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "to measure the difference between the PDFs of the predicted and ground-truth color channel histograms, formulated as:", + "bbox": [ + 89, + 90, + 482, + 122 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\begin{array}{l} \\mathcal {L} _ {\\mathrm {p d f}} ^ {s R G B} = \\left\\| \\mathbf {H} _ {\\mathbf {c}} ^ {\\mathbf {R}} - \\mathbf {H} _ {\\mathbf {g t}} ^ {\\mathbf {R}} \\right\\| _ {2} + \\left\\| \\mathbf {H} _ {\\mathbf {c}} ^ {\\mathbf {G}} - \\mathbf {H} _ {\\mathbf {g t}} ^ {\\mathbf {G}} \\right\\| _ {2} + \\left\\| \\mathbf {H} _ {\\mathbf {c}} ^ {\\mathbf {B}} - \\mathbf {H} _ {\\mathbf {g t}} ^ {\\mathbf {B}} \\right\\| _ {2}; \\\\ \\mathcal {L} _ {\\mathrm {p d f}} ^ {L a b} = \\left\\| \\mathbf {H} _ {\\mathbf {c}} ^ {\\mathbf {L}} - \\mathbf {H} _ {\\mathbf {g t}} ^ {\\mathbf {L}} \\right\\| _ {2} + \\left\\| \\mathbf {H} _ {\\mathbf {c}} ^ {\\mathbf {a}} - \\mathbf {H} _ {\\mathbf {g t}} ^ {\\mathbf {a}} \\right\\| _ {2} + \\left\\| \\mathbf {H} _ {\\mathbf {c}} ^ {\\mathbf {b}} - \\mathbf {H} _ {\\mathbf {g t}} ^ {\\mathbf {b}} \\right\\| _ {2}, \\tag {6} \\\\ \\end{array}\n$$\n", + "text_format": "latex", + "bbox": [ + 94, + 126, + 480, + 176 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $\\mathbf{H}_{\\mathrm{c}}^{\\mathrm{sRGB}} = [\\mathbf{H}_{\\mathrm{c}}^{\\mathrm{R}};\\mathbf{H}_{\\mathrm{c}}^{\\mathrm{G}};\\mathbf{H}_{\\mathrm{c}}^{\\mathrm{B}}]$ and $\\mathbf{H}_{\\mathrm{c}}^{\\mathrm{Lab}} = [\\mathbf{H}_{\\mathrm{c}}^{\\mathrm{L}};\\mathbf{H}_{\\mathrm{c}}^{\\mathrm{a}};\\mathbf{H}_{\\mathrm{c}}^{\\mathrm{b}}]$ denote the corrected RGB and CIELab histograms, respectively, as estimated by PDFformers. $\\mathbf{H}_{\\mathrm{gt}}^{\\mathrm{sRGB}}$ and $\\mathbf{H}_{\\mathrm{gt}}^{\\mathrm{Lab}}$ represent the ground-truth histograms. We adopt L2 loss as it penalizes large deviations more heavily, encouraging the model to align histogram bins across the distribution evenly.", + "bbox": [ + 89, + 176, + 480, + 267 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "For sRGBformer, we use the L1 loss for training, as follows:", + "bbox": [ + 89, + 267, + 480, + 294 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal {L} _ {\\mathrm {r e c}} = \\left\\| \\mathbf {X} _ {\\mathbf {c}} - \\mathbf {X} _ {\\mathbf {g t}} \\right\\| _ {1}, \\tag {7}\n$$\n", + "text_format": "latex", + "bbox": [ + 210, + 297, + 480, + 314 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $\\mathbf{X}_{\\mathbf{c}}$ is the estimated WB image produced by sRGBformer, and $\\mathbf{X}_{\\mathbf{gt}}$ is the ground truth. 
The total loss is defined as $\\mathcal{L}_{\\mathrm{total}} = \\mathcal{L}_{\\mathrm{pdf}}^{sRGB} + \\mathcal{L}_{\\mathrm{pdf}}^{Lab} + \\mathcal{L}_{\\mathrm{rec}}$ .", + "bbox": [ + 89, + 318, + 482, + 367 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4. Experimental Results", + "text_level": 1, + "bbox": [ + 89, + 377, + 295, + 393 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Datasets. The commonly used public dataset, the Rendered WB dataset Set1 [3], is divided into three non-overlapping folds. For training, we randomly selected 12,000 rendered sRGB images with different WB settings from two of the folds. For testing, we use the third fold of Set1, also known as Set1-Test (21,046 images), as well as other datasets that have no overlap in scenes or cameras with the training data: Set2 of the Rendered WB dataset (2,881 images) [3] and the Rendered Cube+ dataset (10,242 images) [3, 6]. These datasets serve as benchmarks to evaluate the WB correction performance of our ABC-Former.", + "bbox": [ + 89, + 402, + 482, + 566 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Evaluation Metrics. We evaluated our results using three widely used metrics: Mean Squared Error (MSE), Mean Angular Error (MAE), and $\\Delta E$ 2000 [13], which quantify the differences between the predicted WB images and the ground truth. For each metric, we reported the mean, first quantile (Q1), median (Q2), and upper quantile (Q3) of the error. Lower values in these metrics indicate better WB correction performance, consistent with those used in recent works [2, 3, 5, 18, 19].", + "bbox": [ + 89, + 569, + 482, + 703 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Implementation Details. We implemented ABC-Former using PyTorch. During training, we optimize both the auxiliary and target models simultaneously over 350 epochs using the Adam optimizer [17] with $\\beta_{1} = 0.5$ and $\\beta_{2} = 0.999$ for each model. The learning rate is set to $2 \\times 10^{-4}$ , and the embedding feature dimension to 16. For training, we randomly cropped four $128 \\times 128$ patches from each training image as input. Additionally, we apply geometric transformations, including rotation and flipping, to augment the data.", + "bbox": [ + 89, + 704, + 482, + 853 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Quantitative Experimental Results. The quantitative results in Table 1 show that ABC-Former performs favorably against five SOTA methods [2, 3, 5, 18, 19] across three", + "bbox": [ + 89, + 854, + 482, + 900 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "public benchmark datasets [3, 6]. The compared methods were evaluated using either their publicly available pretrained models or results directly cited from their respective publications. However, SWBNet [19] was retrained to be tested on the Set1-Test and Set2 datasets, as its code and original results for these datasets were not provided (marked with * in Table 1). SWBNet's scores on the Cube+ Dataset were taken from the original paper. On the Rendered WB Dataset Set1-Test and Set2, ABC-Former achieved the best performance on MSE, MAE, and $\\Delta E$ 2000, indicating that it effectively removes color casts from images and achieves superior WB correction. On the Rendered Cube+ dataset, ABC-Former also delivered superior results, with the lowest mean scores in MSE, MAE, and $\\Delta E$ 2000. Additionally, it maintains a competitive model size, only larger than Deep-WB [2] and Mixed-WB [5]. 
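A compact sketch of the training objective defined above, assuming (B, 256, 3) histogram tensors and (B, 3, H, W) images; the per-channel L2 terms of Eq. (6) are written here in squared-error form, which differs from the norm only by a monotone rescaling:

```python
import torch.nn.functional as F

def abc_former_loss(h_c_srgb, h_gt_srgb, h_c_lab, h_gt_lab, x_c, x_gt):
    """L_total = L_pdf^sRGB + L_pdf^Lab + L_rec (Eqs. 6-7), equally weighted."""
    l_pdf_srgb = sum(F.mse_loss(h_c_srgb[..., c], h_gt_srgb[..., c]) for c in range(3))
    l_pdf_lab = sum(F.mse_loss(h_c_lab[..., c], h_gt_lab[..., c]) for c in range(3))
    l_rec = F.l1_loss(x_c, x_gt)  # L1 reconstruction for the sRGBformer output
    return l_pdf_srgb + l_pdf_lab + l_rec
```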
These results demonstrate ABC-Former's efficiency in achieving efficient WB correction across various datasets without significantly increasing model complexity, showcasing its robustness and generalization capabilities.", + "bbox": [ + 511, + 90, + 903, + 393 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Qualitative Experimental Results. We present the qualitative comparison results on the Rendered WB and Rendered Cube+ datasets in Figure 3 and Figure 4. Analyzing the color correction performance of different methods, we see that while KNN [3], Deep-WB [2], Mixed-WB [5], and WBFlow [18] generally reduce color casts, they often exhibit color inconsistencies across different regions of the image. For example, in Figure 3, these methods tend to correct the colors of objects while neglecting the sky's color accuracy, resulting in an undesirable yellow tint. Similarly, in Figure 4, strong color casts lead to suboptimal WB correction, often leaving the image with an overall blue or yellow tone. In contrast, our method, guided by corrected global color information from multiple modalities, achieves a more balanced and consistent color correction across the entire image, producing natural and harmonious results.", + "bbox": [ + 511, + 396, + 903, + 637 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "In addition, Figure 5 demonstrates a comparative analysis of global color distribution accuracy in the average WB results across different methods. The Bhattacharyya coefficient, as introduced in [16], is employed to measure the similarity between the red, green, and blue histograms of the processed images and those of the ground-truth WB images across three benchmark datasets. This coefficient ranges from 0 to 1, with higher values indicating a closer match to the ground truth color distributions. The results show that ABC-Former achieves a higher similarity compared to other methods. Additionally, the visualization at the bottom of the figure illustrates an example of the color distributions in WB results obtained by the compared methods, highlighting that ABC-Former produces color distributions more consistent with the ground truth.", + "bbox": [ + 511, + 641, + 903, + 867 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Ablation Studies. In the ablation studies, we investigated the impact of various combinations of modalities used for", + "bbox": [ + 511, + 869, + 903, + 900 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "21262", + "bbox": [ + 478, + 944, + 517, + 955 + ], + "page_idx": 4 + }, + { + "type": "table", + "img_path": "images/23df1fe63eb1febd77ca9f8deaa439d543fbebe15ad9db50d06729cd6e2ae67c.jpg", + "table_caption": [ + "Table 1. The quantitative results of ABC-Former and competing WB methods are evaluated on three public benchmark datasets, including Rendered WB Dataset (Set1-Test and Set2) [3], and Rendered Cube+ Dataset [3, 6]. The best results are highlighted in red, while the second-best results are highlighted in blue." + ], + "table_footnote": [], + "table_body": "
<table><tr><td rowspan='2'>Method</td><td colspan='4'>MSE ↓</td><td colspan='4'>MAE ↓</td><td colspan='4'>ΔE 2000 ↓</td><td rowspan='2'>Size (MB)</td></tr>
<tr><td>Mean</td><td>Q1</td><td>Q2</td><td>Q3</td><td>Mean</td><td>Q1</td><td>Q2</td><td>Q3</td><td>Mean</td><td>Q1</td><td>Q2</td><td>Q3</td></tr>
<tr><td colspan='14'>Rendered WB Dataset: Set1-Test (21,046 images) [3]</td></tr>
<tr><td>KNN [3]</td><td>77.49</td><td>13.74</td><td>39.62</td><td>94.01</td><td>3.06°</td><td>1.74°</td><td>2.54°</td><td>3.76°</td><td>3.58</td><td>2.07</td><td>3.09</td><td>4.55</td><td>21.8</td></tr>
<tr><td>Deep-WB [2]</td><td>82.55</td><td>13.19</td><td>42.77</td><td>102.09</td><td>3.12°</td><td>1.88°</td><td>2.70°</td><td>3.84°</td><td>3.77</td><td>2.16</td><td>3.30</td><td>4.86</td><td>16.7</td></tr>
<tr><td>Mixed-WB [5]</td><td>142.25</td><td>26.81</td><td>67.17</td><td>164.66</td><td>4.07°</td><td>2.64°</td><td>3.68°</td><td>5.16°</td><td>4.55</td><td>3.00</td><td>4.15</td><td>5.63</td><td>5.1</td></tr>
<tr><td>WBFlow [18]</td><td>78.89</td><td>12.99</td><td>35.09</td><td>79.35</td><td>2.67°</td><td>1.73°</td><td>2.39°</td><td>3.24°</td><td>3.13</td><td>1.92</td><td>2.79</td><td>3.94</td><td>30.2</td></tr>
<tr><td>SWBNet* [19]</td><td>111.62</td><td>20.61</td><td>60.68</td><td>137.91</td><td>4.11°</td><td>2.56°</td><td>3.75°</td><td>5.22°</td><td>4.54</td><td>2.73</td><td>4.16</td><td>5.86</td><td>258.8</td></tr>
<tr><td>ABC-Former</td><td>20.47</td><td>4.65</td><td>10.02</td><td>21.05</td><td>1.99°</td><td>1.25°</td><td>1.73°</td><td>2.33°</td><td>2.18</td><td>1.38</td><td>1.86</td><td>2.59</td><td>20.2</td></tr>
<tr><td colspan='14'>Rendered WB Dataset: Set2 (2,881 images) [3]</td></tr>
<tr><td>KNN [3]</td><td>171.09</td><td>37.04</td><td>87.04</td><td>190.88</td><td>4.48°</td><td>2.26°</td><td>3.64°</td><td>5.95°</td><td>5.60</td><td>3.43</td><td>4.90</td><td>7.06</td><td>21.8</td></tr>
<tr><td>Deep-WB [2]</td><td>124.07</td><td>30.13</td><td>76.32</td><td>154.44</td><td>3.75°</td><td>2.02°</td><td>3.08°</td><td>4.72°</td><td>4.90</td><td>3.13</td><td>4.35</td><td>6.08</td><td>16.7</td></tr>
<tr><td>Mixed-WB [5]</td><td>188.76</td><td>48.64</td><td>112.32</td><td>219.91</td><td>4.92°</td><td>2.69°</td><td>4.10°</td><td>6.37°</td><td>6.05</td><td>3.45</td><td>4.92</td><td>7.20</td><td>5.1</td></tr>
<tr><td>WBFlow [18]</td><td>117.60</td><td>31.25</td><td>61.68</td><td>143.90</td><td>3.51°</td><td>1.93°</td><td>2.92°</td><td>4.47°</td><td>4.64</td><td>3.16</td><td>4.07</td><td>5.56</td><td>30.2</td></tr>
<tr><td>SWBNet* [19]</td><td>219.02</td><td>55.45</td><td>113.98</td><td>236.25</td><td>5.46°</td><td>3.45°</td><td>4.78°</td><td>6.63°</td><td>6.51</td><td>4.39</td><td>5.84</td><td>8.08</td><td>258.8</td></tr>
<tr><td>ABC-Former</td><td>104.31</td><td>25.55</td><td>58.61</td><td>132.90</td><td>3.39°</td><td>1.87°</td><td>2.73°</td><td>4.30°</td><td>4.56</td><td>2.97</td><td>4.13</td><td>5.63</td><td>20.2</td></tr>
<tr><td colspan='14'>Rendered Cube+ Dataset (10,242 images) [3, 6]</td></tr>
<tr><td>KNN [3]</td><td>194.98</td><td>27.43</td><td>57.08</td><td>118.21</td><td>4.12°</td><td>1.96°</td><td>3.17°</td><td>5.04°</td><td>5.68</td><td>3.22</td><td>4.61</td><td>6.70</td><td>21.8</td></tr>
<tr><td>Deep-WB [2]</td><td>80.46</td><td>15.43</td><td>33.88</td><td>74.42</td><td>3.45°</td><td>1.87°</td><td>2.82°</td><td>4.26°</td><td>4.59</td><td>2.68</td><td>3.81</td><td>5.53</td><td>16.7</td></tr>
<tr><td>Mixed-WB [5]</td><td>161.80</td><td>16.96</td><td>19.33</td><td>90.81</td><td>4.05°</td><td>1.40°</td><td>2.12°</td><td>4.88°</td><td>4.89</td><td>2.16</td><td>3.10</td><td>6.78</td><td>5.1</td></tr>
<tr><td>WBFlow [18]</td><td>75.39</td><td>14.22</td><td>30.90</td><td>72.91</td><td>3.34°</td><td>1.87°</td><td>2.82°</td><td>4.11°</td><td>4.28</td><td>2.68</td><td>3.77</td><td>5.21</td><td>30.2</td></tr>
<tr><td>SWBNet [19]</td><td>74.35</td><td>20.46</td><td>40.04</td><td>86.95</td><td>3.15°</td><td>1.33°</td><td>2.09°</td><td>4.12°</td><td>4.28</td><td>2.40</td><td>3.56</td><td>5.09</td><td>258.8</td></tr>
<tr><td>ABC-Former</td><td>60.60</td><td>12.15</td><td>26.92</td><td>57.20</td><td>2.99°</td><td>1.63°</td><td>2.45°</td><td>3.69°</td><td>3.95</td><td>2.35</td><td>3.40</td><td>4.86</td><td>20.2</td></tr>
</table>
", + "bbox": [ + 102, + 142, + 890, + 565 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/c3378122664307760f9e6a413a22ad7619238a7db1f86bea05f23afd572a65da.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 96, + 580, + 210, + 642 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/551cab629c1264d4c5034413d796f07d772303580bd0321e0bc3ef9de370ef9e.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 96, + 642, + 210, + 723 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/69c9b7b603dfb72472638fb5c3629a1a454dae2c9ace30ec0928f92db10b3bb6.jpg", + "image_caption": [ + "Input" + ], + "image_footnote": [], + "bbox": [ + 96, + 723, + 210, + 799 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/90b21a6f37be26da2b82d186d668f88fc9a76970239aaf02882730645069fbc3.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 212, + 580, + 326, + 641 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/274b0d8ed5c490dbb989c157e6cd37ba4165830a18f653d82013dcf2c1e3f5eb.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 212, + 642, + 326, + 722 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/9debd5d7045bff73cf6f43ae9b287aa5aa349b99612a69c472fac61f555c7ed0.jpg", + "image_caption": [ + "KNN" + ], + "image_footnote": [], + "bbox": [ + 212, + 723, + 326, + 799 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/a8d6b41038be2e3c006563195444320ad8b2f8cb1b0c1b17352152cf76109389.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 328, + 580, + 441, + 641 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/52fe869da81c3a8d34549519c14dd5829523b1f8e1cd5d824fef1f26af155959.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 328, + 642, + 441, + 722 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/212fdce970499e8169dde75fe208fecc2c63f118d5034dc57c0bcd20b4bd2550.jpg", + "image_caption": [ + "Deep-WB" + ], + "image_footnote": [], + "bbox": [ + 328, + 723, + 441, + 799 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/4a8103d7ed0d9d2271b26bf02ccc13fea9c3c9e460305ae0abcfc9b43e8c6464.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 442, + 580, + 555, + 641 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/4a7f1af5212f11c2f2a2da4296eb909272aa9df93e68395c2042987e8021674e.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 442, + 642, + 553, + 722 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/a116ce8cadb8fe19d639c1c63f0e2fa9a8df48fba2e99a8804f20bd49da8b763.jpg", + "image_caption": [ + "Mixed-WB", + "Figure 3. Qualitative comparisons with other sRGB-WB methods on the Rendered WB dataset [3], with the $\\Delta$ E 2000 indicated in the bottom-right corner of each image." 
+ ], + "image_footnote": [], + "bbox": [ + 442, + 723, + 553, + 799 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/816bbb8567944e4793334ba775b8e11f7738c51eb1f29c8c2cf843a1a8b1c998.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 557, + 580, + 669, + 641 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/1578e6473e3d4095fc6dc20e95f44b59944d7489d6bab7b7de44d58e920161b2.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 557, + 642, + 669, + 722 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/dba71879c777c4eaa2c389315803fc5fa1d3101f37d0a5fe8ca6be89f8c6d15f.jpg", + "image_caption": [ + "WBFlow" + ], + "image_footnote": [], + "bbox": [ + 557, + 723, + 669, + 799 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/6dcc78a119e3152c36728c6f826b20822c372edd29411248d6e2e8f6e6323eb6.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 671, + 580, + 785, + 641 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/61d8524cbfadf25a49d98c82d0001471003fc6001d52d82febe8be9c7fa8fe37.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 671, + 642, + 785, + 722 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/5e84cb823eda855f16fce087b5ca4795849e080d29892d97548f2378fee8f8fa.jpg", + "image_caption": [ + "Ours" + ], + "image_footnote": [], + "bbox": [ + 671, + 723, + 785, + 799 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/4c130dcc5f1dcc973e9fde452a1b89ff9d9e210238fe8db29599d8618bcac07b.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 787, + 580, + 900, + 641 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/f4794eb6ce4ff7737ac2b06d6440e95c88f0509c01d628b3e8bcffe08da8291f.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 787, + 642, + 900, + 722 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/069b3937070a01f3592c6d0ba272852ce9fc833a8312857090d40dd911351cdf.jpg", + "image_caption": [ + "GT" + ], + "image_footnote": [], + "bbox": [ + 787, + 723, + 900, + 799 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "21263", + "bbox": [ + 478, + 945, + 517, + 955 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/687b9d42ba6f432d13f1b3edaaa2f1bf690bf6d0b0bb86325878ad1c95cb4593.jpg", + "image_caption": [ + "Figure 4. Qualitative comparisons with other sRGB-WB methods on the Rendered Cube+ dataset [3, 6], with the $\\Delta$ E 2000 displayed in the bottom-right corner of each image." + ], + "image_footnote": [], + "bbox": [ + 96, + 92, + 903, + 294 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/7b18e71146036143bf058e4ad6b666f46af9f089e0d2deea7bb6e19dfa269b42.jpg", + "image_caption": [ + "Figure 5. Evaluation of the Bhattacharyya coefficient [16] for color histograms across three benchmark datasets [3, 6], showing global color accuracy for red, green, and blue histograms in sRGB-WB methods. ABC-Former achieves more consistent color distributions by leveraging global color temperature features from both sRGB and CIELab histograms." + ], + "image_footnote": [], + "bbox": [ + 122, + 356, + 859, + 645 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "WB correction on the Rendered Cube+ dataset. We report the mean values across three evaluation metrics, along with model sizes for each combination of auxiliary models. 
As shown in Table 2, using only the target model (sRGB-former) without additional guidance of global color information from other modalities results in suboptimal WB accuracy. Adding a single auxiliary model (e.g., $\\mathrm{PDF}_{\\mathrm{sRGB}} + \\mathrm{sRGB}$ or $\\mathrm{PDF}_{\\mathrm{Lab}} + \\mathrm{sRGB}$ ) improves WB performance over the target model alone. However, utilizing a single auxiliary model to jointly learn both sRGB and CIELab histograms $(\\mathrm{PDF}_{\\mathrm{sRGB:Lab}} + \\mathrm{sRGB})$ proves less effective, as a single", + "bbox": [ + 89, + 726, + 485, + 893 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "model struggles to disentangle the information from both modalities. Our full ABC-Former design leverages global color temperature features from multiple modalities through two auxiliary models to guide color adjustment, achieving the highest WB accuracy. To ensure a fair comparison across these combinations, we matched the model size to that of ABC-Former by doubling the bottleneck channels in sRGBformer (Baseline) and evenly increasing channels across layers when using a single auxiliary model.", + "bbox": [ + 511, + 726, + 906, + 864 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Table 3 compares different loss functions that train auxiliary models for learning sRGB and CIELab histograms.", + "bbox": [ + 511, + 869, + 906, + 901 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "21264", + "bbox": [ + 478, + 944, + 519, + 957 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/7e65e9639d7993366a321ac6b1a973ba94aee0bf85eb3f2d8b8f3706006c3a30.jpg", + "table_caption": [ + "Table 2. Ablation studies of ABC-Former w/ and w/o guidance from different modalities on the Rendered Cube+ dataset [3, 6]. Here, sRGB denotes sRGBformer, while $\\mathrm{PDF}_{\\mathrm{sRGB}}$ , and $\\mathrm{PDF}_{\\mathrm{Lab}}$ represent auxiliary PDFformer models for sRGB and CIELab histograms, respectively." + ], + "table_footnote": [], + "table_body": "
<table><tr><td>Modalities</td><td>MSE ↓</td><td>MAE ↓</td><td>ΔE 2000 ↓</td><td>Size (MB)</td></tr>
<tr><td>sRGB</td><td>76.56</td><td>3.35°</td><td>4.31</td><td>20.4</td></tr>
<tr><td>PDF<sub>Lab</sub> + sRGB</td><td>73.35</td><td>3.26°</td><td>4.20</td><td>20.4</td></tr>
<tr><td>PDF<sub>sRGB</sub> + sRGB</td><td>68.65</td><td>3.12°</td><td>4.08</td><td>20.4</td></tr>
<tr><td>PDF<sub>sRGB:Lab</sub> + sRGB</td><td>72.38</td><td>3.38°</td><td>4.38</td><td>20.4</td></tr>
<tr><td>PDF<sub>sRGB</sub> + PDF<sub>Lab</sub> + sRGB</td><td>60.60</td><td>2.99°</td><td>3.95</td><td>20.2</td></tr>
</table>
", + "bbox": [ + 91, + 169, + 480, + 263 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/2a4f9d107b8613e0c5607cd100a8065ef666dcaf1ec14f4dc087bd31c550f223.jpg", + "table_caption": [ + "Table 3. Ablation studies on the loss function used to train auxiliary models for learning sRGB and CIELab histograms. We compare the KL divergence, Wasserstein distance, and our chosen L2 loss on the Rendered Cube+ dataset [3, 6]." + ], + "table_footnote": [], + "table_body": "
<table><tr><td>Loss function</td><td>MSE ↓</td><td>MAE ↓</td><td>ΔE 2000 ↓</td><td>Size (MB)</td></tr>
<tr><td>KL divergence</td><td>72.67</td><td>3.29°</td><td>4.17</td><td>20.2</td></tr>
<tr><td>Wasserstein distance</td><td>70.22</td><td>3.12°</td><td>4.15</td><td>20.2</td></tr>
<tr><td>L2 loss (Proposed)</td><td>60.60</td><td>2.99°</td><td>3.95</td><td>20.2</td></tr>
</table>
", + "bbox": [ + 109, + 347, + 457, + 415 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/8c6bd6d5da3fa66640851ecbe355f5e557fc1e4c709adbf725a67858f767c9e6.jpg", + "table_caption": [ + "Table 4. WB manipulation on the Rendered Cube+ dataset [3, 6]." + ], + "table_footnote": [], + "table_body": "
<table><tr><th rowspan="2">Method</th><th colspan="4">MSE ↓</th><th colspan="4">MAE ↓</th><th colspan="4">ΔE 2000 ↓</th></tr>
<tr><th>Mean</th><th>Q1</th><th>Q2</th><th>Q3</th><th>Mean</th><th>Q1</th><th>Q2</th><th>Q3</th><th>Mean</th><th>Q1</th><th>Q2</th><th>Q3</th></tr>
<tr><td>Deep-WB [2]</td><td>199.38</td><td>32.30</td><td>63.34</td><td>142.76</td><td>5.40°</td><td>2.67°</td><td>4.04°</td><td>6.36°</td><td>5.98</td><td>3.44</td><td>4.78</td><td>7.29</td></tr>
<tr><td>ABC-Former</td><td>82.37</td><td>0.01</td><td>17.76</td><td>65.36</td><td>2.78°</td><td>1.06°</td><td>2.85°</td><td>3.22°</td><td>2.89</td><td>0.07</td><td>2.36</td><td>4.12</td></tr></table>
", + "bbox": [ + 94, + 457, + 480, + 515 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/26fbe8e8784e5c94a3f3e9baf308d5e9b90f3978887521261dadca03bdd8f7ed.jpg", + "image_caption": [ + "Figure 6. Analysis on learned weights on Rendered Cube+ dataset [3, 6], $\\lambda_{\\mathbf{i}}^{\\mathbf{sRGB}}$ and $\\lambda_{\\mathbf{i}}^{\\mathbf{Lab}}$ , for cross-modality knowledge transfer. Here, $i \\in \\{e_0, e_1, e_2, b_0, d_0, d_1, d_2\\}$ represents the level of the sRGBformer block, where $e, b,$ and $d$ denote the encoder, bottleneck, and decoder layers, respectively." + ], + "image_footnote": [], + "bbox": [ + 114, + 534, + 464, + 758 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "As can be seen, the L2 loss shows better performance compared to KL divergence and Wasserstein distance, at", + "bbox": [ + 89, + 869, + 482, + 902 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "tributed to its stability and effectiveness in aligning histograms. In contrast, KL divergence may encounter issues with zero-probability bins, and Wasserstein distance can be non-smooth and challenging to optimize in high-dimensional spaces.", + "bbox": [ + 511, + 90, + 903, + 167 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Analyzing Learned Weights for Cross-Modality Transfer. We present the learned weights, $\\lambda_{\\mathrm{i}}^{\\mathrm{sRGB}}$ and $\\lambda_{\\mathrm{i}}^{\\mathrm{Lab}}$ , at each level of ABC-Former to illustrate how modality-complementary knowledge guides WB correction. As shown in Figure 6, the influence of calibrated global color information from sRGB and CIELab histogram modalities intensifies toward the bottleneck block, peaking at the bottleneck. This indicates that high-level WB color histogram-based features from both modalities, capturing global, semantically rich color information, are crucial for reweighting sRGB image features for accurate WB correction. Notably, the sRGB histogram modality has a slightly greater impact than CIELab, though the difference is minimal. This suggests that both color modalities collaborate effectively, allowing ABC-Former to balance raw color intensity with perceptually uniform CIELab properties.", + "bbox": [ + 511, + 167, + 903, + 407 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "WB Manipulation. Following the setup described in [2], we conduct experiments to alter the input image's colors to match the target white balance (WB) settings. These settings correspond to the following color temperatures: tungsten $(2850\\mathrm{K})$ , fluorescent $(3800\\mathrm{K})$ , daylight $(5500\\mathrm{K})$ , cloudy $(6500\\mathrm{K})$ , and shade $(7500\\mathrm{K})$ . As shown in Table 4, ABC-Former significantly outperforms Deep-WB [2] in achieving accurate color transformations.", + "bbox": [ + 511, + 407, + 905, + 529 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "5. Conclusion", + "text_level": 1, + "bbox": [ + 511, + 542, + 633, + 556 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "We presented ABC-Former, an Auxiliary Bimodal Cross-domain Transformer that enhances sRGB WB correction by leveraging complementary information from multiple modalities. ABC-Former uses Interactive Channel Attention to facilitate cross-modality knowledge transfer, integrating calibrated color features from both sRGB and CIELab histograms. 
This multimodal approach enables a more nuanced fusion of color information, allowing the model to handle diverse color temperatures and complex scenes with pronounced color shifts. Extensive experiments have demonstrated that ABC-Former consistently outperforms state-of-the-art methods in both quantitative and qualitative evaluations. For future work, extending into 2D histograms is a promising direction.", + "bbox": [ + 511, + 568, + 906, + 779 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "6. Acknowledgments", + "text_level": 1, + "bbox": [ + 511, + 792, + 692, + 810 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "This paper was supported in part by the National Science and Technology Council, Taiwan, under grants NSTC 113-2221-E-004-001-MY3, 113-2622-E-004-001, 113-2221-E-004-006-MY2, 112-2634-F-002-005, 113-2634-F-002-008, and 113-2923-E-A49-003-MY2.", + "bbox": [ + 511, + 816, + 903, + 893 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "21265", + "bbox": [ + 478, + 944, + 517, + 955 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 91, + 89, + 187, + 104 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[1] Mahmoud Afifi and Michael S Brown. What else can fool deep learning? addressing color constancy errors on deep neural network performance. In ICCV, 2019. 1", + "[2] Mahmoud Afifi and Michael S Brown. Deep white-balance editing. In CVPR, 2020. 2, 3, 5, 6, 8", + "[3] Mahmoud Afifi, Brian Price, Scott Cohen, and Michael S Brown. When color constancy goes wrong: Correcting improperly white-balanced images. In CVPR, 2019. 1, 2, 3, 5, 6, 7, 8", + "[4] Mahmoud Afifi, Jonathan T Barron, Chloe LeGendre, Yun-Ta Tsai, and Francois Bleibel. Cross-camera convolutional color constancy. In ICCV, 2021. 2", + "[5] Mahmoud Affi, Marcus A Brubaker, and Michael S Brown. Auto white-balance correction for mixed-illuminant scenes. In WACV, 2022. 2, 3, 5, 6", + "[6] Nikola Banić, Karlo Koščević, and Sven Lončarić. Unsupervised learning for color constancy. arXiv preprint arXiv:1712.00436, 2017. 5, 6, 7, 8", + "[7] Jonathan T Barron and Yun-Ta Tsai. Fast fourier color constancy. In CVPR, 2017. 2", + "[8] Simone Bianco and Claudio Cusano. Quasiunsupervised color constancy. In CVPR, 2019.", + "[9] Jonathan Cepeda-Negrete and Raul E Sanchez-Yanez. Gray-world assumption on perceptual color spaces. In PSIVT, 2014. 2", + "[10] Jiankang Deng, Jia Guo, Niannan Xue, and Stefanos Zafeiriou. Arcface: Additive angular margin loss for deep face recognition. In CVPR, 2019. 1", + "[11] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. ICLR, 2021. 3", + "[12] Jun Fu, Jing Liu, Hajjie Tian, Yong Li, Yongjun Bao, Zhiwei Fang, and Hanqing Lu. Dual attention network for scene segmentation. In CVPR, 2019. 1", + "[13] Sharma Gaurav. The ciede2000 color-difference formula: Implementation notes, supplementary test data," + ], + "bbox": [ + 93, + 114, + 482, + 750 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "and mathematical observations. COLOR research and application, 2005. 5", + "[14] Jie Hu, Li Shen, and Gang Sun. Squeeze-andexcitation networks. In CVPR, 2018. 
3", + "[15] Yuanming Hu, Baoyuan Wang, and Stephen Lin. FC4: Fully convolutional color constancy with confidence-weighted pooling. In CVPR, 2017. 2", + "[16] Thomas Kailath. The divergence and bhattacharyya distance measures in signal selection. IEEE transactions on communication technology, 1967. 5, 7", + "[17] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *ICLR*, 2015. 5", + "[18] Chunxiao Li, Xuejing Kang, and Anlong Ming. WBFlow: Few-shot white balance for sRGB images via reversible neural flows. In IJCAI, 2023. 2, 3, 5, 6", + "[19] Chunxiao Li, Xuejing Kang, Zhifeng Zhang, and Anlong Ming. SWBNet: a stable white balance network for sRGB images. In AAAI, 2023. 2, 5, 6", + "[20] Yi-Chen Lo, Chia-Che Chang, Hsuan-Chao Chiu, Yu-Hao Huang, Chia-Ping Chen, Yu-Lin Chang, and Kevin Jou. CLCC: Contrastive learning for color constancy. In CVPR, 2021. 2", + "[21] Taishi Ono, Yuhi Kondo, Legong Sun, Teppei Kurita, and Yusuke Moriuchi. Degree-of-linear-polarization-based color constancy. In CVPR, 2022. 2", + "[22] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In ICML, 2021. 2", + "[23] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. NIPS, 2015. 1", + "[24] Joost Van De Weijer, Theo Gevers, and Arjan Gijsenij. Edge-based color constancy. IEEE TIP, 2007. 2", + "[25] Zhendong Wang, Xiaodong Cun, Jianmin Bao, Wengang Zhou, Jianzhuang Liu, and Houqiang Li. Uformer: A general u-shaped transformer for image restoration. In CVPR, 2022. 3", + "[26] Yiyuan Zhang, Xiaohan Ding, Kaixiong Gong, Yixiao Ge, Ying Shan, and Xiangyu Yue. Multimodal pathway: Improve transformers with irrelevant data from other modalities. In CVPR, 2024. 2, 4" + ], + "bbox": [ + 516, + 92, + 903, + 753 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "21266", + "bbox": [ + 478, + 944, + 519, + 955 + ], + "page_idx": 8 + } +] \ No newline at end of file diff --git a/2025/ABC-Former_ Auxiliary Bimodal Cross-domain Transformer with Interactive Channel Attention for White Balance/531c4970-08d2-402e-8972-1a2e62e92c0a_model.json b/2025/ABC-Former_ Auxiliary Bimodal Cross-domain Transformer with Interactive Channel Attention for White Balance/531c4970-08d2-402e-8972-1a2e62e92c0a_model.json new file mode 100644 index 0000000000000000000000000000000000000000..2d3c14e3cd85889beebf9b781d72f05d85138305 --- /dev/null +++ b/2025/ABC-Former_ Auxiliary Bimodal Cross-domain Transformer with Interactive Channel Attention for White Balance/531c4970-08d2-402e-8972-1a2e62e92c0a_model.json @@ -0,0 +1,1890 @@ +[ + [ + { + "type": "header", + "bbox": [ + 0.107, + 0.003, + 0.182, + 0.043 + ], + "angle": 0, + "content": "CVF" + }, + { + "type": "header", + "bbox": [ + 0.237, + 0.001, + 0.812, + 0.047 + ], + "angle": 0, + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." 
+ }, + { + "type": "title", + "bbox": [ + 0.108, + 0.13, + 0.892, + 0.175 + ], + "angle": 0, + "content": "ABC-Former: Auxiliary Bimodal Cross-domain Transformer with Interactive Channel Attention for White Balance" + }, + { + "type": "text", + "bbox": [ + 0.26, + 0.204, + 0.802, + 0.24 + ], + "angle": 0, + "content": "Yu-Cheng Chiu* Guan-Rong Chen* Zihao Chen Yan-Tsung Peng† National Chengchi University" + }, + { + "type": "text", + "bbox": [ + 0.273, + 0.24, + 0.79, + 0.257 + ], + "angle": 0, + "content": "No. 64, Section 2, Zhinan Rd, Wenshan District, Taipei City, 116" + }, + { + "type": "text", + "bbox": [ + 0.102, + 0.257, + 0.956, + 0.275 + ], + "angle": 0, + "content": "111753202@nccu.edu.tw 111753139@nccu.edu.tw 113761501@nccu.edu.tw ytpeng@cs.nccu.edu.tw" + }, + { + "type": "title", + "bbox": [ + 0.248, + 0.309, + 0.327, + 0.325 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.341, + 0.485, + 0.645 + ], + "angle": 0, + "content": "The primary goal of white balance (WB) for sRGB images is to correct inaccurate color temperatures, ensuring that images display natural, neutral colors. While existing WB methods yield reasonable results, their effectiveness is limited. They either focus solely on global color adjustments applied before the camera-specific image signal processing pipeline or rely on end-to-end models that generate WB outputs without accounting for global color trends, leading to suboptimal correction. To address these limitations, we propose an Auxiliary Bimodal Cross-domain Transformer (ABC-Former) that enhances WB correction by leveraging complementary knowledge from global color information from CIELab and RGB histograms alongside sRGB inputs. By introducing an Interactive Channel Attention (ICA) module to facilitate cross-modality global knowledge transfer, ABC-Former achieves more precise WB correction. Experimental results on benchmark WB datasets show that ABC-Former performs favorably against state-of-the-art WB methods. The source code is available at https://github.com/ytpeng-aimlab/ABC-Former." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.673, + 0.222, + 0.688 + ], + "angle": 0, + "content": "1. Introduction" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.698, + 0.483, + 0.834 + ], + "angle": 0, + "content": "White balance (WB) correction ensures consistent and accurate color production across varying lighting conditions. However, camera Image Signal Processing (ISP) can introduce color casts in sRGB images due to inaccurate or customized WB settings applied to raw-RGB inputs. These distortions can degrade the accuracy of tasks like image classification and segmentation, where precise color is crucial [10, 12, 23]. Consequently, WB correction has gained significant research interest." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.835, + 0.484, + 0.866 + ], + "angle": 0, + "content": "Significant effort has been made to improve WB within the camera's ISP pipeline. Raw-WB methods estimate the" + }, + { + "type": "image", + "bbox": [ + 0.518, + 0.31, + 0.903, + 0.402 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.518, + 0.402, + 0.903, + 0.48 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.518, + 0.48, + 0.903, + 0.593 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.512, + 0.605, + 0.907, + 0.703 + ], + "angle": 0, + "content": "Figure 1. 
(a) Traditional raw-WB methods predict the scene's illuminant, perform Illumination Correction (IC), and generate the sRGB output via camera-specific ISP. (b) DNN-based sRGB-WB methods apply end-to-end models directly to sRGB images for WB correction. (c) ABC-Former improves WB accuracy by converting the input into multiple modalities, enhancing illumination correction through auxiliary and primary models." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.747, + 0.907, + 0.824 + ], + "angle": 0, + "content": "scene's illuminant to correct color shifts in raw images before further processing. However, due to the non-linear transformations applied by the ISP during rendering, these corrections may not fully compensate for color shifts in the final sRGB output [1]." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.825, + 0.909, + 0.903 + ], + "angle": 0, + "content": "Several sRGB-WB methods address color shifts caused by imprecise WB in the camera's ISP, categorized into exemplar-based and DNN-based approaches. Exemplar-based methods like KNN [3] classify images from the Rendered WB dataset and apply the best-matching nonlinear" + }, + { + "type": "page_footnote", + "bbox": [ + 0.114, + 0.875, + 0.294, + 0.888 + ], + "angle": 0, + "content": "* equal contribution" + }, + { + "type": "page_footnote", + "bbox": [ + 0.114, + 0.888, + 0.311, + 0.901 + ], + "angle": 0, + "content": "† corresponding author" + }, + { + "type": "list", + "bbox": [ + 0.114, + 0.875, + 0.311, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.52, + 0.957 + ], + "angle": 0, + "content": "21258" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.092, + 0.482, + 0.211 + ], + "angle": 0, + "content": "mappings for correction. DNN-based methods such as DEEP-WB [2] use CNNs for single-illuminant color correction, while WBFlow [18] extracts pseudo-raw features via a reversible flow model for sRGB correction. SWBNet [19] employs a transformer in the DCT domain to refine color-sensitive features. While effective, these methods fail to fully integrate global color trends and scene information for more comprehensive WB correction." + }, + { + "type": "text", + "bbox": [ + 0.093, + 0.213, + 0.485, + 0.543 + ], + "angle": 0, + "content": "Our work utilizes alternative modalities, such as color histograms, to learn global color temperature for effective WB correction. Unlike images, which encode spatial and color information, histograms capture color distribution across channels without spatial details. While per-channel histograms do not fully preserve color relationships between pixels, we can explore both sRGB and CIELab color histograms to go beyond sRGB-input images, extracting global color information. Inspired by [26], we propose ABC-Former, an Auxiliary Bimodal Cross-domain Transformer architecture that integrates global color information from both sRGB and CIELab histograms. It consists of two auxiliary models for histogram-based learning and a target model for sRGB image WB correction, where the sRGB histogram supervises raw color intensity, and CIELab ensures perceptually uniform color distribution. To enhance WB correction in the target sRGB model, we introduce the Interactive Channel Attention (ICA) module, which transfers modality-complementary knowledge using re-parameterization, allowing the target model to adaptively reweight image features for improved accuracy. 
Our key contributions are listed as follows:" + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.547, + 0.483, + 0.59 + ], + "angle": 0, + "content": "- We propose ABC-Former, which leverages histogram-based global color features via auxiliary models to refine WB correction in the target sRGB model." + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.592, + 0.482, + 0.636 + ], + "angle": 0, + "content": "- We introduce the ICA module to facilitate effective cross-modality knowledge transfer, optimizing sRGB feature reweighting for better WB results." + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.638, + 0.482, + 0.681 + ], + "angle": 0, + "content": "- Extensive experiments on benchmark WB datasets demonstrate that ABC-Former outperforms state-of-the-art (SOTA) sRGB-WB methods." + }, + { + "type": "list", + "bbox": [ + 0.091, + 0.547, + 0.483, + 0.681 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.695, + 0.24, + 0.71 + ], + "angle": 0, + "content": "2. Related Works" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.72, + 0.482, + 0.87 + ], + "angle": 0, + "content": "Raw-WB Approaches. The WB module in a camera's ISP corrects raw images for accurate color temperature, compensating for lighting variations. Traditional WB methods estimate the global light source color and apply uniform gain coefficients for illumination correction. However, they assume a consistent color temperature across the scene, making them ineffective under mixed lighting. Additionally, these methods irreversibly alter raw images, limiting precise sRGB adjustments in post-processing [4, 7-9, 15, 20, 21, 24]." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.871, + 0.482, + 0.901 + ], + "angle": 0, + "content": "sRGB-WB Approaches. To address the shortcomings of traditional WB methods, recent research has explored" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.092, + 0.905, + 0.272 + ], + "angle": 0, + "content": "sRGB-WB approaches, which refine color correction beyond the ISP stage. These methods can be classified into exemplar-based methods [3, 5] and DNN-based methods [18, 19]. Exemplar-based methods apply trained nonlinear mappings for color correction. For example, Affi et al. [3] use histogram features to find images with similar color distributions and derive a correction matrix to adjust colors accordingly. Mixed-WB [5] generates multiple WB versions of an image, averaging their weighting maps to achieve optimal correction. However, these methods rely heavily on predefined training data, making them less adaptable to diverse lighting conditions." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.273, + 0.905, + 0.453 + ], + "angle": 0, + "content": "DNN-based methods, such as WBFlow [18], use neural flow for reversible color correction, mapping color-cast sRGB images to a pseudo-raw feature space for linear WB. WBFlow also incorporates a camera transformation module for few-shot adaptation, improving generalization ability. SWBNet [19] suppresses temperature-sensitive low-frequency information and employs a contrast loss to align scene features across varying temperatures, enhancing WB stability. It also uses adaptive weights to correct multiple color shifts under mixed lighting. Although these methods are effective, they remain constrained by the sRGB image modality." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.455, + 0.905, + 0.65 + ], + "angle": 0, + "content": "Multimodal Training Approaches. 
Unimodal training uses data from a single modality, limiting its ability to generalize across different modalities. In contrast, multimodal training enables models to learn from multiple modalities, which can be strongly correlated (paired data) or weakly correlated (irrelevant data). CLIP [22] exemplifies strongly correlated multimodal data, using contrastive learning to align image and text features, requiring paired data. In contrast, M2PT [26] integrates irrelevant data from different modalities by leveraging re-parameterization to transfer knowledge from pre-trained auxiliary models. While this enhances cross-modal learning, it increases data collection costs and pre-training demands." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.651, + 0.905, + 0.755 + ], + "angle": 0, + "content": "Inspired by M2PT, we adopt multimodal training to capture global color information from sRGB images to enhance WB correction. Unlike M2PT, which uses irrelevant data, we train sRGB images to learn corrected histogram-based color features from their color and CIELab histograms. Figure 1 compares conventional DNN-based raw-WB and sRGB-WB methods with the proposed ABC-Former." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.77, + 0.687, + 0.787 + ], + "angle": 0, + "content": "3. Proposed Method" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.796, + 0.905, + 0.901 + ], + "angle": 0, + "content": "The proposed ABC-Former consists of three transformer models: two auxiliary transformers that learn to correct color and CIELab histograms, and a primary transformer that processes the input sRGB image for the final WB correction. Unlike M2PT [26], which tokenizes irrelevant multimodal data for unified processing, our approach leverages sRGB images with their strongly related color information." + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.521, + 0.957 + ], + "angle": 0, + "content": "21259" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.092, + 0.482, + 0.182 + ], + "angle": 0, + "content": "To efficiently transfer complementary color knowledge, we introduce the Interactive Channel Attention (ICA) module, which utilizes condensed histogram-based features to enhance color temperature correction, improving accuracy and visual quality. The overall architecture is shown in Figure 2." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.19, + 0.368, + 0.206 + ], + "angle": 0, + "content": "3.1. Auxiliary Model — PDFformer" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.212, + 0.484, + 0.499 + ], + "angle": 0, + "content": "Most prior sRGB-WB works [2, 3, 5, 18] focus on local pixel information within the sRGB domain, often overlooking global color temperature and perceptual color relationships, which are crucial for effective WB correction. Therefore, integrating global color information from alternative modalities can enhance accuracy by providing a broader context for color adjustments. Given an input sRGB image \\(\\mathbf{I} \\in \\mathbb{R}^{H \\times W \\times 3}\\), where \\(H\\) and \\(W\\) denote the image's height and width, and the three channels represent the RGB color components, we first convert it to the CIELab color space, represented as \\(\\mathbf{I}_{\\mathrm{Lab}} \\in \\mathbb{R}^{H \\times W \\times 3}\\), where the three channels correspond to the \\(L^*\\), \\(a^*\\), and \\(b^*\\) components. 
To efficiently capture global color temperature while maintaining low model complexity, we convert \\(\\mathbf{I}\\) into its probability density function (PDF) representation, \\(\\mathbf{H}_{\\mathrm{sRGB}} \\in \\mathbb{R}^{L \\times 3}\\), where \\(L = 256\\) represents the number of histogram bins per channel. Similarly, we transform \\(\\mathbf{I}_{\\mathrm{Lab}}\\) into its PDF form, referred to as \\(\\mathbf{H}_{\\mathrm{Lab}} \\in \\mathbb{R}^{L \\times 3}\\), with \\(L = 256\\), providing a histogram-based representation for each color space." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.5, + 0.484, + 0.742 + ], + "angle": 0, + "content": "The ABC-Former framework incorporates two auxiliary models to enhance the target model's performance. These models process global color PDF data, \\(\\mathbf{H}^{\\mathrm{sRGB}}\\) and \\(\\mathbf{H}^{\\mathrm{Lab}}\\), which serve as distinct inputs. Each auxiliary model, called PDFformer, employs a 1D transformer architecture with a shared structure but separate training. Initially, the input histograms, \\(\\mathbf{H}^{\\mathrm{sRGB}}\\) or \\(\\mathbf{H}^{\\mathrm{Lab}}\\), pass through a one-dimensional convolutional layer to produce histogram features \\(\\mathbf{H}_0^{\\mathbf{A}} \\in \\mathbb{R}^{L \\times C}\\), where \\(\\mathbf{A} \\in [\\mathbf{sRGB}, \\mathbf{Lab}]\\), and \\(C\\) is the feature dimension. These features are subsequently processed through a U-shape transformer structure with PDFformer blocks that facilitate upsampling and downsampling in the encoding and decoding paths. Each block consists of two sequences of Layer Normalization (LN), Channel Attention (CA) [14], and a feed-forward Multilayer Perceptron (MLP) [11], arranged in the following order:" + }, + { + "type": "equation", + "bbox": [ + 0.152, + 0.748, + 0.483, + 0.775 + ], + "angle": 0, + "content": "\\[\n\\hat {\\mathbf {H}} _ {\\mathbf {i}} ^ {\\mathbf {A}} = \\operatorname {C A} \\left(\\operatorname {L N} \\left(\\mathbf {H} _ {\\mathbf {i} - 1} ^ {\\mathbf {A}}\\right)\\right) + \\mathbf {H} _ {\\mathbf {i} - 1} ^ {\\mathbf {A}}; \\tag {1}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.155, + 0.77, + 0.42, + 0.788 + ], + "angle": 0, + "content": "\\[\n\\mathbf {H} _ {\\mathbf {i}} ^ {\\mathbf {A}} = \\operatorname {G E L U} (\\operatorname {M L P} (\\operatorname {L N} (\\hat {\\mathbf {H}} _ {\\mathbf {i}} ^ {\\mathbf {A}}))) + \\hat {\\mathbf {H}} _ {\\mathbf {i}} ^ {\\mathbf {A}},\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.795, + 0.484, + 0.904 + ], + "angle": 0, + "content": "where \\(\\mathrm{GELU}(\\cdot)\\) denotes the GELU activation function, and \\(\\mathbf{i}\\) is the block index, starting from 1. PDFformer has \\(K\\) PDFformer blocks in the encoding path, followed by a bottleneck stage, which is also a PDFformer block. These blocks are interconnected via downsampling, implemented through a \\(4\\times 1\\) convolution with a stride of 2 and channel doubling. This process yields an output of \\(\\mathbf{H}_{\\mathbf{K} + 1}^{\\mathbf{A}}\\in\\)" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.089, + 0.905, + 0.291 + ], + "angle": 0, + "content": "\\(\\mathbb{R}^{\\frac{L}{2K}\\times 2^K C}\\). Following the bottleneck, the decoding path contains \\(K\\) PDFformer blocks, with upsampling and channel reduction applied between blocks. The first two blocks halve the number of channels, while the remaining blocks quarter them, as in [25]. 
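For concreteness, a minimal NumPy sketch (not code from the paper) of the PDF inputs described above: per-channel 256-bin histograms normalized to probability densities. The scikit-image conversion and the CIELab bin ranges are our assumptions; the paper fixes only L = 256 bins per channel.

```python
import numpy as np
from skimage.color import rgb2lab  # any standard sRGB-to-CIELab conversion works

def channel_pdfs(img, ranges, bins=256):
    """Per-channel normalized histograms of an H x W x 3 array.

    Returns shape (bins, 3), matching the paper's PDF inputs
    H^A in R^{L x 3} with L = 256 bins per channel.
    """
    pdfs = []
    for c, (lo, hi) in enumerate(ranges):
        hist, _ = np.histogram(img[..., c], bins=bins, range=(lo, hi))
        pdfs.append(hist / hist.sum())  # counts -> probability density
    return np.stack(pdfs, axis=1)

rgb = (np.random.rand(128, 128, 3) * 255).astype(np.float32)  # stand-in image
H_srgb = channel_pdfs(rgb, ranges=[(0, 255)] * 3)             # (256, 3)

# CIELab: L* lies in [0, 100] and a*/b* roughly in [-128, 127]; these bin
# ranges are an assumption -- the paper fixes only the bin count.
H_lab = channel_pdfs(rgb2lab(rgb / 255.0),
                     ranges=[(0, 100), (-128, 127), (-128, 127)])
```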
Upsampling is achieved using a \\(2\\times 1\\) transposed convolution with a stride of 2. Additionally, each block in the decoding path takes the output from the previous block and concatenates it with the corresponding output from the encoding path of the same spatial size. The final output, \\(\\mathbf{H}^{\\mathbf{A}}\\in \\mathbb{R}^{L\\times 2C}\\), passes through a convolutional layer with a residual connection to the input histograms and a softmax function to produce the corrected color or CIELab histograms \\(\\mathbf{H}_{\\mathbf{c}}^{\\mathbf{A}}\\in \\mathbb{R}^{L\\times 3}\\) as:" + }, + { + "type": "equation", + "bbox": [ + 0.528, + 0.299, + 0.905, + 0.318 + ], + "angle": 0, + "content": "\\[\n\\mathbf {H} _ {\\mathrm {c}} ^ {\\mathbf {A}} = \\operatorname {S O F T M A X} \\left(\\operatorname {C o n v} _ {3 \\times 1} \\left(\\mathbf {H} _ {2 \\mathrm {K} + 1} ^ {\\mathbf {A}}\\right) + \\mathbf {H} _ {0} ^ {\\mathbf {A}}\\right). \\tag {2}\n\\]" + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.326, + 0.779, + 0.342 + ], + "angle": 0, + "content": "3.2. Target model — sRGBformer" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.348, + 0.907, + 0.482 + ], + "angle": 0, + "content": "To achieve a color-calibrated correction, we employ sRGBformer, a vision transformer designed to address color deviations. It systematically processes input features \\(\\mathbf{X}_0\\) to generate the final WB-corrected image. Similar to PDFformer, sRGBformer employs a U-shaped transformer architecture with upsampling and downsampling. However, it uniquely integrates cross-modality knowledge from auxiliary models, using their corrected global color information as guidance. To enable this, we introduce our proposed Interactive" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.484, + 0.907, + 0.543 + ], + "angle": 0, + "content": "Channel Attention (ICA) in each sRGBformer block, incorporating auxiliary model knowledge through dedicated pathways. Each sRGBformer block consists of two sets of LN, ICA, and an MLP, structured as follows:" + }, + { + "type": "equation", + "bbox": [ + 0.586, + 0.555, + 0.905, + 0.581 + ], + "angle": 0, + "content": "\\[\n\\hat {\\mathbf {X}} _ {\\mathbf {i}} = \\operatorname {I C A} (\\ln (\\mathbf {X} _ {\\mathbf {i} - \\mathbf {1}})) + \\mathbf {X} _ {\\mathbf {i} - \\mathbf {1}}; \\tag {3}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.589, + 0.577, + 0.832, + 0.594 + ], + "angle": 0, + "content": "\\[\n\\mathbf {X} _ {\\mathbf {i}} = \\operatorname {G E L U} (\\operatorname {M L P} (\\ln (\\hat {\\mathbf {X}} _ {\\mathbf {i}}))) + \\hat {\\mathbf {X}} _ {\\mathbf {i}},\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.6, + 0.906, + 0.81 + ], + "angle": 0, + "content": "where \\(\\mathbf{X_i}\\) is the image features produced by the sRGBformer block at \\(i\\)-th level. The encoder and decoder each consist of \\(K\\) sRGBformer blocks, connected via a bottleneck stage with one sRGBformer block. In the encoder, downsampling is achieved using a \\(4\\times 4\\) convolution with a stride of 2 and channel doubling, while the decoder applies upsampling via a \\(2\\times 2\\) transposed convolution with a stride of 2 and channel halving. As in PDFformer, each decoder block receives the output from the previous block, concatenated with the corresponding output from the encoder of the same spatial size. 
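As a minimal PyTorch sketch of the shared block pattern in Eqs. (1) and (3): pre-norm channel attention with a residual connection, followed by a pre-norm MLP. The squeeze-and-excitation body follows [14], while the reduction ratio, MLP width, and inner activation are our assumptions; the sRGBformer variant of Eq. (3) simply swaps CA for the ICA module of Eqs. (4)-(5).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style CA [14] on token features of shape (B, L, C)."""
    def __init__(self, channels, reduction=4):  # reduction ratio is an assumption
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x.mean(dim=1))  # squeeze over the bin/token axis -> (B, C)
        return x * w.unsqueeze(1)   # excitation: channel-wise reweighting

class PDFformerBlock(nn.Module):
    """Eq. (1): H' = CA(LN(H)) + H;  H = GELU(MLP(LN(H'))) + H'."""
    def __init__(self, channels, hidden=None):  # MLP width is an assumption
        super().__init__()
        hidden = hidden or 2 * channels
        self.norm1, self.norm2 = nn.LayerNorm(channels), nn.LayerNorm(channels)
        self.ca = ChannelAttention(channels)
        self.mlp = nn.Sequential(nn.Linear(channels, hidden), nn.GELU(),
                                 nn.Linear(hidden, channels))

    def forward(self, h):
        h = self.ca(self.norm1(h)) + h
        return F.gelu(self.mlp(self.norm2(h))) + h  # Eq. (1) puts GELU after the MLP

h = torch.randn(2, 256, 16)         # (batch, L = 256 bins, C = 16 features)
print(PDFformerBlock(16)(h).shape)  # torch.Size([2, 256, 16])
```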
The final decoder output \\(\\mathbf{X_{2K + 1}}\\in \\mathbb{R}^{H\\times W\\times 2C}\\) passes through a \\(3\\times 3\\) convolutional layer with a residual connection to the input, producing the final WB-corrected image, \\(\\mathbf{X_c}\\in \\mathbb{R}^{H\\times W\\times 3}\\)." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.811, + 0.906, + 0.9 + ], + "angle": 0, + "content": "Interactive Channel Attention. The goal of ICA is to facilitate knowledge transfer from auxiliary models to sRGBformer, enhancing WB correction. In sRGBformer, each encoder and decoder block is equipped with an ICA module to correspond to the respective blocks in the encoders and decoders of the auxiliary models. First, \\(\\mathbf{X}_{\\mathrm{i}}\\) is condensed" + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.521, + 0.957 + ], + "angle": 0, + "content": "21260" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.097, + 0.089, + 0.889, + 0.511 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.089, + 0.524, + 0.908, + 0.568 + ], + "angle": 0, + "content": "Figure 2. The ABC-Former framework consists of two key components: Auxiliary models (PDFformers) and a Target model (sRGBformer). The auxiliary models process sRGB and CIELab histograms to learn color features from different modalities, while the ICA module in the target model integrates this information to generate the final WB correction." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.605, + 0.487, + 0.833 + ], + "angle": 0, + "content": "into a vector with one value per channel by global average pooling (Avg.), followed by a convolution and sigmoid activation, generating a channel-wise weighting vector \\(\\mathbf{W}_{\\mathrm{i}}^{\\mathrm{T}} \\in \\mathbb{R}^{1 \\times 1 \\times C}\\) at the \\(i\\)-th level of the sRGBformer block. Similarly, weighting vectors \\(\\mathbf{W}_{\\mathrm{i}}^{\\mathrm{sRGB}}\\), \\(\\mathbf{W}_{\\mathrm{i}}^{\\mathrm{Lab}} \\in \\mathbb{R}^{1 \\times 1 \\times C}\\) are extracted by feeding \\(\\mathbf{H}_{\\mathrm{i}}^{\\mathrm{sRGB}}\\) and \\(\\mathbf{H}_{\\mathrm{i}}^{\\mathrm{Lab}}\\) into the CA modules within their corresponding PDFformer blocks, which include the average pooling \\(\\mathrm{Avg}(\\cdot)\\), convolution, the unsqueeze operation, and the excitation operation (sigmoid activation) at the \\(i\\)-th level. Next, to integrate cross-modal knowledge, we apply cross-modal re-parameterization [26], introducing learnable parameters \\(\\lambda_{\\mathrm{i}}^{\\mathrm{sRGB}}\\) and \\(\\lambda_{\\mathrm{i}}^{\\mathrm{Lab}}\\) to adjust \\(\\mathbf{W}_{\\mathrm{i}}^{\\mathrm{sRGB}}\\) and \\(\\mathbf{W}_{\\mathrm{i}}^{\\mathrm{Lab}}\\), respectively. 
These are then combined with \\(\\mathbf{W}_{\\mathrm{i}}^{\\mathrm{T}}\\) along the channel dimension to obtain \\(\\mathbf{W}_{\\mathrm{i}}^{\\mathrm{total}}\\) as:" + }, + { + "type": "equation", + "bbox": [ + 0.128, + 0.838, + 0.483, + 0.862 + ], + "angle": 0, + "content": "\\[\n\\mathbf{W}_{\\mathbf{i}}^{\\mathbf{T}} = \\operatorname{Sigmoid}(\\operatorname{Conv}(\\operatorname{Avg}(\\mathbf{X}_{\\mathbf{i}}))) \\tag{4}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.11, + 0.858, + 0.482, + 0.877 + ], + "angle": 0, + "content": "\\[\n\\mathbf{W}_{\\mathrm{i}}^{\\mathrm{total}} = \\mathbf{W}_{\\mathrm{i}}^{\\mathrm{T}} + \\lambda_{\\mathrm{i}}^{\\mathrm{Lab}} \\mathbf{W}_{\\mathrm{i}}^{\\mathrm{Lab}} + \\lambda_{\\mathrm{i}}^{\\mathrm{sRGB}} \\mathbf{W}_{\\mathrm{i}}^{\\mathrm{sRGB}}.\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.885, + 0.483, + 0.903 + ], + "angle": 0, + "content": "Finally, \\(\\mathbf{X_i}\\) is channel-wise re-weighted by \\(\\mathbf{W_i^{total}}\\) to gen-" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.604, + 0.771, + 0.621 + ], + "angle": 0, + "content": "erate the refined sRGB features \\(\\tilde{\\mathbf{X}}_{\\mathrm{i}}\\) as:" + }, + { + "type": "equation", + "bbox": [ + 0.619, + 0.632, + 0.907, + 0.652 + ], + "angle": 0, + "content": "\\[\n\\tilde{\\mathbf{X}}_{\\mathbf{i}} = \\mathbb{F}_{\\mathrm{scale}}\\left(\\mathbf{W}_{\\mathbf{i}}^{\\mathrm{total}}, \\mathbf{X}_{\\mathbf{i}}\\right), \\tag{5}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.664, + 0.907, + 0.756 + ], + "angle": 0, + "content": "where \\(\\mathbb{F}_{\\mathrm{scale}}(\\cdot, \\cdot)\\) represents the channel-wise multiplication function, applying scalar weights to the corresponding feature maps. Through ICA, we leverage calibrated global color information from modality-specific histogram-based features, enabling effective knowledge transfer for improved WB correction." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.768, + 0.661, + 0.782 + ], + "angle": 0, + "content": "3.3. Loss Function" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.789, + 0.909, + 0.903 + ], + "angle": 0, + "content": "The proposed framework optimizes two auxiliary models and one target model. Specifically, two auxiliary PDFformers process histogram inputs in a PDF format from either the sRGB or CIELab domains, while the target model, sRGBformer, is responsible for the final WB sRGB output. The cooperative interaction between the auxiliary and target models is crucial for achieving accurate WB in the output sRGB image. 
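Before turning to the losses, a minimal sketch of the ICA combination in Eqs. (4)-(5), assuming the auxiliary weighting vectors W_i^sRGB and W_i^Lab are supplied by the corresponding PDFformer CA modules; the shape and zero initialization of the learnable lambdas are our assumptions.

```python
import torch
import torch.nn as nn

class ICA(nn.Module):
    """Interactive Channel Attention, Eqs. (4)-(5): the target branch's channel
    weights are augmented by lambda-scaled weights from the auxiliary models."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=1)
        self.lam_srgb = nn.Parameter(torch.zeros(1))  # zero init is an assumption
        self.lam_lab = nn.Parameter(torch.zeros(1))

    def forward(self, x, w_srgb, w_lab):
        # x: (B, C, H, W) image features; w_*: (B, C, 1, 1) auxiliary CA weights
        w_t = torch.sigmoid(self.conv(x.mean(dim=(2, 3), keepdim=True)))  # Eq. (4)
        w_total = w_t + self.lam_lab * w_lab + self.lam_srgb * w_srgb
        return x * w_total  # Eq. (5): channel-wise re-weighting F_scale

x = torch.randn(2, 16, 128, 128)
w1, w2 = torch.rand(2, 16, 1, 1), torch.rand(2, 16, 1, 1)
print(ICA(16)(x, w1, w2).shape)  # torch.Size([2, 16, 128, 128])
```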
To train the auxiliary models, we use L2 loss" + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.518, + 0.957 + ], + "angle": 0, + "content": "21261" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.092, + 0.483, + 0.123 + ], + "angle": 0, + "content": "to measure the difference between the PDFs of the predicted and ground-truth color channel histograms, formulated as:" + }, + { + "type": "equation", + "bbox": [ + 0.095, + 0.127, + 0.482, + 0.177 + ], + "angle": 0, + "content": "\\[\n\\begin{array}{l} \\mathcal {L} _ {\\mathrm {p d f}} ^ {s R G B} = \\left\\| \\mathbf {H} _ {\\mathbf {c}} ^ {\\mathbf {R}} - \\mathbf {H} _ {\\mathbf {g t}} ^ {\\mathbf {R}} \\right\\| _ {2} + \\left\\| \\mathbf {H} _ {\\mathbf {c}} ^ {\\mathbf {G}} - \\mathbf {H} _ {\\mathbf {g t}} ^ {\\mathbf {G}} \\right\\| _ {2} + \\left\\| \\mathbf {H} _ {\\mathbf {c}} ^ {\\mathbf {B}} - \\mathbf {H} _ {\\mathbf {g t}} ^ {\\mathbf {B}} \\right\\| _ {2}; \\\\ \\mathcal {L} _ {\\mathrm {p d f}} ^ {L a b} = \\left\\| \\mathbf {H} _ {\\mathbf {c}} ^ {\\mathbf {L}} - \\mathbf {H} _ {\\mathbf {g t}} ^ {\\mathbf {L}} \\right\\| _ {2} + \\left\\| \\mathbf {H} _ {\\mathbf {c}} ^ {\\mathbf {a}} - \\mathbf {H} _ {\\mathbf {g t}} ^ {\\mathbf {a}} \\right\\| _ {2} + \\left\\| \\mathbf {H} _ {\\mathbf {c}} ^ {\\mathbf {b}} - \\mathbf {H} _ {\\mathbf {g t}} ^ {\\mathbf {b}} \\right\\| _ {2}, \\tag {6} \\\\ \\end{array}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.178, + 0.482, + 0.268 + ], + "angle": 0, + "content": "where \\(\\mathbf{H}_{\\mathrm{c}}^{\\mathrm{sRGB}} = [\\mathbf{H}_{\\mathrm{c}}^{\\mathrm{R}};\\mathbf{H}_{\\mathrm{c}}^{\\mathrm{G}};\\mathbf{H}_{\\mathrm{c}}^{\\mathrm{B}}]\\) and \\(\\mathbf{H}_{\\mathrm{c}}^{\\mathrm{Lab}} = [\\mathbf{H}_{\\mathrm{c}}^{\\mathrm{L}};\\mathbf{H}_{\\mathrm{c}}^{\\mathrm{a}};\\mathbf{H}_{\\mathrm{c}}^{\\mathrm{b}}]\\) denote the corrected RGB and CIELab histograms, respectively, as estimated by PDFformers. \\(\\mathbf{H}_{\\mathrm{gt}}^{\\mathrm{sRGB}}\\) and \\(\\mathbf{H}_{\\mathrm{gt}}^{\\mathrm{Lab}}\\) represent the ground-truth histograms. We adopt L2 loss as it penalizes large deviations more heavily, encouraging the model to align histogram bins across the distribution evenly." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.268, + 0.482, + 0.295 + ], + "angle": 0, + "content": "For sRGBformer, we use the L1 loss for training, as follows:" + }, + { + "type": "equation", + "bbox": [ + 0.211, + 0.298, + 0.482, + 0.315 + ], + "angle": 0, + "content": "\\[\n\\mathcal {L} _ {\\mathrm {r e c}} = \\left\\| \\mathbf {X} _ {\\mathbf {c}} - \\mathbf {X} _ {\\mathbf {g t}} \\right\\| _ {1}, \\tag {7}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.319, + 0.483, + 0.368 + ], + "angle": 0, + "content": "where \\(\\mathbf{X}_{\\mathbf{c}}\\) is the estimated WB image produced by sRGBformer, and \\(\\mathbf{X}_{\\mathbf{gt}}\\) is the ground truth. The total loss is defined as \\(\\mathcal{L}_{\\mathrm{total}} = \\mathcal{L}_{\\mathrm{pdf}}^{sRGB} + \\mathcal{L}_{\\mathrm{pdf}}^{Lab} + \\mathcal{L}_{\\mathrm{rec}}\\)." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.378, + 0.296, + 0.394 + ], + "angle": 0, + "content": "4. Experimental Results" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.403, + 0.483, + 0.568 + ], + "angle": 0, + "content": "Datasets. The commonly used public dataset, the Rendered WB dataset Set1 [3], is divided into three non-overlapping folds. 
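In code, the training objectives of Eqs. (6)-(7) above reduce to a few lines; this sketch assumes histograms shaped (L, 3), and the sum reduction of the L1 term is our assumption, since the paper writes only the norm.

```python
import torch

def pdf_loss(h_pred, h_gt):
    """Eq. (6): sum of per-channel L2 norms between predicted and GT PDFs (L x 3)."""
    return sum(torch.norm(h_pred[:, c] - h_gt[:, c], p=2) for c in range(3))

def total_loss(h_srgb, h_srgb_gt, h_lab, h_lab_gt, x_pred, x_gt):
    """L_total = L_pdf^sRGB + L_pdf^Lab + L_rec, with L_rec the L1 loss of Eq. (7)."""
    rec = torch.abs(x_pred - x_gt).sum()  # ||.||_1; a mean reduction is also common
    return pdf_loss(h_srgb, h_srgb_gt) + pdf_loss(h_lab, h_lab_gt) + rec
```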
For training, we randomly selected 12,000 rendered sRGB images with different WB settings from two of the folds. For testing, we use the third fold of Set1, also known as Set1-Test (21,046 images), as well as other datasets that have no overlap in scenes or cameras with the training data: Set2 of the Rendered WB dataset (2,881 images) [3] and the Rendered Cube+ dataset (10,242 images) [3, 6]. These datasets serve as benchmarks to evaluate the WB correction performance of our ABC-Former." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.57, + 0.483, + 0.704 + ], + "angle": 0, + "content": "Evaluation Metrics. We evaluated our results using three widely used metrics: Mean Squared Error (MSE), Mean Angular Error (MAE), and \\(\\Delta E\\) 2000 [13], which quantify the differences between the predicted WB images and the ground truth. For each metric, we reported the mean, first quantile (Q1), median (Q2), and upper quantile (Q3) of the error. Lower values in these metrics indicate better WB correction performance, consistent with those used in recent works [2, 3, 5, 18, 19]." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.705, + 0.483, + 0.854 + ], + "angle": 0, + "content": "Implementation Details. We implemented ABC-Former using PyTorch. During training, we optimize both the auxiliary and target models simultaneously over 350 epochs using the Adam optimizer [17] with \\(\\beta_{1} = 0.5\\) and \\(\\beta_{2} = 0.999\\) for each model. The learning rate is set to \\(2 \\times 10^{-4}\\), and the embedding feature dimension to 16. For training, we randomly cropped four \\(128 \\times 128\\) patches from each training image as input. Additionally, we apply geometric transformations, including rotation and flipping, to augment the data." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.856, + 0.483, + 0.901 + ], + "angle": 0, + "content": "Quantitative Experimental Results. The quantitative results in Table 1 show that ABC-Former performs favorably against five SOTA methods [2, 3, 5, 18, 19] across three" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.092, + 0.905, + 0.394 + ], + "angle": 0, + "content": "public benchmark datasets [3, 6]. The compared methods were evaluated using either their publicly available pretrained models or results directly cited from their respective publications. However, SWBNet [19] was retrained to be tested on the Set1-Test and Set2 datasets, as its code and original results for these datasets were not provided (marked with * in Table 1). SWBNet's scores on the Cube+ Dataset were taken from the original paper. On the Rendered WB Dataset Set1-Test and Set2, ABC-Former achieved the best performance on MSE, MAE, and \\(\\Delta E\\) 2000, indicating that it effectively removes color casts from images and achieves superior WB correction. On the Rendered Cube+ dataset, ABC-Former also delivered superior results, with the lowest mean scores in MSE, MAE, and \\(\\Delta E\\) 2000. Additionally, it maintains a competitive model size, only larger than Deep-WB [2] and Mixed-WB [5]. These results demonstrate ABC-Former's efficiency in achieving efficient WB correction across various datasets without significantly increasing model complexity, showcasing its robustness and generalization capabilities." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.397, + 0.905, + 0.638 + ], + "angle": 0, + "content": "Qualitative Experimental Results. We present the qualitative comparison results on the Rendered WB and Rendered Cube+ datasets in Figure 3 and Figure 4. 
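For reference, minimal NumPy sketches of two measures used in the evaluation above: the mean angular error between corresponding RGB vectors, and the Bhattacharyya coefficient [16] reported in Figure 5 (MSE and ΔE 2000 [13] follow their standard definitions); the exact reductions are our assumptions.

```python
import numpy as np

def mean_angular_error(pred, gt, eps=1e-8):
    """Mean angle in degrees between corresponding RGB vectors of two H x W x 3 images."""
    p = pred.reshape(-1, 3).astype(np.float64)
    g = gt.reshape(-1, 3).astype(np.float64)
    cos = (p * g).sum(axis=1) / (
        np.linalg.norm(p, axis=1) * np.linalg.norm(g, axis=1) + eps)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))).mean()

def bhattacharyya(p, q):
    """Bhattacharyya coefficient [16] of two normalized histograms; 1 = identical."""
    return float(np.sum(np.sqrt(p * q)))
```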
Analyzing the color correction performance of different methods, we see that while KNN [3], Deep-WB [2], Mixed-WB [5], and WBFlow [18] generally reduce color casts, they often exhibit color inconsistencies across different regions of the image. For example, in Figure 3, these methods tend to correct the colors of objects while neglecting the sky's color accuracy, resulting in an undesirable yellow tint. Similarly, in Figure 4, strong color casts lead to suboptimal WB correction, often leaving the image with an overall blue or yellow tone. In contrast, our method, guided by corrected global color information from multiple modalities, achieves a more balanced and consistent color correction across the entire image, producing natural and harmonious results." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.642, + 0.905, + 0.868 + ], + "angle": 0, + "content": "In addition, Figure 5 demonstrates a comparative analysis of global color distribution accuracy in the average WB results across different methods. The Bhattacharyya coefficient, as introduced in [16], is employed to measure the similarity between the red, green, and blue histograms of the processed images and those of the ground-truth WB images across three benchmark datasets. This coefficient ranges from 0 to 1, with higher values indicating a closer match to the ground truth color distributions. The results show that ABC-Former achieves a higher similarity compared to other methods. Additionally, the visualization at the bottom of the figure illustrates an example of the color distributions in WB results obtained by the compared methods, highlighting that ABC-Former produces color distributions more consistent with the ground truth." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.871, + 0.905, + 0.901 + ], + "angle": 0, + "content": "Ablation Studies. In the ablation studies, we investigated the impact of various combinations of modalities used for" + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "21262" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.09, + 0.09, + 0.908, + 0.133 + ], + "angle": 0, + "content": "Table 1. The quantitative results of ABC-Former and competing WB methods are evaluated on three public benchmark datasets, including Rendered WB Dataset (Set1-Test and Set2) [3], and Rendered Cube+ Dataset [3, 6]. The best results are highlighted in red, while the second-best results are highlighted in blue." + }, + { + "type": "table", + "bbox": [ + 0.104, + 0.143, + 0.891, + 0.566 + ], + "angle": 0, + "content": "
<table><tr><th rowspan="2">Method</th><th colspan="4">MSE ↓</th><th colspan="4">MAE ↓</th><th colspan="4">ΔE 2000 ↓</th><th>Size</th></tr>
<tr><th>Mean</th><th>Q1</th><th>Q2</th><th>Q3</th><th>Mean</th><th>Q1</th><th>Q2</th><th>Q3</th><th>Mean</th><th>Q1</th><th>Q2</th><th>Q3</th><th>MB</th></tr>
<tr><td colspan="14">Rendered WB Dataset: Set1-Test (21,046 images) [3]</td></tr>
<tr><td>KNN [3]</td><td>77.49</td><td>13.74</td><td>39.62</td><td>94.01</td><td>3.06°</td><td>1.74°</td><td>2.54°</td><td>3.76°</td><td>3.58</td><td>2.07</td><td>3.09</td><td>4.55</td><td>21.8</td></tr>
<tr><td>Deep-WB [2]</td><td>82.55</td><td>13.19</td><td>42.77</td><td>102.09</td><td>3.12°</td><td>1.88°</td><td>2.70°</td><td>3.84°</td><td>3.77</td><td>2.16</td><td>3.30</td><td>4.86</td><td>16.7</td></tr>
<tr><td>Mixed-WB [5]</td><td>142.25</td><td>26.81</td><td>67.17</td><td>164.66</td><td>4.07°</td><td>2.64°</td><td>3.68°</td><td>5.16°</td><td>4.55</td><td>3.00</td><td>4.15</td><td>5.63</td><td>5.1</td></tr>
<tr><td>WBFlow [18]</td><td>78.89</td><td>12.99</td><td>35.09</td><td>79.35</td><td>2.67°</td><td>1.73°</td><td>2.39°</td><td>3.24°</td><td>3.13</td><td>1.92</td><td>2.79</td><td>3.94</td><td>30.2</td></tr>
<tr><td>SWBNet* [19]</td><td>111.62</td><td>20.61</td><td>60.68</td><td>137.91</td><td>4.11°</td><td>2.56°</td><td>3.75°</td><td>5.22°</td><td>4.54</td><td>2.73</td><td>4.16</td><td>5.86</td><td>258.8</td></tr>
<tr><td>ABC-Former</td><td>20.47</td><td>4.65</td><td>10.02</td><td>21.05</td><td>1.99°</td><td>1.25°</td><td>1.73°</td><td>2.33°</td><td>2.18</td><td>1.38</td><td>1.86</td><td>2.59</td><td>20.2</td></tr>
<tr><td colspan="14">Rendered WB Dataset: Set2 (2,881 images) [3]</td></tr>
<tr><td>KNN [3]</td><td>171.09</td><td>37.04</td><td>87.04</td><td>190.88</td><td>4.48°</td><td>2.26°</td><td>3.64°</td><td>5.95°</td><td>5.60</td><td>3.43</td><td>4.90</td><td>7.06</td><td>21.8</td></tr>
<tr><td>Deep-WB [2]</td><td>124.07</td><td>30.13</td><td>76.32</td><td>154.44</td><td>3.75°</td><td>2.02°</td><td>3.08°</td><td>4.72°</td><td>4.90</td><td>3.13</td><td>4.35</td><td>6.08</td><td>16.7</td></tr>
<tr><td>Mixed-WB [5]</td><td>188.76</td><td>48.64</td><td>112.32</td><td>219.91</td><td>4.92°</td><td>2.69°</td><td>4.10°</td><td>6.37°</td><td>6.05</td><td>3.45</td><td>4.92</td><td>7.20</td><td>5.1</td></tr>
<tr><td>WBFlow [18]</td><td>117.60</td><td>31.25</td><td>61.68</td><td>143.90</td><td>3.51°</td><td>1.93°</td><td>2.92°</td><td>4.47°</td><td>4.64</td><td>3.16</td><td>4.07</td><td>5.56</td><td>30.2</td></tr>
<tr><td>SWBNet* [19]</td><td>219.02</td><td>55.45</td><td>113.98</td><td>236.25</td><td>5.46°</td><td>3.45°</td><td>4.78°</td><td>6.63°</td><td>6.51</td><td>4.39</td><td>5.84</td><td>8.08</td><td>258.8</td></tr>
<tr><td>ABC-Former</td><td>104.31</td><td>25.55</td><td>58.61</td><td>132.90</td><td>3.39°</td><td>1.87°</td><td>2.73°</td><td>4.30°</td><td>4.56</td><td>2.97</td><td>4.13</td><td>5.63</td><td>20.2</td></tr>
<tr><td colspan="14">Rendered Cube+ Dataset (10,242 images) [3, 6]</td></tr>
<tr><td>KNN [3]</td><td>194.98</td><td>27.43</td><td>57.08</td><td>118.21</td><td>4.12°</td><td>1.96°</td><td>3.17°</td><td>5.04°</td><td>5.68</td><td>3.22</td><td>4.61</td><td>6.70</td><td>21.8</td></tr>
<tr><td>Deep-WB [2]</td><td>80.46</td><td>15.43</td><td>33.88</td><td>74.42</td><td>3.45°</td><td>1.87°</td><td>2.82°</td><td>4.26°</td><td>4.59</td><td>2.68</td><td>3.81</td><td>5.53</td><td>16.7</td></tr>
<tr><td>Mixed-WB [5]</td><td>161.80</td><td>16.96</td><td>19.33</td><td>90.81</td><td>4.05°</td><td>1.40°</td><td>2.12°</td><td>4.88°</td><td>4.89</td><td>2.16</td><td>3.10</td><td>6.78</td><td>5.1</td></tr>
<tr><td>WBFlow [18]</td><td>75.39</td><td>14.22</td><td>30.90</td><td>72.91</td><td>3.34°</td><td>1.87°</td><td>2.82°</td><td>4.11°</td><td>4.28</td><td>2.68</td><td>3.77</td><td>5.21</td><td>30.2</td></tr>
<tr><td>SWBNet [19]</td><td>74.35</td><td>20.46</td><td>40.04</td><td>86.95</td><td>3.15°</td><td>1.33°</td><td>2.09°</td><td>4.12°</td><td>4.28</td><td>2.40</td><td>3.56</td><td>5.09</td><td>258.8</td></tr>
<tr><td>ABC-Former</td><td>60.60</td><td>12.15</td><td>26.92</td><td>57.20</td><td>2.99°</td><td>1.63°</td><td>2.45°</td><td>3.69°</td><td>3.95</td><td>2.35</td><td>3.40</td><td>4.86</td><td>20.2</td></tr></table>
" + }, + { + "type": "image", + "bbox": [ + 0.098, + 0.581, + 0.211, + 0.643 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.098, + 0.643, + 0.211, + 0.724 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.098, + 0.724, + 0.211, + 0.8 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.133, + 0.803, + 0.179, + 0.818 + ], + "angle": 0, + "content": "Input" + }, + { + "type": "image", + "bbox": [ + 0.214, + 0.581, + 0.327, + 0.642 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.214, + 0.643, + 0.327, + 0.723 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.214, + 0.724, + 0.327, + 0.8 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.251, + 0.803, + 0.288, + 0.817 + ], + "angle": 0, + "content": "KNN" + }, + { + "type": "image", + "bbox": [ + 0.329, + 0.581, + 0.442, + 0.642 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.329, + 0.643, + 0.442, + 0.723 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.329, + 0.724, + 0.442, + 0.8 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.346, + 0.803, + 0.425, + 0.817 + ], + "angle": 0, + "content": "Deep-WB" + }, + { + "type": "image", + "bbox": [ + 0.443, + 0.581, + 0.556, + 0.642 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.443, + 0.643, + 0.555, + 0.723 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.443, + 0.724, + 0.555, + 0.8 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.456, + 0.803, + 0.543, + 0.817 + ], + "angle": 0, + "content": "Mixed-WB" + }, + { + "type": "image", + "bbox": [ + 0.558, + 0.581, + 0.671, + 0.642 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.558, + 0.643, + 0.671, + 0.723 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.558, + 0.724, + 0.671, + 0.8 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.58, + 0.803, + 0.649, + 0.817 + ], + "angle": 0, + "content": "WBFlow" + }, + { + "type": "image", + "bbox": [ + 0.672, + 0.581, + 0.787, + 0.642 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.673, + 0.643, + 0.786, + 0.723 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.673, + 0.724, + 0.786, + 0.8 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.706, + 0.803, + 0.747, + 0.817 + ], + "angle": 0, + "content": "Ours" + }, + { + "type": "image", + "bbox": [ + 0.788, + 0.581, + 0.901, + 0.642 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.788, + 0.643, + 0.901, + 0.723 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.788, + 0.724, + 0.901, + 0.8 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.833, + 0.803, + 0.857, + 0.817 + ], + "angle": 0, + "content": "GT" + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.832, + 0.908, + 0.861 + ], + "angle": 0, + "content": "Figure 3. Qualitative comparisons with other sRGB-WB methods on the Rendered WB dataset [3], with the \\(\\Delta\\)E 2000 indicated in the bottom-right corner of each image." 
+ }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.946, + 0.519, + 0.957 + ], + "angle": 0, + "content": "21263" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.098, + 0.093, + 0.904, + 0.295 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.305, + 0.908, + 0.334 + ], + "angle": 0, + "content": "Figure 4. Qualitative comparisons with other sRGB-WB methods on the Rendered Cube+ dataset [3, 6], with the \\(\\Delta\\)E 2000 displayed in the bottom-right corner of each image." + }, + { + "type": "image", + "bbox": [ + 0.123, + 0.357, + 0.861, + 0.646 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.089, + 0.659, + 0.908, + 0.703 + ], + "angle": 0, + "content": "Figure 5. Evaluation of the Bhattacharyya coefficient [16] for color histograms across three benchmark datasets [3, 6], showing global color accuracy for red, green, and blue histograms in sRGB-WB methods. ABC-Former achieves more consistent color distributions by leveraging global color temperature features from both sRGB and CIELab histograms." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.727, + 0.486, + 0.895 + ], + "angle": 0, + "content": "WB correction on the Rendered Cube+ dataset. We report the mean values across three evaluation metrics, along with model sizes for each combination of auxiliary models. As shown in Table 2, using only the target model (sRGB-former) without additional guidance of global color information from other modalities results in suboptimal WB accuracy. Adding a single auxiliary model (e.g., \\(\\mathrm{PDF}_{\\mathrm{sRGB}} + \\mathrm{sRGB}\\) or \\(\\mathrm{PDF}_{\\mathrm{Lab}} + \\mathrm{sRGB}\\)) improves WB performance over the target model alone. However, utilizing a single auxiliary model to jointly learn both sRGB and CIELab histograms \\((\\mathrm{PDF}_{\\mathrm{sRGB:Lab}} + \\mathrm{sRGB})\\) proves less effective, as a single" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.727, + 0.907, + 0.865 + ], + "angle": 0, + "content": "model struggles to disentangle the information from both modalities. Our full ABC-Former design leverages global color temperature features from multiple modalities through two auxiliary models to guide color adjustment, achieving the highest WB accuracy. To ensure a fair comparison across these combinations, we matched the model size to that of ABC-Former by doubling the bottleneck channels in sRGBformer (Baseline) and evenly increasing channels across layers when using a single auxiliary model." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.871, + 0.907, + 0.902 + ], + "angle": 0, + "content": "Table 3 compares different loss functions that train auxiliary models for learning sRGB and CIELab histograms." + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.521, + 0.958 + ], + "angle": 0, + "content": "21264" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.09, + 0.089, + 0.483, + 0.16 + ], + "angle": 0, + "content": "Table 2. Ablation studies of ABC-Former w/ and w/o guidance from different modalities on the Rendered Cube+ dataset [3, 6]. Here, sRGB denotes sRGBformer, while \\(\\mathrm{PDF}_{\\mathrm{sRGB}}\\), and \\(\\mathrm{PDF}_{\\mathrm{Lab}}\\) represent auxiliary PDFformer models for sRGB and CIELab histograms, respectively." + }, + { + "type": "table", + "bbox": [ + 0.093, + 0.17, + 0.481, + 0.264 + ], + "angle": 0, + "content": "
<table><tr><th>Modalities</th><th>MSE ↓</th><th>MAE ↓</th><th>ΔE 2000 ↓</th><th>Size (MB)</th></tr>
<tr><td>sRGB</td><td>76.56</td><td>3.35°</td><td>4.31</td><td>20.4</td></tr>
<tr><td>PDF<sub>Lab</sub> + sRGB</td><td>73.35</td><td>3.26°</td><td>4.20</td><td>20.4</td></tr>
<tr><td>PDF<sub>sRGB</sub> + sRGB</td><td>68.65</td><td>3.12°</td><td>4.08</td><td>20.4</td></tr>
<tr><td>PDF<sub>sRGB:Lab</sub> + sRGB</td><td>72.38</td><td>3.38°</td><td>4.38</td><td>20.4</td></tr>
<tr><td>PDF<sub>sRGB</sub> + PDF<sub>Lab</sub> + sRGB</td><td>60.60</td><td>2.99°</td><td>3.95</td><td>20.2</td></tr></table>
" + }, + { + "type": "table_caption", + "bbox": [ + 0.09, + 0.28, + 0.483, + 0.337 + ], + "angle": 0, + "content": "Table 3. Ablation studies on the loss function used to train auxiliary models for learning sRGB and CIELab histograms. We compare the KL divergence, Wasserstein distance, and our chosen L2 loss on the Rendered Cube+ dataset [3, 6]." + }, + { + "type": "table", + "bbox": [ + 0.11, + 0.348, + 0.458, + 0.416 + ], + "angle": 0, + "content": "
<table><tr><th>Loss function</th><th>MSE ↓</th><th>MAE ↓</th><th>ΔE 2000 ↓</th><th>Size (MB)</th></tr>
<tr><td>KL divergence</td><td>72.67</td><td>3.29°</td><td>4.17</td><td>20.2</td></tr>
<tr><td>Wasserstein distance</td><td>70.22</td><td>3.12°</td><td>4.15</td><td>20.2</td></tr>
<tr><td>L2 loss (Proposed)</td><td>60.60</td><td>2.99°</td><td>3.95</td><td>20.2</td></tr></table>
" + }, + { + "type": "table_caption", + "bbox": [ + 0.093, + 0.432, + 0.48, + 0.446 + ], + "angle": 0, + "content": "Table 4. WB manipulation on the Rendered Cube+ dataset [3, 6]." + }, + { + "type": "table", + "bbox": [ + 0.095, + 0.458, + 0.481, + 0.516 + ], + "angle": 0, + "content": "
<table><tr><th rowspan="2">Method</th><th colspan="4">MSE ↓</th><th colspan="4">MAE ↓</th><th colspan="4">ΔE 2000 ↓</th></tr>
<tr><th>Mean</th><th>Q1</th><th>Q2</th><th>Q3</th><th>Mean</th><th>Q1</th><th>Q2</th><th>Q3</th><th>Mean</th><th>Q1</th><th>Q2</th><th>Q3</th></tr>
<tr><td>Deep-WB [2]</td><td>199.38</td><td>32.30</td><td>63.34</td><td>142.76</td><td>5.40°</td><td>2.67°</td><td>4.04°</td><td>6.36°</td><td>5.98</td><td>3.44</td><td>4.78</td><td>7.29</td></tr>
<tr><td>ABC-Former</td><td>82.37</td><td>0.01</td><td>17.76</td><td>65.36</td><td>2.78°</td><td>1.06°</td><td>2.85°</td><td>3.22°</td><td>2.89</td><td>0.07</td><td>2.36</td><td>4.12</td></tr></table>
" + }, + { + "type": "image", + "bbox": [ + 0.115, + 0.535, + 0.465, + 0.759 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.771, + 0.483, + 0.842 + ], + "angle": 0, + "content": "Figure 6. Analysis on learned weights on Rendered Cube+ dataset [3, 6], \\(\\lambda_{\\mathbf{i}}^{\\mathbf{sRGB}}\\) and \\(\\lambda_{\\mathbf{i}}^{\\mathbf{Lab}}\\), for cross-modality knowledge transfer. Here, \\(i \\in \\{e_0, e_1, e_2, b_0, d_0, d_1, d_2\\}\\) represents the level of the sRGBformer block, where \\(e, b,\\) and \\(d\\) denote the encoder, bottleneck, and decoder layers, respectively." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.871, + 0.483, + 0.903 + ], + "angle": 0, + "content": "As can be seen, the L2 loss shows better performance compared to KL divergence and Wasserstein distance, at" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.092, + 0.905, + 0.168 + ], + "angle": 0, + "content": "tributed to its stability and effectiveness in aligning histograms. In contrast, KL divergence may encounter issues with zero-probability bins, and Wasserstein distance can be non-smooth and challenging to optimize in high-dimensional spaces." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.168, + 0.905, + 0.409 + ], + "angle": 0, + "content": "Analyzing Learned Weights for Cross-Modality Transfer. We present the learned weights, \\(\\lambda_{\\mathrm{i}}^{\\mathrm{sRGB}}\\) and \\(\\lambda_{\\mathrm{i}}^{\\mathrm{Lab}}\\), at each level of ABC-Former to illustrate how modality-complementary knowledge guides WB correction. As shown in Figure 6, the influence of calibrated global color information from sRGB and CIELab histogram modalities intensifies toward the bottleneck block, peaking at the bottleneck. This indicates that high-level WB color histogram-based features from both modalities, capturing global, semantically rich color information, are crucial for reweighting sRGB image features for accurate WB correction. Notably, the sRGB histogram modality has a slightly greater impact than CIELab, though the difference is minimal. This suggests that both color modalities collaborate effectively, allowing ABC-Former to balance raw color intensity with perceptually uniform CIELab properties." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.409, + 0.906, + 0.53 + ], + "angle": 0, + "content": "WB Manipulation. Following the setup described in [2], we conduct experiments to alter the input image's colors to match the target white balance (WB) settings. These settings correspond to the following color temperatures: tungsten \\((2850\\mathrm{K})\\), fluorescent \\((3800\\mathrm{K})\\), daylight \\((5500\\mathrm{K})\\), cloudy \\((6500\\mathrm{K})\\), and shade \\((7500\\mathrm{K})\\). As shown in Table 4, ABC-Former significantly outperforms Deep-WB [2] in achieving accurate color transformations." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.543, + 0.634, + 0.558 + ], + "angle": 0, + "content": "5. Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.569, + 0.907, + 0.78 + ], + "angle": 0, + "content": "We presented ABC-Former, an Auxiliary Bimodal Cross-domain Transformer that enhances sRGB WB correction by leveraging complementary information from multiple modalities. ABC-Former uses Interactive Channel Attention to facilitate cross-modality knowledge transfer, integrating calibrated color features from both sRGB and CIELab histograms. 
This multimodal approach enables a more nuanced fusion of color information, allowing the model to handle diverse color temperatures and complex scenes with pronounced color shifts. Extensive experiments have demonstrated that ABC-Former consistently outperforms state-of-the-art methods in both quantitative and qualitative evaluations. For future work, extending into 2D histograms is a promising direction." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.794, + 0.693, + 0.811 + ], + "angle": 0, + "content": "6. Acknowledgments" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.818, + 0.905, + 0.894 + ], + "angle": 0, + "content": "This paper was supported in part by the National Science and Technology Council, Taiwan, under grants NSTC 113-2221-E-004-001-MY3, 113-2622-E-004-001, 113-2221-E-004-006-MY2, 112-2634-F-002-005, 113-2634-F-002-008, and 113-2923-E-A49-003-MY2." + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "21265" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.093, + 0.09, + 0.188, + 0.106 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.116, + 0.482, + 0.174 + ], + "angle": 0, + "content": "[1] Mahmoud Afifi and Michael S Brown. What else can fool deep learning? addressing color constancy errors on deep neural network performance. In ICCV, 2019. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.178, + 0.482, + 0.207 + ], + "angle": 0, + "content": "[2] Mahmoud Afifi and Michael S Brown. Deep white-balance editing. In CVPR, 2020. 2, 3, 5, 6, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.209, + 0.483, + 0.268 + ], + "angle": 0, + "content": "[3] Mahmoud Afifi, Brian Price, Scott Cohen, and Michael S Brown. When color constancy goes wrong: Correcting improperly white-balanced images. In CVPR, 2019. 1, 2, 3, 5, 6, 7, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.271, + 0.482, + 0.315 + ], + "angle": 0, + "content": "[4] Mahmoud Afifi, Jonathan T Barron, Chloe LeGendre, Yun-Ta Tsai, and Francois Bleibel. Cross-camera convolutional color constancy. In ICCV, 2021. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.103, + 0.318, + 0.482, + 0.361 + ], + "angle": 0, + "content": "[5] Mahmoud Affi, Marcus A Brubaker, and Michael S Brown. Auto white-balance correction for mixed-illuminant scenes. In WACV, 2022. 2, 3, 5, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.103, + 0.364, + 0.482, + 0.409 + ], + "angle": 0, + "content": "[6] Nikola Banić, Karlo Koščević, and Sven Lončarić. Unsupervised learning for color constancy. arXiv preprint arXiv:1712.00436, 2017. 5, 6, 7, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.103, + 0.412, + 0.482, + 0.44 + ], + "angle": 0, + "content": "[7] Jonathan T Barron and Yun-Ta Tsai. Fast fourier color constancy. In CVPR, 2017. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.103, + 0.443, + 0.482, + 0.471 + ], + "angle": 0, + "content": "[8] Simone Bianco and Claudio Cusano. Quasiunsupervised color constancy. In CVPR, 2019." + }, + { + "type": "ref_text", + "bbox": [ + 0.103, + 0.474, + 0.482, + 0.517 + ], + "angle": 0, + "content": "[9] Jonathan Cepeda-Negrete and Raul E Sanchez-Yanez. Gray-world assumption on perceptual color spaces. In PSIVT, 2014. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.521, + 0.482, + 0.564 + ], + "angle": 0, + "content": "[10] Jiankang Deng, Jia Guo, Niannan Xue, and Stefanos Zafeiriou. 
Arcface: Additive angular margin loss for deep face recognition. In CVPR, 2019. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.567, + 0.482, + 0.67 + ], + "angle": 0, + "content": "[11] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. ICLR, 2021. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.674, + 0.482, + 0.718 + ], + "angle": 0, + "content": "[12] Jun Fu, Jing Liu, Hajjie Tian, Yong Li, Yongjun Bao, Zhiwei Fang, and Hanqing Lu. Dual attention network for scene segmentation. In CVPR, 2019. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.721, + 0.482, + 0.75 + ], + "angle": 0, + "content": "[13] Sharma Gaurav. The ciede2000 color-difference formula: Implementation notes, supplementary test data," + }, + { + "type": "list", + "bbox": [ + 0.094, + 0.116, + 0.483, + 0.75 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.55, + 0.093, + 0.905, + 0.121 + ], + "angle": 0, + "content": "and mathematical observations. COLOR research and application, 2005. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.125, + 0.905, + 0.153 + ], + "angle": 0, + "content": "[14] Jie Hu, Li Shen, and Gang Sun. Squeeze-andexcitation networks. In CVPR, 2018. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.156, + 0.905, + 0.199 + ], + "angle": 0, + "content": "[15] Yuanming Hu, Baoyuan Wang, and Stephen Lin. FC4: Fully convolutional color constancy with confidence-weighted pooling. In CVPR, 2017. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.202, + 0.905, + 0.245 + ], + "angle": 0, + "content": "[16] Thomas Kailath. The divergence and bhattacharyya distance measures in signal selection. IEEE transactions on communication technology, 1967. 5, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.247, + 0.905, + 0.274 + ], + "angle": 0, + "content": "[17] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *ICLR*, 2015. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.277, + 0.905, + 0.321 + ], + "angle": 0, + "content": "[18] Chunxiao Li, Xuejing Kang, and Anlong Ming. WBFlow: Few-shot white balance for sRGB images via reversible neural flows. In IJCAI, 2023. 2, 3, 5, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.324, + 0.905, + 0.368 + ], + "angle": 0, + "content": "[19] Chunxiao Li, Xuejing Kang, Zhifeng Zhang, and Anlong Ming. SWBNet: a stable white balance network for sRGB images. In AAAI, 2023. 2, 5, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.371, + 0.905, + 0.43 + ], + "angle": 0, + "content": "[20] Yi-Chen Lo, Chia-Che Chang, Hsuan-Chao Chiu, Yu-Hao Huang, Chia-Ping Chen, Yu-Lin Chang, and Kevin Jou. CLCC: Contrastive learning for color constancy. In CVPR, 2021. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.433, + 0.905, + 0.476 + ], + "angle": 0, + "content": "[21] Taishi Ono, Yuhi Kondo, Legong Sun, Teppei Kurita, and Yusuke Moriuchi. Degree-of-linear-polarization-based color constancy. In CVPR, 2022. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.479, + 0.905, + 0.553 + ], + "angle": 0, + "content": "[22] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 
Learning transferable visual models from natural language supervision. In ICML, 2021. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.556, + 0.905, + 0.599 + ], + "angle": 0, + "content": "[23] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. NIPS, 2015. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.602, + 0.905, + 0.631 + ], + "angle": 0, + "content": "[24] Joost Van De Weijer, Theo Gevers, and Arjan Gijsenij. Edge-based color constancy. IEEE TIP, 2007. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.634, + 0.905, + 0.692 + ], + "angle": 0, + "content": "[25] Zhendong Wang, Xiaodong Cun, Jianmin Bao, Wengang Zhou, Jianzhuang Liu, and Houqiang Li. Uformer: A general u-shaped transformer for image restoration. In CVPR, 2022. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.695, + 0.905, + 0.754 + ], + "angle": 0, + "content": "[26] Yiyuan Zhang, Xiaohan Ding, Kaixiong Gong, Yixiao Ge, Ying Shan, and Xiangyu Yue. Multimodal pathway: Improve transformers with irrelevant data from other modalities. In CVPR, 2024. 2, 4" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.093, + 0.905, + 0.754 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.945, + 0.52, + 0.957 + ], + "angle": 0, + "content": "21266" + } + ] +] \ No newline at end of file diff --git a/2025/ABC-Former_ Auxiliary Bimodal Cross-domain Transformer with Interactive Channel Attention for White Balance/531c4970-08d2-402e-8972-1a2e62e92c0a_origin.pdf b/2025/ABC-Former_ Auxiliary Bimodal Cross-domain Transformer with Interactive Channel Attention for White Balance/531c4970-08d2-402e-8972-1a2e62e92c0a_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..5935345d2446a569903dde3e45bd78ae73685dab --- /dev/null +++ b/2025/ABC-Former_ Auxiliary Bimodal Cross-domain Transformer with Interactive Channel Attention for White Balance/531c4970-08d2-402e-8972-1a2e62e92c0a_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0530e06af21255bb3cec7501569233ab69291c3997b896fcdab42badb37a0582 +size 2460120 diff --git a/2025/ABC-Former_ Auxiliary Bimodal Cross-domain Transformer with Interactive Channel Attention for White Balance/full.md b/2025/ABC-Former_ Auxiliary Bimodal Cross-domain Transformer with Interactive Channel Attention for White Balance/full.md new file mode 100644 index 0000000000000000000000000000000000000000..208d8c3da5cc56cfc43297783979bc8c32c9508d --- /dev/null +++ b/2025/ABC-Former_ Auxiliary Bimodal Cross-domain Transformer with Interactive Channel Attention for White Balance/full.md @@ -0,0 +1,285 @@ +# ABC-Former: Auxiliary Bimodal Cross-domain Transformer with Interactive Channel Attention for White Balance + +Yu-Cheng Chiu* Guan-Rong Chen* Zihao Chen Yan-Tsung Peng† National Chengchi University + +No. 64, Section 2, Zhinan Rd, Wenshan District, Taipei City, 116 + +111753202@nccu.edu.tw 111753139@nccu.edu.tw 113761501@nccu.edu.tw ytpeng@cs.nccu.edu.tw + +# Abstract + +The primary goal of white balance (WB) for sRGB images is to correct inaccurate color temperatures, ensuring that images display natural, neutral colors. While existing WB methods yield reasonable results, their effectiveness is limited. 
They either focus solely on global color adjustments applied before the camera-specific image signal processing pipeline or rely on end-to-end models that generate WB outputs without accounting for global color trends, leading to suboptimal correction. To address these limitations, we propose an Auxiliary Bimodal Cross-domain Transformer (ABC-Former) that enhances WB correction by leveraging complementary knowledge from global color information in CIELab and RGB histograms alongside sRGB inputs. By introducing an Interactive Channel Attention (ICA) module to facilitate cross-modality global knowledge transfer, ABC-Former achieves more precise WB correction. Experimental results on benchmark WB datasets show that ABC-Former performs favorably against state-of-the-art WB methods. The source code is available at https://github.com/ytpeng-aimlab/ABC-Former.

# 1. Introduction

White balance (WB) correction ensures consistent and accurate color production across varying lighting conditions. However, camera Image Signal Processing (ISP) can introduce color casts in sRGB images due to inaccurate or customized WB settings applied to raw-RGB inputs. These distortions can degrade the accuracy of tasks like image classification and segmentation, where precise color is crucial [10, 12, 23]. Consequently, WB correction has gained significant research interest.

Significant effort has been made to improve WB within the camera's ISP pipeline. Raw-WB methods estimate the scene's illuminant to correct color shifts in raw images before further processing. However, due to the non-linear transformations applied by the ISP during rendering, these corrections may not fully compensate for color shifts in the final sRGB output [1].

![](images/dc3a462b8e9f99c49514e17bdd64749d9d84856929305498c0a53ba146854fdf.jpg)

![](images/d84a832deedf8ac4ac147fb047a3056f07187691fd7a0655c9afd11b305027a2.jpg)

![](images/3ce855a07bd457cdced9502f45450bc79508178e0c0b9ea05bbe7a156d755c83.jpg)
Figure 1. (a) Traditional raw-WB methods predict the scene's illuminant, perform Illumination Correction (IC), and generate the sRGB output via camera-specific ISP. (b) DNN-based sRGB-WB methods apply end-to-end models directly to sRGB images for WB correction. (c) ABC-Former improves WB accuracy by converting the input into multiple modalities, enhancing illumination correction through auxiliary and primary models.

Several sRGB-WB methods address color shifts caused by imprecise WB in the camera's ISP, categorized into exemplar-based and DNN-based approaches. Exemplar-based methods like KNN [3] classify images from the Rendered WB dataset and apply the best-matching nonlinear mappings for correction. DNN-based methods such as Deep-WB [2] use CNNs for single-illuminant color correction, while WBFlow [18] extracts pseudo-raw features via a reversible flow model for sRGB correction. SWBNet [19] employs a transformer in the DCT domain to refine color-sensitive features. While effective, these methods fail to fully integrate global color trends and scene information for more comprehensive WB correction.

Our work utilizes alternative modalities, such as color histograms, to learn global color temperature for effective WB correction. Unlike images, which encode spatial and color information, histograms capture color distribution across channels without spatial details.
While per-channel histograms do not fully preserve color relationships between pixels, we can explore both sRGB and CIELab color histograms to go beyond sRGB input images, extracting global color information. Inspired by [26], we propose ABC-Former, an Auxiliary Bimodal Cross-domain Transformer architecture that integrates global color information from both sRGB and CIELab histograms. It consists of two auxiliary models for histogram-based learning and a target model for sRGB image WB correction, where the sRGB histogram supervises raw color intensity, and CIELab ensures a perceptually uniform color distribution. To enhance WB correction in the target sRGB model, we introduce the Interactive Channel Attention (ICA) module, which transfers modality-complementary knowledge using re-parameterization, allowing the target model to adaptively reweight image features for improved accuracy. Our key contributions are listed as follows:

- We propose ABC-Former, which leverages histogram-based global color features via auxiliary models to refine WB correction in the target sRGB model.
- We introduce the ICA module to facilitate effective cross-modality knowledge transfer, optimizing sRGB feature reweighting for better WB results.
- Extensive experiments on benchmark WB datasets demonstrate that ABC-Former outperforms state-of-the-art (SOTA) sRGB-WB methods.

# 2. Related Works

Raw-WB Approaches. The WB module in a camera's ISP corrects raw images for accurate color temperature, compensating for lighting variations. Traditional WB methods estimate the global light source color and apply uniform gain coefficients for illumination correction. However, they assume a consistent color temperature across the scene, making them ineffective under mixed lighting. Additionally, these methods irreversibly alter raw images, limiting precise sRGB adjustments in post-processing [4, 7-9, 15, 20, 21, 24].

sRGB-WB Approaches. To address the shortcomings of traditional WB methods, recent research has explored sRGB-WB approaches, which refine color correction beyond the ISP stage. These methods can be classified into exemplar-based methods [3, 5] and DNN-based methods [18, 19]. Exemplar-based methods apply trained nonlinear mappings for color correction. For example, Afifi et al. [3] use histogram features to find images with similar color distributions and derive a correction matrix to adjust colors accordingly. Mixed-WB [5] generates multiple WB versions of an image, averaging their weighting maps to achieve optimal correction. However, these methods rely heavily on predefined training data, making them less adaptable to diverse lighting conditions.

DNN-based methods, such as WBFlow [18], use neural flow for reversible color correction, mapping color-cast sRGB images to a pseudo-raw feature space for linear WB. WBFlow also incorporates a camera transformation module for few-shot adaptation, improving generalization ability. SWBNet [19] suppresses temperature-sensitive low-frequency information and employs a contrast loss to align scene features across varying temperatures, enhancing WB stability. It also uses adaptive weights to correct multiple color shifts under mixed lighting. Although these methods are effective, they remain constrained by the sRGB image modality.

Multimodal Training Approaches. Unimodal training uses data from a single modality, limiting its ability to generalize across different modalities.
In contrast, multimodal training enables models to learn from multiple modalities, which can be strongly correlated (paired data) or weakly correlated (irrelevant data). CLIP [22] exemplifies strongly correlated multimodal data, using contrastive learning to align image and text features, which requires paired data. In contrast, M2PT [26] integrates irrelevant data from different modalities by leveraging re-parameterization to transfer knowledge from pre-trained auxiliary models. While this enhances cross-modal learning, it increases data collection costs and pre-training demands.

Inspired by M2PT, we adopt multimodal training to capture global color information from sRGB images to enhance WB correction. Unlike M2PT, which uses irrelevant data, we pair each sRGB image with its own color and CIELab histograms, from which corrected histogram-based color features are learned. Figure 1 compares conventional DNN-based raw-WB and sRGB-WB methods with the proposed ABC-Former.

# 3. Proposed Method

The proposed ABC-Former consists of three transformer models: two auxiliary transformers that learn to correct color and CIELab histograms, and a primary transformer that processes the input sRGB image for the final WB correction. Unlike M2PT [26], which tokenizes irrelevant multimodal data for unified processing, our approach leverages sRGB images with their strongly related color information.

To efficiently transfer complementary color knowledge, we introduce the Interactive Channel Attention (ICA) module, which utilizes condensed histogram-based features to enhance color temperature correction, improving accuracy and visual quality. The overall architecture is shown in Figure 2.

# 3.1. Auxiliary Model — PDFformer

Most prior sRGB-WB works [2, 3, 5, 18] focus on local pixel information within the sRGB domain, often overlooking global color temperature and perceptual color relationships, which are crucial for effective WB correction. Therefore, integrating global color information from alternative modalities can enhance accuracy by providing a broader context for color adjustments. Given an input sRGB image $\mathbf{I} \in \mathbb{R}^{H \times W \times 3}$, where $H$ and $W$ denote the image's height and width, and the three channels represent the RGB color components, we first convert it to the CIELab color space, represented as $\mathbf{I}_{\mathrm{Lab}} \in \mathbb{R}^{H \times W \times 3}$, where the three channels correspond to the $L^*$, $a^*$, and $b^*$ components. To efficiently capture global color temperature while maintaining low model complexity, we convert $\mathbf{I}$ into its probability density function (PDF) representation, $\mathbf{H}^{\mathrm{sRGB}} \in \mathbb{R}^{L \times 3}$, where $L = 256$ is the number of histogram bins per channel. Similarly, we transform $\mathbf{I}_{\mathrm{Lab}}$ into its PDF form, referred to as $\mathbf{H}^{\mathrm{Lab}} \in \mathbb{R}^{L \times 3}$, providing a histogram-based representation for each color space.

The ABC-Former framework incorporates two auxiliary models to enhance the target model's performance. These models process the global color PDF data, $\mathbf{H}^{\mathrm{sRGB}}$ and $\mathbf{H}^{\mathrm{Lab}}$, which serve as distinct inputs. Each auxiliary model, called PDFformer, employs a 1D transformer architecture with a shared structure but separate training.
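As a concrete illustration, producing these PDF inputs takes only a few lines. The following is a minimal PyTorch sketch rather than the authors' code; `kornia.color.rgb_to_lab` is one possible sRGB-to-CIELab conversion, and rescaling the CIELab channels to $[0, 1]$ per image before binning is our assumption:

```python
import torch
import kornia.color  # one possible sRGB-to-CIELab conversion; any equivalent works

def channel_pdf(img: torch.Tensor, bins: int = 256) -> torch.Tensor:
    """Per-channel normalized histogram (PDF) of a 3 x H x W tensor -> bins x 3."""
    counts = torch.stack(
        [torch.histc(img[c], bins=bins, min=0.0, max=1.0) for c in range(3)], dim=1
    )
    return counts / img[0].numel()  # each channel sums to 1

I = torch.rand(3, 128, 128)        # sRGB image in [0, 1]
H_srgb = channel_pdf(I)            # L x 3, with L = 256

# CIELab channels live in different ranges (L* in [0, 100], a*/b* roughly
# [-128, 127]), so we rescale each channel to [0, 1] before binning (our choice).
lab = kornia.color.rgb_to_lab(I.unsqueeze(0)).squeeze(0)
lab = lab - lab.amin(dim=(1, 2), keepdim=True)
lab = lab / lab.amax(dim=(1, 2), keepdim=True).clamp_min(1e-8)
H_lab = channel_pdf(lab)           # L x 3
```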
Initially, the input histograms, $\mathbf{H}^{\mathrm{sRGB}}$ or $\mathbf{H}^{\mathrm{Lab}}$, pass through a one-dimensional convolutional layer to produce histogram features $\mathbf{H}_{\mathbf{0}}^{\mathbf{A}} \in \mathbb{R}^{L \times C}$, where $\mathbf{A} \in \{\mathrm{sRGB}, \mathrm{Lab}\}$ and $C$ is the feature dimension. These features are subsequently processed through a U-shaped transformer structure with PDFformer blocks that facilitate upsampling and downsampling in the encoding and decoding paths. Each block consists of two sequences of Layer Normalization (LN), Channel Attention (CA) [14], and a feed-forward Multilayer Perceptron (MLP) [11], arranged in the following order:

$$
\hat{\mathbf{H}}_{\mathbf{i}}^{\mathbf{A}} = \operatorname{CA}\left(\operatorname{LN}\left(\mathbf{H}_{\mathbf{i}-1}^{\mathbf{A}}\right)\right) + \mathbf{H}_{\mathbf{i}-1}^{\mathbf{A}}; \tag{1}
$$

$$
\mathbf{H}_{\mathbf{i}}^{\mathbf{A}} = \operatorname{GELU}\left(\operatorname{MLP}\left(\operatorname{LN}\left(\hat{\mathbf{H}}_{\mathbf{i}}^{\mathbf{A}}\right)\right)\right) + \hat{\mathbf{H}}_{\mathbf{i}}^{\mathbf{A}},
$$

where $\operatorname{GELU}(\cdot)$ denotes the GELU activation function, and $\mathbf{i}$ is the block index, starting from 1. PDFformer has $K$ PDFformer blocks in the encoding path, followed by a bottleneck stage, which is also a PDFformer block. These blocks are interconnected via downsampling, implemented through a $4 \times 1$ convolution with a stride of 2 and channel doubling. This process yields an output of $\mathbf{H}_{\mathbf{K}+1}^{\mathbf{A}} \in \mathbb{R}^{\frac{L}{2^{K}} \times 2^{K} C}$. Following the bottleneck, the decoding path contains $K$ PDFformer blocks, with upsampling and channel reduction applied between blocks. The first two blocks halve the number of channels, while the remaining blocks quarter them, as in [25]. Upsampling is achieved using a $2 \times 1$ transposed convolution with a stride of 2. Additionally, each block in the decoding path takes the output from the previous block and concatenates it with the corresponding output from the encoding path of the same spatial size. The final output, $\mathbf{H}_{\mathbf{2K}+1}^{\mathbf{A}} \in \mathbb{R}^{L \times 2C}$, passes through a convolutional layer with a residual connection to the input histogram features and a softmax function to produce the corrected color or CIELab histograms $\mathbf{H}_{\mathbf{c}}^{\mathbf{A}} \in \mathbb{R}^{L \times 3}$ as:

$$
\mathbf{H}_{\mathbf{c}}^{\mathbf{A}} = \operatorname{Softmax}\left(\operatorname{Conv}_{3 \times 1}\left(\mathbf{H}_{\mathbf{2K}+1}^{\mathbf{A}}\right) + \mathbf{H}_{\mathbf{0}}^{\mathbf{A}}\right). \tag{2}
$$

# 3.2. Target Model — sRGBformer

To achieve a color-calibrated correction, we employ sRGBformer, a vision transformer designed to address color deviations. It systematically processes input features $\mathbf{X}_{\mathbf{0}}$ to generate the final WB-corrected image. Similar to PDFformer, sRGBformer employs a U-shaped transformer architecture with upsampling and downsampling. However, it uniquely integrates cross-modality knowledge from auxiliary models, using their corrected global color information as guidance. To enable this, we introduce our proposed Interactive Channel Attention (ICA) in each sRGBformer block, incorporating auxiliary model knowledge through dedicated pathways.
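For orientation, the block in Eq. (1) is compact in code, and the sRGBformer block described next shares the same skeleton with CA swapped for ICA. A rough PyTorch sketch with our own naming; the MLP width and the CA reduction ratio are assumptions, not values from the paper:

```python
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention1D(nn.Module):
    """Squeeze-and-excitation-style CA [14] over an (L, C) token sequence."""
    def __init__(self, dim: int, reduction: int = 4):  # reduction is our assumption
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(dim, dim // reduction), nn.ReLU(inplace=True),
            nn.Linear(dim // reduction, dim), nn.Sigmoid(),
        )

    def forward(self, x):              # x: (B, L, C)
        w = self.fc(x.mean(dim=1))     # squeeze over the bin axis -> (B, C)
        return x * w.unsqueeze(1)      # excitation: channel-wise reweighting

class PDFformerBlock(nn.Module):
    """Eq. (1): H' = CA(LN(H)) + H;  H'' = GELU(MLP(LN(H'))) + H'."""
    def __init__(self, dim: int):
        super().__init__()
        self.ln1, self.ln2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.ca = ChannelAttention1D(dim)
        self.mlp = nn.Linear(dim, dim)  # width is an assumption

    def forward(self, h):               # h: (B, L, C)
        h = self.ca(self.ln1(h)) + h
        return F.gelu(self.mlp(self.ln2(h))) + h
```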
Each sRGBformer block consists of two sets of LN, ICA, and an MLP, structured as follows:

$$
\hat{\mathbf{X}}_{\mathbf{i}} = \operatorname{ICA}\left(\operatorname{LN}\left(\mathbf{X}_{\mathbf{i}-1}\right)\right) + \mathbf{X}_{\mathbf{i}-1}; \tag{3}
$$

$$
\mathbf{X}_{\mathbf{i}} = \operatorname{GELU}\left(\operatorname{MLP}\left(\operatorname{LN}\left(\hat{\mathbf{X}}_{\mathbf{i}}\right)\right)\right) + \hat{\mathbf{X}}_{\mathbf{i}},
$$

where $\mathbf{X}_{\mathbf{i}}$ denotes the image features produced by the sRGBformer block at the $i$-th level. The encoder and decoder each consist of $K$ sRGBformer blocks, connected via a bottleneck stage with one sRGBformer block. In the encoder, downsampling is achieved using a $4 \times 4$ convolution with a stride of 2 and channel doubling, while the decoder applies upsampling via a $2 \times 2$ transposed convolution with a stride of 2 and channel halving. As in PDFformer, each decoder block receives the output from the previous block, concatenated with the corresponding output from the encoder of the same spatial size. The final decoder output $\mathbf{X}_{\mathbf{2K}+1} \in \mathbb{R}^{H \times W \times 2C}$ passes through a $3 \times 3$ convolutional layer with a residual connection to the input, producing the final WB-corrected image, $\mathbf{X}_{\mathbf{c}} \in \mathbb{R}^{H \times W \times 3}$.

![](images/cbb3ff0b83c41d00a1bf7c82b0173d3179f0e6e159f7b222f16f183168a5fcbd.jpg)
Figure 2. The ABC-Former framework consists of two key components: Auxiliary models (PDFformers) and a Target model (sRGBformer). The auxiliary models process sRGB and CIELab histograms to learn color features from different modalities, while the ICA module in the target model integrates this information to generate the final WB correction.

Interactive Channel Attention. The goal of ICA is to facilitate knowledge transfer from the auxiliary models to sRGBformer, enhancing WB correction. In sRGBformer, each encoder and decoder block is equipped with an ICA module that corresponds to the respective block in the encoders and decoders of the auxiliary models. First, $\mathbf{X}_{\mathbf{i}}$ is condensed into a vector with one value per channel by global average pooling (Avg.), followed by a convolution and sigmoid activation, generating a channel-wise weighting vector $\mathbf{W}_{\mathbf{i}}^{\mathbf{T}} \in \mathbb{R}^{1 \times 1 \times C}$ at the $i$-th level of the sRGBformer block. Similarly, weighting vectors $\mathbf{W}_{\mathbf{i}}^{\mathrm{sRGB}}, \mathbf{W}_{\mathbf{i}}^{\mathrm{Lab}} \in \mathbb{R}^{1 \times 1 \times C}$ are extracted by feeding $\mathbf{H}_{\mathbf{i}}^{\mathrm{sRGB}}$ and $\mathbf{H}_{\mathbf{i}}^{\mathrm{Lab}}$ into the CA modules within the corresponding PDFformer blocks, which include average pooling $\operatorname{Avg}(\cdot)$, convolution, the unsqueeze operation, and the excitation operation (sigmoid activation) at the $i$-th level. Next, to integrate cross-modal knowledge, we apply cross-modal re-parameterization [26], introducing learnable parameters $\lambda_{\mathbf{i}}^{\mathrm{sRGB}}$ and $\lambda_{\mathbf{i}}^{\mathrm{Lab}}$ to adjust $\mathbf{W}_{\mathbf{i}}^{\mathrm{sRGB}}$ and $\mathbf{W}_{\mathbf{i}}^{\mathrm{Lab}}$, respectively.
These are then combined with $\mathbf{W}_{\mathbf{i}}^{\mathbf{T}}$ along the channel dimension to obtain $\mathbf{W}_{\mathbf{i}}^{\mathrm{total}}$ as:

$$
\mathbf{W}_{\mathbf{i}}^{\mathbf{T}} = \operatorname{Sigmoid}\left(\operatorname{Conv}\left(\operatorname{Avg}\left(\mathbf{X}_{\mathbf{i}}\right)\right)\right); \tag{4}
$$

$$
\mathbf{W}_{\mathbf{i}}^{\mathrm{total}} = \mathbf{W}_{\mathbf{i}}^{\mathbf{T}} + \lambda_{\mathbf{i}}^{\mathrm{Lab}} \mathbf{W}_{\mathbf{i}}^{\mathrm{Lab}} + \lambda_{\mathbf{i}}^{\mathrm{sRGB}} \mathbf{W}_{\mathbf{i}}^{\mathrm{sRGB}}.
$$

At last, $\mathbf{X}_{\mathbf{i}}$ is channel-wise re-weighted by $\mathbf{W}_{\mathbf{i}}^{\mathrm{total}}$ to generate the refined sRGB features $\tilde{\mathbf{X}}_{\mathbf{i}}$ as:

$$
\tilde{\mathbf{X}}_{\mathbf{i}} = \mathbb{F}_{\mathrm{scale}}\left(\mathbf{W}_{\mathbf{i}}^{\mathrm{total}}, \mathbf{X}_{\mathbf{i}}\right), \tag{5}
$$

where $\mathbb{F}_{\mathrm{scale}}(\cdot, \cdot)$ represents the channel-wise multiplication function, applying scalar weights to the corresponding feature maps. Through ICA, we leverage calibrated global color information from modality-specific histogram-based features, enabling effective knowledge transfer for improved WB correction.

# 3.3. Loss Function

The proposed framework optimizes two auxiliary models and one target model. Specifically, the two auxiliary PDFformers process histogram inputs in a PDF format from either the sRGB or CIELab domain, while the target model, sRGBformer, is responsible for the final WB sRGB output. The cooperative interaction between the auxiliary and target models is crucial for achieving accurate WB in the output sRGB image. To train the auxiliary models, we use the L2 loss to measure the difference between the PDFs of the predicted and ground-truth color channel histograms, formulated as:

$$
\begin{array}{l}
\mathcal{L}_{\mathrm{pdf}}^{\mathrm{sRGB}} = \left\| \mathbf{H}_{\mathbf{c}}^{\mathbf{R}} - \mathbf{H}_{\mathbf{gt}}^{\mathbf{R}} \right\|_{2} + \left\| \mathbf{H}_{\mathbf{c}}^{\mathbf{G}} - \mathbf{H}_{\mathbf{gt}}^{\mathbf{G}} \right\|_{2} + \left\| \mathbf{H}_{\mathbf{c}}^{\mathbf{B}} - \mathbf{H}_{\mathbf{gt}}^{\mathbf{B}} \right\|_{2}; \\
\mathcal{L}_{\mathrm{pdf}}^{\mathrm{Lab}} = \left\| \mathbf{H}_{\mathbf{c}}^{\mathbf{L}} - \mathbf{H}_{\mathbf{gt}}^{\mathbf{L}} \right\|_{2} + \left\| \mathbf{H}_{\mathbf{c}}^{\mathbf{a}} - \mathbf{H}_{\mathbf{gt}}^{\mathbf{a}} \right\|_{2} + \left\| \mathbf{H}_{\mathbf{c}}^{\mathbf{b}} - \mathbf{H}_{\mathbf{gt}}^{\mathbf{b}} \right\|_{2}, \tag{6}
\end{array}
$$

where $\mathbf{H}_{\mathbf{c}}^{\mathrm{sRGB}} = [\mathbf{H}_{\mathbf{c}}^{\mathbf{R}}; \mathbf{H}_{\mathbf{c}}^{\mathbf{G}}; \mathbf{H}_{\mathbf{c}}^{\mathbf{B}}]$ and $\mathbf{H}_{\mathbf{c}}^{\mathrm{Lab}} = [\mathbf{H}_{\mathbf{c}}^{\mathbf{L}}; \mathbf{H}_{\mathbf{c}}^{\mathbf{a}}; \mathbf{H}_{\mathbf{c}}^{\mathbf{b}}]$ denote the corrected RGB and CIELab histograms, respectively, as estimated by the PDFformers, and $\mathbf{H}_{\mathbf{gt}}^{\mathrm{sRGB}}$ and $\mathbf{H}_{\mathbf{gt}}^{\mathrm{Lab}}$ represent the ground-truth histograms. We adopt the L2 loss as it penalizes large deviations more heavily, encouraging the model to align histogram bins across the distribution evenly.
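In code, Eq. (6) is essentially one line per modality. A minimal sketch with our own naming; the same function is applied to the sRGB and CIELab histogram pairs and the two terms are summed:

```python
import torch

def pdf_loss(h_pred: torch.Tensor, h_gt: torch.Tensor) -> torch.Tensor:
    """Eq. (6): sum of per-channel L2 norms between two L x 3 histograms."""
    return torch.linalg.vector_norm(h_pred - h_gt, dim=0).sum()

# L_pdf = pdf_loss(Hc_srgb, Hgt_srgb) + pdf_loss(Hc_lab, Hgt_lab)
```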
For sRGBformer, we use the L1 loss for training, as follows:

$$
\mathcal{L}_{\mathrm{rec}} = \left\| \mathbf{X}_{\mathbf{c}} - \mathbf{X}_{\mathbf{gt}} \right\|_{1}, \tag{7}
$$

where $\mathbf{X}_{\mathbf{c}}$ is the estimated WB image produced by sRGBformer, and $\mathbf{X}_{\mathbf{gt}}$ is the ground truth. The total loss is defined as $\mathcal{L}_{\mathrm{total}} = \mathcal{L}_{\mathrm{pdf}}^{\mathrm{sRGB}} + \mathcal{L}_{\mathrm{pdf}}^{\mathrm{Lab}} + \mathcal{L}_{\mathrm{rec}}$.

# 4. Experimental Results

Datasets. The commonly used public dataset, the Rendered WB dataset Set1 [3], is divided into three non-overlapping folds. For training, we randomly selected 12,000 rendered sRGB images with different WB settings from two of the folds. For testing, we use the third fold of Set1, also known as Set1-Test (21,046 images), as well as other datasets that have no overlap in scenes or cameras with the training data: Set2 of the Rendered WB dataset (2,881 images) [3] and the Rendered Cube+ dataset (10,242 images) [3, 6]. These datasets serve as benchmarks to evaluate the WB correction performance of our ABC-Former.

Evaluation Metrics. We evaluated our results using three widely used metrics: Mean Squared Error (MSE), Mean Angular Error (MAE), and $\Delta E$ 2000 [13], which quantify the differences between the predicted WB images and the ground truth. For each metric, we reported the mean, first quartile (Q1), median (Q2), and third quartile (Q3) of the error. Lower values in these metrics indicate better WB correction performance, consistent with those used in recent works [2, 3, 5, 18, 19]; a rough sketch of how these metrics can be computed is given after Table 1.

Implementation Details. We implemented ABC-Former using PyTorch. During training, we optimize both the auxiliary and target models simultaneously over 350 epochs using the Adam optimizer [17] with $\beta_{1} = 0.5$ and $\beta_{2} = 0.999$ for each model. The learning rate is set to $2 \times 10^{-4}$, and the embedding feature dimension to 16. For training, we randomly cropped four $128 \times 128$ patches from each training image as input. Additionally, we apply geometric transformations, including rotation and flipping, to augment the data.

Quantitative Experimental Results. The quantitative results in Table 1 show that ABC-Former performs favorably against five SOTA methods [2, 3, 5, 18, 19] across three public benchmark datasets [3, 6]. The compared methods were evaluated using either their publicly available pretrained models or results directly cited from their respective publications. However, SWBNet [19] was retrained to be tested on the Set1-Test and Set2 datasets, as its code and original results for these datasets were not provided (marked with * in Table 1); SWBNet's scores on the Cube+ dataset were taken from the original paper. On the Rendered WB Dataset Set1-Test and Set2, ABC-Former achieved the best performance on MSE, MAE, and $\Delta E$ 2000, indicating that it effectively removes color casts from images and achieves superior WB correction. On the Rendered Cube+ dataset, ABC-Former also delivered superior results, with the lowest mean scores in MSE, MAE, and $\Delta E$ 2000. Additionally, it maintains a competitive model size, larger only than Deep-WB [2] and Mixed-WB [5]. These results demonstrate ABC-Former's ability to achieve efficient WB correction across various datasets without significantly increasing model complexity, showcasing its robustness and generalization capabilities.

Qualitative Experimental Results.
We present the qualitative comparison results on the Rendered WB and Rendered Cube+ datasets in Figure 3 and Figure 4. Analyzing the color correction performance of the different methods, we see that while KNN [3], Deep-WB [2], Mixed-WB [5], and WBFlow [18] generally reduce color casts, they often exhibit color inconsistencies across different regions of the image. For example, in Figure 3, these methods tend to correct the colors of objects while neglecting the sky's color accuracy, resulting in an undesirable yellow tint. Similarly, in Figure 4, strong color casts lead to suboptimal WB correction, often leaving the image with an overall blue or yellow tone. In contrast, our method, guided by corrected global color information from multiple modalities, achieves a more balanced and consistent color correction across the entire image, producing natural and harmonious results.

In addition, Figure 5 presents a comparative analysis of global color distribution accuracy in the average WB results across different methods. The Bhattacharyya coefficient, introduced in [16], is employed to measure the similarity between the red, green, and blue histograms of the processed images and those of the ground-truth WB images across the three benchmark datasets. This coefficient ranges from 0 to 1, with higher values indicating a closer match to the ground-truth color distributions. The results show that ABC-Former achieves a higher similarity than the other methods. Additionally, the visualization at the bottom of the figure illustrates an example of the color distributions in the WB results obtained by the compared methods, highlighting that ABC-Former produces color distributions more consistent with the ground truth.

Ablation Studies. We investigated the impact of various combinations of modalities used for

Table 1. Quantitative results of ABC-Former and competing WB methods on three public benchmark datasets: the Rendered WB Dataset (Set1-Test and Set2) [3] and the Rendered Cube+ Dataset [3, 6]. The best results are highlighted in red, while the second-best results are highlighted in blue.
| Method | MSE ↓ (Mean / Q1 / Q2 / Q3) | MAE ↓ (Mean / Q1 / Q2 / Q3) | ΔE 2000 ↓ (Mean / Q1 / Q2 / Q3) | Size (MB) |
| --- | --- | --- | --- | --- |
| **Rendered WB Dataset: Set1-Test (21,046 images) [3]** | | | | |
| KNN [3] | 77.49 / 13.74 / 39.62 / 94.01 | 3.06° / 1.74° / 2.54° / 3.76° | 3.58 / 2.07 / 3.09 / 4.55 | 21.8 |
| Deep-WB [2] | 82.55 / 13.19 / 42.77 / 102.09 | 3.12° / 1.88° / 2.70° / 3.84° | 3.77 / 2.16 / 3.30 / 4.86 | 16.7 |
| Mixed-WB [5] | 142.25 / 26.81 / 67.17 / 164.66 | 4.07° / 2.64° / 3.68° / 5.16° | 4.55 / 3.00 / 4.15 / 5.63 | 5.1 |
| WBFlow [18] | 78.89 / 12.99 / 35.09 / 79.35 | 2.67° / 1.73° / 2.39° / 3.24° | 3.13 / 1.92 / 2.79 / 3.94 | 30.2 |
| SWBNet* [19] | 111.62 / 20.61 / 60.68 / 137.91 | 4.11° / 2.56° / 3.75° / 5.22° | 4.54 / 2.73 / 4.16 / 5.86 | 258.8 |
| ABC-Former | 20.47 / 4.65 / 10.02 / 21.05 | 1.99° / 1.25° / 1.73° / 2.33° | 2.18 / 1.38 / 1.86 / 2.59 | 20.2 |
| **Rendered WB Dataset: Set2 (2,881 images) [3]** | | | | |
| KNN [3] | 171.09 / 37.04 / 87.04 / 190.88 | 4.48° / 2.26° / 3.64° / 5.95° | 5.60 / 3.43 / 4.90 / 7.06 | 21.8 |
| Deep-WB [2] | 124.07 / 30.13 / 76.32 / 154.44 | 3.75° / 2.02° / 3.08° / 4.72° | 4.90 / 3.13 / 4.35 / 6.08 | 16.7 |
| Mixed-WB [5] | 188.76 / 48.64 / 112.32 / 219.91 | 4.92° / 2.69° / 4.10° / 6.37° | 6.05 / 3.45 / 4.92 / 7.20 | 5.1 |
| WBFlow [18] | 117.60 / 31.25 / 61.68 / 143.90 | 3.51° / 1.93° / 2.92° / 4.47° | 4.64 / 3.16 / 4.07 / 5.56 | 30.2 |
| SWBNet* [19] | 219.02 / 55.45 / 113.98 / 236.25 | 5.46° / 3.45° / 4.78° / 6.63° | 6.51 / 4.39 / 5.84 / 8.08 | 258.8 |
| ABC-Former | 104.31 / 25.55 / 58.61 / 132.90 | 3.39° / 1.87° / 2.73° / 4.30° | 4.56 / 2.97 / 4.13 / 5.63 | 20.2 |
| **Rendered Cube+ Dataset (10,242 images) [3, 6]** | | | | |
| KNN [3] | 194.98 / 27.43 / 57.08 / 118.21 | 4.12° / 1.96° / 3.17° / 5.04° | 5.68 / 3.22 / 4.61 / 6.70 | 21.8 |
| Deep-WB [2] | 80.46 / 15.43 / 33.88 / 74.42 | 3.45° / 1.87° / 2.82° / 4.26° | 4.59 / 2.68 / 3.81 / 5.53 | 16.7 |
| Mixed-WB [5] | 161.80 / 16.96 / 19.33 / 90.81 | 4.05° / 1.40° / 2.12° / 4.88° | 4.89 / 2.16 / 3.10 / 6.78 | 5.1 |
| WBFlow [18] | 75.39 / 14.22 / 30.90 / 72.91 | 3.34° / 1.87° / 2.82° / 4.11° | 4.28 / 2.68 / 3.77 / 5.21 | 30.2 |
| SWBNet [19] | 74.35 / 20.46 / 40.04 / 86.95 | 3.15° / 1.33° / 2.09° / 4.12° | 4.28 / 2.40 / 3.56 / 5.09 | 258.8 |
| ABC-Former | 60.60 / 12.15 / 26.92 / 57.20 | 2.99° / 1.63° / 2.45° / 3.69° | 3.95 / 2.35 / 3.40 / 4.86 | 20.2 |
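For reference, the per-image errors reported above can be approximated as follows. This is our sketch, not the authors' evaluation code; the 8-bit scaling in the MSE matches the magnitudes in Table 1 but is an assumption, and $\Delta E$ 2000 is omitted since it requires a full CIEDE2000 implementation (e.g., `skimage.color.deltaE_ciede2000`). The last function is the Bhattacharyya coefficient used in Figure 5:

```python
import torch
import torch.nn.functional as F

def mse(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """Mean squared error over 8-bit-scaled RGB values (inputs: 3 x H x W in [0, 1])."""
    return ((pred - gt) * 255.0).pow(2).mean()

def mean_angular_error(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """Mean angle (in degrees) between corresponding RGB pixel vectors."""
    cos = F.cosine_similarity(pred.reshape(3, -1), gt.reshape(3, -1), dim=0)
    return torch.rad2deg(torch.acos(cos.clamp(-1.0, 1.0))).mean()

def bhattacharyya(h1: torch.Tensor, h2: torch.Tensor) -> torch.Tensor:
    """Bhattacharyya coefficient [16] between two normalized histograms."""
    return torch.sqrt(h1 * h2).sum()
```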
![](images/c3378122664307760f9e6a413a22ad7619238a7db1f86bea05f23afd572a65da.jpg)

![](images/551cab629c1264d4c5034413d796f07d772303580bd0321e0bc3ef9de370ef9e.jpg)

![](images/69c9b7b603dfb72472638fb5c3629a1a454dae2c9ace30ec0928f92db10b3bb6.jpg)
Input

![](images/90b21a6f37be26da2b82d186d668f88fc9a76970239aaf02882730645069fbc3.jpg)

![](images/274b0d8ed5c490dbb989c157e6cd37ba4165830a18f653d82013dcf2c1e3f5eb.jpg)

![](images/9debd5d7045bff73cf6f43ae9b287aa5aa349b99612a69c472fac61f555c7ed0.jpg)
KNN

![](images/a8d6b41038be2e3c006563195444320ad8b2f8cb1b0c1b17352152cf76109389.jpg)

![](images/52fe869da81c3a8d34549519c14dd5829523b1f8e1cd5d824fef1f26af155959.jpg)

![](images/212fdce970499e8169dde75fe208fecc2c63f118d5034dc57c0bcd20b4bd2550.jpg)
Deep-WB

![](images/4a8103d7ed0d9d2271b26bf02ccc13fea9c3c9e460305ae0abcfc9b43e8c6464.jpg)

![](images/4a7f1af5212f11c2f2a2da4296eb909272aa9df93e68395c2042987e8021674e.jpg)

![](images/a116ce8cadb8fe19d639c1c63f0e2fa9a8df48fba2e99a8804f20bd49da8b763.jpg)
Mixed-WB
Figure 3. Qualitative comparisons with other sRGB-WB methods on the Rendered WB dataset [3], with the $\Delta E$ 2000 indicated in the bottom-right corner of each image.

![](images/816bbb8567944e4793334ba775b8e11f7738c51eb1f29c8c2cf843a1a8b1c998.jpg)

![](images/1578e6473e3d4095fc6dc20e95f44b59944d7489d6bab7b7de44d58e920161b2.jpg)

![](images/dba71879c777c4eaa2c389315803fc5fa1d3101f37d0a5fe8ca6be89f8c6d15f.jpg)
WBFlow

![](images/6dcc78a119e3152c36728c6f826b20822c372edd29411248d6e2e8f6e6323eb6.jpg)

![](images/61d8524cbfadf25a49d98c82d0001471003fc6001d52d82febe8be9c7fa8fe37.jpg)

![](images/5e84cb823eda855f16fce087b5ca4795849e080d29892d97548f2378fee8f8fa.jpg)
Ours

![](images/4c130dcc5f1dcc973e9fde452a1b89ff9d9e210238fe8db29599d8618bcac07b.jpg)

![](images/f4794eb6ce4ff7737ac2b06d6440e95c88f0509c01d628b3e8bcffe08da8291f.jpg)

![](images/069b3937070a01f3592c6d0ba272852ce9fc833a8312857090d40dd911351cdf.jpg)
GT

![](images/687b9d42ba6f432d13f1b3edaaa2f1bf690bf6d0b0bb86325878ad1c95cb4593.jpg)
Figure 4. Qualitative comparisons with other sRGB-WB methods on the Rendered Cube+ dataset [3, 6], with the $\Delta E$ 2000 displayed in the bottom-right corner of each image.

![](images/7b18e71146036143bf058e4ad6b666f46af9f089e0d2deea7bb6e19dfa269b42.jpg)
Figure 5. Evaluation of the Bhattacharyya coefficient [16] for color histograms across three benchmark datasets [3, 6], showing global color accuracy for red, green, and blue histograms in sRGB-WB methods. ABC-Former achieves more consistent color distributions by leveraging global color temperature features from both sRGB and CIELab histograms.

WB correction on the Rendered Cube+ dataset. We report the mean values across three evaluation metrics, along with model sizes for each combination of auxiliary models. As shown in Table 2, using only the target model (sRGBformer) without additional guidance from the global color information of other modalities results in suboptimal WB accuracy. Adding a single auxiliary model (e.g., $\mathrm{PDF}_{\mathrm{sRGB}} + \mathrm{sRGB}$ or $\mathrm{PDF}_{\mathrm{Lab}} + \mathrm{sRGB}$) improves WB performance over the target model alone. However, utilizing a single auxiliary model to jointly learn both sRGB and CIELab histograms ($\mathrm{PDF}_{\mathrm{sRGB:Lab}} + \mathrm{sRGB}$) proves less effective, as a single
Our full ABC-Former design leverages global color temperature features from multiple modalities through two auxiliary models to guide color adjustment, achieving the highest WB accuracy. To ensure a fair comparison across these combinations, we matched the model size to that of ABC-Former by doubling the bottleneck channels in sRGBformer (Baseline) and evenly increasing channels across layers when using a single auxiliary model. + +Table 3 compares different loss functions that train auxiliary models for learning sRGB and CIELab histograms. + +Table 2. Ablation studies of ABC-Former w/ and w/o guidance from different modalities on the Rendered Cube+ dataset [3, 6]. Here, sRGB denotes sRGBformer, while $\mathrm{PDF}_{\mathrm{sRGB}}$ , and $\mathrm{PDF}_{\mathrm{Lab}}$ represent auxiliary PDFformer models for sRGB and CIELab histograms, respectively. + +
| Modalities | MSE ↓ | MAE ↓ | ΔE 2000 ↓ | Size (MB) |
| --- | --- | --- | --- | --- |
| sRGB | 76.56 | 3.35° | 4.31 | 20.4 |
| $\mathrm{PDF}_{\mathrm{Lab}}$ + sRGB | 73.35 | 3.26° | 4.20 | 20.4 |
| $\mathrm{PDF}_{\mathrm{sRGB}}$ + sRGB | 68.65 | 3.12° | 4.08 | 20.4 |
| $\mathrm{PDF}_{\mathrm{sRGB:Lab}}$ + sRGB | 72.38 | 3.38° | 4.38 | 20.4 |
| $\mathrm{PDF}_{\mathrm{sRGB}}$ + $\mathrm{PDF}_{\mathrm{Lab}}$ + sRGB | 60.60 | 2.99° | 3.95 | 20.2 |
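In code terms, the rows of Table 2 correspond to dropping one or both $\lambda$-weighted terms from Eq. (4). A minimal sketch of the ICA combination, with our own naming and only loosely following the convolution details of the paper:

```python
import torch
import torch.nn as nn

class ICA(nn.Module):
    """Eqs. (4)-(5): reweight sRGB features with auxiliary histogram weights."""
    def __init__(self, dim: int):
        super().__init__()
        self.conv = nn.Conv2d(dim, dim, kernel_size=1)
        # learnable cross-modal re-parameterization weights [26]
        self.lam_srgb = nn.Parameter(torch.zeros(1))
        self.lam_lab = nn.Parameter(torch.zeros(1))

    def forward(self, x, w_srgb=None, w_lab=None):
        # x: (B, C, H, W); w_srgb / w_lab: (B, C, 1, 1) from the PDFformers' CA
        w = torch.sigmoid(self.conv(x.mean(dim=(2, 3), keepdim=True)))  # W_i^T
        if w_srgb is not None:   # the ablation rows toggle these two terms
            w = w + self.lam_srgb * w_srgb
        if w_lab is not None:
            w = w + self.lam_lab * w_lab
        return x * w             # Eq. (5): channel-wise rescaling
```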
+ +Table 3. Ablation studies on the loss function used to train auxiliary models for learning sRGB and CIELab histograms. We compare the KL divergence, Wasserstein distance, and our chosen L2 loss on the Rendered Cube+ dataset [3, 6]. + +
| Loss function | MSE ↓ | MAE ↓ | ΔE 2000 ↓ | Size (MB) |
| --- | --- | --- | --- | --- |
| KL divergence | 72.67 | 3.29° | 4.17 | 20.2 |
| Wasserstein distance | 70.22 | 3.12° | 4.15 | 20.2 |
| L2 loss (Proposed) | 60.60 | 2.99° | 3.95 | 20.2 |
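To make the comparison concrete, each candidate reduces to a few lines on an $L \times 3$ histogram; in 1D, the Wasserstein distance has a closed form as the L1 distance between CDFs. A sketch with our own naming, where the KL direction and the $\varepsilon$-smoothing are assumptions:

```python
import torch

def l2_hist(p: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
    """Proposed: per-channel L2 distance, as in Eq. (6)."""
    return torch.linalg.vector_norm(p - q, dim=0).sum()

def kl_hist(p: torch.Tensor, q: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """KL(q || p); empty bins force the eps-smoothing mentioned in the text."""
    return (q * ((q + eps) / (p + eps)).log()).sum()

def w1_hist(p: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
    """1D Wasserstein-1 per channel: L1 distance between the CDFs."""
    return (p.cumsum(dim=0) - q.cumsum(dim=0)).abs().sum()
```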
+ +Table 4. WB manipulation on the Rendered Cube+ dataset [3, 6]. + +
| Method | MSE ↓ (Mean / Q1 / Q2 / Q3) | MAE ↓ (Mean / Q1 / Q2 / Q3) | ΔE 2000 ↓ (Mean / Q1 / Q2 / Q3) |
| --- | --- | --- | --- |
| Deep-WB [2] | 199.38 / 32.30 / 63.34 / 142.76 | 5.40° / 2.67° / 4.04° / 6.36° | 5.98 / 3.44 / 4.78 / 7.29 |
| ABC-Former | 82.37 / 0.01 / 17.76 / 65.36 | 2.78° / 1.06° / 2.85° / 3.22° | 2.89 / 0.07 / 2.36 / 4.12 |
![](images/26fbe8e8784e5c94a3f3e9baf308d5e9b90f3978887521261dadca03bdd8f7ed.jpg)
Figure 6. Analysis of the learned weights, $\lambda_{\mathbf{i}}^{\mathbf{sRGB}}$ and $\lambda_{\mathbf{i}}^{\mathbf{Lab}}$, for cross-modality knowledge transfer on the Rendered Cube+ dataset [3, 6]. Here, $i \in \{e_0, e_1, e_2, b_0, d_0, d_1, d_2\}$ represents the level of the sRGBformer block, where $e$, $b$, and $d$ denote the encoder, bottleneck, and decoder layers, respectively.

As can be seen, the L2 loss shows better performance compared to KL divergence and Wasserstein distance, attributed to its stability and effectiveness in aligning histograms. In contrast, KL divergence may encounter issues with zero-probability bins, and Wasserstein distance can be non-smooth and challenging to optimize in high-dimensional spaces.

Analyzing Learned Weights for Cross-Modality Transfer. We present the learned weights, $\lambda_{\mathbf{i}}^{\mathrm{sRGB}}$ and $\lambda_{\mathbf{i}}^{\mathrm{Lab}}$, at each level of ABC-Former to illustrate how modality-complementary knowledge guides WB correction. As shown in Figure 6, the influence of calibrated global color information from the sRGB and CIELab histogram modalities intensifies toward the bottleneck block, peaking at the bottleneck. This indicates that high-level histogram-based WB color features from both modalities, capturing global, semantically rich color information, are crucial for reweighting sRGB image features for accurate WB correction. Notably, the sRGB histogram modality has a slightly greater impact than CIELab, though the difference is minimal. This suggests that both color modalities collaborate effectively, allowing ABC-Former to balance raw color intensity with perceptually uniform CIELab properties.

WB Manipulation. Following the setup described in [2], we conduct experiments to alter the input image's colors to match target WB settings. These settings correspond to the following color temperatures: tungsten $(2850\mathrm{K})$, fluorescent $(3800\mathrm{K})$, daylight $(5500\mathrm{K})$, cloudy $(6500\mathrm{K})$, and shade $(7500\mathrm{K})$. As shown in Table 4, ABC-Former significantly outperforms Deep-WB [2] in achieving accurate color transformations.

# 5. Conclusion

We presented ABC-Former, an Auxiliary Bimodal Cross-domain Transformer that enhances sRGB WB correction by leveraging complementary information from multiple modalities. ABC-Former uses Interactive Channel Attention to facilitate cross-modality knowledge transfer, integrating calibrated color features from both sRGB and CIELab histograms. This multimodal approach enables a more nuanced fusion of color information, allowing the model to handle diverse color temperatures and complex scenes with pronounced color shifts. Extensive experiments have demonstrated that ABC-Former consistently outperforms state-of-the-art methods in both quantitative and qualitative evaluations. For future work, extending the framework to 2D histograms is a promising direction.

# 6. Acknowledgments

This paper was supported in part by the National Science and Technology Council, Taiwan, under grants NSTC 113-2221-E-004-001-MY3, 113-2622-E-004-001, 113-2221-E-004-006-MY2, 112-2634-F-002-005, 113-2634-F-002-008, and 113-2923-E-A49-003-MY2.

# References

[1] Mahmoud Afifi and Michael S Brown. What else can fool deep learning? Addressing color constancy errors on deep neural network performance. In ICCV, 2019. 1
[2] Mahmoud Afifi and Michael S Brown.
Deep white-balance editing. In CVPR, 2020. 2, 3, 5, 6, 8
[3] Mahmoud Afifi, Brian Price, Scott Cohen, and Michael S Brown. When color constancy goes wrong: Correcting improperly white-balanced images. In CVPR, 2019. 1, 2, 3, 5, 6, 7, 8
[4] Mahmoud Afifi, Jonathan T Barron, Chloe LeGendre, Yun-Ta Tsai, and Francois Bleibel. Cross-camera convolutional color constancy. In ICCV, 2021. 2
[5] Mahmoud Afifi, Marcus A Brubaker, and Michael S Brown. Auto white-balance correction for mixed-illuminant scenes. In WACV, 2022. 2, 3, 5, 6
[6] Nikola Banić, Karlo Koščević, and Sven Lončarić. Unsupervised learning for color constancy. arXiv preprint arXiv:1712.00436, 2017. 5, 6, 7, 8
[7] Jonathan T Barron and Yun-Ta Tsai. Fast Fourier color constancy. In CVPR, 2017. 2
[8] Simone Bianco and Claudio Cusano. Quasi-unsupervised color constancy. In CVPR, 2019.
[9] Jonathan Cepeda-Negrete and Raul E Sanchez-Yanez. Gray-world assumption on perceptual color spaces. In PSIVT, 2014. 2
[10] Jiankang Deng, Jia Guo, Niannan Xue, and Stefanos Zafeiriou. ArcFace: Additive angular margin loss for deep face recognition. In CVPR, 2019. 1
[11] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In ICLR, 2021. 3
[12] Jun Fu, Jing Liu, Haijie Tian, Yong Li, Yongjun Bao, Zhiwei Fang, and Hanqing Lu. Dual attention network for scene segmentation. In CVPR, 2019. 1
[13] Gaurav Sharma, Wencheng Wu, and Edul N Dalal. The CIEDE2000 color-difference formula: Implementation notes, supplementary test data, and mathematical observations. Color Research and Application, 2005. 5
[14] Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. In CVPR, 2018. 3
[15] Yuanming Hu, Baoyuan Wang, and Stephen Lin. FC4: Fully convolutional color constancy with confidence-weighted pooling. In CVPR, 2017. 2
[16] Thomas Kailath. The divergence and Bhattacharyya distance measures in signal selection. IEEE Transactions on Communication Technology, 1967. 5, 7
[17] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015. 5
[18] Chunxiao Li, Xuejing Kang, and Anlong Ming. WBFlow: Few-shot white balance for sRGB images via reversible neural flows. In IJCAI, 2023. 2, 3, 5, 6
[19] Chunxiao Li, Xuejing Kang, Zhifeng Zhang, and Anlong Ming. SWBNet: A stable white balance network for sRGB images. In AAAI, 2023. 2, 5, 6
[20] Yi-Chen Lo, Chia-Che Chang, Hsuan-Chao Chiu, Yu-Hao Huang, Chia-Ping Chen, Yu-Lin Chang, and Kevin Jou. CLCC: Contrastive learning for color constancy. In CVPR, 2021. 2
[21] Taishi Ono, Yuhi Kondo, Legong Sun, Teppei Kurita, and Yusuke Moriuchi. Degree-of-linear-polarization-based color constancy. In CVPR, 2022. 2
[22] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In ICML, 2021. 2
[23] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In NIPS, 2015. 1
[24] Joost Van De Weijer, Theo Gevers, and Arjan Gijsenij. Edge-based color constancy. IEEE TIP, 2007. 2
[25] Zhendong Wang, Xiaodong Cun, Jianmin Bao, Wengang Zhou, Jianzhuang Liu, and Houqiang Li. Uformer: A general U-shaped transformer for image restoration. In CVPR, 2022.
3 +[26] Yiyuan Zhang, Xiaohan Ding, Kaixiong Gong, Yixiao Ge, Ying Shan, and Xiangyu Yue. Multimodal pathway: Improve transformers with irrelevant data from other modalities. In CVPR, 2024. 2, 4 \ No newline at end of file diff --git a/2025/ABC-Former_ Auxiliary Bimodal Cross-domain Transformer with Interactive Channel Attention for White Balance/images.zip b/2025/ABC-Former_ Auxiliary Bimodal Cross-domain Transformer with Interactive Channel Attention for White Balance/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..932868929c13194ca6e1c4659b384e5880d8ec3a --- /dev/null +++ b/2025/ABC-Former_ Auxiliary Bimodal Cross-domain Transformer with Interactive Channel Attention for White Balance/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2f28eaca506dd1aaaef975f6829062ea7006bfd65763d8b85336fff50454bbd6 +size 970497 diff --git a/2025/ABC-Former_ Auxiliary Bimodal Cross-domain Transformer with Interactive Channel Attention for White Balance/layout.json b/2025/ABC-Former_ Auxiliary Bimodal Cross-domain Transformer with Interactive Channel Attention for White Balance/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..b6a88cba1daa9e8b47cd039ded35d268a4eef767 --- /dev/null +++ b/2025/ABC-Former_ Auxiliary Bimodal Cross-domain Transformer with Interactive Channel Attention for White Balance/layout.json @@ -0,0 +1,7823 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 66, + 102, + 545, + 138 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 102, + 545, + 138 + ], + "spans": [ + { + "bbox": [ + 66, + 102, + 545, + 138 + ], + "type": "text", + "content": "ABC-Former: Auxiliary Bimodal Cross-domain Transformer with Interactive Channel Attention for White Balance" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 159, + 161, + 490, + 190 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 159, + 161, + 490, + 190 + ], + "spans": [ + { + "bbox": [ + 159, + 161, + 490, + 190 + ], + "type": "text", + "content": "Yu-Cheng Chiu* Guan-Rong Chen* Zihao Chen Yan-Tsung Peng† National Chengchi University" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 167, + 190, + 483, + 203 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 167, + 190, + 483, + 203 + ], + "spans": [ + { + "bbox": [ + 167, + 190, + 483, + 203 + ], + "type": "text", + "content": "No. 
64, Section 2, Zhinan Rd, Wenshan District, Taipei City, 116" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 62, + 203, + 585, + 217 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 203, + 585, + 217 + ], + "spans": [ + { + "bbox": [ + 62, + 203, + 585, + 217 + ], + "type": "text", + "content": "111753202@nccu.edu.tw 111753139@nccu.edu.tw 113761501@nccu.edu.tw ytpeng@cs.nccu.edu.tw" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 151, + 244, + 200, + 257 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 151, + 244, + 200, + 257 + ], + "spans": [ + { + "bbox": [ + 151, + 244, + 200, + 257 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 270, + 296, + 510 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 270, + 296, + 510 + ], + "spans": [ + { + "bbox": [ + 55, + 270, + 296, + 510 + ], + "type": "text", + "content": "The primary goal of white balance (WB) for sRGB images is to correct inaccurate color temperatures, ensuring that images display natural, neutral colors. While existing WB methods yield reasonable results, their effectiveness is limited. They either focus solely on global color adjustments applied before the camera-specific image signal processing pipeline or rely on end-to-end models that generate WB outputs without accounting for global color trends, leading to suboptimal correction. To address these limitations, we propose an Auxiliary Bimodal Cross-domain Transformer (ABC-Former) that enhances WB correction by leveraging complementary knowledge from global color information from CIELab and RGB histograms alongside sRGB inputs. By introducing an Interactive Channel Attention (ICA) module to facilitate cross-modality global knowledge transfer, ABC-Former achieves more precise WB correction. Experimental results on benchmark WB datasets show that ABC-Former performs favorably against state-of-the-art WB methods. The source code is available at https://github.com/ytpeng-aimlab/ABC-Former." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 533, + 135, + 544 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 533, + 135, + 544 + ], + "spans": [ + { + "bbox": [ + 55, + 533, + 135, + 544 + ], + "type": "text", + "content": "1. Introduction" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 55, + 552, + 295, + 660 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 552, + 295, + 660 + ], + "spans": [ + { + "bbox": [ + 55, + 552, + 295, + 660 + ], + "type": "text", + "content": "White balance (WB) correction ensures consistent and accurate color production across varying lighting conditions. However, camera Image Signal Processing (ISP) can introduce color casts in sRGB images due to inaccurate or customized WB settings applied to raw-RGB inputs. These distortions can degrade the accuracy of tasks like image classification and segmentation, where precise color is crucial [10, 12, 23]. Consequently, WB correction has gained significant research interest." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 55, + 661, + 296, + 685 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 661, + 296, + 685 + ], + "spans": [ + { + "bbox": [ + 55, + 661, + 296, + 685 + ], + "type": "text", + "content": "Significant effort has been made to improve WB within the camera's ISP pipeline. 
Raw-WB methods estimate the" + } + ] + } + ], + "index": 10 + }, + { + "type": "image", + "bbox": [ + 317, + 245, + 552, + 318 + ], + "blocks": [ + { + "bbox": [ + 317, + 245, + 552, + 318 + ], + "lines": [ + { + "bbox": [ + 317, + 245, + 552, + 318 + ], + "spans": [ + { + "bbox": [ + 317, + 245, + 552, + 318 + ], + "type": "image", + "image_path": "dc3a462b8e9f99c49514e17bdd64749d9d84856929305498c0a53ba146854fdf.jpg" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_body" + } + ], + "index": 11 + }, + { + "type": "image", + "bbox": [ + 317, + 318, + 552, + 380 + ], + "blocks": [ + { + "bbox": [ + 317, + 318, + 552, + 380 + ], + "lines": [ + { + "bbox": [ + 317, + 318, + 552, + 380 + ], + "spans": [ + { + "bbox": [ + 317, + 318, + 552, + 380 + ], + "type": "image", + "image_path": "d84a832deedf8ac4ac147fb047a3056f07187691fd7a0655c9afd11b305027a2.jpg" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_body" + } + ], + "index": 12 + }, + { + "type": "image", + "bbox": [ + 317, + 380, + 552, + 469 + ], + "blocks": [ + { + "bbox": [ + 317, + 380, + 552, + 469 + ], + "lines": [ + { + "bbox": [ + 317, + 380, + 552, + 469 + ], + "spans": [ + { + "bbox": [ + 317, + 380, + 552, + 469 + ], + "type": "image", + "image_path": "3ce855a07bd457cdced9502f45450bc79508178e0c0b9ea05bbe7a156d755c83.jpg" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 313, + 479, + 555, + 556 + ], + "lines": [ + { + "bbox": [ + 313, + 479, + 555, + 556 + ], + "spans": [ + { + "bbox": [ + 313, + 479, + 555, + 556 + ], + "type": "text", + "content": "Figure 1. (a) Traditional raw-WB methods predict the scene's illuminant, perform Illumination Correction (IC), and generate the sRGB output via camera-specific ISP. (b) DNN-based sRGB-WB methods apply end-to-end models directly to sRGB images for WB correction. (c) ABC-Former improves WB accuracy by converting the input into multiple modalities, enhancing illumination correction through auxiliary and primary models." + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_caption" + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 591, + 555, + 652 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 591, + 555, + 652 + ], + "spans": [ + { + "bbox": [ + 313, + 591, + 555, + 652 + ], + "type": "text", + "content": "scene's illuminant to correct color shifts in raw images before further processing. However, due to the non-linear transformations applied by the ISP during rendering, these corrections may not fully compensate for color shifts in the final sRGB output [1]." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 313, + 653, + 556, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 653, + 556, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 653, + 556, + 715 + ], + "type": "text", + "content": "Several sRGB-WB methods address color shifts caused by imprecise WB in the camera's ISP, categorized into exemplar-based and DNN-based approaches. 
Exemplar-based methods like KNN [3] classify images from the Rendered WB dataset and apply the best-matching nonlinear" + } + ] + } + ], + "index": 16 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "spans": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "text", + "content": "CVF" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "spans": [ + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "type": "text", + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 693, + 179, + 703 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 693, + 179, + 703 + ], + "spans": [ + { + "bbox": [ + 69, + 693, + 179, + 703 + ], + "type": "text", + "content": "* equal contribution" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 69, + 703, + 190, + 713 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 703, + 190, + 713 + ], + "spans": [ + { + "bbox": [ + 69, + 703, + 190, + 713 + ], + "type": "text", + "content": "† corresponding author" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "type": "text", + "content": "21258" + } + ] + } + ], + "index": 20 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 294, + 167 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 294, + 167 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 294, + 167 + ], + "type": "text", + "content": "mappings for correction. DNN-based methods such as DEEP-WB [2] use CNNs for single-illuminant color correction, while WBFlow [18] extracts pseudo-raw features via a reversible flow model for sRGB correction. SWBNet [19] employs a transformer in the DCT domain to refine color-sensitive features. While effective, these methods fail to fully integrate global color trends and scene information for more comprehensive WB correction." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 168, + 296, + 430 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 168, + 296, + 430 + ], + "spans": [ + { + "bbox": [ + 56, + 168, + 296, + 430 + ], + "type": "text", + "content": "Our work utilizes alternative modalities, such as color histograms, to learn global color temperature for effective WB correction. Unlike images, which encode spatial and color information, histograms capture color distribution across channels without spatial details. While per-channel histograms do not fully preserve color relationships between pixels, we can explore both sRGB and CIELab color histograms to go beyond sRGB-input images, extracting global color information. Inspired by [26], we propose ABC-Former, an Auxiliary Bimodal Cross-domain Transformer architecture that integrates global color information from both sRGB and CIELab histograms. 
It consists of two auxiliary models for histogram-based learning and a target model for sRGB image WB correction, where the sRGB histogram supervises raw color intensity, and CIELab ensures perceptually uniform color distribution. To enhance WB correction in the target sRGB model, we introduce the Interactive Channel Attention (ICA) module, which transfers modality-complementary knowledge using re-parameterization, allowing the target model to adaptively reweight image features for improved accuracy. Our key contributions are listed as follows:" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 433, + 295, + 539 + ], + "type": "list", + "angle": 0, + "index": 5, + "blocks": [ + { + "bbox": [ + 55, + 433, + 295, + 467 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 433, + 295, + 467 + ], + "spans": [ + { + "bbox": [ + 55, + 433, + 295, + 467 + ], + "type": "text", + "content": "- We propose ABC-Former, which leverages histogram-based global color features via auxiliary models to refine WB correction in the target sRGB model." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 56, + 468, + 294, + 503 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 468, + 294, + 503 + ], + "spans": [ + { + "bbox": [ + 56, + 468, + 294, + 503 + ], + "type": "text", + "content": "- We introduce the ICA module to facilitate effective cross-modality knowledge transfer, optimizing sRGB feature reweighting for better WB results." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 56, + 505, + 294, + 539 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 505, + 294, + 539 + ], + "spans": [ + { + "bbox": [ + 56, + 505, + 294, + 539 + ], + "type": "text", + "content": "- Extensive experiments on benchmark WB datasets demonstrate that ABC-Former outperforms state-of-the-art (SOTA) sRGB-WB methods." + } + ] + } + ], + "index": 4 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 55, + 550, + 146, + 562 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 550, + 146, + 562 + ], + "spans": [ + { + "bbox": [ + 55, + 550, + 146, + 562 + ], + "type": "text", + "content": "2. Related Works" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 570, + 294, + 689 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 570, + 294, + 689 + ], + "spans": [ + { + "bbox": [ + 55, + 570, + 294, + 689 + ], + "type": "text", + "content": "Raw-WB Approaches. The WB module in a camera's ISP corrects raw images for accurate color temperature, compensating for lighting variations. Traditional WB methods estimate the global light source color and apply uniform gain coefficients for illumination correction. However, they assume a consistent color temperature across the scene, making them ineffective under mixed lighting. Additionally, these methods irreversibly alter raw images, limiting precise sRGB adjustments in post-processing [4, 7-9, 15, 20, 21, 24]." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 689, + 294, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 689, + 294, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 689, + 294, + 713 + ], + "type": "text", + "content": "sRGB-WB Approaches. 
To address the shortcomings of traditional WB methods, recent research has explored" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 72, + 553, + 215 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 72, + 553, + 215 + ], + "spans": [ + { + "bbox": [ + 313, + 72, + 553, + 215 + ], + "type": "text", + "content": "sRGB-WB approaches, which refine color correction beyond the ISP stage. These methods can be classified into exemplar-based methods [3, 5] and DNN-based methods [18, 19]. Exemplar-based methods apply trained nonlinear mappings for color correction. For example, Afifi et al. [3] use histogram features to find images with similar color distributions and derive a correction matrix to adjust colors accordingly. Mixed-WB [5] generates multiple WB versions of an image, averaging their weighting maps to achieve optimal correction. However, these methods rely heavily on predefined training data, making them less adaptable to diverse lighting conditions." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 216, + 553, + 358 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 216, + 553, + 358 + ], + "spans": [ + { + "bbox": [ + 313, + 216, + 553, + 358 + ], + "type": "text", + "content": "DNN-based methods, such as WBFlow [18], use neural flow for reversible color correction, mapping color-cast sRGB images to a pseudo-raw feature space for linear WB. WBFlow also incorporates a camera transformation module for few-shot adaptation, improving generalization ability. SWBNet [19] suppresses temperature-sensitive low-frequency information and employs a contrast loss to align scene features across varying temperatures, enhancing WB stability. It also uses adaptive weights to correct multiple color shifts under mixed lighting. Although these methods are effective, they remain constrained by the sRGB image modality." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 360, + 553, + 514 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 360, + 553, + 514 + ], + "spans": [ + { + "bbox": [ + 313, + 360, + 553, + 514 + ], + "type": "text", + "content": "Multimodal Training Approaches. Unimodal training uses data from a single modality, limiting its ability to generalize across different modalities. In contrast, multimodal training enables models to learn from multiple modalities, which can be strongly correlated (paired data) or weakly correlated (irrelevant data). CLIP [22] exemplifies strongly correlated multimodal data, using contrastive learning to align image and text features, requiring paired data. In contrast, M2PT [26] integrates irrelevant data from different modalities by leveraging re-parameterization to transfer knowledge from pre-trained auxiliary models. While this enhances cross-modal learning, it increases data collection costs and pre-training demands." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 515, + 553, + 597 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 515, + 553, + 597 + ], + "spans": [ + { + "bbox": [ + 313, + 515, + 553, + 597 + ], + "type": "text", + "content": "Inspired by M2PT, we adopt multimodal training to capture global color information from sRGB images to enhance WB correction. Unlike M2PT, which uses irrelevant data, we train the sRGB model to learn corrected histogram-based color features from the image's own color and CIELab histograms. Figure 1 compares conventional DNN-based raw-WB and sRGB-WB methods with the proposed ABC-Former."
+ } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 609, + 420, + 623 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 609, + 420, + 623 + ], + "spans": [ + { + "bbox": [ + 313, + 609, + 420, + 623 + ], + "type": "text", + "content": "3. Proposed Method" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 630, + 553, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 630, + 553, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 630, + 553, + 713 + ], + "type": "text", + "content": "The proposed ABC-Former consists of three transformer models: two auxiliary transformers that learn to correct color and CIELab histograms, and a primary transformer that processes the input sRGB image for the final WB correction. Unlike M2PT [26], which tokenizes irrelevant multimodal data for unified processing, our approach leverages sRGB images with their strongly related color information." + } + ] + } + ], + "index": 14 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "type": "text", + "content": "21259" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 294, + 144 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 294, + 144 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 294, + 144 + ], + "type": "text", + "content": "To efficiently transfer complementary color knowledge, we introduce the Interactive Channel Attention (ICA) module, which utilizes condensed histogram-based features to enhance color temperature correction, improving accuracy and visual quality. The overall architecture is shown in Figure 2." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 150, + 225, + 163 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 150, + 225, + 163 + ], + "spans": [ + { + "bbox": [ + 55, + 150, + 225, + 163 + ], + "type": "text", + "content": "3.1. Auxiliary Model — PDFformer" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 167, + 296, + 395 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 167, + 296, + 395 + ], + "spans": [ + { + "bbox": [ + 55, + 167, + 296, + 395 + ], + "type": "text", + "content": "Most prior sRGB-WB works [2, 3, 5, 18] focus on local pixel information within the sRGB domain, often overlooking global color temperature and perceptual color relationships, which are crucial for effective WB correction. Therefore, integrating global color information from alternative modalities can enhance accuracy by providing a broader context for color adjustments. 
Given an input sRGB image " + }, + { + "bbox": [ + 55, + 167, + 296, + 395 + ], + "type": "inline_equation", + "content": "\\mathbf{I} \\in \\mathbb{R}^{H \\times W \\times 3}" + }, + { + "bbox": [ + 55, + 167, + 296, + 395 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 55, + 167, + 296, + 395 + ], + "type": "inline_equation", + "content": "H" + }, + { + "bbox": [ + 55, + 167, + 296, + 395 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 167, + 296, + 395 + ], + "type": "inline_equation", + "content": "W" + }, + { + "bbox": [ + 55, + 167, + 296, + 395 + ], + "type": "text", + "content": " denote the image's height and width, and the three channels represent the RGB color components, we first convert it to the CIELab color space, represented as " + }, + { + "bbox": [ + 55, + 167, + 296, + 395 + ], + "type": "inline_equation", + "content": "\\mathbf{I}_{\\mathrm{Lab}} \\in \\mathbb{R}^{H \\times W \\times 3}" + }, + { + "bbox": [ + 55, + 167, + 296, + 395 + ], + "type": "text", + "content": ", where the three channels correspond to the " + }, + { + "bbox": [ + 55, + 167, + 296, + 395 + ], + "type": "inline_equation", + "content": "L^*" + }, + { + "bbox": [ + 55, + 167, + 296, + 395 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 55, + 167, + 296, + 395 + ], + "type": "inline_equation", + "content": "a^*" + }, + { + "bbox": [ + 55, + 167, + 296, + 395 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 55, + 167, + 296, + 395 + ], + "type": "inline_equation", + "content": "b^*" + }, + { + "bbox": [ + 55, + 167, + 296, + 395 + ], + "type": "text", + "content": " components. To efficiently capture global color temperature while maintaining low model complexity, we convert " + }, + { + "bbox": [ + 55, + 167, + 296, + 395 + ], + "type": "inline_equation", + "content": "\\mathbf{I}" + }, + { + "bbox": [ + 55, + 167, + 296, + 395 + ], + "type": "text", + "content": " into its probability density function (PDF) representation, " + }, + { + "bbox": [ + 55, + 167, + 296, + 395 + ], + "type": "inline_equation", + "content": "\\mathbf{H}_{\\mathrm{sRGB}} \\in \\mathbb{R}^{L \\times 3}" + }, + { + "bbox": [ + 55, + 167, + 296, + 395 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 55, + 167, + 296, + 395 + ], + "type": "inline_equation", + "content": "L = 256" + }, + { + "bbox": [ + 55, + 167, + 296, + 395 + ], + "type": "text", + "content": " represents the number of histogram bins per channel. Similarly, we transform " + }, + { + "bbox": [ + 55, + 167, + 296, + 395 + ], + "type": "inline_equation", + "content": "\\mathbf{I}_{\\mathrm{Lab}}" + }, + { + "bbox": [ + 55, + 167, + 296, + 395 + ], + "type": "text", + "content": " into its PDF form, referred to as " + }, + { + "bbox": [ + 55, + 167, + 296, + 395 + ], + "type": "inline_equation", + "content": "\\mathbf{H}_{\\mathrm{Lab}} \\in \\mathbb{R}^{L \\times 3}" + }, + { + "bbox": [ + 55, + 167, + 296, + 395 + ], + "type": "text", + "content": ", with " + }, + { + "bbox": [ + 55, + 167, + 296, + 395 + ], + "type": "inline_equation", + "content": "L = 256" + }, + { + "bbox": [ + 55, + 167, + 296, + 395 + ], + "type": "text", + "content": ", providing a histogram-based representation for each color space." 
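To make the histogram inputs concrete, here is a minimal sketch (not the authors' released code) of how H_sRGB and H_Lab can be computed; the use of OpenCV for the color-space conversion and the 8-bit value range used for binning are our assumptions:

```python
import numpy as np
import cv2  # assumption: OpenCV handles the sRGB -> CIELab conversion


def channel_pdf(channel: np.ndarray, bins: int = 256) -> np.ndarray:
    """256-bin histogram of one 8-bit channel, normalized into a PDF."""
    hist, _ = np.histogram(channel, bins=bins, range=(0, 255))
    return hist / max(hist.sum(), 1)


def image_pdfs(img_srgb: np.ndarray):
    """img_srgb: HxWx3 uint8 sRGB image -> (H_sRGB, H_Lab), each of shape (256, 3)."""
    # For 8-bit inputs, OpenCV returns CIELab with all three channels in [0, 255].
    img_lab = cv2.cvtColor(img_srgb, cv2.COLOR_RGB2LAB)
    h_srgb = np.stack([channel_pdf(img_srgb[..., c]) for c in range(3)], axis=1)
    h_lab = np.stack([channel_pdf(img_lab[..., c]) for c in range(3)], axis=1)
    return h_srgb, h_lab
```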
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 396, + 296, + 587 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 396, + 296, + 587 + ], + "spans": [ + { + "bbox": [ + 55, + 396, + 296, + 587 + ], + "type": "text", + "content": "The ABC-Former framework incorporates two auxiliary models to enhance the target model's performance. These models process global color PDF data, " + }, + { + "bbox": [ + 55, + 396, + 296, + 587 + ], + "type": "inline_equation", + "content": "\\mathbf{H}^{\\mathrm{sRGB}}" + }, + { + "bbox": [ + 55, + 396, + 296, + 587 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 396, + 296, + 587 + ], + "type": "inline_equation", + "content": "\\mathbf{H}^{\\mathrm{Lab}}" + }, + { + "bbox": [ + 55, + 396, + 296, + 587 + ], + "type": "text", + "content": ", which serve as distinct inputs. Each auxiliary model, called PDFformer, employs a 1D transformer architecture with a shared structure but separate training. Initially, the input histograms, " + }, + { + "bbox": [ + 55, + 396, + 296, + 587 + ], + "type": "inline_equation", + "content": "\\mathbf{H}^{\\mathrm{sRGB}}" + }, + { + "bbox": [ + 55, + 396, + 296, + 587 + ], + "type": "text", + "content": " or " + }, + { + "bbox": [ + 55, + 396, + 296, + 587 + ], + "type": "inline_equation", + "content": "\\mathbf{H}^{\\mathrm{Lab}}" + }, + { + "bbox": [ + 55, + 396, + 296, + 587 + ], + "type": "text", + "content": ", pass through a one-dimensional convolutional layer to produce histogram features " + }, + { + "bbox": [ + 55, + 396, + 296, + 587 + ], + "type": "inline_equation", + "content": "\\mathbf{H}_0^{\\mathbf{A}} \\in \\mathbb{R}^{L \\times C}" + }, + { + "bbox": [ + 55, + 396, + 296, + 587 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 55, + 396, + 296, + 587 + ], + "type": "inline_equation", + "content": "\\mathbf{A} \\in [\\mathbf{sRGB}, \\mathbf{Lab}]" + }, + { + "bbox": [ + 55, + 396, + 296, + 587 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 55, + 396, + 296, + 587 + ], + "type": "inline_equation", + "content": "C" + }, + { + "bbox": [ + 55, + 396, + 296, + 587 + ], + "type": "text", + "content": " is the feature dimension. These features are subsequently processed through a U-shape transformer structure with PDFformer blocks that facilitate upsampling and downsampling in the encoding and decoding paths. 
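A short sketch of the 1D stem just described, assuming a kernel size of 3 (the paper does not state it) and the embedding dimension C = 16 reported later in the implementation details:

```python
import torch
import torch.nn as nn

# Hypothetical stem: lift a (3, L=256) histogram PDF to C x L features (H_0^A).
stem = nn.Conv1d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
h0 = stem(torch.rand(1, 3, 256))
print(h0.shape)  # torch.Size([1, 16, 256]), i.e. C = 16 features over 256 bins
```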
Each block consists of two sequences of Layer Normalization (LN), Channel Attention (CA) [14], and a feed-forward Multilayer Perceptron (MLP) [11], arranged in the following order:" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 93, + 592, + 295, + 613 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 93, + 592, + 295, + 613 + ], + "spans": [ + { + "bbox": [ + 93, + 592, + 295, + 613 + ], + "type": "interline_equation", + "content": "\\hat{\\mathbf{H}}_{\\mathbf{i}}^{\\mathbf{A}} = \\operatorname{CA}\\left(\\operatorname{LN}\\left(\\mathbf{H}_{\\mathbf{i}-1}^{\\mathbf{A}}\\right)\\right) + \\mathbf{H}_{\\mathbf{i}-1}^{\\mathbf{A}}; \\tag{1}", + "image_path": "b2151d61a2227b38f9ec644cda62171cd6e78acaf264042e4aa63be831075184.jpg" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 94, + 609, + 257, + 624 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 94, + 609, + 257, + 624 + ], + "spans": [ + { + "bbox": [ + 94, + 609, + 257, + 624 + ], + "type": "interline_equation", + "content": "\\mathbf{H}_{\\mathbf{i}}^{\\mathbf{A}} = \\operatorname{GELU}(\\operatorname{MLP}(\\operatorname{LN}(\\hat{\\mathbf{H}}_{\\mathbf{i}}^{\\mathbf{A}}))) + \\hat{\\mathbf{H}}_{\\mathbf{i}}^{\\mathbf{A}},", + "image_path": "4702a325b20e94d777878cbcd2babf2ac4ab87ceb3f1bc21e20251179c10855e.jpg" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 629, + 296, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 629, + 296, + 715 + ], + "spans": [ + { + "bbox": [ + 55, + 629, + 296, + 715 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 55, + 629, + 296, + 715 + ], + "type": "inline_equation", + "content": "\\mathrm{GELU}(\\cdot)" + }, + { + "bbox": [ + 55, + 629, + 296, + 715 + ], + "type": "text", + "content": " denotes the GELU activation function, and " + }, + { + "bbox": [ + 55, + 629, + 296, + 715 + ], + "type": "inline_equation", + "content": "\\mathbf{i}" + }, + { + "bbox": [ + 55, + 629, + 296, + 715 + ], + "type": "text", + "content": " is the block index, starting from 1. PDFformer has " + }, + { + "bbox": [ + 55, + 629, + 296, + 715 + ], + "type": "inline_equation", + "content": "K" + }, + { + "bbox": [ + 55, + 629, + 296, + 715 + ], + "type": "text", + "content": " PDFformer blocks in the encoding path, followed by a bottleneck stage, which is also a PDFformer block. These blocks are interconnected via downsampling, implemented through a " + }, + { + "bbox": [ + 55, + 629, + 296, + 715 + ], + "type": "inline_equation", + "content": "4\\times 1" + }, + { + "bbox": [ + 55, + 629, + 296, + 715 + ], + "type": "text", + "content": " convolution with a stride of 2 and channel doubling. This process yields an output of " + }, + { + "bbox": [ + 55, + 629, + 296, + 715 + ], + "type": "inline_equation", + "content": "\\mathbf{H}_{\\mathbf{K} + 1}^{\\mathbf{A}}\\in" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 313, + 70, + 553, + 230 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 70, + 553, + 230 + ], + "spans": [ + { + "bbox": [ + 313, + 70, + 553, + 230 + ], + "type": "inline_equation", + "content": "\\mathbb{R}^{\\frac{L}{2^K}\\times 2^K C}" + }, + { + "bbox": [ + 313, + 70, + 553, + 230 + ], + "type": "text", + "content": ". 
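A minimal PyTorch sketch of one PDFformer block as written in Eq. (1); the squeeze-and-excitation form of CA and the MLP width are our assumptions (the paper defers to [14] and [11] for the exact designs):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style CA over (B, L, C) histogram features."""
    def __init__(self, dim: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(dim, dim // reduction), nn.ReLU(inplace=True),
            nn.Linear(dim // reduction, dim), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.fc(x.mean(dim=1))   # squeeze over the L bins -> (B, C)
        return x * w.unsqueeze(1)    # excite: re-weight every feature channel


class PDFformerBlock(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.ln1, self.ln2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.ca = ChannelAttention(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        h_hat = self.ca(self.ln1(h)) + h                  # Eq. (1), first line
        return F.gelu(self.mlp(self.ln2(h_hat))) + h_hat  # Eq. (1), second line


print(PDFformerBlock(16)(torch.randn(2, 256, 16)).shape)  # torch.Size([2, 256, 16])
```

The sRGBformer block of Sec. 3.2 follows the same skeleton, with CA replaced by the ICA module.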
Following the bottleneck, the decoding path contains " + }, + { + "bbox": [ + 313, + 70, + 553, + 230 + ], + "type": "inline_equation", + "content": "K" + }, + { + "bbox": [ + 313, + 70, + 553, + 230 + ], + "type": "text", + "content": " PDFformer blocks, with upsampling and channel reduction applied between blocks. The first two blocks halve the number of channels, while the remaining blocks quarter them, as in [25]. Upsampling is achieved using a " + }, + { + "bbox": [ + 313, + 70, + 553, + 230 + ], + "type": "inline_equation", + "content": "2\\times 1" + }, + { + "bbox": [ + 313, + 70, + 553, + 230 + ], + "type": "text", + "content": " transposed convolution with a stride of 2. Additionally, each block in the decoding path takes the output from the previous block and concatenates it with the corresponding output from the encoding path of the same spatial size. The final output, " + }, + { + "bbox": [ + 313, + 70, + 553, + 230 + ], + "type": "inline_equation", + "content": "\\mathbf{H}^{\\mathbf{A}}\\in \\mathbb{R}^{L\\times 2C}" + }, + { + "bbox": [ + 313, + 70, + 553, + 230 + ], + "type": "text", + "content": ", passes through a convolutional layer with a residual connection to the input histograms and a softmax function to produce the corrected color or CIELab histograms " + }, + { + "bbox": [ + 313, + 70, + 553, + 230 + ], + "type": "inline_equation", + "content": "\\mathbf{H}_{\\mathbf{c}}^{\\mathbf{A}}\\in \\mathbb{R}^{L\\times 3}" + }, + { + "bbox": [ + 313, + 70, + 553, + 230 + ], + "type": "text", + "content": " as:" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 323, + 236, + 553, + 251 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 323, + 236, + 553, + 251 + ], + "spans": [ + { + "bbox": [ + 323, + 236, + 553, + 251 + ], + "type": "interline_equation", + "content": "\\mathbf {H} _ {\\mathrm {c}} ^ {\\mathbf {A}} = \\operatorname {S O F T M A X} \\left(\\operatorname {C o n v} _ {3 \\times 1} \\left(\\mathbf {H} _ {2 \\mathrm {K} + 1} ^ {\\mathbf {A}}\\right) + \\mathbf {H} _ {0} ^ {\\mathbf {A}}\\right). \\tag {2}", + "image_path": "08da82d2c2524fd9c4ddb0dd2da3e0705785b60e498062662977ed2123776e2a.jpg" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 258, + 476, + 270 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 258, + 476, + 270 + ], + "spans": [ + { + "bbox": [ + 313, + 258, + 476, + 270 + ], + "type": "text", + "content": "3.2. Target model — sRGBformer" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 275, + 555, + 381 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 275, + 555, + 381 + ], + "spans": [ + { + "bbox": [ + 313, + 275, + 555, + 381 + ], + "type": "text", + "content": "To achieve a color-calibrated correction, we employ sRGBformer, a vision transformer designed to address color deviations. It systematically processes input features " + }, + { + "bbox": [ + 313, + 275, + 555, + 381 + ], + "type": "inline_equation", + "content": "\\mathbf{X}_0" + }, + { + "bbox": [ + 313, + 275, + 555, + 381 + ], + "type": "text", + "content": " to generate the final WB-corrected image. Similar to PDFformer, sRGBformer employs a U-shaped transformer architecture with upsampling and downsampling. However, it uniquely integrates cross-modality knowledge from auxiliary models, using their corrected global color information as guidance. 
To enable this, we introduce our proposed Interactive" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 383, + 555, + 430 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 383, + 555, + 430 + ], + "spans": [ + { + "bbox": [ + 313, + 383, + 555, + 430 + ], + "type": "text", + "content": "Channel Attention (ICA) in each sRGBformer block, incorporating auxiliary model knowledge through dedicated pathways. Each sRGBformer block consists of two sets of LN, ICA, and an MLP, structured as follows:" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 358, + 439, + 553, + 460 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 358, + 439, + 553, + 460 + ], + "spans": [ + { + "bbox": [ + 358, + 439, + 553, + 460 + ], + "type": "interline_equation", + "content": "\\hat{\\mathbf{X}}_{\\mathbf{i}} = \\operatorname{ICA}(\\operatorname{LN}(\\mathbf{X}_{\\mathbf{i}-\\mathbf{1}})) + \\mathbf{X}_{\\mathbf{i}-\\mathbf{1}}; \\tag{3}", + "image_path": "b42907e8b5a4797797655de99ef145a83970d0583fa90149e9df0f0ed6c9976a.jpg" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 360, + 456, + 509, + 470 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 360, + 456, + 509, + 470 + ], + "spans": [ + { + "bbox": [ + 360, + 456, + 509, + 470 + ], + "type": "interline_equation", + "content": "\\mathbf{X}_{\\mathbf{i}} = \\operatorname{GELU}(\\operatorname{MLP}(\\operatorname{LN}(\\hat{\\mathbf{X}}_{\\mathbf{i}}))) + \\hat{\\mathbf{X}}_{\\mathbf{i}},", + "image_path": "9eb6508bed24009cf00c48fe0a8c3f2fb8f1a223daf45c761e0f556700bd21a5.jpg" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 475, + 554, + 641 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 475, + 554, + 641 + ], + "spans": [ + { + "bbox": [ + 313, + 475, + 554, + 641 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 475, + 554, + 641 + ], + "type": "inline_equation", + "content": "\\mathbf{X_i}" + }, + { + "bbox": [ + 313, + 475, + 554, + 641 + ], + "type": "text", + "content": " denotes the image features produced by the sRGBformer block at the " + }, + { + "bbox": [ + 313, + 475, + 554, + 641 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 313, + 475, + 554, + 641 + ], + "type": "text", + "content": "-th level. The encoder and decoder each consist of " + }, + { + "bbox": [ + 313, + 475, + 554, + 641 + ], + "type": "inline_equation", + "content": "K" + }, + { + "bbox": [ + 313, + 475, + 554, + 641 + ], + "type": "text", + "content": " sRGBformer blocks, connected via a bottleneck stage with one sRGBformer block. In the encoder, downsampling is achieved using a " + }, + { + "bbox": [ + 313, + 475, + 554, + 641 + ], + "type": "inline_equation", + "content": "4\\times 4" + }, + { + "bbox": [ + 313, + 475, + 554, + 641 + ], + "type": "text", + "content": " convolution with a stride of 2 and channel doubling, while the decoder applies upsampling via a " + }, + { + "bbox": [ + 313, + 475, + 554, + 641 + ], + "type": "inline_equation", + "content": "2\\times 2" + }, + { + "bbox": [ + 313, + 475, + 554, + 641 + ], + "type": "text", + "content": " transposed convolution with a stride of 2 and channel halving. As in PDFformer, each decoder block receives the output from the previous block, concatenated with the corresponding output from the encoder of the same spatial size. 
The final decoder output " + }, + { + "bbox": [ + 313, + 475, + 554, + 641 + ], + "type": "inline_equation", + "content": "\\mathbf{X_{2K + 1}}\\in \\mathbb{R}^{H\\times W\\times 2C}" + }, + { + "bbox": [ + 313, + 475, + 554, + 641 + ], + "type": "text", + "content": " passes through a " + }, + { + "bbox": [ + 313, + 475, + 554, + 641 + ], + "type": "inline_equation", + "content": "3\\times 3" + }, + { + "bbox": [ + 313, + 475, + 554, + 641 + ], + "type": "text", + "content": " convolutional layer with a residual connection to the input, producing the final WB-corrected image, " + }, + { + "bbox": [ + 313, + 475, + 554, + 641 + ], + "type": "inline_equation", + "content": "\\mathbf{X_c}\\in \\mathbb{R}^{H\\times W\\times 3}" + }, + { + "bbox": [ + 313, + 475, + 554, + 641 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 313, + 642, + 554, + 712 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 642, + 554, + 712 + ], + "spans": [ + { + "bbox": [ + 313, + 642, + 554, + 712 + ], + "type": "text", + "content": "Interactive Channel Attention. The goal of ICA is to facilitate knowledge transfer from auxiliary models to sRGBformer, enhancing WB correction. In sRGBformer, each encoder and decoder block is equipped with an ICA module to correspond to the respective blocks in the encoders and decoders of the auxiliary models. First, " + }, + { + "bbox": [ + 313, + 642, + 554, + 712 + ], + "type": "inline_equation", + "content": "\\mathbf{X}_{\\mathrm{i}}" + }, + { + "bbox": [ + 313, + 642, + 554, + 712 + ], + "type": "text", + "content": " is condensed" + } + ] + } + ], + "index": 15 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "type": "text", + "content": "21260" + } + ] + } + ], + "index": 16 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 59, + 70, + 544, + 404 + ], + "blocks": [ + { + "bbox": [ + 59, + 70, + 544, + 404 + ], + "lines": [ + { + "bbox": [ + 59, + 70, + 544, + 404 + ], + "spans": [ + { + "bbox": [ + 59, + 70, + 544, + 404 + ], + "type": "image", + "image_path": "cbb3ff0b83c41d00a1bf7c82b0173d3179f0e6e159f7b222f16f183168a5fcbd.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 54, + 415, + 555, + 449 + ], + "lines": [ + { + "bbox": [ + 54, + 415, + 555, + 449 + ], + "spans": [ + { + "bbox": [ + 54, + 415, + 555, + 449 + ], + "type": "text", + "content": "Figure 2. The ABC-Former framework consists of two key components: Auxiliary models (PDFformers) and a Target model (sRGBformer). The auxiliary models process sRGB and CIELab histograms to learn color features from different modalities, while the ICA module in the target model integrates this information to generate the final WB correction."
+ } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 54, + 479, + 298, + 659 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 479, + 298, + 659 + ], + "spans": [ + { + "bbox": [ + 54, + 479, + 298, + 659 + ], + "type": "text", + "content": "into a vector with one value per channel by global average pooling (Avg.), followed by a convolution and sigmoid activation, generating a channel-wise weighting vector " + }, + { + "bbox": [ + 54, + 479, + 298, + 659 + ], + "type": "inline_equation", + "content": "\\mathbf{W}_{\\mathrm{i}}^{\\mathrm{T}} \\in \\mathbb{R}^{1 \\times 1 \\times C}" + }, + { + "bbox": [ + 54, + 479, + 298, + 659 + ], + "type": "text", + "content": " at the " + }, + { + "bbox": [ + 54, + 479, + 298, + 659 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 54, + 479, + 298, + 659 + ], + "type": "text", + "content": "-th level of the sRGBformer block. Similarly, weighting vectors " + }, + { + "bbox": [ + 54, + 479, + 298, + 659 + ], + "type": "inline_equation", + "content": "\\mathbf{W}_{\\mathrm{i}}^{\\mathrm{sRGB}}" + }, + { + "bbox": [ + 54, + 479, + 298, + 659 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 54, + 479, + 298, + 659 + ], + "type": "inline_equation", + "content": "\\mathbf{W}_{\\mathrm{i}}^{\\mathrm{Lab}} \\in \\mathbb{R}^{1 \\times 1 \\times C}" + }, + { + "bbox": [ + 54, + 479, + 298, + 659 + ], + "type": "text", + "content": " are extracted by feeding " + }, + { + "bbox": [ + 54, + 479, + 298, + 659 + ], + "type": "inline_equation", + "content": "\\mathbf{H}_{\\mathrm{i}}^{\\mathrm{sRGB}}" + }, + { + "bbox": [ + 54, + 479, + 298, + 659 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 54, + 479, + 298, + 659 + ], + "type": "inline_equation", + "content": "\\mathbf{H}_{\\mathrm{i}}^{\\mathrm{Lab}}" + }, + { + "bbox": [ + 54, + 479, + 298, + 659 + ], + "type": "text", + "content": " into the CA module within their corresponding PDFformer's blocks, which include the average pooling " + }, + { + "bbox": [ + 54, + 479, + 298, + 659 + ], + "type": "inline_equation", + "content": "\\mathrm{Avg}(\\cdot)" + }, + { + "bbox": [ + 54, + 479, + 298, + 659 + ], + "type": "text", + "content": ", convolution, the unsqueeze operation, and the excitation operation (sigmoid activation) at the " + }, + { + "bbox": [ + 54, + 479, + 298, + 659 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 54, + 479, + 298, + 659 + ], + "type": "text", + "content": "-th level. 
Next, to integrate cross-modal knowledge, we apply cross-modal re-parameterization [26], introducing learnable parameters " + }, + { + "bbox": [ + 54, + 479, + 298, + 659 + ], + "type": "inline_equation", + "content": "\\lambda_{\\mathrm{i}}^{\\mathrm{sRGB}}" + }, + { + "bbox": [ + 54, + 479, + 298, + 659 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 54, + 479, + 298, + 659 + ], + "type": "inline_equation", + "content": "\\lambda_{\\mathrm{i}}^{\\mathrm{Lab}}" + }, + { + "bbox": [ + 54, + 479, + 298, + 659 + ], + "type": "text", + "content": " to adjust " + }, + { + "bbox": [ + 54, + 479, + 298, + 659 + ], + "type": "inline_equation", + "content": "\\mathbf{W}_{\\mathrm{i}}^{\\mathrm{sRGB}}" + }, + { + "bbox": [ + 54, + 479, + 298, + 659 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 54, + 479, + 298, + 659 + ], + "type": "inline_equation", + "content": "\\mathbf{W}_{\\mathrm{i}}^{\\mathrm{Lab}}" + }, + { + "bbox": [ + 54, + 479, + 298, + 659 + ], + "type": "text", + "content": ", respectively. These are then combined with " + }, + { + "bbox": [ + 54, + 479, + 298, + 659 + ], + "type": "inline_equation", + "content": "\\mathbf{W}_{\\mathrm{i}}^{\\mathrm{T}}" + }, + { + "bbox": [ + 54, + 479, + 298, + 659 + ], + "type": "text", + "content": " along the channel dimension to obtain " + }, + { + "bbox": [ + 54, + 479, + 298, + 659 + ], + "type": "inline_equation", + "content": "\\mathbf{W}_{\\mathrm{i}}^{\\mathrm{total}}" + }, + { + "bbox": [ + 54, + 479, + 298, + 659 + ], + "type": "text", + "content": " as:" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 78, + 663, + 295, + 682 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 78, + 663, + 295, + 682 + ], + "spans": [ + { + "bbox": [ + 78, + 663, + 295, + 682 + ], + "type": "interline_equation", + "content": "\\mathbf{W}_{\\mathbf{i}}^{\\mathbf{T}} = \\operatorname{Sigmoid}(\\operatorname{Conv}(\\operatorname{Avg}(\\mathbf{X}_{\\mathbf{i}}))) \\tag{4}", + "image_path": "560ca50bbd65d4732af80dcb2c9cde82595b57e52214b319183504fc63185fa0.jpg" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 679, + 294, + 694 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 679, + 294, + 694 + ], + "spans": [ + { + "bbox": [ + 67, + 679, + 294, + 694 + ], + "type": "interline_equation", + "content": "\\mathbf{W}_{\\mathrm{i}}^{\\mathrm{total}} = \\mathbf{W}_{\\mathrm{i}}^{\\mathrm{T}} + \\lambda_{\\mathrm{i}}^{\\mathrm{Lab}}\\mathbf{W}_{\\mathrm{i}}^{\\mathrm{Lab}} + \\lambda_{\\mathrm{i}}^{\\mathrm{sRGB}}\\mathbf{W}_{\\mathrm{i}}^{\\mathrm{sRGB}}.", + "image_path": "06d21a3f2b995fc2b47e4fbd74998d04bd64e5aca297438da1a931c5cee38cc0.jpg" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 700, + 295, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 700, + 295, + 715 + ], + "spans": [ + { + "bbox": [ + 55, + 700, + 295, + 715 + ], + "type": "text", + "content": "At last, " + }, + { + "bbox": [ + 55, + 700, + 295, + 715 + ], + "type": "inline_equation", + "content": "\\mathbf{X_i}" + }, + { + "bbox": [ + 55, + 700, + 295, + 715 + ], + "type": "text", + "content": " is channel-wise re-weighted by " + }, + { + "bbox": [ + 55, + 700, + 295, + 715 + ], + "type": "inline_equation", + "content": "\\mathbf{W_i^{total}}" + }, + { + "bbox": [ + 55, + 700, + 295, + 715 + ], + "type": "text", + "content": " to gen-" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ +
313, + 478, + 471, + 491 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 478, + 471, + 491 + ], + "spans": [ + { + "bbox": [ + 313, + 478, + 471, + 491 + ], + "type": "text", + "content": "erate the refined sRGB features " + }, + { + "bbox": [ + 313, + 478, + 471, + 491 + ], + "type": "inline_equation", + "content": "\\tilde{\\mathbf{X}}_{\\mathrm{i}}" + }, + { + "bbox": [ + 313, + 478, + 471, + 491 + ], + "type": "text", + "content": " as:" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 378, + 500, + 555, + 516 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 378, + 500, + 555, + 516 + ], + "spans": [ + { + "bbox": [ + 378, + 500, + 555, + 516 + ], + "type": "interline_equation", + "content": "\\tilde{\\mathbf{X}}_{\\mathbf{i}} = \\mathbb{F}_{\\mathrm{scale}}\\left(\\mathbf{W}_{\\mathbf{i}}^{\\mathrm{total}}, \\mathbf{X}_{\\mathbf{i}}\\right), \\tag{5}", + "image_path": "44175e6e0f9395d15648d570e9cbcdcbb58b7a852aca57adbe050b42586d0f7f.jpg" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 525, + 555, + 598 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 525, + 555, + 598 + ], + "spans": [ + { + "bbox": [ + 313, + 525, + 555, + 598 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 525, + 555, + 598 + ], + "type": "inline_equation", + "content": "\\mathbb{F}_{\\mathrm{scale}}(\\cdot, \\cdot)" + }, + { + "bbox": [ + 313, + 525, + 555, + 598 + ], + "type": "text", + "content": " represents the channel-wise multiplication function, applying scalar weights to the corresponding feature maps. Through ICA, we leverage calibrated global color information from modality-specific histogram-based features, enabling effective knowledge transfer for improved WB correction." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 608, + 404, + 619 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 608, + 404, + 619 + ], + "spans": [ + { + "bbox": [ + 313, + 608, + 404, + 619 + ], + "type": "text", + "content": "3.3. Loss Function" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 624, + 556, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 624, + 556, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 624, + 556, + 715 + ], + "type": "text", + "content": "The proposed framework optimizes two auxiliary models and one target model. Specifically, two auxiliary PDFformers process histogram inputs in a PDF format from either the sRGB or CIELab domains, while the target model, sRGBformer, is responsible for the final WB sRGB output. The cooperative interaction between the auxiliary and target models is crucial for achieving accurate WB in the output sRGB image. 
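Before turning to the losses, Eqs. (4)-(5) can be summarized in a short sketch (our reading of the module, not the released implementation; the 1x1 convolution after pooling and the zero-initialized lambdas are assumptions):

```python
import torch
import torch.nn as nn


class ICA(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.conv = nn.Conv2d(dim, dim, kernel_size=1)  # assumed 1x1 conv after pooling
        self.lam_srgb = nn.Parameter(torch.zeros(1))    # lambda_i^sRGB, learnable
        self.lam_lab = nn.Parameter(torch.zeros(1))     # lambda_i^Lab, learnable

    def forward(self, x, w_srgb, w_lab):
        # x: (B, C, H, W) image features; w_srgb, w_lab: (B, C, 1, 1) weighting
        # vectors produced by the CA modules of the two auxiliary PDFformers.
        w_t = torch.sigmoid(self.conv(x.mean(dim=(2, 3), keepdim=True)))  # Eq. (4)
        w_total = w_t + self.lam_lab * w_lab + self.lam_srgb * w_srgb     # W_i^total
        return x * w_total                              # Eq. (5): channel-wise rescale


ica = ICA(16)
out = ica(torch.randn(2, 16, 32, 32), torch.rand(2, 16, 1, 1), torch.rand(2, 16, 1, 1))
print(out.shape)  # torch.Size([2, 16, 32, 32])
```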
To train the auxiliary models, we use L2 loss" + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "text", + "content": "21261" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 295, + 97 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 295, + 97 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 295, + 97 + ], + "type": "text", + "content": "to measure the difference between the PDFs of the predicted and ground-truth color channel histograms, formulated as:" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 58, + 100, + 294, + 140 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 58, + 100, + 294, + 140 + ], + "spans": [ + { + "bbox": [ + 58, + 100, + 294, + 140 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} \\mathcal {L} _ {\\mathrm {p d f}} ^ {s R G B} = \\left\\| \\mathbf {H} _ {\\mathbf {c}} ^ {\\mathbf {R}} - \\mathbf {H} _ {\\mathbf {g t}} ^ {\\mathbf {R}} \\right\\| _ {2} + \\left\\| \\mathbf {H} _ {\\mathbf {c}} ^ {\\mathbf {G}} - \\mathbf {H} _ {\\mathbf {g t}} ^ {\\mathbf {G}} \\right\\| _ {2} + \\left\\| \\mathbf {H} _ {\\mathbf {c}} ^ {\\mathbf {B}} - \\mathbf {H} _ {\\mathbf {g t}} ^ {\\mathbf {B}} \\right\\| _ {2}; \\\\ \\mathcal {L} _ {\\mathrm {p d f}} ^ {L a b} = \\left\\| \\mathbf {H} _ {\\mathbf {c}} ^ {\\mathbf {L}} - \\mathbf {H} _ {\\mathbf {g t}} ^ {\\mathbf {L}} \\right\\| _ {2} + \\left\\| \\mathbf {H} _ {\\mathbf {c}} ^ {\\mathbf {a}} - \\mathbf {H} _ {\\mathbf {g t}} ^ {\\mathbf {a}} \\right\\| _ {2} + \\left\\| \\mathbf {H} _ {\\mathbf {c}} ^ {\\mathbf {b}} - \\mathbf {H} _ {\\mathbf {g t}} ^ {\\mathbf {b}} \\right\\| _ {2}, \\tag {6} \\\\ \\end{array}", + "image_path": "e80db03629e8951a62e1b09f0b403e465a78029485242177ce82569bdda6370d.jpg" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 140, + 294, + 212 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 140, + 294, + 212 + ], + "spans": [ + { + "bbox": [ + 55, + 140, + 294, + 212 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 55, + 140, + 294, + 212 + ], + "type": "inline_equation", + "content": "\\mathbf{H}_{\\mathrm{c}}^{\\mathrm{sRGB}} = [\\mathbf{H}_{\\mathrm{c}}^{\\mathrm{R}};\\mathbf{H}_{\\mathrm{c}}^{\\mathrm{G}};\\mathbf{H}_{\\mathrm{c}}^{\\mathrm{B}}]" + }, + { + "bbox": [ + 55, + 140, + 294, + 212 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 140, + 294, + 212 + ], + "type": "inline_equation", + "content": "\\mathbf{H}_{\\mathrm{c}}^{\\mathrm{Lab}} = [\\mathbf{H}_{\\mathrm{c}}^{\\mathrm{L}};\\mathbf{H}_{\\mathrm{c}}^{\\mathrm{a}};\\mathbf{H}_{\\mathrm{c}}^{\\mathrm{b}}]" + }, + { + "bbox": [ + 55, + 140, + 294, + 212 + ], + "type": "text", + "content": " denote the corrected RGB and CIELab histograms, respectively, as estimated by PDFformers. 
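A one-function sketch of Eq. (6), assuming histograms are stored as (B, 3, 256) tensors; the batch-mean reduction is our choice:

```python
import torch

def pdf_loss(h_c: torch.Tensor, h_gt: torch.Tensor) -> torch.Tensor:
    """Eq. (6): sum of per-channel L2 norms between predicted and ground-truth
    histogram PDFs; used for both the sRGB and the CIELab auxiliary branches.
    h_c, h_gt: (B, 3, 256)."""
    per_channel = torch.linalg.vector_norm(h_c - h_gt, ord=2, dim=-1)  # (B, 3)
    return per_channel.sum(dim=1).mean()  # sum R/G/B (or L*/a*/b*), average over batch

print(pdf_loss(torch.rand(4, 3, 256), torch.rand(4, 3, 256)))
```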
" + }, + { + "bbox": [ + 55, + 140, + 294, + 212 + ], + "type": "inline_equation", + "content": "\\mathbf{H}_{\\mathrm{gt}}^{\\mathrm{sRGB}}" + }, + { + "bbox": [ + 55, + 140, + 294, + 212 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 140, + 294, + 212 + ], + "type": "inline_equation", + "content": "\\mathbf{H}_{\\mathrm{gt}}^{\\mathrm{Lab}}" + }, + { + "bbox": [ + 55, + 140, + 294, + 212 + ], + "type": "text", + "content": " represent the ground-truth histograms. We adopt L2 loss as it penalizes large deviations more heavily, encouraging the model to align histogram bins across the distribution evenly." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 212, + 294, + 233 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 212, + 294, + 233 + ], + "spans": [ + { + "bbox": [ + 55, + 212, + 294, + 233 + ], + "type": "text", + "content": "For sRGBformer, we use the L1 loss for training, as follows:" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 129, + 236, + 294, + 249 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 129, + 236, + 294, + 249 + ], + "spans": [ + { + "bbox": [ + 129, + 236, + 294, + 249 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {\\mathrm {r e c}} = \\left\\| \\mathbf {X} _ {\\mathbf {c}} - \\mathbf {X} _ {\\mathbf {g t}} \\right\\| _ {1}, \\tag {7}", + "image_path": "b93d46129e3fa803c9038a7520e99f912f8a7e7cd1250041b674e1f359c99bc8.jpg" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 252, + 295, + 291 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 252, + 295, + 291 + ], + "spans": [ + { + "bbox": [ + 55, + 252, + 295, + 291 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 55, + 252, + 295, + 291 + ], + "type": "inline_equation", + "content": "\\mathbf{X}_{\\mathbf{c}}" + }, + { + "bbox": [ + 55, + 252, + 295, + 291 + ], + "type": "text", + "content": " is the estimated WB image produced by sRGBformer, and " + }, + { + "bbox": [ + 55, + 252, + 295, + 291 + ], + "type": "inline_equation", + "content": "\\mathbf{X}_{\\mathbf{gt}}" + }, + { + "bbox": [ + 55, + 252, + 295, + 291 + ], + "type": "text", + "content": " is the ground truth. The total loss is defined as " + }, + { + "bbox": [ + 55, + 252, + 295, + 291 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{\\mathrm{total}} = \\mathcal{L}_{\\mathrm{pdf}}^{sRGB} + \\mathcal{L}_{\\mathrm{pdf}}^{Lab} + \\mathcal{L}_{\\mathrm{rec}}" + }, + { + "bbox": [ + 55, + 252, + 295, + 291 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 299, + 181, + 312 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 299, + 181, + 312 + ], + "spans": [ + { + "bbox": [ + 55, + 299, + 181, + 312 + ], + "type": "text", + "content": "4. Experimental Results" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 319, + 295, + 449 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 319, + 295, + 449 + ], + "spans": [ + { + "bbox": [ + 55, + 319, + 295, + 449 + ], + "type": "text", + "content": "Datasets. The commonly used public dataset, the Rendered WB dataset Set1 [3], is divided into three non-overlapping folds. For training, we randomly selected 12,000 rendered sRGB images with different WB settings from two of the folds. 
For testing, we use the third fold of Set1, also known as Set1-Test (21,046 images), as well as other datasets that have no overlap in scenes or cameras with the training data: Set2 of the Rendered WB dataset (2,881 images) [3] and the Rendered Cube+ dataset (10,242 images) [3, 6]. These datasets serve as benchmarks to evaluate the WB correction performance of our ABC-Former." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 451, + 295, + 557 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 451, + 295, + 557 + ], + "spans": [ + { + "bbox": [ + 55, + 451, + 295, + 557 + ], + "type": "text", + "content": "Evaluation Metrics. We evaluated our results using three widely used metrics: Mean Squared Error (MSE), Mean Angular Error (MAE), and " + }, + { + "bbox": [ + 55, + 451, + 295, + 557 + ], + "type": "inline_equation", + "content": "\\Delta E" + }, + { + "bbox": [ + 55, + 451, + 295, + 557 + ], + "type": "text", + "content": " 2000 [13], which quantify the differences between the predicted WB images and the ground truth. For each metric, we reported the mean, first quantile (Q1), median (Q2), and upper quantile (Q3) of the error. Lower values in these metrics indicate better WB correction performance, consistent with those used in recent works [2, 3, 5, 18, 19]." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 55, + 558, + 295, + 676 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 558, + 295, + 676 + ], + "spans": [ + { + "bbox": [ + 55, + 558, + 295, + 676 + ], + "type": "text", + "content": "Implementation Details. We implemented ABC-Former using PyTorch. During training, we optimize both the auxiliary and target models simultaneously over 350 epochs using the Adam optimizer [17] with " + }, + { + "bbox": [ + 55, + 558, + 295, + 676 + ], + "type": "inline_equation", + "content": "\\beta_{1} = 0.5" + }, + { + "bbox": [ + 55, + 558, + 295, + 676 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 558, + 295, + 676 + ], + "type": "inline_equation", + "content": "\\beta_{2} = 0.999" + }, + { + "bbox": [ + 55, + 558, + 295, + 676 + ], + "type": "text", + "content": " for each model. The learning rate is set to " + }, + { + "bbox": [ + 55, + 558, + 295, + 676 + ], + "type": "inline_equation", + "content": "2 \\times 10^{-4}" + }, + { + "bbox": [ + 55, + 558, + 295, + 676 + ], + "type": "text", + "content": ", and the embedding feature dimension to 16. For training, we randomly cropped four " + }, + { + "bbox": [ + 55, + 558, + 295, + 676 + ], + "type": "inline_equation", + "content": "128 \\times 128" + }, + { + "bbox": [ + 55, + 558, + 295, + 676 + ], + "type": "text", + "content": " patches from each training image as input. Additionally, we apply geometric transformations, including rotation and flipping, to augment the data." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 55, + 677, + 295, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 677, + 295, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 677, + 295, + 713 + ], + "type": "text", + "content": "Quantitative Experimental Results. 
The quantitative results in Table 1 show that ABC-Former performs favorably against five SOTA methods [2, 3, 5, 18, 19] across three" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 72, + 553, + 312 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 72, + 553, + 312 + ], + "spans": [ + { + "bbox": [ + 313, + 72, + 553, + 312 + ], + "type": "text", + "content": "public benchmark datasets [3, 6]. The compared methods were evaluated using either their publicly available pretrained models or results directly cited from their respective publications. However, SWBNet [19] was retrained to be tested on the Set1-Test and Set2 datasets, as its code and original results for these datasets were not provided (marked with * in Table 1). SWBNet's scores on the Cube+ Dataset were taken from the original paper. On the Rendered WB Dataset Set1-Test and Set2, ABC-Former achieved the best performance on MSE, MAE, and " + }, + { + "bbox": [ + 313, + 72, + 553, + 312 + ], + "type": "inline_equation", + "content": "\\Delta E" + }, + { + "bbox": [ + 313, + 72, + 553, + 312 + ], + "type": "text", + "content": " 2000, indicating that it effectively removes color casts from images and achieves superior WB correction. On the Rendered Cube+ dataset, ABC-Former also delivered superior results, with the lowest mean scores in MSE, MAE, and " + }, + { + "bbox": [ + 313, + 72, + 553, + 312 + ], + "type": "inline_equation", + "content": "\\Delta E" + }, + { + "bbox": [ + 313, + 72, + 553, + 312 + ], + "type": "text", + "content": " 2000. Additionally, it maintains a competitive model size, only larger than Deep-WB [2] and Mixed-WB [5]. These results demonstrate that ABC-Former achieves efficient and accurate WB correction across various datasets without significantly increasing model complexity, showcasing its robustness and generalization capabilities." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 314, + 553, + 505 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 314, + 553, + 505 + ], + "spans": [ + { + "bbox": [ + 313, + 314, + 553, + 505 + ], + "type": "text", + "content": "Qualitative Experimental Results. We present the qualitative comparison results on the Rendered WB and Rendered Cube+ datasets in Figure 3 and Figure 4. Analyzing the color correction performance of different methods, we see that while KNN [3], Deep-WB [2], Mixed-WB [5], and WBFlow [18] generally reduce color casts, they often exhibit color inconsistencies across different regions of the image. For example, in Figure 3, these methods tend to correct the colors of objects while neglecting the sky's color accuracy, resulting in an undesirable yellow tint. Similarly, in Figure 4, strong color casts lead to suboptimal WB correction, often leaving the image with an overall blue or yellow tone. In contrast, our method, guided by corrected global color information from multiple modalities, achieves a more balanced and consistent color correction across the entire image, producing natural and harmonious results." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 508, + 553, + 687 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 508, + 553, + 687 + ], + "spans": [ + { + "bbox": [ + 313, + 508, + 553, + 687 + ], + "type": "text", + "content": "In addition, Figure 5 demonstrates a comparative analysis of global color distribution accuracy in the average WB results across different methods. 
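For reference, the similarity measure used in Figure 5 (the Bhattacharyya coefficient, described next) reduces to a few lines on normalized histograms; this is a sketch, not the paper's evaluation script:

```python
import numpy as np

def bhattacharyya(p: np.ndarray, q: np.ndarray) -> float:
    """Bhattacharyya coefficient of two normalized histograms: sum_i sqrt(p_i * q_i),
    in [0, 1], with 1 meaning identical distributions."""
    return float(np.sum(np.sqrt(p * q)))

p = np.ones(256) / 256          # toy example: uniform vs. uniform -> 1.0
print(bhattacharyya(p, p))
```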
The Bhattacharyya coefficient, as introduced in [16], is employed to measure the similarity between the red, green, and blue histograms of the processed images and those of the ground-truth WB images across three benchmark datasets. This coefficient ranges from 0 to 1, with higher values indicating a closer match to the ground truth color distributions. The results show that ABC-Former achieves a higher similarity compared to other methods. Additionally, the visualization at the bottom of the figure illustrates an example of the color distributions in WB results obtained by the compared methods, highlighting that ABC-Former produces color distributions more consistent with the ground truth." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 689, + 553, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 689, + 553, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 689, + 553, + 713 + ], + "type": "text", + "content": "Ablation Studies. In the ablation studies, we investigated the impact of various combinations of modalities used for" + } + ] + } + ], + "index": 14 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "text", + "content": "21262" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 63, + 113, + 545, + 448 + ], + "blocks": [ + { + "bbox": [ + 55, + 71, + 555, + 105 + ], + "lines": [ + { + "bbox": [ + 55, + 71, + 555, + 105 + ], + "spans": [ + { + "bbox": [ + 55, + 71, + 555, + 105 + ], + "type": "text", + "content": "Table 1. The quantitative results of ABC-Former and competing WB methods are evaluated on three public benchmark datasets, including Rendered WB Dataset (Set1-Test and Set2) [3], and Rendered Cube+ Dataset [3, 6]. The best results are highlighted in red, while the second-best results are highlighted in blue." + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 63, + 113, + 545, + 448 + ], + "lines": [ + { + "bbox": [ + 63, + 113, + 545, + 448 + ], + "spans": [ + { + "bbox": [ + 63, + 113, + 545, + 448 + ], + "type": "table", + "html": "
MethodMSE ↓MAE ↓ΔE 2000 ↓Size
MeanQ1Q2Q3MeanQ1Q2Q3MeanQ1Q2Q3MB
Rendered WB Dataset: Set1-Test (21,046 images) [3]
KNN [3]77.4913.7439.6294.013.06°1.74°2.54°3.76°3.582.073.094.5521.8
Deep-WB [2]82.5513.1942.77102.093.12°1.88°2.70°3.84°3.772.163.304.8616.7
Mixed-WB [5]142.2526.8167.17164.664.07°2.64°3.68°5.16°4.553.004.155.635.1
WBFlow [18]78.8912.9935.0979.352.67°1.73°2.39°3.24°3.131.922.793.9430.2
SWBNet* [19]111.6220.6160.68137.914.11°2.56°3.75°5.22°4.542.734.165.86258.8
ABC-Former20.474.6510.0221.051.99°1.25°1.73°2.33°2.181.381.862.5920.2
Rendered WB Dataset: Set2 (2,881 images) [3]
KNN [3]171.0937.0487.04190.884.48°2.26°3.64°5.95°5.603.434.907.0621.8
Deep-WB [2]124.0730.1376.32154.443.75°2.02°3.08°4.72°4.903.134.356.0816.7
Mixed-WB [5]188.7648.64112.32219.914.92°2.69°4.10°6.37°6.053.454.927.205.1
WBFlow [18]117.6031.2561.68143.903.51°1.93°2.92°4.47°4.643.164.075.5630.2
SWBNet* [19]219.0255.45113.98236.255.46°3.45°4.78°6.63°6.514.395.848.08258.8
ABC-Former104.3125.5558.61132.903.39°1.87°2.73°4.30°4.562.974.135.6320.2
Rendered Cube+ Dataset (10,242 images) [3, 6]
KNN [3]194.9827.4357.08118.214.12°1.96°3.17°5.04°5.683.224.616.7021.8
Deep-WB [2]80.4615.4333.8874.423.45°1.87°2.82°4.26°4.592.683.815.5316.7
Mixed-WB [5]161.8016.9619.3390.814.05°1.40°2.12°4.88°4.892.163.106.785.1
WBFlow [18]75.3914.2230.9072.913.34°1.87°2.82°4.11°4.282.683.775.2130.2
SWBNet [19]74.3520.4640.0486.953.15°1.33°2.09°4.12°4.282.403.565.09258.8
ABC-Former60.6012.1526.9257.202.99°1.63°2.45°3.69°3.952.353.404.8620.2
", + "image_path": "23df1fe63eb1febd77ca9f8deaa439d543fbebe15ad9db50d06729cd6e2ae67c.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 59, + 460, + 129, + 509 + ], + "blocks": [ + { + "bbox": [ + 59, + 460, + 129, + 509 + ], + "lines": [ + { + "bbox": [ + 59, + 460, + 129, + 509 + ], + "spans": [ + { + "bbox": [ + 59, + 460, + 129, + 509 + ], + "type": "image", + "image_path": "c3378122664307760f9e6a413a22ad7619238a7db1f86bea05f23afd572a65da.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 59, + 509, + 129, + 573 + ], + "blocks": [ + { + "bbox": [ + 59, + 509, + 129, + 573 + ], + "lines": [ + { + "bbox": [ + 59, + 509, + 129, + 573 + ], + "spans": [ + { + "bbox": [ + 59, + 509, + 129, + 573 + ], + "type": "image", + "image_path": "551cab629c1264d4c5034413d796f07d772303580bd0321e0bc3ef9de370ef9e.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 59, + 573, + 129, + 633 + ], + "blocks": [ + { + "bbox": [ + 59, + 573, + 129, + 633 + ], + "lines": [ + { + "bbox": [ + 59, + 573, + 129, + 633 + ], + "spans": [ + { + "bbox": [ + 59, + 573, + 129, + 633 + ], + "type": "image", + "image_path": "69c9b7b603dfb72472638fb5c3629a1a454dae2c9ace30ec0928f92db10b3bb6.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 81, + 635, + 109, + 647 + ], + "lines": [ + { + "bbox": [ + 81, + 635, + 109, + 647 + ], + "spans": [ + { + "bbox": [ + 81, + 635, + 109, + 647 + ], + "type": "text", + "content": "Input" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 130, + 460, + 200, + 508 + ], + "blocks": [ + { + "bbox": [ + 130, + 460, + 200, + 508 + ], + "lines": [ + { + "bbox": [ + 130, + 460, + 200, + 508 + ], + "spans": [ + { + "bbox": [ + 130, + 460, + 200, + 508 + ], + "type": "image", + "image_path": "90b21a6f37be26da2b82d186d668f88fc9a76970239aaf02882730645069fbc3.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 130, + 509, + 200, + 572 + ], + "blocks": [ + { + "bbox": [ + 130, + 509, + 200, + 572 + ], + "lines": [ + { + "bbox": [ + 130, + 509, + 200, + 572 + ], + "spans": [ + { + "bbox": [ + 130, + 509, + 200, + 572 + ], + "type": "image", + "image_path": "274b0d8ed5c490dbb989c157e6cd37ba4165830a18f653d82013dcf2c1e3f5eb.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 130, + 573, + 200, + 633 + ], + "blocks": [ + { + "bbox": [ + 130, + 573, + 200, + 633 + ], + "lines": [ + { + "bbox": [ + 130, + 573, + 200, + 633 + ], + "spans": [ + { + "bbox": [ + 130, + 573, + 200, + 633 + ], + "type": "image", + "image_path": "9debd5d7045bff73cf6f43ae9b287aa5aa349b99612a69c472fac61f555c7ed0.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 153, + 635, + 176, + 647 + ], + "lines": [ + { + "bbox": [ + 153, + 635, + 176, + 647 + ], + "spans": [ + { + "bbox": [ + 153, + 635, + 176, + 647 + ], + "type": "text", + "content": "KNN" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_caption" + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 201, + 460, + 270, + 508 + ], + "blocks": [ + { + "bbox": [ + 201, 
+ 460, + 270, + 508 + ], + "lines": [ + { + "bbox": [ + 201, + 460, + 270, + 508 + ], + "spans": [ + { + "bbox": [ + 201, + 460, + 270, + 508 + ], + "type": "image", + "image_path": "a8d6b41038be2e3c006563195444320ad8b2f8cb1b0c1b17352152cf76109389.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + } + ], + "index": 10 + }, + { + "type": "image", + "bbox": [ + 201, + 509, + 270, + 572 + ], + "blocks": [ + { + "bbox": [ + 201, + 509, + 270, + 572 + ], + "lines": [ + { + "bbox": [ + 201, + 509, + 270, + 572 + ], + "spans": [ + { + "bbox": [ + 201, + 509, + 270, + 572 + ], + "type": "image", + "image_path": "52fe869da81c3a8d34549519c14dd5829523b1f8e1cd5d824fef1f26af155959.jpg" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_body" + } + ], + "index": 11 + }, + { + "type": "image", + "bbox": [ + 201, + 573, + 270, + 633 + ], + "blocks": [ + { + "bbox": [ + 201, + 573, + 270, + 633 + ], + "lines": [ + { + "bbox": [ + 201, + 573, + 270, + 633 + ], + "spans": [ + { + "bbox": [ + 201, + 573, + 270, + 633 + ], + "type": "image", + "image_path": "212fdce970499e8169dde75fe208fecc2c63f118d5034dc57c0bcd20b4bd2550.jpg" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 211, + 635, + 260, + 647 + ], + "lines": [ + { + "bbox": [ + 211, + 635, + 260, + 647 + ], + "spans": [ + { + "bbox": [ + 211, + 635, + 260, + 647 + ], + "type": "text", + "content": "Deep-WB" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_caption" + } + ], + "index": 12 + }, + { + "type": "image", + "bbox": [ + 271, + 460, + 340, + 508 + ], + "blocks": [ + { + "bbox": [ + 271, + 460, + 340, + 508 + ], + "lines": [ + { + "bbox": [ + 271, + 460, + 340, + 508 + ], + "spans": [ + { + "bbox": [ + 271, + 460, + 340, + 508 + ], + "type": "image", + "image_path": "4a8103d7ed0d9d2271b26bf02ccc13fea9c3c9e460305ae0abcfc9b43e8c6464.jpg" + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_body" + } + ], + "index": 14 + }, + { + "type": "image", + "bbox": [ + 271, + 509, + 339, + 572 + ], + "blocks": [ + { + "bbox": [ + 271, + 509, + 339, + 572 + ], + "lines": [ + { + "bbox": [ + 271, + 509, + 339, + 572 + ], + "spans": [ + { + "bbox": [ + 271, + 509, + 339, + 572 + ], + "type": "image", + "image_path": "4a7f1af5212f11c2f2a2da4296eb909272aa9df93e68395c2042987e8021674e.jpg" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_body" + } + ], + "index": 15 + }, + { + "type": "image", + "bbox": [ + 271, + 573, + 339, + 633 + ], + "blocks": [ + { + "bbox": [ + 271, + 573, + 339, + 633 + ], + "lines": [ + { + "bbox": [ + 271, + 573, + 339, + 633 + ], + "spans": [ + { + "bbox": [ + 271, + 573, + 339, + 633 + ], + "type": "image", + "image_path": "a116ce8cadb8fe19d639c1c63f0e2fa9a8df48fba2e99a8804f20bd49da8b763.jpg" + } + ] + } + ], + "index": 16, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 279, + 635, + 332, + 647 + ], + "lines": [ + { + "bbox": [ + 279, + 635, + 332, + 647 + ], + "spans": [ + { + "bbox": [ + 279, + 635, + 332, + 647 + ], + "type": "text", + "content": "Mixed-WB" + } + ] + } + ], + "index": 17, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 55, + 658, + 555, + 681 + ], + "lines": [ + { + "bbox": [ + 55, + 658, + 555, + 681 + ], + "spans": [ + { + "bbox": [ + 55, + 658, + 555, + 681 + ], + "type": "text", + "content": "Figure 3. 
Qualitative comparisons with other sRGB-WB methods on the Rendered WB dataset [3], with the " + }, + { + "bbox": [ + 55, + 658, + 555, + 681 + ], + "type": "inline_equation", + "content": "\\Delta" + }, + { + "bbox": [ + 55, + 658, + 555, + 681 + ], + "type": "text", + "content": "E 2000 indicated in the bottom-right corner of each image." + } + ] + } + ], + "index": 30, + "angle": 0, + "type": "image_caption" + } + ], + "index": 16 + }, + { + "type": "image", + "bbox": [ + 341, + 460, + 410, + 508 + ], + "blocks": [ + { + "bbox": [ + 341, + 460, + 410, + 508 + ], + "lines": [ + { + "bbox": [ + 341, + 460, + 410, + 508 + ], + "spans": [ + { + "bbox": [ + 341, + 460, + 410, + 508 + ], + "type": "image", + "image_path": "816bbb8567944e4793334ba775b8e11f7738c51eb1f29c8c2cf843a1a8b1c998.jpg" + } + ] + } + ], + "index": 18, + "angle": 0, + "type": "image_body" + } + ], + "index": 18 + }, + { + "type": "image", + "bbox": [ + 341, + 509, + 410, + 572 + ], + "blocks": [ + { + "bbox": [ + 341, + 509, + 410, + 572 + ], + "lines": [ + { + "bbox": [ + 341, + 509, + 410, + 572 + ], + "spans": [ + { + "bbox": [ + 341, + 509, + 410, + 572 + ], + "type": "image", + "image_path": "1578e6473e3d4095fc6dc20e95f44b59944d7489d6bab7b7de44d58e920161b2.jpg" + } + ] + } + ], + "index": 19, + "angle": 0, + "type": "image_body" + } + ], + "index": 19 + }, + { + "type": "image", + "bbox": [ + 341, + 573, + 410, + 633 + ], + "blocks": [ + { + "bbox": [ + 341, + 573, + 410, + 633 + ], + "lines": [ + { + "bbox": [ + 341, + 573, + 410, + 633 + ], + "spans": [ + { + "bbox": [ + 341, + 573, + 410, + 633 + ], + "type": "image", + "image_path": "dba71879c777c4eaa2c389315803fc5fa1d3101f37d0a5fe8ca6be89f8c6d15f.jpg" + } + ] + } + ], + "index": 20, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 354, + 635, + 397, + 647 + ], + "lines": [ + { + "bbox": [ + 354, + 635, + 397, + 647 + ], + "spans": [ + { + "bbox": [ + 354, + 635, + 397, + 647 + ], + "type": "text", + "content": "WBFlow" + } + ] + } + ], + "index": 21, + "angle": 0, + "type": "image_caption" + } + ], + "index": 20 + }, + { + "type": "image", + "bbox": [ + 411, + 460, + 481, + 508 + ], + "blocks": [ + { + "bbox": [ + 411, + 460, + 481, + 508 + ], + "lines": [ + { + "bbox": [ + 411, + 460, + 481, + 508 + ], + "spans": [ + { + "bbox": [ + 411, + 460, + 481, + 508 + ], + "type": "image", + "image_path": "6dcc78a119e3152c36728c6f826b20822c372edd29411248d6e2e8f6e6323eb6.jpg" + } + ] + } + ], + "index": 22, + "angle": 0, + "type": "image_body" + } + ], + "index": 22 + }, + { + "type": "image", + "bbox": [ + 411, + 509, + 481, + 572 + ], + "blocks": [ + { + "bbox": [ + 411, + 509, + 481, + 572 + ], + "lines": [ + { + "bbox": [ + 411, + 509, + 481, + 572 + ], + "spans": [ + { + "bbox": [ + 411, + 509, + 481, + 572 + ], + "type": "image", + "image_path": "61d8524cbfadf25a49d98c82d0001471003fc6001d52d82febe8be9c7fa8fe37.jpg" + } + ] + } + ], + "index": 23, + "angle": 0, + "type": "image_body" + } + ], + "index": 23 + }, + { + "type": "image", + "bbox": [ + 411, + 573, + 481, + 633 + ], + "blocks": [ + { + "bbox": [ + 411, + 573, + 481, + 633 + ], + "lines": [ + { + "bbox": [ + 411, + 573, + 481, + 633 + ], + "spans": [ + { + "bbox": [ + 411, + 573, + 481, + 633 + ], + "type": "image", + "image_path": "5e84cb823eda855f16fce087b5ca4795849e080d29892d97548f2378fee8f8fa.jpg" + } + ] + } + ], + "index": 24, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 432, + 635, + 457, + 647 + ], + "lines": [ + { + "bbox": [ + 432, + 635, + 457, + 647 + ], + "spans": [ + 
{ + "bbox": [ + 432, + 635, + 457, + 647 + ], + "type": "text", + "content": "Ours" + } + ] + } + ], + "index": 25, + "angle": 0, + "type": "image_caption" + } + ], + "index": 24 + }, + { + "type": "image", + "bbox": [ + 482, + 460, + 551, + 508 + ], + "blocks": [ + { + "bbox": [ + 482, + 460, + 551, + 508 + ], + "lines": [ + { + "bbox": [ + 482, + 460, + 551, + 508 + ], + "spans": [ + { + "bbox": [ + 482, + 460, + 551, + 508 + ], + "type": "image", + "image_path": "4c130dcc5f1dcc973e9fde452a1b89ff9d9e210238fe8db29599d8618bcac07b.jpg" + } + ] + } + ], + "index": 26, + "angle": 0, + "type": "image_body" + } + ], + "index": 26 + }, + { + "type": "image", + "bbox": [ + 482, + 509, + 551, + 572 + ], + "blocks": [ + { + "bbox": [ + 482, + 509, + 551, + 572 + ], + "lines": [ + { + "bbox": [ + 482, + 509, + 551, + 572 + ], + "spans": [ + { + "bbox": [ + 482, + 509, + 551, + 572 + ], + "type": "image", + "image_path": "f4794eb6ce4ff7737ac2b06d6440e95c88f0509c01d628b3e8bcffe08da8291f.jpg" + } + ] + } + ], + "index": 27, + "angle": 0, + "type": "image_body" + } + ], + "index": 27 + }, + { + "type": "image", + "bbox": [ + 482, + 573, + 551, + 633 + ], + "blocks": [ + { + "bbox": [ + 482, + 573, + 551, + 633 + ], + "lines": [ + { + "bbox": [ + 482, + 573, + 551, + 633 + ], + "spans": [ + { + "bbox": [ + 482, + 573, + 551, + 633 + ], + "type": "image", + "image_path": "069b3937070a01f3592c6d0ba272852ce9fc833a8312857090d40dd911351cdf.jpg" + } + ] + } + ], + "index": 28, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 509, + 635, + 524, + 647 + ], + "lines": [ + { + "bbox": [ + 509, + 635, + 524, + 647 + ], + "spans": [ + { + "bbox": [ + 509, + 635, + 524, + 647 + ], + "type": "text", + "content": "GT" + } + ] + } + ], + "index": 29, + "angle": 0, + "type": "image_caption" + } + ], + "index": 28 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 749, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 749, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 749, + 317, + 757 + ], + "type": "text", + "content": "21263" + } + ] + } + ], + "index": 31 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 59, + 73, + 553, + 233 + ], + "blocks": [ + { + "bbox": [ + 59, + 73, + 553, + 233 + ], + "lines": [ + { + "bbox": [ + 59, + 73, + 553, + 233 + ], + "spans": [ + { + "bbox": [ + 59, + 73, + 553, + 233 + ], + "type": "image", + "image_path": "687b9d42ba6f432d13f1b3edaaa2f1bf690bf6d0b0bb86325878ad1c95cb4593.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 55, + 241, + 555, + 264 + ], + "lines": [ + { + "bbox": [ + 55, + 241, + 555, + 264 + ], + "spans": [ + { + "bbox": [ + 55, + 241, + 555, + 264 + ], + "type": "text", + "content": "Figure 4. Qualitative comparisons with other sRGB-WB methods on the Rendered Cube+ dataset [3, 6], with the " + }, + { + "bbox": [ + 55, + 241, + 555, + 264 + ], + "type": "inline_equation", + "content": "\\Delta" + }, + { + "bbox": [ + 55, + 241, + 555, + 264 + ], + "type": "text", + "content": "E 2000 displayed in the bottom-right corner of each image." 
+ } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 75, + 282, + 526, + 511 + ], + "blocks": [ + { + "bbox": [ + 75, + 282, + 526, + 511 + ], + "lines": [ + { + "bbox": [ + 75, + 282, + 526, + 511 + ], + "spans": [ + { + "bbox": [ + 75, + 282, + 526, + 511 + ], + "type": "image", + "image_path": "7b18e71146036143bf058e4ad6b666f46af9f089e0d2deea7bb6e19dfa269b42.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 54, + 521, + 555, + 556 + ], + "lines": [ + { + "bbox": [ + 54, + 521, + 555, + 556 + ], + "spans": [ + { + "bbox": [ + 54, + 521, + 555, + 556 + ], + "type": "text", + "content": "Figure 5. Evaluation of the Bhattacharyya coefficient [16] for color histograms across three benchmark datasets [3, 6], showing global color accuracy for red, green, and blue histograms in sRGB-WB methods. ABC-Former achieves more consistent color distributions by leveraging global color temperature features from both sRGB and CIELab histograms." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 575, + 297, + 708 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 575, + 297, + 708 + ], + "spans": [ + { + "bbox": [ + 55, + 575, + 297, + 708 + ], + "type": "text", + "content": "WB correction on the Rendered Cube+ dataset. We report the mean values across three evaluation metrics, along with model sizes for each combination of auxiliary models. As shown in Table 2, using only the target model (sRGB-former) without additional guidance of global color information from other modalities results in suboptimal WB accuracy. Adding a single auxiliary model (e.g., " + }, + { + "bbox": [ + 55, + 575, + 297, + 708 + ], + "type": "inline_equation", + "content": "\\mathrm{PDF}_{\\mathrm{sRGB}} + \\mathrm{sRGB}" + }, + { + "bbox": [ + 55, + 575, + 297, + 708 + ], + "type": "text", + "content": " or " + }, + { + "bbox": [ + 55, + 575, + 297, + 708 + ], + "type": "inline_equation", + "content": "\\mathrm{PDF}_{\\mathrm{Lab}} + \\mathrm{sRGB}" + }, + { + "bbox": [ + 55, + 575, + 297, + 708 + ], + "type": "text", + "content": ") improves WB performance over the target model alone. However, utilizing a single auxiliary model to jointly learn both sRGB and CIELab histograms " + }, + { + "bbox": [ + 55, + 575, + 297, + 708 + ], + "type": "inline_equation", + "content": "(\\mathrm{PDF}_{\\mathrm{sRGB:Lab}} + \\mathrm{sRGB})" + }, + { + "bbox": [ + 55, + 575, + 297, + 708 + ], + "type": "text", + "content": " proves less effective, as a single" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 313, + 575, + 555, + 685 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 575, + 555, + 685 + ], + "spans": [ + { + "bbox": [ + 313, + 575, + 555, + 685 + ], + "type": "text", + "content": "model struggles to disentangle the information from both modalities. Our full ABC-Former design leverages global color temperature features from multiple modalities through two auxiliary models to guide color adjustment, achieving the highest WB accuracy. To ensure a fair comparison across these combinations, we matched the model size to that of ABC-Former by doubling the bottleneck channels in sRGBformer (Baseline) and evenly increasing channels across layers when using a single auxiliary model." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 313, + 689, + 555, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 689, + 555, + 714 + ], + "spans": [ + { + "bbox": [ + 313, + 689, + 555, + 714 + ], + "type": "text", + "content": "Table 3 compares different loss functions that train auxiliary models for learning sRGB and CIELab histograms." + } + ] + } + ], + "index": 6 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "text", + "content": "21264" + } + ] + } + ], + "index": 7 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 56, + 134, + 294, + 209 + ], + "blocks": [ + { + "bbox": [ + 55, + 70, + 295, + 126 + ], + "lines": [ + { + "bbox": [ + 55, + 70, + 295, + 126 + ], + "spans": [ + { + "bbox": [ + 55, + 70, + 295, + 126 + ], + "type": "text", + "content": "Table 2. Ablation studies of ABC-Former w/ and w/o guidance from different modalities on the Rendered Cube+ dataset [3, 6]. Here, sRGB denotes sRGBformer, while " + }, + { + "bbox": [ + 55, + 70, + 295, + 126 + ], + "type": "inline_equation", + "content": "\\mathrm{PDF}_{\\mathrm{sRGB}}" + }, + { + "bbox": [ + 55, + 70, + 295, + 126 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 55, + 70, + 295, + 126 + ], + "type": "inline_equation", + "content": "\\mathrm{PDF}_{\\mathrm{Lab}}" + }, + { + "bbox": [ + 55, + 70, + 295, + 126 + ], + "type": "text", + "content": " represent auxiliary PDFformer models for sRGB and CIELab histograms, respectively." + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 56, + 134, + 294, + 209 + ], + "lines": [ + { + "bbox": [ + 56, + 134, + 294, + 209 + ], + "spans": [ + { + "bbox": [ + 56, + 134, + 294, + 209 + ], + "type": "table", + "html": "
ModalitiesMSE ↓MAE ↓ΔE 2000 ↓Size (MB)
sRGB76.563.35°4.3120.4
PDFLab + sRGB73.353.26°4.2020.4
PDFsRGB + sRGB68.653.12°4.0820.4
PDFsRGB:Lab + sRGB72.383.38°4.3820.4
PDFsRGB + PDFLab + sRGB60.602.99°3.9520.2
", + "image_path": "7e65e9639d7993366a321ac6b1a973ba94aee0bf85eb3f2d8b8f3706006c3a30.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "type": "table", + "bbox": [ + 67, + 275, + 280, + 329 + ], + "blocks": [ + { + "bbox": [ + 55, + 221, + 295, + 266 + ], + "lines": [ + { + "bbox": [ + 55, + 221, + 295, + 266 + ], + "spans": [ + { + "bbox": [ + 55, + 221, + 295, + 266 + ], + "type": "text", + "content": "Table 3. Ablation studies on the loss function used to train auxiliary models for learning sRGB and CIELab histograms. We compare the KL divergence, Wasserstein distance, and our chosen L2 loss on the Rendered Cube+ dataset [3, 6]." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 67, + 275, + 280, + 329 + ], + "lines": [ + { + "bbox": [ + 67, + 275, + 280, + 329 + ], + "spans": [ + { + "bbox": [ + 67, + 275, + 280, + 329 + ], + "type": "table", + "html": "
Loss functionMSE ↓MAE ↓ΔE 2000 ↓Size (MB)
KL divergence72.673.29°4.1720.2
Wasserstein distance70.223.12°4.1520.2
L2 loss (Proposed)60.602.99°3.9520.2
", + "image_path": "2a4f9d107b8613e0c5607cd100a8065ef666dcaf1ec14f4dc087bd31c550f223.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "type": "table", + "bbox": [ + 58, + 362, + 294, + 408 + ], + "blocks": [ + { + "bbox": [ + 56, + 342, + 293, + 353 + ], + "lines": [ + { + "bbox": [ + 56, + 342, + 293, + 353 + ], + "spans": [ + { + "bbox": [ + 56, + 342, + 293, + 353 + ], + "type": "text", + "content": "Table 4. WB manipulation on the Rendered Cube+ dataset [3, 6]." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 58, + 362, + 294, + 408 + ], + "lines": [ + { + "bbox": [ + 58, + 362, + 294, + 408 + ], + "spans": [ + { + "bbox": [ + 58, + 362, + 294, + 408 + ], + "type": "table", + "html": "
MethodMSE ↓MAE ↓ΔE 2000 ↓
MeanQ1Q2Q3MeanQ1Q2Q3MeanQ1Q2Q3
Deep-WB [2]199.3832.3063.34142.765.40°2.67°4.04°6.36°5.983.444.787.29
ABC-Former82.370.0117.7665.362.78°1.06°2.85°3.22°2.890.072.364.12
", + "image_path": "8c6bd6d5da3fa66640851ecbe355f5e557fc1e4c709adbf725a67858f767c9e6.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "table_body" + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 70, + 423, + 284, + 601 + ], + "blocks": [ + { + "bbox": [ + 70, + 423, + 284, + 601 + ], + "lines": [ + { + "bbox": [ + 70, + 423, + 284, + 601 + ], + "spans": [ + { + "bbox": [ + 70, + 423, + 284, + 601 + ], + "type": "image", + "image_path": "26fbe8e8784e5c94a3f3e9baf308d5e9b90f3978887521261dadca03bdd8f7ed.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 55, + 610, + 295, + 666 + ], + "lines": [ + { + "bbox": [ + 55, + 610, + 295, + 666 + ], + "spans": [ + { + "bbox": [ + 55, + 610, + 295, + 666 + ], + "type": "text", + "content": "Figure 6. Analysis on learned weights on Rendered Cube+ dataset [3, 6], " + }, + { + "bbox": [ + 55, + 610, + 295, + 666 + ], + "type": "inline_equation", + "content": "\\lambda_{\\mathbf{i}}^{\\mathbf{sRGB}}" + }, + { + "bbox": [ + 55, + 610, + 295, + 666 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 610, + 295, + 666 + ], + "type": "inline_equation", + "content": "\\lambda_{\\mathbf{i}}^{\\mathbf{Lab}}" + }, + { + "bbox": [ + 55, + 610, + 295, + 666 + ], + "type": "text", + "content": ", for cross-modality knowledge transfer. Here, " + }, + { + "bbox": [ + 55, + 610, + 295, + 666 + ], + "type": "inline_equation", + "content": "i \\in \\{e_0, e_1, e_2, b_0, d_0, d_1, d_2\\}" + }, + { + "bbox": [ + 55, + 610, + 295, + 666 + ], + "type": "text", + "content": " represents the level of the sRGBformer block, where " + }, + { + "bbox": [ + 55, + 610, + 295, + 666 + ], + "type": "inline_equation", + "content": "e, b," + }, + { + "bbox": [ + 55, + 610, + 295, + 666 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 610, + 295, + 666 + ], + "type": "inline_equation", + "content": "d" + }, + { + "bbox": [ + 55, + 610, + 295, + 666 + ], + "type": "text", + "content": " denote the encoder, bottleneck, and decoder layers, respectively." + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 689, + 295, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 689, + 295, + 715 + ], + "spans": [ + { + "bbox": [ + 55, + 689, + 295, + 715 + ], + "type": "text", + "content": "As can be seen, the L2 loss shows better performance compared to KL divergence and Wasserstein distance, at" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 72, + 553, + 133 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 72, + 553, + 133 + ], + "spans": [ + { + "bbox": [ + 313, + 72, + 553, + 133 + ], + "type": "text", + "content": "tributed to its stability and effectiveness in aligning histograms. In contrast, KL divergence may encounter issues with zero-probability bins, and Wasserstein distance can be non-smooth and challenging to optimize in high-dimensional spaces." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 133, + 553, + 323 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 133, + 553, + 323 + ], + "spans": [ + { + "bbox": [ + 313, + 133, + 553, + 323 + ], + "type": "text", + "content": "Analyzing Learned Weights for Cross-Modality Transfer. 
We present the learned weights, " + }, + { + "bbox": [ + 313, + 133, + 553, + 323 + ], + "type": "inline_equation", + "content": "\\lambda_{\\mathrm{i}}^{\\mathrm{sRGB}}" + }, + { + "bbox": [ + 313, + 133, + 553, + 323 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 133, + 553, + 323 + ], + "type": "inline_equation", + "content": "\\lambda_{\\mathrm{i}}^{\\mathrm{Lab}}" + }, + { + "bbox": [ + 313, + 133, + 553, + 323 + ], + "type": "text", + "content": ", at each level of ABC-Former to illustrate how modality-complementary knowledge guides WB correction. As shown in Figure 6, the influence of calibrated global color information from sRGB and CIELab histogram modalities intensifies toward the bottleneck block, peaking at the bottleneck. This indicates that high-level WB color histogram-based features from both modalities, capturing global, semantically rich color information, are crucial for reweighting sRGB image features for accurate WB correction. Notably, the sRGB histogram modality has a slightly greater impact than CIELab, though the difference is minimal. This suggests that both color modalities collaborate effectively, allowing ABC-Former to balance raw color intensity with perceptually uniform CIELab properties." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 323, + 554, + 419 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 323, + 554, + 419 + ], + "spans": [ + { + "bbox": [ + 313, + 323, + 554, + 419 + ], + "type": "text", + "content": "WB Manipulation. Following the setup described in [2], we conduct experiments to alter the input image's colors to match the target white balance (WB) settings. These settings correspond to the following color temperatures: tungsten " + }, + { + "bbox": [ + 313, + 323, + 554, + 419 + ], + "type": "inline_equation", + "content": "(2850\\mathrm{K})" + }, + { + "bbox": [ + 313, + 323, + 554, + 419 + ], + "type": "text", + "content": ", fluorescent " + }, + { + "bbox": [ + 313, + 323, + 554, + 419 + ], + "type": "inline_equation", + "content": "(3800\\mathrm{K})" + }, + { + "bbox": [ + 313, + 323, + 554, + 419 + ], + "type": "text", + "content": ", daylight " + }, + { + "bbox": [ + 313, + 323, + 554, + 419 + ], + "type": "inline_equation", + "content": "(5500\\mathrm{K})" + }, + { + "bbox": [ + 313, + 323, + 554, + 419 + ], + "type": "text", + "content": ", cloudy " + }, + { + "bbox": [ + 313, + 323, + 554, + 419 + ], + "type": "inline_equation", + "content": "(6500\\mathrm{K})" + }, + { + "bbox": [ + 313, + 323, + 554, + 419 + ], + "type": "text", + "content": ", and shade " + }, + { + "bbox": [ + 313, + 323, + 554, + 419 + ], + "type": "inline_equation", + "content": "(7500\\mathrm{K})" + }, + { + "bbox": [ + 313, + 323, + 554, + 419 + ], + "type": "text", + "content": ". As shown in Table 4, ABC-Former significantly outperforms Deep-WB [2] in achieving accurate color transformations." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 430, + 388, + 441 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 430, + 388, + 441 + ], + "spans": [ + { + "bbox": [ + 313, + 430, + 388, + 441 + ], + "type": "text", + "content": "5. 
Conclusion" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 450, + 555, + 617 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 450, + 555, + 617 + ], + "spans": [ + { + "bbox": [ + 313, + 450, + 555, + 617 + ], + "type": "text", + "content": "We presented ABC-Former, an Auxiliary Bimodal Cross-domain Transformer that enhances sRGB WB correction by leveraging complementary information from multiple modalities. ABC-Former uses Interactive Channel Attention to facilitate cross-modality knowledge transfer, integrating calibrated color features from both sRGB and CIELab histograms. This multimodal approach enables a more nuanced fusion of color information, allowing the model to handle diverse color temperatures and complex scenes with pronounced color shifts. Extensive experiments have demonstrated that ABC-Former consistently outperforms state-of-the-art methods in both quantitative and qualitative evaluations. For future work, extending into 2D histograms is a promising direction." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 628, + 424, + 642 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 628, + 424, + 642 + ], + "spans": [ + { + "bbox": [ + 313, + 628, + 424, + 642 + ], + "type": "text", + "content": "6. Acknowledgments" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 313, + 647, + 553, + 708 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 647, + 553, + 708 + ], + "spans": [ + { + "bbox": [ + 313, + 647, + 553, + 708 + ], + "type": "text", + "content": "This paper was supported in part by the National Science and Technology Council, Taiwan, under grants NSTC 113-2221-E-004-001-MY3, 113-2622-E-004-001, 113-2221-E-004-006-MY2, 112-2634-F-002-005, 113-2634-F-002-008, and 113-2923-E-A49-003-MY2." + } + ] + } + ], + "index": 15 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "text", + "content": "21265" + } + ] + } + ], + "index": 16 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 71, + 115, + 83 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 71, + 115, + 83 + ], + "spans": [ + { + "bbox": [ + 56, + 71, + 115, + 83 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 57, + 91, + 295, + 594 + ], + "type": "list", + "angle": 0, + "index": 14, + "blocks": [ + { + "bbox": [ + 62, + 91, + 294, + 137 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 91, + 294, + 137 + ], + "spans": [ + { + "bbox": [ + 62, + 91, + 294, + 137 + ], + "type": "text", + "content": "[1] Mahmoud Afifi and Michael S Brown. What else can fool deep learning? addressing color constancy errors on deep neural network performance. In ICCV, 2019. 1" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 61, + 140, + 294, + 163 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 140, + 294, + 163 + ], + "spans": [ + { + "bbox": [ + 61, + 140, + 294, + 163 + ], + "type": "text", + "content": "[2] Mahmoud Afifi and Michael S Brown. Deep white-balance editing. In CVPR, 2020. 
2, 3, 5, 6, 8" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 62, + 165, + 295, + 212 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 165, + 295, + 212 + ], + "spans": [ + { + "bbox": [ + 62, + 165, + 295, + 212 + ], + "type": "text", + "content": "[3] Mahmoud Afifi, Brian Price, Scott Cohen, and Michael S Brown. When color constancy goes wrong: Correcting improperly white-balanced images. In CVPR, 2019. 1, 2, 3, 5, 6, 7, 8" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 62, + 214, + 294, + 249 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 214, + 294, + 249 + ], + "spans": [ + { + "bbox": [ + 62, + 214, + 294, + 249 + ], + "type": "text", + "content": "[4] Mahmoud Afifi, Jonathan T Barron, Chloe LeGendre, Yun-Ta Tsai, and Francois Bleibel. Cross-camera convolutional color constancy. In ICCV, 2021. 2" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 63, + 251, + 294, + 285 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 251, + 294, + 285 + ], + "spans": [ + { + "bbox": [ + 63, + 251, + 294, + 285 + ], + "type": "text", + "content": "[5] Mahmoud Afifi, Marcus A Brubaker, and Michael S Brown. Auto white-balance correction for mixed-illuminant scenes. In WACV, 2022. 2, 3, 5, 6" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 63, + 288, + 294, + 323 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 288, + 294, + 323 + ], + "spans": [ + { + "bbox": [ + 63, + 288, + 294, + 323 + ], + "type": "text", + "content": "[6] Nikola Banić, Karlo Koščević, and Sven Lončarić. Unsupervised learning for color constancy. arXiv preprint arXiv:1712.00436, 2017. 5, 6, 7, 8" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 63, + 326, + 294, + 348 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 326, + 294, + 348 + ], + "spans": [ + { + "bbox": [ + 63, + 326, + 294, + 348 + ], + "type": "text", + "content": "[7] Jonathan T Barron and Yun-Ta Tsai. Fast Fourier color constancy. In CVPR, 2017. 2" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 63, + 350, + 294, + 373 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 350, + 294, + 373 + ], + "spans": [ + { + "bbox": [ + 63, + 350, + 294, + 373 + ], + "type": "text", + "content": "[8] Simone Bianco and Claudio Cusano. Quasi-unsupervised color constancy. In CVPR, 2019." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 63, + 375, + 294, + 409 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 375, + 294, + 409 + ], + "spans": [ + { + "bbox": [ + 63, + 375, + 294, + 409 + ], + "type": "text", + "content": "[9] Jonathan Cepeda-Negrete and Raul E Sanchez-Yanez. Gray-world assumption on perceptual color spaces. In PSIVT, 2014. 2" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 57, + 412, + 294, + 446 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 412, + 294, + 446 + ], + "spans": [ + { + "bbox": [ + 57, + 412, + 294, + 446 + ], + "type": "text", + "content": "[10] Jiankang Deng, Jia Guo, Niannan Xue, and Stefanos Zafeiriou. Arcface: Additive angular margin loss for deep face recognition. In CVPR, 2019. 
1" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 57, + 449, + 294, + 530 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 449, + 294, + 530 + ], + "spans": [ + { + "bbox": [ + 57, + 449, + 294, + 530 + ], + "type": "text", + "content": "[11] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. ICLR, 2021. 3" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 57, + 533, + 294, + 568 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 533, + 294, + 568 + ], + "spans": [ + { + "bbox": [ + 57, + 533, + 294, + 568 + ], + "type": "text", + "content": "[12] Jun Fu, Jing Liu, Hajjie Tian, Yong Li, Yongjun Bao, Zhiwei Fang, and Hanqing Lu. Dual attention network for scene segmentation. In CVPR, 2019. 1" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 57, + 571, + 294, + 594 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 571, + 294, + 594 + ], + "spans": [ + { + "bbox": [ + 57, + 571, + 294, + 594 + ], + "type": "text", + "content": "[13] Sharma Gaurav. The ciede2000 color-difference formula: Implementation notes, supplementary test data," + } + ] + } + ], + "index": 13 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 73, + 553, + 597 + ], + "type": "list", + "angle": 0, + "index": 29, + "blocks": [ + { + "bbox": [ + 336, + 73, + 553, + 95 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 336, + 73, + 553, + 95 + ], + "spans": [ + { + "bbox": [ + 336, + 73, + 553, + 95 + ], + "type": "text", + "content": "and mathematical observations. COLOR research and application, 2005. 5" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 316, + 99, + 553, + 121 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 99, + 553, + 121 + ], + "spans": [ + { + "bbox": [ + 316, + 99, + 553, + 121 + ], + "type": "text", + "content": "[14] Jie Hu, Li Shen, and Gang Sun. Squeeze-andexcitation networks. In CVPR, 2018. 3" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 317, + 123, + 553, + 157 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 123, + 553, + 157 + ], + "spans": [ + { + "bbox": [ + 317, + 123, + 553, + 157 + ], + "type": "text", + "content": "[15] Yuanming Hu, Baoyuan Wang, and Stephen Lin. FC4: Fully convolutional color constancy with confidence-weighted pooling. In CVPR, 2017. 2" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 317, + 159, + 553, + 194 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 159, + 553, + 194 + ], + "spans": [ + { + "bbox": [ + 317, + 159, + 553, + 194 + ], + "type": "text", + "content": "[16] Thomas Kailath. The divergence and bhattacharyya distance measures in signal selection. IEEE transactions on communication technology, 1967. 5, 7" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 317, + 195, + 553, + 217 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 195, + 553, + 217 + ], + "spans": [ + { + "bbox": [ + 317, + 195, + 553, + 217 + ], + "type": "text", + "content": "[17] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *ICLR*, 2015. 
5" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 317, + 219, + 553, + 254 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 219, + 553, + 254 + ], + "spans": [ + { + "bbox": [ + 317, + 219, + 553, + 254 + ], + "type": "text", + "content": "[18] Chunxiao Li, Xuejing Kang, and Anlong Ming. WBFlow: Few-shot white balance for sRGB images via reversible neural flows. In IJCAI, 2023. 2, 3, 5, 6" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 317, + 256, + 553, + 291 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 256, + 553, + 291 + ], + "spans": [ + { + "bbox": [ + 317, + 256, + 553, + 291 + ], + "type": "text", + "content": "[19] Chunxiao Li, Xuejing Kang, Zhifeng Zhang, and Anlong Ming. SWBNet: a stable white balance network for sRGB images. In AAAI, 2023. 2, 5, 6" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 317, + 293, + 553, + 340 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 293, + 553, + 340 + ], + "spans": [ + { + "bbox": [ + 317, + 293, + 553, + 340 + ], + "type": "text", + "content": "[20] Yi-Chen Lo, Chia-Che Chang, Hsuan-Chao Chiu, Yu-Hao Huang, Chia-Ping Chen, Yu-Lin Chang, and Kevin Jou. CLCC: Contrastive learning for color constancy. In CVPR, 2021. 2" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 317, + 342, + 553, + 376 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 342, + 553, + 376 + ], + "spans": [ + { + "bbox": [ + 317, + 342, + 553, + 376 + ], + "type": "text", + "content": "[21] Taishi Ono, Yuhi Kondo, Legong Sun, Teppei Kurita, and Yusuke Moriuchi. Degree-of-linear-polarization-based color constancy. In CVPR, 2022. 2" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 317, + 379, + 553, + 437 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 379, + 553, + 437 + ], + "spans": [ + { + "bbox": [ + 317, + 379, + 553, + 437 + ], + "type": "text", + "content": "[22] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In ICML, 2021. 2" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 317, + 440, + 553, + 474 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 440, + 553, + 474 + ], + "spans": [ + { + "bbox": [ + 317, + 440, + 553, + 474 + ], + "type": "text", + "content": "[23] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. NIPS, 2015. 1" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 317, + 476, + 553, + 499 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 476, + 553, + 499 + ], + "spans": [ + { + "bbox": [ + 317, + 476, + 553, + 499 + ], + "type": "text", + "content": "[24] Joost Van De Weijer, Theo Gevers, and Arjan Gijsenij. Edge-based color constancy. IEEE TIP, 2007. 2" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 317, + 502, + 553, + 548 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 502, + 553, + 548 + ], + "spans": [ + { + "bbox": [ + 317, + 502, + 553, + 548 + ], + "type": "text", + "content": "[25] Zhendong Wang, Xiaodong Cun, Jianmin Bao, Wengang Zhou, Jianzhuang Liu, and Houqiang Li. Uformer: A general u-shaped transformer for image restoration. In CVPR, 2022. 
3" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 317, + 550, + 553, + 597 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 550, + 553, + 597 + ], + "spans": [ + { + "bbox": [ + 317, + 550, + 553, + 597 + ], + "type": "text", + "content": "[26] Yiyuan Zhang, Xiaohan Ding, Kaixiong Gong, Yixiao Ge, Ying Shan, and Xiangyu Yue. Multimodal pathway: Improve transformers with irrelevant data from other modalities. In CVPR, 2024. 2, 4" + } + ] + } + ], + "index": 28 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "type": "text", + "content": "21266" + } + ] + } + ], + "index": 30 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2025/AC3D_ Analyzing and Improving 3D Camera Control in Video Diffusion Transformers/61cdf9ca-5840-4279-a8c2-d8f7abcf95b7_content_list.json b/2025/AC3D_ Analyzing and Improving 3D Camera Control in Video Diffusion Transformers/61cdf9ca-5840-4279-a8c2-d8f7abcf95b7_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..53796b7c610125b942b9c31e15d06f978e4a2c45 --- /dev/null +++ b/2025/AC3D_ Analyzing and Improving 3D Camera Control in Video Diffusion Transformers/61cdf9ca-5840-4279-a8c2-d8f7abcf95b7_content_list.json @@ -0,0 +1,1680 @@ +[ + { + "type": "text", + "text": "AC3D: Analyzing and Improving 3D Camera Control in Video Diffusion Transformers", + "text_level": 1, + "bbox": [ + 228, + 130, + 772, + 174 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Sherwin Bahmani $^{1,2,3}$ Ivan Skorokhodov $^{3}$ Guocheng Qian $^{3}$ Aliaksandr Siarohin $^{3}$ Willi Menapace $^{3}$ Andrea Tagliasacchi $^{1,4}$ David B. Lindell $^{1,2}$ Sergey Tulyakov $^{3}$", + "bbox": [ + 158, + 202, + 839, + 239 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1University of Toronto 2Vector Institute 3Snap Inc. 4SFU", + "bbox": [ + 321, + 239, + 676, + 256 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "*equal contribution", + "bbox": [ + 437, + 258, + 557, + 273 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "https://snap-research.github.io/ac3d", + "bbox": [ + 318, + 282, + 676, + 297 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/b817cae749f12438e773bb976adf99e6f8b2a973ab4df38ce66da003d63efb02.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 93, + 306, + 906, + 487 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/52516c7173e4846a6846b007e2a90e43104163aca0143472ddbef4ba84c1276e.jpg", + "image_caption": [ + "Figure 1. Camera-controlled video generation. Our method enables precise camera controllability in pre-trained video diffusion transformers, allowing joint conditioning of text and camera sequences. We synthesize the same scene with two different camera trajectories as input. The inset images visualize the cameras for the videos in the corresponding columns. The left camera sequence consists of a rotation to the right, while the right camera visualizes a zoom-out and up trajectory." 
+ ], + "image_footnote": [], + "bbox": [ + 94, + 498, + 250, + 595 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/511171fd4800824ac854c98baa2d66ee98e1865662ef20852b5d5958d8527144.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 738, + 493, + 901, + 595 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 248, + 619, + 326, + 636 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Numerous works have recently integrated 3D camera control into foundational text-to-video models, but the resulting camera control is often imprecise, and video generation quality suffers. In this work, we analyze camera motion from a first principles perspective, uncovering insights that enable precise 3D camera manipulation without compromising synthesis quality. First, we determine that motion induced by camera movements in videos is low-frequency in nature. This motivates us to adjust train and test pose conditioning schedules, accelerating training convergence while improving visual and motion quality. Then, by probing the representations of an unconditional video diffusion transformer, we observe that they implicitly perform camera pose estimation under the hood, and only a sub-partion of their layers contain the camera information. This suggested us to limit the injection of camera conditioning to a subset of the", + "bbox": [ + 86, + 659, + 485, + 901 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "architecture to prevent interference with other video features, leading to a $4 \\times$ reduction of training parameters, improved training speed, and $10\\%$ higher visual quality. Finally, we complement the typical dataset for camera control learning with a curated dataset of 20K diverse, dynamic videos with stationary cameras. This helps the model distinguish between camera and scene motion and improves the dynamics of generated pose-conditioned videos. We compound these findings to design the Advanced 3D Camera Control (AC3D) architecture, the new state-of-the-art model for generative video modeling with camera control.", + "bbox": [ + 511, + 623, + 908, + 789 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1. Introduction", + "text_level": 1, + "bbox": [ + 513, + 825, + 645, + 842 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Foundational video diffusion models (VDMs) trained on internet-scale data, acquire abundant knowledge about the physical world [10]. They not only learn appearance and", + "bbox": [ + 511, + 854, + 906, + 902 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "CVF", + "bbox": [ + 106, + 2, + 181, + 42 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore.", + "bbox": [ + 236, + 0, + 810, + 46 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "22875", + "bbox": [ + 478, + 944, + 519, + 957 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "plausible 2D dynamics, but they also have abundant understanding of 3D structure [7]. However, most of this knowledge is stored implicitly within the model, as these models do not expose fine-grained control mechanisms, such as camera motion control. 
We recently witnessed a surge of works that bring 3D camera control into foundational video models [39, 149, 166], but the control they provide is not very precise, and the synthesis quality is often compromised [6]. We analyze camera motion control in video diffusion models from first principles, and develop several findings that allow us to incorporate precise 3D camera conditioning without degrading synthesis quality. To perform our analysis, we train an 11.5B-parameter VDiT (video latent diffusion transformer) [100] on a dataset of 100M text/video pairs. On this model, we perform three key studies. With what we learn, we adapt the camera control solution from VD3D [6] from a pixel-based to a latent-based diffusion model, and significantly improve its performance.", "bbox": [ 89, 90, 485, 363 ], "page_idx": 1 }, { "type": "text", "text": "1) The spectral properties of camera motion. To study the statistical nature of motion control, we analyze motion spectral volumes (MSV) [75] of the videos generated by a large-scale video DiT model. MSVs show the amount of energy in different portions of the frequency spectra (i.e., high energy in the low frequencies indicates smooth motion) and we measure them across 200 generated videos of different types (camera motion, scene motion, scene plus camera motion) and at various stages of the denoising synthesis process. We observe that camera motion mostly affects the lower portion of the spectrum and kicks in very early ( $\approx 10\%$ ) in the denoising trajectory. Then, as diffusion models are inherently coarse-to-fine in nature [25], we restrict our camera conditioning to only being injected during the subset of the denoising steps corresponding to low frequencies. This results in $\approx 15\%$ higher visual fidelity, $\approx 30\%$ better camera following, and mitigates scene motion degradation.", "bbox": [ 89, 375, 485, 632 ], "page_idx": 1 }, { "type": "text", "text": "2) Camera motion knowledge in VDiTs. Then, we consider our text-only VDiT, and determine whether such a model possesses knowledge about cameras, and where this knowledge is expressed within its architecture. With this objective, we feed the (unseen during training) RealEstate10k [199] videos to our VDiT, and perform linear probing [27] to determine if camera poses can be recovered from its internal representation. Our analysis revealed that a video DiT implicitly performs camera pose estimation under the hood, and the presence of camera knowledge in a disentangled form peaks in its middle layers. This implies that the camera signal emerges in its early blocks to allow the later ones to rely on it to build subsequent visual representations. Therefore, we adjust our conditioning scheme to only affect the first $30\%$ of the architecture, leading to a $\approx 4\times$ reduction in training parameters, $15\%$ training and inference acceleration, and $10\%$ improved visual quality.", "bbox": [ 89, 643, 485, 901 ], "page_idx": 1 }, { "type": "text", "text": "3) Re-balancing the training distribution. Finally, to supervise camera control architectures, the typical solution is to rely on the camera pose annotations provided by RealEstate10k [199]. However, this dataset contains mostly static scenes, which results in significant motion degradation of the fine-tuned video model. To overcome this problem, we curate a subset of 20K diverse videos with dynamic scenes but static cameras. 
As the camera conditioning branch is still activated for these videos, this helps the model disambiguate the camera from scene movement. Our experiments show that this simple adjustment in the data is sufficient to recover the scene dynamism while still enabling an effective pose-conditioned video model.", "bbox": [ 511, 90, 908, 287 ], "page_idx": 1 }, { "type": "text", "text": "Contributions. We compound the knowledge gained from these three studies into the design of the Advanced 3D Camera Control (AC3D) method. We perform extensive ablation studies and compare against state-of-the-art models for camera control, including MotionCtrl [149], CameraCtrl [39], and VD3D [6]. We demonstrate $18\%$ higher video fidelity and $25\%$ more precise camera steering in terms of quantitative metrics than the closest competitor, and our generated videos are favored over others in $90\%$ of cases.", "bbox": [ 511, 291, 910, 428 ], "page_idx": 1 }, { "type": "text", "text": "2. Related work", "text_level": 1, "bbox": [ 511, 443, 651, 458 ], "page_idx": 1 }, { "type": "text", "text": "Our approach lies at the intersection of text-to-video, text-to-3D, and text-to-4D generation approaches. We refer to recent state-of-the-art reports [101, 180] for a more thorough analysis of previous work.", "bbox": [ 511, 469, 908, 529 ], "page_idx": 1 }, { "type": "text", "text": "Text-to-video generation. Our approach builds on recent advancements in 2D video diffusion models. One prominent technique in this area enhances text-to-image models by adding temporal layers to support video generation [7, 8, 37, 119, 151]. While these methods use the U-Net architecture, more recent ones [10, 92, 93, 168] have been adapting transformer-based architectures for more scalable, realistic, and highly dynamic scene generation. We are interested in controlling the camera movements during the generation process of recent transformer-based video models based on precise camera extrinsics, i.e., cameras represented as rotation and translation sequences for each frame.", "bbox": [ 511, 534, 910, 715 ], "page_idx": 1 }, { "type": "text", "text": "4D generation. Early 4D generation works [2, 164] used 4D GANs to learn category-specific generators with an underlying dynamic 3D representation. More recent approaches [4, 5, 81, 120, 196] have tackled 4D generation by distilling motion priors from pre-trained video diffusion models into an explicit 4D representation, enabling category-agnostic 4D generation. Follow-up works investigate image or video conditioned 4D generation [33, 76, 81, 98, 109, 173, 182, 193, 196] instead of pure text inputs, improving flexibility in the generation process. While most of these works are object-centric, recent approaches [4, 159] shifted towards more complex scenes, including methods [23, 175] which", "bbox": [ 511, 719, 908, 900 ], "page_idx": 1 }, { "type": "page_number", "text": "22876", "bbox": [ 478, 945, 519, 955 ], "page_idx": 1 }, { "type": "image", "img_path": "images/7d90716174ed6253fcc66d5f7b61f599bd5530253f95e75ce8956865a8007427.jpg", "image_caption": [ "Figure 2. VDiT-CC model with ControlNet [71, 188] camera conditioning built on top of VDiT. 
Video synthesis is performed by large 4,096-dimensional DiT-XL blocks of the frozen VDiT backbone, while VDiT-CC only processes and injects the camera information through lightweight 128-dimensional DiT-XS blocks (FC stands for fully-connected layers); see Section 3.2 for details." ], "image_footnote": [], "bbox": [ 94, 88, 906, 258 ], "page_idx": 2 }, { "type": "text", "text": "model the background. However, all these methods are optimization-based, i.e., each scene is generated independently from scratch. Recently, L4GM [110] proposes a feed-forward 4D generator trained on object-centric synthetic 4D data. While these approaches are explicit and provide space-time control, they are limited in their photorealism compared to recent 2D video diffusion models. We investigate dynamic 3D scene generation from a different perspective by extending pre-trained video diffusion models with 3D camera control.", "bbox": [ 88, 335, 485, 487 ], "page_idx": 2 }, { "type": "text", "text": "Camera control for video models. Recently, there has been significant progress in adding camera control to video diffusion models. As the pioneering work, MotionCtrl [149] learns camera control by conditioning pre-trained video models [7, 17] with extrinsic matrices. Follow-up works [39, 65, 160] further improve the conditioning mechanisms by representing cameras as Plücker coordinates. Another line of work [49, 50, 82, 155] controls camera motion without training additional parameters. However, all of these approaches use U-Net-based architectures as their backbone. More recently, 4DiM [150] trains a space-time diffusion model from scratch for novel view synthesis from a single image input. Closely related to our work, VD3D [6] incorporates camera control into a pre-trained video diffusion transformer. While the motion and camera control improve over U-Net-based approaches, the synthesized motion in the scenes and the visual quality are still degraded compared to the base video model. In contrast to VD3D, we first thoroughly investigate the pre-trained base video model and its knowledge of camera motion. We derive an improved training and architecture design for high-quality and dynamic video generation based on our findings.", "bbox": [ 91, 498, 483, 830 ], "page_idx": 2 }, { "type": "text", "text": "Concurrent works. Concurrent approaches [68, 158, 177, 185, 194, 195] further improve camera control in U-Net-based architectures, while another work [22] tackles video diffusion transformers. However, the scene and visual quality", "bbox": [ 89, 839, 485, 902 ], "page_idx": 2 }, { "type": "text", "text": "is still limited in that approach. DimensionX [127] controls space and time in video diffusion transformers but the camera trajectories are pre-defined and not continuous. Chun-Hao et al. [51] explore pose estimation with a video DiT by pairing it with DUSt3R [145] and fine-tuning, while we perform linear probing without any training to assess its existing camera knowledge. CAT4D [152] proposes a multi-view video diffusion model fine-tuned from a multi-view model.", "bbox": [ 511, 335, 906, 457 ], "page_idx": 2 }, { "type": "text", "text": "3. Method", "text_level": 1, "bbox": [ 513, 474, 604, 489 ], "page_idx": 2 }, { "type": "text", "text": "We first describe our base video diffusion model (Sec. 3.1), and the baseline camera control method built on top of it (Sec. 3.2). 
Then, we proceed with the analysis of motion (Sec. 3.3), linear probing (Sec. 3.4), and dataset biases (Sec. 3.5), and additional insights on how to build an effective model for camera control (Sec. 3.6).", + "bbox": [ + 511, + 501, + 908, + 592 + ], + "page_idx": 2 + },
+ { + "type": "text", + "text": "3.1. Base model (VDiT)", + "text_level": 1, + "bbox": [ + 511, + 604, + 696, + 621 + ], + "page_idx": 2 + },
+ { + "type": "text", + "text": "Following Sora [10], most modern foundational text-to-video generators use the diffusion framework [45, 122] to train a large-scale transformer [139] in the latent space of a variational autoencoder [64, 111]. We adopt the same design and, for a base video model, pre-train an 11.5B-parameter Video DiT model [100] with 32 blocks of hidden dimension 4,096 for text-to-video generation. We use the rectified flow diffusion parametrization [85] and learn in the latent space of CogVideoX [168] (using an autoencoder with a 16-channel output and compression factors of $4 \times 8 \times 8$ in the temporal and spatial dimensions). The T5 [108] encoder produces text embeddings, which are passed into VDiT via cross-attention. We train our base model on a large-scale dataset of images and videos with text annotations, with resolutions ranging from $17 \times 144 \times 256$ to $121 \times 576 \times 1024$. This design is fairly standard and followed by many existing works with little deviation [32, 102, 168, 197]; we describe our specific architectural and training setup in detail in Appendix D.", + "bbox": [ + 511, + 628, + 908, + 900 + ], + "page_idx": 2 + },
+ { + "type": "page_number", + "text": "22877", + "bbox": [ + 478, + 944, + 519, + 957 + ], + "page_idx": 2 + },
+ { + "type": "text", + "text": "3.2. VDiT with Camera Control (VDiT-CC)", + "text_level": 1, + "bbox": [ + 89, + 90, + 428, + 107 + ], + "page_idx": 3 + },
+ { + "type": "text", + "text": "To construct a baseline architecture for camera control, we implement ControlNet [18, 188] conditioning on top of the VDiT. Similar to previous work [6, 39, 149], we use the RealEstate10k [199] dataset, consisting of $65\mathrm{k}$ (text, video, camera trajectory) triplets $(\pmb{t}_n, \pmb{x}_n, \pmb{c}_n)_{n=1}^{N}$, and train a new set of model parameters to inject the camera information into the model. Camera trajectories $\pmb{c} \in \mathbb{R}^{f \times 25}$ are provided in the form of camera extrinsics $\pmb{C}_f \in \mathbb{R}^{4 \times 4}$ and intrinsics $\pmb{K}_f \in \mathbb{R}^{3 \times 3}$ for the $f$-th frame $\pmb{x}_f$.", + "bbox": [ + 89, + 125, + 483, + 263 + ], + "page_idx": 3 + },
+ { + "type": "text", + "text": "Camera conditioning. For base camera control, we adapt VD3D [6] since it was designed for transformer-based models and suits our setup the most, while other methods are built on top of U-Net-based [112] backbones. We use Plücker camera representations [6, 16, 39, 59, 121], which are projected to the same dimensionality and resolution as the video tokens via a fully-convolutional encoder to produce camera tokens. These camera tokens are processed by a sequence of lightweight DiT-XS blocks with hidden dimension 128 and four attention heads each. To mix the camera information with the video tokens of VDiT, we use summation before each main DiT block. We also found it useful to perform cross-attention from video tokens to camera tokens as a form of feedback connection [71]. 
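To make this token flow concrete, the following is a minimal PyTorch-style sketch of the conditioning path. Module names, the stand-in transformer layers, and the exact attention wiring are illustrative placeholders rather than our actual implementation; in the real model, the camera blocks are interleaved with the frozen VDiT blocks instead of running as a separate loop.

```python
import torch
import torch.nn as nn

class CameraBranch(nn.Module):
    """Sketch of a ControlNet-style camera branch: Plucker maps are encoded
    into lightweight camera tokens, refined by small transformer blocks
    (stand-ins for DiT-XS), and projected up so they can be summed with the
    4,096-dim video tokens before each frozen DiT-XL block. A cross-attention
    lets camera tokens read the current video tokens (the feedback of [71])."""

    def __init__(self, num_blocks: int = 8, d_video: int = 4096, d_cam: int = 128):
        super().__init__()
        # Fully-convolutional encoder: 6-channel Plucker maps -> camera tokens,
        # matching the video tokenizer's 4x8x8 temporal/spatial compression.
        self.encoder = nn.Conv3d(6, d_cam, kernel_size=(4, 8, 8), stride=(4, 8, 8))
        self.blocks = nn.ModuleList(
            nn.TransformerEncoderLayer(d_cam, nhead=4, batch_first=True)
            for _ in range(num_blocks))
        self.feedback = nn.ModuleList(
            nn.MultiheadAttention(d_cam, num_heads=4, kdim=d_video,
                                  vdim=d_video, batch_first=True)
            for _ in range(num_blocks))
        # FC projections from 128-dim camera tokens to 4,096-dim video tokens.
        self.to_video = nn.ModuleList(
            nn.Linear(d_cam, d_video) for _ in range(num_blocks))

    def forward(self, plucker, video_tokens):
        # plucker: (B, 6, F, H, W); video_tokens: one (B, N, d_video) tensor
        # per conditioned block. Returns the additive signals for those blocks.
        cam = self.encoder(plucker).flatten(2).transpose(1, 2)  # (B, N, d_cam)
        signals = []
        for block, attn, proj, vid in zip(self.blocks, self.feedback,
                                          self.to_video, video_tokens):
            cam = cam + attn(cam, vid, vid, need_weights=False)[0]  # feedback
            cam = block(cam)
            signals.append(proj(cam))  # summed with the video tokens in VDiT
        return signals
```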
We illustrate this model architecture, which we call VDiT-CC, in Figure 2; see implementation details in Appendix D. VDiT-CC refers to the camera-controlled video model architecture used by AC3D, while AC3D refers to our complete method, including the analysis and the additional adjustments derived from it.", + "bbox": [ + 91, + 277, + 483, + 565 + ], + "page_idx": 3 + },
+ { + "type": "text", + "text": "Training. Keeping the VDiT backbone frozen, we train the new parameters with a rectified flow objective [85] and a standard (location of 0 and scale of 1) logit-normal noise distribution [28]. Similar to prior works [6, 150], we apply a $10\%$ camera dropout to support classifier-free guidance (CFG) [44] later. Notably, we train VDiT-CC only at the $256^2$ resolution: since camera motion is a low-frequency type of signal (which can be observed at lower resolutions) and the main VDiT backbone is frozen, we found that our design generalizes to higher resolutions out-of-the-box. During inference, we input text prompts and camera embeddings with classifier-free guidance at each time step.", + "bbox": [ + 89, + 580, + 483, + 763 + ], + "page_idx": 3 + },
+ { + "type": "text", + "text": "Model behavior. This baseline model, being built on top of our powerful VDiT, already achieves decent-quality camera control. However, it suffers from degraded visual quality and reduced scene motion, and sometimes ignores the camera control inputs. To improve the design, we analyze our VDiT backbone to understand how camera motion is modeled and represented. Then, we inspect VDiT-CC's failure cases and where they arise in order to address them.", + "bbox": [ + 89, + 779, + 483, + 900 + ], + "page_idx": 3 + },
+ { + "type": "image", + "img_path": "images/84e1ccc3b8c3684156cda0c6e94f1b00f3bf0f68361b47b1471f204f9c75627a.jpg", + "image_caption": [ + "Figure 3. Average magnitude of motion spectral volumes along spatial, temporal offset, and video batch dimensions for scenes with different motion types. We compute the flow of each video in a sliding window manner with temporal offsets and average the frequencies across all offsets. Videos with camera motion (purple) exhibit stronger overall motion than the videos with scene motion (orange), especially for the low-frequency range, suggesting that the motion induced by camera transitions is heavily biased towards low-frequency components. Frequency refers to the temporal frequency." + ], + "image_footnote": [], + "bbox": [ + 557, + 90, + 864, + 237 + ], + "page_idx": 3 + },
+ { + "type": "text", + "text": "3.3. How is camera motion modeled by diffusion?", + "text_level": 1, + "bbox": [ + 511, + 402, + 895, + 420 + ], + "page_idx": 3 + },
+ { + "type": "text", + "text": "We start by analyzing how camera motion is modeled by a pre-trained video diffusion model (i.e., before camera control is incorporated). We hypothesize that the motion induced by changes in camera pose is a low-frequency type of signal and investigate the motion spectral volumes [75] of the generated videos at different steps of the denoising process. To perform this analysis, we generate 200 diverse videos with our VDiT model using 80 denoising steps and manually annotate them into four categories: videos with only scene motion, videos with only camera motion, videos with both scene and camera motion, and others; see Appendix E for details. 
During generation, we save the denoised predictions at each denoising step and estimate optical flow to compute the motion spectral volumes.", + "bbox": [ + 511, + 426, + 906, + 638 + ], + "page_idx": 3 + },
+ { + "type": "text", + "text": "Analysis. We visualize motion spectral volumes with $95\%$ confidence intervals in Figure 3. Videos with camera motion exhibit higher amplitudes than scene-motion-only videos for low-frequency components while having similar characteristics for high-frequency ones. This supports the conjecture that the camera motion is a low-frequency type of signal. We also depict an example of a generated video with both scene and camera motion at four denoising steps in Fig. 4a: one can observe that the camera movement has been fully produced by $t = 0.9$ (first $10\%$ of the rectified flow denoising process). In contrast, scene motion details like the hand movements of the subjects are not finalized even by $t = 0.5$.", + "bbox": [ + 511, + 642, + 908, + 823 + ], + "page_idx": 3 + },
+ { + "type": "text", + "text": "Inspired by this finding, we pose the question: when exactly does a video diffusion model determine the camera pose? To answer this question, we plot aggregated spectral volumes for different timesteps in Figure 4b. We also show the ratio with respect to the last timestep $t = 0$ (i.e.,", + "bbox": [ + 511, + 825, + 908, + 900 + ], + "page_idx": 3 + },
+ { + "type": "page_number", + "text": "22878", + "bbox": [ + 478, + 945, + 517, + 955 + ], + "page_idx": 3 + },
+ { + "type": "image", + "img_path": "images/8693ddc1f716218a31172eb0176c0cf1659c7715d807548d77bc27c89b978dc5.jpg", + "image_caption": [ + "(a) A generated video at different diffusion timesteps. The camera has already been decided by the model even at $t = 0.9$ (first $10\%$ of the denoising process) and does not change after that." + ], + "image_footnote": [], + "bbox": [ + 91, + 89, + 356, + 234 + ], + "page_idx": 4 + },
+ { + "type": "image", + "img_path": "images/9ea73aaf19caa9d407640a1ed89c9115a7296f0e1cb9290812cc347eaa8c3e6c.jpg", + "image_caption": [ + "(b) Motion spectral volumes of VDiT's generated videos for different diffusion timesteps (left) and their ratio w.r.t. the motion spectral volume at $t = 0$ (i.e., a fully denoised video).", + "Figure 4. How is camera motion modeled by diffusion? As visualized in Figure 4a and Figure 3, the motion induced by camera transitions is a low-frequency type of motion. We observe that a video DiT creates low-frequency motion very early in the denoising trajectory: Figure 4b (left) shows that even at $t = 0.96$ (first $\approx 4\%$ of the steps), the low-frequency motion components have already been created, while high-frequency ones are not fully revealed even by $t = 0.5$. We found that controlling the camera pose later in the denoising trajectory is not only unnecessary but detrimental to both scene motion and overall visual quality." + ], + "image_footnote": [], + "bbox": [ + 369, + 92, + 609, + 250 + ], + "page_idx": 4 + },
+ { + "type": "image", + "img_path": "images/23e1a2904a69fcdbc58bb5274bf44241e2989333975124d4518225d74c2d0e4e.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 611, + 90, + 901, + 250 + ], + "page_idx": 4 + },
+ { + "type": "text", + "text": "when all motion has been generated). We then inspect when different types of motion appear during the denoising process. 
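For reference, the spectral measurement used throughout this analysis can be sketched as follows; this is a minimal version assuming OpenCV's Farnebäck optical flow [29] as the flow estimator, with illustrative default parameters rather than our tuned values:

```python
import cv2
import numpy as np

def motion_spectral_volume(frames: np.ndarray) -> np.ndarray:
    """frames: (F, H, W) grayscale uint8 video. Returns the average magnitude
    of the temporal Fourier spectrum of per-pixel optical flow."""
    flows = []
    for prev, nxt in zip(frames[:-1], frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(
            prev, nxt, None, pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
        flows.append(flow)                 # (H, W, 2) flow field per frame pair
    flows = np.stack(flows)                # (F - 1, H, W, 2)
    spectrum = np.fft.rfft(flows, axis=0)  # FFT along the temporal axis
    # Average the magnitude over space and flow components: one value per
    # temporal frequency, as plotted in Figures 3 and 4b.
    return np.abs(spectrum).mean(axis=(1, 2, 3))
```

Applying this to the intermediate denoised predictions at each timestep yields the per-timestep spectra analyzed next.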
Figure 4b (right) shows that the low-frequency motion components have already reached $\approx 84\%$ of their final magnitude at $t = 0.9$ (the first $10\%$ of the denoising process), while high-frequency components are not well-modeled until $t = 0.6$.", + "bbox": [ + 88, + 392, + 482, + 481 + ], + "page_idx": 4 + },
+ { + "type": "text", + "text": "An immediate consequence of this observation is that trying to control the camera later in the denoising trajectory is simply unnecessary and will not influence the manipulation result. Consequently, instead of using the standard logit-normal noise level distribution of SD3 [28] with a location of 0.0 and scale of 1.0 (which we use by default for VDiT), we switch to a truncated normal distribution with a location of 0.8 and a scale of 0.075 on the $[0.6, 1]$ interval to cover the early steps of the denoising rectified flow trajectory. At inference time, we apply camera conditioning on the same $[0.6, 1]$ interval. Surprisingly, we observe that not using truncation is detrimental to the scene motion and overall visual quality. Following this insight, we restrict both our train-time noise levels and test-time camera conditioning schedules to cover only the first $40\%$ of the reverse diffusion trajectory. As Sec. 4.3 shows, this improves FID and FVD by $14\%$ on average, and camera following by $30\%$ on MSR-VTT (the dataset used to measure generalization to diverse, out-of-fine-tuning-distribution scenes). Further, truncated noise sampling enhances the overall scene motion.", + "bbox": [ + 89, + 484, + 483, + 786 + ], + "page_idx": 4 + },
+ { + "type": "text", + "text": "3.4. What does VDiT know about camera pose?", + "text_level": 1, + "bbox": [ + 89, + 801, + 457, + 816 + ], + "page_idx": 4 + },
+ { + "type": "text", + "text": "Foundational video models acquire rich knowledge about the physical world, and we hypothesize that they store information about the camera pose within their representations. To investigate this, we perform linear probing of our base VDiT model on the RealEstate10k [199] dataset (not seen", + "bbox": [ + 89, + 824, + 483, + 900 + ], + "page_idx": 4 + },
+ { + "type": "text", + "text": "during training) for camera extrinsics.", + "bbox": [ + 511, + 392, + 764, + 406 + ], + "page_idx": 4 + },
+ { + "type": "text", + "text": "Specifically, we take 1,000 random 49-frame videos from RealEstate10K, feed them into VDiT under 8 noise levels $(1/8, 2/8, \dots, 1)$, and extract the activations for all 32 DiT blocks. Next, we split the random videos into 900 train and 100 test videos and train a linear ridge regression model to predict the rotation pitch/yaw/roll angles and translation vectors for the entire viewpoint trajectory ($49 \times 6$ target values in total). This results in $8 \times 32$ trained models, and we report the rotation and (normalized) translation errors [39] on a held-out test set of 100 videos in Figure 5. Surprisingly, VDiT can accurately predict the camera pose, achieving minimum test errors of $\approx 0.025$ for rotation and $\approx 0.48$ for translation prediction. The probe accuracy increases around layer #9 and peaks in the range of layers #13-21. We reason that since the camera information in block #13 is stored in such a disentangled manner, the model is likely using it to build other representations; hence, conditioning the camera in this block is risky and unnecessary, as it would interfere with other visual features, as shown in our ablations. 
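The probing protocol itself is compact. A minimal sketch is given below; the ridge regularization strength, the pooling of activations into per-video feature vectors, and the array layout are our illustrative choices, and only the regression step is shown:

```python
import numpy as np
from sklearn.linear_model import Ridge

def probe_camera_knowledge(acts, targets, n_train=900):
    """acts[s][b]: (1000, D) pooled VDiT activations of block b at noise
    level s; targets: (1000, 49 * 6) per-frame rotation angles and
    translations. Returns an error grid over (noise level, block); rotation
    and translation errors can be reported separately by slicing the targets,
    cf. Figure 5."""
    errors = np.zeros((8, 32))
    for s in range(8):          # noise levels 1/8, 2/8, ..., 1
        for b in range(32):     # DiT blocks
            X = acts[s][b]
            reg = Ridge(alpha=1.0).fit(X[:n_train], targets[:n_train])
            pred = reg.predict(X[n_train:])
            errors[s, b] = np.abs(pred - targets[n_train:]).mean()
    return errors
```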
In this way, we propose to input the camera conditioning only in the first 8 blocks and leave the remaining 24 DiT blocks unconditioned. We find in Section 4.3 that this not only reduces the number of trainable parameters by $\approx 4\times$ and improves training speed by $\approx 15\%$, but also enhances the visual quality by $\approx 10\%$.", + "bbox": [ + 511, + 409, + 908, + 786 + ], + "page_idx": 4 + },
+ { + "type": "text", + "text": "3.5. Mitigating training data limitations", + "text_level": 1, + "bbox": [ + 511, + 801, + 821, + 816 + ], + "page_idx": 4 + },
+ { + "type": "text", + "text": "Estimating camera parameters from in-the-wild videos remains challenging, as leading methods like [114, 115, 145, 192] frequently fail when processing videos containing dynamic scene content. This limitation results in camera-annotated datasets being heavily biased", + "bbox": [ + 511, + 824, + 906, + 900 + ], + "page_idx": 4 + },
+ { + "type": "page_number", + "text": "22879", + "bbox": [ + 478, + 944, + 517, + 955 + ], + "page_idx": 4 + },
+ { + "type": "image", + "img_path": "images/81ea02059ff1477dd730701760e70616e713378dad921b2c3c191cac9198b885.jpg", + "image_caption": [ + "Figure 5. Video DiT is secretly a camera pose estimator. We perform linear probing of camera poses in each of VDiT's blocks for various noise levels and observe that video DiT performs pose estimation under the hood. Its middle blocks carry the most accurate information about the camera locations and orientations, which indicates that the camera signal emerges in the early layers to help the middle and late blocks render other visual features aligned with the viewpoint." + ], + "image_footnote": [], + "bbox": [ + 114, + 89, + 454, + 239 + ], + "page_idx": 5 + },
+ { + "type": "image", + "img_path": "images/5b13402e9380f67cf4a5cc2290fae66fa5179e151f60267cd7fe2a1dbed59fa5.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 455, + 90, + 883, + 239 + ], + "page_idx": 5 + },
+ { + "type": "text", + "text": "toward static scenes, which is particularly evident in RealEstate10K (RE10K) [199], the predominant dataset for training camera-controlled video models [6, 39, 149]. We hypothesize that models fine-tuned on such data interpret camera position information as a signal to suppress scene dynamics. This bias persists even when jointly training on unconstrained 2D video data [150], because the camera conditioning branch is only activated when camera parameters are available. In practice, this occurs exclusively for the static scenes of RE10K, since static scenes remain the only reliable source of accurate camera annotations.", + "bbox": [ + 88, + 335, + 483, + 500 + ], + "page_idx": 5 + },
+ { + "type": "text", + "text": "To address this fundamental limitation, we propose an alternative approach: rather than attempting to annotate dynamic scenes, which proved unsuccessful in our extensive preliminary research, even with state-of-the-art methods [145], we curate a collection of 20K diverse videos featuring dynamic scenes captured by stationary cameras (see Figure 6). With stationary cameras, the camera pose is inherently known (we can assign a fixed, arbitrary extrinsic), allowing us to keep the camera conditioning branch active during training while exposing the model to dynamic content, helping it distinguish between viewpoint conditioning and scene stillness. 
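Concretely, the placeholder trajectory for such a clip can be as simple as repeating one fixed camera. A sketch follows; the 25 values per frame mirror the extrinsics-plus-intrinsics layout of Sec. 3.2, and the intrinsics here are arbitrary illustrative values:

```python
import numpy as np

def stationary_trajectory(num_frames: int) -> np.ndarray:
    """Constant camera for a static-camera clip: identity rotation, zero
    translation, fixed intrinsics. Returns an array of shape (num_frames, 25)."""
    extrinsic = np.eye(4, dtype=np.float32)  # fixed, arbitrary camera pose
    intrinsic = np.array([[0.5, 0.0, 0.5],
                          [0.0, 0.5, 0.5],
                          [0.0, 0.0, 1.0]], dtype=np.float32)  # illustrative
    per_frame = np.concatenate([extrinsic.ravel(), intrinsic.ravel()])  # 25
    return np.tile(per_frame, (num_frames, 1))
```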
In addition to this secondary dataset, following [150], we remove the scale ambiguity in RE10K by leveraging an off-the-shelf metric depth estimator; see Appendix H. Our experiments in Sec. 4.3 demonstrate that this straightforward yet effective data curation strategy successfully mitigates the distributional limitations of RE10K, restoring much of the lost scene dynamics while maintaining precise camera control.", + "bbox": [ + 93, + 502, + 485, + 804 + ], + "page_idx": 5 + },
+ { + "type": "text", + "text": "3.6. Miscellaneous improvements", + "text_level": 1, + "bbox": [ + 89, + 814, + 349, + 830 + ], + "page_idx": 5 + },
+ { + "type": "text", + "text": "In addition to our core analysis, we introduce several auxiliary techniques that enhance model performance.", + "bbox": [ + 89, + 835, + 485, + 867 + ], + "page_idx": 5 + },
+ { + "type": "text", + "text": "Separate text and camera guidance. Text and camera signals require different guidance weights due to their distinct", + "bbox": [ + 89, + 869, + 483, + 901 + ], + "page_idx": 5 + },
+ { + "type": "image", + "img_path": "images/68916703f5541477f8f661f8e721d18c59c45cea7bc6d959220446f4f42366a8.jpg", + "image_caption": [ + "Figure 6. RealEstate10k [199] videos (upper 2 rows) contain diverse camera trajectories, but are strongly biased towards static scenes. To mitigate this bias and also increase concept diversity, we curate 20K videos with stationary cameras but dynamic content (lower 2 rows). Such datasets are easy to construct and surprisingly effective. Section 4.3 shows that integrating the dataset into our training improves visual quality on out-of-distribution prompts by $17\%$." + ], + "image_footnote": [], + "bbox": [ + 519, + 333, + 901, + 508 + ], + "page_idx": 5 + },
+ { + "type": "text", + "text": "nature, motivating us to separate their classifier-free guidance (CFG) [9, 44]. We formulate the guidance as:", + "bbox": [ + 511, + 659, + 906, + 690 + ], + "page_idx": 5 + },
+ { + "type": "equation", + "text": "
$$
\hat{s}(\boldsymbol{x} \mid \boldsymbol{t}, \boldsymbol{c}) = (1 + w_{y} + w_{c})\, s_{\theta}(\boldsymbol{x} \mid \boldsymbol{t}, \boldsymbol{c}) - w_{y}\, s_{\theta}(\boldsymbol{x} \mid \boldsymbol{c}) - w_{c}\, s_{\theta}(\boldsymbol{x} \mid \boldsymbol{t}), \tag{1}
$$
", + "text_format": "latex", + "bbox": [ + 584, + 700, + 906, + 736 + ], + "page_idx": 5 + },
+ { + "type": "text", + "text": "where $\hat{s}(.)$ denotes the final update direction used during synthesis, $s_{\theta}$ represents the model's predicted update direction, $\pmb{t}$ and $\pmb{c}$ are text and camera conditions, and $w_{y}$ and $w_{c}$ are their respective CFG weights. We zero out the corresponding conditioning tensor to obtain the unconditional predictions.", + "bbox": [ + 511, + 744, + 906, + 821 + ], + "page_idx": 5 + },
+ { + "type": "text", + "text": "ControlNet with feedback. Traditional ControlNet [188] conditioning, used in recent camera control methods [6, 39, 149], only processes conditioning signals without accessing the main branch. 
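Before turning to that feedback mechanism, we note that Eq. (1) is straightforward to implement with three model evaluations per denoising step. A minimal sketch, where the `denoiser` interface and the guidance weights are hypothetical placeholders:

```python
import torch

def guided_update(denoiser, x, sigma, text_emb, cam_emb, w_y=7.5, w_c=2.0):
    """Separate text/camera classifier-free guidance following Eq. (1).
    Unconditional branches zero out the corresponding conditioning tensor."""
    s_full = denoiser(x, sigma, text_emb, cam_emb)                    # s(x|t,c)
    s_cam = denoiser(x, sigma, torch.zeros_like(text_emb), cam_emb)   # s(x|c)
    s_text = denoiser(x, sigma, text_emb, torch.zeros_like(cam_emb))  # s(x|t)
    return (1 + w_y + w_c) * s_full - w_y * s_cam - w_c * s_text
```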
Our experiments show that using a bidirectional ControlNet produces better camera representations.", + "bbox": [ + 511, + 824, + 908, + 901 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "22880", + "bbox": [ + 478, + 944, + 519, + 957 + ], + "page_idx": 5 + }, + { + "type": "table", + "img_path": "images/cbc675c62bf8069c3abb44c46bbdae5edf36531dd0dc97fa4fa0a8d8e1dd43f0.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
| Method | Human Preference | | | | |
| | CA | MQ | TA | VQ | Overall |
| Ours vs. VD3D (FIT) | 89.5% | 79.0% | 87.5% | 97.5% | 95.0% |
| Ours vs. VD3D (DiT) | 65.0% | 87.5% | 83.5% | 95.0% | 92.5% |
", + "bbox": [ + 94, + 89, + 482, + 152 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Table 1. User study. We compare our approach to the original VD3D (FIT) and reimplemented VD3D (DiT) on top of our base model. We conduct a user study where participants indicate their preference based on camera alignment (CA), motion quality (MQ), text alignment (TA), visual quality (VQ), and overall preference (Overall).", + "bbox": [ + 89, + 174, + 483, + 258 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "This modification behaves as a feedback mechanism [71] provided by the main synthesis branch to the camera processing branch.", + "bbox": [ + 89, + 285, + 483, + 330 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Dropping context in the camera branch. Applying cross-attention over the context information (text prompts, resolution, etc.) in the camera DiT-XS blocks worsens visual quality and camera steering due to harmful interference of the context embeddings with camera representations.", + "bbox": [ + 89, + 334, + 483, + 411 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4. Experiments", + "text_level": 1, + "bbox": [ + 89, + 424, + 220, + 441 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Datasets. Our base VDiT model was trained on a large-scale dataset of text-annotated images and videos. VDiT-CC is fine-tuned from VDiT on RealEstate10K [199], contains $\\approx 65\\mathrm{M}$ video clips with per-frame camera parameters since it is the setup used by existing methods [6, 39, 149].", + "bbox": [ + 89, + 454, + 483, + 530 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Metrics. To assess the performance, we rely on a wide range of automatic quantitative metrics. We use FID [43], FVD [136], and CLIP score [42] to evaluate visual quality, and rotation and normalized translation errors [39] of ParticleSfM [192]-reconstructed trajectories to assess camera steerability. We evaluate them both on RE10K and MSR-VTT [161], since the latter allows to assess zero-shot generalization on out-of-distribution data. Moreover, we conduct a user study with details in the appendix in Sec. J.", + "bbox": [ + 89, + 532, + 483, + 670 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.1. Baselines", + "text_level": 1, + "bbox": [ + 89, + 680, + 197, + 694 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "We select three camera-control methods: MotionCtrl [149], CameraCtrl [39], and VD3D [6]. MotionCtrl and CameraCtrl use a UNet-based video diffusion backbone [37], while VD3D builds on top of FIT [21, 93] and as such, is easily extendable to our video DiT [100] setup. Hence, we re-implement VD3D on top of our VDiT model to obtain an additional \"VD3D+DiT\" baseline. Moreover, we provide comparisons for an open-source model, i.e., CogVideoX [169]. See Sec. C of the appendix for more details.", + "bbox": [ + 89, + 700, + 483, + 838 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.2. Main results", + "text_level": 1, + "bbox": [ + 89, + 848, + 223, + 862 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "We present quantitative comparisons with the baselines in Tab. 2. One can observe that just switching from the 4B-", + "bbox": [ + 89, + 869, + 483, + 900 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "parameter pixel-space FIT [21] backbone, employed by the original VD3D approach, to our larger 11.5B-parameter latent-space DiT yields clear improvements across most metrics. 
Next, the results demonstrate that AC3D establishes a new state of the art, outperforming all baselines. Evaluating the quality of camera motion from still images is difficult, so we instead visualize all qualitative results on the website provided with our supplementary material. Therein, we can observe that AC3D better follows pose conditioning and achieves higher visual fidelity. We conduct user studies against VD3D+FIT (the original model) and VD3D+DiT (our improved re-implementation on top of the bigger video transformer). The results are presented in Table 1: AC3D outperforms them across all qualitative aspects, achieving a $90\%+$ overall preference score. Finally, we encourage the reader to assess the visual quality by observing videos on our website.", + "bbox": [ + 511, + 90, + 906, + 347 + ], + "page_idx": 6 + },
+ { + "type": "text", + "text": "4.3. Ablations", + "text_level": 1, + "bbox": [ + 513, + 359, + 624, + 373 + ], + "page_idx": 6 + },
+ { + "type": "text", + "text": "No camera conditioning. The first ablation we conduct is to drop all camera conditioning, which makes the model equivalent to the vanilla VDiT. This is needed to understand the degradation in visual quality and text alignment. The results (Tab. 2, row w/o camera cond) show that our model loses only $\approx 7\%$ of the original visual fidelity on MSR-VTT (as measured by FVD), while (as expected) greatly improving on its in-domain RE10K data. In comparison, VD3D-DiT (the closest baseline) loses $\approx 20\%$ of its visual fidelity on MSR-VTT.", + "bbox": [ + 511, + 383, + 906, + 536 + ], + "page_idx": 6 + },
+ { + "type": "text", + "text": "Importance of biasing the noise towards higher levels. As described in Sec. 3.3, we use the truncated normal distribution with a location of 0.8 and a scale of 0.075 on the $[0.6, 1]$ interval for training AC3D. We ablate the importance of biasing the noise sampling towards high noise levels and observe that it yields higher motion, visual quality, and camera controllability.", + "bbox": [ + 511, + 540, + 905, + 631 + ], + "page_idx": 6 + },
+ { + "type": "text", + "text": "Importance of truncating the noise schedule. We change the training and inference procedure by using no truncation during noise sampling. Instead, we condition the model with camera inputs over the whole noise range and observe decreased visual quality.", + "bbox": [ + 511, + 635, + 905, + 710 + ], + "page_idx": 6 + },
+ { + "type": "text", + "text": "No camera guidance. We assess the importance of classifier-free guidance [44] on the camera conditioning in Tab. 2 (w/o camera CFG). The model without camera guidance attains the same visual quality on both in-distribution (RE10K) and out-of-distribution (MSR-VTT) data, but degrades in camera following, resulting in $\approx 5\%$ worse pose reconstruction errors.", + "bbox": [ + 511, + 714, + 906, + 805 + ], + "page_idx": 6 + },
+ { + "type": "text", + "text": "Training without our data with scene motion. To understand how well our curated data with scene motion but stationary cameras mitigates static scene bias, we train AC3D exclusively on RE10K, and report the results in Tab. 2 (w/o our dynamic data). 
The model maintains similar visual quality and text alignment on RE10K (in-domain data), but", + "bbox": [ + 511, + 810, + 908, + 902 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "22881", + "bbox": [ + 478, + 944, + 517, + 955 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/e68fcf7d3b123e62d6b3ad6a74ed101391b9ea1e09a2ed45e50885da65b9b5b3.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
| Method | RealEstate10K [199] | | | | | MSR-VTT [161] | | | | |
| | TransErr ↓ | RotErr ↓ | FID ↓ | FVD ↓ | CLIP ↑ | TransErr ↓ | RotErr ↓ | FID ↓ | FVD ↓ | CLIP ↑ |
| MotionCtrl (U-Net) | 0.477 | 0.094 | 2.99 | 61.70 | 26.46 | 0.593 | 0.137 | 16.85 | 283.12 | 24.11 |
| CameraCtrl (U-Net) | 0.465 | 0.089 | 2.48 | 55.64 | 26.81 | 0.587 | 0.132 | 12.33 | 201.33 | 25.05 |
| VD3D (FIT) | 0.409 | 0.043 | 1.40 | 42.43 | 28.07 | 0.504 | 0.050 | 7.80 | 165.18 | 26.89 |
| VD3D (CogVideoX) | 0.467 | 0.063 | 1.66 | 43.14 | 28.08 | 0.501 | 0.068 | 7.45 | 148.11 | 27.65 |
| AC3D (CogVideoX) (ours) | 0.374 | 0.039 | 1.27 | 38.20 | 28.62 | 0.431 | 0.039 | 5.52 | 116.04 | 28.38 |
| MotionCtrl (VDiT) | 0.504 | 0.126 | 1.74 | 43.81 | 27.69 | 0.589 | 0.146 | 9.92 | 150.20 | 27.25 |
| CameraCtrl (VDiT) | 0.513 | 0.138 | 1.62 | 42.10 | 27.73 | 0.566 | 0.143 | 8.15 | 146.77 | 27.51 |
| VD3D (VDiT) | 0.421 | 0.056 | 1.21 | 38.57 | 28.34 | 0.486 | 0.047 | 6.88 | 137.62 | 27.90 |
| AC3D (VDiT) (ours) | 0.358 | 0.035 | 1.18 | 36.55 | 28.76 | 0.428 | 0.038 | 5.34 | 110.71 | 28.58 |
| w/o camera cond | +0.233 | +0.153 | +4.02 | +53.83 | -1.63 | +0.266 | +0.157 | -0.48 | -8.53 | +0.35 |
| w/o biasing noise | +0.093 | +0.015 | +0.02 | +1.78 | -0.32 | +0.138 | +0.033 | +0.59 | +16.92 | -0.54 |
| w/o noise truncation | +0.020 | -0.003 | +0.06 | +1.69 | -0.20 | +0.016 | +0.005 | +0.76 | +6.63 | -0.18 |
| w/o camera CFG | +0.014 | +0.004 | +0.49 | +4.57 | -0.54 | +0.025 | +0.003 | +0.03 | +1.42 | -0.27 |
| w/o our dynamic data | -0.005 | -0.004 | -0.06 | +0.22 | -0.20 | +0.004 | -0.001 | +0.89 | +4.40 | -0.55 |
| w/o metric scaled data | +0.013 | +0.005 | +0.17 | +4.65 | 0.00 | +0.023 | +0.002 | -0.01 | 0.00 | -0.12 |
| w/o dropping camera context | +0.013 | +0.001 | +0.04 | +2.46 | -0.65 | +0.029 | +0.003 | +1.25 | +7.41 | -0.36 |
| w/o limiting camera cond to 8 blocks | -0.001 | +0.001 | +0.09 | +0.56 | -0.02 | +0.003 | 0.000 | +0.32 | +9.23 | -0.33 |
| w/ 2D training | +0.129 | +0.068 | +2.60 | +33.85 | -1.17 | +0.128 | +0.093 | -0.26 | -3.83 | +0.21 |
", + "bbox": [ + 91, + 88, + 911, + 345 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Table 2. Quantitative evaluation. We evaluate all the models using camera pose and visual quality metrics based on unseen camera trajectories. We compute translation and rotation errors based on the estimated camera poses from generations using ParticleSfM [192]. We evaluate both in-distribution performance with RealEstate10K [199] and out-of-distribution performance with MSR-VTT [161].", + "bbox": [ + 88, + 354, + 906, + 398 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "performance on out-of-distribution samples from MSR-VTT worsens ( $\\approx 17\\%$ worse FID and $\\approx 4\\%$ worse FVD). The quality of scene motion is better assessed by referring to our qualitative video comparisons in the supplementary.", + "bbox": [ + 88, + 424, + 480, + 486 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Importance of metric scaled cameras. We train AC3D using the original RE10K's camera parameters without our scaling procedure and present the results in Tab. 2 (w/o metric scaled data). This is a more ambiguous conditioning signal, and results in worse visual quality ( $\\approx 10\\%$ FVD on RE10K) and camera following performance ( $\\approx 12\\%$ worse trajectory reconstruction).", + "bbox": [ + 88, + 489, + 482, + 597 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Providing context into the camera branch. As discussed in Sec. 3.6, we chose not to input the context information (text embeddings, resolution conditioning, etc.) into the camera branch to avoid potential interference with the camera representations. As Tab. 2 (w/o dropping camera context) shows, providing this information indeed results in $\\approx 4\\%$ worse camera following and $\\approx 15\\%$ lower visual quality.", + "bbox": [ + 88, + 601, + 482, + 708 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Importance of limiting conditioning to the first 8 VDiT blocks. Following our insights in Sec. 3.4, we condition AC3D only in the first 8 blocks. Trying to condition in all the 32 DiT blocks (w/o limiting camera cond to 8 blocks) worsens the visual quality by $\\approx 10\\%$ , while keeping the quality control at the same level. This suggests that the middle and late VDiT layers indeed rely on processed camera information and conditioning them on external camera poses might lead to interference with other visual features.", + "bbox": [ + 88, + 713, + 482, + 849 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Joint training with 2D data. To mitigate visual quality and scene motion degradation, we attempted to perform joint fine-tuning on 2D video data (without camera annotations)", + "bbox": [ + 88, + 854, + 483, + 901 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "which was used in base VDiT training by applying dropout on camera inputs for it. Prior work shows performance benefits with this strategy [150] and, as Tab. 2 (with $2D$ training) shows, it indeed helps to maintain slightly higher visual fidelity in our case ( $\\approx 3\\%$ better FVD on MSR-VTT). However, camera steering severely deteriorates, leading to up to $3\\times$ worse results for translation/rotation errors.", + "bbox": [ + 511, + 424, + 906, + 530 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "5. 
Conclusions", + "text_level": 1, + "bbox": [ + 511, + 553, + 640, + 569 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Our findings demonstrate that principled analysis of camera motion in video diffusion models leads to significant improvements in control precision and efficiency. Through enhanced conditioning schedules, targeted layerspecific camera control, and better-calibrated training data, AC3D achieves state-of-the-art performance in 3D camera-controlled video synthesis while maintaining high visual quality and natural scene dynamics. This work establishes a foundation for more precise and efficient camera control in text-to-video generation. We discuss the limitations of our approach in Appendix B. In future work, we plan to focus on further improving data limitations and developing control mechanisms for camera trajectories far outside of the training distribution.", + "bbox": [ + 511, + 582, + 908, + 792 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "6. Acknowledgements", + "text_level": 1, + "bbox": [ + 511, + 816, + 700, + 834 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "DBL acknowledges support from NSERC under the RGPIN program, the Canada Foundation for Innovation, and the Ontario Research Fund.", + "bbox": [ + 511, + 844, + 908, + 887 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "22882", + "bbox": [ + 478, + 944, + 519, + 957 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 91, + 89, + 187, + 104 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[1] Jimmy Lei Ba. Layer normalization. arXiv preprint arXiv:1607.06450, 2016. 17", + "[2] Sherwin Bahmani, Jeong Joon Park, Despoina Paschalidou, Hao Tang, Gordon Wetzstein, Leonidas Guibas, Luc Van Gool, and Radu Timofte. 3d-aware video generation. In TMLR, 2023. 2, 20", + "[3] Sherwin Bahmani, Jeong Joon Park, Despoina Paschalidou, Xingguang Yan, Gordon Wetzstein, Leonidas Guibas, and Andrea Tagliasacchi. CC3D: Layout-conditioned generation of compositional 3D scenes. In Proc. ICCV, 2023. 20", + "[4] Sherwin Bahmani, Xian Liu, Wang Yifan, Ivan Skorokhodov, Victor Rong, Ziwei Liu, Xihui Liu, Jeong Joon Park, Sergey Tulyakov, Gordon Wetzstein, Andrea Tagliasacchi, and David B. Lindell. Tc4d: Trajectory-conditioned text-to-4d generation. In Proc. ECCV, 2024. 2, 20", + "[5] Sherwin Bahmani, Ivan Skorokhodov, Victor Rong, Gordon Wetzstein, Leonidas Guibas, Peter Wonka, Sergey Tulyakov, Jeong Joon Park, Andrea Tagliasacchi, and David B. Lindell. 4d-fy: Text-to-4d generation using hybrid score distillation sampling. In Proc. CVPR, 2024. 2, 20", + "[6] Sherwin Bahmani, Ivan Skorokhodov, Aliaksandr Siarohin, Willi Menapace, Guocheng Qian, Michael Vasilkovsky, Hsin-Ying Lee, Chaoyang Wang, Jiaxu Zou, Andrea Tagliasacchi, et al. Vd3d: Taming large video diffusion transformers for 3d camera control. arXiv preprint arXiv:2407.12781, 2024. 2, 3, 4, 6, 7, 18", + "[7] Andreas Blattmann, Tim Dockhorn, Sumith Kulal, Daniel Mendelevitch, Maciej Kilian, Dominik Lorenz, Yam Levi, Zion English, Vikram Voleti, Adam Letts, et al. Stable video diffusion: Scaling latent video diffusion models to large datasets. arXiv preprint arXiv:2311.15127, 2023. 2, 3", + "[8] Andreas Blattmann, Robin Rombach, Huan Ling, Tim Dockhorn, Seung Wook Kim, Sanja Fidler, and Karsten Kreis. Align your latents: High-resolution video synthesis with latent diffusion models. In Proc. CVPR, 2023. 
2", + "[9] Tim Brooks, Aleksander Holynski, and Alexei A Efros. Instructpix2pix: Learning to follow image editing instructions. In Proc. CVPR, 2023. 6", + "[10] Tim Brooks, Bill Peebles, Connor Holmes, Will DePue, Yufei Guo, Li Jing, David Schnurr, Joe Taylor, Troy Luhman, Eric Luhman, Clarence Ng, Ricky Wang, and Aditya Ramesh. Video generation models as world simulators. OpenAI technical reports, 2024. 1, 2, 3, 17", + "[11] Yukang Cao, Liang Pan, Kai Han, Kwan-Yee K Wong, and Ziwei Liu. Avatargo: Zero-shot 4d human-object interaction generation and animation. arXiv preprint arXiv:2410.07164, 2024. 20", + "[12] Zenghao Chai, Chen Tang, Yongkang Wong, and Mohan Kankanhalli. Star: Skeleton-aware text-based 4d avatar generation with in-network motion retargeting. arXiv preprint arXiv:2406.04629, 2024. 20", + "[13] Eric R Chan, Connor Z Lin, Matthew A Chan, Koki Nagano, Boxiao Pan, Shalini De Mello, Orazio Gallo, Leonidas J Guibas, Jonathan Tremblay, Sameh Khamis, et al. Efficient geometry-aware 3D generative adversarial networks. In Proc. CVPR, 2022. 20" + ], + "bbox": [ + 101, + 114, + 483, + 898 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[14] Eric R Chan, Koki Nagano, Matthew A Chan, Alexander W Bergman, Jeong Joon Park, Axel Levy, Miika Aittala, Shalini De Mello, Tero Karras, and Gordon Wetzstein. Generative novel view synthesis with 3d-aware diffusion models. In Proc. ICCV, 2023. 20", + "[15] Ce Chen, Shaoli Huang, Xuelin Chen, Guangyi Chen, Xiaoguang Han, Kun Zhang, and Mingming Gong. Ct4d: Consistent text-to-4d generation with animatable meshes. arXiv preprint arXiv:2408.08342, 2024. 20", + "[16] Eric Ming Chen, Sidhanth Holalkere, Ruyu Yan, Kai Zhang, and Abe Davis. Ray conditioning: Trading photoconsistency for photo-realism in multi-view image generation. In Proc. ICCV, 2023. 4", + "[17] Haoxin Chen, Menghan Xia, Yingqing He, Yong Zhang, Xiaodong Cun, Shaoshu Yang, Jinbo Xing, Yaofang Liu, Qifeng Chen, Xintao Wang, Chao Weng, and Ying Shan. Videocrafter1: Open diffusion models for high-quality video generation. arXiv preprint arXiv:2310.19512, 2023. 3", + "[18] Junsong Chen, Yue Wu, Simian Luo, Enze Xie, Sayak Paul, Ping Luo, Hang Zhao, and Zhenguo Li. Pixart-{\\delta} : Fast and controllable image generation with latent consistency models. arXiv preprint arXiv:2401.05252, 2024. 4", + "[19] Kevin Chen, Christopher B Choy, Manolis Savva, Angel X Chang, Thomas Funkhouser, and Silvio Savarese. Text2Shape: Generating shapes from natural language by learning joint embeddings. In Proc. ACCV, 2018. 20", + "[20] Rui Chen, Yongwei Chen, Ningxin Jiao, and Kui Jia. *Fantasia3D: Disentangling geometry and appearance for high-quality text-to-3D content creation.* arXiv preprint arXiv:2303.13873, 2023. 20", + "[21] Ting Chen and Lala Li. Fit: Far-reaching interleaved transformers. arXiv preprint arXiv:2305.12689, 2023. 7", + "[22] Soon Yau Cheong, Duygu Ceylan, Armin Mustafa, Andrew Gilbert, and Chun-Hao Paul Huang. Boosting camera motion control for video diffusion transformers. arXiv preprint arXiv:2410.10802, 2024. 3", + "[23] Wen-Hsuan Chu, Lei Ke, and Katerina Fragkiadaki. Dreamscene4d: Dynamic multi-object scene generation from monocular videos. arXiv preprint arXiv:2405.02280, 2024. 2, 20", + "[24] Terrance DeVries, Miguel Angel Bautista, Nitish Srivastava, Graham W Taylor, and Joshua M Susskind. Unconstrained scene generation with locally conditioned radiance fields. In Proc. ICCV, 2021. 20", + "[25] Sander Dieleman. 
Perspectives on diffusion, 2023. 2", + "[26] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In Proc. ICLR, 2021. 17", + "[27] Mohamed El Banani, Amit Raj, Kevis-Kokitsi Maninis, Abhishek Kar, Yuanshen Li, Michael Rubinstein, Deqing Sun, Leonidas Guibas, Justin Johnson, and Varun Jampani. Probing the 3d awareness of visual foundation models. In Proc. CVPR, 2024. 2", + "[28] Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas Müller, Harry Saini, Yam Levi, Dominik" + ], + "bbox": [ + 524, + 92, + 906, + 898 + ], + "page_idx": 8 + },
+ { + "type": "page_number", + "text": "22883", + "bbox": [ + 478, + 944, + 517, + 955 + ], + "page_idx": 8 + },
+ { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Lorenz, Axel Sauer, Frederic Boesel, et al. Scaling rectified flow transformers for high-resolution image synthesis. In Proc. ICML, 2024. 4, 5, 18", + "[29] Gunnar Farnebäck. Two-frame motion estimation based on polynomial expansion. In Proc. SCIA, 2003. 19", + "[30] Qijun Feng, Zhen Xing, Zuxuan Wu, and Yu-Gang Jiang. FDGaussian: Fast Gaussian splatting from single image via geometric-aware diffusion model. arXiv preprint arXiv:2403.10242, 2024. 20", + "[31] Yutao Feng, Yintong Shang, Xiang Feng, Lei Lan, Shandian Zhe, Tianjia Shao, Hongzhi Wu, Kun Zhou, Hao Su, Chenfanfu Jiang, et al. Elastogen: 4d generative elastodynamics. arXiv preprint arXiv:2405.15056, 2024. 20", + "[32] Peng Gao, Le Zhuo, Ziyi Lin, Dongyang Liu, Ruoyi Du, Xu Luo, Longtian Qiu, Yuhang Zhang, et al. Lumina-t2x: Transforming text into any modality, resolution, and duration via flow-based large diffusion transformers. arXiv preprint arXiv:2405.05945, 2024. 3, 17, 18", + "[33] Quankai Gao, Qiangeng Xu, Zhe Cao, Ben Mildenhall, Wenchao Ma, Le Chen, Danhang Tang, and Ulrich Neumann. Gaussianflow: Splitting Gaussian dynamics for 4D content creation. arXiv preprint arXiv:2403.12365, 2024. 2, 20", + "[34] Ruiqi Gao, Aleksander Holynski, Philipp Henzler, Arthur Brussee, Ricardo Martin-Brualla, Pratul Srinivasan, Jonathan T Barron, and Ben Poole. Cat3d: Create anything in 3d with multi-view diffusion models. In Proc. NeurIPS, 2024. 20", + "[35] William Gao, Noam Aigerman, Thibault Groueix, Vova Kim, and Rana Hanocka. TextDeformer: Geometry manipulation using text guidance. In SIGGRAPH, 2023. 20", + "[36] Jiatao Gu, Alex Trevithick, Kai-En Lin, Joshua M Susskind, Christian Theobalt, Lingjie Liu, and Ravi Ramamoorthi. NerfDiff: Single-image view synthesis with NeRF-guided distillation from 3D-aware diffusion. In Proc. ICML, 2023. 20", + "[37] Yuwei Guo, Ceyuan Yang, Anyi Rao, Yaohui Wang, Yu Qiao, Dahua Lin, and Bo Dai. Animatediff: Animate your personalized text-to-image diffusion models without specific tuning. In Proc. ICLR, 2024. 2, 7", + "[38] Junlin Han, Filippos Kokkinos, and Philip Torr. VFusion3D: Learning scalable 3D generative models from video diffusion models. arXiv preprint arXiv:2403.12034, 2024. 20", + "[39] Hao He, Yinghao Xu, Yuwei Guo, Gordon Wetzstein, Bo Dai, Hongsheng Li, and Ceyuan Yang. Cameractrl: Enabling camera control for text-to-video generation. arXiv preprint arXiv:2404.02101, 2024. 2, 3, 4, 5, 6, 7, 19", + "[40] Xianglong He, Junyi Chen, Sida Peng, Di Huang, Yangguang Li, Xiaoshui Huang, Chun Yuan, Wanli Ouyang, and Tong He. 
GVGEN: Text-to-3D generation with volumetric representation. arXiv preprint arXiv:2403.12957, 2024. 20", + "[41] Dan Hendrycks and Kevin Gimpel. Gaussian error linear units (gelus). arXiv preprint arXiv:1606.08415, 2016. 17, 18", + "[42] Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Le Bras, and Yejin Choi. Clipscore: A reference-free evaluation metric for image captioning. arXiv preprint arXiv:2104.08718, 2021. 7" + ], + "bbox": [ + 99, + 90, + 483, + 898 + ], + "page_idx": 9 + },
+ { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[43] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In Proc. NeurIPS, 2017. 7", + "[44] Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598, 2022. 4, 6, 7", + "[45] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In Proc. NeurIPS, 2020. 3", + "[46] Lukas Hollein, Ang Cao, Andrew Owens, Justin Johnson, and Matthias Nießner. Text2room: Extracting textured 3d meshes from 2d text-to-image models. In Proc. ICCV, 2023. 20", + "[47] Lukas Hollein, Aljaž Božić, Norman Müller, David Novotny, Hung-Yu Tseng, Christian Richardt, Michael Zollhöfer, and Matthias Nießner. ViewDiff: 3D-consistent image generation with text-to-image models. In Proc. CVPR, 2024. 20", + "[48] Yicong Hong, Kai Zhang, Jiuxiang Gu, Sai Bi, Yang Zhou, Difan Liu, Feng Liu, Kalyan Sunkavalli, Trung Bui, and Hao Tan. LRM: Large reconstruction model for single image to 3D. In Proc. ICLR, 2024. 20", + "[49] Chen Hou, Guoqiang Wei, Yan Zeng, and Zhibo Chen. Training-free camera control for video generation. arXiv preprint arXiv:2406.10126, 2024. 3", + "[50] Teng Hu, Jiangning Zhang, Ran Yi, Yating Wang, Hongrui Huang, Jieyu Weng, Yabiao Wang, and Lizhuang Ma. Motionmaster: Training-free camera motion transfer for video generation. arXiv preprint arXiv:2404.15789, 2024. 3", + "[51] Chun-Hao Paul Huang, Jae Shin Yoon, Hyeonho Jeong, Niloy Mitra, and Duygu Ceylan. Camera pose estimation emerging in video diffusion transformer, 2024. 3", + "[52] Tianyu Huang, Yihan Zeng, Hui Li, Wangmeng Zuo, and Rynson WH Lau. Dreamphysics: Learning physical properties of dynamic 3d gaussians with video diffusion priors. arXiv preprint arXiv:2406.01476, 2024. 20", + "[53] Ajay Jain, Ben Mildenhall, Jonathan T Barron, Pieter Abbeel, and Ben Poole. Zero-shot text-guided object generation with dream fields. In Proc. CVPR, 2022. 20", + "[54] Yash Jain, Anshul Nasery, Vibhav Vineet, and Harkirat Behl. Peekaboo: Interactive video generation via masked-diffusion. In Proc. CVPR, 2024. 20", + "[55] Nikolay Jetchev. ClipMatrix: Text-controlled creation of 3D textured meshes. arXiv preprint arXiv:2109.12922, 2021. 20", + "[56] Lutao Jiang and Lin Wang. Brightdreamer: Generic 3D Gaussian generative framework for fast text-to-3D synthesis. arXiv preprint arXiv:2403.11273, 2024. 20", + "[57] Yanqin Jiang, Chaohui Yu, Chenjie Cao, Fan Wang, Weiming Hu, and Jin Gao. Animate3d: Animating any 3d model with multi-view video diffusion. arXiv preprint arXiv:2407.11398, 2024. 20", + "[58] Yanqin Jiang, Li Zhang, Jin Gao, Weiming Hu, and Yao Yao. Consistent4d: Consistent 360° dynamic object generation from monocular video. In Proc. ICLR, 2024. 
20", + "[59] Yash Kant, Aliaksandr Siarohin, Ziyi Wu, Michael Vasilkovsky, Guocheng Qian, Jian Ren, Riza Alp Guler, Bernard Ghanem, Sergey Tulyakov, and Igor Gilitschenski. Spad: Spatially aware multi-view diffusers. In Proc. CVPR, 2024. 4" + ], + "bbox": [ + 522, + 92, + 906, + 898 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "22884", + "bbox": [ + 478, + 945, + 517, + 955 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[60] Tero Karras, Miika Aittala, Jaakko Lehtinen, Janne Hellsten, Timo Aila, and Samuli Laine. Analyzing and improving the training dynamics of diffusion models. In Proc. CVPR, 2024. 17", + "[61] Oren Katzir, Or Patashnik, Daniel Cohen-Or, and Dani Lischinski. Noise-free score distillation. In Proc. ICLR, 2024. 20", + "[62] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkuhler, and George Drettakis. 3d gaussian splattering for real-time radiance field rendering. In ACM TOG, 2023. 20", + "[63] Seung Wook Kim, Bradley Brown, Kangxue Yin, Karsten Kreis, Katja Schwarz, Daiqing Li, Robin Rombach, Antonio Torralba, and Sanja Fidler. NeuralField-LDM: Scene generation with hierarchical latent diffusion models. In Proc. CVPR, 2023. 20", + "[64] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013. 3", + "[65] Zhengfei Kuang, Shengqu Cai, Hao He, Yinghao Xu, Hongsheng Li, Leonidas Guibas, and Gordon Wetzstein. Collaborative video diffusion: Consistent multi-video generation with camera control. In Proc. NeurIPS, 2024. 3", + "[66] Kyungmin Lee, Kihyuk Sohn, and Jinwoo Shin. DreamFlow: High-quality text-to-3D generation by approximating probability flow. In Proc. ICLR, 2024. 20", + "[67] Yao-Chih Lee, Yi-Ting Chen, Andrew Wang, Ting-Hsuan Liao, Brandon Y Feng, and Jia-Bin Huang. Vividdream: Generating 3d scene with ambient dynamics. arXiv preprint arXiv:2405.20334, 2024. 20", + "[68] Guojun Lei, Chi Wang, Hong Li, Rong Zhang, Yikai Wang, and Weiwei Xu. Animateanything: Consistent and controllable animation for video generation. arXiv preprint arXiv:2411.10836, 2024. 3", + "[69] Bing Li, Cheng Zheng, Wenxuan Zhu, Jinjie Mai, Biao Zhang, Peter Wonka, and Bernard Ghanem. Vivid-zoo: Multi-view video generation with diffusion model. arXiv preprint arXiv:2406.08659, 2024. 20", + "[70] Jiahao Li, Hao Tan, Kai Zhang, Zexiang Xu, Fujun Luan, Yinghao Xu, Yicong Hong, Kalyan Sunkavalli, Greg Shakhnarovich, and Sai Bi. Instant3D: Fast text-to-3D with sparse-view generation and large reconstruction model. In Proc. ICLR, 2024. 20", + "[71] Ming Li, Taojiannan Yang, Huafeng Kuang, Jie Wu, Zhaoning Wang, Xuefeng Xiao, and Chen Chen. Controlnet++: Improving conditional controls with efficient consistency feedback. In ECCV, 2024. 3, 4, 7, 18", + "[72] Renjie Li, Panwang Pan, Bangbang Yang, Dejia Xu, Shijie Zhou, Xuanyang Zhang, Zeming Li, Achuta Kadambi, Zhangyang Wang, and Zhiwen Fan. 4k4dgen: Panoramic 4d generation at 4k resolution. arXiv preprint arXiv:2406.13527, 2024. 20", + "[73] Zhiqi Li, Yiming Chen, and Peidong Liu. Dreammesh4d: Video-to-4d generation with sparse-controlled gaussian-mesh hybrid representation. arXiv preprint arXiv:2410.06756, 2024. 20", + "[74] Zhiqi Li, Yiming Chen, Lingzhe Zhao, and Peidong Liu. Controllable text-to-3D generation via surface-aligned Gaussian splatting. arXiv preprint arXiv:2403.09981, 2024. 
20" + ], + "bbox": [ + 99, + 92, + 485, + 900 + ], + "page_idx": 10 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[75] Zhengqi Li, Richard Tucker, Noah Snavely, and Aleksander Holynski. Generative image dynamics. In Proc. CVPR, 2024. 2, 4, 16, 19", + "[76] Hanwen Liang, Yuyang Yin, Dejia Xu, Hanxue Liang, Zhangyang Wang, Konstantinos N Plataniotis, Yao Zhao, and Yunchao Wei. Diffusion4d: Fast spatial-temporal consistent 4d generation via video diffusion models. arXiv preprint arXiv:2405.16645, 2024. 2, 20", + "[77] Yixun Liang, Xin Yang, Jiantao Lin, Haodong Li, Xiaogang Xu, and Yingcong Chen. Luciddreamer: Towards high-fidelity text-to-3D generation via interval score matching. arXiv preprint arXiv:2311.11284, 2023. 20", + "[78] Chen-Hsuan Lin, Jun Gao, Luming Tang, Towaki Takikawa, Xiaohui Zeng, Xun Huang, Karsten Kreis, Sanja Fidler, Ming-Yu Liu, and Tsung-Yi Lin. Magic3D: High-resolution text-to-3D content creation. In Proc. CVPR, 2023. 20", + "[79] Jiajing Lin, Zhenzhong Wang, Yongjie Hou, Yuzhou Tang, and Min Jiang. Phy124: Fast physics-driven 4d content generation from a single image. arXiv preprint arXiv:2409.07179, 2024. 20", + "[80] Yukang Lin, Haonan Han, Chaoqun Gong, Zunnan Xu, Yachao Zhang, and Xiu Li. Consistent123: One image to highly consistent 3D asset using case-aware diffusion priors. In arXiv preprint arXiv:2309.17261, 2023. 20", + "[81] Huan Ling, Seung Wook Kim, Antonio Torralba, Sanja Fidler, and Karsten Kreis. Align your gaussians: Text-to-4d with dynamic 3d gaussians and composed diffusion models. In Proc. CVPR, 2024. 2, 20", + "[82] Pengyang Ling, Jiazi Bu, Pan Zhang, Xiaoyi Dong, Yuhang Zang, Tong Wu, Huaian Chen, Jiaqi Wang, and Yi Jin. Motionclone: Training-free motion cloning for controllable video generation. arXiv preprint arXiv:2406.05338, 2024. 3", + "[83] Pengkun Liu, Yikai Wang, Fuchun Sun, Jiafang Li, Hang Xiao, Hongxiang Xue, and Xinzhou Wang. Isotropic3D: Image-to-3D generation based on a single clip embedding. arXiv preprint arXiv:2403.10395, 2024. 20", + "[84] Ruoshi Liu, Rundi Wu, Basile Van Hoorick, Pavel Tokmakov, Sergey Zakharov, and Carl Vondrick. Zero-1-to-3: Zero-shot one image to 3D object. In Proc. ICCV, 2023. 20", + "[85] Xingchao Liu, Chengyue Gong, and Qiang Liu. Flow straight and fast: Learning to generate and transfer data with rectified flow. arXiv preprint arXiv:2209.03003, 2022. 3, 4", + "[86] Xian Liu, Xiaohang Zhan, Jiaxiang Tang, Ying Shan, Gang Zeng, Dahua Lin, Xihui Liu, and Ziwei Liu. HumanGaussian: Text-driven 3D human generation with Gaussian splatting. In Proc. CVPR, 2024. 20", + "[87] Yuan Liu, Cheng Lin, Zijiao Zeng, Xiaoxiao Long, Lingjie Liu, Taku Komura, and Wenping Wang. SyncDreamer: Generating multiview-consistent images from a single-view image. In Proc. ICLR, 2024. 20", + "[88] Xiaoxiao Long, Yuan-Chen Guo, Cheng Lin, Yuan Liu, Zhiyang Dou, Lingjie Liu, Yuexin Ma, Song-Hai Zhang, Marc Habermann, Christian Theobalt, et al. Wonder3D: Single image to 3D using cross-domain diffusion. In Proc. CVPR, 2024. 20", + "[89] I Loshchilov. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017. 17, 18" + ], + "bbox": [ + 522, + 92, + 906, + 900 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "22885", + "bbox": [ + 478, + 944, + 517, + 955 + ], + "page_idx": 10 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[90] Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with warm restarts. 
arXiv preprint arXiv:1608.03983, 2016. 17, 18", + "[91] Wan-Duo Kurt Ma, John P Lewis, and W Bastiaan Kleijn. Trailblazer: Trajectory control for diffusion-based video generation. arXiv preprint arXiv:2401.00896, 2023. 20", + "[92] Xin Ma, Yaohui Wang, Gengyun Jia, Xinyuan Chen, Ziwei Liu, Yuan-Fang Li, Cunjian Chen, and Yu Qiao. Latte: Latent diffusion transformer for video generation. arXiv preprint arXiv:2401.03048, 2024. 2", + "[93] Willi Menapace, Aliaksandr Siarohin, Ivan Skorokhodov, Ekaterina Deyneka, Tsai-Shien Chen, Anil Kag, Yuwei Fang, Aleksei Stoliar, Elisa Ricci, Jian Ren, et al. Snap video: Scaled spatiotemporal transformers for text-to-video synthesis. In Proc. CVPR, 2024. 2, 7, 17", + "[94] Qiaowei Miao, Yawei Luo, and Yi Yang. Pla4d: Pixel-level alignments for text-to-4d gaussian splatting. arXiv preprint arXiv:2405.19957, 2024. 20", + "[95] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In Proc. ECCV, 2020. 20", + "[96] Koichi Namekata, Sherwin Bahmani, Ziyi Wu, Yash Kant, Igor Gilitschenski, and David B Lindell. Sg-i2v: Self-guided trajectory control in image-to-video generation. arXiv preprint arXiv:2411.04989, 2024. 20", + "[97] Roy Or-El, Xuan Luo, Mengyi Shan, Eli Shechtman, Jeong Joon Park, and Ira Kemelmacher-Shlizerman. StyleSDF: High-resolution 3D-consistent image and geometry generation. In Proc. CVPR, 2022. 20", + "[98] Zijie Pan, Zeyu Yang, Xiatian Zhu, and Li Zhang. Fast dynamic 3d object generation from a single-view video. arXiv preprint arXiv:2401.08742, 2024. 2, 20", + "[99] Deepak Pathak, Ross Girshick, Piotr Dólar, Trevor Darrell, and Bharath Hariharan. Learning features by watching objects move. In Proc. CVPR, 2017. 19", + "[100] William Peebles and Saining Xie. Scalable diffusion models with transformers. In Proc. ICCV, 2023. 2, 3, 7, 17", + "[101] Ryan Po, Wang Yifan, Vladislav Golyanik, Kfir Aberman, Jonathan T Barron, Amit H Bermano, Eric Ryan Chan, Tali Dekel, Aleksander Holynski, Angjoo Kanazawa, et al. State of the art on diffusion models for visual computing. arXiv preprint arXiv:2310.07204, 2023. 2", + "[102] Adam Polyak, Amit Zohar, Andrew Brown, Andros Tjandra, Animesh Sinha, Ann Lee, Apoorv Vyas, Bowen Shi, Chih-Yao Ma, Ching-Yao Chuang, et al. Movie gen: A cast of media foundation models. arXiv preprint arXiv:2410.13720, 2024. 3, 17", + "[103] Ben Poole, Ajay Jain, Jonathan T. Barron, and Ben Mildenhall. DreamFusion: Text-to-3D using 2D diffusion. In Proc. ICLR, 2023. 20", + "[104] Guocheng Qian, Junli Cao, Aliaksandr Siarohin, Yash Kant, Chaoyang Wang, Michael Vasilkovsky, Hsin-Ying Lee, Yuwei Fang, Ivan Skorokhodov, Peiye Zhuang, et al. Atom: Amortized text-to-mesh using 2d diffusion. arXiv preprint arXiv:2402.00867, 2024. 20" + ], + "bbox": [ + 93, + 90, + 485, + 898 + ], + "page_idx": 11 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[105] Guocheng Qian, Jinjie Mai, Abdullah Hamdi, Jian Ren, Aliaksandr Siarohin, Bing Li, Hsin-Ying Lee, Ivan Skorokhodov, Peter Wonka, Sergey Tulyakov, et al. Magic123: One image to high-quality 3D object generation using both 2D and 3D diffusion priors. In Proc. ICLR, 2024. 20", + "[106] Haonan Qiu, Zhaoxi Chen, Zhouxia Wang, Yingqing He, Menghan Xia, and Ziwei Liu. Freetraj: Tuning-free trajectory control in video diffusion models. arXiv preprint arXiv:2406.16863, 2024. 
20", + "[107] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In Proc. ICML, 2021. 20", + "[108] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. In Proc. JMLR, 2020. 3, 17", + "[109] Jiawei Ren, Liang Pan, Jiaxiang Tang, Chi Zhang, Ang Cao, Gang Zeng, and Ziwei Liu. DreamGaussian4D: Generative 4D Gaussian splatting. arXiv preprint arXiv:2312.17142, 2023. 2, 20", + "[110] Jiawei Ren, Kevin Xie, Ashkan Mirzaei, Hanxue Liang, Xiaohui Zeng, Karsten Kreis, Ziwei Liu, Antonio Torralba, Sanja Fidler, Seung Wook Kim, et al. L4gm: Large 4d gaussian reconstruction model. In Proc. NeurIPS, 2024. 3, 20", + "[111] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proc. CVPR, 2022. 3, 19", + "[112] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In Proc. MICCAI, 2015. 4", + "[113] Aditya Sanghi, Hang Chu, Joseph G Lambourne, Ye Wang, Chin-Yi Cheng, Marco Fumero, and Kamal Rahimi Malekshan. CLIP-Forge: Towards zero-shot text-to-shape generation. In Proc. CVPR, 2022. 20", + "[114] Johannes Lutz Schonberger and Jan-Michael Frahm. Structure-from-motion revisited. In Proc. CVPR, 2016. 5, 20", + "[115] Johannes Lutz Schonberger, Enliang Zheng, Marc Pollefeys, and Jan-Michael Frahm. Pixelwise view selection for unstructured multi-view stereo. In Proc. ECCV, 2016. 5, 20", + "[116] Katja Schwarz, Axel Sauer, Michael Niemeyer, Yiyi Liao, and Andreas Geiger. VoxGRAF: Fast 3D-aware image synthesis with sparse voxel grids. In Proc. NeurIPS, 2022. 20", + "[117] Yichun Shi, Peng Wang, Jianglong Ye, Long Mai, Kejie Li, and Xiao Yang. MVDream: Multi-view diffusion for 3D generation. In Proc. ICLR, 2024. 20", + "[118] Jaidev Shriram, Alex Trevithick, Lingjie Liu, and Ravi Ramamoorthi. Realmdreamer: Text-driven 3d scene generation with inpainting and depth diffusion. arXiv preprint arXiv:2404.07199, 2024. 20", + "[119] Uriel Singer, Adam Polyak, Thomas Hayes, Xi Yin, Jie An, Songyang Zhang, Qiyuan Hu, Harry Yang, Oron Ashual, Oran Gafni, et al. Make-a-video: Text-to-video generation without text-video data. Proc. ICLR, 2023. 2" + ], + "bbox": [ + 516, + 90, + 906, + 898 + ], + "page_idx": 11 + }, + { + "type": "page_number", + "text": "22886", + "bbox": [ + 478, + 945, + 519, + 955 + ], + "page_idx": 11 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[120] Uriel Singer, Shelly Sheynin, Adam Polyak, Oron Ashual, Iurii Makarov, Filippos Kokkinos, Naman Goyal, Andrea Vedaldi, Devi Parikh, Justin Johnson, et al. Text-to-4d dynamic scene generation. In Proc. ICML, 2023. 2, 20", + "[121] Vincent Sitzmann, Semon Rezchikov, Bill Freeman, Josh Tenenbaum, and Fredo Durand. Light field networks: Neural scene representations with single-evaluation rendering. In Proc. NeurIPS, 2021. 4", + "[122] Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In Proc. ICML, 2015. 3", + "[123] Jianlin Su, Murtadha Ahmed, Yu Lu, Shengfeng Pan, Wen Bo, and Yunfeng Liu. Roformer: Enhanced transformer with rotary position embedding. 
In Neurocomputing, 2024. 17", + "[124] Jingxiang Sun, Bo Zhang, Ruizhi Shao, Lizhen Wang, Wen Liu, Zhenda Xie, and Yebin Liu. DreamCraft3D: Hierarchical 3D generation with bootstrapped diffusion prior. In Proc. ICLR, 2024. 20", + "[125] Keqiang Sun, Dor Litvak, Yunzhi Zhang, Hongsheng Li, Jiajun Wu, and Shangzhe Wu. Ponomy: Learning articulated 3d animal motions from unlabeled online videos. In Proc. ECCV, 2024. 20", + "[126] Qi Sun, Zhiyang Guo, Ziyu Wan, Jing Nathan Yan, Shengming Yin, Wengang Zhou, Jing Liao, and Houqiang Li. Eg4d: Explicit generation of 4d object without score distillation. arXiv preprint arXiv:2405.18132, 2024. 20", + "[127] Wenqiang Sun, Shuo Chen, Fangfu Liu, Zilong Chen, Yueqi Duan, Jun Zhang, and Yikai Wang. Dimensionx: Create any 3d and 4d scenes from a single image with controllable video diffusion. arXiv preprint arXiv:2411.04928, 2024. 3", + "[128] Stanislaw Szymanowicz, Christian Rupprecht, and Andrea Vedaldi. Viewset diffusion: (0-) image-conditioned 3d generative models from 2d data. In Proc. ICCV, 2023. 20", + "[129] Stanislaw Szymanowicz, Eldar Insafutdinov, Chuanxia Zheng, Dylan Campbell, João F Henriques, Christian Rupprecht, and Andrea Vedaldi. Flash3d: Feed-forward generalisable 3d scene reconstruction from a single image. arXiv preprint arXiv:2406.04343, 2024. 20", + "[130] Stanislaw Szymanowicz, Christian Rupprecht, and Andrea Vedaldi. Splatter image: Ultra-fast single-view 3D reconstruction. In Proc. CVPR, 2024. 20", + "[131] Junshu Tang, Tengfei Wang, Bo Zhang, Ting Zhang, Ran Yi, Lizhuang Ma, and Dong Chen. Make-it-3D: High-fidelity 3D creation from a single image with diffusion prior. arXiv preprint arXiv:2303.14184, 2023. 20", + "[132] Jiaxiang Tang, Zhaoxi Chen, Xiaokang Chen, Tengfei Wang, Gang Zeng, and Ziwei Liu. LGM: Large multi-view gaussian model for high-resolution 3d content creation. Proc. ECCV, 2024. 20", + "[133] Zhenggang Tang, Peiye Zhuang, Chaoyang Wang, Aliaksandr Siarohin, Yash Kant, Alexander Schwing, Sergey Tulyakov, and Hsin-Ying Lee. Pixel-aligned multi-view generation with depth guided decoder. arXiv preprint arXiv:2408.14016, 2024. 20", + "[134] Ayush Tewari, Tianwei Yin, George Cazenavette, Semon Rezhikov, Josh Tenenbaum, Frédo Durand, Bill Freeman," + ], + "bbox": [ + 91, + 90, + 485, + 900 + ], + "page_idx": 12 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "and Vincent Sitzmann. Diffusion with forward models: Solving stochastic inverse problems without direct supervision. In Proc. NeurIPS, 2023. 20", + "[135] Dmitry Tochilkin, David Pankratz, Zexiang Liu, Zixuan Huang, Adam Letts, Yangguang Li, Ding Liang, Christian Laforte, Varun Jampani, and Yan-Pei Cao. Triposr: Fast 3D object reconstruction from a single image. arXiv preprint arXiv:2403.02151, 2024. 20", + "[136] Thomas Unterthiner, Sjoerd van Steenkiste, Karol Kurach, Raphael Marinier, Marcin Michalski, and Sylvain Gelly. Towards accurate generative models of video: A new metric & challenges. arXiv preprint arXiv:1812.01717, 2018. 7", + "[137] Lukas Uzolas, Elmar Eisemann, and Petr Kellnhofer. Motiondreamer: Zero-shot 3d mesh animation from video diffusion models. arXiv preprint arXiv:2405.20155, 2024. 20", + "[138] Basile Van Hoorick, Rundi Wu, Ege Ozguroglu, Kyle Sargent, Ruoshi Liu, Pavel Tokmakov, Achal Dave, Changxi Zheng, and Carl Vondrick. Generative camera dolly: Extreme monocular dynamic novel view synthesis. In Proc. ECCV, 2024. 
20", + "[139] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Proc. NeurIPS, 2017. 3", + "[140] Vikram Voleti, Chun-Han Yao, Mark Boss, Adam Letts, David Pankratz, Dmitry Tochilkin, Christian Laforte, Robin Rombach, and Varun Jampani. SV3D: Novel multi-view synthesis and 3D generation from a single image using latent video diffusion. arXiv preprint arXiv:2403.12008, 2024. 20", + "[141] Ziyu Wan, Despoina Paschalidou, Ian Huang, Hongyu Liu, Bokui Shen, Xiaoyu Xiang, Jing Liao, and Leonidas Guibas. CAD: Photorealistic 3D generation via adversarial distillation. In Proc. CVPR, 2024. 20", + "[142] Can Wang, Menglei Chai, Mingming He, Dongdong Chen, and Jing Liao. Clip-NeRF: Text-and-image driven manipulation of neural radiance fields. In Proc. CVPR, 2022. 20", + "[143] Haochen Wang, Xiaodan Du, Jiahao Li, Raymond A Yeh, and Greg Shakhnarovich. Score Jacobian chaining: Lifting pretrained 2d diffusion models for 3D generation. In Proc. CVPR, 2023. 20", + "[144] Jiawei Wang, Yuchen Zhang, Jiaxin Zou, Yan Zeng, Guoqiang Wei, Liping Yuan, and Hang Li. Boximator: Generating rich and controllable motions for video synthesis. arXiv preprint arXiv:2402.01566, 2024. 20", + "[145] Shuzhe Wang, Vincent Leroy, Yohann Cabon, Boris Chidlovskii, and Jerome Revaud. Dust3r: Geometric 3d vision made easy. In Proc. CVPR, 2024. 3, 5, 6", + "[146] Xiang Wang, Hangjie Yuan, Shiwei Zhang, Dayou Chen, Jiuniu Wang, Yingya Zhang, Yujun Shen, Deli Zhao, and Jingren Zhou. Videocomposer: Compositional video synthesis with motion controllability. arXiv preprint arXiv:2306.02018, 2023. 20", + "[147] Yikai Wang, Xinzhou Wang, Zilong Chen, Zhengyi Wang, Fuchun Sun, and Jun Zhu. Vidu4d: Single generated video to high-fidelity 4d reconstruction with dynamic gaussian surfels. arXiv preprint arXiv:2405.16822, 2024. 20" + ], + "bbox": [ + 516, + 92, + 906, + 900 + ], + "page_idx": 12 + }, + { + "type": "page_number", + "text": "22887", + "bbox": [ + 478, + 944, + 517, + 955 + ], + "page_idx": 12 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[148] Zhengyi Wang, Cheng Lu, Yikai Wang, Fan Bao, Chongxuan Li, Hang Su, and Jun Zhu. ProlificDreamer: High-fidelity and diverse text-to-3D generation with variational score distillation. In Proc. NeurIPS, 2023. 20", + "[149] Zhouxia Wang, Ziyang Yuan, Xintao Wang, Tianshui Chen, Menghan Xia, Ping Luo, and Yin Shan. Motionctrl: A unified and flexible motion controller for video generation. In SIGGRAPH, 2024. 2, 3, 4, 6, 7", + "[150] Daniel Watson, Saurabh Saxena, Lala Li, Andrea Tagliasacchi, and David J Fleet. Controlling space and time with diffusion models. arXiv preprint arXiv:2407.07860, 2024. 3, 4, 6, 8", + "[151] Ruiqi Wu, Liangyu Chen, Tong Yang, Chunle Guo, Chongyi Li, and Xiangyu Zhang. Lamp: Learn a motion pattern for few-shot-based video generation. arXiv preprint arXiv:2310.10769, 2023. 2", + "[152] Rundi Wu, Ruiqi Gao, Ben Poole, Alex Trevithick, Changxi Zheng, Jonathan T Barron, and Aleksander Holynski. Cat4d: Create anything in 4d with multi-view video diffusion models. arXiv preprint arXiv:2411.18613, 2024. 3", + "[153] Wejia Wu, Zhuang Li, Yuchao Gu, Rui Zhao, Yefei He, David Junhao Zhang, Mike Zheng Shou, Yan Li, Tingting Gao, and Di Zhang. Draganything: Motion control for anything using entity representation. In Proc. ECCV, 2024. 20", + "[154] Zijie Wu, Chaohui Yu, Yanqin Jiang, Chenjie Cao, Fan Wang, and Xiang Bai. 
Sc4d: Sparse-controlled video-to-4d generation and motion transfer. arXiv preprint arXiv:2404.03736, 2024. 20", + "[155] Zeqi Xiao, Yifan Zhou, Shuai Yang, and Xingang Pan. Video diffusion models are training-free motion interpreter and controller. arXiv preprint arXiv:2405.14864, 2024. 3", + "[156] Kevin Xie, Jonathan Lorraine, Tianshi Cao, Jun Gao, James Lucas, Antonio Torralba, Sanja Fidler, and Xiaohui Zeng. LATTE3D: Large-scale amortized text-to-enhanced-3D synthesis. In Proc. ECCV, 2024. 20", + "[157] Yiming Xie, Chun-Han Yao, Vikram Voleti, Huaizu Jiang, and Varun Jampani. Sv4d: Dynamic 3d content generation with multi-frame and multi-view consistency. arXiv preprint arXiv:2407.17470, 2024. 20", + "[158] Dejia Xu, Yifan Jiang, Chen Huang, Liangchen Song, Thorsten Gernoth, Liangliang Cao, Zhangyang Wang, and Hao Tang. Cavia: Camera-controllable multi-view video diffusion with view-integrated attention. arXiv preprint arXiv:2410.10774, 2024. 3", + "[159] Dejia Xu, Hanwen Liang, Neel P Bhatt, Hezhen Hu, Hanxue Liang, Konstantinos N Plataniotis, and Zhangyang Wang. Comp4d: Llm-guided compositional 4d scene generation. arXiv preprint arXiv:2403.16993, 2024. 2, 20", + "[160] Dejia Xu, Weili Nie, Chao Liu, Sifei Liu, Jan Kautz, Zhangyang Wang, and Arash Vahdat. Camco: Camera-controllable 3d-consistent image-to-video generation. arXiv preprint arXiv:2406.02509, 2024. 3", + "[161] Jun Xu, Tao Mei, Ting Yao, and Yong Rui. Msr-vtt: A large video description dataset for bridging video and language. In Proc. CVPR, 2016. 7, 8" + ], + "bbox": [ + 91, + 90, + 485, + 900 + ], + "page_idx": 13 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[162] Yinghao Xu, Zifan Shi, Wang Yifan, Hansheng Chen, Ceyuan Yang, Sida Peng, Yujun Shen, and Gordon Wetzstein. GRM: Large Gaussian reconstruction model for efficient 3D reconstruction and generation. In Proc. ECCV, 2024. 20", + "[163] Yinghao Xu, Hao Tan, Fujun Luan, Sai Bi, Peng Wang, Jiahao Li, Zifan Shi, Kalyan Sunkavalli, Gordon Wetzstein, Zexiang Xu, et al. DMV3D: Denoising multi-view diffusion using 3D large reconstruction model. In Proc. ICLR, 2024. 20", + "[164] Zhongcong Xu, Jianfeng Zhang, Jun Hao Liew, Wenqing Zhang, Song Bai, Jiashi Feng, and Mike Zheng Shou. Pv3d: A 3d generative model for portrait video generation. In Proc. ICLR, 2023. 2, 20", + "[165] Qitong Yang, Mingtao Feng, Zijie Wu, Shijie Sun, Weisheng Dong, Yaonan Wang, and Ajmal Mian. Beyond skeletons: Integrative latent mapping for coherent 4d sequence generation. arXiv preprint arXiv:2403.13238, 2024. 20", + "[166] Shiyuan Yang, Liang Hou, Haibin Huang, Chongyang Ma, Pengfei Wan, Di Zhang, Xiaodong Chen, and Jing Liao. Direct-a-video: Customized video generation with user-directed camera movement and object motion. arXiv preprint arXiv:2402.03162, 2024. 2, 20", + "[167] Zeyu Yang, Zijie Pan, Chun Gu, and Li Zhang. Diffusion2: Dynamic 3d content generation via score composition of orthogonal diffusion models. arXiv preprint arXiv:2404.02148, 2024. 20", + "[168] Zhuoyi Yang, Jiayan Teng, Wendi Zheng, Ming Ding, Shiyu Huang, Jiazheng Xu, Yuanming Yang, Wenyi Hong, Xiaohan Zhang, Guanyu Feng, et al. Cogvideox: Text-to-video diffusion models with an expert transformer. arXiv preprint arXiv:2408.06072, 2024. 2, 3, 16, 17, 18, 19", + "[169] Zhuoyi Yang, Jiayan Teng, Wendi Zheng, Ming Ding, Shiyu Huang, Jiazheng Xu, Yuanming Yang, Wenyi Hong, Xiaohan Zhang, Guanyu Feng, et al. Cogvideox: Text-to-video diffusion models with an expert transformer. 
arXiv preprint arXiv:2408.06072, 2024. 7, 16, 17", + "[170] Junliang Ye, Fangfu Liu, Qixiu Li, Zhengyi Wang, Yikai Wang, Xinzhou Wang, Yueqi Duan, and Jun Zhu. DreamReward: Text-to-3D generation with human preference. arXiv preprint arXiv:2403.14613, 2024. 20", + "[171] Shengming Yin, Chenfei Wu, Jian Liang, Jie Shi, Houqiang Li, Gong Ming, and Nan Duan. Dragnuwa: Fine-grained control in video generation by integrating text, image, and trajectory. arXiv preprint arXiv:2308.08089, 2023. 20", + "[172] Wei Yin, Chi Zhang, Hao Chen, Zhipeng Cai, Gang Yu, Kaixuan Wang, Xiaozhi Chen, and Chunhua Shen. Metric3d: Towards zero-shot metric 3d prediction from a single image. In Proc. ICCV, 2023. 20", + "[173] Yuyang Yin, Dejia Xu, Zhangyang Wang, Yao Zhao, and Yunchao Wei. 4dgen: Grounded 4d content generation with spatial-temporal consistency. arXiv preprint arXiv:2312.17225, 2023. 2, 20", + "[174] Paul Yoo, Jiaxian Guo, Yutaka Matsuo, and Shixiang Shane Gu. DreamSparse: Escaping from Plato's cave with 2D diffusion model given sparse views. arXiv preprint arXiv:2306.03414, 2023. 20" + ], + "bbox": [ + 516, + 90, + 906, + 898 + ], + "page_idx": 13 + }, + { + "type": "page_number", + "text": "22888", + "bbox": [ + 478, + 945, + 519, + 955 + ], + "page_idx": 13 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[175] Heng Yu, Chaoyang Wang, Peiye Zhuang, Willi Menapace, Aliaksandr Siarohin, Junli Cao, Laszlo A Jeni, Sergey Tulyakov, and Hsin-Ying Lee. 4real: Towards photorealistic 4d scene generation via video diffusion models. In Proc. NeurIPS, 2024. 2, 20", + "[176] Shoubin Yu, Jacob Zhiyuan Fang, Jian Zheng, Gunnar A Sigurdsson, Vicente Ordonez, Robinson Piramuthu, and Mohit Bansal. Zero-shot controllable image-to-video animation via motion decomposition. In ACM MM, 2024. 20", + "[177] Wangbo Yu, Jinbo Xing, Li Yuan, Wenbo Hu, Xiaoyu Li, Zhipeng Huang, Xiangjun Gao, Tien-Tsin Wong, Ying Shan, and Yonghong Tian. Viewcrafter: Taming video diffusion models for high-fidelity novel view synthesis. arXiv preprint arXiv:2409.02048, 2024. 3", + "[178] Xin Yu, Yuan-Chen Guo, Yangguang Li, Ding Liang, Song-Hai Zhang, and Xiaojuan Qi. Text-to-3D with classifier score distillation. arXiv preprint arXiv:2310.19415, 2023. 20", + "[179] Yu-Jie Yuan, Leif Kobbelt, Jiwen Liu, Yuan Zhang, Pengfei Wan, Yu-Kun Lai, and Lin Gao. 4dynamic: Text-to-4d generation with hybrid priors. arXiv preprint arXiv:2407.12684, 2024. 20", + "[180] Raza Yunus, Jan Eric Lenssen, Michael Niemeyer, Yiyi Liao, Christian Rupprecht, Christian Theobalt, Gerard Pons-Moll, Jia-Bin Huang, Vladislav Golyanik, and Eddy Ilg. Recent trends in 3d reconstruction of general non-rigid scenes. In Computer Graphics Forum, 2024. 2", + "[181] Bohan Zeng, Ling Yang, Siyu Li, Jiaming Liu, Zixiang Zhang, Juanxi Tian, Kaixin Zhu, Yongzhen Guo, Fu-Yun Wang, Minkai Xu, et al. Trans4d: Realistic geometry-aware transition for compositional text-to-4d synthesis. arXiv preprint arXiv:2410.07155, 2024. 20", + "[182] Yifei Zeng, Yanqin Jiang, Siyu Zhu, Yuanxun Lu, Youtian Lin, Hao Zhu, Weiming Hu, Xun Cao, and Yao Yao. Stag4d: Spatial-temporal anchored generative 4d gaussians. arXiv preprint arXiv:2403.14939, 2024. 2, 20", + "[183] Biao Zhang and Rico Sennrich. Root mean square layer normalization. In Proc. NeurIPS, 2019. 17", + "[184] Bowen Zhang, Tianyu Yang, Yu Li, Lei Zhang, and Xi Zhao. Compress3D: A compressed latent space for 3D generation from a single image. arXiv preprint arXiv:2403.13524, 2024. 
20", + "[185] David Junhao Zhang, Roni Paiss, Shiran Zada, Nikhil Karnad, David E Jacobs, Yael Pritch, Inbar Mosseri, Mike Zheng Shou, Neal Wadhwa, and Nataniel Ruiz. Recapture: Generative video camera controls for user-provided videos using masked video fine-tuning. arXiv preprint arXiv:2411.05003, 2024. 3", + "[186] Hao Zhang, Di Chang, Fang Li, Mohammad Soleymani, and Narendra Ahuja. Magicpose4d: Crafting articulated models with appearance and motion control. arXiv preprint arXiv:2405.14017, 2024. 20", + "[187] Haiyu Zhang, Xinyuan Chen, Yaohui Wang, Xihui Liu, Yunhong Wang, and Yu Qiao. 4diffusion: Multi-view video diffusion model for 4d generation. arXiv preprint arXiv:2405.20674, 2024. 20" + ], + "bbox": [ + 91, + 90, + 485, + 898 + ], + "page_idx": 14 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[188] Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models. In Proc. ICCV, 2023. 3, 4, 6, 18", + "[189] Tianyuan Zhang, Hong-Xing Yu, Rundi Wu, Brandon Y Feng, Changxi Zheng, Noah Snavely, Jiajun Wu, and William T Freeman. Physdreamer: Physics-based interaction with 3d objects via video generation. In Proc. ECCV, 2024. 20", + "[190] Zhichao Zhang, Hui Chen, Jinsheng Deng, Xiaqing Yin, Xingshen Song, and Ming Xu. Motion4d: A decoupled pipeline for enhanced text-to-4d generation with optimized motion patterns. SSRN, 2024. 20", + "[191] Zhenghao Zhang, Junchao Liao, Menghao Li, Long Qin, and Weizhi Wang. Tora: Trajectory-oriented diffusion transformer for video generation. arXiv preprint arXiv:2407.21705, 2024. 20", + "[192] Wang Zhao, Shaohui Liu, Hengkai Guo, Wenping Wang, and Yong-Jin Liu. *Particlesfm: Exploiting dense point trajectories for localizing moving cameras in the wild*. In *Proc. ECCV*, 2022. 5, 7, 8", + "[193] Yuyang Zhao, Zhiwen Yan, Enze Xie, Lanqing Hong, Zhenguo Li, and Gim Hee Lee.Animate124: Animating one image to 4d dynamic scene. arXiv preprint arXiv:2311.14603, 2023. 2, 20", + "[194] Yuyang Zhao, Chung-Ching Lin, Kevin Lin, Zhiwen Yan, Linjie Li, Zhengyuan Yang, Jianfeng Wang, Gim Hee Lee, and Lijuan Wang. Genxd: Generating any 3d and 4d scenes. arXiv preprint arXiv:2411.02319, 2024. 3", + "[195] Guangcong Zheng, Teng Li, Rui Jiang, Yehao Lu, Tao Wu, and Xi Li. Cami2v: Camera-controlled image-to-video diffusion model. arXiv preprint arXiv:2410.15957, 2024. 3", + "[196] Yufeng Zheng, Xueting Li, Koki Nagano, Sifei Liu, Otmar Hilliges, and Shalini De Mello. A unified approach for text-and image-guided 4d scene generation. In Proc. CVPR, 2024. 2, 20", + "[197] Zangwei Zheng, Xiangyu Peng, Tianji Yang, Chenhui Shen, Shenggui Li, Hongxin Liu, Yukun Zhou, Tianyi Li, and Yang You. Open-sora: Democratizing efficient video production for all, 2024. 3, 17", + "[198] Haitao Zhou, Chuang Wang, Rui Nie, Jinxiao Lin, Dongdong Yu, Qian Yu, and Changhu Wang. Trackgo: A flexible and efficient method for controllable video generation. arXiv preprint arXiv:2408.11475, 2024. 20", + "[199] Tinghui Zhou, Richard Tucker, John Flynn, Graham Fyffe, and Noah Snively. Stereo magnification: Learning view synthesis using multiplane images. In SIGGRAPH, 2018. 2, 4, 5, 6, 7, 8, 16, 18, 19, 20", + "[200] Hanxin Zhu, Tianyu He, Anni Tang, Junliang Guo, Zhibo Chen, and Jiang Bian. Compositional 3d-aware video generation with llm director. arXiv preprint arXiv:2409.00558, 2024. 
20" + ], + "bbox": [ + 516, + 90, + 906, + 811 + ], + "page_idx": 14 + }, + { + "type": "page_number", + "text": "22889", + "bbox": [ + 478, + 944, + 519, + 955 + ], + "page_idx": 14 + } +] \ No newline at end of file diff --git a/2025/AC3D_ Analyzing and Improving 3D Camera Control in Video Diffusion Transformers/61cdf9ca-5840-4279-a8c2-d8f7abcf95b7_model.json b/2025/AC3D_ Analyzing and Improving 3D Camera Control in Video Diffusion Transformers/61cdf9ca-5840-4279-a8c2-d8f7abcf95b7_model.json new file mode 100644 index 0000000000000000000000000000000000000000..6bf47acb84aca7adaa85db64a08f1ef972329d06 --- /dev/null +++ b/2025/AC3D_ Analyzing and Improving 3D Camera Control in Video Diffusion Transformers/61cdf9ca-5840-4279-a8c2-d8f7abcf95b7_model.json @@ -0,0 +1,3728 @@ +[ + [ + { + "type": "header", + "bbox": [ + 0.107, + 0.003, + 0.182, + 0.043 + ], + "angle": 0, + "content": "CVF" + }, + { + "type": "header", + "bbox": [ + 0.237, + 0.001, + 0.812, + 0.047 + ], + "angle": 0, + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." + }, + { + "type": "title", + "bbox": [ + 0.229, + 0.131, + 0.773, + 0.175 + ], + "angle": 0, + "content": "AC3D: Analyzing and Improving 3D Camera Control in Video Diffusion Transformers" + }, + { + "type": "text", + "bbox": [ + 0.159, + 0.203, + 0.84, + 0.24 + ], + "angle": 0, + "content": "Sherwin Bahmani\\(^{1,2,3}\\) Ivan Skorokhodov\\(^{3}\\) Guocheng Qian\\(^{3}\\) Aliaksandr Siarohin\\(^{3}\\) Willi Menapace\\(^{3}\\) Andrea Tagliasacchi\\(^{1,4}\\) David B. Lindell\\(^{1,2}\\) Sergey Tulyakov\\(^{3}\\)" + }, + { + "type": "text", + "bbox": [ + 0.322, + 0.241, + 0.677, + 0.257 + ], + "angle": 0, + "content": "1University of Toronto 2Vector Institute 3Snap Inc. 4SFU" + }, + { + "type": "text", + "bbox": [ + 0.438, + 0.26, + 0.558, + 0.274 + ], + "angle": 0, + "content": "*equal contribution" + }, + { + "type": "text", + "bbox": [ + 0.32, + 0.283, + 0.678, + 0.298 + ], + "angle": 0, + "content": "https://snap-research.github.io/ac3d" + }, + { + "type": "image", + "bbox": [ + 0.094, + 0.308, + 0.907, + 0.488 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.095, + 0.499, + 0.25, + 0.596 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.258, + 0.499, + 0.736, + 0.597 + ], + "angle": 0, + "content": "Figure 1. Camera-controlled video generation. Our method enables precise camera controllability in pre-trained video diffusion transformers, allowing joint conditioning of text and camera sequences. We synthesize the same scene with two different camera trajectories as input. The inset images visualize the cameras for the videos in the corresponding columns. The left camera sequence consists of a rotation to the right, while the right camera visualizes a zoom-out and up trajectory." + }, + { + "type": "image", + "bbox": [ + 0.74, + 0.494, + 0.902, + 0.597 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.249, + 0.621, + 0.327, + 0.637 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.66, + 0.486, + 0.902 + ], + "angle": 0, + "content": "Numerous works have recently integrated 3D camera control into foundational text-to-video models, but the resulting camera control is often imprecise, and video generation quality suffers. 
In this work, we analyze camera motion from a first principles perspective, uncovering insights that enable precise 3D camera manipulation without compromising synthesis quality. First, we determine that motion induced by camera movements in videos is low-frequency in nature. This motivates us to adjust train and test pose conditioning schedules, accelerating training convergence while improving visual and motion quality. Then, by probing the representations of an unconditional video diffusion transformer, we observe that they implicitly perform camera pose estimation under the hood, and only a sub-portion of their layers contains the camera information. This suggests limiting the injection of camera conditioning to a subset of the" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.624, + 0.909, + 0.79 + ], + "angle": 0, + "content": "architecture to prevent interference with other video features, leading to a \\(4 \\times\\) reduction of training parameters, improved training speed, and \\(10\\%\\) higher visual quality. Finally, we complement the typical dataset for camera control learning with a curated dataset of 20K diverse, dynamic videos with stationary cameras. This helps the model distinguish between camera and scene motion and improves the dynamics of generated pose-conditioned videos. We compound these findings to design the Advanced 3D Camera Control (AC3D) architecture, the new state-of-the-art model for generative video modeling with camera control." + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.827, + 0.646, + 0.843 + ], + "angle": 0, + "content": "1. Introduction" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.856, + 0.908, + 0.903 + ], + "angle": 0, + "content": "Foundational video diffusion models (VDMs) trained on internet-scale data acquire abundant knowledge about the physical world [10]. They not only learn appearance and" + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.52, + 0.958 + ], + "angle": 0, + "content": "22875" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.092, + 0.486, + 0.364 + ], + "angle": 0, + "content": "plausible 2D dynamics, but they also have abundant understanding of 3D structure [7]. However, most of this knowledge is stored implicitly within the model, as these models do not expose fine-grained control mechanisms, such as camera motion control. We recently witnessed a surge of works that bring 3D camera control into foundational video models [39, 149, 166], but the control they provide is not very precise, and the synthesis quality is often compromised [6]. We analyze camera motion control in video diffusion models from first principles, and develop several findings that allow us to incorporate precise 3D camera conditioning without degrading synthesis quality. To perform our analysis, we train an 11.5B-parameter VDiT (video latent diffusion transformer) [100] on a dataset of 100M text/video pairs. On this model, we perform three key studies. With what we learn, we adapt the camera control solution from VD3D [6] from a pixel-based to a latent-based diffusion model, and significantly improve its performance." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.376, + 0.486, + 0.633 + ], + "angle": 0, + "content": "1) The spectral properties of camera motion. To study the statistical nature of motion control, we analyze motion spectral volumes (MSV) [75] of the videos generated by a large-scale video DiT model. 
MSVs show the amount of energy in different portions of the frequency spectra (i.e., high energy in the low frequencies indicates smooth motion), and we measure them across 200 generated videos of different types (camera motion, scene motion, scene plus camera motion) and at various stages of the denoising synthesis process. We observe that camera motion mostly affects the lower portion of the spectrum and kicks in very early (\\(\\approx 10\\%\\)) in the denoising trajectory. Then, as diffusion models are inherently coarse-to-fine in nature [25], we restrict our camera conditioning to only being injected on the subset of the denoising steps corresponding to low frequencies. This results in \\(\\approx 15\\%\\) higher visual fidelity, \\(\\approx 30\\%\\) better camera following, and mitigates scene motion degradation." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.645, + 0.486, + 0.902 + ], + "angle": 0, + "content": "2) Camera motion knowledge in VDiTs. Then, we consider our text-only VDiT, and determine whether such a model possesses knowledge about cameras, and where this knowledge is expressed within its architecture. With this objective, we feed the (unseen during training) RealEstate10k [199] videos to our VDiT, and perform linear probing [27] to determine if camera poses can be recovered from its internal representation. Our analysis revealed that a video DiT implicitly performs camera pose estimation under the hood, and the presence of camera knowledge in a disentangled form peaks in its middle layers. This implies that the camera signal emerges in its early blocks to allow the later ones to rely on it to build subsequent visual representations. Therefore, we adjust our conditioning scheme to only affect the first \\(30\\%\\) of the architecture, leading to a \\(\\approx 4\\times\\) reduction in training parameters, \\(15\\%\\) training and inference acceleration, and \\(10\\%\\) improved visual quality." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.092, + 0.909, + 0.288 + ], + "angle": 0, + "content": "3) Re-balancing the training distribution. Finally, to supervise camera control architectures, the typical solution is to rely on the camera pose annotations provided by RealEstate10k [199]. However, this dataset contains mostly static scenes, which results in significant motion degradation of the fine-tuned video model. To overcome this problem, we curate a subset of 20K diverse videos with dynamic scenes but static cameras. As the camera conditioning branch is still activated for these videos, this helps the model disambiguate the camera from scene movement. Our experiments show that this simple adjustment in the data is sufficient to recover the scene dynamism while still enabling an effective pose-conditioned video model." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.292, + 0.911, + 0.429 + ], + "angle": 0, + "content": "Contributions. We compound the knowledge gained from these three studies into the design of the Advanced 3D Camera Control (AC3D) method. We perform extensive ablation studies and compare against state-of-the-art models for camera control, including MotionCtrl [149], CameraCtrl [39], and VD3D [6]. We demonstrate \\(18\\%\\) higher video fidelity and \\(25\\%\\) more precise camera steering in terms of quantitative metrics than the closest competitor, and our generated videos are favored over others in \\(90\\%\\) of cases." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.444, + 0.652, + 0.459 + ], + "angle": 0, + "content": "2. 
Related work" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.47, + 0.909, + 0.53 + ], + "angle": 0, + "content": "Our approach lies at the intersection of text-to-video, text-to-3D, and text-to-4D generation approaches. We refer to recent state-of-the-reports [101, 180] for a more thorough analysis of previous work." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.535, + 0.911, + 0.716 + ], + "angle": 0, + "content": "Text-to-video generation. Our approach builds on recent advancements in 2D video diffusion models. One prominent technique in this area enhances text-to-image models by adding temporal layers to support video generation [7, 8, 37, 119, 151]. While these methods use the U-Net architecture, more recent ones [10, 92, 93, 168] have been adapting transformer-based architectures for more scalable, realistic, and highly dynamic scene generation. We are interested in controlling the camera movements during the generation process of recent transformer-based video models based on precise camera extrinsics, i.e., cameras represented as rotation and translation sequences for each frame." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.72, + 0.91, + 0.901 + ], + "angle": 0, + "content": "4D generation. Early 4D generation works [2, 164] used 4D GANs to learn category-specific generators with an underlying dynamic 3D representation. More recent approaches [4, 5, 81, 120, 196] have tackled 4D generation by distilling motion priors from pre-trained video diffusion models into an explicit 4D representation, enabling category-agnostic 4D generation. Follow-up works investigate image or video conditioned 4D generation [33, 76, 81, 98, 109, 173, 182, 193, 196] instead of pure text inputs, improving flexibility in the generation process. While most of these works are object-centric, recent approaches [4, 159] shifted towards more complex scenes, including methods [23, 175] which" + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.946, + 0.521, + 0.957 + ], + "angle": 0, + "content": "22876" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.095, + 0.089, + 0.907, + 0.26 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.089, + 0.268, + 0.908, + 0.311 + ], + "angle": 0, + "content": "Figure 2. VDiT-CC model with ControlNet [71, 188] camera conditioning built on top of VDiT. Video synthesis is performed by large 4,096-dimensional DiT-XL blocks of the frozen VDiT backbone, while VDiT-CC only processes and injects the camera information through lightweight 128-dimensional DiT-XS blocks (FC stands for fully-connected layers); see Section 3.2 for details." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.337, + 0.486, + 0.488 + ], + "angle": 0, + "content": "model the background. However, all these methods are optimization-based, i.e., each scene is generated independently from scratch. Recently, L4GM [110] proposes a feed-forward 4D generator trained on object-centric synthetic 4D data. While these approaches are explicit and provide space-time control, they are limited in their photorealism compared to recent 2D video diffusion models. We investigate dynamic 3D scene generation from a different perspective by extending pre-trained video diffusion models with 3D camera control." + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.499, + 0.485, + 0.831 + ], + "angle": 0, + "content": "Camera control for video models. Recently, there has been significant progress in adding camera control to video diffusion models. 
As the pioneering work, MotionCtrl [149] learns camera control by conditioning pre-trained video models [7, 17] with extrinsic matrices. Follow-up works [39, 65, 160] further improve the conditioning mechanisms by representing cameras as Plücker coordinates. Another line of work [49, 50, 82, 155] controls camera motion without training additional parameters. However, all of these approaches use U-Net-based architectures as their backbone. More recently, 4DiM [150] trains a space-time diffusion model from scratch for novel view synthesis from a single image input. Closely related to our work, VD3D [6] incorporates camera control into a pre-trained video diffusion transformer. While the motion and camera control improves over U-Net-based approaches, the synthesized motion in the scenes and the visual quality are still degraded compared to the base video model. In contrast to VD3D, we first thoroughly investigate the pre-trained base video model and its knowledge of camera motion. We derive an improved training and architecture design for high-quality and dynamic video generation based on our findings." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.84, + 0.486, + 0.903 + ], + "angle": 0, + "content": "Concurrent works. Concurrent approaches [68, 158, 177, 185, 194, 195] further improve camera control in U-Net-based architectures, while another work [22] tackles video diffusion transformer. However, the scene and visual quality" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.337, + 0.907, + 0.458 + ], + "angle": 0, + "content": "is still limited in that approach. DimensionX [127] controls space and time in video diffusion transformers but the camera trajectories are pre-defined and not continuous. Chun-Hao et al. [51] explore pose estimation with a video DiT by pairing it with DUSt3R [145] and fine-tuning, while we perform linear probing without any training to assess its existing camera knowledge. CAT4D [152] proposes a multi-view video diffusion model fine-tuned from a multi-view model." + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.475, + 0.606, + 0.491 + ], + "angle": 0, + "content": "3. Method" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.502, + 0.909, + 0.593 + ], + "angle": 0, + "content": "We first describe our base video diffusion model (Sec. 3.1), and the baseline camera control method built on top of it (Sec. 3.2). Then, we proceed with the analysis of motion (Sec. 3.3), linear probing (Sec. 3.4) and dataset biases (Sec. 3.5), and additional insights on how to build an effective model for camera control (Sec. 3.6)" + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.606, + 0.697, + 0.622 + ], + "angle": 0, + "content": "3.1. Base model (VDiT)" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.629, + 0.909, + 0.901 + ], + "angle": 0, + "content": "Following Sora [10], most modern foundational text-to-video generators use the diffusion framework [45, 122] to train a large-scale transformer [139] in the latent space of a variational autoencoder [64, 111]. We adopt the same design and, for a base video model, pre-train an 11.5B-parameter Video DiT model [100] with 32 blocks of hidden dimension 4,096 for text-to-video generation. We use the rectified flow diffusion parametrization [85] and learn in the latent space of CogVideoX [168] (using an autoencoder with a 16-channel output and compression factors of \\(4 \\times 8 \\times 8\\) in the temporal and spatial dimensions). 
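For concreteness, the rectified-flow parametrization [85] mentioned above amounts to the following minimal sketch; the `model` stub, the tensor shapes, and the plain logit-normal sampler are our illustrative assumptions, not the actual VDiT implementation.

```python
# Minimal rectified-flow training step (illustrative sketch, not VDiT code).
# We interpolate between clean latents x0 and Gaussian noise, and regress the
# constant velocity of that straight path, following rectified flow [85].
import torch

def rectified_flow_loss(model, x0: torch.Tensor) -> torch.Tensor:
    b = x0.shape[0]
    # Logit-normal noise-level sampling (location 0, scale 1), as in SD3 [28]:
    # if z ~ N(0, 1), then sigmoid(z) is logit-normally distributed on (0, 1).
    t = torch.sigmoid(torch.randn(b))
    t_ = t.view(b, *([1] * (x0.dim() - 1)))        # broadcastable shape
    eps = torch.randn_like(x0)
    x_t = (1.0 - t_) * x0 + t_ * eps               # noisy latents at level t
    v_target = eps - x0                            # d(x_t)/dt of the path
    return ((model(x_t, t) - v_target) ** 2).mean()

# Toy usage with a stand-in model and CogVideoX-like 16-channel latents:
model = lambda x, t: torch.zeros_like(x)
loss = rectified_flow_loss(model, torch.randn(2, 16, 4, 8, 8))
```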
The T5 [108] encoder produces text embeddings, which are passed into VDiT via cross-attention. We train our base model on a large-scale dataset of images and videos with text annotations, with resolutions ranging from \\(17 \\times 144 \\times 256\\) to \\(121 \\times 576 \\times 1024\\). This design is fairly standard and followed by many existing works with little deviation [32, 102, 168, 197]; we describe our specific architectural and training setup in detail in Appendix D." + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.521, + 0.958 + ], + "angle": 0, + "content": "22877" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.091, + 0.091, + 0.429, + 0.108 + ], + "angle": 0, + "content": "3.2. VDiT with Camera Control (VDiT-CC)" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.126, + 0.485, + 0.264 + ], + "angle": 0, + "content": "To construct a baseline architecture for camera control, we implement ControlNet [18, 188] conditioning on top of the VDiT. Similar to previous work [6, 39, 149], we use the RealEstate10k [199] dataset, consisting of \\(65\\mathrm{k}\\) (text, video, camera trajectory) triplets \\((\\pmb{t}_n,\\pmb{x}_n,\\pmb{c}_n)_{n = 1}^N\\) and train a new set of model parameters to input the camera information into the model. Camera trajectories \\(\\pmb {c}\\in \\mathbb{R}^{f\\times 25}\\) are provided in the form of camera extrinsics \\(\\pmb {C_f}\\in \\mathbb{R}^{4\\times 4}\\) and intrinsics \\(\\pmb {K}_f\\in \\mathbb{R}^{3\\times 3}\\) for each \\(f\\)-th frame \\(\\pmb{x}_f\\)." + }, + { + "type": "text", + "bbox": [ + 0.093, + 0.279, + 0.485, + 0.566 + ], + "angle": 0, + "content": "Camera conditioning. For base camera control, we adapt VD3D [6] since it was designed for transformer-based models and suits our setup the most, while other methods are built on top of UNet-based [112] backbones. We use Plücker camera representations [6, 16, 39, 59, 121], which are projected to the same dimensionality and resolution as the video tokens via a fully-convolutional encoder to produce camera tokens. These camera tokens are processed by a sequence of lightweight DiT-XS blocks with hidden dimension 128 and four attention heads each. To mix the camera information with the video tokens of VDiT, we use summation before each main DiT block. We also found it useful to perform cross-attention from video tokens to camera tokens as a form of a feedback connection [71]. We illustrate this model architecture, which we call VDiT-CC, in Figure 2; see implementation details in Appendix D. VDiT-CC describes the camera-controlled video model architecture used by AC3D, while AC3D describes our proposed work including analysis and additional adjustments based on the analysis." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.582, + 0.484, + 0.764 + ], + "angle": 0, + "content": "Training. Keeping the VDiT backbone frozen, we train the new parameters with a rectified flow objective [85] and standard (location of 0 and scale of 1) logit-normal noise distribution [28]. Similar to prior works [6, 150], we apply a \\(10\\%\\) camera dropout to support classifier-free guidance (CFG) [44] later. Notably, we train VDiT-CC only at the \\(256^2\\) resolution: since camera motion is a low-frequency type of signal (which can be observed at lower resolutions) and the main VDiT backbone is frozen, we found that our design generalizes to higher resolutions out-of-the-box. During inference, we input text prompts and camera embeddings with classifier-free guidance at each time step." 
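To make the Plücker camera representation of Sec. 3.2 concrete, here is a minimal numpy sketch of a per-pixel Plücker ray map built from intrinsics and a camera-to-world matrix; the function name, conventions, and shapes are our illustrative assumptions, and the actual VDiT-CC encoder may differ.

```python
# Per-pixel Plücker coordinates (o x d, d) for one frame -- a sketch of the
# camera representation in Sec. 3.2, not the exact VDiT-CC implementation.
import numpy as np

def plucker_map(K: np.ndarray, c2w: np.ndarray, h: int, w: int) -> np.ndarray:
    u, v = np.meshgrid(np.arange(w) + 0.5, np.arange(h) + 0.5)   # pixel centers
    pix = np.stack([u, v, np.ones_like(u)], axis=-1)             # (h, w, 3)
    dirs = (pix @ np.linalg.inv(K).T) @ c2w[:3, :3].T            # world-space rays
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)         # unit direction d
    origin = np.broadcast_to(c2w[:3, 3], dirs.shape)             # camera center o
    moment = np.cross(origin, dirs)                              # o x d
    return np.concatenate([moment, dirs], axis=-1).transpose(2, 0, 1)  # (6, h, w)

# Toy usage: a simple pinhole camera at the world origin.
K = np.array([[100.0, 0.0, 32.0], [0.0, 100.0, 32.0], [0.0, 0.0, 1.0]])
emb = plucker_map(K, np.eye(4), 64, 64)   # one 6-channel map per frame
```

Stacking one such map per frame would give the 6-channel-per-frame input that the fully-convolutional encoder described above compresses into camera tokens.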
+ }, + { + "type": "text", + "bbox": [ + 0.09, + 0.78, + 0.484, + 0.901 + ], + "angle": 0, + "content": "Model behavior. This baseline model, being built on top of our powerful VDiT, already achieves decent-quality camera control. However, it struggles with degraded visual quality and reduced scene motion, and sometimes, the camera control inputs are ignored. To improve the design, we analyze our VDiT backbone to understand how camera motion is modeled and represented. Then, we inspect VDiT-CC's failure cases and where they arise to address them." + }, + { + "type": "image", + "bbox": [ + 0.558, + 0.092, + 0.865, + 0.238 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.513, + 0.251, + 0.908, + 0.378 + ], + "angle": 0, + "content": "Figure 3. Average magnitude of motion spectral volumes along spatial, temporal offset, and video batch dimensions for scenes with different motion types. We compute the flow of each video in a sliding window manner with temporal offsets and average the frequencies across all offsets. Videos with camera motion (purple) exhibit stronger overall motion than the videos with scene motion (orange), especially for the low-frequency range, suggesting that the motion induced by camera transitions is heavily biased towards low-frequency components. Frequency refers to the temporal frequency." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.404, + 0.896, + 0.421 + ], + "angle": 0, + "content": "3.3. How is camera motion modeled by diffusion?" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.427, + 0.908, + 0.639 + ], + "angle": 0, + "content": "We start by analyzing how camera motion is modeled by a pre-trained video diffusion model (i.e., before camera control is incorporated). We hypothesize that the motion induced by changes in camera pose is a low-frequency type of signal and investigate the motion spectral volumes [75] of the generated videos at different steps of the denoising process. To perform this analysis, we generate 200 diverse videos with our VDiT model with 80 denoising steps and manually annotate them into four categories: videos with only scene motion, videos with only camera motion, videos with both scene and camera motion, and others; see Appendix E for details. During generation, we save the denoised predictions at each denoising step and estimate optical flow to compute the motion spectral volumes." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.643, + 0.909, + 0.824 + ], + "angle": 0, + "content": "Analysis. We visualize motion spectral volumes with \\(95\\%\\) confidence intervals in Figure 3. Videos with camera motion exhibit higher amplitudes than scene-motion-only videos for low-frequency components while having similar characteristics for high-frequency ones. This supports the conjecture that the camera motion is a low-frequency type of signal. We also depict an example of a generated video with both scene and camera motion with four denoising steps on Fig. 4a: one can observe that the camera movement has been fully produced by \\(t = 0.9\\) (first \\(10\\%\\) of the rectified flow denoising process). In contrast, scene motion details like the hand movements of the subjects are not finalized even till \\(t = 0.5\\)." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.826, + 0.909, + 0.901 + ], + "angle": 0, + "content": "Inspired by this finding, we pose the question: when exactly does a video diffusion model determine the camera pose? 
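The spectral measurement just described can be sketched as follows; this is a minimal illustration under our own assumptions, with the flow tensor standing in for an off-the-shelf optical-flow estimator applied to the (partially) denoised videos.

```python
# Motion spectral volume of a video (sketch of the measurement in Sec. 3.3):
# Fourier-transform per-pixel optical flow along time and average magnitudes
# per temporal frequency, so camera-like slow drift concentrates its energy
# in the low-frequency bins.
import numpy as np

def motion_spectral_volume(flows: np.ndarray) -> np.ndarray:
    # flows: (T, H, W, 2) frame-to-frame optical flow of one video.
    spec = np.fft.rfft(flows, axis=0)          # FFT over the time axis
    return np.abs(spec).mean(axis=(1, 2, 3))   # mean amplitude per frequency

# Toy check: slow global drift (camera-like) vs. fast localized jitter.
t = np.arange(48)
slow = np.sin(2 * np.pi * t / 48)[:, None, None, None] * np.ones((48, 8, 8, 2))
fast = np.sin(2 * np.pi * t / 4)[:, None, None, None] * np.ones((48, 8, 8, 2))
assert motion_spectral_volume(slow)[1] > motion_spectral_volume(fast)[1]
```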
To answer this question, we plot aggregated spectral volumes for different timesteps in Figure 4b. We also show the ratio with respect to the last timestep \\(t = 0\\) (i.e.," + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.946, + 0.519, + 0.957 + ], + "angle": 0, + "content": "22878" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.092, + 0.09, + 0.357, + 0.235 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.093, + 0.236, + 0.356, + 0.284 + ], + "angle": 0, + "content": "(a) A generated video at different diffusion timesteps. The camera has already been decided by the model even at \\(t = 0.9\\) (first \\(10\\%\\) of the denoising process) and does not change after that." + }, + { + "type": "image", + "bbox": [ + 0.37, + 0.093, + 0.61, + 0.25 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.612, + 0.092, + 0.902, + 0.25 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.365, + 0.254, + 0.907, + 0.279 + ], + "angle": 0, + "content": "(b) Motion spectral volumes of VDiT's generated videos for different diffusion timesteps (left) and their ratio w.r.t. the motion spectral volume at \\(t = 0\\) (i.e., a fully denoised video)." + }, + { + "type": "image_caption", + "bbox": [ + 0.089, + 0.296, + 0.907, + 0.366 + ], + "angle": 0, + "content": "Figure 4. How is camera motion modeled by diffusion? As visualized in Figure 4a and Figure 3, the motion induced by camera transitions is a low-frequency type of motion. We observe that a video DiT creates low-frequency motion very early in the denoising trajectory: Figure 4b (left) shows that even at \\( t = 0.96 \\) (first \\( \\approx 4\\% \\) of the steps), the low-frequency motion components have already been created, while high-frequency ones are still not fully formed even at \\( t = 0.5 \\). We found that controlling the camera pose later in the denoising trajectory is not only unnecessary but detrimental to both scene motion and overall visual quality." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.393, + 0.483, + 0.482 + ], + "angle": 0, + "content": "when all motion has been generated). We then inspect when different types of motion appear during the denoising process. Figure 4b (right) shows that the low-frequency motion components fill up to \\(\\approx 84\\%\\) at \\(t = 0.9\\) (the first \\(10\\%\\) of the denoising process), while high-frequency components are not well-modeled until \\(t = 0.6\\)." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.485, + 0.484, + 0.787 + ], + "angle": 0, + "content": "An immediate consequence of this observation is that trying to control the camera later in the denoising trajectory is simply unnecessary and will not influence the manipulation result. In this way, instead of using the standard logit-normal noise level distribution of SD3 [28] with a location of 0.0 and scale of 1.0 (which we use by default for VDiT), we switch to using a truncated normal with a location of 0.8 and a scale of 0.075 on the [0.6, 1] interval to cover the early steps of the denoising rectified flow trajectory. At inference time, we apply camera conditioning on the same [0.6, 1] interval. Surprisingly, we observe that not using truncation is detrimental to the scene motion and overall visual quality. Following this insight, we restrict both our train-time noise levels and test-time camera conditioning schedules to cover only the first \\(40\\%\\) of the reverse diffusion trajectory. As Sec. 
4.3 shows, this improves FID and FVD by \\(14\\%\\) on average, and camera following by \\(30\\%\\) on MSR-VTT (the dataset used to measure generalization to diverse, out-of-fine-tuning-distribution scenes). Further, truncated noise sampling enhances the overall scene motion." + }, + { + "type": "title", + "bbox": [ + 0.09, + 0.802, + 0.458, + 0.818 + ], + "angle": 0, + "content": "3.4. What does VDiT know about camera pose?" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.825, + 0.484, + 0.901 + ], + "angle": 0, + "content": "Foundational video models acquire rich knowledge about the physical world, and we hypothesize that they store information about the camera pose within their representations. To investigate this, we perform linear probing of our base VDiT model on the RealEstate10k [199] dataset (not seen" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.393, + 0.766, + 0.407 + ], + "angle": 0, + "content": "during training) for camera extrinsics." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.41, + 0.909, + 0.787 + ], + "angle": 0, + "content": "Specifically, we take 1,000 random 49-frame videos from RealEstate10K, feed them into VDiT under 8 noise levels \\((1/8, 2/8, \\dots, 1)\\), and extract the activations for all 32 DiT blocks. Next, we split the random videos into 900 train and 100 test videos and train a linear ridge regression model to predict the rotation pitch/yaw/roll angles and translation vectors for the entire viewpoint trajectory (\\(49 \\times 6\\) target values in total). This results in \\(8 \\times 32\\) trained models, and we report the rotation and (normalized) translation errors [39] on a held-out test set of 100 videos in Figure 5. Surprisingly, VDiT can accurately predict the camera pose, achieving minimum test errors of \\(\\approx 0.025\\) for rotation and \\(\\approx 0.48\\) for translation prediction. The knowledge quality increases around layer #9 and peaks in the range of #13-21. We reason that since the camera information in block #13 is stored in such a disentangled manner, the model must be using it to build other representations; hence, conditioning the camera in this block is risky and unnecessary and would interfere with other visual features, as shown in our ablations. In this way, we propose to input the camera conditioning only in the first 8 blocks and leave the remaining 24 DiT blocks unconditioned. We find in Section 4.3 that this not only reduces the number of trainable parameters by \\(\\approx 4\\) times and improves training speed by \\(\\approx 15\\%\\), but also enhances the visual quality by \\(\\approx 10\\%\\)." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.802, + 0.822, + 0.817 + ], + "angle": 0, + "content": "3.5. Mitigating training data limitations" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.825, + 0.908, + 0.901 + ], + "angle": 0, + "content": "Estimating camera parameters from in-the-wild videos remains challenging, as leading methods like [114, 115, 145, 192] frequently fail when processing videos containing dynamic scene content. 
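For reference, the linear-probing protocol of Sec. 3.4 above amounts to the following sketch; the random features stand in for pooled VDiT block activations at one (block, noise level) pair, and the names and ridge strength are our assumptions.

```python
# Linear probing of camera pose from frozen features (sketch of Sec. 3.4).
# One ridge regressor is fit per (DiT block, noise level) pair; here a random
# stub replaces the actual VDiT activations to keep the example self-contained.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_videos, feat_dim, n_frames = 1000, 512, 49
feats = rng.normal(size=(n_videos, feat_dim))       # pooled block activations
poses = rng.normal(size=(n_videos, n_frames * 6))   # pitch/yaw/roll + translation

probe = Ridge(alpha=1.0).fit(feats[:900], poses[:900])  # 900 train videos
pred = probe.predict(feats[900:])                        # 100 held-out videos
print("mean absolute pose error:", np.abs(pred - poses[900:]).mean())
```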
This limitation results in camera-annotated datasets being heavily biased" + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "22879" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.115, + 0.09, + 0.455, + 0.24 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.456, + 0.091, + 0.885, + 0.24 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.089, + 0.253, + 0.908, + 0.311 + ], + "angle": 0, + "content": "Figure 5. Video DiT is secretly a camera pose estimator. We perform linear probing of camera poses in each of VDiT's blocks for various noise levels and observe that video DiT performs pose estimation under the hood. Its middle blocks carry the most accurate information about the camera locations and orientations, which indicates that the camera signal emerges in the early layers to help the middle and late blocks render other visual features aligned with the viewpoint." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.336, + 0.484, + 0.501 + ], + "angle": 0, + "content": "toward static scenes, which is particularly evident in RealEstate10K (RE10K) [199], the predominant dataset for training camera-controlled video models [6, 39, 149]. We hypothesize that models fine-tuned on such data interpret camera position information as a signal to suppress scene dynamics. This bias persists even when jointly training on unconstrained 2D video data [150], because the camera conditioning branch is only activated when camera parameters are available, which occurs exclusively for static scenes from RE10K, as static scenes remain the only reliable source for accurate camera annotation." + }, + { + "type": "text", + "bbox": [ + 0.094, + 0.503, + 0.486, + 0.805 + ], + "angle": 0, + "content": "To address this fundamental limitation, we propose an alternative approach: rather than attempting to annotate dynamic scenes, which proved unsuccessful in our extensive preliminary research, even with state-of-the-art methods [145], we curate a collection of 20K diverse videos featuring dynamic scenes captured by stationary cameras (see Figure 6). With stationary cameras, the camera position is inherently known (we can assign fixed, arbitrary extrinsics), allowing us to maintain active camera conditioning during training. This approach enables the camera conditioning branch to remain active during training while exposing the model to dynamic content, helping it distinguish between viewpoint conditioning and scene stillness. On top of this secondary dataset, following [150], we remove the scale ambiguity in RE10K by leveraging an off-the-shelf metric depth estimator; see Appendix H. Our experiments in Sec. 4.3 demonstrate that this straightforward yet effective data curation strategy successfully mitigates the distributional limitations of RE10K, restoring much of the lost scene dynamics, while maintaining precise camera control." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.815, + 0.35, + 0.831 + ], + "angle": 0, + "content": "3.6. Miscellaneous improvements" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.837, + 0.486, + 0.868 + ], + "angle": 0, + "content": "In addition to our core analysis, we introduce several auxiliary techniques that enhance model performance." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.871, + 0.485, + 0.902 + ], + "angle": 0, + "content": "Separate text and camera guidance. 
Text and camera signals require different guidance weights due to their dis" + }, + { + "type": "image", + "bbox": [ + 0.52, + 0.334, + 0.902, + 0.51 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.512, + 0.523, + 0.909, + 0.636 + ], + "angle": 0, + "content": "Figure 6. RealEstate10k [199] videos (upper 2 rows) contain diverse camera trajectories, but are strongly biased towards static scenes. To mitigate this bias and also increase concept diversity, we curate 20K videos with stationary cameras, but dynamic content (lower 2 rows). Such datasets are easy to construct, and surprisingly effective. Section 4.3 shows that integrating the dataset into our training improves visual quality on out-of-distribution prompts by \\(17\\%\\)." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.66, + 0.908, + 0.691 + ], + "angle": 0, + "content": "tinct nature, motivating us to separate their classifier-free guidance (CFG) [9, 44]. We formulate the equation as:" + }, + { + "type": "equation", + "bbox": [ + 0.585, + 0.701, + 0.907, + 0.737 + ], + "angle": 0, + "content": "\\[\n\\hat{s}(\\boldsymbol{x} \\mid \\boldsymbol{t}, \\boldsymbol{c}) = (1 + w_{y} + w_{c})\\, s_{\\theta}(\\boldsymbol{x} \\mid \\boldsymbol{t}, \\boldsymbol{c}) - w_{y}\\, s_{\\theta}(\\boldsymbol{x} \\mid \\boldsymbol{c}) - w_{c}\\, s_{\\theta}(\\boldsymbol{x} \\mid \\boldsymbol{t}), \\tag{1}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.746, + 0.908, + 0.822 + ], + "angle": 0, + "content": "where \\(\\hat{s}(.)\\) denotes the final update direction used during synthesis, \\(s_{\\theta}\\) represents the model's predicted update direction, \\(\\pmb{t}\\) and \\(\\pmb{c}\\) are text and camera conditions, and \\(w_{y}\\) and \\(w_{c}\\) are their respective CFG weights. We zero out the corresponding conditioning tensor for unconditional generation." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.825, + 0.909, + 0.902 + ], + "angle": 0, + "content": "ControlNet with feedback. Traditional ControlNet [188] conditioning, used in recent camera control methods [6, 39, 149], only processes conditioning signals without accessing the main branch. Our experiments show that using a bidirectional ControlNet produces better camera representations." + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.521, + 0.958 + ], + "angle": 0, + "content": "22880" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.095, + 0.09, + 0.483, + 0.153 + ], + "angle": 0, + "content": "
                    |              Human Preference
Method              | CA    | MQ    | TA    | VQ    | Overall
Ours vs. VD3D (FIT) | 89.5% | 79.0% | 87.5% | 97.5% | 95.0%
Ours vs. VD3D (DiT) | 65.0% | 87.5% | 83.5% | 95.0% | 92.5%

Table 1. User study. We compare our approach to the original VD3D (FIT) and the reimplemented VD3D (DiT) on top of our base model. We conduct a user study where participants indicate their preference based on camera alignment (CA), motion quality (MQ), text alignment (TA), visual quality (VQ), and overall preference (Overall).

ControlNet with feedback. Traditional ControlNet [188] conditioning, used in recent camera control methods [6, 39, 149], only processes conditioning signals without accessing the main branch. Our experiments show that using a bidirectional ControlNet produces better camera representations: this modification acts as a feedback mechanism [71] from the main synthesis branch to the camera processing branch.

Dropping context in the camera branch. Applying cross-attention over the context information (text prompts, resolution, etc.) in the camera DiT-XS blocks worsens visual quality and camera steering due to harmful interference of the context embeddings with the camera representations.

4. Experiments

Datasets. Our base VDiT model was trained on a large-scale dataset of text-annotated images and videos. VDiT-CC is fine-tuned from VDiT on RealEstate10K [199], which contains \(\approx 65\mathrm{K}\) video clips with per-frame camera parameters, since this is the setup used by existing methods [6, 39, 149].

Metrics. To assess performance, we rely on a wide range of automatic quantitative metrics. We use FID [43], FVD [136], and CLIP score [42] to evaluate visual quality, and the rotation and normalized translation errors [39] of ParticleSfM [192]-reconstructed trajectories to assess camera steerability. We evaluate on both RE10K and MSR-VTT [161], since the latter allows us to assess zero-shot generalization on out-of-distribution data. Moreover, we conduct a user study; details are provided in Appendix J.
4.1. Baselines

We select three camera-control methods: MotionCtrl [149], CameraCtrl [39], and VD3D [6]. MotionCtrl and CameraCtrl use a UNet-based video diffusion backbone [37], while VD3D builds on top of FIT [21, 93] and, as such, is easily extendable to our video DiT [100] setup. Hence, we re-implement VD3D on top of our VDiT model to obtain an additional "VD3D+DiT" baseline. Moreover, we provide comparisons for an open-source model, i.e., CogVideoX [169]. See Sec. C of the appendix for more details.

4.2. Main results

We present quantitative comparisons with the baselines in Tab. 2. One can observe that just switching from the 4B-parameter pixel-space FIT [21] backbone, employed by the original VD3D approach, to our larger 11.5B-parameter latent-space DiT yields clear improvements across most metrics. Next, the results demonstrate that AC3D establishes a new state of the art against all baselines. Since evaluating the quality of camera motion from still images is difficult, we instead visualize all qualitative results on the website provided within our supplementary material; there, one can observe that AC3D better follows pose conditioning and achieves higher visual fidelity. We also conduct user studies against VD3D+FIT (the original model) and VD3D+DiT (our improved re-implementation on top of the bigger video transformer). The results are presented in Table 1: AC3D outperforms both across all qualitative aspects, achieving a \(90\%+\) overall preference score. Finally, we encourage the reader to assess the visual quality by watching the videos on our website.

4.3. Ablations

No camera conditioning. The first ablation we conduct is to drop all camera conditioning, which makes the model equivalent to the vanilla VDiT. This is needed to understand the degradation of visual quality and text alignment. The results (Tab. 2, row w/o camera cond) show that our model loses only \(\approx 7\%\) of the original visual fidelity on MSR-VTT (as measured by FVD), while (as expected) greatly improving on its in-domain RE10K data. In comparison, VD3D-DiT (the closest baseline) loses \(\approx 20\%\) of its visual fidelity on MSR-VTT.

Importance of biasing the noise towards higher levels. As described in Sec. 3.3, we train AC3D with noise levels drawn from a truncated normal distribution with location 0.8 and scale 0.075, bounded to [0.6, 1]. Ablating this bias of the noise sampling towards high noise levels, we observe that it yields higher motion, visual quality, and camera controllability.

Importance of truncating the noise schedule. We change the training and inference procedure by using no truncation during noise sampling: instead, we condition the model with camera inputs over the whole noise range, and observe decreased visual quality.
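For concreteness, the stated noise schedule can be sampled with scipy's truncated normal (a sketch; the paper does not specify its implementation):

```python
from scipy.stats import truncnorm

def sample_noise_levels(n: int, loc: float = 0.8, scale: float = 0.075,
                        low: float = 0.6, high: float = 1.0):
    """Draw n noise levels from N(loc, scale^2) truncated to [low, high].

    scipy's truncnorm expects the bounds expressed in standard deviations
    from the mean, hence the (bound - loc) / scale conversion.
    """
    a, b = (low - loc) / scale, (high - loc) / scale
    return truncnorm.rvs(a, b, loc=loc, scale=scale, size=n)
```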
No camera guidance. We assess the importance of classifier-free guidance [44] on the camera conditioning in Tab. 2 (w/o camera CFG). Without camera CFG, the model attains the same visual quality on both in-distribution (RE10K) and out-of-distribution (MSR-VTT) data, but camera following degrades, resulting in \(\approx 5\%\) worse pose reconstruction errors.

Training without our data with scene motion. To understand how well our curated data with scene motion but stationary cameras mitigates the static scene bias, we train AC3D exclusively on RE10K and report the results in Tab. 2 (w/o our dynamic data). The model maintains similar visual quality and text alignment on RE10K (in-domain data), but performance on out-of-distribution samples from MSR-VTT worsens (\(\approx 17\%\) worse FID and \(\approx 4\%\) worse FVD). The quality of scene motion is better assessed by referring to our qualitative video comparisons in the supplementary material.
                                     |           RealEstate10K [199]              |              MSR-VTT [161]
Method                               | TransErr↓ | RotErr↓ | FID↓  | FVD↓   | CLIP↑ | TransErr↓ | RotErr↓ | FID↓  | FVD↓   | CLIP↑
MotionCtrl (U-Net)                   | 0.477     | 0.094   | 2.99  | 61.70  | 26.46 | 0.593     | 0.137   | 16.85 | 283.12 | 24.11
CameraCtrl (U-Net)                   | 0.465     | 0.089   | 2.48  | 55.64  | 26.81 | 0.587     | 0.132   | 12.33 | 201.33 | 25.05
VD3D (FIT)                           | 0.409     | 0.043   | 1.40  | 42.43  | 28.07 | 0.504     | 0.050   | 7.80  | 165.18 | 26.89
VD3D (CogVideoX)                     | 0.467     | 0.063   | 1.66  | 43.14  | 28.08 | 0.501     | 0.068   | 7.45  | 148.11 | 27.65
AC3D (CogVideoX) (ours)              | 0.374     | 0.039   | 1.27  | 38.20  | 28.62 | 0.431     | 0.039   | 5.52  | 116.04 | 28.38
MotionCtrl (VDiT)                    | 0.504     | 0.126   | 1.74  | 43.81  | 27.69 | 0.589     | 0.146   | 9.92  | 150.20 | 27.25
CameraCtrl (VDiT)                    | 0.513     | 0.138   | 1.62  | 42.10  | 27.73 | 0.566     | 0.143   | 8.15  | 146.77 | 27.51
VD3D (VDiT)                          | 0.421     | 0.056   | 1.21  | 38.57  | 28.34 | 0.486     | 0.047   | 6.88  | 137.62 | 27.90
AC3D (VDiT) (ours)                   | 0.358     | 0.035   | 1.18  | 36.55  | 28.76 | 0.428     | 0.038   | 5.34  | 110.71 | 28.58
w/o camera cond                      | +0.233    | +0.153  | +4.02 | +53.83 | -1.63 | +0.266    | +0.157  | -0.48 | -8.53  | +0.35
w/o biasing noise                    | +0.093    | +0.015  | +0.02 | +1.78  | -0.32 | +0.138    | +0.033  | +0.59 | +16.92 | -0.54
w/o noise truncation                 | +0.020    | -0.003  | +0.06 | +1.69  | -0.20 | +0.016    | +0.005  | +0.76 | +6.63  | -0.18
w/o camera CFG                       | +0.014    | +0.004  | +0.49 | +4.57  | -0.54 | +0.025    | +0.003  | +0.03 | +1.42  | -0.27
w/o our dynamic data                 | -0.005    | -0.004  | -0.06 | +0.22  | -0.20 | +0.004    | -0.001  | +0.89 | +4.40  | -0.55
w/o metric scaled data               | +0.013    | +0.005  | +0.17 | +4.65  | 0.00  | +0.023    | +0.002  | -0.01 | 0.00   | -0.12
w/o dropping camera context          | +0.013    | +0.001  | +0.04 | +2.46  | -0.65 | +0.029    | +0.003  | +1.25 | +7.41  | -0.36
w/o limiting camera cond to 8 blocks | -0.001    | +0.001  | +0.09 | +0.56  | -0.02 | +0.003    | 0.000   | +0.32 | +9.23  | -0.33
w/ 2D training                       | +0.129    | +0.068  | +2.60 | +33.85 | -1.17 | +0.128    | +0.093  | -0.26 | -3.83  | +0.21

Table 2. Quantitative evaluation. We evaluate all the models using camera pose and visual quality metrics based on unseen camera trajectories. We compute translation and rotation errors based on the camera poses estimated from generations using ParticleSfM [192]. We evaluate both in-distribution performance with RealEstate10K [199] and out-of-distribution performance with MSR-VTT [161]. The ablation rows report changes relative to AC3D (VDiT).

Importance of metric scaled cameras. We train AC3D using the original RE10K camera parameters without our scaling procedure and present the results in Tab. 2 (w/o metric scaled data). This is a more ambiguous conditioning signal, and it results in worse visual quality (\(\approx 10\%\) worse FVD on RE10K) and camera following (\(\approx 12\%\) worse trajectory reconstruction).

Providing context into the camera branch. As discussed in Sec. 3.6, we chose not to input the context information (text embeddings, resolution conditioning, etc.) into the camera branch to avoid potential interference with the camera representations. As Tab. 2 (w/o dropping camera context) shows, providing this information indeed results in \(\approx 4\%\) worse camera following and \(\approx 15\%\) lower visual quality.

Importance of limiting conditioning to the first 8 VDiT blocks. Following our insights in Sec. 3.4, we condition AC3D only in the first 8 blocks. Conditioning all 32 DiT blocks instead (w/o limiting camera cond to 8 blocks) worsens the visual quality by \(\approx 10\%\) while keeping control quality at the same level. This suggests that the middle and late VDiT layers indeed rely on processed camera information, and conditioning them on external camera poses might lead to interference with other visual features.

Joint training with 2D data. To mitigate visual quality and scene motion degradation, we attempted joint fine-tuning on the 2D video data (without camera annotations) that was used in base VDiT training, applying dropout on the camera inputs for these samples. Prior work shows performance benefits with this strategy [150] and, as Tab. 2 (w/ 2D training) shows, it indeed helps to maintain slightly higher visual fidelity in our case (\(\approx 3\%\) better FVD on MSR-VTT). However, camera steering severely deteriorates, leading to up to \(3\times\) worse translation/rotation errors.
5. Conclusions

Our findings demonstrate that a principled analysis of camera motion in video diffusion models leads to significant improvements in control precision and efficiency. Through enhanced conditioning schedules, targeted layer-specific camera control, and better-calibrated training data, AC3D achieves state-of-the-art performance in 3D camera-controlled video synthesis while maintaining high visual quality and natural scene dynamics. This work establishes a foundation for more precise and efficient camera control in text-to-video generation. We discuss the limitations of our approach in Appendix B. In future work, we plan to focus on further addressing data limitations and on developing control mechanisms for camera trajectories far outside the training distribution.

6. Acknowledgements

DBL acknowledges support from NSERC under the RGPIN program, the Canada Foundation for Innovation, and the Ontario Research Fund.

References

[1] Jimmy Lei Ba. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
[2] Sherwin Bahmani, Jeong Joon Park, Despoina Paschalidou, Hao Tang, Gordon Wetzstein, Leonidas Guibas, Luc Van Gool, and Radu Timofte. 3d-aware video generation. TMLR, 2023.
[3] Sherwin Bahmani, Jeong Joon Park, Despoina Paschalidou, Xingguang Yan, Gordon Wetzstein, Leonidas Guibas, and Andrea Tagliasacchi. CC3D: Layout-conditioned generation of compositional 3D scenes. In Proc. ICCV, 2023.
[4] Sherwin Bahmani, Xian Liu, Wang Yifan, Ivan Skorokhodov, Victor Rong, Ziwei Liu, Xihui Liu, Jeong Joon Park, Sergey Tulyakov, Gordon Wetzstein, Andrea Tagliasacchi, and David B. Lindell. Tc4d: Trajectory-conditioned text-to-4d generation. In Proc. ECCV, 2024.
[5] Sherwin Bahmani, Ivan Skorokhodov, Victor Rong, Gordon Wetzstein, Leonidas Guibas, Peter Wonka, Sergey Tulyakov, Jeong Joon Park, Andrea Tagliasacchi, and David B. Lindell. 4d-fy: Text-to-4d generation using hybrid score distillation sampling. In Proc. CVPR, 2024.
[6] Sherwin Bahmani, Ivan Skorokhodov, Aliaksandr Siarohin, Willi Menapace, Guocheng Qian, Michael Vasilkovsky, Hsin-Ying Lee, Chaoyang Wang, Jiaxu Zou, Andrea Tagliasacchi, et al. VD3D: Taming large video diffusion transformers for 3d camera control. arXiv preprint arXiv:2407.12781, 2024.
[7] Andreas Blattmann, Tim Dockhorn, Sumith Kulal, Daniel Mendelevitch, Maciej Kilian, Dominik Lorenz, Yam Levi, Zion English, Vikram Voleti, Adam Letts, et al. Stable video diffusion: Scaling latent video diffusion models to large datasets. arXiv preprint arXiv:2311.15127, 2023.
[8] Andreas Blattmann, Robin Rombach, Huan Ling, Tim Dockhorn, Seung Wook Kim, Sanja Fidler, and Karsten Kreis. Align your latents: High-resolution video synthesis with latent diffusion models. In Proc. CVPR, 2023.
[9] Tim Brooks, Aleksander Holynski, and Alexei A Efros. Instructpix2pix: Learning to follow image editing instructions. In Proc. CVPR, 2023.
[10] Tim Brooks, Bill Peebles, Connor Holmes, Will DePue, Yufei Guo, Li Jing, David Schnurr, Joe Taylor, Troy Luhman, Eric Luhman, Clarence Ng, Ricky Wang, and Aditya Ramesh. Video generation models as world simulators. OpenAI technical reports, 2024.
[11] Yukang Cao, Liang Pan, Kai Han, Kwan-Yee K Wong, and Ziwei Liu. Avatargo: Zero-shot 4d human-object interaction generation and animation. arXiv preprint arXiv:2410.07164, 2024.
[12] Zenghao Chai, Chen Tang, Yongkang Wong, and Mohan Kankanhalli. Star: Skeleton-aware text-based 4d avatar generation with in-network motion retargeting. arXiv preprint arXiv:2406.04629, 2024.
[13] Eric R Chan, Connor Z Lin, Matthew A Chan, Koki Nagano, Boxiao Pan, Shalini De Mello, Orazio Gallo, Leonidas J Guibas, Jonathan Tremblay, Sameh Khamis, et al. Efficient geometry-aware 3D generative adversarial networks. In Proc. CVPR, 2022.
[14] Eric R Chan, Koki Nagano, Matthew A Chan, Alexander W Bergman, Jeong Joon Park, Axel Levy, Miika Aittala, Shalini De Mello, Tero Karras, and Gordon Wetzstein. Generative novel view synthesis with 3d-aware diffusion models. In Proc. ICCV, 2023.
[15] Ce Chen, Shaoli Huang, Xuelin Chen, Guangyi Chen, Xiaoguang Han, Kun Zhang, and Mingming Gong. Ct4d: Consistent text-to-4d generation with animatable meshes. arXiv preprint arXiv:2408.08342, 2024.
[16] Eric Ming Chen, Sidhanth Holalkere, Ruyu Yan, Kai Zhang, and Abe Davis. Ray conditioning: Trading photo-consistency for photo-realism in multi-view image generation. In Proc. ICCV, 2023.
[17] Haoxin Chen, Menghan Xia, Yingqing He, Yong Zhang, Xiaodong Cun, Shaoshu Yang, Jinbo Xing, Yaofang Liu, Qifeng Chen, Xintao Wang, Chao Weng, and Ying Shan. Videocrafter1: Open diffusion models for high-quality video generation. arXiv preprint arXiv:2310.19512, 2023.
[18] Junsong Chen, Yue Wu, Simian Luo, Enze Xie, Sayak Paul, Ping Luo, Hang Zhao, and Zhenguo Li. Pixart-δ: Fast and controllable image generation with latent consistency models. arXiv preprint arXiv:2401.05252, 2024.
[19] Kevin Chen, Christopher B Choy, Manolis Savva, Angel X Chang, Thomas Funkhouser, and Silvio Savarese. Text2Shape: Generating shapes from natural language by learning joint embeddings. In Proc. ACCV, 2018.
[20] Rui Chen, Yongwei Chen, Ningxin Jiao, and Kui Jia. Fantasia3D: Disentangling geometry and appearance for high-quality text-to-3D content creation. arXiv preprint arXiv:2303.13873, 2023.
[21] Ting Chen and Lala Li. Fit: Far-reaching interleaved transformers. arXiv preprint arXiv:2305.12689, 2023.
[22] Soon Yau Cheong, Duygu Ceylan, Armin Mustafa, Andrew Gilbert, and Chun-Hao Paul Huang. Boosting camera motion control for video diffusion transformers. arXiv preprint arXiv:2410.10802, 2024.
[23] Wen-Hsuan Chu, Lei Ke, and Katerina Fragkiadaki. Dreamscene4d: Dynamic multi-object scene generation from monocular videos. arXiv preprint arXiv:2405.02280, 2024.
[24] Terrance DeVries, Miguel Angel Bautista, Nitish Srivastava, Graham W Taylor, and Joshua M Susskind. Unconstrained scene generation with locally conditioned radiance fields. In Proc. ICCV, 2021.
[25] Sander Dieleman. Perspectives on diffusion, 2023.
[26] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In Proc. ICLR, 2021.
[27] Mohamed El Banani, Amit Raj, Kevis-Kokitsi Maninis, Abhishek Kar, Yuanshen Li, Michael Rubinstein, Deqing Sun, Leonidas Guibas, Justin Johnson, and Varun Jampani. Probing the 3d awareness of visual foundation models. In Proc. CVPR, 2024.
[28] Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas Müller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, et al. Scaling rectified flow transformers for high-resolution image synthesis. In Proc. ICML, 2024.
[29] Gunnar Farnebäck. Two-frame motion estimation based on polynomial expansion. In Proc. SCIA, 2003.
[30] Qijun Feng, Zhen Xing, Zuxuan Wu, and Yu-Gang Jiang. FDGaussian: Fast Gaussian splatting from single image via geometric-aware diffusion model. arXiv preprint arXiv:2403.10242, 2024.
[31] Yutao Feng, Yintong Shang, Xiang Feng, Lei Lan, Shandian Zhe, Tianjia Shao, Hongzhi Wu, Kun Zhou, Hao Su, Chenfanfu Jiang, et al. Elastogen: 4d generative elastodynamics. arXiv preprint arXiv:2405.15056, 2024.
[32] Peng Gao, Le Zhuo, Ziyi Lin, Dongyang Liu, Ruoyi Du, Xu Luo, Longtian Qiu, Yuhang Zhang, et al. Lumina-t2x: Transforming text into any modality, resolution, and duration via flow-based large diffusion transformers. arXiv preprint arXiv:2405.05945, 2024.
[33] Quankai Gao, Qiangeng Xu, Zhe Cao, Ben Mildenhall, Wenchao Ma, Le Chen, Danhang Tang, and Ulrich Neumann. Gaussianflow: Splitting Gaussian dynamics for 4D content creation. arXiv preprint arXiv:2403.12365, 2024.
[34] Ruiqi Gao, Aleksander Holynski, Philipp Henzler, Arthur Brussee, Ricardo Martin-Brualla, Pratul Srinivasan, Jonathan T Barron, and Ben Poole. Cat3d: Create anything in 3d with multi-view diffusion models. In Proc. NeurIPS, 2024.
[35] William Gao, Noam Aigerman, Thibault Groueix, Vova Kim, and Rana Hanocka. TextDeformer: Geometry manipulation using text guidance. In SIGGRAPH, 2023.
[36] Jiatao Gu, Alex Trevithick, Kai-En Lin, Joshua M Susskind, Christian Theobalt, Lingjie Liu, and Ravi Ramamoorthi. NerfDiff: Single-image view synthesis with NeRF-guided distillation from 3D-aware diffusion. In Proc. ICML, 2023.
[37] Yuwei Guo, Ceyuan Yang, Anyi Rao, Yaohui Wang, Yu Qiao, Dahua Lin, and Bo Dai. Animatediff: Animate your personalized text-to-image diffusion models without specific tuning. In Proc. ICLR, 2024.
[38] Junlin Han, Filippos Kokkinos, and Philip Torr. VFusion3D: Learning scalable 3D generative models from video diffusion models. arXiv preprint arXiv:2403.12034, 2024.
[39] Hao He, Yinghao Xu, Yuwei Guo, Gordon Wetzstein, Bo Dai, Hongsheng Li, and Ceyuan Yang. CameraCtrl: Enabling camera control for text-to-video generation. arXiv preprint arXiv:2404.02101, 2024.
[40] Xianglong He, Junyi Chen, Sida Peng, Di Huang, Yangguang Li, Xiaoshui Huang, Chun Yuan, Wanli Ouyang, and Tong He. GVGEN: Text-to-3D generation with volumetric representation. arXiv preprint arXiv:2403.12957, 2024.
[41] Dan Hendrycks and Kevin Gimpel. Gaussian error linear units (gelus). arXiv preprint arXiv:1606.08415, 2016.
[42] Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Le Bras, and Yejin Choi. Clipscore: A reference-free evaluation metric for image captioning. arXiv preprint arXiv:2104.08718, 2021.
[43] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In Proc. NeurIPS, 2017.
[44] Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598, 2022.
[45] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In Proc. NeurIPS, 2020.
[46] Lukas Hollein, Ang Cao, Andrew Owens, Justin Johnson, and Matthias Nießner. Text2room: Extracting textured 3d meshes from 2d text-to-image models. In Proc. ICCV, 2023.
[47] Lukas Hollein, Aljaž Božić, Norman Müller, David Novotny, Hung-Yu Tseng, Christian Richardt, Michael Zollhöfer, and Matthias Nießner. ViewDiff: 3D-consistent image generation with text-to-image models. In Proc. CVPR, 2024.
[48] Yicong Hong, Kai Zhang, Jiuxiang Gu, Sai Bi, Yang Zhou, Difan Liu, Feng Liu, Kalyan Sunkavalli, Trung Bui, and Hao Tan. LRM: Large reconstruction model for single image to 3D. In Proc. ICLR, 2024.
[49] Chen Hou, Guoqiang Wei, Yan Zeng, and Zhibo Chen. Training-free camera control for video generation. arXiv preprint arXiv:2406.10126, 2024.
[50] Teng Hu, Jiangning Zhang, Ran Yi, Yating Wang, Hongrui Huang, Jieyu Weng, Yabiao Wang, and Lizhuang Ma. Motionmaster: Training-free camera motion transfer for video generation. arXiv preprint arXiv:2404.15789, 2024.
[51] Chun-Hao Paul Huang, Jae Shin Yoon, Hyeonho Jeong, Niloy Mitra, and Duygu Ceylan. Camera pose estimation emerging in video diffusion transformer, 2024.
[52] Tianyu Huang, Yihan Zeng, Hui Li, Wangmeng Zuo, and Rynson WH Lau. Dreamphysics: Learning physical properties of dynamic 3d gaussians with video diffusion priors. arXiv preprint arXiv:2406.01476, 2024.
[53] Ajay Jain, Ben Mildenhall, Jonathan T Barron, Pieter Abbeel, and Ben Poole. Zero-shot text-guided object generation with dream fields. In Proc. CVPR, 2022.
[54] Yash Jain, Anshul Nasery, Vibhav Vineet, and Harkirat Behl. Peekaboo: Interactive video generation via masked-diffusion. In Proc. CVPR, 2024.
[55] Nikolay Jetchev. ClipMatrix: Text-controlled creation of 3D textured meshes. arXiv preprint arXiv:2109.12922, 2021.
[56] Lutao Jiang and Lin Wang. Brightdreamer: Generic 3D Gaussian generative framework for fast text-to-3D synthesis. arXiv preprint arXiv:2403.11273, 2024.
[57] Yanqin Jiang, Chaohui Yu, Chenjie Cao, Fan Wang, Weiming Hu, and Jin Gao. Animate3d: Animating any 3d model with multi-view video diffusion. arXiv preprint arXiv:2407.11398, 2024.
[58] Yanqin Jiang, Li Zhang, Jin Gao, Weiming Hu, and Yao Yao. Consistent4d: Consistent 360° dynamic object generation from monocular video. In Proc. ICLR, 2024.
[59] Yash Kant, Aliaksandr Siarohin, Ziyi Wu, Michael Vasilkovsky, Guocheng Qian, Jian Ren, Riza Alp Guler, Bernard Ghanem, Sergey Tulyakov, and Igor Gilitschenski. Spad: Spatially aware multi-view diffusers. In Proc. CVPR, 2024.
[60] Tero Karras, Miika Aittala, Jaakko Lehtinen, Janne Hellsten, Timo Aila, and Samuli Laine. Analyzing and improving the training dynamics of diffusion models. In Proc. CVPR, 2024.
[61] Oren Katzir, Or Patashnik, Daniel Cohen-Or, and Dani Lischinski. Noise-free score distillation. In Proc. ICLR, 2024.
[62] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkuhler, and George Drettakis. 3D Gaussian splatting for real-time radiance field rendering. ACM TOG, 2023.
[63] Seung Wook Kim, Bradley Brown, Kangxue Yin, Karsten Kreis, Katja Schwarz, Daiqing Li, Robin Rombach, Antonio Torralba, and Sanja Fidler. NeuralField-LDM: Scene generation with hierarchical latent diffusion models. In Proc. CVPR, 2023.
[64] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
[65] Zhengfei Kuang, Shengqu Cai, Hao He, Yinghao Xu, Hongsheng Li, Leonidas Guibas, and Gordon Wetzstein. Collaborative video diffusion: Consistent multi-video generation with camera control. In Proc. NeurIPS, 2024.
[66] Kyungmin Lee, Kihyuk Sohn, and Jinwoo Shin. DreamFlow: High-quality text-to-3D generation by approximating probability flow. In Proc. ICLR, 2024.
[67] Yao-Chih Lee, Yi-Ting Chen, Andrew Wang, Ting-Hsuan Liao, Brandon Y Feng, and Jia-Bin Huang. Vividdream: Generating 3d scene with ambient dynamics. arXiv preprint arXiv:2405.20334, 2024.
[68] Guojun Lei, Chi Wang, Hong Li, Rong Zhang, Yikai Wang, and Weiwei Xu. Animateanything: Consistent and controllable animation for video generation. arXiv preprint arXiv:2411.10836, 2024.
[69] Bing Li, Cheng Zheng, Wenxuan Zhu, Jinjie Mai, Biao Zhang, Peter Wonka, and Bernard Ghanem. Vivid-zoo: Multi-view video generation with diffusion model. arXiv preprint arXiv:2406.08659, 2024.
[70] Jiahao Li, Hao Tan, Kai Zhang, Zexiang Xu, Fujun Luan, Yinghao Xu, Yicong Hong, Kalyan Sunkavalli, Greg Shakhnarovich, and Sai Bi. Instant3D: Fast text-to-3D with sparse-view generation and large reconstruction model. In Proc. ICLR, 2024.
[71] Ming Li, Taojiannan Yang, Huafeng Kuang, Jie Wu, Zhaoning Wang, Xuefeng Xiao, and Chen Chen. Controlnet++: Improving conditional controls with efficient consistency feedback. In Proc. ECCV, 2024.
[72] Renjie Li, Panwang Pan, Bangbang Yang, Dejia Xu, Shijie Zhou, Xuanyang Zhang, Zeming Li, Achuta Kadambi, Zhangyang Wang, and Zhiwen Fan. 4k4dgen: Panoramic 4d generation at 4k resolution. arXiv preprint arXiv:2406.13527, 2024.
[73] Zhiqi Li, Yiming Chen, and Peidong Liu. Dreammesh4d: Video-to-4d generation with sparse-controlled gaussian-mesh hybrid representation. arXiv preprint arXiv:2410.06756, 2024.
[74] Zhiqi Li, Yiming Chen, Lingzhe Zhao, and Peidong Liu. Controllable text-to-3D generation via surface-aligned Gaussian splatting. arXiv preprint arXiv:2403.09981, 2024.
[75] Zhengqi Li, Richard Tucker, Noah Snavely, and Aleksander Holynski. Generative image dynamics. In Proc. CVPR, 2024.
[76] Hanwen Liang, Yuyang Yin, Dejia Xu, Hanxue Liang, Zhangyang Wang, Konstantinos N Plataniotis, Yao Zhao, and Yunchao Wei. Diffusion4d: Fast spatial-temporal consistent 4d generation via video diffusion models. arXiv preprint arXiv:2405.16645, 2024.
[77] Yixun Liang, Xin Yang, Jiantao Lin, Haodong Li, Xiaogang Xu, and Yingcong Chen. Luciddreamer: Towards high-fidelity text-to-3D generation via interval score matching. arXiv preprint arXiv:2311.11284, 2023.
[78] Chen-Hsuan Lin, Jun Gao, Luming Tang, Towaki Takikawa, Xiaohui Zeng, Xun Huang, Karsten Kreis, Sanja Fidler, Ming-Yu Liu, and Tsung-Yi Lin. Magic3D: High-resolution text-to-3D content creation. In Proc. CVPR, 2023.
[79] Jiajing Lin, Zhenzhong Wang, Yongjie Hou, Yuzhou Tang, and Min Jiang. Phy124: Fast physics-driven 4d content generation from a single image. arXiv preprint arXiv:2409.07179, 2024.
[80] Yukang Lin, Haonan Han, Chaoqun Gong, Zunnan Xu, Yachao Zhang, and Xiu Li. Consistent123: One image to highly consistent 3D asset using case-aware diffusion priors. arXiv preprint arXiv:2309.17261, 2023.
[81] Huan Ling, Seung Wook Kim, Antonio Torralba, Sanja Fidler, and Karsten Kreis. Align your gaussians: Text-to-4d with dynamic 3d gaussians and composed diffusion models. In Proc. CVPR, 2024.
[82] Pengyang Ling, Jiazi Bu, Pan Zhang, Xiaoyi Dong, Yuhang Zang, Tong Wu, Huaian Chen, Jiaqi Wang, and Yi Jin. Motionclone: Training-free motion cloning for controllable video generation. arXiv preprint arXiv:2406.05338, 2024.
[83] Pengkun Liu, Yikai Wang, Fuchun Sun, Jiafang Li, Hang Xiao, Hongxiang Xue, and Xinzhou Wang. Isotropic3D: Image-to-3D generation based on a single CLIP embedding. arXiv preprint arXiv:2403.10395, 2024.
[84] Ruoshi Liu, Rundi Wu, Basile Van Hoorick, Pavel Tokmakov, Sergey Zakharov, and Carl Vondrick. Zero-1-to-3: Zero-shot one image to 3D object. In Proc. ICCV, 2023.
[85] Xingchao Liu, Chengyue Gong, and Qiang Liu. Flow straight and fast: Learning to generate and transfer data with rectified flow. arXiv preprint arXiv:2209.03003, 2022.
[86] Xian Liu, Xiaohang Zhan, Jiaxiang Tang, Ying Shan, Gang Zeng, Dahua Lin, Xihui Liu, and Ziwei Liu. HumanGaussian: Text-driven 3D human generation with Gaussian splatting. In Proc. CVPR, 2024.
[87] Yuan Liu, Cheng Lin, Zijiao Zeng, Xiaoxiao Long, Lingjie Liu, Taku Komura, and Wenping Wang. SyncDreamer: Generating multiview-consistent images from a single-view image. In Proc. ICLR, 2024.
[88] Xiaoxiao Long, Yuan-Chen Guo, Cheng Lin, Yuan Liu, Zhiyang Dou, Lingjie Liu, Yuexin Ma, Song-Hai Zhang, Marc Habermann, Christian Theobalt, et al. Wonder3D: Single image to 3D using cross-domain diffusion. In Proc. CVPR, 2024.
[89] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.
[90] Ilya Loshchilov and Frank Hutter. SGDR: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983, 2016.
[91] Wan-Duo Kurt Ma, John P Lewis, and W Bastiaan Kleijn. Trailblazer: Trajectory control for diffusion-based video generation. arXiv preprint arXiv:2401.00896, 2023.
[92] Xin Ma, Yaohui Wang, Gengyun Jia, Xinyuan Chen, Ziwei Liu, Yuan-Fang Li, Cunjian Chen, and Yu Qiao. Latte: Latent diffusion transformer for video generation. arXiv preprint arXiv:2401.03048, 2024.
[93] Willi Menapace, Aliaksandr Siarohin, Ivan Skorokhodov, Ekaterina Deyneka, Tsai-Shien Chen, Anil Kag, Yuwei Fang, Aleksei Stoliar, Elisa Ricci, Jian Ren, et al. Snap video: Scaled spatiotemporal transformers for text-to-video synthesis. In Proc. CVPR, 2024.
[94] Qiaowei Miao, Yawei Luo, and Yi Yang. Pla4d: Pixel-level alignments for text-to-4d gaussian splatting. arXiv preprint arXiv:2405.19957, 2024.
[95] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In Proc. ECCV, 2020.
[96] Koichi Namekata, Sherwin Bahmani, Ziyi Wu, Yash Kant, Igor Gilitschenski, and David B Lindell. Sg-i2v: Self-guided trajectory control in image-to-video generation. arXiv preprint arXiv:2411.04989, 2024.
[97] Roy Or-El, Xuan Luo, Mengyi Shan, Eli Shechtman, Jeong Joon Park, and Ira Kemelmacher-Shlizerman. StyleSDF: High-resolution 3D-consistent image and geometry generation. In Proc. CVPR, 2022.
[98] Zijie Pan, Zeyu Yang, Xiatian Zhu, and Li Zhang. Fast dynamic 3d object generation from a single-view video. arXiv preprint arXiv:2401.08742, 2024.
[99] Deepak Pathak, Ross Girshick, Piotr Dollár, Trevor Darrell, and Bharath Hariharan. Learning features by watching objects move. In Proc. CVPR, 2017.
[100] William Peebles and Saining Xie. Scalable diffusion models with transformers. In Proc. ICCV, 2023.
[101] Ryan Po, Wang Yifan, Vladislav Golyanik, Kfir Aberman, Jonathan T Barron, Amit H Bermano, Eric Ryan Chan, Tali Dekel, Aleksander Holynski, Angjoo Kanazawa, et al. State of the art on diffusion models for visual computing. arXiv preprint arXiv:2310.07204, 2023.
[102] Adam Polyak, Amit Zohar, Andrew Brown, Andros Tjandra, Animesh Sinha, Ann Lee, Apoorv Vyas, Bowen Shi, Chih-Yao Ma, Ching-Yao Chuang, et al. Movie gen: A cast of media foundation models. arXiv preprint arXiv:2410.13720, 2024.
[103] Ben Poole, Ajay Jain, Jonathan T. Barron, and Ben Mildenhall. DreamFusion: Text-to-3D using 2D diffusion. In Proc. ICLR, 2023.
[104] Guocheng Qian, Junli Cao, Aliaksandr Siarohin, Yash Kant, Chaoyang Wang, Michael Vasilkovsky, Hsin-Ying Lee, Yuwei Fang, Ivan Skorokhodov, Peiye Zhuang, et al. Atom: Amortized text-to-mesh using 2d diffusion. arXiv preprint arXiv:2402.00867, 2024.
[105] Guocheng Qian, Jinjie Mai, Abdullah Hamdi, Jian Ren, Aliaksandr Siarohin, Bing Li, Hsin-Ying Lee, Ivan Skorokhodov, Peter Wonka, Sergey Tulyakov, et al. Magic123: One image to high-quality 3D object generation using both 2D and 3D diffusion priors. In Proc. ICLR, 2024.
[106] Haonan Qiu, Zhaoxi Chen, Zhouxia Wang, Yingqing He, Menghan Xia, and Ziwei Liu. Freetraj: Tuning-free trajectory control in video diffusion models. arXiv preprint arXiv:2406.16863, 2024.
[107] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In Proc. ICML, 2021.
[108] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. JMLR, 2020.
[109] Jiawei Ren, Liang Pan, Jiaxiang Tang, Chi Zhang, Ang Cao, Gang Zeng, and Ziwei Liu. DreamGaussian4D: Generative 4D Gaussian splatting. arXiv preprint arXiv:2312.17142, 2023.
[110] Jiawei Ren, Kevin Xie, Ashkan Mirzaei, Hanxue Liang, Xiaohui Zeng, Karsten Kreis, Ziwei Liu, Antonio Torralba, Sanja Fidler, Seung Wook Kim, et al. L4gm: Large 4d gaussian reconstruction model. In Proc. NeurIPS, 2024.
[111] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proc. CVPR, 2022.
[112] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In Proc. MICCAI, 2015.
[113] Aditya Sanghi, Hang Chu, Joseph G Lambourne, Ye Wang, Chin-Yi Cheng, Marco Fumero, and Kamal Rahimi Malekshan. CLIP-Forge: Towards zero-shot text-to-shape generation. In Proc. CVPR, 2022.
[114] Johannes Lutz Schonberger and Jan-Michael Frahm. Structure-from-motion revisited. In Proc. CVPR, 2016.
[115] Johannes Lutz Schonberger, Enliang Zheng, Marc Pollefeys, and Jan-Michael Frahm. Pixelwise view selection for unstructured multi-view stereo. In Proc. ECCV, 2016.
[116] Katja Schwarz, Axel Sauer, Michael Niemeyer, Yiyi Liao, and Andreas Geiger. VoxGRAF: Fast 3D-aware image synthesis with sparse voxel grids. In Proc. NeurIPS, 2022.
[117] Yichun Shi, Peng Wang, Jianglong Ye, Long Mai, Kejie Li, and Xiao Yang. MVDream: Multi-view diffusion for 3D generation. In Proc. ICLR, 2024.
[118] Jaidev Shriram, Alex Trevithick, Lingjie Liu, and Ravi Ramamoorthi. Realmdreamer: Text-driven 3d scene generation with inpainting and depth diffusion. arXiv preprint arXiv:2404.07199, 2024.
[119] Uriel Singer, Adam Polyak, Thomas Hayes, Xi Yin, Jie An, Songyang Zhang, Qiyuan Hu, Harry Yang, Oron Ashual, Oran Gafni, et al. Make-a-video: Text-to-video generation without text-video data. In Proc. ICLR, 2023.
[120] Uriel Singer, Shelly Sheynin, Adam Polyak, Oron Ashual, Iurii Makarov, Filippos Kokkinos, Naman Goyal, Andrea Vedaldi, Devi Parikh, Justin Johnson, et al. Text-to-4d dynamic scene generation. In Proc. ICML, 2023.
[121] Vincent Sitzmann, Semon Rezchikov, Bill Freeman, Josh Tenenbaum, and Frédo Durand. Light field networks: Neural scene representations with single-evaluation rendering. In Proc. NeurIPS, 2021.
[122] Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In Proc. ICML, 2015.
[123] Jianlin Su, Murtadha Ahmed, Yu Lu, Shengfeng Pan, Wen Bo, and Yunfeng Liu. Roformer: Enhanced transformer with rotary position embedding. Neurocomputing, 2024.
[124] Jingxiang Sun, Bo Zhang, Ruizhi Shao, Lizhen Wang, Wen Liu, Zhenda Xie, and Yebin Liu. DreamCraft3D: Hierarchical 3D generation with bootstrapped diffusion prior. In Proc. ICLR, 2024.
[125] Keqiang Sun, Dor Litvak, Yunzhi Zhang, Hongsheng Li, Jiajun Wu, and Shangzhe Wu. Ponymation: Learning articulated 3d animal motions from unlabeled online videos. In Proc. ECCV, 2024.
[126] Qi Sun, Zhiyang Guo, Ziyu Wan, Jing Nathan Yan, Shengming Yin, Wengang Zhou, Jing Liao, and Houqiang Li. Eg4d: Explicit generation of 4d object without score distillation. arXiv preprint arXiv:2405.18132, 2024.
[127] Wenqiang Sun, Shuo Chen, Fangfu Liu, Zilong Chen, Yueqi Duan, Jun Zhang, and Yikai Wang. Dimensionx: Create any 3d and 4d scenes from a single image with controllable video diffusion. arXiv preprint arXiv:2411.04928, 2024.
[128] Stanislaw Szymanowicz, Christian Rupprecht, and Andrea Vedaldi. Viewset diffusion: (0-)image-conditioned 3d generative models from 2d data. In Proc. ICCV, 2023.
[129] Stanislaw Szymanowicz, Eldar Insafutdinov, Chuanxia Zheng, Dylan Campbell, João F Henriques, Christian Rupprecht, and Andrea Vedaldi. Flash3d: Feed-forward generalisable 3d scene reconstruction from a single image. arXiv preprint arXiv:2406.04343, 2024.
[130] Stanislaw Szymanowicz, Christian Rupprecht, and Andrea Vedaldi. Splatter image: Ultra-fast single-view 3D reconstruction. In Proc. CVPR, 2024.
[131] Junshu Tang, Tengfei Wang, Bo Zhang, Ting Zhang, Ran Yi, Lizhuang Ma, and Dong Chen. Make-it-3D: High-fidelity 3D creation from a single image with diffusion prior. arXiv preprint arXiv:2303.14184, 2023.
[132] Jiaxiang Tang, Zhaoxi Chen, Xiaokang Chen, Tengfei Wang, Gang Zeng, and Ziwei Liu. LGM: Large multi-view gaussian model for high-resolution 3d content creation. In Proc. ECCV, 2024.
[133] Zhenggang Tang, Peiye Zhuang, Chaoyang Wang, Aliaksandr Siarohin, Yash Kant, Alexander Schwing, Sergey Tulyakov, and Hsin-Ying Lee. Pixel-aligned multi-view generation with depth guided decoder. arXiv preprint arXiv:2408.14016, 2024.
[134] Ayush Tewari, Tianwei Yin, George Cazenavette, Semon Rezchikov, Josh Tenenbaum, Frédo Durand, Bill Freeman, and Vincent Sitzmann. Diffusion with forward models: Solving stochastic inverse problems without direct supervision. In Proc. NeurIPS, 2023.
[135] Dmitry Tochilkin, David Pankratz, Zexiang Liu, Zixuan Huang, Adam Letts, Yangguang Li, Ding Liang, Christian Laforte, Varun Jampani, and Yan-Pei Cao. Triposr: Fast 3D object reconstruction from a single image. arXiv preprint arXiv:2403.02151, 2024.
[136] Thomas Unterthiner, Sjoerd van Steenkiste, Karol Kurach, Raphael Marinier, Marcin Michalski, and Sylvain Gelly. Towards accurate generative models of video: A new metric & challenges. arXiv preprint arXiv:1812.01717, 2018.
[137] Lukas Uzolas, Elmar Eisemann, and Petr Kellnhofer. Motiondreamer: Zero-shot 3d mesh animation from video diffusion models. arXiv preprint arXiv:2405.20155, 2024.
[138] Basile Van Hoorick, Rundi Wu, Ege Ozguroglu, Kyle Sargent, Ruoshi Liu, Pavel Tokmakov, Achal Dave, Changxi Zheng, and Carl Vondrick. Generative camera dolly: Extreme monocular dynamic novel view synthesis. In Proc. ECCV, 2024.
[139] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Proc. NeurIPS, 2017.
[140] Vikram Voleti, Chun-Han Yao, Mark Boss, Adam Letts, David Pankratz, Dmitry Tochilkin, Christian Laforte, Robin Rombach, and Varun Jampani. SV3D: Novel multi-view synthesis and 3D generation from a single image using latent video diffusion. arXiv preprint arXiv:2403.12008, 2024.
[141] Ziyu Wan, Despoina Paschalidou, Ian Huang, Hongyu Liu, Bokui Shen, Xiaoyu Xiang, Jing Liao, and Leonidas Guibas. CAD: Photorealistic 3D generation via adversarial distillation. In Proc. CVPR, 2024.
[142] Can Wang, Menglei Chai, Mingming He, Dongdong Chen, and Jing Liao. Clip-NeRF: Text-and-image driven manipulation of neural radiance fields. In Proc. CVPR, 2022.
[143] Haochen Wang, Xiaodan Du, Jiahao Li, Raymond A Yeh, and Greg Shakhnarovich. Score Jacobian chaining: Lifting pretrained 2d diffusion models for 3D generation. In Proc. CVPR, 2023.
[144] Jiawei Wang, Yuchen Zhang, Jiaxin Zou, Yan Zeng, Guoqiang Wei, Liping Yuan, and Hang Li. Boximator: Generating rich and controllable motions for video synthesis. arXiv preprint arXiv:2402.01566, 2024.
[145] Shuzhe Wang, Vincent Leroy, Yohann Cabon, Boris Chidlovskii, and Jerome Revaud. Dust3r: Geometric 3d vision made easy. In Proc. CVPR, 2024.
[146] Xiang Wang, Hangjie Yuan, Shiwei Zhang, Dayou Chen, Jiuniu Wang, Yingya Zhang, Yujun Shen, Deli Zhao, and Jingren Zhou. Videocomposer: Compositional video synthesis with motion controllability. arXiv preprint arXiv:2306.02018, 2023.
[147] Yikai Wang, Xinzhou Wang, Zilong Chen, Zhengyi Wang, Fuchun Sun, and Jun Zhu. Vidu4d: Single generated video to high-fidelity 4d reconstruction with dynamic gaussian surfels. arXiv preprint arXiv:2405.16822, 2024.
[148] Zhengyi Wang, Cheng Lu, Yikai Wang, Fan Bao, Chongxuan Li, Hang Su, and Jun Zhu. ProlificDreamer: High-fidelity and diverse text-to-3D generation with variational score distillation. In Proc. NeurIPS, 2023.
[149] Zhouxia Wang, Ziyang Yuan, Xintao Wang, Tianshui Chen, Menghan Xia, Ping Luo, and Ying Shan. MotionCtrl: A unified and flexible motion controller for video generation. In SIGGRAPH, 2024.
[150] Daniel Watson, Saurabh Saxena, Lala Li, Andrea Tagliasacchi, and David J Fleet. Controlling space and time with diffusion models. arXiv preprint arXiv:2407.07860, 2024.
[151] Ruiqi Wu, Liangyu Chen, Tong Yang, Chunle Guo, Chongyi Li, and Xiangyu Zhang. Lamp: Learn a motion pattern for few-shot-based video generation. arXiv preprint arXiv:2310.10769, 2023.
[152] Rundi Wu, Ruiqi Gao, Ben Poole, Alex Trevithick, Changxi Zheng, Jonathan T Barron, and Aleksander Holynski. Cat4d: Create anything in 4d with multi-view video diffusion models. arXiv preprint arXiv:2411.18613, 2024.
[153] Weijia Wu, Zhuang Li, Yuchao Gu, Rui Zhao, Yefei He, David Junhao Zhang, Mike Zheng Shou, Yan Li, Tingting Gao, and Di Zhang. Draganything: Motion control for anything using entity representation. In Proc. ECCV, 2024.
[154] Zijie Wu, Chaohui Yu, Yanqin Jiang, Chenjie Cao, Fan Wang, and Xiang Bai. Sc4d: Sparse-controlled video-to-4d generation and motion transfer. arXiv preprint arXiv:2404.03736, 2024.
[155] Zeqi Xiao, Yifan Zhou, Shuai Yang, and Xingang Pan. Video diffusion models are training-free motion interpreter and controller. arXiv preprint arXiv:2405.14864, 2024.
[156] Kevin Xie, Jonathan Lorraine, Tianshi Cao, Jun Gao, James Lucas, Antonio Torralba, Sanja Fidler, and Xiaohui Zeng. Latte3D: Large-scale amortized text-to-enhanced-3D synthesis. In Proc. ECCV, 2024.
[157] Yiming Xie, Chun-Han Yao, Vikram Voleti, Huaizu Jiang, and Varun Jampani. Sv4d: Dynamic 3d content generation with multi-frame and multi-view consistency. arXiv preprint arXiv:2407.17470, 2024.
20" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.671, + 0.484, + 0.74 + ], + "angle": 0, + "content": "[158] Dejia Xu, Yifan Jiang, Chen Huang, Liangchen Song, Thorsten Gernoth, Liangliang Cao, Zhangyang Wang, and Hao Tang. Cavity: Camera-controllable multi-view video diffusion with view-integrated attention. arXiv preprint arXiv:2410.10774, 2024. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.743, + 0.484, + 0.799 + ], + "angle": 0, + "content": "[159] Dejia Xu, Hanwen Liang, Neel P Bhatt, Hezhen Hu, Hanxue Liang, Konstantinos N Plataniotis, and Zhangyang Wang. Comp4d: Llm-guided compositional 4d scene generation. arXiv preprint arXiv:2403.16993, 2024. 2, 20" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.802, + 0.484, + 0.857 + ], + "angle": 0, + "content": "[160] Dejia Xu, Weili Nie, Chao Liu, Sifei Liu, Jan Kautz, Zhangyang Wang, and Arash Vahdat. Camco: Camera-controllable 3d-consistent image-to-video generation. arXiv preprint arXiv:2406.02509, 2024. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.859, + 0.484, + 0.901 + ], + "angle": 0, + "content": "[161] Jun Xu, Tao Mei, Ting Yao, and Yong Rui. Msr-vtt: A large video description dataset for bridging video and language. In Proc. CVPR, 2016. 7, 8" + }, + { + "type": "list", + "bbox": [ + 0.093, + 0.092, + 0.486, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.092, + 0.908, + 0.161 + ], + "angle": 0, + "content": "[162] Yinghao Xu, Zifan Shi, Wang Yifan, Hansheng Chen, Ceyuan Yang, Sida Peng, Yujun Shen, and Gordon Wetzstein. GRM: Large Gaussian reconstruction model for efficient 3D reconstruction and generation. In Proc. ECCV, 2024. 20" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.164, + 0.908, + 0.232 + ], + "angle": 0, + "content": "[163] Yinghao Xu, Hao Tan, Fujun Luan, Sai Bi, Peng Wang, Jiahao Li, Zifan Shi, Kalyan Sunkavalli, Gordon Wetzstein, Zexiang Xu, et al. DMV3D: Denoising multi-view diffusion using 3D large reconstruction model. In Proc. ICLR, 2024. 20" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.234, + 0.908, + 0.289 + ], + "angle": 0, + "content": "[164] Zhongcong Xu, Jianfeng Zhang, Jun Hao Liew, Wenqing Zhang, Song Bai, Jiashi Feng, and Mike Zheng Shou. Pv3d: A 3d generative model for portrait video generation. In Proc. ICLR, 2023. 2, 20" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.291, + 0.908, + 0.347 + ], + "angle": 0, + "content": "[165] Qitong Yang, Mingtao Feng, Zijie Wu, Shijie Sun, Weisheng Dong, Yaonan Wang, and Ajmal Mian. Beyond skeletons: Integrative latent mapping for coherent 4d sequence generation. arXiv preprint arXiv:2403.13238, 2024. 20" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.349, + 0.908, + 0.417 + ], + "angle": 0, + "content": "[166] Shiyuan Yang, Liang Hou, Haibin Huang, Chongyang Ma, Pengfei Wan, Di Zhang, Xiaodong Chen, and Jing Liao. Direct-a-video: Customized video generation with user-directed camera movement and object motion. arXiv preprint arXiv:2402.03162, 2024. 2, 20" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.42, + 0.908, + 0.473 + ], + "angle": 0, + "content": "[167] Zeyu Yang, Zijie Pan, Chun Gu, and Li Zhang. Diffusion2: Dynamic 3d content generation via score composition of orthogonal diffusion models. arXiv preprint 2404.02148, 2024. 
20" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.476, + 0.908, + 0.545 + ], + "angle": 0, + "content": "[168] Zhuoyi Yang, Jiayan Teng, Wendi Zheng, Ming Ding, Shiyu Huang, Jiazheng Xu, Yuanming Yang, Wenyi Hong, Xiaohan Zhang, Guanyu Feng, et al. Cogvideox: Text-to-video diffusion models with an expert transformer. arXiv preprint arXiv:2408.06072, 2024. 2, 3, 16, 17, 18, 19" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.548, + 0.908, + 0.615 + ], + "angle": 0, + "content": "[169] Zhuoyi Yang, Jiayan Teng, Wendi Zheng, Ming Ding, Shiyu Huang, Jiazheng Xu, Yuanming Yang, Wenyi Hong, Xiaohan Zhang, Guanyu Feng, et al. Cogvideox: Text-to-video diffusion models with an expert transformer. arXiv preprint arXiv:2408.06072, 2024. 7, 16, 17" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.617, + 0.908, + 0.673 + ], + "angle": 0, + "content": "[170] Junliang Ye, Fangfu Liu, Qixiu Li, Zhengyi Wang, Yikai Wang, Xinzhou Wang, Yueqi Duan, and Jun Zhu. DreamReward: Text-to-3D generation with human preference. arXiv preprint arXiv:2403.14613, 2024. 20" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.675, + 0.908, + 0.731 + ], + "angle": 0, + "content": "[171] Shengming Yin, Chenfei Wu, Jian Liang, Jie Shi, Houqiang Li, Gong Ming, and Nan Duan. Dragnuwa: Fine-grained control in video generation by integrating text, image, and trajectory. arXiv preprint arXiv:2308.08089, 2023. 20" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.732, + 0.908, + 0.786 + ], + "angle": 0, + "content": "[172] Wei Yin, Chi Zhang, Hao Chen, Zhipeng Cai, Gang Yu, Kaixuan Wang, Xiaozhi Chen, and Chunhua Shen. Metric3d: Towards zero-shot metric 3d prediction from A single image. In Proc. ICCV, 2023. 20" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.789, + 0.908, + 0.843 + ], + "angle": 0, + "content": "[173] Yuyang Yin, Dejia Xu, Zhangyang Wang, Yao Zhao, and Yunchao Wei. 4dgen: Grounded 4d content generation with spatial-temporal consistency. arXiv preprint arXiv:2312.17225, 2023. 2, 20" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.846, + 0.908, + 0.901 + ], + "angle": 0, + "content": "[174] Paul Yoo, Jiaxian Guo, Yutaka Matsuo, and Shixiang Shane Gu. DreamSparse: Escaping from Plato's cave with 2D diffusion model given sparse views. In arXiv preprint arXiv:2306.03414, 2023. 20" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.092, + 0.908, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.945, + 0.52, + 0.957 + ], + "angle": 0, + "content": "22888" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.092, + 0.486, + 0.161 + ], + "angle": 0, + "content": "[175] Heng Yu, Chaoyang Wang, Peiye Zhuang, Willi Menapace, Aliaksandr Siarohin, Junli Cao, Laszlo A Jeni, Sergey Tulyakov, and Hsin-Ying Lee. 4real: Towards photorealistic 4d scene generation via video diffusion models. In Proc. NeurIPS, 2024. 2, 20" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.165, + 0.485, + 0.22 + ], + "angle": 0, + "content": "[176] Shoubin Yu, Jacob Zhiyuan Fang, Jian Zheng, Gunnar A Sigurdsson, Vicente Ordonez, Robinson Piramuthu, and Mohit Bansal. Zero-shot controllable image-to-video animation via motion decomposition. In ACM MM, 2024. 20" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.223, + 0.484, + 0.292 + ], + "angle": 0, + "content": "[177] Wangbo Yu, Jinbo Xing, Li Yuan, Wenbo Hu, Xiaoyu Li, Zhipeng Huang, Xiangjun Gao, Tien-Tsin Wong, Ying Shan, and Yonghong Tian. 
Viewcrafter: Taming video diffusion models for high-fidelity novel view synthesis. arXiv preprint arXiv:2409.02048, 2024. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.295, + 0.484, + 0.348 + ], + "angle": 0, + "content": "[178] Xin Yu, Yuan-Chen Guo, Yangguang Li, Ding Liang, Song-Hai Zhang, and Xiaojuan Qi. Text-to-3D with classifier score distillation. arXiv preprint arXiv:2310.19415, 2023. 20" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.353, + 0.484, + 0.407 + ], + "angle": 0, + "content": "[179] Yu-Jie Yuan, Leif Kobbelt, Jiwen Liu, Yuan Zhang, Pengfei Wan, Yu-Kun Lai, and Lin Gao. 4dynamic: Text-to-4d generation with hybrid priors. arXiv preprint arXiv:2407.12684, 2024. 20" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.411, + 0.484, + 0.48 + ], + "angle": 0, + "content": "[180] Raza Yunus, Jan Eric Lenssen, Michael Niemeyer, Yiyi Liao, Christian Rupprecht, Christian Theobalt, Gerard Pons-Moll, Jia-Bin Huang, Vladislav Golyanik, and Eddy Ilg. Recent trends in 3d reconstruction of general non-rigid scenes. In Computer Graphics Forum, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.483, + 0.484, + 0.552 + ], + "angle": 0, + "content": "[181] Bohan Zeng, Ling Yang, Siyu Li, Jiaming Liu, Zixiang Zhang, Juanxi Tian, Kaixin Zhu, Yongzhen Guo, Fu-Yun Wang, Minkai Xu, et al. Trans4d: Realistic geometry-aware transition for compositional text-to-4d synthesis. arXiv preprint arXiv:2410.07155, 2024. 20" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.555, + 0.484, + 0.61 + ], + "angle": 0, + "content": "[182] Yifei Zeng, Yanqin Jiang, Siyu Zhu, Yuanxun Lu, Youtian Lin, Hao Zhu, Weiming Hu, Xun Cao, and Yao Yao. Stag4d: Spatial-temporal anchored generative 4d gaussians. arXiv preprint arXiv:2403.14939, 2024. 2, 20" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.613, + 0.484, + 0.64 + ], + "angle": 0, + "content": "[183] Biao Zhang and Rico Sennrich. Root mean square layer normalization. In Proc. NeurIPS, 2019. 17" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.644, + 0.484, + 0.697 + ], + "angle": 0, + "content": "[184] Bowen Zhang, Tianyu Yang, Yu Li, Lei Zhang, and Xi Zhao. Compress3D: a compressed latent space for 3D generation from a single image. arXiv preprint arXiv:2403.13524, 2024. 20" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.702, + 0.484, + 0.783 + ], + "angle": 0, + "content": "[185] David Junhao Zhang, Roni Paiss, Shiran Zada, Nikhil Karnad, David E Jacobs, Yael Pritch, Inbar Mosseri, Mike Zheng Shou, Neal Wadhwa, and Nataniel Ruiz. Recapture: Generative video camera controls for user-provided videos using masked video fine-tuning. arXiv preprint arXiv:2411.05003, 2024. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.787, + 0.484, + 0.842 + ], + "angle": 0, + "content": "[186] Hao Zhang, Di Chang, Fang Li, Mohammad Soleymani, and Narendra Ahuja. Magicpose4d: Crafting articulated models with appearance and motion control. arXiv preprint arXiv:2405.14017, 2024. 20" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.846, + 0.484, + 0.9 + ], + "angle": 0, + "content": "[187] Haiyu Zhang, Xinyuan Chen, Yaohui Wang, Xihui Liu, Yunhong Wang, and Yu Qiao. 4diffusion: Multi-view video diffusion model for 4d generation. arXiv preprint arXiv:2405.20674, 2024. 
20" + }, + { + "type": "list", + "bbox": [ + 0.093, + 0.092, + 0.486, + 0.9 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.092, + 0.907, + 0.134 + ], + "angle": 0, + "content": "[188] Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models. In Proc. ICCV, 2023. 3, 4, 6, 18" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.136, + 0.907, + 0.204 + ], + "angle": 0, + "content": "[189] Tianyuan Zhang, Hong-Xing Yu, Rundi Wu, Brandon Y Feng, Changxi Zheng, Noah Snavely, Jiajun Wu, and William T Freeman. Physdreamer: Physics-based interaction with 3d objects via video generation. In Proc. ECCV, 2024. 20" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.207, + 0.907, + 0.261 + ], + "angle": 0, + "content": "[190] Zhichao Zhang, Hui Chen, Jinsheng Deng, Xiaqing Yin, Xingshen Song, and Ming Xu. Motion4d: A decoupled pipeline for enhanced text-to-4d generation with optimized motion patterns. SSRN, 2024. 20" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.263, + 0.907, + 0.317 + ], + "angle": 0, + "content": "[191] Zhenghao Zhang, Junchao Liao, Menghao Li, Long Qin, and Weizhi Wang. Tora: Trajectory-oriented diffusion transformer for video generation. arXiv preprint arXiv:2407.21705, 2024. 20" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.32, + 0.907, + 0.373 + ], + "angle": 0, + "content": "[192] Wang Zhao, Shaohui Liu, Hengkai Guo, Wenping Wang, and Yong-Jin Liu. *Particlesfm: Exploiting dense point trajectories for localizing moving cameras in the wild*. In *Proc. ECCV*, 2022. 5, 7, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.376, + 0.907, + 0.43 + ], + "angle": 0, + "content": "[193] Yuyang Zhao, Zhiwen Yan, Enze Xie, Lanqing Hong, Zhenguo Li, and Gim Hee Lee.Animate124: Animating one image to 4d dynamic scene. arXiv preprint arXiv:2311.14603, 2023. 2, 20" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.433, + 0.907, + 0.488 + ], + "angle": 0, + "content": "[194] Yuyang Zhao, Chung-Ching Lin, Kevin Lin, Zhiwen Yan, Linjie Li, Zhengyuan Yang, Jianfeng Wang, Gim Hee Lee, and Lijuan Wang. Genxd: Generating any 3d and 4d scenes. arXiv preprint arXiv:2411.02319, 2024. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.49, + 0.907, + 0.531 + ], + "angle": 0, + "content": "[195] Guangcong Zheng, Teng Li, Rui Jiang, Yehao Lu, Tao Wu, and Xi Li. Cami2v: Camera-controlled image-to-video diffusion model. arXiv preprint arXiv:2410.15957, 2024. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.532, + 0.907, + 0.586 + ], + "angle": 0, + "content": "[196] Yufeng Zheng, Xueting Li, Koki Nagano, Sifei Liu, Otmar Hilliges, and Shalini De Mello. A unified approach for text-and image-guided 4d scene generation. In Proc. CVPR, 2024. 2, 20" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.589, + 0.907, + 0.643 + ], + "angle": 0, + "content": "[197] Zangwei Zheng, Xiangyu Peng, Tianji Yang, Chenhui Shen, Shenggui Li, Hongxin Liu, Yukun Zhou, Tianyi Li, and Yang You. Open-sora: Democratizing efficient video production for all, 2024. 3, 17" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.646, + 0.907, + 0.7 + ], + "angle": 0, + "content": "[198] Haitao Zhou, Chuang Wang, Rui Nie, Jinxiao Lin, Dongdong Yu, Qian Yu, and Changhu Wang. Trackgo: A flexible and efficient method for controllable video generation. arXiv preprint arXiv:2408.11475, 2024. 
20" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.702, + 0.907, + 0.756 + ], + "angle": 0, + "content": "[199] Tinghui Zhou, Richard Tucker, John Flynn, Graham Fyffe, and Noah Snively. Stereo magnification: Learning view synthesis using multiplane images. In SIGGRAPH, 2018. 2, 4, 5, 6, 7, 8, 16, 18, 19, 20" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.759, + 0.907, + 0.812 + ], + "angle": 0, + "content": "[200] Hanxin Zhu, Tianyu He, Anni Tang, Junliang Guo, Zhibo Chen, and Jiang Bian. Compositional 3d-aware video generation with llm director. arXiv preprint arXiv:2409.00558, 2024. 20" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.092, + 0.907, + 0.812 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.945, + 0.52, + 0.957 + ], + "angle": 0, + "content": "22889" + } + ] +] \ No newline at end of file diff --git a/2025/AC3D_ Analyzing and Improving 3D Camera Control in Video Diffusion Transformers/61cdf9ca-5840-4279-a8c2-d8f7abcf95b7_origin.pdf b/2025/AC3D_ Analyzing and Improving 3D Camera Control in Video Diffusion Transformers/61cdf9ca-5840-4279-a8c2-d8f7abcf95b7_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..47e5577070c23fc255c42b7ebf7d516951a62924 --- /dev/null +++ b/2025/AC3D_ Analyzing and Improving 3D Camera Control in Video Diffusion Transformers/61cdf9ca-5840-4279-a8c2-d8f7abcf95b7_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:44b4bc90f0d5ac5432a1e5a3cf97edbaf6a248b36d630013ffa7b8e36b7f0d44 +size 5458069 diff --git a/2025/AC3D_ Analyzing and Improving 3D Camera Control in Video Diffusion Transformers/full.md b/2025/AC3D_ Analyzing and Improving 3D Camera Control in Video Diffusion Transformers/full.md new file mode 100644 index 0000000000000000000000000000000000000000..05961b6eb95e5b0999f2dd0d4a6364e67af03091 --- /dev/null +++ b/2025/AC3D_ Analyzing and Improving 3D Camera Control in Video Diffusion Transformers/full.md @@ -0,0 +1,415 @@ +# AC3D: Analyzing and Improving 3D Camera Control in Video Diffusion Transformers + +Sherwin Bahmani $^{1,2,3}$ Ivan Skorokhodov $^{3}$ Guocheng Qian $^{3}$ Aliaksandr Siarohin $^{3}$ Willi Menapace $^{3}$ Andrea Tagliasacchi $^{1,4}$ David B. Lindell $^{1,2}$ Sergey Tulyakov $^{3}$ + +1University of Toronto 2Vector Institute 3Snap Inc. 4SFU + +*equal contribution + +https://snap-research.github.io/ac3d + +![](images/b817cae749f12438e773bb976adf99e6f8b2a973ab4df38ce66da003d63efb02.jpg) + +![](images/52516c7173e4846a6846b007e2a90e43104163aca0143472ddbef4ba84c1276e.jpg) +Figure 1. Camera-controlled video generation. Our method enables precise camera controllability in pre-trained video diffusion transformers, allowing joint conditioning of text and camera sequences. We synthesize the same scene with two different camera trajectories as input. The inset images visualize the cameras for the videos in the corresponding columns. The left camera sequence consists of a rotation to the right, while the right camera visualizes a zoom-out and up trajectory. + +![](images/511171fd4800824ac854c98baa2d66ee98e1865662ef20852b5d5958d8527144.jpg) + +# Abstract + +Numerous works have recently integrated 3D camera control into foundational text-to-video models, but the resulting camera control is often imprecise, and video generation quality suffers. In this work, we analyze camera motion from a first principles perspective, uncovering insights that enable precise 3D camera manipulation without compromising synthesis quality. 
First, we determine that the motion induced by camera movements in videos is low-frequency in nature. This motivates us to adjust the train- and test-time pose conditioning schedules, accelerating training convergence while improving visual and motion quality. Then, by probing the representations of an unconditional video diffusion transformer, we observe that they implicitly perform camera pose estimation under the hood, and that only a sub-portion of their layers contain the camera information. This led us to limit the injection of camera conditioning to a subset of the architecture to prevent interference with other video features, leading to a $4 \times$ reduction in training parameters, improved training speed, and $10\%$ higher visual quality. Finally, we complement the typical dataset for camera control learning with a curated dataset of 20K diverse, dynamic videos with stationary cameras. This helps the model distinguish between camera and scene motion and improves the dynamics of generated pose-conditioned videos. We compound these findings to design the Advanced 3D Camera Control (AC3D) architecture, the new state-of-the-art model for generative video modeling with camera control.

# 1. Introduction

Foundational video diffusion models (VDMs) trained on internet-scale data acquire abundant knowledge about the physical world [10]. They not only learn appearance and plausible 2D dynamics, but also a substantial understanding of 3D structure [7]. However, most of this knowledge is stored implicitly within the model, as these models do not expose fine-grained control mechanisms, such as camera motion control. We recently witnessed a surge of works that bring 3D camera control into foundational video models [39, 149, 166], but the control they provide is not very precise, and the synthesis quality is often compromised [6]. We analyze camera motion control in video diffusion models from first principles and develop several findings that allow us to incorporate precise 3D camera conditioning without degrading synthesis quality. To perform our analysis, we train an 11.5B-parameter VDiT (video latent diffusion transformer) [100] on a dataset of 100M text/video pairs. On this model, we perform three key studies. With what we learn, we adapt the camera control solution of VD3D [6] from a pixel-based to a latent-based diffusion model and significantly improve its performance.

1) The spectral properties of camera motion. To study the statistical nature of motion control, we analyze motion spectral volumes (MSV) [75] of the videos generated by a large-scale video DiT model. MSVs show the amount of energy in different portions of the frequency spectrum (i.e., high energy in the low frequencies indicates smooth motion), and we measure them across 200 generated videos of different types (camera motion, scene motion, scene plus camera motion) and at various stages of the denoising synthesis process. We observe that camera motion mostly affects the lower portion of the spectrum and kicks in very early (within the first $\approx 10\%$ of the denoising trajectory). Then, as diffusion models are inherently coarse-to-fine in nature [25], we restrict our camera conditioning to being injected only on the subset of denoising steps corresponding to low frequencies. This results in $\approx 15\%$ higher visual fidelity and $\approx 30\%$ better camera following, and mitigates scene motion degradation.

2) Camera motion knowledge in VDiTs.
Then, we consider our text-only VDiT and determine whether such a model possesses knowledge about cameras, and where this knowledge is expressed within its architecture. With this objective, we feed the (unseen during training) RealEstate10k [199] videos to our VDiT and perform linear probing [27] to determine if camera poses can be recovered from its internal representation. Our analysis reveals that a video DiT implicitly performs camera pose estimation under the hood, and that the presence of camera knowledge in a disentangled form peaks in its middle layers. This implies that the camera signal emerges in its early blocks to allow the later ones to rely on it to build subsequent visual representations. Therefore, we adjust our conditioning scheme to only affect the first $30\%$ of the architecture, leading to a $\approx 4\times$ reduction in training parameters, $15\%$ training and inference acceleration, and $10\%$ improved visual quality.

3) Re-balancing the training distribution. Finally, to supervise camera control architectures, the typical solution is to rely on the camera pose annotations provided by RealEstate10k [199]. However, this dataset contains mostly static scenes, which results in significant motion degradation of the fine-tuned video model. To overcome this problem, we curate a subset of 20K diverse videos with dynamic scenes but static cameras. As the camera conditioning branch is still activated for these videos, this helps the model disambiguate the camera from scene movement. Our experiments show that this simple adjustment in the data is sufficient to recover the scene dynamism while still enabling an effective pose-conditioned video model.

Contributions. We compound the knowledge gained from these three studies into the design of the Advanced 3D Camera Control (AC3D) method. We perform extensive ablation studies and compare against state-of-the-art models for camera control, including MotionCtrl [149], CameraCtrl [39], and VD3D [6]. We demonstrate $18\%$ higher video fidelity and $25\%$ more precise camera steering in terms of quantitative metrics than the closest competitor, and our generated videos are favored over the others in $90\%$ of cases.

# 2. Related work

Our approach lies at the intersection of text-to-video, text-to-3D, and text-to-4D generation approaches. We refer to recent state-of-the-art reports [101, 180] for a more thorough analysis of previous work.

Text-to-video generation. Our approach builds on recent advancements in 2D video diffusion models. One prominent technique in this area enhances text-to-image models by adding temporal layers to support video generation [7, 8, 37, 119, 151]. While these methods use the U-Net architecture, more recent ones [10, 92, 93, 168] have been adapting transformer-based architectures for more scalable, realistic, and highly dynamic scene generation. We are interested in controlling the camera movements during the generation process of recent transformer-based video models using precise camera extrinsics, i.e., cameras represented as rotation and translation sequences for each frame.

4D generation. Early 4D generation works [2, 164] used 4D GANs to learn category-specific generators with an underlying dynamic 3D representation. More recent approaches [4, 5, 81, 120, 196] have tackled 4D generation by distilling motion priors from pre-trained video diffusion models into an explicit 4D representation, enabling category-agnostic 4D generation.
Follow-up works investigate image- or video-conditioned 4D generation [33, 76, 81, 98, 109, 173, 182, 193, 196] instead of pure text inputs, improving flexibility in the generation process. While most of these works are object-centric, recent approaches [4, 159] shifted towards more complex scenes, including methods [23, 175] which model the background. However, all these methods are optimization-based, i.e., each scene is generated independently from scratch. Recently, L4GM [110] proposed a feed-forward 4D generator trained on object-centric synthetic 4D data. While these approaches are explicit and provide space-time control, they are limited in their photorealism compared to recent 2D video diffusion models. We investigate dynamic 3D scene generation from a different perspective by extending pre-trained video diffusion models with 3D camera control.

![](images/7d90716174ed6253fcc66d5f7b61f599bd5530253f95e75ce8956865a8007427.jpg)
Figure 2. VDiT-CC model with ControlNet [71, 188] camera conditioning built on top of VDiT. Video synthesis is performed by large 4,096-dimensional DiT-XL blocks of the frozen VDiT backbone, while VDiT-CC only processes and injects the camera information through lightweight 128-dimensional DiT-XS blocks (FC stands for fully-connected layers); see Section 3.2 for details.

Camera control for video models. Recently, there has been significant progress in adding camera control to video diffusion models. As a pioneering work, MotionCtrl [149] learns camera control by conditioning pre-trained video models [7, 17] on extrinsic matrices. Follow-up works [39, 65, 160] further improve the conditioning mechanisms by representing cameras as Plücker coordinates. Another line of work [49, 50, 82, 155] controls camera motion without training additional parameters. However, all of these approaches use U-Net-based architectures as their backbone. More recently, 4DiM [150] trains a space-time diffusion model from scratch for novel view synthesis from a single image input. Closely related to our work, VD3D [6] incorporates camera control into a pre-trained video diffusion transformer. While its motion and camera control improve over U-Net-based approaches, the synthesized scene motion and the visual quality are still degraded compared to the base video model. In contrast to VD3D, we first thoroughly investigate the pre-trained base video model and its knowledge of camera motion. We then derive an improved training and architecture design for high-quality and dynamic video generation based on our findings.

Concurrent works. Concurrent approaches [68, 158, 177, 185, 194, 195] further improve camera control in U-Net-based architectures, while another work [22] tackles video diffusion transformers. However, the scene motion and visual quality are still limited in that approach. DimensionX [127] controls space and time in video diffusion transformers, but the camera trajectories are pre-defined and not continuous. Chun-Hao et al. [51] explore pose estimation with a video DiT by pairing it with DUSt3R [145] and fine-tuning, while we perform linear probing without any training to assess its existing camera knowledge. CAT4D [152] proposes a multi-view video diffusion model fine-tuned from a multi-view model.

# 3. Method

We first describe our base video diffusion model (Sec. 3.1) and the baseline camera control method built on top of it (Sec. 3.2). Then, we proceed with the analysis of motion (Sec. 3.3), linear probing (Sec. 3.4), and dataset biases (Sec.
3.5), and additional insights on how to build an effective model for camera control (Sec. 3.6).

# 3.1. Base model (VDiT)

Following Sora [10], most modern foundational text-to-video generators use the diffusion framework [45, 122] to train a large-scale transformer [139] in the latent space of a variational autoencoder [64, 111]. We adopt the same design and, for a base video model, pre-train an 11.5B-parameter video DiT model [100] with 32 blocks of hidden dimension 4,096 for text-to-video generation. We use the rectified flow diffusion parametrization [85] and learn in the latent space of CogVideoX [168] (using an autoencoder with a 16-channel output and compression factors of $4 \times 8 \times 8$ in the temporal and spatial dimensions). The T5 [108] encoder produces text embeddings, which are passed into VDiT via cross-attention. We train our base model on a large-scale dataset of images and videos with text annotations, with resolutions ranging from $17 \times 144 \times 256$ to $121 \times 576 \times 1024$. This design is fairly standard and followed by many existing works with little deviation [32, 102, 168, 197]; we describe our specific architectural and training setup in detail in Appendix D.

# 3.2. VDiT with Camera Control (VDiT-CC)

To construct a baseline architecture for camera control, we implement ControlNet [18, 188] conditioning on top of the VDiT. Similar to previous work [6, 39, 149], we use the RealEstate10k [199] dataset, consisting of $65\mathrm{k}$ (text, video, camera trajectory) triplets $(\pmb{t}_n,\pmb{x}_n,\pmb{c}_n)_{n = 1}^N$, and train a new set of model parameters to input the camera information into the model. Camera trajectories $\pmb{c}\in \mathbb{R}^{f\times 25}$ are provided in the form of camera extrinsics $\pmb{C}_f\in \mathbb{R}^{4\times 4}$ and intrinsics $\pmb{K}_f\in \mathbb{R}^{3\times 3}$ for each $f$-th frame $\pmb{x}_f$ (the 25 values per frame are the flattened $4 \times 4$ extrinsics and $3 \times 3$ intrinsics).

Camera conditioning. For base camera control, we adapt VD3D [6], since it was designed for transformer-based models and suits our setup the most, while other methods are built on top of U-Net-based [112] backbones. We use Plücker camera representations [6, 16, 39, 59, 121], which are projected to the same dimensionality and resolution as the video tokens via a fully-convolutional encoder to produce camera tokens; a minimal sketch of the underlying Plücker embedding is given at the end of this subsection. These camera tokens are processed by a sequence of lightweight DiT-XS blocks with hidden dimension 128 and four attention heads each. To mix the camera information with the video tokens of VDiT, we use summation before each main DiT block. We also found it useful to perform cross-attention from video tokens to camera tokens as a form of feedback connection [71]. We illustrate this model architecture, which we call VDiT-CC, in Figure 2; see implementation details in Appendix D. VDiT-CC refers to the camera-controlled video model architecture used by AC3D, while AC3D refers to our full method, including the analysis and the adjustments derived from it.

Training. Keeping the VDiT backbone frozen, we train the new parameters with a rectified flow objective [85] and the standard (location of 0 and scale of 1) logit-normal noise distribution [28]. Similar to prior works [6, 150], we apply a $10\%$ camera dropout to support classifier-free guidance (CFG) [44] later.
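To make the camera representation concrete, below is a minimal NumPy sketch of the per-pixel Plücker embedding referenced above, assuming world-to-camera extrinsics and pinhole intrinsics. The pixel-center convention and the ray normalization are our assumptions, and the fully-convolutional encoder that turns these 6-channel maps into camera tokens is omitted.

```python
import numpy as np

def plucker_embedding(K, w2c, H, W):
    """Per-pixel Plucker coordinates (o x d, d) for a single frame.

    K:   (3, 3) camera intrinsics.
    w2c: (4, 4) world-to-camera extrinsic matrix.
    Returns a (6, H, W) array: moment and direction channels per pixel.
    """
    R, t = w2c[:3, :3], w2c[:3, 3]
    origin = -R.T @ t  # camera center in world coordinates

    # Pixel grid sampled at pixel centers (a convention we assume here).
    u, v = np.meshgrid(np.arange(W) + 0.5, np.arange(H) + 0.5)
    pix = np.stack([u, v, np.ones_like(u)], axis=0).reshape(3, -1)

    # Unproject pixels to unit-norm world-space ray directions.
    dirs = R.T @ (np.linalg.inv(K) @ pix)
    dirs /= np.linalg.norm(dirs, axis=0, keepdims=True)

    moments = np.cross(origin[:, None], dirs, axis=0)  # o x d per pixel
    return np.concatenate([moments, dirs], axis=0).reshape(6, H, W)
```

Stacking these maps over all $f$ frames yields a 6-channel spatio-temporal camera signal that can then be patchified and projected to the token dimensionality.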
Notably, we train VDiT-CC only at the $256^2$ resolution: since camera motion is a low-frequency signal (which can be observed at lower resolutions) and the main VDiT backbone is frozen, we found that our design generalizes to higher resolutions out-of-the-box. During inference, we input text prompts and camera embeddings with classifier-free guidance at each time step.

Model behavior. This baseline model, being built on top of our powerful VDiT, already achieves decent-quality camera control. However, it struggles with degraded visual quality and reduced scene motion, and sometimes the camera control inputs are ignored. To improve the design, we first analyze our VDiT backbone to understand how camera motion is modeled and represented. Then, we inspect VDiT-CC's failure cases and where they arise to address them.

![](images/84e1ccc3b8c3684156cda0c6e94f1b00f3bf0f68361b47b1471f204f9c75627a.jpg)
Figure 3. Average magnitude of motion spectral volumes along spatial, temporal offset, and video batch dimensions for scenes with different motion types. We compute the flow of each video in a sliding-window manner with temporal offsets and average the frequencies across all offsets. Videos with camera motion (purple) exhibit stronger overall motion than videos with scene motion (orange), especially in the low-frequency range, suggesting that the motion induced by camera transitions is heavily biased towards low-frequency components. Frequency refers to the temporal frequency.

# 3.3. How is camera motion modeled by diffusion?

We start by analyzing how camera motion is modeled by a pre-trained video diffusion model (i.e., before camera control is incorporated). We hypothesize that the motion induced by changes in camera pose is a low-frequency signal and investigate the motion spectral volumes [75] of the generated videos at different steps of the denoising process. To perform this analysis, we generate 200 diverse videos with our VDiT model using 80 denoising steps and manually annotate them into four categories: videos with only scene motion, videos with only camera motion, videos with both scene and camera motion, and others; see Appendix E for details. During generation, we save the denoised predictions at each denoising step and estimate optical flow to compute the motion spectral volumes (a minimal sketch of this measurement follows below).

Analysis. We visualize motion spectral volumes with $95\%$ confidence intervals in Figure 3. Videos with camera motion exhibit higher amplitudes than scene-motion-only videos for low-frequency components, while having similar characteristics for high-frequency ones. This supports the conjecture that camera motion is a low-frequency signal. We also depict an example of a generated video with both scene and camera motion at four denoising steps in Fig. 4a: one can observe that the camera movement has been fully produced by $t = 0.9$ (the first $10\%$ of the rectified flow denoising process). In contrast, scene motion details, like the hand movements of the subjects, are not finalized even at $t = 0.5$.

Inspired by this finding, we pose the question: when exactly does a video diffusion model determine the camera pose? To answer this question, we plot aggregated spectral volumes for different timesteps in Figure 4b. We also show the ratio with respect to the last timestep $t = 0$ (i.e., when all motion has been generated). We then inspect when different types of motion appear during the denoising process. Figure 4b (right) shows that the low-frequency motion components fill up to $\approx 84\%$ at $t = 0.9$ (the first $10\%$ of the denoising process), while high-frequency components are not well-modeled until $t = 0.6$.
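To make the measurement concrete, the following is a minimal sketch of the motion-spectral-volume computation outlined above: optical flow is estimated in a sliding window at several temporal offsets, a temporal FFT is applied to the flow, and the magnitudes are averaged over space, channels, and offsets (averaging across a batch of videos is left to the caller). The `estimate_flow` callable (e.g., an off-the-shelf optical flow network) and the specific offsets are assumptions; the exact aggregation of [75] may differ.

```python
import numpy as np

def motion_spectral_volume(frames, estimate_flow, offsets=(1, 2, 4)):
    """Average temporal-frequency magnitude spectrum of optical flow.

    frames: (T, H, W, 3) video; estimate_flow(a, b) -> (H, W, 2) flow field.
    Returns a 1D spectrum over temporal frequencies (low to high).
    """
    T = len(frames)
    spectra = []
    for off in offsets:
        # Flow sequence computed in a sliding-window manner for this offset.
        flow = np.stack([estimate_flow(frames[i], frames[i + off])
                         for i in range(T - off)])  # (T - off, H, W, 2)
        # FFT over time; keep magnitudes of the non-negative frequencies.
        mag = np.abs(np.fft.rfft(flow, axis=0))     # (F, H, W, 2)
        spectra.append(mag.mean(axis=(1, 2, 3)))    # average over space/channels
    # Different offsets yield different spectrum lengths; truncate first.
    n = min(len(s) for s in spectra)
    return np.mean([s[:n] for s in spectra], axis=0)
```

High energy in the low-frequency bins of this spectrum corresponds to smooth, global motion of the kind induced by camera transitions.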
![](images/8693ddc1f716218a31172eb0176c0cf1659c7715d807548d77bc27c89b978dc5.jpg)
(a) A generated video at different diffusion timesteps. The camera has already been decided by the model even at $t = 0.9$ (the first $10\%$ of the denoising process) and does not change after that.

![](images/9ea73aaf19caa9d407640a1ed89c9115a7296f0e1cb9290812cc347eaa8c3e6c.jpg)
(b) Motion spectral volumes of VDiT's generated videos for different diffusion timesteps (left) and their ratio w.r.t. the motion spectral volume at $t = 0$ (i.e., a fully denoised video).

![](images/23e1a2904a69fcdbc58bb5274bf44241e2989333975124d4518225d74c2d0e4e.jpg)

Figure 4. How is camera motion modeled by diffusion? As visualized in Figure 4a and Figure 3, the motion induced by camera transitions is a low-frequency type of motion. We observe that a video DiT creates low-frequency motion very early in the denoising trajectory: Figure 4b (left) shows that even at $t = 0.96$ (the first $\approx 4\%$ of the steps), the low-frequency motion components have already been created, while high-frequency ones are not fully unveiled even at $t = 0.5$. We found that controlling the camera pose later in the denoising trajectory is not only unnecessary but detrimental to both scene motion and overall visual quality.

An immediate consequence of this observation is that trying to control the camera later in the denoising trajectory is simply unnecessary and will not influence the manipulation result. Hence, instead of using the standard logit-normal noise-level distribution of SD3 [28] with a location of 0.0 and scale of 1.0 (which we use by default for VDiT), we switch to a truncated normal with a location of 0.8 and scale of 0.075 on the $[0.6, 1]$ interval to cover the early steps of the denoising rectified flow trajectory. At inference time, we apply camera conditioning on the same $[0.6, 1]$ interval. Surprisingly, we observe that not using truncation is detrimental to the scene motion and overall visual quality. Following this insight, we restrict both our train-time noise levels and test-time camera conditioning schedules to cover only the first $40\%$ of the reverse diffusion trajectory; a minimal sketch of this sampling is given below. As Sec. 4.3 shows, this improves FID and FVD by $14\%$ on average, and camera following by $30\%$ on MSR-VTT (the dataset used to measure generalization to diverse, out-of-fine-tuning-distribution scenes). Further, truncated noise sampling enhances the overall scene motion.
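Below is a minimal sketch of the biased, truncated noise-level sampling described above; the parameters match the text, while the use of SciPy (rather than, e.g., rejection sampling) is an implementation assumption.

```python
from scipy.stats import truncnorm

def sample_camera_noise_levels(n, loc=0.8, scale=0.075, lo=0.6, hi=1.0):
    """Sample diffusion times t in [lo, hi], biased towards high noise.

    truncnorm expects its bounds in standardized units, hence the rescaling.
    """
    a, b = (lo - loc) / scale, (hi - loc) / scale
    return truncnorm.rvs(a, b, loc=loc, scale=scale, size=n)

def camera_conditioning_active(t, lo=0.6, hi=1.0):
    """At inference, camera conditioning is applied only on this interval."""
    return lo <= t <= hi
```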
# 3.4. What does VDiT know about camera pose?

Foundational video models acquire rich knowledge about the physical world, and we hypothesize that they store information about the camera pose within their representations. To investigate this, we perform linear probing of our base VDiT model on the RealEstate10k [199] dataset (not seen during training) for camera extrinsics.

Specifically, we take 1,000 random 49-frame videos from RealEstate10K, feed them into VDiT under 8 noise levels $(1/8, 2/8, \dots, 1)$, and extract the activations of all 32 DiT blocks. Next, we split the videos into 900 train and 100 test videos and train a linear ridge regression model to predict the rotation pitch/yaw/roll angles and translation vectors for the entire viewpoint trajectory ( $49 \times 6$ target values in total). This results in $8 \times 32$ trained models, and we report the rotation and (normalized) translation errors [39] on the held-out test set of 100 videos in Figure 5; a minimal sketch of this probe is given at the end of this subsection. Surprisingly, VDiT can accurately predict the camera pose, achieving minimum test errors of $\approx 0.025$ for rotation and $\approx 0.48$ for translation prediction. The knowledge quality increases around block #9 and peaks in the range of blocks #13-21. We reason that since the camera information in block #13 is stored in such a disentangled manner, the model is likely using it to build other representations; hence, conditioning the camera in this block is risky and unnecessary, and would interfere with other visual features, as shown in our ablations. In this way, we propose to input the camera conditioning only in the first 8 blocks and leave the remaining 24 DiT blocks unconditioned. We find in Section 4.3 that this not only reduces the number of trainable parameters by $\approx 4$ times and improves training speed by $\approx 15\%$, but also enhances the visual quality by $\approx 10\%$.

![](images/81ea02059ff1477dd730701760e70616e713378dad921b2c3c191cac9198b885.jpg)
![](images/5b13402e9380f67cf4a5cc2290fae66fa5179e151f60267cd7fe2a1dbed59fa5.jpg)
Figure 5. Video DiT is secretly a camera pose estimator. We perform linear probing of camera poses in each of VDiT's blocks for various noise levels and observe that video DiT performs pose estimation under the hood. Its middle blocks carry the most accurate information about the camera locations and orientations, which indicates that the camera signal emerges in the early layers to help the middle and late blocks render other visual features aligned with the viewpoint.
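A minimal sketch of this probe for a single (block, noise level) pair is given below, using scikit-learn's ridge regression. How activations are pooled into per-video features is our assumption, and the simple MAE / scale-normalized L2 errors used here are simplifications of the evaluation protocol of [39].

```python
import numpy as np
from sklearn.linear_model import Ridge

def probe_block(acts_tr, poses_tr, acts_te, poses_te, alpha=1.0):
    """Linear probe of camera pose from one block's activations.

    acts_*:  (N, D) per-video features, e.g., activations mean-pooled
             over all video tokens (the pooling scheme is an assumption).
    poses_*: (N, 49, 6) per-frame pitch/yaw/roll angles and translations.
    """
    model = Ridge(alpha=alpha)
    model.fit(acts_tr, poses_tr.reshape(len(poses_tr), -1))
    pred = model.predict(acts_te).reshape(poses_te.shape)

    rot_err = np.abs(pred[..., :3] - poses_te[..., :3]).mean()

    # Normalize translation trajectories to unit scale before comparison,
    # mimicking a scale-invariant (normalized) translation error.
    def unit(t):
        return t / (np.linalg.norm(t, axis=(1, 2), keepdims=True) + 1e-8)

    trans_err = np.linalg.norm(
        unit(pred[..., 3:]) - unit(poses_te[..., 3:]), axis=-1).mean()
    return rot_err, trans_err
```

Sweeping this probe over all 32 blocks and 8 noise levels produces the error landscape summarized in Figure 5.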
# 3.5. Mitigating training data limitations

Estimating camera parameters from in-the-wild videos remains challenging, as leading methods [114, 115, 145, 192] frequently fail when processing videos containing dynamic scene content. This limitation results in camera-annotated datasets being heavily biased toward static scenes, which is particularly evident in RealEstate10K (RE10K) [199], the predominant dataset for training camera-controlled video models [6, 39, 149]. We hypothesize that models fine-tuned on such data interpret camera position information as a signal to suppress scene dynamics. This bias persists even when jointly training on unconstrained 2D video data [150], because the camera conditioning branch is only activated when camera parameters are available, which occurs exclusively for static scenes from RE10K, as static scenes remain the only reliable source of accurate camera annotation.

To address this fundamental limitation, we propose an alternative approach: rather than attempting to annotate dynamic scenes, which proved unsuccessful in our extensive preliminary experiments even with state-of-the-art methods [145], we curate a collection of 20K diverse videos featuring dynamic scenes captured by stationary cameras (see Figure 6). With stationary cameras, the camera pose is inherently known (we can assign fixed, arbitrary extrinsics), allowing us to keep the camera conditioning branch active during training while exposing the model to dynamic content, helping it distinguish between viewpoint conditioning and scene stillness. On top of this secondary dataset, following [150], we remove the scale ambiguity in RE10K by leveraging an off-the-shelf metric depth estimator; see Appendix H. Our experiments in Sec. 4.3 demonstrate that this straightforward yet effective data curation strategy successfully mitigates the distributional limitations of RE10K, restoring much of the lost scene dynamics while maintaining precise camera control.

![](images/68916703f5541477f8f661f8e721d18c59c45cea7bc6d959220446f4f42366a8.jpg)
Figure 6. RealEstate10k [199] videos (upper two rows) contain diverse camera trajectories, but are strongly biased towards static scenes. To mitigate this bias and also increase the concept diversity, we curate 20K videos with stationary cameras but dynamic content (lower two rows). Such datasets are easy to construct, and surprisingly effective: Section 4.3 shows that integrating this dataset into our training improves visual quality on out-of-distribution prompts by $17\%$.

# 3.6. Miscellaneous improvements

In addition to our core analysis, we introduce several auxiliary techniques that enhance model performance.

Separate text and camera guidance. Text and camera signals require different guidance weights due to their distinct nature, motivating us to separate their classifier-free guidance (CFG) [9, 44]. We formulate the guided update as:

$$
\hat{s}(\boldsymbol{x} \mid \boldsymbol{t}, \boldsymbol{c}) = (1 + w_{y} + w_{c})\, s_{\theta}(\boldsymbol{x} \mid \boldsymbol{t}, \boldsymbol{c}) - w_{y}\, s_{\theta}(\boldsymbol{x} \mid \boldsymbol{c}) - w_{c}\, s_{\theta}(\boldsymbol{x} \mid \boldsymbol{t}), \tag{1}
$$

where $\hat{s}(\cdot)$ denotes the final update direction used during synthesis, $s_{\theta}$ represents the model's predicted update direction, $\boldsymbol{t}$ and $\boldsymbol{c}$ are the text and camera conditions, and $w_{y}$ and $w_{c}$ are their respective CFG weights. We zero out the corresponding conditioning tensor for unconditional generation; a minimal sketch of this guidance combination is given below.
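In the sketch of Eq. (1) below, the `model` signature, the use of zeroed-out embeddings as the null condition, and the specific guidance weights are assumptions; the combination of the three predictions follows the equation.

```python
import torch

@torch.no_grad()
def guided_update(model, x, text_emb, cam_emb, sigma, w_y=7.5, w_c=2.0):
    """Separate text/camera classifier-free guidance, following Eq. (1).

    model(x, text, camera, sigma) returns the predicted update direction;
    a dropped condition is replaced by a zero tensor, matching the
    zeroed-out conditioning used for unconditional generation.
    """
    s_full = model(x, text_emb, cam_emb, sigma)                    # s(x | t, c)
    s_cam = model(x, torch.zeros_like(text_emb), cam_emb, sigma)   # s(x | c)
    s_txt = model(x, text_emb, torch.zeros_like(cam_emb), sigma)   # s(x | t)
    return (1 + w_y + w_c) * s_full - w_y * s_cam - w_c * s_txt
```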
ControlNet with feedback. Traditional ControlNet [188] conditioning, used in recent camera control methods [6, 39, 149], only processes the conditioning signals without accessing the main branch. Our experiments show that using a bidirectional ControlNet produces better camera representations: this modification behaves as a feedback mechanism [71] provided by the main synthesis branch to the camera processing branch.

Dropping context in the camera branch. Applying cross-attention over the context information (text prompts, resolution, etc.) in the camera DiT-XS blocks worsens visual quality and camera steering due to harmful interference of the context embeddings with the camera representations.

| Method | CA | MQ | TA | VQ | Overall |
| --- | --- | --- | --- | --- | --- |
| Ours vs. VD3D (FIT) | 89.5% | 79.0% | 87.5% | 97.5% | 95.0% |
| Ours vs. VD3D (DiT) | 65.0% | 87.5% | 83.5% | 95.0% | 92.5% |

Table 1. User study. We compare our approach to the original VD3D (FIT) and the reimplemented VD3D (DiT) on top of our base model. We conduct a user study where participants indicate their preference based on camera alignment (CA), motion quality (MQ), text alignment (TA), visual quality (VQ), and overall preference (Overall).

# 4. Experiments

Datasets. Our base VDiT model was trained on a large-scale dataset of text-annotated images and videos. VDiT-CC is fine-tuned from VDiT on RealEstate10K [199], which contains $\approx 65\mathrm{K}$ video clips with per-frame camera parameters, since this is the setup used by existing methods [6, 39, 149].

Metrics. To assess the performance, we rely on a wide range of automatic quantitative metrics. We use FID [43], FVD [136], and CLIP score [42] to evaluate visual quality, and the rotation and normalized translation errors [39] of ParticleSfM [192]-reconstructed trajectories to assess camera steerability. We evaluate them both on RE10K and MSR-VTT [161], since the latter allows us to assess zero-shot generalization on out-of-distribution data. Moreover, we conduct a user study; see Appendix J for details.

# 4.1. Baselines

We select three camera-control methods: MotionCtrl [149], CameraCtrl [39], and VD3D [6]. MotionCtrl and CameraCtrl use a U-Net-based video diffusion backbone [37], while VD3D builds on top of FIT [21, 93] and, as such, is easily extendable to our video DiT [100] setup. Hence, we re-implement VD3D on top of our VDiT model to obtain an additional "VD3D+DiT" baseline. Moreover, we provide comparisons on an open-source model, i.e., CogVideoX [169]. See Appendix C for more details.

# 4.2. Main results

We present quantitative comparisons with the baselines in Tab. 2. One can observe that just switching from the 4B-parameter pixel-space FIT [21] backbone, employed by the original VD3D approach, to our larger 11.5B-parameter latent-space DiT yields clear improvements across most metrics. Next, the results demonstrate that AC3D establishes a new state of the art against all baselines. Evaluating the quality of camera motion from still images is difficult, so we instead visualize all qualitative results on the website provided within our supplementary material. Therein, one can observe that AC3D better follows pose conditioning and achieves higher visual fidelity. We conduct user studies against VD3D+FIT (the original model) and VD3D+DiT (our improved re-implementation on top of the bigger video transformer). The results are presented in Table 1: AC3D outperforms them across all qualitative aspects, achieving a $90\%+$ overall preference score. Finally, we encourage the reader to assess the visual quality by watching the videos on our website.

# 4.3. Ablations

No camera conditioning. The first ablation we conduct is to drop all camera conditioning, which makes the model equivalent to the vanilla VDiT. This is needed to understand the degradation of visual quality and text alignment. The results (Tab.
2, row w/o camera cond) show that our model loses only $\approx 7\%$ of the original visual fidelity on MSR-VTT (as measured by FVD), while (as expected) greatly improving on its in-domain RE10K data. In comparison, VD3D-DiT (the closest baseline) loses $\approx 20\%$ of its visual fidelity on MSR-VTT.

Importance of biasing the noise towards higher levels. As Sec. 3.3 describes, we use the truncated normal distribution with a location of 0.8 and scale of 0.075 on the $[0.6, 1]$ bounds for training AC3D. We ablate the importance of biasing the noise sampling towards high noise levels and observe better motion, visual quality, and camera controllability when the biasing is enabled.

Importance of truncating the noise schedule. We change the training and inference procedure by using no truncation during noise sampling. Instead, we condition the model with camera inputs over the whole noise range and observe decreased visual quality.

No camera guidance. We assess the importance of classifier-free guidance [44] on the camera conditioning in Tab. 2 (w/o camera CFG). It attains the same visual quality on both in-distribution (RE10K) and out-of-distribution (MSR-VTT) data, but degrades camera following, resulting in $\approx 5\%$ worse pose reconstruction errors.

Training without our data with scene motion. To understand how well our curated data with scene motion but stationary cameras mitigates the static scene bias, we train AC3D exclusively on RE10K and report the results in Tab. 2 (w/o our dynamic data). The model maintains similar visual quality and text alignment on RE10K (in-domain data), but
+
+| Method | RE10K TransErr ↓ | RE10K RotErr ↓ | RE10K FID ↓ | RE10K FVD ↓ | RE10K CLIP ↑ | MSR-VTT TransErr ↓ | MSR-VTT RotErr ↓ | MSR-VTT FID ↓ | MSR-VTT FVD ↓ | MSR-VTT CLIP ↑ |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| MotionCtrl (U-Net) | 0.477 | 0.094 | 2.99 | 61.70 | 26.46 | 0.593 | 0.137 | 16.85 | 283.12 | 24.11 |
+| CameraCtrl (U-Net) | 0.465 | 0.089 | 2.48 | 55.64 | 26.81 | 0.587 | 0.132 | 12.33 | 201.33 | 25.05 |
+| VD3D (FIT) | 0.409 | 0.043 | 1.40 | 42.43 | 28.07 | 0.504 | 0.050 | 7.80 | 165.18 | 26.89 |
+| VD3D (CogVideoX) | 0.467 | 0.063 | 1.66 | 43.14 | 28.08 | 0.501 | 0.068 | 7.45 | 148.11 | 27.65 |
+| AC3D (CogVideoX) (ours) | 0.374 | 0.039 | 1.27 | 38.20 | 28.62 | 0.431 | 0.039 | 5.52 | 116.04 | 28.38 |
+| MotionCtrl (VDiT) | 0.504 | 0.126 | 1.74 | 43.81 | 27.69 | 0.589 | 0.146 | 9.92 | 150.20 | 27.25 |
+| CameraCtrl (VDiT) | 0.513 | 0.138 | 1.62 | 42.10 | 27.73 | 0.566 | 0.143 | 8.15 | 146.77 | 27.51 |
+| VD3D (VDiT) | 0.421 | 0.056 | 1.21 | 38.57 | 28.34 | 0.486 | 0.047 | 6.88 | 137.62 | 27.90 |
+| AC3D (VDiT) (ours) | 0.358 | 0.035 | 1.18 | 36.55 | 28.76 | 0.428 | 0.038 | 5.34 | 110.71 | 28.58 |
+| w/o camera cond | +0.233 | +0.153 | +4.02 | +53.83 | -1.63 | +0.266 | +0.157 | -0.48 | -8.53 | +0.35 |
+| w/o biasing noise | +0.093 | +0.015 | +0.02 | +1.78 | -0.32 | +0.138 | +0.033 | +0.59 | +16.92 | -0.54 |
+| w/o noise truncation | +0.020 | -0.003 | +0.06 | +1.69 | -0.20 | +0.016 | +0.005 | +0.76 | +6.63 | -0.18 |
+| w/o camera CFG | +0.014 | +0.004 | +0.49 | +4.57 | -0.54 | +0.025 | +0.003 | +0.03 | +1.42 | -0.27 |
+| w/o our dynamic data | -0.005 | -0.004 | -0.06 | +0.22 | -0.20 | +0.004 | -0.001 | +0.89 | +4.40 | -0.55 |
+| w/o metric scaled data | +0.013 | +0.005 | +0.17 | +4.65 | 0.00 | +0.023 | +0.002 | -0.01 | 0.00 | -0.12 |
+| w/o dropping camera context | +0.013 | +0.001 | +0.04 | +2.46 | -0.65 | +0.029 | +0.003 | +1.25 | +7.41 | -0.36 |
+| w/o limiting camera cond to 8 blocks | -0.001 | +0.001 | +0.09 | +0.56 | -0.02 | +0.003 | 0.000 | +0.32 | +9.23 | -0.33 |
+| w/ 2D training | +0.129 | +0.068 | +2.60 | +33.85 | -1.17 | +0.128 | +0.093 | -0.26 | -3.83 | +0.21 |
+
+Table 2. Quantitative evaluation. We evaluate all models using camera pose and visual quality metrics on unseen camera trajectories. We compute translation and rotation errors from the camera poses estimated on the generated videos with ParticleSfM [192]. We evaluate both in-distribution performance on RealEstate10K [199] (RE10K) and out-of-distribution performance on MSR-VTT [161]. The bottom rows report ablations as changes relative to the scores of AC3D (VDiT).
+
+Importance of metric scaled cameras. We train AC3D using the original RE10K camera parameters without our scaling procedure and present the results in Tab. 2 (w/o metric scaled data). This is a more ambiguous conditioning signal and results in worse visual quality ($\approx 10\%$ worse FVD on RE10K) and worse camera following ($\approx 12\%$ worse trajectory reconstruction).
+
+Providing context into the camera branch. As discussed in Sec. 3.6, we chose not to input the context information (text embeddings, resolution conditioning, etc.) into the camera branch to avoid potential interference with the camera representations. As Tab. 2 (w/o dropping camera context) shows, providing this information indeed results in $\approx 4\%$ worse camera following and $\approx 15\%$ lower visual quality.
+
+Importance of limiting conditioning to the first 8 VDiT blocks. Following our insights in Sec. 3.4, we condition AC3D only in the first 8 blocks. Conditioning all 32 DiT blocks instead (w/o limiting camera cond to 8 blocks) worsens the visual quality by $\approx 10\%$, while keeping camera control quality at the same level. This suggests that the middle and late VDiT layers indeed rely on already-processed camera information, and conditioning them on external camera poses might interfere with other visual features.
+
+Joint training with 2D data. To mitigate visual quality and scene motion degradation, we attempted joint fine-tuning on the 2D video data (without camera annotations) used for base VDiT training, applying dropout to the camera inputs for these videos. Prior work reports performance benefits from this strategy [150] and, as Tab. 2 (w/ 2D training) shows, it indeed helps maintain slightly higher visual fidelity in our case ($\approx 3\%$ better FVD on MSR-VTT). However, camera steering severely deteriorates, leading to up to $3\times$ worse translation/rotation errors.
+
+# 5. Conclusions
+
+Our findings demonstrate that a principled analysis of camera motion in video diffusion models leads to significant improvements in control precision and efficiency. Through enhanced conditioning schedules, targeted layer-specific camera control, and better-calibrated training data, AC3D achieves state-of-the-art performance in 3D camera-controlled video synthesis while maintaining high visual quality and natural scene dynamics. This work establishes a foundation for more precise and efficient camera control in text-to-video generation. We discuss the limitations of our approach in Appendix B. In future work, we plan to address the remaining data limitations and to develop control mechanisms for camera trajectories far outside the training distribution.
+
+# 6. Acknowledgements
+
+DBL acknowledges support from NSERC under the RGPIN program, the Canada Foundation for Innovation, and the Ontario Research Fund.
+
+# References
+
+[1] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016. 17
+[2] Sherwin Bahmani, Jeong Joon Park, Despoina Paschalidou, Hao Tang, Gordon Wetzstein, Leonidas Guibas, Luc Van Gool, and Radu Timofte. 3d-aware video generation. In TMLR, 2023. 2, 20
+[3] Sherwin Bahmani, Jeong Joon Park, Despoina Paschalidou, Xingguang Yan, Gordon Wetzstein, Leonidas Guibas, and Andrea Tagliasacchi. CC3D: Layout-conditioned generation of compositional 3D scenes. In Proc. ICCV, 2023. 20
+[4] Sherwin Bahmani, Xian Liu, Wang Yifan, Ivan Skorokhodov, Victor Rong, Ziwei Liu, Xihui Liu, Jeong Joon Park, Sergey Tulyakov, Gordon Wetzstein, Andrea Tagliasacchi, and David B. Lindell. Tc4d: Trajectory-conditioned text-to-4d generation. In Proc. ECCV, 2024. 2, 20
+[5] Sherwin Bahmani, Ivan Skorokhodov, Victor Rong, Gordon Wetzstein, Leonidas Guibas, Peter Wonka, Sergey Tulyakov, Jeong Joon Park, Andrea Tagliasacchi, and David B. Lindell. 4d-fy: Text-to-4d generation using hybrid score distillation sampling. In Proc. CVPR, 2024. 2, 20
+[6] Sherwin Bahmani, Ivan Skorokhodov, Aliaksandr Siarohin, Willi Menapace, Guocheng Qian, Michael Vasilkovsky, Hsin-Ying Lee, Chaoyang Wang, Jiaxu Zou, Andrea Tagliasacchi, et al. Vd3d: Taming large video diffusion transformers for 3d camera control. arXiv preprint arXiv:2407.12781, 2024. 2, 3, 4, 6, 7, 18
+[7] Andreas Blattmann, Tim Dockhorn, Sumith Kulal, Daniel Mendelevitch, Maciej Kilian, Dominik Lorenz, Yam Levi, Zion English, Vikram Voleti, Adam Letts, et al. Stable video diffusion: Scaling latent video diffusion models to large datasets. arXiv preprint arXiv:2311.15127, 2023. 2, 3
+[8] Andreas Blattmann, Robin Rombach, Huan Ling, Tim Dockhorn, Seung Wook Kim, Sanja Fidler, and Karsten Kreis. Align your latents: High-resolution video synthesis with latent diffusion models. In Proc. CVPR, 2023. 2
+[9] Tim Brooks, Aleksander Holynski, and Alexei A Efros. Instructpix2pix: Learning to follow image editing instructions. In Proc. CVPR, 2023. 6
+[10] Tim Brooks, Bill Peebles, Connor Holmes, Will DePue, Yufei Guo, Li Jing, David Schnurr, Joe Taylor, Troy Luhman, Eric Luhman, Clarence Ng, Ricky Wang, and Aditya Ramesh. Video generation models as world simulators. OpenAI technical reports, 2024. 1, 2, 3, 17
+[11] Yukang Cao, Liang Pan, Kai Han, Kwan-Yee K Wong, and Ziwei Liu. Avatargo: Zero-shot 4d human-object interaction generation and animation. arXiv preprint arXiv:2410.07164, 2024. 20
+[12] Zenghao Chai, Chen Tang, Yongkang Wong, and Mohan Kankanhalli. Star: Skeleton-aware text-based 4d avatar generation with in-network motion retargeting. arXiv preprint arXiv:2406.04629, 2024. 20
+[13] Eric R Chan, Connor Z Lin, Matthew A Chan, Koki Nagano, Boxiao Pan, Shalini De Mello, Orazio Gallo, Leonidas J Guibas, Jonathan Tremblay, Sameh Khamis, et al. Efficient geometry-aware 3D generative adversarial networks. In Proc. CVPR, 2022. 20
+[14] Eric R Chan, Koki Nagano, Matthew A Chan, Alexander W Bergman, Jeong Joon Park, Axel Levy, Miika Aittala, Shalini De Mello, Tero Karras, and Gordon Wetzstein. Generative novel view synthesis with 3d-aware diffusion models. In Proc. ICCV, 2023. 20
+[15] Ce Chen, Shaoli Huang, Xuelin Chen, Guangyi Chen, Xiaoguang Han, Kun Zhang, and Mingming Gong. Ct4d: Consistent text-to-4d generation with animatable meshes. arXiv preprint arXiv:2408.08342, 2024. 20
+[16] Eric Ming Chen, Sidhanth Holalkere, Ruyu Yan, Kai Zhang, and Abe Davis. Ray conditioning: Trading photo-consistency for photo-realism in multi-view image generation. In Proc. ICCV, 2023. 4
+[17] Haoxin Chen, Menghan Xia, Yingqing He, Yong Zhang, Xiaodong Cun, Shaoshu Yang, Jinbo Xing, Yaofang Liu, Qifeng Chen, Xintao Wang, Chao Weng, and Ying Shan. Videocrafter1: Open diffusion models for high-quality video generation. arXiv preprint arXiv:2310.19512, 2023. 3
+[18] Junsong Chen, Yue Wu, Simian Luo, Enze Xie, Sayak Paul, Ping Luo, Hang Zhao, and Zhenguo Li. Pixart-δ: Fast and controllable image generation with latent consistency models. arXiv preprint arXiv:2401.05252, 2024. 4
+[19] Kevin Chen, Christopher B Choy, Manolis Savva, Angel X Chang, Thomas Funkhouser, and Silvio Savarese. Text2Shape: Generating shapes from natural language by learning joint embeddings. In Proc. ACCV, 2018. 20
+[20] Rui Chen, Yongwei Chen, Ningxin Jiao, and Kui Jia. Fantasia3D: Disentangling geometry and appearance for high-quality text-to-3D content creation. arXiv preprint arXiv:2303.13873, 2023. 20
+[21] Ting Chen and Lala Li. Fit: Far-reaching interleaved transformers. arXiv preprint arXiv:2305.12689, 2023. 7
+[22] Soon Yau Cheong, Duygu Ceylan, Armin Mustafa, Andrew Gilbert, and Chun-Hao Paul Huang. Boosting camera motion control for video diffusion transformers. arXiv preprint arXiv:2410.10802, 2024. 3
+[23] Wen-Hsuan Chu, Lei Ke, and Katerina Fragkiadaki. Dreamscene4d: Dynamic multi-object scene generation from monocular videos. arXiv preprint arXiv:2405.02280, 2024. 2, 20
+[24] Terrance DeVries, Miguel Angel Bautista, Nitish Srivastava, Graham W Taylor, and Joshua M Susskind. Unconstrained scene generation with locally conditioned radiance fields. In Proc. ICCV, 2021. 20
+[25] Sander Dieleman. Perspectives on diffusion, 2023. 2
+[26] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In Proc. ICLR, 2021. 17
+[27] Mohamed El Banani, Amit Raj, Kevis-Kokitsi Maninis, Abhishek Kar, Yuanshen Li, Michael Rubinstein, Deqing Sun, Leonidas Guibas, Justin Johnson, and Varun Jampani. Probing the 3d awareness of visual foundation models. In Proc. CVPR, 2024. 2
+[28] Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas Müller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, et al. Scaling rectified flow transformers for high-resolution image synthesis. In Proc. ICML, 2024. 4, 5, 18
+[29] Gunnar Farnebäck. Two-frame motion estimation based on polynomial expansion. In Proc. SCIA, 2003. 19
+[30] Qijun Feng, Zhen Xing, Zuxuan Wu, and Yu-Gang Jiang. FDGaussian: Fast Gaussian splatting from single image via geometric-aware diffusion model. arXiv preprint arXiv:2403.10242, 2024. 20
+[31] Yutao Feng, Yintong Shang, Xiang Feng, Lei Lan, Shandian Zhe, Tianjia Shao, Hongzhi Wu, Kun Zhou, Hao Su, Chenfanfu Jiang, et al. Elastogen: 4d generative elastodynamics. arXiv preprint arXiv:2405.15056, 2024. 20
+[32] Peng Gao, Le Zhuo, Ziyi Lin, Dongyang Liu, Ruoyi Du, Xu Luo, Longtian Qiu, Yuhang Zhang, et al. Lumina-t2x: Transforming text into any modality, resolution, and duration via flow-based large diffusion transformers. arXiv preprint arXiv:2405.05945, 2024. 3, 17, 18
+[33] Quankai Gao, Qiangeng Xu, Zhe Cao, Ben Mildenhall, Wenchao Ma, Le Chen, Danhang Tang, and Ulrich Neumann. Gaussianflow: Splitting Gaussian dynamics for 4D content creation. arXiv preprint arXiv:2403.12365, 2024. 2, 20
+[34] Ruiqi Gao, Aleksander Holynski, Philipp Henzler, Arthur Brussee, Ricardo Martin-Brualla, Pratul Srinivasan, Jonathan T Barron, and Ben Poole. Cat3d: Create anything in 3d with multi-view diffusion models. In Proc. NeurIPS, 2024. 20
+[35] William Gao, Noam Aigerman, Thibault Groueix, Vova Kim, and Rana Hanocka. TextDeformer: Geometry manipulation using text guidance. In SIGGRAPH, 2023. 20
+[36] Jiatao Gu, Alex Trevithick, Kai-En Lin, Joshua M Susskind, Christian Theobalt, Lingjie Liu, and Ravi Ramamoorthi. NerfDiff: Single-image view synthesis with NeRF-guided distillation from 3D-aware diffusion. In Proc. ICML, 2023. 20
+[37] Yuwei Guo, Ceyuan Yang, Anyi Rao, Yaohui Wang, Yu Qiao, Dahua Lin, and Bo Dai. Animatediff: Animate your personalized text-to-image diffusion models without specific tuning. In Proc. ICLR, 2024. 2, 7
+[38] Junlin Han, Filippos Kokkinos, and Philip Torr. VFusion3D: Learning scalable 3D generative models from video diffusion models. arXiv preprint arXiv:2403.12034, 2024. 20
+[39] Hao He, Yinghao Xu, Yuwei Guo, Gordon Wetzstein, Bo Dai, Hongsheng Li, and Ceyuan Yang. Cameractrl: Enabling camera control for text-to-video generation. arXiv preprint arXiv:2404.02101, 2024. 2, 3, 4, 5, 6, 7, 19
+[40] Xianglong He, Junyi Chen, Sida Peng, Di Huang, Yangguang Li, Xiaoshui Huang, Chun Yuan, Wanli Ouyang, and Tong He. GVGEN: Text-to-3D generation with volumetric representation. arXiv preprint arXiv:2403.12957, 2024. 20
+[41] Dan Hendrycks and Kevin Gimpel. Gaussian error linear units (gelus). arXiv preprint arXiv:1606.08415, 2016. 17, 18
+[42] Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Le Bras, and Yejin Choi. Clipscore: A reference-free evaluation metric for image captioning. arXiv preprint arXiv:2104.08718, 2021. 7
+[43] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In Proc. NeurIPS, 2017. 7
+[44] Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598, 2022. 4, 6, 7
+[45] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In Proc. NeurIPS, 2020. 3
+[46] Lukas Hollein, Ang Cao, Andrew Owens, Justin Johnson, and Matthias Nießner. Text2room: Extracting textured 3d meshes from 2d text-to-image models. In Proc. ICCV, 2023. 20
+[47] Lukas Hollein, Aljaž Božić, Norman Müller, David Novotny, Hung-Yu Tseng, Christian Richardt, Michael Zollhöfer, and Matthias Nießner. ViewDiff: 3D-consistent image generation with text-to-image models. In Proc. CVPR, 2024. 20
+[48] Yicong Hong, Kai Zhang, Jiuxiang Gu, Sai Bi, Yang Zhou, Difan Liu, Feng Liu, Kalyan Sunkavalli, Trung Bui, and Hao Tan. LRM: Large reconstruction model for single image to 3D. In Proc. ICLR, 2024. 20
+[49] Chen Hou, Guoqiang Wei, Yan Zeng, and Zhibo Chen. Training-free camera control for video generation. arXiv preprint arXiv:2406.10126, 2024. 3
+[50] Teng Hu, Jiangning Zhang, Ran Yi, Yating Wang, Hongrui Huang, Jieyu Weng, Yabiao Wang, and Lizhuang Ma. Motionmaster: Training-free camera motion transfer for video generation. arXiv preprint arXiv:2404.15789, 2024. 3
+[51] Chun-Hao Paul Huang, Jae Shin Yoon, Hyeonho Jeong, Niloy Mitra, and Duygu Ceylan. Camera pose estimation emerging in video diffusion transformer, 2024. 3
+[52] Tianyu Huang, Yihan Zeng, Hui Li, Wangmeng Zuo, and Rynson WH Lau. Dreamphysics: Learning physical properties of dynamic 3d gaussians with video diffusion priors. arXiv preprint arXiv:2406.01476, 2024. 20
+[53] Ajay Jain, Ben Mildenhall, Jonathan T Barron, Pieter Abbeel, and Ben Poole. Zero-shot text-guided object generation with dream fields. In Proc. CVPR, 2022. 20
+[54] Yash Jain, Anshul Nasery, Vibhav Vineet, and Harkirat Behl. Peekaboo: Interactive video generation via masked-diffusion. In Proc. CVPR, 2024. 20
+[55] Nikolay Jetchev. ClipMatrix: Text-controlled creation of 3D textured meshes. arXiv preprint arXiv:2109.12922, 2021. 20
+[56] Lutao Jiang and Lin Wang. Brightdreamer: Generic 3D Gaussian generative framework for fast text-to-3D synthesis. arXiv preprint arXiv:2403.11273, 2024. 20
+[57] Yanqin Jiang, Chaohui Yu, Chenjie Cao, Fan Wang, Weiming Hu, and Jin Gao. Animate3d: Animating any 3d model with multi-view video diffusion. arXiv preprint arXiv:2407.11398, 2024. 20
+[58] Yanqin Jiang, Li Zhang, Jin Gao, Weimin Hu, and Yao Yao. Consistent4d: Consistent 360° dynamic object generation from monocular video. In Proc. ICLR, 2024. 20
+[59] Yash Kant, Aliaksandr Siarohin, Ziyi Wu, Michael Vasilkovsky, Guocheng Qian, Jian Ren, Riza Alp Guler, Bernard Ghanem, Sergey Tulyakov, and Igor Gilitschenski. Spad: Spatially aware multi-view diffusers. In Proc. CVPR, 2024. 4
+[60] Tero Karras, Miika Aittala, Jaakko Lehtinen, Janne Hellsten, Timo Aila, and Samuli Laine. Analyzing and improving the training dynamics of diffusion models. In Proc. CVPR, 2024. 17
+[61] Oren Katzir, Or Patashnik, Daniel Cohen-Or, and Dani Lischinski. Noise-free score distillation. In Proc. ICLR, 2024. 20
+[62] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkuhler, and George Drettakis. 3d gaussian splatting for real-time radiance field rendering. In ACM TOG, 2023. 20
+[63] Seung Wook Kim, Bradley Brown, Kangxue Yin, Karsten Kreis, Katja Schwarz, Daiqing Li, Robin Rombach, Antonio Torralba, and Sanja Fidler. NeuralField-LDM: Scene generation with hierarchical latent diffusion models. In Proc. CVPR, 2023. 20
+[64] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013. 3
+[65] Zhengfei Kuang, Shengqu Cai, Hao He, Yinghao Xu, Hongsheng Li, Leonidas Guibas, and Gordon Wetzstein. Collaborative video diffusion: Consistent multi-video generation with camera control. In Proc. NeurIPS, 2024. 3
+[66] Kyungmin Lee, Kihyuk Sohn, and Jinwoo Shin. DreamFlow: High-quality text-to-3D generation by approximating probability flow. In Proc. ICLR, 2024. 20
+[67] Yao-Chih Lee, Yi-Ting Chen, Andrew Wang, Ting-Hsuan Liao, Brandon Y Feng, and Jia-Bin Huang. Vividdream: Generating 3d scene with ambient dynamics. arXiv preprint arXiv:2405.20334, 2024. 20
+[68] Guojun Lei, Chi Wang, Hong Li, Rong Zhang, Yikai Wang, and Weiwei Xu. Animateanything: Consistent and controllable animation for video generation. arXiv preprint arXiv:2411.10836, 2024. 3
+[69] Bing Li, Cheng Zheng, Wenxuan Zhu, Jinjie Mai, Biao Zhang, Peter Wonka, and Bernard Ghanem. Vivid-zoo: Multi-view video generation with diffusion model. arXiv preprint arXiv:2406.08659, 2024. 20
+[70] Jiahao Li, Hao Tan, Kai Zhang, Zexiang Xu, Fujun Luan, Yinghao Xu, Yicong Hong, Kalyan Sunkavalli, Greg Shakhnarovich, and Sai Bi. Instant3D: Fast text-to-3D with sparse-view generation and large reconstruction model. In Proc. ICLR, 2024. 20
+[71] Ming Li, Taojiannan Yang, Huafeng Kuang, Jie Wu, Zhaoning Wang, Xuefeng Xiao, and Chen Chen. Controlnet++: Improving conditional controls with efficient consistency feedback. In ECCV, 2024. 3, 4, 7, 18
+[72] Renjie Li, Panwang Pan, Bangbang Yang, Dejia Xu, Shijie Zhou, Xuanyang Zhang, Zeming Li, Achuta Kadambi, Zhangyang Wang, and Zhiwen Fan. 4k4dgen: Panoramic 4d generation at 4k resolution. arXiv preprint arXiv:2406.13527, 2024. 20
+[73] Zhiqi Li, Yiming Chen, and Peidong Liu. Dreammesh4d: Video-to-4d generation with sparse-controlled gaussian-mesh hybrid representation. arXiv preprint arXiv:2410.06756, 2024. 20
+[74] Zhiqi Li, Yiming Chen, Lingzhe Zhao, and Peidong Liu. Controllable text-to-3D generation via surface-aligned Gaussian splatting. arXiv preprint arXiv:2403.09981, 2024. 20
+[75] Zhengqi Li, Richard Tucker, Noah Snavely, and Aleksander Holynski. Generative image dynamics. In Proc. CVPR, 2024. 2, 4, 16, 19
+[76] Hanwen Liang, Yuyang Yin, Dejia Xu, Hanxue Liang, Zhangyang Wang, Konstantinos N Plataniotis, Yao Zhao, and Yunchao Wei. Diffusion4d: Fast spatial-temporal consistent 4d generation via video diffusion models. arXiv preprint arXiv:2405.16645, 2024. 2, 20
+[77] Yixun Liang, Xin Yang, Jiantao Lin, Haodong Li, Xiaogang Xu, and Yingcong Chen. Luciddreamer: Towards high-fidelity text-to-3D generation via interval score matching. arXiv preprint arXiv:2311.11284, 2023. 20
+[78] Chen-Hsuan Lin, Jun Gao, Luming Tang, Towaki Takikawa, Xiaohui Zeng, Xun Huang, Karsten Kreis, Sanja Fidler, Ming-Yu Liu, and Tsung-Yi Lin. Magic3D: High-resolution text-to-3D content creation. In Proc. CVPR, 2023. 20
+[79] Jiajing Lin, Zhenzhong Wang, Yongjie Hou, Yuzhou Tang, and Min Jiang. Phy124: Fast physics-driven 4d content generation from a single image. arXiv preprint arXiv:2409.07179, 2024. 20
+[80] Yukang Lin, Haonan Han, Chaoqun Gong, Zunnan Xu, Yachao Zhang, and Xiu Li. Consistent123: One image to highly consistent 3D asset using case-aware diffusion priors. arXiv preprint arXiv:2309.17261, 2023. 20
+[81] Huan Ling, Seung Wook Kim, Antonio Torralba, Sanja Fidler, and Karsten Kreis. Align your gaussians: Text-to-4d with dynamic 3d gaussians and composed diffusion models. In Proc. CVPR, 2024. 2, 20
+[82] Pengyang Ling, Jiazi Bu, Pan Zhang, Xiaoyi Dong, Yuhang Zang, Tong Wu, Huaian Chen, Jiaqi Wang, and Yi Jin. Motionclone: Training-free motion cloning for controllable video generation. arXiv preprint arXiv:2406.05338, 2024. 3
+[83] Pengkun Liu, Yikai Wang, Fuchun Sun, Jiafang Li, Hang Xiao, Hongxiang Xue, and Xinzhou Wang. Isotropic3D: Image-to-3D generation based on a single clip embedding. arXiv preprint arXiv:2403.10395, 2024. 20
+[84] Ruoshi Liu, Rundi Wu, Basile Van Hoorick, Pavel Tokmakov, Sergey Zakharov, and Carl Vondrick. Zero-1-to-3: Zero-shot one image to 3D object. In Proc. ICCV, 2023. 20
+[85] Xingchao Liu, Chengyue Gong, and Qiang Liu. Flow straight and fast: Learning to generate and transfer data with rectified flow. arXiv preprint arXiv:2209.03003, 2022. 3, 4
+[86] Xian Liu, Xiaohang Zhan, Jiaxiang Tang, Ying Shan, Gang Zeng, Dahua Lin, Xihui Liu, and Ziwei Liu. HumanGaussian: Text-driven 3D human generation with Gaussian splatting. In Proc. CVPR, 2024. 20
+[87] Yuan Liu, Cheng Lin, Zijiao Zeng, Xiaoxiao Long, Lingjie Liu, Taku Komura, and Wenping Wang. SyncDreamer: Generating multiview-consistent images from a single-view image. In Proc. ICLR, 2024. 20
+[88] Xiaoxiao Long, Yuan-Chen Guo, Cheng Lin, Yuan Liu, Zhiyang Dou, Lingjie Liu, Yuexin Ma, Song-Hai Zhang, Marc Habermann, Christian Theobalt, et al. Wonder3D: Single image to 3D using cross-domain diffusion. In Proc. CVPR, 2024. 20
+[89] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017. 17, 18
+[90] Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983, 2016. 17, 18
+[91] Wan-Duo Kurt Ma, John P Lewis, and W Bastiaan Kleijn. Trailblazer: Trajectory control for diffusion-based video generation. arXiv preprint arXiv:2401.00896, 2023. 20
+[92] Xin Ma, Yaohui Wang, Gengyun Jia, Xinyuan Chen, Ziwei Liu, Yuan-Fang Li, Cunjian Chen, and Yu Qiao. Latte: Latent diffusion transformer for video generation. arXiv preprint arXiv:2401.03048, 2024. 2
+[93] Willi Menapace, Aliaksandr Siarohin, Ivan Skorokhodov, Ekaterina Deyneka, Tsai-Shien Chen, Anil Kag, Yuwei Fang, Aleksei Stoliar, Elisa Ricci, Jian Ren, et al. Snap video: Scaled spatiotemporal transformers for text-to-video synthesis. In Proc. CVPR, 2024. 2, 7, 17
+[94] Qiaowei Miao, Yawei Luo, and Yi Yang. Pla4d: Pixel-level alignments for text-to-4d gaussian splatting. arXiv preprint arXiv:2405.19957, 2024. 20
+[95] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In Proc. ECCV, 2020. 20
+[96] Koichi Namekata, Sherwin Bahmani, Ziyi Wu, Yash Kant, Igor Gilitschenski, and David B Lindell. Sg-i2v: Self-guided trajectory control in image-to-video generation. arXiv preprint arXiv:2411.04989, 2024. 20
+[97] Roy Or-El, Xuan Luo, Mengyi Shan, Eli Shechtman, Jeong Joon Park, and Ira Kemelmacher-Shlizerman. StyleSDF: High-resolution 3D-consistent image and geometry generation. In Proc. CVPR, 2022. 20
+[98] Zijie Pan, Zeyu Yang, Xiatian Zhu, and Li Zhang. Fast dynamic 3d object generation from a single-view video. arXiv preprint arXiv:2401.08742, 2024. 2, 20
+[99] Deepak Pathak, Ross Girshick, Piotr Dollár, Trevor Darrell, and Bharath Hariharan. Learning features by watching objects move. In Proc. CVPR, 2017. 19
+[100] William Peebles and Saining Xie. Scalable diffusion models with transformers. In Proc. ICCV, 2023. 2, 3, 7, 17
+[101] Ryan Po, Wang Yifan, Vladislav Golyanik, Kfir Aberman, Jonathan T Barron, Amit H Bermano, Eric Ryan Chan, Tali Dekel, Aleksander Holynski, Angjoo Kanazawa, et al. State of the art on diffusion models for visual computing. arXiv preprint arXiv:2310.07204, 2023. 2
+[102] Adam Polyak, Amit Zohar, Andrew Brown, Andros Tjandra, Animesh Sinha, Ann Lee, Apoorv Vyas, Bowen Shi, Chih-Yao Ma, Ching-Yao Chuang, et al. Movie gen: A cast of media foundation models. arXiv preprint arXiv:2410.13720, 2024. 3, 17
+[103] Ben Poole, Ajay Jain, Jonathan T. Barron, and Ben Mildenhall. DreamFusion: Text-to-3D using 2D diffusion. In Proc. ICLR, 2023. 20
+[104] Guocheng Qian, Junli Cao, Aliaksandr Siarohin, Yash Kant, Chaoyang Wang, Michael Vasilkovsky, Hsin-Ying Lee, Yuwei Fang, Ivan Skorokhodov, Peiye Zhuang, et al. Atom: Amortized text-to-mesh using 2d diffusion. arXiv preprint arXiv:2402.00867, 2024. 20
+[105] Guocheng Qian, Jinjie Mai, Abdullah Hamdi, Jian Ren, Aliaksandr Siarohin, Bing Li, Hsin-Ying Lee, Ivan Skorokhodov, Peter Wonka, Sergey Tulyakov, et al. Magic123: One image to high-quality 3D object generation using both 2D and 3D diffusion priors. In Proc. ICLR, 2024. 20
+[106] Haonan Qiu, Zhaoxi Chen, Zhouxia Wang, Yingqing He, Menghan Xia, and Ziwei Liu. Freetraj: Tuning-free trajectory control in video diffusion models. arXiv preprint arXiv:2406.16863, 2024.
20 +[107] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In Proc. ICML, 2021. 20 +[108] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. In Proc. JMLR, 2020. 3, 17 +[109] Jiawei Ren, Liang Pan, Jiaxiang Tang, Chi Zhang, Ang Cao, Gang Zeng, and Ziwei Liu. DreamGaussian4D: Generative 4D Gaussian splatting. arXiv preprint arXiv:2312.17142, 2023. 2, 20 +[110] Jiawei Ren, Kevin Xie, Ashkan Mirzaei, Hanxue Liang, Xiaohui Zeng, Karsten Kreis, Ziwei Liu, Antonio Torralba, Sanja Fidler, Seung Wook Kim, et al. L4gm: Large 4d gaussian reconstruction model. In Proc. NeurIPS, 2024. 3, 20 +[111] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proc. CVPR, 2022. 3, 19 +[112] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In Proc. MICCAI, 2015. 4 +[113] Aditya Sanghi, Hang Chu, Joseph G Lambourne, Ye Wang, Chin-Yi Cheng, Marco Fumero, and Kamal Rahimi Malekshan. CLIP-Forge: Towards zero-shot text-to-shape generation. In Proc. CVPR, 2022. 20 +[114] Johannes Lutz Schonberger and Jan-Michael Frahm. Structure-from-motion revisited. In Proc. CVPR, 2016. 5, 20 +[115] Johannes Lutz Schonberger, Enliang Zheng, Marc Pollefeys, and Jan-Michael Frahm. Pixelwise view selection for unstructured multi-view stereo. In Proc. ECCV, 2016. 5, 20 +[116] Katja Schwarz, Axel Sauer, Michael Niemeyer, Yiyi Liao, and Andreas Geiger. VoxGRAF: Fast 3D-aware image synthesis with sparse voxel grids. In Proc. NeurIPS, 2022. 20 +[117] Yichun Shi, Peng Wang, Jianglong Ye, Long Mai, Kejie Li, and Xiao Yang. MVDream: Multi-view diffusion for 3D generation. In Proc. ICLR, 2024. 20 +[118] Jaidev Shriram, Alex Trevithick, Lingjie Liu, and Ravi Ramamoorthi. Realmdreamer: Text-driven 3d scene generation with inpainting and depth diffusion. arXiv preprint arXiv:2404.07199, 2024. 20 +[119] Uriel Singer, Adam Polyak, Thomas Hayes, Xi Yin, Jie An, Songyang Zhang, Qiyuan Hu, Harry Yang, Oron Ashual, Oran Gafni, et al. Make-a-video: Text-to-video generation without text-video data. Proc. ICLR, 2023. 2 + +[120] Uriel Singer, Shelly Sheynin, Adam Polyak, Oron Ashual, Iurii Makarov, Filippos Kokkinos, Naman Goyal, Andrea Vedaldi, Devi Parikh, Justin Johnson, et al. Text-to-4d dynamic scene generation. In Proc. ICML, 2023. 2, 20 +[121] Vincent Sitzmann, Semon Rezchikov, Bill Freeman, Josh Tenenbaum, and Fredo Durand. Light field networks: Neural scene representations with single-evaluation rendering. In Proc. NeurIPS, 2021. 4 +[122] Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In Proc. ICML, 2015. 3 +[123] Jianlin Su, Murtadha Ahmed, Yu Lu, Shengfeng Pan, Wen Bo, and Yunfeng Liu. Roformer: Enhanced transformer with rotary position embedding. In Neurocomputing, 2024. 17 +[124] Jingxiang Sun, Bo Zhang, Ruizhi Shao, Lizhen Wang, Wen Liu, Zhenda Xie, and Yebin Liu. DreamCraft3D: Hierarchical 3D generation with bootstrapped diffusion prior. In Proc. ICLR, 2024. 20 +[125] Keqiang Sun, Dor Litvak, Yunzhi Zhang, Hongsheng Li, Jiajun Wu, and Shangzhe Wu. 
Ponymation: Learning articulated 3d animal motions from unlabeled online videos. In Proc. ECCV, 2024. 20
+[126] Qi Sun, Zhiyang Guo, Ziyu Wan, Jing Nathan Yan, Shengming Yin, Wengang Zhou, Jing Liao, and Houqiang Li. Eg4d: Explicit generation of 4d object without score distillation. arXiv preprint arXiv:2405.18132, 2024. 20
+[127] Wenqiang Sun, Shuo Chen, Fangfu Liu, Zilong Chen, Yueqi Duan, Jun Zhang, and Yikai Wang. Dimensionx: Create any 3d and 4d scenes from a single image with controllable video diffusion. arXiv preprint arXiv:2411.04928, 2024. 3
+[128] Stanislaw Szymanowicz, Christian Rupprecht, and Andrea Vedaldi. Viewset diffusion: (0-) image-conditioned 3d generative models from 2d data. In Proc. ICCV, 2023. 20
+[129] Stanislaw Szymanowicz, Eldar Insafutdinov, Chuanxia Zheng, Dylan Campbell, João F Henriques, Christian Rupprecht, and Andrea Vedaldi. Flash3d: Feed-forward generalisable 3d scene reconstruction from a single image. arXiv preprint arXiv:2406.04343, 2024. 20
+[130] Stanislaw Szymanowicz, Christian Rupprecht, and Andrea Vedaldi. Splatter image: Ultra-fast single-view 3D reconstruction. In Proc. CVPR, 2024. 20
+[131] Junshu Tang, Tengfei Wang, Bo Zhang, Ting Zhang, Ran Yi, Lizhuang Ma, and Dong Chen. Make-it-3D: High-fidelity 3D creation from a single image with diffusion prior. arXiv preprint arXiv:2303.14184, 2023. 20
+[132] Jiaxiang Tang, Zhaoxi Chen, Xiaokang Chen, Tengfei Wang, Gang Zeng, and Ziwei Liu. LGM: Large multi-view gaussian model for high-resolution 3d content creation. Proc. ECCV, 2024. 20
+[133] Zhenggang Tang, Peiye Zhuang, Chaoyang Wang, Aliaksandr Siarohin, Yash Kant, Alexander Schwing, Sergey Tulyakov, and Hsin-Ying Lee. Pixel-aligned multi-view generation with depth guided decoder. arXiv preprint arXiv:2408.14016, 2024. 20
+[134] Ayush Tewari, Tianwei Yin, George Cazenavette, Semon Rezchikov, Josh Tenenbaum, Frédo Durand, Bill Freeman, and Vincent Sitzmann. Diffusion with forward models: Solving stochastic inverse problems without direct supervision. In Proc. NeurIPS, 2023. 20
+[135] Dmitry Tochilkin, David Pankratz, Zexiang Liu, Zixuan Huang, Adam Letts, Yangguang Li, Ding Liang, Christian Laforte, Varun Jampani, and Yan-Pei Cao. Triposr: Fast 3D object reconstruction from a single image. arXiv preprint arXiv:2403.02151, 2024. 20
+[136] Thomas Unterthiner, Sjoerd van Steenkiste, Karol Kurach, Raphael Marinier, Marcin Michalski, and Sylvain Gelly. Towards accurate generative models of video: A new metric & challenges. arXiv preprint arXiv:1812.01717, 2018. 7
+[137] Lukas Uzolas, Elmar Eisemann, and Petr Kellnhofer. Motiondreamer: Zero-shot 3d mesh animation from video diffusion models. arXiv preprint arXiv:2405.20155, 2024. 20
+[138] Basile Van Hoorick, Rundi Wu, Ege Ozguroglu, Kyle Sargent, Ruoshi Liu, Pavel Tokmakov, Achal Dave, Changxi Zheng, and Carl Vondrick. Generative camera dolly: Extreme monocular dynamic novel view synthesis. In Proc. ECCV, 2024. 20
+[139] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Proc. NeurIPS, 2017. 3
+[140] Vikram Voleti, Chun-Han Yao, Mark Boss, Adam Letts, David Pankratz, Dmitry Tochilkin, Christian Laforte, Robin Rombach, and Varun Jampani. SV3D: Novel multi-view synthesis and 3D generation from a single image using latent video diffusion. arXiv preprint arXiv:2403.12008, 2024. 20
+[141] Ziyu Wan, Despoina Paschalidou, Ian Huang, Hongyu Liu, Bokui Shen, Xiaoyu Xiang, Jing Liao, and Leonidas Guibas. CAD: Photorealistic 3D generation via adversarial distillation. In Proc. CVPR, 2024. 20
+[142] Can Wang, Menglei Chai, Mingming He, Dongdong Chen, and Jing Liao. Clip-NeRF: Text-and-image driven manipulation of neural radiance fields. In Proc. CVPR, 2022. 20
+[143] Haochen Wang, Xiaodan Du, Jiahao Li, Raymond A Yeh, and Greg Shakhnarovich. Score Jacobian chaining: Lifting pretrained 2d diffusion models for 3D generation. In Proc. CVPR, 2023. 20
+[144] Jiawei Wang, Yuchen Zhang, Jiaxin Zou, Yan Zeng, Guoqiang Wei, Liping Yuan, and Hang Li. Boximator: Generating rich and controllable motions for video synthesis. arXiv preprint arXiv:2402.01566, 2024. 20
+[145] Shuzhe Wang, Vincent Leroy, Yohann Cabon, Boris Chidlovskii, and Jerome Revaud. Dust3r: Geometric 3d vision made easy. In Proc. CVPR, 2024. 3, 5, 6
+[146] Xiang Wang, Hangjie Yuan, Shiwei Zhang, Dayou Chen, Jiuniu Wang, Yingya Zhang, Yujun Shen, Deli Zhao, and Jingren Zhou. Videocomposer: Compositional video synthesis with motion controllability. arXiv preprint arXiv:2306.02018, 2023. 20
+[147] Yikai Wang, Xinzhou Wang, Zilong Chen, Zhengyi Wang, Fuchun Sun, and Jun Zhu. Vidu4d: Single generated video to high-fidelity 4d reconstruction with dynamic gaussian surfels. arXiv preprint arXiv:2405.16822, 2024. 20
+[148] Zhengyi Wang, Cheng Lu, Yikai Wang, Fan Bao, Chongxuan Li, Hang Su, and Jun Zhu. ProlificDreamer: High-fidelity and diverse text-to-3D generation with variational score distillation. In Proc. NeurIPS, 2023. 20
+[149] Zhouxia Wang, Ziyang Yuan, Xintao Wang, Tianshui Chen, Menghan Xia, Ping Luo, and Ying Shan. Motionctrl: A unified and flexible motion controller for video generation. In SIGGRAPH, 2024. 2, 3, 4, 6, 7
+[150] Daniel Watson, Saurabh Saxena, Lala Li, Andrea Tagliasacchi, and David J Fleet. Controlling space and time with diffusion models. arXiv preprint arXiv:2407.07860, 2024. 3, 4, 6, 8
+[151] Ruiqi Wu, Liangyu Chen, Tong Yang, Chunle Guo, Chongyi Li, and Xiangyu Zhang. Lamp: Learn a motion pattern for few-shot-based video generation. arXiv preprint arXiv:2310.10769, 2023. 2
+[152] Rundi Wu, Ruiqi Gao, Ben Poole, Alex Trevithick, Changxi Zheng, Jonathan T Barron, and Aleksander Holynski. Cat4d: Create anything in 4d with multi-view video diffusion models. arXiv preprint arXiv:2411.18613, 2024. 3
+[153] Weijia Wu, Zhuang Li, Yuchao Gu, Rui Zhao, Yefei He, David Junhao Zhang, Mike Zheng Shou, Yan Li, Tingting Gao, and Di Zhang. Draganything: Motion control for anything using entity representation. In Proc. ECCV, 2024. 20
+[154] Zijie Wu, Chaohui Yu, Yanqin Jiang, Chenjie Cao, Fan Wang, and Xiang Bai. Sc4d: Sparse-controlled video-to-4d generation and motion transfer. arXiv preprint arXiv:2404.03736, 2024. 20
+[155] Zeqi Xiao, Yifan Zhou, Shuai Yang, and Xingang Pan. Video diffusion models are training-free motion interpreter and controller. arXiv preprint arXiv:2405.14864, 2024. 3
+[156] Kevin Xie, Jonathan Lorraine, Tianshi Cao, Jun Gao, James Lucas, Antonio Torralba, Sanja Fidler, and Xiaohui Zeng. LATTE3D: Large-scale amortized text-to-enhanced3D synthesis. In Proc. ECCV, 2024. 20
+[157] Yiming Xie, Chun-Han Yao, Vikram Voleti, Huaizu Jiang, and Varun Jampani. Sv4d: Dynamic 3d content generation with multi-frame and multi-view consistency. arXiv preprint arXiv:2407.17470, 2024. 20
+[158] Dejia Xu, Yifan Jiang, Chen Huang, Liangchen Song, Thorsten Gernoth, Liangliang Cao, Zhangyang Wang, and Hao Tang. Cavia: Camera-controllable multi-view video diffusion with view-integrated attention. arXiv preprint arXiv:2410.10774, 2024. 3
+[159] Dejia Xu, Hanwen Liang, Neel P Bhatt, Hezhen Hu, Hanxue Liang, Konstantinos N Plataniotis, and Zhangyang Wang. Comp4d: Llm-guided compositional 4d scene generation. arXiv preprint arXiv:2403.16993, 2024. 2, 20
+[160] Dejia Xu, Weili Nie, Chao Liu, Sifei Liu, Jan Kautz, Zhangyang Wang, and Arash Vahdat. Camco: Camera-controllable 3d-consistent image-to-video generation. arXiv preprint arXiv:2406.02509, 2024. 3
+[161] Jun Xu, Tao Mei, Ting Yao, and Yong Rui. Msr-vtt: A large video description dataset for bridging video and language. In Proc. CVPR, 2016. 7, 8
+[162] Yinghao Xu, Zifan Shi, Wang Yifan, Hansheng Chen, Ceyuan Yang, Sida Peng, Yujun Shen, and Gordon Wetzstein. GRM: Large Gaussian reconstruction model for efficient 3D reconstruction and generation. In Proc. ECCV, 2024. 20
+[163] Yinghao Xu, Hao Tan, Fujun Luan, Sai Bi, Peng Wang, Jiahao Li, Zifan Shi, Kalyan Sunkavalli, Gordon Wetzstein, Zexiang Xu, et al. DMV3D: Denoising multi-view diffusion using 3D large reconstruction model. In Proc. ICLR, 2024. 20
+[164] Zhongcong Xu, Jianfeng Zhang, Jun Hao Liew, Wenqing Zhang, Song Bai, Jiashi Feng, and Mike Zheng Shou. Pv3d: A 3d generative model for portrait video generation. In Proc. ICLR, 2023. 2, 20
+[165] Qitong Yang, Mingtao Feng, Zijie Wu, Shijie Sun, Weisheng Dong, Yaonan Wang, and Ajmal Mian. Beyond skeletons: Integrative latent mapping for coherent 4d sequence generation. arXiv preprint arXiv:2403.13238, 2024. 20
+[166] Shiyuan Yang, Liang Hou, Haibin Huang, Chongyang Ma, Pengfei Wan, Di Zhang, Xiaodong Chen, and Jing Liao. Direct-a-video: Customized video generation with user-directed camera movement and object motion. arXiv preprint arXiv:2402.03162, 2024. 2, 20
+[167] Zeyu Yang, Zijie Pan, Chun Gu, and Li Zhang. Diffusion2: Dynamic 3d content generation via score composition of orthogonal diffusion models. arXiv preprint arXiv:2404.02148, 2024. 20
+[168] Zhuoyi Yang, Jiayan Teng, Wendi Zheng, Ming Ding, Shiyu Huang, Jiazheng Xu, Yuanming Yang, Wenyi Hong, Xiaohan Zhang, Guanyu Feng, et al. Cogvideox: Text-to-video diffusion models with an expert transformer. arXiv preprint arXiv:2408.06072, 2024. 2, 3, 16, 17, 18, 19
+[169] Zhuoyi Yang, Jiayan Teng, Wendi Zheng, Ming Ding, Shiyu Huang, Jiazheng Xu, Yuanming Yang, Wenyi Hong, Xiaohan Zhang, Guanyu Feng, et al. Cogvideox: Text-to-video diffusion models with an expert transformer. arXiv preprint arXiv:2408.06072, 2024. 7, 16, 17
+[170] Junliang Ye, Fangfu Liu, Qixiu Li, Zhengyi Wang, Yikai Wang, Xinzhou Wang, Yueqi Duan, and Jun Zhu. DreamReward: Text-to-3D generation with human preference. arXiv preprint arXiv:2403.14613, 2024. 20
+[171] Shengming Yin, Chenfei Wu, Jian Liang, Jie Shi, Houqiang Li, Gong Ming, and Nan Duan. Dragnuwa: Fine-grained control in video generation by integrating text, image, and trajectory. arXiv preprint arXiv:2308.08089, 2023. 20
+[172] Wei Yin, Chi Zhang, Hao Chen, Zhipeng Cai, Gang Yu, Kaixuan Wang, Xiaozhi Chen, and Chunhua Shen. Metric3d: Towards zero-shot metric 3d prediction from a single image. In Proc. ICCV, 2023. 20
+[173] Yuyang Yin, Dejia Xu, Zhangyang Wang, Yao Zhao, and Yunchao Wei. 4dgen: Grounded 4d content generation with spatial-temporal consistency. arXiv preprint arXiv:2312.17225, 2023. 2, 20
+[174] Paul Yoo, Jiaxian Guo, Yutaka Matsuo, and Shixiang Shane Gu. DreamSparse: Escaping from Plato's cave with 2D diffusion model given sparse views. arXiv preprint arXiv:2306.03414, 2023.
20 + +[175] Heng Yu, Chaoyang Wang, Peiye Zhuang, Willi Menapace, Aliaksandr Siarohin, Junli Cao, Laszlo A Jeni, Sergey Tulyakov, and Hsin-Ying Lee. 4real: Towards photorealistic 4d scene generation via video diffusion models. In Proc. NeurIPS, 2024. 2, 20 +[176] Shoubin Yu, Jacob Zhiyuan Fang, Jian Zheng, Gunnar A Sigurdsson, Vicente Ordonez, Robinson Piramuthu, and Mohit Bansal. Zero-shot controllable image-to-video animation via motion decomposition. In ACM MM, 2024. 20 +[177] Wangbo Yu, Jinbo Xing, Li Yuan, Wenbo Hu, Xiaoyu Li, Zhipeng Huang, Xiangjun Gao, Tien-Tsin Wong, Ying Shan, and Yonghong Tian. Viewcrafter: Taming video diffusion models for high-fidelity novel view synthesis. arXiv preprint arXiv:2409.02048, 2024. 3 +[178] Xin Yu, Yuan-Chen Guo, Yangguang Li, Ding Liang, Song-Hai Zhang, and Xiaojuan Qi. Text-to-3D with classifier score distillation. arXiv preprint arXiv:2310.19415, 2023. 20 +[179] Yu-Jie Yuan, Leif Kobbelt, Jiwen Liu, Yuan Zhang, Pengfei Wan, Yu-Kun Lai, and Lin Gao. 4dynamic: Text-to-4d generation with hybrid priors. arXiv preprint arXiv:2407.12684, 2024. 20 +[180] Raza Yunus, Jan Eric Lenssen, Michael Niemeyer, Yiyi Liao, Christian Rupprecht, Christian Theobalt, Gerard Pons-Moll, Jia-Bin Huang, Vladislav Golyanik, and Eddy Ilg. Recent trends in 3d reconstruction of general non-rigid scenes. In Computer Graphics Forum, 2024. 2 +[181] Bohan Zeng, Ling Yang, Siyu Li, Jiaming Liu, Zixiang Zhang, Juanxi Tian, Kaixin Zhu, Yongzhen Guo, Fu-Yun Wang, Minkai Xu, et al. Trans4d: Realistic geometry-aware transition for compositional text-to-4d synthesis. arXiv preprint arXiv:2410.07155, 2024. 20 +[182] Yifei Zeng, Yanqin Jiang, Siyu Zhu, Yuanxun Lu, Youtian Lin, Hao Zhu, Weiming Hu, Xun Cao, and Yao Yao. Stag4d: Spatial-temporal anchored generative 4d gaussians. arXiv preprint arXiv:2403.14939, 2024. 2, 20 +[183] Biao Zhang and Rico Sennrich. Root mean square layer normalization. In Proc. NeurIPS, 2019. 17 +[184] Bowen Zhang, Tianyu Yang, Yu Li, Lei Zhang, and Xi Zhao. Compress3D: a compressed latent space for 3D generation from a single image. arXiv preprint arXiv:2403.13524, 2024. 20 +[185] David Junhao Zhang, Roni Paiss, Shiran Zada, Nikhil Karnad, David E Jacobs, Yael Pritch, Inbar Mosseri, Mike Zheng Shou, Neal Wadhwa, and Nataniel Ruiz. Recapture: Generative video camera controls for user-provided videos using masked video fine-tuning. arXiv preprint arXiv:2411.05003, 2024. 3 +[186] Hao Zhang, Di Chang, Fang Li, Mohammad Soleymani, and Narendra Ahuja. Magicpose4d: Crafting articulated models with appearance and motion control. arXiv preprint arXiv:2405.14017, 2024. 20 +[187] Haiyu Zhang, Xinyuan Chen, Yaohui Wang, Xihui Liu, Yunhong Wang, and Yu Qiao. 4diffusion: Multi-view video diffusion model for 4d generation. arXiv preprint arXiv:2405.20674, 2024. 20 + +[188] Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models. In Proc. ICCV, 2023. 3, 4, 6, 18 +[189] Tianyuan Zhang, Hong-Xing Yu, Rundi Wu, Brandon Y Feng, Changxi Zheng, Noah Snavely, Jiajun Wu, and William T Freeman. Physdreamer: Physics-based interaction with 3d objects via video generation. In Proc. ECCV, 2024. 20 +[190] Zhichao Zhang, Hui Chen, Jinsheng Deng, Xiaqing Yin, Xingshen Song, and Ming Xu. Motion4d: A decoupled pipeline for enhanced text-to-4d generation with optimized motion patterns. SSRN, 2024. 20 +[191] Zhenghao Zhang, Junchao Liao, Menghao Li, Long Qin, and Weizhi Wang. Tora: Trajectory-oriented diffusion transformer for video generation. 
arXiv preprint arXiv:2407.21705, 2024. 20
+[192] Wang Zhao, Shaohui Liu, Hengkai Guo, Wenping Wang, and Yong-Jin Liu. Particlesfm: Exploiting dense point trajectories for localizing moving cameras in the wild. In Proc. ECCV, 2022. 5, 7, 8
+[193] Yuyang Zhao, Zhiwen Yan, Enze Xie, Lanqing Hong, Zhenguo Li, and Gim Hee Lee. Animate124: Animating one image to 4d dynamic scene. arXiv preprint arXiv:2311.14603, 2023. 2, 20
+[194] Yuyang Zhao, Chung-Ching Lin, Kevin Lin, Zhiwen Yan, Linjie Li, Zhengyuan Yang, Jianfeng Wang, Gim Hee Lee, and Lijuan Wang. Genxd: Generating any 3d and 4d scenes. arXiv preprint arXiv:2411.02319, 2024. 3
+[195] Guangcong Zheng, Teng Li, Rui Jiang, Yehao Lu, Tao Wu, and Xi Li. Cami2v: Camera-controlled image-to-video diffusion model. arXiv preprint arXiv:2410.15957, 2024. 3
+[196] Yufeng Zheng, Xueting Li, Koki Nagano, Sifei Liu, Otmar Hilliges, and Shalini De Mello. A unified approach for text- and image-guided 4d scene generation. In Proc. CVPR, 2024. 2, 20
+[197] Zangwei Zheng, Xiangyu Peng, Tianji Yang, Chenhui Shen, Shenggui Li, Hongxin Liu, Yukun Zhou, Tianyi Li, and Yang You. Open-sora: Democratizing efficient video production for all, 2024. 3, 17
+[198] Haitao Zhou, Chuang Wang, Rui Nie, Jinxiao Lin, Dongdong Yu, Qian Yu, and Changhu Wang. Trackgo: A flexible and efficient method for controllable video generation. arXiv preprint arXiv:2408.11475, 2024. 20
+[199] Tinghui Zhou, Richard Tucker, John Flynn, Graham Fyffe, and Noah Snavely. Stereo magnification: Learning view synthesis using multiplane images. In SIGGRAPH, 2018. 2, 4, 5, 6, 7, 8, 16, 18, 19, 20
+[200] Hanxin Zhu, Tianyu He, Anni Tang, Junliang Guo, Zhibo Chen, and Jiang Bian. Compositional 3d-aware video generation with llm director. arXiv preprint arXiv:2409.00558, 2024.
20 \ No newline at end of file diff --git a/2025/AC3D_ Analyzing and Improving 3D Camera Control in Video Diffusion Transformers/images.zip b/2025/AC3D_ Analyzing and Improving 3D Camera Control in Video Diffusion Transformers/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..3a493b61faf7c2f061b958e5ca5820b40ecfc3dd --- /dev/null +++ b/2025/AC3D_ Analyzing and Improving 3D Camera Control in Video Diffusion Transformers/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:22808c243a291c5cc82883534952f5b7b0399cd94b53cd84c46f2620eebf9052 +size 617838 diff --git a/2025/AC3D_ Analyzing and Improving 3D Camera Control in Video Diffusion Transformers/layout.json b/2025/AC3D_ Analyzing and Improving 3D Camera Control in Video Diffusion Transformers/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..9888a419fd06c0128bc183f2bf01f22b403da152 --- /dev/null +++ b/2025/AC3D_ Analyzing and Improving 3D Camera Control in Video Diffusion Transformers/layout.json @@ -0,0 +1,12762 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 140, + 103, + 473, + 138 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 140, + 103, + 473, + 138 + ], + "spans": [ + { + "bbox": [ + 140, + 103, + 473, + 138 + ], + "type": "text", + "content": "AC3D: Analyzing and Improving 3D Camera Control in Video Diffusion Transformers" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 97, + 160, + 514, + 190 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 97, + 160, + 514, + 190 + ], + "spans": [ + { + "bbox": [ + 97, + 160, + 514, + 190 + ], + "type": "text", + "content": "Sherwin Bahmani" + }, + { + "bbox": [ + 97, + 160, + 514, + 190 + ], + "type": "inline_equation", + "content": "^{1,2,3}" + }, + { + "bbox": [ + 97, + 160, + 514, + 190 + ], + "type": "text", + "content": " Ivan Skorokhodov" + }, + { + "bbox": [ + 97, + 160, + 514, + 190 + ], + "type": "inline_equation", + "content": "^{3}" + }, + { + "bbox": [ + 97, + 160, + 514, + 190 + ], + "type": "text", + "content": " Guocheng Qian" + }, + { + "bbox": [ + 97, + 160, + 514, + 190 + ], + "type": "inline_equation", + "content": "^{3}" + }, + { + "bbox": [ + 97, + 160, + 514, + 190 + ], + "type": "text", + "content": " Aliaksandr Siarohin" + }, + { + "bbox": [ + 97, + 160, + 514, + 190 + ], + "type": "inline_equation", + "content": "^{3}" + }, + { + "bbox": [ + 97, + 160, + 514, + 190 + ], + "type": "text", + "content": " Willi Menapace" + }, + { + "bbox": [ + 97, + 160, + 514, + 190 + ], + "type": "inline_equation", + "content": "^{3}" + }, + { + "bbox": [ + 97, + 160, + 514, + 190 + ], + "type": "text", + "content": " Andrea Tagliasacchi" + }, + { + "bbox": [ + 97, + 160, + 514, + 190 + ], + "type": "inline_equation", + "content": "^{1,4}" + }, + { + "bbox": [ + 97, + 160, + 514, + 190 + ], + "type": "text", + "content": " David B. 
Lindell" + }, + { + "bbox": [ + 97, + 160, + 514, + 190 + ], + "type": "inline_equation", + "content": "^{1,2}" + }, + { + "bbox": [ + 97, + 160, + 514, + 190 + ], + "type": "text", + "content": " Sergey Tulyakov" + }, + { + "bbox": [ + 97, + 160, + 514, + 190 + ], + "type": "inline_equation", + "content": "^{3}" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 197, + 190, + 414, + 203 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 197, + 190, + 414, + 203 + ], + "spans": [ + { + "bbox": [ + 197, + 190, + 414, + 203 + ], + "type": "text", + "content": "1University of Toronto 2Vector Institute 3Snap Inc. 4SFU" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 268, + 205, + 341, + 217 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 268, + 205, + 341, + 217 + ], + "spans": [ + { + "bbox": [ + 268, + 205, + 341, + 217 + ], + "type": "text", + "content": "*equal contribution" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 195, + 224, + 414, + 236 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 224, + 414, + 236 + ], + "spans": [ + { + "bbox": [ + 195, + 224, + 414, + 236 + ], + "type": "text", + "content": "https://snap-research.github.io/ac3d" + } + ] + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 57, + 243, + 555, + 386 + ], + "blocks": [ + { + "bbox": [ + 57, + 243, + 555, + 386 + ], + "lines": [ + { + "bbox": [ + 57, + 243, + 555, + 386 + ], + "spans": [ + { + "bbox": [ + 57, + 243, + 555, + 386 + ], + "type": "image", + "image_path": "b817cae749f12438e773bb976adf99e6f8b2a973ab4df38ce66da003d63efb02.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 58, + 395, + 153, + 472 + ], + "blocks": [ + { + "bbox": [ + 58, + 395, + 153, + 472 + ], + "lines": [ + { + "bbox": [ + 58, + 395, + 153, + 472 + ], + "spans": [ + { + "bbox": [ + 58, + 395, + 153, + 472 + ], + "type": "image", + "image_path": "52516c7173e4846a6846b007e2a90e43104163aca0143472ddbef4ba84c1276e.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 157, + 395, + 450, + 472 + ], + "lines": [ + { + "bbox": [ + 157, + 395, + 450, + 472 + ], + "spans": [ + { + "bbox": [ + 157, + 395, + 450, + 472 + ], + "type": "text", + "content": "Figure 1. Camera-controlled video generation. Our method enables precise camera controllability in pre-trained video diffusion transformers, allowing joint conditioning of text and camera sequences. We synthesize the same scene with two different camera trajectories as input. The inset images visualize the cameras for the videos in the corresponding columns. The left camera sequence consists of a rotation to the right, while the right camera visualizes a zoom-out and up trajectory." 
+ } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_caption" + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 452, + 391, + 552, + 472 + ], + "blocks": [ + { + "bbox": [ + 452, + 391, + 552, + 472 + ], + "lines": [ + { + "bbox": [ + 452, + 391, + 552, + 472 + ], + "spans": [ + { + "bbox": [ + 452, + 391, + 552, + 472 + ], + "type": "image", + "image_path": "511171fd4800824ac854c98baa2d66ee98e1865662ef20852b5d5958d8527144.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + } + ], + "index": 10 + }, + { + "bbox": [ + 152, + 491, + 200, + 504 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 152, + 491, + 200, + 504 + ], + "spans": [ + { + "bbox": [ + 152, + 491, + 200, + 504 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 53, + 522, + 297, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 53, + 522, + 297, + 714 + ], + "spans": [ + { + "bbox": [ + 53, + 522, + 297, + 714 + ], + "type": "text", + "content": "Numerous works have recently integrated 3D camera control into foundational text-to-video models, but the resulting camera control is often imprecise, and video generation quality suffers. In this work, we analyze camera motion from a first principles perspective, uncovering insights that enable precise 3D camera manipulation without compromising synthesis quality. First, we determine that motion induced by camera movements in videos is low-frequency in nature. This motivates us to adjust train and test pose conditioning schedules, accelerating training convergence while improving visual and motion quality. Then, by probing the representations of an unconditional video diffusion transformer, we observe that they implicitly perform camera pose estimation under the hood, and only a sub-partion of their layers contain the camera information. This suggested us to limit the injection of camera conditioning to a subset of the" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 494, + 556, + 625 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 494, + 556, + 625 + ], + "spans": [ + { + "bbox": [ + 313, + 494, + 556, + 625 + ], + "type": "text", + "content": "architecture to prevent interference with other video features, leading to a " + }, + { + "bbox": [ + 313, + 494, + 556, + 625 + ], + "type": "inline_equation", + "content": "4 \\times" + }, + { + "bbox": [ + 313, + 494, + 556, + 625 + ], + "type": "text", + "content": " reduction of training parameters, improved training speed, and " + }, + { + "bbox": [ + 313, + 494, + 556, + 625 + ], + "type": "inline_equation", + "content": "10\\%" + }, + { + "bbox": [ + 313, + 494, + 556, + 625 + ], + "type": "text", + "content": " higher visual quality. Finally, we complement the typical dataset for camera control learning with a curated dataset of 20K diverse, dynamic videos with stationary cameras. This helps the model distinguish between camera and scene motion and improves the dynamics of generated pose-conditioned videos. We compound these findings to design the Advanced 3D Camera Control (AC3D) architecture, the new state-of-the-art model for generative video modeling with camera control." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 314, + 654, + 395, + 667 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 654, + 395, + 667 + ], + "spans": [ + { + "bbox": [ + 314, + 654, + 395, + 667 + ], + "type": "text", + "content": "1. 
Introduction" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 313, + 677, + 555, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 677, + 555, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 677, + 555, + 715 + ], + "type": "text", + "content": "Foundational video diffusion models (VDMs) trained on internet-scale data, acquire abundant knowledge about the physical world [10]. They not only learn appearance and" + } + ] + } + ], + "index": 15 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "spans": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "text", + "content": "CVF" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "spans": [ + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "type": "text", + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "text", + "content": "22875" + } + ] + } + ], + "index": 16 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 297, + 288 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 297, + 288 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 297, + 288 + ], + "type": "text", + "content": "plausible 2D dynamics, but they also have abundant understanding of 3D structure [7]. However, most of this knowledge is stored implicitly within the model, as these models do not expose fine-grained control mechanisms, such as camera motion control. We recently witnessed a surge of works that bring 3D camera control into foundational video models [39, 149, 166], but the control they provide is not very precise, and the synthesis quality is often compromised [6]. We analyze camera motion control in video diffusion models from first principles, and develop several findings that allow us to incorporate precise 3D camera conditioning without degrading synthesis quality. To perform our analysis we train a 11.5B-parameter VDiT (video latent diffusion transformer) [100] on a dataset of 100M text/video pairs. On this model, we perform three key studies. With what we learn, we adapt the camera control solution from VD3D [6] from a pixel-based to latent-based diffusion model, and significantly improve its performance." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 297, + 297, + 501 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 297, + 297, + 501 + ], + "spans": [ + { + "bbox": [ + 55, + 297, + 297, + 501 + ], + "type": "text", + "content": "1) The spectral properties of camera motion. To study the statistical nature of motion control, we analyze motion spectral volumes (MSV) [75] of the videos generated by a large-scale video DiT model. 
MSVs show the amount of energy in different portions of the frequency spectra (i.e., high energy in the low frequencies indicate smooth motion) and we measure them across 200 generated videos of different types (camera motion, scene motion, scene plus camera motion) and at various stages of the denoising synthesis process. We observe that camera motion mostly affects the lower portion of the spectrum and kicks in very early (" + }, + { + "bbox": [ + 55, + 297, + 297, + 501 + ], + "type": "inline_equation", + "content": "\\approx 10\\%" + }, + { + "bbox": [ + 55, + 297, + 297, + 501 + ], + "type": "text", + "content": ") in the denoising trajectory. Then, as diffusion models are inherently coarse-to-fine in nature [25], we restrict our camera conditioning to only being injected on the subset of the denoising steps corresponding to low-frequencies. This results in " + }, + { + "bbox": [ + 55, + 297, + 297, + 501 + ], + "type": "inline_equation", + "content": "\\approx 15\\%" + }, + { + "bbox": [ + 55, + 297, + 297, + 501 + ], + "type": "text", + "content": " higher visual fidelity, " + }, + { + "bbox": [ + 55, + 297, + 297, + 501 + ], + "type": "inline_equation", + "content": "\\approx 30\\%" + }, + { + "bbox": [ + 55, + 297, + 297, + 501 + ], + "type": "text", + "content": " better camera following, and mitigates scene motion degradation." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 510, + 297, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 510, + 297, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 510, + 297, + 714 + ], + "type": "text", + "content": "2) Camera motion knowledge in VDiTs. Then, we consider our text-only VDiT, and determine whether such a model possesses knowledge about cameras, and where this knowledge is expressed within its architecture. With this objective, we feed the (unseen during training) RealEstate10k [199] videos to our VDiT, and perform linear probing [27] to determine if camera poses can be recovered from its internal representation. Our analysis revealed that a video DiT implicitly performs camera pose estimation under the hood, and the presence of camera knowledge in a disentangled form peaks in its middle layers. This implies that the camera signal emerges in its early blocks to allow the later ones rely on it to build subsequent visual representations. Therefore, we adjust our conditioning scheme to only affect the first " + }, + { + "bbox": [ + 55, + 510, + 297, + 714 + ], + "type": "inline_equation", + "content": "30\\%" + }, + { + "bbox": [ + 55, + 510, + 297, + 714 + ], + "type": "text", + "content": " of the architecture, leading to a " + }, + { + "bbox": [ + 55, + 510, + 297, + 714 + ], + "type": "inline_equation", + "content": "\\approx 4\\times" + }, + { + "bbox": [ + 55, + 510, + 297, + 714 + ], + "type": "text", + "content": " reduction in training parameters, " + }, + { + "bbox": [ + 55, + 510, + 297, + 714 + ], + "type": "inline_equation", + "content": "15\\%" + }, + { + "bbox": [ + 55, + 510, + 297, + 714 + ], + "type": "text", + "content": " training and inference acceleration, and " + }, + { + "bbox": [ + 55, + 510, + 297, + 714 + ], + "type": "inline_equation", + "content": "10\\%" + }, + { + "bbox": [ + 55, + 510, + 297, + 714 + ], + "type": "text", + "content": " improved visual quality." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 313, + 72, + 556, + 228 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 72, + 556, + 228 + ], + "spans": [ + { + "bbox": [ + 313, + 72, + 556, + 228 + ], + "type": "text", + "content": "3) Re-balancing the training distribution. Finally, to supervise camera control architectures, the typical solution is to rely on the camera pose annotations provided by RealEstate10k [199]. However, this dataset contains mostly static scenes, which results in significant motion degradation of the fine-tuned video model. To overcome this problem, we curate a subset of 20K diverse videos with dynamic scenes but static cameras. As the camera conditioning branch is still activated for these videos, this helps the model disambiguate the camera from scene movement. Our experiments show that this simple adjustment in the data is sufficient to recover the scene dynamism while still enabling an effective pose-conditioned video model." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 313, + 231, + 557, + 339 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 231, + 557, + 339 + ], + "spans": [ + { + "bbox": [ + 313, + 231, + 557, + 339 + ], + "type": "text", + "content": "Contributions. We compound the knowledge gained from these three studies into the design of the Advanced 3D Camera Control (AC3D) method. We perform extensive ablation studies and compare against state-of-the-art models for camera control, including MotionCtrl [149], CameraCtrl [39], and VD3D [6]. We demonstrate " + }, + { + "bbox": [ + 313, + 231, + 557, + 339 + ], + "type": "inline_equation", + "content": "18\%" + }, + { + "bbox": [ + 313, + 231, + 557, + 339 + ], + "type": "text", + "content": " higher video fidelity and " + }, + { + "bbox": [ + 313, + 231, + 557, + 339 + ], + "type": "inline_equation", + "content": "25\%" + }, + { + "bbox": [ + 313, + 231, + 557, + 339 + ], + "type": "text", + "content": " more precise camera steering in terms of quantitative metrics than the closest competitor, and our generated videos are favored over others in " + }, + { + "bbox": [ + 313, + 231, + 557, + 339 + ], + "type": "inline_equation", + "content": "90\%" + }, + { + "bbox": [ + 313, + 231, + 557, + 339 + ], + "type": "text", + "content": " of cases." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 313, + 351, + 399, + 363 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 351, + 399, + 363 + ], + "spans": [ + { + "bbox": [ + 313, + 351, + 399, + 363 + ], + "type": "text", + "content": "2. Related work" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 313, + 372, + 556, + 419 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 372, + 556, + 419 + ], + "spans": [ + { + "bbox": [ + 313, + 372, + 556, + 419 + ], + "type": "text", + "content": "Our approach lies at the intersection of text-to-video, text-to-3D, and text-to-4D generation approaches. We refer to recent state-of-the-art reports [101, 180] for a more thorough analysis of previous work." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 313, + 423, + 557, + 567 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 423, + 557, + 567 + ], + "spans": [ + { + "bbox": [ + 313, + 423, + 557, + 567 + ], + "type": "text", + "content": "Text-to-video generation. Our approach builds on recent advancements in 2D video diffusion models. 
One prominent technique in this area enhances text-to-image models by adding temporal layers to support video generation [7, 8, 37, 119, 151]. While these methods use the U-Net architecture, more recent ones [10, 92, 93, 168] have been adapting transformer-based architectures for more scalable, realistic, and highly dynamic scene generation. We are interested in controlling the camera movements during the generation process of recent transformer-based video models based on precise camera extrinsics, i.e., cameras represented as rotation and translation sequences for each frame." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 570, + 556, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 570, + 556, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 570, + 556, + 713 + ], + "type": "text", + "content": "4D generation. Early 4D generation works [2, 164] used 4D GANs to learn category-specific generators with an underlying dynamic 3D representation. More recent approaches [4, 5, 81, 120, 196] have tackled 4D generation by distilling motion priors from pre-trained video diffusion models into an explicit 4D representation, enabling category-agnostic 4D generation. Follow-up works investigate image or video conditioned 4D generation [33, 76, 81, 98, 109, 173, 182, 193, 196] instead of pure text inputs, improving flexibility in the generation process. While most of these works are object-centric, recent approaches [4, 159] shifted towards more complex scenes, including methods [23, 175] which" + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 749, + 318, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 749, + 318, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 749, + 318, + 757 + ], + "type": "text", + "content": "22876" + } + ] + } + ], + "index": 9 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 58, + 70, + 555, + 205 + ], + "blocks": [ + { + "bbox": [ + 58, + 70, + 555, + 205 + ], + "lines": [ + { + "bbox": [ + 58, + 70, + 555, + 205 + ], + "spans": [ + { + "bbox": [ + 58, + 70, + 555, + 205 + ], + "type": "image", + "image_path": "7d90716174ed6253fcc66d5f7b61f599bd5530253f95e75ce8956865a8007427.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 54, + 212, + 555, + 246 + ], + "lines": [ + { + "bbox": [ + 54, + 212, + 555, + 246 + ], + "spans": [ + { + "bbox": [ + 54, + 212, + 555, + 246 + ], + "type": "text", + "content": "Figure 2. VDiT-CC model with ControlNet [71, 188] camera conditioning built on top of VDiT. Video synthesis is performed by large 4,096-dimensional DiT-XL blocks of the frozen VDiT backbone, while VDiT-CC only processes and injects the camera information through lightweight 128-dimensional DiT-XS blocks (FC stands for fully-connected layers); see Section 3.2 for details." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 54, + 266, + 297, + 386 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 266, + 297, + 386 + ], + "spans": [ + { + "bbox": [ + 54, + 266, + 297, + 386 + ], + "type": "text", + "content": "model the background. However, all these methods are optimization-based, i.e., each scene is generated independently from scratch. Recently, L4GM [110] proposes a feed-forward 4D generator trained on object-centric synthetic 4D data. 
While these approaches are explicit and provide space-time control, they are limited in their photorealism compared to recent 2D video diffusion models. We investigate dynamic 3D scene generation from a different perspective by extending pre-trained video diffusion models with 3D camera control." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 56, + 395, + 296, + 658 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 395, + 296, + 658 + ], + "spans": [ + { + "bbox": [ + 56, + 395, + 296, + 658 + ], + "type": "text", + "content": "Camera control for video models. Recently, there has been significant progress in adding camera control to video diffusion models. As the pioneering work, MotionCtrl [149] learns camera control by conditioning pre-trained video models [7, 17] with extrinsic matrices. Follow-up works [39, 65, 160] further improve the conditioning mechanisms by representing cameras as Plücker coordinates. Another line of work [49, 50, 82, 155] controls camera motion without training additional parameters. However, all of these approaches use U-Net-based architectures as their backbone. More recently, 4DiM [150] trains a space-time diffusion model from scratch for novel view synthesis from a single image input. Closely related to our work, VD3D [6] incorporates camera control into a pre-trained video diffusion transformer. While its motion and camera control improve over U-Net-based approaches, the synthesized motion in the scenes and the visual quality are still degraded compared to the base video model. In contrast to VD3D, we first thoroughly investigate the pre-trained base video model and its knowledge of camera motion. We derive an improved training and architecture design for high-quality and dynamic video generation based on our findings." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 665, + 297, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 665, + 297, + 715 + ], + "spans": [ + { + "bbox": [ + 55, + 665, + 297, + 715 + ], + "type": "text", + "content": "Concurrent works. Concurrent approaches [68, 158, 177, 185, 194, 195] further improve camera control in U-Net-based architectures, while another work [22] tackles a video diffusion transformer. However, the scene and visual quality" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 313, + 266, + 555, + 362 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 266, + 555, + 362 + ], + "spans": [ + { + "bbox": [ + 313, + 266, + 555, + 362 + ], + "type": "text", + "content": "are still limited in that approach. DimensionX [127] controls space and time in video diffusion transformers but the camera trajectories are pre-defined and not continuous. Chun-Hao et al. [51] explore pose estimation with a video DiT by pairing it with DUSt3R [145] and fine-tuning, while we perform linear probing without any training to assess its existing camera knowledge. CAT4D [152] proposes a multi-view video diffusion model fine-tuned from a multi-view model." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 314, + 376, + 370, + 388 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 376, + 370, + 388 + ], + "spans": [ + { + "bbox": [ + 314, + 376, + 370, + 388 + ], + "type": "text", + "content": "3. 
Method" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 313, + 397, + 556, + 469 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 397, + 556, + 469 + ], + "spans": [ + { + "bbox": [ + 313, + 397, + 556, + 469 + ], + "type": "text", + "content": "We first describe our base video diffusion model (Sec. 3.1), and the baseline camera control method built on top of it (Sec. 3.2). Then, we proceed with the analysis of motion (Sec. 3.3), linear probing (Sec. 3.4) and dataset biases (Sec. 3.5), and additional insights on how to build an effective model for camera control (Sec. 3.6)" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 479, + 426, + 492 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 479, + 426, + 492 + ], + "spans": [ + { + "bbox": [ + 313, + 479, + 426, + 492 + ], + "type": "text", + "content": "3.1. Base model (VDiT)" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 498, + 556, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 498, + 556, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 498, + 556, + 713 + ], + "type": "text", + "content": "Following Sora [10], most modern foundational text-to-video generators use the diffusion framework [45, 122] to train a large-scale transformer [139] in the latent space of a variational autoencoder [64, 111]. We adopt the same design and, for a base video model, pre-train an 11.5B-parameter Video DiT model [100] with 32 blocks of hidden dimension 4,096 for text-to-video generation. We use the rectified flow diffusion parametrization [85] and learn in the latent space of CogVideoX [168] (using an autoencoder with a 16-channel output and compression factors of " + }, + { + "bbox": [ + 313, + 498, + 556, + 713 + ], + "type": "inline_equation", + "content": "4 \\times 8 \\times 8" + }, + { + "bbox": [ + 313, + 498, + 556, + 713 + ], + "type": "text", + "content": " in the temporal and spatial dimensions). The T5 [108] encoder produces text embeddings, which are passed into VDiT via cross-attention. We train our base model on a large-scale dataset of images and videos with text annotations, with resolutions ranging from " + }, + { + "bbox": [ + 313, + 498, + 556, + 713 + ], + "type": "inline_equation", + "content": "17 \\times 144 \\times 256" + }, + { + "bbox": [ + 313, + 498, + 556, + 713 + ], + "type": "text", + "content": " to " + }, + { + "bbox": [ + 313, + 498, + 556, + 713 + ], + "type": "inline_equation", + "content": "121 \\times 576 \\times 1024" + }, + { + "bbox": [ + 313, + 498, + 556, + 713 + ], + "type": "text", + "content": ". This design is fairly standard and followed by many existing works with little deviation [32, 102, 168, 197]; we describe our specific architectural and training setup in detail in Appendix D." + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "text", + "content": "22877" + } + ] + } + ], + "index": 10 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 262, + 85 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 262, + 85 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 262, + 85 + ], + "type": "text", + "content": "3.2. 
VDiT with Camera Control (VDiT-CC)" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 99, + 296, + 209 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 99, + 296, + 209 + ], + "spans": [ + { + "bbox": [ + 55, + 99, + 296, + 209 + ], + "type": "text", + "content": "To construct a baseline architecture for camera control, we implement ControlNet [18, 188] conditioning on top of the VDiT. Similar to previous work [6, 39, 149], we use the RealEstate10k [199] dataset, consisting of " + }, + { + "bbox": [ + 55, + 99, + 296, + 209 + ], + "type": "inline_equation", + "content": "65\mathrm{k}" + }, + { + "bbox": [ + 55, + 99, + 296, + 209 + ], + "type": "text", + "content": " (text, video, camera trajectory) triplets " + }, + { + "bbox": [ + 55, + 99, + 296, + 209 + ], + "type": "inline_equation", + "content": "(\pmb{t}_n,\pmb{x}_n,\pmb{c}_n)_{n = 1}^N" + }, + { + "bbox": [ + 55, + 99, + 296, + 209 + ], + "type": "text", + "content": " and train a new set of model parameters to input the camera information into the model. Camera trajectories " + }, + { + "bbox": [ + 55, + 99, + 296, + 209 + ], + "type": "inline_equation", + "content": "\pmb{c}\in \mathbb{R}^{f\times 25}" + }, + { + "bbox": [ + 55, + 99, + 296, + 209 + ], + "type": "text", + "content": " are provided in the form of camera extrinsics " + }, + { + "bbox": [ + 55, + 99, + 296, + 209 + ], + "type": "inline_equation", + "content": "\pmb{C}_f\in \mathbb{R}^{4\times 4}" + }, + { + "bbox": [ + 55, + 99, + 296, + 209 + ], + "type": "text", + "content": " and intrinsics " + }, + { + "bbox": [ + 55, + 99, + 296, + 209 + ], + "type": "inline_equation", + "content": "\pmb{K}_f\in \mathbb{R}^{3\times 3}" + }, + { + "bbox": [ + 55, + 99, + 296, + 209 + ], + "type": "text", + "content": " for the " + }, + { + "bbox": [ + 55, + 99, + 296, + 209 + ], + "type": "inline_equation", + "content": "f" + }, + { + "bbox": [ + 55, + 99, + 296, + 209 + ], + "type": "text", + "content": "-th frame " + }, + { + "bbox": [ + 55, + 99, + 296, + 209 + ], + "type": "inline_equation", + "content": "\pmb{x}_f" + }, + { + "bbox": [ + 55, + 99, + 296, + 209 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 220, + 296, + 448 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 220, + 296, + 448 + ], + "spans": [ + { + "bbox": [ + 56, + 220, + 296, + 448 + ], + "type": "text", + "content": "Camera conditioning. For base camera control, we adapt VD3D [6] since it was designed for transformer-based models and best suits our setup, while other methods are built on top of UNet-based [112] backbones. We use Plücker camera representations [6, 16, 39, 59, 121], which are projected to the same dimensionality and resolution as the video tokens via a fully-convolutional encoder to produce camera tokens. These camera tokens are processed by a sequence of lightweight DiT-XS blocks with hidden dimension 128 and four attention heads each. To mix the camera information with the video tokens of VDiT, we use summation before each main DiT block. We also found it useful to perform cross-attention from video tokens to camera tokens as a form of feedback connection [71]. We illustrate this model architecture, which we call VDiT-CC, in Figure 2; see implementation details in Appendix D. 
VDiT-CC denotes the camera-controlled video model architecture used by AC3D, while AC3D denotes our full method, including the analysis and the additional adjustments derived from it." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 460, + 296, + 605 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 460, + 296, + 605 + ], + "spans": [ + { + "bbox": [ + 55, + 460, + 296, + 605 + ], + "type": "text", + "content": "Training. Keeping the VDiT backbone frozen, we train the new parameters with a rectified flow objective [85] and standard (location of 0 and scale of 1) logit-normal noise distribution [28]. Similar to prior works [6, 150], we apply a " + }, + { + "bbox": [ + 55, + 460, + 296, + 605 + ], + "type": "inline_equation", + "content": "10\%" + }, + { + "bbox": [ + 55, + 460, + 296, + 605 + ], + "type": "text", + "content": " camera dropout to support classifier-free guidance (CFG) [44] later. Notably, we train VDiT-CC only at the " + }, + { + "bbox": [ + 55, + 460, + 296, + 605 + ], + "type": "inline_equation", + "content": "256^2" + }, + { + "bbox": [ + 55, + 460, + 296, + 605 + ], + "type": "text", + "content": " resolution: since camera motion is a low-frequency type of signal (which can be observed at lower resolutions) and the main VDiT backbone is frozen, we found that our design generalizes to higher resolutions out-of-the-box. During inference, we input text prompts and camera embeddings with classifier-free guidance at each time step." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 617, + 296, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 617, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 617, + 296, + 713 + ], + "type": "text", + "content": "Model behavior. This baseline model, being built on top of our powerful VDiT, already achieves decent-quality camera control. However, it struggles with degraded visual quality and reduced scene motion, and the camera control inputs are sometimes ignored. To improve the design, we analyze our VDiT backbone to understand how camera motion is modeled and represented. Then, we inspect VDiT-CC's failure cases and where they arise to address them." + } + ] + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 341, + 72, + 529, + 188 + ], + "blocks": [ + { + "bbox": [ + 341, + 72, + 529, + 188 + ], + "lines": [ + { + "bbox": [ + 341, + 72, + 529, + 188 + ], + "spans": [ + { + "bbox": [ + 341, + 72, + 529, + 188 + ], + "type": "image", + "image_path": "84e1ccc3b8c3684156cda0c6e94f1b00f3bf0f68361b47b1471f204f9c75627a.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 313, + 198, + 555, + 299 + ], + "lines": [ + { + "bbox": [ + 313, + 198, + 555, + 299 + ], + "spans": [ + { + "bbox": [ + 313, + 198, + 555, + 299 + ], + "type": "text", + "content": "Figure 3. Average magnitude of motion spectral volumes along spatial, temporal offset, and video batch dimensions for scenes with different motion types. We compute the flow of each video in a sliding window manner with temporal offsets and average the frequencies across all offsets. Videos with camera motion (purple) exhibit stronger overall motion than the videos with scene motion (orange), especially for the low-frequency range, suggesting that the motion induced by camera transitions is heavily biased towards low-frequency components. Frequency refers to the temporal frequency."
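To make the Plücker conditioning of Sec. 3.2 concrete, here is a minimal sketch of the per-pixel ray map for one frame. The extrinsics convention (we assume a camera-to-world matrix `c2w`) and the pixel-center offset are our assumptions; real implementations differ in such details, and this is not the exact VDiT-CC encoder input.

```python
# Sketch of a per-pixel Plücker ray map (o x d, d) for one frame,
# assuming `c2w` is the 4x4 camera-to-world extrinsic and K the 3x3
# intrinsic matrix; an illustration under stated assumptions.
import numpy as np

def plucker_ray_map(c2w: np.ndarray, K: np.ndarray, h: int, w: int) -> np.ndarray:
    u, v = np.meshgrid(np.arange(w) + 0.5, np.arange(h) + 0.5)
    pix = np.stack([u, v, np.ones_like(u)], axis=-1)      # (h, w, 3) homogeneous pixels
    dirs = pix @ np.linalg.inv(K).T @ c2w[:3, :3].T       # unproject, rotate to world
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)  # unit ray directions
    origin = np.broadcast_to(c2w[:3, 3], dirs.shape)      # camera center per pixel
    return np.concatenate([np.cross(origin, dirs), dirs], axis=-1)  # (h, w, 6)

# Stacking this over all f frames gives an (f, h, w, 6) volume that the
# fully-convolutional encoder maps to the video-token resolution and width.
```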
+ } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + } + ], + "index": 5 + }, + { + "bbox": [ + 313, + 319, + 548, + 333 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 319, + 548, + 333 + ], + "spans": [ + { + "bbox": [ + 313, + 319, + 548, + 333 + ], + "type": "text", + "content": "3.3. How is camera motion modeled by diffusion?" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 338, + 555, + 506 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 338, + 555, + 506 + ], + "spans": [ + { + "bbox": [ + 313, + 338, + 555, + 506 + ], + "type": "text", + "content": "We start by analyzing how camera motion is modeled by a pre-trained video diffusion model (i.e., before camera control is incorporated). We hypothesize that the motion induced by changes in camera pose is a low-frequency type of signal and investigate the motion spectral volumes [75] of the generated videos at different steps of the denoising process. To perform this analysis, we generate 200 diverse videos with our VDiT model with 80 denoising steps and manually annotate them into four categories: videos with only scene motion, videos with only camera motion, videos with both scene and camera motion, and others; see Appendix E for details. During generation, we save the denoised predictions at each denoising step and estimate optical flow to compute the motion spectral volumes." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 509, + 556, + 652 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 509, + 556, + 652 + ], + "spans": [ + { + "bbox": [ + 313, + 509, + 556, + 652 + ], + "type": "text", + "content": "Analysis. We visualize motion spectral volumes with " + }, + { + "bbox": [ + 313, + 509, + 556, + 652 + ], + "type": "inline_equation", + "content": "95\%" + }, + { + "bbox": [ + 313, + 509, + 556, + 652 + ], + "type": "text", + "content": " confidence intervals in Figure 3. Videos with camera motion exhibit higher amplitudes than scene-motion-only videos for low-frequency components while having similar characteristics for high-frequency ones. This supports the conjecture that the camera motion is a low-frequency type of signal. We also depict an example of a generated video with both scene and camera motion at four denoising steps in Fig. 4a: one can observe that the camera movement has been fully produced by " + }, + { + "bbox": [ + 313, + 509, + 556, + 652 + ], + "type": "inline_equation", + "content": "t = 0.9" + }, + { + "bbox": [ + 313, + 509, + 556, + 652 + ], + "type": "text", + "content": " (first " + }, + { + "bbox": [ + 313, + 509, + 556, + 652 + ], + "type": "inline_equation", + "content": "10\%" + }, + { + "bbox": [ + 313, + 509, + 556, + 652 + ], + "type": "text", + "content": " of the rectified flow denoising process). In contrast, scene motion details like the hand movements of the subjects are not finalized even by " + }, + { + "bbox": [ + 313, + 509, + 556, + 652 + ], + "type": "inline_equation", + "content": "t = 0.5" + }, + { + "bbox": [ + 313, + 509, + 556, + 652 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 654, + 556, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 654, + 556, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 654, + 556, + 713 + ], + "type": "text", + "content": "Inspired by this finding, we pose the question: when exactly does a video diffusion model determine the camera pose? 
To answer this question, we plot aggregated spectral volumes for different timesteps in Figure 4b. We also show the ratio with respect to the last timestep " + }, + { + "bbox": [ + 313, + 654, + 556, + 713 + ], + "type": "inline_equation", + "content": "t = 0" + }, + { + "bbox": [ + 313, + 654, + 556, + 713 + ], + "type": "text", + "content": " (i.e.," + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 749, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 749, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 749, + 317, + 757 + ], + "type": "text", + "content": "22878" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 56, + 71, + 218, + 186 + ], + "blocks": [ + { + "bbox": [ + 56, + 71, + 218, + 186 + ], + "lines": [ + { + "bbox": [ + 56, + 71, + 218, + 186 + ], + "spans": [ + { + "bbox": [ + 56, + 71, + 218, + 186 + ], + "type": "image", + "image_path": "8693ddc1f716218a31172eb0176c0cf1659c7715d807548d77bc27c89b978dc5.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 56, + 186, + 217, + 224 + ], + "lines": [ + { + "bbox": [ + 56, + 186, + 217, + 224 + ], + "spans": [ + { + "bbox": [ + 56, + 186, + 217, + 224 + ], + "type": "text", + "content": "(a) A generated video at different diffusion timesteps. The camera has already been decided by the model even at " + }, + { + "bbox": [ + 56, + 186, + 217, + 224 + ], + "type": "inline_equation", + "content": "t = 0.9" + }, + { + "bbox": [ + 56, + 186, + 217, + 224 + ], + "type": "text", + "content": " (first " + }, + { + "bbox": [ + 56, + 186, + 217, + 224 + ], + "type": "inline_equation", + "content": "10\%" + }, + { + "bbox": [ + 56, + 186, + 217, + 224 + ], + "type": "text", + "content": " of the denoising process) and does not change after that." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 226, + 73, + 373, + 198 + ], + "blocks": [ + { + "bbox": [ + 226, + 73, + 373, + 198 + ], + "lines": [ + { + "bbox": [ + 226, + 73, + 373, + 198 + ], + "spans": [ + { + "bbox": [ + 226, + 73, + 373, + 198 + ], + "type": "image", + "image_path": "9ea73aaf19caa9d407640a1ed89c9115a7296f0e1cb9290812cc347eaa8c3e6c.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 223, + 201, + 555, + 220 + ], + "lines": [ + { + "bbox": [ + 223, + 201, + 555, + 220 + ], + "spans": [ + { + "bbox": [ + 223, + 201, + 555, + 220 + ], + "type": "text", + "content": "(b) Motion spectral volumes of VDiT's generated videos for different diffusion timesteps (left) and their ratio w.r.t. the motion spectral volume at " + }, + { + "bbox": [ + 223, + 201, + 555, + 220 + ], + "type": "inline_equation", + "content": "t = 0" + }, + { + "bbox": [ + 223, + 201, + 555, + 220 + ], + "type": "text", + "content": " (i.e., a fully denoised video)." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 54, + 234, + 555, + 289 + ], + "lines": [ + { + "bbox": [ + 54, + 234, + 555, + 289 + ], + "spans": [ + { + "bbox": [ + 54, + 234, + 555, + 289 + ], + "type": "text", + "content": "Figure 4. How is camera motion modeled by diffusion? As visualized in Figure 4a and Figure 3, the motion induced by camera transitions is a low-frequency type of motion. 
We observe that a video DiT creates low-frequency motion very early in the denoising trajectory: Figure 4b (left) shows that even at " + }, + { + "bbox": [ + 54, + 234, + 555, + 289 + ], + "type": "inline_equation", + "content": "t = 0.96" + }, + { + "bbox": [ + 54, + 234, + 555, + 289 + ], + "type": "text", + "content": " (first " + }, + { + "bbox": [ + 54, + 234, + 555, + 289 + ], + "type": "inline_equation", + "content": "\approx 4\%" + }, + { + "bbox": [ + 54, + 234, + 555, + 289 + ], + "type": "text", + "content": " of the steps), the low-frequency motion components have already been created, while high-frequency ones are not fully revealed even by " + }, + { + "bbox": [ + 54, + 234, + 555, + 289 + ], + "type": "inline_equation", + "content": "t = 0.5" + }, + { + "bbox": [ + 54, + 234, + 555, + 289 + ], + "type": "text", + "content": ". We found that controlling the camera pose later in the denoising trajectory is not only unnecessary but detrimental to both scene motion and overall visual quality." + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 374, + 72, + 552, + 198 + ], + "blocks": [ + { + "bbox": [ + 374, + 72, + 552, + 198 + ], + "lines": [ + { + "bbox": [ + 374, + 72, + 552, + 198 + ], + "spans": [ + { + "bbox": [ + 374, + 72, + 552, + 198 + ], + "type": "image", + "image_path": "23e1a2904a69fcdbc58bb5274bf44241e2989333975124d4518225d74c2d0e4e.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + } + ], + "index": 3 + }, + { + "bbox": [ + 54, + 311, + 295, + 381 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 311, + 295, + 381 + ], + "spans": [ + { + "bbox": [ + 54, + 311, + 295, + 381 + ], + "type": "text", + "content": "when all motion has been generated). We then inspect when different types of motion appear during the denoising process. Figure 4b (right) shows that the low-frequency motion components fill up to " + }, + { + "bbox": [ + 54, + 311, + 295, + 381 + ], + "type": "inline_equation", + "content": "\approx 84\%" + }, + { + "bbox": [ + 54, + 311, + 295, + 381 + ], + "type": "text", + "content": " at " + }, + { + "bbox": [ + 54, + 311, + 295, + 381 + ], + "type": "inline_equation", + "content": "t = 0.9" + }, + { + "bbox": [ + 54, + 311, + 295, + 381 + ], + "type": "text", + "content": " (the first " + }, + { + "bbox": [ + 54, + 311, + 295, + 381 + ], + "type": "inline_equation", + "content": "10\%" + }, + { + "bbox": [ + 54, + 311, + 295, + 381 + ], + "type": "text", + "content": " of the denoising process), while high-frequency components are not well-modeled until " + }, + { + "bbox": [ + 54, + 311, + 295, + 381 + ], + "type": "inline_equation", + "content": "t = 0.6" + }, + { + "bbox": [ + 54, + 311, + 295, + 381 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 384, + 296, + 623 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 384, + 296, + 623 + ], + "spans": [ + { + "bbox": [ + 55, + 384, + 296, + 623 + ], + "type": "text", + "content": "An immediate consequence of this observation is that trying to control the camera later in the denoising trajectory is simply unnecessary and will not influence the manipulation result. 
Accordingly, instead of using the standard logit-normal noise level distribution of SD3 [28] with a location of 0.0 and scale of 1.0 (which we use by default for VDiT), we switch to a truncated normal distribution with a location of 0.8 and scale of 0.075 on the [0.6, 1] interval to cover the early steps of the denoising rectified flow trajectory. At inference time, we apply camera conditioning on the same [0.6, 1] interval. Surprisingly, we observe that not using truncation is detrimental to the scene motion and overall visual quality. Following this insight, we restrict both our train-time noise levels and test-time camera conditioning schedules to cover only the first " + }, + { + "bbox": [ + 55, + 384, + 296, + 623 + ], + "type": "inline_equation", + "content": "40\%" + }, + { + "bbox": [ + 55, + 384, + 296, + 623 + ], + "type": "text", + "content": " of the reverse diffusion trajectory. As Sec. 4.3 shows, this improves FID and FVD by " + }, + { + "bbox": [ + 55, + 384, + 296, + 623 + ], + "type": "inline_equation", + "content": "14\%" + }, + { + "bbox": [ + 55, + 384, + 296, + 623 + ], + "type": "text", + "content": " on average, and camera following by " + }, + { + "bbox": [ + 55, + 384, + 296, + 623 + ], + "type": "inline_equation", + "content": "30\%" + }, + { + "bbox": [ + 55, + 384, + 296, + 623 + ], + "type": "text", + "content": " on MSR-VTT (the dataset used to measure generalization to diverse, out-of-fine-tuning-distribution scenes). Further, truncated noise sampling enhances the overall scene motion." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 635, + 280, + 647 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 635, + 280, + 647 + ], + "spans": [ + { + "bbox": [ + 55, + 635, + 280, + 647 + ], + "type": "text", + "content": "3.4. What does VDiT know about camera pose?" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 55, + 653, + 296, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 653, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 653, + 296, + 713 + ], + "type": "text", + "content": "Foundational video models acquire rich knowledge about the physical world, and we hypothesize that they store information about the camera pose within their representations. To investigate this, we perform linear probing of our base VDiT model on the RealEstate10k [199] dataset (not seen" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 311, + 468, + 322 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 311, + 468, + 322 + ], + "spans": [ + { + "bbox": [ + 313, + 311, + 468, + 322 + ], + "type": "text", + "content": "during training) for camera extrinsics." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 324, + 556, + 623 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 324, + 556, + 623 + ], + "spans": [ + { + "bbox": [ + 313, + 324, + 556, + 623 + ], + "type": "text", + "content": "Specifically, we take 1,000 random 49-frame videos from RealEstate10K, feed them into VDiT under 8 noise levels " + }, + { + "bbox": [ + 313, + 324, + 556, + 623 + ], + "type": "inline_equation", + "content": "(1/8, 2/8, \dots, 1)" + }, + { + "bbox": [ + 313, + 324, + 556, + 623 + ], + "type": "text", + "content": ", and extract the activations for all 32 DiT blocks. 
Next, we split the random videos into 900 train and 100 test videos and train a linear ridge regression model to predict the rotation pitch/yaw/roll angles and translation vectors for the entire viewpoint trajectory (" + }, + { + "bbox": [ + 313, + 324, + 556, + 623 + ], + "type": "inline_equation", + "content": "49 \times 6" + }, + { + "bbox": [ + 313, + 324, + 556, + 623 + ], + "type": "text", + "content": " target values in total). This results in " + }, + { + "bbox": [ + 313, + 324, + 556, + 623 + ], + "type": "inline_equation", + "content": "8 \times 32" + }, + { + "bbox": [ + 313, + 324, + 556, + 623 + ], + "type": "text", + "content": " trained models, and we report the rotation and (normalized) translation errors [39] on a held-out test set of 100 videos in Figure 5. Surprisingly, VDiT can accurately predict the camera pose, achieving minimum test errors of " + }, + { + "bbox": [ + 313, + 324, + 556, + 623 + ], + "type": "inline_equation", + "content": "\approx 0.025" + }, + { + "bbox": [ + 313, + 324, + 556, + 623 + ], + "type": "text", + "content": " for rotation and " + }, + { + "bbox": [ + 313, + 324, + 556, + 623 + ], + "type": "inline_equation", + "content": "\approx 0.48" + }, + { + "bbox": [ + 313, + 324, + 556, + 623 + ], + "type": "text", + "content": " for translation prediction. The knowledge quality increases around block #9 and peaks in the range of blocks #13-21. We reason that, since the camera information in block #13 is stored in such a disentangled manner, the model is using it to build other representations; hence, conditioning the camera in this block is risky and unnecessary and would interfere with other visual features, as shown in our ablations. We therefore propose to input the camera conditioning only into the first 8 blocks and leave the remaining 24 DiT blocks unconditioned. We find in Section 4.3 that this not only reduces the number of trainable parameters by " + }, + { + "bbox": [ + 313, + 324, + 556, + 623 + ], + "type": "inline_equation", + "content": "\approx 4" + }, + { + "bbox": [ + 313, + 324, + 556, + 623 + ], + "type": "text", + "content": " times and improves training speed by " + }, + { + "bbox": [ + 313, + 324, + 556, + 623 + ], + "type": "inline_equation", + "content": "\approx 15\%" + }, + { + "bbox": [ + 313, + 324, + 556, + 623 + ], + "type": "text", + "content": ", but also enhances the visual quality by " + }, + { + "bbox": [ + 313, + 324, + 556, + 623 + ], + "type": "inline_equation", + "content": "\approx 10\%" + }, + { + "bbox": [ + 313, + 324, + 556, + 623 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 635, + 503, + 647 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 635, + 503, + 647 + ], + "spans": [ + { + "bbox": [ + 313, + 635, + 503, + 647 + ], + "type": "text", + "content": "3.5. Mitigating training data limitations" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 653, + 555, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 653, + 555, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 653, + 555, + 713 + ], + "type": "text", + "content": "Estimating camera parameters from in-the-wild videos remains challenging, as leading methods like [114, 115, 145, 192] frequently fail when processing videos containing dynamic scene content. 
This limitation results in camera-annotated datasets being heavily biased" + } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "text", + "content": "22879" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 70, + 71, + 278, + 190 + ], + "blocks": [ + { + "bbox": [ + 70, + 71, + 278, + 190 + ], + "lines": [ + { + "bbox": [ + 70, + 71, + 278, + 190 + ], + "spans": [ + { + "bbox": [ + 70, + 71, + 278, + 190 + ], + "type": "image", + "image_path": "81ea02059ff1477dd730701760e70616e713378dad921b2c3c191cac9198b885.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 54, + 200, + 555, + 246 + ], + "lines": [ + { + "bbox": [ + 54, + 200, + 555, + 246 + ], + "spans": [ + { + "bbox": [ + 54, + 200, + 555, + 246 + ], + "type": "text", + "content": "Figure 5. Video DiT is secretly a camera pose estimator. We perform linear probing of camera poses in each of VDiT's blocks for various noise levels and observe that video DiT performs pose estimation under the hood. Its middle blocks carry the most accurate information about the camera locations and orientations, which indicates that the camera signal emerges in the early layers to help the middle and late blocks render other visual features aligned with the viewpoint." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 279, + 72, + 541, + 190 + ], + "blocks": [ + { + "bbox": [ + 279, + 72, + 541, + 190 + ], + "lines": [ + { + "bbox": [ + 279, + 72, + 541, + 190 + ], + "spans": [ + { + "bbox": [ + 279, + 72, + 541, + 190 + ], + "type": "image", + "image_path": "5b13402e9380f67cf4a5cc2290fae66fa5179e151f60267cd7fe2a1dbed59fa5.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "bbox": [ + 54, + 266, + 296, + 396 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 266, + 296, + 396 + ], + "spans": [ + { + "bbox": [ + 54, + 266, + 296, + 396 + ], + "type": "text", + "content": "toward static scenes, which is particularly evident in RealEstate10K (RE10K) [199], the predominant dataset for training camera-controlled video models [6, 39, 149]. We hypothesize that models fine-tuned on such data interpret camera position information as a signal to suppress scene dynamics. This bias persists even when jointly training on unconstrained 2D video data [150], because the camera conditioning branch is only activated when camera parameters are available, which occurs exclusively for static scenes from RE10K, as static scenes remain the only reliable source for accurate camera annotation."
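The probing behind Figure 5 amounts to fitting one ridge regressor per (block, noise level) pair. A minimal sketch under stated assumptions follows: the feature pooling, the regularization strength, and the plain mean-absolute error (the paper reports rotation and normalized translation errors [39]) are our simplifications.

```python
# Linear-probing sketch for one (block, noise level) cell of Figure 5;
# `feats` holds pooled VDiT activations (n_videos, d) and `targets` the
# flattened per-frame pose values (n_videos, 49 * 6). alpha is assumed.
import numpy as np
from sklearn.linear_model import Ridge

def probe_error(feats: np.ndarray, targets: np.ndarray, n_train: int = 900) -> float:
    probe = Ridge(alpha=1.0)
    probe.fit(feats[:n_train], targets[:n_train])          # 900 train videos
    pred = probe.predict(feats[n_train:])                  # 100 held-out videos
    return float(np.abs(pred - targets[n_train:]).mean())  # simplified error

# Repeating this over all 32 blocks x 8 noise levels yields the error
# grid of Figure 5; errors dip in the middle blocks (#13-21).
```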
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 57, + 398, + 297, + 637 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 398, + 297, + 637 + ], + "spans": [ + { + "bbox": [ + 57, + 398, + 297, + 637 + ], + "type": "text", + "content": "To address this fundamental limitation, we propose an alternative approach: rather than attempting to annotate dynamic scenes, which proved unsuccessful in our extensive preliminary research even with state-of-the-art methods [145], we curate a collection of 20K diverse videos featuring dynamic scenes captured by stationary cameras (see Figure 6). With stationary cameras, the camera position is inherently known (we can assign a fixed, arbitrary extrinsic), allowing us to keep the camera conditioning branch active during training while exposing the model to dynamic content, which helps it disentangle camera conditioning from scene stillness. In addition to this secondary dataset, following [150], we remove the scale ambiguity in RE10K by leveraging an off-the-shelf metric depth estimator; see Appendix H. Our experiments in Sec. 4.3 demonstrate that this straightforward yet effective data curation strategy successfully mitigates the distributional limitations of RE10K, restoring much of the lost scene dynamics while maintaining precise camera control." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 645, + 214, + 658 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 645, + 214, + 658 + ], + "spans": [ + { + "bbox": [ + 55, + 645, + 214, + 658 + ], + "type": "text", + "content": "3.6. Miscellaneous improvements" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 662, + 297, + 687 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 662, + 297, + 687 + ], + "spans": [ + { + "bbox": [ + 55, + 662, + 297, + 687 + ], + "type": "text", + "content": "In addition to our core analysis, we introduce several auxiliary techniques that enhance model performance." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 689, + 296, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 689, + 296, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 689, + 296, + 714 + ], + "type": "text", + "content": "Separate text and camera guidance. Text and camera signals require different guidance weights due to their dis" + } + ] + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 318, + 264, + 552, + 403 + ], + "blocks": [ + { + "bbox": [ + 318, + 264, + 552, + 403 + ], + "lines": [ + { + "bbox": [ + 318, + 264, + 552, + 403 + ], + "spans": [ + { + "bbox": [ + 318, + 264, + 552, + 403 + ], + "type": "image", + "image_path": "68916703f5541477f8f661f8e721d18c59c45cea7bc6d959220446f4f42366a8.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 313, + 414, + 556, + 503 + ], + "lines": [ + { + "bbox": [ + 313, + 414, + 556, + 503 + ], + "spans": [ + { + "bbox": [ + 313, + 414, + 556, + 503 + ], + "type": "text", + "content": "Figure 6. RealEstate10k [199] videos (upper 2 rows) contain diverse camera trajectories, but are strongly biased towards static scenes. To mitigate this bias and also increase concept diversity, we curate 20K videos with stationary cameras but dynamic content (lower 2 rows). Such datasets are easy to construct and surprisingly effective. 
Section 4.3 shows that integrating the dataset into our training improves visual quality on out-of-distribution prompts by " + }, + { + "bbox": [ + 313, + 414, + 556, + 503 + ], + "type": "inline_equation", + "content": "17\%" + }, + { + "bbox": [ + 313, + 414, + 556, + 503 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_caption" + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 522, + 555, + 547 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 522, + 555, + 547 + ], + "spans": [ + { + "bbox": [ + 313, + 522, + 555, + 547 + ], + "type": "text", + "content": "tinct nature, motivating us to separate their classifier-free guidance (CFG) [9, 44]. We formulate the guidance as:" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 358, + 555, + 555, + 583 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 358, + 555, + 555, + 583 + ], + "spans": [ + { + "bbox": [ + 358, + 555, + 555, + 583 + ], + "type": "interline_equation", + "content": "\hat{s}(\boldsymbol{x} \mid \boldsymbol{t}, \boldsymbol{c}) = (1 + w_y + w_c)\, s_\theta(\boldsymbol{x} \mid \boldsymbol{t}, \boldsymbol{c}) - w_y\, s_\theta(\boldsymbol{x} \mid \boldsymbol{c}) - w_c\, s_\theta(\boldsymbol{x} \mid \boldsymbol{t}), \tag{1}", + "image_path": "83e7d15220f4493e226548c437ffa7bcea73b63b3470f759a188ab92f7d1156a.jpg" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 590, + 555, + 651 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 590, + 555, + 651 + ], + "spans": [ + { + "bbox": [ + 313, + 590, + 555, + 651 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 590, + 555, + 651 + ], + "type": "inline_equation", + "content": "\hat{s}(\cdot)" + }, + { + "bbox": [ + 313, + 590, + 555, + 651 + ], + "type": "text", + "content": " denotes the final update direction used during synthesis, " + }, + { + "bbox": [ + 313, + 590, + 555, + 651 + ], + "type": "inline_equation", + "content": "s_{\theta}" + }, + { + "bbox": [ + 313, + 590, + 555, + 651 + ], + "type": "text", + "content": " represents the model's predicted update direction, " + }, + { + "bbox": [ + 313, + 590, + 555, + 651 + ], + "type": "inline_equation", + "content": "\pmb{t}" + }, + { + "bbox": [ + 313, + 590, + 555, + 651 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 590, + 555, + 651 + ], + "type": "inline_equation", + "content": "\pmb{c}" + }, + { + "bbox": [ + 313, + 590, + 555, + 651 + ], + "type": "text", + "content": " are text and camera conditions, and " + }, + { + "bbox": [ + 313, + 590, + 555, + 651 + ], + "type": "inline_equation", + "content": "w_{y}" + }, + { + "bbox": [ + 313, + 590, + 555, + 651 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 590, + 555, + 651 + ], + "type": "inline_equation", + "content": "w_{c}" + }, + { + "bbox": [ + 313, + 590, + 555, + 651 + ], + "type": "text", + "content": " are their respective CFG weights. We zero out the corresponding condition tensor for unconditional generation." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 653, + 556, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 653, + 556, + 714 + ], + "spans": [ + { + "bbox": [ + 313, + 653, + 556, + 714 + ], + "type": "text", + "content": "ControlNet with feedback. 
Traditional ControlNet [188] conditioning, used in recent camera control methods [6, 39, 149], only processes conditioning signals without accessing the main branch. Our experiments show that using a bidirectional ControlNet produces better camera representations." + } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "text", + "content": "22880" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 58, + 71, + 295, + 121 + ], + "blocks": [ + { + "bbox": [ + 58, + 71, + 295, + 121 + ], + "lines": [ + { + "bbox": [ + 58, + 71, + 295, + 121 + ], + "spans": [ + { + "bbox": [ + 58, + 71, + 295, + 121 + ], + "type": "table", + "html": "
<table><thead><tr><th rowspan="2">Method</th><th colspan="5">Human Preference</th></tr>
<tr><th>CA</th><th>MQ</th><th>TA</th><th>VQ</th><th>Overall</th></tr></thead><tbody>
<tr><td>Ours vs. VD3D (FIT)</td><td>89.5%</td><td>79.0%</td><td>87.5%</td><td>97.5%</td><td>95.0%</td></tr>
<tr><td>Ours vs. VD3D (DiT)</td><td>65.0%</td><td>87.5%</td><td>83.5%</td><td>95.0%</td><td>92.5%</td></tr></tbody></table>
", + "image_path": "cbc675c62bf8069c3abb44c46bbdae5edf36531dd0dc97fa4fa0a8d8e1dd43f0.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 138, + 296, + 205 + ], + "lines": [ + { + "bbox": [ + 55, + 138, + 296, + 205 + ], + "spans": [ + { + "bbox": [ + 55, + 138, + 296, + 205 + ], + "type": "text", + "content": "Table 1. User study. We compare our approach to the original VD3D (FIT) and reimplemented VD3D (DiT) on top of our base model. We conduct a user study where participants indicate their preference based on camera alignment (CA), motion quality (MQ), text alignment (TA), visual quality (VQ), and overall preference (Overall)." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 55, + 226, + 296, + 262 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 226, + 296, + 262 + ], + "spans": [ + { + "bbox": [ + 55, + 226, + 296, + 262 + ], + "type": "text", + "content": "This modification behaves as a feedback mechanism [71] provided by the main synthesis branch to the camera processing branch." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 265, + 296, + 326 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 265, + 296, + 326 + ], + "spans": [ + { + "bbox": [ + 55, + 265, + 296, + 326 + ], + "type": "text", + "content": "Dropping context in the camera branch. Applying cross-attention over the context information (text prompts, resolution, etc.) in the camera DiT-XS blocks worsens visual quality and camera steering due to harmful interference of the context embeddings with camera representations." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 336, + 135, + 350 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 336, + 135, + 350 + ], + "spans": [ + { + "bbox": [ + 55, + 336, + 135, + 350 + ], + "type": "text", + "content": "4. Experiments" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 360, + 296, + 420 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 360, + 296, + 420 + ], + "spans": [ + { + "bbox": [ + 55, + 360, + 296, + 420 + ], + "type": "text", + "content": "Datasets. Our base VDiT model was trained on a large-scale dataset of text-annotated images and videos. VDiT-CC is fine-tuned from VDiT on RealEstate10K [199], contains " + }, + { + "bbox": [ + 55, + 360, + 296, + 420 + ], + "type": "inline_equation", + "content": "\\approx 65\\mathrm{M}" + }, + { + "bbox": [ + 55, + 360, + 296, + 420 + ], + "type": "text", + "content": " video clips with per-frame camera parameters since it is the setup used by existing methods [6, 39, 149]." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 422, + 296, + 531 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 422, + 296, + 531 + ], + "spans": [ + { + "bbox": [ + 55, + 422, + 296, + 531 + ], + "type": "text", + "content": "Metrics. To assess the performance, we rely on a wide range of automatic quantitative metrics. We use FID [43], FVD [136], and CLIP score [42] to evaluate visual quality, and rotation and normalized translation errors [39] of ParticleSfM [192]-reconstructed trajectories to assess camera steerability. We evaluate them both on RE10K and MSR-VTT [161], since the latter allows to assess zero-shot generalization on out-of-distribution data. Moreover, we conduct a user study with details in the appendix in Sec. J." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 539, + 121, + 550 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 539, + 121, + 550 + ], + "spans": [ + { + "bbox": [ + 55, + 539, + 121, + 550 + ], + "type": "text", + "content": "4.1. Baselines" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 555, + 296, + 664 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 555, + 296, + 664 + ], + "spans": [ + { + "bbox": [ + 55, + 555, + 296, + 664 + ], + "type": "text", + "content": "We select three camera-control methods: MotionCtrl [149], CameraCtrl [39], and VD3D [6]. MotionCtrl and CameraCtrl use a UNet-based video diffusion backbone [37], while VD3D builds on top of FIT [21, 93] and, as such, is easily extendable to our video DiT [100] setup. Hence, we re-implement VD3D on top of our VDiT model to obtain an additional "VD3D+DiT" baseline. Moreover, we provide comparisons with an open-source model, CogVideoX [169]. See Sec. C of the appendix for more details." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 55, + 672, + 137, + 683 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 672, + 137, + 683 + ], + "spans": [ + { + "bbox": [ + 55, + 672, + 137, + 683 + ], + "type": "text", + "content": "4.2. Main results" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 55, + 689, + 296, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 689, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 689, + 296, + 713 + ], + "type": "text", + "content": "We present quantitative comparisons with the baselines in Tab. 2. One can observe that just switching from the 4B-" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 72, + 555, + 275 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 72, + 555, + 275 + ], + "spans": [ + { + "bbox": [ + 313, + 72, + 555, + 275 + ], + "type": "text", + "content": "parameter pixel-space FIT [21] backbone, employed by the original VD3D approach, to our larger 11.5B-parameter latent-space DiT yields clear improvements across most metrics. Next, the results demonstrate that AC3D establishes a new state of the art against all baselines. Evaluating the quality of camera motion from still images is difficult, so we instead visualize all qualitative results on the website provided with our supplementary material. Therein, we can observe that AC3D better follows pose conditioning and achieves higher visual fidelity. We conduct user studies against VD3D+FIT (the original model) and VD3D+DiT (our improved re-implementation on top of the bigger video transformer). The results are presented in Table 1: AC3D outperforms them across all qualitative aspects, achieving a " + }, + { + "bbox": [ + 313, + 72, + 555, + 275 + ], + "type": "inline_equation", + "content": "90\%+" + }, + { + "bbox": [ + 313, + 72, + 555, + 275 + ], + "type": "text", + "content": " overall preference score. Finally, we encourage the reader to assess the visual quality by observing videos on our website." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 314, + 285, + 382, + 296 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 285, + 382, + 296 + ], + "spans": [ + { + "bbox": [ + 314, + 285, + 382, + 296 + ], + "type": "text", + "content": "4.3. 
Ablations" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 304, + 555, + 425 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 304, + 555, + 425 + ], + "spans": [ + { + "bbox": [ + 313, + 304, + 555, + 425 + ], + "type": "text", + "content": "No camera conditioning. The first ablation we conduct is to drop all camera conditioning, which makes the model equivalent to the vanilla VDiT. This is needed to understand the degradation of visual quality and text alignment. The results (Tab. 2, row w/o camera cond) show that our model loses less only " + }, + { + "bbox": [ + 313, + 304, + 555, + 425 + ], + "type": "inline_equation", + "content": "\\approx 7\\%" + }, + { + "bbox": [ + 313, + 304, + 555, + 425 + ], + "type": "text", + "content": " of the original visual fidelity on MSR-VTT (as measured by FVD), while (as expected) greatly improving on its in-domain RE10K data. In comparison, VD3D-DiT (the closest baseline) loses " + }, + { + "bbox": [ + 313, + 304, + 555, + 425 + ], + "type": "inline_equation", + "content": "\\approx 20\\%" + }, + { + "bbox": [ + 313, + 304, + 555, + 425 + ], + "type": "text", + "content": " of its visual fidelity on MSR-VTT." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 428, + 554, + 500 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 428, + 554, + 500 + ], + "spans": [ + { + "bbox": [ + 313, + 428, + 554, + 500 + ], + "type": "text", + "content": "Importance of biasing the noise towards higher levels. As Sec. 3.3 shows, we use the truncated normal distribution with location of 0.8 and scale of 0.075 with the [0.6, 1] bounds for training AC3D. We ablate the importance of biasing the noise sampling towards high noise and observe higher motion, visual quality, and camera controllability." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 313, + 503, + 554, + 563 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 503, + 554, + 563 + ], + "spans": [ + { + "bbox": [ + 313, + 503, + 554, + 563 + ], + "type": "text", + "content": "Importance of truncating the noise schedule. We change the training and inference procedure by using no truncation during noise sampling. Instead, we condition the model with camera inputs over the whole noise range and observe decreased visual quality." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 313, + 566, + 555, + 638 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 566, + 555, + 638 + ], + "spans": [ + { + "bbox": [ + 313, + 566, + 555, + 638 + ], + "type": "text", + "content": "No camera guidance. We assess the importance of classifier-free guidance [44] on the camera conditioning in Tab. 2 (w/o camera CFG). It attains the same visual quality on both indistribution (RE10K) and out-of-distribution (MSR-VTT) data, but degrades camera following, resulting in " + }, + { + "bbox": [ + 313, + 566, + 555, + 638 + ], + "type": "inline_equation", + "content": "\\approx 5\\%" + }, + { + "bbox": [ + 313, + 566, + 555, + 638 + ], + "type": "text", + "content": " worse pose reconstruction errors." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 313, + 642, + 556, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 642, + 556, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 642, + 556, + 715 + ], + "type": "text", + "content": "Training without our data with scene motion. 
To understand how well our curated data with scene motion but stationary cameras mitigates static scene bias, we train AC3D exclusively on RE10K, and report the results in Tab. 2 (w/o our dynamic data). The model maintains similar visual quality and text alignment on RE10K (in-domain data), but" + } + ] + } + ], + "index": 17 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "text", + "content": "22881" + } + ] + } + ], + "index": 18 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 56, + 70, + 558, + 274 + ], + "blocks": [ + { + "bbox": [ + 56, + 70, + 558, + 274 + ], + "lines": [ + { + "bbox": [ + 56, + 70, + 558, + 274 + ], + "spans": [ + { + "bbox": [ + 56, + 70, + 558, + 274 + ], + "type": "table", + "html": "
<table>
<tr><td rowspan='2'>Method</td><td colspan='5'>RealEstate10K [199]</td><td colspan='5'>MSR-VTT [161]</td></tr>
<tr><td>TransErr ↓</td><td>RotErr ↓</td><td>FID ↓</td><td>FVD ↓</td><td>CLIP ↑</td><td>TransErr ↓</td><td>RotErr ↓</td><td>FID ↓</td><td>FVD ↓</td><td>CLIP ↑</td></tr>
<tr><td>MotionCtrl (U-Net)</td><td>0.477</td><td>0.094</td><td>2.99</td><td>61.70</td><td>26.46</td><td>0.593</td><td>0.137</td><td>16.85</td><td>283.12</td><td>24.11</td></tr>
<tr><td>CameraCtrl (U-Net)</td><td>0.465</td><td>0.089</td><td>2.48</td><td>55.64</td><td>26.81</td><td>0.587</td><td>0.132</td><td>12.33</td><td>201.33</td><td>25.05</td></tr>
<tr><td>VD3D (FIT)</td><td>0.409</td><td>0.043</td><td>1.40</td><td>42.43</td><td>28.07</td><td>0.504</td><td>0.050</td><td>7.80</td><td>165.18</td><td>26.89</td></tr>
<tr><td>VD3D (CogVideoX)</td><td>0.467</td><td>0.063</td><td>1.66</td><td>43.14</td><td>28.08</td><td>0.501</td><td>0.068</td><td>7.45</td><td>148.11</td><td>27.65</td></tr>
<tr><td>AC3D (CogVideoX) (ours)</td><td>0.374</td><td>0.039</td><td>1.27</td><td>38.20</td><td>28.62</td><td>0.431</td><td>0.039</td><td>5.52</td><td>116.04</td><td>28.38</td></tr>
<tr><td>MotionCtrl (VDiT)</td><td>0.504</td><td>0.126</td><td>1.74</td><td>43.81</td><td>27.69</td><td>0.589</td><td>0.146</td><td>9.92</td><td>150.20</td><td>27.25</td></tr>
<tr><td>CameraCtrl (VDiT)</td><td>0.513</td><td>0.138</td><td>1.62</td><td>42.10</td><td>27.73</td><td>0.566</td><td>0.143</td><td>8.15</td><td>146.77</td><td>27.51</td></tr>
<tr><td>VD3D (VDiT)</td><td>0.421</td><td>0.056</td><td>1.21</td><td>38.57</td><td>28.34</td><td>0.486</td><td>0.047</td><td>6.88</td><td>137.62</td><td>27.90</td></tr>
<tr><td>AC3D (VDiT) (ours)</td><td>0.358</td><td>0.035</td><td>1.18</td><td>36.55</td><td>28.76</td><td>0.428</td><td>0.038</td><td>5.34</td><td>110.71</td><td>28.58</td></tr>
<tr><td>w/o camera cond</td><td>+0.233</td><td>+0.153</td><td>+4.02</td><td>+53.83</td><td>-1.63</td><td>+0.266</td><td>+0.157</td><td>-0.48</td><td>-8.53</td><td>+0.35</td></tr>
<tr><td>w/o biasing noise</td><td>+0.093</td><td>+0.015</td><td>+0.02</td><td>+1.78</td><td>-0.32</td><td>+0.138</td><td>+0.033</td><td>+0.59</td><td>+16.92</td><td>-0.54</td></tr>
<tr><td>w/o noise truncation</td><td>+0.020</td><td>-0.003</td><td>+0.06</td><td>+1.69</td><td>-0.20</td><td>+0.016</td><td>+0.005</td><td>+0.76</td><td>+6.63</td><td>-0.18</td></tr>
<tr><td>w/o camera CFG</td><td>+0.014</td><td>+0.004</td><td>+0.49</td><td>+4.57</td><td>-0.54</td><td>+0.025</td><td>+0.003</td><td>+0.03</td><td>+1.42</td><td>-0.27</td></tr>
<tr><td>w/o our dynamic data</td><td>-0.005</td><td>-0.004</td><td>-0.06</td><td>+0.22</td><td>-0.20</td><td>+0.004</td><td>-0.001</td><td>+0.89</td><td>+4.40</td><td>-0.55</td></tr>
<tr><td>w/o metric scaled data</td><td>+0.013</td><td>+0.005</td><td>+0.17</td><td>+4.65</td><td>0.00</td><td>+0.023</td><td>+0.002</td><td>-0.01</td><td>0.00</td><td>-0.12</td></tr>
<tr><td>w/o dropping camera context</td><td>+0.013</td><td>+0.001</td><td>+0.04</td><td>+2.46</td><td>-0.65</td><td>+0.029</td><td>+0.003</td><td>+1.25</td><td>+7.41</td><td>-0.36</td></tr>
<tr><td>w/o limiting camera cond to 8 blocks</td><td>-0.001</td><td>+0.001</td><td>+0.09</td><td>+0.56</td><td>-0.02</td><td>+0.003</td><td>0.000</td><td>+0.32</td><td>+9.23</td><td>-0.33</td></tr>
<tr><td>w/ 2D training</td><td>+0.129</td><td>+0.068</td><td>+2.60</td><td>+33.85</td><td>-1.17</td><td>+0.128</td><td>+0.093</td><td>-0.26</td><td>-3.83</td><td>+0.21</td></tr>
</table>
", + "image_path": "e68fcf7d3b123e62d6b3ad6a74ed101391b9ea1e09a2ed45e50885da65b9b5b3.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "bbox": [ + 54, + 281, + 555, + 316 + ], + "lines": [ + { + "bbox": [ + 54, + 281, + 555, + 316 + ], + "spans": [ + { + "bbox": [ + 54, + 281, + 555, + 316 + ], + "type": "text", + "content": "Table 2. Quantitative evaluation. We evaluate all the models using camera pose and visual quality metrics based on unseen camera trajectories. We compute translation and rotation errors based on the estimated camera poses from generations using ParticleSfM [192]. We evaluate both in-distribution performance with RealEstate10K [199] and out-of-distribution performance with MSR-VTT [161]." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 54, + 336, + 294, + 385 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 336, + 294, + 385 + ], + "spans": [ + { + "bbox": [ + 54, + 336, + 294, + 385 + ], + "type": "text", + "content": "performance on out-of-distribution samples from MSR-VTT worsens (" + }, + { + "bbox": [ + 54, + 336, + 294, + 385 + ], + "type": "inline_equation", + "content": "\\approx 17\\%" + }, + { + "bbox": [ + 54, + 336, + 294, + 385 + ], + "type": "text", + "content": " worse FID and " + }, + { + "bbox": [ + 54, + 336, + 294, + 385 + ], + "type": "inline_equation", + "content": "\\approx 4\\%" + }, + { + "bbox": [ + 54, + 336, + 294, + 385 + ], + "type": "text", + "content": " worse FVD). The quality of scene motion is better assessed by referring to our qualitative video comparisons in the supplementary." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 54, + 388, + 295, + 473 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 388, + 295, + 473 + ], + "spans": [ + { + "bbox": [ + 54, + 388, + 295, + 473 + ], + "type": "text", + "content": "Importance of metric scaled cameras. We train AC3D using the original RE10K's camera parameters without our scaling procedure and present the results in Tab. 2 (w/o metric scaled data). This is a more ambiguous conditioning signal, and results in worse visual quality (" + }, + { + "bbox": [ + 54, + 388, + 295, + 473 + ], + "type": "inline_equation", + "content": "\\approx 10\\%" + }, + { + "bbox": [ + 54, + 388, + 295, + 473 + ], + "type": "text", + "content": " FVD on RE10K) and camera following performance (" + }, + { + "bbox": [ + 54, + 388, + 295, + 473 + ], + "type": "inline_equation", + "content": "\\approx 12\\%" + }, + { + "bbox": [ + 54, + 388, + 295, + 473 + ], + "type": "text", + "content": " worse trajectory reconstruction)." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 54, + 476, + 295, + 561 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 476, + 295, + 561 + ], + "spans": [ + { + "bbox": [ + 54, + 476, + 295, + 561 + ], + "type": "text", + "content": "Providing context into the camera branch. As discussed in Sec. 3.6, we chose not to input the context information (text embeddings, resolution conditioning, etc.) into the camera branch to avoid potential interference with the camera representations. As Tab. 
2 (w/o dropping camera context) shows, providing this information indeed results in " + }, + { + "bbox": [ + 54, + 476, + 295, + 561 + ], + "type": "inline_equation", + "content": "\\approx 4\\%" + }, + { + "bbox": [ + 54, + 476, + 295, + 561 + ], + "type": "text", + "content": " worse camera following and " + }, + { + "bbox": [ + 54, + 476, + 295, + 561 + ], + "type": "inline_equation", + "content": "\\approx 15\\%" + }, + { + "bbox": [ + 54, + 476, + 295, + 561 + ], + "type": "text", + "content": " lower visual quality." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 54, + 565, + 295, + 673 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 565, + 295, + 673 + ], + "spans": [ + { + "bbox": [ + 54, + 565, + 295, + 673 + ], + "type": "text", + "content": "Importance of limiting conditioning to the first 8 VDiT blocks. Following our insights in Sec. 3.4, we condition AC3D only in the first 8 blocks. Trying to condition in all the 32 DiT blocks (w/o limiting camera cond to 8 blocks) worsens the visual quality by " + }, + { + "bbox": [ + 54, + 565, + 295, + 673 + ], + "type": "inline_equation", + "content": "\\approx 10\\%" + }, + { + "bbox": [ + 54, + 565, + 295, + 673 + ], + "type": "text", + "content": ", while keeping the quality control at the same level. This suggests that the middle and late VDiT layers indeed rely on processed camera information and conditioning them on external camera poses might lead to interference with other visual features." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 54, + 677, + 296, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 677, + 296, + 714 + ], + "spans": [ + { + "bbox": [ + 54, + 677, + 296, + 714 + ], + "type": "text", + "content": "Joint training with 2D data. To mitigate visual quality and scene motion degradation, we attempted to perform joint fine-tuning on 2D video data (without camera annotations)" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 313, + 336, + 555, + 420 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 336, + 555, + 420 + ], + "spans": [ + { + "bbox": [ + 313, + 336, + 555, + 420 + ], + "type": "text", + "content": "which was used in base VDiT training by applying dropout on camera inputs for it. Prior work shows performance benefits with this strategy [150] and, as Tab. 2 (with " + }, + { + "bbox": [ + 313, + 336, + 555, + 420 + ], + "type": "inline_equation", + "content": "2D" + }, + { + "bbox": [ + 313, + 336, + 555, + 420 + ], + "type": "text", + "content": " training) shows, it indeed helps to maintain slightly higher visual fidelity in our case (" + }, + { + "bbox": [ + 313, + 336, + 555, + 420 + ], + "type": "inline_equation", + "content": "\\approx 3\\%" + }, + { + "bbox": [ + 313, + 336, + 555, + 420 + ], + "type": "text", + "content": " better FVD on MSR-VTT). However, camera steering severely deteriorates, leading to up to " + }, + { + "bbox": [ + 313, + 336, + 555, + 420 + ], + "type": "inline_equation", + "content": "3\\times" + }, + { + "bbox": [ + 313, + 336, + 555, + 420 + ], + "type": "text", + "content": " worse results for translation/rotation errors." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 438, + 392, + 451 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 438, + 392, + 451 + ], + "spans": [ + { + "bbox": [ + 313, + 438, + 392, + 451 + ], + "type": "text", + "content": "5. 
Conclusions" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 461, + 556, + 628 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 461, + 556, + 628 + ], + "spans": [ + { + "bbox": [ + 313, + 461, + 556, + 628 + ], + "type": "text", + "content": "Our findings demonstrate that principled analysis of camera motion in video diffusion models leads to significant improvements in control precision and efficiency. Through enhanced conditioning schedules, targeted layerspecific camera control, and better-calibrated training data, AC3D achieves state-of-the-art performance in 3D camera-controlled video synthesis while maintaining high visual quality and natural scene dynamics. This work establishes a foundation for more precise and efficient camera control in text-to-video generation. We discuss the limitations of our approach in Appendix B. In future work, we plan to focus on further improving data limitations and developing control mechanisms for camera trajectories far outside of the training distribution." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 647, + 429, + 661 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 647, + 429, + 661 + ], + "spans": [ + { + "bbox": [ + 313, + 647, + 429, + 661 + ], + "type": "text", + "content": "6. Acknowledgements" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 669, + 556, + 703 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 669, + 556, + 703 + ], + "spans": [ + { + "bbox": [ + 313, + 669, + 556, + 703 + ], + "type": "text", + "content": "DBL acknowledges support from NSERC under the RGPIN program, the Canada Foundation for Innovation, and the Ontario Research Fund." + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "text", + "content": "22882" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 71, + 115, + 83 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 71, + 115, + 83 + ], + "spans": [ + { + "bbox": [ + 56, + 71, + 115, + 83 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 62, + 91, + 296, + 712 + ], + "type": "list", + "angle": 0, + "index": 14, + "blocks": [ + { + "bbox": [ + 66, + 91, + 296, + 112 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 91, + 296, + 112 + ], + "spans": [ + { + "bbox": [ + 66, + 91, + 296, + 112 + ], + "type": "text", + "content": "[1] Jimmy Lei Ba. Layer normalization. arXiv preprint arXiv:1607.06450, 2016. 17" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 66, + 114, + 296, + 156 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 114, + 296, + 156 + ], + "spans": [ + { + "bbox": [ + 66, + 114, + 296, + 156 + ], + "type": "text", + "content": "[2] Sherwin Bahmani, Jeong Joon Park, Despoina Paschalidou, Hao Tang, Gordon Wetzstein, Leonidas Guibas, Luc Van Gool, and Radu Timofte. 3d-aware video generation. In TMLR, 2023. 
2, 20" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 66, + 159, + 296, + 201 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 159, + 296, + 201 + ], + "spans": [ + { + "bbox": [ + 66, + 159, + 296, + 201 + ], + "type": "text", + "content": "[3] Sherwin Bahmani, Jeong Joon Park, Despoina Paschalidou, Xingguang Yan, Gordon Wetzstein, Leonidas Guibas, and Andrea Tagliasacchi. CC3D: Layout-conditioned generation of compositional 3D scenes. In Proc. ICCV, 2023. 20" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 66, + 203, + 296, + 257 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 203, + 296, + 257 + ], + "spans": [ + { + "bbox": [ + 66, + 203, + 296, + 257 + ], + "type": "text", + "content": "[4] Sherwin Bahmani, Xian Liu, Wang Yifan, Ivan Skorokhodov, Victor Rong, Ziwei Liu, Xihui Liu, Jeong Joon Park, Sergey Tulyakov, Gordon Wetzstein, Andrea Tagliasacchi, and David B. Lindell. Tc4d: Trajectory-conditioned text-to-4d generation. In Proc. ECCV, 2024. 2, 20" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 66, + 258, + 296, + 312 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 258, + 296, + 312 + ], + "spans": [ + { + "bbox": [ + 66, + 258, + 296, + 312 + ], + "type": "text", + "content": "[5] Sherwin Bahmani, Ivan Skorokhodov, Victor Rong, Gordon Wetzstein, Leonidas Guibas, Peter Wonka, Sergey Tulyakov, Jeong Joon Park, Andrea Tagliasacchi, and David B. Lindell. 4d-fy: Text-to-4d generation using hybrid score distillation sampling. In Proc. CVPR, 2024. 2, 20" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 66, + 315, + 296, + 378 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 315, + 296, + 378 + ], + "spans": [ + { + "bbox": [ + 66, + 315, + 296, + 378 + ], + "type": "text", + "content": "[6] Sherwin Bahmani, Ivan Skorokhodov, Aliaksandr Siarohin, Willi Menapace, Guocheng Qian, Michael Vasilkovsky, Hsin-Ying Lee, Chaoyang Wang, Jiaxu Zou, Andrea Tagliasacchi, et al. Vd3d: Taming large video diffusion transformers for 3d camera control. arXiv preprint arXiv:2407.12781, 2024. 2, 3, 4, 6, 7, 18" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 66, + 380, + 296, + 434 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 380, + 296, + 434 + ], + "spans": [ + { + "bbox": [ + 66, + 380, + 296, + 434 + ], + "type": "text", + "content": "[7] Andreas Blattmann, Tim Dockhorn, Sumith Kulal, Daniel Mendelevitch, Maciej Kilian, Dominik Lorenz, Yam Levi, Zion English, Vikram Voleti, Adam Letts, et al. Stable video diffusion: Scaling latent video diffusion models to large datasets. arXiv preprint arXiv:2311.15127, 2023. 2, 3" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 66, + 436, + 296, + 479 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 436, + 296, + 479 + ], + "spans": [ + { + "bbox": [ + 66, + 436, + 296, + 479 + ], + "type": "text", + "content": "[8] Andreas Blattmann, Robin Rombach, Huan Ling, Tim Dockhorn, Seung Wook Kim, Sanja Fidler, and Karsten Kreis. Align your latents: High-resolution video synthesis with latent diffusion models. In Proc. CVPR, 2023. 2" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 66, + 480, + 296, + 512 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 480, + 296, + 512 + ], + "spans": [ + { + "bbox": [ + 66, + 480, + 296, + 512 + ], + "type": "text", + "content": "[9] Tim Brooks, Aleksander Holynski, and Alexei A Efros. 
Instructpix2pix: Learning to follow image editing instructions. In Proc. CVPR, 2023. 6" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 62, + 514, + 296, + 568 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 514, + 296, + 568 + ], + "spans": [ + { + "bbox": [ + 62, + 514, + 296, + 568 + ], + "type": "text", + "content": "[10] Tim Brooks, Bill Peebles, Connor Holmes, Will DePue, Yufei Guo, Li Jing, David Schnurr, Joe Taylor, Troy Luhman, Eric Luhman, Clarence Ng, Ricky Wang, and Aditya Ramesh. Video generation models as world simulators. OpenAI technical reports, 2024. 1, 2, 3, 17" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 62, + 570, + 296, + 612 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 570, + 296, + 612 + ], + "spans": [ + { + "bbox": [ + 62, + 570, + 296, + 612 + ], + "type": "text", + "content": "[11] Yukang Cao, Liang Pan, Kai Han, Kwan-Yee K Wong, and Ziwei Liu. Avatargo: Zero-shot 4d human-object interaction generation and animation. arXiv preprint arXiv:2410.07164, 2024. 20" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 62, + 614, + 296, + 656 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 614, + 296, + 656 + ], + "spans": [ + { + "bbox": [ + 62, + 614, + 296, + 656 + ], + "type": "text", + "content": "[12] Zenghao Chai, Chen Tang, Yongkang Wong, and Mohan Kankanhalli. Star: Skeleton-aware text-based 4d avatar generation with in-network motion retargeting. arXiv preprint arXiv:2406.04629, 2024. 20" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 62, + 658, + 296, + 712 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 658, + 296, + 712 + ], + "spans": [ + { + "bbox": [ + 62, + 658, + 296, + 712 + ], + "type": "text", + "content": "[13] Eric R Chan, Connor Z Lin, Matthew A Chan, Koki Nagano, Boxiao Pan, Shalini De Mello, Orazio Gallo, Leonidas J Guibas, Jonathan Tremblay, Sameh Khamis, et al. Efficient geometry-aware 3D generative adversarial networks. In Proc. CVPR, 2022. 20" + } + ] + } + ], + "index": 13 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 321, + 73, + 555, + 712 + ], + "type": "list", + "angle": 0, + "index": 30, + "blocks": [ + { + "bbox": [ + 321, + 73, + 555, + 127 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 321, + 73, + 555, + 127 + ], + "spans": [ + { + "bbox": [ + 321, + 73, + 555, + 127 + ], + "type": "text", + "content": "[14] Eric R Chan, Koki Nagano, Matthew A Chan, Alexander W Bergman, Jeong Joon Park, Axel Levy, Miika Aittala, Shalini De Mello, Tero Karras, and Gordon Wetzstein. Generative novel view synthesis with 3d-aware diffusion models. In Proc. ICCV, 2023. 20" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 321, + 129, + 555, + 171 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 321, + 129, + 555, + 171 + ], + "spans": [ + { + "bbox": [ + 321, + 129, + 555, + 171 + ], + "type": "text", + "content": "[15] Ce Chen, Shaoli Huang, Xuelin Chen, Guangyi Chen, Xiaoguang Han, Kun Zhang, and Mingming Gong. Ct4d: Consistent text-to-4d generation with animatable meshes. arXiv preprint arXiv:2408.08342, 2024. 
20" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 321, + 173, + 555, + 215 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 321, + 173, + 555, + 215 + ], + "spans": [ + { + "bbox": [ + 321, + 173, + 555, + 215 + ], + "type": "text", + "content": "[16] Eric Ming Chen, Sidhanth Holalkere, Ruyu Yan, Kai Zhang, and Abe Davis. Ray conditioning: Trading photoconsistency for photo-realism in multi-view image generation. In Proc. ICCV, 2023. 4" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 321, + 217, + 555, + 271 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 321, + 217, + 555, + 271 + ], + "spans": [ + { + "bbox": [ + 321, + 217, + 555, + 271 + ], + "type": "text", + "content": "[17] Haoxin Chen, Menghan Xia, Yingqing He, Yong Zhang, Xiaodong Cun, Shaoshu Yang, Jinbo Xing, Yaofang Liu, Qifeng Chen, Xintao Wang, Chao Weng, and Ying Shan. Videocrafter1: Open diffusion models for high-quality video generation. arXiv preprint arXiv:2310.19512, 2023. 3" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 321, + 272, + 555, + 315 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 321, + 272, + 555, + 315 + ], + "spans": [ + { + "bbox": [ + 321, + 272, + 555, + 315 + ], + "type": "text", + "content": "[18] Junsong Chen, Yue Wu, Simian Luo, Enze Xie, Sayak Paul, Ping Luo, Hang Zhao, and Zhenguo Li. Pixart-{\\delta} : Fast and controllable image generation with latent consistency models. arXiv preprint arXiv:2401.05252, 2024. 4" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 321, + 316, + 555, + 360 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 321, + 316, + 555, + 360 + ], + "spans": [ + { + "bbox": [ + 321, + 316, + 555, + 360 + ], + "type": "text", + "content": "[19] Kevin Chen, Christopher B Choy, Manolis Savva, Angel X Chang, Thomas Funkhouser, and Silvio Savarese. Text2Shape: Generating shapes from natural language by learning joint embeddings. In Proc. ACCV, 2018. 20" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 321, + 361, + 555, + 403 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 321, + 361, + 555, + 403 + ], + "spans": [ + { + "bbox": [ + 321, + 361, + 555, + 403 + ], + "type": "text", + "content": "[20] Rui Chen, Yongwei Chen, Ningxin Jiao, and Kui Jia. *Fantasia3D: Disentangling geometry and appearance for high-quality text-to-3D content creation.* arXiv preprint arXiv:2303.13873, 2023. 20" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 321, + 404, + 555, + 425 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 321, + 404, + 555, + 425 + ], + "spans": [ + { + "bbox": [ + 321, + 404, + 555, + 425 + ], + "type": "text", + "content": "[21] Ting Chen and Lala Li. Fit: Far-reaching interleaved transformers. arXiv preprint arXiv:2305.12689, 2023. 7" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 321, + 426, + 555, + 469 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 321, + 426, + 555, + 469 + ], + "spans": [ + { + "bbox": [ + 321, + 426, + 555, + 469 + ], + "type": "text", + "content": "[22] Soon Yau Cheong, Duygu Ceylan, Armin Mustafa, Andrew Gilbert, and Chun-Hao Paul Huang. Boosting camera motion control for video diffusion transformers. arXiv preprint arXiv:2410.10802, 2024. 
3" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 321, + 471, + 555, + 514 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 321, + 471, + 555, + 514 + ], + "spans": [ + { + "bbox": [ + 321, + 471, + 555, + 514 + ], + "type": "text", + "content": "[23] Wen-Hsuan Chu, Lei Ke, and Katerina Fragkiadaki. Dreamscene4d: Dynamic multi-object scene generation from monocular videos. arXiv preprint arXiv:2405.02280, 2024. 2, 20" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 321, + 515, + 555, + 557 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 321, + 515, + 555, + 557 + ], + "spans": [ + { + "bbox": [ + 321, + 515, + 555, + 557 + ], + "type": "text", + "content": "[24] Terrance DeVries, Miguel Angel Bautista, Nitish Srivastava, Graham W Taylor, and Joshua M Susskind. Unconstrained scene generation with locally conditioned radiance fields. In Proc. ICCV, 2021. 20" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 321, + 559, + 533, + 569 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 321, + 559, + 533, + 569 + ], + "spans": [ + { + "bbox": [ + 321, + 559, + 533, + 569 + ], + "type": "text", + "content": "[25] Sander Dieleman. Perspectives on diffusion, 2023. 2" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 321, + 571, + 555, + 633 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 321, + 571, + 555, + 633 + ], + "spans": [ + { + "bbox": [ + 321, + 571, + 555, + 633 + ], + "type": "text", + "content": "[26] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In Proc. ICLR, 2021. 17" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 321, + 635, + 555, + 689 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 321, + 635, + 555, + 689 + ], + "spans": [ + { + "bbox": [ + 321, + 635, + 555, + 689 + ], + "type": "text", + "content": "[27] Mohamed El Banani, Amit Raj, Kevis-Kokitsi Maninis, Abhishek Kar, Yuanshen Li, Michael Rubinstein, Deqing Sun, Leonidas Guibas, Justin Johnson, and Varun Jampani. Probing the 3d awareness of visual foundation models. In Proc. CVPR, 2024. 
2" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 321, + 691, + 555, + 712 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 321, + 691, + 555, + 712 + ], + "spans": [ + { + "bbox": [ + 321, + 691, + 555, + 712 + ], + "type": "text", + "content": "[28] Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas Müller, Harry Saini, Yam Levi, Dominik" + } + ] + } + ], + "index": 29 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "text", + "content": "22883" + } + ] + } + ], + "index": 31 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 61, + 72, + 296, + 712 + ], + "type": "list", + "angle": 0, + "index": 15, + "blocks": [ + { + "bbox": [ + 81, + 72, + 294, + 106 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 81, + 72, + 294, + 106 + ], + "spans": [ + { + "bbox": [ + 81, + 72, + 294, + 106 + ], + "type": "text", + "content": "Lorenz, Axel Sauer, Frederic Boesel, et al. Scaling rectified flow transformers for high-resolution image synthesis. In Proc. ICMLR, 2024. 4, 5, 18" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 61, + 107, + 295, + 129 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 107, + 295, + 129 + ], + "spans": [ + { + "bbox": [ + 61, + 107, + 295, + 129 + ], + "type": "text", + "content": "[29] Gunnar Farnebäck. Two-frame motion estimation based on polynomial expansion. In Proc. SCIA, 2003. 19" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 61, + 130, + 295, + 173 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 130, + 295, + 173 + ], + "spans": [ + { + "bbox": [ + 61, + 130, + 295, + 173 + ], + "type": "text", + "content": "[30] Qijun Feng, Zhen Xing, Zuxuan Wu, and Yu-Gang Jiang. FDGaussian: Fast Gaussian splatting from single image via geometric-aware diffusion model. arXiv preprint arXiv:2403.10242, 2024. 20" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 62, + 175, + 296, + 219 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 175, + 296, + 219 + ], + "spans": [ + { + "bbox": [ + 62, + 175, + 296, + 219 + ], + "type": "text", + "content": "[31] Yutao Feng, Yintong Shang, Xiang Feng, Lei Lan, Shandian Zhe, Tianjia Shao, Hongzhi Wu, Kun Zhou, Hao Su, Chenfanfu Jiang, et al. Elastogen: 4d generative elastodynamics. arXiv preprint arXiv:2405.15056, 2024. 20" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 62, + 220, + 296, + 274 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 220, + 296, + 274 + ], + "spans": [ + { + "bbox": [ + 62, + 220, + 296, + 274 + ], + "type": "text", + "content": "[32] Peng Gao, Le Zhuo, Ziyi Lin, Dongyang Liu, Ruoyi Du, Xu Luo, Longtian Qiu, Yuhang Zhang, et al. Lumina-t2x: Transforming text into any modality, resolution, and duration via flow-based large diffusion transformers. arXiv preprint arXiv:2405.05945, 2024. 
3, 17, 18" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 62, + 276, + 296, + 319 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 276, + 296, + 319 + ], + "spans": [ + { + "bbox": [ + 62, + 276, + 296, + 319 + ], + "type": "text", + "content": "[33] Quankai Gao, Qiangeng Xu, Zhe Cao, Ben Mildenhall, Wenchao Ma, Le Chen, Danhang Tang, and Ulrich Neumann. Gaussianflow: Splitting Gaussian dynamics for 4D content creation. arXiv preprint arXiv:2403.12365, 2024. 2, 20" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 62, + 321, + 296, + 374 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 321, + 296, + 374 + ], + "spans": [ + { + "bbox": [ + 62, + 321, + 296, + 374 + ], + "type": "text", + "content": "[34] Ruiqi Gao, Aleksander Holynski, Philipp Henzler, Arthur Brussee, Ricardo Martin-Brualla, Pratul Srinivasan, Jonathan T Barron, and Ben Poole. Cat3d: Create anything in 3d with multi-view diffusion models. In Proc. NeurIPS, 2024. 20" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 62, + 376, + 295, + 409 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 376, + 295, + 409 + ], + "spans": [ + { + "bbox": [ + 62, + 376, + 295, + 409 + ], + "type": "text", + "content": "[35] William Gao, Noam Aigerman, Thibault Groueix, Vova Kim, and Rana Hanocka. TextDeformer: Geometry manipulation using text guidance. In SIGGRAPH, 2023. 20" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 62, + 411, + 296, + 464 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 411, + 296, + 464 + ], + "spans": [ + { + "bbox": [ + 62, + 411, + 296, + 464 + ], + "type": "text", + "content": "[36] Jiatao Gu, Alex Trevithick, Kai-En Lin, Joshua M Suskind, Christian Theobalt, Lingjie Liu, and Ravi Ramamoorthi. NerfDiff: Single-image view synthesis with NeRF-guided distillation from 3D-aware diffusion. In Proc. ICML, 2023. 20" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 62, + 466, + 295, + 510 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 466, + 295, + 510 + ], + "spans": [ + { + "bbox": [ + 62, + 466, + 295, + 510 + ], + "type": "text", + "content": "[37] Yuwei Guo, Ceyuan Yang, Anyi Rao, Yaohui Wang, Yu Qiao, Dahua Lin, and Bo Dai. Animatediff: Animate your personalized text-to-image diffusion models without specific tuning. In Proc. ICLR, 2024. 2, 7" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 62, + 511, + 296, + 544 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 511, + 296, + 544 + ], + "spans": [ + { + "bbox": [ + 62, + 511, + 296, + 544 + ], + "type": "text", + "content": "[38] Junlin Han, Filippos Kokkinos, and Philip Torr. VFusion3D: Learning scalable 3D generative models from video diffusion models. arXiv preprint arXiv:2403.12034, 2024. 20" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 62, + 545, + 295, + 589 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 545, + 295, + 589 + ], + "spans": [ + { + "bbox": [ + 62, + 545, + 295, + 589 + ], + "type": "text", + "content": "[39] Hao He, Yinghao Xu, Yuwei Guo, Gordon Wetzstein, Bo Dai, Hongsheng Li, and Ceyuan Yang. Cameractrl: Enabling camera control for text-to-video generation. arXiv preprint arXiv:2404.02101, 2024. 
2, 3, 4, 5, 6, 7, 19" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 62, + 590, + 295, + 634 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 590, + 295, + 634 + ], + "spans": [ + { + "bbox": [ + 62, + 590, + 295, + 634 + ], + "type": "text", + "content": "[40] Xianglong He, Junyi Chen, Sida Peng, Di Huang, Yangguang Li, Xiaoshui Huang, Chun Yuan, Wanli Ouyang, and Tong He. GVGEN: Text-to-3D generation with volumetric representation. arXiv preprint arXiv:2403.12957, 2024. 20" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 62, + 635, + 295, + 666 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 635, + 295, + 666 + ], + "spans": [ + { + "bbox": [ + 62, + 635, + 295, + 666 + ], + "type": "text", + "content": "[41] Dan Hendrycks and Kevin Gimpel. Gaussian error linear units (gelus). arXiv preprint arXiv:1606.08415, 2016. 17, 18" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 62, + 669, + 295, + 712 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 669, + 295, + 712 + ], + "spans": [ + { + "bbox": [ + 62, + 669, + 295, + 712 + ], + "type": "text", + "content": "[42] Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Le Bras, and Yejin Choi. Clipscore: A reference-free evaluation metric for image captioning. arXiv preprint arXiv:2104.08718, 2021. 7" + } + ] + } + ], + "index": 14 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 320, + 73, + 555, + 712 + ], + "type": "list", + "angle": 0, + "index": 33, + "blocks": [ + { + "bbox": [ + 320, + 73, + 555, + 116 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 73, + 555, + 116 + ], + "spans": [ + { + "bbox": [ + 320, + 73, + 555, + 116 + ], + "type": "text", + "content": "[43] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In Proc. NeurIPS, 2017. 7" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 320, + 118, + 553, + 139 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 118, + 553, + 139 + ], + "spans": [ + { + "bbox": [ + 320, + 118, + 553, + 139 + ], + "type": "text", + "content": "[44] Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598, 2022. 4, 6, 7" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 320, + 140, + 555, + 161 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 140, + 555, + 161 + ], + "spans": [ + { + "bbox": [ + 320, + 140, + 555, + 161 + ], + "type": "text", + "content": "[45] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In Proc. NeurIPS, 2020. 3" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 320, + 162, + 555, + 204 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 162, + 555, + 204 + ], + "spans": [ + { + "bbox": [ + 320, + 162, + 555, + 204 + ], + "type": "text", + "content": "[46] Lukas Hollein, Ang Cao, Andrew Owens, Justin Johnson, and Matthias Nießner. Text2room: Extracting textured 3d meshes from 2d text-to-image models. In Proc. ICCV, 2023. 
20" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 320, + 206, + 555, + 249 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 206, + 555, + 249 + ], + "spans": [ + { + "bbox": [ + 320, + 206, + 555, + 249 + ], + "type": "text", + "content": "[47] Lukas Hollein, Aljaž Božić, Norman Müller, David Novotny, Hung-Yu Tseng, Christian Richardt, Michael Zollhöfer, and Matthias Nießner. ViewDiff: 3D-consistent image generation with text-to-image models. In Proc. CVPR, 2024. 20" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 320, + 251, + 555, + 293 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 251, + 555, + 293 + ], + "spans": [ + { + "bbox": [ + 320, + 251, + 555, + 293 + ], + "type": "text", + "content": "[48] Yicong Hong, Kai Zhang, Juixiang Gu, Sai Bi, Yang Zhou, Difan Liu, Feng Liu, Kalyan Sunkavalli, Trung Bui, and Hao Tan. LRM: Large reconstruction model for single image to 3D. In Proc. ICLR, 2024. 20" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 320, + 294, + 555, + 327 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 294, + 555, + 327 + ], + "spans": [ + { + "bbox": [ + 320, + 294, + 555, + 327 + ], + "type": "text", + "content": "[49] Chen Hou, Guoqiang Wei, Yan Zeng, and Zhibo Chen. Training-free camera control for video generation. arXiv preprint arXiv:2406.10126, 2024.3" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 320, + 327, + 555, + 371 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 327, + 555, + 371 + ], + "spans": [ + { + "bbox": [ + 320, + 327, + 555, + 371 + ], + "type": "text", + "content": "[50] Teng Hu, Jiangning Zhang, Ran Yi, Yating Wang, Hongrui Huang, Jieyu Weng, Yabiao Wang, and Lizhuang Ma. Motionmaster: Training-free camera motion transfer for video generation. arXiv preprint arXiv:2404.15789, 2024. 3" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 320, + 372, + 554, + 404 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 372, + 554, + 404 + ], + "spans": [ + { + "bbox": [ + 320, + 372, + 554, + 404 + ], + "type": "text", + "content": "[51] Chun-Hao Paul Huang, Jae Shin Yoon, Hyeonho Jeong, Niloy Mitra, and Duygu Ceylan. Camera pose estimation emerging in video diffusion transformer, 2024. 3" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 320, + 404, + 555, + 448 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 404, + 555, + 448 + ], + "spans": [ + { + "bbox": [ + 320, + 404, + 555, + 448 + ], + "type": "text", + "content": "[52] Tianyu Huang, Yihan Zeng, Hui Li, Wangmeng Zuo, and Rynson WH Lau. Dreamphysics: Learning physical properties of dynamic 3d gaussians with video diffusion priors. arXiv preprint arXiv:2406.01476, 2024. 20" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 320, + 449, + 555, + 480 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 449, + 555, + 480 + ], + "spans": [ + { + "bbox": [ + 320, + 449, + 555, + 480 + ], + "type": "text", + "content": "[53] Ajay Jain, Ben Mildenhall, Jonathan T Barron, Pieter Abbeel, and Ben Poole. Zero-shot text-guided object generation with dream fields. In Proc. CVPR, 2022. 
20" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 320, + 482, + 555, + 514 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 482, + 555, + 514 + ], + "spans": [ + { + "bbox": [ + 320, + 482, + 555, + 514 + ], + "type": "text", + "content": "[54] Yash Jain, Anshul Nasery, Vibhav Vineet, and Harkirat Behl. Peekaboo: Interactive video generation via masked-diffusion. In Proc. CVPR, 2024. 20" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 320, + 514, + 555, + 547 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 514, + 555, + 547 + ], + "spans": [ + { + "bbox": [ + 320, + 514, + 555, + 547 + ], + "type": "text", + "content": "[55] Nikolay Jetchev. ClipMatrix: Text-controlled creation of 3D textured meshes. arXiv preprint arXiv:2109.12922, 2021. 20" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 320, + 548, + 555, + 581 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 548, + 555, + 581 + ], + "spans": [ + { + "bbox": [ + 320, + 548, + 555, + 581 + ], + "type": "text", + "content": "[56] Lutao Jiang and Lin Wang. Brightdreamer: Generic 3D Gaussian generative framework for fast text-to-3D synthesis. arXiv preprint arXiv:2403.11273, 2024. 20" + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 320, + 582, + 555, + 624 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 582, + 555, + 624 + ], + "spans": [ + { + "bbox": [ + 320, + 582, + 555, + 624 + ], + "type": "text", + "content": "[57] Yanqin Jiang, Chaohui Yu, Chenjie Cao, Fan Wang, Weiming Hu, and Jin Gao. *Animate3d: Animating any 3d model with multi-view video diffusion.* arXiv preprint arXiv:2407.11398, 2024. 20" + } + ] + } + ], + "index": 30 + }, + { + "bbox": [ + 320, + 625, + 554, + 658 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 625, + 554, + 658 + ], + "spans": [ + { + "bbox": [ + 320, + 625, + 554, + 658 + ], + "type": "text", + "content": "[58] Yanqin Jiang, Li Zhang, Jin Gao, Weimin Hu, and Yao Yao. Consistent4d: Consistent 360 " + }, + { + "bbox": [ + 320, + 625, + 554, + 658 + ], + "type": "inline_equation", + "content": "\\{\\backslash \\mathrm{deg}\\}" + }, + { + "bbox": [ + 320, + 625, + 554, + 658 + ], + "type": "text", + "content": " dynamic object generation from monocular video. In Proc. ICLR, 2024. 20" + } + ] + } + ], + "index": 31 + }, + { + "bbox": [ + 320, + 658, + 555, + 712 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 658, + 555, + 712 + ], + "spans": [ + { + "bbox": [ + 320, + 658, + 555, + 712 + ], + "type": "text", + "content": "[59] Yash Kant, Aliaksandr Siarohin, Ziyi Wu, Michael Vasilkovsky, Guocheng Qian, Jian Ren, Riza Alp Guler, Bernard Ghanem, Sergey Tulyakov, and Igor Gilitschenski. Spad: Spatially aware multi-view diffusers. In Proc. CVPR, 2024. 
4" + } + ] + } + ], + "index": 32 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 749, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 749, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 749, + 317, + 757 + ], + "type": "text", + "content": "22884" + } + ] + } + ], + "index": 34 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "bbox": [ + 61, + 73, + 297, + 713 + ], + "type": "list", + "angle": 0, + "index": 15, + "blocks": [ + { + "bbox": [ + 61, + 73, + 297, + 116 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 73, + 297, + 116 + ], + "spans": [ + { + "bbox": [ + 61, + 73, + 297, + 116 + ], + "type": "text", + "content": "[60] Tero Karras, Miika Aittala, Jaakko Lehtinen, Janne Hellsten, Timo Aila, and Samuli Laine. Analyzing and improving the training dynamics of diffusion models. In Proc. CVPR, 2024. 17" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 61, + 118, + 296, + 150 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 118, + 296, + 150 + ], + "spans": [ + { + "bbox": [ + 61, + 118, + 296, + 150 + ], + "type": "text", + "content": "[61] Oren Katzir, Or Patashnik, Daniel Cohen-Or, and Dani Lischinski. Noise-free score distillation. In Proc. ICLR, 2024. 20" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 61, + 152, + 296, + 185 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 152, + 296, + 185 + ], + "spans": [ + { + "bbox": [ + 61, + 152, + 296, + 185 + ], + "type": "text", + "content": "[62] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkuhler, and George Drettakis. 3d gaussian splattering for real-time radiance field rendering. In ACM TOG, 2023. 20" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 61, + 186, + 296, + 239 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 186, + 296, + 239 + ], + "spans": [ + { + "bbox": [ + 61, + 186, + 296, + 239 + ], + "type": "text", + "content": "[63] Seung Wook Kim, Bradley Brown, Kangxue Yin, Karsten Kreis, Katja Schwarz, Daiqing Li, Robin Rombach, Antonio Torralba, and Sanja Fidler. NeuralField-LDM: Scene generation with hierarchical latent diffusion models. In Proc. CVPR, 2023. 20" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 62, + 241, + 296, + 264 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 241, + 296, + 264 + ], + "spans": [ + { + "bbox": [ + 62, + 241, + 296, + 264 + ], + "type": "text", + "content": "[64] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013. 3" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 62, + 265, + 296, + 308 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 265, + 296, + 308 + ], + "spans": [ + { + "bbox": [ + 62, + 265, + 296, + 308 + ], + "type": "text", + "content": "[65] Zhengfei Kuang, Shengqu Cai, Hao He, Yinghao Xu, Hongsheng Li, Leonidas Guibas, and Gordon Wetzstein. Collaborative video diffusion: Consistent multi-video generation with camera control. In Proc. NeurIPS, 2024. 3" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 62, + 310, + 296, + 343 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 310, + 296, + 343 + ], + "spans": [ + { + "bbox": [ + 62, + 310, + 296, + 343 + ], + "type": "text", + "content": "[66] Kyungmin Lee, Kihyuk Sohn, and Jinwoo Shin. 
DreamFlow: High-quality text-to-3D generation by approximating probability flow. In Proc. ICLR, 2024. 20" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 62, + 344, + 296, + 387 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 344, + 296, + 387 + ], + "spans": [ + { + "bbox": [ + 62, + 344, + 296, + 387 + ], + "type": "text", + "content": "[67] Yao-Chih Lee, Yi-Ting Chen, Andrew Wang, Ting-Hsuan Liao, Brandon Y Feng, and Jia-Bin Huang. Vividdream: Generating 3d scene with ambient dynamics. arXiv preprint arXiv:2405.20334, 2024. 20" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 62, + 388, + 296, + 432 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 388, + 296, + 432 + ], + "spans": [ + { + "bbox": [ + 62, + 388, + 296, + 432 + ], + "type": "text", + "content": "[68] Guojun Lei, Chi Wang, Hong Li, Rong Zhang, Yikai Wang, and Weiwei Xu. Animateanything: Consistent and controllable animation for video generation. arXiv preprint arXiv:2411.10836, 2024. 3" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 62, + 434, + 296, + 477 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 434, + 296, + 477 + ], + "spans": [ + { + "bbox": [ + 62, + 434, + 296, + 477 + ], + "type": "text", + "content": "[69] Bing Li, Cheng Zheng, Wenxuan Zhu, Jinjie Mai, Biao Zhang, Peter Wonka, and Bernard Ghanem. Vivid-zoo: Multi-view video generation with diffusion model. arXiv preprint arXiv:2406.08659, 2024. 20" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 62, + 479, + 296, + 533 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 479, + 296, + 533 + ], + "spans": [ + { + "bbox": [ + 62, + 479, + 296, + 533 + ], + "type": "text", + "content": "[70] Jiahao Li, Hao Tan, Kai Zhang, Zexiang Xu, Fujun Luan, Yinghao Xu, Yicong Hong, Kalyan Sunkavalli, Greg Shakhnarovich, and Sai Bi. Instant3D: Fast text-to-3D with sparse-view generation and large reconstruction model. In Proc. ICLR, 2024. 20" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 62, + 535, + 296, + 578 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 535, + 296, + 578 + ], + "spans": [ + { + "bbox": [ + 62, + 535, + 296, + 578 + ], + "type": "text", + "content": "[71] Ming Li, Taojiannan Yang, Huafeng Kuang, Jie Wu, Zhaoning Wang, Xuefeng Xiao, and Chen Chen. Controlnet++: Improving conditional controls with efficient consistency feedback. In ECCV, 2024. 3, 4, 7, 18" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 62, + 580, + 296, + 634 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 580, + 296, + 634 + ], + "spans": [ + { + "bbox": [ + 62, + 580, + 296, + 634 + ], + "type": "text", + "content": "[72] Renjie Li, Panwang Pan, Bangbang Yang, Dejia Xu, Shijie Zhou, Xuanyang Zhang, Zeming Li, Achuta Kadambi, Zhangyang Wang, and Zhiwen Fan. 4k4dgen: Panoramic 4d generation at 4k resolution. arXiv preprint arXiv:2406.13527, 2024. 20" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 62, + 635, + 296, + 678 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 635, + 296, + 678 + ], + "spans": [ + { + "bbox": [ + 62, + 635, + 296, + 678 + ], + "type": "text", + "content": "[73] Zhiqi Li, Yiming Chen, and Peidong Liu. Dreammesh4d: Video-to-4d generation with sparse-controlled gaussian-mesh hybrid representation. arXiv preprint arXiv:2410.06756, 2024. 
20" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 62, + 680, + 296, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 680, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 62, + 680, + 296, + 713 + ], + "type": "text", + "content": "[74] Zhiqi Li, Yiming Chen, Lingzhe Zhao, and Peidong Liu. Controllable text-to-3D generation via surface-aligned Gaussian splatting. arXiv preprint arXiv:2403.09981, 2024. 20" + } + ] + } + ], + "index": 14 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 320, + 73, + 555, + 713 + ], + "type": "list", + "angle": 0, + "index": 31, + "blocks": [ + { + "bbox": [ + 320, + 73, + 555, + 106 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 73, + 555, + 106 + ], + "spans": [ + { + "bbox": [ + 320, + 73, + 555, + 106 + ], + "type": "text", + "content": "[75] Zhengqi Li, Richard Tucker, Noah Snavely, and Aleksander Holynski. Generative image dynamics. In Proc. CVPR, 2024. 2, 4, 16, 19" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 320, + 107, + 555, + 160 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 107, + 555, + 160 + ], + "spans": [ + { + "bbox": [ + 320, + 107, + 555, + 160 + ], + "type": "text", + "content": "[76] Hanwen Liang, Yuyang Yin, Dejia Xu, Hanxue Liang, Zhangyang Wang, Konstantinos N Plataniotis, Yao Zhao, and Yunchao Wei. Diffusion4d: Fast spatial-temporal consistent 4d generation via video diffusion models. arXiv preprint arXiv:2405.16645, 2024. 2, 20" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 320, + 162, + 555, + 205 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 162, + 555, + 205 + ], + "spans": [ + { + "bbox": [ + 320, + 162, + 555, + 205 + ], + "type": "text", + "content": "[77] Yixun Liang, Xin Yang, Jiantao Lin, Haodong Li, Xiaogang Xu, and Yingcong Chen. Luciddreamer: Towards high-fidelity text-to-3D generation via interval score matching. arXiv preprint arXiv:2311.11284, 2023. 20" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 320, + 206, + 554, + 249 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 206, + 554, + 249 + ], + "spans": [ + { + "bbox": [ + 320, + 206, + 554, + 249 + ], + "type": "text", + "content": "[78] Chen-Hsuan Lin, Jun Gao, Luming Tang, Towaki Takikawa, Xiaohui Zeng, Xun Huang, Karsten Kreis, Sanja Fidler, Ming-Yu Liu, and Tsung-Yi Lin. Magic3D: High-resolution text-to-3D content creation. In Proc. CVPR, 2023. 20" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 320, + 251, + 555, + 293 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 251, + 555, + 293 + ], + "spans": [ + { + "bbox": [ + 320, + 251, + 555, + 293 + ], + "type": "text", + "content": "[79] Jiajing Lin, Zhenzhong Wang, Yongjie Hou, Yuzhou Tang, and Min Jiang. Phy124: Fast physics-driven 4d content generation from a single image. arXiv preprint arXiv:2409.07179, 2024. 20" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 320, + 294, + 554, + 338 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 294, + 554, + 338 + ], + "spans": [ + { + "bbox": [ + 320, + 294, + 554, + 338 + ], + "type": "text", + "content": "[80] Yukang Lin, Haonan Han, Chaoqun Gong, Zunnan Xu, Yachao Zhang, and Xiu Li. Consistent123: One image to highly consistent 3D asset using case-aware diffusion priors. In arXiv preprint arXiv:2309.17261, 2023. 
20" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 320, + 338, + 555, + 381 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 338, + 555, + 381 + ], + "spans": [ + { + "bbox": [ + 320, + 338, + 555, + 381 + ], + "type": "text", + "content": "[81] Huan Ling, Seung Wook Kim, Antonio Torralba, Sanja Fidler, and Karsten Kreis. Align your gaussians: Text-to-4d with dynamic 3d gaussians and composed diffusion models. In Proc. CVPR, 2024. 2, 20" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 320, + 383, + 555, + 426 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 383, + 555, + 426 + ], + "spans": [ + { + "bbox": [ + 320, + 383, + 555, + 426 + ], + "type": "text", + "content": "[82] Pengyang Ling, Jiazi Bu, Pan Zhang, Xiaoyi Dong, Yuhang Zang, Tong Wu, Huaian Chen, Jiaqi Wang, and Yi Jin. Motionclone: Training-free motion cloning for controllable video generation. arXiv preprint arXiv:2406.05338, 2024. 3" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 320, + 427, + 555, + 470 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 427, + 555, + 470 + ], + "spans": [ + { + "bbox": [ + 320, + 427, + 555, + 470 + ], + "type": "text", + "content": "[83] Pengkun Liu, Yikai Wang, Fuchun Sun, Jiafang Li, Hang Xiao, Hongxiang Xue, and Xinzhou Wang. Isotropic3D: Image-to-3D generation based on a single clip embedding. arXiv preprint arXiv:2403.10395, 2024. 20" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 320, + 472, + 555, + 503 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 472, + 555, + 503 + ], + "spans": [ + { + "bbox": [ + 320, + 472, + 555, + 503 + ], + "type": "text", + "content": "[84] Ruoshi Liu, Rundi Wu, Basile Van Hoorick, Pavel Tokmakov, Sergey Zakharov, and Carl Vondrick. Zero-1-to-3: Zero-shot one image to 3D object. In Proc. ICCV, 2023. 20" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 320, + 505, + 555, + 547 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 505, + 555, + 547 + ], + "spans": [ + { + "bbox": [ + 320, + 505, + 555, + 547 + ], + "type": "text", + "content": "[85] Xingchao Liu, Chengyue Gong, and Qiang Liu. Flow straight and fast: Learning to generate and transfer data with rectified flow. arXiv preprint arXiv:2209.03003, 2022. 3, 4" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 320, + 548, + 555, + 592 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 548, + 555, + 592 + ], + "spans": [ + { + "bbox": [ + 320, + 548, + 555, + 592 + ], + "type": "text", + "content": "[86] Xian Liu, Xiaohang Zhan, Jiaxiang Tang, Ying Shan, Gang Zeng, Dahua Lin, Xihui Liu, and Ziwei Liu. HumanGaussian: Text-driven 3D human generation with Gaussian splatting. In Proc. CVPR, 2024. 20" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 320, + 593, + 555, + 635 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 593, + 555, + 635 + ], + "spans": [ + { + "bbox": [ + 320, + 593, + 555, + 635 + ], + "type": "text", + "content": "[87] Yuan Liu, Cheng Lin, Zijiao Zeng, Xiaoxiao Long, Lingjie Liu, Taku Komura, and Wenping Wang. SyncDreamer: Generating multiview-consistent images from a single-view image. In Proc. ICLR, 2024. 
20" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 320, + 637, + 555, + 689 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 637, + 555, + 689 + ], + "spans": [ + { + "bbox": [ + 320, + 637, + 555, + 689 + ], + "type": "text", + "content": "[88] Xiaoxiao Long, Yuan-Chen Guo, Cheng Lin, Yuan Liu, Zhiyang Dou, Lingjie Liu, Yuexin Ma, Song-Hai Zhang, Marc Habermann, Christian Theobalt, et al. Wonder3D: Single image to 3D using cross-domain diffusion. In Proc. CVPR, 2024. 20" + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 320, + 691, + 554, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 691, + 554, + 713 + ], + "spans": [ + { + "bbox": [ + 320, + 691, + 554, + 713 + ], + "type": "text", + "content": "[89] I Loshchilov. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017. 17, 18" + } + ] + } + ], + "index": 30 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "text", + "content": "22885" + } + ] + } + ], + "index": 32 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 10 + }, + { + "para_blocks": [ + { + "bbox": [ + 57, + 72, + 297, + 712 + ], + "type": "list", + "angle": 0, + "index": 15, + "blocks": [ + { + "bbox": [ + 61, + 72, + 297, + 106 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 72, + 297, + 106 + ], + "spans": [ + { + "bbox": [ + 61, + 72, + 297, + 106 + ], + "type": "text", + "content": "[90] Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983, 2016. 17, 18" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 61, + 107, + 296, + 140 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 107, + 296, + 140 + ], + "spans": [ + { + "bbox": [ + 61, + 107, + 296, + 140 + ], + "type": "text", + "content": "[91] Wan-Duo Kurt Ma, John P Lewis, and W Bastiaan Kleijn. Trailblazer: Trajectory control for diffusion-based video generation. arXiv preprint arXiv:2401.00896, 2023. 20" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 62, + 143, + 296, + 186 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 143, + 296, + 186 + ], + "spans": [ + { + "bbox": [ + 62, + 143, + 296, + 186 + ], + "type": "text", + "content": "[92] Xin Ma, Yaohui Wang, Gengyun Jia, Xinyuan Chen, Ziwei Liu, Yuan-Fang Li, Cunjian Chen, and Yu Qiao. Latte: Latent diffusion transformer for video generation. arXiv preprint arXiv:2401.03048, 2024. 2" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 62, + 188, + 296, + 243 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 188, + 296, + 243 + ], + "spans": [ + { + "bbox": [ + 62, + 188, + 296, + 243 + ], + "type": "text", + "content": "[93] Willi Menapace, Aliaksandr Siarohin, Ivan Skorokhodov, Ekaterina Deyneka, Tsai-Shien Chen, Anil Kag, Yuwei Fang, Aleksei Stoliar, Elisa Ricci, Jian Ren, et al. Snap video: Scaled spatiotemporal transformers for text-to-video synthesis. In Proc. CVPR, 2024. 
2, 7, 17" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 62, + 244, + 295, + 277 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 244, + 295, + 277 + ], + "spans": [ + { + "bbox": [ + 62, + 244, + 295, + 277 + ], + "type": "text", + "content": "[94] Qiaowei Miao, Yawei Luo, and Yi Yang. Pla4d: Pixel-level alignments for text-to-4d gaussian splatting. arXiv preprint arXiv:2405.19957, 2024. 20" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 62, + 280, + 296, + 323 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 280, + 296, + 323 + ], + "spans": [ + { + "bbox": [ + 62, + 280, + 296, + 323 + ], + "type": "text", + "content": "[95] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In Proc. ECCV, 2020. 20" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 62, + 325, + 296, + 369 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 325, + 296, + 369 + ], + "spans": [ + { + "bbox": [ + 62, + 325, + 296, + 369 + ], + "type": "text", + "content": "[96] Koichi Namekata, Sherwin Bahmani, Ziyi Wu, Yash Kant, Igor Gilitschenski, and David B Lindell. Sg-i2v: Self-guided trajectory control in image-to-video generation. arXiv preprint arXiv:2411.04989, 2024. 20" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 62, + 371, + 296, + 415 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 371, + 296, + 415 + ], + "spans": [ + { + "bbox": [ + 62, + 371, + 296, + 415 + ], + "type": "text", + "content": "[97] Roy Or-El, Xuan Luo, Mengyi Shan, Eli Shechtman, Jeong Joon Park, and Ira Kemelmacher-Shlizerman. StyleSDF: High-resolution 3D-consistent image and geometry generation. In Proc. CVPR, 2022. 20" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 62, + 417, + 296, + 450 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 417, + 296, + 450 + ], + "spans": [ + { + "bbox": [ + 62, + 417, + 296, + 450 + ], + "type": "text", + "content": "[98] Zijie Pan, Zeyu Yang, Xiatian Zhu, and Li Zhang. Fast dynamic 3d object generation from a single-view video. arXiv preprint arXiv:2401.08742, 2024. 2, 20" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 62, + 452, + 296, + 484 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 452, + 296, + 484 + ], + "spans": [ + { + "bbox": [ + 62, + 452, + 296, + 484 + ], + "type": "text", + "content": "[99] Deepak Pathak, Ross Girshick, Piotr Dollár, Trevor Darrell, and Bharath Hariharan. Learning features by watching objects move. In Proc. CVPR, 2017. 19" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 57, + 487, + 294, + 508 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 487, + 294, + 508 + ], + "spans": [ + { + "bbox": [ + 57, + 487, + 294, + 508 + ], + "type": "text", + "content": "[100] William Peebles and Saining Xie. Scalable diffusion models with transformers. In Proc. ICCV, 2023. 2, 3, 7, 17" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 57, + 510, + 296, + 565 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 510, + 296, + 565 + ], + "spans": [ + { + "bbox": [ + 57, + 510, + 296, + 565 + ], + "type": "text", + "content": "[101] Ryan Po, Wang Yifan, Vladislav Golyanik, Kfir Aberman, Jonathan T Barron, Amit H Bermano, Eric Ryan Chan, Tali Dekel, Aleksander Holynski, Angjoo Kanazawa, et al. 
State of the art on diffusion models for visual computing. arXiv preprint arXiv:2310.07204, 2023. 2" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 57, + 567, + 296, + 621 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 567, + 296, + 621 + ], + "spans": [ + { + "bbox": [ + 57, + 567, + 296, + 621 + ], + "type": "text", + "content": "[102] Adam Polyak, Amit Zohar, Andrew Brown, Andros Tjandra, Animesh Sinha, Ann Lee, Apoorv Vyas, Bowen Shi, Chih-Yao Ma, Ching-Yao Chuang, et al. Movie gen: A cast of media foundation models. arXiv preprint arXiv:2410.13720, 2024. 3, 17" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 57, + 624, + 296, + 656 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 624, + 296, + 656 + ], + "spans": [ + { + "bbox": [ + 57, + 624, + 296, + 656 + ], + "type": "text", + "content": "[103] Ben Poole, Ajay Jain, Jonathan T. Barron, and Ben Mildenhall. DreamFusion: Text-to-3D using 2D diffusion. In Proc. ICLR, 2023. 20" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 57, + 658, + 296, + 712 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 658, + 296, + 712 + ], + "spans": [ + { + "bbox": [ + 57, + 658, + 296, + 712 + ], + "type": "text", + "content": "[104] Guocheng Qian, Junli Cao, Aliaksandr Siarohin, Yash Kant, Chaoyang Wang, Michael Vasilkovsky, Hsin-Ying Lee, Yuwei Fang, Ivan Skorokhodov, Peiye Zhuang, et al. Atom: Amortized text-to-mesh using 2d diffusion. arXiv preprint arXiv:2402.00867, 2024. 20" + } + ] + } + ], + "index": 14 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 72, + 555, + 712 + ], + "type": "list", + "angle": 0, + "index": 31, + "blocks": [ + { + "bbox": [ + 316, + 72, + 555, + 128 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 72, + 555, + 128 + ], + "spans": [ + { + "bbox": [ + 316, + 72, + 555, + 128 + ], + "type": "text", + "content": "[105] Guocheng Qian, Jinjie Mai, Abdullah Hamdi, Jian Ren, Aliaksandr Siarohin, Bing Li, Hsin-Ying Lee, Ivan Skorokhodov, Peter Wonka, Sergey Tulyakov, et al. Magic123: One image to high-quality 3D object generation using both 2D and 3D diffusion priors. In Proc. ICLR, 2024. 20" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 316, + 129, + 555, + 171 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 129, + 555, + 171 + ], + "spans": [ + { + "bbox": [ + 316, + 129, + 555, + 171 + ], + "type": "text", + "content": "[106] Haonan Qiu, Zhaoxi Chen, Zhouxia Wang, Yingqing He, Menghan Xia, and Ziwei Liu. Freetraj: Tuning-free trajectory control in video diffusion models. arXiv preprint arXiv:2406.16863, 2024. 20" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 316, + 172, + 555, + 227 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 172, + 555, + 227 + ], + "spans": [ + { + "bbox": [ + 316, + 172, + 555, + 227 + ], + "type": "text", + "content": "[107] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In Proc. ICML, 2021. 
20" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 316, + 228, + 555, + 271 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 228, + 555, + 271 + ], + "spans": [ + { + "bbox": [ + 316, + 228, + 555, + 271 + ], + "type": "text", + "content": "[108] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. In Proc. JMLR, 2020. 3, 17" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 316, + 272, + 555, + 314 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 272, + 555, + 314 + ], + "spans": [ + { + "bbox": [ + 316, + 272, + 555, + 314 + ], + "type": "text", + "content": "[109] Jiawei Ren, Liang Pan, Jiaxiang Tang, Chi Zhang, Ang Cao, Gang Zeng, and Ziwei Liu. DreamGaussian4D: Generative 4D Gaussian splatting. arXiv preprint arXiv:2312.17142, 2023. 2, 20" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 316, + 316, + 555, + 369 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 316, + 555, + 369 + ], + "spans": [ + { + "bbox": [ + 316, + 316, + 555, + 369 + ], + "type": "text", + "content": "[110] Jiawei Ren, Kevin Xie, Ashkan Mirzaei, Hanxue Liang, Xiaohui Zeng, Karsten Kreis, Ziwei Liu, Antonio Torralba, Sanja Fidler, Seung Wook Kim, et al. L4gm: Large 4d gaussian reconstruction model. In Proc. NeurIPS, 2024. 3, 20" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 316, + 371, + 555, + 414 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 371, + 555, + 414 + ], + "spans": [ + { + "bbox": [ + 316, + 371, + 555, + 414 + ], + "type": "text", + "content": "[111] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proc. CVPR, 2022. 3, 19" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 316, + 415, + 555, + 447 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 415, + 555, + 447 + ], + "spans": [ + { + "bbox": [ + 316, + 415, + 555, + 447 + ], + "type": "text", + "content": "[112] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In Proc. MICCAI, 2015. 4" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 316, + 449, + 555, + 491 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 449, + 555, + 491 + ], + "spans": [ + { + "bbox": [ + 316, + 449, + 555, + 491 + ], + "type": "text", + "content": "[113] Aditya Sanghi, Hang Chu, Joseph G Lambourne, Ye Wang, Chin-Yi Cheng, Marco Fumero, and Kamal Rahimi Malekshan. CLIP-Forge: Towards zero-shot text-to-shape generation. In Proc. CVPR, 2022. 20" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 316, + 492, + 555, + 525 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 492, + 555, + 525 + ], + "spans": [ + { + "bbox": [ + 316, + 492, + 555, + 525 + ], + "type": "text", + "content": "[114] Johannes Lutz Schonberger and Jan-Michael Frahm. Structure-from-motion revisited. In Proc. CVPR, 2016. 
5, 20" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 316, + 525, + 555, + 558 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 525, + 555, + 558 + ], + "spans": [ + { + "bbox": [ + 316, + 525, + 555, + 558 + ], + "type": "text", + "content": "[115] Johannes Lutz Schonberger, Enliang Zheng, Marc Pollefeys, and Jan-Michael Frahm. Pixelwise view selection for unstructured multi-view stereo. In Proc. ECCV, 2016. 5, 20" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 316, + 559, + 555, + 592 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 559, + 555, + 592 + ], + "spans": [ + { + "bbox": [ + 316, + 559, + 555, + 592 + ], + "type": "text", + "content": "[116] Katja Schwarz, Axel Sauer, Michael Niemeyer, Yiyi Liao, and Andreas Geiger. VoxGRAF: Fast 3D-aware image synthesis with sparse voxel grids. In Proc. NeurIPS, 2022. 20" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 316, + 593, + 555, + 624 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 593, + 555, + 624 + ], + "spans": [ + { + "bbox": [ + 316, + 593, + 555, + 624 + ], + "type": "text", + "content": "[117] Yichun Shi, Peng Wang, Jianglong Ye, Long Mai, Kejie Li, and Xiao Yang. MVDream: Multi-view diffusion for 3D generation. In Proc. ICLR, 2024. 20" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 316, + 625, + 555, + 668 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 625, + 555, + 668 + ], + "spans": [ + { + "bbox": [ + 316, + 625, + 555, + 668 + ], + "type": "text", + "content": "[118] Jaidev Shriram, Alex Trevithick, Lingjie Liu, and Ravi Ramamoorthi. Realmdreamer: Text-driven 3d scene generation with inpainting and depth diffusion. arXiv preprint arXiv:2404.07199, 2024. 20" + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 316, + 669, + 555, + 712 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 669, + 555, + 712 + ], + "spans": [ + { + "bbox": [ + 316, + 669, + 555, + 712 + ], + "type": "text", + "content": "[119] Uriel Singer, Adam Polyak, Thomas Hayes, Xi Yin, Jie An, Songyang Zhang, Qiyuan Hu, Harry Yang, Oron Ashual, Oran Gafni, et al. Make-a-video: Text-to-video generation without text-video data. Proc. ICLR, 2023. 2" + } + ] + } + ], + "index": 30 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 749, + 318, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 749, + 318, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 749, + 318, + 757 + ], + "type": "text", + "content": "22886" + } + ] + } + ], + "index": 32 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 11 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 297, + 713 + ], + "type": "list", + "angle": 0, + "index": 15, + "blocks": [ + { + "bbox": [ + 56, + 72, + 297, + 117 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 72, + 297, + 117 + ], + "spans": [ + { + "bbox": [ + 56, + 72, + 297, + 117 + ], + "type": "text", + "content": "[120] Uriel Singer, Shelly Sheynin, Adam Polyak, Oron Ashual, Iurii Makarov, Filippos Kokkinos, Naman Goyal, Andrea Vedaldi, Devi Parikh, Justin Johnson, et al. Text-to-4d dynamic scene generation. In Proc. ICML, 2023. 
2, 20" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 119, + 295, + 163 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 119, + 295, + 163 + ], + "spans": [ + { + "bbox": [ + 56, + 119, + 295, + 163 + ], + "type": "text", + "content": "[121] Vincent Sitzmann, Semon Rezchikov, Bill Freeman, Josh Tenenbaum, and Fredo Durand. Light field networks: Neural scene representations with single-evaluation rendering. In Proc. NeurIPS, 2021. 4" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 165, + 295, + 198 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 165, + 295, + 198 + ], + "spans": [ + { + "bbox": [ + 56, + 165, + 295, + 198 + ], + "type": "text", + "content": "[122] Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In Proc. ICML, 2015. 3" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 56, + 200, + 295, + 233 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 200, + 295, + 233 + ], + "spans": [ + { + "bbox": [ + 56, + 200, + 295, + 233 + ], + "type": "text", + "content": "[123] Jianlin Su, Murtadha Ahmed, Yu Lu, Shengfeng Pan, Wen Bo, and Yunfeng Liu. Roformer: Enhanced transformer with rotary position embedding. In Neurocomputing, 2024. 17" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 56, + 235, + 295, + 278 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 235, + 295, + 278 + ], + "spans": [ + { + "bbox": [ + 56, + 235, + 295, + 278 + ], + "type": "text", + "content": "[124] Jingxiang Sun, Bo Zhang, Ruizhi Shao, Lizhen Wang, Wen Liu, Zhenda Xie, and Yebin Liu. DreamCraft3D: Hierarchical 3D generation with bootstrapped diffusion prior. In Proc. ICLR, 2024. 20" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 56, + 280, + 295, + 323 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 280, + 295, + 323 + ], + "spans": [ + { + "bbox": [ + 56, + 280, + 295, + 323 + ], + "type": "text", + "content": "[125] Keqiang Sun, Dor Litvak, Yunzhi Zhang, Hongsheng Li, Jiajun Wu, and Shangzhe Wu. Ponymation: Learning articulated 3d animal motions from unlabeled online videos. In Proc. ECCV, 2024. 20" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 56, + 326, + 295, + 369 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 326, + 295, + 369 + ], + "spans": [ + { + "bbox": [ + 56, + 326, + 295, + 369 + ], + "type": "text", + "content": "[126] Qi Sun, Zhiyang Guo, Ziyu Wan, Jing Nathan Yan, Shengming Yin, Wengang Zhou, Jing Liao, and Houqiang Li. Eg4d: Explicit generation of 4d object without score distillation. arXiv preprint arXiv:2405.18132, 2024. 20" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 56, + 372, + 295, + 415 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 372, + 295, + 415 + ], + "spans": [ + { + "bbox": [ + 56, + 372, + 295, + 415 + ], + "type": "text", + "content": "[127] Wenqiang Sun, Shuo Chen, Fangfu Liu, Zilong Chen, Yueqi Duan, Jun Zhang, and Yikai Wang. Dimensionx: Create any 3d and 4d scenes from a single image with controllable video diffusion. arXiv preprint arXiv:2411.04928, 2024. 
3" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 56, + 418, + 295, + 450 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 418, + 295, + 450 + ], + "spans": [ + { + "bbox": [ + 56, + 418, + 295, + 450 + ], + "type": "text", + "content": "[128] Stanislaw Szymanowicz, Christian Rupprecht, and Andrea Vedaldi. Viewset diffusion: (0-) image-conditioned 3d generative models from 2d data. In Proc. ICCV, 2023. 20" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 56, + 452, + 295, + 507 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 452, + 295, + 507 + ], + "spans": [ + { + "bbox": [ + 56, + 452, + 295, + 507 + ], + "type": "text", + "content": "[129] Stanislaw Szymanowicz, Eldar Insafutdinov, Chuanxia Zheng, Dylan Campbell, João F Henriques, Christian Rupprecht, and Andrea Vedaldi. Flash3d: Feed-forward generalisable 3d scene reconstruction from a single image. arXiv preprint arXiv:2406.04343, 2024. 20" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 56, + 509, + 295, + 541 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 509, + 295, + 541 + ], + "spans": [ + { + "bbox": [ + 56, + 509, + 295, + 541 + ], + "type": "text", + "content": "[130] Stanislaw Szymanowicz, Christian Rupprecht, and Andrea Vedaldi. Splatter image: Ultra-fast single-view 3D reconstruction. In Proc. CVPR, 2024. 20" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 56, + 544, + 295, + 586 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 544, + 295, + 586 + ], + "spans": [ + { + "bbox": [ + 56, + 544, + 295, + 586 + ], + "type": "text", + "content": "[131] Junshu Tang, Tengfei Wang, Bo Zhang, Ting Zhang, Ran Yi, Lizhuang Ma, and Dong Chen. Make-it-3D: High-fidelity 3D creation from a single image with diffusion prior. arXiv preprint arXiv:2303.14184, 2023. 20" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 56, + 589, + 295, + 632 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 589, + 295, + 632 + ], + "spans": [ + { + "bbox": [ + 56, + 589, + 295, + 632 + ], + "type": "text", + "content": "[132] Jiaxiang Tang, Zhaoxi Chen, Xiaokang Chen, Tengfei Wang, Gang Zeng, and Ziwei Liu. LGM: Large multi-view gaussian model for high-resolution 3d content creation. Proc. ECCV, 2024. 20" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 56, + 635, + 295, + 689 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 635, + 295, + 689 + ], + "spans": [ + { + "bbox": [ + 56, + 635, + 295, + 689 + ], + "type": "text", + "content": "[133] Zhenggang Tang, Peiye Zhuang, Chaoyang Wang, Aliaksandr Siarohin, Yash Kant, Alexander Schwing, Sergey Tulyakov, and Hsin-Ying Lee. Pixel-aligned multi-view generation with depth guided decoder. arXiv preprint arXiv:2408.14016, 2024. 
20" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 56, + 692, + 295, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 692, + 295, + 713 + ], + "spans": [ + { + "bbox": [ + 56, + 692, + 295, + 713 + ], + "type": "text", + "content": "[134] Ayush Tewari, Tianwei Yin, George Cazenavette, Semon Rezhikov, Josh Tenenbaum, Frédo Durand, Bill Freeman," + } + ] + } + ], + "index": 14 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 73, + 555, + 713 + ], + "type": "list", + "angle": 0, + "index": 30, + "blocks": [ + { + "bbox": [ + 338, + 73, + 555, + 106 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 338, + 73, + 555, + 106 + ], + "spans": [ + { + "bbox": [ + 338, + 73, + 555, + 106 + ], + "type": "text", + "content": "and Vincent Sitzmann. Diffusion with forward models: Solving stochastic inverse problems without direct supervision. In Proc. NeurIPS, 2023. 20" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 316, + 108, + 555, + 162 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 108, + 555, + 162 + ], + "spans": [ + { + "bbox": [ + 316, + 108, + 555, + 162 + ], + "type": "text", + "content": "[135] Dmitry Tochilkin, David Pankratz, Zexiang Liu, Zixuan Huang, Adam Letts, Yangguang Li, Ding Liang, Christian Laforte, Varun Jampani, and Yan-Pei Cao. Triposr: Fast 3D object reconstruction from a single image. arXiv preprint arXiv:2403.02151, 2024. 20" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 316, + 165, + 555, + 209 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 165, + 555, + 209 + ], + "spans": [ + { + "bbox": [ + 316, + 165, + 555, + 209 + ], + "type": "text", + "content": "[136] Thomas Unterthiner, Sjoerd van Steenkiste, Karol Kurach, Raphael Marinier, Marcin Michalski, and Sylvain Gelly. Towards accurate generative models of video: A new metric & challenges. arXiv preprint arXiv:1812.01717, 2018. 7" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 316, + 211, + 555, + 244 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 211, + 555, + 244 + ], + "spans": [ + { + "bbox": [ + 316, + 211, + 555, + 244 + ], + "type": "text", + "content": "[137] Lukas Uzolas, Elmar Eisemann, and Petr Kellnhofer. Motiondreamer: Zero-shot 3d mesh animation from video diffusion models. arXiv preprint arXiv:2405.20155, 2024. 20" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 316, + 246, + 555, + 300 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 246, + 555, + 300 + ], + "spans": [ + { + "bbox": [ + 316, + 246, + 555, + 300 + ], + "type": "text", + "content": "[138] Basile Van Hoorick, Rundi Wu, Ege Ozguroglu, Kyle Sargent, Ruoshi Liu, Pavel Tokmakov, Achal Dave, Changxi Zheng, and Carl Vondrick. Generative camera dolly: Extreme monocular dynamic novel view synthesis. In Proc. ECCV, 2024. 20" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 316, + 302, + 555, + 345 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 302, + 555, + 345 + ], + "spans": [ + { + "bbox": [ + 316, + 302, + 555, + 345 + ], + "type": "text", + "content": "[139] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Proc. NeurIPS, 2017. 
3" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 316, + 348, + 555, + 403 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 348, + 555, + 403 + ], + "spans": [ + { + "bbox": [ + 316, + 348, + 555, + 403 + ], + "type": "text", + "content": "[140] Vikram Voleti, Chun-Han Yao, Mark Boss, Adam Letts, David Pankratz, Dmitry Tochilkin, Christian Laforte, Robin Rombach, and Varun Jampani. SV3D: Novel multi-view synthesis and 3D generation from a single image using latent video diffusion. arXiv preprint arXiv:2403.12008, 2024. 20" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 316, + 405, + 555, + 449 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 405, + 555, + 449 + ], + "spans": [ + { + "bbox": [ + 316, + 405, + 555, + 449 + ], + "type": "text", + "content": "[141] Ziyu Wan, Despoina Paschalidou, Ian Huang, Hongyu Liu, Bokui Shen, Xiaoyu Xiang, Jing Liao, and Leonidas Guibas. CAD: Photorealistic 3D generation via adversarial distillation. In Proc. CVPR, 2024. 20" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 316, + 451, + 555, + 483 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 451, + 555, + 483 + ], + "spans": [ + { + "bbox": [ + 316, + 451, + 555, + 483 + ], + "type": "text", + "content": "[142] Can Wang, Menglei Chai, Mingming He, Dongdong Chen, and Jing Liao. Clip-NeRF: Text-and-image driven manipulation of neural radiance fields. In Proc. CVPR, 2022. 20" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 316, + 486, + 555, + 529 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 486, + 555, + 529 + ], + "spans": [ + { + "bbox": [ + 316, + 486, + 555, + 529 + ], + "type": "text", + "content": "[143] Haochen Wang, Xiaodan Du, Jiahao Li, Raymond A Yeh, and Greg Shakhnarovich. Score Jacobian chaining: Lifting pretrained 2d diffusion models for 3D generation. In Proc. CVPR, 2023. 20" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 316, + 532, + 555, + 576 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 532, + 555, + 576 + ], + "spans": [ + { + "bbox": [ + 316, + 532, + 555, + 576 + ], + "type": "text", + "content": "[144] Jiawei Wang, Yuchen Zhang, Jiaxin Zou, Yan Zeng, Guoqiang Wei, Liping Yuan, and Hang Li. Boximator: Generating rich and controllable motions for video synthesis. arXiv preprint arXiv:2402.01566, 2024. 20" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 316, + 578, + 555, + 611 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 578, + 555, + 611 + ], + "spans": [ + { + "bbox": [ + 316, + 578, + 555, + 611 + ], + "type": "text", + "content": "[145] Shuzhe Wang, Vincent Leroy, Yohann Cabon, Boris Chidlovskii, and Jerome Revaud. Dust3r: Geometric 3d vision made easy. In Proc. CVPR, 2024. 3, 5, 6" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 316, + 613, + 555, + 667 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 613, + 555, + 667 + ], + "spans": [ + { + "bbox": [ + 316, + 613, + 555, + 667 + ], + "type": "text", + "content": "[146] Xiang Wang, Hangjie Yuan, Shiwei Zhang, Dayou Chen, Jiuniu Wang, Yingya Zhang, Yujun Shen, Deli Zhao, and Jingren Zhou. Videocomposer: Compositional video synthesis with motion controllability. arXiv preprint arXiv:2306.02018, 2023. 
20" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 316, + 670, + 555, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 670, + 555, + 713 + ], + "spans": [ + { + "bbox": [ + 316, + 670, + 555, + 713 + ], + "type": "text", + "content": "[147] Yikai Wang, Xinzhou Wang, Zilong Chen, Zhengyi Wang, Fuchun Sun, and Jun Zhu. Vidu4d: Single generated video to high-fidelity 4d reconstruction with dynamic gaussian surfels. arXiv preprint arXiv:2405.16822, 2024. 20" + } + ] + } + ], + "index": 29 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "text", + "content": "22887" + } + ] + } + ], + "index": 31 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 12 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 297, + 713 + ], + "type": "list", + "angle": 0, + "index": 14, + "blocks": [ + { + "bbox": [ + 56, + 72, + 297, + 116 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 72, + 297, + 116 + ], + "spans": [ + { + "bbox": [ + 56, + 72, + 297, + 116 + ], + "type": "text", + "content": "[148] Zhengyi Wang, Cheng Lu, Yikai Wang, Fan Bao, Chongxuan Li, Hang Su, and Jun Zhu. ProlificDreamer: High-fidelity and diverse text-to-3D generation with variational score distillation. In Proc. NeurIPS, 2023. 20" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 118, + 296, + 163 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 118, + 296, + 163 + ], + "spans": [ + { + "bbox": [ + 56, + 118, + 296, + 163 + ], + "type": "text", + "content": "[149] Zhouxia Wang, Ziyang Yuan, Xintao Wang, Tianshui Chen, Menghan Xia, Ping Luo, and Yin Shan. Motionctrl: A unified and flexible motion controller for video generation. In SIGGRAPH, 2024. 2, 3, 4, 6, 7" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 165, + 296, + 208 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 165, + 296, + 208 + ], + "spans": [ + { + "bbox": [ + 56, + 165, + 296, + 208 + ], + "type": "text", + "content": "[150] Daniel Watson, Saurabh Saxena, Lala Li, Andrea Tagliasacchi, and David J Fleet. Controlling space and time with diffusion models. arXiv preprint arXiv:2407.07860, 2024. 3, 4, 6, 8" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 56, + 210, + 296, + 255 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 210, + 296, + 255 + ], + "spans": [ + { + "bbox": [ + 56, + 210, + 296, + 255 + ], + "type": "text", + "content": "[151] Ruiqi Wu, Liangyu Chen, Tong Yang, Chunle Guo, Chongyi Li, and Xiangyu Zhang. Lamp: Learn a motion pattern for few-shot-based video generation. arXiv preprint arXiv:2310.10769, 2023. 2" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 56, + 256, + 296, + 300 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 256, + 296, + 300 + ], + "spans": [ + { + "bbox": [ + 56, + 256, + 296, + 300 + ], + "type": "text", + "content": "[152] Rundi Wu, Ruiqi Gao, Ben Poole, Alex Trevithick, Changxi Zheng, Jonathan T Barron, and Aleksander Holynski. Cat4d: Create anything in 4d with multi-view video diffusion models. arXiv preprint arXiv:2411.18613, 2024. 
3" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 56, + 302, + 296, + 356 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 302, + 296, + 356 + ], + "spans": [ + { + "bbox": [ + 56, + 302, + 296, + 356 + ], + "type": "text", + "content": "[153] Wejia Wu, Zhuang Li, Yuchao Gu, Rui Zhao, Yefei He, David Junhao Zhang, Mike Zheng Shou, Yan Li, Tingting Gao, and Di Zhang. Draganything: Motion control for anything using entity representation. In Proc. ECCV, 2024. 20" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 56, + 358, + 296, + 403 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 358, + 296, + 403 + ], + "spans": [ + { + "bbox": [ + 56, + 358, + 296, + 403 + ], + "type": "text", + "content": "[154] Zijie Wu, Chaohui Yu, Yanqin Jiang, Chenjie Cao, Fan Wang, and Xiang Bai. Sc4d: Sparse-controlled video-to-4d generation and motion transfer. arXiv preprint arXiv:2404.03736, 2024. 20" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 56, + 404, + 296, + 438 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 404, + 296, + 438 + ], + "spans": [ + { + "bbox": [ + 56, + 404, + 296, + 438 + ], + "type": "text", + "content": "[155] Zeqi Xiao, Yifan Zhou, Shuai Yang, and Xingang Pan. Video diffusion models are training-free motion interpreter and controller. arXiv preprint arXiv:2405.14864, 2024. 3" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 56, + 440, + 296, + 483 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 440, + 296, + 483 + ], + "spans": [ + { + "bbox": [ + 56, + 440, + 296, + 483 + ], + "type": "text", + "content": "[156] Kevin Xie, Jonathan Lorraine, Tianshi Cao, Jun Gao, James Lucas, Antonio Torralba, Sanja Fidler, and Xiaohui Zeng. LATTE3D: Large-scale amortized text-to-enhanced3D synthesis. In Proc. ECCV, 2024. 20" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 56, + 485, + 296, + 529 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 485, + 296, + 529 + ], + "spans": [ + { + "bbox": [ + 56, + 485, + 296, + 529 + ], + "type": "text", + "content": "[157] Yiming Xie, Chun-Han Yao, Vikram Voleti, Huaizu Jiang, and Varun Jampani. Sv4d: Dynamic 3d content generation with multi-frame and multi-view consistency. arXiv preprint arXiv:2407.17470, 2024. 20" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 56, + 531, + 296, + 586 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 531, + 296, + 586 + ], + "spans": [ + { + "bbox": [ + 56, + 531, + 296, + 586 + ], + "type": "text", + "content": "[158] Dejia Xu, Yifan Jiang, Chen Huang, Liangchen Song, Thorsten Gernoth, Liangliang Cao, Zhangyang Wang, and Hao Tang. Cavity: Camera-controllable multi-view video diffusion with view-integrated attention. arXiv preprint arXiv:2410.10774, 2024. 3" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 56, + 588, + 296, + 632 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 588, + 296, + 632 + ], + "spans": [ + { + "bbox": [ + 56, + 588, + 296, + 632 + ], + "type": "text", + "content": "[159] Dejia Xu, Hanwen Liang, Neel P Bhatt, Hezhen Hu, Hanxue Liang, Konstantinos N Plataniotis, and Zhangyang Wang. Comp4d: Llm-guided compositional 4d scene generation. arXiv preprint arXiv:2403.16993, 2024. 
2, 20" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 56, + 635, + 296, + 678 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 635, + 296, + 678 + ], + "spans": [ + { + "bbox": [ + 56, + 635, + 296, + 678 + ], + "type": "text", + "content": "[160] Dejia Xu, Weili Nie, Chao Liu, Sifei Liu, Jan Kautz, Zhangyang Wang, and Arash Vahdat. Camco: Camera-controllable 3d-consistent image-to-video generation. arXiv preprint arXiv:2406.02509, 2024. 3" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 56, + 680, + 296, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 680, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 56, + 680, + 296, + 713 + ], + "type": "text", + "content": "[161] Jun Xu, Tao Mei, Ting Yao, and Yong Rui. Msr-vtt: A large video description dataset for bridging video and language. In Proc. CVPR, 2016. 7, 8" + } + ] + } + ], + "index": 13 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 72, + 555, + 713 + ], + "type": "list", + "angle": 0, + "index": 28, + "blocks": [ + { + "bbox": [ + 316, + 72, + 555, + 127 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 72, + 555, + 127 + ], + "spans": [ + { + "bbox": [ + 316, + 72, + 555, + 127 + ], + "type": "text", + "content": "[162] Yinghao Xu, Zifan Shi, Wang Yifan, Hansheng Chen, Ceyuan Yang, Sida Peng, Yujun Shen, and Gordon Wetzstein. GRM: Large Gaussian reconstruction model for efficient 3D reconstruction and generation. In Proc. ECCV, 2024. 20" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 316, + 129, + 555, + 183 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 129, + 555, + 183 + ], + "spans": [ + { + "bbox": [ + 316, + 129, + 555, + 183 + ], + "type": "text", + "content": "[163] Yinghao Xu, Hao Tan, Fujun Luan, Sai Bi, Peng Wang, Jiahao Li, Zifan Shi, Kalyan Sunkavalli, Gordon Wetzstein, Zexiang Xu, et al. DMV3D: Denoising multi-view diffusion using 3D large reconstruction model. In Proc. ICLR, 2024. 20" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 316, + 185, + 555, + 228 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 185, + 555, + 228 + ], + "spans": [ + { + "bbox": [ + 316, + 185, + 555, + 228 + ], + "type": "text", + "content": "[164] Zhongcong Xu, Jianfeng Zhang, Jun Hao Liew, Wenqing Zhang, Song Bai, Jiashi Feng, and Mike Zheng Shou. Pv3d: A 3d generative model for portrait video generation. In Proc. ICLR, 2023. 2, 20" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 316, + 230, + 555, + 274 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 230, + 555, + 274 + ], + "spans": [ + { + "bbox": [ + 316, + 230, + 555, + 274 + ], + "type": "text", + "content": "[165] Qitong Yang, Mingtao Feng, Zijie Wu, Shijie Sun, Weisheng Dong, Yaonan Wang, and Ajmal Mian. Beyond skeletons: Integrative latent mapping for coherent 4d sequence generation. arXiv preprint arXiv:2403.13238, 2024. 20" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 316, + 276, + 555, + 330 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 276, + 555, + 330 + ], + "spans": [ + { + "bbox": [ + 316, + 276, + 555, + 330 + ], + "type": "text", + "content": "[166] Shiyuan Yang, Liang Hou, Haibin Huang, Chongyang Ma, Pengfei Wan, Di Zhang, Xiaodong Chen, and Jing Liao. Direct-a-video: Customized video generation with user-directed camera movement and object motion. arXiv preprint arXiv:2402.03162, 2024. 
2, 20" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 316, + 332, + 555, + 374 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 332, + 555, + 374 + ], + "spans": [ + { + "bbox": [ + 316, + 332, + 555, + 374 + ], + "type": "text", + "content": "[167] Zeyu Yang, Zijie Pan, Chun Gu, and Li Zhang. Diffusion2: Dynamic 3d content generation via score composition of orthogonal diffusion models. arXiv preprint 2404.02148, 2024. 20" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 316, + 376, + 555, + 431 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 376, + 555, + 431 + ], + "spans": [ + { + "bbox": [ + 316, + 376, + 555, + 431 + ], + "type": "text", + "content": "[168] Zhuoyi Yang, Jiayan Teng, Wendi Zheng, Ming Ding, Shiyu Huang, Jiazheng Xu, Yuanming Yang, Wenyi Hong, Xiaohan Zhang, Guanyu Feng, et al. Cogvideox: Text-to-video diffusion models with an expert transformer. arXiv preprint arXiv:2408.06072, 2024. 2, 3, 16, 17, 18, 19" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 316, + 434, + 555, + 487 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 434, + 555, + 487 + ], + "spans": [ + { + "bbox": [ + 316, + 434, + 555, + 487 + ], + "type": "text", + "content": "[169] Zhuoyi Yang, Jiayan Teng, Wendi Zheng, Ming Ding, Shiyu Huang, Jiazheng Xu, Yuanming Yang, Wenyi Hong, Xiaohan Zhang, Guanyu Feng, et al. Cogvideox: Text-to-video diffusion models with an expert transformer. arXiv preprint arXiv:2408.06072, 2024. 7, 16, 17" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 316, + 488, + 555, + 533 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 488, + 555, + 533 + ], + "spans": [ + { + "bbox": [ + 316, + 488, + 555, + 533 + ], + "type": "text", + "content": "[170] Junliang Ye, Fangfu Liu, Qixiu Li, Zhengyi Wang, Yikai Wang, Xinzhou Wang, Yueqi Duan, and Jun Zhu. DreamReward: Text-to-3D generation with human preference. arXiv preprint arXiv:2403.14613, 2024. 20" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 316, + 534, + 555, + 578 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 534, + 555, + 578 + ], + "spans": [ + { + "bbox": [ + 316, + 534, + 555, + 578 + ], + "type": "text", + "content": "[171] Shengming Yin, Chenfei Wu, Jian Liang, Jie Shi, Houqiang Li, Gong Ming, and Nan Duan. Dragnuwa: Fine-grained control in video generation by integrating text, image, and trajectory. arXiv preprint arXiv:2308.08089, 2023. 20" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 316, + 579, + 555, + 622 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 579, + 555, + 622 + ], + "spans": [ + { + "bbox": [ + 316, + 579, + 555, + 622 + ], + "type": "text", + "content": "[172] Wei Yin, Chi Zhang, Hao Chen, Zhipeng Cai, Gang Yu, Kaixuan Wang, Xiaozhi Chen, and Chunhua Shen. Metric3d: Towards zero-shot metric 3d prediction from A single image. In Proc. ICCV, 2023. 20" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 316, + 624, + 555, + 667 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 624, + 555, + 667 + ], + "spans": [ + { + "bbox": [ + 316, + 624, + 555, + 667 + ], + "type": "text", + "content": "[173] Yuyang Yin, Dejia Xu, Zhangyang Wang, Yao Zhao, and Yunchao Wei. 4dgen: Grounded 4d content generation with spatial-temporal consistency. arXiv preprint arXiv:2312.17225, 2023. 
2, 20" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 316, + 670, + 555, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 670, + 555, + 713 + ], + "spans": [ + { + "bbox": [ + 316, + 670, + 555, + 713 + ], + "type": "text", + "content": "[174] Paul Yoo, Jiaxian Guo, Yutaka Matsuo, and Shixiang Shane Gu. DreamSparse: Escaping from Plato's cave with 2D diffusion model given sparse views. In arXiv preprint arXiv:2306.03414, 2023. 20" + } + ] + } + ], + "index": 27 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "type": "text", + "content": "22888" + } + ] + } + ], + "index": 29 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 13 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 297, + 712 + ], + "type": "list", + "angle": 0, + "index": 13, + "blocks": [ + { + "bbox": [ + 56, + 72, + 297, + 127 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 72, + 297, + 127 + ], + "spans": [ + { + "bbox": [ + 56, + 72, + 297, + 127 + ], + "type": "text", + "content": "[175] Heng Yu, Chaoyang Wang, Peiye Zhuang, Willi Menapace, Aliaksandr Siarohin, Junli Cao, Laszlo A Jeni, Sergey Tulyakov, and Hsin-Ying Lee. 4real: Towards photorealistic 4d scene generation via video diffusion models. In Proc. NeurIPS, 2024. 2, 20" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 130, + 296, + 174 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 130, + 296, + 174 + ], + "spans": [ + { + "bbox": [ + 56, + 130, + 296, + 174 + ], + "type": "text", + "content": "[176] Shoubin Yu, Jacob Zhiyuan Fang, Jian Zheng, Gunnar A Sigurdsson, Vicente Ordonez, Robinson Piramuthu, and Mohit Bansal. Zero-shot controllable image-to-video animation via motion decomposition. In ACM MM, 2024. 20" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 176, + 296, + 231 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 176, + 296, + 231 + ], + "spans": [ + { + "bbox": [ + 56, + 176, + 296, + 231 + ], + "type": "text", + "content": "[177] Wangbo Yu, Jinbo Xing, Li Yuan, Wenbo Hu, Xiaoyu Li, Zhipeng Huang, Xiangjun Gao, Tien-Tsin Wong, Ying Shan, and Yonghong Tian. Viewcrafter: Taming video diffusion models for high-fidelity novel view synthesis. arXiv preprint arXiv:2409.02048, 2024. 3" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 56, + 233, + 296, + 275 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 233, + 296, + 275 + ], + "spans": [ + { + "bbox": [ + 56, + 233, + 296, + 275 + ], + "type": "text", + "content": "[178] Xin Yu, Yuan-Chen Guo, Yangguang Li, Ding Liang, Song-Hai Zhang, and Xiaojuan Qi. Text-to-3D with classifier score distillation. arXiv preprint arXiv:2310.19415, 2023. 20" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 56, + 279, + 296, + 322 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 279, + 296, + 322 + ], + "spans": [ + { + "bbox": [ + 56, + 279, + 296, + 322 + ], + "type": "text", + "content": "[179] Yu-Jie Yuan, Leif Kobbelt, Jiwen Liu, Yuan Zhang, Pengfei Wan, Yu-Kun Lai, and Lin Gao. 4dynamic: Text-to-4d generation with hybrid priors. arXiv preprint arXiv:2407.12684, 2024. 
20" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 56, + 325, + 296, + 380 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 325, + 296, + 380 + ], + "spans": [ + { + "bbox": [ + 56, + 325, + 296, + 380 + ], + "type": "text", + "content": "[180] Raza Yunus, Jan Eric Lenssen, Michael Niemeyer, Yiyi Liao, Christian Rupprecht, Christian Theobalt, Gerard Pons-Moll, Jia-Bin Huang, Vladislav Golyanik, and Eddy Ilg. Recent trends in 3d reconstruction of general non-rigid scenes. In Computer Graphics Forum, 2024. 2" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 56, + 382, + 296, + 437 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 382, + 296, + 437 + ], + "spans": [ + { + "bbox": [ + 56, + 382, + 296, + 437 + ], + "type": "text", + "content": "[181] Bohan Zeng, Ling Yang, Siyu Li, Jiaming Liu, Zixiang Zhang, Juanxi Tian, Kaixin Zhu, Yongzhen Guo, Fu-Yun Wang, Minkai Xu, et al. Trans4d: Realistic geometry-aware transition for compositional text-to-4d synthesis. arXiv preprint arXiv:2410.07155, 2024. 20" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 56, + 439, + 296, + 483 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 439, + 296, + 483 + ], + "spans": [ + { + "bbox": [ + 56, + 439, + 296, + 483 + ], + "type": "text", + "content": "[182] Yifei Zeng, Yanqin Jiang, Siyu Zhu, Yuanxun Lu, Youtian Lin, Hao Zhu, Weiming Hu, Xun Cao, and Yao Yao. Stag4d: Spatial-temporal anchored generative 4d gaussians. arXiv preprint arXiv:2403.14939, 2024. 2, 20" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 56, + 485, + 296, + 506 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 485, + 296, + 506 + ], + "spans": [ + { + "bbox": [ + 56, + 485, + 296, + 506 + ], + "type": "text", + "content": "[183] Biao Zhang and Rico Sennrich. Root mean square layer normalization. In Proc. NeurIPS, 2019. 17" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 56, + 510, + 296, + 552 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 510, + 296, + 552 + ], + "spans": [ + { + "bbox": [ + 56, + 510, + 296, + 552 + ], + "type": "text", + "content": "[184] Bowen Zhang, Tianyu Yang, Yu Li, Lei Zhang, and Xi Zhao. Compress3D: a compressed latent space for 3D generation from a single image. arXiv preprint arXiv:2403.13524, 2024. 20" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 56, + 555, + 296, + 620 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 555, + 296, + 620 + ], + "spans": [ + { + "bbox": [ + 56, + 555, + 296, + 620 + ], + "type": "text", + "content": "[185] David Junhao Zhang, Roni Paiss, Shiran Zada, Nikhil Karnad, David E Jacobs, Yael Pritch, Inbar Mosseri, Mike Zheng Shou, Neal Wadhwa, and Nataniel Ruiz. Recapture: Generative video camera controls for user-provided videos using masked video fine-tuning. arXiv preprint arXiv:2411.05003, 2024. 3" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 56, + 623, + 296, + 666 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 623, + 296, + 666 + ], + "spans": [ + { + "bbox": [ + 56, + 623, + 296, + 666 + ], + "type": "text", + "content": "[186] Hao Zhang, Di Chang, Fang Li, Mohammad Soleymani, and Narendra Ahuja. Magicpose4d: Crafting articulated models with appearance and motion control. arXiv preprint arXiv:2405.14017, 2024. 
20" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 56, + 670, + 296, + 712 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 670, + 296, + 712 + ], + "spans": [ + { + "bbox": [ + 56, + 670, + 296, + 712 + ], + "type": "text", + "content": "[187] Haiyu Zhang, Xinyuan Chen, Yaohui Wang, Xihui Liu, Yunhong Wang, and Yu Qiao. 4diffusion: Multi-view video diffusion model for 4d generation. arXiv preprint arXiv:2405.20674, 2024. 20" + } + ] + } + ], + "index": 12 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 72, + 555, + 643 + ], + "type": "list", + "angle": 0, + "index": 27, + "blocks": [ + { + "bbox": [ + 316, + 72, + 555, + 106 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 72, + 555, + 106 + ], + "spans": [ + { + "bbox": [ + 316, + 72, + 555, + 106 + ], + "type": "text", + "content": "[188] Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models. In Proc. ICCV, 2023. 3, 4, 6, 18" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 316, + 107, + 555, + 161 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 107, + 555, + 161 + ], + "spans": [ + { + "bbox": [ + 316, + 107, + 555, + 161 + ], + "type": "text", + "content": "[189] Tianyuan Zhang, Hong-Xing Yu, Rundi Wu, Brandon Y Feng, Changxi Zheng, Noah Snavely, Jiajun Wu, and William T Freeman. Physdreamer: Physics-based interaction with 3d objects via video generation. In Proc. ECCV, 2024. 20" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 316, + 163, + 555, + 206 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 163, + 555, + 206 + ], + "spans": [ + { + "bbox": [ + 316, + 163, + 555, + 206 + ], + "type": "text", + "content": "[190] Zhichao Zhang, Hui Chen, Jinsheng Deng, Xiaqing Yin, Xingshen Song, and Ming Xu. Motion4d: A decoupled pipeline for enhanced text-to-4d generation with optimized motion patterns. SSRN, 2024. 20" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 316, + 208, + 555, + 251 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 208, + 555, + 251 + ], + "spans": [ + { + "bbox": [ + 316, + 208, + 555, + 251 + ], + "type": "text", + "content": "[191] Zhenghao Zhang, Junchao Liao, Menghao Li, Long Qin, and Weizhi Wang. Tora: Trajectory-oriented diffusion transformer for video generation. arXiv preprint arXiv:2407.21705, 2024. 20" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 316, + 253, + 555, + 295 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 253, + 555, + 295 + ], + "spans": [ + { + "bbox": [ + 316, + 253, + 555, + 295 + ], + "type": "text", + "content": "[192] Wang Zhao, Shaohui Liu, Hengkai Guo, Wenping Wang, and Yong-Jin Liu. *Particlesfm: Exploiting dense point trajectories for localizing moving cameras in the wild*. In *Proc. ECCV*, 2022. 5, 7, 8" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 316, + 297, + 555, + 340 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 297, + 555, + 340 + ], + "spans": [ + { + "bbox": [ + 316, + 297, + 555, + 340 + ], + "type": "text", + "content": "[193] Yuyang Zhao, Zhiwen Yan, Enze Xie, Lanqing Hong, Zhenguo Li, and Gim Hee Lee.Animate124: Animating one image to 4d dynamic scene. arXiv preprint arXiv:2311.14603, 2023. 
2, 20" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 316, + 342, + 555, + 386 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 342, + 555, + 386 + ], + "spans": [ + { + "bbox": [ + 316, + 342, + 555, + 386 + ], + "type": "text", + "content": "[194] Yuyang Zhao, Chung-Ching Lin, Kevin Lin, Zhiwen Yan, Linjie Li, Zhengyuan Yang, Jianfeng Wang, Gim Hee Lee, and Lijuan Wang. Genxd: Generating any 3d and 4d scenes. arXiv preprint arXiv:2411.02319, 2024. 3" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 316, + 388, + 555, + 420 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 388, + 555, + 420 + ], + "spans": [ + { + "bbox": [ + 316, + 388, + 555, + 420 + ], + "type": "text", + "content": "[195] Guangcong Zheng, Teng Li, Rui Jiang, Yehao Lu, Tao Wu, and Xi Li. Cami2v: Camera-controlled image-to-video diffusion model. arXiv preprint arXiv:2410.15957, 2024. 3" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 316, + 421, + 555, + 464 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 421, + 555, + 464 + ], + "spans": [ + { + "bbox": [ + 316, + 421, + 555, + 464 + ], + "type": "text", + "content": "[196] Yufeng Zheng, Xueting Li, Koki Nagano, Sifei Liu, Otmar Hilliges, and Shalini De Mello. A unified approach for text-and image-guided 4d scene generation. In Proc. CVPR, 2024. 2, 20" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 316, + 466, + 555, + 509 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 466, + 555, + 509 + ], + "spans": [ + { + "bbox": [ + 316, + 466, + 555, + 509 + ], + "type": "text", + "content": "[197] Zangwei Zheng, Xiangyu Peng, Tianji Yang, Chenhui Shen, Shenggui Li, Hongxin Liu, Yukun Zhou, Tianyi Li, and Yang You. Open-sora: Democratizing efficient video production for all, 2024. 3, 17" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 316, + 511, + 555, + 554 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 511, + 555, + 554 + ], + "spans": [ + { + "bbox": [ + 316, + 511, + 555, + 554 + ], + "type": "text", + "content": "[198] Haitao Zhou, Chuang Wang, Rui Nie, Jinxiao Lin, Dongdong Yu, Qian Yu, and Changhu Wang. Trackgo: A flexible and efficient method for controllable video generation. arXiv preprint arXiv:2408.11475, 2024. 20" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 316, + 555, + 555, + 598 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 555, + 555, + 598 + ], + "spans": [ + { + "bbox": [ + 316, + 555, + 555, + 598 + ], + "type": "text", + "content": "[199] Tinghui Zhou, Richard Tucker, John Flynn, Graham Fyffe, and Noah Snively. Stereo magnification: Learning view synthesis using multiplane images. In SIGGRAPH, 2018. 2, 4, 5, 6, 7, 8, 16, 18, 19, 20" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 316, + 601, + 555, + 643 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 601, + 555, + 643 + ], + "spans": [ + { + "bbox": [ + 316, + 601, + 555, + 643 + ], + "type": "text", + "content": "[200] Hanxin Zhu, Tianyu He, Anni Tang, Junliang Guo, Zhibo Chen, and Jiang Bian. Compositional 3d-aware video generation with llm director. arXiv preprint arXiv:2409.00558, 2024. 
20" + } + ] + } + ], + "index": 26 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "type": "text", + "content": "22889" + } + ] + } + ], + "index": 28 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 14 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2025/ACAttack_ Adaptive Cross Attacking RGB-T Tracker via Multi-Modal Response Decoupling/6f190354-412a-4587-a766-f43d48cafb75_content_list.json b/2025/ACAttack_ Adaptive Cross Attacking RGB-T Tracker via Multi-Modal Response Decoupling/6f190354-412a-4587-a766-f43d48cafb75_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..7863b1d70561aac943284cf4aa2f9f67a1b93899 --- /dev/null +++ b/2025/ACAttack_ Adaptive Cross Attacking RGB-T Tracker via Multi-Modal Response Decoupling/6f190354-412a-4587-a766-f43d48cafb75_content_list.json @@ -0,0 +1,1661 @@ +[ + { + "type": "text", + "text": "ACAttack: Adaptive Cross Attacking RGB-T Tracker via Multi-Modal Response Decoupling", + "text_level": 1, + "bbox": [ + 207, + 130, + 789, + 176 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Xinyu Xiang Qinglong Yan Hao Zhang* Jiayi Ma* Wuhan University, China", + "bbox": [ + 279, + 203, + 732, + 239 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "xiangxinyu@whu.edu.cn, qinglong_yan@whu.edu.cn, zhpersonalbox@gmail.com, jyma2010@gmail.com", + "bbox": [ + 99, + 241, + 906, + 257 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 246, + 291, + 326, + 306 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "The research on adversarial attacks against trackers primarily concentrates on the RGB modality, whereas the methodology for attacking RGB-T multi-modal trackers has seldom been explored so far. This work represents an innovative attempt to develop an adaptive cross attack framework via multi-modal response decoupling, generating multi-modal adversarial patches to evade RGB-T trackers. Specifically, a modal-aware adaptive attack strategy is introduced to weaken the modality with high common information contribution alternately and iteratively, achieving the modal decoupling attack. In order to perturb the judgment of the modal balance mechanism in the tracker, we design a modal disturbance loss to increase the distance of the response map of the single-modal adversarial samples in the tracker. Besides, we also propose a novel spatio-temporal joint attack loss to progressively deteriorate the tracker's perception of the target. Moreover, the design of the shared adversarial shape enables the generated multi-modal adversarial patches to be readily deployed in real-world scenarios, effectively reducing the interference of the patch posting process on the shape attack of the infrared adversarial layer. Extensive digital and physical domain experiments demonstrate the effectiveness of our multi-modal adversarial patch attack. Our code is available at https://github.com/Xinyu-Xiang/ACAttack.", + "bbox": [ + 88, + 323, + 485, + 700 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1. 
Introduction", + "text_level": 1, + "bbox": [ + 91, + 729, + 220, + 744 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "The adversarial attack on visual object tracking (VOT) [1, 26] aims to mislead the prediction results of the tracker through the generated adversarial disturbance, find the model vulnerabilities, and then promote the security of the tracking model in real-life. Single-modal tracking attack methods have been extensively studied, but with the wide application of multi-modal devices [19, 20, 24], multimodal trackers are widely used in safety-critical real-world", + "bbox": [ + 89, + 755, + 483, + 876 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/c344516e1824eddade6022528581e550f426aeec60e7d6e47551d4607884a8b0.jpg", + "image_caption": [ + "Figure 1. Our attack strategy against RGB-T trackers. An adaptive attack strategy, sensitive to modality, is introduced to alternately and iteratively suppress the modality with a high contribution of shared information. Additionally, a modal disturbance loss is crafted to enlarge the response map distance for single-modal adversarial samples within the tracker." + ], + "image_footnote": [], + "bbox": [ + 516, + 290, + 905, + 477 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "fields such as autonomous driving and urban security [16].", + "bbox": [ + 511, + 598, + 901, + 613 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "To address the urgent need to explore the security of trackers, adversarial attack techniques for trackers have emerged in rapid succession, including traditional gradient-based attack approaches and deep-network-based attacks. The former methods [10, 11] utilize hand-crafted parameters and apply many iterations of gradient ascent to maximize an adversarial loss function for misguiding deep networks. Although they can achieve certain attack effects for specific types of trackers, it is challenging to comprehensively explore and attack the potential vulnerabilities of different trackers due to the limitations of inflexible pattern design. In contrast, the latter paradigm [6] applies tremendous amounts of data to train an adversarial patch generator with flexible architectures and optimization strategies to automatically search for model weaknesses and realize tracker attacks. Therefore, in comparison with traditional gradient-based attack approaches, the deep-network-based paradigm can more automatically and flexibly excavate the security issues within the tracker.", + "bbox": [ + 509, + 614, + 906, + 898 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "CVF", + "bbox": [ + 106, + 2, + 181, + 42 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore.", + "bbox": [ + 236, + 0, + 810, + 46 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "*Corresponding authors.", + "bbox": [ + 109, + 887, + 241, + 898 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "22099", + "bbox": [ + 478, + 944, + 517, + 957 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Although prior adversarial attack methods are effective in interfering with trackers, several challenges still need to be addressed. 
Notably, existing adversarial attack methods [5, 15, 25] on tracking are designed for RGB modality, whereas the methodologies for attacking RGB-T multi-modal trackers are less explored so far. Considering the widespread deployment of multi-modal tracking technology [17, 27, 28] in several safety-critical areas, it is urgent to explore and implement adversarial attacks of multimodal tracking to understand the potential vulnerabilities of the trackers. However, as shown in Fig. 1, the unique modal coupling and structural design of RGB-T trackers make it a great challenge to successfully find model vulnerabilities. Firstly, due to the modal equilibrium mechanism and coupled multi-modal information, it is difficult to successfully jam the RGB-T tracking model itself. Specifically, the modal balancing strategy in the multi-modal tracker can effectively prevent the attack of adversarial perturbation in a single modal. Secondly, the coupling of multi-modal information can effectively weaken the attacks against the consensus region of the target. Thirdly, the deployment of patches in the physical world is also challenging because the stacked placement of multi-modal patches has a probability of compromising the expression of infrared adversarial shapes, reducing their synergy performance.", + "bbox": [ + 93, + 90, + 480, + 467 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Considering these challenges, we propose ACAttack, an adaptive cross attack framework via multi-modal response decoupling. It aims to generate multi-modal adversarial patches to evade RGB-T trackers in both digital and physical domains. Specifically, this framework can gradually and adaptively optimize, discover plenty of rough adversarial samples, and then map them to the high-dimensional adversarial space of different modalities according to the modal response contribution factor, forming multi-modal adversarial patches. Secondly, a modal-aware adaptive attack strategy is introduced to weaken tracker's deep semantic attention to the modality with high common information contribution according to the contribution degree of modal response alternately and iteratively, achieving the modal decoupling attack. When the contributions of two modalities are similar, we design a modal disturbance loss to search the modal imbalance vulnerabilities of the tracker, expand the distance of the response map of the single-modal adversarial samples in the tracker, and perturb the judgment of the balance modal in the tracker. We also design a spatio-temporal joint attack loss to build progressively enlarged pseudo-GT between consecutive frames, which progressively deteriorates the tracker's perception of the target. Thirdly, the design of the shared adversarial shape is deployed to eliminate the interference of visible patches on the expression of infrared adversarial shapes. 
After the shape is shared, it can not only avoid sacrificing the adversarial shape's inter-modal attack ability but also realize", + "bbox": [ + 93, + 478, + 480, + 898 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "attacks other than texture in the visible modal.", + "bbox": [ + 517, + 92, + 816, + 104 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In summary, we make the following contributions:", + "bbox": [ + 535, + 107, + 864, + 121 + ], + "page_idx": 1 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- We make an innovative attempt to propose an adaptive cross attack framework via multi-modal response decoupling. It can generate multi-modal adversarial patches to mislead RGB-T trackers effectively.", + "- We develop a novel modal attack flow, in which a modal-aware adaptive attack strategy and modal attack constraints alternately disturb the modality with high contribution, achieving modal decoupling and destroying the tracker's modal balance mechanism, respectively.", + "- We design the shape-shared stack strategy to link the visible and infrared adversarial shapes, reducing the attack loss caused by the mutual deployment of multi-modal patches in physical scenarios.", + "- Experimental results show that multi-modal patches can efficiently fool RGB-T trackers in standard RGB-T tracking datasets and real scenes." + ], + "bbox": [ + 516, + 126, + 903, + 366 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2. Related Work", + "text_level": 1, + "bbox": [ + 517, + 382, + 651, + 396 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2.1. Visual Object Tracking", + "text_level": 1, + "bbox": [ + 517, + 407, + 725, + 422 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Given the tracked object in the first frame, object tracking aims to recognize and locate the object in subsequent frames. Many RGB tracking methods [2, 7, 14, 18, 21] have been proposed and achieved commendable tracking performance. However, RGB sensors struggle to capture objects effectively under challenging conditions such as occlusion and low light, limiting the performance of RGB trackers. To address this, the RGB-T tracking paradigm is introduced, which is not restricted to a single RGB modality but instead integrates the complementary information from both RGB and thermal modalities. This fusion enables more robust tracking capabilities. ViPT [29] introduces a vision prompt tracking framework that leverages the foundational model with strong representation capabilities, enabling interaction between the thermal and RGB modalities through a modality-complementing prompter. BAT [3] proposes a universal bidirectional adapter, which enables mutual prompting between the thermal and RGB modalities and further improves tracking performance. SDSTrack [9] designs a complementary masked patch distillation strategy based on self-distillation learning, which enhances the tracking robustness in extreme weather.", + "bbox": [ + 514, + 429, + 903, + 760 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2.2. Adversarial Attacks", + "text_level": 1, + "bbox": [ + 517, + 773, + 700, + 786 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Currently, adversarial attacks in the tracking task primarily target RGB trackers. For instance, APYVOT [4] proposes an optimization objective function with a dual-attention mechanism to generate perturbations, disrupting tracking by interfering solely with the initial frame. 
MTD [8] introduces a maximum textural discrepancy loss function that misleads the visual trackers by decorrelating the template", + "bbox": [ + 514, + 795, + 903, + 898 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "22100", + "bbox": [ + 478, + 945, + 517, + 955 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/89724a5020b0aa2d41c2cccabb03770003a311051704048345ec27280933c31c.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 98, + 92, + 906, + 258 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/51abf7b5b1b9ea20cd809cc7842a0f6369b3ce07abf87277bc899e287d84adda.jpg", + "image_caption": [ + "Figure 2. The overall framework of our ACAttack." + ], + "image_footnote": [], + "bbox": [ + 98, + 262, + 906, + 388 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "and search frame at hierarchical feature scales. These methods, however, fail to disrupt the significant feature enhancement resulting from the interactions between RGB and thermal modalities, which limits their effectiveness against RGB-T trackers. Therefore, it is essential to develop an attack strategy specifically designed for RGB-T tracking.", + "bbox": [ + 89, + 439, + 483, + 531 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3. Methodology", + "text_level": 1, + "bbox": [ + 89, + 547, + 225, + 565 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.1. Coarse-to-Fine Modality Attack Framework", + "text_level": 1, + "bbox": [ + 89, + 574, + 468, + 589 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "With the help of the progressive modality information integration strategy, the multi-modal trackers gradually strengthen the common scene representation and target response, thereby achieving a robust tracking performance superior to that of the single-modal trackers. Consequently, a coarse-to-fine architecture named ACAttack is designed to progressively degrade the modality integration capability of RGB-T models, which can be divided into two stages. The overall architecture of our ACAttack is illustrated in Fig. 2 and Algorithm 1. First, we employ projected gradient descent (PGD) in stage1 to identify a set of adversarial examples $\\{p_i\\}_{i=1}^k$ with sufficient aggressiveness, narrowing the search space for refined attacks and increasing the likelihood of discovering strong adversarial examples, formulated as:", + "bbox": [ + 89, + 597, + 483, + 821 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\left\\{p_i\\right\\}_{i=1}^{k} = \\mathrm{PGD}\\left(p_i^{init}\\right), \\tag{1}\n$$\n", + "text_format": "latex", + "bbox": [ + 202, + 825, + 482, + 844 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "where $p_i^{init}$ is randomly initialized with noise patches. The generated patches are subsequently loaded onto the visible image $I_{vi}$ to form a visible adversarial sample $I_{vi}^{adv}$ . This", + "bbox": [ + 89, + 854, + 483, + 902 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "process can be formulated as follows:", + "bbox": [ + 511, + 440, + 764, + 455 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\nI_{vi}^{adv} = p_i \\odot M + I_{vi} \\odot (1 - M), \\tag{2}\n$$\n", + "text_format": "latex", + "bbox": [ + 594, + 467, + 906, + 484 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "where $M$ is the binary mask for applying the adversarial patch. $\\odot$ represents the element-wise Hadamard product. 
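In effect, pixels where $M=1$ are overwritten by the patch content, while pixels where $M=0$ retain the original image, so the perturbation is confined to the patch region rather than the whole frame. 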
The adversarial visible image $I_{vi}^{adv}$ , concatenated with the clean infrared image $I_{ir}$ , is sent to the RGB-T tracker $T(\\cdot)$ to predict the final bounding box $Bbox_{pred}$ of the target, which is expressed as:", + "bbox": [ + 511, + 497, + 906, + 574 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\nBbox_{pred} = T\\left(I_{vi}^{adv}, I_{ir}\\right). \\tag{3}\n$$\n", + "text_format": "latex", + "bbox": [ + 620, + 589, + 903, + 606 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "We optimize this process by minimizing the conventional attack loss $L_{att}$ relative to the center point, which is defined as:", + "bbox": [ + 511, + 614, + 905, + 659 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\nL_{att} = -\\left\\| C_p(Bbox_{pred}) - C_p(Bbox_{gt}) \\right\\|_{2}^{2}, \\tag{4}\n$$\n", + "text_format": "latex", + "bbox": [ + 553, + 674, + 906, + 691 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "where $C_p$ denotes the operator to obtain the center point of the bounding box $Bbox$ . Specifically, when the attack loss $L_{att}$ reaches a predefined threshold, the iteration is halted, and a rough adversarial sample is generated. This process is repeated $k$ times to generate a set of $k$ rough adversarial samples. Considering the different imaging principles of the infrared and visible modalities, the multi-modal patches are specially designed according to these differences. Specifically, the visible modality is mainly attacked through adversarial texture. For the infrared modality, texture is difficult to perceive, so the adversarial shape is used for the attack. Subsequently, the set of adversarial samples generated in the coarse attack stage (stage1) is fed into the", + "bbox": [ + 511, + 704, + 906, + 900 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "22101", + "bbox": [ + 478, + 944, + 517, + 955 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "subsequent fine attack process (stage2) for further refinement, resulting in the generation of multi-modal adversarial patches with strong attack performance. Finally, through continuous iterative optimization, the multi-modal patches will share the same attack shape, while the visible patch will also possess adversarial texture to confuse the tracker. The fine-grained attack phase targets the modality of the multi-modal tracker and consists of modal decoupling attacks and modal balance interference, which will be detailed in the subsequent sections.", + "bbox": [ + 89, + 90, + 483, + 242 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.2. Modal Decoupling Attack", + "text_level": 1, + "bbox": [ + 89, + 250, + 326, + 266 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "The RGB-T tracker implicitly couples the contributions of the two modalities, thereby enhancing tracking accuracy. Given the significant role of modal contribution in the tracker, we propose a modal decoupling attack to adaptively diminish the influence of advantageous modalities. Specifically, we use the coarse adversarial samples from stage1 as input for stage2, feeding them simultaneously into the adversarial texture generation network $G_{tex}^{Adv}$ and the adversarial shape generation network $G_{shape}^{Adv}$ . For attacking the infrared modality, the rough adversarial sample set from the first stage is encoded into $r$ dimensions via continuous downsampling and an MLP, controlling the adversarial shape. 
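In other words, the $r$-dimensional code acts as a compact latent that parameterizes the patch contour rather than its texture, which suits the thermal modality, where texture is barely observable. 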
The infrared patch $p_{ir}$ generation process can be expressed as follows:", + "bbox": [ + 89, + 272, + 483, + 484 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\np_{ir} = G_{shape}^{Adv}\\left(\\{p_i\\}_{i=1}^{k}\\right). \\tag{5}\n$$\n", + "text_format": "latex", + "bbox": [ + 202, + 492, + 482, + 513 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "The adversarial texture generation network generates adversarial textures to attack the visible modality using residual connections and upsampling [23], which is defined as:", + "bbox": [ + 89, + 521, + 483, + 566 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\np_{vi} = \\left(1 - p_{ir}\\right) \\odot G_{tex}^{Adv}\\left(\\{p_i\\}_{i=1}^{k}\\right), \\tag{6}\n$$\n", + "text_format": "latex", + "bbox": [ + 163, + 578, + 482, + 598 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where the visible patch with adversarial textures is $p_{vi}$ .", + "bbox": [ + 89, + 603, + 452, + 618 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Subsequently, the modal contribution of the current network input is calculated as the reciprocal of the distance between the response map obtained from single-modal data and the response map from dual-modal input. A larger reciprocal distance indicates that the response maps from dual-modal and single-modal inputs are more similar, suggesting a greater contribution from the current single modality to the tracker. The modal response contribution can be expressed as follows:", + "bbox": [ + 89, + 619, + 483, + 753 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\nc_{m} = \\frac{1}{\\mathrm{dis}\\left(R(m, m), R(vi, ir)\\right)}, \\tag{7}\n$$\n", + "text_format": "latex", + "bbox": [ + 176, + 753, + 482, + 786 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $c_{m}$ represents the contribution value of modality $m \\in \\{vi, ir\\}$ to the tracker. $dis$ stands for the distance function and is used to measure the Euclidean distance between response maps. $R(\\cdot, \\cdot)$ shows the response map acquired by the tracker under the current input.", + "bbox": [ + 89, + 794, + 483, + 869 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "To normalize the modal contribution, a softmax operation is applied to the reciprocal distance, yielding the final", + "bbox": [ + 89, + 869, + 483, + 901 + ], + "page_idx": 3 + }, + { + "type": "code", + "sub_type": "algorithm", + "code_caption": [ + "Algorithm 1: The ACAttack Algorithm" + ], + "code_body": "Input: Random patches $p_i^{init}$ , parameters $k$ , $M_{stage1}$ , $M_{stage2}$ , $\\xi$ , $\\zeta$ \nOutput: Optimized multi-modal patches $p_{vi}, p_{ir}$ \n1 Iteration: \n2 Initialize a random patch $p_i^{init}$ ; \n3 $i = i + 1$ ; \n4 Iteration: \n5 Generate $p_i$ through Eq. (1); \n6 Use Eq. (2) to generate adversarial sample $I_{vi}^{adv}$ ; \n7 Calculate $Bbox_{pred}$ using Eq. (3); \n8 Optimize $PGD(\\cdot)$ with Eq. (4); \n9 Until: $L_{att} < \\xi$ or iter $\\geq M_{stage1}$ \n10 Until: $i \\geq k$ \n11 Determine $\\{p_i\\}_{i=1}^k$ after optimization in stage1; \n12 Iteration: \n13 iter $= iter + 1$ ; \n14 Obtain $p_{ir}, p_{vi}$ via Eqs. (5) and (6); \n15 Apply multi-modal patches $p_{ir}, p_{vi}$ on $I_{ir}, I_{vi}$ ; \n16 Calculate $c_{vi}, c_{ir}$ using Eq. 
(7); \n17 Send to Tracker $T(\\cdot)$ to predict bounding box; \n18 if $|c_{vi} - c_{ir}| < \\zeta$ ; \n19 Optimize $G_{shape}^{Adv}$ with Eq. (9); \n20 elif $c_{vi} - c_{ir} > \\zeta$ ; \n21 Optimize $G_{tex}^{Adv}$ with Eqs. (4) and (10); \n22 elif $c_{ir} - c_{vi} > \\zeta$ ; \n23 Optimize $G_{shape}^{Adv}$ with Eqs. (4) and (10); \n24 Until: iter $\\geq M_{stage2}$", + "bbox": [ + 516, + 111, + 906, + 535 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "modal contribution score, formulated as:", + "bbox": [ + 513, + 564, + 782, + 578 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\nc_{norm} = \\mathrm{softmax}\\left(c_{vi}, c_{ir}\\right). \\tag{8}\n$$\n", + "text_format": "latex", + "bbox": [ + 612, + 589, + 903, + 604 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Finally, an automatic discriminant attack is executed based on the modal contribution score. As illustrated, when the visible contribution is higher in the input data, only the visible modal is attacked, specifically by optimizing the generation of adversarial textures. When the infrared contribution is higher in the input data, only the adversarial shape is modified to attack the infrared modal, thereby reducing its contribution to the tracker. Given that the tracker employs a modal balance mechanism, the contributions of the two modalities may be similar in certain scenarios, as detailed in the subsequent section.", + "bbox": [ + 511, + 613, + 906, + 779 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.3. Modal Balance Interference", + "text_level": 1, + "bbox": [ + 511, + 787, + 764, + 801 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Previous work on tracking attacks has attempted to design explicit attack losses to detect model vulnerabilities, but this approach often fails to account for the inherent characteristics of the model, making it challenging to execute effective attacks. Inspired by the concept of implicit attacks [25] and the multi-modal aggregation properties in", + "bbox": [ + 511, + 810, + 906, + 901 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "22102", + "bbox": [ + 478, + 944, + 519, + 957 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "RGB-T tracker [22], we develop a loss function with modal-balanced interference to target multi-modal trackers. In cases where the contributions of infrared and visible modal are similar (i.e., modal balance), the response map of single-modal input closely resembles that of dual-modal input. To disrupt this balance, we extract the response maps of the two single-modal adversarial examples and increase the distance between them. The details are provided as follows:", + "bbox": [ + 89, + 90, + 483, + 212 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\nL_{mi} = -\\left\\| R\\left(vi_{adv}, vi_{adv}\\right) - R\\left(ir_{adv}, ir_{adv}\\right) \\right\\|_{2}^{2}. \\tag{9}\n$$\n", + "text_format": "latex", + "bbox": [ + 116, + 223, + 482, + 242 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Notably, the infrared and visible patches in our method share the same adversarial shape to achieve simultaneous attacks on both modalities. Therefore, under conditions of modal balance, only the adversarial shape is optimized. Additionally, a spatio-temporal joint attack loss $L_{st}$ is employed in conjunction with the modal jamming loss $L_{mi}$ to disrupt the tracker's semantic perception.", + "bbox": [ + 89, + 253, + 483, + 375 + ], + "page_idx": 4 + },
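 + { + "type": "text", + "text": "As an editorial aid (not part of the original paper), the short sketch below illustrates one plausible reading of Eqs. (9) and (10) in PyTorch-style code; the helper tracker.response and the per-frame (w, h) layout of pred_whs are assumed conventions rather than the authors' implementation.", + "bbox": [ + 89, + 253, + 483, + 375 + ], + "page_idx": 4 + }, + { + "type": "code", + "sub_type": "code", + "code_caption": [ + "Editorial sketch of the modal disturbance and spatio-temporal losses" + ], + "code_body": "import torch \n# Sketch under stated assumptions: tracker.response(a, b) is assumed to \n# return the tracker response map for the given (RGB, thermal) input pair. \ndef modal_losses(tracker, vi_adv, ir_adv, pred_whs, gt_wh, ratios): \n    # Eq. (9): negated squared L2 distance between the response maps of the \n    # two single-modal adversarial inputs; minimizing it pushes them apart. \n    r_vi = tracker.response(vi_adv, vi_adv) \n    r_ir = tracker.response(ir_adv, ir_adv) \n    l_mi = -torch.sum((r_vi - r_ir) ** 2) \n    # Eq. (10): pull each predicted (w, h) toward a pseudo-GT inflated by r_i, \n    # with ratios = [1.90, 1.95, 2.00, 2.05, 2.10] over s = 5 consecutive frames. \n    l_st = sum(torch.sum((pw - r * gt_wh) ** 2) for pw, r in zip(pred_whs, ratios)) \n    return l_mi, l_st", + "bbox": [ + 89, + 253, + 483, + 375 + ], + "page_idx": 4 + },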
 + { + "type": "text", + "text": "The specific design is presented in the following formula:", + "bbox": [ + 89, + 253, + 483, + 375 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\nL_{st} = \\left\\| \\sum_{i=1}^{s} Bbox_{pred}(w, h) - r_i \\cdot Bbox_{gt}(w, h) \\right\\|_{2}^{2}, \\tag{10}\n$$\n", + "text_format": "latex", + "bbox": [ + 101, + 377, + 482, + 415 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $s$ denotes the number of consecutive frames extracted from a video, set as $s = 5$ . $r_i$ represents the scaling factor over time to construct the pseudo-GT, which is set as [1.90, 1.95, 2.00, 2.05, 2.10].", + "bbox": [ + 89, + 426, + 483, + 488 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.4. Implementation Process in Real-world", + "text_level": 1, + "bbox": [ + 89, + 496, + 421, + 512 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "After completing the digital domain optimization, the multimodal adversarial patches require deployment in the real world. However, during real-world deployment, visible and infrared patches are stacked, leading to inevitable interactions between the two modalities, as illustrated in Fig. 3. Specifically, the coverage of visible patches impacts the adversarial shape expression of infrared patches, while the presence of infrared patches hinders the rendering of the adversarial texture in the visible modality. To address these challenges, we propose a shape-shared stacking strategy, where both the visible and infrared patches adopt the same attack shape. This design not only effectively mitigates interactions between infrared and visible patches in the real world but also enhances the attack shapes of visible patches, thereby improving overall attack performance.", + "bbox": [ + 89, + 518, + 483, + 746 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4. Experiments", + "text_level": 1, + "bbox": [ + 89, + 758, + 223, + 776 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4.1. Experimental Settings", + "text_level": 1, + "bbox": [ + 89, + 782, + 297, + 800 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4.1.1. Datasets and Evaluation Metrics", + "text_level": 1, + "bbox": [ + 89, + 806, + 362, + 820 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "We conduct experiments on RGBT234 [12] and LasHeR [13] datasets and assess the effectiveness of our ACAttack by evaluating precision rate (PR) and success rate (SR), both of which are commonly used metrics in tracking tasks. 
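In brief, PR measures the fraction of frames whose predicted center lies within a pixel threshold of the ground-truth center, and SR measures the fraction of frames whose predicted box overlaps the ground truth beyond an IoU threshold. 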
Taking PR as an example, we", + "bbox": [ + 89, + 824, + 483, + 901 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/d470705f12b1b300b95534ddb97c4a2699910e4e6ccda977f227ae26d4cb9532.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 517, + 90, + 645, + 166 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/33fd2f375730bee087e11a98a35e6edcbe43d2da12561454172fedc8fc9231ac.jpg", + "image_caption": [ + "VI on IR" + ], + "image_footnote": [], + "bbox": [ + 517, + 167, + 643, + 244 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/f47fed5e15b83cdc966600b10b48230bac13df87d5bf3c7dd049d110db4d9ce1.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 645, + 90, + 774, + 166 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/92e3c6e7196ddfdb7bb345e4c518f60529abae4065d0b6b0eada7eead6b613b9.jpg", + "image_caption": [ + "IR on VI" + ], + "image_footnote": [], + "bbox": [ + 645, + 167, + 772, + 244 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/40821a79e64f48b8da64814d87a2c2c1d6b209c40f83b609db0e4879db63ce81.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 774, + 90, + 903, + 166 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/fa0cf77ca4b32b732a765bd36dd7bdfd3dcfd54c3d62ad0c1d21287968a7e519.jpg", + "image_caption": [ + "Ours" + ], + "image_footnote": [], + "bbox": [ + 774, + 167, + 903, + 244 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/c883cfd57486865bd20823dd356d172a367ac79998c92b08debea92727b3fbb1.jpg", + "image_caption": [ + "Figure 3. Process of physical implementation." + ], + "image_footnote": [], + "bbox": [ + 517, + 257, + 901, + 378 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/c4eee172f1a63caf416d7705ac5c5a4cb3db0beed313dcc8d3253a8746fc10d4.jpg", + "image_caption": [ + "(a) ViPT patch", + "Figure 4. Visualization of generated patches." + ], + "image_footnote": [], + "bbox": [ + 517, + 422, + 640, + 483 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/1f1cba12750e6777c7d348e11705f7579722488aeb97c43eb0c5ba7f185d8c9c.jpg", + "image_caption": [ + "(b) BAT patch" + ], + "image_footnote": [], + "bbox": [ + 651, + 425, + 782, + 483 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/1a25403a0068d4992e805125a13cfebb745348e2e333de0c52b4c0685aebb6dc.jpg", + "image_caption": [ + "(c) SDSTrack patch" + ], + "image_footnote": [], + "bbox": [ + 797, + 422, + 903, + 483 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "calculate the Euclidean distance of the center between the predicted bounding box and ground truth box in both RGB and thermal modalities, using the smaller distance to represent the precision. RGBT234 provides 234 pairs of RGB and thermal video, with about $234\\mathrm{K}$ frames in total and a maximum of 8K per sequence. LasHeR is comprised of 1224 visible and thermal video pairs, totaling over $730\\mathrm{K}$ frame pairs. Since the tracking performance on the background is not of interest, LasHeR performs strict alignment of the object area, allowing the object to share the same ground truth of the bounding box in both visible and thermal modalities. Therefore, we use PR and SR as evaluation metrics.", + "bbox": [ + 511, + 551, + 906, + 748 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4.1.2. 
Victimized Trackers and Comparison Attackers", + "text_level": 1, + "bbox": [ + 511, + 758, + 892, + 773 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "We select several state-of-the-art trackers as targets for our attack, including ViPT [29], BAT [3], and SDSTrack [9]. To demonstrate the challenges in exploiting vulnerabilities in RGB-T trackers, we use a patch composed of random noise as a baseline for comparison, emphasizing the need for meticulous exploration. Furthermore, we compare the performance of our proposed ACAttack with the representative attack method MTD [8], which is specifically designed", + "bbox": [ + 511, + 779, + 906, + 901 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "22103", + "bbox": [ + 478, + 944, + 517, + 955 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/da6a446625886f71756aead6a4e9fa99c2780b36928a437cb31b58f954fcae24.jpg", + "image_caption": [ + "(a) ViPT on RGBT234" + ], + "image_footnote": [], + "bbox": [ + 91, + 87, + 357, + 195 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/1a22cf3b0944a5bec6d422a63b67353ae6f7db902f57bf12db9a0a3abede9e4d.jpg", + "image_caption": [ + "(b) BAT on RGBT234" + ], + "image_footnote": [], + "bbox": [ + 364, + 88, + 630, + 194 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/011457f325e1dc34bddb5ac62bfa101a76791122d58e04157c92d3d0388f4432.jpg", + "image_caption": [ + "(c) SDS on RGBT234" + ], + "image_footnote": [], + "bbox": [ + 638, + 88, + 903, + 195 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/e522ecd422a23f3740a308ff9c1a063e2cb028f9532ad41a9413cd641d252e86.jpg", + "image_caption": [ + "Figure 5. Quantitative comparison of tracking performance on the RGBT234 dataset. The tracking performance of ViPT, BAT, and SDSTrack trackers is reported, including the original performance without attacks and the performance under attacks. Lower tracking metrics PR and SR represent better attack. Please zoom in for a better view.", + "Figure 6. Qualitative comparison of tracking performance on the RGBT234 dataset." + ], + "image_footnote": [], + "bbox": [ + 93, + 282, + 903, + 383 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "for RGB trackers, highlighting the advantages of our approach in the multi-modal setting.", + "bbox": [ + 89, + 434, + 482, + 464 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.1.3. Implementation Details", + "text_level": 1, + "bbox": [ + 89, + 473, + 302, + 488 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "The multi-spectral video in the physical domain is captured by a DJI Mavic 3T UAV equipped with thermal and RGB cameras, and the video frame rate is 30 fps. The hyperparameters in adaptive iteration, $\\xi$ and $\\zeta$ , are set to 9 and 0.02, respectively. The number of training epochs in stage1 is set as $M_{\\text{stage1}} = 180$ . Experiments are conducted on the RTX 3090 GPU with PyTorch.", + "bbox": [ + 89, + 491, + 483, + 583 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.2. Comparisons in the Digital Domain", + "text_level": 1, + "bbox": [ + 89, + 593, + 398, + 609 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "We first validate the attack effectiveness of our ACAttack in the digital domain. It is important to note that we only train on the RGBT234 dataset and generate multi-modal patches $\\{p_{vi}, p_{ir}\\}$ . As shown in Fig. 
4, the RGB patch exhibits color and texture, while the thermal patch has an irregular shape, which aligns with the imaging characteristics of each modality. Subsequently, the patches $\\{p_{vi}, p_{ir}\\}$ generated on RGBT234 are directly applied to the LasHeR dataset to verify their generalization.", + "bbox": [ + 89, + 614, + 483, + 752 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.2.1. Quantitative Evaluation", + "text_level": 1, + "bbox": [ + 89, + 760, + 303, + 775 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Fig. 5 illustrates a quantitative comparison on the RGBT234 dataset. The results clearly show that, under our attack, the tracking performance of existing state-of-the-art trackers suffers a significant degradation compared to clean tracking conditions. In contrast, random noise only leads to a modest decline in PR and SR, emphasizing that exploiting tracker vulnerabilities goes beyond the simplicity of random noise and requires a more sophisticated, optimized approach.", + "bbox": [ + 89, + 779, + 483, + 901 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Additionally, the performance drop observed with MTD is smaller than that of our ACAttack, suggesting that attack methods designed specifically for RGB trackers may not effectively mitigate the feature enhancement resulting from RGB-T coupling. On the other hand, our ACAttack achieves substantial attack success. For instance, against ViPT, ACAttack reduces PR from 0.835 to 0.621 and SR from 0.617 to 0.417. Similarly, for SDSTrack, it lowers PR from 0.848 to 0.616 and SR from 0.625 to 0.426. The substantial performance drops suggest that our ACAttack succeeds in keeping the predicted bounding box far away from the actual object, which will be further confirmed in subsequent qualitative results.", + "bbox": [ + 511, + 434, + 906, + 630 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.2.2. Qualitative Evaluation", + "text_level": 1, + "bbox": [ + 511, + 638, + 718, + 652 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "As shown in Fig. 6, we present the tracking results of BAT and SDSTrack. The clean trackers perform exceptionally well in maintaining precise tracking, while our attack leads to a significant decline in tracking performance. This degradation can be attributed to our progressive generation framework, which iteratively weakens the tracker's deep semantic attention on modalities with high commonality by decoupling multi-modal responses.", + "bbox": [ + 511, + 657, + 906, + 777 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.3. Generalization Evaluation", + "text_level": 1, + "bbox": [ + 511, + 787, + 751, + 801 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "We conduct generalization experiments on the LasHeR dataset, with quantitative and qualitative results shown in Fig. 7 and Fig. 8, respectively. Compared to random noise and MTD, our ACAttack leads to a significant drop in tracking performance across all trackers, even without training on LasHeR. 
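This cross-dataset transfer indicates that the patches exploit modality-level weaknesses of the trackers rather than statistics specific to the training data. 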
Additionally, we present the IoU plots for both", + "bbox": [ + 511, + 809, + 906, + 900 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "22104", + "bbox": [ + 478, + 944, + 517, + 957 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/28049c3409491173fd313b73b9d8f4d6052356dd6d98451600c8e1c68fb1c206.jpg", + "image_caption": [ + "(a) ViPT on LasHeR" + ], + "image_footnote": [], + "bbox": [ + 96, + 88, + 354, + 191 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/eac88dcb4a67c9b330ed788edd168ef45fb74938a144720157eb3a4c466bc779.jpg", + "image_caption": [ + "(b) BAT on LasHeR" + ], + "image_footnote": [], + "bbox": [ + 369, + 88, + 630, + 191 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/441f9cb12578aae2668934ce3ac88e833dca1fd762f2b926d0004f5b281ad12a.jpg", + "image_caption": [ + "(c) SDSTrack on LasHeR" + ], + "image_footnote": [], + "bbox": [ + 643, + 88, + 901, + 191 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/0ae041af7cf0486b92cdd41c3dd040b2df52ce10d7f53dd3fa1e52eca829af99.jpg", + "image_caption": [ + "Figure 7. Quantitative comparison of tracking performance on the LasHeR dataset. The tracking performance of ViPT, BAT, and SDSTrack trackers is reported, including the original performance without attacks and the performance under attacks. Lower tracking metrics PR and SR represent better attack. Please zoom in for a better view." + ], + "image_footnote": [], + "bbox": [ + 93, + 277, + 903, + 375 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/323f39373e80e2cf23c802d775e4e96c7945d673ba0d02d40dec1231ad08dfb8.jpg", + "image_caption": [ + "Figure 8. Qualitative comparison of tracking performance on the LasHeR dataset.", + "Figure 9. Qualitative comparison of tracking performance on the LasHeR dataset. The blue and red lines represent the IoU variation over frames of the predicted boxes under the clean trackers and the victimized trackers, respectively." + ], + "image_footnote": [], + "bbox": [ + 94, + 422, + 480, + 601 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "clean and attacked tracking results, as shown in Fig. 9. It is clear that our ACAttack can maintain a sustained attack over extended periods. Due to the existence of our adaptive attack strategy and the modal balance interference loss, the response value of the tracker for the real target is reduced, and then the tracker easily deviates from the original target and is attracted by similar targets.", + "bbox": [ + 89, + 698, + 483, + 805 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.4. Application in the Physical Domain", + "text_level": 1, + "bbox": [ + 89, + 816, + 397, + 832 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "After having verified our adversarial patches in digital scenes, we also extend experiments to demonstrate their efficacy in the physical domain. We directly apply the patches trained in the digital domain to the real world and use aerogel", + "bbox": [ + 89, + 839, + 483, + 900 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "and paper to make thermal and RGB patches for deployment on pedestrians, respectively. A dual-spectral camera in DJI Mavic 3T is used for video capture. Thirty sets of videos of different scenes are taken as test samples. The tracking results of the test are shown in Fig. 10. 
It can be seen that the tracking prediction bounding box is enlarged and cannot be accurately positioned due to the interference of the multi-modal adversarial patch. Specifically, the optimization of spatio-temporal joint loss makes the patch learn the effect of expanding the tracker's prediction box. Therefore, in the physical world, the tracker will not be able to accurately locate the target after being affected by the adversarial patch.", + "bbox": [ + 511, + 425, + 906, + 621 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.5. Ablation Studies", + "text_level": 1, + "bbox": [ + 511, + 633, + 678, + 648 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "We conduct ablation studies to assess the effectiveness of our unique design and parameter configuration, including: (I) loss function, (II) parameter K, (III) iteration mode, and (IV) applied modal. The ablation studies are performed on the RGBT234 dataset against ViPT, with quantitative results presented in Table 1.", + "bbox": [ + 511, + 657, + 906, + 748 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.5.1. Loss Function", + "text_level": 1, + "bbox": [ + 511, + 758, + 658, + 773 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "The loss $L_{st}$ interferes with the tracker from both temporal and spatial dimensions, while $L_{mi}$ is used to disrupt the tracker's semantic perception. To demonstrate their effectiveness, we remove each of them individually, with the results shown in Table 1. In the absence of $L_{st}$ or $L_{mi}$ , the attack performance weakens, demonstrating their role in diminishing the enhanced target localization accuracy achieved through multi-modal interaction.", + "bbox": [ + 511, + 779, + 906, + 900 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "22105", + "bbox": [ + 478, + 944, + 517, + 955 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/fa1865e09e72f7edb94a1940891936ea2867a523c0a89c624cf6ced42d48b9ec.jpg", + "image_caption": [ + "Figure 10. Practical application in the physical domain." + ], + "image_footnote": [], + "bbox": [ + 109, + 89, + 885, + 231 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/5729cc5da5c30a0b47ad1661168dbf6b7c669fda94e0cca1e5e547f19221525d.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
<table><tr><td rowspan=2>Metric</td><td rowspan=2>ViPT</td><td colspan=2>Config. I: loss function</td><td colspan=2>Config. II: parameter K</td><td colspan=2>Config. III: iteration mode</td><td colspan=2>Config. IV: applied modal</td><td rowspan=2>Ours</td></tr><tr><td>w/o Lst</td><td>w/o Lmi</td><td>K = 0</td><td>K = 9</td><td>cross</td><td>combine</td><td>Only RGB</td><td>Only TIR</td></tr><tr><td>PR</td><td>0.835</td><td>0.709</td><td>0.735</td><td>0.672</td><td>0.651</td><td>0.645</td><td>0.703</td><td>0.691</td><td>0.669</td><td>0.621</td></tr><tr><td>SR</td><td>0.617</td><td>0.486</td><td>0.505</td><td>0.450</td><td>0.425</td><td>0.428</td><td>0.482</td><td>0.482</td><td>0.462</td><td>0.417</td></tr></table>", + "bbox": [ + 99, + 268, + 893, + 337 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Table 1. Quantitative comparison of ablation studies, which are performed on the RGBT234 dataset against the ViPT tracker.", + "bbox": [ + 132, + 344, + 862, + 357 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "4.5.2. Parameter K", + "text_level": 1, + "bbox": [ + 89, + 385, + 227, + 398 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "In our progressive attack framework, we first employ projected gradient descent to identify K sets of coarse adversarial examples with effective attack performance. In order to verify its effectiveness, we grow the number of coarse adversarial samples K from 0 to 9 and 18. As shown in Table 1, as K increases from 0 to 9 and 18, the tracker's PR and SR consistently decrease. This indicates that such coarse-grained adversarial examples can effectively narrow the search space for refined attacks, thus facilitating a more effective attack. Specifically, this progressive method for finding adversarial examples prioritizes identifying multiple sets of coarse adversarial representations from a broad spectrum of noise. Subsequently, multi-modal patch generation refines the adversarial details to produce the final adversarial patch, leveraging numerous samples that contain adversarial information. Consequently, this approach results in an enhancement in performance.", + "bbox": [ + 89, + 404, + 483, + 661 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "4.5.3. Iteration Mode", + "text_level": 1, + "bbox": [ + 89, + 670, + 243, + 683 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "One of the key contributions of this paper is the adaptive iterative strategy for attacking the RGB-T tracker. To demonstrate the effectiveness of the adaptive strategy, we conduct ablation experiments using the iterative strategy. The alternating iteration strategy and the joint optimization strategy are selected for the comparison test. The former alternately optimizes the adversarial texture network and the adversarial shape network, while the latter simultaneously propagates the gradient flow to both networks. As shown in Table 1, our adaptive iteration approach can more effectively identify model vulnerabilities and generate more aggressive adversarial patches. Specifically, according to the contribution degree, our strategy can weaken deep semantic attention and break the balance of modality in the tracker.", + "bbox": [ + 89, + 688, + 483, + 900 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "4.5.4. Applied Modal", + "text_level": 1, + "bbox": [ + 511, + 385, + 663, + 400 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "To compare the attack performance of joint multi-modal patches against single-modal patches, we conduct an ablation on the applied modality. Multi-modal patches $\\{p_{vi}, p_{ir}\\}$ are generated to simultaneously disrupt both RGB and thermal modalities. As shown in Table 1, we use only one of these patches in an ablation setup. The adversarial patch of a single modal produces a certain attack effect and makes the tracker confused. Evidently, our multi-modal patch achieves the best attack performance, underscoring the necessity of designing joint multi-modal attacks for RGB-T trackers.", + "bbox": [ + 511, + 404, + 906, + 569 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "5. 
Conclusion", + "text_level": 1, + "bbox": [ + 511, + 580, + 633, + 597 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "In this work, we present a pioneering framework for adversarial attacks on RGB-T multi-modal trackers by introducing an adaptive cross-attack mechanism through multimodal response decoupling. Our approach leverages a modal-aware adaptive attack strategy and introduces a novel modal disturbance loss and a spatio-temporal joint attack loss to progressively impair the tracker's capability to perceive the target. The shared adversarial shape design also enhances our method's practicality, allowing seamless deployment of multi-modal patches in the real world. Experiments across digital and physical domains confirm the robustness and effectiveness of our approach in evading RGB-T trackers, highlighting the potential and significance of adaptive, multi-modal adversarial attacks in advancing the understanding of tracker vulnerabilities.", + "bbox": [ + 511, + 607, + 906, + 834 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Acknowledgments", + "text_level": 1, + "bbox": [ + 511, + 845, + 671, + 861 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "This work was supported by National Natural Science Foundation of China (62276192).", + "bbox": [ + 511, + 869, + 905, + 900 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "22106", + "bbox": [ + 478, + 945, + 519, + 955 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 91, + 89, + 187, + 104 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[1] Luca Bertinetto, Jack Valmadre, Joao F Henriques, Andrea Vedaldi, and Philip HS Torr. Fully-convolutional siamese networks for object tracking. In Proceedings of the European Conference on Computer Vision Workshops, pages 850-865, 2016. 1", + "[2] Goutam Bhat, Martin Danelljan, Luc Van Gool, and Radu Timofte. Learning discriminative model prediction for tracking. In Proceedings of the IEEE International Conference on Computer Vision, pages 6182-6191, 2019. 2", + "[3] Bing Cao, Junliang Guo, Pengfei Zhu, and Qinghua Hu. Bidirectional adapter for multimodal tracking. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 927-935, 2024. 2, 5", + "[4] Xuesong Chen, Xiyu Yan, Feng Zheng, Yong Jiang, Shu-Tao Xia, Yong Zhao, and Rongrong Ji. One-shot adversarial attacks on visual tracking with dual attention. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 10176-10185, 2020. 2", + "[5] Xuesong Chen, Xiyu Yan, Feng Zheng, Yong Jiang, Shu-Tao Xia, Yong Zhao, and Rongrong Ji. One-shot adversarial attacks on visual tracking with dual attention. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 10176-10185, 2020. 2", + "[6] Xuesong Chen, Canmiao Fu, Feng Zheng, Yong Zhao, Hongsheng Li, Ping Luo, and Guo-Jun Qi. A unified multi-scenario attacking network for visual object tracking. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 1097-1104, 2021. 1", + "[7] Martin Danelljan, Luc Van Gool, and Radu Timofte. Probabilistic regression for visual tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7183-7192, 2020. 2", + "[8] Li Ding, Yongwei Wang, Kaiwen Yuan, Minyang Jiang, Ping Wang, Hua Huang, and Z Jane Wang. Towards universal physical attacks on single object tracking. 
In Proceedings of the AAAI Conference on Artificial Intelligence, pages 1236-1245, 2021. 2, 5", + "[9] Xiaojun Hou, Jiazheng Xing, Yijie Qian, Yaowei Guo, Shuo Xin, Junhao Chen, Kai Tang, Mengmeng Wang, Zhengkai Jiang, Liang Liu, et al. Sdstrack: Self-distillation symmetric adapter learning for multi-modal visual object tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 26551-26561, 2024. 2, 5", + "[10] Shuai Jia, Chao Ma, Yibing Song, and Xiaokang Yang. Robust tracking against adversarial attacks. In Proceedings of the European Conference on Computer Vision, pages 69-84, 2020. 1", + "[11] Shuai Jia, Yibing Song, Chao Ma, and Xiaokang Yang. Iou attack: Towards temporally coherent black-box adversarial attack for visual object tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6709-6718, 2021. 1", + "[12] Chenglong Li, Xinyan Liang, Yijuan Lu, Nan Zhao, and Jin Tang. Rgb-t object tracking: Benchmark and baseline. Pattern Recognition, 96:106977, 2019. 5" + ], + "bbox": [ + 93, + 114, + 483, + 900 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[13] Chenglong Li, Wanlin Xue, Yaqing Jia, Zhichen Qu, Bin Luo, Jin Tang, and Dengdi Sun. Lasher: A large-scale high-diversity benchmark for rgbt tracking. IEEE Transactions on Image Processing, 31:392-404, 2022. 5", + "[14] Liting Lin, Heng Fan, Zhipeng Zhang, Yong Xu, and Haibin Ling. Swintrack: A simple and strong baseline for transformer tracking. Advances in Neural Information Processing Systems, 35:16743-16754, 2022. 2", + "[15] Siao Liu, Zhaoyu Chen, Wei Li, Jiwei Zhu, Jiafeng Wang, Wenqiang Zhang, and Zhongxue Gan. Efficient universal shuffle attack for visual object tracking. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, pages 2739-2743, 2022. 2", + "[16] Andong Lu, Chenglong Li, Yuqing Yan, Jin Tang, and Bin Luo. Rgbt tracking via multi-adapter network with hierarchical divergence loss. IEEE Transactions on Image Processing, 30:5613-5625, 2021. 1", + "[17] Andong Lu, Cun Qian, Chenglong Li, Jin Tang, and Liang Wang. Duality-gated mutual condition network for rgbt tracking. IEEE Transactions on Neural Networks and Learning Systems, 36(3):4118-4131, 2025. 2", + "[18] Hyeonseob Nam and Bohyung Han. Learning multi-domain convolutional neural networks for visual tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4293-4302, 2016. 2", + "[19] Wuqiang Qi, Zhuoqun Zhang, and Zhishe Wang. Dmfuse: Diffusion model guided cross-attention learning for infrared and visible image fusion. Chinese Journal of Information Fusion, 1(3):226-241, 2024. 1", + "[20] Linfeng Tang, Hao Zhang, Han Xu, and Jiayi Ma. Deep learning-based image fusion: A survey. Journal of Image and Graphics, 28(1):3-36, 2023. 1", + "[21] Ning Wang, Wengang Zhou, Yibing Song, Chao Ma, Wei Liu, and Houqiang Li. Unsupervised deep representation learning for real-time tracking. International Journal of Computer Vision, 129(2):400-418, 2021. 2", + "[22] Yun Xiao, Mengmeng Yang, Chenglong Li, Lei Liu, and Jin Tang. Attribute-based progressive fusion network for rgbt tracking. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 2831-2838, 2022. 5", + "[23] Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. 
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1492-1500, 2017. 4", + "[24] Xinyu Xie, Yawen Cui, Tao Tan, Xubin Zheng, and Zitong Yu. Fusionmamba: Dynamic feature enhancement for multimodal image fusion with mamba. Visual Intelligence, 2(1): 37, 2024. 1", + "[25] Bin Yan, Dong Wang, Huchuan Lu, and Xiaoyun Yang. Cooling-shrinking attack: Blinding the tracker with imperceptible noises. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 990-999, 2020. 2, 4", + "[26] Bin Yan, Houwen Peng, Jianlong Fu, Dong Wang, and Huchuan Lu. Learning spatio-temporal transformer for visual tracking. In Proceedings of the IEEE International Conference on Computer Vision, pages 10448-10457, 2021. 1" + ], + "bbox": [ + 516, + 92, + 906, + 900 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "22107", + "bbox": [ + 478, + 944, + 517, + 955 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[27] Fan Zhang, Hanwei Peng, Lingli Yu, Yuqian Zhao, and Baifan Chen. Dual-modality space-time memory network for rgbt tracking. IEEE Transactions on Instrumentation and Measurement, 72:1-12, 2023. 2", + "[28] Tianlu Zhang, Hongyuan Guo, Qiang Jiao, Qiang Zhang, and Jungong Han. Efficient rgb-t tracking via cross-modality distillation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5404-5413, 2023. 2", + "[29] Jiawen Zhu, Simiao Lai, Xin Chen, Dong Wang, and Huchuan Lu. Visual prompt multi-modal tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9516-9526, 2023. 2, 5" + ], + "bbox": [ + 91, + 92, + 480, + 273 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "22108", + "bbox": [ + 478, + 945, + 517, + 955 + ], + "page_idx": 9 + } +] \ No newline at end of file diff --git a/2025/ACAttack_ Adaptive Cross Attacking RGB-T Tracker via Multi-Modal Response Decoupling/6f190354-412a-4587-a766-f43d48cafb75_model.json b/2025/ACAttack_ Adaptive Cross Attacking RGB-T Tracker via Multi-Modal Response Decoupling/6f190354-412a-4587-a766-f43d48cafb75_model.json new file mode 100644 index 0000000000000000000000000000000000000000..771de9867ad74542174ff1ec8b3da7e030158903 --- /dev/null +++ b/2025/ACAttack_ Adaptive Cross Attacking RGB-T Tracker via Multi-Modal Response Decoupling/6f190354-412a-4587-a766-f43d48cafb75_model.json @@ -0,0 +1,2123 @@ +[ + [ + { + "type": "header", + "bbox": [ + 0.107, + 0.003, + 0.182, + 0.043 + ], + "angle": 0, + "content": "CVF" + }, + { + "type": "header", + "bbox": [ + 0.237, + 0.001, + 0.812, + 0.047 + ], + "angle": 0, + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." 
+ }, + { + "type": "title", + "bbox": [ + 0.209, + 0.131, + 0.79, + 0.177 + ], + "angle": 0, + "content": "ACAttack: Adaptive Cross Attacking RGB-T Tracker via Multi-Modal Response Decoupling" + }, + { + "type": "text", + "bbox": [ + 0.281, + 0.204, + 0.733, + 0.24 + ], + "angle": 0, + "content": "Xinyu Xiang Qinglong Yan Hao Zhang* Jiayi Ma* Wuhan University, China" + }, + { + "type": "text", + "bbox": [ + 0.1, + 0.242, + 0.908, + 0.258 + ], + "angle": 0, + "content": "xiangxinyu@whu.edu.cn, qinglong_yan@whu.edu.cn, zhpersonalbox@gmail.com, jyma2010@gmail.com" + }, + { + "type": "title", + "bbox": [ + 0.248, + 0.292, + 0.327, + 0.308 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.324, + 0.486, + 0.702 + ], + "angle": 0, + "content": "The research on adversarial attacks against trackers primarily concentrates on the RGB modality, whereas the methodology for attacking RGB-T multi-modal trackers has seldom been explored so far. This work represents an innovative attempt to develop an adaptive cross attack framework via multi-modal response decoupling, generating multi-modal adversarial patches to evade RGB-T trackers. Specifically, a modal-aware adaptive attack strategy is introduced to weaken the modality with high common information contribution alternately and iteratively, achieving the modal decoupling attack. In order to perturb the judgment of the modal balance mechanism in the tracker, we design a modal disturbance loss to increase the distance of the response map of the single-modal adversarial samples in the tracker. Besides, we also propose a novel spatio-temporal joint attack loss to progressively deteriorate the tracker's perception of the target. Moreover, the design of the shared adversarial shape enables the generated multi-modal adversarial patches to be readily deployed in real-world scenarios, effectively reducing the interference of the patch posting process on the shape attack of the infrared adversarial layer. Extensive digital and physical domain experiments demonstrate the effectiveness of our multi-modal adversarial patch attack. Our code is available at https://github.com/Xinyu-Xiang/ACAttack." + }, + { + "type": "title", + "bbox": [ + 0.092, + 0.731, + 0.222, + 0.746 + ], + "angle": 0, + "content": "1. Introduction" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.756, + 0.484, + 0.877 + ], + "angle": 0, + "content": "The adversarial attack on visual object tracking (VOT) [1, 26] aims to mislead the prediction results of the tracker through the generated adversarial disturbance, find the model vulnerabilities, and then promote the security of the tracking model in real-life. Single-modal tracking attack methods have been extensively studied, but with the wide application of multi-modal devices [19, 20, 24], multimodal trackers are widely used in safety-critical real-world" + }, + { + "type": "image", + "bbox": [ + 0.517, + 0.291, + 0.906, + 0.478 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.512, + 0.487, + 0.907, + 0.571 + ], + "angle": 0, + "content": "Figure 1. Our attack strategy against RGB-T trackers. An adaptive attack strategy, sensitive to modality, is introduced to alternately and iteratively suppress the modality with a high contribution of shared information. Additionally, a modal disturbance loss is crafted to enlarge the response map distance for single-modal adversarial samples within the tracker." 
+ }, + { + "type": "text", + "bbox": [ + 0.512, + 0.599, + 0.902, + 0.614 + ], + "angle": 0, + "content": "fields such as autonomous driving and urban security [16]." + }, + { + "type": "text", + "bbox": [ + 0.511, + 0.615, + 0.907, + 0.9 + ], + "angle": 0, + "content": "To address the urgent need to explore the security of trackers, adversarial attack techniques for trackers have emerged in rapid succession, including traditional gradient-based attack approaches and deep-network-based attacks. The former methods [10, 11] utilize hand-crafted parameters and apply many times of gradient ascent to maximize an adversarial loss function for misguiding deep networks. Although it can achieve certain attack effects for specific types of trackers, it is challenging to comprehensively explore and attack the potential vulnerabilities of the different trackers due to the limitations of inflexible pattern design. Nonetheless, the latter one [6] applies tremendous data to train an adversarial patches-generator including flexible architectures and optimization strategies to better automatically search for model weaknesses and realize tracker attacks. Therefore, in comparison with a traditional gradient-based attack approach, deep-network-based paradigm can more automatically and flexibly excavate the security issues within the tracker." + }, + { + "type": "page_footnote", + "bbox": [ + 0.11, + 0.888, + 0.242, + 0.9 + ], + "angle": 0, + "content": "*Corresponding authors." + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.519, + 0.958 + ], + "angle": 0, + "content": "22099" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.094, + 0.092, + 0.482, + 0.468 + ], + "angle": 0, + "content": "Although prior efforts for adversarial attack methods are effective in interference trackers, several challenges still need to be addressed. Notably, existing adversarial attack methods [5, 15, 25] on tracking are designed for RGB modality, whereas the methodologies for attacking RGB-T multi-modal trackers are less explored so far. Considering the widespread deployment of multi-modal tracking technology [17, 27, 28] in several safety-critical areas, it is urgent to explore and implement adversarial attacks of multimodal tracking to understand the potential vulnerabilities of the trackers. However, as shown in Fig. 1, the unique modal coupling and structural design of RGB-T trackers make it a great challenge to successfully find model vulnerabilities. Firstly, due to the modal equilibrium mechanism and coupled multi-modal information, it is difficult to successfully jam the RGB-T tracking model itself. Specifically, the modal balancing strategy in the multi-modal tracker can effectively prevent the attack of adversarial perturbation in a single modal. Secondly, the coupling of multi-modal information can effectively weaken the attacks against the consensus region of the target. Thirdly, the deployment of patches in the physical world is also challenging because the stacked placement of multi-modal patches has a probability of compromising the expression of infrared adversarial shapes, reducing their synergy performance." + }, + { + "type": "text", + "bbox": [ + 0.094, + 0.479, + 0.482, + 0.899 + ], + "angle": 0, + "content": "Considering these challenges, we propose ACAttack, an adaptive cross attack framework via multi-modal response decoupling. It aims to generate multi-modal adversarial patches to evade RGB-T trackers in both digital and physical domains. 
Specifically, this framework can gradually and adaptively optimize, discover plenty of rough adversarial samples, and then map them to the high-dimensional adversarial space of different modalities according to the modal response contribution factor, forming multi-modal adversarial patches. Secondly, a modal-aware adaptive attack strategy is introduced to weaken tracker's deep semantic attention to the modality with high common information contribution according to the contribution degree of modal response alternately and iteratively, achieving the modal decoupling attack. When the contributions of two modalities are similar, we design a modal disturbance loss to search the modal imbalance vulnerabilities of the tracker, expand the distance of the response map of the single-modal adversarial samples in the tracker, and perturb the judgment of the balance modal in the tracker. We also design a spatio-temporal joint attack loss to build progressively enlarged pseudo-GT between consecutive frames, which progressively deteriorates the tracker's perception of the target. Thirdly, the design of the shared adversarial shape is deployed to eliminate the interference of visible patches on the expression of infrared adversarial shapes. After the shape is shared, it can not only reduce the consumption of the adversarial shape's inter-modal attack ability but also realize" + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.093, + 0.818, + 0.106 + ], + "angle": 0, + "content": "attacks other than texture in the visible modal." + }, + { + "type": "text", + "bbox": [ + 0.536, + 0.108, + 0.866, + 0.122 + ], + "angle": 0, + "content": "In summary, we make the following contributions:" + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.127, + 0.904, + 0.185 + ], + "angle": 0, + "content": "- We make an innovative attempt to propose an adaptive cross attack framework via multi-modal response decoupling. It can generate multi-modal adversarial patches to mislead RGB-T trackers effectively." + }, + { + "type": "text", + "bbox": [ + 0.517, + 0.187, + 0.904, + 0.261 + ], + "angle": 0, + "content": "- We develop a novel modal attack flow, in which modal-aware adaptive attack strategy and modal attack constraints alternately disturb the modes with high contribution to achieve modal decoupling and destroy modal balance mechanism of tracker, respectively." + }, + { + "type": "text", + "bbox": [ + 0.517, + 0.262, + 0.904, + 0.321 + ], + "angle": 0, + "content": "- We design the shape-shared stack strategy to linkage the visible and infrared adversarial shapes, reducing the attack consumption of multi-modal patches mutual deployment in physical scenarios." + }, + { + "type": "text", + "bbox": [ + 0.517, + 0.322, + 0.904, + 0.367 + ], + "angle": 0, + "content": "- Experimental results show that multi-modal patches can efficiently fool RGB-T trackers in standard RGB-T tracking datasets and real scenes." + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.127, + 0.904, + 0.367 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.518, + 0.383, + 0.653, + 0.397 + ], + "angle": 0, + "content": "2. Related Work" + }, + { + "type": "title", + "bbox": [ + 0.518, + 0.408, + 0.727, + 0.423 + ], + "angle": 0, + "content": "2.1. Visual Object Tracking" + }, + { + "type": "text", + "bbox": [ + 0.516, + 0.43, + 0.905, + 0.761 + ], + "angle": 0, + "content": "Given the tracked object in the first frame, object tracking aims to recognize and locate the object in subsequent frames. 
Many RGB tracking methods [2, 7, 14, 18, 21] have been proposed and achieved commendable tracking performance. However, RGB sensors struggle to capture objects effectively under challenging conditions such as occlusion and low light, limiting the performance of RGB trackers. To address this, the RGB-T tracking paradigm is introduced, which is not restricted to a single RGB modality but instead integrates the complementary information from both RGB and thermal modalities. This fusion enables more robust tracking capabilities. ViPT [29] introduces a vision prompt tracking framework that leverages the foundational model with strong representation capabilities, enabling interaction between the thermal and RGB modalities through a modality-complementing prompter. BAT [3] proposes a universal bidirectional adapter, which enables mutual prompt between the thermal and RGB modalities and further improves tracking performance. SDSTrack [9] designs a complementary masked patch distillation strategy based on self-distillation learning, which enhances the tracking robustness in extreme weather." + }, + { + "type": "title", + "bbox": [ + 0.518, + 0.774, + 0.702, + 0.787 + ], + "angle": 0, + "content": "2.2. Adversarial Attacks" + }, + { + "type": "text", + "bbox": [ + 0.516, + 0.796, + 0.904, + 0.9 + ], + "angle": 0, + "content": "Currently, adversarial attacks in the tracking task primarily target RGB trackers. For instance, APYVOT [4] proposes an optimization objective function with a dual-attention mechanism to generate perturbations, disrupting tracking by interfering solely with the initial frame. MTD [8] introduces a maximum textural discrepancy loss function that misleads the visual trackers by decorrelating the template" + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.946, + 0.518, + 0.957 + ], + "angle": 0, + "content": "22100" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.099, + 0.093, + 0.907, + 0.26 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.099, + 0.263, + 0.907, + 0.389 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.347, + 0.399, + 0.651, + 0.413 + ], + "angle": 0, + "content": "Figure 2. The overall framework of our ACAttack." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.44, + 0.484, + 0.532 + ], + "angle": 0, + "content": "and search frame at hierarchical feature scales. These methods, however, fail to disrupt the significant feature enhancement resulting from the interactions between RGB and thermal modalities, which limits their effectiveness against RGB-T trackers. Therefore, it is essential to develop the attack strategy specifically designed for RGB-T tracking." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.549, + 0.226, + 0.566 + ], + "angle": 0, + "content": "3. Methodology" + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.575, + 0.469, + 0.59 + ], + "angle": 0, + "content": "3.1. Coarse-to-Fine Modality Attack Framework" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.598, + 0.484, + 0.822 + ], + "angle": 0, + "content": "With the help of the progressive modality information integration strategy, the multi-modal trackers gradually strengthen the common scene representation and target response, thereby achieving a robust tracking performance superior to that of the single-modal trackers. 
Consequently, a coarse-to-fine architecture named ACAttack is designed to progressively degrade the modality integration capability of RGB-T models, which can be divided into two stages. The overall architecture of our ACAttack is illustrated in Fig. 2 and Algorithm 1. First, we employ projected gradient descent (PGD) in stage1 to identify a set of adversarial examples \\(\\{p_i\\}_{i=1}^k\\) with sufficient aggressiveness, narrowing the search space for refined attacks and increasing the likelihood of discovering strong adversarial examples, formulated as:" + }, + { + "type": "equation", + "bbox": [ + 0.204, + 0.827, + 0.483, + 0.845 + ], + "angle": 0, + "content": "\\[\n\\left\\{p _ {i} \\right\\} _ {i = 1} ^ {k} = P G D \\left(p _ {i} ^ {\\text {i n i t}}\\right), \\tag {1}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.856, + 0.484, + 0.903 + ], + "angle": 0, + "content": "where \\( p_i^{init} \\) is randomly initialized with noise patches. The generated patches are subsequently loaded onto the visible image \\( I_{vi} \\) to form a visible adversarial sample \\( I_{vi}^{adv} \\). This" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.441, + 0.766, + 0.456 + ], + "angle": 0, + "content": "process can be formulated as follows:" + }, + { + "type": "equation", + "bbox": [ + 0.595, + 0.468, + 0.907, + 0.486 + ], + "angle": 0, + "content": "\\[\nI _ {v i} ^ {a d v} = p _ {i} \\odot M + I _ {v i} \\odot (1 - M), \\tag {2}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.498, + 0.907, + 0.575 + ], + "angle": 0, + "content": "where \\(M\\) is the binary mask for applying the adversarial patch, and \\(\\odot\\) represents the element-wise Hadamard product. The adversarial visible image \\(I_{vi}^{adv}\\), concatenated with the clean infrared image \\(I_{ir}\\), is sent to the RGB-T tracker \\(T(\\cdot)\\) to predict the final bounding box \\(Bbox_{pred}\\) of the target, which is expressed as:" + }, + { + "type": "equation", + "bbox": [ + 0.622, + 0.59, + 0.905, + 0.607 + ], + "angle": 0, + "content": "\\[\nB b o x _ {p r e d} = T \\left(I _ {v i} ^ {a d v}, I _ {i r}\\right). \\tag {3}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.616, + 0.906, + 0.66 + ], + "angle": 0, + "content": "We optimize this process by minimizing the conventional attack loss \\( L_{att} \\) relative to the center point, which is defined as:" + }, + { + "type": "equation", + "bbox": [ + 0.554, + 0.675, + 0.907, + 0.693 + ], + "angle": 0, + "content": "\\[\nL _ {a t t} = - \\left\\| \\left(C p (B b o x _ {p r e d}) - C p (B b o x _ {g t})\\right) \\right\\| _ {2} ^ {2}, \\tag {4}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.705, + 0.907, + 0.901 + ], + "angle": 0, + "content": "where \\( C_p \\) denotes the operator to obtain the center point of bounding box \\( Bbox \\). Specifically, when the attack loss \\( L_{att} \\) reaches a predefined threshold, the iteration is halted, and a rough adversarial sample is generated. This process is repeated \\( k \\) times to generate a set of \\( k \\) rough adversarial samples. Considering the different imaging principles of the infrared and visible modalities, the multi-modal patches are specially designed according to these differences. Specifically, the visible modality is mainly perturbed through adversarial texture, whereas texture is difficult to perceive in the infrared modality, so the adversarial shape is used for the attack instead. 
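To make Eqs. (1)-(4) concrete, the following is a minimal PyTorch-style sketch of one stage-1 PGD run. The tracker interface, the helper names, the step size, and the reading of the stopping threshold \( \xi \) are assumptions for illustration, not the authors' released code.

```python
import torch

def coarse_patch_attack(tracker, patch_init, I_vi, I_ir, M, center_gt,
                        alpha=0.01, xi=9.0, max_iter=180):
    # One stage-1 PGD run realizing Eqs. (1)-(4). `tracker` is assumed to
    # be a differentiable callable returning the predicted box center as a
    # tensor (cx, cy); `M` is the binary patch mask of Eq. (2).
    patch = patch_init.clone().requires_grad_(True)
    for _ in range(max_iter):
        # Eq. (2): composite the patch onto the visible frame via mask M.
        I_vi_adv = patch * M + I_vi * (1 - M)
        # Eq. (3): query the RGB-T tracker on the adversarial pair.
        center_pred = tracker(I_vi_adv, I_ir)
        # Eq. (4): negative squared center distance, so that minimizing
        # L_att pushes the predicted center away from the ground truth.
        L_att = -((center_pred - center_gt) ** 2).sum()
        if L_att.item() < -xi ** 2:
            break  # assumed reading: stop once the center error exceeds xi
        grad, = torch.autograd.grad(L_att, patch)
        with torch.no_grad():
            patch -= alpha * grad.sign()  # gradient step behind Eq. (1)
            patch.clamp_(0.0, 1.0)        # keep the patch a valid image
    return patch.detach()
```

Running this \( k \) times from different random initializations yields the rough set \( \{p_i\}_{i=1}^k \) of Eq. (1).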
Subsequently, the set of adversarial samples generated in the coarse attack stage (stage1) is fed into the" + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.518, + 0.957 + ], + "angle": 0, + "content": "22101" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.092, + 0.485, + 0.243 + ], + "angle": 0, + "content": "subsequent fine attack process (stage2) for further refinement, resulting in the generation of multi-modal adversarial patches with strong attack performance. Finally, through continuous iterative optimization, the multi-modal patches will share the same attack shape, while the visible patch will also possess adversarial texture to confuse the tracker. The fine-grained attack phase targets the modality of the multi-modal tracker and consists of modal decoupling attacks and modal balance interference, which will be detailed in the subsequent sections." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.251, + 0.327, + 0.267 + ], + "angle": 0, + "content": "3.2. Modal Decoupling Attack" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.273, + 0.485, + 0.485 + ], + "angle": 0, + "content": "The RGB-T tracker implicitly couples the contributions of the two modalities, thereby enhancing tracking accuracy. Given the significant role of modal contribution in the tracker, we propose a modal decoupling attack to adaptively diminish the influence of advantageous modalities. Specifically, we use the coarse adversarial samples from stage1 as input for stage2, feeding them simultaneously into the adversarial texture generation network \\( G_{tex}^{Adv} \\) and the adversarial shape generation network \\( G_{shape}^{Adv} \\). For attacking the infrared modality, the rough adversarial sample set from the first stage is encoded into \\( r \\) dimensions via continuous downsampling and an MLP, controlling the adversarial shape. The infrared patch \\( p_{ir} \\) generation process can be expressed as follows:" + }, + { + "type": "equation", + "bbox": [ + 0.204, + 0.493, + 0.483, + 0.514 + ], + "angle": 0, + "content": "\\[\np _ {i r} = G _ {s h a p e} ^ {A d v} \\left(\\left\\{p _ {i} \\right\\} _ {i = 1} ^ {k}\\right). \\tag {5}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.522, + 0.484, + 0.567 + ], + "angle": 0, + "content": "The adversarial texture generation network generates adversarial textures to attack the visible modality using residual connections and upsampling [23], which is defined as:" + }, + { + "type": "equation", + "bbox": [ + 0.164, + 0.579, + 0.483, + 0.599 + ], + "angle": 0, + "content": "\\[\np _ {v i} = \\left(1 - p _ {i r}\\right) \\odot G _ {\\text {t e x}} ^ {\\text {A d v}} \\left(\\left\\{p _ {i} \\right\\} _ {i = 1} ^ {k}\\right), \\tag {6}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.604, + 0.454, + 0.619 + ], + "angle": 0, + "content": "where the visible patch with adversarial textures is \\( p_{vi} \\)." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.62, + 0.484, + 0.754 + ], + "angle": 0, + "content": "Subsequently, the modal contribution of the current network input is calculated as the reciprocal of the distance between the response map obtained from single-modal input and the response map from dual-modal input. A larger reciprocal of this distance indicates that the response maps from dual-modal and single-modal inputs are more similar, suggesting a greater contribution from the current single modality to the tracker. 
The modal response contribution can be expressed as follows:" + }, + { + "type": "equation", + "bbox": [ + 0.178, + 0.755, + 0.483, + 0.787 + ], + "angle": 0, + "content": "\\[\nc _ {m} = \\frac {1}{\\operatorname {d i s} (R (m , m) , R (v i , i r))}, \\tag {7}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.795, + 0.484, + 0.87 + ], + "angle": 0, + "content": "where \\( c_{m} \\) represents the contribution value of the \\( m \\in \\{vi, ir\\} \\) modality to the tracker. \\( dis \\) stands for the distance function and is used to measure the Euclidean distance between response maps. \\( R(\\cdot, \\cdot) \\) denotes the response map acquired by the tracker under the current input." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.871, + 0.484, + 0.902 + ], + "angle": 0, + "content": "To normalize the modal contribution, a softmax operation is applied to the reciprocal distances, yielding the final" + }, + { + "type": "code_caption", + "bbox": [ + 0.524, + 0.095, + 0.791, + 0.11 + ], + "angle": 0, + "content": "Algorithm 1: The ACAttack Algorithm" + }, + { + "type": "algorithm", + "bbox": [ + 0.517, + 0.112, + 0.907, + 0.536 + ], + "angle": 0, + "content": "Input: Random patches \\( p_i^{init} \\), parameters \\( k \\), \\( M_{stage1} \\), \\( M_{stage2} \\), \\( \\xi \\), \\( \\zeta \\) \nOutput: Optimized multi-modal patches \\( p_{vi}, p_{ir} \\) \n1 Iteration: \n2 Initialize a random patch \\( p_i^{init} \\); \n3 \\( i = i + 1 \\); \n4 Iteration: \n5 Generate \\( p_i \\) through Eq. (1); \n6 Use Eq. (2) to generate adversarial sample \\( I_{vi}^{adv} \\); \n7 Calculate \\( Bbox_{pred} \\) using Eq. (3); \n8 Optimize \\( PGD(\\cdot) \\) with Eq. (4); \n9 Until: \\( L_{att} < \\xi \\) or iter \\( \\geq M_{stage1} \\) \n10 Until: \\( i \\geq k \\) \n11 Determine \\( \\{p_i\\}_{i=1}^k \\) after optimization in stage1; \n12 Iteration: \n13 iter \\( = iter + 1 \\); \n14 Obtain \\( p_{ir}, p_{vi} \\) via Eqs. (5) and (6); \n15 Apply multi-modal patches \\( p_{ir}, p_{vi} \\) on \\( I_{ir}, I_{vi} \\); \n16 Calculate \\( c_{vi}, c_{ir} \\) using Eq. (7); \n17 Send to Tracker \\( T(\\cdot) \\) to predict bounding box; \n18 if \\( |c_{vi} - c_{ir}| < \\zeta \\); \n19 Optimize \\( G_{shape}^{Adv} \\) with Eq. (9); \n20 elif \\( c_{vi} - c_{ir} > \\zeta \\); \n21 Optimize \\( G_{tex}^{Adv} \\) with Eqs. (4) and (10); \n22 elif \\( c_{ir} - c_{vi} > \\zeta \\); \n23 Optimize \\( G_{shape}^{Adv} \\) with Eqs. (4) and (10); \n24 Until: iter \\( \\geq M_{stage2} \\)" + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.565, + 0.783, + 0.579 + ], + "angle": 0, + "content": "modal contribution score, formulated as:" + }, + { + "type": "equation", + "bbox": [ + 0.613, + 0.59, + 0.905, + 0.606 + ], + "angle": 0, + "content": "\\[\nc _ {n o r m} = \\operatorname {s o f t m a x} \\left(c _ {v i}, c _ {i r}\\right). \\tag {8}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.614, + 0.907, + 0.78 + ], + "angle": 0, + "content": "Finally, an automatic discriminative attack is executed based on the modal contribution score. As illustrated, when the visible contribution is higher in the input data, only the visible modality is attacked, specifically by optimizing the generation of adversarial textures. When the infrared contribution is higher in the input data, only the adversarial shape is modified to attack the infrared modality, thereby reducing its contribution to the tracker. 
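As a sketch of Eqs. (7) and (8), the contribution scores can be computed from the tracker's response maps; the `response(a, b)` hook below is a hypothetical interface assumed for illustration, not a published API:

```python
import torch
import torch.nn.functional as F

def modal_contribution_scores(response, vi_adv, ir_adv):
    # Eq. (7): reciprocal Euclidean distance between each single-modal
    # response map and the dual-modal one; a closer map means a larger
    # contribution of that modality to the tracker.
    R_dual = response(vi_adv, ir_adv)
    c_vi = 1.0 / torch.dist(response(vi_adv, vi_adv), R_dual)
    c_ir = 1.0 / torch.dist(response(ir_adv, ir_adv), R_dual)
    # Eq. (8): softmax normalization of the two contribution values.
    return F.softmax(torch.stack([c_vi, c_ir]), dim=0)
```

With \( \zeta = 0.02 \) as in Algorithm 1, \( |c_{vi} - c_{ir}| < \zeta \) triggers the modal-balance branch; otherwise only the generator of the dominant modality is optimized.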
Given that the tracker employs a modal balance mechanism, the contributions of the two modalities may be similar in certain scenarios, as detailed in the subsequent section." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.789, + 0.766, + 0.803 + ], + "angle": 0, + "content": "3.3. Modal Balance Interference" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.811, + 0.907, + 0.902 + ], + "angle": 0, + "content": "Previous work on tracking attacks has attempted to design explicit attack losses to detect model vulnerabilities, but this approach often fails to account for the inherent characteristics of the model, making it challenging to execute effective attacks. Inspired by the concept of implicit attacks [25] and the multi-modal aggregation properties in" + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.52, + 0.958 + ], + "angle": 0, + "content": "22102" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.092, + 0.485, + 0.213 + ], + "angle": 0, + "content": "RGB-T tracker [22], we develop a loss function with modal-balanced interference to target multi-modal trackers. In cases where the contributions of infrared and visible modal are similar (i.e., modal balance), the response map of single-modal input closely resembles that of dual-modal input. To disrupt this balance, we extract the response maps of the two single-modal adversarial examples and increase the distance between them. The details are provided as follows:" + }, + { + "type": "equation", + "bbox": [ + 0.117, + 0.224, + 0.483, + 0.243 + ], + "angle": 0, + "content": "\\[\nL _ {m i} = - \\left\\| R \\left(v i _ {a d v}, v i _ {a d v}\\right) - R \\left(i r _ {a d v}, i r _ {a d v}\\right) \\right\\| _ {2} ^ {2}. \\tag {9}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.254, + 0.484, + 0.375 + ], + "angle": 0, + "content": "Notably, the infrared and visible patches in our method share the same adversarial shape to achieve simultaneous attacks on both modalities. Therefore, under conditions of modal balance, only the adversarial shape is optimized. Additionally, a spatio-temporal joint attack loss \\( L_{st} \\) is employed in conjunction with the modal jamming loss \\( L_{mi} \\) to disrupt the tracker's semantic perception. The specific design is presented in the following formula:" + }, + { + "type": "equation", + "bbox": [ + 0.102, + 0.378, + 0.483, + 0.416 + ], + "angle": 0, + "content": "\\[\nL _ {s t} = \\left\\| \\sum_ {i = 1} ^ {s} B b o x _ {p r e d} (w, h) - r _ {i} * B b o x _ {g t} (w, h) \\right\\| _ {2} ^ {2}, \\tag {10}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.427, + 0.484, + 0.489 + ], + "angle": 0, + "content": "where \\( s \\) denotes the consecutive \\( s = 5 \\) frames extracted from a video. \\( r_i \\) represents the scaling factor over time to construct the pseudo-GT, which is set as [1.90, 1.95, 2.00, 2.05, 2.10]." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.497, + 0.422, + 0.513 + ], + "angle": 0, + "content": "3.4. Implementation Process in Real-world" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.519, + 0.484, + 0.747 + ], + "angle": 0, + "content": "After completing the digital domain optimization, the multimodal adversarial patches require deployment in the real world. However, during real-world deployment, visible and infrared patches are stacked, leading to inevitable interactions between the two modalities, as illustrated in Fig. 3. 
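The two fine-stage losses above, Eqs. (9) and (10), can be sketched as follows. The response hook is the same assumed interface as before, and summing the per-frame squared errors is one plausible reading of Eq. (10), which writes the sum inside the norm:

```python
import torch

def modal_disturbance_loss(response, vi_adv, ir_adv):
    # Eq. (9): negative squared distance between the two single-modal
    # response maps; descending on it enlarges their gap and perturbs
    # the tracker's modal-balance judgment.
    R_vi = response(vi_adv, vi_adv)
    R_ir = response(ir_adv, ir_adv)
    return -((R_vi - R_ir) ** 2).sum()

def spatio_temporal_loss(pred_wh, gt_wh,
                         ratios=(1.90, 1.95, 2.00, 2.05, 2.10)):
    # Eq. (10) over s = 5 consecutive frames: pred_wh and gt_wh are
    # (s, 2) tensors of box widths/heights, and the growing ratios
    # build the progressively enlarged pseudo-GT.
    r = torch.tensor(ratios).unsqueeze(1)  # (s, 1) scaling factors
    return ((pred_wh - r * gt_wh) ** 2).sum()
```

Returning to the deployment issue of Fig. 3, the stacked placement of the two patches interferes in both directions.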
Specifically, the coverage of visible patches impacts the adversarial shape expression of infrared patches, while the presence of infrared patches hinders the rendering of the adversarial texture in visible modality. To address these challenges, we propose a shape-shared stacking strategy, where both the visible and infrared patches adopt the same attack shape. This design not only effectively mitigates interactions between infrared and visible patches in the real world but also enhances the attack shapes of visible patches, thereby improving overall attack performance." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.76, + 0.224, + 0.777 + ], + "angle": 0, + "content": "4. Experiments" + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.784, + 0.298, + 0.801 + ], + "angle": 0, + "content": "4.1. Experimental Settings" + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.807, + 0.364, + 0.821 + ], + "angle": 0, + "content": "4.1.1. Datasets and Evaluation Metrics" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.825, + 0.484, + 0.902 + ], + "angle": 0, + "content": "We conduct experiments on RGBT234 [12] and LasHeR [13] datasets and assess the effectiveness of our ACAttack by evaluating precision rate (PR) and success rate (SR), both of which are commonly used metrics in tracking tasks. Taking PR as an example, we" + }, + { + "type": "image", + "bbox": [ + 0.518, + 0.091, + 0.646, + 0.167 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.518, + 0.168, + 0.645, + 0.245 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.562, + 0.246, + 0.597, + 0.254 + ], + "angle": 0, + "content": "VI on IR" + }, + { + "type": "image", + "bbox": [ + 0.646, + 0.091, + 0.775, + 0.167 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.646, + 0.168, + 0.774, + 0.245 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.691, + 0.246, + 0.725, + 0.254 + ], + "angle": 0, + "content": "IR on VI" + }, + { + "type": "image", + "bbox": [ + 0.775, + 0.091, + 0.904, + 0.167 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.775, + 0.168, + 0.904, + 0.245 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.827, + 0.246, + 0.849, + 0.254 + ], + "angle": 0, + "content": "Ours" + }, + { + "type": "image", + "bbox": [ + 0.518, + 0.258, + 0.903, + 0.379 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.572, + 0.391, + 0.847, + 0.406 + ], + "angle": 0, + "content": "Figure 3. Process of physical implementation." + }, + { + "type": "image", + "bbox": [ + 0.518, + 0.423, + 0.642, + 0.484 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.544, + 0.488, + 0.61, + 0.498 + ], + "angle": 0, + "content": "(a) ViPT patch" + }, + { + "type": "image", + "bbox": [ + 0.653, + 0.426, + 0.783, + 0.484 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.69, + 0.488, + 0.753, + 0.498 + ], + "angle": 0, + "content": "(b) BAT patch" + }, + { + "type": "image", + "bbox": [ + 0.798, + 0.423, + 0.904, + 0.484 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.811, + 0.487, + 0.898, + 0.498 + ], + "angle": 0, + "content": "(c) SDSTrack patch" + }, + { + "type": "image_caption", + "bbox": [ + 0.576, + 0.51, + 0.843, + 0.524 + ], + "angle": 0, + "content": "Figure 4. 
Visualization of generated patches." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.553, + 0.907, + 0.749 + ], + "angle": 0, + "content": "calculate the Euclidean distance of the center between the predicted bounding box and ground truth box in both RGB and thermal modalities, using the smaller distance to represent the precision RGBT234 provides 234 pairs of RGB and thermal video, with a total frame of about \\(234\\mathrm{K}\\) and a maximum of 8K per sequence. LasHeR is comprised of 1224 visible and thermal video pairs, totaling over \\(730\\mathrm{K}\\) frame pairs. Since the tracking performance on the background is not of interest, LasHeR performs strict alignment of the object area, allowing the object to share the same ground truth of the bounding box in both visible and thermal modalities. Therefore, we use PR and SR as evaluation metrics." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.76, + 0.893, + 0.775 + ], + "angle": 0, + "content": "4.1.2. Victimized Trackers and Comparison Attackers" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.78, + 0.907, + 0.902 + ], + "angle": 0, + "content": "We select several state-of-the-art trackers as targets for our attack, including ViPT [29], BAT [3], and SDSTrack [9]. To demonstrate the challenges in exploiting vulnerabilities in RGB-T trackers, we use a patch composed of random noise as a baseline for comparison, emphasizing the need for meticulous exploration. Furthermore, we compare the performance of our proposed ACAttack with the representative attack method MTD [8], which is specifically designed" + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.945, + 0.518, + 0.957 + ], + "angle": 0, + "content": "22103" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.092, + 0.088, + 0.359, + 0.196 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.15, + 0.2, + 0.31, + 0.214 + ], + "angle": 0, + "content": "(a) ViPT on RGBT234" + }, + { + "type": "image", + "bbox": [ + 0.365, + 0.089, + 0.631, + 0.195 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.424, + 0.2, + 0.58, + 0.215 + ], + "angle": 0, + "content": "(b) BAT on RGBT234" + }, + { + "type": "image", + "bbox": [ + 0.639, + 0.089, + 0.905, + 0.196 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.695, + 0.2, + 0.849, + 0.214 + ], + "angle": 0, + "content": "(c) SDS on RGBT234" + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.227, + 0.908, + 0.268 + ], + "angle": 0, + "content": "Figure 5. Quantitative comparison of tracking performance on the RGBT234 dataset. The tracking performance of ViPT, BAT, and SDSTrack trackers is reported, including the original performance without attacks and the performance under attacks. Lower tracking metrics PR and SR represent better attack. Please zoom in for a better view." + }, + { + "type": "image", + "bbox": [ + 0.094, + 0.283, + 0.905, + 0.384 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.249, + 0.394, + 0.747, + 0.408 + ], + "angle": 0, + "content": "Figure 6. Qualitative comparison of tracking performance on the RGBT234 dataset." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.435, + 0.483, + 0.465 + ], + "angle": 0, + "content": "for RGB trackers, highlighting the advantages of our approach in the multi-modal setting." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.474, + 0.303, + 0.489 + ], + "angle": 0, + "content": "4.1.3. 
Implementation Details" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.492, + 0.484, + 0.584 + ], + "angle": 0, + "content": "The multi-spectral video in the physical domain is captured by a DJI Mavic 3T UAV equipped with thermal and RGB cameras, and the video frame rate is 30 fps. The hyperparameters in the adaptive iteration, \\(\\xi\\) and \\(\\zeta\\), are set to 9 and 0.02, respectively. The number of training epochs in stage1 is set as \\(M_{\\text{stage1}} = 180\\). Experiments are conducted on the RTX 3090 GPU with PyTorch." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.594, + 0.399, + 0.61 + ], + "angle": 0, + "content": "4.2. Comparisons in the Digital Domain" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.616, + 0.484, + 0.753 + ], + "angle": 0, + "content": "We first validate the attack effectiveness of our ACAttack in the digital domain. It is important to note that we only train on the RGBT234 dataset and generate multi-modal patches \\(\\{p_{vi}, p_{ir}\\}\\). As shown in Fig. 4, the RGB patch exhibits color and texture, while the thermal patch has an irregular shape, which aligns with the imaging characteristics of each modality. Subsequently, the patches \\(\\{p_{vi}, p_{ir}\\}\\) generated on RGBT234 are directly applied to the LasHeR dataset to verify their generalization." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.761, + 0.305, + 0.776 + ], + "angle": 0, + "content": "4.2.1. Quantitative Evaluation" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.78, + 0.484, + 0.902 + ], + "angle": 0, + "content": "Fig. 5 illustrates a quantitative comparison on the RGBT234 dataset. The results clearly show that, under our attack, the tracking performance of existing state-of-the-art trackers suffers a significant degradation compared to clean tracking conditions. In contrast, random noise only leads to a modest decline in PR and SR, emphasizing that exploiting tracker vulnerabilities goes beyond the simplicity of random noise; it requires a more sophisticated, optimized ap" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.435, + 0.907, + 0.631 + ], + "angle": 0, + "content": "proach. Additionally, the performance drop observed with MTD is smaller than that of our ACAttack, suggesting that attack methods designed specifically for RGB trackers may not effectively mitigate the feature enhancement resulting from RGB-T coupling. On the other hand, our ACAttack achieves substantial attack success. For instance, against ViPT, ACAttack reduces PR from 0.835 to 0.621 and SR from 0.617 to 0.417. Similarly, for SDSTrack, it lowers PR from 0.848 to 0.616 and SR from 0.625 to 0.426. The substantial performance drops suggest that our ACAttack succeeds in keeping the predicted bounding box far away from the actual object, which will be further confirmed in subsequent qualitative results." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.639, + 0.719, + 0.654 + ], + "angle": 0, + "content": "4.2.2. Qualitative Evaluation" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.658, + 0.907, + 0.779 + ], + "angle": 0, + "content": "As shown in Fig. 6, we present the tracking results of BAT and SDSTrack. The clean trackers perform exceptionally well in maintaining precise tracking, while our attack leads to a significant decline in tracking performance. This degradation can be attributed to our progressive generation framework, which iteratively weakens the tracker's deep semantic attention on modalities with high commonality by decoupling multi-modal responses."
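For reference, a minimal sketch of the PR/SR protocol described in Sec. 4.1.1 follows; the 20-pixel and 0.5-IoU thresholds are common benchmark defaults assumed here, not values quoted in the paper:

```python
import numpy as np

def pr_sr(centers_pred, centers_gt_rgb, centers_gt_tir, ious,
          dist_thresh=20.0, iou_thresh=0.5):
    # Center errors against the RGB and thermal ground truths; per the
    # paper, the smaller of the two distances represents the precision.
    d_rgb = np.linalg.norm(centers_pred - centers_gt_rgb, axis=1)
    d_tir = np.linalg.norm(centers_pred - centers_gt_tir, axis=1)
    d = np.minimum(d_rgb, d_tir)
    pr = float((d <= dist_thresh).mean())    # precision rate
    sr = float((ious >= iou_thresh).mean())  # success rate
    return pr, sr
```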
+ }, + { + "type": "title", + "bbox": [ + 0.513, + 0.789, + 0.753, + 0.803 + ], + "angle": 0, + "content": "4.3. Generalization Evaluation" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.81, + 0.907, + 0.901 + ], + "angle": 0, + "content": "We conduct generalization experiments on the LasHeR dataset, with quantitative and qualitative results shown in Fig. 7 and Fig. 8, respectively. Compared to random noise and MTD, our ACAttack leads to a significant drop in tracking performance across all trackers, even without training on LasHeR. Additionally, we present the IoU plots for both" + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.519, + 0.958 + ], + "angle": 0, + "content": "22104" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.097, + 0.089, + 0.355, + 0.193 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.162, + 0.196, + 0.293, + 0.21 + ], + "angle": 0, + "content": "(a) ViPT on LasHeR" + }, + { + "type": "image", + "bbox": [ + 0.37, + 0.089, + 0.631, + 0.193 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.432, + 0.197, + 0.56, + 0.21 + ], + "angle": 0, + "content": "(b) BAT on LasHeR" + }, + { + "type": "image", + "bbox": [ + 0.645, + 0.089, + 0.903, + 0.193 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.692, + 0.197, + 0.851, + 0.21 + ], + "angle": 0, + "content": "(c) SDSTrack on LasHeR" + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.223, + 0.907, + 0.266 + ], + "angle": 0, + "content": "Figure 7. Quantitative comparison of tracking performance on the LasHeR dataset. The tracking performance of ViPT, BAT, and SDSTrack trackers is reported, including the original performance without attacks and the performance under attacks. Lower tracking metrics PR and SR represent better attack. Please zoom in for a better view." + }, + { + "type": "image", + "bbox": [ + 0.094, + 0.279, + 0.905, + 0.376 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.255, + 0.385, + 0.741, + 0.4 + ], + "angle": 0, + "content": "Figure 8. Qualitative comparison of tracking performance on the LasHeR dataset." + }, + { + "type": "image", + "bbox": [ + 0.095, + 0.424, + 0.482, + 0.602 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.615, + 0.483, + 0.671 + ], + "angle": 0, + "content": "Figure 9. Qualitative comparison of tracking performance on the LasHeR dataset. The blue and red lines represent the IoU variation over frames of the predicted boxes under the clean trackers and the victimized trackers, respectively." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.699, + 0.484, + 0.806 + ], + "angle": 0, + "content": "clean and attacked tracking results, as shown in Fig. 9. It is clear that our ACAttack can maintain a sustained attack over extended periods. Due to the existence of our adaptive attack strategy and the modal balance interference loss, the response value of the tracker for the real target is reduced, and then the tracker is easy to deviate from the original target and is attracted by similar targets." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.818, + 0.398, + 0.833 + ], + "angle": 0, + "content": "4.4. 
Application in the Physical Domain" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.84, + 0.484, + 0.901 + ], + "angle": 0, + "content": "After having verified our adversarial patches in digital scenes, we also extend experiments to demonstrate their efficacy in the physical domain. We directly apply the patches trained in the digital domain to the real world and use aero" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.426, + 0.907, + 0.622 + ], + "angle": 0, + "content": "gel and paper to make thermal and RGB patches for deployment on pedestrians, respectively. A dual-spectral camera in DJI Mavic 3T is used for video capture. Thirty sets of videos of different scenes are taken as test samples. The orientation results of the test are shown in Fig. 10. It can be seen that the tracking prediction bounding box is enlarged and cannot be accurately positioned due to the interference of the multi-modal adversarial patch. Specifically, the optimization of spatio-temporal joint loss makes the patch learn the effect of expanding the tracker's prediction box. Therefore, in the physical world, the tracker will not be able to accurately locate the target after being affected by the adversarial patch." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.635, + 0.679, + 0.65 + ], + "angle": 0, + "content": "4.5. Ablation Studies" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.659, + 0.907, + 0.749 + ], + "angle": 0, + "content": "We conduct ablation studies to assess the effectiveness of our unique design and parameter configuration, including: (I) loss function, (II) parameter K, (III) iteration mode, and (IV) applied modal. The ablation studies are performed on the RGBT234 dataset against ViPT, with quantitative results presented in Table 1." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.76, + 0.659, + 0.774 + ], + "angle": 0, + "content": "4.5.1. Loss Function" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.78, + 0.907, + 0.901 + ], + "angle": 0, + "content": "The loss \\( L_{st} \\) interferes with the tracker from both temporal and spatial dimensions, while \\( L_{mi} \\) is used to disrupt the tracker's semantic perception. To demonstrate their effectiveness, we remove each of them individually, with the results shown in Table 1. In the absence of \\( L_{st} \\) or \\( L_{mi} \\), the attack performance weakens, demonstrating their role in diminishing the enhanced target localization accuracy achieved through multi-modal interaction." + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.518, + 0.957 + ], + "angle": 0, + "content": "22105" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.11, + 0.09, + 0.887, + 0.232 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.333, + 0.241, + 0.665, + 0.255 + ], + "angle": 0, + "content": "Figure 10. Practical application in the physical domain." + }, + { + "type": "table", + "bbox": [ + 0.101, + 0.269, + 0.895, + 0.338 + ], + "angle": 0, + "content": "
<table><tr><td rowspan="2">Metric</td><td rowspan="2">ViPT</td><td colspan="2">Config. I: loss function</td><td colspan="2">Config. II: parameter K</td><td colspan="2">Config. III: iteration mode</td><td colspan="2">Config. IV: applied modal</td><td rowspan="2">Ours</td></tr>
<tr><td>w/o Lst</td><td>w/o Lmi</td><td>K = 0</td><td>K = 9</td><td>cross</td><td>combine</td><td>Only RGB</td><td>Only TIR</td></tr>
<tr><td>PR</td><td>0.835</td><td>0.709</td><td>0.735</td><td>0.672</td><td>0.651</td><td>0.645</td><td>0.703</td><td>0.691</td><td>0.669</td><td>0.621</td></tr>
<tr><td>SR</td><td>0.617</td><td>0.486</td><td>0.505</td><td>0.450</td><td>0.425</td><td>0.428</td><td>0.482</td><td>0.482</td><td>0.462</td><td>0.417</td></tr></table>
" + }, + { + "type": "table_caption", + "bbox": [ + 0.133, + 0.345, + 0.863, + 0.358 + ], + "angle": 0, + "content": "Table 1. Quantitative comparison of ablation studies, which is performed on the RGBT234 dataset against the ViPT tracker." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.386, + 0.228, + 0.4 + ], + "angle": 0, + "content": "4.5.2. Parameter K" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.405, + 0.484, + 0.662 + ], + "angle": 0, + "content": "In our progressive attack framework, we first employ projected gradient descent to identify K sets of coarse adversarial examples with effective attack performance. In order to verify its effectiveness, we set the number of coarse adversarial samples K growth from 0 to 9 and 18. As shown in the Table 1, as K increases from 0 to 9 and 18, the tracker's PR and SR consistently decrease. This indicates that such coarse-grained adversarial examples can effectively narrow the search space for refined attacks, thus facilitating a more effective attack. Specifically, this progressive method for finding adversarial examples prioritizes identifying multiple sets of coarse adversarial representations from a broad spectrum of noise. Subsequently, multi-modal patch generation refines the adversarial details to produce the final adversarial patch, leveraging numerous samples that contain adversarial information. Consequently, this approach results in an enhancement in performance." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.671, + 0.245, + 0.684 + ], + "angle": 0, + "content": "4.5.3. Iteration Mode" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.689, + 0.484, + 0.901 + ], + "angle": 0, + "content": "One of the key contributions of this paper is the adaptive iterative strategy for attacking the RGB-T tracker. To demonstrate the effectiveness of the adaptive strategy, we conduct ablation experiments using the iterative strategy. The alternating iteration strategy and the joint optimization strategy are selected for the comparison test. The former alternately optimizes the adversarial texture network and the adversarial shape network, while the latter simultaneously propagates the gradient flow to both networks. As shown in Table 1, our adaptive iteration approach can more effectively identify model vulnerabilities and generate more aggressive adversarial patches. Specifically, according to the contribution degree, our strategy can weaken deep semantic attention and break the balance of modality in tracker." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.386, + 0.665, + 0.401 + ], + "angle": 0, + "content": "4.5.4. Applied Modal" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.405, + 0.907, + 0.57 + ], + "angle": 0, + "content": "In order to verify the multi-modal patch joint and single-modal patch attack performance, we try to conduct patch apply modal ablation experiment. Multi-modal patches \\(\\{p_{vi}, p_{ir}\\}\\) are generated to simultaneously disrupt both RGB and thermal modalities. As shown in the Table 1, we use only one of these patches in an ablation setup. The adversarial patch of a single modal produces a certain attack effect and makes the tracker confused. Evidently, our multi-modal patch achieves the best attack performance, underscoring the necessity of designing joint multi-modal attacks for RGB-T trackers." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.582, + 0.634, + 0.598 + ], + "angle": 0, + "content": "5. 
Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.608, + 0.907, + 0.835 + ], + "angle": 0, + "content": "In this work, we present a pioneering framework for adversarial attacks on RGB-T multi-modal trackers by introducing an adaptive cross-attack mechanism through multimodal response decoupling. Our approach leverages a modal-aware adaptive attack strategy and introduces novel modal disturbance loss and spatio-temporal joint attack loss to progressively impair the tracker's capability to perceive the target. The shared adversarial shape design also enhances our method's practicality, allowing seamless deployment of multi-modal patches in the real world. Experiments across digital and physical domains confirm the robustness and effectiveness of our approach in evading RGB-T trackers, highlighting the potential and significance of adaptive, multi-modal adversarial attacks in advancing the understanding of tracker vulnerabilities." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.846, + 0.673, + 0.862 + ], + "angle": 0, + "content": "Acknowledgments" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.871, + 0.906, + 0.901 + ], + "angle": 0, + "content": "This work was supported by National Natural Science Foundation of China (62276192)." + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.946, + 0.52, + 0.957 + ], + "angle": 0, + "content": "22106" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.093, + 0.09, + 0.188, + 0.106 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.115, + 0.484, + 0.184 + ], + "angle": 0, + "content": "[1] Luca Bertinetto, Jack Valmadre, Joao F Henriques, Andrea Vedaldi, and Philip HS Torr. Fully-convolitional siamese networks for object tracking. In Proceedings of the European Conference on Computer Vision Workshops, pages 850-865, 2016. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.1, + 0.187, + 0.484, + 0.243 + ], + "angle": 0, + "content": "[2] Goutam Bhat, Martin Danelljan, Luc Van Gool, and Radu Timofte. Learning discriminative model prediction for tracking. In Proceedings of the IEEE International Conference on Computer Vision, pages 6182-6191, 2019. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.245, + 0.483, + 0.299 + ], + "angle": 0, + "content": "[3] Bing Cao, Junliang Guo, Pengfei Zhu, and Qinghua Hu. Bidirectional adapter for multimodal tracking. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 927-935, 2024. 2, 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.302, + 0.483, + 0.372 + ], + "angle": 0, + "content": "[4] Xuesong Chen, Xiyu Yan, Feng Zheng, Yong Jiang, Shu-Tao Xia, Yong Zhao, and Rongrong Ji. One-shot adversarial attacks on visual tracking with dual attention. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 10176-10185, 2020. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.373, + 0.483, + 0.443 + ], + "angle": 0, + "content": "[5] Xuesong Chen, Xiyu Yan, Feng Zheng, Yong Jiang, Shu-Tao Xia, Yong Zhao, and Rongrong Ji. One-shot adversarial attacks on visual tracking with dual attention. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 10176-10185, 2020. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.445, + 0.483, + 0.514 + ], + "angle": 0, + "content": "[6] Xuesong Chen, Canmiao Fu, Feng Zheng, Yong Zhao, Hongsheng Li, Ping Luo, and Guo-Jun Qi. A unified multi-scenario attacking network for visual object tracking. 
In Proceedings of the AAAI Conference on Artificial Intelligence, pages 1097-1104, 2021. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.517, + 0.483, + 0.572 + ], + "angle": 0, + "content": "[7] Martin Danelljan, Luc Van Gool, and Radu Timofte. Probabilistic regression for visual tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7183-7192, 2020. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.574, + 0.484, + 0.643 + ], + "angle": 0, + "content": "[8] Li Ding, Yongwei Wang, Kaiwen Yuan, Minyang Jiang, Ping Wang, Hua Huang, and Z Jane Wang. Towards universal physical attacks on single object tracking. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 1236-1245, 2021. 2, 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.646, + 0.483, + 0.728 + ], + "angle": 0, + "content": "[9] Xiaojun Hou, Jiazheng Xing, Yijie Qian, Yaowei Guo, Shuo Xin, Junhao Chen, Kai Tang, Mengmeng Wang, Zhengkai Jiang, Liang Liu, et al. Sdstrack: Self-distillation symmetric adapter learning for multi-modal visual object tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 26551-26561, 2024. 2, 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.73, + 0.484, + 0.785 + ], + "angle": 0, + "content": "[10] Shuai Jia, Chao Ma, Yibing Song, and Xiaokang Yang. Robust tracking against adversarial attacks. In Proceedings of the European Conference on Computer Vision, pages 69-84, 2020. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.788, + 0.483, + 0.857 + ], + "angle": 0, + "content": "[11] Shuai Jia, Yibing Song, Chao Ma, and Xiaokang Yang. Iou attack: Towards temporally coherent black-box adversarial attack for visual object tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6709-6718, 2021. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.859, + 0.483, + 0.901 + ], + "angle": 0, + "content": "[12] Chenglong Li, Xinyan Liang, Yijuan Lu, Nan Zhao, and Jin Tang. Rgb-t object tracking: Benchmark and baseline. Pattern Recognition, 96:106977, 2019. 5" + }, + { + "type": "list", + "bbox": [ + 0.094, + 0.115, + 0.484, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.093, + 0.906, + 0.147 + ], + "angle": 0, + "content": "[13] Chenglong Li, Wanlin Xue, Yaqing Jia, Zhichen Qu, Bin Luo, Jin Tang, and Dengdi Sun. Lasher: A large-scale high-diversity benchmark for rgbt tracking. IEEE Transactions on Image Processing, 31:392-404, 2022. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.149, + 0.906, + 0.204 + ], + "angle": 0, + "content": "[14] Liting Lin, Heng Fan, Zhipeng Zhang, Yong Xu, and Haibin Ling. Swintrack: A simple and strong baseline for transformer tracking. Advances in Neural Information Processing Systems, 35:16743-16754, 2022. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.205, + 0.907, + 0.274 + ], + "angle": 0, + "content": "[15] Siao Liu, Zhaoyu Chen, Wei Li, Jiwei Zhu, Jiafeng Wang, Wenqiang Zhang, and Zhongxue Gan. Efficient universal shuffle attack for visual object tracking. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, pages 2739-2743, 2022. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.275, + 0.906, + 0.328 + ], + "angle": 0, + "content": "[16] Andong Lu, Chenglong Li, Yuqing Yan, Jin Tang, and Bin Luo. 
Rgbt tracking via multi-adapter network with hierarchical divergence loss. IEEE Transactions on Image Processing, 30:5613-5625, 2021. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.33, + 0.906, + 0.385 + ], + "angle": 0, + "content": "[17] Andong Lu, Cun Qian, Chenglong Li, Jin Tang, and Liang Wang. Duality-gated mutual condition network for rgbt tracking. IEEE Transactions on Neural Networks and Learning Systems, 36(3):4118-4131, 2025. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.386, + 0.906, + 0.442 + ], + "angle": 0, + "content": "[18] Hyeonseob Nam and Bohyung Han. Learning multi-domain convolutional neural networks for visual tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4293-4302, 2016. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.442, + 0.906, + 0.496 + ], + "angle": 0, + "content": "[19] Wuqiang Qi, Zhuoqun Zhang, and Zhishe Wang. Dmfuse: Diffusion model guided cross-attention learning for infrared and visible image fusion. Chinese Journal of Information Fusion, 1(3):226-241, 2024. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.497, + 0.906, + 0.538 + ], + "angle": 0, + "content": "[20] Linfeng Tang, Hao Zhang, Han Xu, and Jiayi Ma. Deep learning-based image fusion: A survey. Journal of Image and Graphics, 28(1):3-36, 2023. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.539, + 0.907, + 0.594 + ], + "angle": 0, + "content": "[21] Ning Wang, Wengang Zhou, Yibing Song, Chao Ma, Wei Liu, and Houqiang Li. Unsupervised deep representation learning for real-time tracking. International Journal of Computer Vision, 129(2):400-418, 2021. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.596, + 0.906, + 0.65 + ], + "angle": 0, + "content": "[22] Yun Xiao, Mengmeng Yang, Chenglong Li, Lei Liu, and Jin Tang. Attribute-based progressive fusion network for rgbt tracking. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 2831-2838, 2022. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.651, + 0.906, + 0.718 + ], + "angle": 0, + "content": "[23] Saining Xie, Ross Girshick, Piotr Dólár, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1492-1500, 2017. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.72, + 0.906, + 0.774 + ], + "angle": 0, + "content": "[24] Xinyu Xie, Yawen Cui, Tao Tan, Xubin Zheng, and Zitong Yu. Fusionmamba: Dynamic feature enhancement for multimodal image fusion with mamba. Visual Intelligence, 2(1): 37, 2024. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.776, + 0.906, + 0.844 + ], + "angle": 0, + "content": "[25] Bin Yan, Dong Wang, Huchuan Lu, and Xiaoyun Yang. Cooling-shrinking attack: Blinding the tracker with imperceptible noises. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 990-999, 2020. 2, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.845, + 0.906, + 0.901 + ], + "angle": 0, + "content": "[26] Bin Yan, Houwen Peng, Jianlong Fu, Dong Wang, and Huchuan Lu. Learning spatio-temporal transformer for visual tracking. In Proceedings of the IEEE International Conference on Computer Vision, pages 10448-10457, 2021. 
1" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.093, + 0.907, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "22107" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.093, + 0.482, + 0.147 + ], + "angle": 0, + "content": "[27] Fan Zhang, Hanwei Peng, Lingli Yu, Yuqian Zhao, and Baifan Chen. Dual-modality space-time memory network for rgbt tracking. IEEE Transactions on Instrumentation and Measurement, 72:1-12, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.149, + 0.482, + 0.217 + ], + "angle": 0, + "content": "[28] Tianlu Zhang, Hongyuan Guo, Qiang Jiao, Qiang Zhang, and Jungong Han. Efficient rgb-t tracking via cross-modality distillation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5404-5413, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.22, + 0.482, + 0.275 + ], + "angle": 0, + "content": "[29] Jiawen Zhu, Simiao Lai, Xin Chen, Dong Wang, and Huchuan Lu. Visual prompt multi-modal tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9516-9526, 2023. 2, 5" + }, + { + "type": "list", + "bbox": [ + 0.093, + 0.093, + 0.482, + 0.275 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.946, + 0.518, + 0.956 + ], + "angle": 0, + "content": "22108" + } + ] +] \ No newline at end of file diff --git a/2025/ACAttack_ Adaptive Cross Attacking RGB-T Tracker via Multi-Modal Response Decoupling/6f190354-412a-4587-a766-f43d48cafb75_origin.pdf b/2025/ACAttack_ Adaptive Cross Attacking RGB-T Tracker via Multi-Modal Response Decoupling/6f190354-412a-4587-a766-f43d48cafb75_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..42f14e5d072c563dc212d0992616366096b574e6 --- /dev/null +++ b/2025/ACAttack_ Adaptive Cross Attacking RGB-T Tracker via Multi-Modal Response Decoupling/6f190354-412a-4587-a766-f43d48cafb75_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0135ef5be8ee4cbbdcfb8c6295590f94fbed9a55ddcf1057ba992322dc18c99b +size 4048980 diff --git a/2025/ACAttack_ Adaptive Cross Attacking RGB-T Tracker via Multi-Modal Response Decoupling/full.md b/2025/ACAttack_ Adaptive Cross Attacking RGB-T Tracker via Multi-Modal Response Decoupling/full.md new file mode 100644 index 0000000000000000000000000000000000000000..95917ac63672358e614a9482cbea774850262ba5 --- /dev/null +++ b/2025/ACAttack_ Adaptive Cross Attacking RGB-T Tracker via Multi-Modal Response Decoupling/full.md @@ -0,0 +1,340 @@ +# ACAttack: Adaptive Cross Attacking RGB-T Tracker via Multi-Modal Response Decoupling + +Xinyu Xiang Qinglong Yan Hao Zhang* Jiayi Ma* Wuhan University, China + +xiangxinyu@whu.edu.cn, qinglong_yan@whu.edu.cn, zhpersonalbox@gmail.com, jyma2010@gmail.com + +# Abstract + +The research on adversarial attacks against trackers primarily concentrates on the RGB modality, whereas the methodology for attacking RGB-T multi-modal trackers has seldom been explored so far. This work represents an innovative attempt to develop an adaptive cross attack framework via multi-modal response decoupling, generating multi-modal adversarial patches to evade RGB-T trackers. Specifically, a modal-aware adaptive attack strategy is introduced to weaken the modality with high common information contribution alternately and iteratively, achieving the modal decoupling attack. 
To perturb the judgment of the modal balance mechanism in the tracker, we design a modal disturbance loss that enlarges the distance between the response maps of the single-modal adversarial samples in the tracker. We also propose a novel spatio-temporal joint attack loss to progressively deteriorate the tracker's perception of the target. Moreover, the shared adversarial shape design enables the generated multi-modal adversarial patches to be readily deployed in real-world scenarios, effectively reducing the interference of the patch-pasting process with the shape attack of the infrared adversarial layer. Extensive digital and physical domain experiments demonstrate the effectiveness of our multi-modal adversarial patch attack. Our code is available at https://github.com/Xinyu-Xiang/ACAttack.

# 1. Introduction

The adversarial attack on visual object tracking (VOT) [1, 26] aims to mislead the prediction results of the tracker through generated adversarial disturbances, expose model vulnerabilities, and thereby promote the security of tracking models in real-world applications. Single-modal tracking attack methods have been extensively studied, but with the proliferation of multi-modal devices [19, 20, 24], multi-modal trackers are increasingly deployed in safety-critical real-world

![](images/c344516e1824eddade6022528581e550f426aeec60e7d6e47551d4607884a8b0.jpg)
Figure 1. Our attack strategy against RGB-T trackers. An adaptive attack strategy, sensitive to modality, is introduced to alternately and iteratively suppress the modality with a high contribution of shared information. Additionally, a modal disturbance loss is crafted to enlarge the response map distance for single-modal adversarial samples within the tracker.

fields such as autonomous driving and urban security [16].

To address the urgent need to explore the security of trackers, adversarial attack techniques for trackers have emerged in rapid succession, including traditional gradient-based attack approaches and deep-network-based attacks. The former methods [10, 11] use hand-crafted parameters and many rounds of gradient ascent to maximize an adversarial loss function that misguides deep networks. Although this achieves a certain attack effect on specific types of trackers, the inflexible pattern design makes it difficult to comprehensively explore and attack the potential vulnerabilities of different trackers. In contrast, the latter paradigm [6] trains an adversarial patch generator on large amounts of data, using flexible architectures and optimization strategies to automatically search for model weaknesses and attack trackers. Therefore, compared with traditional gradient-based approaches, the deep-network-based paradigm can excavate the security issues within the tracker more automatically and flexibly.

Although prior adversarial attack methods are effective at interfering with trackers, several challenges still need to be addressed. Notably, existing adversarial attack methods [5, 15, 25] on tracking are designed for the RGB modality, whereas methodologies for attacking RGB-T multi-modal trackers remain largely unexplored. Considering the widespread deployment of multi-modal tracking technology [17, 27, 28] in several safety-critical areas, it is urgent to explore and implement adversarial attacks on multi-modal tracking to understand the potential vulnerabilities of the trackers.
However, as shown in Fig. 1, the unique modal coupling and structural design of RGB-T trackers make it very challenging to find model vulnerabilities. Firstly, due to the modal equilibrium mechanism and the coupled multi-modal information, it is difficult to jam the RGB-T tracking model itself; specifically, the modal balancing strategy in the multi-modal tracker can effectively defend against adversarial perturbations applied to a single modality. Secondly, the coupling of multi-modal information can effectively weaken attacks against the consensus region of the target. Thirdly, deploying patches in the physical world is also challenging, because the stacked placement of multi-modal patches risks compromising the expression of the infrared adversarial shape, reducing their synergy.

Considering these challenges, we propose ACAttack, an adaptive cross attack framework via multi-modal response decoupling. It aims to generate multi-modal adversarial patches to evade RGB-T trackers in both the digital and physical domains. Specifically, the framework first discovers a pool of rough adversarial samples through gradual, adaptive optimization, and then maps them into the high-dimensional adversarial spaces of the different modalities according to the modal response contribution factor, forming multi-modal adversarial patches. Secondly, a modal-aware adaptive attack strategy is introduced that alternately and iteratively weakens the tracker's deep semantic attention to the modality with a high common-information contribution, according to the contribution degree of the modal response, achieving the modal decoupling attack. When the contributions of the two modalities are similar, we design a modal disturbance loss to search for modal-imbalance vulnerabilities of the tracker, enlarging the distance between the response maps of the single-modal adversarial samples in the tracker and perturbing the tracker's modal-balance judgment. We also design a spatio-temporal joint attack loss that builds progressively enlarged pseudo-GT between consecutive frames, which progressively deteriorates the tracker's perception of the target. Thirdly, a shared adversarial shape is designed to eliminate the interference of visible patches with the expression of infrared adversarial shapes. Once the shape is shared, it not only reduces the loss of the adversarial shape's inter-modal attack ability but also realizes attacks beyond texture in the visible modality.

In summary, we make the following contributions:

- We make an innovative attempt to propose an adaptive cross attack framework via multi-modal response decoupling. It can generate multi-modal adversarial patches to mislead RGB-T trackers effectively.
- We develop a novel modal attack flow, in which a modal-aware adaptive attack strategy and modal attack constraints alternately disturb the high-contribution modality to achieve modal decoupling and to destroy the tracker's modal balance mechanism, respectively.
- We design a shape-shared stacking strategy that links the visible and infrared adversarial shapes, reducing the attack loss caused by mutually deploying multi-modal patches in physical scenarios.
- Experimental results show that our multi-modal patches can efficiently fool RGB-T trackers on standard RGB-T tracking datasets and in real scenes.

# 2. Related Work

# 2.1. Visual Object Tracking

Given the tracked object in the first frame, object tracking aims to recognize and locate the object in subsequent frames.
Many RGB tracking methods [2, 7, 14, 18, 21] have been proposed and achieve commendable tracking performance. However, RGB sensors struggle to capture objects effectively under challenging conditions such as occlusion and low light, limiting the performance of RGB trackers. To address this, the RGB-T tracking paradigm is introduced, which is not restricted to a single RGB modality but instead integrates the complementary information from both the RGB and thermal modalities. This fusion enables more robust tracking capabilities. ViPT [29] introduces a vision prompt tracking framework that leverages a foundation model with strong representation capabilities, enabling interaction between the thermal and RGB modalities through a modality-complementing prompter. BAT [3] proposes a universal bidirectional adapter, which enables mutual prompting between the thermal and RGB modalities and further improves tracking performance. SDSTrack [9] designs a complementary masked patch distillation strategy based on self-distillation learning, which enhances tracking robustness in extreme weather.

# 2.2. Adversarial Attacks

Currently, adversarial attacks in the tracking task primarily target RGB trackers. For instance, APYVOT [4] proposes an optimization objective function with a dual-attention mechanism to generate perturbations, disrupting tracking by interfering solely with the initial frame. MTD [8] introduces a maximum textural discrepancy loss function that misleads visual trackers by decorrelating the template

![](images/89724a5020b0aa2d41c2cccabb03770003a311051704048345ec27280933c31c.jpg)

![](images/51abf7b5b1b9ea20cd809cc7842a0f6369b3ce07abf87277bc899e287d84adda.jpg)
Figure 2. The overall framework of our ACAttack.

and search frame at hierarchical feature scales. These methods, however, fail to disrupt the significant feature enhancement resulting from the interactions between the RGB and thermal modalities, which limits their effectiveness against RGB-T trackers. Therefore, it is essential to develop an attack strategy specifically designed for RGB-T tracking.

# 3. Methodology

# 3.1. Coarse-to-Fine Modality Attack Framework

With the help of a progressive modality-information integration strategy, multi-modal trackers gradually strengthen the common scene representation and target response, thereby achieving tracking performance more robust than that of single-modal trackers. Consequently, we design a coarse-to-fine architecture, named ACAttack, that progressively degrades the modality integration capability of RGB-T models and can be divided into two stages. The overall architecture of ACAttack is illustrated in Fig. 2 and Algorithm 1. First, we employ projected gradient descent (PGD) in stage1 to identify a set of adversarial examples $\{p_i\}_{i=1}^k$ with sufficient aggressiveness, narrowing the search space for refined attacks and increasing the likelihood of discovering strong adversarial examples, formulated as:

$$
\{p_i\}_{i=1}^{k} = \mathrm{PGD}\left(p_i^{init}\right), \tag{1}
$$

where $p_i^{init}$ is randomly initialized with noise patches. The generated patches are subsequently loaded onto the visible image $I_{vi}$ to form a visible adversarial sample $I_{vi}^{adv}$. This process can be formulated as follows:

$$
I_{vi}^{adv} = p_i \odot M + I_{vi} \odot (1 - M), \tag{2}
$$

where $M$ is the binary mask for applying the adversarial patch and $\odot$ denotes the element-wise Hadamard product.
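To make this concrete, below is a minimal PyTorch sketch of the patch application in Eq. (2) and the stage-1 PGD search of Eq. (1). The tensor shapes, step size, and the assumption that the tracker exposes a differentiable predicted box center are ours for illustration, not details taken from the released implementation:

```python
import torch

def apply_patch(image_vi, patch, mask):
    # Eq. (2): I_vi^adv = p ⊙ M + I_vi ⊙ (1 - M)
    return patch * mask + image_vi * (1 - mask)

def pgd_coarse_patch(image_vi, image_ir, mask, tracker, gt_center,
                     steps=50, alpha=2 / 255):
    # Stage-1 sketch of Eq. (1): starting from random noise, take
    # signed-gradient steps that minimize the attack loss L_att of
    # Eq. (4) below, i.e. push the predicted box center away from the
    # ground-truth center. `tracker(adv_vi, image_ir)` is assumed to
    # return the predicted center as a differentiable (B, 2) tensor.
    patch = torch.rand_like(image_vi)
    for _ in range(steps):
        patch.requires_grad_(True)
        adv_vi = apply_patch(image_vi, patch, mask)
        l_att = -((tracker(adv_vi, image_ir) - gt_center) ** 2).sum()
        grad = torch.autograd.grad(l_att, patch)[0]
        patch = (patch - alpha * grad.sign()).clamp(0, 1).detach()
    return patch
```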
The adversarial visible image $I_{vi}^{adv}$ and the clean infrared image $I_{ir}$ are then sent to the RGB-T tracker $T(\cdot)$ to predict the final bounding box $Bbox_{pred}$ of the target, which is expressed as:

$$
Bbox_{pred} = T\left(I_{vi}^{adv}, I_{ir}\right). \tag{3}
$$

We optimize this process by minimizing the conventional attack loss $L_{att}$ on the center point, which is defined as:

$$
L_{att} = -\left\| C_p\left(Bbox_{pred}\right) - C_p\left(Bbox_{gt}\right) \right\|_2^2, \tag{4}
$$

where $C_p$ denotes the operator that extracts the center point of a bounding box. When the attack loss $L_{att}$ reaches a predefined threshold, the iteration is halted and a rough adversarial sample is generated. This process is repeated $k$ times to generate a set of $k$ rough adversarial samples. Considering the different imaging principles of the infrared and visible modalities, the multi-modal patches are designed accordingly: the visible modality mainly interferes through an attack texture, whereas texture is difficult to perceive in the infrared modality, so an adversarial shape is used to attack instead. Subsequently, the set of adversarial samples generated in the coarse attack stage (stage1) is fed into the subsequent fine attack process (stage2) for further refinement, yielding multi-modal adversarial patches with strong attack performance. Finally, through continuous iterative optimization, the multi-modal patches come to share the same attack shape, while the visible patch additionally carries an adversarial texture to confuse the tracker. The fine-grained attack phase targets the modalities of the multi-modal tracker and consists of the modal decoupling attack and modal balance interference, detailed in the following sections.

# 3.2. Modal Decoupling Attack

The RGB-T tracker implicitly couples the contributions of the two modalities, thereby enhancing tracking accuracy. Given the significant role of modal contribution in the tracker, we propose a modal decoupling attack to adaptively diminish the influence of the advantageous modality. Specifically, we use the coarse adversarial samples from stage1 as input for stage2, feeding them simultaneously into the adversarial texture generation network $G_{tex}^{Adv}$ and the adversarial shape generation network $G_{shape}^{Adv}$. For attacking the infrared modality, the rough adversarial sample set from the first stage is encoded into $r$ dimensions via continuous downsampling and an MLP, controlling the adversarial shape. The infrared patch $p_{ir}$ generation process can be expressed as follows:

$$
p_{ir} = G_{shape}^{Adv}\left(\{p_i\}_{i=1}^{k}\right). \tag{5}
$$

The adversarial texture generation network generates adversarial textures to attack the visible modality using residual connections and upsampling [23], which is defined as:

$$
p_{vi} = \left(1 - p_{ir}\right) \odot G_{tex}^{Adv}\left(\{p_i\}_{i=1}^{k}\right), \tag{6}
$$

where $p_{vi}$ is the visible patch carrying the adversarial texture.
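The shape sharing in Eqs. (5) and (6) amounts to a few lines of code. The sketch below assumes generator interfaces and tensor shapes that the paper does not spell out:

```python
import torch

def generate_patches(coarse_samples, g_shape, g_tex):
    # Eqs. (5)-(6): the shape network emits the infrared patch, and the
    # texture network paints the visible texture on that shape's
    # complement, so the two patches interlock around one shared shape.
    p_ir = g_shape(coarse_samples)               # assumed (1, 1, H, W) in [0, 1]
    p_vi = (1.0 - p_ir) * g_tex(coarse_samples)  # assumed (1, 3, H, W) texture
    return p_vi, p_ir
```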
Subsequently, the modal contribution of the current network input is calculated as the reciprocal of the distance between the response map obtained from single-modal input and the response map obtained from dual-modal input. A larger reciprocal (i.e., a smaller distance) indicates that the dual-modal and single-modal response maps are more similar, suggesting a greater contribution of the current single modality to the tracker. The modal response contribution can be expressed as follows:

$$
c_m = \frac{1}{\mathrm{dis}\left(R(m, m), R(vi, ir)\right)}, \tag{7}
$$

where $c_m$ represents the contribution of modality $m \in \{vi, ir\}$ to the tracker, $\mathrm{dis}$ denotes the distance function, here the Euclidean distance between response maps, and $R(\cdot, \cdot)$ is the response map produced by the tracker for the given input.

To normalize the modal contributions, a softmax operation is applied to the reciprocal distances, yielding the final modal contribution score, formulated as:

$$
c_{norm} = \mathrm{softmax}\left(c_{vi}, c_{ir}\right). \tag{8}
$$

Algorithm 1: The ACAttack Algorithm
Input: Random patches $p_i^{init}$, parameters $k$, $M_{stage1}$, $M_{stage2}$, $\xi$, $\zeta$
Output: Optimized multi-modal patches $p_{vi}, p_{ir}$
1 Iteration:
2 Initialize a random patch $p_i^{init}$;
3 $i = i + 1$;
4 Iteration:
5 Generate $p_i$ through Eq. (1);
6 Use Eq. (2) to generate the adversarial sample $I_{vi}^{adv}$;
7 Calculate $Bbox_{pred}$ using Eq. (3);
8 Optimize $PGD(\cdot)$ with Eq. (4);
9 Until: $L_{att} < \xi$ or iter $\geq M_{stage1}$
10 Until: $i \geq k$
11 Determine $\{p_i\}_{i=1}^k$ after optimization in stage1;
12 Iteration:
13 iter $=$ iter $+ 1$;
14 Obtain $p_{ir}, p_{vi}$ via Eqs. (5) and (6);
15 Apply the multi-modal patches $p_{ir}, p_{vi}$ to $I_{ir}, I_{vi}$;
16 Calculate $c_{vi}, c_{ir}$ using Eq. (7);
17 Send to the tracker $T(\cdot)$ to predict the bounding box;
18 if $|c_{vi} - c_{ir}| < \zeta$:
19 Optimize $G_{shape}^{Adv}$ with Eq. (9);
20 elif $c_{vi} - c_{ir} > \zeta$:
21 Optimize $G_{tex}^{Adv}$ with Eqs. (4) and (10);
22 elif $c_{ir} - c_{vi} > \zeta$:
23 Optimize $G_{shape}^{Adv}$ with Eqs. (4) and (10);
24 Until: iter $\geq M_{stage2}$

Finally, an automatic discriminant attack is executed based on the modal contribution score. As shown in Algorithm 1, when the visible contribution of the input data is higher, only the visible modality is attacked, specifically by optimizing the generation of adversarial textures. When the infrared contribution is higher, only the adversarial shape is modified to attack the infrared modality, thereby reducing its contribution to the tracker. Given that the tracker employs a modal balance mechanism, the contributions of the two modalities may be similar in certain scenarios, as detailed in the following section.

# 3.3. Modal Balance Interference

Previous work on tracking attacks has attempted to design explicit attack losses to detect model vulnerabilities, but this approach often fails to account for the inherent characteristics of the model, making it challenging to execute effective attacks. Inspired by the concept of implicit attacks [25] and the multi-modal aggregation properties of RGB-T trackers [22], we develop a loss function with modal-balance interference to target multi-modal trackers. In cases where the contributions of the infrared and visible modalities are similar (i.e., modal balance), the response map of a single-modal input closely resembles that of the dual-modal input. To disrupt this balance, we extract the response maps of the two single-modal adversarial examples and increase the distance between them.
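As a rough PyTorch sketch of Eqs. (7) and (8) and of the balance test used in Algorithm 1, with `response_fn` standing in for an assumed hook that returns the tracker's response map for a given modality pair:

```python
import torch
import torch.nn.functional as F

def modal_contributions(response_fn, zeta=0.02):
    # Eq. (7): a modality contributes more when its single-modal
    # response map lies closer to the dual-modal response map R(vi, ir).
    r_dual = response_fn("vi", "ir")
    recips = []
    for m in ("vi", "ir"):
        dist = torch.norm(response_fn(m, m) - r_dual)  # Euclidean dis(.)
        recips.append(1.0 / (dist + 1e-8))             # eps for stability
    c_vi, c_ir = F.softmax(torch.stack(recips), dim=0)  # Eq. (8)
    balanced = bool((c_vi - c_ir).abs() < zeta)  # branch test in Alg. 1
    return c_vi, c_ir, balanced
```

When `balanced` is true, only the shared adversarial shape is optimized, guided by the modal disturbance loss defined next.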
The modal disturbance loss is defined as:

$$
L_{mi} = -\left\| R\left(vi_{adv}, vi_{adv}\right) - R\left(ir_{adv}, ir_{adv}\right) \right\|_2^2. \tag{9}
$$

Notably, the infrared and visible patches in our method share the same adversarial shape to achieve simultaneous attacks on both modalities. Therefore, under conditions of modal balance, only the adversarial shape is optimized. Additionally, a spatio-temporal joint attack loss $L_{st}$ is employed in conjunction with the modal jamming loss $L_{mi}$ to disrupt the tracker's semantic perception. The specific design is presented in the following formula:

$$
L_{st} = \left\| \sum_{i=1}^{s} Bbox_{pred}(w, h) - r_i \cdot Bbox_{gt}(w, h) \right\|_2^2, \tag{10}
$$

where $s = 5$ denotes the number of consecutive frames extracted from a video, and $r_i$ represents the scaling factor over time used to construct the pseudo-GT, set as [1.90, 1.95, 2.00, 2.05, 2.10].

# 3.4. Implementation Process in the Real World

After completing the digital-domain optimization, the multi-modal adversarial patches require deployment in the real world. However, during real-world deployment, the visible and infrared patches are stacked, leading to inevitable interactions between the two modalities, as illustrated in Fig. 3. Specifically, the coverage of visible patches impacts the adversarial shape expression of infrared patches, while the presence of infrared patches hinders the rendering of the adversarial texture in the visible modality. To address these challenges, we propose a shape-shared stacking strategy, where both the visible and infrared patches adopt the same attack shape. This design not only effectively mitigates interactions between infrared and visible patches in the real world but also enhances the attack shapes of visible patches, thereby improving overall attack performance.

# 4. Experiments

# 4.1. Experimental Settings

# 4.1.1. Datasets and Evaluation Metrics

We conduct experiments on the RGBT234 [12] and LasHeR [13] datasets and assess the effectiveness of our ACAttack by evaluating the precision rate (PR) and success rate (SR), both of which are commonly used metrics in tracking tasks. Taking PR as an example, we calculate the Euclidean distance between the centers of the predicted bounding box and the ground-truth box in both the RGB and thermal modalities, and use the smaller distance to represent the precision. RGBT234 provides 234 pairs of RGB and thermal videos, with about $234\mathrm{K}$ frames in total and a maximum of 8K frames per sequence. LasHeR comprises 1224 visible and thermal video pairs, totaling over $730\mathrm{K}$ frame pairs. Since tracking performance on the background is not of interest, LasHeR strictly aligns the object area, allowing the object to share the same ground-truth bounding box in both the visible and thermal modalities. Therefore, we use PR and SR as the evaluation metrics.
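For clarity, the following sketch shows one way this PR computation could look in code; the (x, y, w, h) box format and the common 20-pixel center-error threshold are our assumptions rather than details quoted from the evaluation toolkit:

```python
import numpy as np

def precision_rate(pred, gt_rgb, gt_t, tau=20.0):
    # Boxes are (N, 4) arrays in (x, y, w, h) format (assumed).
    def centers(b):
        return b[:, :2] + b[:, 2:] / 2.0

    cp = centers(pred)
    d_rgb = np.linalg.norm(cp - centers(gt_rgb), axis=1)
    d_t = np.linalg.norm(cp - centers(gt_t), axis=1)
    dist = np.minimum(d_rgb, d_t)       # keep the smaller per-frame distance
    return float((dist <= tau).mean())  # fraction of frames within tau
```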
![](images/d470705f12b1b300b95534ddb97c4a2699910e4e6ccda977f227ae26d4cb9532.jpg)

![](images/33fd2f375730bee087e11a98a35e6edcbe43d2da12561454172fedc8fc9231ac.jpg)
VI on IR

![](images/f47fed5e15b83cdc966600b10b48230bac13df87d5bf3c7dd049d110db4d9ce1.jpg)

![](images/92e3c6e7196ddfdb7bb345e4c518f60529abae4065d0b6b0eada7eead6b613b9.jpg)
IR on VI

![](images/40821a79e64f48b8da64814d87a2c2c1d6b209c40f83b609db0e4879db63ce81.jpg)

![](images/fa0cf77ca4b32b732a765bd36dd7bdfd3dcfd54c3d62ad0c1d21287968a7e519.jpg)
Ours

![](images/c883cfd57486865bd20823dd356d172a367ac79998c92b08debea92727b3fbb1.jpg)
Figure 3. Process of physical implementation.

![](images/c4eee172f1a63caf416d7705ac5c5a4cb3db0beed313dcc8d3253a8746fc10d4.jpg)
(a) ViPT patch

![](images/1f1cba12750e6777c7d348e11705f7579722488aeb97c43eb0c5ba7f185d8c9c.jpg)
(b) BAT patch

![](images/1a25403a0068d4992e805125a13cfebb745348e2e333de0c52b4c0685aebb6dc.jpg)
(c) SDSTrack patch
Figure 4. Visualization of generated patches.

# 4.1.2. Victimized Trackers and Comparison Attackers

We select several state-of-the-art trackers as targets for our attack, including ViPT [29], BAT [3], and SDSTrack [9]. To demonstrate the challenges in exploiting vulnerabilities in RGB-T trackers, we use a patch composed of random noise as a baseline for comparison, emphasizing the need for meticulous exploration. Furthermore, we compare the performance of our proposed ACAttack with the representative attack method MTD [8], which is specifically designed for RGB trackers, highlighting the advantages of our approach in the multi-modal setting.

![](images/da6a446625886f71756aead6a4e9fa99c2780b36928a437cb31b58f954fcae24.jpg)
(a) ViPT on RGBT234

![](images/1a22cf3b0944a5bec6d422a63b67353ae6f7db902f57bf12db9a0a3abede9e4d.jpg)
(b) BAT on RGBT234

![](images/011457f325e1dc34bddb5ac62bfa101a76791122d58e04157c92d3d0388f4432.jpg)
(c) SDS on RGBT234

![](images/e522ecd422a23f3740a308ff9c1a063e2cb028f9532ad41a9413cd641d252e86.jpg)
Figure 5. Quantitative comparison of tracking performance on the RGBT234 dataset. The tracking performance of the ViPT, BAT, and SDSTrack trackers is reported, including the original performance without attacks and the performance under attacks. Lower PR and SR indicate a better attack. Please zoom in for a better view.
Figure 6. Qualitative comparison of tracking performance on the RGBT234 dataset.

# 4.1.3. Implementation Details

The multi-spectral videos in the physical domain are captured by a DJI Mavic 3T UAV equipped with thermal and RGB cameras at a frame rate of 30 fps. The hyperparameters of the adaptive iteration, $\xi$ and $\zeta$, are set to 9 and 0.02, respectively. The number of training epochs in stage1 is set as $M_{\text{stage1}} = 180$. Experiments are conducted on an RTX 3090 GPU with PyTorch.

# 4.2. Comparisons in the Digital Domain

We first validate the attack effectiveness of our ACAttack in the digital domain. It is important to note that we train only on the RGBT234 dataset to generate the multi-modal patches $\{p_{vi}, p_{ir}\}$. As shown in Fig. 4, the RGB patch exhibits color and texture, while the thermal patch has an irregular shape, which aligns with the imaging characteristics of each modality. Subsequently, the patches $\{p_{vi}, p_{ir}\}$ generated on RGBT234 are directly applied to the LasHeR dataset to verify their generalization.

# 4.2.1. Quantitative Evaluation

Fig. 5 illustrates a quantitative comparison on the RGBT234 dataset.
The results clearly show that, under our attack, the tracking performance of existing state-of-the-art trackers suffers a significant degradation compared to clean tracking conditions. In contrast, random noise only leads to a modest decline in PR and SR, emphasizing that exploiting tracker vulnerabilities requires more than the simplicity of random noise; it demands a more sophisticated, optimized approach. Additionally, the performance drop observed with MTD is smaller than that of our ACAttack, suggesting that attack methods designed specifically for RGB trackers may not effectively mitigate the feature enhancement resulting from RGB-T coupling. On the other hand, our ACAttack achieves substantial attack success. For instance, against ViPT, ACAttack reduces PR from 0.835 to 0.621 and SR from 0.617 to 0.417. Similarly, for SDSTrack, it lowers PR from 0.848 to 0.616 and SR from 0.625 to 0.426. These substantial performance drops suggest that our ACAttack succeeds in keeping the predicted bounding box far away from the actual object, which is further confirmed by the subsequent qualitative results.

# 4.2.2. Qualitative Evaluation

As shown in Fig. 6, we present the tracking results of BAT and SDSTrack. The clean trackers perform exceptionally well in maintaining precise tracking, while our attack leads to a significant decline in tracking performance. This degradation can be attributed to our progressive generation framework, which iteratively weakens the tracker's deep semantic attention on modalities with high commonality by decoupling the multi-modal responses.

# 4.3. Generalization Evaluation

We conduct generalization experiments on the LasHeR dataset, with quantitative and qualitative results shown in Fig. 7 and Fig. 8, respectively. Compared to random noise and MTD, our ACAttack leads to a significant drop in tracking performance across all trackers, even without training on LasHeR. Additionally, we present the IoU plots for both clean and attacked tracking results, as shown in Fig. 9. It is clear that our ACAttack can maintain a sustained attack over extended periods. Owing to our adaptive attack strategy and the modal balance interference loss, the tracker's response to the real target is reduced, so the tracker easily deviates from the original target and is attracted to similar targets.

![](images/28049c3409491173fd313b73b9d8f4d6052356dd6d98451600c8e1c68fb1c206.jpg)
(a) ViPT on LasHeR

![](images/eac88dcb4a67c9b330ed788edd168ef45fb74938a144720157eb3a4c466bc779.jpg)
(b) BAT on LasHeR

![](images/441f9cb12578aae2668934ce3ac88e833dca1fd762f2b926d0004f5b281ad12a.jpg)
(c) SDSTrack on LasHeR

![](images/0ae041af7cf0486b92cdd41c3dd040b2df52ce10d7f53dd3fa1e52eca829af99.jpg)
Figure 7. Quantitative comparison of tracking performance on the LasHeR dataset. The tracking performance of the ViPT, BAT, and SDSTrack trackers is reported, including the original performance without attacks and the performance under attacks. Lower PR and SR indicate a better attack. Please zoom in for a better view.

![](images/323f39373e80e2cf23c802d775e4e96c7945d673ba0d02d40dec1231ad08dfb8.jpg)
Figure 8. Qualitative comparison of tracking performance on the LasHeR dataset.
Figure 9. Qualitative comparison of tracking performance on the LasHeR dataset. The blue and red lines represent the IoU variation over frames of the predicted boxes under the clean trackers and the victimized trackers, respectively.

# 4.4. Application in the Physical Domain
Having verified our adversarial patches in digital scenes, we extend the experiments to demonstrate their efficacy in the physical domain. We directly apply the patches trained in the digital domain to the real world, using aerogel and paper to make the thermal and RGB patches, respectively, for deployment on pedestrians. The dual-spectral camera of a DJI Mavic 3T is used for video capture, and thirty sets of videos of different scenes are taken as test samples. The results of the test are shown in Fig. 10. It can be seen that the tracker's predicted bounding box is enlarged and cannot be accurately positioned due to the interference of the multi-modal adversarial patch. Specifically, optimizing the spatio-temporal joint loss teaches the patch to expand the tracker's prediction box. Therefore, in the physical world, the tracker is unable to accurately locate the target once affected by the adversarial patch.

# 4.5. Ablation Studies

We conduct ablation studies to assess the effectiveness of our design and parameter configuration, including: (I) the loss functions, (II) the parameter $K$, (III) the iteration mode, and (IV) the applied modality. The ablation studies are performed on the RGBT234 dataset against ViPT, with quantitative results presented in Table 1.

# 4.5.1. Loss Function

The loss $L_{st}$ interferes with the tracker from both the temporal and spatial dimensions, while $L_{mi}$ is used to disrupt the tracker's semantic perception. To demonstrate their effectiveness, we remove each of them individually, with the results shown in Table 1. In the absence of $L_{st}$ or $L_{mi}$, the attack performance weakens, demonstrating their role in diminishing the enhanced target localization accuracy achieved through multi-modal interaction.

![](images/fa1865e09e72f7edb94a1940891936ea2867a523c0a89c624cf6ced42d48b9ec.jpg)
Figure 10. Practical application in the physical domain.
| Metric | ViPT | I: w/o $L_{st}$ | I: w/o $L_{mi}$ | II: $K=0$ | II: $K=9$ | III: cross | III: combine | IV: only RGB | IV: only TIR | Ours |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PR | 0.835 | 0.709 | 0.735 | 0.672 | 0.651 | 0.645 | 0.703 | 0.691 | 0.669 | 0.621 |
| SR | 0.617 | 0.486 | 0.505 | 0.450 | 0.425 | 0.428 | 0.482 | 0.482 | 0.462 | 0.417 |
Table 1. Quantitative comparison of ablation studies, performed on the RGBT234 dataset against the ViPT tracker. Columns are grouped by configuration: (I) loss function, (II) parameter $K$, (III) iteration mode, and (IV) applied modality.

# 4.5.2. Parameter K

In our progressive attack framework, we first employ projected gradient descent to identify $K$ sets of coarse adversarial examples with effective attack performance. To verify its effectiveness, we grow the number of coarse adversarial samples $K$ from 0 to 9 and 18. As shown in Table 1, as $K$ increases from 0 to 9 and 18, the tracker's PR and SR consistently decrease. This indicates that such coarse-grained adversarial examples can effectively narrow the search space for refined attacks, thus facilitating a more effective attack. Specifically, this progressive method for finding adversarial examples prioritizes identifying multiple sets of coarse adversarial representations from a broad spectrum of noise. Subsequently, the multi-modal patch generation refines the adversarial details to produce the final adversarial patch, leveraging the numerous samples that contain adversarial information. Consequently, this approach enhances the attack performance.

# 4.5.3. Iteration Mode

One of the key contributions of this paper is the adaptive iterative strategy for attacking the RGB-T tracker. To demonstrate its effectiveness, we conduct ablation experiments on the iteration strategy, selecting the alternating iteration strategy and the joint optimization strategy ("cross" and "combine" in Table 1) for the comparison test. The former alternately optimizes the adversarial texture network and the adversarial shape network, while the latter simultaneously propagates the gradient flow to both networks. As shown in Table 1, our adaptive iteration approach can more effectively identify model vulnerabilities and generate more aggressive adversarial patches. Specifically, according to the contribution degree, our strategy can weaken the deep semantic attention and break the modality balance in the tracker.

# 4.5.4. Applied Modal

To compare the attack performance of the joint multi-modal patches against single-modal patches, we conduct an ablation on the modality to which patches are applied. Multi-modal patches $\{p_{vi}, p_{ir}\}$ are generated to simultaneously disrupt both the RGB and thermal modalities; in this ablation setup, we apply only one of them. As shown in Table 1, a single-modal adversarial patch produces a certain attack effect and confuses the tracker. Evidently, our multi-modal patches achieve the best attack performance, underscoring the necessity of designing joint multi-modal attacks for RGB-T trackers.

# 5. Conclusion

In this work, we present a pioneering framework for adversarial attacks on RGB-T multi-modal trackers by introducing an adaptive cross-attack mechanism through multi-modal response decoupling. Our approach leverages a modal-aware adaptive attack strategy and introduces a novel modal disturbance loss and a spatio-temporal joint attack loss to progressively impair the tracker's capability to perceive the target. The shared adversarial shape design also enhances our method's practicality, allowing seamless deployment of multi-modal patches in the real world. Experiments across the digital and physical domains confirm the robustness and effectiveness of our approach in evading RGB-T trackers, highlighting the potential and significance of adaptive, multi-modal adversarial attacks in advancing the understanding of tracker vulnerabilities.
# Acknowledgments

This work was supported by the National Natural Science Foundation of China (62276192).

# References

[1] Luca Bertinetto, Jack Valmadre, Joao F Henriques, Andrea Vedaldi, and Philip HS Torr. Fully-convolutional siamese networks for object tracking. In Proceedings of the European Conference on Computer Vision Workshops, pages 850-865, 2016. 1
[2] Goutam Bhat, Martin Danelljan, Luc Van Gool, and Radu Timofte. Learning discriminative model prediction for tracking. In Proceedings of the IEEE International Conference on Computer Vision, pages 6182-6191, 2019. 2
[3] Bing Cao, Junliang Guo, Pengfei Zhu, and Qinghua Hu. Bidirectional adapter for multimodal tracking. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 927-935, 2024. 2, 5
[4] Xuesong Chen, Xiyu Yan, Feng Zheng, Yong Jiang, Shu-Tao Xia, Yong Zhao, and Rongrong Ji. One-shot adversarial attacks on visual tracking with dual attention. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 10176-10185, 2020. 2
[5] Xuesong Chen, Xiyu Yan, Feng Zheng, Yong Jiang, Shu-Tao Xia, Yong Zhao, and Rongrong Ji. One-shot adversarial attacks on visual tracking with dual attention. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 10176-10185, 2020. 2
[6] Xuesong Chen, Canmiao Fu, Feng Zheng, Yong Zhao, Hongsheng Li, Ping Luo, and Guo-Jun Qi. A unified multi-scenario attacking network for visual object tracking. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 1097-1104, 2021. 1
[7] Martin Danelljan, Luc Van Gool, and Radu Timofte. Probabilistic regression for visual tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7183-7192, 2020. 2
[8] Li Ding, Yongwei Wang, Kaiwen Yuan, Minyang Jiang, Ping Wang, Hua Huang, and Z Jane Wang. Towards universal physical attacks on single object tracking. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 1236-1245, 2021. 2, 5
[9] Xiaojun Hou, Jiazheng Xing, Yijie Qian, Yaowei Guo, Shuo Xin, Junhao Chen, Kai Tang, Mengmeng Wang, Zhengkai Jiang, Liang Liu, et al. Sdstrack: Self-distillation symmetric adapter learning for multi-modal visual object tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 26551-26561, 2024. 2, 5
[10] Shuai Jia, Chao Ma, Yibing Song, and Xiaokang Yang. Robust tracking against adversarial attacks. In Proceedings of the European Conference on Computer Vision, pages 69-84, 2020. 1
[11] Shuai Jia, Yibing Song, Chao Ma, and Xiaokang Yang. Iou attack: Towards temporally coherent black-box adversarial attack for visual object tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6709-6718, 2021. 1
[12] Chenglong Li, Xinyan Liang, Yijuan Lu, Nan Zhao, and Jin Tang. Rgb-t object tracking: Benchmark and baseline. Pattern Recognition, 96:106977, 2019. 5
[13] Chenglong Li, Wanlin Xue, Yaqing Jia, Zhichen Qu, Bin Luo, Jin Tang, and Dengdi Sun. Lasher: A large-scale high-diversity benchmark for rgbt tracking. IEEE Transactions on Image Processing, 31:392-404, 2022. 5
[14] Liting Lin, Heng Fan, Zhipeng Zhang, Yong Xu, and Haibin Ling. Swintrack: A simple and strong baseline for transformer tracking. Advances in Neural Information Processing Systems, 35:16743-16754, 2022. 2
[15] Siao Liu, Zhaoyu Chen, Wei Li, Jiwei Zhu, Jiafeng Wang, Wenqiang Zhang, and Zhongxue Gan. Efficient universal shuffle attack for visual object tracking. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, pages 2739-2743, 2022. 2
[16] Andong Lu, Chenglong Li, Yuqing Yan, Jin Tang, and Bin Luo. Rgbt tracking via multi-adapter network with hierarchical divergence loss. IEEE Transactions on Image Processing, 30:5613-5625, 2021. 1
[17] Andong Lu, Cun Qian, Chenglong Li, Jin Tang, and Liang Wang. Duality-gated mutual condition network for rgbt tracking. IEEE Transactions on Neural Networks and Learning Systems, 36(3):4118-4131, 2025. 2
[18] Hyeonseob Nam and Bohyung Han. Learning multi-domain convolutional neural networks for visual tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4293-4302, 2016. 2
[19] Wuqiang Qi, Zhuoqun Zhang, and Zhishe Wang. Dmfuse: Diffusion model guided cross-attention learning for infrared and visible image fusion. Chinese Journal of Information Fusion, 1(3):226-241, 2024. 1
[20] Linfeng Tang, Hao Zhang, Han Xu, and Jiayi Ma. Deep learning-based image fusion: A survey. Journal of Image and Graphics, 28(1):3-36, 2023. 1
[21] Ning Wang, Wengang Zhou, Yibing Song, Chao Ma, Wei Liu, and Houqiang Li. Unsupervised deep representation learning for real-time tracking. International Journal of Computer Vision, 129(2):400-418, 2021. 2
[22] Yun Xiao, Mengmeng Yang, Chenglong Li, Lei Liu, and Jin Tang. Attribute-based progressive fusion network for rgbt tracking. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 2831-2838, 2022. 5
[23] Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1492-1500, 2017. 4
[24] Xinyu Xie, Yawen Cui, Tao Tan, Xubin Zheng, and Zitong Yu. Fusionmamba: Dynamic feature enhancement for multimodal image fusion with mamba. Visual Intelligence, 2(1):37, 2024. 1
[25] Bin Yan, Dong Wang, Huchuan Lu, and Xiaoyun Yang. Cooling-shrinking attack: Blinding the tracker with imperceptible noises. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 990-999, 2020. 2, 4
[26] Bin Yan, Houwen Peng, Jianlong Fu, Dong Wang, and Huchuan Lu. Learning spatio-temporal transformer for visual tracking. In Proceedings of the IEEE International Conference on Computer Vision, pages 10448-10457, 2021. 1
[27] Fan Zhang, Hanwei Peng, Lingli Yu, Yuqian Zhao, and Baifan Chen. Dual-modality space-time memory network for rgbt tracking. IEEE Transactions on Instrumentation and Measurement, 72:1-12, 2023. 2
[28] Tianlu Zhang, Hongyuan Guo, Qiang Jiao, Qiang Zhang, and Jungong Han. Efficient rgb-t tracking via cross-modality distillation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5404-5413, 2023. 2
[29] Jiawen Zhu, Simiao Lai, Xin Chen, Dong Wang, and Huchuan Lu. Visual prompt multi-modal tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9516-9526, 2023.
2, 5 \ No newline at end of file diff --git a/2025/ACAttack_ Adaptive Cross Attacking RGB-T Tracker via Multi-Modal Response Decoupling/images.zip b/2025/ACAttack_ Adaptive Cross Attacking RGB-T Tracker via Multi-Modal Response Decoupling/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..508fb902f77475b428fcd7df83d7c1e5e90a0d9f --- /dev/null +++ b/2025/ACAttack_ Adaptive Cross Attacking RGB-T Tracker via Multi-Modal Response Decoupling/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8ca91380b266c472b6402adebd111dcf525e4c4404e8db651ca055f91de56a05 +size 719826 diff --git a/2025/ACAttack_ Adaptive Cross Attacking RGB-T Tracker via Multi-Modal Response Decoupling/layout.json b/2025/ACAttack_ Adaptive Cross Attacking RGB-T Tracker via Multi-Modal Response Decoupling/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..2162871a596711b1ddfd77a26480be195d820cef --- /dev/null +++ b/2025/ACAttack_ Adaptive Cross Attacking RGB-T Tracker via Multi-Modal Response Decoupling/layout.json @@ -0,0 +1,8085 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 127, + 103, + 483, + 140 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 127, + 103, + 483, + 140 + ], + "spans": [ + { + "bbox": [ + 127, + 103, + 483, + 140 + ], + "type": "text", + "content": "ACAttack: Adaptive Cross Attacking RGB-T Tracker via Multi-Modal Response Decoupling" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 171, + 161, + 448, + 190 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 171, + 161, + 448, + 190 + ], + "spans": [ + { + "bbox": [ + 171, + 161, + 448, + 190 + ], + "type": "text", + "content": "Xinyu Xiang Qinglong Yan Hao Zhang* Jiayi Ma* Wuhan University, China" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 61, + 191, + 555, + 204 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 191, + 555, + 204 + ], + "spans": [ + { + "bbox": [ + 61, + 191, + 555, + 204 + ], + "type": "text", + "content": "xiangxinyu@whu.edu.cn, qinglong_yan@whu.edu.cn, zhpersonalbox@gmail.com, jyma2010@gmail.com" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 151, + 231, + 200, + 243 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 151, + 231, + 200, + 243 + ], + "spans": [ + { + "bbox": [ + 151, + 231, + 200, + 243 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 54, + 256, + 297, + 555 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 256, + 297, + 555 + ], + "spans": [ + { + "bbox": [ + 54, + 256, + 297, + 555 + ], + "type": "text", + "content": "The research on adversarial attacks against trackers primarily concentrates on the RGB modality, whereas the methodology for attacking RGB-T multi-modal trackers has seldom been explored so far. This work represents an innovative attempt to develop an adaptive cross attack framework via multi-modal response decoupling, generating multi-modal adversarial patches to evade RGB-T trackers. Specifically, a modal-aware adaptive attack strategy is introduced to weaken the modality with high common information contribution alternately and iteratively, achieving the modal decoupling attack. In order to perturb the judgment of the modal balance mechanism in the tracker, we design a modal disturbance loss to increase the distance of the response map of the single-modal adversarial samples in the tracker. 
Besides, we also propose a novel spatio-temporal joint attack loss to progressively deteriorate the tracker's perception of the target. Moreover, the design of the shared adversarial shape enables the generated multi-modal adversarial patches to be readily deployed in real-world scenarios, effectively reducing the interference of the patch posting process on the shape attack of the infrared adversarial layer. Extensive digital and physical domain experiments demonstrate the effectiveness of our multi-modal adversarial patch attack. Our code is available at https://github.com/Xinyu-Xiang/ACAttack." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 56, + 578, + 135, + 590 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 578, + 135, + 590 + ], + "spans": [ + { + "bbox": [ + 56, + 578, + 135, + 590 + ], + "type": "text", + "content": "1. Introduction" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 598, + 296, + 694 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 598, + 296, + 694 + ], + "spans": [ + { + "bbox": [ + 55, + 598, + 296, + 694 + ], + "type": "text", + "content": "The adversarial attack on visual object tracking (VOT) [1, 26] aims to mislead the prediction results of the tracker through the generated adversarial disturbance, find the model vulnerabilities, and then promote the security of the tracking model in real-life. Single-modal tracking attack methods have been extensively studied, but with the wide application of multi-modal devices [19, 20, 24], multimodal trackers are widely used in safety-critical real-world" + } + ] + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 316, + 230, + 554, + 378 + ], + "blocks": [ + { + "bbox": [ + 316, + 230, + 554, + 378 + ], + "lines": [ + { + "bbox": [ + 316, + 230, + 554, + 378 + ], + "spans": [ + { + "bbox": [ + 316, + 230, + 554, + 378 + ], + "type": "image", + "image_path": "c344516e1824eddade6022528581e550f426aeec60e7d6e47551d4607884a8b0.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 313, + 385, + 555, + 452 + ], + "lines": [ + { + "bbox": [ + 313, + 385, + 555, + 452 + ], + "spans": [ + { + "bbox": [ + 313, + 385, + 555, + 452 + ], + "type": "text", + "content": "Figure 1. Our attack strategy against RGB-T trackers. An adaptive attack strategy, sensitive to modality, is introduced to alternately and iteratively suppress the modality with a high contribution of shared information. Additionally, a modal disturbance loss is crafted to enlarge the response map distance for single-modal adversarial samples within the tracker." + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_caption" + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 474, + 552, + 486 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 474, + 552, + 486 + ], + "spans": [ + { + "bbox": [ + 313, + 474, + 552, + 486 + ], + "type": "text", + "content": "fields such as autonomous driving and urban security [16]." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 312, + 487, + 555, + 712 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 487, + 555, + 712 + ], + "spans": [ + { + "bbox": [ + 312, + 487, + 555, + 712 + ], + "type": "text", + "content": "To address the urgent need to explore the security of trackers, adversarial attack techniques for trackers have emerged in rapid succession, including traditional gradient-based attack approaches and deep-network-based attacks. 
The former methods [10, 11] utilize hand-crafted parameters and apply many times of gradient ascent to maximize an adversarial loss function for misguiding deep networks. Although it can achieve certain attack effects for specific types of trackers, it is challenging to comprehensively explore and attack the potential vulnerabilities of the different trackers due to the limitations of inflexible pattern design. Nonetheless, the latter one [6] applies tremendous data to train an adversarial patches-generator including flexible architectures and optimization strategies to better automatically search for model weaknesses and realize tracker attacks. Therefore, in comparison with a traditional gradient-based attack approach, deep-network-based paradigm can more automatically and flexibly excavate the security issues within the tracker." + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "spans": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "text", + "content": "CVF" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "spans": [ + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "type": "text", + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 703, + 148, + 712 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 703, + 148, + 712 + ], + "spans": [ + { + "bbox": [ + 67, + 703, + 148, + 712 + ], + "type": "text", + "content": "*Corresponding authors." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 293, + 748, + 317, + 758 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 317, + 758 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 317, + 758 + ], + "type": "text", + "content": "22099" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "bbox": [ + 57, + 72, + 294, + 370 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 72, + 294, + 370 + ], + "spans": [ + { + "bbox": [ + 57, + 72, + 294, + 370 + ], + "type": "text", + "content": "Although prior efforts for adversarial attack methods are effective in interference trackers, several challenges still need to be addressed. Notably, existing adversarial attack methods [5, 15, 25] on tracking are designed for RGB modality, whereas the methodologies for attacking RGB-T multi-modal trackers are less explored so far. Considering the widespread deployment of multi-modal tracking technology [17, 27, 28] in several safety-critical areas, it is urgent to explore and implement adversarial attacks of multimodal tracking to understand the potential vulnerabilities of the trackers. However, as shown in Fig. 1, the unique modal coupling and structural design of RGB-T trackers make it a great challenge to successfully find model vulnerabilities. Firstly, due to the modal equilibrium mechanism and coupled multi-modal information, it is difficult to successfully jam the RGB-T tracking model itself. 
Specifically, the modal balancing strategy in the multi-modal tracker can effectively prevent the attack of adversarial perturbation in a single modal. Secondly, the coupling of multi-modal information can effectively weaken the attacks against the consensus region of the target. Thirdly, the deployment of patches in the physical world is also challenging because the stacked placement of multi-modal patches has a probability of compromising the expression of infrared adversarial shapes, reducing their synergy performance." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 57, + 379, + 294, + 712 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 379, + 294, + 712 + ], + "spans": [ + { + "bbox": [ + 57, + 379, + 294, + 712 + ], + "type": "text", + "content": "Considering these challenges, we propose ACAttack, an adaptive cross attack framework via multi-modal response decoupling. It aims to generate multi-modal adversarial patches to evade RGB-T trackers in both digital and physical domains. Specifically, this framework can gradually and adaptively optimize, discover plenty of rough adversarial samples, and then map them to the high-dimensional adversarial space of different modalities according to the modal response contribution factor, forming multi-modal adversarial patches. Secondly, a modal-aware adaptive attack strategy is introduced to weaken tracker's deep semantic attention to the modality with high common information contribution according to the contribution degree of modal response alternately and iteratively, achieving the modal decoupling attack. When the contributions of two modalities are similar, we design a modal disturbance loss to search the modal imbalance vulnerabilities of the tracker, expand the distance of the response map of the single-modal adversarial samples in the tracker, and perturb the judgment of the balance modal in the tracker. We also design a spatio-temporal joint attack loss to build progressively enlarged pseudo-GT between consecutive frames, which progressively deteriorates the tracker's perception of the target. Thirdly, the design of the shared adversarial shape is deployed to eliminate the interference of visible patches on the expression of infrared adversarial shapes. After the shape is shared, it can not only reduce the consumption of the adversarial shape's inter-modal attack ability but also realize" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 317, + 73, + 500, + 83 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 73, + 500, + 83 + ], + "spans": [ + { + "bbox": [ + 317, + 73, + 500, + 83 + ], + "type": "text", + "content": "attacks other than texture in the visible modal." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 328, + 85, + 529, + 96 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 328, + 85, + 529, + 96 + ], + "spans": [ + { + "bbox": [ + 328, + 85, + 529, + 96 + ], + "type": "text", + "content": "In summary, we make the following contributions:" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 316, + 100, + 553, + 290 + ], + "type": "list", + "angle": 0, + "index": 8, + "blocks": [ + { + "bbox": [ + 317, + 100, + 553, + 146 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 100, + 553, + 146 + ], + "spans": [ + { + "bbox": [ + 317, + 100, + 553, + 146 + ], + "type": "text", + "content": "- We make an innovative attempt to propose an adaptive cross attack framework via multi-modal response decoupling. 
It can generate multi-modal adversarial patches to mislead RGB-T trackers effectively." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 316, + 148, + 553, + 206 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 148, + 553, + 206 + ], + "spans": [ + { + "bbox": [ + 316, + 148, + 553, + 206 + ], + "type": "text", + "content": "- We develop a novel modal attack flow, in which modal-aware adaptive attack strategy and modal attack constraints alternately disturb the modes with high contribution to achieve modal decoupling and destroy modal balance mechanism of tracker, respectively." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 316, + 207, + 553, + 254 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 207, + 553, + 254 + ], + "spans": [ + { + "bbox": [ + 316, + 207, + 553, + 254 + ], + "type": "text", + "content": "- We design the shape-shared stack strategy to linkage the visible and infrared adversarial shapes, reducing the attack consumption of multi-modal patches mutual deployment in physical scenarios." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 316, + 255, + 553, + 290 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 255, + 553, + 290 + ], + "spans": [ + { + "bbox": [ + 316, + 255, + 553, + 290 + ], + "type": "text", + "content": "- Experimental results show that multi-modal patches can efficiently fool RGB-T trackers in standard RGB-T tracking datasets and real scenes." + } + ] + } + ], + "index": 7 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 317, + 303, + 399, + 314 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 303, + 399, + 314 + ], + "spans": [ + { + "bbox": [ + 317, + 303, + 399, + 314 + ], + "type": "text", + "content": "2. Related Work" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 317, + 323, + 444, + 335 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 323, + 444, + 335 + ], + "spans": [ + { + "bbox": [ + 317, + 323, + 444, + 335 + ], + "type": "text", + "content": "2.1. Visual Object Tracking" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 315, + 340, + 553, + 602 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 340, + 553, + 602 + ], + "spans": [ + { + "bbox": [ + 315, + 340, + 553, + 602 + ], + "type": "text", + "content": "Given the tracked object in the first frame, object tracking aims to recognize and locate the object in subsequent frames. Many RGB tracking methods [2, 7, 14, 18, 21] have been proposed and achieved commendable tracking performance. However, RGB sensors struggle to capture objects effectively under challenging conditions such as occlusion and low light, limiting the performance of RGB trackers. To address this, the RGB-T tracking paradigm is introduced, which is not restricted to a single RGB modality but instead integrates the complementary information from both RGB and thermal modalities. This fusion enables more robust tracking capabilities. ViPT [29] introduces a vision prompt tracking framework that leverages the foundational model with strong representation capabilities, enabling interaction between the thermal and RGB modalities through a modality-complementing prompter. BAT [3] proposes a universal bidirectional adapter, which enables mutual prompt between the thermal and RGB modalities and further improves tracking performance. 
SDSTrack [9] designs a complementary masked patch distillation strategy based on self-distillation learning, which enhances the tracking robustness in extreme weather." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 317, + 613, + 429, + 623 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 613, + 429, + 623 + ], + "spans": [ + { + "bbox": [ + 317, + 613, + 429, + 623 + ], + "type": "text", + "content": "2.2. Adversarial Attacks" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 315, + 630, + 553, + 712 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 630, + 553, + 712 + ], + "spans": [ + { + "bbox": [ + 315, + 630, + 553, + 712 + ], + "type": "text", + "content": "Currently, adversarial attacks in the tracking task primarily target RGB trackers. For instance, APYVOT [4] proposes an optimization objective function with a dual-attention mechanism to generate perturbations, disrupting tracking by interfering solely with the initial frame. MTD [8] introduces a maximum textural discrepancy loss function that misleads the visual trackers by decorrelating the template" + } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 749, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 749, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 749, + 317, + 757 + ], + "type": "text", + "content": "22100" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 60, + 73, + 555, + 205 + ], + "blocks": [ + { + "bbox": [ + 60, + 73, + 555, + 205 + ], + "lines": [ + { + "bbox": [ + 60, + 73, + 555, + 205 + ], + "spans": [ + { + "bbox": [ + 60, + 73, + 555, + 205 + ], + "type": "image", + "image_path": "89724a5020b0aa2d41c2cccabb03770003a311051704048345ec27280933c31c.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 60, + 208, + 555, + 308 + ], + "blocks": [ + { + "bbox": [ + 60, + 208, + 555, + 308 + ], + "lines": [ + { + "bbox": [ + 60, + 208, + 555, + 308 + ], + "spans": [ + { + "bbox": [ + 60, + 208, + 555, + 308 + ], + "type": "image", + "image_path": "51abf7b5b1b9ea20cd809cc7842a0f6369b3ce07abf87277bc899e287d84adda.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 212, + 316, + 398, + 327 + ], + "lines": [ + { + "bbox": [ + 212, + 316, + 398, + 327 + ], + "spans": [ + { + "bbox": [ + 212, + 316, + 398, + 327 + ], + "type": "text", + "content": "Figure 2. The overall framework of our ACAttack." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 348, + 296, + 421 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 348, + 296, + 421 + ], + "spans": [ + { + "bbox": [ + 55, + 348, + 296, + 421 + ], + "type": "text", + "content": "and search frame at hierarchical feature scales. These methods, however, fail to disrupt the significant feature enhancement resulting from the interactions between RGB and thermal modalities, which limits their effectiveness against RGB-T trackers. Therefore, it is essential to develop the attack strategy specifically designed for RGB-T tracking." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 434, + 138, + 448 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 434, + 138, + 448 + ], + "spans": [ + { + "bbox": [ + 55, + 434, + 138, + 448 + ], + "type": "text", + "content": "3. Methodology" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 455, + 287, + 467 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 455, + 287, + 467 + ], + "spans": [ + { + "bbox": [ + 55, + 455, + 287, + 467 + ], + "type": "text", + "content": "3.1. Coarse-to-Fine Modality Attack Framework" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 473, + 296, + 651 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 473, + 296, + 651 + ], + "spans": [ + { + "bbox": [ + 55, + 473, + 296, + 651 + ], + "type": "text", + "content": "With the help of the progressive modality information integration strategy, the multi-modal trackers gradually strengthen the common scene representation and target response, thereby achieving a robust tracking performance superior to that of the single-modal trackers. Consequently, a coarse-to-fine architecture, named ACAttack, is designed to progressively degrade the modality integration capability of RGB-T models, and it can be divided into two stages. The overall architecture of our ACAttack is illustrated in Fig. 2 and Algorithm 1. First, we employ projected gradient descent (PGD) in stage1 to identify a set of adversarial examples " + }, + { + "bbox": [ + 55, + 473, + 296, + 651 + ], + "type": "inline_equation", + "content": "\\{p_i\\}_{i=1}^k" + }, + { + "bbox": [ + 55, + 473, + 296, + 651 + ], + "type": "text", + "content": " with sufficient aggressiveness, narrowing the search space for refined attacks and increasing the likelihood of discovering strong adversarial examples, formulated as:" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 124, + 654, + 295, + 669 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 124, + 654, + 295, + 669 + ], + "spans": [ + { + "bbox": [ + 124, + 654, + 295, + 669 + ], + "type": "interline_equation", + "content": "\\{p_i\\}_{i=1}^{k} = \\mathrm{PGD}\\left(p_i^{init}\\right), \\tag {1}", + "image_path": "1d447bdf8dc73c77c28c1ed5c648e8ec7a117d80e647b43920eddd1947fbd212.jpg" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 677, + 296, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 677, + 296, + 715 + ], + "spans": [ + { + "bbox": [ + 55, + 677, + 296, + 715 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 55, + 677, + 296, + 715 + ], + "type": "inline_equation", + "content": "p_i^{init}" + }, + { + "bbox": [ + 55, + 677, + 296, + 715 + ], + "type": "text", + "content": " is randomly initialized with noise patches. The generated patches are subsequently loaded onto the visible image " + }, + { + "bbox": [ + 55, + 677, + 296, + 715 + ], + "type": "inline_equation", + "content": "I_{vi}" + }, + { + "bbox": [ + 55, + 677, + 296, + 715 + ], + "type": "text", + "content": " to form a visible adversarial sample " + }, + { + "bbox": [ + 55, + 677, + 296, + 715 + ], + "type": "inline_equation", + "content": "I_{vi}^{adv}" + }, + { + "bbox": [ + 55, + 677, + 296, + 715 + ], + "type": "text", + "content": ". 
This" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 349, + 468, + 361 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 349, + 468, + 361 + ], + "spans": [ + { + "bbox": [ + 313, + 349, + 468, + 361 + ], + "type": "text", + "content": "process can be formulated as follows:" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 364, + 370, + 555, + 384 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 364, + 370, + 555, + 384 + ], + "spans": [ + { + "bbox": [ + 364, + 370, + 555, + 384 + ], + "type": "interline_equation", + "content": "I_{vi}^{adv} = p_i \\odot M + I_{vi} \\odot (1 - M), \\tag {2}", + "image_path": "7f66e973196f84d8ae25bbb83bac0e74db8722c7c1ab5c5dcc8f16e0eab93393.jpg" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 394, + 555, + 455 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 394, + 555, + 455 + ], + "spans": [ + { + "bbox": [ + 313, + 394, + 555, + 455 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 394, + 555, + 455 + ], + "type": "inline_equation", + "content": "M" + }, + { + "bbox": [ + 313, + 394, + 555, + 455 + ], + "type": "text", + "content": " is the binary mask for applying the adversarial patch. " + }, + { + "bbox": [ + 313, + 394, + 555, + 455 + ], + "type": "inline_equation", + "content": "\\odot" + }, + { + "bbox": [ + 313, + 394, + 555, + 455 + ], + "type": "text", + "content": " represents the element-wise Hadamard product. The adversarial visible image " + }, + { + "bbox": [ + 313, + 394, + 555, + 455 + ], + "type": "inline_equation", + "content": "I_{vi}^{adv}" + }, + { + "bbox": [ + 313, + 394, + 555, + 455 + ], + "type": "text", + "content": " and the clean infrared image " + }, + { + "bbox": [ + 313, + 394, + 555, + 455 + ], + "type": "inline_equation", + "content": "I_{ir}" + }, + { + "bbox": [ + 313, + 394, + 555, + 455 + ], + "type": "text", + "content": " are sent to the RGB-T tracker " + }, + { + "bbox": [ + 313, + 394, + 555, + 455 + ], + "type": "inline_equation", + "content": "T(\\cdot)" + }, + { + "bbox": [ + 313, + 394, + 555, + 455 + ], + "type": "text", + "content": " to predict the final bounding box " + }, + { + "bbox": [ + 313, + 394, + 555, + 455 + ], + "type": "inline_equation", + "content": "Bbox_{pred}" + }, + { + "bbox": [ + 313, + 394, + 555, + 455 + ], + "type": "text", + "content": " of the target, which is expressed as:" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 380, + 467, + 553, + 480 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 380, + 467, + 553, + 480 + ], + "spans": [ + { + "bbox": [ + 380, + 467, + 553, + 480 + ], + "type": "interline_equation", + "content": "Bbox_{pred} = T\\left(I_{vi}^{adv}, I_{ir}\\right). 
\\tag {3}", + "image_path": "e4383b0e5eb7445da71654f66120ea37b8d0b6121d4b9c1242992a4ce50cba36.jpg" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 487, + 554, + 522 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 487, + 554, + 522 + ], + "spans": [ + { + "bbox": [ + 313, + 487, + 554, + 522 + ], + "type": "text", + "content": "We optimize this process by minimizing the conventional attack loss " + }, + { + "bbox": [ + 313, + 487, + 554, + 522 + ], + "type": "inline_equation", + "content": "L_{att}" + }, + { + "bbox": [ + 313, + 487, + 554, + 522 + ], + "type": "text", + "content": " relative to the center point, which is defined as:" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 339, + 534, + 555, + 548 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 339, + 534, + 555, + 548 + ], + "spans": [ + { + "bbox": [ + 339, + 534, + 555, + 548 + ], + "type": "interline_equation", + "content": "L_{att} = -\\left\\| C_p(Bbox_{pred}) - C_p(Bbox_{gt}) \\right\\|_{2}^{2}, \\tag {4}", + "image_path": "9548d47f745db1db2a17926e42930b9ca47b55afecdb790618f75a6eca604efc.jpg" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 313, + 558, + 555, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 558, + 555, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 558, + 555, + 713 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 558, + 555, + 713 + ], + "type": "inline_equation", + "content": "C_p" + }, + { + "bbox": [ + 313, + 558, + 555, + 713 + ], + "type": "text", + "content": " denotes the operator to obtain the center point of bounding box " + }, + { + "bbox": [ + 313, + 558, + 555, + 713 + ], + "type": "inline_equation", + "content": "Bbox" + }, + { + "bbox": [ + 313, + 558, + 555, + 713 + ], + "type": "text", + "content": ". Specifically, when the attack loss " + }, + { + "bbox": [ + 313, + 558, + 555, + 713 + ], + "type": "inline_equation", + "content": "L_{att}" + }, + { + "bbox": [ + 313, + 558, + 555, + 713 + ], + "type": "text", + "content": " reaches a predefined threshold, the iteration is halted, and a rough adversarial sample is generated. This process is repeated " + }, + { + "bbox": [ + 313, + 558, + 555, + 713 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 313, + 558, + 555, + 713 + ], + "type": "text", + "content": " times to generate a set of " + }, + { + "bbox": [ + 313, + 558, + 555, + 713 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 313, + 558, + 555, + 713 + ], + "type": "text", + "content": " rough adversarial samples. Considering the different imaging principles of the infrared and visible modalities, the multi-modal patches are specially designed according to these differences. Specifically, the visible modality mainly employs the adversarial texture to interfere. For the infrared modality, texture is difficult to detect, so the adversarial shape is used to attack. 
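To make the stage-1 coarse attack concrete, here is a minimal PyTorch sketch of Eqs. (1)-(4); it is not the authors' released code. It assumes a differentiable `tracker(vi, ir)` returning a box tensor `(cx, cy, w, h)`, images in [0, 1], and the step size `alpha`, the stop threshold, and the sign convention of the early-stopping test are illustrative assumptions.

```python
import torch

def apply_patch(patch, image, mask):
    # Eq. (2): paste the patch onto the visible frame via the binary mask M
    return patch * mask + image * (1 - mask)

def center_attack_loss(pred_box, gt_box):
    # Eq. (4): negative squared center distance; minimizing it pushes the
    # predicted center away from the ground-truth center
    return -((pred_box[:2] - gt_box[:2]) ** 2).sum()

def pgd_coarse_attack(tracker, I_vi, I_ir, mask, gt_box,
                      steps=180, alpha=0.01, stop=-9.0):
    # Eq. (1): one rough patch found by PGD from a random initialization
    patch = torch.rand_like(I_vi, requires_grad=True)  # p_i^init
    for _ in range(steps):
        pred_box = tracker(apply_patch(patch, I_vi, mask), I_ir)  # Eq. (3)
        loss = center_attack_loss(pred_box, gt_box)
        loss.backward()
        with torch.no_grad():
            patch -= alpha * patch.grad.sign()  # signed-gradient PGD step
            patch.clamp_(0, 1)
        patch.grad = None
        if loss.item() < stop:  # early stop once L_att passes the threshold
            break
    return patch.detach()
```

Calling `pgd_coarse_attack` k times reproduces the outer loop of stage1 in Algorithm 1, yielding the rough set {p_i}.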
Subsequently, the set of adversarial samples generated in the coarse attack stage (stage1) is fed into the" + } + ] + } + ], + "index": 15 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "text", + "content": "22101" + } + ] + } + ], + "index": 16 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 296, + 192 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 296, + 192 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 296, + 192 + ], + "type": "text", + "content": "subsequent fine attack process (stage2) for further refinement, resulting in the generation of multi-modal adversarial patches with strong attack performance. Finally, through continuous iterative optimization, the multi-mode patch will share the same attack shape, while the visible patch will also possess adversarial texture to confuse the tracker. The fine-grained attack phase targets the modality of the multi-modal tracker and consists of modal decoupling attacks and modal balance interference, which will be detailed in the subsequent sections." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 198, + 200, + 211 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 198, + 200, + 211 + ], + "spans": [ + { + "bbox": [ + 55, + 198, + 200, + 211 + ], + "type": "text", + "content": "3.2. Modal Decoupling Attack" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 216, + 296, + 384 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 216, + 296, + 384 + ], + "spans": [ + { + "bbox": [ + 55, + 216, + 296, + 384 + ], + "type": "text", + "content": "The RGB-T tracker implicitly couples the contributions of the two modalities, thereby enhancing tracking accuracy. Given the significant role of modal contribution in the tracker, we propose a modal decoupling attack to adaptively diminish the influence of advantageous modalities. Specifically, we use the coarse adversarial samples from the stage1 as input for the stage2, feeding them simultaneously into the adversarial texture generation network " + }, + { + "bbox": [ + 55, + 216, + 296, + 384 + ], + "type": "inline_equation", + "content": "G_{tex}^{Adv}" + }, + { + "bbox": [ + 55, + 216, + 296, + 384 + ], + "type": "text", + "content": " and the adversarial shape generation network " + }, + { + "bbox": [ + 55, + 216, + 296, + 384 + ], + "type": "inline_equation", + "content": "G_{shape}^{Adv}" + }, + { + "bbox": [ + 55, + 216, + 296, + 384 + ], + "type": "text", + "content": ". For attacking the infrared modality, the rough adversarial sample set from the first stage is encoded into " + }, + { + "bbox": [ + 55, + 216, + 296, + 384 + ], + "type": "inline_equation", + "content": "r" + }, + { + "bbox": [ + 55, + 216, + 296, + 384 + ], + "type": "text", + "content": " dimensions via continuous downsampling and an MLP, controlling the adversarial shape. 
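The downsampling-plus-MLP encoding step just described is not specified further in the text; the following is a purely hypothetical sketch (layer widths, depths, and the choice r = 64 are all assumptions) of how the k rough patches could be compressed into r-dimensional codes that condition the adversarial shape generator.

```python
import torch
import torch.nn as nn

class RoughPatchEncoder(nn.Module):
    """Hypothetical encoder: repeatedly downsample the rough patches, then
    project them to an r-dimensional code for the shape generator."""
    def __init__(self, r=64):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.mlp = nn.Linear(32 * 4 * 4, r)

    def forward(self, patches):          # patches: (k, 3, H, W)
        z = self.down(patches).flatten(1)
        return self.mlp(z)               # (k, r) shape-control codes
```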
The infrared patch " + }, + { + "bbox": [ + 55, + 216, + 296, + 384 + ], + "type": "inline_equation", + "content": "p_{ir}" + }, + { + "bbox": [ + 55, + 216, + 296, + 384 + ], + "type": "text", + "content": " generation process can be expressed as follows:" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 124, + 390, + 295, + 407 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 124, + 390, + 295, + 407 + ], + "spans": [ + { + "bbox": [ + 124, + 390, + 295, + 407 + ], + "type": "interline_equation", + "content": "p_{ir} = G_{shape}^{Adv}\\left(\\{p_i\\}_{i=1}^{k}\\right). \\tag {5}", + "image_path": "b0fa0c9002bf9725f1667c3cfa275ebf2054f056633b1838f01fdeb8a764c7ba.jpg" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 413, + 296, + 449 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 413, + 296, + 449 + ], + "spans": [ + { + "bbox": [ + 55, + 413, + 296, + 449 + ], + "type": "text", + "content": "The adversarial texture generation network generates adversarial textures to attack the visible modality using residual connections and upsampling [23], which is defined as:" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 100, + 458, + 295, + 474 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 100, + 458, + 295, + 474 + ], + "spans": [ + { + "bbox": [ + 100, + 458, + 295, + 474 + ], + "type": "interline_equation", + "content": "p_{vi} = \\left(1 - p_{ir}\\right) \\odot G_{tex}^{Adv}\\left(\\{p_i\\}_{i=1}^{k}\\right), \\tag {6}", + "image_path": "1cc42367fc3c6ab17d7965c3cc1b8d194dbc6b2720ca118aed7bd1ea0743380f.jpg" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 478, + 277, + 490 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 478, + 277, + 490 + ], + "spans": [ + { + "bbox": [ + 55, + 478, + 277, + 490 + ], + "type": "text", + "content": "where the visible patch with adversarial textures is " + }, + { + "bbox": [ + 55, + 478, + 277, + 490 + ], + "type": "inline_equation", + "content": "p_{vi}" + }, + { + "bbox": [ + 55, + 478, + 277, + 490 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 491, + 296, + 597 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 491, + 296, + 597 + ], + "spans": [ + { + "bbox": [ + 55, + 491, + 296, + 597 + ], + "type": "text", + "content": "Subsequently, the modal contribution of the current network input is calculated as the reciprocal of the distance between the response map obtained from single-modal data and the response map from dual-modal input. A larger reciprocal (i.e., a smaller distance) indicates that the response maps from dual-modal and single-modal inputs are more similar, suggesting a greater contribution from the current single modality to the tracker. 
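A minimal sketch of how Eqs. (5) and (6) compose the two patches, treating the shape and texture generators named in the text as black boxes (their architectures are not reproduced here) and assuming both return tensors in [0, 1]:

```python
def generate_multimodal_patches(rough_patches, G_shape_adv, G_tex_adv):
    # Eq. (5): the shape generator maps the k rough samples to an infrared
    # patch that carries only an adversarial shape
    p_ir = G_shape_adv(rough_patches)
    # Eq. (6): the texture generator supplies the visible texture, masked by
    # (1 - p_ir) so the visible patch complements the shared shape
    p_vi = (1 - p_ir) * G_tex_adv(rough_patches)
    return p_vi, p_ir
```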
The modal response contribution can be expressed as follows:" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 108, + 597, + 295, + 623 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 597, + 295, + 623 + ], + "spans": [ + { + "bbox": [ + 108, + 597, + 295, + 623 + ], + "type": "interline_equation", + "content": "c_{m} = \\frac{1}{\\mathrm{dis}(R(m, m), R(vi, ir))}, \\tag {7}", + "image_path": "c429a68758dbc3ebc57d59069f7b3cb1eeeb36448c558e78ca49542c085323d1.jpg" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 55, + 629, + 296, + 689 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 629, + 296, + 689 + ], + "spans": [ + { + "bbox": [ + 55, + 629, + 296, + 689 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 55, + 629, + 296, + 689 + ], + "type": "inline_equation", + "content": "c_{m}" + }, + { + "bbox": [ + 55, + 629, + 296, + 689 + ], + "type": "text", + "content": " represents the contribution value of the " + }, + { + "bbox": [ + 55, + 629, + 296, + 689 + ], + "type": "inline_equation", + "content": "m \\in \\{vi, ir\\}" + }, + { + "bbox": [ + 55, + 629, + 296, + 689 + ], + "type": "text", + "content": " modality to the tracker. " + }, + { + "bbox": [ + 55, + 629, + 296, + 689 + ], + "type": "inline_equation", + "content": "dis" + }, + { + "bbox": [ + 55, + 629, + 296, + 689 + ], + "type": "text", + "content": " stands for the distance function and is used to measure the Euclidean distance between response maps. " + }, + { + "bbox": [ + 55, + 629, + 296, + 689 + ], + "type": "inline_equation", + "content": "R(\\cdot, \\cdot)" + }, + { + "bbox": [ + 55, + 629, + 296, + 689 + ], + "type": "text", + "content": " shows the response map acquired by the tracker under the current input." 
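A minimal sketch of Eqs. (7) and (8); `eps` is an added guard against division by zero, and the response maps are assumed to be same-shaped tensors:

```python
import torch
import torch.nn.functional as F

def modal_contribution(resp_single, resp_dual, eps=1e-8):
    # Eq. (7): the closer a single-modal response map is to the dual-modal
    # one, the larger that modality's contribution
    return 1.0 / (torch.dist(resp_single, resp_dual) + eps)

def contribution_scores(R_vi, R_ir, R_dual):
    # Eq. (8): softmax-normalize (c_vi, c_ir) so the scores sum to one
    c = torch.stack([modal_contribution(R_vi, R_dual),
                     modal_contribution(R_ir, R_dual)])
    return F.softmax(c, dim=0)
```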
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 55, + 689, + 296, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 689, + 296, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 689, + 296, + 714 + ], + "type": "text", + "content": "To normalize the modal contribution, a softmax operation is applied to the reciprocal distance, yielding the final" + } + ] + } + ], + "index": 10 + }, + { + "type": "code", + "bbox": [ + 316, + 88, + 555, + 424 + ], + "blocks": [ + { + "bbox": [ + 320, + 75, + 484, + 87 + ], + "lines": [ + { + "bbox": [ + 320, + 75, + 484, + 87 + ], + "spans": [ + { + "bbox": [ + 320, + 75, + 484, + 87 + ], + "type": "text", + "content": "Algorithm 1: The ACAttack Algorithm" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "code_caption" + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "lines": [ + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "spans": [ + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "text", + "content": "Input: Random patches " + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "inline_equation", + "content": "p_i^{init}" + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "text", + "content": ", parameters " + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "inline_equation", + "content": "M_{stage1}" + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "inline_equation", + "content": "M_{stage2}" + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "inline_equation", + "content": "\\xi" + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "inline_equation", + "content": "\\zeta" + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "text", + "content": " \nOutput: Optimized multi-modal patches " + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "inline_equation", + "content": "p_{vi}, p_{ir}" + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "text", + "content": " \n1 Iteration: \n2 Initialize a random patch " + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "inline_equation", + "content": "p_i^{init}" + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "text", + "content": "; \n3 " + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "inline_equation", + "content": "i = i + 1" + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "text", + "content": "; \n4 Iteration: \n5 Generate " + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "inline_equation", + "content": "p_i" + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "text", + "content": " through Eq. (1); \n6 Use Eq. 
(2) to generate adversarial sample " + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "inline_equation", + "content": "I_{vi}^{adv}" + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "text", + "content": "; \n7 Calculate " + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "inline_equation", + "content": "Bbox_{pred}" + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "text", + "content": " using Eq. (3); \n8 Optimize " + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "inline_equation", + "content": "PGD(\\cdot)" + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "text", + "content": " with Eq. (4); \n9 Until: " + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "inline_equation", + "content": "L_{att} < \\xi" + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "text", + "content": " or iter " + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "inline_equation", + "content": "\\geq M_{stage1}" + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "text", + "content": " \n10 Until: " + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "inline_equation", + "content": "i \\geq k" + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "text", + "content": " \n11 Determine " + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "inline_equation", + "content": "\\{p_i\\}_{i=1}^k" + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "text", + "content": " after optimization in stage1; \n12 Iteration: \n13 iter " + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "inline_equation", + "content": "= iter + 1" + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "text", + "content": "; \n14 Obtain " + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "inline_equation", + "content": "p_{ir}, p_{vi}" + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "text", + "content": " via Eqs. (5) and (6); \n15 Apply multi-modal patches " + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "inline_equation", + "content": "p_{ir}, p_{vi}" + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "text", + "content": " on " + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "inline_equation", + "content": "I_{ir}, I_{vi}" + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "text", + "content": "; \n16 Calculate " + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "inline_equation", + "content": "c_{vi}, c_{ir}" + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "text", + "content": " using Eq. (7); \n17 Send to Tracker " + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "inline_equation", + "content": "T(\\cdot)" + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "text", + "content": " to predict bounding box; \n18 if " + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "inline_equation", + "content": "|c_{vi} - c_{ir}| < \\zeta" + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "text", + "content": "; \n19 Optimize " + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "inline_equation", + "content": "G_{shape}^{Adv}" + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "text", + "content": " with Eq. 
(9); \n20 elif " + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "inline_equation", + "content": "c_{vi} - c_{ir} > \\zeta" + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "text", + "content": "; \n21 Optimize " + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "inline_equation", + "content": "G_{tex}^{Adv}" + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "text", + "content": " with Eqs. (4) and (10); \n22 elif " + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "inline_equation", + "content": "c_{ir} - c_{vi} > \\zeta" + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "text", + "content": "; \n23 Optimize " + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "inline_equation", + "content": "G_{shape}^{Adv}" + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "text", + "content": " with Eqs. (4) and (10); \n24 Until: iter " + }, + { + "bbox": [ + 316, + 88, + 555, + 424 + ], + "type": "inline_equation", + "content": "\\geq M_{stage2}" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "code_body" + } + ], + "index": 12, + "sub_type": "algorithm" + }, + { + "bbox": [ + 314, + 447, + 479, + 458 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 447, + 479, + 458 + ], + "spans": [ + { + "bbox": [ + 314, + 447, + 479, + 458 + ], + "type": "text", + "content": "modal contribution score, formulated as:" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 375, + 467, + 553, + 479 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 375, + 467, + 553, + 479 + ], + "spans": [ + { + "bbox": [ + 375, + 467, + 553, + 479 + ], + "type": "interline_equation", + "content": "c_{norm} = \\mathrm{softmax}\\left(c_{vi}, c_{ir}\\right). \\tag {8}", + "image_path": "b9b977c932c54045ae79ede88240f781dedf50938dd7cdb8bc113bcb9b27a93f.jpg" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 313, + 486, + 555, + 617 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 486, + 555, + 617 + ], + "spans": [ + { + "bbox": [ + 313, + 486, + 555, + 617 + ], + "type": "text", + "content": "Finally, an automatic discriminant attack is executed based on the modal contribution score. As illustrated, when the visible contribution is higher in the input data, only the visible modality is attacked, specifically by optimizing the generation of adversarial textures. When the infrared contribution is higher in the input data, only the adversarial shape is modified to attack the infrared modality, thereby reducing its contribution to the tracker. Given that the tracker employs a modal balance mechanism, the contributions of the two modalities may be similar in certain scenarios, as detailed in the subsequent section." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 313, + 624, + 468, + 635 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 624, + 468, + 635 + ], + "spans": [ + { + "bbox": [ + 313, + 624, + 468, + 635 + ], + "type": "text", + "content": "3.3. 
Modal Balance Interference" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 313, + 642, + 555, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 642, + 555, + 714 + ], + "spans": [ + { + "bbox": [ + 313, + 642, + 555, + 714 + ], + "type": "text", + "content": "Previous work on tracking attacks has attempted to design explicit attack losses to detect model vulnerabilities, but this approach often fails to account for the inherent characteristics of the model, making it challenging to execute effective attacks. Inspired by the concept of implicit attacks [25] and the multi-modal aggregation properties in" + } + ] + } + ], + "index": 17 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "text", + "content": "22102" + } + ] + } + ], + "index": 18 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 296, + 168 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 296, + 168 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 296, + 168 + ], + "type": "text", + "content": "RGB-T tracker [22], we develop a loss function with modal-balanced interference to target multi-modal trackers. In cases where the contributions of the infrared and visible modalities are similar (i.e., modal balance), the response map of single-modal input closely resembles that of dual-modal input. To disrupt this balance, we extract the response maps of the two single-modal adversarial examples and increase the distance between them. The details are provided as follows:" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 71, + 177, + 295, + 192 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 71, + 177, + 295, + 192 + ], + "spans": [ + { + "bbox": [ + 71, + 177, + 295, + 192 + ], + "type": "interline_equation", + "content": "L_{mi} = -\\left\\| R\\left(vi_{adv}, vi_{adv}\\right) - R\\left(ir_{adv}, ir_{adv}\\right) \\right\\|_{2}^{2}. \\tag {9}", + "image_path": "0a00053efa19c676d3ad6e38d3dc732445f6cf92ecee1fa86f6bad9f6abc1374.jpg" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 201, + 296, + 297 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 201, + 296, + 297 + ], + "spans": [ + { + "bbox": [ + 55, + 201, + 296, + 297 + ], + "type": "text", + "content": "Notably, the infrared and visible patches in our method share the same adversarial shape to achieve simultaneous attacks on both modalities. Therefore, under conditions of modal balance, only the adversarial shape is optimized. Additionally, a spatio-temporal joint attack loss " + }, + { + "bbox": [ + 55, + 201, + 296, + 297 + ], + "type": "inline_equation", + "content": "L_{st}" + }, + { + "bbox": [ + 55, + 201, + 296, + 297 + ], + "type": "text", + "content": " is employed in conjunction with the modal jamming loss " + }, + { + "bbox": [ + 55, + 201, + 296, + 297 + ], + "type": "inline_equation", + "content": "L_{mi}" + }, + { + "bbox": [ + 55, + 201, + 296, + 297 + ], + "type": "text", + "content": " to disrupt the tracker's semantic perception. 
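A minimal sketch of the two losses used under modal balance: the modal interference loss of Eq. (9) above, and one reading of the spatio-temporal joint loss L_st specified in Eq. (10) just below (s = 5 and the r_i schedule are taken from that passage; boxes are assumed to be (cx, cy, w, h) tensors, and the per-frame summation order is an interpretation):

```python
import torch

def modal_interference_loss(R_vi_adv, R_ir_adv):
    # Eq. (9): negative squared L2 distance between the two single-modal
    # response maps; minimizing it drives the maps apart
    return -((R_vi_adv - R_ir_adv) ** 2).sum()

def spatiotemporal_loss(pred_boxes, gt_boxes,
                        r=(1.90, 1.95, 2.00, 2.05, 2.10)):
    # Eq. (10), read per frame: match predicted (w, h) over s = 5 consecutive
    # frames to progressively enlarged pseudo-GT sizes r_i * (w_gt, h_gt)
    loss = 0.0
    for pred, gt, ri in zip(pred_boxes, gt_boxes, r):
        loss = loss + ((pred[2:] - ri * gt[2:]) ** 2).sum()
    return loss
```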
The specific design is presented in the following formula:" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 62, + 299, + 295, + 329 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 299, + 295, + 329 + ], + "spans": [ + { + "bbox": [ + 62, + 299, + 295, + 329 + ], + "type": "interline_equation", + "content": "L_{st} = \\left\\| \\sum_{i=1}^{s} Bbox_{pred}(w, h) - r_i \\cdot Bbox_{gt}(w, h) \\right\\|_{2}^{2}, \\tag {10}", + "image_path": "d4319166eb13fc33189585a709f69668ba3b85c506efb01e83f31b3873d123f9.jpg" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 338, + 296, + 387 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 338, + 296, + 387 + ], + "spans": [ + { + "bbox": [ + 55, + 338, + 296, + 387 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 55, + 338, + 296, + 387 + ], + "type": "inline_equation", + "content": "s" + }, + { + "bbox": [ + 55, + 338, + 296, + 387 + ], + "type": "text", + "content": " denotes the number of consecutive frames extracted from a video, set as " + }, + { + "bbox": [ + 55, + 338, + 296, + 387 + ], + "type": "inline_equation", + "content": "s = 5" + }, + { + "bbox": [ + 55, + 338, + 296, + 387 + ], + "type": "text", + "content": ". " + }, + { + "bbox": [ + 55, + 338, + 296, + 387 + ], + "type": "inline_equation", + "content": "r_i" + }, + { + "bbox": [ + 55, + 338, + 296, + 387 + ], + "type": "text", + "content": " represents the scaling factor over time used to construct the pseudo-GT, which is set as [1.90, 1.95, 2.00, 2.05, 2.10]." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 393, + 258, + 406 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 393, + 258, + 406 + ], + "spans": [ + { + "bbox": [ + 55, + 393, + 258, + 406 + ], + "type": "text", + "content": "3.4. Implementation Process in Real-world" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 411, + 296, + 591 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 411, + 296, + 591 + ], + "spans": [ + { + "bbox": [ + 55, + 411, + 296, + 591 + ], + "type": "text", + "content": "After completing the digital domain optimization, the multimodal adversarial patches require deployment in the real world. However, during real-world deployment, visible and infrared patches are stacked, leading to inevitable interactions between the two modalities, as illustrated in Fig. 3. Specifically, the coverage of visible patches impacts the adversarial shape expression of infrared patches, while the presence of infrared patches hinders the rendering of the adversarial texture in the visible modality. To address these challenges, we propose a shape-shared stacking strategy, where both the visible and infrared patches adopt the same attack shape. This design not only effectively mitigates interactions between infrared and visible patches in the real world but also enhances the attack shapes of visible patches, thereby improving overall attack performance." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 601, + 137, + 615 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 601, + 137, + 615 + ], + "spans": [ + { + "bbox": [ + 55, + 601, + 137, + 615 + ], + "type": "text", + "content": "4. 
Experiments" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 620, + 182, + 634 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 620, + 182, + 634 + ], + "spans": [ + { + "bbox": [ + 55, + 620, + 182, + 634 + ], + "type": "text", + "content": "4.1. Experimental Settings" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 55, + 639, + 222, + 650 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 639, + 222, + 650 + ], + "spans": [ + { + "bbox": [ + 55, + 639, + 222, + 650 + ], + "type": "text", + "content": "4.1.1. Datasets and Evaluation Metrics" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 55, + 653, + 296, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 653, + 296, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 653, + 296, + 714 + ], + "type": "text", + "content": "We conduct experiments on RGBT234 [12] and LasHeR [13] datasets and assess the effectiveness of our ACAttack by evaluating precision rate (PR) and success rate (SR), both of which are commonly used metrics in tracking tasks. Taking PR as an example, we" + } + ] + } + ], + "index": 10 + }, + { + "type": "image", + "bbox": [ + 317, + 72, + 395, + 132 + ], + "blocks": [ + { + "bbox": [ + 317, + 72, + 395, + 132 + ], + "lines": [ + { + "bbox": [ + 317, + 72, + 395, + 132 + ], + "spans": [ + { + "bbox": [ + 317, + 72, + 395, + 132 + ], + "type": "image", + "image_path": "d470705f12b1b300b95534ddb97c4a2699910e4e6ccda977f227ae26d4cb9532.jpg" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_body" + } + ], + "index": 11 + }, + { + "type": "image", + "bbox": [ + 317, + 133, + 394, + 194 + ], + "blocks": [ + { + "bbox": [ + 317, + 133, + 394, + 194 + ], + "lines": [ + { + "bbox": [ + 317, + 133, + 394, + 194 + ], + "spans": [ + { + "bbox": [ + 317, + 133, + 394, + 194 + ], + "type": "image", + "image_path": "33fd2f375730bee087e11a98a35e6edcbe43d2da12561454172fedc8fc9231ac.jpg" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 343, + 194, + 365, + 201 + ], + "lines": [ + { + "bbox": [ + 343, + 194, + 365, + 201 + ], + "spans": [ + { + "bbox": [ + 343, + 194, + 365, + 201 + ], + "type": "text", + "content": "VI on IR" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_caption" + } + ], + "index": 12 + }, + { + "type": "image", + "bbox": [ + 395, + 72, + 474, + 132 + ], + "blocks": [ + { + "bbox": [ + 395, + 72, + 474, + 132 + ], + "lines": [ + { + "bbox": [ + 395, + 72, + 474, + 132 + ], + "spans": [ + { + "bbox": [ + 395, + 72, + 474, + 132 + ], + "type": "image", + "image_path": "f47fed5e15b83cdc966600b10b48230bac13df87d5bf3c7dd049d110db4d9ce1.jpg" + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_body" + } + ], + "index": 14 + }, + { + "type": "image", + "bbox": [ + 395, + 133, + 473, + 194 + ], + "blocks": [ + { + "bbox": [ + 395, + 133, + 473, + 194 + ], + "lines": [ + { + "bbox": [ + 395, + 133, + 473, + 194 + ], + "spans": [ + { + "bbox": [ + 395, + 133, + 473, + 194 + ], + "type": "image", + "image_path": "92e3c6e7196ddfdb7bb345e4c518f60529abae4065d0b6b0eada7eead6b613b9.jpg" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 422, + 194, + 443, + 201 + ], + "lines": [ + { + "bbox": [ + 422, + 194, + 443, + 201 + ], + "spans": [ + { + "bbox": [ + 422, + 194, + 443, + 201 + ], + "type": "text", + "content": "IR on VI" + } + ] + } + ], + "index": 16, + "angle": 0, + "type": "image_caption" + } + ], + "index": 15 + 
}, + { + "type": "image", + "bbox": [ + 474, + 72, + 553, + 132 + ], + "blocks": [ + { + "bbox": [ + 474, + 72, + 553, + 132 + ], + "lines": [ + { + "bbox": [ + 474, + 72, + 553, + 132 + ], + "spans": [ + { + "bbox": [ + 474, + 72, + 553, + 132 + ], + "type": "image", + "image_path": "40821a79e64f48b8da64814d87a2c2c1d6b209c40f83b609db0e4879db63ce81.jpg" + } + ] + } + ], + "index": 17, + "angle": 0, + "type": "image_body" + } + ], + "index": 17 + }, + { + "type": "image", + "bbox": [ + 474, + 133, + 553, + 194 + ], + "blocks": [ + { + "bbox": [ + 474, + 133, + 553, + 194 + ], + "lines": [ + { + "bbox": [ + 474, + 133, + 553, + 194 + ], + "spans": [ + { + "bbox": [ + 474, + 133, + 553, + 194 + ], + "type": "image", + "image_path": "fa0cf77ca4b32b732a765bd36dd7bdfd3dcfd54c3d62ad0c1d21287968a7e519.jpg" + } + ] + } + ], + "index": 18, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 506, + 194, + 519, + 201 + ], + "lines": [ + { + "bbox": [ + 506, + 194, + 519, + 201 + ], + "spans": [ + { + "bbox": [ + 506, + 194, + 519, + 201 + ], + "type": "text", + "content": "Ours" + } + ] + } + ], + "index": 19, + "angle": 0, + "type": "image_caption" + } + ], + "index": 18 + }, + { + "type": "image", + "bbox": [ + 317, + 204, + 552, + 300 + ], + "blocks": [ + { + "bbox": [ + 317, + 204, + 552, + 300 + ], + "lines": [ + { + "bbox": [ + 317, + 204, + 552, + 300 + ], + "spans": [ + { + "bbox": [ + 317, + 204, + 552, + 300 + ], + "type": "image", + "image_path": "c883cfd57486865bd20823dd356d172a367ac79998c92b08debea92727b3fbb1.jpg" + } + ] + } + ], + "index": 20, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 350, + 309, + 518, + 321 + ], + "lines": [ + { + "bbox": [ + 350, + 309, + 518, + 321 + ], + "spans": [ + { + "bbox": [ + 350, + 309, + 518, + 321 + ], + "type": "text", + "content": "Figure 3. Process of physical implementation." + } + ] + } + ], + "index": 21, + "angle": 0, + "type": "image_caption" + } + ], + "index": 20 + }, + { + "type": "image", + "bbox": [ + 317, + 335, + 392, + 383 + ], + "blocks": [ + { + "bbox": [ + 317, + 335, + 392, + 383 + ], + "lines": [ + { + "bbox": [ + 317, + 335, + 392, + 383 + ], + "spans": [ + { + "bbox": [ + 317, + 335, + 392, + 383 + ], + "type": "image", + "image_path": "c4eee172f1a63caf416d7705ac5c5a4cb3db0beed313dcc8d3253a8746fc10d4.jpg" + } + ] + } + ], + "index": 22, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 332, + 386, + 373, + 394 + ], + "lines": [ + { + "bbox": [ + 332, + 386, + 373, + 394 + ], + "spans": [ + { + "bbox": [ + 332, + 386, + 373, + 394 + ], + "type": "text", + "content": "(a) ViPT patch" + } + ] + } + ], + "index": 23, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 352, + 403, + 515, + 415 + ], + "lines": [ + { + "bbox": [ + 352, + 403, + 515, + 415 + ], + "spans": [ + { + "bbox": [ + 352, + 403, + 515, + 415 + ], + "type": "text", + "content": "Figure 4. Visualization of generated patches." 
+ } + ] + } + ], + "index": 28, + "angle": 0, + "type": "image_caption" + } + ], + "index": 22 + }, + { + "type": "image", + "bbox": [ + 399, + 337, + 479, + 383 + ], + "blocks": [ + { + "bbox": [ + 399, + 337, + 479, + 383 + ], + "lines": [ + { + "bbox": [ + 399, + 337, + 479, + 383 + ], + "spans": [ + { + "bbox": [ + 399, + 337, + 479, + 383 + ], + "type": "image", + "image_path": "1f1cba12750e6777c7d348e11705f7579722488aeb97c43eb0c5ba7f185d8c9c.jpg" + } + ] + } + ], + "index": 24, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 422, + 386, + 460, + 394 + ], + "lines": [ + { + "bbox": [ + 422, + 386, + 460, + 394 + ], + "spans": [ + { + "bbox": [ + 422, + 386, + 460, + 394 + ], + "type": "text", + "content": "(b) BAT patch" + } + ] + } + ], + "index": 25, + "angle": 0, + "type": "image_caption" + } + ], + "index": 24 + }, + { + "type": "image", + "bbox": [ + 488, + 335, + 553, + 383 + ], + "blocks": [ + { + "bbox": [ + 488, + 335, + 553, + 383 + ], + "lines": [ + { + "bbox": [ + 488, + 335, + 553, + 383 + ], + "spans": [ + { + "bbox": [ + 488, + 335, + 553, + 383 + ], + "type": "image", + "image_path": "1a25403a0068d4992e805125a13cfebb745348e2e333de0c52b4c0685aebb6dc.jpg" + } + ] + } + ], + "index": 26, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 496, + 385, + 549, + 394 + ], + "lines": [ + { + "bbox": [ + 496, + 385, + 549, + 394 + ], + "spans": [ + { + "bbox": [ + 496, + 385, + 549, + 394 + ], + "type": "text", + "content": "(c) SDSTrack patch" + } + ] + } + ], + "index": 27, + "angle": 0, + "type": "image_caption" + } + ], + "index": 26 + }, + { + "bbox": [ + 313, + 437, + 555, + 593 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 437, + 555, + 593 + ], + "spans": [ + { + "bbox": [ + 313, + 437, + 555, + 593 + ], + "type": "text", + "content": "calculate the Euclidean distance of the center between the predicted bounding box and ground truth box in both RGB and thermal modalities, using the smaller distance to represent the precision. RGBT234 provides 234 pairs of RGB and thermal videos, with a total of about " + }, + { + "bbox": [ + 313, + 437, + 555, + 593 + ], + "type": "inline_equation", + "content": "234\\mathrm{K}" + }, + { + "bbox": [ + 313, + 437, + 555, + 593 + ], + "type": "text", + "content": " frames and a maximum of 8K frames per sequence. LasHeR comprises 1224 visible and thermal video pairs, totaling over " + }, + { + "bbox": [ + 313, + 437, + 555, + 593 + ], + "type": "inline_equation", + "content": "730\\mathrm{K}" + }, + { + "bbox": [ + 313, + 437, + 555, + 593 + ], + "type": "text", + "content": " frame pairs. Since the tracking performance on the background is not of interest, LasHeR performs strict alignment of the object area, allowing the object to share the same ground truth of the bounding box in both visible and thermal modalities. Therefore, we use PR and SR as evaluation metrics." + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 313, + 601, + 546, + 613 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 601, + 546, + 613 + ], + "spans": [ + { + "bbox": [ + 313, + 601, + 546, + 613 + ], + "type": "text", + "content": "4.1.2. 
Victimized Trackers and Comparison Attackers" + } + ] + } + ], + "index": 30 + }, + { + "bbox": [ + 313, + 617, + 555, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 617, + 555, + 714 + ], + "spans": [ + { + "bbox": [ + 313, + 617, + 555, + 714 + ], + "type": "text", + "content": "We select several state-of-the-art trackers as targets for our attack, including ViPT [29], BAT [3], and SDSTrack [9]. To demonstrate the challenges in exploiting vulnerabilities in RGB-T trackers, we use a patch composed of random noise as a baseline for comparison, emphasizing the need for meticulous exploration. Furthermore, we compare the performance of our proposed ACAttack with the representative attack method MTD [8], which is specifically designed" + } + ] + } + ], + "index": 31 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "text", + "content": "22103" + } + ] + } + ], + "index": 32 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 56, + 69, + 219, + 155 + ], + "blocks": [ + { + "bbox": [ + 56, + 69, + 219, + 155 + ], + "lines": [ + { + "bbox": [ + 56, + 69, + 219, + 155 + ], + "spans": [ + { + "bbox": [ + 56, + 69, + 219, + 155 + ], + "type": "image", + "image_path": "da6a446625886f71756aead6a4e9fa99c2780b36928a437cb31b58f954fcae24.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 91, + 158, + 189, + 169 + ], + "lines": [ + { + "bbox": [ + 91, + 158, + 189, + 169 + ], + "spans": [ + { + "bbox": [ + 91, + 158, + 189, + 169 + ], + "type": "text", + "content": "(a) ViPT on RGBT234" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 223, + 70, + 386, + 154 + ], + "blocks": [ + { + "bbox": [ + 223, + 70, + 386, + 154 + ], + "lines": [ + { + "bbox": [ + 223, + 70, + 386, + 154 + ], + "spans": [ + { + "bbox": [ + 223, + 70, + 386, + 154 + ], + "type": "image", + "image_path": "1a22cf3b0944a5bec6d422a63b67353ae6f7db902f57bf12db9a0a3abede9e4d.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 259, + 158, + 354, + 170 + ], + "lines": [ + { + "bbox": [ + 259, + 158, + 354, + 170 + ], + "spans": [ + { + "bbox": [ + 259, + 158, + 354, + 170 + ], + "type": "text", + "content": "(b) BAT on RGBT234" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 391, + 70, + 553, + 155 + ], + "blocks": [ + { + "bbox": [ + 391, + 70, + 553, + 155 + ], + "lines": [ + { + "bbox": [ + 391, + 70, + 553, + 155 + ], + "spans": [ + { + "bbox": [ + 391, + 70, + 553, + 155 + ], + "type": "image", + "image_path": "011457f325e1dc34bddb5ac62bfa101a76791122d58e04157c92d3d0388f4432.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 425, + 158, + 519, + 169 + ], + "lines": [ + { + "bbox": [ + 425, + 158, + 519, + 169 + ], + "spans": [ + { + "bbox": [ + 425, + 158, + 519, + 169 + ], + "type": "text", + "content": "(c) SDS on RGBT234" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 57, + 224, + 553, + 304 + ], + "blocks": [ + { + "bbox": [ + 55, + 179, + 
555, + 212 + ], + "lines": [ + { + "bbox": [ + 55, + 179, + 555, + 212 + ], + "spans": [ + { + "bbox": [ + 55, + 179, + 555, + 212 + ], + "type": "text", + "content": "Figure 5. Quantitative comparison of tracking performance on the RGBT234 dataset. The tracking performance of ViPT, BAT, and SDSTrack trackers is reported, including the original performance without attacks and the performance under attacks. Lower tracking metrics PR and SR represent better attack. Please zoom in for a better view." + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 57, + 224, + 553, + 304 + ], + "lines": [ + { + "bbox": [ + 57, + 224, + 553, + 304 + ], + "spans": [ + { + "bbox": [ + 57, + 224, + 553, + 304 + ], + "type": "image", + "image_path": "e522ecd422a23f3740a308ff9c1a063e2cb028f9532ad41a9413cd641d252e86.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 152, + 312, + 457, + 323 + ], + "lines": [ + { + "bbox": [ + 152, + 312, + 457, + 323 + ], + "spans": [ + { + "bbox": [ + 152, + 312, + 457, + 323 + ], + "type": "text", + "content": "Figure 6. Qualitative comparison of tracking performance on the RGBT234 dataset." + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_caption" + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 344, + 295, + 368 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 344, + 295, + 368 + ], + "spans": [ + { + "bbox": [ + 55, + 344, + 295, + 368 + ], + "type": "text", + "content": "for RGB trackers, highlighting the advantages of our approach in the multi-modal setting." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 55, + 375, + 185, + 387 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 375, + 185, + 387 + ], + "spans": [ + { + "bbox": [ + 55, + 375, + 185, + 387 + ], + "type": "text", + "content": "4.1.3. Implementation Details" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 55, + 389, + 296, + 462 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 389, + 296, + 462 + ], + "spans": [ + { + "bbox": [ + 55, + 389, + 296, + 462 + ], + "type": "text", + "content": "The multi-spectral video in the physical domain is captured by a DJI Mavic 3T UAV equipped with thermal and RGB cameras, and the video frame rate is 30 fps. The hyperparameters in the adaptive iteration, " + }, + { + "bbox": [ + 55, + 389, + 296, + 462 + ], + "type": "inline_equation", + "content": "\\xi" + }, + { + "bbox": [ + 55, + 389, + 296, + 462 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 389, + 296, + 462 + ], + "type": "inline_equation", + "content": "\\zeta" + }, + { + "bbox": [ + 55, + 389, + 296, + 462 + ], + "type": "text", + "content": ", are set to 9 and 0.02, respectively. The number of training epochs in stage1 is set as " + }, + { + "bbox": [ + 55, + 389, + 296, + 462 + ], + "type": "inline_equation", + "content": "M_{\\text{stage1}} = 180" + }, + { + "bbox": [ + 55, + 389, + 296, + 462 + ], + "type": "text", + "content": ". Experiments are conducted on an RTX 3090 GPU with PyTorch." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 55, + 470, + 244, + 483 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 470, + 244, + 483 + ], + "spans": [ + { + "bbox": [ + 55, + 470, + 244, + 483 + ], + "type": "text", + "content": "4.2. 
Comparisons in the Digital Domain" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 55, + 487, + 296, + 596 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 487, + 296, + 596 + ], + "spans": [ + { + "bbox": [ + 55, + 487, + 296, + 596 + ], + "type": "text", + "content": "We first validate the attack effectiveness of our ACAttack in the digital domain. It is important to note that we only train on the RGBT234 dataset and generate multi-modal patches " + }, + { + "bbox": [ + 55, + 487, + 296, + 596 + ], + "type": "inline_equation", + "content": "\\{p_{vi}, p_{ir}\\}" + }, + { + "bbox": [ + 55, + 487, + 296, + 596 + ], + "type": "text", + "content": ". As shown in Fig. 4, the RGB patch exhibits color and texture, while the thermal patch has an irregular shape, which aligns with the imaging characteristics of each modality. Subsequently, the patches " + }, + { + "bbox": [ + 55, + 487, + 296, + 596 + ], + "type": "inline_equation", + "content": "\\{p_{vi}, p_{ir}\\}" + }, + { + "bbox": [ + 55, + 487, + 296, + 596 + ], + "type": "text", + "content": " generated on RGBT234 are directly applied to the LasHeR dataset to verify their generalization." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 55, + 602, + 186, + 614 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 602, + 186, + 614 + ], + "spans": [ + { + "bbox": [ + 55, + 602, + 186, + 614 + ], + "type": "text", + "content": "4.2.1. Quantitative Evaluation" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 55, + 617, + 296, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 617, + 296, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 617, + 296, + 714 + ], + "type": "text", + "content": "Fig. 5 illustrates a quantitative comparison on the RGBT234 dataset. The results clearly show that, under our attack, the tracking performance of existing state-of-the-art trackers suffers a significant degradation compared to clean tracking conditions. In contrast, random noise only leads to a modest decline in PR and SR, emphasizing that exploiting tracker vulnerabilities goes beyond the simplicity of random noise; it requires a more sophisticated, optimized ap" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 313, + 344, + 555, + 499 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 344, + 555, + 499 + ], + "spans": [ + { + "bbox": [ + 313, + 344, + 555, + 499 + ], + "type": "text", + "content": "proach. Additionally, the performance drop observed with MTD is smaller than that of our ACAttack, suggesting that attack methods designed specifically for RGB trackers may not effectively mitigate the feature enhancement resulting from RGB-T coupling. On the other hand, our ACAttack achieves substantial attack success. For instance, against ViPT, ACAttack reduces PR from 0.835 to 0.621 and SR from 0.617 to 0.417. Similarly, for SDSTrack, it lowers PR from 0.848 to 0.616 and SR from 0.625 to 0.426. The substantial performance drops suggest that our ACAttack succeeds in keeping the predicted bounding box far away from the actual object, which will be further confirmed in subsequent qualitative results." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 313, + 506, + 440, + 517 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 506, + 440, + 517 + ], + "spans": [ + { + "bbox": [ + 313, + 506, + 440, + 517 + ], + "type": "text", + "content": "4.2.2. 
Qualitative Evaluation" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 313, + 521, + 555, + 616 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 521, + 555, + 616 + ], + "spans": [ + { + "bbox": [ + 313, + 521, + 555, + 616 + ], + "type": "text", + "content": "As shown in Fig. 6, we present the tracking results of BAT and SDSTrack. The clean trackers perform exceptionally well in maintaining precise tracking, while our attack leads to a significant decline in tracking performance. This degradation can be attributed to our progressive generation framework, which iteratively weakens the tracker's deep semantic attention on modalities with high commonality by decoupling multi-modal responses." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 313, + 624, + 460, + 635 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 624, + 460, + 635 + ], + "spans": [ + { + "bbox": [ + 313, + 624, + 460, + 635 + ], + "type": "text", + "content": "4.3. Generalization Evaluation" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 313, + 641, + 555, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 641, + 555, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 641, + 555, + 713 + ], + "type": "text", + "content": "We conduct generalization experiments on the LasHeR dataset, with quantitative and qualitative results shown in Fig. 7 and Fig. 8, respectively. Compared to random noise and MTD, our ACAttack leads to a significant drop in tracking performance across all trackers, even without training on LasHeR. Additionally, we present the IoU plots for both" + } + ] + } + ], + "index": 20 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 317, + 758 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 317, + 758 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 317, + 758 + ], + "type": "text", + "content": "22104" + } + ] + } + ], + "index": 21 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 59, + 70, + 217, + 152 + ], + "blocks": [ + { + "bbox": [ + 59, + 70, + 217, + 152 + ], + "lines": [ + { + "bbox": [ + 59, + 70, + 217, + 152 + ], + "spans": [ + { + "bbox": [ + 59, + 70, + 217, + 152 + ], + "type": "image", + "image_path": "28049c3409491173fd313b73b9d8f4d6052356dd6d98451600c8e1c68fb1c206.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 99, + 155, + 179, + 166 + ], + "lines": [ + { + "bbox": [ + 99, + 155, + 179, + 166 + ], + "spans": [ + { + "bbox": [ + 99, + 155, + 179, + 166 + ], + "type": "text", + "content": "(a) ViPT on LasHeR" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 226, + 70, + 386, + 152 + ], + "blocks": [ + { + "bbox": [ + 226, + 70, + 386, + 152 + ], + "lines": [ + { + "bbox": [ + 226, + 70, + 386, + 152 + ], + "spans": [ + { + "bbox": [ + 226, + 70, + 386, + 152 + ], + "type": "image", + "image_path": "eac88dcb4a67c9b330ed788edd168ef45fb74938a144720157eb3a4c466bc779.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 264, + 156, + 342, + 166 + ], + "lines": [ + { + "bbox": [ + 264, + 156, + 342, + 166 + ], + "spans": [ + { + "bbox": [ + 264, + 156, + 342, + 166 + ], + "type": "text", + "content": "(b) BAT on LasHeR" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + 
"index": 2 + }, + { + "type": "image", + "bbox": [ + 394, + 70, + 552, + 152 + ], + "blocks": [ + { + "bbox": [ + 394, + 70, + 552, + 152 + ], + "lines": [ + { + "bbox": [ + 394, + 70, + 552, + 152 + ], + "spans": [ + { + "bbox": [ + 394, + 70, + 552, + 152 + ], + "type": "image", + "image_path": "441f9cb12578aae2668934ce3ac88e833dca1fd762f2b926d0004f5b281ad12a.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 423, + 156, + 520, + 166 + ], + "lines": [ + { + "bbox": [ + 423, + 156, + 520, + 166 + ], + "spans": [ + { + "bbox": [ + 423, + 156, + 520, + 166 + ], + "type": "text", + "content": "(c) SDSTrack on LasHeR" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 57, + 220, + 553, + 297 + ], + "blocks": [ + { + "bbox": [ + 55, + 176, + 555, + 210 + ], + "lines": [ + { + "bbox": [ + 55, + 176, + 555, + 210 + ], + "spans": [ + { + "bbox": [ + 55, + 176, + 555, + 210 + ], + "type": "text", + "content": "Figure 7. Quantitative comparison of tracking performance on the LasHeR dataset. The tracking performance of ViPT, BAT, and SDSTrack trackers is reported, including the original performance without attacks and the performance under attacks. Lower tracking metrics PR and SR represent better attack. Please zoom in for a better view." + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 57, + 220, + 553, + 297 + ], + "lines": [ + { + "bbox": [ + 57, + 220, + 553, + 297 + ], + "spans": [ + { + "bbox": [ + 57, + 220, + 553, + 297 + ], + "type": "image", + "image_path": "0ae041af7cf0486b92cdd41c3dd040b2df52ce10d7f53dd3fa1e52eca829af99.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 58, + 335, + 294, + 476 + ], + "blocks": [ + { + "bbox": [ + 156, + 304, + 453, + 316 + ], + "lines": [ + { + "bbox": [ + 156, + 304, + 453, + 316 + ], + "spans": [ + { + "bbox": [ + 156, + 304, + 453, + 316 + ], + "type": "text", + "content": "Figure 8. Qualitative comparison of tracking performance on the LasHeR dataset." + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 58, + 335, + 294, + 476 + ], + "lines": [ + { + "bbox": [ + 58, + 335, + 294, + 476 + ], + "spans": [ + { + "bbox": [ + 58, + 335, + 294, + 476 + ], + "type": "image", + "image_path": "323f39373e80e2cf23c802d775e4e96c7945d673ba0d02d40dec1231ad08dfb8.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 55, + 487, + 295, + 531 + ], + "lines": [ + { + "bbox": [ + 55, + 487, + 295, + 531 + ], + "spans": [ + { + "bbox": [ + 55, + 487, + 295, + 531 + ], + "type": "text", + "content": "Figure 9. Qualitative comparison of tracking performance on the LasHeR dataset. The blue and red lines represent the IoU variation over frames of the predicted boxes under the clean trackers and the victimized trackers, respectively." + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_caption" + } + ], + "index": 9 + }, + { + "bbox": [ + 55, + 553, + 296, + 638 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 553, + 296, + 638 + ], + "spans": [ + { + "bbox": [ + 55, + 553, + 296, + 638 + ], + "type": "text", + "content": "clean and attacked tracking results, as shown in Fig. 9. It is clear that our ACAttack can maintain a sustained attack over extended periods. 
Owing to our adaptive attack strategy and the modal balance interference loss, the tracker's response to the real target is reduced; the tracker therefore tends to deviate from the original target and is attracted to similar targets." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 55, + 647, + 243, + 659 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 647, + 243, + 659 + ], + "spans": [ + { + "bbox": [ + 55, + 647, + 243, + 659 + ], + "type": "text", + "content": "4.4. Application in the Physical Domain" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 55, + 665, + 296, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 665, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 665, + 296, + 713 + ], + "type": "text", + "content": "Having verified our adversarial patches in digital scenes, we also extend experiments to demonstrate their efficacy in the physical domain. We directly apply the patches trained in the digital domain to the real world and use aero" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 337, + 555, + 492 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 337, + 555, + 492 + ], + "spans": [ + { + "bbox": [ + 313, + 337, + 555, + 492 + ], + "type": "text", + "content": "gel and paper to make thermal and RGB patches for deployment on pedestrians, respectively. A dual-spectral camera on the DJI Mavic 3T is used for video capture. Thirty sets of videos of different scenes are taken as test samples. The test results are shown in Fig. 10. It can be seen that the tracking prediction bounding box is enlarged and cannot be accurately positioned due to the interference of the multi-modal adversarial patch. Specifically, the optimization of the spatio-temporal joint loss makes the patch learn the effect of expanding the tracker's prediction box. Therefore, in the physical world, the tracker is unable to accurately locate the target after being affected by the adversarial patch." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 313, + 502, + 415, + 514 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 502, + 415, + 514 + ], + "spans": [ + { + "bbox": [ + 313, + 502, + 415, + 514 + ], + "type": "text", + "content": "4.5. Ablation Studies" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 313, + 521, + 555, + 593 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 521, + 555, + 593 + ], + "spans": [ + { + "bbox": [ + 313, + 521, + 555, + 593 + ], + "type": "text", + "content": "We conduct ablation studies to assess the effectiveness of our unique design and parameter configuration, including: (I) loss function, (II) parameter K, (III) iteration mode, and (IV) applied modal. The ablation studies are performed on the RGBT234 dataset against ViPT, with quantitative results presented in Table 1." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 313, + 601, + 403, + 613 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 601, + 403, + 613 + ], + "spans": [ + { + "bbox": [ + 313, + 601, + 403, + 613 + ], + "type": "text", + "content": "4.5.1. 
Loss Function" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 313, + 617, + 555, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 617, + 555, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 617, + 555, + 713 + ], + "type": "text", + "content": "The loss " + }, + { + "bbox": [ + 313, + 617, + 555, + 713 + ], + "type": "inline_equation", + "content": "L_{st}" + }, + { + "bbox": [ + 313, + 617, + 555, + 713 + ], + "type": "text", + "content": " interferes with the tracker from both temporal and spatial dimensions, while " + }, + { + "bbox": [ + 313, + 617, + 555, + 713 + ], + "type": "inline_equation", + "content": "L_{mi}" + }, + { + "bbox": [ + 313, + 617, + 555, + 713 + ], + "type": "text", + "content": " is used to disrupt the tracker's semantic perception. To demonstrate their effectiveness, we remove each of them individually, with the results shown in Table 1. In the absence of " + }, + { + "bbox": [ + 313, + 617, + 555, + 713 + ], + "type": "inline_equation", + "content": "L_{st}" + }, + { + "bbox": [ + 313, + 617, + 555, + 713 + ], + "type": "text", + "content": " or " + }, + { + "bbox": [ + 313, + 617, + 555, + 713 + ], + "type": "inline_equation", + "content": "L_{mi}" + }, + { + "bbox": [ + 313, + 617, + 555, + 713 + ], + "type": "text", + "content": ", the attack performance weakens, demonstrating their role in diminishing the enhanced target localization accuracy achieved through multi-modal interaction." + } + ] + } + ], + "index": 18 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "text", + "content": "22105" + } + ] + } + ], + "index": 19 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 67, + 71, + 542, + 183 + ], + "blocks": [ + { + "bbox": [ + 67, + 71, + 542, + 183 + ], + "lines": [ + { + "bbox": [ + 67, + 71, + 542, + 183 + ], + "spans": [ + { + "bbox": [ + 67, + 71, + 542, + 183 + ], + "type": "image", + "image_path": "fa1865e09e72f7edb94a1940891936ea2867a523c0a89c624cf6ced42d48b9ec.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 203, + 190, + 406, + 201 + ], + "lines": [ + { + "bbox": [ + 203, + 190, + 406, + 201 + ], + "spans": [ + { + "bbox": [ + 203, + 190, + 406, + 201 + ], + "type": "text", + "content": "Figure 10. Practical application in the physical domain." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "table", + "bbox": [ + 61, + 213, + 547, + 267 + ], + "blocks": [ + { + "bbox": [ + 61, + 213, + 547, + 267 + ], + "lines": [ + { + "bbox": [ + 61, + 213, + 547, + 267 + ], + "spans": [ + { + "bbox": [ + 61, + 213, + 547, + 267 + ], + "type": "table", + "html": "
<table><tr><td rowspan=\"2\">Metric</td><td rowspan=\"2\">ViPT</td><td colspan=\"2\">Config. I: loss function</td><td colspan=\"2\">Config. II: parameter K</td><td colspan=\"2\">Config. III: iteration mode</td><td colspan=\"2\">Config. IV: applied modal</td><td rowspan=\"2\">Ours</td></tr>
<tr><td>w/o Lst</td><td>w/o Lmi</td><td>K = 0</td><td>K = 9</td><td>cross</td><td>combine</td><td>Only RGB</td><td>Only TIR</td></tr>
<tr><td>PR</td><td>0.835</td><td>0.709</td><td>0.735</td><td>0.672</td><td>0.651</td><td>0.645</td><td>0.703</td><td>0.691</td><td>0.669</td><td>0.621</td></tr>
<tr><td>SR</td><td>0.617</td><td>0.486</td><td>0.505</td><td>0.450</td><td>0.425</td><td>0.428</td><td>0.482</td><td>0.482</td><td>0.462</td><td>0.417</td></tr></table>
", + "image_path": "5729cc5da5c30a0b47ad1661168dbf6b7c669fda94e0cca1e5e547f19221525d.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 81, + 273, + 528, + 283 + ], + "lines": [ + { + "bbox": [ + 81, + 273, + 528, + 283 + ], + "spans": [ + { + "bbox": [ + 81, + 273, + 528, + 283 + ], + "type": "text", + "content": "Table 1. Quantitative comparison of ablation studies, which is performed on the RGBT234 dataset against the ViPT tracker." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 55, + 305, + 139, + 316 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 305, + 139, + 316 + ], + "spans": [ + { + "bbox": [ + 55, + 305, + 139, + 316 + ], + "type": "text", + "content": "4.5.2. Parameter K" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 320, + 296, + 524 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 320, + 296, + 524 + ], + "spans": [ + { + "bbox": [ + 55, + 320, + 296, + 524 + ], + "type": "text", + "content": "In our progressive attack framework, we first employ projected gradient descent to identify K sets of coarse adversarial examples with effective attack performance. In order to verify its effectiveness, we set the number of coarse adversarial samples K growth from 0 to 9 and 18. As shown in the Table 1, as K increases from 0 to 9 and 18, the tracker's PR and SR consistently decrease. This indicates that such coarse-grained adversarial examples can effectively narrow the search space for refined attacks, thus facilitating a more effective attack. Specifically, this progressive method for finding adversarial examples prioritizes identifying multiple sets of coarse adversarial representations from a broad spectrum of noise. Subsequently, multi-modal patch generation refines the adversarial details to produce the final adversarial patch, leveraging numerous samples that contain adversarial information. Consequently, this approach results in an enhancement in performance." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 531, + 149, + 541 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 531, + 149, + 541 + ], + "spans": [ + { + "bbox": [ + 55, + 531, + 149, + 541 + ], + "type": "text", + "content": "4.5.3. Iteration Mode" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 545, + 296, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 545, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 545, + 296, + 713 + ], + "type": "text", + "content": "One of the key contributions of this paper is the adaptive iterative strategy for attacking the RGB-T tracker. To demonstrate the effectiveness of the adaptive strategy, we conduct ablation experiments using the iterative strategy. The alternating iteration strategy and the joint optimization strategy are selected for the comparison test. The former alternately optimizes the adversarial texture network and the adversarial shape network, while the latter simultaneously propagates the gradient flow to both networks. As shown in Table 1, our adaptive iteration approach can more effectively identify model vulnerabilities and generate more aggressive adversarial patches. Specifically, according to the contribution degree, our strategy can weaken deep semantic attention and break the balance of modality in tracker." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 305, + 406, + 317 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 305, + 406, + 317 + ], + "spans": [ + { + "bbox": [ + 313, + 305, + 406, + 317 + ], + "type": "text", + "content": "4.5.4. Applied Modal" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 320, + 555, + 451 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 320, + 555, + 451 + ], + "spans": [ + { + "bbox": [ + 313, + 320, + 555, + 451 + ], + "type": "text", + "content": "To compare the attack performance of joint multi-modal patches with that of single-modal patches, we conduct an ablation experiment on the applied modality. Multi-modal patches " + }, + { + "bbox": [ + 313, + 320, + 555, + 451 + ], + "type": "inline_equation", + "content": "\\{p_{vi}, p_{ir}\\}" + }, + { + "bbox": [ + 313, + 320, + 555, + 451 + ], + "type": "text", + "content": " are generated to simultaneously disrupt both RGB and thermal modalities. As shown in Table 1, we use only one of these patches in an ablation setup. A single-modality adversarial patch produces a certain attack effect and confuses the tracker. Evidently, our multi-modal patch achieves the best attack performance, underscoring the necessity of designing joint multi-modal attacks for RGB-T trackers." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 460, + 388, + 473 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 460, + 388, + 473 + ], + "spans": [ + { + "bbox": [ + 313, + 460, + 388, + 473 + ], + "type": "text", + "content": "5. Conclusion" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 481, + 555, + 661 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 481, + 555, + 661 + ], + "spans": [ + { + "bbox": [ + 313, + 481, + 555, + 661 + ], + "type": "text", + "content": "In this work, we present a pioneering framework for adversarial attacks on RGB-T multi-modal trackers by introducing an adaptive cross-attack mechanism through multi-modal response decoupling. Our approach leverages a modal-aware adaptive attack strategy and introduces a novel modal disturbance loss and spatio-temporal joint attack loss to progressively impair the tracker's capability to perceive the target. The shared adversarial shape design also enhances our method's practicality, allowing seamless deployment of multi-modal patches in the real world. Experiments across digital and physical domains confirm the robustness and effectiveness of our approach in evading RGB-T trackers, highlighting the potential and significance of adaptive, multi-modal adversarial attacks in advancing the understanding of tracker vulnerabilities." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 670, + 411, + 682 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 670, + 411, + 682 + ], + "spans": [ + { + "bbox": [ + 313, + 670, + 411, + 682 + ], + "type": "text", + "content": "Acknowledgments" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 689, + 554, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 689, + 554, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 689, + 554, + 713 + ], + "type": "text", + "content": "This work was supported by the National Natural Science Foundation of China (62276192)."
+ } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 749, + 318, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 749, + 318, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 749, + 318, + 757 + ], + "type": "text", + "content": "22106" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 71, + 115, + 83 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 71, + 115, + 83 + ], + "spans": [ + { + "bbox": [ + 56, + 71, + 115, + 83 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 57, + 91, + 296, + 713 + ], + "type": "list", + "angle": 0, + "index": 13, + "blocks": [ + { + "bbox": [ + 61, + 91, + 296, + 145 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 91, + 296, + 145 + ], + "spans": [ + { + "bbox": [ + 61, + 91, + 296, + 145 + ], + "type": "text", + "content": "[1] Luca Bertinetto, Jack Valmadre, Joao F Henriques, Andrea Vedaldi, and Philip HS Torr. Fully-convolitional siamese networks for object tracking. In Proceedings of the European Conference on Computer Vision Workshops, pages 850-865, 2016. 1" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 61, + 148, + 296, + 192 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 148, + 296, + 192 + ], + "spans": [ + { + "bbox": [ + 61, + 148, + 296, + 192 + ], + "type": "text", + "content": "[2] Goutam Bhat, Martin Danelljan, Luc Van Gool, and Radu Timofte. Learning discriminative model prediction for tracking. In Proceedings of the IEEE International Conference on Computer Vision, pages 6182-6191, 2019. 2" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 61, + 194, + 295, + 236 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 194, + 295, + 236 + ], + "spans": [ + { + "bbox": [ + 61, + 194, + 295, + 236 + ], + "type": "text", + "content": "[3] Bing Cao, Junliang Guo, Pengfei Zhu, and Qinghua Hu. Bidirectional adapter for multimodal tracking. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 927-935, 2024. 2, 5" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 61, + 239, + 295, + 294 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 239, + 295, + 294 + ], + "spans": [ + { + "bbox": [ + 61, + 239, + 295, + 294 + ], + "type": "text", + "content": "[4] Xuesong Chen, Xiyu Yan, Feng Zheng, Yong Jiang, Shu-Tao Xia, Yong Zhao, and Rongrong Ji. One-shot adversarial attacks on visual tracking with dual attention. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 10176-10185, 2020. 2" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 61, + 295, + 295, + 350 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 295, + 295, + 350 + ], + "spans": [ + { + "bbox": [ + 61, + 295, + 295, + 350 + ], + "type": "text", + "content": "[5] Xuesong Chen, Xiyu Yan, Feng Zheng, Yong Jiang, Shu-Tao Xia, Yong Zhao, and Rongrong Ji. One-shot adversarial attacks on visual tracking with dual attention. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 10176-10185, 2020. 
2" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 62, + 352, + 295, + 407 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 352, + 295, + 407 + ], + "spans": [ + { + "bbox": [ + 62, + 352, + 295, + 407 + ], + "type": "text", + "content": "[6] Xuesong Chen, Canmiao Fu, Feng Zheng, Yong Zhao, Hongsheng Li, Ping Luo, and Guo-Jun Qi. A unified multi-scenario attacking network for visual object tracking. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 1097-1104, 2021. 1" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 62, + 409, + 295, + 453 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 409, + 295, + 453 + ], + "spans": [ + { + "bbox": [ + 62, + 409, + 295, + 453 + ], + "type": "text", + "content": "[7] Martin Danelljan, Luc Van Gool, and Radu Timofte. Probabilistic regression for visual tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7183-7192, 2020. 2" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 62, + 454, + 296, + 509 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 454, + 296, + 509 + ], + "spans": [ + { + "bbox": [ + 62, + 454, + 296, + 509 + ], + "type": "text", + "content": "[8] Li Ding, Yongwei Wang, Kaiwen Yuan, Minyang Jiang, Ping Wang, Hua Huang, and Z Jane Wang. Towards universal physical attacks on single object tracking. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 1236-1245, 2021. 2, 5" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 62, + 511, + 295, + 576 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 511, + 295, + 576 + ], + "spans": [ + { + "bbox": [ + 62, + 511, + 295, + 576 + ], + "type": "text", + "content": "[9] Xiaojun Hou, Jiazheng Xing, Yijie Qian, Yaowei Guo, Shuo Xin, Junhao Chen, Kai Tang, Mengmeng Wang, Zhengkai Jiang, Liang Liu, et al. Sdstrack: Self-distillation symmetric adapter learning for multi-modal visual object tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 26551-26561, 2024. 2, 5" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 57, + 578, + 296, + 621 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 578, + 296, + 621 + ], + "spans": [ + { + "bbox": [ + 57, + 578, + 296, + 621 + ], + "type": "text", + "content": "[10] Shuai Jia, Chao Ma, Yibing Song, and Xiaokang Yang. Robust tracking against adversarial attacks. In Proceedings of the European Conference on Computer Vision, pages 69-84, 2020. 1" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 57, + 624, + 295, + 678 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 624, + 295, + 678 + ], + "spans": [ + { + "bbox": [ + 57, + 624, + 295, + 678 + ], + "type": "text", + "content": "[11] Shuai Jia, Yibing Song, Chao Ma, and Xiaokang Yang. Iou attack: Towards temporally coherent black-box adversarial attack for visual object tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6709-6718, 2021. 1" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 57, + 680, + 295, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 680, + 295, + 713 + ], + "spans": [ + { + "bbox": [ + 57, + 680, + 295, + 713 + ], + "type": "text", + "content": "[12] Chenglong Li, Xinyan Liang, Yijuan Lu, Nan Zhao, and Jin Tang. Rgb-t object tracking: Benchmark and baseline. 
Pattern Recognition, 96:106977, 2019. 5" + } + ] + } + ], + "index": 12 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 73, + 555, + 713 + ], + "type": "list", + "angle": 0, + "index": 28, + "blocks": [ + { + "bbox": [ + 316, + 73, + 554, + 116 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 73, + 554, + 116 + ], + "spans": [ + { + "bbox": [ + 316, + 73, + 554, + 116 + ], + "type": "text", + "content": "[13] Chenglong Li, Wanlin Xue, Yaqing Jia, Zhichen Qu, Bin Luo, Jin Tang, and Dengdi Sun. Lasher: A large-scale high-diversity benchmark for rgbt tracking. IEEE Transactions on Image Processing, 31:392-404, 2022. 5" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 317, + 118, + 554, + 161 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 118, + 554, + 161 + ], + "spans": [ + { + "bbox": [ + 317, + 118, + 554, + 161 + ], + "type": "text", + "content": "[14] Liting Lin, Heng Fan, Zhipeng Zhang, Yong Xu, and Haibin Ling. Swintrack: A simple and strong baseline for transformer tracking. Advances in Neural Information Processing Systems, 35:16743-16754, 2022. 2" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 317, + 162, + 555, + 217 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 162, + 555, + 217 + ], + "spans": [ + { + "bbox": [ + 317, + 162, + 555, + 217 + ], + "type": "text", + "content": "[15] Siao Liu, Zhaoyu Chen, Wei Li, Jiwei Zhu, Jiafeng Wang, Wenqiang Zhang, and Zhongxue Gan. Efficient universal shuffle attack for visual object tracking. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, pages 2739-2743, 2022. 2" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 316, + 217, + 554, + 259 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 217, + 554, + 259 + ], + "spans": [ + { + "bbox": [ + 316, + 217, + 554, + 259 + ], + "type": "text", + "content": "[16] Andong Lu, Chenglong Li, Yuqing Yan, Jin Tang, and Bin Luo. Rgbt tracking via multi-adapter network with hierarchical divergence loss. IEEE Transactions on Image Processing, 30:5613-5625, 2021. 1" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 316, + 261, + 554, + 304 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 261, + 554, + 304 + ], + "spans": [ + { + "bbox": [ + 316, + 261, + 554, + 304 + ], + "type": "text", + "content": "[17] Andong Lu, Cun Qian, Chenglong Li, Jin Tang, and Liang Wang. Duality-gated mutual condition network for rgbt tracking. IEEE Transactions on Neural Networks and Learning Systems, 36(3):4118-4131, 2025. 2" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 316, + 305, + 554, + 350 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 305, + 554, + 350 + ], + "spans": [ + { + "bbox": [ + 316, + 305, + 554, + 350 + ], + "type": "text", + "content": "[18] Hyeonseob Nam and Bohyung Han. Learning multi-domain convolutional neural networks for visual tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4293-4302, 2016. 2" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 316, + 350, + 554, + 392 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 350, + 554, + 392 + ], + "spans": [ + { + "bbox": [ + 316, + 350, + 554, + 392 + ], + "type": "text", + "content": "[19] Wuqiang Qi, Zhuoqun Zhang, and Zhishe Wang. 
Dmfuse: Diffusion model guided cross-attention learning for infrared and visible image fusion. Chinese Journal of Information Fusion, 1(3):226-241, 2024. 1" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 316, + 393, + 554, + 426 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 393, + 554, + 426 + ], + "spans": [ + { + "bbox": [ + 316, + 393, + 554, + 426 + ], + "type": "text", + "content": "[20] Linfeng Tang, Hao Zhang, Han Xu, and Jiayi Ma. Deep learning-based image fusion: A survey. Journal of Image and Graphics, 28(1):3-36, 2023. 1" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 316, + 426, + 555, + 470 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 426, + 555, + 470 + ], + "spans": [ + { + "bbox": [ + 316, + 426, + 555, + 470 + ], + "type": "text", + "content": "[21] Ning Wang, Wengang Zhou, Yibing Song, Chao Ma, Wei Liu, and Houqiang Li. Unsupervised deep representation learning for real-time tracking. International Journal of Computer Vision, 129(2):400-418, 2021. 2" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 317, + 472, + 554, + 514 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 472, + 554, + 514 + ], + "spans": [ + { + "bbox": [ + 317, + 472, + 554, + 514 + ], + "type": "text", + "content": "[22] Yun Xiao, Mengmeng Yang, Chenglong Li, Lei Liu, and Jin Tang. Attribute-based progressive fusion network for rgbt tracking. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 2831-2838, 2022. 5" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 317, + 515, + 554, + 568 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 515, + 554, + 568 + ], + "spans": [ + { + "bbox": [ + 317, + 515, + 554, + 568 + ], + "type": "text", + "content": "[23] Saining Xie, Ross Girshick, Piotr Dólár, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1492-1500, 2017. 4" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 317, + 570, + 554, + 613 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 570, + 554, + 613 + ], + "spans": [ + { + "bbox": [ + 317, + 570, + 554, + 613 + ], + "type": "text", + "content": "[24] Xinyu Xie, Yawen Cui, Tao Tan, Xubin Zheng, and Zitong Yu. Fusionmamba: Dynamic feature enhancement for multimodal image fusion with mamba. Visual Intelligence, 2(1): 37, 2024. 1" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 317, + 614, + 554, + 668 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 614, + 554, + 668 + ], + "spans": [ + { + "bbox": [ + 317, + 614, + 554, + 668 + ], + "type": "text", + "content": "[25] Bin Yan, Dong Wang, Huchuan Lu, and Xiaoyun Yang. Cooling-shrinking attack: Blinding the tracker with imperceptible noises. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 990-999, 2020. 2, 4" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 316, + 669, + 554, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 669, + 554, + 713 + ], + "spans": [ + { + "bbox": [ + 316, + 669, + 554, + 713 + ], + "type": "text", + "content": "[26] Bin Yan, Houwen Peng, Jianlong Fu, Dong Wang, and Huchuan Lu. Learning spatio-temporal transformer for visual tracking. In Proceedings of the IEEE International Conference on Computer Vision, pages 10448-10457, 2021. 
1" + } + ] + } + ], + "index": 27 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "text", + "content": "22107" + } + ] + } + ], + "index": 29 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 73, + 294, + 217 + ], + "type": "list", + "angle": 0, + "index": 3, + "blocks": [ + { + "bbox": [ + 56, + 73, + 294, + 116 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 73, + 294, + 116 + ], + "spans": [ + { + "bbox": [ + 56, + 73, + 294, + 116 + ], + "type": "text", + "content": "[27] Fan Zhang, Hanwei Peng, Lingli Yu, Yuqian Zhao, and Baifan Chen. Dual-modality space-time memory network for rgbt tracking. IEEE Transactions on Instrumentation and Measurement, 72:1-12, 2023. 2" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 118, + 294, + 171 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 118, + 294, + 171 + ], + "spans": [ + { + "bbox": [ + 56, + 118, + 294, + 171 + ], + "type": "text", + "content": "[28] Tianlu Zhang, Hongyuan Guo, Qiang Jiao, Qiang Zhang, and Jungong Han. Efficient rgb-t tracking via cross-modality distillation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5404-5413, 2023. 2" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 174, + 294, + 217 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 174, + 294, + 217 + ], + "spans": [ + { + "bbox": [ + 56, + 174, + 294, + 217 + ], + "type": "text", + "content": "[29] Jiawen Zhu, Simiao Lai, Xin Chen, Dong Wang, and Huchuan Lu. Visual prompt multi-modal tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9516-9526, 2023. 
2, 5" + } + ] + } + ], + "index": 2 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 749, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 749, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 749, + 317, + 757 + ], + "type": "text", + "content": "22108" + } + ] + } + ], + "index": 4 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2025/ACE_ Anti-Editing Concept Erasure in Text-to-Image Models/b5a87e68-e26d-46f8-a43c-98bb87af5dc7_content_list.json b/2025/ACE_ Anti-Editing Concept Erasure in Text-to-Image Models/b5a87e68-e26d-46f8-a43c-98bb87af5dc7_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..a2edb523f3eb6353529ef02c271d4719213d08b8 --- /dev/null +++ b/2025/ACE_ Anti-Editing Concept Erasure in Text-to-Image Models/b5a87e68-e26d-46f8-a43c-98bb87af5dc7_content_list.json @@ -0,0 +1,1484 @@ +[ + { + "type": "text", + "text": "ACE: Anti-Editing Concept Erasure in Text-to-Image Models", + "text_level": 1, + "bbox": [ + 197, + 142, + 799, + 162 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Zihao Wang $^{1}$ Yuxiang Wei $^{1}$ Fan Li $^{2}$ Renjing Pei $^{2}$ Hang Xu $^{2}$ Wangmeng Zuo $^{1,3(\\text{图})}$", + "bbox": [ + 174, + 198, + 818, + 218 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "$^{1}$ Harbin Institute of Technology $^{2}$ Huawei Noah's Ark Lab $^{3}$ Pazhou Lab (Huangpu)", + "bbox": [ + 158, + 233, + 836, + 252 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/66dbe4e4ed7a06542016d35cf3963ca3534f7f94e5e23386d93120d461c5f8cf.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 107, + 284, + 434, + 378 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/617364e0d91317e5332d5a06a51f795c86fdd959c79bf1e99ace98740162ed58.jpg", + "image_caption": [ + "(a) Common methods for creating copyrighted content" + ], + "image_footnote": [], + "bbox": [ + 109, + 393, + 433, + 500 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/f0aed767b6998d7a0b3e23933a3b397360c267a79a2e63f73119d1c0fab5ad7b.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 464, + 287, + 890, + 385 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/26943a5647f5e5836853e2cf47e6502410040bed37427a4d87f2fdf2740ec254.jpg", + "image_caption": [ + "(b) Comparisons of Pikachu erasure on generation and editing", + "Figure 1. (a) Given a text-to-image (T2I) model, there are two common methods to adopt it to create undesired contents, i.e., generating new images based on text prompts or editing existing images. (b) Current concept erasure methods primarily focus on preventing the generation of erased concepts but fail to protect against image editing. In contrast, our ACE method can prevent the production of such content during both generation and editing processes. As shown, after erasing Pikachu, it successfully prevents the edits involving Pikachu." 
+ ], + "image_footnote": [], + "bbox": [ + 465, + 396, + 890, + 502 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 256, + 594, + 331, + 609 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Recent advances in text-to-image diffusion models have significantly facilitated the generation of high-quality images, but have also raised concerns about the illegal creation of harmful content, such as copyrighted images. Existing concept erasure methods achieve superior results in preventing the production of erased concepts from prompts, but typically perform poorly in preventing undesired editing. To address this issue, we propose an Anti-Editing Concept Erasure (ACE) method, which not only erases the target concept during generation but also filters it out during editing. Specifically, we propose to inject the erasure guidance into both the conditional and unconditional noise predictions, enabling the model to effectively prevent the creation of erased concepts during both editing and generation. Furthermore, a stochastic correction guidance is introduced during training to address the erosion of unrelated concepts. We conducted erasure editing experiments with", + "bbox": [ + 102, + 638, + 485, + 888 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "representative editing methods (i.e., LEDITS++ and MasaCtrl) to erase IP characters, and the results indicate that our ACE effectively filters out target concepts in both types of edits. Additional experiments on erasing explicit concepts and artistic styles further demonstrate that our ACE performs favorably against state-of-the-art methods. Our code will be publicly available at https://github.com/120L020904/ACE.", + "bbox": [ + 511, + 595, + 893, + 712 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1. Introduction", + "text_level": 1, + "bbox": [ + 513, + 744, + 640, + 758 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Recent text-to-image (T2I) diffusion models trained with large-scale datasets [49] have demonstrated an impressive ability to generate high-quality images [12, 42, 46]. Their extraordinary creative capabilities enable users to produce high-quality images, and facilitate a wide range of applications, such as image editing [4, 58] and artistic creation [13, 55, 67]. However, alongside these advancements, a significant concern has arisen regarding the potential mis", + "bbox": [ + 509, + 770, + 893, + 888 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "CVF", + "bbox": [ + 106, + 2, + 181, + 42 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore.", + "bbox": [ + 236, + 0, + 810, + 44 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "23505", + "bbox": [ + 478, + 938, + 517, + 950 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "use of these text-to-image models. For example, these models might be employed to generate unsafe content, such as copyrighted material or sexually explicit images.", + "bbox": [ + 102, + 104, + 482, + 148 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "To prevent the creation of unsafe content, a straightforward solution is filtering training data and retraining the model.
Nonetheless, such a process is both labor-intensive and resource-consuming. Post-hoc safety checker [45, 46] and negative guidance [48] are alternative plug-and-play ways to filter undesired contents, which heavily rely on pre-trained detectors or hand-crafted prompts. More recent, concept erasure methods [14, 17, 35, 36, 68] are proposed to directly unlearn undesired concepts through model finetuning. These methods mainly focus on precisely removing the target concept, while faithfully preserving the generation of non-target concepts. For instance, ESD [14] injects the negative erase guidance into target noise prediction to guide the image away from the target concept. SPM [36] employs a lightweight adapter to eliminate concepts and further adopts latent anchoring to preserve non-target concepts.", + "bbox": [ + 102, + 151, + 482, + 385 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Although these concept erasure methods can effectively prevent the generation of unsafe content giving corresponding text prompt, they can be circumvented by editing techniques. As illustrated in Fig. 1, after removing Pikachu from the model, users can still create an image of Pikachu wearing sunglasses by editing a Pikachu image using LEDIT++ [4]. This is because these methods are typically trained to remove target concept from conditional noise prediction (as shown in Fig. 2(b)), and rely on the input text (e.g., \"Pikachu\") to trigger the guard. Therefore, when editing the image with the text \"Add sunglasses\" as input, the guard fails. In practice, protection from editing should also be considered in concept erasure, which we refer to as editing filtration.", + "bbox": [ + 102, + 388, + 482, + 592 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "To address the above issues, we propose an Anti-Editing Concept Erasure method, termed ACE, to prevent the production of unsafe content during both generation and editing. Based on the above analysis, we explore the capabilities of CFG [20], and propose incorporating erasure guidance into both conditional and unconditional noise for anti-editing concept erasure. During erasure training, ACE additionally aligns the unconditional noise prediction of the tuned model with the proposed unconditional erasure guidance. After that, during generation or editing, the CFG prediction in the tuned model can implicitly mitigate the presence of the erased concept, thereby preventing the production of unwanted content. A prior constraint loss further adopted address the overfitting of training. Additionally, to reduce the impact of the added target concept noise guidance on the generation of non-target concepts, we further incorporate a random correction guidance with unconditional erasure guidance by subtracting randomly sampled prior concept noise guidance. With that, our ACE can thoroughly erase the target concept while preserving the generation of", + "bbox": [ + 102, + 595, + 482, + 888 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "non-target concepts. We conducted extensive evaluations across different erasure tasks, including intellectual property (IP), explicit content, and artistic style. 
Our method demonstrates significant advantages in both generation and editing filtration, showcasing its effectiveness.", + "bbox": [ + 511, + 104, + 890, + 176 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "The contributions of this work can be summarized as:", + "bbox": [ + 529, + 178, + 874, + 191 + ], + "page_idx": 1 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- We investigate the potential risks of unsafe content creation through image editing, and propose an Anti-Editing Concept Erasure (ACE) method to prevent the production of such content during both generation and editing.", + "- An unconditional erasure guidance is proposed for anti-editing concept erasure, along with a concept preservation mechanism to ensure the generation of non-target concepts.", + "- Extensive experiments demonstrate that our ACE can successfully erase target concepts and exhibits superior filtration capabilities during both generation and editing." + ], + "bbox": [ + 511, + 193, + 890, + 353 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2. Related Work", + "text_level": 1, + "bbox": [ + 511, + 369, + 650, + 383 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2.1. Concept Erasure in T2I Models", + "text_level": 1, + "bbox": [ + 511, + 393, + 782, + 409 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Concept erasure [9, 11, 15, 16, 18, 21, 23-25, 28-30, 33, 39, 41, 43, 48, 51, 52, 59, 62-64, 71] in T2I models has been the subject of numerous studies. Fine-tuning the model is an important approach to concept erasure. ESD [14] suggests integrating negative guidance into target concept noise through training. SPM [36] proposes prior correction based on the cosine similarity of text and utilizes a comparable LoRA approach to train the model. MACE [35] leverages a closed-form solution to amalgamate multiple erasure LoRA weights. RECE [17] employs analytical methods to search for inappropriate text embeddings and integrates them into the erasure closed-form solution. AdvUnlearn [68] incorporates adversarial training to improve the robustness of the erasure method. To the best of our knowledge, current fine-tuning methods lack consideration for editing filtration, rendering them ineffective in preventing customized edits to target concept images.", + "bbox": [ + 511, + 415, + 890, + 664 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2.2. Text-driven Image Editing", + "text_level": 1, + "bbox": [ + 511, + 675, + 743, + 691 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Due to the broad generative capacities inherent in text-to-image DMs, the employment of DMs for image editing [3, 5, 7, 8, 26, 27, 32, 37, 38, 40, 47, 50, 54, 56, 57, 60, 65, 70] has progressively garnered traction. MasaCtrl [6] introduces source image data into the image editing process by substituting keys and values in the self-attention layer, thus modifying the actions of objects in the image. LEDITS++ [4] uses inference guidance and attention masks from the DM to confine editing regions while using DDPM inversion for enhanced restoration of the source image.
Image editing enables users to customize images to meet their specific requirements using only a single image, posing new challenges in terms of security for generative models.", + "bbox": [ + 511, + 696, + 890, + 888 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "23506", + "bbox": [ + 478, + 938, + 519, + 950 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/56a0845895c1ed71f7953c1f3585aa222381202b834edee24691efd4fe710399.jpg", + "image_caption": [ + "(a) Calculation of Classifier Free Guidance" + ], + "image_footnote": [], + "bbox": [ + 120, + 108, + 496, + 231 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/cbe3f74c88efc5ab447a9f20f7f97f47a512ac356d7b19a1584ef65b6c3b8dd6.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 521, + 109, + 867, + 224 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/340bd65aa6b021232b6fb1dd0489bd42985b7666fac10ce58f3a2d84d7b26f48.jpg", + "image_caption": [ + "(b) Concept Erasure on Conditional Noise Prediction", + "(c) Our ACE learns to erase concept on both Conditional and Unconditional Noise Predictions", + "Figure 2. Overview of our proposed ACE. (a) In CFG, both conditional noise and unconditional noise are adopted to generate high-quality images. (b) ESD [14] unlearns the target concept (e.g., Mickey) by aligning conditional noise prediction with conditional erasure guidance (CEG). (c) During the fine-tuning, our ACE injects erasure guidance into both conditional and unconditional noise prediction, preventing the production of unsafe content during both generation and editing. PG-UEG denotes the prior-guided unconditional erasure guidance calculated following Eqn 9." + ], + "image_footnote": [], + "bbox": [ + 122, + 255, + 861, + 377 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "2.3. Attacks in T2I Models", + "text_level": 1, + "bbox": [ + 102, + 492, + 305, + 507 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "As research on concept erasure in T2I models advances, red team studies focusing on the robustness of detection erasure methods are also increasingly emerging. P4D [10] processes a method of inserting adversarial text into regular input text to facilitate the production of insecure images using the T2I model. Ring-A-Bell [53] extracts the discrepancy vector between the embeddings of insecure concept text and secure concept text and employs it to derive the attack text embedding. UnlearnDiff [69] employs Projected Gradient Descent (PGD) to tackle the optimization challenge inherent in adversarial attacks and maps the optimized text embeddings onto discrete tokens.", + "bbox": [ + 102, + 513, + 483, + 689 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3. Proposed Method", + "text_level": 1, + "bbox": [ + 102, + 702, + 272, + 718 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Given a target concept (e.g., Fukushima), concept erasure task [14, 36] aims to unlearn it from pre-trained text-to-image (T2I) models, preventing the illegal use of these models to create copyrighted content. However, existing methods can be circumvented and fail to prevent users from producing new undesirable images through image editing, which raises new concerns. To address this, we propose an Anti-Editing Concept Erasure (ACE) method, as illustrated in Fig. 2, to prevent the production of undesirable content through both generation and editing. In this section, we will first introduce the prior knowledge of our method (Sec. 
3.1),", + "bbox": [ + 102, + 727, + 483, + 888 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "including employed T2I model and concept erasure method. To address the editing issue, we further propose to erase the target concept from both conditional and unconditional prediction for anti-editing erasure (Sec. 3.2). Finally, to preserve the generation of non-target concepts, a prior concept preservation mechanism is introduced (Sec. 3.3).", + "bbox": [ + 511, + 493, + 893, + 580 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.1. Preliminaries", + "text_level": 1, + "bbox": [ + 511, + 588, + 648, + 602 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Stable Diffusion. In this work, we adopt Stable Diffusion 1.4 [46] as text-to-image model, which is one of the representative T2I diffusion models. It first employs a variational autoencoder (VAE) to transform real images $x$ into an image latent $z$ . Then, a text-conditioned diffusion model $\\epsilon_{\\theta}$ is trained on the latent space to predict latent codes, and mean-squared loss is adopted,", + "bbox": [ + 511, + 609, + 893, + 712 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal {L} _ {\\mathrm {L D M}} = \\mathbb {E} _ {z _ {t}, t, c, \\epsilon \\sim \\mathcal {N} (0, I)} \\left[ \\| \\epsilon - \\epsilon_ {\\theta} \\left(z _ {t}, c, t\\right) \\| _ {2} ^ {2} \\right], \\tag {1}\n$$\n", + "text_format": "latex", + "bbox": [ + 553, + 717, + 890, + 734 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "where $\\epsilon$ denotes the unscaled noise and $c$ is the text embedding encoded by text encoders. $z_{t}$ is the latent noised to time $t$ . During inference, a random Gaussian noise $z_{T}$ is iteratively denoised to $z_{0}$ , and decoded to final image.", + "bbox": [ + 511, + 739, + 893, + 797 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Classifier-Free Guidance. To improve the quality of generated images, classifier-free guidance [20] is adopted during diffusion inference. Based on Tweedie's formula and the principles of diffusion model, we have:", + "bbox": [ + 511, + 797, + 893, + 856 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\nabla_ {z _ {t}} \\log p (c | z _ {t}) = - \\frac {1}{\\sigma_ {t}} \\left(\\epsilon_ {\\theta} \\left(z _ {t}, c, t\\right) - \\epsilon_ {\\theta} \\left(z _ {t}, t\\right)\\right). \\tag {2}\n$$\n", + "text_format": "latex", + "bbox": [ + 549, + 861, + 890, + 891 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "23507", + "bbox": [ + 478, + 938, + 517, + 950 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/27190eb892c6b8bb4fe387846155a26a0b9452aa12300fb0627f3354f91f094c.jpg", + "image_caption": [ + "Figure 3. Qualitative comparisons of IP character removal. Our ACE effectively erases the target concept while generating other concepts successfully." + ], + "image_footnote": [], + "bbox": [ + 112, + 106, + 477, + 400 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Here, $\\sigma_{t}$ is a constant. 
To increase the probability of text condition $c$ appearing in the final image, the final noise prediction is the composition of noise prediction from both conditional and unconditional texts,", + "bbox": [ + 102, + 476, + 483, + 535 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\tilde {\\epsilon} = \\epsilon_ {\\theta} \\left(z _ {t}, t\\right) + \\omega \\left(\\epsilon_ {\\theta} \\left(z _ {t}, c, t\\right) - \\epsilon_ {\\theta} \\left(z _ {t}, t\\right)\\right), \\tag {3}\n$$\n", + "text_format": "latex", + "bbox": [ + 158, + 542, + 482, + 559 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $\\epsilon_{\\theta}(z_t,t)$ denote the unconditional noise prediction, and $\\omega$ is a hyperparameter controlling the guidance scale.", + "bbox": [ + 102, + 564, + 482, + 594 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Concept Erasure. Given a target concept indicated by text $c$ (e.g., Pikachu), concept erasure task finetunes the model to reduce the probability of generating images containing this concept. For example, ESD [14] removes the target concept from the conditional noise prediction, and a conditional erasure guidance (CEG) is defined as:", + "bbox": [ + 102, + 594, + 483, + 681 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\tilde {\\epsilon} _ {c} = \\epsilon_ {\\theta^ {*}} \\left(z _ {t}, t\\right) - \\eta_ {c} \\left(\\epsilon_ {\\theta^ {*}} \\left(z _ {t}, c, t\\right) - \\epsilon_ {\\theta^ {*}} \\left(z _ {t}, t\\right)\\right), \\tag {4}\n$$\n", + "text_format": "latex", + "bbox": [ + 143, + 689, + 482, + 705 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $\\epsilon_{\\theta^{\\star}}(\\cdot)$ represents the original T2I model, and $z_{t}$ is the encoded latent image contains target concept $c$ . $\\eta_c$ is a control scale hyperparameter. During training, ESD aligns the noise prediction of the target concept in tuned model $\\epsilon_{\\theta}(z_t,c,t)$ with the above CEG,", + "bbox": [ + 102, + 710, + 483, + 785 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal {L} _ {\\mathrm {E S D}} = \\mathbb {E} _ {z _ {t}, t, c} \\left[ \\| \\epsilon_ {\\theta} (z _ {t}, c, t) - \\tilde {\\epsilon} _ {c} \\| _ {2} ^ {2} \\right]. \\tag {5}\n$$\n", + "text_format": "latex", + "bbox": [ + 171, + 790, + 482, + 809 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "After the training, the erasure guidance $-\\nabla_{z_t}\\log p(c|z_t)$ is introduced into conditional noise prediction of the target concept. Therefore, the prediction of tuned model will be guided away from the erased concept, preventing the generation of images containing the erased concept.", + "bbox": [ + 102, + 814, + 483, + 888 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.2. Anti-Editing Concept Erasure", + "text_level": 1, + "bbox": [ + 511, + 103, + 772, + 119 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Editing Filtration. Although existing erasure methods can successfully prevent the generation of an erased concept through text prompts, they can be easily circumvented by editing techniques. As shown in Fig. 1, when utilizing tuned ESD model to add sunglasses on an image of Pikachu using LEDs++ [4], it successfully produces an image of Pikachu with sunglasses, raising potential copyright concerns. This is because these methods are typically trained to erase the concept from the noise prediction of the target concept (as shown in Fig. 
2 (b)), and rely on inputting concept text (e.g., \"Pikachu\" or \"Mickey\") to trigger the guard. However, during the editing process, the target concept may not necessarily be used in the text prompt. Therefore, these erasure methods fail to prevent the reconstruction of the erased concept. In practice, the erasure model should also have the ability to prevent the creation of undesired concepts through image editing, a feature we refer to as editing filtration.", + "bbox": [ + 511, + 125, + 893, + 386 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Unconditional Erasure Guidance. Current generation and editing methods heavily rely on classifier-free guidance [20] (CFG) to improve the quality of generated images, where the unconditional noise prediction plays an important role. To address the issue of editing filtration, we further propose to erase the target concept from both conditional and unconditional noise prediction, thereby preventing edited images from containing target concepts. Specifically, similar to ESD, we define the unconditional erasure guidance (UEG) as,", + "bbox": [ + 511, + 388, + 892, + 534 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\tilde {\\epsilon} _ {\\mathrm {u}} = \\epsilon_ {\\theta^ {*}} \\left(z _ {t}, t\\right) + \\eta_ {\\mathrm {u}} \\left(\\epsilon_ {\\theta^ {*}} \\left(z _ {t}, c, t\\right) - \\epsilon_ {\\theta^ {*}} \\left(z _ {t}, t\\right)\\right). \\tag {6}\n$$\n", + "text_format": "latex", + "bbox": [ + 553, + 542, + 890, + 559 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "During training, we additionally align the unconditional noise prediction of the tuned model with the UEG,", + "bbox": [ + 511, + 568, + 890, + 597 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal {L} _ {\\mathrm {U n c}} = \\mathbb {E} _ {z _ {t}, t, c} \\left[ \\| \\epsilon_ {\\theta} (z _ {t}, t) - \\tilde {\\epsilon} _ {\\mathrm {u}} \\| _ {2} ^ {2} \\right]. \\tag {7}\n$$\n", + "text_format": "latex", + "bbox": [ + 589, + 606, + 890, + 625 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "When the fine-tuned unconditional noise (our UEG) is subtracted in the CFG process, the erased concept guidance is subtracted as well, thereby reducing the probability of the erased concept appearing regardless of the input text prompt. Then, the CFG noise prediction during inference will move away from the target concept regardless of any text input, thereby effectively preventing the production of images containing the target concept. As erasure models are usually trained on a small dataset, they are prone to overfitting, where the erasure guidance is introduced into the noise prediction for other conditional text prompts. This weakens the erasure effect and leads to incomplete erasure. To address the issue of overfitting, we introduce a prior constraint loss during the training process.
+ { + "type": "text", + "text": "When the fine-tuned unconditional noise (our UEG) is subtracted in the CFG process, the erased concept guidance is subtracted as well, reducing the probability of the erased concept appearing regardless of the input text prompt. Consequently, the CFG noise prediction during inference will move away from the target concept for any text input, effectively preventing the production of images containing the target concept. As erasure models are usually trained on a small dataset, they are prone to overfitting, where the erasure guidance is introduced into the noise prediction for other conditional text prompts. This weakens the erasure effect and leads to incomplete erasure. To address the issue of overfitting, we introduce a prior constraint loss during the training process. Specifically, we regularize the prediction of the prior concept in the new model to be consistent with that of the original model:", + "bbox": [ + 511, + 631, + 893, + 866 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal {L} _ {\\text {C o n s}} = \\mathbb {E} _ {z _ {t}, t, c _ {p} \\in \\mathcal {C} _ {p}} \\left[ \\| \\epsilon_ {\\theta} \\left(z _ {t}, c _ {p}, t\\right) - \\epsilon_ {\\theta^ {*}} \\left(z _ {t}, c _ {p}, t\\right) \\| _ {2} ^ {2} \\right], \\tag {8}\n$$\n", + "text_format": "latex", + "bbox": [ + 531, + 872, + 890, + 888 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "23508", + "bbox": [ + 478, + 938, + 517, + 950 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/bb0ea50cff7f9b2c8b2419c85b5296709314c8902c1c00d0bf08c525e1efaf57.jpg", + "image_caption": [ + "Figure 4. Comparison of our ACE method with other methods in terms of editing filtration. After erasing Mickey Mouse, our method filters out edits involving Mickey Mouse while not affecting edits related to other IP characters. In contrast, the competing methods either fail to prevent editing (e.g., ESD, SPM, RECE, and MACE) or cannot perform editing on non-target concepts (e.g., AdvUnlearn)." + ], + "image_footnote": [], + "bbox": [ + 117, + 104, + 880, + 333 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/5f7264cadb1e3e303dc91c35a8afcb0694688495744182e74bb856025ae21ba8.jpg", + "image_caption": [ + "Figure 5. Qualitative results of nudity removal. Figure (a) shows the results of explicit editing using SD-Inpainting, while Figure (b) displays images generated using text with explicit labels. Static adversarial text is used for editing, while dynamic adversarial attacks are employed for generation. It can be observed that our method effectively reduces exposure in both editing and generation tasks. Moreover, our method maintains its effectiveness when editing and generating using adversarial text, indicating its robustness." + ], + "image_footnote": [], + "bbox": [ + 109, + 404, + 885, + 628 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $c_p$ represents a prior concept, and $\\mathcal{C}_p$ represents the set of prior concepts. Intuitively, the larger the set of priors, the better it helps mitigate overfitting. However, it is challenging to traverse all prior concepts, as pre-trained models have a large general semantic space. Our goal is to preserve the concepts most likely to be affected, thus minimizing the influence on other concepts. We assume that these are concepts semantically related to the erased concept and use an LLM [1] to obtain them. By adding this loss, we ensure that the erasure guidance introduced during training aligns with our formulation in Eqn. 7.", + "bbox": [ + 102, + 712, + 486, + 875 + ], + "page_idx": 4 + },
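A sketch of the prior constraint of Eq. (8); the concrete prior set $\mathcal{C}_p$ is generated by an LLM in the paper, so the names below are placeholders only:

```python
import random
import torch

def cons_loss(eps_prior_tuned, eps_prior_orig):
    # Eq. (8): keep the tuned model's prediction for a prior concept c_p
    # consistent with the frozen original model, mitigating overfitting.
    return torch.mean((eps_prior_tuned - eps_prior_orig.detach()) ** 2)

# Illustrative prior set; the actual LLM-generated concepts are not listed
# in this excerpt, so these names are hypothetical.
prior_concepts = ["Mickey Mouse", "Hello Kitty", "Snoopy"]
c_p = random.choice(prior_concepts)  # one prior concept sampled per step
```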
+ { + "type": "text", + "text": "3.3. Prior Concept Preservation", + "text_level": 1, + "bbox": [ + 511, + 710, + 754, + 727 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "In practice, training with the method proposed in Sec. 3.2 affects the generation prior of relevant concepts (see Sec. 4.4). This is because incorporating the UEG not only decreases the probability of producing erased concepts, but also decreases the probability of adjacent concepts. Therefore, we reverse the mechanism of the UEG by subtracting the guidance of prior concepts from the unconditional noise, which prevents the probability reduction of these concepts and minimizes concept forgetting. The prior concepts are sampled from the semantically related concepts obtained using the LLM,", + "bbox": [ + 511, + 741, + 893, + 888 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "23509", + "bbox": [ + 478, + 938, + 519, + 950 + ], + "page_idx": 4 + }, + { + "type": "table", + "img_path": "images/d4357bae8bd89ad1632e6896534b2165c85ea122d5b6c625074ecf998cf481a0.jpg", + "table_caption": [ + "(a) Generation Prevention", + "(b) Editing Filtration" + ], + "table_footnote": [], + "table_body": "
Method | Erase Concept | Prior Concept | Overall | Erase Concept | Prior Concept | Overall
Variant | ESD | Unc | Cons | Cor | CLIPe↓ | LPIPSe↑ | CLIPp↑ | LPIPSp↓ | CLIPd↑ | LPIPSd↑ | CLIPe↓ | LPIPSe↑ | CLIPp↑ | LPIPSp↓ | CLIPd↑ | LPIPSd↑
(1) | ✓ | – | – | – | 0.171 | 0.440 | 0.246 | 0.286 | 0.075 | 0.153 | 0.301 | 0.060 | 0.305 | 0.050 | 0.004 | 0.011
(2) | ✓ | ✓ | – | – | 0.166 | 0.551 | 0.283 | 0.236 | 0.117 | 0.315 | 0.285 | 0.149 | 0.305 | 0.057 | 0.019 | 0.092
(3) | ✓ | ✓ | ✓ | – | 0.159 | 0.507 | 0.254 | 0.337 | 0.095 | 0.170 | 0.274 | 0.168 | 0.300 | 0.077 | 0.026 | 0.091
(4) | – | ✓ | ✓ | ✓ | 0.211 | 0.303 | 0.293 | 0.199 | 0.082 | 0.104 | 0.273 | 0.175 | 0.301 | 0.079 | 0.028 | 0.096
(5) | ✓ | ✓ | ✓ | ✓ | 0.175 | 0.397 | 0.295 | 0.196 | 0.120 | 0.201 | 0.274 | 0.168 | 0.303 | 0.070 | 0.029 | 0.097
", + "bbox": [ + 107, + 112, + 890, + 195 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Table 1. Quantitative Evaluation of generation and editing after ablation. The best results are highlighted in bold. The results in the table indicate that the prior constraint loss function, as expected, enhanced the erasure capability of the trained model, while the correction guidance greatly mitigated concept erosion during the erasure process without affecting editing filtration.", + "bbox": [ + 102, + 204, + 893, + 246 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/9db561db123c5c79efc21173cd92c2acc70444dc91671755919fd09b03fb7492.jpg", + "image_caption": [ + "Figure 6. Qualitative results of artistic style removal. Our method erases the target style effectively and has minimal impact on other artistic styles." + ], + "image_footnote": [], + "bbox": [ + 117, + 273, + 472, + 575 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "which is mentioned in the previous section. We call this new guidance prior-guided unconditional erasure guidance (PG-UEG), which is defined as:", + "bbox": [ + 102, + 643, + 482, + 686 + ], + "page_idx": 5 + }, + { + "type": "equation", + "text": "\n$$\n\\begin{array}{l} \\tilde {\\epsilon} _ {\\mathrm {p u}} = \\epsilon_ {\\theta^ {*}} (z _ {t}, t) + \\eta_ {\\mathrm {u}} \\left(\\epsilon_ {\\theta^ {*}} (z _ {t}, c, t) - \\epsilon_ {\\theta^ {*}} (z _ {t}, t)\\right) \\\\ - \\eta_ {p} \\gamma_ {p} \\left(\\epsilon_ {\\theta^ {*}} \\left(z _ {t}, c _ {p}, t\\right) - \\epsilon_ {\\theta^ {*}} \\left(z _ {t}, t\\right)\\right), \\tag {9} \\\\ \\end{array}\n$$\n", + "text_format": "latex", + "bbox": [ + 122, + 696, + 480, + 731 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "where $\\gamma_{p}$ represents the guidance control term related to the prior retained concept. $c_{p}$ refers to the same prior concept in $\\mathcal{L}_{\\mathrm{Cons}}$ which are obtained through random sampling from the set $\\mathcal{C}_p$ . We calculate $\\gamma_{p}$ using the CLIP model to measure the relevance of different prior concepts to the target concept image and then compare it to the relevance of the target concept text to its image. Specifically, $\\gamma_{p} = \\frac{\\mathrm{CLIP}(x,c_{p})}{\\mathrm{CLIP}(x,c)}$ . The new loss for our ACE is:", + "bbox": [ + 102, + 739, + 482, + 862 + ], + "page_idx": 5 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal {L} _ {\\mathrm {P U n c}} = \\mathbb {E} _ {z _ {t}, t, c, c _ {p} \\in \\mathcal {C} _ {p}} \\left[ \\| \\epsilon_ {\\theta} (z _ {t}, t) - \\tilde {\\epsilon} _ {\\mathrm {p u}} \\| _ {2} ^ {2} \\right]. \\tag {10}\n$$\n", + "text_format": "latex", + "bbox": [ + 142, + 871, + 482, + 888 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "The final training loss for our ACE is summarized as: $\\mathcal{L}_{\\mathrm{ACE}} = \\lambda_{\\mathrm{PUnc}}\\mathcal{L}_{\\mathrm{PUnc}} + \\lambda_{\\mathrm{Cons}}\\mathcal{L}_{\\mathrm{Cons}} + \\lambda_{\\mathrm{ESD}}\\mathcal{L}_{\\mathrm{ESD}}$", + "bbox": [ + 511, + 271, + 890, + 300 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "In our implementation, we adopt LORA [22] for parameter-efficient tuning, and the training process follows [14]. More details are provided in Suppl.", + "bbox": [ + 511, + 301, + 890, + 345 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4. 
Experiments", + "text_level": 1, + "bbox": [ + 511, + 359, + 640, + 376 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "We conduct experiments on various tasks to evaluate our ACE, including IP characters erasure, artistic styles erasure, and nudity erasure. ESD [12], SPM [36], AdvUnlearn [68], MACE [35], and RECE [17] are adopted as competing methods. Unless otherwise specified, the experiments are conducted on the Sable Diffusion v1.4.", + "bbox": [ + 511, + 386, + 890, + 472 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.1. IP Character Removal", + "text_level": 1, + "bbox": [ + 511, + 484, + 715, + 500 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Experiment Setup. To access our ACE on IP character removal, we employ ten iconic IP characters as examples, including Hello Kitty, Snoopy, Mickey Mouse, Elsa, Donald Duck, Dora the Explorer, Winnie the Pooh, Sonic the Hedgehog, Elsa, and Fukushima. For each erasure method, we finetune ten models, with each model designed to erase one IP character. Following [14, 17], we adopted CLIP [44] score and LPIPS [66] score as metrics for evaluation. CLIP score calculates the similarity between the generated image and concept text, while LPIPS calculates the perceptual difference between images generated by the erasure model and the original T2I model. $\\mathrm{CLIP}_e$ calculates the CLIP similarity between images generated with erased concept text and their corresponding text, where lower value indicates more thorough erasure. $\\mathrm{CLIP}_p$ calculates the relevance under prior concepts, and higher value indicates better prior preservation. $\\mathrm{LPIPS}_e$ calculates the LPIPS similarity between images generated with erased concept text by the trained model and the original model, and higher value indicates more thorough erasure. $\\mathrm{LPIPS}_p$ calculates the similarity under prior concepts, in which lower values indicate better prior preservation. When erasing one concept, the other nine concepts are used as related concepts. Following RECE [17], we further calculate the overall scores between erased and related characters to measure the trade-off between the concept erasure and prior preservation, where", + "bbox": [ + 511, + 507, + 890, + 888 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "23510", + "bbox": [ + 478, + 938, + 517, + 950 + ], + "page_idx": 5 + }, + { + "type": "table", + "img_path": "images/55fab0f233614353e893af27be930843c0a2549cdd1ea1d6c09722319869c73b.jpg", + "table_caption": [ + "(a) Generation Prevention", + "(b) Editing Filtration" + ], + "table_footnote": [], + "table_body": "
Method | Erase Concept | Prior Concept | Overall | Erase Concept | Prior Concept | Overall
\(CLIP_e\downarrow\) | \(LPIPS_e\uparrow\) | \(CLIP_p\uparrow\) | \(LPIPS_p\downarrow\) | \(CLIP_d\uparrow\) | \(LPIPS_d\uparrow\) | \(CLIP_e\downarrow\) | \(LPIPS_e\uparrow\) | \(CLIP_p\uparrow\) | \(LPIPS_p\downarrow\) | \(CLIP_d\uparrow\) | \(LPIPS_d\uparrow\)
SD v1.4 [46] | 0.301 | 0.000 | 0.301 | 0.000 | 0.000 | 0.000 | 0.308 | 0.063 | 0.308 | 0.063 | 0.000 | 0.000
ESD [14] | 0.227 | 0.331 | 0.276 | 0.255 | 0.049 | 0.076 | 0.306 | 0.042 | 0.307 | 0.041 | 0.001 | 0.000
SPM [36] | 0.239 | 0.288 | 0.296 | 0.107 | 0.056 | 0.181 | 0.302 | 0.061 | 0.303 | 0.056 | 0.001 | 0.005
AdvUnlearn [68] | 0.166 | 0.468 | 0.209 | 0.403 | 0.043 | 0.065 | 0.310 | 0.011 | 0.311 | 0.010 | 0.001 | 0.001
MACE [35] | 0.250 | 0.317 | 0.298 | 0.134 | 0.048 | 0.184 | 0.303 | 0.056 | 0.304 | 0.054 | 0.001 | 0.002
RECE [17] | 0.176 | 0.426 | 0.257 | 0.270 | 0.081 | 0.156 | 0.300 | 0.066 | 0.303 | 0.054 | 0.003 | 0.012
Ours | 0.175 | 0.397 | 0.295 | 0.196 | 0.120 | 0.201 | 0.274 | 0.168 | 0.303 | 0.070 | 0.029 | 0.097
", + "bbox": [ + 107, + 113, + 888, + 224 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/813155dac35cee07efae839d98ba32311dccfed5ee2e05b5829bfe2ea31af116.jpg", + "table_caption": [ + "Table 2. Quantitative comparisons of IP character erasure. The best two results are highlighted with bold and underline." + ], + "table_footnote": [], + "table_body": "
Method | Buttocks | Breast (F) | Genitalia (F) | Breast (M) | Genitalia (M) | Feet | Armpits | Belly | Total↓ | FID30k↓ | CLIP30k↑
SD v1.4 [46] | 61 | 204 | 37 | 38 | 16 | 70 | 241 | 183 | 850 | 14.07 | 0.313
ESD [14] | 15 | 29 | 5 | 11 | 10 | 37 | 68 | 36 | 211 | 13.80 | 0.304
SPM [36] | 14 | 29 | 7 | 2 | 12 | 41 | 53 | 28 | 186 | 14.63 | 0.312
AdvUnlearn [68] | 4 | 6 | 2 | 0 | 8 | 13 | 12 | 7 | 52 | 15.35 | 0.293
MACE [35] | 7 | 24 | 8 | 10 | 9 | 35 | 61 | 35 | 189 | 12.60 | 0.294
RECE [17] | 14 | 20 | 7 | 16 | 10 | 39 | 45 | 35 | 186 | 14.45 | 0.309
Ours | 3 | 2 | 3 | 4 | 9 | 6 | 5 | 7 | 39 | 14.69 | 0.308
", + "bbox": [ + 107, + 261, + 888, + 361 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/602c3eb35c2fa8367585234ebb91fea5cdc696316756854c7eb851125168e6ce.jpg", + "table_caption": [ + "Table 3. Exposure detection of generated images in the I2P dataset. The best two results are highlighted with bold and underline." + ], + "table_footnote": [], + "table_body": "
Method | Erase Concept | Relate Concept | Overall
 | CLIPe↓ | LPIPSe↑ | CLIPp↑ | LPIPSp↓ | CLIPd↑ | LPIPSd↑
SD v1.4 [46] | 0.310 | 0.000 | 0.310 | 0.000 | 0.000 | 0.000
ESD [14] | 0.216 | 0.444 | 0.296 | 0.241 | 0.080 | 0.202
SPM [36] | 0.266 | 0.268 | 0.308 | 0.074 | 0.042 | 0.195
AdvUnlearn [68] | 0.186 | 0.476 | 0.229 | 0.410 | 0.043 | 0.066
MACE [35] | 0.228 | 0.366 | 0.298 | 0.196 | 0.069 | 0.169
RECE [17] | 0.253 | 0.307 | 0.309 | 0.051 | 0.057 | 0.255
Ours | 0.160 | 0.471 | 0.303 | 0.126 | 0.143 | 0.345
", + "bbox": [ + 107, + 393, + 480, + 483 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/b672329a1595d809d60e89f72e36ff29a9487106c59f9c4058c673aa41e6ca9e.jpg", + "table_caption": [ + "Table 4. Quantitative evaluation of artist style erasure. The best two results are highlighted with bold and underline. Our ACE performs better in terms of thorough erasure and also demonstrates comparable prior preservation." + ], + "table_footnote": [], + "table_body": "
Method | UnlearnDiff↓ | P4D↓ | Ring-A-Bell↓ | Average↓
SD v1.4 [46] | 100% | 100% | 85.21% | 95.07%
ESD [14] | 73.05% | 74.47% | 38.73% | 62.08%
SPM [36] | 91.49% | 91.49% | 57.75% | 80.24%
AdvUnlearn [68] | 25.53% | 19.15% | 4.93% | 16.54%
MACE [35] | 64.53% | 66.67% | 14.79% | 48.66%
RECE [17] | 70.92% | 65.96% | 26.76% | 54.55%
Ours | 27.65% | 28.37% | 2.82% | 19.61%
", + "bbox": [ + 107, + 561, + 480, + 648 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Table 5. Robustness evaluation of nudity erasure. The best two results are highlighted with bold and underline. We report the attack success rates (ASR) of different adversarial methods under various erasure models. Our method achieved the second-best results without using adversarial training.", + "bbox": [ + 102, + 659, + 483, + 727 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "$\\mathrm{CLIP}_d = \\mathrm{CLIP}_p - \\mathrm{CLIP}_e$ and $\\mathrm{LPIPS}_d = \\mathrm{LPIPS}_e - \\mathrm{LPIPS}_p$ . Higher $\\mathrm{CLIP}_d$ and $\\mathrm{LPIPS}_d$ indicate better trade-off.", + "bbox": [ + 102, + 753, + 482, + 782 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "For generation evaluation, we adopt 33 text templates for each character concept, and five images are generated for each text template using the erased model. To evaluate the effectiveness of editing filtration, we adopt the widely used LEDs++ [4] and MasaCtrl [6] as editing methods. For each concept, we utilize Stable Diffusion 3 [12] to generate 15 images based on 3 text templates as initial images, and", + "bbox": [ + 102, + 785, + 483, + 888 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "then perform editing on them using erased models. Each image is manipulated using 11 editing texts, such as \"sun-glasses\". Finally, the CLIP score and LPIPS score are calculated based on edited images, concept text and original images. The final results are all reported by averaging 10 characters. More details can be found in Suppl.", + "bbox": [ + 511, + 398, + 892, + 486 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Experiment Results. Fig. 3 illustrates the comparison of generation results against competing methods. One can see that, our ACE can successfully erase the target concept (i.e., Donald Duck) while retaining the capability to generate related prior concepts (e.g., Mickey Mouse and Pikachu). In contrast, methods such as ESD, AdvUnlearn, and RECE generate examples with noticeable concept erosion. From Table 2, our ACE demonstrates a comparable CLIP score for both the erased and related concepts. This indicates that our ACE achieves a better trade-off between target concept erasure and prior concept preservation, as further validated by the overall metrics in Table 2 (a). SPM and MACE exhibit inferior performance in thoroughly erasing the target concept. While AdvUnlearn performs well at erasing the target concept, it shows poor performance in prior preservation.", + "bbox": [ + 511, + 489, + 892, + 720 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Fig. 4 further presents the comparison of editing results by LEDITS++. As shown in the figure, the competing method generates the erased concept with desired attributes after performing the editing on the given image, which is not wanted in practice. In contrast, our method can successfully hinder the editing of images containing erased concepts (e.g., Mickey), while keeping the editability of nontarget concepts (e.g., Hello Kitty and Elsa). Table 2 (b) reports the quantitative comparisons evaluated with LEDITS++. 
+ { + "type": "text", + "text": "For generation evaluation, we adopt 33 text templates for each character concept, and five images are generated for each text template using the erased model. To evaluate the effectiveness of editing filtration, we adopt the widely used LEDITS++ [4] and MasaCtrl [6] as editing methods. For each concept, we utilize Stable Diffusion 3 [12] to generate 15 images based on 3 text templates as initial images, and", + "bbox": [ + 102, + 785, + 483, + 888 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "then perform editing on them using the erased models. Each image is manipulated using 11 editing texts, such as \"sunglasses\". Finally, the CLIP score and LPIPS score are calculated based on the edited images, concept text, and original images. The final results are reported by averaging over the 10 characters. More details can be found in Suppl.", + "bbox": [ + 511, + 398, + 892, + 486 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Experiment Results. Fig. 3 illustrates the comparison of generation results against competing methods. One can see that our ACE can successfully erase the target concept (i.e., Donald Duck) while retaining the capability to generate related prior concepts (e.g., Mickey Mouse and Pikachu). In contrast, methods such as ESD, AdvUnlearn, and RECE generate examples with noticeable concept erosion. From Table 2, our ACE demonstrates a comparable CLIP score for both the erased and related concepts. This indicates that our ACE achieves a better trade-off between target concept erasure and prior concept preservation, as further validated by the overall metrics in Table 2 (a). SPM and MACE exhibit inferior performance in thoroughly erasing the target concept. While AdvUnlearn performs well at erasing the target concept, it shows poor performance in prior preservation.", + "bbox": [ + 511, + 489, + 892, + 720 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Fig. 4 further presents the comparison of editing results by LEDITS++. As shown in the figure, the competing methods generate the erased concept with the desired attributes after performing the editing on the given image, which is not wanted in practice. In contrast, our method can successfully hinder the editing of images containing erased concepts (e.g., Mickey), while keeping the editability of non-target concepts (e.g., Hello Kitty and Elsa). Table 2 (b) reports the quantitative comparisons evaluated with LEDITS++. Our method shows a significant improvement in erasing concepts, demonstrating its editing filtration ability.", + "bbox": [ + 509, + 726, + 893, + 888 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "23511", + "bbox": [ + 478, + 938, + 517, + 950 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "The comparison on MasaCtrl and more results can be found in Suppl.", + "bbox": [ + 102, + 104, + 485, + 133 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "4.2. Explicit Content Removal", + "text_level": 1, + "bbox": [ + 102, + 143, + 331, + 159 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Experiment Setup. To evaluate our ACE on explicit content removal, we employ \"nudity\" as the target concept to train the model. Following [36], we utilize the I2P dataset [48] to evaluate the performance of explicit content generation. Specifically, we select 856 text prompts with explicit labels, and each prompt generates one image. Then, Nudenet [2] is used to quantify the number of nude body parts in these generated images. Additionally, following [14, 36], we employ the COCO-30k Caption dataset [31] to evaluate the conditional generation capability of erased models. Specifically, we generate one image for each caption in COCO-30k, and FID [19] is calculated between the generated and natural images. The CLIP score is also calculated between the generated images and the captions to assess the semantic alignment of generated images. For robustness evaluation, we adopt UnlearnDiff [69], P4D [10] and Ring-A-Bell [53] as adversarial tools to calculate the attack success rate (ASR). Adversarial attacks were conducted on 142 sensitive texts provided by UnlearnDiff. More details can be found in Suppl.", + "bbox": [ + 102, + 166, + 483, + 458 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Experiment Results. From Table 5, we can see that our method has a lower success rate under adversarial attacks when trained only for \"nudity\", with only AdvUnlearn, which uses adversarial training, performing slightly better than ours. As shown in Fig. 5 and Table 3, our method can effectively erase nudity content and results in fewer exposed parts. In the generation evaluation, we dynamically attack the erased models using adversarial tools. As shown in Fig. 5, our method demonstrates excellent robustness. To further showcase our method's efficacy in editing filtration, we employ SD-Inpainting [46] as an editing tool to assess the exposure levels of images after different text-guided inpainting processes. In addition to conventional editing text (e.g., \"bikini\"), adversarial editing text from MMA-Diffusion [61] is also used for explicit editing. GroundingDINO [34] is used to detect clothing in the images. As shown in Fig. 5, our method successfully prevents inappropriate inpainting of exposed parts in masked areas, making it more practical for real-world applications.", + "bbox": [ + 102, + 459, + 483, + 736 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "More results for robustness and editing filtration evaluation can be found in Suppl.", + "bbox": [ + 102, + 737, + 483, + 767 + ], + "page_idx": 7 + },
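For the robustness numbers, the attack success rate is a simple ratio over the attacked prompts; the sketch below assumes a boolean success flag per prompt (e.g., derived from a Nudenet-style detector on the attacked generations) and is illustrative only:

```python
def attack_success_rate(success_flags):
    # Fraction of adversarial prompts that still yield detected unsafe content.
    return 100.0 * sum(success_flags) / len(success_flags)

# Toy outcomes; the real evaluation runs each attack (UnlearnDiff, P4D,
# Ring-A-Bell) over the 142 sensitive prompts provided by UnlearnDiff.
flags = [True, False, False, True]
print(f"ASR = {attack_success_rate(flags):.2f}%")  # -> ASR = 50.00%
```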
+ { + "type": "text", + "text": "4.3. Artistic Style Removal", + "text_level": 1, + "bbox": [ + 102, + 777, + 307, + 792 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Experiment Setup. To validate the performance of our model in unlearning styles, we choose ten representative artistic styles, including Leonardo da Vinci, Pablo Picasso, Michelangelo, Van Gogh, Salvador Dali, Claude Monet, Andy Warhol, Jackson Pollock, Frida Kahlo, and Georgia O'Keeffe. The evaluation process and metrics are similar to those of the IP character removal (Sec. 4.1).", + "bbox": [ + 102, + 800, + 483, + 888 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Experiment Results. Fig. 6 illustrates the results of erasing artistic styles. As shown in the figure, our method can erase the styles of Van Gogh and Andy Warhol from the T2I model, while generating other styles faithfully. From Table 4, our method achieves a better $\\mathrm{CLIP}_e$ on the erased concept.", + "bbox": [ + 511, + 119, + 893, + 194 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "4.4. Ablation Study", + "text_level": 1, + "bbox": [ + 511, + 210, + 661, + 226 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "We further conduct an ablation study on IP character erasure to evaluate the effectiveness of each component proposed in our ACE. Specifically, it contains the following variants: (1) Baseline: only adopting the ESD loss to fine-tune the model. (2) Baseline + Unc: employing unconditional erasure guidance alignment together with the ESD loss. (3) Baseline + Unc + $\\mathcal{L}_{\\mathrm{Cons}}$: adopting the ESD loss, unconditional erasure guidance alignment, and $\\mathcal{L}_{\\mathrm{Cons}}$. (4) Ours w/o $\\mathcal{L}_{\\mathrm{ESD}}$: our method without the ESD loss, which is still effective in concept erasure and editing filtration and performs better than ESD, indicating that our PG-UEG plays a crucial role in editing filtration. (5) Our full method: incorporating the ESD loss, prior-guided unconditional erasure guidance alignment, and $\\mathcal{L}_{\\mathrm{Cons}}$ together. From Table 1, we can see that: (i) introducing unconditional erasure guidance improves the model's editing filtration performance, indicating its effectiveness in preventing unwanted edits; (ii) using both unconditional erasure guidance and $\\mathcal{L}_{\\mathrm{Cons}}$ together leads to significant improvements in concept erasure and editing filtration, although it compromises the generation of related prior concepts; (iii) $\\mathcal{L}_{\\mathrm{PUnc}}$ enhances prior preservation without affecting editing filtration.", + "bbox": [ + 511, + 233, + 893, + 569 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "More ablation results are provided in Suppl.", + "bbox": [ + 531, + 571, + 815, + 585 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "5. Conclusion", + "text_level": 1, + "bbox": [ + 511, + 599, + 627, + 614 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "In this paper, we investigate the potential risks of unsafe content creation through image editing, and propose an Anti-Editing Concept Erasure (ACE) method to prevent the production of such content during both generation and editing. In addition to the conditional erasure guidance used by existing methods, we further propose an unconditional noise erasure technique to enhance anti-editing concept erasure. This guidance steers the noise prediction away from the target concept, thereby effectively preventing the production of images containing the target concept. 
Moreover, a concept preservation mechanism is introduced to maintain the generation prior of non-target concepts. Experiments demonstrate that our ACE can successfully erase specific concepts and exhibits superior filtration capabilities during both generation and editing compared to existing methods.", + "bbox": [ + 511, + 622, + 893, + 840 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Acknowledgement. The work was supported by National Key R&D Program of China under Grant No. 2022YFA1004100.", + "bbox": [ + 511, + 843, + 893, + 887 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "23512", + "bbox": [ + 478, + 938, + 519, + 950 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 104, + 102, + 197, + 118 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[1] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023.5", + "[2] P Bedapudi. Nudenet: Neural nets for nudity classification, detection and selective censoring, 2019. 8", + "[3] Manuel Brack, Felix Friedrich, Dominik Hintersdorf, Lukas Struppek, Patrick Schramowski, and Kristian Kersting. Sega: Instructing text-to-image models using semantic guidance. Advances in Neural Information Processing Systems, 36: 25365-25389, 2023. 2", + "[4] Manuel Brack, Felix Friedrich, Katharia Kornmeier, Linoy Tsaban, Patrick Schramowski, Kristian Kersting, and Apolinário Passos. Ledits++: Limitless image editing using text-to-image models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8861-8870, 2024. 1, 2, 4, 7", + "[5] Tim Brooks, Aleksander Holynski, and Alexei A Efros. Instructpix2pix: Learning to follow image editing instructions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18392-18402, 2023. 2", + "[6] Mingdeng Cao, Xintao Wang, Zhongang Qi, Ying Shan, Xiaohu Qie, and Yinqiang Zheng. Masactrl: Tuning-free mutual self-attention control for consistent image synthesis and editing. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 22560-22570, 2023. 2, 7, 3", + "[7] Huiwen Chang, Han Zhang, Jarred Barber, AJ Maschinot, Jose Lezama, Lu Jiang, Ming-Hsuan Yang, Kevin Murphy, William T Freeman, Michael Rubinstein, et al. Muse: Text-to-image generation via masked generative transformers. arXiv preprint arXiv:2301.00704, 2023. 2", + "[8] Hila Chefer, Yuval Alaluf, Yael Vinker, Lior Wolf, and Daniel Cohen-Or. Attend-and-excite: Attention-based semantic guidance for text-to-image diffusion models. ACM Transactions on Graphics (TOG), 42(4):1-10, 2023. 2", + "[9] Die Chen, Zhiwen Li, Mingyuan Fan, Cen Chen, Wenmeng Zhou, and Yaliang Li. Eiup: A training-free approach to erase non-compliant concepts conditioned on implicit unsafe prompts. arXiv preprint arXiv:2408.01014, 2024. 2", + "[10] Zhi-Yi Chin, Chieh-Ming Jiang, Ching-Chun Huang, PinYu Chen, and Wei-Chen Chiu. Prompting4debugging: Red-teaming text-to-image diffusion models by finding problematic prompts. arXiv preprint arXiv:2309.06135, 2023. 3, 8", + "[11] Anudeep Das, Vasisht Duddu, Rui Zhang, and N Asokan. *Espresso: Robust concept filtering in text-to-image models.* arXiv preprint arXiv:2404.19227, 2024. 
2", + "[12] Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas Müller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, et al. Scaling rectified flow transformers for high-resolution image synthesis, march 2024. URL http://arxiv.org/abs/2403.03206, 2024.1, 6,7" + ], + "bbox": [ + 106, + 127, + 483, + 886 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[13] Kailai Feng, Yabo Zhang, Haodong Yu, Zhilong Ji, Jinfeng Bai, Hongzhi Zhang, and Wangmeng Zuo. Vitaglyph: Vitalizing artistic typography with flexible dual-branch diffusion models. arXiv preprint arXiv:2410.01738, 2024. 1", + "[14] Rohit Gandikota, Joanna Materzynska, Jaden Fiitto-Kaufman, and David Bau. Erasing concepts from diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2426-2436, 2023. 2, 3, 4, 6, 7, 8", + "[15] Rohit Gandikota, Hadas Orgad, Yonatan Belinkov, Joanna Materzyńska, and David Bau. Unified concept editing in diffusion models. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 5111-5120, 2024. 2", + "[16] Hongcheng Gao, Tianyu Pang, Chao Du, Taihang Hu, Zhijie Deng, and Min Lin. Meta-unlearning on diffusion models: Preventing relearning unlearned concepts. arXiv preprint arXiv:2410.12777, 2024. 2", + "[17] Chao Gong, Kai Chen, Zhipeng Wei, Jingjing Chen, and YuGang Jiang. Reliable and efficient concept erasure of text-to-image diffusion models. arXiv preprint arXiv:2407.12383, 2024. 2, 6, 7, 4", + "[18] Luxi He, Yangsibo Huang, Weijia Shi, Tinghao Xie, Haotian Liu, Yue Wang, Luke Zettlemoyer, Chiyuan Zhang, Danqi Chen, and Peter Henderson. *Fantastic copyrighted beasts and how (not) to generate them.* arXiv preprint arXiv:2406.14526, 2024. 2", + "[19] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30, 2017. 8, 2", + "[20] Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598, 2022. 2, 3, 4", + "[21] Seunghoo Hong, Juhun Lee, and Simon S Woo. All but one: Surgical concept erasing with model preservation in text-to-image diffusion models. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 21143-21151, 2024. 2", + "[22] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021. 6", + "[23] Chi-Pin Huang, Kai-Po Chang, Chung-Ting Tsai, Yung-Hsuan Lai, Fu-En Yang, and Yu-Chiang Frank Wang. Receler: Reliable concept erasing of text-to-image diffusion models via lightweight erasers. arXiv preprint arXiv:2311.17717, 2023. 2", + "[24] Sanghyun Kim, Seohyeon Jung, Balhae Kim, Moonseok Choi, Jinwoo Shin, and Juho Lee. Safeguard text-to-image diffusion models with human feedback inversion. arXiv preprint arXiv:2407.21032, 2024.", + "[25] Nupur Kumari, Bingliang Zhang, Sheng-Yu Wang, Eli Shechtman, Richard Zhang, and Jun-Yan Zhu. Ablating concepts in text-to-image diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 22691-22702, 2023. 
2" + ], + "bbox": [ + 514, + 104, + 893, + 887 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "23513", + "bbox": [ + 478, + 938, + 517, + 950 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[26] Nupur Kumari, Bingliang Zhang, Richard Zhang, Eli Shechtman, and Jun-Yan Zhu. Multi-concept customization of text-to-image diffusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1931-1941, 2023. 2", + "[27] Fan Li, Zixiao Zhang, Yi Huang, Jianzhuang Liu, Renjing Pei, Bin Shao, and Songcen Xu. Magiceraser: Erasing any objects via semantics-aware control. arXiv preprint arXiv:2410.10207, 2024. 2", + "[28] Hang Li, Chengzhi Shen, Philip Torr, Volker Tresp, and Jindong Gu. Self-discovering interpretable diffusion latent directions for responsible text-to-image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12006-12016, 2024. 2", + "[29] Jia Li, Lijie Hu, Zhixian He, Jingfeng Zhang, Tianhang Zheng, and Di Wang. Text guided image editing with automatic concept locating and forgetting. arXiv preprint arXiv:2405.19708, 2024.", + "[30] Senmao Li, Joost van de Weijer, Taihang Hu, Fahad Shahbaz Khan, Qibin Hou, Yaxing Wang, and Jian Yang. Get what you want, not what you don't: Image content suppression for text-to-image diffusion models. arXiv preprint arXiv:2402.05375, 2024. 2", + "[31] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dólár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In Computer Vision-ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pages 740-755. Springer, 2014. 8, 2", + "[32] Ming Liu, Yuxiang Wei, Xiaohe Wu, Wangmeng Zuo, and Lei Zhang. Survey on leveraging pre-trained generative adversarial networks for image editing and restoration. Science China Information Sciences, 66(5):151101, 2023. 2", + "[33] Runtao Liu, Ashkan Khakzar, Jindong Gu, Qifeng Chen, Philip Torr, and Fabio Pizzati. Latent guard: a safety framework for text-to-image generation. In European Conference on Computer Vision, pages 93-109. Springer, 2025. 2", + "[34] Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, et al. Grounding dino: Marrying dino with grounded pre-training for open-set object detection. arXiv preprint arXiv:2303.05499, 2023. 8", + "[35] Shilin Lu, Zilan Wang, Leyang Li, Yanzhu Liu, and Adams Wai-Kin Kong. Mace: Mass concept erasure in diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6430-6440, 2024. 2, 6, 7, 4", + "[36] Mengyao Lyu, Yuhong Yang, Haiwen Hong, Hui Chen, Xuan Jin, Yuan He, Hui Xue, Jungong Han, and Guiguang Ding. One-dimensional adapter to rule them all: Concepts diffusion models and erasing applications. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7559-7568, 2024. 2, 3, 6, 7, 8, 4", + "[37] Ron Mokady, Amir Hertz, Kfir Aberman, Yael Pritch, and Daniel Cohen-Or. Null-text inversion for editing real images using guided diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6038–6047, 2023. 
2" + ], + "bbox": [ + 106, + 104, + 483, + 887 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[38] Chong Mou, Xintao Wang, Liangbin Xie, Yanze Wu, Jian Zhang, Zhongang Qi, and Ying Shan. T2i-adapter: Learning adapters to dig out more controllable ability for text-to-image diffusion models. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 4296-4304, 2024. 2", + "[39] Yong-Hyun Park, Sangdoo Yun, Jin-Hwa Kim, Junho Kim, Geonhui Jang, Yonghyun Jeong, Junghyo Jo, and Gayoung Lee. Direct unlearning optimization for robust and safe text-to-image models. arXiv preprint arXiv:2407.21035, 2024. 2", + "[40] Gaurav Parmar, Krishna Kumar Singh, Richard Zhang, Yijun Li, Jingwan Lu, and Jun-Yan Zhu. Zero-shot image-to-image translation. In ACM SIGGRAPH 2023 Conference Proceedings, pages 1-11, 2023. 2", + "[41] Minh Pham, Kelly O Marshall, Chinmay Hegde, and Niv Cohen. Robust concept erasure using task vectors. arXiv preprint arXiv:2404.03631, 2024. 2", + "[42] Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. Sdxl: Improving latent diffusion models for high-resolution image synthesis. arXiv preprint arXiv:2307.01952, 2023. 1", + "[43] Samuele Poppi, Tobia Poppi, Federico Cocchi, Marcella Cornia, Lorenzo Baraldi, Rita Cucchiara, et al. Safe-clip: Removing nsfw concepts from vision-and-language models. In Proceedings of the European Conference on Computer Vision, 2024. 2", + "[44] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748-8763. PMLR, 2021. 6", + "[45] Javier Rando, Daniel Paleka, David Lindner, Lennart Heim, and Florian Tramèr. Red-teaming the stable diffusion safety filter. arXiv preprint arXiv:2210.04610, 2022. 2", + "[46] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684-10695, 2022. 1, 2, 3, 7, 8, 4", + "[47] Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, and Kfir Aberman. Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 22500-22510, 2023. 2", + "[48] Patrick Schramowski, Manuel Brack, Björn Deiseroth, and Kristian Kersting. Safe latent diffusion: Mitigating inappropriate degeneration in diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22522-22531, 2023. 2, 8", + "[49] Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems, 35:25278-25294, 2022. 1" + ], + "bbox": [ + 514, + 104, + 892, + 887 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "23514", + "bbox": [ + 478, + 938, + 519, + 950 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[50] Jing Shi, Wei Xiong, Zhe Lin, and Hyun Joon Jung. 
Instantbooth: Personalized text-to-image generation without test-time finetuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8543-8552, 2024. 2", + "[51] Zhuan Shi, Jing Yan, Xiaoli Tang, Lingjuan Lyu, and Boi Faltings. Rlcp: A reinforcement learning-based copyright protection method for text-to-image diffusion model. arXiv preprint arXiv:2408.16634, 2024. 2", + "[52] Koushik Srivatsan, Fahad Shamshad, Muzammal Naseer, and Karthik Nandakumar. Stereo: Towards adversarially robust concept erasing from text-to-image generation models. arXiv preprint arXiv:2408.16807, 2024. 2", + "[53] Yu-Lin Tsai, Chia-Yi Hsu, Chulin Xie, Chih-Hsun Lin, Jia-You Chen, Bo Li, Pin-Yu Chen, Chia-Mu Yu, and Chun-Ying Huang. Ring-a-bell! how reliable are concept removal methods for diffusion models? arXiv preprint arXiv:2310.10012, 2023. 3, 8", + "[54] Narek Tumanyan, Michal Geyer, Shai Bagon, and Tali Dekel. Plug-and-play diffusion features for text-driven image-to-image translation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1921-1930, 2023. 2", + "[55] Haofan Wang, Matteo Spinelli, Qixun Wang, Xu Bai, Zekui Qin, and Anthony Chen. Instantstyle: Free lunch towards style-preserving in text-to-image generation. arXiv preprint arXiv:2404.02733, 2024. 1", + "[56] Yuxiang Wei, Yabo Zhang, Zhilong Ji, Jinfeng Bai, Lei Zhang, and Wangmeng Zuo. Elite: Encoding visual concepts into textual embeddings for customized text-to-image generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 15943-15953, 2023. 2", + "[57] Yuxiang Wei, Zhilong Ji, Jinfeng Bai, Hongzhi Zhang, Lei Zhang, and Wangmeng Zuo. Masterweaver: Taming editability and face identity for personalized text-to-image generation. In European Conference on Computer Vision, pages 252-271. Springer, 2025. 2", + "[58] Yuxiang Wei, Yiheng Zheng, Yabo Zhang, Ming Liu, Zhi-long Ji, Lei Zhang, and Wangmeng Zuo. Personalized image generation with deep generative models: A decade survey. arXiv preprint arXiv:2502.13081, 2025. 1", + "[59] Yongliang Wu, Shiji Zhou, Mingzhuo Yang, Lianzhe Wang, Wenbo Zhu, Heng Chang, Xiao Zhou, and Xu Yang. Unlearning concepts in diffusion model via concept domain correction and concept preserving gradient. arXiv preprint arXiv:2405.15304, 2024. 2", + "[60] Binxin Yang, Shuyang Gu, Bo Zhang, Ting Zhang, Xuejin Chen, Xiaoyan Sun, Dong Chen, and Fang Wen. Paint by example: Exemplar-based image editing with diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18381-18391, 2023. 2", + "[61] Yijun Yang, Ruiyuan Gao, Xiaosen Wang, Tsung-Yi Ho, Nan Xu, and Qiang Xu. Mma-diffusion: Multimodal attack on diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7737-7746, 2024. 8" + ], + "bbox": [ + 106, + 104, + 483, + 886 + ], + "page_idx": 10 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[62] Yijun Yang, Ruiyuan Gao, Xiao Yang, Jianyuan Zhong, and Qiang Xu. Guardt2i: Defending text-to-image models from adversarial prompts. arXiv preprint arXiv:2403.01446, 2024. 2", + "[63] Jaehong Yoon, Shoubin Yu, Vaidehi Patil, Huaxiu Yao, and Mohit Bansal. Safree: Training-free and adaptive guard for safe text-to-image and video generation. arXiv preprint arXiv:2410.12761, 2024.", + "[64] Gong Zhang, Kai Wang, Xingqian Xu, Zhangyang Wang, and Humphrey Shi. 
Forget-me-not: Learning to forget in text-to-image diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1755-1764, 2024. 2", + "[65] Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3836-3847, 2023. 2", + "[66] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 586-595, 2018. 6", + "[67] Yabo Zhang, Yuxiang Wei, Dongsheng Jiang, Xiaopeng Zhang, Wangmeng Zuo, and Qi Tian. Controlvideo: Training-free controllable text-to-video generation. arXiv preprint arXiv:2305.13077, 2023. 1", + "[68] Yimeng Zhang, Xin Chen, Jinghan Jia, Yihua Zhang, Chongyu Fan, Jiancheng Liu, Mingyi Hong, Ke Ding, and Sijia Liu. Defensive unlearning with adversarial training for robust concept erasure in diffusion models. arXiv preprint arXiv:2405.15234, 2024. 2, 6, 7, 4", + "[69] Yimeng Zhang, Jinghan Jia, Xin Chen, Aochuan Chen, Yihua Zhang, Jiancheng Liu, Ke Ding, and Sijia Liu. To generate or not? safety-driven unlearned diffusion models are still easy to generate unsafe images... for now. In European Conference on Computer Vision, pages 385-403. Springer, 2025. 3, 8", + "[70] Yabo Zhang, Xinpeng Zhou, Yihan Zeng, Hang Xu, Hui Li, and Wangmeng Zuo. Framepainter: Endowing interactive image editing with video diffusion priors. arXiv preprint arXiv:2501.08225, 2025. 2", + "[71] Mengnan Zhao, Lihe Zhang, Tianhang Zheng, Yuqiu Kong, and Baocai Yin. Separable multi-concept erasure from diffusion models. arXiv preprint arXiv:2402.05947, 2024. 2" + ], + "bbox": [ + 514, + 104, + 892, + 705 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "23515", + "bbox": [ + 478, + 938, + 517, + 950 + ], + "page_idx": 10 + } +] \ No newline at end of file diff --git a/2025/ACE_ Anti-Editing Concept Erasure in Text-to-Image Models/b5a87e68-e26d-46f8-a43c-98bb87af5dc7_model.json b/2025/ACE_ Anti-Editing Concept Erasure in Text-to-Image Models/b5a87e68-e26d-46f8-a43c-98bb87af5dc7_model.json new file mode 100644 index 0000000000000000000000000000000000000000..d1f6835d2f595b3f5eeb014b47921248f8f036f7 --- /dev/null +++ b/2025/ACE_ Anti-Editing Concept Erasure in Text-to-Image Models/b5a87e68-e26d-46f8-a43c-98bb87af5dc7_model.json @@ -0,0 +1,2334 @@ +[ + [ + { + "type": "header", + "bbox": [ + 0.107, + 0.003, + 0.182, + 0.043 + ], + "angle": 0, + "content": "CVF" + }, + { + "type": "header", + "bbox": [ + 0.237, + 0.0, + 0.812, + 0.045 + ], + "angle": 0, + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." 
+ }, + { + "type": "title", + "bbox": [ + 0.198, + 0.143, + 0.8, + 0.164 + ], + "angle": 0, + "content": "ACE: Anti-Editing Concept Erasure in Text-to-Image Models" + }, + { + "type": "text", + "bbox": [ + 0.175, + 0.199, + 0.82, + 0.219 + ], + "angle": 0, + "content": "Zihao Wang\\(^{1}\\) Yuxiang Wei\\(^{1}\\) Fan Li\\(^{2}\\) Renjing Pei\\(^{2}\\) Hang Xu\\(^{2}\\) Wangmeng Zuo\\(^{1,3}\\)(✉)" + }, + { + "type": "text", + "bbox": [ + 0.16, + 0.234, + 0.838, + 0.253 + ], + "angle": 0, + "content": "\\(^{1}\\)Harbin Institute of Technology \\(^{2}\\) Huawei Noah's Ark Lab \\(^{3}\\) Pazhou Lab (Huangpu)" + }, + { + "type": "image", + "bbox": [ + 0.108, + 0.285, + 0.436, + 0.379 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.111, + 0.395, + 0.434, + 0.5 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.113, + 0.511, + 0.427, + 0.525 + ], + "angle": 0, + "content": "(a) Common methods for creating copyrighted content" + }, + { + "type": "image", + "bbox": [ + 0.465, + 0.289, + 0.892, + 0.386 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.467, + 0.397, + 0.891, + 0.503 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.499, + 0.511, + 0.859, + 0.525 + ], + "angle": 0, + "content": "(b) Comparisons of Pikachu erasure on generation and editing" + }, + { + "type": "image_caption", + "bbox": [ + 0.103, + 0.528, + 0.894, + 0.582 + ], + "angle": 0, + "content": "Figure 1. (a) Given a text-to-image (T2I) model, there are two common ways to adopt it to create undesired content, i.e., generating new images based on text prompts or editing existing images. (b) Current concept erasure methods primarily focus on preventing the generation of erased concepts but fail to protect against image editing. In contrast, our ACE method can prevent the production of such content during both generation and editing processes. As shown, after erasing Pikachu, it successfully prevents the edits involving Pikachu." + }, + { + "type": "title", + "bbox": [ + 0.257, + 0.595, + 0.332, + 0.61 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.64, + 0.486, + 0.889 + ], + "angle": 0, + "content": "Recent advances in text-to-image diffusion models have significantly facilitated the generation of high-quality images, but also raise concerns about the illegal creation of harmful content, such as copyrighted images. Existing concept erasure methods achieve superior results in preventing the production of erased concepts from prompts, but typically perform poorly in preventing undesired editing. To address this issue, we propose an Anti-Editing Concept Erasure (ACE) method, which not only erases the target concept during generation but also filters it out during editing. Specifically, we propose to inject the erasure guidance into both the conditional and the unconditional noise predictions, enabling the model to effectively prevent the creation of erased concepts during both editing and generation. Furthermore, a stochastic correction guidance is introduced during training to address the erosion of unrelated concepts. 
We conducted erasure-editing experiments with" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.597, + 0.895, + 0.713 + ], + "angle": 0, + "content": "representative editing methods (i.e., LEDITS++ and MasaCtrl) to erase IP characters, and the results indicate that our ACE effectively filters out target concepts in both types of edits. Additional experiments on erasing explicit concepts and artistic styles further demonstrate that our ACE performs favorably against state-of-the-art methods. Our code will be publicly available at https://github.com/120L020904/ACE." + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.745, + 0.641, + 0.76 + ], + "angle": 0, + "content": "1. Introduction" + }, + { + "type": "text", + "bbox": [ + 0.511, + 0.771, + 0.894, + 0.889 + ], + "angle": 0, + "content": "Recent text-to-image (T2I) diffusion models trained with large-scale datasets [49] have demonstrated an impressive ability to generate high-quality images [12, 42, 46]. Their extraordinary creative capabilities enable users to produce high-quality images, and facilitate a wide range of applications, such as image editing [4, 58] and artistic creation [13, 55, 67]. However, alongside these advancements, a significant concern has arisen regarding the potential mis" + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.939, + 0.519, + 0.951 + ], + "angle": 0, + "content": "23505" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.104, + 0.105, + 0.483, + 0.149 + ], + "angle": 0, + "content": "use of these text-to-image models. For example, these models might be employed to generate unsafe content, such as copyrighted material or sexually explicit images." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.152, + 0.483, + 0.386 + ], + "angle": 0, + "content": "To prevent the creation of unsafe content, a straightforward solution is filtering training data and retraining the model. Nonetheless, such a process is both labor-intensive and resource-consuming. Post-hoc safety checkers [45, 46] and negative guidance [48] are alternative plug-and-play ways to filter undesired content, which heavily rely on pre-trained detectors or hand-crafted prompts. More recently, concept erasure methods [14, 17, 35, 36, 68] have been proposed to directly unlearn undesired concepts through model fine-tuning. These methods mainly focus on precisely removing the target concept, while faithfully preserving the generation of non-target concepts. For instance, ESD [14] injects the negative erase guidance into the target noise prediction to guide the image away from the target concept. SPM [36] employs a lightweight adapter to eliminate concepts and further adopts latent anchoring to preserve non-target concepts." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.389, + 0.483, + 0.593 + ], + "angle": 0, + "content": "Although these concept erasure methods can effectively prevent the generation of unsafe content given the corresponding text prompt, they can be circumvented by editing techniques. As illustrated in Fig. 1, after removing Pikachu from the model, users can still create an image of Pikachu wearing sunglasses by editing a Pikachu image using LEDITS++ [4]. This is because these methods are typically trained to remove the target concept from the conditional noise prediction (as shown in Fig. 2(b)), and rely on the input text (e.g., \"Pikachu\") to trigger the guard. Therefore, when editing the image with the text \"Add sunglasses\" as input, the guard fails. 
In practice, protection from editing should also be considered in concept erasure, which we refer to as editing filtration." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.596, + 0.483, + 0.889 + ], + "angle": 0, + "content": "To address the above issues, we propose an Anti-Editing Concept Erasure method, termed ACE, to prevent the production of unsafe content during both generation and editing. Based on the above analysis, we explore the capabilities of CFG [20], and propose incorporating erasure guidance into both the conditional and unconditional noise for anti-editing concept erasure. During erasure training, ACE additionally aligns the unconditional noise prediction of the tuned model with the proposed unconditional erasure guidance. After that, during generation or editing, the CFG prediction in the tuned model can implicitly mitigate the presence of the erased concept, thereby preventing the production of unwanted content. A prior constraint loss is further adopted to address overfitting during training. Additionally, to reduce the impact of the added target concept noise guidance on the generation of non-target concepts, we further incorporate a random correction guidance into the unconditional erasure guidance by subtracting randomly sampled prior concept noise guidance. With that, our ACE can thoroughly erase the target concept while preserving the generation of" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.105, + 0.892, + 0.178 + ], + "angle": 0, + "content": "non-target concepts. We conducted extensive evaluations across different erasure tasks, including intellectual property (IP), explicit content, and artistic style. Our method demonstrates significant advantages in both generation and editing filtration, showcasing its effectiveness." + }, + { + "type": "text", + "bbox": [ + 0.531, + 0.179, + 0.875, + 0.192 + ], + "angle": 0, + "content": "The contributions of this work can be summarized as:" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.194, + 0.892, + 0.251 + ], + "angle": 0, + "content": "- We investigate the potential risks of unsafe content creation through image editing, and propose an Anti-Editing Concept Erasure (ACE) method to prevent the production of such content during both generation and editing." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.253, + 0.892, + 0.31 + ], + "angle": 0, + "content": "- An unconditional erasure guidance is proposed for anti-editing concept erasure, along with a concept preservation mechanism to ensure the generation of non-target concepts." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.311, + 0.892, + 0.354 + ], + "angle": 0, + "content": "- Extensive experiments demonstrate that our ACE can successfully erase target concepts and exhibits superior filtration capabilities during both generation and editing." + }, + { + "type": "list", + "bbox": [ + 0.513, + 0.194, + 0.892, + 0.354 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.37, + 0.651, + 0.385 + ], + "angle": 0, + "content": "2. Related Work" + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.395, + 0.783, + 0.41 + ], + "angle": 0, + "content": "2.1. Concept Erasure in T2I Models" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.416, + 0.892, + 0.665 + ], + "angle": 0, + "content": "Concept erasure [9, 11, 15, 16, 18, 21, 23-25, 28-30, 33, 39, 41, 43, 48, 51, 52, 59, 62-64, 71] in T2I models has been the subject of numerous studies. Fine-tuning-based methods are an important line of work in concept erasure. 
ESD [14] suggests integrating negative guidance into the target concept noise through training. SPM [36] proposes prior correction based on the cosine similarity of text and utilizes a comparable LoRA approach to train the model. MACE [35] leverages a closed-form solution to amalgamate multiple erasure LoRA weights. RECE [17] employs analytical methods to search for inappropriate text embeddings and integrates them into an erasure closed-form solution. AdvUnlearn [68] incorporates adversarial training to improve the robustness of the erasure method. To the best of our knowledge, current fine-tuning methods lack consideration for editing filtration, thus rendering them ineffective in preventing customized edits of target-concept images." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.676, + 0.745, + 0.692 + ], + "angle": 0, + "content": "2.2. Text-driven Image Editing" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.698, + 0.892, + 0.889 + ], + "angle": 0, + "content": "Due to the broad generative capacities inherent in text-to-image DMs, the employment of DMs for image editing [3, 5, 7, 8, 26, 27, 32, 37, 38, 40, 47, 50, 54, 56, 57, 60, 65, 70] has progressively garnered traction. MasaCtrl [6] introduces source image data into the image editing process by substituting keys and values in the self-attention layer, thus modifying the actions of objects in the image. LEDITS++ [4] uses inference guidance and attention masks from the DM to confine editing regions while using DDPM inversion for enhanced restoration of the source image. Image editing enables users to customize images to meet their specific requirements using only a single image, posing new challenges in terms of security for generative models." + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.939, + 0.52, + 0.951 + ], + "angle": 0, + "content": "23506" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.122, + 0.109, + 0.497, + 0.232 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.19, + 0.237, + 0.413, + 0.248 + ], + "angle": 0, + "content": "(a) Calculation of Classifier-Free Guidance" + }, + { + "type": "image", + "bbox": [ + 0.522, + 0.111, + 0.868, + 0.226 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.557, + 0.237, + 0.831, + 0.248 + ], + "angle": 0, + "content": "(b) Concept Erasure on Conditional Noise Prediction" + }, + { + "type": "image", + "bbox": [ + 0.124, + 0.256, + 0.862, + 0.378 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.251, + 0.381, + 0.735, + 0.393 + ], + "angle": 0, + "content": "(c) Our ACE learns to erase the concept on both Conditional and Unconditional Noise Predictions" + }, + { + "type": "image_caption", + "bbox": [ + 0.103, + 0.413, + 0.894, + 0.482 + ], + "angle": 0, + "content": "Figure 2. Overview of our proposed ACE. (a) In CFG, both conditional noise and unconditional noise are adopted to generate high-quality images. (b) ESD [14] unlearns the target concept (e.g., Mickey) by aligning the conditional noise prediction with the conditional erasure guidance (CEG). (c) During fine-tuning, our ACE injects erasure guidance into both conditional and unconditional noise predictions, preventing the production of unsafe content during both generation and editing. PG-UEG denotes the prior-guided unconditional erasure guidance calculated following Eqn. 9." + },
Attacks in T2I Models" + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.515, + 0.484, + 0.69 + ], + "angle": 0, + "content": "As research on concept erasure in T2I models advances, red-team studies focusing on the robustness of erasure methods are also increasingly emerging. P4D [10] proposes a method of inserting adversarial text into regular input text to facilitate the production of insecure images using the T2I model. Ring-A-Bell [53] extracts the discrepancy vector between the embeddings of insecure concept text and secure concept text and employs it to derive the attack text embedding. UnlearnDiff [69] employs Projected Gradient Descent (PGD) to tackle the optimization challenge inherent in adversarial attacks and maps the optimized text embeddings onto discrete tokens." + }, + { + "type": "title", + "bbox": [ + 0.104, + 0.703, + 0.273, + 0.719 + ], + "angle": 0, + "content": "3. Proposed Method" + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.728, + 0.485, + 0.889 + ], + "angle": 0, + "content": "Given a target concept (e.g., Fukushima), the concept erasure task [14, 36] aims to unlearn it from pre-trained text-to-image (T2I) models, preventing the illegal use of these models to create copyrighted content. However, existing methods can be circumvented and fail to prevent users from producing new undesirable images through image editing, which raises new concerns. To address this, we propose an Anti-Editing Concept Erasure (ACE) method, as illustrated in Fig. 2, to prevent the production of undesirable content through both generation and editing. In this section, we first introduce the preliminaries of our method (Sec. 3.1)," + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.494, + 0.894, + 0.582 + ], + "angle": 0, + "content": "including the employed T2I model and concept erasure method. To address the editing issue, we further propose to erase the target concept from both conditional and unconditional predictions for anti-editing erasure (Sec. 3.2). Finally, to preserve the generation of non-target concepts, a prior concept preservation mechanism is introduced (Sec. 3.3)." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.589, + 0.65, + 0.603 + ], + "angle": 0, + "content": "3.1. Preliminaries" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.61, + 0.894, + 0.713 + ], + "angle": 0, + "content": "Stable Diffusion. In this work, we adopt Stable Diffusion 1.4 [46] as the text-to-image model, which is one of the representative T2I diffusion models. It first employs a variational autoencoder (VAE) to transform real images \\( x \\) into an image latent \\( z \\). Then, a text-conditioned diffusion model \\( \\epsilon_{\\theta} \\) is trained on the latent space to predict the noise added to the latent codes, and a mean-squared loss is adopted," + }, + { + "type": "equation", + "bbox": [ + 0.555, + 0.718, + 0.892, + 0.736 + ], + "angle": 0, + "content": "\\[\n\\mathcal{L}_{\\mathrm{LDM}} = \\mathbb{E}_{z_{t}, t, c, \\epsilon \\sim \\mathcal{N}(0, I)} \\left[ \\| \\epsilon - \\epsilon_{\\theta}(z_{t}, c, t) \\|_{2}^{2} \\right], \\tag{1}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.741, + 0.894, + 0.799 + ], + "angle": 0, + "content": "where \\(\\epsilon\\) denotes the unscaled noise and \\(c\\) is the text embedding encoded by the text encoders. \\(z_{t}\\) is the latent noised to time \\(t\\). During inference, a random Gaussian noise \\(z_{T}\\) is iteratively denoised to \\(z_{0}\\), and decoded into the final image." 
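To make the training objective in Eqn. 1 concrete, below is a minimal sketch of one denoising training step, assuming a generic diffusers/PyTorch-style setup; `vae`, `text_encoder`, `unet`, and `scheduler` are hypothetical placeholder modules, not the paper's released code.

```python
# Sketch of the LDM objective in Eqn. 1 (assumed diffusers-style components).
import torch
import torch.nn.functional as F

def ldm_loss(unet, vae, text_encoder, scheduler, image, prompt_ids):
    # Encode the real image x into an image latent z with the VAE
    # (0.18215 is the usual SD latent scaling factor).
    z = vae.encode(image).latent_dist.sample() * 0.18215
    c = text_encoder(prompt_ids)[0]  # text embedding c

    # Sample unscaled noise eps and a timestep t, then noise z to z_t.
    eps = torch.randn_like(z)
    t = torch.randint(0, scheduler.config.num_train_timesteps,
                      (z.shape[0],), device=z.device)
    z_t = scheduler.add_noise(z, eps, t)

    # Mean-squared error between the true and the predicted noise.
    eps_pred = unet(z_t, t, encoder_hidden_states=c).sample
    return F.mse_loss(eps_pred, eps)
```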
+ }, + { + "type": "text", + "bbox": [ + 0.512, + 0.799, + 0.894, + 0.857 + ], + "angle": 0, + "content": "Classifier-Free Guidance. To improve the quality of generated images, classifier-free guidance [20] is adopted during diffusion inference. Based on Tweedie's formula and the principles of diffusion models, we have:" + }, + { + "type": "equation", + "bbox": [ + 0.55, + 0.862, + 0.892, + 0.892 + ], + "angle": 0, + "content": "\\[\n\\nabla_{z_{t}} \\log p(c | z_{t}) = - \\frac{1}{\\sigma_{t}} \\left(\\epsilon_{\\theta}(z_{t}, c, t) - \\epsilon_{\\theta}(z_{t}, t)\\right). \\tag{2}\n\\]" + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.939, + 0.519, + 0.951 + ], + "angle": 0, + "content": "23507" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.114, + 0.107, + 0.478, + 0.401 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.103, + 0.414, + 0.484, + 0.456 + ], + "angle": 0, + "content": "Figure 3. Qualitative comparisons of IP character removal. Our ACE effectively erases the target concept while generating other concepts successfully." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.477, + 0.484, + 0.536 + ], + "angle": 0, + "content": "Here, \\(\\sigma_{t}\\) is a constant. To increase the probability of the text condition \\(c\\) appearing in the final image, the final noise prediction is a composition of the noise predictions from both conditional and unconditional texts," + }, + { + "type": "equation", + "bbox": [ + 0.16, + 0.543, + 0.483, + 0.56 + ], + "angle": 0, + "content": "\\[\n\\tilde{\\epsilon} = \\epsilon_{\\theta}(z_{t}, t) + \\omega \\left(\\epsilon_{\\theta}(z_{t}, c, t) - \\epsilon_{\\theta}(z_{t}, t)\\right), \\tag{3}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.565, + 0.483, + 0.595 + ], + "angle": 0, + "content": "where \\(\\epsilon_{\\theta}(z_t,t)\\) denotes the unconditional noise prediction, and \\(\\omega\\) is a hyperparameter controlling the guidance scale." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.595, + 0.484, + 0.682 + ], + "angle": 0, + "content": "Concept Erasure. Given a target concept indicated by text \\(c\\) (e.g., Pikachu), the concept erasure task finetunes the model to reduce the probability of generating images containing this concept. For example, ESD [14] removes the target concept from the conditional noise prediction, and a conditional erasure guidance (CEG) is defined as:" + }, + { + "type": "equation", + "bbox": [ + 0.145, + 0.69, + 0.483, + 0.707 + ], + "angle": 0, + "content": "\\[\n\\tilde{\\epsilon}_{c} = \\epsilon_{\\theta^{*}}(z_{t}, t) - \\eta_{c} \\left(\\epsilon_{\\theta^{*}}(z_{t}, c, t) - \\epsilon_{\\theta^{*}}(z_{t}, t)\\right), \\tag{4}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.712, + 0.484, + 0.786 + ], + "angle": 0, + "content": "where \\(\\epsilon_{\\theta^{\\star}}(\\cdot)\\) represents the original T2I model, and \\(z_{t}\\) is the encoded latent of an image containing the target concept \\(c\\). \\(\\eta_c\\) is a control scale hyperparameter. 
During training, ESD aligns the noise prediction of the target concept in the tuned model \\(\\epsilon_{\\theta}(z_t,c,t)\\) with the above CEG," + }, + { + "type": "equation", + "bbox": [ + 0.172, + 0.791, + 0.483, + 0.81 + ], + "angle": 0, + "content": "\\[\n\\mathcal{L}_{\\mathrm{ESD}} = \\mathbb{E}_{z_{t}, t, c} \\left[ \\| \\epsilon_{\\theta}(z_{t}, c, t) - \\tilde{\\epsilon}_{c} \\|_{2}^{2} \\right]. \\tag{5}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.815, + 0.484, + 0.889 + ], + "angle": 0, + "content": "After training, the erasure guidance \\(-\\nabla_{z_t}\\log p(c|z_t)\\) is introduced into the conditional noise prediction of the target concept. Therefore, the prediction of the tuned model will be guided away from the erased concept, preventing the generation of images containing the erased concept." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.104, + 0.773, + 0.12 + ], + "angle": 0, + "content": "3.2. Anti-Editing Concept Erasure" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.125, + 0.894, + 0.387 + ], + "angle": 0, + "content": "Editing Filtration. Although existing erasure methods can successfully prevent the generation of an erased concept through text prompts, they can be easily circumvented by editing techniques. As shown in Fig. 1, when utilizing the tuned ESD model to add sunglasses to an image of Pikachu using LEDITS++ [4], it successfully produces an image of Pikachu with sunglasses, raising potential copyright concerns. This is because these methods are typically trained to erase the concept from the noise prediction of the target concept (as shown in Fig. 2 (b)), and rely on inputting the concept text (e.g., \"Pikachu\" or \"Mickey\") to trigger the guard. However, during the editing process, the target concept may not necessarily be used in the text prompt. Therefore, these erasure methods fail to prevent the reconstruction of the erased concept. In practice, the erasure model should also have the ability to prevent the creation of undesired concepts through image editing, a feature we refer to as editing filtration." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.389, + 0.893, + 0.535 + ], + "angle": 0, + "content": "Unconditional Erasure Guidance. Current generation and editing methods heavily rely on classifier-free guidance [20] (CFG) to improve the quality of generated images, where the unconditional noise prediction plays an important role. To address the issue of editing filtration, we further propose to erase the target concept from both conditional and unconditional noise predictions, thereby preventing edited images from containing target concepts. Specifically, similar to ESD, we define the unconditional erasure guidance (UEG) as," + }, + { + "type": "equation", + "bbox": [ + 0.554, + 0.544, + 0.892, + 0.56 + ], + "angle": 0, + "content": "\\[\n\\tilde{\\epsilon}_{\\mathrm{u}} = \\epsilon_{\\theta^{*}}(z_{t}, t) + \\eta_{\\mathrm{u}} \\left(\\epsilon_{\\theta^{*}}(z_{t}, c, t) - \\epsilon_{\\theta^{*}}(z_{t}, t)\\right). 
\\tag{6}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.569, + 0.892, + 0.598 + ], + "angle": 0, + "content": "During training, we additionally align the unconditional noise prediction of the tuned model with the UEG," + }, + { + "type": "equation", + "bbox": [ + 0.59, + 0.607, + 0.892, + 0.625 + ], + "angle": 0, + "content": "\\[\n\\mathcal{L}_{\\mathrm{Unc}} = \\mathbb{E}_{z_{t}, t, c} \\left[ \\| \\epsilon_{\\theta}(z_{t}, t) - \\tilde{\\epsilon}_{\\mathrm{u}} \\|_{2}^{2} \\right]. \\tag{7}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.632, + 0.894, + 0.867 + ], + "angle": 0, + "content": "When the fine-tuned unconditional noise (our UEG) is subtracted in the CFG process, the erased concept guidance is subtracted as well, thereby reducing the probability of the erased concept appearing regardless of the input text prompt. The CFG noise prediction during inference will thus move away from the target concept for any text input, effectively preventing the production of images containing the target concept. As erasure models are usually trained on a small dataset, they are prone to overfitting, where the erasure guidance is introduced into the noise prediction for other conditional text prompts. This weakens the erasure effect and leads to incomplete erasure. To address the issue of overfitting, we introduce a prior constraint loss during the training process. Specifically, we regularize the prediction of the prior concept in the new model to be consistent with that of the original model:" + }, + { + "type": "equation", + "bbox": [ + 0.532, + 0.873, + 0.892, + 0.89 + ], + "angle": 0, + "content": "\\[\n\\mathcal{L}_{\\mathrm{Cons}} = \\mathbb{E}_{z_{t}, t, c_{p} \\in \\mathcal{C}_{p}} \\left[ \\| \\epsilon_{\\theta}(z_{t}, c_{p}, t) - \\epsilon_{\\theta^{*}}(z_{t}, c_{p}, t) \\|_{2}^{2} \\right], \\tag{8}\n\\]" + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.939, + 0.519, + 0.951 + ], + "angle": 0, + "content": "23508" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.118, + 0.106, + 0.881, + 0.334 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.103, + 0.343, + 0.897, + 0.386 + ], + "angle": 0, + "content": "Figure 4. Comparison of our ACE method with other methods in terms of editing filtration. After erasing Mickey Mouse, our method filters out edits involving Mickey Mouse while not affecting edits related to other IP characters. In contrast, the competing methods either fail to prevent editing (e.g., ESD, SPM, RECE, and MACE) or cannot perform editing on non-target concepts (e.g., AdvUnlearn)." + }, + { + "type": "image", + "bbox": [ + 0.111, + 0.405, + 0.887, + 0.63 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.103, + 0.645, + 0.895, + 0.7 + ], + "angle": 0, + "content": "Figure 5. Qualitative results of nudity removal. Figure (a) shows the results of explicit editing using SD-Inpainting, while Figure (b) displays images generated using text with explicit labels. Static adversarial text is used for editing, while dynamic adversarial attacks are employed for generation. It can be observed that our method effectively reduces exposure in both editing and generation tasks. Moreover, our method maintains its effectiveness when editing and generating using adversarial text, indicating its robustness." 
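As an illustration of how the two alignment targets in Eqns. 6-8 fit together, here is a schematic sketch; `eps_orig` is the frozen original model and `eps_tuned` the model being fine-tuned, both assumed to be callables `(z_t, c, t) -> noise` with `c=None` for the unconditional branch. This is a hedged reading of the formulas, not the authors' released implementation.

```python
# Sketch of the UEG target (Eqn. 6), its alignment loss (Eqn. 7), and the
# prior constraint loss (Eqn. 8); all names are placeholders.
import torch
import torch.nn.functional as F

def ace_losses(eps_tuned, eps_orig, z_t, t, c, c_prior, eta_u):
    with torch.no_grad():
        e_unc = eps_orig(z_t, None, t)
        e_cond = eps_orig(z_t, c, t)
        # Eqn. 6: push the unconditional prediction *towards* the target
        # concept, so that CFG's subtraction steers sampling away from it.
        ueg = e_unc + eta_u * (e_cond - e_unc)

    # Eqn. 7: align the tuned unconditional prediction with the UEG.
    loss_unc = F.mse_loss(eps_tuned(z_t, None, t), ueg)

    # Eqn. 8: keep predictions for a sampled prior concept c_p consistent
    # with the original model to mitigate overfitting.
    with torch.no_grad():
        target_prior = eps_orig(z_t, c_prior, t)
    loss_cons = F.mse_loss(eps_tuned(z_t, c_prior, t), target_prior)
    return loss_unc, loss_cons
```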
+ }, + { + "type": "text", + "bbox": [ + 0.103, + 0.713, + 0.487, + 0.875 + ], + "angle": 0, + "content": "where \\( c_p \\) represents a prior concept, and \\( \\mathcal{C}_p \\) represents the set of prior concepts. Intuitively, the larger the set of priors, the better it helps mitigate overfitting. However, it is challenging to traverse all the prior concepts, as the pre-trained models have a large general semantic space. Our goal is to preserve the concepts more likely to be affected, thus minimizing the influence on other concepts. We assume that these are concepts semantically related to the erased concept and use an LLM [1] to obtain them. Adding this loss ensures that the erasure guidance introduced during training aligns with the formulation in Eqn. 7." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.712, + 0.755, + 0.728 + ], + "angle": 0, + "content": "3.3. Prior Concept Preservation" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.742, + 0.895, + 0.89 + ], + "angle": 0, + "content": "In practice, training with the method proposed in Sec. 3.2 affects the generation prior of relevant concepts (see Sec. 4.4). This is because incorporating UEG not only decreases the probability of producing erased concepts, but also decreases the probability of adjacent concepts. Therefore, we reverse the mechanism of UEG by subtracting the guidance of prior concepts from the unconditional noise, which prevents the probability reduction of these concepts and minimizes concept forgetting. The prior concepts are sampled from the semantically related concepts obtained using the LLM," + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.939, + 0.52, + 0.951 + ], + "angle": 0, + "content": "23509" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.359, + 0.103, + 0.482, + 0.112 + ], + "angle": 0, + "content": "(a) Generation Prevention" + }, + { + "type": "table_caption", + "bbox": [ + 0.682, + 0.103, + 0.782, + 0.112 + ], + "angle": 0, + "content": "(b) Editing Filtration" + }, + { + "type": "table", + "bbox": [ + 0.108, + 0.113, + 0.891, + 0.196 + ], + "angle": 0, + "content": "
Method | Erase Concept | Prior Concept | Overall | Erase Concept | Prior Concept | Overall
ESD | Unc | Cons | Cor | CLIPe↓ | LPIPSe↑ | CLIPp↑ | LPIPSp↓ | CLIPd↑ | LPIPSd↑ | CLIPe↓ | LPIPSe↑ | CLIPp↑ | LPIPSp↓ | CLIPd↑ | LPIPSd↑
(1) | ✓ | - | - | - | 0.171 | 0.440 | 0.246 | 0.286 | 0.075 | 0.153 | 0.301 | 0.060 | 0.305 | 0.050 | 0.004 | 0.011
(2) | ✓ | ✓ | - | - | 0.166 | 0.551 | 0.283 | 0.236 | 0.117 | 0.315 | 0.285 | 0.149 | 0.305 | 0.057 | 0.019 | 0.092
(3) | ✓ | ✓ | ✓ | - | 0.159 | 0.507 | 0.254 | 0.337 | 0.095 | 0.170 | 0.274 | 0.168 | 0.300 | 0.077 | 0.026 | 0.091
(4) | - | ✓ | ✓ | ✓ | 0.211 | 0.303 | 0.293 | 0.199 | 0.082 | 0.104 | 0.273 | 0.175 | 0.301 | 0.079 | 0.028 | 0.096
(5) | ✓ | ✓ | ✓ | ✓ | 0.175 | 0.397 | 0.295 | 0.196 | 0.120 | 0.201 | 0.274 | 0.168 | 0.303 | 0.070 | 0.029 | 0.097
" + }, + { + "type": "table_caption", + "bbox": [ + 0.103, + 0.205, + 0.894, + 0.247 + ], + "angle": 0, + "content": "Table 1. Quantitative Evaluation of generation and editing after ablation. The best results are highlighted in bold. The results in the table indicate that the prior constraint loss function, as expected, enhanced the erasure capability of the trained model, while the correction guidance greatly mitigated concept erosion during the erasure process without affecting editing filtration." + }, + { + "type": "image", + "bbox": [ + 0.118, + 0.274, + 0.473, + 0.576 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.103, + 0.591, + 0.483, + 0.632 + ], + "angle": 0, + "content": "Figure 6. Qualitative results of artistic style removal. Our method erases the target style effectively and has minimal impact on other artistic styles." + }, + { + "type": "text", + "bbox": [ + 0.104, + 0.644, + 0.483, + 0.687 + ], + "angle": 0, + "content": "which is mentioned in the previous section. We call this new guidance prior-guided unconditional erasure guidance (PG-UEG), which is defined as:" + }, + { + "type": "equation", + "bbox": [ + 0.124, + 0.697, + 0.482, + 0.732 + ], + "angle": 0, + "content": "\\[\n\\begin{array}{l} \\tilde {\\epsilon} _ {\\mathrm {p u}} = \\epsilon_ {\\theta^ {*}} (z _ {t}, t) + \\eta_ {\\mathrm {u}} \\left(\\epsilon_ {\\theta^ {*}} (z _ {t}, c, t) - \\epsilon_ {\\theta^ {*}} (z _ {t}, t)\\right) \\\\ - \\eta_ {p} \\gamma_ {p} \\left(\\epsilon_ {\\theta^ {*}} \\left(z _ {t}, c _ {p}, t\\right) - \\epsilon_ {\\theta^ {*}} \\left(z _ {t}, t\\right)\\right), \\tag {9} \\\\ \\end{array}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.741, + 0.483, + 0.863 + ], + "angle": 0, + "content": "where \\(\\gamma_{p}\\) represents the guidance control term related to the prior retained concept. \\(c_{p}\\) refers to the same prior concept in \\(\\mathcal{L}_{\\mathrm{Cons}}\\) which are obtained through random sampling from the set \\(\\mathcal{C}_p\\). We calculate \\(\\gamma_{p}\\) using the CLIP model to measure the relevance of different prior concepts to the target concept image and then compare it to the relevance of the target concept text to its image. Specifically, \\(\\gamma_{p} = \\frac{\\mathrm{CLIP}(x,c_{p})}{\\mathrm{CLIP}(x,c)}\\). The new loss for our ACE is:" + }, + { + "type": "equation", + "bbox": [ + 0.143, + 0.872, + 0.483, + 0.89 + ], + "angle": 0, + "content": "\\[\n\\mathcal {L} _ {\\mathrm {P U n c}} = \\mathbb {E} _ {z _ {t}, t, c, c _ {p} \\in \\mathcal {C} _ {p}} \\left[ \\| \\epsilon_ {\\theta} (z _ {t}, t) - \\tilde {\\epsilon} _ {\\mathrm {p u}} \\| _ {2} ^ {2} \\right]. \\tag {10}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.272, + 0.891, + 0.301 + ], + "angle": 0, + "content": "The final training loss for our ACE is summarized as: \\(\\mathcal{L}_{\\mathrm{ACE}} = \\lambda_{\\mathrm{PUnc}}\\mathcal{L}_{\\mathrm{PUnc}} + \\lambda_{\\mathrm{Cons}}\\mathcal{L}_{\\mathrm{Cons}} + \\lambda_{\\mathrm{ESD}}\\mathcal{L}_{\\mathrm{ESD}}\\)" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.302, + 0.892, + 0.346 + ], + "angle": 0, + "content": "In our implementation, we adopt LORA [22] for parameter-efficient tuning, and the training process follows [14]. More details are provided in Suppl." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.361, + 0.641, + 0.377 + ], + "angle": 0, + "content": "4. 
Experiments" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.387, + 0.892, + 0.473 + ], + "angle": 0, + "content": "We conduct experiments on various tasks to evaluate our ACE, including IP character erasure, artistic style erasure, and nudity erasure. ESD [14], SPM [36], AdvUnlearn [68], MACE [35], and RECE [17] are adopted as competing methods. Unless otherwise specified, the experiments are conducted on Stable Diffusion v1.4." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.486, + 0.716, + 0.5 + ], + "angle": 0, + "content": "4.1. IP Character Removal" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.508, + 0.892, + 0.889 + ], + "angle": 0, + "content": "Experiment Setup. To assess our ACE on IP character removal, we employ ten iconic IP characters as examples, including Hello Kitty, Snoopy, Mickey Mouse, Elsa, Donald Duck, Dora the Explorer, Winnie the Pooh, Sonic the Hedgehog, Pikachu, and Fukushima. For each erasure method, we finetune ten models, with each model designed to erase one IP character. Following [14, 17], we adopt the CLIP [44] score and LPIPS [66] score as metrics for evaluation. The CLIP score calculates the similarity between the generated image and the concept text, while LPIPS calculates the perceptual difference between images generated by the erasure model and the original T2I model. \\(\\mathrm{CLIP}_e\\) calculates the CLIP similarity between images generated with the erased concept text and their corresponding text, where a lower value indicates more thorough erasure. \\(\\mathrm{CLIP}_p\\) calculates the relevance under prior concepts, where a higher value indicates better prior preservation. \\(\\mathrm{LPIPS}_e\\) calculates the LPIPS similarity between images generated with the erased concept text by the trained model and the original model, where a higher value indicates more thorough erasure. \\(\\mathrm{LPIPS}_p\\) calculates the similarity under prior concepts, in which lower values indicate better prior preservation. When erasing one concept, the other nine concepts are used as related concepts. Following RECE [17], we further calculate the overall scores between erased and related characters to measure the trade-off between concept erasure and prior preservation, where" + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.939, + 0.519, + 0.951 + ], + "angle": 0, + "content": "23510" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.308, + 0.103, + 0.444, + 0.113 + ], + "angle": 0, + "content": "(a) Generation Prevention" + }, + { + "type": "table_caption", + "bbox": [ + 0.663, + 0.103, + 0.772, + 0.114 + ], + "angle": 0, + "content": "(b) Editing Filtration" + }, + { + "type": "table", + "bbox": [ + 0.108, + 0.114, + 0.89, + 0.226 + ], + "angle": 0, + "content": "
Method | Erase Concept | Prior Concept | Overall | Erase Concept | Prior Concept | Overall
\(CLIP_e\downarrow\) | \(LPIPS_e\uparrow\) | \(CLIP_p\uparrow\) | \(LPIPS_p\downarrow\) | \(CLIP_d\uparrow\) | \(LPIPS_d\uparrow\) | \(CLIP_e\downarrow\) | \(LPIPS_e\uparrow\) | \(CLIP_p\uparrow\) | \(LPIPS_p\downarrow\) | \(CLIP_d\uparrow\) | \(LPIPS_d\uparrow\)
SD v1.4 [46] | 0.301 | 0.000 | 0.301 | 0.000 | 0.000 | 0.000 | 0.308 | 0.063 | 0.308 | 0.063 | 0.000 | 0.000
ESD [14] | 0.227 | 0.331 | 0.276 | 0.255 | 0.049 | 0.076 | 0.306 | 0.042 | 0.307 | 0.041 | 0.001 | 0.000
SPM [36] | 0.239 | 0.288 | 0.296 | 0.107 | 0.056 | 0.181 | 0.302 | 0.061 | 0.303 | 0.056 | 0.001 | 0.005
AdvUnlearn [68] | 0.166 | 0.468 | 0.209 | 0.403 | 0.043 | 0.065 | 0.310 | 0.011 | 0.311 | 0.010 | 0.001 | 0.001
MACE [35] | 0.250 | 0.317 | 0.298 | 0.134 | 0.048 | 0.184 | 0.303 | 0.056 | 0.304 | 0.054 | 0.001 | 0.002
RECE [17] | 0.176 | 0.426 | 0.257 | 0.270 | 0.081 | 0.156 | 0.300 | 0.066 | 0.303 | 0.054 | 0.003 | 0.012
Ours | 0.175 | 0.397 | 0.295 | 0.196 | 0.120 | 0.201 | 0.274 | 0.168 | 0.303 | 0.070 | 0.029 | 0.097
" + }, + { + "type": "table_caption", + "bbox": [ + 0.143, + 0.235, + 0.851, + 0.249 + ], + "angle": 0, + "content": "Table 2. Quantitative comparisons of IP character erasure. The best two results are highlighted with bold and underline." + }, + { + "type": "table", + "bbox": [ + 0.109, + 0.262, + 0.89, + 0.362 + ], + "angle": 0, + "content": "
Buttocks | Breast (F) | Genitalia (F) | Breast (M) | Genitalia (M) | Feet | Armpits | Belly | Total↓ | FID30k↓ | CLIP30k↑
SD v1.4 [46] | 61 | 204 | 37 | 38 | 16 | 70 | 241 | 183 | 850 | 14.07 | 0.313
ESD [14] | 15 | 29 | 5 | 11 | 10 | 37 | 68 | 36 | 211 | 13.80 | 0.304
SPM [36] | 14 | 29 | 72 | 12 | 4 | 15 | 32 | 8 | 186 | 14.63 | 0.312
AdvUnlearn [68] | 4 | 6 | 2 | 0 | 8 | 13 | 12 | 7 | 52 | 15.35 | 0.293
MACE [35] | 7 | 24 | 8 | 10 | 9 | 35 | 61 | 35 | 189 | 12.60 | 0.294
RECE [17] | 14 | 20 | 7 | 16 | 10 | 39 | 45 | 35 | 186 | 14.45 | 0.309
Ours | 3 | 2 | 3 | 4 | 9 | 6 | 5 | 7 | 39 | 14.69 | 0.308
" + }, + { + "type": "table_caption", + "bbox": [ + 0.119, + 0.371, + 0.877, + 0.385 + ], + "angle": 0, + "content": "Table 3. Exposure detection of generated images in the I2P dataset. The best two results are highlighted with bold and underline." + }, + { + "type": "table", + "bbox": [ + 0.108, + 0.395, + 0.481, + 0.484 + ], + "angle": 0, + "content": "
Erase Concept | Related Concept | Overall
CLIPe↓ | LPIPSe↑ | CLIPp↑ | LPIPSp↓ | CLIPd↑ | LPIPSd↑
SD v1.4 [46] | 0.310 | 0.000 | 0.310 | 0.000 | 0.000 | 0.000
ESD [14] | 0.216 | 0.444 | 0.296 | 0.241 | 0.080 | 0.202
SPM [36] | 0.266 | 0.268 | 0.308 | 0.074 | 0.042 | 0.195
AdvUnlearn [68] | 0.186 | 0.476 | 0.229 | 0.410 | 0.043 | 0.066
MACE [35] | 0.228 | 0.366 | 0.298 | 0.196 | 0.069 | 0.169
RECE [17] | 0.253 | 0.307 | 0.309 | 0.051 | 0.057 | 0.255
Ours | 0.160 | 0.471 | 0.303 | 0.126 | 0.143 | 0.345
" + }, + { + "type": "table_caption", + "bbox": [ + 0.103, + 0.493, + 0.484, + 0.547 + ], + "angle": 0, + "content": "Table 4. Quantitative evaluation of artist style erasure. The best two results are highlighted with bold and underline. Our ACE performs better in terms of thorough erasure and also demonstrates comparable prior preservation." + }, + { + "type": "table", + "bbox": [ + 0.108, + 0.562, + 0.481, + 0.65 + ], + "angle": 0, + "content": "
UnlearnDiff↓ | P4D↓ | Ring-A-Bell↓ | Average↓
SD v1.4 [46] | 100% | 100% | 85.21% | 95.07%
ESD [14] | 73.05% | 74.47% | 38.73% | 62.08%
SPM [36] | 91.49% | 91.49% | 57.75% | 80.24%
AdvUnlearn [68] | 25.53% | 19.15% | 4.93% | 16.54%
MACE [35] | 64.53% | 66.67% | 14.79% | 48.66%
RECE [17] | 70.92% | 65.96% | 26.76% | 54.55%
Ours | 27.65% | 28.37% | 2.82% | 19.61%
" + }, + { + "type": "table_caption", + "bbox": [ + 0.103, + 0.66, + 0.484, + 0.728 + ], + "angle": 0, + "content": "Table 5. Robustness evaluation of nudity erasure. The best two results are highlighted with bold and underline. We report the attack success rates (ASR) of different adversarial methods under various erasure models. Our method achieved the second-best results without using adversarial training." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.755, + 0.483, + 0.784 + ], + "angle": 0, + "content": "\\(\\mathrm{CLIP}_d = \\mathrm{CLIP}_p - \\mathrm{CLIP}_e\\) and \\(\\mathrm{LPIPS}_d = \\mathrm{LPIPS}_e - \\mathrm{LPIPS}_p\\). Higher \\(\\mathrm{CLIP}_d\\) and \\(\\mathrm{LPIPS}_d\\) indicate better trade-off." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.786, + 0.484, + 0.889 + ], + "angle": 0, + "content": "For generation evaluation, we adopt 33 text templates for each character concept, and five images are generated for each text template using the erased model. To evaluate the effectiveness of editing filtration, we adopt the widely used LEDs++ [4] and MasaCtrl [6] as editing methods. For each concept, we utilize Stable Diffusion 3 [12] to generate 15 images based on 3 text templates as initial images, and" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.399, + 0.893, + 0.487 + ], + "angle": 0, + "content": "then perform editing on them using erased models. Each image is manipulated using 11 editing texts, such as \"sun-glasses\". Finally, the CLIP score and LPIPS score are calculated based on edited images, concept text and original images. The final results are all reported by averaging 10 characters. More details can be found in Suppl." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.49, + 0.893, + 0.722 + ], + "angle": 0, + "content": "Experiment Results. Fig. 3 illustrates the comparison of generation results against competing methods. One can see that, our ACE can successfully erase the target concept (i.e., Donald Duck) while retaining the capability to generate related prior concepts (e.g., Mickey Mouse and Pikachu). In contrast, methods such as ESD, AdvUnlearn, and RECE generate examples with noticeable concept erosion. From Table 2, our ACE demonstrates a comparable CLIP score for both the erased and related concepts. This indicates that our ACE achieves a better trade-off between target concept erasure and prior concept preservation, as further validated by the overall metrics in Table 2 (a). SPM and MACE exhibit inferior performance in thoroughly erasing the target concept. While AdvUnlearn performs well at erasing the target concept, it shows poor performance in prior preservation." + }, + { + "type": "text", + "bbox": [ + 0.511, + 0.727, + 0.894, + 0.889 + ], + "angle": 0, + "content": "Fig. 4 further presents the comparison of editing results by LEDITS++. As shown in the figure, the competing method generates the erased concept with desired attributes after performing the editing on the given image, which is not wanted in practice. In contrast, our method can successfully hinder the editing of images containing erased concepts (e.g., Mickey), while keeping the editability of nontarget concepts (e.g., Hello Kitty and Elsa). Table 2 (b) reports the quantitative comparisons evaluated with LEDITS++. Our method shows a significant improvement in erasing concepts, demonstrating its ability to edit filtration." 
+ }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.939, + 0.518, + 0.951 + ], + "angle": 0, + "content": "23511" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.104, + 0.105, + 0.486, + 0.135 + ], + "angle": 0, + "content": "Comparisons on MasaCtrl and more results can be found in Suppl." + }, + { + "type": "title", + "bbox": [ + 0.104, + 0.145, + 0.333, + 0.16 + ], + "angle": 0, + "content": "4.2. Explicit Content Removal" + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.167, + 0.485, + 0.459 + ], + "angle": 0, + "content": "Experimental Setup. To evaluate our ACE on explicit content removal, we employ \"nudity\" as the target concept to train the model. Following [36], we utilize the I2P dataset [48] to evaluate the performance of explicit content generation. Specifically, we select 856 text prompts with explicit labels, and each prompt generates one image. Then, Nudenet [2] is used to quantify the number of nude body parts in these generated images. Additionally, following [14, 36], we employ the COCO-30k Caption dataset [31] to evaluate the conditional generation capability of erased models. Specifically, we generate one image for each caption in COCO-30k, and FID [19] is calculated between generated and natural images. The CLIP score is also calculated between the generated images and the captions to assess the semantic alignment of generated images. For robustness evaluation, we adopt UnlearnDiff [69], P4D [10], and Ring-A-Bell [53] as adversarial tools to calculate the attack success rate (ASR). Adversarial attacks are conducted on 142 sensitive texts provided by UnlearnDiff. More details can be found in Suppl." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.46, + 0.485, + 0.737 + ], + "angle": 0, + "content": "Experiment Results. From Table 5, we can see that our method has a lower success rate under adversarial attacks when trained only for \"nudity\", with only AdvUnlearn, which uses adversarial training, performing slightly better than ours. As shown in Fig. 5 and Table 3, our method can effectively erase nudity content and results in fewer exposed parts. In the generation evaluation, we dynamically attack the erased models using adversarial tools. As shown in Fig. 5, our method demonstrates excellent robustness. To further showcase our method's efficacy in editing filtration, we employ SD-Inpainting [46] as an editing tool to assess the exposure levels of images after different text-guided inpainting processes. In addition to conventional editing text (e.g., \"bikini\"), adversarial editing text from MMA-Diffusion [61] is also used for explicit editing. GroundingDINO [34] is used to detect clothing in the images. As shown in Fig. 5, our method successfully prevents inappropriate inpainting of exposed parts in masked areas, making it more practical for real-world applications." + }, + { + "type": "text", + "bbox": [ + 0.104, + 0.738, + 0.485, + 0.768 + ], + "angle": 0, + "content": "More results for robustness and editing filtration evaluation can be found in Suppl." + }, + { + "type": "title", + "bbox": [ + 0.104, + 0.778, + 0.308, + 0.794 + ], + "angle": 0, + "content": "4.3. Artistic Style Removal" + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.801, + 0.485, + 0.889 + ], + "angle": 0, + "content": "Experiment Setup. 
To validate the performance of our model in unlearning styles, we choose ten representative artistic styles, including Leonardo da Vinci, Pablo Picasso, Michelangelo, Van Gogh, Salvador Dali, Claude Monet, Andy Warhol, Jackson Pollock, Frida Kahlo, and Georgia O'Keeffe. The evaluation process and metrics are simi" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.105, + 0.779, + 0.119 + ], + "angle": 0, + "content": "lar to the IP character removal (Sec. 4.1)." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.121, + 0.894, + 0.195 + ], + "angle": 0, + "content": "Experiment Results. Fig. 6 illustrates the results of erasing artistic styles. As shown in the figure, our method can erase the styles of Van Gogh and Andy Warhol from the T2I model, while generating other styles faithfully. From Table 4, our method achieves a better \\(\\mathrm{CLIP}_e\\) on the erased concept." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.211, + 0.662, + 0.227 + ], + "angle": 0, + "content": "4.4. Ablation Study" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.234, + 0.895, + 0.57 + ], + "angle": 0, + "content": "We further conduct an ablation study on IP character erasure to evaluate the effectiveness of each component proposed in our ACE. Specifically, we evaluate the following variants: (1) Baseline: only adopting the ESD loss to finetune the model. (2) Baseline + Unc: employing unconditional erasure guidance alignment together with the ESD loss. (3) Baseline + Unc + \\(\\mathcal{L}_{\\mathrm{Cons}}\\): adopting the ESD loss, unconditional erasure guidance alignment, and \\(\\mathcal{L}_{\\mathrm{Cons}}\\). (4) Ours w/o \\(\\mathcal{L}_{\\mathrm{ESD}}\\): our method without the ESD loss; it is also effective in concept erasure and editing filtration, and performs better than ESD, indicating that our PG-UEG plays a crucial role in editing filtration. (5) Ours: the full method, incorporating the ESD loss, prior-guided unconditional erasure guidance alignment, and \\(\\mathcal{L}_{\\mathrm{Cons}}\\) together. From Table 1, we can see that: (i) Introducing unconditional erasure guidance improves the model's editing filtration performance, indicating its effectiveness in preventing unwanted edits. (ii) Using unconditional erasure guidance and \\(\\mathcal{L}_{\\mathrm{Cons}}\\) together leads to significant improvements in concept erasure and editing filtration performance, although it compromises the generation of related prior concepts. (iii) \\(\\mathcal{L}_{\\mathrm{PUnc}}\\) enhances prior preservation without affecting editing filtration." + }, + { + "type": "text", + "bbox": [ + 0.532, + 0.572, + 0.816, + 0.587 + ], + "angle": 0, + "content": "More ablation results are provided in Suppl." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.6, + 0.629, + 0.615 + ], + "angle": 0, + "content": "5. Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.623, + 0.894, + 0.842 + ], + "angle": 0, + "content": "In this paper, we investigate the potential risks of unsafe content creation through image editing, and propose an Anti-Editing Concept Erasure (ACE) method to prevent the production of such content during both generation and editing. In addition to the conditional erasure guidance used by existing methods, we further propose an unconditional noise erasure technique to enhance anti-editing concept erasure. 
This guidance steers the noise prediction away from the target concept, thereby effectively preventing the production of images containing the target concept. Moreover, a concept preservation mechanism is introduced to maintain the generation prior of non-target concepts. Experiments demonstrate that our ACE can successfully erase specific concepts and exhibits superior filtration capabilities during both generation and editing compared to existing methods." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.844, + 0.894, + 0.888 + ], + "angle": 0, + "content": "Acknowledgement. The work was supported by National Key R&D Program of China under Grant No. 2022YFA1004100." + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.939, + 0.521, + 0.951 + ], + "angle": 0, + "content": "23512" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.106, + 0.103, + 0.198, + 0.119 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.114, + 0.128, + 0.484, + 0.194 + ], + "angle": 0, + "content": "[1] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.114, + 0.197, + 0.484, + 0.224 + ], + "angle": 0, + "content": "[2] P Bedapudi. Nudenet: Neural nets for nudity classification, detection and selective censoring, 2019. 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.114, + 0.226, + 0.484, + 0.292 + ], + "angle": 0, + "content": "[3] Manuel Brack, Felix Friedrich, Dominik Hintersdorf, Lukas Struppek, Patrick Schramowski, and Kristian Kersting. Sega: Instructing text-to-image models using semantic guidance. Advances in Neural Information Processing Systems, 36: 25365-25389, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.114, + 0.295, + 0.484, + 0.375 + ], + "angle": 0, + "content": "[4] Manuel Brack, Felix Friedrich, Katharina Kornmeier, Linoy Tsaban, Patrick Schramowski, Kristian Kersting, and Apolinário Passos. Ledits++: Limitless image editing using text-to-image models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8861-8870, 2024. 1, 2, 4, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.114, + 0.378, + 0.484, + 0.444 + ], + "angle": 0, + "content": "[5] Tim Brooks, Aleksander Holynski, and Alexei A Efros. Instructpix2pix: Learning to follow image editing instructions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18392-18402, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.114, + 0.447, + 0.484, + 0.526 + ], + "angle": 0, + "content": "[6] Mingdeng Cao, Xintao Wang, Zhongang Qi, Ying Shan, Xiaohu Qie, and Yinqiang Zheng. Masactrl: Tuning-free mutual self-attention control for consistent image synthesis and editing. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 22560-22570, 2023. 2, 7, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.114, + 0.529, + 0.484, + 0.596 + ], + "angle": 0, + "content": "[7] Huiwen Chang, Han Zhang, Jarred Barber, AJ Maschinot, Jose Lezama, Lu Jiang, Ming-Hsuan Yang, Kevin Murphy, William T Freeman, Michael Rubinstein, et al. Muse: Text-to-image generation via masked generative transformers. arXiv preprint arXiv:2301.00704, 2023. 
2" + }, + { + "type": "ref_text", + "bbox": [ + 0.114, + 0.599, + 0.484, + 0.652 + ], + "angle": 0, + "content": "[8] Hila Chefer, Yuval Alaluf, Yael Vinker, Lior Wolf, and Daniel Cohen-Or. Attend-and-excite: Attention-based semantic guidance for text-to-image diffusion models. ACM Transactions on Graphics (TOG), 42(4):1-10, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.114, + 0.654, + 0.484, + 0.707 + ], + "angle": 0, + "content": "[9] Die Chen, Zhiwen Li, Mingyuan Fan, Cen Chen, Wenmeng Zhou, and Yaliang Li. Eiup: A training-free approach to erase non-compliant concepts conditioned on implicit unsafe prompts. arXiv preprint arXiv:2408.01014, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.71, + 0.483, + 0.763 + ], + "angle": 0, + "content": "[10] Zhi-Yi Chin, Chieh-Ming Jiang, Ching-Chun Huang, PinYu Chen, and Wei-Chen Chiu. Prompting4debugging: Red-teaming text-to-image diffusion models by finding problematic prompts. arXiv preprint arXiv:2309.06135, 2023. 3, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.766, + 0.483, + 0.806 + ], + "angle": 0, + "content": "[11] Anudeep Das, Vasisht Duddu, Rui Zhang, and N Asokan. *Espresso: Robust concept filtering in text-to-image models.* arXiv preprint arXiv:2404.19227, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.808, + 0.484, + 0.887 + ], + "angle": 0, + "content": "[12] Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas Müller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, et al. Scaling rectified flow transformers for high-resolution image synthesis, march 2024. URL http://arxiv.org/abs/2403.03206, 2024.1, 6,7" + }, + { + "type": "list", + "bbox": [ + 0.107, + 0.128, + 0.484, + 0.887 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.516, + 0.106, + 0.893, + 0.159 + ], + "angle": 0, + "content": "[13] Kailai Feng, Yabo Zhang, Haodong Yu, Zhilong Ji, Jinfeng Bai, Hongzhi Zhang, and Wangmeng Zuo. Vitaglyph: Vitalizing artistic typography with flexible dual-branch diffusion models. arXiv preprint arXiv:2410.01738, 2024. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.516, + 0.161, + 0.894, + 0.227 + ], + "angle": 0, + "content": "[14] Rohit Gandikota, Joanna Materzynska, Jaden Fiitto-Kaufman, and David Bau. Erasing concepts from diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2426-2436, 2023. 2, 3, 4, 6, 7, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.23, + 0.894, + 0.295 + ], + "angle": 0, + "content": "[15] Rohit Gandikota, Hadas Orgad, Yonatan Belinkov, Joanna Materzyńska, and David Bau. Unified concept editing in diffusion models. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 5111-5120, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.298, + 0.893, + 0.351 + ], + "angle": 0, + "content": "[16] Hongcheng Gao, Tianyu Pang, Chao Du, Taihang Hu, Zhijie Deng, and Min Lin. Meta-unlearning on diffusion models: Preventing relearning unlearned concepts. arXiv preprint arXiv:2410.12777, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.354, + 0.893, + 0.406 + ], + "angle": 0, + "content": "[17] Chao Gong, Kai Chen, Zhipeng Wei, Jingjing Chen, and YuGang Jiang. Reliable and efficient concept erasure of text-to-image diffusion models. arXiv preprint arXiv:2407.12383, 2024. 
2, 6, 7, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.409, + 0.893, + 0.475 + ], + "angle": 0, + "content": "[18] Luxi He, Yangsibo Huang, Weijia Shi, Tinghao Xie, Haotian Liu, Yue Wang, Luke Zettlemoyer, Chiyuan Zhang, Danqi Chen, and Peter Henderson. Fantastic copyrighted beasts and how (not) to generate them. arXiv preprint arXiv:2406.14526, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.477, + 0.893, + 0.543 + ], + "angle": 0, + "content": "[19] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30, 2017. 8, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.516, + 0.546, + 0.893, + 0.572 + ], + "angle": 0, + "content": "[20] Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598, 2022. 2, 3, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.516, + 0.574, + 0.893, + 0.638 + ], + "angle": 0, + "content": "[21] Seunghoo Hong, Juhun Lee, and Simon S Woo. All but one: Surgical concept erasing with model preservation in text-to-image diffusion models. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 21143-21151, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.516, + 0.642, + 0.893, + 0.696 + ], + "angle": 0, + "content": "[22] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.516, + 0.698, + 0.893, + 0.763 + ], + "angle": 0, + "content": "[23] Chi-Pin Huang, Kai-Po Chang, Chung-Ting Tsai, Yung-Hsuan Lai, Fu-En Yang, and Yu-Chiang Frank Wang. Receler: Reliable concept erasing of text-to-image diffusion models via lightweight erasers. arXiv preprint arXiv:2311.17717, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.516, + 0.766, + 0.893, + 0.819 + ], + "angle": 0, + "content": "[24] Sanghyun Kim, Seohyeon Jung, Balhae Kim, Moonseok Choi, Jinwoo Shin, and Juho Lee. Safeguard text-to-image diffusion models with human feedback inversion. arXiv preprint arXiv:2407.21032, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.516, + 0.821, + 0.894, + 0.888 + ], + "angle": 0, + "content": "[25] Nupur Kumari, Bingliang Zhang, Sheng-Yu Wang, Eli Shechtman, Richard Zhang, and Jun-Yan Zhu. Ablating concepts in text-to-image diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 22691-22702, 2023. 2" + }, + { + "type": "list", + "bbox": [ + 0.516, + 0.106, + 0.894, + 0.888 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.939, + 0.519, + 0.951 + ], + "angle": 0, + "content": "23513" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.106, + 0.484, + 0.173 + ], + "angle": 0, + "content": "[26] Nupur Kumari, Bingliang Zhang, Richard Zhang, Eli Shechtman, and Jun-Yan Zhu. Multi-concept customization of text-to-image diffusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1931-1941, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.174, + 0.484, + 0.226 + ], + "angle": 0, + "content": "[27] Fan Li, Zixiao Zhang, Yi Huang, Jianzhuang Liu, Renjing Pei, Bin Shao, and Songcen Xu. Magiceraser: Erasing any objects via semantics-aware control. 
arXiv preprint arXiv:2410.10207, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.228, + 0.484, + 0.294 + ], + "angle": 0, + "content": "[28] Hang Li, Chengzhi Shen, Philip Torr, Volker Tresp, and Jindong Gu. Self-discovering interpretable diffusion latent directions for responsible text-to-image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12006-12016, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.295, + 0.484, + 0.348 + ], + "angle": 0, + "content": "[29] Jia Li, Lijie Hu, Zhixian He, Jingfeng Zhang, Tianhang Zheng, and Di Wang. Text guided image editing with automatic concept locating and forgetting. arXiv preprint arXiv:2405.19708, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.349, + 0.484, + 0.415 + ], + "angle": 0, + "content": "[30] Senmao Li, Joost van de Weijer, Taihang Hu, Fahad Shahbaz Khan, Qibin Hou, Yaxing Wang, and Jian Yang. Get what you want, not what you don't: Image content suppression for text-to-image diffusion models. arXiv preprint arXiv:2402.05375, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.417, + 0.484, + 0.497 + ], + "angle": 0, + "content": "[31] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In Computer Vision-ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pages 740-755. Springer, 2014. 8, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.498, + 0.484, + 0.551 + ], + "angle": 0, + "content": "[32] Ming Liu, Yuxiang Wei, Xiaohe Wu, Wangmeng Zuo, and Lei Zhang. Survey on leveraging pre-trained generative adversarial networks for image editing and restoration. Science China Information Sciences, 66(5):151101, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.552, + 0.484, + 0.605 + ], + "angle": 0, + "content": "[33] Runtao Liu, Ashkan Khakzar, Jindong Gu, Qifeng Chen, Philip Torr, and Fabio Pizzati. Latent guard: a safety framework for text-to-image generation. In European Conference on Computer Vision, pages 93-109. Springer, 2025. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.606, + 0.484, + 0.671 + ], + "angle": 0, + "content": "[34] Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, et al. Grounding dino: Marrying dino with grounded pre-training for open-set object detection. arXiv preprint arXiv:2303.05499, 2023. 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.673, + 0.484, + 0.739 + ], + "angle": 0, + "content": "[35] Shilin Lu, Zilan Wang, Leyang Li, Yanzhu Liu, and Adams Wai-Kin Kong. Mace: Mass concept erasure in diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6430-6440, 2024. 2, 6, 7, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.741, + 0.484, + 0.82 + ], + "angle": 0, + "content": "[36] Mengyao Lyu, Yuhong Yang, Haiwen Hong, Hui Chen, Xuan Jin, Yuan He, Hui Xue, Jungong Han, and Guiguang Ding. One-dimensional adapter to rule them all: Concepts, diffusion models and erasing applications. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7559-7568, 2024. 
2, 3, 6, 7, 8, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.821, + 0.484, + 0.888 + ], + "angle": 0, + "content": "[37] Ron Mokady, Amir Hertz, Kfir Aberman, Yael Pritch, and Daniel Cohen-Or. Null-text inversion for editing real images using guided diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6038–6047, 2023. 2" + }, + { + "type": "list", + "bbox": [ + 0.107, + 0.106, + 0.484, + 0.888 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.516, + 0.106, + 0.893, + 0.173 + ], + "angle": 0, + "content": "[38] Chong Mou, Xintao Wang, Liangbin Xie, Yanze Wu, Jian Zhang, Zhongang Qi, and Ying Shan. T2i-adapter: Learning adapters to dig out more controllable ability for text-to-image diffusion models. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 4296-4304, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.516, + 0.174, + 0.893, + 0.239 + ], + "angle": 0, + "content": "[39] Yong-Hyun Park, Sangdoo Yun, Jin-Hwa Kim, Junho Kim, Geonhui Jang, Yonghyun Jeong, Junghyo Jo, and Gayoung Lee. Direct unlearning optimization for robust and safe text-to-image models. arXiv preprint arXiv:2407.21035, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.516, + 0.241, + 0.893, + 0.294 + ], + "angle": 0, + "content": "[40] Gaurav Parmar, Krishna Kumar Singh, Richard Zhang, Yijun Li, Jingwan Lu, and Jun-Yan Zhu. Zero-shot image-to-image translation. In ACM SIGGRAPH 2023 Conference Proceedings, pages 1-11, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.516, + 0.295, + 0.893, + 0.335 + ], + "angle": 0, + "content": "[41] Minh Pham, Kelly O Marshall, Chinmay Hegde, and Niv Cohen. Robust concept erasure using task vectors. arXiv preprint arXiv:2404.03631, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.516, + 0.336, + 0.893, + 0.402 + ], + "angle": 0, + "content": "[42] Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. Sdxl: Improving latent diffusion models for high-resolution image synthesis. arXiv preprint arXiv:2307.01952, 2023. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.516, + 0.403, + 0.893, + 0.469 + ], + "angle": 0, + "content": "[43] Samuele Poppi, Tobia Poppi, Federico Cocchi, Marcella Cornia, Lorenzo Baraldi, Rita Cucchiara, et al. Safe-clip: Removing nsfw concepts from vision-and-language models. In Proceedings of the European Conference on Computer Vision, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.516, + 0.471, + 0.893, + 0.55 + ], + "angle": 0, + "content": "[44] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748-8763. PMLR, 2021. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.516, + 0.551, + 0.893, + 0.592 + ], + "angle": 0, + "content": "[45] Javier Rando, Daniel Paleka, David Lindner, Lennart Heim, and Florian Tramèr. Red-teaming the stable diffusion safety filter. arXiv preprint arXiv:2210.04610, 2022. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.516, + 0.592, + 0.893, + 0.659 + ], + "angle": 0, + "content": "[46] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. 
In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684-10695, 2022. 1, 2, 3, 7, 8, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.516, + 0.66, + 0.893, + 0.739 + ], + "angle": 0, + "content": "[47] Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, and Kfir Aberman. Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 22500-22510, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.516, + 0.741, + 0.893, + 0.807 + ], + "angle": 0, + "content": "[48] Patrick Schramowski, Manuel Brack, Björn Deiseroth, and Kristian Kersting. Safe latent diffusion: Mitigating inappropriate degeneration in diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22522-22531, 2023. 2, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.516, + 0.808, + 0.893, + 0.888 + ], + "angle": 0, + "content": "[49] Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems, 35:25278-25294, 2022. 1" + }, + { + "type": "list", + "bbox": [ + 0.516, + 0.106, + 0.893, + 0.888 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.939, + 0.52, + 0.951 + ], + "angle": 0, + "content": "23514" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.106, + 0.484, + 0.171 + ], + "angle": 0, + "content": "[50] Jing Shi, Wei Xiong, Zhe Lin, and Hyun Joon Jung. Instantbooth: Personalized text-to-image generation without test-time finetuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8543-8552, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.173, + 0.484, + 0.227 + ], + "angle": 0, + "content": "[51] Zhuan Shi, Jing Yan, Xiaoli Tang, Lingjuan Lyu, and Boi Faltings. Rlcp: A reinforcement learning-based copyright protection method for text-to-image diffusion model. arXiv preprint arXiv:2408.16634, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.228, + 0.484, + 0.281 + ], + "angle": 0, + "content": "[52] Koushik Srivatsan, Fahad Shamshad, Muzammal Naseer, and Karthik Nandakumar. Stereo: Towards adversarially robust concept erasing from text-to-image generation models. arXiv preprint arXiv:2408.16807, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.282, + 0.484, + 0.347 + ], + "angle": 0, + "content": "[53] Yu-Lin Tsai, Chia-Yi Hsu, Chulin Xie, Chih-Hsun Lin, Jia-You Chen, Bo Li, Pin-Yu Chen, Chia-Mu Yu, and Chun-Ying Huang. Ring-a-bell! how reliable are concept removal methods for diffusion models? arXiv preprint arXiv:2310.10012, 2023. 3, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.349, + 0.484, + 0.416 + ], + "angle": 0, + "content": "[54] Narek Tumanyan, Michal Geyer, Shai Bagon, and Tali Dekel. Plug-and-play diffusion features for text-driven image-to-image translation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1921-1930, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.417, + 0.484, + 0.469 + ], + "angle": 0, + "content": "[55] Haofan Wang, Matteo Spinelli, Qixun Wang, Xu Bai, Zekui Qin, and Anthony Chen. 
Instantstyle: Free lunch towards style-preserving in text-to-image generation. arXiv preprint arXiv:2404.02733, 2024. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.471, + 0.484, + 0.549 + ], + "angle": 0, + "content": "[56] Yuxiang Wei, Yabo Zhang, Zhilong Ji, Jinfeng Bai, Lei Zhang, and Wangmeng Zuo. Elite: Encoding visual concepts into textual embeddings for customized text-to-image generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 15943-15953, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.551, + 0.484, + 0.618 + ], + "angle": 0, + "content": "[57] Yuxiang Wei, Zhilong Ji, Jinfeng Bai, Hongzhi Zhang, Lei Zhang, and Wangmeng Zuo. Masterweaver: Taming editability and face identity for personalized text-to-image generation. In European Conference on Computer Vision, pages 252-271. Springer, 2025. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.619, + 0.484, + 0.672 + ], + "angle": 0, + "content": "[58] Yuxiang Wei, Yiheng Zheng, Yabo Zhang, Ming Liu, Zhilong Ji, Lei Zhang, and Wangmeng Zuo. Personalized image generation with deep generative models: A decade survey. arXiv preprint arXiv:2502.13081, 2025. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.673, + 0.484, + 0.739 + ], + "angle": 0, + "content": "[59] Yongliang Wu, Shiji Zhou, Mingzhuo Yang, Lianzhe Wang, Wenbo Zhu, Heng Chang, Xiao Zhou, and Xu Yang. Unlearning concepts in diffusion model via concept domain correction and concept preserving gradient. arXiv preprint arXiv:2405.15304, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.741, + 0.484, + 0.819 + ], + "angle": 0, + "content": "[60] Binxin Yang, Shuyang Gu, Bo Zhang, Ting Zhang, Xuejin Chen, Xiaoyan Sun, Dong Chen, and Fang Wen. Paint by example: Exemplar-based image editing with diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18381-18391, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.821, + 0.484, + 0.887 + ], + "angle": 0, + "content": "[61] Yijun Yang, Ruiyuan Gao, Xiaosen Wang, Tsung-Yi Ho, Nan Xu, and Qiang Xu. Mma-diffusion: Multimodal attack on diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7737-7746, 2024. 8" + }, + { + "type": "list", + "bbox": [ + 0.107, + 0.106, + 0.484, + 0.887 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.516, + 0.106, + 0.893, + 0.158 + ], + "angle": 0, + "content": "[62] Yijun Yang, Ruiyuan Gao, Xiao Yang, Jianyuan Zhong, and Qiang Xu. Guardt2i: Defending text-to-image models from adversarial prompts. arXiv preprint arXiv:2403.01446, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.516, + 0.161, + 0.893, + 0.213 + ], + "angle": 0, + "content": "[63] Jaehong Yoon, Shoubin Yu, Vaidehi Patil, Huaxiu Yao, and Mohit Bansal. Safree: Training-free and adaptive guard for safe text-to-image and video generation. arXiv preprint arXiv:2410.12761, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.516, + 0.216, + 0.893, + 0.282 + ], + "angle": 0, + "content": "[64] Gong Zhang, Kai Wang, Xingqian Xu, Zhangyang Wang, and Humphrey Shi. Forget-me-not: Learning to forget in text-to-image diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1755-1764, 2024. 
2" + }, + { + "type": "ref_text", + "bbox": [ + 0.516, + 0.284, + 0.893, + 0.337 + ], + "angle": 0, + "content": "[65] Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3836-3847, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.516, + 0.339, + 0.893, + 0.405 + ], + "angle": 0, + "content": "[66] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 586-595, 2018. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.516, + 0.407, + 0.893, + 0.46 + ], + "angle": 0, + "content": "[67] Yabo Zhang, Yuxiang Wei, Dongsheng Jiang, Xiaopeng Zhang, Wangmeng Zuo, and Qi Tian. Controlvideo: Training-free controllable text-to-video generation. arXiv preprint arXiv:2305.13077, 2023. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.516, + 0.462, + 0.893, + 0.528 + ], + "angle": 0, + "content": "[68] Yimeng Zhang, Xin Chen, Jinghan Jia, Yihua Zhang, Chongyu Fan, Jiancheng Liu, Mingyi Hong, Ke Ding, and Sijia Liu. Defensive unlearning with adversarial training for robust concept erasure in diffusion models. arXiv preprint arXiv:2405.15234, 2024. 2, 6, 7, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.516, + 0.53, + 0.893, + 0.609 + ], + "angle": 0, + "content": "[69] Yimeng Zhang, Jinghan Jia, Xin Chen, Aochuan Chen, Yihua Zhang, Jiancheng Liu, Ke Ding, and Sijia Liu. To generate or not? safety-driven unlearned diffusion models are still easy to generate unsafe images... for now. In European Conference on Computer Vision, pages 385-403. Springer, 2025. 3, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.516, + 0.611, + 0.893, + 0.664 + ], + "angle": 0, + "content": "[70] Yabo Zhang, Xinpeng Zhou, Yihan Zeng, Hang Xu, Hui Li, and Wangmeng Zuo. Framepainter: Endowing interactive image editing with video diffusion priors. arXiv preprint arXiv:2501.08225, 2025. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.516, + 0.666, + 0.893, + 0.706 + ], + "angle": 0, + "content": "[71] Mengnan Zhao, Lihe Zhang, Tianhang Zheng, Yuqiu Kong, and Baocai Yin. Separable multi-concept erasure from diffusion models. arXiv preprint arXiv:2402.05947, 2024. 
2" + }, + { + "type": "list", + "bbox": [ + 0.516, + 0.106, + 0.893, + 0.706 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.939, + 0.519, + 0.951 + ], + "angle": 0, + "content": "23515" + } + ] +] \ No newline at end of file
diff --git a/2025/ACE_ Anti-Editing Concept Erasure in Text-to-Image Models/b5a87e68-e26d-46f8-a43c-98bb87af5dc7_origin.pdf b/2025/ACE_ Anti-Editing Concept Erasure in Text-to-Image Models/b5a87e68-e26d-46f8-a43c-98bb87af5dc7_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..96796fbbef2d17d65100ab3daaae54b737a98bfe
--- /dev/null
+++ b/2025/ACE_ Anti-Editing Concept Erasure in Text-to-Image Models/b5a87e68-e26d-46f8-a43c-98bb87af5dc7_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e632fb952138d60253a7461402259b386cd43f5e35be6a4654d055620407585d
+size 2764056
diff --git a/2025/ACE_ Anti-Editing Concept Erasure in Text-to-Image Models/full.md b/2025/ACE_ Anti-Editing Concept Erasure in Text-to-Image Models/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..3b00203c6ac65d9b09ac65eaac4141d64c599225
--- /dev/null
+++ b/2025/ACE_ Anti-Editing Concept Erasure in Text-to-Image Models/full.md
@@ -0,0 +1,319 @@
# ACE: Anti-Editing Concept Erasure in Text-to-Image Models

Zihao Wang $^{1}$ Yuxiang Wei $^{1}$ Fan Li $^{2}$ Renjing Pei $^{2}$ Hang Xu $^{2}$ Wangmeng Zuo $^{1,3(\text{✉})}$

$^{1}$ Harbin Institute of Technology $^{2}$ Huawei Noah's Ark Lab $^{3}$ Pazhou Lab (Huangpu)

![](images/66dbe4e4ed7a06542016d35cf3963ca3534f7f94e5e23386d93120d461c5f8cf.jpg)

![](images/617364e0d91317e5332d5a06a51f795c86fdd959c79bf1e99ace98740162ed58.jpg)
(a) Common methods for creating copyrighted content

![](images/f0aed767b6998d7a0b3e23933a3b397360c267a79a2e63f73119d1c0fab5ad7b.jpg)

![](images/26943a5647f5e5836853e2cf47e6502410040bed37427a4d87f2fdf2740ec254.jpg)
(b) Comparisons of Pikachu erasure on generation and editing
Figure 1. (a) Given a text-to-image (T2I) model, there are two common ways to use it to create undesired content, i.e., generating new images from text prompts or editing existing images. (b) Current concept erasure methods primarily focus on preventing the generation of erased concepts but fail to protect against image editing. In contrast, our ACE method can prevent the production of such content during both generation and editing. As shown, after erasing Pikachu, ACE successfully prevents edits involving Pikachu.

# Abstract

Recent advances in text-to-image diffusion models have significantly facilitated the generation of high-quality images, but have also raised concerns about the illegal creation of harmful content, such as copyrighted images. Existing concept erasure methods achieve superior results in preventing the production of erased concepts from prompts, but typically perform poorly in preventing undesired editing. To address this issue, we propose an Anti-Editing Concept Erasure (ACE) method, which not only erases the target concept during generation but also filters it out during editing. Specifically, we propose to inject the erasure guidance into both the conditional and the unconditional noise predictions, enabling the model to effectively prevent the creation of erased concepts during both editing and generation. Furthermore, a stochastic correction guidance is introduced during training to address the erosion of unrelated concepts.
We conducted erasure editing experiments with representative editing methods (i.e., LEDITS++ and MasaCtrl) to erase IP characters, and the results indicate that our ACE effectively filters out target concepts in both types of edits. Additional experiments on erasing explicit concepts and artistic styles further demonstrate that our ACE performs favorably against state-of-the-art methods. Our code will be publicly available at https://github.com/120L020904/ACE.

# 1. Introduction

Recent text-to-image (T2I) diffusion models trained with large-scale datasets [49] have demonstrated an impressive ability to generate high-quality images [12, 42, 46]. Their extraordinary creative capabilities enable users to produce high-quality images and facilitate a wide range of applications, such as image editing [4, 58] and artistic creation [13, 55, 67]. However, alongside these advancements, a significant concern has arisen regarding the potential misuse of these text-to-image models. For example, these models might be employed to generate unsafe content, such as copyrighted material or sexually explicit images.

To prevent the creation of unsafe content, a straightforward solution is filtering the training data and retraining the model. Nonetheless, such a process is both labor-intensive and resource-consuming. Post-hoc safety checkers [45, 46] and negative guidance [48] are alternative plug-and-play ways to filter undesired content, but they rely heavily on pre-trained detectors or hand-crafted prompts. More recently, concept erasure methods [14, 17, 35, 36, 68] have been proposed to directly unlearn undesired concepts through model finetuning. These methods mainly focus on precisely removing the target concept while faithfully preserving the generation of non-target concepts. For instance, ESD [14] injects negative erasure guidance into the target noise prediction to guide the image away from the target concept. SPM [36] employs a lightweight adapter to eliminate concepts and further adopts latent anchoring to preserve non-target concepts.

Although these concept erasure methods can effectively prevent the generation of unsafe content given the corresponding text prompt, they can be circumvented by editing techniques. As illustrated in Fig. 1, after removing Pikachu from the model, users can still create an image of Pikachu wearing sunglasses by editing a Pikachu image with LEDITS++ [4]. This is because these methods are typically trained to remove the target concept from the conditional noise prediction (as shown in Fig. 2(b)), and rely on the input text (e.g., "Pikachu") to trigger the guard. Therefore, when editing the image with the text "Add sunglasses" as input, the guard fails. In practice, protection from editing should also be considered in concept erasure, which we refer to as editing filtration.

To address the above issues, we propose an Anti-Editing Concept Erasure method, termed ACE, to prevent the production of unsafe content during both generation and editing. Based on the above analysis, we explore the capabilities of CFG [20], and propose incorporating erasure guidance into both the conditional and unconditional noise predictions for anti-editing concept erasure. During erasure training, ACE additionally aligns the unconditional noise prediction of the tuned model with the proposed unconditional erasure guidance. After that, during generation or editing, the CFG prediction in the tuned model can implicitly mitigate the presence of the erased concept, thereby preventing the production of unwanted content.
A prior constraint loss is further adopted to address overfitting during training. Additionally, to reduce the impact of the added target concept noise guidance on the generation of non-target concepts, we further incorporate a random correction guidance into the unconditional erasure guidance by subtracting randomly sampled prior concept noise guidance. With that, our ACE can thoroughly erase the target concept while preserving the generation of non-target concepts. We conducted extensive evaluations across different erasure tasks, including intellectual property (IP), explicit content, and artistic style. Our method demonstrates significant advantages in both generation and editing filtration, showcasing its effectiveness.

The contributions of this work can be summarized as follows:

- We investigate the potential risks of unsafe content creation through image editing, and propose an Anti-Editing Concept Erasure (ACE) method to prevent the production of such content during both generation and editing.
- An unconditional erasure guidance is proposed for anti-editing concept erasure, along with a concept preservation mechanism to ensure the generation of non-target concepts.
- Extensive experiments demonstrate that our ACE can successfully erase target concepts and exhibits superior filtration capabilities during both generation and editing.

# 2. Related Work

# 2.1. Concept Erasure in T2I Models

Concept erasure [9, 11, 15, 16, 18, 21, 23-25, 28-30, 33, 39, 41, 43, 48, 51, 52, 59, 62-64, 71] in T2I models has been the subject of numerous studies. Fine-tuning is an important approach to concept erasure. ESD [14] suggests integrating negative guidance into the target concept noise through training. SPM [36] proposes a prior correction based on the cosine similarity of texts and utilizes a comparable LoRA approach to train the model. MACE [35] leverages a closed-form solution to amalgamate multiple erasure LoRA weights. RECE [17] employs analytical methods to search for inappropriate text embeddings and integrates them into a closed-form erasure solution. AdvUnlearn [68] incorporates adversarial training to improve the robustness of the erasure method. To the best of our knowledge, current fine-tuning methods lack consideration for editing filtration, thus rendering them ineffective in preventing customized edits to target concept images.

# 2.2. Text-driven Image Editing

Due to the broad generative capacities inherent in text-to-image DMs, the employment of DMs for image editing [3, 5, 7, 8, 26, 27, 32, 37, 38, 40, 47, 50, 54, 56, 57, 60, 65, 70] has progressively garnered traction. MasaCtrl [6] introduces source image data into the image editing process by substituting keys and values in the self-attention layer, thus modifying the actions of objects in the image. LEDITS++ [4] uses inference guidance and attention masks from the DM to confine editing regions, while using DDPM inversion for enhanced restoration of the source image. Image editing enables users to customize images to meet their specific requirements using only a single image, posing new challenges in terms of security for generative models.
![](images/56a0845895c1ed71f7953c1f3585aa222381202b834edee24691efd4fe710399.jpg)
(a) Calculation of Classifier Free Guidance

![](images/cbe3f74c88efc5ab447a9f20f7f97f47a512ac356d7b19a1584ef65b6c3b8dd6.jpg)

![](images/340bd65aa6b021232b6fb1dd0489bd42985b7666fac10ce58f3a2d84d7b26f48.jpg)
(b) Concept Erasure on Conditional Noise Prediction
(c) Our ACE learns to erase the concept on both Conditional and Unconditional Noise Predictions
Figure 2. Overview of our proposed ACE. (a) In CFG, both the conditional noise and the unconditional noise are adopted to generate high-quality images. (b) ESD [14] unlearns the target concept (e.g., Mickey) by aligning the conditional noise prediction with the conditional erasure guidance (CEG). (c) During fine-tuning, our ACE injects erasure guidance into both the conditional and unconditional noise predictions, preventing the production of unsafe content during both generation and editing. PG-UEG denotes the prior-guided unconditional erasure guidance calculated following Eqn. (9).

# 2.3. Attacks in T2I Models

As research on concept erasure in T2I models advances, red-team studies probing the robustness of erasure methods are also increasingly emerging. P4D [10] proposes a method that inserts adversarial text into regular input text to facilitate the production of insecure images using the T2I model. Ring-A-Bell [53] extracts the discrepancy vector between the embeddings of insecure concept text and secure concept text and employs it to derive the attack text embedding. UnlearnDiff [69] employs Projected Gradient Descent (PGD) to tackle the optimization challenge inherent in adversarial attacks and maps the optimized text embeddings onto discrete tokens.

# 3. Proposed Method

Given a target concept (e.g., Fukushima), the concept erasure task [14, 36] aims to unlearn it from pre-trained text-to-image (T2I) models, preventing the illegal use of these models to create copyrighted content. However, existing methods can be circumvented and fail to prevent users from producing new undesirable images through image editing, which raises new concerns. To address this, we propose an Anti-Editing Concept Erasure (ACE) method, as illustrated in Fig. 2, to prevent the production of undesirable content through both generation and editing. In this section, we first introduce the prior knowledge of our method (Sec. 3.1), including the employed T2I model and the concept erasure method. To address the editing issue, we further propose to erase the target concept from both the conditional and unconditional predictions for anti-editing erasure (Sec. 3.2). Finally, to preserve the generation of non-target concepts, a prior concept preservation mechanism is introduced (Sec. 3.3).

# 3.1. Preliminaries

Stable Diffusion. In this work, we adopt Stable Diffusion 1.4 [46] as the text-to-image model, which is one of the representative T2I diffusion models. It first employs a variational autoencoder (VAE) to transform real images $x$ into an image latent $z$. Then, a text-conditioned diffusion model $\epsilon_{\theta}$ is trained on the latent space to predict the added noise, adopting a mean-squared loss,

$$
\mathcal{L}_{\mathrm{LDM}} = \mathbb{E}_{z_{t}, t, c, \epsilon \sim \mathcal{N}(0, I)}\left[\|\epsilon - \epsilon_{\theta}(z_{t}, c, t)\|_{2}^{2}\right], \tag{1}
$$

where $\epsilon$ denotes the unscaled noise and $c$ is the text embedding encoded by the text encoder. $z_{t}$ is the latent noised to time $t$.
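To make Eq. (1) concrete, the following is a minimal PyTorch sketch of this denoising objective. The `eps_model` interface, the toy stand-in network, and the tensor shapes are illustrative assumptions, not the actual Stable Diffusion implementation.

```python
# Minimal sketch of the latent-diffusion objective in Eq. (1); eps_model,
# alphas_cumprod, and the toy shapes below are illustrative assumptions.
import torch

def ldm_loss(eps_model, z0, c, alphas_cumprod):
    # Sample a timestep, diffuse z0 to z_t, and regress the added noise.
    b = z0.shape[0]
    t = torch.randint(0, alphas_cumprod.shape[0], (b,))
    a_t = alphas_cumprod[t].view(b, 1, 1, 1)
    eps = torch.randn_like(z0)                        # epsilon ~ N(0, I)
    z_t = a_t.sqrt() * z0 + (1.0 - a_t).sqrt() * eps  # forward diffusion
    return ((eps - eps_model(z_t, c, t)) ** 2).mean()

# Toy epsilon-predictor standing in for the text-conditioned U-Net.
eps_model = lambda z_t, c, t: torch.zeros_like(z_t)
loss = ldm_loss(eps_model, torch.randn(2, 4, 8, 8), None,
                torch.linspace(0.999, 0.01, 1000))
```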
During inference, a random Gaussian noise $z_{T}$ is iteratively denoised to $z_{0}$, and decoded to the final image.

Classifier-Free Guidance. To improve the quality of generated images, classifier-free guidance [20] is adopted during diffusion inference. Based on Tweedie's formula and the principles of diffusion models, we have:

$$
\nabla_{z_{t}} \log p(c|z_{t}) = -\frac{1}{\sigma_{t}}\left(\epsilon_{\theta}(z_{t}, c, t) - \epsilon_{\theta}(z_{t}, t)\right). \tag{2}
$$

![](images/27190eb892c6b8bb4fe387846155a26a0b9452aa12300fb0627f3354f91f094c.jpg)
Figure 3. Qualitative comparisons of IP character removal. Our ACE effectively erases the target concept while generating other concepts successfully.

Here, $\sigma_{t}$ is a constant. To increase the probability of the text condition $c$ appearing in the final image, the final noise prediction is the composition of the noise predictions from both conditional and unconditional texts,

$$
\tilde{\epsilon} = \epsilon_{\theta}(z_{t}, t) + \omega\left(\epsilon_{\theta}(z_{t}, c, t) - \epsilon_{\theta}(z_{t}, t)\right), \tag{3}
$$

where $\epsilon_{\theta}(z_{t}, t)$ denotes the unconditional noise prediction, and $\omega$ is a hyperparameter controlling the guidance scale.

Concept Erasure. Given a target concept indicated by text $c$ (e.g., Pikachu), the concept erasure task finetunes the model to reduce the probability of generating images containing this concept. For example, ESD [14] removes the target concept from the conditional noise prediction, and a conditional erasure guidance (CEG) is defined as:

$$
\tilde{\epsilon}_{c} = \epsilon_{\theta^{*}}(z_{t}, t) - \eta_{c}\left(\epsilon_{\theta^{*}}(z_{t}, c, t) - \epsilon_{\theta^{*}}(z_{t}, t)\right), \tag{4}
$$

where $\epsilon_{\theta^{*}}(\cdot)$ represents the original T2I model, and $z_{t}$ is the encoded latent image containing the target concept $c$. $\eta_{c}$ is a control scale hyperparameter. During training, ESD aligns the noise prediction of the target concept in the tuned model, $\epsilon_{\theta}(z_{t}, c, t)$, with the above CEG,

$$
\mathcal{L}_{\mathrm{ESD}} = \mathbb{E}_{z_{t}, t, c}\left[\|\epsilon_{\theta}(z_{t}, c, t) - \tilde{\epsilon}_{c}\|_{2}^{2}\right]. \tag{5}
$$

After the training, the erasure guidance $-\nabla_{z_{t}}\log p(c|z_{t})$ is introduced into the conditional noise prediction of the target concept. Therefore, the prediction of the tuned model will be guided away from the erased concept, preventing the generation of images containing it.

# 3.2. Anti-Editing Concept Erasure

Editing Filtration. Although existing erasure methods can successfully prevent the generation of an erased concept through text prompts, they can be easily circumvented by editing techniques. As shown in Fig. 1, when using the tuned ESD model to add sunglasses to an image of Pikachu with LEDITS++ [4], it still produces an image of Pikachu with sunglasses, raising potential copyright concerns. This is because these methods are typically trained to erase the concept from the noise prediction of the target concept (as shown in Fig. 2 (b)), and rely on inputting the concept text (e.g., "Pikachu" or "Mickey") to trigger the guard. However, during the editing process, the target concept may not necessarily be used in the text prompt. Therefore, these erasure methods fail to prevent the reconstruction of the erased concept. In practice, the erasure model should also have the ability to prevent the creation of undesired concepts through image editing, a feature we refer to as editing filtration.
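The role of the unconditional branch is easy to see in a small sketch of the CFG composition in Eq. (3); the `eps_model` interface, with `c=None` denoting the empty prompt, is an assumption for illustration.

```python
import torch

def cfg_prediction(eps_model, z_t, c, t, w=7.5):
    # Eq. (3): compose unconditional and conditional noise predictions.
    eps_uncond = eps_model(z_t, None, t)   # unconditional branch
    eps_cond = eps_model(z_t, c, t)        # conditional branch
    return eps_uncond + w * (eps_cond - eps_uncond)

# During editing, c is a prompt like "Add sunglasses" that never mentions the
# erased concept, so a guard injected only into eps(z_t, "Pikachu", t) is
# never queried. The unconditional branch, however, enters every step.
```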
Unconditional Erasure Guidance. Current generation and editing methods rely heavily on classifier-free guidance [20] (CFG) to improve the quality of generated images, in which the unconditional noise prediction plays an important role. To address the issue of editing filtration, we further propose to erase the target concept from both the conditional and unconditional noise predictions, thereby preventing edited images from containing target concepts. Specifically, similar to ESD, we define the unconditional erasure guidance (UEG) as,

$$
\tilde{\epsilon}_{\mathrm{u}} = \epsilon_{\theta^{*}}(z_{t}, t) + \eta_{\mathrm{u}}\left(\epsilon_{\theta^{*}}(z_{t}, c, t) - \epsilon_{\theta^{*}}(z_{t}, t)\right). \tag{6}
$$

During training, we additionally align the unconditional noise prediction of the tuned model with the UEG,

$$
\mathcal{L}_{\mathrm{Unc}} = \mathbb{E}_{z_{t}, t, c}\left[\|\epsilon_{\theta}(z_{t}, t) - \tilde{\epsilon}_{\mathrm{u}}\|_{2}^{2}\right]. \tag{7}
$$

When the fine-tuned unconditional noise (our UEG) is subtracted in the CFG process, the erased-concept guidance is subtracted as well, reducing the probability of the erased concept appearing regardless of the input text prompt. The CFG noise prediction during inference thus moves away from the target concept for any text input, effectively preventing the production of images containing the target concept. As erasure models are usually trained on a small dataset, they are prone to overfitting, where the erasure guidance leaks into the noise predictions for other conditional text prompts. This weakens the erasure effect and leads to incomplete erasure. To address the issue of overfitting, we introduce a prior constraint loss during the training process. Specifically, we regularize the prediction of the prior concept in the new model to be consistent with that of the original model:

$$
\mathcal{L}_{\mathrm{Cons}} = \mathbb{E}_{z_{t}, t, c_{p} \in \mathcal{C}_{p}}\left[\|\epsilon_{\theta}(z_{t}, c_{p}, t) - \epsilon_{\theta^{*}}(z_{t}, c_{p}, t)\|_{2}^{2}\right], \tag{8}
$$

where $c_{p}$ represents a prior concept, and $\mathcal{C}_{p}$ represents the set of prior concepts. Intuitively, the larger the set of priors, the better it helps mitigate overfitting. However, it is challenging to traverse all prior concepts, as pre-trained models have a large general semantic space. Our goal is to preserve the concepts most likely to be affected, thus minimizing the influence on other concepts. We assume that these are concepts semantically related to the erased concept and use an LLM [1] to obtain them. Adding this loss ensures that the erasure guidance introduced during training aligns with our formulation in Eqn. (7).

![](images/bb0ea50cff7f9b2c8b2419c85b5296709314c8902c1c00d0bf08c525e1efaf57.jpg)
Figure 4. Comparison of our ACE method with other methods in terms of editing filtration. After erasing Mickey Mouse, our method filters out edits involving Mickey Mouse while not affecting edits related to other IP characters. In contrast, the competing methods either fail to prevent editing (e.g., ESD, SPM, RECE, and MACE) or cannot perform editing on non-target concepts (e.g., AdvUnlearn).

![](images/5f7264cadb1e3e303dc91c35a8afcb0694688495744182e74bb856025ae21ba8.jpg)
Figure 5. Qualitative results of nudity removal. Figure (a) shows the results of explicit editing using SD-Inpainting, while Figure (b) displays images generated from text with explicit labels. Static adversarial text is used for editing, while dynamic adversarial attacks are employed for generation. It can be observed that our method effectively reduces exposure in both editing and generation tasks. Moreover, our method maintains its effectiveness when editing and generating using adversarial text, indicating its robustness.
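Putting Eqs. (4)-(8) together, a minimal PyTorch sketch of the training targets and alignment losses follows. The `tuned_model`/`orig_model` callables, the `None`-as-empty-prompt convention, and the default scales are illustrative assumptions rather than the released implementation.

```python
import torch

def erasure_targets(orig_model, z_t, c, t, eta_c=1.0, eta_u=1.0):
    # Guidance targets from the frozen original model eps_theta*.
    with torch.no_grad():
        e_u = orig_model(z_t, None, t)  # unconditional prediction
        e_c = orig_model(z_t, c, t)     # target-concept prediction
    ceg = e_u - eta_c * (e_c - e_u)     # Eq. (4): push eps(z_t, c, t) away from c
    ueg = e_u + eta_u * (e_c - e_u)     # Eq. (6): add c-guidance to the uncond branch
    return ceg, ueg

def alignment_losses(tuned_model, orig_model, z_t, c, c_p, t):
    # L_ESD (Eq. 5), L_Unc (Eq. 7), and the prior constraint L_Cons (Eq. 8).
    ceg, ueg = erasure_targets(orig_model, z_t, c, t)
    l_esd = ((tuned_model(z_t, c, t) - ceg) ** 2).mean()
    l_unc = ((tuned_model(z_t, None, t) - ueg) ** 2).mean()
    with torch.no_grad():
        prior = orig_model(z_t, c_p, t)  # keep prior concepts unchanged
    l_cons = ((tuned_model(z_t, c_p, t) - prior) ** 2).mean()
    return l_esd, l_unc, l_cons
```

Because CFG subtracts the unconditional prediction at inference (Eq. (3)), aligning that prediction with the UEG implicitly subtracts the erased-concept guidance at every sampling step, whatever the prompt.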
# 3.3. Prior Concept Preservation

In practice, training with the method proposed in Sec. 3.2 affects the generation prior of relevant concepts (see Sec. 4.4). This is because incorporating the UEG not only decreases the probability of producing erased concepts, but also decreases the probability of adjacent concepts. Therefore, we reverse the mechanism of the UEG by subtracting the guidance of prior concepts from the unconditional noise, which prevents the probability reduction of these concepts and minimizes concept forgetting. The prior concepts are sampled from the semantically related concepts obtained using the LLM, as mentioned in the previous section.
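This reversed correction, formalized as the PG-UEG in Eq. (9) below Table 1, can be sketched under the same assumed model interfaces as above (function and parameter names are ours):

```python
import torch

def pg_ueg(orig_model, z_t, c, c_p, t, eta_u=1.0, eta_p=1.0, gamma_p=0.5):
    # Eq. (9): inject target-concept guidance into the unconditional branch
    # while subtracting the guidance of a randomly sampled prior concept c_p.
    with torch.no_grad():
        e_u = orig_model(z_t, None, t)
        e_c = orig_model(z_t, c, t)
        e_p = orig_model(z_t, c_p, t)
    return e_u + eta_u * (e_c - e_u) - eta_p * gamma_p * (e_p - e_u)

def l_punc(tuned_model, orig_model, z_t, c, c_p, t, gamma_p):
    # Eq. (10): align the tuned unconditional prediction with the PG-UEG.
    target = pg_ueg(orig_model, z_t, c, c_p, t, gamma_p=gamma_p)
    return ((tuned_model(z_t, None, t) - target) ** 2).mean()

# gamma_p = CLIP(x, c_p) / CLIP(x, c); the total objective combines the terms
# as L_ACE = w_punc * l_punc + w_cons * l_cons + w_esd * l_esd.
```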
(a) Generation Prevention

| Method | ESD | Unc | Cons | Cor | \(CLIP_e\downarrow\) | \(LPIPS_e\uparrow\) | \(CLIP_p\uparrow\) | \(LPIPS_p\downarrow\) | \(CLIP_d\uparrow\) | \(LPIPS_d\uparrow\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| (1) | ✓ | – | – | – | 0.171 | 0.440 | 0.246 | 0.286 | 0.075 | 0.153 |
| (2) | ✓ | ✓ | – | – | 0.166 | 0.551 | 0.283 | 0.236 | 0.117 | 0.315 |
| (3) | ✓ | ✓ | ✓ | – | 0.159 | 0.507 | 0.254 | 0.337 | 0.095 | 0.170 |
| (4) | – | ✓ | ✓ | ✓ | 0.211 | 0.303 | 0.293 | 0.199 | 0.082 | 0.104 |
| (5) | ✓ | ✓ | ✓ | ✓ | 0.175 | 0.397 | 0.295 | 0.196 | 0.120 | 0.201 |

(b) Editing Filtration

| Method | ESD | Unc | Cons | Cor | \(CLIP_e\downarrow\) | \(LPIPS_e\uparrow\) | \(CLIP_p\uparrow\) | \(LPIPS_p\downarrow\) | \(CLIP_d\uparrow\) | \(LPIPS_d\uparrow\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| (1) | ✓ | – | – | – | 0.301 | 0.060 | 0.305 | 0.050 | 0.004 | 0.011 |
| (2) | ✓ | ✓ | – | – | 0.285 | 0.149 | 0.305 | 0.057 | 0.019 | 0.092 |
| (3) | ✓ | ✓ | ✓ | – | 0.274 | 0.168 | 0.300 | 0.077 | 0.026 | 0.091 |
| (4) | – | ✓ | ✓ | ✓ | 0.273 | 0.175 | 0.301 | 0.079 | 0.028 | 0.096 |
| (5) | ✓ | ✓ | ✓ | ✓ | 0.274 | 0.168 | 0.303 | 0.070 | 0.029 | 0.097 |
Table 1. Quantitative evaluation of generation and editing under ablation. The best results are highlighted in bold. The results indicate that the prior constraint loss, as expected, enhances the erasure capability of the trained model, while the correction guidance greatly mitigates concept erosion during erasure without affecting editing filtration.

![](images/9db561db123c5c79efc21173cd92c2acc70444dc91671755919fd09b03fb7492.jpg)
Figure 6. Qualitative results of artistic style removal. Our method erases the target style effectively and has minimal impact on other artistic styles.

We call this new guidance prior-guided unconditional erasure guidance (PG-UEG), which is defined as:

$$
\tilde{\epsilon}_{\mathrm{pu}} = \epsilon_{\theta^{*}}(z_{t}, t) + \eta_{\mathrm{u}}\left(\epsilon_{\theta^{*}}(z_{t}, c, t) - \epsilon_{\theta^{*}}(z_{t}, t)\right) - \eta_{p}\gamma_{p}\left(\epsilon_{\theta^{*}}(z_{t}, c_{p}, t) - \epsilon_{\theta^{*}}(z_{t}, t)\right), \tag{9}
$$

where $\gamma_{p}$ represents the guidance control term for the retained prior concept, and $c_{p}$ refers to the same prior concept as in $\mathcal{L}_{\mathrm{Cons}}$, obtained by random sampling from the set $\mathcal{C}_{p}$. We calculate $\gamma_{p}$ using the CLIP model, measuring the relevance of different prior concepts to the target concept image and comparing it against the relevance of the target concept text to its image. Specifically, $\gamma_{p} = \frac{\mathrm{CLIP}(x, c_{p})}{\mathrm{CLIP}(x, c)}$. The new loss for our ACE is:

$$
\mathcal{L}_{\mathrm{PUnc}} = \mathbb{E}_{z_{t}, t, c, c_{p} \in \mathcal{C}_{p}}\left[\|\epsilon_{\theta}(z_{t}, t) - \tilde{\epsilon}_{\mathrm{pu}}\|_{2}^{2}\right]. \tag{10}
$$

The final training loss for our ACE is summarized as $\mathcal{L}_{\mathrm{ACE}} = \lambda_{\mathrm{PUnc}}\mathcal{L}_{\mathrm{PUnc}} + \lambda_{\mathrm{Cons}}\mathcal{L}_{\mathrm{Cons}} + \lambda_{\mathrm{ESD}}\mathcal{L}_{\mathrm{ESD}}$.

In our implementation, we adopt LoRA [22] for parameter-efficient tuning, and the training process follows [14]. More details are provided in the Suppl.

# 4. Experiments

We conduct experiments on various tasks to evaluate our ACE, including IP character erasure, artistic style erasure, and nudity erasure. ESD [14], SPM [36], AdvUnlearn [68], MACE [35], and RECE [17] are adopted as competing methods. Unless otherwise specified, the experiments are conducted on Stable Diffusion v1.4.

# 4.1. IP Character Removal

Experiment Setup. To assess our ACE on IP character removal, we employ ten iconic IP characters as examples, including Hello Kitty, Snoopy, Mickey Mouse, Elsa, Donald Duck, Dora the Explorer, Winnie the Pooh, Sonic the Hedgehog, Pikachu, and Fukushima. For each erasure method, we finetune ten models, with each model designed to erase one IP character. Following [14, 17], we adopt the CLIP [44] score and LPIPS [66] score as evaluation metrics. The CLIP score calculates the similarity between the generated image and the concept text, while LPIPS calculates the perceptual difference between images generated by the erasure model and the original T2I model.
$\mathrm{CLIP}_e$ calculates the CLIP similarity between images generated with the erased concept text and their corresponding text, where a lower value indicates more thorough erasure. $\mathrm{CLIP}_p$ calculates the relevance under prior concepts, where a higher value indicates better prior preservation. $\mathrm{LPIPS}_e$ calculates the LPIPS similarity between images generated with the erased concept text by the trained model and by the original model, where a higher value indicates more thorough erasure. $\mathrm{LPIPS}_p$ calculates the similarity under prior concepts, where lower values indicate better prior preservation. When erasing one concept, the other nine concepts are used as related concepts. Following RECE [17], we further calculate overall scores between erased and related characters to measure the trade-off between concept erasure and prior preservation, where $\mathrm{CLIP}_d = \mathrm{CLIP}_p - \mathrm{CLIP}_e$ and $\mathrm{LPIPS}_d = \mathrm{LPIPS}_e - \mathrm{LPIPS}_p$. Higher $\mathrm{CLIP}_d$ and $\mathrm{LPIPS}_d$ indicate a better trade-off.
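The overall scores reduce to simple differences; a small helper (function and argument names are ours) makes the convention explicit:

```python
def overall_scores(clip_e, lpips_e, clip_p, lpips_p):
    """Trade-off scores following RECE: CLIP_d = CLIP_p - CLIP_e and
    LPIPS_d = LPIPS_e - LPIPS_p; higher values mean a better balance
    between thorough erasure and prior preservation."""
    return clip_p - clip_e, lpips_e - lpips_p

# Example: ACE's generation scores in Table 2 (a).
clip_d, lpips_d = overall_scores(0.175, 0.397, 0.295, 0.196)  # -> 0.120, 0.201
```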
(a) Generation Prevention

| Method | \(CLIP_e\downarrow\) | \(LPIPS_e\uparrow\) | \(CLIP_p\uparrow\) | \(LPIPS_p\downarrow\) | \(CLIP_d\uparrow\) | \(LPIPS_d\uparrow\) |
| --- | --- | --- | --- | --- | --- | --- |
| SD v1.4 [46] | 0.301 | 0.000 | 0.301 | 0.000 | 0.000 | 0.000 |
| ESD [14] | 0.227 | 0.331 | 0.276 | 0.255 | 0.049 | 0.076 |
| SPM [36] | 0.239 | 0.288 | 0.296 | 0.107 | 0.056 | 0.181 |
| AdvUnlearn [68] | 0.166 | 0.468 | 0.209 | 0.403 | 0.043 | 0.065 |
| MACE [35] | 0.250 | 0.317 | 0.298 | 0.134 | 0.048 | 0.184 |
| RECE [17] | 0.176 | 0.426 | 0.257 | 0.270 | 0.081 | 0.156 |
| Ours | 0.175 | 0.397 | 0.295 | 0.196 | 0.120 | 0.201 |

(b) Editing Filtration

| Method | \(CLIP_e\downarrow\) | \(LPIPS_e\uparrow\) | \(CLIP_p\uparrow\) | \(LPIPS_p\downarrow\) | \(CLIP_d\uparrow\) | \(LPIPS_d\uparrow\) |
| --- | --- | --- | --- | --- | --- | --- |
| SD v1.4 [46] | 0.308 | 0.063 | 0.308 | 0.063 | 0.000 | 0.000 |
| ESD [14] | 0.306 | 0.042 | 0.307 | 0.041 | 0.001 | 0.000 |
| SPM [36] | 0.302 | 0.061 | 0.303 | 0.056 | 0.001 | 0.005 |
| AdvUnlearn [68] | 0.310 | 0.011 | 0.311 | 0.010 | 0.001 | 0.001 |
| MACE [35] | 0.303 | 0.056 | 0.304 | 0.054 | 0.001 | 0.002 |
| RECE [17] | 0.300 | 0.066 | 0.303 | 0.054 | 0.003 | 0.012 |
| Ours | 0.274 | 0.168 | 0.303 | 0.070 | 0.029 | 0.097 |
+ +Table 2. Quantitative comparisons of IP character erasure. The best two results are highlighted with bold and underline. + +
| Method | Buttocks | Breast (F) | Genitalia (F) | Breast (M) | Genitalia (M) | Feet | Armpits | Belly | Total↓ | FID30k↓ | CLIP30k↑ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SD v1.4 [46] | 61 | 204 | 37 | 38 | 16 | 70 | 241 | 183 | 850 | 14.07 | 0.313 |
| ESD [14] | 15 | 29 | 5 | 11 | 10 | 37 | 68 | 36 | 211 | 13.80 | 0.304 |
| SPM [36] | 14 | 29 | 7 | 2 | 12 | 41 | 53 | 28 | 186 | 14.63 | 0.312 |
| AdvUnlearn [68] | 4 | 6 | 2 | 0 | 8 | 13 | 12 | 7 | 52 | 15.35 | 0.293 |
| MACE [35] | 7 | 24 | 8 | 10 | 9 | 35 | 61 | 35 | 189 | 12.60 | 0.294 |
| RECE [17] | 14 | 20 | 7 | 16 | 10 | 39 | 45 | 35 | 186 | 14.45 | 0.309 |
| Ours | 3 | 2 | 3 | 4 | 9 | 6 | 5 | 7 | 39 | 14.69 | 0.308 |
+ +Table 3. Exposure detection of generated images in the I2P dataset. The best two results are highlighted with bold and underline. + +
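The exposure statistics reported in Table 3 amount to tallying detector hits over all 856 generated images; a sketch with a hypothetical NudeNet-like `detect_parts` callable:

```python
from collections import Counter

def exposure_counts(images, detect_parts):
    # Tally detected nude body parts (e.g., "Buttocks", "Belly") over all
    # generated images; detect_parts returns the labels found in one image.
    totals = Counter()
    for img in images:
        totals.update(detect_parts(img))
    totals["Total"] = sum(v for k, v in totals.items() if k != "Total")
    return totals
```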
| Method | \(CLIP_e\downarrow\) | \(LPIPS_e\uparrow\) | \(CLIP_p\uparrow\) | \(LPIPS_p\downarrow\) | \(CLIP_d\uparrow\) | \(LPIPS_d\uparrow\) |
| --- | --- | --- | --- | --- | --- | --- |
| SD v1.4 [46] | 0.310 | 0.000 | 0.310 | 0.000 | 0.000 | 0.000 |
| ESD [14] | 0.216 | 0.444 | 0.296 | 0.241 | 0.080 | 0.202 |
| SPM [36] | 0.266 | 0.268 | 0.308 | 0.074 | 0.042 | 0.195 |
| AdvUnlearn [68] | 0.186 | 0.476 | 0.229 | 0.410 | 0.043 | 0.066 |
| MACE [35] | 0.228 | 0.366 | 0.298 | 0.196 | 0.069 | 0.169 |
| RECE [17] | 0.253 | 0.307 | 0.309 | 0.051 | 0.057 | 0.255 |
| Ours | 0.160 | 0.471 | 0.303 | 0.126 | 0.143 | 0.345 |
+ +Table 4. Quantitative evaluation of artist style erasure. The best two results are highlighted with bold and underline. Our ACE performs better in terms of thorough erasure and also demonstrates comparable prior preservation. + +
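The ASR metric reported in Table 5 below reduces to a simple ratio; a minimal sketch where `generate` and `is_unsafe` (e.g., a NudeNet-style detector) are hypothetical callables:

```python
def attack_success_rate(adversarial_prompts, generate, is_unsafe):
    # Fraction of adversarial prompts that still yield an unsafe image
    # from the erased model, reported as a percentage (cf. Table 5).
    hits = sum(1 for p in adversarial_prompts if is_unsafe(generate(p)))
    return 100.0 * hits / len(adversarial_prompts)
```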
| Method | UnlearnDiff↓ | P4D↓ | Ring-A-Bell↓ | Average↓ |
| --- | --- | --- | --- | --- |
| SD v1.4 [46] | 100% | 100% | 85.21% | 95.07% |
| ESD [14] | 73.05% | 74.47% | 38.73% | 62.08% |
| SPM [36] | 91.49% | 91.49% | 57.75% | 80.24% |
| AdvUnlearn [68] | 25.53% | 19.15% | 4.93% | 16.54% |
| MACE [35] | 64.53% | 66.67% | 14.79% | 48.66% |
| RECE [17] | 70.92% | 65.96% | 26.76% | 54.55% |
| Ours | 27.65% | 28.37% | 2.82% | 19.61% |
Table 5. Robustness evaluation of nudity erasure. The best two results are highlighted with bold and underline. We report the attack success rates (ASR) of different adversarial methods under various erasure models. Our method achieves the second-best results without using adversarial training.

For generation evaluation, we adopt 33 text templates for each character concept, and five images are generated for each text template using the erased model. To evaluate the effectiveness of editing filtration, we adopt the widely used LEDITS++ [4] and MasaCtrl [6] as editing methods. For each concept, we utilize Stable Diffusion 3 [12] to generate 15 images based on 3 text templates as initial images, and then perform editing on them using the erased models. Each image is manipulated using 11 editing texts, such as "sunglasses". Finally, the CLIP score and LPIPS score are calculated based on the edited images, the concept text, and the original images. The final results are reported by averaging over the 10 characters. More details can be found in the Suppl.

Experiment Results. Fig. 3 illustrates the comparison of generation results against competing methods. One can see that our ACE successfully erases the target concept (i.e., Donald Duck) while retaining the capability to generate related prior concepts (e.g., Mickey Mouse and Pikachu). In contrast, methods such as ESD, AdvUnlearn, and RECE generate examples with noticeable concept erosion. From Table 2, our ACE demonstrates a comparable CLIP score for both the erased and related concepts. This indicates that our ACE achieves a better trade-off between target concept erasure and prior concept preservation, as further validated by the overall metrics in Table 2 (a). SPM and MACE exhibit inferior performance in thoroughly erasing the target concept. While AdvUnlearn performs well at erasing the target concept, it shows poor performance in prior preservation.

Fig. 4 further presents the comparison of editing results by LEDITS++. As shown in the figure, the competing methods generate the erased concept with the desired attributes after performing the editing on the given image, which is not wanted in practice. In contrast, our method can successfully hinder the editing of images containing erased concepts (e.g., Mickey), while keeping the editability of non-target concepts (e.g., Hello Kitty and Elsa). Table 2 (b) reports the quantitative comparisons evaluated with LEDITS++. Our method shows a significant improvement in erasing concepts, demonstrating its editing filtration ability.

The comparison on MasaCtrl and more results can be found in the Suppl.

# 4.2. Explicit Content Removal

Experimental Setup. To evaluate our ACE on explicit content removal, we employ "nudity" as the target concept to train the model. Following [36], we utilize the I2P dataset [48] to evaluate the performance of explicit content generation. Specifically, we select 856 text prompts with explicit labels, and each prompt generates one image. Then, Nudenet [2] is used to quantify the number of nude body parts in these generated images. Additionally, following [14, 36], we employ the COCO-30k Caption dataset [31] to evaluate the conditional generation capability of erased models.
Specifically, we generate one image for each caption in COCO-30k, and FID [19] is calculated between the generated and natural images. The CLIP score is also calculated between the generated images and the captions to assess the semantic alignment of the generated images. For robustness evaluation, we adopt UnlearnDiff [69], P4D [10] and Ring-A-Bell [53] as adversarial tools to calculate the attack success rate (ASR). Adversarial attacks are conducted on 142 sensitive texts provided by UnlearnDiff. More details can be found in the Suppl.

Experiment Results. From Table 5, we can see that our method has a lower attack success rate when trained only for "nudity", with only AdvUnlearn, which uses adversarial training, performing slightly better than ours. As shown in Fig. 5 and Table 3, our method can effectively erase nudity content and results in fewer exposed parts. In the generation evaluation, we dynamically attack the erased models using adversarial tools. As shown in Fig. 5, our method demonstrates excellent robustness. To further showcase our method's efficacy in editing filtration, we employ SD-Inpainting [46] as an editing tool to assess the exposure levels of images after different text-guided inpainting processes. In addition to conventional editing text (e.g., bikini), adversarial editing text from MMA-Diffusion [61] is also used for explicit editing. GroundingDINO [34] is used to detect clothing in the images. As shown in Fig. 5, our method successfully prevents inappropriate inpainting of exposed parts in masked areas, making it more practical for real-world applications.

More results for robustness and editing filtration evaluation can be found in the Suppl.

# 4.3. Artistic Style Removal

Experiment Setup. To validate the performance of our model in unlearning styles, we choose ten representative artistic styles, namely Leonardo da Vinci, Pablo Picasso, Michelangelo, Van Gogh, Salvador Dali, Claude Monet, Andy Warhol, Jackson Pollock, Frida Kahlo, and Georgia O'Keeffe. The evaluation process and metrics are similar to those of IP character removal (Sec. 4.1).

Experiment Results. Fig. 6 illustrates the results of erasing artistic styles. As shown in the figure, our method can erase the styles of Van Gogh and Andy Warhol from the T2I model, while generating other styles faithfully. From Table 4, our method achieves a better $\mathrm{CLIP}_e$ on the erased concept.

# 4.4. Ablation Study

We further conduct an ablation study on IP character erasure to evaluate the effectiveness of each component proposed in our ACE. Specifically, it contains the following variants: (1) Baseline: adopting only the ESD loss to finetune the model. (2) Baseline + Unc: employing unconditional erasure guidance alignment together with the ESD loss. (3) Baseline + Unc + $\mathcal{L}_{\mathrm{Cons}}$: adopting the ESD loss, unconditional erasure guidance alignment, and $\mathcal{L}_{\mathrm{Cons}}$. (4) Ours w/o $\mathcal{L}_{\mathrm{ESD}}$: our method without the ESD loss; this variant is also effective in concept erasure and editing filtration, and performs better than ESD, indicating that our PG-UEG plays a crucial role in editing filtration. (5) Ours (full): incorporating the ESD loss, prior-guided unconditional erasure guidance alignment, and $\mathcal{L}_{\mathrm{Cons}}$ together. From Table 1, we can see that: (i) Introducing unconditional erasure guidance improves the model's editing filtration performance, indicating its effectiveness in preventing unwanted edits.
(ii) Using both unconditional erasure guidance and $\mathcal{L}_{\mathrm{Cons}}$ together leads to significant improvements in concept erasure and editing filtration performance, although it compromises the generation of related prior concepts. (iii) $\mathcal{L}_{\mathrm{PUnc}}$ enhances prior preservation without affecting editing filtration.

More ablation results are provided in the Suppl.

# 5. Conclusion

In this paper, we investigate the potential risks of unsafe content creation through image editing, and propose an Anti-Editing Concept Erasure (ACE) method to prevent the production of such content during both generation and editing. In addition to the conditional erasure guidance used by existing methods, we further propose an unconditional noise erasure technique to enhance anti-editing concept erasure. This guidance steers the noise prediction away from the target concept, thereby effectively preventing the production of images containing it. Moreover, a concept preservation mechanism is introduced to maintain the generation prior of non-target concepts. Experiments demonstrate that our ACE can successfully erase specific concepts and exhibits superior filtration capabilities during both generation and editing compared to existing methods.

Acknowledgement. The work was supported by the National Key R&D Program of China under Grant No. 2022YFA1004100.

# References

[1] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023. 5
[2] P Bedapudi. Nudenet: Neural nets for nudity classification, detection and selective censoring, 2019. 8
[3] Manuel Brack, Felix Friedrich, Dominik Hintersdorf, Lukas Struppek, Patrick Schramowski, and Kristian Kersting. Sega: Instructing text-to-image models using semantic guidance. Advances in Neural Information Processing Systems, 36: 25365-25389, 2023. 2
[4] Manuel Brack, Felix Friedrich, Katharina Kornmeier, Linoy Tsaban, Patrick Schramowski, Kristian Kersting, and Apolinário Passos. Ledits++: Limitless image editing using text-to-image models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8861-8870, 2024. 1, 2, 4, 7
[5] Tim Brooks, Aleksander Holynski, and Alexei A Efros. Instructpix2pix: Learning to follow image editing instructions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18392-18402, 2023. 2
[6] Mingdeng Cao, Xintao Wang, Zhongang Qi, Ying Shan, Xiaohu Qie, and Yinqiang Zheng. Masactrl: Tuning-free mutual self-attention control for consistent image synthesis and editing. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 22560-22570, 2023. 2, 7, 3
[7] Huiwen Chang, Han Zhang, Jarred Barber, AJ Maschinot, Jose Lezama, Lu Jiang, Ming-Hsuan Yang, Kevin Murphy, William T Freeman, Michael Rubinstein, et al. Muse: Text-to-image generation via masked generative transformers. arXiv preprint arXiv:2301.00704, 2023. 2
[8] Hila Chefer, Yuval Alaluf, Yael Vinker, Lior Wolf, and Daniel Cohen-Or. Attend-and-excite: Attention-based semantic guidance for text-to-image diffusion models. ACM Transactions on Graphics (TOG), 42(4):1-10, 2023. 2
[9] Die Chen, Zhiwen Li, Mingyuan Fan, Cen Chen, Wenmeng Zhou, and Yaliang Li. Eiup: A training-free approach to erase non-compliant concepts conditioned on implicit unsafe prompts. arXiv preprint arXiv:2408.01014, 2024. 2
[10] Zhi-Yi Chin, Chieh-Ming Jiang, Ching-Chun Huang, Pin-Yu Chen, and Wei-Chen Chiu. Prompting4debugging: Red-teaming text-to-image diffusion models by finding problematic prompts. arXiv preprint arXiv:2309.06135, 2023. 3, 8
[11] Anudeep Das, Vasisht Duddu, Rui Zhang, and N Asokan. Espresso: Robust concept filtering in text-to-image models. arXiv preprint arXiv:2404.19227, 2024. 2
[12] Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas Müller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, et al. Scaling rectified flow transformers for high-resolution image synthesis, March 2024. URL http://arxiv.org/abs/2403.03206. 1, 6, 7
[13] Kailai Feng, Yabo Zhang, Haodong Yu, Zhilong Ji, Jinfeng Bai, Hongzhi Zhang, and Wangmeng Zuo. Vitaglyph: Vitalizing artistic typography with flexible dual-branch diffusion models. arXiv preprint arXiv:2410.01738, 2024. 1
[14] Rohit Gandikota, Joanna Materzynska, Jaden Fiotto-Kaufman, and David Bau. Erasing concepts from diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2426-2436, 2023. 2, 3, 4, 6, 7, 8
[15] Rohit Gandikota, Hadas Orgad, Yonatan Belinkov, Joanna Materzyńska, and David Bau. Unified concept editing in diffusion models. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 5111-5120, 2024. 2
[16] Hongcheng Gao, Tianyu Pang, Chao Du, Taihang Hu, Zhijie Deng, and Min Lin. Meta-unlearning on diffusion models: Preventing relearning unlearned concepts. arXiv preprint arXiv:2410.12777, 2024. 2
[17] Chao Gong, Kai Chen, Zhipeng Wei, Jingjing Chen, and Yu-Gang Jiang. Reliable and efficient concept erasure of text-to-image diffusion models. arXiv preprint arXiv:2407.12383, 2024. 2, 6, 7, 4
[18] Luxi He, Yangsibo Huang, Weijia Shi, Tinghao Xie, Haotian Liu, Yue Wang, Luke Zettlemoyer, Chiyuan Zhang, Danqi Chen, and Peter Henderson. Fantastic copyrighted beasts and how (not) to generate them. arXiv preprint arXiv:2406.14526, 2024. 2
[19] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30, 2017. 8, 2
[20] Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598, 2022. 2, 3, 4
[21] Seunghoo Hong, Juhun Lee, and Simon S Woo. All but one: Surgical concept erasing with model preservation in text-to-image diffusion models. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 21143-21151, 2024. 2
[22] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021. 6
[23] Chi-Pin Huang, Kai-Po Chang, Chung-Ting Tsai, Yung-Hsuan Lai, Fu-En Yang, and Yu-Chiang Frank Wang. Receler: Reliable concept erasing of text-to-image diffusion models via lightweight erasers. arXiv preprint arXiv:2311.17717, 2023. 2
[24] Sanghyun Kim, Seohyeon Jung, Balhae Kim, Moonseok Choi, Jinwoo Shin, and Juho Lee. Safeguard text-to-image diffusion models with human feedback inversion. arXiv preprint arXiv:2407.21032, 2024.
[25] Nupur Kumari, Bingliang Zhang, Sheng-Yu Wang, Eli Shechtman, Richard Zhang, and Jun-Yan Zhu. Ablating concepts in text-to-image diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 22691-22702, 2023. 2
[26] Nupur Kumari, Bingliang Zhang, Richard Zhang, Eli Shechtman, and Jun-Yan Zhu. Multi-concept customization of text-to-image diffusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1931-1941, 2023. 2
[27] Fan Li, Zixiao Zhang, Yi Huang, Jianzhuang Liu, Renjing Pei, Bin Shao, and Songcen Xu. Magiceraser: Erasing any objects via semantics-aware control. arXiv preprint arXiv:2410.10207, 2024. 2
[28] Hang Li, Chengzhi Shen, Philip Torr, Volker Tresp, and Jindong Gu. Self-discovering interpretable diffusion latent directions for responsible text-to-image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12006-12016, 2024. 2
[29] Jia Li, Lijie Hu, Zhixian He, Jingfeng Zhang, Tianhang Zheng, and Di Wang. Text guided image editing with automatic concept locating and forgetting. arXiv preprint arXiv:2405.19708, 2024.
[30] Senmao Li, Joost van de Weijer, Taihang Hu, Fahad Shahbaz Khan, Qibin Hou, Yaxing Wang, and Jian Yang. Get what you want, not what you don't: Image content suppression for text-to-image diffusion models. arXiv preprint arXiv:2402.05375, 2024. 2
[31] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In Computer Vision-ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pages 740-755. Springer, 2014. 8, 2
[32] Ming Liu, Yuxiang Wei, Xiaohe Wu, Wangmeng Zuo, and Lei Zhang. Survey on leveraging pre-trained generative adversarial networks for image editing and restoration. Science China Information Sciences, 66(5):151101, 2023. 2
[33] Runtao Liu, Ashkan Khakzar, Jindong Gu, Qifeng Chen, Philip Torr, and Fabio Pizzati. Latent guard: a safety framework for text-to-image generation. In European Conference on Computer Vision, pages 93-109. Springer, 2025. 2
[34] Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, et al. Grounding dino: Marrying dino with grounded pre-training for open-set object detection. arXiv preprint arXiv:2303.05499, 2023. 8
[35] Shilin Lu, Zilan Wang, Leyang Li, Yanzhu Liu, and Adams Wai-Kin Kong. Mace: Mass concept erasure in diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6430-6440, 2024. 2, 6, 7, 4
[36] Mengyao Lyu, Yuhong Yang, Haiwen Hong, Hui Chen, Xuan Jin, Yuan He, Hui Xue, Jungong Han, and Guiguang Ding. One-dimensional adapter to rule them all: Concepts diffusion models and erasing applications. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7559-7568, 2024. 2, 3, 6, 7, 8, 4
[37] Ron Mokady, Amir Hertz, Kfir Aberman, Yael Pritch, and Daniel Cohen-Or. Null-text inversion for editing real images using guided diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6038-6047, 2023. 2
[38] Chong Mou, Xintao Wang, Liangbin Xie, Yanze Wu, Jian Zhang, Zhongang Qi, and Ying Shan. T2i-adapter: Learning adapters to dig out more controllable ability for text-to-image diffusion models. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 4296-4304, 2024.
2 +[39] Yong-Hyun Park, Sangdoo Yun, Jin-Hwa Kim, Junho Kim, Geonhui Jang, Yonghyun Jeong, Junghyo Jo, and Gayoung Lee. Direct unlearning optimization for robust and safe text-to-image models. arXiv preprint arXiv:2407.21035, 2024. 2 +[40] Gaurav Parmar, Krishna Kumar Singh, Richard Zhang, Yijun Li, Jingwan Lu, and Jun-Yan Zhu. Zero-shot image-to-image translation. In ACM SIGGRAPH 2023 Conference Proceedings, pages 1-11, 2023. 2 +[41] Minh Pham, Kelly O Marshall, Chinmay Hegde, and Niv Cohen. Robust concept erasure using task vectors. arXiv preprint arXiv:2404.03631, 2024. 2 +[42] Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. Sdxl: Improving latent diffusion models for high-resolution image synthesis. arXiv preprint arXiv:2307.01952, 2023. 1 +[43] Samuele Poppi, Tobia Poppi, Federico Cocchi, Marcella Cornia, Lorenzo Baraldi, Rita Cucchiara, et al. Safe-clip: Removing nsfw concepts from vision-and-language models. In Proceedings of the European Conference on Computer Vision, 2024. 2 +[44] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748-8763. PMLR, 2021. 6 +[45] Javier Rando, Daniel Paleka, David Lindner, Lennart Heim, and Florian Tramèr. Red-teaming the stable diffusion safety filter. arXiv preprint arXiv:2210.04610, 2022. 2 +[46] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684-10695, 2022. 1, 2, 3, 7, 8, 4 +[47] Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, and Kfir Aberman. Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 22500-22510, 2023. 2 +[48] Patrick Schramowski, Manuel Brack, Björn Deiseroth, and Kristian Kersting. Safe latent diffusion: Mitigating inappropriate degeneration in diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22522-22531, 2023. 2, 8 +[49] Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems, 35:25278-25294, 2022. 1 + +[50] Jing Shi, Wei Xiong, Zhe Lin, and Hyun Joon Jung. Instantbooth: Personalized text-to-image generation without test-time finetuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8543-8552, 2024. 2 +[51] Zhuan Shi, Jing Yan, Xiaoli Tang, Lingjuan Lyu, and Boi Faltings. Rlcp: A reinforcement learning-based copyright protection method for text-to-image diffusion model. arXiv preprint arXiv:2408.16634, 2024. 2 +[52] Koushik Srivatsan, Fahad Shamshad, Muzammal Naseer, and Karthik Nandakumar. Stereo: Towards adversarially robust concept erasing from text-to-image generation models. arXiv preprint arXiv:2408.16807, 2024. 2 +[53] Yu-Lin Tsai, Chia-Yi Hsu, Chulin Xie, Chih-Hsun Lin, Jia-You Chen, Bo Li, Pin-Yu Chen, Chia-Mu Yu, and Chun-Ying Huang. 
Ring-a-bell! how reliable are concept removal methods for diffusion models? arXiv preprint arXiv:2310.10012, 2023. 3, 8
[54] Narek Tumanyan, Michal Geyer, Shai Bagon, and Tali Dekel. Plug-and-play diffusion features for text-driven image-to-image translation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1921-1930, 2023. 2
[55] Haofan Wang, Matteo Spinelli, Qixun Wang, Xu Bai, Zekui Qin, and Anthony Chen. Instantstyle: Free lunch towards style-preserving in text-to-image generation. arXiv preprint arXiv:2404.02733, 2024. 1
[56] Yuxiang Wei, Yabo Zhang, Zhilong Ji, Jinfeng Bai, Lei Zhang, and Wangmeng Zuo. Elite: Encoding visual concepts into textual embeddings for customized text-to-image generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 15943-15953, 2023. 2
[57] Yuxiang Wei, Zhilong Ji, Jinfeng Bai, Hongzhi Zhang, Lei Zhang, and Wangmeng Zuo. Masterweaver: Taming editability and face identity for personalized text-to-image generation. In European Conference on Computer Vision, pages 252-271. Springer, 2025. 2
[58] Yuxiang Wei, Yiheng Zheng, Yabo Zhang, Ming Liu, Zhilong Ji, Lei Zhang, and Wangmeng Zuo. Personalized image generation with deep generative models: A decade survey. arXiv preprint arXiv:2502.13081, 2025. 1
[59] Yongliang Wu, Shiji Zhou, Mingzhuo Yang, Lianzhe Wang, Wenbo Zhu, Heng Chang, Xiao Zhou, and Xu Yang. Unlearning concepts in diffusion model via concept domain correction and concept preserving gradient. arXiv preprint arXiv:2405.15304, 2024. 2
[60] Binxin Yang, Shuyang Gu, Bo Zhang, Ting Zhang, Xuejin Chen, Xiaoyan Sun, Dong Chen, and Fang Wen. Paint by example: Exemplar-based image editing with diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18381-18391, 2023. 2
[61] Yijun Yang, Ruiyuan Gao, Xiaosen Wang, Tsung-Yi Ho, Nan Xu, and Qiang Xu. Mma-diffusion: Multimodal attack on diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7737-7746, 2024. 8
[62] Yijun Yang, Ruiyuan Gao, Xiao Yang, Jianyuan Zhong, and Qiang Xu. Guardt2i: Defending text-to-image models from adversarial prompts. arXiv preprint arXiv:2403.01446, 2024. 2
[63] Jaehong Yoon, Shoubin Yu, Vaidehi Patil, Huaxiu Yao, and Mohit Bansal. Safree: Training-free and adaptive guard for safe text-to-image and video generation. arXiv preprint arXiv:2410.12761, 2024.
[64] Gong Zhang, Kai Wang, Xingqian Xu, Zhangyang Wang, and Humphrey Shi. Forget-me-not: Learning to forget in text-to-image diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1755-1764, 2024. 2
[65] Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3836-3847, 2023. 2
[66] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 586-595, 2018. 6
[67] Yabo Zhang, Yuxiang Wei, Dongsheng Jiang, Xiaopeng Zhang, Wangmeng Zuo, and Qi Tian. Controlvideo: Training-free controllable text-to-video generation. arXiv preprint arXiv:2305.13077, 2023. 1
[68] Yimeng Zhang, Xin Chen, Jinghan Jia, Yihua Zhang, Chongyu Fan, Jiancheng Liu, Mingyi Hong, Ke Ding, and Sijia Liu.
Defensive unlearning with adversarial training for robust concept erasure in diffusion models. arXiv preprint arXiv:2405.15234, 2024. 2, 6, 7, 4 +[69] Yimeng Zhang, Jinghan Jia, Xin Chen, Aochuan Chen, Yihua Zhang, Jiancheng Liu, Ke Ding, and Sijia Liu. To generate or not? safety-driven unlearned diffusion models are still easy to generate unsafe images... for now. In European Conference on Computer Vision, pages 385-403. Springer, 2025. 3, 8 +[70] Yabo Zhang, Xinpeng Zhou, Yihan Zeng, Hang Xu, Hui Li, and Wangmeng Zuo. Framepainter: Endowing interactive image editing with video diffusion priors. arXiv preprint arXiv:2501.08225, 2025. 2 +[71] Mengnan Zhao, Lihe Zhang, Tianhang Zheng, Yuqiu Kong, and Baocai Yin. Separable multi-concept erasure from diffusion models. arXiv preprint arXiv:2402.05947, 2024. 2 \ No newline at end of file diff --git a/2025/ACE_ Anti-Editing Concept Erasure in Text-to-Image Models/images.zip b/2025/ACE_ Anti-Editing Concept Erasure in Text-to-Image Models/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..40bf18ae2685b4cb3604da0c1a8dbb8a975f6830 --- /dev/null +++ b/2025/ACE_ Anti-Editing Concept Erasure in Text-to-Image Models/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:25c2544a516013a761aae551595e984cb2eaa696e6e263f41602bfb18f5a9f2a +size 886314 diff --git a/2025/ACE_ Anti-Editing Concept Erasure in Text-to-Image Models/layout.json b/2025/ACE_ Anti-Editing Concept Erasure in Text-to-Image Models/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..360a17566ea46aacbb1db2ff05ebd6b8751f94c3 --- /dev/null +++ b/2025/ACE_ Anti-Editing Concept Erasure in Text-to-Image Models/layout.json @@ -0,0 +1,8151 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 121, + 113, + 489, + 129 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 121, + 113, + 489, + 129 + ], + "spans": [ + { + "bbox": [ + 121, + 113, + 489, + 129 + ], + "type": "text", + "content": "ACE: Anti-Editing Concept Erasure in Text-to-Image Models" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 107, + 157, + 501, + 173 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 157, + 501, + 173 + ], + "spans": [ + { + "bbox": [ + 107, + 157, + 501, + 173 + ], + "type": "text", + "content": "Zihao Wang" + }, + { + "bbox": [ + 107, + 157, + 501, + 173 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 107, + 157, + 501, + 173 + ], + "type": "text", + "content": " Yuxiang Wei" + }, + { + "bbox": [ + 107, + 157, + 501, + 173 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 107, + 157, + 501, + 173 + ], + "type": "text", + "content": " Fan Li" + }, + { + "bbox": [ + 107, + 157, + 501, + 173 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 107, + 157, + 501, + 173 + ], + "type": "text", + "content": " Renjing Pei" + }, + { + "bbox": [ + 107, + 157, + 501, + 173 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 107, + 157, + 501, + 173 + ], + "type": "text", + "content": " Hang Xu" + }, + { + "bbox": [ + 107, + 157, + 501, + 173 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 107, + 157, + 501, + 173 + ], + "type": "text", + "content": " Wangmeng Zuo" + }, + { + "bbox": [ + 107, + 157, + 501, + 173 + ], + "type": "inline_equation", + "content": "^{1,3(\\text{图})}" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 97, + 185, + 512, + 
200 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 97, + 185, + 512, + 200 + ], + "spans": [ + { + "bbox": [ + 97, + 185, + 512, + 200 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 97, + 185, + 512, + 200 + ], + "type": "text", + "content": "Harbin Institute of Technology " + }, + { + "bbox": [ + 97, + 185, + 512, + 200 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 97, + 185, + 512, + 200 + ], + "type": "text", + "content": " Huawei Noah's Ark Lab " + }, + { + "bbox": [ + 97, + 185, + 512, + 200 + ], + "type": "inline_equation", + "content": "^{3}" + }, + { + "bbox": [ + 97, + 185, + 512, + 200 + ], + "type": "text", + "content": " Pazhou Lab (Huangpu)" + } + ] + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 66, + 225, + 266, + 300 + ], + "blocks": [ + { + "bbox": [ + 66, + 225, + 266, + 300 + ], + "lines": [ + { + "bbox": [ + 66, + 225, + 266, + 300 + ], + "spans": [ + { + "bbox": [ + 66, + 225, + 266, + 300 + ], + "type": "image", + "image_path": "66dbe4e4ed7a06542016d35cf3963ca3534f7f94e5e23386d93120d461c5f8cf.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 67, + 312, + 265, + 396 + ], + "blocks": [ + { + "bbox": [ + 67, + 312, + 265, + 396 + ], + "lines": [ + { + "bbox": [ + 67, + 312, + 265, + 396 + ], + "spans": [ + { + "bbox": [ + 67, + 312, + 265, + 396 + ], + "type": "image", + "image_path": "617364e0d91317e5332d5a06a51f795c86fdd959c79bf1e99ace98740162ed58.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 69, + 404, + 261, + 415 + ], + "lines": [ + { + "bbox": [ + 69, + 404, + 261, + 415 + ], + "spans": [ + { + "bbox": [ + 69, + 404, + 261, + 415 + ], + "type": "text", + "content": "(a) Common methods for creating copyrighted content" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 284, + 228, + 545, + 305 + ], + "blocks": [ + { + "bbox": [ + 284, + 228, + 545, + 305 + ], + "lines": [ + { + "bbox": [ + 284, + 228, + 545, + 305 + ], + "spans": [ + { + "bbox": [ + 284, + 228, + 545, + 305 + ], + "type": "image", + "image_path": "f0aed767b6998d7a0b3e23933a3b397360c267a79a2e63f73119d1c0fab5ad7b.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 285, + 314, + 545, + 398 + ], + "blocks": [ + { + "bbox": [ + 285, + 314, + 545, + 398 + ], + "lines": [ + { + "bbox": [ + 285, + 314, + 545, + 398 + ], + "spans": [ + { + "bbox": [ + 285, + 314, + 545, + 398 + ], + "type": "image", + "image_path": "26943a5647f5e5836853e2cf47e6502410040bed37427a4d87f2fdf2740ec254.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 305, + 404, + 525, + 415 + ], + "lines": [ + { + "bbox": [ + 305, + 404, + 525, + 415 + ], + "spans": [ + { + "bbox": [ + 305, + 404, + 525, + 415 + ], + "type": "text", + "content": "(b) Comparisons of Pikachu erasure on generation and editing" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 63, + 418, + 547, + 460 + ], + "lines": [ + { + "bbox": [ + 63, + 418, + 547, + 460 + ], + "spans": [ + { + "bbox": [ + 63, + 418, + 547, + 460 + ], + "type": "text", + "content": "Figure 1. 
(a) Given a text-to-image (T2I) model, there are two common ways to adopt it to create undesired content, i.e., generating new images based on text prompts or editing existing images. (b) Current concept erasure methods primarily focus on preventing the generation of erased concepts but fail to protect against image editing. In contrast, our ACE method can prevent the production of such content during both generation and editing processes. As shown, after erasing Pikachu, it successfully prevents edits involving Pikachu." + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_caption" + } + ], + "index": 9 + }, + { + "bbox": [ + 157, + 471, + 203, + 483 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 157, + 471, + 203, + 483 + ], + "spans": [ + { + "bbox": [ + 157, + 471, + 203, + 483 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 63, + 506, + 297, + 704 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 506, + 297, + 704 + ], + "spans": [ + { + "bbox": [ + 63, + 506, + 297, + 704 + ], + "type": "text", + "content": "Recent advances in text-to-image diffusion models have significantly facilitated the generation of high-quality images, but have also raised concerns about the illegal creation of harmful content, such as copyrighted images. Existing concept erasure methods achieve superior results in preventing the production of erased concepts from prompts, but typically perform poorly in preventing undesired editing. To address this issue, we propose an Anti-Editing Concept Erasure (ACE) method, which not only erases the target concept during generation but also filters it out during editing. Specifically, we propose to inject the erasure guidance into both the conditional and the unconditional noise prediction, enabling the model to effectively prevent the creation of erased concepts during both editing and generation. Furthermore, a stochastic correction guidance is introduced during training to address the erosion of unrelated concepts. We conducted erasure editing experiments with" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 472, + 547, + 564 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 472, + 547, + 564 + ], + "spans": [ + { + "bbox": [ + 313, + 472, + 547, + 564 + ], + "type": "text", + "content": "representative editing methods (i.e., LEDITS++ and MasaCtrl) to erase IP characters, and the results indicate that our ACE effectively filters out target concepts in both types of edits. Additional experiments on erasing explicit concepts and artistic styles further demonstrate that our ACE performs favorably against state-of-the-art methods. Our code will be publicly available at https://github.com/120L020904/ACE." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 314, + 590, + 392, + 601 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 590, + 392, + 601 + ], + "spans": [ + { + "bbox": [ + 314, + 590, + 392, + 601 + ], + "type": "text", + "content": "1. Introduction" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 312, + 610, + 547, + 704 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 610, + 547, + 704 + ], + "spans": [ + { + "bbox": [ + 312, + 610, + 547, + 704 + ], + "type": "text", + "content": "Recent text-to-image (T2I) diffusion models trained with large-scale datasets [49] have demonstrated an impressive ability to generate high-quality images [12, 42, 46].
Their extraordinary creative capabilities enable users to produce high-quality images, and facilitate a wide range of applications, such as image editing [4, 58] and artistic creation [13, 55, 67]. However, alongside these advancements, a significant concern has arisen regarding the potential mis" + } + ] + } + ], + "index": 16 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "spans": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "text", + "content": "CVF" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 145, + 0, + 496, + 35 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 145, + 0, + 496, + 35 + ], + "spans": [ + { + "bbox": [ + 145, + 0, + 496, + 35 + ], + "type": "text", + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 293, + 743, + 317, + 753 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 743, + 317, + 753 + ], + "spans": [ + { + "bbox": [ + 293, + 743, + 317, + 753 + ], + "type": "text", + "content": "23505" + } + ] + } + ], + "index": 17 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "bbox": [ + 63, + 83, + 295, + 118 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 83, + 295, + 118 + ], + "spans": [ + { + "bbox": [ + 63, + 83, + 295, + 118 + ], + "type": "text", + "content": "use of these text-to-image models. For example, these models might be employed to generate unsafe content, such as copyrighted material or sexually explicit images." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 63, + 120, + 295, + 305 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 120, + 295, + 305 + ], + "spans": [ + { + "bbox": [ + 63, + 120, + 295, + 305 + ], + "type": "text", + "content": "To prevent the creation of unsafe content, a straightforward solution is filtering training data and retraining the model. Nonetheless, such a process is both labor-intensive and resource-consuming. Post-hoc safety checkers [45, 46] and negative guidance [48] are alternative plug-and-play ways to filter undesired content, which heavily rely on pre-trained detectors or hand-crafted prompts. More recently, concept erasure methods [14, 17, 35, 36, 68] are proposed to directly unlearn undesired concepts through model finetuning. These methods mainly focus on precisely removing the target concept, while faithfully preserving the generation of non-target concepts. For instance, ESD [14] injects the negative erase guidance into target noise prediction to guide the image away from the target concept. SPM [36] employs a lightweight adapter to eliminate concepts and further adopts latent anchoring to preserve non-target concepts." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 63, + 308, + 295, + 469 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 308, + 295, + 469 + ], + "spans": [ + { + "bbox": [ + 63, + 308, + 295, + 469 + ], + "type": "text", + "content": "Although these concept erasure methods can effectively prevent the generation of unsafe content given the corresponding text prompt, they can be circumvented by editing techniques. As illustrated in Fig.
1, after removing Pikachu from the model, users can still create an image of Pikachu wearing sunglasses by editing a Pikachu image using LEDITS++ [4]. This is because these methods are typically trained to remove the target concept from the conditional noise prediction (as shown in Fig. 2(b)), and rely on the input text (e.g., \"Pikachu\") to trigger the guard. Therefore, when editing the image with the text \"Add sunglasses\" as input, the guard fails. In practice, protection from editing should also be considered in concept erasure, which we refer to as editing filtration." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 63, + 472, + 295, + 704 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 472, + 295, + 704 + ], + "spans": [ + { + "bbox": [ + 63, + 472, + 295, + 704 + ], + "type": "text", + "content": "To address the above issues, we propose an Anti-Editing Concept Erasure method, termed ACE, to prevent the production of unsafe content during both generation and editing. Based on the above analysis, we explore the capabilities of CFG [20], and propose incorporating erasure guidance into both conditional and unconditional noise for anti-editing concept erasure. During erasure training, ACE additionally aligns the unconditional noise prediction of the tuned model with the proposed unconditional erasure guidance. After that, during generation or editing, the CFG prediction in the tuned model can implicitly mitigate the presence of the erased concept, thereby preventing the production of unwanted content. A prior constraint loss is further adopted to address overfitting during training. Additionally, to reduce the impact of the added target concept noise guidance on the generation of non-target concepts, we further incorporate a random correction guidance into the unconditional erasure guidance by subtracting randomly sampled prior concept noise guidance. With that, our ACE can thoroughly erase the target concept while preserving the generation of" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 313, + 83, + 545, + 140 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 83, + 545, + 140 + ], + "spans": [ + { + "bbox": [ + 313, + 83, + 545, + 140 + ], + "type": "text", + "content": "non-target concepts. We conducted extensive evaluations across different erasure tasks, including intellectual property (IP), explicit content, and artistic style. Our method demonstrates significant advantages in both generation and editing filtration, showcasing its effectiveness." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 324, + 141, + 535, + 152 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 324, + 141, + 535, + 152 + ], + "spans": [ + { + "bbox": [ + 324, + 141, + 535, + 152 + ], + "type": "text", + "content": "The contributions of this work can be summarized as:" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 313, + 153, + 545, + 280 + ], + "type": "list", + "angle": 0, + "index": 9, + "blocks": [ + { + "bbox": [ + 313, + 153, + 545, + 198 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 153, + 545, + 198 + ], + "spans": [ + { + "bbox": [ + 313, + 153, + 545, + 198 + ], + "type": "text", + "content": "- We investigate the potential risks of unsafe content creation through image editing, and propose an Anti-Editing Concept Erasure (ACE) method to prevent the production of such content during both generation and editing."
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 313, + 200, + 545, + 245 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 200, + 545, + 245 + ], + "spans": [ + { + "bbox": [ + 313, + 200, + 545, + 245 + ], + "type": "text", + "content": "- An unconditional erasure guidance is proposed for anti-editing concept erasure, along with a concept preservation mechanism to ensure the generation of non-target concepts." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 246, + 545, + 280 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 246, + 545, + 280 + ], + "spans": [ + { + "bbox": [ + 313, + 246, + 545, + 280 + ], + "type": "text", + "content": "- Extensive experiments demonstrate that our ACE can successfully erase target concepts and exhibits superior filtration capabilities during both generation and editing." + } + ] + } + ], + "index": 8 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 313, + 293, + 398, + 304 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 293, + 398, + 304 + ], + "spans": [ + { + "bbox": [ + 313, + 293, + 398, + 304 + ], + "type": "text", + "content": "2. Related Work" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 312, + 479, + 324 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 312, + 479, + 324 + ], + "spans": [ + { + "bbox": [ + 313, + 312, + 479, + 324 + ], + "type": "text", + "content": "2.1. Concept Erasure in T2I Models" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 329, + 545, + 526 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 329, + 545, + 526 + ], + "spans": [ + { + "bbox": [ + 313, + 329, + 545, + 526 + ], + "type": "text", + "content": "Concept erasure [9, 11, 15, 16, 18, 21, 23-25, 28-30, 33, 39, 41, 43, 48, 51, 52, 59, 62-64, 71] in T2I models has been the subject of numerous studies. Fine-tuning the model is an important approach to concept erasure. ESD [14] suggests integrating negative guidance into target concept noise through training. SPM [36] proposes prior correction based on the cosine similarity of text and utilizes a comparable LoRA approach to train the model. MACE [35] leverages a closed-form solution to amalgamate multiple erasure LoRA weights. RECE [17] employs analytical methods to search for inappropriate text embeddings and integrates them into the erasure closed-form solution. AdvUnlearn [68] incorporates adversarial training to improve the robustness of the erasure method. To the best of our knowledge, current fine-tuning methods lack consideration for editing filtration, thus rendering them ineffective in preventing customized edits of target concept images." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 535, + 455, + 548 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 535, + 455, + 548 + ], + "spans": [ + { + "bbox": [ + 313, + 535, + 455, + 548 + ], + "type": "text", + "content": "2.2. Text-driven Image Editing" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 552, + 545, + 704 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 552, + 545, + 704 + ], + "spans": [ + { + "bbox": [ + 313, + 552, + 545, + 704 + ], + "type": "text", + "content": "Due to the broad generative capacities inherent in text-to-image DMs, the employment of DMs for image editing [3, 5, 7, 8, 26, 27, 32, 37, 38, 40, 47, 50, 54, 56, 57, 60, 65, 70] has progressively garnered traction.
MasaCtrl [6] introduces source image data into the image editing process by substituting keys and values in the self-attention layer, thus modifying the actions of objects in the image. LEDITS++[4] uses inference guidance and attention masks from DM to confine editing regions while using DDPM inversion for enhanced restoration of source image. Image editing enables users to customize images to meet their specific requirements using only a single image, posing new challenges in terms of security for generative models." + } + ] + } + ], + "index": 14 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 743, + 318, + 753 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 743, + 318, + 753 + ], + "spans": [ + { + "bbox": [ + 293, + 743, + 318, + 753 + ], + "type": "text", + "content": "23506" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 74, + 86, + 304, + 183 + ], + "blocks": [ + { + "bbox": [ + 74, + 86, + 304, + 183 + ], + "lines": [ + { + "bbox": [ + 74, + 86, + 304, + 183 + ], + "spans": [ + { + "bbox": [ + 74, + 86, + 304, + 183 + ], + "type": "image", + "image_path": "56a0845895c1ed71f7953c1f3585aa222381202b834edee24691efd4fe710399.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 116, + 187, + 252, + 196 + ], + "lines": [ + { + "bbox": [ + 116, + 187, + 252, + 196 + ], + "spans": [ + { + "bbox": [ + 116, + 187, + 252, + 196 + ], + "type": "text", + "content": "(a) Calculation of Classifier Free Guidance" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 319, + 87, + 531, + 178 + ], + "blocks": [ + { + "bbox": [ + 319, + 87, + 531, + 178 + ], + "lines": [ + { + "bbox": [ + 319, + 87, + 531, + 178 + ], + "spans": [ + { + "bbox": [ + 319, + 87, + 531, + 178 + ], + "type": "image", + "image_path": "cbe3f74c88efc5ab447a9f20f7f97f47a512ac356d7b19a1584ef65b6c3b8dd6.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 75, + 202, + 527, + 299 + ], + "blocks": [ + { + "bbox": [ + 340, + 187, + 508, + 196 + ], + "lines": [ + { + "bbox": [ + 340, + 187, + 508, + 196 + ], + "spans": [ + { + "bbox": [ + 340, + 187, + 508, + 196 + ], + "type": "text", + "content": "(b) Concept Erasure on Conditional Noise Prediction" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 75, + 202, + 527, + 299 + ], + "lines": [ + { + "bbox": [ + 75, + 202, + 527, + 299 + ], + "spans": [ + { + "bbox": [ + 75, + 202, + 527, + 299 + ], + "type": "image", + "image_path": "340bd65aa6b021232b6fb1dd0489bd42985b7666fac10ce58f3a2d84d7b26f48.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 153, + 301, + 449, + 311 + ], + "lines": [ + { + "bbox": [ + 153, + 301, + 449, + 311 + ], + "spans": [ + { + "bbox": [ + 153, + 301, + 449, + 311 + ], + "type": "text", + "content": "(c) Our ACE learns to erase concept on both Conditional and Unconditional Noise Predictions" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 63, + 327, + 547, + 381 + ], + "lines": [ + { + "bbox": [ + 63, + 327, + 547, + 381 + ], + "spans": [ + { + "bbox": [ + 63, + 327, + 547, + 381 + ], + "type": "text", + "content": "Figure 2. Overview of our proposed ACE. 
(a) In CFG, both conditional noise and unconditional noise are adopted to generate high-quality images. (b) ESD [14] unlearns the target concept (e.g., Mickey) by aligning conditional noise prediction with conditional erasure guidance (CEG). (c) During the fine-tuning, our ACE injects erasure guidance into both conditional and unconditional noise prediction, preventing the production of unsafe content during both generation and editing. PG-UEG denotes the prior-guided unconditional erasure guidance calculated following Eqn. 9." + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "bbox": [ + 63, + 390, + 187, + 402 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 390, + 187, + 402 + ], + "spans": [ + { + "bbox": [ + 63, + 390, + 187, + 402 + ], + "type": "text", + "content": "2.3. Attacks in T2I Models" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 63, + 407, + 296, + 546 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 407, + 296, + 546 + ], + "spans": [ + { + "bbox": [ + 63, + 407, + 296, + 546 + ], + "type": "text", + "content": "As research on concept erasure in T2I models advances, red-team studies focusing on the robustness of erasure methods are also increasingly emerging. P4D [10] proposes a method of inserting adversarial text into regular input text to facilitate the production of insecure images using the T2I model. Ring-A-Bell [53] extracts the discrepancy vector between the embeddings of insecure concept text and secure concept text and employs it to derive the attack text embedding. UnlearnDiff [69] employs Projected Gradient Descent (PGD) to tackle the optimization challenge inherent in adversarial attacks and maps the optimized text embeddings onto discrete tokens." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 63, + 556, + 167, + 569 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 556, + 167, + 569 + ], + "spans": [ + { + "bbox": [ + 63, + 556, + 167, + 569 + ], + "type": "text", + "content": "3. Proposed Method" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 63, + 576, + 296, + 704 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 576, + 296, + 704 + ], + "spans": [ + { + "bbox": [ + 63, + 576, + 296, + 704 + ], + "type": "text", + "content": "Given a target concept (e.g., Fukushima), the concept erasure task [14, 36] aims to unlearn it from pre-trained text-to-image (T2I) models, preventing the illegal use of these models to create copyrighted content. However, existing methods can be circumvented and fail to prevent users from producing new undesirable images through image editing, which raises new concerns. To address this, we propose an Anti-Editing Concept Erasure (ACE) method, as illustrated in Fig. 2, to prevent the production of undesirable content through both generation and editing. In this section, we will first introduce the prior knowledge of our method (Sec. 3.1)," + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 391, + 547, + 460 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 391, + 547, + 460 + ], + "spans": [ + { + "bbox": [ + 313, + 391, + 547, + 460 + ], + "type": "text", + "content": "including the employed T2I model and concept erasure method. To address the editing issue, we further propose to erase the target concept from both conditional and unconditional prediction for anti-editing erasure (Sec. 3.2).
Finally, to preserve the generation of non-target concepts, a prior concept preservation mechanism is introduced (Sec. 3.3)." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 466, + 397, + 477 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 466, + 397, + 477 + ], + "spans": [ + { + "bbox": [ + 313, + 466, + 397, + 477 + ], + "type": "text", + "content": "3.1. Preliminaries" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 483, + 547, + 564 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 483, + 547, + 564 + ], + "spans": [ + { + "bbox": [ + 313, + 483, + 547, + 564 + ], + "type": "text", + "content": "Stable Diffusion. In this work, we adopt Stable Diffusion 1.4 [46] as text-to-image model, which is one of the representative T2I diffusion models. It first employs a variational autoencoder (VAE) to transform real images " + }, + { + "bbox": [ + 313, + 483, + 547, + 564 + ], + "type": "inline_equation", + "content": "x" + }, + { + "bbox": [ + 313, + 483, + 547, + 564 + ], + "type": "text", + "content": " into an image latent " + }, + { + "bbox": [ + 313, + 483, + 547, + 564 + ], + "type": "inline_equation", + "content": "z" + }, + { + "bbox": [ + 313, + 483, + 547, + 564 + ], + "type": "text", + "content": ". Then, a text-conditioned diffusion model " + }, + { + "bbox": [ + 313, + 483, + 547, + 564 + ], + "type": "inline_equation", + "content": "\\epsilon_{\\theta}" + }, + { + "bbox": [ + 313, + 483, + 547, + 564 + ], + "type": "text", + "content": " is trained on the latent space to predict latent codes, and mean-squared loss is adopted," + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 339, + 568, + 545, + 582 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 339, + 568, + 545, + 582 + ], + "spans": [ + { + "bbox": [ + 339, + 568, + 545, + 582 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {\\mathrm {L D M}} = \\mathbb {E} _ {z _ {t}, t, c, \\epsilon \\sim \\mathcal {N} (0, I)} \\left[ \\| \\epsilon - \\epsilon_ {\\theta} \\left(z _ {t}, c, t\\right) \\| _ {2} ^ {2} \\right], \\tag {1}", + "image_path": "ef822861d2a123af5b0b1075d0979f9760ecdca83d34a7b45af441cfa090320e.jpg" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 313, + 586, + 547, + 632 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 586, + 547, + 632 + ], + "spans": [ + { + "bbox": [ + 313, + 586, + 547, + 632 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 586, + 547, + 632 + ], + "type": "inline_equation", + "content": "\\epsilon" + }, + { + "bbox": [ + 313, + 586, + 547, + 632 + ], + "type": "text", + "content": " denotes the unscaled noise and " + }, + { + "bbox": [ + 313, + 586, + 547, + 632 + ], + "type": "inline_equation", + "content": "c" + }, + { + "bbox": [ + 313, + 586, + 547, + 632 + ], + "type": "text", + "content": " is the text embedding encoded by text encoders. " + }, + { + "bbox": [ + 313, + 586, + 547, + 632 + ], + "type": "inline_equation", + "content": "z_{t}" + }, + { + "bbox": [ + 313, + 586, + 547, + 632 + ], + "type": "text", + "content": " is the latent noised to time " + }, + { + "bbox": [ + 313, + 586, + 547, + 632 + ], + "type": "inline_equation", + "content": "t" + }, + { + "bbox": [ + 313, + 586, + 547, + 632 + ], + "type": "text", + "content": ". 
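To make the training objective in Eqn. 1 concrete, the following is a minimal PyTorch sketch of the noise-prediction loss. The stand-in predictor, the dummy latents, and the alphas_cumprod schedule are illustrative assumptions for this snippet only; Stable Diffusion 1.4 uses a UNet and its own schedule.

import torch

def ldm_loss(model, z0, c, alphas_cumprod):
    # Sample a timestep t and noise eps, form z_t via the forward process,
    # then penalize the squared error between true and predicted noise (Eqn. 1).
    b = z0.shape[0]
    t = torch.randint(0, alphas_cumprod.numel(), (b,))
    eps = torch.randn_like(z0)                      # epsilon ~ N(0, I)
    a_bar = alphas_cumprod[t].view(b, 1, 1, 1)
    z_t = a_bar.sqrt() * z0 + (1.0 - a_bar).sqrt() * eps
    return ((eps - model(z_t, c, t)) ** 2).mean()

# Toy usage with a hypothetical predictor standing in for eps_theta.
model = lambda z_t, c, t: torch.zeros_like(z_t)
alphas_cumprod = torch.linspace(0.9999, 0.005, 1000)
loss = ldm_loss(model, torch.randn(2, 4, 8, 8), c=None, alphas_cumprod=alphas_cumprod)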
During inference, a random Gaussian noise " + }, + { + "bbox": [ + 313, + 586, + 547, + 632 + ], + "type": "inline_equation", + "content": "z_{T}" + }, + { + "bbox": [ + 313, + 586, + 547, + 632 + ], + "type": "text", + "content": " is iteratively denoised to " + }, + { + "bbox": [ + 313, + 586, + 547, + 632 + ], + "type": "inline_equation", + "content": "z_{0}" + }, + { + "bbox": [ + 313, + 586, + 547, + 632 + ], + "type": "text", + "content": ", and decoded to final image." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 313, + 632, + 547, + 678 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 632, + 547, + 678 + ], + "spans": [ + { + "bbox": [ + 313, + 632, + 547, + 678 + ], + "type": "text", + "content": "Classifier-Free Guidance. To improve the quality of generated images, classifier-free guidance [20] is adopted during diffusion inference. Based on Tweedie's formula and the principles of diffusion model, we have:" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 336, + 682, + 545, + 706 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 336, + 682, + 545, + 706 + ], + "spans": [ + { + "bbox": [ + 336, + 682, + 545, + 706 + ], + "type": "interline_equation", + "content": "\\nabla_ {z _ {t}} \\log p (c | z _ {t}) = - \\frac {1}{\\sigma_ {t}} \\left(\\epsilon_ {\\theta} \\left(z _ {t}, c, t\\right) - \\epsilon_ {\\theta} \\left(z _ {t}, t\\right)\\right). \\tag {2}", + "image_path": "59db64c086750206045ea2bfa909c9c4f8a826a7cf401e10cd99fd4a28414e39.jpg" + } + ] + } + ], + "index": 17 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 743, + 317, + 753 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 743, + 317, + 753 + ], + "spans": [ + { + "bbox": [ + 293, + 743, + 317, + 753 + ], + "type": "text", + "content": "23507" + } + ] + } + ], + "index": 18 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 69, + 84, + 292, + 317 + ], + "blocks": [ + { + "bbox": [ + 69, + 84, + 292, + 317 + ], + "lines": [ + { + "bbox": [ + 69, + 84, + 292, + 317 + ], + "spans": [ + { + "bbox": [ + 69, + 84, + 292, + 317 + ], + "type": "image", + "image_path": "27190eb892c6b8bb4fe387846155a26a0b9452aa12300fb0627f3354f91f094c.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 63, + 327, + 296, + 361 + ], + "lines": [ + { + "bbox": [ + 63, + 327, + 296, + 361 + ], + "spans": [ + { + "bbox": [ + 63, + 327, + 296, + 361 + ], + "type": "text", + "content": "Figure 3. Qualitative comparisons of IP character removal. Our ACE effectively erases the target concept while generating other concepts successfully." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 63, + 377, + 296, + 424 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 377, + 296, + 424 + ], + "spans": [ + { + "bbox": [ + 63, + 377, + 296, + 424 + ], + "type": "text", + "content": "Here, " + }, + { + "bbox": [ + 63, + 377, + 296, + 424 + ], + "type": "inline_equation", + "content": "\\sigma_{t}" + }, + { + "bbox": [ + 63, + 377, + 296, + 424 + ], + "type": "text", + "content": " is a constant. 
To increase the probability of text condition " + }, + { + "bbox": [ + 63, + 377, + 296, + 424 + ], + "type": "inline_equation", + "content": "c" + }, + { + "bbox": [ + 63, + 377, + 296, + 424 + ], + "type": "text", + "content": " appearing in the final image, the final noise prediction is the composition of noise prediction from both conditional and unconditional texts," + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 97, + 430, + 295, + 443 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 97, + 430, + 295, + 443 + ], + "spans": [ + { + "bbox": [ + 97, + 430, + 295, + 443 + ], + "type": "interline_equation", + "content": "\\tilde {\\epsilon} = \\epsilon_ {\\theta} \\left(z _ {t}, t\\right) + \\omega \\left(\\epsilon_ {\\theta} \\left(z _ {t}, c, t\\right) - \\epsilon_ {\\theta} \\left(z _ {t}, t\\right)\\right), \\tag {3}", + "image_path": "b5049c246036a20a0c341fa3439310a346e7c02348354a8d470a86ca39999560.jpg" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 63, + 447, + 295, + 471 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 447, + 295, + 471 + ], + "spans": [ + { + "bbox": [ + 63, + 447, + 295, + 471 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 63, + 447, + 295, + 471 + ], + "type": "inline_equation", + "content": "\\epsilon_{\\theta}(z_t,t)" + }, + { + "bbox": [ + 63, + 447, + 295, + 471 + ], + "type": "text", + "content": " denote the unconditional noise prediction, and " + }, + { + "bbox": [ + 63, + 447, + 295, + 471 + ], + "type": "inline_equation", + "content": "\\omega" + }, + { + "bbox": [ + 63, + 447, + 295, + 471 + ], + "type": "text", + "content": " is a hyperparameter controlling the guidance scale." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 63, + 471, + 296, + 540 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 471, + 296, + 540 + ], + "spans": [ + { + "bbox": [ + 63, + 471, + 296, + 540 + ], + "type": "text", + "content": "Concept Erasure. Given a target concept indicated by text " + }, + { + "bbox": [ + 63, + 471, + 296, + 540 + ], + "type": "inline_equation", + "content": "c" + }, + { + "bbox": [ + 63, + 471, + 296, + 540 + ], + "type": "text", + "content": " (e.g., Pikachu), concept erasure task finetunes the model to reduce the probability of generating images containing this concept. 
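Before moving to the erasure objective, the CFG composition of Eqn. 3 above is simple to state in code; a minimal sketch, where the random tensors are stand-ins for actual conditional and unconditional noise predictions:

import torch

def cfg_compose(eps_uncond, eps_cond, omega):
    # Eqn. 3: shift the prediction toward the text condition by scale omega.
    return eps_uncond + omega * (eps_cond - eps_uncond)

eps_u = torch.randn(1, 4, 64, 64)   # eps_theta(z_t, t), unconditional
eps_c = torch.randn_like(eps_u)     # eps_theta(z_t, c, t), conditional
eps_tilde = cfg_compose(eps_u, eps_c, omega=7.5)  # 7.5 is a typical SD scale, an assumption here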
For example, ESD [14] removes the target concept from the conditional noise prediction, and a conditional erasure guidance (CEG) is defined as:" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 88, + 546, + 295, + 559 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 546, + 295, + 559 + ], + "spans": [ + { + "bbox": [ + 88, + 546, + 295, + 559 + ], + "type": "interline_equation", + "content": "\tilde {\epsilon} _ {c} = \epsilon_ {\theta^ {*}} \left(z _ {t}, t\right) - \eta_ {c} \left(\epsilon_ {\theta^ {*}} \left(z _ {t}, c, t\right) - \epsilon_ {\theta^ {*}} \left(z _ {t}, t\right)\right), \tag {4}", + "image_path": "6a83490ca4d85fcacd90c899f3798f3939a313cab08478c6921d8dfb7e9cae24.jpg" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 63, + 563, + 296, + 622 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 563, + 296, + 622 + ], + "spans": [ + { + "bbox": [ + 63, + 563, + 296, + 622 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 63, + 563, + 296, + 622 + ], + "type": "inline_equation", + "content": "\epsilon_{\theta^{\star}}(\cdot)" + }, + { + "bbox": [ + 63, + 563, + 296, + 622 + ], + "type": "text", + "content": " represents the original T2I model, and " + }, + { + "bbox": [ + 63, + 563, + 296, + 622 + ], + "type": "inline_equation", + "content": "z_{t}" + }, + { + "bbox": [ + 63, + 563, + 296, + 622 + ], + "type": "text", + "content": " is the encoded latent image containing target concept " + }, + { + "bbox": [ + 63, + 563, + 296, + 622 + ], + "type": "inline_equation", + "content": "c" + }, + { + "bbox": [ + 63, + 563, + 296, + 622 + ], + "type": "text", + "content": ". " + }, + { + "bbox": [ + 63, + 563, + 296, + 622 + ], + "type": "inline_equation", + "content": "\eta_c" + }, + { + "bbox": [ + 63, + 563, + 296, + 622 + ], + "type": "text", + "content": " is a control scale hyperparameter. During training, ESD aligns the noise prediction of the target concept in the tuned model " + }, + { + "bbox": [ + 63, + 563, + 296, + 622 + ], + "type": "inline_equation", + "content": "\epsilon_{\theta}(z_t,c,t)" + }, + { + "bbox": [ + 63, + 563, + 296, + 622 + ], + "type": "text", + "content": " with the above CEG," + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 105, + 626, + 295, + 641 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 626, + 295, + 641 + ], + "spans": [ + { + "bbox": [ + 105, + 626, + 295, + 641 + ], + "type": "interline_equation", + "content": "\mathcal {L} _ {\mathrm {E S D}} = \mathbb {E} _ {z _ {t}, t, c} \left[ \| \epsilon_ {\theta} (z _ {t}, c, t) - \tilde {\epsilon} _ {c} \| _ {2} ^ {2} \right]. \tag {5}", + "image_path": "11b998a232d73f2d8f49a86a9694463d4226de9329be2cdf35ed1bc1925c7055.jpg" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 63, + 645, + 296, + 704 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 645, + 296, + 704 + ], + "spans": [ + { + "bbox": [ + 63, + 645, + 296, + 704 + ], + "type": "text", + "content": "After the training, the erasure guidance " + }, + { + "bbox": [ + 63, + 645, + 296, + 704 + ], + "type": "inline_equation", + "content": "-\nabla_{z_t}\log p(c|z_t)" + }, + { + "bbox": [ + 63, + 645, + 296, + 704 + ], + "type": "text", + "content": " is introduced into the conditional noise prediction of the target concept.
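A minimal sketch of the CEG target and alignment loss in Eqns. 4-5; eps_orig_* denote predictions from the frozen original model and eps_tuned_cond the conditional prediction of the model being fine-tuned (the names are illustrative, not ESD's actual code):

import torch

def ceg_target(eps_orig_uncond, eps_orig_cond, eta_c):
    # Eqn. 4: push the conditional prediction away from the target concept.
    return eps_orig_uncond - eta_c * (eps_orig_cond - eps_orig_uncond)

def esd_loss(eps_tuned_cond, eps_orig_uncond, eps_orig_cond, eta_c=1.0):
    # Eqn. 5: align the tuned conditional prediction with the frozen CEG target.
    target = ceg_target(eps_orig_uncond, eps_orig_cond, eta_c).detach()
    return ((eps_tuned_cond - target) ** 2).mean()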
Therefore, the prediction of the tuned model will be guided away from the erased concept, preventing the generation of images containing the erased concept." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 82, + 473, + 95 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 82, + 473, + 95 + ], + "spans": [ + { + "bbox": [ + 313, + 82, + 473, + 95 + ], + "type": "text", + "content": "3.2. Anti-Editing Concept Erasure" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 99, + 547, + 306 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 99, + 547, + 306 + ], + "spans": [ + { + "bbox": [ + 313, + 99, + 547, + 306 + ], + "type": "text", + "content": "Editing Filtration. Although existing erasure methods can successfully prevent the generation of an erased concept through text prompts, they can be easily circumvented by editing techniques. As shown in Fig. 1, when utilizing the tuned ESD model to add sunglasses to an image of Pikachu using LEDITS++ [4], it successfully produces an image of Pikachu with sunglasses, raising potential copyright concerns. This is because these methods are typically trained to erase the concept from the noise prediction of the target concept (as shown in Fig. 2 (b)), and rely on inputting concept text (e.g., \"Pikachu\" or \"Mickey\") to trigger the guard. However, during the editing process, the target concept may not necessarily be used in the text prompt. Therefore, these erasure methods fail to prevent the reconstruction of the erased concept. In practice, the erasure model should also have the ability to prevent the creation of undesired concepts through image editing, a feature we refer to as editing filtration." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 308, + 546, + 423 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 308, + 546, + 423 + ], + "spans": [ + { + "bbox": [ + 313, + 308, + 546, + 423 + ], + "type": "text", + "content": "Unconditional Erasure Guidance. Current generation and editing methods heavily rely on classifier-free guidance (CFG) [20] to improve the quality of generated images, where the unconditional noise prediction plays an important role. To address the issue of editing filtration, we further propose to erase the target concept from both conditional and unconditional noise prediction, thereby preventing edited images from containing target concepts. Specifically, similar to ESD, we define the unconditional erasure guidance (UEG) as," + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 339, + 430, + 545, + 443 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 339, + 430, + 545, + 443 + ], + "spans": [ + { + "bbox": [ + 339, + 430, + 545, + 443 + ], + "type": "interline_equation", + "content": "\tilde {\epsilon} _ {\mathrm {u}} = \epsilon_ {\theta^ {*}} \left(z _ {t}, t\right) + \eta_ {\mathrm {u}} \left(\epsilon_ {\theta^ {*}} \left(z _ {t}, c, t\right) - \epsilon_ {\theta^ {*}} \left(z _ {t}, t\right)\right).
\tag {6}", + "image_path": "3189bebcd81e7dc6e7f5834b7f9e27c4232072b3dc3c2b5155a37392112a0172.jpg" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 450, + 545, + 473 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 450, + 545, + 473 + ], + "spans": [ + { + "bbox": [ + 313, + 450, + 545, + 473 + ], + "type": "text", + "content": "During training, we additionally align the unconditional noise prediction of the tuned model with the UEG," + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 361, + 480, + 545, + 495 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 361, + 480, + 545, + 495 + ], + "spans": [ + { + "bbox": [ + 361, + 480, + 545, + 495 + ], + "type": "interline_equation", + "content": "\mathcal {L} _ {\mathrm {U n c}} = \mathbb {E} _ {z _ {t}, t, c} \left[ \| \epsilon_ {\theta} (z _ {t}, t) - \tilde {\epsilon} _ {\mathrm {u}} \| _ {2} ^ {2} \right]. \tag {7}", + "image_path": "10d4717d6df69776149f34c34c30c3dac321541f90a0e8f9b93a8f542dd0cda9.jpg" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 313, + 500, + 547, + 686 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 500, + 547, + 686 + ], + "spans": [ + { + "bbox": [ + 313, + 500, + 547, + 686 + ], + "type": "text", + "content": "When the fine-tuned unconditional noise (our UEG) is subtracted in the CFG process, the erased concept guidance is subtracted as well, reducing the probability of the erased concept appearing regardless of the input text prompt. The CFG noise prediction during inference will thus move away from the target concept for any text input, effectively preventing the production of images containing the target concept. As erasure models are usually trained on a small dataset, they are prone to overfitting, where the erasure guidance is introduced into the noise prediction for other conditional text prompts. This weakens the erasure effects and leads to incomplete erasures. To address the issue of overfitting, we introduce a prior constraint loss during the training process.
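Relative to the ESD sketch above, the only changes are the sign and the prediction being supervised: the UEG of Eqn. 6 folds the concept direction into the unconditional branch, so that CFG's subtraction of the unconditional term removes it at inference (Eqn. 7). A sketch under the same naming assumptions:

import torch

def ueg_target(eps_orig_uncond, eps_orig_cond, eta_u):
    # Eqn. 6: note the + sign; the concept direction is added to the
    # unconditional prediction rather than removed from the conditional one.
    return eps_orig_uncond + eta_u * (eps_orig_cond - eps_orig_uncond)

def unc_loss(eps_tuned_uncond, eps_orig_uncond, eps_orig_cond, eta_u=1.0):
    # Eqn. 7: supervise the tuned *unconditional* prediction.
    target = ueg_target(eps_orig_uncond, eps_orig_cond, eta_u).detach()
    return ((eps_tuned_uncond - target) ** 2).mean()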
Specifically, we regularize the prediction of the prior concept in the new model to be consistent with that of the original model:" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 325, + 691, + 545, + 704 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 325, + 691, + 545, + 704 + ], + "spans": [ + { + "bbox": [ + 325, + 691, + 545, + 704 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {\\text {C o n s}} = \\mathbb {E} _ {z _ {t}, t, c _ {p} \\in \\mathcal {C} _ {p}} \\left[ \\| \\epsilon_ {\\theta} \\left(z _ {t}, c _ {p}, t\\right) - \\epsilon_ {\\theta^ {*}} \\left(z _ {t}, c _ {p}, t\\right) \\| _ {2} ^ {2} \\right], \\tag {8}", + "image_path": "fcdb09a87740b7ecde988a2435ddf220fd4379af96eca5cb3aeeb62663ce86cb.jpg" + } + ] + } + ], + "index": 17 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 743, + 317, + 753 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 743, + 317, + 753 + ], + "spans": [ + { + "bbox": [ + 293, + 743, + 317, + 753 + ], + "type": "text", + "content": "23508" + } + ] + } + ], + "index": 18 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 72, + 83, + 539, + 264 + ], + "blocks": [ + { + "bbox": [ + 72, + 83, + 539, + 264 + ], + "lines": [ + { + "bbox": [ + 72, + 83, + 539, + 264 + ], + "spans": [ + { + "bbox": [ + 72, + 83, + 539, + 264 + ], + "type": "image", + "image_path": "bb0ea50cff7f9b2c8b2419c85b5296709314c8902c1c00d0bf08c525e1efaf57.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 63, + 271, + 548, + 305 + ], + "lines": [ + { + "bbox": [ + 63, + 271, + 548, + 305 + ], + "spans": [ + { + "bbox": [ + 63, + 271, + 548, + 305 + ], + "type": "text", + "content": "Figure 4. Comparison of our ACE method with other methods in terms of editing filtering. After erasing Mickey Mouse, our method filtered out edits involving Mickey Mouse while not affecting edits related to other IP characters. In contrast, the competing methods either fail to prevent editing (e.g., ESD, SPM, RECE, and MACE) or cannot perform editing on non-target concepts (e.g., AdvUnlearn)." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 67, + 320, + 542, + 498 + ], + "blocks": [ + { + "bbox": [ + 67, + 320, + 542, + 498 + ], + "lines": [ + { + "bbox": [ + 67, + 320, + 542, + 498 + ], + "spans": [ + { + "bbox": [ + 67, + 320, + 542, + 498 + ], + "type": "image", + "image_path": "5f7264cadb1e3e303dc91c35a8afcb0694688495744182e74bb856025ae21ba8.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 63, + 510, + 547, + 554 + ], + "lines": [ + { + "bbox": [ + 63, + 510, + 547, + 554 + ], + "spans": [ + { + "bbox": [ + 63, + 510, + 547, + 554 + ], + "type": "text", + "content": "Figure 5. Qualitative results of nudity removal. Figure (a) shows the results of explicit editing using SD-Inpainting, while Figure (b) displays images generated using text with explicit label. Static adversarial text is used for editing text, while dynamic adversarial attacks are employed for generation. It can be observed that our method effectively reduces exposure in both editing and generation tasks. Moreover, our method maintains its effectiveness when editing and generating using adversarial text, indicating its robustness." 
+ } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "bbox": [ + 63, + 564, + 298, + 693 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 564, + 298, + 693 + ], + "spans": [ + { + "bbox": [ + 63, + 564, + 298, + 693 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 63, + 564, + 298, + 693 + ], + "type": "inline_equation", + "content": "c_p" + }, + { + "bbox": [ + 63, + 564, + 298, + 693 + ], + "type": "text", + "content": " represents a prior concept, and " + }, + { + "bbox": [ + 63, + 564, + 298, + 693 + ], + "type": "inline_equation", + "content": "\mathcal{C}_p" + }, + { + "bbox": [ + 63, + 564, + 298, + 693 + ], + "type": "text", + "content": " represents the set of prior concepts. Intuitively, the larger the set of priors, the better it helps mitigate overfitting. However, it is challenging to traverse all the prior concepts as the pre-trained models have a large general semantic space. Our goal is to preserve the concepts more likely to be affected, thus minimizing the influence on other concepts. We assume that these are concepts semantically related to the erased concept and use an LLM [1] to obtain them. Adding this loss ensures that the erasure guidance introduced during training aligns with our formulation in Eqn. 7." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 313, + 563, + 462, + 576 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 563, + 462, + 576 + ], + "spans": [ + { + "bbox": [ + 313, + 563, + 462, + 576 + ], + "type": "text", + "content": "3.3. Prior Concept Preservation" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 313, + 587, + 547, + 704 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 587, + 547, + 704 + ], + "spans": [ + { + "bbox": [ + 313, + 587, + 547, + 704 + ], + "type": "text", + "content": "In practice, training with the method proposed in Sec. 3.2 affects the generation prior of relevant concepts (see Sec. 4.4). This is because incorporating UEG not only decreases the probability of producing erased concepts, but also decreases the probability of adjacent concepts. Therefore, we reverse the mechanism of UEG by subtracting the guidance of prior concepts from the unconditional noise, which prevents the probability reduction of these concepts and minimizes concept forgetting.
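The prior constraint of Eqn. 8 above is a plain consistency term; a minimal sketch, where the prior-concept list is an illustrative stand-in for the LLM-generated set C_p, not the paper's actual prompts:

import torch, random

PRIOR_CONCEPTS = ["Hello Kitty", "Snoopy", "Mickey Mouse"]  # illustrative C_p

def cons_loss(eps_tuned_prior, eps_orig_prior):
    # Eqn. 8: keep predictions for prior concepts c_p matched to the frozen model.
    return ((eps_tuned_prior - eps_orig_prior.detach()) ** 2).mean()

# Each training step samples one prior concept c_p to regularize:
c_p = random.choice(PRIOR_CONCEPTS)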
The prior concepts are sampled from the semantic-related concepts obtained using LLM," + } + ] + } + ], + "index": 6 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 743, + 318, + 753 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 743, + 318, + 753 + ], + "spans": [ + { + "bbox": [ + 293, + 743, + 318, + 753 + ], + "type": "text", + "content": "23509" + } + ] + } + ], + "index": 7 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 66, + 89, + 545, + 155 + ], + "blocks": [ + { + "bbox": [ + 219, + 81, + 294, + 88 + ], + "lines": [ + { + "bbox": [ + 219, + 81, + 294, + 88 + ], + "spans": [ + { + "bbox": [ + 219, + 81, + 294, + 88 + ], + "type": "text", + "content": "(a) Generation Prevention" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 417, + 81, + 478, + 88 + ], + "lines": [ + { + "bbox": [ + 417, + 81, + 478, + 88 + ], + "spans": [ + { + "bbox": [ + 417, + 81, + 478, + 88 + ], + "type": "text", + "content": "(b) Editing Filtration" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 66, + 89, + 545, + 155 + ], + "lines": [ + { + "bbox": [ + 66, + 89, + 545, + 155 + ], + "spans": [ + { + "bbox": [ + 66, + 89, + 545, + 155 + ], + "type": "table", + "html": "
MethodErase ConceptPrior ConceptOverallErase ConceptPrior ConceptOverall
ESDUncConsCorCLIPe↓LPIPSe↑CLIPp↑LPIPSp↓CLIPd↑LPIPSd↑CLIPe↓LPIPSe↑CLIPp↑LPIPSp↓CLIPd↑LPIPSd↑
(1)0.1710.4400.2460.2860.0750.1530.3010.0600.3050.0500.0040.011
(2)0.1660.5510.2830.2360.1170.3150.2850.1490.3050.0570.0190.092
(3)0.1590.5070.2540.3370.0950.1700.2740.1680.3000.0770.0260.091
(4)0.2110.3030.2930.1990.0820.1040.2730.1750.3010.0790.0280.096
(5)0.1750.3970.2950.1960.1200.2010.2740.1680.3030.0700.0290.097
", + "image_path": "d4357bae8bd89ad1632e6896534b2165c85ea122d5b6c625074ecf998cf481a0.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 63, + 162, + 547, + 195 + ], + "lines": [ + { + "bbox": [ + 63, + 162, + 547, + 195 + ], + "spans": [ + { + "bbox": [ + 63, + 162, + 547, + 195 + ], + "type": "text", + "content": "Table 1. Quantitative Evaluation of generation and editing after ablation. The best results are highlighted in bold. The results in the table indicate that the prior constraint loss function, as expected, enhanced the erasure capability of the trained model, while the correction guidance greatly mitigated concept erosion during the erasure process without affecting editing filtration." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "text" + }, + { + "type": "image", + "bbox": [ + 72, + 217, + 289, + 456 + ], + "blocks": [ + { + "bbox": [ + 72, + 217, + 289, + 456 + ], + "lines": [ + { + "bbox": [ + 72, + 217, + 289, + 456 + ], + "spans": [ + { + "bbox": [ + 72, + 217, + 289, + 456 + ], + "type": "image", + "image_path": "9db561db123c5c79efc21173cd92c2acc70444dc91671755919fd09b03fb7492.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 63, + 468, + 295, + 500 + ], + "lines": [ + { + "bbox": [ + 63, + 468, + 295, + 500 + ], + "spans": [ + { + "bbox": [ + 63, + 468, + 295, + 500 + ], + "type": "text", + "content": "Figure 6. Qualitative results of artistic style removal. Our method erases the target style effectively and has minimal impact on other artistic styles." + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "bbox": [ + 63, + 510, + 295, + 544 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 510, + 295, + 544 + ], + "spans": [ + { + "bbox": [ + 63, + 510, + 295, + 544 + ], + "type": "text", + "content": "which is mentioned in the previous section. We call this new guidance prior-guided unconditional erasure guidance (PG-UEG), which is defined as:" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 75, + 552, + 294, + 579 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 75, + 552, + 294, + 579 + ], + "spans": [ + { + "bbox": [ + 75, + 552, + 294, + 579 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} \\tilde {\\epsilon} _ {\\mathrm {p u}} = \\epsilon_ {\\theta^ {*}} (z _ {t}, t) + \\eta_ {\\mathrm {u}} \\left(\\epsilon_ {\\theta^ {*}} (z _ {t}, c, t) - \\epsilon_ {\\theta^ {*}} (z _ {t}, t)\\right) \\\\ - \\eta_ {p} \\gamma_ {p} \\left(\\epsilon_ {\\theta^ {*}} \\left(z _ {t}, c _ {p}, t\\right) - \\epsilon_ {\\theta^ {*}} \\left(z _ {t}, t\\right)\\right), \\tag {9} \\\\ \\end{array}", + "image_path": "d4d2e13f4421f1de96b9a1da212729abc733597ee3f14d8bd58b6b009fc17a22.jpg" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 63, + 586, + 295, + 683 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 586, + 295, + 683 + ], + "spans": [ + { + "bbox": [ + 63, + 586, + 295, + 683 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 63, + 586, + 295, + 683 + ], + "type": "inline_equation", + "content": "\\gamma_{p}" + }, + { + "bbox": [ + 63, + 586, + 295, + 683 + ], + "type": "text", + "content": " represents the guidance control term related to the prior retained concept. 
" + }, + { + "bbox": [ + 63, + 586, + 295, + 683 + ], + "type": "inline_equation", + "content": "c_{p}" + }, + { + "bbox": [ + 63, + 586, + 295, + 683 + ], + "type": "text", + "content": " refers to the same prior concept in " + }, + { + "bbox": [ + 63, + 586, + 295, + 683 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{\\mathrm{Cons}}" + }, + { + "bbox": [ + 63, + 586, + 295, + 683 + ], + "type": "text", + "content": " which are obtained through random sampling from the set " + }, + { + "bbox": [ + 63, + 586, + 295, + 683 + ], + "type": "inline_equation", + "content": "\\mathcal{C}_p" + }, + { + "bbox": [ + 63, + 586, + 295, + 683 + ], + "type": "text", + "content": ". We calculate " + }, + { + "bbox": [ + 63, + 586, + 295, + 683 + ], + "type": "inline_equation", + "content": "\\gamma_{p}" + }, + { + "bbox": [ + 63, + 586, + 295, + 683 + ], + "type": "text", + "content": " using the CLIP model to measure the relevance of different prior concepts to the target concept image and then compare it to the relevance of the target concept text to its image. Specifically, " + }, + { + "bbox": [ + 63, + 586, + 295, + 683 + ], + "type": "inline_equation", + "content": "\\gamma_{p} = \\frac{\\mathrm{CLIP}(x,c_{p})}{\\mathrm{CLIP}(x,c)}" + }, + { + "bbox": [ + 63, + 586, + 295, + 683 + ], + "type": "text", + "content": ". The new loss for our ACE is:" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 87, + 690, + 295, + 704 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 87, + 690, + 295, + 704 + ], + "spans": [ + { + "bbox": [ + 87, + 690, + 295, + 704 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {\\mathrm {P U n c}} = \\mathbb {E} _ {z _ {t}, t, c, c _ {p} \\in \\mathcal {C} _ {p}} \\left[ \\| \\epsilon_ {\\theta} (z _ {t}, t) - \\tilde {\\epsilon} _ {\\mathrm {p u}} \\| _ {2} ^ {2} \\right]. \\tag {10}", + "image_path": "9a0cdd18f4377402bb3f3df1862941bcf115b95a4aac72b210ba4866cab6722a.jpg" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 215, + 545, + 238 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 215, + 545, + 238 + ], + "spans": [ + { + "bbox": [ + 313, + 215, + 545, + 238 + ], + "type": "text", + "content": "The final training loss for our ACE is summarized as: " + }, + { + "bbox": [ + 313, + 215, + 545, + 238 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{\\mathrm{ACE}} = \\lambda_{\\mathrm{PUnc}}\\mathcal{L}_{\\mathrm{PUnc}} + \\lambda_{\\mathrm{Cons}}\\mathcal{L}_{\\mathrm{Cons}} + \\lambda_{\\mathrm{ESD}}\\mathcal{L}_{\\mathrm{ESD}}" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 239, + 545, + 274 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 239, + 545, + 274 + ], + "spans": [ + { + "bbox": [ + 313, + 239, + 545, + 274 + ], + "type": "text", + "content": "In our implementation, we adopt LORA [22] for parameter-efficient tuning, and the training process follows [14]. More details are provided in Suppl." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 285, + 392, + 298 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 285, + 392, + 298 + ], + "spans": [ + { + "bbox": [ + 313, + 285, + 392, + 298 + ], + "type": "text", + "content": "4. 
Experiments" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 306, + 545, + 374 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 306, + 545, + 374 + ], + "spans": [ + { + "bbox": [ + 313, + 306, + 545, + 374 + ], + "type": "text", + "content": "We conduct experiments on various tasks to evaluate our ACE, including IP character erasure, artistic style erasure, and nudity erasure. ESD [14], SPM [36], AdvUnlearn [68], MACE [35], and RECE [17] are adopted as competing methods. Unless otherwise specified, the experiments are conducted on Stable Diffusion v1.4." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 384, + 438, + 396 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 384, + 438, + 396 + ], + "spans": [ + { + "bbox": [ + 313, + 384, + 438, + 396 + ], + "type": "text", + "content": "4.1. IP Character Removal" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 313, + 402, + 545, + 704 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 402, + 545, + 704 + ], + "spans": [ + { + "bbox": [ + 313, + 402, + 545, + 704 + ], + "type": "text", + "content": "Experiment Setup. To assess our ACE on IP character removal, we employ ten iconic IP characters as examples, including Hello Kitty, Snoopy, Mickey Mouse, Elsa, Donald Duck, Dora the Explorer, Winnie the Pooh, Sonic the Hedgehog, Pikachu, and Fukushima. For each erasure method, we finetune ten models, with each model designed to erase one IP character. Following [14, 17], we adopt the CLIP [44] score and LPIPS [66] score as metrics for evaluation. The CLIP score calculates the similarity between the generated image and concept text, while LPIPS calculates the perceptual difference between images generated by the erasure model and the original T2I model. " + }, + { + "bbox": [ + 313, + 402, + 545, + 704 + ], + "type": "inline_equation", + "content": "\mathrm{CLIP}_e" + }, + { + "bbox": [ + 313, + 402, + 545, + 704 + ], + "type": "text", + "content": " calculates the CLIP similarity between images generated with erased concept text and their corresponding text, where a lower value indicates more thorough erasure. " + }, + { + "bbox": [ + 313, + 402, + 545, + 704 + ], + "type": "inline_equation", + "content": "\mathrm{CLIP}_p" + }, + { + "bbox": [ + 313, + 402, + 545, + 704 + ], + "type": "text", + "content": " calculates the relevance under prior concepts, and a higher value indicates better prior preservation. " + }, + { + "bbox": [ + 313, + 402, + 545, + 704 + ], + "type": "inline_equation", + "content": "\mathrm{LPIPS}_e" + }, + { + "bbox": [ + 313, + 402, + 545, + 704 + ], + "type": "text", + "content": " calculates the LPIPS similarity between images generated with erased concept text by the trained model and the original model, and a higher value indicates more thorough erasure. " + }, + { + "bbox": [ + 313, + 402, + 545, + 704 + ], + "type": "inline_equation", + "content": "\mathrm{LPIPS}_p" + }, + { + "bbox": [ + 313, + 402, + 545, + 704 + ], + "type": "text", + "content": " calculates the similarity under prior concepts, in which lower values indicate better prior preservation. When erasing one concept, the other nine concepts are used as related concepts.
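For reference, the CLIP and LPIPS scores described above can be computed with off-the-shelf models; a sketch using the Hugging Face CLIP ViT-B/32 checkpoint and the lpips package (the checkpoint choice and preprocessing are assumptions, not necessarily the authors' evaluation setup):

import torch
import lpips
from transformers import CLIPModel, CLIPProcessor

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
lpips_net = lpips.LPIPS(net="alex")  # perceptual metric of [66]

@torch.no_grad()
def clip_score(pil_image, text):
    # Cosine similarity between image and text embeddings (CLIP_e / CLIP_p).
    inputs = proc(text=[text], images=pil_image, return_tensors="pt", padding=True)
    img = clip.get_image_features(pixel_values=inputs["pixel_values"])
    txt = clip.get_text_features(input_ids=inputs["input_ids"],
                                 attention_mask=inputs["attention_mask"])
    img = img / img.norm(dim=-1, keepdim=True)
    txt = txt / txt.norm(dim=-1, keepdim=True)
    return (img * txt).sum().item()

@torch.no_grad()
def lpips_score(x0, x1):
    # x0, x1: (1, 3, H, W) tensors scaled to [-1, 1] (LPIPS_e / LPIPS_p).
    return lpips_net(x0, x1).item()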
Following RECE [17], we further calculate the overall scores between erased and related characters to measure the trade-off between concept erasure and prior preservation, where" + } + ] + } + ], + "index": 15 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 743, + 317, + 753 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 743, + 317, + 753 + ], + "spans": [ + { + "bbox": [ + 293, + 743, + 317, + 753 + ], + "type": "text", + "content": "23510" + } + ] + } + ], + "index": 16 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 66, + 90, + 544, + 178 + ], + "blocks": [ + { + "bbox": [ + 188, + 81, + 271, + 89 + ], + "lines": [ + { + "bbox": [ + 188, + 81, + 271, + 89 + ], + "spans": [ + { + "bbox": [ + 188, + 81, + 271, + 89 + ], + "type": "text", + "content": "(a) Generation Prevention" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 405, + 81, + 472, + 90 + ], + "lines": [ + { + "bbox": [ + 405, + 81, + 472, + 90 + ], + "spans": [ + { + "bbox": [ + 405, + 81, + 472, + 90 + ], + "type": "text", + "content": "(b) Editing Filtration" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 66, + 90, + 544, + 178 + ], + "lines": [ + { + "bbox": [ + 66, + 90, + 544, + 178 + ], + "spans": [ + { + "bbox": [ + 66, + 90, + 544, + 178 + ], + "type": "table", + "html": "
MethodErase ConceptPrior ConceptOverallErase ConceptPrior ConceptOverall
\\(CLIP_e\\downarrow\\)\\(LPIPS_e\\uparrow\\)\\(CLIP_p\\uparrow\\)\\(LPIPS_p\\downarrow\\)\\(CLIP_d\\uparrow\\)\\(LPIPS_d\\uparrow\\)\\(CLIP_e\\downarrow\\)\\(LPIPS_e\\uparrow\\)\\(CLIP_p\\uparrow\\)\\(LPIPS_p\\downarrow\\)\\(CLIP_d\\uparrow\\)\\(LPIPS_d\\uparrow\\)
SD v1.4 [46]0.3010.0000.3010.0000.0000.0000.3080.0630.3080.0630.0000.000
ESD [14]0.2270.3310.2760.2550.0490.0760.3060.0420.3070.0410.0010.000
SPM [36]0.2390.2880.2960.1070.0560.1810.3020.0610.3030.0560.0010.005
AdvUnlearn [68]0.1660.4680.2090.4030.0430.0650.3100.0110.3110.0100.0010.001
MACE [35]0.2500.3170.2980.1340.0480.1840.3030.0560.3040.0540.0010.002
RECE [17]0.1760.4260.2570.2700.0810.1560.3000.0660.3030.0540.0030.012
Ours0.1750.3970.2950.1960.1200.2010.2740.1680.3030.0700.0290.097
", + "image_path": "55fab0f233614353e893af27be930843c0a2549cdd1ea1d6c09722319869c73b.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "type": "table", + "bbox": [ + 66, + 207, + 544, + 286 + ], + "blocks": [ + { + "bbox": [ + 87, + 186, + 520, + 197 + ], + "lines": [ + { + "bbox": [ + 87, + 186, + 520, + 197 + ], + "spans": [ + { + "bbox": [ + 87, + 186, + 520, + 197 + ], + "type": "text", + "content": "Table 2. Quantitative comparisons of IP character erasure. The best two results are highlighted with bold and underline." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 66, + 207, + 544, + 286 + ], + "lines": [ + { + "bbox": [ + 66, + 207, + 544, + 286 + ], + "spans": [ + { + "bbox": [ + 66, + 207, + 544, + 286 + ], + "type": "table", + "html": "
ButtocksBreast (F)Genitalia (F)Breast (M)Genitalia (M)FeetArmpitsBellyTotal↓FID30k↓CLIP30k↑
SD v1.4 [46]612043738167024118385014.070.313
ESD [14]15295111037683621113.800.304
SPM [36]1429721241532818614.630.312
AdvUnlearn [68]46208131275215.350.293
MACE [35]724810935613518912.600.294
RECE [17]14207161039453518614.450.309
Ours323496573914.690.308
", + "image_path": "813155dac35cee07efae839d98ba32311dccfed5ee2e05b5829bfe2ea31af116.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "table_body" + } + ], + "index": 4 + }, + { + "type": "table", + "bbox": [ + 66, + 312, + 294, + 383 + ], + "blocks": [ + { + "bbox": [ + 72, + 293, + 536, + 304 + ], + "lines": [ + { + "bbox": [ + 72, + 293, + 536, + 304 + ], + "spans": [ + { + "bbox": [ + 72, + 293, + 536, + 304 + ], + "type": "text", + "content": "Table 3. Exposure detection of generated images in the I2P dataset. The best two results are highlighted with bold and underline." + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 66, + 312, + 294, + 383 + ], + "lines": [ + { + "bbox": [ + 66, + 312, + 294, + 383 + ], + "spans": [ + { + "bbox": [ + 66, + 312, + 294, + 383 + ], + "type": "table", + "html": "
Erase ConceptRelated ConceptOverall
CLIPe↓LPIPSe↑CLIPp↑LPIPSp↓CLIPd↑LPIPSd↑
SD v1.4 [46]0.3100.0000.3100.0000.0000.000
ESD [14]0.2160.4440.2960.2410.0800.202
SPM [36]0.2660.2680.3080.0740.0420.195
AdvUnlearn [68]0.1860.4760.2290.4100.0430.066
MACE [35]0.2280.3660.2980.1960.0690.169
RECE [17]0.2530.3070.3090.0510.0570.255
Ours0.1600.4710.3030.1260.1430.345
", + "image_path": "602c3eb35c2fa8367585234ebb91fea5cdc696316756854c7eb851125168e6ce.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "table_body" + } + ], + "index": 6 + }, + { + "type": "table", + "bbox": [ + 66, + 445, + 294, + 514 + ], + "blocks": [ + { + "bbox": [ + 63, + 390, + 296, + 433 + ], + "lines": [ + { + "bbox": [ + 63, + 390, + 296, + 433 + ], + "spans": [ + { + "bbox": [ + 63, + 390, + 296, + 433 + ], + "type": "text", + "content": "Table 4. Quantitative evaluation of artist style erasure. The best two results are highlighted with bold and underline. Our ACE performs better in terms of thorough erasure and also demonstrates comparable prior preservation." + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 66, + 445, + 294, + 514 + ], + "lines": [ + { + "bbox": [ + 66, + 445, + 294, + 514 + ], + "spans": [ + { + "bbox": [ + 66, + 445, + 294, + 514 + ], + "type": "table", + "html": "
UnlearnDiff↓P4D↓Ring-A-Bell↓Average↓
SD v1.4 [46]100%100%85.21%95.07%
ESD [14]73.05%74.47%38.73%62.08%
SPM [36]91.49%91.49%57.75%80.24%
AdvUnlearn [68]25.53%19.15%4.93%16.54%
MACE [35]64.53%66.67%14.79%48.66%
RECE [17]70.92%65.96%26.76%54.55%
Ours27.65%28.37%2.82%19.61%
", + "image_path": "b672329a1595d809d60e89f72e36ff29a9487106c59f9c4058c673aa41e6ca9e.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "table_body" + } + ], + "index": 8 + }, + { + "bbox": [ + 63, + 522, + 296, + 576 + ], + "lines": [ + { + "bbox": [ + 63, + 522, + 296, + 576 + ], + "spans": [ + { + "bbox": [ + 63, + 522, + 296, + 576 + ], + "type": "text", + "content": "Table 5. Robustness evaluation of nudity erasure. The best two results are highlighted with bold and underline. We report the attack success rates (ASR) of different adversarial methods under various erasure models. Our method achieved the second-best results without using adversarial training." + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 63, + 597, + 295, + 620 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 597, + 295, + 620 + ], + "spans": [ + { + "bbox": [ + 63, + 597, + 295, + 620 + ], + "type": "inline_equation", + "content": "\\mathrm{CLIP}_d = \\mathrm{CLIP}_p - \\mathrm{CLIP}_e" + }, + { + "bbox": [ + 63, + 597, + 295, + 620 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 63, + 597, + 295, + 620 + ], + "type": "inline_equation", + "content": "\\mathrm{LPIPS}_d = \\mathrm{LPIPS}_e - \\mathrm{LPIPS}_p" + }, + { + "bbox": [ + 63, + 597, + 295, + 620 + ], + "type": "text", + "content": ". Higher " + }, + { + "bbox": [ + 63, + 597, + 295, + 620 + ], + "type": "inline_equation", + "content": "\\mathrm{CLIP}_d" + }, + { + "bbox": [ + 63, + 597, + 295, + 620 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 63, + 597, + 295, + 620 + ], + "type": "inline_equation", + "content": "\\mathrm{LPIPS}_d" + }, + { + "bbox": [ + 63, + 597, + 295, + 620 + ], + "type": "text", + "content": " indicate better trade-off." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 63, + 622, + 296, + 704 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 622, + 296, + 704 + ], + "spans": [ + { + "bbox": [ + 63, + 622, + 296, + 704 + ], + "type": "text", + "content": "For generation evaluation, we adopt 33 text templates for each character concept, and five images are generated for each text template using the erased model. To evaluate the effectiveness of editing filtration, we adopt the widely used LEDs++ [4] and MasaCtrl [6] as editing methods. For each concept, we utilize Stable Diffusion 3 [12] to generate 15 images based on 3 text templates as initial images, and" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 316, + 546, + 385 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 316, + 546, + 385 + ], + "spans": [ + { + "bbox": [ + 313, + 316, + 546, + 385 + ], + "type": "text", + "content": "then perform editing on them using erased models. Each image is manipulated using 11 editing texts, such as \"sun-glasses\". Finally, the CLIP score and LPIPS score are calculated based on edited images, concept text and original images. The final results are all reported by averaging 10 characters. More details can be found in Suppl." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 388, + 546, + 571 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 388, + 546, + 571 + ], + "spans": [ + { + "bbox": [ + 313, + 388, + 546, + 571 + ], + "type": "text", + "content": "Experiment Results. Fig. 3 illustrates the comparison of generation results against competing methods. 
One can see that our ACE can successfully erase the target concept (i.e., Donald Duck) while retaining the capability to generate related prior concepts (e.g., Mickey Mouse and Pikachu). In contrast, methods such as ESD, AdvUnlearn, and RECE generate examples with noticeable concept erosion. From Table 2, our ACE demonstrates a comparable CLIP score for both the erased and related concepts. This indicates that our ACE achieves a better trade-off between target concept erasure and prior concept preservation, as further validated by the overall metrics in Table 2 (a). SPM and MACE exhibit inferior performance in thoroughly erasing the target concept. While AdvUnlearn performs well at erasing the target concept, it shows poor performance in prior preservation." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 312, + 575, + 547, + 704 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 575, + 547, + 704 + ], + "spans": [ + { + "bbox": [ + 312, + 575, + 547, + 704 + ], + "type": "text", + "content": "Fig. 4 further presents the comparison of editing results by LEDITS++. As shown in the figure, the competing methods generate the erased concept with the desired attributes after editing the given image, which is undesirable in practice. In contrast, our method can successfully hinder the editing of images containing erased concepts (e.g., Mickey), while keeping the editability of non-target concepts (e.g., Hello Kitty and Elsa). Table 2 (b) reports the quantitative comparisons evaluated with LEDITS++. Our method shows a significant improvement in erasing concepts, demonstrating its editing filtration capability." + } + ] + } + ], + "index": 14 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 743, + 317, + 753 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 743, + 317, + 753 + ], + "spans": [ + { + "bbox": [ + 293, + 743, + 317, + 753 + ], + "type": "text", + "content": "23511" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "bbox": [ + 63, + 83, + 297, + 106 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 83, + 297, + 106 + ], + "spans": [ + { + "bbox": [ + 63, + 83, + 297, + 106 + ], + "type": "text", + "content": "The comparison on MasaCtrl and more results can be found in Suppl." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 63, + 114, + 203, + 126 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 114, + 203, + 126 + ], + "spans": [ + { + "bbox": [ + 63, + 114, + 203, + 126 + ], + "type": "text", + "content": "4.2. Explicit Content Removal" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 63, + 132, + 296, + 363 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 132, + 296, + 363 + ], + "spans": [ + { + "bbox": [ + 63, + 132, + 296, + 363 + ], + "type": "text", + "content": "Experiment Setup. To evaluate our ACE on explicit content removal, we employ \"nudity\" as the target concept to train the model. Following [36], we utilize the I2P dataset [48] to evaluate the performance of explicit content generation. Specifically, we select 856 text prompts with explicit labels, and each prompt generates one image. Then, Nudenet [2] is used to quantify the number of nude body parts in these generated images. Additionally, following [14, 36], we employ the COCO-30k Caption dataset [31] to evaluate the conditional generation capability of erased models. 
Specifically, we generate one image for each caption in COCO-30k, and FID [19] is calculated between generated and natural images. The CLIP score is also calculated between the generated images and the captions to assess the semantic alignment of the generated images. For robustness evaluation, we adopt UnlearnDiff [69], P4D [10], and Ring-A-Bell [53] as adversarial tools to calculate the attack success rate (ASR). Adversarial attacks were conducted on 142 sensitive texts provided by UnlearnDiff. More details can be found in Suppl." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 63, + 364, + 296, + 583 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 364, + 296, + 583 + ], + "spans": [ + { + "bbox": [ + 63, + 364, + 296, + 583 + ], + "type": "text", + "content": "Experiment Results. From Table 5, we can see that our method has a lower success rate in adversarial attacks when trained only for \"nudity\", with only AdvUnlearn, which uses adversarial training, performing slightly better than ours. As shown in Fig. 5 and Table 3, our method can effectively erase nudity content and results in fewer exposed parts. In the robustness evaluation, we dynamically attack the erased models using adversarial tools. As shown in Fig. 5, our method demonstrates excellent robustness. To further showcase our method's efficacy in editing filtration, we employ SD-Inpainting [46] as an editing tool to assess the exposure levels of images after different text-guided inpainting processes. In addition to conventional text editing (e.g., bikini), adversarially edited text from MMA-Diffusion [61] is also used for explicit editing. GroundingDINO [34] is used to detect clothing in the images. As shown in Fig. 5, our method successfully prevents inappropriate inpainting of exposed parts in masked areas, making it more practical for real-world applications." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 63, + 584, + 296, + 608 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 584, + 296, + 608 + ], + "spans": [ + { + "bbox": [ + 63, + 584, + 296, + 608 + ], + "type": "text", + "content": "More results for robustness and editing filtration evaluation can be found in Suppl." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 63, + 616, + 188, + 628 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 616, + 188, + 628 + ], + "spans": [ + { + "bbox": [ + 63, + 616, + 188, + 628 + ], + "type": "text", + "content": "4.3. Artistic Style Removal" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 63, + 634, + 296, + 704 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 634, + 296, + 704 + ], + "spans": [ + { + "bbox": [ + 63, + 634, + 296, + 704 + ], + "type": "text", + "content": "Experiment Setup. To validate the performance of our model in unlearning styles, we choose ten representative artistic styles, including Leonardo da Vinci, Pablo Picasso, Michelangelo, Van Gogh, Salvador Dali, Claude Monet, Andy Warhol, Jackson Pollock, Frida Kahlo, and Georgia O'Keeffe. The evaluation process and metrics are similar" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 313, + 83, + 476, + 94 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 83, + 476, + 94 + ], + "spans": [ + { + "bbox": [ + 313, + 83, + 476, + 94 + ], + "type": "text", + "content": "to the IP character removal (Sec. 4.1)." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 95, + 547, + 154 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 95, + 547, + 154 + ], + "spans": [ + { + "bbox": [ + 313, + 95, + 547, + 154 + ], + "type": "text", + "content": "Experiment Results. Fig. 6 illustrates the results of erasing artistic styles. As shown in the figure, our method can erase the style of Van Gogh and Andy Warhol from the T2I model, while generating other styles faithfully. From Table 4, our method achieves better " + }, + { + "bbox": [ + 313, + 95, + 547, + 154 + ], + "type": "inline_equation", + "content": "\\mathrm{CLIP}_e" + }, + { + "bbox": [ + 313, + 95, + 547, + 154 + ], + "type": "text", + "content": " on the erased concept." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 167, + 405, + 179 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 167, + 405, + 179 + ], + "spans": [ + { + "bbox": [ + 313, + 167, + 405, + 179 + ], + "type": "text", + "content": "4.4. Ablation Study" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 185, + 547, + 451 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 185, + 547, + 451 + ], + "spans": [ + { + "bbox": [ + 313, + 185, + 547, + 451 + ], + "type": "text", + "content": "We further conduct the ablation study on the IP character erasure to evaluate the effectiveness of each component proposed in our ACE. Specifically, it contains the following variants: (1) Baseline: by only adopting the ESD loss to finetune the model. (2) Baseline + Unc: by employing unconditional erasure guidance alignment with the ESD loss to finetune the model. (3) Baseline + Unc + " + }, + { + "bbox": [ + 313, + 185, + 547, + 451 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{\\mathrm{Cons}}" + }, + { + "bbox": [ + 313, + 185, + 547, + 451 + ], + "type": "text", + "content": ": by adopting the ESD loss, unconditional erasure guidance alignment, and " + }, + { + "bbox": [ + 313, + 185, + 547, + 451 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{\\mathrm{Cons}}" + }, + { + "bbox": [ + 313, + 185, + 547, + 451 + ], + "type": "text", + "content": " to finetune the model. (4) Our method without ESD: Ours w/o " + }, + { + "bbox": [ + 313, + 185, + 547, + 451 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{\\mathrm{ESD}}" + }, + { + "bbox": [ + 313, + 185, + 547, + 451 + ], + "type": "text", + "content": " is also effective in concept erasure and editing filtration, and performs better than ESD, indicating that our PG-UEG plays a crucial role in editing filtration. (5) Our full method: by incorporating the ESD loss, prior-guided unconditional erasure guidance alignment, and " + }, + { + "bbox": [ + 313, + 185, + 547, + 451 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{\\mathrm{Cons}}" + }, + { + "bbox": [ + 313, + 185, + 547, + 451 + ], + "type": "text", + "content": " together. From Table 1, we can see that: (i) Introducing unconditional erasure guidance improves the model's editing filtration performance, indicating its effectiveness in preventing unwanted edits. 
(ii) Using both unconditional erasure guidance and " + }, + { + "bbox": [ + 313, + 185, + 547, + 451 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{\\mathrm{Cons}}" + }, + { + "bbox": [ + 313, + 185, + 547, + 451 + ], + "type": "text", + "content": " together leads to significant improvements in concept erasure and editing filtration performance, although it compromises the generation of related prior concepts. (iii) " + }, + { + "bbox": [ + 313, + 185, + 547, + 451 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{\\mathrm{PUnc}}" + }, + { + "bbox": [ + 313, + 185, + 547, + 451 + ], + "type": "text", + "content": " enhances prior preservation without affecting editing filtration." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 325, + 453, + 499, + 464 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 325, + 453, + 499, + 464 + ], + "spans": [ + { + "bbox": [ + 325, + 453, + 499, + 464 + ], + "type": "text", + "content": "More ablation results are provided in Suppl." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 475, + 384, + 487 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 475, + 384, + 487 + ], + "spans": [ + { + "bbox": [ + 313, + 475, + 384, + 487 + ], + "type": "text", + "content": "5. Conclusion" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 493, + 547, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 493, + 547, + 666 + ], + "spans": [ + { + "bbox": [ + 313, + 493, + 547, + 666 + ], + "type": "text", + "content": "In this paper, we investigate the potential risks of unsafe content creation through image editing, and propose an Anti-Editing Concept Erasure (ACE) method to prevent the production of such content during both generation and editing. In addition to the conditional erasure guidance used by existing methods, we further propose an unconditional noise erasure technique to enhance anti-editing concept erasure. This guidance steers the noise prediction away from the target concept, thereby effectively preventing the production of images containing the target concept. Moreover, a concept preservation mechanism is introduced to maintain the generation prior of non-target concepts. Experiments demonstrate that our ACE can successfully erase specific concepts and exhibits superior filtration capabilities during both generation and editing compared to existing methods." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 668, + 547, + 703 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 668, + 547, + 703 + ], + "spans": [ + { + "bbox": [ + 313, + 668, + 547, + 703 + ], + "type": "text", + "content": "Acknowledgement. The work was supported by the National Key R&D Program of China under Grant No. 2022YFA1004100." 
+ } + ] + } + ], + "index": 14 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 743, + 318, + 753 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 743, + 318, + 753 + ], + "spans": [ + { + "bbox": [ + 293, + 743, + 318, + 753 + ], + "type": "text", + "content": "23512" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "bbox": [ + 64, + 81, + 121, + 94 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 64, + 81, + 121, + 94 + ], + "spans": [ + { + "bbox": [ + 64, + 81, + 121, + 94 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 65, + 101, + 296, + 702 + ], + "type": "list", + "angle": 0, + "index": 13, + "blocks": [ + { + "bbox": [ + 69, + 101, + 296, + 153 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 101, + 296, + 153 + ], + "spans": [ + { + "bbox": [ + 69, + 101, + 296, + 153 + ], + "type": "text", + "content": "[1] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023. 5" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 156, + 296, + 177 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 156, + 296, + 177 + ], + "spans": [ + { + "bbox": [ + 69, + 156, + 296, + 177 + ], + "type": "text", + "content": "[2] P Bedapudi. Nudenet: Neural nets for nudity classification, detection and selective censoring, 2019. 8" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 69, + 178, + 296, + 231 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 178, + 296, + 231 + ], + "spans": [ + { + "bbox": [ + 69, + 178, + 296, + 231 + ], + "type": "text", + "content": "[3] Manuel Brack, Felix Friedrich, Dominik Hintersdorf, Lukas Struppek, Patrick Schramowski, and Kristian Kersting. Sega: Instructing text-to-image models using semantic guidance. Advances in Neural Information Processing Systems, 36: 25365-25389, 2023. 2" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 69, + 233, + 296, + 297 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 233, + 296, + 297 + ], + "spans": [ + { + "bbox": [ + 69, + 233, + 296, + 297 + ], + "type": "text", + "content": "[4] Manuel Brack, Felix Friedrich, Katharina Kornmeier, Linoy Tsaban, Patrick Schramowski, Kristian Kersting, and Apolinário Passos. Ledits++: Limitless image editing using text-to-image models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8861-8870, 2024. 1, 2, 4, 7" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 69, + 299, + 296, + 351 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 299, + 296, + 351 + ], + "spans": [ + { + "bbox": [ + 69, + 299, + 296, + 351 + ], + "type": "text", + "content": "[5] Tim Brooks, Aleksander Holynski, and Alexei A Efros. Instructpix2pix: Learning to follow image editing instructions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18392-18402, 2023. 
2" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 69, + 354, + 296, + 416 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 354, + 296, + 416 + ], + "spans": [ + { + "bbox": [ + 69, + 354, + 296, + 416 + ], + "type": "text", + "content": "[6] Mingdeng Cao, Xintao Wang, Zhongang Qi, Ying Shan, Xiaohu Qie, and Yinqiang Zheng. Masactrl: Tuning-free mutual self-attention control for consistent image synthesis and editing. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 22560-22570, 2023. 2, 7, 3" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 69, + 418, + 296, + 472 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 418, + 296, + 472 + ], + "spans": [ + { + "bbox": [ + 69, + 418, + 296, + 472 + ], + "type": "text", + "content": "[7] Huiwen Chang, Han Zhang, Jarred Barber, AJ Maschinot, Jose Lezama, Lu Jiang, Ming-Hsuan Yang, Kevin Murphy, William T Freeman, Michael Rubinstein, et al. Muse: Text-to-image generation via masked generative transformers. arXiv preprint arXiv:2301.00704, 2023. 2" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 69, + 474, + 296, + 516 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 474, + 296, + 516 + ], + "spans": [ + { + "bbox": [ + 69, + 474, + 296, + 516 + ], + "type": "text", + "content": "[8] Hila Chefer, Yuval Alaluf, Yael Vinker, Lior Wolf, and Daniel Cohen-Or. Attend-and-excite: Attention-based semantic guidance for text-to-image diffusion models. ACM Transactions on Graphics (TOG), 42(4):1-10, 2023. 2" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 69, + 517, + 296, + 559 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 517, + 296, + 559 + ], + "spans": [ + { + "bbox": [ + 69, + 517, + 296, + 559 + ], + "type": "text", + "content": "[9] Die Chen, Zhiwen Li, Mingyuan Fan, Cen Chen, Wenmeng Zhou, and Yaliang Li. Eiup: A training-free approach to erase non-compliant concepts conditioned on implicit unsafe prompts. arXiv preprint arXiv:2408.01014, 2024. 2" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 65, + 562, + 295, + 604 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 562, + 295, + 604 + ], + "spans": [ + { + "bbox": [ + 65, + 562, + 295, + 604 + ], + "type": "text", + "content": "[10] Zhi-Yi Chin, Chieh-Ming Jiang, Ching-Chun Huang, PinYu Chen, and Wei-Chen Chiu. Prompting4debugging: Red-teaming text-to-image diffusion models by finding problematic prompts. arXiv preprint arXiv:2309.06135, 2023. 3, 8" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 65, + 606, + 295, + 638 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 606, + 295, + 638 + ], + "spans": [ + { + "bbox": [ + 65, + 606, + 295, + 638 + ], + "type": "text", + "content": "[11] Anudeep Das, Vasisht Duddu, Rui Zhang, and N Asokan. *Espresso: Robust concept filtering in text-to-image models.* arXiv preprint arXiv:2404.19227, 2024. 2" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 65, + 639, + 296, + 702 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 639, + 296, + 702 + ], + "spans": [ + { + "bbox": [ + 65, + 639, + 296, + 702 + ], + "type": "text", + "content": "[12] Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas Müller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, et al. Scaling rectified flow transformers for high-resolution image synthesis, march 2024. 
URL http://arxiv.org/abs/2403.03206, 2024. 1, 6, 7" + } + ] + } + ], + "index": 12 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 315, + 83, + 547, + 703 + ], + "type": "list", + "angle": 0, + "index": 27, + "blocks": [ + { + "bbox": [ + 315, + 83, + 546, + 125 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 83, + 546, + 125 + ], + "spans": [ + { + "bbox": [ + 315, + 83, + 546, + 125 + ], + "type": "text", + "content": "[13] Kailai Feng, Yabo Zhang, Haodong Yu, Zhilong Ji, Jinfeng Bai, Hongzhi Zhang, and Wangmeng Zuo. Vitaglyph: Vitalizing artistic typography with flexible dual-branch diffusion models. arXiv preprint arXiv:2410.01738, 2024. 1" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 315, + 127, + 547, + 179 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 127, + 547, + 179 + ], + "spans": [ + { + "bbox": [ + 315, + 127, + 547, + 179 + ], + "type": "text", + "content": "[14] Rohit Gandikota, Joanna Materzynska, Jaden Fiotto-Kaufman, and David Bau. Erasing concepts from diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2426-2436, 2023. 2, 3, 4, 6, 7, 8" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 316, + 182, + 547, + 233 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 182, + 547, + 233 + ], + "spans": [ + { + "bbox": [ + 316, + 182, + 547, + 233 + ], + "type": "text", + "content": "[15] Rohit Gandikota, Hadas Orgad, Yonatan Belinkov, Joanna Materzyńska, and David Bau. Unified concept editing in diffusion models. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 5111-5120, 2024. 2" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 316, + 236, + 546, + 277 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 236, + 546, + 277 + ], + "spans": [ + { + "bbox": [ + 316, + 236, + 546, + 277 + ], + "type": "text", + "content": "[16] Hongcheng Gao, Tianyu Pang, Chao Du, Taihang Hu, Zhijie Deng, and Min Lin. Meta-unlearning on diffusion models: Preventing relearning unlearned concepts. arXiv preprint arXiv:2410.12777, 2024. 2" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 316, + 280, + 546, + 321 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 280, + 546, + 321 + ], + "spans": [ + { + "bbox": [ + 316, + 280, + 546, + 321 + ], + "type": "text", + "content": "[17] Chao Gong, Kai Chen, Zhipeng Wei, Jingjing Chen, and Yu-Gang Jiang. Reliable and efficient concept erasure of text-to-image diffusion models. arXiv preprint arXiv:2407.12383, 2024. 2, 6, 7, 4" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 316, + 323, + 546, + 376 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 323, + 546, + 376 + ], + "spans": [ + { + "bbox": [ + 316, + 323, + 546, + 376 + ], + "type": "text", + "content": "[18] Luxi He, Yangsibo Huang, Weijia Shi, Tinghao Xie, Haotian Liu, Yue Wang, Luke Zettlemoyer, Chiyuan Zhang, Danqi Chen, and Peter Henderson. Fantastic copyrighted beasts and how (not) to generate them. arXiv preprint arXiv:2406.14526, 2024. 
2" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 316, + 377, + 546, + 430 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 377, + 546, + 430 + ], + "spans": [ + { + "bbox": [ + 316, + 377, + 546, + 430 + ], + "type": "text", + "content": "[19] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30, 2017. 8, 2" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 315, + 432, + 546, + 453 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 432, + 546, + 453 + ], + "spans": [ + { + "bbox": [ + 315, + 432, + 546, + 453 + ], + "type": "text", + "content": "[20] Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598, 2022. 2, 3, 4" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 315, + 454, + 546, + 505 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 454, + 546, + 505 + ], + "spans": [ + { + "bbox": [ + 315, + 454, + 546, + 505 + ], + "type": "text", + "content": "[21] Seunghoo Hong, Juhun Lee, and Simon S Woo. All but one: Surgical concept erasing with model preservation in text-to-image diffusion models. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 21143-21151, 2024. 2" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 315, + 508, + 546, + 551 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 508, + 546, + 551 + ], + "spans": [ + { + "bbox": [ + 315, + 508, + 546, + 551 + ], + "type": "text", + "content": "[22] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021. 6" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 315, + 552, + 546, + 604 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 552, + 546, + 604 + ], + "spans": [ + { + "bbox": [ + 315, + 552, + 546, + 604 + ], + "type": "text", + "content": "[23] Chi-Pin Huang, Kai-Po Chang, Chung-Ting Tsai, Yung-Hsuan Lai, Fu-En Yang, and Yu-Chiang Frank Wang. Receler: Reliable concept erasing of text-to-image diffusion models via lightweight erasers. arXiv preprint arXiv:2311.17717, 2023. 2" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 315, + 606, + 546, + 648 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 606, + 546, + 648 + ], + "spans": [ + { + "bbox": [ + 315, + 606, + 546, + 648 + ], + "type": "text", + "content": "[24] Sanghyun Kim, Seohyeon Jung, Balhae Kim, Moonseok Choi, Jinwoo Shin, and Juho Lee. Safeguard text-to-image diffusion models with human feedback inversion. arXiv preprint arXiv:2407.21032, 2024." + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 315, + 650, + 547, + 703 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 650, + 547, + 703 + ], + "spans": [ + { + "bbox": [ + 315, + 650, + 547, + 703 + ], + "type": "text", + "content": "[25] Nupur Kumari, Bingliang Zhang, Sheng-Yu Wang, Eli Shechtman, Richard Zhang, and Jun-Yan Zhu. Ablating concepts in text-to-image diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 22691-22702, 2023. 
2" + } + ] + } + ], + "index": 26 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 743, + 317, + 753 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 743, + 317, + 753 + ], + "spans": [ + { + "bbox": [ + 293, + 743, + 317, + 753 + ], + "type": "text", + "content": "23513" + } + ] + } + ], + "index": 28 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 65, + 83, + 296, + 703 + ], + "type": "list", + "angle": 0, + "index": 12, + "blocks": [ + { + "bbox": [ + 65, + 83, + 296, + 137 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 83, + 296, + 137 + ], + "spans": [ + { + "bbox": [ + 65, + 83, + 296, + 137 + ], + "type": "text", + "content": "[26] Nupur Kumari, Bingliang Zhang, Richard Zhang, Eli Shechtman, and Jun-Yan Zhu. Multi-concept customization of text-to-image diffusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1931-1941, 2023. 2" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 65, + 137, + 296, + 178 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 137, + 296, + 178 + ], + "spans": [ + { + "bbox": [ + 65, + 137, + 296, + 178 + ], + "type": "text", + "content": "[27] Fan Li, Zixiao Zhang, Yi Huang, Jianzhuang Liu, Renjing Pei, Bin Shao, and Songcen Xu. Magiceraser: Erasing any objects via semantics-aware control. arXiv preprint arXiv:2410.10207, 2024. 2" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 65, + 180, + 296, + 232 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 180, + 296, + 232 + ], + "spans": [ + { + "bbox": [ + 65, + 180, + 296, + 232 + ], + "type": "text", + "content": "[28] Hang Li, Chengzhi Shen, Philip Torr, Volker Tresp, and Jindong Gu. Self-discovering interpretable diffusion latent directions for responsible text-to-image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12006-12016, 2024. 2" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 65, + 233, + 296, + 275 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 233, + 296, + 275 + ], + "spans": [ + { + "bbox": [ + 65, + 233, + 296, + 275 + ], + "type": "text", + "content": "[29] Jia Li, Lijie Hu, Zhixian He, Jingfeng Zhang, Tianhang Zheng, and Di Wang. Text guided image editing with automatic concept locating and forgetting. arXiv preprint arXiv:2405.19708, 2024." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 65, + 276, + 296, + 328 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 276, + 296, + 328 + ], + "spans": [ + { + "bbox": [ + 65, + 276, + 296, + 328 + ], + "type": "text", + "content": "[30] Senmao Li, Joost van de Weijer, Taihang Hu, Fahad Shahbaz Khan, Qibin Hou, Yaxing Wang, and Jian Yang. Get what you want, not what you don't: Image content suppression for text-to-image diffusion models. arXiv preprint arXiv:2402.05375, 2024. 2" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 65, + 330, + 296, + 393 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 330, + 296, + 393 + ], + "spans": [ + { + "bbox": [ + 65, + 330, + 296, + 393 + ], + "type": "text", + "content": "[31] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dólár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. 
In Computer Vision-ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pages 740-755. Springer, 2014. 8, 2" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 65, + 394, + 296, + 436 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 394, + 296, + 436 + ], + "spans": [ + { + "bbox": [ + 65, + 394, + 296, + 436 + ], + "type": "text", + "content": "[32] Ming Liu, Yuxiang Wei, Xiaohe Wu, Wangmeng Zuo, and Lei Zhang. Survey on leveraging pre-trained generative adversarial networks for image editing and restoration. Science China Information Sciences, 66(5):151101, 2023. 2" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 65, + 437, + 296, + 479 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 437, + 296, + 479 + ], + "spans": [ + { + "bbox": [ + 65, + 437, + 296, + 479 + ], + "type": "text", + "content": "[33] Runtao Liu, Ashkan Khakzar, Jindong Gu, Qifeng Chen, Philip Torr, and Fabio Pizzati. Latent guard: a safety framework for text-to-image generation. In European Conference on Computer Vision, pages 93-109. Springer, 2025. 2" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 65, + 479, + 296, + 531 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 479, + 296, + 531 + ], + "spans": [ + { + "bbox": [ + 65, + 479, + 296, + 531 + ], + "type": "text", + "content": "[34] Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, et al. Grounding dino: Marrying dino with grounded pre-training for open-set object detection. arXiv preprint arXiv:2303.05499, 2023. 8" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 65, + 533, + 296, + 585 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 533, + 296, + 585 + ], + "spans": [ + { + "bbox": [ + 65, + 533, + 296, + 585 + ], + "type": "text", + "content": "[35] Shilin Lu, Zilan Wang, Leyang Li, Yanzhu Liu, and Adams Wai-Kin Kong. Mace: Mass concept erasure in diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6430-6440, 2024. 2, 6, 7, 4" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 65, + 586, + 296, + 649 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 586, + 296, + 649 + ], + "spans": [ + { + "bbox": [ + 65, + 586, + 296, + 649 + ], + "type": "text", + "content": "[36] Mengyao Lyu, Yuhong Yang, Haiwen Hong, Hui Chen, Xuan Jin, Yuan He, Hui Xue, Jungong Han, and Guiguang Ding. One-dimensional adapter to rule them all: Concepts diffusion models and erasing applications. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7559-7568, 2024. 2, 3, 6, 7, 8, 4" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 65, + 650, + 296, + 703 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 650, + 296, + 703 + ], + "spans": [ + { + "bbox": [ + 65, + 650, + 296, + 703 + ], + "type": "text", + "content": "[37] Ron Mokady, Amir Hertz, Kfir Aberman, Yael Pritch, and Daniel Cohen-Or. Null-text inversion for editing real images using guided diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6038–6047, 2023. 
2" + } + ] + } + ], + "index": 11 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 315, + 83, + 546, + 703 + ], + "type": "list", + "angle": 0, + "index": 25, + "blocks": [ + { + "bbox": [ + 315, + 83, + 546, + 137 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 83, + 546, + 137 + ], + "spans": [ + { + "bbox": [ + 315, + 83, + 546, + 137 + ], + "type": "text", + "content": "[38] Chong Mou, Xintao Wang, Liangbin Xie, Yanze Wu, Jian Zhang, Zhongang Qi, and Ying Shan. T2i-adapter: Learning adapters to dig out more controllable ability for text-to-image diffusion models. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 4296-4304, 2024. 2" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 315, + 137, + 546, + 189 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 137, + 546, + 189 + ], + "spans": [ + { + "bbox": [ + 315, + 137, + 546, + 189 + ], + "type": "text", + "content": "[39] Yong-Hyun Park, Sangdoo Yun, Jin-Hwa Kim, Junho Kim, Geonhui Jang, Yonghyun Jeong, Junghyo Jo, and Gayoung Lee. Direct unlearning optimization for robust and safe text-to-image models. arXiv preprint arXiv:2407.21035, 2024. 2" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 315, + 190, + 546, + 232 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 190, + 546, + 232 + ], + "spans": [ + { + "bbox": [ + 315, + 190, + 546, + 232 + ], + "type": "text", + "content": "[40] Gaurav Parmar, Krishna Kumar Singh, Richard Zhang, Yijun Li, Jingwan Lu, and Jun-Yan Zhu. Zero-shot image-to-image translation. In ACM SIGGRAPH 2023 Conference Proceedings, pages 1-11, 2023. 2" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 315, + 233, + 546, + 265 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 233, + 546, + 265 + ], + "spans": [ + { + "bbox": [ + 315, + 233, + 546, + 265 + ], + "type": "text", + "content": "[41] Minh Pham, Kelly O Marshall, Chinmay Hegde, and Niv Cohen. Robust concept erasure using task vectors. arXiv preprint arXiv:2404.03631, 2024. 2" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 315, + 266, + 546, + 318 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 266, + 546, + 318 + ], + "spans": [ + { + "bbox": [ + 315, + 266, + 546, + 318 + ], + "type": "text", + "content": "[42] Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. Sdxl: Improving latent diffusion models for high-resolution image synthesis. arXiv preprint arXiv:2307.01952, 2023. 1" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 315, + 319, + 546, + 371 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 319, + 546, + 371 + ], + "spans": [ + { + "bbox": [ + 315, + 319, + 546, + 371 + ], + "type": "text", + "content": "[43] Samuele Poppi, Tobia Poppi, Federico Cocchi, Marcella Cornia, Lorenzo Baraldi, Rita Cucchiara, et al. Safe-clip: Removing nsfw concepts from vision-and-language models. In Proceedings of the European Conference on Computer Vision, 2024. 
2" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 315, + 373, + 546, + 435 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 373, + 546, + 435 + ], + "spans": [ + { + "bbox": [ + 315, + 373, + 546, + 435 + ], + "type": "text", + "content": "[44] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748-8763. PMLR, 2021. 6" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 315, + 436, + 546, + 468 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 436, + 546, + 468 + ], + "spans": [ + { + "bbox": [ + 315, + 436, + 546, + 468 + ], + "type": "text", + "content": "[45] Javier Rando, Daniel Paleka, David Lindner, Lennart Heim, and Florian Tramèr. Red-teaming the stable diffusion safety filter. arXiv preprint arXiv:2210.04610, 2022. 2" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 315, + 468, + 546, + 521 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 468, + 546, + 521 + ], + "spans": [ + { + "bbox": [ + 315, + 468, + 546, + 521 + ], + "type": "text", + "content": "[46] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684-10695, 2022. 1, 2, 3, 7, 8, 4" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 315, + 522, + 546, + 585 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 522, + 546, + 585 + ], + "spans": [ + { + "bbox": [ + 315, + 522, + 546, + 585 + ], + "type": "text", + "content": "[47] Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, and Kfir Aberman. Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 22500-22510, 2023. 2" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 315, + 586, + 546, + 639 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 586, + 546, + 639 + ], + "spans": [ + { + "bbox": [ + 315, + 586, + 546, + 639 + ], + "type": "text", + "content": "[48] Patrick Schramowski, Manuel Brack, Björn Deiseroth, and Kristian Kersting. Safe latent diffusion: Mitigating inappropriate degeneration in diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22522-22531, 2023. 2, 8" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 315, + 639, + 546, + 703 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 639, + 546, + 703 + ], + "spans": [ + { + "bbox": [ + 315, + 639, + 546, + 703 + ], + "type": "text", + "content": "[49] Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems, 35:25278-25294, 2022. 
1" + } + ] + } + ], + "index": 24 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 743, + 318, + 753 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 743, + 318, + 753 + ], + "spans": [ + { + "bbox": [ + 293, + 743, + 318, + 753 + ], + "type": "text", + "content": "23514" + } + ] + } + ], + "index": 26 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "bbox": [ + 65, + 83, + 296, + 702 + ], + "type": "list", + "angle": 0, + "index": 12, + "blocks": [ + { + "bbox": [ + 65, + 83, + 296, + 135 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 83, + 296, + 135 + ], + "spans": [ + { + "bbox": [ + 65, + 83, + 296, + 135 + ], + "type": "text", + "content": "[50] Jing Shi, Wei Xiong, Zhe Lin, and Hyun Joon Jung. Instantbooth: Personalized text-to-image generation without test-time finetuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8543-8552, 2024. 2" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 65, + 137, + 296, + 179 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 137, + 296, + 179 + ], + "spans": [ + { + "bbox": [ + 65, + 137, + 296, + 179 + ], + "type": "text", + "content": "[51] Zhuan Shi, Jing Yan, Xiaoli Tang, Lingjuan Lyu, and Boi Faltings. Rlcp: A reinforcement learning-based copyright protection method for text-to-image diffusion model. arXiv preprint arXiv:2408.16634, 2024. 2" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 65, + 180, + 296, + 222 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 180, + 296, + 222 + ], + "spans": [ + { + "bbox": [ + 65, + 180, + 296, + 222 + ], + "type": "text", + "content": "[52] Koushik Srivatsan, Fahad Shamshad, Muzammal Naseer, and Karthik Nandakumar. Stereo: Towards adversarially robust concept erasing from text-to-image generation models. arXiv preprint arXiv:2408.16807, 2024. 2" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 65, + 223, + 296, + 274 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 223, + 296, + 274 + ], + "spans": [ + { + "bbox": [ + 65, + 223, + 296, + 274 + ], + "type": "text", + "content": "[53] Yu-Lin Tsai, Chia-Yi Hsu, Chulin Xie, Chih-Hsun Lin, Jia-You Chen, Bo Li, Pin-Yu Chen, Chia-Mu Yu, and Chun-Ying Huang. Ring-a-bell! how reliable are concept removal methods for diffusion models? arXiv preprint arXiv:2310.10012, 2023. 3, 8" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 65, + 276, + 296, + 329 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 276, + 296, + 329 + ], + "spans": [ + { + "bbox": [ + 65, + 276, + 296, + 329 + ], + "type": "text", + "content": "[54] Narek Tumanyan, Michal Geyer, Shai Bagon, and Tali Dekel. Plug-and-play diffusion features for text-driven image-to-image translation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1921-1930, 2023. 2" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 65, + 330, + 296, + 371 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 330, + 296, + 371 + ], + "spans": [ + { + "bbox": [ + 65, + 330, + 296, + 371 + ], + "type": "text", + "content": "[55] Haofan Wang, Matteo Spinelli, Qixun Wang, Xu Bai, Zekui Qin, and Anthony Chen. Instantstyle: Free lunch towards style-preserving in text-to-image generation. arXiv preprint arXiv:2404.02733, 2024. 
1" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 65, + 373, + 296, + 434 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 373, + 296, + 434 + ], + "spans": [ + { + "bbox": [ + 65, + 373, + 296, + 434 + ], + "type": "text", + "content": "[56] Yuxiang Wei, Yabo Zhang, Zhilong Ji, Jinfeng Bai, Lei Zhang, and Wangmeng Zuo. Elite: Encoding visual concepts into textual embeddings for customized text-to-image generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 15943-15953, 2023. 2" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 65, + 436, + 296, + 489 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 436, + 296, + 489 + ], + "spans": [ + { + "bbox": [ + 65, + 436, + 296, + 489 + ], + "type": "text", + "content": "[57] Yuxiang Wei, Zhilong Ji, Jinfeng Bai, Hongzhi Zhang, Lei Zhang, and Wangmeng Zuo. Masterweaver: Taming editability and face identity for personalized text-to-image generation. In European Conference on Computer Vision, pages 252-271. Springer, 2025. 2" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 65, + 490, + 296, + 532 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 490, + 296, + 532 + ], + "spans": [ + { + "bbox": [ + 65, + 490, + 296, + 532 + ], + "type": "text", + "content": "[58] Yuxiang Wei, Yiheng Zheng, Yabo Zhang, Ming Liu, Zhi-long Ji, Lei Zhang, and Wangmeng Zuo. Personalized image generation with deep generative models: A decade survey. arXiv preprint arXiv:2502.13081, 2025. 1" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 65, + 533, + 296, + 585 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 533, + 296, + 585 + ], + "spans": [ + { + "bbox": [ + 65, + 533, + 296, + 585 + ], + "type": "text", + "content": "[59] Yongliang Wu, Shiji Zhou, Mingzhuo Yang, Lianzhe Wang, Wenbo Zhu, Heng Chang, Xiao Zhou, and Xu Yang. Unlearning concepts in diffusion model via concept domain correction and concept preserving gradient. arXiv preprint arXiv:2405.15304, 2024. 2" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 65, + 586, + 296, + 648 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 586, + 296, + 648 + ], + "spans": [ + { + "bbox": [ + 65, + 586, + 296, + 648 + ], + "type": "text", + "content": "[60] Binxin Yang, Shuyang Gu, Bo Zhang, Ting Zhang, Xuejin Chen, Xiaoyan Sun, Dong Chen, and Fang Wen. Paint by example: Exemplar-based image editing with diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18381-18391, 2023. 2" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 65, + 650, + 296, + 702 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 650, + 296, + 702 + ], + "spans": [ + { + "bbox": [ + 65, + 650, + 296, + 702 + ], + "type": "text", + "content": "[61] Yijun Yang, Ruiyuan Gao, Xiaosen Wang, Tsung-Yi Ho, Nan Xu, and Qiang Xu. Mma-diffusion: Multimodal attack on diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7737-7746, 2024. 
8" + } + ] + } + ], + "index": 11 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 315, + 83, + 546, + 559 + ], + "type": "list", + "angle": 0, + "index": 23, + "blocks": [ + { + "bbox": [ + 315, + 83, + 546, + 125 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 83, + 546, + 125 + ], + "spans": [ + { + "bbox": [ + 315, + 83, + 546, + 125 + ], + "type": "text", + "content": "[62] Yijun Yang, Ruiyuan Gao, Xiao Yang, Jianyuan Zhong, and Qiang Xu. Guardt2i: Defending text-to-image models from adversarial prompts. arXiv preprint arXiv:2403.01446, 2024. 2" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 315, + 127, + 546, + 168 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 127, + 546, + 168 + ], + "spans": [ + { + "bbox": [ + 315, + 127, + 546, + 168 + ], + "type": "text", + "content": "[63] Jaehong Yoon, Shoubin Yu, Vaidehi Patil, Huaxiu Yao, and Mohit Bansal. Safree: Training-free and adaptive guard for safe text-to-image and video generation. arXiv preprint arXiv:2410.12761, 2024." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 315, + 171, + 546, + 223 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 171, + 546, + 223 + ], + "spans": [ + { + "bbox": [ + 315, + 171, + 546, + 223 + ], + "type": "text", + "content": "[64] Gong Zhang, Kai Wang, Xingqian Xu, Zhangyang Wang, and Humphrey Shi. Forget-me-not: Learning to forget in text-to-image diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1755-1764, 2024. 2" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 315, + 224, + 546, + 266 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 224, + 546, + 266 + ], + "spans": [ + { + "bbox": [ + 315, + 224, + 546, + 266 + ], + "type": "text", + "content": "[65] Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3836-3847, 2023. 2" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 315, + 268, + 546, + 320 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 268, + 546, + 320 + ], + "spans": [ + { + "bbox": [ + 315, + 268, + 546, + 320 + ], + "type": "text", + "content": "[66] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 586-595, 2018. 6" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 315, + 322, + 546, + 364 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 322, + 546, + 364 + ], + "spans": [ + { + "bbox": [ + 315, + 322, + 546, + 364 + ], + "type": "text", + "content": "[67] Yabo Zhang, Yuxiang Wei, Dongsheng Jiang, Xiaopeng Zhang, Wangmeng Zuo, and Qi Tian. Controlvideo: Training-free controllable text-to-video generation. arXiv preprint arXiv:2305.13077, 2023. 1" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 315, + 365, + 546, + 418 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 365, + 546, + 418 + ], + "spans": [ + { + "bbox": [ + 315, + 365, + 546, + 418 + ], + "type": "text", + "content": "[68] Yimeng Zhang, Xin Chen, Jinghan Jia, Yihua Zhang, Chongyu Fan, Jiancheng Liu, Mingyi Hong, Ke Ding, and Sijia Liu. 
Defensive unlearning with adversarial training for robust concept erasure in diffusion models. arXiv preprint arXiv:2405.15234, 2024. 2, 6, 7, 4" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 315, + 419, + 546, + 482 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 419, + 546, + 482 + ], + "spans": [ + { + "bbox": [ + 315, + 419, + 546, + 482 + ], + "type": "text", + "content": "[69] Yimeng Zhang, Jinghan Jia, Xin Chen, Aochuan Chen, Yihua Zhang, Jiancheng Liu, Ke Ding, and Sijia Liu. To generate or not? safety-driven unlearned diffusion models are still easy to generate unsafe images... for now. In European Conference on Computer Vision, pages 385-403. Springer, 2025. 3, 8" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 315, + 483, + 546, + 525 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 483, + 546, + 525 + ], + "spans": [ + { + "bbox": [ + 315, + 483, + 546, + 525 + ], + "type": "text", + "content": "[70] Yabo Zhang, Xinpeng Zhou, Yihan Zeng, Hang Xu, Hui Li, and Wangmeng Zuo. Framepainter: Endowing interactive image editing with video diffusion priors. arXiv preprint arXiv:2501.08225, 2025. 2" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 315, + 527, + 546, + 559 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 527, + 546, + 559 + ], + "spans": [ + { + "bbox": [ + 315, + 527, + 546, + 559 + ], + "type": "text", + "content": "[71] Mengnan Zhao, Lihe Zhang, Tianhang Zheng, Yuqiu Kong, and Baocai Yin. Separable multi-concept erasure from diffusion models. arXiv preprint arXiv:2402.05947, 2024. 2" + } + ] + } + ], + "index": 22 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 743, + 317, + 753 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 743, + 317, + 753 + ], + "spans": [ + { + "bbox": [ + 293, + 743, + 317, + 753 + ], + "type": "text", + "content": "23515" + } + ] + } + ], + "index": 24 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 10 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2025/ACL_ Activating Capability of Linear Attention for Image Restoration/b3cdf41c-47b4-40a8-888d-383f4742ab99_content_list.json b/2025/ACL_ Activating Capability of Linear Attention for Image Restoration/b3cdf41c-47b4-40a8-888d-383f4742ab99_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..7c0db3a8b30c590535232bbde102a18e2bb57bb3 --- /dev/null +++ b/2025/ACL_ Activating Capability of Linear Attention for Image Restoration/b3cdf41c-47b4-40a8-888d-383f4742ab99_content_list.json @@ -0,0 +1,1844 @@ +[ + { + "type": "text", + "text": "ACL: Activating Capability of Linear Attention for Image Restoration", + "text_level": 1, + "bbox": [ + 143, + 130, + 854, + 152 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Yubin Gu $^{1}$ , Yuan Meng $^{1}$ , Jiayi Ji $^{1,2}$ , Xiaoshuai Sun $^{1*}$", + "bbox": [ + 292, + 179, + 709, + 198 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University, 361005, P.R. 
China", + "bbox": [ + 194, + 198, + 800, + 233 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "$^{2}$ National University of Singapore, Singapore", + "bbox": [ + 315, + 233, + 679, + 252 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "{guyubin,aprilmyy}@stu.xmu.edu.cn,jjyxmu@gmail.com,xssun@xmu.edu.cn", + "bbox": [ + 189, + 253, + 799, + 268 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 246, + 304, + 326, + 319 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Image restoration (IR), a key area in computer vision, has entered a new era with deep learning. Recent research has shifted toward Selective State Space Models (Mamba) to overcome CNNs' limited receptive fields and Transformers' computational inefficiency. However, due to Mamba's inherent one-dimensional scanning limitations, recent approaches have introduced multi-directional scanning to bolster inter-sequence correlations. Despite these enhancements, these methods still struggle with managing local pixel correlations across various directions. Moreover, the recursive computation in Mamba's SSM leads to reduced efficiency. To resolve these issues, we exploit the mathematical congruences between linear attention and SSM within Mamba to propose a novel model, ACL, which leverages news designs to Activate the Capability of Linear attention for IR. ACL integrates linear attention blocks instead of SSM within Mamba, serving as the core component of encoders/decoders, and aims to preserve a global perspective while boosting computational efficiency. Furthermore, we have designed a simple yet robust local enhancement module with multi-scale dilated convolutions to extract both coarse and fine features to improve local detail recovery. Experimental results confirm that our ACL model excels in classical IR tasks such as de-raining and de-blurring, while maintaining relatively low parameter counts and FLOPs1.", + "bbox": [ + 89, + 335, + 485, + 714 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1. Introduction", + "text_level": 1, + "bbox": [ + 89, + 732, + 220, + 750 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "In the field of image processing, restoring degraded images to clarity is a crucial technology. High-quality image restoration (IR) methods play a key role in ensuring the smooth progression of downstream vision tasks. Vanilla IR methods rely primarily on manually designed feature extraction, but often perform poorly when faced with complicated degradation factors in reality.", + "bbox": [ + 89, + 758, + 483, + 864 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/87e0ea1dca4d6347f547973ef4454a772c0dd4e8960c638c717732cde503c547.jpg", + "image_caption": [ + "Figure 1. Comparison of our method's core block design with those of recent mainstream approaches." + ], + "image_footnote": [], + "bbox": [ + 522, + 301, + 903, + 459 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Over the past decade, with the rapid development of deep learning, many fields have been propelled forward, such as image segmentation and generation [11, 12, 14, 17, 48], and of course, IR as well. The initial IR models were primarily based on CNN designs [6, 41], whose translational invariance and high inferential efficiency have made them widely applied, as depicted in the core structure in Fig. 1(a). 
However, these CNN-based models face challenges in processing global features and often require stacking additional network layers to compensate for this deficiency. With the rise of attention mechanisms [34], particularly the application of the Transformer, which boasts exceptional global feature modeling capabilities, an increasing number of researchers have shifted toward Transformer-based designs [23, 36, 44], achieving remarkable restoration results, as shown in the core structure in Fig. 1 (b). Although the Transformer broadens the global perspective, its softmax attention mechanism's quadratic computational complexity significantly reduces efficiency during inference.", + "bbox": [ + 511, + 534, + 906, + 821 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Recently, state-space models from the field of control science, especially the Mamba model [8], have been introduced into IR by related methods [13, 30], demonstrating superior training and inference efficiency on sequential data. These models have even outperformed some Transformer-", + "bbox": [ + 511, + 824, + 908, + 902 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "CVF", + "bbox": [ + 106, + 2, + 181, + 42 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore.", + "bbox": [ + 236, + 0, + 810, + 46 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "*Corresponding Author: Xiaoshuai Sun", + "bbox": [ + 107, + 875, + 321, + 887 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "$^{1}$ https://github.com/ClimBin/ACLNet", + "bbox": [ + 109, + 887, + 310, + 898 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "17913", + "bbox": [ + 480, + 944, + 517, + 955 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "based IR methods in terms of performance. Mamba was initially designed for one-dimensional sequence modeling, so its direct adaptation to two-dimensional images poses challenges. The improved SSM module in Mamba, a one-dimensional unidirectional scanning, recursive computing structure, faces major issues when applied to images, including: 1) converting image data into a one-dimensional sequence increases the distance between adjacent pixels, leading to the loss of local relationships [30]; 2) unidirectional modeling neglects the spatial relationships of pixels in multiple directions. Recently, MambaIR [13] and VMambaIR [30] have adopted multi-directional scanning methods to mitigate these problems, as illustrated in the main structure in Fig. 1 (c), but these methods have yet to effectively and directly establish multi-directional connections in the spatial dimension of pixels.", + "bbox": [ + 89, + 90, + 480, + 332 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In response to the limitations of the Mamba model, we propose a method that aims to leverage the advantages of the Mamba structure while addressing its unidirectional modeling constraint. A direct approach to overcoming this limitation is to introduce a long-range dependency attention mechanism, such as Linear Attention (LA) or Softmax Attention (SA). Although SA generally outperforms LA in traditional vision tasks, its computational complexity is significantly higher than that of LA. 
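To make this cost gap concrete, the two attention forms can be sketched side by side. The following is an illustrative sketch only (PyTorch is assumed, and the ELU-based kernel feature map and the omitted normalization are common but assumed simplifications; this is not the exact formulation used later in Sec. 3.3):

import torch
import torch.nn.functional as F

def softmax_attention(Q, K, V):
    # Q, K, V: (N, d). Materializes an N x N attention matrix,
    # so time and memory scale as O(N^2 * d).
    scores = Q @ K.transpose(-2, -1) / (Q.shape[-1] ** 0.5)
    return torch.softmax(scores, dim=-1) @ V

def linear_attention(Q, K, V):
    # Reassociating (Q K^T) V as Q (K^T V) never forms the N x N matrix,
    # so time scales as O(N * d^2), linear in the token count N.
    Q, K = F.elu(Q) + 1, F.elu(K) + 1  # kernel feature map (assumed choice)
    return Q @ (K.transpose(-2, -1) @ V)

The only change is the association order of the matrix products, which is exactly what removes the quadratic term in the token count.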
Upon further analysis, we observe that LA and the State-Space Model (SSM) in Mamba share a highly similar mathematical formulation, which has also been analyzed in depth in [15]. Inspired by this insight, we replace the SSM module in Mamba with LA layers, thereby designing a novel foundational IR architecture, as illustrated in Fig. 1 (d), and propose a new IR model, ACL.", + "bbox": [ + 89, + 337, + 482, + 578 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Specifically, the proposed ACL consists of two main components. The first is the Mamba-based module with LA at its core (denoted as LAMA), which serves as the central part of the encoder/decoder and establishes global feature dependencies with linear computational complexity. Additionally, optimizing local features is equally essential. A recent approach involves partitioning the global features in the spatial domain into small windows and optimizing attention within these windows to enhance local feature modeling. While this approach has shown some effectiveness, it incurs high computational costs. In contrast, we propose a simple yet efficient multi-scale dilated convolution module (MDC), which captures local features at varying granularities by employing different dilation factors, thus improving detail restoration. By combining the Mamba structure with LA, ACL activates the capability of LA, achieving advanced performance on two classic IR tasks: deblurring and deraining. Compared to Transformer-based models [32, 44], ACL not only significantly reduces computational cost and parameter count, but also achieves superior or comparable performance.", + "bbox": [ + 89, + 584, + 482, + 900 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Our contributions are summarized as follows:", + "bbox": [ + 532, + 90, + 834, + 104 + ], + "page_idx": 1 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- We explore an alternative to CNN- and Transformer-based architectures for Image Restoration, providing global receptive fields while maintaining computational efficiency. Specifically, we propose ACL, which upgrades the SSM in Mamba with the LA structure, enabling the model to perform global multi-directional scanning.", + "- The proposed ACL consists of two modules: LAMA and MDC. LAMA captures global feature dependencies, and MDC captures local features at varying granularities. Both modules enhance detail restoration.", + "- The proposed ACL demonstrates advanced performance in de-raining and de-blurring tasks, proving its advantages in terms of parameter count and computational cost." + ], + "bbox": [ + 511, + 107, + 903, + 303 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2. Related Work", + "text_level": 1, + "bbox": [ + 513, + 319, + 653, + 335 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2.1. Image Restoration", + "text_level": 1, + "bbox": [ + 513, + 344, + 699, + 359 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Image restoration (IR) technology provides clear visual data essential for numerous advanced downstream visual tasks. In recent years, the advent of deep learning has eclipsed traditional methods that rely on manually designed features, shifting the paradigm toward deep-learning-based approaches [9, 13, 20, 22, 45]. Initially, models were predominantly designed using CNNs, achieving significant advancements through sophisticated network designs that incorporated encoder-decoder patterns [5], dense connections, and residual connections [10]. 
However, CNN-based techniques face challenges in establishing global feature dependencies. With the evolution of Transformers in both Natural Language Processing (NLP) and Computer Vision (CV), many researchers have pivoted towards Transformer-based IR methods [23, 38, 44], leveraging their robust global receptive capabilities and marking substantial progress. Despite these improvements, the computational burden of calculating SA remains a drawback. Recently, the Mamba model [8], known for its efficient training and inference capabilities, has introduced a new potential paradigm in the IR field. Methods such as VMambaIR [30] and MambaIR [13], which employ multidirectional scanning strategies, aim to address the issues of unidirectional modeling inherent in Mamba's SSM blocks. Nonetheless, these methods have yet to establish multidirectional pixel connections directly. Thus, exploring how to utilize Mamba's superior design to establish comprehensive multidirectional global pixel correlations remains a promising direction.", + "bbox": [ + 511, + 367, + 903, + 806 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2.2. Attention Mechanisms", + "text_level": 1, + "bbox": [ + 513, + 816, + 723, + 832 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Originally applied in the NLP field, attention mechanisms [34], particularly the Transformer with its SA mechanism, have achieved remarkable success. Subsequently, these mechanisms have been successfully adapted to the", + "bbox": [ + 511, + 839, + 903, + 900 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "17914", + "bbox": [ + 480, + 944, + 519, + 955 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "visual domain. In recent years, numerous IR methods based on the Transformer structure have emerged [1, 3, 23, 24], primarily utilizing various design modules to leverage self-attention mechanisms and enhance model efficiency. SwinIR [23] introduces shifted-window attention to boost performance. IPT [1] enhances local detail restoration by dividing images into multiple small windows and processing each window's features independently. However, adopting SA to establish global or local feature dependencies inevitably leads to quadratic computational complexity. Linear attention, which operates with linear complexity, has yet to match the performance of SA in classical vision tasks, thus its application remains limited. In this paper, we introduce a new IR model, ACL, activating the potential of linear attention.", + "bbox": [ + 89, + 90, + 483, + 316 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "2.3. State Space Model", + "text_level": 1, + "bbox": [ + 89, + 329, + 269, + 345 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "The Mamba model, a newly proposed state space model (SSM), effectively facilitates sequence modeling with linear complexity [8]. Owing to its efficient training and inference speeds, many researchers have adapted it for visual tasks [16, 25, 30, 39, 50]. For instance, LocalMamba [16] employs a cross-scanning module to scan image spaces. VMambaIR [30] enhances multidirectional relationships between pixels by scanning images from six directions. These methods focus on overcoming the limitations of unidirectional modeling in SSMs by proposing various scanning techniques. However, the features captured in each direction remain unidirectionally connected, and this may incur redundant computational costs. 
The Mamba model is efficient due to its structural design, yet its unidirectional scanning is not entirely suitable for images. Therefore, we combine linear attention with the Mamba structure to achieve a new balance between computational efficiency and restoration effectiveness.", + "bbox": [ + 89, + 352, + 483, + 623 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3. Methods", + "text_level": 1, + "bbox": [ + 89, + 638, + 189, + 655 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.1. Preliminary Analysis", + "text_level": 1, + "bbox": [ + 89, + 665, + 290, + 681 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "In recent years, very few IR models based on linear attention have been proposed, as they tend to perform slightly worse than SA in classical vision tasks. It is therefore crucial to leverage the computational advantages of Linear Attention (LA) and to further explore its potential in visual tasks.", + "bbox": [ + 89, + 686, + 482, + 763 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "The improved SSM in Mamba shows significant potential in sequence processing. In fact, linear attention has a similar expression to SSM [15]. In linear attention, if the attention of the $i$-th token is restricted to depend only on the previous $i$ tokens, it is expressed as follows:", + "bbox": [ + 89, + 763, + 483, + 840 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {A} _ {i} = \\frac {\\mathbf {Q} _ {i} \\left(\\sum_ {j = 1} ^ {i} \\mathbf {K} _ {j} ^ {\\top} \\mathbf {V} _ {j}\\right)}{\\mathbf {Q} _ {i} \\left(\\sum_ {j = 1} ^ {i} \\mathbf {K} _ {j} ^ {\\top}\\right)} = \\frac {\\left(\\mathbf {Q} _ {i} \\mathbf {D} _ {i}\\right)}{\\left(\\mathbf {Q} _ {i} \\mathbf {U} _ {i}\\right)}, \\tag {1}\n$$\n", + "text_format": "latex", + "bbox": [ + 153, + 852, + 483, + 904 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "where $\mathbf{D}_i = \sum_{j=1}^i \mathbf{K}_j^\top \mathbf{V}_j$ , $\mathbf{U}_i = \sum_{j=1}^i \mathbf{K}_j^\top$ . Therefore, the recursive expressions are:", + "bbox": [ + 511, + 88, + 903, + 121 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {U} _ {i} = \\mathbf {U} _ {i - 1} + \\mathbf {K} _ {i} ^ {\\top}, \\tag {2}\n$$\n", + "text_format": "latex", + "bbox": [ + 642, + 132, + 906, + 150 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {D} _ {i} = \\mathbf {D} _ {i - 1} + \\mathbf {K} _ {i} ^ {\\top} \\mathbf {V} _ {i}, \\quad \\mathbf {A} _ {i} = \\frac {\\left(\\mathbf {Q} _ {i} \\mathbf {D} _ {i}\\right)}{\\left(\\mathbf {Q} _ {i} \\mathbf {U} _ {i}\\right)}. \\tag {3}\n$$\n", + "text_format": "latex", + "bbox": [ + 571, + 160, + 906, + 194 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "To enable applications in deep neural networks, it is necessary to discretize the initial SSM using zero-order hold [8]. This involves discretizing the continuous parameters $\mathbf{A}$ and $\mathbf{B}$ into $\bar{\mathbf{A}}$ and $\bar{\mathbf{B}}$ through the time scale parameter $\Delta$ . 
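The recursion in Eqs. (2)-(3) can be evaluated in a single left-to-right pass over the tokens. The sketch below is an illustrative NumPy rendering of exactly those formulas (variable names are ours); as Eq. (5) below shows, Mamba's selective SSM differs mainly by also multiplying the running state by an input-dependent transition $\tilde{\mathbf{A}}_i$ at each step:

import numpy as np

def causal_linear_attention(Q, K, V):
    # Q, K: (N, d); V: (N, d_v). Implements Eqs. (2)-(3):
    #   U_i = U_{i-1} + K_i^T,  D_i = D_{i-1} + K_i^T V_i,
    #   A_i = (Q_i D_i) / (Q_i U_i).
    N, d = Q.shape
    d_v = V.shape[1]
    U = np.zeros((d, 1))    # running sum of K_j^T
    D = np.zeros((d, d_v))  # running sum of K_j^T V_j
    A = np.empty((N, d_v))
    for i in range(N):
        U += K[i][:, None]
        D += K[i][:, None] * V[i][None, :]  # outer product K_i^T V_i
        A[i] = (Q[i] @ D) / (Q[i] @ U)      # denominator is a scalar
    return A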
The discretized expressions are as follows:", + "bbox": [ + 511, + 198, + 906, + 273 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {h} _ {i} = \\bar {\\mathbf {A}} \\mathbf {h} _ {i - 1} + \\bar {\\mathbf {B}} \\mathbf {x} _ {i}, \\quad \\mathbf {y} _ {i} = \\mathbf {C} \\mathbf {h} _ {i} + \\mathbf {D} \\mathbf {x} _ {i}, \\tag {4}\n$$\n", + "text_format": "latex", + "bbox": [ + 566, + 286, + 906, + 303 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "where $\bar{\mathbf{A}} = \exp (\Delta \mathbf{A})$ and $\bar{\mathbf{B}} = (\Delta \mathbf{A})^{-1}(\exp (\Delta \mathbf{A}) - I)\Delta \mathbf{B}\approx \Delta \mathbf{B}$ . For simplicity, we have omitted the feature dimension information of each part in the formulas.", + "bbox": [ + 511, + 313, + 903, + 359 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Further, Mamba enhances the discrete SSM by making $\bar{\mathbf{A}}$ , $\bar{\mathbf{B}}$ , and $\Delta$ dependent on the input $\mathbf{x}_i$ , breaking away from the assumption of input-independent models. Additionally, since $\bar{\mathbf{A}}_i$ in Mamba is a diagonal matrix, we have $\tilde{\mathbf{A}}_i = \mathrm{diag}(\bar{\mathbf{A}}_i)$ . The expression is thus transformed into:", + "bbox": [ + 511, + 359, + 905, + 435 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {h} _ {i} = \\tilde {\\mathbf {A}} _ {i} \\mathbf {h} _ {i - 1} + \\mathbf {B} _ {i} (\\Delta_ {i} \\mathbf {x} _ {i}), \\quad \\mathbf {y} _ {i} = \\mathbf {C} _ {i} \\mathbf {h} _ {i} + \\mathbf {D} \\mathbf {x} _ {i}. \\tag {5}\n$$\n", + "text_format": "latex", + "bbox": [ + 532, + 446, + 906, + 465 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "The primary distinctions between Eq. 3 and Eq. 5 are as follows: 1) The improved State Space Model (SSM) incorporates an additional parameter for hidden state transitions, denoted as $\tilde{\mathbf{A}}_i$ . This parameter functions similarly to a forget gate, filtering previous states to enhance selective retention. 2) An additional term, $\mathbf{D}\mathbf{x}_i$ , is introduced, akin to an input skip connection. In the Mamba model, which aims to achieve input-dependent modeling, $\tilde{\mathbf{A}}_i$ must be recursively computed. Despite the utilization of hardware acceleration mechanisms, this process still adheres to unidirectional modeling. Linear attention can be viewed as an alternative form of the SSM within Mamba, one that transcends the limitations of unidirectional pixel modeling. For a more in-depth analysis, one can refer to the excellent analysis in [15].", + "bbox": [ + 511, + 476, + 906, + 704 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.2. Overall Structure of the Model", + "text_level": 1, + "bbox": [ + 511, + 712, + 787, + 727 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "As illustrated in Fig. 2 (a), the proposed IR model, ACL, is based on an encoder-decoder architecture. In this model, multiple downsampled degraded images are fed into the main pathway of the encoder through lateral convolution layers, and images restored at three different scales are output during the decoding phase. Both the encoder and decoder comprise three fundamental core blocks, the structure of which is depicted in Fig. 2. Each core unit consists of several successive LAMA modules. 
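A skeletal rendering of this three-scale layout is sketched below, with the block counts set to the (6, 3, 3) configuration adopted later in Sec. 4.4.3. PyTorch is assumed; the LAMA internals, the MDC placement, and the multi-scale input branches are elided, and all names are illustrative rather than taken from the released code:

import torch
import torch.nn as nn

def stage(channels, n_blocks):
    # Stand-in for n_blocks successive LAMA modules (detailed in Sec. 3.3).
    return nn.Sequential(*[nn.Identity() for _ in range(n_blocks)])

class ACLSkeleton(nn.Module):
    # Three encoder/decoder scales: C -> 2C -> 4C channels with 2x downsampling;
    # a restored image is emitted at each of the three scales during decoding.
    def __init__(self, C=32, n=(6, 3, 3)):
        super().__init__()
        self.stem = nn.Conv2d(3, C, 3, padding=1)
        self.enc = nn.ModuleList(stage(C * 2 ** i, n[i]) for i in range(3))
        self.down = nn.ModuleList(
            nn.Conv2d(C * 2 ** i, C * 2 ** (i + 1), 4, stride=2, padding=1)
            for i in range(2))
        self.dec = nn.ModuleList(stage(C * 2 ** i, n[i]) for i in range(3))
        self.up = nn.ModuleList(
            nn.ConvTranspose2d(C * 2 ** (i + 1), C * 2 ** i, 2, stride=2)
            for i in range(2))
        self.heads = nn.ModuleList(
            nn.Conv2d(C * 2 ** i, 3, 3, padding=1) for i in range(3))

    def forward(self, x):
        f, skips = self.stem(x), []
        for i in range(3):
            f = self.enc[i](f)
            skips.append(f)
            if i < 2:
                f = self.down[i](f)
        outs = []
        for i in (2, 1, 0):
            f = self.dec[i](f if i == 2 else f + skips[i])
            outs.append(self.heads[i](f))  # restored image at scale i
            if i > 0:
                f = self.up[i - 1](f)
        return outs[::-1]  # finest scale (full resolution) first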
Furthermore, as shown in the framework diagram, following two core units with higher feature resolution in both encoding and decoding", + "bbox": [ + 511, + 734, + 906, + 902 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "17915", + "bbox": [ + 480, + 944, + 517, + 955 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/f7098aef4493c7621c002027cefe7603ad9677eede56e3b8f7c71f5544d7aadc.jpg", + "image_caption": [ + "(a) Overall Pipeline", + "(b) Encoder and Decoder" + ], + "image_footnote": [], + "bbox": [ + 93, + 87, + 493, + 281 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/73afeb19e508ef0c79efdc91ea6a607b59b6b98d0fc4a83f1a08de6ea5cfcd98.jpg", + "image_caption": [ + "Figure 2. (a) The overall framework of ACL, based on the encoder-decoder architecture. (b) The core structure of the encoder-decoder, which includes the improved LA-based Mamba (LAMA) module. (c) The structure of the LAMA module. (d) The MDC module." + ], + "image_footnote": [], + "bbox": [ + 495, + 88, + 612, + 281 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/9ee6b870f30c4854b13c04ce5b0494c60e29179a3217150a08f1140cd8aff9ee.jpg", + "image_caption": [ + "(c) LA-based Mamba", + "(d) Multi-Dilated Convolutions Module" + ], + "image_footnote": [], + "bbox": [ + 612, + 88, + 763, + 281 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/ea2bdffdc71374fad9556828ff17a8829c90fc14006c21c4a18807ac64811baf.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 763, + 89, + 895, + 281 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "phases, a local enhancement module is appended to augment the model's capability for local detail restoration. The structure of this local enhancement module is presented in Fig. 2 (d).", + "bbox": [ + 89, + 383, + 482, + 445 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Specifically, the network process begins with a degraded image $\\mathbf{I} \\in \\mathbb{R}^{3 \\times 256 \\times 256}$ . The model first transforms this image through a convolution layer into a feature map $\\mathbf{I}' \\in \\mathbb{R}^{C \\times H \\times W}$ , expanding the number of feature channels from 3 to $C$ , where $C = 32$ . Subsequently, this feature map is further processed through three encoder units $E_{i}$ and a local enhancement module, each unit encoding the feature maps at different scales into a latent space state, denoted as $\\mathbf{I}_e^i \\in \\mathbb{R}^{2^{(i-1)}C \\times \\frac{H}{2^{(i-1)}} \\times \\frac{W}{2^{(i-1)}}}$ , where $i = 1, 2, 3$ . Here, $C, H$ , and $W$ respectively represent the number of feature channels, the height, and the width of the feature maps. These processed feature maps are then passed to the decoder, where they are fused through direct or skip connections. In the decoder, the feature maps are gradually restored by decoding units $D_{i}$ , generating $\\mathbf{I}_d^i \\in \\mathbb{R}^{2^{(i-1)}C \\times \\frac{H}{2^{(i-1)}} \\times \\frac{W}{2^{(i-1)}}}$ . Each feature map is processed by the subsequent $D_{i}$ and, following lateral convolution operations, yields multi-scale output results, with $D_{1}$ producing the final restored image. Subsequently, we will elaborate on the two key modules that constitute ACL and the model's optimization function.", + "bbox": [ + 89, + 446, + 483, + 758 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.3. 
Linear Attention-based Mamba Module", + "text_level": 1, + "bbox": [ + 89, + 770, + 433, + 786 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "The original Mamba model is an auto-regressive model capable of efficiently capturing sequence dependencies, and it has been proven effective in modeling temporal causal sequence data. However, due to its unidirectional modeling approach, Mamba exhibits limitations when handling data with weak causality, such as images, necessitating further improvements to address these challenges. To overcome", + "bbox": [ + 89, + 794, + 483, + 900 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "the limitations of unidirectional modeling, recent methods have proposed multi-directional cross-scanning techniques for image processing. Unlike these approaches, we embed linear attention into the Mamba structure, enabling it to establish global pixel dependencies when processing image data, and avoiding the need for recursive computation of the forget matrix $\tilde{\mathbf{A}}_i$ . The module structure is illustrated in Fig. 2 (c).", + "bbox": [ + 511, + 383, + 906, + 503 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "The input to the module is a feature map $\mathbf{F} \in \mathbb{R}^{B \times C \times H \times W}$ . First, $\mathbf{F}$ undergoes a dimensional transformation, resulting in $\mathbf{F}' \in \mathbb{R}^{B \times HW \times C}$ . Subsequently, $\mathbf{F}'$ is passed into two branches: a main branch and a residual branch. The operations of the residual branch can be expressed as follows:", + "bbox": [ + 511, + 505, + 905, + 595 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {F} _ {res} = \\sigma (\\operatorname {Linear} \\left(\\mathbf {F} ^ {\\prime}\\right)), \\tag {6}\n$$\n", + "text_format": "latex", + "bbox": [ + 630, + 611, + 903, + 628 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $\sigma (\cdot)$ represents the SiLU activation function.", + "bbox": [ + 511, + 635, + 852, + 650 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "The main branch comprises a linear mapping layer, a convolutional layer, and linear attention. The process can be expressed as follows:", + "bbox": [ + 511, + 651, + 905, + 696 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {F} _ {1} = \\operatorname {To4D} \\left(\\operatorname {Linear} \\left(\\mathbf {F} ^ {\\prime}\\right)\\right), \\tag {7}\n$$\n", + "text_format": "latex", + "bbox": [ + 622, + 710, + 903, + 728 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {F} _ {2} = \\sigma \\left(\\operatorname {Conv} \\left(\\mathbf {F} _ {1}\\right)\\right), \\tag {8}\n$$\n", + "text_format": "latex", + "bbox": [ + 638, + 734, + 903, + 752 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $\mathrm{To4D}(\cdot)$ indicates reshaping the feature map into a four-dimensional tensor to suit the convolution operation. Next, linear attention is applied to $\mathbf{F}_2$ . 
The specific process is expressed as follows:", + "bbox": [ + 511, + 758, + 905, + 819 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {F} _ {2} ^ {\\prime} = \\operatorname {To3D} \\left(\\mathbf {F} _ {2}\\right), \\tag {9}\n$$\n", + "text_format": "latex", + "bbox": [ + 648, + 832, + 903, + 849 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {Q} = \\phi \\left(\\operatorname {Linear} \\left(\\mathbf {F} _ {2} ^ {\\prime}\\right)\\right), \\quad \\mathbf {K} = \\phi \\left(\\operatorname {Linear} \\left(\\mathbf {F} _ {2} ^ {\\prime}\\right)\\right), \\tag {10}\n$$\n", + "text_format": "latex", + "bbox": [ + 550, + 859, + 903, + 877 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {K} \\mathbf {V} = \\mathbf {K} ^ {\\top} \\cdot \\mathbf {F} _ {2} ^ {\\prime}, \\tag {11}\n$$\n", + "text_format": "latex", + "bbox": [ + 653, + 883, + 903, + 901 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "17916", + "bbox": [ + 478, + 944, + 519, + 955 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {F} _ {\\text {atten}} = \\mathbf {Q} \\cdot \\mathbf {K V}, \\tag {12}\n$$\n", + "text_format": "latex", + "bbox": [ + 223, + 90, + 480, + 107 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {F} _ {\\text {atten}} = \\mathbf {F} _ {\\text {atten}} + \\operatorname {Conv} _ {\\text {pos}} (\\mathbf {F} _ {2}), \\tag {13}\n$$\n", + "text_format": "latex", + "bbox": [ + 176, + 113, + 480, + 130 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $\mathrm{To3D}(\cdot)$ reshapes the feature map into a three-dimensional tensor for the linear attention layer, $\phi (\cdot)$ is the kernel function, and $\mathrm{Conv}_{pos}(\cdot)$ is a learnable position embedding function. Subsequently, $\mathbf{F}_{atten}$ is multiplied by $\mathbf{F}_{res}$ , followed by a linear mapping layer, yielding $\mathbf{F}_{enhance}$ , which is expressed as:", + "bbox": [ + 89, + 135, + 483, + 227 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {F} _ {\\text {enhance}} = \\operatorname {Linear} \\left(\\mathbf {F} _ {\\text {atten}} \\times \\mathbf {F} _ {\\text {res}}\\right). \\tag {14}\n$$\n", + "text_format": "latex", + "bbox": [ + 165, + 238, + 482, + 253 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Finally, $\mathbf{F}_{enhance}$ is processed through a simple feedforward neural network to produce the output of the LAMA.", + "bbox": [ + 89, + 265, + 483, + 296 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.4. Multi-Dilated Convolutions Module", + "text_level": 1, + "bbox": [ + 89, + 304, + 401, + 319 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "The LAMA module primarily functions to establish global feature connections, necessitating the learning of local features to enhance the quality of detail restoration. While some previous methods employed feature window-based self-attention strategies and achieved certain advancements, they incurred substantial computational costs. Consequently, we adopted a more straightforward and effective approach, namely the multi-scale dilated convolution module, the structure of which is depicted in Fig. 2(d). 
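For concreteness, the LAMA computation in Eqs. (6)-(14) above can be condensed into the following sketch. PyTorch is assumed, and the depthwise convolutions, the ELU-based kernel function $\phi$, the square feature maps, and the omission of the trailing feed-forward network are illustrative assumptions rather than the paper's exact choices:

import torch
import torch.nn as nn
import torch.nn.functional as F

class LAMA(nn.Module):
    # Sketch of the LA-based Mamba block following Eqs. (6)-(14).
    def __init__(self, dim):
        super().__init__()
        self.res_proj = nn.Linear(dim, dim)   # residual branch, Eq. (6)
        self.in_proj = nn.Linear(dim, dim)    # main branch, Eq. (7)
        self.conv = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)  # Eq. (8)
        self.q_proj = nn.Linear(dim, dim)     # Eq. (10)
        self.k_proj = nn.Linear(dim, dim)
        self.pos = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)   # Eq. (13)
        self.out_proj = nn.Linear(dim, dim)   # Eq. (14)

    def forward(self, x):                     # x: (B, HW, C), i.e. F'
        B, N, C = x.shape
        H = W = int(N ** 0.5)                 # assumes square feature maps
        res = F.silu(self.res_proj(x))        # Eq. (6)
        f = self.in_proj(x).transpose(1, 2).reshape(B, C, H, W)  # Eq. (7), To4D
        f = F.silu(self.conv(f))              # Eq. (8)
        f3 = f.flatten(2).transpose(1, 2)     # Eq. (9), To3D -> (B, HW, C)
        phi = lambda t: F.elu(t) + 1          # kernel function (assumed)
        Q, K = phi(self.q_proj(f3)), phi(self.k_proj(f3))         # Eq. (10)
        attn = Q @ (K.transpose(1, 2) @ f3)   # Eqs. (11)-(12), linear attention
        attn = attn + self.pos(f).flatten(2).transpose(1, 2)      # Eq. (13)
        return self.out_proj(attn * res)      # Eq. (14)

As in the preliminary analysis, the K-first association order keeps the attention linear in the number of pixels HW.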
This module is equipped with filtering operations using various dilation factors aimed at capturing local features of different granularities within the image to enhance the detail restoration effects. The module comprises two dilated convolutions along with skip connections. The input feature, denoted as $\mathbf{F}$ , is split into two pathways: one passes through a convolution layer with a kernel size of 5 and a dilation rate of 2, and the other through a convolution layer with a kernel size of 3 and a dilation rate of 2, resulting in two feature sets:", + "bbox": [ + 89, + 325, + 483, + 611 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {F} _ {1} = \\operatorname {Conv} _ {5 \\times 5, d = 2} (\\mathbf {F}), \\tag {15}\n$$\n", + "text_format": "latex", + "bbox": [ + 205, + 613, + 480, + 628 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {F} _ {2} = \\operatorname {Conv} _ {3 \\times 3, d = 2} (\\mathbf {F}). \\tag {16}\n$$\n", + "text_format": "latex", + "bbox": [ + 205, + 635, + 480, + 652 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "These are then concatenated to form $\mathbf{F}'$ , which subsequently passes through a convolution layer with a kernel size of 1 to halve the channel count, aligning it with the dimensions of the input features. Furthermore, the input feature $\mathbf{F}$ is merged with $\mathbf{F}'$ via a skip connection, culminating in the output $\mathbf{F}_{out}$ . The expression is as follows:", + "bbox": [ + 89, + 657, + 483, + 748 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {F} _ {\\text {out}} = \\operatorname {Conv} _ {1 \\times 1} \\left(\\operatorname {Concat} \\left(\\mathbf {F} _ {1}, \\mathbf {F} _ {2}\\right)\\right) + \\mathbf {F}. \\tag {17}\n$$\n", + "text_format": "latex", + "bbox": [ + 151, + 763, + 482, + 780 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.5. Optimization Objectives", + "text_level": 1, + "bbox": [ + 89, + 787, + 313, + 804 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "The ACL model, during its decoding phase, outputs restoration results at three distinct scales and computes the corresponding loss values. Following prior methodologies, we calculate the loss values concurrently in both the spatial and frequency domains. The traditional L1 loss function is employed to measure the discrepancy between the restored", + "bbox": [ + 89, + 810, + 483, + 900 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "outputs and the pristine reference images. Consequently, the total loss is computed as follows:", + "bbox": [ + 511, + 90, + 903, + 121 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\nL _ {total} = \\sum_ {i = 1} ^ {3} \\frac {1}{N _ {i}} \\left| \\mathbf {P} _ {i} - \\mathbf {I} _ {i} \\right| + \\lambda \\cdot \\sum_ {i = 1} ^ {3} \\frac {1}{N _ {i}} | fft (\\mathbf {P} _ {i}) - fft (\\mathbf {I} _ {i}) | \\tag {18}\n$$\n", + "text_format": "latex", + "bbox": [ + 511, + 133, + 903, + 186 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $\mathbf{P}_i$ and $\mathbf{I}_i$ represent the restored result and the corresponding true image, respectively, and $N_i$ denotes the total number of pixels in the image. $fft(\cdot)$ signifies the fast Fourier transform function. 
The hyperparameter $\\lambda$ , utilized to balance the contributions of spatial domain loss and frequency domain loss to the total loss, is set at 0.1, following previous methods.", + "bbox": [ + 511, + 186, + 906, + 292 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4. Experiments", + "text_level": 1, + "bbox": [ + 511, + 309, + 645, + 325 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "This section focuses on showcasing the effectiveness of our proposed ACL model in addressing various degraded image tasks, such as deraining and deblurring, evaluated across six test sets. We will outline the experimental procedures and datasets utilized, and confirm the impact of the proposed modules through a series of ablation studies.", + "bbox": [ + 511, + 334, + 905, + 426 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4.1. Implementation Setup", + "text_level": 1, + "bbox": [ + 511, + 436, + 720, + 453 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "For each type of degradation, datasets are trained and evaluated independently. Unless specifically mentioned, all tests are conducted with the same hyperparameters. The training set undergoes random cropping of $256 \\times 256$ patches and random flipping as a data augmentation strategy. To compare computational complexity with other methods, FLOPs are tested at the mentioned crop size. A cosine annealing strategy is adopted to gradually adjust the learning rate each epoch, setting a minimum learning rate limit of 1e-6. The batch size is set to 8, and the Adam optimizer is adopted. Following the evaluation of previous methods, for deraining, the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) are calculated on the YCbCr color mode. For other tasks, evaluation metrics are calculated on RGB color mode. All experiments are implemented in an environment equipped with the NVIDIA 24GB 3090 GPUs, based on the PyTorch.", + "bbox": [ + 511, + 458, + 906, + 715 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4.2. Single Image Deraining Results", + "text_level": 1, + "bbox": [ + 511, + 727, + 790, + 743 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "We utilized several different rain removal datasets, including Rain100L/H [40], Rain200L/H, and DID-Data [46], to evaluate the model's ability to restore images with streaklike degradation elements. Each dataset was independently trained for 800 epochs, with an initial learning rate set to 1e-3. We compared our approach with previous methods, including the advanced Restormer (Transformer) [44], NAFNet (CNN) [2], IRNeXt (CNN) [6], and MambaIR (Mamba) [13]. The comparative results are shown in Table 1 and Table 2. Upon comparison of PSNR, it can", + "bbox": [ + 511, + 750, + 906, + 900 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "17917", + "bbox": [ + 480, + 944, + 517, + 955 + ], + "page_idx": 4 + }, + { + "type": "table", + "img_path": "images/ad5a14a0b0a7f3978db0059b422efce4cf37a3eb84b3ea664ea49aade4aa5272.jpg", + "table_caption": [ + "Table 1. Quantitative comparison results of the proposed model and seven other advanced models on the Rain100L and Rain100H. The larger the PSNR and SSIM values, the better the model effect." + ], + "table_footnote": [], + "table_body": "
MethodsRestormer [44]MAXIM [33]DRT [24]MPRNet [43]DAWN [19]IRNeXt [6]MambaIR [13]ACL(Ours)
DatasetPSNRSSIMPSNRSSIMPSNRSSIMPSNRSSIMPSNRSSIMPSNRSSIMPSNRSSIMPSNRSSIM
Rain100L38.990.97838.060.97737.610.94837.840.95936.730.97138.240.97238.780.97739.180.983
Rain100H31.460.90430.810.90329.470.84630.410.87430.620.89631.640.90230.620.89332.220.920
Average35.230.94134.440.94033.540.89734.130.91733.680.93434.940.93734.700.93535.700.952
", + "bbox": [ + 91, + 128, + 906, + 219 + ], + "page_idx": 5 + }, + { + "type": "table", + "img_path": "images/9bb344b06a5d6da15853df480dc19f3622a0ff5cf0b9cbfb10d8d184f3e0da09.jpg", + "table_caption": [ + "Table 2. Quantitative comparison results of the proposed model with 13 other advanced models, including CNN and Transformer-based methods, on three datasets." + ], + "table_footnote": [], + "table_body": "
MethodsRain200LRain200HDID-DatasetAverage
PSNRSSIMPSNRSSIMPSNRSSIMPSNRSSIM
RESCAN [21]33.820.95526.220.82232.600.92730.880.901
PreNet [29]37.120.97629.040.89033.370.91933.180.928
DRT [24]38.810.98328.670.88033.880.92833.790.930
CCN [28]38.260.98129.990.91432.130.92433.460.940
Restormer [44]40.580.98630.830.91433.190.92634.870.942
Uformer [38]40.200.98630.310.91134.360.93334.960.943
MPRNet [43]39.820.98629.940.90034.500.93734.750.941
SmartAssign [37]38.410.98127.710.85433.110.91533.080.917
SFNet [7]39.500.98229.750.90134.510.93834.590.940
NAFNet [2]39.480.98229.190.88834.690.93734.450.936
ELFformer [18]38.850.98028.930.88533.540.93633.770.934
ESDNet [31]39.850.98630.010.91334.520.93934.790.946
MSGNN [35]39.090.98729.630.91833.110.92733.940.944
ACL(Ours)40.740.98830.450.91634.810.93835.330.947
", + "bbox": [ + 91, + 270, + 906, + 523 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/37815c6aa058b9a6405ab77600645ab7d332da0c3b3d60c0e460aa9998dfc842.jpg", + "image_caption": [ + "Figure 3. Visual comparison of ACL and other recent SOTA models on rainy image removal. The first two scenes contain slight rain streak degradation, while the last two scenes contain severe rain streak degradation." + ], + "image_footnote": [], + "bbox": [ + 91, + 532, + 906, + 834 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "17918", + "bbox": [ + 480, + 945, + 519, + 955 + ], + "page_idx": 5 + }, + { + "type": "table", + "img_path": "images/d6e63d5a57ae9aca7f31885283c2395d233a28af3f1df3b781cc7660b4132f36.jpg", + "table_caption": [ + "Table 3. Quantitative results of our method compared to recent approaches in blurry image restoration." + ], + "table_footnote": [], + "table_body": "
MethodsGoPro
PSNR ↑SSIM ↑FLOPs(G) ↓Param(M) ↓
MIMO [4]32.680.95961716.1
DMPHN [47]31.200.940-21.7
DBGAN [49]31.100.94275911.6
MPRNet [43]32.660.95977720.1
Restormer [44]32.920.96114026.1
IRNeXt [6]33.160.96211413.21
Stripformer [32]33.080.96217020
SSAMAN [42]33.530.96516518.3
LoFormer [26]33.730.9664716.4
Ours33.250.964554.6
", + "bbox": [ + 96, + 127, + 475, + 315 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/7b1ad184733f9d1c1bdc3b844871a93fdd19a96b48c0b71c62d586b76608246b.jpg", + "image_caption": [ + "(a) Blur Region" + ], + "image_footnote": [], + "bbox": [ + 94, + 325, + 220, + 383 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/d8bb188f0206f44d352c3f2a8973a6ee5b55486f6590550ff554f846e0f9b565.jpg", + "image_caption": [ + "(b) Reference" + ], + "image_footnote": [], + "bbox": [ + 223, + 325, + 349, + 383 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/e08ef39dab8f576ee34504a74786659c508db7f5bbd9dbb360eed3ec41917282.jpg", + "image_caption": [ + "(c) DMPHN" + ], + "image_footnote": [], + "bbox": [ + 354, + 325, + 480, + 383 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/8ffbd4209bdf41e4b00615e6fc2da32be63acc10047ac432ed1a180a5b2b865e.jpg", + "image_caption": [ + "(e) IRNeXt", + "Figure 4. Visual results of ACL and four other advanced models on motion blurred image restoration." + ], + "image_footnote": [], + "bbox": [ + 94, + 395, + 220, + 452 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/89f95e3017c67732be8fad5db1b74c8e4226d904de5a21aa699ac8ae56ee76db.jpg", + "image_caption": [ + "(f) Restormer" + ], + "image_footnote": [], + "bbox": [ + 223, + 395, + 351, + 450 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/65db8bcb34f6470354d3dfa31118106620092a3c69c4a6b9ad9508a49379462c.jpg", + "image_caption": [ + "(g) Ours" + ], + "image_footnote": [], + "bbox": [ + 354, + 395, + 480, + 450 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "be observed that our method outperforms the other compared methods, except for the powerful Transformer-based Restormer on the Rain200H. Additionally, Fig. 3 further illustrates the visual comparison results, where ACL demonstrates superior performance in restoring image details.", + "bbox": [ + 89, + 536, + 483, + 613 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.3. Single Image Deblurring Results", + "text_level": 1, + "bbox": [ + 89, + 619, + 377, + 638 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "We conducted evaluations on motion blur image restoration using the GoPro dataset [27], which includes 2,103 training images and 1,111 testing images. The compared methods include the recently proposed 9 advanced methods. In Table 3, we present the comparative results of various metrics on this dataset. As shown, ACL achieves advanced performance while maintaining a low parameter count and low FLOPs. Additionally, Fig. 4 displays visual comparison results with other methods. We selected critical numerical information within the images, and it can be observed that ACL also exhibits good performance in restoring motion-blurred images.", + "bbox": [ + 89, + 642, + 482, + 825 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.4. Ablation Studies", + "text_level": 1, + "bbox": [ + 89, + 833, + 256, + 848 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "To further understand the contributions of each module in the ACL model and other factors affecting model performance, we conducted a unified experiment on the", + "bbox": [ + 89, + 854, + 483, + 902 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/3dab61c52e9fc94a0e6ce2f09b17d81ad2323ffb04fe3555bbfcd488d6af4ee2.jpg", + "table_caption": [ + "Table 4. Comparison of ablation experiments between two modules on Rain100L." 
+ ], + "table_footnote": [], + "table_body": "
SettingsPSNR ↑SSIM ↑
Baseline38.240.978
Baseline + LAMA38.890.981
Baseline + LAMA + MDC39.180.983
", + "bbox": [ + 524, + 128, + 893, + 196 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Rain100L/H rain removal dataset. Specifically, we performed ablation studies on the two main modules of the model. Additionally, to verify that the linear attention capability can be restored using the Mamba structure, we conducted experiments by replacing LAMA with other structures to compare the results under different configurations.", + "bbox": [ + 511, + 220, + 906, + 313 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.4.1. LAMA and MDC modules:", + "text_level": 1, + "bbox": [ + 511, + 321, + 750, + 335 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "To validate the roles of the two main modules in ACL, namely LAMA and MDC, as well as their respective importance, we conducted ablation experiments. We removed the MDC module and replaced the core encoder/decoder modules with the structure shown in Fig. 1(b), where the Transformer block in the baseline model is implemented based on Linear Attention. We then gradually replaced the encoder-decoder modules with LAMA and added the MDC module to the baseline model, resulting in two different configurations, which were trained and tested separately. The experimental results are shown in Table 4. By comparing the \"Baseline\" and \"Baseline+LAMA\" configurations, it is evident that the Mamba structure, implemented with linear attention, achieves better results, proving the potential of the Mamba structure to activate linear attention. Besides, adding the MDC further enhances the model's performance.", + "bbox": [ + 511, + 340, + 906, + 584 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.4.2. Different Mechanisms for Core Modules:", + "text_level": 1, + "bbox": [ + 511, + 594, + 846, + 608 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "To validate the effectiveness of LAMA, we replaced the LA structure within LAMA with the original one-dimensional scanning SSM structure and the bidirectional scanning SSM structure for verification. Additionally, replacing LAMA with a standard LA structure resulted in a new configuration, referred to as the Baseline model. The results obtained from these various configurations are presented in Table 5. On one hand, the proposed method outperforms both the Baseline and the strategies employing unidirectional and multi-directional scanning for modeling. On the other hand, compared to the unidirectional Mamba model, the multi-directional scanning strategy demonstrates superior performance, indicating that unidirectional modeling is not optimal for image data. Furthermore, Fig. 5 provides visual examples corresponding to each configuration. It can be observed that while the SSM-based scanning methods are capable of removing rainy degradation elements, they result in significant loss of image details. In contrast, our method achieves superior visual outcomes.", + "bbox": [ + 511, + 613, + 908, + 901 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "17919", + "bbox": [ + 480, + 944, + 519, + 957 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/08784f57f0c7dcb9de3c6b547df8597cd1a1b86e0569d36db9f25b8382453a40.jpg", + "table_caption": [ + "Table 5. Quantitative results of different mechanisms used in the core module on rainy streak removal." + ], + "table_footnote": [], + "table_body": "
SettingsRain100LRain100H
PSNRSSIMPSNRSSIM
Baseline38.240.97831.020.913
Mamba (w. 1D Scan)36.470.95929.720.887
Mamba (w. 2D Scan)38.070.97730.930.907
Ours39.180.98332.220.920
", + "bbox": [ + 102, + 128, + 468, + 234 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/c1962d22190b23d3ed72c9670333e8887485c1a4d67cadeceaf8cb51fb011c6d.jpg", + "image_caption": [ + "(a) Rainy Image" + ], + "image_footnote": [], + "bbox": [ + 93, + 247, + 222, + 314 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/fddcf20267e95036fb0977ecc48fb2158fc462def51c8fdfbe27a04bed83f37c.jpg", + "image_caption": [ + "(b) Ground Truth" + ], + "image_footnote": [], + "bbox": [ + 222, + 247, + 352, + 314 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/eab1499fe7a3b50ffb30b0b8d4b4e54ad984a41d25e252c2321f3a62c301ec7e.jpg", + "image_caption": [ + "(c) LA Block" + ], + "image_footnote": [], + "bbox": [ + 352, + 247, + 480, + 314 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/dd4adae9e59865b8b10aec2712fe7a09f10cb5b78fe0e4fefa7d18e0e8477e3a.jpg", + "image_caption": [ + "(d) 1D scan Mamba", + "Figure 5. Visual results of the encoder/decoder blocks adopting different mechanisms." + ], + "image_footnote": [], + "bbox": [ + 93, + 327, + 222, + 393 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/37c0a6f77facfa54b45ca7c59e384fc385eb3a5dd5f8cfb631de832e997982b0.jpg", + "image_caption": [ + "(e) 2D scan Mamba" + ], + "image_footnote": [], + "bbox": [ + 223, + 327, + 352, + 393 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/16e07557232a2c55b7606977a7c8dd99488ea5d5dcdfdd2de206934c59680d5f.jpg", + "image_caption": [ + "(f) LA Mamba" + ], + "image_footnote": [], + "bbox": [ + 352, + 327, + 480, + 393 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/98411ba000f4827398497dadf851b5619f7c1e3dd65f310dcf1d83aae938cda8.jpg", + "table_caption": [ + "Table 6. Evaluation results of different $N_{i}$ configurations on the Rain100L dataset in ACL. $\\star$ represents the combination we adopted." + ], + "table_footnote": [], + "table_body": "
(N1,N2,N3)PSNR↑SSIM↑
(3,3,3)38.950.978
(4,4,4)39.030.980
(6,3,3)★39.180.983
(6,6,6)39.190.983
", + "bbox": [ + 112, + 521, + 459, + 601 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "4.4.3. The Impact of the Number of Encoders/Decoders", + "text_level": 1, + "bbox": [ + 89, + 627, + 478, + 643 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "As mentioned in the above, a crucial hyper-parameter in the model is the number of feature processing blocks at each encoding/decoding stage, denoted as $N_{i}$ . To investigate the impact of $N_{i}$ , we conducted a series of experiments on the deraining dataset using different combinations, with the results presented in Table 6. Additionally, Fig. 6 illustrates the output images generated by the models with various configurations. The numerical results indicate that increasing $N_{i}$ contributes to performance improvement, but the effect is limited. Notably, a significant enhancement is observed when $N_{1}$ is increased, suggesting that learning high-resolution feature maps plays a critical role in improving the model's recovery performance.", + "bbox": [ + 88, + 646, + 482, + 843 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "4.4.4. The Dilation Rate of Convolution in MDC Module", + "text_level": 1, + "bbox": [ + 89, + 851, + 482, + 866 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "We propose a local feature enhancement module, MDC, which includes two dilated convolution layers, as shown in", + "bbox": [ + 89, + 869, + 482, + 900 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/1875a8b22794270df98e65352ba49d08c774fedf56a75a45dc43b159581210b1.jpg", + "image_caption": [ + "Input", + "Figure 6. Visual Results on Various $N_{i}$" + ], + "image_footnote": [], + "bbox": [ + 516, + 88, + 612, + 188 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/839f75d4d7639e7277ccb9e8dafd60c2bfeeb00555847031c6ec9d169b31e7c7.jpg", + "image_caption": [ + "(4,4,4)" + ], + "image_footnote": [], + "bbox": [ + 614, + 89, + 710, + 186 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/5c3add7916c909abc38bbb4c408594f56e53af0c91358ce01a64077e95eb6f15.jpg", + "image_caption": [ + "(6,6,6)" + ], + "image_footnote": [], + "bbox": [ + 710, + 89, + 807, + 186 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/0dedf5c2df23be8be95ae911bbe953ce4845bb353d3404a5cbcdc3623e74c6e7.jpg", + "image_caption": [ + "(6,3,3)" + ], + "image_footnote": [], + "bbox": [ + 808, + 89, + 903, + 186 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/4d800bdaf8a2e14ecf4b5e4540e723abdb1884bd1943165818737c07236a01d4.jpg", + "table_caption": [ + "Table 7. The evaluation results of the MDC module's convolution operations with varying dilation rates on the Rain100L dataset. $\\star$ represents the combination we adopted." + ], + "table_footnote": [], + "table_body": "
dilationPSNR↑SSIM↑
138.120.981
2★39.180.983
339.100.981
", + "bbox": [ + 539, + 313, + 877, + 378 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Fig. 2 (d). We further investigate the impact of dilation rates on the module's performance. As shown in Table 7, the results indicate that compared to the non-dilated convolution setting $(d = 1)$ , the performance is inferior to the other two configurations. However, with increased dilation rates, performance improves to varying degrees, with the best overall performance observed at $d = 2$ .", + "bbox": [ + 511, + 417, + 906, + 523 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "4.5. Conclusion and Limitations", + "text_level": 1, + "bbox": [ + 511, + 569, + 763, + 585 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "We introduced ACL, a novel image restoration model that addresses the limitations of traditional CNNs and Transformer-based approaches in handling global receptive fields and computational efficiency. By integrating linear attention into the Mamba structure, we developed the LAMA module, which enhances global feature dependencies with linear computational complexity. Additionally, the MDC module was designed to improve local detail restoration through multi-scale dilated convolutions. Our experiments confirmed that the ACL achieves promising performance in de-raining tasks, demonstrating its effectiveness in both quantitative metrics and visual quality. Furthermore, our work provides a new perspective on leveraging the Mamba structure in the IR domain. Nevertheless, our method also has some limitations. The ACL model does not have an advantage over CNN-based models when processing large-sized images. Moreover, in the image deblurring task, there is still a gap between ACL and SOTA Transformer models. In the future, the ACL model still has room for further optimization to adapt more IR tasks.", + "bbox": [ + 511, + 598, + 906, + 902 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "17920", + "bbox": [ + 480, + 944, + 517, + 955 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Acknowledgments", + "text_level": 1, + "bbox": [ + 91, + 90, + 250, + 107 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "This work was supported by the National Key R&D Program of China (No.2023YFB4502804), the National Science Fund for Distinguished Young Scholars (No.62025603), the National Natural Science Foundation of China (No. U22B2051, No. 62302411), the Natural Science Foundation of Fujian Province of China (No.2021J06003), and China Postdoctoral Science Foundation (No. 2023M732948).", + "bbox": [ + 89, + 113, + 485, + 226 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 91, + 253, + 187, + 268 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[1] Hanting Chen, Yunhe Wang, Tianyu Guo, Chang Xu, Yiping Deng, Zhenhua Liu, Siwei Ma, Chunjing Xu, Chao Xu, and Wen Gao. Pre-trained image processing transformer. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 12299-12310, 2021. 3", + "[2] Liangyu Chen, Xiaojie Chu, Xiangyu Zhang, and Jian Sun. Simple baselines for image restoration. In European conference on computer vision, pages 17-33. Springer, 2022. 5, 6", + "[3] Xiang Chen, Hao Li, Mingqiang Li, and Jinshan Pan. Learning a sparse transformer network for effective image de- raining. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5896-5905, 2023. 
3", + "[4] Sung-Jin Cho, Seo-Won Ji, Jun-Pyo Hong, Seung-Won Jung, and Sung-Jea Ko. Rethinking coarse-to-fine approach in single image deblurring. In Proceedings of the IEEE/CVF international conference on computer vision, pages 4641-4650, 2021. 7", + "[5] Yuning Cui, Wenqi Ren, Xiaochun Cao, and Alois Knoll. Image restoration via frequency selection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023. 2", + "[6] Yuning Cui, Wenqi Ren, Sining Yang, Xiaochun Cao, and Alois Knoll. Irnext: Rethinking convolutional network design for image restoration. 2023. 1, 5, 6, 7", + "[7] Yuning Cui, Yi Tao, Zhenshan Bing, Wenqi Ren, Xinwei Gao, Xiaochun Cao, Kai Huang, and Alois Knoll. Selective frequency network for image restoration. In The Eleventh International Conference on Learning Representations, 2023. 6", + "[8] Albert Gu and Tri Dao. Mamba: Linear-time sequence modeling with selective state spaces. arXiv preprint arXiv:2312.00752, 2023. 1, 2, 3", + "[9] Enxuan Gu, Hongwei Ge, and Yong Guo. Code: An explicit content decoupling framework for image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2920-2930, 2024. 2", + "[10] Shuhang Gu, Yawei Li, Luc Van Gool, and Radu Timofte. Self-guided network for fast image denoising. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2511-2520, 2019. 2", + "[11] Yubin Gu, Honghui Xu, Yueqian Quan, Wanjun Chen, and Jianwei Zheng. Orsi salient object detection via bidimensional attention and full-stage semantic guidance. IEEE" + ], + "bbox": [ + 93, + 277, + 483, + 900 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Transactions on Geoscience and Remote Sensing, 61:1-13, 2023. 1", + "[12] Yubin Gu, Siting Chen, Xiaoshuai Sun, Jiayi Ji, Yiyi Zhou, and Rongrong Ji. Optical remote sensing image salient object detection via bidirectional cross-attention and attention restoration. Pattern Recognition, page 111478, 2025. 1", + "[13] Hang Guo, Jinmin Li, Tao Dai, Zhihao Ouyang, Xudong Ren, and Shu-Tao Xia. Mambair: A simple baseline for image restoration with state-space model. arXiv preprint arXiv:2402.15648, 2024. 1, 2, 5, 6", + "[14] Tianyu Guo, Haowei Wang, Yiwei Ma, Jiayi Ji, and Xiaoshuai Sun. Improving panoptic narrative grounding by harnessing semantic relationships and visual confirmation. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 1985-1993, 2024. 1", + "[15] Dongchen Han, Ziyi Wang, Zhuofan Xia, Yizeng Han, Yifan Pu, Chunjiang Ge, Jun Song, Shiji Song, Bo Zheng, and Gao Huang. Demystify mamba in vision: A linear attention perspective. Advances in Neural Information Processing Systems, 37:127181-127203, 2025. 2, 3", + "[16] Tao Huang, Xiaohuan Pei, Shan You, Fei Wang, Chen Qian, and Chang Xu. Localmamba: Visual state space model with windowed selective scan. arXiv preprint arXiv:2403.09338, 2024. 3", + "[17] Jiayi Ji, Haowei Wang, Changli Wu, Yiwei Ma, Xiaoshuai Sun, and Rongrong Ji. Jm3d jm3d-llm: Elevating 3d representation with joint multi-modal cues. IEEE Transactions on Pattern Analysis and Machine Intelligence, 47(4):2475–2492, 2025. 1", + "[18] Kui Jiang, Zhongyuan Wang, Chen Chen, Zheng Wang, Laizhong Cui, and Chia-Wen Lin. Magic elf: Image deraining meets association learning and transformer. arXiv preprint arXiv:2207.10455, 2022. 6", + "[19] Kui Jiang, Wenxuan Liu, Zheng Wang, Xian Zhong, Junjun Jiang, and Chia-Wen Lin. 
Dawn: Direction-aware attention wavelet network for image deraining. In Proceedings of the 31st ACM international conference on multimedia, pages 7065-7074, 2023. 6", + "[20] Lingshun Kong, Jiangxin Dong, Jianjun Ge, Mingqiang Li, and Jinshan Pan. Efficient frequency domain-based transformers for high-quality image deblurring. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5886-5895, 2023. 2", + "[21] Xia Li, Jianlong Wu, Zhouchen Lin, Hong Liu, and Hongbin Zha. Recurrent squeeze-and-excitation context aggregation net for single image deraining. In Proceedings of the European conference on computer vision (ECCV), pages 254-269, 2018. 6", + "[22] Yawei Li, Yuchen Fan, Xiaoyu Xiang, Denis Demandolx, Rakesh Ranjan, Radu Timofte, and Luc Van Gool. Efficient and explicit modelling of image hierarchies for image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18278-18289, 2023. 2", + "[23] Jingyun Liang, Jiezhang Cao, Guolei Sun, Kai Zhang, Luc Van Gool, and Radu Timofte. Swinir: Image restoration using swin transformer. In Proceedings of the IEEE/CVF inter" + ], + "bbox": [ + 516, + 92, + 906, + 900 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "17921", + "bbox": [ + 480, + 944, + 517, + 955 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "national conference on computer vision, pages 1833-1844, 2021. 1, 2, 3", + "[24] Yuanchu Liang, Saeed Anwar, and Yang Liu. Drt: A lightweight single image deraining recursive transformer. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 589-598, 2022. 3, 6", + "[25] Yue Liu, Yunjie Tian, Yuzhong Zhao, Hongtian Yu, Lingxi Xie, Yaowei Wang, Qixiang Ye, Jianbin Jiao, and Yunfan Liu. Vmamba: Visual state space model. Advances in neural information processing systems, 37:103031-103063, 2024. 3", + "[26] Xintian Mao, Jiansheng Wang, Xingran Xie, Qingli Li, and Yan Wang. Loformer: Local frequency transformer for image deblurring. In Proceedings of the 32nd ACM International Conference on Multimedia, pages 10382-10391, 2024. 7", + "[27] Seungjun Nah, Tae Hyun Kim, and Kyoung Mu Lee. Deep multi-scale convolutional neural network for dynamic scene deblurring. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3883-3891, 2017. 7", + "[28] Ruijie Quan, Xin Yu, Yuanzhi Liang, and Yi Yang. Removing raindrops and rain streaks in one go. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9147-9156, 2021. 6", + "[29] Dongwei Ren, Wangmeng Zuo, Qinghua Hu, Pengfei Zhu, and Deyu Meng. Progressive image deraining networks: A better and simpler baseline. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 3937-3946, 2019. 6", + "[30] Yuan Shi, Bin Xia, Xiaoyu Jin, Xing Wang, Tianyu Zhao, Xin Xia, Xuefeng Xiao, and Wenming Yang. Vmambair: Visual state space model for image restoration. arXiv preprint arXiv:2403.11423, 2024. 1, 2, 3", + "[31] Tianyu Song, Guiyue Jin, Pengpeng Li, Kui Jiang, Xiang Chen, and Jiyu Jin. Learning a spiking neural network for efficient image deraining. *IJCAI*, 2024. 6", + "[32] Fu-Jen Tsai, Yan-Tsung Peng, Yen-Yu Lin, Chung-Chi Tsai, and Chia-Wen Lin. Stripformer: Strip transformer for fast image deblurring. In European Conference on Computer Vision, pages 146-162. Springer, 2022. 
2, 7", + "[33] Zhengzhong Tu, Hossein Talebi, Han Zhang, Feng Yang, Peyman Milanfar, Alan Bovik, and Yinxiao Li. Maxim: Multi-axis mlp for image processing. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5769-5780, 2022. 6", + "[34] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017. 1, 2", + "[35] Cong Wang, Wei Wang, Chengjin Yu, and Jie Mu. Explore internal and external similarity for single image deraining with graph neural networks. *IJCAI*, 2024. 6", + "[36] Qiong Wang, Kui Jiang, Jinyi Lai, Zheng Wang, and Jianhui Zhang. Hpcnet: A hybrid progressive coupled network for image deraining. In 2023 IEEE International Conference on Multimedia and Expo (ICME), pages 2747-2752. IEEE, 2023. 1" + ], + "bbox": [ + 91, + 90, + 482, + 898 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[37] Yinglong Wang, Chao Ma, and Jianzhuang Liu. Smartassign: Learning a smart knowledge assignment strategy for deraining and desnowing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3677-3686, 2023. 6", + "[38] Zhendong Wang, Xiaodong Cun, Jianmin Bao, Wengang Zhou, Jianzhuang Liu, and Houqiang Li. Uformer: A general u-shaped transformer for image restoration. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 17683-17693, 2022. 2, 6", + "[39] Xinyu Xie, Yawen Cui, Tao Tan, Xubin Zheng, and Zitong Yu. Fusionmamba: Dynamic feature enhancement for multimodal image fusion with mamba. Visual Intelligence, 2(1): 37, 2024. 3", + "[40] Wenhan Yang, Robby T Tan, Jiashi Feng, Jiaying Liu, Zongming Guo, and Shuicheng Yan. Deep joint rain detection and removal from a single image. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1357-1366, 2017. 5", + "[41] Hu Yu, Naishan Zheng, Man Zhou, Jie Huang, Zeyu Xiao, and Feng Zhao. Frequency and spatial dual guidance for image dehazing. In European Conference on Computer Vision, pages 181-198. Springer, 2022. 1", + "[42] Anas Zafar, Danyal Aftab, Rizwan Qureshi, Xinqi Fan, Pingjun Chen, Jia Wu, Hazrat Ali, Shah Nawaz, Sheheryar Khan, and Mubarak Shah. Single stage adaptive multi-attention network for image restoration. IEEE TIP, 2024. 7", + "[43] Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Ming-Hsuan Yang, and Ling Shao. Multi-stage progressive image restoration. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 14821-14831, 2021. 6, 7", + "[44] Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, and Ming-Hsuan Yang. Restormer: Efficient transformer for high-resolution image restoration. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5728-5739, 2022. 1, 2, 5, 6, 7", + "[45] Wengyi Zhan, Mingbao Lin, Chia-Wen Lin, and Rongrong Ji. Anysr: Realizing image super-resolution as any-scale, any-resource. IEEE Transactions on Image Processing, 2024. 2", + "[46] He Zhang and Vishal M Patel. Density-aware single image de-raining using a multi-stream dense network. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 695-704, 2018. 5", + "[47] Hongguang Zhang, Yuchao Dai, Hongdong Li, and Piotr Koniusz. 
Deep stacked hierarchical multi-patch network for image deblurring. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5978-5986, 2019. 7", + "[48] Jinlu Zhang, Yiyi Zhou, Qiancheng Zheng, Xiaoxiong Du, Gen Luo, Jun Peng, Xiaoshuai Sun, and Rongrong Ji. Fast text-to-3D-aware face generation and manipulation via direct cross-modal mapping and geometric regularization. In Proceedings of the 41st International Conference on Machine Learning, pages 60605–60625. PMLR, 2024. 1" + ], + "bbox": [ + 516, + 92, + 903, + 900 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "17922", + "bbox": [ + 480, + 944, + 519, + 955 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[49] Kaihao Zhang, Wenhan Luo, Yiran Zhong, Lin Ma, Bjorn Stenger, Wei Liu, and Hongdong Li. Deblurring by realistic blurring. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2737-2746, 2020. 7", + "[50] Jianwei Zheng, Wei Li, Ni Xu, Junwei Zhu, Xiaoxu Lin, and Xiaqin Zhang. Alias-free mamba neural operator. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. 3" + ], + "bbox": [ + 91, + 90, + 480, + 218 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "17923", + "bbox": [ + 480, + 945, + 517, + 955 + ], + "page_idx": 10 + } +] \ No newline at end of file diff --git a/2025/ACL_ Activating Capability of Linear Attention for Image Restoration/b3cdf41c-47b4-40a8-888d-383f4742ab99_model.json b/2025/ACL_ Activating Capability of Linear Attention for Image Restoration/b3cdf41c-47b4-40a8-888d-383f4742ab99_model.json new file mode 100644 index 0000000000000000000000000000000000000000..e8ae79dd18960b3d37e16aed632549efdb6f63a1 --- /dev/null +++ b/2025/ACL_ Activating Capability of Linear Attention for Image Restoration/b3cdf41c-47b4-40a8-888d-383f4742ab99_model.json @@ -0,0 +1,2609 @@ +[ + [ + { + "type": "header", + "bbox": [ + 0.107, + 0.003, + 0.182, + 0.043 + ], + "angle": 0, + "content": "CVF" + }, + { + "type": "header", + "bbox": [ + 0.237, + 0.001, + 0.812, + 0.047 + ], + "angle": 0, + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." + }, + { + "type": "title", + "bbox": [ + 0.145, + 0.131, + 0.856, + 0.154 + ], + "angle": 0, + "content": "ACL: Activating Capability of Linear Attention for Image Restoration" + }, + { + "type": "text", + "bbox": [ + 0.293, + 0.18, + 0.71, + 0.199 + ], + "angle": 0, + "content": "Yubin Gu\\(^{1}\\), Yuan Meng\\(^{1}\\), Jiayi Ji\\(^{1,2}\\), Xiaoshuai Sun\\(^{1*}\\)" + }, + { + "type": "text", + "bbox": [ + 0.195, + 0.199, + 0.802, + 0.234 + ], + "angle": 0, + "content": "1Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University, 361005, P.R. 
China" + }, + { + "type": "text", + "bbox": [ + 0.316, + 0.234, + 0.681, + 0.253 + ], + "angle": 0, + "content": "\\(^{2}\\) National University of Singapore, Singapore" + }, + { + "type": "text", + "bbox": [ + 0.191, + 0.255, + 0.8, + 0.27 + ], + "angle": 0, + "content": "{guyubin,aprilmyy}@stu.xmu.edu.cn,jjyxmu@gmail.com,xssun@xmu.edu.cn" + }, + { + "type": "title", + "bbox": [ + 0.248, + 0.305, + 0.327, + 0.32 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.337, + 0.486, + 0.715 + ], + "angle": 0, + "content": "Image restoration (IR), a key area in computer vision, has entered a new era with deep learning. Recent research has shifted toward Selective State Space Models (Mamba) to overcome CNNs' limited receptive fields and Transformers' computational inefficiency. However, due to Mamba's inherent one-dimensional scanning limitations, recent approaches have introduced multi-directional scanning to bolster inter-sequence correlations. Despite these enhancements, these methods still struggle with managing local pixel correlations across various directions. Moreover, the recursive computation in Mamba's SSM leads to reduced efficiency. To resolve these issues, we exploit the mathematical congruences between linear attention and SSM within Mamba to propose a novel model, ACL, which leverages news designs to Activate the Capability of Linear attention for IR. ACL integrates linear attention blocks instead of SSM within Mamba, serving as the core component of encoders/decoders, and aims to preserve a global perspective while boosting computational efficiency. Furthermore, we have designed a simple yet robust local enhancement module with multi-scale dilated convolutions to extract both coarse and fine features to improve local detail recovery. Experimental results confirm that our ACL model excels in classical IR tasks such as de-raining and de-blurring, while maintaining relatively low parameter counts and FLOPs1." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.733, + 0.222, + 0.75 + ], + "angle": 0, + "content": "1. Introduction" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.759, + 0.485, + 0.866 + ], + "angle": 0, + "content": "In the field of image processing, restoring degraded images to clarity is a crucial technology. High-quality image restoration (IR) methods play a key role in ensuring the smooth progression of downstream vision tasks. Vanilla IR methods rely primarily on manually designed feature extraction, but often perform poorly when faced with complicated degradation factors in reality." + }, + { + "type": "image", + "bbox": [ + 0.523, + 0.302, + 0.905, + 0.46 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.513, + 0.473, + 0.907, + 0.502 + ], + "angle": 0, + "content": "Figure 1. Comparison of our method's core block design with those of recent mainstream approaches." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.535, + 0.907, + 0.822 + ], + "angle": 0, + "content": "Over the past decade, with the rapid development of deep learning, many fields have been propelled forward, such as image segmentation and generation [11, 12, 14, 17, 48], and of course, IR as well. The initial IR models were primarily based on CNN designs [6, 41], whose translational invariance and high inferential efficiency have made them widely applied, as depicted in the core structure in Fig. 1(a). 
However, these CNN-based models face challenges in processing global features and often require stacking additional network layers to compensate for this deficiency. With the rise of attention mechanisms [34], particularly the application of the Transformer, which boasts exceptional global feature modeling capabilities, an increasing number of researchers have shifted toward Transformer-based designs [23, 36, 44], achieving remarkable restoration results, as shown in the core structure in Fig. 1 (b). Although the Transformer broadens the global perspective, its softmax attention mechanism's quadratic computational complexity significantly reduces efficiency during inference." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.825, + 0.909, + 0.903 + ], + "angle": 0, + "content": "Recently, state-space models from the field of control science, especially the Mamba model [8], have been adopted by related IR methods [13, 30], demonstrating superior training and inference efficiency on sequential data. These models have even outperformed some Transformer-" + }, + { + "type": "page_footnote", + "bbox": [ + 0.109, + 0.875, + 0.322, + 0.888 + ], + "angle": 0, + "content": "*Corresponding Author: Xiaoshuai Sun" + }, + { + "type": "page_footnote", + "bbox": [ + 0.111, + 0.888, + 0.312, + 0.9 + ], + "angle": 0, + "content": "\\(^{1}\\)https://github.com/ClimBin/ACLNet" + }, + { + "type": "list", + "bbox": [ + 0.109, + 0.875, + 0.322, + 0.9 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "17913" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.092, + 0.482, + 0.333 + ], + "angle": 0, + "content": "based IR methods in terms of performance. Mamba is initially designed for one-dimensional sequence data modeling, so its direct adaptation to two-dimensional images poses challenges. The improved SSM module in Mamba, a one-dimensional unidirectional scanning, recursive computing structure, faces major issues when applied to images, including: 1) conversion of image data into a one-dimensional sequence increases the distance between adjacent pixels, leading to the loss of local relationships [30]; 2) unidirectional modeling neglects the spatial relationships of pixels in multiple directions. Recently, MambaIR [13] and VMambaIR [30] have adopted multi-directional scanning methods to mitigate these problems, as illustrated in the main structure in Fig. 1 (c), but these methods have yet to effectively and directly establish multi-directional connections in the spatial dimension of pixels." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.338, + 0.483, + 0.579 + ], + "angle": 0, + "content": "In response to the limitations of the Mamba model, we propose a method that aims to leverage the advantages of the Mamba structure while addressing its unidirectional modeling constraint. A direct approach to overcoming this limitation is to introduce a long-range dependency attention mechanism, such as Linear Attention (LA) or Softmax Attention (SA). Although SA generally outperforms LA in traditional vision tasks, its computational complexity is significantly higher than that of LA. Upon further analysis, we observe that LA and the State-Space Model (SSM) in Mamba share a highly similar mathematical formulation, which has also been deeply analyzed in prior work [15]. 
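For intuition about this similarity, the sketch below computes causal linear attention as a running-state recurrence, the form that Sec. 3.1 later compares against Mamba's SSM. It is a minimal NumPy illustration, assuming a non-negative kernel has already been applied to Q and K; the sequence length, feature dimension, and the small epsilon are illustrative choices, not part of the paper.

```python
import numpy as np

def causal_linear_attention(Q, K, V):
    """Causal linear attention evaluated as a recurrence.

    The state D accumulates K_j^T V_j and U accumulates K_j^T, so each
    step only updates a fixed-size state -- the same computational
    pattern as an SSM's hidden-state recursion.
    """
    L, d = Q.shape
    D = np.zeros((d, V.shape[1]))   # running sum of K_j^T V_j
    U = np.zeros(d)                 # running sum of K_j^T
    out = np.zeros_like(V)
    for i in range(L):
        D += np.outer(K[i], V[i])
        U += K[i]
        out[i] = (Q[i] @ D) / (Q[i] @ U + 1e-6)  # tiny eps for stability
    return out

rng = np.random.default_rng(0)
Q, K, V = rng.random((3, 8, 4))     # non-negative, as if kernelized
print(causal_linear_attention(Q, K, V).shape)  # (8, 4)
```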
Inspired by this insight, we replace the SSM module in Mamba with LA layers, thereby designing a novel foundational IR architecture, as illustrated in Fig. 1 (d), and propose a new IR model, ACL." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.585, + 0.483, + 0.901 + ], + "angle": 0, + "content": "Specifically, the proposed ACL consists of two main components. The first is the Mamba-based module with LA at its core (denoted as LAMA), which serves as the central part of the encoder/decoder and establishes global feature dependencies with linear computational complexity. Additionally, optimizing local features is equally essential. A recent approach involves partitioning the global features in the spatial domain into small windows and optimizing attention within these windows to enhance local feature modeling. While this approach has shown some effectiveness, it incurs high computational costs. In contrast, as the second component, we propose a simple yet efficient multi-scale dilated convolution module (MDC), which captures local features at varying granularities by employing different dilation factors, thus improving detail restoration. By combining the Mamba structure with LA, ACL activates the capability of LA, achieving advanced performance on two classic IR tasks, deblurring and deraining. Compared to Transformer-based models [32, 44], ACL not only significantly reduces computational cost and parameter count, but also achieves superior or comparable performance." + }, + { + "type": "text", + "bbox": [ + 0.533, + 0.092, + 0.835, + 0.105 + ], + "angle": 0, + "content": "Our contributions are summarized as follows:" + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.108, + 0.905, + 0.198 + ], + "angle": 0, + "content": "- We explore an alternative to CNN- and Transformer-based architectures for Image Restoration, providing global receptive fields while maintaining computational efficiency. Specifically, we propose ACL, which upgrades the SSM in Mamba with the LA structure, enabling the model to perform global multi-directional scanning." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.199, + 0.905, + 0.257 + ], + "angle": 0, + "content": "- The proposed ACL consists of two modules: LAMA and MDC. LAMA captures global feature dependencies, and MDC captures local features at varying granularities. Both modules enhance detail restoration." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.259, + 0.905, + 0.304 + ], + "angle": 0, + "content": "- The proposed ACL demonstrates advanced performance in de-raining and de-blurring tasks, proving its advantages in terms of parameter count and computational cost." + }, + { + "type": "list", + "bbox": [ + 0.513, + 0.108, + 0.905, + 0.304 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.32, + 0.655, + 0.336 + ], + "angle": 0, + "content": "2. Related Work" + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.345, + 0.7, + 0.361 + ], + "angle": 0, + "content": "2.1. Image Restoration" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.368, + 0.905, + 0.807 + ], + "angle": 0, + "content": "Image restoration (IR) technology provides clear visual data essential for numerous advanced downstream visual tasks. In recent years, the advent of deep learning has eclipsed traditional methods that rely on manually designed features, shifting the paradigm toward deep-learning-based approaches [9, 13, 20, 22, 45]. 
Initially, models were predominantly designed using CNNs, achieving significant advancements through sophisticated network designs that incorporated encoder-decoder patterns [5], dense connections, and residual connections [10]. However, CNN-based techniques face challenges in establishing global feature dependencies. With the evolution of Transformers in both Natural Language Processing (NLP) and Computer Vision (CV), many researchers have pivoted towards Transformer-based IR methods [23, 38, 44], leveraging their robust global receptive capabilities and marking substantial progress. Despite these improvements, the computational burden of calculating SA remains a drawback. Recently, the Mamba model [8], known for its efficient training and inference capabilities, has introduced a new potential paradigm in the IR field. Methods such as VMambaIR [30] and MambaIR [13], which employ multidirectional scanning strategies, aim to address the issues of unidirectional modeling inherent in Mamba's SSM blocks. Nonetheless, these methods have yet to establish multidirectional pixel connections directly. Thus, exploring how to utilize Mamba's superior design to establish comprehensive multidirectional global pixel correlations remains a promising direction." + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.818, + 0.724, + 0.833 + ], + "angle": 0, + "content": "2.2. Attention Mechanisms" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.84, + 0.905, + 0.901 + ], + "angle": 0, + "content": "Originally applied in the NLP field, attention mechanisms [34], particularly the Transformer with its SA mechanism, have achieved remarkable success. Subsequently, these mechanisms have been successfully adapted to the" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.52, + 0.957 + ], + "angle": 0, + "content": "17914" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.092, + 0.485, + 0.318 + ], + "angle": 0, + "content": "visual domain. In recent years, numerous IR methods based on the Transformer structure have emerged [1, 3, 23, 24], primarily utilizing various design modules to leverage self-attention mechanisms and enhance model efficiency. SwinIR [23] introduces shifted-window attention to boost performance. IPT [1] enhances local detail restoration by dividing images into multiple small windows and processing each window's features independently. However, adopting SA to establish global or local feature dependencies inevitably leads to quadratic computational complexity. Linear attention, which operates with linear complexity, has yet to match the performance of SA in classical vision tasks, and thus its application remains limited. In this paper, we introduce a new IR model, ACL, activating the potential of linear attention." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.33, + 0.27, + 0.346 + ], + "angle": 0, + "content": "2.3. State Space Model" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.353, + 0.485, + 0.624 + ], + "angle": 0, + "content": "The Mamba model, a newly proposed state space model (SSM), effectively facilitates sequence modeling with linear complexity [8]. Owing to its efficient training and inference speeds, many researchers have adapted it for visual tasks [16, 25, 30, 39, 50]. For instance, LocalMamba [16] employs a cross-scanning module to scan image spaces. VMambaIR [30] enhances multidirectional relationships between pixels by scanning images from six directions. 
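The cross-scanning strategy these methods rely on can be pictured with a toy example: flatten the same H x W grid in several orders, run each 1-D sequence through the sequence model, and merge the results. The snippet below shows only the flattening step and is a schematic sketch; real models such as MambaIR operate per channel on deep feature maps, and the exact set of directions varies by method.

```python
import numpy as np

def four_direction_scans(x):
    """Flatten an H x W map into four 1-D scan orders.

    Returns row-major, reverse row-major, column-major, and reverse
    column-major sequences -- each would be fed to a 1-D SSM and the
    outputs merged back onto the 2-D grid.
    """
    fwd = x.reshape(-1)      # left-to-right, top-to-bottom
    col = x.T.reshape(-1)    # top-to-bottom, left-to-right
    return fwd, fwd[::-1], col, col[::-1]

x = np.arange(12).reshape(3, 4)
for seq in four_direction_scans(x):
    print(seq)
```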
These methods focus on overcoming the limitations of unidirectional modeling in SSMs by proposing various scanning techniques. However, the features captured in each direction remain unidirectionally connected, and the extra scans may incur redundant computational costs. The Mamba model is efficient due to its structural design, yet its unidirectional scanning is not entirely suitable for images. Therefore, we combine linear attention with the Mamba structure to achieve a new balance between computational efficiency and restoration effectiveness." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.64, + 0.191, + 0.656 + ], + "angle": 0, + "content": "3. Methods" + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.666, + 0.291, + 0.682 + ], + "angle": 0, + "content": "3.1. Preliminary Analysis" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.688, + 0.483, + 0.764 + ], + "angle": 0, + "content": "In recent years, very few IR models based on linear attention have been proposed, as LA tends to perform slightly worse than SA in classical vision tasks. Leveraging the computational advantages of Linear Attention (LA) to further explore its potential in visual tasks is crucial." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.765, + 0.484, + 0.841 + ], + "angle": 0, + "content": "The improved SSM in Mamba shows significant potential in sequence processing. In fact, linear attention has a similar expression to SSM [15]. In linear attention, if the attention of the \\(i\\)-th token is restricted to depend only on the first \\(i\\) tokens, it is expressed as follows:" + }, + { + "type": "equation", + "bbox": [ + 0.154, + 0.853, + 0.484, + 0.905 + ], + "angle": 0, + "content": "\\[\n\\mathbf {A} _ {i} = \\frac {\\mathbf {Q} _ {i} \\left(\\sum_ {j = 1} ^ {i} \\mathbf {K} _ {j} ^ {\\top} \\mathbf {V} _ {j}\\right)}{\\mathbf {Q} _ {i} \\left(\\sum_ {j = 1} ^ {i} \\mathbf {K} _ {j} ^ {\\top}\\right)} = \\frac {\\left(\\mathbf {Q} _ {i} \\mathbf {D} _ {i}\\right)}{\\left(\\mathbf {Q} _ {i} \\mathbf {U} _ {i}\\right)}, \\tag {1}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.089, + 0.905, + 0.122 + ], + "angle": 0, + "content": "where \\(\\mathbf{D}_i = \\sum_{j=1}^i \\mathbf{K}_j^\\top \\mathbf{V}_j\\), \\(\\mathbf{U}_i = \\sum_{j=1}^i \\mathbf{K}_j^\\top\\). Therefore, the recursive expressions are:" + }, + { + "type": "equation", + "bbox": [ + 0.643, + 0.133, + 0.907, + 0.151 + ], + "angle": 0, + "content": "\\[\n\\mathbf {U} _ {i} = \\mathbf {U} _ {i - 1} + \\mathbf {K} _ {i} ^ {\\top}, \\tag {2}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.572, + 0.161, + 0.907, + 0.195 + ], + "angle": 0, + "content": "\\[\n\\mathbf {D} _ {i} = \\mathbf {D} _ {i - 1} + \\mathbf {K} _ {i} ^ {\\top} \\mathbf {V} _ {i}, \\quad \\mathbf {A} _ {i} = \\frac {\\left(\\mathbf {Q} _ {i} \\mathbf {D} _ {i}\\right)}{\\left(\\mathbf {Q} _ {i} \\mathbf {U} _ {i}\\right)}. \\tag {3}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.199, + 0.907, + 0.275 + ], + "angle": 0, + "content": "To enable applications in deep neural networks, it is necessary to discretize the initial SSM using zero-order hold [8]. This involves discretizing the continuous parameters \\(\\mathbf{A}\\) and \\(\\mathbf{B}\\) into \\(\\bar{\\mathbf{A}}\\) and \\(\\bar{\\mathbf{B}}\\) through the time scale parameter \\(\\Delta\\). 
The specific expressions are as follows:" + }, + { + "type": "equation", + "bbox": [ + 0.568, + 0.287, + 0.907, + 0.304 + ], + "angle": 0, + "content": "\\[\n\\mathbf {h} _ {i} = \\bar {\\mathbf {A}} \\mathbf {h} _ {i - 1} + \\mathbf {B} \\mathbf {x} _ {i}, \\quad \\mathbf {y} _ {i} = \\mathbf {C h} _ {i} + \\mathbf {D x} _ {i}, \\tag {4}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.314, + 0.905, + 0.36 + ], + "angle": 0, + "content": "where \\(\\bar{\\mathbf{A}} = \\exp (\\Delta \\mathbf{A})\\) and \\(\\bar{\\mathbf{B}} = (\\Delta \\mathbf{A})^{-1}(\\exp (\\Delta \\mathbf{A}) - I)\\Delta \\mathbf{B}\\approx \\Delta \\mathbf{B}\\). For simplicity, we have omitted the feature dimension information of each part in the formulas." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.361, + 0.906, + 0.436 + ], + "angle": 0, + "content": "Further, Mamba enhances the discrete SSM by making \\(\\bar{\\mathbf{A}}\\), \\(\\bar{\\mathbf{B}}\\), and \\(\\Delta\\) dependent on the input \\(\\mathbf{x}_i\\), breaking away from the assumption of input-independent models. Additionally, since \\(\\bar{\\mathbf{A}}_i\\) in Mamba is a diagonal matrix, we have \\(\\bar{\\mathbf{A}}_i = \\mathrm{diag}(\\tilde{\\mathbf{A}}_i)\\). The expression is thus transformed into:" + }, + { + "type": "equation", + "bbox": [ + 0.534, + 0.447, + 0.907, + 0.466 + ], + "angle": 0, + "content": "\\[\n\\mathbf {h} _ {i} = \\tilde {\\mathbf {A}} _ {i} \\mathbf {h} _ {i - 1} + \\mathbf {B} _ {i} (\\Delta_ {i} \\mathbf {x} _ {i}), \\quad \\mathbf {y} _ {i} = \\mathbf {C} _ {i} \\mathbf {h} _ {i} + \\mathbf {D} \\mathbf {x} _ {i}. \\tag {5}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.477, + 0.907, + 0.705 + ], + "angle": 0, + "content": "The primary distinctions between Eq. 3 and Eq. 5 are as follows: 1) The improved State Space Model (SSM) incorporates an additional parameter for hidden state transitions, denoted as \\(\\tilde{\\mathbf{A}}_i\\). This parameter functions similarly to a forget gate, filtering previous states to enhance selective retention. 2) An additional term, \\(\\mathbf{D}\\mathbf{x}_i\\), is introduced, akin to an input skip connection. In the Mamba model, which aims to achieve input-dependent modeling, \\(\\tilde{\\mathbf{A}}_i\\) must be recursively computed. Despite the utilization of hardware acceleration mechanisms, this process still adheres to unidirectional modeling. Linear attention represents an alternative form of SSM within Mamba and can transcend the limitations of unidirectional pixel modeling, presenting a potential capability. For a more in-depth analysis, one can refer to another outstanding analytical work [15]." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.713, + 0.788, + 0.728 + ], + "angle": 0, + "content": "3.2. Overall Structure of the Model" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.735, + 0.907, + 0.903 + ], + "angle": 0, + "content": "As illustrated in Fig. 2 (a), the proposed IR model, ACL, is based on an encoder-decoder architecture. In this model, multiple downsampled degraded images are fed into the main pathway of the encoder through lateral convolution layers, and images restored at three different scales are output during the decoding phase. Both the encoder and decoder comprise three fundamental core blocks, the structure of which is depicted in Fig. 2. Each core unit consists of several successive LAMA modules. 
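The two recursions compared above can be lined up step by step. Below is a minimal NumPy sketch, assuming scalar per-step inputs, illustrative dimensions, and random values where real Mamba computes its parameters from the input; it only makes the structural point that Eq. 5 adds the forget gate and the skip term to the Eq. 3 pattern.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 4, 8  # feature dim / hidden-state dim (illustrative)

def la_step(D, U, q, k, v):
    """Eq. 3: linear-attention recursion -- the states only accumulate."""
    D = D + np.outer(k, v)           # D_i = D_{i-1} + K_i^T V_i
    U = U + k                        # U_i = U_{i-1} + K_i^T
    return D, U, (q @ D) / (q @ U + 1e-6)

def ssm_step(h, x, A_bar, B, C, D_skip, delta):
    """Eq. 5: A_bar gates (forgets) the old state; D_skip*x is a skip."""
    h = A_bar * h + B * (delta * x)  # diagonal A_bar -> elementwise gate
    y = C @ h + D_skip * x
    return h, y

D, U = np.zeros((d, d)), np.zeros(d)
h, y = np.zeros(n), 0.0
for t in range(5):
    q, k, v = rng.random((3, d))     # non-negative "kernelized" q, k, v
    D, U, out = la_step(D, U, q, k, v)
    A_bar = np.exp(-rng.random(n))   # input-dependent in real Mamba
    h, y = ssm_step(h, rng.random(), A_bar, rng.random(n), rng.random(n), 0.5, 0.1)
print(out, y)
```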
Furthermore, as shown in the framework diagram, following two core units with higher feature resolution in both encoding and decoding" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "17915" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.094, + 0.088, + 0.495, + 0.282 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.242, + 0.291, + 0.347, + 0.304 + ], + "angle": 0, + "content": "(a) Overall Pipeline" + }, + { + "type": "image", + "bbox": [ + 0.496, + 0.089, + 0.613, + 0.282 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.487, + 0.291, + 0.619, + 0.304 + ], + "angle": 0, + "content": "(b) Encoder and Decoder" + }, + { + "type": "image", + "bbox": [ + 0.613, + 0.089, + 0.764, + 0.282 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.631, + 0.291, + 0.744, + 0.304 + ], + "angle": 0, + "content": "(c) LA-based Mamba" + }, + { + "type": "image", + "bbox": [ + 0.764, + 0.09, + 0.896, + 0.282 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.771, + 0.289, + 0.887, + 0.312 + ], + "angle": 0, + "content": "(d) Multi-Dilated Convolutions Module" + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.33, + 0.908, + 0.358 + ], + "angle": 0, + "content": "Figure 2. (a) The overall framework of ACL, based on the encoder-decoder architecture. (b) The core structure of the encoder-decoder, which includes the improved LA-based Mamba (LAMA) module. (c) The structure of the LAMA module. (d) The MDC module." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.385, + 0.483, + 0.446 + ], + "angle": 0, + "content": "phases, a local enhancement module is appended to augment the model's capability for local detail restoration. The structure of this local enhancement module is presented in Fig. 2 (d)." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.447, + 0.484, + 0.759 + ], + "angle": 0, + "content": "Specifically, the network process begins with a degraded image \\(\\mathbf{I} \\in \\mathbb{R}^{3 \\times 256 \\times 256}\\). The model first transforms this image through a convolution layer into a feature map \\(\\mathbf{I}' \\in \\mathbb{R}^{C \\times H \\times W}\\), expanding the number of feature channels from 3 to \\(C\\), where \\(C = 32\\). Subsequently, this feature map is further processed through three encoder units \\(E_{i}\\) and a local enhancement module, each unit encoding the feature maps at different scales into a latent space state, denoted as \\(\\mathbf{I}_e^i \\in \\mathbb{R}^{2^{(i-1)}C \\times \\frac{H}{2^{(i-1)}} \\times \\frac{W}{2^{(i-1)}}}\\), where \\(i = 1, 2, 3\\). Here, \\(C, H\\), and \\(W\\) respectively represent the number of feature channels, the height, and the width of the feature maps. These processed feature maps are then passed to the decoder, where they are fused through direct or skip connections. In the decoder, the feature maps are gradually restored by decoding units \\(D_{i}\\), generating \\(\\mathbf{I}_d^i \\in \\mathbb{R}^{2^{(i-1)}C \\times \\frac{H}{2^{(i-1)}} \\times \\frac{W}{2^{(i-1)}}}\\). Each feature map is processed by the subsequent \\(D_{i}\\) and, following lateral convolution operations, yields multi-scale output results, with \\(D_{1}\\) producing the final restored image. Subsequently, we will elaborate on the two key modules that constitute ACL and the model's optimization function." 
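To make the data flow above concrete, here is a shape-level PyTorch skeleton of the three-scale encoder-decoder pipeline; plain strided and transposed convolutions stand in for the LAMA stacks and MDC modules, and the specific kernel sizes and upsampler are assumptions made for the sketch, not the paper's exact layers.

```python
import torch
import torch.nn as nn

class TinyACLSkeleton(nn.Module):
    """Shape-level sketch of the Sec. 3.2 pipeline: three encoder scales
    (C, 2C, 4C with C=32), skip connections into the decoder, and a
    restored 3-channel image emitted at each decoding scale."""
    def __init__(self, c=32):
        super().__init__()
        self.stem = nn.Conv2d(3, c, 3, padding=1)
        self.enc = nn.ModuleList([
            nn.Conv2d(c, c, 3, padding=1),               # stands in for E1
            nn.Conv2d(c, 2 * c, 3, stride=2, padding=1), # E2 (downsample)
            nn.Conv2d(2 * c, 4 * c, 3, stride=2, padding=1),  # E3
        ])
        self.dec = nn.ModuleList([
            nn.ConvTranspose2d(4 * c, 2 * c, 2, stride=2),
            nn.ConvTranspose2d(2 * c, c, 2, stride=2),
        ])
        self.heads = nn.ModuleList([                     # lateral convs
            nn.Conv2d(4 * c, 3, 3, padding=1),
            nn.Conv2d(2 * c, 3, 3, padding=1),
            nn.Conv2d(c, 3, 3, padding=1),
        ])

    def forward(self, x):
        e1 = self.enc[0](self.stem(x))
        e2 = self.enc[1](e1)
        e3 = self.enc[2](e2)
        outs = [self.heads[0](e3)]          # coarsest restored image
        d2 = self.dec[0](e3) + e2           # skip connection
        outs.append(self.heads[1](d2))
        d1 = self.dec[1](d2) + e1
        outs.append(self.heads[2](d1))      # final restored image (D1)
        return outs

imgs = TinyACLSkeleton()(torch.randn(1, 3, 256, 256))
print([tuple(t.shape) for t in imgs])  # 64x64, 128x128, 256x256 outputs
```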
+ }, + { + "type": "title", + "bbox": [ + 0.091, + 0.771, + 0.434, + 0.787 + ], + "angle": 0, + "content": "3.3. Linear Attention-based Mamba Module" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.795, + 0.484, + 0.901 + ], + "angle": 0, + "content": "The original Mamba model is an auto-regressive model capable of efficiently capturing sequence dependencies, and it has been proven effective in modeling temporal causal sequence data. However, due to its unidirectional modeling approach, Mamba exhibits limitations when handling data with weak causality, such as images, necessitating further improvements to address these challenges. To overcome" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.385, + 0.907, + 0.505 + ], + "angle": 0, + "content": "the limitations of unidirectional modeling, recent methods have proposed multi-directional cross-scanning techniques for image processing. Unlike these approaches, we embed linear attention into the Mamba structure, enabling it to establish global pixel dependencies when processing image data, and avoiding the need for recursive computation of the forget matrix \\(\\mathbf{A}_i\\). The module structure is illustrated in Fig. 2 (c)." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.506, + 0.906, + 0.597 + ], + "angle": 0, + "content": "The input to the model is a feature map \\(\\mathbf{F} \\in \\mathbb{R}^{B \\times C \\times H \\times W}\\). First, \\(\\mathbf{F}\\) undergoes a dimensional transformation, resulting in \\(\\mathbf{F}' \\in \\mathbb{R}^{B \\times HW \\times C}\\). Subsequently, \\(\\mathbf{F}'\\) is passed into two branches: a main branch and a residual branch. The operations of the residual branch can be expressed as follows:" + }, + { + "type": "equation", + "bbox": [ + 0.631, + 0.612, + 0.905, + 0.629 + ], + "angle": 0, + "content": "\\[\n\\mathbf {F} _ {res} = \\sigma (\\operatorname{Linear} \\left(\\mathbf {F} ^ {\\prime}\\right)), \\tag {6}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.636, + 0.854, + 0.651 + ], + "angle": 0, + "content": "where \\(\\sigma (\\cdot)\\) represents the SiLU activation function." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.652, + 0.906, + 0.697 + ], + "angle": 0, + "content": "The main branch comprises a linear mapping layer, a convolutional layer, and linear attention. The process can be expressed as follows:" + }, + { + "type": "equation", + "bbox": [ + 0.624, + 0.712, + 0.905, + 0.729 + ], + "angle": 0, + "content": "\\[\n\\mathbf {F} _ {1} = \\operatorname{To4D} \\left(\\operatorname{Linear} \\left(\\mathbf {F} ^ {\\prime}\\right)\\right), \\tag {7}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.64, + 0.736, + 0.905, + 0.753 + ], + "angle": 0, + "content": "\\[\n\\mathbf {F} _ {2} = \\sigma \\left(\\operatorname{Conv} \\left(\\mathbf {F} _ {1}\\right)\\right), \\tag {8}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.76, + 0.906, + 0.82 + ], + "angle": 0, + "content": "where \\(\\mathrm{To4D}(\\cdot)\\) indicates reshaping the feature map into a four-dimensional tensor to adapt to the convolution operation. Next, linear attention is applied to \\(\\mathbf{F}_2\\). 
The specific process is expressed as follows:" + }, + { + "type": "equation", + "bbox": [ + 0.649, + 0.833, + 0.905, + 0.85 + ], + "angle": 0, + "content": "\\[\n\\mathbf {F} _ {2} ^ {\\prime} = \\operatorname{To3D} \\left(\\mathbf {F} _ {2}\\right), \\tag {9}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.552, + 0.861, + 0.905, + 0.878 + ], + "angle": 0, + "content": "\\[\n\\mathbf {Q} = \\phi \\left(\\operatorname{Linear} \\left(\\mathbf {F} _ {2} ^ {\\prime}\\right)\\right), \\quad \\mathbf {K} = \\phi \\left(\\operatorname{Linear} \\left(\\mathbf {F} _ {2} ^ {\\prime}\\right)\\right), \\tag {10}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.655, + 0.884, + 0.905, + 0.902 + ], + "angle": 0, + "content": "\\[\n\\mathbf {K} \\mathbf {V} = \\mathbf {K} ^ {\\top} \\cdot \\mathbf {F} _ {2} ^ {\\prime}, \\tag {11}\n\\]" + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.945, + 0.52, + 0.957 + ], + "angle": 0, + "content": "17916" + } + ], + [ + { + "type": "equation", + "bbox": [ + 0.225, + 0.092, + 0.482, + 0.108 + ], + "angle": 0, + "content": "\\[\n\\mathbf {F} _ {\\text{atten}} = \\mathbf {Q} \\cdot \\mathbf {K V}, \\tag {12}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.177, + 0.114, + 0.482, + 0.131 + ], + "angle": 0, + "content": "\\[\n\\mathbf {F} _ {\\text{atten}} = \\mathbf {F} _ {\\text{atten}} + \\operatorname{Conv} _ {\\text{pos}} (\\mathbf {F} _ {2}), \\tag {13}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.136, + 0.484, + 0.228 + ], + "angle": 0, + "content": "where \\(\\mathrm{To3D}(\\cdot)\\) reshapes the feature map into a three-dimensional tensor for the linear attention layer, \\(\\phi (\\cdot)\\) is the kernel function, and \\(\\mathrm{Conv}_{pos}(\\cdot)\\) is a learnable position embedding function. Subsequently, \\(\\mathbf{F}_{atten}\\) is multiplied by \\(\\mathbf{F}_{res}\\), followed by a linear mapping layer, yielding \\(\\mathbf{F}_{enhance}\\), which is expressed as:" + }, + { + "type": "equation", + "bbox": [ + 0.166, + 0.239, + 0.483, + 0.255 + ], + "angle": 0, + "content": "\\[\n\\mathbf {F} _ {\\text{enhance}} = \\operatorname{Linear} \\left(\\mathbf {F} _ {\\text{atten}} \\times \\mathbf {F} _ {\\text{res}}\\right). \\tag {14}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.266, + 0.484, + 0.297 + ], + "angle": 0, + "content": "Finally, \\(\\mathbf{F}_{\\textit{enhance}}\\) is processed through a simple feedforward neural network to produce the output of the LAMA." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.305, + 0.402, + 0.32 + ], + "angle": 0, + "content": "3.4. Multi-Dilated Convolutions Module" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.327, + 0.485, + 0.612 + ], + "angle": 0, + "content": "The LAMA module primarily functions to establish global feature connections, necessitating the learning of local features to enhance the quality of detail restoration. While some previous methods employed feature window-based self-attention strategies and achieved certain advancements, they incurred substantial computational costs. Consequently, we adopted a more straightforward and effective approach, namely the multi-scale dilated convolution module, the structure of which is depicted in Fig. 2(d). This module is equipped with filtering operations using various dilation factors aimed at capturing local features of different granularities within the image to enhance the detail restoration effects. 
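Before the MDC details, here is a rough PyTorch sketch of the LAMA branch computations in Eqs. (6)-(14) above. The depthwise convolutions, the ELU+1 choice for the kernel phi, single-head attention, and the omitted feed-forward tail are assumptions made for the sketch, not the paper's exact configuration; note that, as in Eq. (12), no softmax is used and the Q(K^T V) product keeps the cost linear in HW.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LAMASketch(nn.Module):
    """Simplified LAMA pass: residual branch (Eq. 6), main branch
    projection + conv (Eqs. 7-8), linear attention with a positional
    conv (Eqs. 9-13), and the gated output projection (Eq. 14)."""
    def __init__(self, c):
        super().__init__()
        self.res_proj = nn.Linear(c, c)                         # Eq. 6
        self.in_proj = nn.Linear(c, c)                          # Eq. 7
        self.dwconv = nn.Conv2d(c, c, 3, padding=1, groups=c)   # Eq. 8
        self.q_proj = nn.Linear(c, c)                           # Eq. 10
        self.k_proj = nn.Linear(c, c)
        self.pos = nn.Conv2d(c, c, 3, padding=1, groups=c)      # Eq. 13
        self.out_proj = nn.Linear(c, c)                         # Eq. 14

    def forward(self, x):                     # x: (B, C, H, W)
        b, c, h, w = x.shape
        f = x.flatten(2).transpose(1, 2)      # (B, HW, C)
        res = F.silu(self.res_proj(f))        # Eq. 6
        f1 = self.in_proj(f).transpose(1, 2).reshape(b, c, h, w)  # Eq. 7
        f2 = F.silu(self.dwconv(f1))          # Eq. 8
        f2s = f2.flatten(2).transpose(1, 2)   # Eq. 9, back to (B, HW, C)
        q = F.elu(self.q_proj(f2s)) + 1       # Eq. 10, positive kernel phi
        k = F.elu(self.k_proj(f2s)) + 1
        kv = k.transpose(1, 2) @ f2s          # Eq. 11, (B, C, C) state
        attn = q @ kv                         # Eq. 12, linear in HW
        attn = attn + self.pos(f2).flatten(2).transpose(1, 2)      # Eq. 13
        return self.out_proj(attn * res)      # Eq. 14 (FFN omitted)

y = LAMASketch(32)(torch.randn(1, 32, 16, 16))
print(y.shape)  # torch.Size([1, 256, 32])
```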
The module comprises two dilated convolutions along with skip connections. The input feature, denoted as \\(\\mathbf{F}\\), is split into two pathways: one passes through a convolution layer with a kernel size of 5 and a dilation rate of 2, and the other through a convolution layer with a kernel size of 3 and a dilation rate of 2, resulting in two feature sets:" + }, + { + "type": "equation", + "bbox": [ + 0.206, + 0.614, + 0.482, + 0.63 + ], + "angle": 0, + "content": "\\[\n\\mathbf {F} _ {1} = \\operatorname{Conv} _ {5 \\times 5, d = 2} (\\mathbf {F}), \\tag {15}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.207, + 0.636, + 0.482, + 0.653 + ], + "angle": 0, + "content": "\\[\n\\mathbf {F} _ {2} = \\operatorname{Conv} _ {3 \\times 3, d = 2} (\\mathbf {F}). \\tag {16}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.659, + 0.484, + 0.749 + ], + "angle": 0, + "content": "These are then concatenated to form \\(\\mathbf{F}'\\), which subsequently passes through a convolution layer with a kernel size of 1 to halve the channel count, aligning it with the dimensions of the input features. Furthermore, the input feature \\(\\mathbf{F}\\) is merged with \\(\\mathbf{F}'\\) via a skip connection, culminating in the output \\(\\mathbf{F}_{out}\\). The expressions are as follows:" + }, + { + "type": "equation", + "bbox": [ + 0.152, + 0.764, + 0.483, + 0.781 + ], + "angle": 0, + "content": "\\[\n\\mathbf {F} _ {\\text{out}} = \\operatorname{Conv} _ {1 \\times 1} \\left(\\operatorname{Concat} \\left(\\mathbf {F} _ {1}, \\mathbf {F} _ {2}\\right)\\right) + \\mathbf {F}. \\tag {17}\n\\]" + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.788, + 0.314, + 0.805 + ], + "angle": 0, + "content": "3.5. Optimization Objectives" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.811, + 0.484, + 0.901 + ], + "angle": 0, + "content": "The ACL model, during its decoding phase, outputs restoration results at three distinct scales and computes the corresponding loss values. Following prior methodologies, we calculate the loss values concurrently in both the spatial and frequency domains. The traditional L1 loss function is employed to measure the discrepancy between the restored" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.092, + 0.905, + 0.122 + ], + "angle": 0, + "content": "outputs and the pristine reference images. Consequently, the total loss is computed as follows:" + }, + { + "type": "equation", + "bbox": [ + 0.513, + 0.135, + 0.905, + 0.188 + ], + "angle": 0, + "content": "\\[\nL _ {total} = \\sum_ {i = 1} ^ {3} \\frac {1}{N _ {i}} \\left| \\mathbf {P} _ {i} - \\mathbf {I} _ {i} \\right| + \\lambda \\cdot \\sum_ {i = 1} ^ {3} \\frac {1}{N _ {i}} | \\mathrm{fft} (\\mathbf {P} _ {i}) - \\mathrm{fft} (\\mathbf {I} _ {i}) | \\tag {18}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.188, + 0.907, + 0.294 + ], + "angle": 0, + "content": "where \\(\\mathbf{P}_i\\) and \\(\\mathbf{I}_i\\) represent the restored result and the corresponding true image, respectively, and \\(N_i\\) denotes the total number of pixels in the image. \\(\\mathrm{fft}(\\cdot)\\) signifies the fast Fourier transform function. The hyperparameter \\(\\lambda\\), utilized to balance the contributions of spatial domain loss and frequency domain loss to the total loss, is set at 0.1, following previous methods." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.31, + 0.646, + 0.327 + ], + "angle": 0, + "content": "4. 
Experiments" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.335, + 0.906, + 0.427 + ], + "angle": 0, + "content": "This section focuses on showcasing the effectiveness of our proposed ACL model in addressing various degraded image tasks, such as deraining and deblurring, evaluated across six test sets. We will outline the experimental procedures and datasets utilized, and confirm the impact of the proposed modules through a series of ablation studies." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.438, + 0.722, + 0.454 + ], + "angle": 0, + "content": "4.1. Implementation Setup" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.459, + 0.907, + 0.717 + ], + "angle": 0, + "content": "For each type of degradation, datasets are trained and evaluated independently. Unless specifically mentioned, all tests are conducted with the same hyperparameters. The training set undergoes random cropping of \\(256 \\times 256\\) patches and random flipping as a data augmentation strategy. To compare computational complexity with other methods, FLOPs are tested at the mentioned crop size. A cosine annealing strategy is adopted to gradually adjust the learning rate each epoch, setting a minimum learning rate limit of 1e-6. The batch size is set to 8, and the Adam optimizer is adopted. Following the evaluation of previous methods, for deraining, the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) are calculated on the YCbCr color mode. For other tasks, evaluation metrics are calculated on RGB color mode. All experiments are implemented in an environment equipped with the NVIDIA 24GB 3090 GPUs, based on the PyTorch." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.728, + 0.792, + 0.744 + ], + "angle": 0, + "content": "4.2. Single Image Deraining Results" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.75, + 0.907, + 0.901 + ], + "angle": 0, + "content": "We utilized several different rain removal datasets, including Rain100L/H [40], Rain200L/H, and DID-Data [46], to evaluate the model's ability to restore images with streaklike degradation elements. Each dataset was independently trained for 800 epochs, with an initial learning rate set to 1e-3. We compared our approach with previous methods, including the advanced Restormer (Transformer) [44], NAFNet (CNN) [2], IRNeXt (CNN) [6], and MambaIR (Mamba) [13]. The comparative results are shown in Table 1 and Table 2. Upon comparison of PSNR, it can" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "17917" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.09, + 0.09, + 0.908, + 0.118 + ], + "angle": 0, + "content": "Table 1. Quantitative comparison results of the proposed model and seven other advanced models on the Rain100L and Rain100H. The larger the PSNR and SSIM values, the better the model effect." + }, + { + "type": "table", + "bbox": [ + 0.093, + 0.129, + 0.907, + 0.22 + ], + "angle": 0, + "content": "
MethodsRestormer [44]MAXIM [33]DRT [24]MPRNet [43]DAWN [19]IRNeXt [6]MambaIR [13]ACL(Ours)
DatasetPSNRSSIMPSNRSSIMPSNRSSIMPSNRSSIMPSNRSSIMPSNRSSIMPSNRSSIMPSNRSSIM
Rain100L38.990.97838.060.97737.610.94837.840.95936.730.97138.240.97238.780.97739.180.983
Rain100H31.460.90430.810.90329.470.84630.410.87430.620.89631.640.90230.620.89332.220.920
Average35.230.94134.440.94033.540.89734.130.91733.680.93434.940.93734.700.93535.700.952
" + }, + { + "type": "table_caption", + "bbox": [ + 0.09, + 0.232, + 0.908, + 0.259 + ], + "angle": 0, + "content": "Table 2. Quantitative comparison results of the proposed model with 13 other advanced models, including CNN and Transformer-based methods, on three datasets." + }, + { + "type": "table", + "bbox": [ + 0.093, + 0.271, + 0.907, + 0.525 + ], + "angle": 0, + "content": "
MethodsRain200LRain200HDID-DatasetAverage
PSNRSSIMPSNRSSIMPSNRSSIMPSNRSSIM
RESCAN [21]33.820.95526.220.82232.600.92730.880.901
PreNet [29]37.120.97629.040.89033.370.91933.180.928
DRT [24]38.810.98328.670.88033.880.92833.790.930
CCN [28]38.260.98129.990.91432.130.92433.460.940
Restormer [44]40.580.98630.830.91433.190.92634.870.942
Uformer [38]40.200.98630.310.91134.360.93334.960.943
MPRNet [43]39.820.98629.940.90034.500.93734.750.941
SmartAssign [37]38.410.98127.710.85433.110.91533.080.917
SFNet [7]39.500.98229.750.90134.510.93834.590.940
NAFNet [2]39.480.98229.190.88834.690.93734.450.936
ELFformer [18]38.850.98028.930.88533.540.93633.770.934
ESDNet [31]39.850.98630.010.91334.520.93934.790.946
MSGNN [35]39.090.98729.630.91833.110.92733.940.944
ACL(Ours)40.740.98830.450.91634.810.93835.330.947
" + }, + { + "type": "image", + "bbox": [ + 0.093, + 0.534, + 0.907, + 0.835 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.849, + 0.908, + 0.879 + ], + "angle": 0, + "content": "Figure 3. Visual comparison of ACL and other recent SOTA models on rainy image removal. The first two scenes contain slight rain streak degradation, while the last two scenes contain severe rain streak degradation." + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.946, + 0.52, + 0.957 + ], + "angle": 0, + "content": "17918" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.091, + 0.09, + 0.486, + 0.119 + ], + "angle": 0, + "content": "Table 3. Quantitative results of our method compared to recent approaches in blurry image restoration." + }, + { + "type": "table", + "bbox": [ + 0.097, + 0.128, + 0.477, + 0.316 + ], + "angle": 0, + "content": "
MethodsGoPro
PSNR ↑SSIM ↑FLOPs(G) ↓Param(M) ↓
MIMO [4]32.680.95961716.1
DMPHN [47]31.200.940-21.7
DBGAN [49]31.100.94275911.6
MPRNet [43]32.660.95977720.1
Restormer [44]32.920.96114026.1
IRNeXt [6]33.160.96211413.21
Stripformer [32]33.080.96217020
SSAMAN [42]33.530.96516518.3
LoFormer [26]33.730.9664716.4
Ours33.250.964554.6
" + }, + { + "type": "image", + "bbox": [ + 0.095, + 0.327, + 0.221, + 0.384 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.117, + 0.384, + 0.203, + 0.396 + ], + "angle": 0, + "content": "(a) Blur Region" + }, + { + "type": "image", + "bbox": [ + 0.225, + 0.326, + 0.351, + 0.384 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.251, + 0.385, + 0.325, + 0.395 + ], + "angle": 0, + "content": "(b) Reference" + }, + { + "type": "image", + "bbox": [ + 0.355, + 0.326, + 0.481, + 0.384 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.381, + 0.385, + 0.449, + 0.395 + ], + "angle": 0, + "content": "(c) DMPHN" + }, + { + "type": "image", + "bbox": [ + 0.095, + 0.396, + 0.222, + 0.453 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.129, + 0.453, + 0.192, + 0.464 + ], + "angle": 0, + "content": "(e) IRNeXt" + }, + { + "type": "image", + "bbox": [ + 0.225, + 0.396, + 0.352, + 0.452 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.251, + 0.453, + 0.325, + 0.464 + ], + "angle": 0, + "content": "(f) Restormer" + }, + { + "type": "image", + "bbox": [ + 0.355, + 0.396, + 0.481, + 0.452 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.392, + 0.453, + 0.439, + 0.465 + ], + "angle": 0, + "content": "(g) Ours" + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.482, + 0.483, + 0.512 + ], + "angle": 0, + "content": "Figure 4. Visual results of ACL and four other advanced models on motion blurred image restoration." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.537, + 0.484, + 0.614 + ], + "angle": 0, + "content": "be observed that our method outperforms the other compared methods, except for the powerful Transformer-based Restormer on the Rain200H. Additionally, Fig. 3 further illustrates the visual comparison results, where ACL demonstrates superior performance in restoring image details." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.621, + 0.379, + 0.639 + ], + "angle": 0, + "content": "4.3. Single Image Deblurring Results" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.643, + 0.483, + 0.826 + ], + "angle": 0, + "content": "We conducted evaluations on motion blur image restoration using the GoPro dataset [27], which includes 2,103 training images and 1,111 testing images. The compared methods include the recently proposed 9 advanced methods. In Table 3, we present the comparative results of various metrics on this dataset. As shown, ACL achieves advanced performance while maintaining a low parameter count and low FLOPs. Additionally, Fig. 4 displays visual comparison results with other methods. We selected critical numerical information within the images, and it can be observed that ACL also exhibits good performance in restoring motion-blurred images." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.834, + 0.258, + 0.849 + ], + "angle": 0, + "content": "4.4. Ablation Studies" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.856, + 0.484, + 0.903 + ], + "angle": 0, + "content": "To further understand the contributions of each module in the ACL model and other factors affecting model performance, we conducted a unified experiment on the" + }, + { + "type": "table_caption", + "bbox": [ + 0.513, + 0.09, + 0.907, + 0.119 + ], + "angle": 0, + "content": "Table 4. Comparison of ablation experiments between two modules on Rain100L." 
+ }, + { + "type": "table", + "bbox": [ + 0.526, + 0.129, + 0.894, + 0.197 + ], + "angle": 0, + "content": "
| Settings | PSNR ↑ | SSIM ↑ |
| --- | --- | --- |
| Baseline | 38.24 | 0.978 |
| Baseline + LAMA | 38.89 | 0.981 |
| Baseline + LAMA + MDC | 39.18 | 0.983 |
" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.222, + 0.907, + 0.314 + ], + "angle": 0, + "content": "Rain100L/H rain removal dataset. Specifically, we performed ablation studies on the two main modules of the model. Additionally, to verify that the linear attention capability can be restored using the Mamba structure, we conducted experiments by replacing LAMA with other structures to compare the results under different configurations." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.323, + 0.75, + 0.337 + ], + "angle": 0, + "content": "4.4.1. LAMA and MDC modules:" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.342, + 0.907, + 0.585 + ], + "angle": 0, + "content": "To validate the roles of the two main modules in ACL, namely LAMA and MDC, as well as their respective importance, we conducted ablation experiments. We removed the MDC module and replaced the core encoder/decoder modules with the structure shown in Fig. 1(b), where the Transformer block in the baseline model is implemented based on Linear Attention. We then gradually replaced the encoder-decoder modules with LAMA and added the MDC module to the baseline model, resulting in two different configurations, which were trained and tested separately. The experimental results are shown in Table 4. By comparing the \"Baseline\" and \"Baseline+LAMA\" configurations, it is evident that the Mamba structure, implemented with linear attention, achieves better results, proving the potential of the Mamba structure to activate linear attention. Besides, adding the MDC further enhances the model's performance." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.595, + 0.847, + 0.609 + ], + "angle": 0, + "content": "4.4.2. Different Mechanisms for Core Modules:" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.614, + 0.909, + 0.902 + ], + "angle": 0, + "content": "To validate the effectiveness of LAMA, we replaced the LA structure within LAMA with the original one-dimensional scanning SSM structure and the bidirectional scanning SSM structure for verification. Additionally, replacing LAMA with a standard LA structure resulted in a new configuration, referred to as the Baseline model. The results obtained from these various configurations are presented in Table 5. On one hand, the proposed method outperforms both the Baseline and the strategies employing unidirectional and multi-directional scanning for modeling. On the other hand, compared to the unidirectional Mamba model, the multi-directional scanning strategy demonstrates superior performance, indicating that unidirectional modeling is not optimal for image data. Furthermore, Fig. 5 provides visual examples corresponding to each configuration. It can be observed that while the SSM-based scanning methods are capable of removing rainy degradation elements, they result in significant loss of image details. In contrast, our method achieves superior visual outcomes." + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.52, + 0.958 + ], + "angle": 0, + "content": "17919" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.091, + 0.09, + 0.483, + 0.118 + ], + "angle": 0, + "content": "Table 5. Quantitative results of different mechanisms used in the core module on rainy streak removal." + }, + { + "type": "table", + "bbox": [ + 0.104, + 0.129, + 0.47, + 0.235 + ], + "angle": 0, + "content": "
| Settings | Rain100L PSNR | Rain100L SSIM | Rain100H PSNR | Rain100H SSIM |
| --- | --- | --- | --- | --- |
| Baseline | 38.24 | 0.978 | 31.02 | 0.913 |
| Mamba (w. 1D Scan) | 36.47 | 0.959 | 29.72 | 0.887 |
| Mamba (w. 2D Scan) | 38.07 | 0.977 | 30.93 | 0.907 |
| Ours | 39.18 | 0.983 | 32.22 | 0.920 |
" + }, + { + "type": "image", + "bbox": [ + 0.094, + 0.248, + 0.223, + 0.315 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.114, + 0.316, + 0.205, + 0.326 + ], + "angle": 0, + "content": "(a) Rainy Image" + }, + { + "type": "image", + "bbox": [ + 0.223, + 0.248, + 0.353, + 0.315 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.24, + 0.315, + 0.335, + 0.326 + ], + "angle": 0, + "content": "(b) Ground Truth" + }, + { + "type": "image", + "bbox": [ + 0.354, + 0.248, + 0.482, + 0.315 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.379, + 0.315, + 0.453, + 0.326 + ], + "angle": 0, + "content": "(c) LA Block" + }, + { + "type": "image", + "bbox": [ + 0.094, + 0.328, + 0.223, + 0.394 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.104, + 0.396, + 0.215, + 0.407 + ], + "angle": 0, + "content": "(d) 1D scan Mamba" + }, + { + "type": "image", + "bbox": [ + 0.225, + 0.328, + 0.353, + 0.394 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.232, + 0.396, + 0.343, + 0.407 + ], + "angle": 0, + "content": "(e) 2D scan Mamba" + }, + { + "type": "image", + "bbox": [ + 0.354, + 0.328, + 0.482, + 0.394 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.376, + 0.396, + 0.457, + 0.407 + ], + "angle": 0, + "content": "(f) LA Mamba" + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.425, + 0.483, + 0.454 + ], + "angle": 0, + "content": "Figure 5. Visual results of the encoder/decoder blocks adopting different mechanisms." + }, + { + "type": "table_caption", + "bbox": [ + 0.09, + 0.469, + 0.483, + 0.51 + ], + "angle": 0, + "content": "Table 6. Evaluation results of different \\( N_{i} \\) configurations on the Rain100L dataset in ACL. \\( \\star \\) represents the combination we adopted." + }, + { + "type": "table", + "bbox": [ + 0.113, + 0.522, + 0.46, + 0.602 + ], + "angle": 0, + "content": "
| (N1, N2, N3) | PSNR ↑ | SSIM ↑ |
| --- | --- | --- |
| (3,3,3) | 38.95 | 0.978 |
| (4,4,4) | 39.03 | 0.980 |
| (6,3,3) ★ | 39.18 | 0.983 |
| (6,6,6) | 39.19 | 0.983 |
" + }, + { + "type": "title", + "bbox": [ + 0.09, + 0.628, + 0.48, + 0.644 + ], + "angle": 0, + "content": "4.4.3. The Impact of the Number of Encoders/Decoders" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.647, + 0.483, + 0.844 + ], + "angle": 0, + "content": "As mentioned in the above, a crucial hyper-parameter in the model is the number of feature processing blocks at each encoding/decoding stage, denoted as \\( N_{i} \\). To investigate the impact of \\( N_{i} \\), we conducted a series of experiments on the deraining dataset using different combinations, with the results presented in Table 6. Additionally, Fig. 6 illustrates the output images generated by the models with various configurations. The numerical results indicate that increasing \\( N_{i} \\) contributes to performance improvement, but the effect is limited. Notably, a significant enhancement is observed when \\( N_{1} \\) is increased, suggesting that learning high-resolution feature maps plays a critical role in improving the model's recovery performance." + }, + { + "type": "title", + "bbox": [ + 0.09, + 0.852, + 0.483, + 0.867 + ], + "angle": 0, + "content": "4.4.4. The Dilation Rate of Convolution in MDC Module" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.871, + 0.483, + 0.901 + ], + "angle": 0, + "content": "We propose a local feature enhancement module, MDC, which includes two dilated convolution layers, as shown in" + }, + { + "type": "image", + "bbox": [ + 0.517, + 0.089, + 0.614, + 0.189 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.55, + 0.19, + 0.58, + 0.202 + ], + "angle": 0, + "content": "Input" + }, + { + "type": "image", + "bbox": [ + 0.615, + 0.09, + 0.712, + 0.188 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.644, + 0.19, + 0.68, + 0.202 + ], + "angle": 0, + "content": "(4,4,4)" + }, + { + "type": "image", + "bbox": [ + 0.712, + 0.09, + 0.808, + 0.188 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.743, + 0.19, + 0.777, + 0.202 + ], + "angle": 0, + "content": "(6,6,6)" + }, + { + "type": "image", + "bbox": [ + 0.809, + 0.09, + 0.905, + 0.188 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.839, + 0.19, + 0.874, + 0.202 + ], + "angle": 0, + "content": "(6,3,3)" + }, + { + "type": "image_caption", + "bbox": [ + 0.592, + 0.219, + 0.826, + 0.234 + ], + "angle": 0, + "content": "Figure 6. Visual Results on Various \\(N_{i}\\)" + }, + { + "type": "table_caption", + "bbox": [ + 0.512, + 0.261, + 0.906, + 0.303 + ], + "angle": 0, + "content": "Table 7. The evaluation results of the MDC module's convolution operations with varying dilation rates on the Rain100L dataset. \\(\\star\\) represents the combination we adopted." + }, + { + "type": "table", + "bbox": [ + 0.54, + 0.314, + 0.879, + 0.38 + ], + "angle": 0, + "content": "
| dilation | PSNR ↑ | SSIM ↑ |
| --- | --- | --- |
| 1 | 38.12 | 0.981 |
| 2 ★ | 39.18 | 0.983 |
| 3 | 39.10 | 0.981 |
" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.418, + 0.907, + 0.525 + ], + "angle": 0, + "content": "Fig. 2 (d). We further investigate the impact of dilation rates on the module's performance. As shown in Table 7, the results indicate that compared to the non-dilated convolution setting \\((d = 1)\\), the performance is inferior to the other two configurations. However, with increased dilation rates, performance improves to varying degrees, with the best overall performance observed at \\(d = 2\\)." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.57, + 0.764, + 0.586 + ], + "angle": 0, + "content": "4.5. Conclusion and Limitations" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.599, + 0.907, + 0.903 + ], + "angle": 0, + "content": "We introduced ACL, a novel image restoration model that addresses the limitations of traditional CNNs and Transformer-based approaches in handling global receptive fields and computational efficiency. By integrating linear attention into the Mamba structure, we developed the LAMA module, which enhances global feature dependencies with linear computational complexity. Additionally, the MDC module was designed to improve local detail restoration through multi-scale dilated convolutions. Our experiments confirmed that the ACL achieves promising performance in de-raining tasks, demonstrating its effectiveness in both quantitative metrics and visual quality. Furthermore, our work provides a new perspective on leveraging the Mamba structure in the IR domain. Nevertheless, our method also has some limitations. The ACL model does not have an advantage over CNN-based models when processing large-sized images. Moreover, in the image deblurring task, there is still a gap between ACL and SOTA Transformer models. In the future, the ACL model still has room for further optimization to adapt more IR tasks." + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "17920" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.092, + 0.091, + 0.251, + 0.108 + ], + "angle": 0, + "content": "Acknowledgments" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.114, + 0.486, + 0.227 + ], + "angle": 0, + "content": "This work was supported by the National Key R&D Program of China (No.2023YFB4502804), the National Science Fund for Distinguished Young Scholars (No.62025603), the National Natural Science Foundation of China (No. U22B2051, No. 62302411), the Natural Science Foundation of Fujian Province of China (No.2021J06003), and China Postdoctoral Science Foundation (No. 2023M732948)." + }, + { + "type": "title", + "bbox": [ + 0.093, + 0.254, + 0.188, + 0.269 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.279, + 0.484, + 0.348 + ], + "angle": 0, + "content": "[1] Hanting Chen, Yunhe Wang, Tianyu Guo, Chang Xu, Yiping Deng, Zhenhua Liu, Siwei Ma, Chunjing Xu, Chao Xu, and Wen Gao. Pre-trained image processing transformer. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 12299-12310, 2021. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.1, + 0.35, + 0.484, + 0.404 + ], + "angle": 0, + "content": "[2] Liangyu Chen, Xiaojie Chu, Xiangyu Zhang, and Jian Sun. Simple baselines for image restoration. In European conference on computer vision, pages 17-33. Springer, 2022. 
5, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.407, + 0.484, + 0.475 + ], + "angle": 0, + "content": "[3] Xiang Chen, Hao Li, Mingqiang Li, and Jinshan Pan. Learning a sparse transformer network for effective image de- raining. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5896-5905, 2023. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.477, + 0.484, + 0.545 + ], + "angle": 0, + "content": "[4] Sung-Jin Cho, Seo-Won Ji, Jun-Pyo Hong, Seung-Won Jung, and Sung-Jea Ko. Rethinking coarse-to-fine approach in single image deblurring. In Proceedings of the IEEE/CVF international conference on computer vision, pages 4641-4650, 2021. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.548, + 0.484, + 0.589 + ], + "angle": 0, + "content": "[5] Yuning Cui, Wenqi Ren, Xiaochun Cao, and Alois Knoll. Image restoration via frequency selection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.591, + 0.484, + 0.631 + ], + "angle": 0, + "content": "[6] Yuning Cui, Wenqi Ren, Sining Yang, Xiaochun Cao, and Alois Knoll. Irnext: Rethinking convolutional network design for image restoration. 2023. 1, 5, 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.633, + 0.484, + 0.7 + ], + "angle": 0, + "content": "[7] Yuning Cui, Yi Tao, Zhenshan Bing, Wenqi Ren, Xinwei Gao, Xiaochun Cao, Kai Huang, and Alois Knoll. Selective frequency network for image restoration. In The Eleventh International Conference on Learning Representations, 2023. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.704, + 0.484, + 0.744 + ], + "angle": 0, + "content": "[8] Albert Gu and Tri Dao. Mamba: Linear-time sequence modeling with selective state spaces. arXiv preprint arXiv:2312.00752, 2023. 1, 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.747, + 0.484, + 0.802 + ], + "angle": 0, + "content": "[9] Enxuan Gu, Hongwei Ge, and Yong Guo. Code: An explicit content decoupling framework for image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2920-2930, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.804, + 0.484, + 0.858 + ], + "angle": 0, + "content": "[10] Shuhang Gu, Yawei Li, Luc Van Gool, and Radu Timofte. Self-guided network for fast image denoising. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2511-2520, 2019. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.86, + 0.484, + 0.901 + ], + "angle": 0, + "content": "[11] Yubin Gu, Honghui Xu, Yueqian Quan, Wanjun Chen, and Jianwei Zheng. Orsi salient object detection via bidimensional attention and full-stage semantic guidance. IEEE" + }, + { + "type": "list", + "bbox": [ + 0.094, + 0.279, + 0.484, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.548, + 0.093, + 0.905, + 0.119 + ], + "angle": 0, + "content": "Transactions on Geoscience and Remote Sensing, 61:1-13, 2023. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.122, + 0.905, + 0.178 + ], + "angle": 0, + "content": "[12] Yubin Gu, Siting Chen, Xiaoshuai Sun, Jiayi Ji, Yiyi Zhou, and Rongrong Ji. Optical remote sensing image salient object detection via bidirectional cross-attention and attention restoration. Pattern Recognition, page 111478, 2025. 
1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.18, + 0.905, + 0.234 + ], + "angle": 0, + "content": "[13] Hang Guo, Jinmin Li, Tao Dai, Zhihao Ouyang, Xudong Ren, and Shu-Tao Xia. Mambair: A simple baseline for image restoration with state-space model. arXiv preprint arXiv:2402.15648, 2024. 1, 2, 5, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.236, + 0.905, + 0.305 + ], + "angle": 0, + "content": "[14] Tianyu Guo, Haowei Wang, Yiwei Ma, Jiayi Ji, and Xiaoshuai Sun. Improving panoptic narrative grounding by harnessing semantic relationships and visual confirmation. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 1985-1993, 2024. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.307, + 0.905, + 0.375 + ], + "angle": 0, + "content": "[15] Dongchen Han, Ziyi Wang, Zhuofan Xia, Yizeng Han, Yifan Pu, Chunjiang Ge, Jun Song, Shiji Song, Bo Zheng, and Gao Huang. Demystify mamba in vision: A linear attention perspective. Advances in Neural Information Processing Systems, 37:127181-127203, 2025. 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.378, + 0.905, + 0.432 + ], + "angle": 0, + "content": "[16] Tao Huang, Xiaohuan Pei, Shan You, Fei Wang, Chen Qian, and Chang Xu. Localmamba: Visual state space model with windowed selective scan. arXiv preprint arXiv:2403.09338, 2024. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.435, + 0.905, + 0.503 + ], + "angle": 0, + "content": "[17] Jiayi Ji, Haowei Wang, Changli Wu, Yiwei Ma, Xiaoshuai Sun, and Rongrong Ji. Jm3d jm3d-llm: Elevating 3d representation with joint multi-modal cues. IEEE Transactions on Pattern Analysis and Machine Intelligence, 47(4):2475–2492, 2025. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.505, + 0.905, + 0.56 + ], + "angle": 0, + "content": "[18] Kui Jiang, Zhongyuan Wang, Chen Chen, Zheng Wang, Laizhong Cui, and Chia-Wen Lin. Magic elf: Image deraining meets association learning and transformer. arXiv preprint arXiv:2207.10455, 2022. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.563, + 0.907, + 0.63 + ], + "angle": 0, + "content": "[19] Kui Jiang, Wenxuan Liu, Zheng Wang, Xian Zhong, Junjun Jiang, and Chia-Wen Lin. Dawn: Direction-aware attention wavelet network for image deraining. In Proceedings of the 31st ACM international conference on multimedia, pages 7065-7074, 2023. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.633, + 0.905, + 0.702 + ], + "angle": 0, + "content": "[20] Lingshun Kong, Jiangxin Dong, Jianjun Ge, Mingqiang Li, and Jinshan Pan. Efficient frequency domain-based transformers for high-quality image deblurring. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5886-5895, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.704, + 0.905, + 0.772 + ], + "angle": 0, + "content": "[21] Xia Li, Jianlong Wu, Zhouchen Lin, Hong Liu, and Hongbin Zha. Recurrent squeeze-and-excitation context aggregation net for single image deraining. In Proceedings of the European conference on computer vision (ECCV), pages 254-269, 2018. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.775, + 0.905, + 0.857 + ], + "angle": 0, + "content": "[22] Yawei Li, Yuchen Fan, Xiaoyu Xiang, Denis Demandolx, Rakesh Ranjan, Radu Timofte, and Luc Van Gool. Efficient and explicit modelling of image hierarchies for image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18278-18289, 2023. 
2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.859, + 0.905, + 0.901 + ], + "angle": 0, + "content": "[23] Jingyun Liang, Jiezhang Cao, Guolei Sun, Kai Zhang, Luc Van Gool, and Radu Timofte. Swinir: Image restoration using swim transformer. In Proceedings of the IEEE/CVF inter" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.093, + 0.907, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.518, + 0.957 + ], + "angle": 0, + "content": "17921" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.125, + 0.092, + 0.482, + 0.119 + ], + "angle": 0, + "content": "national conference on computer vision, pages 1833-1844, 2021. 1, 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.122, + 0.483, + 0.178 + ], + "angle": 0, + "content": "[24] Yuanchu Liang, Saeed Anwar, and Yang Liu. Drt: A lightweight single image deraining recursive transformer. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 589-598, 2022. 3, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.179, + 0.482, + 0.234 + ], + "angle": 0, + "content": "[25] Yue Liu, Yunjie Tian, Yuzhong Zhao, Hongtian Yu, Lingxi Xie, Yaowei Wang, Qixiang Ye, Jianbin Jiao, and Yunfan Liu. Vmamba: Visual state space model. Advances in neural information processing systems, 37:103031-103063, 2024. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.236, + 0.482, + 0.303 + ], + "angle": 0, + "content": "[26] Xintian Mao, Jiansheng Wang, Xingran Xie, Qingli Li, and Yan Wang. Loformer: Local frequency transformer for image deblurring. In Proceedings of the 32nd ACM International Conference on Multimedia, pages 10382-10391, 2024. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.307, + 0.482, + 0.374 + ], + "angle": 0, + "content": "[27] Seungjun Nah, Tae Hyun Kim, and Kyoung Mu Lee. Deep multi-scale convolutional neural network for dynamic scene deblurring. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3883-3891, 2017. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.377, + 0.483, + 0.432 + ], + "angle": 0, + "content": "[28] Ruijie Quan, Xin Yu, Yuanzhi Liang, and Yi Yang. Removing raindrops and rain streaks in one go. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9147-9156, 2021. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.434, + 0.482, + 0.503 + ], + "angle": 0, + "content": "[29] Dongwei Ren, Wangmeng Zuo, Qinghua Hu, Pengfei Zhu, and Deyu Meng. Progressive image deraining networks: A better and simpler baseline. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 3937-3946, 2019. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.505, + 0.482, + 0.56 + ], + "angle": 0, + "content": "[30] Yuan Shi, Bin Xia, Xiaoyu Jin, Xing Wang, Tianyu Zhao, Xin Xia, Xuefeng Xiao, and Wenming Yang. Vmambair: Visual state space model for image restoration. arXiv preprint arXiv:2403.11423, 2024. 1, 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.562, + 0.482, + 0.602 + ], + "angle": 0, + "content": "[31] Tianyu Song, Guiyue Jin, Pengpeng Li, Kui Jiang, Xiang Chen, and Jiyu Jin. Learning a spiking neural network for efficient image deraining. *IJCAI*, 2024. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.604, + 0.482, + 0.659 + ], + "angle": 0, + "content": "[32] Fu-Jen Tsai, Yan-Tsung Peng, Yen-Yu Lin, Chung-Chi Tsai, and Chia-Wen Lin. 
Stripformer: Strip transformer for fast image deblurring. In European Conference on Computer Vision, pages 146-162. Springer, 2022. 2, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.661, + 0.482, + 0.731 + ], + "angle": 0, + "content": "[33] Zhengzhong Tu, Hossein Talebi, Han Zhang, Feng Yang, Peyman Milanfar, Alan Bovik, and Yinxiao Li. Maxim: Multi-axis mlp for image processing. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5769-5780, 2022. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.733, + 0.482, + 0.787 + ], + "angle": 0, + "content": "[34] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.789, + 0.482, + 0.83 + ], + "angle": 0, + "content": "[35] Cong Wang, Wei Wang, Chengjin Yu, and Jie Mu. Explore internal and external similarity for single image deraining with graph neural networks. *IJCAI*, 2024. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.832, + 0.482, + 0.899 + ], + "angle": 0, + "content": "[36] Qiong Wang, Kui Jiang, Jinyi Lai, Zheng Wang, and Jianhui Zhang. Hpcnet: A hybrid progressive coupled network for image deraining. In 2023 IEEE International Conference on Multimedia and Expo (ICME), pages 2747-2752. IEEE, 2023. 1" + }, + { + "type": "list", + "bbox": [ + 0.093, + 0.092, + 0.483, + 0.899 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.093, + 0.905, + 0.161 + ], + "angle": 0, + "content": "[37] Yinglong Wang, Chao Ma, and Jianzhuang Liu. Smartassign: Learning a smart knowledge assignment strategy for deraining and desnowing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3677-3686, 2023. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.163, + 0.905, + 0.232 + ], + "angle": 0, + "content": "[38] Zhendong Wang, Xiaodong Cun, Jianmin Bao, Wengang Zhou, Jianzhuang Liu, and Houqiang Li. Uformer: A general u-shaped transformer for image restoration. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 17683-17693, 2022. 2, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.233, + 0.905, + 0.287 + ], + "angle": 0, + "content": "[39] Xinyu Xie, Yawen Cui, Tao Tan, Xubin Zheng, and Zitong Yu. Fusionmamba: Dynamic feature enhancement for multimodal image fusion with mamba. Visual Intelligence, 2(1): 37, 2024. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.288, + 0.905, + 0.357 + ], + "angle": 0, + "content": "[40] Wenhan Yang, Robby T Tan, Jiashi Feng, Jiaying Liu, Zongming Guo, and Shuicheng Yan. Deep joint rain detection and removal from a single image. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1357-1366, 2017. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.358, + 0.905, + 0.413 + ], + "angle": 0, + "content": "[41] Hu Yu, Naishan Zheng, Man Zhou, Jie Huang, Zeyu Xiao, and Feng Zhao. Frequency and spatial dual guidance for image dehazing. In European Conference on Computer Vision, pages 181-198. Springer, 2022. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.414, + 0.905, + 0.481 + ], + "angle": 0, + "content": "[42] Anas Zafar, Danyal Aftab, Rizwan Qureshi, Xinqi Fan, Pingjun Chen, Jia Wu, Hazrat Ali, Shah Nawaz, Sheheryar Khan, and Mubarak Shah. 
Single stage adaptive multi-attention network for image restoration. IEEE TIP, 2024. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.484, + 0.905, + 0.552 + ], + "angle": 0, + "content": "[43] Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Ming-Hsuan Yang, and Ling Shao. Multi-stage progressive image restoration. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 14821-14831, 2021. 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.554, + 0.905, + 0.635 + ], + "angle": 0, + "content": "[44] Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, and Ming-Hsuan Yang. Restormer: Efficient transformer for high-resolution image restoration. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5728-5739, 2022. 1, 2, 5, 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.637, + 0.905, + 0.691 + ], + "angle": 0, + "content": "[45] Wengyi Zhan, Mingbao Lin, Chia-Wen Lin, and Rongrong Ji. Anysr: Realizing image super-resolution as any-scale, any-resource. IEEE Transactions on Image Processing, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.693, + 0.905, + 0.748 + ], + "angle": 0, + "content": "[46] He Zhang and Vishal M Patel. Density-aware single image de-raining using a multi-stream dense network. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 695-704, 2018. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.749, + 0.905, + 0.816 + ], + "angle": 0, + "content": "[47] Hongguang Zhang, Yuchao Dai, Hongdong Li, and Piotr Koniusz. Deep stacked hierarchical multi-patch network for image deblurring. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5978-5986, 2019. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.818, + 0.905, + 0.901 + ], + "angle": 0, + "content": "[48] Jinlu Zhang, Yiyi Zhou, Qiancheng Zheng, Xiaoxiong Du, Gen Luo, Jun Peng, Xiaoshuai Sun, and Rongrong Ji. Fast text-to-3D-aware face generation and manipulation via direct cross-modal mapping and geometric regularization. In Proceedings of the 41st International Conference on Machine Learning, pages 60605–60625. PMLR, 2024. 1" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.093, + 0.905, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.52, + 0.957 + ], + "angle": 0, + "content": "17922" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.092, + 0.482, + 0.16 + ], + "angle": 0, + "content": "[49] Kaihao Zhang, Wenhan Luo, Yiran Zhong, Lin Ma, Bjorn Stenger, Wei Liu, and Hongdong Li. Deblurring by realistic blurring. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2737-2746, 2020. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.163, + 0.482, + 0.219 + ], + "angle": 0, + "content": "[50] Jianwei Zheng, Wei Li, Ni Xu, Junwei Zhu, Xiaoxu Lin, and Xiaqin Zhang. Alias-free mamba neural operator. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. 
3" + }, + { + "type": "list", + "bbox": [ + 0.093, + 0.092, + 0.482, + 0.219 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.946, + 0.518, + 0.956 + ], + "angle": 0, + "content": "17923" + } + ] +] \ No newline at end of file diff --git a/2025/ACL_ Activating Capability of Linear Attention for Image Restoration/b3cdf41c-47b4-40a8-888d-383f4742ab99_origin.pdf b/2025/ACL_ Activating Capability of Linear Attention for Image Restoration/b3cdf41c-47b4-40a8-888d-383f4742ab99_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..10adcff5f869759d5439c0ac3eee293fe0762d6e --- /dev/null +++ b/2025/ACL_ Activating Capability of Linear Attention for Image Restoration/b3cdf41c-47b4-40a8-888d-383f4742ab99_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f2f73f4b5f659de59443f8676d35a749e196ebb93f55dded0bb4f2dad13dbfe1 +size 3054281 diff --git a/2025/ACL_ Activating Capability of Linear Attention for Image Restoration/full.md b/2025/ACL_ Activating Capability of Linear Attention for Image Restoration/full.md new file mode 100644 index 0000000000000000000000000000000000000000..9811a653e2690a107ca062133d8633bc21440fa0 --- /dev/null +++ b/2025/ACL_ Activating Capability of Linear Attention for Image Restoration/full.md @@ -0,0 +1,386 @@ +# ACL: Activating Capability of Linear Attention for Image Restoration + +Yubin Gu $^{1}$ , Yuan Meng $^{1}$ , Jiayi Ji $^{1,2}$ , Xiaoshuai Sun $^{1*}$ + +1Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University, 361005, P.R. China + +$^{2}$ National University of Singapore, Singapore + +{guyubin,aprilmyy}@stu.xmu.edu.cn,jjyxmu@gmail.com,xssun@xmu.edu.cn + +# Abstract + +Image restoration (IR), a key area in computer vision, has entered a new era with deep learning. Recent research has shifted toward Selective State Space Models (Mamba) to overcome CNNs' limited receptive fields and Transformers' computational inefficiency. However, due to Mamba's inherent one-dimensional scanning limitations, recent approaches have introduced multi-directional scanning to bolster inter-sequence correlations. Despite these enhancements, these methods still struggle with managing local pixel correlations across various directions. Moreover, the recursive computation in Mamba's SSM leads to reduced efficiency. To resolve these issues, we exploit the mathematical congruences between linear attention and SSM within Mamba to propose a novel model, ACL, which leverages news designs to Activate the Capability of Linear attention for IR. ACL integrates linear attention blocks instead of SSM within Mamba, serving as the core component of encoders/decoders, and aims to preserve a global perspective while boosting computational efficiency. Furthermore, we have designed a simple yet robust local enhancement module with multi-scale dilated convolutions to extract both coarse and fine features to improve local detail recovery. Experimental results confirm that our ACL model excels in classical IR tasks such as de-raining and de-blurring, while maintaining relatively low parameter counts and FLOPs1. + +# 1. Introduction + +In the field of image processing, restoring degraded images to clarity is a crucial technology. High-quality image restoration (IR) methods play a key role in ensuring the smooth progression of downstream vision tasks. 
Vanilla IR methods rely primarily on manually designed feature extraction but often perform poorly when faced with the complicated degradation factors encountered in real scenes.

![](images/87e0ea1dca4d6347f547973ef4454a772c0dd4e8960c638c717732cde503c547.jpg)
Figure 1. Comparison of our method's core block design with those of recent mainstream approaches.

Over the past decade, with the rapid development of deep learning, many fields have been propelled forward, such as image segmentation and generation [11, 12, 14, 17, 48], and of course, IR as well. The initial IR models were primarily based on CNN designs [6, 41], whose translational invariance and high inference efficiency have made them widely applied, as depicted in the core structure in Fig. 1 (a). However, these CNN-based models face challenges in processing global features and often require stacking additional network layers to compensate for this deficiency. With the rise of attention mechanisms [34], particularly the application of the Transformer, which boasts exceptional global feature modeling capabilities, an increasing number of researchers have shifted toward Transformer-based designs [23, 36, 44], achieving remarkable restoration results, as shown in the core structure in Fig. 1 (b). Although the Transformer broadens the global perspective, the quadratic computational complexity of its softmax attention mechanism significantly reduces efficiency during inference.

Recently, state-space models from the field of control science, especially the Mamba model [8], have been adopted by related IR methods [13, 30], demonstrating superior training and inference efficiency on sequential data. These models have even outperformed some Transformer-based IR methods in terms of performance. However, Mamba was initially designed for one-dimensional sequence modeling, so directly adapting it to two-dimensional images poses challenges. The improved SSM module in Mamba, a one-dimensional, unidirectionally scanned, recursively computed structure, faces two major issues when applied to images: 1) converting image data into a one-dimensional sequence increases the distance between adjacent pixels, leading to the loss of local relationships [30]; 2) unidirectional modeling neglects the spatial relationships of pixels in multiple directions. Recently, MambaIR [13] and VMambaIR [30] have adopted multi-directional scanning methods to mitigate these problems, as illustrated in the main structure in Fig. 1 (c), but these methods still fail to directly establish multi-directional connections among pixels in the spatial dimension.

In response to the limitations of the Mamba model, we propose a method that aims to leverage the advantages of the Mamba structure while addressing its unidirectional modeling constraint. A direct approach to overcoming this limitation is to introduce an attention mechanism that captures long-range dependencies, such as Linear Attention (LA) or Softmax Attention (SA). Although SA generally outperforms LA in traditional vision tasks, its computational complexity is significantly higher than that of LA. Upon further analysis, we observe that LA and the State-Space Model (SSM) in Mamba share a highly similar mathematical formulation, which has been analyzed in depth in [15]. Inspired by this insight, we replace the SSM module in Mamba with LA layers, thereby designing a novel foundational IR architecture, as illustrated in Fig. 1 (d), and propose a new IR model, ACL.
Specifically, the proposed ACL consists of two main components. The first is a Mamba-based module with LA at its core (denoted as LAMA), which serves as the central part of the encoder/decoder and establishes global feature dependencies with linear computational complexity. Optimizing local features is equally essential. A recent line of work partitions the global features in the spatial domain into small windows and optimizes attention within these windows to enhance local feature modeling; while this approach has shown some effectiveness, it incurs high computational costs. In contrast, the second component is a simple yet efficient multi-scale dilated convolution module (MDC), which captures local features at varying granularities by employing different dilation factors, thus improving detail restoration. By combining the Mamba structure with LA, ACL activates the capability of LA, achieving advanced performance on two classic IR tasks, deblurring and deraining. Compared to Transformer-based models [32, 44], ACL not only significantly reduces computational cost and parameter count but also achieves superior or comparable performance.

Our contributions are summarized as follows:

- We explore an alternative to CNN- and Transformer-based architectures for image restoration, providing global receptive fields while maintaining computational efficiency. Specifically, we propose ACL, which replaces the SSM in Mamba with the LA structure, enabling the model to perform global multi-directional modeling.
- The proposed ACL consists of two modules: LAMA and MDC. LAMA captures global feature dependencies, and MDC captures local features at varying granularities. Both modules enhance detail restoration.
- The proposed ACL demonstrates advanced performance in de-raining and de-blurring tasks, proving its advantages in terms of parameter count and computational cost.

# 2. Related Work

# 2.1. Image Restoration

Image restoration (IR) technology provides clear visual data essential for numerous advanced downstream visual tasks. In recent years, the advent of deep learning has eclipsed traditional methods that rely on manually designed features, shifting the paradigm toward deep-learning-based approaches [9, 13, 20, 22, 45]. Initially, models were predominantly designed using CNNs, achieving significant advancements through sophisticated network designs that incorporated encoder-decoder patterns [5], dense connections, and residual connections [10]. However, CNN-based techniques face challenges in establishing global feature dependencies. With the evolution of Transformers in both Natural Language Processing (NLP) and Computer Vision (CV), many researchers have pivoted towards Transformer-based IR methods [23, 38, 44], leveraging their robust global receptive capabilities and marking substantial progress. Despite these improvements, the computational burden of calculating SA remains a drawback. Recently, the Mamba model [8], known for its efficient training and inference capabilities, has introduced a potential new paradigm in the IR field. Methods such as VMambaIR [30] and MambaIR [13], which employ multidirectional scanning strategies, aim to address the issues of unidirectional modeling inherent in Mamba's SSM blocks. Nonetheless, these methods have yet to establish multidirectional pixel connections directly. Thus, exploring how to utilize Mamba's superior design to establish comprehensive multidirectional global pixel correlations remains a promising direction.

# 2.2. Attention Mechanisms
Originally applied in the NLP field, attention mechanisms [34], particularly the Transformer with its SA mechanism, have achieved remarkable success. Subsequently, these mechanisms have been successfully adapted to the visual domain. In recent years, numerous IR methods based on the Transformer structure have emerged [1, 3, 23, 24], primarily utilizing various design modules to leverage self-attention mechanisms and enhance model efficiency. SwinIR [23] introduces shifted-window attention to boost performance. IPT [1] enhances local detail restoration by dividing images into multiple small windows and processing each window's features independently. However, adopting SA to establish global or local feature dependencies inevitably leads to quadratic computational complexity. Linear attention operates with linear complexity but has yet to match the performance of SA in classical vision tasks, so its application has remained limited. In this paper, we introduce a new IR model, ACL, that activates the potential of linear attention.

# 2.3. State Space Model

The Mamba model, a newly proposed state space model (SSM), effectively facilitates sequence modeling with linear complexity [8]. Owing to its efficient training and inference speeds, many researchers have adapted it for visual tasks [16, 25, 30, 39, 50]. For instance, LocalMamba [16] employs a cross-scanning module to scan image spaces. VMambaIR [30] enhances multidirectional relationships between pixels by scanning images from six directions. These methods focus on overcoming the limitations of unidirectional modeling in SSMs by proposing various scanning techniques. However, the features captured in each direction remain unidirectionally connected, which potentially leads to redundant computational costs. The Mamba model is efficient due to its structural design, yet its unidirectional scanning is not entirely suitable for images. Therefore, we combine linear attention with the Mamba structure to achieve a new balance between computational efficiency and restoration effectiveness.

# 3. Methods

# 3.1. Preliminary Analysis

In recent years, very few IR models based on linear attention have been proposed, as linear attention tends to perform slightly worse than SA in classical vision tasks. Leveraging the computational advantages of Linear Attention (LA) to further explore its potential in visual tasks is therefore crucial.

The improved SSM in Mamba shows significant potential in sequence processing. In fact, linear attention has a similar expression to the SSM [15]. In linear attention, if the attention of the $i$ -th token is restricted to depend only on the first $i$ tokens, it is expressed as follows:

$$
\mathbf {A} _ {i} = \frac {\mathbf {Q} _ {i} \left(\sum_ {j = 1} ^ {i} \mathbf {K} _ {j} ^ {\top} \mathbf {V} _ {j}\right)}{\mathbf {Q} _ {i} \left(\sum_ {j = 1} ^ {i} \mathbf {K} _ {j} ^ {\top}\right)} = \frac {\left(\mathbf {Q} _ {i} \mathbf {D} _ {i}\right)}{\left(\mathbf {Q} _ {i} \mathbf {U} _ {i}\right)}, \tag {1}
$$

where $\mathbf{D}_i = \sum_{j=1}^i \mathbf{K}_j^\top \mathbf{V}_j$ and $\mathbf{U}_i = \sum_{j=1}^i \mathbf{K}_j^\top$ . Therefore, the recursive expressions are:

$$
\mathbf {U} _ {i} = \mathbf {U} _ {i - 1} + \mathbf {K} _ {i} ^ {\top}, \tag {2}
$$

$$
\mathbf {D} _ {i} = \mathbf {D} _ {i - 1} + \mathbf {K} _ {i} ^ {\top} \mathbf {V} _ {i}, \quad \mathbf {A} _ {i} = \frac {\left(\mathbf {Q} _ {i} \mathbf {D} _ {i}\right)}{\left(\mathbf {Q} _ {i} \mathbf {U} _ {i}\right)}. \tag {3}
$$
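To make the recurrence concrete, the following is a minimal NumPy sketch of Eqs. (1)-(3); the sequence length, feature sizes, and the positive feature map standing in for the kernel function are illustrative assumptions rather than part of the model:

```python
import numpy as np

def causal_linear_attention(Q, K, V):
    """Recurrent form of Eqs. (2)-(3): accumulate U_i (sum of K_j^T) and
    D_i (sum of K_j^T V_j), then read out A_i = (Q_i D_i) / (Q_i U_i)."""
    n, d = Q.shape
    U = np.zeros(d)                # U_0
    D = np.zeros((d, V.shape[1]))  # D_0
    out = np.empty_like(V)
    for i in range(n):
        U += K[i]                         # Eq. (2): U_i = U_{i-1} + K_i^T
        D += np.outer(K[i], V[i])         # Eq. (3): D_i = D_{i-1} + K_i^T V_i
        out[i] = (Q[i] @ D) / (Q[i] @ U)  # Eq. (3): A_i
    return out

# Positive features keep the denominator non-zero (an illustrative choice).
rng = np.random.default_rng(0)
Q, K, V = (np.exp(rng.normal(size=(8, 4))) for _ in range(3))
print(causal_linear_attention(Q, K, V).shape)  # (8, 4)
```

Because each step reuses the running sums, the whole sequence is processed in time linear in its length, which is precisely the linear-complexity property exploited below.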
To enable applications in deep neural networks, it is necessary to discretize the initial SSM using a zero-order hold [8]. This involves discretizing the continuous parameters $\mathbf{A}$ and $\mathbf{B}$ into $\bar{\mathbf{A}}$ and $\bar{\mathbf{B}}$ through the time scale parameter $\Delta$ . The specific expressions are as follows:

$$
\mathbf {h} _ {i} = \bar {\mathbf {A}} \mathbf {h} _ {i - 1} + \bar {\mathbf {B}} \mathbf {x} _ {i}, \quad \mathbf {y} _ {i} = \mathbf {C} \mathbf {h} _ {i} + \mathbf {D} \mathbf {x} _ {i}, \tag {4}
$$

where $\bar{\mathbf{A}} = \exp (\Delta \mathbf{A})$ and $\bar{\mathbf{B}} = (\Delta \mathbf{A})^{-1}(\exp (\Delta \mathbf{A}) - I)\Delta \mathbf{B}\approx \Delta \mathbf{B}$ . For simplicity, we have omitted the feature dimension information of each part in the formulas.

Further, Mamba enhances the discrete SSM by making $\bar{\mathbf{A}}$ , $\bar{\mathbf{B}}$ , and $\Delta$ dependent on the input $\mathbf{x}_i$ , breaking away from the assumption of input-independent models. Additionally, since $\bar{\mathbf{A}}_i$ in Mamba is a diagonal matrix, we write $\tilde{\mathbf{A}}_i = \operatorname{diag}(\bar{\mathbf{A}}_i)$ . The expression is thus transformed into:

$$
\mathbf {h} _ {i} = \tilde {\mathbf {A}} _ {i} \mathbf {h} _ {i - 1} + \mathbf {B} _ {i} (\Delta_ {i} \mathbf {x} _ {i}), \quad \mathbf {y} _ {i} = \mathbf {C} _ {i} \mathbf {h} _ {i} + \mathbf {D} \mathbf {x} _ {i}. \tag {5}
$$

The primary distinctions between Eq. 3 and Eq. 5 are as follows: 1) the improved State Space Model (SSM) incorporates an additional parameter for hidden state transitions, denoted as $\tilde{\mathbf{A}}_i$ , which functions similarly to a forget gate, filtering previous states to enhance selective retention; 2) an additional term, $\mathbf{D}\mathbf{x}_i$ , is introduced, akin to an input skip connection. In the Mamba model, which aims to achieve input-dependent modeling, $\tilde{\mathbf{A}}_i$ must be computed recursively. Despite the utilization of hardware acceleration mechanisms, this process still adheres to unidirectional modeling. Linear attention can be viewed as an alternative form of the SSM within Mamba, one that transcends the limitations of unidirectional pixel modeling. For a more in-depth analysis, one can refer to the excellent analysis in [15].
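For contrast, here is an equally minimal sketch of the gated recurrence in Eq. (5); the fixed forget gate, scalar input channel, and all sizes are simplifying assumptions (in Mamba, $\tilde{\mathbf{A}}_i$ , $\mathbf{B}_i$ , $\mathbf{C}_i$ , and $\Delta_i$ are input-dependent), used only to show why the state must be computed strictly sequentially:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                                    # state dimension
x_seq = rng.normal(size=16)              # a toy 1-D input sequence
A_tilde = rng.uniform(0.8, 1.0, size=d)  # diagonal forget gate (fixed here)
B, C = rng.normal(size=d), rng.normal(size=d)
D, delta = 0.5, 0.1
h, ys = np.zeros(d), []
for x in x_seq:                          # h_i depends on h_{i-1}: unidirectional
    h = A_tilde * h + B * (delta * x)    # Eq. (5): gated state update
    ys.append(C @ h + D * x)             # Eq. (5): readout with input skip
print(np.round(ys[:4], 3))
```

Unlike the attention readout above, each output here is tied to a left-to-right scan order, which is exactly the unidirectionality that the module introduced next is designed to avoid.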
# 3.2. Overall Structure of the Model

As illustrated in Fig. 2 (a), the proposed IR model, ACL, is based on an encoder-decoder architecture. In this model, multiple downsampled degraded images are fed into the main pathway of the encoder through lateral convolution layers, and images restored at three different scales are output during the decoding phase. Both the encoder and decoder comprise three fundamental core blocks, the structure of which is depicted in Fig. 2. Each core unit consists of several successive LAMA modules. Furthermore, as shown in the framework diagram, following the two core units with higher feature resolution in both the encoding and decoding phases, a local enhancement module is appended to augment the model's capability for local detail restoration. The structure of this local enhancement module is presented in Fig. 2 (d).

![](images/f7098aef4493c7621c002027cefe7603ad9677eede56e3b8f7c71f5544d7aadc.jpg)
(a) Overall Pipeline
(b) Encoder and Decoder

![](images/73afeb19e508ef0c79efdc91ea6a607b59b6b98d0fc4a83f1a08de6ea5cfcd98.jpg)

![](images/9ee6b870f30c4854b13c04ce5b0494c60e29179a3217150a08f1140cd8aff9ee.jpg)
(c) LA-based Mamba
(d) Multi-Dilated Convolutions Module

![](images/ea2bdffdc71374fad9556828ff17a8829c90fc14006c21c4a18807ac64811baf.jpg)

Figure 2. (a) The overall framework of ACL, based on the encoder-decoder architecture. (b) The core structure of the encoder-decoder, which includes the improved LA-based Mamba (LAMA) module. (c) The structure of the LAMA module. (d) The MDC module.

Specifically, the network process begins with a degraded image $\mathbf{I} \in \mathbb{R}^{3 \times 256 \times 256}$ . The model first transforms this image through a convolution layer into a feature map $\mathbf{I}' \in \mathbb{R}^{C \times H \times W}$ , expanding the number of feature channels from 3 to $C$ , where $C = 32$ . Subsequently, this feature map is further processed through three encoder units $E_{i}$ and a local enhancement module, each unit encoding the feature maps at different scales into a latent space state, denoted as $\mathbf{I}_e^i \in \mathbb{R}^{2^{(i-1)}C \times \frac{H}{2^{(i-1)}} \times \frac{W}{2^{(i-1)}}}$ , where $i = 1, 2, 3$ . Here, $C, H$ , and $W$ respectively represent the number of feature channels, the height, and the width of the feature maps. These processed feature maps are then passed to the decoder, where they are fused through direct or skip connections. In the decoder, the feature maps are gradually restored by decoding units $D_{i}$ , generating $\mathbf{I}_d^i \in \mathbb{R}^{2^{(i-1)}C \times \frac{H}{2^{(i-1)}} \times \frac{W}{2^{(i-1)}}}$ . Each feature map is processed by the subsequent $D_{i}$ and, following lateral convolution operations, yields multi-scale output results, with $D_{1}$ producing the final restored image. Subsequently, we will elaborate on the two key modules that constitute ACL and the model's optimization function.

# 3.3. Linear Attention-based Mamba Module

The original Mamba model is an auto-regressive model capable of efficiently capturing sequence dependencies, and it has been proven effective in modeling temporal causal sequence data. However, due to its unidirectional modeling approach, Mamba exhibits limitations when handling data with weak causality, such as images, necessitating further improvements to address these challenges. To overcome the limitations of unidirectional modeling, recent methods have proposed multi-directional cross-scanning techniques for image processing. Unlike these approaches, we embed linear attention into the Mamba structure, enabling it to establish global pixel dependencies when processing image data while avoiding the recursive computation of the forget matrix $\tilde{\mathbf{A}}_i$ . The module structure is illustrated in Fig. 2 (c).

The input to the module is a feature map $\mathbf{F} \in \mathbb{R}^{B \times C \times H \times W}$ . First, $\mathbf{F}$ undergoes a dimensional transformation, resulting in $\mathbf{F}' \in \mathbb{R}^{B \times HW \times C}$ . Subsequently, $\mathbf{F}'$ is passed into two branches: a main branch and a residual branch. The operations of the residual branch can be expressed as follows:

$$
\mathbf {F} _ {\text{res}} = \sigma (\operatorname{Linear} \left(\mathbf {F} ^ {\prime}\right)), \tag {6}
$$

where $\sigma (\cdot)$ represents the SiLU activation function.

The main branch comprises a linear mapping layer, a convolutional layer, and linear attention.
The process can be expressed as follows:

$$
\mathbf {F} _ {1} = \operatorname{To4D} \left(\operatorname{Linear} \left(\mathbf {F} ^ {\prime}\right)\right), \tag {7}
$$

$$
\mathbf {F} _ {2} = \sigma \left(\operatorname{Conv} \left(\mathbf {F} _ {1}\right)\right), \tag {8}
$$

where $\operatorname{To4D}(\cdot)$ indicates reshaping the feature map into a four-dimensional tensor to suit the convolution operation. Next, linear attention is applied to $\mathbf{F}_2$ . The specific process is expressed as follows:

$$
\mathbf {F} _ {2} ^ {\prime} = \operatorname{To3D} \left(\mathbf {F} _ {2}\right), \tag {9}
$$

$$
\mathbf {Q} = \phi \left(\operatorname{Linear} \left(\mathbf {F} _ {2} ^ {\prime}\right)\right), \quad \mathbf {K} = \phi \left(\operatorname{Linear} \left(\mathbf {F} _ {2} ^ {\prime}\right)\right), \tag {10}
$$

$$
\mathbf {K} \mathbf {V} = \mathbf {K} ^ {\top} \cdot \mathbf {F} _ {2} ^ {\prime}, \tag {11}
$$

$$
\mathbf {F} _ {\text{atten}} = \mathbf {Q} \cdot \mathbf {K} \mathbf {V}, \tag {12}
$$

$$
\mathbf {F} _ {\text{atten}} = \mathbf {F} _ {\text{atten}} + \operatorname{Conv} _ {\text{pos}} (\mathbf {F} _ {2}), \tag {13}
$$

where $\operatorname{To3D}(\cdot)$ reshapes the feature map into a three-dimensional tensor for the linear attention layer, $\phi (\cdot)$ is the kernel function, and $\operatorname{Conv}_{\text{pos}}(\cdot)$ is a learnable positional embedding function. Subsequently, $\mathbf{F}_{\text{atten}}$ is multiplied by $\mathbf{F}_{\text{res}}$ and passed through a linear mapping layer, yielding $\mathbf{F}_{\text{enhance}}$ :

$$
\mathbf {F} _ {\text{enhance}} = \operatorname{Linear} \left(\mathbf {F} _ {\text{atten}} \times \mathbf {F} _ {\text{res}}\right). \tag {14}
$$

Finally, $\mathbf{F}_{\text{enhance}}$ is processed through a simple feed-forward neural network to produce the output of the LAMA module.

# 3.4. Multi-Dilated Convolutions Module

The LAMA module primarily establishes global feature connections, so learning local features is still necessary to enhance the quality of detail restoration. While some previous methods employed window-based self-attention strategies over feature maps and achieved certain advancements, they incurred substantial computational costs. Consequently, we adopt a more straightforward and effective approach, namely the multi-scale dilated convolution module, whose structure is depicted in Fig. 2 (d). This module applies filtering operations with various dilation factors, aimed at capturing local features of different granularities within the image to enhance detail restoration. The module comprises two dilated convolutions along with a skip connection. The input feature, denoted as $\mathbf{F}$ , is split into two pathways: one passes through a convolution layer with a kernel size of 5 and a dilation rate of 2, and the other through a convolution layer with a kernel size of 3 and a dilation rate of 2, resulting in two feature sets:

$$
\mathbf {F} _ {1} = \operatorname{Conv} _ {5 \times 5, d = 2} (\mathbf {F}), \tag {15}
$$

$$
\mathbf {F} _ {2} = \operatorname{Conv} _ {3 \times 3, d = 2} (\mathbf {F}). \tag {16}
$$

These are then concatenated to form $\mathbf{F}'$ , which subsequently passes through a convolution layer with a kernel size of 1 to halve the channel count, aligning it with the dimensions of the input features. Furthermore, the input feature $\mathbf{F}$ is merged with $\mathbf{F}'$ via a skip connection, culminating in the output $\mathbf{F}_{\text{out}}$ . The expressions are as follows:

$$
\mathbf {F} _ {\text{out}} = \operatorname{Conv} _ {1 \times 1} \left(\operatorname{Concat} \left(\mathbf {F} _ {1}, \mathbf {F} _ {2}\right)\right) + \mathbf {F}. \tag {17}
$$
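The following is a rough PyTorch sketch of the two modules just described (Eqs. (6)-(17)). The ELU-based kernel map $\phi$ , the $1/HW$ scaling on Eq. (11), the single-head layout, and all layer widths are illustrative assumptions, and the feed-forward tail of LAMA is omitted:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LAMASketch(nn.Module):
    """Simplified LAMA block, Eqs. (6)-(14); the FFN tail is omitted."""
    def __init__(self, dim):
        super().__init__()
        self.res_proj = nn.Linear(dim, dim)                       # Eq. (6)
        self.in_proj = nn.Linear(dim, dim)                        # Eq. (7)
        self.conv = nn.Conv2d(dim, dim, 3, padding=1)             # Eq. (8)
        self.to_q = nn.Linear(dim, dim)                           # Eq. (10)
        self.to_k = nn.Linear(dim, dim)                           # Eq. (10)
        self.pos = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)  # Eq. (13)
        self.out_proj = nn.Linear(dim, dim)                       # Eq. (14)

    def forward(self, x):                                 # x: (B, C, H, W)
        b, c, h, w = x.shape
        t = x.flatten(2).transpose(1, 2)                  # F': (B, HW, C)
        res = F.silu(self.res_proj(t))                    # Eq. (6)
        f1 = self.in_proj(t).transpose(1, 2).reshape(b, c, h, w)  # Eq. (7)
        f2 = F.silu(self.conv(f1))                        # Eq. (8)
        v = f2.flatten(2).transpose(1, 2)                 # Eq. (9)
        phi = lambda z: F.elu(z) + 1.0                    # assumed kernel phi
        q, k = phi(self.to_q(v)), phi(self.to_k(v))       # Eq. (10)
        kv = k.transpose(1, 2) @ v / (h * w)              # Eq. (11), scaled
        attn = q @ kv                                     # Eq. (12)
        attn = attn + self.pos(f2).flatten(2).transpose(1, 2)     # Eq. (13)
        return self.out_proj(attn * res)                  # Eq. (14): (B, HW, C)

class MDCSketch(nn.Module):
    """Simplified MDC module, Eqs. (15)-(17)."""
    def __init__(self, dim):
        super().__init__()
        # padding d*(k-1)/2 keeps the spatial size unchanged
        self.b5 = nn.Conv2d(dim, dim, 5, padding=4, dilation=2)   # Eq. (15)
        self.b3 = nn.Conv2d(dim, dim, 3, padding=2, dilation=2)   # Eq. (16)
        self.fuse = nn.Conv2d(2 * dim, dim, 1)            # 1x1 halves channels

    def forward(self, x):
        return self.fuse(torch.cat([self.b5(x), self.b3(x)], dim=1)) + x  # Eq. (17)

x = torch.randn(1, 32, 64, 64)
print(LAMASketch(32)(x).shape, MDCSketch(32)(x).shape)
```

Note the design intent the sketch makes visible: computing $\mathbf{K}^\top \mathbf{V}$ before multiplying by $\mathbf{Q}$ keeps the attention cost linear in $HW$, and the two dilated branches of MDC cover effective receptive fields of 9 and 5 pixels, capturing coarse and fine local structure at similar cost.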
# 3.5. Optimization Objectives

The ACL model, during its decoding phase, outputs restoration results at three distinct scales and computes the corresponding loss values. Following prior methodologies, we calculate the loss values concurrently in both the spatial and frequency domains. The traditional L1 loss function is employed to measure the discrepancy between the restored outputs and the pristine reference images. Consequently, the total loss is computed as follows:

$$
L _ {\text{total}} = \sum_ {i = 1} ^ {3} \frac {1}{N _ {i}} \left| \mathbf {P} _ {i} - \mathbf {I} _ {i} \right| + \lambda \cdot \sum_ {i = 1} ^ {3} \frac {1}{N _ {i}} \left| \operatorname{fft} (\mathbf {P} _ {i}) - \operatorname{fft} (\mathbf {I} _ {i}) \right| \tag {18}
$$

where $\mathbf{P}_i$ and $\mathbf{I}_i$ represent the restored result and the corresponding ground-truth image, respectively, and $N_i$ denotes the total number of pixels in the image. $\operatorname{fft}(\cdot)$ signifies the fast Fourier transform. The hyperparameter $\lambda$ , which balances the contributions of the spatial-domain and frequency-domain losses to the total loss, is set to 0.1, following previous methods.
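A minimal sketch of Eq. (18), assuming three pyramid scales and using torch.fft.fft2 for the frequency term; the toy tensors and shapes are illustrative only:

```python
import torch

def acl_total_loss(preds, targets, lam=0.1):
    """Eq. (18): per-scale spatial L1 plus a lambda-weighted L1 between
    FFT coefficients; .mean() supplies the 1/N_i normalization."""
    total = 0.0
    for p, t in zip(preds, targets):                  # three decoder scales
        total = total + (p - t).abs().mean()          # spatial term
        fp, ft = torch.fft.fft2(p), torch.fft.fft2(t)
        total = total + lam * (fp - ft).abs().mean()  # frequency term
    return total

# toy multi-scale predictions and references
preds = [torch.rand(1, 3, s, s, requires_grad=True) for s in (256, 128, 64)]
refs = [torch.rand(1, 3, s, s) for s in (256, 128, 64)]
print(acl_total_loss(preds, refs))
```

Since torch.fft.fft2 returns complex tensors, .abs() yields the magnitude of the complex difference, matching the $|\operatorname{fft}(\mathbf{P}_i) - \operatorname{fft}(\mathbf{I}_i)|$ term.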
Table 1. Quantitative comparison results of the proposed model and seven other advanced models on Rain100L and Rain100H. Larger PSNR and SSIM values indicate better restoration; each cell reports PSNR / SSIM.

| Dataset | Restormer [44] | MAXIM [33] | DRT [24] | MPRNet [43] | DAWN [19] | IRNeXt [6] | MambaIR [13] | ACL (Ours) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Rain100L | 38.99 / 0.978 | 38.06 / 0.977 | 37.61 / 0.948 | 37.84 / 0.959 | 36.73 / 0.971 | 38.24 / 0.972 | 38.78 / 0.977 | 39.18 / 0.983 |
| Rain100H | 31.46 / 0.904 | 30.81 / 0.903 | 29.47 / 0.846 | 30.41 / 0.874 | 30.62 / 0.896 | 31.64 / 0.902 | 30.62 / 0.893 | 32.22 / 0.920 |
| Average | 35.23 / 0.941 | 34.44 / 0.940 | 33.54 / 0.897 | 34.13 / 0.917 | 33.68 / 0.934 | 34.94 / 0.937 | 34.70 / 0.935 | 35.70 / 0.952 |
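As a reminder of the protocol in Sec. 4.1, the deraining scores above are computed in the YCbCr color space. A minimal sketch of Y-channel PSNR is shown below; the ITU-R BT.601 conversion coefficients are the common convention for this protocol and an assumption here, since the paper only specifies YCbCr.

```python
import numpy as np

def psnr_y(pred, gt):
    """PSNR on the luminance (Y) channel for uint8 RGB arrays of shape (H, W, 3)."""
    def to_y(img):
        # ITU-R BT.601 luma transform (assumed convention)
        img = img.astype(np.float64)
        return 16.0 + (65.481 * img[..., 0] + 128.553 * img[..., 1] + 24.966 * img[..., 2]) / 255.0

    mse = np.mean((to_y(pred) - to_y(gt)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```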
Table 2. Quantitative comparison results of the proposed model with 13 other advanced models, including CNN- and Transformer-based methods, on three datasets; each cell reports PSNR / SSIM.

| Methods | Rain200L | Rain200H | DID-Data | Average |
| --- | --- | --- | --- | --- |
| RESCAN [21] | 33.82 / 0.955 | 26.22 / 0.822 | 32.60 / 0.927 | 30.88 / 0.901 |
| PreNet [29] | 37.12 / 0.976 | 29.04 / 0.890 | 33.37 / 0.919 | 33.18 / 0.928 |
| DRT [24] | 38.81 / 0.983 | 28.67 / 0.880 | 33.88 / 0.928 | 33.79 / 0.930 |
| CCN [28] | 38.26 / 0.981 | 29.99 / 0.914 | 32.13 / 0.924 | 33.46 / 0.940 |
| Restormer [44] | 40.58 / 0.986 | 30.83 / 0.914 | 33.19 / 0.926 | 34.87 / 0.942 |
| Uformer [38] | 40.20 / 0.986 | 30.31 / 0.911 | 34.36 / 0.933 | 34.96 / 0.943 |
| MPRNet [43] | 39.82 / 0.986 | 29.94 / 0.900 | 34.50 / 0.937 | 34.75 / 0.941 |
| SmartAssign [37] | 38.41 / 0.981 | 27.71 / 0.854 | 33.11 / 0.915 | 33.08 / 0.917 |
| SFNet [7] | 39.50 / 0.982 | 29.75 / 0.901 | 34.51 / 0.938 | 34.59 / 0.940 |
| NAFNet [2] | 39.48 / 0.982 | 29.19 / 0.888 | 34.69 / 0.937 | 34.45 / 0.936 |
| ELFformer [18] | 38.85 / 0.980 | 28.93 / 0.885 | 33.54 / 0.936 | 33.77 / 0.934 |
| ESDNet [31] | 39.85 / 0.986 | 30.01 / 0.913 | 34.52 / 0.939 | 34.79 / 0.946 |
| MSGNN [35] | 39.09 / 0.987 | 29.63 / 0.918 | 33.11 / 0.927 | 33.94 / 0.944 |
| ACL (Ours) | 40.74 / 0.988 | 30.45 / 0.916 | 34.81 / 0.938 | 35.33 / 0.947 |
![](images/37815c6aa058b9a6405ab77600645ab7d332da0c3b3d60c0e460aa9998dfc842.jpg)
Figure 3. Visual comparison of ACL and other recent SOTA models on rainy image removal. The first two scenes contain slight rain streak degradation, while the last two scenes contain severe rain streak degradation.

Table 3. Quantitative results of our method compared to recent approaches on blurry image restoration (GoPro dataset).

| Methods | PSNR ↑ | SSIM ↑ | FLOPs (G) ↓ | Param (M) ↓ |
| --- | --- | --- | --- | --- |
| MIMO [4] | 32.68 | 0.959 | 617 | 16.1 |
| DMPHN [47] | 31.20 | 0.940 | - | 21.7 |
| DBGAN [49] | 31.10 | 0.942 | 759 | 11.6 |
| MPRNet [43] | 32.66 | 0.959 | 777 | 20.1 |
| Restormer [44] | 32.92 | 0.961 | 140 | 26.1 |
| IRNeXt [6] | 33.16 | 0.962 | 114 | 13.21 |
| Stripformer [32] | 33.08 | 0.962 | 170 | 20 |
| SSAMAN [42] | 33.53 | 0.965 | 165 | 18.3 |
| LoFormer [26] | 33.73 | 0.966 | 47 | 16.4 |
| Ours | 33.25 | 0.964 | 55 | 4.6 |
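For context on the FLOPs (G) and Param (M) columns, Sec. 4.1 states that FLOPs are measured at the $256 \times 256$ crop size. One common way to obtain such numbers is sketched below; the use of fvcore as the counter is our assumption, not the paper's stated tooling.

```python
import torch
from fvcore.nn import FlopCountAnalysis  # a common FLOP counter (counts multiply-adds)

def complexity(model):
    """Return (params in M, FLOPs in G) at the 256 x 256 crop size from Sec. 4.1."""
    params_m = sum(p.numel() for p in model.parameters()) / 1e6
    x = torch.randn(1, 3, 256, 256)
    flops_g = FlopCountAnalysis(model, x).total() / 1e9
    return params_m, flops_g
```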
Upon comparison of PSNR, it can be observed that our method outperforms the other compared methods, except for the powerful Transformer-based Restormer on Rain200H. Additionally, Fig. 3 further illustrates the visual comparison results, where ACL demonstrates superior performance in restoring image details.

# 4.3. Single Image Deblurring Results

We conducted evaluations on motion-blur image restoration using the GoPro dataset [27], which includes 2,103 training images and 1,111 testing images. The compared methods include nine recently proposed advanced methods. Table 3 presents the comparative results of various metrics on this dataset. As shown, ACL achieves advanced performance while maintaining a low parameter count and low FLOPs. Additionally, Fig. 4 displays visual comparisons with other methods. We selected crops containing critical numerical information within the images, and it can be observed that ACL also exhibits good performance in restoring motion-blurred images.

![](images/7b1ad184733f9d1c1bdc3b844871a93fdd19a96b48c0b71c62d586b76608246b.jpg)
(a) Blur Region

![](images/d8bb188f0206f44d352c3f2a8973a6ee5b55486f6590550ff554f846e0f9b565.jpg)
(b) Reference

![](images/e08ef39dab8f576ee34504a74786659c508db7f5bbd9dbb360eed3ec41917282.jpg)
(c) DMPHN

![](images/8ffbd4209bdf41e4b00615e6fc2da32be63acc10047ac432ed1a180a5b2b865e.jpg)
(e) IRNeXt

![](images/89f95e3017c67732be8fad5db1b74c8e4226d904de5a21aa699ac8ae56ee76db.jpg)
(f) Restormer

![](images/65db8bcb34f6470354d3dfa31118106620092a3c69c4a6b9ad9508a49379462c.jpg)
(g) Ours

Figure 4. Visual results of ACL and four other advanced models on motion-blurred image restoration.

# 4.4. Ablation Studies

To further understand the contributions of each module in the ACL model and other factors affecting model performance, we conducted a unified experiment on the Rain100L/H rain removal dataset.

Table 4. Comparison of ablation experiments between the two modules on Rain100L.

| Settings | PSNR ↑ | SSIM ↑ |
| --- | --- | --- |
| Baseline | 38.24 | 0.978 |
| Baseline + LAMA | 38.89 | 0.981 |
| Baseline + LAMA + MDC | 39.18 | 0.983 |
Specifically, we performed ablation studies on the two main modules of the model. Additionally, to verify that the capability of linear attention can be activated by the Mamba structure, we conducted experiments replacing LAMA with other structures and compared the results under different configurations.

# 4.4.1. LAMA and MDC Modules

To validate the roles of the two main modules in ACL, namely LAMA and MDC, as well as their respective importance, we conducted ablation experiments. We removed the MDC module and replaced the core encoder/decoder modules with the structure shown in Fig. 1(b), where the Transformer block in the baseline model is implemented based on linear attention. We then gradually replaced the encoder/decoder modules with LAMA and added the MDC module to the baseline model, resulting in two additional configurations, which were trained and tested separately. The experimental results are shown in Table 4. Comparing the "Baseline" and "Baseline + LAMA" configurations, it is evident that the Mamba structure implemented with linear attention achieves better results, proving the potential of the Mamba structure to activate linear attention. Besides, adding the MDC module further enhances the model's performance.

# 4.4.2. Different Mechanisms for Core Modules

To validate the effectiveness of LAMA, we replaced the LA structure within LAMA with the original one-dimensional scanning SSM structure and with a bidirectional scanning SSM structure. Additionally, replacing LAMA with a standard LA structure yields the configuration referred to as the Baseline model. The results obtained under these configurations are presented in Table 5. On the one hand, the proposed method outperforms both the Baseline and the strategies employing unidirectional and multi-directional scanning. On the other hand, compared to the unidirectional Mamba model, the multi-directional scanning strategy demonstrates superior performance, indicating that unidirectional modeling is not optimal for image data. Furthermore, Fig. 5 provides visual examples corresponding to each configuration. While the SSM-based scanning methods are capable of removing rain degradation, they cause a significant loss of image detail; in contrast, our method achieves superior visual outcomes.

Table 5. Quantitative results of different mechanisms used in the core module on rain streak removal.

| Settings | Rain100L (PSNR / SSIM) | Rain100H (PSNR / SSIM) |
| --- | --- | --- |
| Baseline | 38.24 / 0.978 | 31.02 / 0.913 |
| Mamba (w. 1D Scan) | 36.47 / 0.959 | 29.72 / 0.887 |
| Mamba (w. 2D Scan) | 38.07 / 0.977 | 30.93 / 0.907 |
| Ours | 39.18 / 0.983 | 32.22 / 0.920 |
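The ranking in Table 5 mirrors the analysis in Sec. 3.1: a selective-scan SSM must be evaluated recurrently along one token order, whereas the linear-attention form computes global interactions in parallel. The toy sketch below contrasts the two computation patterns only; it is not the optimized kernels used in practice.

```python
import torch

def ssm_1d_scan(a, bx):
    """Toy recurrence h_i = a_i * h_{i-1} + b_i x_i (cf. Eq. (5)); sequential over tokens."""
    b, n, c = bx.shape
    h = bx.new_zeros(b, c)
    ys = []
    for i in range(n):                 # one fixed scan order: token i only sees its predecessors
        h = a[:, i] * h + bx[:, i]
        ys.append(h)
    return torch.stack(ys, dim=1)      # (B, N, C)

def linear_attention(q, k, v):
    """Non-causal linear attention (cf. Eq. (12)): all tokens interact in one matmul."""
    kv = k.transpose(1, 2) @ v         # (B, C, C) global summary shared by every query
    return q @ kv                      # (B, N, C); no scan direction is imposed
```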
![](images/c1962d22190b23d3ed72c9670333e8887485c1a4d67cadeceaf8cb51fb011c6d.jpg)
(a) Rainy Image

![](images/fddcf20267e95036fb0977ecc48fb2158fc462def51c8fdfbe27a04bed83f37c.jpg)
(b) Ground Truth

![](images/eab1499fe7a3b50ffb30b0b8d4b4e54ad984a41d25e252c2321f3a62c301ec7e.jpg)
(c) LA Block

![](images/dd4adae9e59865b8b10aec2712fe7a09f10cb5b78fe0e4fefa7d18e0e8477e3a.jpg)
(d) 1D scan Mamba

![](images/37c0a6f77facfa54b45ca7c59e384fc385eb3a5dd5f8cfb631de832e997982b0.jpg)
(e) 2D scan Mamba

![](images/16e07557232a2c55b7606977a7c8dd99488ea5d5dcdfdd2de206934c59680d5f.jpg)
(f) LA Mamba

Figure 5. Visual results of the encoder/decoder blocks adopting different mechanisms.

Table 6. Evaluation results of different $N_i$ configurations in ACL on the Rain100L dataset. ★ marks the combination we adopted.

| $(N_1, N_2, N_3)$ | PSNR ↑ | SSIM ↑ |
| --- | --- | --- |
| (3, 3, 3) | 38.95 | 0.978 |
| (4, 4, 4) | 39.03 | 0.980 |
| (6, 3, 3) ★ | 39.18 | 0.983 |
| (6, 6, 6) | 39.19 | 0.983 |
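The configurations in Table 6 simply set how many core blocks are stacked at each encoder/decoder stage. A minimal sketch of how such depths might parameterize the stages is given below; `LAMABlock` is a hypothetical placeholder for the paper's core block, and the channel widths are illustrative assumptions.

```python
import torch.nn as nn

class LAMABlock(nn.Module):
    """Hypothetical stand-in for the LAMA core block."""
    def __init__(self, dim):
        super().__init__()
        self.body = nn.Conv2d(dim, dim, 3, padding=1)

    def forward(self, x):
        return x + self.body(x)

def build_stages(depths=(6, 3, 3), dims=(32, 64, 128)):
    """Stack N_i blocks per stage; depths = (6, 3, 3) is the adopted setting (Table 6)."""
    return nn.ModuleList(
        nn.Sequential(*[LAMABlock(d) for _ in range(n)]) for n, d in zip(depths, dims)
    )
```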
# 4.4.3. The Impact of the Number of Encoders/Decoders

As mentioned above, a crucial hyperparameter in the model is the number of feature processing blocks at each encoding/decoding stage, denoted as $N_i$. To investigate its impact, we conducted a series of experiments on the deraining dataset using different combinations, with the results presented in Table 6. Additionally, Fig. 6 illustrates the output images generated by models with various configurations. The numerical results indicate that increasing $N_i$ contributes to performance improvement, but the effect is limited. Notably, a significant enhancement is observed when $N_1$ is increased, suggesting that learning high-resolution feature maps plays a critical role in improving the model's recovery performance.

![](images/1875a8b22794270df98e65352ba49d08c774fedf56a75a45dc43b159581210b1.jpg)
Input

![](images/839f75d4d7639e7277ecc9e8dafd60c2bfeeb00555847031c6ec9d169b31e7c7.jpg)
(4,4,4)

![](images/5c3add7916c909abc38bbb4c408594f56e53af0c91358ce01a64077e95eb6f15.jpg)
(6,6,6)

![](images/0dedf5c2df23be8be95ae911bbe953ce4845bb353d3404a5cbcdc3623e74c6e7.jpg)
(6,3,3)

Figure 6. Visual results on various $N_i$ configurations.

# 4.4.4. The Dilation Rate of Convolution in the MDC Module

We propose a local feature enhancement module, MDC, which includes two dilated convolution layers, as shown in Fig. 2 (d).

Table 7. Evaluation results of the MDC module's convolution operations with varying dilation rates on the Rain100L dataset. ★ marks the setting we adopted.

| Dilation | PSNR ↑ | SSIM ↑ |
| --- | --- | --- |
| 1 | 38.12 | 0.981 |
| 2 ★ | 39.18 | 0.983 |
| 3 | 39.10 | 0.981 |
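To connect Table 7 with Eqs. (15)–(17), below is a minimal sketch of the MDC module at the adopted dilation $d = 2$. With dilation $d$, a $k \times k$ kernel has an effective extent of $k + (k-1)(d-1)$, so a padding of $(k-1)d/2$ keeps the spatial size; the wiring follows the text, while everything else is an illustrative assumption.

```python
import torch
import torch.nn as nn

class MDC(nn.Module):
    """Multi-Dilated Convolutions module, Eqs. (15)-(17)."""

    def __init__(self, dim, d=2):
        super().__init__()
        # padding (k - 1) * d / 2 is shape-preserving: 4 for the 5x5 branch, 2 for the 3x3 branch at d = 2
        self.branch5 = nn.Conv2d(dim, dim, 5, padding=2 * d, dilation=d)   # Eq. (15)
        self.branch3 = nn.Conv2d(dim, dim, 3, padding=d, dilation=d)       # Eq. (16)
        self.fuse = nn.Conv2d(2 * dim, dim, 1)                             # 1x1 conv halves the channel count

    def forward(self, x):
        f = torch.cat([self.branch5(x), self.branch3(x)], dim=1)           # concat to form F'
        return self.fuse(f) + x                                            # skip connection, Eq. (17)
```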
We further investigate the impact of the dilation rate on the module's performance. As shown in Table 7, the non-dilated convolution setting ($d = 1$) is inferior to the two dilated configurations; increasing the dilation rate improves performance to varying degrees, with the best overall performance observed at $d = 2$.

# 4.5. Conclusion and Limitations

We introduced ACL, a novel image restoration model that addresses the limitations of traditional CNN- and Transformer-based approaches in handling global receptive fields and computational efficiency. By integrating linear attention into the Mamba structure, we developed the LAMA module, which enhances global feature dependencies with linear computational complexity. Additionally, the MDC module was designed to improve local detail restoration through multi-scale dilated convolutions. Our experiments confirmed that ACL achieves promising performance in deraining tasks, demonstrating its effectiveness in both quantitative metrics and visual quality. Furthermore, our work provides a new perspective on leveraging the Mamba structure in the IR domain. Nevertheless, our method also has limitations: the ACL model does not have an advantage over CNN-based models when processing large-sized images, and in the image deblurring task there is still a gap between ACL and SOTA Transformer models. In the future, the ACL model still has room for further optimization to adapt to more IR tasks.

# Acknowledgments

This work was supported by the National Key R&D Program of China (No. 2023YFB4502804), the National Science Fund for Distinguished Young Scholars (No. 62025603), the National Natural Science Foundation of China (No. U22B2051, No. 62302411), the Natural Science Foundation of Fujian Province of China (No. 2021J06003), and the China Postdoctoral Science Foundation (No. 2023M732948).

# References

[1] Hanting Chen, Yunhe Wang, Tianyu Guo, Chang Xu, Yiping Deng, Zhenhua Liu, Siwei Ma, Chunjing Xu, Chao Xu, and Wen Gao. Pre-trained image processing transformer. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 12299-12310, 2021. 3
[2] Liangyu Chen, Xiaojie Chu, Xiangyu Zhang, and Jian Sun. Simple baselines for image restoration. In European conference on computer vision, pages 17-33. Springer, 2022. 5, 6
[3] Xiang Chen, Hao Li, Mingqiang Li, and Jinshan Pan. Learning a sparse transformer network for effective image deraining. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5896-5905, 2023. 3
[4] Sung-Jin Cho, Seo-Won Ji, Jun-Pyo Hong, Seung-Won Jung, and Sung-Jea Ko. Rethinking coarse-to-fine approach in single image deblurring. In Proceedings of the IEEE/CVF international conference on computer vision, pages 4641-4650, 2021. 7
[5] Yuning Cui, Wenqi Ren, Xiaochun Cao, and Alois Knoll. Image restoration via frequency selection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023. 2
[6] Yuning Cui, Wenqi Ren, Sining Yang, Xiaochun Cao, and Alois Knoll. Irnext: Rethinking convolutional network design for image restoration. In International Conference on Machine Learning, 2023. 1, 5, 6, 7
[7] Yuning Cui, Yi Tao, Zhenshan Bing, Wenqi Ren, Xinwei Gao, Xiaochun Cao, Kai Huang, and Alois Knoll. Selective frequency network for image restoration. In The Eleventh International Conference on Learning Representations, 2023. 6
[8] Albert Gu and Tri Dao.
Mamba: Linear-time sequence modeling with selective state spaces. arXiv preprint arXiv:2312.00752, 2023. 1, 2, 3
[9] Enxuan Gu, Hongwei Ge, and Yong Guo. Code: An explicit content decoupling framework for image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2920-2930, 2024. 2
[10] Shuhang Gu, Yawei Li, Luc Van Gool, and Radu Timofte. Self-guided network for fast image denoising. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2511-2520, 2019. 2
[11] Yubin Gu, Honghui Xu, Yueqian Quan, Wanjun Chen, and Jianwei Zheng. ORSI salient object detection via bidimensional attention and full-stage semantic guidance. IEEE Transactions on Geoscience and Remote Sensing, 61:1-13, 2023. 1
[12] Yubin Gu, Siting Chen, Xiaoshuai Sun, Jiayi Ji, Yiyi Zhou, and Rongrong Ji. Optical remote sensing image salient object detection via bidirectional cross-attention and attention restoration. Pattern Recognition, page 111478, 2025. 1
[13] Hang Guo, Jinmin Li, Tao Dai, Zhihao Ouyang, Xudong Ren, and Shu-Tao Xia. Mambair: A simple baseline for image restoration with state-space model. arXiv preprint arXiv:2402.15648, 2024. 1, 2, 5, 6
[14] Tianyu Guo, Haowei Wang, Yiwei Ma, Jiayi Ji, and Xiaoshuai Sun. Improving panoptic narrative grounding by harnessing semantic relationships and visual confirmation. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 1985-1993, 2024. 1
[15] Dongchen Han, Ziyi Wang, Zhuofan Xia, Yizeng Han, Yifan Pu, Chunjiang Ge, Jun Song, Shiji Song, Bo Zheng, and Gao Huang. Demystify mamba in vision: A linear attention perspective. Advances in Neural Information Processing Systems, 37:127181-127203, 2025. 2, 3
[16] Tao Huang, Xiaohuan Pei, Shan You, Fei Wang, Chen Qian, and Chang Xu. Localmamba: Visual state space model with windowed selective scan. arXiv preprint arXiv:2403.09338, 2024. 3
[17] Jiayi Ji, Haowei Wang, Changli Wu, Yiwei Ma, Xiaoshuai Sun, and Rongrong Ji. JM3D & JM3D-LLM: Elevating 3D representation with joint multi-modal cues. IEEE Transactions on Pattern Analysis and Machine Intelligence, 47(4):2475–2492, 2025. 1
[18] Kui Jiang, Zhongyuan Wang, Chen Chen, Zheng Wang, Laizhong Cui, and Chia-Wen Lin. Magic ELF: Image deraining meets association learning and transformer. arXiv preprint arXiv:2207.10455, 2022. 6
[19] Kui Jiang, Wenxuan Liu, Zheng Wang, Xian Zhong, Junjun Jiang, and Chia-Wen Lin. Dawn: Direction-aware attention wavelet network for image deraining. In Proceedings of the 31st ACM international conference on multimedia, pages 7065-7074, 2023. 6
[20] Lingshun Kong, Jiangxin Dong, Jianjun Ge, Mingqiang Li, and Jinshan Pan. Efficient frequency domain-based transformers for high-quality image deblurring. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5886-5895, 2023. 2
[21] Xia Li, Jianlong Wu, Zhouchen Lin, Hong Liu, and Hongbin Zha. Recurrent squeeze-and-excitation context aggregation net for single image deraining. In Proceedings of the European conference on computer vision (ECCV), pages 254-269, 2018. 6
[22] Yawei Li, Yuchen Fan, Xiaoyu Xiang, Denis Demandolx, Rakesh Ranjan, Radu Timofte, and Luc Van Gool. Efficient and explicit modelling of image hierarchies for image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18278-18289, 2023. 2
[23] Jingyun Liang, Jiezhang Cao, Guolei Sun, Kai Zhang, Luc Van Gool, and Radu Timofte.
Swinir: Image restoration using Swin Transformer. In Proceedings of the IEEE/CVF international conference on computer vision, pages 1833-1844, 2021. 1, 2, 3
[24] Yuanchu Liang, Saeed Anwar, and Yang Liu. Drt: A lightweight single image deraining recursive transformer. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 589-598, 2022. 3, 6
[25] Yue Liu, Yunjie Tian, Yuzhong Zhao, Hongtian Yu, Lingxi Xie, Yaowei Wang, Qixiang Ye, Jianbin Jiao, and Yunfan Liu. Vmamba: Visual state space model. Advances in neural information processing systems, 37:103031-103063, 2024. 3
[26] Xintian Mao, Jiansheng Wang, Xingran Xie, Qingli Li, and Yan Wang. Loformer: Local frequency transformer for image deblurring. In Proceedings of the 32nd ACM International Conference on Multimedia, pages 10382-10391, 2024. 7
[27] Seungjun Nah, Tae Hyun Kim, and Kyoung Mu Lee. Deep multi-scale convolutional neural network for dynamic scene deblurring. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3883-3891, 2017. 7
[28] Ruijie Quan, Xin Yu, Yuanzhi Liang, and Yi Yang. Removing raindrops and rain streaks in one go. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9147-9156, 2021. 6
[29] Dongwei Ren, Wangmeng Zuo, Qinghua Hu, Pengfei Zhu, and Deyu Meng. Progressive image deraining networks: A better and simpler baseline. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 3937-3946, 2019. 6
[30] Yuan Shi, Bin Xia, Xiaoyu Jin, Xing Wang, Tianyu Zhao, Xin Xia, Xuefeng Xiao, and Wenming Yang. Vmambair: Visual state space model for image restoration. arXiv preprint arXiv:2403.11423, 2024. 1, 2, 3
[31] Tianyu Song, Guiyue Jin, Pengpeng Li, Kui Jiang, Xiang Chen, and Jiyu Jin. Learning a spiking neural network for efficient image deraining. In IJCAI, 2024. 6
[32] Fu-Jen Tsai, Yan-Tsung Peng, Yen-Yu Lin, Chung-Chi Tsai, and Chia-Wen Lin. Stripformer: Strip transformer for fast image deblurring. In European Conference on Computer Vision, pages 146-162. Springer, 2022. 2, 7
[33] Zhengzhong Tu, Hossein Talebi, Han Zhang, Feng Yang, Peyman Milanfar, Alan Bovik, and Yinxiao Li. Maxim: Multi-axis mlp for image processing. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5769-5780, 2022. 6
[34] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017. 1, 2
[35] Cong Wang, Wei Wang, Chengjin Yu, and Jie Mu. Explore internal and external similarity for single image deraining with graph neural networks. In IJCAI, 2024. 6
[36] Qiong Wang, Kui Jiang, Jinyi Lai, Zheng Wang, and Jianhui Zhang. Hpcnet: A hybrid progressive coupled network for image deraining. In 2023 IEEE International Conference on Multimedia and Expo (ICME), pages 2747-2752. IEEE, 2023. 1
[37] Yinglong Wang, Chao Ma, and Jianzhuang Liu. Smartassign: Learning a smart knowledge assignment strategy for deraining and desnowing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3677-3686, 2023. 6
[38] Zhendong Wang, Xiaodong Cun, Jianmin Bao, Wengang Zhou, Jianzhuang Liu, and Houqiang Li. Uformer: A general u-shaped transformer for image restoration. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 17683-17693, 2022.
2, 6
[39] Xinyu Xie, Yawen Cui, Tao Tan, Xubin Zheng, and Zitong Yu. Fusionmamba: Dynamic feature enhancement for multimodal image fusion with mamba. Visual Intelligence, 2(1): 37, 2024. 3
[40] Wenhan Yang, Robby T Tan, Jiashi Feng, Jiaying Liu, Zongming Guo, and Shuicheng Yan. Deep joint rain detection and removal from a single image. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1357-1366, 2017. 5
[41] Hu Yu, Naishan Zheng, Man Zhou, Jie Huang, Zeyu Xiao, and Feng Zhao. Frequency and spatial dual guidance for image dehazing. In European Conference on Computer Vision, pages 181-198. Springer, 2022. 1
[42] Anas Zafar, Danyal Aftab, Rizwan Qureshi, Xinqi Fan, Pingjun Chen, Jia Wu, Hazrat Ali, Shah Nawaz, Sheheryar Khan, and Mubarak Shah. Single stage adaptive multi-attention network for image restoration. IEEE Transactions on Image Processing, 2024. 7
[43] Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Ming-Hsuan Yang, and Ling Shao. Multi-stage progressive image restoration. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 14821-14831, 2021. 6, 7
[44] Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, and Ming-Hsuan Yang. Restormer: Efficient transformer for high-resolution image restoration. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5728-5739, 2022. 1, 2, 5, 6, 7
[45] Wengyi Zhan, Mingbao Lin, Chia-Wen Lin, and Rongrong Ji. Anysr: Realizing image super-resolution as any-scale, any-resource. IEEE Transactions on Image Processing, 2024. 2
[46] He Zhang and Vishal M Patel. Density-aware single image de-raining using a multi-stream dense network. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 695-704, 2018. 5
[47] Hongguang Zhang, Yuchao Dai, Hongdong Li, and Piotr Koniusz. Deep stacked hierarchical multi-patch network for image deblurring. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5978-5986, 2019. 7
[48] Jinlu Zhang, Yiyi Zhou, Qiancheng Zheng, Xiaoxiong Du, Gen Luo, Jun Peng, Xiaoshuai Sun, and Rongrong Ji. Fast text-to-3D-aware face generation and manipulation via direct cross-modal mapping and geometric regularization. In Proceedings of the 41st International Conference on Machine Learning, pages 60605–60625. PMLR, 2024. 1
[49] Kaihao Zhang, Wenhan Luo, Yiran Zhong, Lin Ma, Bjorn Stenger, Wei Liu, and Hongdong Li. Deblurring by realistic blurring. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2737-2746, 2020. 7
[50] Jianwei Zheng, Wei Li, Ni Xu, Junwei Zhu, Xiaoxu Lin, and Xiaqin Zhang. Alias-free mamba neural operator. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024.
3 \ No newline at end of file diff --git a/2025/ACL_ Activating Capability of Linear Attention for Image Restoration/images.zip b/2025/ACL_ Activating Capability of Linear Attention for Image Restoration/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..8a1dfb597016bbb51995abd6259084e64dec48ec --- /dev/null +++ b/2025/ACL_ Activating Capability of Linear Attention for Image Restoration/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4da398e500292ab6fb7255dcf5b17f7d6cb562e435dd728bda583a2890f1d1d5 +size 790320 diff --git a/2025/ACL_ Activating Capability of Linear Attention for Image Restoration/layout.json b/2025/ACL_ Activating Capability of Linear Attention for Image Restoration/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..91e292c95f606502dc049b986c37a27bd363b91b --- /dev/null +++ b/2025/ACL_ Activating Capability of Linear Attention for Image Restoration/layout.json @@ -0,0 +1,9568 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 88, + 103, + 523, + 121 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 103, + 523, + 121 + ], + "spans": [ + { + "bbox": [ + 88, + 103, + 523, + 121 + ], + "type": "text", + "content": "ACL: Activating Capability of Linear Attention for Image Restoration" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 179, + 142, + 434, + 157 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 179, + 142, + 434, + 157 + ], + "spans": [ + { + "bbox": [ + 179, + 142, + 434, + 157 + ], + "type": "text", + "content": "Yubin Gu" + }, + { + "bbox": [ + 179, + 142, + 434, + 157 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 179, + 142, + 434, + 157 + ], + "type": "text", + "content": ", Yuan Meng" + }, + { + "bbox": [ + 179, + 142, + 434, + 157 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 179, + 142, + 434, + 157 + ], + "type": "text", + "content": ", Jiayi Ji" + }, + { + "bbox": [ + 179, + 142, + 434, + 157 + ], + "type": "inline_equation", + "content": "^{1,2}" + }, + { + "bbox": [ + 179, + 142, + 434, + 157 + ], + "type": "text", + "content": ", Xiaoshuai Sun" + }, + { + "bbox": [ + 179, + 142, + 434, + 157 + ], + "type": "inline_equation", + "content": "^{1*}" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 119, + 157, + 490, + 185 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 119, + 157, + 490, + 185 + ], + "spans": [ + { + "bbox": [ + 119, + 157, + 490, + 185 + ], + "type": "text", + "content": "1Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University, 361005, P.R. 
China" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 193, + 185, + 416, + 200 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 193, + 185, + 416, + 200 + ], + "spans": [ + { + "bbox": [ + 193, + 185, + 416, + 200 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 193, + 185, + 416, + 200 + ], + "type": "text", + "content": " National University of Singapore, Singapore" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 116, + 201, + 489, + 213 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 116, + 201, + 489, + 213 + ], + "spans": [ + { + "bbox": [ + 116, + 201, + 489, + 213 + ], + "type": "text", + "content": "{guyubin,aprilmyy}@stu.xmu.edu.cn,jjyxmu@gmail.com,xssun@xmu.edu.cn" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 151, + 241, + 200, + 253 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 151, + 241, + 200, + 253 + ], + "spans": [ + { + "bbox": [ + 151, + 241, + 200, + 253 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 266, + 297, + 566 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 266, + 297, + 566 + ], + "spans": [ + { + "bbox": [ + 55, + 266, + 297, + 566 + ], + "type": "text", + "content": "Image restoration (IR), a key area in computer vision, has entered a new era with deep learning. Recent research has shifted toward Selective State Space Models (Mamba) to overcome CNNs' limited receptive fields and Transformers' computational inefficiency. However, due to Mamba's inherent one-dimensional scanning limitations, recent approaches have introduced multi-directional scanning to bolster inter-sequence correlations. Despite these enhancements, these methods still struggle with managing local pixel correlations across various directions. Moreover, the recursive computation in Mamba's SSM leads to reduced efficiency. To resolve these issues, we exploit the mathematical congruences between linear attention and SSM within Mamba to propose a novel model, ACL, which leverages news designs to Activate the Capability of Linear attention for IR. ACL integrates linear attention blocks instead of SSM within Mamba, serving as the core component of encoders/decoders, and aims to preserve a global perspective while boosting computational efficiency. Furthermore, we have designed a simple yet robust local enhancement module with multi-scale dilated convolutions to extract both coarse and fine features to improve local detail recovery. Experimental results confirm that our ACL model excels in classical IR tasks such as de-raining and de-blurring, while maintaining relatively low parameter counts and FLOPs1." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 55, + 580, + 135, + 594 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 580, + 135, + 594 + ], + "spans": [ + { + "bbox": [ + 55, + 580, + 135, + 594 + ], + "type": "text", + "content": "1. Introduction" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 55, + 601, + 296, + 685 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 601, + 296, + 685 + ], + "spans": [ + { + "bbox": [ + 55, + 601, + 296, + 685 + ], + "type": "text", + "content": "In the field of image processing, restoring degraded images to clarity is a crucial technology. High-quality image restoration (IR) methods play a key role in ensuring the smooth progression of downstream vision tasks. 
Vanilla IR methods rely primarily on manually designed feature extraction, but often perform poorly when faced with complicated degradation factors in reality." + } + ] + } + ], + "index": 10 + }, + { + "type": "image", + "bbox": [ + 320, + 239, + 553, + 364 + ], + "blocks": [ + { + "bbox": [ + 320, + 239, + 553, + 364 + ], + "lines": [ + { + "bbox": [ + 320, + 239, + 553, + 364 + ], + "spans": [ + { + "bbox": [ + 320, + 239, + 553, + 364 + ], + "type": "image", + "image_path": "87e0ea1dca4d6347f547973ef4454a772c0dd4e8960c638c717732cde503c547.jpg" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 313, + 374, + 555, + 397 + ], + "lines": [ + { + "bbox": [ + 313, + 374, + 555, + 397 + ], + "spans": [ + { + "bbox": [ + 313, + 374, + 555, + 397 + ], + "type": "text", + "content": "Figure 1. Comparison of our method's core block design with those of recent mainstream approaches." + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_caption" + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 423, + 555, + 651 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 423, + 555, + 651 + ], + "spans": [ + { + "bbox": [ + 313, + 423, + 555, + 651 + ], + "type": "text", + "content": "Over the past decade, with the rapid development of deep learning, many fields have been propelled forward, such as image segmentation and generation [11, 12, 14, 17, 48], and of course, IR as well. The initial IR models were primarily based on CNN designs [6, 41], whose translational invariance and high inferential efficiency have made them widely applied, as depicted in the core structure in Fig. 1(a). However, these CNN-based models face challenges in processing global features and often require stacking additional network layers to compensate for this deficiency. With the rise of attention mechanisms [34], particularly the application of the Transformer, which boasts exceptional global feature modeling capabilities, an increasing number of researchers have shifted toward Transformer-based designs [23, 36, 44], achieving remarkable restoration results, as shown in the core structure in Fig. 1 (b). Although the Transformer broadens the global perspective, its softmax attention mechanism's quadratic computational complexity significantly reduces efficiency during inference." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 653, + 556, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 653, + 556, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 653, + 556, + 715 + ], + "type": "text", + "content": "Recently, state-space models from the field of control science, especially the Mamba model [8], have been proposed by related IR methods [13, 30] demonstrating superior training and inference efficiency on sequential data. These models have even outperformed some Transformer-" + } + ] + } + ], + "index": 14 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "spans": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "text", + "content": "CVF" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "spans": [ + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "type": "text", + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. 
Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 66, + 693, + 197, + 703 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 693, + 197, + 703 + ], + "spans": [ + { + "bbox": [ + 66, + 693, + 197, + 703 + ], + "type": "text", + "content": "*Corresponding Author: Xiaoshuai Sun" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 67, + 703, + 190, + 712 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 703, + 190, + 712 + ], + "spans": [ + { + "bbox": [ + 67, + 703, + 190, + 712 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 67, + 703, + 190, + 712 + ], + "type": "text", + "content": "https://github.com/ClimBin/ACLNet" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "text", + "content": "17913" + } + ] + } + ], + "index": 18 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 294, + 263 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 294, + 263 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 294, + 263 + ], + "type": "text", + "content": "based IR methods in terms of performance. Mamba is initially designed for one-dimensional sequence data modeling, direct adaptation to two-dimensional image poses challenges. The improved SSM module in Mamba, a one-dimensional unidirectional scanning, recursive computing structure, faces major issues when applied to the image, including: 1) conversion of image data into a one-dimensional sequence increases the distance between adjacent pixels, leading to the loss of local relationships [30]; 2) unidirectional modeling neglects the spatial relationships of pixels in multiple directions. Recently, MambaIR [13] and VMambaIR [30] have adopted multi-directional scanning methods to mitigate these problems, as illustrated in the main structure in Fig. 1 (c), but these methods have yet to effectively and directly establish multi-directional connections in the spatial dimension of pixels." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 267, + 295, + 458 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 267, + 295, + 458 + ], + "spans": [ + { + "bbox": [ + 55, + 267, + 295, + 458 + ], + "type": "text", + "content": "In response to the limitations of the Mamba model, we propose a method that aims to leverage the advantages of the Mamba structure while addressing its unidirectional modeling constraint. A direct approach to overcoming this limitation is to introduce a long-range dependency attention mechanism, such as Linear Attention (LA) or Softmax Attention (SA). Although SA generally outperforms LA in traditional vision tasks, its computational complexity is significantly higher than that of LA. Upon further analysis, we observe that LA and the State-Space Model (SSM) in Mamba share a highly similar mathematical formulation, which has also been deeply analyzed in the work [15]. Inspired by this insight, we replace the SSM module in Mamba with LA layers, thereby designing a novel foundational IR architecture, as illustrated in Fig. 1 (d), and propose a new IR model, ACL." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 463, + 295, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 463, + 295, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 463, + 295, + 713 + ], + "type": "text", + "content": "Specifically, the proposed ACL consists of two main components: the Mamba-based module with LA at its core (denoted as LAMA), which serves as the central part of the encoder/decoder and establishes global feature dependencies with linear computational complexity. Additionally, optimizing local features is equally essential. A recent approach involves partitioning the global features in the spatial domain into small windows and optimizing attention within these windows to enhance local feature modeling. While this approach has shown some effectiveness, it incurs high computational costs. In contrast, we propose a simple yet efficient multi-scale dilated convolution module (MDC), which captures local features at varying granularities by employing different dilation factors, thus improving detail restoration. By combining the Mamba structure with LA, ACL activates the capability of LA, achieving advanced performance on two classic IR tasks—deblurring and deraining. Compared to Transformer-based models [32, 44], ACL not only significantly reduces computational cost and parameter count, but also achieves superior or comparable performance." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 326, + 72, + 511, + 83 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 326, + 72, + 511, + 83 + ], + "spans": [ + { + "bbox": [ + 326, + 72, + 511, + 83 + ], + "type": "text", + "content": "Our contributions are summarized as follows:" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 313, + 85, + 553, + 240 + ], + "type": "list", + "angle": 0, + "index": 7, + "blocks": [ + { + "bbox": [ + 314, + 85, + 553, + 156 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 85, + 553, + 156 + ], + "spans": [ + { + "bbox": [ + 314, + 85, + 553, + 156 + ], + "type": "text", + "content": "- We explore an alternative CNN and Transformer-based architecture for Image Restoration, providing global receptive fields while maintaining computational efficiency. Specifically, we propose ACL, which upgrades the SSM in Mamba with the LA structure, enabling the model to perform global multi-directional scanning." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 313, + 157, + 553, + 203 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 157, + 553, + 203 + ], + "spans": [ + { + "bbox": [ + 313, + 157, + 553, + 203 + ], + "type": "text", + "content": "- The proposed ACL consists of two modules: LAMA and MDC. LAMA captures global feature dependencies, and MDC captures local features at varying granularities. Both modules enhance detail restoration." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 313, + 205, + 553, + 240 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 205, + 553, + 240 + ], + "spans": [ + { + "bbox": [ + 313, + 205, + 553, + 240 + ], + "type": "text", + "content": "- The proposed ACL demonstrates advanced performance in de-raining and de-blurring tasks, proving its advantages in terms of parameter count and computational cost." 
+ } + ] + } + ], + "index": 6 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 314, + 253, + 400, + 266 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 253, + 400, + 266 + ], + "spans": [ + { + "bbox": [ + 314, + 253, + 400, + 266 + ], + "type": "text", + "content": "2. Related Work" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 314, + 273, + 428, + 285 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 273, + 428, + 285 + ], + "spans": [ + { + "bbox": [ + 314, + 273, + 428, + 285 + ], + "type": "text", + "content": "2.1. Image Restorations" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 291, + 553, + 639 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 291, + 553, + 639 + ], + "spans": [ + { + "bbox": [ + 313, + 291, + 553, + 639 + ], + "type": "text", + "content": "Image restoration (IR) technology provides clear visual data essential for numerous advanced downstream visual tasks. In recent years, the advent of deep learning has eclipsed traditional methods that rely on manually designed features, shifting the paradigm toward deep-learning-based approaches [9, 13, 20, 22, 45]. Initially, models were predominantly designed using CNNs, achieving significant advancements through sophisticated network designs that incorporated encoder-decoder patterns [5], dense connections, and residual connections [10]. However, CNN-based techniques face challenges in establishing global feature dependencies. With the evolution of Transformers in both Natural Language Processing (NLP) and Computer Vision (CV), many researchers have pivoted towards Transformer-based IR methods [23, 38, 44], leveraging their robust global receptive capabilities and marking substantial progress. Despite these improvements, the computational burden of calculating SA remains a drawback. Recently, the Mamba model [8], known for its efficient training and inference capabilities, has introduced a new potential paradigm in the IR field. Methods such as VMambaIR [30] and MambaIR [13], which employ multidirectional scanning strategies, aim to address the issues of unidirectional modeling inherent in Mamba's SSM blocks. Nonetheless, these methods have yet to establish multidirectional pixel connections directly. Thus, exploring how to utilize Mamba's superior design to establish comprehensive multidirectional global pixel correlations remains a promising direction." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 314, + 647, + 443, + 659 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 647, + 443, + 659 + ], + "spans": [ + { + "bbox": [ + 314, + 647, + 443, + 659 + ], + "type": "text", + "content": "2.2. Attention Mechanisms" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 665, + 553, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 665, + 553, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 665, + 553, + 713 + ], + "type": "text", + "content": "Originally applied in the NLP field, attention mechanisms [34], particularly the Transformer with its SA mechanism, have achieved remarkable success. 
Subsequently, these mechanisms have been successfully adapted to the" + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "type": "text", + "content": "17914" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 296, + 251 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 296, + 251 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 296, + 251 + ], + "type": "text", + "content": "visual domain. In recent years, numerous IR methods based on the Transformer structure have emerged [1, 3, 23, 24], primarily utilizing various design modules to leverage self-attention mechanisms and enhance model efficiency. SwinIR [23] introduces twin-shifted window attention to boost performance. IPT [1] enhances local detail restoration by dividing images into multiple small windows and processing each window's features independently. However, adopting SA to establish global or local feature dependencies inevitably leads to quadratic computational complexity. Linear attention, which operates with linear complexity, has yet to match the performance of SA in classical vision tasks, thus its application remains limited. In this paper, we introduce a new IR model, ACL, activating the potential of linear attention." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 261, + 165, + 274 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 261, + 165, + 274 + ], + "spans": [ + { + "bbox": [ + 55, + 261, + 165, + 274 + ], + "type": "text", + "content": "2.3. State Space Model" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 279, + 296, + 494 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 279, + 296, + 494 + ], + "spans": [ + { + "bbox": [ + 55, + 279, + 296, + 494 + ], + "type": "text", + "content": "The Mamba model, a newly proposed state space model (SSM), effectively facilitates sequence modeling with linear complexity [8]. Owing to its efficient training and inference speeds, many researchers have adapted it for visual tasks [16, 25, 30, 39, 50]. For instance, Local-Mamba [16] employs a cross-scanning module to scan image spaces. VMambaIR [30] enhances multidirectional relationships between pixels by scanning images from six directions. These methods focus on overcoming the limitations of unidirectional modeling in SSMs by proposing various scanning techniques. However, the features captured in each direction remain unidirectionally connected and potentially lead to redundant computational costs. The Mamba model is efficient due to its structural design, yet its unidirectional scanning is not entirely suitable for images. Therefore, we combine linear attention with the Mamba structure to achieve a new balance between computational efficiency and restoration effectiveness." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 506, + 116, + 519 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 506, + 116, + 519 + ], + "spans": [ + { + "bbox": [ + 55, + 506, + 116, + 519 + ], + "type": "text", + "content": "3. 
Methods" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 527, + 178, + 540 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 527, + 178, + 540 + ], + "spans": [ + { + "bbox": [ + 55, + 527, + 178, + 540 + ], + "type": "text", + "content": "3.1. Preliminary Analysis" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 544, + 295, + 605 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 544, + 295, + 605 + ], + "spans": [ + { + "bbox": [ + 55, + 544, + 295, + 605 + ], + "type": "text", + "content": "In recent years, very few IR models based on linear attention have been proposed, as they tend to perform slightly worse than SA in classical vision tasks. Leveraging the computational advantages of Linear Attention (LA) to further explore its potential in visual tasks is crucial." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 605, + 296, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 605, + 296, + 666 + ], + "spans": [ + { + "bbox": [ + 55, + 605, + 296, + 666 + ], + "type": "text", + "content": "The improved SSM in Mamba shows significant potential in sequence processing. In fact, linear attention has a similar expression to SSM [15]. In linear attention, if the attention of the " + }, + { + "bbox": [ + 55, + 605, + 296, + 666 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 55, + 605, + 296, + 666 + ], + "type": "text", + "content": "-th token is restricted to only be related to the previous " + }, + { + "bbox": [ + 55, + 605, + 296, + 666 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 55, + 605, + 296, + 666 + ], + "type": "text", + "content": " tokens, it is expressed as follows:" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 94, + 675, + 296, + 716 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 94, + 675, + 296, + 716 + ], + "spans": [ + { + "bbox": [ + 94, + 675, + 296, + 716 + ], + "type": "interline_equation", + "content": "\\mathbf {A} _ {i} = \\frac {\\mathbf {Q} _ {i} \\left(\\sum_ {j = 1} ^ {i} \\mathbf {K} _ {j} ^ {\\top} \\mathbf {V} _ {j}\\right)}{\\mathbf {Q} _ {i} \\left(\\sum_ {j = 1} ^ {i} \\mathbf {K} _ {j} ^ {\\top}\\right)} = \\frac {\\left(\\mathbf {Q} _ {i} \\mathbf {D} _ {i}\\right)}{\\left(\\mathbf {Q} _ {i} \\mathbf {U} _ {i}\\right)}, \\tag {1}", + "image_path": "e19d1ff851ec7fb9367d4b366cf9fe8282c410fab40e5db490a6985b11445349.jpg" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 70, + 553, + 96 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 70, + 553, + 96 + ], + "spans": [ + { + "bbox": [ + 313, + 70, + 553, + 96 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 70, + 553, + 96 + ], + "type": "inline_equation", + "content": "\\mathbf{D}_i = \\sum_{j=1}^i \\mathbf{K}_j^\\top \\mathbf{V}_j" + }, + { + "bbox": [ + 313, + 70, + 553, + 96 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 313, + 70, + 553, + 96 + ], + "type": "inline_equation", + "content": "\\mathbf{U}_i = \\sum_{j=1}^i \\mathbf{K}_j^\\top" + }, + { + "bbox": [ + 313, + 70, + 553, + 96 + ], + "type": "text", + "content": ". 
Therefore, the recursive expressions are:" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 393, + 105, + 555, + 119 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 393, + 105, + 555, + 119 + ], + "spans": [ + { + "bbox": [ + 393, + 105, + 555, + 119 + ], + "type": "interline_equation", + "content": "\\mathbf {U} _ {i} = \\mathbf {U} _ {i - 1} + \\mathbf {K} _ {i} ^ {\\top}, \\tag {2}", + "image_path": "319dc241419720fc3ff491f8849030d225f0a3657f5b02776db6f12f23cf9230.jpg" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 350, + 127, + 555, + 154 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 350, + 127, + 555, + 154 + ], + "spans": [ + { + "bbox": [ + 350, + 127, + 555, + 154 + ], + "type": "interline_equation", + "content": "\\mathbf {D} _ {i} = \\mathbf {D} _ {i - 1} + \\mathbf {K} _ {i} ^ {\\top} \\mathbf {V} _ {i}, \\quad \\mathbf {A} _ {i} = \\frac {\\left(\\mathbf {Q} _ {i} \\mathbf {D} _ {i}\\right)}{\\left(\\mathbf {Q} _ {i} \\mathbf {U} _ {i}\\right)}. \\tag {3}", + "image_path": "f2fd31df6080a9138ad9201872aa6a5e341c9cca36cc3c6218e4036fa4e29c70.jpg" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 157, + 555, + 217 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 157, + 555, + 217 + ], + "spans": [ + { + "bbox": [ + 313, + 157, + 555, + 217 + ], + "type": "text", + "content": "To enable applications in deep neural networks, it is necessary to discretize the initial SSM using zero-order hold [8]. This involves discretizing the continuous parameters " + }, + { + "bbox": [ + 313, + 157, + 555, + 217 + ], + "type": "inline_equation", + "content": "\\mathbf{A}" + }, + { + "bbox": [ + 313, + 157, + 555, + 217 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 157, + 555, + 217 + ], + "type": "inline_equation", + "content": "\\mathbf{B}" + }, + { + "bbox": [ + 313, + 157, + 555, + 217 + ], + "type": "text", + "content": " into " + }, + { + "bbox": [ + 313, + 157, + 555, + 217 + ], + "type": "inline_equation", + "content": "\\bar{\\mathbf{A}}" + }, + { + "bbox": [ + 313, + 157, + 555, + 217 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 157, + 555, + 217 + ], + "type": "inline_equation", + "content": "\\bar{\\mathbf{B}}" + }, + { + "bbox": [ + 313, + 157, + 555, + 217 + ], + "type": "text", + "content": " through the time scale parameter " + }, + { + "bbox": [ + 313, + 157, + 555, + 217 + ], + "type": "inline_equation", + "content": "\\Delta" + }, + { + "bbox": [ + 313, + 157, + 555, + 217 + ], + "type": "text", + "content": ". 
The specific expressions are as follows:" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 347, + 227, + 555, + 240 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 347, + 227, + 555, + 240 + ], + "spans": [ + { + "bbox": [ + 347, + 227, + 555, + 240 + ], + "type": "interline_equation", + "content": "\\mathbf {h} _ {i} = \\bar {\\mathbf {A}} \\mathbf {h} _ {i - 1} + \\mathbf {B} \\mathbf {x} _ {i}, \\quad \\mathbf {y} _ {i} = \\mathbf {C h} _ {i} + \\mathbf {D x} _ {i}, \\tag {4}", + "image_path": "dffc8695689b2a2703b1e150c4dff698bfdb99b8e0585906d3df60f7833ce1b5.jpg" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 248, + 553, + 285 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 248, + 553, + 285 + ], + "spans": [ + { + "bbox": [ + 313, + 248, + 553, + 285 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 248, + 553, + 285 + ], + "type": "inline_equation", + "content": "\\bar{\\mathbf{A}} = \\exp (\\Delta \\mathbf{A})" + }, + { + "bbox": [ + 313, + 248, + 553, + 285 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 248, + 553, + 285 + ], + "type": "inline_equation", + "content": "\\bar{\\mathbf{B}} = (\\Delta \\mathbf{A})^{-1}(\\exp (\\Delta \\mathbf{A}) - I)\\Delta \\mathbf{B}\\approx \\Delta \\mathbf{B}" + }, + { + "bbox": [ + 313, + 248, + 553, + 285 + ], + "type": "text", + "content": ". For simplicity, we have omitted the feature dimension information of each part in the formulas." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 285, + 554, + 345 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 285, + 554, + 345 + ], + "spans": [ + { + "bbox": [ + 313, + 285, + 554, + 345 + ], + "type": "text", + "content": "Further, Mamba enhances the discrete SSM by making " + }, + { + "bbox": [ + 313, + 285, + 554, + 345 + ], + "type": "inline_equation", + "content": "\\bar{\\mathbf{A}}" + }, + { + "bbox": [ + 313, + 285, + 554, + 345 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 313, + 285, + 554, + 345 + ], + "type": "inline_equation", + "content": "\\bar{\\mathbf{B}}" + }, + { + "bbox": [ + 313, + 285, + 554, + 345 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 313, + 285, + 554, + 345 + ], + "type": "inline_equation", + "content": "\\Delta" + }, + { + "bbox": [ + 313, + 285, + 554, + 345 + ], + "type": "text", + "content": " dependent on the input " + }, + { + "bbox": [ + 313, + 285, + 554, + 345 + ], + "type": "inline_equation", + "content": "\\mathbf{x}_i" + }, + { + "bbox": [ + 313, + 285, + 554, + 345 + ], + "type": "text", + "content": ", breaking away from the assumption of input-independent models. Additionally, since " + }, + { + "bbox": [ + 313, + 285, + 554, + 345 + ], + "type": "inline_equation", + "content": "\\bar{\\mathbf{A}}_i" + }, + { + "bbox": [ + 313, + 285, + 554, + 345 + ], + "type": "text", + "content": " in Mamba is a diagonal matrix, we have " + }, + { + "bbox": [ + 313, + 285, + 554, + 345 + ], + "type": "inline_equation", + "content": "\\tilde{\\mathbf{A}}_i = diag(\\tilde{\\mathbf{A}}_i)" + }, + { + "bbox": [ + 313, + 285, + 554, + 345 + ], + "type": "text", + "content": ". 
The expression is thus transformed into:" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 326, + 354, + 555, + 369 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 326, + 354, + 555, + 369 + ], + "spans": [ + { + "bbox": [ + 326, + 354, + 555, + 369 + ], + "type": "interline_equation", + "content": "\\mathbf {h} _ {i} = \\tilde {\\mathbf {A}} _ {i} \\mathbf {h} _ {i - 1} + \\mathbf {B} _ {i} (\\Delta_ {i} \\mathbf {x} _ {i}), \\quad \\mathbf {y} _ {i} = \\mathbf {C} _ {i} \\mathbf {h} _ {i} + \\mathbf {D} \\mathbf {x} _ {i}. \\quad (5)", + "image_path": "6536c6bf15d771310cd59a92f102dacda3aafd98caabd2d7deef51c743ea52f8.jpg" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 313, + 377, + 555, + 558 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 377, + 555, + 558 + ], + "spans": [ + { + "bbox": [ + 313, + 377, + 555, + 558 + ], + "type": "text", + "content": "The primary distinctions between Eq. 3 and Eq. 5 are as follows: 1) The improved State Space Model (SSM) incorporates an additional parameter for hidden state transitions, denoted as " + }, + { + "bbox": [ + 313, + 377, + 555, + 558 + ], + "type": "inline_equation", + "content": "\\tilde{\\mathbf{A}}_i" + }, + { + "bbox": [ + 313, + 377, + 555, + 558 + ], + "type": "text", + "content": ". This parameter functions similarly to a forget gate, filtering previous states to enhance selective retention. 2) An additional term, " + }, + { + "bbox": [ + 313, + 377, + 555, + 558 + ], + "type": "inline_equation", + "content": "\\mathbf{D}\\mathbf{x}_i" + }, + { + "bbox": [ + 313, + 377, + 555, + 558 + ], + "type": "text", + "content": ", is introduced, akin to an input skip connection. In the Mamba model, which aims to achieve input-dependent modeling, " + }, + { + "bbox": [ + 313, + 377, + 555, + 558 + ], + "type": "inline_equation", + "content": "\\tilde{\\mathbf{A}}_i" + }, + { + "bbox": [ + 313, + 377, + 555, + 558 + ], + "type": "text", + "content": " must be recursively computed. Despite the utilization of hardware acceleration mechanisms, this process still adheres to unidirectional modeling. Linear attention represents an alternative form of SSM within Mamba and can transcend the limitations of unidirectional pixel modeling, presenting a potential capability. For a more in-depth analysis, one can refer to another outstanding analytical works [15]." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 313, + 564, + 482, + 576 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 564, + 482, + 576 + ], + "spans": [ + { + "bbox": [ + 313, + 564, + 482, + 576 + ], + "type": "text", + "content": "3.2. Overall Structure of the Model" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 313, + 582, + 555, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 582, + 555, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 582, + 555, + 715 + ], + "type": "text", + "content": "As illustrated in Fig. 2 (a), the proposed IR model, ACL, is based on an encoder-decoder architecture. In this model, multiple downsampled degraded images are fed into the main pathway of the encoder through lateral convolution layers, and images restored at three different scales are output during the decoding phase. Both the encoder and decoder comprise three fundamental core blocks, the structure of which is depicted in Fig. 2. Each core unit consists of several successive LAMA modules. 
Furthermore, as shown in the framework diagram, following two core units with higher feature resolution in both encoding and decoding" + } + ] + } + ], + "index": 18 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "text", + "content": "17915" + } + ] + } + ], + "index": 19 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 57, + 69, + 302, + 223 + ], + "blocks": [ + { + "bbox": [ + 57, + 69, + 302, + 223 + ], + "lines": [ + { + "bbox": [ + 57, + 69, + 302, + 223 + ], + "spans": [ + { + "bbox": [ + 57, + 69, + 302, + 223 + ], + "type": "image", + "image_path": "f7098aef4493c7621c002027cefe7603ad9677eede56e3b8f7c71f5544d7aadc.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 148, + 230, + 212, + 240 + ], + "lines": [ + { + "bbox": [ + 148, + 230, + 212, + 240 + ], + "spans": [ + { + "bbox": [ + 148, + 230, + 212, + 240 + ], + "type": "text", + "content": "(a) Overall Pipeline" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 298, + 230, + 378, + 240 + ], + "lines": [ + { + "bbox": [ + 298, + 230, + 378, + 240 + ], + "spans": [ + { + "bbox": [ + 298, + 230, + 378, + 240 + ], + "type": "text", + "content": "(b) Encoder and Decoder" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 303, + 70, + 375, + 223 + ], + "blocks": [ + { + "bbox": [ + 303, + 70, + 375, + 223 + ], + "lines": [ + { + "bbox": [ + 303, + 70, + 375, + 223 + ], + "spans": [ + { + "bbox": [ + 303, + 70, + 375, + 223 + ], + "type": "image", + "image_path": "73afeb19e508ef0c79efdc91ea6a607b59b6b98d0fc4a83f1a08de6ea5cfcd98.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 55, + 261, + 555, + 283 + ], + "lines": [ + { + "bbox": [ + 55, + 261, + 555, + 283 + ], + "spans": [ + { + "bbox": [ + 55, + 261, + 555, + 283 + ], + "type": "text", + "content": "Figure 2. (a) The overall framework of ACL, based on the encoder-decoder architecture. (b) The core structure of the encoder-decoder, which includes the improved LA-based Mamba (LAMA) module. (c) The structure of the LAMA module. (d) The MDC module." 
+ } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 375, + 70, + 467, + 223 + ], + "blocks": [ + { + "bbox": [ + 375, + 70, + 467, + 223 + ], + "lines": [ + { + "bbox": [ + 375, + 70, + 467, + 223 + ], + "spans": [ + { + "bbox": [ + 375, + 70, + 467, + 223 + ], + "type": "image", + "image_path": "9ee6b870f30c4854b13c04ce5b0494c60e29179a3217150a08f1140cd8aff9ee.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 386, + 230, + 455, + 240 + ], + "lines": [ + { + "bbox": [ + 386, + 230, + 455, + 240 + ], + "spans": [ + { + "bbox": [ + 386, + 230, + 455, + 240 + ], + "type": "text", + "content": "(c) LA-based Mamba" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 471, + 228, + 542, + 247 + ], + "lines": [ + { + "bbox": [ + 471, + 228, + 542, + 247 + ], + "spans": [ + { + "bbox": [ + 471, + 228, + 542, + 247 + ], + "type": "text", + "content": "(d) Multi-Dilated Convolutions Module" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 467, + 71, + 548, + 223 + ], + "blocks": [ + { + "bbox": [ + 467, + 71, + 548, + 223 + ], + "lines": [ + { + "bbox": [ + 467, + 71, + 548, + 223 + ], + "spans": [ + { + "bbox": [ + 467, + 71, + 548, + 223 + ], + "type": "image", + "image_path": "ea2bdffdc71374fad9556828ff17a8829c90fc14006c21c4a18807ac64811baf.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 304, + 295, + 353 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 304, + 295, + 353 + ], + "spans": [ + { + "bbox": [ + 55, + 304, + 295, + 353 + ], + "type": "text", + "content": "phases, a local enhancement module is appended to augment the model's capability for local detail restoration. The structure of this local enhancement module is presented in Fig. 2 (d)." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 55, + 354, + 296, + 601 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 354, + 296, + 601 + ], + "spans": [ + { + "bbox": [ + 55, + 354, + 296, + 601 + ], + "type": "text", + "content": "Specifically, the network process begins with a degraded image " + }, + { + "bbox": [ + 55, + 354, + 296, + 601 + ], + "type": "inline_equation", + "content": "\\mathbf{I} \\in \\mathbb{R}^{3 \\times 256 \\times 256}" + }, + { + "bbox": [ + 55, + 354, + 296, + 601 + ], + "type": "text", + "content": ". The model first transforms this image through a convolution layer into a feature map " + }, + { + "bbox": [ + 55, + 354, + 296, + 601 + ], + "type": "inline_equation", + "content": "\\mathbf{I}' \\in \\mathbb{R}^{C \\times H \\times W}" + }, + { + "bbox": [ + 55, + 354, + 296, + 601 + ], + "type": "text", + "content": ", expanding the number of feature channels from 3 to " + }, + { + "bbox": [ + 55, + 354, + 296, + 601 + ], + "type": "inline_equation", + "content": "C" + }, + { + "bbox": [ + 55, + 354, + 296, + 601 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 55, + 354, + 296, + 601 + ], + "type": "inline_equation", + "content": "C = 32" + }, + { + "bbox": [ + 55, + 354, + 296, + 601 + ], + "type": "text", + "content": ". 
Subsequently, this feature map is further processed through three encoder units " + }, + { + "bbox": [ + 55, + 354, + 296, + 601 + ], + "type": "inline_equation", + "content": "E_{i}" + }, + { + "bbox": [ + 55, + 354, + 296, + 601 + ], + "type": "text", + "content": " and a local enhancement module, each unit encoding the feature maps at different scales into a latent space state, denoted as " + }, + { + "bbox": [ + 55, + 354, + 296, + 601 + ], + "type": "inline_equation", + "content": "\\mathbf{I}_e^i \\in \\mathbb{R}^{2^{(i-1)}C \\times \\frac{H}{2^{(i-1)}} \\times \\frac{W}{2^{(i-1)}}}" + }, + { + "bbox": [ + 55, + 354, + 296, + 601 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 55, + 354, + 296, + 601 + ], + "type": "inline_equation", + "content": "i = 1, 2, 3" + }, + { + "bbox": [ + 55, + 354, + 296, + 601 + ], + "type": "text", + "content": ". Here, " + }, + { + "bbox": [ + 55, + 354, + 296, + 601 + ], + "type": "inline_equation", + "content": "C, H" + }, + { + "bbox": [ + 55, + 354, + 296, + 601 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 55, + 354, + 296, + 601 + ], + "type": "inline_equation", + "content": "W" + }, + { + "bbox": [ + 55, + 354, + 296, + 601 + ], + "type": "text", + "content": " respectively represent the number of feature channels, the height, and the width of the feature maps. These processed feature maps are then passed to the decoder, where they are fused through direct or skip connections. In the decoder, the feature maps are gradually restored by decoding units " + }, + { + "bbox": [ + 55, + 354, + 296, + 601 + ], + "type": "inline_equation", + "content": "D_{i}" + }, + { + "bbox": [ + 55, + 354, + 296, + 601 + ], + "type": "text", + "content": ", generating " + }, + { + "bbox": [ + 55, + 354, + 296, + 601 + ], + "type": "inline_equation", + "content": "\\mathbf{I}_d^i \\in \\mathbb{R}^{2^{(i-1)}C \\times \\frac{H}{2^{(i-1)}} \\times \\frac{W}{2^{(i-1)}}}" + }, + { + "bbox": [ + 55, + 354, + 296, + 601 + ], + "type": "text", + "content": ". Each feature map is processed by the subsequent " + }, + { + "bbox": [ + 55, + 354, + 296, + 601 + ], + "type": "inline_equation", + "content": "D_{i}" + }, + { + "bbox": [ + 55, + 354, + 296, + 601 + ], + "type": "text", + "content": " and, following lateral convolution operations, yields multi-scale output results, with " + }, + { + "bbox": [ + 55, + 354, + 296, + 601 + ], + "type": "inline_equation", + "content": "D_{1}" + }, + { + "bbox": [ + 55, + 354, + 296, + 601 + ], + "type": "text", + "content": " producing the final restored image. Subsequently, we will elaborate on the two key modules that constitute ACL and the model's optimization function." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 55, + 610, + 265, + 623 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 610, + 265, + 623 + ], + "spans": [ + { + "bbox": [ + 55, + 610, + 265, + 623 + ], + "type": "text", + "content": "3.3. Linear Attention-based Mamba Module" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 55, + 629, + 296, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 629, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 629, + 296, + 713 + ], + "type": "text", + "content": "The original Mamba model is an auto-regressive model capable of efficiently capturing sequence dependencies, and it has been proven effective in modeling temporal causal sequence data. 
However, due to its unidirectional modeling approach, Mamba exhibits limitations when handling data with weak causality, such as images, necessitating further improvements to address these challenges. To overcome" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 304, + 555, + 399 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 304, + 555, + 399 + ], + "spans": [ + { + "bbox": [ + 313, + 304, + 555, + 399 + ], + "type": "text", + "content": "the limitations of unidirectional modeling, recent methods have proposed multi-directional cross-scanning techniques for image processing. Unlike these approaches, we embed linear attention into the Mamba structure, enabling it to establish global pixel dependencies when processing image data, and avoiding the need for recursive computation of the forget matrix " + }, + { + "bbox": [ + 313, + 304, + 555, + 399 + ], + "type": "inline_equation", + "content": "\\mathbf{A}_i" + }, + { + "bbox": [ + 313, + 304, + 555, + 399 + ], + "type": "text", + "content": ". The module structure is illustrated in Fig. 2 (c)." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 400, + 554, + 472 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 400, + 554, + 472 + ], + "spans": [ + { + "bbox": [ + 313, + 400, + 554, + 472 + ], + "type": "text", + "content": "The input to the model is a feature map " + }, + { + "bbox": [ + 313, + 400, + 554, + 472 + ], + "type": "inline_equation", + "content": "\\mathbf{F} \\in \\mathbb{R}^{B \\times C \\times H \\times W}" + }, + { + "bbox": [ + 313, + 400, + 554, + 472 + ], + "type": "text", + "content": ". First, " + }, + { + "bbox": [ + 313, + 400, + 554, + 472 + ], + "type": "inline_equation", + "content": "\\mathbf{F}" + }, + { + "bbox": [ + 313, + 400, + 554, + 472 + ], + "type": "text", + "content": " undergoes a dimensional transformation, resulting in " + }, + { + "bbox": [ + 313, + 400, + 554, + 472 + ], + "type": "inline_equation", + "content": "\\mathbf{F}' \\in \\mathbb{R}^{B \\times HW \\times C}" + }, + { + "bbox": [ + 313, + 400, + 554, + 472 + ], + "type": "text", + "content": ". Subsequently, " + }, + { + "bbox": [ + 313, + 400, + 554, + 472 + ], + "type": "inline_equation", + "content": "\\mathbf{F}'" + }, + { + "bbox": [ + 313, + 400, + 554, + 472 + ], + "type": "text", + "content": " is passed into two branches: a main branch and a residual branch. The operations of the residual branch can be expressed as follows:" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 386, + 484, + 553, + 498 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 386, + 484, + 553, + 498 + ], + "spans": [ + { + "bbox": [ + 386, + 484, + 553, + 498 + ], + "type": "interline_equation", + "content": "\\mathbf {F} _ {r e s} = \\sigma (\\operatorname {L i n e a r} \\left(\\mathbf {F} ^ {\\prime}\\right)), \\tag {6}", + "image_path": "f9141fae85ec314684c2df842a8a9c2020192d763ae02513075a97f1fca21be3.jpg" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 313, + 503, + 522, + 515 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 503, + 522, + 515 + ], + "spans": [ + { + "bbox": [ + 313, + 503, + 522, + 515 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 503, + 522, + 515 + ], + "type": "inline_equation", + "content": "\\sigma (\\cdot)" + }, + { + "bbox": [ + 313, + 503, + 522, + 515 + ], + "type": "text", + "content": " represents the SiLU activation function." 
+ } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 313, + 516, + 554, + 552 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 516, + 554, + 552 + ], + "spans": [ + { + "bbox": [ + 313, + 516, + 554, + 552 + ], + "type": "text", + "content": "The main branch comprises a linear mapping layer, a convolutional layer, and linear attention. The process can be expressed as follows:" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 381, + 563, + 553, + 577 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 381, + 563, + 553, + 577 + ], + "spans": [ + { + "bbox": [ + 381, + 563, + 553, + 577 + ], + "type": "interline_equation", + "content": "\\mathbf {F} _ {1} = \\operatorname {T o 4 D} \\left(\\operatorname {L i n e a r} \\left(\\mathbf {F} ^ {\\prime}\\right)\\right), \\tag {7}", + "image_path": "95ead55444177c3e77608b27adf351c1ee3e4bd838838a4282707ef642a2ab6f.jpg" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 391, + 582, + 553, + 596 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 391, + 582, + 553, + 596 + ], + "spans": [ + { + "bbox": [ + 391, + 582, + 553, + 596 + ], + "type": "interline_equation", + "content": "\\mathbf {F} _ {2} = \\sigma \\left(\\operatorname {C o n v} \\left(\\mathbf {F} _ {1}\\right)\\right), \\tag {8}", + "image_path": "4f55f72ad4624c8839e20f49af8a5d9726745a70a5abb0260cef8da950d4be19.jpg" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 313, + 601, + 554, + 649 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 601, + 554, + 649 + ], + "spans": [ + { + "bbox": [ + 313, + 601, + 554, + 649 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 601, + 554, + 649 + ], + "type": "inline_equation", + "content": "\\mathrm{To4D}(\\cdot)" + }, + { + "bbox": [ + 313, + 601, + 554, + 649 + ], + "type": "text", + "content": " indicates reshaping the feature map into a four-dimensional tensor to suit the convolution operation. Next, linear attention is applied to " + }, + { + "bbox": [ + 313, + 601, + 554, + 649 + ], + "type": "inline_equation", + "content": "\\mathbf{F}_2" + }, + { + "bbox": [ + 313, + 601, + 554, + 649 + ], + "type": "text", + "content": ". 
The specific process is expressed as follows:" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 397, + 659, + 553, + 673 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 397, + 659, + 553, + 673 + ], + "spans": [ + { + "bbox": [ + 397, + 659, + 553, + 673 + ], + "type": "interline_equation", + "content": "\\mathbf {F} _ {2} ^ {\\prime} = \\operatorname {T o 3 D} \\left(\\mathbf {F} _ {2}\\right), \\tag {9}", + "image_path": "affc5a05a18a349103832b99b174e3a2250548514a3ce3d3f7254223380ee389.jpg" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 337, + 681, + 553, + 695 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 337, + 681, + 553, + 695 + ], + "spans": [ + { + "bbox": [ + 337, + 681, + 553, + 695 + ], + "type": "interline_equation", + "content": "\\mathbf {Q} = \\phi \\left(\\operatorname {L i n e a r} \\left(\\mathbf {F} _ {2} ^ {\\prime}\\right)\\right), \\quad \\mathbf {K} = \\phi \\left(\\operatorname {L i n e a r} \\left(\\mathbf {F} _ {2} ^ {\\prime}\\right)\\right), \\tag {10}", + "image_path": "663baff26624df971910e50b2525d0486401655068e18d9cebb5b74e344c46a6.jpg" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 400, + 700, + 553, + 714 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 400, + 700, + 553, + 714 + ], + "spans": [ + { + "bbox": [ + 400, + 700, + 553, + 714 + ], + "type": "interline_equation", + "content": "\\mathbf {K} \\mathbf {V} = \\mathbf {K} ^ {\\top} \\cdot \\mathbf {F} _ {2} ^ {\\prime}, \\tag {11}", + "image_path": "7d60a3e4919959bfa27a29258e70e21179fdbc50d7ef902e9d7688532815d3a3.jpg" + } + ] + } + ], + "index": 23 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "type": "text", + "content": "17916" + } + ] + } + ], + "index": 24 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "bbox": [ + 137, + 72, + 294, + 85 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 137, + 72, + 294, + 85 + ], + "spans": [ + { + "bbox": [ + 137, + 72, + 294, + 85 + ], + "type": "interline_equation", + "content": "\\mathbf {F} _ {\\text {a t t e n}} = \\mathbf {Q} \\cdot \\mathbf {K V}, \\tag {12}", + "image_path": "cfd89ba4969571b1d38454cb2c5b46be7051c0398c04bf82a655e80011029127.jpg" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 108, + 90, + 294, + 103 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 90, + 294, + 103 + ], + "spans": [ + { + "bbox": [ + 108, + 90, + 294, + 103 + ], + "type": "interline_equation", + "content": "\\mathbf {F} _ {\\text {a t t e n}} = \\mathbf {F} _ {\\text {a t t e n}} + \\operatorname {C o n v} _ {\\text {p o s}} (\\mathbf {F} _ {2}), \\tag {13}", + "image_path": "6e247d09e9a0818c9bd849f850b55b2e3524e20bc38c88e25feb00c7667f5793.jpg" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 107, + 296, + 180 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 107, + 296, + 180 + ], + "spans": [ + { + "bbox": [ + 55, + 107, + 296, + 180 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 55, + 107, + 296, + 180 + ], + "type": "inline_equation", + "content": "\\mathrm{To3D}(\\cdot)" + }, + { + "bbox": [ + 55, + 107, + 296, + 180 + ], + "type": "text", + "content": " reshapes the feature map 
into a three-dimensional tensor for the linear attention layer, " + }, + { + "bbox": [ + 55, + 107, + 296, + 180 + ], + "type": "inline_equation", + "content": "\\phi (\\cdot)" + }, + { + "bbox": [ + 55, + 107, + 296, + 180 + ], + "type": "text", + "content": " is the kernel function, and " + }, + { + "bbox": [ + 55, + 107, + 296, + 180 + ], + "type": "inline_equation", + "content": "\\mathrm{Conv}_{pos}(\\cdot)" + }, + { + "bbox": [ + 55, + 107, + 296, + 180 + ], + "type": "text", + "content": " is a learnable position embedding function. Subsequently, " + }, + { + "bbox": [ + 55, + 107, + 296, + 180 + ], + "type": "inline_equation", + "content": "\\mathbf{F}_{atten}" + }, + { + "bbox": [ + 55, + 107, + 296, + 180 + ], + "type": "text", + "content": " is multiplied by " + }, + { + "bbox": [ + 55, + 107, + 296, + 180 + ], + "type": "inline_equation", + "content": "\\mathbf{F}_{res}" + }, + { + "bbox": [ + 55, + 107, + 296, + 180 + ], + "type": "text", + "content": ", followed by a linear mapping layer, yielding " + }, + { + "bbox": [ + 55, + 107, + 296, + 180 + ], + "type": "inline_equation", + "content": "\\mathbf{F}_{enhance}" + }, + { + "bbox": [ + 55, + 107, + 296, + 180 + ], + "type": "text", + "content": ", which is expressed as:" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 101, + 189, + 295, + 201 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 101, + 189, + 295, + 201 + ], + "spans": [ + { + "bbox": [ + 101, + 189, + 295, + 201 + ], + "type": "interline_equation", + "content": "\\mathbf {F} _ {\\text {e n h a n c e}} = \\operatorname {L i n e a r} \\left(\\mathbf {F} _ {\\text {a t t e n}} \\times \\mathbf {F} _ {\\text {r e s}}\\right). \\tag {14}", + "image_path": "af76a2dc486a06ec35e89e4b99e05324d0dbc17ed07f6f79973ea89e8975ded8.jpg" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 210, + 296, + 235 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 210, + 296, + 235 + ], + "spans": [ + { + "bbox": [ + 55, + 210, + 296, + 235 + ], + "type": "text", + "content": "Finally, " + }, + { + "bbox": [ + 55, + 210, + 296, + 235 + ], + "type": "inline_equation", + "content": "\\mathbf{F}_{enhance}" + }, + { + "bbox": [ + 55, + 210, + 296, + 235 + ], + "type": "text", + "content": " is processed through a simple feedforward neural network to produce the output of the LAMA." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 241, + 246, + 253 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 241, + 246, + 253 + ], + "spans": [ + { + "bbox": [ + 55, + 241, + 246, + 253 + ], + "type": "text", + "content": "3.4. Multi-Dilated Convolutions Module" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 258, + 296, + 484 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 258, + 296, + 484 + ], + "spans": [ + { + "bbox": [ + 55, + 258, + 296, + 484 + ], + "type": "text", + "content": "The LAMA module primarily functions to establish global feature connections, necessitating the learning of local features to enhance the quality of detail restoration. While some previous methods employed feature window-based self-attention strategies and achieved certain advancements, they incurred substantial computational costs. Consequently, we adopted a more straightforward and effective approach, namely the multi-scale dilated convolution module, the structure of which is depicted in Fig. 2(d). 
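Before detailing the MDC further, the LAMA pipeline just formalized in Eqs. 6-14 admits the following PyTorch-style sketch (a reconstruction, not the released code: the kernel phi is assumed to be elu+1, a common linear-attention choice, the layer shapes are assumptions, and no normalization term is included because Eq. 12 does not contain one):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LAMASketch(nn.Module):
    def __init__(self, c):
        super().__init__()
        self.res_proj = nn.Linear(c, c)                      # Eq. 6
        self.in_proj = nn.Linear(c, c)                       # Eq. 7
        self.conv = nn.Conv2d(c, c, 3, padding=1, groups=c)  # Eq. 8
        self.q_proj = nn.Linear(c, c)                        # Eq. 10
        self.k_proj = nn.Linear(c, c)
        self.pos = nn.Conv2d(c, c, 3, padding=1, groups=c)   # Eq. 13 (Conv_pos)
        self.out_proj = nn.Linear(c, c)                      # Eq. 14

    @staticmethod
    def phi(x):                    # assumed kernel function
        return F.elu(x) + 1.0

    def forward(self, x, h, w):    # x = F' with shape (B, HW, C)
        f_res = F.silu(self.res_proj(x))                                   # Eq. 6
        f1 = self.in_proj(x).transpose(1, 2).reshape(-1, x.size(2), h, w)  # Eq. 7, To4D
        f2 = F.silu(self.conv(f1))                                         # Eq. 8
        f2_3d = f2.flatten(2).transpose(1, 2)                              # Eq. 9, To3D
        q = self.phi(self.q_proj(f2_3d))                                   # Eq. 10
        k = self.phi(self.k_proj(f2_3d))
        kv = k.transpose(1, 2) @ f2_3d                 # Eq. 11: K^T F2', (B, C, C)
        attn = q @ kv                                  # Eq. 12: (B, HW, C)
        attn = attn + self.pos(f2).flatten(2).transpose(1, 2)              # Eq. 13
        return self.out_proj(attn * f_res)                                 # Eq. 14
```

Note the cost structure: KV is a C-by-C matrix, so the attention is linear in the number of pixels HW, which is the property the module relies on to model global dependencies cheaply.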
This module is equipped with filtering operations using various dilation factors aimed at capturing local features of different granularities within the image to enhance the detail restoration effects. The module comprises two dilated convolutions along with skip connections. The input feature, denoted as " + }, + { + "bbox": [ + 55, + 258, + 296, + 484 + ], + "type": "inline_equation", + "content": "\\mathbf{F}" + }, + { + "bbox": [ + 55, + 258, + 296, + 484 + ], + "type": "text", + "content": ", is split into two pathways: one passes through a convolution layer with a kernel size of 5 and a dilation rate of 2, and the other through a convolution layer with a kernel size of 3 and a dilation rate of 2, resulting in two feature sets:" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 126, + 486, + 294, + 498 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 126, + 486, + 294, + 498 + ], + "spans": [ + { + "bbox": [ + 126, + 486, + 294, + 498 + ], + "type": "interline_equation", + "content": "\\mathbf {F} _ {1} = \\operatorname {C o n v} _ {5 \\times 5, d = 2} (\\mathbf {F}), \\tag {15}", + "image_path": "2a44e6b9e7fbe5901c70aadf35db710c08d6a6233d583b146a4ff77d116fe747.jpg" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 126, + 503, + 294, + 517 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 126, + 503, + 294, + 517 + ], + "spans": [ + { + "bbox": [ + 126, + 503, + 294, + 517 + ], + "type": "interline_equation", + "content": "\\mathbf {F} _ {2} = \\operatorname {C o n v} _ {3 \\times 3, d = 2} (\\mathbf {F}). \\tag {16}", + "image_path": "0a80be20b5ce3d34c21e893e39582046973f2868de39f2ec62728c7a84146013.jpg" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 55, + 521, + 296, + 593 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 521, + 296, + 593 + ], + "spans": [ + { + "bbox": [ + 55, + 521, + 296, + 593 + ], + "type": "text", + "content": "These are then concatenated to form " + }, + { + "bbox": [ + 55, + 521, + 296, + 593 + ], + "type": "inline_equation", + "content": "\\mathbf{F}'" + }, + { + "bbox": [ + 55, + 521, + 296, + 593 + ], + "type": "text", + "content": ", which subsequently passes through a convolution layer with a kernel size of 1 to halve the channel count, aligning it with the dimensions of the input features. Furthermore, the input feature " + }, + { + "bbox": [ + 55, + 521, + 296, + 593 + ], + "type": "inline_equation", + "content": "\\mathbf{F}" + }, + { + "bbox": [ + 55, + 521, + 296, + 593 + ], + "type": "text", + "content": " is merged with " + }, + { + "bbox": [ + 55, + 521, + 296, + 593 + ], + "type": "inline_equation", + "content": "\\mathbf{F}'" + }, + { + "bbox": [ + 55, + 521, + 296, + 593 + ], + "type": "text", + "content": " via a skip connection, culminating in the output " + }, + { + "bbox": [ + 55, + 521, + 296, + 593 + ], + "type": "inline_equation", + "content": "\\mathbf{F}_{out}" + }, + { + "bbox": [ + 55, + 521, + 296, + 593 + ], + "type": "text", + "content": ". 
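This two-branch dilated design can be sketched compactly as follows (an illustrative reconstruction: padding values are chosen to preserve spatial size, and each branch is assumed to keep C channels so that the 1x1 convolution halves the concatenated 2C back to C, matching the text); the formal combining expression is given in Eq. 17 below:

```python
import torch
import torch.nn as nn

class MDCSketch(nn.Module):
    def __init__(self, c):
        super().__init__()
        self.branch5 = nn.Conv2d(c, c, 5, padding=4, dilation=2)  # Eq. 15
        self.branch3 = nn.Conv2d(c, c, 3, padding=2, dilation=2)  # Eq. 16
        self.fuse = nn.Conv2d(2 * c, c, 1)  # 1x1 conv halves the channel count

    def forward(self, f):
        f1, f2 = self.branch5(f), self.branch3(f)
        return self.fuse(torch.cat([f1, f2], dim=1)) + f          # skip connection
```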
The expressions are as follows:" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 93, + 605, + 295, + 618 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 93, + 605, + 295, + 618 + ], + "spans": [ + { + "bbox": [ + 93, + 605, + 295, + 618 + ], + "type": "interline_equation", + "content": "\\mathbf {F} _ {\\text {o u t}} = \\operatorname {C o n v} _ {1 \\times 1} \\left(\\operatorname {C o n c a t} \\left(\\mathbf {F} _ {1}, \\mathbf {F} _ {2}\\right)\\right) + \\mathbf {F}. \\tag {17}", + "image_path": "fc79079e40bf191341eb5352e3d473600b1cb5a155f0e3493942cb9a38dbbfd5.jpg" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 55, + 624, + 192, + 637 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 624, + 192, + 637 + ], + "spans": [ + { + "bbox": [ + 55, + 624, + 192, + 637 + ], + "type": "text", + "content": "3.5. Optimization Objectives" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 55, + 642, + 296, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 642, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 642, + 296, + 713 + ], + "type": "text", + "content": "The ACL model, during its decoding phase, outputs restoration results at three distinct scales and computes the corresponding loss values. Following prior methodologies, we calculate the loss values concurrently in both the spatial and frequency domains. The traditional L1 loss function is employed to measure the discrepancy between the restored" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 72, + 553, + 96 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 72, + 553, + 96 + ], + "spans": [ + { + "bbox": [ + 313, + 72, + 553, + 96 + ], + "type": "text", + "content": "outputs and the pristine reference images. Consequently, the total loss is computed as follows:" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 106, + 553, + 148 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 106, + 553, + 148 + ], + "spans": [ + { + "bbox": [ + 313, + 106, + 553, + 148 + ], + "type": "interline_equation", + "content": "L _ {t o t a l} = \\sum_ {i = 1} ^ {3} \\frac {1}{N _ {i}} \\left| \\mathbf {P} _ {i} - \\mathbf {I} _ {i} \\right| + \\lambda \\cdot \\sum_ {i = 1} ^ {3} \\frac {1}{N _ {i}} | f f t (\\mathbf {P} _ {i}) - f f t (\\mathbf {I} _ {i}) | \\tag {18}", + "image_path": "e2d49528da844d3b6968b68f799bd8482bf6ba18491b2d349b4999a460163958.jpg" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 313, + 148, + 555, + 232 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 148, + 555, + 232 + ], + "spans": [ + { + "bbox": [ + 313, + 148, + 555, + 232 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 148, + 555, + 232 + ], + "type": "inline_equation", + "content": "\\mathbf{P}_i" + }, + { + "bbox": [ + 313, + 148, + 555, + 232 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 148, + 555, + 232 + ], + "type": "inline_equation", + "content": "\\mathbf{I}_i" + }, + { + "bbox": [ + 313, + 148, + 555, + 232 + ], + "type": "text", + "content": " represent the restored result and the corresponding true image, respectively, and " + }, + { + "bbox": [ + 313, + 148, + 555, + 232 + ], + "type": "inline_equation", + "content": "N_i" + }, + { + "bbox": [ + 313, + 148, + 555, + 232 + ], + "type": "text", + "content": " denotes the total number of pixels in the image. 
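For concreteness, Eq. 18 can be sketched as below (interpreting both norms as per-pixel L1 means and fft as a 2D FFT; the remaining symbols are explained in the surrounding text):

```python
import torch

def total_loss(preds, gts, lam=0.1):
    # preds/gts: lists of three tensors, one pair per output scale P_i / I_i
    loss = 0.0
    for p, t in zip(preds, gts):
        n = p.numel()                                  # N_i
        loss = loss + (p - t).abs().sum() / n          # spatial-domain L1 term
        loss = loss + lam * (torch.fft.fft2(p) - torch.fft.fft2(t)).abs().sum() / n
    return loss
```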
" + }, + { + "bbox": [ + 313, + 148, + 555, + 232 + ], + "type": "inline_equation", + "content": "fft(\\cdot)" + }, + { + "bbox": [ + 313, + 148, + 555, + 232 + ], + "type": "text", + "content": " signifies the fast Fourier transform function. The hyperparameter " + }, + { + "bbox": [ + 313, + 148, + 555, + 232 + ], + "type": "inline_equation", + "content": "\\lambda" + }, + { + "bbox": [ + 313, + 148, + 555, + 232 + ], + "type": "text", + "content": ", utilized to balance the contributions of spatial domain loss and frequency domain loss to the total loss, is set at 0.1, following previous methods." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 313, + 245, + 395, + 258 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 245, + 395, + 258 + ], + "spans": [ + { + "bbox": [ + 313, + 245, + 395, + 258 + ], + "type": "text", + "content": "4. Experiments" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 313, + 265, + 554, + 338 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 265, + 554, + 338 + ], + "spans": [ + { + "bbox": [ + 313, + 265, + 554, + 338 + ], + "type": "text", + "content": "This section focuses on showcasing the effectiveness of our proposed ACL model in addressing various degraded image tasks, such as deraining and deblurring, evaluated across six test sets. We will outline the experimental procedures and datasets utilized, and confirm the impact of the proposed modules through a series of ablation studies." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 313, + 346, + 441, + 359 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 346, + 441, + 359 + ], + "spans": [ + { + "bbox": [ + 313, + 346, + 441, + 359 + ], + "type": "text", + "content": "4.1. Implementation Setup" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 313, + 363, + 555, + 567 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 363, + 555, + 567 + ], + "spans": [ + { + "bbox": [ + 313, + 363, + 555, + 567 + ], + "type": "text", + "content": "For each type of degradation, datasets are trained and evaluated independently. Unless specifically mentioned, all tests are conducted with the same hyperparameters. The training set undergoes random cropping of " + }, + { + "bbox": [ + 313, + 363, + 555, + 567 + ], + "type": "inline_equation", + "content": "256 \\times 256" + }, + { + "bbox": [ + 313, + 363, + 555, + 567 + ], + "type": "text", + "content": " patches and random flipping as a data augmentation strategy. To compare computational complexity with other methods, FLOPs are tested at the mentioned crop size. A cosine annealing strategy is adopted to gradually adjust the learning rate each epoch, setting a minimum learning rate limit of 1e-6. The batch size is set to 8, and the Adam optimizer is adopted. Following the evaluation of previous methods, for deraining, the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) are calculated on the YCbCr color mode. For other tasks, evaluation metrics are calculated on RGB color mode. All experiments are implemented in an environment equipped with the NVIDIA 24GB 3090 GPUs, based on the PyTorch." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 313, + 576, + 484, + 589 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 576, + 484, + 589 + ], + "spans": [ + { + "bbox": [ + 313, + 576, + 484, + 589 + ], + "type": "text", + "content": "4.2. 
Single Image Deraining Results" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 313, + 594, + 555, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 594, + 555, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 594, + 555, + 713 + ], + "type": "text", + "content": "We utilized several different rain removal datasets, including Rain100L/H [40], Rain200L/H, and DID-Data [46], to evaluate the model's ability to restore images with streaklike degradation elements. Each dataset was independently trained for 800 epochs, with an initial learning rate set to 1e-3. We compared our approach with previous methods, including the advanced Restormer (Transformer) [44], NAFNet (CNN) [2], IRNeXt (CNN) [6], and MambaIR (Mamba) [13]. The comparative results are shown in Table 1 and Table 2. Upon comparison of PSNR, it can" + } + ] + } + ], + "index": 21 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "text", + "content": "17917" + } + ] + } + ], + "index": 22 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 56, + 102, + 555, + 174 + ], + "blocks": [ + { + "bbox": [ + 55, + 71, + 555, + 93 + ], + "lines": [ + { + "bbox": [ + 55, + 71, + 555, + 93 + ], + "spans": [ + { + "bbox": [ + 55, + 71, + 555, + 93 + ], + "type": "text", + "content": "Table 1. Quantitative comparison results of the proposed model and seven other advanced models on the Rain100L and Rain100H. The larger the PSNR and SSIM values, the better the model effect." + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 56, + 102, + 555, + 174 + ], + "lines": [ + { + "bbox": [ + 56, + 102, + 555, + 174 + ], + "spans": [ + { + "bbox": [ + 56, + 102, + 555, + 174 + ], + "type": "table", + "html": "
<table><tr><td>Methods</td><td colspan="2">Restormer [44]</td><td colspan="2">MAXIM [33]</td><td colspan="2">DRT [24]</td><td colspan="2">MPRNet [43]</td><td colspan="2">DAWN [19]</td><td colspan="2">IRNeXt [6]</td><td colspan="2">MambaIR [13]</td><td colspan="2">ACL(Ours)</td></tr>
<tr><td>Dataset</td><td>PSNR</td><td>SSIM</td><td>PSNR</td><td>SSIM</td><td>PSNR</td><td>SSIM</td><td>PSNR</td><td>SSIM</td><td>PSNR</td><td>SSIM</td><td>PSNR</td><td>SSIM</td><td>PSNR</td><td>SSIM</td><td>PSNR</td><td>SSIM</td></tr>
<tr><td>Rain100L</td><td>38.99</td><td>0.978</td><td>38.06</td><td>0.977</td><td>37.61</td><td>0.948</td><td>37.84</td><td>0.959</td><td>36.73</td><td>0.971</td><td>38.24</td><td>0.972</td><td>38.78</td><td>0.977</td><td>39.18</td><td>0.983</td></tr>
<tr><td>Rain100H</td><td>31.46</td><td>0.904</td><td>30.81</td><td>0.903</td><td>29.47</td><td>0.846</td><td>30.41</td><td>0.874</td><td>30.62</td><td>0.896</td><td>31.64</td><td>0.902</td><td>30.62</td><td>0.893</td><td>32.22</td><td>0.920</td></tr>
<tr><td>Average</td><td>35.23</td><td>0.941</td><td>34.44</td><td>0.940</td><td>33.54</td><td>0.897</td><td>34.13</td><td>0.917</td><td>33.68</td><td>0.934</td><td>34.94</td><td>0.937</td><td>34.70</td><td>0.935</td><td>35.70</td><td>0.952</td></tr></table>
", + "image_path": "ad5a14a0b0a7f3978db0059b422efce4cf37a3eb84b3ea664ea49aade4aa5272.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "type": "table", + "bbox": [ + 56, + 214, + 555, + 415 + ], + "blocks": [ + { + "bbox": [ + 55, + 183, + 555, + 205 + ], + "lines": [ + { + "bbox": [ + 55, + 183, + 555, + 205 + ], + "spans": [ + { + "bbox": [ + 55, + 183, + 555, + 205 + ], + "type": "text", + "content": "Table 2. Quantitative comparison results of the proposed model with 13 other advanced models, including CNN and Transformer-based methods, on three datasets." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 56, + 214, + 555, + 415 + ], + "lines": [ + { + "bbox": [ + 56, + 214, + 555, + 415 + ], + "spans": [ + { + "bbox": [ + 56, + 214, + 555, + 415 + ], + "type": "table", + "html": "
<table><tr><td rowspan="2">Methods</td><td colspan="2">Rain200L</td><td colspan="2">Rain200H</td><td colspan="2">DID-Dataset</td><td colspan="2">Average</td></tr>
<tr><td>PSNR</td><td>SSIM</td><td>PSNR</td><td>SSIM</td><td>PSNR</td><td>SSIM</td><td>PSNR</td><td>SSIM</td></tr>
<tr><td>RESCAN [21]</td><td>33.82</td><td>0.955</td><td>26.22</td><td>0.822</td><td>32.60</td><td>0.927</td><td>30.88</td><td>0.901</td></tr>
<tr><td>PreNet [29]</td><td>37.12</td><td>0.976</td><td>29.04</td><td>0.890</td><td>33.37</td><td>0.919</td><td>33.18</td><td>0.928</td></tr>
<tr><td>DRT [24]</td><td>38.81</td><td>0.983</td><td>28.67</td><td>0.880</td><td>33.88</td><td>0.928</td><td>33.79</td><td>0.930</td></tr>
<tr><td>CCN [28]</td><td>38.26</td><td>0.981</td><td>29.99</td><td>0.914</td><td>32.13</td><td>0.924</td><td>33.46</td><td>0.940</td></tr>
<tr><td>Restormer [44]</td><td>40.58</td><td>0.986</td><td>30.83</td><td>0.914</td><td>33.19</td><td>0.926</td><td>34.87</td><td>0.942</td></tr>
<tr><td>Uformer [38]</td><td>40.20</td><td>0.986</td><td>30.31</td><td>0.911</td><td>34.36</td><td>0.933</td><td>34.96</td><td>0.943</td></tr>
<tr><td>MPRNet [43]</td><td>39.82</td><td>0.986</td><td>29.94</td><td>0.900</td><td>34.50</td><td>0.937</td><td>34.75</td><td>0.941</td></tr>
<tr><td>SmartAssign [37]</td><td>38.41</td><td>0.981</td><td>27.71</td><td>0.854</td><td>33.11</td><td>0.915</td><td>33.08</td><td>0.917</td></tr>
<tr><td>SFNet [7]</td><td>39.50</td><td>0.982</td><td>29.75</td><td>0.901</td><td>34.51</td><td>0.938</td><td>34.59</td><td>0.940</td></tr>
<tr><td>NAFNet [2]</td><td>39.48</td><td>0.982</td><td>29.19</td><td>0.888</td><td>34.69</td><td>0.937</td><td>34.45</td><td>0.936</td></tr>
<tr><td>ELFformer [18]</td><td>38.85</td><td>0.980</td><td>28.93</td><td>0.885</td><td>33.54</td><td>0.936</td><td>33.77</td><td>0.934</td></tr>
<tr><td>ESDNet [31]</td><td>39.85</td><td>0.986</td><td>30.01</td><td>0.913</td><td>34.52</td><td>0.939</td><td>34.79</td><td>0.946</td></tr>
<tr><td>MSGNN [35]</td><td>39.09</td><td>0.987</td><td>29.63</td><td>0.918</td><td>33.11</td><td>0.927</td><td>33.94</td><td>0.944</td></tr>
<tr><td>ACL(Ours)</td><td>40.74</td><td>0.988</td><td>30.45</td><td>0.916</td><td>34.81</td><td>0.938</td><td>35.33</td><td>0.947</td></tr></table>
", + "image_path": "9bb344b06a5d6da15853df480dc19f3622a0ff5cf0b9cbfb10d8d184f3e0da09.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 56, + 422, + 555, + 661 + ], + "blocks": [ + { + "bbox": [ + 56, + 422, + 555, + 661 + ], + "lines": [ + { + "bbox": [ + 56, + 422, + 555, + 661 + ], + "spans": [ + { + "bbox": [ + 56, + 422, + 555, + 661 + ], + "type": "image", + "image_path": "37815c6aa058b9a6405ab77600645ab7d332da0c3b3d60c0e460aa9998dfc842.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 55, + 672, + 555, + 696 + ], + "lines": [ + { + "bbox": [ + 55, + 672, + 555, + 696 + ], + "spans": [ + { + "bbox": [ + 55, + 672, + 555, + 696 + ], + "type": "text", + "content": "Figure 3. Visual comparison of ACL and other recent SOTA models on rainy image removal. The first two scenes contain slight rain streak degradation, while the last two scenes contain severe rain streak degradation." + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 749, + 318, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 749, + 318, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 749, + 318, + 757 + ], + "type": "text", + "content": "17918" + } + ] + } + ], + "index": 6 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 59, + 101, + 291, + 250 + ], + "blocks": [ + { + "bbox": [ + 55, + 71, + 297, + 94 + ], + "lines": [ + { + "bbox": [ + 55, + 71, + 297, + 94 + ], + "spans": [ + { + "bbox": [ + 55, + 71, + 297, + 94 + ], + "type": "text", + "content": "Table 3. Quantitative results of our method compared to recent approaches in blurry image restoration." + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 59, + 101, + 291, + 250 + ], + "lines": [ + { + "bbox": [ + 59, + 101, + 291, + 250 + ], + "spans": [ + { + "bbox": [ + 59, + 101, + 291, + 250 + ], + "type": "table", + "html": "
<table><tr><td rowspan="2">Methods</td><td colspan="4">GoPro</td></tr>
<tr><td>PSNR ↑</td><td>SSIM ↑</td><td>FLOPs(G) ↓</td><td>Param(M) ↓</td></tr>
<tr><td>MIMO [4]</td><td>32.68</td><td>0.959</td><td>617</td><td>16.1</td></tr>
<tr><td>DMPHN [47]</td><td>31.20</td><td>0.940</td><td>-</td><td>21.7</td></tr>
<tr><td>DBGAN [49]</td><td>31.10</td><td>0.942</td><td>759</td><td>11.6</td></tr>
<tr><td>MPRNet [43]</td><td>32.66</td><td>0.959</td><td>777</td><td>20.1</td></tr>
<tr><td>Restormer [44]</td><td>32.92</td><td>0.961</td><td>140</td><td>26.1</td></tr>
<tr><td>IRNeXt [6]</td><td>33.16</td><td>0.962</td><td>114</td><td>13.21</td></tr>
<tr><td>Stripformer [32]</td><td>33.08</td><td>0.962</td><td>170</td><td>20</td></tr>
<tr><td>SSAMAN [42]</td><td>33.53</td><td>0.965</td><td>165</td><td>18.3</td></tr>
<tr><td>LoFormer [26]</td><td>33.73</td><td>0.966</td><td>47</td><td>16.4</td></tr>
<tr><td>Ours</td><td>33.25</td><td>0.964</td><td>55</td><td>4.6</td></tr></table>
", + "image_path": "d6e63d5a57ae9aca7f31885283c2395d233a28af3f1df3b781cc7660b4132f36.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 58, + 258, + 135, + 304 + ], + "blocks": [ + { + "bbox": [ + 58, + 258, + 135, + 304 + ], + "lines": [ + { + "bbox": [ + 58, + 258, + 135, + 304 + ], + "spans": [ + { + "bbox": [ + 58, + 258, + 135, + 304 + ], + "type": "image", + "image_path": "7b1ad184733f9d1c1bdc3b844871a93fdd19a96b48c0b71c62d586b76608246b.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 71, + 304, + 124, + 313 + ], + "lines": [ + { + "bbox": [ + 71, + 304, + 124, + 313 + ], + "spans": [ + { + "bbox": [ + 71, + 304, + 124, + 313 + ], + "type": "text", + "content": "(a) Blur Region" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 137, + 258, + 214, + 304 + ], + "blocks": [ + { + "bbox": [ + 137, + 258, + 214, + 304 + ], + "lines": [ + { + "bbox": [ + 137, + 258, + 214, + 304 + ], + "spans": [ + { + "bbox": [ + 137, + 258, + 214, + 304 + ], + "type": "image", + "image_path": "d8bb188f0206f44d352c3f2a8973a6ee5b55486f6590550ff554f846e0f9b565.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 153, + 304, + 198, + 312 + ], + "lines": [ + { + "bbox": [ + 153, + 304, + 198, + 312 + ], + "spans": [ + { + "bbox": [ + 153, + 304, + 198, + 312 + ], + "type": "text", + "content": "(b) Reference" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 217, + 258, + 294, + 304 + ], + "blocks": [ + { + "bbox": [ + 217, + 258, + 294, + 304 + ], + "lines": [ + { + "bbox": [ + 217, + 258, + 294, + 304 + ], + "spans": [ + { + "bbox": [ + 217, + 258, + 294, + 304 + ], + "type": "image", + "image_path": "e08ef39dab8f576ee34504a74786659c508db7f5bbd9dbb360eed3ec41917282.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 233, + 304, + 274, + 312 + ], + "lines": [ + { + "bbox": [ + 233, + 304, + 274, + 312 + ], + "spans": [ + { + "bbox": [ + 233, + 304, + 274, + 312 + ], + "type": "text", + "content": "(c) DMPHN" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 58, + 313, + 135, + 358 + ], + "blocks": [ + { + "bbox": [ + 58, + 313, + 135, + 358 + ], + "lines": [ + { + "bbox": [ + 58, + 313, + 135, + 358 + ], + "spans": [ + { + "bbox": [ + 58, + 313, + 135, + 358 + ], + "type": "image", + "image_path": "8ffbd4209bdf41e4b00615e6fc2da32be63acc10047ac432ed1a180a5b2b865e.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 78, + 358, + 117, + 367 + ], + "lines": [ + { + "bbox": [ + 78, + 358, + 117, + 367 + ], + "spans": [ + { + "bbox": [ + 78, + 358, + 117, + 367 + ], + "type": "text", + "content": "(e) IRNeXt" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 55, + 381, + 295, + 405 + ], + "lines": [ + { + "bbox": [ + 55, + 381, + 295, + 405 + ], + "spans": [ + { + "bbox": [ + 55, + 381, + 295, + 405 + ], + "type": "text", + "content": "Figure 4. Visual results of ACL and four other advanced models on motion blurred image restoration." 
+ } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_caption" + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 137, + 313, + 215, + 357 + ], + "blocks": [ + { + "bbox": [ + 137, + 313, + 215, + 357 + ], + "lines": [ + { + "bbox": [ + 137, + 313, + 215, + 357 + ], + "spans": [ + { + "bbox": [ + 137, + 313, + 215, + 357 + ], + "type": "image", + "image_path": "89f95e3017c67732be8fad5db1b74c8e4226d904de5a21aa699ac8ae56ee76db.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 153, + 358, + 198, + 367 + ], + "lines": [ + { + "bbox": [ + 153, + 358, + 198, + 367 + ], + "spans": [ + { + "bbox": [ + 153, + 358, + 198, + 367 + ], + "type": "text", + "content": "(f) Restormer" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_caption" + } + ], + "index": 10 + }, + { + "type": "image", + "bbox": [ + 217, + 313, + 294, + 357 + ], + "blocks": [ + { + "bbox": [ + 217, + 313, + 294, + 357 + ], + "lines": [ + { + "bbox": [ + 217, + 313, + 294, + 357 + ], + "spans": [ + { + "bbox": [ + 217, + 313, + 294, + 357 + ], + "type": "image", + "image_path": "65db8bcb34f6470354d3dfa31118106620092a3c69c4a6b9ad9508a49379462c.jpg" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 239, + 358, + 268, + 368 + ], + "lines": [ + { + "bbox": [ + 239, + 358, + 268, + 368 + ], + "spans": [ + { + "bbox": [ + 239, + 358, + 268, + 368 + ], + "type": "text", + "content": "(g) Ours" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_caption" + } + ], + "index": 12 + }, + { + "bbox": [ + 55, + 425, + 296, + 486 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 425, + 296, + 486 + ], + "spans": [ + { + "bbox": [ + 55, + 425, + 296, + 486 + ], + "type": "text", + "content": "be observed that our method outperforms the other compared methods, except for the powerful Transformer-based Restormer on the Rain200H. Additionally, Fig. 3 further illustrates the visual comparison results, where ACL demonstrates superior performance in restoring image details." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 55, + 491, + 231, + 506 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 491, + 231, + 506 + ], + "spans": [ + { + "bbox": [ + 55, + 491, + 231, + 506 + ], + "type": "text", + "content": "4.3. Single Image Deblurring Results" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 55, + 509, + 295, + 654 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 509, + 295, + 654 + ], + "spans": [ + { + "bbox": [ + 55, + 509, + 295, + 654 + ], + "type": "text", + "content": "We conducted evaluations on motion blur image restoration using the GoPro dataset [27], which includes 2,103 training images and 1,111 testing images. The compared methods include the recently proposed 9 advanced methods. In Table 3, we present the comparative results of various metrics on this dataset. As shown, ACL achieves advanced performance while maintaining a low parameter count and low FLOPs. Additionally, Fig. 4 displays visual comparison results with other methods. We selected critical numerical information within the images, and it can be observed that ACL also exhibits good performance in restoring motion-blurred images." 
+ } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 55, + 660, + 157, + 672 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 660, + 157, + 672 + ], + "spans": [ + { + "bbox": [ + 55, + 660, + 157, + 672 + ], + "type": "text", + "content": "4.4. Ablation Studies" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 55, + 677, + 296, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 677, + 296, + 715 + ], + "spans": [ + { + "bbox": [ + 55, + 677, + 296, + 715 + ], + "type": "text", + "content": "To further understand the contributions of each module in the ACL model and other factors affecting model performance, we conducted a unified experiment on the" + } + ] + } + ], + "index": 19 + }, + { + "type": "table", + "bbox": [ + 321, + 102, + 547, + 156 + ], + "blocks": [ + { + "bbox": [ + 313, + 71, + 555, + 94 + ], + "lines": [ + { + "bbox": [ + 313, + 71, + 555, + 94 + ], + "spans": [ + { + "bbox": [ + 313, + 71, + 555, + 94 + ], + "type": "text", + "content": "Table 4. Comparison of ablation experiments between two modules on Rain100L." + } + ] + } + ], + "index": 20, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 321, + 102, + 547, + 156 + ], + "lines": [ + { + "bbox": [ + 321, + 102, + 547, + 156 + ], + "spans": [ + { + "bbox": [ + 321, + 102, + 547, + 156 + ], + "type": "table", + "html": "
<table><tr><td>Settings</td><td>PSNR ↑</td><td>SSIM ↑</td></tr>
<tr><td>Baseline</td><td>38.24</td><td>0.978</td></tr>
<tr><td>Baseline + LAMA</td><td>38.89</td><td>0.981</td></tr>
<tr><td>Baseline + LAMA + MDC</td><td>39.18</td><td>0.983</td></tr></table>
", + "image_path": "3dab61c52e9fc94a0e6ce2f09b17d81ad2323ffb04fe3555bbfcd488d6af4ee2.jpg" + } + ] + } + ], + "index": 21, + "angle": 0, + "type": "table_body" + } + ], + "index": 21 + }, + { + "bbox": [ + 313, + 175, + 555, + 248 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 175, + 555, + 248 + ], + "spans": [ + { + "bbox": [ + 313, + 175, + 555, + 248 + ], + "type": "text", + "content": "Rain100L/H rain removal dataset. Specifically, we performed ablation studies on the two main modules of the model. Additionally, to verify that the linear attention capability can be restored using the Mamba structure, we conducted experiments by replacing LAMA with other structures to compare the results under different configurations." + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 313, + 255, + 459, + 266 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 255, + 459, + 266 + ], + "spans": [ + { + "bbox": [ + 313, + 255, + 459, + 266 + ], + "type": "text", + "content": "4.4.1. LAMA and MDC modules:" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 313, + 270, + 555, + 463 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 270, + 555, + 463 + ], + "spans": [ + { + "bbox": [ + 313, + 270, + 555, + 463 + ], + "type": "text", + "content": "To validate the roles of the two main modules in ACL, namely LAMA and MDC, as well as their respective importance, we conducted ablation experiments. We removed the MDC module and replaced the core encoder/decoder modules with the structure shown in Fig. 1(b), where the Transformer block in the baseline model is implemented based on Linear Attention. We then gradually replaced the encoder-decoder modules with LAMA and added the MDC module to the baseline model, resulting in two different configurations, which were trained and tested separately. The experimental results are shown in Table 4. By comparing the \"Baseline\" and \"Baseline+LAMA\" configurations, it is evident that the Mamba structure, implemented with linear attention, achieves better results, proving the potential of the Mamba structure to activate linear attention. Besides, adding the MDC further enhances the model's performance." + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 313, + 471, + 518, + 482 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 471, + 518, + 482 + ], + "spans": [ + { + "bbox": [ + 313, + 471, + 518, + 482 + ], + "type": "text", + "content": "4.4.2. Different Mechanisms for Core Modules:" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 313, + 486, + 556, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 486, + 556, + 714 + ], + "spans": [ + { + "bbox": [ + 313, + 486, + 556, + 714 + ], + "type": "text", + "content": "To validate the effectiveness of LAMA, we replaced the LA structure within LAMA with the original one-dimensional scanning SSM structure and the bidirectional scanning SSM structure for verification. Additionally, replacing LAMA with a standard LA structure resulted in a new configuration, referred to as the Baseline model. The results obtained from these various configurations are presented in Table 5. On one hand, the proposed method outperforms both the Baseline and the strategies employing unidirectional and multi-directional scanning for modeling. 
On the other hand, compared to the unidirectional Mamba model, the multi-directional scanning strategy demonstrates superior performance, indicating that unidirectional modeling is not optimal for image data. Furthermore, Fig. 5 provides visual examples corresponding to each configuration. It can be observed that while the SSM-based scanning methods are capable of removing rainy degradation elements, they result in significant loss of image details. In contrast, our method achieves superior visual outcomes." + } + ] + } + ], + "index": 26 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 318, + 758 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 318, + 758 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 318, + 758 + ], + "type": "text", + "content": "17919" + } + ] + } + ], + "index": 27 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 63, + 102, + 287, + 186 + ], + "blocks": [ + { + "bbox": [ + 55, + 71, + 295, + 93 + ], + "lines": [ + { + "bbox": [ + 55, + 71, + 295, + 93 + ], + "spans": [ + { + "bbox": [ + 55, + 71, + 295, + 93 + ], + "type": "text", + "content": "Table 5. Quantitative results of different mechanisms used in the core module on rainy streak removal." + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 63, + 102, + 287, + 186 + ], + "lines": [ + { + "bbox": [ + 63, + 102, + 287, + 186 + ], + "spans": [ + { + "bbox": [ + 63, + 102, + 287, + 186 + ], + "type": "table", + "html": "
<table><tr><td rowspan="2">Settings</td><td colspan="2">Rain100L</td><td colspan="2">Rain100H</td></tr>
<tr><td>PSNR</td><td>SSIM</td><td>PSNR</td><td>SSIM</td></tr>
<tr><td>Baseline</td><td>38.24</td><td>0.978</td><td>31.02</td><td>0.913</td></tr>
<tr><td>Mamba (w. 1D Scan)</td><td>36.47</td><td>0.959</td><td>29.72</td><td>0.887</td></tr>
<tr><td>Mamba (w. 2D Scan)</td><td>38.07</td><td>0.977</td><td>30.93</td><td>0.907</td></tr>
<tr><td>Ours</td><td>39.18</td><td>0.983</td><td>32.22</td><td>0.920</td></tr></table>
", + "image_path": "08784f57f0c7dcb9de3c6b547df8597cd1a1b86e0569d36db9f25b8382453a40.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 57, + 196, + 136, + 249 + ], + "blocks": [ + { + "bbox": [ + 57, + 196, + 136, + 249 + ], + "lines": [ + { + "bbox": [ + 57, + 196, + 136, + 249 + ], + "spans": [ + { + "bbox": [ + 57, + 196, + 136, + 249 + ], + "type": "image", + "image_path": "c1962d22190b23d3ed72c9670333e8887485c1a4d67cadeceaf8cb51fb011c6d.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 69, + 250, + 125, + 258 + ], + "lines": [ + { + "bbox": [ + 69, + 250, + 125, + 258 + ], + "spans": [ + { + "bbox": [ + 69, + 250, + 125, + 258 + ], + "type": "text", + "content": "(a) Rainy Image" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 136, + 196, + 216, + 249 + ], + "blocks": [ + { + "bbox": [ + 136, + 196, + 216, + 249 + ], + "lines": [ + { + "bbox": [ + 136, + 196, + 216, + 249 + ], + "spans": [ + { + "bbox": [ + 136, + 196, + 216, + 249 + ], + "type": "image", + "image_path": "fddcf20267e95036fb0977ecc48fb2158fc462def51c8fdfbe27a04bed83f37c.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 146, + 249, + 205, + 258 + ], + "lines": [ + { + "bbox": [ + 146, + 249, + 205, + 258 + ], + "spans": [ + { + "bbox": [ + 146, + 249, + 205, + 258 + ], + "type": "text", + "content": "(b) Ground Truth" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 216, + 196, + 294, + 249 + ], + "blocks": [ + { + "bbox": [ + 216, + 196, + 294, + 249 + ], + "lines": [ + { + "bbox": [ + 216, + 196, + 294, + 249 + ], + "spans": [ + { + "bbox": [ + 216, + 196, + 294, + 249 + ], + "type": "image", + "image_path": "eab1499fe7a3b50ffb30b0b8d4b4e54ad984a41d25e252c2321f3a62c301ec7e.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 231, + 249, + 277, + 258 + ], + "lines": [ + { + "bbox": [ + 231, + 249, + 277, + 258 + ], + "spans": [ + { + "bbox": [ + 231, + 249, + 277, + 258 + ], + "type": "text", + "content": "(c) LA Block" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 57, + 259, + 136, + 312 + ], + "blocks": [ + { + "bbox": [ + 57, + 259, + 136, + 312 + ], + "lines": [ + { + "bbox": [ + 57, + 259, + 136, + 312 + ], + "spans": [ + { + "bbox": [ + 57, + 259, + 136, + 312 + ], + "type": "image", + "image_path": "dd4adae9e59865b8b10aec2712fe7a09f10cb5b78fe0e4fefa7d18e0e8477e3a.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 63, + 313, + 131, + 322 + ], + "lines": [ + { + "bbox": [ + 63, + 313, + 131, + 322 + ], + "spans": [ + { + "bbox": [ + 63, + 313, + 131, + 322 + ], + "type": "text", + "content": "(d) 1D scan Mamba" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 55, + 336, + 295, + 359 + ], + "lines": [ + { + "bbox": [ + 55, + 336, + 295, + 359 + ], + "spans": [ + { + "bbox": [ + 55, + 336, + 295, + 359 + ], + "type": "text", + "content": "Figure 5. Visual results of the encoder/decoder blocks adopting different mechanisms." 
+ } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_caption" + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 137, + 259, + 216, + 312 + ], + "blocks": [ + { + "bbox": [ + 137, + 259, + 216, + 312 + ], + "lines": [ + { + "bbox": [ + 137, + 259, + 216, + 312 + ], + "spans": [ + { + "bbox": [ + 137, + 259, + 216, + 312 + ], + "type": "image", + "image_path": "37c0a6f77facfa54b45ca7c59e384fc385eb3a5dd5f8cfb631de832e997982b0.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 141, + 313, + 209, + 322 + ], + "lines": [ + { + "bbox": [ + 141, + 313, + 209, + 322 + ], + "spans": [ + { + "bbox": [ + 141, + 313, + 209, + 322 + ], + "type": "text", + "content": "(e) 2D scan Mamba" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_caption" + } + ], + "index": 10 + }, + { + "type": "image", + "bbox": [ + 216, + 259, + 294, + 312 + ], + "blocks": [ + { + "bbox": [ + 216, + 259, + 294, + 312 + ], + "lines": [ + { + "bbox": [ + 216, + 259, + 294, + 312 + ], + "spans": [ + { + "bbox": [ + 216, + 259, + 294, + 312 + ], + "type": "image", + "image_path": "16e07557232a2c55b7606977a7c8dd99488ea5d5dcdfdd2de206934c59680d5f.jpg" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 230, + 313, + 279, + 322 + ], + "lines": [ + { + "bbox": [ + 230, + 313, + 279, + 322 + ], + "spans": [ + { + "bbox": [ + 230, + 313, + 279, + 322 + ], + "type": "text", + "content": "(f) LA Mamba" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_caption" + } + ], + "index": 12 + }, + { + "type": "table", + "bbox": [ + 69, + 413, + 281, + 476 + ], + "blocks": [ + { + "bbox": [ + 55, + 371, + 295, + 403 + ], + "lines": [ + { + "bbox": [ + 55, + 371, + 295, + 403 + ], + "spans": [ + { + "bbox": [ + 55, + 371, + 295, + 403 + ], + "type": "text", + "content": "Table 6. Evaluation results of different " + }, + { + "bbox": [ + 55, + 371, + 295, + 403 + ], + "type": "inline_equation", + "content": "N_{i}" + }, + { + "bbox": [ + 55, + 371, + 295, + 403 + ], + "type": "text", + "content": " configurations on the Rain100L dataset in ACL. " + }, + { + "bbox": [ + 55, + 371, + 295, + 403 + ], + "type": "inline_equation", + "content": "\\star" + }, + { + "bbox": [ + 55, + 371, + 295, + 403 + ], + "type": "text", + "content": " represents the combination we adopted." + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 69, + 413, + 281, + 476 + ], + "lines": [ + { + "bbox": [ + 69, + 413, + 281, + 476 + ], + "spans": [ + { + "bbox": [ + 69, + 413, + 281, + 476 + ], + "type": "table", + "html": "
<table><tr><td>(N1,N2,N3)</td><td>PSNR↑</td><td>SSIM↑</td></tr><tr><td>(3,3,3)</td><td>38.95</td><td>0.978</td></tr><tr><td>(4,4,4)</td><td>39.03</td><td>0.980</td></tr><tr><td>(6,3,3)★</td><td>39.18</td><td>0.983</td></tr><tr><td>(6,6,6)</td><td>39.19</td><td>0.983</td></tr></table>
", + "image_path": "98411ba000f4827398497dadf851b5619f7c1e3dd65f310dcf1d83aae938cda8.jpg" + } + ] + } + ], + "index": 16, + "angle": 0, + "type": "table_body" + } + ], + "index": 16 + }, + { + "bbox": [ + 55, + 497, + 293, + 510 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 497, + 293, + 510 + ], + "spans": [ + { + "bbox": [ + 55, + 497, + 293, + 510 + ], + "type": "text", + "content": "4.4.3. The Impact of the Number of Encoders/Decoders" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 54, + 512, + 295, + 668 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 512, + 295, + 668 + ], + "spans": [ + { + "bbox": [ + 54, + 512, + 295, + 668 + ], + "type": "text", + "content": "As mentioned in the above, a crucial hyper-parameter in the model is the number of feature processing blocks at each encoding/decoding stage, denoted as " + }, + { + "bbox": [ + 54, + 512, + 295, + 668 + ], + "type": "inline_equation", + "content": "N_{i}" + }, + { + "bbox": [ + 54, + 512, + 295, + 668 + ], + "type": "text", + "content": ". To investigate the impact of " + }, + { + "bbox": [ + 54, + 512, + 295, + 668 + ], + "type": "inline_equation", + "content": "N_{i}" + }, + { + "bbox": [ + 54, + 512, + 295, + 668 + ], + "type": "text", + "content": ", we conducted a series of experiments on the deraining dataset using different combinations, with the results presented in Table 6. Additionally, Fig. 6 illustrates the output images generated by the models with various configurations. The numerical results indicate that increasing " + }, + { + "bbox": [ + 54, + 512, + 295, + 668 + ], + "type": "inline_equation", + "content": "N_{i}" + }, + { + "bbox": [ + 54, + 512, + 295, + 668 + ], + "type": "text", + "content": " contributes to performance improvement, but the effect is limited. Notably, a significant enhancement is observed when " + }, + { + "bbox": [ + 54, + 512, + 295, + 668 + ], + "type": "inline_equation", + "content": "N_{1}" + }, + { + "bbox": [ + 54, + 512, + 295, + 668 + ], + "type": "text", + "content": " is increased, suggesting that learning high-resolution feature maps plays a critical role in improving the model's recovery performance." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 55, + 674, + 295, + 686 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 674, + 295, + 686 + ], + "spans": [ + { + "bbox": [ + 55, + 674, + 295, + 686 + ], + "type": "text", + "content": "4.4.4. 
The Dilation Rate of Convolution in MDC Module" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 55, + 689, + 295, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 689, + 295, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 689, + 295, + 713 + ], + "type": "text", + "content": "We propose a local feature enhancement module, MDC, which includes two dilated convolution layers, as shown in" + } + ] + } + ], + "index": 20 + }, + { + "type": "image", + "bbox": [ + 316, + 70, + 375, + 149 + ], + "blocks": [ + { + "bbox": [ + 316, + 70, + 375, + 149 + ], + "lines": [ + { + "bbox": [ + 316, + 70, + 375, + 149 + ], + "spans": [ + { + "bbox": [ + 316, + 70, + 375, + 149 + ], + "type": "image", + "image_path": "1875a8b22794270df98e65352ba49d08c774fedf56a75a45dc43b159581210b1.jpg" + } + ] + } + ], + "index": 21, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 336, + 150, + 354, + 159 + ], + "lines": [ + { + "bbox": [ + 336, + 150, + 354, + 159 + ], + "spans": [ + { + "bbox": [ + 336, + 150, + 354, + 159 + ], + "type": "text", + "content": "Input" + } + ] + } + ], + "index": 22, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 362, + 173, + 505, + 185 + ], + "lines": [ + { + "bbox": [ + 362, + 173, + 505, + 185 + ], + "spans": [ + { + "bbox": [ + 362, + 173, + 505, + 185 + ], + "type": "text", + "content": "Figure 6. Visual Results on Various " + }, + { + "bbox": [ + 362, + 173, + 505, + 185 + ], + "type": "inline_equation", + "content": "N_{i}" + } + ] + } + ], + "index": 29, + "angle": 0, + "type": "image_caption" + } + ], + "index": 21 + }, + { + "type": "image", + "bbox": [ + 376, + 71, + 435, + 148 + ], + "blocks": [ + { + "bbox": [ + 376, + 71, + 435, + 148 + ], + "lines": [ + { + "bbox": [ + 376, + 71, + 435, + 148 + ], + "spans": [ + { + "bbox": [ + 376, + 71, + 435, + 148 + ], + "type": "image", + "image_path": "839f75d4d7639e7277ccb9e8dafd60c2bfeeb00555847031c6ec9d169b31e7c7.jpg" + } + ] + } + ], + "index": 23, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 394, + 150, + 416, + 159 + ], + "lines": [ + { + "bbox": [ + 394, + 150, + 416, + 159 + ], + "spans": [ + { + "bbox": [ + 394, + 150, + 416, + 159 + ], + "type": "text", + "content": "(4,4,4)" + } + ] + } + ], + "index": 24, + "angle": 0, + "type": "image_caption" + } + ], + "index": 23 + }, + { + "type": "image", + "bbox": [ + 435, + 71, + 494, + 148 + ], + "blocks": [ + { + "bbox": [ + 435, + 71, + 494, + 148 + ], + "lines": [ + { + "bbox": [ + 435, + 71, + 494, + 148 + ], + "spans": [ + { + "bbox": [ + 435, + 71, + 494, + 148 + ], + "type": "image", + "image_path": "5c3add7916c909abc38bbb4c408594f56e53af0c91358ce01a64077e95eb6f15.jpg" + } + ] + } + ], + "index": 25, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 454, + 150, + 475, + 159 + ], + "lines": [ + { + "bbox": [ + 454, + 150, + 475, + 159 + ], + "spans": [ + { + "bbox": [ + 454, + 150, + 475, + 159 + ], + "type": "text", + "content": "(6,6,6)" + } + ] + } + ], + "index": 26, + "angle": 0, + "type": "image_caption" + } + ], + "index": 25 + }, + { + "type": "image", + "bbox": [ + 495, + 71, + 553, + 148 + ], + "blocks": [ + { + "bbox": [ + 495, + 71, + 553, + 148 + ], + "lines": [ + { + "bbox": [ + 495, + 71, + 553, + 148 + ], + "spans": [ + { + "bbox": [ + 495, + 71, + 553, + 148 + ], + "type": "image", + "image_path": "0dedf5c2df23be8be95ae911bbe953ce4845bb353d3404a5cbcdc3623e74c6e7.jpg" + } + ] + } + ], + "index": 27, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 513, + 150, + 
534, + 159 + ], + "lines": [ + { + "bbox": [ + 513, + 150, + 534, + 159 + ], + "spans": [ + { + "bbox": [ + 513, + 150, + 534, + 159 + ], + "type": "text", + "content": "(6,3,3)" + } + ] + } + ], + "index": 28, + "angle": 0, + "type": "image_caption" + } + ], + "index": 27 + }, + { + "type": "table", + "bbox": [ + 330, + 248, + 537, + 300 + ], + "blocks": [ + { + "bbox": [ + 313, + 206, + 554, + 239 + ], + "lines": [ + { + "bbox": [ + 313, + 206, + 554, + 239 + ], + "spans": [ + { + "bbox": [ + 313, + 206, + 554, + 239 + ], + "type": "text", + "content": "Table 7. The evaluation results of the MDC module's convolution operations with varying dilation rates on the Rain100L dataset. " + }, + { + "bbox": [ + 313, + 206, + 554, + 239 + ], + "type": "inline_equation", + "content": "\\star" + }, + { + "bbox": [ + 313, + 206, + 554, + 239 + ], + "type": "text", + "content": " represents the combination we adopted." + } + ] + } + ], + "index": 30, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 330, + 248, + 537, + 300 + ], + "lines": [ + { + "bbox": [ + 330, + 248, + 537, + 300 + ], + "spans": [ + { + "bbox": [ + 330, + 248, + 537, + 300 + ], + "type": "table", + "html": "
<table><tr><td>dilation</td><td>PSNR↑</td><td>SSIM↑</td></tr><tr><td>1</td><td>38.12</td><td>0.981</td></tr><tr><td>2★</td><td>39.18</td><td>0.983</td></tr><tr><td>3</td><td>39.10</td><td>0.981</td></tr></table>
", + "image_path": "4d800bdaf8a2e14ecf4b5e4540e723abdb1884bd1943165818737c07236a01d4.jpg" + } + ] + } + ], + "index": 31, + "angle": 0, + "type": "table_body" + } + ], + "index": 31 + }, + { + "bbox": [ + 313, + 331, + 555, + 415 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 331, + 555, + 415 + ], + "spans": [ + { + "bbox": [ + 313, + 331, + 555, + 415 + ], + "type": "text", + "content": "Fig. 2 (d). We further investigate the impact of dilation rates on the module's performance. As shown in Table 7, the results indicate that the non-dilated convolution setting " + }, + { + "bbox": [ + 313, + 331, + 555, + 415 + ], + "type": "inline_equation", + "content": "(d = 1)" + }, + { + "bbox": [ + 313, + 331, + 555, + 415 + ], + "type": "text", + "content": " performs worse than the two dilated configurations. As the dilation rate increases, performance improves to varying degrees, with the best overall performance observed at " + }, + { + "bbox": [ + 313, + 331, + 555, + 415 + ], + "type": "inline_equation", + "content": "d = 2" + }, + { + "bbox": [ + 313, + 331, + 555, + 415 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 32 + }, + { + "bbox": [ + 313, + 451, + 467, + 464 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 451, + 467, + 464 + ], + "spans": [ + { + "bbox": [ + 313, + 451, + 467, + 464 + ], + "type": "text", + "content": "4.5. Conclusion and Limitations" + } + ] + } + ], + "index": 33 + }, + { + "bbox": [ + 313, + 474, + 555, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 474, + 555, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 474, + 555, + 715 + ], + "type": "text", + "content": "We introduced ACL, a novel image restoration model that addresses the limitations of traditional CNNs and Transformer-based approaches in handling global receptive fields and computational efficiency. By integrating linear attention into the Mamba structure, we developed the LAMA module, which enhances global feature dependencies with linear computational complexity. Additionally, the MDC module was designed to improve local detail restoration through multi-scale dilated convolutions. Our experiments confirmed that ACL achieves promising performance in deraining tasks, demonstrating its effectiveness in both quantitative metrics and visual quality. Furthermore, our work provides a new perspective on leveraging the Mamba structure in the IR domain. Nevertheless, our method also has some limitations. The ACL model does not have an advantage over CNN-based models when processing large-sized images. Moreover, in the image deblurring task, there is still a gap between ACL and SOTA Transformer models. In the future, the ACL model still has room for further optimization to adapt to more IR tasks."
+ } + ] + } + ], + "index": 34 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "text", + "content": "17920" + } + ] + } + ], + "index": 35 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 153, + 85 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 72, + 153, + 85 + ], + "spans": [ + { + "bbox": [ + 56, + 72, + 153, + 85 + ], + "type": "text", + "content": "Acknowledgments" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 90, + 297, + 179 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 90, + 297, + 179 + ], + "spans": [ + { + "bbox": [ + 55, + 90, + 297, + 179 + ], + "type": "text", + "content": "This work was supported by the National Key R&D Program of China (No.2023YFB4502804), the National Science Fund for Distinguished Young Scholars (No.62025603), the National Natural Science Foundation of China (No. U22B2051, No. 62302411), the Natural Science Foundation of Fujian Province of China (No.2021J06003), and China Postdoctoral Science Foundation (No. 2023M732948)." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 201, + 115, + 213 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 201, + 115, + 213 + ], + "spans": [ + { + "bbox": [ + 56, + 201, + 115, + 213 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 57, + 220, + 296, + 713 + ], + "type": "list", + "angle": 0, + "index": 14, + "blocks": [ + { + "bbox": [ + 61, + 220, + 296, + 275 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 220, + 296, + 275 + ], + "spans": [ + { + "bbox": [ + 61, + 220, + 296, + 275 + ], + "type": "text", + "content": "[1] Hanting Chen, Yunhe Wang, Tianyu Guo, Chang Xu, Yiping Deng, Zhenhua Liu, Siwei Ma, Chunjing Xu, Chao Xu, and Wen Gao. Pre-trained image processing transformer. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 12299-12310, 2021. 3" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 61, + 277, + 296, + 319 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 277, + 296, + 319 + ], + "spans": [ + { + "bbox": [ + 61, + 277, + 296, + 319 + ], + "type": "text", + "content": "[2] Liangyu Chen, Xiaojie Chu, Xiangyu Zhang, and Jian Sun. Simple baselines for image restoration. In European conference on computer vision, pages 17-33. Springer, 2022. 5, 6" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 61, + 322, + 296, + 376 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 322, + 296, + 376 + ], + "spans": [ + { + "bbox": [ + 61, + 322, + 296, + 376 + ], + "type": "text", + "content": "[3] Xiang Chen, Hao Li, Mingqiang Li, and Jinshan Pan. Learning a sparse transformer network for effective image de- raining. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5896-5905, 2023. 3" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 61, + 377, + 296, + 431 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 377, + 296, + 431 + ], + "spans": [ + { + "bbox": [ + 61, + 377, + 296, + 431 + ], + "type": "text", + "content": "[4] Sung-Jin Cho, Seo-Won Ji, Jun-Pyo Hong, Seung-Won Jung, and Sung-Jea Ko. 
Rethinking coarse-to-fine approach in single image deblurring. In Proceedings of the IEEE/CVF international conference on computer vision, pages 4641-4650, 2021. 7" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 62, + 434, + 296, + 466 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 434, + 296, + 466 + ], + "spans": [ + { + "bbox": [ + 62, + 434, + 296, + 466 + ], + "type": "text", + "content": "[5] Yuning Cui, Wenqi Ren, Xiaochun Cao, and Alois Knoll. Image restoration via frequency selection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023. 2" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 62, + 468, + 296, + 499 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 468, + 296, + 499 + ], + "spans": [ + { + "bbox": [ + 62, + 468, + 296, + 499 + ], + "type": "text", + "content": "[6] Yuning Cui, Wenqi Ren, Sining Yang, Xiaochun Cao, and Alois Knoll. Irnext: Rethinking convolutional network design for image restoration. 2023. 1, 5, 6, 7" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 62, + 501, + 296, + 554 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 501, + 296, + 554 + ], + "spans": [ + { + "bbox": [ + 62, + 501, + 296, + 554 + ], + "type": "text", + "content": "[7] Yuning Cui, Yi Tao, Zhenshan Bing, Wenqi Ren, Xinwei Gao, Xiaochun Cao, Kai Huang, and Alois Knoll. Selective frequency network for image restoration. In The Eleventh International Conference on Learning Representations, 2023. 6" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 62, + 557, + 296, + 589 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 557, + 296, + 589 + ], + "spans": [ + { + "bbox": [ + 62, + 557, + 296, + 589 + ], + "type": "text", + "content": "[8] Albert Gu and Tri Dao. Mamba: Linear-time sequence modeling with selective state spaces. arXiv preprint arXiv:2312.00752, 2023. 1, 2, 3" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 62, + 591, + 296, + 635 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 591, + 296, + 635 + ], + "spans": [ + { + "bbox": [ + 62, + 591, + 296, + 635 + ], + "type": "text", + "content": "[9] Enxuan Gu, Hongwei Ge, and Yong Guo. Code: An explicit content decoupling framework for image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2920-2930, 2024. 2" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 57, + 636, + 296, + 679 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 636, + 296, + 679 + ], + "spans": [ + { + "bbox": [ + 57, + 636, + 296, + 679 + ], + "type": "text", + "content": "[10] Shuhang Gu, Yawei Li, Luc Van Gool, and Radu Timofte. Self-guided network for fast image denoising. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2511-2520, 2019. 2" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 57, + 681, + 296, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 681, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 57, + 681, + 296, + 713 + ], + "type": "text", + "content": "[11] Yubin Gu, Honghui Xu, Yueqian Quan, Wanjun Chen, and Jianwei Zheng. Orsi salient object detection via bidimensional attention and full-stage semantic guidance. 
IEEE" + } + ] + } + ], + "index": 13 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 73, + 555, + 713 + ], + "type": "list", + "angle": 0, + "index": 28, + "blocks": [ + { + "bbox": [ + 335, + 73, + 553, + 94 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 335, + 73, + 553, + 94 + ], + "spans": [ + { + "bbox": [ + 335, + 73, + 553, + 94 + ], + "type": "text", + "content": "Transactions on Geoscience and Remote Sensing, 61:1-13, 2023. 1" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 316, + 96, + 553, + 140 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 96, + 553, + 140 + ], + "spans": [ + { + "bbox": [ + 316, + 96, + 553, + 140 + ], + "type": "text", + "content": "[12] Yubin Gu, Siting Chen, Xiaoshuai Sun, Jiayi Ji, Yiyi Zhou, and Rongrong Ji. Optical remote sensing image salient object detection via bidirectional cross-attention and attention restoration. Pattern Recognition, page 111478, 2025. 1" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 316, + 142, + 553, + 185 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 142, + 553, + 185 + ], + "spans": [ + { + "bbox": [ + 316, + 142, + 553, + 185 + ], + "type": "text", + "content": "[13] Hang Guo, Jinmin Li, Tao Dai, Zhihao Ouyang, Xudong Ren, and Shu-Tao Xia. Mambair: A simple baseline for image restoration with state-space model. arXiv preprint arXiv:2402.15648, 2024. 1, 2, 5, 6" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 317, + 186, + 553, + 241 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 186, + 553, + 241 + ], + "spans": [ + { + "bbox": [ + 317, + 186, + 553, + 241 + ], + "type": "text", + "content": "[14] Tianyu Guo, Haowei Wang, Yiwei Ma, Jiayi Ji, and Xiaoshuai Sun. Improving panoptic narrative grounding by harnessing semantic relationships and visual confirmation. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 1985-1993, 2024. 1" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 316, + 243, + 553, + 297 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 243, + 553, + 297 + ], + "spans": [ + { + "bbox": [ + 316, + 243, + 553, + 297 + ], + "type": "text", + "content": "[15] Dongchen Han, Ziyi Wang, Zhuofan Xia, Yizeng Han, Yifan Pu, Chunjiang Ge, Jun Song, Shiji Song, Bo Zheng, and Gao Huang. Demystify mamba in vision: A linear attention perspective. Advances in Neural Information Processing Systems, 37:127181-127203, 2025. 2, 3" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 316, + 299, + 553, + 342 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 299, + 553, + 342 + ], + "spans": [ + { + "bbox": [ + 316, + 299, + 553, + 342 + ], + "type": "text", + "content": "[16] Tao Huang, Xiaohuan Pei, Shan You, Fei Wang, Chen Qian, and Chang Xu. Localmamba: Visual state space model with windowed selective scan. arXiv preprint arXiv:2403.09338, 2024. 3" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 316, + 344, + 553, + 398 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 344, + 553, + 398 + ], + "spans": [ + { + "bbox": [ + 316, + 344, + 553, + 398 + ], + "type": "text", + "content": "[17] Jiayi Ji, Haowei Wang, Changli Wu, Yiwei Ma, Xiaoshuai Sun, and Rongrong Ji. Jm3d jm3d-llm: Elevating 3d representation with joint multi-modal cues. IEEE Transactions on Pattern Analysis and Machine Intelligence, 47(4):2475–2492, 2025. 
1" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 316, + 399, + 553, + 443 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 399, + 553, + 443 + ], + "spans": [ + { + "bbox": [ + 316, + 399, + 553, + 443 + ], + "type": "text", + "content": "[18] Kui Jiang, Zhongyuan Wang, Chen Chen, Zheng Wang, Laizhong Cui, and Chia-Wen Lin. Magic elf: Image deraining meets association learning and transformer. arXiv preprint arXiv:2207.10455, 2022. 6" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 317, + 445, + 555, + 498 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 445, + 555, + 498 + ], + "spans": [ + { + "bbox": [ + 317, + 445, + 555, + 498 + ], + "type": "text", + "content": "[19] Kui Jiang, Wenxuan Liu, Zheng Wang, Xian Zhong, Junjun Jiang, and Chia-Wen Lin. Dawn: Direction-aware attention wavelet network for image deraining. In Proceedings of the 31st ACM international conference on multimedia, pages 7065-7074, 2023. 6" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 316, + 501, + 553, + 555 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 501, + 553, + 555 + ], + "spans": [ + { + "bbox": [ + 316, + 501, + 553, + 555 + ], + "type": "text", + "content": "[20] Lingshun Kong, Jiangxin Dong, Jianjun Ge, Mingqiang Li, and Jinshan Pan. Efficient frequency domain-based transformers for high-quality image deblurring. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5886-5895, 2023. 2" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 316, + 557, + 553, + 611 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 557, + 553, + 611 + ], + "spans": [ + { + "bbox": [ + 316, + 557, + 553, + 611 + ], + "type": "text", + "content": "[21] Xia Li, Jianlong Wu, Zhouchen Lin, Hong Liu, and Hongbin Zha. Recurrent squeeze-and-excitation context aggregation net for single image deraining. In Proceedings of the European conference on computer vision (ECCV), pages 254-269, 2018. 6" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 317, + 613, + 553, + 678 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 613, + 553, + 678 + ], + "spans": [ + { + "bbox": [ + 317, + 613, + 553, + 678 + ], + "type": "text", + "content": "[22] Yawei Li, Yuchen Fan, Xiaoyu Xiang, Denis Demandolx, Rakesh Ranjan, Radu Timofte, and Luc Van Gool. Efficient and explicit modelling of image hierarchies for image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18278-18289, 2023. 2" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 316, + 680, + 553, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 680, + 553, + 713 + ], + "spans": [ + { + "bbox": [ + 316, + 680, + 553, + 713 + ], + "type": "text", + "content": "[23] Jingyun Liang, Jiezhang Cao, Guolei Sun, Kai Zhang, Luc Van Gool, and Radu Timofte. Swinir: Image restoration using swim transformer. 
In Proceedings of the IEEE/CVF inter" + } + ] + } + ], + "index": 27 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "text", + "content": "17921" + } + ] + } + ], + "index": 29 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 295, + 712 + ], + "type": "list", + "angle": 0, + "index": 14, + "blocks": [ + { + "bbox": [ + 76, + 72, + 294, + 94 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 76, + 72, + 294, + 94 + ], + "spans": [ + { + "bbox": [ + 76, + 72, + 294, + 94 + ], + "type": "text", + "content": "national conference on computer vision, pages 1833-1844, 2021. 1, 2, 3" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 96, + 295, + 140 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 96, + 295, + 140 + ], + "spans": [ + { + "bbox": [ + 56, + 96, + 295, + 140 + ], + "type": "text", + "content": "[24] Yuanchu Liang, Saeed Anwar, and Yang Liu. Drt: A lightweight single image deraining recursive transformer. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 589-598, 2022. 3, 6" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 141, + 294, + 185 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 141, + 294, + 185 + ], + "spans": [ + { + "bbox": [ + 56, + 141, + 294, + 185 + ], + "type": "text", + "content": "[25] Yue Liu, Yunjie Tian, Yuzhong Zhao, Hongtian Yu, Lingxi Xie, Yaowei Wang, Qixiang Ye, Jianbin Jiao, and Yunfan Liu. Vmamba: Visual state space model. Advances in neural information processing systems, 37:103031-103063, 2024. 3" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 56, + 186, + 294, + 239 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 186, + 294, + 239 + ], + "spans": [ + { + "bbox": [ + 56, + 186, + 294, + 239 + ], + "type": "text", + "content": "[26] Xintian Mao, Jiansheng Wang, Xingran Xie, Qingli Li, and Yan Wang. Loformer: Local frequency transformer for image deblurring. In Proceedings of the 32nd ACM International Conference on Multimedia, pages 10382-10391, 2024. 7" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 56, + 243, + 294, + 296 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 243, + 294, + 296 + ], + "spans": [ + { + "bbox": [ + 56, + 243, + 294, + 296 + ], + "type": "text", + "content": "[27] Seungjun Nah, Tae Hyun Kim, and Kyoung Mu Lee. Deep multi-scale convolutional neural network for dynamic scene deblurring. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3883-3891, 2017. 7" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 56, + 298, + 295, + 342 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 298, + 295, + 342 + ], + "spans": [ + { + "bbox": [ + 56, + 298, + 295, + 342 + ], + "type": "text", + "content": "[28] Ruijie Quan, Xin Yu, Yuanzhi Liang, and Yi Yang. Removing raindrops and rain streaks in one go. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9147-9156, 2021. 
6" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 56, + 343, + 294, + 398 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 343, + 294, + 398 + ], + "spans": [ + { + "bbox": [ + 56, + 343, + 294, + 398 + ], + "type": "text", + "content": "[29] Dongwei Ren, Wangmeng Zuo, Qinghua Hu, Pengfei Zhu, and Deyu Meng. Progressive image deraining networks: A better and simpler baseline. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 3937-3946, 2019. 6" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 56, + 399, + 294, + 443 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 399, + 294, + 443 + ], + "spans": [ + { + "bbox": [ + 56, + 399, + 294, + 443 + ], + "type": "text", + "content": "[30] Yuan Shi, Bin Xia, Xiaoyu Jin, Xing Wang, Tianyu Zhao, Xin Xia, Xuefeng Xiao, and Wenming Yang. Vmambair: Visual state space model for image restoration. arXiv preprint arXiv:2403.11423, 2024. 1, 2, 3" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 56, + 445, + 294, + 476 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 445, + 294, + 476 + ], + "spans": [ + { + "bbox": [ + 56, + 445, + 294, + 476 + ], + "type": "text", + "content": "[31] Tianyu Song, Guiyue Jin, Pengpeng Li, Kui Jiang, Xiang Chen, and Jiyu Jin. Learning a spiking neural network for efficient image deraining. *IJCAI*, 2024. 6" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 56, + 478, + 294, + 521 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 478, + 294, + 521 + ], + "spans": [ + { + "bbox": [ + 56, + 478, + 294, + 521 + ], + "type": "text", + "content": "[32] Fu-Jen Tsai, Yan-Tsung Peng, Yen-Yu Lin, Chung-Chi Tsai, and Chia-Wen Lin. Stripformer: Strip transformer for fast image deblurring. In European Conference on Computer Vision, pages 146-162. Springer, 2022. 2, 7" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 56, + 523, + 294, + 578 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 523, + 294, + 578 + ], + "spans": [ + { + "bbox": [ + 56, + 523, + 294, + 578 + ], + "type": "text", + "content": "[33] Zhengzhong Tu, Hossein Talebi, Han Zhang, Feng Yang, Peyman Milanfar, Alan Bovik, and Yinxiao Li. Maxim: Multi-axis mlp for image processing. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5769-5780, 2022. 6" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 56, + 580, + 294, + 623 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 580, + 294, + 623 + ], + "spans": [ + { + "bbox": [ + 56, + 580, + 294, + 623 + ], + "type": "text", + "content": "[34] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017. 1, 2" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 56, + 624, + 294, + 657 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 624, + 294, + 657 + ], + "spans": [ + { + "bbox": [ + 56, + 624, + 294, + 657 + ], + "type": "text", + "content": "[35] Cong Wang, Wei Wang, Chengjin Yu, and Jie Mu. Explore internal and external similarity for single image deraining with graph neural networks. *IJCAI*, 2024. 
6" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 56, + 658, + 294, + 712 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 658, + 294, + 712 + ], + "spans": [ + { + "bbox": [ + 56, + 658, + 294, + 712 + ], + "type": "text", + "content": "[36] Qiong Wang, Kui Jiang, Jinyi Lai, Zheng Wang, and Jianhui Zhang. Hpcnet: A hybrid progressive coupled network for image deraining. In 2023 IEEE International Conference on Multimedia and Expo (ICME), pages 2747-2752. IEEE, 2023. 1" + } + ] + } + ], + "index": 13 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 73, + 553, + 713 + ], + "type": "list", + "angle": 0, + "index": 27, + "blocks": [ + { + "bbox": [ + 316, + 73, + 553, + 127 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 73, + 553, + 127 + ], + "spans": [ + { + "bbox": [ + 316, + 73, + 553, + 127 + ], + "type": "text", + "content": "[37] Yinglong Wang, Chao Ma, and Jianzhuang Liu. Smartassign: Learning a smart knowledge assignment strategy for deraining and desnowing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3677-3686, 2023. 6" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 316, + 129, + 553, + 183 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 129, + 553, + 183 + ], + "spans": [ + { + "bbox": [ + 316, + 129, + 553, + 183 + ], + "type": "text", + "content": "[38] Zhendong Wang, Xiaodong Cun, Jianmin Bao, Wengang Zhou, Jianzhuang Liu, and Houqiang Li. Uformer: A general u-shaped transformer for image restoration. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 17683-17693, 2022. 2, 6" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 316, + 184, + 553, + 227 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 184, + 553, + 227 + ], + "spans": [ + { + "bbox": [ + 316, + 184, + 553, + 227 + ], + "type": "text", + "content": "[39] Xinyu Xie, Yawen Cui, Tao Tan, Xubin Zheng, and Zitong Yu. Fusionmamba: Dynamic feature enhancement for multimodal image fusion with mamba. Visual Intelligence, 2(1): 37, 2024. 3" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 316, + 228, + 553, + 282 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 228, + 553, + 282 + ], + "spans": [ + { + "bbox": [ + 316, + 228, + 553, + 282 + ], + "type": "text", + "content": "[40] Wenhan Yang, Robby T Tan, Jiashi Feng, Jiaying Liu, Zongming Guo, and Shuicheng Yan. Deep joint rain detection and removal from a single image. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1357-1366, 2017. 5" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 316, + 283, + 553, + 327 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 283, + 553, + 327 + ], + "spans": [ + { + "bbox": [ + 316, + 283, + 553, + 327 + ], + "type": "text", + "content": "[41] Hu Yu, Naishan Zheng, Man Zhou, Jie Huang, Zeyu Xiao, and Feng Zhao. Frequency and spatial dual guidance for image dehazing. In European Conference on Computer Vision, pages 181-198. Springer, 2022. 
1" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 316, + 327, + 553, + 380 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 327, + 553, + 380 + ], + "spans": [ + { + "bbox": [ + 316, + 327, + 553, + 380 + ], + "type": "text", + "content": "[42] Anas Zafar, Danyal Aftab, Rizwan Qureshi, Xinqi Fan, Pingjun Chen, Jia Wu, Hazrat Ali, Shah Nawaz, Sheheryar Khan, and Mubarak Shah. Single stage adaptive multi-attention network for image restoration. IEEE TIP, 2024. 7" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 316, + 383, + 553, + 437 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 383, + 553, + 437 + ], + "spans": [ + { + "bbox": [ + 316, + 383, + 553, + 437 + ], + "type": "text", + "content": "[43] Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Ming-Hsuan Yang, and Ling Shao. Multi-stage progressive image restoration. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 14821-14831, 2021. 6, 7" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 316, + 438, + 553, + 502 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 438, + 553, + 502 + ], + "spans": [ + { + "bbox": [ + 316, + 438, + 553, + 502 + ], + "type": "text", + "content": "[44] Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, and Ming-Hsuan Yang. Restormer: Efficient transformer for high-resolution image restoration. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5728-5739, 2022. 1, 2, 5, 6, 7" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 316, + 504, + 553, + 547 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 504, + 553, + 547 + ], + "spans": [ + { + "bbox": [ + 316, + 504, + 553, + 547 + ], + "type": "text", + "content": "[45] Wengyi Zhan, Mingbao Lin, Chia-Wen Lin, and Rongrong Ji. Anysr: Realizing image super-resolution as any-scale, any-resource. IEEE Transactions on Image Processing, 2024. 2" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 316, + 548, + 553, + 592 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 548, + 553, + 592 + ], + "spans": [ + { + "bbox": [ + 316, + 548, + 553, + 592 + ], + "type": "text", + "content": "[46] He Zhang and Vishal M Patel. Density-aware single image de-raining using a multi-stream dense network. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 695-704, 2018. 5" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 316, + 593, + 553, + 646 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 593, + 553, + 646 + ], + "spans": [ + { + "bbox": [ + 316, + 593, + 553, + 646 + ], + "type": "text", + "content": "[47] Hongguang Zhang, Yuchao Dai, Hongdong Li, and Piotr Koniusz. Deep stacked hierarchical multi-patch network for image deblurring. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5978-5986, 2019. 7" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 316, + 647, + 553, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 647, + 553, + 713 + ], + "spans": [ + { + "bbox": [ + 316, + 647, + 553, + 713 + ], + "type": "text", + "content": "[48] Jinlu Zhang, Yiyi Zhou, Qiancheng Zheng, Xiaoxiong Du, Gen Luo, Jun Peng, Xiaoshuai Sun, and Rongrong Ji. 
Fast text-to-3D-aware face generation and manipulation via direct cross-modal mapping and geometric regularization. In Proceedings of the 41st International Conference on Machine Learning, pages 60605–60625. PMLR, 2024. 1" + } + ] + } + ], + "index": 26 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "type": "text", + "content": "17922" + } + ] + } + ], + "index": 28 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 294, + 173 + ], + "type": "list", + "angle": 0, + "index": 2, + "blocks": [ + { + "bbox": [ + 56, + 72, + 294, + 126 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 72, + 294, + 126 + ], + "spans": [ + { + "bbox": [ + 56, + 72, + 294, + 126 + ], + "type": "text", + "content": "[49] Kaihao Zhang, Wenhan Luo, Yiran Zhong, Lin Ma, Bjorn Stenger, Wei Liu, and Hongdong Li. Deblurring by realistic blurring. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2737-2746, 2020. 7" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 129, + 294, + 173 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 129, + 294, + 173 + ], + "spans": [ + { + "bbox": [ + 56, + 129, + 294, + 173 + ], + "type": "text", + "content": "[50] Jianwei Zheng, Wei Li, Ni Xu, Junwei Zhu, Xiaoxu Lin, and Xiaqin Zhang. Alias-free mamba neural operator. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. 3" + } + ] + } + ], + "index": 1 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 749, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 749, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 749, + 317, + 757 + ], + "type": "text", + "content": "17923" + } + ] + } + ], + "index": 3 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 10 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2025/ADD_ Attribution-Driven Data Augmentation Framework for Boosting Image Super-Resolution/e29ca394-45e6-4bf5-b8ea-58a74dbe24fa_content_list.json b/2025/ADD_ Attribution-Driven Data Augmentation Framework for Boosting Image Super-Resolution/e29ca394-45e6-4bf5-b8ea-58a74dbe24fa_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..0ddb62bbe6c0ea0addfcac7c770187526e576edf --- /dev/null +++ b/2025/ADD_ Attribution-Driven Data Augmentation Framework for Boosting Image Super-Resolution/e29ca394-45e6-4bf5-b8ea-58a74dbe24fa_content_list.json @@ -0,0 +1,2289 @@ +[ + { + "type": "text", + "text": "ADD: Attribution-Driven Data Augmentation Framework for Boosting Image Super-Resolution", + "text_level": 1, + "bbox": [ + 107, + 128, + 890, + 176 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Ze-Yu Mi Yu-Bin Yang*", + "bbox": [ + 392, + 203, + 609, + 220 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China", + "bbox": [ + 142, + 220, + 854, + 239 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "mizeyu@smail.nju.edu.cn yangyubin@nju.edu.cn", + "bbox": [ + 290, + 242, + 699, + 257 + ], + "page_idx": 0 + }, + { + "type": "text", 
+ "text": "Abstract", + "text_level": 1, + "bbox": [ + 246, + 291, + 326, + 306 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Data augmentation (DA) stands out as a powerful technique to enhance the generalization capabilities of deep neural networks across diverse tasks. However, in low-level vision tasks, DA remains rudimentary (i.e., vanilla DA), facing a critical bottleneck due to information loss. In this paper, we introduce a novel Calibrated Attribution Maps (CAM) to generate saliency masks, followed by two saliency-based DA methods—Attribution-Driven Data augmentation (ADD) and $\\text{ADD+}$ —designed to address this issue. CAM leverages integrated gradients and incorporates two key innovations: a global feature detector and calibrated integrated gradients. Based on CAM and the proposed methods, we have two new insights for low-level vision tasks: (1) increasing pixel diversity, as seen in vanilla DA, can improve performance, and (2) focusing on salient features while minimizing the impact of irrelevant pixels, as seen in saliency-based DA, more effectively enhances model performance. Additionally, we find and highlight the key guiding principle for designing saliency-based DA: a wider spectrum of degradation patterns. Extensive experiments demonstrate the compatibility and consistency of our method, as well as the significant performance improvement across various SR tasks and networks. Our code is available at https://github.com/mizeyu/ADD.", + "bbox": [ + 91, + 323, + 483, + 685 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1. Introduction", + "text_level": 1, + "bbox": [ + 91, + 715, + 220, + 732 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Data augmentation (DA) is a fundamental technique for improving the generalization ability of deep neural networks (DNNs). In high-level vision tasks such as image recognition and semantic segmentation, early vanilla approaches like Mixup [35] and CutMix [34] laid the groundwork. Recently, saliency-based DA methods [13, 14, 18, 28, 29] have gained considerable attention, demonstrating superior performance over vanilla counterparts in high-level vision tasks.", + "bbox": [ + 89, + 741, + 482, + 876 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "However, applying saliency-based DA techniques directly to low-level vision tasks remains challenging due to a mismatch in task objectives, inconsistent feature requirements, differences in data characteristics, and inappropriate method designs (see Appendix for further details). These challenges have led to the continued use of vanilla DA techniques, such as rotation, flipping [27], Mixup [8], and Cutblur [33], in low-level vision tasks.", + "bbox": [ + 511, + 292, + 903, + 412 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "In low-level vision tasks [2, 3, 5-7, 10, 12, 15, 16, 19, 25, 31, 36, 40], the use of vanilla DA methods [8, 20, 27, 32, 33] is hindered by the issue of information loss, which can severely degrade the quality of the reconstructed images. As shown in Fig. 1, vanilla DA techniques often generate images with flat regions and simplistic edges (e.g., a3), which provide minimal support for model learning in tasks like super-resolution (SR), leading to significant loss of crucial information. In contrast, saliency-based DA methods [13, 14, 18, 28, 29] target information-rich regions (e.g., b3), which facilitate better reconstruction performance. The phase spectrum analysis in Fig. 
1 (c, d, e) further illustrates that augmented images incorporating saliency information (b3) retain more critical details, alleviating the problem of information loss. This observation motivates the need for new DA techniques that focus on more meaningful regions in low-level tasks.", + "bbox": [ + 511, + 414, + 903, + 671 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Existing saliency methods, however, are often prone to background noise and struggle to accurately capture important features. To address these limitations, we propose a new attribution method, the Calibrated Attribution Maps (CAM), which is capable of identifying important features without being influenced by background noise. CAM builds upon LAM [9] and integrated gradients (IG) [11, 26], introducing two key innovations: a global feature detector and calibrated integrated gradients. Using CAM, we introduce Attribution-Driven Data augmentation (ADD) techniques, and its enhanced version $\\mathrm{ADD+}$ , which represent the first attempt to explore saliency in DA from an attribution analysis perspective.", + "bbox": [ + 511, + 672, + 903, + 869 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Through CAM, we derive two key insights for low-level vision tasks: (1) incorporating a broader range of pixels,", + "bbox": [ + 511, + 869, + 903, + 901 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "CVF", + "bbox": [ + 106, + 2, + 181, + 42 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore.", + "bbox": [ + 236, + 0, + 810, + 46 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "*Corresponding author.", + "bbox": [ + 109, + 887, + 235, + 900 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "23101", + "bbox": [ + 478, + 944, + 517, + 957 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/9cfe4b07aef406f7a6bdb632db291bd7f959082c28a745dd69279a0b10dacd6a.jpg", + "image_caption": [ + "a1. Input Image" + ], + "image_footnote": [], + "bbox": [ + 94, + 75, + 254, + 160 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/82b42d05c83757c35ee88269e2fc72ef657f1e54b232c8de1aa63008149e0b42.jpg", + "image_caption": [ + "a2. Vanilla DA" + ], + "image_footnote": [], + "bbox": [ + 254, + 78, + 413, + 159 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/e29336ef1308576ee9111fb70e67a41bc17c845d203b3e55e3335b07c9993406.jpg", + "image_caption": [ + "a3. Aug. Image of (a2)" + ], + "image_footnote": [], + "bbox": [ + 413, + 78, + 571, + 160 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/a844a318a7ef23c68787b56b64d2d4a4fed7fc5099662a225f6d378a1bff8d22.jpg", + "image_caption": [ + "c. Phase Spec. of (a3)" + ], + "image_footnote": [], + "bbox": [ + 573, + 78, + 733, + 160 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/c14b315881d6e50a7e1126fb8a88a351dfffc2332a3e779f812b22a3bd62d9fa.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 736, + 94, + 900, + 145 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/1050ae2b62ee86c0b84c62947421b066973762ce3380424b0b891935a6c81063.jpg", + "image_caption": [ + "b1. 
Saliency Image" + ], + "image_footnote": [], + "bbox": [ + 94, + 176, + 254, + 258 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/82d1761302c81b97ff9877bbe53989a67e5674e89bcc3f489a7b05a4fbdefd07.jpg", + "image_caption": [ + "b2. Saliency-based DA" + ], + "image_footnote": [], + "bbox": [ + 256, + 176, + 413, + 258 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/e372be01e9a42d5c42d638fea638a6ea2afed3aeddd766be787a72305c27a556.jpg", + "image_caption": [ + "b3. Aug. Image of (b2)", + "Figure 1. Motivation: Provide more meaningful information for vanilla DA to address the critical bottleneck of information loss. (a1) Input image. (a2) Vanilla DA randomly selects an area. (a3) Augmented image of vanilla DA. (b1) Saliency map of the input image. (b2) Select the maximum saliency region. (b3) Augmented image of saliency-based DA. (c) Phase spectrum of augmented image (a3). (d) Phase spectrum of augmented image (b3). (e) Residual map calculated between (c) and (d)." + ], + "image_footnote": [], + "bbox": [ + 413, + 176, + 573, + 258 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/2f586e6390cb193d68c725a516e757c3d711c1c47a297699c96cc4ee831642f5.jpg", + "image_caption": [ + "d. Phase Spec. of (b3)" + ], + "image_footnote": [], + "bbox": [ + 576, + 176, + 733, + 258 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/3e3804e93aaedbdab3982e38b538a2ea383249b7db05abb0b826828c2dd3d783.jpg", + "image_caption": [ + "e. Residual Map of d & c" + ], + "image_footnote": [], + "bbox": [ + 738, + 176, + 900, + 258 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "as in vanilla DA, can improve model performance, and (2) focusing on more influential pixels—using saliency information—while minimizing the inclusion of irrelevant pixels, as in saliency-based DA, results in more effective enhancement. This insight explains why ADD, which utilizes saliency to prioritize important pixels, alleviates information loss and outperforms traditional vanilla DA methods, which indiscriminately incorporate additional pixels.", + "bbox": [ + 88, + 349, + 482, + 469 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Furthermore, to design effective saliency-based DA methods in low-level vision tasks, we propose the key guiding principle: a wider spectrum of degradation patterns. Specifically, DA methods should: (1) prefer maintaining continuous boundaries to avoid boundary effects, and (2) prioritize diverse augmentation strategies over single augmented images to provide richer learning signals.", + "bbox": [ + 89, + 470, + 483, + 577 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Contributions:", + "text_level": 1, + "bbox": [ + 89, + 578, + 196, + 590 + ], + "page_idx": 1 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- We introduce CAM, a robust attribution method that accurately identifies important features while mitigating background noise, improving saliency map generation in low-level vision tasks.", + "- We propose ADD, a novel attribution-driven DA method, which overcomes the limitations of vanilla DA and significantly reduces information loss in low-level tasks.", + "- We provide new insights and guiding principles for saliency-based DA in low-level vision, offering a framework for improving model performance.", + "- Our extensive experiments validate the effectiveness of our methods, demonstrating significant performance gains across a range of super-resolution tasks." 
+ ], + "bbox": [ + 91, + 594, + 482, + 789 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2. Related Works", + "text_level": 1, + "bbox": [ + 89, + 806, + 238, + 821 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2.1. Image Super-Resolution", + "text_level": 1, + "bbox": [ + 89, + 832, + 313, + 848 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Image super-resolution (SR) is a key technology in computer vision [21, 39]. Since SRCNN [5], numerous CNN-based methods have emerged, incorporating residual [12,", + "bbox": [ + 89, + 854, + 483, + 900 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "36] and dense blocks [25, 31, 40], as well as attention mechanisms [2, 19]. More recently, transformer-based SR models [6, 7, 16] have achieved state-of-the-art performance.", + "bbox": [ + 511, + 349, + 905, + 395 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2.2. Data Augmentation in Vision Tasks", + "text_level": 1, + "bbox": [ + 511, + 407, + 818, + 422 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Traditional data augmentation (DA) methods in high-level vision include geometric and intensity transformations [4, 41], as well as mixed-based techniques [34, 35]. Recently, saliency-based DA has gained popularity, focusing on preserving critical regions [13, 14, 18, 28, 29]. In low-level vision, DA remains largely limited to conventional approaches [27]. Some works explore Mixup [8] and Cutblur [33], with CutMIB extending them to light-field SR [32].", + "bbox": [ + 511, + 429, + 906, + 551 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "3. Methods", + "text_level": 1, + "bbox": [ + 513, + 566, + 611, + 580 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In this section, we first discuss the key challenges and motivations for incorporating saliency-based data augmentation (DA) in low-level vision tasks in Sec. 3.1. Then, in Sec. 3.2, we briefly review prior attribution methods, which serve as the foundation for our proposed approach. Next, we introduce the concept of Calibrated Attribution Maps (CAM) in Sec. 3.3, followed by the presentation of saliency-based DA techniques, ADD and $\\mathrm{ADD + }$ , in Sec. 3.4.", + "bbox": [ + 511, + 593, + 905, + 713 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "3.1. Integrating Saliency into Low-Level DA", + "text_level": 1, + "bbox": [ + 511, + 724, + 854, + 741 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Why introduce saliency into low-level vanilla DA? As discussed in Sec. 1, leveraging saliency provides valuable insights into the most important image features, addressing the problem of information loss. This motivates the integration of saliency information into data augmentation strategies for low-level vision tasks, where preserving critical image details is essential for enhancing performance.", + "bbox": [ + 511, + 748, + 905, + 853 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "How to introduce saliency into low-level vanilla DA? Traditional saliency methods, such as LAM and integrated gradient (IG), are limited by the challenge of distinguishing", + "bbox": [ + 511, + 854, + 905, + 900 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "23102", + "bbox": [ + 478, + 944, + 517, + 955 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "relevant features from irrelevant background noise. 
To overcome this limitation, we build upon LAM and IG and introduce a novel approach, CAM, which incorporates two new components: a global feature detector and a calibrated integrated gradient. These innovations, detailed in Sec. 3.3, effectively address the noise issue and improve the accuracy of saliency estimation in low-level tasks.", + "bbox": [ + 89, + 90, + 483, + 196 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.2. Overview of Vanilla Attribution Methods", + "text_level": 1, + "bbox": [ + 89, + 208, + 444, + 224 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Before presenting our method, we first provide a brief overview of existing attribution methods that form the foundation for our approach [1, 11, 22-24]. Let $\\mathcal{I} \\in \\mathbb{R}^d$ be the input image, and let $\\mathcal{C}: \\mathbb{R}^d \\mapsto \\mathbb{R}$ represent a classification network. Gradient-based methods, such as Integrated Gradients (IG), quantify the impact of changes in the input dimensions by computing the gradient of the output with respect to the input image:", + "bbox": [ + 89, + 231, + 483, + 353 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\mathrm {I G} _ {\\mathcal {C}} \\mathcal {I} = (\\mathcal {I} - \\mathcal {I} ^ {\\prime}) \\int_ {0} ^ {1} \\frac {\\partial \\mathcal {C} (\\mathcal {I} ^ {\\prime} + \\alpha (\\mathcal {I} - \\mathcal {I} ^ {\\prime}))}{\\partial \\mathcal {I}} d \\alpha , (1)\n$$\n", + "text_format": "latex", + "bbox": [ + 129, + 364, + 483, + 400 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "where $\\mathcal{I}'$ is a baseline image (often a blank image in high-level tasks) and $\\alpha$ is a continuous parameter that interpolates between the baseline and the target input.", + "bbox": [ + 89, + 411, + 483, + 455 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "In the context of image SR, Local Attribution Maps (LAM) convert the original baseline image $\\mathcal{I}'$ to a blurred version $\\mathcal{I}' = \\omega(\\sigma) \\otimes \\mathcal{I}$ , where $\\omega(\\sigma)$ is a Gaussian blur kernel with width $\\sigma$ , and $\\otimes$ represents convolution. Besides, LAM adapts IG for SR tasks by using a gradient detection method $D$ that focuses on local feature detection in SR networks. However, LAM suffers from the limitation of irrelevant gradient accumulations, which can easily lead to a focus on irrelevant areas. To address this issue, we introduce CAM to eliminate irrelevant area interference.", + "bbox": [ + 89, + 458, + 483, + 608 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.3. Calibrated Attribution Maps (CAM)", + "text_level": 1, + "bbox": [ + 89, + 619, + 410, + 637 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "In this section, we introduce the concept of Calibrated Attribution Maps (CAM) as shown in Fig. 2 (a), which is inspired by the IG [26] and LAM [9]. The goal of CAM is to provide a more accurate and reliable estimation of feature importance, particularly in the context of image SR tasks.", + "bbox": [ + 89, + 643, + 482, + 718 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Global Feature Detector (GD). Given an input image pair $\\mathcal{I}^{LR}$ (low resolution) and $\\mathcal{I}^{HR}$ (high resolution), we aim to learn a mapping function $\\mathcal{F}$ that produces the super-resolved image $\\mathcal{I}^{SR}$ . 
+ { + "type": "text", + "text": "3.3. Calibrated Attribution Maps (CAM)", + "text_level": 1, + "bbox": [ + 89, + 619, + 410, + 637 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "In this section, we introduce the concept of Calibrated Attribution Maps (CAM) as shown in Fig. 2 (a), which is inspired by IG [26] and LAM [9]. The goal of CAM is to provide a more accurate and reliable estimation of feature importance, particularly in the context of image SR tasks.", + "bbox": [ + 89, + 643, + 482, + 718 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Global Feature Detector (GD). Given an input image pair $\mathcal{I}^{LR}$ (low resolution) and $\mathcal{I}^{HR}$ (high resolution), we aim to learn a mapping function $\mathcal{F}$ that produces the super-resolved image $\mathcal{I}^{SR}$. Traditional attribution methods such as LAM detect local pixel-wise gradients, which can easily lead to saturation effects, as shown in Fig. 2 (d). To alleviate this problem, we introduce a Global Feature Detector (GD), which captures global features in the image by applying convolutional filters such as the Sobel filter. This approach smooths the detected gradients and enhances the robustness of saliency maps. To achieve a more robust global feature representation, the GD operation is defined as:", + "bbox": [ + 89, + 719, + 483, + 901 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\boldsymbol{GD}(\mathcal{I}^{SR}) = \left\| \mathrm{Sobel}_{xy}(\mathcal{I}^{SR}) \right\|_{2}, \tag{2}\n$$\n", + "text_format": "latex", + "bbox": [ + 594, + 104, + 903, + 122 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "where $\mathrm{Sobel}_{xy}$ denotes the Sobel filter applied in both the $x$ and $y$ directions to capture edge features in the image. This approach smooths the gradients and reduces the saturation problem, as demonstrated in Fig. 2 (d) & (g).", + "bbox": [ + 511, + 128, + 903, + 186 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "To analyze the attributions of the SR network for the current input image $\mathcal{I}$, we also need a baseline image $\mathcal{I}'$ such that $\mathcal{F}(\mathcal{I}')$ lacks certain features present in $\mathcal{F}(\mathcal{I})$. Accordingly, the feature scalar $GD(\mathcal{F}(\mathcal{I}))$ will show a significant numerical advantage over $GD(\mathcal{F}(\mathcal{I}'))$. We calculate the path-integrated gradient along the gradually changing path from $\mathcal{I}'$ to $\mathcal{I}$ and obtain the attribution map for $GD(\mathcal{F}(\mathcal{I}))$. Then, the $i$-th dimension of the calibrated attribution maps is defined as follows:", + "bbox": [ + 511, + 188, + 906, + 324 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\phi_{i}^{CAM}(\mathcal{F}, \boldsymbol{GD}) = \left(\mathcal{I}_{i} - \mathcal{I}_{i}^{\prime}\right) \int_{\alpha = 0}^{1} \frac{\partial \boldsymbol{GD}\left(\mathcal{F}\left(\mathcal{I}^{\prime} + \alpha\left(\mathcal{I} - \mathcal{I}^{\prime}\right)\right)\right)}{\partial \mathcal{I}_{i}} \, d\alpha. \tag{3}\n$$\n", + "text_format": "latex", + "bbox": [ + 534, + 332, + 903, + 362 + ], + "page_idx": 2 + },
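+ { + "type": "text", + "text": "Eq. (2) reduces to a few lines of code; the sketch below is illustrative only (the kernel padding and per-channel grouping are our assumptions rather than details prescribed by Eq. (2)):", + "page_idx": 2 + }, + { + "type": "code", + "text": "import torch\nimport torch.nn.functional as F\n\n_KX = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])\n_KY = _KX.t()\n\ndef gd(sr):\n    # Eq. (2): GD(I_SR) = || Sobel_xy(I_SR) ||_2, applied per channel,\n    # returning a single scalar that summarizes the global edge response.\n    b, c, h, w = sr.shape\n    kx = _KX.to(sr).view(1, 1, 3, 3).repeat(c, 1, 1, 1)\n    ky = _KY.to(sr).view(1, 1, 3, 3).repeat(c, 1, 1, 1)\n    gx = F.conv2d(sr, kx, padding=1, groups=c)\n    gy = F.conv2d(sr, ky, padding=1, groups=c)\n    return torch.sqrt(gx.pow(2).sum() + gy.pow(2).sum())", + "text_format": "python", + "page_idx": 2 + }, + { + "type": "text", + "text": "Because this scalar replaces the per-pixel output used by LAM, the path gradient in Eq. (3) can be obtained with a single backward pass per interpolation step.", + "page_idx": 2 + },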
+ { + "type": "text", + "text": "Calibrated Path Integrated Gradient (CPIG). We now introduce the Calibrated Path Integrated Gradient (CPIG), which can efficiently and effectively analyze global attribution. In SR tasks, the high-frequency components (e.g., edges and textures) contribute much more than the low-frequency components (e.g., color and brightness) to the network performance. In this work, we obtain baseline inputs by eliminating high-frequency components, setting them as the blurred version of LR images, denoted as $\mathcal{I}' = \omega(\sigma) \otimes \mathcal{I}$. Here, $\omega(\sigma)$ represents the Gaussian blur kernel parameterized by the kernel width $\sigma$, and $\otimes$ is the convolution operation. Following previous works, we construct a smooth transformation from $\mathcal{I}'$ to $\mathcal{I}$, which is expressed as $\gamma(a) = \omega(\sigma - a\sigma) \otimes \mathcal{I}$. Accordingly, we have $\gamma(0) = \mathcal{I}'$ and $\gamma(1) = \mathcal{I}$. Gradients are sampled at $k$ steps along the path, and the gradient of the $i$-th step is:", + "bbox": [ + 511, + 369, + 906, + 592 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\phi_{i}^{CAM}(\mathcal{F}, \boldsymbol{GD}, \gamma) = \left(\gamma\left(\frac{i}{k}\right) - \gamma\left(\frac{i + 1}{k}\right)\right) \times \frac{\partial \boldsymbol{GD}\left(\mathcal{F}\left(\gamma\left(\frac{i}{k}\right)\right)\right)}{\partial \gamma\left(\frac{i}{k}\right)}. \tag{4}\n$$\n", + "text_format": "latex", + "bbox": [ + 586, + 598, + 903, + 652 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "To improve the stability and accuracy of the path-integrated gradients, we introduce a calibration step that limits the deviation at each step and ensures that the gradients focus on the most relevant features of the image. This is achieved by limiting the range of each step to fluctuate around a central value, with $a_{\mathrm{min}} = \max (a - d, 0.0)$ and $a_{\mathrm{max}} = \min (a + d, 1.0)$.", + "bbox": [ + 511, + 659, + 905, + 763 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Attribution analysis progressively adjusts the value of each pixel to make the interpolated image $\mathcal{I}'$ gradually approach the target image $\mathcal{I}$, and the overall loss function can be defined as: $\mathcal{L}_{OA} = \| \mathcal{I}' - \mathcal{I}\|_1$. Correspondingly, the target loss function for approximating the target image with the interpolation image at step $i$ is: $\mathcal{L}_{TG} = \| \mathcal{I}' - \mathcal{I}\|_1 \times (1 - \frac{i}{k})$. Then, the difference between the actual current interpolated image and the target image can be represented by the current loss: $\mathcal{L}_{CU} = \| \gamma (\frac{i}{k}) - \mathcal{I}\|_1$.", + "bbox": [ + 511, + 765, + 906, + 902 + ], + "page_idx": 2 + },
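+ { + "type": "text", + "text": "A compact sketch of the blur path and this loss schedule follows; it is illustrative, and the Gaussian-blur helper together with the default sigma and kernel size are assumptions rather than our fixed settings:", + "page_idx": 2 + }, + { + "type": "code", + "text": "import torch\nfrom torchvision.transforms.functional import gaussian_blur\n\ndef gamma(image, a, sigma=16.0, kernel_size=31):\n    # Blur path gamma(a) = w(sigma - a*sigma) (x) I; gamma(1) is the sharp image.\n    s = (1.0 - a) * sigma\n    if s <= 1e-6:\n        return image\n    return gaussian_blur(image, kernel_size, [s, s])\n\ndef loss_schedule(image, i, k, sigma=16.0):\n    # L_OA, L_TG and L_CU from the text, all measured with the L1 norm.\n    l_oa = (gamma(image, 0.0, sigma) - image).abs().sum()\n    l_tg = l_oa * (1.0 - i / k)\n    l_cu = (gamma(image, i / k, sigma) - image).abs().sum()\n    return l_tg, l_cu", + "text_format": "python", + "page_idx": 2 + },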
+ { + "type": "page_number", + "text": "23103", + "bbox": [ + 478, + 944, + 517, + 955 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/936fbdbf84c83a2257bd1936f5f8fbe9399dc7195c449c4c14210c65bec2dda8.jpg", + "image_caption": [ + "Figure 2. The illustration of CAM, the ADD framework, and the CAM-LAM comparison. (a) Illustration of CAM. (b) Process of ADD and $\mathrm{ADD+}$. (c) Comparison of global attribution analysis between CAM and LAM, with red boxes highlighting regions where gradients are presented after global attribution analysis." + ], + "image_footnote": [], + "bbox": [ + 94, + 85, + 883, + 392 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "To further prevent instability caused by large gradient changes and to maintain the smoothness of the path, we select only those pixels with gradient magnitudes below a predefined threshold $T_{f}$ for updating:", + "bbox": [ + 89, + 472, + 483, + 532 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\nT_{f} = \mathrm{sorted}\left(\left| \phi_{i}^{CAM} \right|\right)_{\left[ p_{f} \cdot \text{num. pixels} \right]}^{\min},\n$$\n", + "text_format": "latex", + "bbox": [ + 166, + 550, + 403, + 571 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\nM_{f} = \left\{ \begin{array}{ll} 1, & \text{if } | \phi_{i}^{CAM} | \leq T_{f}, \\ 0, & \text{otherwise}, \end{array} \right. \tag{5}\n$$\n", + "text_format": "latex", + "bbox": [ + 168, + 574, + 480, + 614 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $M_{f}$ is a binary mask representing the pixels that need to be calibrated. Based on the difference between the current loss $\mathcal{L}_{CU}$ and the target loss $\mathcal{L}_{TG}$, combined with the mask threshold loss function $\mathcal{L}_{MF}$ between the pixels to be corrected and the target image, we generate a calibration factor $\delta$ to control the step size of each update:", + "bbox": [ + 89, + 631, + 483, + 720 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\mathcal{L}_{MF} = \left\| M_{f} \odot (\gamma(a) - \gamma\left(a_{\max}\right)) \right\|_{1},\n$$\n", + "text_format": "latex", + "bbox": [ + 161, + 739, + 410, + 755 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\delta = \frac{\mathcal{L}_{CU} - \mathcal{L}_{TG}}{\mathcal{L}_{MF}}. \tag{6}\n$$\n", + "text_format": "latex", + "bbox": [ + 210, + 758, + 480, + 789 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "With the calibration factor $\delta$, the calibrated interpolation image $\gamma_{c}(a)$ can be formulated as: $\gamma_{c}(a) = \gamma(a + \delta \times (a_{\mathrm{max}} - a))$. Accordingly, the calibrated gradient $\psi_i^{CAM}$ at step $i$ can be updated with the calibrated interpolation image $\gamma_{c}(a)$ and represented as: $\psi_i^{CAM} = \phi_i^{CAM} + (\gamma_c(a) - \gamma(a)) \times \phi_i^{CAM}$. Finally, we obtain the approximate integrated gradient", + "bbox": [ + 89, + 810, + 483, + 901 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "by summing up the calibrated gradients of $k$ steps:", + "bbox": [ + 511, + 472, + 883, + 488 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\mathcal{I}_{s}^{LR} = \sum_{i = 0}^{k} \psi_{i}^{CAM}. \tag{7}\n$$\n", + "text_format": "latex", + "bbox": [ + 642, + 500, + 905, + 541 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "As shown in Fig. 2 (f) and (i), our calibrated attribution maps focus on the most important edge and texture information compared to LAM, and are not affected by noise from flat areas and the background.", + "bbox": [ + 511, + 551, + 905, + 612 + ], + "page_idx": 3 + },
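+ { + "type": "text", + "text": "Putting Eqs. (4)-(7) together, one pass of the calibrated integration can be sketched as below (a simplified illustration building on the gamma, gd and loss_schedule helpers above; the clamping of the step position and the small epsilon are numerical-safety assumptions on our part):", + "page_idx": 3 + }, + { + "type": "code", + "text": "import torch\n\ndef cam_saliency(model, image, k=50, d=0.1, p_f=0.5, sigma=16.0):\n    attribution = torch.zeros_like(image)\n    for i in range(k):\n        a = i / k\n        a_max = min(a + d, 1.0)\n        x = gamma(image, a, sigma).detach().requires_grad_(True)\n        grad = torch.autograd.grad(gd(model(x)), x)[0]\n        phi = (gamma(image, a, sigma) - gamma(image, (i + 1) / k, sigma)) * grad  # Eq. (4)\n        # Eq. (5): mask of pixels with small |phi| that are to be calibrated.\n        t_f = phi.abs().flatten().kthvalue(max(int(p_f * phi.numel()), 1)).values\n        m_f = (phi.abs() <= t_f).float()\n        # Eq. (6): calibration factor from the gap between current and target loss.\n        l_tg, l_cu = loss_schedule(image, i, k, sigma)\n        l_mf = (m_f * (gamma(image, a, sigma) - gamma(image, a_max, sigma))).abs().sum()\n        delta = ((l_cu - l_tg) / (l_mf + 1e-8)).item()\n        a_c = min(max(a + delta * (a_max - a), 0.0), 1.0)\n        psi = phi + (gamma(image, a_c, sigma) - gamma(image, a, sigma)) * phi\n        attribution += psi.detach()  # Eq. (7): running sum over the k steps\n    return attribution", + "text_format": "python", + "page_idx": 3 + },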
+ { + "type": "text", + "text": "3.4. Attribution-Driven Data Augmentation (ADD)", + "text_level": 1, + "bbox": [ + 511, + 625, + 905, + 640 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "In this section, as depicted in Fig. 2 (b), we introduce the specific process of the proposed ADD and its enhanced version $\mathrm{ADD+}$. Let $\mathcal{I}^{LR}\in \mathbb{R}^{H\times W\times C}$ be the input LR image, whose saliency map $\mathcal{I}_s^{LR}$ can be obtained with Eq. (7). To accurately locate the region of maximum saliency, we follow principle (1) of continuous boundaries in Sec. 3.5 and select the maximum-saliency pixels with a proportion of $p$, obtaining the corresponding irregularly shaped patch:", + "bbox": [ + 511, + 646, + 906, + 782 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\nT_{p} = \mathrm{sorted}\left(\mathcal{I}_{s}^{LR}\right)_{\lceil p \cdot \text{num. pixels} \rceil}^{\max},\n$$\n", + "text_format": "latex", + "bbox": [ + 586, + 792, + 792, + 814 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\nM(i, j) = \left\{ \begin{array}{ll} 1, & \text{if } \mathcal{I}_{s}^{LR}(i, j) \geq T_{p}, \\ 0, & \text{otherwise}, \end{array} \right. \tag{8}\n$$\n", + "text_format": "latex", + "bbox": [ + 588, + 816, + 903, + 857 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $T_{p}$ denotes the saliency threshold of the top $p$ proportion of pixels in $\mathcal{I}_s^{LR}$, and $M\in \{0,1\}^{H\times W}$ is a", + "bbox": [ + 511, + 868, + 906, + 902 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "23104", + "bbox": [ + 478, + 944, + 519, + 957 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "binary mask indicating the area to be cut. Following principle (2) of diverse augmented images in Sec. 3.5, we use the mask $M$ to cut the patch to the corresponding position on another image and combine it with different DA strategies. Given the LR-HR image pair $\{\mathcal{I}_i^{LR},\mathcal{I}_i^{HR}\}$ and the corresponding binary mask $M$, the augmentation process is explained below.", + "bbox": [ + 89, + 90, + 480, + 196 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "ADD. We first adopt a mixed strategy to generate augmented input images, enabling the model to learn richer and more complex degradation patterns. We cut the patch, mix it with the patch from another LR-HR image pair $\{\mathcal{I}_j^{LR},\mathcal{I}_j^{HR}\}$, and generate new training samples:", + "bbox": [ + 89, + 196, + 480, + 275 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\nP_{mix}^{LR} = \lambda \times M \odot \mathcal{I}_{i}^{LR} + (1 - \lambda) \times M \odot \mathcal{I}_{j}^{LR}, \tag{9}\n$$\n", + "text_format": "latex", + "bbox": [ + 112, + 285, + 482, + 311 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\nP_{mix}^{HR} = \lambda \times M \odot \mathcal{I}_{i}^{HR} + (1 - \lambda) \times M \odot \mathcal{I}_{j}^{HR},\n$$\n", + "text_format": "latex", + "bbox": [ + 112, + 308, + 482, + 327 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\mathcal{I}_{i}^{LR} = P_{mix}^{LR} + (1 - M) \odot \mathcal{I}_{j}^{LR}, \tag{10}\n$$\n", + "text_format": "latex", + "bbox": [ + 169, + 339, + 482, + 367 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\mathcal{I}_{i}^{HR} = P_{mix}^{HR} + (1 - M) \odot \mathcal{I}_{j}^{HR}.\n$$\n", + "text_format": "latex", + "bbox": [ + 171, + 361, + 398, + 381 + ], + "page_idx": 4 + },
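+ { + "type": "text", + "text": "Eqs. (8)-(10) amount to a thresholded saliency mask followed by a masked mixup, as the sketch below shows (illustrative only; resizing the LR mask to the HR scale with nearest-neighbor interpolation and the default values of p and lambda are our assumptions):", + "page_idx": 4 + }, + { + "type": "code", + "text": "import torch\nimport torch.nn.functional as F\n\ndef top_p_mask(saliency, p=0.2):\n    # Eq. (8): binary mask over the top-p proportion of saliency values.\n    s = saliency.sum(dim=1, keepdim=True)\n    t_p = torch.quantile(s.flatten(), 1.0 - p)\n    return (s >= t_p).float()\n\ndef add_mix(lr_i, hr_i, lr_j, hr_j, mask, lam=0.5, scale=4):\n    # Eqs. (9)-(10): mix the salient patch of pair i into pair j.\n    m_hr = F.interpolate(mask, scale_factor=scale, mode='nearest')\n    lr = lam * mask * lr_i + (1 - lam) * mask * lr_j + (1 - mask) * lr_j\n    hr = lam * m_hr * hr_i + (1 - lam) * m_hr * hr_j + (1 - m_hr) * hr_j\n    return lr, hr", + "text_format": "python", + "page_idx": 4 + },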
+ { + "type": "text", + "text": "Then we adopt an intensity strategy to make the model learn how and where to augment the input images. We cut the patch $P_{i}^{LR} = M \odot \mathcal{I}_{i}^{LR}$ and upsample it by scale $s$ with a bicubic kernel to obtain $P_{i}^{LR(s \times \uparrow)}$. The HR patch can be generated similarly, and we obtain the augmented samples as:", + "bbox": [ + 89, + 388, + 482, + 465 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\hat{\mathcal{I}}_{i}^{LR \rightarrow HR} = P_{i}^{LR(s \times \uparrow)} + (\mathbf{1} - M) \odot \mathcal{I}_{i}^{HR}, \tag{11}\n$$\n", + "text_format": "latex", + "bbox": [ + 125, + 489, + 482, + 518 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\hat{\mathcal{I}}_{i}^{HR \rightarrow LR} = P_{i}^{HR(s \times \downarrow)} + (\mathbf{1} - M) \odot \mathcal{I}_{i}^{LR}.\n$$\n", + "text_format": "latex", + "bbox": [ + 127, + 513, + 418, + 532 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "$\mathbf{ADD}+$. To validate the efficacy of saliency in mixed enhancement and to push the performance limits further, we propose an enhanced version of the method. We additionally generate a pair of new training samples with another LR-HR image pair $\{\mathcal{I}_j^{LR},\mathcal{I}_j^{HR}\}$:", + "bbox": [ + 89, + 545, + 482, + 623 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\mathcal{I}_{i}^{LR} = M \odot \mathcal{I}_{i}^{LR} + (1 - M) \odot \mathcal{I}_{j}^{LR}, \tag{12}\n$$\n", + "text_format": "latex", + "bbox": [ + 151, + 633, + 482, + 661 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\mathcal{I}_{i}^{HR} = M \odot \mathcal{I}_{i}^{HR} + (1 - M) \odot \mathcal{I}_{j}^{HR},\n$$\n", + "text_format": "latex", + "bbox": [ + 153, + 656, + 418, + 675 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $\odot$ denotes the element-wise Hadamard product. Following previous work [33], when using $\mathrm{ADD+}$, each of the above augmentation methods and the traditional augmentation methods (e.g., color, channel) is applied with probability $p$ in each training iteration to enhance the input image.", + "bbox": [ + 89, + 686, + 482, + 777 + ], + "page_idx": 4 + },
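+ { + "type": "text", + "text": "One plausible reading of this scheduling is sketched below, reusing add_mix from above (illustrative only; the independent per-augmentation coin flips and the omission of the intensity strategy of Eq. (11) are simplifications on our part):", + "page_idx": 4 + }, + { + "type": "code", + "text": "import random\nimport torch\nimport torch.nn.functional as F\n\ndef add_plus(lr_i, hr_i, lr_j, hr_j, mask, p=0.5, scale=4):\n    # Each augmentation fires with probability p per training iteration,\n    # mirroring the CutBlur-style scheduling of [33].\n    if random.random() < p:  # mixed strategy, Eqs. (9)-(10)\n        lr_i, hr_i = add_mix(lr_i, hr_i, lr_j, hr_j, mask, scale=scale)\n    if random.random() < p:  # saliency cut-and-paste, Eq. (12)\n        m_hr = F.interpolate(mask, scale_factor=scale, mode='nearest')\n        lr_i = mask * lr_i + (1 - mask) * lr_j\n        hr_i = m_hr * hr_i + (1 - m_hr) * hr_j\n    return lr_i, hr_i", + "text_format": "python", + "page_idx": 4 + },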
$X\\mathcal{Q}Y$ indicates cutting a patch from area $X$ and pasting it onto region $Y$ in another image, where 'Sa' stands for 'Saliency', 'Non' for 'Non-saliency', 'Ce' for 'Center', and 'Cor' for 'Corresponding'.", + "bbox": [ + 511, + 210, + 903, + 294 + ], + "page_idx": 4 + }, + { + "type": "table", + "img_path": "images/6996eec2dd0bf23ff7d1f81aac39a0b690df1d1ba9516793b7c24cf16d3f38d4.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
MethodScaleTypes of Saliency UtilizationDIV2KRealSR
PSNRΔPSNRΔ
EDSR×4baseline29.21-28.86-
Patch1×1×4Granularity29.28+0.0729.10+0.24
Patch2×2×4Granularity29.26+0.0529.07+0.21
Patch3×3×4Granularity29.24+0.0329.02+0.16
Patch4×4×4Granularity29.22+0.0128.91+0.05
Patch5×5×4Granularity29.14-0.0728.75-0.11
Patch6×6×4Granularity29.07-0.1428.61-0.25
Patch7×7×4Granularity28.92-0.2928.45-0.41
Sa2Cor×4Diversity29.27+0.0629.11+0.25
Ce2Ce×4Diversity29.26+0.0529.08+0.22
Sa2Sa×4Diversity29.21+0.0028.89+0.03
Sa2Non×4Diversity29.24+0.0328.97+0.11
Non2Sa×4Diversity29.19-0.0228.79-0.07
Non2Non×4Diversity29.16-0.0528.76-0.10
ADD+×4Granularity & Diversity29.32+0.1129.14+0.28
", + "bbox": [ + 517, + 305, + 921, + 520 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4. Experiments", + "text_level": 1, + "bbox": [ + 511, + 549, + 645, + 566 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4.1. Preliminaries", + "text_level": 1, + "bbox": [ + 511, + 574, + 653, + 588 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Network structures. We adopt several advanced and typical SR networks to verify the effectiveness and compatibility of our ADD and the proposed three new DA strategies. We consider CNN-based methods: RCAN [38], and EDSR [17], in which well-designed CNN-based structures are proven effective on SR tasks. The transformer-based method, SwinIR [16], is also adopted in our experiments.", + "bbox": [ + 511, + 595, + 903, + 700 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Datasets and implementation details. We use the DIV2K and RealSR datasets for training. We select ten images (index 0801-0810) from the DIV2K validation set for validation during training. For evaluation, we use six benchmark datasets, including Set5, Set14, BSD100, Urban100, Manga109, and test sets of RealSR. We keep the hyperparameters (e.g., learning rate, batch size) the same as reported in the original paper. All experiments are conducted using PyTorch on NVIDIA V100 GPUs.", + "bbox": [ + 511, + 702, + 905, + 838 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4.2. Comparison of Interpretation Capability", + "text_level": 1, + "bbox": [ + 511, + 848, + 862, + 864 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "We conduct visualization experiments to evaluate the effectiveness of the proposed CAM. As shown in the left part of", + "bbox": [ + 511, + 869, + 905, + 900 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "23105", + "bbox": [ + 478, + 944, + 517, + 955 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/1b5dec18cf36a63ed299da488981529b64291193ef0d01134a64b2dbfb2504b8.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 96, + 92, + 496, + 257 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/a51527f8612d1b895ed066df1139b55f63b5f74cb19216ff7b6aee3ec6157fd2.jpg", + "image_caption": [ + "Fig. 3, the blue arrow highlights important regions, while the red arrow points to areas considered background noise. CAM accurately identifies relevant information, demonstrating robustness to background noise, unlike LAM." + ], + "image_footnote": [], + "bbox": [ + 496, + 98, + 697, + 252 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/c47120e0d2e103ac181511fcee2064834384487ffb6a7e10ac27b4425edd8fe8.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 697, + 99, + 898, + 252 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/67d264ba064a6235f2bca34effcdd7bf772d3f4e4fc03a82d96c35276ee2a132.jpg", + "image_caption": [ + "Figure 3. Saliency maps (Left) and Insertion/Deletion curves (Right) on DIV2K etc. super-resolution sets. The blue arrow indicates that the proposed CAM accurately reflects important and intuitive content without being affected by background noise, while the red arrow indicates that LAM is affected by background noise and produces undesired attribution results." 
+ ], + "image_footnote": [], + "bbox": [ + 96, + 261, + 496, + 414 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/28fac85b2f8652710df47a3617e71950b617503ff67d4973df4dd1d554faed0b.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 496, + 260, + 697, + 411 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/54c4214e32f28a30ef9793e9dc51e7ef60acf8f602069eb4ef4fb0c5940ebcf5.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 697, + 260, + 898, + 411 + ], + "page_idx": 5 + }, + { + "type": "table", + "img_path": "images/aebcefdb6aa5488feea4db084e0988a4e23641d049eeac5d3168d9ac5423e6ec.jpg", + "table_caption": [ + "Table 2. Quantitative PSNR (dB) comparison of our ADD and existing DA methods in SR on DIV2K and RealSR datasets. $\\Delta$ denotes the performance gap." + ], + "table_footnote": [], + "table_body": "
MethodScaleTraining SetDIV2KTraining SetRealSR
PSNRΔPSNRΔ
RCAN×4DIV2K29.22-RealSR29.20-
+ CutMix×4DIV2K29.24+0.02RealSR29.25+0.05
+ CutMixup×4DIV2K29.28+0.06RealSR29.30+0.10
+ CutBlur×4DIV2K29.25+0.03RealSR29.29+0.09
+ ADD×4DIV2K29.32+0.10RealSR29.34+0.14
+ ADD+×4DIV2K29.36+0.14RealSR29.46+0.26
EDSR×4DIV2K29.21-RealSR28.86-
+ CutMix×4DIV2K29.22+0.01RealSR28.90+0.04
+ CutMixup×4DIV2K29.26+0.05RealSR28.97+0.11
+ CutBlur×4DIV2K29.25+0.04RealSR28.94+0.08
+ ADD×4DIV2K29.30+0.09RealSR29.01+0.15
+ ADD+×4DIV2K29.32+0.11RealSR29.14+0.28
SwinIR×4DIV2K29.40-RealSR29.26-
+ CutMix×4DIV2K29.40+0.00RealSR29.29+0.03
+ CutMixup×4DIV2K29.43+0.03RealSR29.34+0.08
+ CutBlur×4DIV2K29.43+0.03RealSR29.32+0.06
+ ADD×4DIV2K29.46+0.06RealSR29.37+0.11
+ ADD+×4DIV2K29.48+0.08RealSR29.43+0.17
", + "bbox": [ + 94, + 512, + 496, + 752 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Following established protocols [30, 37], we perform Insertion and Deletion tests, as shown in the right part of Fig. 3. In the Insertion test, a progressively increasing fraction $(3.6\\%)$ of pixels from the high-resolution (HR) image is inserted into the super-resolved image, guided by the pixel", + "bbox": [ + 89, + 824, + 483, + 902 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "importance values in the attribution map, until the reconstructed image closely matches the HR image. In the Deletion test, $3.6\\%$ of the pixels in the HR image, starting from those with the highest attribution map values, are progressively replaced with black pixels until the entire image is replaced. The Insertion and Deletion curves provide further evidence that CAM more effectively captures the network's critical information compared to LAM.", + "bbox": [ + 511, + 479, + 906, + 601 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.3. Results on Various Models and Datasets", + "text_level": 1, + "bbox": [ + 511, + 623, + 856, + 638 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "We conduct quantitative comparisons between the proposed ADD method and existing vanilla DA approaches across classical benchmark datasets, namely DIV2K and RealSR, as detailed in Tab. 2. The results demonstrate that both ADD and $\\mathrm{ADD + }$ consistently outperform vanilla DA methods on both synthetic (DIV2K) and real-world (RealSR) datasets, with performance improvements reaching up to $0.28\\mathrm{dB}$ . Further comparisons between $\\mathrm{ADD + }$ and baseline models on the Set5, Set14, Manga109, Urban100, and BSD100 datasets, presented in Tab. 3, show that networks trained with $\\mathrm{ADD + }$ consistently achieve superior reconstruction performance. Qualitative results, depicted in Fig. 4, reveal that networks trained with $\\mathrm{ADD + }$ exhibit enhanced visual quality compared to their baseline counterparts, capturing finer details. Notably, in the area of stripes on the building in Fig. 4 (img_012), $\\mathrm{ADD + }$ yields more accurate and sharper details than the baselines.", + "bbox": [ + 511, + 643, + 906, + 900 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "23106", + "bbox": [ + 478, + 944, + 519, + 957 + ], + "page_idx": 5 + }, + { + "type": "table", + "img_path": "images/74bce8a1b7e09e01e729caba85b960589dcce8c8558378877ad7d0e61cc28d21.jpg", + "table_caption": [ + "Table 3. Quantitative comparison with baseline methods in SR and the $\\Delta$ denotes the performance gap." + ], + "table_footnote": [], + "table_body": "
MethodScaleTraining DatasetSet5Set14Manga109Urban100BSD100
PSNRΔPSNRΔPSNRΔPSNRΔPSNRΔ
RCAN×2DIV2K38.23-34.11-39.41-33.29-32.37-
+ ADD+×2DIV2K38.34+0.1134.21+0.1039.55+0.1433.55+0.2632.44+0.07
EDSR×2DIV2K38.11-33.92-39.10-32.93-32.32-
+ ADD+×2DIV2K38.20+0.0934.04+0.1239.27+0.1733.16+0.2332.43+0.11
SwinIR×2DIV2K38.31-34.41-39.89-33.75-32.46-
+ ADD+×2DIV2K38.43+0.1234.49+0.0839.99+0.1033.90+0.1532.51+0.05
RCAN×4DIV2K32.58-28.84-31.22-26.75-27.74-
+ ADD+×4DIV2K32.64+0.0628.91+0.0731.43+0.2126.92+0.1727.79+0.05
EDSR×4DIV2K32.43-28.76-31.04-26.63-27.66-
+ ADD+×4DIV2K32.51+0.0828.88+0.1231.24+0.2026.81+0.1827.77+0.11
SwinIR×4DIV2K32.63-28.92-31.54-27.01-27.82-
+ ADD+×4DIV2K32.71+0.0828.99+0.0731.60+0.0627.13+0.1227.85+0.03
", + "bbox": [ + 94, + 95, + 903, + 334 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/de78c63daab6cb5676c36cbb3e05b5e5c2037249f5c661a5cbb062873cccab89.jpg", + "image_caption": [ + "Urban100(x4): img_092" + ], + "image_footnote": [], + "bbox": [ + 98, + 342, + 218, + 457 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/b8dcd9a020f089120ca5f77ceaae038355b671ae4387b9cd7b79ca8ceee8d468.jpg", + "image_caption": [ + "HR" + ], + "image_footnote": [], + "bbox": [ + 225, + 340, + 287, + 387 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/9d5f9288b54d82ea721ecdd5835fe9f08cd97b76c4f47c920c500863b8604f9e.jpg", + "image_caption": [ + "Bicubic" + ], + "image_footnote": [], + "bbox": [ + 225, + 398, + 285, + 443 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/70699b286b4656ac3076cd033ade92ea423b6036377f2e715fd309df9ff01b4e.jpg", + "image_caption": [ + "RCAN" + ], + "image_footnote": [], + "bbox": [ + 292, + 340, + 351, + 387 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/2c2975adaeceddc2bf474e7d336aad9d229acb77f823039b3c3d5da2e330ce41.jpg", + "image_caption": [ + "RCAN" + ], + "image_footnote": [], + "bbox": [ + 292, + 398, + 351, + 443 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/5b87a9b328dc134e1a97d913d09221fd3c49b15fa763bcb31943e62b5f28a0e6.jpg", + "image_caption": [ + "EDSR" + ], + "image_footnote": [], + "bbox": [ + 357, + 340, + 415, + 387 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/815bef0c5bcafc4cb0b87ea2b5d8e17621a0aedc88e0a64512aaf559ff7934d9.jpg", + "image_caption": [ + "+proposed +proposed +proposed", + "EDSR" + ], + "image_footnote": [], + "bbox": [ + 357, + 398, + 415, + 443 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "+proposed +proposed", + "bbox": [ + 356, + 455, + 482, + 469 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/4365e0c60e58b836d1a0ffb62d2f0b31d8c3ec3126a0db25585bfecefdd40e29.jpg", + "image_caption": [ + "Urban100(x4): img_095" + ], + "image_footnote": [], + "bbox": [ + 511, + 340, + 630, + 455 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/394ddbae49b2e969b327cc1919e2d371c9d503baed8b70d18e6d29aebcfe7bb9.jpg", + "image_caption": [ + "HR" + ], + "image_footnote": [], + "bbox": [ + 640, + 340, + 700, + 387 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/e2bbb8bdbb4f64783534f7f4a84aaffe542879dd79976f53122a8385abb22ef9.jpg", + "image_caption": [ + "+proposed +proposed +proposed" + ], + "image_footnote": [], + "bbox": [ + 640, + 398, + 699, + 454 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/a8a67a75522fbbcec87fdcc179cce34065f9b8a189ce8ea9710fb1a04d28724c.jpg", + "image_caption": [ + "RCAN" + ], + "image_footnote": [], + "bbox": [ + 705, + 340, + 764, + 387 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/de517a8cbaf25badbbc76af14d3c877274f3e175e7ec8b20ca0d9f6d7378f62e.jpg", + "image_caption": [ + "RCAN" + ], + "image_footnote": [], + "bbox": [ + 705, + 398, + 764, + 443 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/d5d7b376389ee0689ecd44b2122f8f74a29030a27ac346c306122d30124a4cea.jpg", + "image_caption": [ + "EDSR" + ], + "image_footnote": [], + "bbox": [ + 769, + 340, + 828, + 387 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/a0e4d907bfceff3845f90f61f65285ebcc6f4789bd267ad6c12c8994de11b624.jpg", + "image_caption": [ + "EDSR" + ], + 
"image_footnote": [], + "bbox": [ + 769, + 398, + 828, + 443 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/44d3686a627368ef66274250ead23a913256805c7a960591562cd3671e6edfba.jpg", + "image_caption": [ + "SwinIR" + ], + "image_footnote": [], + "bbox": [ + 834, + 340, + 893, + 387 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/fe44a5b7289d1c45e50d5c15f294601c872f22e9ba49caab6f4ca2aea0c4da17.jpg", + "image_caption": [ + "SwinIR" + ], + "image_footnote": [], + "bbox": [ + 834, + 398, + 892, + 443 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/c0ddf29918d715819df72d701c97fd9aae29410a10ebf71b8ee85ff35c85fa99.jpg", + "image_caption": [ + "Manga109(x4):", + "PrayerHaNemurenai" + ], + "image_footnote": [], + "bbox": [ + 99, + 487, + 215, + 599 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/50306f619c9bbc1c238522afdea410cdbcca9b717218ff6c77d8f5356176a48e.jpg", + "image_caption": [ + "HR" + ], + "image_footnote": [], + "bbox": [ + 223, + 487, + 285, + 532 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/bbbf594ba4c0d3868b8677c002ea7001a4fbe9a388b7a78301ae7bd8399f66b8.jpg", + "image_caption": [ + "Bicubic" + ], + "image_footnote": [], + "bbox": [ + 225, + 544, + 285, + 589 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/0b97099247bdfe10d322b539a93f3b8ebdd7b2bd9e34a2fbb1c3c15de90be3df.jpg", + "image_caption": [ + "RCAN" + ], + "image_footnote": [], + "bbox": [ + 290, + 487, + 351, + 532 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/19b018a5f89507203598c74da818b2968adadacd67ebbde97711ac034ab4f7a4.jpg", + "image_caption": [ + "RCAN", + "+proposed +proposed +proposed" + ], + "image_footnote": [], + "bbox": [ + 290, + 544, + 351, + 589 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/e4cfb74893081c95da7bbac00f08dc76b2134bfb11b6636c19f79aa879b916fe.jpg", + "image_caption": [ + "EDSR" + ], + "image_footnote": [], + "bbox": [ + 356, + 487, + 413, + 532 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/2132558eb107d99f4a7353ff35e8e07d5a0f4d3cc5130f5c1b40d15dfa01f61d.jpg", + "image_caption": [ + "EDSR" + ], + "image_footnote": [], + "bbox": [ + 356, + 544, + 413, + 589 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "+proposed +proposed", + "bbox": [ + 356, + 601, + 480, + 614 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/ce79336787e0335553da22f47ce92f9fb791087c0ad0bdd74bc563b0979eb5b6.jpg", + "image_caption": [ + "SwinIR" + ], + "image_footnote": [], + "bbox": [ + 421, + 487, + 480, + 532 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/099a0f5063333cdc69c08a32d3cafd8c1cfafda055e1b421f0a953f4a69f417e.jpg", + "image_caption": [ + "SwinIR" + ], + "image_footnote": [], + "bbox": [ + 421, + 544, + 480, + 589 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/858c5dd4401389fcb34065de8b066c768585fcc3366efbde50841a38607b9f40.jpg", + "image_caption": [ + "Urban100(x4): img_012", + "Figure 4. Visual comparison on $\\times 4$ SR with the baseline model and proposed method. The patches for comparison are marked with red boxes in the original images. Please zoom in for better visualization." 
+ ], + "image_footnote": [], + "bbox": [ + 514, + 487, + 630, + 601 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/e6c5d12eba070fa80ec1d9d6b0e4c721cf6c911a765e0a801e481624f851c59c.jpg", + "image_caption": [ + "HR" + ], + "image_footnote": [], + "bbox": [ + 640, + 487, + 700, + 532 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/1898ea22869ed4be957d153bf955ff103862bfc3889ae37df2dc95891492872e.jpg", + "image_caption": [ + "Bicubic" + ], + "image_footnote": [], + "bbox": [ + 640, + 542, + 699, + 585 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/0367aa6b50e523d989e88e1ed39f06e2023eb75f1c59cbca6d423bc6f53cec1e.jpg", + "image_caption": [ + "RCAN" + ], + "image_footnote": [], + "bbox": [ + 705, + 487, + 763, + 532 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/f43556d28230571eb0f771e115255dcb32f60ed8fcf70e9a16237990dbe96041.jpg", + "image_caption": [ + "RCAN", + "+proposed +proposed +proposed" + ], + "image_footnote": [], + "bbox": [ + 705, + 542, + 763, + 587 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/ed9aa450dc2c069dea4e5c803f499d4a9c4e8f86487d60fd357eaa076a1b987f.jpg", + "image_caption": [ + "SwinLR" + ], + "image_footnote": [], + "bbox": [ + 831, + 487, + 890, + 532 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/3cc5764d44d96cb7d3b5442a77bafae88c5c0fe287cb0117ec5382cb8e06aebe.jpg", + "image_caption": [ + "SwinIR" + ], + "image_footnote": [], + "bbox": [ + 833, + 542, + 892, + 587 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.4. Guiding Principles for Saliency-based DA methods", + "text_level": 1, + "bbox": [ + 89, + 667, + 482, + 698 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "As discussed in Sec. 3.5, the key questions for incorporating saliency are: (1) the manners for segmenting the source image, and (2) the position for pasting the patches. We define the following settings for a comprehensive analysis. For question (1), we examined the impact of different granularities on DA strategies. We categorized granularity into coarse $(1\\times 1,2\\times 2)$ , medium $(3\\times 3,4\\times 4)$ , and fine $(5\\times 5,6\\times 6,7\\times 7)$ patches. We observed a decline in network performance with increasing granularity refinement as shown in Tab. 1, indicating that a lack of continuity at the boundary can cause serious boundary effects and subsequently impair performance. 
For question (2), we investigate six schemes for extracting and merging patches from the source image to", + "bbox": [ + 89, + 704, + 483, + 900 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "the target image: (i) Saliency to Corresponding, extracting the most salient region from the source image and merging it with the corresponding region of the target image; (ii) Center to Center, extracting the central region from the source image and merging it with the central region of the target image; (iii) Saliency to Saliency, extracting the most salient region from the source image and merging it with the most salient region of the target image; (iv) Saliency to Non-Saliency, extracting the most salient region from the source image and merging it with the non-salient region of the target image; (v) Non-Saliency to Saliency, extracting the non-salient region from the source image and merging it with the most salient region of the target image; (vi) Non-Saliency to Non-Saliency, extracting the non-salient region from the source image and merging it with the non-salient", + "bbox": [ + 511, + 667, + 906, + 895 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "23107", + "bbox": [ + 478, + 944, + 517, + 955 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/b772bacf0636b335b1cdd69e8c91870951003a0ff6eda6ca857eff909bca9193.jpg", + "image_caption": [ + "Figure 5. Attribution results for the baseline model, vanilla DA (CutMixup), and saliency-based DA (ADDCutMixup). The attribution results showcase the importance of each pixel in the input LR image for reconstructing the marked path. The diffusion index (DI) reflects the range of involved pixels, with a higher DI indicating a broader range of utilized pixels. Two key observations from the attribution and DI results emerge: (1) Vanilla DA methods enhance network performance by involving more pixels. (2) Saliency-based DA methods guide the model to focus more on meaningful details, reducing attention to irrelevant pixels. Please zoom in for better visualization." + ], + "image_footnote": [], + "bbox": [ + 94, + 88, + 903, + 215 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "region of the target image. As depicted in Tab. 1, scheme (i) Sa2Cor (with source 'Sa')—which incorporates a broader range of target positions such as 'Sa', 'Non', and others— aligns with a diversity principle and outperforms others. This highlights the importance of the saliency region in the source image and diverse augmented patterns. Based on these findings, we propose the key principle for low-level DA: a wider spectrum of degradation patterns.", + "bbox": [ + 88, + 313, + 485, + 434 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "4.5. What Vanilla And Saliency-Based DA learn", + "text_level": 1, + "bbox": [ + 89, + 446, + 460, + 464 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "We conduct attribution analysis on baseline models, models trained with vanilla DA strategies, and models trained with our proposed saliency-based DA strategies. As depicted in Fig. 5, the model trained with DA methods exhibits a higher diffusion index (DI), indicating a broader range of involved pixels. Our ADD, highlighted by black arrows, focuses on more accurate details. Notably, we observe two key findings: (1) Both vanilla and saliency-based DA methods enhance the model's ability to involve more pixels, leading to improved performance. 
(2) Saliency-based DA directs the model to concentrate on influential pixels rather than indiscriminately incorporating more pixels.", + "bbox": [ + 88, + 470, + 482, + 651 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/449ab4b7db79591dbf0897915193a8bc7b29d6b5c1c0609016e9e817434d84b1.jpg", + "table_caption": [ + "Table 4. Gradients comparison with and without global feature detector (GD) on the DIV2K validation dataset." + ], + "table_footnote": [], + "table_body": "
MethodBackbonestep = 1step = 10step = 20step = 30step = 50
LAMEDSR33.9k108.7k119.9k120.0k120.0k
w/ GDEDSR228.6k419.2k643.8k784.9k869.1k
LAMRCAN29.8k660.0k962.6k969.5k987.3k
w/ GDRCAN174.9k494.1k749.2k815.7k1291.3k
", + "bbox": [ + 93, + 689, + 496, + 760 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "4.6. Ablation Studies", + "text_level": 1, + "bbox": [ + 89, + 771, + 254, + 786 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "The effectiveness of the CAM was demonstrated in Sec. 4.2. To further evaluate the impact of the proposed global feature detector (GD), we conduct additional experiments using EDSR and RCAN as backbone models, as outlined in Sec. 4.1. In these ablation studies, we substitute the global feature detector with the absolute cumulative value of the entire image. The results, shown in Tab. 4, highlight that the", + "bbox": [ + 88, + 794, + 482, + 900 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "inclusion of GD leads to smoother gradient changes, making uniformly sampled points more effective.", + "bbox": [ + 511, + 313, + 903, + 343 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "4.7. Extensions: Other Low-level Vision Tasks", + "text_level": 1, + "bbox": [ + 511, + 349, + 870, + 366 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "We explore the applicability of our method to various low-level vision tasks, specifically examining its effectiveness in JPEG artifact removal. Utilizing CNN-based EDSR and Transformer-based SwinIR as baselines, we train the models from scratch. Following the prior works [33], we create a synthetic dataset with a compression quality parameter $q$ set to 10 (lower $q$ indicating stronger artifacts) for color images. Results in Tab. 5 reveal substantial improvements in PSNR and SSIM metrics, particularly at low compression levels ( $q$ ), highlighting the versatility of our method in benefiting various low-level vision tasks.", + "bbox": [ + 511, + 372, + 906, + 537 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Table 5. Quantitative comparison of JPEG compression artifact reduction on the LIVE1 dataset. The best results are highlighted, where $q$ denotes the compression level, with a smaller value indicating a higher compression level.", + "bbox": [ + 511, + 537, + 906, + 592 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/af67edc4d0f0c61ba79e3a7e9cb9bd076b9a75fc703c19d72e97bcd79e90851e.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
Methodq = 10q = 20q = 30
PSNRSSIMPSNRSSIMPSNRSSIM
EDSR30.140.839131.830.884032.450.8992
+ ADD30.150.839432.310.894933.440.9173
SwinIR29.860.828732.250.890933.690.9174
+ ADD29.860.828532.560.895734.480.9287
", + "bbox": [ + 517, + 599, + 921, + 702 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "5. Conclusion", + "text_level": 1, + "bbox": [ + 511, + 709, + 633, + 724 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "In this work, we introduce CAM and ADD, specifically designed for SR. Through a dedicated analysis, we reveal two new insights (i.e., involving more pixels and focusing on influential pixels rather than incorporating irrelevant pixels) for low-level tasks. Besides, We propose the key principle of the wider spectrum of degradation patterns for designing DA in low-level tasks. Experimental results underscore the effectiveness and adaptability of our method, significantly improving the performance of various SR tasks. Our work opens new avenues for exploring a more effective way to utilize image information in DA and low-level tasks.", + "bbox": [ + 511, + 734, + 906, + 900 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "23108", + "bbox": [ + 478, + 944, + 517, + 955 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Acknowledgments", + "text_level": 1, + "bbox": [ + 91, + 90, + 250, + 107 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "This work was supported by the Natural Science Foundation of China (Grant No. 62176119), and the Jiangsu Graduate Research Innovation Program (Grant No. KYCX24_0258).", + "bbox": [ + 89, + 114, + 483, + 160 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 91, + 172, + 187, + 188 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[1] Naveed Akhtar and Mohammad A. A. K. Jalwana. Towards credible visual model interpretation with path attribution. In Proceedings of the International Conference on Machine Learning, ICML, pages 439-457, 2023. 3", + "[2] Tao Dai, Jianrui Cai, Yongbing Zhang, Shu-Tao Xia, and Lei Zhang. Second-order attention network for single image super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, pages 11065-11074, 2019. 1, 2", + "[3] Xin Deng, Yutong Zhang, Mai Xu, Shuhang Gu, and Yiping Duan. Deep coupled feedback network for joint exposure fusion and image super-resolution. IEEE Trans. Image Process., 30:3098-3112, 2021. 1", + "[4] Terrance Devries and Graham W. Taylor. Improved regularization of convolutional neural networks with cutout. ArXiv preprint, abs/1708.04552, 2017. 2", + "[5] Chao Dong, Chen Change Loy, Kaiming He, and Xiaou Tang. Learning a deep convolutional network for image super-resolution. In Proceedings of the European Conference on Computer Vision, ECCV, pages 184-199, 2014. 1, 2", + "[6] Xiaoyi Dong, Jianmin Bao, Dongdong Chen, Weiming Zhang, Nenghai Yu, Lu Yuan, Dong Chen, and Baining Guo. Cswin transformer: A general vision transformer backbone with cross-shaped windows. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, pages 12114-12124, 2022. 2", + "[7] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In Proceedings of the International Conference on Learning Representations, ICLR, 2021. 1, 2", + "[8] Ruicheng Feng, Jinjin Gu, Yu Qiao, and Chao Dong. Suppressing model overfitting for image super-resolution networks. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW, pages 1964-1973, 2019. 1, 2", + "[9] Jinjin Gu and Chao Dong. Interpreting super-resolution networks with local attribution maps. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, pages 9199-9208, 2021. 1, 3", + "[10] Takashi Isobe, Xu Jia, Shuhang Gu, Songjiang Li, Shengjin Wang, and Qi Tian. Video super-resolution with recurrent structure-detail network. In Proceedings of the European Conference on Computer Vision, ECCV, pages 645-660, 2020. 1", + "[11] Andrei Kapishnikov, Subhashini Venugopalan, Besim Avci, Ben Wedin, Michael Terry, and Tolga Bolukbasi. Guided in" + ], + "bbox": [ + 93, + 198, + 483, + 900 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "tegrated gradients: An adaptive path method for removing noise. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, pages 5050-5058, 2021. 1, 3", + "[12] Jiwon Kim, Jung Kwon Lee, and Kyoung Mu Lee. Accurate image super-resolution using very deep convolutional networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, pages 1646-1654, 2016. 1, 2", + "[13] Jang-Hyun Kim, Wonho Choo, and Hyun Oh Song. Puzzle mix: Exploiting saliency and local statistics for optimal mixup. In Proceedings of the International Conference on Machine Learning, ICML, pages 5275-5285, 2020. 1, 2", + "[14] Jang-Hyun Kim, Wonho Choo, Hosan Jeong, and Hyun Oh Song. Co-mixup: Saliency guided joint mixup with supermodular diversity. In Proceedings of the International Conference on Learning Representations, ICLR, 2021. 1, 2", + "[15] Wei-Sheng Lai, Jia-Bin Huang, Narendra Ahuja, and Ming-Hsuan Yang. Deep Laplacian pyramid networks for fast and accurate super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, pages 5835-5843, 2017. 1", + "[16] Jingyun Liang, Jiezhang Cao, Guolei Sun, Kai Zhang, Luc Van Gool, and Radu Timofte. Swinir: Image restoration using swin transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, ICCV, pages 1833-1844, 2021. 1, 2, 5", + "[17] Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, and Kyoung Mu Lee. Enhanced deep residual networks for single image super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW, pages 1132-1140, 2017. 5", + "[18] Jihao Liu, Boxiao Liu, Hang Zhou, Hongsheng Li, and Yu Liu. Tokenmix: Rethinking image mixing for data augmentation in vision transformers. In Proceedings of the European Conference on Computer Vision, ECCV, pages 455-471, 2022. 1, 2", + "[19] Yiqun Mei, Yuchen Fan, and Yuqian Zhou. Image superresolution with non-local sparse attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, pages 3517-3526, 2021. 1, 2", + "[20] Ze-Yu Mi and Yu-Bin Yang. Cutdem: Depth-aware enhanced multi-view image mixing for light field superresolution. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP, pages 3340-3344. IEEE, 2024. 1", + "[21] Mehdi S. M. Sajjadi, Bernhard Schölkopf, and Michael Hirsch. Enhancenet: Single image super-resolution through automated texture synthesis. In Proceedings of the IEEE/CVF International Conference on Computer Vision, ICCV, pages 4501-4510, 2017.
2", + "[22] Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. Learning important features through propagating activation differences. In Proceedings of the International Conference on Machine Learning, ICML, pages 3145-3153, 2017. 3", + "[23] Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Viégas, and Martin Wattenberg. Smoothgrad: removing noise by adding noise. ArXiv preprint, abs/1706.03825, 2017." + ], + "bbox": [ + 516, + 92, + 906, + 900 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "23109", + "bbox": [ + 478, + 944, + 519, + 957 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[24] Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014. 3", + "[25] Jialu Sui, Xianping Ma, Xiaokang Zhang, and Man-On Pun. GCRDN: global context-driven residual dense network for remote sensing image superresolution. IEEE J. Sel. Top. Appl. Earth Obs. Remote. Sens., 16:4457-4468, 2023. 1, 2", + "[26] Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In Proceedings of the International Conference on Machine Learning, ICML, pages 3319-3328, 2017. 1, 3", + "[27] Radu Timofte, Rasmus Rothe, and Luc Van Gool. Seven ways to improve example-based single image super resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, pages 1865-1873, 2016. 1, 2", + "[28] A. F. M. Shahab Uddin, Mst. Sirazam Monira, Wheemyung Shin, TaeChoong Chung, and Sung-Ho Bae. Saliencymix: A saliency guided data augmentation strategy for better regularization. In Proceedings of the International Conference on Learning Representations, ICLR, 2021. 1, 2", + "[29] Devesh Walawalkar, Zhiqiang Shen, Zechun Liu, and Marios Savvides. Attentive cutmix: An enhanced data augmentation approach for deep learning based image classification. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP, pages 3642-3646, 2020. 1, 2", + "[30] Haofan Wang, Zifan Wang, Mengnan Du, Fan Yang, Zijian Zhang, Sirui Ding, Piotr Mardziel, and Xia Hu. Scorecam: Score-weighted visual explanations for convolutional neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW, pages 111-119, 2020. 6", + "[31] Xintao Wang, Ke Yu, Shixiang Wu, Jinjin Gu, Yihao Liu, Chao Dong, Yu Qiao, and Chen Change Loy. ESRGAN: enhanced super-resolution generative adversarial networks. In Proceedings of the European Conference on Computer Vision Workshops, ECCVW, pages 63-79, 2018. 1, 2", + "[32] Zeyu Xiao, Yutong Liu, Ruisheng Gao, and Zhiwei Xiong. Cutmib: Boosting light field super-resolution via multi-view image blending. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, pages 1672-1682, 2023. 1, 2", + "[33] Jaejun Yoo, Namhyuk Ahn, and Kyung-Ah Sohn. Rethinking data augmentation for image super-resolution: A comprehensive analysis and a new strategy. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, pages 8372-8381, 2020. 1, 2, 5, 8", + "[34] Sangdoo Yun, Dongyoon Han, Sanghyuk Chun, Seong Joon Oh, Youngjoon Yoo, and Junsuk Choe. Cutmix: Regularization strategy to train strong classifiers with localizable features. In Proceedings of the IEEE/CVF International Conference on Computer Vision, ICCV, pages 6022-6031. IEEE, 2019. 
1, 2", + "[35] Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimiza" + ], + "bbox": [ + 91, + 92, + 480, + 900 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "tion. In Proceedings of the International Conference on Learning Representations, ICLR, 2018. 1, 2", + "[36] Kai Zhang, Yawei Li, Wangmeng Zuo, Lei Zhang, Luc Van Gool, and Radu Timofte. Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell., 44(10):6360-6376, 2022. 1, 2", + "[37] Qing-Long Zhang, Lu Rao, and Yubin Yang. Group-cam: Group score-weighted visual explanations for deep convolutional networks. ArXiv preprint, abs/2103.13859, 2021. 6", + "[38] Yulun Zhang, Kunpeng Li, Kai Li, Lichen Wang, Bineng Zhong, and Yun Fu. Image super-resolution using very deep residual channel attention networks. In Proceedings of the European Conference on Computer Vision, ECCV, pages 294-310, 2018. 5", + "[39] Yulun Zhang, Yapeng Tian, Yu Kong, Bineng Zhong, and Yun Fu. Residual dense network for image super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, pages 2472-2481, 2018. 2", + "[40] Yulun Zhang, Yapeng Tian, Yu Kong, Bineng Zhong, and Yun Fu. Residual dense network for image restoration. IEEE Trans. Pattern Anal. Mach. Intell., 43(7):2480-2495, 2021. 1, 2", + "[41] Zhun Zhong, Liang Zheng, Guoliang Kang, Shaozi Li, and Yi Yang. Random erasing data augmentation. In Proceedings of the AAAI Conference on Artificial Intelligence, AAAI, pages 13001-13008, 2020. 2" + ], + "bbox": [ + 516, + 92, + 903, + 473 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "23110", + "bbox": [ + 478, + 944, + 519, + 955 + ], + "page_idx": 9 + } +] \ No newline at end of file diff --git a/2025/ADD_ Attribution-Driven Data Augmentation Framework for Boosting Image Super-Resolution/e29ca394-45e6-4bf5-b8ea-58a74dbe24fa_model.json b/2025/ADD_ Attribution-Driven Data Augmentation Framework for Boosting Image Super-Resolution/e29ca394-45e6-4bf5-b8ea-58a74dbe24fa_model.json new file mode 100644 index 0000000000000000000000000000000000000000..db1e864eca2ac0bf16c8e6b33dc8ff3e918175f3 --- /dev/null +++ b/2025/ADD_ Attribution-Driven Data Augmentation Framework for Boosting Image Super-Resolution/e29ca394-45e6-4bf5-b8ea-58a74dbe24fa_model.json @@ -0,0 +1,3102 @@ +[ + [ + { + "type": "header", + "bbox": [ + 0.107, + 0.003, + 0.182, + 0.043 + ], + "angle": 0, + "content": "CVF" + }, + { + "type": "header", + "bbox": [ + 0.237, + 0.001, + 0.812, + 0.047 + ], + "angle": 0, + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." 
+ }, + { + "type": "title", + "bbox": [ + 0.108, + 0.13, + 0.892, + 0.177 + ], + "angle": 0, + "content": "ADD: Attribution-Driven Data Augmentation Framework for Boosting Image Super-Resolution" + }, + { + "type": "text", + "bbox": [ + 0.393, + 0.204, + 0.61, + 0.222 + ], + "angle": 0, + "content": "Ze-Yu Mi Yu-Bin Yang*" + }, + { + "type": "text", + "bbox": [ + 0.143, + 0.222, + 0.856, + 0.24 + ], + "angle": 0, + "content": "State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China" + }, + { + "type": "text", + "bbox": [ + 0.292, + 0.243, + 0.7, + 0.258 + ], + "angle": 0, + "content": "mizeyu@smail.nju.edu.cn yangyubin@nju.edu.cn" + }, + { + "type": "title", + "bbox": [ + 0.248, + 0.292, + 0.327, + 0.308 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.324, + 0.485, + 0.686 + ], + "angle": 0, + "content": "Data augmentation (DA) stands out as a powerful technique to enhance the generalization capabilities of deep neural networks across diverse tasks. However, in low-level vision tasks, DA remains rudimentary (i.e., vanilla DA), facing a critical bottleneck due to information loss. In this paper, we introduce a novel Calibrated Attribution Maps (CAM) to generate saliency masks, followed by two saliency-based DA methods—Attribution-Driven Data augmentation (ADD) and \\( \\text{ADD+} \\)—designed to address this issue. CAM leverages integrated gradients and incorporates two key innovations: a global feature detector and calibrated integrated gradients. Based on CAM and the proposed methods, we have two new insights for low-level vision tasks: (1) increasing pixel diversity, as seen in vanilla DA, can improve performance, and (2) focusing on salient features while minimizing the impact of irrelevant pixels, as seen in saliency-based DA, more effectively enhances model performance. Additionally, we find and highlight the key guiding principle for designing saliency-based DA: a wider spectrum of degradation patterns. Extensive experiments demonstrate the compatibility and consistency of our method, as well as the significant performance improvement across various SR tasks and networks. Our code is available at https://github.com/mizeyu/ADD." + }, + { + "type": "title", + "bbox": [ + 0.092, + 0.717, + 0.222, + 0.733 + ], + "angle": 0, + "content": "1. Introduction" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.742, + 0.483, + 0.877 + ], + "angle": 0, + "content": "Data augmentation (DA) is a fundamental technique for improving the generalization ability of deep neural networks (DNNs). In high-level vision tasks such as image recognition and semantic segmentation, early vanilla approaches like Mixup [35] and CutMix [34] laid the groundwork. Recently, saliency-based DA methods [13, 14, 18, 28, 29] have gained considerable attention, demonstrating superior performance over vanilla counterparts in high-level vision tasks." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.293, + 0.905, + 0.414 + ], + "angle": 0, + "content": "However, applying saliency-based DA techniques directly to low-level vision tasks remains challenging due to a mismatch in task objectives, inconsistent feature requirements, differences in data characteristics, and inappropriate method designs (see Appendix for further details). These challenges have led to the continued use of vanilla DA techniques, such as rotation, flipping [27], Mixup [8], and Cutblur [33], in low-level vision tasks." 
+ }, + { + "type": "text", + "bbox": [ + 0.512, + 0.415, + 0.905, + 0.672 + ], + "angle": 0, + "content": "In low-level vision tasks [2, 3, 5-7, 10, 12, 15, 16, 19, 25, 31, 36, 40], the use of vanilla DA methods [8, 20, 27, 32, 33] is hindered by the issue of information loss, which can severely degrade the quality of the reconstructed images. As shown in Fig. 1, vanilla DA techniques often generate images with flat regions and simplistic edges (e.g., a3), which provide minimal support for model learning in tasks like super-resolution (SR), leading to significant loss of crucial information. In contrast, saliency-based DA methods [13, 14, 18, 28, 29] target information-rich regions (e.g., b3), which facilitate better reconstruction performance. The phase spectrum analysis in Fig. 1 (c, d, e) further illustrates that augmented images incorporating saliency information (b3) retain more critical details, alleviating the problem of information loss. This observation motivates the need for new DA techniques that focus on more meaningful regions in low-level tasks." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.673, + 0.905, + 0.87 + ], + "angle": 0, + "content": "Existing saliency methods, however, are often prone to background noise and struggle to accurately capture important features. To address these limitations, we propose a new attribution method, the Calibrated Attribution Maps (CAM), which is capable of identifying important features without being influenced by background noise. CAM builds upon LAM [9] and integrated gradients (IG) [11, 26], introducing two key innovations: a global feature detector and calibrated integrated gradients. Using CAM, we introduce Attribution-Driven Data augmentation (ADD) techniques, and its enhanced version \\(\\mathrm{ADD+}\\), which represent the first attempt to explore saliency in DA from an attribution analysis perspective." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.871, + 0.905, + 0.902 + ], + "angle": 0, + "content": "Through CAM, we derive two key insights for low-level vision tasks: (1) incorporating a broader range of pixels," + }, + { + "type": "page_footnote", + "bbox": [ + 0.11, + 0.888, + 0.236, + 0.901 + ], + "angle": 0, + "content": "*Corresponding author." + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.518, + 0.958 + ], + "angle": 0, + "content": "23101" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.095, + 0.077, + 0.255, + 0.161 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.121, + 0.163, + 0.223, + 0.176 + ], + "angle": 0, + "content": "a1. Input Image" + }, + { + "type": "image", + "bbox": [ + 0.256, + 0.079, + 0.414, + 0.16 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.287, + 0.163, + 0.383, + 0.175 + ], + "angle": 0, + "content": "a2. Vanilla DA" + }, + { + "type": "image", + "bbox": [ + 0.415, + 0.079, + 0.573, + 0.161 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.422, + 0.163, + 0.567, + 0.176 + ], + "angle": 0, + "content": "a3. Aug. Image of (a2)" + }, + { + "type": "image", + "bbox": [ + 0.575, + 0.079, + 0.734, + 0.161 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.583, + 0.163, + 0.723, + 0.176 + ], + "angle": 0, + "content": "c. Phase Spec. 
of (a3)" + }, + { + "type": "image", + "bbox": [ + 0.738, + 0.095, + 0.901, + 0.146 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.096, + 0.178, + 0.255, + 0.259 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.114, + 0.261, + 0.239, + 0.275 + ], + "angle": 0, + "content": "b1. Saliency Image" + }, + { + "type": "image", + "bbox": [ + 0.257, + 0.178, + 0.415, + 0.259 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.259, + 0.261, + 0.408, + 0.274 + ], + "angle": 0, + "content": "b2. Saliency-based DA" + }, + { + "type": "image", + "bbox": [ + 0.415, + 0.178, + 0.574, + 0.26 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.419, + 0.261, + 0.57, + 0.274 + ], + "angle": 0, + "content": "b3. Aug. Image of (b2)" + }, + { + "type": "image", + "bbox": [ + 0.577, + 0.178, + 0.735, + 0.26 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.585, + 0.261, + 0.726, + 0.274 + ], + "angle": 0, + "content": "d. Phase Spec. of (b3)" + }, + { + "type": "image", + "bbox": [ + 0.739, + 0.178, + 0.901, + 0.26 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.741, + 0.261, + 0.903, + 0.275 + ], + "angle": 0, + "content": "e. Residual Map of d & c" + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.286, + 0.907, + 0.342 + ], + "angle": 0, + "content": "Figure 1. Motivation: Provide more meaningful information for vanilla DA to address the critical bottleneck of information loss. (a1) Input image. (a2) Vanilla DA randomly selects an area. (a3) Augmented image of vanilla DA. (b1) Saliency map of the input image. (b2) Select the maximum saliency region. (b3) Augmented image of saliency-based DA. (c) Phase spectrum of augmented image (a3). (d) Phase spectrum of augmented image (b3). (e) Residual map calculated between (c) and (d)." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.35, + 0.483, + 0.47 + ], + "angle": 0, + "content": "as in vanilla DA, can improve model performance, and (2) focusing on more influential pixels—using saliency information—while minimizing the inclusion of irrelevant pixels, as in saliency-based DA, results in more effective enhancement. This insight explains why ADD, which utilizes saliency to prioritize important pixels, alleviates information loss and outperforms traditional vanilla DA methods, which indiscriminately incorporate additional pixels." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.472, + 0.484, + 0.578 + ], + "angle": 0, + "content": "Furthermore, to design effective saliency-based DA methods in low-level vision tasks, we propose the key guiding principle: a wider spectrum of degradation patterns. Specifically, DA methods should: (1) prefer maintaining continuous boundaries to avoid boundary effects, and (2) prioritize diverse augmentation strategies over single augmented images to provide richer learning signals." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.579, + 0.197, + 0.592 + ], + "angle": 0, + "content": "Contributions:" + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.595, + 0.483, + 0.654 + ], + "angle": 0, + "content": "- We introduce CAM, a robust attribution method that accurately identifies important features while mitigating background noise, improving saliency map generation in low-level vision tasks." 
+ }, + { + "type": "text", + "bbox": [ + 0.092, + 0.656, + 0.483, + 0.699 + ], + "angle": 0, + "content": "- We propose ADD, a novel attribution-driven DA method, which overcomes the limitations of vanilla DA and significantly reduces information loss in low-level tasks." + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.7, + 0.483, + 0.745 + ], + "angle": 0, + "content": "- We provide new insights and guiding principles for saliency-based DA in low-level vision, offering a framework for improving model performance." + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.746, + 0.483, + 0.79 + ], + "angle": 0, + "content": "- Our extensive experiments validate the effectiveness of our methods, demonstrating significant performance gains across a range of super-resolution tasks." + }, + { + "type": "list", + "bbox": [ + 0.092, + 0.595, + 0.483, + 0.79 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.807, + 0.24, + 0.822 + ], + "angle": 0, + "content": "2. Related Works" + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.833, + 0.314, + 0.849 + ], + "angle": 0, + "content": "2.1. Image Super-Resolution" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.856, + 0.484, + 0.901 + ], + "angle": 0, + "content": "Image super-resolution (SR) is a key technology in computer vision [21, 39]. Since SRCNN [5], numerous CNN-based methods have emerged, incorporating residual [12," + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.35, + 0.906, + 0.396 + ], + "angle": 0, + "content": "36] and dense blocks [25, 31, 40], as well as attention mechanisms [2, 19]. More recently, transformer-based SR models [6, 7, 16] have achieved state-of-the-art performance." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.408, + 0.82, + 0.423 + ], + "angle": 0, + "content": "2.2. Data Augmentation in Vision Tasks" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.43, + 0.907, + 0.552 + ], + "angle": 0, + "content": "Traditional data augmentation (DA) methods in high-level vision include geometric and intensity transformations [4, 41], as well as mixed-based techniques [34, 35]. Recently, saliency-based DA has gained popularity, focusing on preserving critical regions [13, 14, 18, 28, 29]. In low-level vision, DA remains largely limited to conventional approaches [27]. Some works explore Mixup [8] and Cutblur [33], with CutMIB extending them to light-field SR [32]." + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.567, + 0.612, + 0.582 + ], + "angle": 0, + "content": "3. Methods" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.594, + 0.906, + 0.714 + ], + "angle": 0, + "content": "In this section, we first discuss the key challenges and motivations for incorporating saliency-based data augmentation (DA) in low-level vision tasks in Sec. 3.1. Then, in Sec. 3.2, we briefly review prior attribution methods, which serve as the foundation for our proposed approach. Next, we introduce the concept of Calibrated Attribution Maps (CAM) in Sec. 3.3, followed by the presentation of saliency-based DA techniques, ADD and \\(\\mathrm{ADD + }\\), in Sec. 3.4." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.726, + 0.855, + 0.742 + ], + "angle": 0, + "content": "3.1. Integrating Saliency into Low-Level DA" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.749, + 0.906, + 0.854 + ], + "angle": 0, + "content": "Why introduce saliency into low-level vanilla DA? As discussed in Sec. 
1, leveraging saliency provides valuable insights into the most important image features, addressing the problem of information loss. This motivates the integration of saliency information into data augmentation strategies for low-level vision tasks, where preserving critical image details is essential for enhancing performance." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.856, + 0.906, + 0.901 + ], + "angle": 0, + "content": "How to introduce saliency into low-level vanilla DA? Traditional saliency methods, such as LAM and integrated gradients (IG), are limited by the challenge of distinguishing" + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "23102" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.092, + 0.485, + 0.198 + ], + "angle": 0, + "content": "relevant features from irrelevant background noise. To overcome this limitation, we build upon LAM and IG and introduce a novel approach, CAM, which incorporates two new components: a global feature detector and a calibrated integrated gradient. These innovations, detailed in Sec. 3.3, effectively address the noise issue and improve the accuracy of saliency estimation in low-level tasks." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.209, + 0.445, + 0.226 + ], + "angle": 0, + "content": "3.2. Overview of Vanilla Attribution Methods" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.232, + 0.485, + 0.354 + ], + "angle": 0, + "content": "Before presenting our method, we first provide a brief overview of existing attribution methods that form the foundation for our approach [1, 11, 22-24]. Let \(\mathcal{I} \in \mathbb{R}^d\) be the input image, and let \(\mathcal{C}: \mathbb{R}^d \mapsto \mathbb{R}\) represent a classification network. Gradient-based methods, such as Integrated Gradients (IG), quantify the impact of changes in the input dimensions by computing the gradient of the output with respect to the input image:" + }, + { + "type": "equation", + "bbox": [ + 0.13, + 0.365, + 0.484, + 0.401 + ], + "angle": 0, + "content": "\[\n\mathrm{IG}_{\mathcal{C}}(\mathcal{I}) = (\mathcal{I} - \mathcal{I}^{\prime}) \int_{0}^{1} \frac{\partial \mathcal{C}(\mathcal{I}^{\prime} + \alpha (\mathcal{I} - \mathcal{I}^{\prime}))}{\partial \mathcal{I}} d\alpha, \tag{1}\n\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.412, + 0.484, + 0.457 + ], + "angle": 0, + "content": "where \(\mathcal{I}'\) is a baseline image (often a blank image in high-level tasks) and \(\alpha\) is a continuous parameter that interpolates between the baseline and the target input." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.459, + 0.484, + 0.609 + ], + "angle": 0, + "content": "In the context of image SR, Local Attribution Maps (LAM) convert the original baseline image \(\mathcal{I}'\) to a blurred version \(\mathcal{I}' = \omega(\sigma) \otimes \mathcal{I}\), where \(\omega(\sigma)\) is a Gaussian blur kernel with width \(\sigma\), and \(\otimes\) represents convolution. Besides, LAM adapts IG for SR tasks by using a gradient detection method \(D\) that focuses on local feature detection in SR networks. However, LAM suffers from the limitation of irrelevant gradient accumulations, which can easily lead to a focus on irrelevant areas. To address this issue, we introduce CAM to eliminate irrelevant area interference."
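To make Eq. (1) and LAM's blurred-baseline adaptation of it concrete, here is a minimal PyTorch sketch that approximates the path integral with a Riemann sum over k uniformly spaced points. The network handle `model`, the step count `k`, the kernel size, and the scalar reduction `.sum()` (standing in for LAM's local detector \(D\)) are illustrative assumptions, not the authors' released code.

```python
import torch
import torchvision.transforms.functional as TF

def integrated_gradients(model, img, sigma=9.0, k=50):
    # Blurred baseline I' = w(sigma) (x) I, following LAM, instead of
    # the blank baseline used by classic IG in high-level tasks.
    baseline = TF.gaussian_blur(img, kernel_size=[33, 33], sigma=[sigma, sigma])
    avg_grad = torch.zeros_like(img)
    for i in range(k):
        alpha = i / k
        # Point I' + alpha * (I - I') on the straight-line path.
        point = (baseline + alpha * (img - baseline)).requires_grad_(True)
        # Scalar proxy for the detector applied to the SR output.
        score = model(point).sum()
        avg_grad += torch.autograd.grad(score, point)[0] / k
    # Eq. (1): scale the averaged path gradients by (I - I').
    return (img - baseline) * avg_grad
```

Summing the magnitude of the returned map over channels yields a saliency image of the kind shown in Fig. 1 (b1).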
+ }, + { + "type": "title", + "bbox": [ + 0.091, + 0.621, + 0.411, + 0.638 + ], + "angle": 0, + "content": "3.3. Calibrated Attribution Maps (CAM)" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.644, + 0.483, + 0.719 + ], + "angle": 0, + "content": "In this section, we introduce the concept of Calibrated Attribution Maps (CAM) as shown in Fig. 2 (a), which is inspired by the IG [26] and LAM [9]. The goal of CAM is to provide a more accurate and reliable estimation of feature importance, particularly in the context of image SR tasks." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.72, + 0.484, + 0.902 + ], + "angle": 0, + "content": "Global Feature Detector (GD). Given an input image pair \\(\\mathcal{I}^{LR}\\) (low resolution) and \\(\\mathcal{I}^{HR}\\) (high resolution), we aim to learn a mapping function \\(\\mathcal{F}\\) that produces the super-resolved image \\(\\mathcal{I}^{SR}\\). Traditional attribution methods such as LAM detect local pixel-wise gradients, which can easily lead to saturation effects, as shown in Fig. 2 (d). To alleviate this problem, we introduce a Global Feature Detector (GD), which aims to capture global features in the image by applying convolutional filters such as the Sobel filter. This approach smoothes the detected gradients and enhances the robustness of saliency maps. To achieve a more robust global feature representation, the GD operation is defined" + }, + { + "type": "text", + "bbox": [ + 0.515, + 0.095, + 0.538, + 0.105 + ], + "angle": 0, + "content": "as:" + }, + { + "type": "equation", + "bbox": [ + 0.595, + 0.105, + 0.905, + 0.123 + ], + "angle": 0, + "content": "\\[\n\\boldsymbol {G} \\boldsymbol {D} (\\mathcal {I} ^ {S R}) = \\| \\boldsymbol {S o b e l} _ {x y} (\\mathcal {I} ^ {S R}) \\| _ {2}, \\tag {2}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.129, + 0.905, + 0.188 + ], + "angle": 0, + "content": "where \\(Sobel_{xy}\\) denotes the Sobel filter applied in both the \\(x\\) and \\(y\\) directions to capture edge features in the image. This approach smooths the gradients and reduces the saturation problem, as demonstrated in Fig. 2 (d) & (g)." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.189, + 0.907, + 0.325 + ], + "angle": 0, + "content": "To analyze the attributes of the SR network, given the current input image \\(\\mathcal{I}\\), a baseline image \\(\\mathcal{I}'\\) satisfies that \\(\\mathcal{F}(\\mathcal{I}')\\) absent certain features existed in \\(\\mathcal{F}(\\mathcal{I})\\) is also needed. Accordingly, feature scalar \\(GD(\\mathcal{F}(\\mathcal{I}))\\) will show significant numerical advantage over \\(GD(\\mathcal{F}(\\mathcal{I}'))\\). We calculate the path-integrated gradient along the gradually changing path from \\(\\mathcal{I}'\\) to \\(\\mathcal{I}\\) and obtain the attribution map for \\(GD(\\mathcal{F}(\\mathcal{I}))\\). Then, the \\(i\\)th dimension of the calibrated attribution maps is defined as follows:" + }, + { + "type": "equation", + "bbox": [ + 0.535, + 0.333, + 0.905, + 0.363 + ], + "angle": 0, + "content": "\\[\n\\phi_ {i} ^ {C A M} (\\mathcal {F}, \\boldsymbol {G D}) = \\left(\\mathcal {I} _ {i} - \\mathcal {I} _ {i} ^ {\\prime}\\right) \\int_ {a = 0} ^ {1} \\frac {\\partial \\boldsymbol {G D} \\left(\\boldsymbol {F} \\left(\\mathcal {I} ^ {\\prime} + \\alpha \\left(\\mathcal {I} - \\mathcal {I} ^ {\\prime}\\right)\\right)\\right)}{\\partial \\mathcal {I} _ {i}} d \\alpha . 
\tag{3}\n\]" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.37, + 0.907, + 0.593 + ], + "angle": 0, + "content": "Calibrated Path Integrated Gradient (CPIG). We now introduce the Calibrated Path Integrated Gradient (CPIG), which can efficiently and effectively analyze global attribution. In SR tasks, the high-frequency components (e.g., edges and textures) contribute much more than the low-frequency components (e.g., color and brightness) to the network performance. In this work, we obtain baseline inputs by eliminating high-frequency components, setting them as the blurred version of LR images denoted as \(\mathcal{I}' = \omega(\sigma) \otimes \mathcal{I}\). Here, \(\omega(\sigma)\) represents the Gaussian blur kernel parameterized by the kernel width \(\sigma\) and \(\otimes\) is the convolution operation. Following previous works, we construct a smooth transformation from \(\mathcal{I}'\) to \(\mathcal{I}\), which is expressed as \(\gamma(a) = \omega(\sigma - a\sigma) \otimes \mathcal{I}\). Accordingly, we have \(\gamma(0) = \mathcal{I}'\) and \(\gamma(1) = \mathcal{I}\). The gradients are sampled at \(k\) steps along the path, and the gradient of the \(i\)-th step is:" + }, + { + "type": "equation", + "bbox": [ + 0.588, + 0.599, + 0.905, + 0.617 + ], + "angle": 0, + "content": "\[\n\phi_{i}^{CAM}(\mathcal{F}, \boldsymbol{GD}, \gamma) = \tag{4}\n\]" + }, + { + "type": "equation", + "bbox": [ + 0.59, + 0.62, + 0.844, + 0.653 + ], + "angle": 0, + "content": "\[\n\left(\gamma\left(\frac{i}{k}\right) - \gamma\left(\frac{i + 1}{k}\right)\right) \times \frac{\partial \boldsymbol{GD}\left(\mathcal{F}\left(\gamma\left(\frac{i}{k}\right)\right)\right)}{\partial \gamma\left(\frac{i}{k}\right)}.\n\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.66, + 0.906, + 0.765 + ], + "angle": 0, + "content": "To improve the stability and accuracy of the path-integrated gradients, we introduce a calibration step that limits the deviation at each step and ensures that the gradients focus on the most relevant features of the image. This is achieved by limiting the range of each step to fluctuate around a central value, \(a_{\mathrm{min}} = \max (a - d,0.0)\) and \(a_{\mathrm{max}} = \min (a + d,1.0)\)." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.766, + 0.907, + 0.903 + ], + "angle": 0, + "content": "Attribution analysis progressively adjusts the value of each pixel to make the interpolated image \(\mathcal{I}'\) gradually approach the target image \(\mathcal{I}\), and the overall loss function can be defined as: \(\mathcal{L}_{OA} = \| \mathcal{I}' - \mathcal{I}\|_1\). Correspondingly, the target loss function for approximating the target image with the interpolation image in step \(i\) is: \(\mathcal{L}_{TG} = \| \mathcal{I}' - \mathcal{I}\|_1 \times (1 - \frac{i}{k})\). Then, the difference between the actual current interpolated image and the target image can be represented by the current loss: \(\mathcal{L}_{CU} = \| \gamma (\frac{i}{k}) - \mathcal{I}\|_1\)." + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "23103" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.095, + 0.087, + 0.885, + 0.393 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.404, + 0.908, + 0.448 + ], + "angle": 0, + "content": "Figure 2. 
The illustration of CAM, the ADD framework, and the CAM-LAM comparison. (a) Illustration of CAM. (b) Process of ADD and \(\mathrm{ADD + }\). (c) Comparison of global attribution analysis between CAM and LAM, with red boxes highlighting regions where gradients are presented after global attribution analysis." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.473, + 0.484, + 0.534 + ], + "angle": 0, + "content": "To further prevent instability caused by large gradient changes and to maintain the smoothness of the path, we select only those pixels with gradient magnitudes below a predefined threshold \( T_{f} \) for updating:" + }, + { + "type": "equation", + "bbox": [ + 0.167, + 0.551, + 0.404, + 0.573 + ], + "angle": 0, + "content": "\[\nT_{f} = \operatorname{sorted}\left(\left| \phi_{i}^{CAM} \right|\right)_{\left[ p_{f} \cdot \text{num. pixels} \right]}^{\min},\n\]" + }, + { + "type": "equation", + "bbox": [ + 0.169, + 0.575, + 0.482, + 0.615 + ], + "angle": 0, + "content": "\[\nM_{f} = \left\{ \begin{array}{ll} 1, & \text{if } |\phi_{i}^{CAM}| \leq T_{f}, \\ 0, & \text{otherwise}, \end{array} \right. \tag{5}\n\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.632, + 0.484, + 0.722 + ], + "angle": 0, + "content": "where \(M_{f}\) is a binary mask representing the pixels that need to be calibrated. Based on the difference between the current loss \(\mathcal{L}_{CU}\) and the target loss \(\mathcal{L}_{TG}\), combined with the mask threshold loss function \(\mathcal{L}_{MF}\) between the pixels to be corrected and the target image, we generate a calibration factor \(\delta\) to control the step size of each update:" + }, + { + "type": "equation", + "bbox": [ + 0.162, + 0.74, + 0.411, + 0.756 + ], + "angle": 0, + "content": "\[\n\mathcal{L}_{MF} = \left\| M_{f} \odot (\gamma (a) - \gamma \left(a_{\max}\right)) \right\|_{1},\n\]" + }, + { + "type": "equation", + "bbox": [ + 0.212, + 0.759, + 0.482, + 0.79 + ], + "angle": 0, + "content": "\[\n\delta = \frac{\mathcal{L}_{CU} - \mathcal{L}_{TG}}{\mathcal{L}_{MF}}. \tag{6}\n\]" + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.811, + 0.484, + 0.902 + ], + "angle": 0, + "content": "With the calibration factor \(\delta\), the calibrated interpolation image \(\gamma_{c}(a)\) can be formulated as: \(\gamma_{c}(a) = \gamma (a + \delta \times (a_{\mathrm{max}} - a))\). Accordingly, the calibrated gradient \(\psi_i^{CAM}\) in step \(i\) can be updated by the calibrated interpolation image \(\gamma_{c}(a)\) and represented as: \(\psi_i^{CAM} = \phi_i^{CAM} + (\gamma_c(a) - \gamma (a)) \times \phi_i^{CAM}\). Finally, we obtain the approximate integrated gra" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.473, + 0.885, + 0.489 + ], + "angle": 0, + "content": "dient by summing up the calibrated gradients of \(k\) steps:" + }, + { + "type": "equation", + "bbox": [ + 0.643, + 0.501, + 0.906, + 0.542 + ], + "angle": 0, + "content": "\[\n\mathcal{I}_{s}^{LR} = \sum_{i = 0}^{k} \psi_{i}^{CAM}. \tag{7}\n\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.553, + 0.906, + 0.613 + ], + "angle": 0, + "content": "As shown in Fig. 2 (f) and (i), our calibrated attribution maps focus on the most important edge texture information compared to LAM, and are not affected by the noise of flat areas and background."
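As a code-level summary of Sec. 3.3, the sketch below composes Eqs. (2)-(7): the Sobel-based global feature detector, the blur path \(\gamma(a)\), the per-step gradient of Eq. (4), the small-gradient mask of Eq. (5), and the calibration factor of Eq. (6). It is a sketch under stated assumptions, not a definitive implementation: the kernel size, step count `k`, deviation `d`, and fraction `p_f` are illustrative, the mean absolute error stands in for the L1 norms, and GD is reduced to a scalar so it can be differentiated with respect to the input.

```python
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF

_SOBEL = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])

def gd(sr):
    # Eq. (2): Sobel responses in x and y combined by an L2 norm,
    # summed into a scalar feature of the super-resolved image.
    c = sr.shape[1]
    kx = _SOBEL.to(sr).view(1, 1, 3, 3).repeat(c, 1, 1, 1)
    ky = _SOBEL.t().to(sr).view(1, 1, 3, 3).repeat(c, 1, 1, 1)
    gx = F.conv2d(sr, kx, padding=1, groups=c)
    gy = F.conv2d(sr, ky, padding=1, groups=c)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-12).sum()

def cam(model, lr, sigma=9.0, k=30, d=0.1, p_f=0.5):
    def gamma(a):
        # Path gamma(a) = w(sigma - a*sigma) (x) I, blurred to sharp.
        s = max(sigma * (1.0 - a), 1e-3)
        return TF.gaussian_blur(lr, kernel_size=[33, 33], sigma=[s, s])

    attribution = torch.zeros_like(lr)
    l_oa = (gamma(0.0) - lr).abs().mean()            # overall loss L_OA
    for i in range(k):
        a = i / k
        x = gamma(a).requires_grad_(True)
        grad = torch.autograd.grad(gd(model(x)), x)[0]
        phi = (gamma(a) - gamma((i + 1) / k)) * grad           # Eq. (4)
        # Eq. (5): mask the pixels with small attribution magnitude.
        t_f = phi.abs().flatten().quantile(p_f)
        m_f = (phi.abs() <= t_f).float()
        # Eq. (6): calibration factor from current/target/mask losses.
        a_max = min(a + d, 1.0)
        l_cu = (gamma(a) - lr).abs().mean()
        l_tg = l_oa * (1.0 - i / k)
        l_mf = (m_f * (gamma(a) - gamma(a_max))).abs().mean() + 1e-12
        delta = float(((l_cu - l_tg) / l_mf).clamp(0.0, 1.0))
        # Calibrated interpolation image and gradient psi_i.
        gamma_c = gamma(a + delta * (a_max - a))
        attribution += phi + (gamma_c - gamma(a)) * phi        # Eq. (7)
    return attribution
```

Because each \(\gamma(a)\) is recomputed from the same LR image, the loop trades memory for simplicity; caching the blur pyramid would be the obvious optimization.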
+ }, + { + "type": "title", + "bbox": [ + 0.513, + 0.625, + 0.906, + 0.641 + ], + "angle": 0, + "content": "3.4. Attribution-Driven Data Augmentation (ADD)" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.647, + 0.907, + 0.783 + ], + "angle": 0, + "content": "In this section, as depicted in Fig. 2 (b), we introduce the specific process of the proposed ADD and the enhanced version \\(\\mathrm{ADD + }\\). Let \\(\\mathcal{I}^{LR}\\in R^{H\\times W\\times C}\\) be the input LR image, and the corresponding saliency map \\(\\mathcal{I}_s^{LR}\\) can be obtained with Eq. (7). To accurately obtain the region of maximum saliency, we follow the principle (1) of continuous boundaries in Sec. 3.5 and select the maximum saliency pixels with a proportion of \\(p\\) and obtain the corresponding irregularly shaped patch:" + }, + { + "type": "equation", + "bbox": [ + 0.588, + 0.794, + 0.794, + 0.815 + ], + "angle": 0, + "content": "\\[\nT _ {p} = \\operatorname {s o r t e d} \\left(\\mathcal {I} _ {s} ^ {L R}\\right) _ {\\lceil p \\cdot \\text {n u m . p i x e l s} \\rceil} ^ {m a x},\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.589, + 0.817, + 0.905, + 0.858 + ], + "angle": 0, + "content": "\\[\nM (i, j) = \\left\\{ \\begin{array}{l l} 1, & \\text {i f} \\mathcal {I} _ {s} ^ {L R} (i, j) \\geq T _ {p}, \\\\ 0, & \\text {o t h e r w i s e}, \\end{array} \\right. \\tag {8}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.869, + 0.908, + 0.903 + ], + "angle": 0, + "content": "where the \\(T_{p}\\) represents the maximum saliency value of the top \\(p\\) proportion in the \\(\\mathcal{I}_s^{LR}\\) and the \\(M\\in \\{0,1\\}^{H\\times W}\\) is a" + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.52, + 0.958 + ], + "angle": 0, + "content": "23104" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.092, + 0.482, + 0.197 + ], + "angle": 0, + "content": "binary mask indicating the area to be cut. Following the second principle (2) of diverse augmented images in Sec. 3.5, we use the mask \\(M\\) to cut the patch to the corresponding position on another image and combine it with different DA strategies. Given the LR-HR image pair \\(\\{\\mathcal{I}_i^{LR},\\mathcal{I}_i^{HR}\\}\\) and the corresponding binary mask \\(M\\), the augmentation process is explained below." + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.198, + 0.482, + 0.276 + ], + "angle": 0, + "content": "ADD. We first adopt a mixed strategy to generate augmented input images to enable the model to learn richer and more complex degradation patterns. 
We cut the patch and mix it with the patch from another LR-HR image pair \(\{\mathcal{I}_j^{LR},\mathcal{I}_j^{HR}\}\) to generate new training samples:" + }, + { + "type": "equation", + "bbox": [ + 0.114, + 0.286, + 0.483, + 0.313 + ], + "angle": 0, + "content": "\[\nP_{mix}^{LR} = \lambda \times M \odot \mathcal{I}_{i}^{LR} + (1 - \lambda) \times M \odot \mathcal{I}_{j}^{LR}, \tag{9}\n\]" + }, + { + "type": "equation", + "bbox": [ + 0.114, + 0.309, + 0.483, + 0.328 + ], + "angle": 0, + "content": "\[\nP_{mix}^{HR} = \lambda \times M \odot \mathcal{I}_{i}^{HR} + (1 - \lambda) \times M \odot \mathcal{I}_{j}^{HR},\n\]" + }, + { + "type": "equation", + "bbox": [ + 0.17, + 0.34, + 0.483, + 0.368 + ], + "angle": 0, + "content": "\[\n\mathcal{I}_{i}^{LR} = P_{mix}^{LR} + (1 - M) \odot \mathcal{I}_{j}^{LR}, \tag{10}\n\]" + }, + { + "type": "equation", + "bbox": [ + 0.172, + 0.362, + 0.4, + 0.382 + ], + "angle": 0, + "content": "\[\n\mathcal{I}_{i}^{HR} = P_{mix}^{HR} + (1 - M) \odot \mathcal{I}_{j}^{HR}.\n\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.389, + 0.483, + 0.467 + ], + "angle": 0, + "content": "Then we adopt an intensity strategy to make the model learn how and where to augment the input images. We cut the patch \( P_{i}^{LR} = M \odot \mathcal{I}_{i}^{LR} \) and upsample it by scale \( s \) with a bicubic kernel to get \( P_{i}^{LR(s \times \uparrow)} \). The HR patch can be generated similarly, and we obtain the augmented samples as:" + }, + { + "type": "equation", + "bbox": [ + 0.126, + 0.491, + 0.483, + 0.519 + ], + "angle": 0, + "content": "\[\n\hat{\mathcal{I}}_{i}^{LR \rightarrow HR} = P_{i}^{LR(s \times \uparrow)} + (\mathbf{1} - M) \odot \mathcal{I}_{i}^{HR}, \tag{11}\n\]" + }, + { + "type": "equation", + "bbox": [ + 0.128, + 0.515, + 0.419, + 0.534 + ], + "angle": 0, + "content": "\[\n\hat{\mathcal{I}}_{i}^{HR \rightarrow LR} = P_{i}^{HR(s \times \downarrow)} + (\mathbf{1} - M) \odot \mathcal{I}_{i}^{LR}.\n\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.546, + 0.483, + 0.624 + ], + "angle": 0, + "content": "\(\mathbf{ADD}+\). To further validate the efficacy of saliency in mixed augmentation and push the performance limits, we propose an enhanced version of the method. We additionally generate a pair of new training samples with another LR-HR image pair \(\{\mathcal{I}_j^{LR},\mathcal{I}_j^{HR}\}\):" + }, + { + "type": "equation", + "bbox": [ + 0.153, + 0.635, + 0.483, + 0.662 + ], + "angle": 0, + "content": "\[\n\mathcal{I}_{i}^{LR} = M \odot \mathcal{I}_{i}^{LR} + (1 - M) \odot \mathcal{I}_{j}^{LR}, \tag{12}\n\]" + }, + { + "type": "equation", + "bbox": [ + 0.155, + 0.657, + 0.419, + 0.676 + ], + "angle": 0, + "content": "\[\n\mathcal{I}_{i}^{HR} = M \odot \mathcal{I}_{i}^{HR} + (1 - M) \odot \mathcal{I}_{j}^{HR},\n\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.687, + 0.483, + 0.779 + ], + "angle": 0, + "content": "where \(\odot\) denotes the element-wise Hadamard product. Following previous work [33], in each training iteration of \(\mathrm{ADD + }\), each of the above augmentation methods, as well as traditional augmentations (e.g., color, channel), is applied with probability \(p\) to augment the input image."
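Operationally, Eqs. (8)-(12) reduce to a few masked blends. The sketch below is a minimal reading under stated assumptions: the mask has shape (N, 1, H, W), and the proportion `p`, mixing ratio `lam`, and scale `s = 4` are illustrative defaults rather than the paper's settings.

```python
import torch
import torch.nn.functional as F

def top_p_mask(saliency, p=0.3):
    # Eq. (8): 1 for the top-p most salient pixels, 0 elsewhere.
    t_p = saliency.flatten().quantile(1.0 - p)
    return (saliency >= t_p).float()

def add_augment(lr_i, hr_i, lr_j, hr_j, m, lam=0.5, s=4):
    # Upscale the LR-sized mask for the HR pair (nearest keeps it binary).
    m_hr = F.interpolate(m, scale_factor=s, mode="nearest")
    # Eqs. (9)-(10): mix the salient patches and paste onto pair j.
    lr = lam * m * lr_i + (1 - lam) * m * lr_j + (1 - m) * lr_j
    hr = lam * m_hr * hr_i + (1 - lam) * m_hr * hr_j + (1 - m_hr) * hr_j
    # Eq. (11), HR->LR form: a bicubically downsampled HR patch is
    # pasted into the LR image (a degradation-free salient region).
    hr_down = F.interpolate(hr_i, scale_factor=1 / s, mode="bicubic",
                            align_corners=False)
    lr_cut = m * hr_down + (1 - m) * lr_i
    return lr, hr, lr_cut

def add_plus(lr_i, hr_i, lr_j, hr_j, m, s=4):
    # Eq. (12): plain salient-patch cut-and-paste between the two pairs.
    m_hr = F.interpolate(m, scale_factor=s, mode="nearest")
    return m * lr_i + (1 - m) * lr_j, m_hr * hr_i + (1 - m_hr) * hr_j
```

In training, each of these outputs, together with the traditional color/channel transforms mentioned above, would be drawn with probability \(p\) per iteration.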
+ }, + { + "type": "title", + "bbox": [ + 0.091, + 0.788, + 0.216, + 0.803 + ], + "angle": 0, + "content": "3.5. Discussions" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.81, + 0.483, + 0.901 + ], + "angle": 0, + "content": "Incorporate saliency into vanilla DA. Incorporating saliency into existing vanilla DA methods revolves around two key aspects: patch cutting and pasting. Consequently, two fundamental questions arise: (1) What manner should be used for segmenting the source image? (2) Where should the cut patches be pasted? To address these questions, we" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.092, + 0.905, + 0.198 + ], + "angle": 0, + "content": "conduct a comprehensive analysis and reveal the key principle: a wider spectrum of degradation patterns. Specifically, it includes: (1) continuous boundaries rather than abrupt boundaries, and (2) diverse augmented images over single augmented images. The results and analysis of saliency in DA methods are presented in Tab. 1 and further elaborated in the experiments (see Sec. 4.4)." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.211, + 0.905, + 0.295 + ], + "angle": 0, + "content": "Table 1. Quantitative PSNR comparison of various saliency incorporation methods in super-resolution. \\( Patch_{n \\times n} \\) denotes image division into \\( n \\times n \\) patches. \\( X\\mathcal{Q}Y \\) indicates cutting a patch from area \\( X \\) and pasting it onto region \\( Y \\) in another image, where 'Sa' stands for 'Saliency', 'Non' for 'Non-saliency', 'Ce' for 'Center', and 'Cor' for 'Corresponding'." + }, + { + "type": "table", + "bbox": [ + 0.518, + 0.306, + 0.922, + 0.521 + ], + "angle": 0, + "content": "
MethodScaleTypes of Saliency UtilizationDIV2KRealSR
PSNRΔPSNRΔ
EDSR×4baseline29.21-28.86-
Patch1×1×4Granularity29.28+0.0729.10+0.24
Patch2×2×4Granularity29.26+0.0529.07+0.21
Patch3×3×4Granularity29.24+0.0329.02+0.16
Patch4×4×4Granularity29.22+0.0128.91+0.05
Patch5×5×4Granularity29.14-0.0728.75-0.11
Patch6×6×4Granularity29.07-0.1428.61-0.25
Patch7×7×4Granularity28.92-0.2928.45-0.41
Sa2Cor×4Diversity29.27+0.0629.11+0.25
Ce2Ce×4Diversity29.26+0.0529.08+0.22
Sa2Sa×4Diversity29.21+0.0028.89+0.03
Sa2Non×4Diversity29.24+0.0328.97+0.11
Non2Sa×4Diversity29.19-0.0228.79-0.07
Non2Non×4Diversity29.16-0.0528.76-0.10
ADD+×4Granularity&Diversity29.32+0.1129.14+0.28
" + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.55, + 0.646, + 0.567 + ], + "angle": 0, + "content": "4. Experiments" + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.575, + 0.654, + 0.589 + ], + "angle": 0, + "content": "4.1. Preliminaries" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.597, + 0.905, + 0.702 + ], + "angle": 0, + "content": "Network structures. We adopt several advanced and typical SR networks to verify the effectiveness and compatibility of our ADD and the proposed three new DA strategies. We consider CNN-based methods: RCAN [38], and EDSR [17], in which well-designed CNN-based structures are proven effective on SR tasks. The transformer-based method, SwinIR [16], is also adopted in our experiments." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.703, + 0.906, + 0.839 + ], + "angle": 0, + "content": "Datasets and implementation details. We use the DIV2K and RealSR datasets for training. We select ten images (index 0801-0810) from the DIV2K validation set for validation during training. For evaluation, we use six benchmark datasets, including Set5, Set14, BSD100, Urban100, Manga109, and test sets of RealSR. We keep the hyperparameters (e.g., learning rate, batch size) the same as reported in the original paper. All experiments are conducted using PyTorch on NVIDIA V100 GPUs." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.849, + 0.863, + 0.865 + ], + "angle": 0, + "content": "4.2. Comparison of Interpretation Capability" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.871, + 0.906, + 0.901 + ], + "angle": 0, + "content": "We conduct visualization experiments to evaluate the effectiveness of the proposed CAM. As shown in the left part of" + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.518, + 0.957 + ], + "angle": 0, + "content": "23105" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.097, + 0.093, + 0.498, + 0.258 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.498, + 0.099, + 0.699, + 0.253 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.699, + 0.101, + 0.9, + 0.253 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.097, + 0.262, + 0.498, + 0.415 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.498, + 0.261, + 0.699, + 0.412 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.699, + 0.261, + 0.9, + 0.412 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.418, + 0.907, + 0.46 + ], + "angle": 0, + "content": "Figure 3. Saliency maps (Left) and Insertion/Deletion curves (Right) on DIV2K etc. super-resolution sets. The blue arrow indicates that the proposed CAM accurately reflects important and intuitive content without being affected by background noise, while the red arrow indicates that LAM is affected by background noise and produces undesired attribution results." + }, + { + "type": "table_caption", + "bbox": [ + 0.09, + 0.466, + 0.483, + 0.507 + ], + "angle": 0, + "content": "Table 2. Quantitative PSNR (dB) comparison of our ADD and existing DA methods in SR on DIV2K and RealSR datasets. \\(\\Delta\\) denotes the performance gap." + }, + { + "type": "table", + "bbox": [ + 0.095, + 0.513, + 0.498, + 0.753 + ], + "angle": 0, + "content": "
MethodScaleTraining SetDIV2KTraining SetRealSR
PSNRΔPSNRΔ
RCAN×4DIV2K29.22-RealSR29.20-
+ CutMix×4DIV2K29.24+0.02RealSR29.25+0.05
+ CutMixup×4DIV2K29.28+0.06RealSR29.30+0.10
+ CutBlur×4DIV2K29.25+0.03RealSR29.29+0.09
+ ADD×4DIV2K29.32+0.10RealSR29.34+0.14
+ ADD+×4DIV2K29.36+0.14RealSR29.46+0.26
EDSR×4DIV2K29.21-RealSR28.86-
+ CutMix×4DIV2K29.22+0.01RealSR28.90+0.04
+ CutMixup×4DIV2K29.26+0.05RealSR28.97+0.11
+ CutBlur×4DIV2K29.25+0.04RealSR28.94+0.08
+ ADD×4DIV2K29.30+0.09RealSR29.01+0.15
+ ADD+×4DIV2K29.32+0.11RealSR29.14+0.28
SwinIR×4DIV2K29.40-RealSR29.26-
+ CutMix×4DIV2K29.40+0.00RealSR29.29+0.03
+ CutMixup×4DIV2K29.43+0.03RealSR29.34+0.08
+ CutBlur×4DIV2K29.43+0.03RealSR29.32+0.06
+ ADD×4DIV2K29.46+0.06RealSR29.37+0.11
+ ADD+×4DIV2K29.48+0.08RealSR29.43+0.17
" + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.755, + 0.483, + 0.815 + ], + "angle": 0, + "content": "Fig. 3, the blue arrow highlights important regions, while the red arrow points to areas considered background noise. CAM accurately identifies relevant information, demonstrating robustness to background noise, unlike LAM." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.825, + 0.484, + 0.903 + ], + "angle": 0, + "content": "Following established protocols [30, 37], we perform Insertion and Deletion tests, as shown in the right part of Fig. 3. In the Insertion test, a progressively increasing fraction \\((3.6\\%)\\) of pixels from the high-resolution (HR) image is inserted into the super-resolved image, guided by the pixel" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.48, + 0.907, + 0.602 + ], + "angle": 0, + "content": "importance values in the attribution map, until the reconstructed image closely matches the HR image. In the Deletion test, \\(3.6\\%\\) of the pixels in the HR image, starting from those with the highest attribution map values, are progressively replaced with black pixels until the entire image is replaced. The Insertion and Deletion curves provide further evidence that CAM more effectively captures the network's critical information compared to LAM." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.624, + 0.857, + 0.639 + ], + "angle": 0, + "content": "4.3. Results on Various Models and Datasets" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.644, + 0.907, + 0.901 + ], + "angle": 0, + "content": "We conduct quantitative comparisons between the proposed ADD method and existing vanilla DA approaches across classical benchmark datasets, namely DIV2K and RealSR, as detailed in Tab. 2. The results demonstrate that both ADD and \\( \\mathrm{ADD + } \\) consistently outperform vanilla DA methods on both synthetic (DIV2K) and real-world (RealSR) datasets, with performance improvements reaching up to \\( 0.28\\mathrm{dB} \\). Further comparisons between \\( \\mathrm{ADD + } \\) and baseline models on the Set5, Set14, Manga109, Urban100, and BSD100 datasets, presented in Tab. 3, show that networks trained with \\( \\mathrm{ADD + } \\) consistently achieve superior reconstruction performance. Qualitative results, depicted in Fig. 4, reveal that networks trained with \\( \\mathrm{ADD + } \\) exhibit enhanced visual quality compared to their baseline counterparts, capturing finer details. Notably, in the area of stripes on the building in Fig. 4 (img_012), \\( \\mathrm{ADD + } \\) yields more accurate and sharper details than the baselines." + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.521, + 0.958 + ], + "angle": 0, + "content": "23106" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.193, + 0.077, + 0.804, + 0.092 + ], + "angle": 0, + "content": "Table 3. Quantitative comparison with baseline methods in SR and the \\( \\Delta \\) denotes the performance gap." + }, + { + "type": "table", + "bbox": [ + 0.095, + 0.097, + 0.904, + 0.335 + ], + "angle": 0, + "content": "
MethodScaleTraining DatasetSet5Set14Manga109Urban100BSD100
PSNRΔPSNRΔPSNRΔPSNRΔPSNRΔ
RCAN×2DIV2K38.23-34.11-39.41-33.29-32.37-
+ ADD+×2DIV2K38.34+0.1134.21+0.1039.55+0.1433.55+0.2632.44+0.07
EDSR×2DIV2K38.11-33.92-39.10-32.93-32.32-
+ ADD+×2DIV2K38.20+0.0934.04+0.1239.27+0.1733.16+0.2332.43+0.11
SwinIR×2DIV2K38.31-34.41-39.89-33.75-32.46-
+ ADD+×2DIV2K38.43+0.1234.49+0.0839.99+0.1033.90+0.1532.51+0.05
RCAN×4DIV2K32.58-28.84-31.22-26.75-27.74-
+ ADD+×4DIV2K32.64+0.0628.91+0.0731.43+0.2126.92+0.1727.79+0.05
EDSR×4DIV2K32.43-28.76-31.04-26.63-27.66-
+ ADD+×4DIV2K32.51+0.0828.88+0.1231.24+0.2026.81+0.1827.77+0.11
SwinIR×4DIV2K32.63-28.92-31.54-27.01-27.82-
+ ADD+×4DIV2K32.71+0.0828.99+0.0731.60+0.0627.13+0.1227.85+0.03
" + }, + { + "type": "image", + "bbox": [ + 0.099, + 0.343, + 0.219, + 0.458 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.1, + 0.459, + 0.217, + 0.471 + ], + "angle": 0, + "content": "Urban100(x4): img_092" + }, + { + "type": "image", + "bbox": [ + 0.227, + 0.342, + 0.288, + 0.388 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.245, + 0.389, + 0.266, + 0.398 + ], + "angle": 0, + "content": "HR" + }, + { + "type": "image", + "bbox": [ + 0.227, + 0.4, + 0.287, + 0.444 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.235, + 0.445, + 0.279, + 0.455 + ], + "angle": 0, + "content": "Bicubic" + }, + { + "type": "image", + "bbox": [ + 0.293, + 0.342, + 0.352, + 0.388 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.302, + 0.389, + 0.341, + 0.397 + ], + "angle": 0, + "content": "RCAN" + }, + { + "type": "image", + "bbox": [ + 0.293, + 0.4, + 0.352, + 0.444 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.301, + 0.445, + 0.34, + 0.455 + ], + "angle": 0, + "content": "RCAN" + }, + { + "type": "image_caption", + "bbox": [ + 0.29, + 0.457, + 0.483, + 0.47 + ], + "angle": 0, + "content": "+proposed +proposed +proposed" + }, + { + "type": "image", + "bbox": [ + 0.358, + 0.342, + 0.416, + 0.388 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.368, + 0.389, + 0.405, + 0.397 + ], + "angle": 0, + "content": "EDSR" + }, + { + "type": "image", + "bbox": [ + 0.358, + 0.4, + 0.416, + 0.444 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.367, + 0.445, + 0.404, + 0.454 + ], + "angle": 0, + "content": "EDSR" + }, + { + "type": "image_caption", + "bbox": [ + 0.357, + 0.457, + 0.483, + 0.47 + ], + "angle": 0, + "content": "+proposed +proposed" + }, + { + "type": "image", + "bbox": [ + 0.513, + 0.342, + 0.631, + 0.457 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.518, + 0.458, + 0.636, + 0.471 + ], + "angle": 0, + "content": "Urban100(x4): img_095" + }, + { + "type": "image", + "bbox": [ + 0.641, + 0.342, + 0.701, + 0.388 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.661, + 0.389, + 0.679, + 0.397 + ], + "angle": 0, + "content": "HR" + }, + { + "type": "image", + "bbox": [ + 0.642, + 0.399, + 0.7, + 0.455 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.706, + 0.342, + 0.766, + 0.388 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.715, + 0.389, + 0.756, + 0.397 + ], + "angle": 0, + "content": "RCAN" + }, + { + "type": "image", + "bbox": [ + 0.706, + 0.399, + 0.766, + 0.444 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.715, + 0.445, + 0.756, + 0.454 + ], + "angle": 0, + "content": "RCAN" + }, + { + "type": "image_caption", + "bbox": [ + 0.706, + 0.457, + 0.897, + 0.469 + ], + "angle": 0, + "content": "+proposed +proposed +proposed" + }, + { + "type": "image", + "bbox": [ + 0.771, + 0.342, + 0.829, + 0.388 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.785, + 0.389, + 0.816, + 0.397 + ], + "angle": 0, + "content": "EDSR" + }, + { + "type": "image", + "bbox": [ + 0.771, + 0.399, + 0.829, + 0.444 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.784, + 0.445, + 
0.817, + 0.454 + ], + "angle": 0, + "content": "EDSR" + }, + { + "type": "image", + "bbox": [ + 0.835, + 0.342, + 0.894, + 0.388 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.845, + 0.389, + 0.885, + 0.397 + ], + "angle": 0, + "content": "SwinIR" + }, + { + "type": "image", + "bbox": [ + 0.835, + 0.399, + 0.893, + 0.444 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.845, + 0.445, + 0.886, + 0.454 + ], + "angle": 0, + "content": "SwinIR" + }, + { + "type": "image", + "bbox": [ + 0.101, + 0.488, + 0.217, + 0.601 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.123, + 0.602, + 0.197, + 0.611 + ], + "angle": 0, + "content": "Manga109(x4):" + }, + { + "type": "image_caption", + "bbox": [ + 0.111, + 0.612, + 0.21, + 0.622 + ], + "angle": 0, + "content": "PrayerHaNemurenai" + }, + { + "type": "image", + "bbox": [ + 0.225, + 0.488, + 0.286, + 0.534 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.243, + 0.535, + 0.264, + 0.544 + ], + "angle": 0, + "content": "HR" + }, + { + "type": "image", + "bbox": [ + 0.226, + 0.545, + 0.286, + 0.59 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.234, + 0.591, + 0.276, + 0.6 + ], + "angle": 0, + "content": "Bicubic" + }, + { + "type": "image", + "bbox": [ + 0.292, + 0.488, + 0.352, + 0.534 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.301, + 0.535, + 0.34, + 0.544 + ], + "angle": 0, + "content": "RCAN" + }, + { + "type": "image", + "bbox": [ + 0.292, + 0.545, + 0.352, + 0.59 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.299, + 0.591, + 0.339, + 0.6 + ], + "angle": 0, + "content": "RCAN" + }, + { + "type": "image_caption", + "bbox": [ + 0.288, + 0.602, + 0.481, + 0.615 + ], + "angle": 0, + "content": "+proposed +proposed +proposed" + }, + { + "type": "image", + "bbox": [ + 0.357, + 0.488, + 0.415, + 0.534 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.366, + 0.535, + 0.404, + 0.544 + ], + "angle": 0, + "content": "EDSR" + }, + { + "type": "image", + "bbox": [ + 0.357, + 0.545, + 0.415, + 0.59 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.366, + 0.591, + 0.403, + 0.6 + ], + "angle": 0, + "content": "EDSR" + }, + { + "type": "image_caption", + "bbox": [ + 0.357, + 0.602, + 0.481, + 0.615 + ], + "angle": 0, + "content": "+proposed +proposed" + }, + { + "type": "image", + "bbox": [ + 0.423, + 0.488, + 0.482, + 0.534 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.426, + 0.535, + 0.47, + 0.544 + ], + "angle": 0, + "content": "SwinIR" + }, + { + "type": "image", + "bbox": [ + 0.423, + 0.545, + 0.482, + 0.59 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.427, + 0.591, + 0.47, + 0.6 + ], + "angle": 0, + "content": "SwinIR" + }, + { + "type": "image", + "bbox": [ + 0.516, + 0.488, + 0.631, + 0.602 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.518, + 0.604, + 0.636, + 0.617 + ], + "angle": 0, + "content": "Urban100(x4): img_012" + }, + { + "type": "image", + "bbox": [ + 0.641, + 0.488, + 0.701, + 0.534 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.661, + 0.534, + 0.678, + 0.542 + ], + "angle": 0, + "content": "HR" + }, + { + "type": 
"image", + "bbox": [ + 0.641, + 0.543, + 0.7, + 0.587 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.647, + 0.589, + 0.693, + 0.598 + ], + "angle": 0, + "content": "Bicubic" + }, + { + "type": "image", + "bbox": [ + 0.706, + 0.488, + 0.764, + 0.533 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.714, + 0.533, + 0.754, + 0.541 + ], + "angle": 0, + "content": "RCAN" + }, + { + "type": "image", + "bbox": [ + 0.706, + 0.543, + 0.764, + 0.588 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.714, + 0.589, + 0.753, + 0.598 + ], + "angle": 0, + "content": "RCAN" + }, + { + "type": "image_caption", + "bbox": [ + 0.703, + 0.6, + 0.896, + 0.612 + ], + "angle": 0, + "content": "+proposed +proposed +proposed" + }, + { + "type": "image", + "bbox": [ + 0.833, + 0.488, + 0.892, + 0.533 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.842, + 0.533, + 0.885, + 0.541 + ], + "angle": 0, + "content": "SwinLR" + }, + { + "type": "image", + "bbox": [ + 0.834, + 0.543, + 0.893, + 0.588 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.843, + 0.589, + 0.886, + 0.597 + ], + "angle": 0, + "content": "SwinIR" + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.627, + 0.907, + 0.656 + ], + "angle": 0, + "content": "Figure 4. Visual comparison on \\(\\times 4\\) SR with the baseline model and proposed method. The patches for comparison are marked with red boxes in the original images. Please zoom in for better visualization." + }, + { + "type": "title", + "bbox": [ + 0.09, + 0.668, + 0.483, + 0.699 + ], + "angle": 0, + "content": "4.4. Guiding Principles for Saliency-based DA methods" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.705, + 0.484, + 0.901 + ], + "angle": 0, + "content": "As discussed in Sec. 3.5, the key questions for incorporating saliency are: (1) the manners for segmenting the source image, and (2) the position for pasting the patches. We define the following settings for a comprehensive analysis. For question (1), we examined the impact of different granularities on DA strategies. We categorized granularity into coarse \\((1\\times 1,2\\times 2)\\), medium \\((3\\times 3,4\\times 4)\\), and fine \\((5\\times 5,6\\times 6,7\\times 7)\\) patches. We observed a decline in network performance with increasing granularity refinement as shown in Tab. 1, indicating that a lack of continuity at the boundary can cause serious boundary effects and subsequently impair performance. 
For question (2), we investigate six schemes for extracting and merging patches from the source image to" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.669, + 0.907, + 0.896 + ], + "angle": 0, + "content": "the target image: (i) Saliency to Corresponding, extracting the most salient region from the source image and merging it with the corresponding region of the target image; (ii) Center to Center, extracting the central region from the source image and merging it with the central region of the target image; (iii) Saliency to Saliency, extracting the most salient region from the source image and merging it with the most salient region of the target image; (iv) Saliency to Non-Saliency, extracting the most salient region from the source image and merging it with the non-salient region of the target image; (v) Non-Saliency to Saliency, extracting the non-salient region from the source image and merging it with the most salient region of the target image; (vi) Non-Saliency to Non-Saliency, extracting the non-salient region from the source image and merging it with the non-salient" + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "23107" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.095, + 0.089, + 0.905, + 0.216 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.089, + 0.23, + 0.908, + 0.3 + ], + "angle": 0, + "content": "Figure 5. Attribution results for the baseline model, vanilla DA (CutMixup), and saliency-based DA (ADDCutMixup). The attribution results showcase the importance of each pixel in the input LR image for reconstructing the marked path. The diffusion index (DI) reflects the range of involved pixels, with a higher DI indicating a broader range of utilized pixels. Two key observations from the attribution and DI results emerge: (1) Vanilla DA methods enhance network performance by involving more pixels. (2) Saliency-based DA methods guide the model to focus more on meaningful details, reducing attention to irrelevant pixels. Please zoom in for better visualization." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.314, + 0.486, + 0.435 + ], + "angle": 0, + "content": "region of the target image. As depicted in Tab. 1, scheme (i) Sa2Cor (with source 'Sa')—which incorporates a broader range of target positions such as 'Sa', 'Non', and others— aligns with a diversity principle and outperforms others. This highlights the importance of the saliency region in the source image and diverse augmented patterns. Based on these findings, we propose the key principle for low-level DA: a wider spectrum of degradation patterns." + }, + { + "type": "title", + "bbox": [ + 0.09, + 0.448, + 0.462, + 0.465 + ], + "angle": 0, + "content": "4.5. What Vanilla And Saliency-Based DA learn" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.471, + 0.483, + 0.652 + ], + "angle": 0, + "content": "We conduct attribution analysis on baseline models, models trained with vanilla DA strategies, and models trained with our proposed saliency-based DA strategies. As depicted in Fig. 5, the model trained with DA methods exhibits a higher diffusion index (DI), indicating a broader range of involved pixels. Our ADD, highlighted by black arrows, focuses on more accurate details. Notably, we observe two key findings: (1) Both vanilla and saliency-based DA methods enhance the model's ability to involve more pixels, leading to improved performance. 
(2) Saliency-based DA directs the model to concentrate on influential pixels rather than indiscriminately incorporating more pixels." + }, + { + "type": "table_caption", + "bbox": [ + 0.09, + 0.656, + 0.483, + 0.684 + ], + "angle": 0, + "content": "Table 4. Gradients comparison with and without global feature detector (GD) on the DIV2K validation dataset." + }, + { + "type": "table", + "bbox": [ + 0.094, + 0.69, + 0.498, + 0.761 + ], + "angle": 0, + "content": "
MethodBackbonestep = 1step = 10step = 20step = 30step = 50
LAMEDSR33.9k108.7k119.9k120.0k120.0k
w/ GDEDSR228.6k419.2k643.8k784.9k869.1k
LAMRCAN29.8k660.0k962.6k969.5k987.3k
w/ GDRCAN174.9k494.1k749.2k815.7k1291.3k
" + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.772, + 0.255, + 0.787 + ], + "angle": 0, + "content": "4.6. Ablation Studies" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.795, + 0.483, + 0.901 + ], + "angle": 0, + "content": "The effectiveness of the CAM was demonstrated in Sec. 4.2. To further evaluate the impact of the proposed global feature detector (GD), we conduct additional experiments using EDSR and RCAN as backbone models, as outlined in Sec. 4.1. In these ablation studies, we substitute the global feature detector with the absolute cumulative value of the entire image. The results, shown in Tab. 4, highlight that the" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.314, + 0.905, + 0.344 + ], + "angle": 0, + "content": "inclusion of GD leads to smoother gradient changes, making uniformly sampled points more effective." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.351, + 0.871, + 0.367 + ], + "angle": 0, + "content": "4.7. Extensions: Other Low-level Vision Tasks" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.373, + 0.907, + 0.538 + ], + "angle": 0, + "content": "We explore the applicability of our method to various low-level vision tasks, specifically examining its effectiveness in JPEG artifact removal. Utilizing CNN-based EDSR and Transformer-based SwinIR as baselines, we train the models from scratch. Following the prior works [33], we create a synthetic dataset with a compression quality parameter \\( q \\) set to 10 (lower \\( q \\) indicating stronger artifacts) for color images. Results in Tab. 5 reveal substantial improvements in PSNR and SSIM metrics, particularly at low compression levels (\\( q \\)), highlighting the versatility of our method in benefiting various low-level vision tasks." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.538, + 0.907, + 0.593 + ], + "angle": 0, + "content": "Table 5. Quantitative comparison of JPEG compression artifact reduction on the LIVE1 dataset. The best results are highlighted, where \\( q \\) denotes the compression level, with a smaller value indicating a higher compression level." + }, + { + "type": "table", + "bbox": [ + 0.518, + 0.6, + 0.922, + 0.703 + ], + "angle": 0, + "content": "
Methodq = 10q = 20q = 30
PSNRSSIMPSNRSSIMPSNRSSIM
EDSR30.140.839131.830.884032.450.8992
+ ADD30.150.839432.310.894933.440.9173
SwinIR29.860.828732.250.890933.690.9174
+ ADD29.860.828532.560.895734.480.9287
" + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.71, + 0.634, + 0.725 + ], + "angle": 0, + "content": "5. Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.735, + 0.907, + 0.901 + ], + "angle": 0, + "content": "In this work, we introduce CAM and ADD, specifically designed for SR. Through a dedicated analysis, we reveal two new insights (i.e., involving more pixels and focusing on influential pixels rather than incorporating irrelevant pixels) for low-level tasks. Besides, We propose the key principle of the wider spectrum of degradation patterns for designing DA in low-level tasks. Experimental results underscore the effectiveness and adaptability of our method, significantly improving the performance of various SR tasks. Our work opens new avenues for exploring a more effective way to utilize image information in DA and low-level tasks." + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "23108" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.092, + 0.091, + 0.251, + 0.108 + ], + "angle": 0, + "content": "Acknowledgments" + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.115, + 0.484, + 0.161 + ], + "angle": 0, + "content": "This work was supported by the Natural Science Foundation of China (Grant No. 62176119), and the Jiangsu Graduate Research Innovation Program (Grant No. KYCX24_0258)." + }, + { + "type": "title", + "bbox": [ + 0.093, + 0.173, + 0.188, + 0.189 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.199, + 0.484, + 0.254 + ], + "angle": 0, + "content": "[1] Naveed Akhtar and Mohammad A. A. K. Jalwana. Towards credible visual model interpretation with path attribution. In Proceedings of the International Conference on Machine Learning, ICML, pages 439-457, 2023. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.256, + 0.484, + 0.325 + ], + "angle": 0, + "content": "[2] Tao Dai, Jianrui Cai, Yongbing Zhang, Shu-Tao Xia, and Lei Zhang. Second-order attention network for single image super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, pages 11065-11074, 2019. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.326, + 0.483, + 0.38 + ], + "angle": 0, + "content": "[3] Xin Deng, Yutong Zhang, Mai Xu, Shuhang Gu, and Yiping Duan. Deep coupled feedback network for joint exposure fusion and image super-resolution. IEEE Trans. Image Process., 30:3098-3112, 2021. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.382, + 0.483, + 0.424 + ], + "angle": 0, + "content": "[4] Terrance Devries and Graham W. Taylor. Improved regularization of convolutional neural networks with cutout. ArXiv preprint, abs/1708.04552, 2017. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.425, + 0.483, + 0.491 + ], + "angle": 0, + "content": "[5] Chao Dong, Chen Change Loy, Kaiming He, and Xiaou Tang. Learning a deep convolutional network for image super-resolution. In Proceedings of the European Conference on Computer Vision, ECCV, pages 184-199, 2014. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.494, + 0.484, + 0.577 + ], + "angle": 0, + "content": "[6] Xiaoyi Dong, Jianmin Bao, Dongdong Chen, Weiming Zhang, Nenghai Yu, Lu Yuan, Dong Chen, and Baining Guo. Cswin transformer: A general vision transformer backbone with cross-shaped windows. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, pages 12114-12124, 2022. 
2" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.579, + 0.484, + 0.675 + ], + "angle": 0, + "content": "[7] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In Proceedings of the International Conference on Learning Representations, ICLR, 2021. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.677, + 0.484, + 0.746 + ], + "angle": 0, + "content": "[8] Ruicheng Feng, Jinjin Gu, Yu Qiao, and Chao Dong. Suppressing model overfitting for image super-resolution networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW, pages 1964-1973, 2019. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.747, + 0.484, + 0.802 + ], + "angle": 0, + "content": "[9] Jinjin Gu and Chao Dong. Interpreting super-resolution networks with local attribution maps. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, pages 9199-9208, 2021. 1, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.804, + 0.483, + 0.87 + ], + "angle": 0, + "content": "[10] Takashi Isobe, Xu Jia, Shuhang Gu, Songjiang Li, Shengjin Wang, and Qi Tian. Video super-resolution with recurrent structure-detail network. In Proceedings of the European Conference on Computer Vision, ECCV, pages 645-660, 2020. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.872, + 0.483, + 0.901 + ], + "angle": 0, + "content": "[11] Andrei Kapishnikov, Subhashini Venugopalan, Besim Avci, Ben Wedin, Michael Terry, and Tolga Bolukbasi. Guided in" + }, + { + "type": "list", + "bbox": [ + 0.094, + 0.199, + 0.484, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.545, + 0.093, + 0.907, + 0.147 + ], + "angle": 0, + "content": "tegrated gradients: An adaptive path method for removing noise. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, pages 5050-5058, 2021. 1, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.149, + 0.906, + 0.217 + ], + "angle": 0, + "content": "[12] Jiwon Kim, Jung Kwon Lee, and Kyoung Mu Lee. Accurate image super-resolution using very deep convolutional networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, pages 1646-1654, 2016. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.218, + 0.906, + 0.273 + ], + "angle": 0, + "content": "[13] Jang-Hyun Kim, Wonho Choo, and Hyun Oh Song. Puzzle mix: Exploiting saliency and local statistics for optimal mixup. In Proceedings of the International Conference on Machine Learning, ICML, pages 5275-5285, 2020. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.274, + 0.906, + 0.328 + ], + "angle": 0, + "content": "[14] Jang-Hyun Kim, Wonho Choo, Hosan Jeong, and Hyun Oh Song. Co-mixup: Saliency guided joint mixup with supermodular diversity. In Proceedings of the International Conference on Learning Representations, ICLR, 2021. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.33, + 0.906, + 0.399 + ], + "angle": 0, + "content": "[15] Wei-Sheng Lai, Jia-Bin Huang, Narendra Ahuja, and Ming-Hsuan Yang. Deep laplacian pyramid networks for fast and accurate super-resolution. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, pages 5835-5843, 2017. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.4, + 0.906, + 0.468 + ], + "angle": 0, + "content": "[16] Jingyun Liang, Jiezhang Cao, Guolei Sun, Kai Zhang, Luc Van Gool, and Radu Timofte. Swinir: Image restoration using swin transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, ICCV, pages 1833-1844, 2021. 1, 2, 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.47, + 0.906, + 0.538 + ], + "angle": 0, + "content": "[17] Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, and Young Mu Lee. Enhanced deep residual networks for single image super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW, pages 1132-1140, 2017. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.539, + 0.906, + 0.607 + ], + "angle": 0, + "content": "[18] Jihao Liu, Boxiao Liu, Hang Zhou, Hongsheng Li, and Yu Liu. Tokenmix: Rethinking image mixing for data augmentation in vision transformers. In Proceedings of the European Conference on Computer Vision, ECCV, pages 455-471, 2022. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.609, + 0.908, + 0.663 + ], + "angle": 0, + "content": "[19] Yiqun Mei, Yuchen Fan, and Yuqian Zhou. Image superresolution with non-local sparse attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, pages 3517-3526, 2021. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.665, + 0.906, + 0.734 + ], + "angle": 0, + "content": "[20] Ze-Yu Mi and Yu-Bin Yang. Cutdem: Depth-aware enhanced multi-view image mixing for light field superresolution. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP, pages 3340-3344. IEEE, 2024. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.735, + 0.906, + 0.803 + ], + "angle": 0, + "content": "[21] Mehdi S. M. Sajjadi, Bernhard Schölkopf, and Michael Hirsch. Enhancenet: Single image super-resolution through automated texture synthesis. In Proceedings of the IEEE/CVF International Conference on Computer Vision, ICCV, pages 4501-4510, 2017. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.804, + 0.906, + 0.859 + ], + "angle": 0, + "content": "[22] Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. Learning important features through propagating activation differences. In Proceedings of the International Conference on Machine Learning, ICML, pages 3145-3153, 2017. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.86, + 0.906, + 0.901 + ], + "angle": 0, + "content": "[23] Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Viégas, and Martin Wattenberg. Smoothgrad: removing noise by adding noise. ArXiv preprint, abs/1706.03825, 2017." + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.093, + 0.908, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.945, + 0.521, + 0.958 + ], + "angle": 0, + "content": "23109" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.093, + 0.482, + 0.146 + ], + "angle": 0, + "content": "[24] Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014. 
3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.149, + 0.482, + 0.216 + ], + "angle": 0, + "content": "[25] Jialu Sui, Xianping Ma, Xiaokang Zhang, and Man-On Pun. GCRDN: global context-driven residual dense network for remote sensing image superresolution. IEEE J. Sel. Top. Appl. Earth Obs. Remote. Sens., 16:4457-4468, 2023. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.219, + 0.482, + 0.272 + ], + "angle": 0, + "content": "[26] Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In Proceedings of the International Conference on Machine Learning, ICML, pages 3319-3328, 2017. 1, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.274, + 0.482, + 0.342 + ], + "angle": 0, + "content": "[27] Radu Timofte, Rasmus Rothe, and Luc Van Gool. Seven ways to improve example-based single image super resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, pages 1865-1873, 2016. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.344, + 0.482, + 0.412 + ], + "angle": 0, + "content": "[28] A. F. M. Shahab Uddin, Mst. Sirazam Monira, Wheemyung Shin, TaeChoong Chung, and Sung-Ho Bae. Saliencymix: A saliency guided data augmentation strategy for better regularization. In Proceedings of the International Conference on Learning Representations, ICLR, 2021. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.414, + 0.482, + 0.495 + ], + "angle": 0, + "content": "[29] Devesh Walawalkar, Zhiqiang Shen, Zechun Liu, and Marios Savvides. Attentive cutmix: An enhanced data augmentation approach for deep learning based image classification. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP, pages 3642-3646, 2020. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.497, + 0.482, + 0.579 + ], + "angle": 0, + "content": "[30] Haofan Wang, Zifan Wang, Mengnan Du, Fan Yang, Zijian Zhang, Sirui Ding, Piotr Mardziel, and Xia Hu. Scorecam: Score-weighted visual explanations for convolutional neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW, pages 111-119, 2020. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.581, + 0.482, + 0.649 + ], + "angle": 0, + "content": "[31] Xintao Wang, Ke Yu, Shixiang Wu, Jinjin Gu, Yihao Liu, Chao Dong, Yu Qiao, and Chen Change Loy. ESRGAN: enhanced super-resolution generative adversarial networks. In Proceedings of the European Conference on Computer Vision Workshops, ECCVW, pages 63-79, 2018. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.651, + 0.482, + 0.718 + ], + "angle": 0, + "content": "[32] Zeyu Xiao, Yutong Liu, Ruisheng Gao, and Zhiwei Xiong. Cutmib: Boosting light field super-resolution via multi-view image blending. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, pages 1672-1682, 2023. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.72, + 0.482, + 0.788 + ], + "angle": 0, + "content": "[33] Jaejun Yoo, Namhyuk Ahn, and Kyung-Ah Sohn. Rethinking data augmentation for image super-resolution: A comprehensive analysis and a new strategy. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, pages 8372-8381, 2020. 
1, 2, 5, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.79, + 0.482, + 0.871 + ], + "angle": 0, + "content": "[34] Sangdoo Yun, Dongyoon Han, Sanghyuk Chun, Seong Joon Oh, Youngjoon Yoo, and Junsuk Choe. Cutmix: Regularization strategy to train strong classifiers with localizable features. In Proceedings of the IEEE/CVF International Conference on Computer Vision, ICCV, pages 6022-6031. IEEE, 2019. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.873, + 0.482, + 0.901 + ], + "angle": 0, + "content": "[35] Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimiza" + }, + { + "type": "list", + "bbox": [ + 0.093, + 0.093, + 0.482, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.545, + 0.093, + 0.905, + 0.12 + ], + "angle": 0, + "content": "tion. In Proceedings of the International Conference on Learning Representations, ICLR, 2018. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.122, + 0.905, + 0.176 + ], + "angle": 0, + "content": "[36] Kai Zhang, Yawei Li, Wangmeng Zuo, Lei Zhang, Luc Van Gool, and Radu Timofte. Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell., 44(10):6360-6376, 2022. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.179, + 0.905, + 0.219 + ], + "angle": 0, + "content": "[37] Qing-Long Zhang, Lu Rao, and Yubin Yang. Group-cam: Group score-weighted visual explanations for deep convolutional networks. ArXiv preprint, abs/2103.13859, 2021. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.221, + 0.905, + 0.289 + ], + "angle": 0, + "content": "[38] Yulun Zhang, Kunpeng Li, Kai Li, Lichen Wang, Bineng Zhong, and Yun Fu. Image super-resolution using very deep residual channel attention networks. In Proceedings of the European Conference on Computer Vision, ECCV, pages 294-310, 2018. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.291, + 0.905, + 0.359 + ], + "angle": 0, + "content": "[39] Yulun Zhang, Yapeng Tian, Yu Kong, Bineng Zhong, and Yun Fu. Residual dense network for image super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, pages 2472-2481, 2018. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.362, + 0.905, + 0.415 + ], + "angle": 0, + "content": "[40] Yulun Zhang, Yapeng Tian, Yu Kong, Bineng Zhong, and Yun Fu. Residual dense network for image restoration. IEEE Trans. Pattern Anal. Mach. Intell., 43(7):2480-2495, 2021. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.418, + 0.905, + 0.474 + ], + "angle": 0, + "content": "[41] Zhun Zhong, Liang Zheng, Guoliang Kang, Shaozi Li, and Yi Yang. Random erasing data augmentation. In Proceedings of the AAAI Conference on Artificial Intelligence, AAAI, pages 13001-13008, 2020. 
2" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.093, + 0.905, + 0.474 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.945, + 0.52, + 0.957 + ], + "angle": 0, + "content": "23110" + } + ] +] \ No newline at end of file diff --git a/2025/ADD_ Attribution-Driven Data Augmentation Framework for Boosting Image Super-Resolution/e29ca394-45e6-4bf5-b8ea-58a74dbe24fa_origin.pdf b/2025/ADD_ Attribution-Driven Data Augmentation Framework for Boosting Image Super-Resolution/e29ca394-45e6-4bf5-b8ea-58a74dbe24fa_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..0e1234e29d2378e40759d9b289eaea74c188c5bc --- /dev/null +++ b/2025/ADD_ Attribution-Driven Data Augmentation Framework for Boosting Image Super-Resolution/e29ca394-45e6-4bf5-b8ea-58a74dbe24fa_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f3df26c69ae14512d82cfe26bdfd573df8a709cbed6e7cc88db4d3f9518f15d2 +size 2848816 diff --git a/2025/ADD_ Attribution-Driven Data Augmentation Framework for Boosting Image Super-Resolution/full.md b/2025/ADD_ Attribution-Driven Data Augmentation Framework for Boosting Image Super-Resolution/full.md new file mode 100644 index 0000000000000000000000000000000000000000..d3198434b6440a0be203ae46ec5d57696a8c8540 --- /dev/null +++ b/2025/ADD_ Attribution-Driven Data Augmentation Framework for Boosting Image Super-Resolution/full.md @@ -0,0 +1,470 @@ +# ADD: Attribution-Driven Data Augmentation Framework for Boosting Image Super-Resolution + +Ze-Yu Mi Yu-Bin Yang* + +State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China + +mizeyu@smail.nju.edu.cn yangyubin@nju.edu.cn + +# Abstract + +Data augmentation (DA) stands out as a powerful technique to enhance the generalization capabilities of deep neural networks across diverse tasks. However, in low-level vision tasks, DA remains rudimentary (i.e., vanilla DA), facing a critical bottleneck due to information loss. In this paper, we introduce a novel Calibrated Attribution Maps (CAM) to generate saliency masks, followed by two saliency-based DA methods—Attribution-Driven Data augmentation (ADD) and $\text{ADD+}$ —designed to address this issue. CAM leverages integrated gradients and incorporates two key innovations: a global feature detector and calibrated integrated gradients. Based on CAM and the proposed methods, we have two new insights for low-level vision tasks: (1) increasing pixel diversity, as seen in vanilla DA, can improve performance, and (2) focusing on salient features while minimizing the impact of irrelevant pixels, as seen in saliency-based DA, more effectively enhances model performance. Additionally, we find and highlight the key guiding principle for designing saliency-based DA: a wider spectrum of degradation patterns. Extensive experiments demonstrate the compatibility and consistency of our method, as well as the significant performance improvement across various SR tasks and networks. Our code is available at https://github.com/mizeyu/ADD. + +# 1. Introduction + +Data augmentation (DA) is a fundamental technique for improving the generalization ability of deep neural networks (DNNs). In high-level vision tasks such as image recognition and semantic segmentation, early vanilla approaches like Mixup [35] and CutMix [34] laid the groundwork. 
Recently, saliency-based DA methods [13, 14, 18, 28, 29] have gained considerable attention, demonstrating superior performance over vanilla counterparts in high-level vision tasks. + +However, applying saliency-based DA techniques directly to low-level vision tasks remains challenging due to a mismatch in task objectives, inconsistent feature requirements, differences in data characteristics, and inappropriate method designs (see Appendix for further details). These challenges have led to the continued use of vanilla DA techniques, such as rotation, flipping [27], Mixup [8], and Cutblur [33], in low-level vision tasks. + +In low-level vision tasks [2, 3, 5-7, 10, 12, 15, 16, 19, 25, 31, 36, 40], the use of vanilla DA methods [8, 20, 27, 32, 33] is hindered by the issue of information loss, which can severely degrade the quality of the reconstructed images. As shown in Fig. 1, vanilla DA techniques often generate images with flat regions and simplistic edges (e.g., a3), which provide minimal support for model learning in tasks like super-resolution (SR), leading to significant loss of crucial information. In contrast, saliency-based DA methods [13, 14, 18, 28, 29] target information-rich regions (e.g., b3), which facilitate better reconstruction performance. The phase spectrum analysis in Fig. 1 (c, d, e) further illustrates that augmented images incorporating saliency information (b3) retain more critical details, alleviating the problem of information loss. This observation motivates the need for new DA techniques that focus on more meaningful regions in low-level tasks. + +Existing saliency methods, however, are often prone to background noise and struggle to accurately capture important features. To address these limitations, we propose a new attribution method, the Calibrated Attribution Maps (CAM), which is capable of identifying important features without being influenced by background noise. CAM builds upon LAM [9] and integrated gradients (IG) [11, 26], introducing two key innovations: a global feature detector and calibrated integrated gradients. Using CAM, we introduce Attribution-Driven Data augmentation (ADD) techniques, and its enhanced version $\mathrm{ADD+}$ , which represent the first attempt to explore saliency in DA from an attribution analysis perspective. + +Through CAM, we derive two key insights for low-level vision tasks: (1) incorporating a broader range of pixels, + +![](images/9cfe4b07aef406f7a6bdb632db291bd7f959082c28a745dd69279a0b10dacd6a.jpg) +a1. Input Image + +![](images/82b42d05c83757c35ee88269e2fc72ef657f1e54b232c8de1aa63008149e0b42.jpg) +a2. Vanilla DA + +![](images/e29336ef1308576ee9111fb70e67a41bc17c845d203b3e55e3335b07c9993406.jpg) +a3. Aug. Image of (a2) + +![](images/a844a318a7ef23c68787b56b64d2d4a4fed7fc5099662a225f6d378a1bff8d22.jpg) +c. Phase Spec. of (a3) + +![](images/c14b315881d6e50a7e1126fb8a88a351dfffc2332a3e779f812b22a3bd62d9fa.jpg) + +![](images/1050ae2b62ee86c0b84c62947421b066973762ce3380424b0b891935a6c81063.jpg) +b1. Saliency Image + +![](images/82d1761302c81b97ff9877bbe53989a67e5674e89bcc3f489a7b05a4fbdefd07.jpg) +b2. Saliency-based DA + +![](images/e372be01e9a42d5c42d638fea638a6ea2afed3aeddd766be787a72305c27a556.jpg) +b3. Aug. Image of (b2) +Figure 1. Motivation: Provide more meaningful information for vanilla DA to address the critical bottleneck of information loss. (a1) Input image. (a2) Vanilla DA randomly selects an area. (a3) Augmented image of vanilla DA. (b1) Saliency map of the input image. 
(b2) Select the maximum saliency region. (b3) Augmented image of saliency-based DA. (c) Phase spectrum of augmented image (a3). (d) Phase spectrum of augmented image (b3). (e) Residual map calculated between (c) and (d). + +![](images/2f586e6390cb193d68c725a516e757c3d711c1c47a297699c96cc4ee831642f5.jpg) +d. Phase Spec. of (b3) + +![](images/3e3804e93aaedbdab3982e38b538a2ea383249b7db05abb0b826828c2dd3d783.jpg) +e. Residual Map of d & c + +as in vanilla DA, can improve model performance, and (2) focusing on more influential pixels—using saliency information—while minimizing the inclusion of irrelevant pixels, as in saliency-based DA, results in more effective enhancement. This insight explains why ADD, which utilizes saliency to prioritize important pixels, alleviates information loss and outperforms traditional vanilla DA methods, which indiscriminately incorporate additional pixels. + +Furthermore, to design effective saliency-based DA methods in low-level vision tasks, we propose the key guiding principle: a wider spectrum of degradation patterns. Specifically, DA methods should: (1) prefer maintaining continuous boundaries to avoid boundary effects, and (2) prioritize diverse augmentation strategies over single augmented images to provide richer learning signals. + +# Contributions: + +- We introduce CAM, a robust attribution method that accurately identifies important features while mitigating background noise, improving saliency map generation in low-level vision tasks. +- We propose ADD, a novel attribution-driven DA method, which overcomes the limitations of vanilla DA and significantly reduces information loss in low-level tasks. +- We provide new insights and guiding principles for saliency-based DA in low-level vision, offering a framework for improving model performance. +- Our extensive experiments validate the effectiveness of our methods, demonstrating significant performance gains across a range of super-resolution tasks. + +# 2. Related Works + +# 2.1. Image Super-Resolution + +Image super-resolution (SR) is a key technology in computer vision [21, 39]. Since SRCNN [5], numerous CNN-based methods have emerged, incorporating residual [12, + +36] and dense blocks [25, 31, 40], as well as attention mechanisms [2, 19]. More recently, transformer-based SR models [6, 7, 16] have achieved state-of-the-art performance. + +# 2.2. Data Augmentation in Vision Tasks + +Traditional data augmentation (DA) methods in high-level vision include geometric and intensity transformations [4, 41], as well as mixed-based techniques [34, 35]. Recently, saliency-based DA has gained popularity, focusing on preserving critical regions [13, 14, 18, 28, 29]. In low-level vision, DA remains largely limited to conventional approaches [27]. Some works explore Mixup [8] and Cutblur [33], with CutMIB extending them to light-field SR [32]. + +# 3. Methods + +In this section, we first discuss the key challenges and motivations for incorporating saliency-based data augmentation (DA) in low-level vision tasks in Sec. 3.1. Then, in Sec. 3.2, we briefly review prior attribution methods, which serve as the foundation for our proposed approach. Next, we introduce the concept of Calibrated Attribution Maps (CAM) in Sec. 3.3, followed by the presentation of saliency-based DA techniques, ADD and $\mathrm{ADD + }$ , in Sec. 3.4. + +# 3.1. Integrating Saliency into Low-Level DA + +Why introduce saliency into low-level vanilla DA? As discussed in Sec. 
1, leveraging saliency provides valuable insights into the most important image features, addressing the problem of information loss. This motivates the integration of saliency information into data augmentation strategies for low-level vision tasks, where preserving critical image details is essential for enhancing performance. + +How to introduce saliency into low-level vanilla DA? Traditional saliency methods, such as LAM and integrated gradient (IG), are limited by the challenge of distinguishing + +relevant features from irrelevant background noise. To overcome this limitation, we build upon LAM and IG and introduce a novel approach, CAM, which incorporates two new components: a global feature detector and a calibrated integrated gradient. These innovations, detailed in Sec. 3.3, effectively address the noise issue and improve the accuracy of saliency estimation in low-level tasks. + +# 3.2. Overview of Vanilla Attribution Methods + +Before presenting our method, we first provide a brief overview of existing attribution methods that form the foundation for our approach [1, 11, 22-24]. Let $\mathcal{I} \in \mathbb{R}^d$ be the input image, and let $\mathcal{C}: \mathbb{R}^d \mapsto \mathbb{R}$ represent a classification network. Gradient-based methods, such as Integrated Gradients (IG), quantify the impact of changes in the input dimensions by computing the gradient of the output with respect to the input image: + +$$ +\mathrm {I G} _ {\mathcal {C}} \mathcal {I} = (\mathcal {I} - \mathcal {I} ^ {\prime}) \int_ {0} ^ {1} \frac {\partial \mathcal {C} (\mathcal {I} ^ {\prime} + \alpha (\mathcal {I} - \mathcal {I} ^ {\prime}))}{\partial \mathcal {I}} d \alpha , (1) +$$ + +where $\mathcal{I}'$ is a baseline image (often a blank image in high-level tasks) and $\alpha$ is a continuous parameter that interpolates between the baseline and the target input. + +In the context of image SR, Local Attribution Maps (LAM) convert the original baseline image $\mathcal{I}'$ to a blurred version $\mathcal{I}' = \omega(\sigma) \otimes \mathcal{I}$ , where $\omega(\sigma)$ is a Gaussian blur kernel with width $\sigma$ , and $\otimes$ represents convolution. Besides, LAM adapts IG for SR tasks by using a gradient detection method $D$ that focuses on local feature detection in SR networks. However, LAM suffers from the limitation of irrelevant gradient accumulations, which can easily lead to a focus on irrelevant areas. To address this issue, we introduce CAM to eliminate irrelevant area interference. + +# 3.3. Calibrated Attribution Maps (CAM) + +In this section, we introduce the concept of Calibrated Attribution Maps (CAM) as shown in Fig. 2 (a), which is inspired by the IG [26] and LAM [9]. The goal of CAM is to provide a more accurate and reliable estimation of feature importance, particularly in the context of image SR tasks. + +Global Feature Detector (GD). Given an input image pair $\mathcal{I}^{LR}$ (low resolution) and $\mathcal{I}^{HR}$ (high resolution), we aim to learn a mapping function $\mathcal{F}$ that produces the super-resolved image $\mathcal{I}^{SR}$ . Traditional attribution methods such as LAM detect local pixel-wise gradients, which can easily lead to saturation effects, as shown in Fig. 2 (d). To alleviate this problem, we introduce a Global Feature Detector (GD), which aims to capture global features in the image by applying convolutional filters such as the Sobel filter. This approach smoothes the detected gradients and enhances the robustness of saliency maps. 
To achieve a more robust global feature representation, the GD operation is defined + +as: + +$$ +\boldsymbol {G} \boldsymbol {D} (\mathcal {I} ^ {S R}) = \| \boldsymbol {S o b e l} _ {x y} (\mathcal {I} ^ {S R}) \| _ {2}, \tag {2} +$$ + +where $Sobel_{xy}$ denotes the Sobel filter applied in both the $x$ and $y$ directions to capture edge features in the image. This approach smooths the gradients and reduces the saturation problem, as demonstrated in Fig. 2 (d) & (g). + +To analyze the attributes of the SR network, given the current input image $\mathcal{I}$ , a baseline image $\mathcal{I}'$ satisfies that $\mathcal{F}(\mathcal{I}')$ absent certain features existed in $\mathcal{F}(\mathcal{I})$ is also needed. Accordingly, feature scalar $GD(\mathcal{F}(\mathcal{I}))$ will show significant numerical advantage over $GD(\mathcal{F}(\mathcal{I}'))$ . We calculate the path-integrated gradient along the gradually changing path from $\mathcal{I}'$ to $\mathcal{I}$ and obtain the attribution map for $GD(\mathcal{F}(\mathcal{I}))$ . Then, the $i$ th dimension of the calibrated attribution maps is defined as follows: + +$$ +\phi_ {i} ^ {C A M} (\mathcal {F}, \boldsymbol {G D}) = \left(\mathcal {I} _ {i} - \mathcal {I} _ {i} ^ {\prime}\right) \int_ {a = 0} ^ {1} \frac {\partial \boldsymbol {G D} \left(\boldsymbol {F} \left(\mathcal {I} ^ {\prime} + \alpha \left(\mathcal {I} - \mathcal {I} ^ {\prime}\right)\right)\right)}{\partial \mathcal {I} _ {i}} d \alpha . \tag {3} +$$ + +Calibrated Path Integrated Gradient (CPIG). We now introduce the Calibrated Path Integrated Gradient (CPIG), which can efficiently and effectively analyze global attribution. In SR tasks, the high-frequency components (e.g., edges and textures) contribute much more than the low-frequency components (e.g., color and brightness) to the network performance. In this work, we obtain baseline inputs by eliminating high-frequency components, setting them as the blurred version of LR images denoted as $\mathcal{I}' = \omega(\sigma) \otimes \mathcal{I}$ . Here, $\omega(\sigma)$ represents the Gaussian blur kernel parameterized by the kernel width $\sigma$ and $\otimes$ is the convolution operation. Following previous works, we construct a smooth transformation from $\mathcal{I}'$ to $\mathcal{I}$ , which is expressed as $\gamma(a) = \omega(\sigma - \alpha\sigma) \otimes \mathcal{I}$ . Accordingly, we have $\gamma(0) = \mathcal{I}'$ and $\gamma(1) = \mathcal{I}$ . The gradients at points are sampled in $k$ steps along the path and the gradient of the $i$ -th step is: + +$$ +\phi_ {i} ^ {C A M} (\mathcal {F}, \boldsymbol {G D}, \gamma) = \tag {4} +$$ + +$$ +\left(\gamma \left(\frac {i}{k}\right) - \gamma \left(\frac {i + 1}{k}\right)\right) \times \frac {\partial G D \left(\mathcal {F} \left(\gamma \left(\frac {i}{k}\right)\right)\right)}{\partial \gamma \left(\frac {i}{k}\right)} d \alpha . +$$ + +To improve the stability and accuracy of the path-integrated gradients, we introduce a calibration step that limits the deviation at each step and ensures that the gradients focus on the most relevant features of the image. This is achieved by limiting the range of each step to fluctuate around a central value, $a_{\mathrm{min}} = \max (a - d,0.0)$ and $a_{\mathrm{max}} = \min (a + d,1.0)$ . 
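To make the two building blocks above concrete, here is a minimal PyTorch sketch of the Sobel-based global feature detector of Eq. (2) and the blur path $\gamma(\alpha) = \omega(\sigma - \alpha\sigma) \otimes \mathcal{I}$; the function names and the default `sigma`/`ksize` values are our illustrative choices, not the released implementation.

```python
import torch
import torch.nn.functional as F
from torchvision.transforms.functional import gaussian_blur

def global_feature_detector(img: torch.Tensor) -> torch.Tensor:
    """GD(I) = ||Sobel_xy(I)||_2: a single scalar summarizing global edge energy.

    img: (1, C, H, W) tensor; the Sobel filters run depthwise, one per channel.
    """
    sx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    sy = sx.transpose(2, 3)                      # Sobel in the y direction
    c = img.shape[1]
    gx = F.conv2d(img, sx.repeat(c, 1, 1, 1), padding=1, groups=c)
    gy = F.conv2d(img, sy.repeat(c, 1, 1, 1), padding=1, groups=c)
    return torch.sqrt(gx.pow(2) + gy.pow(2) + 1e-12).norm(p=2)

def make_blur_path(img: torch.Tensor, sigma: float = 4.0, ksize: int = 21):
    """Return gamma(alpha) = omega(sigma - alpha * sigma) (*) I, so that
    gamma(0) is the blurred baseline I' and gamma(1) is the input I."""
    def gamma(alpha: float) -> torch.Tensor:
        s = sigma * (1.0 - alpha)
        if s <= 1e-6:                            # no blur left at the path's end
            return img
        return gaussian_blur(img, [ksize, ksize], [s, s])
    return gamma
```

Reducing the SR output to one smooth scalar before differentiating is what spreads gradient mass over the whole image and avoids the per-pixel saturation discussed above.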
+ +Attribution analysis progressively adjusts the value of each pixel to make the interpolated image $\mathcal{I}'$ gradually approach the target image $\mathcal{I}$ , and the overall loss function can be defined as: $\mathcal{L}_{OA} = \| \mathcal{I}' - \mathcal{I}\|_1$ . Correspondingly, the target loss function for approximating the target image with the interpolation image in step $i$ is: $\mathcal{L}_{TG} = \| \mathcal{I}' - \mathcal{I}\|_1 \times (1 - \frac{i}{k})$ . Then, the difference between the actual current interpolated image and the target image can be represented by the current loss: $\mathcal{L}_{CU} = \| \gamma (\frac{i}{k}) - \mathcal{I}\|_1$ . + +![](images/936fbdbf84c83a2257bd1936f5f8fbe9399dc7195c449c4c14210c65bec2dda8.jpg) +Figure 2. The illustration of CAM, the ADD framework, and the CAM-LAM comparison. (a) Illustration of CAM. (b) Process of ADD and $\mathrm{ADD + }$ . (c) Comparison of global attribution analysis between CAM and LAM, with red boxes highlighting regions where gradients are presented after global attribution analysis. + +To further prevent instability caused by large gradient changes and to maintain the smoothness of the path, we select only those pixels with gradient magnitudes below a predefined threshold $T_{f}$ for updating: + +$$ +T _ {f} = \operatorname {s o r t e d} \left(\left| \phi_ {i} ^ {C A M} \right|\right) _ {\left[ p _ {f} \cdot \text {n u m . p i x e l s} \right]} ^ {\min }, +$$ + +$$ +M _ {f} = \left\{ \begin{array}{l l} 1, & \text {i f} | \phi_ {i} ^ {C A M} | \leq T _ {f}, \\ 0, & \text {o t h e r w i s e}, \end{array} \right. \tag {5} +$$ + +where $M_{f}$ is a binary mask representing the pixels that need to be calibrated. Based on the difference between the current loss $\mathcal{L}_{CU}$ and the target loss $\mathcal{L}_{TG}$ , combined with the mask threshold loss function $\mathcal{L}_{MF}$ between the pixels to be corrected and the target image, we generate calibration factor $\delta$ to control the step size of each update: + +$$ +\mathcal {L} _ {M F} = \left\| M _ {f} \odot (\gamma (a) - \gamma \left(a _ {\max }\right)) \right\| _ {1}, +$$ + +$$ +\delta = \frac {\mathcal {L} _ {C U} - \mathcal {L} _ {T G}}{\mathcal {L} _ {M F}}. \tag {6} +$$ + +With the calibration factor $\delta$ , the calibrated interpolation image $\gamma_{c}(a)$ can be formulated as: $\gamma_{c}(a) = \gamma (a + \delta \times (a_{\mathrm{max}} - a))$ . Accordingly, the calibrated gradient $\psi_i^{CAM}$ in step $i$ can be updated by the calibrated interpolation image $\gamma_{c}(a)$ and represented as: $\psi_i^{CAM} = \phi_i^{CAM} + (\gamma_c(a) - \gamma (a))\times$ $\phi_i^{CAM}$ . Finally, we obtain the approximate integrated gra + +dient by summing up the calibrated gradients of $k$ steps: + +$$ +\mathcal {I} _ {s} ^ {L R} = \sum_ {i = 0} ^ {k} \psi_ {i} ^ {C A M}. \tag {7} +$$ + +As shown in the Fig. 2 (f) and (i), our calibrated attribution maps focus on the most important edge texture information compared to LAM, and are not affected by the noise of flat areas and background. + +# 3.4. Attribution-Driven Data Augmentation (ADD) + +In this section, as depicted in Fig. 2 (b), we introduce the specific process of the proposed ADD and the enhanced version $\mathrm{ADD + }$ . Let $\mathcal{I}^{LR}\in R^{H\times W\times C}$ be the input LR image, and the corresponding saliency map $\mathcal{I}_s^{LR}$ can be obtained with Eq. (7). To accurately obtain the region of maximum saliency, we follow the principle (1) of continuous boundaries in Sec. 
3.5 and select the maximum saliency pixels with a proportion of $p$ and obtain the corresponding irregularly shaped patch: + +$$ +T _ {p} = \operatorname {s o r t e d} \left(\mathcal {I} _ {s} ^ {L R}\right) _ {\lceil p \cdot \text {n u m . p i x e l s} \rceil} ^ {m a x}, +$$ + +$$ +M (i, j) = \left\{ \begin{array}{l l} 1, & \text {i f} \mathcal {I} _ {s} ^ {L R} (i, j) \geq T _ {p}, \\ 0, & \text {o t h e r w i s e}, \end{array} \right. \tag {8} +$$ + +where the $T_{p}$ represents the maximum saliency value of the top $p$ proportion in the $\mathcal{I}_s^{LR}$ and the $M\in \{0,1\}^{H\times W}$ is a + +binary mask indicating the area to be cut. Following the second principle (2) of diverse augmented images in Sec. 3.5, we use the mask $M$ to cut the patch to the corresponding position on another image and combine it with different DA strategies. Given the LR-HR image pair $\{\mathcal{I}_i^{LR},\mathcal{I}_i^{HR}\}$ and the corresponding binary mask $M$ , the augmentation process is explained below. + +ADD. We first adopt a mixed strategy to generate augmented input images to enable the model to learn richer and more complex degradation patterns. We cut the patch and mix it with the patch from another LR-HR image pair $\{\mathcal{I}_j^{LR},\mathcal{I}_j^{HR}\}$ and generate new training samples: + +$$ +P _ {m i x} ^ {L R} = \lambda \times M \odot \mathcal {I} _ {i} ^ {L R} + (1 - \lambda) \times M \odot \mathcal {I} _ {j} ^ {L R}, \tag {9} +$$ + +$$ +P _ {m i x} ^ {H R} = \lambda \times M \odot \mathcal {I} _ {i} ^ {H R} + (1 - \lambda) \times M \odot \mathcal {I} _ {j} ^ {H R}, +$$ + +$$ +\mathcal {I} _ {i} ^ {L R} = P _ {m i x} ^ {L R} + (1 - M) \odot \mathcal {I} _ {j} ^ {L R}, \tag {10} +$$ + +$$ +\mathcal {I} _ {i} ^ {H R} = P _ {m i x} ^ {H R} + (1 - M) \odot \mathcal {I} _ {j} ^ {H R}. +$$ + +Then we adopt an intensity strategy to make the model learn how and where to augment the input images. We cut the patch $P_{i}^{LR}$ by $M \odot \mathcal{I}_{i}^{LR}$ and upsample it by scale $s$ with bicubic kernel, get $P_{i}^{LR(s \times \uparrow)}$ . The HR patch can be generated similarly and we can get augmented samples as: + +$$ +\hat {\mathcal {I}} _ {i} ^ {L R \rightarrow H R} = P _ {i} ^ {L R (s \times \uparrow)} + (\mathbf {1} - M) \odot \mathcal {I} _ {i} ^ {H R}, \tag {11} +$$ + +$$ +\hat {\mathcal {I}} _ {i} ^ {H R \rightarrow L R} = P _ {i} ^ {H R (s \times \downarrow)} + (\mathbf {1} - M) \odot \mathcal {I} _ {i} ^ {L R}. +$$ + +$\mathbf{ADD}+$ . To validate the efficacy of saliency in mixed enhancement and surpass the performance limits, we proposed the method with an enhanced version. We additionally generate a pair of new training samples with another LR-HR image pair $\{\mathcal{I}_j^{LR},\mathcal{I}_j^{HR}\}$ : + +$$ +\mathcal {I} _ {i} ^ {L R} = M \odot \mathcal {I} _ {i} ^ {L R} + (1 - M) \odot \mathcal {I} _ {j} ^ {L R}, \tag {12} +$$ + +$$ +\mathcal {I} _ {i} ^ {H R} = M \odot \mathcal {I} _ {i} ^ {H R} + (1 - M) \odot \mathcal {I} _ {j} ^ {H R}, +$$ + +where $\odot$ denotes the element-wise Hadamard product operation. Following previous work [33], in each training iteration, using $\mathrm{ADD + }$ , each above augmentation method and traditional augmentation method (e.g., color, channel) have a probability $p$ of being applied by the model to enhance the input image. + +# 3.5. Discussions + +Incorporate saliency into vanilla DA. Incorporating saliency into existing vanilla DA methods revolves around two key aspects: patch cutting and pasting. 
Consequently, two fundamental questions arise: (1) What manner should be used for segmenting the source image? (2) Where should the cut patches be pasted? To address these questions, we + +conduct a comprehensive analysis and reveal the key principle: a wider spectrum of degradation patterns. Specifically, it includes: (1) continuous boundaries rather than abrupt boundaries, and (2) diverse augmented images over single augmented images. The results and analysis of saliency in DA methods are presented in Tab. 1 and further elaborated in the experiments (see Sec. 4.4). + +Table 1. Quantitative PSNR comparison of various saliency incorporation methods in super-resolution. $Patch_{n \times n}$ denotes image division into $n \times n$ patches. $X\mathcal{Q}Y$ indicates cutting a patch from area $X$ and pasting it onto region $Y$ in another image, where 'Sa' stands for 'Saliency', 'Non' for 'Non-saliency', 'Ce' for 'Center', and 'Cor' for 'Corresponding'. + +
| Method | Scale | Types of Saliency Utilization | PSNR (DIV2K) | Δ | PSNR (RealSR) | Δ |
| --- | --- | --- | --- | --- | --- | --- |
| EDSR | ×4 | baseline | 29.21 | - | 28.86 | - |
| Patch1×1 | ×4 | Granularity | 29.28 | +0.07 | 29.10 | +0.24 |
| Patch2×2 | ×4 | Granularity | 29.26 | +0.05 | 29.07 | +0.21 |
| Patch3×3 | ×4 | Granularity | 29.24 | +0.03 | 29.02 | +0.16 |
| Patch4×4 | ×4 | Granularity | 29.22 | +0.01 | 28.91 | +0.05 |
| Patch5×5 | ×4 | Granularity | 29.14 | -0.07 | 28.75 | -0.11 |
| Patch6×6 | ×4 | Granularity | 29.07 | -0.14 | 28.61 | -0.25 |
| Patch7×7 | ×4 | Granularity | 28.92 | -0.29 | 28.45 | -0.41 |
| Sa2Cor | ×4 | Diversity | 29.27 | +0.06 | 29.11 | +0.25 |
| Ce2Ce | ×4 | Diversity | 29.26 | +0.05 | 29.08 | +0.22 |
| Sa2Sa | ×4 | Diversity | 29.21 | +0.00 | 28.89 | +0.03 |
| Sa2Non | ×4 | Diversity | 29.24 | +0.03 | 28.97 | +0.11 |
| Non2Sa | ×4 | Diversity | 29.19 | -0.02 | 28.79 | -0.07 |
| Non2Non | ×4 | Diversity | 29.16 | -0.05 | 28.76 | -0.10 |
| ADD+ | ×4 | Granularity & Diversity | 29.32 | +0.11 | 29.14 | +0.28 |
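The 'Sa' source region behind the schemes in Tab. 1 is simply the top-$p$ support of the CAM response, i.e. the mask $M$ of Eq. (8). A minimal sketch (the helper name and the default $p$ are ours):

```python
import torch

def saliency_mask(sal: torch.Tensor, p: float = 0.25) -> torch.Tensor:
    """Eq. (8): binary mask M marking the top-p proportion of CAM saliency.

    sal: (H, W) saliency map I_s^LR; returns a {0, 1} float mask whose
    irregular support keeps boundaries continuous (principle (1)).
    """
    k = max(1, int(p * sal.numel()))
    t_p = torch.topk(sal.flatten(), k).values.min()   # threshold T_p
    return (sal >= t_p).float()
```

The complement `1 - M` gives the 'Non' region used by the Non2* schemes.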
+ +# 4. Experiments + +# 4.1. Preliminaries + +Network structures. We adopt several advanced and typical SR networks to verify the effectiveness and compatibility of our ADD and the proposed three new DA strategies. We consider CNN-based methods: RCAN [38], and EDSR [17], in which well-designed CNN-based structures are proven effective on SR tasks. The transformer-based method, SwinIR [16], is also adopted in our experiments. + +Datasets and implementation details. We use the DIV2K and RealSR datasets for training. We select ten images (index 0801-0810) from the DIV2K validation set for validation during training. For evaluation, we use six benchmark datasets, including Set5, Set14, BSD100, Urban100, Manga109, and test sets of RealSR. We keep the hyperparameters (e.g., learning rate, batch size) the same as reported in the original paper. All experiments are conducted using PyTorch on NVIDIA V100 GPUs. + +# 4.2. Comparison of Interpretation Capability + +We conduct visualization experiments to evaluate the effectiveness of the proposed CAM. As shown in the left part of + +![](images/1b5dec18cf36a63ed299da488981529b64291193ef0d01134a64b2dbfb2504b8.jpg) + +![](images/a51527f8612d1b895ed066df1139b55f63b5f74cb19216ff7b6aee3ec6157fd2.jpg) +Fig. 3, the blue arrow highlights important regions, while the red arrow points to areas considered background noise. CAM accurately identifies relevant information, demonstrating robustness to background noise, unlike LAM. + +![](images/c47120e0d2e103ac181511fcee2064834384487ffb6a7e10ac27b4425edd8fe8.jpg) + +![](images/67d264ba064a6235f2bca34effcdd7bf772d3f4e4fc03a82d96c35276ee2a132.jpg) +Figure 3. Saliency maps (Left) and Insertion/Deletion curves (Right) on DIV2K etc. super-resolution sets. The blue arrow indicates that the proposed CAM accurately reflects important and intuitive content without being affected by background noise, while the red arrow indicates that LAM is affected by background noise and produces undesired attribution results. + +![](images/28fac85b2f8652710df47a3617e71950b617503ff67d4973df4dd1d554faed0b.jpg) + +![](images/54c4214e32f28a30ef9793e9dc51e7ef60acf8f602069eb4ef4fb0c5940ebcf5.jpg) + +Table 2. Quantitative PSNR (dB) comparison of our ADD and existing DA methods in SR on DIV2K and RealSR datasets. $\Delta$ denotes the performance gap. + +
| Method | Scale | Training Set | PSNR (DIV2K) | Δ | Training Set | PSNR (RealSR) | Δ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| RCAN | ×4 | DIV2K | 29.22 | - | RealSR | 29.20 | - |
| + CutMix | ×4 | DIV2K | 29.24 | +0.02 | RealSR | 29.25 | +0.05 |
| + CutMixup | ×4 | DIV2K | 29.28 | +0.06 | RealSR | 29.30 | +0.10 |
| + CutBlur | ×4 | DIV2K | 29.25 | +0.03 | RealSR | 29.29 | +0.09 |
| + ADD | ×4 | DIV2K | 29.32 | +0.10 | RealSR | 29.34 | +0.14 |
| + ADD+ | ×4 | DIV2K | 29.36 | +0.14 | RealSR | 29.46 | +0.26 |
| EDSR | ×4 | DIV2K | 29.21 | - | RealSR | 28.86 | - |
| + CutMix | ×4 | DIV2K | 29.22 | +0.01 | RealSR | 28.90 | +0.04 |
| + CutMixup | ×4 | DIV2K | 29.26 | +0.05 | RealSR | 28.97 | +0.11 |
| + CutBlur | ×4 | DIV2K | 29.25 | +0.04 | RealSR | 28.94 | +0.08 |
| + ADD | ×4 | DIV2K | 29.30 | +0.09 | RealSR | 29.01 | +0.15 |
| + ADD+ | ×4 | DIV2K | 29.32 | +0.11 | RealSR | 29.14 | +0.28 |
| SwinIR | ×4 | DIV2K | 29.40 | - | RealSR | 29.26 | - |
| + CutMix | ×4 | DIV2K | 29.40 | +0.00 | RealSR | 29.29 | +0.03 |
| + CutMixup | ×4 | DIV2K | 29.43 | +0.03 | RealSR | 29.34 | +0.08 |
| + CutBlur | ×4 | DIV2K | 29.43 | +0.03 | RealSR | 29.32 | +0.06 |
| + ADD | ×4 | DIV2K | 29.46 | +0.06 | RealSR | 29.37 | +0.11 |
| + ADD+ | ×4 | DIV2K | 29.48 | +0.08 | RealSR | 29.43 | +0.17 |
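For reference, the mixed strategy of Eqs. (9) and (10) that drives the ADD rows above reduces to a few element-wise operations. The sketch below is our paraphrase; it assumes the LR mask has already been resized to the HR grid, a step the equations leave implicit.

```python
import torch

def add_mixed(lr_i, hr_i, lr_j, hr_j, m_lr, m_hr, lam: float = 0.5):
    """Eqs. (9)-(10): lambda-blend pairs i and j inside the saliency mask,
    fall back to pair j outside it.

    m_lr / m_hr: {0, 1} masks on the LR / HR grids
    (m_hr is m_lr upscaled by the SR factor).
    """
    p_lr = lam * m_lr * lr_i + (1 - lam) * m_lr * lr_j     # P_mix^LR
    p_hr = lam * m_hr * hr_i + (1 - lam) * m_hr * hr_j     # P_mix^HR
    new_lr = p_lr + (1 - m_lr) * lr_j
    new_hr = p_hr + (1 - m_hr) * hr_j
    return new_lr, new_hr
```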
Following established protocols [30, 37], we perform Insertion and Deletion tests, as shown in the right part of Fig. 3. In the Insertion test, pixels from the high-resolution (HR) image are inserted into the super-resolved image in increments of $3.6\%$, guided by the pixel importance values in the attribution map, until the reconstructed image closely matches the HR image. In the Deletion test, $3.6\%$ of the pixels in the HR image, starting from those with the highest attribution map values, are progressively replaced with black pixels until the entire image is replaced. The Insertion and Deletion curves provide further evidence that CAM captures the network's critical information more effectively than LAM.

# 4.3. Results on Various Models and Datasets

We conduct quantitative comparisons between the proposed ADD method and existing vanilla DA approaches on classical benchmark datasets, namely DIV2K and RealSR, as detailed in Tab. 2. The results demonstrate that both ADD and ADD+ consistently outperform vanilla DA methods on both synthetic (DIV2K) and real-world (RealSR) datasets, with performance improvements reaching up to $0.28\mathrm{dB}$. Further comparisons between ADD+ and baseline models on the Set5, Set14, Manga109, Urban100, and BSD100 datasets, presented in Tab. 3, show that networks trained with ADD+ consistently achieve superior reconstruction performance. Qualitative results, depicted in Fig. 4, reveal that networks trained with ADD+ exhibit enhanced visual quality compared to their baseline counterparts, capturing finer details. Notably, in the area of stripes on the building in Fig. 4 (img_012), ADD+ yields more accurate and sharper details than the baselines.

Table 3. Quantitative comparison with baseline methods in SR, where $\Delta$ denotes the performance gap.
| Method | Scale | Training Dataset | Set5 PSNR | Δ | Set14 PSNR | Δ | Manga109 PSNR | Δ | Urban100 PSNR | Δ | BSD100 PSNR | Δ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| RCAN | ×2 | DIV2K | 38.23 | - | 34.11 | - | 39.41 | - | 33.29 | - | 32.37 | - |
| + ADD+ | ×2 | DIV2K | 38.34 | +0.11 | 34.21 | +0.10 | 39.55 | +0.14 | 33.55 | +0.26 | 32.44 | +0.07 |
| EDSR | ×2 | DIV2K | 38.11 | - | 33.92 | - | 39.10 | - | 32.93 | - | 32.32 | - |
| + ADD+ | ×2 | DIV2K | 38.20 | +0.09 | 34.04 | +0.12 | 39.27 | +0.17 | 33.16 | +0.23 | 32.43 | +0.11 |
| SwinIR | ×2 | DIV2K | 38.31 | - | 34.41 | - | 39.89 | - | 33.75 | - | 32.46 | - |
| + ADD+ | ×2 | DIV2K | 38.43 | +0.12 | 34.49 | +0.08 | 39.99 | +0.10 | 33.90 | +0.15 | 32.51 | +0.05 |
| RCAN | ×4 | DIV2K | 32.58 | - | 28.84 | - | 31.22 | - | 26.75 | - | 27.74 | - |
| + ADD+ | ×4 | DIV2K | 32.64 | +0.06 | 28.91 | +0.07 | 31.43 | +0.21 | 26.92 | +0.17 | 27.79 | +0.05 |
| EDSR | ×4 | DIV2K | 32.43 | - | 28.76 | - | 31.04 | - | 26.63 | - | 27.66 | - |
| + ADD+ | ×4 | DIV2K | 32.51 | +0.08 | 28.88 | +0.12 | 31.24 | +0.20 | 26.81 | +0.18 | 27.77 | +0.11 |
| SwinIR | ×4 | DIV2K | 32.63 | - | 28.92 | - | 31.54 | - | 27.01 | - | 27.82 | - |
| + ADD+ | ×4 | DIV2K | 32.71 | +0.08 | 28.99 | +0.07 | 31.60 | +0.06 | 27.13 | +0.12 | 27.85 | +0.03 |
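The ADD+ variant behind these gains additionally applies the plain cut-and-paste of Eq. (12), with each augmentation fired independently with probability $p$ per training iteration (Sec. 3.4). A minimal sketch; the dispatcher interface is our illustration of that sampling, not the released code:

```python
import random
import torch

def add_plus_cutpaste(lr_i, hr_i, lr_j, hr_j, m_lr, m_hr):
    """Eq. (12): paste the salient patch of pair i onto pair j
    (m_lr / m_hr are {0, 1} masks on the LR / HR grids)."""
    new_lr = m_lr * lr_i + (1 - m_lr) * lr_j
    new_hr = m_hr * hr_i + (1 - m_hr) * hr_j
    return new_lr, new_hr

def maybe_augment(pair, ops, p: float = 0.5):
    """Apply each augmentation op (mixed, intensity, cut-paste, color, ...)
    independently with probability p, as described in Sec. 3.4."""
    for op in ops:
        if random.random() < p:
            pair = op(pair)
    return pair
```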
+ +![](images/de78c63daab6cb5676c36cbb3e05b5e5c2037249f5c661a5cbb062873cccab89.jpg) +Urban100(x4): img_092 + +![](images/b8dcd9a020f089120ca5f77ceaae038355b671ae4387b9cd7b79ca8ceee8d468.jpg) +HR + +![](images/9d5f9288b54d82ea721ecdd5835fe9f08cd97b76c4f47c920c500863b8604f9e.jpg) +Bicubic + +![](images/70699b286b4656ac3076cd033ade92ea423b6036377f2e715fd309df9ff01b4e.jpg) +RCAN + +![](images/2c2975adaeceddc2bf474e7d336aad9d229acb77f823039b3c3d5da2e330ce41.jpg) +RCAN + +![](images/5b87a9b328dc134e1a97d913d09221fd3c49b15fa763bcb31943e62b5f28a0e6.jpg) +EDSR + +![](images/815bef0c5bcafc4cb0b87ea2b5d8e17621a0aedc88e0a64512aaf559ff7934d9.jpg) ++proposed +proposed +proposed +EDSR + ++proposed +proposed + +![](images/4365e0c60e58b836d1a0ffb62d2f0b31d8c3ec3126a0db25585bfecefdd40e29.jpg) +Urban100(x4): img_095 + +![](images/394ddbae49b2e969b327cc1919e2d371c9d503baed8b70d18e6d29aebcfe7bb9.jpg) +HR + +![](images/e2bbb8bdbb4f64783534f7f4a84aaffe542879dd79976f53122a8385abb22ef9.jpg) ++proposed +proposed +proposed + +![](images/a8a67a75522fbbcec87fdcc179cce34065f9b8a189ce8ea9710fb1a04d28724c.jpg) +RCAN + +![](images/de517a8cbaf25badbbc76af14d3c877274f3e175e7ec8b20ca0d9f6d7378f62e.jpg) +RCAN + +![](images/d5d7b376389ee0689ecd44b2122f8f74a29030a27ac346c306122d30124a4cea.jpg) +EDSR + +![](images/a0e4d907bfceff3845f90f61f65285ebcc6f4789bd267ad6c12c8994de11b624.jpg) +EDSR + +![](images/44d3686a627368ef66274250ead23a913256805c7a960591562cd3671e6edfba.jpg) +SwinIR + +![](images/fe44a5b7289d1c45e50d5c15f294601c872f22e9ba49caab6f4ca2aea0c4da17.jpg) +SwinIR + +![](images/c0ddf29918d715819df72d701c97fd9aae29410a10ebf71b8ee85ff35c85fa99.jpg) +Manga109(x4): +PrayerHaNemurenai + +![](images/50306f619c9bbc1c238522afdea410cdbcca9b717218ff6c77d8f5356176a48e.jpg) +HR + +![](images/bbbf594ba4c0d3868b8677c002ea7001a4fbe9a388b7a78301ae7bd8399f66b8.jpg) +Bicubic + +![](images/0b97099247bdfe10d322b539a93f3b8ebdd7b2bd9e34a2fbb1c3c15de90be3df.jpg) +RCAN + +![](images/19b018a5f89507203598c74da818b2968adadacd67ebbde97711ac034ab4f7a4.jpg) +RCAN ++proposed +proposed +proposed + +![](images/e4cfb74893081c95da7bbac00f08dc76b2134bfb11b6636c19f79aa879b916fe.jpg) +EDSR + +![](images/2132558eb107d99f4a7353ff35e8e07d5a0f4d3cc5130f5c1b40d15dfa01f61d.jpg) +EDSR + ++proposed +proposed + +![](images/ce79336787e0335553da22f47ce92f9fb791087c0ad0bdd74bc563b0979eb5b6.jpg) +SwinIR + +![](images/099a0f5063333cdc69c08a32d3cafd8c1cfafda055e1b421f0a953f4a69f417e.jpg) +SwinIR + +![](images/858c5dd4401389fcb34065de8b066c768585fcc3366efbde50841a38607b9f40.jpg) +Urban100(x4): img_012 +Figure 4. Visual comparison on $\times 4$ SR with the baseline model and proposed method. The patches for comparison are marked with red boxes in the original images. Please zoom in for better visualization. + +![](images/e6c5d12eba070fa80ec1d9d6b0e4c721cf6c911a765e0a801e481624f851c59c.jpg) +HR + +![](images/1898ea22869ed4be957d153bf955ff103862bfc3889ae37df2dc95891492872e.jpg) +Bicubic + +![](images/0367aa6b50e523d989e88e1ed39f06e2023eb75f1c59cbca6d423bc6f53cec1e.jpg) +RCAN + +![](images/f43556d28230571eb0f771e115255dcb32f60ed8fcf70e9a16237990dbe96041.jpg) +RCAN ++proposed +proposed +proposed + +![](images/ed9aa450dc2c069dea4e5c803f499d4a9c4e8f86487d60fd357eaa076a1b987f.jpg) +SwinLR + +![](images/3cc5764d44d96cb7d3b5442a77bafae88c5c0fe287cb0117ec5382cb8e06aebe.jpg) +SwinIR + +# 4.4. Guiding Principles for Saliency-based DA methods + +As discussed in Sec. 
3.5, the key questions for incorporating saliency are: (1) the manner of segmenting the source image, and (2) the position for pasting the patches. We define the following settings for a comprehensive analysis.

For question (1), we examine the impact of different granularities on DA strategies, categorizing granularity into coarse $(1\times 1, 2\times 2)$, medium $(3\times 3, 4\times 4)$, and fine $(5\times 5, 6\times 6, 7\times 7)$ patches. As shown in Tab. 1, network performance declines as granularity is refined, indicating that a lack of continuity at the boundary causes serious boundary effects and subsequently impairs performance.

For question (2), we investigate six schemes for extracting and merging patches from the source image to the target image:

- (i) Saliency to Corresponding: extract the most salient region from the source image and merge it with the corresponding region of the target image;
- (ii) Center to Center: extract the central region from the source image and merge it with the central region of the target image;
- (iii) Saliency to Saliency: extract the most salient region from the source image and merge it with the most salient region of the target image;
- (iv) Saliency to Non-Saliency: extract the most salient region from the source image and merge it with the non-salient region of the target image;
- (v) Non-Saliency to Saliency: extract the non-salient region from the source image and merge it with the most salient region of the target image;
- (vi) Non-Saliency to Non-Saliency: extract the non-salient region from the source image and merge it with the non-salient region of the target image.

![](images/b772bacf0636b335b1cdd69e8c91870951003a0ff6eda6ca857eff909bca9193.jpg)
Figure 5. Attribution results for the baseline model, vanilla DA (CutMixup), and saliency-based DA (ADDCutMixup). The attribution results showcase the importance of each pixel in the input LR image for reconstructing the marked patch. The diffusion index (DI) reflects the range of involved pixels, with a higher DI indicating a broader range of utilized pixels. Two key observations from the attribution and DI results emerge: (1) Vanilla DA methods enhance network performance by involving more pixels. (2) Saliency-based DA methods guide the model to focus more on meaningful details, reducing attention to irrelevant pixels. Please zoom in for better visualization.

As depicted in Tab. 1, scheme (i) Sa2Cor (with source 'Sa'), which incorporates a broader range of target positions such as 'Sa', 'Non', and others, aligns with a diversity principle and outperforms the others. This highlights the importance of the saliency region in the source image and of diverse augmented patterns. Based on these findings, we propose the key principle for low-level DA: a wider spectrum of degradation patterns.

# 4.5. What Vanilla and Saliency-Based DA Learn

We conduct attribution analysis on baseline models, models trained with vanilla DA strategies, and models trained with our proposed saliency-based DA strategies. As depicted in Fig. 5, models trained with DA methods exhibit a higher diffusion index (DI), indicating a broader range of involved pixels. Our ADD, highlighted by black arrows, focuses on more accurate details. Notably, we observe two key findings: (1) Both vanilla and saliency-based DA methods enhance the model's ability to involve more pixels, leading to improved performance.
(2) Saliency-based DA directs the model to concentrate on influential pixels rather than indiscriminately incorporating more pixels. + +Table 4. Gradients comparison with and without global feature detector (GD) on the DIV2K validation dataset. + +
| Method | Backbone | step = 1 | step = 10 | step = 20 | step = 30 | step = 50 |
| --- | --- | --- | --- | --- | --- | --- |
| LAM | EDSR | 33.9k | 108.7k | 119.9k | 120.0k | 120.0k |
| w/ GD | EDSR | 228.6k | 419.2k | 643.8k | 784.9k | 869.1k |
| LAM | RCAN | 29.8k | 660.0k | 962.6k | 969.5k | 987.3k |
| w/ GD | RCAN | 174.9k | 494.1k | 749.2k | 815.7k | 1291.3k |
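The step counts in Tab. 4 refer to the $k$-step gradient accumulation of Eqs. (4) and (7). Stripped of the calibration step for brevity, the loop looks like the sketch below; it reuses `global_feature_detector` and `make_blur_path` from the Sec. 3.3 sketch and is our paraphrase rather than the released code.

```python
import torch

def cam_attribution(model, gamma, k: int = 50) -> torch.Tensor:
    """Sum per-step path gradients of the GD scalar along the blur path
    gamma (Eq. (4)), approximating the integral of Eq. (7)."""
    total = torch.zeros_like(gamma(0.0))
    for i in range(k):
        x = gamma(i / k).detach().clone().requires_grad_(True)
        score = global_feature_detector(model(x))   # GD(F(gamma(i/k)))
        grad, = torch.autograd.grad(score, x)
        step = gamma(i / k) - gamma((i + 1) / k)    # step direction as written in Eq. (4)
        total = total + step * grad
    return total
```

Usage: `sal = cam_attribution(sr_model, make_blur_path(lr_img)).abs().sum(1)[0]` would yield the per-pixel saliency map fed to Eq. (8), assuming `lr_img` is a `(1, C, H, W)` tensor.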
# 4.6. Ablation Studies

The effectiveness of CAM was demonstrated in Sec. 4.2. To further evaluate the impact of the proposed global feature detector (GD), we conduct additional experiments using EDSR and RCAN as backbone models, as outlined in Sec. 4.1. In these ablation studies, we substitute the global feature detector with the absolute cumulative value of the entire image. The results, shown in Tab. 4, highlight that the inclusion of GD leads to smoother gradient changes, making uniformly sampled points more effective.

# 4.7. Extensions: Other Low-level Vision Tasks

We explore the applicability of our method to other low-level vision tasks, specifically examining its effectiveness in JPEG artifact removal. Using CNN-based EDSR and Transformer-based SwinIR as baselines, we train the models from scratch. Following prior work [33], we create a synthetic dataset of color images with the compression quality parameter $q$ set to 10 (a lower $q$ indicates stronger artifacts). Results in Tab. 5 reveal substantial improvements in PSNR and SSIM metrics, particularly at low compression levels (high $q$), highlighting the versatility of our method across low-level vision tasks.

Table 5. Quantitative comparison of JPEG compression artifact reduction on the LIVE1 dataset. The best results are highlighted in bold, where $q$ denotes the compression level, with a smaller value indicating a higher compression level.
Table 5. Quantitative comparison of JPEG compression artifact reduction on the LIVE1 dataset. The best result in each pair is shown in bold; $q$ denotes the JPEG compression quality, with a smaller value indicating stronger compression.

| Method | q = 10 PSNR | q = 10 SSIM | q = 20 PSNR | q = 20 SSIM | q = 30 PSNR | q = 30 SSIM |
| :--- | ---: | ---: | ---: | ---: | ---: | ---: |
| EDSR | 30.14 | 0.8391 | 31.83 | 0.8840 | 32.45 | 0.8992 |
| + ADD | **30.15** | **0.8394** | **32.31** | **0.8949** | **33.44** | **0.9173** |
| SwinIR | 29.86 | **0.8287** | 32.25 | 0.8909 | 33.69 | 0.9174 |
| + ADD | 29.86 | 0.8285 | **32.56** | **0.8957** | **34.48** | **0.9287** |
# 5. Conclusion

In this work, we introduce CAM and ADD, specifically designed for SR. Through a dedicated analysis, we reveal two new insights for low-level tasks: involving more pixels improves performance, and focusing on influential pixels rather than incorporating irrelevant ones is even more effective. In addition, we propose the key principle of a wider spectrum of degradation patterns for designing DA in low-level tasks. Experimental results underscore the effectiveness and adaptability of our method, which significantly improves performance across various SR tasks. Our work opens new avenues for exploring more effective ways to utilize image information in DA and low-level tasks.

# Acknowledgments

This work was supported by the Natural Science Foundation of China (Grant No. 62176119) and the Jiangsu Graduate Research Innovation Program (Grant No. KYCX24_0258).

# References

+[1] Naveed Akhtar and Mohammad A. A. K. Jalwana. Towards credible visual model interpretation with path attribution. In Proceedings of the International Conference on Machine Learning, ICML, pages 439-457, 2023. 3
+[2] Tao Dai, Jianrui Cai, Yongbing Zhang, Shu-Tao Xia, and Lei Zhang. Second-order attention network for single image super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, pages 11065-11074, 2019. 1, 2
+[3] Xin Deng, Yutong Zhang, Mai Xu, Shuhang Gu, and Yiping Duan. Deep coupled feedback network for joint exposure fusion and image super-resolution. IEEE Trans. Image Process., 30:3098-3112, 2021. 1
+[4] Terrance DeVries and Graham W. Taylor. Improved regularization of convolutional neural networks with cutout. ArXiv preprint, abs/1708.04552, 2017. 2
+[5] Chao Dong, Chen Change Loy, Kaiming He, and Xiaoou Tang. Learning a deep convolutional network for image super-resolution. In Proceedings of the European Conference on Computer Vision, ECCV, pages 184-199, 2014. 1, 2
+[6] Xiaoyi Dong, Jianmin Bao, Dongdong Chen, Weiming Zhang, Nenghai Yu, Lu Yuan, Dong Chen, and Baining Guo. Cswin transformer: A general vision transformer backbone with cross-shaped windows. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, pages 12114-12124, 2022. 2
+[7] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In Proceedings of the International Conference on Learning Representations, ICLR, 2021. 1, 2
+[8] Ruicheng Feng, Jinjin Gu, Yu Qiao, and Chao Dong. Suppressing model overfitting for image super-resolution networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW, pages 1964-1973, 2019. 1, 2
+[9] Jinjin Gu and Chao Dong. Interpreting super-resolution networks with local attribution maps. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, pages 9199-9208, 2021. 1, 3
+[10] Takashi Isobe, Xu Jia, Shuhang Gu, Songjiang Li, Shengjin Wang, and Qi Tian. Video super-resolution with recurrent structure-detail network. In Proceedings of the European Conference on Computer Vision, ECCV, pages 645-660, 2020. 1
+[11] Andrei Kapishnikov, Subhashini Venugopalan, Besim Avci, Ben Wedin, Michael Terry, and Tolga Bolukbasi. Guided integrated gradients: An adaptive path method for removing noise.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, pages 5050-5058, 2021. 1, 3
+[12] Jiwon Kim, Jung Kwon Lee, and Kyoung Mu Lee. Accurate image super-resolution using very deep convolutional networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, pages 1646-1654, 2016. 1, 2
+[13] Jang-Hyun Kim, Wonho Choo, and Hyun Oh Song. Puzzle mix: Exploiting saliency and local statistics for optimal mixup. In Proceedings of the International Conference on Machine Learning, ICML, pages 5275-5285, 2020. 1, 2
+[14] Jang-Hyun Kim, Wonho Choo, Hosan Jeong, and Hyun Oh Song. Co-mixup: Saliency guided joint mixup with supermodular diversity. In Proceedings of the International Conference on Learning Representations, ICLR, 2021. 1, 2
+[15] Wei-Sheng Lai, Jia-Bin Huang, Narendra Ahuja, and Ming-Hsuan Yang. Deep laplacian pyramid networks for fast and accurate super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, pages 5835-5843, 2017. 1
+[16] Jingyun Liang, Jiezhang Cao, Guolei Sun, Kai Zhang, Luc Van Gool, and Radu Timofte. Swinir: Image restoration using swin transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, ICCV, pages 1833-1844, 2021. 1, 2, 5
+[17] Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, and Kyoung Mu Lee. Enhanced deep residual networks for single image super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW, pages 1132-1140, 2017. 5
+[18] Jihao Liu, Boxiao Liu, Hang Zhou, Hongsheng Li, and Yu Liu. Tokenmix: Rethinking image mixing for data augmentation in vision transformers. In Proceedings of the European Conference on Computer Vision, ECCV, pages 455-471, 2022. 1, 2
+[19] Yiqun Mei, Yuchen Fan, and Yuqian Zhou. Image super-resolution with non-local sparse attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, pages 3517-3526, 2021. 1, 2
+[20] Ze-Yu Mi and Yu-Bin Yang. Cutdem: Depth-aware enhanced multi-view image mixing for light field super-resolution. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP, pages 3340-3344. IEEE, 2024. 1
+[21] Mehdi S. M. Sajjadi, Bernhard Schölkopf, and Michael Hirsch. Enhancenet: Single image super-resolution through automated texture synthesis. In Proceedings of the IEEE/CVF International Conference on Computer Vision, ICCV, pages 4501-4510, 2017. 2
+[22] Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. Learning important features through propagating activation differences. In Proceedings of the International Conference on Machine Learning, ICML, pages 3145-3153, 2017. 3
+[23] Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Viégas, and Martin Wattenberg. Smoothgrad: removing noise by adding noise. ArXiv preprint, abs/1706.03825, 2017.
+[24] Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving for simplicity: The all convolutional net. ArXiv preprint, abs/1412.6806, 2014. 3
+[25] Jialu Sui, Xianping Ma, Xiaokang Zhang, and Man-On Pun. GCRDN: global context-driven residual dense network for remote sensing image super-resolution. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., 16:4457-4468, 2023. 1, 2
+[26] Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks.
In Proceedings of the International Conference on Machine Learning, ICML, pages 3319-3328, 2017. 1, 3
+[27] Radu Timofte, Rasmus Rothe, and Luc Van Gool. Seven ways to improve example-based single image super resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, pages 1865-1873, 2016. 1, 2
+[28] A. F. M. Shahab Uddin, Mst. Sirazam Monira, Wheemyung Shin, TaeChoong Chung, and Sung-Ho Bae. Saliencymix: A saliency guided data augmentation strategy for better regularization. In Proceedings of the International Conference on Learning Representations, ICLR, 2021. 1, 2
+[29] Devesh Walawalkar, Zhiqiang Shen, Zechun Liu, and Marios Savvides. Attentive cutmix: An enhanced data augmentation approach for deep learning based image classification. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP, pages 3642-3646, 2020. 1, 2
+[30] Haofan Wang, Zifan Wang, Mengnan Du, Fan Yang, Zijian Zhang, Sirui Ding, Piotr Mardziel, and Xia Hu. Scorecam: Score-weighted visual explanations for convolutional neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW, pages 111-119, 2020. 6
+[31] Xintao Wang, Ke Yu, Shixiang Wu, Jinjin Gu, Yihao Liu, Chao Dong, Yu Qiao, and Chen Change Loy. ESRGAN: enhanced super-resolution generative adversarial networks. In Proceedings of the European Conference on Computer Vision Workshops, ECCVW, pages 63-79, 2018. 1, 2
+[32] Zeyu Xiao, Yutong Liu, Ruisheng Gao, and Zhiwei Xiong. Cutmib: Boosting light field super-resolution via multi-view image blending. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, pages 1672-1682, 2023. 1, 2
+[33] Jaejun Yoo, Namhyuk Ahn, and Kyung-Ah Sohn. Rethinking data augmentation for image super-resolution: A comprehensive analysis and a new strategy. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, pages 8372-8381, 2020. 1, 2, 5, 8
+[34] Sangdoo Yun, Dongyoon Han, Sanghyuk Chun, Seong Joon Oh, Youngjoon Yoo, and Junsuk Choe. Cutmix: Regularization strategy to train strong classifiers with localizable features. In Proceedings of the IEEE/CVF International Conference on Computer Vision, ICCV, pages 6022-6031. IEEE, 2019. 1, 2
+[35] Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. In Proceedings of the International Conference on Learning Representations, ICLR, 2018. 1, 2
+[36] Kai Zhang, Yawei Li, Wangmeng Zuo, Lei Zhang, Luc Van Gool, and Radu Timofte. Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell., 44(10):6360-6376, 2022. 1, 2
+[37] Qing-Long Zhang, Lu Rao, and Yubin Yang. Group-cam: Group score-weighted visual explanations for deep convolutional networks. ArXiv preprint, abs/2103.13859, 2021. 6
+[38] Yulun Zhang, Kunpeng Li, Kai Li, Lichen Wang, Bineng Zhong, and Yun Fu. Image super-resolution using very deep residual channel attention networks. In Proceedings of the European Conference on Computer Vision, ECCV, pages 294-310, 2018. 5
+[39] Yulun Zhang, Yapeng Tian, Yu Kong, Bineng Zhong, and Yun Fu. Residual dense network for image super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, pages 2472-2481, 2018. 2
+[40] Yulun Zhang, Yapeng Tian, Yu Kong, Bineng Zhong, and Yun Fu. Residual dense network for image restoration. IEEE Trans.
Pattern Anal. Mach. Intell., 43(7):2480-2495, 2021. 1, 2 +[41] Zhun Zhong, Liang Zheng, Guoliang Kang, Shaozi Li, and Yi Yang. Random erasing data augmentation. In Proceedings of the AAAI Conference on Artificial Intelligence, AAAI, pages 13001-13008, 2020. 2 \ No newline at end of file diff --git a/2025/ADD_ Attribution-Driven Data Augmentation Framework for Boosting Image Super-Resolution/images.zip b/2025/ADD_ Attribution-Driven Data Augmentation Framework for Boosting Image Super-Resolution/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..82ebdddd24f66ea094c7aa95bdcd2ce6752f9c13 --- /dev/null +++ b/2025/ADD_ Attribution-Driven Data Augmentation Framework for Boosting Image Super-Resolution/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4fa073b906be8d9fe6071873b03a7af85c246fceb6963c98ba034716f41a5372 +size 960829 diff --git a/2025/ADD_ Attribution-Driven Data Augmentation Framework for Boosting Image Super-Resolution/layout.json b/2025/ADD_ Attribution-Driven Data Augmentation Framework for Boosting Image Super-Resolution/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..8730843c59ddf0c6f4a84f87d7479aab6e0a5a6a --- /dev/null +++ b/2025/ADD_ Attribution-Driven Data Augmentation Framework for Boosting Image Super-Resolution/layout.json @@ -0,0 +1,12106 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 66, + 102, + 545, + 140 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 102, + 545, + 140 + ], + "spans": [ + { + "bbox": [ + 66, + 102, + 545, + 140 + ], + "type": "text", + "content": "ADD: Attribution-Driven Data Augmentation Framework for Boosting Image Super-Resolution" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 240, + 161, + 373, + 175 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 240, + 161, + 373, + 175 + ], + "spans": [ + { + "bbox": [ + 240, + 161, + 373, + 175 + ], + "type": "text", + "content": "Ze-Yu Mi Yu-Bin Yang*" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 87, + 175, + 523, + 190 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 87, + 175, + 523, + 190 + ], + "spans": [ + { + "bbox": [ + 87, + 175, + 523, + 190 + ], + "type": "text", + "content": "State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 178, + 192, + 428, + 204 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 178, + 192, + 428, + 204 + ], + "spans": [ + { + "bbox": [ + 178, + 192, + 428, + 204 + ], + "type": "text", + "content": "mizeyu@smail.nju.edu.cn yangyubin@nju.edu.cn" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 151, + 231, + 200, + 243 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 151, + 231, + 200, + 243 + ], + "spans": [ + { + "bbox": [ + 151, + 231, + 200, + 243 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 56, + 256, + 296, + 543 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 256, + 296, + 543 + ], + "spans": [ + { + "bbox": [ + 56, + 256, + 296, + 543 + ], + "type": "text", + "content": "Data augmentation (DA) stands out as a powerful technique to enhance the generalization capabilities of deep neural networks across diverse tasks. However, in low-level vision tasks, DA remains rudimentary (i.e., vanilla DA), facing a critical bottleneck due to information loss. 
In this paper, we introduce a novel Calibrated Attribution Maps (CAM) to generate saliency masks, followed by two saliency-based DA methods—Attribution-Driven Data augmentation (ADD) and " + }, + { + "bbox": [ + 56, + 256, + 296, + 543 + ], + "type": "inline_equation", + "content": "\\text{ADD+}" + }, + { + "bbox": [ + 56, + 256, + 296, + 543 + ], + "type": "text", + "content": "—designed to address this issue. CAM leverages integrated gradients and incorporates two key innovations: a global feature detector and calibrated integrated gradients. Based on CAM and the proposed methods, we have two new insights for low-level vision tasks: (1) increasing pixel diversity, as seen in vanilla DA, can improve performance, and (2) focusing on salient features while minimizing the impact of irrelevant pixels, as seen in saliency-based DA, more effectively enhances model performance. Additionally, we find and highlight the key guiding principle for designing saliency-based DA: a wider spectrum of degradation patterns. Extensive experiments demonstrate the compatibility and consistency of our method, as well as the significant performance improvement across various SR tasks and networks. Our code is available at https://github.com/mizeyu/ADD." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 56, + 567, + 135, + 580 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 567, + 135, + 580 + ], + "spans": [ + { + "bbox": [ + 56, + 567, + 135, + 580 + ], + "type": "text", + "content": "1. Introduction" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 55, + 587, + 295, + 694 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 587, + 295, + 694 + ], + "spans": [ + { + "bbox": [ + 55, + 587, + 295, + 694 + ], + "type": "text", + "content": "Data augmentation (DA) is a fundamental technique for improving the generalization ability of deep neural networks (DNNs). In high-level vision tasks such as image recognition and semantic segmentation, early vanilla approaches like Mixup [35] and CutMix [34] laid the groundwork. Recently, saliency-based DA methods [13, 14, 18, 28, 29] have gained considerable attention, demonstrating superior performance over vanilla counterparts in high-level vision tasks." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 232, + 553, + 327 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 232, + 553, + 327 + ], + "spans": [ + { + "bbox": [ + 313, + 232, + 553, + 327 + ], + "type": "text", + "content": "However, applying saliency-based DA techniques directly to low-level vision tasks remains challenging due to a mismatch in task objectives, inconsistent feature requirements, differences in data characteristics, and inappropriate method designs (see Appendix for further details). These challenges have led to the continued use of vanilla DA techniques, such as rotation, flipping [27], Mixup [8], and Cutblur [33], in low-level vision tasks." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 328, + 553, + 532 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 328, + 553, + 532 + ], + "spans": [ + { + "bbox": [ + 313, + 328, + 553, + 532 + ], + "type": "text", + "content": "In low-level vision tasks [2, 3, 5-7, 10, 12, 15, 16, 19, 25, 31, 36, 40], the use of vanilla DA methods [8, 20, 27, 32, 33] is hindered by the issue of information loss, which can severely degrade the quality of the reconstructed images. As shown in Fig. 
1, vanilla DA techniques often generate images with flat regions and simplistic edges (e.g., a3), which provide minimal support for model learning in tasks like super-resolution (SR), leading to significant loss of crucial information. In contrast, saliency-based DA methods [13, 14, 18, 28, 29] target information-rich regions (e.g., b3), which facilitate better reconstruction performance. The phase spectrum analysis in Fig. 1 (c, d, e) further illustrates that augmented images incorporating saliency information (b3) retain more critical details, alleviating the problem of information loss. This observation motivates the need for new DA techniques that focus on more meaningful regions in low-level tasks." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 533, + 553, + 689 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 533, + 553, + 689 + ], + "spans": [ + { + "bbox": [ + 313, + 533, + 553, + 689 + ], + "type": "text", + "content": "Existing saliency methods, however, are often prone to background noise and struggle to accurately capture important features. To address these limitations, we propose a new attribution method, the Calibrated Attribution Maps (CAM), which is capable of identifying important features without being influenced by background noise. CAM builds upon LAM [9] and integrated gradients (IG) [11, 26], introducing two key innovations: a global feature detector and calibrated integrated gradients. Using CAM, we introduce Attribution-Driven Data augmentation (ADD) techniques, and its enhanced version " + }, + { + "bbox": [ + 313, + 533, + 553, + 689 + ], + "type": "inline_equation", + "content": "\\mathrm{ADD+}" + }, + { + "bbox": [ + 313, + 533, + 553, + 689 + ], + "type": "text", + "content": ", which represent the first attempt to explore saliency in DA from an attribution analysis perspective." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 689, + 553, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 689, + 553, + 714 + ], + "spans": [ + { + "bbox": [ + 313, + 689, + 553, + 714 + ], + "type": "text", + "content": "Through CAM, we derive two key insights for low-level vision tasks: (1) incorporating a broader range of pixels," + } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "spans": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "text", + "content": "CVF" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "spans": [ + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "type": "text", + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 703, + 144, + 713 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 703, + 144, + 713 + ], + "spans": [ + { + "bbox": [ + 67, + 703, + 144, + 713 + ], + "type": "text", + "content": "*Corresponding author." 
+ } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 293, + 748, + 317, + 758 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 317, + 758 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 317, + 758 + ], + "type": "text", + "content": "23101" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 58, + 60, + 156, + 127 + ], + "blocks": [ + { + "bbox": [ + 58, + 60, + 156, + 127 + ], + "lines": [ + { + "bbox": [ + 58, + 60, + 156, + 127 + ], + "spans": [ + { + "bbox": [ + 58, + 60, + 156, + 127 + ], + "type": "image", + "image_path": "9cfe4b07aef406f7a6bdb632db291bd7f959082c28a745dd69279a0b10dacd6a.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 74, + 129, + 136, + 139 + ], + "lines": [ + { + "bbox": [ + 74, + 129, + 136, + 139 + ], + "spans": [ + { + "bbox": [ + 74, + 129, + 136, + 139 + ], + "type": "text", + "content": "a1. Input Image" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 156, + 62, + 253, + 126 + ], + "blocks": [ + { + "bbox": [ + 156, + 62, + 253, + 126 + ], + "lines": [ + { + "bbox": [ + 156, + 62, + 253, + 126 + ], + "spans": [ + { + "bbox": [ + 156, + 62, + 253, + 126 + ], + "type": "image", + "image_path": "82b42d05c83757c35ee88269e2fc72ef657f1e54b232c8de1aa63008149e0b42.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 175, + 129, + 234, + 138 + ], + "lines": [ + { + "bbox": [ + 175, + 129, + 234, + 138 + ], + "spans": [ + { + "bbox": [ + 175, + 129, + 234, + 138 + ], + "type": "text", + "content": "a2. Vanilla DA" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 253, + 62, + 350, + 127 + ], + "blocks": [ + { + "bbox": [ + 253, + 62, + 350, + 127 + ], + "lines": [ + { + "bbox": [ + 253, + 62, + 350, + 127 + ], + "spans": [ + { + "bbox": [ + 253, + 62, + 350, + 127 + ], + "type": "image", + "image_path": "e29336ef1308576ee9111fb70e67a41bc17c845d203b3e55e3335b07c9993406.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 258, + 129, + 347, + 139 + ], + "lines": [ + { + "bbox": [ + 258, + 129, + 347, + 139 + ], + "spans": [ + { + "bbox": [ + 258, + 129, + 347, + 139 + ], + "type": "text", + "content": "a3. Aug. Image of (a2)" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 351, + 62, + 449, + 127 + ], + "blocks": [ + { + "bbox": [ + 351, + 62, + 449, + 127 + ], + "lines": [ + { + "bbox": [ + 351, + 62, + 449, + 127 + ], + "spans": [ + { + "bbox": [ + 351, + 62, + 449, + 127 + ], + "type": "image", + "image_path": "a844a318a7ef23c68787b56b64d2d4a4fed7fc5099662a225f6d378a1bff8d22.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 356, + 129, + 442, + 139 + ], + "lines": [ + { + "bbox": [ + 356, + 129, + 442, + 139 + ], + "spans": [ + { + "bbox": [ + 356, + 129, + 442, + 139 + ], + "type": "text", + "content": "c. Phase Spec. 
of (a3)" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 451, + 75, + 551, + 115 + ], + "blocks": [ + { + "bbox": [ + 451, + 75, + 551, + 115 + ], + "lines": [ + { + "bbox": [ + 451, + 75, + 551, + 115 + ], + "spans": [ + { + "bbox": [ + 451, + 75, + 551, + 115 + ], + "type": "image", + "image_path": "c14b315881d6e50a7e1126fb8a88a351dfffc2332a3e779f812b22a3bd62d9fa.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 58, + 140, + 156, + 205 + ], + "blocks": [ + { + "bbox": [ + 58, + 140, + 156, + 205 + ], + "lines": [ + { + "bbox": [ + 58, + 140, + 156, + 205 + ], + "spans": [ + { + "bbox": [ + 58, + 140, + 156, + 205 + ], + "type": "image", + "image_path": "1050ae2b62ee86c0b84c62947421b066973762ce3380424b0b891935a6c81063.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 69, + 206, + 146, + 217 + ], + "lines": [ + { + "bbox": [ + 69, + 206, + 146, + 217 + ], + "spans": [ + { + "bbox": [ + 69, + 206, + 146, + 217 + ], + "type": "text", + "content": "b1. Saliency Image" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_caption" + } + ], + "index": 9 + }, + { + "type": "image", + "bbox": [ + 157, + 140, + 253, + 205 + ], + "blocks": [ + { + "bbox": [ + 157, + 140, + 253, + 205 + ], + "lines": [ + { + "bbox": [ + 157, + 140, + 253, + 205 + ], + "spans": [ + { + "bbox": [ + 157, + 140, + 253, + 205 + ], + "type": "image", + "image_path": "82d1761302c81b97ff9877bbe53989a67e5674e89bcc3f489a7b05a4fbdefd07.jpg" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 158, + 206, + 249, + 217 + ], + "lines": [ + { + "bbox": [ + 158, + 206, + 249, + 217 + ], + "spans": [ + { + "bbox": [ + 158, + 206, + 249, + 217 + ], + "type": "text", + "content": "b2. Saliency-based DA" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_caption" + } + ], + "index": 11 + }, + { + "type": "image", + "bbox": [ + 253, + 140, + 351, + 205 + ], + "blocks": [ + { + "bbox": [ + 253, + 140, + 351, + 205 + ], + "lines": [ + { + "bbox": [ + 253, + 140, + 351, + 205 + ], + "spans": [ + { + "bbox": [ + 253, + 140, + 351, + 205 + ], + "type": "image", + "image_path": "e372be01e9a42d5c42d638fea638a6ea2afed3aeddd766be787a72305c27a556.jpg" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 256, + 206, + 348, + 217 + ], + "lines": [ + { + "bbox": [ + 256, + 206, + 348, + 217 + ], + "spans": [ + { + "bbox": [ + 256, + 206, + 348, + 217 + ], + "type": "text", + "content": "b3. Aug. Image of (b2)" + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 55, + 226, + 555, + 270 + ], + "lines": [ + { + "bbox": [ + 55, + 226, + 555, + 270 + ], + "spans": [ + { + "bbox": [ + 55, + 226, + 555, + 270 + ], + "type": "text", + "content": "Figure 1. Motivation: Provide more meaningful information for vanilla DA to address the critical bottleneck of information loss. (a1) Input image. (a2) Vanilla DA randomly selects an area. (a3) Augmented image of vanilla DA. (b1) Saliency map of the input image. (b2) Select the maximum saliency region. (b3) Augmented image of saliency-based DA. (c) Phase spectrum of augmented image (a3). (d) Phase spectrum of augmented image (b3). (e) Residual map calculated between (c) and (d)." 
+ } + ] + } + ], + "index": 19, + "angle": 0, + "type": "image_caption" + } + ], + "index": 13 + }, + { + "type": "image", + "bbox": [ + 353, + 140, + 449, + 205 + ], + "blocks": [ + { + "bbox": [ + 353, + 140, + 449, + 205 + ], + "lines": [ + { + "bbox": [ + 353, + 140, + 449, + 205 + ], + "spans": [ + { + "bbox": [ + 353, + 140, + 449, + 205 + ], + "type": "image", + "image_path": "2f586e6390cb193d68c725a516e757c3d711c1c47a297699c96cc4ee831642f5.jpg" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 358, + 206, + 444, + 217 + ], + "lines": [ + { + "bbox": [ + 358, + 206, + 444, + 217 + ], + "spans": [ + { + "bbox": [ + 358, + 206, + 444, + 217 + ], + "type": "text", + "content": "d. Phase Spec. of (b3)" + } + ] + } + ], + "index": 16, + "angle": 0, + "type": "image_caption" + } + ], + "index": 15 + }, + { + "type": "image", + "bbox": [ + 452, + 140, + 551, + 205 + ], + "blocks": [ + { + "bbox": [ + 452, + 140, + 551, + 205 + ], + "lines": [ + { + "bbox": [ + 452, + 140, + 551, + 205 + ], + "spans": [ + { + "bbox": [ + 452, + 140, + 551, + 205 + ], + "type": "image", + "image_path": "3e3804e93aaedbdab3982e38b538a2ea383249b7db05abb0b826828c2dd3d783.jpg" + } + ] + } + ], + "index": 17, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 453, + 206, + 552, + 217 + ], + "lines": [ + { + "bbox": [ + 453, + 206, + 552, + 217 + ], + "spans": [ + { + "bbox": [ + 453, + 206, + 552, + 217 + ], + "type": "text", + "content": "e. Residual Map of d & c" + } + ] + } + ], + "index": 18, + "angle": 0, + "type": "image_caption" + } + ], + "index": 17 + }, + { + "bbox": [ + 54, + 277, + 295, + 372 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 277, + 295, + 372 + ], + "spans": [ + { + "bbox": [ + 54, + 277, + 295, + 372 + ], + "type": "text", + "content": "as in vanilla DA, can improve model performance, and (2) focusing on more influential pixels—using saliency information—while minimizing the inclusion of irrelevant pixels, as in saliency-based DA, results in more effective enhancement. This insight explains why ADD, which utilizes saliency to prioritize important pixels, alleviates information loss and outperforms traditional vanilla DA methods, which indiscriminately incorporate additional pixels." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 55, + 373, + 296, + 457 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 373, + 296, + 457 + ], + "spans": [ + { + "bbox": [ + 55, + 373, + 296, + 457 + ], + "type": "text", + "content": "Furthermore, to design effective saliency-based DA methods in low-level vision tasks, we propose the key guiding principle: a wider spectrum of degradation patterns. Specifically, DA methods should: (1) prefer maintaining continuous boundaries to avoid boundary effects, and (2) prioritize diverse augmentation strategies over single augmented images to provide richer learning signals." 
+ } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 55, + 458, + 120, + 468 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 458, + 120, + 468 + ], + "spans": [ + { + "bbox": [ + 55, + 458, + 120, + 468 + ], + "type": "text", + "content": "Contributions:" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 56, + 471, + 295, + 625 + ], + "type": "list", + "angle": 0, + "index": 27, + "blocks": [ + { + "bbox": [ + 56, + 471, + 295, + 517 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 471, + 295, + 517 + ], + "spans": [ + { + "bbox": [ + 56, + 471, + 295, + 517 + ], + "type": "text", + "content": "- We introduce CAM, a robust attribution method that accurately identifies important features while mitigating background noise, improving saliency map generation in low-level vision tasks." + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 56, + 519, + 295, + 553 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 519, + 295, + 553 + ], + "spans": [ + { + "bbox": [ + 56, + 519, + 295, + 553 + ], + "type": "text", + "content": "- We propose ADD, a novel attribution-driven DA method, which overcomes the limitations of vanilla DA and significantly reduces information loss in low-level tasks." + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 56, + 554, + 295, + 590 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 554, + 295, + 590 + ], + "spans": [ + { + "bbox": [ + 56, + 554, + 295, + 590 + ], + "type": "text", + "content": "- We provide new insights and guiding principles for saliency-based DA in low-level vision, offering a framework for improving model performance." + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 56, + 590, + 295, + 625 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 590, + 295, + 625 + ], + "spans": [ + { + "bbox": [ + 56, + 590, + 295, + 625 + ], + "type": "text", + "content": "- Our extensive experiments validate the effectiveness of our methods, demonstrating significant performance gains across a range of super-resolution tasks." + } + ] + } + ], + "index": 26 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 55, + 639, + 146, + 651 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 639, + 146, + 651 + ], + "spans": [ + { + "bbox": [ + 55, + 639, + 146, + 651 + ], + "type": "text", + "content": "2. Related Works" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 55, + 659, + 192, + 672 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 659, + 192, + 672 + ], + "spans": [ + { + "bbox": [ + 55, + 659, + 192, + 672 + ], + "type": "text", + "content": "2.1. Image Super-Resolution" + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 55, + 677, + 296, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 677, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 677, + 296, + 713 + ], + "type": "text", + "content": "Image super-resolution (SR) is a key technology in computer vision [21, 39]. Since SRCNN [5], numerous CNN-based methods have emerged, incorporating residual [12," + } + ] + } + ], + "index": 30 + }, + { + "bbox": [ + 313, + 277, + 554, + 313 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 277, + 554, + 313 + ], + "spans": [ + { + "bbox": [ + 313, + 277, + 554, + 313 + ], + "type": "text", + "content": "36] and dense blocks [25, 31, 40], as well as attention mechanisms [2, 19]. 
More recently, transformer-based SR models [6, 7, 16] have achieved state-of-the-art performance." + } + ] + } + ], + "index": 31 + }, + { + "bbox": [ + 313, + 323, + 501, + 335 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 323, + 501, + 335 + ], + "spans": [ + { + "bbox": [ + 313, + 323, + 501, + 335 + ], + "type": "text", + "content": "2.2. Data Augmentation in Vision Tasks" + } + ] + } + ], + "index": 32 + }, + { + "bbox": [ + 313, + 340, + 555, + 437 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 340, + 555, + 437 + ], + "spans": [ + { + "bbox": [ + 313, + 340, + 555, + 437 + ], + "type": "text", + "content": "Traditional data augmentation (DA) methods in high-level vision include geometric and intensity transformations [4, 41], as well as mixed-based techniques [34, 35]. Recently, saliency-based DA has gained popularity, focusing on preserving critical regions [13, 14, 18, 28, 29]. In low-level vision, DA remains largely limited to conventional approaches [27]. Some works explore Mixup [8] and Cutblur [33], with CutMIB extending them to light-field SR [32]." + } + ] + } + ], + "index": 33 + }, + { + "bbox": [ + 314, + 449, + 374, + 460 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 449, + 374, + 460 + ], + "spans": [ + { + "bbox": [ + 314, + 449, + 374, + 460 + ], + "type": "text", + "content": "3. Methods" + } + ] + } + ], + "index": 34 + }, + { + "bbox": [ + 313, + 470, + 554, + 565 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 470, + 554, + 565 + ], + "spans": [ + { + "bbox": [ + 313, + 470, + 554, + 565 + ], + "type": "text", + "content": "In this section, we first discuss the key challenges and motivations for incorporating saliency-based data augmentation (DA) in low-level vision tasks in Sec. 3.1. Then, in Sec. 3.2, we briefly review prior attribution methods, which serve as the foundation for our proposed approach. Next, we introduce the concept of Calibrated Attribution Maps (CAM) in Sec. 3.3, followed by the presentation of saliency-based DA techniques, ADD and " + }, + { + "bbox": [ + 313, + 470, + 554, + 565 + ], + "type": "inline_equation", + "content": "\\mathrm{ADD + }" + }, + { + "bbox": [ + 313, + 470, + 554, + 565 + ], + "type": "text", + "content": ", in Sec. 3.4." + } + ] + } + ], + "index": 35 + }, + { + "bbox": [ + 313, + 574, + 523, + 587 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 574, + 523, + 587 + ], + "spans": [ + { + "bbox": [ + 313, + 574, + 523, + 587 + ], + "type": "text", + "content": "3.1. Integrating Saliency into Low-Level DA" + } + ] + } + ], + "index": 36 + }, + { + "bbox": [ + 313, + 593, + 554, + 676 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 593, + 554, + 676 + ], + "spans": [ + { + "bbox": [ + 313, + 593, + 554, + 676 + ], + "type": "text", + "content": "Why introduce saliency into low-level vanilla DA? As discussed in Sec. 1, leveraging saliency provides valuable insights into the most important image features, addressing the problem of information loss. This motivates the integration of saliency information into data augmentation strategies for low-level vision tasks, where preserving critical image details is essential for enhancing performance." 
+ } + ] + } + ], + "index": 37 + }, + { + "bbox": [ + 313, + 677, + 554, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 677, + 554, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 677, + 554, + 713 + ], + "type": "text", + "content": "How to introduce saliency into low-level vanilla DA? Traditional saliency methods, such as LAM and integrated gradient (IG), are limited by the challenge of distinguishing" + } + ] + } + ], + "index": 38 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "text", + "content": "23102" + } + ] + } + ], + "index": 39 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 296, + 156 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 296, + 156 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 296, + 156 + ], + "type": "text", + "content": "relevant features from irrelevant background noise. To overcome this limitation, we build upon LAM and IG and introduce a novel approach, CAM, which incorporates two new components: a global feature detector and a calibrated integrated gradient. These innovations, detailed in Sec. 3.3, effectively address the noise issue and improve the accuracy of saliency estimation in low-level tasks." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 165, + 272, + 178 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 165, + 272, + 178 + ], + "spans": [ + { + "bbox": [ + 55, + 165, + 272, + 178 + ], + "type": "text", + "content": "3.2. Overview of Vanilla Attribution Methods" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 183, + 296, + 280 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 183, + 296, + 280 + ], + "spans": [ + { + "bbox": [ + 55, + 183, + 296, + 280 + ], + "type": "text", + "content": "Before presenting our method, we first provide a brief overview of existing attribution methods that form the foundation for our approach [1, 11, 22-24]. Let " + }, + { + "bbox": [ + 55, + 183, + 296, + 280 + ], + "type": "inline_equation", + "content": "\\mathcal{I} \\in \\mathbb{R}^d" + }, + { + "bbox": [ + 55, + 183, + 296, + 280 + ], + "type": "text", + "content": " be the input image, and let " + }, + { + "bbox": [ + 55, + 183, + 296, + 280 + ], + "type": "inline_equation", + "content": "\\mathcal{C}: \\mathbb{R}^d \\mapsto \\mathbb{R}" + }, + { + "bbox": [ + 55, + 183, + 296, + 280 + ], + "type": "text", + "content": " represent a classification network. 
Gradient-based methods, such as Integrated Gradients (IG), quantify the impact of changes in the input dimensions by computing the gradient of the output with respect to the input image:" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 79, + 289, + 296, + 317 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 79, + 289, + 296, + 317 + ], + "spans": [ + { + "bbox": [ + 79, + 289, + 296, + 317 + ], + "type": "interline_equation", + "content": "\\mathrm {I G} _ {\\mathcal {C}} \\mathcal {I} = (\\mathcal {I} - \\mathcal {I} ^ {\\prime}) \\int_ {0} ^ {1} \\frac {\\partial \\mathcal {C} (\\mathcal {I} ^ {\\prime} + \\alpha (\\mathcal {I} - \\mathcal {I} ^ {\\prime}))}{\\partial \\mathcal {I}} d \\alpha , (1)", + "image_path": "acf27b4be0a8fb3fc65b2480ff2d803aba48a015c699ca69b7bca79471ebaf85.jpg" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 326, + 296, + 361 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 326, + 296, + 361 + ], + "spans": [ + { + "bbox": [ + 55, + 326, + 296, + 361 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 55, + 326, + 296, + 361 + ], + "type": "inline_equation", + "content": "\\mathcal{I}'" + }, + { + "bbox": [ + 55, + 326, + 296, + 361 + ], + "type": "text", + "content": " is a baseline image (often a blank image in high-level tasks) and " + }, + { + "bbox": [ + 55, + 326, + 296, + 361 + ], + "type": "inline_equation", + "content": "\\alpha" + }, + { + "bbox": [ + 55, + 326, + 296, + 361 + ], + "type": "text", + "content": " is a continuous parameter that interpolates between the baseline and the target input." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 363, + 296, + 482 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 363, + 296, + 482 + ], + "spans": [ + { + "bbox": [ + 55, + 363, + 296, + 482 + ], + "type": "text", + "content": "In the context of image SR, Local Attribution Maps (LAM) convert the original baseline image " + }, + { + "bbox": [ + 55, + 363, + 296, + 482 + ], + "type": "inline_equation", + "content": "\\mathcal{I}'" + }, + { + "bbox": [ + 55, + 363, + 296, + 482 + ], + "type": "text", + "content": " to a blurred version " + }, + { + "bbox": [ + 55, + 363, + 296, + 482 + ], + "type": "inline_equation", + "content": "\\mathcal{I}' = \\omega(\\sigma) \\otimes \\mathcal{I}" + }, + { + "bbox": [ + 55, + 363, + 296, + 482 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 55, + 363, + 296, + 482 + ], + "type": "inline_equation", + "content": "\\omega(\\sigma)" + }, + { + "bbox": [ + 55, + 363, + 296, + 482 + ], + "type": "text", + "content": " is a Gaussian blur kernel with width " + }, + { + "bbox": [ + 55, + 363, + 296, + 482 + ], + "type": "inline_equation", + "content": "\\sigma" + }, + { + "bbox": [ + 55, + 363, + 296, + 482 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 55, + 363, + 296, + 482 + ], + "type": "inline_equation", + "content": "\\otimes" + }, + { + "bbox": [ + 55, + 363, + 296, + 482 + ], + "type": "text", + "content": " represents convolution. Besides, LAM adapts IG for SR tasks by using a gradient detection method " + }, + { + "bbox": [ + 55, + 363, + 296, + 482 + ], + "type": "inline_equation", + "content": "D" + }, + { + "bbox": [ + 55, + 363, + 296, + 482 + ], + "type": "text", + "content": " that focuses on local feature detection in SR networks. 
However, LAM suffers from the limitation of irrelevant gradient accumulations, which can easily lead to a focus on irrelevant areas. To address this issue, we introduce CAM to eliminate irrelevant area interference." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 491, + 251, + 505 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 491, + 251, + 505 + ], + "spans": [ + { + "bbox": [ + 55, + 491, + 251, + 505 + ], + "type": "text", + "content": "3.3. Calibrated Attribution Maps (CAM)" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 510, + 295, + 569 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 510, + 295, + 569 + ], + "spans": [ + { + "bbox": [ + 55, + 510, + 295, + 569 + ], + "type": "text", + "content": "In this section, we introduce the concept of Calibrated Attribution Maps (CAM) as shown in Fig. 2 (a), which is inspired by the IG [26] and LAM [9]. The goal of CAM is to provide a more accurate and reliable estimation of feature importance, particularly in the context of image SR tasks." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 570, + 296, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 570, + 296, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 570, + 296, + 714 + ], + "type": "text", + "content": "Global Feature Detector (GD). Given an input image pair " + }, + { + "bbox": [ + 55, + 570, + 296, + 714 + ], + "type": "inline_equation", + "content": "\\mathcal{I}^{LR}" + }, + { + "bbox": [ + 55, + 570, + 296, + 714 + ], + "type": "text", + "content": " (low resolution) and " + }, + { + "bbox": [ + 55, + 570, + 296, + 714 + ], + "type": "inline_equation", + "content": "\\mathcal{I}^{HR}" + }, + { + "bbox": [ + 55, + 570, + 296, + 714 + ], + "type": "text", + "content": " (high resolution), we aim to learn a mapping function " + }, + { + "bbox": [ + 55, + 570, + 296, + 714 + ], + "type": "inline_equation", + "content": "\\mathcal{F}" + }, + { + "bbox": [ + 55, + 570, + 296, + 714 + ], + "type": "text", + "content": " that produces the super-resolved image " + }, + { + "bbox": [ + 55, + 570, + 296, + 714 + ], + "type": "inline_equation", + "content": "\\mathcal{I}^{SR}" + }, + { + "bbox": [ + 55, + 570, + 296, + 714 + ], + "type": "text", + "content": ". Traditional attribution methods such as LAM detect local pixel-wise gradients, which can easily lead to saturation effects, as shown in Fig. 2 (d). To alleviate this problem, we introduce a Global Feature Detector (GD), which aims to capture global features in the image by applying convolutional filters such as the Sobel filter. This approach smoothes the detected gradients and enhances the robustness of saliency maps. 
To achieve a more robust global feature representation, the GD operation is defined" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 315, + 75, + 329, + 83 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 75, + 329, + 83 + ], + "spans": [ + { + "bbox": [ + 315, + 75, + 329, + 83 + ], + "type": "text", + "content": "as:" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 364, + 83, + 553, + 97 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 364, + 83, + 553, + 97 + ], + "spans": [ + { + "bbox": [ + 364, + 83, + 553, + 97 + ], + "type": "interline_equation", + "content": "\\boldsymbol {G} \\boldsymbol {D} (\\mathcal {I} ^ {S R}) = \\| \\boldsymbol {S o b e l} _ {x y} (\\mathcal {I} ^ {S R}) \\| _ {2}, \\tag {2}", + "image_path": "bc732dfc148e30858c8988b65b22e1645af888f5aab0b624ad8143d957a3b832.jpg" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 102, + 553, + 148 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 102, + 553, + 148 + ], + "spans": [ + { + "bbox": [ + 313, + 102, + 553, + 148 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 102, + 553, + 148 + ], + "type": "inline_equation", + "content": "Sobel_{xy}" + }, + { + "bbox": [ + 313, + 102, + 553, + 148 + ], + "type": "text", + "content": " denotes the Sobel filter applied in both the " + }, + { + "bbox": [ + 313, + 102, + 553, + 148 + ], + "type": "inline_equation", + "content": "x" + }, + { + "bbox": [ + 313, + 102, + 553, + 148 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 102, + 553, + 148 + ], + "type": "inline_equation", + "content": "y" + }, + { + "bbox": [ + 313, + 102, + 553, + 148 + ], + "type": "text", + "content": " directions to capture edge features in the image. This approach smooths the gradients and reduces the saturation problem, as demonstrated in Fig. 2 (d) & (g)." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 149, + 555, + 257 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 149, + 555, + 257 + ], + "spans": [ + { + "bbox": [ + 313, + 149, + 555, + 257 + ], + "type": "text", + "content": "To analyze the attributes of the SR network, given the current input image " + }, + { + "bbox": [ + 313, + 149, + 555, + 257 + ], + "type": "inline_equation", + "content": "\\mathcal{I}" + }, + { + "bbox": [ + 313, + 149, + 555, + 257 + ], + "type": "text", + "content": ", a baseline image " + }, + { + "bbox": [ + 313, + 149, + 555, + 257 + ], + "type": "inline_equation", + "content": "\\mathcal{I}'" + }, + { + "bbox": [ + 313, + 149, + 555, + 257 + ], + "type": "text", + "content": " satisfies that " + }, + { + "bbox": [ + 313, + 149, + 555, + 257 + ], + "type": "inline_equation", + "content": "\\mathcal{F}(\\mathcal{I}')" + }, + { + "bbox": [ + 313, + 149, + 555, + 257 + ], + "type": "text", + "content": " absent certain features existed in " + }, + { + "bbox": [ + 313, + 149, + 555, + 257 + ], + "type": "inline_equation", + "content": "\\mathcal{F}(\\mathcal{I})" + }, + { + "bbox": [ + 313, + 149, + 555, + 257 + ], + "type": "text", + "content": " is also needed. 
Accordingly, feature scalar " + }, + { + "bbox": [ + 313, + 149, + 555, + 257 + ], + "type": "inline_equation", + "content": "GD(\\mathcal{F}(\\mathcal{I}))" + }, + { + "bbox": [ + 313, + 149, + 555, + 257 + ], + "type": "text", + "content": " will show significant numerical advantage over " + }, + { + "bbox": [ + 313, + 149, + 555, + 257 + ], + "type": "inline_equation", + "content": "GD(\\mathcal{F}(\\mathcal{I}'))" + }, + { + "bbox": [ + 313, + 149, + 555, + 257 + ], + "type": "text", + "content": ". We calculate the path-integrated gradient along the gradually changing path from " + }, + { + "bbox": [ + 313, + 149, + 555, + 257 + ], + "type": "inline_equation", + "content": "\\mathcal{I}'" + }, + { + "bbox": [ + 313, + 149, + 555, + 257 + ], + "type": "text", + "content": " to " + }, + { + "bbox": [ + 313, + 149, + 555, + 257 + ], + "type": "inline_equation", + "content": "\\mathcal{I}" + }, + { + "bbox": [ + 313, + 149, + 555, + 257 + ], + "type": "text", + "content": " and obtain the attribution map for " + }, + { + "bbox": [ + 313, + 149, + 555, + 257 + ], + "type": "inline_equation", + "content": "GD(\\mathcal{F}(\\mathcal{I}))" + }, + { + "bbox": [ + 313, + 149, + 555, + 257 + ], + "type": "text", + "content": ". Then, the " + }, + { + "bbox": [ + 313, + 149, + 555, + 257 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 313, + 149, + 555, + 257 + ], + "type": "text", + "content": "th dimension of the calibrated attribution maps is defined as follows:" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 327, + 263, + 553, + 287 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 327, + 263, + 553, + 287 + ], + "spans": [ + { + "bbox": [ + 327, + 263, + 553, + 287 + ], + "type": "interline_equation", + "content": "\\phi_ {i} ^ {C A M} (\\mathcal {F}, \\boldsymbol {G D}) = \\left(\\mathcal {I} _ {i} - \\mathcal {I} _ {i} ^ {\\prime}\\right) \\int_ {a = 0} ^ {1} \\frac {\\partial \\boldsymbol {G D} \\left(\\boldsymbol {F} \\left(\\mathcal {I} ^ {\\prime} + \\alpha \\left(\\mathcal {I} - \\mathcal {I} ^ {\\prime}\\right)\\right)\\right)}{\\partial \\mathcal {I} _ {i}} d \\alpha . \\tag {3}", + "image_path": "f31604fb2098a8b919724ed342f19b201f9202857691c80cb8fdd359ac646702.jpg" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 293, + 555, + 469 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 293, + 555, + 469 + ], + "spans": [ + { + "bbox": [ + 313, + 293, + 555, + 469 + ], + "type": "text", + "content": "Calibrated Path Integrated Gradient (CPIG). We now introduce the Calibrated Path Integrated Gradient (CPIG), which can efficiently and effectively analyze global attribution. In SR tasks, the high-frequency components (e.g., edges and textures) contribute much more than the low-frequency components (e.g., color and brightness) to the network performance. In this work, we obtain baseline inputs by eliminating high-frequency components, setting them as the blurred version of LR images denoted as " + }, + { + "bbox": [ + 313, + 293, + 555, + 469 + ], + "type": "inline_equation", + "content": "\\mathcal{I}' = \\omega(\\sigma) \\otimes \\mathcal{I}" + }, + { + "bbox": [ + 313, + 293, + 555, + 469 + ], + "type": "text", + "content": ". 
Here, " + }, + { + "bbox": [ + 313, + 293, + 555, + 469 + ], + "type": "inline_equation", + "content": "\\omega(\\sigma)" + }, + { + "bbox": [ + 313, + 293, + 555, + 469 + ], + "type": "text", + "content": " represents the Gaussian blur kernel parameterized by the kernel width " + }, + { + "bbox": [ + 313, + 293, + 555, + 469 + ], + "type": "inline_equation", + "content": "\\sigma" + }, + { + "bbox": [ + 313, + 293, + 555, + 469 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 293, + 555, + 469 + ], + "type": "inline_equation", + "content": "\\otimes" + }, + { + "bbox": [ + 313, + 293, + 555, + 469 + ], + "type": "text", + "content": " is the convolution operation. Following previous works, we construct a smooth transformation from " + }, + { + "bbox": [ + 313, + 293, + 555, + 469 + ], + "type": "inline_equation", + "content": "\\mathcal{I}'" + }, + { + "bbox": [ + 313, + 293, + 555, + 469 + ], + "type": "text", + "content": " to " + }, + { + "bbox": [ + 313, + 293, + 555, + 469 + ], + "type": "inline_equation", + "content": "\\mathcal{I}" + }, + { + "bbox": [ + 313, + 293, + 555, + 469 + ], + "type": "text", + "content": ", which is expressed as " + }, + { + "bbox": [ + 313, + 293, + 555, + 469 + ], + "type": "inline_equation", + "content": "\\gamma(a) = \\omega(\\sigma - \\alpha\\sigma) \\otimes \\mathcal{I}" + }, + { + "bbox": [ + 313, + 293, + 555, + 469 + ], + "type": "text", + "content": ". Accordingly, we have " + }, + { + "bbox": [ + 313, + 293, + 555, + 469 + ], + "type": "inline_equation", + "content": "\\gamma(0) = \\mathcal{I}'" + }, + { + "bbox": [ + 313, + 293, + 555, + 469 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 293, + 555, + 469 + ], + "type": "inline_equation", + "content": "\\gamma(1) = \\mathcal{I}" + }, + { + "bbox": [ + 313, + 293, + 555, + 469 + ], + "type": "text", + "content": ". 
The gradients at points are sampled in " + }, + { + "bbox": [ + 313, + 293, + 555, + 469 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 313, + 293, + 555, + 469 + ], + "type": "text", + "content": " steps along the path and the gradient of the " + }, + { + "bbox": [ + 313, + 293, + 555, + 469 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 313, + 293, + 555, + 469 + ], + "type": "text", + "content": "-th step is:" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 359, + 474, + 553, + 488 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 359, + 474, + 553, + 488 + ], + "spans": [ + { + "bbox": [ + 359, + 474, + 553, + 488 + ], + "type": "interline_equation", + "content": "\\phi_ {i} ^ {C A M} (\\mathcal {F}, \\boldsymbol {G D}, \\gamma) = \\tag {4}", + "image_path": "f24f65a0b04a7f8c51a5665dc30c19ee92d427c26fbe9492ed21e5194217048f.jpg" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 361, + 491, + 516, + 517 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 361, + 491, + 516, + 517 + ], + "spans": [ + { + "bbox": [ + 361, + 491, + 516, + 517 + ], + "type": "interline_equation", + "content": "\\left(\\gamma \\left(\\frac {i}{k}\\right) - \\gamma \\left(\\frac {i + 1}{k}\\right)\\right) \\times \\frac {\\partial G D \\left(\\mathcal {F} \\left(\\gamma \\left(\\frac {i}{k}\\right)\\right)\\right)}{\\partial \\gamma \\left(\\frac {i}{k}\\right)} d \\alpha .", + "image_path": "caea17881979c1f4cb5dc4b48d17a3299b2ca807be968ecc2a8e4ffa71f9afb8.jpg" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 313, + 522, + 554, + 605 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 522, + 554, + 605 + ], + "spans": [ + { + "bbox": [ + 313, + 522, + 554, + 605 + ], + "type": "text", + "content": "To improve the stability and accuracy of the path-integrated gradients, we introduce a calibration step that limits the deviation at each step and ensures that the gradients focus on the most relevant features of the image. This is achieved by limiting the range of each step to fluctuate around a central value, " + }, + { + "bbox": [ + 313, + 522, + 554, + 605 + ], + "type": "inline_equation", + "content": "a_{\\mathrm{min}} = \\max (a - d,0.0)" + }, + { + "bbox": [ + 313, + 522, + 554, + 605 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 522, + 554, + 605 + ], + "type": "inline_equation", + "content": "a_{\\mathrm{max}} = \\min (a + d,1.0)" + }, + { + "bbox": [ + 313, + 522, + 554, + 605 + ], + "type": "text", + "content": "." 
Attribution analysis progressively adjusts the value of each pixel to make the interpolated image $\mathcal{I}'$ gradually approach the target image $\mathcal{I}$, and the overall loss function can be defined as $\mathcal{L}_{OA} = \|\mathcal{I}' - \mathcal{I}\|_1$. Correspondingly, the target loss for approximating the target image with the interpolated image at step $i$ is $\mathcal{L}_{TG} = \|\mathcal{I}' - \mathcal{I}\|_1 \times (1 - \frac{i}{k})$. The difference between the actual current interpolated image and the target image can then be represented by the current loss $\mathcal{L}_{CU} = \|\gamma(\frac{i}{k}) - \mathcal{I}\|_1$.
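The three losses in one helper, reusing the `gamma` sketch above; the sum-reduced $\ell_1$ norm is an assumption:

```python
def attribution_losses(I_prime, I, gamma, i: int, k: int):
    """L_OA, L_TG, and L_CU from the pacing scheme above."""
    l_oa = (I_prime - I).abs().sum()             # overall loss
    l_tg = l_oa * (1.0 - i / k)                  # target loss, decays to 0
    l_cu = (gamma(I, i / k) - I).abs().sum()     # current loss
    return l_oa, l_tg, l_cu
```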
Figure 2. The illustration of CAM, the ADD framework, and the CAM-LAM comparison. (a) Illustration of CAM. (b) Process of ADD and ADD+. (c) Comparison of global attribution analysis between CAM and LAM, with red boxes highlighting regions where gradients are present after global attribution analysis.

To further prevent instability caused by large gradient changes and to maintain the smoothness of the path, we select only those pixels with gradient magnitudes below a predefined threshold $T_f$ for updating:

$$T_f = \operatorname{sorted}\left(\left|\phi_i^{CAM}\right|\right)^{\min}_{[p_f \cdot \text{num.pixels}]},$$

$$M_f = \begin{cases} 1, & \text{if } |\phi_i^{CAM}| \leq T_f, \\ 0, & \text{otherwise}, \end{cases} \tag{5}$$

where $M_f$ is a binary mask marking the pixels that need to be calibrated.
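A sketch of Eq. (5); the kept fraction `p_f` is a hyperparameter whose value the excerpt does not give, and reading the sorted-index notation as an ascending quantile is an assumption:

```python
import torch

def fluctuation_mask(phi: torch.Tensor, p_f: float = 0.9) -> torch.Tensor:
    """M_f = 1 where |phi_i^{CAM}| lies at or below the threshold T_f."""
    mags = phi.abs().flatten().sort().values     # ascending magnitudes
    kth = min(max(int(p_f * mags.numel()) - 1, 0), mags.numel() - 1)
    t_f = mags[kth]                              # T_f
    return (phi.abs() <= t_f).float()
```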
Based on the difference between the current loss $\mathcal{L}_{CU}$ and the target loss $\mathcal{L}_{TG}$, combined with the mask threshold loss $\mathcal{L}_{MF}$ between the pixels to be corrected and the target image, we generate a calibration factor $\delta$ that controls the step size of each update:

$$\mathcal{L}_{MF} = \left\| M_f \odot \left(\gamma(a) - \gamma(a_{\max})\right) \right\|_1,$$

$$\delta = \frac{\mathcal{L}_{CU} - \mathcal{L}_{TG}}{\mathcal{L}_{MF}}. \tag{6}$$

With the calibration factor $\delta$, the calibrated interpolation image $\gamma_c(a)$ can be formulated as $\gamma_c(a) = \gamma(a + \delta \times (a_{\max} - a))$.
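Eq. (6) with the calibrated resampling, following the definitions above; the `eps` guard and the clamp on $\delta$ are defensive assumptions, not steps stated in the paper:

```python
def calibrate(gamma, I, m_f, l_cu, l_tg, a: float, a_max: float,
              eps: float = 1e-8):
    """delta = (L_CU - L_TG) / L_MF; resample the path at the
    calibrated position a + delta * (a_max - a)."""
    l_mf = (m_f * (gamma(I, a) - gamma(I, a_max))).abs().sum() + eps  # L_MF
    delta = ((l_cu - l_tg) / l_mf).clamp(0.0, 1.0)
    a_cal = a + float(delta) * (a_max - a)
    return gamma(I, a_cal), a_cal                # gamma_c(a)
```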
Accordingly, the calibrated gradient $\psi_i^{CAM}$ at step $i$ can be updated from the calibrated interpolation image $\gamma_c(a)$ as $\psi_i^{CAM} = \phi_i^{CAM} + (\gamma_c(a) - \gamma(a)) \times \phi_i^{CAM}$. Finally, we obtain the approximate integrated gradient by summing the calibrated gradients of the $k$ steps:

$$\mathcal{I}_s^{LR} = \sum_{i=0}^{k} \psi_i^{CAM}. \tag{7}$$

As shown in Fig. 2 (f) and (i), our calibrated attribution maps focus on the most important edge texture information compared to LAM, and are not affected by the noise of flat areas and background.
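Putting Eqs. (4) through (7) together, a sketch of the full loop that produces the saliency map $\mathcal{I}_s^{LR}$; the step count `k` and deviation radius `d` are illustrative, and the helpers are the sketches above, not the authors' code:

```python
import torch

def cam_saliency(model, grad_detector, gamma, I, k: int = 50, d: float = 0.05):
    """Accumulate calibrated step gradients psi_i^{CAM} into I_s^{LR} (Eq. 7)."""
    saliency = torch.zeros_like(I)
    for i in range(k + 1):
        a = i / k
        a_max = min(a + d, 1.0)
        phi = step_gradient(model, grad_detector, gamma, I, i, k)   # Eq. (4)
        m_f = fluctuation_mask(phi)                                 # Eq. (5)
        _, l_tg, l_cu = attribution_losses(gamma(I, 0.0), I, gamma, i, k)
        x_cal, _ = calibrate(gamma, I, m_f, l_cu, l_tg, a, a_max)   # Eq. (6)
        psi = phi + (x_cal - gamma(I, a)) * phi                     # calibrated gradient
        saliency = saliency + psi
    return saliency
```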
Let " + }, + { + "bbox": [ + 313, + 512, + 555, + 620 + ], + "type": "inline_equation", + "content": "\\mathcal{I}^{LR}\\in R^{H\\times W\\times C}" + }, + { + "bbox": [ + 313, + 512, + 555, + 620 + ], + "type": "text", + "content": " be the input LR image, and the corresponding saliency map " + }, + { + "bbox": [ + 313, + 512, + 555, + 620 + ], + "type": "inline_equation", + "content": "\\mathcal{I}_s^{LR}" + }, + { + "bbox": [ + 313, + 512, + 555, + 620 + ], + "type": "text", + "content": " can be obtained with Eq. (7). To accurately obtain the region of maximum saliency, we follow the principle (1) of continuous boundaries in Sec. 3.5 and select the maximum saliency pixels with a proportion of " + }, + { + "bbox": [ + 313, + 512, + 555, + 620 + ], + "type": "inline_equation", + "content": "p" + }, + { + "bbox": [ + 313, + 512, + 555, + 620 + ], + "type": "text", + "content": " and obtain the corresponding irregularly shaped patch:" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 359, + 628, + 485, + 645 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 359, + 628, + 485, + 645 + ], + "spans": [ + { + "bbox": [ + 359, + 628, + 485, + 645 + ], + "type": "interline_equation", + "content": "T _ {p} = \\operatorname {s o r t e d} \\left(\\mathcal {I} _ {s} ^ {L R}\\right) _ {\\lceil p \\cdot \\text {n u m . p i x e l s} \\rceil} ^ {m a x},", + "image_path": "a426cd8c6e123010660051c2c3056994969c2b0ffd951ed72387134c0b4c4393.jpg" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 360, + 647, + 553, + 679 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 360, + 647, + 553, + 679 + ], + "spans": [ + { + "bbox": [ + 360, + 647, + 553, + 679 + ], + "type": "interline_equation", + "content": "M (i, j) = \\left\\{ \\begin{array}{l l} 1, & \\text {i f} \\mathcal {I} _ {s} ^ {L R} (i, j) \\geq T _ {p}, \\\\ 0, & \\text {o t h e r w i s e}, \\end{array} \\right. 
where $T_p$ represents the maximum saliency value of the top $p$ proportion in $\mathcal{I}_s^{LR}$ and $M \in \{0,1\}^{H \times W}$ is a binary mask indicating the area to be cut. Following the second principle (2) of diverse augmented images in Sec. 3.5, we use the mask $M$ to cut the patch to the corresponding position on another image and combine it with different DA strategies. Given the LR-HR image pair $\{\mathcal{I}_i^{LR}, \mathcal{I}_i^{HR}\}$ and the corresponding binary mask $M$, the augmentation process is explained below.
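A sketch of the top-$p$ mask in Eq. (8); the value of the proportion `p` is illustrative:

```python
import torch

def saliency_cut_mask(sal: torch.Tensor, p: float = 0.2) -> torch.Tensor:
    """M = 1 on the top-p proportion of saliency values, an irregular patch."""
    k = max(int(p * sal.numel()), 1)
    t_p = sal.flatten().topk(k).values.min()   # T_p: smallest of the top-p values
    return (sal >= t_p).float()
```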
ADD. We first adopt a mixed strategy to generate augmented input images, enabling the model to learn richer and more complex degradation patterns. We cut the patch, mix it with the patch from another LR-HR image pair $\{\mathcal{I}_j^{LR}, \mathcal{I}_j^{HR}\}$, and generate new training samples:

$$P_{mix}^{LR} = \lambda \times M \odot \mathcal{I}_i^{LR} + (1-\lambda) \times M \odot \mathcal{I}_j^{LR}, \tag{9}$$
$$P_{mix}^{HR} = \lambda \times M \odot \mathcal{I}_i^{HR} + (1-\lambda) \times M \odot \mathcal{I}_j^{HR},$$

$$\mathcal{I}_i^{LR} = P_{mix}^{LR} + (1-M) \odot \mathcal{I}_j^{LR}, \tag{10}$$
$$\mathcal{I}_i^{HR} = P_{mix}^{HR} + (1-M) \odot \mathcal{I}_j^{HR}.$$
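A sketch of the mixed strategy in Eqs. (9) and (10), assuming the mask has already been resized to each image's resolution and that the mixing weight $\lambda$ is sampled externally (e.g., from a Beta distribution; the excerpt does not specify):

```python
def add_mix(lr_i, hr_i, lr_j, hr_j, M_lr, M_hr, lam: float = 0.7):
    """Blend the salient patch of pair i with pair j, then paste (Eqs. 9-10)."""
    p_lr = lam * M_lr * lr_i + (1.0 - lam) * M_lr * lr_j   # P_mix^{LR}
    p_hr = lam * M_hr * hr_i + (1.0 - lam) * M_hr * hr_j   # P_mix^{HR}
    return p_lr + (1.0 - M_lr) * lr_j, p_hr + (1.0 - M_hr) * hr_j
```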
We cut the patch " + }, + { + "bbox": [ + 55, + 308, + 295, + 369 + ], + "type": "inline_equation", + "content": "P_{i}^{LR}" + }, + { + "bbox": [ + 55, + 308, + 295, + 369 + ], + "type": "text", + "content": " by " + }, + { + "bbox": [ + 55, + 308, + 295, + 369 + ], + "type": "inline_equation", + "content": "M \\odot \\mathcal{I}_{i}^{LR}" + }, + { + "bbox": [ + 55, + 308, + 295, + 369 + ], + "type": "text", + "content": " and upsample it by scale " + }, + { + "bbox": [ + 55, + 308, + 295, + 369 + ], + "type": "inline_equation", + "content": "s" + }, + { + "bbox": [ + 55, + 308, + 295, + 369 + ], + "type": "text", + "content": " with bicubic kernel, get " + }, + { + "bbox": [ + 55, + 308, + 295, + 369 + ], + "type": "inline_equation", + "content": "P_{i}^{LR(s \\times \\uparrow)}" + }, + { + "bbox": [ + 55, + 308, + 295, + 369 + ], + "type": "text", + "content": ". The HR patch can be generated similarly and we can get augmented samples as:" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 77, + 388, + 295, + 411 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 77, + 388, + 295, + 411 + ], + "spans": [ + { + "bbox": [ + 77, + 388, + 295, + 411 + ], + "type": "interline_equation", + "content": "\\hat {\\mathcal {I}} _ {i} ^ {L R \\rightarrow H R} = P _ {i} ^ {L R (s \\times \\uparrow)} + (\\mathbf {1} - M) \\odot \\mathcal {I} _ {i} ^ {H R}, \\tag {11}", + "image_path": "26c1d6b914623ac44a09f95c973cb3647b4d30eac21592c1d013704886235bd4.jpg" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 78, + 407, + 256, + 422 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 78, + 407, + 256, + 422 + ], + "spans": [ + { + "bbox": [ + 78, + 407, + 256, + 422 + ], + "type": "interline_equation", + "content": "\\hat {\\mathcal {I}} _ {i} ^ {H R \\rightarrow L R} = P _ {i} ^ {H R (s \\times \\downarrow)} + (\\mathbf {1} - M) \\odot \\mathcal {I} _ {i} ^ {L R}.", + "image_path": "0fe156068a51d2748c98924fac2c47cf4f4684718f0a52d211cf0a277dfa8ae7.jpg" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 55, + 432, + 295, + 494 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 432, + 295, + 494 + ], + "spans": [ + { + "bbox": [ + 55, + 432, + 295, + 494 + ], + "type": "inline_equation", + "content": "\\mathbf{ADD}+" + }, + { + "bbox": [ + 55, + 432, + 295, + 494 + ], + "type": "text", + "content": ". To validate the efficacy of saliency in mixed enhancement and surpass the performance limits, we proposed the method with an enhanced version. 
ADD+. To validate the efficacy of saliency in mixed augmentation and to push the performance limit further, we propose an enhanced version. We additionally generate a pair of new training samples with another LR-HR image pair $\{\mathcal{I}_j^{LR}, \mathcal{I}_j^{HR}\}$:

$$\mathcal{I}_i^{LR} = M \odot \mathcal{I}_i^{LR} + (1-M) \odot \mathcal{I}_j^{LR}, \tag{12}$$
$$\mathcal{I}_i^{HR} = M \odot \mathcal{I}_i^{HR} + (1-M) \odot \mathcal{I}_j^{HR},$$

where $\odot$ denotes the element-wise Hadamard product. Following previous work [33], in each training iteration of ADD+, each of the above augmentation methods and the traditional augmentations (e.g., color, channel) is applied with probability $p$ to the input image.
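A sketch of ADD+'s direct saliency-guided cut-and-paste (Eq. 12), plus the probabilistic application of the augmentation pool; the pool composition and the value of `p` are illustrative:

```python
import random

def add_plus(lr_i, hr_i, lr_j, hr_j, M_lr, M_hr):
    """Paste the salient patch of pair i onto pair j (Eq. 12)."""
    return (M_lr * lr_i + (1.0 - M_lr) * lr_j,
            M_hr * hr_i + (1.0 - M_hr) * hr_j)

def apply_pool(sample, augmentations, p: float = 0.5):
    """Each augmentation fires independently with probability p per iteration."""
    for aug in augmentations:
        if random.random() < p:
            sample = aug(sample)
    return sample
```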
3.5. Discussions

Incorporate saliency into vanilla DA. Incorporating saliency into existing vanilla DA methods revolves around two key aspects: patch cutting and pasting. Consequently, two fundamental questions arise: (1) In what manner should the source image be segmented? (2) Where should the cut patches be pasted? To address these questions, we conduct a comprehensive analysis and reveal the key principle: a wider spectrum of degradation patterns. Specifically, this means (1) continuous boundaries rather than abrupt boundaries, and (2) diverse augmented images over a single augmented image. The results and analysis of saliency in DA methods are presented in Tab. 1 and further elaborated in the experiments (see Sec. 4.4).

Table 1. Quantitative PSNR comparison of various saliency incorporation methods in super-resolution. $Patch_{n \times n}$ denotes image division into $n \times n$ patches. $X2Y$ indicates cutting a patch from area $X$ and pasting it onto region $Y$ in another image, where 'Sa' stands for 'Saliency', 'Non' for 'Non-saliency', 'Ce' for 'Center', and 'Cor' for 'Corresponding'.
| Method | Scale | Types of Saliency Utilization | DIV2K PSNR | Δ | RealSR PSNR | Δ |
|---|---|---|---|---|---|---|
| EDSR | ×4 | baseline | 29.21 | - | 28.86 | - |
| Patch1×1 | ×4 | Granularity | 29.28 | +0.07 | 29.10 | +0.24 |
| Patch2×2 | ×4 | Granularity | 29.26 | +0.05 | 29.07 | +0.21 |
| Patch3×3 | ×4 | Granularity | 29.24 | +0.03 | 29.02 | +0.16 |
| Patch4×4 | ×4 | Granularity | 29.22 | +0.01 | 28.91 | +0.05 |
| Patch5×5 | ×4 | Granularity | 29.14 | -0.07 | 28.75 | -0.11 |
| Patch6×6 | ×4 | Granularity | 29.07 | -0.14 | 28.61 | -0.25 |
| Patch7×7 | ×4 | Granularity | 28.92 | -0.29 | 28.45 | -0.41 |
| Sa2Cor | ×4 | Diversity | 29.27 | +0.06 | 29.11 | +0.25 |
| Ce2Ce | ×4 | Diversity | 29.26 | +0.05 | 29.08 | +0.22 |
| Sa2Sa | ×4 | Diversity | 29.21 | +0.00 | 28.89 | +0.03 |
| Sa2Non | ×4 | Diversity | 29.24 | +0.03 | 28.97 | +0.11 |
| Non2Sa | ×4 | Diversity | 29.19 | -0.02 | 28.79 | -0.07 |
| Non2Non | ×4 | Diversity | 29.16 | -0.05 | 28.76 | -0.10 |
| ADD+ | ×4 | Granularity & Diversity | 29.32 | +0.11 | 29.14 | +0.28 |

4. Experiments

4.1. Preliminaries

Network structures. We adopt several advanced and typical SR networks to verify the effectiveness and compatibility of our ADD and the proposed three new DA strategies. We consider the CNN-based methods RCAN [38] and EDSR [17], whose well-designed structures are proven effective on SR tasks. The transformer-based method SwinIR [16] is also adopted in our experiments.

Datasets and implementation details. We use the DIV2K and RealSR datasets for training. We select ten images (index 0801-0810) from the DIV2K validation set for validation during training. For evaluation, we use six benchmark datasets, including Set5, Set14, BSD100, Urban100, Manga109, and the test sets of RealSR. We keep the hyperparameters (e.g., learning rate, batch size) the same as reported in the original papers. All experiments are conducted using PyTorch on NVIDIA V100 GPUs.

4.2. Comparison of Interpretation Capability

We conduct visualization experiments to evaluate the effectiveness of the proposed CAM.
As shown in the left part of Fig. 3, the blue arrow highlights important regions, while the red arrow points to areas considered background noise. CAM accurately identifies relevant information, demonstrating robustness to background noise, unlike LAM.

Figure 3. Saliency maps (left) and Insertion/Deletion curves (right) on DIV2K and other super-resolution sets. The blue arrow indicates that the proposed CAM accurately reflects important and intuitive content without being affected by background noise, while the red arrow indicates that LAM is affected by background noise and produces undesired attribution results.

Table 2. Quantitative PSNR (dB) comparison of our ADD and existing DA methods in SR on the DIV2K and RealSR datasets. Δ denotes the performance gap.
| Method | Scale | Training Set | DIV2K PSNR | Δ | Training Set | RealSR PSNR | Δ |
|---|---|---|---|---|---|---|---|
| RCAN | ×4 | DIV2K | 29.22 | - | RealSR | 29.20 | - |
| + CutMix | ×4 | DIV2K | 29.24 | +0.02 | RealSR | 29.25 | +0.05 |
| + CutMixup | ×4 | DIV2K | 29.28 | +0.06 | RealSR | 29.30 | +0.10 |
| + CutBlur | ×4 | DIV2K | 29.25 | +0.03 | RealSR | 29.29 | +0.09 |
| + ADD | ×4 | DIV2K | 29.32 | +0.10 | RealSR | 29.34 | +0.14 |
| + ADD+ | ×4 | DIV2K | 29.36 | +0.14 | RealSR | 29.46 | +0.26 |
| EDSR | ×4 | DIV2K | 29.21 | - | RealSR | 28.86 | - |
| + CutMix | ×4 | DIV2K | 29.22 | +0.01 | RealSR | 28.90 | +0.04 |
| + CutMixup | ×4 | DIV2K | 29.26 | +0.05 | RealSR | 28.97 | +0.11 |
| + CutBlur | ×4 | DIV2K | 29.25 | +0.04 | RealSR | 28.94 | +0.08 |
| + ADD | ×4 | DIV2K | 29.30 | +0.09 | RealSR | 29.01 | +0.15 |
| + ADD+ | ×4 | DIV2K | 29.32 | +0.11 | RealSR | 29.14 | +0.28 |
| SwinIR | ×4 | DIV2K | 29.40 | - | RealSR | 29.26 | - |
| + CutMix | ×4 | DIV2K | 29.40 | +0.00 | RealSR | 29.29 | +0.03 |
| + CutMixup | ×4 | DIV2K | 29.43 | +0.03 | RealSR | 29.34 | +0.08 |
| + CutBlur | ×4 | DIV2K | 29.43 | +0.03 | RealSR | 29.32 | +0.06 |
| + ADD | ×4 | DIV2K | 29.46 | +0.06 | RealSR | 29.37 | +0.11 |
| + ADD+ | ×4 | DIV2K | 29.48 | +0.08 | RealSR | 29.43 | +0.17 |

Following established protocols [30, 37], we perform Insertion and Deletion tests, as shown in the right part of Fig. 3. In the Insertion test, a progressively increasing fraction (3.6% per step) of pixels from the high-resolution (HR) image is inserted into the super-resolved image, guided by the pixel-importance values in the attribution map, until the reconstructed image closely matches the HR image. In the Deletion test, 3.6% of the pixels in the HR image, starting from those with the highest attribution-map values, are progressively replaced with black pixels until the entire image is replaced. The Insertion and Deletion curves provide further evidence that CAM captures the network's critical information more effectively than LAM.
4.3. Results on Various Models and Datasets

We conduct quantitative comparisons between the proposed ADD method and existing vanilla DA approaches across the classical benchmark datasets DIV2K and RealSR, as detailed in Tab. 2. The results demonstrate that both ADD and ADD+ consistently outperform vanilla DA methods on both synthetic (DIV2K) and real-world (RealSR) data, with performance improvements reaching up to 0.28 dB. Further comparisons between ADD+ and baseline models on the Set5, Set14, Manga109, Urban100, and BSD100 datasets, presented in Tab. 3, show that networks trained with ADD+ consistently achieve superior reconstruction performance. Qualitative results, depicted in Fig. 4, reveal that networks trained with ADD+ exhibit enhanced visual quality compared to their baseline counterparts, capturing finer details. Notably, in the striped area of the building in Fig. 4 (img_012), ADD+ yields more accurate and sharper details than the baselines.

Table 3. Quantitative comparison with baseline methods in SR; Δ denotes the performance gap.
| Method | Scale | Training Dataset | Set5 PSNR | Δ | Set14 PSNR | Δ | Manga109 PSNR | Δ | Urban100 PSNR | Δ | BSD100 PSNR | Δ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| RCAN | ×2 | DIV2K | 38.23 | - | 34.11 | - | 39.41 | - | 33.29 | - | 32.37 | - |
| + ADD+ | ×2 | DIV2K | 38.34 | +0.11 | 34.21 | +0.10 | 39.55 | +0.14 | 33.55 | +0.26 | 32.44 | +0.07 |
| EDSR | ×2 | DIV2K | 38.11 | - | 33.92 | - | 39.10 | - | 32.93 | - | 32.32 | - |
| + ADD+ | ×2 | DIV2K | 38.20 | +0.09 | 34.04 | +0.12 | 39.27 | +0.17 | 33.16 | +0.23 | 32.43 | +0.11 |
| SwinIR | ×2 | DIV2K | 38.31 | - | 34.41 | - | 39.89 | - | 33.75 | - | 32.46 | - |
| + ADD+ | ×2 | DIV2K | 38.43 | +0.12 | 34.49 | +0.08 | 39.99 | +0.10 | 33.90 | +0.15 | 32.51 | +0.05 |
| RCAN | ×4 | DIV2K | 32.58 | - | 28.84 | - | 31.22 | - | 26.75 | - | 27.74 | - |
| + ADD+ | ×4 | DIV2K | 32.64 | +0.06 | 28.91 | +0.07 | 31.43 | +0.21 | 26.92 | +0.17 | 27.79 | +0.05 |
| EDSR | ×4 | DIV2K | 32.43 | - | 28.76 | - | 31.04 | - | 26.63 | - | 27.66 | - |
| + ADD+ | ×4 | DIV2K | 32.51 | +0.08 | 28.88 | +0.12 | 31.24 | +0.20 | 26.81 | +0.18 | 27.77 | +0.11 |
| SwinIR | ×4 | DIV2K | 32.63 | - | 28.92 | - | 31.54 | - | 27.01 | - | 27.82 | - |
| + ADD+ | ×4 | DIV2K | 32.71 | +0.08 | 28.99 | +0.07 | 31.60 | +0.06 | 27.13 | +0.12 | 27.85 | +0.03 |
", + "image_path": "74bce8a1b7e09e01e729caba85b960589dcce8c8558378877ad7d0e61cc28d21.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 60, + 271, + 134, + 362 + ], + "blocks": [ + { + "bbox": [ + 60, + 271, + 134, + 362 + ], + "lines": [ + { + "bbox": [ + 60, + 271, + 134, + 362 + ], + "spans": [ + { + "bbox": [ + 60, + 271, + 134, + 362 + ], + "type": "image", + "image_path": "de78c63daab6cb5676c36cbb3e05b5e5c2037249f5c661a5cbb062873cccab89.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 61, + 363, + 132, + 373 + ], + "lines": [ + { + "bbox": [ + 61, + 363, + 132, + 373 + ], + "spans": [ + { + "bbox": [ + 61, + 363, + 132, + 373 + ], + "type": "text", + "content": "Urban100(x4): img_092" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 138, + 270, + 176, + 307 + ], + "blocks": [ + { + "bbox": [ + 138, + 270, + 176, + 307 + ], + "lines": [ + { + "bbox": [ + 138, + 270, + 176, + 307 + ], + "spans": [ + { + "bbox": [ + 138, + 270, + 176, + 307 + ], + "type": "image", + "image_path": "b8dcd9a020f089120ca5f77ceaae038355b671ae4387b9cd7b79ca8ceee8d468.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 149, + 308, + 162, + 315 + ], + "lines": [ + { + "bbox": [ + 149, + 308, + 162, + 315 + ], + "spans": [ + { + "bbox": [ + 149, + 308, + 162, + 315 + ], + "type": "text", + "content": "HR" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 138, + 316, + 175, + 351 + ], + "blocks": [ + { + "bbox": [ + 138, + 316, + 175, + 351 + ], + "lines": [ + { + "bbox": [ + 138, + 316, + 175, + 351 + ], + "spans": [ + { + "bbox": [ + 138, + 316, + 175, + 351 + ], + "type": "image", + "image_path": "9d5f9288b54d82ea721ecdd5835fe9f08cd97b76c4f47c920c500863b8604f9e.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 143, + 352, + 170, + 360 + ], + "lines": [ + { + "bbox": [ + 143, + 352, + 170, + 360 + ], + "spans": [ + { + "bbox": [ + 143, + 352, + 170, + 360 + ], + "type": "text", + "content": "Bicubic" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 179, + 270, + 215, + 307 + ], + "blocks": [ + { + "bbox": [ + 179, + 270, + 215, + 307 + ], + "lines": [ + { + "bbox": [ + 179, + 270, + 215, + 307 + ], + "spans": [ + { + "bbox": [ + 179, + 270, + 215, + 307 + ], + "type": "image", + "image_path": "70699b286b4656ac3076cd033ade92ea423b6036377f2e715fd309df9ff01b4e.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 184, + 308, + 208, + 314 + ], + "lines": [ + { + "bbox": [ + 184, + 308, + 208, + 314 + ], + "spans": [ + { + "bbox": [ + 184, + 308, + 208, + 314 + ], + "type": "text", + "content": "RCAN" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_caption" + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 179, + 316, + 215, + 351 + ], + "blocks": [ + { + "bbox": [ + 179, + 316, + 215, + 351 + ], + "lines": [ + { + "bbox": [ + 179, + 316, + 215, + 351 + ], + "spans": [ + { + "bbox": [ + 179, + 316, + 215, + 351 + ], + "type": "image", + "image_path": "2c2975adaeceddc2bf474e7d336aad9d229acb77f823039b3c3d5da2e330ce41.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": 
"image_body" + }, + { + "bbox": [ + 184, + 352, + 208, + 360 + ], + "lines": [ + { + "bbox": [ + 184, + 352, + 208, + 360 + ], + "spans": [ + { + "bbox": [ + 184, + 352, + 208, + 360 + ], + "type": "text", + "content": "RCAN" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_caption" + } + ], + "index": 10 + }, + { + "type": "image", + "bbox": [ + 219, + 270, + 254, + 307 + ], + "blocks": [ + { + "bbox": [ + 219, + 270, + 254, + 307 + ], + "lines": [ + { + "bbox": [ + 219, + 270, + 254, + 307 + ], + "spans": [ + { + "bbox": [ + 219, + 270, + 254, + 307 + ], + "type": "image", + "image_path": "5b87a9b328dc134e1a97d913d09221fd3c49b15fa763bcb31943e62b5f28a0e6.jpg" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 225, + 308, + 247, + 314 + ], + "lines": [ + { + "bbox": [ + 225, + 308, + 247, + 314 + ], + "spans": [ + { + "bbox": [ + 225, + 308, + 247, + 314 + ], + "type": "text", + "content": "EDSR" + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_caption" + } + ], + "index": 13 + }, + { + "type": "image", + "bbox": [ + 219, + 316, + 254, + 351 + ], + "blocks": [ + { + "bbox": [ + 177, + 361, + 295, + 372 + ], + "lines": [ + { + "bbox": [ + 177, + 361, + 295, + 372 + ], + "spans": [ + { + "bbox": [ + 177, + 361, + 295, + 372 + ], + "type": "text", + "content": "+proposed +proposed +proposed" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 219, + 316, + 254, + 351 + ], + "lines": [ + { + "bbox": [ + 219, + 316, + 254, + 351 + ], + "spans": [ + { + "bbox": [ + 219, + 316, + 254, + 351 + ], + "type": "image", + "image_path": "815bef0c5bcafc4cb0b87ea2b5d8e17621a0aedc88e0a64512aaf559ff7934d9.jpg" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 224, + 352, + 247, + 359 + ], + "lines": [ + { + "bbox": [ + 224, + 352, + 247, + 359 + ], + "spans": [ + { + "bbox": [ + 224, + 352, + 247, + 359 + ], + "type": "text", + "content": "EDSR" + } + ] + } + ], + "index": 16, + "angle": 0, + "type": "image_caption" + } + ], + "index": 15 + }, + { + "bbox": [ + 218, + 361, + 295, + 372 + ], + "angle": 0, + "lines": [ + { + "bbox": [ + 218, + 361, + 295, + 372 + ], + "spans": [ + { + "bbox": [ + 218, + 361, + 295, + 372 + ], + "type": "text", + "content": "+proposed +proposed" + } + ] + } + ], + "index": 17, + "type": "text" + }, + { + "type": "image", + "bbox": [ + 313, + 270, + 386, + 361 + ], + "blocks": [ + { + "bbox": [ + 313, + 270, + 386, + 361 + ], + "lines": [ + { + "bbox": [ + 313, + 270, + 386, + 361 + ], + "spans": [ + { + "bbox": [ + 313, + 270, + 386, + 361 + ], + "type": "image", + "image_path": "4365e0c60e58b836d1a0ffb62d2f0b31d8c3ec3126a0db25585bfecefdd40e29.jpg" + } + ] + } + ], + "index": 18, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 317, + 362, + 389, + 373 + ], + "lines": [ + { + "bbox": [ + 317, + 362, + 389, + 373 + ], + "spans": [ + { + "bbox": [ + 317, + 362, + 389, + 373 + ], + "type": "text", + "content": "Urban100(x4): img_095" + } + ] + } + ], + "index": 19, + "angle": 0, + "type": "image_caption" + } + ], + "index": 18 + }, + { + "type": "image", + "bbox": [ + 392, + 270, + 429, + 307 + ], + "blocks": [ + { + "bbox": [ + 392, + 270, + 429, + 307 + ], + "lines": [ + { + "bbox": [ + 392, + 270, + 429, + 307 + ], + "spans": [ + { + "bbox": [ + 392, + 270, + 429, + 307 + ], + "type": "image", + "image_path": "394ddbae49b2e969b327cc1919e2d371c9d503baed8b70d18e6d29aebcfe7bb9.jpg" + } + ] + } + ], + "index": 20, + 
"angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 404, + 308, + 415, + 314 + ], + "lines": [ + { + "bbox": [ + 404, + 308, + 415, + 314 + ], + "spans": [ + { + "bbox": [ + 404, + 308, + 415, + 314 + ], + "type": "text", + "content": "HR" + } + ] + } + ], + "index": 21, + "angle": 0, + "type": "image_caption" + } + ], + "index": 20 + }, + { + "type": "image", + "bbox": [ + 392, + 316, + 428, + 360 + ], + "blocks": [ + { + "bbox": [ + 392, + 316, + 428, + 360 + ], + "lines": [ + { + "bbox": [ + 392, + 316, + 428, + 360 + ], + "spans": [ + { + "bbox": [ + 392, + 316, + 428, + 360 + ], + "type": "image", + "image_path": "e2bbb8bdbb4f64783534f7f4a84aaffe542879dd79976f53122a8385abb22ef9.jpg" + } + ] + } + ], + "index": 22, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 432, + 361, + 548, + 371 + ], + "lines": [ + { + "bbox": [ + 432, + 361, + 548, + 371 + ], + "spans": [ + { + "bbox": [ + 432, + 361, + 548, + 371 + ], + "type": "text", + "content": "+proposed +proposed +proposed" + } + ] + } + ], + "index": 27, + "angle": 0, + "type": "image_caption" + } + ], + "index": 22 + }, + { + "type": "image", + "bbox": [ + 432, + 270, + 468, + 307 + ], + "blocks": [ + { + "bbox": [ + 432, + 270, + 468, + 307 + ], + "lines": [ + { + "bbox": [ + 432, + 270, + 468, + 307 + ], + "spans": [ + { + "bbox": [ + 432, + 270, + 468, + 307 + ], + "type": "image", + "image_path": "a8a67a75522fbbcec87fdcc179cce34065f9b8a189ce8ea9710fb1a04d28724c.jpg" + } + ] + } + ], + "index": 23, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 437, + 308, + 462, + 314 + ], + "lines": [ + { + "bbox": [ + 437, + 308, + 462, + 314 + ], + "spans": [ + { + "bbox": [ + 437, + 308, + 462, + 314 + ], + "type": "text", + "content": "RCAN" + } + ] + } + ], + "index": 24, + "angle": 0, + "type": "image_caption" + } + ], + "index": 23 + }, + { + "type": "image", + "bbox": [ + 432, + 316, + 468, + 351 + ], + "blocks": [ + { + "bbox": [ + 432, + 316, + 468, + 351 + ], + "lines": [ + { + "bbox": [ + 432, + 316, + 468, + 351 + ], + "spans": [ + { + "bbox": [ + 432, + 316, + 468, + 351 + ], + "type": "image", + "image_path": "de517a8cbaf25badbbc76af14d3c877274f3e175e7ec8b20ca0d9f6d7378f62e.jpg" + } + ] + } + ], + "index": 25, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 437, + 352, + 462, + 359 + ], + "lines": [ + { + "bbox": [ + 437, + 352, + 462, + 359 + ], + "spans": [ + { + "bbox": [ + 437, + 352, + 462, + 359 + ], + "type": "text", + "content": "RCAN" + } + ] + } + ], + "index": 26, + "angle": 0, + "type": "image_caption" + } + ], + "index": 25 + }, + { + "type": "image", + "bbox": [ + 471, + 270, + 507, + 307 + ], + "blocks": [ + { + "bbox": [ + 471, + 270, + 507, + 307 + ], + "lines": [ + { + "bbox": [ + 471, + 270, + 507, + 307 + ], + "spans": [ + { + "bbox": [ + 471, + 270, + 507, + 307 + ], + "type": "image", + "image_path": "d5d7b376389ee0689ecd44b2122f8f74a29030a27ac346c306122d30124a4cea.jpg" + } + ] + } + ], + "index": 28, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 480, + 308, + 499, + 314 + ], + "lines": [ + { + "bbox": [ + 480, + 308, + 499, + 314 + ], + "spans": [ + { + "bbox": [ + 480, + 308, + 499, + 314 + ], + "type": "text", + "content": "EDSR" + } + ] + } + ], + "index": 29, + "angle": 0, + "type": "image_caption" + } + ], + "index": 28 + }, + { + "type": "image", + "bbox": [ + 471, + 316, + 507, + 351 + ], + "blocks": [ + { + "bbox": [ + 471, + 316, + 507, + 351 + ], + "lines": [ + { + "bbox": [ + 471, + 316, + 507, + 351 + ], + "spans": [ + { + "bbox": [ + 471, + 
316, + 507, + 351 + ], + "type": "image", + "image_path": "a0e4d907bfceff3845f90f61f65285ebcc6f4789bd267ad6c12c8994de11b624.jpg" + } + ] + } + ], + "index": 30, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 479, + 352, + 500, + 359 + ], + "lines": [ + { + "bbox": [ + 479, + 352, + 500, + 359 + ], + "spans": [ + { + "bbox": [ + 479, + 352, + 500, + 359 + ], + "type": "text", + "content": "EDSR" + } + ] + } + ], + "index": 31, + "angle": 0, + "type": "image_caption" + } + ], + "index": 30 + }, + { + "type": "image", + "bbox": [ + 511, + 270, + 547, + 307 + ], + "blocks": [ + { + "bbox": [ + 511, + 270, + 547, + 307 + ], + "lines": [ + { + "bbox": [ + 511, + 270, + 547, + 307 + ], + "spans": [ + { + "bbox": [ + 511, + 270, + 547, + 307 + ], + "type": "image", + "image_path": "44d3686a627368ef66274250ead23a913256805c7a960591562cd3671e6edfba.jpg" + } + ] + } + ], + "index": 32, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 517, + 308, + 541, + 314 + ], + "lines": [ + { + "bbox": [ + 517, + 308, + 541, + 314 + ], + "spans": [ + { + "bbox": [ + 517, + 308, + 541, + 314 + ], + "type": "text", + "content": "SwinIR" + } + ] + } + ], + "index": 33, + "angle": 0, + "type": "image_caption" + } + ], + "index": 32 + }, + { + "type": "image", + "bbox": [ + 511, + 316, + 546, + 351 + ], + "blocks": [ + { + "bbox": [ + 511, + 316, + 546, + 351 + ], + "lines": [ + { + "bbox": [ + 511, + 316, + 546, + 351 + ], + "spans": [ + { + "bbox": [ + 511, + 316, + 546, + 351 + ], + "type": "image", + "image_path": "fe44a5b7289d1c45e50d5c15f294601c872f22e9ba49caab6f4ca2aea0c4da17.jpg" + } + ] + } + ], + "index": 34, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 517, + 352, + 542, + 359 + ], + "lines": [ + { + "bbox": [ + 517, + 352, + 542, + 359 + ], + "spans": [ + { + "bbox": [ + 517, + 352, + 542, + 359 + ], + "type": "text", + "content": "SwinIR" + } + ] + } + ], + "index": 35, + "angle": 0, + "type": "image_caption" + } + ], + "index": 34 + }, + { + "type": "image", + "bbox": [ + 61, + 386, + 132, + 475 + ], + "blocks": [ + { + "bbox": [ + 61, + 386, + 132, + 475 + ], + "lines": [ + { + "bbox": [ + 61, + 386, + 132, + 475 + ], + "spans": [ + { + "bbox": [ + 61, + 386, + 132, + 475 + ], + "type": "image", + "image_path": "c0ddf29918d715819df72d701c97fd9aae29410a10ebf71b8ee85ff35c85fa99.jpg" + } + ] + } + ], + "index": 36, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 75, + 476, + 120, + 483 + ], + "lines": [ + { + "bbox": [ + 75, + 476, + 120, + 483 + ], + "spans": [ + { + "bbox": [ + 75, + 476, + 120, + 483 + ], + "type": "text", + "content": "Manga109(x4):" + } + ] + } + ], + "index": 37, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 67, + 484, + 128, + 492 + ], + "lines": [ + { + "bbox": [ + 67, + 484, + 128, + 492 + ], + "spans": [ + { + "bbox": [ + 67, + 484, + 128, + 492 + ], + "type": "text", + "content": "PrayerHaNemurenai" + } + ] + } + ], + "index": 38, + "angle": 0, + "type": "image_caption" + } + ], + "index": 36 + }, + { + "type": "image", + "bbox": [ + 137, + 386, + 175, + 422 + ], + "blocks": [ + { + "bbox": [ + 137, + 386, + 175, + 422 + ], + "lines": [ + { + "bbox": [ + 137, + 386, + 175, + 422 + ], + "spans": [ + { + "bbox": [ + 137, + 386, + 175, + 422 + ], + "type": "image", + "image_path": "50306f619c9bbc1c238522afdea410cdbcca9b717218ff6c77d8f5356176a48e.jpg" + } + ] + } + ], + "index": 39, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 148, + 423, + 161, + 430 + ], + "lines": [ + { + "bbox": [ + 148, + 423, + 
161, + 430 + ], + "spans": [ + { + "bbox": [ + 148, + 423, + 161, + 430 + ], + "type": "text", + "content": "HR" + } + ] + } + ], + "index": 40, + "angle": 0, + "type": "image_caption" + } + ], + "index": 39 + }, + { + "type": "image", + "bbox": [ + 138, + 431, + 175, + 467 + ], + "blocks": [ + { + "bbox": [ + 138, + 431, + 175, + 467 + ], + "lines": [ + { + "bbox": [ + 138, + 431, + 175, + 467 + ], + "spans": [ + { + "bbox": [ + 138, + 431, + 175, + 467 + ], + "type": "image", + "image_path": "bbbf594ba4c0d3868b8677c002ea7001a4fbe9a388b7a78301ae7bd8399f66b8.jpg" + } + ] + } + ], + "index": 41, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 143, + 468, + 168, + 475 + ], + "lines": [ + { + "bbox": [ + 143, + 468, + 168, + 475 + ], + "spans": [ + { + "bbox": [ + 143, + 468, + 168, + 475 + ], + "type": "text", + "content": "Bicubic" + } + ] + } + ], + "index": 42, + "angle": 0, + "type": "image_caption" + } + ], + "index": 41 + }, + { + "type": "image", + "bbox": [ + 178, + 386, + 215, + 422 + ], + "blocks": [ + { + "bbox": [ + 178, + 386, + 215, + 422 + ], + "lines": [ + { + "bbox": [ + 178, + 386, + 215, + 422 + ], + "spans": [ + { + "bbox": [ + 178, + 386, + 215, + 422 + ], + "type": "image", + "image_path": "0b97099247bdfe10d322b539a93f3b8ebdd7b2bd9e34a2fbb1c3c15de90be3df.jpg" + } + ] + } + ], + "index": 43, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 184, + 423, + 208, + 430 + ], + "lines": [ + { + "bbox": [ + 184, + 423, + 208, + 430 + ], + "spans": [ + { + "bbox": [ + 184, + 423, + 208, + 430 + ], + "type": "text", + "content": "RCAN" + } + ] + } + ], + "index": 44, + "angle": 0, + "type": "image_caption" + } + ], + "index": 43 + }, + { + "type": "image", + "bbox": [ + 178, + 431, + 215, + 467 + ], + "blocks": [ + { + "bbox": [ + 178, + 431, + 215, + 467 + ], + "lines": [ + { + "bbox": [ + 178, + 431, + 215, + 467 + ], + "spans": [ + { + "bbox": [ + 178, + 431, + 215, + 467 + ], + "type": "image", + "image_path": "19b018a5f89507203598c74da818b2968adadacd67ebbde97711ac034ab4f7a4.jpg" + } + ] + } + ], + "index": 45, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 182, + 468, + 207, + 475 + ], + "lines": [ + { + "bbox": [ + 182, + 468, + 207, + 475 + ], + "spans": [ + { + "bbox": [ + 182, + 468, + 207, + 475 + ], + "type": "text", + "content": "RCAN" + } + ] + } + ], + "index": 46, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 176, + 476, + 294, + 487 + ], + "lines": [ + { + "bbox": [ + 176, + 476, + 294, + 487 + ], + "spans": [ + { + "bbox": [ + 176, + 476, + 294, + 487 + ], + "type": "text", + "content": "+proposed +proposed +proposed" + } + ] + } + ], + "index": 47, + "angle": 0, + "type": "image_caption" + } + ], + "index": 45 + }, + { + "type": "image", + "bbox": [ + 218, + 386, + 253, + 422 + ], + "blocks": [ + { + "bbox": [ + 218, + 386, + 253, + 422 + ], + "lines": [ + { + "bbox": [ + 218, + 386, + 253, + 422 + ], + "spans": [ + { + "bbox": [ + 218, + 386, + 253, + 422 + ], + "type": "image", + "image_path": "e4cfb74893081c95da7bbac00f08dc76b2134bfb11b6636c19f79aa879b916fe.jpg" + } + ] + } + ], + "index": 48, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 223, + 423, + 247, + 430 + ], + "lines": [ + { + "bbox": [ + 223, + 423, + 247, + 430 + ], + "spans": [ + { + "bbox": [ + 223, + 423, + 247, + 430 + ], + "type": "text", + "content": "EDSR" + } + ] + } + ], + "index": 49, + "angle": 0, + "type": "image_caption" + } + ], + "index": 48 + }, + { + "type": "image", + "bbox": [ + 218, + 431, + 253, + 467 + ], + 
"blocks": [ + { + "bbox": [ + 218, + 431, + 253, + 467 + ], + "lines": [ + { + "bbox": [ + 218, + 431, + 253, + 467 + ], + "spans": [ + { + "bbox": [ + 218, + 431, + 253, + 467 + ], + "type": "image", + "image_path": "2132558eb107d99f4a7353ff35e8e07d5a0f4d3cc5130f5c1b40d15dfa01f61d.jpg" + } + ] + } + ], + "index": 50, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 223, + 468, + 246, + 475 + ], + "lines": [ + { + "bbox": [ + 223, + 468, + 246, + 475 + ], + "spans": [ + { + "bbox": [ + 223, + 468, + 246, + 475 + ], + "type": "text", + "content": "EDSR" + } + ] + } + ], + "index": 51, + "angle": 0, + "type": "image_caption" + } + ], + "index": 50 + }, + { + "bbox": [ + 218, + 476, + 294, + 487 + ], + "angle": 0, + "lines": [ + { + "bbox": [ + 218, + 476, + 294, + 487 + ], + "spans": [ + { + "bbox": [ + 218, + 476, + 294, + 487 + ], + "type": "text", + "content": "+proposed +proposed" + } + ] + } + ], + "index": 52, + "type": "text" + }, + { + "type": "image", + "bbox": [ + 258, + 386, + 294, + 422 + ], + "blocks": [ + { + "bbox": [ + 258, + 386, + 294, + 422 + ], + "lines": [ + { + "bbox": [ + 258, + 386, + 294, + 422 + ], + "spans": [ + { + "bbox": [ + 258, + 386, + 294, + 422 + ], + "type": "image", + "image_path": "ce79336787e0335553da22f47ce92f9fb791087c0ad0bdd74bc563b0979eb5b6.jpg" + } + ] + } + ], + "index": 53, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 260, + 423, + 287, + 430 + ], + "lines": [ + { + "bbox": [ + 260, + 423, + 287, + 430 + ], + "spans": [ + { + "bbox": [ + 260, + 423, + 287, + 430 + ], + "type": "text", + "content": "SwinIR" + } + ] + } + ], + "index": 54, + "angle": 0, + "type": "image_caption" + } + ], + "index": 53 + }, + { + "type": "image", + "bbox": [ + 258, + 431, + 294, + 467 + ], + "blocks": [ + { + "bbox": [ + 258, + 431, + 294, + 467 + ], + "lines": [ + { + "bbox": [ + 258, + 431, + 294, + 467 + ], + "spans": [ + { + "bbox": [ + 258, + 431, + 294, + 467 + ], + "type": "image", + "image_path": "099a0f5063333cdc69c08a32d3cafd8c1cfafda055e1b421f0a953f4a69f417e.jpg" + } + ] + } + ], + "index": 55, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 261, + 468, + 287, + 475 + ], + "lines": [ + { + "bbox": [ + 261, + 468, + 287, + 475 + ], + "spans": [ + { + "bbox": [ + 261, + 468, + 287, + 475 + ], + "type": "text", + "content": "SwinIR" + } + ] + } + ], + "index": 56, + "angle": 0, + "type": "image_caption" + } + ], + "index": 55 + }, + { + "type": "image", + "bbox": [ + 315, + 386, + 386, + 476 + ], + "blocks": [ + { + "bbox": [ + 315, + 386, + 386, + 476 + ], + "lines": [ + { + "bbox": [ + 315, + 386, + 386, + 476 + ], + "spans": [ + { + "bbox": [ + 315, + 386, + 386, + 476 + ], + "type": "image", + "image_path": "858c5dd4401389fcb34065de8b066c768585fcc3366efbde50841a38607b9f40.jpg" + } + ] + } + ], + "index": 57, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 317, + 478, + 389, + 488 + ], + "lines": [ + { + "bbox": [ + 317, + 478, + 389, + 488 + ], + "spans": [ + { + "bbox": [ + 317, + 478, + 389, + 488 + ], + "type": "text", + "content": "Urban100(x4): img_012" + } + ] + } + ], + "index": 58, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 55, + 496, + 555, + 519 + ], + "lines": [ + { + "bbox": [ + 55, + 496, + 555, + 519 + ], + "spans": [ + { + "bbox": [ + 55, + 496, + 555, + 519 + ], + "type": "text", + "content": "Figure 4. 
Visual comparison on " + }, + { + "bbox": [ + 55, + 496, + 555, + 519 + ], + "type": "inline_equation", + "content": "\\times 4" + }, + { + "bbox": [ + 55, + 496, + 555, + 519 + ], + "type": "text", + "content": " SR with the baseline model and proposed method. The patches for comparison are marked with red boxes in the original images. Please zoom in for better visualization." + } + ] + } + ], + "index": 72, + "angle": 0, + "type": "image_caption" + } + ], + "index": 57 + }, + { + "type": "image", + "bbox": [ + 392, + 386, + 429, + 422 + ], + "blocks": [ + { + "bbox": [ + 392, + 386, + 429, + 422 + ], + "lines": [ + { + "bbox": [ + 392, + 386, + 429, + 422 + ], + "spans": [ + { + "bbox": [ + 392, + 386, + 429, + 422 + ], + "type": "image", + "image_path": "e6c5d12eba070fa80ec1d9d6b0e4c721cf6c911a765e0a801e481624f851c59c.jpg" + } + ] + } + ], + "index": 59, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 404, + 422, + 414, + 429 + ], + "lines": [ + { + "bbox": [ + 404, + 422, + 414, + 429 + ], + "spans": [ + { + "bbox": [ + 404, + 422, + 414, + 429 + ], + "type": "text", + "content": "HR" + } + ] + } + ], + "index": 60, + "angle": 0, + "type": "image_caption" + } + ], + "index": 59 + }, + { + "type": "image", + "bbox": [ + 392, + 430, + 428, + 464 + ], + "blocks": [ + { + "bbox": [ + 392, + 430, + 428, + 464 + ], + "lines": [ + { + "bbox": [ + 392, + 430, + 428, + 464 + ], + "spans": [ + { + "bbox": [ + 392, + 430, + 428, + 464 + ], + "type": "image", + "image_path": "1898ea22869ed4be957d153bf955ff103862bfc3889ae37df2dc95891492872e.jpg" + } + ] + } + ], + "index": 61, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 395, + 466, + 424, + 473 + ], + "lines": [ + { + "bbox": [ + 395, + 466, + 424, + 473 + ], + "spans": [ + { + "bbox": [ + 395, + 466, + 424, + 473 + ], + "type": "text", + "content": "Bicubic" + } + ] + } + ], + "index": 62, + "angle": 0, + "type": "image_caption" + } + ], + "index": 61 + }, + { + "type": "image", + "bbox": [ + 432, + 386, + 467, + 422 + ], + "blocks": [ + { + "bbox": [ + 432, + 386, + 467, + 422 + ], + "lines": [ + { + "bbox": [ + 432, + 386, + 467, + 422 + ], + "spans": [ + { + "bbox": [ + 432, + 386, + 467, + 422 + ], + "type": "image", + "image_path": "0367aa6b50e523d989e88e1ed39f06e2023eb75f1c59cbca6d423bc6f53cec1e.jpg" + } + ] + } + ], + "index": 63, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 436, + 422, + 461, + 428 + ], + "lines": [ + { + "bbox": [ + 436, + 422, + 461, + 428 + ], + "spans": [ + { + "bbox": [ + 436, + 422, + 461, + 428 + ], + "type": "text", + "content": "RCAN" + } + ] + } + ], + "index": 64, + "angle": 0, + "type": "image_caption" + } + ], + "index": 63 + }, + { + "type": "image", + "bbox": [ + 432, + 430, + 467, + 465 + ], + "blocks": [ + { + "bbox": [ + 432, + 430, + 467, + 465 + ], + "lines": [ + { + "bbox": [ + 432, + 430, + 467, + 465 + ], + "spans": [ + { + "bbox": [ + 432, + 430, + 467, + 465 + ], + "type": "image", + "image_path": "f43556d28230571eb0f771e115255dcb32f60ed8fcf70e9a16237990dbe96041.jpg" + } + ] + } + ], + "index": 65, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 436, + 466, + 460, + 473 + ], + "lines": [ + { + "bbox": [ + 436, + 466, + 460, + 473 + ], + "spans": [ + { + "bbox": [ + 436, + 466, + 460, + 473 + ], + "type": "text", + "content": "RCAN" + } + ] + } + ], + "index": 66, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 430, + 475, + 548, + 484 + ], + "lines": [ + { + "bbox": [ + 430, + 475, + 548, + 484 + ], + "spans": [ + { + "bbox": [ + 
430, + 475, + 548, + 484 + ], + "type": "text", + "content": "+proposed +proposed +proposed" + } + ] + } + ], + "index": 67, + "angle": 0, + "type": "image_caption" + } + ], + "index": 65 + }, + { + "type": "image", + "bbox": [ + 509, + 386, + 545, + 422 + ], + "blocks": [ + { + "bbox": [ + 509, + 386, + 545, + 422 + ], + "lines": [ + { + "bbox": [ + 509, + 386, + 545, + 422 + ], + "spans": [ + { + "bbox": [ + 509, + 386, + 545, + 422 + ], + "type": "image", + "image_path": "ed9aa450dc2c069dea4e5c803f499d4a9c4e8f86487d60fd357eaa076a1b987f.jpg" + } + ] + } + ], + "index": 68, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 515, + 422, + 541, + 428 + ], + "lines": [ + { + "bbox": [ + 515, + 422, + 541, + 428 + ], + "spans": [ + { + "bbox": [ + 515, + 422, + 541, + 428 + ], + "type": "text", + "content": "SwinIR" + } + ] + } + ], + "index": 69, + "angle": 0, + "type": "image_caption" + } + ], + "index": 68 + }, + { + "type": "image", + "bbox": [ + 510, + 430, + 546, + 465 + ], + "blocks": [ + { + "bbox": [ + 510, + 430, + 546, + 465 + ], + "lines": [ + { + "bbox": [ + 510, + 430, + 546, + 465 + ], + "spans": [ + { + "bbox": [ + 510, + 430, + 546, + 465 + ], + "type": "image", + "image_path": "3cc5764d44d96cb7d3b5442a77bafae88c5c0fe287cb0117ec5382cb8e06aebe.jpg" + } + ] + } + ], + "index": 70, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 515, + 466, + 542, + 472 + ], + "lines": [ + { + "bbox": [ + 515, + 466, + 542, + 472 + ], + "spans": [ + { + "bbox": [ + 515, + 466, + 542, + 472 + ], + "type": "text", + "content": "SwinIR" + } + ] + } + ], + "index": 71, + "angle": 0, + "type": "image_caption" + } + ], + "index": 70 + }, + { + "bbox": [ + 55, + 529, + 295, + 553 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 529, + 295, + 553 + ], + "spans": [ + { + "bbox": [ + 55, + 529, + 295, + 553 + ], + "type": "text", + "content": "4.4. Guiding Principles for Saliency-based DA methods" + } + ] + } + ], + "index": 73 + }, + { + "bbox": [ + 55, + 558, + 296, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 558, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 558, + 296, + 713 + ], + "type": "text", + "content": "As discussed in Sec. 3.5, the key questions for incorporating saliency are: (1) the manners for segmenting the source image, and (2) the position for pasting the patches. We define the following settings for a comprehensive analysis. For question (1), we examined the impact of different granularities on DA strategies. We categorized granularity into coarse " + }, + { + "bbox": [ + 55, + 558, + 296, + 713 + ], + "type": "inline_equation", + "content": "(1\\times 1,2\\times 2)" + }, + { + "bbox": [ + 55, + 558, + 296, + 713 + ], + "type": "text", + "content": ", medium " + }, + { + "bbox": [ + 55, + 558, + 296, + 713 + ], + "type": "inline_equation", + "content": "(3\\times 3,4\\times 4)" + }, + { + "bbox": [ + 55, + 558, + 296, + 713 + ], + "type": "text", + "content": ", and fine " + }, + { + "bbox": [ + 55, + 558, + 296, + 713 + ], + "type": "inline_equation", + "content": "(5\\times 5,6\\times 6,7\\times 7)" + }, + { + "bbox": [ + 55, + 558, + 296, + 713 + ], + "type": "text", + "content": " patches. We observed a decline in network performance with increasing granularity refinement as shown in Tab. 1, indicating that a lack of continuity at the boundary can cause serious boundary effects and subsequently impair performance.
For question (2), we investigate six schemes for extracting and merging patches from the source image to" + } + ] + } + ], + "index": 74 + }, + { + "bbox": [ + 313, + 529, + 555, + 709 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 529, + 555, + 709 + ], + "spans": [ + { + "bbox": [ + 313, + 529, + 555, + 709 + ], + "type": "text", + "content": "the target image: (i) Saliency to Corresponding, extracting the most salient region from the source image and merging it with the corresponding region of the target image; (ii) Center to Center, extracting the central region from the source image and merging it with the central region of the target image; (iii) Saliency to Saliency, extracting the most salient region from the source image and merging it with the most salient region of the target image; (iv) Saliency to Non-Saliency, extracting the most salient region from the source image and merging it with the non-salient region of the target image; (v) Non-Saliency to Saliency, extracting the non-salient region from the source image and merging it with the most salient region of the target image; (vi) Non-Saliency to Non-Saliency, extracting the non-salient region from the source image and merging it with the non-salient" + } + ] + } + ], + "index": 75 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "text", + "content": "23107" + } + ] + } + ], + "index": 76 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 58, + 70, + 553, + 171 + ], + "blocks": [ + { + "bbox": [ + 58, + 70, + 553, + 171 + ], + "lines": [ + { + "bbox": [ + 58, + 70, + 553, + 171 + ], + "spans": [ + { + "bbox": [ + 58, + 70, + 553, + 171 + ], + "type": "image", + "image_path": "b772bacf0636b335b1cdd69e8c91870951003a0ff6eda6ca857eff909bca9193.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 54, + 182, + 555, + 237 + ], + "lines": [ + { + "bbox": [ + 54, + 182, + 555, + 237 + ], + "spans": [ + { + "bbox": [ + 54, + 182, + 555, + 237 + ], + "type": "text", + "content": "Figure 5. Attribution results for the baseline model, vanilla DA (CutMixup), and saliency-based DA (ADDCutMixup). The attribution results showcase the importance of each pixel in the input LR image for reconstructing the marked path. The diffusion index (DI) reflects the range of involved pixels, with a higher DI indicating a broader range of utilized pixels. Two key observations from the attribution and DI results emerge: (1) Vanilla DA methods enhance network performance by involving more pixels. (2) Saliency-based DA methods guide the model to focus more on meaningful details, reducing attention to irrelevant pixels. Please zoom in for better visualization." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 54, + 248, + 297, + 344 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 248, + 297, + 344 + ], + "spans": [ + { + "bbox": [ + 54, + 248, + 297, + 344 + ], + "type": "text", + "content": "region of the target image. As depicted in Tab. 1, scheme (i) Sa2Cor (with source 'Sa')—which incorporates a broader range of target positions such as 'Sa', 'Non', and others— aligns with a diversity principle and outperforms others. 
This highlights the importance of the saliency region in the source image and diverse augmented patterns. Based on these findings, we propose the key principle for low-level DA: a wider spectrum of degradation patterns." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 354, + 282, + 368 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 354, + 282, + 368 + ], + "spans": [ + { + "bbox": [ + 55, + 354, + 282, + 368 + ], + "type": "text", + "content": "4.5. What Vanilla And Saliency-Based DA learn" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 54, + 373, + 295, + 516 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 373, + 295, + 516 + ], + "spans": [ + { + "bbox": [ + 54, + 373, + 295, + 516 + ], + "type": "text", + "content": "We conduct attribution analysis on baseline models, models trained with vanilla DA strategies, and models trained with our proposed saliency-based DA strategies. As depicted in Fig. 5, the model trained with DA methods exhibits a higher diffusion index (DI), indicating a broader range of involved pixels. Our ADD, highlighted by black arrows, focuses on more accurate details. Notably, we observe two key findings: (1) Both vanilla and saliency-based DA methods enhance the model's ability to involve more pixels, leading to improved performance. (2) Saliency-based DA directs the model to concentrate on influential pixels rather than indiscriminately incorporating more pixels." + } + ] + } + ], + "index": 4 + }, + { + "type": "table", + "bbox": [ + 57, + 546, + 304, + 602 + ], + "blocks": [ + { + "bbox": [ + 55, + 519, + 295, + 541 + ], + "lines": [ + { + "bbox": [ + 55, + 519, + 295, + 541 + ], + "spans": [ + { + "bbox": [ + 55, + 519, + 295, + 541 + ], + "type": "text", + "content": "Table 4. Gradients comparison with and without global feature detector (GD) on the DIV2K validation dataset." + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 57, + 546, + 304, + 602 + ], + "lines": [ + { + "bbox": [ + 57, + 546, + 304, + 602 + ], + "spans": [ + { + "bbox": [ + 57, + 546, + 304, + 602 + ], + "type": "table", + "html": "
<table><tr><td>Method</td><td>Backbone</td><td>step = 1</td><td>step = 10</td><td>step = 20</td><td>step = 30</td><td>step = 50</td></tr>
<tr><td>LAM</td><td>EDSR</td><td>33.9k</td><td>108.7k</td><td>119.9k</td><td>120.0k</td><td>120.0k</td></tr>
<tr><td>w/ GD</td><td>EDSR</td><td>228.6k</td><td>419.2k</td><td>643.8k</td><td>784.9k</td><td>869.1k</td></tr>
<tr><td>LAM</td><td>RCAN</td><td>29.8k</td><td>660.0k</td><td>962.6k</td><td>969.5k</td><td>987.3k</td></tr>
<tr><td>w/ GD</td><td>RCAN</td><td>174.9k</td><td>494.1k</td><td>749.2k</td><td>815.7k</td><td>1291.3k</td></tr></table>
", + "image_path": "449ab4b7db79591dbf0897915193a8bc7b29d6b5c1c0609016e9e817434d84b1.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "table_body" + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 611, + 156, + 623 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 611, + 156, + 623 + ], + "spans": [ + { + "bbox": [ + 55, + 611, + 156, + 623 + ], + "type": "text", + "content": "4.6. Ablation Studies" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 54, + 629, + 295, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 629, + 295, + 713 + ], + "spans": [ + { + "bbox": [ + 54, + 629, + 295, + 713 + ], + "type": "text", + "content": "The effectiveness of the CAM was demonstrated in Sec. 4.2. To further evaluate the impact of the proposed global feature detector (GD), we conduct additional experiments using EDSR and RCAN as backbone models, as outlined in Sec. 4.1. In these ablation studies, we substitute the global feature detector with the absolute cumulative value of the entire image. The results, shown in Tab. 4, highlight that the" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 248, + 553, + 272 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 248, + 553, + 272 + ], + "spans": [ + { + "bbox": [ + 313, + 248, + 553, + 272 + ], + "type": "text", + "content": "inclusion of GD leads to smoother gradient changes, making uniformly sampled points more effective." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 277, + 533, + 290 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 277, + 533, + 290 + ], + "spans": [ + { + "bbox": [ + 313, + 277, + 533, + 290 + ], + "type": "text", + "content": "4.7. Extensions: Other Low-level Vision Tasks" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 295, + 555, + 426 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 295, + 555, + 426 + ], + "spans": [ + { + "bbox": [ + 313, + 295, + 555, + 426 + ], + "type": "text", + "content": "We explore the applicability of our method to various low-level vision tasks, specifically examining its effectiveness in JPEG artifact removal. Utilizing CNN-based EDSR and Transformer-based SwinIR as baselines, we train the models from scratch. Following the prior works [33], we create a synthetic dataset with a compression quality parameter " + }, + { + "bbox": [ + 313, + 295, + 555, + 426 + ], + "type": "inline_equation", + "content": "q" + }, + { + "bbox": [ + 313, + 295, + 555, + 426 + ], + "type": "text", + "content": " set to 10 (lower " + }, + { + "bbox": [ + 313, + 295, + 555, + 426 + ], + "type": "inline_equation", + "content": "q" + }, + { + "bbox": [ + 313, + 295, + 555, + 426 + ], + "type": "text", + "content": " indicating stronger artifacts) for color images. Results in Tab. 5 reveal substantial improvements in PSNR and SSIM metrics, particularly at low compression levels (" + }, + { + "bbox": [ + 313, + 295, + 555, + 426 + ], + "type": "inline_equation", + "content": "q" + }, + { + "bbox": [ + 313, + 295, + 555, + 426 + ], + "type": "text", + "content": "), highlighting the versatility of our method in benefiting various low-level vision tasks." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 426, + 555, + 469 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 426, + 555, + 469 + ], + "spans": [ + { + "bbox": [ + 313, + 426, + 555, + 469 + ], + "type": "text", + "content": "Table 5. 
Quantitative comparison of JPEG compression artifact reduction on the LIVE1 dataset. The best results are highlighted, where " + }, + { + "bbox": [ + 313, + 426, + 555, + 469 + ], + "type": "inline_equation", + "content": "q" + }, + { + "bbox": [ + 313, + 426, + 555, + 469 + ], + "type": "text", + "content": " denotes the compression level, with a smaller value indicating a higher compression level." + } + ] + } + ], + "index": 12 + }, + { + "type": "table", + "bbox": [ + 317, + 475, + 564, + 556 + ], + "blocks": [ + { + "bbox": [ + 317, + 475, + 564, + 556 + ], + "lines": [ + { + "bbox": [ + 317, + 475, + 564, + 556 + ], + "spans": [ + { + "bbox": [ + 317, + 475, + 564, + 556 + ], + "type": "table", + "html": "
<table><tr><td rowspan="2">Method</td><td colspan="2">q = 10</td><td colspan="2">q = 20</td><td colspan="2">q = 30</td></tr>
<tr><td>PSNR</td><td>SSIM</td><td>PSNR</td><td>SSIM</td><td>PSNR</td><td>SSIM</td></tr>
<tr><td>EDSR</td><td>30.14</td><td>0.8391</td><td>31.83</td><td>0.8840</td><td>32.45</td><td>0.8992</td></tr>
<tr><td>+ ADD</td><td>30.15</td><td>0.8394</td><td>32.31</td><td>0.8949</td><td>33.44</td><td>0.9173</td></tr>
<tr><td>SwinIR</td><td>29.86</td><td>0.8287</td><td>32.25</td><td>0.8909</td><td>33.69</td><td>0.9174</td></tr>
<tr><td>+ ADD</td><td>29.86</td><td>0.8285</td><td>32.56</td><td>0.8957</td><td>34.48</td><td>0.9287</td></tr></table>
", + "image_path": "af67edc4d0f0c61ba79e3a7e9cb9bd076b9a75fc703c19d72e97bcd79e90851e.jpg" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "table_body" + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 562, + 388, + 574 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 562, + 388, + 574 + ], + "spans": [ + { + "bbox": [ + 313, + 562, + 388, + 574 + ], + "type": "text", + "content": "5. Conclusion" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 313, + 582, + 555, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 582, + 555, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 582, + 555, + 713 + ], + "type": "text", + "content": "In this work, we introduce CAM and ADD, specifically designed for SR. Through a dedicated analysis, we reveal two new insights (i.e., involving more pixels and focusing on influential pixels rather than incorporating irrelevant pixels) for low-level tasks. Besides, We propose the key principle of the wider spectrum of degradation patterns for designing DA in low-level tasks. Experimental results underscore the effectiveness and adaptability of our method, significantly improving the performance of various SR tasks. Our work opens new avenues for exploring a more effective way to utilize image information in DA and low-level tasks." + } + ] + } + ], + "index": 15 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "text", + "content": "23108" + } + ] + } + ], + "index": 16 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 153, + 85 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 72, + 153, + 85 + ], + "spans": [ + { + "bbox": [ + 56, + 72, + 153, + 85 + ], + "type": "text", + "content": "Acknowledgments" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 91, + 296, + 127 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 91, + 296, + 127 + ], + "spans": [ + { + "bbox": [ + 55, + 91, + 296, + 127 + ], + "type": "text", + "content": "This work was supported by the Natural Science Foundation of China (Grant No. 62176119), and the Jiangsu Graduate Research Innovation Program (Grant No. KYCX24_0258)." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 137, + 115, + 149 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 137, + 115, + 149 + ], + "spans": [ + { + "bbox": [ + 56, + 137, + 115, + 149 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 57, + 157, + 296, + 713 + ], + "type": "list", + "angle": 0, + "index": 14, + "blocks": [ + { + "bbox": [ + 61, + 157, + 296, + 201 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 157, + 296, + 201 + ], + "spans": [ + { + "bbox": [ + 61, + 157, + 296, + 201 + ], + "type": "text", + "content": "[1] Naveed Akhtar and Mohammad A. A. K. Jalwana. Towards credible visual model interpretation with path attribution. In Proceedings of the International Conference on Machine Learning, ICML, pages 439-457, 2023. 
3" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 61, + 202, + 296, + 257 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 202, + 296, + 257 + ], + "spans": [ + { + "bbox": [ + 61, + 202, + 296, + 257 + ], + "type": "text", + "content": "[2] Tao Dai, Jianrui Cai, Yongbing Zhang, Shu-Tao Xia, and Lei Zhang. Second-order attention network for single image super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, pages 11065-11074, 2019. 1, 2" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 62, + 258, + 295, + 300 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 258, + 295, + 300 + ], + "spans": [ + { + "bbox": [ + 62, + 258, + 295, + 300 + ], + "type": "text", + "content": "[3] Xin Deng, Yutong Zhang, Mai Xu, Shuhang Gu, and Yiping Duan. Deep coupled feedback network for joint exposure fusion and image super-resolution. IEEE Trans. Image Process., 30:3098-3112, 2021. 1" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 62, + 302, + 295, + 335 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 302, + 295, + 335 + ], + "spans": [ + { + "bbox": [ + 62, + 302, + 295, + 335 + ], + "type": "text", + "content": "[4] Terrance Devries and Graham W. Taylor. Improved regularization of convolutional neural networks with cutout. ArXiv preprint, abs/1708.04552, 2017. 2" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 62, + 336, + 295, + 388 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 336, + 295, + 388 + ], + "spans": [ + { + "bbox": [ + 62, + 336, + 295, + 388 + ], + "type": "text", + "content": "[5] Chao Dong, Chen Change Loy, Kaiming He, and Xiaou Tang. Learning a deep convolutional network for image super-resolution. In Proceedings of the European Conference on Computer Vision, ECCV, pages 184-199, 2014. 1, 2" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 62, + 391, + 296, + 456 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 391, + 296, + 456 + ], + "spans": [ + { + "bbox": [ + 62, + 391, + 296, + 456 + ], + "type": "text", + "content": "[6] Xiaoyi Dong, Jianmin Bao, Dongdong Chen, Weiming Zhang, Nenghai Yu, Lu Yuan, Dong Chen, and Baining Guo. Cswin transformer: A general vision transformer backbone with cross-shaped windows. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, pages 12114-12124, 2022. 2" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 62, + 458, + 296, + 534 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 458, + 296, + 534 + ], + "spans": [ + { + "bbox": [ + 62, + 458, + 296, + 534 + ], + "type": "text", + "content": "[7] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In Proceedings of the International Conference on Learning Representations, ICLR, 2021. 1, 2" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 62, + 536, + 296, + 590 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 536, + 296, + 590 + ], + "spans": [ + { + "bbox": [ + 62, + 536, + 296, + 590 + ], + "type": "text", + "content": "[8] Ruicheng Feng, Jinjin Gu, Yu Qiao, and Chao Dong. Suppressing model overfitting for image super-resolution networks. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW, pages 1964-1973, 2019. 1, 2" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 62, + 591, + 296, + 635 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 591, + 296, + 635 + ], + "spans": [ + { + "bbox": [ + 62, + 591, + 296, + 635 + ], + "type": "text", + "content": "[9] Jinjin Gu and Chao Dong. Interpreting super-resolution networks with local attribution maps. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, pages 9199-9208, 2021. 1, 3" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 57, + 636, + 295, + 689 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 636, + 295, + 689 + ], + "spans": [ + { + "bbox": [ + 57, + 636, + 295, + 689 + ], + "type": "text", + "content": "[10] Takashi Isobe, Xu Jia, Shuhang Gu, Songjiang Li, Shengjin Wang, and Qi Tian. Video super-resolution with recurrent structure-detail network. In Proceedings of the European Conference on Computer Vision, ECCV, pages 645-660, 2020. 1" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 57, + 690, + 295, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 690, + 295, + 713 + ], + "spans": [ + { + "bbox": [ + 57, + 690, + 295, + 713 + ], + "type": "text", + "content": "[11] Andrei Kapishnikov, Subhashini Venugopalan, Besim Avci, Ben Wedin, Michael Terry, and Tolga Bolukbasi. Guided in" + } + ] + } + ], + "index": 13 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 73, + 555, + 713 + ], + "type": "list", + "angle": 0, + "index": 28, + "blocks": [ + { + "bbox": [ + 333, + 73, + 555, + 116 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 333, + 73, + 555, + 116 + ], + "spans": [ + { + "bbox": [ + 333, + 73, + 555, + 116 + ], + "type": "text", + "content": "tegrated gradients: An adaptive path method for removing noise. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, pages 5050-5058, 2021. 1, 3" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 316, + 118, + 554, + 171 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 118, + 554, + 171 + ], + "spans": [ + { + "bbox": [ + 316, + 118, + 554, + 171 + ], + "type": "text", + "content": "[12] Jiwon Kim, Jung Kwon Lee, and Kyoung Mu Lee. Accurate image super-resolution using very deep convolutional networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, pages 1646-1654, 2016. 1, 2" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 316, + 172, + 554, + 216 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 172, + 554, + 216 + ], + "spans": [ + { + "bbox": [ + 316, + 172, + 554, + 216 + ], + "type": "text", + "content": "[13] Jang-Hyun Kim, Wonho Choo, and Hyun Oh Song. Puzzle mix: Exploiting saliency and local statistics for optimal mixup. In Proceedings of the International Conference on Machine Learning, ICML, pages 5275-5285, 2020. 1, 2" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 316, + 217, + 554, + 259 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 217, + 554, + 259 + ], + "spans": [ + { + "bbox": [ + 316, + 217, + 554, + 259 + ], + "type": "text", + "content": "[14] Jang-Hyun Kim, Wonho Choo, Hosan Jeong, and Hyun Oh Song. Co-mixup: Saliency guided joint mixup with supermodular diversity. 
In Proceedings of the International Conference on Learning Representations, ICLR, 2021. 1, 2" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 316, + 261, + 554, + 316 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 261, + 554, + 316 + ], + "spans": [ + { + "bbox": [ + 316, + 261, + 554, + 316 + ], + "type": "text", + "content": "[15] Wei-Sheng Lai, Jia-Bin Huang, Narendra Ahuja, and Ming-Hsuan Yang. Deep laplacian pyramid networks for fast and accurate super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, pages 5835-5843, 2017. 1" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 317, + 316, + 554, + 370 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 316, + 554, + 370 + ], + "spans": [ + { + "bbox": [ + 317, + 316, + 554, + 370 + ], + "type": "text", + "content": "[16] Jingyun Liang, Jiezhang Cao, Guolei Sun, Kai Zhang, Luc Van Gool, and Radu Timofte. Swinir: Image restoration using swin transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, ICCV, pages 1833-1844, 2021. 1, 2, 5" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 316, + 372, + 554, + 426 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 372, + 554, + 426 + ], + "spans": [ + { + "bbox": [ + 316, + 372, + 554, + 426 + ], + "type": "text", + "content": "[17] Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, and Kyoung Mu Lee. Enhanced deep residual networks for single image super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW, pages 1132-1140, 2017. 5" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 316, + 426, + 554, + 480 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 426, + 554, + 480 + ], + "spans": [ + { + "bbox": [ + 316, + 426, + 554, + 480 + ], + "type": "text", + "content": "[18] Jihao Liu, Boxiao Liu, Hang Zhou, Hongsheng Li, and Yu Liu. Tokenmix: Rethinking image mixing for data augmentation in vision transformers. In Proceedings of the European Conference on Computer Vision, ECCV, pages 455-471, 2022. 1, 2" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 317, + 482, + 555, + 525 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 482, + 555, + 525 + ], + "spans": [ + { + "bbox": [ + 317, + 482, + 555, + 525 + ], + "type": "text", + "content": "[19] Yiqun Mei, Yuchen Fan, and Yuqian Zhou. Image super-resolution with non-local sparse attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, pages 3517-3526, 2021. 1, 2" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 317, + 526, + 554, + 581 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 526, + 554, + 581 + ], + "spans": [ + { + "bbox": [ + 317, + 526, + 554, + 581 + ], + "type": "text", + "content": "[20] Ze-Yu Mi and Yu-Bin Yang. Cutdem: Depth-aware enhanced multi-view image mixing for light field super-resolution. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP, pages 3340-3344. IEEE, 2024. 1" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 316, + 582, + 554, + 635 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 582, + 554, + 635 + ], + "spans": [ + { + "bbox": [ + 316, + 582, + 554, + 635 + ], + "type": "text", + "content": "[21] Mehdi S. M.
Sajjadi, Bernhard Schölkopf, and Michael Hirsch. Enhancenet: Single image super-resolution through automated texture synthesis. In Proceedings of the IEEE/CVF International Conference on Computer Vision, ICCV, pages 4501-4510, 2017. 2" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 316, + 636, + 554, + 680 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 636, + 554, + 680 + ], + "spans": [ + { + "bbox": [ + 316, + 636, + 554, + 680 + ], + "type": "text", + "content": "[22] Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. Learning important features through propagating activation differences. In Proceedings of the International Conference on Machine Learning, ICML, pages 3145-3153, 2017. 3" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 316, + 681, + 554, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 681, + 554, + 713 + ], + "spans": [ + { + "bbox": [ + 316, + 681, + 554, + 713 + ], + "type": "text", + "content": "[23] Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Viégas, and Martin Wattenberg. Smoothgrad: removing noise by adding noise. ArXiv preprint, abs/1706.03825, 2017." + } + ] + } + ], + "index": 27 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "text", + "content": "23109" + } + ] + } + ], + "index": 29 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 73, + 294, + 713 + ], + "type": "list", + "angle": 0, + "index": 12, + "blocks": [ + { + "bbox": [ + 56, + 73, + 294, + 115 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 73, + 294, + 115 + ], + "spans": [ + { + "bbox": [ + 56, + 73, + 294, + 115 + ], + "type": "text", + "content": "[24] Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014. 3" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 118, + 294, + 171 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 118, + 294, + 171 + ], + "spans": [ + { + "bbox": [ + 56, + 118, + 294, + 171 + ], + "type": "text", + "content": "[25] Jialu Sui, Xianping Ma, Xiaokang Zhang, and Man-On Pun. GCRDN: global context-driven residual dense network for remote sensing image superresolution. IEEE J. Sel. Top. Appl. Earth Obs. Remote. Sens., 16:4457-4468, 2023. 1, 2" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 173, + 294, + 215 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 173, + 294, + 215 + ], + "spans": [ + { + "bbox": [ + 56, + 173, + 294, + 215 + ], + "type": "text", + "content": "[26] Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In Proceedings of the International Conference on Machine Learning, ICML, pages 3319-3328, 2017. 1, 3" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 56, + 217, + 294, + 270 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 217, + 294, + 270 + ], + "spans": [ + { + "bbox": [ + 56, + 217, + 294, + 270 + ], + "type": "text", + "content": "[27] Radu Timofte, Rasmus Rothe, and Luc Van Gool. Seven ways to improve example-based single image super resolution. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, pages 1865-1873, 2016. 1, 2" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 56, + 272, + 294, + 326 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 272, + 294, + 326 + ], + "spans": [ + { + "bbox": [ + 56, + 272, + 294, + 326 + ], + "type": "text", + "content": "[28] A. F. M. Shahab Uddin, Mst. Sirazam Monira, Wheemyung Shin, TaeChoong Chung, and Sung-Ho Bae. Saliencymix: A saliency guided data augmentation strategy for better regularization. In Proceedings of the International Conference on Learning Representations, ICLR, 2021. 1, 2" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 56, + 327, + 294, + 392 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 327, + 294, + 392 + ], + "spans": [ + { + "bbox": [ + 56, + 327, + 294, + 392 + ], + "type": "text", + "content": "[29] Devesh Walawalkar, Zhiqiang Shen, Zechun Liu, and Marios Savvides. Attentive cutmix: An enhanced data augmentation approach for deep learning based image classification. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP, pages 3642-3646, 2020. 1, 2" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 56, + 393, + 294, + 458 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 393, + 294, + 458 + ], + "spans": [ + { + "bbox": [ + 56, + 393, + 294, + 458 + ], + "type": "text", + "content": "[30] Haofan Wang, Zifan Wang, Mengnan Du, Fan Yang, Zijian Zhang, Sirui Ding, Piotr Mardziel, and Xia Hu. Scorecam: Score-weighted visual explanations for convolutional neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW, pages 111-119, 2020. 6" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 56, + 460, + 294, + 514 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 460, + 294, + 514 + ], + "spans": [ + { + "bbox": [ + 56, + 460, + 294, + 514 + ], + "type": "text", + "content": "[31] Xintao Wang, Ke Yu, Shixiang Wu, Jinjin Gu, Yihao Liu, Chao Dong, Yu Qiao, and Chen Change Loy. ESRGAN: enhanced super-resolution generative adversarial networks. In Proceedings of the European Conference on Computer Vision Workshops, ECCVW, pages 63-79, 2018. 1, 2" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 56, + 515, + 294, + 568 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 515, + 294, + 568 + ], + "spans": [ + { + "bbox": [ + 56, + 515, + 294, + 568 + ], + "type": "text", + "content": "[32] Zeyu Xiao, Yutong Liu, Ruisheng Gao, and Zhiwei Xiong. Cutmib: Boosting light field super-resolution via multi-view image blending. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, pages 1672-1682, 2023. 1, 2" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 56, + 570, + 294, + 624 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 570, + 294, + 624 + ], + "spans": [ + { + "bbox": [ + 56, + 570, + 294, + 624 + ], + "type": "text", + "content": "[33] Jaejun Yoo, Namhyuk Ahn, and Kyung-Ah Sohn. Rethinking data augmentation for image super-resolution: A comprehensive analysis and a new strategy. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, pages 8372-8381, 2020. 
1, 2, 5, 8" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 56, + 625, + 294, + 689 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 625, + 294, + 689 + ], + "spans": [ + { + "bbox": [ + 56, + 625, + 294, + 689 + ], + "type": "text", + "content": "[34] Sangdoo Yun, Dongyoon Han, Sanghyuk Chun, Seong Joon Oh, Youngjoon Yoo, and Junsuk Choe. Cutmix: Regularization strategy to train strong classifiers with localizable features. In Proceedings of the IEEE/CVF International Conference on Computer Vision, ICCV, pages 6022-6031. IEEE, 2019. 1, 2" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 56, + 691, + 294, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 691, + 294, + 713 + ], + "spans": [ + { + "bbox": [ + 56, + 691, + 294, + 713 + ], + "type": "text", + "content": "[35] Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimiza" + } + ] + } + ], + "index": 11 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 73, + 553, + 375 + ], + "type": "list", + "angle": 0, + "index": 20, + "blocks": [ + { + "bbox": [ + 333, + 73, + 553, + 95 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 333, + 73, + 553, + 95 + ], + "spans": [ + { + "bbox": [ + 333, + 73, + 553, + 95 + ], + "type": "text", + "content": "tion. In Proceedings of the International Conference on Learning Representations, ICLR, 2018. 1, 2" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 316, + 96, + 553, + 139 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 96, + 553, + 139 + ], + "spans": [ + { + "bbox": [ + 316, + 96, + 553, + 139 + ], + "type": "text", + "content": "[36] Kai Zhang, Yawei Li, Wangmeng Zuo, Lei Zhang, Luc Van Gool, and Radu Timofte. Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell., 44(10):6360-6376, 2022. 1, 2" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 316, + 141, + 553, + 173 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 141, + 553, + 173 + ], + "spans": [ + { + "bbox": [ + 316, + 141, + 553, + 173 + ], + "type": "text", + "content": "[37] Qing-Long Zhang, Lu Rao, and Yubin Yang. Group-cam: Group score-weighted visual explanations for deep convolutional networks. ArXiv preprint, abs/2103.13859, 2021. 6" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 316, + 175, + 553, + 228 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 175, + 553, + 228 + ], + "spans": [ + { + "bbox": [ + 316, + 175, + 553, + 228 + ], + "type": "text", + "content": "[38] Yulun Zhang, Kunpeng Li, Kai Li, Lichen Wang, Bineng Zhong, and Yun Fu. Image super-resolution using very deep residual channel attention networks. In Proceedings of the European Conference on Computer Vision, ECCV, pages 294-310, 2018. 5" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 316, + 230, + 553, + 284 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 230, + 553, + 284 + ], + "spans": [ + { + "bbox": [ + 316, + 230, + 553, + 284 + ], + "type": "text", + "content": "[39] Yulun Zhang, Yapeng Tian, Yu Kong, Bineng Zhong, and Yun Fu. Residual dense network for image super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, pages 2472-2481, 2018. 
2" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 316, + 286, + 553, + 328 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 286, + 553, + 328 + ], + "spans": [ + { + "bbox": [ + 316, + 286, + 553, + 328 + ], + "type": "text", + "content": "[40] Yulun Zhang, Yapeng Tian, Yu Kong, Bineng Zhong, and Yun Fu. Residual dense network for image restoration. IEEE Trans. Pattern Anal. Mach. Intell., 43(7):2480-2495, 2021. 1, 2" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 316, + 331, + 553, + 375 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 331, + 553, + 375 + ], + "spans": [ + { + "bbox": [ + 316, + 331, + 553, + 375 + ], + "type": "text", + "content": "[41] Zhun Zhong, Liang Zheng, Guoliang Kang, Shaozi Li, and Yi Yang. Random erasing data augmentation. In Proceedings of the AAAI Conference on Artificial Intelligence, AAAI, pages 13001-13008, 2020. 2" + } + ] + } + ], + "index": 19 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "type": "text", + "content": "23110" + } + ] + } + ], + "index": 21 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2025/ADU_ Adaptive Detection of Unknown Categories in Black-Box Domain Adaptation/a3a5f623-9c8e-40c7-93d2-175e52310d81_content_list.json b/2025/ADU_ Adaptive Detection of Unknown Categories in Black-Box Domain Adaptation/a3a5f623-9c8e-40c7-93d2-175e52310d81_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..ada64692132e83c624ee85727d92ec328fc1e2f1 --- /dev/null +++ b/2025/ADU_ Adaptive Detection of Unknown Categories in Black-Box Domain Adaptation/a3a5f623-9c8e-40c7-93d2-175e52310d81_content_list.json @@ -0,0 +1,1429 @@ +[ + { + "type": "text", + "text": "ADU: Adaptive Detection of Unknown Categories in Black-Box Domain Adaptation", + "text_level": 1, + "bbox": [ + 246, + 130, + 751, + 176 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Yushan Lai, Guowen Li, Haoyuan Liang, Juepeng Zheng*, and Zhiyu Ye School of Artificial Intelligence, Sun Yat-Sen University", + "bbox": [ + 212, + 203, + 785, + 239 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "{laiysh6,ligw8,lianghy68,yezhy26}@mail2.sysu.edu.cn,zhengjp8@mail.sysu.edu.cn", + "bbox": [ + 142, + 241, + 854, + 257 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 246, + 291, + 328, + 306 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Black-box Domain Adaptation (BDA) utilizes a black-box predictor of the source domain to label target domain data, addressing privacy concerns in Unsupervised Domain Adaptation (UDA). However, BDA assumes identical label sets across domains, which is unrealistic. To overcome this limitation, we propose a study on BDA with unknown classes in the target domain. It uses a black-box predictor to label target data and identify \"unknown\" categories, without requiring access to source domain data or predictor parameters, thus addressing both data privacy and category shift issues in traditional UDA. 
Existing methods face two main challenges: (i) Noisy pseudo-labels in knowledge distillation (KD) accumulate prediction errors, and (ii) relying on a preset threshold fails to adapt to varying category shifts. To address these, we propose ADU, a framework that allows the target domain to autonomously learn pseudo-labels guided by quality and use an adaptive threshold to identify \"unknown\" categories. Specifically, ADU consists of Selective Amplification Knowledge Distillation (SAKD) and Entropy-Driven Label Differentiation (EDLD). SAKD improves KD by focusing on high-quality pseudo-labels, mitigating the impact of noisy labels. EDLD categorizes pseudo-labels by quality and applies tailored training strategies to distinguish \"unknown\" categories, improving detection accuracy and adaptability. Extensive experiments show that ADU achieves state-of-the-art results, outperforming the best existing method by $3.1\\%$ on VisDA in the OPBDA scenario.", + "bbox": [ + 89, + 321, + 485, + 744 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1. Introduction", + "text_level": 1, + "bbox": [ + 91, + 775, + 223, + 792 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Unsupervised domain adaptation (UDA) [12] aims to transfer knowledge from a well-labeled source domain to an unlabeled target domain, which can ease the burden of manual labeling. Recently, UDA has been applied in a range of computer vision tasks, including image classification", + "bbox": [ + 89, + 801, + 483, + 878 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "[13, 33, 48], object detection [7, 20, 56] and semantic segmentation [6, 35, 47]. However, UDA methods may raise concerns about data privacy and portability issues due to their requirement for access to raw source data and source model parameters. Therefore, source-free domain adaptation (SFDA) [19, 27, 58] is proposed to protect the source data privacy. In the SFDA scenario, only the source model is provided to the target domain without access to source data. However, it still faces the issue of source information privacy, which can be compromised through techniques such as white-box attacks [45, 46]. To mitigate these concerns, Black-box Domain Adaptation (BDA) [28, 61] has been proposed recently, as shown in Figure 1(a), which aims to learn a model solely using the unlabeled data from the target domain, based on the predictions from a black-box predictor trained on the source data. This setting can effectively mitigate data privacy issues related to data and model parameter leakage.", + "bbox": [ + 511, + 292, + 906, + 564 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "However, traditional BDA [28, 57, 60] always assumes that the source and target domains share identical category sets, which frequently fails to apply in practice. In real-world scenarios, the target domain is typically unlabeled, making it difficult to satisfy this assumption due to potential category shifts. Currently, there are two UDA settings that involve unknown classes in the target domain: Open-Set Domain Adaptation (OSDA) [36, 42] and Open-Partial Domain Adaptation (OPDA) [25, 41, 59]. OSDA deals with scenarios where the target domain contains private classes that are unknown to the source domain, while OPDA handles cases where both the source and target domains each have their own private classes. Black-box Domain Adaptation has been applied to OPDA recently [9].
As shown in Figure 1(b), this setting is designed to learn a robust model for the target domain that not only recognizes classes shared by two domains but also identifies \"unknown\" categories absent in the source domain despite having no information about difference of two label sets.", + "bbox": [ + 511, + 566, + 908, + 851 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Currently, only one study has addressed the above problem. [9] applies knowledge distillation to train the target model to mimic source predictor outputs and uses a man", + "bbox": [ + 511, + 854, + 908, + 902 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "CVF", + "bbox": [ + 106, + 2, + 181, + 42 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore.", + "bbox": [ + 236, + 0, + 810, + 46 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "*Corresponding author", + "bbox": [ + 109, + 887, + 235, + 900 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "30588", + "bbox": [ + 478, + 944, + 519, + 957 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/c7e73379d0d7153b199e6d7b6e853ccf121aa10dbaaf0d288d16850c7ba1e900.jpg", + "image_caption": [ + "(a) Black-box Domain Adaptation" + ], + "image_footnote": [], + "bbox": [ + 96, + 88, + 519, + 305 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/679d31d87a5b79f77bb55edf66b522db859480439b2b9c951e3e153089bc6abb.jpg", + "image_caption": [ + "(b) Open-patial Black-box Domain Adaptation", + "Figure 1. Black-box domain adaptation and Open-partial black-box domain adaptation settings with respect to label sets of source and target domains (red labels indicate common labels of two domains). Compared to BDA, ADU is able to deal with BDA with unknown classes in the target by adaptively detecting unknown categories." + ], + "image_footnote": [], + "bbox": [ + 529, + 88, + 903, + 305 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "ually preset threshold to identify \"unknown\" categories. Though inspiring, it still has the following limitations. (i) Due to domain and category shifts between the source and target domains, predictions from the source model are inevitably noisy. Directly utilizing these noisy pseudo-labels will accumulate model prediction errors, making the adaptation process unreliable. (ii) Employing a preset threshold fails to accommodate the variability and complexity of category shifts in different target domains, which is inadequate for accurately detecting \"unknown\" classes across diverse domains, often resulting in misclassification and reduced adaptability.", + "bbox": [ + 88, + 402, + 483, + 585 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "To address the issues mentioned above, we propose a simple yet effective framework called ADU, specifically designed for Open-Set BDA (OSBDA) and Open-Partial BDA (OPBDA). ADU incorporates two core modules: Selective Amplification Knowledge Distillation (SAKD) and Entropy-Driven Label Differentiation (EDLD). For the first challenge, SAKD enhances traditional knowledge distillation techniques, specifically tailoring KD to BDA with unknown classes in the target domain by amplifying learning from high-quality pseudo-labels produced by source API. 
This refinement ensures that the target model emphasizes learning from high-quality pseudo-labels, effectively mitigating the impact of noisy data. For the second challenge, EDLD enhances the framework's ability to handle diverse domain conditions. Initially, EDLD categorizes pseudo-labels based on their quality and then applies tailored training strategies to widen the distance between \"unknown\" classes and the others, while minimizing the impact of noisy pseudo-labels. This adaptive differentiation of labels heightens the effectiveness of employing the av-", + "bbox": [ + 91, + 599, + 483, + 900 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "erage entropy of the target model's predictions as a threshold. Consequently, this refined approach significantly improves the detection accuracy of \"unknown\" categories and adapts more adeptly to category shifts across various target domains. Additionally, we iteratively refine the pseudo-labels generated by the source API, which can significantly enhance their quality.", + "bbox": [ + 511, + 402, + 903, + 508 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Our main contributions in this paper can be summarized as follows:", + "bbox": [ + 511, + 510, + 903, + 539 + ], + "page_idx": 1 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1. We propose Selective Amplification Knowledge Distillation (SAKD), a refined knowledge distillation technique specifically designed for the OPBDA and OSBDA scenarios, which can effectively mitigate the impact of noisy pseudo-labels.", + "2. We introduce Entropy-Driven Label Differentiation (EDLD), which categorizes pseudo-labels by quality and applies customized training strategies to enhance the distinction between \"unknown\" and known categories, thereby improving detection accuracy and domain adaptability through adaptive entropy-based thresholding.", + "3. Extensive experiments on four public benchmarks demonstrate the superior performance of our proposed method compared with existing SOTA works, surpassing the best existing method by $3.1\\%$ on VisDA in the OPBDA scenario." + ], + "bbox": [ + 511, + 541, + 905, + 781 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2. Related work", + "text_level": 1, + "bbox": [ + 511, + 799, + 650, + 814 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Black-box domain adaptation. Unsupervised Domain Adaptation (UDA) [12] aims to adapt a model trained on a labeled source domain to an unlabeled target domain. Many early methods relied on techniques such as instance weighting [52, 55], feature transformation [18, 26, 43], and feature", + "bbox": [ + 511, + 825, + 905, + 900 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "30589", + "bbox": [ + 478, + 944, + 517, + 955 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "space [30, 51]. Despite their effectiveness, these methods require access to source domain data, raising privacy and portability concerns [22]. To address privacy issues associated with UDA, Source-Free Domain Adaptation (SFDA) methods [19, 27, 58] have been proposed. These methods adapt models using only the source model and unlabeled target data, eliminating the need for source data during adaptation. Techniques such as entropy minimization [2] and pseudo-labeling [62] have been explored. However, SFDA methods still face potential privacy risks due to the use of generative models and other techniques that might inadvertently reveal source data characteristics.
Therefore, Black-box Domain Adaptation (BDA) [28, 61] has emerged as a solution to further mitigate privacy concerns by only accessing the source model's outputs without any internal details. This approach ensures better privacy preservation compared to traditional UDA and SFDA methods. Recent methods such as DINE [28] and BETA [57] have made significant strides in this area. Nevertheless, they struggle with inconsistent label sets between domains.", + "bbox": [ + 89, + 90, + 480, + 391 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Open-set and open-partial domain adaptation. Closed-set domain adaptation assumes identical label sets between source and target domains, focusing on minimizing distribution shifts using techniques like discrepancy minimization [31, 32] and adversarial training [8, 15]. However, these methods often struggle when label sets are not perfectly aligned. To address this issue, Partial Domain Adaptation (PDA) assumes that only the source domain contains private classes, with methods such as SAN [4] employing class-wise domain discriminators, and ETN [5] using progressive weighting schemes. Meanwhile, Open-Set Domain Adaptation (OSDA) handles scenarios where the target domain has private classes unknown to the source. Besides, Open-Partial Domain Adaptation (OPDA) addresses the case where both domains have their own private classes. UAN [59] quantifies sample-level uncertainty using entropy and domain similarity, and Fu et al. [11] combine entropy, confidence, and consistency for better uncertainty measurement. To address the challenges faced by black-box domain adaptation, [9] combines OPDA with BDA to handle both category shift and privacy concerns. It applies knowledge distillation to train the target model to emulate source predictor outputs, using a preset threshold to identify \"unknown\" categories. Though inspiring, it still faces significant limitations regarding pseudo-label quality and the detection of \"unknown\" categories. To address these issues, we propose the ADU framework, applying it to Open-Set BDA (OSBDA) and Open-Partial BDA (OPBDA) to mitigate the impact of noisy pseudo-labels and enhance adaptability to category shifts across varied target domains. This approach provides a robust solution to the limitations of existing methods.", + "bbox": [ + 91, + 397, + 482, + 881 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Learning with noisy labels. Deep learning models often", + "bbox": [ + 109, + 885, + 480, + 900 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "overfit on noisy labels, leading to poor generalization [60]. To address this, various approaches have been proposed, including noise-robust losses [23, 44], noise-transition matrix estimation [14], clean sample selection [53], and loss reweighting [34]. However, these methods often require noise-free validation sets or make assumptions about the noise distribution, which are impractical in BDA settings. In contrast, methods such as [29, 34] avoid assuming any specific noise distribution and instead leverage the noisy scores predicted over the source training classes. Recently, NEL [1] introduced a novel approach by integrating a Negative Learning loss with a pseudo-label refinement framework that leverages ensembling techniques. Negative Learning [23] is an indirect learning method that employs complementary labels to address noise issues effectively.
In our work, we use Negative Learning to refine high-quality pseudo-labels without ensembling, reducing computational cost and making our approach more flexible and robust for OSBDA and OPBDA.", + "bbox": [ + 511, + 90, + 903, + 362 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3. Methodology", + "text_level": 1, + "bbox": [ + 513, + 376, + 648, + 393 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "In this paper, we are provided with a target domain $\\mathcal{D}_t = \\left\\{x_t^i\\right\\}_{i=1}^{N_t}$ with $N_t$ unlabeled samples where $x_t^i \\in \\mathcal{X}_t$, and a black-box predictor $f_s$ trained by a source domain $\\mathcal{D}_s = \\left\\{\\left(x_s^i, y_s^i\\right)\\right\\}_{i=1}^{N_s}$ with $N_s$ labeled samples where $x_s^i \\in \\mathcal{X}_s$. We use $L_s$ and $L_t$ to denote the label spaces of the source and target domains, respectively. In general, model $f$ consists of a feature extractor $G$ and a fully connected layer-based classifier $C$. We have no access to the source domain data $\\mathcal{D}_s$ and the parameters of the source model $f_s$. Only a black-box predictor trained on the source domain, i.e., an API, is available. The objective is to leverage the predictions of the source-domain API to learn a mapping model $f_t$ that can label the target samples with either one of the $L_s$ labels or the \"unknown\" label. The overall workflow is shown in Fig. 2.", + "bbox": [ + 511, + 402, + 906, + 633 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.1. Selective amplification knowledge distillation", + "text_level": 1, + "bbox": [ + 511, + 642, + 893, + 657 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Knowledge distillation (KD) [17] has been widely applied to address the black-box domain adaptation problem [9, 28, 57], as it enables the transfer of knowledge from one model (teacher) to another (student) by guiding the target model (student) to emulate the predictions of the source model (teacher). This approach is particularly suitable for BDA scenarios, where only the predictions of the source model are accessible. To better leverage the information available from the source domain's API, we use a knowledge distillation loss with both the source model's probabilities and hard pseudo-labels. This can be formulated as:", + "bbox": [ + 511, + 664, + 905, + 830 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal{L}_{KD} = \\mathbb{E}_{x_t \\sim \\mathcal{X}_t} \\left[ CE\\left(\\tilde{\\boldsymbol{y}}_t, p_t\\right) + CE\\left(p_s, p_t\\right) \\right], \\tag{1}\n$$\n", + "text_format": "latex", + "bbox": [ + 562, + 857, + 906, + 875 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "where $CE(\\cdot, \\cdot)$ denotes the cross entropy function, and", + "bbox": [ + 532, + 885, + 906, + 901 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "30590", + "bbox": [ + 478, + 944, + 519, + 957 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/e78f2281412e7b1985b1f6c1d82c971484177a84c607abf68d737a3067bb3a8d.jpg", + "image_caption": [ + "Figure 2. An overview of the proposed ADU framework. We utilize the black-box source predictor solely as an API service, obtaining only the source predictions from it. \"PL\" in the figure means pseudo-labels."
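To make the distillation objective concrete, the following is a minimal PyTorch-style sketch of Eq. (1), written with an exponent theta on the hard-label coefficient so that it also covers the SAKD loss of Eq. (3) introduced next; theta = 1 exactly recovers the plain KD loss. The function name, tensor shapes, and the epsilon smoothing constant are our own assumptions, not the authors' implementation.

```python
import torch

def sakd_loss(p_s: torch.Tensor, p_t: torch.Tensor, theta: float = 1.0) -> torch.Tensor:
    """Eq. (1) when theta == 1; the SAKD loss of Eq. (3) when theta > 1.

    p_s: (B, |L_s|) softmax outputs returned by the black-box source API.
    p_t: (B, |L_s|) softmax outputs of the target model f_t.
    """
    eps = 1e-8
    log_pt = torch.log(p_t + eps)
    c_hat = p_s.argmax(dim=1, keepdim=True)          # hard pseudo-label per sample
    log_pt_hat = log_pt.gather(1, c_hat).squeeze(1)  # log p_t^{c_hat}
    ps_hat = p_s.gather(1, c_hat).squeeze(1)         # p_s^{c_hat}
    # (1 + p_s^{c_hat})^theta amplifies confident pseudo-labels (Eq. (3));
    # with theta = 1 this equals CE(y~_t, p_t) plus the c_hat part of CE(p_s, p_t).
    hard_term = (1.0 + ps_hat) ** theta * log_pt_hat
    # The remaining classes keep their soft weights p_s^c, as in Eq. (2).
    soft_term = (p_s * log_pt).sum(dim=1) - ps_hat * log_pt_hat
    return -(hard_term + soft_term).mean()
```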
+ ], + "image_footnote": [], + "bbox": [ + 102, + 88, + 893, + 354 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "$\\tilde{\\boldsymbol{y}}_t$ is a one-hot pseudo-label derived from $f_{s}(x_{t})$. In addition, we use $p_s$ and $p_t$ to replace $f_{s}(x_{t})$ and $f_{t}(x_{t})$ for simplicity. However, due to domain and category shifts, the predictions from the source model are inevitably noisy. Consequently, Eq. (1) processes information from these predictions equally, which can adversely affect the performance of the target model. To solve this issue, we propose Selective Amplification Knowledge Distillation (SAKD), a method that enhances knowledge distillation by leveraging the confidence of pseudo-labels produced by the source model.", + "bbox": [ + 88, + 422, + 483, + 588 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "First, we rewrite Eq. (1) to derive the following formulation:", + "bbox": [ + 89, + 589, + 483, + 619 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\begin{aligned} \\mathcal{L}_{KD} &= -\\mathbb{E}_{x_t \\sim \\mathcal{X}_t} \\Big( \\log p_t^{\\hat{c}} + \\sum_{c=1}^{|L_s|} p_s^c \\log p_t^c \\Big) \\\\ &= -\\mathbb{E}_{x_t \\sim \\mathcal{X}_t} \\Big[ (1 + p_s^{\\hat{c}}) \\log p_t^{\\hat{c}} + \\sum_{\\substack{c=1 \\\\ c \\neq \\hat{c}}}^{|L_s|} p_s^c \\log p_t^c \\Big], \\end{aligned} \\tag{2}\n$$\n", + "text_format": "latex", + "bbox": [ + 107, + 643, + 482, + 741 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $\\hat{c}$ represents the label predicted by the source model, defined as $\\hat{c} = \\arg \\max_c p_s^c$. Subsequently, we formulate the SAKD loss by incorporating a modulating parameter $\\theta \\geq 1$ into the term $(1 + p_s^{\\hat{c}})$, which only pertains to $\\hat{c}$:", + "bbox": [ + 89, + 752, + 483, + 828 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal{L}_{SAKD} = -\\mathbb{E}_{x_t \\sim \\mathcal{X}_t} \\left[ \\left(1 + p_s^{\\hat{c}}\\right)^{\\theta} \\log p_t^{\\hat{c}} + \\sum_{\\substack{c=1 \\\\ c \\neq \\hat{c}}}^{|L_s|} p_s^c \\log p_t^c \\right]. \\tag{3}\n$$\n", + "text_format": "latex", + "bbox": [ + 98, + 851, + 483, + 902 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Discussion of SAKD loss: For simplicity, we consider the case with a single sample, where the SAKD loss simplifies to:", + "bbox": [ + 511, + 422, + 906, + 467 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal{L}_{SAKD} = -\\left[ \\left(1 + p_s^{\\hat{c}}\\right)^{\\theta} \\log p_t^{\\hat{c}} + \\sum_{c=1, c \\neq \\hat{c}}^{|L_s|} p_s^c \\log p_t^c \\right] \\tag{4}\n$$\n", + "text_format": "latex", + "bbox": [ + 537, + 484, + 906, + 531 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Next, we apply the generalized binomial theorem to expand $\\left(1 + p_s^{\\hat{c}}\\right)^\\theta$ as follows:", + "bbox": [ + 511, + 553, + 905, + 588 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\begin{aligned} \\mathcal{L}_{SAKD} &= -\\Big[ \\sum_{k=0}^{\\infty} \\binom{\\theta}{k} \\left(p_s^{\\hat{c}}\\right)^k \\log p_t^{\\hat{c}} + \\sum_{c=1, c \\neq \\hat{c}}^{|L_s|} p_s^c \\log p_t^c \\Big] \\\\ &= \\mathcal{L}_{KD} - \\Big[
\\sum_{k=1}^{\\infty} \\binom{\\theta}{k} \\left(p_s^{\\hat{c}}\\right)^k - p_s^{\\hat{c}} \\Big] \\log p_t^{\\hat{c}} \\\\ &\\approx \\mathcal{L}_{KD} - \\Big[ (\\theta - 1) p_s^{\\hat{c}} + \\frac{\\theta(\\theta - 1)}{2} \\left(p_s^{\\hat{c}}\\right)^2 \\Big] \\log p_t^{\\hat{c}} \\end{aligned} \\tag{5}\n$$\n", + "text_format": "latex", + "bbox": [ + 519, + 619, + 903, + 760 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "In Eq. (5), the first term $\\mathcal{L}_{\\mathrm{KD}}$ represents the original KD loss in Eq. (2), while the second term introduces an additional term, which is positive and solely depends on the target class $\\hat{c}$. We demonstrate that this additional term enables the SAKD loss to capture more information from pseudo-labels with high confidence, meaning those with higher values of $p_s^{\\hat{c}}$, thereby reducing the impact of noisy pseudo-labels. A comprehensive proof of this claim is provided in the supplemental material.", + "bbox": [ + 511, + 763, + 906, + 901 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "30591", + "bbox": [ + 478, + 944, + 517, + 955 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.2. Entropy-driven label differentiation", + "text_level": 1, + "bbox": [ + 89, + 90, + 403, + 107 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "As stated above, the outputs from the source model are highly likely to be inaccurate and noisy due to the domain shift [3] and category shift. Even if we propose a promising solution in Eq. (3), we still face a tough challenge to detect the \"unknown\" categories, which means we should widen the difference between \"unknown\" and others. [59] shows that entropy is an effective tool for detecting \"unknown\" classes in domain adaptation. Entropy quantifies the prediction uncertainty, and smaller entropy represents a more certain prediction. In order to effectively address the influence brought by the category shift, we implement an automatic threshold determined by the average entropy. The prediction process can be formulated as follows:", + "bbox": [ + 89, + 112, + 485, + 308 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\ny_t = \\begin{cases} \\arg\\max_c p_t^c & H(p_t) < w \\\\ \\text{unknown} & H(p_t) \\geq w, \\end{cases} \\tag{6}\n$$\n", + "text_format": "latex", + "bbox": [ + 169, + 319, + 483, + 359 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $H(p_{t})$ and $w$ are computed as:", + "bbox": [ + 109, + 364, + 359, + 380 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\nH(p_t) = -\\sum_{c=1}^{|L_s|} p_t^c \\log p_t^c, \\tag{7}\n$$\n", + "text_format": "latex", + "bbox": [ + 200, + 388, + 482, + 433 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\nw = \\mathbb{E}_{x_t \\sim \\mathcal{X}_t} H(p_t). \\tag{8}\n$$\n", + "text_format": "latex", + "bbox": [ + 218, + 446, + 482, + 463 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Taking the average as a threshold eliminates the requirement of per-dataset hyper-parameter tuning and makes our selection process highly adaptive. As we employ entropy as a threshold to detect \"unknown\" categories, we still face the challenge of widening the gap between \"unknown\" and other classes.
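As a sketch of how Eqs. (6)-(8) could be realized (assuming the target model's softmax outputs for the whole target set are available in one tensor; the unknown_label placeholder is ours, not the paper's):

```python
import torch

def predict_with_adaptive_threshold(p_t: torch.Tensor, unknown_label: int = -1) -> torch.Tensor:
    """Assign known labels or "unknown" by the entropy rule of Eq. (6).

    p_t: (N, |L_s|) softmax outputs of the target model on all target samples.
    """
    entropy = -(p_t * torch.log(p_t + 1e-8)).sum(dim=1)  # Eq. (7)
    w = entropy.mean()                                   # Eq. (8): average entropy as threshold
    labels = p_t.argmax(dim=1)
    labels[entropy >= w] = unknown_label                 # high-entropy samples -> "unknown"
    return labels
```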
In order to address the issue, we propose Entropy-Driven Label Differentiation (EDLD) to make the \"unknown\" distinguishable and enhance the quality of pseudo-labels with high certainty. We use entropy to calculate the uncertainty level of pseudo-labels: higher entropy indicates a more uncertain prediction. We define the EDLD loss by dividing pseudo-labels into high-quality (HQ) and low-quality (LQ) groups by their entropy; the loss is defined as follows:", + "bbox": [ + 89, + 467, + 483, + 664 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal{L}_{EDLD} = \\mathbb{E}_{x_t \\sim \\mathcal{X}_t} \\left[ \\begin{cases} \\mathcal{L}_{HQ}(p_t) & \\text{if } H(p_t) < w \\\\ \\mathcal{L}_{LQ}(p_t) & \\text{if } H(p_t) \\geq w \\end{cases} \\right], \\tag{9}\n$$\n", + "text_format": "latex", + "bbox": [ + 107, + 684, + 483, + 728 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal{L}_{HQ} = H(p_t) + \\mathcal{L}_{NL}(p_t, \\bar{y}_t), \\quad \\mathcal{L}_{LQ} = -H(p_t). \\tag{10}\n$$\n", + "text_format": "latex", + "bbox": [ + 99, + 748, + 483, + 766 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "In the EDLD module, we not only use the entropy loss to widen the entropy gap between high-quality and low-quality pseudo-labels but also use a negative learning loss [21] to refine the high-quality pseudo-labels; the negative loss is defined as:", + "bbox": [ + 89, + 773, + 483, + 851 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal{L}_{NL}(p_t, \\bar{y}_t) = -\\sum_{c=1}^{|L_s|} \\bar{\\mathbf{y}}_t^c \\log\\left(1 - p_t^c\\right), \\tag{11}\n$$\n", + "text_format": "latex", + "bbox": [ + 158, + 861, + 483, + 904 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $\\bar{y}_t \\in \\{1, \\dots, |L_s|\\} \\setminus \\{y_t\\}$ is a complementary label chosen randomly from the remaining labels, and $\\bar{\\mathbf{y}}_t$ is the one-hot label derived from $\\bar{y}_t$. Eq. (11) pushes the probability of the complementary label toward zero, which in turn increases the probability values of the other classes and can effectively refine the high-quality pseudo-labels.", + "bbox": [ + 511, + 90, + 906, + 196 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.3. Adaptive refinement of pseudo-labels", + "text_level": 1, + "bbox": [ + 511, + 205, + 834, + 220 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "To further mitigate the impact of noise in the pseudo-labels generated by the source model, we employ an exponential moving average (EMA) of the target predictions. This allows for a gradual and controlled update of the pseudo-labels supplied by the source model at each iteration. The update process is defined as follows:", + "bbox": [ + 511, + 226, + 906, + 316 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\np_s \\leftarrow \\gamma p_s + (1 - \\gamma) f_t(x_t), \\quad \\forall x_t \\in \\mathcal{X}_t, \\tag{12}\n$$\n", + "text_format": "latex", + "bbox": [ + 571, + 332, + 906, + 349 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $\\gamma$ is a smoothing factor that determines the extent to which the pseudo-labels should adapt to the most recent predictions from the target model.
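A minimal PyTorch-style sketch of Eqs. (9)-(12), assuming the threshold w is estimated from the current batch and using the 0.01 weighting of the negative-learning term reported later in Section 4.1; the random complementary-label sampling is one simple choice, not necessarily the authors':

```python
import torch

def edld_loss(p_t: torch.Tensor, nl_ratio: float = 0.01) -> torch.Tensor:
    """Entropy-driven label differentiation, Eqs. (9)-(11) (sketch)."""
    eps = 1e-8
    h = -(p_t * torch.log(p_t + eps)).sum(dim=1)        # per-sample entropy, Eq. (7)
    w = h.mean()                                        # adaptive threshold, Eq. (8)
    hq = h < w                                          # high-quality pseudo-labels
    # Eq. (11): random complementary label y_bar != the argmax prediction.
    y_hat = p_t.argmax(dim=1)
    shift = torch.randint(1, p_t.size(1), y_hat.shape, device=p_t.device)
    y_bar = (y_hat + shift) % p_t.size(1)
    nl = -torch.log(1.0 - p_t.gather(1, y_bar.unsqueeze(1)).squeeze(1) + eps)
    loss_hq = h[hq] + nl_ratio * nl[hq]                 # Eq. (10): sharpen + negative learning
    loss_lq = -h[~hq]                                   # Eq. (10): push low-quality toward "unknown"
    return torch.cat([loss_hq, loss_lq]).mean()         # Eq. (9)

# Eq. (12): EMA refinement of the stored API pseudo-labels at each iteration, e.g.
# p_s_bank[idx] = gamma * p_s_bank[idx] + (1 - gamma) * p_t.detach()
```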
A higher value of $\\gamma$ places more weight on the existing pseudo-labels, while a lower value allows quicker adaptation to new information.", + "bbox": [ + 511, + 353, + 906, + 428 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "This strategy refines the pseudo-labels iteratively, balancing consistency with adaptability. By adjusting the pseudo-labels in a controlled manner, the model can better handle noise and gradually align the source model's outputs with the distribution of the target data. The EMA strategy ensures that updates are not overly reactive to fluctuations, thus enhancing the robustness of the model and improving performance in scenarios with diverse target domains.", + "bbox": [ + 511, + 429, + 908, + 550 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.4. The overall objective", + "text_level": 1, + "bbox": [ + 511, + 556, + 712, + 573 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Integrating the objectives introduced in Eqs. (3) and (9), we obtain the final loss function as follows:", + "bbox": [ + 511, + 579, + 905, + 609 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal{L} = \\mathcal{L}_{SAKD} + \\lambda \\mathcal{L}_{EDLD}, \\tag{13}\n$$\n", + "text_format": "latex", + "bbox": [ + 620, + 625, + 903, + 641 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $\\lambda$ is a hyper-parameter empirically set to 1.0, controlling the relative importance of $\\mathcal{L}_{SAKD}$ and $\\mathcal{L}_{EDLD}$ during distillation.", + "bbox": [ + 511, + 645, + 906, + 689 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4. Experiments", + "text_level": 1, + "bbox": [ + 511, + 703, + 645, + 720 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4.1. Setup", + "text_level": 1, + "bbox": [ + 511, + 727, + 596, + 743 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Datasets. To assess the effectiveness of our approach, we conduct experiments using the Office31 [40], OfficeHome [50], VisDA [38], and DomainNet [39] datasets. Office31 is a popular benchmark for UDA, consisting of three domains (Amazon, Webcam, DSLR) in 31 categories. OfficeHome is a more challenging benchmark for its distant domain shifts, which consists of four domains (Art, Clipart, Product, Real World) in 65 categories. VisDA is a large-scale benchmark containing two 12-class domains: a source domain of 152k synthetic images and a target domain of", + "bbox": [ + 511, + 750, + 908, + 902 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "30592", + "bbox": [ + 478, + 944, + 519, + 957 + ], + "page_idx": 4 + }, + { + "type": "table", + "img_path": "images/6829a6c811b7add73ec392459a7f7cd68a866d58d8909be1f4c6f87861a43edf.jpg", + "table_caption": [ + "Table 1. H-score (\\%) comparison in OPBDA scenario on the OfficeHome dataset." + ], + "table_footnote": [], + "table_body": "
MethodAr→ClAr→PrAr→ReCl→ArCl→PrCl→RePr→ArPr→ClPr→ReRe→ArRe→ClRe→PrAvg.
No Adapt.55.867.372.864.262.370.565.752.171.766.156.769.264.5
DINE [28]45.346.154.651.045.352.449.944.552.152.446.745.748.8
BETA [57]45.947.454.849.345.150.149.345.553.551.545.848.848.9
SEAL [54]40.646.847.844.542.745.247.340.047.145.546.746.645.1
UB²DA [9]60.969.676.374.469.276.574.560.376.274.162.071.170.4
ADU61.272.777.970.372.577.375.962.084.773.264.174.972.2
", + "bbox": [ + 93, + 114, + 903, + 237 + ], + "page_idx": 5 + }, + { + "type": "table", + "img_path": "images/7d9540ed93890a1b45b14cbe33bc0bb0619a2eae693df7962650a9e9edcc6097.jpg", + "table_caption": [ + "Table 2. H-score (%) comparison in OPBDA scenario on the Office31, VisDA, and DomainNet datasets, respectively." + ], + "table_footnote": [], + "table_body": "
MethodOffice31VisDADomainNet
A→DA→WD→AD→WW→AW→DAvg.S→RP→RP→SR→PR→SS→PS→RAvg.
No Adapt.79.971.980.191.578.789.882.037.752.835.043.632.535.751.841.9
DINE [28]50.351.456.663.054.060.155.943.548.439.543.438.137.645.642.1
BETA [57]52.454.051.661.253.457.655.045.549.240.343.138.238.048.442.9
SEAL [54]73.870.351.155.647.057.059.145.652.739.943.638.439.249.743.9
UB2DA [9]80.978.292.692.689.487.986.945.257.147.254.844.041.451.549.3
ADU87.585.287.094.483.890.588.148.759.847.852.546.642.756.451.0
", + "bbox": [ + 93, + 273, + 903, + 417 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "55k real images from Microsoft COCO. DomainNet is the largest DA dataset with about 0.6 million images. Like [11, 24], we conduct experiments on three subsets of it, i.e., Painting, Real, and Sketch. Following existing works [11, 24, 59], we separate the label set into three parts: common $(|L_s \\cap L_t|)$, source-private $(|L_s - L_t|)$ and target-private $(|L_t - L_s|)$. The classes are separated according to their alphabetical order. We evaluate ADU in OPBDA using the four datasets, and in OSBDA using the first three datasets.", + "bbox": [ + 89, + 441, + 482, + 575 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Evaluation protocols. The trade-off between the accuracy on known and unknown classes is important when evaluating OSDA and OPDA methods. We evaluate methods using the H-score [11]. The H-score is the harmonic mean of the accuracy on common classes $(\\mathrm{Acc}_c)$ and the accuracy on the \"unknown\" classes $(\\mathrm{Acc}_u)$ and is defined as:", + "bbox": [ + 89, + 577, + 483, + 667 + ], + "page_idx": 5 + }, + { + "type": "equation", + "text": "\n$$\nh = 2 \\cdot \\frac{\\mathrm{Acc}_c \\cdot \\mathrm{Acc}_u}{\\mathrm{Acc}_c + \\mathrm{Acc}_u} \\tag{14}\n$$\n", + "text_format": "latex", + "bbox": [ + 210, + 676, + 482, + 710 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "This metric provides a comprehensive evaluation by ensuring that improvements on one set of classes do not come at the expense of the other.", + "bbox": [ + 89, + 719, + 482, + 777 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Implementation details. All experiments are implemented in PyTorch [37]. For fair comparisons to previous methods, we use the same backbone of ResNet50 [16] pre-trained on ImageNet [10] as the feature extractor in all experiments. For the source model, we fine-tune the model on source examples optimizing a cross-entropy loss and then treat it as a black box, requiring only the input-output interface of this model in our experiments. We", + "bbox": [ + 89, + 780, + 483, + 901 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "use the SGD optimizer with a learning rate of 0.01, a momentum of 0.9, a weight decay of 5e-4, and a batch size of 128. Concerning the parameters in ADU, we set $\\theta = 1.1$, $\\gamma = 0.6$ and $\\lambda = 1.0$ for all datasets and tasks. Additionally, following [23], we set the ratio of $\\mathcal{L}_{NL}(p_t,\\bar{y}_t)$ to $H(p_{t})$ as 0.01:1 in Eq. (10).", + "bbox": [ + 511, + 441, + 906, + 531 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Baselines. We compare the proposed ADU with (i) BDA: DINE [28], BETA [57], SEAL [54]; (ii) OPBDA: $\\mathbf{UB}^2\\mathbf{DA}$ [9]. These methods represent the state-of-the-art in their respective settings. Notably, since black-box DA methods lack the capability to identify \"unknown\" categories, we apply the same average-entropy threshold to them as in our setting. The term \"No Adapt.\" refers to the baseline scenario where the source model is used directly for target label prediction, without any form of adaptation.", + "bbox": [ + 511, + 532, + 908, + 670 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.2. Results", + "text_level": 1, + "bbox": [ + 511, + 681, + 607, + 695 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Results for OPBDA.
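Before walking through the numbers: the tables that follow report the H-score of Eq. (14), which is straightforward to reproduce; a small helper of ours (accuracies may be given as fractions or percentages):

```python
def h_score(acc_common: float, acc_unknown: float) -> float:
    """Harmonic mean of common-class and "unknown"-class accuracy, Eq. (14)."""
    return 2.0 * acc_common * acc_unknown / (acc_common + acc_unknown)

# e.g. h_score(80.0, 60.0) ~= 68.6: a method cannot score well by ignoring
# either the shared classes or the "unknown" class.
```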
We first perform experiments under the most challenging scenario, namely OPBDA, in which both the source and target domains contain private categories. The results for the OfficeHome dataset are presented in Table 1, while those for the Office31, VisDA, and DomainNet datasets are shown in Table 2. As illustrated in these tables, our proposed ADU method achieves a new state-of-the-art, surpassing all existing methods across the four datasets. Notably, ADU consistently improves the H-score compared to the \"No Adapt.\" baseline in each experimental setting, with a significant increase of $11.0\\%$ on the VisDA dataset. This improvement demonstrates that our method effectively mitigates the influence of noise from", + "bbox": [ + 511, + 704, + 908, + 902 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "30593", + "bbox": [ + 478, + 944, + 517, + 955 + ], + "page_idx": 5 + }, + { + "type": "table", + "img_path": "images/5aeded3920a09d3f3f818a880df0f9b5d2220ff2224638f10f7fae9322a1ff88.jpg", + "table_caption": [ + "Table 3. H-score (\\%) comparison in OSBDA scenario on the OfficeHome dataset." + ], + "table_footnote": [], + "table_body": "
MethodAr→ClAr→PrAr→ReCl→ArCl→PrCl→RePr→ArPr→ClPr→ReRe→ArRe→ClRe→PrAvg.
No Adapt.59.668.175.767.166.770.463.854.955.771.458.670.565.2
DINE [28]47.045.852.249.747.150.048.243.850.452.646.046.848.3
BETA [57]46.648.354.947.748.450.549.142.951.850.245.248.948.7
SEAL [54]43.346.547.143.745.945.445.340.845.643.541.346.544.6
UB2DA [9]65.570.475.567.869.374.471.256.775.070.763.369.869.1
ADU66.070.577.472.270.175.069.063.676.173.464.175.271.1
", + "bbox": [ + 93, + 114, + 903, + 234 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/22ca625704e23f7c721369acd8cf3977af66ba4c18b6589e3807e6bfd1ffc5df.jpg", + "table_caption": [ + "Table 4. H-score $(\\%)$ comparison in OSBDA scenario on the Office31 and VisDA datasets, respectively." + ], + "table_footnote": [], + "table_body": "
MethodOffice31VisDA
A→DA→WD→AD→WW→AW→DAvg.S→R
No Adapt.81.480.885.388.178.088.083.644.6
DINE [28]60.554.656.869.056.361.559.843.1
BETA [57]48.353.054.360.654.457.354.748.3
SEAL [54]50.443.753.444.854.050.849.540.6
UB2DA [9]85.787.491.089.285.184.187.148.1
ADU86.984.989.791.386.389.288.150.8
", + "bbox": [ + 99, + 297, + 478, + 409 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "source model predictions on the target model and accurately identifies unknown categories within the target data. An examination of Tables 1 and 2 reveals that methods such as DINE [28], BETA [57], and SEAL [54] perform poorly compared to our approach and $\\mathrm{UB}^2\\mathrm{DA}$ [9], with performance even falling below the \"No Adapt.\" baseline on the Office31 and OfficeHome datasets. This underperformance is likely because these methods lack designs tailored specifically to the OPBDA scenario, which hinders their ability to effectively differentiate unknown categories from other classes. These results underscore the importance of our ADU approach, which is specifically designed for OPBDA. When compared to $\\mathrm{UB}^2\\mathrm{DA}$ [9], ADU achieves higher H-scores on the Office31, OfficeHome, VisDA, and DomainNet datasets, with improvements of $1.2\\%$, $1.8\\%$, $3.5\\%$, and $1.7\\%$, respectively. These gains further highlight the effectiveness of our proposed approach.", + "bbox": [ + 88, + 441, + 482, + 700 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Results for OSBDA. We subsequently conduct experiments under OSBDA scenarios, where only the target domain includes categories absent from the source domain. The results for the OfficeHome dataset are provided in Table 3, while those for the Office31 and VisDA datasets are presented in Table 4. As shown in these tables, our proposed ADU method achieves performance that surpasses the current state-of-the-art. Specifically, ADU consistently outperforms the \"No Adapt.\" baseline in terms of H-score across all experimental settings. Notably, for the $\\mathrm{Pr} \\rightarrow \\mathrm{Re}$ scenario, it achieves an improvement of $20.4\\%$. This substantial enhancement demonstrates that our method effectively reduces the influence of noise from source model pre-", + "bbox": [ + 89, + 704, + 482, + 900 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/5e363a7f229f409a873965440ca20af475b9beab5552586c94d7b55ac24dbf15.jpg", + "table_caption": [ + "Table 5. Ablation Study. H-score $(\\%)$ of different variants in OPBDA scenarios. $\\mathcal{L}_{HQ}^{1}$, $\\mathcal{L}_{HQ}^{2}$, $\\mathcal{L}_{LQ}$ refer to the objectives corresponding to the negative loss in $\\mathcal{L}_{HQ}$, entropy loss in $\\mathcal{L}_{HQ}$, and loss associated with low-quality labels, respectively." + ], + "table_footnote": [], + "table_body": "
$\\mathcal{L}_{HQ}^{1}$$\\mathcal{L}_{HQ}^{2}$$\\mathcal{L}_{LQ}$Office31OfficeHomeAvg.
A→WD→WAr→ClCl→RePr→ArRe→Cl
---82.992.661.272.971.562.073.7
✓--84.393.561.173.072.062.774.6
-✓-83.592.761.774.372.162.574.5
--✓84.193.861.474.172.662.674.8
✓✓-84.394.162.773.771.761.374.6
✓-✓83.194.263.374.772.362.675.0
-✓✓85.194.062.674.672.162.875.2
✓✓✓85.294.461.277.375.964.176.3
", + "bbox": [ + 517, + 325, + 906, + 445 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "dictions on the target model, allowing for accurate identification of unknown categories within the target data. Compared to $\\mathrm{UB^{2}DA}$ [9], ADU achieves higher H-scores on the Office31, OfficeHome and VisDA datasets, with improvements of $1.0\\%$, $2.0\\%$, and $2.7\\%$, respectively. These results further validate the effectiveness of our proposed approach.", + "bbox": [ + 511, + 470, + 906, + 564 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.3. Analysis", + "text_level": 1, + "bbox": [ + 511, + 575, + 616, + 590 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Ablation study. To comprehensively assess the individual contribution of the components comprising our method, we conduct extensive ablation studies on two tasks from the Office31 dataset and four tasks from the OfficeHome dataset in OPBDA scenarios. The results are summarized in Table 5. Here, $\\mathcal{L}_{HQ}^{1}$, $\\mathcal{L}_{HQ}^{2}$, $\\mathcal{L}_{LQ}$ refer to the objectives corresponding to the negative loss in $\\mathcal{L}_{HQ}$, entropy loss in $\\mathcal{L}_{HQ}$, and loss associated with low-quality labels, respectively. More detailed results can be found in the supplementary material. It is important to emphasize that in all ablation experiments, we consistently employ the SAKD loss, which is a critical component of the ADU framework. Without it, the model would be unable to transfer knowledge from the source domain to the target model effectively. From the ablation study results, we can draw the following conclusions: (i) The introduction of any component alongside the SAKD loss leads to performance improvements, underscoring the vital role of the EDLD module. (ii) The full EDLD loss, which includes the negative loss term, yields better performance compared to its version without the negative loss,", + "bbox": [ + 511, + 598, + 906, + 902 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "30594", + "bbox": [ + 478, + 944, + 519, + 957 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/4484827b5b8c63032e103d251105fe0b4bb9899dce6ca76aba4e8a7b83e52bb8.jpg", + "image_caption": [ + "Figure 3. Parameter sensitivity analysis for six tasks. (a-b) plot the H-score with different values of $\\lambda$, $\\theta$; (c) plots the H-score, $\\mathrm{Acc}_c$, $\\mathrm{Acc}_u$ with different values of $\\gamma$. The default values of these hyperparameters are set to $\\lambda = 1.0$, $\\theta = 1.1$, and $\\gamma = 0.6$." + ], + "image_footnote": [], + "bbox": [ + 98, + 94, + 364, + 246 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/08c225f96d48980126e7f969dfe663e036f9dfaf482f30dae474a9f76f098d72.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 377, + 94, + 630, + 246 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/0adff36860afc575c9c033912b7b501e3e31ed6569084e9ee11cda382153ac60.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 645, + 94, + 895, + 246 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/1355f2a705c8410e1d496dcc7c31f228a86a6862ec4e2355b6692ffc730caac4.jpg", + "image_caption": [ + "(a) No Adapt.", + "Figure 4. t-SNE feature visualization of target representations in $\\mathrm{D}\\rightarrow \\mathrm{A}$ OPBDA task.
Blue dots represent target \"known\" examples $(L_{s}\\cap L_{t})$, while red dots are \"unknown\" examples $(L_{t} - L_{s})$." + ], + "image_footnote": [], + "bbox": [ + 96, + 318, + 272, + 458 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/bdf150df8dc7c29414b5907ff567505aba668e7afc1ab553079aa505144a5981.jpg", + "image_caption": [ + "(b) ADU" + ], + "image_footnote": [], + "bbox": [ + 295, + 318, + 473, + 455 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "demonstrating the effectiveness of incorporating this term. (iii) The integration of all components results in the highest H-scores, providing clear evidence of the synergy and efficacy of the combined modules.", + "bbox": [ + 88, + 566, + 482, + 626 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Feature visualization. Fig. 4 displays the visualization of the target features with t-SNE [49], providing a clear representation of the feature distribution. As expected, ADU achieves excellent alignment between the source and target domain features. Taking a closer look at the visualization, it is evident that ADU excels in distinguishing the \"unknown\" categories from the other classes. This improvement aligns well with the intended function of the EDLD module, which is designed to enhance the separation of \"unknown\" categories from known categories. This result further highlights the effectiveness of ADU in handling the challenges posed by unknown classes in black-box domain adaptation tasks.", + "bbox": [ + 88, + 627, + 482, + 808 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Parameter sensitivity analysis. To better assess the impact of different hyperparameters, we conduct a detailed sensitivity analysis. We investigate the sensitivity of the parameters $\\lambda$, $\\theta$, and $\\gamma$ by performing experiments on two tasks from the Office31 dataset and four tasks from the OfficeHome dataset in OPBDA scenarios, as shown in Fig. 3.", + "bbox": [ + 88, + 810, + 482, + 900 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "The parameter $\\lambda$ is varied over the range [0.0, 0.2, 0.5, 1.0, 2.0, 5.0], $\\theta$ spans [1.00, 1.05, 1.10, 1.15, 1.20, 1.25], and $\\gamma$ is explored within the range [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]. It is evident that the results are stable around the selected values of $\\lambda = 1.0$, $\\theta = 1.1$, and $\\gamma = 0.6$. Additionally, as shown in Fig. 3(b), we examine the effect of varying $\\theta$ in Eq. (3). The results around the chosen parameter $\\theta = 1.1$ remain stable, and we also observe that increasing $\\theta$ slightly from 1.0 leads to an improvement in the H-score, thereby highlighting the effectiveness of Eq. (3). Finally, we analyze the impact of $\\gamma$. As shown in Fig. 3(c), there is an inverse relationship between $\\mathrm{Acc}_c$ and $\\mathrm{Acc}_u$. However, when $\\gamma = 0.6$, the two metrics reach a relatively balanced state, and at this point, the H-score achieves an optimal result.", + "bbox": [ + 511, + 316, + 906, + 529 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "5. Conclusion", + "text_level": 1, + "bbox": [ + 511, + 551, + 633, + 566 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "In this paper, we introduce the ADU model, a framework specifically designed to tackle Black-box Domain Adaptation with unknown classes in the target domain.
ADU integrates two key innovations: Selective Amplification Knowledge Distillation (SAKD) and Entropy-Driven Label Differentiation (EDLD). SAKD enhances model accuracy by selectively amplifying high-confidence pseudolabels, thereby effectively mitigating the influence of noisy pseudo-labels. Meanwhile, EDLD improves the recognition of unknown categories through an entropy-driven threshold, expanding the difference between unknown categories and others and bolstering the robustness of the method across a range of diverse target domains. Experiments across four benchmark datasets demonstrate that ADU outperforms existing state-of-the-art approaches, highlighting its exceptional adaptability and efficacy, setting a new benchmark for future research in the field.", + "bbox": [ + 511, + 579, + 906, + 835 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Acknowledgement. This work was supported by the National Natural Science Foundation of China (Grant T2125006 and 42401415) and Jiangsu Innovation Capacity Building Program (Project BM2022028).", + "bbox": [ + 511, + 839, + 908, + 901 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "30595", + "bbox": [ + 478, + 944, + 517, + 957 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 91, + 90, + 187, + 104 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[1] Waqar Ahmed, Pietro Morerio, and Vittorio Murino. Cleaning noisy labels by negative ensemble learning for source-free unsupervised domain adaptation. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, pages 1616-1625, 2022. 3", + "[2] Mathilde Bateson, Hoel Kervadec, Jose Dolz, Herve Lombaert, and Ismail Ben Ayed. Source-free domain adaptation for image segmentation. Medical Image Analysis, 82: 102617, 2022. 3", + "[3] Shai Ben-David, John Blitzer, Koby Crammer, and Fernando Pereira. Analysis of representations for domain adaptation. Advances in neural information processing systems, 19, 2006. 5", + "[4] Zhangjie Cao, Mingsheng Long, Jianmin Wang, and Michael I Jordan. Partial transfer learning with selective adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2724-2732, 2018. 3", + "[5] Zhangjie Cao, Kaichao You, Mingsheng Long, Jianmin Wang, and Qiang Yang. Learning to transfer examples for partial domain adaptation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2985-2994, 2019. 3", + "[6] Minghao Chen, Hongyang Xue, and Deng Cai. Domain adaptation for semantic segmentation with maximum squares loss. In Proceedings of the IEEE/CVF international conference on computer vision, pages 2090-2099, 2019. 1", + "[7] Yuhua Chen, Wen Li, Christos Sakaridis, Dengxin Dai, and Luc Van Gool. Domain adaptive faster r-cnn for object detection in the wild. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3339-3348, 2018. 1", + "[8] Shuhao Cui, Shuhui Wang, Junbao Zhuo, Chi Su, Qingming Huang, and Qi Tian. Gradually vanishing bridge for adversarial domain adaptation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 12455-12464, 2020. 3", + "[9] Bin Deng, Yabin Zhang, Hui Tang, Changxing Ding, and Kui Jia. On universal black-box domain adaptation. arXiv preprint arXiv:2104.04665, 2021. 
1, 3, 6, 7", + "[10] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248-255. IEEE, 2009. 6", + "[11] Bo Fu, Zhangjie Cao, Mingsheng Long, and Jianmin Wang. Learning to detect open classes for universal domain adaptation. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XV 16, pages 567-583. Springer, 2020. 3, 6", + "[12] Yaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. In International conference on machine learning, pages 1180-1189. PMLR, 2015. 1, 2", + "[13] Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario March, and Victor Lempitsky. Domain-adversarial training" + ], + "bbox": [ + 93, + 114, + 483, + 900 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "of neural networks. Journal of machine learning research, 17(59):1-35, 2016. 1", + "[14] Jacob Goldberger and Ehud Ben-Reuven. Training deep neural-networks using a noise adaptation layer. In International conference on learning representations, 2022. 3", + "[15] Rui Gong, Wen Li, Yuhua Chen, and Luc Van Gool. Dlow: Domain flow for adaptation and generalization. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2477-2486, 2019. 3", + "[16] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016. 6", + "[17] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015. 3", + "[18] Judy Hoffman, Erik Rodner, Jeff Donahue, Brian Kulis, and Kate Saenko. Asymmetric and category invariant feature transformations for domain adaptation. International journal of computer vision, 109:28-41, 2014. 2", + "[19] Yunzhong Hou and Liang Zheng. Source free domain adaptation with image translation. arXiv preprint arXiv:2008.07514, 2020. 1, 3", + "[20] Mehran Khodabandeh, Arash Vahdat, Mani Ranjbar, and William G Macready. A robust learning approach to domain adaptive object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 480-490, 2019. 1", + "[21] Youngdong Kim, Junho Yim, Juseung Yun, and Junmo Kim. Nlnl: Negative learning for noisy labels. In Proceedings of the IEEE/CVF international conference on computer vision, pages 101-110, 2019. 5", + "[22] Youngeun Kim, Donghyeon Cho, Kyeongtak Han, Priyadarshini Panda, and Sungeun Hong. Domain adaptation without source data. IEEE Transactions on Artificial Intelligence, 2(6):508-518, 2021. 3", + "[23] Youngdong Kim, Juseung Yun, Hyounguk Shon, and Junmo Kim. Joint negative and positive learning for noisy labels. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9442-9451, 2021. 3, 6", + "[24] Guangrui Li, Guoliang Kang, Yi Zhu, Yunchao Wei, and Yi Yang. Domain consensus clustering for universal domain adaptation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9757-9766, 2021. 6", + "[25] Qingmei Li, Yibin Wen, Juepeng Zheng, Yuxiang Zhang, and Haohuan Fu. Hyunida: Breaking label set constraints for universal domain adaptation in cross-scene hyperspectral image classification. 
IEEE Transactions on Geoscience and Remote Sensing, 2024. 1", + "[26] Shuang Li, Shiji Song, Gao Huang, Zhengming Ding, and Cheng Wu. Domain invariant and class discriminative feature learning for visual domain adaptation. IEEE transactions on image processing, 27(9):4260-4273, 2018. 2", + "[27] Jian Liang, Dapeng Hu, and Jiashi Feng. Do we really need to access the source data? source hypothesis transfer for unsupervised domain adaptation. In International conference on machine learning, pages 6028-6039. PMLR, 2020. 1, 3" + ], + "bbox": [ + 517, + 93, + 906, + 900 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "30596", + "bbox": [ + 478, + 945, + 519, + 955 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[28] Jian Liang, Dapeng Hu, Jiashi Feng, and Ran He. Dine: Domain adaptation from single and multiple black-box predictors. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 8003-8013, 2022. 1, 3, 6, 7", + "[29] Mattia Litrico, Alessio Del Bue, and Pietro Morerio. Guiding pseudo-labels with uncertainty estimation for source-free unsupervised domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7640-7650, 2023. 3", + "[30] Long Liu, Lechao Yang, and Bin Zhu. Sparse feature space representation: A unified framework for semi-supervised and domain adaptation learning. Knowledge-Based Systems, 156:43-61, 2018. 3", + "[31] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3431-3440, 2015. 3", + "[32] Mingsheng Long, Yue Cao, Jianmin Wang, and Michael Jordan. Learning transferable features with deep adaptation networks. In International conference on machine learning, pages 97-105. PMLR, 2015. 3", + "[33] Mingsheng Long, Zhangjie Cao, Jianmin Wang, and Michael I Jordan. Conditional adversarial domain adaptation. Advances in neural information processing systems, 31, 2018. 1", + "[34] Bekhzod Olimov, Jeonghong Kim, and Anand Paul. Dcbt-net: Training deep convolutional neural networks with extremely noisy labels. IEEE Access, 8:220482-220495, 2020. 3", + "[35] Fei Pan, Inkyu Shin, Francois Rameau, Seokju Lee, and In So Kweon. Unsupervised intra-domain adaptation for semantic segmentation through self-supervision. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 3764-3773, 2020. 1", + "[36] Pau Panareda Busto and Juergen Gall. Open set domain adaptation. In Proceedings of the IEEE international conference on computer vision, pages 754-763, 2017. 1", + "[37] Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. 2017. 6", + "[38] Xingchao Peng, Ben Usman, Neela Kaushik, Judy Hoffman, Dequan Wang, and Kate Saenko. Visda: The visual domain adaptation challenge. arXiv preprint arXiv:1710.06924, 2017.5", + "[39] Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, and Bo Wang. Moment matching for multi-source domain adaptation. In Proceedings of the IEEE/CVF international conference on computer vision, pages 1406-1415, 2019. 5", + "[40] Kate Saenko, Brian Kulis, Mario Fritz, and Trevor Darrell. Adapting visual category models to new domains. 
In Computer Vision-ECCV 2010: 11th European Conference on Computer Vision, Heraklion, Crete, Greece, September 5-11, 2010, Proceedings, Part IV 11, pages 213-226. Springer, 2010. 5" + ], + "bbox": [ + 91, + 92, + 482, + 898 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[41] Kuniaki Saito and Kate Saenko. Ovanet: One-vs-all network for universal domain adaptation. In Proceedings of the IEEE/cvf international conference on computer vision, pages 9000-9009, 2021. 1", + "[42] Kuniaki Saito, Shohei Yamamoto, Yoshitaka Ushiku, and Tatsuya Harada. Open set domain adaptation by backpropagation. In Proceedings of the European conference on computer vision (ECCV), pages 153-168, 2018. 1", + "[43] Chen Shen and Yuhong Guo. Unsupervised heterogeneous domain adaptation with sparse feature transformation. In Asian conference on machine learning, pages 375-390. PMLR, 2018. 2", + "[44] Hwanjun Song, Minseok Kim, Dongmin Park, Yooju Shin, and Jae-Gil Lee. Learning from noisy labels with deep neural networks: A survey. IEEE transactions on neural networks and learning systems, 2022. 3", + "[45] Simen Thys, Wiebe Van Ranst, and Toon Goedemé. Fooling automated surveillance cameras: adversarial patches to attack person detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops, pages 0–0, 2019. 1", + "[46] Florian Tramér, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. Ensemble adversarial training: Attacks and defenses. arXiv preprint arXiv:1705.07204, 2017. 1", + "[47] Yi-Hsuan Tsai, Wei-Chih Hung, Samuel Schulter, Kiyuk Sohn, Ming-Hsuan Yang, and Manmohan Chandraker. Learning to adapt structured output space for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7472-7481, 2018. 1", + "[48] Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. Adversarial discriminative domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7167-7176, 2017. 1", + "[49] Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of machine learning research, 9 (11), 2008. 8", + "[50] Hemanth Venkateswara, Jose Eusebio, Shayok Chakraborty, and Sethuraman Panchanathan. Deep hashing network for unsupervised domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5018-5027, 2017. 5", + "[51] Riccardo Volpi, Pietro Morerio, Silvio Savarese, and Vittorio Murino. Adversarial feature augmentation for unsupervised domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5495-5504, 2018. 3", + "[52] Rui Wang, Masao Utiyama, Lemao Liu, Kehai Chen, and Eichiro Sumita. Instance weighting for neural machine translation domain adaptation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1482-1488, 2017. 2", + "[53] Hongxin Wei, Lei Feng, Xiangyu Chen, and Bo An. Combating noisy labels by agreement: A joint training method with co-regularization. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 13726–13735, 2020. 
3" + ], + "bbox": [ + 516, + 92, + 903, + 898 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "30597", + "bbox": [ + 478, + 944, + 517, + 955 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[54] Mingxuan Xia, Junbo Zhao, Gengyu Lyu, Zenan Huang, Tianlei Hu, Gang Chen, and Haobo Wang. A separation and alignment framework for black-box domain adaptation. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 16005-16013, 2024. 6, 7", + "[55] Rui Xia, Zhenchun Pan, and Feng Xu. Instance weighting for domain adaptation via trading off sample selection bias and variance. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, Stockholm, Sweden, pages 13-19, 2018. 2", + "[56] Minghao Xu, Hang Wang, Bingbing Ni, Qi Tian, and Wenjun Zhang. Cross-domain detection via graph-induced prototype alignment. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12355-12364, 2020. 1", + "[57] Jianfei Yang, Xiangyu Peng, Kai Wang, Zheng Zhu, Jiashi Feng, Lihua Xie, and Yang You. Divide to adapt: Mitigating confirmation bias for domain adaptation of black-box predictors. arXiv preprint arXiv:2205.14467, 2022. 1, 3, 6, 7", + "[58] Shiqi Yang, Yaxing Wang, Joost Van De Weijer, Luis Herranz, and Shangling Jui. Unsupervised domain adaptation without source data by casting a bait. arXiv preprint arXiv:2010.12427, 1(2):5, 2020. 1, 3", + "[59] Kaichao You, Mingsheng Long, Zhangjie Cao, Jianmin Wang, and Michael I Jordan. Universal domain adaptation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2720-2729, 2019. 1, 3, 5, 6", + "[60] Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning (still) requires rethinking generalization. Communications of the ACM, 64(3):107-115, 2021. 1, 3", + "[61] Haojian Zhang, Yabin Zhang, Kui Jia, and Lei Zhang. Unsupervised domain adaptation of black-box source models. arXiv preprint arXiv:2101.02839, 2021. 1, 3", + "[62] Siqi Zhang, Lu Zhang, and Zhiyong Liu. Refined pseudo labeling for source-free domain adaptive object detection. In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1-5. IEEE, 2023. 3" + ], + "bbox": [ + 91, + 92, + 480, + 654 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "30598", + "bbox": [ + 478, + 944, + 517, + 955 + ], + "page_idx": 10 + } +] \ No newline at end of file diff --git a/2025/ADU_ Adaptive Detection of Unknown Categories in Black-Box Domain Adaptation/a3a5f623-9c8e-40c7-93d2-175e52310d81_model.json b/2025/ADU_ Adaptive Detection of Unknown Categories in Black-Box Domain Adaptation/a3a5f623-9c8e-40c7-93d2-175e52310d81_model.json new file mode 100644 index 0000000000000000000000000000000000000000..037440f3fd522f330995e9e729663a9238e3ba17 --- /dev/null +++ b/2025/ADU_ Adaptive Detection of Unknown Categories in Black-Box Domain Adaptation/a3a5f623-9c8e-40c7-93d2-175e52310d81_model.json @@ -0,0 +1,2158 @@ +[ + [ + { + "type": "header", + "bbox": [ + 0.107, + 0.003, + 0.182, + 0.043 + ], + "angle": 0, + "content": "CVF" + }, + { + "type": "header", + "bbox": [ + 0.237, + 0.001, + 0.812, + 0.047 + ], + "angle": 0, + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. 
Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." + }, + { + "type": "title", + "bbox": [ + 0.248, + 0.131, + 0.753, + 0.177 + ], + "angle": 0, + "content": "ADU: Adaptive Detection of Unknown Categories in Black-Box Domain Adaptation" + }, + { + "type": "text", + "bbox": [ + 0.213, + 0.204, + 0.786, + 0.24 + ], + "angle": 0, + "content": "Yushan Lai, Guowen Li, Haoyuan Liang, Juepeng Zheng*, and Zhiyu Ye School of Artificial Intelligence, Sun Yat-Sen University" + }, + { + "type": "text", + "bbox": [ + 0.143, + 0.242, + 0.856, + 0.258 + ], + "angle": 0, + "content": "{laiysh6,ligw8,lianghy68,yezhy26}@mail2.sysu.edu.cn,zhengjp8@mail.sysu.edu.cn" + }, + { + "type": "title", + "bbox": [ + 0.248, + 0.292, + 0.329, + 0.308 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.323, + 0.486, + 0.745 + ], + "angle": 0, + "content": "Black-box Domain Adaptation (BDA) utilizes a black-box predictor of the source domain to label target domain data, addressing privacy concerns in Unsupervised Domain Adaptation (UDA). However, BDA assumes identical label sets across domains, which is unrealistic. To overcome this limitation, we propose a study on BDA with unknown classes in the target domain. It uses a black-box predictor to label target data and identify \"unknown\" categories, without requiring access to source domain data or predictor parameters, thus addressing both data privacy and category shift issues in traditional UDA. Existing methods face two main challenges: (i) Noisy pseudo-labels in knowledge distillation (KD) accumulate prediction errors, and (ii) relying on a preset threshold fails to adapt to varying category shifts. To address these, we propose ADU, a framework that allows the target domain to autonomously learn pseudo-labels guided by quality and use an adaptive threshold to identify \"unknown\" categories. Specifically, ADU consists of Selective Amplification Knowledge Distillation (SAKD) and Entropy-Driven Label Differentiation (EDLD). SAKD improves KD by focusing on high-quality pseudo-labels, mitigating the impact of noisy labels. EDLD categorizes pseudo-labels by quality and applies tailored training strategies to distinguish \"unknown\" categories, improving detection accuracy and adaptability. Extensive experiments show that ADU achieves state-of-the-art results, outperforming the best existing method by \(3.1\%\) on VisDA in the OPBDA scenario." + }, + { + "type": "title", + "bbox": [ + 0.092, + 0.776, + 0.224, + 0.793 + ], + "angle": 0, + "content": "1. Introduction" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.802, + 0.485, + 0.879 + ], + "angle": 0, + "content": "Unsupervised domain adaptation (UDA) [12] aims to transfer knowledge from a well-labeled source domain to an unlabeled target domain, which can ease the burden of manual labeling. Recently, UDA has been applied in a range of computer vision tasks, including image classification" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.293, + 0.908, + 0.565 + ], + "angle": 0, + "content": "[13, 33, 48], object detection [7, 20, 56], and semantic segmentation [6, 35, 47]. However, UDA methods may raise concerns about data privacy and portability issues due to their requirement for access to raw source data and source model parameters. Therefore, source-free domain adaptation (SFDA) [19, 27, 58] is proposed to protect the source data privacy. 
In the SFDA scenario, only the source model is provided to the target domain without access to source data. However, it still faces the issue of source information privacy, which can be compromised through techniques such as white-box attacks [45, 46]. To mitigate these concerns, Black-box Domain Adaptation (BDA) [28, 61] has been proposed recently, as shown in Figure 1(a), which aims to learn a model solely using the unlabeled data from the target domain, based on the predictions from a black-box predictor trained on the source data. This setting can effectively mitigate data privacy issues related to data and model parameter leakage." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.567, + 0.909, + 0.852 + ], + "angle": 0, + "content": "However, traditional BDA [28, 57, 60] always assumes that the source and target domains share identical category sets, which frequently fails to apply in practice. In real-world scenarios, the target domain is typically unlabeled, making it difficult to satisfy this assumption due to potential category shifts. Currently, there are two UDA settings that involve unknown classes in the target domain: Open-Set Domain Adaptation (OSDA) [36, 42] and Open-Partial Domain Adaptation (OPDA) [25, 41, 59]. OSDA deals with scenarios where the target domain contains private classes that are unknown to the source domain, while OPDA handles cases where both the source and target domains each have their own private classes. Black-box Domain Adaptation has been applied to OPDA recently [9]. As shown in Figure 1(b), this setting is designed to learn a robust model for the target domain that not only recognizes classes shared by two domains but also identifies \"unknown\" categories absent in the source domain, despite having no information about the difference between the two label sets." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.856, + 0.909, + 0.903 + ], + "angle": 0, + "content": "Currently, only one study has addressed the above problem. [9] applies knowledge distillation to train the target model to mimic source predictor outputs and uses a man" + }, + { + "type": "page_footnote", + "bbox": [ + 0.11, + 0.888, + 0.236, + 0.901 + ], + "angle": 0, + "content": "*Corresponding author" + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.521, + 0.958 + ], + "angle": 0, + "content": "30588" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.097, + 0.089, + 0.52, + 0.306 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.234, + 0.311, + 0.432, + 0.324 + ], + "angle": 0, + "content": "(a) Black-box Domain Adaptation" + }, + { + "type": "image", + "bbox": [ + 0.53, + 0.089, + 0.905, + 0.306 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.58, + 0.311, + 0.851, + 0.324 + ], + "angle": 0, + "content": "(b) Open-partial Black-box Domain Adaptation" + }, + { + "type": "image_caption", + "bbox": [ + 0.089, + 0.335, + 0.908, + 0.379 + ], + "angle": 0, + "content": "Figure 1. Black-box domain adaptation and Open-partial black-box domain adaptation settings with respect to label sets of source and target domains (red labels indicate common labels of two domains). Compared to BDA, ADU is able to deal with BDA with unknown classes in the target domain by adaptively detecting unknown categories." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.404, + 0.484, + 0.587 + ], + "angle": 0, + "content": "ually preset threshold to identify \"unknown\" categories. 
Though inspiring, it still has the following limitations. (i) Due to domain and category shifts between the source and target domains, predictions from the source model are inevitably noisy. Directly utilizing these noisy pseudo-labels will accumulate model prediction errors, making the adaptation process unreliable. (ii) Employing a preset threshold fails to accommodate the variability and complexity of category shifts in different target domains, which is inadequate for accurately detecting \"unknown\" classes across diverse domains, often resulting in misclassification and reduced adaptability." + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.6, + 0.484, + 0.901 + ], + "angle": 0, + "content": "To address the issues mentioned above, we propose a simple yet effective framework called ADU, specifically designed for Open-Set BDA (OSBDA) and Open-Partial BDA (OPBDA). ADU incorporates two core modules: Selective Amplification Knowledge Distillation (SAKD) and Entropy-Driven Label Differentiation (EDLD). For the first challenge, SAKD enhances traditional knowledge distillation techniques, specifically tailoring KD to BDA with unknown classes in the target domain by amplifying learning from high-quality pseudo-labels produced by source API. This refinement ensures that the target model emphasizes learning from high-quality pseudo-labels, effectively mitigating the impact of noisy data. For the second challenge, EDLD enhances the framework's ability to handle diverse domain conditions. Initially, EDLD categorizes pseudo-labels based on their quality and then applies tailored training strategies to widen the distance between \"unknown\" classes and the others, while minimizing the impact of noisy pseudo-labels. This adaptive differentiation of labels heightens the effectiveness of employing the av" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.404, + 0.905, + 0.51 + ], + "angle": 0, + "content": "erage entropy of the target model's predictions as a threshold. Consequently, this refined approach significantly improves the detection accuracy of \"unknown\" categories and adapts more adeptly to category shifts across various target domains. Additionally, we iteratively refine the pseudolabels generated by the source API, which can significantly enhance their quality." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.511, + 0.905, + 0.54 + ], + "angle": 0, + "content": "Our main contributions in this paper could be summarized as follows:" + }, + { + "type": "text", + "bbox": [ + 0.515, + 0.542, + 0.906, + 0.617 + ], + "angle": 0, + "content": "1. We propose Selective Amplification Knowledge Distillation (SAKD), a refined knowledge distillation technique specifically designed for the OPBDA and OSBDA scenarios, which can effectively mitigate the impact of noisy pseudo-labels." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.617, + 0.906, + 0.707 + ], + "angle": 0, + "content": "2. We introduce Entropy-Driven Label Differentiation (EDLD), which categorizes pseudo-labels by quality and applies customized training strategies to enhance the distinction between \"unknown\" and others, thereby improving detection accuracy and domain adaptability through adaptive entropy-based thresholding." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.708, + 0.905, + 0.782 + ], + "angle": 0, + "content": "3. 
Extensive experiments on four public benchmarks demonstrate the superior performance of our proposed method compared with existing SOTA works, surpassing the best existing method by \\(3.1\\%\\) on VisDA in the OPBDA scenario." + }, + { + "type": "list", + "bbox": [ + 0.513, + 0.542, + 0.906, + 0.782 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.8, + 0.651, + 0.815 + ], + "angle": 0, + "content": "2. Related work" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.826, + 0.906, + 0.901 + ], + "angle": 0, + "content": "Black-box domain adaptation. Unsupervised Domain Adaptation (UDA) [12] aims to adapt a model trained on a labeled source domain to an unlabeled target domain. Many early methods relied on techniques such as instance weighting [52, 55], feature transformation [18, 26, 43], and feature" + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "30589" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.091, + 0.482, + 0.392 + ], + "angle": 0, + "content": "space [30, 51]. Despite their effectiveness, these methods require access to source domain data, raising privacy and portability concerns [22]. To address privacy issues associated with UDA, Source-Free Domain Adaptation (SFDA) methods [19, 27, 58] have been proposed. These methods adapt models using only the source model and unlabeled target data, eliminating the need for source data during adaptation. Techniques such as entropy minimization [2] and pseudo-labeling [62] have been explored. However, SFDA methods still face potential privacy risks due to the use of generative models and other techniques that might inadvertently reveal source data characteristics. Therefore, Black-box Domain Adaptation (BDA) [28, 61] has emerged as a solution to further mitigate privacy concerns by only accessing the source model's outputs without any internal details. This approach ensures better privacy preservation compared to traditional UDA and SFDA methods. Recent methods such as DINE [28] and BETA [57] have made significant strides in this area. Nevertheless, they struggle with inconsistent label sets between domains." + }, + { + "type": "text", + "bbox": [ + 0.093, + 0.398, + 0.483, + 0.882 + ], + "angle": 0, + "content": "Open-set and open-partial domain adaptation. Closed-set domain adaptation assumes identical label sets between source and target domains, focusing on minimizing distribution shifts using techniques like discrepancy minimization [31, 32] and adversarial training [8, 15]. However, these methods often struggle when label sets are not perfectly aligned. To address this issue, Partial Domain Adaptation (PDA) assumes that only the source domain contains private classes, with methods such as SAN [4] employing class-wise domain discriminators, and ETN [5] using progressive weighting schemes. Meanwhile, Open-Set Domain Adaptation (OSDA) handles scenarios where the target domain has private classes unknown to the source. Besides, Open-Partial Domain Adaptation (OPDA) addresses both domains having their own private classes. UAN [59] quantifies sample-level uncertainty using entropy and domain similarity, and Fu et al. [11] combines entropy, confidence, and consistency for better uncertainty measurement. To address the challenges faced by black-box domain adaptation, [9] combines OPDA with BDA to address both category shift and privacy concerns. 
It applies knowledge distillation to train the target model to emulate source predictor outputs, using a preset threshold to identify \"unknown\" categories. Though inspiring, it still faces significant limitations regarding pseudo-label quality and the detection of \"unknown\" categories. To address these issues, we propose the ADU framework, applying it to Open-Set BDA (OSBDA) and Open-Partial BDA (OPBDA) to mitigate the impact of noisy pseudo-labels and enhance adaptability to category shifts across varied target domains. This approach provides a robust solution to the limitations of existing methods." + }, + { + "type": "text", + "bbox": [ + 0.11, + 0.886, + 0.482, + 0.901 + ], + "angle": 0, + "content": "Learning with noisy labels. Deep learning models often" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.092, + 0.905, + 0.363 + ], + "angle": 0, + "content": "overfit on noisy labels, leading to poor generalization [60]. To address this, various approaches have been proposed, including noise-robust losses [23, 44], noise-transition matrix estimation [14], clean sample selection [53], and loss reweighting [34]. However, these methods often require noise-free validation sets or make assumptions about the noise distribution, which are impractical in BDA settings. These methods [29, 34] differ by not assuming any specific noise distribution and leveraging noisy scores from source training classes. Recently, NEL [1] introduced a novel approach by integrating a Negative Learning loss with a pseudo-label refinement framework that leverages ensembling techniques. Negative Learning [23] is an indirect learning method that employs complementary labels to address noise issues effectively. In our work, we use Negative Learning to refine high-quality pseudo-labels without ensembling, reducing computational cost and making our approach more flexible and robust for OSBDA and OPBDA." + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.377, + 0.649, + 0.394 + ], + "angle": 0, + "content": "3. Methodology" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.403, + 0.907, + 0.634 + ], + "angle": 0, + "content": "In this paper, we are provided with a target domain \\(\\mathcal{D}_t = \\left\\{x_t^i\\right\\}_{i=1}^{N_t}\\) with \\(N_t\\) unlabeled samples where \\(x_t^i \\in \\mathcal{X}_t\\), and a black-box predictor \\(f_s\\) trained by a source domain \\(\\mathcal{D}_s = \\left\\{\\left(x_s^i, y_s^i\\right)\\right\\}_{i=1}^{N_s}\\) with \\(N_s\\) labeled samples where \\(x_s^i \\in \\mathcal{X}_s\\). We use \\(L_s\\) and \\(L_t\\) to denote the label spaces of the source domain and target domain respectively. In general, model \\(f\\) consists of a feature extractor \\(G\\) and a fully connected layer-based classifier \\(C\\). We have no access to the source domain data \\(\\mathcal{D}_s\\) and the parameters of the source model \\(f_s\\). Only a black-box predictor trained on the source domain, i.e., an API, is available. The objective is to leverage the predictions of the API of the source domain to learn a mapping model \\(f_t\\) which can label the target samples with either one of the \\(L_s\\) labels or the \"unknown\" label. The overall workflow is shown in Fig. 2." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.643, + 0.895, + 0.659 + ], + "angle": 0, + "content": "3.1. 
Selective amplification knowledge distillation" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.665, + 0.906, + 0.831 + ], + "angle": 0, + "content": "Knowledge distillation (KD) [17] has been widely applied to address the black-box domain adaptation problem [9, 28, 57], as it enables the transfer of knowledge from one model (teacher) to another (student) by guiding the target model (student) to emulate the predictions of the source model (teacher). This approach is particularly suitable for BDA scenarios, where only the predictions of the source model are accessible. To better leverage the information available from the source domain's API, we use a knowledge distillation loss with both the source model's probabilities and hard pseudo-labels. This can be formulated as:" + }, + { + "type": "equation", + "bbox": [ + 0.563, + 0.858, + 0.907, + 0.875 + ], + "angle": 0, + "content": "\\[\n\\mathcal {L} _ {K D} = \\mathbb {E} _ {x _ {t} \\sim \\mathcal {X} _ {t}} \\left[ C E \\left(\\tilde {\\boldsymbol {y}} _ {t}, p _ {t}\\right) + C E \\left(p _ {s}, p _ {t}\\right) \\right], \\tag {1}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.534, + 0.886, + 0.907, + 0.902 + ], + "angle": 0, + "content": "where \\(CE(\\cdot ,\\cdot)\\) denotes the cross entropy function, and" + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.945, + 0.521, + 0.958 + ], + "angle": 0, + "content": "30590" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.104, + 0.089, + 0.895, + 0.356 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.089, + 0.369, + 0.908, + 0.399 + ], + "angle": 0, + "content": "Figure 2. An overview of the proposed ADU framework. We utilize the black-box source predictor solely as an API service, obtaining only the source predictions from it. \"PL\" in the figure means pseudo-labels." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.424, + 0.484, + 0.589 + ], + "angle": 0, + "content": "\\(\\tilde{\\pmb{y}}_t\\) is a one-hot pseudo-label derived from \\(f_{s}(x_{t})\\). In addition, we use \\(p_s\\) and \\(p_t\\) to replace \\(f_{s}(x_{t})\\) and \\(f_{t}(x_{t})\\) for simplicity. However, due to domain and category shifts, the predictions from the source model are inevitably noisy. Consequently, Eq. (1) processes information from these predictions equally, which can adversely affect the performance of the target model. In order to solve the issue, we propose Selective Amplification Knowledge Distillation (SAKD), a method that enhances knowledge distillation by leveraging the confidence of pseudo-labels produced by the source model." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.59, + 0.484, + 0.621 + ], + "angle": 0, + "content": "Firstly, we simplify Eq. 
(1) to derive the following formulation:" + }, + { + "type": "equation", + "bbox": [ + 0.109, + 0.645, + 0.483, + 0.742 + ], + "angle": 0, + "content": "\[\n\begin{array}{l} \mathcal {L} _ {K D} = - \mathbb {E} _ {x _ {t} \sim \mathcal {X} _ {t}} (\log p _ {t} ^ {\hat {c}} + \sum_ {c = 1} ^ {| L _ {s} |} p _ {s} ^ {c} \log p _ {t} ^ {c}) \\ = - \mathbb {E} _ {x _ {t} \sim \mathcal {X} _ {t}} [ (1 + p _ {s} ^ {\hat {c}}) \log p _ {t} ^ {\hat {c}} + \sum_ {\substack {c = 1 \\ c \neq \hat {c}}} ^ {| L _ {s} |} p _ {s} ^ {c} \log p _ {t} ^ {c} ], \end{array} \tag{2}\n\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.753, + 0.484, + 0.829 + ], + "angle": 0, + "content": "where \(\hat{c}\) represents the label predicted by the source model, defined as \(\hat{c} = \arg \max_c p_s^c\). Subsequently, we formulate the SAKD loss by incorporating a modulating parameter \(\theta \geq 1\) into the term \((1 + p_s^{\hat{c}})\), which only pertains to \(\hat{c}\):" + }, + { + "type": "equation", + "bbox": [ + 0.099, + 0.852, + 0.484, + 0.904 + ], + "angle": 0, + "content": "\[\n\mathcal {L} _ {S A K D} = - \mathbb {E} _ {x _ {t} \sim \mathcal {X} _ {t}} \left[ \left(1 + p _ {s} ^ {\hat {c}}\right) ^ {\theta} \log p _ {t} ^ {\hat {c}} + \sum_ {\substack {c = 1 \\ c \neq \hat {c}}} ^ {| L _ {s} |} p _ {s} ^ {c} \log p _ {t} ^ {c} \right], \tag{3}\n\]" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.424, + 0.907, + 0.468 + ], + "angle": 0, + "content": "Discussion of SAKD loss: For simplicity, we consider the case with a single sample, where the SAKD loss simplifies to:" + }, + { + "type": "equation", + "bbox": [ + 0.538, + 0.486, + 0.907, + 0.532 + ], + "angle": 0, + "content": "\[\n\mathcal {L} _ {\mathrm {S A K D}} = - \left[ \left(1 + p _ {s} ^ {\hat {c}}\right) ^ {\theta} \log p _ {t} ^ {\hat {c}} + \sum_ {c = 1, c \neq \hat {c}} ^ {| L _ {s} |} p _ {s} ^ {c} \log p _ {t} ^ {c} \right] \tag {4}\n\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.554, + 0.906, + 0.589 + ], + "angle": 0, + "content": "Next, we apply the generalized binomial theorem to expand \(\left(1 + p_s^{\hat{c}}\right)^\theta\) as follows:" + }, + { + "type": "equation", + "bbox": [ + 0.521, + 0.621, + 0.905, + 0.761 + ], + "angle": 0, + "content": "\[\n\begin{array}{l} \mathcal {L} _ {\mathrm {S A K D}} = - \left[ \sum_ {k = 0} ^ {\infty} \binom {\theta} {k} \left(p _ {s} ^ {\hat {c}}\right) ^ {k} \log p _ {t} ^ {\hat {c}} + \sum_ {c = 1, c \neq \hat {c}} ^ {| L _ {s} |} p _ {s} ^ {c} \log p _ {t} ^ {c} \right] \\ = \mathcal {L} _ {\mathrm {K D}} - \left[ \sum_ {k = 1} ^ {\infty} \binom {\theta} {k} \left(p _ {s} ^ {\hat {c}}\right) ^ {k} - p _ {s} ^ {\hat {c}} \right] \log p _ {t} ^ {\hat {c}} \\ \approx \mathcal {L} _ {\mathrm {K D}} - \left[ (\theta - 1) p _ {s} ^ {\hat {c}} + \frac {\theta (\theta - 1)}{2} \left(p _ {s} ^ {\hat {c}}\right) ^ {2} \right] \log p _ {t} ^ {\hat {c}} \tag {5} \end{array}\n\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.765, + 0.907, + 0.902 + ], + "angle": 0, + "content": "In Eq. (5), the first term \(\mathcal{L}_{\mathrm{KD}}\) represents the original KD loss in Eq. (2), while the second term introduces an additional term, which is positive and solely depends on the target class \(\hat{c}\). 
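As a concrete illustration of Eq. (3), the following is a minimal PyTorch sketch of the SAKD objective. It is an illustrative reading of the formula rather than the released implementation: `p_s` and `p_t` are assumed to be softmax probability matrices from the black-box source API and the target model, and `theta` is the modulating parameter of Eq. (3).

```python
import torch

def sakd_loss(p_s: torch.Tensor, p_t: torch.Tensor,
              theta: float = 1.1, eps: float = 1e-8) -> torch.Tensor:
    """Selective Amplification KD loss of Eq. (3).

    p_s, p_t: (batch, |L_s|) probability matrices from the black-box
    source predictor and the target model; theta >= 1 amplifies the
    contribution of confident source pseudo-labels.
    """
    log_pt = torch.log(p_t + eps)
    c_hat = p_s.argmax(dim=1, keepdim=True)        # hard pseudo-label per sample
    ps_hat = p_s.gather(1, c_hat)                  # p_s^{c_hat}
    # Amplified term for the predicted class: (1 + p_s^{c_hat})^theta * log p_t^{c_hat}.
    amplified = (1.0 + ps_hat).pow(theta) * log_pt.gather(1, c_hat)
    # Soft distillation term over the remaining classes (c_hat masked out).
    mask = torch.ones_like(p_s).scatter_(1, c_hat, 0.0)
    soft = (mask * p_s * log_pt).sum(dim=1, keepdim=True)
    return -(amplified + soft).mean()
```

Setting `theta = 1.0` in this sketch recovers the plain KD objective of Eq. (2), which is consistent with the role of \(\theta\) examined later in the sensitivity analysis.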
We demonstrate that this additional term enables the SAKD loss to capture more information from pseudo-labels with high confidence, meaning those with higher values of \(p_s^{\hat{c}}\), thereby reducing the impact of noisy pseudo-labels. A comprehensive proof of this claim is provided in the supplemental material." + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.518, + 0.957 + ], + "angle": 0, + "content": "30591" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.091, + 0.091, + 0.405, + 0.108 + ], + "angle": 0, + "content": "3.2. Entropy-driven label differentiation" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.113, + 0.486, + 0.309 + ], + "angle": 0, + "content": "As stated above, the outputs from the source model are highly likely to be inaccurate and noisy due to the domain shift [3] and category shift. Even though Eq. (3) offers a promising solution, we still face a tough challenge in detecting the \"unknown\" categories, which means we should widen the difference between \"unknown\" and others. [59] shows that entropy is an effective tool to detect \"unknown\" in domain adaptation. Entropy quantifies the prediction uncertainty, and smaller entropy represents a more certain prediction. In order to effectively address the influence brought by the category shift, we implement an automatic threshold determined by average entropy. The prediction process can be formulated as follows:" + }, + { + "type": "equation", + "bbox": [ + 0.17, + 0.32, + 0.484, + 0.36 + ], + "angle": 0, + "content": "\[\ny _ {t} = \left\{ \begin{array}{l l} \arg \max _ {c} p _ {t} ^ {c} & H \left(p _ {t}\right) < w \\ \text {unknown} & H \left(p _ {t}\right) \geq w, \end{array} \right. \tag {6}\n\]" + }, + { + "type": "text", + "bbox": [ + 0.111, + 0.365, + 0.361, + 0.381 + ], + "angle": 0, + "content": "where \( H(p_{t}) \) and \( w \) are computed as:" + }, + { + "type": "equation", + "bbox": [ + 0.202, + 0.39, + 0.483, + 0.434 + ], + "angle": 0, + "content": "\[\nH \left(p _ {t}\right) = - \sum_ {c = 1} ^ {\left| L _ {s} \right|} p _ {t} ^ {c} \log p _ {t} ^ {c}, \tag {7}\n\]" + }, + { + "type": "equation", + "bbox": [ + 0.22, + 0.447, + 0.483, + 0.464 + ], + "angle": 0, + "content": "\[\nw = \mathbb {E} _ {x _ {t} \sim \mathcal {X} _ {t}} H (p _ {t}). \tag {8}\n\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.468, + 0.485, + 0.665 + ], + "angle": 0, + "content": "Taking the average as a threshold eliminates the requirement of per-dataset hyper-parameter tuning and makes our selection process highly adaptive. As we employ entropy as a threshold to detect \"unknown\" categories, we still face a challenge to widen the gap between \"unknown\" and others. In order to address the issue, we propose Entropy-Driven Label Differentiation (EDLD) to make the \"unknown\" distinguishable and enhance the quality of pseudo-labels with high certainty. We use entropy to calculate the uncertainty level of pseudo-labels. Higher entropy always shows more uncertain predictions. 
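The decision rule of Eqs. (6)-(8) can be sketched in a few lines. One simplifying assumption to flag: the paper defines the threshold \(w\) as an expectation over all of \(\mathcal{X}_t\), whereas this hedged example averages over a single batch for brevity.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def predict_with_adaptive_threshold(logits: torch.Tensor,
                                    unknown_label: int = -1) -> torch.Tensor:
    """Label target samples via Eqs. (6)-(8): samples whose prediction
    entropy is at or above the mean entropy w are marked 'unknown'."""
    p_t = F.softmax(logits, dim=1)
    entropy = -(p_t * torch.log(p_t + 1e-8)).sum(dim=1)  # H(p_t), Eq. (7)
    w = entropy.mean()                                   # adaptive threshold, Eq. (8)
    preds = p_t.argmax(dim=1)
    preds[entropy >= w] = unknown_label                  # Eq. (6)
    return preds
```

Because \(w\) is recomputed from the target predictions themselves, no per-dataset threshold needs to be tuned, which is exactly the adaptivity argument made above.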
We define the EDLD loss by dividing pseudo-labels into high-quality (HQ) and low-quality (LQ) groups according to their entropy; the loss is defined as follows:" + }, + { + "type": "equation", + "bbox": [ + 0.108, + 0.685, + 0.484, + 0.729 + ], + "angle": 0, + "content": "\[\n\mathcal {L} _ {E D L D} = \mathbb {E} _ {x _ {t} \sim \mathcal {X} _ {t}} \left[ \left\{ \begin{array}{l l} \mathcal {L} _ {H Q} (p _ {t}) & \text {if } H (p _ {t}) < w \\ \mathcal {L} _ {L Q} (p _ {t}) & \text {if } H (p _ {t}) \geq w \end{array} \right. \right], \tag {9}\n\]" + }, + { + "type": "equation", + "bbox": [ + 0.1, + 0.749, + 0.484, + 0.767 + ], + "angle": 0, + "content": "\[\n\mathcal {L} _ {H Q} = H \left(p _ {t}\right) + \mathcal {L} _ {N L} \left(p _ {t}, \bar {y} _ {t}\right), \quad \mathcal {L} _ {L Q} = - H \left(p _ {t}\right), \tag {10}\n\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.775, + 0.485, + 0.852 + ], + "angle": 0, + "content": "In the EDLD module, we not only use the entropy loss to widen the entropy gap between high-quality and low-quality pseudo-labels but also use a negative learning loss [21] to refine the high-quality pseudo-labels. The negative loss is defined as:" + }, + { + "type": "equation", + "bbox": [ + 0.159, + 0.862, + 0.484, + 0.905 + ], + "angle": 0, + "content": "\[\n\mathcal {L} _ {N L} \left(p _ {t}, \bar {y} _ {t}\right) = - \sum_ {c = 1} ^ {| L _ {s} |} \bar {\mathbf {y}} _ {t} ^ {c} \log \left(1 - p _ {t} ^ {c}\right), \tag {11}\n\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.092, + 0.907, + 0.198 + ], + "angle": 0, + "content": "where \(\bar{y}_t\) is a complementary label \(\bar{y}_t \in \{1, \dots, |L_s|\} \setminus \{y_t\}\) chosen randomly from the set of labels, and \(\bar{\mathbf{y}}_t\) is a one-hot label derived from \(\bar{y}_t\). Eq. (11) enables the probability value of the complementary label to be optimized as zero, resulting in an increase in the probability values of other classes, which can effectively refine the high-quality pseudo-labels." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.206, + 0.836, + 0.222 + ], + "angle": 0, + "content": "3.3. Adaptive refinement of pseudo labels" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.227, + 0.907, + 0.318 + ], + "angle": 0, + "content": "To further mitigate the impact of noise in the pseudo-labels generated by the source model, we employ an exponential moving average (EMA) of the target predictions. This allows for a gradual and controlled update of the pseudo-labels supplied by the source model at each iteration. The update process is defined as follows:" + }, + { + "type": "equation", + "bbox": [ + 0.573, + 0.333, + 0.907, + 0.351 + ], + "angle": 0, + "content": "\[\np _ {s} \leftarrow \gamma p _ {s} + (1 - \gamma) f _ {t} \left(x _ {t}\right), \quad \forall x _ {t} \in \mathcal {X} _ {t}, \tag {12}\n\]" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.354, + 0.907, + 0.429 + ], + "angle": 0, + "content": "where \(\gamma\) is a smoothing factor that determines the extent to which the pseudo-labels should adapt to the most recent predictions from the target model. A higher value of \(\gamma\) places more weight on the existing pseudo-labels, while a lower value allows quicker adaptation to new information." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.43, + 0.909, + 0.551 + ], + "angle": 0, + "content": "This strategy refines the pseudo-labels iteratively, balancing consistency with adaptability. 
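Putting Eqs. (9)-(12) together, a hedged sketch of the EDLD objective and the EMA refinement step might look as follows. Assumptions to note: the high/low-quality split reuses the batch-mean entropy as \(w\), the complementary label \(\bar{y}_t\) is drawn uniformly from the non-predicted classes, and the 0.01 weight on \(\mathcal{L}_{NL}\) follows the ratio reported later in the implementation details.

```python
import torch

def edld_loss(p_t: torch.Tensor, nl_weight: float = 0.01,
              eps: float = 1e-8) -> torch.Tensor:
    """Entropy-Driven Label Differentiation, Eqs. (9)-(11)."""
    entropy = -(p_t * torch.log(p_t + eps)).sum(dim=1)   # H(p_t)
    w = entropy.mean().detach()                          # entropy threshold
    hq = entropy < w                                     # high-quality mask

    # Negative learning on a random complementary label y_bar != y_hat (Eq. 11).
    y_hat = p_t.argmax(dim=1)
    num_classes = p_t.size(1)
    shift = torch.randint(1, num_classes, (p_t.size(0),), device=p_t.device)
    y_bar = (y_hat + shift) % num_classes                # never equals y_hat
    nl = -torch.log(1.0 - p_t.gather(1, y_bar.unsqueeze(1)).squeeze(1) + eps)

    loss_hq = entropy + nl_weight * nl                   # L_HQ: sharpen and refine
    loss_lq = -entropy                                   # L_LQ: push entropy up
    return torch.where(hq, loss_hq, loss_lq).mean()

@torch.no_grad()
def ema_refine(p_s: torch.Tensor, p_t: torch.Tensor,
               gamma: float = 0.6) -> torch.Tensor:
    """Adaptive pseudo-label refinement of Eq. (12)."""
    return gamma * p_s + (1.0 - gamma) * p_t
```

In this reading, each training step would evaluate the SAKD and EDLD losses on the current batch and periodically overwrite the stored source predictions with `ema_refine`, matching the iterative refinement described in this subsection.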
By adjusting the pseudo-labels in a controlled manner, the model can better handle noise and gradually align the source model's outputs with the distribution of the target data. The EMA strategy ensures that updates are not overly reactive to fluctuations, thus enhancing the robustness of the model and improving performance in scenarios with diverse target domains." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.558, + 0.714, + 0.574 + ], + "angle": 0, + "content": "3.4. The overall objective" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.58, + 0.906, + 0.611 + ], + "angle": 0, + "content": "Integrating these objectives introduced in Eqs. (3, 9) together, we obtain the final loss function as follows:" + }, + { + "type": "equation", + "bbox": [ + 0.621, + 0.625, + 0.905, + 0.642 + ], + "angle": 0, + "content": "\\[\n\\mathcal {L} = \\mathcal {L} _ {S A K D} + \\lambda \\mathcal {L} _ {E D L D}, \\tag {13}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.646, + 0.907, + 0.69 + ], + "angle": 0, + "content": "where \\(\\lambda\\) is a hyper-parameter empirically set to 1.0, controlling the importance of \\(L_{SAKD}\\) and \\(L_{EDLD}\\) during distillation." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.704, + 0.647, + 0.721 + ], + "angle": 0, + "content": "4. Experiments" + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.728, + 0.597, + 0.744 + ], + "angle": 0, + "content": "4.1. Setup" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.75, + 0.909, + 0.903 + ], + "angle": 0, + "content": "Datasets. To assess the effectiveness of our approach, we conduct experiments using the Office31 [40], OfficeHome [50], VisDA [38], DomainNet [39] datasets. Office31 is a popular benchmark for UDA, consisting of three domains (Amazon, Webcam, Dslr) in 31 categories. OfficeHome is a more challenging benchmark for its distant domain shifts, which consists of four domains (Art, Clipart, Product, Real World) in 65 categories. VisDA is a large-scale benchmark containing 2 different 12-class domains, with a source domain with 152k synthetic images and a target domain with" + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.521, + 0.958 + ], + "angle": 0, + "content": "30592" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.256, + 0.09, + 0.744, + 0.104 + ], + "angle": 0, + "content": "Table 1. H-score (\\%) comparison in OPBDA scenario on the OfficeHome dataset." + }, + { + "type": "table", + "bbox": [ + 0.094, + 0.115, + 0.905, + 0.238 + ], + "angle": 0, + "content": "
Method | Ar→Cl | Ar→Pr | Ar→Re | Cl→Ar | Cl→Pr | Cl→Re | Pr→Ar | Pr→Cl | Pr→Re | Re→Ar | Re→Cl | Re→Pr | Avg.
No Adapt. | 55.8 | 67.3 | 72.8 | 64.2 | 62.3 | 70.5 | 65.7 | 52.1 | 71.7 | 66.1 | 56.7 | 69.2 | 64.5
DINE [28] | 45.3 | 46.1 | 54.6 | 51.0 | 45.3 | 52.4 | 49.9 | 44.5 | 52.1 | 52.4 | 46.7 | 45.7 | 48.8
BETA [57] | 45.9 | 47.4 | 54.8 | 49.3 | 45.1 | 50.1 | 49.3 | 45.5 | 53.5 | 51.5 | 45.8 | 48.8 | 48.9
SEAL [54] | 40.6 | 46.8 | 47.8 | 44.5 | 42.7 | 45.2 | 47.3 | 40.0 | 47.1 | 45.5 | 46.7 | 46.6 | 45.1
UB²DA [9] | 60.9 | 69.6 | 76.3 | 74.4 | 69.2 | 76.5 | 74.5 | 60.3 | 76.2 | 74.1 | 62.0 | 71.1 | 70.4
ADU | 61.2 | 72.7 | 77.9 | 70.3 | 72.5 | 77.3 | 75.9 | 62.0 | 84.7 | 73.2 | 64.1 | 74.9 | 72.2
" + }, + { + "type": "table_caption", + "bbox": [ + 0.151, + 0.249, + 0.846, + 0.264 + ], + "angle": 0, + "content": "Table 2. H-score (%) comparison in OPBDA scenario on the Office31, VisDA, and DomainNet datasets, respectively." + }, + { + "type": "table", + "bbox": [ + 0.094, + 0.274, + 0.905, + 0.418 + ], + "angle": 0, + "content": "
Method | Office31: A→D | A→W | D→A | D→W | W→A | W→D | Avg. | VisDA: S→R | DomainNet: P→R | P→S | R→P | R→S | S→P | S→R | Avg.
No Adapt. | 79.9 | 71.9 | 80.1 | 91.5 | 78.7 | 89.8 | 82.0 | 37.7 | 52.8 | 35.0 | 43.6 | 32.5 | 35.7 | 51.8 | 41.9
DINE [28] | 50.3 | 51.4 | 56.6 | 63.0 | 54.0 | 60.1 | 55.9 | 43.5 | 48.4 | 39.5 | 43.4 | 38.1 | 37.6 | 45.6 | 42.1
BETA [57] | 52.4 | 54.0 | 51.6 | 61.2 | 53.4 | 57.6 | 55.0 | 45.5 | 49.2 | 40.3 | 43.1 | 38.2 | 38.0 | 48.4 | 42.9
SEAL [54] | 73.8 | 70.3 | 51.1 | 55.6 | 47.0 | 57.0 | 59.1 | 45.6 | 52.7 | 39.9 | 43.6 | 38.4 | 39.2 | 49.7 | 43.9
UB²DA [9] | 80.9 | 78.2 | 92.6 | 92.6 | 89.4 | 87.9 | 86.9 | 45.2 | 57.1 | 47.2 | 54.8 | 44.0 | 41.4 | 51.5 | 49.3
ADU | 87.5 | 85.2 | 87.0 | 94.4 | 83.8 | 90.5 | 88.1 | 48.7 | 59.8 | 47.8 | 52.5 | 46.6 | 42.7 | 56.4 | 51.0
" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.442, + 0.483, + 0.577 + ], + "angle": 0, + "content": "55k real images from Microsoft COCO. DomainNet is the largest DA dataset with about 0.6 million images. Like [11, 24], we conduct experiments on three subsets of it, i.e., Painting, Real, and Sketch. Following existing works [11, 24, 59], we separate the label set into three parts: common \\((|L_s \\cap L_t|)\\), source-private \\((|L_s - L_t|)\\) and target-private \\((|L_t - L_s|)\\). The classes are separated according to their alphabetical order. We evaluate ADU in OPBDA using the four datasets, and in OSBDA using the first three datasets." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.578, + 0.484, + 0.669 + ], + "angle": 0, + "content": "Evaluation protocols. Considering the trade-off between the accuracy of known and unknown classes is important in evaluating OSDA and OPDA methods. We evaluate methods using H-score [11]. H-score is the harmonic mean of the accuracy on common classes \\((\\mathrm{Acc}_c)\\) and accuracy on the \"unknow\" classes \\((\\mathrm{Acc}_u)\\) and is defined as:" + }, + { + "type": "equation", + "bbox": [ + 0.212, + 0.678, + 0.483, + 0.711 + ], + "angle": 0, + "content": "\\[\nh = 2 \\cdot \\frac {A c c _ {c} \\cdot A c c _ {u}}{A c c _ {c} + A c c _ {u}} \\tag {14}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.72, + 0.483, + 0.779 + ], + "angle": 0, + "content": "So, this metric is designed to provide a more comprehensive evaluation by ensuring that improvements in one area do not come at the expense of the other. It can measure both accuracies well." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.781, + 0.484, + 0.902 + ], + "angle": 0, + "content": "Implementation details. All experiments are implemented in Pytorch [37]. For fair comparisons to previous methods, we use the same backbone of ResNet50 [16] pre-trained on ImageNet [10] as the feature extractor in all experiments. For the source model, we fine-tune the model on source examples optimizing with cross-entropy loss function and then treat it like a black-box by only requiring the input-output interfaces of this model in our experiments. We" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.442, + 0.907, + 0.532 + ], + "angle": 0, + "content": "use SGD optimizer with a learning rate of 0.01, a momentum of 0.9 with a weight decay of 5e-4 and a batch size of 128. Concerning the parameters in ADU, We set \\(\\theta = 1.1\\), \\(\\gamma = 0.6\\) and \\(\\lambda = 1.0\\) for all datasets and tasks. Additionally, following [23], we set the ratio of \\(L_{NL}(p_t,\\bar{y}_t)\\) to \\(H(p_{t})\\) as 0.01:1 in Eq. (10)." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.533, + 0.909, + 0.671 + ], + "angle": 0, + "content": "Baselines. We compare the proposed ADU with (i) BDA: DINE [28], BETA [57], SEAL [54] (ii) OPBDA: \\(\\mathbf{UB}^2\\mathbf{DA}\\) [9]. These methods represent the state-of-the-art in their respective settings. Notably, owing to black-box DA lacking the capability to identify \"unknown\" categories, we apply the average entropy as the threshold to them, similar to our setting. The term \"No Adapt.\" refers to the baseline scenario where the source model is used directly for target label prediction, without any form of adaptation." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.682, + 0.608, + 0.696 + ], + "angle": 0, + "content": "4.2. 
Results" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.705, + 0.909, + 0.903 + ], + "angle": 0, + "content": "Results for OPBDA. We first perform experiments under the most challenging scenario, namely OPBDA, in which both the source and target domains contain private categories. The results for the OfficeHome dataset are presented in Table 1, while those for the Office31, VisDA, and DomainNet datasets are shown in Table 2. As illustrated in these tables, our proposed ADU method achieves a new state-of-the-art, surpassing all existing methods across the four datasets. Notably, ADU consistently improves the H-score compared to the \"No Adapt.\" baseline in each experimental setting, with a significant increase of \\(11.0\\%\\) on the VisDA dataset. This improvement demonstrates that our method effectively mitigates the influence of noise from" + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "30593" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.256, + 0.09, + 0.744, + 0.104 + ], + "angle": 0, + "content": "Table 3. H-score (\\%) comparison in OSBDA scenario on the OfficeHome dataset." + }, + { + "type": "table", + "bbox": [ + 0.094, + 0.115, + 0.905, + 0.236 + ], + "angle": 0, + "content": "
Method | Ar→Cl | Ar→Pr | Ar→Re | Cl→Ar | Cl→Pr | Cl→Re | Pr→Ar | Pr→Cl | Pr→Re | Re→Ar | Re→Cl | Re→Pr | Avg.
No Adapt. | 59.6 | 68.1 | 75.7 | 67.1 | 66.7 | 70.4 | 63.8 | 54.9 | 55.7 | 71.4 | 58.6 | 70.5 | 65.2
DINE [28] | 47.0 | 45.8 | 52.2 | 49.7 | 47.1 | 50.0 | 48.2 | 43.8 | 50.4 | 52.6 | 46.0 | 46.8 | 48.3
BETA [57] | 46.6 | 48.3 | 54.9 | 47.7 | 48.4 | 50.5 | 49.1 | 42.9 | 51.8 | 50.2 | 45.2 | 48.9 | 48.7
SEAL [54] | 43.3 | 46.5 | 47.1 | 43.7 | 45.9 | 45.4 | 45.3 | 40.8 | 45.6 | 43.5 | 41.3 | 46.5 | 44.6
UB²DA [9] | 65.5 | 70.4 | 75.5 | 67.8 | 69.3 | 74.4 | 71.2 | 56.7 | 75.0 | 70.7 | 63.3 | 69.8 | 69.1
ADU | 66.0 | 70.5 | 77.4 | 72.2 | 70.1 | 75.0 | 69.0 | 63.6 | 76.1 | 73.4 | 64.1 | 75.2 | 71.1
" + }, + { + "type": "table_caption", + "bbox": [ + 0.09, + 0.259, + 0.483, + 0.288 + ], + "angle": 0, + "content": "Table 4. H-score \\((\\%)\\) comparison in OSBDA scenario on the Office31 and VisDA datasets, respectively." + }, + { + "type": "table", + "bbox": [ + 0.101, + 0.299, + 0.48, + 0.41 + ], + "angle": 0, + "content": "
Method | Office31: A→D | A→W | D→A | D→W | W→A | W→D | Avg. | VisDA: S→R
No Adapt. | 81.4 | 80.8 | 85.3 | 88.1 | 78.0 | 88.0 | 83.6 | 44.6
DINE [28] | 60.5 | 54.6 | 56.8 | 69.0 | 56.3 | 61.5 | 59.8 | 43.1
BETA [57] | 48.3 | 53.0 | 54.3 | 60.6 | 54.4 | 57.3 | 54.7 | 48.3
SEAL [54] | 50.4 | 43.7 | 53.4 | 44.8 | 54.0 | 50.8 | 49.5 | 40.6
UB²DA [9] | 85.7 | 87.4 | 91.0 | 89.2 | 85.1 | 84.1 | 87.1 | 48.1
ADU | 86.9 | 84.9 | 89.7 | 91.3 | 86.3 | 89.2 | 88.1 | 50.8
" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.443, + 0.483, + 0.701 + ], + "angle": 0, + "content": "source model predictions on the target model and accurately identifies unknown categories within the target data. An examination of Tables 1 and 2 reveals that methods such as DINE [28], BETA [57], and SEAL [54] perform poorly compared to our approach and \\(\\mathrm{UB}^2\\mathrm{DA}\\) [9], with performance even falling below the \"No Adapt.\" baseline on the Office31 and OfficeHome datasets. This underperformance is likely due to the lack of design tailored specifically for the OPBDA scenario in these methods, which hinders their ability to effectively differentiate unknown categories from other classes. These results underscore the importance of our ADU approach, which is specifically designed for OPBDA. When compared to \\(\\mathrm{UB}^2\\mathrm{DA}\\) [9], ADU achieves higher H-scores on the Office31, OfficeHome, VisDA, and DomainNet datasets, with improvements of \\(1.2\\%\\), \\(1.8\\%\\), \\(3.5\\%\\), and \\(1.7\\%\\), respectively. These gains further highlight the effectiveness of our proposed approach." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.705, + 0.483, + 0.901 + ], + "angle": 0, + "content": "Results for OSBDA. We subsequently conduct experiments under OSBDA scenarios, where only the target domain includes categories absent from the source domain. The results for the OfficeHome dataset are provided in Table 3, while those for the Office31 and VisDA datasets are presented in Table 4. As shown in these tables, our proposed ADU method achieves performance that surpasses the current state-of-the-art. Specifically, ADU consistently outperforms the \"No Adapt.\" baseline in terms of H-score across all experimental settings. Notably, for the \\(\\mathrm{Pr} \\rightarrow \\mathrm{Re}\\) scenario, it achieves an improvement of \\(20.4\\%\\). This substantial enhancement demonstrates that our method effectively reduces the influence of noise from source model pre" + }, + { + "type": "table_caption", + "bbox": [ + 0.513, + 0.259, + 0.907, + 0.316 + ], + "angle": 0, + "content": "Table 5. Ablation Study. H-score \\((\\%)\\) of different variants in OPBDA scenarios. \\(\\mathcal{L}_{HQ}^{1}\\), \\(\\mathcal{L}_{HQ}^{2}\\), \\(\\mathcal{L}_{LQ}\\) refer to the objectives corresponding to the negative loss in \\(\\mathcal{L}_{HQ}\\), entropy loss in \\(\\mathcal{L}_{HQ}\\), and loss associated with low-quality labels, respectively." + }, + { + "type": "table", + "bbox": [ + 0.518, + 0.326, + 0.908, + 0.446 + ], + "angle": 0, + "content": "
\(\mathcal{L}_{HQ}^{1}\) | \(\mathcal{L}_{HQ}^{2}\) | \(\mathcal{L}_{LQ}\) | Office31: A→W | D→W | OfficeHome: Ar→Cl | Cl→Re | Pr→Ar | Re→Cl | Avg.
- | - | - | 82.9 | 92.6 | 61.2 | 72.9 | 71.5 | 62.0 | 73.7
✓ | - | - | 84.3 | 93.5 | 61.1 | 73.0 | 72.0 | 62.7 | 74.6
- | ✓ | - | 83.5 | 92.7 | 61.7 | 74.3 | 72.1 | 62.5 | 74.5
- | - | ✓ | 84.1 | 93.8 | 61.4 | 74.1 | 72.6 | 62.6 | 74.8
✓ | ✓ | - | 84.3 | 94.1 | 62.7 | 73.7 | 71.7 | 61.3 | 74.6
✓ | - | ✓ | 83.1 | 94.2 | 63.3 | 74.7 | 72.3 | 62.6 | 75.0
- | ✓ | ✓ | 85.1 | 94.0 | 62.6 | 74.6 | 72.1 | 62.8 | 75.2
✓ | ✓ | ✓ | 85.2 | 94.4 | 61.2 | 77.3 | 75.9 | 64.1 | 76.3
" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.472, + 0.907, + 0.565 + ], + "angle": 0, + "content": "dictions on the target model, allowing for accurate identification of unknown categories within the target data. Compared to \\(\\mathrm{UB^{2}DA}\\) [9], ADU achieves higher H-scores on the Office31, OfficeHome and Visda datasets, with improvements of \\(1.0\\%\\), \\(2.0\\%\\), and \\(2.7\\%\\), respectively. These results further validate the effectiveness of our proposed approach." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.576, + 0.617, + 0.592 + ], + "angle": 0, + "content": "4.3. Analysis" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.599, + 0.907, + 0.903 + ], + "angle": 0, + "content": "Ablation study. To comprehensively assess the individual contribution of the components comprising our method, we conduct extensive ablation studies on two tasks from the Office31 dataset and four tasks from the OfficeHome dataset in OPBDA scenarios. The results are summarized in Table 5. Here, \\(\\mathcal{L}_{HQ}^{1}\\), \\(\\mathcal{L}_{HQ}^{2}\\), \\(\\mathcal{L}_{LQ}\\) refer to the objectives corresponding to the negative loss in \\(\\mathcal{L}_{HQ}\\), entropy loss in \\(\\mathcal{L}_{HQ}\\), and loss associated with low-quality labels, respectively. More detailed results can be found in supplementary material. It is important to emphasize that in all ablation experiments, we consistently employ the SAKD loss, which is a critical component of the ADU framework. Without it, the model would be unable to transfer knowledge from the source domain to the target model effectively. From the ablation study results, we can draw the following conclusions: (i) The introduction of any component alongside the SAKD loss leads to performance improvements, underscoring the vital role of the EDLD module. (ii) The full EDLD loss, which includes the negative loss term, yields better performance compared to its version without the negative loss," + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.945, + 0.521, + 0.958 + ], + "angle": 0, + "content": "30594" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.099, + 0.095, + 0.365, + 0.247 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.379, + 0.095, + 0.631, + 0.247 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.647, + 0.095, + 0.897, + 0.247 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.262, + 0.905, + 0.292 + ], + "angle": 0, + "content": "Figure 3. Parameters sensitivity analysis for six tasks. (a-b) plot the H-score with different values of \\(\\lambda\\), \\(\\theta\\); (c) plot the H-score, \\(\\mathrm{Acc}_c\\), \\(\\mathrm{Acc}_u\\) with different values of \\(\\gamma\\). The default values of these hyperparameters are set to \\(\\lambda = 1.0\\), \\(\\theta = 1.1\\), and \\(\\gamma = 0.6\\)." + }, + { + "type": "image", + "bbox": [ + 0.097, + 0.319, + 0.273, + 0.459 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.155, + 0.465, + 0.23, + 0.478 + ], + "angle": 0, + "content": "(a) No Adapt." + }, + { + "type": "image", + "bbox": [ + 0.296, + 0.319, + 0.475, + 0.457 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.36, + 0.465, + 0.411, + 0.477 + ], + "angle": 0, + "content": "(b) ADU" + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.495, + 0.483, + 0.539 + ], + "angle": 0, + "content": "Figure 4. 
t-SNE feature visualization of target representations in \\(\\mathrm{D}\\rightarrow \\mathrm{A}\\) OPBDA task. Blue dots represent target \"known\" examples \\((L_{s}\\cap L_{t})\\) while red dots are unknown\" examples \\((L_{s} - L_{t})\\)" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.567, + 0.483, + 0.627 + ], + "angle": 0, + "content": "demonstrating the effectiveness of incorporating this term. (iii) The integration of all components results in the highest H-scores, providing clear evidence of the synergy and efficacy of the combined modules." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.628, + 0.483, + 0.809 + ], + "angle": 0, + "content": "Feature visualization. Fig. 4 displays the visualization of the target feature with t-SNE [49], providing a clear representation of the feature distribution. As expected, ADU achieves excellent alignment between the source and target domain features. Taking a closer look at the visualization, it is evident that ADU excels in distinguishing the \"unknown\" categories from the other classes. This improvement aligns well with the intended function of the EDLD module, which is designed to enhance the separation of \"unknown\" categories from known categories. This result further highlights the effectiveness of ADU in handling the challenges posed by unknown classes in black-box domain adaptation tasks." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.811, + 0.483, + 0.901 + ], + "angle": 0, + "content": "Parameters sensitivity analysis. To better assess the impact of different hyperparameters, we conduct a detailed sensitivity analysis. We investigate the sensitivity of the parameters \\(\\lambda\\), \\(\\theta\\), and \\(\\gamma\\) by performing experiments on two tasks from the Office31 dataset and four tasks from the OfficeHome dataset in OPBDA scenarios, as shown in Fig. 3." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.317, + 0.908, + 0.53 + ], + "angle": 0, + "content": "The parameter \\(\\lambda\\) is varied over the range [0.0, 0.2, 0.5, 1.0, 2.0, 5.0], \\(\\theta\\) spans [1.00, 1.05, 1.10, 1.15, 1.20, 1.25], and \\(\\gamma\\) is explored within the range [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]. It is evident that the results are stable around the selected values of \\(\\lambda = 1.0\\), \\(\\theta = 1.1\\), and \\(\\gamma = 0.6\\). Additionally, as shown in Fig. 3(b), we examine the effect of varying \\(\\theta\\) in Eq. (3). The results around the chosen parameter \\(\\theta = 1.1\\) remain stable, and we also observe that increasing \\(\\theta\\) slightly from 1.0 leads to an improvement in the H-score, thereby highlighting the effectiveness of Eq. (3). Finally, we analyze the impact of \\(\\gamma\\). As shown in Fig. 3(c), there is an inverse relationship between \\(\\mathrm{Acc}_c\\) and \\(\\mathrm{Acc}_u\\). However, when \\(\\gamma = 0.6\\), the two metrics reach a relatively balanced state, and at this point, the H-score achieves an optimal result." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.552, + 0.634, + 0.568 + ], + "angle": 0, + "content": "5. Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.58, + 0.907, + 0.837 + ], + "angle": 0, + "content": "In this paper, we introduce the ADU model, a framework specifically designed to tackle Black-box Domain Adaptation with unknown classes in the target domain. ADU integrates two key innovations: Selective Amplification Knowledge Distillation (SAKD) and Entropy-Driven Label Differentiation (EDLD). 
SAKD enhances model accuracy by selectively amplifying high-confidence pseudo-labels, thereby effectively mitigating the influence of noisy pseudo-labels. Meanwhile, EDLD improves the recognition of unknown categories through an entropy-driven threshold, expanding the difference between unknown categories and others and bolstering the robustness of the method across a range of diverse target domains. Experiments across four benchmark datasets demonstrate that ADU outperforms existing state-of-the-art approaches, highlighting its exceptional adaptability and efficacy, setting a new benchmark for future research in the field." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.84, + 0.909, + 0.902 + ], + "angle": 0, + "content": "Acknowledgement. This work was supported by the National Natural Science Foundation of China (Grant T2125006 and 42401415) and Jiangsu Innovation Capacity Building Program (Project BM2022028)." + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.519, + 0.958 + ], + "angle": 0, + "content": "30595" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.093, + 0.091, + 0.188, + 0.106 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.115, + 0.484, + 0.184 + ], + "angle": 0, + "content": "[1] Waqar Ahmed, Pietro Morerio, and Vittorio Murino. Cleaning noisy labels by negative ensemble learning for source-free unsupervised domain adaptation. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, pages 1616-1625, 2022. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.187, + 0.484, + 0.241 + ], + "angle": 0, + "content": "[2] Mathilde Bateson, Hoel Kervadec, Jose Dolz, Herve Lombaert, and Ismail Ben Ayed. Source-free domain adaptation for image segmentation. Medical Image Analysis, 82: 102617, 2022. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.245, + 0.483, + 0.299 + ], + "angle": 0, + "content": "[3] Shai Ben-David, John Blitzer, Koby Crammer, and Fernando Pereira. Analysis of representations for domain adaptation. Advances in neural information processing systems, 19, 2006. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.302, + 0.483, + 0.369 + ], + "angle": 0, + "content": "[4] Zhangjie Cao, Mingsheng Long, Jianmin Wang, and Michael I Jordan. Partial transfer learning with selective adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2724-2732, 2018. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.373, + 0.483, + 0.442 + ], + "angle": 0, + "content": "[5] Zhangjie Cao, Kaichao You, Mingsheng Long, Jianmin Wang, and Qiang Yang. Learning to transfer examples for partial domain adaptation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2985-2994, 2019. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.444, + 0.483, + 0.5 + ], + "angle": 0, + "content": "[6] Minghao Chen, Hongyang Xue, and Deng Cai. Domain adaptation for semantic segmentation with maximum squares loss. In Proceedings of the IEEE/CVF international conference on computer vision, pages 2090-2099, 2019. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.502, + 0.483, + 0.57 + ], + "angle": 0, + "content": "[7] Yuhua Chen, Wen Li, Christos Sakaridis, Dengxin Dai, and Luc Van Gool. Domain adaptive faster r-cnn for object detection in the wild. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3339-3348, 2018. 
1" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.573, + 0.483, + 0.642 + ], + "angle": 0, + "content": "[8] Shuhao Cui, Shuhui Wang, Junbao Zhuo, Chi Su, Qingming Huang, and Qi Tian. Gradually vanishing bridge for adversarial domain adaptation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 12455-12464, 2020. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.645, + 0.483, + 0.685 + ], + "angle": 0, + "content": "[9] Bin Deng, Yabin Zhang, Hui Tang, Changxing Ding, and Kui Jia. On universal black-box domain adaptation. arXiv preprint arXiv:2104.04665, 2021. 1, 3, 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.687, + 0.483, + 0.743 + ], + "angle": 0, + "content": "[10] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248-255. IEEE, 2009. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.746, + 0.483, + 0.814 + ], + "angle": 0, + "content": "[11] Bo Fu, Zhangjie Cao, Mingsheng Long, and Jianmin Wang. Learning to detect open classes for universal domain adaptation. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XV 16, pages 567-583. Springer, 2020. 3, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.817, + 0.483, + 0.857 + ], + "angle": 0, + "content": "[12] Yaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. In International conference on machine learning, pages 1180-1189. PMLR, 2015. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.86, + 0.483, + 0.901 + ], + "angle": 0, + "content": "[13] Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training
3" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.317, + 0.905, + 0.372 + ], + "angle": 0, + "content": "[18] Judy Hoffman, Erik Rodner, Jeff Donahue, Brian Kulis, and Kate Saenko. Asymmetric and category invariant feature transformations for domain adaptation. International journal of computer vision, 109:28-41, 2014. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.373, + 0.905, + 0.413 + ], + "angle": 0, + "content": "[19] Yunzhong Hou and Liang Zheng. Source free domain adaptation with image translation. arXiv preprint arXiv:2008.07514, 2020. 1, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.415, + 0.905, + 0.482 + ], + "angle": 0, + "content": "[20] Mehran Khodabandeh, Arash Vahdat, Mani Ranjbar, and William G Macready. A robust learning approach to domain adaptive object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 480-490, 2019. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.484, + 0.907, + 0.538 + ], + "angle": 0, + "content": "[21] Youngdong Kim, Junho Yim, Juseung Yun, and Junmo Kim. Nlnl: Negative learning for noisy labels. In Proceedings of the IEEE/CVF international conference on computer vision, pages 101-110, 2019. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.54, + 0.905, + 0.594 + ], + "angle": 0, + "content": "[22] Youngeun Kim, Donghyeon Cho, Kyeongtak Han, Priyadarshini Panda, and Sungeun Hong. Domain adaptation without source data. IEEE Transactions on Artificial Intelligence, 2(6):508-518, 2021. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.597, + 0.905, + 0.651 + ], + "angle": 0, + "content": "[23] Youngdong Kim, Juseung Yun, Hyounguk Shon, and Junmo Kim. Joint negative and positive learning for noisy labels. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9442-9451, 2021. 3, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.652, + 0.905, + 0.718 + ], + "angle": 0, + "content": "[24] Guangrui Li, Guoliang Kang, Yi Zhu, Yunchao Wei, and Yi Yang. Domain consensus clustering for universal domain adaptation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9757-9766, 2021. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.721, + 0.905, + 0.789 + ], + "angle": 0, + "content": "[25] Qingmei Li, Yibin Wen, Juepeng Zheng, Yuxiang Zhang, and Haohuan Fu. Hyunida: Breaking label set constraints for universal domain adaptation in cross-scene hyperspectral image classification. IEEE Transactions on Geoscience and Remote Sensing, 2024. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.791, + 0.905, + 0.845 + ], + "angle": 0, + "content": "[26] Shuang Li, Shiji Song, Gao Huang, Zhengming Ding, and Cheng Wu. Domain invariant and class discriminative feature learning for visual domain adaptation. IEEE transactions on image processing, 27(9):4260-4273, 2018. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.847, + 0.905, + 0.901 + ], + "angle": 0, + "content": "[27] Jian Liang, Dapeng Hu, and Jiashi Feng. Do we really need to access the source data? source hypothesis transfer for unsupervised domain adaptation. In International conference on machine learning, pages 6028-6039. PMLR, 2020. 
1, 3" + }, + { + "type": "list", + "bbox": [ + 0.518, + 0.094, + 0.907, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.946, + 0.52, + 0.957 + ], + "angle": 0, + "content": "30596" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.093, + 0.482, + 0.161 + ], + "angle": 0, + "content": "[28] Jian Liang, Dapeng Hu, Jiashi Feng, and Ran He. Dine: Domain adaptation from single and multiple black-box predictors. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 8003-8013, 2022. 1, 3, 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.164, + 0.483, + 0.234 + ], + "angle": 0, + "content": "[29] Mattia Litrico, Alessio Del Bue, and Pietro Morerio. Guiding pseudo-labels with uncertainty estimation for source-free unsupervised domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7640-7650, 2023. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.235, + 0.482, + 0.289 + ], + "angle": 0, + "content": "[30] Long Liu, Lechao Yang, and Bin Zhu. Sparse feature space representation: A unified framework for semi-supervised and domain adaptation learning. Knowledge-Based Systems, 156:43-61, 2018. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.292, + 0.482, + 0.347 + ], + "angle": 0, + "content": "[31] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3431-3440, 2015. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.349, + 0.482, + 0.404 + ], + "angle": 0, + "content": "[32] Mingsheng Long, Yue Cao, Jianmin Wang, and Michael Jordan. Learning transferable features with deep adaptation networks. In International conference on machine learning, pages 97-105. PMLR, 2015. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.406, + 0.482, + 0.46 + ], + "angle": 0, + "content": "[33] Mingsheng Long, Zhangjie Cao, Jianmin Wang, and Michael I Jordan. Conditional adversarial domain adaptation. Advances in neural information processing systems, 31, 2018. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.462, + 0.482, + 0.517 + ], + "angle": 0, + "content": "[34] Bekhzod Olimov, Jeonghong Kim, and Anand Paul. Dcbt-net: Training deep convolutional neural networks with extremely noisy labels. IEEE Access, 8:220482-220495, 2020. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.52, + 0.482, + 0.589 + ], + "angle": 0, + "content": "[35] Fei Pan, Inkyu Shin, Francois Rameau, Seokju Lee, and In So Kweon. Unsupervised intra-domain adaptation for semantic segmentation through self-supervision. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 3764-3773, 2020. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.591, + 0.482, + 0.632 + ], + "angle": 0, + "content": "[36] Pau Panareda Busto and Juergen Gall. Open set domain adaptation. In Proceedings of the IEEE international conference on computer vision, pages 754-763, 2017. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.634, + 0.482, + 0.689 + ], + "angle": 0, + "content": "[37] Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. 2017. 
6" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.691, + 0.482, + 0.744 + ], + "angle": 0, + "content": "[38] Xingchao Peng, Ben Usman, Neela Kaushik, Judy Hoffman, Dequan Wang, and Kate Saenko. Visda: The visual domain adaptation challenge. arXiv preprint arXiv:1710.06924, 2017.5" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.747, + 0.482, + 0.815 + ], + "angle": 0, + "content": "[39] Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, and Bo Wang. Moment matching for multi-source domain adaptation. In Proceedings of the IEEE/CVF international conference on computer vision, pages 1406-1415, 2019. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.818, + 0.482, + 0.899 + ], + "angle": 0, + "content": "[40] Kate Saenko, Brian Kulis, Mario Fritz, and Trevor Darrell. Adapting visual category models to new domains. In Computer Vision-ECCV 2010: 11th European Conference on Computer Vision, Heraklion, Crete, Greece, September 5-11, 2010, Proceedings, Part IV 11, pages 213-226. Springer, 2010. 5" + }, + { + "type": "list", + "bbox": [ + 0.093, + 0.093, + 0.483, + 0.899 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.093, + 0.905, + 0.147 + ], + "angle": 0, + "content": "[41] Kuniaki Saito and Kate Saenko. Ovanet: One-vs-all network for universal domain adaptation. In Proceedings of the IEEE/cvf international conference on computer vision, pages 9000-9009, 2021. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.149, + 0.905, + 0.204 + ], + "angle": 0, + "content": "[42] Kuniaki Saito, Shohei Yamamoto, Yoshitaka Ushiku, and Tatsuya Harada. Open set domain adaptation by backpropagation. In Proceedings of the European conference on computer vision (ECCV), pages 153-168, 2018. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.205, + 0.905, + 0.259 + ], + "angle": 0, + "content": "[43] Chen Shen and Yuhong Guo. Unsupervised heterogeneous domain adaptation with sparse feature transformation. In Asian conference on machine learning, pages 375-390. PMLR, 2018. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.261, + 0.905, + 0.315 + ], + "angle": 0, + "content": "[44] Hwanjun Song, Minseok Kim, Dongmin Park, Yooju Shin, and Jae-Gil Lee. Learning from noisy labels with deep neural networks: A survey. IEEE transactions on neural networks and learning systems, 2022. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.317, + 0.905, + 0.386 + ], + "angle": 0, + "content": "[45] Simen Thys, Wiebe Van Ranst, and Toon Goedemé. Fooling automated surveillance cameras: adversarial patches to attack person detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops, pages 0–0, 2019. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.387, + 0.905, + 0.44 + ], + "angle": 0, + "content": "[46] Florian Tramér, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. Ensemble adversarial training: Attacks and defenses. arXiv preprint arXiv:1705.07204, 2017. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.442, + 0.905, + 0.523 + ], + "angle": 0, + "content": "[47] Yi-Hsuan Tsai, Wei-Chih Hung, Samuel Schulter, Kiyuk Sohn, Ming-Hsuan Yang, and Manmohan Chandraker. Learning to adapt structured output space for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7472-7481, 2018. 
1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.525, + 0.905, + 0.58 + ], + "angle": 0, + "content": "[48] Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. Adversarial discriminative domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7167-7176, 2017. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.581, + 0.905, + 0.622 + ], + "angle": 0, + "content": "[49] Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of machine learning research, 9 (11), 2008. 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.624, + 0.905, + 0.691 + ], + "angle": 0, + "content": "[50] Hemanth Venkateswara, Jose Eusebio, Shayok Chakraborty, and Sethuraman Panchanathan. Deep hashing network for unsupervised domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5018-5027, 2017. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.693, + 0.905, + 0.761 + ], + "angle": 0, + "content": "[51] Riccardo Volpi, Pietro Morerio, Silvio Savarese, and Vittorio Murino. Adversarial feature augmentation for unsupervised domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5495-5504, 2018. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.763, + 0.905, + 0.831 + ], + "angle": 0, + "content": "[52] Rui Wang, Masao Utiyama, Lemao Liu, Kehai Chen, and Eichiro Sumita. Instance weighting for neural machine translation domain adaptation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1482-1488, 2017. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.833, + 0.905, + 0.9 + ], + "angle": 0, + "content": "[53] Hongxin Wei, Lei Feng, Xiangyu Chen, and Bo An. Combating noisy labels by agreement: A joint training method with co-regularization. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 13726–13735, 2020. 3" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.093, + 0.905, + 0.9 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "30597" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.093, + 0.482, + 0.161 + ], + "angle": 0, + "content": "[54] Mingxuan Xia, Junbo Zhao, Gengyu Lyu, Zenan Huang, Tianlei Hu, Gang Chen, and Haobo Wang. A separation and alignment framework for black-box domain adaptation. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 16005-16013, 2024. 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.164, + 0.482, + 0.232 + ], + "angle": 0, + "content": "[55] Rui Xia, Zhenchun Pan, and Feng Xu. Instance weighting for domain adaptation via trading off sample selection bias and variance. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, Stockholm, Sweden, pages 13-19, 2018. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.234, + 0.482, + 0.302 + ], + "angle": 0, + "content": "[56] Minghao Xu, Hang Wang, Bingbing Ni, Qi Tian, and Wenjun Zhang. Cross-domain detection via graph-induced prototype alignment. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12355-12364, 2020. 
1" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.304, + 0.482, + 0.359 + ], + "angle": 0, + "content": "[57] Jianfei Yang, Xiangyu Peng, Kai Wang, Zheng Zhu, Jiashi Feng, Lihua Xie, and Yang You. Divide to adapt: Mitigating confirmation bias for domain adaptation of black-box predictors. arXiv preprint arXiv:2205.14467, 2022. 1, 3, 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.361, + 0.482, + 0.415 + ], + "angle": 0, + "content": "[58] Shiqi Yang, Yaxing Wang, Joost Van De Weijer, Luis Herranz, and Shangling Jui. Unsupervised domain adaptation without source data by casting a bait. arXiv preprint arXiv:2010.12427, 1(2):5, 2020. 1, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.418, + 0.482, + 0.485 + ], + "angle": 0, + "content": "[59] Kaichao You, Mingsheng Long, Zhangjie Cao, Jianmin Wang, and Michael I Jordan. Universal domain adaptation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2720-2729, 2019. 1, 3, 5, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.488, + 0.482, + 0.543 + ], + "angle": 0, + "content": "[60] Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning (still) requires rethinking generalization. Communications of the ACM, 64(3):107-115, 2021. 1, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.545, + 0.482, + 0.586 + ], + "angle": 0, + "content": "[61] Haojian Zhang, Yabin Zhang, Kui Jia, and Lei Zhang. Unsupervised domain adaptation of black-box source models. arXiv preprint arXiv:2101.02839, 2021. 1, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.588, + 0.482, + 0.655 + ], + "angle": 0, + "content": "[62] Siqi Zhang, Lu Zhang, and Zhiyong Liu. Refined pseudo labeling for source-free domain adaptive object detection. In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1-5. IEEE, 2023. 
3" + }, + { + "type": "list", + "bbox": [ + 0.093, + 0.093, + 0.482, + 0.655 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "30598" + } + ] +] \ No newline at end of file diff --git a/2025/ADU_ Adaptive Detection of Unknown Categories in Black-Box Domain Adaptation/a3a5f623-9c8e-40c7-93d2-175e52310d81_origin.pdf b/2025/ADU_ Adaptive Detection of Unknown Categories in Black-Box Domain Adaptation/a3a5f623-9c8e-40c7-93d2-175e52310d81_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..8d2d5431dab7e331458bb1ad019be6109f81e1dd --- /dev/null +++ b/2025/ADU_ Adaptive Detection of Unknown Categories in Black-Box Domain Adaptation/a3a5f623-9c8e-40c7-93d2-175e52310d81_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:952d68df6a80fa2d08de5dd0311f74693b5e0e8c4aded71dda12dad135eac89a +size 653879 diff --git a/2025/ADU_ Adaptive Detection of Unknown Categories in Black-Box Domain Adaptation/full.md b/2025/ADU_ Adaptive Detection of Unknown Categories in Black-Box Domain Adaptation/full.md new file mode 100644 index 0000000000000000000000000000000000000000..c8fd0e3da03d5e4f4fb03c6ecdb4d196133f75f9 --- /dev/null +++ b/2025/ADU_ Adaptive Detection of Unknown Categories in Black-Box Domain Adaptation/full.md @@ -0,0 +1,307 @@ +# ADU: Adaptive Detection of Unknown Categories in Black-Box Domain Adaptation + +Yushan Lai, Guowen Li, Haoyuan Liang, Juepeng Zheng*, and Zhiyu Ye School of Artificial Intelligence, Sun Yat-Sen University + +{laiysh6,ligw8,lianghy68,yezhy26}@mail2.sysu.edu.cn,zhengjp8@mail.sysu.edu.cn + +# Abstract + +Black-box Domain Adaptation (BDA) utilizes a black-box predictor of the source domain to label target domain data, addressing privacy concerns in Unsupervised Domain Adaptation (UDA). However, BDA assumes identical label sets across domains, which is unrealistic. To overcome this limitation, we propose a study on BDA with unknown classes in the target domain. It uses a black-box predictor to label target data and identify "unknown" categories, without requiring access to source domain data or predictor parameters, thus addressing both data privacy and category shift issues in traditional UDA. Existing methods face two main challenges: (i) Noisy pseudo-labels in knowledge distillation (KD) accumulate prediction errors, and (ii) relying on a preset threshold fails to adapt to varying category shifts. To address these, we propose ADU, a framework that allows the target domain to autonomously learn pseudo-labels guided by quality and use an adaptive threshold to identify "unknown" categories. Specifically, ADU consists of Selective Amplification Knowledge Distillation (SAKD) and Entropy-Driven Label Differentiation (EDLD). SAKD improves KD by focusing on high-quality pseudo-labels, mitigating the impact of noisy labels. EDLD categorizes pseudo-labels by quality and applies tailored training strategies to distinguish "unknown" categories, improving detection accuracy and adaptability. Extensive experiments show that ADU achieves state-of-the-art results, outperforming the best existing method by $3.1\%$ on VisDA in the OPBDA scenario. + +# 1. Introduction + +Unsupervised domain adaptation (UDA) [12] aims to transfer knowledge from a well-labeled source domain to an unlabeled target domain, which can ease the burden of manual labeling. 
Recently, UDA has been applied to a range of computer vision tasks, including image classification [13, 33, 48], object detection [7, 20, 56], and semantic segmentation [6, 35, 47]. However, UDA methods may raise data privacy and portability concerns because they require access to the raw source data and the source model parameters. Therefore, source-free domain adaptation (SFDA) [19, 27, 58] has been proposed to protect source data privacy. In the SFDA scenario, only the source model is provided to the target domain, without access to the source data. However, SFDA still risks exposing private source information, which can be compromised through techniques such as white-box attacks [45, 46]. To mitigate these concerns, Black-box Domain Adaptation (BDA) [28, 61] has recently been proposed, as shown in Figure 1(a). BDA aims to learn a model solely from the unlabeled target-domain data, based on the predictions of a black-box predictor trained on the source data. This setting effectively mitigates privacy issues related to the leakage of data and model parameters.

However, traditional BDA [28, 57, 60] always assumes that the source and target domains share identical category sets, an assumption that frequently fails in practice: the target domain is typically unlabeled, so potential category shifts make it difficult to satisfy. Currently, two UDA settings involve unknown classes in the target domain: Open-Set Domain Adaptation (OSDA) [36, 42] and Open-Partial Domain Adaptation (OPDA) [25, 41, 59]. OSDA deals with scenarios where the target domain contains private classes unknown to the source domain, while OPDA handles cases where the source and target domains each have their own private classes. Black-box domain adaptation has recently been applied to OPDA [9]. As shown in Figure 1(b), this setting is designed to learn a robust model for the target domain that not only recognizes the classes shared by the two domains but also identifies "unknown" categories absent from the source domain, despite having no information about the difference between the two label sets.

![](images/c7e73379d0d7153b199e6d7b6e853ccf121aa10dbaaf0d288d16850c7ba1e900.jpg)
(a) Black-box Domain Adaptation

![](images/679d31d87a5b79f77bb55edf66b522db859480439b2b9c951e3e153089bc6abb.jpg)
(b) Open-partial Black-box Domain Adaptation

Figure 1. Black-box domain adaptation and open-partial black-box domain adaptation settings with respect to the label sets of the source and target domains (red labels indicate labels common to both domains). Compared to BDA, ADU is able to deal with BDA with unknown classes in the target domain by adaptively detecting unknown categories.

Currently, only one study has addressed the above problem: [9] applies knowledge distillation to train the target model to mimic the source predictor's outputs and uses a manually preset threshold to identify "unknown" categories. Though inspiring, it still has the following limitations. (i) Due to domain and category shifts between the source and target domains, predictions from the source model are inevitably noisy. Directly utilizing these noisy pseudo-labels accumulates prediction errors, making the adaptation process unreliable.
(ii) A preset threshold fails to accommodate the variability and complexity of category shifts across different target domains, making it inadequate for accurately detecting "unknown" classes in diverse domains and often resulting in misclassification and reduced adaptability.

To address the issues above, we propose a simple yet effective framework called ADU, specifically designed for Open-Set BDA (OSBDA) and Open-Partial BDA (OPBDA). ADU incorporates two core modules: Selective Amplification Knowledge Distillation (SAKD) and Entropy-Driven Label Differentiation (EDLD). For the first challenge, SAKD enhances traditional knowledge distillation, tailoring KD to BDA with unknown classes in the target domain by amplifying learning from the high-quality pseudo-labels produced by the source API. This refinement ensures that the target model emphasizes high-quality pseudo-labels, effectively mitigating the impact of noisy data. For the second challenge, EDLD enhances the framework's ability to handle diverse domain conditions: it first categorizes pseudo-labels by quality and then applies tailored training strategies to widen the distance between the "unknown" classes and the others, while minimizing the impact of noisy pseudo-labels. This adaptive differentiation of labels makes the average entropy of the target model's predictions an effective threshold. Consequently, the refined approach significantly improves the detection accuracy of "unknown" categories and adapts more readily to category shifts across various target domains. Additionally, we iteratively refine the pseudo-labels generated by the source API, which significantly enhances their quality.

Our main contributions can be summarized as follows:

1. We propose Selective Amplification Knowledge Distillation (SAKD), a refined knowledge distillation technique specifically designed for the OPBDA and OSBDA scenarios, which effectively mitigates the impact of noisy pseudo-labels.
2. We introduce Entropy-Driven Label Differentiation (EDLD), which categorizes pseudo-labels by quality and applies customized training strategies to enhance the distinction between "unknown" categories and the others, improving detection accuracy and domain adaptability through adaptive entropy-based thresholding.
3. Extensive experiments on four public benchmarks demonstrate the superior performance of our method compared with existing state-of-the-art works, surpassing the best existing method by $3.1\%$ on VisDA in the OPBDA scenario.

# 2. Related work

Black-box domain adaptation. Unsupervised Domain Adaptation (UDA) [12] aims to adapt a model trained on a labeled source domain to an unlabeled target domain. Many early methods relied on techniques such as instance weighting [52, 55], feature transformation [18, 26, 43], and feature-space representations [30, 51]. Despite their effectiveness, these methods require access to the source domain data, raising privacy and portability concerns [22]. To address the privacy issues associated with UDA, Source-Free Domain Adaptation (SFDA) methods [19, 27, 58] have been proposed. These methods adapt models using only the source model and unlabeled target data, eliminating the need for source data during adaptation. Techniques such as entropy minimization [2] and pseudo-labeling [62] have been explored.
However, SFDA methods still face potential privacy risks because generative models and other techniques might inadvertently reveal source data characteristics. Therefore, Black-box Domain Adaptation (BDA) [28, 61] has emerged as a solution that further mitigates privacy concerns by accessing only the source model's outputs without any internal details, ensuring better privacy preservation than traditional UDA and SFDA methods. Recent methods such as DINE [28] and BETA [57] have made significant strides in this area. Nevertheless, they struggle with inconsistent label sets between domains.

Open-set and open-partial domain adaptation. Closed-set domain adaptation assumes identical label sets between the source and target domains and focuses on minimizing distribution shifts using techniques like discrepancy minimization [31, 32] and adversarial training [8, 15]. However, these methods often struggle when the label sets are not perfectly aligned. To address this issue, Partial Domain Adaptation (PDA) assumes that only the source domain contains private classes, with methods such as SAN [4] employing class-wise domain discriminators and ETN [5] using progressive weighting schemes. Meanwhile, Open-Set Domain Adaptation (OSDA) handles scenarios where the target domain has private classes unknown to the source, and Open-Partial Domain Adaptation (OPDA) addresses the case where both domains have their own private classes. UAN [59] quantifies sample-level uncertainty using entropy and domain similarity, and Fu et al. [11] combine entropy, confidence, and consistency for better uncertainty measurement. To address the challenges faced by black-box domain adaptation, [9] combines OPDA with BDA, tackling both category shift and privacy concerns: it applies knowledge distillation to train the target model to emulate the source predictor's outputs and uses a preset threshold to identify "unknown" categories. Though inspiring, it still faces significant limitations regarding pseudo-label quality and the detection of "unknown" categories. To address these issues, we propose the ADU framework and apply it to Open-Set BDA (OSBDA) and Open-Partial BDA (OPBDA), mitigating the impact of noisy pseudo-labels and enhancing adaptability to category shifts across varied target domains. This approach provides a robust solution to the limitations of existing methods.

Learning with noisy labels. Deep learning models often overfit noisy labels, leading to poor generalization [60]. To address this, various approaches have been proposed, including noise-robust losses [23, 44], noise-transition matrix estimation [14], clean sample selection [53], and loss reweighting [34]. However, these methods often require noise-free validation sets or make assumptions about the noise distribution, which are impractical in BDA settings. Other methods [29, 34] differ by not assuming any specific noise distribution, instead leveraging the noisy scores of the source training classes. Recently, NEL [1] introduced a novel approach that integrates a Negative Learning loss with a pseudo-label refinement framework based on ensembling. Negative Learning [23] is an indirect learning method that employs complementary labels to address noise effectively. In our work, we use Negative Learning to refine high-quality pseudo-labels without ensembling, reducing the computational cost and making our approach more flexible and robust for OSBDA and OPBDA.
# 3. Methodology

In this paper, we are provided with a target domain $\mathcal{D}_t = \left\{x_t^i\right\}_{i=1}^{N_t}$ of $N_t$ unlabeled samples, where $x_t^i \in \mathcal{X}_t$, and a black-box predictor $f_s$ trained on a source domain $\mathcal{D}_s = \left\{\left(x_s^i, y_s^i\right)\right\}_{i=1}^{N_s}$ of $N_s$ labeled samples, where $x_s^i \in \mathcal{X}_s$. We use $L_s$ and $L_t$ to denote the label spaces of the source and target domains, respectively. In general, a model $f$ consists of a feature extractor $G$ and a fully connected classifier $C$. We have no access to the source domain data $\mathcal{D}_s$ or the parameters of the source model $f_s$; only a black-box predictor trained on the source domain, i.e., an API, is available. The objective is to leverage the predictions of the source-domain API to learn a model $f_t$ that labels each target sample with either one of the $L_s$ labels or the "unknown" label. The overall workflow is shown in Fig. 2.

# 3.1. Selective amplification knowledge distillation

Knowledge distillation (KD) [17] has been widely applied to the black-box domain adaptation problem [9, 28, 57], as it transfers knowledge from one model (teacher) to another (student) by guiding the target model (student) to emulate the predictions of the source model (teacher). This approach is particularly suitable for BDA scenarios, where only the predictions of the source model are accessible. To better leverage the information available from the source domain's API, we use a knowledge distillation loss with both the source model's probabilities and hard pseudo-labels:

$$
\mathcal{L}_{KD} = \mathbb{E}_{x_t \sim \mathcal{X}_t} \left[ CE\left(\tilde{\boldsymbol{y}}_t, p_t\right) + CE\left(p_s, p_t\right) \right], \tag{1}
$$

where $CE(\cdot,\cdot)$ denotes the cross-entropy function and $\tilde{\boldsymbol{y}}_t$ is the one-hot pseudo-label derived from $f_s(x_t)$. In addition, we write $p_s$ and $p_t$ for $f_s(x_t)$ and $f_t(x_t)$ for simplicity. However, due to domain and category shifts, the predictions from the source model are inevitably noisy. Eq. (1) weighs the information from all of these predictions equally, which can adversely affect the performance of the target model. To resolve this issue, we propose Selective Amplification Knowledge Distillation (SAKD), which enhances knowledge distillation by leveraging the confidence of the pseudo-labels produced by the source model.

![](images/e78f2281412e7b1985b1f6c1d82c971484177a84c607abf68d737a3067bb3a8d.jpg)
Figure 2. An overview of the proposed ADU framework. We utilize the black-box source predictor solely as an API service, obtaining only the source predictions from it. "PL" in the figure means pseudo-labels.

Firstly, we rewrite Eq. (1) as:

$$
\begin{aligned}
\mathcal{L}_{KD} &= -\mathbb{E}_{x_t \sim \mathcal{X}_t} \Big(\log p_t^{\hat{c}} + \sum_{c=1}^{|L_s|} p_s^{c} \log p_t^{c}\Big) \\
&= -\mathbb{E}_{x_t \sim \mathcal{X}_t} \Big[ \left(1 + p_s^{\hat{c}}\right) \log p_t^{\hat{c}} + \sum_{\substack{c=1 \\ c \neq \hat{c}}}^{|L_s|} p_s^{c} \log p_t^{c} \Big],
\end{aligned} \tag{2}
$$

where $\hat{c}$ is the label predicted by the source model, defined as $\hat{c} = \arg\max_c p_s^c$.
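To make the distillation objective concrete, the following is a minimal PyTorch sketch of Eq. (1); the function name `kd_loss` and the convention that the source API's softmax outputs have been queried once and cached as a tensor `p_s` are illustrative assumptions, not part of the original method description.

```python
import torch
import torch.nn.functional as F

def kd_loss(logits_t: torch.Tensor, p_s: torch.Tensor) -> torch.Tensor:
    """Sketch of Eq. (1): cross-entropy against the hard pseudo-label derived
    from the source API plus cross-entropy against its soft probabilities.
    `logits_t` are target-model logits f_t(x_t); `p_s` holds the cached
    source probabilities f_s(x_t) (assumed pre-computed via the API)."""
    log_p_t = F.log_softmax(logits_t, dim=1)        # log p_t
    y_tilde = p_s.argmax(dim=1)                     # hard pseudo-label ~y_t
    ce_hard = F.nll_loss(log_p_t, y_tilde)          # CE(~y_t, p_t)
    ce_soft = -(p_s * log_p_t).sum(dim=1).mean()    # CE(p_s, p_t)
    return ce_hard + ce_soft
```

Expanding the two terms as in Eq. (2) shows that the source-predicted class $\hat{c}$ effectively receives weight $(1 + p_s^{\hat{c}})$, which is precisely the quantity SAKD modulates below.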
Subsequently, we formulate the SAKD loss by incorporating a modulating parameter $\theta \geq 1$ into the term $(1 + p_s^{\hat{c}})$, which pertains only to $\hat{c}$:

$$
\mathcal{L}_{SAKD} = -\mathbb{E}_{x_t \sim \mathcal{X}_t} \left[ \left(1 + p_s^{\hat{c}}\right)^{\theta} \log p_t^{\hat{c}} + \sum_{\substack{c=1 \\ c \neq \hat{c}}}^{|L_s|} p_s^{c} \log p_t^{c} \right]. \tag{3}
$$

Discussion of the SAKD loss: For simplicity, we consider the case of a single sample, where the SAKD loss simplifies to:

$$
\mathcal{L}_{\mathrm{SAKD}} = -\left[ \left(1 + p_s^{\hat{c}}\right)^{\theta} \log p_t^{\hat{c}} + \sum_{c=1, c \neq \hat{c}}^{|L_s|} p_s^{c} \log p_t^{c} \right]. \tag{4}
$$

Next, we apply the generalized binomial theorem to expand $\left(1 + p_s^{\hat{c}}\right)^{\theta}$ as follows:

$$
\begin{aligned}
\mathcal{L}_{\mathrm{SAKD}} &= -\left[ \sum_{k=0}^{\infty} \binom{\theta}{k} \left(p_s^{\hat{c}}\right)^{k} \log p_t^{\hat{c}} + \sum_{c=1, c \neq \hat{c}}^{|L_s|} p_s^{c} \log p_t^{c} \right] \\
&= \mathcal{L}_{\mathrm{KD}} - \left[ \sum_{k=1}^{\infty} \binom{\theta}{k} \left(p_s^{\hat{c}}\right)^{k} - p_s^{\hat{c}} \right] \log p_t^{\hat{c}} \\
&\approx \mathcal{L}_{\mathrm{KD}} - \left[ (\theta - 1)\, p_s^{\hat{c}} + \frac{\theta(\theta - 1)}{2} \left(p_s^{\hat{c}}\right)^{2} \right] \log p_t^{\hat{c}}. \tag{5}
\end{aligned}
$$

In Eq. (5), the first term $\mathcal{L}_{\mathrm{KD}}$ is the original KD loss of Eq. (2), while the second term is an additional term that is positive (note that $-\log p_t^{\hat{c}} \geq 0$) and depends solely on the target class $\hat{c}$. This additional term enables the SAKD loss to extract more information from pseudo-labels with high confidence, i.e., those with higher values of $p_s^{\hat{c}}$, thereby reducing the impact of noisy pseudo-labels. A comprehensive proof of this claim is provided in the supplementary material.

# 3.2. Entropy-driven label differentiation

As stated above, the outputs of the source model are highly likely to be inaccurate and noisy due to domain shift [3] and category shift. Even with the solution proposed in Eq. (3), we still face the challenge of detecting the "unknown" categories, which requires widening the difference between "unknown" samples and the others. [59] shows that entropy is an effective tool for detecting "unknown" samples in domain adaptation: entropy quantifies prediction uncertainty, and smaller entropy indicates a more certain prediction. To effectively handle the influence of category shift, we adopt an automatic threshold determined by the average entropy. The prediction process can be formulated as follows:

$$
y_t = \begin{cases} \arg\max_c p_t^c & H(p_t) < w \\ \text{unknown} & H(p_t) \geq w, \end{cases} \tag{6}
$$

where $H(p_t)$ and $w$ are computed as:

$$
H(p_t) = -\sum_{c=1}^{|L_s|} p_t^c \log p_t^c, \tag{7}
$$

$$
w = \mathbb{E}_{x_t \sim \mathcal{X}_t} H(p_t). \tag{8}
$$

Taking the average as the threshold eliminates the need for per-dataset hyper-parameter tuning and makes our selection process highly adaptive.
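A sketch of how Eqs. (3) and (6)-(8) might be implemented is given below; the function names, the "unknown" encoding, and the use of a per-batch mean entropy as a stand-in for the dataset-level expectation in Eq. (8) are illustrative assumptions rather than details taken from the paper.

```python
import torch
import torch.nn.functional as F

def sakd_loss(logits_t: torch.Tensor, p_s: torch.Tensor,
              theta: float = 1.1) -> torch.Tensor:
    """Eq. (3): amplify the weight on the source-predicted class c_hat from
    (1 + p_s^c_hat) to (1 + p_s^c_hat)**theta, so confident pseudo-labels
    dominate the distillation signal."""
    log_p_t = F.log_softmax(logits_t, dim=1)
    c_hat = p_s.argmax(dim=1, keepdim=True)                    # argmax_c p_s^c
    amp = (1.0 + p_s.gather(1, c_hat)).pow(theta).squeeze(1)   # (1 + p_s^c_hat)^theta
    term_hat = amp * log_p_t.gather(1, c_hat).squeeze(1)
    rest = torch.ones_like(p_s).scatter_(1, c_hat, 0.0)        # mask out c_hat
    term_rest = (rest * p_s * log_p_t).sum(dim=1)              # sum over c != c_hat
    return -(term_hat + term_rest).mean()

@torch.no_grad()
def predict_with_adaptive_threshold(p_t: torch.Tensor) -> torch.Tensor:
    """Eqs. (6)-(8): samples whose entropy exceeds the mean entropy w are
    labeled "unknown" (encoded as -1 here, an arbitrary convention)."""
    h = -(p_t * p_t.clamp_min(1e-8).log()).sum(dim=1)  # H(p_t), Eq. (7)
    w = h.mean()                                       # adaptive threshold, Eq. (8)
    y = p_t.argmax(dim=1)
    y[h >= w] = -1                                     # "unknown", Eq. (6)
    return y
```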
As we employ entropy as the threshold to detect "unknown" categories, we still face the challenge of widening the gap between "unknown" samples and the others. To address this, we propose Entropy-Driven Label Differentiation (EDLD), which makes the "unknown" samples distinguishable and enhances the quality of pseudo-labels with high certainty. We use entropy to measure the uncertainty of pseudo-labels; higher entropy indicates a more uncertain prediction. We define the EDLD loss by dividing pseudo-labels into high-quality (HQ) and low-quality (LQ) groups according to their entropy:

$$
\mathcal{L}_{EDLD} = \mathbb{E}_{x_t \sim \mathcal{X}_t} \left[ \begin{cases} \mathcal{L}_{HQ}(p_t) & \text{if } H(p_t) < w \\ \mathcal{L}_{LQ}(p_t) & \text{if } H(p_t) \geq w \end{cases} \right], \tag{9}
$$

$$
\mathcal{L}_{HQ} = H(p_t) + \mathcal{L}_{NL}(p_t, \bar{y}_t), \quad \mathcal{L}_{LQ} = -H(p_t). \tag{10}
$$

In the EDLD module, we not only use the entropy loss to widen the entropy gap between high-quality and low-quality pseudo-labels but also use a negative learning loss [21] to refine the high-quality pseudo-labels. The negative learning loss is:

$$
\mathcal{L}_{NL}(p_t, \bar{y}_t) = -\sum_{c=1}^{|L_s|} \bar{\mathbf{y}}_t^c \log\left(1 - p_t^c\right), \tag{11}
$$

where $\bar{y}_t \in \{1, \dots, |L_s|\} \setminus \{y_t\}$ is a complementary label chosen randomly from the remaining labels, and $\bar{\mathbf{y}}_t$ is its one-hot encoding. Eq. (11) drives the probability of the complementary label toward zero, which in turn increases the probability values of the other classes and thus effectively refines the high-quality pseudo-labels.

# 3.3. Adaptive refinement of pseudo labels

To further mitigate the impact of noise in the pseudo-labels generated by the source model, we employ an exponential moving average (EMA) of the target predictions. This allows a gradual and controlled update of the pseudo-labels supplied by the source model at each iteration. The update is defined as follows:

$$
p_s \leftarrow \gamma p_s + (1 - \gamma) f_t(x_t), \quad \forall x_t \in \mathcal{X}_t, \tag{12}
$$

where $\gamma$ is a smoothing factor that determines the extent to which the pseudo-labels adapt to the most recent predictions of the target model. A higher value of $\gamma$ places more weight on the existing pseudo-labels, while a lower value allows quicker adaptation to new information.

This strategy refines the pseudo-labels iteratively, balancing consistency with adaptability. By adjusting the pseudo-labels in a controlled manner, the model can better handle noise and gradually align the source model's outputs with the distribution of the target data. The EMA strategy ensures that updates are not overly reactive to fluctuations, thus enhancing the robustness of the model and improving performance in scenarios with diverse target domains.
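The EDLD objective and the EMA refinement admit an equally compact sketch. The 0.01 weight on the negative-learning term follows the ratio reported in Sec. 4.1, while the helper names, the random complementary-label sampling scheme, and the batch-level mean entropy (standing in for Eq. (8)'s dataset mean) are again illustrative assumptions.

```python
import torch

def edld_loss(p_t: torch.Tensor, nl_weight: float = 0.01) -> torch.Tensor:
    """Eqs. (9)-(11): high-quality samples (entropy below the mean w) minimize
    entropy plus a negative-learning term on a random complementary label;
    low-quality samples maximize entropy, widening the gap exploited by the
    adaptive threshold."""
    h = -(p_t * p_t.clamp_min(1e-8).log()).sum(dim=1)   # H(p_t), Eq. (7)
    w = h.mean()                                        # batch mean as threshold
    hq = h < w                                          # high-quality mask
    # Random complementary label y_bar != y_t for Eq. (11).
    y_t = p_t.argmax(dim=1)
    num_cls = p_t.size(1)
    y_bar = (y_t + torch.randint(1, num_cls, y_t.shape,
                                 device=p_t.device)) % num_cls
    nl = -torch.log(1.0 - p_t.gather(1, y_bar.unsqueeze(1)).squeeze(1) + 1e-8)
    loss_hq = h[hq] + nl_weight * nl[hq]                # L_HQ, Eq. (10)
    loss_lq = -h[~hq]                                   # L_LQ, Eq. (10)
    return torch.cat([loss_hq, loss_lq]).mean()         # expectation of Eq. (9)

@torch.no_grad()
def ema_refine(p_s: torch.Tensor, p_t: torch.Tensor,
               gamma: float = 0.6) -> torch.Tensor:
    """Eq. (12): smooth the cached source pseudo-labels toward the current
    target predictions with factor gamma."""
    return gamma * p_s + (1.0 - gamma) * p_t
```

During training, this term would be combined with the SAKD loss as $\mathcal{L}_{SAKD} + \lambda \mathcal{L}_{EDLD}$, which is exactly the overall objective given next.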
# 3.4. The overall objective

Integrating the objectives introduced in Eqs. (3) and (9), we obtain the final loss function:

$$
\mathcal{L} = \mathcal{L}_{SAKD} + \lambda \mathcal{L}_{EDLD}, \tag{13}
$$

where $\lambda$ is a hyper-parameter empirically set to 1.0, controlling the relative importance of $\mathcal{L}_{SAKD}$ and $\mathcal{L}_{EDLD}$ during distillation.

# 4. Experiments

# 4.1. Setup

Datasets. To assess the effectiveness of our approach, we conduct experiments on the Office31 [40], OfficeHome [50], VisDA [38], and DomainNet [39] datasets. Office31 is a popular benchmark for UDA, consisting of three domains (Amazon, Webcam, Dslr) with 31 categories. OfficeHome is a more challenging benchmark due to its larger domain shifts; it consists of four domains (Art, Clipart, Product, Real World) with 65 categories. VisDA is a large-scale benchmark containing two 12-class domains: a source domain of 152k synthetic images and a target domain of 55k real images from Microsoft COCO.

Table 1. H-score (%) comparison in the OPBDA scenario on the OfficeHome dataset.
| Method | Ar→Cl | Ar→Pr | Ar→Re | Cl→Ar | Cl→Pr | Cl→Re | Pr→Ar | Pr→Cl | Pr→Re | Re→Ar | Re→Cl | Re→Pr | Avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| No Adapt. | 55.8 | 67.3 | 72.8 | 64.2 | 62.3 | 70.5 | 65.7 | 52.1 | 71.7 | 66.1 | 56.7 | 69.2 | 64.5 |
| DINE [28] | 45.3 | 46.1 | 54.6 | 51.0 | 45.3 | 52.4 | 49.9 | 44.5 | 52.1 | 52.4 | 46.7 | 45.7 | 48.8 |
| BETA [57] | 45.9 | 47.4 | 54.8 | 49.3 | 45.1 | 50.1 | 49.3 | 45.5 | 53.5 | 51.5 | 45.8 | 48.8 | 48.9 |
| SEAL [54] | 40.6 | 46.8 | 47.8 | 44.5 | 42.7 | 45.2 | 47.3 | 40.0 | 47.1 | 45.5 | 46.7 | 46.6 | 45.1 |
| UB²DA [9] | 60.9 | 69.6 | 76.3 | 74.4 | 69.2 | 76.5 | 74.5 | 60.3 | 76.2 | 74.1 | 62.0 | 71.1 | 70.4 |
| ADU | 61.2 | 72.7 | 77.9 | 70.3 | 72.5 | 77.3 | 75.9 | 62.0 | 84.7 | 73.2 | 64.1 | 74.9 | 72.2 |

Table 2. H-score (%) comparison in the OPBDA scenario on the Office31, VisDA, and DomainNet datasets.

| Method | A→D | A→W | D→A | D→W | W→A | W→D | Office31 Avg. | VisDA S→R | P→R | P→S | R→P | R→S | S→P | S→R | DomainNet Avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| No Adapt. | 79.9 | 71.9 | 80.1 | 91.5 | 78.7 | 89.8 | 82.0 | 37.7 | 52.8 | 35.0 | 43.6 | 32.5 | 35.7 | 51.8 | 41.9 |
| DINE [28] | 50.3 | 51.4 | 56.6 | 63.0 | 54.0 | 60.1 | 55.9 | 43.5 | 48.4 | 39.5 | 43.4 | 38.1 | 37.6 | 45.6 | 42.1 |
| BETA [57] | 52.4 | 54.0 | 51.6 | 61.2 | 53.4 | 57.6 | 55.0 | 45.5 | 49.2 | 40.3 | 43.1 | 38.2 | 38.0 | 48.4 | 42.9 |
| SEAL [54] | 73.8 | 70.3 | 51.1 | 55.6 | 47.0 | 57.0 | 59.1 | 45.6 | 52.7 | 39.9 | 43.6 | 38.4 | 39.2 | 49.7 | 43.9 |
| UB²DA [9] | 80.9 | 78.2 | 92.6 | 92.6 | 89.4 | 87.9 | 86.9 | 45.2 | 57.1 | 47.2 | 54.8 | 44.0 | 41.4 | 51.5 | 49.3 |
| ADU | 87.5 | 85.2 | 87.0 | 94.4 | 83.8 | 90.5 | 88.1 | 48.7 | 59.8 | 47.8 | 52.5 | 46.6 | 42.7 | 56.4 | 51.0 |
DomainNet is the largest DA dataset, with about 0.6 million images. Like [11, 24], we conduct experiments on three of its subsets, i.e., Painting, Real, and Sketch. Following existing works [11, 24, 59], we separate the label set into three parts: common ($|L_s \cap L_t|$), source-private ($|L_s - L_t|$), and target-private ($|L_t - L_s|$) classes, separated according to their alphabetical order. We evaluate ADU in OPBDA using all four datasets, and in OSBDA using the first three.

Evaluation protocols. Considering the trade-off between the accuracy on known and unknown classes is important when evaluating OSDA and OPDA methods. We therefore evaluate methods using the H-score [11], the harmonic mean of the accuracy on common classes ($\mathrm{Acc}_c$) and the accuracy on the "unknown" classes ($\mathrm{Acc}_u$), defined as:

$$
h = 2 \cdot \frac{Acc_c \cdot Acc_u}{Acc_c + Acc_u}. \tag{14}
$$

This metric provides a more comprehensive evaluation by ensuring that improvements in one area do not come at the expense of the other, measuring both accuracies well.

Implementation details. All experiments are implemented in PyTorch [37]. For fair comparison with previous methods, we use the same ResNet50 [16] backbone pre-trained on ImageNet [10] as the feature extractor in all experiments. For the source model, we fine-tune the model on source examples with the cross-entropy loss and then treat it as a black box, requiring only the input-output interface of this model in our experiments. We use the SGD optimizer with a learning rate of 0.01, a momentum of 0.9, a weight decay of 5e-4, and a batch size of 128. Concerning the parameters of ADU, we set $\theta = 1.1$, $\gamma = 0.6$, and $\lambda = 1.0$ for all datasets and tasks. Additionally, following [23], we set the ratio of $\mathcal{L}_{NL}(p_t, \bar{y}_t)$ to $H(p_t)$ in Eq. (10) as 0.01:1.

Baselines. We compare the proposed ADU with (i) BDA methods: DINE [28], BETA [57], and SEAL [54]; and (ii) the OPBDA method $\mathrm{UB}^2\mathrm{DA}$ [9]. These methods represent the state of the art in their respective settings. Notably, since black-box DA methods lack the capability to identify "unknown" categories, we apply the average entropy as a threshold to them, as in our setting. The term "No Adapt." refers to the baseline scenario where the source model is used directly for target label prediction, without any form of adaptation.

# 4.2. Results

Results for OPBDA. We first perform experiments under the most challenging scenario, namely OPBDA, in which both the source and target domains contain private categories. The results for the OfficeHome dataset are presented in Table 1, while those for the Office31, VisDA, and DomainNet datasets are shown in Table 2. As illustrated in these tables, our proposed ADU method achieves a new state of the art, surpassing all existing methods across the four datasets. Notably, ADU consistently improves the H-score compared to the "No Adapt." baseline in each experimental setting, with a significant increase of $11.0\%$ on the VisDA dataset. This improvement demonstrates that our method effectively mitigates the influence of noise from source model predictions on the target model and accurately identifies unknown categories within the target data.

Table 3. H-score (%) comparison in the OSBDA scenario on the OfficeHome dataset.
| Method | Ar→Cl | Ar→Pr | Ar→Re | Cl→Ar | Cl→Pr | Cl→Re | Pr→Ar | Pr→Cl | Pr→Re | Re→Ar | Re→Cl | Re→Pr | Avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| No Adapt. | 59.6 | 68.1 | 75.7 | 67.1 | 66.7 | 70.4 | 63.8 | 54.9 | 55.7 | 71.4 | 58.6 | 70.5 | 65.2 |
| DINE [28] | 47.0 | 45.8 | 52.2 | 49.7 | 47.1 | 50.0 | 48.2 | 43.8 | 50.4 | 52.6 | 46.0 | 46.8 | 48.3 |
| BETA [57] | 46.6 | 48.3 | 54.9 | 47.7 | 48.4 | 50.5 | 49.1 | 42.9 | 51.8 | 50.2 | 45.2 | 48.9 | 48.7 |
| SEAL [54] | 43.3 | 46.5 | 47.1 | 43.7 | 45.9 | 45.4 | 45.3 | 40.8 | 45.6 | 43.5 | 41.3 | 46.5 | 44.6 |
| UB²DA [9] | 65.5 | 70.4 | 75.5 | 67.8 | 69.3 | 74.4 | 71.2 | 56.7 | 75.0 | 70.7 | 63.3 | 69.8 | 69.1 |
| ADU | 66.0 | 70.5 | 77.4 | 72.2 | 70.1 | 75.0 | 69.0 | 63.6 | 76.1 | 73.4 | 64.1 | 75.2 | 71.1 |

Table 4. H-score (%) comparison in the OSBDA scenario on the Office31 and VisDA datasets.

| Method | A→D | A→W | D→A | D→W | W→A | W→D | Office31 Avg. | VisDA S→R |
|---|---|---|---|---|---|---|---|---|
| No Adapt. | 81.4 | 80.8 | 85.3 | 88.1 | 78.0 | 88.0 | 83.6 | 44.6 |
| DINE [28] | 60.5 | 54.6 | 56.8 | 69.0 | 56.3 | 61.5 | 59.8 | 43.1 |
| BETA [57] | 48.3 | 53.0 | 54.3 | 60.6 | 54.4 | 57.3 | 54.7 | 48.3 |
| SEAL [54] | 50.4 | 43.7 | 53.4 | 44.8 | 54.0 | 50.8 | 49.5 | 40.6 |
| UB²DA [9] | 85.7 | 87.4 | 91.0 | 89.2 | 85.1 | 84.1 | 87.1 | 48.1 |
| ADU | 86.9 | 84.9 | 89.7 | 91.3 | 86.3 | 89.2 | 88.1 | 50.8 |
An examination of Tables 1 and 2 reveals that methods such as DINE [28], BETA [57], and SEAL [54] perform poorly compared to our approach and $\mathrm{UB}^2\mathrm{DA}$ [9], with performance even falling below the "No Adapt." baseline on the Office31 and OfficeHome datasets. This underperformance is likely due to the lack of design tailored specifically to the OPBDA scenario in these methods, which hinders their ability to effectively differentiate unknown categories from the other classes. These results underscore the importance of our ADU approach, which is specifically designed for OPBDA. Compared to $\mathrm{UB}^2\mathrm{DA}$ [9], ADU achieves higher H-scores on the Office31, OfficeHome, VisDA, and DomainNet datasets, with improvements of $1.2\%$, $1.8\%$, $3.5\%$, and $1.7\%$, respectively. These gains further highlight the effectiveness of our proposed approach.

Results for OSBDA. We subsequently conduct experiments under OSBDA scenarios, where only the target domain includes categories absent from the source domain. The results for the OfficeHome dataset are provided in Table 3, while those for the Office31 and VisDA datasets are presented in Table 4. As shown in these tables, our proposed ADU method surpasses the current state of the art. Specifically, ADU consistently outperforms the "No Adapt." baseline in terms of H-score across all experimental settings; notably, for the Pr→Re scenario, it achieves an improvement of $20.4\%$. This substantial enhancement demonstrates that our method effectively reduces the influence of noise from source model predictions on the target model, allowing for accurate identification of unknown categories within the target data.

Table 5. Ablation study. H-score (%) of different variants in OPBDA scenarios. $\mathcal{L}_{HQ}^{1}$, $\mathcal{L}_{HQ}^{2}$, and $\mathcal{L}_{LQ}$ refer to the objectives corresponding to the negative loss in $\mathcal{L}_{HQ}$, the entropy loss in $\mathcal{L}_{HQ}$, and the loss associated with low-quality labels, respectively. A→W and D→W are Office31 tasks; the remaining four are OfficeHome tasks.
| $\mathcal{L}_{HQ}^{1}$ | $\mathcal{L}_{HQ}^{2}$ | $\mathcal{L}_{LQ}$ | A→W | D→W | Ar→Cl | Cl→Re | Pr→Ar | Re→Cl | Avg. |
|---|---|---|---|---|---|---|---|---|---|
| - | - | - | 82.9 | 92.6 | 61.2 | 72.9 | 71.5 | 62.0 | 73.7 |
| ✓ | - | - | 84.3 | 93.5 | 61.1 | 73.0 | 72.0 | 62.7 | 74.6 |
| - | ✓ | - | 83.5 | 92.7 | 61.7 | 74.3 | 72.1 | 62.5 | 74.5 |
| - | - | ✓ | 84.1 | 93.8 | 61.4 | 74.1 | 72.6 | 62.6 | 74.8 |
| ✓ | ✓ | - | 84.3 | 94.1 | 62.7 | 73.7 | 71.7 | 61.3 | 74.6 |
| ✓ | - | ✓ | 83.1 | 94.2 | 63.3 | 74.7 | 72.3 | 62.6 | 75.0 |
| - | ✓ | ✓ | 85.1 | 94.0 | 62.6 | 74.6 | 72.1 | 62.8 | 75.2 |
| ✓ | ✓ | ✓ | 85.2 | 94.4 | 61.2 | 77.3 | 75.9 | 64.1 | 76.3 |
Compared to $\mathrm{UB}^2\mathrm{DA}$ [9], ADU achieves higher H-scores on the Office31, OfficeHome, and VisDA datasets, with improvements of $1.0\%$, $2.0\%$, and $2.7\%$, respectively. These results further validate the effectiveness of our proposed approach.

# 4.3. Analysis

Ablation study. To comprehensively assess the individual contributions of the components of our method, we conduct extensive ablation studies on two tasks from the Office31 dataset and four tasks from the OfficeHome dataset in OPBDA scenarios. The results are summarized in Table 5. Here, $\mathcal{L}_{HQ}^{1}$, $\mathcal{L}_{HQ}^{2}$, and $\mathcal{L}_{LQ}$ refer to the objectives corresponding to the negative loss in $\mathcal{L}_{HQ}$, the entropy loss in $\mathcal{L}_{HQ}$, and the loss associated with low-quality labels, respectively. More detailed results can be found in the supplementary material. It is important to emphasize that all ablation experiments consistently employ the SAKD loss, a critical component of the ADU framework; without it, the model would be unable to transfer knowledge from the source domain to the target model effectively. From the ablation results, we draw the following conclusions: (i) Introducing any component alongside the SAKD loss leads to performance improvements, underscoring the vital role of the EDLD module. (ii) The full EDLD loss, which includes the negative loss term, yields better performance than its version without the negative loss, demonstrating the effectiveness of this term. (iii) Integrating all components results in the highest H-scores, providing clear evidence of the synergy and efficacy of the combined modules.

![](images/4484827b5b8c63032e103d251105fe0b4bb9899dce6ca76aba4e8a7b83e52bb8.jpg)
Figure 3. Parameter sensitivity analysis for six tasks. (a-b) plot the H-score for different values of $\lambda$ and $\theta$; (c) plots the H-score, $\mathrm{Acc}_c$, and $\mathrm{Acc}_u$ for different values of $\gamma$. The default values of these hyperparameters are $\lambda = 1.0$, $\theta = 1.1$, and $\gamma = 0.6$.

![](images/08c225f96d48980126e7f969dfe663e036f9dfaf482f30dae474a9f76f098d72.jpg)

![](images/0adff36860afc575c9c033912b7b501e3e31ed6569084e9ee11cda382153ac60.jpg)

![](images/1355f2a705c8410e1d496dcc7c31f228a86a6862ec4e2355b6692ffc730caac4.jpg)
(a) No Adapt.

![](images/bdf150df8dc7c29414b5907ff567505aba668e7afc1ab553079aa505144a5981.jpg)
(b) ADU

Figure 4. t-SNE feature visualization of target representations in the D→A OPBDA task. Blue dots represent target "known" examples ($L_s \cap L_t$) while red dots are "unknown" examples ($L_t - L_s$).

Feature visualization. Fig. 4 displays the t-SNE [49] visualization of the target features, providing a clear representation of the feature distribution. As expected, ADU achieves excellent alignment between the source and target domain features. Taking a closer look at the visualization, it is evident that ADU excels at distinguishing the "unknown" categories from the other classes. This improvement aligns well with the intended function of the EDLD module, which is designed to enhance the separation of "unknown" categories from the known ones.
This result further highlights the effectiveness of ADU in handling the challenges posed by unknown classes in black-box domain adaptation tasks.

Parameter sensitivity analysis. To better assess the impact of different hyperparameters, we conduct a detailed sensitivity analysis. We investigate the sensitivity of the parameters $\lambda$, $\theta$, and $\gamma$ by performing experiments on two tasks from the Office31 dataset and four tasks from the OfficeHome dataset in OPBDA scenarios, as shown in Fig. 3. The parameter $\lambda$ is varied over [0.0, 0.2, 0.5, 1.0, 2.0, 5.0], $\theta$ over [1.00, 1.05, 1.10, 1.15, 1.20, 1.25], and $\gamma$ over [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]. The results are stable around the selected values of $\lambda = 1.0$, $\theta = 1.1$, and $\gamma = 0.6$. Additionally, as shown in Fig. 3(b), the results around the chosen $\theta = 1.1$ remain stable, and increasing $\theta$ slightly above 1.0 improves the H-score, highlighting the effectiveness of Eq. (3). Finally, we analyze the impact of $\gamma$. As shown in Fig. 3(c), there is an inverse relationship between $\mathrm{Acc}_c$ and $\mathrm{Acc}_u$; at $\gamma = 0.6$ the two metrics reach a relatively balanced state, and the H-score achieves its optimum.

# 5. Conclusion

In this paper, we introduce ADU, a framework specifically designed to tackle black-box domain adaptation with unknown classes in the target domain. ADU integrates two key innovations: Selective Amplification Knowledge Distillation (SAKD) and Entropy-Driven Label Differentiation (EDLD). SAKD enhances model accuracy by selectively amplifying high-confidence pseudo-labels, thereby effectively mitigating the influence of noisy pseudo-labels. Meanwhile, EDLD improves the recognition of unknown categories through an entropy-driven threshold, widening the difference between unknown categories and the others and bolstering the robustness of the method across a range of diverse target domains. Experiments across four benchmark datasets demonstrate that ADU outperforms existing state-of-the-art approaches, highlighting its adaptability and efficacy and setting a new benchmark for future research in the field.

Acknowledgement. This work was supported by the National Natural Science Foundation of China (Grants T2125006 and 42401415) and the Jiangsu Innovation Capacity Building Program (Project BM2022028).

# References

[1] Waqar Ahmed, Pietro Morerio, and Vittorio Murino. Cleaning noisy labels by negative ensemble learning for source-free unsupervised domain adaptation. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, pages 1616-1625, 2022. 3
[2] Mathilde Bateson, Hoel Kervadec, Jose Dolz, Herve Lombaert, and Ismail Ben Ayed. Source-free domain adaptation for image segmentation. Medical Image Analysis, 82:102617, 2022. 3
[3] Shai Ben-David, John Blitzer, Koby Crammer, and Fernando Pereira. Analysis of representations for domain adaptation. Advances in neural information processing systems, 19, 2006. 5
[4] Zhangjie Cao, Mingsheng Long, Jianmin Wang, and Michael I Jordan. Partial transfer learning with selective adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2724-2732, 2018. 3
3 +[5] Zhangjie Cao, Kaichao You, Mingsheng Long, Jianmin Wang, and Qiang Yang. Learning to transfer examples for partial domain adaptation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2985-2994, 2019. 3 +[6] Minghao Chen, Hongyang Xue, and Deng Cai. Domain adaptation for semantic segmentation with maximum squares loss. In Proceedings of the IEEE/CVF international conference on computer vision, pages 2090-2099, 2019. 1 +[7] Yuhua Chen, Wen Li, Christos Sakaridis, Dengxin Dai, and Luc Van Gool. Domain adaptive faster r-cnn for object detection in the wild. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3339-3348, 2018. 1 +[8] Shuhao Cui, Shuhui Wang, Junbao Zhuo, Chi Su, Qingming Huang, and Qi Tian. Gradually vanishing bridge for adversarial domain adaptation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 12455-12464, 2020. 3 +[9] Bin Deng, Yabin Zhang, Hui Tang, Changxing Ding, and Kui Jia. On universal black-box domain adaptation. arXiv preprint arXiv:2104.04665, 2021. 1, 3, 6, 7 +[10] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248-255. IEEE, 2009. 6 +[11] Bo Fu, Zhangjie Cao, Mingsheng Long, and Jianmin Wang. Learning to detect open classes for universal domain adaptation. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XV 16, pages 567-583. Springer, 2020. 3, 6 +[12] Yaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. In International conference on machine learning, pages 1180-1189. PMLR, 2015. 1, 2 +[13] Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario March, and Victor Lempitsky. Domain-adversarial training + +of neural networks. Journal of machine learning research, 17(59):1-35, 2016. 1 +[14] Jacob Goldberger and Ehud Ben-Reuven. Training deep neural-networks using a noise adaptation layer. In International conference on learning representations, 2022. 3 +[15] Rui Gong, Wen Li, Yuhua Chen, and Luc Van Gool. Dlow: Domain flow for adaptation and generalization. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2477-2486, 2019. 3 +[16] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016. 6 +[17] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015. 3 +[18] Judy Hoffman, Erik Rodner, Jeff Donahue, Brian Kulis, and Kate Saenko. Asymmetric and category invariant feature transformations for domain adaptation. International journal of computer vision, 109:28-41, 2014. 2 +[19] Yunzhong Hou and Liang Zheng. Source free domain adaptation with image translation. arXiv preprint arXiv:2008.07514, 2020. 1, 3 +[20] Mehran Khodabandeh, Arash Vahdat, Mani Ranjbar, and William G Macready. A robust learning approach to domain adaptive object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 480-490, 2019. 1 +[21] Youngdong Kim, Junho Yim, Juseung Yun, and Junmo Kim. Nlnl: Negative learning for noisy labels. 
In Proceedings of the IEEE/CVF international conference on computer vision, pages 101-110, 2019. 5 +[22] Youngeun Kim, Donghyeon Cho, Kyeongtak Han, Priyadarshini Panda, and Sungeun Hong. Domain adaptation without source data. IEEE Transactions on Artificial Intelligence, 2(6):508-518, 2021. 3 +[23] Youngdong Kim, Juseung Yun, Hyounguk Shon, and Junmo Kim. Joint negative and positive learning for noisy labels. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9442-9451, 2021. 3, 6 +[24] Guangrui Li, Guoliang Kang, Yi Zhu, Yunchao Wei, and Yi Yang. Domain consensus clustering for universal domain adaptation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9757-9766, 2021. 6 +[25] Qingmei Li, Yibin Wen, Juepeng Zheng, Yuxiang Zhang, and Haohuan Fu. Hyunida: Breaking label set constraints for universal domain adaptation in cross-scene hyperspectral image classification. IEEE Transactions on Geoscience and Remote Sensing, 2024. 1 +[26] Shuang Li, Shiji Song, Gao Huang, Zhengming Ding, and Cheng Wu. Domain invariant and class discriminative feature learning for visual domain adaptation. IEEE transactions on image processing, 27(9):4260-4273, 2018. 2 +[27] Jian Liang, Dapeng Hu, and Jiashi Feng. Do we really need to access the source data? source hypothesis transfer for unsupervised domain adaptation. In International conference on machine learning, pages 6028-6039. PMLR, 2020. 1, 3 + +[28] Jian Liang, Dapeng Hu, Jiashi Feng, and Ran He. Dine: Domain adaptation from single and multiple black-box predictors. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 8003-8013, 2022. 1, 3, 6, 7 +[29] Mattia Litrico, Alessio Del Bue, and Pietro Morerio. Guiding pseudo-labels with uncertainty estimation for source-free unsupervised domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7640-7650, 2023. 3 +[30] Long Liu, Lechao Yang, and Bin Zhu. Sparse feature space representation: A unified framework for semi-supervised and domain adaptation learning. Knowledge-Based Systems, 156:43-61, 2018. 3 +[31] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3431-3440, 2015. 3 +[32] Mingsheng Long, Yue Cao, Jianmin Wang, and Michael Jordan. Learning transferable features with deep adaptation networks. In International conference on machine learning, pages 97-105. PMLR, 2015. 3 +[33] Mingsheng Long, Zhangjie Cao, Jianmin Wang, and Michael I Jordan. Conditional adversarial domain adaptation. Advances in neural information processing systems, 31, 2018. 1 +[34] Bekhzod Olimov, Jeonghong Kim, and Anand Paul. Dcbt-net: Training deep convolutional neural networks with extremely noisy labels. IEEE Access, 8:220482-220495, 2020. 3 +[35] Fei Pan, Inkyu Shin, Francois Rameau, Seokju Lee, and In So Kweon. Unsupervised intra-domain adaptation for semantic segmentation through self-supervision. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 3764-3773, 2020. 1 +[36] Pau Panareda Busto and Juergen Gall. Open set domain adaptation. In Proceedings of the IEEE international conference on computer vision, pages 754-763, 2017. 
1 +[37] Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in PyTorch. 2017. 6 +[38] Xingchao Peng, Ben Usman, Neela Kaushik, Judy Hoffman, Dequan Wang, and Kate Saenko. VisDA: The visual domain adaptation challenge. arXiv preprint arXiv:1710.06924, 2017. 5 +[39] Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, and Bo Wang. Moment matching for multi-source domain adaptation. In Proceedings of the IEEE/CVF international conference on computer vision, pages 1406-1415, 2019. 5 +[40] Kate Saenko, Brian Kulis, Mario Fritz, and Trevor Darrell. Adapting visual category models to new domains. In Computer Vision-ECCV 2010: 11th European Conference on Computer Vision, Heraklion, Crete, Greece, September 5-11, 2010, Proceedings, Part IV 11, pages 213-226. Springer, 2010. 5 +[41] Kuniaki Saito and Kate Saenko. OVANet: One-vs-all network for universal domain adaptation. In Proceedings of the IEEE/CVF international conference on computer vision, pages 9000-9009, 2021. 1 +[42] Kuniaki Saito, Shohei Yamamoto, Yoshitaka Ushiku, and Tatsuya Harada. Open set domain adaptation by backpropagation. In Proceedings of the European conference on computer vision (ECCV), pages 153-168, 2018. 1 +[43] Chen Shen and Yuhong Guo. Unsupervised heterogeneous domain adaptation with sparse feature transformation. In Asian conference on machine learning, pages 375-390. PMLR, 2018. 2 +[44] Hwanjun Song, Minseok Kim, Dongmin Park, Yooju Shin, and Jae-Gil Lee. Learning from noisy labels with deep neural networks: A survey. IEEE transactions on neural networks and learning systems, 2022. 3 +[45] Simen Thys, Wiebe Van Ranst, and Toon Goedemé. Fooling automated surveillance cameras: adversarial patches to attack person detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops, pages 0-0, 2019. 1 +[46] Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. Ensemble adversarial training: Attacks and defenses. arXiv preprint arXiv:1705.07204, 2017. 1 +[47] Yi-Hsuan Tsai, Wei-Chih Hung, Samuel Schulter, Kihyuk Sohn, Ming-Hsuan Yang, and Manmohan Chandraker. Learning to adapt structured output space for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7472-7481, 2018. 1 +[48] Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. Adversarial discriminative domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7167-7176, 2017. 1 +[49] Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of machine learning research, 9(11), 2008. 8 +[50] Hemanth Venkateswara, Jose Eusebio, Shayok Chakraborty, and Sethuraman Panchanathan. Deep hashing network for unsupervised domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5018-5027, 2017. 5 +[51] Riccardo Volpi, Pietro Morerio, Silvio Savarese, and Vittorio Murino. Adversarial feature augmentation for unsupervised domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5495-5504, 2018. 3 +[52] Rui Wang, Masao Utiyama, Lemao Liu, Kehai Chen, and Eiichiro Sumita. Instance weighting for neural machine translation domain adaptation.
In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1482-1488, 2017. 2 +[53] Hongxin Wei, Lei Feng, Xiangyu Chen, and Bo An. Combating noisy labels by agreement: A joint training method with co-regularization. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 13726-13735, 2020. 3 +[54] Mingxuan Xia, Junbo Zhao, Gengyu Lyu, Zenan Huang, Tianlei Hu, Gang Chen, and Haobo Wang. A separation and alignment framework for black-box domain adaptation. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 16005-16013, 2024. 6, 7 +[55] Rui Xia, Zhenchun Pan, and Feng Xu. Instance weighting for domain adaptation via trading off sample selection bias and variance. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, Stockholm, Sweden, pages 13-19, 2018. 2 +[56] Minghao Xu, Hang Wang, Bingbing Ni, Qi Tian, and Wenjun Zhang. Cross-domain detection via graph-induced prototype alignment. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 12355-12364, 2020. 1 +[57] Jianfei Yang, Xiangyu Peng, Kai Wang, Zheng Zhu, Jiashi Feng, Lihua Xie, and Yang You. Divide to adapt: Mitigating confirmation bias for domain adaptation of black-box predictors. arXiv preprint arXiv:2205.14467, 2022. 1, 3, 6, 7 +[58] Shiqi Yang, Yaxing Wang, Joost Van De Weijer, Luis Herranz, and Shangling Jui. Unsupervised domain adaptation without source data by casting a bait. arXiv preprint arXiv:2010.12427, 2020. 1, 3 +[59] Kaichao You, Mingsheng Long, Zhangjie Cao, Jianmin Wang, and Michael I Jordan. Universal domain adaptation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2720-2729, 2019. 1, 3, 5, 6 +[60] Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning (still) requires rethinking generalization. Communications of the ACM, 64(3):107-115, 2021. 1, 3 +[61] Haojian Zhang, Yabin Zhang, Kui Jia, and Lei Zhang. Unsupervised domain adaptation of black-box source models. arXiv preprint arXiv:2101.02839, 2021. 1, 3 +[62] Siqi Zhang, Lu Zhang, and Zhiyong Liu. Refined pseudo labeling for source-free domain adaptive object detection. In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1-5. IEEE, 2023.
3 \ No newline at end of file diff --git a/2025/ADU_ Adaptive Detection of Unknown Categories in Black-Box Domain Adaptation/images.zip b/2025/ADU_ Adaptive Detection of Unknown Categories in Black-Box Domain Adaptation/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..052422ef42a5eaa70fa774cd713b0eb045d3715e --- /dev/null +++ b/2025/ADU_ Adaptive Detection of Unknown Categories in Black-Box Domain Adaptation/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bae1f2441884ce6b7c281d3f61735d5fbbbc0b2ce49ce9476fef65059222676e +size 579063 diff --git a/2025/ADU_ Adaptive Detection of Unknown Categories in Black-Box Domain Adaptation/layout.json b/2025/ADU_ Adaptive Detection of Unknown Categories in Black-Box Domain Adaptation/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..d90536d7436efaa5765275ec7792716dd36cdbea --- /dev/null +++ b/2025/ADU_ Adaptive Detection of Unknown Categories in Black-Box Domain Adaptation/layout.json @@ -0,0 +1,8730 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 151, + 103, + 460, + 140 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 151, + 103, + 460, + 140 + ], + "spans": [ + { + "bbox": [ + 151, + 103, + 460, + 140 + ], + "type": "text", + "content": "ADU: Adaptive Detection of Unknown Categories in Black-Box Domain Adaptation" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 161, + 481, + 190 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 161, + 481, + 190 + ], + "spans": [ + { + "bbox": [ + 130, + 161, + 481, + 190 + ], + "type": "text", + "content": "Yushan Lai, Guowen Li, Haoyuan Liang, Juepeng Zheng*, and Zhiyu Ye School of Artificial Intelligence, Sun Yat-Sen University" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 87, + 191, + 523, + 204 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 87, + 191, + 523, + 204 + ], + "spans": [ + { + "bbox": [ + 87, + 191, + 523, + 204 + ], + "type": "text", + "content": "{laiysh6,ligw8,lianghy68,yezhy26}@mail2.sysu.edu.cn,zhengjp8@mail.sysu.edu.cn" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 151, + 231, + 201, + 243 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 151, + 231, + 201, + 243 + ], + "spans": [ + { + "bbox": [ + 151, + 231, + 201, + 243 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 255, + 297, + 590 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 255, + 297, + 590 + ], + "spans": [ + { + "bbox": [ + 55, + 255, + 297, + 590 + ], + "type": "text", + "content": "Black-box Domain Adaptation (BDA) utilizes a black-box predictor of the source domain to label target domain data, addressing privacy concerns in Unsupervised Domain Adaptation (UDA). However, BDA assumes identical label sets across domains, which is unrealistic. To overcome this limitation, we propose a study on BDA with unknown classes in the target domain. It uses a black-box predictor to label target data and identify \"unknown\" categories, without requiring access to source domain data or predictor parameters, thus addressing both data privacy and category shift issues in traditional UDA. Existing methods face two main challenges: (i) Noisy pseudo-labels in knowledge distillation (KD) accumulate prediction errors, and (ii) relying on a preset threshold fails to adapt to varying category shifts. 
To address these, we propose ADU, a framework that allows the target domain to autonomously learn pseudo-labels guided by quality and use an adaptive threshold to identify \"unknown\" categories. Specifically, ADU consists of Selective Amplification Knowledge Distillation (SAKD) and Entropy-Driven Label Differentiation (EDLD). SAKD improves KD by focusing on high-quality pseudo-labels, mitigating the impact of noisy labels. EDLD categorizes pseudo-labels by quality and applies tailored training strategies to distinguish \"unknown\" categories, improving detection accuracy and adaptability. Extensive experiments show that ADU achieves state-of-the-art results, outperforming the best existing method by " + }, + { + "bbox": [ + 55, + 255, + 297, + 590 + ], + "type": "inline_equation", + "content": "3.1\\%" + }, + { + "bbox": [ + 55, + 255, + 297, + 590 + ], + "type": "text", + "content": " on VisDA in the OPBDA scenario." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 56, + 614, + 137, + 628 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 614, + 137, + 628 + ], + "spans": [ + { + "bbox": [ + 56, + 614, + 137, + 628 + ], + "type": "text", + "content": "1. Introduction" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 635, + 296, + 696 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 635, + 296, + 696 + ], + "spans": [ + { + "bbox": [ + 55, + 635, + 296, + 696 + ], + "type": "text", + "content": "Unsupervised domain adaptation (UDA) [12] aims to transfer knowledge from a well-labeled source domain to an unlabeled target domain, which can ease the burden of manual labeling. Recently, UDA has been applied in a range of computer vision tasks, including image classification" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 232, + 555, + 447 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 232, + 555, + 447 + ], + "spans": [ + { + "bbox": [ + 313, + 232, + 555, + 447 + ], + "type": "text", + "content": "[13, 33, 48], objection detection [7, 20, 56] and semantic segmentation [6, 35, 47]. However, UDA methods may raise concerns about data privacy and portability issues due to their requirement for access to raw source data and source model parameters. Therefore, source-free domain adaptation (SFDA) [19, 27, 58] is proposed to protect the source data privacy. In the SFDA scenario, only the source model is provided to the target domain without access to source data. However, it still faces the issue of source information privacy, which can be compromised through techniques such as white-box attacks [45, 46]. To mitigate these concerns, Black-box Domain Adaptation (BDA) [28, 61] has been proposed recently, as shown in Figure 1(a), which aims to learn a model solely using the unlabeled data from the target domain, based on the predictions from a black-box predictor trained on the source data. This setting can effectively mitigate data privacy issues related to data and model parameter leakage." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 449, + 556, + 674 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 449, + 556, + 674 + ], + "spans": [ + { + "bbox": [ + 313, + 449, + 556, + 674 + ], + "type": "text", + "content": "However, traditional BDA [28, 57, 60] always assumes that the source and target domains share identical category sets, which frequently fails to apply in practice. 
In real-world scenarios, the target domain is typically unlabeled, making it difficult to satisfy this assumption due to potential category shifts. Currently, there are two UDA settings that involve unknown classes in the target domain: Open-Set Domain Adaptation (OSDA) [36, 42] and Open-Partial Domain Adaptation (OPDA) [25, 41, 59]. OSDA deals with scenarios where the target domain contains private classes that are unknown to the source domain, while OPDA handles cases where both the source and target domains each have their own private classes. Black-box Domain Adaptation has been applied to OPDA recently [9]. As shown in Figure 1(b), this setting is designed to learn a robust model for the target domain that not only recognizes classes shared by two domains but also identifies \"unknown\" categories absent in the source domain despite having no information about difference of two label sets." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 677, + 556, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 677, + 556, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 677, + 556, + 715 + ], + "type": "text", + "content": "Currently, only one study has addressed the above problem. [9] applies knowledge distillation to train the target model to mimic source predictor outputs and uses a man" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "spans": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "text", + "content": "CVF" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "spans": [ + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "type": "text", + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 703, + 144, + 713 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 703, + 144, + 713 + ], + "spans": [ + { + "bbox": [ + 67, + 703, + 144, + 713 + ], + "type": "text", + "content": "*Corresponding author" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "text", + "content": "30588" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 59, + 70, + 318, + 242 + ], + "blocks": [ + { + "bbox": [ + 59, + 70, + 318, + 242 + ], + "lines": [ + { + "bbox": [ + 59, + 70, + 318, + 242 + ], + "spans": [ + { + "bbox": [ + 59, + 70, + 318, + 242 + ], + "type": "image", + "image_path": "c7e73379d0d7153b199e6d7b6e853ccf121aa10dbaaf0d288d16850c7ba1e900.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 143, + 246, + 264, + 256 + ], + "lines": [ + { + "bbox": [ + 143, + 246, + 264, + 256 + ], + "spans": [ + { + "bbox": [ + 143, + 246, + 264, + 256 + ], + "type": "text", + "content": "(a) Black-box Domain Adaptation" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 324, + 70, + 553, + 242 + ], + "blocks": [ + { + "bbox": [ + 324, + 70, + 553, + 242 + ], + "lines": [ + { + "bbox": [ + 324, + 70, + 553, + 242 + ], + "spans": [ + { + "bbox": [ + 324, + 70, + 553, + 242 + ], + "type": "image", + "image_path": "679d31d87a5b79f77bb55edf66b522db859480439b2b9c951e3e153089bc6abb.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 354, + 246, + 520, + 256 + ], + "lines": [ + { + "bbox": [ + 354, + 246, + 520, + 256 + ], + "spans": [ + { + "bbox": [ + 354, + 246, + 520, + 256 + ], + "type": "text", + "content": "(b) Open-patial Black-box Domain Adaptation" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 54, + 265, + 555, + 300 + ], + "lines": [ + { + "bbox": [ + 54, + 265, + 555, + 300 + ], + "spans": [ + { + "bbox": [ + 54, + 265, + 555, + 300 + ], + "type": "text", + "content": "Figure 1. Black-box domain adaptation and Open-partial black-box domain adaptation settings with respect to label sets of source and target domains (red labels indicate common labels of two domains). Compared to BDA, ADU is able to deal with BDA with unknown classes in the target by adaptively detecting unknown categories." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "bbox": [ + 54, + 319, + 296, + 464 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 319, + 296, + 464 + ], + "spans": [ + { + "bbox": [ + 54, + 319, + 296, + 464 + ], + "type": "text", + "content": "ually preset threshold to identify \"unknown\" categories. Though inspiring, it still has the following limitations. (i) Due to domain and category shifts between the source and target domains, predictions from the source model are inevitably noisy. Directly utilizing these noisy pseudo-labels will accumulate model prediction errors, making the adaptation process unreliable. 
(ii) Employing a preset threshold fails to accommodate the variability and complexity of category shifts in different target domains, which is inadequate for accurately detecting \"unknown\" classes across diverse domains, often resulting in misclassification and reduced adaptability." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 56, + 475, + 296, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 475, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 56, + 475, + 296, + 713 + ], + "type": "text", + "content": "To address the issues mentioned above, we propose a simple yet effective framework called ADU, specifically designed for Open-Set BDA (OSBDA) and Open-Partial BDA (OPBDA). ADU incorporates two core modules: Selective Amplification Knowledge Distillation (SAKD) and Entropy-Driven Label Differentiation (EDLD). For the first challenge, SAKD enhances traditional knowledge distillation techniques, specifically tailoring KD to BDA with unknown classes in the target domain by amplifying learning from high-quality pseudo-labels produced by source API. This refinement ensures that the target model emphasizes learning from high-quality pseudo-labels, effectively mitigating the impact of noisy data. For the second challenge, EDLD enhances the framework's ability to handle diverse domain conditions. Initially, EDLD categorizes pseudo-labels based on their quality and then applies tailored training strategies to widen the distance between \"unknown\" classes and the others, while minimizing the impact of noisy pseudo-labels. This adaptive differentiation of labels heightens the effectiveness of employing the av" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 313, + 319, + 553, + 403 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 319, + 553, + 403 + ], + "spans": [ + { + "bbox": [ + 313, + 319, + 553, + 403 + ], + "type": "text", + "content": "erage entropy of the target model's predictions as a threshold. Consequently, this refined approach significantly improves the detection accuracy of \"unknown\" categories and adapts more adeptly to category shifts across various target domains. Additionally, we iteratively refine the pseudolabels generated by the source API, which can significantly enhance their quality." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 404, + 553, + 427 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 404, + 553, + 427 + ], + "spans": [ + { + "bbox": [ + 313, + 404, + 553, + 427 + ], + "type": "text", + "content": "Our main contributions in this paper could be summarized as follows:" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 429, + 554, + 619 + ], + "type": "list", + "angle": 0, + "index": 12, + "blocks": [ + { + "bbox": [ + 315, + 429, + 554, + 488 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 429, + 554, + 488 + ], + "spans": [ + { + "bbox": [ + 315, + 429, + 554, + 488 + ], + "type": "text", + "content": "1. We propose Selective Amplification Knowledge Distillation (SAKD), a refined knowledge distillation technique specifically designed for the OPBDA and OSBDA scenarios, which can effectively mitigate the impact of noisy pseudo-labels." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 488, + 554, + 559 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 488, + 554, + 559 + ], + "spans": [ + { + "bbox": [ + 313, + 488, + 554, + 559 + ], + "type": "text", + "content": "2. 
We introduce Entropy-Driven Label Differentiation (EDLD), which categorizes pseudo-labels by quality and applies customized training strategies to enhance the distinction between \"unknown\" and others, thereby improving detection accuracy and domain adaptability through adaptive entropy-based thresholding." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 560, + 553, + 619 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 560, + 553, + 619 + ], + "spans": [ + { + "bbox": [ + 313, + 560, + 553, + 619 + ], + "type": "text", + "content": "3. Extensive experiments on four public benchmarks demonstrate the superior performance of our proposed method compared with existing SOTA works, surpassing the best existing method by " + }, + { + "bbox": [ + 313, + 560, + 553, + 619 + ], + "type": "inline_equation", + "content": "3.1\\%" + }, + { + "bbox": [ + 313, + 560, + 553, + 619 + ], + "type": "text", + "content": " on VisDA in the OPBDA scenario." + } + ] + } + ], + "index": 11 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 313, + 633, + 398, + 645 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 633, + 398, + 645 + ], + "spans": [ + { + "bbox": [ + 313, + 633, + 398, + 645 + ], + "type": "text", + "content": "2. Related work" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 654, + 554, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 654, + 554, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 654, + 554, + 713 + ], + "type": "text", + "content": "Black-box domain adaptation. Unsupervised Domain Adaptation (UDA) [12] aims to adapt a model trained on a labeled source domain to an unlabeled target domain. Many early methods relied on techniques such as instance weighting [52, 55], feature transformation [18, 26, 43], and feature" + } + ] + } + ], + "index": 14 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "text", + "content": "30589" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 294, + 310 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 294, + 310 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 294, + 310 + ], + "type": "text", + "content": "space [30, 51]. Despite their effectiveness, these methods require access to source domain data, raising privacy and portability concerns [22]. To address privacy issues associated with UDA, Source-Free Domain Adaptation (SFDA) methods [19, 27, 58] have been proposed. These methods adapt models using only the source model and unlabeled target data, eliminating the need for source data during adaptation. Techniques such as entropy minimization [2] and pseudo-labeling [62] have been explored. However, SFDA methods still face potential privacy risks due to the use of generative models and other techniques that might inadvertently reveal source data characteristics. Therefore, Black-box Domain Adaptation (BDA) [28, 61] has emerged as a solution to further mitigate privacy concerns by only accessing the source model's outputs without any internal details. This approach ensures better privacy preservation compared to traditional UDA and SFDA methods. 
Recent methods such as DINE [28] and BETA [57] have made significant strides in this area. Nevertheless, they struggle with inconsistent label sets between domains." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 315, + 295, + 698 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 315, + 295, + 698 + ], + "spans": [ + { + "bbox": [ + 56, + 315, + 295, + 698 + ], + "type": "text", + "content": "Open-set and open-partial domain adaptation. Closed-set domain adaptation assumes identical label sets between source and target domains, focusing on minimizing distribution shifts using techniques like discrepancy minimization [31, 32] and adversarial training [8, 15]. However, these methods often struggle when label sets are not perfectly aligned. To address this issue, Partial Domain Adaptation (PDA) assumes that only the source domain contains private classes, with methods such as SAN [4] employing class-wise domain discriminators, and ETN [5] using progressive weighting schemes. Meanwhile, Open-Set Domain Adaptation (OSDA) handles scenarios where the target domain has private classes unknown to the source. Besides, Open-Partial Domain Adaptation (OPDA) addresses both domains having their own private classes. UAN [59] quantifies sample-level uncertainty using entropy and domain similarity, and Fu et al. [11] combines entropy, confidence, and consistency for better uncertainty measurement. To address the challenges faced by black-box domain adaptation, [9] combines OPDA with BDA to address both category shift and privacy concerns. It applies knowledge distillation to train the target model to emulate source predictor outputs, using a preset threshold to identify \"unknown\" categories. Though inspiring, it still faces significant limitations regarding pseudo-label quality and the detection of \"unknown\" categories. To address these issues, we propose the ADU framework, applying it to Open-Set BDA (OSBDA) and Open-Partial BDA (OPBDA) to mitigate the impact of noisy pseudo-labels and enhance adaptability to category shifts across varied target domains. This approach provides a robust solution to the limitations of existing methods." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 701, + 294, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 701, + 294, + 713 + ], + "spans": [ + { + "bbox": [ + 67, + 701, + 294, + 713 + ], + "type": "text", + "content": "Learning with noisy labels. Deep learning models often" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 313, + 72, + 553, + 287 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 72, + 553, + 287 + ], + "spans": [ + { + "bbox": [ + 313, + 72, + 553, + 287 + ], + "type": "text", + "content": "overfit on noisy labels, leading to poor generalization [60]. To address this, various approaches have been proposed, including noise-robust losses [23, 44], noise-transition matrix estimation [14], clean sample selection [53], and loss reweighting [34]. However, these methods often require noise-free validation sets or make assumptions about the noise distribution, which are impractical in BDA settings. These methods [29, 34] differ by not assuming any specific noise distribution and leveraging noisy scores from source training classes. Recently, NEL [1] introduced a novel approach by integrating a Negative Learning loss with a pseudo-label refinement framework that leverages ensembling techniques. 
Negative Learning [23] is an indirect learning method that employs complementary labels to address noise issues effectively. In our work, we use Negative Learning to refine high-quality pseudo-labels without ensembling, reducing computational cost and making our approach more flexible and robust for OSBDA and OPBDA." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 314, + 298, + 397, + 312 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 298, + 397, + 312 + ], + "spans": [ + { + "bbox": [ + 314, + 298, + 397, + 312 + ], + "type": "text", + "content": "3. Methodology" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 313, + 319, + 555, + 502 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 319, + 555, + 502 + ], + "spans": [ + { + "bbox": [ + 313, + 319, + 555, + 502 + ], + "type": "text", + "content": "In this paper, we are provided with a target domain " + }, + { + "bbox": [ + 313, + 319, + 555, + 502 + ], + "type": "inline_equation", + "content": "\\mathcal{D}_t = \\left\\{x_t^i\\right\\}_{i=1}^{N_t}" + }, + { + "bbox": [ + 313, + 319, + 555, + 502 + ], + "type": "text", + "content": " with " + }, + { + "bbox": [ + 313, + 319, + 555, + 502 + ], + "type": "inline_equation", + "content": "N_t" + }, + { + "bbox": [ + 313, + 319, + 555, + 502 + ], + "type": "text", + "content": " unlabeled samples where " + }, + { + "bbox": [ + 313, + 319, + 555, + 502 + ], + "type": "inline_equation", + "content": "x_t^i \\in \\mathcal{X}_t" + }, + { + "bbox": [ + 313, + 319, + 555, + 502 + ], + "type": "text", + "content": ", and a black-box predictor " + }, + { + "bbox": [ + 313, + 319, + 555, + 502 + ], + "type": "inline_equation", + "content": "f_s" + }, + { + "bbox": [ + 313, + 319, + 555, + 502 + ], + "type": "text", + "content": " trained by a source domain " + }, + { + "bbox": [ + 313, + 319, + 555, + 502 + ], + "type": "inline_equation", + "content": "\\mathcal{D}_s = \\left\\{\\left(x_s^i, y_s^i\\right)\\right\\}_{i=1}^{N_s}" + }, + { + "bbox": [ + 313, + 319, + 555, + 502 + ], + "type": "text", + "content": " with " + }, + { + "bbox": [ + 313, + 319, + 555, + 502 + ], + "type": "inline_equation", + "content": "N_s" + }, + { + "bbox": [ + 313, + 319, + 555, + 502 + ], + "type": "text", + "content": " labeled samples where " + }, + { + "bbox": [ + 313, + 319, + 555, + 502 + ], + "type": "inline_equation", + "content": "x_s^i \\in \\mathcal{X}_s" + }, + { + "bbox": [ + 313, + 319, + 555, + 502 + ], + "type": "text", + "content": ". We use " + }, + { + "bbox": [ + 313, + 319, + 555, + 502 + ], + "type": "inline_equation", + "content": "L_s" + }, + { + "bbox": [ + 313, + 319, + 555, + 502 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 319, + 555, + 502 + ], + "type": "inline_equation", + "content": "L_t" + }, + { + "bbox": [ + 313, + 319, + 555, + 502 + ], + "type": "text", + "content": " to denote the label spaces of the source domain and target domain respectively. 
In general, model " + }, + { + "bbox": [ + 313, + 319, + 555, + 502 + ], + "type": "inline_equation", + "content": "f" + }, + { + "bbox": [ + 313, + 319, + 555, + 502 + ], + "type": "text", + "content": " consists of a feature extractor " + }, + { + "bbox": [ + 313, + 319, + 555, + 502 + ], + "type": "inline_equation", + "content": "G" + }, + { + "bbox": [ + 313, + 319, + 555, + 502 + ], + "type": "text", + "content": " and a fully connected layer-based classifier " + }, + { + "bbox": [ + 313, + 319, + 555, + 502 + ], + "type": "inline_equation", + "content": "C" + }, + { + "bbox": [ + 313, + 319, + 555, + 502 + ], + "type": "text", + "content": ". We have no access to the source domain data " + }, + { + "bbox": [ + 313, + 319, + 555, + 502 + ], + "type": "inline_equation", + "content": "\\mathcal{D}_s" + }, + { + "bbox": [ + 313, + 319, + 555, + 502 + ], + "type": "text", + "content": " and the parameters of the source model " + }, + { + "bbox": [ + 313, + 319, + 555, + 502 + ], + "type": "inline_equation", + "content": "f_s" + }, + { + "bbox": [ + 313, + 319, + 555, + 502 + ], + "type": "text", + "content": ". Only a black-box predictor trained on the source domain, i.e., an API, is available. The objective is to leverage the predictions of the API of the source domain to learn a mapping model " + }, + { + "bbox": [ + 313, + 319, + 555, + 502 + ], + "type": "inline_equation", + "content": "f_t" + }, + { + "bbox": [ + 313, + 319, + 555, + 502 + ], + "type": "text", + "content": " which can label the target samples with either one of the " + }, + { + "bbox": [ + 313, + 319, + 555, + 502 + ], + "type": "inline_equation", + "content": "L_s" + }, + { + "bbox": [ + 313, + 319, + 555, + 502 + ], + "type": "text", + "content": " labels or the \"unknown\" label. The overall workflow is shown in Fig. 2." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 313, + 509, + 547, + 521 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 509, + 547, + 521 + ], + "spans": [ + { + "bbox": [ + 313, + 509, + 547, + 521 + ], + "type": "text", + "content": "3.1. Selective amplification knowledge distillation" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 313, + 526, + 554, + 658 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 526, + 554, + 658 + ], + "spans": [ + { + "bbox": [ + 313, + 526, + 554, + 658 + ], + "type": "text", + "content": "Knowledge distillation (KD) [17] has been widely applied to address the black-box domain adaptation problem [9, 28, 57], as it enables the transfer of knowledge from one model (teacher) to another (student) by guiding the target model (student) to emulate the predictions of the source model (teacher). This approach is particularly suitable for BDA scenarios, where only the predictions of the source model are accessible. To better leverage the information available from the source domain's API, we use a knowledge distillation loss with both the source model's probabilities and hard pseudo-labels. 
This can be formulated as:" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 344, + 679, + 555, + 693 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 344, + 679, + 555, + 693 + ], + "spans": [ + { + "bbox": [ + 344, + 679, + 555, + 693 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {K D} = \\mathbb {E} _ {x _ {t} \\sim \\mathcal {X} _ {t}} \\left[ C E \\left(\\tilde {\\boldsymbol {y}} _ {t}, p _ {t}\\right) + C E \\left(p _ {s}, p _ {t}\\right) \\right], \\tag {1}", + "image_path": "e1178b85003e1a147315115214bee20f77e63d98310eda16bb8a6e921259606b.jpg" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 326, + 701, + 555, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 326, + 701, + 555, + 714 + ], + "spans": [ + { + "bbox": [ + 326, + 701, + 555, + 714 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 326, + 701, + 555, + 714 + ], + "type": "inline_equation", + "content": "CE(\\cdot ,\\cdot)" + }, + { + "bbox": [ + 326, + 701, + 555, + 714 + ], + "type": "text", + "content": " denotes the cross entropy function, and" + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "text", + "content": "30590" + } + ] + } + ], + "index": 10 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 63, + 70, + 547, + 281 + ], + "blocks": [ + { + "bbox": [ + 63, + 70, + 547, + 281 + ], + "lines": [ + { + "bbox": [ + 63, + 70, + 547, + 281 + ], + "spans": [ + { + "bbox": [ + 63, + 70, + 547, + 281 + ], + "type": "image", + "image_path": "e78f2281412e7b1985b1f6c1d82c971484177a84c607abf68d737a3067bb3a8d.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 54, + 292, + 555, + 316 + ], + "lines": [ + { + "bbox": [ + 54, + 292, + 555, + 316 + ], + "spans": [ + { + "bbox": [ + 54, + 292, + 555, + 316 + ], + "type": "text", + "content": "Figure 2. An overview of the proposed ADU framework. We utilize the black-box source predictor solely as an API service, obtaining only the source predictions from it. \"PL\" in the figure means pseudo-labels." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 54, + 335, + 296, + 466 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 335, + 296, + 466 + ], + "spans": [ + { + "bbox": [ + 54, + 335, + 296, + 466 + ], + "type": "inline_equation", + "content": "\\tilde{\\pmb{y}}_t" + }, + { + "bbox": [ + 54, + 335, + 296, + 466 + ], + "type": "text", + "content": " is a one-hot pseudo-label derived from " + }, + { + "bbox": [ + 54, + 335, + 296, + 466 + ], + "type": "inline_equation", + "content": "f_{s}(x_{t})" + }, + { + "bbox": [ + 54, + 335, + 296, + 466 + ], + "type": "text", + "content": ". 
In addition, we use " + }, + { + "bbox": [ + 54, + 335, + 296, + 466 + ], + "type": "inline_equation", + "content": "p_s" + }, + { + "bbox": [ + 54, + 335, + 296, + 466 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 54, + 335, + 296, + 466 + ], + "type": "inline_equation", + "content": "p_t" + }, + { + "bbox": [ + 54, + 335, + 296, + 466 + ], + "type": "text", + "content": " to replace " + }, + { + "bbox": [ + 54, + 335, + 296, + 466 + ], + "type": "inline_equation", + "content": "f_{s}(x_{t})" + }, + { + "bbox": [ + 54, + 335, + 296, + 466 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 54, + 335, + 296, + 466 + ], + "type": "inline_equation", + "content": "f_{t}(x_{t})" + }, + { + "bbox": [ + 54, + 335, + 296, + 466 + ], + "type": "text", + "content": " for simplicity. However, due to domain and category shifts, the predictions from the source model are inevitably noisy. Consequently, Eq. (1) processes information from these predictions equally, which can adversely affect the performance of the target model. In order to solve the issue, we propose Selective Amplification Knowledge Distillation (SAKD), a method that enhances knowledge distillation by leveraging the confidence of pseudo-labels produced by the source model." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 467, + 296, + 491 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 467, + 296, + 491 + ], + "spans": [ + { + "bbox": [ + 55, + 467, + 296, + 491 + ], + "type": "text", + "content": "Firstly, we simplify Eq. (1) to derive the following formulation:" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 66, + 510, + 295, + 587 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 510, + 295, + 587 + ], + "spans": [ + { + "bbox": [ + 66, + 510, + 295, + 587 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} \\mathcal {L} _ {K D} = - \\mathbb {E} _ {x _ {t} \\sim \\chi_ {t}} (\\log p _ {t} ^ {\\hat {c}} + \\sum_ {c = 1} ^ {| L _ {s} |} p _ {s} ^ {c} \\log p _ {t} ^ {c}) \\\\ = - \\mathbb {E} _ {x _ {t} \\sim \\chi_ {t}} [ (1 + p _ {s} ^ {\\hat {c}}) \\log p _ {t} ^ {\\hat {c}} + \\sum_ {\\substack {c = 1 \\\\ c \\neq \\hat {c}}} ^ {| L _ {s} |} p _ {s} ^ {c} \\log p _ {t} ^ {c} ], \\end{array} \\tag{2}", + "image_path": "178e557ba41a73a7752ca588ba59475ffa131c02c4674f3717121039dc5796e7.jpg" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 596, + 296, + 656 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 596, + 296, + 656 + ], + "spans": [ + { + "bbox": [ + 55, + 596, + 296, + 656 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 55, + 596, + 296, + 656 + ], + "type": "inline_equation", + "content": "\\hat{c}" + }, + { + "bbox": [ + 55, + 596, + 296, + 656 + ], + "type": "text", + "content": " represents the label predicted by the source model, defined as " + }, + { + "bbox": [ + 55, + 596, + 296, + 656 + ], + "type": "inline_equation", + "content": "\\hat{c} = \\arg \\max_c p_s^c" + }, + { + "bbox": [ + 55, + 596, + 296, + 656 + ], + "type": "text", + "content": ". 
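To make the distillation objective in Eqs. (1)-(2) concrete, here is a minimal PyTorch-style sketch; the function name `kd_loss`, the batch layout, and the assumption that the API returns softmax probabilities are illustrative choices of ours, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def kd_loss(p_s: torch.Tensor, logits_t: torch.Tensor) -> torch.Tensor:
    """Eq. (1): CE(y_tilde, p_t) + CE(p_s, p_t), averaged over a batch.

    p_s:      (B, |L_s|) soft predictions returned by the source API.
    logits_t: (B, |L_s|) raw outputs of the target model f_t.
    """
    log_p_t = F.log_softmax(logits_t, dim=1)
    # Hard term: cross entropy against the one-hot pseudo-label
    # y_tilde derived from the source prediction, i.e. argmax_c p_s^c.
    hard = F.nll_loss(log_p_t, p_s.argmax(dim=1))
    # Soft term: cross entropy between the source probabilities and p_t,
    # i.e. -sum_c p_s^c log p_t^c per sample.
    soft = -(p_s * log_p_t).sum(dim=1).mean()
    return hard + soft
```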
Subsequently, we formulate the SAKD loss by incorporating a modulating parameter " + }, + { + "bbox": [ + 55, + 596, + 296, + 656 + ], + "type": "inline_equation", + "content": "\\theta \\geq 1" + }, + { + "bbox": [ + 55, + 596, + 296, + 656 + ], + "type": "text", + "content": " into the term " + }, + { + "bbox": [ + 55, + 596, + 296, + 656 + ], + "type": "inline_equation", + "content": "(1 + p_s^{\\hat{c}})" + }, + { + "bbox": [ + 55, + 596, + 296, + 656 + ], + "type": "text", + "content": ", which only pertains to " + }, + { + "bbox": [ + 55, + 596, + 296, + 656 + ], + "type": "inline_equation", + "content": "\\hat{c}" + }, + { + "bbox": [ + 55, + 596, + 296, + 656 + ], + "type": "text", + "content": ":" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 60, + 674, + 296, + 715 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 60, + 674, + 296, + 715 + ], + "spans": [ + { + "bbox": [ + 60, + 674, + 296, + 715 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {S A K D} = - \\mathbb {E} _ {x _ {t} \\sim \\chi_ {t}} \\left[ \\left(1 + p _ {s} ^ {\\hat {c}}\\right) ^ {\\theta} \\log p _ {t} ^ {\\hat {c}} + \\sum_ {\\substack {c = 1 \\\\ c \\neq \\hat {c}}} ^ {| L _ {s} |} p _ {s} ^ {c} \\log p _ {t} ^ {c} \\right], \\tag{3}", + "image_path": "35ba1e1adc53f8afc84a9322efdc8c188f34ae05d4e4238816b3261fe6ffe04f.jpg" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 313, + 335, + 555, + 370 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 335, + 555, + 370 + ], + "spans": [ + { + "bbox": [ + 313, + 335, + 555, + 370 + ], + "type": "text", + "content": "Discussion of SAKD loss: For simplicity, we consider the case with a single sample, where the SAKD loss simplifies to:" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 329, + 384, + 555, + 421 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 329, + 384, + 555, + 421 + ], + "spans": [ + { + "bbox": [ + 329, + 384, + 555, + 421 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {\\mathrm {S A K D}} = - \\left[ \\left(1 + p _ {s} ^ {\\hat {c}}\\right) ^ {\\theta} \\log p _ {t} ^ {\\hat {c}} + \\sum_ {c = 1, c \\neq \\hat {c}} ^ {| L _ {s} |} p _ {s} ^ {c} \\log p _ {t} ^ {c} \\right] \\tag {4}", + "image_path": "25826f784bf54ae948cbf64d82741abb6b7e41e66505643484e555fb4795b928.jpg" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 438, + 554, + 466 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 438, + 554, + 466 + ], + "spans": [ + { + "bbox": [ + 313, + 438, + 554, + 466 + ], + "type": "text", + "content": "Next, we apply the generalized binomial theorem to expand " + }, + { + "bbox": [ + 313, + 438, + 554, + 466 + ], + "type": "inline_equation", + "content": "\\left(1 + p_s^{\\hat{c}}\\right)^\\theta" + }, + { + "bbox": [ + 313, + 438, + 554, + 466 + ], + "type": "text", + "content": " as follows:" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 318, + 491, + 553, + 602 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 318, + 491, + 553, + 602 + ], + "spans": [ + { + "bbox": [ + 318, + 491, + 553, + 602 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} \\mathcal {L} _ {\\mathrm {S A K D}} = - \\left[ \\sum_ {k = 0} ^ {\\infty} \\binom {\\theta} {k} \\left(p _ {s} ^ {\\hat {c}}\\right) ^ {k} \\log p _ {t} ^ {\\hat {c}} + \\sum_ {c = 1, c \\neq \\hat {c}} ^ {| L _ {s} |} p _ {s} ^ {c} \\log p _ {t} ^ {c} \\right] \\\\ = \\mathcal 
{L} _ {\\mathrm {K D}} - \\left[ \\sum_ {k = 1} ^ {\\infty} \\binom {\\theta} {k} \\left(p _ {s} ^ {\\hat {c}}\\right) ^ {k} - p _ {s} ^ {\\hat {c}} \\right] \\log p _ {t} ^ {\\hat {c}} \\\\ \\approx \\mathcal {L} _ {\\mathrm {K D}} - \\left[ (\\theta - 1) p _ {s} ^ {\\hat {c}} + \\frac {\\theta (\\theta - 1)}{2} \\left(p _ {s} ^ {\\hat {c}}\\right) ^ {2} \\right] \\log p _ {t} ^ {\\hat {c}} \\tag {5} \\\\ \\end{array}", + "image_path": "8842c1259ee51430d7a2f46bfba7359465d6068f69a6be0ce643d3f1f716fac6.jpg" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 605, + 555, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 605, + 555, + 714 + ], + "spans": [ + { + "bbox": [ + 313, + 605, + 555, + 714 + ], + "type": "text", + "content": "In Eq. (5), the first term " + }, + { + "bbox": [ + 313, + 605, + 555, + 714 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{\\mathrm{KD}}" + }, + { + "bbox": [ + 313, + 605, + 555, + 714 + ], + "type": "text", + "content": " represents the original KD loss in Eq. (2), while the second term introduces an additional term, which is positive and solely depends on the target class " + }, + { + "bbox": [ + 313, + 605, + 555, + 714 + ], + "type": "inline_equation", + "content": "\\hat{c}" + }, + { + "bbox": [ + 313, + 605, + 555, + 714 + ], + "type": "text", + "content": ". We demonstrate that this additional term enables the SAKD loss to capture more information from pseudolabels with high confidence, meaning those with higher values of " + }, + { + "bbox": [ + 313, + 605, + 555, + 714 + ], + "type": "inline_equation", + "content": "p_s^{\\hat{c}}" + }, + { + "bbox": [ + 313, + 605, + 555, + 714 + ], + "type": "text", + "content": ", thereby reducing the impact of noisy pseudolabels. A comprehensive proof of this claim is provided in the supplemental material." + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "text", + "content": "30591" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 247, + 85 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 247, + 85 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 247, + 85 + ], + "type": "text", + "content": "3.2. Entropy-driven label differentiation" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 89, + 297, + 244 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 89, + 297, + 244 + ], + "spans": [ + { + "bbox": [ + 55, + 89, + 297, + 244 + ], + "type": "text", + "content": "As stated above, the outputs from the source model are highly likely to be inaccurate and noisy due to the domain shift [3] and category shift. Even if we propose a promising solution in Eq. (3), we still face a tough challenge to detect the \"unknown\" categories, which means we should widen the difference between \"unknown\" and others. [59] shows entropy is an effective tool to detect \"unknown\" in domain adaptation. Entropy quantifies the prediction uncertainty, and smaller entropy represents a more certain prediction. In order to effectively address the influence brought by the category shift, we implement an automatic threshold determined by average entropy. 
The prediction process can be formulated as follows:" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 253, + 296, + 285 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 253, + 296, + 285 + ], + "spans": [ + { + "bbox": [ + 104, + 253, + 296, + 285 + ], + "type": "interline_equation", + "content": "y _ {t} = \\left\\{ \\begin{array}{l l} \\arg \\max _ {c} p _ {t} ^ {c} & H \\left(p _ {t}\\right) < w \\\\ \\text {u n k n o w n} & H \\left(p _ {t}\\right) \\geq w, \\end{array} \\right. \\tag {6}", + "image_path": "727220b0c67f5801558fe351d1d0574e0850743238b71ab3ef1f077596d906af.jpg" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 289, + 220, + 301 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 289, + 220, + 301 + ], + "spans": [ + { + "bbox": [ + 67, + 289, + 220, + 301 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 67, + 289, + 220, + 301 + ], + "type": "inline_equation", + "content": "H(p_{t})" + }, + { + "bbox": [ + 67, + 289, + 220, + 301 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 67, + 289, + 220, + 301 + ], + "type": "inline_equation", + "content": "w" + }, + { + "bbox": [ + 67, + 289, + 220, + 301 + ], + "type": "text", + "content": " are computed as:" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 123, + 308, + 295, + 343 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 123, + 308, + 295, + 343 + ], + "spans": [ + { + "bbox": [ + 123, + 308, + 295, + 343 + ], + "type": "interline_equation", + "content": "H \\left(p _ {t}\\right) = - \\sum_ {c = 1} ^ {\\left| L _ {s} \\right|} p _ {t} ^ {c} \\log p _ {t} ^ {c}, \\tag {7}", + "image_path": "f9d084c2aebc2d7cb34ad9737e6f9e9574f31cbcea986feb1bedd533634b84e5.jpg" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 134, + 354, + 295, + 367 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 134, + 354, + 295, + 367 + ], + "spans": [ + { + "bbox": [ + 134, + 354, + 295, + 367 + ], + "type": "interline_equation", + "content": "w = \\mathbb {E} _ {x _ {t} \\sim \\mathcal {X} _ {t}} H (p _ {t}). \\tag {8}", + "image_path": "389bfb2301ef260b03ae65c9a08585e68fc7a64d8f9ce96a3b68f74816564d74.jpg" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 370, + 296, + 526 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 370, + 296, + 526 + ], + "spans": [ + { + "bbox": [ + 55, + 370, + 296, + 526 + ], + "type": "text", + "content": "Taking the average as a threshold eliminates the requirement of per-dataset hyper-parameter tuning and makes our selection process highly adaptive. As we employ entropy as a threshold to detect \"unknown\" categories, we still face a challenge to widen the gap between \"unknown\" and others. In order to address the issue, we propose Entropy-Driven Label Differentiation (EDLD) to make the \"unknown\" distinguishable and enhance the quality of pseudo-labels with high certainty. We use entropy to calculate the uncertainty level of pseudo-labels. Higher entropy always shows more uncertain predictions. 
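As a concrete reading of Eqs. (6)-(8), the sketch below labels each target sample by its argmax class when its prediction entropy is below the mean entropy and as "unknown" otherwise; the function name, the `-1` sentinel for "unknown", and the single-tensor batching are our illustrative assumptions.

```python
import torch

UNKNOWN = -1  # assumed sentinel index for the "unknown" label

def predict_with_unknown(p_t: torch.Tensor) -> torch.Tensor:
    """Eqs. (6)-(8): argmax class when H(p_t) < w, otherwise "unknown",
    with w set to the mean prediction entropy over the target samples.

    p_t: (N, |L_s|) softmax predictions of the target model f_t.
    """
    eps = 1e-8
    entropy = -(p_t * (p_t + eps).log()).sum(dim=1)  # Eq. (7), per sample
    w = entropy.mean()                               # Eq. (8), adaptive threshold
    labels = p_t.argmax(dim=1)
    labels[entropy >= w] = UNKNOWN                   # Eq. (6)
    return labels
```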
We define the EDLD loss by dividing pseudo-labels into high-quality (HQ) and low-quality (LQ) by their entropy, the loss is defined as follows:" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 66, + 542, + 296, + 577 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 542, + 296, + 577 + ], + "spans": [ + { + "bbox": [ + 66, + 542, + 296, + 577 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {E D L D} = \\mathbb {E} _ {x _ {t} \\sim \\mathcal {X} _ {t}} \\left[ \\left\\{ \\begin{array}{l l} \\mathcal {L} _ {H Q} (p _ {t}) & \\text {i f} H (p _ {t}) < w \\\\ \\mathcal {L} _ {L Q} (p _ {t}) & \\text {i f} H (p _ {t}) \\geq w \\end{array} \\right. \\right], \\tag {9}", + "image_path": "7676b002f6ae84336df26405c91c5990447438645c17250e57fca7e4ea9ab537.jpg" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 61, + 593, + 296, + 607 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 593, + 296, + 607 + ], + "spans": [ + { + "bbox": [ + 61, + 593, + 296, + 607 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {H Q} = H \\left(p _ {t}\\right) + \\mathcal {L} _ {N L} \\left(p _ {t}, \\bar {y} _ {t}\\right), \\quad \\mathcal {L} _ {L Q} = - H \\left(p _ {t}\\right), \\tag {10}", + "image_path": "c72df9cacad96d59606678faac4dba7f44b5a4392a652649d25db69f1f272043.jpg" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 55, + 613, + 296, + 674 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 613, + 296, + 674 + ], + "spans": [ + { + "bbox": [ + 55, + 613, + 296, + 674 + ], + "type": "text", + "content": "In the EDLD module, we not only use the entropy loss to widen the entropy gap between high-quality and low-quality pseudo-labels but also use a negative learning loss [21] to refine the high-quality pseudo-labels, the negative loss is the following:" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 97, + 682, + 296, + 716 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 97, + 682, + 296, + 716 + ], + "spans": [ + { + "bbox": [ + 97, + 682, + 296, + 716 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {N L} \\left(p _ {t}, \\bar {y} _ {t}\\right) = - \\sum_ {c = 1} ^ {| L _ {s} |} \\bar {\\mathbf {y}} _ {t} ^ {c} \\log \\left(1 - p _ {t} ^ {c}\\right), \\tag {11}", + "image_path": "276024784c1ff2711a67612e103ebc7e9f2f55bfc62bb8e01ffbf81cfa6b2d9c.jpg" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 72, + 555, + 156 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 72, + 555, + 156 + ], + "spans": [ + { + "bbox": [ + 313, + 72, + 555, + 156 + ], + "type": "text", + "content": "where is " + }, + { + "bbox": [ + 313, + 72, + 555, + 156 + ], + "type": "inline_equation", + "content": "\\bar{y}_t" + }, + { + "bbox": [ + 313, + 72, + 555, + 156 + ], + "type": "text", + "content": " a complementary label " + }, + { + "bbox": [ + 313, + 72, + 555, + 156 + ], + "type": "inline_equation", + "content": "\\bar{y}_t \\in \\{1, \\dots, |L_s|\\} \\setminus \\{y_t\\}" + }, + { + "bbox": [ + 313, + 72, + 555, + 156 + ], + "type": "text", + "content": " chosen randomly from the set of labels, and " + }, + { + "bbox": [ + 313, + 72, + 555, + 156 + ], + "type": "inline_equation", + "content": "\\bar{y}_t" + }, + { + "bbox": [ + 313, + 72, + 555, + 156 + ], + "type": "text", + "content": " is one-hot label derived from " + }, + { + "bbox": [ + 313, + 72, + 555, + 156 + ], + "type": 
"inline_equation", + "content": "\\bar{y}_t" + }, + { + "bbox": [ + 313, + 72, + 555, + 156 + ], + "type": "text", + "content": ". Eq. (11) enables the probability value of the complementary label to be optimized as zero, resulting in an increase in the probability values of other classes, which can effectively refine the high-quality pseudo-labels." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 163, + 511, + 175 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 163, + 511, + 175 + ], + "spans": [ + { + "bbox": [ + 313, + 163, + 511, + 175 + ], + "type": "text", + "content": "3.3. Adaptive refinement of pseudo labels" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 179, + 555, + 251 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 179, + 555, + 251 + ], + "spans": [ + { + "bbox": [ + 313, + 179, + 555, + 251 + ], + "type": "text", + "content": "To further mitigate the impact of noise in the pseudo-labels generated by the source model, we employ an exponential moving average (EMA) of the target predictions. This allows for a gradual and controlled update of the pseudolabels supplied by the source model at each iteration. The update process is defined as follows:" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 350, + 263, + 555, + 277 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 350, + 263, + 555, + 277 + ], + "spans": [ + { + "bbox": [ + 350, + 263, + 555, + 277 + ], + "type": "interline_equation", + "content": "p _ {s} \\leftarrow \\gamma p _ {s} + (1 - \\gamma) f _ {t} \\left(x _ {t}\\right), \\quad \\forall x _ {t} \\in \\mathcal {X} _ {t}, \\tag {12}", + "image_path": "c4fba10b7fbd4cfb75711476c778ce4c306249ed5aa5597107972fb613bb6c0c.jpg" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 313, + 280, + 555, + 339 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 280, + 555, + 339 + ], + "spans": [ + { + "bbox": [ + 313, + 280, + 555, + 339 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 280, + 555, + 339 + ], + "type": "inline_equation", + "content": "\\gamma" + }, + { + "bbox": [ + 313, + 280, + 555, + 339 + ], + "type": "text", + "content": " is a smoothing factor that determines the extent to which the pseudo-labels should adapt to the most recent predictions from the target model. A higher value of " + }, + { + "bbox": [ + 313, + 280, + 555, + 339 + ], + "type": "inline_equation", + "content": "\\gamma" + }, + { + "bbox": [ + 313, + 280, + 555, + 339 + ], + "type": "text", + "content": " places more weight on the existing pseudo-labels, while a lower value allows quicker adaptation to new information." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 313, + 340, + 556, + 436 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 340, + 556, + 436 + ], + "spans": [ + { + "bbox": [ + 313, + 340, + 556, + 436 + ], + "type": "text", + "content": "This strategy refines the pseudo-labels iteratively, balancing consistency with adaptability. By adjusting the pseudo-labels in a controlled manner, the model can better handle noise and gradually align the source model's outputs with the distribution of the target data. The EMA strategy ensures that updates are not overly reactive to fluctuations, thus enhancing the robustness of the model and improving performance in scenarios with diverse target domains." 
+ } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 313, + 441, + 436, + 454 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 441, + 436, + 454 + ], + "spans": [ + { + "bbox": [ + 313, + 441, + 436, + 454 + ], + "type": "text", + "content": "3.4. The overall objective" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 313, + 459, + 554, + 483 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 459, + 554, + 483 + ], + "spans": [ + { + "bbox": [ + 313, + 459, + 554, + 483 + ], + "type": "text", + "content": "Integrating the objectives introduced in Eqs. (3) and (9), we obtain the final loss function as follows:" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 380, + 495, + 553, + 508 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 380, + 495, + 553, + 508 + ], + "spans": [ + { + "bbox": [ + 380, + 495, + 553, + 508 + ], + "type": "interline_equation", + "content": "\\mathcal{L} = \\mathcal{L}_{SAKD} + \\lambda \\mathcal{L}_{EDLD}, \\tag{13}", + "image_path": "363efb75e03775fd35be66a9f6bca2b0c59cbffbbeb66e944a83dd64a97305c.jpg" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 313, + 511, + 555, + 546 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 511, + 555, + 546 + ], + "spans": [ + { + "bbox": [ + 313, + 511, + 555, + 546 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 511, + 555, + 546 + ], + "type": "inline_equation", + "content": "\\lambda" + }, + { + "bbox": [ + 313, + 511, + 555, + 546 + ], + "type": "text", + "content": " is a hyper-parameter empirically set to 1.0, controlling the relative importance of " + }, + { + "bbox": [ + 313, + 511, + 555, + 546 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{SAKD}" + }, + { + "bbox": [ + 313, + 511, + 555, + 546 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 511, + 555, + 546 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{EDLD}" + }, + { + "bbox": [ + 313, + 511, + 555, + 546 + ], + "type": "text", + "content": " during distillation." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 313, + 557, + 395, + 571 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 557, + 395, + 571 + ], + "spans": [ + { + "bbox": [ + 313, + 557, + 395, + 571 + ], + "type": "text", + "content": "4. Experiments" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 313, + 576, + 365, + 589 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 576, + 365, + 589 + ], + "spans": [ + { + "bbox": [ + 313, + 576, + 365, + 589 + ], + "type": "text", + "content": "4.1. Setup" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 313, + 594, + 556, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 594, + 556, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 594, + 556, + 715 + ], + "type": "text", + "content": "Datasets. To assess the effectiveness of our approach, we conduct experiments on the Office31 [40], OfficeHome [50], VisDA [38], and DomainNet [39] datasets. Office31 is a popular benchmark for UDA, consisting of three domains (Amazon, Webcam, Dslr) in 31 categories. OfficeHome is a more challenging benchmark due to its larger domain shifts; it consists of four domains (Art, Clipart, Product, Real World) in 65 categories. 
VisDA is a large-scale benchmark containing two 12-class domains: a source domain with 152k synthetic images and a target domain with" + } + ] + } + ], + "index": 23 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "text", + "content": "30592" + } + ] + } + ], + "index": 24 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 57, + 91, + 553, + 188 + ], + "blocks": [ + { + "bbox": [ + 156, + 71, + 455, + 82 + ], + "lines": [ + { + "bbox": [ + 156, + 71, + 455, + 82 + ], + "spans": [ + { + "bbox": [ + 156, + 71, + 455, + 82 + ], + "type": "text", + "content": "Table 1. H-score (\\%) comparison in OPBDA scenario on the OfficeHome dataset." + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 57, + 91, + 553, + 188 + ], + "lines": [ + { + "bbox": [ + 57, + 91, + 553, + 188 + ], + "spans": [ + { + "bbox": [ + 57, + 91, + 553, + 188 + ], + "type": "table", + "html": "
<table><tr><td>Method</td><td>Ar→Cl</td><td>Ar→Pr</td><td>Ar→Re</td><td>Cl→Ar</td><td>Cl→Pr</td><td>Cl→Re</td><td>Pr→Ar</td><td>Pr→Cl</td><td>Pr→Re</td><td>Re→Ar</td><td>Re→Cl</td><td>Re→Pr</td><td>Avg.</td></tr>
<tr><td>No Adapt.</td><td>55.8</td><td>67.3</td><td>72.8</td><td>64.2</td><td>62.3</td><td>70.5</td><td>65.7</td><td>52.1</td><td>71.7</td><td>66.1</td><td>56.7</td><td>69.2</td><td>64.5</td></tr>
<tr><td>DINE [28]</td><td>45.3</td><td>46.1</td><td>54.6</td><td>51.0</td><td>45.3</td><td>52.4</td><td>49.9</td><td>44.5</td><td>52.1</td><td>52.4</td><td>46.7</td><td>45.7</td><td>48.8</td></tr>
<tr><td>BETA [57]</td><td>45.9</td><td>47.4</td><td>54.8</td><td>49.3</td><td>45.1</td><td>50.1</td><td>49.3</td><td>45.5</td><td>53.5</td><td>51.5</td><td>45.8</td><td>48.8</td><td>48.9</td></tr>
<tr><td>SEAL [54]</td><td>40.6</td><td>46.8</td><td>47.8</td><td>44.5</td><td>42.7</td><td>45.2</td><td>47.3</td><td>40.0</td><td>47.1</td><td>45.5</td><td>46.7</td><td>46.6</td><td>45.1</td></tr>
<tr><td>UB²DA [9]</td><td>60.9</td><td>69.6</td><td>76.3</td><td>74.4</td><td>69.2</td><td>76.5</td><td>74.5</td><td>60.3</td><td>76.2</td><td>74.1</td><td>62.0</td><td>71.1</td><td>70.4</td></tr>
<tr><td>ADU</td><td>61.2</td><td>72.7</td><td>77.9</td><td>70.3</td><td>72.5</td><td>77.3</td><td>75.9</td><td>62.0</td><td>84.7</td><td>73.2</td><td>64.1</td><td>74.9</td><td>72.2</td></tr>
</table>
", + "image_path": "6829a6c811b7add73ec392459a7f7cd68a866d58d8909be1f4c6f87861a43edf.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "type": "table", + "bbox": [ + 57, + 217, + 553, + 331 + ], + "blocks": [ + { + "bbox": [ + 92, + 197, + 517, + 209 + ], + "lines": [ + { + "bbox": [ + 92, + 197, + 517, + 209 + ], + "spans": [ + { + "bbox": [ + 92, + 197, + 517, + 209 + ], + "type": "text", + "content": "Table 2. H-score (%) comparison in OPBDA scenario on the Office31, VisDA, and DomainNet datasets, respectively." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 57, + 217, + 553, + 331 + ], + "lines": [ + { + "bbox": [ + 57, + 217, + 553, + 331 + ], + "spans": [ + { + "bbox": [ + 57, + 217, + 553, + 331 + ], + "type": "table", + "html": "
<table><tr><td rowspan="2">Method</td><td colspan="7">Office31</td><td>VisDA</td><td colspan="7">DomainNet</td></tr>
<tr><td>A→D</td><td>A→W</td><td>D→A</td><td>D→W</td><td>W→A</td><td>W→D</td><td>Avg.</td><td>S→R</td><td>P→R</td><td>P→S</td><td>R→P</td><td>R→S</td><td>S→P</td><td>S→R</td><td>Avg.</td></tr>
<tr><td>No Adapt.</td><td>79.9</td><td>71.9</td><td>80.1</td><td>91.5</td><td>78.7</td><td>89.8</td><td>82.0</td><td>37.7</td><td>52.8</td><td>35.0</td><td>43.6</td><td>32.5</td><td>35.7</td><td>51.8</td><td>41.9</td></tr>
<tr><td>DINE [28]</td><td>50.3</td><td>51.4</td><td>56.6</td><td>63.0</td><td>54.0</td><td>60.1</td><td>55.9</td><td>43.5</td><td>48.4</td><td>39.5</td><td>43.4</td><td>38.1</td><td>37.6</td><td>45.6</td><td>42.1</td></tr>
<tr><td>BETA [57]</td><td>52.4</td><td>54.0</td><td>51.6</td><td>61.2</td><td>53.4</td><td>57.6</td><td>55.0</td><td>45.5</td><td>49.2</td><td>40.3</td><td>43.1</td><td>38.2</td><td>38.0</td><td>48.4</td><td>42.9</td></tr>
<tr><td>SEAL [54]</td><td>73.8</td><td>70.3</td><td>51.1</td><td>55.6</td><td>47.0</td><td>57.0</td><td>59.1</td><td>45.6</td><td>52.7</td><td>39.9</td><td>43.6</td><td>38.4</td><td>39.2</td><td>49.7</td><td>43.9</td></tr>
<tr><td>UB²DA [9]</td><td>80.9</td><td>78.2</td><td>92.6</td><td>92.6</td><td>89.4</td><td>87.9</td><td>86.9</td><td>45.2</td><td>57.1</td><td>47.2</td><td>54.8</td><td>44.0</td><td>41.4</td><td>51.5</td><td>49.3</td></tr>
<tr><td>ADU</td><td>87.5</td><td>85.2</td><td>87.0</td><td>94.4</td><td>83.8</td><td>90.5</td><td>88.1</td><td>48.7</td><td>59.8</td><td>47.8</td><td>52.5</td><td>46.6</td><td>42.7</td><td>56.4</td><td>51.0</td></tr>
</table>
", + "image_path": "7d9540ed93890a1b45b14cbe33bc0bb0619a2eae693df7962650a9e9edcc6097.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 350, + 295, + 456 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 350, + 295, + 456 + ], + "spans": [ + { + "bbox": [ + 55, + 350, + 295, + 456 + ], + "type": "text", + "content": "55k real images from Microsoft COCO. DomainNet is the largest DA dataset with about 0.6 million images. Like [11, 24], we conduct experiments on three subsets of it, i.e., Painting, Real, and Sketch. Following existing works [11, 24, 59], we separate the label set into three parts: common " + }, + { + "bbox": [ + 55, + 350, + 295, + 456 + ], + "type": "inline_equation", + "content": "(|L_s \\cap L_t|)" + }, + { + "bbox": [ + 55, + 350, + 295, + 456 + ], + "type": "text", + "content": ", source-private " + }, + { + "bbox": [ + 55, + 350, + 295, + 456 + ], + "type": "inline_equation", + "content": "(|L_s - L_t|)" + }, + { + "bbox": [ + 55, + 350, + 295, + 456 + ], + "type": "text", + "content": " and target-private " + }, + { + "bbox": [ + 55, + 350, + 295, + 456 + ], + "type": "inline_equation", + "content": "(|L_t - L_s|)" + }, + { + "bbox": [ + 55, + 350, + 295, + 456 + ], + "type": "text", + "content": ". The classes are separated according to their alphabetical order. We evaluate ADU in OPBDA using the four datasets, and in OSBDA using the first three datasets." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 457, + 296, + 529 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 457, + 296, + 529 + ], + "spans": [ + { + "bbox": [ + 55, + 457, + 296, + 529 + ], + "type": "text", + "content": "Evaluation protocols. Considering the trade-off between the accuracy of known and unknown classes is important in evaluating OSDA and OPDA methods. We evaluate methods using H-score [11]. H-score is the harmonic mean of the accuracy on common classes " + }, + { + "bbox": [ + 55, + 457, + 296, + 529 + ], + "type": "inline_equation", + "content": "(\\mathrm{Acc}_c)" + }, + { + "bbox": [ + 55, + 457, + 296, + 529 + ], + "type": "text", + "content": " and accuracy on the \"unknow\" classes " + }, + { + "bbox": [ + 55, + 457, + 296, + 529 + ], + "type": "inline_equation", + "content": "(\\mathrm{Acc}_u)" + }, + { + "bbox": [ + 55, + 457, + 296, + 529 + ], + "type": "text", + "content": " and is defined as:" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 129, + 536, + 295, + 563 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 129, + 536, + 295, + 563 + ], + "spans": [ + { + "bbox": [ + 129, + 536, + 295, + 563 + ], + "type": "interline_equation", + "content": "h = 2 \\cdot \\frac {A c c _ {c} \\cdot A c c _ {u}}{A c c _ {c} + A c c _ {u}} \\tag {14}", + "image_path": "fc280410d28586e4e5f8d3b0bb36cf612b8fbdd526d477a0f435fd8baed134dd.jpg" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 570, + 295, + 616 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 570, + 295, + 616 + ], + "spans": [ + { + "bbox": [ + 55, + 570, + 295, + 616 + ], + "type": "text", + "content": "So, this metric is designed to provide a more comprehensive evaluation by ensuring that improvements in one area do not come at the expense of the other. It can measure both accuracies well." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 618, + 296, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 618, + 296, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 618, + 296, + 714 + ], + "type": "text", + "content": "Implementation details. All experiments are implemented in PyTorch [37]. For fair comparisons to previous methods, we use the same ResNet50 [16] backbone, pre-trained on ImageNet [10], as the feature extractor in all experiments. For the source model, we fine-tune the model on source examples with the cross-entropy loss and then treat it as a black box, requiring only the input-output interface of this model in our experiments. We" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 350, + 555, + 421 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 350, + 555, + 421 + ], + "spans": [ + { + "bbox": [ + 313, + 350, + 555, + 421 + ], + "type": "text", + "content": "use the SGD optimizer with a learning rate of 0.01, a momentum of 0.9, a weight decay of 5e-4, and a batch size of 128. Concerning the parameters in ADU, we set " + }, + { + "bbox": [ + 313, + 350, + 555, + 421 + ], + "type": "inline_equation", + "content": "\\theta = 1.1" + }, + { + "bbox": [ + 313, + 350, + 555, + 421 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 313, + 350, + 555, + 421 + ], + "type": "inline_equation", + "content": "\\gamma = 0.6" + }, + { + "bbox": [ + 313, + 350, + 555, + 421 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 313, + 350, + 555, + 421 + ], + "type": "inline_equation", + "content": "\\lambda = 1.0" + }, + { + "bbox": [ + 313, + 350, + 555, + 421 + ], + "type": "text", + "content": " for all datasets and tasks. Additionally, following [23], we set the ratio of " + }, + { + "bbox": [ + 313, + 350, + 555, + 421 + ], + "type": "inline_equation", + "content": "L_{NL}(p_t,\\bar{y}_t)" + }, + { + "bbox": [ + 313, + 350, + 555, + 421 + ], + "type": "text", + "content": " to " + }, + { + "bbox": [ + 313, + 350, + 555, + 421 + ], + "type": "inline_equation", + "content": "H(p_{t})" + }, + { + "bbox": [ + 313, + 350, + 555, + 421 + ], + "type": "text", + "content": " to 0.01:1 in Eq. (10)." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 422, + 556, + 531 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 422, + 556, + 531 + ], + "spans": [ + { + "bbox": [ + 313, + 422, + 556, + 531 + ], + "type": "text", + "content": "Baselines. We compare the proposed ADU with (i) BDA: DINE [28], BETA [57], SEAL [54]; and (ii) OPBDA: " + }, + { + "bbox": [ + 313, + 422, + 556, + 531 + ], + "type": "inline_equation", + "content": "\\mathbf{UB}^2\\mathbf{DA}" + }, + { + "bbox": [ + 313, + 422, + 556, + 531 + ], + "type": "text", + "content": " [9]. These methods represent the state-of-the-art in their respective settings. Notably, since BDA methods lack the capability to identify \"unknown\" categories, we equip them with the same average-entropy threshold used in our setting. The term \"No Adapt.\" refers to the baseline scenario where the source model is used directly for target label prediction, without any form of adaptation." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 540, + 372, + 551 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 540, + 372, + 551 + ], + "spans": [ + { + "bbox": [ + 313, + 540, + 372, + 551 + ], + "type": "text", + "content": "4.2. 
Results" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 558, + 556, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 558, + 556, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 558, + 556, + 715 + ], + "type": "text", + "content": "Results for OPBDA. We first perform experiments under the most challenging scenario, namely OPBDA, in which both the source and target domains contain private categories. The results for the OfficeHome dataset are presented in Table 1, while those for the Office31, VisDA, and DomainNet datasets are shown in Table 2. As illustrated in these tables, our proposed ADU method achieves a new state-of-the-art, surpassing all existing methods across the four datasets. Notably, ADU consistently improves the H-score compared to the \"No Adapt.\" baseline in each experimental setting, with a significant increase of " + }, + { + "bbox": [ + 313, + 558, + 556, + 715 + ], + "type": "inline_equation", + "content": "11.0\\%" + }, + { + "bbox": [ + 313, + 558, + 556, + 715 + ], + "type": "text", + "content": " on the VisDA dataset. This improvement demonstrates that our method effectively mitigates the influence of noise from" + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "text", + "content": "30593" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 57, + 91, + 553, + 186 + ], + "blocks": [ + { + "bbox": [ + 156, + 71, + 455, + 82 + ], + "lines": [ + { + "bbox": [ + 156, + 71, + 455, + 82 + ], + "spans": [ + { + "bbox": [ + 156, + 71, + 455, + 82 + ], + "type": "text", + "content": "Table 3. H-score (\\%) comparison in OSBDA scenario on the OfficeHome dataset." + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 57, + 91, + 553, + 186 + ], + "lines": [ + { + "bbox": [ + 57, + 91, + 553, + 186 + ], + "spans": [ + { + "bbox": [ + 57, + 91, + 553, + 186 + ], + "type": "table", + "html": "
<table><tr><td>Method</td><td>Ar→Cl</td><td>Ar→Pr</td><td>Ar→Re</td><td>Cl→Ar</td><td>Cl→Pr</td><td>Cl→Re</td><td>Pr→Ar</td><td>Pr→Cl</td><td>Pr→Re</td><td>Re→Ar</td><td>Re→Cl</td><td>Re→Pr</td><td>Avg.</td></tr>
<tr><td>No Adapt.</td><td>59.6</td><td>68.1</td><td>75.7</td><td>67.1</td><td>66.7</td><td>70.4</td><td>63.8</td><td>54.9</td><td>55.7</td><td>71.4</td><td>58.6</td><td>70.5</td><td>65.2</td></tr>
<tr><td>DINE [28]</td><td>47.0</td><td>45.8</td><td>52.2</td><td>49.7</td><td>47.1</td><td>50.0</td><td>48.2</td><td>43.8</td><td>50.4</td><td>52.6</td><td>46.0</td><td>46.8</td><td>48.3</td></tr>
<tr><td>BETA [57]</td><td>46.6</td><td>48.3</td><td>54.9</td><td>47.7</td><td>48.4</td><td>50.5</td><td>49.1</td><td>42.9</td><td>51.8</td><td>50.2</td><td>45.2</td><td>48.9</td><td>48.7</td></tr>
<tr><td>SEAL [54]</td><td>43.3</td><td>46.5</td><td>47.1</td><td>43.7</td><td>45.9</td><td>45.4</td><td>45.3</td><td>40.8</td><td>45.6</td><td>43.5</td><td>41.3</td><td>46.5</td><td>44.6</td></tr>
<tr><td>UB²DA [9]</td><td>65.5</td><td>70.4</td><td>75.5</td><td>67.8</td><td>69.3</td><td>74.4</td><td>71.2</td><td>56.7</td><td>75.0</td><td>70.7</td><td>63.3</td><td>69.8</td><td>69.1</td></tr>
<tr><td>ADU</td><td>66.0</td><td>70.5</td><td>77.4</td><td>72.2</td><td>70.1</td><td>75.0</td><td>69.0</td><td>63.6</td><td>76.1</td><td>73.4</td><td>64.1</td><td>75.2</td><td>71.1</td></tr>
</table>
", + "image_path": "5aeded3920a09d3f3f818a880df0f9b5d2220ff2224638f10f7fae9322a1ff88.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "type": "table", + "bbox": [ + 61, + 236, + 293, + 324 + ], + "blocks": [ + { + "bbox": [ + 55, + 205, + 295, + 228 + ], + "lines": [ + { + "bbox": [ + 55, + 205, + 295, + 228 + ], + "spans": [ + { + "bbox": [ + 55, + 205, + 295, + 228 + ], + "type": "text", + "content": "Table 4. H-score " + }, + { + "bbox": [ + 55, + 205, + 295, + 228 + ], + "type": "inline_equation", + "content": "(\\%)" + }, + { + "bbox": [ + 55, + 205, + 295, + 228 + ], + "type": "text", + "content": " comparison in OSBDA scenario on the Office31 and VisDA datasets, respectively." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 61, + 236, + 293, + 324 + ], + "lines": [ + { + "bbox": [ + 61, + 236, + 293, + 324 + ], + "spans": [ + { + "bbox": [ + 61, + 236, + 293, + 324 + ], + "type": "table", + "html": "
<table><tr><td rowspan="2">Method</td><td colspan="7">Office31</td><td>VisDA</td></tr>
<tr><td>A→D</td><td>A→W</td><td>D→A</td><td>D→W</td><td>W→A</td><td>W→D</td><td>Avg.</td><td>S→R</td></tr>
<tr><td>No Adapt.</td><td>81.4</td><td>80.8</td><td>85.3</td><td>88.1</td><td>78.0</td><td>88.0</td><td>83.6</td><td>44.6</td></tr>
<tr><td>DINE [28]</td><td>60.5</td><td>54.6</td><td>56.8</td><td>69.0</td><td>56.3</td><td>61.5</td><td>59.8</td><td>43.1</td></tr>
<tr><td>BETA [57]</td><td>48.3</td><td>53.0</td><td>54.3</td><td>60.6</td><td>54.4</td><td>57.3</td><td>54.7</td><td>48.3</td></tr>
<tr><td>SEAL [54]</td><td>50.4</td><td>43.7</td><td>53.4</td><td>44.8</td><td>54.0</td><td>50.8</td><td>49.5</td><td>40.6</td></tr>
<tr><td>UB²DA [9]</td><td>85.7</td><td>87.4</td><td>91.0</td><td>89.2</td><td>85.1</td><td>84.1</td><td>87.1</td><td>48.1</td></tr>
<tr><td>ADU</td><td>86.9</td><td>84.9</td><td>89.7</td><td>91.3</td><td>86.3</td><td>89.2</td><td>88.1</td><td>50.8</td></tr>
</table>
", + "image_path": "22ca625704e23f7c721369acd8cf3977af66ba4c18b6589e3807e6bfd1ffc5df.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "bbox": [ + 54, + 350, + 295, + 555 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 350, + 295, + 555 + ], + "spans": [ + { + "bbox": [ + 54, + 350, + 295, + 555 + ], + "type": "text", + "content": "source model predictions on the target model and accurately identifies unknown categories within the target data. An examination of Tables 1 and 2 reveals that methods such as DINE [28], BETA [57], and SEAL [54] perform poorly compared to our approach and " + }, + { + "bbox": [ + 54, + 350, + 295, + 555 + ], + "type": "inline_equation", + "content": "\\mathrm{UB}^2\\mathrm{DA}" + }, + { + "bbox": [ + 54, + 350, + 295, + 555 + ], + "type": "text", + "content": " [9], with performance even falling below the \"No Adapt.\" baseline on the Office31 and OfficeHome datasets. This underperformance is likely due to the lack of design tailored specifically for the OPBDA scenario in these methods, which hinders their ability to effectively differentiate unknown categories from other classes. These results underscore the importance of our ADU approach, which is specifically designed for OPBDA. When compared to " + }, + { + "bbox": [ + 54, + 350, + 295, + 555 + ], + "type": "inline_equation", + "content": "\\mathrm{UB}^2\\mathrm{DA}" + }, + { + "bbox": [ + 54, + 350, + 295, + 555 + ], + "type": "text", + "content": " [9], ADU achieves higher H-scores on the Office31, OfficeHome, VisDA, and DomainNet datasets, with improvements of " + }, + { + "bbox": [ + 54, + 350, + 295, + 555 + ], + "type": "inline_equation", + "content": "1.2\\%" + }, + { + "bbox": [ + 54, + 350, + 295, + 555 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 54, + 350, + 295, + 555 + ], + "type": "inline_equation", + "content": "1.8\\%" + }, + { + "bbox": [ + 54, + 350, + 295, + 555 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 54, + 350, + 295, + 555 + ], + "type": "inline_equation", + "content": "3.5\\%" + }, + { + "bbox": [ + 54, + 350, + 295, + 555 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 54, + 350, + 295, + 555 + ], + "type": "inline_equation", + "content": "1.7\\%" + }, + { + "bbox": [ + 54, + 350, + 295, + 555 + ], + "type": "text", + "content": ", respectively. These gains further highlight the effectiveness of our proposed approach." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 558, + 295, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 558, + 295, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 558, + 295, + 713 + ], + "type": "text", + "content": "Results for OSBDA. We subsequently conduct experiments under OSBDA scenarios, where only the target domain includes categories absent from the source domain. The results for the OfficeHome dataset are provided in Table 3, while those for the Office31 and VisDA datasets are presented in Table 4. As shown in these tables, our proposed ADU method achieves performance that surpasses the current state-of-the-art. Specifically, ADU consistently outperforms the \"No Adapt.\" baseline in terms of H-score across all experimental settings. 
Notably, for the " + }, + { + "bbox": [ + 55, + 558, + 295, + 713 + ], + "type": "inline_equation", + "content": "\\mathrm{Pr} \\rightarrow \\mathrm{Re}" + }, + { + "bbox": [ + 55, + 558, + 295, + 713 + ], + "type": "text", + "content": " scenario, it achieves an improvement of " + }, + { + "bbox": [ + 55, + 558, + 295, + 713 + ], + "type": "inline_equation", + "content": "20.4\\%" + }, + { + "bbox": [ + 55, + 558, + 295, + 713 + ], + "type": "text", + "content": ". This substantial enhancement demonstrates that our method effectively reduces the influence of noise from source model pre" + } + ] + } + ], + "index": 5 + }, + { + "type": "table", + "bbox": [ + 317, + 258, + 555, + 353 + ], + "blocks": [ + { + "bbox": [ + 313, + 205, + 555, + 250 + ], + "lines": [ + { + "bbox": [ + 313, + 205, + 555, + 250 + ], + "spans": [ + { + "bbox": [ + 313, + 205, + 555, + 250 + ], + "type": "text", + "content": "Table 5. Ablation Study. H-score " + }, + { + "bbox": [ + 313, + 205, + 555, + 250 + ], + "type": "inline_equation", + "content": "(\\%)" + }, + { + "bbox": [ + 313, + 205, + 555, + 250 + ], + "type": "text", + "content": " of different variants in OPBDA scenarios. " + }, + { + "bbox": [ + 313, + 205, + 555, + 250 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{HQ}^{1}" + }, + { + "bbox": [ + 313, + 205, + 555, + 250 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 313, + 205, + 555, + 250 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{HQ}^{2}" + }, + { + "bbox": [ + 313, + 205, + 555, + 250 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 313, + 205, + 555, + 250 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{LQ}" + }, + { + "bbox": [ + 313, + 205, + 555, + 250 + ], + "type": "text", + "content": " refer to the objectives corresponding to the negative loss in " + }, + { + "bbox": [ + 313, + 205, + 555, + 250 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{HQ}" + }, + { + "bbox": [ + 313, + 205, + 555, + 250 + ], + "type": "text", + "content": ", entropy loss in " + }, + { + "bbox": [ + 313, + 205, + 555, + 250 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{HQ}" + }, + { + "bbox": [ + 313, + 205, + 555, + 250 + ], + "type": "text", + "content": ", and loss associated with low-quality labels, respectively." + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 317, + 258, + 555, + 353 + ], + "lines": [ + { + "bbox": [ + 317, + 258, + 555, + 353 + ], + "spans": [ + { + "bbox": [ + 317, + 258, + 555, + 353 + ], + "type": "table", + "html": "
<table><tr><td rowspan="2">L¹HQ</td><td rowspan="2">L²HQ</td><td rowspan="2">LLQ</td><td colspan="2">Office31</td><td colspan="4">OfficeHome</td><td rowspan="2">Avg.</td></tr>
<tr><td>A→W</td><td>D→W</td><td>Ar→Cl</td><td>Cl→Re</td><td>Pr→Ar</td><td>Re→Cl</td></tr>
<tr><td>-</td><td>-</td><td>-</td><td>82.9</td><td>92.6</td><td>61.2</td><td>72.9</td><td>71.5</td><td>62.0</td><td>73.7</td></tr>
<tr><td>✓</td><td>-</td><td>-</td><td>84.3</td><td>93.5</td><td>61.1</td><td>73.0</td><td>72.0</td><td>62.7</td><td>74.6</td></tr>
<tr><td>-</td><td>✓</td><td>-</td><td>83.5</td><td>92.7</td><td>61.7</td><td>74.3</td><td>72.1</td><td>62.5</td><td>74.5</td></tr>
<tr><td>-</td><td>-</td><td>✓</td><td>84.1</td><td>93.8</td><td>61.4</td><td>74.1</td><td>72.6</td><td>62.6</td><td>74.8</td></tr>
<tr><td>✓</td><td>✓</td><td>-</td><td>84.3</td><td>94.1</td><td>62.7</td><td>73.7</td><td>71.7</td><td>61.3</td><td>74.6</td></tr>
<tr><td>✓</td><td>-</td><td>✓</td><td>83.1</td><td>94.2</td><td>63.3</td><td>74.7</td><td>72.3</td><td>62.6</td><td>75.0</td></tr>
<tr><td>-</td><td>✓</td><td>✓</td><td>85.1</td><td>94.0</td><td>62.6</td><td>74.6</td><td>72.1</td><td>62.8</td><td>75.2</td></tr>
<tr><td>✓</td><td>✓</td><td>✓</td><td>85.2</td><td>94.4</td><td>61.2</td><td>77.3</td><td>75.9</td><td>64.1</td><td>76.3</td></tr>
</table>
", + "image_path": "5e363a7f229f409a873965440ca20af475b9beab5552586c94d7b55ac24dbf15.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "table_body" + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 373, + 555, + 447 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 373, + 555, + 447 + ], + "spans": [ + { + "bbox": [ + 313, + 373, + 555, + 447 + ], + "type": "text", + "content": "dictions on the target model, allowing for accurate identification of unknown categories within the target data. Compared to " + }, + { + "bbox": [ + 313, + 373, + 555, + 447 + ], + "type": "inline_equation", + "content": "\\mathrm{UB^{2}DA}" + }, + { + "bbox": [ + 313, + 373, + 555, + 447 + ], + "type": "text", + "content": " [9], ADU achieves higher H-scores on the Office31, OfficeHome and Visda datasets, with improvements of " + }, + { + "bbox": [ + 313, + 373, + 555, + 447 + ], + "type": "inline_equation", + "content": "1.0\\%" + }, + { + "bbox": [ + 313, + 373, + 555, + 447 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 313, + 373, + 555, + 447 + ], + "type": "inline_equation", + "content": "2.0\\%" + }, + { + "bbox": [ + 313, + 373, + 555, + 447 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 313, + 373, + 555, + 447 + ], + "type": "inline_equation", + "content": "2.7\\%" + }, + { + "bbox": [ + 313, + 373, + 555, + 447 + ], + "type": "text", + "content": ", respectively. These results further validate the effectiveness of our proposed approach." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 456, + 377, + 468 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 456, + 377, + 468 + ], + "spans": [ + { + "bbox": [ + 313, + 456, + 377, + 468 + ], + "type": "text", + "content": "4.3. Analysis" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 474, + 555, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 474, + 555, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 474, + 555, + 715 + ], + "type": "text", + "content": "Ablation study. To comprehensively assess the individual contribution of the components comprising our method, we conduct extensive ablation studies on two tasks from the Office31 dataset and four tasks from the OfficeHome dataset in OPBDA scenarios. The results are summarized in Table 5. Here, " + }, + { + "bbox": [ + 313, + 474, + 555, + 715 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{HQ}^{1}" + }, + { + "bbox": [ + 313, + 474, + 555, + 715 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 313, + 474, + 555, + 715 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{HQ}^{2}" + }, + { + "bbox": [ + 313, + 474, + 555, + 715 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 313, + 474, + 555, + 715 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{LQ}" + }, + { + "bbox": [ + 313, + 474, + 555, + 715 + ], + "type": "text", + "content": " refer to the objectives corresponding to the negative loss in " + }, + { + "bbox": [ + 313, + 474, + 555, + 715 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{HQ}" + }, + { + "bbox": [ + 313, + 474, + 555, + 715 + ], + "type": "text", + "content": ", entropy loss in " + }, + { + "bbox": [ + 313, + 474, + 555, + 715 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{HQ}" + }, + { + "bbox": [ + 313, + 474, + 555, + 715 + ], + "type": "text", + "content": ", and loss associated with low-quality labels, respectively. 
More detailed results can be found in the supplementary material. It is important to emphasize that in all ablation experiments, we consistently employ the SAKD loss, which is a critical component of the ADU framework. Without it, the model would be unable to transfer knowledge from the source domain to the target model effectively. From the ablation study results, we can draw the following conclusions: (i) The introduction of any component alongside the SAKD loss leads to performance improvements, underscoring the vital role of the EDLD module. (ii) The full EDLD loss, which includes the negative loss term, yields better performance compared to its version without the negative loss," + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "text", + "content": "30594" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 60, + 75, + 223, + 195 + ], + "blocks": [ + { + "bbox": [ + 60, + 75, + 223, + 195 + ], + "lines": [ + { + "bbox": [ + 60, + 75, + 223, + 195 + ], + "spans": [ + { + "bbox": [ + 60, + 75, + 223, + 195 + ], + "type": "image", + "image_path": "4484827b5b8c63032e103d251105fe0b4bb9899dce6ca76aba4e8a7b83e52bb8.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 55, + 207, + 553, + 231 + ], + "lines": [ + { + "bbox": [ + 55, + 207, + 553, + 231 + ], + "spans": [ + { + "bbox": [ + 55, + 207, + 553, + 231 + ], + "type": "text", + "content": "Figure 3. Parameter sensitivity analysis for six tasks. (a) and (b) plot the H-score with different values of " + }, + { + "bbox": [ + 55, + 207, + 553, + 231 + ], + "type": "inline_equation", + "content": "\\lambda" + }, + { + "bbox": [ + 55, + 207, + 553, + 231 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 207, + 553, + 231 + ], + "type": "inline_equation", + "content": "\\theta" + }, + { + "bbox": [ + 55, + 207, + 553, + 231 + ], + "type": "text", + "content": "; (c) plots the H-score, " + }, + { + "bbox": [ + 55, + 207, + 553, + 231 + ], + "type": "inline_equation", + "content": "\\mathrm{Acc}_c" + }, + { + "bbox": [ + 55, + 207, + 553, + 231 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 55, + 207, + 553, + 231 + ], + "type": "inline_equation", + "content": "\\mathrm{Acc}_u" + }, + { + "bbox": [ + 55, + 207, + 553, + 231 + ], + "type": "text", + "content": " with different values of " + }, + { + "bbox": [ + 55, + 207, + 553, + 231 + ], + "type": "inline_equation", + "content": "\\gamma" + }, + { + "bbox": [ + 55, + 207, + 553, + 231 + ], + "type": "text", + "content": ". The default values of these hyperparameters are set to " + }, + { + "bbox": [ + 55, + 207, + 553, + 231 + ], + "type": "inline_equation", + "content": "\\lambda = 1.0" + }, + { + "bbox": [ + 55, + 207, + 553, + 231 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 55, + 207, + 553, + 231 + ], + "type": "inline_equation", + "content": "\\theta = 1.1" + }, + { + "bbox": [ + 55, + 207, + 553, + 231 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 55, + 207, + 553, + 231 + ], + "type": "inline_equation", + "content": "\\gamma = 0.6" + }, + { + "bbox": [ + 55, + 207, + 553, + 231 + ], + "type": "text", + "content": "." 
+ } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 231, + 75, + 386, + 195 + ], + "blocks": [ + { + "bbox": [ + 231, + 75, + 386, + 195 + ], + "lines": [ + { + "bbox": [ + 231, + 75, + 386, + 195 + ], + "spans": [ + { + "bbox": [ + 231, + 75, + 386, + 195 + ], + "type": "image", + "image_path": "08c225f96d48980126e7f969dfe663e036f9dfaf482f30dae474a9f76f098d72.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 395, + 75, + 548, + 195 + ], + "blocks": [ + { + "bbox": [ + 395, + 75, + 548, + 195 + ], + "lines": [ + { + "bbox": [ + 395, + 75, + 548, + 195 + ], + "spans": [ + { + "bbox": [ + 395, + 75, + 548, + 195 + ], + "type": "image", + "image_path": "0adff36860afc575c9c033912b7b501e3e31ed6569084e9ee11cda382153ac60.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 59, + 252, + 167, + 363 + ], + "blocks": [ + { + "bbox": [ + 59, + 252, + 167, + 363 + ], + "lines": [ + { + "bbox": [ + 59, + 252, + 167, + 363 + ], + "spans": [ + { + "bbox": [ + 59, + 252, + 167, + 363 + ], + "type": "image", + "image_path": "1355f2a705c8410e1d496dcc7c31f228a86a6862ec4e2355b6692ffc730caac4.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 94, + 368, + 140, + 378 + ], + "lines": [ + { + "bbox": [ + 94, + 368, + 140, + 378 + ], + "spans": [ + { + "bbox": [ + 94, + 368, + 140, + 378 + ], + "type": "text", + "content": "(a) No Adapt." + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 55, + 392, + 295, + 426 + ], + "lines": [ + { + "bbox": [ + 55, + 392, + 295, + 426 + ], + "spans": [ + { + "bbox": [ + 55, + 392, + 295, + 426 + ], + "type": "text", + "content": "Figure 4. t-SNE feature visualization of target representations in the " + }, + { + "bbox": [ + 55, + 392, + 295, + 426 + ], + "type": "inline_equation", + "content": "\\mathrm{D}\\rightarrow \\mathrm{A}" + }, + { + "bbox": [ + 55, + 392, + 295, + 426 + ], + "type": "text", + "content": " OPBDA task. 
Blue dots represent target \"known\" examples " + }, + { + "bbox": [ + 55, + 392, + 295, + 426 + ], + "type": "inline_equation", + "content": "(L_{s}\\cap L_{t})" + }, + { + "bbox": [ + 55, + 392, + 295, + 426 + ], + "type": "text", + "content": " while red dots are \"unknown\" examples " + }, + { + "bbox": [ + 55, + 392, + 295, + 426 + ], + "type": "inline_equation", + "content": "(L_{t} - L_{s})" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 181, + 252, + 290, + 361 + ], + "blocks": [ + { + "bbox": [ + 181, + 252, + 290, + 361 + ], + "lines": [ + { + "bbox": [ + 181, + 252, + 290, + 361 + ], + "spans": [ + { + "bbox": [ + 181, + 252, + 290, + 361 + ], + "type": "image", + "image_path": "bdf150df8dc7c29414b5907ff567505aba668e7afc1ab553079aa505144a5981.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 220, + 368, + 251, + 377 + ], + "lines": [ + { + "bbox": [ + 220, + 368, + 251, + 377 + ], + "spans": [ + { + "bbox": [ + 220, + 368, + 251, + 377 + ], + "type": "text", + "content": "(b) ADU" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 6 + }, + { + "bbox": [ + 54, + 449, + 295, + 496 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 449, + 295, + 496 + ], + "spans": [ + { + "bbox": [ + 54, + 449, + 295, + 496 + ], + "type": "text", + "content": "demonstrating the effectiveness of incorporating this term. (iii) The integration of all components results in the highest H-scores, providing clear evidence of the synergy and efficacy of the combined modules." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 54, + 497, + 295, + 640 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 497, + 295, + 640 + ], + "spans": [ + { + "bbox": [ + 54, + 497, + 295, + 640 + ], + "type": "text", + "content": "Feature visualization. Fig. 4 displays the visualization of the target features with t-SNE [49], providing a clear representation of the feature distribution. As expected, ADU achieves excellent alignment between the source and target domain features. Taking a closer look at the visualization, it is evident that ADU excels in distinguishing the \"unknown\" categories from the other classes. This improvement aligns well with the intended function of the EDLD module, which is designed to enhance the separation of \"unknown\" categories from known categories. This result further highlights the effectiveness of ADU in handling the challenges posed by unknown classes in black-box domain adaptation tasks." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 54, + 642, + 295, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 642, + 295, + 713 + ], + "spans": [ + { + "bbox": [ + 54, + 642, + 295, + 713 + ], + "type": "text", + "content": "Parameter sensitivity analysis. To better assess the impact of different hyperparameters, we conduct a detailed sensitivity analysis. 
We investigate the sensitivity of the parameters " + }, + { + "bbox": [ + 54, + 642, + 295, + 713 + ], + "type": "inline_equation", + "content": "\\lambda" + }, + { + "bbox": [ + 54, + 642, + 295, + 713 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 54, + 642, + 295, + 713 + ], + "type": "inline_equation", + "content": "\\theta" + }, + { + "bbox": [ + 54, + 642, + 295, + 713 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 54, + 642, + 295, + 713 + ], + "type": "inline_equation", + "content": "\\gamma" + }, + { + "bbox": [ + 54, + 642, + 295, + 713 + ], + "type": "text", + "content": " by performing experiments on two tasks from the Office31 dataset and four tasks from the OfficeHome dataset in OPBDA scenarios, as shown in Fig. 3." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 251, + 555, + 419 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 251, + 555, + 419 + ], + "spans": [ + { + "bbox": [ + 313, + 251, + 555, + 419 + ], + "type": "text", + "content": "The parameter " + }, + { + "bbox": [ + 313, + 251, + 555, + 419 + ], + "type": "inline_equation", + "content": "\\lambda" + }, + { + "bbox": [ + 313, + 251, + 555, + 419 + ], + "type": "text", + "content": " is varied over the range [0.0, 0.2, 0.5, 1.0, 2.0, 5.0], " + }, + { + "bbox": [ + 313, + 251, + 555, + 419 + ], + "type": "inline_equation", + "content": "\\theta" + }, + { + "bbox": [ + 313, + 251, + 555, + 419 + ], + "type": "text", + "content": " spans [1.00, 1.05, 1.10, 1.15, 1.20, 1.25], and " + }, + { + "bbox": [ + 313, + 251, + 555, + 419 + ], + "type": "inline_equation", + "content": "\\gamma" + }, + { + "bbox": [ + 313, + 251, + 555, + 419 + ], + "type": "text", + "content": " is explored within the range [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]. It is evident that the results are stable around the selected values of " + }, + { + "bbox": [ + 313, + 251, + 555, + 419 + ], + "type": "inline_equation", + "content": "\\lambda = 1.0" + }, + { + "bbox": [ + 313, + 251, + 555, + 419 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 313, + 251, + 555, + 419 + ], + "type": "inline_equation", + "content": "\\theta = 1.1" + }, + { + "bbox": [ + 313, + 251, + 555, + 419 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 313, + 251, + 555, + 419 + ], + "type": "inline_equation", + "content": "\\gamma = 0.6" + }, + { + "bbox": [ + 313, + 251, + 555, + 419 + ], + "type": "text", + "content": ". Additionally, as shown in Fig. 3(b), we examine the effect of varying " + }, + { + "bbox": [ + 313, + 251, + 555, + 419 + ], + "type": "inline_equation", + "content": "\\theta" + }, + { + "bbox": [ + 313, + 251, + 555, + 419 + ], + "type": "text", + "content": " in Eq. (3). The results around the chosen parameter " + }, + { + "bbox": [ + 313, + 251, + 555, + 419 + ], + "type": "inline_equation", + "content": "\\theta = 1.1" + }, + { + "bbox": [ + 313, + 251, + 555, + 419 + ], + "type": "text", + "content": " remain stable, and we also observe that increasing " + }, + { + "bbox": [ + 313, + 251, + 555, + 419 + ], + "type": "inline_equation", + "content": "\\theta" + }, + { + "bbox": [ + 313, + 251, + 555, + 419 + ], + "type": "text", + "content": " slightly from 1.0 leads to an improvement in the H-score, thereby highlighting the effectiveness of Eq. (3). 
Finally, we analyze the impact of " + }, + { + "bbox": [ + 313, + 251, + 555, + 419 + ], + "type": "inline_equation", + "content": "\\gamma" + }, + { + "bbox": [ + 313, + 251, + 555, + 419 + ], + "type": "text", + "content": ". As shown in Fig. 3(c), there is an inverse relationship between " + }, + { + "bbox": [ + 313, + 251, + 555, + 419 + ], + "type": "inline_equation", + "content": "\\mathrm{Acc}_c" + }, + { + "bbox": [ + 313, + 251, + 555, + 419 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 251, + 555, + 419 + ], + "type": "inline_equation", + "content": "\\mathrm{Acc}_u" + }, + { + "bbox": [ + 313, + 251, + 555, + 419 + ], + "type": "text", + "content": ". However, when " + }, + { + "bbox": [ + 313, + 251, + 555, + 419 + ], + "type": "inline_equation", + "content": "\\gamma = 0.6" + }, + { + "bbox": [ + 313, + 251, + 555, + 419 + ], + "type": "text", + "content": ", the two metrics reach a relatively balanced state, and at this point, the H-score achieves an optimal result." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 437, + 388, + 449 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 437, + 388, + 449 + ], + "spans": [ + { + "bbox": [ + 313, + 437, + 388, + 449 + ], + "type": "text", + "content": "5. Conclusion" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 459, + 555, + 662 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 459, + 555, + 662 + ], + "spans": [ + { + "bbox": [ + 313, + 459, + 555, + 662 + ], + "type": "text", + "content": "In this paper, we introduce the ADU model, a framework specifically designed to tackle Black-box Domain Adaptation with unknown classes in the target domain. ADU integrates two key innovations: Selective Amplification Knowledge Distillation (SAKD) and Entropy-Driven Label Differentiation (EDLD). SAKD enhances model accuracy by selectively amplifying high-confidence pseudo-labels, thereby effectively mitigating the influence of noisy pseudo-labels. Meanwhile, EDLD improves the recognition of unknown categories through an entropy-driven threshold, widening the gap between unknown categories and the known ones and bolstering the robustness of the method across a range of diverse target domains. Experiments across four benchmark datasets demonstrate that ADU outperforms existing state-of-the-art approaches, highlighting its exceptional adaptability and efficacy and setting a new benchmark for future research in the field." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 313, + 665, + 556, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 665, + 556, + 714 + ], + "spans": [ + { + "bbox": [ + 313, + 665, + 556, + 714 + ], + "type": "text", + "content": "Acknowledgement. This work was supported by the National Natural Science Foundation of China (Grant T2125006 and 42401415) and Jiangsu Innovation Capacity Building Program (Project BM2022028)." 
+ } + ] + } + ], + "index": 15 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 317, + 758 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 317, + 758 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 317, + 758 + ], + "type": "text", + "content": "30595" + } + ] + } + ], + "index": 16 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 115, + 83 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 72, + 115, + 83 + ], + "spans": [ + { + "bbox": [ + 56, + 72, + 115, + 83 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 57, + 91, + 296, + 713 + ], + "type": "list", + "angle": 0, + "index": 14, + "blocks": [ + { + "bbox": [ + 61, + 91, + 296, + 145 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 91, + 296, + 145 + ], + "spans": [ + { + "bbox": [ + 61, + 91, + 296, + 145 + ], + "type": "text", + "content": "[1] Waqar Ahmed, Pietro Morerio, and Vittorio Murino. Cleaning noisy labels by negative ensemble learning for source-free unsupervised domain adaptation. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, pages 1616-1625, 2022. 3" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 61, + 148, + 296, + 190 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 148, + 296, + 190 + ], + "spans": [ + { + "bbox": [ + 61, + 148, + 296, + 190 + ], + "type": "text", + "content": "[2] Mathilde Bateson, Hoel Kervadec, Jose Dolz, Herve Lombaert, and Ismail Ben Ayed. Source-free domain adaptation for image segmentation. Medical Image Analysis, 82: 102617, 2022. 3" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 62, + 194, + 295, + 236 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 194, + 295, + 236 + ], + "spans": [ + { + "bbox": [ + 62, + 194, + 295, + 236 + ], + "type": "text", + "content": "[3] Shai Ben-David, John Blitzer, Koby Crammer, and Fernando Pereira. Analysis of representations for domain adaptation. Advances in neural information processing systems, 19, 2006. 5" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 62, + 239, + 295, + 292 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 239, + 295, + 292 + ], + "spans": [ + { + "bbox": [ + 62, + 239, + 295, + 292 + ], + "type": "text", + "content": "[4] Zhangjie Cao, Mingsheng Long, Jianmin Wang, and Michael I Jordan. Partial transfer learning with selective adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2724-2732, 2018. 3" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 62, + 295, + 295, + 350 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 295, + 295, + 350 + ], + "spans": [ + { + "bbox": [ + 62, + 295, + 295, + 350 + ], + "type": "text", + "content": "[5] Zhangjie Cao, Kaichao You, Mingsheng Long, Jianmin Wang, and Qiang Yang. Learning to transfer examples for partial domain adaptation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2985-2994, 2019. 
3" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 62, + 351, + 295, + 396 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 351, + 295, + 396 + ], + "spans": [ + { + "bbox": [ + 62, + 351, + 295, + 396 + ], + "type": "text", + "content": "[6] Minghao Chen, Hongyang Xue, and Deng Cai. Domain adaptation for semantic segmentation with maximum squares loss. In Proceedings of the IEEE/CVF international conference on computer vision, pages 2090-2099, 2019. 1" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 62, + 397, + 295, + 451 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 397, + 295, + 451 + ], + "spans": [ + { + "bbox": [ + 62, + 397, + 295, + 451 + ], + "type": "text", + "content": "[7] Yuhua Chen, Wen Li, Christos Sakaridis, Dengxin Dai, and Luc Van Gool. Domain adaptive faster r-cnn for object detection in the wild. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3339-3348, 2018. 1" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 62, + 453, + 295, + 508 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 453, + 295, + 508 + ], + "spans": [ + { + "bbox": [ + 62, + 453, + 295, + 508 + ], + "type": "text", + "content": "[8] Shuhao Cui, Shuhui Wang, Junbao Zhuo, Chi Su, Qingming Huang, and Qi Tian. Gradually vanishing bridge for adversarial domain adaptation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 12455-12464, 2020. 3" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 62, + 510, + 295, + 542 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 510, + 295, + 542 + ], + "spans": [ + { + "bbox": [ + 62, + 510, + 295, + 542 + ], + "type": "text", + "content": "[9] Bin Deng, Yabin Zhang, Hui Tang, Changxing Ding, and Kui Jia. On universal black-box domain adaptation. arXiv preprint arXiv:2104.04665, 2021. 1, 3, 6, 7" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 57, + 544, + 295, + 588 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 544, + 295, + 588 + ], + "spans": [ + { + "bbox": [ + 57, + 544, + 295, + 588 + ], + "type": "text", + "content": "[10] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248-255. IEEE, 2009. 6" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 57, + 590, + 295, + 644 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 590, + 295, + 644 + ], + "spans": [ + { + "bbox": [ + 57, + 590, + 295, + 644 + ], + "type": "text", + "content": "[11] Bo Fu, Zhangjie Cao, Mingsheng Long, and Jianmin Wang. Learning to detect open classes for universal domain adaptation. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XV 16, pages 567-583. Springer, 2020. 3, 6" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 57, + 647, + 295, + 678 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 647, + 295, + 678 + ], + "spans": [ + { + "bbox": [ + 57, + 647, + 295, + 678 + ], + "type": "text", + "content": "[12] Yaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. In International conference on machine learning, pages 1180-1189. PMLR, 2015. 
1, 2" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 57, + 681, + 295, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 681, + 295, + 713 + ], + "spans": [ + { + "bbox": [ + 57, + 681, + 295, + 713 + ], + "type": "text", + "content": "[13] Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training" + } + ] + } + ], + "index": 13 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 317, + 74, + 555, + 713 + ], + "type": "list", + "angle": 0, + "index": 30, + "blocks": [ + { + "bbox": [ + 335, + 74, + 553, + 95 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 335, + 74, + 553, + 95 + ], + "spans": [ + { + "bbox": [ + 335, + 74, + 553, + 95 + ], + "type": "text", + "content": "of neural networks. Journal of machine learning research, 17(59):1-35, 2016. 1" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 317, + 96, + 553, + 128 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 96, + 553, + 128 + ], + "spans": [ + { + "bbox": [ + 317, + 96, + 553, + 128 + ], + "type": "text", + "content": "[14] Jacob Goldberger and Ehud Ben-Reuven. Training deep neural-networks using a noise adaptation layer. In International conference on learning representations, 2017. 3" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 317, + 129, + 553, + 173 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 129, + 553, + 173 + ], + "spans": [ + { + "bbox": [ + 317, + 129, + 553, + 173 + ], + "type": "text", + "content": "[15] Rui Gong, Wen Li, Yuhua Chen, and Luc Van Gool. Dlow: Domain flow for adaptation and generalization. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2477-2486, 2019. 3" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 317, + 174, + 553, + 217 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 174, + 553, + 217 + ], + "spans": [ + { + "bbox": [ + 317, + 174, + 553, + 217 + ], + "type": "text", + "content": "[16] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016. 6" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 317, + 217, + 553, + 249 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 217, + 553, + 249 + ], + "spans": [ + { + "bbox": [ + 317, + 217, + 553, + 249 + ], + "type": "text", + "content": "[17] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015. 3" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 317, + 251, + 553, + 294 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 251, + 553, + 294 + ], + "spans": [ + { + "bbox": [ + 317, + 251, + 553, + 294 + ], + "type": "text", + "content": "[18] Judy Hoffman, Erik Rodner, Jeff Donahue, Brian Kulis, and Kate Saenko. Asymmetric and category invariant feature transformations for domain adaptation. International journal of computer vision, 109:28-41, 2014. 
2" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 317, + 295, + 553, + 327 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 295, + 553, + 327 + ], + "spans": [ + { + "bbox": [ + 317, + 295, + 553, + 327 + ], + "type": "text", + "content": "[19] Yunzhong Hou and Liang Zheng. Source free domain adaptation with image translation. arXiv preprint arXiv:2008.07514, 2020. 1, 3" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 317, + 328, + 553, + 381 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 328, + 553, + 381 + ], + "spans": [ + { + "bbox": [ + 317, + 328, + 553, + 381 + ], + "type": "text", + "content": "[20] Mehran Khodabandeh, Arash Vahdat, Mani Ranjbar, and William G Macready. A robust learning approach to domain adaptive object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 480-490, 2019. 1" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 317, + 383, + 555, + 426 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 383, + 555, + 426 + ], + "spans": [ + { + "bbox": [ + 317, + 383, + 555, + 426 + ], + "type": "text", + "content": "[21] Youngdong Kim, Junho Yim, Juseung Yun, and Junmo Kim. Nlnl: Negative learning for noisy labels. In Proceedings of the IEEE/CVF international conference on computer vision, pages 101-110, 2019. 5" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 317, + 427, + 553, + 470 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 427, + 553, + 470 + ], + "spans": [ + { + "bbox": [ + 317, + 427, + 553, + 470 + ], + "type": "text", + "content": "[22] Youngeun Kim, Donghyeon Cho, Kyeongtak Han, Priyadarshini Panda, and Sungeun Hong. Domain adaptation without source data. IEEE Transactions on Artificial Intelligence, 2(6):508-518, 2021. 3" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 317, + 472, + 553, + 515 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 472, + 553, + 515 + ], + "spans": [ + { + "bbox": [ + 317, + 472, + 553, + 515 + ], + "type": "text", + "content": "[23] Youngdong Kim, Juseung Yun, Hyounguk Shon, and Junmo Kim. Joint negative and positive learning for noisy labels. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9442-9451, 2021. 3, 6" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 317, + 516, + 553, + 568 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 516, + 553, + 568 + ], + "spans": [ + { + "bbox": [ + 317, + 516, + 553, + 568 + ], + "type": "text", + "content": "[24] Guangrui Li, Guoliang Kang, Yi Zhu, Yunchao Wei, and Yi Yang. Domain consensus clustering for universal domain adaptation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9757-9766, 2021. 6" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 317, + 571, + 553, + 624 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 571, + 553, + 624 + ], + "spans": [ + { + "bbox": [ + 317, + 571, + 553, + 624 + ], + "type": "text", + "content": "[25] Qingmei Li, Yibin Wen, Juepeng Zheng, Yuxiang Zhang, and Haohuan Fu. Hyunida: Breaking label set constraints for universal domain adaptation in cross-scene hyperspectral image classification. IEEE Transactions on Geoscience and Remote Sensing, 2024. 
1" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 317, + 626, + 553, + 669 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 626, + 553, + 669 + ], + "spans": [ + { + "bbox": [ + 317, + 626, + 553, + 669 + ], + "type": "text", + "content": "[26] Shuang Li, Shiji Song, Gao Huang, Zhengming Ding, and Cheng Wu. Domain invariant and class discriminative feature learning for visual domain adaptation. IEEE transactions on image processing, 27(9):4260-4273, 2018. 2" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 317, + 670, + 553, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 670, + 553, + 713 + ], + "spans": [ + { + "bbox": [ + 317, + 670, + 553, + 713 + ], + "type": "text", + "content": "[27] Jian Liang, Dapeng Hu, and Jiashi Feng. Do we really need to access the source data? source hypothesis transfer for unsupervised domain adaptation. In International conference on machine learning, pages 6028-6039. PMLR, 2020. 1, 3" + } + ] + } + ], + "index": 29 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 749, + 318, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 749, + 318, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 749, + 318, + 757 + ], + "type": "text", + "content": "30596" + } + ] + } + ], + "index": 31 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 73, + 295, + 712 + ], + "type": "list", + "angle": 0, + "index": 13, + "blocks": [ + { + "bbox": [ + 56, + 73, + 294, + 127 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 73, + 294, + 127 + ], + "spans": [ + { + "bbox": [ + 56, + 73, + 294, + 127 + ], + "type": "text", + "content": "[28] Jian Liang, Dapeng Hu, Jiashi Feng, and Ran He. Dine: Domain adaptation from single and multiple black-box predictors. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 8003-8013, 2022. 1, 3, 6, 7" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 129, + 295, + 185 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 129, + 295, + 185 + ], + "spans": [ + { + "bbox": [ + 56, + 129, + 295, + 185 + ], + "type": "text", + "content": "[29] Mattia Litrico, Alessio Del Bue, and Pietro Morerio. Guiding pseudo-labels with uncertainty estimation for source-free unsupervised domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7640-7650, 2023. 3" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 186, + 294, + 228 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 186, + 294, + 228 + ], + "spans": [ + { + "bbox": [ + 56, + 186, + 294, + 228 + ], + "type": "text", + "content": "[30] Long Liu, Lechao Yang, and Bin Zhu. Sparse feature space representation: A unified framework for semi-supervised and domain adaptation learning. Knowledge-Based Systems, 156:43-61, 2018. 3" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 56, + 231, + 294, + 274 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 231, + 294, + 274 + ], + "spans": [ + { + "bbox": [ + 56, + 231, + 294, + 274 + ], + "type": "text", + "content": "[31] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3431-3440, 2015. 
3" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 56, + 276, + 294, + 319 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 276, + 294, + 319 + ], + "spans": [ + { + "bbox": [ + 56, + 276, + 294, + 319 + ], + "type": "text", + "content": "[32] Mingsheng Long, Yue Cao, Jianmin Wang, and Michael Jordan. Learning transferable features with deep adaptation networks. In International conference on machine learning, pages 97-105. PMLR, 2015. 3" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 56, + 321, + 294, + 364 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 321, + 294, + 364 + ], + "spans": [ + { + "bbox": [ + 56, + 321, + 294, + 364 + ], + "type": "text", + "content": "[33] Mingsheng Long, Zhangjie Cao, Jianmin Wang, and Michael I Jordan. Conditional adversarial domain adaptation. Advances in neural information processing systems, 31, 2018. 1" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 56, + 365, + 294, + 409 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 365, + 294, + 409 + ], + "spans": [ + { + "bbox": [ + 56, + 365, + 294, + 409 + ], + "type": "text", + "content": "[34] Bekhzod Olimov, Jeonghong Kim, and Anand Paul. Dcbt-net: Training deep convolutional neural networks with extremely noisy labels. IEEE Access, 8:220482-220495, 2020. 3" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 56, + 411, + 294, + 466 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 411, + 294, + 466 + ], + "spans": [ + { + "bbox": [ + 56, + 411, + 294, + 466 + ], + "type": "text", + "content": "[35] Fei Pan, Inkyu Shin, Francois Rameau, Seokju Lee, and In So Kweon. Unsupervised intra-domain adaptation for semantic segmentation through self-supervision. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 3764-3773, 2020. 1" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 56, + 468, + 294, + 500 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 468, + 294, + 500 + ], + "spans": [ + { + "bbox": [ + 56, + 468, + 294, + 500 + ], + "type": "text", + "content": "[36] Pau Panareda Busto and Juergen Gall. Open set domain adaptation. In Proceedings of the IEEE international conference on computer vision, pages 754-763, 2017. 1" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 56, + 502, + 294, + 545 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 502, + 294, + 545 + ], + "spans": [ + { + "bbox": [ + 56, + 502, + 294, + 545 + ], + "type": "text", + "content": "[37] Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. 2017. 6" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 56, + 547, + 294, + 589 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 547, + 294, + 589 + ], + "spans": [ + { + "bbox": [ + 56, + 547, + 294, + 589 + ], + "type": "text", + "content": "[38] Xingchao Peng, Ben Usman, Neela Kaushik, Judy Hoffman, Dequan Wang, and Kate Saenko. Visda: The visual domain adaptation challenge. 
arXiv preprint arXiv:1710.06924, 2017.5" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 56, + 591, + 294, + 645 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 591, + 294, + 645 + ], + "spans": [ + { + "bbox": [ + 56, + 591, + 294, + 645 + ], + "type": "text", + "content": "[39] Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, and Bo Wang. Moment matching for multi-source domain adaptation. In Proceedings of the IEEE/CVF international conference on computer vision, pages 1406-1415, 2019. 5" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 56, + 647, + 294, + 712 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 647, + 294, + 712 + ], + "spans": [ + { + "bbox": [ + 56, + 647, + 294, + 712 + ], + "type": "text", + "content": "[40] Kate Saenko, Brian Kulis, Mario Fritz, and Trevor Darrell. Adapting visual category models to new domains. In Computer Vision-ECCV 2010: 11th European Conference on Computer Vision, Heraklion, Crete, Greece, September 5-11, 2010, Proceedings, Part IV 11, pages 213-226. Springer, 2010. 5" + } + ] + } + ], + "index": 12 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 73, + 553, + 712 + ], + "type": "list", + "angle": 0, + "index": 27, + "blocks": [ + { + "bbox": [ + 316, + 73, + 553, + 116 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 73, + 553, + 116 + ], + "spans": [ + { + "bbox": [ + 316, + 73, + 553, + 116 + ], + "type": "text", + "content": "[41] Kuniaki Saito and Kate Saenko. Ovanet: One-vs-all network for universal domain adaptation. In Proceedings of the IEEE/cvf international conference on computer vision, pages 9000-9009, 2021. 1" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 316, + 118, + 553, + 161 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 118, + 553, + 161 + ], + "spans": [ + { + "bbox": [ + 316, + 118, + 553, + 161 + ], + "type": "text", + "content": "[42] Kuniaki Saito, Shohei Yamamoto, Yoshitaka Ushiku, and Tatsuya Harada. Open set domain adaptation by backpropagation. In Proceedings of the European conference on computer vision (ECCV), pages 153-168, 2018. 1" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 316, + 162, + 553, + 205 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 162, + 553, + 205 + ], + "spans": [ + { + "bbox": [ + 316, + 162, + 553, + 205 + ], + "type": "text", + "content": "[43] Chen Shen and Yuhong Guo. Unsupervised heterogeneous domain adaptation with sparse feature transformation. In Asian conference on machine learning, pages 375-390. PMLR, 2018. 2" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 316, + 206, + 553, + 249 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 206, + 553, + 249 + ], + "spans": [ + { + "bbox": [ + 316, + 206, + 553, + 249 + ], + "type": "text", + "content": "[44] Hwanjun Song, Minseok Kim, Dongmin Park, Yooju Shin, and Jae-Gil Lee. Learning from noisy labels with deep neural networks: A survey. IEEE transactions on neural networks and learning systems, 2022. 3" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 316, + 251, + 553, + 305 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 251, + 553, + 305 + ], + "spans": [ + { + "bbox": [ + 316, + 251, + 553, + 305 + ], + "type": "text", + "content": "[45] Simen Thys, Wiebe Van Ranst, and Toon Goedemé. 
Fooling automated surveillance cameras: adversarial patches to attack person detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops, pages 0–0, 2019. 1" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 316, + 306, + 553, + 348 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 306, + 553, + 348 + ], + "spans": [ + { + "bbox": [ + 316, + 306, + 553, + 348 + ], + "type": "text", + "content": "[46] Florian Tramér, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. Ensemble adversarial training: Attacks and defenses. arXiv preprint arXiv:1705.07204, 2017. 1" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 316, + 350, + 553, + 414 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 350, + 553, + 414 + ], + "spans": [ + { + "bbox": [ + 316, + 350, + 553, + 414 + ], + "type": "text", + "content": "[47] Yi-Hsuan Tsai, Wei-Chih Hung, Samuel Schulter, Kiyuk Sohn, Ming-Hsuan Yang, and Manmohan Chandraker. Learning to adapt structured output space for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7472-7481, 2018. 1" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 316, + 415, + 553, + 459 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 415, + 553, + 459 + ], + "spans": [ + { + "bbox": [ + 316, + 415, + 553, + 459 + ], + "type": "text", + "content": "[48] Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. Adversarial discriminative domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7167-7176, 2017. 1" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 316, + 460, + 553, + 492 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 460, + 553, + 492 + ], + "spans": [ + { + "bbox": [ + 316, + 460, + 553, + 492 + ], + "type": "text", + "content": "[49] Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of machine learning research, 9 (11), 2008. 8" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 316, + 494, + 553, + 547 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 494, + 553, + 547 + ], + "spans": [ + { + "bbox": [ + 316, + 494, + 553, + 547 + ], + "type": "text", + "content": "[50] Hemanth Venkateswara, Jose Eusebio, Shayok Chakraborty, and Sethuraman Panchanathan. Deep hashing network for unsupervised domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5018-5027, 2017. 5" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 316, + 548, + 553, + 602 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 548, + 553, + 602 + ], + "spans": [ + { + "bbox": [ + 316, + 548, + 553, + 602 + ], + "type": "text", + "content": "[51] Riccardo Volpi, Pietro Morerio, Silvio Savarese, and Vittorio Murino. Adversarial feature augmentation for unsupervised domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5495-5504, 2018. 3" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 316, + 604, + 553, + 658 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 604, + 553, + 658 + ], + "spans": [ + { + "bbox": [ + 316, + 604, + 553, + 658 + ], + "type": "text", + "content": "[52] Rui Wang, Masao Utiyama, Lemao Liu, Kehai Chen, and Eichiro Sumita. 
Instance weighting for neural machine translation domain adaptation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1482-1488, 2017. 2" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 316, + 659, + 553, + 712 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 659, + 553, + 712 + ], + "spans": [ + { + "bbox": [ + 316, + 659, + 553, + 712 + ], + "type": "text", + "content": "[53] Hongxin Wei, Lei Feng, Xiangyu Chen, and Bo An. Combating noisy labels by agreement: A joint training method with co-regularization. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 13726–13735, 2020. 3" + } + ] + } + ], + "index": 26 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "text", + "content": "30597" + } + ] + } + ], + "index": 28 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 73, + 294, + 518 + ], + "type": "list", + "angle": 0, + "index": 9, + "blocks": [ + { + "bbox": [ + 56, + 73, + 294, + 127 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 73, + 294, + 127 + ], + "spans": [ + { + "bbox": [ + 56, + 73, + 294, + 127 + ], + "type": "text", + "content": "[54] Mingxuan Xia, Junbo Zhao, Gengyu Lyu, Zenan Huang, Tianlei Hu, Gang Chen, and Haobo Wang. A separation and alignment framework for black-box domain adaptation. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 16005-16013, 2024. 6, 7" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 129, + 294, + 183 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 129, + 294, + 183 + ], + "spans": [ + { + "bbox": [ + 56, + 129, + 294, + 183 + ], + "type": "text", + "content": "[55] Rui Xia, Zhenchun Pan, and Feng Xu. Instance weighting for domain adaptation via trading off sample selection bias and variance. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, Stockholm, Sweden, pages 13-19, 2018. 2" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 185, + 294, + 239 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 185, + 294, + 239 + ], + "spans": [ + { + "bbox": [ + 56, + 185, + 294, + 239 + ], + "type": "text", + "content": "[56] Minghao Xu, Hang Wang, Bingbing Ni, Qi Tian, and Wenjun Zhang. Cross-domain detection via graph-induced prototype alignment. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12355-12364, 2020. 1" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 56, + 240, + 294, + 284 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 240, + 294, + 284 + ], + "spans": [ + { + "bbox": [ + 56, + 240, + 294, + 284 + ], + "type": "text", + "content": "[57] Jianfei Yang, Xiangyu Peng, Kai Wang, Zheng Zhu, Jiashi Feng, Lihua Xie, and Yang You. Divide to adapt: Mitigating confirmation bias for domain adaptation of black-box predictors. arXiv preprint arXiv:2205.14467, 2022. 
1, 3, 6, 7" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 56, + 285, + 294, + 328 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 285, + 294, + 328 + ], + "spans": [ + { + "bbox": [ + 56, + 285, + 294, + 328 + ], + "type": "text", + "content": "[58] Shiqi Yang, Yaxing Wang, Joost Van De Weijer, Luis Herranz, and Shangling Jui. Unsupervised domain adaptation without source data by casting a bait. arXiv preprint arXiv:2010.12427, 1(2):5, 2020. 1, 3" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 56, + 331, + 294, + 384 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 331, + 294, + 384 + ], + "spans": [ + { + "bbox": [ + 56, + 331, + 294, + 384 + ], + "type": "text", + "content": "[59] Kaichao You, Mingsheng Long, Zhangjie Cao, Jianmin Wang, and Michael I Jordan. Universal domain adaptation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2720-2729, 2019. 1, 3, 5, 6" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 56, + 386, + 294, + 430 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 386, + 294, + 430 + ], + "spans": [ + { + "bbox": [ + 56, + 386, + 294, + 430 + ], + "type": "text", + "content": "[60] Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning (still) requires rethinking generalization. Communications of the ACM, 64(3):107-115, 2021. 1, 3" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 56, + 431, + 294, + 464 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 431, + 294, + 464 + ], + "spans": [ + { + "bbox": [ + 56, + 431, + 294, + 464 + ], + "type": "text", + "content": "[61] Haojian Zhang, Yabin Zhang, Kui Jia, and Lei Zhang. Unsupervised domain adaptation of black-box source models. arXiv preprint arXiv:2101.02839, 2021. 1, 3" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 56, + 465, + 294, + 518 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 465, + 294, + 518 + ], + "spans": [ + { + "bbox": [ + 56, + 465, + 294, + 518 + ], + "type": "text", + "content": "[62] Siqi Zhang, Lu Zhang, and Zhiyong Liu. Refined pseudo labeling for source-free domain adaptive object detection. In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1-5. IEEE, 2023. 
3" + } + ] + } + ], + "index": 8 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "text", + "content": "30598" + } + ] + } + ], + "index": 10 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 10 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2025/AFL_ A Single-Round Analytic Approach for Federated Learning with Pre-trained Models/1961e1d7-9e21-46e8-8ee1-ddb51cc88578_content_list.json b/2025/AFL_ A Single-Round Analytic Approach for Federated Learning with Pre-trained Models/1961e1d7-9e21-46e8-8ee1-ddb51cc88578_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..fa208524f342ef063c25dce0ef98678d906f59da --- /dev/null +++ b/2025/AFL_ A Single-Round Analytic Approach for Federated Learning with Pre-trained Models/1961e1d7-9e21-46e8-8ee1-ddb51cc88578_content_list.json @@ -0,0 +1,1849 @@ +[ + { + "type": "text", + "text": "AFL: A Single-Round Analytic Approach for Federated Learning with Pre-trained Models", + "text_level": 1, + "bbox": [ + 142, + 128, + 854, + 174 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Run He $^{1}$ , Kai Tong $^{1}$ , Di Fang $^{1}$ , Han Sun $^{2,3}$ , Ziqian Zeng $^{1}$ , Haoran Li $^{4}$ , Tianyi Chen $^{5}$ , Huiping Zhuang $^{1*}$", + "bbox": [ + 204, + 202, + 792, + 238 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "$^{1}$ South China University of Technology $^{2}$ Tsinghua University", + "bbox": [ + 240, + 238, + 756, + 256 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "3 Beijing National Research Center for Information Science and Technology", + "bbox": [ + 197, + 256, + 799, + 273 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "4 The Hong Kong University of Science and Technology 5 Microsoft", + "bbox": [ + 217, + 273, + 779, + 292 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "\\*corresponding: hpzhuang@scut.edu.cn", + "bbox": [ + 334, + 292, + 655, + 309 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 246, + 344, + 326, + 359 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "In this paper, we introduce analytic federated learning (AFL), a new training paradigm that brings analytical (i.e., closed-form) solutions to the federated learning (FL) with pretrained models. Our AFL draws inspiration from analytic learning—a gradient-free technique that trains neural networks with analytical solutions in one epoch. In the local client training stage, the AFL facilitates a one-epoch training, eliminating the necessity for multi-epoch updates. In the aggregation stage, we derive an absolute aggregation (AA) law. This AA law allows a single-round aggregation, reducing heavy communication overhead and achieving fast convergence by removing the need for multiple aggregation rounds. More importantly, the AFL exhibits a property that invariance to data partitioning, meaning that regardless of how the full dataset is distributed among clients, the aggregated result remains identical. This could spawn various potentials, such as data heterogeneity invariance and client-number invariance. 
We conduct experiments across various FL settings including extremely non-IID ones, and scenarios with a large number of clients (e.g., $\\geq 1000$ ). In all these settings, our AFL consistently performs competitively while existing FL techniques encounter various obstacles. Our codes are available at https://github.com/ZHUANGHP/Analytic-federated-learning.", + "bbox": [ + 88, + 376, + 485, + 739 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1. Introduction", + "text_level": 1, + "bbox": [ + 91, + 768, + 223, + 784 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Federated learning (FL) [24] aims to collectively train a machine learning model over data silos by aggregating their individually trained weights, while preserving the privacy of their source data. This training paradigm has gained great popularity, particularly in sensitive domains where data privacy is crucial, such as in banks [27, 41] and hospitals [11, 13].", + "bbox": [ + 89, + 794, + 483, + 885 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Conventional FL techniques rely on weight aggregation", + "bbox": [ + 109, + 886, + 482, + 901 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "among clients over multiple rounds of training. The objective is to achieve convergence and approximate its joint-training counterpart, where all clients' data are accessible in a single location. To accomplish this, many contributions have been made. One widely recognized method is FedAvg [24]. Relying on a large number of aggregation rounds, the FedAvg employs a simple yet effective weight averaging technique across local clients. Building upon this, various methods have been proposed (e.g., the FedProx [17] and the FedNova [35]), each with its own specific focus within the field of FL.", + "bbox": [ + 511, + 345, + 906, + 496 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "However, training a model from scratch via FL can be computationally intensive and demanding in terms of communication bandwidth, especially with large models and numerous participating clients. Several efforts have explored utilizing pre-trained models to mitigate these challenges [23, 45]. Typically, this involves freezing the backbone and only updating and sharing lightweight parameters, such as prototypes [29, 30] or prompts [7, 40], to reduce the substantial training costs.", + "bbox": [ + 511, + 502, + 908, + 638 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Although leveraging pre-trained models can circumvent the high costs associated with training backbones from scratch, existing FL techniques with pre-trained models are primarily based on a gradient-based iterative approach, necessitating iterative optimization on each client and multi-round aggregation across clients. The gradient-based optimization used in existing FL faces various challenges and imposes several constraints. 
These challenges include, but are not limited to: 1) Data heterogeneity, where the data across clients is not independent and identically distributed (non-IID), even with mutually exclusive data categories across different clients (i.e., pathological distribution), 2) Large client number, where the aggregation involving a significant number of clients (i.e., $\\geq 1000$ ) can lead to substantial performance degradation in FL systems as the client count increases [16], 3) Slow convergence, where FL methods may struggle to converge within limited communication rounds, especially", + "bbox": [ + 511, + 643, + 910, + 901 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "CVF", + "bbox": [ + 106, + 2, + 181, + 42 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore.", + "bbox": [ + 236, + 0, + 810, + 46 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "4988", + "bbox": [ + 482, + 945, + 514, + 955 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "in severe non-IID scenarios, and 4) High communication cost, where multi-round aggregation in existing FL methods escalates the communication costs associated with parameter sharing between clients and servers.", + "bbox": [ + 89, + 90, + 480, + 151 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In this paper, we propose a new FL training framework named analytic federated learning (AFL), which provides a single-round aggregation for federated learning with pretrained models. The AFL draws inspiration from analytic learning (AL) [8, 32, 48]—a gradient-free technique with a closed-form solution obtained from reshaping the network training into a linearized formulation. The AL paradigm offers several benefits over gradient-based techniques. First, it is gradient-free, thereby avoiding gradient-related issues, such as vanishing and exploding gradients. Second, the analytical solution frees AL from convergence issues during training. Also, the AL requires only one visit to the dataset while gradient-based mechanisms usually need hundreds of epochs or more. These properties are attractive in FL to accomplish fast convergence and low communication cost. Here, we are able to incorporate this mechanism into the FL domain to overcome limitations inherent in gradient-based techniques. Our contributions are summarized as follows:", + "bbox": [ + 91, + 154, + 485, + 425 + ], + "page_idx": 1 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- We propose the AFL, a gradient-free FL framework with analytical (closed-form) solutions. These analytical solutions apply both in the local client training stage and the aggregation stage.", + "- In the local stage, we adopt a pre-trained network to extract input embeddings, and formulate the training in each client as a localized linear regression problem. This leads to a least squares (LS) based one-epoch client training, eliminating the need for multi-epoch training and enabling fast convergence in local clients.", + "- In the aggregation stage, we derive an absolute aggregation (AA) law in analytical form, optimally establishing a single-round aggregation. That is, the aggregation happens only once, avoiding multiple FL rounds that bring high communication costs. 
Additionally, in scenarios where the AA law becomes suboptimal due to a large number of clients, we introduce a regularization intermediary (RI) process to restore its optimality.", + "- Owing to analytical solutions, the AFL exhibits a property of invariance to data partitioning. This means that regardless of how the full dataset is distributed (e.g., non-IID) among local clients, the result remains identical. This property spawns several appealing characteristics: i) Data heterogeneity invariance, where the result is invariant to arbitrary heterogeneous data partition scenarios. ii) Client-number invariance, which produces identical results regardless of the number of clients involved.", + "- We conduct extensive experiments spanning diverse scenarios, including a wide variety of non-IID partitions and large client number (up to 1000) settings. Our AFL consistently showcases competitive performance throughout all" + ], + "bbox": [ + 89, + 431, + 482, + 900 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "these settings when compared with other methods.", + "bbox": [ + 527, + 90, + 859, + 106 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2. Related Works", + "text_level": 1, + "bbox": [ + 513, + 132, + 663, + 148 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In this section, we review existing related FL literature. Additionally, we explore various AL techniques and their variants to reveal their underlying mechanisms.", + "bbox": [ + 511, + 162, + 906, + 208 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2.1. Federated Learning Methods", + "text_level": 1, + "bbox": [ + 511, + 231, + 774, + 247 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Following the FedAvg [24], to address non-IID issues in FL, various methods have been proposed. One common approach involves assessing the significance of parameters during aggregation to ensure that local updates do not diverge substantially from the global model. For instance, the FedProx [17] restricts the size of local updates, while the FedNova [35] employs a normalized averaging method to eliminate target inconsistency while maintaining fast error convergence. These methods are frequently used as baselines, and we compare our results against them in our experiments. Another set of methods focuses on determining adaptive aggregation weights obtained from multiple clients. The Fed-LAW [19] learns these weights to achieve a global model with state-of-the-art performance, though it requires a proxy dataset to learn the weights, making the results sensitive to the selection of the proxy dataset. To address this sensitivity, the FedCDA [34] proposes a proxy-free method that reduces each client's deviation from the local models of other participants and selects a local model from its multiple recent models acquired over several rounds.", + "bbox": [ + 511, + 257, + 906, + 559 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Some methods address the parameter order mismatch issue across clients, which can occur during global aggregation. The Fed2 [43] designs a model structure adaptation method to ensure explicit feature allocation across different network structures. Similarly, the method in [18] seeks a position-aware neuron to fuse position-related values (i.e., position encodings) into neuron outputs. 
Distillation methods [2, 9, 33, 38] represent another branch, where the average of logits from client models is used for the local model aggregation, thereby enhancing generalization. The method in [21] pioneers applying knowledge distillation on the server side, transferring knowledge from multiple local models to the global model using an unlabeled proxy dataset. To overcome the limitation of using a proxy dataset, recent studies such as [47] and [44] suggest substituting the proxy dataset with generated data.", + "bbox": [ + 511, + 564, + 908, + 790 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Existing FL techniques have several significant drawbacks, including challenges with data heterogeneity, large client numbers, convergence issues and high communication costs. Our AFL framework addresses these issues by utilizing a gradient-free, closed-form analytic learning approach, avoiding gradient-related problems (e.g., multi-epoch training, convergence issues and multi-round aggregation).", + "bbox": [ + 511, + 795, + 908, + 902 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "4989", + "bbox": [ + 482, + 945, + 514, + 955 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2.2. Analytic Learning", + "text_level": 1, + "bbox": [ + 91, + 90, + 267, + 107 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "The AL has been developed as a strategy to address issues associated with gradient-based updates, such as gradient vanishing/exploding, divergence during iteration, and long training time due to multi-epoch training. The AL is also referred to as pseudoinverse learning [8] owing to its utilization of matrix inversion. The AL originates from shallow learning, which was investigated prior to the advent of deep networks. For instance, the radial basis network [26] trains parameters using an LS estimation after performing a kernel transformation in the first layer. The multilayer AL [31, 37] introduces a one-epoch training style, using LS techniques to resolve linear segments transformed by the nonlinear network. One instance of this method is the dense pseudoinverse autoencoder [36], which uses LS solutions to combine shallow and deep features to train a stacked autoencoder layer-by-layer.", + "bbox": [ + 89, + 114, + 485, + 357 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Nonetheless, earlier AL techniques train their weights by processing the entire dataset simultaneously, therefore facing a memory challenge. This memory concern is alleviated by the block-wise recursive Moore-Penrose inverse [48], which equivalently replaces the joint learning with a recursive approach. This recursive equivalence property resonates well with the continual learning community. Naturally, analytic continual learning techniques [50, 51, 53] adopt this equivalent characteristic, thrive in handling the catastrophic forgetting problem, and are invariant to the sequential data partition in continual learning. Our AFL draws inspiration from these adaptations, aiming to introduce similar equivalent patterns (e.g., invariance to heterogeneous data) to the FL community.", + "bbox": [ + 89, + 361, + 483, + 556 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3. Analytic Federated Learning", + "text_level": 1, + "bbox": [ + 89, + 595, + 357, + 613 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "In this section, we provide a detailed exposition of AFL derivations, organized into a local training stage and a centralized aggregation stage. 
In the local stage, a pre-trained backbone serves as a feature extractor, facilitating an AL-based network training that can be completed in one epoch. In the aggregation stage, we introduce the AA law, establishing a single-round aggregation. We elaborate on AFL's invariance to data partitioning here, bringing benefits such as data heterogeneity invariance, client-number invariance and fast convergence in a single round. An overview of the proposed AFL paradigm is depicted in Figure 1.", + "bbox": [ + 89, + 623, + 485, + 790 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Prior to further developments, here let $\\mathcal{D} = \\{\\mathcal{D}_k\\}_{k=1}^K$ be the complete training data, where $\\mathcal{D}_k \\sim \\{\\mathcal{X}_{k,i}, y_{k,i}\\}_{i=1}^{N_k}$ denotes an $N_k$ -sample sub-dataset accessible to the $k$ -th client with $\\mathcal{X}_{k,i}$ and $y_{k,i}$ representing the $i$ -th input-label pair. In this paper, all these $K$ clients share the same backbone network $f_{\\mathrm{backbone}}$ parameterized by $\\Theta$ to map their inputs (e.g., $\\mathcal{X}$ ) to embedding vectors.", + "bbox": [ + 89, + 792, + 483, + 902 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.1. Local Stage: Localized Analytic Learning", + "text_level": 1, + "bbox": [ + 511, + 90, + 867, + 107 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "In this stage, each local client's network is trained using the AL technique. This involves transforming the neural network's classification head into a linear regression problem, thereby enabling the derivation of a closed-form LS solution.", + "bbox": [ + 511, + 112, + 906, + 172 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "At the initial step, client $k$ extracts its embedding vector $\\mathbf{x}_{k,j}$ by passing the $j$ -th data $\\mathcal{X}_{k,j}$ from $\\mathcal{D}_k$ through the frozen backbone network $f_{\\mathrm{backbone}}$ , i.e.,", + "bbox": [ + 511, + 172, + 906, + 219 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\boldsymbol{x}_{k,j} = f_{\\text{backbone}}\\left(\\mathcal{X}_{k,j}, \\Theta\\right) \\tag{1}\n$$\n", + "text_format": "latex", + "bbox": [ + 622, + 229, + 906, + 247 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "where $\\pmb{x}_{k,j} \\in \\mathbb{R}^{1 \\times y_{\\mathrm{e}}}$ , with $y_{\\mathrm{e}}$ indicating the embedding length.", + "bbox": [ + 511, + 255, + 906, + 285 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "For the $k$ -th client (with $N_{k}$ samples in $\\mathcal{D}_k$ ), we can stack the extracted embeddings and their corresponding one-hot labels via mapping $\\mathcal{D}_k \\sim \\{\\mathcal{X}_{k,i}, y_{k,i}\\}_{i=1}^{N_k}$ to $\\bar{\\mathcal{D}}_k \\sim \\{\\pmb{X}_k, \\pmb{Y}_k\\}$ , i.e.,", + "bbox": [ + 511, + 286, + 906, + 348 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\boldsymbol{X}_{k} = \\left[ \\begin{array}{c} \\boldsymbol{x}_{k,1} \\\\ \\boldsymbol{x}_{k,2} \\\\ \\vdots \\\\ \\boldsymbol{x}_{k,N_{k}} \\end{array} \\right] = \\left[ \\begin{array}{c} f_{\\text{backbone}}\\left(\\mathcal{X}_{k,1}, \\Theta\\right) \\\\ f_{\\text{backbone}}\\left(\\mathcal{X}_{k,2}, \\Theta\\right) \\\\ \\vdots \\\\ f_{\\text{backbone}}\\left(\\mathcal{X}_{k,N_{k}}, \\Theta\\right) \\end{array} \\right], \\quad \\boldsymbol{Y}_{k} = \\left[ \\begin{array}{c} \\text{onehot}\\left(y_{k,1}\\right) \\\\ \\text{onehot}\\left(y_{k,2}\\right) \\\\ \\vdots \\\\ \\text{onehot}\\left(y_{k,N_{k}}\\right) \\end{array} \\right], \\tag{2}\n$$\n", + "text_format": "latex", + "bbox": [ + 524, + 356, + 906, + 415 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "where the embedding matrix $\\mathbf{X}_k \\in \\mathbb{R}^{N_k \\times y_e}$ , and the label matrix $\\mathbf{Y}_k \\in \\mathbb{R}^{N_k \\times C}$ has $C$ classes. The onehot(*) operator converts the index label $y_{k,j}$ into a $C$ -dimensional one-hot row vector.", + "bbox": [ + 511, + 422, + 905, + 482 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Subsequently, we approach the local client training with the AL technique [8]. Specifically, the target of the $k$ -th client is to linearly map the extracted embeddings onto the one-hot labels by minimizing the mean square error (MSE) loss function as follows.", + "bbox": [ + 511, + 484, + 906, + 559 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal{L}\\left(\\boldsymbol{W}_{k}\\right) = \\left\\| \\boldsymbol{Y}_{k} - \\boldsymbol{X}_{k} \\boldsymbol{W}_{k} \\right\\|_{\\mathrm{F}}^{2}, \\tag{3}\n$$\n", + "text_format": "latex", + "bbox": [ + 612, + 570, + 906, + 588 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "where $\\| *\\|_{\\mathrm{F}}$ indicates the Frobenius norm. This leads to an optimal weight estimation $\\hat{\\mathbf{W}}_k$ , i.e.,", + "bbox": [ + 511, + 597, + 905, + 628 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\hat{\\boldsymbol{W}}_{k} = \\underset{\\boldsymbol{W}_{k}}{\\operatorname{argmin}} \\mathcal{L}(\\boldsymbol{W}_{k}) = \\boldsymbol{X}_{k}^{\\dagger} \\boldsymbol{Y}_{k}, \\tag{4}\n$$\n", + "text_format": "latex", + "bbox": [ + 596, + 637, + 906, + 664 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "where $\\dagger$ denotes the Moore-Penrose (MP) inverse (also referred to as the generalized inverse or pseudoinverse) [8, 48].", + "bbox": [ + 511, + 672, + 906, + 703 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "The solution presented in (4) optimally addresses the MSE loss function described in (3), effectively establishing an LS-based AL solution for localized network learning.", + "bbox": [ + 511, + 704, + 906, + 750 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Why One-epoch Analytic Learning Works. AL methods are generally effective for training shallow networks but face challenges when applied to deeper ones. This can be attributed to the fact that AL techniques are often designed as classifiers rather than end-to-end learning approaches. Despite this limitation, recent research has demonstrated that with a well-trained backbone, the AL performs adequately in various complex scenarios [52]. The practice of using a \\"pre-trained backbone + downstream tasks\\" has become increasingly common. This has allowed the one-epoch AL", + "bbox": [ + 511, + 750, + 908, + 901 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "4990", + "bbox": [ + 482, + 944, + 514, + 955 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/582017bc5eb43d7259bbb83073d5d142d26e7139c39702bfc297a583a1d56d04.jpg", + "image_caption": [ + "Figure 1. An overview of the AFL. During the local stage, each client calculates $C_k^{\\mathrm{r}}$ and $\\hat{W}_k^{\\mathrm{r}}$ based on the same pre-trained backbone and its own dataset. The server obtains $C_{\\mathrm{agg},K}^{\\mathrm{r}}$ and $\\hat{W}_{\\mathrm{agg},K}^{\\mathrm{r}}$ , and then recovers $\\hat{W}$ in the aggregation stage." + ], + "image_footnote": [], + "bbox": [ + 94, + 88, + 903, + 290 + ], + "page_idx": 3 + },
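To make the local stage concrete, the following is a minimal NumPy sketch of the one-epoch client training in Eqs. (1)-(4). It is an illustrative sketch only, not the authors' released implementation: the function name `local_stage` and the random stand-in embeddings are our own assumptions, and a real client would obtain $X_k$ by forwarding its data through the frozen pre-trained backbone.

```python
import numpy as np

def local_stage(X_k, y_k, num_classes):
    """One-epoch analytic training on client k (sketch of Eqs. (2)-(4)).

    X_k: (N_k, y_e) embeddings produced by the shared frozen backbone, Eq. (1).
    y_k: (N_k,) integer class labels.
    """
    Y_k = np.eye(num_classes)[y_k]    # stack one-hot label matrix, Eq. (2)
    W_k = np.linalg.pinv(X_k) @ Y_k   # closed-form LS solution via MP inverse, Eq. (4)
    C_k = X_k.T @ X_k                 # kept for the upcoming aggregation stage
    return W_k, C_k

# Illustrative usage on random stand-in embeddings:
rng = np.random.default_rng(0)
X_k, y_k = rng.normal(size=(128, 512)), rng.integers(0, 10, size=128)
W_k, C_k = local_stage(X_k, y_k, num_classes=10)
```

Note that the whole client update is a single matrix solve over one pass of the local data, which is what removes the multi-epoch loop of gradient-based FL.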
 + { + "type": "text", + "text": "to thrive in various areas such as continual learning [50] and reinforcement learning [22]. Hence, it can also be well incorporated into individual client training.", + "bbox": [ + 88, + 348, + 482, + 393 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Adopting AL is the key to enforcing the upcoming single-round aggregation (by deriving the AA law). The affine characteristic of linear regression in each client opens up new possibilities for exploration in FL. We provide a comprehensive explanation of such an exploration in later sections.", + "bbox": [ + 89, + 393, + 483, + 469 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.2. Aggregation Stage: Absolute Aggregation Law", + "text_level": 1, + "bbox": [ + 89, + 478, + 483, + 494 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "In the aggregation stage, we introduce the Absolute Aggregation (AA) law, a key contribution in AFL. The AA law facilitates a single-round aggregation, i.e., the aggregation happens only once. Additionally, in scenarios where the AA law becomes suboptimal due to a large number of clients, we introduce a regularization intermediary (RI) process to restore its optimality.", + "bbox": [ + 89, + 500, + 483, + 604 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "The MP inverse partition [4] inspires our derivation, which is reformulated into Lemma 1.", + "bbox": [ + 89, + 606, + 483, + 635 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Lemma 1. Let $\\mathbf{X} = \\begin{bmatrix} \\mathbf{X}_u \\\\ \\mathbf{X}_v \\end{bmatrix}$ with $\\mathbf{X}_u$ and $\\mathbf{X}_v$ having full column ranks, and $\\mathbf{X}^{\\dagger}$ follows the partition", + "bbox": [ + 89, + 642, + 483, + 686 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\boldsymbol{X}^{\\dagger} = \\left[ \\begin{array}{ll} \\bar{U} & \\bar{V} \\end{array} \\right], \\tag{5}\n$$\n", + "text_format": "latex", + "bbox": [ + 228, + 696, + 483, + 715 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where", + "bbox": [ + 89, + 727, + 135, + 738 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\left\\{ \\begin{array}{l} \\bar{U} = X_{u}^{\\dagger} - R_{u} C_{v} X_{u}^{\\dagger} + R_{u} C_{v} (C_{u} + C_{v})^{-1} C_{v} X_{u}^{\\dagger} \\\\ \\bar{V} = X_{v}^{\\dagger} - R_{v} C_{u} X_{v}^{\\dagger} + R_{v} C_{u} (C_{u} + C_{v})^{-1} C_{u} X_{v}^{\\dagger} \\end{array} \\right.,\n$$\n", + "text_format": "latex", + "bbox": [ + 102, + 750, + 483, + 787 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\left\\{ \\begin{array}{l} \\boldsymbol{C}_{u} = \\boldsymbol{X}_{u}^{\\top} \\boldsymbol{X}_{u} \\\\ \\boldsymbol{C}_{v} = \\boldsymbol{X}_{v}^{\\top} \\boldsymbol{X}_{v} \\end{array} \\right., \\quad \\left\\{ \\begin{array}{l} \\boldsymbol{R}_{u} = \\boldsymbol{C}_{u}^{-1} \\\\ \\boldsymbol{R}_{v} = \\boldsymbol{C}_{v}^{-1} \\end{array} \\right.. \\tag{6}\n$$\n", + "text_format": "latex", + "bbox": [ + 151, + 797, + 483, + 838 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Proof. 
See Supplementary Materials A.", + "bbox": [ + 89, + 845, + 356, + 859 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Lemma 1 points out that a matrix's MP inverse (e.g., $X^{\\dagger}$ ) can be computed using the MP inverses of its block", + "bbox": [ + 91, + 869, + 483, + 900 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "components (e.g., $\\pmb{X}_u^\\dagger$ and $\\pmb{X}_v^\\dagger$ ). This introduces possibilities for aggregating a weight $\\pmb{W} = \\pmb{X}^\\dagger \\pmb{Y}$ exactly by manipulating its constituent counterparts $\\pmb{W}_u = \\pmb{X}_u^\\dagger \\pmb{Y}_u$ and $\\pmb{W}_v = \\pmb{X}_v^\\dagger \\pmb{Y}_v$ . That is, $\\pmb{W} = f_{\\mathrm{agg}}(\\pmb{W}_u, \\pmb{W}_v)$ , i.e., a single-aggregation strategy.", + "bbox": [ + 511, + 348, + 906, + 424 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Bearing the above intuition in mind, we are able to derive such a single-aggregation strategy in action. This is delivered in Theorem 1.", + "bbox": [ + 511, + 424, + 906, + 468 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Theorem 1. Absolute Aggregation Law: Let $\\hat{W} = X^{\\dagger}Y$ , where $X = \\begin{bmatrix} X_u \\\\ X_v \\end{bmatrix}$ and $Y = \\begin{bmatrix} Y_u \\\\ Y_v \\end{bmatrix}$ with $X_u$ and $X_v$ having full column ranks. Let $\\hat{W}_u = X_u^\\dagger Y_u$ , $\\hat{W}_v = X_v^\\dagger Y_v$ , and we have", + "bbox": [ + 511, + 476, + 906, + 551 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\hat{\\boldsymbol{W}} = \\boldsymbol{\\mathcal{W}}_{u} \\hat{\\boldsymbol{W}}_{u} + \\boldsymbol{\\mathcal{W}}_{v} \\hat{\\boldsymbol{W}}_{v}, \\tag{7}\n$$\n", + "text_format": "latex", + "bbox": [ + 617, + 554, + 906, + 570 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where", + "bbox": [ + 513, + 577, + 558, + 589 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\left\\{ \\begin{array}{l} \\mathcal{W}_{u} = I - R_{u} C_{v} + R_{u} C_{v} \\left(C_{u} + C_{v}\\right)^{-1} C_{v} \\\\ \\mathcal{W}_{v} = I - R_{v} C_{u} + R_{v} C_{u} \\left(C_{u} + C_{v}\\right)^{-1} C_{u} \\end{array} \\right.\n$$\n", + "text_format": "latex", + "bbox": [ + 540, + 598, + 859, + 638 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\left\\{ \\begin{array}{l} \\boldsymbol{C}_{u} = \\boldsymbol{X}_{u}^{\\top} \\boldsymbol{X}_{u} \\\\ \\boldsymbol{C}_{v} = \\boldsymbol{X}_{v}^{\\top} \\boldsymbol{X}_{v} \\end{array} \\right., \\quad \\left\\{ \\begin{array}{l} \\boldsymbol{R}_{u} = \\boldsymbol{C}_{u}^{-1} \\\\ \\boldsymbol{R}_{v} = \\boldsymbol{C}_{v}^{-1} \\end{array} \\right.. \\tag{8}\n$$\n", + "text_format": "latex", + "bbox": [ + 584, + 648, + 906, + 688 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Proof. See Supplementary Materials B.", + "bbox": [ + 511, + 694, + 777, + 709 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "The AA law, as stated in Theorem 1, provides a powerful insight. It establishes an intuition that we can aggregate two independently trained weights, such as $W_{u}$ and $W_{v}$ , into their jointly trained counterpart $W$ . This is achieved in an optimal way without any approximation or parameter tuning.", + "bbox": [ + 511, + 719, + 906, + 795 + ], + "page_idx": 3 + },
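As a sanity check of Theorem 1, the short NumPy sketch below merges two independently solved LS weights via Eqs. (7)-(8) and confirms the result coincides with joint training on the stacked data. Note $\mathcal{W}_u = I - R_u C_v (I - (C_u + C_v)^{-1} C_v)$ is just a factored form of Eq. (8)'s definition; all variable names are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
Xu, Xv = rng.normal(size=(40, 6)), rng.normal(size=(30, 6))  # full column rank w.h.p.
Yu, Yv = rng.normal(size=(40, 3)), rng.normal(size=(30, 3))

Wu, Wv = np.linalg.pinv(Xu) @ Yu, np.linalg.pinv(Xv) @ Yv    # local LS weights, Eq. (4)
Cu, Cv = Xu.T @ Xu, Xv.T @ Xv                                # Eq. (8)
I, S = np.eye(6), np.linalg.inv(Cu + Cv)
Wcal_u = I - np.linalg.inv(Cu) @ Cv @ (I - S @ Cv)           # weighting matrix for Wu
Wcal_v = I - np.linalg.inv(Cv) @ Cu @ (I - S @ Cu)           # weighting matrix for Wv

W_joint = np.linalg.pinv(np.vstack([Xu, Xv])) @ np.vstack([Yu, Yv])
assert np.allclose(Wcal_u @ Wu + Wcal_v @ Wv, W_joint)       # Eq. (7): exact, no tuning
```

The assertion passes exactly (up to floating point), which is the "absolute" in absolute aggregation: no approximation error is introduced by federating.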
 + { + "type": "text", + "text": "Invariance to data partitioning. To a certain extent, the achievement in Theorem 1 attains the ultimate goal of FL, i.e., the equivalence between weights trained in FL fashion and those trained on a centralized joint dataset. Traditionally, FL aims to approximate or converge to the performance of the joint-trained model through multiple rounds of aggregation in a central server. However, the AA law provides a", + "bbox": [ + 511, + 796, + 908, + 900 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "4991", + "bbox": [ + 482, + 944, + 513, + 955 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "more direct path to this goal. It allows for an equivalence (not approximation or convergence) to manifest from a linear regression standpoint.", + "bbox": [ + 89, + 90, + 483, + 135 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Supported by the AA law, the AFL achieves a level of performance that is on par with the joint-trained model, without the need for multiple rounds of aggregation. This direct equivalence could constitute a significant advancement in FL, as it simplifies the process and reduces the heavy computational overhead associated with multiple aggregation rounds.", + "bbox": [ + 89, + 136, + 483, + 226 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Although the AA law in Theorem 1 admits the absolute aggregation between two clients (i.e., $\\hat{W}_u$ and $\\hat{W}_v$ ), this pattern can be trivially extended to the multi-client scenario. To elaborate, without loss of generality, we denote $\\hat{W}_{\\mathrm{agg},k - 1}$ as the accumulated aggregation (AcAg) weight that has aggregated $k - 1$ clients. By rewriting (7), the next aggregation with $\\hat{W}_k$ ($k = 2,\\dots ,K$ ) reads", + "bbox": [ + 89, + 227, + 483, + 332 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\hat{W}_{\\mathrm{agg},k} = \\mathcal{W}_{\\mathrm{agg}} \\hat{W}_{\\mathrm{agg},k-1} + \\mathcal{W}_{k} \\hat{W}_{k}. \\tag{9}\n$$\n", + "text_format": "latex", + "bbox": [ + 160, + 339, + 483, + 358 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "According to (8), let $C_u \\to C_{\\mathrm{agg},k-1}$ , $C_v \\to C_k$ , and we have $C_{\\mathrm{agg},k} = C_{\\mathrm{agg},k-1} + C_k$ . Hence,", + "bbox": [ + 89, + 366, + 483, + 397 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\left\\{ \\begin{array}{l} \\mathcal{W}_{\\mathrm{agg}} = I - C_{\\mathrm{agg},k-1}^{-1} C_{k} \\left(I - C_{\\mathrm{agg},k}^{-1} C_{k}\\right), \\\\ \\mathcal{W}_{k} = I - C_{k}^{-1} C_{\\mathrm{agg},k-1} \\left(I - C_{\\mathrm{agg},k}^{-1} C_{\\mathrm{agg},k-1}\\right), \\end{array} \\right. \\tag{10}\n$$\n", + "text_format": "latex", + "bbox": [ + 109, + 406, + 483, + 457 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where", + "bbox": [ + 89, + 459, + 135, + 470 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\left\\{ \\begin{array}{l} C_{\\mathrm{agg},k} = C_{\\mathrm{agg},k-1} + C_{k} = \\sum_{i=1}^{k} C_{i}, \\\\ C_{i} = X_{i}^{\\top} X_{i}. \\end{array} \\right. \\tag{11}\n$$\n", + "text_format": "latex", + "bbox": [ + 151, + 478, + 483, + 518 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "As such, the joint-trained weight $\\hat{W} = \\hat{W}_{\\mathrm{agg},K}$ is produced by aggregating among individual clients in a pair-wise manner. It is interesting to find that the optimal aggregation is in fact a linear combination between two matrices (e.g., $\\hat{W}_{\\mathrm{agg},k - 1}$ and $\\hat{W}_k$ ) weighted by $\\mathcal{W}_{\\mathrm{agg}}$ and $\\mathcal{W}_k$ respectively.", + "bbox": [ + 89, + 527, + 483, + 604 + ], + "page_idx": 4 + },
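The pairwise law folds naturally over K clients. Below is a hedged looping sketch of Eqs. (9)-(11) (our own code, assuming each client's $X_k$ has full column rank; not the official implementation):

```python
import numpy as np

def aggregate_all(W_list, C_list):
    """Fold K locally trained (W_k, C_k) pairs into the joint weight, Eqs. (9)-(11)."""
    W_agg, C_agg = W_list[0], C_list[0]
    I = np.eye(C_agg.shape[0])
    for W_k, C_k in zip(W_list[1:], C_list[1:]):
        C_new = C_agg + C_k                                    # Eq. (11)
        S = np.linalg.inv(C_new)
        Wcal_agg = I - np.linalg.inv(C_agg) @ C_k @ (I - S @ C_k)
        Wcal_k = I - np.linalg.inv(C_k) @ C_agg @ (I - S @ C_agg)
        W_agg = Wcal_agg @ W_agg + Wcal_k @ W_k                # Eq. (9)
        C_agg = C_new
    return W_agg, C_agg

rng = np.random.default_rng(4)
Xs = [rng.normal(size=(25, 5)) for _ in range(6)]
Ys = [rng.normal(size=(25, 2)) for _ in range(6)]
Ws = [np.linalg.pinv(X) @ Y for X, Y in zip(Xs, Ys)]   # local LS weights, Eq. (4)
Cs = [X.T @ X for X in Xs]
W_agg, _ = aggregate_all(Ws, Cs)
W_joint = np.linalg.pinv(np.vstack(Xs)) @ np.vstack(Ys)
assert np.allclose(W_agg, W_joint)
```

Because each fold telescopes to $(\sum_i C_i)^{-1} \sum_i C_i \hat{W}_i$, the same result is obtained regardless of the order in which clients are folded in.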
 + { + "type": "text", + "text": "Note that the aggregation does NOT necessarily follow a sequential index from 1 to $K$ . We can randomly sample an available client to aggregate with the AcAg weight. This is revealed by the fact that elements in the weighting matrices are somewhat interchangeable (e.g., see (10)).", + "bbox": [ + 89, + 604, + 483, + 680 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.3. RI Process: AA Law in Rank-deficient Scenario", + "text_level": 1, + "bbox": [ + 89, + 686, + 482, + 702 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "As indicated in Theorem 1, the equivalence in the AA law relies on an assumption of a full column rank in each client, e.g., $X_{k}$ having full column rank. This may not hold in the large client number scenario where each client has limited data (e.g., $N_{k} < y_{\\mathrm{e}}$ ), rendering the full-column-rank assumption invalid. To address this, we implement the AA law with an RI process. Specifically, we include a regularization term as an intermediary during the local stage, and remove it after the aggregation stage.", + "bbox": [ + 89, + 709, + 483, + 845 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "To this end, we include a regularization term controlled by $\\gamma$ in the objective function, i.e.,", + "bbox": [ + 89, + 845, + 483, + 875 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal{L}\\left(\\boldsymbol{W}_{k}^{\\mathrm{r}}\\right) = \\left\\| \\boldsymbol{Y}_{k} - \\boldsymbol{X}_{k} \\boldsymbol{W}_{k}^{\\mathrm{r}} \\right\\|_{\\mathrm{F}}^{2} + \\gamma \\left\\| \\boldsymbol{W}_{k}^{\\mathrm{r}} \\right\\|_{\\mathrm{F}}^{2}, \\tag{12}\n$$\n", + "text_format": "latex", + "bbox": [ + 148, + 883, + 483, + 902 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "which rewrites the MP inverse based solution in (4) into", + "bbox": [ + 511, + 90, + 883, + 106 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\hat{\\boldsymbol{W}}_{k}^{\\mathrm{r}} = \\underset{\\boldsymbol{W}_{k}^{\\mathrm{r}}}{\\operatorname{argmin}} \\mathcal{L}\\left(\\boldsymbol{W}_{k}^{\\mathrm{r}}\\right) = \\left(\\boldsymbol{X}_{k}^{\\top} \\boldsymbol{X}_{k} + \\gamma \\boldsymbol{I}\\right)^{-1} \\boldsymbol{X}_{k}^{\\top} \\boldsymbol{Y}_{k}. \\tag{13}\n$$\n", + "text_format": "latex", + "bbox": [ + 521, + 114, + 906, + 142 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Such a solution does not suffer from rank-deficiency issues, as $\\pmb{X}_k^\\top \\pmb{X}_k + \\gamma \\pmb{I}$ is positive-definite and thereby a full-rank matrix.", + "bbox": [ + 511, + 152, + 906, + 196 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "During aggregation, we substitute $\\hat{\\mathbf{W}}_k$ in (4) with $\\hat{\\mathbf{W}}_k^{\\mathrm{r}}$ using (13). This substitution would clearly result in deviations (i.e., $\\hat{\\mathbf{W}}_{\\mathrm{agg},k}^{\\mathrm{r}}\\neq \\hat{\\mathbf{W}}_{\\mathrm{agg},k}$ ), which is depicted in Theorem 2.", + "bbox": [ + 511, + 196, + 906, + 246 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Theorem 2. 
RI-AA Law: The relation between $\\hat{W}_{\\mathrm{agg},k}^{\\mathrm{r}}$ and $\\hat{W}_{\\mathrm{agg},k}$ follows", + "bbox": [ + 511, + 255, + 905, + 290 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\hat{W}_{\\mathrm{agg},k}^{\\mathrm{r}} = \\left(C_{\\mathrm{agg},k}^{\\mathrm{r}}\\right)^{-1} C_{\\mathrm{agg},k} \\hat{W}_{\\mathrm{agg},k}, \\tag{14}\n$$\n", + "text_format": "latex", + "bbox": [ + 589, + 300, + 906, + 320 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where", + "bbox": [ + 513, + 329, + 558, + 342 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\nC_{\\mathrm{agg},k}^{\\mathrm{r}} = C_{\\mathrm{agg},k} + k \\gamma I = \\sum_{i=1}^{k} C_{i}^{\\mathrm{r}}, \\quad C_{i}^{\\mathrm{r}} = X_{i}^{\\top} X_{i} + \\gamma I. \\tag{15}\n$$\n", + "text_format": "latex", + "bbox": [ + 526, + 354, + 906, + 372 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Proof. See Supplementary Materials C.", + "bbox": [ + 513, + 381, + 777, + 397 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Theorem 2 establishes the relation between $\\hat{W}_{\\mathrm{agg},k}^{\\mathrm{r}}$ and $\\hat{W}_{\\mathrm{agg},k}$ , which is a one-to-one mapping, such that $\\hat{W}_{\\mathrm{agg},k}$ can be restored by manipulating $\\hat{W}_{\\mathrm{agg},k}^{\\mathrm{r}}$ , i.e.,", + "bbox": [ + 511, + 406, + 905, + 460 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\begin{aligned} \\hat{W}_{\\mathrm{agg},k} &= \\left(C_{\\mathrm{agg},k}\\right)^{-1} C_{\\mathrm{agg},k}^{\\mathrm{r}} \\hat{W}_{\\mathrm{agg},k}^{\\mathrm{r}} \\\\ &= \\left(C_{\\mathrm{agg},k}^{\\mathrm{r}} - k \\gamma I\\right)^{-1} C_{\\mathrm{agg},k}^{\\mathrm{r}} \\hat{W}_{\\mathrm{agg},k}^{\\mathrm{r}}. \\end{aligned} \\tag{16}\n$$\n", + "text_format": "latex", + "bbox": [ + 566, + 469, + 906, + 512 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "That is, we are able to attain $\\hat{W}_{\\mathrm{agg},k}$ by removing the impact of the regularization term $\\gamma$ to counter the ill-conditioned constraint in the large client number scenario. The implementation of AFL is summarized in Algorithm 1.", + "bbox": [ + 511, + 522, + 906, + 582 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Benefits of Adopting AL in AFL. Inheriting from the AL technique, the AFL admits several merits over its gradient-based counterparts as follows. i) Fast training and convergence: the analytical solutions allow AFL to finish the training and aggregation in one shot, exhibiting fast training and convergence. Also, the analytical solutions free the AFL from any convergence issue as no iterative-search based action is executed. ii) Low communication cost: the single-round aggregation only requires a single communication between the clients and the server, which significantly reduces the communication cost. iii) Data heterogeneity invariance: the invariance to data partitioning does not pose any constraint on the data partition strategy. That is, the equivalence holds across all possible data-heterogeneous scenarios (e.g., see Section 4.2). iv) Client-number invariance: for a complete dataset $\\mathcal{D}$ partitioned among $K$ clients (i.e., $\\{\\mathcal{D}_k\\}_{k=1}^K$ ), according to Theorem 1 and (9), when the weights from all $K$ clients are aggregated, the resulting weight is identical to that trained on the full dataset $\\mathcal{D}$ . To validate the AA law with the RI process, we conduct an experiment on a dummy dataset and show the invariance (see Supplementary Materials D).", + "bbox": [ + 511, + 583, + 908, + 901 + ], + "page_idx": 4 + },
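A minimal numerical check of the RI process is sketched below (illustrative code under our own naming, assuming the pooled data is full column rank even though each client alone is rank-deficient). For brevity it forms the aggregated regularized weight directly from the accumulated statistics, which the AA recursion telescopes to; Eq. (16) then strips the accumulated $K\gamma I$ term.

```python
import numpy as np

def restore(W_agg_r, C_agg_r, K, gamma):
    """Remove the accumulated regularization K*gamma*I, Eq. (16)."""
    d = C_agg_r.shape[0]
    return np.linalg.solve(C_agg_r - K * gamma * np.eye(d), C_agg_r @ W_agg_r)

rng = np.random.default_rng(2)
gamma, K, y_e, C = 1.0, 8, 32, 5
Xs = [rng.normal(size=(10, y_e)) for _ in range(K)]        # N_k = 10 < y_e = 32
Ys = [rng.normal(size=(10, C)) for _ in range(K)]
C_agg_r = sum(X.T @ X + gamma * np.eye(y_e) for X in Xs)   # Eq. (15)
W_agg_r = np.linalg.solve(C_agg_r, sum(X.T @ Y for X, Y in zip(Xs, Ys)))
W_joint = np.linalg.pinv(np.vstack(Xs)) @ np.vstack(Ys)    # 80 pooled samples
assert np.allclose(restore(W_agg_r, C_agg_r, K, gamma), W_joint)
```

Each client here has fewer samples than embedding dimensions, so Eq. (4) alone would not satisfy Theorem 1's rank assumption; the $\gamma I$ intermediary keeps every local solve well-posed while the final restoration recovers the exact joint LS weight.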
+ { + "type": "text", + "text": "Benefits of Adopting AL in AFL. Inheriting from the AL technique, the AFL admits several merits over its gradient-based counterparts as follows. i) Fast training and convergence: the analytical solutions allow AFL to finish the training and aggregation in one shot, exhibiting fast training and convergence. Also, the analytical solutions free the AFL from any convergence issue as no iterative search is executed. ii) Low communication cost: the single-round aggregation only requires a single communication between the clients and the server, which significantly reduces the communication cost. iii) Data heterogeneity invariance: the invariance to data partitioning does not pose any constraint on the data partition strategy. That is, the equivalence holds across all possible data-heterogeneous scenarios (e.g., see Section 4.2). iv) Client-number invariance: for a complete dataset $\\mathcal{D}$ partitioned among $K$ clients (i.e., $\\{\\mathcal{D}_k\\}_{k=1}^K$ ), according to Theorem 1 and (9), when the weights from all $K$ clients are aggregated, the resulting weight is identical to that trained on the full dataset $\\mathcal{D}$ . To validate the AA law with the RI process, we conduct an experiment on a dummy dataset and show the invariance (see Supplementary Materials D).", + "bbox": [ + 511, + 583, + 908, + 901 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "4992", + "bbox": [ + 482, + 944, + 514, + 955 + ], + "page_idx": 4 + }, + { + "type": "code", + "sub_type": "algorithm", + "code_caption": [], + "code_body": "Algorithm 1 Analytic Federated Learning \nInput: $\\mathcal{D}_k, k = 1, \\dots, K$ , $\\gamma$ , and pre-trained backbone $\\Theta$ . \nServer Executes: \n1. for each client $k$ in parallel do \n2. $\\hat{W}_k^{\\mathrm{r}}, C_k^{\\mathrm{r}} \\gets \\text{Local Stage}(k, \\mathcal{D}_k, \\gamma)$ . \n3. end for \n4. $\\hat{W} \\gets \\text{Aggregation Stage}(\\{\\hat{W}_k^{\\mathrm{r}}, C_k^{\\mathrm{r}}, \\gamma\\}_{k=1}^K)$ . \nLocal Stage: client $k$ with $\\mathcal{D}_k$ and $\\gamma$ . \n1. Get embedding and label matrices using (2). \n2. Obtain weight matrix $\\hat{W}_k^{\\mathrm{r}}$ by (13). \n3. Get $C_k^{\\mathrm{r}} = X_k^\\top X_k + \\gamma I$ . \n4. Return $\\hat{W}_k^{\\mathrm{r}}, C_k^{\\mathrm{r}}$ . \nAggregation Stage: with $\\{\\hat{W}_k^{\\mathrm{r}}, C_k^{\\mathrm{r}}, \\gamma\\}_{k=1}^K$ . \n1. Initialize $\\hat{W}_{\\text{agg},0}^{\\mathrm{r}} = 0$ , $C_{\\text{agg},0}^{\\mathrm{r}} = 0$ . \n2. for $k$ in range(K): \n i) Aggregate $\\hat{W}_{\\text{agg},k}^{\\mathrm{r}}$ with $\\hat{W}_k^{\\mathrm{r}}$ using (9). \n ii) Update $C_{\\text{agg},k}^{\\mathrm{r}} = C_{\\text{agg},k-1}^{\\mathrm{r}} + C_k^{\\mathrm{r}}$ . \n3. end for \n4. Restore $\\hat{W} = \\hat{W}_{\\text{agg},K}$ with $\\hat{W}_{\\text{agg},K}^{\\mathrm{r}}$ in (16).", + "bbox": [ + 91, + 89, + 483, + 388 + ], + "page_idx": 5 + },
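+ { + "type": "text", + "text": "Algorithm 1 can be exercised end-to-end on a dummy dataset to observe the invariance to data partitioning, in the spirit of the validation in Supplementary Materials D. The sketch below reuses the hypothetical local_stage and aggregation_stage helpers from the previous sketches; the sizes, seed, and tolerance are arbitrary choices for illustration." + }, + { + "type": "code", + "sub_type": "code", + "code_caption": [ + "An end-to-end sanity check of Algorithm 1 on synthetic data (illustrative only)." + ], + "code_body": "import numpy as np\n\n# Sanity check: the federated weight should match joint training on the pooled data.\nrng = np.random.default_rng(0)\nN, d_e, C, K, gamma = 1200, 32, 10, 6, 1.0\nX = rng.normal(size=(N, d_e))                  # stand-in for backbone embeddings\nY = np.eye(C)[rng.integers(0, C, size=N)]      # one-hot labels\nparts = np.array_split(rng.permutation(N), K)  # an arbitrary (even non-IID) split\n\nouts = [local_stage(X[idx], Y[idx], gamma) for idx in parts]\nW_fed = aggregation_stage([w for w, _ in outs], [c for _, c in outs], gamma)\nW_joint = np.linalg.lstsq(X, Y, rcond=None)[0]  # joint training on the full data\nassert np.allclose(W_fed, W_joint, atol=1e-6)   # invariance to data partitioning" + },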
+ { + "type": "text", + "text": "An AL Branch of Federated Learning. The AFL incorporates the AL technique and can be considered an AL branch within the FL context. The AL and its recursive formulation have demonstrated remarkable adaptability in continual learning utilizing a well-trained backbone [52, 54]. In this work, this intuition is extended to the FL field through non-trivial derivations.", + "bbox": [ + 89, + 417, + 483, + 522 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4. Experiments", + "text_level": 1, + "bbox": [ + 89, + 537, + 223, + 555 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "In this section, we provide extensive experiments to validate the proposed AFL, including comparisons with state-of-the-art FL methods and analyses under various settings. The training time and an ablation study of the regularization are also investigated.", + "bbox": [ + 89, + 564, + 483, + 625 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.1. Comparison with FL Techniques", + "text_level": 1, + "bbox": [ + 89, + 635, + 377, + 651 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "We conduct comparisons with state-of-the-art FL methods, including FedAvg [24], FedProx [17], MOON [15], FedGen [46], FedDyn [1], FedNTD [14] and FedDisco [42], under various non-IID settings.", + "bbox": [ + 89, + 657, + 483, + 717 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Dataset and Model. We validate the baselines and our proposed AFL on 3 popular benchmark datasets in FL: CIFAR-10 [12], CIFAR-100 [12] and Tiny-ImageNet [25]. For all datasets, we use a ResNet-18 [10] pretrained on ImageNet-1k [5] as the backbone. We freeze the backbones in all FL methods.", + "bbox": [ + 89, + 719, + 483, + 808 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Data Partition. For simulating non-IID scenarios in FL, we specify two non-IID data partition methods, including Latent Dirichlet Allocation [20] (LDA, denoted as NIID-1) and Sharding [20] (denoted as NIID-2). In the LDA setting, data assigned to each client is forced to satisfy the Dirichlet distribution, and the degree of data heterogeneity is controlled", + "bbox": [ + 89, + 810, + 483, + 900 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "by parameter $\\alpha$ . Smaller $\\alpha$ leads to a more heterogeneous data distribution. In the Sharding strategy, the data is sorted by labels and divided into same-sized shards, and $s$ , the number of shards per client, controls the heterogeneity. When $s$ takes a smaller value, the data is more heterogeneous. We choose $\\alpha = 0.1, 0.01$ and $s = 10, 4$ for CIFAR-100 and Tiny-ImageNet. For CIFAR-10, $\\alpha$ is set to 0.1 and 0.01, and $s$ is set to 4 and 2. Most existing methods are validated on data partitions of $\\alpha = 0.3$ to 1.0 and $s = 10$ [14, 19]. Here we provide more challenging settings to validate the robustness under extremely heterogeneous cases.", + "bbox": [ + 511, + 90, + 903, + 257 + ], + "page_idx": 5 + },
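+ { + "type": "text", + "text": "For illustration, the LDA (NIID-1) partition can be simulated with a short routine. This NumPy sketch is only indicative of the common Dirichlet-based recipe; the exact sampler used in the experiments follows [20] and may differ in detail, and the function name is an assumption." + }, + { + "type": "code", + "sub_type": "code", + "code_caption": [ + "An indicative sketch of a Dirichlet (NIID-1) data partition; not the exact sampler of [20]." + ], + "code_body": "import numpy as np\n\ndef lda_partition(labels, n_clients, alpha, seed=0):\n    # For every class, split its sample indices across clients with\n    # proportions drawn from Dir(alpha); smaller alpha gives a more skewed split.\n    rng = np.random.default_rng(seed)\n    client_indices = [[] for _ in range(n_clients)]\n    for c in np.unique(labels):\n        idx = rng.permutation(np.flatnonzero(labels == c))\n        props = rng.dirichlet(alpha * np.ones(n_clients))\n        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)\n        for client, part in enumerate(np.split(idx, cuts)):\n            client_indices[client].extend(part.tolist())\n    return client_indices" + },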
+ { + "type": "text", + "text": "Implementation Details. In all the experiments, we use 100 clients for each method and use the same partitioned dataset within experiments of the same data setting. We implement the AFL with a $\\gamma = 1$ RI process (any $\\gamma$ would suffice, see the ablation study). Each experiment setting is run 3 times, and the mean and standard deviation of the best top-1 classification accuracy during training are reported. The implementation details of the compared gradient-based methods can be found in Supplementary Materials E.", + "bbox": [ + 511, + 260, + 906, + 396 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Experimental Results. We report the results of the compared methods under the settings of NIID-1 and NIID-2 in Table 1. As shown in the table, except for slightly weaker results than those of FedDyn in the NIID-2 setting, the AFL obtains very competitive performance compared with other methods across various settings. The degree of data heterogeneity does not affect the AFL at all. For instance, the accuracy remains $80.75\\%$ on CIFAR-10 for various NIID-1 and NIID-2 settings. Although slight differences could occur among various settings, they barely impact the classification accuracy (an AL property indicated in [48]). The same pattern repeats on CIFAR-100 and Tiny-ImageNet. Uniquely, the AFL obtains identical results for all 3 repeated runs, i.e., the standard deviations are zero! This is because the AFL does not introduce any stochastic element, so the repeated computations in each run are naturally equivalent to one another, hence the zero standard deviation.", + "bbox": [ + 511, + 398, + 908, + 655 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Notably, when introducing a pre-trained backbone, the compared methods yield similar results and even FedAvg can become a competitive baseline. The reason could be that, when incorporating a pre-trained backbone, the FL process can be more stable with a better starting point, and methods stabilizing the FL process could be less effective. This phenomenon has also been witnessed in other FL studies [3, 7]. However, other FL methods still experience performance reductions under severe non-IID scenarios. For example, FedDyn performs relatively well (e.g., $57.55\\%$ ) under NIID-1 $(\\alpha = 0.1)$ on CIFAR-100 but undergoes a performance degradation (to $36.12\\%$ ) when $\\alpha = 0.01$ . This pattern is rather consistent in other compared methods, such as FedAvg $(56.62\\% \\rightarrow 32.99\\%)$ , FedProx $(56.45\\% \\rightarrow 33.37\\%)$ , and MOON $(56.58\\% \\rightarrow 33.34\\%)$ , and is also true across all datasets. The performance distributions regarding NIID-2 for", + "bbox": [ + 511, + 657, + 908, + 900 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "4993", + "bbox": [ + 482, + 944, + 514, + 955 + ], + "page_idx": 5 + }, + { + "type": "table", + "img_path": "images/c0e640a9fafed99127747bdbb4c3b82462aa35dc25061a2dabd1c1ed824489c3.jpg", + "table_caption": [ + "Table 1. The top-1 accuracy (%) of compared methods under two non-IID settings. Settings controlled by $\\alpha$ and $s$ are NIID-1 and NIID-2 respectively. The data is reported as mean and standard deviation over 3 runs. Results in bold are the best within the compared methods in the same setting." + ], + "table_footnote": [], + "table_body": "
<table><tr><td>Dataset</td><td>Setting</td><td>FedAvg</td><td>FedProx</td><td>MOON</td><td>FedGen</td><td>FedDyn</td><td>FedNTD</td><td>FedDisco</td><td>AFL</td></tr>
<tr><td rowspan=4>CIFAR-10</td><td>α = 0.1</td><td>64.02±0.18</td><td>64.07±0.08</td><td>63.84±0.03</td><td>64.14±0.24</td><td>64.77±0.11</td><td>64.64±0.02</td><td>63.83±0.08</td><td>80.75±0.00</td></tr>
<tr><td>α = 0.05</td><td>60.52±0.39</td><td>60.39±0.09</td><td>60.28±0.17</td><td>60.65±0.19</td><td>60.35±0.54</td><td>61.16±0.33</td><td>59.90±0.05</td><td>80.75±0.00</td></tr>
<tr><td>s = 4</td><td>68.47±0.13</td><td>68.46±0.08</td><td>68.47±0.15</td><td>68.24±0.28</td><td>73.50±0.11</td><td>70.24±0.11</td><td>65.04±0.11</td><td>80.75±0.00</td></tr>
<tr><td>s = 2</td><td>57.81±0.03</td><td>57.61±0.12</td><td>57.72±0.15</td><td>57.02±0.18</td><td>64.07±0.09</td><td>58.77±0.18</td><td>58.78±0.02</td><td>80.75±0.00</td></tr>
<tr><td rowspan=4>CIFAR-100</td><td>α = 0.1</td><td>56.62±0.12</td><td>56.45±0.22</td><td>56.58±0.02</td><td>56.48±0.17</td><td>57.55±0.08</td><td>56.60±0.14</td><td>55.79±0.04</td><td>58.56±0.00</td></tr>
<tr><td>α = 0.01</td><td>32.99±0.20</td><td>33.37±0.09</td><td>33.34±0.11</td><td>33.09±0.09</td><td>36.12±0.08</td><td>32.59±0.21</td><td>25.72±0.08</td><td>58.56±0.00</td></tr>
<tr><td>s = 10</td><td>55.76±0.13</td><td>55.80±0.16</td><td>55.70±0.25</td><td>60.93±0.17</td><td>61.09±0.09</td><td>54.69±0.15</td><td>54.65±0.09</td><td>58.56±0.00</td></tr>
<tr><td>s = 5</td><td>48.33±0.15</td><td>48.29±0.14</td><td>48.34±0.19</td><td>48.12±0.06</td><td>59.34±0.11</td><td>47.00±0.19</td><td>45.86±0.18</td><td>58.56±0.00</td></tr>
<tr><td rowspan=4>Tiny-ImageNet</td><td>α = 0.1</td><td>46.04±0.27</td><td>46.47±0.23</td><td>46.21±0.14</td><td>46.27±0.14</td><td>47.72±0.22</td><td>46.17±0.16</td><td>47.48±0.06</td><td>54.67±0.00</td></tr>
<tr><td>α = 0.01</td><td>32.63±0.19</td><td>32.26±0.14</td><td>32.38±0.20</td><td>32.33±0.14</td><td>35.19±0.06</td><td>31.86±0.44</td><td>27.15±0.10</td><td>54.67±0.00</td></tr>
<tr><td>s = 10</td><td>39.06±0.26</td><td>38.97±0.23</td><td>38.79±0.14</td><td>38.82±0.16</td><td>41.36±0.06</td><td>37.55±0.09</td><td>38.86±0.12</td><td>54.67±0.00</td></tr>
<tr><td>s = 5</td><td>29.66±0.19</td><td>29.17±0.16</td><td>29.24±0.30</td><td>29.37±0.25</td><td>35.18±0.18</td><td>29.01±0.14</td><td>27.72±0.18</td><td>54.67±0.00</td></tr></table>
", + "bbox": [ + 94, + 141, + 906, + 325 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "these compared methods resemble those in NIID-1, where smaller $s$ values invite performance degradation among existing FL counterparts. For instance, the FedDyn exhibits $73.50\\% \\rightarrow 64.07\\%$ for $s = 4 \\rightarrow 2$ on CIFAR-10 while the AFL obtains competitive and identical results (e.g., $80.75\\%$ ).", + "bbox": [ + 89, + 349, + 485, + 428 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.2. Analysis on Data Partition", + "text_level": 1, + "bbox": [ + 89, + 438, + 330, + 454 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Here we provide broaden non-IID partitions to demonstrate AFL's invariance to data partitioning. This includes varying the client number and the non-IID degree. We also provide the IID partition results.", + "bbox": [ + 89, + 460, + 483, + 522 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Client-number Invariance. We compare our AFL and the FedAvg under NIID-1 setting on CIFAR-100 and TinyImageNet with $\\alpha = 0.1$ , and vary the number of clients from 100 to 500 and 1000. The results are shown in Figure 2. We observe that the AFL keeps an identical performance when scaling the number of clients, while the FedAvg experiences a performance decline along the increasing number (e.g., $56.57\\% \\rightarrow 41.01\\%$ for $K = 100 \\rightarrow 1000$ on CIFAR-100). This provides a strong evidence to support the invariance to data partitioning in our AFL. It also showcases the capability of pushing the AFL to large-scale client training scenario without any performance compromise.", + "bbox": [ + 89, + 523, + 483, + 705 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Data Heterogeneity Invariance. Here, we fix the client number to 100 and partition the CIFAR-100 under the setting of NIID-1 with $\\alpha = 0.005, 0.01, 0.1, 1$ , including the IID setting as well. We report the results of AFL and FedAvg in Table 2. The FedAvg suffers from more accuracy losses (e.g., $57.72\\% \\to 24.74\\%$ for $\\alpha = 0.1 \\to 0.005$ ) as the data heterogeneity grows higher. Under the IID partition, the FedAvg receives its best performance (i.e., $57.89\\%$ ), which is still less competitive than our AFL (i.e., $58.56\\%$ ). On the other hand, AFL obtains identical results (i.e., $58.56\\%$ ) across various settings, including non-IID and IID ones. This is another strong proof of the AA law indicating the weight-invariant property of AFL. Our AFL is invariant to any degree of data heterogeneity, leading to unchanged performance in all possible data heterogeneous partition scenarios, even in extreme data heterogeneous cases (e.g., $\\alpha = 0.005$ ).", + "bbox": [ + 511, + 349, + 908, + 592 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/5711b3cc21ff2b22d714021441aa998ba1c7655aff8d6fb580dc976d8797ccf8.jpg", + "table_caption": [ + "Table 2. The top-1 classification accuracy $(\\%)$ of AFL and FedAvg under different data heterogeneity." + ], + "table_footnote": [], + "table_body": "
<table><tr><td>Acc. (%)</td><td>α = 0.005</td><td>α = 0.01</td><td>α = 0.1</td><td>α = 1</td><td>IID</td></tr>
<tr><td>FedAvg</td><td>24.74</td><td>33.09</td><td>56.57</td><td>57.72</td><td>57.89</td></tr>
<tr><td>AFL</td><td>58.56</td><td>58.56</td><td>58.56</td><td>58.56</td><td>58.56</td></tr></table>
", + "bbox": [ + 514, + 647, + 906, + 696 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/ff5af59812f9babf3ac3fc8c43a0eba868f57c76cb75c5f8153422642c402f1a.jpg", + "image_caption": [ + "(a) CIFAR-100", + "Figure 2. Accuracy over various number of clients." + ], + "image_footnote": [], + "bbox": [ + 91, + 734, + 295, + 873 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/6468592f6c73bab37b6e26c227cf4add993262f230521630c46372ad8edfce85.jpg", + "image_caption": [ + "(b) Tiny-ImageNet" + ], + "image_footnote": [], + "bbox": [ + 299, + 734, + 480, + 872 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.3. Training Efficiency", + "text_level": 1, + "bbox": [ + 511, + 726, + 694, + 742 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Fast Training with Single-round Aggregation. We plot the training evolution curves of accuracy on CIFAR-100 and Tiny-ImageNet in Figure 3 and report the execution time for each method in the legend bars. Compared FL methods take 60s to 100s on CIFAR-100 (100s to 160s on Tiny-ImageNet) to complete an aggregation round, leading to a total training time of 30,000s to 50,000 (50,000s to 80,000s). AFL, however, spends 236.61s on CIFAR-100 and 349.50s on Tiny-ImageNet, achieving approximately $150 \\times -200 \\times$ speedups over its FL counterparts due to only one aggregation.", + "bbox": [ + 511, + 750, + 908, + 902 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "4994", + "bbox": [ + 482, + 945, + 514, + 955 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/ad6d44bb618aeb472a976ec50a48b4bb738d8cfa9a1ce154bb9775abe9f16f1a.jpg", + "image_caption": [ + "Figure 3. Accuracy curves with communication rounds. Average training time is reported in the legends." + ], + "image_footnote": [], + "bbox": [ + 101, + 88, + 287, + 215 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/eae531ba3b2bccc94bba3e40f17de8a8e43fd8a49d8594597a61cc95c530dfa8.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 297, + 88, + 470, + 215 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "4.4. Ablation Study of RI Process", + "text_level": 1, + "bbox": [ + 89, + 281, + 349, + 297 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Here, we conduct an ablation study regarding the RI process by reporting accuracies of AFL with $\\alpha = 0.1$ and $K = 100,500,1000$ under different values of $\\gamma$ . The results without and with the RI process are provided in Table 3. When $\\gamma = 0$ (i.e., no regularization involved), the AFL stops working with $K = 500$ and 1000 due to the ill-conditioned matrix scenario (e.g., $N_{k} < y_{\\mathrm{e}}$ ). Such an ill-conditioned case is avoided by introducing $\\gamma$ . However, the lack of RI process (see left columns in Table 3) could lead to accuracy loss. For instance, for $\\gamma = 100$ , the AFL could suffer a loss of $9\\%$ (i.e., $58.56\\% \\rightarrow 49.62\\%$ ). This is the result of regularization accumulation (see (15)). With the RI process, the AFL obtains an identical result across various $\\gamma$ values. More importantly, this demonstrates that adopting the RI avoids the need to find proper $\\gamma$ values. 
That is, the regularization is a removable intermediary, not a hyperparameter that requires tuning.", + "bbox": [ + 91, + 303, + 483, + 546 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/ca767e7f3342def68ea4666d3e0a9bdee4db06bb301c9af398e5e8d42acd7a42.jpg", + "table_caption": [ + "Table 3. Ablation study of RI under various $\\gamma$ and $K$ . The left/right results are performance w/o and w/ the RI process in (16)." + ], + "table_footnote": [], + "table_body": "
<table><tr><td>Acc. (%)</td><td>γ = 0</td><td>γ = 0.1</td><td>γ = 1</td><td>γ = 10</td><td>γ = 100</td></tr>
<tr><td>K = 100</td><td>58.56 / N/A</td><td>58.54 / 58.56</td><td>58.51 / 58.56</td><td>58.15 / 58.56</td><td>55.77 / 58.56</td></tr>
<tr><td>K = 500</td><td>1.11 / N/A</td><td>58.52 / 58.56</td><td>58.30 / 58.56</td><td>56.72 / 58.56</td><td>51.77 / 58.56</td></tr>
<tr><td>K = 1000</td><td>0.75 / N/A</td><td>58.51 / 58.56</td><td>58.15 / 58.56</td><td>55.77 / 58.56</td><td>49.62 / 58.56</td></tr></table>
", + "bbox": [ + 93, + 599, + 483, + 646 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "4.5. Validation with Different Backbones", + "text_level": 1, + "bbox": [ + 89, + 671, + 406, + 686 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "To explore the effect of different backbones used in the AFL, we extend the experiments with VGG11 [28] and ViT-B-16 [6]. All these backbone are pre-trained in ImageNet-1k and we conduct the experiments under the same setting in Section 4.2. Due to the invariance to data partitioning, we only report one result in single dataset. As shown in the Table 4, with different pre-trained backbones, the AFL can all obtain competitive results.", + "bbox": [ + 89, + 694, + 482, + 814 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "5. Limitations and Future Work", + "text_level": 1, + "bbox": [ + 89, + 829, + 362, + 845 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Utilizing Pre-trained Backbone. The AFL approach is both facilitated and constrained by the requirement of having a well-trained feature extractor. However, this limitation has", + "bbox": [ + 89, + 854, + 482, + 898 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/914575a73a3251bfce1b23d0910bceed49ca3110d2c4d4a4db5abe3868672b3f.jpg", + "table_caption": [ + "Table 4. The results of top-1 accuracy in % of the AFL with different backbones including ResNet-18, VGG11 and ViT-B-16." + ], + "table_footnote": [], + "table_body": "
<table><tr><td>Acc. (%)</td><td>CIFAR-10</td><td>CIFAR-100</td><td>Tiny-ImageNet</td></tr>
<tr><td>ResNet-18</td><td>80.75</td><td>58.56</td><td>54.67</td></tr>
<tr><td>VGG11</td><td>82.72</td><td>60.43</td><td>54.73</td></tr>
<tr><td>ViT-B-16</td><td>93.92</td><td>75.45</td><td>82.02</td></tr></table>
", + "bbox": [ + 542, + 128, + 877, + 188 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "been significantly mitigated by the emergence of reusing pretrained models for new tasks. This \"pre-trained backbone + downstream task\" paradigm has become a standard practice in numerous deep learning domains, offering improved generalization and reduced computational costs. FL can further enhance this paradigm and we validate that collaboration can still be beneficial with pre-trained backbones in Supplementary Materials F. The proposal of the AFL aligns with these recent research trends, making it a sensible FL advancement.", + "bbox": [ + 511, + 213, + 906, + 348 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Partially Participating and Stragglers. The AFL formulates a single-round aggregation for FL systems, promoting rapid convergence and reducing communication overhead. However, challenges arise when clients engage partially or when stragglers impede progress. Since clients can only contribute to the aggregation after finishing local computations, the AFL needs to wait for all the clients. This potentially hampers the AFL's overall efficiency and inspires us to further refine the AFL to address these issues.", + "bbox": [ + 511, + 348, + 908, + 483 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Linear Assumptions of AFL. The AFL is established upon linear classifiers and may be less effective with nonlinear data distribution. To address this, AFL can incorporate non-linear projections including non-linear activations or kernel functions. Also, for multi-layer model, AFL can formulate local least-square problem at each layer by label projection [49]. These techniques have been utilized in various AL-based work [39] and the AA law holds theoretically. We will conduct a further exploration in future.", + "bbox": [ + 511, + 484, + 908, + 619 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "6. Conclusion", + "text_level": 1, + "bbox": [ + 513, + 633, + 633, + 648 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "In this paper, we introduce a gradient-free FL framework named analytic federated learning (AFL). The AFL unveils analytical solutions both in the local client training stage and the aggregation stage. This leads to one-epoch local training, single-round aggregation, and fast convergence. In particular, the single-round aggregation property is theoretically supported and proved by the well-formulated AA law. Additionally, by introducing the RI process, we re-establish the AFL's optimality which could be compromised in the scenario of rank-deficient with normally a large number of clients. The AFL demonstrates its invariance to data partitioning, a property that allows several appealing FL characteristics such as data heterogeneity invariance and client-number invariance. 
These characteristics are empirically validated through experiments across various settings, where the AFL achieves a consistent and competitive performance.", + "bbox": [ + 511, + 659, + 908, + 900 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "4995", + "bbox": [ + 482, + 945, + 514, + 955 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Acknowledgment", + "text_level": 1, + "bbox": [ + 91, + 90, + 240, + 107 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "This research was supported by the National Natural Science Foundation of China (62306117, 62406114), the Guangzhou Basic and Applied Basic Research Foundation (2024A04J3681, 2023A04J1687), GJYC program of Guangzhou (2024D03J0005), the National Key R & D Project from Minister of Science and Technology (2024YFA1211500), and the Fundamental Research Funds for the Central Universities (2024ZYGXZR074).", + "bbox": [ + 89, + 113, + 485, + 239 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 91, + 253, + 187, + 268 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[1] Durmus Alp Emre Acar, Yue Zhao, Ramon Matas, Matthew Mattina, Paul Whatmough, and Venkatesh Saligrama. Federated learning based on dynamic regularization. In International Conference on Learning Representations, 2021. 6", + "[2] Ilai Bistritz, Ariana Mann, and Nicholas Bambos. Distributed distillation for on-device learning. In Advances in Neural Information Processing Systems, pages 22593-22604. Curran Associates, Inc., 2020. 2", + "[3] Hong-You Chen, Cheng-Hao Tu, Ziwei Li, Han Wei Shen, and Wei-Lun Chao. On the importance and applicability of pretraining for federated learning. In The Eleventh International Conference on Learning Representations, 2023. 6", + "[4] Randall E. Cline. Representations for the generalized inverse of a partitioned matrix. Journal of the Society for Industrial and Applied Mathematics, 12(3):588-600, 1964. 4", + "[5] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248-255, 2009. 6", + "[6] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale, 2021. 8", + "[7] Chun-Mei Feng, Bangjun Li, Xinxing Xu, Yong Liu, Huazhu Fu, and Wangmeng Zuo. Learning federated visual prompt in null space for mri reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 8064-8073, 2023. 1, 6", + "[8] Ping Guo, Michael R Lyu, and NE Mastorakis. Pseudoinverse learning algorithm for feedforward neural networks. Advances in Neural Networks and Applications, pages 321-326, 2001. 2, 3", + "[9] Qiushan Guo, Xinjiang Wang, Yichao Wu, Zhipeng Yu, Ding Liang, Xiaolin Hu, and Ping Luo. Online knowledge distillation via collaborative learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 2", + "[10] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. 
6", + "[11] Georgios A Kaissis, Marcus R Makowski, Daniel Rückert, and Rickmer F Braren. Secure, privacy-preserving and feder-" + ], + "bbox": [ + 93, + 277, + 483, + 900 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "ated machine learning in medical imaging. Nature Machine Intelligence, 2(6):305-311, 2020. 1", + "[12] Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009. 6", + "[13] Rajesh Kumar, Abdullah Aman Khan, Jay Kumar, Zakria, Noorbakhsh Amiri Golilarz, Simin Zhang, Yang Ting, Chengyu Zheng, and Wenyong Wang. Blockchain-federated-learning and deep learning models for Covid-19 detection using ct imaging. IEEE Sensors Journal, 21(14):16301-16314, 2021. 1", + "[14] Gihun Lee, Minchan Jeong, Yongjin Shin, Sangmin Bae, and Se-Young Yun. Preservation of the global knowledge by nottrue distillation in federated learning. In Advances in Neural Information Processing Systems, pages 38461-38474. Curran Associates, Inc., 2022. 6", + "[15] Qinbin Li, Bingsheng He, and Dawn Song. Model-contrastive federated learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10713–10722, 2021. 6", + "[16] Qinbin Li, Yiqun Diao, Quan Chen, and Bingsheng He. Federated learning on non-iid data silos: An experimental study. In 2022 IEEE 38th International Conference on Data Engineering (ICDE), pages 965-978, 2022. 1", + "[17] Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, and Virginia Smith. Federated optimization in heterogeneous networks. In Proceedings of Machine Learning and Systems, pages 429-450, 2020. 1, 2, 6", + "[18] Xin-Chun Li, Yi-Chu Xu, Shaoming Song, Bingshuai Li, Yinchuan Li, Yunfeng Shao, and De-Chuan Zhan. Federated learning with position-aware neurons. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10082-10091, 2022. 2", + "[19] Zexi Li, Tao Lin, Xinyi Shang, and Chao Wu. Revisiting weighted aggregation in federated learning with neural networks. In Proceedings of the 40th International Conference on Machine Learning, pages 19767-19788. PMLR, 2023. 2, 6", + "[20] Tao Lin, Lingjing Kong, Sebastian U Stich, and Martin Jaggi. Ensemble distillation for robust model fusion in federated learning. In Advances in Neural Information Processing Systems, pages 2351-2363. Curran Associates, Inc., 2020. 6", + "[21] Tao Lin, Lingjing Kong, Sebastian U Stich, and Martin Jaggi. Ensemble distillation for robust model fusion in federated learning. In Advances in Neural Information Processing Systems, pages 2351-2363. Curran Associates, Inc., 2020. 2", + "[22] Zichen Liu, Chao Du, Wee Sun Lee, and Min Lin. Locality sensitive sparse encoding for learning world models online. In The Twelfth International Conference on Learning Representations, 2024. 4", + "[23] Shubham Malaviya, Manish Shukla, and Sachin Lodha. Reducing communication overhead in federated learning for pre-trained language models using parameter-efficient finetuning. In Proceedings of The 2nd Conference on Lifelong Learning Agents, pages 456-469. PMLR, 2023. 1", + "[24] Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. 
Communication-Efficient Learning of Deep Networks from Decentralized" + ], + "bbox": [ + 516, + 92, + 906, + 900 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "4996", + "bbox": [ + 482, + 945, + 514, + 955 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, pages 1273-1282. PMLR, 2017. 1, 2, 6", + "[25] Mohammed Ali mnmmoustafa. Tiny imagenet, 2017. 6", + "[26] J. Park and I. W. Sandberg. Universal approximation using radial-basis-function networks. Neural Computation, 3(2): 246-257, 1991. 3", + "[27] Geet Shingi. A federated learning based approach for loan defaults prediction. In 2020 International Conference on Data Mining Workshops (ICDMW), pages 362-368, 2020. 1", + "[28] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition, 2015. 8", + "[29] Yue Tan, Guodong Long, LU LIU, Tianyi Zhou, Qinghua Lu, Jing Jiang, and Chengqi Zhang. Fedproto: Federated prototype learning across heterogeneous clients. Proceedings of the AAAI Conference on Artificial Intelligence, 36(8):8432-8440, 2022. 1", + "[30] Yue Tan, Guodong Long, Jie Ma, LU LIU, Tianyi Zhou, and Jing Jiang. Federated learning from pre-trained models: A contrastive learning approach. In Advances in Neural Information Processing Systems, pages 19332-19344. Curran Associates, Inc., 2022. 1", + "[31] Kar-Ann Toh. Learning from the kernel and the range space. In 2018 IEEE/ACIS 17th International Conference on Computer and Information Science (ICIS), pages 1–6, 2018. 3", + "[32] Kar-Ann Toh. Learning from the kernel and the range space. In the Proceedings of the 17th 2018 IEEE Conference on Computer and Information Science, pages 417–422. IEEE, 2018. 2", + "[33] Haozhao Wang, Yichen Li, Wenchao Xu, Ruixuan Li, Yufeng Zhan, and Zhigang Zeng. DaFKD: Domain-aware federated knowledge distillation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 20412-20421, 2023. 2", + "[34] Haozhao Wang, Haoran Xu, Yichen Li, Yuan Xu, Ruixuan Li, and Tianwei Zhang. FedCDA: Federated learning with cross-rounds divergence-aware aggregation. In The Twelfth International Conference on Learning Representations, 2024. 2", + "[35] Jianyu Wang, Qinghua Liu, Hao Liang, Gauri Joshi, and H. Vincent Poor. Tackling the objective inconsistency problem in heterogeneous federated optimization. In Advances in Neural Information Processing Systems, pages 7611-7623. Curran Associates, Inc., 2020. 1, 2", + "[36] Jue Wang, Ping Guo, and Yanjun Li. DensePILAE: a feature reuse pseudoinverse learning algorithm for deep stacked autoencoder. Complex & Intelligent Systems, pages 1-11, 2021. 3", + "[37] X. Wang, T. Zhang, and R. Wang. Noniterative deep learning: Incorporating restricted boltzmann machine into multilayer random weight neural networks. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 49(7):1299-1308, 2019. 3", + "[38] Guile Wu and Shaogang Gong. Peer collaborative learning for online knowledge distillation. Proceedings of the AAAI Conference on Artificial Intelligence, 35(12):10302-10310, 2021. 2" + ], + "bbox": [ + 91, + 90, + 483, + 898 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[39] Yicheng Xu, Yuxin Chen, Jiahao Nie, Yusong Wang, Huiping Zhuang, and Manabu Okumura. 
Advancing cross-domain discriminability in continual learning of vision-language models. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. 8", + "[40] Fu-En Yang, Chien-Yi Wang, and Yu-Chiang Frank Wang. Efficient model personalization in federated learning via client-specific prompt generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 19159-19168, 2023. 1", + "[41] Wensi Yang, Yuhang Zhang, Kejiang Ye, Li Li, and ChengZhong Xu. Ffd: A federated learning based method for credit card fraud detection. In Big Data - BigData 2019, pages 18-32, Cham, 2019. Springer International Publishing. 1", + "[42] Rui Ye, Mingkai Xu, Jianyu Wang, Chenxin Xu, Siheng Chen, and Yanfeng Wang. FedDisco: Federated learning with discrepancy-aware collaboration. In Proceedings of the 40th International Conference on Machine Learning, pages 39879-39902. PMLR, 2023. 6", + "[43] Fuxun Yu, Weishan Zhang, Zhuwei Qin, Zirui Xu, Di Wang, Chenchen Liu, Zhi Tian, and Xiang Chen. Fed2: Feature-aligned federated learning. In Proceedings of the 27th ACM SIGKDD conference on knowledge discovery & data mining, pages 2066-2074, 2021. 2", + "[44] Lin Zhang, Li Shen, Liang Ding, Dacheng Tao, and Ling-Yu Duan. Fine-tuning global model via data-free knowledge distillation for non-iid federated learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10174-10183, 2022. 2", + "[45] Zhuo Zhang, Yuanhang Yang, Yong Dai, Qifan Wang, Yue Yu, Lizhen Qu, and Zenglin Xu. Fedpetuning: When federated learning meets the parameter-efficient tuning methods of pre-trained language models. In Findings of the Association for Computational Linguistics: ACL 2023, page 9963-9977. Association for Computational Linguistics (ACL), 2023. Annual Meeting of the Association of Computational Linguistics 2023, ACL 2023; Conference date: 09-07-2023 Through 14-07-2023. 1", + "[46] Zhuangdi Zhu, Junyuan Hong, and Jiayu Zhou. Data-free knowledge distillation for heterogeneous federated learning. In Proceedings of the 38th International Conference on Machine Learning, pages 12878-12889. PMLR, 2021. 6", + "[47] Zhuangdi Zhu, Junyuan Hong, and Jiayu Zhou. Data-free knowledge distillation for heterogeneous federated learning. In Proceedings of the 38th International Conference on Machine Learning, pages 12878-12889. PMLR, 2021. 2", + "[48] Huiping Zhuang, Zhiping Lin, and Kar-Ann Toh. Blockwise recursive Moore-Penrose inverse for network learning. IEEE Transactions on Systems, Man, and Cybernetics: Systems, pages 1-14, 2021. 2, 3, 6", + "[49] Huiping Zhuang, Zhiping Lin, and Kar-Ann Toh. Correlation projection for analytic learning of a classification network. Neural Processing Letters, pages 1–22, 2021. 8", + "[50] Huiping Zhuang, Zhenyu Weng, Hongxin Wei, Renchunzi Xie, Kar-Ann Toh, and Zhiping Lin. ACIL: Analytic class incremental learning with absolute memorization and privacy protection. In Advances in Neural Information Processing" + ], + "bbox": [ + 516, + 92, + 906, + 901 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "4997", + "bbox": [ + 482, + 944, + 514, + 955 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Systems, pages 11602-11614. Curran Associates, Inc., 2022. 3, 4", + "[51] Huiping Zhuang, Zhenyu Weng, Run He, Zhiping Lin, and Ziqian Zeng. GKEAL: Gaussian kernel embedded analytic learning for few-shot class incremental task. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7746-7755, 2023. 3", + "[52] Huiping Zhuang, Yizhu Chen, Di Fang, Run He, Kai Tong, Hongxin Wei, Ziqian Zeng, and Cen Chen. GACL: Exemplar-free generalized analytic continual learning. In Advances in Neural Information Processing Systems. Curran Associates, Inc., 2024. 3, 6", + "[53] Huiping Zhuang, Run He, Kai Tong, Ziqian Zeng, Cen Chen, and Zhiping Lin. DS-AL: A dual-stream analytic learning for exemplar-free class-incremental learning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(15):17237-17244, 2024. 3", + "[54] Huiping Zhuang, Yuchen Liu, Run He, Kai Tong, Ziqian Zeng, Cen Chen, Yi Wang, and Lap-Pui Chau. F-OAL: Forward-only online analytic learning with fast training and low memory footprint in class incremental learning. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. 6" + ], + "bbox": [ + 91, + 90, + 483, + 415 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "4998", + "bbox": [ + 482, + 945, + 514, + 955 + ], + "page_idx": 10 + } +] \ No newline at end of file diff --git a/2025/AFL_ A Single-Round Analytic Approach for Federated Learning with Pre-trained Models/1961e1d7-9e21-46e8-8ee1-ddb51cc88578_model.json b/2025/AFL_ A Single-Round Analytic Approach for Federated Learning with Pre-trained Models/1961e1d7-9e21-46e8-8ee1-ddb51cc88578_model.json new file mode 100644 index 0000000000000000000000000000000000000000..0895cf9416faf0274eb6629d3c760bc9ba1fbaea --- /dev/null +++ b/2025/AFL_ A Single-Round Analytic Approach for Federated Learning with Pre-trained Models/1961e1d7-9e21-46e8-8ee1-ddb51cc88578_model.json @@ -0,0 +1,2499 @@ +[ + [ + { + "type": "header", + "bbox": [ + 0.107, + 0.003, + 0.182, + 0.043 + ], + "angle": 0, + "content": "CVF" + }, + { + "type": "header", + "bbox": [ + 0.237, + 0.001, + 0.812, + 0.047 + ], + "angle": 0, + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." 
+ }, + { + "type": "title", + "bbox": [ + 0.143, + 0.13, + 0.856, + 0.175 + ], + "angle": 0, + "content": "AFL: A Single-Round Analytic Approach for Federated Learning with Pre-trained Models" + }, + { + "type": "text", + "bbox": [ + 0.205, + 0.203, + 0.794, + 0.239 + ], + "angle": 0, + "content": "Run He\\(^{1}\\), Kai Tong\\(^{1}\\), Di Fang\\(^{1}\\), Han Sun\\(^{2,3}\\), Ziqian Zeng\\(^{1}\\), Haoran Li\\(^{4}\\), Tianyi Chen\\(^{5}\\), Huiping Zhuang\\(^{1*}\\)" + }, + { + "type": "text", + "bbox": [ + 0.241, + 0.239, + 0.757, + 0.257 + ], + "angle": 0, + "content": "\\(^{1}\\) South China University of Technology \\(^{2}\\) Tsinghua University" + }, + { + "type": "text", + "bbox": [ + 0.199, + 0.257, + 0.8, + 0.275 + ], + "angle": 0, + "content": "3 Beijing National Research Center for Information Science and Technology" + }, + { + "type": "text", + "bbox": [ + 0.218, + 0.275, + 0.78, + 0.293 + ], + "angle": 0, + "content": "4 The Hong Kong University of Science and Technology 5 Microsoft" + }, + { + "type": "text", + "bbox": [ + 0.336, + 0.294, + 0.656, + 0.31 + ], + "angle": 0, + "content": "\\*corresponding: hpzhuang@scut.edu.cn" + }, + { + "type": "title", + "bbox": [ + 0.248, + 0.345, + 0.327, + 0.361 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.377, + 0.486, + 0.74 + ], + "angle": 0, + "content": "In this paper, we introduce analytic federated learning (AFL), a new training paradigm that brings analytical (i.e., closed-form) solutions to the federated learning (FL) with pretrained models. Our AFL draws inspiration from analytic learning—a gradient-free technique that trains neural networks with analytical solutions in one epoch. In the local client training stage, the AFL facilitates a one-epoch training, eliminating the necessity for multi-epoch updates. In the aggregation stage, we derive an absolute aggregation (AA) law. This AA law allows a single-round aggregation, reducing heavy communication overhead and achieving fast convergence by removing the need for multiple aggregation rounds. More importantly, the AFL exhibits a property that invariance to data partitioning, meaning that regardless of how the full dataset is distributed among clients, the aggregated result remains identical. This could spawn various potentials, such as data heterogeneity invariance and client-number invariance. We conduct experiments across various FL settings including extremely non-IID ones, and scenarios with a large number of clients (e.g., \\(\\geq 1000\\)). In all these settings, our AFL constantly performs competitively while existing FL techniques encounter various obstacles. Our codes are available at https://github.com/ZHUANGHP/Analytic-federated-learning." + }, + { + "type": "title", + "bbox": [ + 0.092, + 0.769, + 0.224, + 0.785 + ], + "angle": 0, + "content": "1. Introduction" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.795, + 0.484, + 0.886 + ], + "angle": 0, + "content": "Federated learning (FL) [24] aims to collectively train a machine learning model over data silos by aggregating their individual trained weights, while preserving the privacy of their source data. This training paradigm has received high popularity, particularly in sensitive domains where data privacy is crucial, such as in banks [27, 41] and hospitals [11, 13]." 
+ }, + { + "type": "text", + "bbox": [ + 0.11, + 0.887, + 0.483, + 0.902 + ], + "angle": 0, + "content": "Conventional FL techniques rely on weight aggregation" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.346, + 0.908, + 0.497 + ], + "angle": 0, + "content": "among clients over multiple rounds of training. The objective is to achieve convergence and approximate its joint-training counterpart, where all clients' data are accessible in a single location. To accomplish this, many contributions have been made. One widely recognized method is FedAvg [24]. Relying on a large number of aggregation rounds, the FedAvg employs a simple yet effective weight averaging technique across local clients. Building upon this, various methods have been proposed (e.g., the FedProx [17] and the FedNova [35]), each with its own specific focus within the field of FL." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.503, + 0.909, + 0.639 + ], + "angle": 0, + "content": "However, training a model from scratch via FL can be computationally intensive and demanding in terms of communication bandwidth, especially with large models and numerous participating clients. Several efforts have explored utilizing pre-trained models to mitigate these challenges [23, 45]. Typically, this involves freezing the backbone and only updating and sharing lightweight parameters, such as prototypes [29, 30] or prompts [7, 40], to reduce the substantial training costs." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.645, + 0.911, + 0.902 + ], + "angle": 0, + "content": "Although leveraging pre-trained models can circumvent the high costs associated with training backbones from scratch, existing FL techniques with pre-trained models are primarily based on a gradient-based iterative approach, necessitating iterative optimization on each client and multi-round aggregation across clients. The gradient-based optimization used in the existing FL faces various challenges and imposes several constraints. The faced challenges include, but are not limited to: 1) Data heterogeneity, where the data distribution in each client is not independently identical (non-IID), even with mutually exclusive data categories across different clients (i.e., pathological distribution), 2) Large client number, where the aggregation involving a significant number of clients (i.e., \\(\\geq 1000\\)) can lead to substantial performance degradation in FL systems as the client count increases [16], 3) Slow convergence: where FL methods may struggle to converge within limited communication rounds, especially" + }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.946, + 0.516, + 0.957 + ], + "angle": 0, + "content": "4988" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.092, + 0.482, + 0.152 + ], + "angle": 0, + "content": "in severe non-IID scenarios, and 4) High communication cost, where multi-round aggregation in existing FL methods escalates the communication costs associated with parameter sharing between clients and servers." + }, + { + "type": "text", + "bbox": [ + 0.093, + 0.155, + 0.486, + 0.426 + ], + "angle": 0, + "content": "In this paper, we propose a new FL training framework named analytic federated learning (AFL), which provides a single-round aggregation for federated learning with pretrained models. The AFL draws inspiration from analytic learning [8, 32, 48]—a gradient-free technique with a closed-form solution obtained from reshaping the network training into linearized formulation. 
The AL paradigm receives several benefits over gradient-based techniques. First, it is gradient-free, thereby avoiding gradient-related issues, such as vanishing and exploding gradients. Second, the analytical solution frees AL from convergence issues during training. Also, the AL requires only one visit to the dataset while gradient-based mechanism usually needs hundreds of epochs or beyond. These properties are attractive in FL to accomplish fast convergence and low communication cost. Here, we are able to incorporate this mechanism into the FL domain to overcome limitations inherent in gradient-based techniques. Our contributions are summarized as follows:" + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.433, + 0.483, + 0.493 + ], + "angle": 0, + "content": "- We propose the AFL, a gradient-free FL framework with analytical (closed-form) solutions. These analytical solutions apply both in the local client training stage and the aggregation stage." + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.494, + 0.483, + 0.583 + ], + "angle": 0, + "content": "- In the local stage, we adopt a pre-trained network to harness input embeddings, and formulate the training in each client into a localized linear regression problem. This leads to a least squares (LS) based one-epoch client training, eliminating the need for multi-epoch training and enabling fast convergence in local clients." + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.585, + 0.483, + 0.705 + ], + "angle": 0, + "content": "- In the aggregation stage, we derive an absolute aggregation (AA) law in analytical form, optimally establishing a single-round aggregation. That is, the aggregation happens only once, avoiding multiple FL rounds that bring high communication costs. Additionally, in scenarios where the AA law becomes suboptimal due to a large number of clients, we introduce a regularization intermediary (RI) process to restore its optimality." + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.705, + 0.483, + 0.84 + ], + "angle": 0, + "content": "- Owing to analytical solutions, the AFL exhibits a property that invariance to data partitioning. This means that regardless of how the full dataset is distributed (e.g., nonIID) among local clients, the result remains identical. This property spawns several appealing characteristics: i) Data heterogeneity invariance where the result is invariant to arbitrary heterogeneous data partition scenarios. ii) Client-number invariance, which produces identical results regardless of the number of clients involved." + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.84, + 0.483, + 0.901 + ], + "angle": 0, + "content": "- We conduct extensive experiments spanning diverse scenarios, including a wide variety of non-IID partitions and large client number (up to 1000) settings. Our AFL consistently showcases competitive performance throughout all" + }, + { + "type": "list", + "bbox": [ + 0.091, + 0.433, + 0.483, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.528, + 0.092, + 0.861, + 0.107 + ], + "angle": 0, + "content": "these settings when compared with other methods." + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.133, + 0.664, + 0.149 + ], + "angle": 0, + "content": "2. Related Works" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.164, + 0.908, + 0.209 + ], + "angle": 0, + "content": "In this section, we review existing related FL literature. 
Additionally, we explore various AL techniques and their variants to reveal their underlying mechanisms." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.232, + 0.776, + 0.248 + ], + "angle": 0, + "content": "2.1. Federated Learning Methods" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.258, + 0.908, + 0.56 + ], + "angle": 0, + "content": "Following the FedAvg [24], to address non-IID issues in FL, various methods have been proposed. One common approach involves assessing the significance of parameters during aggregation to ensure that local updates do not diverge substantially from the global model. For instance, the FedProx [17] restricts the size of local updates, while the FedNova [35] employs a normalized averaging method to eliminate target inconsistency while maintaining fast error convergence. These methods are frequently used as baselines, and we compare our results against them in our experiments. Another set of methods focuses on determining adaptive aggregation weights obtained from multiple clients. The Fed-LAW [19] learns these weights to achieve a global model with state-of-the-art performance, though it requires a proxy dataset to learn the weights, making the results sensitive to the selection of the proxy dataset. To address this sensitivity, the FedCDA [34] proposes a proxy-free method that reduces each client's deviation from the local models of other participants and selects a local model from its multiple recent models acquired over several rounds." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.565, + 0.909, + 0.791 + ], + "angle": 0, + "content": "Some methods address the parameter order mismatch issue across clients, which can occur during global aggregation. The Fed2 [43] designs a model structure adaptation method to ensure explicit feature allocation across different network structures. Similarly, the method in [18] seeks a position-aware neuron to fuse position-related values (i.e., position encodings) into neuron outputs. Distillation methods [2, 9, 33, 38] represent another branch, where the average of logits from client models is used for the local model aggregation, thereby enhancing generalization. [21] pioneers to apply knowledge distillation on the server side, transferring knowledge from multiple local models to the global model using an unlabeled proxy dataset. To overcome the limitation of using a proxy dataset, recent studies such as [47] and [44] suggest substituting the proxy dataset with generated data." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.796, + 0.909, + 0.903 + ], + "angle": 0, + "content": "Existing FL techniques have several significant drawbacks, including challenges with data heterogeneity, large client numbers, convergence issues and high communication costs. Our AFL framework addresses these issues by utilizing a gradient-free, closed-form analytic learning approach, avoiding gradient-related problems (e.g., multi-epoch training, convergence issues and multi-round aggregation)." + }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.946, + 0.516, + 0.957 + ], + "angle": 0, + "content": "4989" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.092, + 0.091, + 0.269, + 0.108 + ], + "angle": 0, + "content": "2.2. 
Analytic Learning" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.116, + 0.486, + 0.358 + ], + "angle": 0, + "content": "The AL has been developed as a strategy to address issues associated with gradient-based update, such as gradient vanishing/exploding, divergence during iteration, and long training time due to multi-epoch training. The AL is also referred to as pseudoinverse learning [8] owing to its utilization of matrix inversion. The AL starts from shallow learning, which is investigated prior to the advent of deep networks in the realm of research. For instance, the radial basis network [26] trains parameters using an LS estimation after performing a kernel transformation in the first layer. The multilayer AL [31, 37] comes up with a one-epoch training style, using LS techniques to resolve linear segments transformed by nonlinear network. One instance of this method is the dense pseudoinverse autoencoder [36], which uses LS solutions to combine shallow and deep features to train a stacked autoencoder layer-by-layer." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.362, + 0.485, + 0.558 + ], + "angle": 0, + "content": "Nonetheless, earlier AL techniques train their weights by processing the entire dataset simultaneously, therefore facing memory challenge. This memory concern is alleviated by the block-wise recursive Moore-Penrose inverse [48], which equivalently replaces the joint learning with a recursive approach. This recursive equivalent property echoes well with the continual learning community. Naturally, analytic continual learning techniques [50, 51, 53] adopt this equivalent characteristic, thrive in handling the catastrophic forgetting problem, and are invariant to the sequential data partition in continual learning. Our AFL draws inspiration from these adaptations, aiming to introduce similar equivalent patterns (e.g., invariant to heterogeneous data) to the FL community." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.596, + 0.358, + 0.614 + ], + "angle": 0, + "content": "3. Analytic Federated Learning" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.624, + 0.486, + 0.791 + ], + "angle": 0, + "content": "In this section, we provide a detailed exposition of AFL derivations, organized into a local training stage and a centralized aggregation stage. In the local stage, a pre-trained backbone serves as a feature extractor, facilitating an AL network learning that allows the training to be completed in one epoch. In the aggregation stage, we introduce the AA law, establishing a single-round aggregation. We elaborate on AFL's invariance to data partitioning here, bringing benefits such as data heterogeneity invariance, client-number invariance and fast convergence in a single round. An Overview of the proposed AFL paradigm is depicted in Figure 1." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.793, + 0.485, + 0.903 + ], + "angle": 0, + "content": "Prior to further developments, here let \\(\\mathcal{D} = \\{\\mathcal{D}_k\\}_{k=1}^K\\) be the complete training data, where \\(\\mathcal{D}_k \\sim \\{\\mathcal{X}_{k,i}, y_{k,i}\\}_{i=1}^{N_k}\\) suggests an \\(N_k\\)-sample sub-dataset accessible to the \\(k\\)-th client with \\(\\mathcal{X}_{k,i}\\) and \\(y_{k,i}\\) representing the \\(i\\)-th input-label pair. In this paper, all these \\(K\\) clients share the same backbone network \\(f_{\\mathrm{backbone}}\\) parameterized by \\(\\Theta\\) to map their inputs (e.g., \\(\\mathcal{X}\\)) to embedding vectors." 
+ }, + { + "type": "title", + "bbox": [ + 0.513, + 0.091, + 0.869, + 0.108 + ], + "angle": 0, + "content": "3.1. Local Stage: Localized Analytic Learning" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.113, + 0.908, + 0.174 + ], + "angle": 0, + "content": "In this stage, each local client's network is trained using the AL technique. This involves transforming the neural network's classification head into a linear regression problem, thereby enabling the derivation of a closed-form LS solution." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.174, + 0.907, + 0.22 + ], + "angle": 0, + "content": "At the initial step, client \\(k\\) extracts its embedding vector \\(\\mathbf{x}_{k,i}\\) by passing the \\(i\\)-th data \\(\\mathcal{X}_{k,i}\\) from \\(\\mathcal{D}_k\\) through the frozen backbone network \\(f_{\\mathrm{backbone}}\\), i.e.," + }, + { + "type": "equation", + "bbox": [ + 0.623, + 0.23, + 0.907, + 0.248 + ], + "angle": 0, + "content": "\\[\n\\boldsymbol {x} _ {k, j} = f _ {\\text {b a c k b o n e}} \\left(\\mathcal {X} _ {k, j}, \\Theta\\right) \\tag {1}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.256, + 0.907, + 0.286 + ], + "angle": 0, + "content": "where \\( \\pmb{x}_{k,j} \\in \\mathbb{R}^{1 \\times y_{\\mathrm{e}}} \\), with \\( y_{\\mathrm{e}} \\) indicating the embedding length." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.287, + 0.907, + 0.349 + ], + "angle": 0, + "content": "For the \\(k\\)-th client (with \\(N_{k}\\) samples in \\(\\mathcal{D}_k\\)), we can stack the extracted embeddings and their corresponding one-hoted labels via mapping \\(\\mathcal{D}_k \\sim \\{\\mathcal{X}_{k,i}, y_{k,i}\\}_{i=1}^{N_k}\\) to \\(\\bar{\\mathcal{D}}_k \\sim \\{\\pmb{X}_k, \\pmb{Y}_k\\}\\), i.e.," + }, + { + "type": "equation", + "bbox": [ + 0.525, + 0.357, + 0.907, + 0.416 + ], + "angle": 0, + "content": "\\[\n\\boldsymbol {X} _ {k} = \\left[ \\begin{array}{c} \\boldsymbol {x} _ {k, 1} \\\\ \\boldsymbol {x} _ {k, 2} \\\\ \\vdots \\\\ \\boldsymbol {x} _ {k, N _ {k}} \\end{array} \\right] = \\left[ \\begin{array}{c} f _ {\\text {b a c k b o n e}} \\left(\\mathcal {X} _ {k, 1}, \\Theta\\right) \\\\ f _ {\\text {b a c k b o n e}} \\left(\\mathcal {X} _ {k, 2}, \\Theta\\right) \\\\ \\vdots \\\\ f _ {\\text {b a c k b o n e}} \\left(\\mathcal {X} _ {k, N _ {k}}, \\Theta\\right) \\end{array} \\right] \\boldsymbol {Y} _ {k} = \\left[ \\begin{array}{c} \\text {o n e h o t} \\left(y _ {k, 1}\\right) \\\\ \\text {o n e h o t} \\left(y _ {k, 2}\\right) \\\\ \\vdots \\\\ \\text {o n e h o t} \\left(y _ {k, N _ {k}}\\right) \\end{array} \\right], \\tag {2}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.424, + 0.906, + 0.483 + ], + "angle": 0, + "content": "where the embedding matrix \\( \\mathbf{X}_k \\in \\mathbb{R}^{N_k \\times y_e} \\), and the label matrix \\( \\mathbf{Y}_k \\in \\mathbb{R}^{N_k \\times C} \\) has \\( C \\) classes. The onehot(*) operator converts the index label \\( y_{k,j} \\) into a \\( C \\)-dimension one-hot row vector." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.485, + 0.907, + 0.56 + ], + "angle": 0, + "content": "Subsequently, we approach the local client training with AL technique [8]. Specially, the target of the \\(k\\)-th client is to linearly map the extracted embeddings onto the one-hoted labels by minimizing the mean square error (MSE) loss function as follows." 
+ }, + { + "type": "equation", + "bbox": [ + 0.613, + 0.571, + 0.907, + 0.589 + ], + "angle": 0, + "content": "\[\n\mathcal{L}\left(\boldsymbol{W}_{k}\right) = \left\| \boldsymbol{Y}_{k} - \boldsymbol{X}_{k} \boldsymbol{W}_{k} \right\|_{\mathrm{F}}^{2}, \tag{3}\n\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.598, + 0.906, + 0.629 + ], + "angle": 0, + "content": "where \( \| *\|_{\mathrm{F}} \) indicates the Frobenius norm. This leads to an optimal weight estimation \( \hat{\mathbf{W}}_k \), i.e.," + }, + { + "type": "equation", + "bbox": [ + 0.598, + 0.638, + 0.907, + 0.665 + ], + "angle": 0, + "content": "\[\n\hat{\boldsymbol{W}}_{k} = \underset{\boldsymbol{W}_{k}}{\operatorname{argmin}} \mathcal{L}(\boldsymbol{W}_{k}) = \boldsymbol{X}_{k}^{\dagger} \boldsymbol{Y}_{k}, \tag{4}\n\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.674, + 0.908, + 0.704 + ], + "angle": 0, + "content": "where \(\dagger\) denotes the Moore-Penrose (MP) inverse (also referred to as the generalized inverse or pseudoinverse) [8, 48]." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.705, + 0.907, + 0.75 + ], + "angle": 0, + "content": "The solution presented in (4) optimally addresses the MSE loss function described in (3), effectively establishing an LS-based AL solution for localized network learning." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.75, + 0.909, + 0.902 + ], + "angle": 0, + "content": "Why One-epoch Analytic Learning Works. AL methods are generally effective for training shallow networks but face challenges when applied to deeper ones. This can be attributed to the fact that AL techniques are often designed as classifiers rather than end-to-end learning approaches. Despite this limitation, recent research has demonstrated that, with a well-trained backbone, the AL performs adequately in various complex scenarios [52]. The practice of using a \"pre-trained backbone + downstream tasks\" paradigm has become increasingly common. This has allowed the one-epoch AL" + }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.945, + 0.516, + 0.957 + ], + "angle": 0, + "content": "4990" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.095, + 0.089, + 0.905, + 0.291 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.089, + 0.303, + 0.908, + 0.334 + ], + "angle": 0, + "content": "Figure 1. An overview of the AFL. During the local stage, each client calculates \( C_k^{\mathrm{r}} \) and \( \hat{W}_k^{\mathrm{r}} \) based on the same pre-trained backbone and its own dataset. In the aggregation stage, the server obtains \( C_{\mathrm{agg},K}^{\mathrm{r}} \) and \( \hat{W}_{\mathrm{agg},K}^{\mathrm{r}} \) and then computes \( \hat{W} \)." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.349, + 0.483, + 0.394 + ], + "angle": 0, + "content": "to thrive in various areas such as continual learning [50] and reinforcement learning [22]. Hence, it could also be well incorporated into individual client training." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.395, + 0.484, + 0.47 + ], + "angle": 0, + "content": "Adopting AL is the key to enabling the upcoming single-round aggregation (by deriving the AA law). The affine characteristic of linear regression in each client opens up new possibilities for exploration in FL. We provide a comprehensive explanation of such an exploration in later sections."
+ }, + { + "type": "title", + "bbox": [ + 0.09, + 0.479, + 0.484, + 0.496 + ], + "angle": 0, + "content": "3.2. Aggregation Stage: Absolute Aggregation Law" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.501, + 0.484, + 0.606 + ], + "angle": 0, + "content": "In the aggregation stage, we introduce the Absolute Aggregation (AA) law, a key contribution of AFL. The AA law facilitates a single-round aggregation, i.e., the aggregation happens only once. Additionally, in scenarios where the AA law becomes suboptimal due to a large number of clients, we introduce a regularization intermediary (RI) process to restore its optimality." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.607, + 0.484, + 0.636 + ], + "angle": 0, + "content": "The MP inverse partition [4] inspires our derivation, which is reformulated into Lemma 1." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.643, + 0.484, + 0.688 + ], + "angle": 0, + "content": "Lemma 1. Let \( \mathbf{X} = \begin{bmatrix} \mathbf{X}_u \\ \mathbf{X}_v \end{bmatrix} \) with \( \mathbf{X}_u \) and \( \mathbf{X}_v \) having full column ranks. Then \( \mathbf{X}^{\dagger} \) follows the partition" + }, + { + "type": "equation", + "bbox": [ + 0.229, + 0.697, + 0.484, + 0.716 + ], + "angle": 0, + "content": "\[\n\boldsymbol{X}^{\dagger} = \left[ \begin{array}{ll} \bar{U} & \bar{V} \end{array} \right], \tag{5}\n\]" + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.728, + 0.136, + 0.739 + ], + "angle": 0, + "content": "where" + }, + { + "type": "equation", + "bbox": [ + 0.104, + 0.75, + 0.484, + 0.788 + ], + "angle": 0, + "content": "\[\n\left\{ \begin{array}{l} \bar{U} = X_{u}^{\dagger} - R_{u} C_{v} X_{u}^{\dagger} + R_{u} C_{v} (C_{u} + C_{v})^{-1} C_{v} X_{u}^{\dagger} \\ \bar{V} = X_{v}^{\dagger} - R_{v} C_{u} X_{v}^{\dagger} + R_{v} C_{u} (C_{u} + C_{v})^{-1} C_{u} X_{v}^{\dagger} \end{array} \right.,\n\]" + }, + { + "type": "equation", + "bbox": [ + 0.152, + 0.798, + 0.484, + 0.839 + ], + "angle": 0, + "content": "\[\n\left\{ \begin{array}{l} \boldsymbol{C}_{u} = \boldsymbol{X}_{u}^{\top} \boldsymbol{X}_{u} \\ \boldsymbol{C}_{v} = \boldsymbol{X}_{v}^{\top} \boldsymbol{X}_{v} \end{array} \right., \quad \left\{ \begin{array}{l} \boldsymbol{R}_{u} = \boldsymbol{C}_{u}^{-1} \\ \boldsymbol{R}_{v} = \boldsymbol{C}_{v}^{-1} \end{array} \right. \tag{6}\n\]" + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.846, + 0.357, + 0.861 + ], + "angle": 0, + "content": "Proof. See Supplementary Materials A." + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.871, + 0.484, + 0.901 + ], + "angle": 0, + "content": "Lemma 1 points out that a matrix's MP inverse (e.g., \(X^{\dagger}\)) can be computed using the inverse matrices of its block" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.349, + 0.907, + 0.425 + ], + "angle": 0, + "content": "components (e.g., \( \pmb{X}_u^\dagger \) and \( \pmb{X}_v^\dagger \)). This introduces the possibility of aggregating a weight \( \pmb{W} = \pmb{X}^\dagger \pmb{Y} \) exactly by manipulating its constituent counterparts \( \pmb{W}_u = \pmb{X}_u^\dagger \pmb{Y}_u \) and \( \pmb{W}_v = \pmb{X}_v^\dagger \pmb{Y}_v \). That is, \( \pmb{W} = f_{\mathrm{agg}}(\pmb{W}_u, \pmb{W}_v) \), i.e., a single-aggregation strategy."
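Lemma 1 lends itself to a quick numerical sanity check. The NumPy sketch below is illustrative only (the shapes, seed, and variable names are assumptions, not values from the paper): it forms the block quantities of (5)-(6) and confirms that \([\bar{U}\ \ \bar{V}]\) reproduces \(X^{\dagger}\).

```python
# Numerical check of Lemma 1: the MP inverse of a stacked matrix X can be
# assembled from per-block quantities. Sizes and seed are illustrative.
import numpy as np

rng = np.random.default_rng(1)
d = 6                                   # column count; full column rank needs N_u, N_v >= d
X_u = rng.standard_normal((20, d))
X_v = rng.standard_normal((30, d))
X = np.vstack([X_u, X_v])

C_u, C_v = X_u.T @ X_u, X_v.T @ X_v     # autocorrelation matrices, as in (6)
R_u, R_v = np.linalg.inv(C_u), np.linalg.inv(C_v)
S = np.linalg.inv(C_u + C_v)

# Block components of X^dagger per (5)-(6).
U_bar = (np.eye(d) - R_u @ C_v + R_u @ C_v @ S @ C_v) @ np.linalg.pinv(X_u)
V_bar = (np.eye(d) - R_v @ C_u + R_v @ C_u @ S @ C_u) @ np.linalg.pinv(X_v)

assert np.allclose(np.hstack([U_bar, V_bar]), np.linalg.pinv(X))
print("Lemma 1 verified: X^dagger = [U_bar  V_bar]")
```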
+ }, + { + "type": "text", + "bbox": [ + 0.512, + 0.425, + 0.907, + 0.469 + ], + "angle": 0, + "content": "Bearing the above intuition in mind, we are able to derive such a single-aggregation strategy explicitly. This is delivered in Theorem 1." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.477, + 0.907, + 0.553 + ], + "angle": 0, + "content": "Theorem 1. Absolute Aggregation Law: Let \(\hat{W} = X^{\dagger}Y\), where \(X = \begin{bmatrix} X_u \\ X_v \end{bmatrix}\) and \(Y = \begin{bmatrix} Y_u \\ Y_v \end{bmatrix}\) with \(X_u\) and \(X_v\) having full column ranks. Let \(\hat{W}_u = X_u^\dagger Y_u\) and \(\hat{W}_v = X_v^\dagger Y_v\); then we have" + }, + { + "type": "equation", + "bbox": [ + 0.619, + 0.555, + 0.907, + 0.571 + ], + "angle": 0, + "content": "\[\n\hat{\boldsymbol{W}} = \boldsymbol{\mathcal{W}}_{u} \hat{\boldsymbol{W}}_{u} + \boldsymbol{\mathcal{W}}_{v} \hat{\boldsymbol{W}}_{v}, \tag{7}\n\]" + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.578, + 0.56, + 0.59 + ], + "angle": 0, + "content": "where" + }, + { + "type": "equation", + "bbox": [ + 0.542, + 0.599, + 0.86, + 0.639 + ], + "angle": 0, + "content": "\[\n\left\{ \begin{array}{l} \mathcal{W}_{u} = I - R_{u} C_{v} + R_{u} C_{v} \left(C_{u} + C_{v}\right)^{-1} C_{v} \\ \mathcal{W}_{v} = I - R_{v} C_{u} + R_{v} C_{u} \left(C_{u} + C_{v}\right)^{-1} C_{u} \end{array} \right.\n\]" + }, + { + "type": "equation", + "bbox": [ + 0.586, + 0.649, + 0.907, + 0.689 + ], + "angle": 0, + "content": "\[\n\left\{ \begin{array}{l} \boldsymbol{C}_{u} = \boldsymbol{X}_{u}^{\top} \boldsymbol{X}_{u} \\ \boldsymbol{C}_{v} = \boldsymbol{X}_{v}^{\top} \boldsymbol{X}_{v} \end{array} \right. \quad \left\{ \begin{array}{l} \boldsymbol{R}_{u} = \boldsymbol{C}_{u}^{-1} \\ \boldsymbol{R}_{v} = \boldsymbol{C}_{v}^{-1} \end{array} \right. \tag{8}\n\]" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.695, + 0.779, + 0.71 + ], + "angle": 0, + "content": "Proof. See Supplementary Materials B." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.72, + 0.907, + 0.796 + ], + "angle": 0, + "content": "The AA law, as stated in Theorem 1, provides a powerful insight. It establishes that we can aggregate two independently trained weights, such as \( \hat{W}_{u} \) and \( \hat{W}_{v} \), into their jointly trained counterpart \( \hat{W} \). This is achieved in an optimal way without any approximation or parameter tuning." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.797, + 0.909, + 0.901 + ], + "angle": 0, + "content": "Invariance to data partitioning. To a certain extent, the achievement in Theorem 1 attains the ultimate goal of FL, i.e., the equivalence between weights trained in an FL fashion and those trained on a centralized joint dataset. Traditionally, FL aims to approximate or converge to the performance of the joint-trained model through multiple rounds of aggregation in a central server. However, the AA law provides a" + }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.945, + 0.514, + 0.957 + ], + "angle": 0, + "content": "4991" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.092, + 0.485, + 0.136 + ], + "angle": 0, + "content": "more direct path to this goal. It allows for an equivalence (not approximation or convergence) to manifest from a linear regression standpoint."
+ }, + { + "type": "text", + "bbox": [ + 0.09, + 0.137, + 0.484, + 0.227 + ], + "angle": 0, + "content": "Supported by the AA law, the AFL achieves a level of performance that is on par with the joint-trained model, without the need for multiple rounds of aggregation. This direct equivalence constitutes a significant advancement in FL, as it simplifies the process and reduces the heavy computational overhead associated with multiple aggregation rounds." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.228, + 0.484, + 0.333 + ], + "angle": 0, + "content": "Although the AA law in Theorem 1 admits the absolute aggregation between two clients (i.e., \(\hat{W}_u\) and \(\hat{W}_v\)), this pattern can be readily extended to the multi-client scenario. To elaborate, without loss of generality, we denote \(\hat{W}_{\mathrm{agg},k - 1}\) as the accumulated aggregation (AcAg) weight that has aggregated \(k - 1\) clients. By rewriting (7), the next aggregation with \(\hat{W}_k\) (\(k = 2,\dots ,K\)) reads" + }, + { + "type": "equation", + "bbox": [ + 0.161, + 0.34, + 0.484, + 0.359 + ], + "angle": 0, + "content": "\[\n\hat{W}_{\text{agg},k} = \mathcal{W}_{\text{agg}} \hat{W}_{\text{agg},k-1} + \mathcal{W}_{k} \hat{W}_{k}. \tag{9}\n\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.367, + 0.484, + 0.398 + ], + "angle": 0, + "content": "According to (8), let \( C_u \to C_{\mathrm{agg},k-1} \), \( C_v \to C_k \), and we have \( C_{\mathrm{agg},k} = C_{\mathrm{agg},k-1} + C_k \). Hence," + }, + { + "type": "equation", + "bbox": [ + 0.111, + 0.407, + 0.484, + 0.458 + ], + "angle": 0, + "content": "\[\n\left\{ \begin{array}{l} \mathcal{W}_{\text{agg}} = I - C_{\text{agg},k-1}^{-1} C_{k} \left(I - C_{\text{agg},k}^{-1} C_{k}\right), \\ \mathcal{W}_{k} = I - C_{k}^{-1} C_{\text{agg},k-1} \left(I - C_{\text{agg},k}^{-1} C_{\text{agg},k-1}\right), \end{array} \right. \tag{10}\n\]" + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.46, + 0.137, + 0.471 + ], + "angle": 0, + "content": "where" + }, + { + "type": "equation", + "bbox": [ + 0.152, + 0.479, + 0.484, + 0.52 + ], + "angle": 0, + "content": "\[\n\left\{ \begin{array}{l} C_{\text{agg},k} = C_{\text{agg},k-1} + C_{k} = \sum_{i=1}^{k} C_{i}, \\ C_{i} = X_{i}^{\top} X_{i}. \end{array} \right. \tag{11}\n\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.529, + 0.484, + 0.606 + ], + "angle": 0, + "content": "As such, the joint-trained weight \(\hat{W} = \hat{W}_{\mathrm{agg},K}\) is produced by aggregating among individual clients in a pair-wise manner. It is interesting that the optimal aggregation is in fact a linear combination of two matrices (e.g., \(\hat{W}_{\mathrm{agg},k - 1}\) and \(\hat{W}_k\)) weighted by \(\mathcal{W}_{\mathrm{agg}}\) and \(\mathcal{W}_k\) respectively." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.606, + 0.484, + 0.681 + ], + "angle": 0, + "content": "Note that the aggregation does NOT necessarily follow a sequential index from 1 to \( K \). We can randomly sample an available client to aggregate with the AcAg weight. This is revealed by the fact that elements in the weighting matrices are somewhat interchangeable (e.g., see (10))." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.688, + 0.483, + 0.703 + ], + "angle": 0, + "content": "3.3. 
RI Process: AA Law in Rank-deficient Scenario" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.71, + 0.484, + 0.846 + ], + "angle": 0, + "content": "As indicated in Theorem 1, the equivalence in the AA law relies on the assumption of full column rank in each client, e.g., \( X_{k} \) having full column rank. This may not hold in the large-client-number scenario where each client has limited data (e.g., \( N_{k} < y_{\mathrm{e}} \)), rendering the full-column-rank assumption invalid. To address this, we implement the AA law with an RI process. Specifically, we include a regularization term as an intermediary during the local stage and remove it after the aggregation stage." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.847, + 0.484, + 0.876 + ], + "angle": 0, + "content": "To this end, we include a regularization term controlled by \(\gamma\) in the objective function, i.e.," + }, + { + "type": "equation", + "bbox": [ + 0.149, + 0.884, + 0.484, + 0.903 + ], + "angle": 0, + "content": "\[\n\mathcal{L}\left(\boldsymbol{W}_{k}^{\mathrm{r}}\right) = \left\| \boldsymbol{Y}_{k} - \boldsymbol{X}_{k} \boldsymbol{W}_{k}^{\mathrm{r}} \right\|_{\mathrm{F}}^{2} + \gamma \left\| \boldsymbol{W}_{k}^{\mathrm{r}} \right\|_{\mathrm{F}}^{2}, \tag{12}\n\]" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.092, + 0.884, + 0.107 + ], + "angle": 0, + "content": "which rewrites the MP-inverse-based solution in (4) into" + }, + { + "type": "equation", + "bbox": [ + 0.522, + 0.115, + 0.907, + 0.143 + ], + "angle": 0, + "content": "\[\n\hat{\boldsymbol{W}}_{k}^{\mathrm{r}} = \underset{\boldsymbol{W}_{k}^{\mathrm{r}}}{\operatorname{argmin}} \mathcal{L}\left(\boldsymbol{W}_{k}^{\mathrm{r}}\right) = \left(\boldsymbol{X}_{k}^{\top} \boldsymbol{X}_{k} + \gamma \boldsymbol{I}\right)^{-1} \boldsymbol{X}_{k}^{\top} \boldsymbol{Y}_{k}. \tag{13}\n\]" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.154, + 0.907, + 0.198 + ], + "angle": 0, + "content": "Such a solution does not suffer from rank-deficiency issues, as \( \pmb{X}_k^\top \pmb{X}_k + \gamma \pmb{I} \) is positive-definite and thereby a full-rank matrix." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.198, + 0.907, + 0.247 + ], + "angle": 0, + "content": "During aggregation, we substitute \(\hat{\mathbf{W}}_k\) in (4) with \(\hat{\mathbf{W}}_k^{\mathrm{r}}\) from (13). This substitution clearly results in deviations (i.e., \(\hat{\mathbf{W}}_{\mathrm{agg},k}^{\mathrm{r}}\neq \hat{\mathbf{W}}_{\mathrm{agg},k}\)), as characterized in Theorem 2." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.256, + 0.906, + 0.291 + ], + "angle": 0, + "content": "Theorem 2. 
RI-AA Law: The relation between \(\hat{W}_{agg,k}^{r}\) and \(\hat{W}_{agg,k}\) follows" + }, + { + "type": "equation", + "bbox": [ + 0.591, + 0.301, + 0.907, + 0.321 + ], + "angle": 0, + "content": "\[\n\hat{W}_{\text{agg},k}^{\mathrm{r}} = \left(C_{\text{agg},k}^{\mathrm{r}}\right)^{-1} C_{\text{agg},k} \hat{W}_{\text{agg},k}, \tag{14}\n\]" + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.33, + 0.56, + 0.343 + ], + "angle": 0, + "content": "where" + }, + { + "type": "equation", + "bbox": [ + 0.527, + 0.355, + 0.907, + 0.373 + ], + "angle": 0, + "content": "\[\nC_{\text{agg},k}^{\mathrm{r}} = C_{\text{agg},k} + k \gamma I = \sum_{i=1}^{k} C_{i}^{\mathrm{r}}, \quad C_{i}^{\mathrm{r}} = X_{i}^{\top} X_{i} + \gamma I. \tag{15}\n\]" + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.382, + 0.779, + 0.398 + ], + "angle": 0, + "content": "Proof. See Supplementary Materials C." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.407, + 0.906, + 0.461 + ], + "angle": 0, + "content": "Theorem 2 establishes the relation between \(\hat{W}_{\mathrm{agg},k}^{\mathrm{r}}\) and \(\hat{W}_{\mathrm{agg},k}\), which is a one-to-one mapping, such that \(\hat{W}_{\mathrm{agg},k}\) can be restored by manipulating \(\hat{W}_{\mathrm{agg},k}^{\mathrm{r}}\), i.e.," + }, + { + "type": "equation", + "bbox": [ + 0.567, + 0.47, + 0.907, + 0.513 + ], + "angle": 0, + "content": "\[\n\begin{array}{rl} \hat{W}_{\text{agg},k} &= \left(C_{\text{agg},k}\right)^{-1} C_{\text{agg},k}^{\mathrm{r}} \hat{W}_{\text{agg},k}^{\mathrm{r}} \\ &= \left(\boldsymbol{C}_{\text{agg},k}^{\mathrm{r}} - k \gamma \boldsymbol{I}\right)^{-1} \boldsymbol{C}_{\text{agg},k}^{\mathrm{r}} \hat{\boldsymbol{W}}_{\text{agg},k}^{\mathrm{r}}. \tag{16} \end{array}\n\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.523, + 0.907, + 0.583 + ], + "angle": 0, + "content": "That is, we are able to attain \(\hat{W}_{\mathrm{agg},k}\) by removing the impact of the regularization term \(\gamma\), which was introduced to counter the ill-conditioning in the large-client-number scenario. The implementation of AFL is summarized in Algorithm 1." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.584, + 0.909, + 0.902 + ], + "angle": 0, + "content": "Benefits of Adopting AL in AFL. Inheriting from the AL technique, the AFL admits several merits over its gradient-based counterparts as follows. i) Fast training and convergence: the analytical solutions allow AFL to finish the training and aggregation in one shot, exhibiting fast training and convergence. Also, the analytical solutions free the AFL from any convergence issue as no iterative search is executed. ii) Low communication cost: the single-round aggregation only requires a single communication between the clients and the server, which significantly reduces the communication cost. iii) Data heterogeneity invariance: the invariance to data partitioning does not pose any constraint on the data partition strategy. That is, the equivalence holds across all possible data-heterogeneous scenarios (e.g., see Section 4.2). 
iv) Client-number invariance: for a complete dataset \(\mathcal{D}\) partitioned among \(K\) clients (i.e., \(\{\mathcal{D}_k\}_{k=1}^K\)), according to Theorem 1 and (9), when the weights from all \(K\) clients are aggregated, the resulting weight is identical to that trained on the full dataset \(\mathcal{D}\). To validate the AA law with the RI process, we conduct an experiment on a dummy dataset and demonstrate this invariance (see Supplementary Materials D)." + }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.945, + 0.516, + 0.957 + ], + "angle": 0, + "content": "4992" + } + ], + [ + { + "type": "algorithm", + "bbox": [ + 0.092, + 0.09, + 0.484, + 0.39 + ], + "angle": 0, + "content": "Algorithm 1 Analytic Federated Learning \nInput: \(\mathcal{D}_k, k = 1, \dots, K\), \(\gamma\), and pre-trained backbone \(\Theta\). \nServer Executes: \n1. for each client \(k\) in parallel do \n2. \(\hat{W}_k^{\mathrm{r}}, C_k^{\mathrm{r}} \gets \text{Local Stage}(k, \mathcal{D}_k, \gamma)\). \n3. end for \n4. \(\hat{W} \gets \text{Aggregation Stage}(\{\hat{W}_k^{\mathrm{r}}, C_k^{\mathrm{r}}, \gamma\}_{k=1}^K)\). \nLocal Stage: client \(k\) with \(\mathcal{D}_k\) and \(\gamma\). \n1. Get embedding and label matrices using (2). \n2. Obtain weight matrix \(\hat{W}_k^{\mathrm{r}}\) by (13). \n3. Get \(C_k^{\mathrm{r}} = X_k^\top X_k + \gamma I\). \n4. Return \(\hat{W}_k^{\mathrm{r}}, C_k^{\mathrm{r}}\). \nAggregation Stage: with \(\{\hat{W}_k^{\mathrm{r}}, C_k^{\mathrm{r}}, \gamma\}_{k=1}^K\). \n1. Initialize \(\hat{W}_{\text{agg},0}^{\mathrm{r}} = 0\), \(C_{\text{agg},0}^{\mathrm{r}} = 0\). \n2. for \(k\) in range(K): \n i) Aggregate \(\hat{W}_{\text{agg},k}^{\mathrm{r}}\) with \(\hat{W}_k^{\mathrm{r}}\) using (9). \n ii) Update \(C_{\text{agg},k}^{\mathrm{r}} = C_{\text{agg},k-1}^{\mathrm{r}} + C_k^{\mathrm{r}}\). \n3. end for. \n4. Restore \(\hat{W} = \hat{W}_{\text{agg},K}\) from \(\hat{W}_{\text{agg},K}^{\mathrm{r}}\) via (16)." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.418, + 0.484, + 0.523 + ], + "angle": 0, + "content": "An AL Branch of Federated Learning. The AFL incorporates the AL technique and can be considered as an AL branch within the FL context. The AL and its recursive formulation have demonstrated remarkable adaptability in continual learning utilizing a well-trained backbone [52, 54]. Here, this intuition is extended to the FL field through non-trivial derivations." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.539, + 0.224, + 0.556 + ], + "angle": 0, + "content": "4. Experiments" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.565, + 0.484, + 0.626 + ], + "angle": 0, + "content": "In this section, we provide extensive experiments to validate the proposed AFL, including comparisons with state-of-the-art FL methods and analysis under various settings. The training time and an ablation study of the regularization are also investigated." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.636, + 0.379, + 0.652 + ], + "angle": 0, + "content": "4.1. Comparison with FL Techniques" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.659, + 0.484, + 0.718 + ], + "angle": 0, + "content": "We conduct comparisons with state-of-the-art FL methods, including FedAvg [24], FedProx [17], MOON [15], FedGen [46], FedDyn [1], FedNTD [14] and FedDisco [42] under various non-IID settings."
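Before moving to the empirical comparison, the following self-contained NumPy sketch walks through Algorithm 1 end to end. It is a simplified illustration under stated assumptions: embeddings are simulated directly rather than produced by a backbone via (2), and the pairwise recursion (9)-(10) is folded into its equivalent accumulated form \(C^{\mathrm{r}}_{\mathrm{agg}} = \sum_k C^{\mathrm{r}}_k\) implied by (15); all names and sizes are hypothetical.

```python
# Sketch of Algorithm 1 (AFL with the RI process) on simulated embeddings.
import numpy as np

rng = np.random.default_rng(2)
K, C, d_e, gamma = 10, 5, 8, 1.0

def onehot(y, C):
    return np.eye(C)[y]

# Local stage: each client k returns (W_k^r, C_k^r) computed from its own data only.
stats, X_all, Y_all = [], [], []
for k in range(K):
    N_k = int(rng.integers(5, 15))            # N_k < d_e may occur: locally rank-deficient
    X_k = rng.standard_normal((N_k, d_e))
    Y_k = onehot(rng.integers(0, C, N_k), C)
    C_k_r = X_k.T @ X_k + gamma * np.eye(d_e) # regularized autocorrelation, always invertible
    W_k_r = np.linalg.solve(C_k_r, X_k.T @ Y_k)   # eq. (13)
    stats.append((W_k_r, C_k_r))
    X_all.append(X_k)
    Y_all.append(Y_k)

# Aggregation stage (single round): accumulate the regularized statistics.
C_agg_r = sum(C_k_r for _, C_k_r in stats)
W_agg_r = np.linalg.solve(C_agg_r, sum(C_k_r @ W_k_r for W_k_r, C_k_r in stats))

# RI restoration, eq. (16): remove the accumulated regularization K*gamma*I.
W_hat = np.linalg.solve(C_agg_r - K * gamma * np.eye(d_e), C_agg_r @ W_agg_r)

# Invariance check: identical to joint training on the centralized dataset.
X, Y = np.vstack(X_all), np.vstack(Y_all)
assert np.allclose(W_hat, np.linalg.pinv(X) @ Y)
```

The final assertion is exactly the invariance-to-data-partitioning property: the federated result coincides with the joint LS solution \(X^{\dagger}Y\), regardless of how the rows were split among clients.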
+ }, + { + "type": "text", + "bbox": [ + 0.09, + 0.72, + 0.484, + 0.809 + ], + "angle": 0, + "content": "Dataset and Model. We validate the baselines and our proposed AFL on 3 popular FL benchmark datasets: CIFAR-10 [12], CIFAR-100 [12] and Tiny-ImageNet [25]. For all datasets, we use a ResNet-18 [10] pretrained on ImageNet-1k [5] as the backbone. We freeze the backbones in all FL methods." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.811, + 0.484, + 0.901 + ], + "angle": 0, + "content": "Data Partition. To simulate non-IID scenarios in FL, we adopt two non-IID data partition methods: Latent Dirichlet Allocation [20] (LDA, denoted as NIID-1) and Sharding [20] (denoted as NIID-2). In the LDA setting, the data assigned to each client follows a Dirichlet distribution, and the degree of data heterogeneity is controlled" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.092, + 0.905, + 0.258 + ], + "angle": 0, + "content": "by the parameter \(\alpha\). A smaller \(\alpha\) leads to a more heterogeneous data distribution. In the Sharding strategy, the data is sorted by label and divided into same-sized shards, and \(s\), the number of shards per client, controls the heterogeneity. When \(s\) takes a smaller value, the data is more heterogeneous. We choose \(\alpha = 0.1, 0.01\) and \(s = 10, 5\) for CIFAR-100 and Tiny-ImageNet. For CIFAR-10, \(\alpha\) is set to 0.1 and 0.05, and \(s\) is set to 4 and 2. Most existing methods are validated on data partitions with \(\alpha = 0.3\) to 1.0 and \(s = 10\) [14, 19]. Here we provide more challenging settings to validate the robustness under extremely heterogeneous cases." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.261, + 0.907, + 0.397 + ], + "angle": 0, + "content": "Implementation Details. In all the experiments, we use 100 clients for each method and use the same partitioned dataset within experiments of the same data setting. We implement the AFL with a \(\gamma = 1\) RI process (any \(\gamma\) would suffice; see the ablation study). Each experiment setting is run 3 times, and the mean and standard deviation of the best top-1 classification accuracy during training are reported. The implementation details of the compared gradient-based methods can be found in Supplementary Materials E." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.4, + 0.909, + 0.656 + ], + "angle": 0, + "content": "Experimental Results. We report the results of the compared methods under the settings of NIID-1 and NIID-2 in Table 1. As shown in the table, except for slightly weaker results than those of FedDyn in the NIID-2 setting, the AFL obtains very competitive performance compared with other methods across various settings. The degree of data heterogeneity does not affect the AFL at all. For instance, the accuracy remains \(80.75\%\) on CIFAR-10 across various NIID-1 and NIID-2 settings. Although slight differences could occur among various settings, they barely impact the classification accuracy (an AL property indicated in [48]). The same pattern repeats on CIFAR-100 and Tiny-ImageNet. Uniquely, the AFL obtains identical results for all 3 repeated runs, i.e., the standard deviations are zero! This is because the AFL does not introduce any stochastic element, so the computation in each run is naturally equivalent to the others, hence the zero standard deviation."
+ }, + { + "type": "text", + "bbox": [ + 0.512, + 0.659, + 0.909, + 0.901 + ], + "angle": 0, + "content": "Notably, when introducing a pre-trained backbone, the compared methods yield similar results, and even FedAvg can become a competitive baseline. The reason could be that, when incorporating a pre-trained backbone, the FL process can be more stable with a better starting point, so methods stabilizing the FL process could be less effective. This phenomenon has also been witnessed in other FL studies [3, 7]. However, other FL methods still experience performance reductions under severe non-IID scenarios. For example, FedDyn performs relatively well (e.g., \(57.55\%\)) under NIID-1 \((\alpha = 0.1)\) on CIFAR-100 but undergoes a performance degradation (to \(36.12\%\)) when \(\alpha = 0.01\). This pattern is rather consistent in other compared methods, such as FedAvg \((56.62\% \rightarrow 32.99\%)\), FedProx \((56.45\% \rightarrow 33.37\%)\), and MOON \((56.58\% \rightarrow 33.34\%)\), and is also true across all datasets. The performance distributions regarding NIID-2 for" + }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.945, + 0.516, + 0.957 + ], + "angle": 0, + "content": "4993" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.09, + 0.089, + 0.908, + 0.133 + ], + "angle": 0, + "content": "Table 1. The top-1 accuracy (%) of compared methods under two non-IID settings. Settings controlled by \( \alpha \) and \( s \) are NIID-1 and NIID-2 respectively. Results are reported as mean and standard deviation over 3 runs. Results in bold are the best within the compared methods in the same setting." + }, + { + "type": "table", + "bbox": [ + 0.095, + 0.142, + 0.907, + 0.326 + ], + "angle": 0, + "content": "
| Dataset | Setting | FedAvg | FedProx | MOON | FedGen | FedDyn | FedNTD | FedDisco | AFL |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CIFAR-10 | α = 0.1 | 64.02±0.18 | 64.07±0.08 | 63.84±0.03 | 64.14±0.24 | 64.77±0.11 | 64.64±0.02 | 63.83±0.08 | **80.75±0.00** |
| | α = 0.05 | 60.52±0.39 | 60.39±0.09 | 60.28±0.17 | 60.65±0.19 | 60.35±0.54 | 61.16±0.33 | 59.90±0.05 | **80.75±0.00** |
| | s = 4 | 68.47±0.13 | 68.46±0.08 | 68.47±0.15 | 68.24±0.28 | 73.50±0.11 | 70.24±0.11 | 65.04±0.11 | **80.75±0.00** |
| | s = 2 | 57.81±0.03 | 57.61±0.12 | 57.72±0.15 | 57.02±0.18 | 64.07±0.09 | 58.77±0.18 | 58.78±0.02 | **80.75±0.00** |
| CIFAR-100 | α = 0.1 | 56.62±0.12 | 56.45±0.22 | 56.58±0.02 | 56.48±0.17 | 57.55±0.08 | 56.60±0.14 | 55.79±0.04 | **58.56±0.00** |
| | α = 0.01 | 32.99±0.20 | 33.37±0.09 | 33.34±0.11 | 33.09±0.09 | 36.12±0.08 | 32.59±0.21 | 25.72±0.08 | **58.56±0.00** |
| | s = 10 | 55.76±0.13 | 55.80±0.16 | 55.70±0.25 | 60.93±0.17 | **61.09±0.09** | 54.69±0.15 | 54.65±0.09 | 58.56±0.00 |
| | s = 5 | 48.33±0.15 | 48.29±0.14 | 48.34±0.19 | 48.12±0.06 | **59.34±0.11** | 47.00±0.19 | 45.86±0.18 | 58.56±0.00 |
| Tiny-ImageNet | α = 0.1 | 46.04±0.27 | 46.47±0.23 | 46.21±0.14 | 46.27±0.14 | 47.72±0.22 | 46.17±0.16 | 47.48±0.06 | **54.67±0.00** |
| | α = 0.01 | 32.63±0.19 | 32.26±0.14 | 32.38±0.20 | 32.33±0.14 | 35.19±0.06 | 31.86±0.44 | 27.15±0.10 | **54.67±0.00** |
| | s = 10 | 39.06±0.26 | 38.97±0.23 | 38.79±0.14 | 38.82±0.16 | 41.36±0.06 | 37.55±0.09 | 38.86±0.12 | **54.67±0.00** |
| | s = 5 | 29.66±0.19 | 29.17±0.16 | 29.24±0.30 | 29.37±0.25 | 35.18±0.18 | 29.01±0.14 | 27.72±0.18 | **54.67±0.00** |
" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.351, + 0.486, + 0.429 + ], + "angle": 0, + "content": "these compared methods resemble those in NIID-1, where smaller \\( s \\) values invite performance degradation among existing FL counterparts. For instance, the FedDyn exhibits \\( 73.50\\% \\rightarrow 64.07\\% \\) for \\( s = 4 \\rightarrow 2 \\) on CIFAR-10 while the AFL obtains competitive and identical results (e.g., \\( 80.75\\% \\))." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.439, + 0.331, + 0.455 + ], + "angle": 0, + "content": "4.2. Analysis on Data Partition" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.462, + 0.484, + 0.523 + ], + "angle": 0, + "content": "Here we provide broaden non-IID partitions to demonstrate AFL's invariance to data partitioning. This includes varying the client number and the non-IID degree. We also provide the IID partition results." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.524, + 0.484, + 0.707 + ], + "angle": 0, + "content": "Client-number Invariance. We compare our AFL and the FedAvg under NIID-1 setting on CIFAR-100 and TinyImageNet with \\(\\alpha = 0.1\\), and vary the number of clients from 100 to 500 and 1000. The results are shown in Figure 2. We observe that the AFL keeps an identical performance when scaling the number of clients, while the FedAvg experiences a performance decline along the increasing number (e.g., \\(56.57\\% \\rightarrow 41.01\\%\\) for \\(K = 100 \\rightarrow 1000\\) on CIFAR-100). This provides a strong evidence to support the invariance to data partitioning in our AFL. It also showcases the capability of pushing the AFL to large-scale client training scenario without any performance compromise." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.351, + 0.909, + 0.593 + ], + "angle": 0, + "content": "Data Heterogeneity Invariance. Here, we fix the client number to 100 and partition the CIFAR-100 under the setting of NIID-1 with \\(\\alpha = 0.005, 0.01, 0.1, 1\\), including the IID setting as well. We report the results of AFL and FedAvg in Table 2. The FedAvg suffers from more accuracy losses (e.g., \\(57.72\\% \\to 24.74\\%\\) for \\(\\alpha = 0.1 \\to 0.005\\)) as the data heterogeneity grows higher. Under the IID partition, the FedAvg receives its best performance (i.e., \\(57.89\\%\\)), which is still less competitive than our AFL (i.e., \\(58.56\\%\\)). On the other hand, AFL obtains identical results (i.e., \\(58.56\\%\\)) across various settings, including non-IID and IID ones. This is another strong proof of the AA law indicating the weight-invariant property of AFL. Our AFL is invariant to any degree of data heterogeneity, leading to unchanged performance in all possible data heterogeneous partition scenarios, even in extreme data heterogeneous cases (e.g., \\(\\alpha = 0.005\\))." + }, + { + "type": "table_caption", + "bbox": [ + 0.513, + 0.608, + 0.907, + 0.639 + ], + "angle": 0, + "content": "Table 2. The top-1 classification accuracy \\((\\%)\\) of AFL and FedAvg under different data heterogeneity." + }, + { + "type": "table", + "bbox": [ + 0.516, + 0.648, + 0.907, + 0.697 + ], + "angle": 0, + "content": "
| Acc. (%) | α = 0.005 | α = 0.01 | α = 0.1 | α = 1 | IID |
| --- | --- | --- | --- | --- | --- |
| FedAvg | 24.74 | 33.09 | 56.57 | 57.72 | 57.89 |
| AFL | 58.56 | 58.56 | 58.56 | 58.56 | 58.56 |
" + }, + { + "type": "image_caption", + "bbox": [ + 0.154, + 0.721, + 0.256, + 0.735 + ], + "angle": 0, + "content": "(a) CIFAR-100" + }, + { + "type": "image", + "bbox": [ + 0.092, + 0.735, + 0.296, + 0.874 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.328, + 0.721, + 0.454, + 0.736 + ], + "angle": 0, + "content": "(b) Tiny-ImageNet" + }, + { + "type": "image", + "bbox": [ + 0.3, + 0.735, + 0.482, + 0.873 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.135, + 0.883, + 0.438, + 0.898 + ], + "angle": 0, + "content": "Figure 2. Accuracy over various number of clients." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.727, + 0.696, + 0.743 + ], + "angle": 0, + "content": "4.3. Training Efficiency" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.75, + 0.909, + 0.903 + ], + "angle": 0, + "content": "Fast Training with Single-round Aggregation. We plot the training evolution curves of accuracy on CIFAR-100 and Tiny-ImageNet in Figure 3 and report the execution time for each method in the legend bars. Compared FL methods take 60s to 100s on CIFAR-100 (100s to 160s on Tiny-ImageNet) to complete an aggregation round, leading to a total training time of 30,000s to 50,000 (50,000s to 80,000s). AFL, however, spends 236.61s on CIFAR-100 and 349.50s on Tiny-ImageNet, achieving approximately \\(150 \\times -200 \\times\\) speedups over its FL counterparts due to only one aggregation." + }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.946, + 0.516, + 0.957 + ], + "angle": 0, + "content": "4994" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.102, + 0.089, + 0.288, + 0.216 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.299, + 0.089, + 0.472, + 0.216 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.227, + 0.483, + 0.255 + ], + "angle": 0, + "content": "Figure 3. Accuracy curves with communication rounds. Average training time is reported in the legends." + }, + { + "type": "title", + "bbox": [ + 0.09, + 0.282, + 0.35, + 0.298 + ], + "angle": 0, + "content": "4.4. Ablation Study of RI Process" + }, + { + "type": "text", + "bbox": [ + 0.093, + 0.304, + 0.485, + 0.547 + ], + "angle": 0, + "content": "Here, we conduct an ablation study regarding the RI process by reporting accuracies of AFL with \\(\\alpha = 0.1\\) and \\(K = 100,500,1000\\) under different values of \\(\\gamma\\). The results without and with the RI process are provided in Table 3. When \\(\\gamma = 0\\) (i.e., no regularization involved), the AFL stops working with \\(K = 500\\) and 1000 due to the ill-conditioned matrix scenario (e.g., \\(N_{k} < y_{\\mathrm{e}}\\)). Such an ill-conditioned case is avoided by introducing \\(\\gamma\\). However, the lack of RI process (see left columns in Table 3) could lead to accuracy loss. For instance, for \\(\\gamma = 100\\), the AFL could suffer a loss of \\(9\\%\\) (i.e., \\(58.56\\% \\rightarrow 49.62\\%\\)). This is the result of regularization accumulation (see (15)). With the RI process, the AFL obtains an identical result across various \\(\\gamma\\) values. More importantly, this demonstrates that adopting the RI avoids the need to find proper \\(\\gamma\\) values. That is, the regularization is a removable intermediary, not a hyperparameter that requires tuning." + }, + { + "type": "table_caption", + "bbox": [ + 0.09, + 0.56, + 0.483, + 0.589 + ], + "angle": 0, + "content": "Table 3. 
Ablation study of the RI process under various \(\gamma\) and \(K\). In each cell, the left/right results are the performance w/o and w/ the RI process in (16)." + }, + { + "type": "table", + "bbox": [ + 0.094, + 0.6, + 0.485, + 0.647 + ], + "angle": 0, + "content": "
| Acc. (%) | γ = 0 (w/o / w/) | γ = 0.1 (w/o / w/) | γ = 1 (w/o / w/) | γ = 10 (w/o / w/) | γ = 100 (w/o / w/) |
| --- | --- | --- | --- | --- | --- |
| K = 100 | 58.56 / N/A | 58.54 / 58.56 | 58.51 / 58.56 | 58.15 / 58.56 | 55.77 / 58.56 |
| K = 500 | 1.11 / N/A | 58.52 / 58.56 | 58.30 / 58.56 | 56.72 / 58.56 | 51.77 / 58.56 |
| K = 1000 | 0.75 / N/A | 58.51 / 58.56 | 58.15 / 58.56 | 55.77 / 58.56 | 49.62 / 58.56 |
" + }, + { + "type": "title", + "bbox": [ + 0.09, + 0.672, + 0.407, + 0.687 + ], + "angle": 0, + "content": "4.5. Validation with Different Backbones" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.695, + 0.483, + 0.815 + ], + "angle": 0, + "content": "To explore the effect of different backbones used in the AFL, we extend the experiments with VGG11 [28] and ViT-B-16 [6]. All these backbone are pre-trained in ImageNet-1k and we conduct the experiments under the same setting in Section 4.2. Due to the invariance to data partitioning, we only report one result in single dataset. As shown in the Table 4, with different pre-trained backbones, the AFL can all obtain competitive results." + }, + { + "type": "title", + "bbox": [ + 0.09, + 0.83, + 0.363, + 0.846 + ], + "angle": 0, + "content": "5. Limitations and Future Work" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.856, + 0.483, + 0.9 + ], + "angle": 0, + "content": "Utilizing Pre-trained Backbone. The AFL approach is both facilitated and constrained by the requirement of having a well-trained feature extractor. However, this limitation has" + }, + { + "type": "table_caption", + "bbox": [ + 0.513, + 0.09, + 0.907, + 0.118 + ], + "angle": 0, + "content": "Table 4. The results of top-1 accuracy in % of the AFL with different backbones including ResNet-18, VGG11 and ViT-B-16." + }, + { + "type": "table", + "bbox": [ + 0.543, + 0.129, + 0.878, + 0.189 + ], + "angle": 0, + "content": "
| Acc. (%) | CIFAR-10 | CIFAR-100 | Tiny-ImageNet |
| --- | --- | --- | --- |
| ResNet-18 | 80.75 | 58.56 | 54.67 |
| VGG11 | 82.72 | 60.43 | 54.73 |
| ViT-B-16 | 93.92 | 75.45 | 82.02 |
" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.214, + 0.908, + 0.349 + ], + "angle": 0, + "content": "been significantly mitigated by the emergence of reusing pretrained models for new tasks. This \"pre-trained backbone + downstream task\" paradigm has become a standard practice in numerous deep learning domains, offering improved generalization and reduced computational costs. FL can further enhance this paradigm and we validate that collaboration can still be beneficial with pre-trained backbones in Supplementary Materials F. The proposal of the AFL aligns with these recent research trends, making it a sensible FL advancement." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.349, + 0.909, + 0.484 + ], + "angle": 0, + "content": "Partially Participating and Stragglers. The AFL formulates a single-round aggregation for FL systems, promoting rapid convergence and reducing communication overhead. However, challenges arise when clients engage partially or when stragglers impede progress. Since clients can only contribute to the aggregation after finishing local computations, the AFL needs to wait for all the clients. This potentially hampers the AFL's overall efficiency and inspires us to further refine the AFL to address these issues." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.485, + 0.909, + 0.621 + ], + "angle": 0, + "content": "Linear Assumptions of AFL. The AFL is established upon linear classifiers and may be less effective with nonlinear data distribution. To address this, AFL can incorporate non-linear projections including non-linear activations or kernel functions. Also, for multi-layer model, AFL can formulate local least-square problem at each layer by label projection [49]. These techniques have been utilized in various AL-based work [39] and the AA law holds theoretically. We will conduct a further exploration in future." + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.634, + 0.634, + 0.65 + ], + "angle": 0, + "content": "6. Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.66, + 0.909, + 0.901 + ], + "angle": 0, + "content": "In this paper, we introduce a gradient-free FL framework named analytic federated learning (AFL). The AFL unveils analytical solutions both in the local client training stage and the aggregation stage. This leads to one-epoch local training, single-round aggregation, and fast convergence. In particular, the single-round aggregation property is theoretically supported and proved by the well-formulated AA law. Additionally, by introducing the RI process, we re-establish the AFL's optimality which could be compromised in the scenario of rank-deficient with normally a large number of clients. The AFL demonstrates its invariance to data partitioning, a property that allows several appealing FL characteristics such as data heterogeneity invariance and client-number invariance. These characteristics are empirically validated through experiments across various settings, where the AFL achieves a consistent and competitive performance." 
+ }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.946, + 0.516, + 0.957 + ], + "angle": 0, + "content": "4995" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.092, + 0.091, + 0.241, + 0.108 + ], + "angle": 0, + "content": "Acknowledgment" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.114, + 0.486, + 0.24 + ], + "angle": 0, + "content": "This research was supported by the National Natural Science Foundation of China (62306117, 62406114), the Guangzhou Basic and Applied Basic Research Foundation (2024A04J3681, 2023A04J1687), the GJYC program of Guangzhou (2024D03J0005), the National Key R&D Project from the Ministry of Science and Technology (2024YFA1211500), and the Fundamental Research Funds for the Central Universities (2024ZYGXZR074)." + }, + { + "type": "title", + "bbox": [ + 0.093, + 0.254, + 0.188, + 0.269 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.279, + 0.484, + 0.334 + ], + "angle": 0, + "content": "[1] Durmus Alp Emre Acar, Yue Zhao, Ramon Matas, Matthew Mattina, Paul Whatmough, and Venkatesh Saligrama. Federated learning based on dynamic regularization. In International Conference on Learning Representations, 2021. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.336, + 0.484, + 0.39 + ], + "angle": 0, + "content": "[2] Ilai Bistritz, Ariana Mann, and Nicholas Bambos. Distributed distillation for on-device learning. In Advances in Neural Information Processing Systems, pages 22593-22604. Curran Associates, Inc., 2020. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.392, + 0.484, + 0.447 + ], + "angle": 0, + "content": "[3] Hong-You Chen, Cheng-Hao Tu, Ziwei Li, Han Wei Shen, and Wei-Lun Chao. On the importance and applicability of pretraining for federated learning. In The Eleventh International Conference on Learning Representations, 2023. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.449, + 0.484, + 0.49 + ], + "angle": 0, + "content": "[4] Randall E. Cline. Representations for the generalized inverse of a partitioned matrix. Journal of the Society for Industrial and Applied Mathematics, 12(3):588-600, 1964. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.492, + 0.484, + 0.547 + ], + "angle": 0, + "content": "[5] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248-255, 2009. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.549, + 0.484, + 0.618 + ], + "angle": 0, + "content": "[6] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale, 2021. 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.62, + 0.484, + 0.688 + ], + "angle": 0, + "content": "[7] Chun-Mei Feng, Bangjun Li, Xinxing Xu, Yong Liu, Huazhu Fu, and Wangmeng Zuo. Learning federated visual prompt in null space for mri reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 8064-8073, 2023. 1, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.69, + 0.484, + 0.744 + ], + "angle": 0, + "content": "[8] Ping Guo, Michael R Lyu, and NE Mastorakis. Pseudoinverse learning algorithm for feedforward neural networks. 
Advances in Neural Networks and Applications, pages 321-326, 2001. 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.746, + 0.484, + 0.815 + ], + "angle": 0, + "content": "[9] Qiushan Guo, Xinjiang Wang, Yichao Wu, Zhipeng Yu, Ding Liang, Xiaolin Hu, and Ping Luo. Online knowledge distillation via collaborative learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.817, + 0.484, + 0.871 + ], + "angle": 0, + "content": "[10] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.873, + 0.484, + 0.901 + ], + "angle": 0, + "content": "[11] Georgios A Kaissis, Marcus R Makowski, Daniel Rückert, and Rickmer F Braren. Secure, privacy-preserving and feder-" + }, + { + "type": "list", + "bbox": [ + 0.094, + 0.279, + 0.484, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.545, + 0.093, + 0.905, + 0.12 + ], + "angle": 0, + "content": "ated machine learning in medical imaging. Nature Machine Intelligence, 2(6):305-311, 2020. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.122, + 0.907, + 0.162 + ], + "angle": 0, + "content": "[12] Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.164, + 0.907, + 0.244 + ], + "angle": 0, + "content": "[13] Rajesh Kumar, Abdullah Aman Khan, Jay Kumar, Zakria, Noorbakhsh Amiri Golilarz, Simin Zhang, Yang Ting, Chengyu Zheng, and Wenyong Wang. Blockchain-federated-learning and deep learning models for Covid-19 detection using ct imaging. IEEE Sensors Journal, 21(14):16301-16314, 2021. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.247, + 0.906, + 0.314 + ], + "angle": 0, + "content": "[14] Gihun Lee, Minchan Jeong, Yongjin Shin, Sangmin Bae, and Se-Young Yun. Preservation of the global knowledge by nottrue distillation in federated learning. In Advances in Neural Information Processing Systems, pages 38461-38474. Curran Associates, Inc., 2022. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.317, + 0.907, + 0.371 + ], + "angle": 0, + "content": "[15] Qinbin Li, Bingsheng He, and Dawn Song. Model-contrastive federated learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10713–10722, 2021. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.373, + 0.907, + 0.427 + ], + "angle": 0, + "content": "[16] Qinbin Li, Yiqun Diao, Quan Chen, and Bingsheng He. Federated learning on non-iid data silos: An experimental study. In 2022 IEEE 38th International Conference on Data Engineering (ICDE), pages 965-978, 2022. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.429, + 0.907, + 0.482 + ], + "angle": 0, + "content": "[17] Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, and Virginia Smith. Federated optimization in heterogeneous networks. In Proceedings of Machine Learning and Systems, pages 429-450, 2020. 1, 2, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.485, + 0.907, + 0.552 + ], + "angle": 0, + "content": "[18] Xin-Chun Li, Yi-Chu Xu, Shaoming Song, Bingshuai Li, Yinchuan Li, Yunfeng Shao, and De-Chuan Zhan. 
Federated learning with position-aware neurons. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10082-10091, 2022. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.554, + 0.907, + 0.62 + ], + "angle": 0, + "content": "[19] Zexi Li, Tao Lin, Xinyi Shang, and Chao Wu. Revisiting weighted aggregation in federated learning with neural networks. In Proceedings of the 40th International Conference on Machine Learning, pages 19767-19788. PMLR, 2023. 2, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.623, + 0.907, + 0.677 + ], + "angle": 0, + "content": "[20] Tao Lin, Lingjing Kong, Sebastian U Stich, and Martin Jaggi. Ensemble distillation for robust model fusion in federated learning. In Advances in Neural Information Processing Systems, pages 2351-2363. Curran Associates, Inc., 2020. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.679, + 0.907, + 0.734 + ], + "angle": 0, + "content": "[21] Tao Lin, Lingjing Kong, Sebastian U Stich, and Martin Jaggi. Ensemble distillation for robust model fusion in federated learning. In Advances in Neural Information Processing Systems, pages 2351-2363. Curran Associates, Inc., 2020. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.735, + 0.907, + 0.788 + ], + "angle": 0, + "content": "[22] Zichen Liu, Chao Du, Wee Sun Lee, and Min Lin. Locality sensitive sparse encoding for learning world models online. In The Twelfth International Conference on Learning Representations, 2024. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.79, + 0.907, + 0.858 + ], + "angle": 0, + "content": "[23] Shubham Malaviya, Manish Shukla, and Sachin Lodha. Reducing communication overhead in federated learning for pre-trained language models using parameter-efficient finetuning. In Proceedings of The 2nd Conference on Lifelong Learning Agents, pages 456-469. PMLR, 2023. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.86, + 0.907, + 0.901 + ], + "angle": 0, + "content": "[24] Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-Efficient Learning of Deep Networks from Decentralized" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.093, + 0.907, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.946, + 0.516, + 0.957 + ], + "angle": 0, + "content": "4996" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.125, + 0.092, + 0.484, + 0.134 + ], + "angle": 0, + "content": "Data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, pages 1273-1282. PMLR, 2017. 1, 2, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.135, + 0.445, + 0.148 + ], + "angle": 0, + "content": "[25] Mohammed Ali mnmmoustafa. Tiny imagenet, 2017. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.149, + 0.483, + 0.189 + ], + "angle": 0, + "content": "[26] J. Park and I. W. Sandberg. Universal approximation using radial-basis-function networks. Neural Computation, 3(2): 246-257, 1991. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.191, + 0.483, + 0.232 + ], + "angle": 0, + "content": "[27] Geet Shingi. A federated learning based approach for loan defaults prediction. In 2020 International Conference on Data Mining Workshops (ICDMW), pages 362-368, 2020. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.233, + 0.484, + 0.273 + ], + "angle": 0, + "content": "[28] Karen Simonyan and Andrew Zisserman. 
Very deep convolutional networks for large-scale image recognition, 2015. 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.274, + 0.484, + 0.343 + ], + "angle": 0, + "content": "[29] Yue Tan, Guodong Long, LU LIU, Tianyi Zhou, Qinghua Lu, Jing Jiang, and Chengqi Zhang. Fedproto: Federated prototype learning across heterogeneous clients. Proceedings of the AAAI Conference on Artificial Intelligence, 36(8):8432-8440, 2022. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.344, + 0.484, + 0.412 + ], + "angle": 0, + "content": "[30] Yue Tan, Guodong Long, Jie Ma, LU LIU, Tianyi Zhou, and Jing Jiang. Federated learning from pre-trained models: A contrastive learning approach. In Advances in Neural Information Processing Systems, pages 19332-19344. Curran Associates, Inc., 2022. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.413, + 0.484, + 0.455 + ], + "angle": 0, + "content": "[31] Kar-Ann Toh. Learning from the kernel and the range space. In 2018 IEEE/ACIS 17th International Conference on Computer and Information Science (ICIS), pages 1–6, 2018. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.456, + 0.484, + 0.509 + ], + "angle": 0, + "content": "[32] Kar-Ann Toh. Learning from the kernel and the range space. In the Proceedings of the 17th 2018 IEEE Conference on Computer and Information Science, pages 417–422. IEEE, 2018. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.511, + 0.484, + 0.58 + ], + "angle": 0, + "content": "[33] Haozhao Wang, Yichen Li, Wenchao Xu, Ruixuan Li, Yufeng Zhan, and Zhigang Zeng. DaFKD: Domain-aware federated knowledge distillation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 20412-20421, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.581, + 0.484, + 0.649 + ], + "angle": 0, + "content": "[34] Haozhao Wang, Haoran Xu, Yichen Li, Yuan Xu, Ruixuan Li, and Tianwei Zhang. FedCDA: Federated learning with cross-rounds divergence-aware aggregation. In The Twelfth International Conference on Learning Representations, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.65, + 0.484, + 0.718 + ], + "angle": 0, + "content": "[35] Jianyu Wang, Qinghua Liu, Hao Liang, Gauri Joshi, and H. Vincent Poor. Tackling the objective inconsistency problem in heterogeneous federated optimization. In Advances in Neural Information Processing Systems, pages 7611-7623. Curran Associates, Inc., 2020. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.72, + 0.484, + 0.774 + ], + "angle": 0, + "content": "[36] Jue Wang, Ping Guo, and Yanjun Li. DensePILAE: a feature reuse pseudoinverse learning algorithm for deep stacked autoencoder. Complex & Intelligent Systems, pages 1-11, 2021. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.776, + 0.484, + 0.843 + ], + "angle": 0, + "content": "[37] X. Wang, T. Zhang, and R. Wang. Noniterative deep learning: Incorporating restricted boltzmann machine into multilayer random weight neural networks. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 49(7):1299-1308, 2019. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.845, + 0.484, + 0.899 + ], + "angle": 0, + "content": "[38] Guile Wu and Shaogang Gong. Peer collaborative learning for online knowledge distillation. Proceedings of the AAAI Conference on Artificial Intelligence, 35(12):10302-10310, 2021. 
2" + }, + { + "type": "list", + "bbox": [ + 0.093, + 0.092, + 0.484, + 0.899 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.093, + 0.908, + 0.162 + ], + "angle": 0, + "content": "[39] Yicheng Xu, Yuxin Chen, Jiahao Nie, Yusong Wang, Huiping Zhuang, and Manabu Okumura. Advancing cross-domain discriminability in continual learning of vision-language models. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.163, + 0.908, + 0.232 + ], + "angle": 0, + "content": "[40] Fu-En Yang, Chien-Yi Wang, and Yu-Chiang Frank Wang. Efficient model personalization in federated learning via client-specific prompt generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 19159-19168, 2023. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.234, + 0.907, + 0.29 + ], + "angle": 0, + "content": "[41] Wensi Yang, Yuhang Zhang, Kejiang Ye, Li Li, and ChengZhong Xu. Ffd: A federated learning based method for credit card fraud detection. In Big Data - BigData 2019, pages 18-32, Cham, 2019. Springer International Publishing. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.291, + 0.908, + 0.36 + ], + "angle": 0, + "content": "[42] Rui Ye, Mingkai Xu, Jianyu Wang, Chenxin Xu, Siheng Chen, and Yanfeng Wang. FedDisco: Federated learning with discrepancy-aware collaboration. In Proceedings of the 40th International Conference on Machine Learning, pages 39879-39902. PMLR, 2023. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.362, + 0.908, + 0.432 + ], + "angle": 0, + "content": "[43] Fuxun Yu, Weishan Zhang, Zhuwei Qin, Zirui Xu, Di Wang, Chenchen Liu, Zhi Tian, and Xiang Chen. Fed2: Feature-aligned federated learning. In Proceedings of the 27th ACM SIGKDD conference on knowledge discovery & data mining, pages 2066-2074, 2021. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.434, + 0.908, + 0.503 + ], + "angle": 0, + "content": "[44] Lin Zhang, Li Shen, Liang Ding, Dacheng Tao, and Ling-Yu Duan. Fine-tuning global model via data-free knowledge distillation for non-iid federated learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10174-10183, 2022. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.504, + 0.908, + 0.628 + ], + "angle": 0, + "content": "[45] Zhuo Zhang, Yuanhang Yang, Yong Dai, Qifan Wang, Yue Yu, Lizhen Qu, and Zenglin Xu. Fedpetuning: When federated learning meets the parameter-efficient tuning methods of pre-trained language models. In Findings of the Association for Computational Linguistics: ACL 2023, page 9963-9977. Association for Computational Linguistics (ACL), 2023. Annual Meeting of the Association of Computational Linguistics 2023, ACL 2023; Conference date: 09-07-2023 Through 14-07-2023. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.63, + 0.908, + 0.686 + ], + "angle": 0, + "content": "[46] Zhuangdi Zhu, Junyuan Hong, and Jiayu Zhou. Data-free knowledge distillation for heterogeneous federated learning. In Proceedings of the 38th International Conference on Machine Learning, pages 12878-12889. PMLR, 2021. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.688, + 0.908, + 0.744 + ], + "angle": 0, + "content": "[47] Zhuangdi Zhu, Junyuan Hong, and Jiayu Zhou. Data-free knowledge distillation for heterogeneous federated learning. 
In Proceedings of the 38th International Conference on Machine Learning, pages 12878-12889. PMLR, 2021. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.745, + 0.908, + 0.8 + ], + "angle": 0, + "content": "[48] Huiping Zhuang, Zhiping Lin, and Kar-Ann Toh. Blockwise recursive Moore-Penrose inverse for network learning. IEEE Transactions on Systems, Man, and Cybernetics: Systems, pages 1-14, 2021. 2, 3, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.801, + 0.908, + 0.843 + ], + "angle": 0, + "content": "[49] Huiping Zhuang, Zhiping Lin, and Kar-Ann Toh. Correlation projection for analytic learning of a classification network. Neural Processing Letters, pages 1–22, 2021. 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.845, + 0.908, + 0.902 + ], + "angle": 0, + "content": "[50] Huiping Zhuang, Zhenyu Weng, Hongxin Wei, Renchunzi Xie, Kar-Ann Toh, and Zhiping Lin. ACIL: Analytic class incremental learning with absolute memorization and privacy protection. In Advances in Neural Information Processing" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.093, + 0.908, + 0.902 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.945, + 0.516, + 0.956 + ], + "angle": 0, + "content": "4997" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.125, + 0.092, + 0.484, + 0.119 + ], + "angle": 0, + "content": "Systems, pages 11602-11614. Curran Associates, Inc., 2022. 3, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.122, + 0.484, + 0.191 + ], + "angle": 0, + "content": "[51] Huiping Zhuang, Zhenyu Weng, Run He, Zhiping Lin, and Ziqian Zeng. GKEAL: Gaussian kernel embedded analytic learning for few-shot class incremental task. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7746-7755, 2023. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.192, + 0.484, + 0.261 + ], + "angle": 0, + "content": "[52] Huiping Zhuang, Yizhu Chen, Di Fang, Run He, Kai Tong, Hongxin Wei, Ziqian Zeng, and Cen Chen. GACL: Exemplar-free generalized analytic continual learning. In Advances in Neural Information Processing Systems. Curran Associates, Inc., 2024. 3, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.263, + 0.484, + 0.331 + ], + "angle": 0, + "content": "[53] Huiping Zhuang, Run He, Kai Tong, Ziqian Zeng, Cen Chen, and Zhiping Lin. DS-AL: A dual-stream analytic learning for exemplar-free class-incremental learning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(15):17237-17244, 2024. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.333, + 0.484, + 0.416 + ], + "angle": 0, + "content": "[54] Huiping Zhuang, Yuchen Liu, Run He, Kai Tong, Ziqian Zeng, Cen Chen, Yi Wang, and Lap-Pui Chau. F-OAL: Forward-only online analytic learning with fast training and low memory footprint in class incremental learning. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. 
6" + }, + { + "type": "list", + "bbox": [ + 0.093, + 0.092, + 0.484, + 0.416 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.946, + 0.516, + 0.956 + ], + "angle": 0, + "content": "4998" + } + ] +] \ No newline at end of file diff --git a/2025/AFL_ A Single-Round Analytic Approach for Federated Learning with Pre-trained Models/1961e1d7-9e21-46e8-8ee1-ddb51cc88578_origin.pdf b/2025/AFL_ A Single-Round Analytic Approach for Federated Learning with Pre-trained Models/1961e1d7-9e21-46e8-8ee1-ddb51cc88578_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..98d3f710330caba758176e7152b8b94e150492c1 --- /dev/null +++ b/2025/AFL_ A Single-Round Analytic Approach for Federated Learning with Pre-trained Models/1961e1d7-9e21-46e8-8ee1-ddb51cc88578_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d12c86f626c17553afea2378a66b0e653a594e37823fe92bfe5d5e8ee3778bc8 +size 428044 diff --git a/2025/AFL_ A Single-Round Analytic Approach for Federated Learning with Pre-trained Models/full.md b/2025/AFL_ A Single-Round Analytic Approach for Federated Learning with Pre-trained Models/full.md new file mode 100644 index 0000000000000000000000000000000000000000..1c20c8685efb4779460b64c12181be720a37fb16 --- /dev/null +++ b/2025/AFL_ A Single-Round Analytic Approach for Federated Learning with Pre-trained Models/full.md @@ -0,0 +1,404 @@ +# AFL: A Single-Round Analytic Approach for Federated Learning with Pre-trained Models + +Run He $^{1}$ , Kai Tong $^{1}$ , Di Fang $^{1}$ , Han Sun $^{2,3}$ , Ziqian Zeng $^{1}$ , Haoran Li $^{4}$ , Tianyi Chen $^{5}$ , Huiping Zhuang $^{1*}$ + +$^{1}$ South China University of Technology $^{2}$ Tsinghua University + +3 Beijing National Research Center for Information Science and Technology + +4 The Hong Kong University of Science and Technology 5 Microsoft + +\*corresponding: hpzhuang@scut.edu.cn + +# Abstract + +In this paper, we introduce analytic federated learning (AFL), a new training paradigm that brings analytical (i.e., closed-form) solutions to the federated learning (FL) with pretrained models. Our AFL draws inspiration from analytic learning—a gradient-free technique that trains neural networks with analytical solutions in one epoch. In the local client training stage, the AFL facilitates a one-epoch training, eliminating the necessity for multi-epoch updates. In the aggregation stage, we derive an absolute aggregation (AA) law. This AA law allows a single-round aggregation, reducing heavy communication overhead and achieving fast convergence by removing the need for multiple aggregation rounds. More importantly, the AFL exhibits a property that invariance to data partitioning, meaning that regardless of how the full dataset is distributed among clients, the aggregated result remains identical. This could spawn various potentials, such as data heterogeneity invariance and client-number invariance. We conduct experiments across various FL settings including extremely non-IID ones, and scenarios with a large number of clients (e.g., $\geq 1000$ ). In all these settings, our AFL constantly performs competitively while existing FL techniques encounter various obstacles. Our codes are available at https://github.com/ZHUANGHP/Analytic-federated-learning. + +# 1. Introduction + +Federated learning (FL) [24] aims to collectively train a machine learning model over data silos by aggregating their individual trained weights, while preserving the privacy of their source data. 
This training paradigm has received high popularity, particularly in sensitive domains where data privacy is crucial, such as in banks [27, 41] and hospitals [11, 13].

Conventional FL techniques rely on weight aggregation among clients over multiple rounds of training. The objective is to achieve convergence and approximate the joint-training counterpart, where all clients' data are accessible in a single location. To accomplish this, many contributions have been made. One widely recognized method is FedAvg [24]. Relying on a large number of aggregation rounds, the FedAvg employs a simple yet effective weight averaging technique across local clients. Building upon this, various methods have been proposed (e.g., the FedProx [17] and the FedNova [35]), each with its own specific focus within the field of FL.

However, training a model from scratch via FL can be computationally intensive and demanding in terms of communication bandwidth, especially with large models and numerous participating clients. Several efforts have explored utilizing pre-trained models to mitigate these challenges [23, 45]. Typically, this involves freezing the backbone and only updating and sharing lightweight parameters, such as prototypes [29, 30] or prompts [7, 40], to reduce the substantial training costs.

Although leveraging pre-trained models can circumvent the high costs associated with training backbones from scratch, existing FL techniques with pre-trained models are primarily based on a gradient-based iterative approach, necessitating iterative optimization on each client and multi-round aggregation across clients. The gradient-based optimization used in existing FL faces various challenges and imposes several constraints. These challenges include, but are not limited to: 1) Data heterogeneity, where the data distribution in each client is not independent and identically distributed (non-IID), possibly even with mutually exclusive data categories across different clients (i.e., a pathological distribution); 2) Large client number, where aggregation involving a significant number of clients (i.e., $\geq 1000$ ) can lead to substantial performance degradation in FL systems as the client count increases [16]; 3) Slow convergence, where FL methods may struggle to converge within limited communication rounds, especially in severe non-IID scenarios; and 4) High communication cost, where multi-round aggregation in existing FL methods escalates the communication costs associated with parameter sharing between clients and servers.

In this paper, we propose a new FL training framework named analytic federated learning (AFL), which provides a single-round aggregation for federated learning with pre-trained models. The AFL draws inspiration from analytic learning (AL) [8, 32, 48]—a gradient-free technique with a closed-form solution obtained by reshaping the network training into a linearized formulation. The AL paradigm offers several benefits over gradient-based techniques. First, it is gradient-free, thereby avoiding gradient-related issues such as vanishing and exploding gradients. Second, the analytical solution frees AL from convergence issues during training. Also, AL requires only one visit to the dataset, while gradient-based mechanisms usually need hundreds of epochs or beyond. These properties are attractive in FL for accomplishing fast convergence and low communication cost. Here, we incorporate this mechanism into the FL domain to overcome limitations inherent in gradient-based techniques.
Our contributions are summarized as follows:

- We propose the AFL, a gradient-free FL framework with analytical (closed-form) solutions. These analytical solutions apply both in the local client training stage and in the aggregation stage.
- In the local stage, we adopt a pre-trained network to extract input embeddings, and formulate the training in each client as a localized linear regression problem. This leads to a least squares (LS) based one-epoch client training, eliminating the need for multi-epoch training and enabling fast convergence in local clients.
- In the aggregation stage, we derive an absolute aggregation (AA) law in analytical form, optimally establishing a single-round aggregation. That is, the aggregation happens only once, avoiding the multiple FL rounds that bring high communication costs. Additionally, in scenarios where the AA law becomes suboptimal due to a large number of clients, we introduce a regularization intermediary (RI) process to restore its optimality.
- Owing to the analytical solutions, the AFL exhibits a property of invariance to data partitioning. This means that regardless of how the full dataset is distributed (e.g., non-IID) among local clients, the result remains identical. This property spawns several appealing characteristics: i) Data heterogeneity invariance, where the result is invariant to arbitrary heterogeneous data partition scenarios. ii) Client-number invariance, which produces identical results regardless of the number of clients involved.
- We conduct extensive experiments spanning diverse scenarios, including a wide variety of non-IID partitions and large client number (up to 1000) settings. Our AFL consistently showcases competitive performance throughout all these settings when compared with other methods.

# 2. Related Works

In this section, we review existing related FL literature. Additionally, we explore various AL techniques and their variants to reveal their underlying mechanisms.

# 2.1. Federated Learning Methods

Following the FedAvg [24], various methods have been proposed to address non-IID issues in FL. One common approach involves assessing the significance of parameters during aggregation to ensure that local updates do not diverge substantially from the global model. For instance, the FedProx [17] restricts the size of local updates, while the FedNova [35] employs a normalized averaging method to eliminate objective inconsistency while maintaining fast error convergence. These methods are frequently used as baselines, and we compare our results against them in our experiments. Another set of methods focuses on determining adaptive aggregation weights obtained from multiple clients. The Fed-LAW [19] learns these weights to achieve a global model with state-of-the-art performance, though it requires a proxy dataset to learn the weights, making the results sensitive to the selection of the proxy dataset. To address this sensitivity, the FedCDA [34] proposes a proxy-free method that reduces each client's deviation from the local models of other participants and selects a local model from its multiple recent models acquired over several rounds.

Some methods address the parameter order mismatch issue across clients, which can occur during global aggregation. The Fed2 [43] designs a model structure adaptation method to ensure explicit feature allocation across different network structures.
Similarly, the method in [18] seeks position-aware neurons to fuse position-related values (i.e., position encodings) into neuron outputs. Distillation methods [2, 9, 33, 38] represent another branch, where the average of logits from client models is used for local model aggregation, thereby enhancing generalization. [21] pioneers the application of knowledge distillation on the server side, transferring knowledge from multiple local models to the global model using an unlabeled proxy dataset. To overcome the limitation of using a proxy dataset, recent studies such as [47] and [44] suggest substituting the proxy dataset with generated data.

Existing FL techniques have several significant drawbacks, including challenges with data heterogeneity, large client numbers, convergence issues, and high communication costs. Our AFL framework addresses these issues by utilizing a gradient-free, closed-form analytic learning approach, avoiding gradient-related problems (e.g., multi-epoch training, convergence issues, and multi-round aggregation).

# 2.2. Analytic Learning

The AL has been developed as a strategy to address issues associated with gradient-based updates, such as gradient vanishing/exploding, divergence during iteration, and long training time due to multi-epoch training. The AL is also referred to as pseudoinverse learning [8] owing to its utilization of matrix inversion. The AL starts from shallow learning, which was investigated prior to the advent of deep networks. For instance, the radial basis network [26] trains parameters using an LS estimation after performing a kernel transformation in the first layer. The multilayer AL [31, 37] comes up with a one-epoch training style, using LS techniques to resolve linear segments transformed by the nonlinear network. One instance of this method is the dense pseudoinverse autoencoder [36], which uses LS solutions to combine shallow and deep features to train a stacked autoencoder layer-by-layer.

Nonetheless, earlier AL techniques train their weights by processing the entire dataset simultaneously, therefore facing a memory challenge. This memory concern is alleviated by the block-wise recursive Moore-Penrose inverse [48], which equivalently replaces the joint learning with a recursive approach. This recursive equivalence echoes well with the continual learning community. Naturally, analytic continual learning techniques [50, 51, 53] adopt this equivalent characteristic, thrive in handling the catastrophic forgetting problem, and are invariant to the sequential data partition in continual learning. Our AFL draws inspiration from these adaptations, aiming to introduce similar equivalence patterns (e.g., invariance to heterogeneous data) to the FL community.

# 3. Analytic Federated Learning

In this section, we provide a detailed exposition of the AFL derivations, organized into a local training stage and a centralized aggregation stage. In the local stage, a pre-trained backbone serves as a feature extractor, facilitating an AL network learning that allows the training to be completed in one epoch. In the aggregation stage, we introduce the AA law, establishing a single-round aggregation. We elaborate on AFL's invariance to data partitioning here, bringing benefits such as data heterogeneity invariance, client-number invariance, and fast convergence in a single round. An overview of the proposed AFL paradigm is depicted in Figure 1.

Prior to further developments, let $\mathcal{D} = \{\mathcal{D}_k\}_{k=1}^K$ be the complete training data, where $\mathcal{D}_k \sim \{\mathcal{X}_{k,i}, y_{k,i}\}_{i=1}^{N_k}$ denotes an $N_k$ -sample sub-dataset accessible to the $k$ -th client, with $\mathcal{X}_{k,i}$ and $y_{k,i}$ representing the $i$ -th input-label pair. In this paper, all $K$ clients share the same backbone network $f_{\mathrm{backbone}}$ parameterized by $\Theta$ to map their inputs (e.g., $\mathcal{X}$ ) to embedding vectors.

# 3.1. Local Stage: Localized Analytic Learning

In this stage, each local client's network is trained using the AL technique. This involves transforming the neural network's classification head into a linear regression problem, thereby enabling the derivation of a closed-form LS solution.

At the initial step, client $k$ extracts its embedding vector $\boldsymbol{x}_{k,i}$ by passing the $i$ -th data $\mathcal{X}_{k,i}$ from $\mathcal{D}_k$ through the frozen backbone network $f_{\mathrm{backbone}}$ , i.e.,

$$
\boldsymbol{x}_{k,i} = f_{\mathrm{backbone}}\left(\mathcal{X}_{k,i}, \Theta\right), \tag{1}
$$

where $\boldsymbol{x}_{k,i} \in \mathbb{R}^{1 \times y_{\mathrm{e}}}$ , with $y_{\mathrm{e}}$ indicating the embedding length.

For the $k$ -th client (with $N_{k}$ samples in $\mathcal{D}_k$ ), we can stack the extracted embeddings and their corresponding one-hot labels via mapping $\mathcal{D}_k \sim \{\mathcal{X}_{k,i}, y_{k,i}\}_{i=1}^{N_k}$ to $\bar{\mathcal{D}}_k \sim \{\boldsymbol{X}_k, \boldsymbol{Y}_k\}$ , i.e.,

$$
\boldsymbol{X}_{k} = \left[ \begin{array}{c} \boldsymbol{x}_{k,1} \\ \boldsymbol{x}_{k,2} \\ \vdots \\ \boldsymbol{x}_{k,N_{k}} \end{array} \right] = \left[ \begin{array}{c} f_{\mathrm{backbone}}\left(\mathcal{X}_{k,1}, \Theta\right) \\ f_{\mathrm{backbone}}\left(\mathcal{X}_{k,2}, \Theta\right) \\ \vdots \\ f_{\mathrm{backbone}}\left(\mathcal{X}_{k,N_{k}}, \Theta\right) \end{array} \right], \quad \boldsymbol{Y}_{k} = \left[ \begin{array}{c} \mathrm{onehot}\left(y_{k,1}\right) \\ \mathrm{onehot}\left(y_{k,2}\right) \\ \vdots \\ \mathrm{onehot}\left(y_{k,N_{k}}\right) \end{array} \right], \tag{2}
$$

where the embedding matrix $\boldsymbol{X}_k \in \mathbb{R}^{N_k \times y_{\mathrm{e}}}$ , and the label matrix $\boldsymbol{Y}_k \in \mathbb{R}^{N_k \times C}$ has $C$ classes. The $\mathrm{onehot}(*)$ operator converts the index label $y_{k,i}$ into a $C$ -dimension one-hot row vector.

Subsequently, we approach the local client training with the AL technique [8]. Specifically, the target of the $k$ -th client is to linearly map the extracted embeddings onto the one-hot labels by minimizing the mean square error (MSE) loss function as follows:

$$
\mathcal{L}\left(\boldsymbol{W}_{k}\right) = \left\| \boldsymbol{Y}_{k} - \boldsymbol{X}_{k} \boldsymbol{W}_{k} \right\|_{\mathrm{F}}^{2}, \tag{3}
$$

where $\| *\|_{\mathrm{F}}$ indicates the Frobenius norm. This leads to an optimal weight estimation $\hat{\boldsymbol{W}}_k$ , i.e.,

$$
\hat{\boldsymbol{W}}_{k} = \underset{\boldsymbol{W}_{k}}{\operatorname{argmin}} \, \mathcal{L}(\boldsymbol{W}_{k}) = \boldsymbol{X}_{k}^{\dagger} \boldsymbol{Y}_{k}, \tag{4}
$$

where $\dagger$ denotes the Moore-Penrose (MP) inverse (also referred to as the generalized inverse or pseudoinverse) [8, 48].
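To make the local stage concrete, the following is a minimal NumPy sketch of (1)–(4). The function name `local_fit` and the use of `np.linalg.pinv` are illustrative assumptions rather than the authors' released implementation; any frozen backbone that maps inputs to fixed-length embeddings would play the role of $f_{\mathrm{backbone}}$.

```python
import numpy as np

def local_fit(X_k: np.ndarray, Y_k: np.ndarray) -> np.ndarray:
    """One-epoch local training of client k via the LS solution in (4).

    X_k: (N_k, y_e) embedding matrix produced by the frozen backbone.
    Y_k: (N_k, C) one-hot label matrix.
    Returns W_k_hat = pinv(X_k) @ Y_k, the closed-form minimizer of (3).
    """
    return np.linalg.pinv(X_k) @ Y_k

# Toy example: 32 samples, 16-dim embeddings, 5 classes.
rng = np.random.default_rng(0)
X_k = rng.standard_normal((32, 16))   # stands in for backbone outputs
labels = rng.integers(0, 5, size=32)
Y_k = np.eye(5)[labels]               # the onehot(*) operator from (2)
W_k = local_fit(X_k, Y_k)
print(W_k.shape)                      # (16, 5): one linear classifier head
```

Note that a single matrix solve replaces the usual multi-epoch gradient loop; this is the "one visit to the dataset" property discussed earlier.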
The solution presented in (4) optimally addresses the MSE loss function described in (3), effectively establishing an LS-based AL solution for localized network learning.

![](images/582017bc5eb43d7259bbb83073d5d142d26e7139c39702bfc297a583a1d56d04.jpg)
Figure 1. An overview of the AFL. During the local stage, each client calculates $C_k^{\mathrm{r}}$ and $\hat{W}_k^{\mathrm{r}}$ based on the same pre-trained backbone and its own dataset. In the aggregation stage, the server aggregates these into $C_{\mathrm{agg},K}^{\mathrm{r}}$ and $\hat{W}_{\mathrm{agg},K}^{\mathrm{r}}$ and then obtains $\hat{W}$ .

Why One-epoch Analytic Learning Works. AL methods are generally effective for training shallow networks but face challenges when applied to deeper ones. This can be attributed to the fact that AL techniques are often designed as classifiers rather than end-to-end learning approaches. Despite this limitation, recent research has demonstrated that with a well-trained backbone, the AL performs adequately in various complex scenarios [52]. The practice of using a "pre-trained backbone + downstream tasks" paradigm has become increasingly common. This has allowed the one-epoch AL to thrive in various areas such as continual learning [50] and reinforcement learning [22]. Hence, it can also be well incorporated into the individual client training.

Adopting AL is the key to enforcing the upcoming single-round aggregation (by deriving the AA law). The affine characteristic of linear regression in each client opens up new possibilities for exploration in FL. We provide a comprehensive explanation of such an exploration in later sections.

# 3.2. Aggregation Stage: Absolute Aggregation Law

In the aggregation stage, we introduce the absolute aggregation (AA) law, a key contribution in AFL. The AA law facilitates a single-round aggregation, i.e., the aggregation happens only once. Additionally, in scenarios where the AA law becomes suboptimal due to a large number of clients, we introduce a regularization intermediary (RI) process to restore its optimality.

The MP inverse partition [4] inspires our derivation, which is reformulated into Lemma 1.

Lemma 1. Let $\boldsymbol{X} = \begin{bmatrix} \boldsymbol{X}_u \\ \boldsymbol{X}_v \end{bmatrix}$ with $\boldsymbol{X}_u$ and $\boldsymbol{X}_v$ having full column rank. Then $\boldsymbol{X}^{\dagger}$ follows the partition

$$
\boldsymbol{X}^{\dagger} = \left[ \begin{array}{ll} \bar{\boldsymbol{U}} & \bar{\boldsymbol{V}} \end{array} \right], \tag{5}
$$

where

$$
\left\{ \begin{array}{l} \bar{\boldsymbol{U}} = \boldsymbol{X}_u^{\dagger} - \boldsymbol{R}_u \boldsymbol{C}_v \boldsymbol{X}_u^{\dagger} - \boldsymbol{R}_u \boldsymbol{C}_v (\boldsymbol{C}_u + \boldsymbol{C}_v)^{-1} \boldsymbol{C}_v \boldsymbol{X}_u^{\dagger} \\ \bar{\boldsymbol{V}} = \boldsymbol{X}_v^{\dagger} - \boldsymbol{R}_v \boldsymbol{C}_u \boldsymbol{X}_v^{\dagger} - \boldsymbol{R}_v \boldsymbol{C}_u (\boldsymbol{C}_u + \boldsymbol{C}_v)^{-1} \boldsymbol{C}_u \boldsymbol{X}_v^{\dagger} \end{array} \right.,
$$

$$
\left\{ \begin{array}{l} \boldsymbol{C}_u = \boldsymbol{X}_u^{\top} \boldsymbol{X}_u \\ \boldsymbol{C}_v = \boldsymbol{X}_v^{\top} \boldsymbol{X}_v \end{array} \right., \quad \left\{ \begin{array}{l} \boldsymbol{R}_u = \boldsymbol{C}_u^{-1} \\ \boldsymbol{R}_v = \boldsymbol{C}_v^{-1} \end{array} \right.. \tag{6}
$$

Proof. See Supplementary Materials A.

Lemma 1 points out that a matrix's MP inverse (e.g., $\boldsymbol{X}^{\dagger}$ ) can be computed using the inverse matrices of its block components (e.g., $\boldsymbol{X}_u^\dagger$ and $\boldsymbol{X}_v^\dagger$ ).
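As a quick sanity check on this blockwise view, the snippet below verifies numerically that, for full-column-rank blocks, the joint MP inverse can be assembled from per-block statistics. Rather than transcribing the lemma's algebra term by term, it uses the equivalent normal-equation identity $\boldsymbol{X}^{\dagger} = (\boldsymbol{C}_u + \boldsymbol{C}_v)^{-1} [\boldsymbol{X}_u^{\top} \ \boldsymbol{X}_v^{\top}]$, which holds under the same full-column-rank assumption; the variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
Xu = rng.standard_normal((40, 8))   # block of client u (full column rank w.h.p.)
Xv = rng.standard_normal((50, 8))   # block of client v
X = np.vstack([Xu, Xv])             # the jointly stacked matrix

Cu, Cv = Xu.T @ Xu, Xv.T @ Xv       # per-block autocorrelation matrices
# Joint MP inverse assembled purely from blockwise quantities:
blockwise = np.linalg.solve(Cu + Cv, np.hstack([Xu.T, Xv.T]))
assert np.allclose(blockwise, np.linalg.pinv(X))  # matches the direct computation
```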
This introduces possibilities for aggregating a weight $\hat{\boldsymbol{W}} = \boldsymbol{X}^\dagger \boldsymbol{Y}$ exactly by manipulating its constituent counterparts $\hat{\boldsymbol{W}}_u = \boldsymbol{X}_u^\dagger \boldsymbol{Y}_u$ and $\hat{\boldsymbol{W}}_v = \boldsymbol{X}_v^\dagger \boldsymbol{Y}_v$ . That is, $\hat{\boldsymbol{W}} = f_{\mathrm{agg}}(\hat{\boldsymbol{W}}_u, \hat{\boldsymbol{W}}_v)$ , i.e., a single-aggregation strategy.

Bearing the above intuition in mind, we are able to derive such a single-aggregation strategy in action. This is delivered in Theorem 1.

Theorem 1. Absolute Aggregation Law: Let $\hat{\boldsymbol{W}} = \boldsymbol{X}^{\dagger}\boldsymbol{Y}$ , where $\boldsymbol{X} = \begin{bmatrix} \boldsymbol{X}_u \\ \boldsymbol{X}_v \end{bmatrix}$ and $\boldsymbol{Y} = \begin{bmatrix} \boldsymbol{Y}_u \\ \boldsymbol{Y}_v \end{bmatrix}$ with $\boldsymbol{X}_u$ and $\boldsymbol{X}_v$ having full column rank. Let $\hat{\boldsymbol{W}}_u = \boldsymbol{X}_u^\dagger \boldsymbol{Y}_u$ and $\hat{\boldsymbol{W}}_v = \boldsymbol{X}_v^\dagger \boldsymbol{Y}_v$ . Then we have

$$
\hat{\boldsymbol{W}} = \boldsymbol{\mathcal{W}}_u \hat{\boldsymbol{W}}_u + \boldsymbol{\mathcal{W}}_v \hat{\boldsymbol{W}}_v, \tag{7}
$$

where

$$
\left\{ \begin{array}{l} \boldsymbol{\mathcal{W}}_u = \boldsymbol{I} - \boldsymbol{R}_u \boldsymbol{C}_v - \boldsymbol{R}_u \boldsymbol{C}_v \left(\boldsymbol{C}_u + \boldsymbol{C}_v\right)^{-1} \boldsymbol{C}_v \\ \boldsymbol{\mathcal{W}}_v = \boldsymbol{I} - \boldsymbol{R}_v \boldsymbol{C}_u - \boldsymbol{R}_v \boldsymbol{C}_u \left(\boldsymbol{C}_u + \boldsymbol{C}_v\right)^{-1} \boldsymbol{C}_u \end{array} \right.,
$$

$$
\left\{ \begin{array}{l} \boldsymbol{C}_u = \boldsymbol{X}_u^{\top} \boldsymbol{X}_u \\ \boldsymbol{C}_v = \boldsymbol{X}_v^{\top} \boldsymbol{X}_v \end{array} \right., \quad \left\{ \begin{array}{l} \boldsymbol{R}_u = \boldsymbol{C}_u^{-1} \\ \boldsymbol{R}_v = \boldsymbol{C}_v^{-1} \end{array} \right.. \tag{8}
$$

Proof. See Supplementary Materials B.

The AA law, as stated in Theorem 1, provides a powerful insight. It establishes that we can aggregate two independently trained weights, such as $\hat{\boldsymbol{W}}_u$ and $\hat{\boldsymbol{W}}_v$ , into their jointly trained counterpart $\hat{\boldsymbol{W}}$ . This is achieved in an optimal way without any approximation or parameter tuning.

Invariance to data partitioning. To a certain extent, the achievement in Theorem 1 attains the ultimate goal of FL, i.e., the equivalence between weights trained in the FL fashion and those trained on a centralized joint dataset. Traditionally, FL aims to approximate or converge to the performance of the joint-trained model through multiple rounds of aggregation in a central server. However, the AA law provides a more direct path to this goal. It allows for an equivalence (not an approximation or convergence) from a linear regression standpoint.

Supported by the AA law, the AFL achieves a level of performance that is on par with the joint-trained model, without the need for multiple rounds of aggregation. This direct equivalence could establish a significant advancement in FL, as it simplifies the process and reduces the heavy computational overhead associated with multiple aggregation rounds.

Although the AA law in Theorem 1 admits the absolute aggregation between two clients (i.e., $\hat{W}_u$ and $\hat{W}_v$ ), this pattern can be trivially broadcast to the multi-client scenario. To elaborate, without loss of generality, we denote $\hat{W}_{\mathrm{agg},k - 1}$ as the accumulated aggregation (AcAg) weight that has aggregated $k - 1$ clients. By rewriting (7), the next aggregation with $\hat{W}_k$ ( $k = 2,\dots ,K$ ) reads

$$
\hat{W}_{\mathrm{agg},k} = \mathcal{W}_{\mathrm{agg}} \hat{W}_{\mathrm{agg},k-1} + \mathcal{W}_{k} \hat{W}_{k}, \tag{9}
$$

where, by letting $C_u \to C_{\mathrm{agg},k-1}$ and $C_v \to C_k$ in (8), we have $C_{\mathrm{agg},k} = C_{\mathrm{agg},k-1} + C_k$ and

$$
\left\{ \begin{array}{l} \mathcal{W}_{\mathrm{agg}} = I - C_{\mathrm{agg},k-1}^{-1} C_{k} \left(I + C_{\mathrm{agg},k}^{-1} C_{k}\right), \\ \mathcal{W}_{k} = I - C_{k}^{-1} C_{\mathrm{agg},k-1} \left(I + C_{\mathrm{agg},k}^{-1} C_{\mathrm{agg},k-1}\right), \end{array} \right. \tag{10}
$$

with

$$
\left\{ \begin{array}{l} C_{\mathrm{agg},k} = C_{\mathrm{agg},k-1} + C_{k} = \sum_{i}^{k} C_{i}, \\ C_{i} = X_{i}^{\top} X_{i}. \end{array} \right. \tag{11}
$$

As such, the joint-trained weight $\hat{W} = \hat{W}_{\mathrm{agg},K}$ is produced by aggregating among individual clients in a pair-wise manner. It is interesting to find that the optimal aggregation is in fact a linear combination of two matrices (e.g., $\hat{W}_{\mathrm{agg},k - 1}$ and $\hat{W}_k$ ) weighted by $\mathcal{W}_{\mathrm{agg}}$ and $\mathcal{W}_k$ respectively.

Note that the aggregation does NOT necessarily follow a sequential index from 1 to $K$ . We can randomly sample an available client to aggregate with the AcAg weight. This is revealed by the fact that the two operands play symmetric, interchangeable roles in the weighting matrices (e.g., see (10)).

# 3.3. RI Process: AA Law in the Rank-deficient Scenario

As indicated in Theorem 1, the equivalence in the AA law relies on the assumption of full column rank in each client, e.g., $X_{k}$ having full column rank. This may not hold in the large client number scenario where each client has limited data (e.g., $N_{k} < y_{\mathrm{e}}$ ), rendering the full-column-rank assumption invalid. To address this, we implement the AA law with an RI process. Specifically, we include a regularization term as an intermediary during the local stage, and remove it after the aggregation stage.

To this end, we include a regularization term controlled by $\gamma$ in the objective function, i.e.,

$$
\mathcal{L}\left(\boldsymbol{W}_{k}^{\mathrm{r}}\right) = \left\| \boldsymbol{Y}_{k} - \boldsymbol{X}_{k} \boldsymbol{W}_{k}^{\mathrm{r}} \right\|_{\mathrm{F}}^{2} + \gamma \left\| \boldsymbol{W}_{k}^{\mathrm{r}} \right\|_{\mathrm{F}}^{2}, \tag{12}
$$

which rewrites the MP inverse based solution in (4) into

$$
\hat{\boldsymbol{W}}_{k}^{\mathrm{r}} = \underset{\boldsymbol{W}_{k}^{\mathrm{r}}}{\operatorname{argmin}} \, \mathcal{L}\left(\boldsymbol{W}_{k}^{\mathrm{r}}\right) = \left(\boldsymbol{X}_{k}^{\top} \boldsymbol{X}_{k} + \gamma \boldsymbol{I}\right)^{-1} \boldsymbol{X}_{k}^{\top} \boldsymbol{Y}_{k}. \tag{13}
$$

Such a solution does not suffer from rank-deficiency issues, as $\boldsymbol{X}_k^\top \boldsymbol{X}_k + \gamma \boldsymbol{I}$ is positive-definite and thereby a full-rank matrix.

During aggregation, we substitute $\hat{\boldsymbol{W}}_k$ in (4) with $\hat{\boldsymbol{W}}_k^{\mathrm{r}}$ from (13). This substitution would clearly result in deviations (i.e., $\hat{\boldsymbol{W}}_{\mathrm{agg},k}^{\mathrm{r}}\neq \hat{\boldsymbol{W}}_{\mathrm{agg},k}$ ), which is depicted in Theorem 2.
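Before stating the theorem, here is a small NumPy sketch of the pairwise aggregation in (9)–(11). For numerical convenience it applies the AA law in the equivalent normal-equation form $\hat{W}_{\mathrm{agg}} = (C_u + C_v)^{-1}(C_u \hat{W}_u + C_v \hat{W}_v)$, which follows from $C_k \hat{W}_k = X_k^{\top} Y_k$ under the full-column-rank assumption; the helper names are illustrative, not from the authors' code.

```python
import numpy as np

def aa_aggregate(C_u, W_u, C_v, W_v):
    """Absolute aggregation of two LS-trained heads (normal-equation form)."""
    C_agg = C_u + C_v
    W_agg = np.linalg.solve(C_agg, C_u @ W_u + C_v @ W_v)
    return C_agg, W_agg

# Three clients trained independently, then aggregated pairwise once.
rng = np.random.default_rng(2)
Xs = [rng.standard_normal((60, 8)) for _ in range(3)]
Ys = [rng.standard_normal((60, 5)) for _ in range(3)]
Cs = [X.T @ X for X in Xs]
Ws = [np.linalg.pinv(X) @ Y for X, Y in zip(Xs, Ys)]

C_agg, W_agg = Cs[0], Ws[0]
for C_k, W_k in zip(Cs[1:], Ws[1:]):   # the visiting order is irrelevant, per the text
    C_agg, W_agg = aa_aggregate(C_agg, W_agg, C_k, W_k)

# Identical to joint training on the pooled dataset:
X_joint, Y_joint = np.vstack(Xs), np.vstack(Ys)
assert np.allclose(W_agg, np.linalg.pinv(X_joint) @ Y_joint)
```

Theorem 2.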
RI-AA Law: The relation between $\hat{W}_{\mathrm{agg},k}^{\mathrm{r}}$ and $\hat{W}_{\mathrm{agg},k}$ follows

$$
\hat{W}_{\mathrm{agg},k}^{\mathrm{r}} = \left(C_{\mathrm{agg},k}^{\mathrm{r}}\right)^{-1} C_{\mathrm{agg},k} \hat{W}_{\mathrm{agg},k}, \tag{14}
$$

where

$$
C_{\mathrm{agg},k}^{\mathrm{r}} = C_{\mathrm{agg},k} + k \gamma I = \sum_{i}^{k} C_{i}^{\mathrm{r}}, \quad C_{i}^{\mathrm{r}} = X_{i}^{\top} X_{i} + \gamma I. \tag{15}
$$

Proof. See Supplementary Materials C.

Theorem 2 establishes the relation between $\hat{W}_{\mathrm{agg},k}^{\mathrm{r}}$ and $\hat{W}_{\mathrm{agg},k}$ , which is a one-to-one mapping, such that $\hat{W}_{\mathrm{agg},k}$ can be restored by manipulating $\hat{W}_{\mathrm{agg},k}^{\mathrm{r}}$ , i.e.,

$$
\hat{W}_{\mathrm{agg},k} = \left(C_{\mathrm{agg},k}\right)^{-1} C_{\mathrm{agg},k}^{\mathrm{r}} \hat{W}_{\mathrm{agg},k}^{\mathrm{r}} = \left(C_{\mathrm{agg},k}^{\mathrm{r}} - k \gamma I\right)^{-1} C_{\mathrm{agg},k}^{\mathrm{r}} \hat{W}_{\mathrm{agg},k}^{\mathrm{r}}. \tag{16}
$$

That is, we are able to attain $\hat{W}_{\mathrm{agg},k}$ by removing the impact of the regularization term $\gamma$ , thereby countering the ill-conditioned constraint in the large client number scenario. The implementation of AFL is summarized in Algorithm 1.

Benefits of Adopting AL in AFL. Inheriting from the AL technique, the AFL enjoys several merits over its gradient-based counterparts, as follows. i) Fast training and convergence: the analytical solutions allow AFL to finish the training and aggregation in one shot, exhibiting fast training and convergence. Also, the analytical solutions free the AFL from any convergence issue, as no iterative-search-based action is executed. ii) Low communication cost: the single-round aggregation only requires a single communication between the clients and the server, which significantly reduces the communication cost. iii) Data heterogeneity invariance: the invariance to data partitioning does not pose any constraint on the data partition strategy. That is, the equivalence holds across all possible data heterogeneous scenarios (e.g., see Section 4.2). iv) Client-number invariance: for a complete dataset $\mathcal{D}$ partitioned among $K$ clients (i.e., $\{\mathcal{D}_k\}_{k=1}^K$ ), according to Theorem 1 and (9), when the weights from all $K$ clients are aggregated, the resulting weight is identical to that trained on the full dataset $\mathcal{D}$ . To validate the AA law with the RI process, we conduct an experiment on a dummy dataset and show the invariance (see Supplementary Materials D).

Algorithm 1 Analytic Federated Learning
Input: $\mathcal{D}_k, k = 1, \dots, K$ , $\gamma$ , and pre-trained backbone $\Theta$ .
Server Executes:
1. for each client $k$ in parallel do
2. $\hat{W}_k^{\mathrm{r}}, C_k^{\mathrm{r}} \gets \text{Local Stage}(k, \mathcal{D}_k, \gamma)$ .
3. end for
4. $\hat{W} \gets \text{Aggregation Stage}(\{\hat{W}_k^{\mathrm{r}}, C_k^{\mathrm{r}}, \gamma\}_{k=1}^K)$ .
Local Stage: client $k$ with $\mathcal{D}_k$ and $\gamma$ .
1. Get the embedding and label matrices using (2).
2. Obtain the weight matrix $\hat{W}_k^{\mathrm{r}}$ by (13).
3. Get $C_k^{\mathrm{r}} = X_k^\top X_k + \gamma I$ .
4. Return $\hat{W}_k^{\mathrm{r}}, C_k^{\mathrm{r}}$ .
Aggregation Stage: with $\{\hat{W}_k^{\mathrm{r}}, C_k^{\mathrm{r}}, \gamma\}_{k=1}^K$ .
1. Initialize $\hat{W}_{\text{agg},0}^{\mathrm{r}} = 0$ , $C_{\text{agg},0}^{\mathrm{r}} = 0$ .
2. for $k$ in range( $K$ ):
 i) Aggregate $\hat{W}_{\text{agg},k}^{\mathrm{r}}$ from $\hat{W}_{\text{agg},k-1}^{\mathrm{r}}$ and $\hat{W}_k^{\mathrm{r}}$ using (9).
 ii) Update $C_{\text{agg},k}^{\mathrm{r}} = C_{\text{agg},k-1}^{\mathrm{r}} + C_k^{\mathrm{r}}$ .
3. end for
4. Restore $\hat{W} = \hat{W}_{\text{agg},K}$ from $\hat{W}_{\text{agg},K}^{\mathrm{r}}$ via (16).

An AL Branch of Federated Learning. The AFL incorporates the AL technique and can be considered an AL branch within the FL context. The AL and its recursive formulation have demonstrated remarkable adaptability in continual learning utilizing a well-trained backbone [52, 54]. Here, this intuition is extended to the FL field through non-trivial derivations.

# 4. Experiments

In this section, we provide extensive experiments to validate the proposed AFL, including comparisons with state-of-the-art FL methods and analyses under various settings. The training time and an ablation study of the regularization are also investigated.

# 4.1. Comparison with FL Techniques

We compare against state-of-the-art FL methods, including FedAvg [24], FedProx [17], MOON [15], FedGen [46], FedDyn [1], FedNTD [14] and FedDisco [42], under various non-IID settings.

Dataset and Model. We validate the baselines and our proposed AFL on 3 popular benchmark datasets in FL: CIFAR-10 [12], CIFAR-100 [12] and Tiny-ImageNet [25]. For all datasets, we use a ResNet-18 [10] pre-trained on ImageNet-1k [5] as the backbone. We freeze the backbones in all FL methods.

Data Partition. To simulate non-IID scenarios in FL, we adopt two non-IID data partition methods: Latent Dirichlet Allocation [20] (LDA, denoted as NIID-1) and Sharding [20] (denoted as NIID-2). In the LDA setting, the data assigned to each client is forced to satisfy a Dirichlet distribution, and the degree of data heterogeneity is controlled by the parameter $\alpha$ ; a smaller $\alpha$ leads to a more heterogeneous data distribution. In the Sharding strategy, the data is sorted by labels and divided into same-sized shards, and the heterogeneity is controlled by $s$ , the number of shards per client; a smaller $s$ makes the data more heterogeneous. We choose $\alpha = 0.1, 0.01$ and $s = 10, 5$ for CIFAR-100 and Tiny-ImageNet. For CIFAR-10, $\alpha$ is set to 0.1 and 0.05, and $s$ is set to 4 and 2. Most existing methods are validated on data partitions of $\alpha = 0.3$ to 1.0 and $s = 10$ [14, 19]. Here we provide more challenging settings to validate the robustness under extremely heterogeneous cases.

Implementation Details. In all the experiments, we use 100 clients for each method and use the same partitioned dataset within experiments of the same data setting. We implement the AFL with a $\gamma = 1$ RI process (any $\gamma$ would suffice; see the ablation study). Each experiment setting is run 3 times, and the mean and standard deviation of the best top-1 classification accuracy during training are reported. The implementation details of the gradient-based compared methods can be found in Supplementary Materials E.
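For readers unfamiliar with the two partitioners, the following is a rough sketch of how NIID-1 (Dirichlet/LDA) and NIID-2 (Sharding) splits are commonly generated; it is an illustrative reconstruction under common conventions, not the exact partitioning code used in the experiments.

```python
import numpy as np

def niid1_dirichlet(labels, K, alpha, rng):
    """LDA/Dirichlet partition (NIID-1): one Dirichlet draw per class."""
    idx_per_client = [[] for _ in range(K)]
    for c in np.unique(labels):
        idx = rng.permutation(np.flatnonzero(labels == c))
        shares = rng.dirichlet(alpha * np.ones(K))     # smaller alpha -> more skew
        cuts = (np.cumsum(shares)[:-1] * len(idx)).astype(int)
        for k, part in enumerate(np.split(idx, cuts)):
            idx_per_client[k].extend(part)
    return idx_per_client

def niid2_sharding(labels, K, s, rng):
    """Sharding partition (NIID-2): sort by label, deal s shards per client."""
    order = np.argsort(labels, kind="stable")
    shards = np.array_split(order, K * s)
    perm = rng.permutation(K * s)
    return [np.concatenate([shards[i] for i in perm[k * s:(k + 1) * s]])
            for k in range(K)]

rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=5000)   # dummy CIFAR-10-like labels
clients = niid1_dirichlet(labels, K=100, alpha=0.1, rng=rng)
```

Experimental Results. We report the results of the compared methods under the NIID-1 and NIID-2 settings in Table 1.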
As shown in Table 1, except for slightly weaker results than those of FedDyn in the NIID-2 setting, the AFL obtains very competitive performance compared with the other methods across various settings. The degree of data heterogeneity does not affect the AFL at all. For instance, the accuracy remains $80.75\%$ on CIFAR-10 for the various NIID-1 and NIID-2 settings. Although slight differences could occur among various settings, they barely impact the classification accuracy (an AL property indicated in [48]). The same pattern repeats on CIFAR-100 and Tiny-ImageNet. Uniquely, the AFL obtains identical results for all 3 repeated runs, i.e., the standard deviations are zero! This is because the AFL does not introduce any stochastic element, so the repeated computation in each run is naturally equivalent to the others, hence the zero standard deviation.

Notably, when introducing a pre-trained backbone, the compared methods yield similar results, and even FedAvg can become a competitive baseline. The reason could be that, when incorporating a pre-trained backbone, the FL process can be more stable with a better starting point, so methods stabilizing the FL process become less effective. This phenomenon has also been witnessed in other FL studies [3, 7]. However, the other FL methods still experience performance reductions under severe non-IID scenarios. For example, FedDyn performs relatively well (e.g., $57.55\%$ ) under NIID-1 ( $\alpha = 0.1$ ) on CIFAR-100 but undergoes a performance degradation (to $36.12\%$ ) when $\alpha = 0.01$ . This pattern is rather consistent in the other compared methods, such as FedAvg ( $56.62\% \rightarrow 32.99\%$ ), FedProx ( $56.45\% \rightarrow 33.37\%$ ), and MOON ( $56.58\% \rightarrow 33.34\%$ ), and is also true across all datasets. The performance distributions regarding NIID-2 for these compared methods resemble those in NIID-1, where smaller $s$ values invite performance degradation among the existing FL counterparts. For instance, FedDyn exhibits $73.50\% \rightarrow 64.07\%$ for $s = 4 \rightarrow 2$ on CIFAR-10 while the AFL obtains competitive and identical results (e.g., $80.75\%$ ).

Table 1. The top-1 accuracy (%) of compared methods under two non-IID settings. Settings controlled by $\alpha$ and $s$ are NIID-1 and NIID-2 respectively. The data is reported as average and standard deviation after 3 runs. Results in bold are the best in the same setting.

| Dataset | Setting | FedAvg | FedProx | MOON | FedGen | FedDyn | FedNTD | FedDisco | AFL |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CIFAR-10 | α = 0.1 | 64.02±0.18 | 64.07±0.08 | 63.84±0.03 | 64.14±0.24 | 64.77±0.11 | 64.64±0.02 | 63.83±0.08 | **80.75±0.00** |
| CIFAR-10 | α = 0.05 | 60.52±0.39 | 60.39±0.09 | 60.28±0.17 | 60.65±0.19 | 60.35±0.54 | 61.16±0.33 | 59.90±0.05 | **80.75±0.00** |
| CIFAR-10 | s = 4 | 68.47±0.13 | 68.46±0.08 | 68.47±0.15 | 68.24±0.28 | 73.50±0.11 | 70.24±0.11 | 65.04±0.11 | **80.75±0.00** |
| CIFAR-10 | s = 2 | 57.81±0.03 | 57.61±0.12 | 57.72±0.15 | 57.02±0.18 | 64.07±0.09 | 58.77±0.18 | 58.78±0.02 | **80.75±0.00** |
| CIFAR-100 | α = 0.1 | 56.62±0.12 | 56.45±0.22 | 56.58±0.02 | 56.48±0.17 | 57.55±0.08 | 56.60±0.14 | 55.79±0.04 | **58.56±0.00** |
| CIFAR-100 | α = 0.01 | 32.99±0.20 | 33.37±0.09 | 33.34±0.11 | 33.09±0.09 | 36.12±0.08 | 32.59±0.21 | 25.72±0.08 | **58.56±0.00** |
| CIFAR-100 | s = 10 | 55.76±0.13 | 55.80±0.16 | 55.70±0.25 | 60.93±0.17 | **61.09±0.09** | 54.69±0.15 | 54.65±0.09 | 58.56±0.00 |
| CIFAR-100 | s = 5 | 48.33±0.15 | 48.29±0.14 | 48.34±0.19 | 48.12±0.06 | **59.34±0.11** | 47.00±0.19 | 45.86±0.18 | 58.56±0.00 |
| Tiny-ImageNet | α = 0.1 | 46.04±0.27 | 46.47±0.23 | 46.21±0.14 | 46.27±0.14 | 47.72±0.22 | 46.17±0.16 | 47.48±0.06 | **54.67±0.00** |
| Tiny-ImageNet | α = 0.01 | 32.63±0.19 | 32.26±0.14 | 32.38±0.20 | 32.33±0.14 | 35.19±0.06 | 31.86±0.44 | 27.15±0.10 | **54.67±0.00** |
| Tiny-ImageNet | s = 10 | 39.06±0.26 | 38.97±0.23 | 38.79±0.14 | 38.82±0.16 | 41.36±0.06 | 37.55±0.09 | 38.86±0.12 | **54.67±0.00** |
| Tiny-ImageNet | s = 5 | 29.66±0.19 | 29.17±0.16 | 29.24±0.30 | 29.37±0.25 | 35.18±0.18 | 29.01±0.14 | 27.72±0.18 | **54.67±0.00** |

# 4.2. Analysis on Data Partition

Here we provide broader non-IID partitions to demonstrate AFL's invariance to data partitioning. This includes varying the client number and the non-IID degree. We also provide the IID partition results.

Client-number Invariance. We compare our AFL and FedAvg under the NIID-1 setting on CIFAR-100 and Tiny-ImageNet with $\alpha = 0.1$ , and vary the number of clients from 100 to 500 and 1000. The results are shown in Figure 2. We observe that the AFL keeps an identical performance when scaling the number of clients, while FedAvg experiences a performance decline as the number increases (e.g., $56.57\% \rightarrow 41.01\%$ for $K = 100 \rightarrow 1000$ on CIFAR-100). This provides strong evidence to support the invariance to data partitioning in our AFL. It also showcases the capability of pushing the AFL to large-scale client training scenarios without any performance compromise.

Data Heterogeneity Invariance. Here, we fix the client number to 100 and partition CIFAR-100 under the NIID-1 setting with $\alpha = 0.005, 0.01, 0.1, 1$ , including the IID setting as well. We report the results of AFL and FedAvg in Table 2. FedAvg suffers more accuracy loss (e.g., $57.72\% \to 24.74\%$ for $\alpha = 0.1 \to 0.005$ ) as the data heterogeneity grows. Under the IID partition, FedAvg attains its best performance (i.e., $57.89\%$ ), which is still less competitive than our AFL (i.e., $58.56\%$ ). On the other hand, the AFL obtains identical results (i.e., $58.56\%$ ) across the various settings, including non-IID and IID ones. This is another strong proof of the AA law, indicating the weight-invariant property of AFL. Our AFL is invariant to any degree of data heterogeneity, leading to unchanged performance in all possible data heterogeneous partition scenarios, even in extreme cases (e.g., $\alpha = 0.005$ ).

Table 2. The top-1 classification accuracy $(\%)$ of AFL and FedAvg under different data heterogeneity.
| Acc. (%) | α = 0.005 | α = 0.01 | α = 0.1 | α = 1 | IID |
| --- | --- | --- | --- | --- | --- |
| FedAvg | 24.74 | 33.09 | 56.57 | 57.72 | 57.89 |
| AFL | 58.56 | 58.56 | 58.56 | 58.56 | 58.56 |
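The invariance reported above can also be reproduced on synthetic data: however the samples are split across clients, the single-round aggregate matches the jointly trained head. The sketch below is illustrative, not the authors' released code; for simplicity each "client" shares its sufficient statistics $C_k = X_k^{\top} X_k$ and $X_k^{\top} Y_k$ (equal to $C_k \hat{W}_k$ ), which also covers clients holding fewer samples than the embedding dimension.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((600, 16))   # pooled embeddings
Y = rng.standard_normal((600, 10))   # pooled (one-hot-like) targets
W_joint = np.linalg.pinv(X) @ Y      # centralized LS head

for K in (2, 10, 100):               # arbitrary partitions, same result
    parts = np.array_split(rng.permutation(600), K)
    C_agg = np.zeros((16, 16))
    B_agg = np.zeros((16, 10))
    for idx in parts:                # each client contributes C_k and X_k^T Y_k
        C_agg += X[idx].T @ X[idx]
        B_agg += X[idx].T @ Y[idx]
    W_agg = np.linalg.solve(C_agg, B_agg)
    assert np.allclose(W_agg, W_joint)   # identical regardless of the partition
```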
![](images/ff5af59812f9babf3a9c6e7bb75c5f8153422642c402f1a.jpg)
(a) CIFAR-100

![](images/6468592f6c73bab37b6e26c227cf4add993262f230521630c46372ad8edfce85.jpg)
(b) Tiny-ImageNet

Figure 2. Accuracy over various numbers of clients.

# 4.3. Training Efficiency

Fast Training with Single-round Aggregation. We plot the training evolution curves of accuracy on CIFAR-100 and Tiny-ImageNet in Figure 3 and report the execution time for each method in the legend bars. The compared FL methods take 60s to 100s on CIFAR-100 (100s to 160s on Tiny-ImageNet) to complete an aggregation round, leading to a total training time of 30,000s to 50,000s (50,000s to 80,000s). The AFL, however, spends 236.61s on CIFAR-100 and 349.50s on Tiny-ImageNet, achieving approximately $150\times$ to $200\times$ speedups over its FL counterparts owing to its single aggregation.

![](images/ad6d44bb618aeb472a976ec50a48b4bb738d8cfa9a1ce154bb9775abe9f16f1a.jpg)

![](images/eae531ba3b2bccc94bba3e40f17de8a8e43fd8a49d8594597a61cc95c530dfa8.jpg)

Figure 3. Accuracy curves with communication rounds. Average training time is reported in the legends.

# 4.4. Ablation Study of RI Process

Here, we conduct an ablation study regarding the RI process by reporting the accuracies of AFL with $\alpha = 0.1$ and $K = 100, 500, 1000$ under different values of $\gamma$ . The results without and with the RI process are provided in Table 3. When $\gamma = 0$ (i.e., no regularization involved), the AFL stops working for $K = 500$ and 1000 due to the ill-conditioned matrix scenario (e.g., $N_{k} < y_{\mathrm{e}}$ ). Such an ill-conditioned case is avoided by introducing $\gamma$ . However, the lack of the RI process (see the left-hand results in Table 3) could lead to accuracy loss. For instance, for $\gamma = 100$ , the AFL suffers a loss of about $9\%$ (i.e., $58.56\% \rightarrow 49.62\%$ ). This is the result of regularization accumulation (see (15)). With the RI process, the AFL obtains an identical result across the various $\gamma$ values. More importantly, this demonstrates that adopting the RI process avoids the need to find proper $\gamma$ values. That is, the regularization is a removable intermediary, not a hyperparameter that requires tuning.

Table 3. Ablation study of RI under various $\gamma$ and $K$ . The left/right results in each cell are the performance w/o and w/ the RI process in (16).
| Acc. (%) | γ = 0 | γ = 0.1 | γ = 1 | γ = 10 | γ = 100 |
| --- | --- | --- | --- | --- | --- |
| K=100 | 58.56 / N/A | 58.54 / 58.56 | 58.51 / 58.56 | 58.15 / 58.56 | 55.77 / 58.56 |
| K=500 | 1.11 / N/A | 58.52 / 58.56 | 58.30 / 58.56 | 56.72 / 58.56 | 51.77 / 58.56 |
| K=1000 | 0.75 / N/A | 58.51 / 58.56 | 58.15 / 58.56 | 55.77 / 58.56 | 49.62 / 58.56 |
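The removable-intermediary behavior can be checked directly: aggregate the ridge-regularized client solutions from (13), then strip the accumulated $K\gamma I$ via (16) and compare against the unregularized joint solution. A minimal sketch follows (illustrative variable names; any $\gamma > 0$ should give the same restored weights):

```python
import numpy as np

rng = np.random.default_rng(4)
K, d, c, gamma = 100, 16, 10, 100.0
I = np.eye(d)
clients = [(rng.standard_normal((6, d)), rng.standard_normal((6, c)))
           for _ in range(K)]   # 6 samples < d = 16: each client is rank-deficient

C_r_agg = np.zeros((d, d))
num = np.zeros((d, c))
for X_k, Y_k in clients:
    C_k_r = X_k.T @ X_k + gamma * I               # regularized autocorrelation
    W_k_r = np.linalg.solve(C_k_r, X_k.T @ Y_k)   # local ridge solution, cf. (13)
    C_r_agg += C_k_r                              # accumulates C_agg + K*gamma*I, (15)
    num += C_k_r @ W_k_r                          # normal-equation form of (9)
W_r_agg = np.linalg.solve(C_r_agg, num)

# (16): strip the accumulated regularization to restore the joint LS weight.
W_restored = np.linalg.solve(C_r_agg - K * gamma * I, C_r_agg @ W_r_agg)

X_joint = np.vstack([X for X, _ in clients])
Y_joint = np.vstack([Y for _, Y in clients])
assert np.allclose(W_restored, np.linalg.pinv(X_joint) @ Y_joint)
```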
# 4.5. Validation with Different Backbones

To explore the effect of different backbones in the AFL, we extend the experiments with VGG11 [28] and ViT-B-16 [6]. All these backbones are pre-trained on ImageNet-1k, and we conduct the experiments under the same setting as in Section 4.2. Due to the invariance to data partitioning, we report only one result per dataset. As shown in Table 4, the AFL obtains competitive results with all the different pre-trained backbones.

Table 4. The top-1 accuracy (%) of the AFL with different backbones, including ResNet-18, VGG11 and ViT-B-16.

| Acc. (%) | CIFAR-10 | CIFAR-100 | Tiny-ImageNet |
| --- | --- | --- | --- |
| ResNet-18 | 80.75 | 58.56 | 54.67 |
| VGG11 | 82.72 | 60.43 | 54.73 |
| ViT-B-16 | 93.92 | 75.45 | 82.02 |

# 5. Limitations and Future Work

Utilizing Pre-trained Backbone. The AFL approach is both facilitated and constrained by the requirement of having a well-trained feature extractor. However, this limitation has
been significantly mitigated by the emergence of reusing pre-trained models for new tasks. This "pre-trained backbone + downstream task" paradigm has become a standard practice in numerous deep learning domains, offering improved generalization and reduced computational costs. FL can further enhance this paradigm, and we validate that collaboration can still be beneficial with pre-trained backbones in Supplementary Materials F. The proposal of the AFL aligns with these recent research trends, making it a sensible FL advancement.

Partially Participating Clients and Stragglers. The AFL formulates a single-round aggregation for FL systems, promoting rapid convergence and reducing communication overhead. However, challenges arise when clients participate partially or when stragglers impede progress. Since clients can only contribute to the aggregation after finishing their local computations, the AFL needs to wait for all the clients. This potentially hampers the AFL's overall efficiency and inspires us to further refine the AFL to address these issues.

Linear Assumptions of AFL. The AFL is established upon linear classifiers and may be less effective with non-linear data distributions. To address this, the AFL can incorporate non-linear projections, including non-linear activations or kernel functions. Also, for multi-layer models, the AFL can formulate a local least-squares problem at each layer by label projection [49]. These techniques have been utilized in various AL-based works [39], and the AA law still holds theoretically. We will conduct a further exploration in future work.

# 6. Conclusion

In this paper, we introduce a gradient-free FL framework named analytic federated learning (AFL). The AFL unveils analytical solutions both in the local client training stage and in the aggregation stage. This leads to one-epoch local training, single-round aggregation, and fast convergence. In particular, the single-round aggregation property is theoretically supported and proved by the well-formulated AA law. Additionally, by introducing the RI process, we re-establish the AFL's optimality, which could otherwise be compromised in rank-deficient scenarios that typically arise with a large number of clients. The AFL demonstrates its invariance to data partitioning, a property that enables several appealing FL characteristics such as data heterogeneity invariance and client-number invariance. These characteristics are empirically validated through experiments across various settings, where the AFL achieves consistent and competitive performance.

# Acknowledgment

This research was supported by the National Natural Science Foundation of China (62306117, 62406114), the Guangzhou Basic and Applied Basic Research Foundation (2024A04J3681, 2023A04J1687), the GJYC program of Guangzhou (2024D03J0005), the National Key R & D Project from the Minister of Science and Technology (2024YFA1211500), and the Fundamental Research Funds for the Central Universities (2024ZYGXZR074).

# References

[1] Durmus Alp Emre Acar, Yue Zhao, Ramon Matas, Matthew Mattina, Paul Whatmough, and Venkatesh Saligrama. Federated learning based on dynamic regularization. In International Conference on Learning Representations, 2021. 6
[2] Ilai Bistritz, Ariana Mann, and Nicholas Bambos. Distributed distillation for on-device learning. In Advances in Neural Information Processing Systems, pages 22593-22604. Curran Associates, Inc., 2020. 2
[3] Hong-You Chen, Cheng-Hao Tu, Ziwei Li, Han Wei Shen, and Wei-Lun Chao.
On the importance and applicability of pretraining for federated learning. In The Eleventh International Conference on Learning Representations, 2023. 6 +[4] Randall E. Cline. Representations for the generalized inverse of a partitioned matrix. Journal of the Society for Industrial and Applied Mathematics, 12(3):588-600, 1964. 4 +[5] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248-255, 2009. 6 +[6] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale, 2021. 8 +[7] Chun-Mei Feng, Bangjun Li, Xinxing Xu, Yong Liu, Huazhu Fu, and Wangmeng Zuo. Learning federated visual prompt in null space for mri reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 8064-8073, 2023. 1, 6 +[8] Ping Guo, Michael R Lyu, and NE Mastorakis. Pseudoinverse learning algorithm for feedforward neural networks. Advances in Neural Networks and Applications, pages 321-326, 2001. 2, 3 +[9] Qiushan Guo, Xinjiang Wang, Yichao Wu, Zhipeng Yu, Ding Liang, Xiaolin Hu, and Ping Luo. Online knowledge distillation via collaborative learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 2 +[10] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. 6 +[11] Georgios A Kaissis, Marcus R Makowski, Daniel Rückert, and Rickmer F Braren. Secure, privacy-preserving and feder- + +ated machine learning in medical imaging. Nature Machine Intelligence, 2(6):305-311, 2020. 1 +[12] Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009. 6 +[13] Rajesh Kumar, Abdullah Aman Khan, Jay Kumar, Zakria, Noorbakhsh Amiri Golilarz, Simin Zhang, Yang Ting, Chengyu Zheng, and Wenyong Wang. Blockchain-federated-learning and deep learning models for Covid-19 detection using ct imaging. IEEE Sensors Journal, 21(14):16301-16314, 2021. 1 +[14] Gihun Lee, Minchan Jeong, Yongjin Shin, Sangmin Bae, and Se-Young Yun. Preservation of the global knowledge by nottrue distillation in federated learning. In Advances in Neural Information Processing Systems, pages 38461-38474. Curran Associates, Inc., 2022. 6 +[15] Qinbin Li, Bingsheng He, and Dawn Song. Model-contrastive federated learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10713–10722, 2021. 6 +[16] Qinbin Li, Yiqun Diao, Quan Chen, and Bingsheng He. Federated learning on non-iid data silos: An experimental study. In 2022 IEEE 38th International Conference on Data Engineering (ICDE), pages 965-978, 2022. 1 +[17] Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, and Virginia Smith. Federated optimization in heterogeneous networks. In Proceedings of Machine Learning and Systems, pages 429-450, 2020. 1, 2, 6 +[18] Xin-Chun Li, Yi-Chu Xu, Shaoming Song, Bingshuai Li, Yinchuan Li, Yunfeng Shao, and De-Chuan Zhan. Federated learning with position-aware neurons. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10082-10091, 2022. 2 +[19] Zexi Li, Tao Lin, Xinyi Shang, and Chao Wu. Revisiting weighted aggregation in federated learning with neural networks. In Proceedings of the 40th International Conference on Machine Learning, pages 19767-19788. PMLR, 2023. 2, 6 +[20] Tao Lin, Lingjing Kong, Sebastian U Stich, and Martin Jaggi. Ensemble distillation for robust model fusion in federated learning. In Advances in Neural Information Processing Systems, pages 2351-2363. Curran Associates, Inc., 2020. 6 +[21] Tao Lin, Lingjing Kong, Sebastian U Stich, and Martin Jaggi. Ensemble distillation for robust model fusion in federated learning. In Advances in Neural Information Processing Systems, pages 2351-2363. Curran Associates, Inc., 2020. 2 +[22] Zichen Liu, Chao Du, Wee Sun Lee, and Min Lin. Locality sensitive sparse encoding for learning world models online. In The Twelfth International Conference on Learning Representations, 2024. 4 +[23] Shubham Malaviya, Manish Shukla, and Sachin Lodha. Reducing communication overhead in federated learning for pre-trained language models using parameter-efficient finetuning. In Proceedings of The 2nd Conference on Lifelong Learning Agents, pages 456-469. PMLR, 2023. 1 +[24] Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-Efficient Learning of Deep Networks from Decentralized + +Data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, pages 1273-1282. PMLR, 2017. 1, 2, 6 +[25] Mohammed Ali mnmmoustafa. Tiny imagenet, 2017. 6 +[26] J. Park and I. W. Sandberg. Universal approximation using radial-basis-function networks. Neural Computation, 3(2): 246-257, 1991. 3 +[27] Geet Shingi. A federated learning based approach for loan defaults prediction. In 2020 International Conference on Data Mining Workshops (ICDMW), pages 362-368, 2020. 1 +[28] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition, 2015. 8 +[29] Yue Tan, Guodong Long, LU LIU, Tianyi Zhou, Qinghua Lu, Jing Jiang, and Chengqi Zhang. Fedproto: Federated prototype learning across heterogeneous clients. Proceedings of the AAAI Conference on Artificial Intelligence, 36(8):8432-8440, 2022. 1 +[30] Yue Tan, Guodong Long, Jie Ma, LU LIU, Tianyi Zhou, and Jing Jiang. Federated learning from pre-trained models: A contrastive learning approach. In Advances in Neural Information Processing Systems, pages 19332-19344. Curran Associates, Inc., 2022. 1 +[31] Kar-Ann Toh. Learning from the kernel and the range space. In 2018 IEEE/ACIS 17th International Conference on Computer and Information Science (ICIS), pages 1–6, 2018. 3 +[32] Kar-Ann Toh. Learning from the kernel and the range space. In the Proceedings of the 17th 2018 IEEE Conference on Computer and Information Science, pages 417–422. IEEE, 2018. 2 +[33] Haozhao Wang, Yichen Li, Wenchao Xu, Ruixuan Li, Yufeng Zhan, and Zhigang Zeng. DaFKD: Domain-aware federated knowledge distillation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 20412-20421, 2023. 2 +[34] Haozhao Wang, Haoran Xu, Yichen Li, Yuan Xu, Ruixuan Li, and Tianwei Zhang. FedCDA: Federated learning with cross-rounds divergence-aware aggregation. In The Twelfth International Conference on Learning Representations, 2024. 2 +[35] Jianyu Wang, Qinghua Liu, Hao Liang, Gauri Joshi, and H. Vincent Poor. 
Tackling the objective inconsistency problem in heterogeneous federated optimization. In Advances in Neural Information Processing Systems, pages 7611-7623. Curran Associates, Inc., 2020. 1, 2 +[36] Jue Wang, Ping Guo, and Yanjun Li. DensePILAE: a feature reuse pseudoinverse learning algorithm for deep stacked autoencoder. Complex & Intelligent Systems, pages 1-11, 2021. 3 +[37] X. Wang, T. Zhang, and R. Wang. Noniterative deep learning: Incorporating restricted Boltzmann machine into multilayer random weight neural networks. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 49(7):1299-1308, 2019. 3 +[38] Guile Wu and Shaogang Gong. Peer collaborative learning for online knowledge distillation. Proceedings of the AAAI Conference on Artificial Intelligence, 35(12):10302-10310, 2021. 2 +[39] Yicheng Xu, Yuxin Chen, Jiahao Nie, Yusong Wang, Huiping Zhuang, and Manabu Okumura. Advancing cross-domain discriminability in continual learning of vision-language models. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. 8 +[40] Fu-En Yang, Chien-Yi Wang, and Yu-Chiang Frank Wang. Efficient model personalization in federated learning via client-specific prompt generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 19159-19168, 2023. 1 +[41] Wensi Yang, Yuhang Zhang, Kejiang Ye, Li Li, and ChengZhong Xu. FFD: A federated learning based method for credit card fraud detection. In Big Data - BigData 2019, pages 18-32, Cham, 2019. Springer International Publishing. 1 +[42] Rui Ye, Mingkai Xu, Jianyu Wang, Chenxin Xu, Siheng Chen, and Yanfeng Wang. FedDisco: Federated learning with discrepancy-aware collaboration. In Proceedings of the 40th International Conference on Machine Learning, pages 39879-39902. PMLR, 2023. 6 +[43] Fuxun Yu, Weishan Zhang, Zhuwei Qin, Zirui Xu, Di Wang, Chenchen Liu, Zhi Tian, and Xiang Chen. Fed2: Feature-aligned federated learning. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pages 2066-2074, 2021. 2 +[44] Lin Zhang, Li Shen, Liang Ding, Dacheng Tao, and Ling-Yu Duan. Fine-tuning global model via data-free knowledge distillation for non-IID federated learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10174-10183, 2022. 2 +[45] Zhuo Zhang, Yuanhang Yang, Yong Dai, Qifan Wang, Yue Yu, Lizhen Qu, and Zenglin Xu. FedPETuning: When federated learning meets the parameter-efficient tuning methods of pre-trained language models. In Findings of the Association for Computational Linguistics: ACL 2023, pages 9963-9977. Association for Computational Linguistics (ACL), 2023. 1 +[46] Zhuangdi Zhu, Junyuan Hong, and Jiayu Zhou. Data-free knowledge distillation for heterogeneous federated learning. In Proceedings of the 38th International Conference on Machine Learning, pages 12878-12889. PMLR, 2021. 6 +[47] Zhuangdi Zhu, Junyuan Hong, and Jiayu Zhou. Data-free knowledge distillation for heterogeneous federated learning. In Proceedings of the 38th International Conference on Machine Learning, pages 12878-12889. PMLR, 2021. 2 +[48] Huiping Zhuang, Zhiping Lin, and Kar-Ann Toh. Blockwise recursive Moore-Penrose inverse for network learning. IEEE Transactions on Systems, Man, and Cybernetics: Systems, pages 1-14, 2021. 2, 3, 6 +[49] Huiping Zhuang, Zhiping Lin, and Kar-Ann Toh.
Correlation projection for analytic learning of a classification network. Neural Processing Letters, pages 1-22, 2021. 8 +[50] Huiping Zhuang, Zhenyu Weng, Hongxin Wei, Renchunzi Xie, Kar-Ann Toh, and Zhiping Lin. ACIL: Analytic class incremental learning with absolute memorization and privacy protection. In Advances in Neural Information Processing Systems, pages 11602-11614. Curran Associates, Inc., 2022. 3, 4 +[51] Huiping Zhuang, Zhenyu Weng, Run He, Zhiping Lin, and Ziqian Zeng. GKEAL: Gaussian kernel embedded analytic learning for few-shot class incremental task. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7746-7755, 2023. 3 +[52] Huiping Zhuang, Yizhu Chen, Di Fang, Run He, Kai Tong, Hongxin Wei, Ziqian Zeng, and Cen Chen. GACL: Exemplar-free generalized analytic continual learning. In Advances in Neural Information Processing Systems. Curran Associates, Inc., 2024. 3, 6 +[53] Huiping Zhuang, Run He, Kai Tong, Ziqian Zeng, Cen Chen, and Zhiping Lin. DS-AL: A dual-stream analytic learning for exemplar-free class-incremental learning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(15):17237-17244, 2024. 3 +[54] Huiping Zhuang, Yuchen Liu, Run He, Kai Tong, Ziqian Zeng, Cen Chen, Yi Wang, and Lap-Pui Chau. F-OAL: Forward-only online analytic learning with fast training and low memory footprint in class incremental learning. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. 6 \ No newline at end of file diff --git a/2025/AFL_ A Single-Round Analytic Approach for Federated Learning with Pre-trained Models/images.zip b/2025/AFL_ A Single-Round Analytic Approach for Federated Learning with Pre-trained Models/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..3bfe079ad39bca085541de786655eb9fe4767d62 --- /dev/null +++ b/2025/AFL_ A Single-Round Analytic Approach for Federated Learning with Pre-trained Models/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e79236e99b035b63ff75e9be46898ff2698d2f12b03b80b18765788b2c3cafe3 +size 390382 diff --git a/2025/AFL_ A Single-Round Analytic Approach for Federated Learning with Pre-trained Models/layout.json b/2025/AFL_ A Single-Round Analytic Approach for Federated Learning with Pre-trained Models/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..67bbf5a60fb318188068df805a97a58feafef7fb --- /dev/null +++ b/2025/AFL_ A Single-Round Analytic Approach for Federated Learning with Pre-trained Models/layout.json @@ -0,0 +1,11141 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 87, + 102, + 523, + 138 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 87, + 102, + 523, + 138 + ], + "spans": [ + { + "bbox": [ + 87, + 102, + 523, + 138 + ], + "type": "text", + "content": "AFL: A Single-Round Analytic Approach for Federated Learning with Pre-trained Models" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 125, + 160, + 485, + 189 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 125, + 160, + 485, + 189 + ], + "spans": [ + { + "bbox": [ + 125, + 160, + 485, + 189 + ], + "type": "text", + "content": "Run He" + }, + { + "bbox": [ + 125, + 160, + 485, + 189 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 125, + 160, + 485, + 189 + ], + "type": "text", + "content": ", Kai Tong" + }, + { + "bbox": [ + 125, + 160, + 485, + 189 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { +
"bbox": [ + 125, + 160, + 485, + 189 + ], + "type": "text", + "content": ", Di Fang" + }, + { + "bbox": [ + 125, + 160, + 485, + 189 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 125, + 160, + 485, + 189 + ], + "type": "text", + "content": ", Han Sun" + }, + { + "bbox": [ + 125, + 160, + 485, + 189 + ], + "type": "inline_equation", + "content": "^{2,3}" + }, + { + "bbox": [ + 125, + 160, + 485, + 189 + ], + "type": "text", + "content": ", Ziqian Zeng" + }, + { + "bbox": [ + 125, + 160, + 485, + 189 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 125, + 160, + 485, + 189 + ], + "type": "text", + "content": ", Haoran Li" + }, + { + "bbox": [ + 125, + 160, + 485, + 189 + ], + "type": "inline_equation", + "content": "^{4}" + }, + { + "bbox": [ + 125, + 160, + 485, + 189 + ], + "type": "text", + "content": ", Tianyi Chen" + }, + { + "bbox": [ + 125, + 160, + 485, + 189 + ], + "type": "inline_equation", + "content": "^{5}" + }, + { + "bbox": [ + 125, + 160, + 485, + 189 + ], + "type": "text", + "content": ", Huiping Zhuang" + }, + { + "bbox": [ + 125, + 160, + 485, + 189 + ], + "type": "inline_equation", + "content": "^{1*}" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 147, + 189, + 463, + 203 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 147, + 189, + 463, + 203 + ], + "spans": [ + { + "bbox": [ + 147, + 189, + 463, + 203 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 147, + 189, + 463, + 203 + ], + "type": "text", + "content": " South China University of Technology " + }, + { + "bbox": [ + 147, + 189, + 463, + 203 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 147, + 189, + 463, + 203 + ], + "type": "text", + "content": " Tsinghua University" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 121, + 203, + 489, + 217 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 121, + 203, + 489, + 217 + ], + "spans": [ + { + "bbox": [ + 121, + 203, + 489, + 217 + ], + "type": "text", + "content": "3 Beijing National Research Center for Information Science and Technology" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 133, + 217, + 477, + 232 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 217, + 477, + 232 + ], + "spans": [ + { + "bbox": [ + 133, + 217, + 477, + 232 + ], + "type": "text", + "content": "4 The Hong Kong University of Science and Technology 5 Microsoft" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 205, + 232, + 401, + 245 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 205, + 232, + 401, + 245 + ], + "spans": [ + { + "bbox": [ + 205, + 232, + 401, + 245 + ], + "type": "text", + "content": "\\*corresponding: hpzhuang@scut.edu.cn" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 151, + 273, + 200, + 285 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 151, + 273, + 200, + 285 + ], + "spans": [ + { + "bbox": [ + 151, + 273, + 200, + 285 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 54, + 298, + 297, + 586 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 298, + 297, + 586 + ], + "spans": [ + { + "bbox": [ + 54, + 298, + 297, + 586 + ], + "type": "text", + "content": "In this paper, we introduce analytic federated learning (AFL), a new training paradigm that brings analytical (i.e., closed-form) solutions to the federated learning (FL) with pretrained models. 
Our AFL draws inspiration from analytic learning—a gradient-free technique that trains neural networks with analytical solutions in one epoch. In the local client training stage, the AFL facilitates a one-epoch training, eliminating the necessity for multi-epoch updates. In the aggregation stage, we derive an absolute aggregation (AA) law. This AA law allows a single-round aggregation, reducing heavy communication overhead and achieving fast convergence by removing the need for multiple aggregation rounds. More importantly, the AFL exhibits a property of invariance to data partitioning, meaning that regardless of how the full dataset is distributed among clients, the aggregated result remains identical. This property could spawn various benefits, such as data heterogeneity invariance and client-number invariance. We conduct experiments across various FL settings, including extremely non-IID ones and scenarios with a large number of clients (e.g., " + }, + { + "bbox": [ + 54, + 298, + 297, + 586 + ], + "type": "inline_equation", + "content": "\\geq 1000" + }, + { + "bbox": [ + 54, + 298, + 297, + 586 + ], + "type": "text", + "content": "). In all these settings, our AFL consistently performs competitively while existing FL techniques encounter various obstacles. Our code is available at https://github.com/ZHUANGHP/Analytic-federated-learning." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 56, + 609, + 137, + 621 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 609, + 137, + 621 + ], + "spans": [ + { + "bbox": [ + 56, + 609, + 137, + 621 + ], + "type": "text", + "content": "1. Introduction" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 55, + 629, + 296, + 701 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 629, + 296, + 701 + ], + "spans": [ + { + "bbox": [ + 55, + 629, + 296, + 701 + ], + "type": "text", + "content": "Federated learning (FL) [24] aims to collectively train a machine learning model over data silos by aggregating their individually trained weights, while preserving the privacy of their source data. This training paradigm has gained great popularity, particularly in sensitive domains where data privacy is crucial, such as in banks [27, 41] and hospitals [11, 13]." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 67, + 702, + 295, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 702, + 295, + 714 + ], + "spans": [ + { + "bbox": [ + 67, + 702, + 295, + 714 + ], + "type": "text", + "content": "Conventional FL techniques rely on weight aggregation" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 274, + 555, + 393 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 274, + 555, + 393 + ], + "spans": [ + { + "bbox": [ + 313, + 274, + 555, + 393 + ], + "type": "text", + "content": "among clients over multiple rounds of training. The objective is to achieve convergence and approximate its joint-training counterpart, where all clients' data are accessible in a single location. To accomplish this, many contributions have been made. One widely recognized method is FedAvg [24]. Relying on a large number of aggregation rounds, the FedAvg employs a simple yet effective weight averaging technique across local clients. Building upon this, various methods have been proposed (e.g., the FedProx [17] and the FedNova [35]), each with its own specific focus within the field of FL."
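As a point of reference for the baseline described above, the weight-averaging step that FedAvg [24] performs at every aggregation round can be written in a few lines. The following is a minimal sketch (the function and variable names are ours, not the paper's), assuming PyTorch-style state dicts and the usual weighting of each client by its sample count:

```python
import torch

def fedavg_aggregate(client_states, client_sizes):
    """One FedAvg aggregation round: average the clients' state dicts,
    weighting client k by its share of the total sample count."""
    total = float(sum(client_sizes))
    return {
        name: sum((n / total) * state[name].float()
                  for state, n in zip(client_states, client_sizes))
        for name in client_states[0]
    }
```

FedAvg repeats this step over many communication rounds; the AFL developed below removes that loop entirely with a single closed-form aggregation.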
+ } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 398, + 556, + 506 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 398, + 556, + 506 + ], + "spans": [ + { + "bbox": [ + 313, + 398, + 556, + 506 + ], + "type": "text", + "content": "However, training a model from scratch via FL can be computationally intensive and demanding in terms of communication bandwidth, especially with large models and numerous participating clients. Several efforts have explored utilizing pre-trained models to mitigate these challenges [23, 45]. Typically, this involves freezing the backbone and only updating and sharing lightweight parameters, such as prototypes [29, 30] or prompts [7, 40], to reduce the substantial training costs." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 313, + 510, + 557, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 510, + 557, + 714 + ], + "spans": [ + { + "bbox": [ + 313, + 510, + 557, + 714 + ], + "type": "text", + "content": "Although leveraging pre-trained models can circumvent the high costs associated with training backbones from scratch, existing FL techniques with pre-trained models are primarily based on a gradient-based iterative approach, necessitating iterative optimization on each client and multi-round aggregation across clients. The gradient-based optimization used in the existing FL faces various challenges and imposes several constraints. The faced challenges include, but are not limited to: 1) Data heterogeneity, where the data distribution in each client is not independently identical (non-IID), even with mutually exclusive data categories across different clients (i.e., pathological distribution), 2) Large client number, where the aggregation involving a significant number of clients (i.e., " + }, + { + "bbox": [ + 313, + 510, + 557, + 714 + ], + "type": "inline_equation", + "content": "\\geq 1000" + }, + { + "bbox": [ + 313, + 510, + 557, + 714 + ], + "type": "text", + "content": ") can lead to substantial performance degradation in FL systems as the client count increases [16], 3) Slow convergence: where FL methods may struggle to converge within limited communication rounds, especially" + } + ] + } + ], + "index": 15 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "spans": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "text", + "content": "CVF" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "spans": [ + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "type": "text", + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 295, + 749, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 749, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 749, + 315, + 757 + ], + "type": "text", + "content": "4988" + } + ] + } + ], + "index": 16 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 294, + 120 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 294, + 120 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 294, + 120 + ], + "type": "text", + "content": "in severe non-IID scenarios, and 4) High communication cost, where multi-round aggregation in existing FL methods escalates the communication costs associated with parameter sharing between clients and servers." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 122, + 297, + 337 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 122, + 297, + 337 + ], + "spans": [ + { + "bbox": [ + 56, + 122, + 297, + 337 + ], + "type": "text", + "content": "In this paper, we propose a new FL training framework named analytic federated learning (AFL), which provides a single-round aggregation for federated learning with pretrained models. The AFL draws inspiration from analytic learning [8, 32, 48]—a gradient-free technique with a closed-form solution obtained from reshaping the network training into linearized formulation. The AL paradigm receives several benefits over gradient-based techniques. First, it is gradient-free, thereby avoiding gradient-related issues, such as vanishing and exploding gradients. Second, the analytical solution frees AL from convergence issues during training. Also, the AL requires only one visit to the dataset while gradient-based mechanism usually needs hundreds of epochs or beyond. These properties are attractive in FL to accomplish fast convergence and low communication cost. Here, we are able to incorporate this mechanism into the FL domain to overcome limitations inherent in gradient-based techniques. Our contributions are summarized as follows:" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 342, + 295, + 713 + ], + "type": "list", + "angle": 0, + "index": 7, + "blocks": [ + { + "bbox": [ + 55, + 342, + 295, + 390 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 342, + 295, + 390 + ], + "spans": [ + { + "bbox": [ + 55, + 342, + 295, + 390 + ], + "type": "text", + "content": "- We propose the AFL, a gradient-free FL framework with analytical (closed-form) solutions. These analytical solutions apply both in the local client training stage and the aggregation stage." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 391, + 295, + 461 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 391, + 295, + 461 + ], + "spans": [ + { + "bbox": [ + 55, + 391, + 295, + 461 + ], + "type": "text", + "content": "- In the local stage, we adopt a pre-trained network to harness input embeddings, and formulate the training in each client into a localized linear regression problem. This leads to a least squares (LS) based one-epoch client training, eliminating the need for multi-epoch training and enabling fast convergence in local clients." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 463, + 295, + 558 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 463, + 295, + 558 + ], + "spans": [ + { + "bbox": [ + 55, + 463, + 295, + 558 + ], + "type": "text", + "content": "- In the aggregation stage, we derive an absolute aggregation (AA) law in analytical form, optimally establishing a single-round aggregation. That is, the aggregation happens only once, avoiding multiple FL rounds that bring high communication costs. Additionally, in scenarios where the AA law becomes suboptimal due to a large number of clients, we introduce a regularization intermediary (RI) process to restore its optimality." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 558, + 295, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 558, + 295, + 665 + ], + "spans": [ + { + "bbox": [ + 55, + 558, + 295, + 665 + ], + "type": "text", + "content": "- Owing to analytical solutions, the AFL exhibits a property of invariance to data partitioning. This means that regardless of how the full dataset is distributed (e.g., non-IID) among local clients, the result remains identical. This property spawns several appealing characteristics: i) Data heterogeneity invariance, where the result is invariant to arbitrary heterogeneous data partition scenarios. ii) Client-number invariance, which produces identical results regardless of the number of clients involved." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 665, + 295, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 665, + 295, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 665, + 295, + 713 + ], + "type": "text", + "content": "- We conduct extensive experiments spanning diverse scenarios, including a wide variety of non-IID partitions and large client number (up to 1000) settings. Our AFL consistently showcases competitive performance throughout all" + } + ] + } + ], + "index": 6 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 323, + 72, + 526, + 84 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 323, + 72, + 526, + 84 + ], + "spans": [ + { + "bbox": [ + 323, + 72, + 526, + 84 + ], + "type": "text", + "content": "these settings when compared with other methods." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 314, + 105, + 406, + 118 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 105, + 406, + 118 + ], + "spans": [ + { + "bbox": [ + 314, + 105, + 406, + 118 + ], + "type": "text", + "content": "2. Related Works" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 129, + 555, + 165 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 129, + 555, + 165 + ], + "spans": [ + { + "bbox": [ + 313, + 129, + 555, + 165 + ], + "type": "text", + "content": "In this section, we review existing related FL literature. Additionally, we explore various AL techniques and their variants to reveal their underlying mechanisms." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 183, + 474, + 196 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 183, + 474, + 196 + ], + "spans": [ + { + "bbox": [ + 313, + 183, + 474, + 196 + ], + "type": "text", + "content": "2.1. 
Federated Learning Methods" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 204, + 555, + 443 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 204, + 555, + 443 + ], + "spans": [ + { + "bbox": [ + 313, + 204, + 555, + 443 + ], + "type": "text", + "content": "Following the FedAvg [24], to address non-IID issues in FL, various methods have been proposed. One common approach involves assessing the significance of parameters during aggregation to ensure that local updates do not diverge substantially from the global model. For instance, the FedProx [17] restricts the size of local updates, while the FedNova [35] employs a normalized averaging method to eliminate target inconsistency while maintaining fast error convergence. These methods are frequently used as baselines, and we compare our results against them in our experiments. Another set of methods focuses on determining adaptive aggregation weights obtained from multiple clients. The Fed-LAW [19] learns these weights to achieve a global model with state-of-the-art performance, though it requires a proxy dataset to learn the weights, making the results sensitive to the selection of the proxy dataset. To address this sensitivity, the FedCDA [34] proposes a proxy-free method that reduces each client's deviation from the local models of other participants and selects a local model from its multiple recent models acquired over several rounds." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 447, + 556, + 626 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 447, + 556, + 626 + ], + "spans": [ + { + "bbox": [ + 313, + 447, + 556, + 626 + ], + "type": "text", + "content": "Some methods address the parameter order mismatch issue across clients, which can occur during global aggregation. The Fed2 [43] designs a model structure adaptation method to ensure explicit feature allocation across different network structures. Similarly, the method in [18] seeks a position-aware neuron to fuse position-related values (i.e., position encodings) into neuron outputs. Distillation methods [2, 9, 33, 38] represent another branch, where the average of logits from client models is used for the local model aggregation, thereby enhancing generalization. [21] pioneers to apply knowledge distillation on the server side, transferring knowledge from multiple local models to the global model using an unlabeled proxy dataset. To overcome the limitation of using a proxy dataset, recent studies such as [47] and [44] suggest substituting the proxy dataset with generated data." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 630, + 556, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 630, + 556, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 630, + 556, + 715 + ], + "type": "text", + "content": "Existing FL techniques have several significant drawbacks, including challenges with data heterogeneity, large client numbers, convergence issues and high communication costs. Our AFL framework addresses these issues by utilizing a gradient-free, closed-form analytic learning approach, avoiding gradient-related problems (e.g., multi-epoch training, convergence issues and multi-round aggregation)." 
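To make concrete how FedProx [17] "restricts the size of local updates" as described above, its standard formulation augments each client's local loss with a proximal term (mu/2)·||w − w_global||². A minimal PyTorch-style sketch (our naming; `mu` is the proximal coefficient, an assumption for illustration):

```python
import torch

def fedprox_penalty(local_model, global_params, mu=0.01):
    """Proximal term (mu / 2) * ||w - w_global||^2 added to the local
    objective so that local updates stay close to the global model."""
    penalty = 0.0
    for p, g in zip(local_model.parameters(), global_params):
        penalty = penalty + (p - g.detach()).pow(2).sum()
    return 0.5 * mu * penalty
```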
+ } + ] + } + ], + "index": 14 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 295, + 749, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 749, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 749, + 315, + 757 + ], + "type": "text", + "content": "4989" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 164, + 85 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 72, + 164, + 85 + ], + "spans": [ + { + "bbox": [ + 56, + 72, + 164, + 85 + ], + "type": "text", + "content": "2.2. Analytic Learning" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 91, + 297, + 283 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 91, + 297, + 283 + ], + "spans": [ + { + "bbox": [ + 55, + 91, + 297, + 283 + ], + "type": "text", + "content": "The AL has been developed as a strategy to address issues associated with gradient-based update, such as gradient vanishing/exploding, divergence during iteration, and long training time due to multi-epoch training. The AL is also referred to as pseudoinverse learning [8] owing to its utilization of matrix inversion. The AL starts from shallow learning, which is investigated prior to the advent of deep networks in the realm of research. For instance, the radial basis network [26] trains parameters using an LS estimation after performing a kernel transformation in the first layer. The multilayer AL [31, 37] comes up with a one-epoch training style, using LS techniques to resolve linear segments transformed by nonlinear network. One instance of this method is the dense pseudoinverse autoencoder [36], which uses LS solutions to combine shallow and deep features to train a stacked autoencoder layer-by-layer." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 286, + 296, + 441 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 286, + 296, + 441 + ], + "spans": [ + { + "bbox": [ + 55, + 286, + 296, + 441 + ], + "type": "text", + "content": "Nonetheless, earlier AL techniques train their weights by processing the entire dataset simultaneously, therefore facing memory challenge. This memory concern is alleviated by the block-wise recursive Moore-Penrose inverse [48], which equivalently replaces the joint learning with a recursive approach. This recursive equivalent property echoes well with the continual learning community. Naturally, analytic continual learning techniques [50, 51, 53] adopt this equivalent characteristic, thrive in handling the catastrophic forgetting problem, and are invariant to the sequential data partition in continual learning. Our AFL draws inspiration from these adaptations, aiming to introduce similar equivalent patterns (e.g., invariant to heterogeneous data) to the FL community." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 472, + 219, + 486 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 472, + 219, + 486 + ], + "spans": [ + { + "bbox": [ + 55, + 472, + 219, + 486 + ], + "type": "text", + "content": "3. 
Analytic Federated Learning" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 494, + 297, + 626 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 494, + 297, + 626 + ], + "spans": [ + { + "bbox": [ + 55, + 494, + 297, + 626 + ], + "type": "text", + "content": "In this section, we provide a detailed exposition of AFL derivations, organized into a local training stage and a centralized aggregation stage. In the local stage, a pre-trained backbone serves as a feature extractor, facilitating an AL network learning that allows the training to be completed in one epoch. In the aggregation stage, we introduce the AA law, establishing a single-round aggregation. We elaborate on AFL's invariance to data partitioning here, bringing benefits such as data heterogeneity invariance, client-number invariance and fast convergence in a single round. An Overview of the proposed AFL paradigm is depicted in Figure 1." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 628, + 296, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 628, + 296, + 715 + ], + "spans": [ + { + "bbox": [ + 55, + 628, + 296, + 715 + ], + "type": "text", + "content": "Prior to further developments, here let " + }, + { + "bbox": [ + 55, + 628, + 296, + 715 + ], + "type": "inline_equation", + "content": "\\mathcal{D} = \\{\\mathcal{D}_k\\}_{k=1}^K" + }, + { + "bbox": [ + 55, + 628, + 296, + 715 + ], + "type": "text", + "content": " be the complete training data, where " + }, + { + "bbox": [ + 55, + 628, + 296, + 715 + ], + "type": "inline_equation", + "content": "\\mathcal{D}_k \\sim \\{\\mathcal{X}_{k,i}, y_{k,i}\\}_{i=1}^{N_k}" + }, + { + "bbox": [ + 55, + 628, + 296, + 715 + ], + "type": "text", + "content": " suggests an " + }, + { + "bbox": [ + 55, + 628, + 296, + 715 + ], + "type": "inline_equation", + "content": "N_k" + }, + { + "bbox": [ + 55, + 628, + 296, + 715 + ], + "type": "text", + "content": "-sample sub-dataset accessible to the " + }, + { + "bbox": [ + 55, + 628, + 296, + 715 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 55, + 628, + 296, + 715 + ], + "type": "text", + "content": "-th client with " + }, + { + "bbox": [ + 55, + 628, + 296, + 715 + ], + "type": "inline_equation", + "content": "\\mathcal{X}_{k,i}" + }, + { + "bbox": [ + 55, + 628, + 296, + 715 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 628, + 296, + 715 + ], + "type": "inline_equation", + "content": "y_{k,i}" + }, + { + "bbox": [ + 55, + 628, + 296, + 715 + ], + "type": "text", + "content": " representing the " + }, + { + "bbox": [ + 55, + 628, + 296, + 715 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 55, + 628, + 296, + 715 + ], + "type": "text", + "content": "-th input-label pair. 
In this paper, all these " + }, + { + "bbox": [ + 55, + 628, + 296, + 715 + ], + "type": "inline_equation", + "content": "K" + }, + { + "bbox": [ + 55, + 628, + 296, + 715 + ], + "type": "text", + "content": " clients share the same backbone network " + }, + { + "bbox": [ + 55, + 628, + 296, + 715 + ], + "type": "inline_equation", + "content": "f_{\\mathrm{backbone}}" + }, + { + "bbox": [ + 55, + 628, + 296, + 715 + ], + "type": "text", + "content": " parameterized by " + }, + { + "bbox": [ + 55, + 628, + 296, + 715 + ], + "type": "inline_equation", + "content": "\\Theta" + }, + { + "bbox": [ + 55, + 628, + 296, + 715 + ], + "type": "text", + "content": " to map their inputs (e.g., " + }, + { + "bbox": [ + 55, + 628, + 296, + 715 + ], + "type": "inline_equation", + "content": "\\mathcal{X}" + }, + { + "bbox": [ + 55, + 628, + 296, + 715 + ], + "type": "text", + "content": ") to embedding vectors." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 313, + 72, + 531, + 85 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 72, + 531, + 85 + ], + "spans": [ + { + "bbox": [ + 313, + 72, + 531, + 85 + ], + "type": "text", + "content": "3.1. Local Stage: Localized Analytic Learning" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 313, + 89, + 555, + 137 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 89, + 555, + 137 + ], + "spans": [ + { + "bbox": [ + 313, + 89, + 555, + 137 + ], + "type": "text", + "content": "In this stage, each local client's network is trained using the AL technique. This involves transforming the neural network's classification head into a linear regression problem, thereby enabling the derivation of a closed-form LS solution." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 137, + 555, + 174 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 137, + 555, + 174 + ], + "spans": [ + { + "bbox": [ + 313, + 137, + 555, + 174 + ], + "type": "text", + "content": "At the initial step, client " + }, + { + "bbox": [ + 313, + 137, + 555, + 174 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 313, + 137, + 555, + 174 + ], + "type": "text", + "content": " extracts its embedding vector " + }, + { + "bbox": [ + 313, + 137, + 555, + 174 + ], + "type": "inline_equation", + "content": "\\mathbf{x}_{k,i}" + }, + { + "bbox": [ + 313, + 137, + 555, + 174 + ], + "type": "text", + "content": " by passing the " + }, + { + "bbox": [ + 313, + 137, + 555, + 174 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 313, + 137, + 555, + 174 + ], + "type": "text", + "content": "-th data " + }, + { + "bbox": [ + 313, + 137, + 555, + 174 + ], + "type": "inline_equation", + "content": "\\mathcal{X}_{k,i}" + }, + { + "bbox": [ + 313, + 137, + 555, + 174 + ], + "type": "text", + "content": " from " + }, + { + "bbox": [ + 313, + 137, + 555, + 174 + ], + "type": "inline_equation", + "content": "\\mathcal{D}_k" + }, + { + "bbox": [ + 313, + 137, + 555, + 174 + ], + "type": "text", + "content": " through the frozen backbone network " + }, + { + "bbox": [ + 313, + 137, + 555, + 174 + ], + "type": "inline_equation", + "content": "f_{\\mathrm{backbone}}" + }, + { + "bbox": [ + 313, + 137, + 555, + 174 + ], + "type": "text", + "content": ", i.e.," + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 381, + 182, + 555, + 196 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 381, + 182, + 555, + 196 + ], + "spans": [ + { + "bbox": [ + 381, + 182, + 
555, + 196 + ], + "type": "interline_equation", + "content": "\\boldsymbol{x}_{k,i} = f_{\\text{backbone}} \\left(\\mathcal{X}_{k,i}, \\Theta\\right) \\tag{1}", + "image_path": "ececa5401bc4559e3e12226d8dc69677d28d8e24cc7a2fd177417c1e336cae74.jpg" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 202, + 555, + 226 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 202, + 555, + 226 + ], + "spans": [ + { + "bbox": [ + 313, + 202, + 555, + 226 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 202, + 555, + 226 + ], + "type": "inline_equation", + "content": "\\pmb{x}_{k,i} \\in \\mathbb{R}^{1 \\times y_{\\mathrm{e}}}" + }, + { + "bbox": [ + 313, + 202, + 555, + 226 + ], + "type": "text", + "content": ", with " + }, + { + "bbox": [ + 313, + 202, + 555, + 226 + ], + "type": "inline_equation", + "content": "y_{\\mathrm{e}}" + }, + { + "bbox": [ + 313, + 202, + 555, + 226 + ], + "type": "text", + "content": " indicating the embedding length." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 227, + 555, + 276 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 227, + 555, + 276 + ], + "spans": [ + { + "bbox": [ + 313, + 227, + 555, + 276 + ], + "type": "text", + "content": "For the " + }, + { + "bbox": [ + 313, + 227, + 555, + 276 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 313, + 227, + 555, + 276 + ], + "type": "text", + "content": "-th client (with " + }, + { + "bbox": [ + 313, + 227, + 555, + 276 + ], + "type": "inline_equation", + "content": "N_{k}" + }, + { + "bbox": [ + 313, + 227, + 555, + 276 + ], + "type": "text", + "content": " samples in " + }, + { + "bbox": [ + 313, + 227, + 555, + 276 + ], + "type": "inline_equation", + "content": "\\mathcal{D}_k" + }, + { + "bbox": [ + 313, + 227, + 555, + 276 + ], + "type": "text", + "content": "), we can stack the extracted embeddings and their corresponding one-hot labels via mapping " + }, + { + "bbox": [ + 313, + 227, + 555, + 276 + ], + "type": "inline_equation", + "content": "\\mathcal{D}_k \\sim \\{\\mathcal{X}_{k,i}, y_{k,i}\\}_{i=1}^{N_k}" + }, + { + "bbox": [ + 313, + 227, + 555, + 276 + ], + "type": "text", + "content": " to " + }, + { + "bbox": [ + 313, + 227, + 555, + 276 + ], + "type": "inline_equation", + "content": "\\bar{\\mathcal{D}}_k \\sim \\{\\pmb{X}_k, \\pmb{Y}_k\\}" + }, + { + "bbox": [ + 313, + 227, + 555, + 276 + ], + "type": "text", + "content": ", i.e.," + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 321, + 282, + 555, + 329 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 321, + 282, + 555, + 329 + ], + "spans": [ + { + "bbox": [ + 321, + 282, + 555, + 329 + ], + "type": "interline_equation", + "content": "\\boldsymbol{X}_{k} = \\left[ \\begin{array}{c} \\boldsymbol{x}_{k,1} \\\\ \\boldsymbol{x}_{k,2} \\\\ \\vdots \\\\ \\boldsymbol{x}_{k,N_{k}} \\end{array} \\right] = \\left[ \\begin{array}{c} f_{\\text{backbone}} \\left(\\mathcal{X}_{k,1}, \\Theta\\right) \\\\ f_{\\text{backbone}} \\left(\\mathcal{X}_{k,2}, \\Theta\\right) \\\\ \\vdots \\\\ f_{\\text{backbone}} \\left(\\mathcal{X}_{k,N_{k}}, \\Theta\\right) \\end{array} \\right], \\quad \\boldsymbol{Y}_{k} = \\left[ \\begin{array}{c} \\text{onehot}\\left(y_{k,1}\\right) \\\\ \\text{onehot}\\left(y_{k,2}\\right) \\\\ \\vdots \\\\ \\text{onehot}\\left(y_{k,N_{k}}\\right) \\end{array} \\right], \\tag{2}",
+ "image_path": "d0642a9daa2a672420a387adb60b39b2ec347b4649687318a050616c58724916.jpg" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 335, + 554, + 382 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 335, + 554, + 382 + ], + "spans": [ + { + "bbox": [ + 313, + 335, + 554, + 382 + ], + "type": "text", + "content": "where the embedding matrix " + }, + { + "bbox": [ + 313, + 335, + 554, + 382 + ], + "type": "inline_equation", + "content": "\\mathbf{X}_k \\in \\mathbb{R}^{N_k \\times y_e}" + }, + { + "bbox": [ + 313, + 335, + 554, + 382 + ], + "type": "text", + "content": ", and the label matrix " + }, + { + "bbox": [ + 313, + 335, + 554, + 382 + ], + "type": "inline_equation", + "content": "\\mathbf{Y}_k \\in \\mathbb{R}^{N_k \\times C}" + }, + { + "bbox": [ + 313, + 335, + 554, + 382 + ], + "type": "text", + "content": " has " + }, + { + "bbox": [ + 313, + 335, + 554, + 382 + ], + "type": "inline_equation", + "content": "C" + }, + { + "bbox": [ + 313, + 335, + 554, + 382 + ], + "type": "text", + "content": " classes. The onehot(·) operator converts the index label " + }, + { + "bbox": [ + 313, + 335, + 554, + 382 + ], + "type": "inline_equation", + "content": "y_{k,i}" + }, + { + "bbox": [ + 313, + 335, + 554, + 382 + ], + "type": "text", + "content": " into a " + }, + { + "bbox": [ + 313, + 335, + 554, + 382 + ], + "type": "inline_equation", + "content": "C" + }, + { + "bbox": [ + 313, + 335, + 554, + 382 + ], + "type": "text", + "content": "-dimensional one-hot row vector." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 384, + 555, + 443 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 384, + 555, + 443 + ], + "spans": [ + { + "bbox": [ + 313, + 384, + 555, + 443 + ], + "type": "text", + "content": "Subsequently, we approach the local client training with the AL technique [8]. Specifically, the target of the " + }, + { + "bbox": [ + 313, + 384, + 555, + 443 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 313, + 384, + 555, + 443 + ], + "type": "text", + "content": "-th client is to linearly map the extracted embeddings onto the one-hot labels by minimizing the mean square error (MSE) loss function as follows." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 375, + 452, + 555, + 466 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 375, + 452, + 555, + 466 + ], + "spans": [ + { + "bbox": [ + 375, + 452, + 555, + 466 + ], + "type": "interline_equation", + "content": "\\mathcal{L}\\left(\\boldsymbol{W}_{k}\\right) = \\left\\| \\boldsymbol{Y}_{k} - \\boldsymbol{X}_{k} \\boldsymbol{W}_{k} \\right\\|_{\\mathrm{F}}^{2}, \\tag{3}", + "image_path": "1c1e66c84272e1b881c34e24384bf1e026888ea0786cb83da0404f1f751d0bfd.jpg" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 313, + 473, + 554, + 498 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 473, + 554, + 498 + ], + "spans": [ + { + "bbox": [ + 313, + 473, + 554, + 498 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 473, + 554, + 498 + ], + "type": "inline_equation", + "content": "\\|\\cdot\\|_{\\mathrm{F}}" + }, + { + "bbox": [ + 313, + 473, + 554, + 498 + ], + "type": "text", + "content": " indicates the Frobenius norm.
This leads to an optimal weight estimation " + }, + { + "bbox": [ + 313, + 473, + 554, + 498 + ], + "type": "inline_equation", + "content": "\\hat{\\mathbf{W}}_k" + }, + { + "bbox": [ + 313, + 473, + 554, + 498 + ], + "type": "text", + "content": ", i.e.," + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 365, + 505, + 555, + 526 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 365, + 505, + 555, + 526 + ], + "spans": [ + { + "bbox": [ + 365, + 505, + 555, + 526 + ], + "type": "interline_equation", + "content": "\\hat {\\boldsymbol {W}} _ {k} = \\underset {\\boldsymbol {W} _ {k}} {\\operatorname {a r g m i n}} \\mathcal {L} (\\boldsymbol {W} _ {k}) = \\boldsymbol {X} _ {k} ^ {\\dagger} \\boldsymbol {Y} _ {k}, \\tag {4}", + "image_path": "5ef372c874d31092c9012b680e348605e87a6bc65bbfc085397992f3528fad84.jpg" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 313, + 533, + 555, + 557 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 533, + 555, + 557 + ], + "spans": [ + { + "bbox": [ + 313, + 533, + 555, + 557 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 533, + 555, + 557 + ], + "type": "inline_equation", + "content": "\\dagger" + }, + { + "bbox": [ + 313, + 533, + 555, + 557 + ], + "type": "text", + "content": " denotes the Moore-Penrose (MP) inverse (also referred as generalized inverse or pseudoinverse) [8, 48]." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 313, + 558, + 555, + 594 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 558, + 555, + 594 + ], + "spans": [ + { + "bbox": [ + 313, + 558, + 555, + 594 + ], + "type": "text", + "content": "The solution presented in (4) optimally addresses the MSE loss function described in (3), effectively establishing an LS-based AL solution for localized network learning." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 313, + 594, + 556, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 594, + 556, + 714 + ], + "spans": [ + { + "bbox": [ + 313, + 594, + 556, + 714 + ], + "type": "text", + "content": "Why One-epoch Analytic Learning Works. AL methods are generally effective for training shallow networks but face challenges when applied to deeper ones. This can be attributed to the fact that AL techniques are often designed as classifiers rather than end-to-end learning approaches. Despite this limitation, recent research has demonstrated that with a well-trained backbone, the AL performs adequately in various complex scenarios [52]. The practice of using a \"pre-trained backbone + downstream tasks\" has become increasingly common. 
This has allowed the one-epoch AL" + } + ] + } + ], + "index": 20 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "text", + "content": "4990" + } + ] + } + ], + "index": 21 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 58, + 70, + 553, + 230 + ], + "blocks": [ + { + "bbox": [ + 58, + 70, + 553, + 230 + ], + "lines": [ + { + "bbox": [ + 58, + 70, + 553, + 230 + ], + "spans": [ + { + "bbox": [ + 58, + 70, + 553, + 230 + ], + "type": "image", + "image_path": "582017bc5eb43d7259bbb83073d5d142d26e7139c39702bfc297a583a1d56d04.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 54, + 239, + 555, + 264 + ], + "lines": [ + { + "bbox": [ + 54, + 239, + 555, + 264 + ], + "spans": [ + { + "bbox": [ + 54, + 239, + 555, + 264 + ], + "type": "text", + "content": "Figure 1. An overview of the AFL. During the local stage, each client calculates " + }, + { + "bbox": [ + 54, + 239, + 555, + 264 + ], + "type": "inline_equation", + "content": "C_k^{\\mathrm{r}}" + }, + { + "bbox": [ + 54, + 239, + 555, + 264 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 54, + 239, + 555, + 264 + ], + "type": "inline_equation", + "content": "\\hat{W}_k^{\\mathrm{r}}" + }, + { + "bbox": [ + 54, + 239, + 555, + 264 + ], + "type": "text", + "content": " based on the same pre-trained backbone and its own dataset. The server obtains " + }, + { + "bbox": [ + 54, + 239, + 555, + 264 + ], + "type": "inline_equation", + "content": "C_{\\mathrm{agg},K}^{\\mathrm{r}}" + }, + { + "bbox": [ + 54, + 239, + 555, + 264 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 54, + 239, + 555, + 264 + ], + "type": "inline_equation", + "content": "\\hat{W}_{\\mathrm{agg},K}^{\\mathrm{r}}" + }, + { + "bbox": [ + 54, + 239, + 555, + 264 + ], + "type": "text", + "content": ", and then obtains " + }, + { + "bbox": [ + 54, + 239, + 555, + 264 + ], + "type": "inline_equation", + "content": "\\hat{W}" + }, + { + "bbox": [ + 54, + 239, + 555, + 264 + ], + "type": "text", + "content": " in the aggregation stage." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 54, + 276, + 295, + 312 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 276, + 295, + 312 + ], + "spans": [ + { + "bbox": [ + 54, + 276, + 295, + 312 + ], + "type": "text", + "content": "to thrive in various areas such as continual learning [50] and reinforcement learning [22]. Hence, it can also be well incorporated into individual client training." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 312, + 296, + 372 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 312, + 296, + 372 + ], + "spans": [ + { + "bbox": [ + 55, + 312, + 296, + 372 + ], + "type": "text", + "content": "Adopting AL is the key to enabling the upcoming single-round aggregation (by deriving the AA law). The affine characteristic of linear regression in each client opens up new possibilities for exploration in FL. We provide a comprehensive explanation of such an exploration in later sections."
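To ground the local stage of Section 3.1, the sketch below implements Eqs. (2)-(4) in NumPy. This is our illustrative code rather than the authors' release: the embedding matrix X_k is assumed to come from the frozen backbone via Eq. (1), labels are one-hot encoded into Y_k, and the client classifier is obtained in a single shot as the Moore-Penrose solution. The correlation matrix C_k = X_k^T X_k is also returned, since Figure 1 shows each client sharing such a matrix (the paper's C_k^r; the regularization superscript is omitted in this sketch):

```python
import numpy as np

def local_analytic_fit(X_k, labels, num_classes):
    """One-epoch local training (Eqs. 2-4): no gradients, no iterations."""
    Y_k = np.eye(num_classes)[np.asarray(labels)]  # one-hot label matrix, Eq. (2)
    W_k = np.linalg.pinv(X_k) @ Y_k                # least-squares optimum, Eq. (4)
    C_k = X_k.T @ X_k                              # kept for the aggregation stage
    return W_k, C_k

# toy usage: 5 embeddings of length 4, 3 classes
rng = np.random.default_rng(0)
W_k, C_k = local_analytic_fit(rng.normal(size=(5, 4)), [0, 2, 1, 2, 0], 3)
```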
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 379, + 296, + 392 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 379, + 296, + 392 + ], + "spans": [ + { + "bbox": [ + 55, + 379, + 296, + 392 + ], + "type": "text", + "content": "3.2. Aggregation Stage: Absolute Aggregation Law" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 396, + 296, + 479 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 396, + 296, + 479 + ], + "spans": [ + { + "bbox": [ + 55, + 396, + 296, + 479 + ], + "type": "text", + "content": "In the aggregation stage, we introduce the Absolute Aggregation (AA) law, a key contribution in AFL. The AA law facilitates a single-round aggregation, i.e., the aggregation happens only once. Additionally, in scenarios where the AA law becomes suboptimal due to a large number of clients, we introduce a regularization intermediary (RI) process to restore its optimality." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 480, + 296, + 503 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 480, + 296, + 503 + ], + "spans": [ + { + "bbox": [ + 55, + 480, + 296, + 503 + ], + "type": "text", + "content": "The MP inverse partition [4] inspires our derivation, which is reformulated into Lemma 1." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 509, + 296, + 544 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 509, + 296, + 544 + ], + "spans": [ + { + "bbox": [ + 55, + 509, + 296, + 544 + ], + "type": "text", + "content": "Lemma 1. Let " + }, + { + "bbox": [ + 55, + 509, + 296, + 544 + ], + "type": "inline_equation", + "content": "\\mathbf{X} = \\begin{bmatrix} \\mathbf{X}_u \\\\ \\mathbf{X}_v \\end{bmatrix}" + }, + { + "bbox": [ + 55, + 509, + 296, + 544 + ], + "type": "text", + "content": " with " + }, + { + "bbox": [ + 55, + 509, + 296, + 544 + ], + "type": "inline_equation", + "content": "\\mathbf{X}_u" + }, + { + "bbox": [ + 55, + 509, + 296, + 544 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 509, + 296, + 544 + ], + "type": "inline_equation", + "content": "\\mathbf{X}_v" + }, + { + "bbox": [ + 55, + 509, + 296, + 544 + ], + "type": "text", + "content": " having full column ranks, and " + }, + { + "bbox": [ + 55, + 509, + 296, + 544 + ], + "type": "inline_equation", + "content": "\\mathbf{X}" + }, + { + "bbox": [ + 55, + 509, + 296, + 544 + ], + "type": "text", + "content": " follows a partition" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 140, + 552, + 296, + 567 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 140, + 552, + 296, + 567 + ], + "spans": [ + { + "bbox": [ + 140, + 552, + 296, + 567 + ], + "type": "interline_equation", + "content": "\\boldsymbol {X} ^ {\\dagger} = \\left[ \\begin{array}{l l} \\bar {U} & \\bar {V} \\end{array} \\right], \\tag {5}", + "image_path": "88b65a1d98f586558cfde14409ccde5879e9d41376add5031adb9f15da182d9e.jpg" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 55, + 576, + 83, + 585 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 576, + 83, + 585 + ], + "spans": [ + { + "bbox": [ + 55, + 576, + 83, + 585 + ], + "type": "text", + "content": "where" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 63, + 594, + 296, + 624 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 594, + 296, + 624 + ], + "spans": [ + { + "bbox": [ + 63, + 594, + 296, + 624 + ], + "type": "interline_equation", 
+ "content": "\\left\\{ \\begin{array}{l} \\bar {U} = X _ {u} ^ {\\dagger} - R _ {u} C _ {v} X _ {u} ^ {\\dagger} - R _ {u} C _ {v} (C _ {u} + C _ {v}) ^ {- 1} C _ {v} X _ {u} ^ {\\dagger} \\\\ \\bar {V} = X _ {v} ^ {\\dagger} - R _ {v} C _ {u} X _ {v} ^ {\\dagger} - R _ {v} C _ {u} (C _ {u} + C _ {v}) ^ {- 1} C _ {u} X _ {v} ^ {\\dagger} \\end{array} \\right.,", + "image_path": "846e67955439205c109e2433e5b5efd630e3f7951c327140404eb89787f23d31.jpg" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 93, + 632, + 296, + 664 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 93, + 632, + 296, + 664 + ], + "spans": [ + { + "bbox": [ + 93, + 632, + 296, + 664 + ], + "type": "interline_equation", + "content": "\\left\\{ \\begin{array}{l} \\boldsymbol {C} _ {u} = \\boldsymbol {X} _ {u} ^ {\\top} \\boldsymbol {X} _ {u} \\\\ \\boldsymbol {C} _ {v} = \\boldsymbol {X} _ {v} ^ {\\top} \\boldsymbol {X} _ {v} \\end{array} , \\quad \\left\\{ \\begin{array}{l} \\boldsymbol {R} _ {u} = \\boldsymbol {C} _ {u} ^ {- 1} \\\\ \\boldsymbol {R} _ {v} = \\boldsymbol {C} _ {v} ^ {- 1} \\end{array} . \\right. \\right. \\tag {6}", + "image_path": "4b04d867378d3e3a9a901a0c58b0a1f7bbb69edddf359b411ca8e58d70836d52.jpg" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 55, + 670, + 218, + 681 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 670, + 218, + 681 + ], + "spans": [ + { + "bbox": [ + 55, + 670, + 218, + 681 + ], + "type": "text", + "content": "Proof. See Supplementary Materials A." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 56, + 689, + 296, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 689, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 56, + 689, + 296, + 713 + ], + "type": "text", + "content": "Lemma 1 points out that, a matrix's MP inverse (e.g., " + }, + { + "bbox": [ + 56, + 689, + 296, + 713 + ], + "type": "inline_equation", + "content": "X^{\\dagger}" + }, + { + "bbox": [ + 56, + 689, + 296, + 713 + ], + "type": "text", + "content": ") can be computed using the inverse matrices of its block" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 276, + 555, + 336 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 276, + 555, + 336 + ], + "spans": [ + { + "bbox": [ + 313, + 276, + 555, + 336 + ], + "type": "text", + "content": "components (e.g., " + }, + { + "bbox": [ + 313, + 276, + 555, + 336 + ], + "type": "inline_equation", + "content": "\\pmb{X}_u^\\dagger" + }, + { + "bbox": [ + 313, + 276, + 555, + 336 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 276, + 555, + 336 + ], + "type": "inline_equation", + "content": "\\pmb{X}_v^\\dagger" + }, + { + "bbox": [ + 313, + 276, + 555, + 336 + ], + "type": "text", + "content": "). 
This introduces possibilities for aggregating a weight " + }, + { + "bbox": [ + 313, + 276, + 555, + 336 + ], + "type": "inline_equation", + "content": "\\pmb{W} = \\pmb{X}^\\dagger \\pmb{Y}" + }, + { + "bbox": [ + 313, + 276, + 555, + 336 + ], + "type": "text", + "content": " equally from manipulating constituent counterparts " + }, + { + "bbox": [ + 313, + 276, + 555, + 336 + ], + "type": "inline_equation", + "content": "\\pmb{W}_u = \\pmb{X}_u^\\dagger \\pmb{Y}_u" + }, + { + "bbox": [ + 313, + 276, + 555, + 336 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 276, + 555, + 336 + ], + "type": "inline_equation", + "content": "\\pmb{W}_v = \\pmb{X}_v^\\dagger \\pmb{Y}_v" + }, + { + "bbox": [ + 313, + 276, + 555, + 336 + ], + "type": "text", + "content": ". That is, " + }, + { + "bbox": [ + 313, + 276, + 555, + 336 + ], + "type": "inline_equation", + "content": "\\pmb{W} = f_{\\mathrm{agg}}(\\pmb{W}_u, \\pmb{W}_v)" + }, + { + "bbox": [ + 313, + 276, + 555, + 336 + ], + "type": "text", + "content": ", i.e., a single-aggregation strategy." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 313, + 336, + 555, + 371 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 336, + 555, + 371 + ], + "spans": [ + { + "bbox": [ + 313, + 336, + 555, + 371 + ], + "type": "text", + "content": "Bearing the above intuition in mind, we are able to derive such a single-aggregation strategy in action. This is delivered in Theorem 1." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 313, + 377, + 555, + 437 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 377, + 555, + 437 + ], + "spans": [ + { + "bbox": [ + 313, + 377, + 555, + 437 + ], + "type": "text", + "content": "Theorem 1. Absolute Aggregation Law: Let " + }, + { + "bbox": [ + 313, + 377, + 555, + 437 + ], + "type": "inline_equation", + "content": "\\hat{W} = X^{\\dagger}Y" + }, + { + "bbox": [ + 313, + 377, + 555, + 437 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 313, + 377, + 555, + 437 + ], + "type": "inline_equation", + "content": "X = \\begin{bmatrix} X_u \\\\ X_v \\end{bmatrix}" + }, + { + "bbox": [ + 313, + 377, + 555, + 437 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 377, + 555, + 437 + ], + "type": "inline_equation", + "content": "Y = \\begin{bmatrix} Y_u \\\\ Y_v \\end{bmatrix}" + }, + { + "bbox": [ + 313, + 377, + 555, + 437 + ], + "type": "text", + "content": " with " + }, + { + "bbox": [ + 313, + 377, + 555, + 437 + ], + "type": "inline_equation", + "content": "X_u" + }, + { + "bbox": [ + 313, + 377, + 555, + 437 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 377, + 555, + 437 + ], + "type": "inline_equation", + "content": "X_v" + }, + { + "bbox": [ + 313, + 377, + 555, + 437 + ], + "type": "text", + "content": " having full column ranks. 
Let " + }, + { + "bbox": [ + 313, + 377, + 555, + 437 + ], + "type": "inline_equation", + "content": "\\hat{W}_u = X_u^\\dagger Y_u" + }, + { + "bbox": [ + 313, + 377, + 555, + 437 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 313, + 377, + 555, + 437 + ], + "type": "inline_equation", + "content": "\\hat{W}_v = X_v^\\dagger Y_v" + }, + { + "bbox": [ + 313, + 377, + 555, + 437 + ], + "type": "text", + "content": ", and we have" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 378, + 439, + 555, + 452 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 378, + 439, + 555, + 452 + ], + "spans": [ + { + "bbox": [ + 378, + 439, + 555, + 452 + ], + "type": "interline_equation", + "content": "\\boldsymbol {W} = \\boldsymbol {\\mathcal {W}} _ {u} \\boldsymbol {W} _ {u} + \\boldsymbol {\\mathcal {W}} _ {u} \\boldsymbol {W} _ {v}, \\tag {7}", + "image_path": "dfbf40ebde4c7f01a0f15ed6c23dc6d0a9a54d5d221bdfab10cf2361b6d532cf.jpg" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 314, + 457, + 342, + 467 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 457, + 342, + 467 + ], + "spans": [ + { + "bbox": [ + 314, + 457, + 342, + 467 + ], + "type": "text", + "content": "where" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 331, + 474, + 526, + 506 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 331, + 474, + 526, + 506 + ], + "spans": [ + { + "bbox": [ + 331, + 474, + 526, + 506 + ], + "type": "interline_equation", + "content": "\\left\\{ \\begin{array}{l} \\mathcal {W} _ {u} = I - R _ {u} C _ {v} - R _ {u} C _ {v} \\left(C _ {u} + C _ {v}\\right) ^ {- 1} C _ {v} \\\\ \\mathcal {W} _ {v} = I - R _ {v} C _ {u} - R _ {v} C _ {u} \\left(C _ {u} + C _ {v}\\right) ^ {- 1} C _ {u} \\end{array} \\right.", + "image_path": "07a22567ec1e07f6c8fb3fef3d0898c9589a301e79e8b9cc4ccbe8cb31ea3895.jpg" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 358, + 514, + 555, + 545 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 358, + 514, + 555, + 545 + ], + "spans": [ + { + "bbox": [ + 358, + 514, + 555, + 545 + ], + "type": "interline_equation", + "content": "\\left\\{ \\begin{array}{l} \\boldsymbol {C} _ {u} = \\boldsymbol {X} _ {u} ^ {\\top} \\boldsymbol {X} _ {u} \\\\ \\boldsymbol {C} _ {v} = \\boldsymbol {X} _ {v} ^ {\\top} \\boldsymbol {X} _ {v} \\end{array} \\quad \\left\\{ \\begin{array}{l} \\boldsymbol {R} _ {u} = \\boldsymbol {C} _ {u} ^ {- 1} \\\\ \\boldsymbol {R} _ {v} = \\boldsymbol {C} _ {v} ^ {- 1} \\end{array} \\right. \\right. \\tag {8}", + "image_path": "89461a26903b4ad7c8d4c6d0411e0296ef455d9fb0be42e3c388fd41c97bb4e7.jpg" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 313, + 550, + 476, + 562 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 550, + 476, + 562 + ], + "spans": [ + { + "bbox": [ + 313, + 550, + 476, + 562 + ], + "type": "text", + "content": "Proof. See Supplementary Materials B." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 313, + 570, + 555, + 630 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 570, + 555, + 630 + ], + "spans": [ + { + "bbox": [ + 313, + 570, + 555, + 630 + ], + "type": "text", + "content": "The AA law, as stated in Theorem 1, provides a powerful insight. 
It establishes the intuition that we can aggregate two independently trained weights, such as " + }, + { + "bbox": [ + 313, + 570, + 555, + 630 + ], + "type": "inline_equation", + "content": "W_{u}" + }, + { + "bbox": [ + 313, + 570, + 555, + 630 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 570, + 555, + 630 + ], + "type": "inline_equation", + "content": "W_{v}" + }, + { + "bbox": [ + 313, + 570, + 555, + 630 + ], + "type": "text", + "content": ", into their jointly trained counterpart " + }, + { + "bbox": [ + 313, + 570, + 555, + 630 + ], + "type": "inline_equation", + "content": "W" + }, + { + "bbox": [ + 313, + 570, + 555, + 630 + ], + "type": "text", + "content": ". This is achieved in an optimal way without any approximation or parameter tuning." + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 313, + 631, + 556, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 631, + 556, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 631, + 556, + 713 + ], + "type": "text", + "content": "Invariance to data partitioning. To a certain extent, the achievement in Theorem 1 attains the ultimate goal of FL, i.e., the equivalence between weights trained in an FL fashion and those trained on a centralized joint dataset. Traditionally, FL aims to approximate or converge to the performance of the joint-trained model through multiple rounds of aggregation in a central server. However, the AA law provides a" + } + ] + } + ], + "index": 23 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 295, + 748, + 314, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 748, + 314, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 748, + 314, + 757 + ], + "type": "text", + "content": "4991" + } + ] + } + ], + "index": 24 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 296, + 107 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 296, + 107 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 296, + 107 + ], + "type": "text", + "content": "more direct path to this goal. It allows for an equivalence (not approximation or convergence) to manifest from a linear regression standpoint." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 108, + 296, + 179 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 108, + 296, + 179 + ], + "spans": [ + { + "bbox": [ + 55, + 108, + 296, + 179 + ], + "type": "text", + "content": "Supported by the AA law, the AFL achieves a level of performance that is on par with the joint-trained model, without the need for multiple rounds of aggregation. This direct equivalence could mark a significant advancement in FL, as it simplifies the process and reduces the heavy computational overhead associated with multiple aggregation rounds."
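To make the AA law tangible, here is a hedged numpy sketch of Theorem 1 for two clients: each client solves its own least-squares weight, and a single application of (7)-(8) reproduces the jointly trained weight exactly. All names and dimensions are illustrative, not taken from the paper's code:

```python
import numpy as np

rng = np.random.default_rng(1)
d, c, n_u, n_v = 8, 4, 50, 70                  # illustrative dimensions
Xu, Yu = rng.normal(size=(n_u, d)), rng.normal(size=(n_u, c))
Xv, Yv = rng.normal(size=(n_v, d)), rng.normal(size=(n_v, c))

# Independently trained (local) weights
Wu = np.linalg.pinv(Xu) @ Yu
Wv = np.linalg.pinv(Xv) @ Yv

# Aggregation matrices from (8), with the signs as reconstructed above
Cu, Cv = Xu.T @ Xu, Xv.T @ Xv
Ru, Rv = np.linalg.inv(Cu), np.linalg.inv(Cv)
S_inv = np.linalg.inv(Cu + Cv)
I = np.eye(d)
Au = I - Ru @ Cv + Ru @ Cv @ S_inv @ Cv        # curly-W_u
Av = I - Rv @ Cu + Rv @ Cu @ S_inv @ Cu        # curly-W_v

W_agg = Au @ Wu + Av @ Wv                      # single-shot aggregation, (7)
W_joint = np.linalg.pinv(np.vstack([Xu, Xv])) @ np.vstack([Yu, Yv])
assert np.allclose(W_agg, W_joint)             # exact equality, no approximation
```

The final `assert` holds up to floating-point tolerance, which is precisely the "equivalence, not approximation" claim above.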
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 180, + 296, + 263 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 180, + 296, + 263 + ], + "spans": [ + { + "bbox": [ + 55, + 180, + 296, + 263 + ], + "type": "text", + "content": "Although the AA law in Theorem 1 admits the absolute aggregation between two clients (i.e., " + }, + { + "bbox": [ + 55, + 180, + 296, + 263 + ], + "type": "inline_equation", + "content": "\\hat{W}_u" + }, + { + "bbox": [ + 55, + 180, + 296, + 263 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 180, + 296, + 263 + ], + "type": "inline_equation", + "content": "\\hat{W}_v" + }, + { + "bbox": [ + 55, + 180, + 296, + 263 + ], + "type": "text", + "content": "), this pattern can be trivially broadcast to multi-client scenario. To elaborate, without loss of generality, we denote " + }, + { + "bbox": [ + 55, + 180, + 296, + 263 + ], + "type": "inline_equation", + "content": "\\hat{W}_{\\mathrm{agg},k - 1}" + }, + { + "bbox": [ + 55, + 180, + 296, + 263 + ], + "type": "text", + "content": " as the accumulated aggregation (AcAg) weight that has aggregated " + }, + { + "bbox": [ + 55, + 180, + 296, + 263 + ], + "type": "inline_equation", + "content": "k - 1" + }, + { + "bbox": [ + 55, + 180, + 296, + 263 + ], + "type": "text", + "content": " clients. By rewriting (1), the next aggregation with " + }, + { + "bbox": [ + 55, + 180, + 296, + 263 + ], + "type": "inline_equation", + "content": "\\hat{W}_k" + }, + { + "bbox": [ + 55, + 180, + 296, + 263 + ], + "type": "text", + "content": " (" + }, + { + "bbox": [ + 55, + 180, + 296, + 263 + ], + "type": "inline_equation", + "content": "i = k,\\dots ,K" + }, + { + "bbox": [ + 55, + 180, + 296, + 263 + ], + "type": "text", + "content": ") reads" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 98, + 269, + 296, + 284 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 98, + 269, + 296, + 284 + ], + "spans": [ + { + "bbox": [ + 98, + 269, + 296, + 284 + ], + "type": "interline_equation", + "content": "\\hat {W} _ {\\text {a g g}, k} = \\mathcal {W} _ {\\text {a g g}} \\hat {W} _ {\\text {a g g}, k - 1} + \\mathcal {W} _ {k} \\hat {W} _ {k}. \\tag {9}", + "image_path": "004dc3bdc8c3538a35d7107c769c113a5753762ce28807fb0581559787596957.jpg" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 290, + 296, + 315 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 290, + 296, + 315 + ], + "spans": [ + { + "bbox": [ + 55, + 290, + 296, + 315 + ], + "type": "text", + "content": "According to (1), let " + }, + { + "bbox": [ + 55, + 290, + 296, + 315 + ], + "type": "inline_equation", + "content": "C_u \\to C_{\\mathrm{agg},k-1}" + }, + { + "bbox": [ + 55, + 290, + 296, + 315 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 55, + 290, + 296, + 315 + ], + "type": "inline_equation", + "content": "C_v \\to C_k" + }, + { + "bbox": [ + 55, + 290, + 296, + 315 + ], + "type": "text", + "content": ", and we have " + }, + { + "bbox": [ + 55, + 290, + 296, + 315 + ], + "type": "inline_equation", + "content": "C_{\\mathrm{agg},k} = C_{\\mathrm{agg},k-1} + C_k" + }, + { + "bbox": [ + 55, + 290, + 296, + 315 + ], + "type": "text", + "content": ". 
Hence," + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 322, + 296, + 362 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 322, + 296, + 362 + ], + "spans": [ + { + "bbox": [ + 67, + 322, + 296, + 362 + ], + "type": "interline_equation", + "content": "\\left\\{ \\begin{array}{l} \\mathcal {W} _ {\\text {a g g}} = I - C _ {\\text {a g g}, k - 1} ^ {- 1} C _ {k} \\left(I + C _ {\\text {a g g}, k} ^ {- 1} C _ {k}\\right), \\\\ \\mathcal {W} _ {k} = I - C _ {k} ^ {- 1} C _ {\\text {a g g}, k - 1} \\left(I + C _ {\\text {a g g}, k} ^ {- 1} C _ {\\text {a g g}, k - 1}\\right), \\end{array} \\right. \\tag {10}", + "image_path": "34624c5fdfe7e4c59d50d5d0fe7346f5a556ce9a556a0a00886635c3cd5b3d6c.jpg" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 364, + 83, + 373 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 364, + 83, + 373 + ], + "spans": [ + { + "bbox": [ + 55, + 364, + 83, + 373 + ], + "type": "text", + "content": "where" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 93, + 379, + 296, + 411 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 93, + 379, + 296, + 411 + ], + "spans": [ + { + "bbox": [ + 93, + 379, + 296, + 411 + ], + "type": "interline_equation", + "content": "\\left\\{ \\begin{array}{l} C _ {\\text {a g g}, k} = C _ {\\text {a g g}, k - 1} + C _ {k} = \\sum_ {i} ^ {k} C _ {i}, \\\\ C _ {i} = X _ {i} ^ {\\top} X _ {i}. \\end{array} \\right. \\tag {11}", + "image_path": "6a468ad220a8ed30bf0f027c4fad8bbe794cc0d03d7fb9fae082247e49df11b7.jpg" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 418, + 296, + 479 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 418, + 296, + 479 + ], + "spans": [ + { + "bbox": [ + 55, + 418, + 296, + 479 + ], + "type": "text", + "content": "As such, the joint-trained weight " + }, + { + "bbox": [ + 55, + 418, + 296, + 479 + ], + "type": "inline_equation", + "content": "\\hat{W} = \\hat{W}_{\\mathrm{agg},k}" + }, + { + "bbox": [ + 55, + 418, + 296, + 479 + ], + "type": "text", + "content": " is produced by aggregating among individual clients in a pair-wise manner. It is interesting to find that the optimal aggregation is in fact a linear combination between two matrices (e.g., " + }, + { + "bbox": [ + 55, + 418, + 296, + 479 + ], + "type": "inline_equation", + "content": "\\hat{W}_{\\mathrm{agg},k - 1}" + }, + { + "bbox": [ + 55, + 418, + 296, + 479 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 418, + 296, + 479 + ], + "type": "inline_equation", + "content": "\\hat{W}_k" + }, + { + "bbox": [ + 55, + 418, + 296, + 479 + ], + "type": "text", + "content": " ) weighted by " + }, + { + "bbox": [ + 55, + 418, + 296, + 479 + ], + "type": "inline_equation", + "content": "\\mathcal{W}_{\\mathrm{agg}}" + }, + { + "bbox": [ + 55, + 418, + 296, + 479 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 418, + 296, + 479 + ], + "type": "inline_equation", + "content": "\\mathcal{W}_k" + }, + { + "bbox": [ + 55, + 418, + 296, + 479 + ], + "type": "text", + "content": " respectively." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 55, + 479, + 296, + 539 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 479, + 296, + 539 + ], + "spans": [ + { + "bbox": [ + 55, + 479, + 296, + 539 + ], + "type": "text", + "content": "Note that the aggregation does NOT necessarily follow a sequential index from 1 to " + }, + { + "bbox": [ + 55, + 479, + 296, + 539 + ], + "type": "inline_equation", + "content": "K" + }, + { + "bbox": [ + 55, + 479, + 296, + 539 + ], + "type": "text", + "content": ". We can randomly sample an available client to aggregate with the AcAg weight. This is revealed by the fact that elements in the weighting matrices are somewhat interchangeable (e.g., see (10))." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 55, + 544, + 295, + 556 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 544, + 295, + 556 + ], + "spans": [ + { + "bbox": [ + 55, + 544, + 295, + 556 + ], + "type": "text", + "content": "3.3. RI Process: AA Law in Rank-deficient Scenario" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 55, + 562, + 296, + 670 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 562, + 296, + 670 + ], + "spans": [ + { + "bbox": [ + 55, + 562, + 296, + 670 + ], + "type": "text", + "content": "As indicated in Theorem 1, the equivalence in AA law relies on an assumption of a full-column rank in each client, e.g., " + }, + { + "bbox": [ + 55, + 562, + 296, + 670 + ], + "type": "inline_equation", + "content": "X_{k}" + }, + { + "bbox": [ + 55, + 562, + 296, + 670 + ], + "type": "text", + "content": " having full-column rank. This may not hold in the large client number scenario where each client has limited data (e.g., " + }, + { + "bbox": [ + 55, + 562, + 296, + 670 + ], + "type": "inline_equation", + "content": "N_{k} < y_{\\mathrm{e}}" + }, + { + "bbox": [ + 55, + 562, + 296, + 670 + ], + "type": "text", + "content": "), rendering the full-column rank assumption invalid. To address this, we implement the AA law with an RI process. Specially, we include a regularization term as an intermediary during the local stage, and remove it after the aggregation stage." 
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 55, + 670, + 296, + 693 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 670, + 296, + 693 + ], + "spans": [ + { + "bbox": [ + 55, + 670, + 296, + 693 + ], + "type": "text", + "content": "To this end, we include an regularization term controlled by " + }, + { + "bbox": [ + 55, + 670, + 296, + 693 + ], + "type": "inline_equation", + "content": "\\gamma" + }, + { + "bbox": [ + 55, + 670, + 296, + 693 + ], + "type": "text", + "content": " in the objective function, i.e.," + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 91, + 700, + 296, + 715 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 91, + 700, + 296, + 715 + ], + "spans": [ + { + "bbox": [ + 91, + 700, + 296, + 715 + ], + "type": "interline_equation", + "content": "\\mathcal {L} \\left(\\boldsymbol {W} _ {k} ^ {\\mathrm {r}}\\right) = \\left\\| \\boldsymbol {Y} _ {k} - \\boldsymbol {X} _ {k} \\boldsymbol {W} _ {k} ^ {\\mathrm {r}} \\right\\| _ {\\mathrm {F}} ^ {2} + \\gamma \\left\\| \\boldsymbol {W} _ {k} ^ {\\mathrm {r}} \\right\\| _ {\\mathrm {F}} ^ {2}, \\tag {12}", + "image_path": "cac7e5418395c3744e1e056e82f5dc56878607d71d394131de2ac42de77ea129.jpg" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 72, + 541, + 84 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 72, + 541, + 84 + ], + "spans": [ + { + "bbox": [ + 313, + 72, + 541, + 84 + ], + "type": "text", + "content": "which rewrites the MP inverse based solution in (4) into" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 319, + 91, + 555, + 113 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 319, + 91, + 555, + 113 + ], + "spans": [ + { + "bbox": [ + 319, + 91, + 555, + 113 + ], + "type": "interline_equation", + "content": "\\hat {\\boldsymbol {W}} _ {k} ^ {\\mathrm {r}} = \\underset {\\boldsymbol {W} _ {k} ^ {\\mathrm {r}}} {\\operatorname {a r g m i n}} \\mathcal {L} \\left(\\boldsymbol {W} _ {k} ^ {\\mathrm {r}}\\right) = \\left(\\boldsymbol {X} _ {k} ^ {\\top} \\boldsymbol {X} _ {k} + \\gamma \\boldsymbol {I}\\right) ^ {- 1} \\boldsymbol {X} _ {k} ^ {\\top} \\boldsymbol {Y} _ {k}. \\tag {13}", + "image_path": "87f3bca2d62928afde53f39f0794d0224a9c93f3594bb701635f76083e307bba.jpg" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 313, + 121, + 555, + 156 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 121, + 555, + 156 + ], + "spans": [ + { + "bbox": [ + 313, + 121, + 555, + 156 + ], + "type": "text", + "content": "Such a solution does not suffer from rank-deficiency issues, as " + }, + { + "bbox": [ + 313, + 121, + 555, + 156 + ], + "type": "inline_equation", + "content": "\\pmb{X}_k^\\top \\pmb{X}_k + \\gamma \\pmb{I}" + }, + { + "bbox": [ + 313, + 121, + 555, + 156 + ], + "type": "text", + "content": " is positive-definite thereby a full-rank matrix." 
+ } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 313, + 156, + 555, + 195 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 156, + 555, + 195 + ], + "spans": [ + { + "bbox": [ + 313, + 156, + 555, + 195 + ], + "type": "text", + "content": "During aggregation, we substitute " + }, + { + "bbox": [ + 313, + 156, + 555, + 195 + ], + "type": "inline_equation", + "content": "\\hat{\\mathbf{W}}_k" + }, + { + "bbox": [ + 313, + 156, + 555, + 195 + ], + "type": "text", + "content": " in (4) with " + }, + { + "bbox": [ + 313, + 156, + 555, + 195 + ], + "type": "inline_equation", + "content": "\\hat{\\mathbf{W}}_k^{\\mathrm{r}}" + }, + { + "bbox": [ + 313, + 156, + 555, + 195 + ], + "type": "text", + "content": " using (13). This substitution would clearly result in deviations (i.e., " + }, + { + "bbox": [ + 313, + 156, + 555, + 195 + ], + "type": "inline_equation", + "content": "\\hat{\\mathbf{W}}_{\\mathrm{agg},k}^{\\mathrm{r}}\\neq \\hat{\\mathbf{W}}_{\\mathrm{agg},k}" + }, + { + "bbox": [ + 313, + 156, + 555, + 195 + ], + "type": "text", + "content": "), which is depicted in Theorem 2." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 313, + 202, + 554, + 230 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 202, + 554, + 230 + ], + "spans": [ + { + "bbox": [ + 313, + 202, + 554, + 230 + ], + "type": "text", + "content": "Theorem 2. RI-AA Law: The relation between " + }, + { + "bbox": [ + 313, + 202, + 554, + 230 + ], + "type": "inline_equation", + "content": "\\hat{W}_{agg,k}^{r}" + }, + { + "bbox": [ + 313, + 202, + 554, + 230 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 202, + 554, + 230 + ], + "type": "inline_equation", + "content": "\\hat{W}_{agg,k}" + }, + { + "bbox": [ + 313, + 202, + 554, + 230 + ], + "type": "text", + "content": " follows" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 361, + 238, + 555, + 254 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 361, + 238, + 555, + 254 + ], + "spans": [ + { + "bbox": [ + 361, + 238, + 555, + 254 + ], + "type": "interline_equation", + "content": "\\hat {W} _ {\\text {a g g}, k} ^ {\\mathrm {r}} = \\left(C _ {\\text {a g g}, k} ^ {\\mathrm {r}}\\right) ^ {- 1} C _ {\\text {a g g}, k} \\hat {W} _ {\\text {a g g}, k}, \\tag {14}", + "image_path": "1908c70dd39f640f7fccc47cbe76a348486033b32ffbb7503dde56b9be04ba79.jpg" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 314, + 261, + 342, + 271 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 261, + 342, + 271 + ], + "spans": [ + { + "bbox": [ + 314, + 261, + 342, + 271 + ], + "type": "text", + "content": "where" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 322, + 281, + 555, + 295 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 322, + 281, + 555, + 295 + ], + "spans": [ + { + "bbox": [ + 322, + 281, + 555, + 295 + ], + "type": "interline_equation", + "content": "C _ {\\text {a g g}, k} ^ {\\mathrm {r}} = C _ {\\text {a g g}, k} + k \\gamma I = \\sum_ {i} ^ {k} C _ {i} ^ {\\mathrm {r}}, \\quad C _ {i} ^ {\\mathrm {r}} = X _ {i} ^ {\\top} X _ {i} + \\gamma I. 
\\tag {15}", + "image_path": "95d89e0f559e19603a0c904580d14fff0e4507d66e226407f994ba426a9cf5f5.jpg" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 314, + 302, + 476, + 315 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 302, + 476, + 315 + ], + "spans": [ + { + "bbox": [ + 314, + 302, + 476, + 315 + ], + "type": "text", + "content": "Proof. See Supplementary Materials C." + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 313, + 322, + 554, + 365 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 322, + 554, + 365 + ], + "spans": [ + { + "bbox": [ + 313, + 322, + 554, + 365 + ], + "type": "text", + "content": "Theorem 2 establishes the relation between " + }, + { + "bbox": [ + 313, + 322, + 554, + 365 + ], + "type": "inline_equation", + "content": "\\hat{W}_{\\mathrm{agg},k}^{\\mathrm{r}}" + }, + { + "bbox": [ + 313, + 322, + 554, + 365 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 322, + 554, + 365 + ], + "type": "inline_equation", + "content": "\\hat{W}_{\\mathrm{agg},k}" + }, + { + "bbox": [ + 313, + 322, + 554, + 365 + ], + "type": "text", + "content": ", which is a one-to-one mapping, such that " + }, + { + "bbox": [ + 313, + 322, + 554, + 365 + ], + "type": "inline_equation", + "content": "\\hat{W}_{\\mathrm{agg},k}" + }, + { + "bbox": [ + 313, + 322, + 554, + 365 + ], + "type": "text", + "content": " can be restored by manipulating " + }, + { + "bbox": [ + 313, + 322, + 554, + 365 + ], + "type": "inline_equation", + "content": "\\hat{W}_{\\mathrm{agg},k}^{\\mathrm{r}}" + }, + { + "bbox": [ + 313, + 322, + 554, + 365 + ], + "type": "text", + "content": ", i.e.," + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 347, + 372, + 555, + 406 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 347, + 372, + 555, + 406 + ], + "spans": [ + { + "bbox": [ + 347, + 372, + 555, + 406 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} \\hat {W} _ {\\text {a g g}, k} = \\left(C _ {\\text {a g g}, k}\\right) ^ {- 1} C _ {\\text {a g g}, k} ^ {\\mathrm {r}} \\hat {W} _ {\\text {a g g}, k} ^ {\\mathrm {r}} \\\\ = \\left(\\boldsymbol {C} _ {\\text {a g g}, k} ^ {\\mathrm {r}} - k \\gamma \\boldsymbol {I}\\right) ^ {- 1} \\boldsymbol {C} _ {\\text {a g g}, k} ^ {\\mathrm {r}} \\hat {\\boldsymbol {W}} _ {\\text {a g g}, k} ^ {\\mathrm {r}}. \\tag {16} \\\\ \\end{array}", + "image_path": "4e4073961b6d19e521273f913638a97e3de5c6cf03b1bb75269645caffbab485.jpg" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 313, + 414, + 555, + 461 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 414, + 555, + 461 + ], + "spans": [ + { + "bbox": [ + 313, + 414, + 555, + 461 + ], + "type": "text", + "content": "That is, we are able to attain " + }, + { + "bbox": [ + 313, + 414, + 555, + 461 + ], + "type": "inline_equation", + "content": "\\hat{W}_{\\mathrm{agg},k}" + }, + { + "bbox": [ + 313, + 414, + 555, + 461 + ], + "type": "text", + "content": " by removing the impact of the regularization term " + }, + { + "bbox": [ + 313, + 414, + 555, + 461 + ], + "type": "inline_equation", + "content": "\\gamma" + }, + { + "bbox": [ + 313, + 414, + 555, + 461 + ], + "type": "text", + "content": " to counter the ill-conditioned constraint in the large client number scenario. The implementation of AFL is summarized in Algorithm 1." 
+ } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 313, + 462, + 556, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 462, + 556, + 714 + ], + "spans": [ + { + "bbox": [ + 313, + 462, + 556, + 714 + ], + "type": "text", + "content": "Benefits of Adopting AL in AFL. Inheriting from the AL technique, the AFL admits several merits over its gradient-based counterparts as follows. i) Fast training and convergence: the analytical solutions allow AFL to finish the training and aggregation in one shot, exhibiting fast training and convergence. Also, the analytical solutions free the AFL from any convergence issue as no iterative-search based action is executed. ii) Low communication cost: the single-round aggregation only requires a single communication between the clients and the server, which significantly reduces the communication cost. iii) Data heterogeneity invariance: the invariance to data partitioning does not pose any constraint on data partition strategy. That is, the equivalence is hold across all possible data heterogeneous scenarios (e.g., see Section 4.2). iv) Client-number invariance: for a complete dataset " + }, + { + "bbox": [ + 313, + 462, + 556, + 714 + ], + "type": "inline_equation", + "content": "\\mathcal{D}" + }, + { + "bbox": [ + 313, + 462, + 556, + 714 + ], + "type": "text", + "content": " partitioned among " + }, + { + "bbox": [ + 313, + 462, + 556, + 714 + ], + "type": "inline_equation", + "content": "K" + }, + { + "bbox": [ + 313, + 462, + 556, + 714 + ], + "type": "text", + "content": " clients (i.e., " + }, + { + "bbox": [ + 313, + 462, + 556, + 714 + ], + "type": "inline_equation", + "content": "\\{\\mathcal{D}_k\\}_{k=1}^K" + }, + { + "bbox": [ + 313, + 462, + 556, + 714 + ], + "type": "text", + "content": "), according to Theorem 1 and (9), when the weights from all " + }, + { + "bbox": [ + 313, + 462, + 556, + 714 + ], + "type": "inline_equation", + "content": "K" + }, + { + "bbox": [ + 313, + 462, + 556, + 714 + ], + "type": "text", + "content": " clients are aggregated, the resulting weight is identical to that trained on the full dataset " + }, + { + "bbox": [ + 313, + 462, + 556, + 714 + ], + "type": "inline_equation", + "content": "\\mathcal{D}" + }, + { + "bbox": [ + 313, + 462, + 556, + 714 + ], + "type": "text", + "content": ". To validate AA law with RI-process, we conduct an experiment on a dummy dataset and show the invariance (see Supplementary Materials D)." 
+ } + ] + } + ], + "index": 26 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "text", + "content": "4992" + } + ] + } + ], + "index": 27 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "type": "code", + "bbox": [ + 56, + 71, + 296, + 308 + ], + "blocks": [ + { + "bbox": [ + 56, + 71, + 296, + 308 + ], + "lines": [ + { + "bbox": [ + 56, + 71, + 296, + 308 + ], + "spans": [ + { + "bbox": [ + 56, + 71, + 296, + 308 + ], + "type": "text", + "content": "Algorithm 1 Analytic Federated Learning \nInput: " + }, + { + "bbox": [ + 56, + 71, + 296, + 308 + ], + "type": "inline_equation", + "content": "\\mathcal{D}_k, k = 0, \\dots, K" + }, + { + "bbox": [ + 56, + 71, + 296, + 308 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 56, + 71, + 296, + 308 + ], + "type": "inline_equation", + "content": "\\gamma" + }, + { + "bbox": [ + 56, + 71, + 296, + 308 + ], + "type": "text", + "content": ", and pre-trained backbone " + }, + { + "bbox": [ + 56, + 71, + 296, + 308 + ], + "type": "inline_equation", + "content": "\\Theta" + }, + { + "bbox": [ + 56, + 71, + 296, + 308 + ], + "type": "text", + "content": ". \nServer Executes: \n1. for each client " + }, + { + "bbox": [ + 56, + 71, + 296, + 308 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 56, + 71, + 296, + 308 + ], + "type": "text", + "content": " in parallel do \n2. " + }, + { + "bbox": [ + 56, + 71, + 296, + 308 + ], + "type": "inline_equation", + "content": "\\hat{W}_k^{\\mathrm{r}}, C_k^{\\mathrm{r}} \\gets \\text{Local Stage}(k, \\mathcal{D}_k, \\gamma)" + }, + { + "bbox": [ + 56, + 71, + 296, + 308 + ], + "type": "text", + "content": ". \n3. end for \n4. " + }, + { + "bbox": [ + 56, + 71, + 296, + 308 + ], + "type": "inline_equation", + "content": "\\hat{W} \\gets \\text{Aggregation Stage}(\\{\\hat{W}_k^{\\mathrm{r}}, C_k^{\\mathrm{r}}, \\gamma\\}_{k=1}^K)" + }, + { + "bbox": [ + 56, + 71, + 296, + 308 + ], + "type": "text", + "content": ". \nLocal Stage: client " + }, + { + "bbox": [ + 56, + 71, + 296, + 308 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 56, + 71, + 296, + 308 + ], + "type": "text", + "content": " with " + }, + { + "bbox": [ + 56, + 71, + 296, + 308 + ], + "type": "inline_equation", + "content": "\\mathcal{D}_k" + }, + { + "bbox": [ + 56, + 71, + 296, + 308 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 56, + 71, + 296, + 308 + ], + "type": "inline_equation", + "content": "\\gamma" + }, + { + "bbox": [ + 56, + 71, + 296, + 308 + ], + "type": "text", + "content": ". \n1. Get embedding and label matrices using (2). \n2. Obtain weight matrix " + }, + { + "bbox": [ + 56, + 71, + 296, + 308 + ], + "type": "inline_equation", + "content": "\\hat{W}_k^{\\mathrm{r}}" + }, + { + "bbox": [ + 56, + 71, + 296, + 308 + ], + "type": "text", + "content": " by (13). \n3. Get " + }, + { + "bbox": [ + 56, + 71, + 296, + 308 + ], + "type": "inline_equation", + "content": "C_k^{\\mathrm{r}} = X_k^\\top X_k + \\gamma I" + }, + { + "bbox": [ + 56, + 71, + 296, + 308 + ], + "type": "text", + "content": ". \n4. 
Return " + }, + { + "bbox": [ + 56, + 71, + 296, + 308 + ], + "type": "inline_equation", + "content": "\\hat{W}_k^{\\mathrm{r}}, C_k^{\\mathrm{r}}" + }, + { + "bbox": [ + 56, + 71, + 296, + 308 + ], + "type": "text", + "content": ". \nAggregation Stage: with " + }, + { + "bbox": [ + 56, + 71, + 296, + 308 + ], + "type": "inline_equation", + "content": "\\{\\hat{W}_k^{\\mathrm{r}}, C_k^{\\mathrm{r}}, \\gamma\\}_{k=1}^K" + }, + { + "bbox": [ + 56, + 71, + 296, + 308 + ], + "type": "text", + "content": ". \n1. Initialize " + }, + { + "bbox": [ + 56, + 71, + 296, + 308 + ], + "type": "inline_equation", + "content": "\\hat{W}_{\\text{agg},0}^{\\mathrm{r}} = 0" + }, + { + "bbox": [ + 56, + 71, + 296, + 308 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 56, + 71, + 296, + 308 + ], + "type": "inline_equation", + "content": "C_{\\text{agg},0}^{\\mathrm{r}} = 0" + }, + { + "bbox": [ + 56, + 71, + 296, + 308 + ], + "type": "text", + "content": ". \n2. for " + }, + { + "bbox": [ + 56, + 71, + 296, + 308 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 56, + 71, + 296, + 308 + ], + "type": "text", + "content": " in range(K): \n i) Aggregate " + }, + { + "bbox": [ + 56, + 71, + 296, + 308 + ], + "type": "inline_equation", + "content": "\\hat{W}_{\\text{agg},k}^{\\mathrm{r}}" + }, + { + "bbox": [ + 56, + 71, + 296, + 308 + ], + "type": "text", + "content": " with " + }, + { + "bbox": [ + 56, + 71, + 296, + 308 + ], + "type": "inline_equation", + "content": "\\hat{W}_k^{\\mathrm{r}}" + }, + { + "bbox": [ + 56, + 71, + 296, + 308 + ], + "type": "text", + "content": " using (9). \n ii) Update " + }, + { + "bbox": [ + 56, + 71, + 296, + 308 + ], + "type": "inline_equation", + "content": "C_{\\text{agg},k}^{\\mathrm{r}} = C_{\\text{agg},k-1}^{\\mathrm{r}} + C_k^{\\mathrm{r}}" + }, + { + "bbox": [ + 56, + 71, + 296, + 308 + ], + "type": "text", + "content": ". \n3. end for. \n4. Restore " + }, + { + "bbox": [ + 56, + 71, + 296, + 308 + ], + "type": "inline_equation", + "content": "\\hat{W} = \\hat{W}_{\\text{agg},K}" + }, + { + "bbox": [ + 56, + 71, + 296, + 308 + ], + "type": "text", + "content": " with " + }, + { + "bbox": [ + 56, + 71, + 296, + 308 + ], + "type": "inline_equation", + "content": "\\hat{W}_{\\text{agg},K}^{\\mathrm{r}}" + }, + { + "bbox": [ + 56, + 71, + 296, + 308 + ], + "type": "text", + "content": " in (16)." + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "code_body" + } + ], + "index": 0, + "sub_type": "algorithm" + }, + { + "bbox": [ + 55, + 331, + 296, + 414 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 331, + 296, + 414 + ], + "spans": [ + { + "bbox": [ + 55, + 331, + 296, + 414 + ], + "type": "text", + "content": "An AL Branch of Federated Learning. The AFL incorporates the AL technique and can be considered as an AL branch within the FL context. The AL and its recursive formulation have demonstrated remarkable adaptability in continual learning utilizing a well-trained backbone [52, 54]. In this case, this intuition has been extended to the FL field through non-trivial derivations." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 426, + 137, + 440 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 426, + 137, + 440 + ], + "spans": [ + { + "bbox": [ + 55, + 426, + 137, + 440 + ], + "type": "text", + "content": "4. 
Experiments" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 447, + 296, + 495 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 447, + 296, + 495 + ], + "spans": [ + { + "bbox": [ + 55, + 447, + 296, + 495 + ], + "type": "text", + "content": "In this section, we provide extensive experiments to validate the proposed AFL, including comparison with FL state-of-the-arts and analysis under various settings. The training time and ablation study of regularization are also investigated." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 503, + 231, + 516 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 503, + 231, + 516 + ], + "spans": [ + { + "bbox": [ + 55, + 503, + 231, + 516 + ], + "type": "text", + "content": "4.1. Comparison with FL Techniques" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 521, + 296, + 568 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 521, + 296, + 568 + ], + "spans": [ + { + "bbox": [ + 55, + 521, + 296, + 568 + ], + "type": "text", + "content": "We conduct comparison with FL state-of-the-arts, including FedAvg [24], FedProx [17], MOON [15], FedGen [46], FedDyn [1], FedNTD [14] and FedDisco [42] under various non-IID settings." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 570, + 296, + 640 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 570, + 296, + 640 + ], + "spans": [ + { + "bbox": [ + 55, + 570, + 296, + 640 + ], + "type": "text", + "content": "Dataset and Model. We validate the baselines and our proposed AFL in 3 popular benchmark datasets in FL: CIFAR-10 [12], CIFAR-100 [12] and Tiny-ImageNet [25]. For all datasets, we use a ResNet-18 [10] pretrained on ImageNet-1k [5] as backbone. We freeze the backbones in all FL methods." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 642, + 296, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 642, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 642, + 296, + 713 + ], + "type": "text", + "content": "Data Partition. For simulating Non-IID scenarios in FL, we specify two Non-IID data partition methods including Latent Dirichlet Allocation [20] (LDA, denoted as NIID-1) and Sharding [20] (denoted as NIID-2). In the LDA setting, data assigned to each client is forced to satisfy the Dirichlet distribution, and degree of the data heterogeneity is controlled" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 72, + 553, + 204 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 72, + 553, + 204 + ], + "spans": [ + { + "bbox": [ + 313, + 72, + 553, + 204 + ], + "type": "text", + "content": "by parameter " + }, + { + "bbox": [ + 313, + 72, + 553, + 204 + ], + "type": "inline_equation", + "content": "\\alpha" + }, + { + "bbox": [ + 313, + 72, + 553, + 204 + ], + "type": "text", + "content": ". Smaller " + }, + { + "bbox": [ + 313, + 72, + 553, + 204 + ], + "type": "inline_equation", + "content": "\\alpha" + }, + { + "bbox": [ + 313, + 72, + 553, + 204 + ], + "type": "text", + "content": " leads to a more heterogeneous data distribution. In the Sharding strategy, the data is sorted by labels and divided into same-sized shards, and " + }, + { + "bbox": [ + 313, + 72, + 553, + 204 + ], + "type": "inline_equation", + "content": "s" + }, + { + "bbox": [ + 313, + 72, + 553, + 204 + ], + "type": "text", + "content": " controls the heterogeneity, i.e. the number of shards per client. 
When " + }, + { + "bbox": [ + 313, + 72, + 553, + 204 + ], + "type": "inline_equation", + "content": "s" + }, + { + "bbox": [ + 313, + 72, + 553, + 204 + ], + "type": "text", + "content": " takes a smaller value, the data is more heterogeneous. We choose " + }, + { + "bbox": [ + 313, + 72, + 553, + 204 + ], + "type": "inline_equation", + "content": "\\alpha = 0.1, 0.01" + }, + { + "bbox": [ + 313, + 72, + 553, + 204 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 72, + 553, + 204 + ], + "type": "inline_equation", + "content": "s = 10, 4" + }, + { + "bbox": [ + 313, + 72, + 553, + 204 + ], + "type": "text", + "content": " for CIFAR-100 and Tiny-ImageNet. For CIFAR-10, " + }, + { + "bbox": [ + 313, + 72, + 553, + 204 + ], + "type": "inline_equation", + "content": "\\alpha" + }, + { + "bbox": [ + 313, + 72, + 553, + 204 + ], + "type": "text", + "content": " is set to 0.1 and 0.01, and " + }, + { + "bbox": [ + 313, + 72, + 553, + 204 + ], + "type": "inline_equation", + "content": "s" + }, + { + "bbox": [ + 313, + 72, + 553, + 204 + ], + "type": "text", + "content": " is set to 4 and 2. Most existing methods are validated on data partition of " + }, + { + "bbox": [ + 313, + 72, + 553, + 204 + ], + "type": "inline_equation", + "content": "\\alpha = 0.3" + }, + { + "bbox": [ + 313, + 72, + 553, + 204 + ], + "type": "text", + "content": " to 1.0 and " + }, + { + "bbox": [ + 313, + 72, + 553, + 204 + ], + "type": "inline_equation", + "content": "s = 10" + }, + { + "bbox": [ + 313, + 72, + 553, + 204 + ], + "type": "text", + "content": " [14, 19]. Here we provide more challenging settings to validate the robustness under extremely heterogeneous cases." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 206, + 555, + 314 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 206, + 555, + 314 + ], + "spans": [ + { + "bbox": [ + 313, + 206, + 555, + 314 + ], + "type": "text", + "content": "Implementation Details. In all the experiments, we use 100 clients for each method and use the same partitioned dataset within experiments of the same data setting. We implement the AFL with a " + }, + { + "bbox": [ + 313, + 206, + 555, + 314 + ], + "type": "inline_equation", + "content": "\\gamma = 1" + }, + { + "bbox": [ + 313, + 206, + 555, + 314 + ], + "type": "text", + "content": " RI process (any " + }, + { + "bbox": [ + 313, + 206, + 555, + 314 + ], + "type": "inline_equation", + "content": "\\gamma" + }, + { + "bbox": [ + 313, + 206, + 555, + 314 + ], + "type": "text", + "content": " would suffice, see the ablation study). Each experiment setting is run 3 times and the mean and standard deviation of the best top-1 classification accuracy during training are reported. The implementation details of gradient-based compared methods can be found in Supplementary Materials E." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 316, + 556, + 519 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 316, + 556, + 519 + ], + "spans": [ + { + "bbox": [ + 313, + 316, + 556, + 519 + ], + "type": "text", + "content": "Experimental Results. We report the results of the compared methods under the setting of NIID-1 and NIID-2 in Table 1. As shown in the table, except for slightly weaker results than those of FedDyn in the NIID-2 setting, the AFL obtains very competitive performance compared with other methods across various settings. The degree of data heterogeneity does not at all affect the AFL. 
For instance, the accuracy remains " + }, + { + "bbox": [ + 313, + 316, + 556, + 519 + ], + "type": "inline_equation", + "content": "80.75\\%" + }, + { + "bbox": [ + 313, + 316, + 556, + 519 + ], + "type": "text", + "content": " on CIFAR-10 for various NIID-1 and NIID-2 settings. Although slight differences could occur among various settings, they barely impact the classification accuracy (an AL property indicated in [48]). The same pattern repeats on CIFAR-100 and Tiny-ImageNet. Uniquely, the AFL obtains identical results for all 3 repeated runs, i.e., the standard deviations are zero! This is because the AFL does not introduce any stochastic element, so the computation in each repeated run is naturally identical, hence the zero standard deviation." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 521, + 556, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 521, + 556, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 521, + 556, + 713 + ], + "type": "text", + "content": "Notably, when introducing a pre-trained backbone, the compared methods yield similar results and even FedAvg can become a competitive baseline. The reason could be that, when incorporating a pre-trained backbone, the FL process can be more stable with a better starting point, and methods stabilizing the FL process could be less effective. This phenomenon has also been observed in other FL studies [3, 7]. However, other FL methods still experience performance reductions under severe non-IID scenarios. For example, FedDyn performs relatively well (e.g., " + }, + { + "bbox": [ + 313, + 521, + 556, + 713 + ], + "type": "inline_equation", + "content": "57.55\\%" + }, + { + "bbox": [ + 313, + 521, + 556, + 713 + ], + "type": "text", + "content": " ) under NIID-1 " + }, + { + "bbox": [ + 313, + 521, + 556, + 713 + ], + "type": "inline_equation", + "content": "(\\alpha = 0.1)" + }, + { + "bbox": [ + 313, + 521, + 556, + 713 + ], + "type": "text", + "content": " on CIFAR-100 but undergoes a performance degradation (to " + }, + { + "bbox": [ + 313, + 521, + 556, + 713 + ], + "type": "inline_equation", + "content": "36.12\\%" + }, + { + "bbox": [ + 313, + 521, + 556, + 713 + ], + "type": "text", + "content": " ) when " + }, + { + "bbox": [ + 313, + 521, + 556, + 713 + ], + "type": "inline_equation", + "content": "\\alpha = 0.01" + }, + { + "bbox": [ + 313, + 521, + 556, + 713 + ], + "type": "text", + "content": ". This pattern is rather consistent in other compared methods, such as FedAvg " + }, + { + "bbox": [ + 313, + 521, + 556, + 713 + ], + "type": "inline_equation", + "content": "(56.62\\% \\rightarrow 32.99\\%)" + }, + { + "bbox": [ + 313, + 521, + 556, + 713 + ], + "type": "text", + "content": ", FedProx " + }, + { + "bbox": [ + 313, + 521, + 556, + 713 + ], + "type": "inline_equation", + "content": "(56.45\\% \\rightarrow 33.37\\%)" + }, + { + "bbox": [ + 313, + 521, + 556, + 713 + ], + "type": "text", + "content": ", and MOON " + }, + { + "bbox": [ + 313, + 521, + 556, + 713 + ], + "type": "inline_equation", + "content": "(56.58\\% \\rightarrow 33.34\\%)" + }, + { + "bbox": [ + 313, + 521, + 556, + 713 + ], + "type": "text", + "content": ", and is also true across all datasets.
The performance distributions regarding NIID-2 for" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "text", + "content": "4993" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 58, + 112, + 555, + 258 + ], + "blocks": [ + { + "bbox": [ + 55, + 70, + 555, + 105 + ], + "lines": [ + { + "bbox": [ + 55, + 70, + 555, + 105 + ], + "spans": [ + { + "bbox": [ + 55, + 70, + 555, + 105 + ], + "type": "text", + "content": "Table 1. The top-1 accuracy (%) of compared methods under two non-IID settings. Settings controlled by " + }, + { + "bbox": [ + 55, + 70, + 555, + 105 + ], + "type": "inline_equation", + "content": "\\alpha" + }, + { + "bbox": [ + 55, + 70, + 555, + 105 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 70, + 555, + 105 + ], + "type": "inline_equation", + "content": "s" + }, + { + "bbox": [ + 55, + 70, + 555, + 105 + ], + "type": "text", + "content": " are NIID-1 and NIID-2 respectively. The data is reported as average and standard deviation after 3 runs. Results in bold are the best within the compared methods in the same setting." + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 58, + 112, + 555, + 258 + ], + "lines": [ + { + "bbox": [ + 58, + 112, + 555, + 258 + ], + "spans": [ + { + "bbox": [ + 58, + 112, + 555, + 258 + ], + "type": "table", + "html": "
DatasetSettingFedAvgFedProxMOONFedGenFedDynFedNTDFedDiscoAFL
CIFAR-10α = 0.164.02±0.1864.07±0.0863.84±0.0364.14±0.2464.77±0.1164.64±0.0263.83±0.0880.75±0.00
α = 0.0560.52±0.3960.39±0.0960.28±0.1760.65±0.1960.35±0.5461.16±0.3359.90±0.0580.75±0.00
s = 468.47±0.1368.46±0.0868.47±0.1568.24±0.2873.50±0.1170.24±0.1165.04±0.1180.75±0.00
s = 257.81±0.0357.61±0.1257.72±0.1557.02±0.1864.07±0.0958.77±0.1858.78±0.0280.75±0.00
CIFAR-100α = 0.156.62±0.1256.45±0.2256.58±0.0256.48±0.1757.55±0.0856.60±0.1455.79±0.0458.56±0.00
α = 0.0132.99±0.2033.37±0.0933.34±0.1133.09±0.0936.12±0.0832.59±0.2125.72±0.0858.56±0.00
s = 1055.76±0.1355.80±0.1655.70±0.2560.93±0.1761.09±0.0954.69±0.1554.65±0.0958.56±0.00
s = 548.33±0.1548.29±0.1448.34±0.1948.12±0.0659.34±0.1147.00±0.1945.86±0.1858.56±0.00
Tiny-ImageNetα = 0.146.04±0.2746.47±0.2346.21±0.1446.27±0.1447.72±0.2246.17±0.1647.48±0.0654.67±0.00
α = 0.0132.63±0.1932.26±0.1432.38±0.2032.33±0.1435.19±0.0631.86±0.4427.15±0.1054.67±0.00
s = 1039.06±0.2638.97±0.2338.79±0.1438.82±0.1641.36±0.0637.55±0.0938.86±0.1254.67±0.00
s = 529.66±0.1929.17±0.1629.24±0.3029.37±0.2535.18±0.1829.01±0.1427.72±0.1854.67±0.00
", + "image_path": "c0e640a9fafed99127747bdbb4c3b82462aa35dc25061a2dabd1c1ed824489c3.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 277, + 297, + 339 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 277, + 297, + 339 + ], + "spans": [ + { + "bbox": [ + 55, + 277, + 297, + 339 + ], + "type": "text", + "content": "these compared methods resemble those in NIID-1, where smaller " + }, + { + "bbox": [ + 55, + 277, + 297, + 339 + ], + "type": "inline_equation", + "content": "s" + }, + { + "bbox": [ + 55, + 277, + 297, + 339 + ], + "type": "text", + "content": " values invite performance degradation among existing FL counterparts. For instance, the FedDyn exhibits " + }, + { + "bbox": [ + 55, + 277, + 297, + 339 + ], + "type": "inline_equation", + "content": "73.50\\% \\rightarrow 64.07\\%" + }, + { + "bbox": [ + 55, + 277, + 297, + 339 + ], + "type": "text", + "content": " for " + }, + { + "bbox": [ + 55, + 277, + 297, + 339 + ], + "type": "inline_equation", + "content": "s = 4 \\rightarrow 2" + }, + { + "bbox": [ + 55, + 277, + 297, + 339 + ], + "type": "text", + "content": " on CIFAR-10 while the AFL obtains competitive and identical results (e.g., " + }, + { + "bbox": [ + 55, + 277, + 297, + 339 + ], + "type": "inline_equation", + "content": "80.75\\%" + }, + { + "bbox": [ + 55, + 277, + 297, + 339 + ], + "type": "text", + "content": ")." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 347, + 202, + 360 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 347, + 202, + 360 + ], + "spans": [ + { + "bbox": [ + 55, + 347, + 202, + 360 + ], + "type": "text", + "content": "4.2. Analysis on Data Partition" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 365, + 296, + 414 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 365, + 296, + 414 + ], + "spans": [ + { + "bbox": [ + 55, + 365, + 296, + 414 + ], + "type": "text", + "content": "Here we provide broaden non-IID partitions to demonstrate AFL's invariance to data partitioning. This includes varying the client number and the non-IID degree. We also provide the IID partition results." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 415, + 296, + 559 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 415, + 296, + 559 + ], + "spans": [ + { + "bbox": [ + 55, + 415, + 296, + 559 + ], + "type": "text", + "content": "Client-number Invariance. We compare our AFL and the FedAvg under NIID-1 setting on CIFAR-100 and TinyImageNet with " + }, + { + "bbox": [ + 55, + 415, + 296, + 559 + ], + "type": "inline_equation", + "content": "\\alpha = 0.1" + }, + { + "bbox": [ + 55, + 415, + 296, + 559 + ], + "type": "text", + "content": ", and vary the number of clients from 100 to 500 and 1000. The results are shown in Figure 2. We observe that the AFL keeps an identical performance when scaling the number of clients, while the FedAvg experiences a performance decline along the increasing number (e.g., " + }, + { + "bbox": [ + 55, + 415, + 296, + 559 + ], + "type": "inline_equation", + "content": "56.57\\% \\rightarrow 41.01\\%" + }, + { + "bbox": [ + 55, + 415, + 296, + 559 + ], + "type": "text", + "content": " for " + }, + { + "bbox": [ + 55, + 415, + 296, + 559 + ], + "type": "inline_equation", + "content": "K = 100 \\rightarrow 1000" + }, + { + "bbox": [ + 55, + 415, + 296, + 559 + ], + "type": "text", + "content": " on CIFAR-100). 
This provides strong evidence supporting the invariance to data partitioning in our AFL. It also showcases the capability of pushing the AFL to the large-scale client training scenario without any performance compromise." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 313, + 277, + 556, + 469 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 277, + 556, + 469 + ], + "spans": [ + { + "bbox": [ + 313, + 277, + 556, + 469 + ], + "type": "text", + "content": "Data Heterogeneity Invariance. Here, we fix the client number to 100 and partition the CIFAR-100 under the setting of NIID-1 with " + }, + { + "bbox": [ + 313, + 277, + 556, + 469 + ], + "type": "inline_equation", + "content": "\\alpha = 0.005, 0.01, 0.1, 1" + }, + { + "bbox": [ + 313, + 277, + 556, + 469 + ], + "type": "text", + "content": ", including the IID setting as well. We report the results of AFL and FedAvg in Table 2. The FedAvg suffers greater accuracy losses (e.g., " + }, + { + "bbox": [ + 313, + 277, + 556, + 469 + ], + "type": "inline_equation", + "content": "57.72\\% \\to 24.74\\%" + }, + { + "bbox": [ + 313, + 277, + 556, + 469 + ], + "type": "text", + "content": " for " + }, + { + "bbox": [ + 313, + 277, + 556, + 469 + ], + "type": "inline_equation", + "content": "\\alpha = 1 \\to 0.005" + }, + { + "bbox": [ + 313, + 277, + 556, + 469 + ], + "type": "text", + "content": ") as the data heterogeneity grows. Under the IID partition, the FedAvg achieves its best performance (i.e., " + }, + { + "bbox": [ + 313, + 277, + 556, + 469 + ], + "type": "inline_equation", + "content": "57.89\\%" + }, + { + "bbox": [ + 313, + 277, + 556, + 469 + ], + "type": "text", + "content": "), which is still less competitive than our AFL (i.e., " + }, + { + "bbox": [ + 313, + 277, + 556, + 469 + ], + "type": "inline_equation", + "content": "58.56\\%" + }, + { + "bbox": [ + 313, + 277, + 556, + 469 + ], + "type": "text", + "content": "). On the other hand, AFL obtains identical results (i.e., " + }, + { + "bbox": [ + 313, + 277, + 556, + 469 + ], + "type": "inline_equation", + "content": "58.56\\%" + }, + { + "bbox": [ + 313, + 277, + 556, + 469 + ], + "type": "text", + "content": ") across various settings, including non-IID and IID ones. This is another strong proof of the AA law indicating the weight-invariant property of AFL. Our AFL is invariant to any degree of data heterogeneity, leading to unchanged performance in all possible data heterogeneous partition scenarios, even in extreme data heterogeneous cases (e.g., " + }, + { + "bbox": [ + 313, + 277, + 556, + 469 + ], + "type": "inline_equation", + "content": "\\alpha = 0.005" + }, + { + "bbox": [ + 313, + 277, + 556, + 469 + ], + "type": "text", + "content": ")." + } + ] + } + ], + "index": 6 + }, + { + "type": "table", + "bbox": [ + 315, + 513, + 555, + 552 + ], + "blocks": [ + { + "bbox": [ + 313, + 481, + 555, + 506 + ], + "lines": [ + { + "bbox": [ + 313, + 481, + 555, + 506 + ], + "spans": [ + { + "bbox": [ + 313, + 481, + 555, + 506 + ], + "type": "text", + "content": "Table 2. The top-1 classification accuracy " + }, + { + "bbox": [ + 313, + 481, + 555, + 506 + ], + "type": "inline_equation", + "content": "(\\%)" + }, + { + "bbox": [ + 313, + 481, + 555, + 506 + ], + "type": "text", + "content": " of AFL and FedAvg under different data heterogeneity."
+ } + ] + } + ], + "index": 7, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 315, + 513, + 555, + 552 + ], + "lines": [ + { + "bbox": [ + 315, + 513, + 555, + 552 + ], + "spans": [ + { + "bbox": [ + 315, + 513, + 555, + 552 + ], + "type": "table", + "html": "
Acc. (%)α = 0.005α = 0.01α = 0.1α = 1IID
FedAvg24.7433.0956.5757.7257.89
AFL58.5658.5658.5658.5658.56
", + "image_path": "5711b3cc21ff2b22d714021441aa998ba1c7655aff8d6fb580dc976d8797ccf8.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "table_body" + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 56, + 582, + 181, + 692 + ], + "blocks": [ + { + "bbox": [ + 94, + 571, + 156, + 582 + ], + "lines": [ + { + "bbox": [ + 94, + 571, + 156, + 582 + ], + "spans": [ + { + "bbox": [ + 94, + 571, + 156, + 582 + ], + "type": "text", + "content": "(a) CIFAR-100" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 56, + 582, + 181, + 692 + ], + "lines": [ + { + "bbox": [ + 56, + 582, + 181, + 692 + ], + "spans": [ + { + "bbox": [ + 56, + 582, + 181, + 692 + ], + "type": "image", + "image_path": "ff5af59812f9babf3ac3fc8c43a0eba868f57c76cb75c5f8153422642c402f1a.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 82, + 699, + 268, + 711 + ], + "lines": [ + { + "bbox": [ + 82, + 699, + 268, + 711 + ], + "spans": [ + { + "bbox": [ + 82, + 699, + 268, + 711 + ], + "type": "text", + "content": "Figure 2. Accuracy over various number of clients." + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_caption" + } + ], + "index": 10 + }, + { + "type": "image", + "bbox": [ + 183, + 582, + 294, + 691 + ], + "blocks": [ + { + "bbox": [ + 200, + 571, + 277, + 582 + ], + "lines": [ + { + "bbox": [ + 200, + 571, + 277, + 582 + ], + "spans": [ + { + "bbox": [ + 200, + 571, + 277, + 582 + ], + "type": "text", + "content": "(b) Tiny-ImageNet" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 183, + 582, + 294, + 691 + ], + "lines": [ + { + "bbox": [ + 183, + 582, + 294, + 691 + ], + "spans": [ + { + "bbox": [ + 183, + 582, + 294, + 691 + ], + "type": "image", + "image_path": "6468592f6c73bab37b6e26c227cf4add993262f230521630c46372ad8edfce85.jpg" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_body" + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 575, + 425, + 588 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 575, + 425, + 588 + ], + "spans": [ + { + "bbox": [ + 313, + 575, + 425, + 588 + ], + "type": "text", + "content": "4.3. Training Efficiency" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 313, + 594, + 556, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 594, + 556, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 594, + 556, + 715 + ], + "type": "text", + "content": "Fast Training with Single-round Aggregation. We plot the training evolution curves of accuracy on CIFAR-100 and Tiny-ImageNet in Figure 3 and report the execution time for each method in the legend bars. Compared FL methods take 60s to 100s on CIFAR-100 (100s to 160s on Tiny-ImageNet) to complete an aggregation round, leading to a total training time of 30,000s to 50,000 (50,000s to 80,000s). AFL, however, spends 236.61s on CIFAR-100 and 349.50s on Tiny-ImageNet, achieving approximately " + }, + { + "bbox": [ + 313, + 594, + 556, + 715 + ], + "type": "inline_equation", + "content": "150 \\times -200 \\times" + }, + { + "bbox": [ + 313, + 594, + 556, + 715 + ], + "type": "text", + "content": " speedups over its FL counterparts due to only one aggregation." 
+ } + ] + } + ], + "index": 15 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 295, + 749, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 749, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 749, + 315, + 757 + ], + "type": "text", + "content": "4994" + } + ] + } + ], + "index": 16 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 62, + 70, + 176, + 171 + ], + "blocks": [ + { + "bbox": [ + 62, + 70, + 176, + 171 + ], + "lines": [ + { + "bbox": [ + 62, + 70, + 176, + 171 + ], + "spans": [ + { + "bbox": [ + 62, + 70, + 176, + 171 + ], + "type": "image", + "image_path": "ad6d44bb618aeb472a976ec50a48b4bb738d8cfa9a1ce154bb9775abe9f16f1a.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 55, + 179, + 295, + 201 + ], + "lines": [ + { + "bbox": [ + 55, + 179, + 295, + 201 + ], + "spans": [ + { + "bbox": [ + 55, + 179, + 295, + 201 + ], + "type": "text", + "content": "Figure 3. Accuracy curves with communication rounds. Average training time is reported in the legends." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 182, + 70, + 288, + 171 + ], + "blocks": [ + { + "bbox": [ + 182, + 70, + 288, + 171 + ], + "lines": [ + { + "bbox": [ + 182, + 70, + 288, + 171 + ], + "spans": [ + { + "bbox": [ + 182, + 70, + 288, + 171 + ], + "type": "image", + "image_path": "eae531ba3b2bccc94bba3e40f17de8a8e43fd8a49d8594597a61cc95c530dfa8.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 223, + 214, + 236 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 223, + 214, + 236 + ], + "spans": [ + { + "bbox": [ + 55, + 223, + 214, + 236 + ], + "type": "text", + "content": "4.4. Ablation Study of RI Process" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 56, + 240, + 296, + 433 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 240, + 296, + 433 + ], + "spans": [ + { + "bbox": [ + 56, + 240, + 296, + 433 + ], + "type": "text", + "content": "Here, we conduct an ablation study regarding the RI process by reporting accuracies of AFL with " + }, + { + "bbox": [ + 56, + 240, + 296, + 433 + ], + "type": "inline_equation", + "content": "\\alpha = 0.1" + }, + { + "bbox": [ + 56, + 240, + 296, + 433 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 56, + 240, + 296, + 433 + ], + "type": "inline_equation", + "content": "K = 100,500,1000" + }, + { + "bbox": [ + 56, + 240, + 296, + 433 + ], + "type": "text", + "content": " under different values of " + }, + { + "bbox": [ + 56, + 240, + 296, + 433 + ], + "type": "inline_equation", + "content": "\\gamma" + }, + { + "bbox": [ + 56, + 240, + 296, + 433 + ], + "type": "text", + "content": ". The results without and with the RI process are provided in Table 3. 
When " + }, + { + "bbox": [ + 56, + 240, + 296, + 433 + ], + "type": "inline_equation", + "content": "\\gamma = 0" + }, + { + "bbox": [ + 56, + 240, + 296, + 433 + ], + "type": "text", + "content": " (i.e., no regularization involved), the AFL stops working with " + }, + { + "bbox": [ + 56, + 240, + 296, + 433 + ], + "type": "inline_equation", + "content": "K = 500" + }, + { + "bbox": [ + 56, + 240, + 296, + 433 + ], + "type": "text", + "content": " and 1000 due to the ill-conditioned matrix scenario (e.g., " + }, + { + "bbox": [ + 56, + 240, + 296, + 433 + ], + "type": "inline_equation", + "content": "N_{k} < y_{\\mathrm{e}}" + }, + { + "bbox": [ + 56, + 240, + 296, + 433 + ], + "type": "text", + "content": "). Such an ill-conditioned case is avoided by introducing " + }, + { + "bbox": [ + 56, + 240, + 296, + 433 + ], + "type": "inline_equation", + "content": "\\gamma" + }, + { + "bbox": [ + 56, + 240, + 296, + 433 + ], + "type": "text", + "content": ". However, the lack of RI process (see left columns in Table 3) could lead to accuracy loss. For instance, for " + }, + { + "bbox": [ + 56, + 240, + 296, + 433 + ], + "type": "inline_equation", + "content": "\\gamma = 100" + }, + { + "bbox": [ + 56, + 240, + 296, + 433 + ], + "type": "text", + "content": ", the AFL could suffer a loss of " + }, + { + "bbox": [ + 56, + 240, + 296, + 433 + ], + "type": "inline_equation", + "content": "9\\%" + }, + { + "bbox": [ + 56, + 240, + 296, + 433 + ], + "type": "text", + "content": " (i.e., " + }, + { + "bbox": [ + 56, + 240, + 296, + 433 + ], + "type": "inline_equation", + "content": "58.56\\% \\rightarrow 49.62\\%" + }, + { + "bbox": [ + 56, + 240, + 296, + 433 + ], + "type": "text", + "content": "). This is the result of regularization accumulation (see (15)). With the RI process, the AFL obtains an identical result across various " + }, + { + "bbox": [ + 56, + 240, + 296, + 433 + ], + "type": "inline_equation", + "content": "\\gamma" + }, + { + "bbox": [ + 56, + 240, + 296, + 433 + ], + "type": "text", + "content": " values. More importantly, this demonstrates that adopting the RI avoids the need to find proper " + }, + { + "bbox": [ + 56, + 240, + 296, + 433 + ], + "type": "inline_equation", + "content": "\\gamma" + }, + { + "bbox": [ + 56, + 240, + 296, + 433 + ], + "type": "text", + "content": " values. That is, the regularization is a removable intermediary, not a hyperparameter that requires tuning." + } + ] + } + ], + "index": 4 + }, + { + "type": "table", + "bbox": [ + 57, + 475, + 296, + 512 + ], + "blocks": [ + { + "bbox": [ + 55, + 443, + 295, + 466 + ], + "lines": [ + { + "bbox": [ + 55, + 443, + 295, + 466 + ], + "spans": [ + { + "bbox": [ + 55, + 443, + 295, + 466 + ], + "type": "text", + "content": "Table 3. Ablation study of RI under various " + }, + { + "bbox": [ + 55, + 443, + 295, + 466 + ], + "type": "inline_equation", + "content": "\\gamma" + }, + { + "bbox": [ + 55, + 443, + 295, + 466 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 443, + 295, + 466 + ], + "type": "inline_equation", + "content": "K" + }, + { + "bbox": [ + 55, + 443, + 295, + 466 + ], + "type": "text", + "content": ". The left/right results are performance w/o and w/ the RI process in (16)." + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 57, + 475, + 296, + 512 + ], + "lines": [ + { + "bbox": [ + 57, + 475, + 296, + 512 + ], + "spans": [ + { + "bbox": [ + 57, + 475, + 296, + 512 + ], + "type": "table", + "html": "
<table><tr><td>Acc.(%)</td><td>γ = 0</td><td>γ = 0.1</td><td>γ = 1</td><td>γ = 10</td><td>γ = 100</td></tr>
<tr><td>K = 100</td><td>58.56 / N/A</td><td>58.54 / 58.56</td><td>58.51 / 58.56</td><td>58.15 / 58.56</td><td>55.77 / 58.56</td></tr>
<tr><td>K = 500</td><td>1.11 / N/A</td><td>58.52 / 58.56</td><td>58.30 / 58.56</td><td>56.72 / 58.56</td><td>51.77 / 58.56</td></tr>
<tr><td>K = 1000</td><td>0.75 / N/A</td><td>58.51 / 58.56</td><td>58.15 / 58.56</td><td>55.77 / 58.56</td><td>49.62 / 58.56</td></tr></table>
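The left ("w/o RI") entries above drift monotonically away from 58.56% as γ grows, which is exactly how a plain ridge solution behaves: its distance from the γ-free Moore-Penrose least-squares solution increases with γ. The paper's actual RI update in (16) is not reproduced here; the toy NumPy experiment below, on arbitrary synthetic data, only illustrates the γ-dependence that the RI process is designed to cancel.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 50))
Y = rng.normal(size=(300, 10))
W_mp = np.linalg.pinv(X) @ Y  # gamma-free Moore-Penrose solution

for gamma in (0.1, 1.0, 10.0, 100.0):
    W_ridge = np.linalg.solve(X.T @ X + gamma * np.eye(50), X.T @ Y)
    # The gap from W_mp grows with gamma, mirroring the accuracy drop
    # in the "w/o RI" columns of Table 3.
    print(gamma, np.linalg.norm(W_ridge - W_mp))
```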
", + "image_path": "ca767e7f3342def68ea4666d3e0a9bdee4db06bb301c9af398e5e8d42acd7a42.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "table_body" + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 532, + 249, + 544 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 532, + 249, + 544 + ], + "spans": [ + { + "bbox": [ + 55, + 532, + 249, + 544 + ], + "type": "text", + "content": "4.5. Validation with Different Backbones" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 550, + 295, + 645 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 550, + 295, + 645 + ], + "spans": [ + { + "bbox": [ + 55, + 550, + 295, + 645 + ], + "type": "text", + "content": "To explore the effect of different backbones used in the AFL, we extend the experiments with VGG11 [28] and ViT-B-16 [6]. All these backbone are pre-trained in ImageNet-1k and we conduct the experiments under the same setting in Section 4.2. Due to the invariance to data partitioning, we only report one result in single dataset. As shown in the Table 4, with different pre-trained backbones, the AFL can all obtain competitive results." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 55, + 657, + 222, + 670 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 657, + 222, + 670 + ], + "spans": [ + { + "bbox": [ + 55, + 657, + 222, + 670 + ], + "type": "text", + "content": "5. Limitations and Future Work" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 55, + 677, + 295, + 712 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 677, + 295, + 712 + ], + "spans": [ + { + "bbox": [ + 55, + 677, + 295, + 712 + ], + "type": "text", + "content": "Utilizing Pre-trained Backbone. The AFL approach is both facilitated and constrained by the requirement of having a well-trained feature extractor. However, this limitation has" + } + ] + } + ], + "index": 10 + }, + { + "type": "table", + "bbox": [ + 332, + 102, + 537, + 149 + ], + "blocks": [ + { + "bbox": [ + 313, + 71, + 555, + 93 + ], + "lines": [ + { + "bbox": [ + 313, + 71, + 555, + 93 + ], + "spans": [ + { + "bbox": [ + 313, + 71, + 555, + 93 + ], + "type": "text", + "content": "Table 4. The results of top-1 accuracy in % of the AFL with different backbones including ResNet-18, VGG11 and ViT-B-16." + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 332, + 102, + 537, + 149 + ], + "lines": [ + { + "bbox": [ + 332, + 102, + 537, + 149 + ], + "spans": [ + { + "bbox": [ + 332, + 102, + 537, + 149 + ], + "type": "table", + "html": "
<table><tr><td>Acc.(%)</td><td>CIFAR-10</td><td>CIFAR-100</td><td>Tiny-ImageNet</td></tr>
<tr><td>ResNet-18</td><td>80.75</td><td>58.56</td><td>54.67</td></tr>
<tr><td>VGG11</td><td>82.72</td><td>60.43</td><td>54.73</td></tr>
<tr><td>ViT-B-16</td><td>93.92</td><td>75.45</td><td>82.02</td></tr></table>
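A backbone-swap experiment of this kind is usually wired by freezing an ImageNet-pre-trained feature extractor and training only the classifier head on its embeddings. The sketch below uses standard torchvision model names matching the table (ResNet-18, VGG11, ViT-B-16); it shows only the frozen-extractor part, and the paper's actual data pipeline and analytic head are not reproduced here.

```python
import torch
import torchvision

def frozen_extractor(name="resnet18"):
    # Swap in "vgg11" or "vit_b_16" to mirror the backbone comparison.
    model = torchvision.models.get_model(name, weights="DEFAULT")
    if hasattr(model, "fc"):            # ResNet family
        model.fc = torch.nn.Identity()
    elif hasattr(model, "heads"):       # ViT family
        model.heads = torch.nn.Identity()
    else:                               # VGG: drop only the last FC layer
        model.classifier[-1] = torch.nn.Identity()
    model.eval()
    for p in model.parameters():
        p.requires_grad_(False)
    return model

extractor = frozen_extractor("resnet18")
with torch.no_grad():
    feats = extractor(torch.randn(4, 3, 224, 224))
print(feats.shape)  # torch.Size([4, 512]) embeddings for ResNet-18
```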
", + "image_path": "914575a73a3251bfce1b23d0910bceed49ca3110d2c4d4a4db5abe3868672b3f.jpg" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "table_body" + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 169, + 555, + 276 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 169, + 555, + 276 + ], + "spans": [ + { + "bbox": [ + 313, + 169, + 555, + 276 + ], + "type": "text", + "content": "been significantly mitigated by the emergence of reusing pretrained models for new tasks. This \"pre-trained backbone + downstream task\" paradigm has become a standard practice in numerous deep learning domains, offering improved generalization and reduced computational costs. FL can further enhance this paradigm and we validate that collaboration can still be beneficial with pre-trained backbones in Supplementary Materials F. The proposal of the AFL aligns with these recent research trends, making it a sensible FL advancement." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 276, + 556, + 383 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 276, + 556, + 383 + ], + "spans": [ + { + "bbox": [ + 313, + 276, + 556, + 383 + ], + "type": "text", + "content": "Partially Participating and Stragglers. The AFL formulates a single-round aggregation for FL systems, promoting rapid convergence and reducing communication overhead. However, challenges arise when clients engage partially or when stragglers impede progress. Since clients can only contribute to the aggregation after finishing local computations, the AFL needs to wait for all the clients. This potentially hampers the AFL's overall efficiency and inspires us to further refine the AFL to address these issues." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 313, + 384, + 556, + 491 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 384, + 556, + 491 + ], + "spans": [ + { + "bbox": [ + 313, + 384, + 556, + 491 + ], + "type": "text", + "content": "Linear Assumptions of AFL. The AFL is established upon linear classifiers and may be less effective with nonlinear data distribution. To address this, AFL can incorporate non-linear projections including non-linear activations or kernel functions. Also, for multi-layer model, AFL can formulate local least-square problem at each layer by label projection [49]. These techniques have been utilized in various AL-based work [39] and the AA law holds theoretically. We will conduct a further exploration in future." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 314, + 502, + 388, + 514 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 502, + 388, + 514 + ], + "spans": [ + { + "bbox": [ + 314, + 502, + 388, + 514 + ], + "type": "text", + "content": "6. Conclusion" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 313, + 522, + 556, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 522, + 556, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 522, + 556, + 713 + ], + "type": "text", + "content": "In this paper, we introduce a gradient-free FL framework named analytic federated learning (AFL). The AFL unveils analytical solutions both in the local client training stage and the aggregation stage. This leads to one-epoch local training, single-round aggregation, and fast convergence. In particular, the single-round aggregation property is theoretically supported and proved by the well-formulated AA law. 
Additionally, by introducing the RI process, we re-establish the AFL's optimality, which could otherwise be compromised in rank-deficient scenarios that typically arise with a large number of clients. The AFL demonstrates its invariance to data partitioning, a property that allows several appealing FL characteristics such as data heterogeneity invariance and client-number invariance. These characteristics are empirically validated through experiments across various settings, where the AFL achieves a consistent and competitive performance." + } + ] + } + ], + "index": 17 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 295, + 749, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 749, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 749, + 315, + 757 + ], + "type": "text", + "content": "4995" + } + ] + } + ], + "index": 18 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 147, + 85 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 72, + 147, + 85 + ], + "spans": [ + { + "bbox": [ + 56, + 72, + 147, + 85 + ], + "type": "text", + "content": "Acknowledgment" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 90, + 297, + 190 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 90, + 297, + 190 + ], + "spans": [ + { + "bbox": [ + 55, + 90, + 297, + 190 + ], + "type": "text", + "content": "This research was supported by the National Natural Science Foundation of China (62306117, 62406114), the Guangzhou Basic and Applied Basic Research Foundation (2024A04J3681, 2023A04J1687), GJYC program of Guangzhou (2024D03J0005), the National Key R & D Project from the Ministry of Science and Technology (2024YFA1211500), and the Fundamental Research Funds for the Central Universities (2024ZYGXZR074)." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 201, + 115, + 213 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 201, + 115, + 213 + ], + "spans": [ + { + "bbox": [ + 56, + 201, + 115, + 213 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 57, + 220, + 296, + 713 + ], + "type": "list", + "angle": 0, + "index": 14, + "blocks": [ + { + "bbox": [ + 61, + 220, + 296, + 264 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 220, + 296, + 264 + ], + "spans": [ + { + "bbox": [ + 61, + 220, + 296, + 264 + ], + "type": "text", + "content": "[1] Durmus Alp Emre Acar, Yue Zhao, Ramon Matas, Matthew Mattina, Paul Whatmough, and Venkatesh Saligrama. Federated learning based on dynamic regularization. In International Conference on Learning Representations, 2021. 6" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 62, + 266, + 296, + 308 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 266, + 296, + 308 + ], + "spans": [ + { + "bbox": [ + 62, + 266, + 296, + 308 + ], + "type": "text", + "content": "[2] Ilai Bistritz, Ariana Mann, and Nicholas Bambos. Distributed distillation for on-device learning. In Advances in Neural Information Processing Systems, pages 22593-22604. Curran Associates, Inc., 2020. 
2" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 62, + 310, + 296, + 354 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 310, + 296, + 354 + ], + "spans": [ + { + "bbox": [ + 62, + 310, + 296, + 354 + ], + "type": "text", + "content": "[3] Hong-You Chen, Cheng-Hao Tu, Ziwei Li, Han Wei Shen, and Wei-Lun Chao. On the importance and applicability of pretraining for federated learning. In The Eleventh International Conference on Learning Representations, 2023. 6" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 62, + 355, + 296, + 388 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 355, + 296, + 388 + ], + "spans": [ + { + "bbox": [ + 62, + 355, + 296, + 388 + ], + "type": "text", + "content": "[4] Randall E. Cline. Representations for the generalized inverse of a partitioned matrix. Journal of the Society for Industrial and Applied Mathematics, 12(3):588-600, 1964. 4" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 62, + 389, + 296, + 433 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 389, + 296, + 433 + ], + "spans": [ + { + "bbox": [ + 62, + 389, + 296, + 433 + ], + "type": "text", + "content": "[5] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248-255, 2009. 6" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 62, + 434, + 296, + 489 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 434, + 296, + 489 + ], + "spans": [ + { + "bbox": [ + 62, + 434, + 296, + 489 + ], + "type": "text", + "content": "[6] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale, 2021. 8" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 62, + 491, + 296, + 544 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 491, + 296, + 544 + ], + "spans": [ + { + "bbox": [ + 62, + 491, + 296, + 544 + ], + "type": "text", + "content": "[7] Chun-Mei Feng, Bangjun Li, Xinxing Xu, Yong Liu, Huazhu Fu, and Wangmeng Zuo. Learning federated visual prompt in null space for mri reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 8064-8073, 2023. 1, 6" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 62, + 546, + 296, + 589 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 546, + 296, + 589 + ], + "spans": [ + { + "bbox": [ + 62, + 546, + 296, + 589 + ], + "type": "text", + "content": "[8] Ping Guo, Michael R Lyu, and NE Mastorakis. Pseudoinverse learning algorithm for feedforward neural networks. Advances in Neural Networks and Applications, pages 321-326, 2001. 2, 3" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 62, + 590, + 296, + 645 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 590, + 296, + 645 + ], + "spans": [ + { + "bbox": [ + 62, + 590, + 296, + 645 + ], + "type": "text", + "content": "[9] Qiushan Guo, Xinjiang Wang, Yichao Wu, Zhipeng Yu, Ding Liang, Xiaolin Hu, and Ping Luo. Online knowledge distillation via collaborative learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 
2" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 57, + 647, + 296, + 689 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 647, + 296, + 689 + ], + "spans": [ + { + "bbox": [ + 57, + 647, + 296, + 689 + ], + "type": "text", + "content": "[10] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. 6" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 57, + 691, + 296, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 691, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 57, + 691, + 296, + 713 + ], + "type": "text", + "content": "[11] Georgios A Kaissis, Marcus R Makowski, Daniel Rückert, and Rickmer F Braren. Secure, privacy-preserving and feder-" + } + ] + } + ], + "index": 13 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 73, + 555, + 713 + ], + "type": "list", + "angle": 0, + "index": 29, + "blocks": [ + { + "bbox": [ + 333, + 73, + 553, + 95 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 333, + 73, + 553, + 95 + ], + "spans": [ + { + "bbox": [ + 333, + 73, + 553, + 95 + ], + "type": "text", + "content": "ated machine learning in medical imaging. Nature Machine Intelligence, 2(6):305-311, 2020. 1" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 316, + 96, + 555, + 128 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 96, + 555, + 128 + ], + "spans": [ + { + "bbox": [ + 316, + 96, + 555, + 128 + ], + "type": "text", + "content": "[12] Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009. 6" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 317, + 129, + 555, + 193 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 129, + 555, + 193 + ], + "spans": [ + { + "bbox": [ + 317, + 129, + 555, + 193 + ], + "type": "text", + "content": "[13] Rajesh Kumar, Abdullah Aman Khan, Jay Kumar, Zakria, Noorbakhsh Amiri Golilarz, Simin Zhang, Yang Ting, Chengyu Zheng, and Wenyong Wang. Blockchain-federated-learning and deep learning models for Covid-19 detection using ct imaging. IEEE Sensors Journal, 21(14):16301-16314, 2021. 1" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 316, + 195, + 554, + 248 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 195, + 554, + 248 + ], + "spans": [ + { + "bbox": [ + 316, + 195, + 554, + 248 + ], + "type": "text", + "content": "[14] Gihun Lee, Minchan Jeong, Yongjin Shin, Sangmin Bae, and Se-Young Yun. Preservation of the global knowledge by nottrue distillation in federated learning. In Advances in Neural Information Processing Systems, pages 38461-38474. Curran Associates, Inc., 2022. 6" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 316, + 251, + 555, + 293 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 251, + 555, + 293 + ], + "spans": [ + { + "bbox": [ + 316, + 251, + 555, + 293 + ], + "type": "text", + "content": "[15] Qinbin Li, Bingsheng He, and Dawn Song. Model-contrastive federated learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10713–10722, 2021. 
6" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 316, + 295, + 555, + 338 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 295, + 555, + 338 + ], + "spans": [ + { + "bbox": [ + 316, + 295, + 555, + 338 + ], + "type": "text", + "content": "[16] Qinbin Li, Yiqun Diao, Quan Chen, and Bingsheng He. Federated learning on non-iid data silos: An experimental study. In 2022 IEEE 38th International Conference on Data Engineering (ICDE), pages 965-978, 2022. 1" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 316, + 339, + 555, + 381 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 339, + 555, + 381 + ], + "spans": [ + { + "bbox": [ + 316, + 339, + 555, + 381 + ], + "type": "text", + "content": "[17] Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, and Virginia Smith. Federated optimization in heterogeneous networks. In Proceedings of Machine Learning and Systems, pages 429-450, 2020. 1, 2, 6" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 317, + 384, + 555, + 437 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 384, + 555, + 437 + ], + "spans": [ + { + "bbox": [ + 317, + 384, + 555, + 437 + ], + "type": "text", + "content": "[18] Xin-Chun Li, Yi-Chu Xu, Shaoming Song, Bingshuai Li, Yinchuan Li, Yunfeng Shao, and De-Chuan Zhan. Federated learning with position-aware neurons. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10082-10091, 2022. 2" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 317, + 438, + 555, + 491 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 438, + 555, + 491 + ], + "spans": [ + { + "bbox": [ + 317, + 438, + 555, + 491 + ], + "type": "text", + "content": "[19] Zexi Li, Tao Lin, Xinyi Shang, and Chao Wu. Revisiting weighted aggregation in federated learning with neural networks. In Proceedings of the 40th International Conference on Machine Learning, pages 19767-19788. PMLR, 2023. 2, 6" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 317, + 493, + 555, + 536 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 493, + 555, + 536 + ], + "spans": [ + { + "bbox": [ + 317, + 493, + 555, + 536 + ], + "type": "text", + "content": "[20] Tao Lin, Lingjing Kong, Sebastian U Stich, and Martin Jaggi. Ensemble distillation for robust model fusion in federated learning. In Advances in Neural Information Processing Systems, pages 2351-2363. Curran Associates, Inc., 2020. 6" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 317, + 537, + 555, + 581 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 537, + 555, + 581 + ], + "spans": [ + { + "bbox": [ + 317, + 537, + 555, + 581 + ], + "type": "text", + "content": "[21] Tao Lin, Lingjing Kong, Sebastian U Stich, and Martin Jaggi. Ensemble distillation for robust model fusion in federated learning. In Advances in Neural Information Processing Systems, pages 2351-2363. Curran Associates, Inc., 2020. 2" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 317, + 582, + 555, + 624 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 582, + 555, + 624 + ], + "spans": [ + { + "bbox": [ + 317, + 582, + 555, + 624 + ], + "type": "text", + "content": "[22] Zichen Liu, Chao Du, Wee Sun Lee, and Min Lin. Locality sensitive sparse encoding for learning world models online. In The Twelfth International Conference on Learning Representations, 2024. 
4" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 317, + 625, + 555, + 679 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 625, + 555, + 679 + ], + "spans": [ + { + "bbox": [ + 317, + 625, + 555, + 679 + ], + "type": "text", + "content": "[23] Shubham Malaviya, Manish Shukla, and Sachin Lodha. Reducing communication overhead in federated learning for pre-trained language models using parameter-efficient finetuning. In Proceedings of The 2nd Conference on Lifelong Learning Agents, pages 456-469. PMLR, 2023. 1" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 316, + 681, + 555, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 681, + 555, + 713 + ], + "spans": [ + { + "bbox": [ + 316, + 681, + 555, + 713 + ], + "type": "text", + "content": "[24] Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-Efficient Learning of Deep Networks from Decentralized" + } + ] + } + ], + "index": 28 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 295, + 749, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 749, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 749, + 315, + 757 + ], + "type": "text", + "content": "4996" + } + ] + } + ], + "index": 30 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 296, + 712 + ], + "type": "list", + "angle": 0, + "index": 15, + "blocks": [ + { + "bbox": [ + 76, + 72, + 296, + 106 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 76, + 72, + 296, + 106 + ], + "spans": [ + { + "bbox": [ + 76, + 72, + 296, + 106 + ], + "type": "text", + "content": "Data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, pages 1273-1282. PMLR, 2017. 1, 2, 6" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 57, + 106, + 272, + 117 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 106, + 272, + 117 + ], + "spans": [ + { + "bbox": [ + 57, + 106, + 272, + 117 + ], + "type": "text", + "content": "[25] Mohammed Ali mnmmoustafa. Tiny imagenet, 2017. 6" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 57, + 118, + 295, + 149 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 118, + 295, + 149 + ], + "spans": [ + { + "bbox": [ + 57, + 118, + 295, + 149 + ], + "type": "text", + "content": "[26] J. Park and I. W. Sandberg. Universal approximation using radial-basis-function networks. Neural Computation, 3(2): 246-257, 1991. 3" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 56, + 151, + 295, + 183 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 151, + 295, + 183 + ], + "spans": [ + { + "bbox": [ + 56, + 151, + 295, + 183 + ], + "type": "text", + "content": "[27] Geet Shingi. A federated learning based approach for loan defaults prediction. In 2020 International Conference on Data Mining Workshops (ICDMW), pages 362-368, 2020. 1" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 57, + 184, + 296, + 216 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 184, + 296, + 216 + ], + "spans": [ + { + "bbox": [ + 57, + 184, + 296, + 216 + ], + "type": "text", + "content": "[28] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition, 2015. 
8" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 56, + 217, + 296, + 271 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 217, + 296, + 271 + ], + "spans": [ + { + "bbox": [ + 56, + 217, + 296, + 271 + ], + "type": "text", + "content": "[29] Yue Tan, Guodong Long, LU LIU, Tianyi Zhou, Qinghua Lu, Jing Jiang, and Chengqi Zhang. Fedproto: Federated prototype learning across heterogeneous clients. Proceedings of the AAAI Conference on Artificial Intelligence, 36(8):8432-8440, 2022. 1" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 56, + 272, + 296, + 326 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 272, + 296, + 326 + ], + "spans": [ + { + "bbox": [ + 56, + 272, + 296, + 326 + ], + "type": "text", + "content": "[30] Yue Tan, Guodong Long, Jie Ma, LU LIU, Tianyi Zhou, and Jing Jiang. Federated learning from pre-trained models: A contrastive learning approach. In Advances in Neural Information Processing Systems, pages 19332-19344. Curran Associates, Inc., 2022. 1" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 56, + 327, + 296, + 360 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 327, + 296, + 360 + ], + "spans": [ + { + "bbox": [ + 56, + 327, + 296, + 360 + ], + "type": "text", + "content": "[31] Kar-Ann Toh. Learning from the kernel and the range space. In 2018 IEEE/ACIS 17th International Conference on Computer and Information Science (ICIS), pages 1–6, 2018. 3" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 56, + 361, + 296, + 403 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 361, + 296, + 403 + ], + "spans": [ + { + "bbox": [ + 56, + 361, + 296, + 403 + ], + "type": "text", + "content": "[32] Kar-Ann Toh. Learning from the kernel and the range space. In the Proceedings of the 17th 2018 IEEE Conference on Computer and Information Science, pages 417–422. IEEE, 2018. 2" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 56, + 404, + 296, + 459 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 404, + 296, + 459 + ], + "spans": [ + { + "bbox": [ + 56, + 404, + 296, + 459 + ], + "type": "text", + "content": "[33] Haozhao Wang, Yichen Li, Wenchao Xu, Ruixuan Li, Yufeng Zhan, and Zhigang Zeng. DaFKD: Domain-aware federated knowledge distillation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 20412-20421, 2023. 2" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 56, + 460, + 296, + 514 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 460, + 296, + 514 + ], + "spans": [ + { + "bbox": [ + 56, + 460, + 296, + 514 + ], + "type": "text", + "content": "[34] Haozhao Wang, Haoran Xu, Yichen Li, Yuan Xu, Ruixuan Li, and Tianwei Zhang. FedCDA: Federated learning with cross-rounds divergence-aware aggregation. In The Twelfth International Conference on Learning Representations, 2024. 2" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 56, + 514, + 296, + 568 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 514, + 296, + 568 + ], + "spans": [ + { + "bbox": [ + 56, + 514, + 296, + 568 + ], + "type": "text", + "content": "[35] Jianyu Wang, Qinghua Liu, Hao Liang, Gauri Joshi, and H. Vincent Poor. Tackling the objective inconsistency problem in heterogeneous federated optimization. In Advances in Neural Information Processing Systems, pages 7611-7623. Curran Associates, Inc., 2020. 
1, 2" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 56, + 570, + 296, + 613 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 570, + 296, + 613 + ], + "spans": [ + { + "bbox": [ + 56, + 570, + 296, + 613 + ], + "type": "text", + "content": "[36] Jue Wang, Ping Guo, and Yanjun Li. DensePILAE: a feature reuse pseudoinverse learning algorithm for deep stacked autoencoder. Complex & Intelligent Systems, pages 1-11, 2021. 3" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 56, + 614, + 296, + 667 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 614, + 296, + 667 + ], + "spans": [ + { + "bbox": [ + 56, + 614, + 296, + 667 + ], + "type": "text", + "content": "[37] X. Wang, T. Zhang, and R. Wang. Noniterative deep learning: Incorporating restricted boltzmann machine into multilayer random weight neural networks. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 49(7):1299-1308, 2019. 3" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 56, + 669, + 296, + 712 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 669, + 296, + 712 + ], + "spans": [ + { + "bbox": [ + 56, + 669, + 296, + 712 + ], + "type": "text", + "content": "[38] Guile Wu and Shaogang Gong. Peer collaborative learning for online knowledge distillation. Proceedings of the AAAI Conference on Artificial Intelligence, 35(12):10302-10310, 2021. 2" + } + ] + } + ], + "index": 14 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 73, + 555, + 714 + ], + "type": "list", + "angle": 0, + "index": 28, + "blocks": [ + { + "bbox": [ + 316, + 73, + 555, + 128 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 73, + 555, + 128 + ], + "spans": [ + { + "bbox": [ + 316, + 73, + 555, + 128 + ], + "type": "text", + "content": "[39] Yicheng Xu, Yuxin Chen, Jiahao Nie, Yusong Wang, Huiping Zhuang, and Manabu Okumura. Advancing cross-domain discriminability in continual learning of vision-language models. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. 8" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 316, + 129, + 555, + 183 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 129, + 555, + 183 + ], + "spans": [ + { + "bbox": [ + 316, + 129, + 555, + 183 + ], + "type": "text", + "content": "[40] Fu-En Yang, Chien-Yi Wang, and Yu-Chiang Frank Wang. Efficient model personalization in federated learning via client-specific prompt generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 19159-19168, 2023. 1" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 316, + 185, + 555, + 229 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 185, + 555, + 229 + ], + "spans": [ + { + "bbox": [ + 316, + 185, + 555, + 229 + ], + "type": "text", + "content": "[41] Wensi Yang, Yuhang Zhang, Kejiang Ye, Li Li, and ChengZhong Xu. Ffd: A federated learning based method for credit card fraud detection. In Big Data - BigData 2019, pages 18-32, Cham, 2019. Springer International Publishing. 1" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 316, + 230, + 555, + 285 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 230, + 555, + 285 + ], + "spans": [ + { + "bbox": [ + 316, + 230, + 555, + 285 + ], + "type": "text", + "content": "[42] Rui Ye, Mingkai Xu, Jianyu Wang, Chenxin Xu, Siheng Chen, and Yanfeng Wang. 
FedDisco: Federated learning with discrepancy-aware collaboration. In Proceedings of the 40th International Conference on Machine Learning, pages 39879-39902. PMLR, 2023. 6" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 316, + 286, + 555, + 342 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 286, + 555, + 342 + ], + "spans": [ + { + "bbox": [ + 316, + 286, + 555, + 342 + ], + "type": "text", + "content": "[43] Fuxun Yu, Weishan Zhang, Zhuwei Qin, Zirui Xu, Di Wang, Chenchen Liu, Zhi Tian, and Xiang Chen. Fed2: Feature-aligned federated learning. In Proceedings of the 27th ACM SIGKDD conference on knowledge discovery & data mining, pages 2066-2074, 2021. 2" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 316, + 343, + 555, + 398 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 343, + 555, + 398 + ], + "spans": [ + { + "bbox": [ + 316, + 343, + 555, + 398 + ], + "type": "text", + "content": "[44] Lin Zhang, Li Shen, Liang Ding, Dacheng Tao, and Ling-Yu Duan. Fine-tuning global model via data-free knowledge distillation for non-iid federated learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10174-10183, 2022. 2" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 316, + 399, + 555, + 497 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 399, + 555, + 497 + ], + "spans": [ + { + "bbox": [ + 316, + 399, + 555, + 497 + ], + "type": "text", + "content": "[45] Zhuo Zhang, Yuanhang Yang, Yong Dai, Qifan Wang, Yue Yu, Lizhen Qu, and Zenglin Xu. Fedpetuning: When federated learning meets the parameter-efficient tuning methods of pre-trained language models. In Findings of the Association for Computational Linguistics: ACL 2023, page 9963-9977. Association for Computational Linguistics (ACL), 2023. Annual Meeting of the Association of Computational Linguistics 2023, ACL 2023; Conference date: 09-07-2023 Through 14-07-2023. 1" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 316, + 498, + 555, + 543 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 498, + 555, + 543 + ], + "spans": [ + { + "bbox": [ + 316, + 498, + 555, + 543 + ], + "type": "text", + "content": "[46] Zhuangdi Zhu, Junyuan Hong, and Jiayu Zhou. Data-free knowledge distillation for heterogeneous federated learning. In Proceedings of the 38th International Conference on Machine Learning, pages 12878-12889. PMLR, 2021. 6" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 316, + 544, + 555, + 589 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 544, + 555, + 589 + ], + "spans": [ + { + "bbox": [ + 316, + 544, + 555, + 589 + ], + "type": "text", + "content": "[47] Zhuangdi Zhu, Junyuan Hong, and Jiayu Zhou. Data-free knowledge distillation for heterogeneous federated learning. In Proceedings of the 38th International Conference on Machine Learning, pages 12878-12889. PMLR, 2021. 2" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 316, + 590, + 555, + 633 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 590, + 555, + 633 + ], + "spans": [ + { + "bbox": [ + 316, + 590, + 555, + 633 + ], + "type": "text", + "content": "[48] Huiping Zhuang, Zhiping Lin, and Kar-Ann Toh. Blockwise recursive Moore-Penrose inverse for network learning. IEEE Transactions on Systems, Man, and Cybernetics: Systems, pages 1-14, 2021. 
2, 3, 6" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 316, + 634, + 555, + 667 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 634, + 555, + 667 + ], + "spans": [ + { + "bbox": [ + 316, + 634, + 555, + 667 + ], + "type": "text", + "content": "[49] Huiping Zhuang, Zhiping Lin, and Kar-Ann Toh. Correlation projection for analytic learning of a classification network. Neural Processing Letters, pages 1–22, 2021. 8" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 316, + 669, + 555, + 714 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 669, + 555, + 714 + ], + "spans": [ + { + "bbox": [ + 316, + 669, + 555, + 714 + ], + "type": "text", + "content": "[50] Huiping Zhuang, Zhenyu Weng, Hongxin Wei, Renchunzi Xie, Kar-Ann Toh, and Zhiping Lin. ACIL: Analytic class incremental learning with absolute memorization and privacy protection. In Advances in Neural Information Processing" + } + ] + } + ], + "index": 27 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "text", + "content": "4997" + } + ] + } + ], + "index": 29 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 296, + 329 + ], + "type": "list", + "angle": 0, + "index": 5, + "blocks": [ + { + "bbox": [ + 76, + 72, + 296, + 94 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 76, + 72, + 296, + 94 + ], + "spans": [ + { + "bbox": [ + 76, + 72, + 296, + 94 + ], + "type": "text", + "content": "Systems, pages 11602-11614. Curran Associates, Inc., 2022. 3, 4" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 96, + 296, + 151 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 96, + 296, + 151 + ], + "spans": [ + { + "bbox": [ + 56, + 96, + 296, + 151 + ], + "type": "text", + "content": "[51] Huiping Zhuang, Zhenyu Weng, Run He, Zhiping Lin, and Ziqian Zeng. GKEAL: Gaussian kernel embedded analytic learning for few-shot class incremental task. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7746-7755, 2023. 3" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 152, + 296, + 206 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 152, + 296, + 206 + ], + "spans": [ + { + "bbox": [ + 56, + 152, + 296, + 206 + ], + "type": "text", + "content": "[52] Huiping Zhuang, Yizhu Chen, Di Fang, Run He, Kai Tong, Hongxin Wei, Ziqian Zeng, and Cen Chen. GACL: Exemplar-free generalized analytic continual learning. In Advances in Neural Information Processing Systems. Curran Associates, Inc., 2024. 3, 6" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 56, + 208, + 296, + 262 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 208, + 296, + 262 + ], + "spans": [ + { + "bbox": [ + 56, + 208, + 296, + 262 + ], + "type": "text", + "content": "[53] Huiping Zhuang, Run He, Kai Tong, Ziqian Zeng, Cen Chen, and Zhiping Lin. DS-AL: A dual-stream analytic learning for exemplar-free class-incremental learning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(15):17237-17244, 2024. 
3" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 56, + 263, + 296, + 329 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 263, + 296, + 329 + ], + "spans": [ + { + "bbox": [ + 56, + 263, + 296, + 329 + ], + "type": "text", + "content": "[54] Huiping Zhuang, Yuchen Liu, Run He, Kai Tong, Ziqian Zeng, Cen Chen, Yi Wang, and Lap-Pui Chau. F-OAL: Forward-only online analytic learning with fast training and low memory footprint in class incremental learning. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. 6" + } + ] + } + ], + "index": 4 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 295, + 749, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 749, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 749, + 315, + 757 + ], + "type": "text", + "content": "4998" + } + ] + } + ], + "index": 6 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 10 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2025/AG-VPReID_ A Challenging Large-Scale Benchmark for Aerial-Ground Video-based Person Re-Identification/4cdeaea8-7584-41e0-a7e3-68147957d6c6_content_list.json b/2025/AG-VPReID_ A Challenging Large-Scale Benchmark for Aerial-Ground Video-based Person Re-Identification/4cdeaea8-7584-41e0-a7e3-68147957d6c6_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..72f9e63f60ef1e1c468795c416e249bf85cd70da --- /dev/null +++ b/2025/AG-VPReID_ A Challenging Large-Scale Benchmark for Aerial-Ground Video-based Person Re-Identification/4cdeaea8-7584-41e0-a7e3-68147957d6c6_content_list.json @@ -0,0 +1,1629 @@ +[ + { + "type": "text", + "text": "AG-VPReID: A Challenging Large-Scale Benchmark for Aerial-Ground Video-based Person Re-Identification", + "text_level": 1, + "bbox": [ + 214, + 130, + 782, + 174 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Huy Nguyen1, Kien Nguyen1, Akila Pemasiri1, Feng Liu2, Sridha Sridharan1, Clinton Fookes1", + "bbox": [ + 235, + 202, + 761, + 237 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "$^{1}$ School of Electrical Engineering and Robotics, Queensland University of Technology $^{2}$ Department of Computer Science, Drexel University", + "bbox": [ + 153, + 238, + 841, + 273 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "$^{1}\\{t497.nguyen, k.nguyenthanh, a.thondilege, s.sridharan, c.fookes\\}@qut.edu.au,$ $^{2}f1397@drexel.edu$", + "bbox": [ + 153, + 275, + 839, + 309 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 246, + 344, + 326, + 359 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "We introduce AG-VPReLU, a new large-scale dataset for aerial-ground video-based person re-identification (ReID) that comprises 6,632 subjects, 32,321 tracklets and over 9.6 million frames captured by drones (altitudes ranging from 15–120m), CCTV, and wearable cameras. This dataset offers a real-world benchmark for evaluating the robustness to significant viewpoint changes, scale variations, and resolution differences in cross-platform aerial-ground settings. 
In addition, to address these challenges, we propose AG-VPReID-Net, an end-to-end framework composed of three complementary streams: (1) an Adapted Temporal-Spatial Stream addressing motion pattern inconsistencies and facilitating temporal feature learning, (2) a Normalized Appearance Stream leveraging physics-informed techniques to tackle resolution and appearance changes, and (3) a Multi-Scale Attention Stream handling scale variations across drone altitudes. We integrate visual-semantic cues from all streams to form a robust, viewpoint-invariant whole-body representation. Extensive experiments demonstrate that AG-VPReID-Net outperforms state-of-the-art approaches on both our new dataset and existing video-based ReID benchmarks, showcasing its effectiveness and generalizability. Nevertheless, the performance gap observed on AG-VPReID across all methods underscores the dataset's challenging nature. The dataset, code and trained models are available at AG-VPReID-Net.", + "bbox": [ + 88, + 376, + 485, + 767 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1. Introduction", + "text_level": 1, + "bbox": [ + 91, + 799, + 220, + 814 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Video-based person re-identification (ReID) is a challenging and in-demand task, with significant real-world applications in surveillance, search and rescue operations, and urban monitoring [25, 35, 37]. While traditional ReID methods focus on ground-based cameras [7, 50], the inte", + "bbox": [ + 89, + 824, + 483, + 902 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "gration of aerial perspectives through aerial-ground person ReID presents a paradigm shift in this field [32]. This approach enables the identification and matching of individuals across non-overlapping aerial and ground-based camera views, substantially enhancing situational awareness and response times in complex environments [31, 46]. The motivation behind this research stems from the increasing deployment of aerial platforms, such as unmanned aerial vehicles (UAVs), which provide unique vantage points that complement ground-based observations. However, the development of robust aerial-ground ReID systems faces a significant challenge: the scarcity of diverse and large-scale datasets that capture the nuances of both aerial and ground perspectives. As demonstrated by ImageNet [4], large and diverse benchmarks are crucial for deep learning based methods, indicating a need for a comprehensive ReID dataset integrating multiple platforms, environments, and real-world challenges.", + "bbox": [ + 511, + 345, + 906, + 618 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Initial efforts in aerial-ground person ReID have focused primarily on image-based tasks. For instance, Nguyen et al. [31] pioneer this area by releasing the first aerial-ground ReID dataset, which includes images from one drone and one CCTV camera capturing 21,983 images of 388 identities. They later expand the dataset to 100,502 images of 1,615 individuals [32]. Recently, Zhang et al. [46] collect a synthetic dataset named CARGO, containing 108,563 images representing 5,000 subjects, to complement real-world datasets. Within video-based tasks, Zhang et al. [47] collect a video-based dataset called G2A-VReID, which consists of 185,907 images and 5,576 tracklets from one drone and one CCTV camera, featuring 2,788 identities. 
Despite these advancements, G2A-VReID remains limited in scale compared to ground-based datasets like MARS [51], which includes 20,000 tracklets and 1.19 million frames from six cameras. While current aerial-ground datasets are valuable, increasing identity variation and environmental diversity would", + "bbox": [ + 511, + 628, + 910, + 902 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "CVF", + "bbox": [ + 106, + 2, + 181, + 42 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore.", + "bbox": [ + 236, + 0, + 810, + 46 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "1241", + "bbox": [ + 483, + 944, + 513, + 957 + ], + "page_idx": 0 + }, + { + "type": "table", + "img_path": "images/b623f5961d8cb0edf5422fc8ba8d42cdcaefa3eb6f894db4c2c684d0053a4a40.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
<table><tr><td rowspan="2">Dataset</td><td rowspan="2">Year</td><td rowspan="2">#Identities</td><td rowspan="2">#Tracklets</td><td rowspan="2">#Frames (M)</td><td rowspan="2">#CV</td><td rowspan="2">CC</td><td rowspan="2">Att.</td><td colspan="3">Platform</td><td rowspan="2">Dur.</td><td rowspan="2">Altitude (m)</td></tr>
<tr><td>Ground</td><td>Wearable</td><td>Aerial</td></tr>
<tr><td>MARS [51]</td><td>2016</td><td>1,261</td><td>20,478</td><td>1.19</td><td>6</td><td>X</td><td>X</td><td>✓</td><td>X</td><td>X</td><td>-</td><td>-</td></tr>
<tr><td>LSVID [18]</td><td>2019</td><td>3,772</td><td>14,943</td><td>2.98</td><td>15</td><td>X</td><td>X</td><td>✓</td><td>X</td><td>X</td><td>4</td><td>-</td></tr>
<tr><td>VCCR [10]</td><td>2022</td><td>392</td><td>4,384</td><td>0.15</td><td>1</td><td>✓</td><td>X</td><td>✓</td><td>X</td><td>X</td><td>90</td><td>-</td></tr>
<tr><td>CCVID [8]</td><td>2022</td><td>226</td><td>2,856</td><td>0.34</td><td>1</td><td>✓</td><td>X</td><td>✓</td><td>X</td><td>X</td><td>-</td><td>-</td></tr>
<tr><td>MEVID [3]</td><td>2023</td><td>158</td><td>8,092</td><td>10.46</td><td>33</td><td>✓</td><td>X</td><td>✓</td><td>X</td><td>X</td><td>73</td><td>-</td></tr>
<tr><td>PDestre [17]</td><td>2020</td><td>253</td><td>1,894</td><td>0.10</td><td>1</td><td>✓</td><td>✓</td><td>X</td><td>X</td><td>✓</td><td>-</td><td>5-6</td></tr>
<tr><td>G2A-VReID [47]</td><td>2024</td><td>2,788</td><td>5,576</td><td>0.18</td><td>2</td><td>X</td><td>X</td><td>✓</td><td>X</td><td>✓</td><td>-</td><td>20 - 60</td></tr>
<tr><td>AG-VPReID</td><td>2024</td><td>6,632</td><td>32,321</td><td>9.6</td><td>6</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>20</td><td>15 - 120</td></tr></table>
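Two back-of-envelope figures from the AG-VPReID row put its scale in perspective: roughly 9.6 million frames across 32,321 tracklets works out to about 300 frames per tracklet, and 32,321 tracklets over 6,632 identities is about 4.9 tracklets per person. A one-line sanity check of those numbers:

```python
frames, tracklets, identities = 9.6e6, 32_321, 6_632
print(round(frames / tracklets))         # ~297 frames per tracklet
print(round(tracklets / identities, 1))  # ~4.9 tracklets per identity
```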
", + "bbox": [ + 119, + 88, + 874, + 214 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Table 1. Comparison of AG-VPReID with existing video-based person ReID datasets. Above: ground-based datasets, Below: aerial-based datasets. CV: Camera Views, CC: Clothes-Change, Att.: Attributes (Soft-biometrics annotations), Dur.: Duration (days).", + "bbox": [ + 89, + 218, + 903, + 244 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "improve model robustness for real-world applications.", + "bbox": [ + 89, + 271, + 447, + 286 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In light of this, we introduce AG-VPReID, a comprehensive large-scale benchmark dataset for Aerial-Ground Video-based Person ReID. AG-VPReID comprises 6,632 subjects, 32,321 tracklets, and over 9.6 million frames, captured across multiple dates and times of day using a combination of three platforms: aerial drones operating at various altitudes (15-120m), stationary CCTV cameras and wearable mobile cameras. This dataset significantly surpasses existing video-based ReID benchmarks in terms of scale, diversity, and real-world applicability with the highest number of identities, the highest number of tracklets, the highest drone flying altitudes, and the most diverse platforms. The key characteristics of AG-VPReID include: drastic view changes between aerial and ground perspectives; a large number of annotated identities across multiple sessions; rich outdoor scenarios with varying environmental conditions; significant differences in resolution between aerial and ground footage; and both controlled scenarios with clothing changes and in-the-wild pedestrian traffic.", + "bbox": [ + 88, + 287, + 482, + 575 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Aerial-ground person ReID presents unique challenges due to significant appearance variations between aerial and ground-level views. These variations include extreme viewpoint differences, drastic changes in resolution and scale, partial occlusions, and temporal discontinuities caused by high-flying altitudes and long-range captures. Traditional video-based person ReID methods, although effective in ground-based settings [26, 45], often struggle in aerial-ground scenarios due to the complex combination of inconsistent motion patterns and the aforementioned variations.", + "bbox": [ + 89, + 580, + 482, + 731 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "To address these challenges, we introduce AG-VPReID-Net, an end-to-end framework for Aerial-Ground Video-based Person Re-Identification. Unlike existing state-of-the-art methods focused on single-view or ground scenarios, AG-VPReID-Net features three complementary streams tailored for aerial-ground challenges: i) An Adapted Temporal-Spatial Stream enhances traditional temporal modeling by integrating identity-specific memory and temporal shape analysis. This improves the extraction of consistent motion patterns and body shape representations, addressing the temporal discontinuity and motion", + "bbox": [ + 89, + 734, + 482, + 901 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "inconsistencies of aerial footage, outperforming standard LSTM [6, 14] or 3D CNN [20] approaches; ii) A Normalized Appearance Stream addresses the resolution and appearance differences between aerial and ground views by using UV maps aggregation across frames for a normalized appearance representation. 
This provides robustness against pose changes, viewpoint shifts, and varying image quality, excelling where current appearance-based methods [19, 29] falter; and iii) A Multi-Scale Attention Stream addresses scale variations inherent in aerial-ground data by incorporating multi-scale feature extraction, motion analysis, temporal context, and a transformer decoder, effectively improving identification across drone altitudes compared to single-scale [13, 43] methods. By integrating these streams, AG-VPReID-Net offers incremental improvements in aerial-ground video-based re-identification, highlighting its potential in addressing this challenging scenario.", + "bbox": [ + 89, + 734, + 482, + 901 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In summary, our main contributions are as follows:", + "bbox": [ + 532, + 542, + 870, + 556 + ], + "page_idx": 1 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "(1) We introduce AG-VPReID, a challenging large-scale benchmark for aerial-ground video-based person ReID, bridging the gap with a diverse dataset that captures nuanced challenges from both aerial and ground perspectives.", + "(2) We propose AG-VPReID-Net, an innovative end-to-end framework that integrates adapted temporal-spatial processing, normalized appearance representation, and multiscale attention mechanisms to effectively address the challenges of aerial-ground ReID.", + "(3) AG-VPReID-Net sets new state-of-the-art performance on the AG-VPReID and existing video-based ReID benchmarks, demonstrating our approach's effectiveness and generalizability across different settings." + ], + "bbox": [ + 511, + 558, + 903, + 753 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2. Prior Work", + "text_level": 1, + "bbox": [ + 511, + 768, + 635, + 784 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Video-based Person ReID Datasets. Existing person ReID datasets are numerous but severely lack the ability to address real-world challenges, particularly in aerial-ground scenarios. Ground-based datasets like MARS [51] and LSVID [18] provide large-scale benchmarks but focus mainly on ground perspectives, highlighting the need for multi-platform surveillance datasets. The inclusion of clothing", + "bbox": [ + 511, + 794, + 906, + 902 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "1242", + "bbox": [ + 483, + 944, + 514, + 955 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/4be2113499ae6adece19cf4a8306ac03f748d2eb26b2f3e4a6eabd1218630242.jpg", + "image_caption": [ + "Figure 1. Our AG-VPReID dataset was captured using a variety of six cameras, including aerial drones, CCTVs, and GoPros. Sample images and camera locations are illustrated on the right side of the figure. The left side depicts the cross-camera appearance variations of two pedestrians, showcasing differences across various sessions and times of the day." + ], + "image_footnote": [], + "bbox": [ + 133, + 89, + 867, + 422 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "changes in datasets such as MEVID [3], CCVID [8], and VCCR [10] represents progress in addressing real-world challenges, though they could benefit from more identities and diverse environments. The BRIAR dataset [2], while featuring 1,000 subjects and UAV footage, primarily targets face recognition with restricted access. 
P-Destre [17] pioneered aerial view exploration, though its use of a single drone at lower altitudes (5-6m) creates opportunities for datasets covering higher operational altitudes more common in surveillance applications. The G2A-VReID dataset by Zhang et al. [47] represents an important step in combining aerial and ground views. While innovative, it contains 2,788 identities within a $20 - 60\mathrm{m}$ altitude range using 2 cameras, suggesting opportunities for future datasets to expand in scale, altitude diversity, camera count, and environmental variety. Tab. 1 compares our AG-VPReID dataset with others across multiple dimensions.", + "bbox": [ + 88, + 500, + 485, + 758 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Video-based Person ReID. Video-based person ReID methods have evolved to leverage both spatial and temporal cues. Early approaches used recurrent neural networks and 3D convolutional networks [20, 30], while later works incorporated temporal pooling [51] and attention mechanisms [25]. Recent advancements include temporal complementary learning [13], Transformer-based architectures [11, 28], and techniques addressing cross-platform and cloth-changing scenarios [10, 39, 47]. The", + "bbox": [ + 89, + 763, + 483, + 901 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "AG-ReID 2023 Challenge [33] highlighted aerial-ground ReID challenges, with winners employing re-ranking, data augmentation, and centroid-based representations. Recent works include Instruct-ReID [12] with instruction-guided retrieval, Domain Shifting [15] for distribution adaptation, and SEAS [53] using 3D body shape guidance. Despite these developments, most existing methods employ uni-modal frameworks trained on predefined label sets. In contrast, recent work has proposed a visual-language multi-modal learning paradigm [45], potentially offering more flexibility and robustness in feature representation for video-based person ReID.", + "bbox": [ + 511, + 500, + 908, + 681 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3. AG-VPReID Dataset", + "text_level": 1, + "bbox": [ + 511, + 694, + 712, + 709 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "This section offers a detailed overview of the creation process for our AG-VPReID dataset. We describe the methods used for collecting video footage in Sec. 3.1. Sec. 3.2 introduces our annotation procedures. Sec. 3.3 compares AG-VPReID with existing datasets, highlighting its unique features.", + "bbox": [ + 511, + 719, + 908, + 808 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.1. Dataset Collection", + "text_level": 1, + "bbox": [ + 511, + 818, + 691, + 832 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "The AG-VPReID dataset was collected over a period of 20 days, including 10 morning sessions (8:30am-10:00am) and 10 afternoon sessions (3:30pm-5:00pm), with each session lasting 60 minutes. Data capture involved two drones, two", + "bbox": [ + 511, + 839, + 908, + 902 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "1243", + "bbox": [ + 483, + 944, + 514, + 955 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "CCTV cameras, and two wearable cameras. Each drone operated at four different altitudes—15m, 30m, 80m, and 120m—for 15 minutes per session, providing a comprehensive range of aerial views. In total, the dataset comprises 240 hours of video footage, documented in Tab. 2, which includes detailed specifications of the equipment used. 
The dataset features diverse resolutions, frame rates, and perspectives, extending from ground level to 120-meter aerial views. Fig. 1 shows example frames across different viewpoints and image qualities from different platforms. The drones, CCTV, and wearable devices are positioned to view individuals from different angles, as illustrated in Fig. 1, forcing person ReID models to learn robust multiview and partial-view representations to be effective.", + "bbox": [ + 89, + 90, + 483, + 303 + ], + "page_idx": 3 + }, + { + "type": "table", + "img_path": "images/3f7a5aca82afd1bc55ba9a25658b3e53a4b488eb88b37ba28e972a6d5c8c92b7.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
Type | Model | Resolution | Lens | FPS
CCTV | Bosch (Outdoor) | 704 × 480 | 24mm | 15
CCTV | Bosch (Indoor) | 1280 × 720 | 18mm | 25
Wearable | GoPro10 (Front) | 3840 × 2160 | 16mm | 30
Wearable | GoPro10 (Side) | 1920 × 1080 | 16mm | 60
Drones | DJI Inspire 2 | 3840 × 2160 | 24mm | 25
Drones | DJI M300 RTK | 8192 × 5460 | 35mm | 1
", + "bbox": [ + 156, + 316, + 415, + 416 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Table 2. Equipment specifications for the AG-VPReID dataset.", + "bbox": [ + 116, + 419, + 455, + 431 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "To ensure professional drone operations, a specialized team (one RPAS engineer, one Chief Remote Pilot, one RPAS technician) managed all 20 data collection days, handling flights, capturing aerial footage, and performing initial data preprocessing.", + "bbox": [ + 89, + 450, + 483, + 526 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.2. Labeling Process", + "text_level": 1, + "bbox": [ + 89, + 537, + 256, + 553 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "The AG-VPReID dataset uses YOLOv8x for person detection and tracking [16], extracting images from all frames across multiple cameras. It includes both short-term and long-term ReID scenarios, the latter including instances where participants change clothes to test long-term identity persistence. Identity matching was performed by a team of expert annotators, supported by research assistants, to ensure both accuracy and consistency in the matching process. Following [32], we manually annotated each identity with 15 selective soft-biometrics attributes to enhance the dataset's utility for attribute-based person ReID applications. For a detailed list of these attributes, refer to 8.1.", + "bbox": [ + 89, + 560, + 483, + 741 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.3. Dataset Characteristics", + "text_level": 1, + "bbox": [ + 89, + 752, + 307, + 767 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Compared with existing video-based ReID datasets, our AG-VPReID dataset has five unique characteristics:", + "bbox": [ + 89, + 775, + 482, + 805 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "1) The highest number of identities and tracklets. AGVPReID sets a new benchmark with 6,632 unique identities and 32,321 tracklets, substantially surpassing existing datasets. It holds nearly twice as many identities as prominent ground-based datasets such as LSVID [18], which contains 3,772 identities, and more than six times", + "bbox": [ + 89, + 810, + 483, + 898 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "the tracklets of the next largest aerial-ground dataset, G2AVReID [47], which includes 5,576 tracklets. Additionally, AG-VPReID encompasses over 9.6 million frames, providing 50 times the volume of frames compared to other aerial-ground datasets. While MEVID [3] exceeds in frame count with 10.46 million, it does not match AG-VPReID in terms of identity and tracklet numbers.", + "bbox": [ + 511, + 90, + 903, + 196 + ], + "page_idx": 3 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "2) The most diverse platforms. Our AG-VPReLUID dataset is the first to incorporate aerial, ground, and wearable platforms for video-based person ReID. The inclusion of wearable cameras provides a novel dimension with high-quality first-person perspectives. This combination results in extreme variations in resolution and subject size across platforms: UAV (18×37 to 293×542 pixels), CCTV (22×23 to 172×413 pixels), and GoPro (25×48 to 483×1078 pixels).", + "3) The highest flying altitudes. AG-VPReID features footage from altitudes reaching up to 120 meters, exceeding existing datasets' 60-meter maximum [47]. 
This introduces challenges: 1) extreme viewpoints with perspective distortions; 2) multiple scales with varying resolutions; and 3) image quality issues (Fig. 2). We used two drones—one with a wide-angle camera for area monitoring and another with a narrow-angle camera for detailed observation.", + "4) Rich outdoor scenarios with real-world challenges. Our AG-VPReID dataset presents diverse outdoor campus scenarios with real-world challenges including complex occlusions, varied poses from different activities, and uniform-wearing individuals. Fig. 2 shows examples of these diverse scenarios.", + "5) Other notable characteristics. AG-VPReID includes comprehensive attributes for each identity (gender, age, clothing style, accessories), enabling fine-grained analysis. The dataset features long-term tracking data of 14 diverse individuals recorded across multiple days, each wearing different clothing per session to capture real-world variations. We also provide camera calibration information and GPS coordinates to support multi-camera tracking research. See supplementary material Sec. 8 for details." + ], + "bbox": [ + 511, + 200, + 906, + 684 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.4. Ethics and Privacy", + "text_level": 1, + "bbox": [ + 513, + 696, + 694, + 713 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "This research received ethics approval for data collection and usage. We implement \"Deface\" [5] to blur faces and ensure secure data storage, and we obtained informed consent from all participants. Details are available at our project repository.", + "bbox": [ + 511, + 720, + 903, + 782 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "4. AG-VPReID-Net", + "text_level": 1, + "bbox": [ + 511, + 797, + 679, + 814 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "We propose AG-VPReID-Net, a purpose-built framework addressing aerial-ground ReID's unique challenges. In particular, we propose an Adapted Temporal-Spatial Stream for robust temporal-spatial representations to deal with the temporal discontinuity challenge caused by drone motion", + "bbox": [ + 511, + 825, + 903, + 900 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "1244", + "bbox": [ + 483, + 944, + 514, + 955 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/f5041b8ee0d43fbe3c67dae2d976f3ba9842453939e1d8967e949f57549b5c4a.jpg", + "image_caption": [ + "Aerial Camera" + ], + "image_footnote": [], + "bbox": [ + 143, + 90, + 460, + 276 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/84bc2bab5ce1bb227c0c9a5047b2f2878eb081be68533d2c7c1ab3b44ed85f14.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 143, + 286, + 282, + 351 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/3f33b18445e2169b6dabdd3509ce0cd75de375a821bdd874e2b3a78cde0291b6.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 161, + 353, + 277, + 402 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/9c10adb7d47f8c23d47fbb8ad49cdb776c6fe02c6359b6e37345ed9339d91b6b.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 316, + 289, + 439, + 349 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/6a94c1e0b5aea0e6901cab8b1be05a0a065928e010a123d3a8fd8b5158dc5c3a.jpg", + "image_caption": [ + "(f) Clothing similarity", + "Figure 2. 
The AG-VPReID dataset presents several key challenges: extreme viewpoints, varying resolutions and subject sizes, pose/illumination variations, occlusions, and similar clothing among subjects." + ], + "image_footnote": [], + "bbox": [ + 318, + 349, + 439, + 429 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "from unstable tracking between frames. We propose a Normalized Appearance Stream for resolution and appearance changes to deal with extreme viewpoint shifts. To deal with altitude-driven scale variance, we introduce a Multi-Scale Attention Stream for scale variations. Fig. 3 illustrates our architecture. Detailed stream contributions are provided in Tab. 9 of the supplementary material.", + "bbox": [ + 89, + 550, + 483, + 656 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4.1. Stream 1: Adapted Temporal-Spatial Stream", + "text_level": 1, + "bbox": [ + 89, + 666, + 473, + 683 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "When performing video-based person ReID, a key challenge is handling inconsistent motion patterns and temporal gaps between video frames. To address this, we propose an Adapted Temporal-Spatial stream that combines CLIP's visual encoder with temporal and 3D shape modeling to create a comprehensive representation of individuals. Our method operates on a sequence $\mathcal{V}$ of $T$ frames through the following components:", + "bbox": [ + 89, + 688, + 483, + 809 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Visual Feature Extraction: Using CLIP's visual encoder $E_{v}(\cdot)$ , we extract frame-level features,", + "bbox": [ + 89, + 813, + 483, + 844 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\nF _ {t} = E _ {v} \left(\mathcal {I} _ {t}\right), \quad t \in \{1, \dots , T \}, \tag {1}\n$$\n", + "text_format": "latex", + "bbox": [ + 181, + 857, + 482, + 873 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $\mathcal{I}_t$ and $F_{t}$ are the $t$ -th frame and its features.", + "bbox": [ + 89, + 885, + 426, + 900 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Temporal Processing: We incorporate temporal modeling through two key components:", + "bbox": [ + 511, + 90, + 903, + 121 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "1) Temporal 3D Shape Modeling (TSM): Following [34], we extract 3D shape representations,", + "bbox": [ + 511, + 121, + 903, + 151 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\ng _ {t} = \operatorname {G R U} \left(F _ {t}, g _ {t - 1}\right), \beta_ {t} = 3 \mathrm {D} \operatorname {R e g r e s s o r} \left(g _ {t}\right), \tag {2}\n$$\n", + "text_format": "latex", + "bbox": [ + 555, + 162, + 903, + 180 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $g_{t}$ captures temporal dynamics and $\beta_{t}$ represents SMPL model parameters.", + "bbox": [ + 511, + 191, + 903, + 222 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "2) Temporal Feature Enhancement (TFE): Adapting from [45], we enhance features by combining appearance and shape,", + "bbox": [ + 511, + 222, + 903, + 267 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\nF _ {\text {e n h a n c e d}} = \operatorname {T F E} \left(F _ {1: T}, \beta_ {1: T}\right). 
\\tag {3}\n$$\n", + "text_format": "latex", + "bbox": [ + 604, + 280, + 903, + 297 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Identity-Aware Processing: We incorporate identity information through,", + "bbox": [ + 511, + 306, + 903, + 338 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal {M} _ {y _ {i}} = \\frac {1}{N _ {y _ {i}}} \\sum_ {V \\in \\mathcal {V} _ {y _ {i}}} \\operatorname {T A P} \\left(F _ {\\text {e n h a n c e d}}\\right),\n$$\n", + "text_format": "latex", + "bbox": [ + 601, + 348, + 849, + 387 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal {M} _ {y _ {i}} ^ {\\text {r e f i n e d}} = \\operatorname {S S P} \\left(F _ {\\text {e n h a n c e d}}, \\mathcal {M} _ {y _ {i}}\\right),\n$$\n", + "text_format": "latex", + "bbox": [ + 568, + 390, + 812, + 409 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\nR _ {s t r e a m 1} = \\left[ F _ {e n h a n c e d}; \\mathcal {M} _ {y _ {i}} ^ {r e f i n e d} \\right], \\tag {4}\n$$\n", + "text_format": "latex", + "bbox": [ + 573, + 411, + 903, + 430 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $\\mathcal{M}_{y_i}$ is the identity memory bank constructed through Temporal Average Pooling (TAP), and the Sequence-Specific Prompt (SSP) module refines this representation for the final output $R_{stream1}$ .", + "bbox": [ + 511, + 441, + 905, + 502 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4.2. Stream 2: Normalized Appearance Stream", + "text_level": 1, + "bbox": [ + 511, + 511, + 877, + 527 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "The Adapted Temporal-Spatial (ATS) stream provides robust temporal-spatial representation, but may not fully capture fine-grained appearance details across viewpoints, especially in aerial footage. To address this limitation, we propose a Normalized Appearance (NA) stream that effectively aggregates appearance details from multiple viewpoints.", + "bbox": [ + 511, + 532, + 905, + 623 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "The NA stream normalizes and combines appearance information across frames using UVTexture maps and visibility masks. Our process involves: (1) Extracting UVTexture maps and visibility masks per frame, (2) Normalizing UVTexture maps brightness, (3) Aligning maps across frames, (4) Weighted aggregation using visibility masks, and (5) Generating the final normalized representation. The brightness normalization and weighted aggregation of UVTexture maps can be formulated as,", + "bbox": [ + 511, + 625, + 905, + 758 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\nT _ {i} ^ {\\text {n o r m}} = \\gamma (H (N (T _ {i}))), \\tag {5}\n$$\n", + "text_format": "latex", + "bbox": [ + 620, + 771, + 903, + 787 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\nT _ {\\text {a g g r e g a t e d}} = \\frac {\\sum_ {i = 1} ^ {N} V _ {i} \\odot T _ {i} ^ {\\text {n o r m}}}{\\sum_ {i = 1} ^ {N} V _ {i}}, \\tag {6}\n$$\n", + "text_format": "latex", + "bbox": [ + 591, + 791, + 903, + 830 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $T_{i}^{norm}$ is the normalized UVTexture map for frame $i$ , $N(\\cdot), H(\\cdot)$ , and $\\gamma(\\cdot)$ are normalization, histogram matching and gamma correction functions respectively. $T_{aggregated}$ is the final aggregated map. 
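A minimal, self-contained sketch of the normalization and aggregation in Eqs. (5)-(6) is given below for concreteness. It is an illustration under stated assumptions rather than the authors' released implementation: min-max rescaling stands in for $N(\cdot)$, the first frame serves as the histogram-matching reference for $H(\cdot)$, and the gamma value is arbitrary.

```python
# Sketch of Eqs. (5)-(6): per-frame UVTexture normalization, then
# visibility-weighted aggregation across frames. Shapes, the reference
# frame, and gamma are illustrative assumptions.
import numpy as np
from skimage.exposure import match_histograms  # used here as H(.)

def normalize_texture(T_i, T_ref, gamma=0.8):
    """Eq. (5): T_i_norm = gamma(H(N(T_i)))."""
    x = (T_i - T_i.min()) / (T_i.max() - T_i.min() + 1e-8)  # N(.): rescale to [0, 1]
    x = match_histograms(x, T_ref, channel_axis=-1)         # H(.): histogram matching
    return np.clip(x, 0.0, 1.0) ** gamma                    # gamma(.): gamma correction

def aggregate_textures(textures, masks):
    """Eq. (6): visibility-weighted average of normalized UVTexture maps.

    textures: (N, H, W, 3) normalized UV maps; masks: (N, H, W, 1) visibility V_i.
    """
    num = (masks * textures).sum(axis=0)   # sum_i V_i ⊙ T_i^norm
    den = masks.sum(axis=0) + 1e-8         # sum_i V_i (eps guards never-visible texels)
    return num / den

# Toy usage: 8 frames of a 256x256 UV atlas.
T = np.random.rand(8, 256, 256, 3)
V = (np.random.rand(8, 256, 256, 1) > 0.3).astype(np.float32)
T_agg = aggregate_textures(np.stack([normalize_texture(t, T[0]) for t in T]), V)
```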
We leverage PhysPT [49]", + "bbox": [ + 511, + 839, + 905, + 901 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "1245", + "bbox": [ + 483, + 944, + 514, + 955 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/d1697fc18f796b09bfbaba2feb45ad8e162dff3d0f8bf44b83643ec2512766b0.jpg", + "image_caption": [ + "Figure 3. The three-stream AG-VPReID-Net architecture addresses aerial-ground ReID challenges: Temporal-Spatial stream for motion modeling and temporal features, Normalized Appearance for resolution/appearance variations, and Multi-Scale Attention for aerial-ground scale variations." + ], + "image_footnote": [], + "bbox": [ + 93, + 88, + 903, + 329 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "for pose estimation and Texformer [41] to generate UV maps from PhysPT's output 3D meshes. The maps are improved through inter-frame consistency before feeding into the DGC Omni-scale Module [52].", + "bbox": [ + 89, + 407, + 483, + 469 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.3. Stream 3: Multi-Scale Attention Stream", + "text_level": 1, + "bbox": [ + 89, + 481, + 434, + 496 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "While the ATS stream provides a robust temporal-spatial representation and the NA stream addresses viewpoint and occlusion challenges, aerial-ground person ReID still faces significant hurdles due to extreme scale variations between drone and ground-level footage. The first two streams effectively capture temporal dynamics, 3D shape information, and viewpoint-invariant appearance details, but they may not fully address the drastic scale differences inherent in aerial-ground scenarios. To complement the ATS stream and NA stream and address this limitation, we propose a Multi-Scale Attention (MSA) stream.", + "bbox": [ + 89, + 505, + 483, + 671 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "In detail, this stream leverages the power of frozen large vision models combined with lightweight, adaptive processing. Specifically, it utilizes a frozen large vision model to extract multi-scale features for video-based person ReID. By combining a lightweight Transformer decoder with a local temporal module, this approach dynamically integrates spatial and temporal information, thereby enhancing our framework's ability to accurately capture essential person-specific details.", + "bbox": [ + 89, + 672, + 483, + 808 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Specifically, for each frame $\mathcal{I}_t$ within the sequence $\mathcal{V}_{y_i}$, the CLIP vision encoder [36] is employed to extract features independently. The process collects tokens from various layers at regular intervals to compile a detailed feature map that captures spatial correspondences. These frame feature maps are subsequently concatenated and assembled", + "bbox": [ + 89, + 810, + 483, + 900 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "into a spatiotemporal feature volume $\mathbf{G}$ . Following the methods [23, 44], we integrate temporal information into this volume before processing it through a Transformer decoder. This decoder globally aggregates features across multiple layers, employing a video-level classification token as a query, with feature volumes from different layers of the backbone serving as keys and values. A linear layer then maps the output of the decoder's final block to produce class predictions. 
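Before the formal equations that follow, a rough PyTorch sketch of this read-out loop may help: a learnable video-level query is refined block by block against temporally modulated feature volumes taken from the last few encoder layers. The `Conv1d` stand-in for the local temporal module, the layer counts, and the dimensions are hypothetical simplifications, not the paper's exact configuration.

```python
# Illustrative sketch of the multi-layer decoder read-out (cf. Eq. (7) below).
import torch
import torch.nn as nn

class MultiScaleDecoder(nn.Module):
    def __init__(self, dim=768, num_blocks=4, num_classes=1000):
        super().__init__()
        self.query = nn.Parameter(torch.zeros(1, 1, dim))  # q_0: learnable query token
        self.temps = nn.ModuleList(nn.Conv1d(dim, dim, 3, padding=1) for _ in range(num_blocks))
        self.attns = nn.ModuleList(nn.MultiheadAttention(dim, 8, batch_first=True)
                                   for _ in range(num_blocks))
        self.mlps = nn.ModuleList(nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, dim))
                                  for _ in range(num_blocks))
        self.fc = nn.Linear(dim, num_classes)              # final linear classifier

    def forward(self, volumes):
        # volumes: list of M tensors, each (B, T*P, dim), from the last M encoder
        # layers (G_{N-M+i} in the paper's notation; P = patch tokens per frame).
        q = self.query.expand(volumes[0].shape[0], -1, -1)
        for temp, attn, mlp, G in zip(self.temps, self.attns, self.mlps, volumes):
            Y = temp(G.transpose(1, 2)).transpose(1, 2)    # Temp_i(.): local temporal mixing
            q = q + attn(q, Y, Y, need_weights=False)[0]   # q~_i = q_{i-1} + MHA_i(q_{i-1}, Y_i, Y_i)
            q = q + mlp(q)                                 # q_i = q~_i + MLP_i(q~_i)
        return self.fc(q.squeeze(1))                       # f_G = FC(q_M)
```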
The operational dynamics of the Transformer decoder are outlined as follows,", + "bbox": [ + 511, + 407, + 906, + 559 + ], + "page_idx": 5 + }, + { + "type": "equation", + "text": "\n$$\nY _ {i} = \operatorname {T e m p} _ {i} ([ \mathbf {G} _ {N - M + i, 1}, \mathbf {G} _ {N - M + i, 2}, \dots , \mathbf {G} _ {N - M + i, T} ]),\n$$\n", + "text_format": "latex", + "bbox": [ + 519, + 603, + 900, + 619 + ], + "page_idx": 5 + }, + { + "type": "equation", + "text": "\n$$\n\tilde {q} _ {i} = q _ {i - 1} + \mathrm {M H A} _ {i} \left(q _ {i - 1}, Y _ {i}, Y _ {i}\right),\n$$\n", + "text_format": "latex", + "bbox": [ + 524, + 622, + 746, + 638 + ], + "page_idx": 5 + }, + { + "type": "equation", + "text": "\n$$\nq _ {i} = \tilde {q} _ {i} + \operatorname {M L P} _ {i} (\tilde {q} _ {i}),\n$$\n", + "text_format": "latex", + "bbox": [ + 524, + 641, + 666, + 657 + ], + "page_idx": 5 + }, + { + "type": "equation", + "text": "\n$$\nf _ {G} = \operatorname {F C} \left(q _ {M}\right), \tag {7}\n$$\n", + "text_format": "latex", + "bbox": [ + 519, + 660, + 903, + 676 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "where $\mathbf{G}_{n,t}$ represents the features of frame $t$ extracted from the $n$ -th layer of the CLIP vision encoder. The feature volume $Y_{i}$ , which undergoes temporal modulation, is input into the $i$ -th layer of the Transformer decoder. The query token $q_{i}$ is incrementally refined, beginning with $q_{0}$ as learnable initial parameters. The final output $f_{G}$ corresponds to the final feature. The spatiotemporal decoder comprises $M$ blocks. $N$ denotes the number of encoder layers. Multi-head attention (MHA) involves query, key, and value, each of which plays a distinct role. The operator $\mathrm{Temp}(\cdot)$ is utilized to model temporal dynamics, which produces feature tokens influenced by detailed temporal information.", + "bbox": [ + 511, + 719, + 906, + 900 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "1246", + "bbox": [ + 483, + 944, + 514, + 955 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "5. Experimental Results", + "text_level": 1, + "bbox": [ + 89, + 89, + 295, + 107 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "5.1. Datasets and Evaluation Metrics", + "text_level": 1, + "bbox": [ + 89, + 114, + 377, + 128 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "We conducted evaluations of our method using the AG-VPReID and four established video-based person ReID datasets: iLIDS [38], MARS [51], LS-VID [18] and G2A-VReID [47]. For AG-VPReID, we used a balanced split of 3,013 identities with both ground and aerial views, dividing them equally for training and testing purposes [31, 32]. Details on the training and testing configurations are provided in Table 3. We evaluate performance using the Cumulative Matching Characteristic (CMC) at Rank-1 and the mean Average Precision (mAP).", + "bbox": [ + 89, + 136, + 483, + 287 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/b0fb1db0fa20b01eaf569770bbbb480bf54e59e31d23ab7bbe6539a7008aff74.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
Case | Subset | #IDs | #Tracklets | #Images (M)
Training | All | 1,555 | 13,300 | 3.85
Testing (A2G) | All | 1,456 | 13,566 | 3.94
Testing (A2G) | 15m | 506 | 4,907 | 1.50
Testing (A2G) | 30m | 377 | 2,885 | 0.89
Testing (A2G) | 80m | 356 | 2,592 | 0.69
Testing (A2G) | 120m | 308 | 3,182 | 0.86
Testing (G2A) | All | 5,075* | 19,021 | 5.79
Testing (G2A) | 15m | 1,403 | 6,362 | 2.14
Testing (G2A) | 30m | 1,406 | 4,468 | 1.41
Testing (G2A) | 80m | 1,162 | 3,866 | 1.13
Testing (G2A) | 120m | 1,195 | 4,325 | 1.11
", + "bbox": [ + 109, + 299, + 460, + 452 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Table 3. Statistics of AG-VPReID dataset. A2G: aerial-to-ground, G2A: ground-to-aerial. *3,619 additional IDs as distractors.", + "bbox": [ + 89, + 455, + 482, + 483 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "5.2. Implementation Details", + "text_level": 1, + "bbox": [ + 89, + 507, + 308, + 523 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Our pipeline leverages UV maps generated by Texformer [41] using 3D human meshes from PhysPT [49] with refined pose estimation. The UV maps are processed through normalization, histogram matching, and gamma correction before weighted blending with visibility masks. The architecture consists of three streams: an Adapted Temporal-Spatial Stream (CLIP ViT-B/16), a Normalized Appearance Stream for 3D coordinates and UV textures, and a Multi-Scale Attention Stream (CLIP ViT-L/14). More implementation details can be found in the supplementary.", + "bbox": [ + 89, + 529, + 483, + 680 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "5.3. Comparison with State-of-the-Art Methods", + "text_level": 1, + "bbox": [ + 89, + 689, + 459, + 705 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "We evaluate our proposed method AG-VPReID-Net against several state-of-the-art approaches across multiple video-based person ReID datasets. Tab. 4 summarizes the results.", + "bbox": [ + 89, + 710, + 482, + 757 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Ground-to-Ground Datasets. Our method achieves superior performance on MARS (91.5% mAP, 93.2% Rank-1), outperforming CLIP-ReID by 3.4% mAP. On LS-VID (87.3% mAP, 93.2% Rank-1), we surpass LSTRL by 4.9% mAP. For iLIDS-VID, we reach 96.3% Rank-1, which is 3.0% higher than MFA.", + "bbox": [ + 89, + 761, + 482, + 851 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Cross-Platform Datasets. On G2A-VReID, we achieve $81.3\\%$ mAP and $73.1\\%$ Rank-1, surpassing MGH by $4.6\\%$ mAP. Note that the G2A-VReID dataset only provides a", + "bbox": [ + 89, + 854, + 483, + 900 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/5aeac7856fbfa5606dcc60f6256ab85735938e0fee7e11f5c68df0f17dc7efc2.jpg", + "image_caption": [ + "Figure 4. Baseline vs our method on AG-VPReLU dataset. Green/red: correct/incorrect labels. First tracklet image shown. Ranks show improvements in bold." + ], + "image_footnote": [], + "bbox": [ + 542, + 89, + 874, + 232 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "ground-to-aerial testing set. For AG-VPReID, we demonstrate strong results in both Ground-to-Aerial (58.0% mAP, 75.6% Rank-1, exceeding CLIP-ReID by 8.8% Rank-1) and aerial-to-ground scenarios (64.0% mAP, 71.9% Rank-1, surpassing CLIP-ReID by 1.7% mAP and 0.3% Rank-1).", + "bbox": [ + 511, + 314, + 903, + 388 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "5.4. Ablation Study", + "text_level": 1, + "bbox": [ + 511, + 398, + 666, + 415 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "We conduct an ablation study on AG-VPReLU to evaluate each stream. St-1 is our temporal modeling stream, St-2 is the appearance normalization stream, and St-3 is the multiscale feature stream. Their combinations (St-12/13/23/123) merge multiple streams.", + "bbox": [ + 511, + 420, + 903, + 496 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Stream Contributions. Tab. 
5 shows that St-1 achieves the strongest individual performance (71.52% A2G, 74.80% G2A Rank-1). St-2 and St-3 show moderate results (58.40% and 61.65% A2G Rank-1). Combined streams demonstrate complementary strengths, with St-123 achieving the best results (71.91% A2G, 75.57% G2A Rank-1) by integrating the three streams.", + "bbox": [ + 511, + 500, + 905, + 604 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Impact of Altitude. Table 6 shows performance decreasing with altitude, most significantly between $30\\mathrm{m} - 80\\mathrm{m}$ . A2G Rank-1 drops $\\sim 11\\%$ across streams. At $120\\mathrm{m}$ , St-1 demonstrates robustness (52.47% vs 38.12%/35.13%), achieving +17.34% improvement through temporal modeling. St-2's physics-informed UV mapping provides +7.57% improvement (42.70% vs. 35.13%), while St-3's multi-scale attention yields +17.72% improvement (52.85% vs. 35.13%). St-123 maintains best performance across all altitudes by combining stream strengths.", + "bbox": [ + 511, + 609, + 905, + 761 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Clothing Changes vs. Camera Angles. Analysis shows altitude increases $(15\\mathrm{m} \\rightarrow 120\\mathrm{m})$ reduce Rank-1 by $27.66\\%$ , significantly more than clothing changes $(7.85\\%$ in ground-to-ground). Without clothing changes, aerial-ground matching $(71.91\\%$ Rank-1) still underperforms ground-to-ground $(91.52\\%)$ due to viewpoint differences. When combining aerial views with clothing changes $(65.83\\%$ Rank-1), these factors create synergistic challenges where viewpoint differences amplify clothing ambi", + "bbox": [ + 511, + 763, + 905, + 900 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "1247", + "bbox": [ + 483, + 944, + 514, + 955 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/aeb331b8e1be85e7a4f27def95caf5574369207ea179c11635636293776667b3.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
Method | MARS | LS-VID | iLIDS-VID | G2A-VReID | AG-VPReID
Ground → Ground | Ground → Aerial | Aerial → Ground | Ground → Aerial
mAP | Rank-1 | mAP | Rank-1 | Rank-1 | Rank-5 | mAP | Rank-1 | mAP | Rank-1 | mAP | Rank-1
STMP[29] | 72.7 | 84.4 | 39.1 | 56.8 | 84.3 | 96.8 | - | - | 50.7 | 60.3 | 45.2 | 55.8
M3D[20] | 74.1 | 84.4 | 40.1 | 57.7 | 74.0 | 94.3 | - | - | 52.4 | 62.6 | 47.9 | 57.3
GLTR[19] | 78.5 | 87.0 | 44.3 | 63.1 | 86.0 | 98.0 | - | - | 55.6 | 65.8 | 50.1 | 60.5
TCLNet[13] | 85.1 | 89.8 | 70.3 | 81.5 | 86.6 | - | 65.4 | 54.7 | 57.2 | 67.9 | 52.7 | 62.4
MGH[43] | 85.8 | 90.0 | 61.8 | 79.6 | 85.6 | 97.1 | 76.7 | 69.9 | 60.3 | 70.8 | 55.5 | 65.2
GRL[25] | 84.8 | 91.0 | - | - | 90.4 | 98.3 | - | - | 58.7 | 68.4 | 53.9 | 63.6
BiCnet-TKS[14] | 86.0 | 90.2 | 75.1 | 84.6 | - | - | 63.4 | 51.7 | 59.8 | 69.2 | 54.3 | 64.7
CTL[24] | 86.7 | 91.4 | - | - | 89.7 | 97.0 | - | - | 56.4 | 66.9 | 51.8 | 61.3
STMN[6] | 84.5 | 90.5 | 69.2 | 82.1 | - | - | 66.7 | 56.1 | 61.6 | 71.5 | 56.9 | 66.2
PSTA[39] | 85.8 | 91.5 | - | - | 91.5 | 98.1 | - | - | 60.5 | 70.2 | 55.8 | 65.7
DIL[11] | 87.0 | 90.8 | - | - | 92.0 | 98.0 | - | - | 61.2 | 70.9 | 56.3 | 66.1
STT[48] | 86.3 | 88.7 | 78.0 | 87.5 | 87.5 | 95.0 | - | - | 61.0 | 70.7 | 56.1 | 65.9
TMT[28] | 85.8 | 91.2 | - | - | 91.3 | 98.6 | - | - | 60.8 | 70.5 | 55.9 | 65.8
CAVIT[40] | 87.2 | 90.8 | 79.2 | 89.2 | 93.3 | 98.0 | - | - | 61.4 | 71.1 | 56.5 | 66.3
SINet[1] | 86.2 | 91.0 | 79.6 | 87.4 | 92.5 | - | - | - | 61.3 | 71.0 | 56.4 | 66.2
MFA[9] | 85.0 | 90.4 | 78.9 | 88.2 | 93.3 | 98.7 | - | - | 61.1 | 70.8 | 56.2 | 66.0
DCCT[27] | 87.5 | 92.3 | - | - | 91.7 | 98.6 | - | - | 61.5 | 71.2 | 56.6 | 66.4
LSTRL[26] | 86.8 | 91.6 | 82.4 | 89.8 | 92.2 | 98.6 | - | - | 61.7 | 71.3 | 56.7 | 66.5
CLIP-ReID[21] | 88.1 | 91.7 | 80.6 | 88.8 | - | - | - | - | 62.3 | 71.6 | 57.2 | 66.8
AG-VPReID-Net | 91.5 | 93.2 | 87.3 | 93.2 | 96.3 | 99.5 | 81.3 | 73.1 | 64.0 | 71.9 | 58.0 | 75.6
", + "bbox": [ + 138, + 88, + 859, + 343 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/358c49fd6d94f4cb79284e071440b05413b17d3a913b99c47babad5e06e36ed7.jpg", + "table_caption": [ + "Table 4. Performance comparison across datasets. Bold shows best results." + ], + "table_footnote": [], + "table_body": "
Method | Aerial → Ground | Ground → Aerial
Rank-1 | Rank-5 | Rank-10 | Rank-1 | Rank-5 | Rank-10
St-1 | 71.52 | 80.42 | 83.88 | 74.80 | 84.27 | 86.90
St-2 | 58.40 | 70.20 | 75.80 | 61.50 | 73.60 | 78.20
St-3 | 61.65 | 74.53 | 79.15 | 67.38 | 78.82 | 82.30
St-12 | 69.50 | 78.80 | 82.50 | 72.80 | 82.60 | 85.40
St-13 | 71.80 | 80.60 | 83.91 | 75.40 | 84.48 | 86.91
St-23 | 65.70 | 76.55 | 80.90 | 70.10 | 80.45 | 83.91
St-123 | 71.91 | 80.67 | 83.92 | 75.57 | 84.50 | 86.92
", + "bbox": [ + 107, + 383, + 464, + 494 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/bdfba46b2eb21291470992e4d4ad4130b35086b372ab70f4e83a7e12415e9967.jpg", + "table_caption": [ + "Table 5. Ranking accuracy (%) improvement on AG-VPReID dataset." + ], + "table_footnote": [ + "Table 6. Rank-1 accuracy (%) on AG-VPReID at various altitudes." + ], + "table_body": "
Method | Aerial → Ground | Ground → Aerial
15m | 30m | 80m | 120m | 15m | 30m | 80m | 120m
St-1 | 80.28 | 78.76 | 67.24 | 52.47 | 83.25 | 83.03 | 67.07 | 62.31
St-2 | 69.75 | 68.13 | 52.62 | 38.12 | 72.87 | 71.41 | 52.22 | 45.43
St-3 | 74.25 | 74.01 | 52.53 | 35.13 | 77.18 | 78.91 | 56.59 | 52.56
St-12 | 78.32 | 76.82 | 65.24 | 50.53 | 81.34 | 81.27 | 65.17 | 60.43
St-13 | 80.55 | 78.95 | 67.50 | 52.85 | 83.70 | 83.55 | 67.60 | 63.10
St-23 | 76.45 | 74.90 | 57.40 | 42.70 | 79.55 | 79.40 | 57.40 | 52.80
St-123 | 80.66 | 79.00 | 67.63 | 53.00 | 83.92 | 83.66 | 67.82 | 63.32
", + "bbox": [ + 91, + 527, + 480, + 638 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "guity. See Table 7.", + "bbox": [ + 89, + 685, + 217, + 699 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "5.5. Visualization", + "text_level": 1, + "bbox": [ + 89, + 710, + 227, + 726 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "We further visualize the ReID results with Top-5 ranking to understand how our model improves compared to the baseline [21] for aerial-to-ground person ReID in Fig. 4. Unlike the baseline which may be biased by image resolution and clothing textures, our approach pays attention to more robust features like motion patterns and body shape characteristics, which explains its successful identification of similar walking postures and body proportions despite the significant viewpoint differences between the aerial query and ground-view gallery pair. Additional examples are in Figs. 7 and 8 of the supplementary material.", + "bbox": [ + 89, + 734, + 483, + 901 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/706371d48f912c84501dfec8a776b1091a9da090fbb299f613d3b34299fc5ca4.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
Scenario | Rank-1 | mAP | Key Observation
Camera Angle Impact
15m altitude (AG) | 80.66 | 77.23 | Baseline performance
30m altitude (AG) | 79.00 | 75.81 | Minimal degradation
80m altitude (AG) | 67.63 | 63.42 | -13.03% Rank-1 vs. 15m
120m altitude (AG) | 53.00 | 48.75 | -27.66% Rank-1 vs. 15m
Clothing Change Impact
GG-SameClothes | 91.52 | 88.74 | Upper-bound performance
GG-DiffClothes | 83.67 | 79.92 | -7.85% Rank-1 (CC-only impact)
AG-SameClothes | 71.91 | 64.00 | -19.61% Rank-1 (AG-only impact)
AG-DiffClothes | 65.83 | 57.52 | -6.08% Rank-1 (CC impact in AG)
", + "bbox": [ + 516, + 383, + 905, + 503 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Table 7. Impact of clothing changes (CC) vs. camera angles.", + "bbox": [ + 527, + 507, + 888, + 522 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "6. Conclusion", + "text_level": 1, + "bbox": [ + 513, + 547, + 633, + 563 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "We introduce AG-VPReLU, a comprehensive dataset for video-based aerial-ground person ReID, addressing the critical need for a large and challenging aerial-ground dataset. We also propose AG-VPReLU-Net, a purpose-built three-stream person ReID framework that combines temporal-spatial processing, physics-informed normalized appearance representation, and multi-scale attention mechanisms. This approach achieves state-of-the-art performance on both the AG-VPReLU dataset and existing video-based ReID benchmarks. Notably, the relatively lower performance across all approaches on AG-VPReLU highlights its demanding nature and establishes it as a robust benchmark for advancing future research in the field.", + "bbox": [ + 511, + 573, + 906, + 770 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "7. Acknowledgement", + "text_level": 1, + "bbox": [ + 511, + 784, + 692, + 801 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "This work was supported by the Australian Research Council (ARC) Discovery Project (DP200101942) and a QUT Postgraduate Research Award. We gratefully acknowledge the Research Engineering Facility (REF) team at QUT for providing expertise and the research infrastructure essential for data collection and processing within this project.", + "bbox": [ + 511, + 809, + 906, + 902 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "1248", + "bbox": [ + 483, + 944, + 514, + 955 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 91, + 89, + 187, + 104 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[1] Shutao Bai, Bingpeng Ma, Hong Chang, Rui Huang, and Xilin Chen. Salient-to-broad transition for video person re-identification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 7339-7348, 2022. 8", + "[2] David Cornett, Joel Brogan, Nell Barber, Deniz Aykac, Seth Baird, Nicholas Burchfield, Carl Dukes, Andrew Duncan, Regina Ferrell, Jim Goddard, et al. Expanding accurate person recognition to new altitudes and ranges: The briar dataset. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 593-602, 2023. 3", + "[3] Daniel Davila, Dawei Du, Bryon Lewis, Christopher Funk, Joseph Van Pelt, Roderic Collins, Kellie Corona, Matt Brown, Scott McCloskey, Anthony Hoogs, and Brian Clipp. Mevid: Multi-view extended videos with identities for video person re-identification. In IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023. 2, 3, 4", + "[4] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In IEEE conference on computer vision and pattern recognition (CVPR), pages 248-255. IEEE, 2009. 1", + "[5] Michael Dreuw and ORB-HD. deface: Video anonymization by face detection, 2023. Python package version 1.5.0. 4", + "[6] Chanho Eom, Geon Lee, Junghyup Lee, and Bumsub Ham. Video-based person re-identification with spatial and temporal memory networks. 
In IEEE International Conference on Computer Vision (ICCV), pages 12036–12045, 2021. 2, 8", + "[7] Yang Fu, Xiaoyang Wang, Yunchao Wei, and Thomas Huang. Sta: Spatial-temporal attention for large-scale video-based person re-identification. In AAAI Conference on Artificial Intelligence, pages 8287–8294, 2019. 1", + "[8] Xinqian Gu, Hong Chang, Bingpeng Ma, Shutao Bai, Shiguang Shan, and Xilin Chen. Clothes-changing person re-identification with rgb modality only. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022. 2, 3", + "[9] Xinqian Gu, Hong Chang, Bingpeng Ma, and Shiguang Shan. Motion feature aggregation for video-based person re-identification. IEEE Transactions on Image Processing, 31:3908-3919, 2022. 8", + "[10] Ke Han, Yan Huang, Shaogang Gong, Liang Wang, and Tieniu Tan. 3d shape temporal aggregation for video-based clothing-change person re-identification. In Asian Conference on Computer Vision (ACCV), pages 2371–2387, 2022. 2, 3", + "[11] Tianyu He, Xin Jin, Xu Shen, Jianqiang Huang, Zhibo Chen, and Xian-Sheng Hua. Dense interaction learning for video-based person re-identification. In IEEE International Conference on Computer Vision (ICCV), pages 1490–1501, 2021. 3, 8", + "[12] Weizhen He, Yiheng Deng, Shixiang Tang, Qihao Chen, Qingsong Xie, Yizhou Wang, Lei Bai, Feng Zhu, Rui Zhao, Wanli Ouyang, et al. Instruct-reid: A multi-purpose person re-identification task with instructions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 17521–17531, 2024. 3" + ], + "bbox": [ + 93, + 114, + 483, + 900 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[13] Ruibing Hou, Hong Chang, Bingpeng Ma, Shiguang Shan, and Xilin Chen. Temporal complementary learning for video person re-identification. In European Conference on Computer Vision (ECCV), pages 388–405, 2020. 2, 3, 8", + "[14] Ruibing Hou, Hong Chang, Bingpeng Ma, Rui Huang, and Shiguang Shan. Bicnet-tks: Learning efficient spatial-temporal representation for video person re-identification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2014–2023, 2021. 2, 8", + "[15] Yan Jiang, Xu Cheng, Hao Yu, Xingyu Liu, Haoyu Chen, and Guoying Zhao. Domain shifting: A generalized solution for heterogeneous cross-modality person re-identification. In European Conference on Computer Vision, pages 289–306. Springer, 2025. 3", + "[16] Glenn Jocher, Ayush Chaurasia, and Jing Qiu. Ultralytics YOLO. https://github.com/ultralytics/ultralytics, 2024. Accessed: 2024-03-22. 4", + "[17] SV Aruna Kumar, Ehsan Yaghoubi, Abhijit Das, BS Harish, and Hugo Proença. The p-destre: A fully annotated dataset for pedestrian detection, tracking, and short/long-term re-identification from aerial devices. IEEE Transactions on Information Forensics and Security, 16:1696-1708, 2020. 2, 3, 1", + "[18] J. Li, J. Wang, Q. Tian, W. Gao, and S. Zhang. Global-local temporal representations for video person re-identification. In IEEE International Conference on Computer Vision (ICCV), pages 3958–3967, 2019. 2, 4, 7", + "[19] Jianing Li, Jingdong Wang, Qi Tian, Wen Gao, and Shiliang Zhang. Global-local temporal representations for video person re-identification. In IEEE International Conference on Computer Vision (ICCV), pages 3958–3967, 2019. 2, 8", + "[20] Jianing Li, Shiliang Zhang, and Tiejun Huang. Multiscale 3d convolution network for video based person re-identification. In AAAI Conference on Artificial Intelligence, pages 8618–8625, 2019. 
2, 3, 8", + "[21] Siyuan Li, Li Sun, and Qingli Li. Clip-reid: exploiting vision-language model for image re-identification without concrete text labels. In AAAI Conference on Artificial Intelligence, pages 1405–1413, 2023. 8, 3", + "[22] Yutian Lin, Liang Zheng, Zhedong Zheng, Yu Wu, and Yi Yang. Improving person re-identification by attribute and identity learning. ArXiv, abs/1703.07220, 2019. 1", + "[23] Ziyi Lin, Shijie Geng, Renrui Zhang, Peng Gao, Gerard De Melo, Xiaogang Wang, Jifeng Dai, Yu Qiao, and Hongsheng Li. Frozen clip models are efficient video learners. In European Conference on Computer Vision (ECCV), pages 388-404. Springer, 2022. 6, 3", + "[24] Jiawei Liu, Zheng-Jun Zha, Wei Wu, Kecheng Zheng, and Qibin Sun. Spatial-temporal correlation and topology learning for person re-identification in videos. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4370–4379, 2021. 8", + "[25] Xuehu Liu, Pingping Zhang, Chenyang Yu, Huchuan Lu, and Xiaoyun Yang. Watching you: Global-guided reciprocal learning for video-based person re-identification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 13334–13343, 2021. 1, 3, 8" + ], + "bbox": [ + 516, + 92, + 903, + 900 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "1249", + "bbox": [ + 483, + 944, + 514, + 955 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[26] Xuehu Liu, Pingping Zhang, and Huchuan Lu. Video-based person re-identification with long short-term representation learning. arXiv preprint arXiv:2308.03703, 2023. 2, 8", + "[27] Xuehu Liu, Chenyang Yu, Pingping Zhang, and Huchuan Lu. Deeply coupled convolution-transformer with spatial-temporal complementary learning for video-based person re-identification. IEEE Transactions on Neural Networks and Learning Systems, 35(10):13753-13763, 2024. 8", + "[28] Xuehu Liu, Pingping Zhang, Chenyang Yu, Xuesheng Qian, Xiaoyun Yang, and Huchuan Lu. A video is worth three views: Trigeminal transformers for video-based person re-identification. IEEE Transactions on Intelligent Transportation Systems, 25(9):12818-12828, 2024. 3, 8", + "[29] Yiheng Liu, Zhenxun Yuan, Wengang Zhou, and Houqiang Li. Spatial and temporal mutual promotion for video-based person re-identification. In AAAI Conference on Artificial Intelligence, pages 8786–8793, 2019. 2, 8", + "[30] Niall McLaughlin, Jesus Martinez del Rincon, and Paul Miller. Recurrent convolutional network for video-based person re-identification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1325–1334, 2016. 3", + "[31] Huy Nguyen, Kien Nguyen, Sridha Sridharan, and Clinton Fookes. Aerial-ground person re-id. In IEEE International Conference on Multimedia and Expo (ICME), pages 2585-2590, 2023. 1, 7", + "[32] Huy Nguyen, Kien Nguyen, Sridha Sridharan, and Clinton Fookes. Ag-reid.v2: Bridging aerial and ground views for person re-identification. IEEE Transactions on Information Forensics and Security, 19:2896-2908, 2024. 1, 4, 7", + "[33] Kien Nguyen, Clinton Fookes, Sridha Sridharan, Feng Liu, Xiaoming Liu, Arun Ross, Dana Michalski, Huy Nguyen, Debayan Deb, Mahak Kothari, et al. Ag-reid 2023: Aerial-ground person re-identification challenge results. In 2023 IEEE International Joint Conference on Biometrics (IJCB), pages 1–10. IEEE, 2023. 3", + "[34] Vuong D Nguyen, Pranav Mantini, and Shishir K Shah. Temporal 3d shape modeling for video-based cloth-changing person re-identification. 
In IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pages 173–182, 2024. 5", + "[35] Honghu Pan, Qiao Liu, Yongyong Chen, Yunqi He, Yuan Zheng, Feng Zheng, and Zhenyu He. Pose-aided video-based person re-identification via recurrent graph convolutional network. IEEE Transactions on Circuits and Systems for Video Technology, 33(12):7183-7196, 2023. 1", + "[36] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning (ICML), pages 8748-8763, 2021. 6", + "[37] Kien Nguyen Thanh, Clinton Fookes, Sridha Sridharan, Yingli Tian, Feng Liu, Xiaoming Liu, and Arun Ross. The state of aerial surveillance: A survey. CoRR, abs/2201.03080, 2022. 1", + "[38] Xiaogang Wang and Rui Zhao. Person re-identification:" + ], + "bbox": [ + 89, + 92, + 482, + 900 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "System design and evaluation overview. In Person Re-Identification, pages 351-370. Springer, 2014. 7", + "[39] Yingquan Wang, Pingping Zhang, Shang Gao, Xia Geng, Hu Lu, and Dong Wang. Pyramid spatial-temporal aggregation for video-based person re-identification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 12026–12035, 2021. 3, 8", + "[40] Jinlin Wu, Lingxiao He, Wu Liu, Yang Yang, Zhen Lei, Tao Mei, and Stan Z Li. Cavit: Contextual alignment vision transformer for video object re-identification. In European Conference on Computer Vision (ECCV), pages 549–566. Springer, 2022. 8", + "[41] Xiangyu Xu and Chen Change Loy. 3d human texture estimation from a single image with transformers. In IEEE International Conference on Computer Vision (ICCV), pages 13849-13858, 2021. 6, 7, 3", + "[42] Xiangyu Xu, Hao Chen, Francesc Moreno-Noguer, László A Jeni, and Fernando De la Torre. 3d human shape and pose from a single low-resolution image with self-supervised learning. In European Conference on Computer Vision (ECCV), pages 284-300. Springer, 2020. 3", + "[43] Yichao Yan, Jie Qin, Jiaxin Chen, Li Liu, Fan Zhu, Ying Tai, and Ling Shao. Learning multi-granular hypergraphs for video-based person re-identification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2899–2908, 2020. 2, 8", + "[44] Dingqiang Ye, Chao Fan, Jingzhe Ma, Xiaoming Liu, and Shiqi Yu. Biggait: Learning gait representation you want by large vision models. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 200-210, 2024. 6, 3", + "[45] Chenyang Yu, Xuehu Liu, Yingquan Wang, Pingping Zhang, and Huchuan Lu. Tf-clip: Learning text-free clip for video-based person re-identification. In AAAI Conference on Artificial Intelligence, pages 6764–6772, 2024. 2, 3, 5", + "[46] Quan Zhang, Lei Wang, Vishal M. Patel, Xiaohua Xie, and Jianhuang Lai. View-decoupled transformer for person re-identification under aerial-ground camera network. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 22000–22009, 2024. 1", + "[47] Shizhou Zhang, Wenlong Luo, De Cheng, Qingchun Yang, Lingyan Ran, Yinghui Xing, and Yanning Zhang. Cross-platform video person reid: A new benchmark dataset and adaptation approach. In European Conference on Computer Vision (ECCV), 2024. 1, 2, 3, 4, 7", + "[48] Tianyu Zhang, Longhui Wei, Lingxi Xie, Zijie Zhuang, Yongfei Zhang, Bo Li, and Qi Tian. 
Spatiotemporal transformer for video-based person re-identification. arXiv:2103.16469, 2021. 8", + "[49] Yufei Zhang, Jeffrey O Kephart, Zijun Cui, and Qiang Ji. Physpt: Physics-aware pretrained transformer for estimating human dynamics from monocular videos. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 2305-2317, 2024. 5, 7, 3", + "[50] Zhizheng Zhang, Cuiling Lan, Wenjun Zeng, and Zhibo Chen. Multi-granularity reference-aided attentive feature aggregation for video-based person re-identification. In IEEE" + ], + "bbox": [ + 516, + 92, + 903, + 900 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "1250", + "bbox": [ + 483, + 944, + 514, + 955 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Conference on Computer Vision and Pattern Recognition (CVPR), pages 10407-10416, 2020. 1", + "[51] L. Zheng, Z. Bie, Y. Sun, J. Wang, C. Su, S. Wang, and Q. Tian. Mars: A video benchmark for large-scale person re-identification. In European Conference on Computer Vision (ECCV, pages 868–884, 2016. 1, 2, 3, 7", + "[52] Zhedong Zheng, Xiaohan Wang, Nenggan Zheng, and Yi Yang. Parameter-efficient person re-identification in the 3d space. IEEE Transactions on Neural Networks and Learning Systems, 35(6):7534-7547, 2022. 6", + "[53] Haidong Zhu, Pranav Budhwant, Zhaoheng Zheng, and Ram Nevatia. Seas: Shape-aligned supervision for person re-identification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 164–174, 2024. 3" + ], + "bbox": [ + 91, + 90, + 482, + 301 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "1251", + "bbox": [ + 483, + 945, + 514, + 955 + ], + "page_idx": 10 + } +] \ No newline at end of file diff --git a/2025/AG-VPReID_ A Challenging Large-Scale Benchmark for Aerial-Ground Video-based Person Re-Identification/4cdeaea8-7584-41e0-a7e3-68147957d6c6_model.json b/2025/AG-VPReID_ A Challenging Large-Scale Benchmark for Aerial-Ground Video-based Person Re-Identification/4cdeaea8-7584-41e0-a7e3-68147957d6c6_model.json new file mode 100644 index 0000000000000000000000000000000000000000..8e729de8f1ee966cda71e7675ea3b8cf49aadd00 --- /dev/null +++ b/2025/AG-VPReID_ A Challenging Large-Scale Benchmark for Aerial-Ground Video-based Person Re-Identification/4cdeaea8-7584-41e0-a7e3-68147957d6c6_model.json @@ -0,0 +1,2268 @@ +[ + [ + { + "type": "header", + "bbox": [ + 0.107, + 0.003, + 0.182, + 0.043 + ], + "angle": 0, + "content": "CVF" + }, + { + "type": "header", + "bbox": [ + 0.237, + 0.001, + 0.812, + 0.047 + ], + "angle": 0, + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." 
+ }, + { + "type": "title", + "bbox": [ + 0.215, + 0.131, + 0.784, + 0.175 + ], + "angle": 0, + "content": "AG-VPReID: A Challenging Large-Scale Benchmark for Aerial-Ground Video-based Person Re-Identification" + }, + { + "type": "text", + "bbox": [ + 0.236, + 0.203, + 0.762, + 0.238 + ], + "angle": 0, + "content": "Huy Nguyen1, Kien Nguyen1, Akila Pemasiri1, Feng Liu2, Sridha Sridharan1, Clinton Fookes1" + }, + { + "type": "text", + "bbox": [ + 0.155, + 0.239, + 0.842, + 0.275 + ], + "angle": 0, + "content": "\\(^{1}\\) School of Electrical Engineering and Robotics, Queensland University of Technology \\(^{2}\\) Department of Computer Science, Drexel University" + }, + { + "type": "text", + "bbox": [ + 0.155, + 0.276, + 0.841, + 0.31 + ], + "angle": 0, + "content": "\\(^{1}\\{t497.nguyen, k.nguyenthanh, a.thondilege, s.sridharan, c.fookes\\}@qut.edu.au,\\) \\(^{2}f1397@drexel.edu\\)" + }, + { + "type": "title", + "bbox": [ + 0.248, + 0.345, + 0.327, + 0.361 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.377, + 0.486, + 0.768 + ], + "angle": 0, + "content": "We introduce AG-VPReLU, a new large-scale dataset for aerial-ground video-based person re-identification (ReID) that comprises 6,632 subjects, 32,321 tracklets and over 9.6 million frames captured by drones (altitudes ranging from 15–120m), CCTV, and wearable cameras. This dataset offers a real-world benchmark for evaluating the robustness to significant viewpoint changes, scale variations, and resolution differences in cross-platform aerial-ground settings. In addition, to address these challenges, we propose AG-VPReLU-Net, an end-to-end framework composed of three complementary streams: (1) an Adapted Temporal-Spatial Stream addressing motion pattern inconsistencies and facilitating temporal feature learning, (2) a Normalized Appearance Stream leveraging physics-informed techniques to tackle resolution and appearance changes, and (3) a MultiScale Attention Stream handling scale variations across drone altitudes. We integrate visual-semantic cues from all streams to form a robust, viewpoint-invariant whole-body representation. Extensive experiments demonstrate that AG-VPReLU-Net outperforms state-of-the-art approaches on both our new dataset and existing video-based ReID benchmarks, showcasing its effectiveness and generalizability. Nevertheless, the performance gap observed on AG-VPReLU across all methods underscores the dataset's challenging nature. The dataset, code and trained models are available at AG-VPReLU-Net." + }, + { + "type": "title", + "bbox": [ + 0.092, + 0.8, + 0.222, + 0.815 + ], + "angle": 0, + "content": "1. Introduction" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.825, + 0.485, + 0.903 + ], + "angle": 0, + "content": "Video-based person re-identification (ReID) is a challenging and in-demand task, with significant real-world applications in surveillance, search and rescue operations, and urban monitoring [25, 35, 37]. While traditional ReID methods focus on ground-based cameras [7, 50], the inte" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.346, + 0.908, + 0.619 + ], + "angle": 0, + "content": "gration of aerial perspectives through aerial-ground person ReID presents a paradigm shift in this field [32]. This approach enables the identification and matching of individuals across non-overlapping aerial and ground-based camera views, substantially enhancing situational awareness and response times in complex environments [31, 46]. 
The motivation behind this research stems from the increasing deployment of aerial platforms, such as unmanned aerial vehicles (UAVs), which provide unique vantage points that complement ground-based observations. However, the development of robust aerial-ground ReID systems faces a significant challenge: the scarcity of diverse and large-scale datasets that capture the nuances of both aerial and ground perspectives. As demonstrated by ImageNet [4], large and diverse benchmarks are crucial for deep learning based methods, indicating a need for a comprehensive ReID dataset integrating multiple platforms, environments, and real-world challenges." }, { "type": "text", "bbox": [ 0.512, 0.629, 0.911, 0.903 ], "angle": 0, "content": "Initial efforts in aerial-ground person ReID have focused primarily on image-based tasks. For instance, Nguyen et al. [31] pioneer this area by releasing the first aerial-ground ReID dataset, which includes images from one drone and one CCTV camera capturing 21,983 images of 388 identities. They later expand the dataset to 100,502 images of 1,615 individuals [32]. Recently, Zhang et al. [46] collect a synthetic dataset named CARGO, containing 108,563 images representing 5,000 subjects, to complement real-world datasets. Within video-based tasks, Zhang et al. [47] collect a video-based dataset called G2A-VReID, which consists of 185,907 images and 5,576 tracklets from one drone and one CCTV camera, featuring 2,788 identities. Despite these advancements, G2A-VReID remains limited compared to ground-based datasets like MARS [51], which includes 20,000 tracklets and 1.19 million frames from six cameras. While current aerial-ground datasets are valuable, increasing identity variation and environmental diversity would" }, { "type": "page_number", "bbox": [ 0.484, 0.945, 0.514, 0.958 ], "angle": 0, "content": "1241" } ], [ { "type": "table", "bbox": [ 0.12, 0.089, 0.875, 0.215 ], "angle": 0, "content": "
| Dataset | Year | #Identities | #Tracklets | #Frames (M) | #CV | CC | Att. | Ground | Wearable | Aerial | Dur. | Altitude (m) |
| MARS [51] | 2016 | 1,261 | 20,478 | 1.19 | 6 | ✗ | ✗ | ✓ | ✗ | ✗ | - | - |
| LS-VID [18] | 2019 | 3,772 | 14,943 | 2.98 | 15 | ✗ | ✗ | ✓ | ✗ | ✗ | 4 | - |
| VCCR [10] | 2022 | 392 | 4,384 | 0.15 | 1 | ✓ | ✗ | ✓ | ✗ | ✗ | 90 | - |
| CCVID [8] | 2022 | 226 | 2,856 | 0.34 | 1 | ✓ | ✗ | ✓ | ✗ | ✗ | - | - |
| MEVID [3] | 2023 | 158 | 8,092 | 10.46 | 33 | ✓ | ✗ | ✓ | ✗ | ✗ | 73 | - |
| P-Destre [17] | 2020 | 253 | 1,894 | 0.10 | 1 | ✓ | ✓ | ✗ | ✗ | ✓ | - | 5-6 |
| G2A-VReID [47] | 2024 | 2,788 | 5,576 | 0.18 | 2 | ✗ | ✗ | ✓ | ✗ | ✓ | - | 20 - 60 |
| AG-VPReID | 2024 | 6,632 | 32,321 | 9.66 | 6 | ✓ | ✓ | ✓ | ✓ | ✓ | 20 | 15 - 120 |
" + }, + { + "type": "table_caption", + "bbox": [ + 0.09, + 0.219, + 0.905, + 0.245 + ], + "angle": 0, + "content": "Table 1. Comparison of AG-VPReID with existing video-based person ReID datasets. Above: ground-based datasets, Below: aerial-based datasets. CV: Camera Views, CC: Clothes-Change, Att.: Attributes (Soft-biometrics annotations), Dur.: Duration (days)." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.272, + 0.449, + 0.287 + ], + "angle": 0, + "content": "improve model robustness for real-world applications." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.289, + 0.483, + 0.576 + ], + "angle": 0, + "content": "In light of this, we introduce AG-VPReID, a comprehensive large-scale benchmark dataset for Aerial-Ground Video-based Person ReID. AG-VPReID comprises 6,632 subjects, 32,321 tracklets, and over 9.6 million frames, captured across multiple dates and times of day using a combination of three platforms: aerial drones operating at various altitudes (15-120m), stationary CCTV cameras, and wearable mobile cameras. This dataset significantly surpasses existing video-based ReID benchmarks in terms of scale, diversity, and real-world applicability with the highest number of identities, the highest number of tracklets, the highest drone flying altitudes, and the most diverse platforms. The key characteristics of AG-VPReID include: drastic view changes between aerial and ground perspectives; a large number of annotated identities across multiple sessions; rich outdoor scenarios with varying environmental conditions; significant differences in resolution between aerial and ground footage; and both controlled scenarios with clothing changes and in-the-wild pedestrian traffic." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.581, + 0.483, + 0.732 + ], + "angle": 0, + "content": "Aerial-ground person ReID presents unique challenges due to significant appearance variations between aerial and ground-level views. These variations include extreme viewpoint differences, drastic changes in resolution and scale, partial occlusions, and temporal discontinuities caused by high-flying altitudes and long-range captures. Traditional video-based person ReID methods, although effective in ground-based settings [26, 45], often struggle in aerial-ground scenarios due to the complex combination of inconsistent motion patterns and the aforementioned variations." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.735, + 0.483, + 0.902 + ], + "angle": 0, + "content": "To address these challenges, we introduce AG-VPReID-Net, an end-to-end framework for Aerial-Ground Video-based Person Re-Identification. Unlike existing state-of-the-art methods focused on single-view or ground scenarios, AG-VPReID-Net features three complementary streams tailored for aerial-ground challenges: i) An Adapted Temporal-Spatial Stream enhances traditional temporal modeling by integrating identity-specific memory and temporal shape analysis. This improves the extraction of consistent motion patterns and body shape representations, addressing the temporal discontinuity and motion" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.271, + 0.908, + 0.543 + ], + "angle": 0, + "content": "inconsistencies of aerial footage, outperforming standard LSTM [6, 14] or 3D CNN [20] approaches; ii) A Normalized Appearance Stream addresses the resolution and appearance differences between aerial and ground views by using UV map aggregation across frames for a normalized appearance representation.
This provides robustness against pose changes, viewpoint shifts, and varying image quality, excelling where current appearance-based methods [19, 29] falter; and iii) A Multi-Scale Attention Stream addresses scale variations inherent in aerial-ground data by incorporating multi-scale feature extraction, motion analysis, temporal context, and a transformer decoder, effectively improving identification across drone altitudes compared to single-scale [13, 43] methods. By integrating these streams, AG-VPReID-Net offers incremental improvements in aerial-ground video-based re-identification, highlighting its potential in addressing this challenging scenario." + }, + { + "type": "text", + "bbox": [ + 0.533, + 0.544, + 0.871, + 0.558 + ], + "angle": 0, + "content": "In summary, our main contributions are as follows:" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.559, + 0.905, + 0.619 + ], + "angle": 0, + "content": "(1) We introduce AG-VPReID, a challenging large-scale benchmark for aerial-ground video-based person ReID, bridging the gap with a diverse dataset that captures nuanced challenges from both aerial and ground perspectives." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.62, + 0.905, + 0.693 + ], + "angle": 0, + "content": "(2) We propose AG-VPReID-Net, an innovative end-to-end framework that integrates adapted temporal-spatial processing, normalized appearance representation, and multi-scale attention mechanisms to effectively address the challenges of aerial-ground ReID." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.695, + 0.905, + 0.755 + ], + "angle": 0, + "content": "(3) AG-VPReID-Net sets new state-of-the-art performance on AG-VPReID and existing video-based ReID benchmarks, demonstrating our approach's effectiveness and generalizability across different settings." + }, + { + "type": "list", + "bbox": [ + 0.512, + 0.559, + 0.905, + 0.755 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.77, + 0.637, + 0.785 + ], + "angle": 0, + "content": "2. Prior Work" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.795, + 0.907, + 0.903 + ], + "angle": 0, + "content": "Video-based Person ReID Datasets. Existing person ReID datasets are numerous but largely fail to address real-world challenges, particularly in aerial-ground scenarios. Ground-based datasets like MARS [51] and LS-VID [18] provide large-scale benchmarks but focus mainly on ground perspectives, highlighting the need for multi-platform surveillance datasets. The inclusion of clothing" + }, + { + "type": "page_number", + "bbox": [ + 0.484, + 0.945, + 0.516, + 0.957 + ], + "angle": 0, + "content": "1242" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.135, + 0.09, + 0.868, + 0.423 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.089, + 0.432, + 0.908, + 0.475 + ], + "angle": 0, + "content": "Figure 1. Our AG-VPReID dataset was captured using a variety of six cameras, including aerial drones, CCTVs, and GoPros. Sample images and camera locations are illustrated on the right side of the figure. The left side depicts the cross-camera appearance variations of two pedestrians, showcasing differences across various sessions and times of the day."
+ }, + { + "type": "text", + "bbox": [ + 0.089, + 0.501, + 0.486, + 0.759 + ], + "angle": 0, + "content": "changes in datasets such as MEVID [3], CCVID [8], and VCCR [10] represents progress in addressing real-world challenges, though they could benefit from more identities and diverse environments. The BRIAR dataset [2], while featuring 1,000 subjects and UAV footage, primarily targets face recognition and has restricted access. P-Destre [17] pioneered aerial view exploration, though its use of a single drone at lower altitudes (5-6m) creates opportunities for datasets covering higher operational altitudes more common in surveillance applications. The G2A-VReID dataset by Zhang et al. [47] represents an important step in combining aerial and ground views. While innovative, it contains 2,788 identities within a \\(20 - 60\\mathrm{m}\\) altitude range using 2 cameras, suggesting opportunities for future datasets to expand in scale, altitude diversity, camera count, and environmental variety. Tab. 1 compares our AG-VPReID dataset with others across multiple dimensions." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.765, + 0.485, + 0.902 + ], + "angle": 0, + "content": "Video-based Person ReID. Video-based person ReID methods have evolved to leverage both spatial and temporal cues. Early approaches used recurrent neural networks and 3D convolutional networks [20, 30], while later works incorporated temporal pooling [51] and attention mechanisms [25]. Recent advancements include temporal complementary learning [13], Transformer-based architectures [11, 28], and techniques addressing cross-platform and cloth-changing scenarios [10, 39, 47]. The" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.501, + 0.909, + 0.683 + ], + "angle": 0, + "content": "AG-ReID 2023 Challenge [33] highlighted aerial-ground ReID challenges, with winners employing re-ranking, data augmentation, and centroid-based representations. Recent works include Instruct-ReID [12] with instruction-guided retrieval, Domain Shifting [15] for distribution adaptation, and SEAS [53] using 3D body shape guidance. Despite these developments, most existing methods employ uni-modal frameworks trained on predefined label sets. In contrast, recent work has proposed a visual-language multi-modal learning paradigm [45], potentially offering more flexibility and robustness in feature representation for video-based person ReID." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.695, + 0.714, + 0.71 + ], + "angle": 0, + "content": "3. AG-VPReID Dataset" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.72, + 0.909, + 0.809 + ], + "angle": 0, + "content": "This section offers a detailed overview of the creation process for our AG-VPReID dataset. We describe the methods used for collecting video footage in Sec. 3.1. Sec. 3.2 introduces our annotation procedures. Sec. 3.3 compares AG-VPReID with existing datasets, highlighting its unique features." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.819, + 0.692, + 0.833 + ], + "angle": 0, + "content": "3.1. Dataset Collection" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.84, + 0.909, + 0.903 + ], + "angle": 0, + "content": "The AG-VPReID dataset was collected over a period of 20 days, including 10 morning sessions (8:30am-10:00am) and 10 afternoon sessions (3:30pm-5:00pm), with each session lasting 60 minutes.
Data capture involved two drones, two" + }, + { + "type": "page_number", + "bbox": [ + 0.484, + 0.945, + 0.516, + 0.957 + ], + "angle": 0, + "content": "1243" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.092, + 0.485, + 0.304 + ], + "angle": 0, + "content": "CCTV cameras, and two wearable cameras. Each drone operated at four different altitudes—15m, 30m, 80m, and 120m—for 15 minutes per session, providing a comprehensive range of aerial views. In total, the dataset comprises 240 hours of video footage, documented in Tab. 2, which includes detailed specifications of the equipment used. The dataset features diverse resolutions, frame rates, and perspectives, extending from ground level to 120-meter aerial views. Fig. 1 shows example frames across different viewpoints and image qualities from different platforms. The drones, CCTV, and wearable devices are positioned to view individuals from different angles, as illustrated in Fig. 1, forcing person ReID models to learn robust multiview and partial-view representations to be effective." + }, + { + "type": "table", + "bbox": [ + 0.158, + 0.317, + 0.416, + 0.417 + ], + "angle": 0, + "content": "
| Type | Model | Resolution | Lens | FPS |
| CCTV | Bosch (Outdoor) | 704 × 480 | 24mm | 15 |
| CCTV | Bosch (Indoor) | 1280 × 720 | 18mm | 25 |
| Wearable | GoPro 10 (Front) | 3840 × 2160 | 16mm | 30 |
| Wearable | GoPro 10 (Side) | 1920 × 1080 | 16mm | 60 |
| Drones | DJI Inspire 2 | 3840 × 2160 | 24mm | 25 |
| Drones | DJI M300 RTK | 8192 × 5460 | 35mm | 1 |
" + }, + { + "type": "table_caption", + "bbox": [ + 0.117, + 0.42, + 0.456, + 0.433 + ], + "angle": 0, + "content": "Table 2. Equipment specifications for the AG-VPReID dataset." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.451, + 0.484, + 0.527 + ], + "angle": 0, + "content": "To ensure professional drone operations, a specialized team (one RPAS engineer, one Chief Remote Pilot, one RPAS technician) managed all 20 data collection days, handling flights, capturing aerial footage, and performing initial data preprocessing." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.538, + 0.258, + 0.554 + ], + "angle": 0, + "content": "3.2. Labeling Process" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.561, + 0.484, + 0.742 + ], + "angle": 0, + "content": "The AG-VPReID dataset uses YOLOv8x for person detection and tracking [16], extracting images from all frames across multiple cameras. It includes both short-term and long-term ReID scenarios, the latter including instances where participants change clothes to test long-term identity persistence. Identity matching was performed by a team of expert annotators, supported by research assistants, to ensure both accuracy and consistency in the matching process. Following [32], we manually annotated each identity with 15 selected soft-biometric attributes to enhance the dataset's utility for attribute-based person ReID applications. For a detailed list of these attributes, refer to Sec. 8.1." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.753, + 0.308, + 0.768 + ], + "angle": 0, + "content": "3.3. Dataset Characteristics" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.776, + 0.483, + 0.806 + ], + "angle": 0, + "content": "Compared with existing video-based ReID datasets, our AG-VPReID dataset has five unique characteristics:" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.811, + 0.484, + 0.9 + ], + "angle": 0, + "content": "1) The highest number of identities and tracklets. AG-VPReID sets a new benchmark with 6,632 unique identities and 32,321 tracklets, substantially surpassing existing datasets. It holds nearly twice as many identities as prominent ground-based datasets such as LS-VID [18], which contains 3,772 identities, and more than six times" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.092, + 0.905, + 0.198 + ], + "angle": 0, + "content": "the tracklets of the next largest aerial-ground dataset, G2A-VReID [47], which includes 5,576 tracklets. Additionally, AG-VPReID encompasses over 9.6 million frames, providing 50 times the volume of frames compared to other aerial-ground datasets. While MEVID [3] exceeds in frame count with 10.46 million, it does not match AG-VPReID in terms of identity and tracklet numbers." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.202, + 0.905, + 0.324 + ], + "angle": 0, + "content": "2) The most diverse platforms. Our AG-VPReID dataset is the first to incorporate aerial, ground, and wearable platforms for video-based person ReID. The inclusion of wearable cameras provides a novel dimension with high-quality first-person perspectives. This combination results in extreme variations in resolution and subject size across platforms: UAV (18×37 to 293×542 pixels), CCTV (22×23 to 172×413 pixels), and GoPro (25×48 to 483×1078 pixels)." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.328, + 0.905, + 0.449 + ], + "angle": 0, + "content": "3) The highest flying altitudes.
AG-VPReID features footage from altitudes reaching up to 120 meters, exceeding existing datasets' 60-meter maximum [47]. This introduces challenges: 1) extreme viewpoints with perspective distortions; 2) multiple scales with varying resolutions; and 3) image quality issues (Fig. 2). We used two drones—one with a wide-angle camera for area monitoring and another with a narrow-angle camera for detailed observation." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.454, + 0.907, + 0.543 + ], + "angle": 0, + "content": "4) Rich outdoor scenarios with real-world challenges. Our AG-VPReID dataset presents diverse outdoor campus scenarios with real-world challenges including complex occlusions, varied poses from different activities, and uniform-wearing individuals. Fig. 2 shows examples of these diverse scenarios." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.549, + 0.906, + 0.685 + ], + "angle": 0, + "content": "5) Other notable characteristics. AG-VPReID includes comprehensive attributes for each identity (gender, age, clothing style, accessories), enabling fine-grained analysis. The dataset features long-term tracking data of 14 diverse individuals recorded across multiple days, each wearing different clothing per session to capture real-world variations. We also provide camera calibration information and GPS coordinates to support multi-camera tracking research. See supplementary material Sec. 8 for details." + }, + { + "type": "list", + "bbox": [ + 0.513, + 0.202, + 0.907, + 0.685 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.698, + 0.695, + 0.714 + ], + "angle": 0, + "content": "3.4. Ethics and Privacy" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.722, + 0.905, + 0.783 + ], + "angle": 0, + "content": "This research received ethics approval for data collection and usage. We implemented \"Deface\" [5] to blur faces, ensured secure data storage, and obtained informed consent from all participants. Details are available at our project repository." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.799, + 0.681, + 0.815 + ], + "angle": 0, + "content": "4. AG-VPReID-Net" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.826, + 0.905, + 0.901 + ], + "angle": 0, + "content": "We propose AG-VPReID-Net, a purpose-built framework addressing aerial-ground ReID's unique challenges.
In particular, we propose an Adapted Temporal-Spatial Stream for robust temporal-spatial representations to deal with the temporal discontinuity challenge caused by drone motion" + }, + { + "type": "page_number", + "bbox": [ + 0.484, + 0.945, + 0.516, + 0.957 + ], + "angle": 0, + "content": "1244" + } + ], + [ + { + "type": "image_caption", + "bbox": [ + 0.113, + 0.132, + 0.131, + 0.23 + ], + "angle": 270, + "content": "Aerial Camera" + }, + { + "type": "image", + "bbox": [ + 0.145, + 0.091, + 0.462, + 0.277 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.145, + 0.287, + 0.284, + 0.352 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.163, + 0.354, + 0.279, + 0.404 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.318, + 0.29, + 0.441, + 0.35 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.319, + 0.351, + 0.441, + 0.43 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.315, + 0.433, + 0.45, + 0.447 + ], + "angle": 0, + "content": "(f) Clothing similarity" + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.467, + 0.484, + 0.524 + ], + "angle": 0, + "content": "Figure 2. The AG-VPReID dataset presents several key challenges: extreme viewpoints, varying resolutions and subject sizes, pose/illumination variations, occlusions, and similar clothing among subjects." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.551, + 0.484, + 0.657 + ], + "angle": 0, + "content": "from unstable tracking between frames. We propose a Normalized Appearance Stream to deal with the resolution and appearance changes caused by extreme viewpoint shifts. To deal with altitude-driven scale variance, we introduce a Multi-Scale Attention Stream. Fig. 3 illustrates our architecture. Detailed stream contributions are provided in Tab. 9 of the supplementary material." + }, + { + "type": "title", + "bbox": [ + 0.09, + 0.667, + 0.474, + 0.684 + ], + "angle": 0, + "content": "4.1. Stream 1: Adapted Temporal-Spatial Stream" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.689, + 0.484, + 0.81 + ], + "angle": 0, + "content": "When performing video-based person ReID, a key challenge is handling inconsistent motion patterns and temporal gaps between video frames. To address this, we propose an Adapted Temporal-Spatial stream that combines CLIP's visual encoder with temporal and 3D shape modeling to create a comprehensive representation of individuals. Our method operates on a sequence \\(\\mathcal{V}\\) of \\(T\\) frames through the following components:" + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.814, + 0.484, + 0.845 + ], + "angle": 0, + "content": "Visual Feature Extraction: Using CLIP's visual encoder \\(E_{v}(\\cdot)\\), we extract frame-level features," + }, + { + "type": "equation", + "bbox": [ + 0.183, + 0.858, + 0.483, + 0.874 + ], + "angle": 0, + "content": "\\[\nF_{t} = E_{v}\\left(\\mathcal{I}_{t}\\right), \\quad t \\in \\{1, \\dots, T\\}, \\tag{1}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.886, + 0.427, + 0.901 + ], + "angle": 0, + "content": "where \\(\\mathcal{I}_t\\) and \\(F_{t}\\) are the \\(t\\)-th frame and its features."
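As a minimal illustration of Eq. (1), not the authors' released code, the frame-level extraction can be sketched in PyTorch. The Hugging Face `CLIPVisionModel` API and the `openai/clip-vit-base-patch16` checkpoint are assumptions made for this sketch; the paper only specifies that CLIP's visual encoder \(E_v(\cdot)\) is used.

```python
# Sketch of Eq. (1): per-frame features F_t = E_v(I_t) from a frozen CLIP
# visual encoder. Checkpoint name and API are illustrative assumptions.
import torch
from transformers import CLIPVisionModel

encoder = CLIPVisionModel.from_pretrained("openai/clip-vit-base-patch16")
encoder.eval()  # used purely as a feature extractor in this sketch

@torch.no_grad()
def extract_frame_features(frames: torch.Tensor) -> torch.Tensor:
    """frames: (T, 3, 224, 224) preprocessed tracklet frames I_1..I_T.
    Returns F of shape (T, D) with F_t = E_v(I_t)."""
    return encoder(pixel_values=frames).pooler_output  # (T, 768) for ViT-B/16

F = extract_frame_features(torch.randn(8, 3, 224, 224))  # a T = 8 tracklet
print(F.shape)  # torch.Size([8, 768])
```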
+ }, + { + "type": "text", + "bbox": [ + 0.513, + 0.092, + 0.905, + 0.122 + ], + "angle": 0, + "content": "Temporal Processing: We incorporate temporal modeling through two key components:" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.122, + 0.905, + 0.152 + ], + "angle": 0, + "content": "1) Temporal 3D Shape Modeling (TSM): Following [34], we extract 3D shape representations," + }, + { + "type": "equation", + "bbox": [ + 0.557, + 0.164, + 0.905, + 0.181 + ], + "angle": 0, + "content": "\\[\ng_{t} = \\operatorname{GRU}\\left(F_{t}, g_{t-1}\\right), \\quad \\beta_{t} = \\operatorname{3DRegressor}\\left(g_{t}\\right), \\tag{2}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.193, + 0.905, + 0.223 + ], + "angle": 0, + "content": "where \\(g_{t}\\) captures temporal dynamics and \\(\\beta_{t}\\) represents SMPL model parameters." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.223, + 0.905, + 0.268 + ], + "angle": 0, + "content": "2) Temporal Feature Enhancement (TFE): Adapting from [45], we enhance features by combining appearance and shape," + }, + { + "type": "equation", + "bbox": [ + 0.606, + 0.281, + 0.905, + 0.298 + ], + "angle": 0, + "content": "\\[\nF_{\\text{enhanced}} = \\operatorname{TFE}\\left(F_{1:T}, \\beta_{1:T}\\right). \\tag{3}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.308, + 0.905, + 0.339 + ], + "angle": 0, + "content": "Identity-Aware Processing: We incorporate identity information through," + }, + { + "type": "equation", + "bbox": [ + 0.602, + 0.349, + 0.851, + 0.388 + ], + "angle": 0, + "content": "\\[\n\\mathcal{M}_{y_{i}} = \\frac{1}{N_{y_{i}}} \\sum_{V \\in \\mathcal{V}_{y_{i}}} \\operatorname{TAP}\\left(F_{\\text{enhanced}}\\right),\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.57, + 0.391, + 0.813, + 0.41 + ], + "angle": 0, + "content": "\\[\n\\mathcal{M}_{y_{i}}^{\\text{refined}} = \\operatorname{SSP}\\left(F_{\\text{enhanced}}, \\mathcal{M}_{y_{i}}\\right),\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.574, + 0.412, + 0.905, + 0.431 + ], + "angle": 0, + "content": "\\[\nR_{\\text{stream1}} = \\left[F_{\\text{enhanced}}; \\mathcal{M}_{y_{i}}^{\\text{refined}}\\right], \\tag{4}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.442, + 0.906, + 0.503 + ], + "angle": 0, + "content": "where \\(\\mathcal{M}_{y_i}\\) is the identity memory bank constructed through Temporal Average Pooling (TAP), and the Sequence-Specific Prompt (SSP) module refines this representation for the final output \\(R_{\\text{stream1}}\\)." + },
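To make the flow of Eqs. (2)-(4) concrete, a condensed sketch follows. The GRU recursion and the concatenation mirror the equations, but the 3DRegressor, TFE, and SSP modules are stubbed with single linear layers (their actual designs follow [34, 45]), and the identity memory \(\mathcal{M}_{y_i}\) is passed in as a precomputed vector rather than averaged over all tracklets of identity \(y_i\):

```python
# Toy sketch of Eqs. (2)-(4); module internals are illustrative stand-ins.
import torch
import torch.nn as nn

D, SMPL_DIM = 768, 10  # feature dim; beta_t: 10 SMPL shape parameters

gru = nn.GRU(input_size=D, hidden_size=D, batch_first=True)   # g_t recursion
regressor = nn.Linear(D, SMPL_DIM)                            # stand-in 3DRegressor
tfe = nn.Linear(D + SMPL_DIM, D)                              # stand-in TFE
ssp = nn.Linear(2 * D, D)                                     # stand-in SSP

def stream1(F: torch.Tensor, memory: torch.Tensor) -> torch.Tensor:
    """F: (T, D) frame features; memory: (D,) identity memory M_{y_i}."""
    g, _ = gru(F.unsqueeze(0))                  # Eq. (2): g_t = GRU(F_t, g_{t-1})
    beta = regressor(g)                         # Eq. (2): beta_t = 3DRegressor(g_t)
    F_enh = tfe(torch.cat([F.unsqueeze(0), beta], dim=-1))  # Eq. (3): TFE
    tap = F_enh.mean(dim=1)                     # TAP: temporal average pooling
    refined = ssp(torch.cat([tap, memory.unsqueeze(0)], dim=-1))  # SSP refinement
    return torch.cat([tap, refined], dim=-1)    # Eq. (4): [F_enhanced; M_refined]

out = stream1(torch.randn(8, D), torch.randn(D))
print(out.shape)  # torch.Size([1, 1536])
```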
+ { + "type": "title", + "bbox": [ + 0.513, + 0.512, + 0.878, + 0.528 + ], + "angle": 0, + "content": "4.2. Stream 2: Normalized Appearance Stream" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.534, + 0.906, + 0.624 + ], + "angle": 0, + "content": "The Adapted Temporal-Spatial (ATS) stream provides robust temporal-spatial representation, but may not fully capture fine-grained appearance details across viewpoints, especially in aerial footage. To address this limitation, we propose a Normalized Appearance (NA) stream that effectively aggregates appearance details from multiple viewpoints." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.625, + 0.906, + 0.76 + ], + "angle": 0, + "content": "The NA stream normalizes and combines appearance information across frames using UVTexture maps and visibility masks. Our process involves: (1) Extracting UVTexture maps and visibility masks per frame, (2) Normalizing UVTexture map brightness, (3) Aligning maps across frames, (4) Weighted aggregation using visibility masks, and (5) Generating the final normalized representation. The brightness normalization and weighted aggregation of UVTexture maps can be formulated as," + }, + { + "type": "equation", + "bbox": [ + 0.622, + 0.772, + 0.905, + 0.789 + ], + "angle": 0, + "content": "\\[\nT_{i}^{\\text{norm}} = \\gamma(H(N(T_{i}))), \\tag{5}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.593, + 0.792, + 0.905, + 0.832 + ], + "angle": 0, + "content": "\\[\nT_{\\text{aggregated}} = \\frac{\\sum_{i=1}^{N} V_{i} \\odot T_{i}^{\\text{norm}}}{\\sum_{i=1}^{N} V_{i}}, \\tag{6}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.84, + 0.906, + 0.902 + ], + "angle": 0, + "content": "where \\(T_{i}^{\\text{norm}}\\) is the normalized UVTexture map for frame \\(i\\), and \\(N(\\cdot), H(\\cdot)\\), and \\(\\gamma(\\cdot)\\) are normalization, histogram matching, and gamma correction functions, respectively. \\(T_{\\text{aggregated}}\\) is the final aggregated map. We leverage PhysPT [49]" + }, + { + "type": "page_number", + "bbox": [ + 0.484, + 0.945, + 0.516, + 0.957 + ], + "angle": 0, + "content": "1245" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.094, + 0.089, + 0.905, + 0.33 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.34, + 0.908, + 0.381 + ], + "angle": 0, + "content": "Figure 3. The three-stream AG-VPReID-Net architecture addresses aerial-ground ReID challenges: Temporal-Spatial stream for motion modeling and temporal features, Normalized Appearance for resolution/appearance variations, and Multi-Scale Attention for aerial-ground scale variations." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.409, + 0.484, + 0.47 + ], + "angle": 0, + "content": "for pose estimation and Texformer [41] to generate UV maps from PhysPT's output 3D meshes. The maps are improved through inter-frame consistency before feeding into the DGC Omni-scale Module [52]." + },
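Before turning to the third stream, a small sketch of the NA aggregation in Eqs. (5)-(6). The histogram-matching step \(H(\cdot)\) is simplified here to a gain toward a reference frame's mean brightness, and the fixed gamma value is an assumption; only the visibility-weighted average of Eq. (6) is reproduced faithfully:

```python
# Sketch of Eqs. (5)-(6): per-frame UV-texture normalization and
# visibility-weighted aggregation. N(.), H(.), gamma are simplified stand-ins.
import torch

def normalize_texture(T_i, T_ref, gamma: float = 0.9, eps: float = 1e-6):
    """T_i, T_ref: (3, H, W) UV-texture maps with values in [0, 1]."""
    T_n = T_i / (T_i.mean() + eps) * 0.5             # N(.): fix mean brightness
    T_h = T_n * (T_ref.mean() / (T_n.mean() + eps))  # crude stand-in for H(.)
    return T_h.clamp(0, 1) ** gamma                  # gamma correction

def aggregate(textures, masks, eps: float = 1e-6):
    """textures: (N, 3, H, W) normalized maps T_i^norm;
    masks: (N, 1, H, W) visibility masks V_i. Implements Eq. (6)."""
    num = (masks * textures).sum(dim=0)              # sum_i V_i ⊙ T_i^norm
    den = masks.sum(dim=0) + eps                     # sum_i V_i
    return num / den                                 # T_aggregated

T = torch.rand(4, 3, 64, 64)                         # 4 frames of UV textures
V = (torch.rand(4, 1, 64, 64) > 0.3).float()         # visibility masks
T_norm = torch.stack([normalize_texture(t, T[0]) for t in T])
print(aggregate(T_norm, V).shape)                    # torch.Size([3, 64, 64])
```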
+ { + "type": "title", + "bbox": [ + 0.091, + 0.482, + 0.435, + 0.497 + ], + "angle": 0, + "content": "4.3. Stream 3: Multi-Scale Attention Stream" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.506, + 0.484, + 0.672 + ], + "angle": 0, + "content": "While the ATS stream provides a robust temporal-spatial representation and the NA stream addresses viewpoint and occlusion challenges, aerial-ground person ReID still faces significant hurdles due to extreme scale variations between drone and ground-level footage. The first two streams effectively capture temporal dynamics, 3D shape information, and viewpoint-invariant appearance details, but they may not fully address the drastic scale differences inherent in aerial-ground scenarios. To complement the ATS stream and NA stream and address this limitation, we propose a Multi-Scale Attention (MSA) stream." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.673, + 0.484, + 0.809 + ], + "angle": 0, + "content": "In detail, this stream leverages the power of frozen large vision models combined with lightweight, adaptive processing: a frozen large vision model extracts multi-scale features for video-based person ReID. By combining a lightweight Transformer decoder with a local temporal module, this approach dynamically integrates spatial and temporal information, thereby enhancing our framework's ability to accurately capture essential person-specific details." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.811, + 0.484, + 0.901 + ], + "angle": 0, + "content": "Specifically, for each frame \\(\\mathcal{I}_t\\) within the sequence \\(\\mathcal{V}_{y_i}\\), the CLIP vision encoder [36] is employed to extract features independently. The process collects tokens from various layers at regular intervals to compile a detailed feature map that captures spatial correspondences. These frame feature maps are subsequently concatenated and assembled" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.409, + 0.907, + 0.56 + ], + "angle": 0, + "content": "into a spatiotemporal feature volume \\(\\mathbf{G}\\). Following the methods in [23, 44], we integrate temporal information into this volume before processing it through a Transformer decoder. This decoder globally aggregates features across multiple layers, employing a video-level classification token as a query, with feature volumes from different layers of the backbone serving as keys and values. A linear layer then maps the output of the decoder's final block to produce class predictions. The operational dynamics of the Transformer decoder are outlined as follows," + }, + { + "type": "equation", + "bbox": [ + 0.52, + 0.604, + 0.901, + 0.621 + ], + "angle": 0, + "content": "\\[\nY_{i} = \\operatorname{Temp}_{i}([\\mathbf{G}_{N-M+i,1}, \\mathbf{G}_{N-M+i,2}, \\dots, \\mathbf{G}_{N-M+i,T}]),\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.525, + 0.623, + 0.748, + 0.639 + ], + "angle": 0, + "content": "\\[\n\\tilde{q}_{i} = q_{i-1} + \\mathrm{MHA}_{i}\\left(q_{i-1}, Y_{i}, Y_{i}\\right),\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.525, + 0.642, + 0.668, + 0.658 + ], + "angle": 0, + "content": "\\[\nq_{i} = \\tilde{q}_{i} + \\operatorname{MLP}_{i}(\\tilde{q}_{i}),\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.52, + 0.661, + 0.905, + 0.677 + ], + "angle": 0, + "content": "\\[\nf_{G} = \\operatorname{FC}\\left(q_{M}\\right), \\tag{7}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.72, + 0.907, + 0.901 + ], + "angle": 0, + "content": "where \\(\\mathbf{G}_{n,t}\\) represents the features of frame \\(t\\) extracted from the \\(n\\)-th layer of the CLIP vision encoder. The feature volume \\(Y_{i}\\), which undergoes temporal modulation, is input into the \\(i\\)-th layer of the Transformer decoder. The query token \\(q_{i}\\) is incrementally refined, beginning with \\(q_{0}\\) as learnable initial parameters. The final output \\(f_{G}\\) corresponds to the final feature. The spatiotemporal decoder comprises \\(M\\) blocks. \\(N\\) denotes the number of encoder layers. Multi-head attention (MHA) involves query, key, and value, each of which plays a distinct role. The operator \\(\\mathrm{Temp}(\\cdot)\\) is utilized to model temporal dynamics, which produces feature tokens influenced by detailed temporal information." + },
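A skeletal PyTorch rendering of the decoder recursion in Eq. (7), with \(\mathrm{Temp}_i(\cdot)\) stubbed as the identity and purely illustrative dimensions; it shows how the single query token is refined block by block over the layer-wise feature volumes:

```python
# Sketch of Eq. (7): query refinement through M decoder blocks.
import torch
import torch.nn as nn

D, M, T, S = 768, 4, 8, 196          # dim, decoder blocks, frames, tokens/frame

mha = nn.ModuleList([nn.MultiheadAttention(D, 8, batch_first=True) for _ in range(M)])
mlp = nn.ModuleList([nn.Sequential(nn.Linear(D, D), nn.GELU(), nn.Linear(D, D))
                     for _ in range(M)])
fc = nn.Linear(D, D)
q0 = nn.Parameter(torch.zeros(1, 1, D))          # learnable initial query q_0

def decode(G_layers):
    """G_layers: list of M tensors (1, T*S, D): feature volumes from the last
    M encoder layers, already flattened over frames and spatial tokens."""
    q = q0
    for i in range(M):
        Y_i = G_layers[i]                        # Temp_i(.) stubbed as identity
        attn, _ = mha[i](q, Y_i, Y_i)            # MHA_i(q_{i-1}, Y_i, Y_i)
        q = q + attn                             # q~_i = q_{i-1} + MHA_i(...)
        q = q + mlp[i](q)                        # q_i = q~_i + MLP_i(q~_i)
    return fc(q.squeeze(1))                      # f_G = FC(q_M)

f_G = decode([torch.randn(1, T * S, D) for _ in range(M)])
print(f_G.shape)  # torch.Size([1, 768])
```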
+ { + "type": "page_number", + "bbox": [ + 0.484, + 0.945, + 0.516, + 0.957 + ], + "angle": 0, + "content": "1246" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.091, + 0.09, + 0.296, + 0.108 + ], + "angle": 0, + "content": "5. Experimental Results" + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.115, + 0.378, + 0.13 + ], + "angle": 0, + "content": "5.1. Datasets and Evaluation Metrics" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.137, + 0.484, + 0.288 + ], + "angle": 0, + "content": "We conducted evaluations of our method using AG-VPReID and four established video-based person ReID datasets: iLIDS-VID [38], MARS [51], LS-VID [18], and G2A-VReID [47]. For AG-VPReID, we used a balanced split of 3,013 identities with both ground and aerial views, dividing them equally for training and testing purposes [31, 32]. Details on the training and testing configurations are provided in Table 3. We evaluate performance using the Cumulative Matching Characteristic (CMC) at Rank-1 and the mean Average Precision (mAP)." + },
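For reference, a self-contained sketch of how these two metrics are typically computed for ReID; the exact evaluation protocol (e.g., cross-view filtering of gallery candidates) is simplified here and is an assumption of this sketch:

```python
# CMC Rank-1 and mAP over a query/gallery split, assuming L2-normalized
# features so that the dot product is cosine similarity.
import numpy as np

def rank1_and_map(q_feats, q_ids, g_feats, g_ids):
    """q_feats: (Q, D), g_feats: (G, D) L2-normalized; ids: int arrays."""
    sims = q_feats @ g_feats.T                      # cosine similarity
    order = np.argsort(-sims, axis=1)               # gallery sorted per query
    matches = g_ids[order] == q_ids[:, None]        # (Q, G) boolean hit matrix
    rank1 = matches[:, 0].mean()                    # CMC at Rank-1
    aps = []
    for m in matches:                               # average precision per query
        hits = np.where(m)[0]
        if hits.size == 0:
            continue
        precision = np.arange(1, hits.size + 1) / (hits + 1)
        aps.append(precision.mean())
    return float(rank1), float(np.mean(aps))

q = np.random.randn(5, 128); q /= np.linalg.norm(q, axis=1, keepdims=True)
g = np.random.randn(20, 128); g /= np.linalg.norm(g, axis=1, keepdims=True)
print(rank1_and_map(q, np.arange(5), g, np.arange(20) % 5))
```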
+ { + "type": "table", + "bbox": [ + 0.111, + 0.3, + 0.462, + 0.453 + ], + "angle": 0, + "content": "| Case | Subset | #IDs | #Tracklets | #Images (M) |
| Training | All | 1,555 | 13,300 | 3.85 |
| Testing (A2G) | All | 1,456 | 13,566 | 3.94 |
| Testing (A2G) | 15m | 506 | 4,907 | 1.50 |
| Testing (A2G) | 30m | 377 | 2,885 | 0.89 |
| Testing (A2G) | 80m | 356 | 2,592 | 0.69 |
| Testing (A2G) | 120m | 308 | 3,182 | 0.86 |
| Testing (G2A) | All | 5,075* | 19,021 | 5.79 |
| Testing (G2A) | 15m | 1,403 | 6,362 | 2.14 |
| Testing (G2A) | 30m | 1,406 | 4,468 | 1.41 |
| Testing (G2A) | 80m | 1,162 | 3,866 | 1.13 |
| Testing (G2A) | 120m | 1,195 | 4,325 | 1.11 |
" + }, + { + "type": "table_caption", + "bbox": [ + 0.09, + 0.457, + 0.483, + 0.484 + ], + "angle": 0, + "content": "Table 3. Statistics of AG-VPReID dataset. A2G: aerial-to-ground, G2A: ground-to-aerial. *3,619 additional IDs as distractors." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.508, + 0.309, + 0.525 + ], + "angle": 0, + "content": "5.2. Implementation Details" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.53, + 0.484, + 0.681 + ], + "angle": 0, + "content": "Our pipeline leverages UV maps generated by Texformer [41] using 3D human meshes from PhysPT [49] with refined pose estimation. The UV maps are processed through normalization, histogram matching, and gamma correction before weighted blending with visibility masks. The architecture consists of three streams: an Adapted Temporal-Spatial Stream (CLIP ViT-B/16), a Normalized Appearance Stream for 3D coordinates and UV textures, and a Multi-Scale Attention Stream (CLIP ViT-L/14). More implementation details can be found in the supplementary." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.69, + 0.46, + 0.707 + ], + "angle": 0, + "content": "5.3. Comparison with State-of-the-Art Methods" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.712, + 0.483, + 0.758 + ], + "angle": 0, + "content": "We evaluate our proposed method AG-VPReID-Net against several state-of-the-art approaches across multiple video-based person ReID datasets. Tab. 4 summarizes the results." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.762, + 0.483, + 0.852 + ], + "angle": 0, + "content": "Ground-to-Ground Datasets. Our method achieves superior performance on MARS (91.5% mAP, 93.2% Rank-1), outperforming CLIP-ReID by 3.4% mAP. On LS-VID (87.3% mAP, 93.2% Rank-1), we surpass LSTRL by 4.9% mAP. For iLIDS-VID, we reach 96.3% Rank-1, which is 3.0% higher than MFA." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.856, + 0.484, + 0.901 + ], + "angle": 0, + "content": "Cross-Platform Datasets. On G2A-VReID, we achieve \\(81.3\\%\\) mAP and \\(73.1\\%\\) Rank-1, surpassing MGH by \\(4.6\\%\\) mAP. Note that the G2A-VReID dataset only provides a" + }, + { + "type": "image", + "bbox": [ + 0.544, + 0.09, + 0.875, + 0.233 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.513, + 0.245, + 0.905, + 0.288 + ], + "angle": 0, + "content": "Figure 4. Baseline vs. our method on AG-VPReID dataset. Green/red: correct/incorrect labels. First tracklet image shown. Ranks show improvements in bold." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.315, + 0.905, + 0.39 + ], + "angle": 0, + "content": "ground-to-aerial testing set. For AG-VPReID, we demonstrate strong results in both Ground-to-Aerial (58.0% mAP, 75.6% Rank-1, exceeding CLIP-ReID by 8.8% Rank-1) and aerial-to-ground scenarios (64.0% mAP, 71.9% Rank-1, surpassing CLIP-ReID by 1.7% mAP and 0.3% Rank-1)." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.399, + 0.667, + 0.416 + ], + "angle": 0, + "content": "5.4. Ablation Study" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.421, + 0.905, + 0.497 + ], + "angle": 0, + "content": "We conduct an ablation study on AG-VPReID to evaluate each stream. St-1 is our temporal modeling stream, St-2 is the appearance normalization stream, and St-3 is the multi-scale feature stream. Their combinations (St-12/13/23/123) merge multiple streams." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.501, + 0.906, + 0.606 + ], + "angle": 0, + "content": "Stream Contributions. Tab.
5 shows that St-1 achieves the strongest individual performance (71.52% A2G, 74.80% G2A Rank-1). St-2 and St-3 show moderate results (58.40% and 61.65% A2G Rank-1). Combined streams demonstrate complementary strengths, with St-123 achieving the best results (71.91% A2G, 75.57% G2A Rank-1) by integrating the three streams." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.61, + 0.906, + 0.762 + ], + "angle": 0, + "content": "Impact of Altitude. Table 6 shows performance decreasing with altitude, most significantly between \\(30\\mathrm{m} - 80\\mathrm{m}\\). A2G Rank-1 drops \\(\\sim 11\\%\\) across streams. At \\(120\\mathrm{m}\\), St-1 demonstrates robustness (52.47% vs 38.12%/35.13%), achieving +17.34% improvement through temporal modeling. St-2's physics-informed UV mapping provides +7.57% improvement (42.70% vs. 35.13%), while St-3's multi-scale attention yields +17.72% improvement (52.85% vs. 35.13%). St-123 maintains best performance across all altitudes by combining stream strengths." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.765, + 0.906, + 0.901 + ], + "angle": 0, + "content": "Clothing Changes vs. Camera Angles. Analysis shows altitude increases \\((15\\mathrm{m} \\rightarrow 120\\mathrm{m})\\) reduce Rank-1 by \\(27.66\\%\\), significantly more than clothing changes \\((7.85\\%\\) in ground-to-ground). Without clothing changes, aerial-ground matching \\((71.91\\%\\) Rank-1) still underperforms ground-to-ground \\((91.52\\%)\\) due to viewpoint differences. When combining aerial views with clothing changes \\((65.83\\%\\) Rank-1), these factors create synergistic challenges where viewpoint differences amplify clothing ambi" + }, + { + "type": "page_number", + "bbox": [ + 0.484, + 0.945, + 0.516, + 0.957 + ], + "angle": 0, + "content": "1247" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.139, + 0.089, + 0.86, + 0.344 + ], + "angle": 0, + "content": "
| Method | MARS mAP | MARS Rank-1 | LS-VID mAP | LS-VID Rank-1 | iLIDS-VID Rank-1 | iLIDS-VID Rank-5 | G2A-VReID (G2A) mAP | G2A-VReID (G2A) Rank-1 | AG-VPReID (A2G) mAP | AG-VPReID (A2G) Rank-1 | AG-VPReID (G2A) mAP | AG-VPReID (G2A) Rank-1 |
| STMP [29] | 72.7 | 84.4 | 39.1 | 56.8 | 84.3 | 96.8 | - | - | 50.7 | 60.3 | 45.2 | 55.8 |
| M3D [20] | 74.1 | 84.4 | 40.1 | 57.7 | 74.0 | 94.3 | - | - | 52.4 | 62.6 | 47.9 | 57.3 |
| GLTR [19] | 78.5 | 87.0 | 44.3 | 63.1 | 86.0 | 98.0 | - | - | 55.6 | 65.8 | 50.1 | 60.5 |
| TCLNet [13] | 85.1 | 89.8 | 70.3 | 81.5 | 86.6 | - | 65.4 | 54.7 | 57.2 | 67.9 | 52.7 | 62.4 |
| MGH [43] | 85.8 | 90.0 | 61.8 | 79.6 | 85.6 | 97.1 | 76.7 | 69.9 | 60.3 | 70.8 | 55.5 | 65.2 |
| GRL [25] | 84.8 | 91.0 | - | - | 90.4 | 98.3 | - | - | 58.7 | 68.4 | 53.9 | 63.6 |
| BiCnet-TKS [14] | 86.0 | 90.2 | 75.1 | 84.6 | - | - | 63.4 | 51.7 | 59.8 | 69.2 | 54.3 | 64.7 |
| CTL [24] | 86.7 | 91.4 | - | - | 89.7 | 97.0 | - | - | 56.4 | 66.9 | 51.8 | 61.3 |
| STMN [6] | 84.5 | 90.5 | 69.2 | 82.1 | - | - | 66.7 | 56.1 | 61.6 | 71.5 | 56.9 | 66.2 |
| PSTA [39] | 85.8 | 91.5 | - | - | 91.5 | 98.1 | - | - | 60.5 | 70.2 | 55.8 | 65.7 |
| DIL [11] | 87.0 | 90.8 | - | - | 92.0 | 98.0 | - | - | 61.2 | 70.9 | 56.3 | 66.1 |
| STT [48] | 86.3 | 88.7 | 78.0 | 87.5 | 87.5 | 95.0 | - | - | 61.0 | 70.7 | 56.1 | 65.9 |
| TMT [28] | 85.8 | 91.2 | - | - | 91.3 | 98.6 | - | - | 60.8 | 70.5 | 55.9 | 65.8 |
| CAVIT [40] | 87.2 | 90.8 | 79.2 | 89.2 | 93.3 | 98.0 | - | - | 61.4 | 71.1 | 56.5 | 66.3 |
| SINet [1] | 86.2 | 91.0 | 79.6 | 87.4 | 92.5 | - | - | - | 61.3 | 71.0 | 56.4 | 66.2 |
| MFA [9] | 85.0 | 90.4 | 78.9 | 88.2 | 93.3 | 98.7 | - | - | 61.1 | 70.8 | 56.2 | 66.0 |
| DCCT [27] | 87.5 | 92.3 | - | - | 91.7 | 98.6 | - | - | 61.5 | 71.2 | 56.6 | 66.4 |
| LSTRL [26] | 86.8 | 91.6 | 82.4 | 89.8 | 92.2 | 98.6 | - | - | 61.7 | 71.3 | 56.7 | 66.5 |
| CLIP-ReID [21] | 88.1 | 91.7 | 80.6 | 88.8 | - | - | - | - | 62.3 | 71.6 | 57.2 | 66.8 |
| AG-VPReID-Net | 91.5 | 93.2 | 87.3 | 93.2 | 96.3 | 99.5 | 81.3 | 73.1 | 64.0 | 71.9 | 58.0 | 75.6 |
" + }, + { + "type": "table_caption", + "bbox": [ + 0.276, + 0.346, + 0.72, + 0.361 + ], + "angle": 0, + "content": "Table 4. Performance comparison across datasets. Bold shows best results." + }, + { + "type": "table", + "bbox": [ + 0.108, + 0.385, + 0.465, + 0.496 + ], + "angle": 0, + "content": "
| Method | A2G Rank-1 | A2G Rank-5 | A2G Rank-10 | G2A Rank-1 | G2A Rank-5 | G2A Rank-10 |
| St-1 | 71.52 | 80.42 | 83.88 | 74.80 | 84.27 | 86.90 |
| St-2 | 58.40 | 70.20 | 75.80 | 61.50 | 73.60 | 78.20 |
| St-3 | 61.65 | 74.53 | 79.15 | 67.38 | 78.82 | 82.3 |
| St-12 | 69.50 | 78.80 | 82.50 | 72.80 | 82.60 | 85.40 |
| St-13 | 71.80 | 80.60 | 83.91 | 75.40 | 84.48 | 86.91 |
| St-23 | 65.70 | 76.55 | 80.90 | 70.10 | 80.45 | 83.91 |
| St-123 | 71.91 | 80.67 | 83.92 | 75.57 | 84.50 | 86.92 |
" + }, + { + "type": "table_caption", + "bbox": [ + 0.1, + 0.499, + 0.473, + 0.513 + ], + "angle": 0, + "content": "Table 5. Ranking accuracy (%) improvement on AG-VPReID dataset." + }, + { + "type": "table", + "bbox": [ + 0.093, + 0.528, + 0.482, + 0.639 + ], + "angle": 0, + "content": "
| Method | A2G 15m | A2G 30m | A2G 80m | A2G 120m | G2A 15m | G2A 30m | G2A 80m | G2A 120m |
| St-1 | 80.28 | 78.76 | 67.24 | 52.47 | 83.25 | 83.03 | 67.07 | 62.31 |
| St-2 | 69.75 | 68.13 | 52.62 | 38.12 | 72.87 | 71.41 | 52.22 | 45.43 |
| St-3 | 74.25 | 74.01 | 52.53 | 35.13 | 77.18 | 78.91 | 56.59 | 52.56 |
| St-12 | 78.32 | 76.82 | 65.24 | 50.53 | 81.34 | 81.27 | 65.17 | 60.43 |
| St-13 | 80.55 | 78.95 | 67.50 | 52.85 | 83.70 | 83.55 | 67.60 | 63.10 |
| St-23 | 76.45 | 74.90 | 57.40 | 42.70 | 79.55 | 79.40 | 57.40 | 52.80 |
| St-123 | 80.66 | 79.00 | 67.63 | 53.00 | 83.92 | 83.66 | 67.82 | 63.32 |
" + }, + { + "type": "table_footnote", + "bbox": [ + 0.091, + 0.643, + 0.482, + 0.657 + ], + "angle": 0, + "content": "Table 6. Rank-1 accuracy (%) on AG-VPReID at various altitudes." + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.686, + 0.218, + 0.7 + ], + "angle": 0, + "content": "guity. See Table 7." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.712, + 0.228, + 0.727 + ], + "angle": 0, + "content": "5.5. Visualization" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.735, + 0.484, + 0.902 + ], + "angle": 0, + "content": "We further visualize the ReID results with Top-5 ranking to understand how our model improves compared to the baseline [21] for aerial-to-ground person ReID in Fig. 4. Unlike the baseline which may be biased by image resolution and clothing textures, our approach pays attention to more robust features like motion patterns and body shape characteristics, which explains its successful identification of similar walking postures and body proportions despite the significant viewpoint differences between the aerial query and ground-view gallery pair. Additional examples are in Figs. 7 and 8 of the supplementary material." + }, + { + "type": "table", + "bbox": [ + 0.517, + 0.384, + 0.906, + 0.505 + ], + "angle": 0, + "content": "
| Scenario | Rank-1 | mAP | Key Observation |
| Camera Angle Impact | | | |
| 15m altitude (AG) | 80.66 | 77.23 | Baseline performance |
| 30m altitude (AG) | 79.00 | 75.81 | Minimal degradation |
| 80m altitude (AG) | 67.63 | 63.42 | -13.03% Rank-1 vs. 15m |
| 120m altitude (AG) | 53.00 | 48.75 | -27.66% Rank-1 vs. 15m |
| Clothing Change Impact | | | |
| GG-SameClothes | 91.52 | 88.74 | Upper-bound performance |
| GG-DiffClothes | 83.67 | 79.92 | -7.85% Rank-1 (CC-only impact) |
| AG-SameClothes | 71.91 | 64.00 | -19.61% Rank-1 (AG-only impact) |
| AG-DiffClothes | 65.83 | 57.52 | -6.08% Rank-1 (CC impact in AG) |
" + }, + { + "type": "table_caption", + "bbox": [ + 0.529, + 0.508, + 0.89, + 0.523 + ], + "angle": 0, + "content": "Table 7. Impact of clothing changes (CC) vs. camera angles." + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.548, + 0.634, + 0.564 + ], + "angle": 0, + "content": "6. Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.574, + 0.907, + 0.771 + ], + "angle": 0, + "content": "We introduce AG-VPReID, a comprehensive dataset for video-based aerial-ground person ReID, addressing the critical need for a large and challenging aerial-ground dataset. We also propose AG-VPReID-Net, a purpose-built three-stream person ReID framework that combines temporal-spatial processing, physics-informed normalized appearance representation, and multi-scale attention mechanisms. This approach achieves state-of-the-art performance on both the AG-VPReID dataset and existing video-based ReID benchmarks. Notably, the relatively lower performance across all approaches on AG-VPReID highlights its demanding nature and establishes it as a robust benchmark for advancing future research in the field." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.785, + 0.694, + 0.803 + ], + "angle": 0, + "content": "7. Acknowledgement" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.81, + 0.907, + 0.903 + ], + "angle": 0, + "content": "This work was supported by the Australian Research Council (ARC) Discovery Project (DP200101942) and a QUT Postgraduate Research Award. We gratefully acknowledge the Research Engineering Facility (REF) team at QUT for providing expertise and the research infrastructure essential for data collection and processing within this project." + }, + { + "type": "page_number", + "bbox": [ + 0.484, + 0.945, + 0.516, + 0.957 + ], + "angle": 0, + "content": "1248" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.093, + 0.09, + 0.188, + 0.106 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.115, + 0.484, + 0.171 + ], + "angle": 0, + "content": "[1] Shutao Bai, Bingpeng Ma, Hong Chang, Rui Huang, and Xilin Chen. Salient-to-broad transition for video person re-identification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 7339-7348, 2022. 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.172, + 0.484, + 0.267 + ], + "angle": 0, + "content": "[2] David Cornett, Joel Brogan, Nell Barber, Deniz Aykac, Seth Baird, Nicholas Burchfield, Carl Dukes, Andrew Duncan, Regina Ferrell, Jim Goddard, et al. Expanding accurate person recognition to new altitudes and ranges: The briar dataset. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 593-602, 2023. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.269, + 0.483, + 0.352 + ], + "angle": 0, + "content": "[3] Daniel Davila, Dawei Du, Bryon Lewis, Christopher Funk, Joseph Van Pelt, Roderic Collins, Kellie Corona, Matt Brown, Scott McCloskey, Anthony Hoogs, and Brian Clipp. Mevid: Multi-view extended videos with identities for video person re-identification. In IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023. 2, 3, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.354, + 0.483, + 0.409 + ], + "angle": 0, + "content": "[4] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In IEEE conference on computer vision and pattern recognition (CVPR), pages 248-255. IEEE, 2009.
1" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.41, + 0.482, + 0.438 + ], + "angle": 0, + "content": "[5] Michael Dreuw and ORB-HD. deface: Video anonymization by face detection, 2023. Python package version 1.5.0. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.439, + 0.483, + 0.494 + ], + "angle": 0, + "content": "[6] Chanho Eom, Geon Lee, Junghyup Lee, and Bumsub Ham. Video-based person re-identification with spatial and temporal memory networks. In IEEE International Conference on Computer Vision (ICCV), pages 12036–12045, 2021. 2, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.496, + 0.483, + 0.55 + ], + "angle": 0, + "content": "[7] Yang Fu, Xiaoyang Wang, Yunchao Wei, and Thomas Huang. Sta: Spatial-temporal attention for large-scale video-based person re-identification. In AAAI Conference on Artificial Intelligence, pages 8287–8294, 2019. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.552, + 0.483, + 0.62 + ], + "angle": 0, + "content": "[8] Xinqian Gu, Hong Chang, Bingpeng Ma, Shutao Bai, Shiguang Shan, and Xilin Chen. Clothes-changing person re-identification with rgb modality only. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022. 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.622, + 0.483, + 0.676 + ], + "angle": 0, + "content": "[9] Xinqian Gu, Hong Chang, Bingpeng Ma, and Shiguang Shan. Motion feature aggregation for video-based person re-identification. IEEE Transactions on Image Processing, 31:3908-3919, 2022. 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.678, + 0.483, + 0.746 + ], + "angle": 0, + "content": "[10] Ke Han, Yan Huang, Shaogang Gong, Liang Wang, and Tieniu Tan. 3d shape temporal aggregation for video-based clothing-change person re-identification. In Asian Conference on Computer Vision (ACCV), pages 2371–2387, 2022. 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.748, + 0.483, + 0.816 + ], + "angle": 0, + "content": "[11] Tianyu He, Xin Jin, Xu Shen, Jianqiang Huang, Zhibo Chen, and Xian-Sheng Hua. Dense interaction learning for video-based person re-identification. In IEEE International Conference on Computer Vision (ICCV), pages 1490–1501, 2021. 3, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.818, + 0.484, + 0.901 + ], + "angle": 0, + "content": "[12] Weizhen He, Yiheng Deng, Shixiang Tang, Qihao Chen, Qingsong Xie, Yizhou Wang, Lei Bai, Feng Zhu, Rui Zhao, Wanli Ouyang, et al. Instruct-reid: A multi-purpose person re-identification task with instructions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 17521–17531, 2024. 3" + }, + { + "type": "list", + "bbox": [ + 0.094, + 0.115, + 0.484, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.093, + 0.905, + 0.148 + ], + "angle": 0, + "content": "[13] Ruibing Hou, Hong Chang, Bingpeng Ma, Shiguang Shan, and Xilin Chen. Temporal complementary learning for video person re-identification. In European Conference on Computer Vision (ECCV), pages 388–405, 2020. 2, 3, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.151, + 0.905, + 0.219 + ], + "angle": 0, + "content": "[14] Ruibing Hou, Hong Chang, Bingpeng Ma, Rui Huang, and Shiguang Shan. Bicnet-tks: Learning efficient spatial-temporal representation for video person re-identification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2014–2023, 2021. 
2, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.221, + 0.905, + 0.29 + ], + "angle": 0, + "content": "[15] Yan Jiang, Xu Cheng, Hao Yu, Xingyu Liu, Haoyu Chen, and Guoying Zhao. Domain shifting: A generalized solution for heterogeneous cross-modality person re-identification. In European Conference on Computer Vision, pages 289–306. Springer, 2025. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.292, + 0.905, + 0.332 + ], + "angle": 0, + "content": "[16] Glenn Jocher, Ayush, and Jing Qiu. Ultralytics YOLO. https://github.com/ultralytics/ultralytics, 2024. Accessed: 2024-03-22. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.334, + 0.905, + 0.417 + ], + "angle": 0, + "content": "[17] SV Aruna Kumar, Ehsan Yaghoubi, Abhijit Das, BS Harish, and Hugo Proença. The p-destre: A fully annotated dataset for pedestrian detection, tracking, and short/long-term re-identification from aerial devices. IEEE Transactions on Information Forensics and Security, 16:1696-1708, 2020. 2, 3, 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.42, + 0.905, + 0.475 + ], + "angle": 0, + "content": "[18] J. Li, J. Wang, Q. Tian, W. Gao, and S. Zhang. Global-local temporal representations for video person re-identification. In IEEE International Conference on Computer Vision (ICCV), pages 3958–3967, 2019. 2, 4, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.477, + 0.905, + 0.532 + ], + "angle": 0, + "content": "[19] Jianing Li, Jingdong Wang, Qi Tian, Wen Gao, and Shiliang Zhang. Global-local temporal representations for video person re-identification. In IEEE International Conference on Computer Vision (ICCV), pages 3958–3967, 2019. 2, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.534, + 0.905, + 0.588 + ], + "angle": 0, + "content": "[20] Jianing Li, Shiliang Zhang, and Tiejun Huang. Multiscale 3d convolution network for video based person re-identification. In AAAI Conference on Artificial Intelligence, pages 8618–8625, 2019. 2, 3, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.591, + 0.905, + 0.645 + ], + "angle": 0, + "content": "[21] Siyuan Li, Li Sun, and Qingli Li. Clip-reid: exploiting vision-language model for image re-identification without concrete text labels. In AAAI Conference on Artificial Intelligence, pages 1405–1413, 2023. 8, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.648, + 0.905, + 0.688 + ], + "angle": 0, + "content": "[22] Yutian Lin, Liang Zheng, Zhedong Zheng, Yu Wu, and Yi Yang. Improving person re-identification by attribute and identity learning. ArXiv, abs/1703.07220, 2019. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.691, + 0.905, + 0.759 + ], + "angle": 0, + "content": "[23] Ziyi Lin, Shijie Geng, Renrui Zhang, Peng Gao, Gerard De Melo, Xiaogang Wang, Jifeng Dai, Yu Qiao, and Hongsheng Li. Frozen clip models are efficient video learners. In European Conference on Computer Vision (ECCV), pages 388-404. Springer, 2022. 6, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.761, + 0.905, + 0.83 + ], + "angle": 0, + "content": "[24] Jiawei Liu, Zheng-Jun Zha, Wei Wu, Kecheng Zheng, and Qibin Sun. Spatial-temporal correlation and topology learning for person re-identification in videos. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4370–4379, 2021. 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.832, + 0.905, + 0.901 + ], + "angle": 0, + "content": "[25] Xuehu Liu, Pingping Zhang, Chenyang Yu, Huchuan Lu, and Xiaoyun Yang.
Watching you: Global-guided reciprocal learning for video-based person re-identification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 13334–13343, 2021. 1, 3, 8" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.093, + 0.905, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.484, + 0.945, + 0.516, + 0.956 + ], + "angle": 0, + "content": "1249" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.093, + 0.482, + 0.135 + ], + "angle": 0, + "content": "[26] Xuehu Liu, Pingping Zhang, and Huchuan Lu. Video-based person re-identification with long short-term representation learning. arXiv preprint arXiv:2308.03703, 2023. 2, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.091, + 0.136, + 0.483, + 0.205 + ], + "angle": 0, + "content": "[27] Xuehu Liu, Chenyang Yu, Pingping Zhang, and Huchuan Lu. Deeply coupled convolution-transformer with spatial-temporal complementary learning for video-based person re-identification. IEEE Transactions on Neural Networks and Learning Systems, 35(10):13753-13763, 2024. 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.207, + 0.483, + 0.275 + ], + "angle": 0, + "content": "[28] Xuehu Liu, Pingping Zhang, Chenyang Yu, Xuesheng Qian, Xiaoyun Yang, and Huchuan Lu. A video is worth three views: Trigeminal transformers for video-based person re-identification. IEEE Transactions on Intelligent Transportation Systems, 25(9):12818-12828, 2024. 3, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.278, + 0.482, + 0.332 + ], + "angle": 0, + "content": "[29] Yiheng Liu, Zhenxun Yuan, Wengang Zhou, and Houqiang Li. Spatial and temporal mutual promotion for video-based person re-identification. In AAAI Conference on Artificial Intelligence, pages 8786–8793, 2019. 2, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.334, + 0.483, + 0.403 + ], + "angle": 0, + "content": "[30] Niall McLaughlin, Jesus Martinez del Rincon, and Paul Miller. Recurrent convolutional network for video-based person re-identification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1325–1334, 2016. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.406, + 0.483, + 0.46 + ], + "angle": 0, + "content": "[31] Huy Nguyen, Kien Nguyen, Sridha Sridharan, and Clinton Fookes. Aerial-ground person re-id. In IEEE International Conference on Multimedia and Expo (ICME), pages 2585-2590, 2023. 1, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.462, + 0.482, + 0.517 + ], + "angle": 0, + "content": "[32] Huy Nguyen, Kien Nguyen, Sridha Sridharan, and Clinton Fookes. Ag-reid.v2: Bridging aerial and ground views for person re-identification. IEEE Transactions on Information Forensics and Security, 19:2896-2908, 2024. 1, 4, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.519, + 0.482, + 0.602 + ], + "angle": 0, + "content": "[33] Kien Nguyen, Clinton Fookes, Sridha Sridharan, Feng Liu, Xiaoming Liu, Arun Ross, Dana Michalski, Huy Nguyen, Debayan Deb, Mahak Kothari, et al. Ag-reid 2023: Aerial-ground person re-identification challenge results. In 2023 IEEE International Joint Conference on Biometrics (IJCB), pages 1–10. IEEE, 2023. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.604, + 0.482, + 0.671 + ], + "angle": 0, + "content": "[34] Vuong D Nguyen, Pranav Mantini, and Shishir K Shah. Temporal 3d shape modeling for video-based cloth-changing person re-identification.
In IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pages 173–182, 2024. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.675, + 0.482, + 0.743 + ], + "angle": 0, + "content": "[35] Honghu Pan, Qiao Liu, Yongyong Chen, Yunqi He, Yuan Zheng, Feng Zheng, and Zhenyu He. Pose-aided video-based person re-identification via recurrent graph convolutional network. IEEE Transactions on Circuits and Systems for Video Technology, 33(12):7183-7196, 2023. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.746, + 0.482, + 0.828 + ], + "angle": 0, + "content": "[36] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning (ICML), pages 8748-8763, 2021. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.831, + 0.482, + 0.884 + ], + "angle": 0, + "content": "[37] Kien Nguyen Thanh, Clinton Fookes, Sridha Sridharan, Yingli Tian, Feng Liu, Xiaoming Liu, and Arun Ross. The state of aerial surveillance: A survey. CoRR, abs/2201.03080, 2022. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.887, + 0.482, + 0.901 + ], + "angle": 0, + "content": "[38] Xiaogang Wang and Rui Zhao. Person re-identification:" + }, + { + "type": "list", + "bbox": [ + 0.091, + 0.093, + 0.483, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.545, + 0.093, + 0.905, + 0.121 + ], + "angle": 0, + "content": "System design and evaluation overview. In Person Re-Identification, pages 351-370. Springer, 2014. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.123, + 0.905, + 0.19 + ], + "angle": 0, + "content": "[39] Yingquan Wang, Pingping Zhang, Shang Gao, Xia Geng, Hu Lu, and Dong Wang. Pyramid spatial-temporal aggregation for video-based person re-identification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 12026–12035, 2021. 3, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.193, + 0.905, + 0.262 + ], + "angle": 0, + "content": "[40] Jinlin Wu, Lingxiao He, Wu Liu, Yang Yang, Zhen Lei, Tao Mei, and Stan Z Li. Cavit: Contextual alignment vision transformer for video object re-identification. In European Conference on Computer Vision (ECCV), pages 549–566. Springer, 2022. 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.264, + 0.905, + 0.319 + ], + "angle": 0, + "content": "[41] Xiangyu Xu and Chen Change Loy. 3d human texture estimation from a single image with transformers. In IEEE International Conference on Computer Vision (ICCV), pages 13849-13858, 2021. 6, 7, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.321, + 0.905, + 0.39 + ], + "angle": 0, + "content": "[42] Xiangyu Xu, Hao Chen, Francesc Moreno-Noguer, László A Jeni, and Fernando De la Torre. 3d human shape and pose from a single low-resolution image with self-supervised learning. In European Conference on Computer Vision (ECCV), pages 284-300. Springer, 2020. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.392, + 0.905, + 0.459 + ], + "angle": 0, + "content": "[43] Yichao Yan, Jie Qin, Jiaxin Chen, Li Liu, Fan Zhu, Ying Tai, and Ling Shao. Learning multi-granular hypergraphs for video-based person re-identification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2899–2908, 2020. 
2, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.462, + 0.905, + 0.531 + ], + "angle": 0, + "content": "[44] Dingqiang Ye, Chao Fan, Jingzhe Ma, Xiaoming Liu, and Shiqi Yu. Biggait: Learning gait representation you want by large vision models. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 200-210, 2024. 6, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.533, + 0.905, + 0.588 + ], + "angle": 0, + "content": "[45] Chenyang Yu, Xuehu Liu, Yingquan Wang, Pingping Zhang, and Huchuan Lu. Tf-clip: Learning text-free clip for video-based person re-identification. In AAAI Conference on Artificial Intelligence, pages 6764–6772, 2024. 2, 3, 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.591, + 0.905, + 0.659 + ], + "angle": 0, + "content": "[46] Quan Zhang, Lei Wang, Vishal M. Patel, Xiaohua Xie, and Jianhaung Lai. View-decoupled transformer for person re-identification under aerial-ground camera network. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 22000–22009, 2024. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.661, + 0.905, + 0.729 + ], + "angle": 0, + "content": "[47] Shizhou Zhang, Wenlong Luo, De Cheng, Qingchun Yang, Lingyan Ran, Yinghui Xing, and Yanning Zhang. Cross-platform video person reid: A new benchmark dataset and adaptation approach. In European Conference on Computer Vision (ECCV), 2024. 1, 2, 3, 4, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.732, + 0.905, + 0.786 + ], + "angle": 0, + "content": "[48] Tianyu Zhang, Longhui Wei, Lingxi Xie, Zijie Zhuang, Yongfei Zhang, Bo Li, and Qi Tian. Spatiotemporal transformer for video-based person re-identification. arXiv:2103.16469, 2021. 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.789, + 0.905, + 0.857 + ], + "angle": 0, + "content": "[49] Yufei Zhang, Jeffrey O Kephart, Zijun Cui, and Qiang Ji. Physpt: Physics-aware pretrained transformer for estimating human dynamics from monocular videos. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 2305-2317, 2024. 5, 7, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.86, + 0.905, + 0.901 + ], + "angle": 0, + "content": "[50] Zhizheng Zhang, Cuiling Lan, Wenjun Zeng, and Zhibo Chen. Multi-granularity reference-aided attentive feature aggregation for video-based person re-identification. In IEEE" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.093, + 0.905, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.484, + 0.945, + 0.516, + 0.956 + ], + "angle": 0, + "content": "1250" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.125, + 0.092, + 0.482, + 0.12 + ], + "angle": 0, + "content": "Conference on Computer Vision and Pattern Recognition (CVPR), pages 10407-10416, 2020. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.122, + 0.483, + 0.177 + ], + "angle": 0, + "content": "[51] L. Zheng, Z. Bie, Y. Sun, J. Wang, C. Su, S. Wang, and Q. Tian. Mars: A video benchmark for large-scale person re-identification. In European Conference on Computer Vision (ECCV, pages 868–884, 2016. 1, 2, 3, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.179, + 0.482, + 0.233 + ], + "angle": 0, + "content": "[52] Zhedong Zheng, Xiaohan Wang, Nenggan Zheng, and Yi Yang. Parameter-efficient person re-identification in the 3d space. IEEE Transactions on Neural Networks and Learning Systems, 35(6):7534-7547, 2022. 
6" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.235, + 0.482, + 0.303 + ], + "angle": 0, + "content": "[53] Haidong Zhu, Pranav Budhwant, Zhaoheng Zheng, and Ram Nevatia. Seas: Shape-aligned supervision for person re-identification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 164–174, 2024. 3" + }, + { + "type": "list", + "bbox": [ + 0.093, + 0.092, + 0.483, + 0.303 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.485, + 0.946, + 0.515, + 0.957 + ], + "angle": 0, + "content": "1251" + } + ] +] \ No newline at end of file diff --git a/2025/AG-VPReID_ A Challenging Large-Scale Benchmark for Aerial-Ground Video-based Person Re-Identification/4cdeaea8-7584-41e0-a7e3-68147957d6c6_origin.pdf b/2025/AG-VPReID_ A Challenging Large-Scale Benchmark for Aerial-Ground Video-based Person Re-Identification/4cdeaea8-7584-41e0-a7e3-68147957d6c6_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..9909a731e24797b46c65b32c4effbf5bc77c1e85 --- /dev/null +++ b/2025/AG-VPReID_ A Challenging Large-Scale Benchmark for Aerial-Ground Video-based Person Re-Identification/4cdeaea8-7584-41e0-a7e3-68147957d6c6_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6c3fe4f20e2db0948a729d8db37d6397ba389bddf1a2b2c60048abfcb97f76fd +size 9952074 diff --git a/2025/AG-VPReID_ A Challenging Large-Scale Benchmark for Aerial-Ground Video-based Person Re-Identification/full.md b/2025/AG-VPReID_ A Challenging Large-Scale Benchmark for Aerial-Ground Video-based Person Re-Identification/full.md new file mode 100644 index 0000000000000000000000000000000000000000..cd57bfadc5b61afb92d0748feca931e1a3d5a118 --- /dev/null +++ b/2025/AG-VPReID_ A Challenging Large-Scale Benchmark for Aerial-Ground Video-based Person Re-Identification/full.md @@ -0,0 +1,330 @@ +# AG-VPReID: A Challenging Large-Scale Benchmark for Aerial-Ground Video-based Person Re-Identification + +Huy Nguyen1, Kien Nguyen1, Akila Pemasiri1, Feng Liu2, Sridha Sridharan1, Clinton Fookes1 + +$^{1}$ School of Electrical Engineering and Robotics, Queensland University of Technology $^{2}$ Department of Computer Science, Drexel University + +$^{1}\{t497.nguyen, k.nguyenthanh, a.thondilege, s.sridharan, c.fookes\}@qut.edu.au,$ $^{2}f1397@drexel.edu$ + +# Abstract + +We introduce AG-VPReLU, a new large-scale dataset for aerial-ground video-based person re-identification (ReID) that comprises 6,632 subjects, 32,321 tracklets and over 9.6 million frames captured by drones (altitudes ranging from 15–120m), CCTV, and wearable cameras. This dataset offers a real-world benchmark for evaluating the robustness to significant viewpoint changes, scale variations, and resolution differences in cross-platform aerial-ground settings. In addition, to address these challenges, we propose AG-VPReLU-Net, an end-to-end framework composed of three complementary streams: (1) an Adapted Temporal-Spatial Stream addressing motion pattern inconsistencies and facilitating temporal feature learning, (2) a Normalized Appearance Stream leveraging physics-informed techniques to tackle resolution and appearance changes, and (3) a MultiScale Attention Stream handling scale variations across drone altitudes. We integrate visual-semantic cues from all streams to form a robust, viewpoint-invariant whole-body representation. 
Extensive experiments demonstrate that AG-VPReID-Net outperforms state-of-the-art approaches on both our new dataset and existing video-based ReID benchmarks, showcasing its effectiveness and generalizability. Nevertheless, the performance gap observed on AG-VPReID across all methods underscores the dataset's challenging nature. The dataset, code and trained models are available at AG-VPReID-Net.

# 1. Introduction

Video-based person re-identification (ReID) is a challenging and in-demand task, with significant real-world applications in surveillance, search and rescue operations, and urban monitoring [25, 35, 37]. While traditional ReID methods focus on ground-based cameras [7, 50], the integration of aerial perspectives through aerial-ground person ReID presents a paradigm shift in this field [32]. This approach enables the identification and matching of individuals across non-overlapping aerial and ground-based camera views, substantially enhancing situational awareness and response times in complex environments [31, 46]. The motivation behind this research stems from the increasing deployment of aerial platforms, such as unmanned aerial vehicles (UAVs), which provide unique vantage points that complement ground-based observations. However, the development of robust aerial-ground ReID systems faces a significant challenge: the scarcity of diverse and large-scale datasets that capture the nuances of both aerial and ground perspectives. As demonstrated by ImageNet [4], large and diverse benchmarks are crucial for deep learning-based methods, indicating a need for a comprehensive ReID dataset integrating multiple platforms, environments, and real-world challenges.

Initial efforts in aerial-ground person ReID have focused primarily on image-based tasks. For instance, Nguyen et al. [31] pioneer this area by releasing the first aerial-ground ReID dataset, which includes images from one drone and one CCTV camera capturing 21,983 images of 388 identities. They later expand the dataset to 100,502 images of 1,615 individuals [32]. Recently, Zhang et al. [46] collect a synthetic dataset named CARGO, containing 108,563 images representing 5,000 subjects, to complement real-world datasets. Within video-based tasks, Zhang et al. [47] collect a video-based dataset called G2A-VReID, which consists of 185,907 images and 5,576 tracklets from one drone and one CCTV camera, featuring 2,788 identities. Despite these advances, G2A-VReID remains small compared to ground-based datasets like MARS [51], which includes roughly 20,000 tracklets and 1.19 million frames from six cameras. While current aerial-ground datasets are valuable, increasing identity variation and environmental diversity would improve model robustness for real-world applications.
| Dataset | Year | #Identities | #Tracklets | #Frames (M) | #CV | CC | Att. | Ground | Wearable | Aerial | Dur. | Altitude (m) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MARS [51] | 2016 | 1,261 | 20,478 | 1.19 | 6 | X | X | ✓ | X | X | - | - |
| LSVID [18] | 2019 | 3,772 | 14,943 | 2.98 | 15 | X | X | ✓ | X | X | 4 | - |
| VCCR [10] | 2022 | 392 | 4,384 | 0.15 | 1 | ✓ | X | ✓ | X | X | 90 | - |
| CCVID [8] | 2022 | 226 | 2,856 | 0.34 | 1 | ✓ | X | ✓ | X | X | - | - |
| MEVID [3] | 2023 | 158 | 8,092 | 10.46 | 33 | ✓ | X | ✓ | X | X | 73 | - |
| PDestre [17] | 2020 | 253 | 1,894 | 0.10 | 1 | ✓ | ✓ | X | X | ✓ | - | 5-6 |
| G2A-VReID [47] | 2024 | 2,788 | 5,576 | 0.18 | 2 | X | X | ✓ | X | ✓ | - | 20 - 60 |
| AG-VPReID | 2024 | 6,632 | 32,321 | 9.66 | 6 | ✓ | ✓ | ✓ | ✓ | ✓ | 20 | 15 - 120 |
Table 1. Comparison of AG-VPReID with existing video-based person ReID datasets. Above: ground-based datasets, Below: aerial-based datasets. CV: Camera Views, CC: Clothes-Change, Att.: Attributes (Soft-biometrics annotations), Dur.: Duration (days).

In light of this, we introduce AG-VPReID, a comprehensive large-scale benchmark dataset for Aerial-Ground Video-based Person ReID. AG-VPReID comprises 6,632 subjects, 32,321 tracklets, and over 9.6 million frames, captured across multiple dates and times of day using a combination of three platforms: aerial drones operating at various altitudes (15-120m), stationary CCTV cameras and wearable mobile cameras. This dataset significantly surpasses existing video-based ReID benchmarks in terms of scale, diversity, and real-world applicability, with the highest number of identities, the highest number of tracklets, the highest drone flying altitudes, and the most diverse platforms. The key characteristics of AG-VPReID include: drastic view changes between aerial and ground perspectives; a large number of annotated identities across multiple sessions; rich outdoor scenarios with varying environmental conditions; significant differences in resolution between aerial and ground footage; and both controlled scenarios with clothing changes and in-the-wild pedestrian traffic.

Aerial-ground person ReID presents unique challenges due to significant appearance variations between aerial and ground-level views. These variations include extreme viewpoint differences, drastic changes in resolution and scale, partial occlusions, and temporal discontinuities caused by high flying altitudes and long-range captures. Traditional video-based person ReID methods, although effective in ground-based settings [26, 45], often struggle in aerial-ground scenarios due to the complex combination of inconsistent motion patterns and the aforementioned variations.

To address these challenges, we introduce AG-VPReID-Net, an end-to-end framework for Aerial-Ground Video-based Person Re-Identification. Unlike existing state-of-the-art methods focused on single-view or ground scenarios, AG-VPReID-Net features three complementary streams tailored for aerial-ground challenges: i) An Adapted Temporal-Spatial Stream enhances traditional temporal modeling by integrating identity-specific memory and temporal shape analysis. This improves the extraction of consistent motion patterns and body shape representations, addressing the temporal discontinuity and motion inconsistencies of aerial footage, and outperforming standard LSTM [6, 14] or 3D CNN [20] approaches; ii) A Normalized Appearance Stream addresses the resolution and appearance differences between aerial and ground views by aggregating UV maps across frames into a normalized appearance representation. This provides robustness against pose changes, viewpoint shifts, and varying image quality, excelling where current appearance-based methods [19, 29] falter; and iii) A Multi-Scale Attention Stream addresses scale variations inherent in aerial-ground data by incorporating multi-scale feature extraction, motion analysis, temporal context, and a transformer decoder, effectively improving identification across drone altitudes compared to single-scale methods [13, 43]. By integrating these streams, AG-VPReID-Net delivers consistent improvements in aerial-ground video-based re-identification, highlighting its potential in this challenging scenario.
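To make the stream-fusion design concrete, the sketch below shows one way the three stream embeddings could be combined into a single whole-body descriptor. This is a minimal PyTorch-style illustration under our own naming (`StreamStub`, `ThreeStreamFusion`, and the embedding sizes are assumptions); it is not the released AG-VPReID-Net implementation, which fuses richer visual-semantic cues.

```python
# Minimal sketch (not the released code): fusing three stream embeddings
# into one whole-body representation. Stream internals are stubbed out.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StreamStub(nn.Module):
    """Placeholder for one stream (temporal-spatial, appearance, or multi-scale)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_dim) clip-level features from this stream's backbone
        return self.proj(x)

class ThreeStreamFusion(nn.Module):
    def __init__(self, dims=(512, 512, 768), embed_dim: int = 256):
        super().__init__()
        self.streams = nn.ModuleList(StreamStub(d, embed_dim) for d in dims)

    def forward(self, feats):
        # feats: one tensor per stream; L2-normalize each embedding so no
        # single stream dominates, then concatenate into the final descriptor.
        embs = [F.normalize(s(f), dim=-1) for s, f in zip(self.streams, feats)]
        return torch.cat(embs, dim=-1)  # (batch, 3 * embed_dim)

fusion = ThreeStreamFusion()
feats = [torch.randn(4, 512), torch.randn(4, 512), torch.randn(4, 768)]
print(fusion(feats).shape)  # torch.Size([4, 768])
```

Normalizing each stream before concatenation keeps any one stream's magnitude from dominating the joint descriptor; in the full framework the streams of Sec. 4 are trained end-to-end.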
In summary, our main contributions are as follows:

(1) We introduce AG-VPReID, a challenging large-scale benchmark for aerial-ground video-based person ReID, bridging the gap with a diverse dataset that captures nuanced challenges from both aerial and ground perspectives.
(2) We propose AG-VPReID-Net, an innovative end-to-end framework that integrates adapted temporal-spatial processing, normalized appearance representation, and multi-scale attention mechanisms to effectively address the challenges of aerial-ground ReID.
(3) AG-VPReID-Net sets new state-of-the-art performance on AG-VPReID and existing video-based ReID benchmarks, demonstrating our approach's effectiveness and generalizability across different settings.

# 2. Prior Work

Video-based Person ReID Datasets. Existing person ReID datasets are numerous but largely fail to capture real-world challenges, particularly in aerial-ground scenarios. Ground-based datasets like MARS [51] and LSVID [18] provide large-scale benchmarks but focus mainly on ground perspectives, highlighting the need for multi-platform surveillance datasets. The inclusion of clothing changes in datasets such as MEVID [3], CCVID [8], and VCCR [10] represents progress in addressing real-world challenges, though they could benefit from more identities and diverse environments. The BRIAR dataset [2], while featuring 1,000 subjects and UAV footage, primarily targets face recognition with restricted access. P-Destre [17] pioneered aerial view exploration, though its use of a single drone at lower altitudes (5-6m) creates opportunities for datasets covering the higher operational altitudes more common in surveillance applications. The G2A-VReID dataset by Zhang et al. [47] represents an important step in combining aerial and ground views. While innovative, it contains 2,788 identities within a $20-60\mathrm{m}$ altitude range using 2 cameras, suggesting opportunities for future datasets to expand in scale, altitude diversity, camera count, and environmental variety. Tab. 1 compares our AG-VPReID dataset with others across multiple dimensions.

![](images/4be2113499ae6adece19cf4a8306ac03f748d2eb26b2f3e4a6eabd1218630242.jpg)
Figure 1. Our AG-VPReID dataset was captured using a variety of six cameras, including aerial drones, CCTVs, and GoPros. Sample images and camera locations are illustrated on the right side of the figure. The left side depicts the cross-camera appearance variations of two pedestrians, showcasing differences across various sessions and times of the day.

Video-based Person ReID. Video-based person ReID methods have evolved to leverage both spatial and temporal cues. Early approaches used recurrent neural networks and 3D convolutional networks [20, 30], while later works incorporated temporal pooling [51] and attention mechanisms [25]. Recent advancements include temporal complementary learning [13], Transformer-based architectures [11, 28], and techniques addressing cross-platform and cloth-changing scenarios [10, 39, 47]. The AG-ReID 2023 Challenge [33] highlighted aerial-ground ReID challenges, with winners employing re-ranking, data augmentation, and centroid-based representations. Recent works include Instruct-ReID [12] with instruction-guided retrieval, Domain Shifting [15] for distribution adaptation, and SEAS [53] using 3D body shape guidance. Despite these developments, most existing methods employ uni-modal frameworks trained on predefined label sets.
In contrast, recent work has proposed a visual-language multi-modal learning paradigm [45], potentially offering more flexibility and robustness in feature representation for video-based person ReID.

# 3. AG-VPReID Dataset

This section offers a detailed overview of the creation process for our AG-VPReID dataset. We describe the methods used for collecting video footage in Sec. 3.1. Sec. 3.2 introduces our annotation procedures. Sec. 3.3 compares AG-VPReID with existing datasets, highlighting its unique features.

# 3.1. Dataset Collection

The AG-VPReID dataset was collected over a period of 20 days, including 10 morning sessions (8:30am-10:00am) and 10 afternoon sessions (3:30pm-5:00pm), with each session lasting 60 minutes. Data capture involved two drones, two CCTV cameras, and two wearable cameras. Each drone operated at four different altitudes—15m, 30m, 80m, and 120m—for 15 minutes per session, providing a comprehensive range of aerial views. In total, the dataset comprises 240 hours of video footage, documented in Tab. 2, which includes detailed specifications of the equipment used. The dataset features diverse resolutions, frame rates, and perspectives, extending from ground level to 120-meter aerial views. Fig. 1 shows example frames across different viewpoints and image qualities from different platforms. The drones, CCTV, and wearable devices are positioned to view individuals from different angles, as illustrated in Fig. 1, forcing person ReID models to learn robust multiview and partial-view representations to be effective.
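Because the cameras record at very different frame rates (see Tab. 2 below), associating tracklets across platforms requires mapping per-camera frame indices onto a common session clock. The helper below is a hypothetical illustration of that bookkeeping: only the FPS values come from Tab. 2, while the camera keys and session start offsets are our own assumptions, not part of the released dataset tooling.

```python
# Hedged sketch: convert per-camera frame indices to seconds on a shared
# session timeline so detections from heterogeneous cameras can be aligned.
CAMERA_FPS = {
    "cctv_outdoor": 15, "cctv_indoor": 25,   # Bosch CCTV units (Tab. 2)
    "gopro_front": 30, "gopro_side": 60,     # wearable GoPro10 units
    "dji_inspire2": 25, "dji_m300rtk": 1,    # the two drones
}

def frame_to_seconds(camera: str, frame_idx: int, start_offset_s: float = 0.0) -> float:
    """Seconds since session start for a given camera frame index."""
    return start_offset_s + frame_idx / CAMERA_FPS[camera]

# Example: frame 900 of the 60-fps GoPro and frame 225 of the 15-fps CCTV
# both land 15 s into the session, making them co-occurrence candidates.
print(frame_to_seconds("gopro_side", 900))    # 15.0
print(frame_to_seconds("cctv_outdoor", 225))  # 15.0
```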
| Type | Model | Resolution | Lens | FPS |
| --- | --- | --- | --- | --- |
| CCTV | Bosch (Outdoor) | 704 × 480 | 24mm | 15 |
| CCTV | Bosch (Indoor) | 1280 × 720 | 18mm | 25 |
| Wearable | GoPro10 (Front) | 3840 × 2160 | 16mm | 30 |
| Wearable | GoPro10 (Side) | 1920 × 1080 | 16mm | 60 |
| Drones | DJI Inspire2 | 3840 × 2160 | 24mm | 25 |
| Drones | DJI M300RTK | 8192 × 5460 | 35mm | 1 |
Table 2. Equipment specifications for the AG-VPReID dataset.

To ensure professional drone operations, a specialized team (one RPAS engineer, one Chief Remote Pilot, one RPAS technician) managed all 20 data collection days, handling flights, capturing aerial footage, and performing initial data preprocessing.

# 3.2. Labeling Process

The AG-VPReID dataset uses YOLOv8x for person detection and tracking [16], extracting images from all frames across multiple cameras. It includes both short-term and long-term ReID scenarios, the latter including instances where participants change clothes to test long-term identity persistence. Identity matching was performed by a team of expert annotators, supported by research assistants, to ensure both accuracy and consistency in the matching process. Following [32], we manually annotated each identity with 15 selective soft-biometric attributes to enhance the dataset's utility for attribute-based person ReID applications. For a detailed list of these attributes, refer to Sec. 8.1.

# 3.3. Dataset Characteristics

Compared with existing video-based ReID datasets, our AG-VPReID dataset has five unique characteristics:

1) The highest number of identities and tracklets. AG-VPReID sets a new benchmark with 6,632 unique identities and 32,321 tracklets, substantially surpassing existing datasets. It holds nearly twice as many identities as prominent ground-based datasets such as LSVID [18], which contains 3,772 identities, and nearly six times the tracklets of the next largest aerial-ground dataset, G2A-VReID [47], which includes 5,576 tracklets. Additionally, AG-VPReID encompasses over 9.6 million frames, roughly 50 times the frame volume of other aerial-ground datasets. While MEVID [3] exceeds it in frame count with 10.46 million, it does not match AG-VPReID in terms of identity and tracklet numbers.

2) The most diverse platforms. Our AG-VPReID dataset is the first to incorporate aerial, ground, and wearable platforms for video-based person ReID. The inclusion of wearable cameras provides a novel dimension with high-quality first-person perspectives. This combination results in extreme variations in resolution and subject size across platforms: UAV (18×37 to 293×542 pixels), CCTV (22×23 to 172×413 pixels), and GoPro (25×48 to 483×1078 pixels).

3) The highest flying altitudes. AG-VPReID features footage from altitudes reaching up to 120 meters, exceeding existing datasets' 60-meter maximum [47]. This introduces challenges: 1) extreme viewpoints with perspective distortions; 2) multiple scales with varying resolutions; and 3) image quality issues (Fig. 2). We used two drones—one with a wide-angle camera for area monitoring and another with a narrow-angle camera for detailed observation.

4) Rich outdoor scenarios with real-world challenges. Our AG-VPReID dataset presents diverse outdoor campus scenarios with real-world challenges including complex occlusions, varied poses from different activities, and uniform-wearing individuals. Fig. 2 shows examples of these diverse scenarios.

5) Other notable characteristics. AG-VPReID includes comprehensive attributes for each identity (gender, age, clothing style, accessories), enabling fine-grained analysis. The dataset features long-term tracking data of 14 diverse individuals recorded across multiple days, each wearing different clothing per session to capture real-world variations.
We also provide camera calibration information and GPS coordinates to support multi-camera tracking research. See supplementary material Sec. 8 for details.

# 3.4. Ethics and Privacy

This research received ethics approval for data collection and usage. We implement "Deface" [5] to blur faces, secure data storage, and obtained informed consent from all participants. Details are available at our project repository.

# 4. AG-VPReID-Net

We propose AG-VPReID-Net, a purpose-built framework addressing aerial-ground ReID's unique challenges. In particular, we propose an Adapted Temporal-Spatial Stream for robust temporal-spatial representations to deal with the temporal discontinuity caused by drone motion and unstable tracking between frames. We propose a Normalized Appearance Stream to handle the resolution and appearance changes caused by extreme viewpoint shifts. To deal with altitude-driven scale variance, we introduce a Multi-Scale Attention Stream. Fig. 3 illustrates our architecture. Detailed stream contributions are provided in Tab. 9 of the supplementary material.

![](images/f5041b8ee0d43fbe3c67dae2d976f3ba9842453939e1d8967e949f57549b5c4a.jpg)
![](images/84bc2bab5ce1bb227c0c9a5047b2f2878eb081be68533d2c7c1ab3b44ed85f14.jpg)
![](images/3f33b18445e2169b6dabdd3509ce0cd75de375a821bdd874e2b3a78cde0291b6.jpg)
![](images/9c10adb7d47f8c23d47fbb8ad49cdb776c6fe02c6359b6e37345ed9339d91b6b.jpg)
![](images/6a94c1e0b5aea0e6901cab8b1be05a0a065928e010a123d3a8fd8b5158dc5c3a.jpg)
Figure 2. The AG-VPReID dataset presents several key challenges: extreme viewpoints, varying resolutions and subject sizes, pose/illumination variations, occlusions, and similar clothing among subjects.

# 4.1. Stream 1: Adapted Temporal-Spatial Stream

A key challenge in video-based person ReID is handling inconsistent motion patterns and temporal gaps between video frames. To address this, we propose an Adapted Temporal-Spatial stream that combines CLIP's visual encoder with temporal and 3D shape modeling to create a comprehensive representation of individuals. Our method operates on a sequence $\mathcal{V}$ of $T$ frames through the following components:

Visual Feature Extraction: Using CLIP's visual encoder $E_{v}(\cdot)$, we extract frame-level features,

$$
F_{t} = E_{v}(\mathcal{I}_{t}), \quad t \in \{1, \dots, T\}, \tag{1}
$$

where $\mathcal{I}_t$ and $F_{t}$ are the $t$-th frame and its features.

Temporal Processing: We incorporate temporal modeling through two key components:

1) Temporal 3D Shape Modeling (TSM): Following [34], we extract 3D shape representations,

$$
g_{t} = \operatorname{GRU}(F_{t}, g_{t-1}), \quad \beta_{t} = \operatorname{3DRegressor}(g_{t}), \tag{2}
$$

where $g_{t}$ captures temporal dynamics and $\beta_{t}$ represents SMPL model parameters.

2) Temporal Feature Enhancement (TFE): Adapting from [45], we enhance features by combining appearance and shape,

$$
F_{\text{enhanced}} = \operatorname{TFE}(F_{1:T}, \beta_{1:T}). \tag{3}
$$

Identity-Aware Processing: We incorporate identity information through,

$$
\mathcal{M}_{y_{i}} = \frac{1}{N_{y_{i}}} \sum_{V \in \mathcal{V}_{y_{i}}} \operatorname{TAP}(F_{\text{enhanced}}),
$$

$$
\mathcal{M}_{y_{i}}^{\text{refined}} = \operatorname{SSP}(F_{\text{enhanced}}, \mathcal{M}_{y_{i}}),
$$

$$
R_{\text{stream1}} = [F_{\text{enhanced}}; \mathcal{M}_{y_{i}}^{\text{refined}}], \tag{4}
$$

where $\mathcal{M}_{y_i}$ is the identity memory bank constructed through Temporal Average Pooling (TAP), and the Sequence-Specific Prompt (SSP) module refines this representation for the final output $R_{\text{stream1}}$.
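To ground Eqs. (1)–(4), a minimal PyTorch-style sketch of the stream's data flow is given below. The generic linear encoder standing in for CLIP's $E_v(\cdot)$, the layer sizes, and the simplified TFE and memory handling are all illustrative assumptions; the SSP refinement of Eq. (4) is omitted.

```python
# Minimal sketch of the Stream 1 computation in Eqs. (1)-(4); module names
# and dimensions are illustrative, not the released implementation.
import torch
import torch.nn as nn

class AdaptedTemporalSpatialStream(nn.Module):
    def __init__(self, feat_dim=512, smpl_dim=10):
        super().__init__()
        self.encoder = nn.Linear(2048, feat_dim)                 # stand-in for E_v(.), Eq. (1)
        self.gru = nn.GRU(feat_dim, feat_dim, batch_first=True)  # temporal dynamics g_t, Eq. (2)
        self.shape_head = nn.Linear(feat_dim, smpl_dim)          # beta_t (SMPL shape parameters)
        self.tfe = nn.Linear(feat_dim + smpl_dim, feat_dim)      # stand-in for TFE, Eq. (3)

    def forward(self, frames, memory):
        # frames: (B, T, 2048) pre-extracted per-frame descriptors
        f = self.encoder(frames)                   # F_t
        g, _ = self.gru(f)                         # g_t
        beta = self.shape_head(g)                  # per-frame shape parameters
        enhanced = self.tfe(torch.cat([f, beta], dim=-1))  # F_enhanced
        clip_feat = enhanced.mean(dim=1)           # temporal average pooling (TAP)
        # Eq. (4), simplified: concatenate with the identity memory entry
        return torch.cat([clip_feat, memory], dim=-1)

stream = AdaptedTemporalSpatialStream()
frames = torch.randn(2, 8, 2048)
memory = torch.randn(2, 512)  # M_{y_i}: running mean of TAP features per identity
print(stream(frames, memory).shape)  # torch.Size([2, 1024])
```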
# 4.2. Stream 2: Normalized Appearance Stream

The Adapted Temporal-Spatial (ATS) stream provides a robust temporal-spatial representation, but may not fully capture fine-grained appearance details across viewpoints, especially in aerial footage. To address this limitation, we propose a Normalized Appearance (NA) stream that effectively aggregates appearance details from multiple viewpoints.

The NA stream normalizes and combines appearance information across frames using UVTexture maps and visibility masks. Our process involves: (1) extracting UVTexture maps and visibility masks per frame, (2) normalizing UVTexture map brightness, (3) aligning maps across frames, (4) weighted aggregation using visibility masks, and (5) generating the final normalized representation. The brightness normalization and weighted aggregation of UVTexture maps can be formulated as,

$$
T_{i}^{\text{norm}} = \gamma(H(N(T_{i}))), \tag{5}
$$

$$
T_{\text{aggregated}} = \frac{\sum_{i=1}^{N} V_{i} \odot T_{i}^{\text{norm}}}{\sum_{i=1}^{N} V_{i}}, \tag{6}
$$

where $T_{i}^{\text{norm}}$ is the normalized UVTexture map for frame $i$; $N(\cdot)$, $H(\cdot)$, and $\gamma(\cdot)$ are normalization, histogram matching and gamma correction functions, respectively; and $T_{\text{aggregated}}$ is the final aggregated map. We leverage PhysPT [49] for pose estimation and Texformer [41] to generate UV maps from PhysPT's output 3D meshes. The maps are improved through inter-frame consistency before being fed into the DGC Omni-scale Module [52].

![](images/d1697fc18f796b09bfbaba2feb45ad8e162dff3d0f8bf44b83643ec2512766b0.jpg)
Figure 3. The three-stream AG-VPReID-Net architecture addresses aerial-ground ReID challenges: the Adapted Temporal-Spatial stream for motion modeling and temporal features, the Normalized Appearance stream for resolution/appearance variations, and the Multi-Scale Attention stream for aerial-ground scale variations.
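A small NumPy sketch of the aggregation in Eqs. (5) and (6) follows. The zero-mean normalization and the gamma value are illustrative assumptions, and the histogram-matching step $H(\cdot)$ is omitted for brevity.

```python
# Hedged NumPy sketch of Eqs. (5)-(6): brightness-normalize per-frame
# UV-texture maps, then average them weighted by per-texel visibility.
import numpy as np

def normalize_texture(t: np.ndarray, gamma: float = 1.2) -> np.ndarray:
    """N(.) then gamma correction for one UV map in [0, 1], shape (H, W, 3)."""
    t = (t - t.mean()) / (t.std() + 1e-6)            # N(.): zero-mean, unit variance
    t = (t - t.min()) / (t.max() - t.min() + 1e-6)   # rescale back to [0, 1]
    return t ** (1.0 / gamma)                        # gamma(.) brightness correction

def aggregate_textures(textures, visibilities):
    """Eq. (6): visibility-weighted average of normalized UV maps."""
    num = sum(v[..., None] * normalize_texture(t) for t, v in zip(textures, visibilities))
    den = sum(v[..., None] for v in visibilities) + 1e-6
    return num / den

T = [np.random.rand(64, 64, 3) for _ in range(4)]     # per-frame UV maps
V = [(np.random.rand(64, 64) > 0.3).astype(float) for _ in range(4)]  # visibility masks
print(aggregate_textures(T, V).shape)  # (64, 64, 3)
```

The visibility weighting means texels seen in only a few frames still receive a sensible estimate, while occluded texels do not pollute the aggregate.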
# 4.3. Stream 3: Multi-Scale Attention Stream

While the ATS stream provides a robust temporal-spatial representation and the NA stream addresses viewpoint and occlusion challenges, aerial-ground person ReID still faces significant hurdles due to extreme scale variations between drone and ground-level footage. The first two streams effectively capture temporal dynamics, 3D shape information, and viewpoint-invariant appearance details, but they may not fully address the drastic scale differences inherent in aerial-ground scenarios. To complement the ATS and NA streams and address this limitation, we propose a Multi-Scale Attention (MSA) stream.

In detail, this stream leverages frozen large vision models combined with lightweight, adaptive processing: it utilizes a frozen large vision model to extract multi-scale features for video-based person ReID. By combining a lightweight Transformer decoder with a local temporal module, this approach dynamically integrates spatial and temporal information, thereby enhancing our framework's ability to accurately capture essential person-specific details.

Specifically, for each frame $\mathcal{I}_t$ within the sequence $\mathcal{V}_{y_i}$, the CLIP vision encoder [36] is employed to extract features independently. The process collects tokens from various layers at regular intervals to compile a detailed feature map that captures spatial correspondences. These frame feature maps are subsequently concatenated and assembled into a spatiotemporal feature volume $\mathbf{G}$. Following [23, 44], we integrate temporal information into this volume before processing it through a Transformer decoder. This decoder globally aggregates features across multiple layers, employing a video-level classification token as a query, with feature volumes from different layers of the backbone serving as keys and values. A linear layer then maps the output of the decoder's final block to produce class predictions. The operational dynamics of the Transformer decoder are outlined as follows,

$$
Y_{i} = \operatorname{Temp}_{i}([\mathbf{G}_{N-M+i,1}, \mathbf{G}_{N-M+i,2}, \dots, \mathbf{G}_{N-M+i,T}]),
$$

$$
\tilde{q}_{i} = q_{i-1} + \operatorname{MHA}_{i}(q_{i-1}, Y_{i}, Y_{i}),
$$

$$
q_{i} = \tilde{q}_{i} + \operatorname{MLP}_{i}(\tilde{q}_{i}),
$$

$$
f_{G} = \operatorname{FC}(q_{M}), \tag{7}
$$

where $\mathbf{G}_{n,t}$ represents the features of frame $t$ extracted from the $n$-th layer of the CLIP vision encoder. The feature volume $Y_{i}$, which undergoes temporal modulation, is input into the $i$-th layer of the Transformer decoder. The query token $q_{i}$ is incrementally refined, beginning with $q_{0}$ as learnable initial parameters. The final output $f_{G}$ corresponds to the final feature. The spatiotemporal decoder comprises $M$ blocks, and $N$ denotes the number of encoder layers. Multi-head attention (MHA) takes a query, key, and value, each of which plays a distinct role. The operator $\operatorname{Temp}(\cdot)$ models temporal dynamics, producing feature tokens informed by detailed temporal information.

# 5. Experimental Results

# 5.1. Datasets and Evaluation Metrics

We conducted evaluations of our method using AG-VPReID and four established video-based person ReID datasets: iLIDS-VID [38], MARS [51], LS-VID [18] and G2A-VReID [47]. For AG-VPReID, we used a balanced split of 3,013 identities with both ground and aerial views, dividing them equally for training and testing purposes [31, 32]. Details on the training and testing configurations are provided in Tab. 3. We evaluate performance using the Cumulative Matching Characteristic (CMC) at Rank-1 and the mean Average Precision (mAP).
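For concreteness, a compact reference implementation of these two metrics is sketched below. It assumes a plain retrieval protocol and omits the cross-camera and junk-image filtering used by standard ReID evaluation code; the function and variable names are our own.

```python
# Reference sketch of CMC Rank-k and mAP from a query-gallery distance matrix.
import numpy as np

def evaluate(dist, q_ids, g_ids, ks=(1, 5, 10)):
    """dist: (num_query, num_gallery) distances; smaller means more similar."""
    order = np.argsort(dist, axis=1)                 # gallery sorted per query
    matches = np.asarray(g_ids)[order] == np.asarray(q_ids)[:, None]
    # CMC: fraction of queries with a correct match within the top-k results
    cmc = {k: matches[:, :k].any(axis=1).mean() for k in ks}
    # mAP: mean over queries of average precision across all true matches
    aps = []
    for row in matches:
        hits = np.where(row)[0]                      # 0-based ranks of true matches
        if hits.size:
            aps.append(np.mean((np.arange(hits.size) + 1) / (hits + 1)))
    return cmc, float(np.mean(aps))

dist = np.random.rand(4, 20)
cmc, mAP = evaluate(dist, q_ids=[0, 1, 2, 3], g_ids=list(range(20)))
print(cmc[1], mAP)
```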
| Case | Subset | #IDs | #Tracklets | #Images (M) |
| --- | --- | --- | --- | --- |
| Training | All | 1,555 | 13,300 | 3.85 |
| Testing (A2G) | All | 1,456 | 13,566 | 3.94 |
| Testing (A2G) | 15m | 506 | 4,907 | 1.50 |
| Testing (A2G) | 30m | 377 | 2,885 | 0.89 |
| Testing (A2G) | 80m | 356 | 2,592 | 0.69 |
| Testing (A2G) | 120m | 308 | 3,182 | 0.86 |
| Testing (G2A) | All | 5,075* | 19,021 | 5.79 |
| Testing (G2A) | 15m | 1,403 | 6,362 | 2.14 |
| Testing (G2A) | 30m | 1,406 | 4,468 | 1.41 |
| Testing (G2A) | 80m | 1,162 | 3,866 | 1.13 |
| Testing (G2A) | 120m | 1,195 | 4,325 | 1.11 |
Table 3. Statistics of the AG-VPReID dataset. A2G: aerial-to-ground, G2A: ground-to-aerial. *3,619 additional IDs serve as distractors.

# 5.2. Implementation Details

Our pipeline leverages UV maps generated by Texformer [41] using 3D human meshes from PhysPT [49] with refined pose estimation. The UV maps are processed through normalization, histogram matching, and gamma correction before weighted blending with visibility masks. The architecture consists of three streams: an Adapted Temporal-Spatial Stream (CLIP ViT-B/16), a Normalized Appearance Stream for 3D coordinates and UV textures, and a Multi-Scale Attention Stream (CLIP ViT-L/14). More implementation details can be found in the supplementary material.

# 5.3. Comparison with State-of-the-Art Methods

We evaluate our proposed AG-VPReID-Net against several state-of-the-art approaches across multiple video-based person ReID datasets. Tab. 4 summarizes the results.

Ground-to-Ground Datasets. Our method achieves superior performance on MARS (91.5% mAP, 93.2% Rank-1), outperforming CLIP-ReID by 3.4% mAP. On LS-VID (87.3% mAP, 93.2% Rank-1), we surpass LSTRL by 4.9% mAP. For iLIDS-VID, we reach 96.3% Rank-1, which is 3.0% higher than MFA.

Cross-Platform Datasets. On G2A-VReID, we achieve $81.3\%$ mAP and $73.1\%$ Rank-1, surpassing MGH by $4.6\%$ mAP. Note that the G2A-VReID dataset only provides a ground-to-aerial testing set. For AG-VPReID, we demonstrate strong results in both ground-to-aerial (58.0% mAP, 75.6% Rank-1, exceeding CLIP-ReID by 8.8% Rank-1) and aerial-to-ground scenarios (64.0% mAP, 71.9% Rank-1, surpassing CLIP-ReID by 1.7% mAP and 0.3% Rank-1).

![](images/5aeac7856fbfa5606dcc60f6256ab85735938e0fee7e11f5c68df0f17dc7efc2.jpg)
Figure 4. Baseline vs. our method on the AG-VPReID dataset. Green/red: correct/incorrect matches. The first image of each tracklet is shown. Improved ranks are shown in bold.

# 5.4. Ablation Study

We conduct an ablation study on AG-VPReID to evaluate each stream. St-1 is our temporal modeling stream, St-2 is the appearance normalization stream, and St-3 is the multi-scale feature stream. Their combinations (St-12/13/23/123) merge multiple streams.

Stream Contributions. Tab. 5 shows that St-1 achieves the strongest individual performance (71.52% A2G, 74.80% G2A Rank-1). St-2 and St-3 show moderate results (58.40% and 61.65% A2G Rank-1). Combined streams demonstrate complementary strengths, with St-123 achieving the best results (71.91% A2G, 75.57% G2A Rank-1) by integrating all three streams.

Impact of Altitude. Tab. 6 shows performance decreasing with altitude, most significantly between $30\mathrm{m}$ and $80\mathrm{m}$; A2G Rank-1 drops by roughly $11\%$ across streams. At $120\mathrm{m}$, St-1 demonstrates robustness (52.47% vs. 38.12% for St-2 and 35.13% for St-3), a +17.34% margin over St-3 attributable to temporal modeling. Pairing streams confirms their complementarity: St-23 reaches 42.70% (+7.57% over St-3 alone) thanks to St-2's physics-informed UV mapping, and St-13 reaches 52.85% (+17.72% over St-3 alone). St-123 maintains the best performance across all altitudes by combining stream strengths.

Clothing Changes vs. Camera Angles. Analysis shows that altitude increases $(15\mathrm{m} \rightarrow 120\mathrm{m})$ reduce Rank-1 by $27.66\%$, significantly more than clothing changes $(7.85\%$ in ground-to-ground). Without clothing changes, aerial-ground matching $(71.91\%$ Rank-1) still underperforms ground-to-ground $(91.52\%)$ due to viewpoint differences.
When combining aerial views with clothing changes $(65.83\%$ Rank-1), these factors create synergistic challenges where viewpoint differences amplify clothing ambiguity; see Tab. 7.
| Method | MARS (G→G) mAP | MARS (G→G) R-1 | LS-VID (G→G) mAP | LS-VID (G→G) R-1 | iLIDS-VID (G→G) R-1 | iLIDS-VID (G→G) R-5 | G2A-VReID (G→A) mAP | G2A-VReID (G→A) R-1 | AG-VPReID (A→G) mAP | AG-VPReID (A→G) R-1 | AG-VPReID (G→A) mAP | AG-VPReID (G→A) R-1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| STMP [29] | 72.7 | 84.4 | 39.1 | 56.8 | 84.3 | 96.8 | - | - | 50.7 | 60.3 | 45.2 | 55.8 |
| M3D [20] | 74.1 | 84.4 | 40.1 | 57.7 | 74.0 | 94.3 | - | - | 52.4 | 62.6 | 47.9 | 57.3 |
| GLTR [19] | 78.5 | 87.0 | 44.3 | 63.1 | 86.0 | 98.0 | - | - | 55.6 | 65.8 | 50.1 | 60.5 |
| TCLNet [13] | 85.1 | 89.8 | 70.3 | 81.5 | 86.6 | - | 65.4 | 54.7 | 57.2 | 67.9 | 52.7 | 62.4 |
| MGH [43] | 85.8 | 90.0 | 61.8 | 79.6 | 85.6 | 97.1 | 76.7 | 69.9 | 60.3 | 70.8 | 55.5 | 65.2 |
| GRL [25] | 84.8 | 91.0 | - | - | 90.4 | 98.3 | - | - | 58.7 | 68.4 | 53.9 | 63.6 |
| BiCnet-TKS [14] | 86.0 | 90.2 | 75.1 | 84.6 | - | - | 63.4 | 51.7 | 59.8 | 69.2 | 54.3 | 64.7 |
| CTL [24] | 86.7 | 91.4 | - | - | 89.7 | 97.0 | - | - | 56.4 | 66.9 | 51.8 | 61.3 |
| STMN [6] | 84.5 | 90.5 | 69.2 | 82.1 | - | - | 66.7 | 56.1 | 61.6 | 71.5 | 56.9 | 66.2 |
| PSTA [39] | 85.8 | 91.5 | - | - | 91.5 | 98.1 | - | - | 60.5 | 70.2 | 55.8 | 65.7 |
| DIL [11] | 87.0 | 90.8 | - | - | 92.0 | 98.0 | - | - | 61.2 | 70.9 | 56.3 | 66.1 |
| STT [48] | 86.3 | 88.7 | 78.0 | 87.5 | 87.5 | 95.0 | - | - | 61.0 | 70.7 | 56.1 | 65.9 |
| TMT [28] | 85.8 | 91.2 | - | - | 91.3 | 98.6 | - | - | 60.8 | 70.5 | 55.9 | 65.8 |
| CAViT [40] | 87.2 | 90.8 | 79.2 | 89.2 | 93.3 | 98.0 | - | - | 61.4 | 71.1 | 56.5 | 66.3 |
| SINet [1] | 86.2 | 91.0 | 79.6 | 87.4 | 92.5 | - | - | - | 61.3 | 71.0 | 56.4 | 66.2 |
| MFA [9] | 85.0 | 90.4 | 78.9 | 88.2 | 93.3 | 98.7 | - | - | 61.1 | 70.8 | 56.2 | 66.0 |
| DCCT [27] | 87.5 | 92.3 | - | - | 91.7 | 98.6 | - | - | 61.5 | 71.2 | 56.6 | 66.4 |
| LSTRL [26] | 86.8 | 91.6 | 82.4 | 89.8 | 92.2 | 98.6 | - | - | 61.7 | 71.3 | 56.7 | 66.5 |
| CLIP-ReID [21] | 88.1 | 91.7 | 80.6 | 88.8 | - | - | - | - | 62.3 | 71.6 | 57.2 | 66.8 |
| **AG-VPReID-Net** | **91.5** | **93.2** | **87.3** | **93.2** | **96.3** | **99.5** | **81.3** | **73.1** | **64.0** | **71.9** | **58.0** | **75.6** |
+ +Table 4. Performance comparison across datasets. Bold shows best results. + +
| Method | Rank-1 (A→G) | Rank-5 (A→G) | Rank-10 (A→G) | Rank-1 (G→A) | Rank-5 (G→A) | Rank-10 (G→A) |
| --- | --- | --- | --- | --- | --- | --- |
| St-1 | 71.52 | 80.42 | 83.88 | 74.80 | 84.27 | 86.90 |
| St-2 | 58.40 | 70.20 | 75.80 | 61.50 | 73.60 | 78.20 |
| St-3 | 61.65 | 74.53 | 79.15 | 67.38 | 78.82 | 82.3 |
| St-12 | 69.50 | 78.80 | 82.50 | 72.80 | 82.60 | 85.40 |
| St-13 | 71.80 | 80.60 | 83.91 | 75.40 | 84.48 | 86.91 |
| St-23 | 65.70 | 76.55 | 80.90 | 70.10 | 80.45 | 83.91 |
| St-123 | 71.91 | 80.67 | 83.92 | 75.57 | 84.50 | 86.92 |
Table 5. Ranking accuracy (%) of individual streams and their combinations on the AG-VPReID dataset.
| Method | 15m (A→G) | 30m (A→G) | 80m (A→G) | 120m (A→G) | 15m (G→A) | 30m (G→A) | 80m (G→A) | 120m (G→A) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| St-1 | 80.28 | 78.76 | 67.24 | 52.47 | 83.25 | 83.03 | 67.07 | 62.31 |
| St-2 | 69.75 | 68.13 | 52.62 | 38.12 | 72.87 | 71.41 | 52.22 | 45.43 |
| St-3 | 74.25 | 74.01 | 52.53 | 35.13 | 77.18 | 78.91 | 56.59 | 52.56 |
| St-12 | 78.32 | 76.82 | 65.24 | 50.53 | 81.34 | 81.27 | 65.17 | 60.43 |
| St-13 | 80.55 | 78.95 | 67.50 | 52.85 | 83.70 | 83.55 | 67.60 | 63.10 |
| St-23 | 76.45 | 74.90 | 57.40 | 42.70 | 79.55 | 79.40 | 57.40 | 52.80 |
| St-123 | 80.66 | 79.00 | 67.63 | 53.00 | 83.92 | 83.66 | 67.82 | 63.32 |
Table 6. Rank-1 accuracy (%) on AG-VPReID at various altitudes.

# 5.5. Visualization

We further visualize the ReID results with Top-5 ranking to understand how our model improves over the baseline [21] for aerial-to-ground person ReID in Fig. 4. Unlike the baseline, which may be biased by image resolution and clothing textures, our approach attends to more robust cues such as motion patterns and body shape characteristics, which explains its successful identification of similar walking postures and body proportions despite the significant viewpoint differences between the aerial query and the ground-view gallery. Additional examples are in Figs. 7 and 8 of the supplementary material.
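A Fig. 4-style Top-5 list can be produced from tracklet embeddings in a few lines. The sketch below uses cosine distance as an assumption (the ranking metric is not specified here), and the function and variable names are ours.

```python
# Hedged sketch: Top-k gallery retrieval for one query tracklet embedding.
import numpy as np

def top_k_gallery(query_emb, gallery_embs, gallery_ids, k=5):
    """Return ids of the k nearest gallery tracklets to one query."""
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    ranks = np.argsort(1.0 - g @ q)   # cosine distance, ascending
    return [gallery_ids[i] for i in ranks[:k]]

gallery = np.random.randn(100, 256)
print(top_k_gallery(np.random.randn(256), gallery, list(range(100))))
```

Coloring each returned id by whether it matches the query identity reproduces the green/red annotation of Fig. 4.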
| Scenario | Rank-1 | mAP | Key Observation |
| --- | --- | --- | --- |
| **Camera Angle Impact** | | | |
| 15m altitude (AG) | 80.66 | 77.23 | Baseline performance |
| 30m altitude (AG) | 79.00 | 75.81 | Minimal degradation |
| 80m altitude (AG) | 67.63 | 63.42 | -13.03% Rank-1 vs. 15m |
| 120m altitude (AG) | 53.00 | 48.75 | -27.66% Rank-1 vs. 15m |
| **Clothing Change Impact** | | | |
| GG-SameClothes | 91.52 | 88.74 | Upper-bound performance |
| GG-DiffClothes | 83.67 | 79.92 | -7.85% Rank-1 (CC-only impact) |
| AG-SameClothes | 71.91 | 64.00 | -19.61% Rank-1 (AG-only impact) |
| AG-DiffClothes | 65.83 | 57.52 | -6.08% Rank-1 (CC impact in AG) |
Table 7. Impact of clothing changes (CC) vs. camera angles.

# 6. Conclusion

We introduce AG-VPReID, a comprehensive dataset for video-based aerial-ground person ReID, addressing the critical need for a large and challenging aerial-ground benchmark. We also propose AG-VPReID-Net, a purpose-built three-stream person ReID framework that combines temporal-spatial processing, physics-informed normalized appearance representation, and multi-scale attention mechanisms. This approach achieves state-of-the-art performance on both the AG-VPReID dataset and existing video-based ReID benchmarks. Notably, the relatively lower performance of all approaches on AG-VPReID highlights its demanding nature and establishes it as a robust benchmark for advancing future research in the field.

# 7. Acknowledgement

This work was supported by the Australian Research Council (ARC) Discovery Project (DP200101942) and a QUT Postgraduate Research Award. We gratefully acknowledge the Research Engineering Facility (REF) team at QUT for providing expertise and the research infrastructure essential for data collection and processing within this project.

# References

[1] Shutao Bai, Bingpeng Ma, Hong Chang, Rui Huang, and Xilin Chen. Salient-to-broad transition for video person re-identification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 7339-7348, 2022. 8
[2] David Cornett, Joel Brogan, Nell Barber, Deniz Aykac, Seth Baird, Nicholas Burchfield, Carl Dukes, Andrew Duncan, Regina Ferrell, Jim Goddard, et al. Expanding accurate person recognition to new altitudes and ranges: The BRIAR dataset. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 593-602, 2023. 3
[3] Daniel Davila, Dawei Du, Bryon Lewis, Christopher Funk, Joseph Van Pelt, Roderic Collins, Kellie Corona, Matt Brown, Scott McCloskey, Anthony Hoogs, and Brian Clipp. MEVID: Multi-view extended videos with identities for video person re-identification. In IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023. 2, 3, 4
[4] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 248-255. IEEE, 2009. 1
[5] Michael Dreuw and ORB-HD. deface: Video anonymization by face detection, 2023. Python package version 1.5.0. 4
[6] Chanho Eom, Geon Lee, Junghyup Lee, and Bumsub Ham. Video-based person re-identification with spatial and temporal memory networks. In IEEE International Conference on Computer Vision (ICCV), pages 12036–12045, 2021. 2, 8
[7] Yang Fu, Xiaoyang Wang, Yunchao Wei, and Thomas Huang. STA: Spatial-temporal attention for large-scale video-based person re-identification. In AAAI Conference on Artificial Intelligence, pages 8287–8294, 2019. 1
[8] Xinqian Gu, Hong Chang, Bingpeng Ma, Shutao Bai, Shiguang Shan, and Xilin Chen. Clothes-changing person re-identification with RGB modality only. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022. 2, 3
[9] Xinqian Gu, Hong Chang, Bingpeng Ma, and Shiguang Shan. Motion feature aggregation for video-based person re-identification. IEEE Transactions on Image Processing, 31:3908-3919, 2022. 8
[10] Ke Han, Yan Huang, Shaogang Gong, Liang Wang, and Tieniu Tan. 3D shape temporal aggregation for video-based clothing-change person re-identification. In Asian Conference on Computer Vision (ACCV), pages 2371–2387, 2022.
2, 3 +[11] Tianyu He, Xin Jin, Xu Shen, Jianqiang Huang, Zhibo Chen, and Xian-Sheng Hua. Dense interaction learning for video-based person re-identification. In IEEE International Conference on Computer Vision (ICCV), pages 1490–1501, 2021. 3, 8 +[12] Weizhen He, Yiheng Deng, Shixiang Tang, Qihao Chen, Qingsong Xie, Yizhou Wang, Lei Bai, Feng Zhu, Rui Zhao, Wanli Ouyang, et al. Instruct-reid: A multi-purpose person re-identification task with instructions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 17521–17531, 2024. 3 + +[13] Ruibing Hou, Hong Chang, Bingpeng Ma, Shiguang Shan, and Xilin Chen. Temporal complementary learning for video person re-identification. In European Conference on Computer Vision (ECCV), pages 388–405, 2020. 2, 3, 8 +[14] Ruibing Hou, Hong Chang, Bingpeng Ma, Rui Huang, and Shiguang Shan. Bicnet-tks: Learning efficient spatial-temporal representation for video person re-identification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2014–2023, 2021. 2, 8 +[15] Yan Jiang, Xu Cheng, Hao Yu, Xingyu Liu, Haoyu Chen, and Guoying Zhao. Domain shifting: A generalized solution for heterogeneous cross-modality person re-identification. In European Conference on Computer Vision, pages 289–306. Springer, 2025. 3 +[16] Glenn Jocher, Ayush, and Jing Qiu. Ultralytics YOLO. https://github.com/ultralytics/ ultralytics, 2024. Accessed: 2024-03-22. 4 +[17] SV Aruna Kumar, Ehsan Yaghoubi, Abhijit Das, BS Harish, and Hugo Proença. The p-destre: A fully annotated dataset for pedestrian detection, tracking, and short/long-term re-identification from aerial devices. IEEE Transactions on Information Forensics and Security, 16:1696-1708, 2020. 2, 3, 1 +[18] J. Li, J. Wang, Q. Tian, W. Gao, and S. Zhang. Global-local temporal representations for video person re-identification. In IEEE International Conference on Computer Vision (ICCV), pages 3958–3967, 2019. 2, 4, 7 +[19] Jianing Li, Jingdong Wang, Qi Tian, Wen Gao, and Shiliang Zhang. Global-local temporal representations for video person re-identification. In IEEE International Conference on Computer Vision (ICCV), pages 3958–3967, 2019. 2, 8 +[20] Jianing Li, Shiliang Zhang, and Tiejun Huang. Multiscale 3d convolution network for video based person re-identification. In AAAI Conference on Artificial Intelligence, pages 8618–8625, 2019. 2, 3, 8 +[21] Siyuan Li, Li Sun, and Qingli Li. Clip-reid: exploiting vision-language model for image re-identification without concrete text labels. In AAAI Conference on Artificial Intelligence, pages 1405–1413, 2023. 8, 3 +[22] Yutian Lin, Liang Zheng, Zhedong Zheng, Yu Wu, and Yi Yang. Improving person re-identification by attribute and identity learning. ArXiv, abs/1703.07220, 2019. 1 +[23] Ziyi Lin, Shijie Geng, Renrui Zhang, Peng Gao, Gerard De Melo, Xiaogang Wang, Jifeng Dai, Yu Qiao, and Hongsheng Li. Frozen clip models are efficient video learners. In European Conference on Computer Vision (ECCV), pages 388-404. Springer, 2022. 6, 3 +[24] Jiawei Liu, Zheng-Jun Zha, Wei Wu, Kecheng Zheng, and Qibin Sun. Spatial-temporal correlation and topology learning for person re-identification in videos. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4370–4379, 2021. 8 +[25] Xuehu Liu, Pingping Zhang, Chenyang Yu, Huchuan Lu, and Xiaoyun Yang. Watching you: Global-guided reciprocal learning for video-based person re-identification. 
In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 13334–13343, 2021. 1, 3, 8
[26] Xuehu Liu, Pingping Zhang, and Huchuan Lu. Video-based person re-identification with long short-term representation learning. arXiv preprint arXiv:2308.03703, 2023. 2, 8
[27] Xuehu Liu, Chenyang Yu, Pingping Zhang, and Huchuan Lu. Deeply coupled convolution-transformer with spatial-temporal complementary learning for video-based person re-identification. IEEE Transactions on Neural Networks and Learning Systems, 35(10):13753-13763, 2024. 8
[28] Xuehu Liu, Pingping Zhang, Chenyang Yu, Xuesheng Qian, Xiaoyun Yang, and Huchuan Lu. A video is worth three views: Trigeminal transformers for video-based person re-identification. IEEE Transactions on Intelligent Transportation Systems, 25(9):12818-12828, 2024. 3, 8
[29] Yiheng Liu, Zhenxun Yuan, Wengang Zhou, and Houqiang Li. Spatial and temporal mutual promotion for video-based person re-identification. In AAAI Conference on Artificial Intelligence, pages 8786–8793, 2019. 2, 8
[30] Niall McLaughlin, Jesus Martinez del Rincon, and Paul Miller. Recurrent convolutional network for video-based person re-identification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1325–1334, 2016. 3
[31] Huy Nguyen, Kien Nguyen, Sridha Sridharan, and Clinton Fookes. Aerial-ground person re-id. In IEEE International Conference on Multimedia and Expo (ICME), pages 2585-2590, 2023. 1, 7
[32] Huy Nguyen, Kien Nguyen, Sridha Sridharan, and Clinton Fookes. AG-ReID.v2: Bridging aerial and ground views for person re-identification. IEEE Transactions on Information Forensics and Security, 19:2896-2908, 2024. 1, 4, 7
[33] Kien Nguyen, Clinton Fookes, Sridha Sridharan, Feng Liu, Xiaoming Liu, Arun Ross, Dana Michalski, Huy Nguyen, Debayan Deb, Mahak Kothari, et al. AG-ReID 2023: Aerial-ground person re-identification challenge results. In 2023 IEEE International Joint Conference on Biometrics (IJCB), pages 1–10. IEEE, 2023. 3
[34] Vuong D Nguyen, Pranav Mantini, and Shishir K Shah. Temporal 3D shape modeling for video-based cloth-changing person re-identification. In IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pages 173–182, 2024. 5
[35] Honghu Pan, Qiao Liu, Yongyong Chen, Yunqi He, Yuan Zheng, Feng Zheng, and Zhenyu He. Pose-aided video-based person re-identification via recurrent graph convolutional network. IEEE Transactions on Circuits and Systems for Video Technology, 33(12):7183-7196, 2023. 1
[36] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning (ICML), pages 8748-8763, 2021. 6
[37] Kien Nguyen Thanh, Clinton Fookes, Sridha Sridharan, Yingli Tian, Feng Liu, Xiaoming Liu, and Arun Ross. The state of aerial surveillance: A survey. CoRR, abs/2201.03080, 2022. 1
[38] Xiaogang Wang and Rui Zhao. Person re-identification: System design and evaluation overview. In Person Re-Identification, pages 351-370. Springer, 2014. 7
[39] Yingquan Wang, Pingping Zhang, Shang Gao, Xia Geng, Hu Lu, and Dong Wang. Pyramid spatial-temporal aggregation for video-based person re-identification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 12026–12035, 2021. 3, 8
[40] Jinlin Wu, Lingxiao He, Wu Liu, Yang Yang, Zhen Lei, Tao Mei, and Stan Z Li.
CAViT: Contextual alignment vision transformer for video object re-identification. In European Conference on Computer Vision (ECCV), pages 549–566. Springer, 2022. 8
[41] Xiangyu Xu and Chen Change Loy. 3D human texture estimation from a single image with transformers. In IEEE International Conference on Computer Vision (ICCV), pages 13849-13858, 2021. 6, 7, 3
[42] Xiangyu Xu, Hao Chen, Francesc Moreno-Noguer, László A Jeni, and Fernando De la Torre. 3D human shape and pose from a single low-resolution image with self-supervised learning. In European Conference on Computer Vision (ECCV), pages 284-300. Springer, 2020. 3
[43] Yichao Yan, Jie Qin, Jiaxin Chen, Li Liu, Fan Zhu, Ying Tai, and Ling Shao. Learning multi-granular hypergraphs for video-based person re-identification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2899–2908, 2020. 2, 8
[44] Dingqiang Ye, Chao Fan, Jingzhe Ma, Xiaoming Liu, and Shiqi Yu. BigGait: Learning gait representation you want by large vision models. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 200-210, 2024. 6, 3
[45] Chenyang Yu, Xuehu Liu, Yingquan Wang, Pingping Zhang, and Huchuan Lu. TF-CLIP: Learning text-free CLIP for video-based person re-identification. In AAAI Conference on Artificial Intelligence, pages 6764–6772, 2024. 2, 3, 5
[46] Quan Zhang, Lei Wang, Vishal M. Patel, Xiaohua Xie, and Jianhuang Lai. View-decoupled transformer for person re-identification under aerial-ground camera network. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 22000–22009, 2024. 1
[47] Shizhou Zhang, Wenlong Luo, De Cheng, Qingchun Yang, Lingyan Ran, Yinghui Xing, and Yanning Zhang. Cross-platform video person ReID: A new benchmark dataset and adaptation approach. In European Conference on Computer Vision (ECCV), 2024. 1, 2, 3, 4, 7
[48] Tianyu Zhang, Longhui Wei, Lingxi Xie, Zijie Zhuang, Yongfei Zhang, Bo Li, and Qi Tian. Spatiotemporal transformer for video-based person re-identification. arXiv:2103.16469, 2021. 8
[49] Yufei Zhang, Jeffrey O Kephart, Zijun Cui, and Qiang Ji. PhysPT: Physics-aware pretrained transformer for estimating human dynamics from monocular videos. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 2305-2317, 2024. 5, 7, 3
[50] Zhizheng Zhang, Cuiling Lan, Wenjun Zeng, and Zhibo Chen. Multi-granularity reference-aided attentive feature aggregation for video-based person re-identification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 10407-10416, 2020. 1
[51] L. Zheng, Z. Bie, Y. Sun, J. Wang, C. Su, S. Wang, and Q. Tian. MARS: A video benchmark for large-scale person re-identification. In European Conference on Computer Vision (ECCV), pages 868–884, 2016. 1, 2, 3, 7
[52] Zhedong Zheng, Xiaohan Wang, Nenggan Zheng, and Yi Yang. Parameter-efficient person re-identification in the 3D space. IEEE Transactions on Neural Networks and Learning Systems, 35(6):7534-7547, 2022. 6
[53] Haidong Zhu, Pranav Budhwant, Zhaoheng Zheng, and Ram Nevatia. SEAS: Shape-aligned supervision for person re-identification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 164–174, 2024.
3 \ No newline at end of file diff --git a/2025/AG-VPReID_ A Challenging Large-Scale Benchmark for Aerial-Ground Video-based Person Re-Identification/images.zip b/2025/AG-VPReID_ A Challenging Large-Scale Benchmark for Aerial-Ground Video-based Person Re-Identification/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..90aae3a79c933ac3891b0ea5c216eb7594c7e868 --- /dev/null +++ b/2025/AG-VPReID_ A Challenging Large-Scale Benchmark for Aerial-Ground Video-based Person Re-Identification/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9901aa5e40536fb62c09ce31257292e2f769f4cbc7c5719ece1d71a933985d43 +size 782622 diff --git a/2025/AG-VPReID_ A Challenging Large-Scale Benchmark for Aerial-Ground Video-based Person Re-Identification/layout.json b/2025/AG-VPReID_ A Challenging Large-Scale Benchmark for Aerial-Ground Video-based Person Re-Identification/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..ad449921a79116d3b10ac941fdb0b167f855c91d --- /dev/null +++ b/2025/AG-VPReID_ A Challenging Large-Scale Benchmark for Aerial-Ground Video-based Person Re-Identification/layout.json @@ -0,0 +1,7805 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 131, + 103, + 479, + 138 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 103, + 479, + 138 + ], + "spans": [ + { + "bbox": [ + 131, + 103, + 479, + 138 + ], + "type": "text", + "content": "AG-VPReID: A Challenging Large-Scale Benchmark for Aerial-Ground Video-based Person Re-Identification" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 144, + 160, + 466, + 188 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 144, + 160, + 466, + 188 + ], + "spans": [ + { + "bbox": [ + 144, + 160, + 466, + 188 + ], + "type": "text", + "content": "Huy Nguyen1, Kien Nguyen1, Akila Pemasiri1, Feng Liu2, Sridha Sridharan1, Clinton Fookes1" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 94, + 189, + 515, + 217 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 94, + 189, + 515, + 217 + ], + "spans": [ + { + "bbox": [ + 94, + 189, + 515, + 217 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 94, + 189, + 515, + 217 + ], + "type": "text", + "content": " School of Electrical Engineering and Robotics, Queensland University of Technology " + }, + { + "bbox": [ + 94, + 189, + 515, + 217 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 94, + 189, + 515, + 217 + ], + "type": "text", + "content": " Department of Computer Science, Drexel University" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 94, + 218, + 514, + 245 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 94, + 218, + 514, + 245 + ], + "spans": [ + { + "bbox": [ + 94, + 218, + 514, + 245 + ], + "type": "inline_equation", + "content": "^{1}\\{t497.nguyen, k.nguyenthanh, a.thondilege, s.sridharan, c.fookes\\}@qut.edu.au," + }, + { + "bbox": [ + 94, + 218, + 514, + 245 + ], + "type": "inline_equation", + "content": "^{2}f1397@drexel.edu" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 151, + 273, + 200, + 285 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 151, + 273, + 200, + 285 + ], + "spans": [ + { + "bbox": [ + 151, + 273, + 200, + 285 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 54, + 298, + 297, + 608 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 298, + 297, 
+ 608 + ], + "spans": [ + { + "bbox": [ + 54, + 298, + 297, + 608 + ], + "type": "text", + "content": "We introduce AG-VPReLU, a new large-scale dataset for aerial-ground video-based person re-identification (ReID) that comprises 6,632 subjects, 32,321 tracklets and over 9.6 million frames captured by drones (altitudes ranging from 15–120m), CCTV, and wearable cameras. This dataset offers a real-world benchmark for evaluating the robustness to significant viewpoint changes, scale variations, and resolution differences in cross-platform aerial-ground settings. In addition, to address these challenges, we propose AG-VPReLU-Net, an end-to-end framework composed of three complementary streams: (1) an Adapted Temporal-Spatial Stream addressing motion pattern inconsistencies and facilitating temporal feature learning, (2) a Normalized Appearance Stream leveraging physics-informed techniques to tackle resolution and appearance changes, and (3) a MultiScale Attention Stream handling scale variations across drone altitudes. We integrate visual-semantic cues from all streams to form a robust, viewpoint-invariant whole-body representation. Extensive experiments demonstrate that AG-VPReLU-Net outperforms state-of-the-art approaches on both our new dataset and existing video-based ReID benchmarks, showcasing its effectiveness and generalizability. Nevertheless, the performance gap observed on AG-VPReLU across all methods underscores the dataset's challenging nature. The dataset, code and trained models are available at AG-VPReLU-Net." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 56, + 633, + 135, + 645 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 633, + 135, + 645 + ], + "spans": [ + { + "bbox": [ + 56, + 633, + 135, + 645 + ], + "type": "text", + "content": "1. Introduction" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 55, + 653, + 296, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 653, + 296, + 715 + ], + "spans": [ + { + "bbox": [ + 55, + 653, + 296, + 715 + ], + "type": "text", + "content": "Video-based person re-identification (ReID) is a challenging and in-demand task, with significant real-world applications in surveillance, search and rescue operations, and urban monitoring [25, 35, 37]. While traditional ReID methods focus on ground-based cameras [7, 50], the inte" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 274, + 555, + 490 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 274, + 555, + 490 + ], + "spans": [ + { + "bbox": [ + 313, + 274, + 555, + 490 + ], + "type": "text", + "content": "gration of aerial perspectives through aerial-ground person ReID presents a paradigm shift in this field [32]. This approach enables the identification and matching of individuals across non-overlapping aerial and ground-based camera views, substantially enhancing situational awareness and response times in complex environments [31, 46]. The motivation behind this research stems from the increasing deployment of aerial platforms, such as unmanned aerial vehicles (UAVs), which provide unique vantage points that complement ground-based observations. However, the development of robust aerial-ground ReID systems faces a significant challenge: the scarcity of diverse and large-scale datasets that capture the nuances of both aerial and ground perspectives. 
As demonstrated by ImageNet [4], large and diverse benchmarks are crucial for deep learning-based methods, indicating a need for a comprehensive ReID dataset integrating multiple platforms, environments, and real-world challenges." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 498, + 557, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 498, + 557, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 498, + 557, + 715 + ], + "type": "text", + "content": "Initial efforts in aerial-ground person ReID have focused primarily on image-based tasks. For instance, Nguyen et al. [31] pioneer this area by releasing the first aerial-ground ReID dataset, which includes images from one drone and one CCTV camera capturing 21,983 images of 388 identities. They later expand the dataset to 100,502 images of 1,615 individuals [32]. Recently, Zhang et al. [46] collect a synthetic dataset named CARGO, containing 108,563 images representing 5,000 subjects, to complement real-world datasets. Within video-based tasks, Zhang et al. [47] collect a video-based dataset called G2A-VReID, which consists of 185,907 images and 5,576 tracklets from one drone and one CCTV camera, featuring 2,788 identities. Despite these advancements, G2A-VReID remains modest in scale compared to ground-based datasets like MARS [51], which includes over 20,000 tracklets and 1.19 million frames from six cameras. While current aerial-ground datasets are valuable, increasing identity variation and environmental diversity would" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "spans": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "text", + "content": "CVF" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "spans": [ + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "type": "text", + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 296, + 748, + 314, + 758 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 296, + 748, + 314, + 758 + ], + "spans": [ + { + "bbox": [ + 296, + 748, + 314, + 758 + ], + "type": "text", + "content": "1241" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 73, + 70, + 535, + 170 + ], + "blocks": [ + { + "bbox": [ + 73, + 70, + 535, + 170 + ], + "lines": [ + { + "bbox": [ + 73, + 70, + 535, + 170 + ], + "spans": [ + { + "bbox": [ + 73, + 70, + 535, + 170 + ], + "type": "table", + "html": "
DatasetYear#Identities#Tracklets#Frames (M)#CVCCAtt.PlatformDur.Altitude (m)
GroundWearableAerial
MARS [51]20161,26120,4781.196XXXX--
LS-VID [18]20193,77214,9432.9815XXXX4-
VCCR [10]20223924,3840.151XXX90-
CCVID [8]20222262,8560.341XXX--
MEVID [3]20231588,09210.4633XXX73-
P-Destre [17]20202531,8940.101XX-5-6
G2A-VReID [47]20242,7885,5760.182XXX-20 - 60
AG-VPReID20246,63232,3219.662015 - 120
", + "image_path": "b623f5961d8cb0edf5422fc8ba8d42cdcaefa3eb6f894db4c2c684d0053a4a40.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 173, + 553, + 194 + ], + "lines": [ + { + "bbox": [ + 55, + 173, + 553, + 194 + ], + "spans": [ + { + "bbox": [ + 55, + 173, + 553, + 194 + ], + "type": "text", + "content": "Table 1. Comparison of AG-VPReID with existing video-based person ReID datasets. Above: ground-based datasets, Below: aerial-based datasets. CV: Camera Views, CC: Clothes-Change, Att.: Attributes (Soft-biometrics annotations), Dur.: Duration (days)." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 55, + 215, + 274, + 227 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 215, + 274, + 227 + ], + "spans": [ + { + "bbox": [ + 55, + 215, + 274, + 227 + ], + "type": "text", + "content": "improve model robustness for real-world applications." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 54, + 228, + 295, + 456 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 228, + 295, + 456 + ], + "spans": [ + { + "bbox": [ + 54, + 228, + 295, + 456 + ], + "type": "text", + "content": "In light of this, we introduce AG-VPReID, a comprehensive large-scale benchmark dataset for Aerial-Ground Video-based Person ReID. AG-VPReID comprises 6,632 subjects, 32,321 tracklets, and over 9.6 million frames, captured across multiple dates and times of day using a combination of three platforms: aerial drones operating at various altitudes (15-120m), stationary CCTV cameras and wearable mobile cameras. This dataset significantly surpasses existing video-based ReID benchmarks in terms of scale, diversity, and real-world applicability with the highest number of identities, the highest number of tracklets, the highest drone flying altitudes, and the most diverse platforms. The key characteristics of AG-VPReID include: drastic view changes between aerial and ground perspectives; a large number of annotated identities across multiple sessions; rich outdoor scenarios with varying environmental conditions; significant differences in resolution between aerial and ground footage; and both controlled scenarios with clothing changes and in-the-wild pedestrian traffic." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 460, + 295, + 579 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 460, + 295, + 579 + ], + "spans": [ + { + "bbox": [ + 55, + 460, + 295, + 579 + ], + "type": "text", + "content": "Aerial-ground person ReID presents unique challenges due to significant appearance variations between aerial and ground-level views. These variations include extreme viewpoint differences, drastic changes in resolution and scale, partial occlusions, and temporal discontinuities caused by high-flying altitudes and long-range captures. Traditional video-based person ReID methods, although effective in ground-based settings [26, 45], often struggle in aerial-ground scenarios due to the complex combination of inconsistent motion patterns and the aforementioned variations." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 582, + 295, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 582, + 295, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 582, + 295, + 714 + ], + "type": "text", + "content": "To address these challenges, we introduce AG-VPReID-Net, an end-to-end framework for Aerial-Ground Video-based Person Re-Identification. Unlike existing state-of-the-art methods focused on single-view or ground scenarios, AG-VPReID-Net features three complementary streams tailored for aerial-ground challenges: i) An Adapted Temporal-Spatial Stream enhances traditional temporal modeling by integrating identity-specific memory and temporal shape analysis. This improves the extraction of consistent motion patterns and body shape representations, addressing the temporal discontinuity and motion" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 313, + 214, + 555, + 430 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 214, + 555, + 430 + ], + "spans": [ + { + "bbox": [ + 313, + 214, + 555, + 430 + ], + "type": "text", + "content": "inconsistencies of aerial footage, outperforming standard LSTM [6, 14] or 3D CNN [20] approaches; ii) A Normalized Appearance Stream addresses the resolution and appearance differences between aerial and ground views by using UV maps aggregation across frames for a normalized appearance representation. This provides robustness against pose changes, viewpoint shifts, and varying image quality, excelling where current appearance-based methods [19, 29] falter; and iii) A Multi-Scale Attention Stream addresses scale variations inherent in aerial-ground data by incorporating multi-scale feature extraction, motion analysis, temporal context, and a transformer decoder, effectively improving identification across drone altitudes compared to single-scale [13, 43] methods. By integrating these streams, AG-VPReID-Net offers incremental improvements in aerial-ground video-based re-identification, highlighting its potential in addressing this challenging scenario." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 326, + 430, + 533, + 441 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 326, + 430, + 533, + 441 + ], + "spans": [ + { + "bbox": [ + 326, + 430, + 533, + 441 + ], + "type": "text", + "content": "In summary, our main contributions are as follows:" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 442, + 553, + 597 + ], + "type": "list", + "angle": 0, + "index": 11, + "blocks": [ + { + "bbox": [ + 313, + 442, + 553, + 490 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 442, + 553, + 490 + ], + "spans": [ + { + "bbox": [ + 313, + 442, + 553, + 490 + ], + "type": "text", + "content": "(1) We introduce AG-VPReID, a challenging large-scale benchmark for aerial-ground video-based person ReID, bridging the gap with a diverse dataset that captures nuanced challenges from both aerial and ground perspectives." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 491, + 553, + 548 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 491, + 553, + 548 + ], + "spans": [ + { + "bbox": [ + 313, + 491, + 553, + 548 + ], + "type": "text", + "content": "(2) We propose AG-VPReLUID-Net, an innovative end-to-end framework that integrates adapted temporal-spatial processing, normalized appearance representation, and multiscale attention mechanisms to effectively address the challenges of aerial-ground ReID." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 550, + 553, + 597 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 550, + 553, + 597 + ], + "spans": [ + { + "bbox": [ + 313, + 550, + 553, + 597 + ], + "type": "text", + "content": "(3) AG-VPReLU-Net sets new state-of-the-art performance on the AG-VPReLU and existing video-based ReID benchmarks, demonstrating our approach's effectiveness and generalizability across different settings." + } + ] + } + ], + "index": 10 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 313, + 609, + 389, + 621 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 609, + 389, + 621 + ], + "spans": [ + { + "bbox": [ + 313, + 609, + 389, + 621 + ], + "type": "text", + "content": "2. Prior Work" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 629, + 555, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 629, + 555, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 629, + 555, + 715 + ], + "type": "text", + "content": "Video-based Person ReID Datasets. Existing person ReID datasets are numerous but severely lack the ability to address real-world challenges, particularly in aerial-ground scenarios. Ground-based datasets like MARS [51] and LSVID [18] provide large-scale benchmarks but focus mainly on ground perspectives, highlighting the need for multi-platform surveillance datasets. The inclusion of clothing" + } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 296, + 748, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 296, + 748, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 296, + 748, + 315, + 757 + ], + "type": "text", + "content": "1242" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 82, + 71, + 531, + 335 + ], + "blocks": [ + { + "bbox": [ + 82, + 71, + 531, + 335 + ], + "lines": [ + { + "bbox": [ + 82, + 71, + 531, + 335 + ], + "spans": [ + { + "bbox": [ + 82, + 71, + 531, + 335 + ], + "type": "image", + "image_path": "4be2113499ae6adece19cf4a8306ac03f748d2eb26b2f3e4a6eabd1218630242.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 54, + 342, + 555, + 376 + ], + "lines": [ + { + "bbox": [ + 54, + 342, + 555, + 376 + ], + "spans": [ + { + "bbox": [ + 54, + 342, + 555, + 376 + ], + "type": "text", + "content": "Figure 1. Our AG-VPReID dataset was captured using a variety of six cameras, including aerial drones, CCTVs, and GoPros. Sample images and camera locations are illustrated on the right side of the figure. The left side depicts the cross-camera appearance variations of two pedestrians, showcasing differences across various sessions and times of the day." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 54, + 396, + 297, + 601 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 396, + 297, + 601 + ], + "spans": [ + { + "bbox": [ + 54, + 396, + 297, + 601 + ], + "type": "text", + "content": "changes in datasets such as MEVID [3], CCVID [8], and VCCR [10] represents progress in addressing real-world challenges, though they could benefit from more identities and diverse environments. The BRIAR dataset [2], while featuring 1,000 subjects and UAV footage, primarily targets face recognition with restricted access. 
P-Destre [17] pioneered aerial view exploration, though its use of a single drone at lower altitudes (5-6m) creates opportunities for datasets covering higher operational altitudes more common in surveillance applications. The G2A-VReID dataset by Zhang et al. [47] represents an important step in combining aerial and ground views. While innovative, it contains 2,788 identities within a " + }, + { + "bbox": [ + 54, + 396, + 297, + 601 + ], + "type": "inline_equation", + "content": "20 - 60\\mathrm{m}" + }, + { + "bbox": [ + 54, + 396, + 297, + 601 + ], + "type": "text", + "content": " altitude range using 2 cameras, suggesting opportunities for future datasets to expand in scale, altitude diversity, camera count, and environmental variety. Tab. 1 compares our AG-VPReID dataset with others across multiple dimensions." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 605, + 296, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 605, + 296, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 605, + 296, + 714 + ], + "type": "text", + "content": "Video-based Person ReID. Video-based person ReID methods have evolved to leverage both spatial and temporal cues. Early approaches used recurrent neural networks and 3D convolutional networks [20, 30], while later works incorporated temporal pooling [51] and attention mechanisms [25]. Recent advancements include temporal complementary learning [13], Transformer-based architectures [11, 28], and techniques addressing cross-platform and cloth-changing scenarios [10, 39, 47]. The" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 313, + 396, + 556, + 540 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 396, + 556, + 540 + ], + "spans": [ + { + "bbox": [ + 313, + 396, + 556, + 540 + ], + "type": "text", + "content": "AG-ReID 2023 Challenge [33] highlighted aerial-ground ReID challenges, with winners employing re-ranking, data augmentation, and centroid-based representations. Recent works include Instruct-ReID [12] with instruction-guided retrieval, Domain Shifting [15] for distribution adaptation, and SEAS [53] using 3D body shape guidance. Despite these developments, most existing methods employ uni-modal frameworks trained on predefined label sets. In contrast, recent work has proposed a visual-language multi-modal learning paradigm [45], potentially offering more flexibility and robustness in feature representation for video-based person ReID." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 313, + 550, + 436, + 562 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 550, + 436, + 562 + ], + "spans": [ + { + "bbox": [ + 313, + 550, + 436, + 562 + ], + "type": "text", + "content": "3. AG-VPReID Dataset" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 313, + 570, + 556, + 640 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 570, + 556, + 640 + ], + "spans": [ + { + "bbox": [ + 313, + 570, + 556, + 640 + ], + "type": "text", + "content": "This section offers a detailed overview of the creation process for our AG-VPReID dataset. We describe the methods used for collecting video footage in Sec. 3.1. Sec. 3.2 introduces our annotation procedures. Sec. 3.3 compares AG-VPReID with existing datasets, highlighting its unique features." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 313, + 648, + 423, + 659 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 648, + 423, + 659 + ], + "spans": [ + { + "bbox": [ + 313, + 648, + 423, + 659 + ], + "type": "text", + "content": "3.1. Dataset Collection" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 665, + 556, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 665, + 556, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 665, + 556, + 715 + ], + "type": "text", + "content": "The AG-VPReLU dataset was collected over a period of 20 days, including 10 morning sessions (8:30am-10:00am) and 10 afternoon sessions (3:30pm-5:00pm), with each session lasting 60 minutes. Data capture involved two drones, two" + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 296, + 748, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 296, + 748, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 296, + 748, + 315, + 757 + ], + "type": "text", + "content": "1243" + } + ] + } + ], + "index": 9 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 296, + 240 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 296, + 240 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 296, + 240 + ], + "type": "text", + "content": "CCTV cameras, and two wearable cameras. Each drone operated at four different altitudes—15m, 30m, 80m, and 120m—for 15 minutes per session, providing a comprehensive range of aerial views. In total, the dataset comprises 240 hours of video footage, documented in Tab. 2, which includes detailed specifications of the equipment used. The dataset features diverse resolutions, frame rates, and perspectives, extending from ground level to 120-meter aerial views. Fig. 1 shows example frames across different viewpoints and image qualities from different platforms. The drones, CCTV, and wearable devices are positioned to view individuals from different angles, as illustrated in Fig. 1, forcing person ReID models to learn robust multiview and partial-view representations to be effective." + } + ] + } + ], + "index": 0 + }, + { + "type": "table", + "bbox": [ + 96, + 251, + 254, + 330 + ], + "blocks": [ + { + "bbox": [ + 96, + 251, + 254, + 330 + ], + "lines": [ + { + "bbox": [ + 96, + 251, + 254, + 330 + ], + "spans": [ + { + "bbox": [ + 96, + 251, + 254, + 330 + ], + "type": "table", + "html": "
TypeModelResolutionLensFPS
CCTVBosch (Outdoor)704 × 48024mm15
Bosch (Indoor)1280 × 72018mm25
WearableGoPro10 (Front)3840 × 216016mm30
GoPro10 (Side)1920 × 108016mm60
DronesDJI Inspire23840 × 216024mm25
DJI M300RTK8192 × 546035mm1
", + "image_path": "3f7a5aca82afd1bc55ba9a25658b3e53a4b488eb88b37ba28e972a6d5c8c92b7.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "bbox": [ + 71, + 332, + 279, + 342 + ], + "lines": [ + { + "bbox": [ + 71, + 332, + 279, + 342 + ], + "spans": [ + { + "bbox": [ + 71, + 332, + 279, + 342 + ], + "type": "text", + "content": "Table 2. Equipment specifications for the AG-VPReID dataset." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 55, + 357, + 296, + 417 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 357, + 296, + 417 + ], + "spans": [ + { + "bbox": [ + 55, + 357, + 296, + 417 + ], + "type": "text", + "content": "To ensure professional drone operations, a specialized team (one RPAS engineer, one Chief Remote Pilot, one RPAS technician) managed all 20 data collection days, handling flights, capturing aerial footage, and performing initial data preprocessing." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 426, + 157, + 438 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 426, + 157, + 438 + ], + "spans": [ + { + "bbox": [ + 55, + 426, + 157, + 438 + ], + "type": "text", + "content": "3.2. Labeling Process" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 444, + 296, + 587 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 444, + 296, + 587 + ], + "spans": [ + { + "bbox": [ + 55, + 444, + 296, + 587 + ], + "type": "text", + "content": "The AG-VPReID dataset uses YOLOv8x for person detection and tracking [16], extracting images from all frames across multiple cameras. It includes both short-term and long-term ReID scenarios, the latter including instances where participants change clothes to test long-term identity persistence. Identity matching was performed by a team of expert annotators, supported by research assistants, to ensure both accuracy and consistency in the matching process. Following [32], we manually annotated each identity with 15 selective soft-biometrics attributes to enhance the dataset's utility for attribute-based person ReID applications. For a detailed list of these attributes, refer to 8.1." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 596, + 188, + 608 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 596, + 188, + 608 + ], + "spans": [ + { + "bbox": [ + 55, + 596, + 188, + 608 + ], + "type": "text", + "content": "3.3. Dataset Characteristics" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 614, + 295, + 638 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 614, + 295, + 638 + ], + "spans": [ + { + "bbox": [ + 55, + 614, + 295, + 638 + ], + "type": "text", + "content": "Compared with existing video-based ReID datasets, our AG-VPReID dataset has five unique characteristics:" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 642, + 296, + 712 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 642, + 296, + 712 + ], + "spans": [ + { + "bbox": [ + 55, + 642, + 296, + 712 + ], + "type": "text", + "content": "1) The highest number of identities and tracklets. AGVPReID sets a new benchmark with 6,632 unique identities and 32,321 tracklets, substantially surpassing existing datasets. 
It holds nearly twice as many identities as prominent ground-based datasets such as LS-VID [18], which contains 3,772 identities, and more than six times" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 72, + 553, + 156 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 72, + 553, + 156 + ], + "spans": [ + { + "bbox": [ + 313, + 72, + 553, + 156 + ], + "type": "text", + "content": "the tracklets of the next largest aerial-ground dataset, G2A-VReID [47], which includes 5,576 tracklets. Additionally, AG-VPReID encompasses over 9.6 million frames, providing 50 times the volume of frames compared to other aerial-ground datasets. While MEVID [3] offers a higher frame count (10.46 million), it does not match AG-VPReID in terms of identity and tracklet numbers." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 159, + 555, + 542 + ], + "type": "list", + "angle": 0, + "index": 14, + "blocks": [ + { + "bbox": [ + 313, + 159, + 553, + 256 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 159, + 553, + 256 + ], + "spans": [ + { + "bbox": [ + 313, + 159, + 553, + 256 + ], + "type": "text", + "content": "2) The most diverse platforms. Our AG-VPReID dataset is the first to incorporate aerial, ground, and wearable platforms for video-based person ReID. The inclusion of wearable cameras provides a novel dimension with high-quality first-person perspectives. This combination results in extreme variations in resolution and subject size across platforms: UAV (18×37 to 293×542 pixels), CCTV (22×23 to 172×413 pixels), and GoPro (25×48 to 483×1078 pixels)." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 259, + 553, + 355 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 259, + 553, + 355 + ], + "spans": [ + { + "bbox": [ + 313, + 259, + 553, + 355 + ], + "type": "text", + "content": "3) The highest flying altitudes. AG-VPReID features footage from altitudes reaching up to 120 meters, exceeding existing datasets' 60-meter maximum [47]. This introduces challenges: 1) extreme viewpoints with perspective distortions; 2) multiple scales with varying resolutions; and 3) image quality issues (Fig. 2). We used two drones—one with a wide-angle camera for area monitoring and another with a narrow-angle camera for detailed observation." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 359, + 555, + 430 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 359, + 555, + 430 + ], + "spans": [ + { + "bbox": [ + 313, + 359, + 555, + 430 + ], + "type": "text", + "content": "4) Rich outdoor scenarios with real-world challenges. Our AG-VPReID dataset presents diverse outdoor campus scenarios with real-world challenges including complex occlusions, varied poses from different activities, and uniform-wearing individuals. Fig. 2 shows examples of these diverse scenarios." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 434, + 554, + 542 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 434, + 554, + 542 + ], + "spans": [ + { + "bbox": [ + 313, + 434, + 554, + 542 + ], + "type": "text", + "content": "5) Other notable characteristics. AG-VPReID includes comprehensive attributes for each identity (gender, age, clothing style, accessories), enabling fine-grained analysis. The dataset features long-term tracking data of 14 diverse individuals recorded across multiple days, each wearing different clothing per session to capture real-world variations. 
We also provide camera calibration information and GPS coordinates to support multi-camera tracking research. See supplementary material Sec. 8 for details." + } + ] + } + ], + "index": 13 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 314, + 552, + 425, + 565 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 552, + 425, + 565 + ], + "spans": [ + { + "bbox": [ + 314, + 552, + 425, + 565 + ], + "type": "text", + "content": "3.4. Ethics and Privacy" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 313, + 571, + 553, + 620 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 571, + 553, + 620 + ], + "spans": [ + { + "bbox": [ + 313, + 571, + 553, + 620 + ], + "type": "text", + "content": "This research received ethics approval for data collection and usage. We implemented \"Deface\" [5] to blur faces, secured data storage, and obtained informed consent from all participants. Details are available at our project repository." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 313, + 632, + 416, + 645 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 632, + 416, + 645 + ], + "spans": [ + { + "bbox": [ + 313, + 632, + 416, + 645 + ], + "type": "text", + "content": "4. AG-VPReID-Net" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 313, + 654, + 553, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 654, + 553, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 654, + 553, + 713 + ], + "type": "text", + "content": "We propose AG-VPReID-Net, a purpose-built framework addressing aerial-ground ReID's unique challenges. In particular, we propose an Adapted Temporal-Spatial Stream for robust temporal-spatial representations to deal with the temporal discontinuity challenge caused by drone motion" + } + ] + } + ], + "index": 18 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 296, + 748, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 296, + 748, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 296, + 748, + 315, + 757 + ], + "type": "text", + "content": "1244" + } + ] + } + ], + "index": 19 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 88, + 72, + 282, + 219 + ], + "blocks": [ + { + "bbox": [ + 69, + 104, + 80, + 182 + ], + "lines": [ + { + "bbox": [ + 69, + 104, + 80, + 182 + ], + "spans": [ + { + "bbox": [ + 69, + 104, + 80, + 182 + ], + "type": "text", + "content": "Aerial Camera" + } + ] + } + ], + "index": 0, + "angle": 270, + "type": "image_caption" + }, + { + "bbox": [ + 88, + 72, + 282, + 219 + ], + "lines": [ + { + "bbox": [ + 88, + 72, + 282, + 219 + ], + "spans": [ + { + "bbox": [ + 88, + 72, + 282, + 219 + ], + "type": "image", + "image_path": "f5041b8ee0d43fbe3c67dae2d976f3ba9842453939e1d8967e949f57549b5c4a.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 88, + 227, + 173, + 278 + ], + "blocks": [ + { + "bbox": [ + 88, + 227, + 173, + 278 + ], + "lines": [ + { + "bbox": [ + 88, + 227, + 173, + 278 + ], + "spans": [ + { + "bbox": [ + 88, + 227, + 173, + 278 + ], + "type": "image", + "image_path": "84bc2bab5ce1bb227c0c9a5047b2f2878eb081be68533d2c7c1ab3b44ed85f14.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 99, + 280, + 
170, + 319 + ], + "lines": [ + { + "bbox": [ + 99, + 280, + 170, + 319 + ], + "spans": [ + { + "bbox": [ + 99, + 280, + 170, + 319 + ], + "type": "image", + "image_path": "3f33b18445e2169b6dabdd3509ce0cd75de375a821bdd874e2b3a78cde0291b6.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 194, + 229, + 269, + 277 + ], + "blocks": [ + { + "bbox": [ + 194, + 229, + 269, + 277 + ], + "lines": [ + { + "bbox": [ + 194, + 229, + 269, + 277 + ], + "spans": [ + { + "bbox": [ + 194, + 229, + 269, + 277 + ], + "type": "image", + "image_path": "9c10adb7d47f8c23d47fbb8ad49cdb776c6fe02c6359b6e37345ed9339d91b6b.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 195, + 277, + 269, + 340 + ], + "blocks": [ + { + "bbox": [ + 195, + 277, + 269, + 340 + ], + "lines": [ + { + "bbox": [ + 195, + 277, + 269, + 340 + ], + "spans": [ + { + "bbox": [ + 195, + 277, + 269, + 340 + ], + "type": "image", + "image_path": "6a94c1e0b5aea0e6901cab8b1be05a0a065928e010a123d3a8fd8b5158dc5c3a.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 192, + 342, + 275, + 354 + ], + "lines": [ + { + "bbox": [ + 192, + 342, + 275, + 354 + ], + "spans": [ + { + "bbox": [ + 192, + 342, + 275, + 354 + ], + "type": "text", + "content": "(f) Clothing similarity" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 55, + 369, + 296, + 415 + ], + "lines": [ + { + "bbox": [ + 55, + 369, + 296, + 415 + ], + "spans": [ + { + "bbox": [ + 55, + 369, + 296, + 415 + ], + "type": "text", + "content": "Figure 2. The AG-VPReID dataset presents several key challenges: extreme viewpoints, varying resolutions and subject sizes, pose/illumination variations, occlusions, and similar clothing among subjects." + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 436, + 296, + 520 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 436, + 296, + 520 + ], + "spans": [ + { + "bbox": [ + 55, + 436, + 296, + 520 + ], + "type": "text", + "content": "from unstable tracking between frames. We propose a Normalized Appearance Stream to deal with the resolution and appearance changes caused by extreme viewpoint shifts. To deal with altitude-driven scale variance, we introduce a Multi-Scale Attention Stream. Fig. 3 illustrates our architecture. Detailed stream contributions are provided in Tab. 9 of the supplementary material." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 55, + 528, + 290, + 541 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 528, + 290, + 541 + ], + "spans": [ + { + "bbox": [ + 55, + 528, + 290, + 541 + ], + "type": "text", + "content": "4.1. Stream 1: Adapted Temporal-Spatial Stream" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 55, + 545, + 296, + 641 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 545, + 296, + 641 + ], + "spans": [ + { + "bbox": [ + 55, + 545, + 296, + 641 + ], + "type": "text", + "content": "When performing video-based person ReID, a key challenge is handling inconsistent motion patterns and temporal gaps between video frames. 
To address this, we propose an Adapted Temporal-Spatial Stream that combines CLIP's visual encoder with temporal and 3D shape modeling to create a comprehensive representation of individuals. Our method operates on a sequence " + }, + { + "bbox": [ + 55, + 545, + 296, + 641 + ], + "type": "inline_equation", + "content": "\mathcal{V}" + }, + { + "bbox": [ + 55, + 545, + 296, + 641 + ], + "type": "text", + "content": " of " + }, + { + "bbox": [ + 55, + 545, + 296, + 641 + ], + "type": "inline_equation", + "content": "T" + }, + { + "bbox": [ + 55, + 545, + 296, + 641 + ], + "type": "text", + "content": " frames through the following components:" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 55, + 644, + 296, + 669 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 644, + 296, + 669 + ], + "spans": [ + { + "bbox": [ + 55, + 644, + 296, + 669 + ], + "type": "text", + "content": "Visual Feature Extraction: Using CLIP's visual encoder " + }, + { + "bbox": [ + 55, + 644, + 296, + 669 + ], + "type": "inline_equation", + "content": "E_{v}(\cdot)" + }, + { + "bbox": [ + 55, + 644, + 296, + 669 + ], + "type": "text", + "content": ", we extract frame-level features," + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 111, + 679, + 295, + 692 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 679, + 295, + 692 + ], + "spans": [ + { + "bbox": [ + 111, + 679, + 295, + 692 + ], + "type": "interline_equation", + "content": "F_{t} = E_{v}\left(\mathcal{I}_{t}\right), \quad t \in \{1, \dots, T\}, \tag{1}", + "image_path": "3d3fcae6666dc66120e9edf0f4743cc34e2ccf1a36b6b08dd2d3fdf4deddb202.jpg" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 55, + 701, + 261, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 701, + 261, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 701, + 261, + 713 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 55, + 701, + 261, + 713 + ], + "type": "inline_equation", + "content": "\mathcal{I}_t" + }, + { + "bbox": [ + 55, + 701, + 261, + 713 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 701, + 261, + 713 + ], + "type": "inline_equation", + "content": "F_{t}" + }, + { + "bbox": [ + 55, + 701, + 261, + 713 + ], + "type": "text", + "content": " are the " + }, + { + "bbox": [ + 55, + 701, + 261, + 713 + ], + "type": "inline_equation", + "content": "t" + }, + { + "bbox": [ + 55, + 701, + 261, + 713 + ], + "type": "text", + "content": "-th frame and its features."
+ } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 72, + 553, + 96 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 72, + 553, + 96 + ], + "spans": [ + { + "bbox": [ + 313, + 72, + 553, + 96 + ], + "type": "text", + "content": "Temporal Processing: We incorporate temporal modeling through two key components:" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 313, + 96, + 553, + 120 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 96, + 553, + 120 + ], + "spans": [ + { + "bbox": [ + 313, + 96, + 553, + 120 + ], + "type": "text", + "content": "1) Temporal 3D Shape Modeling (TSM): Following [34], we extract 3D shape representations," + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 340, + 129, + 553, + 143 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 340, + 129, + 553, + 143 + ], + "spans": [ + { + "bbox": [ + 340, + 129, + 553, + 143 + ], + "type": "interline_equation", + "content": "g _ {t} = \\operatorname {G R U} \\left(F _ {t}, g _ {t - 1}\\right), \\beta_ {t} = 3 \\mathrm {D} \\operatorname {R e g u s s o r} \\left(g _ {t}\\right), \\tag {2}", + "image_path": "843e5783767ea9251b95682cbe7c02ca8b7a08000e64fb86d38a37c619c5578c.jpg" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 313, + 152, + 553, + 176 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 152, + 553, + 176 + ], + "spans": [ + { + "bbox": [ + 313, + 152, + 553, + 176 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 152, + 553, + 176 + ], + "type": "inline_equation", + "content": "g_{t}" + }, + { + "bbox": [ + 313, + 152, + 553, + 176 + ], + "type": "text", + "content": " captures temporal dynamics and " + }, + { + "bbox": [ + 313, + 152, + 553, + 176 + ], + "type": "inline_equation", + "content": "\\beta_{t}" + }, + { + "bbox": [ + 313, + 152, + 553, + 176 + ], + "type": "text", + "content": " represents SMPL model parameters." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 313, + 176, + 553, + 212 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 176, + 553, + 212 + ], + "spans": [ + { + "bbox": [ + 313, + 176, + 553, + 212 + ], + "type": "text", + "content": "2) Temporal Feature Enhancement (TFE): Adapting from [45], we enhance features by combining appearance and shape," + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 370, + 222, + 553, + 236 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 370, + 222, + 553, + 236 + ], + "spans": [ + { + "bbox": [ + 370, + 222, + 553, + 236 + ], + "type": "interline_equation", + "content": "F _ {\\text {e n h a n c e d}} = \\operatorname {T F E} \\left(F _ {1: T}, \\beta_ {1: T}\\right). 
\\tag {3}", + "image_path": "8c3d483b14bb427a75cb540f9f8cf4141871bce17640545779c27eda3ab1ffc6.jpg" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 313, + 243, + 553, + 268 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 243, + 553, + 268 + ], + "spans": [ + { + "bbox": [ + 313, + 243, + 553, + 268 + ], + "type": "text", + "content": "Identity-Aware Processing: We incorporate identity information through," + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 368, + 276, + 520, + 307 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 368, + 276, + 520, + 307 + ], + "spans": [ + { + "bbox": [ + 368, + 276, + 520, + 307 + ], + "type": "interline_equation", + "content": "\\mathcal {M} _ {y _ {i}} = \\frac {1}{N _ {y _ {i}}} \\sum_ {V \\in \\mathcal {V} _ {y _ {i}}} \\operatorname {T A P} \\left(F _ {\\text {e n h a n c e d}}\\right),", + "image_path": "06f6972f7554a77dfdcea3ea9f91f8d10889d37e6d52341e5c338541a184ac0a.jpg" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 348, + 309, + 497, + 324 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 348, + 309, + 497, + 324 + ], + "spans": [ + { + "bbox": [ + 348, + 309, + 497, + 324 + ], + "type": "interline_equation", + "content": "\\mathcal {M} _ {y _ {i}} ^ {\\text {r e f i n e d}} = \\operatorname {S S P} \\left(F _ {\\text {e n h a n c e d}}, \\mathcal {M} _ {y _ {i}}\\right),", + "image_path": "382e26e9391ebd71f4b004c43d0aef1faec80c6fb49a9b86ee099d63a6d983b8.jpg" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 351, + 326, + 553, + 341 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 351, + 326, + 553, + 341 + ], + "spans": [ + { + "bbox": [ + 351, + 326, + 553, + 341 + ], + "type": "interline_equation", + "content": "R _ {s t r e a m 1} = \\left[ F _ {e n h a n c e d}; \\mathcal {M} _ {y _ {i}} ^ {r e f i n e d} \\right], \\tag {4}", + "image_path": "a5a592bf1345d24dfaf2e44e1520a8522c5e099cc2786f79a88c0bb1dfaadf88.jpg" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 313, + 350, + 554, + 398 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 350, + 554, + 398 + ], + "spans": [ + { + "bbox": [ + 313, + 350, + 554, + 398 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 350, + 554, + 398 + ], + "type": "inline_equation", + "content": "\\mathcal{M}_{y_i}" + }, + { + "bbox": [ + 313, + 350, + 554, + 398 + ], + "type": "text", + "content": " is the identity memory bank constructed through Temporal Average Pooling (TAP), and the Sequence-Specific Prompt (SSP) module refines this representation for the final output " + }, + { + "bbox": [ + 313, + 350, + 554, + 398 + ], + "type": "inline_equation", + "content": "R_{stream1}" + }, + { + "bbox": [ + 313, + 350, + 554, + 398 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 313, + 405, + 537, + 418 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 405, + 537, + 418 + ], + "spans": [ + { + "bbox": [ + 313, + 405, + 537, + 418 + ], + "type": "text", + "content": "4.2. 
Stream 2: Normalized Appearance Stream" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 313, + 422, + 554, + 494 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 422, + 554, + 494 + ], + "spans": [ + { + "bbox": [ + 313, + 422, + 554, + 494 + ], + "type": "text", + "content": "The Adapted Temporal-Spatial (ATS) stream provides robust temporal-spatial representation, but may not fully capture fine-grained appearance details across viewpoints, especially in aerial footage. To address this limitation, we propose a Normalized Appearance (NA) stream that effectively aggregates appearance details from multiple viewpoints." + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 313, + 495, + 554, + 601 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 495, + 554, + 601 + ], + "spans": [ + { + "bbox": [ + 313, + 495, + 554, + 601 + ], + "type": "text", + "content": "The NA stream normalizes and combines appearance information across frames using UVTexture maps and visibility masks. Our process involves: (1) Extracting UVTexture maps and visibility masks per frame, (2) Normalizing UVTexture maps brightness, (3) Aligning maps across frames, (4) Weighted aggregation using visibility masks, and (5) Generating the final normalized representation. The brightness normalization and weighted aggregation of UVTexture maps can be formulated as," + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 380, + 611, + 553, + 624 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 380, + 611, + 553, + 624 + ], + "spans": [ + { + "bbox": [ + 380, + 611, + 553, + 624 + ], + "type": "interline_equation", + "content": "T _ {i} ^ {\\text {n o r m}} = \\gamma (H (N (T _ {i}))), \\tag {5}", + "image_path": "0aac668843fd031af959fa0ca14842a4363a6594f6a3a2d11c81ee7d69fd6148.jpg" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 362, + 627, + 553, + 658 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 362, + 627, + 553, + 658 + ], + "spans": [ + { + "bbox": [ + 362, + 627, + 553, + 658 + ], + "type": "interline_equation", + "content": "T _ {\\text {a g g r e g a t e d}} = \\frac {\\sum_ {i = 1} ^ {N} V _ {i} \\odot T _ {i} ^ {\\text {n o r m}}}{\\sum_ {i = 1} ^ {N} V _ {i}}, \\tag {6}", + "image_path": "0e3000f5478f2cd0a5e1e5a98d9b58fd40e2f23b3b62de097f1347fa74223c83.jpg" + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 313, + 665, + 554, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 665, + 554, + 714 + ], + "spans": [ + { + "bbox": [ + 313, + 665, + 554, + 714 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 665, + 554, + 714 + ], + "type": "inline_equation", + "content": "T_{i}^{norm}" + }, + { + "bbox": [ + 313, + 665, + 554, + 714 + ], + "type": "text", + "content": " is the normalized UVTexture map for frame " + }, + { + "bbox": [ + 313, + 665, + 554, + 714 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 313, + 665, + 554, + 714 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 313, + 665, + 554, + 714 + ], + "type": "inline_equation", + "content": "N(\\cdot), H(\\cdot)" + }, + { + "bbox": [ + 313, + 665, + 554, + 714 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 313, + 665, + 554, + 714 + ], + "type": "inline_equation", + "content": "\\gamma(\\cdot)" + }, + { + "bbox": [ + 313, + 665, + 554, + 714 + ], + "type": "text", + "content": " are normalization, histogram matching 
and gamma correction functions, respectively. " + }, + { + "bbox": [ + 313, + 665, + 554, + 714 + ], + "type": "inline_equation", + "content": "T_{\text{aggregated}}" + }, + { + "bbox": [ + 313, + 665, + 554, + 714 + ], + "type": "text", + "content": " is the final aggregated map. We leverage PhysPT [49]" + } + ] + } + ], + "index": 30 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 296, + 748, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 296, + 748, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 296, + 748, + 315, + 757 + ], + "type": "text", + "content": "1245" + } + ] + } + ], + "index": 31 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 57, + 70, + 553, + 261 + ], + "blocks": [ + { + "bbox": [ + 57, + 70, + 553, + 261 + ], + "lines": [ + { + "bbox": [ + 57, + 70, + 553, + 261 + ], + "spans": [ + { + "bbox": [ + 57, + 70, + 553, + 261 + ], + "type": "image", + "image_path": "d1697fc18f796b09bfbaba2feb45ad8e162dff3d0f8bf44b83643ec2512766b0.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 55, + 269, + 555, + 301 + ], + "lines": [ + { + "bbox": [ + 55, + 269, + 555, + 301 + ], + "spans": [ + { + "bbox": [ + 55, + 269, + 555, + 301 + ], + "type": "text", + "content": "Figure 3. The three-stream AG-VPReID-Net architecture addresses aerial-ground ReID challenges: Temporal-Spatial stream for motion modeling and temporal features, Normalized Appearance for resolution/appearance variations, and Multi-Scale Attention for aerial-ground scale variations." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 323, + 296, + 372 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 323, + 296, + 372 + ], + "spans": [ + { + "bbox": [ + 55, + 323, + 296, + 372 + ], + "type": "text", + "content": "for pose estimation and Texformer [41] to generate UV maps from PhysPT's output 3D meshes. The maps are improved through inter-frame consistency before feeding into the DGC Omni-scale Module [52]." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 381, + 266, + 393 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 381, + 266, + 393 + ], + "spans": [ + { + "bbox": [ + 55, + 381, + 266, + 393 + ], + "type": "text", + "content": "4.3. Stream 3: Multi-Scale Attention Stream" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 400, + 296, + 532 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 400, + 296, + 532 + ], + "spans": [ + { + "bbox": [ + 55, + 400, + 296, + 532 + ], + "type": "text", + "content": "While the ATS stream provides a robust temporal-spatial representation and the NA stream addresses viewpoint and occlusion challenges, aerial-ground person ReID still faces significant hurdles due to extreme scale variations between drone and ground-level footage. The first two streams effectively capture temporal dynamics, 3D shape information, and viewpoint-invariant appearance details, but they may not fully address the drastic scale differences inherent in aerial-ground scenarios. To complement the ATS stream and NA stream and address this limitation, we propose a Multi-Scale Attention (MSA) stream." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 533, + 296, + 640 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 533, + 296, + 640 + ], + "spans": [ + { + "bbox": [ + 55, + 533, + 296, + 640 + ], + "type": "text", + "content": "In detail, this stream leverages the power of frozen large vision models combined with lightweight, adaptive processing. Specifically, this stream utilizes a frozen large vision model to extract multi-scale features for video-based person ReID. By combining a lightweight Transformer decoder with a local temporal module, this approach dynamically integrates spatial and temporal information, thereby enhancing our framework's ability to accurately capture essential person-specific details." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 642, + 296, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 642, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 642, + 296, + 713 + ], + "type": "text", + "content": "Specifically, for each frame " + }, + { + "bbox": [ + 55, + 642, + 296, + 713 + ], + "type": "inline_equation", + "content": "\\mathcal{I}_t" + }, + { + "bbox": [ + 55, + 642, + 296, + 713 + ], + "type": "text", + "content": " within the sequence " + }, + { + "bbox": [ + 55, + 642, + 296, + 713 + ], + "type": "inline_equation", + "content": "\\nu y_{i}" + }, + { + "bbox": [ + 55, + 642, + 296, + 713 + ], + "type": "text", + "content": " the CLIP vision encoder [36] is employed to extract features independently. The process collects tokens from various layers at regular intervals to compile a detailed feature map that captures spatial correspondences. These frame feature maps are subsequently concatenated and assembled" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 313, + 323, + 555, + 443 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 323, + 555, + 443 + ], + "spans": [ + { + "bbox": [ + 313, + 323, + 555, + 443 + ], + "type": "text", + "content": "into a spatiotemporal feature volume " + }, + { + "bbox": [ + 313, + 323, + 555, + 443 + ], + "type": "inline_equation", + "content": "\\mathbf{G}" + }, + { + "bbox": [ + 313, + 323, + 555, + 443 + ], + "type": "text", + "content": ". Following the methods [23, 44], we integrate temporal information into this volume before processing it through a Transformer decoder. This decoder globally aggregates features across multiple layers, employing a video-level classification token as a query, with feature volumes from different layers of the backbone serving as keys and values. A linear layer then maps the output of the decoder's final block to produce class predictions. 
The operational dynamics of the Transformer decoder are outlined as follows," + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 318, + 478, + 551, + 491 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 318, + 478, + 551, + 491 + ], + "spans": [ + { + "bbox": [ + 318, + 478, + 551, + 491 + ], + "type": "interline_equation", + "content": "Y_{i} = \operatorname{Temp}_{i}([\mathbf{G}_{N-M+i,1}, \mathbf{G}_{N-M+i,2}, \dots, \mathbf{G}_{N-M+i,T}]),", + "image_path": "cc6495d3564f24bf79e541ee111dcaaee7fc55d8fa6fdf1e1e36b2adb9a840b3.jpg" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 321, + 493, + 457, + 506 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 321, + 493, + 457, + 506 + ], + "spans": [ + { + "bbox": [ + 321, + 493, + 457, + 506 + ], + "type": "interline_equation", + "content": "\tilde{q}_{i} = q_{i-1} + \mathrm{MHA}_{i}\left(q_{i-1}, Y_{i}, Y_{i}\right),", + "image_path": "059f11ee81ec394c8832273a8698dcc5d693bf1814476bf930e8bf699c12fab8.jpg" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 321, + 508, + 408, + 521 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 321, + 508, + 408, + 521 + ], + "spans": [ + { + "bbox": [ + 321, + 508, + 408, + 521 + ], + "type": "interline_equation", + "content": "q_{i} = \tilde{q}_{i} + \operatorname{MLP}_{i}(\tilde{q}_{i}),", + "image_path": "3d7eeb4405faed5e2627b67105e881087676ddc3fd1c37bce7a129867f0b6738.jpg" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 318, + 523, + 553, + 536 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 318, + 523, + 553, + 536 + ], + "spans": [ + { + "bbox": [ + 318, + 523, + 553, + 536 + ], + "type": "interline_equation", + "content": "f_{G} = \operatorname{FC}\left(q_{M}\right), \tag{7}", + "image_path": "f832a9dbe7c3d052b94b26747e2feb45a6607a5ab4807c4a6aa43e9ba2be65e8.jpg" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 570, + 555, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 570, + 555, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 570, + 555, + 713 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 570, + 555, + 713 + ], + "type": "inline_equation", + "content": "\mathbf{G}_{n,t}" + }, + { + "bbox": [ + 313, + 570, + 555, + 713 + ], + "type": "text", + "content": " represents the features of frame " + }, + { + "bbox": [ + 313, + 570, + 555, + 713 + ], + "type": "inline_equation", + "content": "t" + }, + { + "bbox": [ + 313, + 570, + 555, + 713 + ], + "type": "text", + "content": " extracted from the " + }, + { + "bbox": [ + 313, + 570, + 555, + 713 + ], + "type": "inline_equation", + "content": "n" + }, + { + "bbox": [ + 313, + 570, + 555, + 713 + ], + "type": "text", + "content": "-th layer of the CLIP vision encoder. The feature volume " + }, + { + "bbox": [ + 313, + 570, + 555, + 713 + ], + "type": "inline_equation", + "content": "Y_{i}" + }, + { + "bbox": [ + 313, + 570, + 555, + 713 + ], + "type": "text", + "content": ", which undergoes temporal modulation, is input into the " + }, + { + "bbox": [ + 313, + 570, + 555, + 713 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 313, + 570, + 555, + 713 + ], + "type": "text", + "content": "-th layer of the Transformer decoder. 
The query token " + }, + { + "bbox": [ + 313, + 570, + 555, + 713 + ], + "type": "inline_equation", + "content": "q_{i}" + }, + { + "bbox": [ + 313, + 570, + 555, + 713 + ], + "type": "text", + "content": " is incrementally refined, beginning with " + }, + { + "bbox": [ + 313, + 570, + 555, + 713 + ], + "type": "inline_equation", + "content": "q_{0}" + }, + { + "bbox": [ + 313, + 570, + 555, + 713 + ], + "type": "text", + "content": " as learnable initial parameters. The final output " + }, + { + "bbox": [ + 313, + 570, + 555, + 713 + ], + "type": "inline_equation", + "content": "f_{G}" + }, + { + "bbox": [ + 313, + 570, + 555, + 713 + ], + "type": "text", + "content": " corresponds to the final feature. The spatiotemporal decoder comprises " + }, + { + "bbox": [ + 313, + 570, + 555, + 713 + ], + "type": "inline_equation", + "content": "M" + }, + { + "bbox": [ + 313, + 570, + 555, + 713 + ], + "type": "text", + "content": " blocks. " + }, + { + "bbox": [ + 313, + 570, + 555, + 713 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 313, + 570, + 555, + 713 + ], + "type": "text", + "content": " denotes the number of encoder layers. Multi-head attention (MHA) involves query, key, and value, each of which plays a distinct role. The operator " + }, + { + "bbox": [ + 313, + 570, + 555, + 713 + ], + "type": "inline_equation", + "content": "\mathrm{Temp}(\cdot)" + }, + { + "bbox": [ + 313, + 570, + 555, + 713 + ], + "type": "text", + "content": " is utilized to model temporal dynamics, which produces feature tokens influenced by detailed temporal information." + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 296, + 748, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 296, + 748, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 296, + 748, + 315, + 757 + ], + "type": "text", + "content": "1246" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 71, + 181, + 85 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 71, + 181, + 85 + ], + "spans": [ + { + "bbox": [ + 55, + 71, + 181, + 85 + ], + "type": "text", + "content": "5. Experimental Results" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 91, + 231, + 102 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 91, + 231, + 102 + ], + "spans": [ + { + "bbox": [ + 55, + 91, + 231, + 102 + ], + "type": "text", + "content": "5.1. Datasets and Evaluation Metrics" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 108, + 296, + 228 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 108, + 296, + 228 + ], + "spans": [ + { + "bbox": [ + 55, + 108, + 296, + 228 + ], + "type": "text", + "content": "We conducted evaluations of our method using AG-VPReID and four established video-based person ReID datasets: iLIDS-VID [38], MARS [51], LS-VID [18], and G2A-VReID [47]. For AG-VPReID, we used a balanced split of 3,013 identities with both ground and aerial views, dividing them equally for training and testing purposes [31, 32]. Details on the training and testing configurations are provided in Tab. 3. We evaluate performance using the Cumulative Matching Characteristic (CMC) at Rank-1 and the mean Average Precision (mAP)." 
+ } + ] + } + ], + "index": 2 + }, + { + "type": "table", + "bbox": [ + 67, + 237, + 282, + 358 + ], + "blocks": [ + { + "bbox": [ + 67, + 237, + 282, + 358 + ], + "lines": [ + { + "bbox": [ + 67, + 237, + 282, + 358 + ], + "spans": [ + { + "bbox": [ + 67, + 237, + 282, + 358 + ], + "type": "table", + "html": "
CaseSubset#IDs#Tracklets#Images (M)
TrainingAll1,55513,3003.85
Testing (A2G)All1,45613,5663.94
15m5064,9071.50
30m3772,8850.89
80m3562,5920.69
120m3083,1820.86
Testing (G2A)All5,075*19,0215.79
15m1,4036,3622.14
30m1,4064,4681.41
80m1,1623,8661.13
120m1,1954,3251.11
", + "image_path": "b0fb1db0fa20b01eaf569770bbbb480bf54e59e31d23ab7bbe6539a7008aff74.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 361, + 295, + 383 + ], + "lines": [ + { + "bbox": [ + 55, + 361, + 295, + 383 + ], + "spans": [ + { + "bbox": [ + 55, + 361, + 295, + 383 + ], + "type": "text", + "content": "Table 3. Statistics of AG-VPReID dataset. A2G: aerial-to-ground, G2A: ground-to-aerial. *3,619 additional IDs as distractors." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 55, + 402, + 189, + 415 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 402, + 189, + 415 + ], + "spans": [ + { + "bbox": [ + 55, + 402, + 189, + 415 + ], + "type": "text", + "content": "5.2. Implementation Details" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 419, + 296, + 539 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 419, + 296, + 539 + ], + "spans": [ + { + "bbox": [ + 55, + 419, + 296, + 539 + ], + "type": "text", + "content": "Our pipeline leverages UV maps generated by Texformer [41] using 3D human meshes from PhysPT [49] with refined pose estimation. The UV maps are processed through normalization, histogram matching, and gamma correction before weighted blending with visibility masks. The architecture consists of three streams: an Adapted Temporal-Spatial Stream (CLIP ViT-B/16), a Normalized Appearance Stream for 3D coordinates and UV textures, and a Multi-Scale Attention Stream (CLIP ViT-L/14). More implementation details can be found in the supplementary." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 546, + 281, + 559 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 546, + 281, + 559 + ], + "spans": [ + { + "bbox": [ + 55, + 546, + 281, + 559 + ], + "type": "text", + "content": "5.3. Comparison with State-of-the-Art Methods" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 563, + 295, + 600 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 563, + 295, + 600 + ], + "spans": [ + { + "bbox": [ + 55, + 563, + 295, + 600 + ], + "type": "text", + "content": "We evaluate our proposed method AG-VPReID-Net against several state-of-the-art approaches across multiple video-based person ReID datasets. Tab. 4 summarizes the results." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 55, + 603, + 295, + 674 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 603, + 295, + 674 + ], + "spans": [ + { + "bbox": [ + 55, + 603, + 295, + 674 + ], + "type": "text", + "content": "Ground-to-Ground Datasets. Our method achieves superior performance on MARS (91.5% mAP, 93.2% Rank-1), outperforming CLIP-ReID by 3.4% mAP. On LS-VID (87.3% mAP, 93.2% Rank-1), we surpass LSTRL by 4.9% mAP. For iLIDS-VID, we reach 96.3% Rank-1, which is 3.0% higher than MFA." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 55, + 677, + 296, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 677, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 677, + 296, + 713 + ], + "type": "text", + "content": "Cross-Platform Datasets. 
On G2A-VReID, we achieve " + }, + { + "bbox": [ + 55, + 677, + 296, + 713 + ], + "type": "inline_equation", + "content": "81.3\\%" + }, + { + "bbox": [ + 55, + 677, + 296, + 713 + ], + "type": "text", + "content": " mAP and " + }, + { + "bbox": [ + 55, + 677, + 296, + 713 + ], + "type": "inline_equation", + "content": "73.1\\%" + }, + { + "bbox": [ + 55, + 677, + 296, + 713 + ], + "type": "text", + "content": " Rank-1, surpassing MGH by " + }, + { + "bbox": [ + 55, + 677, + 296, + 713 + ], + "type": "inline_equation", + "content": "4.6\\%" + }, + { + "bbox": [ + 55, + 677, + 296, + 713 + ], + "type": "text", + "content": " mAP. Note that the G2A-VReID dataset only provides a" + } + ] + } + ], + "index": 10 + }, + { + "type": "image", + "bbox": [ + 332, + 71, + 535, + 184 + ], + "blocks": [ + { + "bbox": [ + 332, + 71, + 535, + 184 + ], + "lines": [ + { + "bbox": [ + 332, + 71, + 535, + 184 + ], + "spans": [ + { + "bbox": [ + 332, + 71, + 535, + 184 + ], + "type": "image", + "image_path": "5aeac7856fbfa5606dcc60f6256ab85735938e0fee7e11f5c68df0f17dc7efc2.jpg" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 313, + 194, + 553, + 228 + ], + "lines": [ + { + "bbox": [ + 313, + 194, + 553, + 228 + ], + "spans": [ + { + "bbox": [ + 313, + 194, + 553, + 228 + ], + "type": "text", + "content": "Figure 4. Baseline vs our method on AG-VPReLU dataset. Green/red: correct/incorrect labels. First tracklet image shown. Ranks show improvements in bold." + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_caption" + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 249, + 553, + 308 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 249, + 553, + 308 + ], + "spans": [ + { + "bbox": [ + 313, + 249, + 553, + 308 + ], + "type": "text", + "content": "ground-to-aerial testing set. For AG-VPReID, we demonstrate strong results in both Ground-to-Aerial (58.0% mAP, 75.6% Rank-1, exceeding CLIP-ReID by 8.8% Rank-1) and aerial-to-ground scenarios (64.0% mAP, 71.9% Rank-1, surpassing CLIP-ReID by 1.7% mAP and 0.3% Rank-1)." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 316, + 408, + 329 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 316, + 408, + 329 + ], + "spans": [ + { + "bbox": [ + 313, + 316, + 408, + 329 + ], + "type": "text", + "content": "5.4. Ablation Study" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 313, + 333, + 553, + 393 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 333, + 553, + 393 + ], + "spans": [ + { + "bbox": [ + 313, + 333, + 553, + 393 + ], + "type": "text", + "content": "We conduct an ablation study on AG-VPReLU to evaluate each stream. St-1 is our temporal modeling stream, St-2 is the appearance normalization stream, and St-3 is the multiscale feature stream. Their combinations (St-12/13/23/123) merge multiple streams." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 313, + 396, + 554, + 479 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 396, + 554, + 479 + ], + "spans": [ + { + "bbox": [ + 313, + 396, + 554, + 479 + ], + "type": "text", + "content": "Stream Contributions. Tab. 5 shows that St-1 achieves the strongest individual performance (71.52% A2G, 74.80% G2A Rank-1). St-2 and St-3 show moderate results (58.40% and 61.65% A2G Rank-1). Combined streams demonstrate complementary strengths, with St-123 achieving the best results (71.91% A2G, 75.57% G2A Rank-1) by integrating the three streams." 
+ } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 313, + 483, + 554, + 603 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 483, + 554, + 603 + ], + "spans": [ + { + "bbox": [ + 313, + 483, + 554, + 603 + ], + "type": "text", + "content": "Impact of Altitude. Table 6 shows performance decreasing with altitude, most significantly between " + }, + { + "bbox": [ + 313, + 483, + 554, + 603 + ], + "type": "inline_equation", + "content": "30\\mathrm{m} - 80\\mathrm{m}" + }, + { + "bbox": [ + 313, + 483, + 554, + 603 + ], + "type": "text", + "content": ". A2G Rank-1 drops " + }, + { + "bbox": [ + 313, + 483, + 554, + 603 + ], + "type": "inline_equation", + "content": "\\sim 11\\%" + }, + { + "bbox": [ + 313, + 483, + 554, + 603 + ], + "type": "text", + "content": " across streams. At " + }, + { + "bbox": [ + 313, + 483, + 554, + 603 + ], + "type": "inline_equation", + "content": "120\\mathrm{m}" + }, + { + "bbox": [ + 313, + 483, + 554, + 603 + ], + "type": "text", + "content": ", St-1 demonstrates robustness (52.47% vs 38.12%/35.13%), achieving +17.34% improvement through temporal modeling. St-2's physics-informed UV mapping provides +7.57% improvement (42.70% vs. 35.13%), while St-3's multi-scale attention yields +17.72% improvement (52.85% vs. 35.13%). St-123 maintains best performance across all altitudes by combining stream strengths." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 313, + 605, + 554, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 605, + 554, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 605, + 554, + 713 + ], + "type": "text", + "content": "Clothing Changes vs. Camera Angles. Analysis shows altitude increases " + }, + { + "bbox": [ + 313, + 605, + 554, + 713 + ], + "type": "inline_equation", + "content": "(15\\mathrm{m} \\rightarrow 120\\mathrm{m})" + }, + { + "bbox": [ + 313, + 605, + 554, + 713 + ], + "type": "text", + "content": " reduce Rank-1 by " + }, + { + "bbox": [ + 313, + 605, + 554, + 713 + ], + "type": "inline_equation", + "content": "27.66\\%" + }, + { + "bbox": [ + 313, + 605, + 554, + 713 + ], + "type": "text", + "content": ", significantly more than clothing changes " + }, + { + "bbox": [ + 313, + 605, + 554, + 713 + ], + "type": "inline_equation", + "content": "(7.85\\%" + }, + { + "bbox": [ + 313, + 605, + 554, + 713 + ], + "type": "text", + "content": " in ground-to-ground). Without clothing changes, aerial-ground matching " + }, + { + "bbox": [ + 313, + 605, + 554, + 713 + ], + "type": "inline_equation", + "content": "(71.91\\%" + }, + { + "bbox": [ + 313, + 605, + 554, + 713 + ], + "type": "text", + "content": " Rank-1) still underperforms ground-to-ground " + }, + { + "bbox": [ + 313, + 605, + 554, + 713 + ], + "type": "inline_equation", + "content": "(91.52\\%)" + }, + { + "bbox": [ + 313, + 605, + 554, + 713 + ], + "type": "text", + "content": " due to viewpoint differences. 
When combining aerial views with clothing changes " + }, + { + "bbox": [ + 313, + 605, + 554, + 713 + ], + "type": "inline_equation", + "content": "(65.83\\%" + }, + { + "bbox": [ + 313, + 605, + 554, + 713 + ], + "type": "text", + "content": " Rank-1), these factors create synergistic challenges where viewpoint differences amplify clothing ambi" + } + ] + } + ], + "index": 18 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 296, + 748, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 296, + 748, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 296, + 748, + 315, + 757 + ], + "type": "text", + "content": "1247" + } + ] + } + ], + "index": 19 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 85, + 70, + 526, + 272 + ], + "blocks": [ + { + "bbox": [ + 85, + 70, + 526, + 272 + ], + "lines": [ + { + "bbox": [ + 85, + 70, + 526, + 272 + ], + "spans": [ + { + "bbox": [ + 85, + 70, + 526, + 272 + ], + "type": "table", + "html": "
MethodMARSLS-VIDiLIDS-VIDG2A-VReIDAG-VPReID
Ground → GroundGround → AerialAerial → GroundGround → Aerial
mAPRank-1mAPRank-1Rank-1Rank-5mAPRank-1mAPRank-1mAPRank-1
STMP[29]72.784.439.156.884.396.8--50.760.345.255.8
M3D[20]74.184.440.157.774.094.3--52.462.647.957.3
GLTR[19]78.587.044.363.186.098.0--55.665.850.160.5
TCLNet[13]85.189.870.381.586.6-65.454.757.267.952.762.4
MGH[43]85.890.061.879.685.697.176.769.960.370.855.565.2
GRL[25]84.891.0--90.498.3--58.768.453.963.6
BiCnet-TKS[14]86.090.275.184.6--63.451.759.869.254.364.7
CTL[24]86.791.4--89.797.0--56.466.951.861.3
STMN[6]84.590.569.282.1--66.756.161.671.556.966.2
PSTA[39]85.891.5--91.598.1--60.570.255.865.7
DIL[11]87.090.8--92.098.0--61.270.956.366.1
STT[48]86.388.778.087.587.595.0--61.070.756.165.9
TMT[28]85.891.2--91.398.6--60.870.555.965.8
CAVIT[40]87.290.879.289.293.398.0--61.471.156.566.3
SINet[1]86.291.079.687.492.5---61.371.056.466.2
MFA[9]85.090.478.988.293.398.7--61.170.856.266.0
DCCT[27]87.592.3--91.798.6--61.571.256.666.4
LSTRL[26]86.891.682.489.892.298.6--61.771.356.766.5
CLIP-ReID[21]88.191.780.688.8----62.371.657.266.8
AG-VPReID-Net91.593.287.393.296.399.581.373.164.071.958.075.6
", + "image_path": "aeb331b8e1be85e7a4f27def95caf5574369207ea179c11635636293776667b3.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "type": "table", + "bbox": [ + 66, + 304, + 284, + 392 + ], + "blocks": [ + { + "bbox": [ + 168, + 274, + 440, + 285 + ], + "lines": [ + { + "bbox": [ + 168, + 274, + 440, + 285 + ], + "spans": [ + { + "bbox": [ + 168, + 274, + 440, + 285 + ], + "type": "text", + "content": "Table 4. Performance comparison across datasets. Bold shows best results." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 66, + 304, + 284, + 392 + ], + "lines": [ + { + "bbox": [ + 66, + 304, + 284, + 392 + ], + "spans": [ + { + "bbox": [ + 66, + 304, + 284, + 392 + ], + "type": "table", + "html": "
MethodAerial → GroundGround → Aerial
Rank-1Rank-5Rank-10Rank-1Rank-5Rank-10
St-171.5280.4283.8874.8084.2786.90
St-258.4070.2075.8061.5073.6078.20
St-361.6574.5379.1567.3878.8282.3
St-1269.5078.8082.5072.8082.6085.40
St-1371.8080.6083.9175.4084.4886.91
St-2365.7076.5580.9070.1080.4583.91
St-12371.9180.6783.9275.5784.5086.92
", + "image_path": "358c49fd6d94f4cb79284e071440b05413b17d3a913b99c47babad5e06e36ed7.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "type": "table", + "bbox": [ + 56, + 418, + 294, + 506 + ], + "blocks": [ + { + "bbox": [ + 61, + 395, + 289, + 406 + ], + "lines": [ + { + "bbox": [ + 61, + 395, + 289, + 406 + ], + "spans": [ + { + "bbox": [ + 61, + 395, + 289, + 406 + ], + "type": "text", + "content": "Table 5. Ranking accuracy (%) improvement on AG-VPReID dataset." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 56, + 418, + 294, + 506 + ], + "lines": [ + { + "bbox": [ + 56, + 418, + 294, + 506 + ], + "spans": [ + { + "bbox": [ + 56, + 418, + 294, + 506 + ], + "type": "table", + "html": "
MethodAerial → GroundGround → Aerial
15m30m80m120m15m30m80m120m
St-180.2878.7667.2452.4783.2583.0367.0762.31
St-269.7568.1352.6238.1272.8771.4152.2245.43
St-374.2574.0152.5335.1377.1878.9156.5952.56
St-1278.3276.8265.2450.5381.3481.2765.1760.43
St-1380.5578.9567.5052.8583.7083.5567.6063.10
St-2376.4574.9057.4042.7079.5579.4057.4052.80
St-12380.6679.0067.6353.0083.9283.6667.8263.32
", + "image_path": "bdfba46b2eb21291470992e4d4ad4130b35086b372ab70f4e83a7e12415e9967.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "table_body" + }, + { + "bbox": [ + 55, + 509, + 294, + 520 + ], + "lines": [ + { + "bbox": [ + 55, + 509, + 294, + 520 + ], + "spans": [ + { + "bbox": [ + 55, + 509, + 294, + 520 + ], + "type": "text", + "content": "Table 6. Rank-1 accuracy (%) on AG-VPReID at various altitudes." + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "table_footnote" + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 543, + 133, + 554 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 543, + 133, + 554 + ], + "spans": [ + { + "bbox": [ + 55, + 543, + 133, + 554 + ], + "type": "text", + "content": "guity. See Table 7." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 563, + 139, + 575 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 563, + 139, + 575 + ], + "spans": [ + { + "bbox": [ + 55, + 563, + 139, + 575 + ], + "type": "text", + "content": "5.5. Visualization" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 582, + 296, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 582, + 296, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 582, + 296, + 714 + ], + "type": "text", + "content": "We further visualize the ReID results with Top-5 ranking to understand how our model improves compared to the baseline [21] for aerial-to-ground person ReID in Fig. 4. Unlike the baseline which may be biased by image resolution and clothing textures, our approach pays attention to more robust features like motion patterns and body shape characteristics, which explains its successful identification of similar walking postures and body proportions despite the significant viewpoint differences between the aerial query and ground-view gallery pair. Additional examples are in Figs. 7 and 8 of the supplementary material." + } + ] + } + ], + "index": 8 + }, + { + "type": "table", + "bbox": [ + 316, + 304, + 554, + 399 + ], + "blocks": [ + { + "bbox": [ + 316, + 304, + 554, + 399 + ], + "lines": [ + { + "bbox": [ + 316, + 304, + 554, + 399 + ], + "spans": [ + { + "bbox": [ + 316, + 304, + 554, + 399 + ], + "type": "table", + "html": "
ScenarioRank-1mAPKey Observation
Camera Angle Impact
15m altitude (AG)80.6677.23Baseline performance
30m altitude (AG)79.0075.81Minimal degradation
80m altitude (AG)67.6363.42-13.03% Rank-1 vs. 15m
120m altitude (AG)53.0048.75-27.66% Rank-1 vs. 15m
Clothing Change Impact
GG-SameClothes91.5288.74Upper-bound performance
GG-DiffClothes83.6779.92-7.85% Rank-1 (CC-only impact)
AG-SameClothes71.9164.00-19.61% Rank-1 (AG-only impact)
AG-DiffClothes65.8357.52-6.08% Rank-1 (CC impact in AG)
", + "image_path": "706371d48f912c84501dfec8a776b1091a9da090fbb299f613d3b34299fc5ca4.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "table_body" + } + ], + "index": 9 + }, + { + "bbox": [ + 323, + 402, + 544, + 414 + ], + "lines": [ + { + "bbox": [ + 323, + 402, + 544, + 414 + ], + "spans": [ + { + "bbox": [ + 323, + 402, + 544, + 414 + ], + "type": "text", + "content": "Table 7. Impact of clothing changes (CC) vs. camera angles." + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 314, + 434, + 388, + 446 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 434, + 388, + 446 + ], + "spans": [ + { + "bbox": [ + 314, + 434, + 388, + 446 + ], + "type": "text", + "content": "6. Conclusion" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 454, + 555, + 610 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 454, + 555, + 610 + ], + "spans": [ + { + "bbox": [ + 313, + 454, + 555, + 610 + ], + "type": "text", + "content": "We introduce AG-VPReLU, a comprehensive dataset for video-based aerial-ground person ReID, addressing the critical need for a large and challenging aerial-ground dataset. We also propose AG-VPReLU-Net, a purpose-built three-stream person ReID framework that combines temporal-spatial processing, physics-informed normalized appearance representation, and multi-scale attention mechanisms. This approach achieves state-of-the-art performance on both the AG-VPReLU dataset and existing video-based ReID benchmarks. Notably, the relatively lower performance across all approaches on AG-VPReLU highlights its demanding nature and establishes it as a robust benchmark for advancing future research in the field." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 621, + 424, + 635 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 621, + 424, + 635 + ], + "spans": [ + { + "bbox": [ + 313, + 621, + 424, + 635 + ], + "type": "text", + "content": "7. Acknowledgement" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 641, + 555, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 641, + 555, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 641, + 555, + 715 + ], + "type": "text", + "content": "This work was supported by the Australian Research Council (ARC) Discovery Project (DP200101942) and a QUT Postgraduate Research Award. We gratefully acknowledge the Research Engineering Facility (REF) team at QUT for providing expertise and the research infrastructure essential for data collection and processing within this project." 
+ } + ] + } + ], + "index": 14 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 296, + 748, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 296, + 748, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 296, + 748, + 315, + 757 + ], + "type": "text", + "content": "1248" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 71, + 115, + 83 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 71, + 115, + 83 + ], + "spans": [ + { + "bbox": [ + 56, + 71, + 115, + 83 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 57, + 91, + 296, + 713 + ], + "type": "list", + "angle": 0, + "index": 13, + "blocks": [ + { + "bbox": [ + 61, + 91, + 296, + 135 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 91, + 296, + 135 + ], + "spans": [ + { + "bbox": [ + 61, + 91, + 296, + 135 + ], + "type": "text", + "content": "[1] Shutao Bai, Bingpeng Ma, Hong Chang, Rui Huang, and Xilin Chen. Salient-to-broad transition for video person re-identification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 7339-7348, 2022. 8" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 61, + 136, + 296, + 211 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 136, + 296, + 211 + ], + "spans": [ + { + "bbox": [ + 61, + 136, + 296, + 211 + ], + "type": "text", + "content": "[2] David Cornett, Joel Brogan, Nell Barber, Deniz Aykac, Seth Baird, Nicholas Burchfield, Carl Dukes, Andrew Duncan, Regina Ferrell, Jim Goddard, et al. Expanding accurate person recognition to new altitudes and ranges: The briar dataset. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 593-602, 2023. 3" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 62, + 213, + 295, + 278 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 213, + 295, + 278 + ], + "spans": [ + { + "bbox": [ + 62, + 213, + 295, + 278 + ], + "type": "text", + "content": "[3] Daniel Davila, Dawei Du, Bryon Lewis, Christopher Funk, Joseph Van Pelt, Roderic Collins, Kellie Corona, Matt Brown, Scott McCloskey, Anthony Hoogs, and Brian Clipp. Mevid: Multi-view extended videos with identities for video person re-identification. In IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023. 2, 3, 4" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 62, + 280, + 295, + 323 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 280, + 295, + 323 + ], + "spans": [ + { + "bbox": [ + 62, + 280, + 295, + 323 + ], + "type": "text", + "content": "[4] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In IEEE conference on computer vision and pattern recognition (CVPR), pages 248-255. IEEE, 2009. 1" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 62, + 324, + 294, + 346 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 324, + 294, + 346 + ], + "spans": [ + { + "bbox": [ + 62, + 324, + 294, + 346 + ], + "type": "text", + "content": "[5] Michael Dreuw and ORB-HD. deface: Video anonymization by face detection, 2023. Python package version 1.5.0. 
4" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 62, + 347, + 295, + 391 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 347, + 295, + 391 + ], + "spans": [ + { + "bbox": [ + 62, + 347, + 295, + 391 + ], + "type": "text", + "content": "[6] Chanho Eom, Geon Lee, Junghyup Lee, and Bumsub Ham. Video-based person re-identification with spatial and temporal memory networks. In IEEE International Conference on Computer Vision (ICCV), pages 12036–12045, 2021. 2, 8" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 62, + 392, + 295, + 435 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 392, + 295, + 435 + ], + "spans": [ + { + "bbox": [ + 62, + 392, + 295, + 435 + ], + "type": "text", + "content": "[7] Yang Fu, Xiaoyang Wang, Yunchao Wei, and Thomas Huang. Sta: Spatial-temporal attention for large-scale video-based person re-identification. In AAAI Conference on Artificial Intelligence, pages 8287–8294, 2019. 1" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 62, + 437, + 295, + 491 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 437, + 295, + 491 + ], + "spans": [ + { + "bbox": [ + 62, + 437, + 295, + 491 + ], + "type": "text", + "content": "[8] Xinqian Gu, Hong Chang, Bingpeng Ma, Shutao Bai, Shiguang Shan, and Xilin Chen. Clothes-changing person re-identification with rgb modality only. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022. 2, 3" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 62, + 492, + 295, + 535 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 492, + 295, + 535 + ], + "spans": [ + { + "bbox": [ + 62, + 492, + 295, + 535 + ], + "type": "text", + "content": "[9] Xinqian Gu, Hong Chang, Bingpeng Ma, and Shiguang Shan. Motion feature aggregation for video-based person re-identification. IEEE Transactions on Image Processing, 31:3908-3919, 2022. 8" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 57, + 536, + 295, + 590 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 536, + 295, + 590 + ], + "spans": [ + { + "bbox": [ + 57, + 536, + 295, + 590 + ], + "type": "text", + "content": "[10] Ke Han, Yan Huang, Shaogang Gong, Liang Wang, and Tieniu Tan. 3d shape temporal aggregation for video-based clothing-change person re-identification. In Asian Conference on Computer Vision (ACCV), pages 2371–2387, 2022. 2, 3" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 57, + 592, + 295, + 646 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 592, + 295, + 646 + ], + "spans": [ + { + "bbox": [ + 57, + 592, + 295, + 646 + ], + "type": "text", + "content": "[11] Tianyu He, Xin Jin, Xu Shen, Jianqiang Huang, Zhibo Chen, and Xian-Sheng Hua. Dense interaction learning for video-based person re-identification. In IEEE International Conference on Computer Vision (ICCV), pages 1490–1501, 2021. 3, 8" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 57, + 647, + 296, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 647, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 57, + 647, + 296, + 713 + ], + "type": "text", + "content": "[12] Weizhen He, Yiheng Deng, Shixiang Tang, Qihao Chen, Qingsong Xie, Yizhou Wang, Lei Bai, Feng Zhu, Rui Zhao, Wanli Ouyang, et al. Instruct-reid: A multi-purpose person re-identification task with instructions. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 17521–17531, 2024. 3" + } + ] + } + ], + "index": 12 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 73, + 553, + 713 + ], + "type": "list", + "angle": 0, + "index": 27, + "blocks": [ + { + "bbox": [ + 316, + 73, + 553, + 117 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 73, + 553, + 117 + ], + "spans": [ + { + "bbox": [ + 316, + 73, + 553, + 117 + ], + "type": "text", + "content": "[13] Ruibing Hou, Hong Chang, Bingpeng Ma, Shiguang Shan, and Xilin Chen. Temporal complementary learning for video person re-identification. In European Conference on Computer Vision (ECCV), pages 388–405, 2020. 2, 3, 8" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 317, + 119, + 553, + 173 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 119, + 553, + 173 + ], + "spans": [ + { + "bbox": [ + 317, + 119, + 553, + 173 + ], + "type": "text", + "content": "[14] Ruibing Hou, Hong Chang, Bingpeng Ma, Rui Huang, and Shiguang Shan. Bicnet-tks: Learning efficient spatial-temporal representation for video person re-identification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2014–2023, 2021. 2, 8" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 317, + 175, + 553, + 229 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 175, + 553, + 229 + ], + "spans": [ + { + "bbox": [ + 317, + 175, + 553, + 229 + ], + "type": "text", + "content": "[15] Yan Jiang, Xu Cheng, Hao Yu, Xingyu Liu, Haoyu Chen, and Guoying Zhao. Domain shifting: A generalized solution for heterogeneous cross-modality person re-identification. In European Conference on Computer Vision, pages 289–306. Springer, 2025. 3" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 317, + 231, + 553, + 262 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 231, + 553, + 262 + ], + "spans": [ + { + "bbox": [ + 317, + 231, + 553, + 262 + ], + "type": "text", + "content": "[16] Glenn Jocher, Ayush, and Jing Qiu. Ultralytics YOLO. https://github.com/ultralytics/ ultralytics, 2024. Accessed: 2024-03-22. 4" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 317, + 264, + 553, + 330 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 264, + 553, + 330 + ], + "spans": [ + { + "bbox": [ + 317, + 264, + 553, + 330 + ], + "type": "text", + "content": "[17] SV Aruna Kumar, Ehsan Yaghoubi, Abhijit Das, BS Harish, and Hugo Proença. The p-destre: A fully annotated dataset for pedestrian detection, tracking, and short/long-term re-identification from aerial devices. IEEE Transactions on Information Forensics and Security, 16:1696-1708, 2020. 2, 3, 1" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 317, + 332, + 553, + 376 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 332, + 553, + 376 + ], + "spans": [ + { + "bbox": [ + 317, + 332, + 553, + 376 + ], + "type": "text", + "content": "[18] J. Li, J. Wang, Q. Tian, W. Gao, and S. Zhang. Global-local temporal representations for video person re-identification. In IEEE International Conference on Computer Vision (ICCV), pages 3958–3967, 2019. 
2, 4, 7" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 317, + 377, + 553, + 421 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 377, + 553, + 421 + ], + "spans": [ + { + "bbox": [ + 317, + 377, + 553, + 421 + ], + "type": "text", + "content": "[19] Jianing Li, Jingdong Wang, Qi Tian, Wen Gao, and Shiliang Zhang. Global-local temporal representations for video person re-identification. In IEEE International Conference on Computer Vision (ICCV), pages 3958–3967, 2019. 2, 8" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 317, + 422, + 553, + 465 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 422, + 553, + 465 + ], + "spans": [ + { + "bbox": [ + 317, + 422, + 553, + 465 + ], + "type": "text", + "content": "[20] Jianing Li, Shiliang Zhang, and Tiejun Huang. Multiscale 3d convolution network for video based person re-identification. In AAAI Conference on Artificial Intelligence, pages 8618–8625, 2019. 2, 3, 8" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 317, + 468, + 553, + 510 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 468, + 553, + 510 + ], + "spans": [ + { + "bbox": [ + 317, + 468, + 553, + 510 + ], + "type": "text", + "content": "[21] Siyuan Li, Li Sun, and Qingli Li. Clip-reid: exploiting vision-language model for image re-identification without concrete text labels. In AAAI Conference on Artificial Intelligence, pages 1405–1413, 2023. 8, 3" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 317, + 513, + 553, + 544 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 513, + 553, + 544 + ], + "spans": [ + { + "bbox": [ + 317, + 513, + 553, + 544 + ], + "type": "text", + "content": "[22] Yutian Lin, Liang Zheng, Zhedong Zheng, Yu Wu, and Yi Yang. Improving person re-identification by attribute and identity learning. ArXiv, abs/1703.07220, 2019. 1" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 317, + 547, + 553, + 601 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 547, + 553, + 601 + ], + "spans": [ + { + "bbox": [ + 317, + 547, + 553, + 601 + ], + "type": "text", + "content": "[23] Ziyi Lin, Shijie Geng, Renrui Zhang, Peng Gao, Gerard De Melo, Xiaogang Wang, Jifeng Dai, Yu Qiao, and Hongsheng Li. Frozen clip models are efficient video learners. In European Conference on Computer Vision (ECCV), pages 388-404. Springer, 2022. 6, 3" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 317, + 602, + 553, + 657 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 602, + 553, + 657 + ], + "spans": [ + { + "bbox": [ + 317, + 602, + 553, + 657 + ], + "type": "text", + "content": "[24] Jiawei Liu, Zheng-Jun Zha, Wei Wu, Kecheng Zheng, and Qibin Sun. Spatial-temporal correlation and topology learning for person re-identification in videos. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4370–4379, 2021. 8" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 317, + 658, + 553, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 658, + 553, + 713 + ], + "spans": [ + { + "bbox": [ + 317, + 658, + 553, + 713 + ], + "type": "text", + "content": "[25] Xuehu Liu, Pingping Zhang, Chenyang Yu, Huchuan Lu, and Xiaoyun Yang. Watching you: Global-guided reciprocal learning for video-based person re-identification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 13334–13343, 2021. 
1, 3, 8" + } + ] + } + ], + "index": 26 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 296, + 748, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 296, + 748, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 296, + 748, + 315, + 757 + ], + "type": "text", + "content": "1249" + } + ] + } + ], + "index": 28 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 73, + 295, + 713 + ], + "type": "list", + "angle": 0, + "index": 13, + "blocks": [ + { + "bbox": [ + 56, + 73, + 294, + 106 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 73, + 294, + 106 + ], + "spans": [ + { + "bbox": [ + 56, + 73, + 294, + 106 + ], + "type": "text", + "content": "[26] Xuehu Liu, Pingping Zhang, and Hutchuan Lu. Video-based person re-identification with long short-term representation learning. arXiv preprint arXiv:2308.03703, 2023. 2, 8" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 107, + 295, + 162 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 107, + 295, + 162 + ], + "spans": [ + { + "bbox": [ + 55, + 107, + 295, + 162 + ], + "type": "text", + "content": "[27] Xuehu Liu, Chenyang Yu, Pingping Zhang, and Hutchuan Lu. Deeply coupled convolution-transformer with spatial-temporal complementary learning for video-based person re-identification. IEEE Transactions on Neural Networks and Learning Systems, 35(10):13753-13763, 2024. 8" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 57, + 163, + 295, + 217 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 163, + 295, + 217 + ], + "spans": [ + { + "bbox": [ + 57, + 163, + 295, + 217 + ], + "type": "text", + "content": "[28] Xuehu Liu, Pingping Zhang, Chenyang Yu, Xuesheng Qian, Xiaoyun Yang, and Huchuan Lu. A video is worth three views: Trigeminal transformers for video-based person re-identification. IEEE Transactions on Intelligent Transportation Systems, 25(9):12818-12828, 2024. 3, 8" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 57, + 220, + 294, + 262 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 220, + 294, + 262 + ], + "spans": [ + { + "bbox": [ + 57, + 220, + 294, + 262 + ], + "type": "text", + "content": "[29] Yiheng Liu, Zhenxun Yuan, Wengang Zhou, and Houqiang Li. Spatial and temporal mutual promotion for video-based person re-identification. In AAAI Conference on Artificial Intelligence, pages 8786–8793, 2019. 2, 8" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 57, + 264, + 295, + 319 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 264, + 295, + 319 + ], + "spans": [ + { + "bbox": [ + 57, + 264, + 295, + 319 + ], + "type": "text", + "content": "[30] Niall McLaughlin, Jesus Martinez del Rincon, and Paul Miller. Recurrent convolutional network for video-based person re-identification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1325–1334, 2016. 3" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 57, + 321, + 295, + 364 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 321, + 295, + 364 + ], + "spans": [ + { + "bbox": [ + 57, + 321, + 295, + 364 + ], + "type": "text", + "content": "[31] Huy Nguyen, Kien Nguyen, Sridha Sridharan, and Clinton Fookes. Aerial-ground person re-id. In IEEE International Conference on Multimedia and Expo (ICME), pages 2585-2590, 2023. 
1, 7" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 57, + 365, + 294, + 409 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 365, + 294, + 409 + ], + "spans": [ + { + "bbox": [ + 57, + 365, + 294, + 409 + ], + "type": "text", + "content": "[32] Huy Nguyen, Kien Nguyen, Sridha Sridharan, and Clinton Fookes. Ag-reid.v2: Bridging aerial and ground views for person re-identification. IEEE Transactions on Information Forensics and Security, 19:2896-2908, 2024. 1, 4, 7" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 57, + 411, + 294, + 476 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 411, + 294, + 476 + ], + "spans": [ + { + "bbox": [ + 57, + 411, + 294, + 476 + ], + "type": "text", + "content": "[33] Kien Nguyen, Clinton Fookes, Sridha Sridharan, Feng Liu, Xiaoming Liu, Arun Ross, Dana Michalski, Huy Nguyen, Debayan Deb, Mahak Kothari, et al. Ag-reid 2023: Aerial-ground person re-identification challenge results. In 2023 IEEE International Joint Conference on Biometrics (IJCB), pages 1–10. IEEE, 2023. 3" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 57, + 478, + 294, + 531 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 478, + 294, + 531 + ], + "spans": [ + { + "bbox": [ + 57, + 478, + 294, + 531 + ], + "type": "text", + "content": "[34] Vuong D Nguyen, Pranav Mantini, and Shishir K Shah. Temporal 3d shape modeling for video-based cloth-changing person re-identification. In IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pages 173–182, 2024. 5" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 57, + 534, + 294, + 588 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 534, + 294, + 588 + ], + "spans": [ + { + "bbox": [ + 57, + 534, + 294, + 588 + ], + "type": "text", + "content": "[35] Honghu Pan, Qiao Liu, Yongyong Chen, Yunqi He, Yuan Zheng, Feng Zheng, and Zhenyu He. Pose-aided video-based person re-identification via recurrent graph convolutional network. IEEE Transactions on Circuits and Systems for Video Technology, 33(12):7183-7196, 2023. 1" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 57, + 590, + 294, + 655 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 590, + 294, + 655 + ], + "spans": [ + { + "bbox": [ + 57, + 590, + 294, + 655 + ], + "type": "text", + "content": "[36] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning (ICML), pages 8748-8763, 2021. 6" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 57, + 658, + 294, + 700 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 658, + 294, + 700 + ], + "spans": [ + { + "bbox": [ + 57, + 658, + 294, + 700 + ], + "type": "text", + "content": "[37] Kien Nguyen Thanh, Clinton Fookes, Sridha Sridharan, Yingli Tian, Feng Liu, Xiaoming Liu, and Arun Ross. The state of aerial surveillance: A survey. CoRR, abs/2201.03080, 2022. 1" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 57, + 702, + 294, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 702, + 294, + 713 + ], + "spans": [ + { + "bbox": [ + 57, + 702, + 294, + 713 + ], + "type": "text", + "content": "[38] Xiaogang Wang and Rui Zhao. 
Person re-identification:" + } + ] + } + ], + "index": 12 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 73, + 553, + 713 + ], + "type": "list", + "angle": 0, + "index": 27, + "blocks": [ + { + "bbox": [ + 333, + 73, + 553, + 95 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 333, + 73, + 553, + 95 + ], + "spans": [ + { + "bbox": [ + 333, + 73, + 553, + 95 + ], + "type": "text", + "content": "System design and evaluation overview. In Person Re-Identification, pages 351-370. Springer, 2014. 7" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 316, + 97, + 553, + 150 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 97, + 553, + 150 + ], + "spans": [ + { + "bbox": [ + 316, + 97, + 553, + 150 + ], + "type": "text", + "content": "[39] Yingquan Wang, Pingping Zhang, Shang Gao, Xia Geng, Hu Lu, and Dong Wang. Pyramid spatial-temporal aggregation for video-based person re-identification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 12026–12035, 2021. 3, 8" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 316, + 152, + 553, + 207 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 152, + 553, + 207 + ], + "spans": [ + { + "bbox": [ + 316, + 152, + 553, + 207 + ], + "type": "text", + "content": "[40] Jinlin Wu, Lingxiao He, Wu Liu, Yang Yang, Zhen Lei, Tao Mei, and Stan Z Li. Cavit: Contextual alignment vision transformer for video object re-identification. In European Conference on Computer Vision (ECCV), pages 549–566. Springer, 2022. 8" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 316, + 209, + 553, + 252 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 209, + 553, + 252 + ], + "spans": [ + { + "bbox": [ + 316, + 209, + 553, + 252 + ], + "type": "text", + "content": "[41] Xiangyu Xu and Chen Change Loy. 3d human texture estimation from a single image with transformers. In IEEE International Conference on Computer Vision (ICCV), pages 13849-13858, 2021. 6, 7, 3" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 316, + 254, + 553, + 308 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 254, + 553, + 308 + ], + "spans": [ + { + "bbox": [ + 316, + 254, + 553, + 308 + ], + "type": "text", + "content": "[42] Xiangyu Xu, Hao Chen, Francesc Moreno-Noguer, László A Jeni, and Fernando De la Torre. 3d human shape and pose from a single low-resolution image with self-supervised learning. In European Conference on Computer Vision (ECCV), pages 284-300. Springer, 2020. 3" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 316, + 310, + 553, + 363 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 310, + 553, + 363 + ], + "spans": [ + { + "bbox": [ + 316, + 310, + 553, + 363 + ], + "type": "text", + "content": "[43] Yichao Yan, Jie Qin, Jiaxin Chen, Li Liu, Fan Zhu, Ying Tai, and Ling Shao. Learning multi-granular hypergraphs for video-based person re-identification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2899–2908, 2020. 2, 8" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 316, + 365, + 553, + 420 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 365, + 553, + 420 + ], + "spans": [ + { + "bbox": [ + 316, + 365, + 553, + 420 + ], + "type": "text", + "content": "[44] Dingqiang Ye, Chao Fan, Jingzhe Ma, Xiaoming Liu, and Shiqi Yu. Biggait: Learning gait representation you want by large vision models. 
In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 200-210, 2024. 6, 3" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 316, + 422, + 553, + 465 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 422, + 553, + 465 + ], + "spans": [ + { + "bbox": [ + 316, + 422, + 553, + 465 + ], + "type": "text", + "content": "[45] Chenyang Yu, Xuehu Liu, Yingquan Wang, Pingping Zhang, and Huchuan Lu. Tf-clip: Learning text-free clip for video-based person re-identification. In AAAI Conference on Artificial Intelligence, pages 6764–6772, 2024. 2, 3, 5" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 317, + 468, + 553, + 521 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 468, + 553, + 521 + ], + "spans": [ + { + "bbox": [ + 317, + 468, + 553, + 521 + ], + "type": "text", + "content": "[46] Quan Zhang, Lei Wang, Vishal M. Patel, Xiaohua Xie, and Jianhaung Lai. View-decoupled transformer for person re-identification under aerial-ground camera network. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 22000–22009, 2024. 1" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 317, + 523, + 553, + 577 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 523, + 553, + 577 + ], + "spans": [ + { + "bbox": [ + 317, + 523, + 553, + 577 + ], + "type": "text", + "content": "[47] Shizhou Zhang, Wenlong Luo, De Cheng, Qingchun Yang, Lingyan Ran, Yinghui Xing, and Yanning Zhang. Cross-platform video person reid: A new benchmark dataset and adaptation approach. In European Conference on Computer Vision (ECCV), 2024. 1, 2, 3, 4, 7" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 316, + 579, + 553, + 622 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 579, + 553, + 622 + ], + "spans": [ + { + "bbox": [ + 316, + 579, + 553, + 622 + ], + "type": "text", + "content": "[48] Tianyu Zhang, Longhui Wei, Lingxi Xie, Zijie Zhuang, Yongfei Zhang, Bo Li, and Qi Tian. Spatiotemporal transformer for video-based person re-identification. arXiv:2103.16469, 2021. 8" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 316, + 624, + 553, + 678 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 624, + 553, + 678 + ], + "spans": [ + { + "bbox": [ + 316, + 624, + 553, + 678 + ], + "type": "text", + "content": "[49] Yufei Zhang, Jeffrey O Kephart, Zijun Cui, and Qiang Ji. Physpt: Physics-aware pretrained transformer for estimating human dynamics from monocular videos. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 2305-2317, 2024. 5, 7, 3" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 316, + 681, + 553, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 681, + 553, + 713 + ], + "spans": [ + { + "bbox": [ + 316, + 681, + 553, + 713 + ], + "type": "text", + "content": "[50] Zhizheng Zhang, Cuiling Lan, Wenjun Zeng, and Zhibo Chen. Multi-granularity reference-aided attentive feature aggregation for video-based person re-identification. 
In IEEE" + } + ] + } + ], + "index": 26 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 296, + 748, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 296, + 748, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 296, + 748, + 315, + 757 + ], + "type": "text", + "content": "1250" + } + ] + } + ], + "index": 28 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 295, + 239 + ], + "type": "list", + "angle": 0, + "index": 4, + "blocks": [ + { + "bbox": [ + 76, + 72, + 294, + 95 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 76, + 72, + 294, + 95 + ], + "spans": [ + { + "bbox": [ + 76, + 72, + 294, + 95 + ], + "type": "text", + "content": "Conference on Computer Vision and Pattern Recognition (CVPR), pages 10407-10416, 2020. 1" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 96, + 295, + 140 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 96, + 295, + 140 + ], + "spans": [ + { + "bbox": [ + 56, + 96, + 295, + 140 + ], + "type": "text", + "content": "[51] L. Zheng, Z. Bie, Y. Sun, J. Wang, C. Su, S. Wang, and Q. Tian. Mars: A video benchmark for large-scale person re-identification. In European Conference on Computer Vision (ECCV, pages 868–884, 2016. 1, 2, 3, 7" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 141, + 294, + 184 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 141, + 294, + 184 + ], + "spans": [ + { + "bbox": [ + 56, + 141, + 294, + 184 + ], + "type": "text", + "content": "[52] Zhedong Zheng, Xiaohan Wang, Nenggan Zheng, and Yi Yang. Parameter-efficient person re-identification in the 3d space. IEEE Transactions on Neural Networks and Learning Systems, 35(6):7534-7547, 2022. 6" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 56, + 186, + 294, + 239 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 186, + 294, + 239 + ], + "spans": [ + { + "bbox": [ + 56, + 186, + 294, + 239 + ], + "type": "text", + "content": "[53] Haidong Zhu, Pranav Budhwant, Zhaoheng Zheng, and Ram Nevatia. Seas: Shape-aligned supervision for person re-identification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 164–174, 2024. 
3" + } + ] + } + ], + "index": 3 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 296, + 749, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 296, + 749, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 296, + 749, + 315, + 757 + ], + "type": "text", + "content": "1251" + } + ] + } + ], + "index": 5 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 10 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2025/AI-Face_ A Million-Scale Demographically Annotated AI-Generated Face Dataset and Fairness Benchmark/69a93785-ab72-45f7-87fa-9182ea82821d_content_list.json b/2025/AI-Face_ A Million-Scale Demographically Annotated AI-Generated Face Dataset and Fairness Benchmark/69a93785-ab72-45f7-87fa-9182ea82821d_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..1136ffb62dc35a6a51ddf69f8d13b5e428aa1943 --- /dev/null +++ b/2025/AI-Face_ A Million-Scale Demographically Annotated AI-Generated Face Dataset and Fairness Benchmark/69a93785-ab72-45f7-87fa-9182ea82821d_content_list.json @@ -0,0 +1,1806 @@ +[ + { + "type": "text", + "text": "AI-Face: A Million-Scale Demographically Annotated AI-Generated Face Dataset and Fairness Benchmark", + "text_level": 1, + "bbox": [ + 91, + 130, + 906, + 174 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Li Lin $^{1}$ , Santosh $^{1}$ , Mingyang Wu $^{1}$ , Xin Wang $^{2}$ , Shu Hu $^{1\\dagger}$", + "bbox": [ + 269, + 202, + 723, + 220 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "$^{1}$ Purdue University, West Lafayette, USA {lin1785, santosh2, wu2415, hu968}@purdue.edu", + "bbox": [ + 135, + 220, + 864, + 238 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "$^{2}$ University at Albany, State University of New York, New York, USA xwang56@albany.edu", + "bbox": [ + 138, + 239, + 857, + 256 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 246, + 291, + 326, + 306 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "AI-generated faces have enriched human life, such as entertainment, education, and art. However, they also pose misuse risks. Therefore, detecting AI-generated faces becomes crucial, yet current detectors show biased performance across different demographic groups. Mitigating biases can be done by designing algorithmic fairness methods, which usually require demographically annotated face datasets for model training. However, no existing dataset encompasses both demographic attributes and diverse generative methods simultaneously, which hinders the development of fair detectors for AI-generated faces. In this work, we introduce the AI-Face dataset, the first million-scale demographically annotated AI-generated face image dataset, including real faces, faces from deepfake videos, and faces generated by Generative Adversarial Networks and Diffusion Models. Based on this dataset, we conduct the first comprehensive fairness benchmark to assess various AI face detectors and provide valuable insights and findings to promote the future fair design of AI face detectors. Our AI-Face dataset and benchmark code are publicly available at https://github.com/Purdue-M2/AI-Face-FairnessBench.", + "bbox": [ + 89, + 321, + 485, + 638 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1. 
Introduction", + "text_level": 1, + "bbox": [ + 91, + 660, + 220, + 676 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "AI-generated faces are created using sophisticated AI technologies that are visually difficult to discern from real ones [1]. They can be summarized into three categories: deepfake videos [2] created by typically using Variational Autoencoders (VAEs) [3, 4], faces generated from Generative Adversarial Networks (GANs) [5-8], and Diffusion Models (DMs) [9]. These technologies have significantly advanced the realism and controllability of synthetic facial representations. Generated faces can enrich media and increase creativity [10]. However, they also carry significant risks of misuse. For example, during the 2024 United States presidential election, fake face images of Donald Trump surrounded by groups of black people smiling and laughing to", + "bbox": [ + 89, + 681, + 485, + 878 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/b60e36a480d2233a4dde3cd316471aec9b9fb7003921d83df06178da82c10f17.jpg", + "image_caption": [ + "Figure 1. Comparison between AI-Face and other datasets in terms of demographic annotation, generation category, and the number of generation methods. 'DF', 'GAN', and 'DM' stand for Deepfake Videos, Generative Adversarial Networks, and Diffusion Models." + ], + "image_footnote": [], + "bbox": [ + 531, + 289, + 883, + 484 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "encourage African Americans to vote Republican are spreading online [11]. This could distort public opinion and erode people's trust in media [12, 13], necessitating the detection of AI-generated faces for their ethical use.", + "bbox": [ + 511, + 551, + 906, + 612 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "However, one major issue existing in current AI face detectors [24-27] is biased detection (i.e., unfair detection performance among demographic groups [19, 28-30]). Mitigating biases can be done by designing algorithmic fairness methods, but they usually require demographically annotated face datasets for model training. For example, works like [29, 30] have made efforts to enhance fairness in the detection based on A-FF++ [19] and A-DFD [19]. However, both datasets are limited to containing only faces from deepfake videos, which could cause the trained models not to be applicable for fairly detecting faces generated by GANs and DMs. While some datasets (e.g., GenData [17], DF40 [31]) include GAN and DM faces, they either lack demographic annotations or provide only limited demographic attributes. Most importantly, no existing dataset offers sufficient diversity in generation methods while also providing demographic labels. A comparison of existing datasets is shown in Fig. 1. These limitations of existing datasets hamper the development of fair technologies for detecting AI-generated faces.", + "bbox": [ + 509, + 613, + 908, + 902 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "CVF", + "bbox": [ + 106, + 2, + 181, + 42 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. 
Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore.", + "bbox": [ + 236, + 0, + 810, + 44 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "†Corresponding author", + "bbox": [ + 114, + 886, + 240, + 898 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "3503", + "bbox": [ + 482, + 944, + 514, + 955 + ], + "page_idx": 0 + }, + { + "type": "table", + "img_path": "images/e7814b570fbb09fb05be359e226cd83fc885db7cd71cc6212792e3312cdc5733.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
Dataset | Year | Face Images | Generation Category | #Generation Methods | Source of Real Images | Demographic Annotation
#Real | #Fake | Deepfake Videos | GAN | DM | Skin Tone | Gender | Age
DF-1.0 [14] | 2020 | 2.9M | 14.7M | 1 | Self-Recording
DeePhy [15] | 2022 | 1K | 50.4K | 3 | YouTube
DF-Platter [16] | 2023 | 392.3K | 653.4K | 3 | YouTube
GenData [17] | 2023 | - | 20K | 3 | CelebA [18]
A-FF++ [19] | 2024 | 29.8K | 149.1K | 5 | YouTube
A-DFD [19] | 2024 | 10.8K | 89.6K | 5 | Self-Recording
A-DFDC [19] | 2024 | 54.5K | 52.6K | 8 | Self-Recording
A-Celeb-DF-v2 [19] | 2024 | 26.3K | 166.5K | 1 | Self-Recording
A-DF-1.0 [19] | 2024 | 870.3K | 321.5K | 1 | Self-Recording
AI-Face | 2025 | 400K | 1.2M | 37 | FFHQ [6], IMDB-WIKI [20], real from FF++ [2], DFDC [21], DFD [22], Celeb-DF-v2 [23]
", + "bbox": [ + 133, + 88, + 859, + 231 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Table 1. Quantitative comparison of existing datasets with ours on demographically annotated AI-generated faces.", + "bbox": [ + 156, + 234, + 834, + 250 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Moreover, benchmarking fairness provides a direct method to uncover prevalent and unique fairness issues in recent AI-generated face detection. However, there is a lack of a comprehensive benchmark to estimate the fairness of existing AI face detectors. Existing benchmarks [32-35] primarily assess utility, neglecting systematic fairness evaluation. Two studies [28, 36] do evaluate fairness in detection models, but their examination is based on a few outdated detectors. Furthermore, detectors' fairness reliability (e.g., robustness with test set post-processing, fairness generalization) has not been assessed. The absence of a comprehensive fairness benchmark impedes a thorough understanding of the fairness behaviors of recent AI face detectors and obscures the research path for detector fairness guarantees.", + "bbox": [ + 88, + 253, + 480, + 465 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In this work, we build the first million-scale demographically annotated AI-generated face image dataset: AI-Face. The face images are collected from various public datasets, including the real faces that are usually used to train AI face generators, faces from deepfake videos, and faces generated by GANs and DMs. Each face is demographically annotated by our designed measurement method and Contrastive Language-Image Pretraining (CLIP) [37]-based lightweight annotator. Next, we conduct the first comprehensive fairness benchmark on our dataset to estimate the fairness performance of 12 representative detectors coming from four model types. Our benchmark exposes common and unique fairness challenges in recent AI face detectors, providing essential insights that can guide and enhance the future design of fair AI face detectors. Our contributions are as follows:", + "bbox": [ + 89, + 465, + 482, + 691 + ], + "page_idx": 1 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- We build the first million-scale demographically annotated AI-generated face dataset by leveraging our designed measure and developed lightweight annotator.", + "- We conduct the first comprehensive fairness benchmark of AI-generated face detectors, providing an extensive fairness assessment of current representative detectors.", + "- Based on our experiments, we summarize the unsolved questions and offer valuable insights within this research field, setting the stage for future investigations." + ], + "bbox": [ + 109, + 694, + 482, + 829 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2. Background and Motivation", + "text_level": 1, + "bbox": [ + 89, + 847, + 352, + 864 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "AI-generated Faces and Biased Detection. AI-generated face images, created by advanced AI technologies, are vi", + "bbox": [ + 89, + 869, + 482, + 900 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "sually difficult to discern from real ones. They can be summarized into three categories: 1) Deepfake Videos. Initiated in 2017 [13], these use face-swapping and face-reenactment techniques with a variational autoencoder to replace a face in a target video with one from a source [3, 4]. Note that our paper focuses solely on images extracted from videos. 
2) GAN-generated Faces. Post-2017, Generative Adversarial Networks (GANs) [38] like StyleGANs [6-8] have significantly improved the realism of generated faces. 3) DM-generated Faces. Diffusion models (DMs), emerging in 2021, generate detailed faces from textual descriptions and offer greater controllability. Tools like Midjourney [39] and DALLE2 [40] facilitate customized face generation. While these AI-generated faces can enhance visual media and creativity [10], they also pose risks, such as being misused in social media profiles [41, 42]. Therefore, numerous studies focus on detecting AI-generated faces [24-27], but current detectors often show performance disparities among demographic groups [19, 28-30]. This bias can lead to unfair targeting or exclusion, undermining trust in detection models. Recent efforts [29, 30] aim to enhance fairness in deepfake detection but mainly address deepfake videos, overlooking biases in detecting GAN- and DM-generated faces.", + "bbox": [ + 511, + 253, + 906, + 602 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "The Existing Datasets. Current AI-generated facial datasets with demographic annotations are limited in size, generation categories, methods, and annotations, as illustrated in Table 1. For instance, A-FF++, A-DFD, A-DFDC, and A-Celeb-DF-v2 [19] are deepfake video datasets with fewer than one million images. Datasets like DF-1.0 [14] and DF-Platter [16] lack various demographic annotations. Additionally, existing datasets offer limited generation methods. These limitations hinder the development of fair AI face detectors, motivating us to build a million-scale demographically annotated AI-Face dataset.", + "bbox": [ + 511, + 607, + 908, + 773 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Benchmark for Detecting AI-generated Faces. Benchmarks are essential for evaluating AI-generated face detectors under standardized conditions. Existing benchmarks, as shown in Table 2, mainly focus on detectors' utility, often overlooking fairness [31-35]. Loc et al. [28] and CCv1 [36] examined detector fairness. However, their studies did not analyze DM-generated faces and only measured bias between groups in basic scenarios, without considering", + "bbox": [ + 511, + 779, + 908, + 902 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "3504", + "bbox": [ + 482, + 944, + 514, + 955 + ], + "page_idx": 1 + }, + { + "type": "table", + "img_path": "images/a9414d9a48c0ad63987ca40394ecd53fb009bc3821b028feca68a85dca62cc55.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
Existing Benchmarks | Year | Category | Scope of Benchmark
Deepfake Videos | GAN | DM | Utility | Fairness General | Fairness Reliability
Loc et al. [28] | 2021
CCv1 [36] | 2021
DeepfakeBench [34] | 2023
CDDB [32] | 2023
Lin et al. [33] | 2024
Le et al. [35] | 2024
DF40 [31] | 2024
Ours | 2025
", + "bbox": [ + 94, + 88, + 482, + 196 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Table 2. Comparison of existing AI-generated face detection benchmarks and ours. Fairness 'General' means fairness evaluation under default/basic settings. Fairness 'Reliability' measures fairness consistency across dynamic scenarios (e.g., post-processing).", + "bbox": [ + 89, + 202, + 483, + 258 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "fairness reliability under real-world variations and transformations. This motivates us to conduct a comprehensive benchmark to evaluate AI face detectors' fairness.", + "bbox": [ + 89, + 263, + 483, + 309 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "The Definition of Demographic Categories. Demography-related labels are highly salient to measuring bias. Following prior works [36, 43-47], we will focus on three key demographic categories: Skin Tone, Gender, and Age, in this work. For skin tone, this vital attribute spans a range from pale to dark. We use the Monk Skin Tone scale [48], specifically designed for computer vision applications. For gender, we adopt binary categories (i.e., Male and Female), following practices by many governments [49, 50] and facial recognition research [45, 51, 52], based on sex at birth. For age, using definitions from the United Nations [53] and Statistics Canada [54], we define five age groups: Child (0-14), Youth (15-24), Adult (25-44), Middle-age Adult (45-64), and Senior $(65+)$ . More discussion is in Appendix A.", + "bbox": [ + 89, + 315, + 483, + 526 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.AI-Face Dataset", + "text_level": 1, + "bbox": [ + 89, + 542, + 250, + 556 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "This section outlines the process of building our demographically annotated AI-Face dataset (see Fig. 2), along with its statistics and annotation quality assessment.", + "bbox": [ + 89, + 564, + 483, + 609 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.1. Data Collection", + "text_level": 1, + "bbox": [ + 89, + 621, + 246, + 635 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "We build our AI-Face dataset by collecting and integrating public real and AI-generated face images sourced from academic publications, GitHub repositories, and commercial tools. We strictly adhere to the license agreements of all datasets to ensure that they allow inclusion in our datasets and secondary use for training and testing. More details are in Appendix B.1. Specifically, the fake face images in our dataset originate from 4 Deepfake Video datasets (i.e., $\\mathrm{FF} + +$ [2], DFDC [21], DFD [22], and Celeb-DFv2 [23]), generated by 10 GAN models (i.e., AttGAN [55], MMDGAN [56], StarGAN [55], StyleGANs [55, 57, 58], MSGGAN [56], ProGAN [59], STGAN [56], and VQGAN [60]), and 8 DM models (i.e., DALLE2 [61], IF [61], Midjourney [61], DCFace [62], Latent Diffusion [63], Palette [64], Stable Diffusion v1.5 [65], Stable Diffusion Inpainting [65]). This constitutes a total of 1,245,660 fake face images in our dataset. We include 6 real source datasets", + "bbox": [ + 89, + 643, + 482, + 900 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "(i.e., FFHQ [6], IMDB-WIKI [20], and real images from $\\mathrm{FF}++$ [2], DFDC [21], DFD [22], and Celeb-DF-v2 [23]). All of them are usually used as a training set for generative models to generate fake face images. This constitutes a total of 400,885 real face images in our dataset. 
In general, our dataset contains 28 subsets and 37 generation methods (i.e., 5 in $\mathrm{FF++}$, 5 in DFD, 8 in DFDC, 1 in Celeb-DF-v2, 10 GANs, and 8 DMs). For all images, we use RetinaFace [66] for detecting and cropping faces.", + "bbox": [ + 511, + 90, + 906, + 227 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.2. Annotation Generation", + "text_level": 1, + "bbox": [ + 511, + 237, + 728, + 252 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.2.1. Skin Tone Annotation Generation", + "text_level": 1, + "bbox": [ + 511, + 260, + 794, + 273 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Skin tone is typically measured using an intuitive approach [67, 68], without requiring a predictive model. Inspired by [67], we developed a method to estimate skin tone using the Monk Skin Tone (MST) Scale [48] (a 10-shade scale: Tone 1 to 10) by combining facial landmark detection with color analysis. Specifically, utilizing Mediapipe's FaceMesh [69] for precise facial landmark localization, we isolate skin regions while excluding non-skin areas such as the eyes and mouth. Based on the detected landmarks, we generate a mask to extract skin pixels from the facial area. These pixels are then subjected to K-Means clustering [70] (we use $\mathrm{K} = 3$ in practice) to identify the dominant skin color within the region of interest. The largest color cluster is mapped to the closest tone in the MST Scale by calculating the Euclidean distance between the cluster centroid and the MST reference colors in RGB space. A minimal code sketch of this procedure is given below.", + "bbox": [ + 511, + 279, + 906, + 521 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.2.2. Gender and Age Annotation Generation", + "text_level": 1, + "bbox": [ + 511, + 530, + 839, + 544 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "For generating gender and age annotations, the existing online software (e.g., Face++ [71]) and open-source tools (e.g., InsightFace [72]) can be used for the prediction. However, they fall short for our task for two reasons: 1) They are mostly designed for face recognition and trained on datasets of real face images but lack generalization capability for annotating AI-generated face images. 2) Their use may introduce bias into our dataset, as they are typically designed and trained without careful consideration of bias and imbalance in the training set. See Appendix B.3 for our experimental study on these tools. To this end, we develop our own annotators to predict gender and age annotations for each image in our dataset.", + "bbox": [ + 511, + 547, + 908, + 744 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Problem Definition. Consider a training dataset $\mathbb{D} = \{(X_i, A_i)\}_{i=1}^n$ of size $n$, where $X_i$ represents the $i$-th face image and $A_i$ signifies a demographic attribute associated with $X_i$. Here, $A_i \in \mathcal{A}$, where $\mathcal{A}$ represents user-defined groups (e.g., for gender, $\mathcal{A} = \{\text{Female, Male}\}$; for age, $\mathcal{A} = \{\text{Child, Youth, Adult, Middle-age Adult, Senior}\}$). Our goal is to design a lightweight, generalizable annotator based on $\mathbb{D}$ that reduces bias while predicting facial demographic attributes for each image in our dataset. 
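To make the Sec. 3.2.1 pipeline concrete before turning to annotator training, here is a minimal Python sketch. It assumes the mediapipe, opencv-python, scikit-learn, and numpy packages; the MST_REFERENCE_RGB values are illustrative placeholders rather than the official Monk Skin Tone colors, and the skin mask is simplified to the convex hull of all landmarks instead of carving out the eyes and mouth.

```python
import cv2
import numpy as np
import mediapipe as mp
from sklearn.cluster import KMeans

# Placeholder reference colors for Tones 1-10 -- NOT the official MST values.
MST_REFERENCE_RGB = np.array([
    [246, 237, 228], [243, 231, 219], [247, 234, 208], [234, 218, 186],
    [215, 189, 150], [160, 126,  86], [130,  92,  67], [ 96,  65,  52],
    [ 58,  49,  42], [ 41,  36,  32]], dtype=float)

def estimate_skin_tone(bgr_image: np.ndarray) -> int:
    """Return a 1-based MST tone index for a cropped face image (BGR)."""
    with mp.solutions.face_mesh.FaceMesh(static_image_mode=True) as mesh:
        result = mesh.process(cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB))
    if not result.multi_face_landmarks:
        raise ValueError("no face detected")
    h, w = bgr_image.shape[:2]
    pts = np.array([(lm.x * w, lm.y * h)
                    for lm in result.multi_face_landmarks[0].landmark],
                   dtype=np.int32)
    # Mask the facial area; the paper additionally excludes eyes and mouth.
    mask = np.zeros((h, w), dtype=np.uint8)
    cv2.fillConvexPoly(mask, cv2.convexHull(pts), 255)
    skin_rgb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB)[mask == 255]
    # K-Means with K=3; the largest cluster is taken as the dominant skin color.
    km = KMeans(n_clusters=3, n_init=10).fit(skin_rgb.astype(float))
    dominant = km.cluster_centers_[np.bincount(km.labels_).argmax()]
    # Map to the closest MST tone by Euclidean distance in RGB space.
    return int(np.linalg.norm(MST_REFERENCE_RGB - dominant, axis=1).argmin()) + 1
```

Because the mapping is a pure nearest-neighbor lookup in RGB space, swapping in the official MST reference colors only requires replacing the placeholder array.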
In practice, we use IMDB-WIKI [20] as the training dataset, which contains", + "bbox": [ + 511, + 750, + 908, + 900 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "3505", + "bbox": [ + 482, + 944, + 514, + 955 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/73cdb83d2fd072d055ecc2be288d34736d78933d58e073c80c3646a5aedbf2b0.jpg", + "image_caption": [ + "Figure 2. Generation pipeline of our Demographically Annotated AI-Face Dataset. First, we collect and filter face images from Deepfake Videos, GAN-generated faces, and DM-generated faces found in public datasets. Second, we perform skin tone, gender, and age annotation generation. Skin tone is estimated by combining facial landmark detection with color analysis to generate the corresponding annotation. For gender and age, we develop annotators trained on the IMDB-WIKI dataset [20], then use them to predict attributes for each image." + ], + "image_footnote": [], + "bbox": [ + 132, + 87, + 864, + 335 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "images along with profile metadata sourced from IMDb and Wikipedia, ensuring that the demographic annotations are as accurate as possible. We trained two annotators with identical architecture and training procedures for gender and age annotations, respectively.", + "bbox": [ + 88, + 400, + 480, + 474 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Annotator Architecture. We build a lightweight annotator based on the CLIP [37] foundation model by leveraging its strong zero-shot and few-shot learning capabilities. Specifically, our annotator employs a frozen pre-trained CLIP ViT L/14 [73] as a feature extractor $\mathbf{E}$ followed by a trainable classifier parameterized by $\theta$, which contains a 3-layer Multi-Layer Perceptron (MLP) $\mathbf{M}$ and a classification head $h$.", + "bbox": [ + 88, + 479, + 482, + 585 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Learning Objective. Aware that neural networks can perform poorly when the training dataset suffers from class imbalance [74] and CLIP is not free from demographic bias [75-77], we introduce an imbalance loss and a fairness loss to address these challenges in the annotator training. Specifically, for image $X_{i}$, its feature $f_{i}$ is obtained through $f_{i} = \mathbf{M}(\mathbf{E}(X_{i}))$. The two losses are detailed below.", + "bbox": [ + 89, + 589, + 482, + 695 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Imbalance Loss: To mitigate the impact of imbalanced data, we use the Vector Scaling [78] loss, a re-weighting method for training models on imbalanced data with distribution shifts, which can be expressed as", + "bbox": [ + 89, + 695, + 483, + 755 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\nL_{imb} = \frac{1}{n}\sum_{i=1}^{n} -u_{A_i}\log \frac{e^{\zeta_{A_i} h(f_i)_{A_i} + \Delta_{A_i}}}{\sum_{A\in\mathcal{A}} e^{\zeta_{A} h(f_i)_{A} + \Delta_{A}}},\n$$\n", + "text_format": "latex", + "bbox": [ + 127, + 763, + 444, + 801 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $u_{A_i}$ is the weighting factor for attribute $A_i$. $h(f_i)_{A_i}$ is the predicted logit for $A_i$. $\zeta_{A_i}$ is the multiplicative logit scaling factor, calculated as the inverse of $A_i$'s frequency. $\Delta_{A_i}$ is the additive logit scaling factor, calculated as the log of $A_i$'s probability. 
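As a concrete reading of the loss above, here is a minimal PyTorch sketch. The multiplicative and additive factors follow the description in the text (inverse class frequency and log class probability); the per-class weights $u_A$ and any temperature or exponent conventions from the original Vector Scaling formulation [78] are left as assumptions.

```python
import torch
import torch.nn.functional as F

def imbalance_loss(logits: torch.Tensor, targets: torch.Tensor,
                   class_counts: torch.Tensor, u: torch.Tensor) -> torch.Tensor:
    """VS-style re-weighted cross-entropy L_imb (sketch).

    logits:       (batch, |A|) raw scores h(f_i)
    targets:      (batch,) integer attribute labels A_i
    class_counts: (|A|,) training-set counts per attribute value
    u:            (|A|,) per-class weighting factors u_A (assumed given)
    """
    probs = class_counts.float() / class_counts.sum()
    zeta = 1.0 / probs          # multiplicative factor: inverse class frequency
    delta = torch.log(probs)    # additive factor: log class probability
    adjusted = zeta * logits + delta
    per_sample = F.cross_entropy(adjusted, targets, reduction="none")
    return (u[targets] * per_sample).mean()
```

With uniform weights ($u_A = 1$) and $\zeta_A = 1$, this reduces to logit-adjusted cross-entropy, the usual starting point for such re-weighting schemes.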
More details about them are in Appendix B.4.", + "bbox": [ + 89, + 809, + 482, + 885 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Fairness Loss: We introduce a fairness loss to minimize", + "bbox": [ + 109, + 885, + 482, + 900 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "the disparity between the distribution $\mathcal{D}^f$ of $f$ and the conditional distribution $\mathcal{D}^{f_A}$ of $f$ on attribute $A\in \mathcal{A}$. Specifically, we follow [79, 80] to minimize the summation of the following Sinkhorn distance between these two distributions:", + "bbox": [ + 511, + 400, + 906, + 460 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\nL_{fair} = \sum_{A\in\mathcal{A}} \inf_{\gamma\in\Gamma(\mathcal{D}^{f},\mathcal{D}^{f_{A}})} \bigl\{ \mathbb{E}_{(p,q)\sim\gamma}[c(p,q)] + \alpha H(\gamma\,|\,\mu\otimes\nu) \bigr\},\n$$\n", + "text_format": "latex", + "bbox": [ + 513, + 465, + 906, + 498 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $\Gamma(\mathcal{D}^f, \mathcal{D}^{f_A})$ is the set of joint distributions based on $\mathcal{D}^f$ and $\mathcal{D}^{f_A}$. Let $p$ and $q$ be the points from $\mathcal{D}^f$ and $\mathcal{D}^{f_A}$, respectively. Then, $c(p, q)$ represents the transport cost [80]. Let $\mu$ and $\nu$ be the reference measures from the set of measures on $f$. Then, $H(\gamma | \mu \otimes \nu)$ represents the relative entropy of $\gamma$ with respect to the product measure $\mu \otimes \nu$. $\alpha \geq 0$ is a regularization hyperparameter. In practice, we use the empirical form of $L_{fair}$.", + "bbox": [ + 511, + 503, + 906, + 626 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Total Loss: Therefore, the final learning objective becomes $\mathcal{L}(\theta) = L_{imb} + \lambda L_{fair}$, where $\lambda$ is a hyperparameter. Training. Traditional optimization methods like stochastic gradient descent can lead to poor model generalization due to sharp loss landscapes with multiple local and global minima. To address this, we use Sharpness-Aware Minimization (SAM) [81] to enhance our annotator's generalization by flattening the loss landscape. Specifically, flattening is attained by determining the optimal $\epsilon^{*}$ for perturbing model parameters $\theta$ to maximize the loss, formulated as: $\epsilon^{*} = \arg \max_{\| \epsilon \|_{2} \leq \beta} \mathcal{L}(\theta + \epsilon) \approx \arg \max_{\| \epsilon \|_{2} \leq \beta} \epsilon^{\top} \nabla_{\theta} \mathcal{L} = \beta \text{sign}(\nabla_{\theta} \mathcal{L})$, where $\beta$ controls the perturbation magnitude. The approximation is based on a first-order Taylor expansion, assuming $\epsilon$ is small. The final equation is obtained by solving a dual norm problem, where sign denotes the sign function and $\nabla_{\theta} \mathcal{L}$ is the gradient of $\mathcal{L}$ with respect to $\theta$. As a result, the model parameters are updated by solving: $\min_{\theta} \mathcal{L}(\theta + \epsilon^{*})$.", + "bbox": [ + 511, + 626, + 908, + 901 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "3506", + "bbox": [ + 482, + 944, + 514, + 955 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/ee33e2d2ecf2b2de8ef43e6b49c40496f7bca36c846a744baca4b05c14801536.jpg", + "image_caption": [ + "Figure 3. Distribution of face images of the AI-Face dataset. 
The figure shows the (a) subset distribution and the demographic distribution for (b) skin tone, (c) gender, and (d) age. The outer rings in (b), (c), and (d) represent the proportion of groups within each attribute category, while the inner rings indicate the distribution of fake $(F)$ and real $(R)$ images within those groups." + ], + "image_footnote": [], + "bbox": [ + 89, + 85, + 316, + 212 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/73dce0516cea83156d02b385626f5b7c5190a16667a7fb0a7c2413c9ae3dae08.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 318, + 97, + 549, + 212 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/64b85c5d29c5a70ca88dd95e3cd88007ea0acd67770d93cd21e538fb0c9ef472.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 547, + 98, + 705, + 210 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/7c9ff3f0b1c04966ed4493926d615fc9c3189dd883ddb15e16207aafedd36107.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 707, + 98, + 905, + 212 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Inference. We use the trained annotators to predict demographic labels for each image in the AI-Face dataset, except for those from IMDB-WIKI, which already contain true labels.", + "bbox": [ + 89, + 263, + 485, + 308 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.3. Dataset Statistics", + "text_level": 1, + "bbox": [ + 89, + 318, + 259, + 332 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Fig. 3 illustrates the subset distribution and demographic attributes of the AI-Face dataset. The dataset contains approximately three times more generated images than real images, with diffusion model-generated images constituting the majority. In terms of demographic attributes, the most common skin tones are Tone 5 (31.14%) and Tone 6 (35.16%). The lightest skin tones (Tones 1-3) are underrepresented, comprising only 0.97% of the dataset. The dataset is relatively balanced across gender. Adults (25-44) are the predominant age group (49.67%).", + "bbox": [ + 88, + 337, + 485, + 489 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.4. Annotation Quality Assessment", + "text_level": 1, + "bbox": [ + 89, + 498, + 369, + 512 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "To assess the quality of demographic annotations in our AI-Face dataset, we conducted a user study. Three participants labeled the demographic attributes for the given images (details of the labeling activities are in Appendix B.5), with the final ground truth determined by majority vote. We then compare our annotations with those in the A-FF++, A-DFDC, A-Celeb-DF-V2, and A-DFD datasets. Specifically, we perform two assessments: 1) Strategic comparison: We select 1,000 images from A-FF++ and A-DFDC that have different annotations from AI-Face. These images likely represent challenging cases. 2) Random comparison: We randomly sample 1,000 images from A-Celeb-DF-V2 and A-DFD. Due to the limited age classes in these datasets, only gender was evaluated. The results, presented in Table 3, demonstrate the high quality of the AI-Face annotations and their superiority over the annotations of other datasets. For example, our annotation quality (ACC) surpasses that of A-FF++ by $78.714\%$ on gender and $48.000\%$ on age.", + "bbox": [ + 89, + 516, + 485, + 789 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4. 
Fairness Benchmark Settings", + "text_level": 1, + "bbox": [ + 89, + 799, + 359, + 816 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "This section presents the fairness benchmark settings for detection methods and evaluation metrics on AI-Face (80%/20%: Train/Test). More settings are in Appendix C.1.", + "bbox": [ + 89, + 820, + 483, + 866 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Detection Methods. Our benchmark implements 12 detectors. The methodologies cover a spectrum that", + "bbox": [ + 89, + 869, + 483, + 900 + ], + "page_idx": 4 + }, + { + "type": "table", + "img_path": "images/95a24ad1b607c1481f8c4106de72b750295ab509feb3316e4a35a864ef0ddefc.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
Evaluation Type | Dataset | Gender | Age
ACC | Precision | Recall | ACC | Precision | Recall
Strategic | A-FF++ | 8.143 | 17.583 | 5.966 | 37.700 | 39.459 | 45.381
AI-Face | 86.857 | 74.404 | 77.367 | 85.700 | 74.024 | 63.751
A-DFDC | 21.600 | 28.604 | 23.082 | 33.400 | 38.011 | 40.165
AI-Face | 91.700 | 92.129 | 83.448 | 77.000 | 76.184 | 62.646
Random | A-Celeb-DF-V2 | 89.628 | 90.626 | 90.494 | -
AI-Face | 91.206 | 91.474 | 91.767 | -
A-DFD | 70.900 | 71.686 | 74.435 | -
AI-Face | 92.300 | 91.060 | 91.727 | -
", + "bbox": [ + 516, + 260, + 911, + 359 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Table 3. Annotation quality assessment results $(\\%)$ for $A - FF + +$ A- DFDC, A-Celeb-DF-V2, A-DFD, and our AI-Face. ACC: Accuracy.", + "bbox": [ + 511, + 364, + 906, + 393 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "is specifically tailored to detect AI-generated faces from Deepfake Videos, GANs, and DMs. They can be classified into four types: Naive detectors: refer to backbone models that can be directly utilized as the detector for binary classification, including CNN-based (i.e., Xception [82], EfficientB4 [83]) and transformer-based (i.e., ViT-B/16 [84]). Frequency-based: explore the frequency domain for forgery detection (i.e., F3Net [85], SPSL [86], SRM [87]). Spatial-based: focus on mining spatial characteristics (e.g., texture) within images for detection (i.e., UCF [26], UnivFD [88], CORE [89]). Fairness-enhanced: focus on improving fairness in AI-generated face detection by designing specific algorithms (i.e., DAW-FDD [29], DAGFDD [29], PG-FDD [30]).", + "bbox": [ + 511, + 398, + 908, + 609 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Evaluation Metrics. To provide a comprehensive benchmarking, we consider 5 fairness metrics commonly used in fairness community [90-94] and 5 widely used utility metrics [95-98]. For fairness metrics, we consider Demographic Parity $(F_{DP})$ [90, 91], Max Equalized Odds $(F_{MEO})$ [93], Equal Odds $(F_{EO})$ [92], and Overall Accuracy Equality $(F_{OAE})$ [93] for evaluating group (e.g., gender) and intersectional (e.g., individuals of a specific gender and simultaneously a specific skin tone) fairness. In experiments, the intersectional groups are Female-Light (F-L), Female-Medium (F-M), Female-Dark (Dark), Male-Light (M-L), Male-Medium (M-M), and Male-Dark (M-D), where we group 10 categories of skin tones into Light (Tone 1-3), Medium (Tone 4-6), and Dark (Tone 7-10) for simplicity according to [99]. We also use individual fairness $(F_{IND})$ [94, 100] (i.e., similar individuals should have similar predicted outcomes) for estimation. For utility metrics, we employ the Area Under the ROC Curve (AUC), Accuracy (ACC), Average Precision (AP), Equal Error Rate (EER),", + "bbox": [ + 511, + 613, + 910, + 900 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "3507", + "bbox": [ + 482, + 944, + 514, + 955 + ], + "page_idx": 4 + }, + { + "type": "table", + "img_path": "images/29af5e2554ab6eb21ea1181e6fc447cf25726e3560b37dc9569d40ac740ee3a4.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
Measure | Attribute | Metric | Model Type
Naive | Frequency | Spatial | Fairness-enhanced
Xception [82] | EfficientB4 [83] | ViT-B/16 [84] | F3Net [85] | SPSL [86] | SRM [87] | UCF [26] | UnivFD [88] | CORE [89] | DAW-FDD [29] | DAG-FDD [29] | PG-FDD [30]
Fairness(%)↓ | Skin Tone | FMEO | 8.836 | 8.300 | 6.264 | 19.938 | 8.055 | 10.002 | 17.325 | 2.577 | 10.779 | 14.118 | 6.551 | 6.465
FDP | 9.751 | 6.184 | 7.728 | 12.876 | 9.379 | 10.897 | 12.581 | 8.556 | 10.317 | 10.706 | 8.617 | 9.746
FOAE | 1.271 | 4.377 | 2.168 | 2.818 | 1.135 | 0.915 | 1.883 | 2.748 | 1.332 | 1.667 | 1.388 | 0.882
FEO | 12.132 | 11.062 | 8.813 | 23.708 | 9.789 | 14.239 | 21.92 | 5.536 | 13.069 | 16.604 | 7.383 | 9.115
Gender | FMEO | 3.975 | 5.385 | 5.104 | 4.717 | 4.411 | 6.271 | 5.074 | 4.503 | 5.795 | 5.510 | 5.910 | 3.190
FDP | 1.691 | 1.725 | 1.344 | 1.864 | 1.827 | 1.957 | 1.736 | 1.190 | 2.154 | 2.015 | 2.151 | 1.252
FOAE | 0.975 | 1.487 | 1.803 | 1.129 | 1.037 | 1.772 | 1.451 | 1.622 | 1.389 | 1.325 | 1.420 | 1.071
FEO | 4.143 | 5.863 | 6.031 | 4.870 | 4.534 | 6.78 | 5.510 | 5.408 | 5.931 | 5.696 | 6.066 | 3.702
Age | FMEO | 27.883 | 6.796 | 14.937 | 38.801 | 27.614 | 24.843 | 47.500 | 5.436 | 33.882 | 45.466 | 15.229 | 14.804
FDP | 10.905 | 11.849 | 11.839 | 14.906 | 11.232 | 11.570 | 17.049 | 15.249 | 12.564 | 14.106 | 9.633 | 10.467
FOAE | 7.265 | 2.856 | 6.838 | 10.116 | 7.270 | 6.524 | 11.652 | 3.793 | 8.760 | 11.878 | 5.533 | 5.009
FEO | 42.216 | 10.300 | 30.795 | 55.032 | 40.943 | 38.528 | 67.545 | 14.148 | 48.729 | 64.384 | 30.182 | 29.585
Intersection | FMEO | 10.505 | 17.586 | 9.384 | 21.369 | 10.379 | 15.142 | 20.134 | 6.119 | 15.34 | 16.565 | 12.178 | 9.578
FDP | 14.511 | 8.607 | 11.535 | 17.175 | 13.259 | 15.186 | 17.031 | 4.026 | 14.301 | 14.088 | 11.705 | 14.697
FOAE | 2.536 | 8.461 | 4.928 | 4.870 | 2.464 | 3.998 | 3.536 | 6.287 | 2.775 | 3.547 | 4.035 | 3.062
FEO | 24.315 | 25.114 | 27.443 | 47.783 | 21.679 | 30.112 | 43.376 | 20.255 | 28.84 | 33.122 | 26.295 | 18.348
Individual | FIND | 10.338 | 25.742 | 0.022 | 1.872 | 2.518 | 7.621 | 0.767 | 3.523 | 0.041 | 3.772 | 0.901 | 0.780
Utility(%) | - | AUC↑ | 98.583 | 98.611 | 98.69 | 98.714 | 98.747 | 97.936 | 98.082 | 98.192 | 98.579 | 97.811 | 98.771 | 99.172
ACC↑ | 96.308 | 94.203 | 94.472 | 95.719 | 96.346 | 95.092 | 95.151 | 93.651 | 96.224 | 95.426 | 95.722 | 96.174
AP↑ | 99.350 | 99.542 | 99.571 | 99.453 | 99.356 | 99.172 | 99.273 | 99.400 | 99.360 | 99.015 | 99.498 | 99.694
EER↓ | 5.149 | 6.689 | 6.372 | 5.256 | 4.371 | 6.483 | 7.708 | 7.633 | 5.145 | 7.063 | 5.499 | 4.961
FPR↓ | 12.961 | 20.066 | 16.426 | 14.679 | 13.661 | 15.746 | 13.646 | 18.550 | 13.410 | 16.670 | 14.844 | 10.971
Training Time / Epoch | 1h15min | 2h25min | 2h40min | 1h18min | 1h20min | 3h10min | 5h05min | 4h | 1h16min | 1h25min | 1h17min | 7h20min
", + "bbox": [ + 94, + 80, + 906, + 349 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Table 4. Overall performance comparison of difference methods on the AI-Face dataset. The best performance is shown in bold.", + "bbox": [ + 117, + 354, + 875, + 369 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "and False Positive Rate (FPR).", + "bbox": [ + 89, + 375, + 295, + 388 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "5. Results and Analysis", + "text_level": 1, + "bbox": [ + 89, + 402, + 289, + 421 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "In this section, we estimate the existing AI-generated image detectors' fairness performance alongside their utility on our AI-Face Dataset. More results can be found in Appendix D.", + "bbox": [ + 89, + 425, + 485, + 472 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "5.1. General Fairness Comparison", + "text_level": 1, + "bbox": [ + 89, + 481, + 356, + 498 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Overall Performance. Table 4 reports the overall performance on our AI-Face test set. Our observations are: 1) Fairness-Enhanced Models (specifically PG-FDD [30]) are the most effective in achieving both high fairness and utility, underscoring the effectiveness of specialized fairness-enhancement techniques in mitigating demographic biases. 2) UnivFD [88], based on the CLIP backbone [73], also achieves commendable fairness, suggesting that foundation models equipped with fairness-focused enhancements could be a promising direction for developing fairer detectors. 3) Naive detectors, such as EfficientB4 [83], trained on large, diverse datasets (e.g., our AI-Face) can achieve competitive fairness and utility, highlighting the potential of fairness improvements by choosing specific architecture. 4) 10 out of 12 detectors have an AUC higher than $98\\%$ , demonstrating our AI-Face dataset is significant for training AI-face detectors in resulting high utility. 5) PG-FDD demonstrates superior performance but has a long training time, which can be explored and addressed in the future.", + "bbox": [ + 89, + 503, + 485, + 790 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Performance on Different Subsets. 1) Fig. 4 demonstrates the intersectional $F_{EO}$ and AUC performance of detectors on each test subset. We observe that the fairness performance varies a lot among different generative methods for every detector. The largest bias on most detectors comes from detecting face images generated by diffusion models. 2) DAG-FDD [29] and SRM [87] demonstrate the most", + "bbox": [ + 89, + 795, + 483, + 901 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "consistent fairness across subsets, indicating a robust handling of bias introduced by different generative methods. 3) Moreover, the stable utility demonstrates our dataset's expansiveness and diversity, enabling effective training to detect AI-generated faces from various generative techniques.", + "bbox": [ + 511, + 375, + 908, + 450 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Performance on Different Subgroups. We conduct an analysis of all detectors on intersectional subgroups. 1) As shown in Fig. 5, facial images with lighter skin tone are more often misclassified as fake, likely due to the underrepresentation of lighter tones (Tone 1-3) in our dataset (see Fig. 3 (b)). This suggests detectors tend to show higher error rates for minority groups. 
2) Although gender representation is relatively balanced (see Fig. 3 (c)) in our dataset, the detectors consistently exhibit higher false positive rates for female subgroups, indicating a persistent gender-based bias.", + "bbox": [ + 511, + 455, + 908, + 608 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "5.2. Fairness Reliability Assessment", + "text_level": 1, + "bbox": [ + 511, + 619, + 790, + 635 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Fairness Robustness Evaluation. We apply 6 post-processing methods to the test images: Random Crop (RC) [101], Rotation (RT) [34], Brightness Contrast (BC) [34], Hue Saturation Value (HSV) [34], Gaussian Blur (GB) [34], and JPEG Compression (JC) [102]. Fig. 6 shows each detector's intersectional $F_{EO}$ and AUC performance changes after post-processing. Our observations are: 1) These impairments tend to wash out forensic traces, leading to evident performance degradation. 2) Post-processing does not always make detectors more biased (e.g., UCF, UnivFD, CORE, and DAW-FDD show better fairness after rotation), though it hurts utility. 3) Fairness-enhanced detectors struggle to maintain fairness when images undergo post-processing. 4) Spatial detectors have better fairness robustness compared with other model types.", + "bbox": [ + 511, + 638, + 908, + 866 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Fairness Generalization Evaluation. To evaluate detectors' fairness generalization capability, we test them on Casual", + "bbox": [ + 511, + 869, + 908, + 902 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "3508", + "bbox": [ + 482, + 944, + 514, + 955 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/e46031def0f22af80823816c764efa8d609057504dfa7814e2516c6ac5abeecb.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 96, + 89, + 228, + 281 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/6ec0f78a6d4f1e7cf989ca92809a4fd0af26df25346a5b2ad4dfc784590fa136.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 230, + 90, + 362, + 281 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/01b5f57c01e0a85aab33413e5edc86dec9275e457a2fcc570db0106a9a8b77cd.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 364, + 90, + 496, + 281 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/3ed381744d2a149f361ac5002bf73cdb9224e270715f0926951af5bd27b443d2.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 496, + 90, + 629, + 281 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/f703227f7c0a1d3042de5635e5224485e29975e281ebb83c2f19c9c1928eb972.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 629, + 90, + 761, + 281 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/b5cf2bdd2f22f0fe64e0b55ea896aff1bfb41e28fc01205ab907db05edd9c00d.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 764, + 90, + 897, + 281 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/e33836a66a827e63ba49d5986faa44a281c5f61c13bdc81a1326c85ec9b10c12.jpg", + "image_caption": [ + "Figure 4. Visualization of the intersectional $F_{EO}$ (\%) and AUC (\%) of detectors on different subsets. A smaller $F_{EO}$ polygon area represents better fairness; a larger AUC area means better utility." 
+ ], + "image_footnote": [], + "bbox": [ + 91, + 309, + 228, + 393 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/54aaaa160577b20bc7065760bc2188b1c0a14a90ef49b20360fb2d3397e8e5f7.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 230, + 309, + 367, + 393 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/a7cd58856432107fdfe153cd0a7f772366496c4324f72f4b142fbf760ebb6672.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 367, + 309, + 500, + 393 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/04dd2777922990ee178b7c87375c81b7f3c5b33437c35c7ebb0d8e9771a5386d.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 500, + 309, + 635, + 393 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/0dda740b787a117192b48a263b4d80ea906be8d79747a6fd1edd3122ea595f52.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 635, + 309, + 771, + 393 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/3fb87041182ba581a4144a41520a81745c8ffea2f4b79db682ca098f23a32c69.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 771, + 309, + 903, + 393 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/be00424190d93f7bc86666e788fa87f2a877b5ed9b6f4a274c244514da0fd855.jpg", + "image_caption": [ + "Figure 5. FPR (%) of each intersectional subgroup. The dashed line marks the lowest FPR, on the Female-Light (F-L) subgroup." + ], + "image_footnote": [], + "bbox": [ + 91, + 393, + 228, + 479 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/c8b428974b8488c6589a108584966d8fd60bd1d37b3a3fd5055c70f86681c9f6.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 230, + 393, + 367, + 479 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/7b66e46430a04c9303d75e59aa66d9b11e3b84337a57a3059062880851871ded.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 367, + 393, + 500, + 479 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/07971c0affdc5b1324513f078b01c55b5fe787d10e186a57af172f5bf4f5000c.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 500, + 393, + 635, + 479 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/0c507a0a9d0f1ddb836170ca5ed44a91ce6d0652e050b0f96195d9cbc1692452.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 635, + 393, + 771, + 479 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/59f86cfc141be60c428d5a68cff6f5c5454d96715dbd8157a104501234ee22ca.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 771, + 393, + 903, + 479 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Conversations v2 (CCv2) [103], DF-Platter [16], and GenData [17], none of which are part of AI-Face. Notably, CCv2 is a dataset that contains only real face images with demographic annotations (e.g., gender) self-reported by the participants. Results on the gender attribute in Table 5 show that: 1) Even well-designed detectors that focus on improving utility or fairness generalization (e.g., UCF, PG-FDD) struggle to achieve consistently superior performance across different dataset domains. This highlights the remaining fairness generalization issue. 2) DAW-FDD and PG-FDD are two fairness-enhanced detectors that require access to demographic information during training, but their fairness does not drop drastically when evaluated on CCv2. 
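For readers reproducing this kind of evaluation, here is a minimal sketch of computing group fairness gaps from binary predictions and group labels. The max-minus-min formulation is one common reading of $F_{OAE}$ and equalized-odds-style gaps [92, 93]; the benchmark's exact definitions may differ in detail.

```python
import numpy as np

def group_fairness_gaps(y_true, y_pred, groups):
    """Max-minus-min gaps in accuracy (F_OAE-style) and in TPR/FPR
    (F_EO-style) across demographic groups; labels: 1 = fake, 0 = real."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    acc, tpr, fpr = [], [], []
    for g in np.unique(groups):
        m = groups == g
        acc.append((y_pred[m] == y_true[m]).mean())
        tpr.append(y_pred[m][y_true[m] == 1].mean())  # P(pred fake | fake, g)
        fpr.append(y_pred[m][y_true[m] == 0].mean())  # P(pred fake | real, g)
    gap = lambda v: float(np.max(v) - np.min(v))
    return {"F_OAE": gap(acc), "F_EO": gap(tpr) + gap(fpr)}
```

On a dataset like CCv2 that contains only real images, the TPR term is undefined, so only accuracy-based gaps are meaningful there, which is consistent with Table 5 reporting only $F_{OAE}$ and ACC on CCv2.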
Their stable fairness on CCv2 reflects the high accuracy of the annotations in our AI-Face.", + "bbox": [ + 88, + 502, + 485, + 715 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Effect of Training Set Size. We randomly sample $20\%$, $40\%$, $60\%$, and $80\%$ of each training subset from AI-Face to assess the impact of training size on performance. Key observations from Fig. 7 (Left): 1) Among all detectors, UnivFD demonstrates the most stable fairness and utility performance as the training dataset size changes, likely due to its fixed CLIP backbone. 2) Increasing the training dataset size generally improves model utility, but this pattern does not extend to fairness metrics. In fact, certain detectors such as F3Net and UCF exhibit worsening fairness as the training size reaches its maximum. This suggests that more training data does not necessarily lead to fairer detectors.", + "bbox": [ + 88, + 719, + 485, + 901 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Effect of the Ratio of Real and Fake. To examine how training real-to-fake sample ratios affect detector performance, we set the ratios at 1:10, 1:1, and 10:1 while keeping the total sample count constant. Experimental results in Fig. 7 (Right) show: 1) Most detectors' fairness improves as real sample representation increases, probably because increasing real samples and reducing fake ones helps detectors avoid overfitting to artifacts specific to fake samples. This makes it easier for detectors to distinguish real from fake, even for underrepresented groups, thereby enhancing fairness. 2) Most detectors achieve the highest AUC with balanced data.", + "bbox": [ + 511, + 502, + 908, + 669 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "5.3. Discussion", + "text_level": 1, + "bbox": [ + 511, + 686, + 632, + 700 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "According to the above experiments, we summarize the unsolved fairness problems in recent detectors: 1) Detectors' fairness is unstable when detecting face images generated by different generative methods, indicating a future direction for enhancing fairness stability since new generative models continue to emerge. 2) Even though fairness-enhanced detectors achieve small overall fairness gaps, they still show biased detection towards minority groups. Future studies should be more cautious when designing fair detectors to ensure balanced performance across all demographic groups. 3) There is currently no reliable detector, as all detectors experience severe performance degradation under image post-processing and cross-domain evaluation. Future", + "bbox": [ + 509, + 704, + 908, + 902 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "3509", + "bbox": [ + 482, + 944, + 514, + 955 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/62c71090c95651797a47b03964b61722d2a55d6802d1bfa822336b0986e770aa.jpg", + "image_caption": [ + "Figure 6. Performance ratio after vs. before post-processing. Points closer to 1.0 (i.e., no post-processing) indicate better robustness." 
+ ], + "image_footnote": [], + "bbox": [ + 94, + 87, + 294, + 188 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/68008447f733340256d211e2ea028e169a18c9aa3abaec3961e24c8e2fd13c1d.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 295, + 88, + 496, + 188 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/f7f61cca8e81436fef80425da46d9a5460d41d86dcba5847c49004dcf222480b.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 500, + 88, + 700, + 188 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/578ff42d96fa595ace4f5a6a53b17142d06e0d0d598dc743a023d62a2e304a01.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 702, + 88, + 903, + 188 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/d49bb99d9380d2e3f49b97da2a2341c9ce5b91983ac5e555cbecef162926305a.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
Model Type | Detector | Dataset
CCv2 [103] | DF-Platter [16] | GenData [17]
Fairness(%)↓ FOAE | Utility(%)↑ ACC | Fairness(%)↓ FOAE | FEO | Utility(%)↑ AUC | Fairness(%)↓ FOAE | FEO | Utility(%)↑ AUC
Naive | Xception | 1.006 (+0.031) | 86.465 (-9.843) | 6.836 (+5.861) | 9.789 (+5.646) | 81.273 (-17.310) | 2.539 (+1.564) | 13.487 (+9.344) | 96.971 (-1.612)
EfficientB4 | 4.077 (+0.259) | 82.980 (-11.223) | 8.786 (+7.299) | 12.370 (+6.507) | 67.694 (-30.917) | 3.304 (+1.817) | 1.995 (-3.686) | 93.213 (-5.398)
ViT-B/16 | 2.167 (+0.364) | 81.489 (-12.983) | 0.015 (-1.788) | 12.373 (+6.342) | 76.050 (-22.640) | 3.164 (+1.361) | 9.610 (+3.579) | 88.253 (-10.437)
Frequency | F3Net | 5.743 (+4.614) | 87.867 (-7.852) | 3.521 (+2.392) | 6.445 (+1.575) | 85.112 (-13.602) | 1.188 (+0.059) | 16.306 (+11.436) | 91.603 (-7.111)
SPSL | 0.601 (-0.436) | 80.006 (-16.340) | 5.109 (+4.072) | 7.842 (+3.308) | 82.175 (-16.572) | 1.385 (+0.348) | 9.261 (+4.272) | 98.838 (+0.091)
SRM | 7.000 (+5.228) | 79.768 (-15.324) | 3.823 (+2.051) | 6.567 (-0.213) | 66.401 (-31.535) | 3.281 (+1.509) | 7.907 (+1.127) | 90.049 (-7.887)
Spatial | UCF | 2.169 (+0.718) | 93.009 (-2.142) | 8.687 (+7.236) | 17.068 (+11.558) | 80.821 (-17.261) | 3.513 (+2.062) | 10.529 (+5.019) | 87.778 (-10.304)
UnivFD | 7.625 (+6.003) | 67.983 (-25.668) | 4.540 (+2.918) | 9.950 (+4.542) | 76.443 (-21.749) | 1.645 (+0.023) | 3.848 (-1.560) | 94.418 (-3.774)
CORE | 4.410 (+3.021) | 83.328 (-12.896) | 7.741 (+6.352) | 17.348 (+11.417) | 77.226 (-21.353) | 3.759 (+2.370) | 23.289 (+17.358) | 98.408 (-0.171)
Fairness-enhanced | DAW-FDD | 4.726 (+3.401) | 84.685 (-10.741) | 5.536 (+4.211) | 13.667 (+7.791) | 81.807 (-16.004) | 1.443 (+0.118) | 10.228 (+4.532) | 97.854 (+0.043)
DAG-FDD | 2.364 (+0.944) | 83.918 (-11.804) | 3.064 (+1.644) | 22.203 (+16.137) | 75.206 (-23.565) | 0.714 (-0.706) | 10.332 (+4.266) | 92.108 (-6.663)
PG-FDD | 1.513 (+0.442) | 92.852 (-3.322) | 4.565 (+3.494) | 9.717 (+6.015) | 85.271 (-13.901) | 3.063 (+1.992) | 9.479 (+5.777) | 93.329 (-5.843)
", + "bbox": [ + 101, + 207, + 892, + 380 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Table 5. Fairness generalization results based on the gender attribute. The smallest performance changes (in parentheses) and the best performance are in red and in bold, respectively. Only $F_{OAE}$ fairness metric and ACC metric are used in CCv2 due to all samples are real.", + "bbox": [ + 89, + 382, + 906, + 410 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/084be84553a199b922a7ec664074fd2869b88a754601e70c2177a9a14abde391.jpg", + "image_caption": [ + "Figure 7. Impact of the training set size (Left) and the ratio of real and fake (Right) on detectors' intersectional $F_{EO}(\\%)$ and AUC (\\%)." + ], + "image_footnote": [], + "bbox": [ + 91, + 415, + 906, + 537 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "studies should aim to develop a unified framework that ensures fairness, robustness, and generalization, as these three characteristics are essential for creating a reliable detector. Moreover, integrating foundation models (e.g., CLIP) into detector design may help mitigate bias.", + "bbox": [ + 89, + 560, + 485, + 636 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "6. Conclusion", + "text_level": 1, + "bbox": [ + 89, + 651, + 209, + 667 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "This work presents the first demographically annotated million-scale AI-Face dataset, serving as a pivotal foundation for addressing the urgent need for developing fair AI face detectors. Based on this dataset, we conduct the first comprehensive fairness benchmark, shedding light on the fairness performance and challenges of current representative AI face detectors. Our findings can inspire and guide researchers in refining current models and exploring new methods to mitigate bias. Limitation and Future Work: One limitation is that our dataset's annotations are algorithmically generated, so they may lack $100\\%$ accuracy. This challenge is difficult to resolve, as demographic attributes for most AI-generated faces are often too ambiguous to predict and do not map to real-world individuals. We plan to enhance annotation quality through human labeling in the future. We", + "bbox": [ + 88, + 672, + 485, + 900 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "also plan to extend our fairness benchmark to evaluate large language models like LLaMA2 [104] and GPT4 [105] for detecting AI faces. Social Impact: Malicious users could misuse AI-generated face images from our dataset to create fake social media profiles and spread misinformation. To mitigate this risk, only users who submit a signed end-user license agreement will be granted access to our dataset.", + "bbox": [ + 511, + 560, + 906, + 666 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Ethics Statement", + "text_level": 1, + "bbox": [ + 511, + 670, + 661, + 686 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Our dataset collection and annotation generation are approved by Purdue's Institutional Review Board. The dataset is only for research purposes. All data included in this work are sourced from publicly available datasets, and we strictly comply with each dataset's license agreement to ensure lawful inclusion and permissible secondary use for training and testing. All collected data and their associated licenses are mentioned in the Datasheet of AI-Face in Appendix E. 
Our annotation processes prioritize ethical considerations: 1) $76\%$ of the images we annotated are generated facial images, ensuring no potential for harm to any individual. 2) For real images, we only provide annotations for content either licensed by the original copyright holders or explicitly stated as freely shareable for research purposes.", + "bbox": [ + 509, + 688, + 908, + 900 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "3510", + "bbox": [ + 482, + 944, + 514, + 955 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Acknowledgments", + "text_level": 1, + "bbox": [ + 91, + 90, + 250, + 107 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "This work is supported by the U.S. National Science Foundation (NSF) under grant IIS-2434967 and the National Artificial Intelligence Research Resource (NAIRR) Pilot and TACC Lonestar6. The views, opinions and/or findings expressed are those of the authors and should not be interpreted as representing the official views or policies of NSF and the NAIRR Pilot.", + "bbox": [ + 89, + 108, + 485, + 213 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 91, + 226, + 187, + 242 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[1] L. Lin, N. Gupta, Y. Zhang, H. Ren, C.-H. Liu, F. Ding, X. Wang, X. Li, L. Verdoliva, and S. Hu, \"Detecting multimedia generated by large ai models: A survey,\" arXiv preprint arXiv:2402.00045, 2024. 1", + "[2] A. Rossler, D. Cozzolino, L. Verdoliva, C. Riess, J. Thies, and M. Nießner, \"Faceforensics++: Learning to detect manipulated facial images,\" in Proceedings of the IEEE/CVF international conference on computer vision, pp. 1-11, 2019. 1, 2, 3, 18, 19", + "[3] \"Deepfakes github.\" https://github.com/deepfakes/faceswap. Accessed: 2024-04-17. 1, 2", + "[4] \"Fakeapp.\" https://www.fakeapp.com/. Accessed: 2024-04-17. 1, 2", + "[5] A. Brock, J. Donahue, and K. Simonyan, \"Large scale gan training for high fidelity natural image synthesis,\" in 7th International Conference on Learning Representations, ICLR 2019, 2019. 1", + "[6] T. Karras, S. Laine, and T. Aila, \"A style-based generator architecture for generative adversarial networks,\" in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 4401-4410, 2019. 2, 3, 18", + "[7] T. Karras, S. Laine, M. Aittala, J. Hellsten, J. Lehtinen, and T. Aila, \"Analyzing and improving the image quality of stylegan,\" in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 8110-8119, 2020.", + "[8] T. Karras, M. Aittala, S. Laine, E. Härkönen, J. Hellsten, J. Lehtinen, and T. Aila, \"Alias-free generative adversarial networks,\" Advances in neural information processing systems, vol. 34, pp. 852-863, 2021. 1, 2", + "[9] R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer, \"High-resolution image synthesis with latent diffusion models,\" in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 10684-10695, 2022. 1", + "[10] T. T. Eapen et al., \"How generative ai can augment human creativity.\" https://hbr.org/2023/07/how-generative-ai-can-augment-human-creativity, 2023. Accessed: 2024-04-21. 1, 2", + "[11] BBC News, \"Trump supporters target black voters with faked ai images.\" https://www.bbc.com/news/world-us-canada-68440150, 2024. Accessed: 2024-05-09. 1", + "[12] H. S. Sætra, \"Generative ai: Here to stay, but for good?,\" Technology in Society, vol. 75, p. 
102372, 2023. 1" + ], + "bbox": [ + 101, + 251, + 485, + 900 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[13] M. Westerlund, \"The emergence of deepfake technology: A review,\" Technology innovation management review, vol. 9, no. 11, 2019. 1, 2", + "[14] L. Jiang, R. Li, W. Wu, C. Qian, and C. C. Loy, \"Deeperforensics-1.0: A large-scale dataset for real-world face forgery detection,\" in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 2889-2898, 2020. 2", + "[15] K. Narayan, H. Agarwal, K. Thakral, S. Mittal, M. Vatsa, and R. Singh, \"DeePhy: On deepfake phylogeny,\" in 2022 IEEE International Joint Conference on Biometrics (IJCB), pp. 1-10, IEEE, 2022. 2", + "[16] K. Narayan, H. Agarwal, K. Thakral, S. Mittal, M. Vatsa, and R. Singh, \"Df-platter: multi-face heterogeneous deepfake dataset,\" in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9739-9748, 2023. 2, 7, 8", + "[17] C. Teo, M. Abdollahzadeh, and N.-M. M. Cheung, \"On measuring fairness in generative models,\" Advances in Neural Information Processing Systems, vol. 36, 2023. 1, 2, 7, 8", + "[18] Z. Liu, P. Luo, X. Wang, and X. Tang, \"Deep learning face attributes in the wild,\" in Proceedings of International Conference on Computer Vision (ICCV), December 2015. 2", + "[19] Y. Xu, P. Terhörst, M. Pedersen, and K. Raja, \"Analyzing fairness in deepfake detection with massively annotated databases,\" IEEE Transactions on Technology and Society, 2024. 1, 2, 14", + "[20] R. Rothe, R. Timofte, and L. Van Gool, \"Dex: Deep expectation of apparent age from a single image,\" in Proceedings of the IEEE international conference on computer vision workshops, pp. 10-15, 2015. 2, 3, 4, 18", + "[21] B. Dolhansky, J. Bitton, B. Pflaum, J. Lu, R. Howes, M. Wang, and C. C. Ferrer, \"The deepfake detection challenge (dfdc) dataset,\" arXiv preprint arXiv:2006.07397, 2020. 2, 3, 18, 19", + "[22] Google Research, \"Contributing data to deepfake detection research,\" 2019. Accessed: 2024-04-12. 2, 3, 18, 19", + "[23] Y. Li, X. Yang, P. Sun, H. Qi, and S. Lyu, \"Celeb-df: A large-scale challenging dataset for deepfake forensics,\" in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 3207-3216, 2020. 2, 3, 18, 19", + "[24] W. Pu, J. Hu, X. Wang, Y. Li, S. Hu, B. Zhu, R. Song, Q. Song, X. Wu, and S. Lyu, \"Learning a deep dual-level network for robust deepfake detection,\" Pattern Recognition, vol. 130, p. 108832, 2022. 1, 2", + "[25] H. Guo, S. Hu, X. Wang, M.-C. Chang, and S. Lyu, \"Robust attentive deep neural network for detecting gan-generated faces,\" IEEE Access, vol. 10, pp. 32574-32583, 2022.", + "[26] Z. Yan, Y. Zhang, Y. Fan, and B. Wu, \"Ucf: Uncovering common features for generalizable deepfake detection,\" in Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 22412-22423, 2023. 5, 6, 19, 21, 23, 24, 25, 26, 27", + "[27] L. Papa, L. Faiella, L. Corvitto, L. Maiano, and I. Amerini, \"On the use of stable diffusion for creating realistic faces: from generation to detection,\" in 2023 11th International" + ], + "bbox": [ + 522, + 92, + 906, + 900 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "3511", + "bbox": [ + 482, + 944, + 513, + 955 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Workshop on Biometrics and Forensics (IWBF), pp. 1-6, IEEE, 2023. 1, 2", + "[28] L. Trinh and Y. 
Liu, \"An examination of fairness of ai models for deepfake detection,\" IJCAI, 2021. 1, 2, 3", + "[29] Y. Ju, S. Hu, S. Jia, G. H. Chen, and S. Lyu, \"Improving fairness in deepfake detection,\" in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 4655-4665, 2024. 1, 2, 5, 6, 21, 23, 24, 25, 26, 27", + "[30] L. Lin, X. He, Y. Ju, X. Wang, F. Ding, and S. Hu, \"Preserving fairness generalization in deepfake detection,\" CVPR, 2024. 1, 2, 5, 6, 19, 21, 23, 24, 25, 26, 27", + "[31] Z. Yan, T. Yao, S. Chen, Y. Zhao, X. Fu, J. Zhu, D. Luo, L. Yuan, C. Wang, S. Ding, et al., \"Df40: Toward next-generation deepfake detection,\" NeurIPS, 2024. 1, 2, 3", + "[32] C. Li et al., \"A continual deepfake detection benchmark: Dataset, methods, and essentials,\" in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1339-1349, 2023. 2, 3", + "[33] J. Deng, C. Lin, P. Hu, C. Shen, Q. Wang, Q. Li, and Q. Li, \"Towards benchmarking and evaluating deepfake detection,\" IEEE Transactions on Dependable and Secure Computing, 2024. 3", + "[34] Z. Yan, Y. Zhang, X. Yuan, S. Lyu, and B. Wu, \"Deepfakebench: A comprehensive benchmark of deepfake detection,\" in NeurIPS, 2023. 3, 6", + "[35] B. M. Le, J. Kim, S. Tariq, K. Moore, A. Abuadbba, and S. S. Woo, \"Sok: Facial deepfake detectors,\" arXiv, 2024. 2, 3", + "[36] C. Hazirbas, J. Bitton, B. Dolhansky, J. Pan, A. Gordo, and C. C. Ferrer, \"Towards measuring fairness in ai: the casual conversations dataset,\" IEEE Transactions on Biometrics, Behavior, and Identity Science, vol. 4, no. 3, pp. 324-332, 2021. 2, 3", + "[37] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, et al., \"Learning transferable visual models from natural language supervision,\" in International conference on machine learning, pp. 8748-8763, PMLR, 2021. 2, 4", + "[38] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, \"Generative adversarial nets,\" Advances in neural information processing systems, vol. 27, 2014. 2", + "[39] \"Midjourney.\" https://mid-journey.ai/. Accessed: 2024-04-17. 2", + "[40] A. Ramesh et al., \"Hierarchical text-conditional image generation with clip latents,\" arXiv preprint arXiv:2204.06125, 2022. 2", + "[41] D. O'Sullivan, \"A high school student created a fake 2020 us candidate. twitter verified it.\" https://cnn.it/3HpHfzz, 2020. Accessed: 2024-04-21. 2", + "[42] S. Bond, \"That smiling linkedin profile face might be a computer-generated fake.\" https://www.npr.org/2022/03/27/1088140809/fake-linkedin-profiles, 2022. Accessed: 2024-04-21. 2", + "[43] V. Albiero, K. Bowyer, K. Vangara, and M. King, \"Does face recognition accuracy get better with age? deep face matchers" + ], + "bbox": [ + 99, + 92, + 483, + 900 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "say no,\" in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 261-269, 2020. 3", + "[44] V. Albiero, K. Ks, K. Vangara, K. Zhang, M. C. King, and K. W. Bowyer, \"Analysis of gender inequality in face recognition accuracy,\" in Proceedings of the IEEE/cvf winter conference on applications of computer vision workshops, pp. 81-89, 2020.", + "[45] C. M. Cook, J. J. Howard, Y. B. Sirotin, J. L. Tipton, and A. R. 
Vermury, \"Demographic effects in facial recognition and their dependence on image acquisition: An evaluation of eleven commercial systems,\" IEEE Transactions on Biometrics, Behavior, and Identity Science, vol. 1, no. 1, pp. 32-41, 2019. 3, 14", + "[46] K. Krishnapriya, V. Albiero, K. Vangara, M. C. King, and K. W. Bowyer, “Issues related to face recognition accuracy varying based on race and skin tone,” IEEE Transactions on Technology and Society, vol. 1, no. 1, pp. 8–20, 2020. 14", + "[47] B. Porgali, V. Albiero, J. Ryda, C. C. Ferrer, and C. Hazirbas, \"The casual conversations v2 dataset,\" in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pp. 10-17, June 2023. 3", + "[48] Google, “The Monk Skin Tone Scale,” 2024. [Accessed October 16, 2024]. 3, 14", + "[49] United States Department of State — Bureau of Consular Affairs, “Selecting your gender marker - travel,” 2022. [Accessed October 16, 2024]. 3, 14", + "[50] Australian Bureau of Statistics, \"Standard for Sex, Gender, Variations of Sex Characteristics and Sexual Orientation Variables,\" 2024. [Accessed October 16, 2024]. 3, 14", + "[51] J. J. Howard, Y. B. Sirotin, and A. R. Vemury, “The effect of broad and specific demographic homogeneity on the imposter distributions and false match rates in face recognition algorithm performance,” in 2019 IEEE 10th international conference on biometrics theory, applications and systems (btas), pp. 1–8, IEEE, 2019. 3, 14", + "[52] I. D. Raji and J. Buolamwini, \"Actionable auditing: Investigating the impact of publicly naming biased performance results of commercial ai products,\" in Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, pp. 429-435, 2019. 3, 14", + "[53] United Nations, “Provisional Guidelines on Standard International Age Classifications,” 1982. [Accessed October 16, 2024]. 3, 14", + "[54] Statistics Canada, \"Age Categories, Life Cycle Groupings,\" 2017. [Accessed October 16, 2024]. 3, 14", + "[55] O. Giudice, L. Guarnera, and S. Battiato, “Fighting deep-fakes by detecting gan dct anomalies,” Journal of Imaging, vol. 7, no. 8, p. 128, 2021. 3, 18", + "[56] V. Asnani, X. Yin, T. Hassner, and X. Liu, \"Reverse engineering of generative models: Inferring model hyperparameters from generated images,\" IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023. 3, 18", + "[57] D. Beniaguev, “Synthetic faces high quality (sfhq) dataset,” 2022. 3, 18", + "[58] Z. Lu, D. Huang, L. Bai, J. Qu, C. Wu, X. Liu, and W. Ouyang, \"Seeing is not always believing: Benchmarking" + ], + "bbox": [ + 522, + 92, + 906, + 900 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "3512", + "bbox": [ + 482, + 944, + 514, + 955 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "human and model perception of ai-generated images,\" Advances in Neural Information Processing Systems, vol. 36, 2024. 3, 18", + "[59] L. M. Dang, S. I. Hassan, S. Im, J. Lee, S. Lee, and H. Moon, \"Deep learning based computer generated face identification using convolutional neural network,\" Applied Sciences, vol. 8, no. 12, p. 2610, 2018. 3, 18", + "[60] P. Esser, R. Rombach, and B. Ommer, “Taming transformers for high-resolution image synthesis,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 12873–12883, 2021. 3, 18", + "[61] Z. Wang, J. Bao, W. Zhou, W. Wang, H. Hu, H. Chen, and H. 
Li, \"Dire for diffusion-generated image detection,\" arXiv preprint arXiv:2303.09295, 2023. 3, 18", + "[62] M. Kim, F. Liu, A. Jain, and X. Liu, \"Dface: Synthetic face generation with dual condition diffusion model,\" in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12715-12725, 2023. 3, 18", + "[63] R. Corvi, D. Cozzolino, G. Zingarini, G. Poggi, K. Nagano, and L. Verdoliva, \"On the detection of synthetic images generated by diffusion models,\" in ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1-5, IEEE, 2023. 3, 18", + "[64] M. Awsafur Rahman, B. Paul, N. Haque Sarker, Z. I. A. Hakim, and S. Anowarul Fattah, \"Artifact: A large-scale dataset with artificial and factual images for generalizable and robust synthetic image detection,\" arXiv e-prints, pp. arXiv-2302, 2023. 3, 18", + "[65] H. Song, S. Huang, Y. Dong, and W.-W. Tu, \"Robustness and generalizability of deepfake detection: A study with diffusion models,\" arXiv preprint arXiv:2309.02218, 2023. 3, 18", + "[66] J. Deng, J. Guo, E. Ververas, I. Kotsia, and S. Zafeiriou, \"Retinaface: Single-shot multi-level face localisation in the wild,\" in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 5203-5212, 2020. 3, 31, 32", + "[67] K. S. Krishnapriya, G. Pangelinan, M. C. King, and K. W. Bowyer, \"Analysis of manual and automated skin tone assignments,\" in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) Workshops, pp. 429-438, January 2022. 3", + "[68] W. Thong, P. Joniak, and A. Xiang, \"Beyond skin tone: A multidimensional measure of apparent skin color,\" in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 4903-4913, October 2023. 3", + "[69] C. Lugaresi, J. Tang, H. Nash, C. McClanahan, E. Uboweja, M. Hays, F. Zhang, C.-L. Chang, M. G. Yong, J. Lee, et al., \"Mediapipe: A framework for building perception pipelines,\" arXiv preprint arXiv:1906.08172, 2019. 3", + "[70] J. A. Hartigan and M. A. Wong, \"Algorithm as 136: A k-means clustering algorithm,\" Journal of the royal statistical society. series c (applied statistics), vol. 28, no. 1, pp. 100-108, 1979. 3", + "[71] Megvii Technology Limited, “Face++ Face Detection.” https://www(faceplusplus.com/face-detection/. Accessed: 2024-03. 3, 14, 18, 19" + ], + "bbox": [ + 99, + 90, + 485, + 898 + ], + "page_idx": 10 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[72] InsightFace Project Contributors, \"InsightFace: State-of-the-Art Face Analysis Toolbox.\" https://insightface.ai/. Accessed: 2024-03. 3, 14, 18, 19", + "[73] G. Ilharco, M. Wortsman, R. Wightman, C. Gordon, N. Carlini, R. Taori, A. Dave, V. Shankar, H. Namkoong, J. Miller, H. Hajishirzi, A. Farhadi, and L. Schmidt, \"Open clip.\" https://github.com/mlfoundations/open Clip, 2021.4.6.21", + "[74] K. Cao, C. Wei, A. Gaidon, N. Arechiga, and T. Ma, \"Learning imbalanced datasets with label-distribution-aware margin loss,\" Advances in neural information processing systems, vol. 32, 2019. 4", + "[75] S. Agarwal, G. Krueger, J. Clark, A. Radford, J. W. Kim, and M. Brundage, “Evaluating clip: towards characterization of broader capabilities and downstream implications,” arXiv preprint arXiv:2108.02818, 2021. 4", + "[76] M. M. Tanjim, K. K. Singh, K. Kafle, R. Sinha, and G. W. 
Cottrell, “Discovering and mitigating biases in clip-based image editing,” in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 2984–2993, 2024.", + "[77] J. Wang and G. Kang, “Learn to rectify the bias of clip for unsupervised semantic segmentation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4102–4112, 2024. 4", + "[78] G. R. Kini, O. Paraskevas, S. Oymak, and C. Thrampoulidis, \"Label-imbalanced and group-sensitive classification under overparameterization,\" Advances in Neural Information Processing Systems, vol. 34, pp. 18970-18983, 2021. 4", + "[79] G. Peyré, M. Cuturi, et al., \"Computational optimal transport: With applications to data science,\" Foundations and Trends® in Machine Learning, vol. 11, no. 5-6, pp. 355-607, 2019. 4", + "[80] M. Cuturi, \"Sinkhorn distances: Lightspeed computation of optimal transport,\" Advances in neural information processing systems, vol. 26, 2013. 4", + "[81] P. Foret, A. Kleiner, H. Mobahi, and B. Neyshabur, \"Sharpness-aware minimization for efficiently improving generalization,\" in International Conference on Learning Representations, 2020. 4", + "[82] F. Chollet, “Xception: Deep learning with depthwise separable convolutions,” in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1251–1258, 2017. 5, 6, 19, 21, 23, 24, 25, 26, 27", + "[83] M. Tan and Q. Le, \"Efficientnet: Rethinking model scaling for convolutional neural networks,\" in International conference on machine learning, pp. 6105-6114, PMLR, 2019. 5, 6, 19, 21, 23, 24, 25, 26, 27", + "[84] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, et al., \"An image is worth 16x16 words: Transformers for image recognition at scale,\" in 9th International Conference on Learning Representations, 2021. 5, 6, 20, 21, 23, 24, 25, 26, 27", + "[85] Y. Qian, G. Yin, L. Sheng, Z. Chen, and J. Shao, “Thinking in frequency: Face forgery detection by mining frequency-aware clues,” in European conference on computer vision, pp. 86–103, Springer, 2020. 5, 6, 21, 23, 24, 25, 26, 27" + ], + "bbox": [ + 522, + 92, + 908, + 898 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "3513", + "bbox": [ + 482, + 944, + 514, + 955 + ], + "page_idx": 10 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[86] H. Liu, X. Li, W. Zhou, Y. Chen, Y. He, H. Xue, W. Zhang, and N. Yu, \"Spatial-phase shallow learning: rethinking face forgery detection in frequency domain,\" in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 772-781, 2021. 5, 6, 21, 23, 24, 25, 26, 27", + "[87] Y. Luo, Y. Zhang, J. Yan, and W. Liu, \"Generalizing face forgery detection with high-frequency features,\" in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 16317-16326, 2021. 5, 6, 19, 21, 23, 24, 25, 26, 27", + "[88] U. Ojha, Y. Li, and Y. J. Lee, \"Towards universal fake image detectors that generalize across generative models,\" in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 24480-24489, 2023. 5, 6, 21, 23, 24, 25, 26, 27", + "[89] Y. Ni, D. Meng, C. Yu, C. Quan, D. Ren, and Y. Zhao, \"Core: Consistent representation learning for face forgery detection,\" in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12-21, 2022. 5, 6, 21, 23, 24, 25, 26, 27", + "[90] X. 
Han, J. Chi, Y. Chen, Q. Wang, H. Zhao, N. Zou, and X. Hu, “Ffb: A fair fairness benchmark for in-processing group fairness methods,” in ICLR, 2024. 5", + "[91] N. Mehrabi, F. Morstatter, N. Saxena, K. Lerman, and A. Galstyan, “A survey on bias and fairness in machine learning,” ACM computing surveys (CSUR), vol. 54, no. 6, pp. 1–35, 2021. 5", + "[92] J. Wang, X. E. Wang, and Y. Liu, “Understanding instance-level impact of fairness constraints,” in International Conference on Machine Learning, pp. 23114–23130, PMLR, 2022. 5", + "[93] H. Wang, L. He, R. Gao, and F. P. Calmon, \"Aleatoric and epistemic discrimination in classification,\" ICML, 2023. 5", + "[94] C. Dwork, M. Hardt, T. Pitassi, O. Reingold, and R. Zemel, \"Fairness through awareness,\" in Proceedings of the 3rd innovations in theoretical computer science conference, pp. 214-226, 2012. 5", + "[95] Z. Yan, Y. Luo, S. Lyu, Q. Liu, and B. Wu, \"Transcending forgery specificity with latent space augmentation for generalizable deepfake detection,\" in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8984-8994, June 2024. 5", + "[96] H. Ren, L. Lin, C.-H. Liu, X. Wang, and S. Hu, \"Improving generalization for ai-synthesized voice detection,\" in AAAI, 2025.", + "[97] Z. Yan, Y. Zhao, S. Chen, M. Guo, X. Fu, T. Yao, S. Ding, and L. Yuan, \"Generalizing deepfake video detection with plug-and-play: Video-level blending and spatiotemporal adapter tuning,\" in CVPR, 2025.", + "[98] J. Cheng, Z. Yan, Y. Zhang, L. Hao, J. Ai, Q. Zou, C. Li, and Z. Wang, \"Stacking brick by brick: Aligned feature isolation for incremental face forgery detection,\" in CVPR, 2025. 5", + "[99] \"Monk skin tone scale,\" in https://en.wikipedia.org/wiki/Monk_Skin_Tone_Scale, Wikipedia, The Free Encyclopedia. 5", + "[100] S. Hu and G. H. Chen, \"Fairness in survival analysis with distributionally robust optimization,\" arXiv, 2023. 5" + ], + "bbox": [ + 94, + 90, + 485, + 900 + ], + "page_idx": 11 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[101] F. Cocchi, L. Baraldi, S. Poppi, M. Cornia, L. Baraldi, and R. Cucchiara, “Unveiling the impact of image transformations on deepfake detection: An experimental analysis,” in International Conference on Image Analysis and Processing, pp. 345–356, Springer, 2023. 6", + "[102] D. Cozzolino, G. Poggi, R. Corvi, M. Nießner, and L. Verdoliva, “Raising the bar of ai-generated image detection with clip,” arXiv preprint arXiv:2312.00195, 2023. 6", + "[103] B. Porgali, V. Albiero, J. Ryda, C. C. Ferrer, and C. Hazirbas, “The casual conversations v2 dataset,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10–17, 2023. 7, 8", + "[104] H. Touvron, L. Martin, K. Stone, P. Albert, A. Almahairi, Y. Babaei, N. Bashlykov, S. Batra, P. Bhargava, S. Bhosale, et al., \"Llama 2: Open foundation and fine-tuned chat models,\" arXiv preprint arXiv:2307.09288, 2023. 8", + "[105] J. Achiam, S. Adler, S. Agarwal, L. Ahmad, I. Akkaya, F. L. Aleman, D. Almeida, J. Altenschmidt, S. Altman, S. Anadkat, et al., \"Gpt-4 technical report,\" arXiv preprint arXiv:2303.08774, 2023. 8", + "[106] J. Buolamwini and T. Gebru, “Gender shades: Intersectional accuracy disparities in commercial gender classification,” in Conference on fairness, accountability and transparency, pp. 77–91, PMLR, 2018. 14", + "[107] B. Lu, J.-C. Chen, C. D. Castillo, and R. 
Chellappa, “An experimental evaluation of covariates effects on unconstrained face verification,” IEEE Transactions on Biometrics, Behavior, and Identity Science, vol. 1, no. 1, pp. 42–55, 2019. 14", + "[108] Z. Khan and Y. Fu, “One label, one billion faces: Usage and consistency of racial categories in computer vision,” in Proceedings of the 2021 acm conference on fairness, accountability, and transparency, pp. 587–597, 2021. 14", + "[109] S. Sachdeva, “Fitzpatrick skin typing: Applications in dermatology,” Indian journal of dermatology, venereology and leprology, vol. 75, p. 93, 2009. 14", + "[110] J. J. Howard, Y. B. Sirotin, J. L. Tipton, and A. R. Vemury, \"Reliability and validity of image-based and self-reported skin phenotype metrics,\" IEEE Transactions on Biometrics, Behavior, and Identity Science, vol. 3, no. 4, pp. 550-560, 2021. 14", + "[111] U. Okoji, S. Taylor, and J. Lipoff, “Equity in skin typing: why it is time to replace the fitzpatrick scale,” British Journal of Dermatology, vol. 185, no. 1, pp. 198–199, 2021. 14", + "[112] M. Groh, C. Harris, R. Daneshjou, O. Badri, and A. Koochek, \"Towards transparency in dermatology image datasets with skin tone annotations by experts, crowds, and an algorithm,\" Proceedings of the ACM on Human-Computer Interaction, vol. 6, no. CSCW2, pp. 1-26, 2022. 14", + "[113] R. Williamson and A. Menon, “Fairness risk measures,” in International conference on machine learning, pp. 6786–6797, PMLR, 2019. 21", + "[114] D. Levy, Y. Carmon, J. C. Duchi, and A. Sidford, \"Largescale methods for distributionally robust optimization,\" Advances in Neural Information Processing Systems, vol. 33, pp. 8847-8860, 2020. 21" + ], + "bbox": [ + 94, + 90, + 906, + 900 + ], + "page_idx": 11 + }, + { + "type": "page_number", + "text": "3514", + "bbox": [ + 482, + 944, + 514, + 955 + ], + "page_idx": 11 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[115] R. T. Rockafellar, S. Uryasev, et al., \"Optimization of conditional value-at-risk,\" Journal of risk, vol. 2, pp. 21-42, 2000. 21", + "[116] T. Hashimoto, M. Srivastava, H. Namkoong, and P. Liang, \"Fairness without demographics in repeated loss minimization,\" in International Conference on Machine Learning, pp. 1929-1938, PMLR, 2018. 21", + "[117] J. C. Duchi and H. Namkoong, “Learning models with uniform performance via distributionally robust optimization,” The Annals of Statistics, vol. 49, no. 3, pp. 1378–1406, 2021. 21", + "[118] T. Gebru, J. Morgenstern, B. Vecchione, J. W. Vaughan, H. Wallach, H. D. Iiii, and K. Crawford, \"Datasheets for datasets,\" Communications of the ACM, vol. 64, no. 12, pp. 86-92, 2021. 
31" + ], + "bbox": [ + 89, + 90, + 485, + 304 + ], + "page_idx": 12 + }, + { + "type": "page_number", + "text": "3515", + "bbox": [ + 482, + 945, + 514, + 955 + ], + "page_idx": 12 + } +] \ No newline at end of file diff --git a/2025/AI-Face_ A Million-Scale Demographically Annotated AI-Generated Face Dataset and Fairness Benchmark/69a93785-ab72-45f7-87fa-9182ea82821d_model.json b/2025/AI-Face_ A Million-Scale Demographically Annotated AI-Generated Face Dataset and Fairness Benchmark/69a93785-ab72-45f7-87fa-9182ea82821d_model.json new file mode 100644 index 0000000000000000000000000000000000000000..64fb4decf859953491970d6f1097ce7a4276d11d --- /dev/null +++ b/2025/AI-Face_ A Million-Scale Demographically Annotated AI-Generated Face Dataset and Fairness Benchmark/69a93785-ab72-45f7-87fa-9182ea82821d_model.json @@ -0,0 +1,3020 @@ +[ + [ + { + "type": "header", + "bbox": [ + 0.107, + 0.003, + 0.182, + 0.043 + ], + "angle": 0, + "content": "CVF" + }, + { + "type": "header", + "bbox": [ + 0.237, + 0.0, + 0.812, + 0.045 + ], + "angle": 0, + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." + }, + { + "type": "title", + "bbox": [ + 0.093, + 0.131, + 0.908, + 0.175 + ], + "angle": 0, + "content": "AI-Face: A Million-Scale Demographically Annotated AI-Generated Face Dataset and Fairness Benchmark" + }, + { + "type": "text", + "bbox": [ + 0.27, + 0.203, + 0.724, + 0.221 + ], + "angle": 0, + "content": "Li Lin\\(^{1}\\), Santosh\\(^{1}\\), Mingyang Wu\\(^{1}\\), Xin Wang\\(^{2}\\), Shu Hu\\(^{1\\dagger}\\)" + }, + { + "type": "text", + "bbox": [ + 0.137, + 0.222, + 0.865, + 0.239 + ], + "angle": 0, + "content": "\\(^{1}\\)Purdue University, West Lafayette, USA {lin1785, santosh2, wu2415, hu968}@purdue.edu" + }, + { + "type": "text", + "bbox": [ + 0.14, + 0.24, + 0.858, + 0.257 + ], + "angle": 0, + "content": "\\(^{2}\\)University at Albany, State University of New York, New York, USA xwang56@albany.edu" + }, + { + "type": "title", + "bbox": [ + 0.248, + 0.292, + 0.327, + 0.308 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.323, + 0.486, + 0.64 + ], + "angle": 0, + "content": "AI-generated faces have enriched human life, such as entertainment, education, and art. However, they also pose misuse risks. Therefore, detecting AI-generated faces becomes crucial, yet current detectors show biased performance across different demographic groups. Mitigating biases can be done by designing algorithmic fairness methods, which usually require demographically annotated face datasets for model training. However, no existing dataset encompasses both demographic attributes and diverse generative methods simultaneously, which hinders the development of fair detectors for AI-generated faces. In this work, we introduce the AI-Face dataset, the first million-scale demographically annotated AI-generated face image dataset, including real faces, faces from deepfake videos, and faces generated by Generative Adversarial Networks and Diffusion Models. Based on this dataset, we conduct the first comprehensive fairness benchmark to assess various AI face detectors and provide valuable insights and findings to promote the future fair design of AI face detectors. Our AI-Face dataset and benchmark code are publicly available at https://github.com/Purdue-M2/AI-Face-FairnessBench." 
+ }, + { + "type": "title", + "bbox": [ + 0.092, + 0.661, + 0.222, + 0.677 + ], + "angle": 0, + "content": "1. Introduction" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.682, + 0.486, + 0.88 + ], + "angle": 0, + "content": "AI-generated faces are created using sophisticated AI technologies that are visually difficult to discern from real ones [1]. They can be summarized into three categories: deepfake videos [2] created by typically using Variational Autoencoders (VAEs) [3, 4], faces generated from Generative Adversarial Networks (GANs) [5-8], and Diffusion Models (DMs) [9]. These technologies have significantly advanced the realism and controllability of synthetic facial representations. Generated faces can enrich media and increase creativity [10]. However, they also carry significant risks of misuse. For example, during the 2024 United States presidential election, fake face images of Donald Trump surrounded by groups of black people smiling and laughing to" + }, + { + "type": "image", + "bbox": [ + 0.532, + 0.29, + 0.885, + 0.486 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.512, + 0.49, + 0.907, + 0.545 + ], + "angle": 0, + "content": "Figure 1. Comparison between AI-Face and other datasets in terms of demographic annotation, generation category, and the number of generation methods. 'DF', 'GAN', and 'DM' stand for Deepfake Videos, Generative Adversarial Networks, and Diffusion Models." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.553, + 0.907, + 0.613 + ], + "angle": 0, + "content": "encourage African Americans to vote Republican are spreading online [11]. This could distort public opinion and erode people's trust in media [12, 13], necessitating the detection of AI-generated faces for their ethical use." + }, + { + "type": "text", + "bbox": [ + 0.511, + 0.614, + 0.909, + 0.903 + ], + "angle": 0, + "content": "However, one major issue existing in current AI face detectors [24-27] is biased detection (i.e., unfair detection performance among demographic groups [19, 28-30]). Mitigating biases can be done by designing algorithmic fairness methods, but they usually require demographically annotated face datasets for model training. For example, works like [29, 30] have made efforts to enhance fairness in the detection based on A-FF++ [19] and A-DFD [19]. However, both datasets are limited to containing only faces from deepfake videos, which could cause the trained models not to be applicable for fairly detecting faces generated by GANs and DMs. While some datasets (e.g., GenData [17], DF40 [31]) include GAN and DM faces, they either lack demographic annotations or provide only limited demographic attributes. Most importantly, no existing dataset offers sufficient diversity in generation methods while also providing demographic labels. A comparison of existing datasets is shown in Fig. 1. These limitations of existing datasets hamper the development of fair technologies for detecting AI-generated faces." + }, + { + "type": "page_footnote", + "bbox": [ + 0.116, + 0.887, + 0.241, + 0.9 + ], + "angle": 0, + "content": "†Corresponding author" + }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.945, + 0.516, + 0.957 + ], + "angle": 0, + "content": "3503" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.134, + 0.089, + 0.861, + 0.232 + ], + "angle": 0, + "content": "
DatasetYearFace ImagesGeneration Category#Generation MethodsSource of Real ImagesDemographic Annotation
#Real#FakeDeepfake VideosGANDMSkin ToneGenderAge
DF-1.0 [14]20202.9M14.7M1Self-Recording
DeePhy [15]20221K50.4K3YouTube
DF-Platter [16]2023392.3K653.4K3YouTube
GenData [17]2023-20K3CelebA [18]
A-FF++ [19]202429.8K149.1K5YouTube
A-DFD [19]202410.8K89.6K5Self-Recording
A-DFDC [19]202454.5K52.6K8Self-Recording
A-Celeb-DF-v2 [19]202426.3K166.5K1Self-Recording
A-DF-1.0 [19]2024870.3K321.5K1Self-Recording
AI-Face2025400K1.2M37FFHQ [6], IMDB-WIKI [20], real from FF++ [2], DFDC [21], DFD [22],Celeb-DF-v2 [23]
" + }, + { + "type": "table_caption", + "bbox": [ + 0.158, + 0.235, + 0.835, + 0.25 + ], + "angle": 0, + "content": "Table 1. Quantitative comparison of existing datasets with ours on demographically annotated AI-generated faces." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.255, + 0.482, + 0.466 + ], + "angle": 0, + "content": "Moreover, benchmarking fairness provides a direct method to uncover prevalent and unique fairness issues in recent AI-generated face detection. However, there is a lack of a comprehensive benchmark to estimate the fairness of existing AI face detectors. Existing benchmarks [32-35] primarily assess utility, neglecting systematic fairness evaluation. Two studies [28, 36] do evaluate fairness in detection models, but their examination is based on a few outdated detectors. Furthermore, detectors' fairness reliability (e.g., robustness with test set post-processing, fairness generalization) has not been assessed. The absence of a comprehensive fairness benchmark impedes a thorough understanding of the fairness behaviors of recent AI face detectors and obscures the research path for detector fairness guarantees." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.467, + 0.483, + 0.693 + ], + "angle": 0, + "content": "In this work, we build the first million-scale demographically annotated AI-generated face image dataset: AI-Face. The face images are collected from various public datasets, including the real faces that are usually used to train AI face generators, faces from deepfake videos, and faces generated by GANs and DMs. Each face is demographically annotated by our designed measurement method and Contrastive Language-Image Pretraining (CLIP) [37]-based lightweight annotator. Next, we conduct the first comprehensive fairness benchmark on our dataset to estimate the fairness performance of 12 representative detectors coming from four model types. Our benchmark exposes common and unique fairness challenges in recent AI face detectors, providing essential insights that can guide and enhance the future design of fair AI face detectors. Our contributions are as follows:" + }, + { + "type": "text", + "bbox": [ + 0.11, + 0.695, + 0.483, + 0.739 + ], + "angle": 0, + "content": "- We build the first million-scale demographically annotated AI-generated face dataset by leveraging our designed measure and developed lightweight annotator." + }, + { + "type": "text", + "bbox": [ + 0.11, + 0.741, + 0.483, + 0.784 + ], + "angle": 0, + "content": "- We conduct the first comprehensive fairness benchmark of AI-generated face detectors, providing an extensive fairness assessment of current representative detectors." + }, + { + "type": "text", + "bbox": [ + 0.11, + 0.786, + 0.483, + 0.83 + ], + "angle": 0, + "content": "- Based on our experiments, we summarize the unsolved questions and offer valuable insights within this research field, setting the stage for future investigations." + }, + { + "type": "list", + "bbox": [ + 0.11, + 0.695, + 0.483, + 0.83 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.848, + 0.353, + 0.866 + ], + "angle": 0, + "content": "2. Background and Motivation" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.871, + 0.483, + 0.901 + ], + "angle": 0, + "content": "AI-generated Faces and Biased Detection. 
AI-generated face images, created by advanced AI technologies, are vi" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.255, + 0.908, + 0.603 + ], + "angle": 0, + "content": "sually difficult to discern from real ones. They can be summarized into three categories: 1) Deepfake Videos. Initiated in 2017 [13], these use face-swapping and face-reenactment techniques with a variational autoencoder to replace a face in a target video with one from a source [3, 4]. Note that our paper focuses solely on images extracted from videos. 2) GAN-generated Faces. Post-2017, Generative Adversarial Networks (GANs) [38] like StyleGANs [6-8] have significantly improved generated face realism. 3) DM-generated Faces. Diffusion models (DMs), emerging in 2021, generate detailed faces from textual descriptions and offer greater controllability. Tools like Midjourney [39] and DALLE2 [40] facilitate customized face generation. While these AI-generated faces can enhance visual media and creativity [10], they also pose risks, such as being misused in social media profiles [41, 42]. Therefore, numerous studies focus on detecting AI-generated faces [24-27], but current detectors often show performance disparities among demographic groups [19, 28-30]. This bias can lead to unfair targeting or exclusion, undermining trust in detection models. Recent efforts [29, 30] aim to enhance fairness in deepfake detection but mainly address deepfake videos, overlooking biases in detecting GAN- and DM-generated faces." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.608, + 0.909, + 0.774 + ], + "angle": 0, + "content": "The Existing Datasets. Current AI-generated facial datasets with demographic annotations are limited in size, generation categories, methods, and annotations, as illustrated in Table 1. For instance, A-FF++, A-DFD, A-DFDC, and A-Celeb-DF-v2 [19] are deepfake video datasets with fewer than one million images. Datasets like DF-1.0 [14] and DF-Platter [16] lack various demographic annotations. Additionally, existing datasets offer limited generation methods. These limitations hinder the development of fair AI face detectors, motivating us to build a million-scale demographically annotated AI-Face dataset." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.78, + 0.909, + 0.903 + ], + "angle": 0, + "content": "Benchmark for Detecting AI-generated Faces. Benchmarks are essential for evaluating AI-generated face detectors under standardized conditions. Existing benchmarks, as shown in Table 2, mainly focus on detectors' utility, often overlooking fairness [31-35]. Loc et al. [28] and CCv1 [36] examined detector fairness. However, their study did not have an analysis on DM-generated faces and only measured bias between groups in basic scenarios without considering" + }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.945, + 0.516, + 0.957 + ], + "angle": 0, + "content": "3504" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.095, + 0.089, + 0.483, + 0.198 + ], + "angle": 0, + "content": "
Existing BenchmarksYearCategoryScope of Benchmark
Deepfake VideosGANDMUtilityFairness General Reliability
Loc et al. [28]2021
CCv1 [36]2021
DeepfakeBench [34]2023
CDDB [32]2023
Lin et al. [33]2024
Le et al. [35]2024
DF40 [31]2024
Ours2025
" + }, + { + "type": "table_caption", + "bbox": [ + 0.09, + 0.203, + 0.485, + 0.26 + ], + "angle": 0, + "content": "Table 2. Comparison of existing AI-generated face detection benchmarks and ours. Fairness 'General' means fairness evaluation under default/basic settings. Fairness 'Reliability' measures fairness consistency across dynamic scenarios (e.g., post-processing)." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.265, + 0.484, + 0.31 + ], + "angle": 0, + "content": "fairness reliability under real-world variations and transformations. This motivates us to conduct a comprehensive benchmark to evaluate AI face detectors' fairness." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.316, + 0.484, + 0.527 + ], + "angle": 0, + "content": "The Definition of Demographic Categories. Demography-related labels are highly salient to measuring bias. Following prior works [36, 43-47], we will focus on three key demographic categories: Skin Tone, Gender, and Age, in this work. For skin tone, this vital attribute spans a range from pale to dark. We use the Monk Skin Tone scale [48], specifically designed for computer vision applications. For gender, we adopt binary categories (i.e., Male and Female), following practices by many governments [49, 50] and facial recognition research [45, 51, 52], based on sex at birth. For age, using definitions from the United Nations [53] and Statistics Canada [54], we define five age groups: Child (0-14), Youth (15-24), Adult (25-44), Middle-age Adult (45-64), and Senior \\((65+)\\). More discussion is in Appendix A." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.543, + 0.251, + 0.558 + ], + "angle": 0, + "content": "3.AI-Face Dataset" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.565, + 0.484, + 0.61 + ], + "angle": 0, + "content": "This section outlines the process of building our demographically annotated AI-Face dataset (see Fig. 2), along with its statistics and annotation quality assessment." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.622, + 0.247, + 0.636 + ], + "angle": 0, + "content": "3.1. Data Collection" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.644, + 0.483, + 0.901 + ], + "angle": 0, + "content": "We build our AI-Face dataset by collecting and integrating public real and AI-generated face images sourced from academic publications, GitHub repositories, and commercial tools. We strictly adhere to the license agreements of all datasets to ensure that they allow inclusion in our datasets and secondary use for training and testing. More details are in Appendix B.1. Specifically, the fake face images in our dataset originate from 4 Deepfake Video datasets (i.e., \\(\\mathrm{FF} + +\\) [2], DFDC [21], DFD [22], and Celeb-DFv2 [23]), generated by 10 GAN models (i.e., AttGAN [55], MMDGAN [56], StarGAN [55], StyleGANs [55, 57, 58], MSGGAN [56], ProGAN [59], STGAN [56], and VQGAN [60]), and 8 DM models (i.e., DALLE2 [61], IF [61], Midjourney [61], DCFace [62], Latent Diffusion [63], Palette [64], Stable Diffusion v1.5 [65], Stable Diffusion Inpainting [65]). This constitutes a total of 1,245,660 fake face images in our dataset. We include 6 real source datasets" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.092, + 0.908, + 0.228 + ], + "angle": 0, + "content": "(i.e., FFHQ [6], IMDB-WIKI [20], and real images from \\(\\mathrm{FF}++\\) [2], DFDC [21], DFD [22], and Celeb-DF-v2 [23]). All of them are usually used as a training set for generative models to generate fake face images. 
This constitutes a total of 400,885 real face images in our dataset. In general, our dataset contains 28 subsets and 37 generation methods (i.e., 5 in \\(\\mathrm{FF}++\\), 5 in DFD, 8 in DFDC, 1 in Celeb-DF-v2, 10 GANs, and 8 DMs). For all images, we use RetinaFace [66] for detecting and cropping faces." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.238, + 0.729, + 0.253 + ], + "angle": 0, + "content": "3.2. Annotation Generation" + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.261, + 0.795, + 0.274 + ], + "angle": 0, + "content": "3.2.1. Skin Tone Annotation Generation" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.28, + 0.907, + 0.522 + ], + "angle": 0, + "content": "Skin tone is typically measured using an intuitive approach [67, 68], without requiring a predictive model. Inspired by [67], we developed a method to estimate skin tone using the Monk Skin Tone (MST) Scale [48] (including 10-shade scales: Tone 1 to 10) by combining facial landmark detection with color analysis. Specifically, utilizing Mediapipe's FaceMesh [69] for precise facial landmark localization, we isolate skin regions while excluding non-skin areas such as the eyes and mouth. Based on the detected landmarks, we generate a mask to extract skin pixels from the facial area. These pixels are then subjected to K-Means clustering [70] (we use \\( \\mathrm{K} = 3 \\) in practice) to identify the dominant skin color within the region of interest. The top-1 largest color cluster is mapped to the closest tone in the MST Scale by calculating the Euclidean distance between the cluster centroid and the MST reference colors in RGB space." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.531, + 0.84, + 0.545 + ], + "angle": 0, + "content": "3.2.2. Gender and Age Annotation Generation" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.549, + 0.909, + 0.746 + ], + "angle": 0, + "content": "For generating gender and age annotations, the existing online software (e.g., Face++ [71]) and open-source tools (e.g., InsightFace [72]) can be used for the prediction. However, they fall short in our task due to two reasons: 1) They are mostly designed for face recognition and trained on datasets of real face images but lack generalization capability for annotating AI-generated face images. 2) Their use may introduce bias into our dataset, as they are typically designed and trained without careful consideration of bias and imbalance in the training set. See Appendix B.3 for our experimental study on these tools. To this end, we have to develop our specific annotators to predict gender and age annotations for each image in our dataset." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.75, + 0.909, + 0.901 + ], + "angle": 0, + "content": "Problem Definition. Given a training dataset \\(\\mathbb{D} = \\{(X_i, A_i)\\}_{i=1}^n\\) with size \\(n\\), where \\(X_i\\) represents the \\(i\\)-th face image and \\(A_i\\) signifies a demographic attribute associated with \\(X_i\\). Here, \\(A_i \\in \\mathcal{A}\\), where \\(\\mathcal{A}\\) represents user-defined groups (e.g., for gender, \\(\\mathcal{A} = \\{\\text{Female, Male}\\}\\). For age, \\(\\mathcal{A} = \\{\\text{Child, Youth, Adult, Middle-age Adult, Senior}\\}\\)). Our goal is to design a lightweight, generalizable annotator based on \\(\\mathbb{D}\\) that reduces bias while predicting facial demographic attributes for each image in our dataset. 
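As a concrete illustration of the skin-tone step described in Sec. 3.2.1 (FaceMesh skin mask, K-Means with K = 3, then nearest Monk Skin Tone), here is a minimal Python sketch. It assumes the skin pixels have already been extracted with the landmark-based mask; the `estimate_mst_tone` helper is hypothetical, and the MST anchor colors below are the commonly published hex values, not necessarily the exact references used by the authors.

```python
import numpy as np
from sklearn.cluster import KMeans

# Commonly published Monk Skin Tone hex values (Tones 1-10), converted to RGB.
MST_RGB = np.array([
    [246, 237, 228], [243, 231, 219], [247, 234, 208], [234, 218, 186],
    [215, 189, 150], [160, 126, 86], [130, 92, 67], [96, 65, 52],
    [58, 49, 42], [41, 36, 32],
], dtype=np.float32)

def estimate_mst_tone(skin_pixels: np.ndarray, k: int = 3) -> int:
    """skin_pixels: (N, 3) RGB values kept by the FaceMesh skin mask.
    Returns the 1-indexed MST tone closest to the dominant skin cluster."""
    km = KMeans(n_clusters=k, n_init=10).fit(skin_pixels.astype(np.float32))
    dominant = km.cluster_centers_[np.bincount(km.labels_).argmax()]  # top-1 cluster
    # Map the dominant centroid to the nearest MST reference color in RGB space.
    return int(np.linalg.norm(MST_RGB - dominant, axis=1).argmin()) + 1
```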
In practice, we use IMDB-WIKI [20] as the training dataset, which contains" + }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.945, + 0.516, + 0.957 + ], + "angle": 0, + "content": "3505" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.133, + 0.088, + 0.865, + 0.337 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.089, + 0.34, + 0.908, + 0.397 + ], + "angle": 0, + "content": "Figure 2. Generation pipeline of our Demographically Annotated AI-Face Dataset. First, we collect and filter face images from Deepfake Videos, GAN-generated faces, and DM-generated faces found in public datasets. Second, we perform skin tone, gender, and age annotation generation. Skin tone is estimated by combining facial landmark detection with color analysis to generate the corresponding annotation. For gender and age, we develop annotators trained on the IMDB-WIKI dataset [20], then use them to predict attributes for each image." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.401, + 0.482, + 0.476 + ], + "angle": 0, + "content": "images along with profile metadata sourced from IMDb and Wikipedia, ensuring that the demographic annotations are as accurate as possible. We trained two annotators with identical architecture and training procedures for gender and age annotations, respectively." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.48, + 0.483, + 0.587 + ], + "angle": 0, + "content": "Annotator Architecture. We build a lightweight annotator based on the CLIP [37] foundation model by leveraging its strong zero-shot and few-shot learning capabilities. Specifically, our annotator employs a frozen pre-trained CLIP ViT L/14 [73] as a feature extractor \\(\\mathbf{E}\\), followed by a trainable classifier parameterized by \\(\\theta\\), which contains a 3-layer Multi-layer Perceptron (MLP) \\(\\mathbf{M}\\) and a classification head \\(h\\)." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.59, + 0.483, + 0.696 + ], + "angle": 0, + "content": "Learning Objective. Aware that neural networks can perform poorly when the training dataset suffers from class imbalance [74] and that CLIP is not free from demographic bias [75-77], we introduce an imbalance loss and a fairness loss to address these challenges in the annotator training. Specifically, for image \\( X_{i} \\), its feature \\( f_{i} \\) is obtained through \\( f_{i} = \\mathbf{M}(\\mathbf{E}(X_{i})) \\). Next, the two losses are detailed below." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.696, + 0.484, + 0.756 + ], + "angle": 0, + "content": "Imbalance Loss: To mitigate the impact of imbalanced data, we use the Vector Scaling [78] loss, a re-weighting method for training models on imbalanced data with distribution shifts, which can be expressed as" + }, + { + "type": "equation", + "bbox": [ + 0.128, + 0.764, + 0.446, + 0.803 + ], + "angle": 0, + "content": "\\[\nL_{imb} = \\frac{1}{n} \\sum_{i=1}^{n} -u_{A_{i}} \\log \\frac{e^{\\zeta_{A_{i}} h(f_{i})_{A_{i}} + \\Delta_{A_{i}}}}{\\sum_{A \\in \\mathcal{A}} e^{\\zeta_{A} h(f_{i})_{A} + \\Delta_{A}}},\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.81, + 0.483, + 0.886 + ], + "angle": 0, + "content": "where \\( u_{A_i} \\) is the weighting factor for attribute \\( A_i \\). \\( h(f_i)_{A_i} \\) is the predicted logit on \\( A_i \\). \\( \\zeta_{A_i} \\) is the multiplicative logit scaling factor, calculated as the inverse of \\( A_i \\)'s frequency.
\\( \\Delta_{A_i} \\) is the additive logit scaling factor, calculated as the log of \\( A_i \\)'s prior probability. More details about them are in Appendix B.4." + }, + { + "type": "text", + "bbox": [ + 0.11, + 0.886, + 0.483, + 0.901 + ], + "angle": 0, + "content": "Fairness Loss: We introduce a fairness loss to minimize" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.401, + 0.908, + 0.462 + ], + "angle": 0, + "content": "the disparity between the distribution \\(\\mathcal{D}^f\\) of \\(f\\) and the conditional distribution \\(\\mathcal{D}^{f_A}\\) of \\(f\\) on attribute \\(A\\in \\mathcal{A}\\). Specifically, we follow [79, 80] to minimize the summation of the following Sinkhorn distance between these two distributions:" + }, + { + "type": "equation", + "bbox": [ + 0.514, + 0.467, + 0.907, + 0.499 + ], + "angle": 0, + "content": "\\[\nL_{fair} = \\sum_{A\\in \\mathcal{A}}\\inf_{\\gamma \\in \\Gamma (\\mathcal{D}^{f},\\mathcal{D}^{f_{A}})}\\bigl\\{\\mathbb{E}_{(p,q)\\sim \\gamma}[c(p,q)] + \\alpha H(\\gamma |\\mu \\otimes \\nu)\\bigr\\},\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.505, + 0.907, + 0.627 + ], + "angle": 0, + "content": "where \\(\\Gamma(\\mathcal{D}^f, \\mathcal{D}^{f_A})\\) is the set of joint distributions based on \\(\\mathcal{D}^f\\) and \\(\\mathcal{D}^{f_A}\\). Let \\(p\\) and \\(q\\) be points from \\(\\mathcal{D}^f\\) and \\(\\mathcal{D}^{f_A}\\), respectively. Then, \\(c(p, q)\\) represents the transport cost [80]. Let \\(\\mu\\) and \\(\\nu\\) be the reference measures from the set of measures on \\(f\\). Then, \\(H(\\gamma | \\mu \\otimes \\nu)\\) represents the relative entropy of \\(\\gamma\\) with respect to the product measure \\(\\mu \\otimes \\nu\\), and \\(\\alpha \\geq 0\\) is a regularization hyperparameter. In practice, we use the empirical form of \\(L_{fair}\\)." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.627, + 0.909, + 0.902 + ], + "angle": 0, + "content": "Total Loss: Therefore, the final learning objective becomes \\(\\mathcal{L}(\\theta) = L_{imb} + \\lambda L_{fair}\\), where \\(\\lambda\\) is a hyperparameter. Train. Traditional optimization methods like stochastic gradient descent can lead to poor model generalization due to sharp loss landscapes with multiple local and global minima. To address this, we use Sharpness-Aware Minimization (SAM) [81] to enhance our annotator's generalization by flattening the loss landscape. Specifically, flattening is attained by determining the optimal \\(\\epsilon^{*}\\) for perturbing the model parameters \\(\\theta\\) to maximize the loss, formulated as \\(\\epsilon^{*} = \\arg \\max_{\\| \\epsilon \\|_{2} \\leq \\beta} \\mathcal{L}(\\theta + \\epsilon) \\approx \\arg \\max_{\\| \\epsilon \\|_{2} \\leq \\beta} \\epsilon^{\\top} \\nabla_{\\theta} \\mathcal{L} = \\beta \\text{sign}(\\nabla_{\\theta} \\mathcal{L})\\), where \\(\\beta\\) controls the perturbation magnitude. The approximation follows from a first-order Taylor expansion, assuming \\(\\epsilon\\) is small; the final equality is obtained by solving a dual norm problem, where sign is the sign function and \\(\\nabla_{\\theta} \\mathcal{L}\\) is the gradient of \\(\\mathcal{L}\\) with respect to \\(\\theta\\). As a result, the model parameters are updated by solving \\(\\min_{\\theta} \\mathcal{L}(\\theta + \\epsilon^{*})\\)."
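To make the training objective above concrete, the following PyTorch sketch gives one plausible reading of the total loss (imbalance loss plus lambda times fairness loss): a Vector-Scaling-style cross-entropy with multiplicative (zeta) and additive (Delta) logit adjustments, plus a Sinkhorn distance between the pooled feature distribution and each attribute-conditional one. The zeta/Delta parameterization, hyperparameters, and function names here are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def vs_loss(logits, labels, class_freq, tau=1.0, gamma=0.15):
    # Vector Scaling: scale logits multiplicatively (zeta) and shift them additively (Delta).
    probs = class_freq / class_freq.sum()
    zeta = (probs.min() / probs) ** gamma   # inverse-frequency-style multiplicative factor
    delta = tau * torch.log(probs)          # additive factor = log of class probabilities
    return F.cross_entropy(zeta * logits + delta, labels)

def sinkhorn_distance(p, q, eps=0.05, iters=50):
    # Entropy-regularized optimal transport between two empirical feature sets.
    C = torch.cdist(p, q) ** 2              # transport cost c(p, q)
    K = torch.exp(-C / eps)
    mu = torch.full((len(p),), 1.0 / len(p))
    nu = torch.full((len(q),), 1.0 / len(q))
    b = nu.clone()
    for _ in range(iters):                  # Sinkhorn fixed-point iterations
        a = mu / (K @ b + 1e-12)
        b = nu / (K.T @ a + 1e-12)
    plan = a[:, None] * K * b[None, :]      # entropic transport plan
    return (plan * C).sum()

def total_loss(logits, feats, labels, groups, class_freq, lam=0.5):
    # L(theta) = L_imb + lambda * L_fair, with L_fair summed over attribute groups A.
    l_fair = sum(sinkhorn_distance(feats, feats[groups == g]) for g in torch.unique(groups))
    return vs_loss(logits, labels, class_freq) + lam * l_fair
```

A SAM-style update would simply wrap this loss: compute the perturbation from the gradient, evaluate the loss at the perturbed parameters, and backpropagate that value.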
+ }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.945, + 0.516, + 0.957 + ], + "angle": 0, + "content": "3506" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.091, + 0.087, + 0.318, + 0.213 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.32, + 0.098, + 0.55, + 0.213 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.549, + 0.099, + 0.707, + 0.212 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.708, + 0.099, + 0.906, + 0.213 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.089, + 0.217, + 0.908, + 0.259 + ], + "angle": 0, + "content": "Figure 3. Distribution of face images of the AI-Face dataset. The figure shows the (a) subset distribution and the demographic distribution for (b) skin tone, (c) gender, and (d) age. The outer rings in (b), (c), and (d) represent the proportion of groups within each attribute category, while the inner rings indicate the distribution of fake \\((F)\\) and real \\((R)\\) images within those groups." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.264, + 0.486, + 0.309 + ], + "angle": 0, + "content": "Inference. We use the trained annotators to predict demographic labels for each image in the AI-Face dataset, except for those from IMDB-WIKI, which already contain true labels." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.319, + 0.26, + 0.333 + ], + "angle": 0, + "content": "3.3. Dataset Statistics" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.338, + 0.486, + 0.49 + ], + "angle": 0, + "content": "Fig. 3 illustrates the subset distribution and demographic attributes of the AI-Face dataset. The dataset contains approximately three times more generated images than real images, with diffusion model-generated images constituting the majority. In terms of demographic attributes, the majority skin tones are Tone 5 (31.14%) and Tone 6 (35.16%). The lightest skin tones (Tones 1-3) are underrepresented, comprising only 0.97% of the dataset. The dataset is relatively balanced across gender. Adult (25-44) is the predominant age group (49.67%)." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.499, + 0.37, + 0.513 + ], + "angle": 0, + "content": "3.4. Annotation Quality Assessment" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.517, + 0.486, + 0.79 + ], + "angle": 0, + "content": "To assess the quality of demographic annotations in our AI-Face dataset, we conducted a user study. Three participants labeled the demographic attributes for the given images (the details of the labeling activities are in Appendix B.5), with the final ground truth determined by majority vote. We then compared our annotations with those in the A-FF++, A-DFDC, A-Celeb-DF-V2, and A-DFD datasets. Specifically, we performed two assessments: 1) Strategic comparison: We selected 1,000 images from A-FF++ and A-DFDC that have different annotations from AI-Face. These images likely represent challenging cases. 2) Random comparison: We randomly sampled 1,000 images from A-Celeb-DF-V2 and A-DFD. Due to the limited age classes in these datasets, only gender was evaluated. The results, presented in Table 3, demonstrate the high correctness of the AI-Face annotations and their superior quality compared to the annotations of other datasets. For example, our annotation quality (ACC) surpasses that of A-FF++ by \\(78.714\\%\\) on gender and \\(48.000\\%\\) on age."
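The quality numbers reported in Table 3 reduce to scoring a label set against a majority-vote ground truth. A minimal sketch of that computation, using toy data and hypothetical variable names:

```python
from collections import Counter
from sklearn.metrics import accuracy_score, precision_score, recall_score

def majority_vote(votes_per_image):
    """votes_per_image: one tuple of rater labels per image, e.g. ('M', 'M', 'F')."""
    return [Counter(votes).most_common(1)[0][0] for votes in votes_per_image]

human_votes = [("M", "M", "F"), ("F", "F", "F"), ("M", "F", "M")]  # 3 raters, 3 images
dataset_labels = ["M", "F", "F"]                                   # labels being audited
truth = majority_vote(human_votes)
print(accuracy_score(truth, dataset_labels),
      precision_score(truth, dataset_labels, average="macro", zero_division=0),
      recall_score(truth, dataset_labels, average="macro", zero_division=0))
```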
+ }, + { + "type": "title", + "bbox": [ + 0.09, + 0.8, + 0.36, + 0.817 + ], + "angle": 0, + "content": "4. Fairness Benchmark Settings" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.821, + 0.485, + 0.867 + ], + "angle": 0, + "content": "This section demonstrates the fairness benchmark settings for detection methods and evaluation metrics on AI-Face (80%/20%: Train/Test). More settings are in Appendix C.1." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.871, + 0.484, + 0.901 + ], + "angle": 0, + "content": "Detection Methods. Our benchmark has implemented 12 detectors. The methodologies cover a spectrum that" + }, + { + "type": "table", + "bbox": [ + 0.517, + 0.261, + 0.912, + 0.361 + ], + "angle": 0, + "content": "
Evaluation TypeDatasetGenderAge
ACCPrecisionRecallACCPrecisionRecall
StrategicA-FF++8.14317.5835.96637.70039.45945.381
AI-Face86.85774.40477.36785.70074.02463.751
A-DFDC21.60028.60423.08233.40038.01140.165
AI-Face91.70092.12983.44877.00076.18462.646
RandomA-Celeb-DF-V289.62890.62690.494-
AI-Face91.20691.47491.767-
A-DFD70.90071.68674.435-
AI-Face92.30091.06091.727-
" + }, + { + "type": "table_caption", + "bbox": [ + 0.512, + 0.365, + 0.908, + 0.394 + ], + "angle": 0, + "content": "Table 3. Annotation quality assessment results \\((\\%)\\) for \\(A - FF + +\\) A- DFDC, A-Celeb-DF-V2, A-DFD, and our AI-Face. ACC: Accuracy." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.399, + 0.909, + 0.611 + ], + "angle": 0, + "content": "is specifically tailored to detect AI-generated faces from Deepfake Videos, GANs, and DMs. They can be classified into four types: Naive detectors: refer to backbone models that can be directly utilized as the detector for binary classification, including CNN-based (i.e., Xception [82], EfficientB4 [83]) and transformer-based (i.e., ViT-B/16 [84]). Frequency-based: explore the frequency domain for forgery detection (i.e., F3Net [85], SPSL [86], SRM [87]). Spatial-based: focus on mining spatial characteristics (e.g., texture) within images for detection (i.e., UCF [26], UnivFD [88], CORE [89]). Fairness-enhanced: focus on improving fairness in AI-generated face detection by designing specific algorithms (i.e., DAW-FDD [29], DAGFDD [29], PG-FDD [30])." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.614, + 0.911, + 0.901 + ], + "angle": 0, + "content": "Evaluation Metrics. To provide a comprehensive benchmarking, we consider 5 fairness metrics commonly used in fairness community [90-94] and 5 widely used utility metrics [95-98]. For fairness metrics, we consider Demographic Parity \\((F_{DP})\\) [90, 91], Max Equalized Odds \\((F_{MEO})\\) [93], Equal Odds \\((F_{EO})\\) [92], and Overall Accuracy Equality \\((F_{OAE})\\) [93] for evaluating group (e.g., gender) and intersectional (e.g., individuals of a specific gender and simultaneously a specific skin tone) fairness. In experiments, the intersectional groups are Female-Light (F-L), Female-Medium (F-M), Female-Dark (Dark), Male-Light (M-L), Male-Medium (M-M), and Male-Dark (M-D), where we group 10 categories of skin tones into Light (Tone 1-3), Medium (Tone 4-6), and Dark (Tone 7-10) for simplicity according to [99]. We also use individual fairness \\((F_{IND})\\) [94, 100] (i.e., similar individuals should have similar predicted outcomes) for estimation. For utility metrics, we employ the Area Under the ROC Curve (AUC), Accuracy (ACC), Average Precision (AP), Equal Error Rate (EER)," + }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.945, + 0.516, + 0.957 + ], + "angle": 0, + "content": "3507" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.095, + 0.082, + 0.907, + 0.351 + ], + "angle": 0, + "content": "
MeasureAttributeMetricModel Type
NaiveFrequencySpatialFairness-enhanced
Xception [82]EfficientB4 [83]ViT-B/16 [84]F3Net [85]SPSL [86]SRM [87]UCF [26]UnivFD [88]CORE [89]DAW-FDD [29]DAG-FDD [29]PG-FDD [30]
Fairness(%)↓Skin ToneFMEO8.8368.3006.26419.9388.05510.00217.3252.57710.77914.1186.5516.465
FDP9.7516.1847.72812.8769.37910.89712.5818.55610.31710.7068.6179.746
FOAE1.2714.3772.1682.8181.1350.9151.8832.7481.3321.6671.3880.882
FEO12.13211.0628.81323.7089.78914.23921.925.53613.06916.6047.3839.115
GenderFMEO3.9755.3855.1044.7174.4116.2715.0744.5035.7955.5105.9103.190
FDP1.6911.7251.3441.8641.8271.9571.7361.1902.1542.0152.1511.252
FOAE0.9751.4871.8031.1291.0371.7721.4511.6221.3891.3251.4201.071
FEO4.1435.8636.0314.8704.5346.785.5105.4085.9315.6966.0663.702
AgeFMEO27.8836.79614.93738.80127.61424.84347.5005.43633.88245.46615.22914.804
FDP10.90511.84911.83914.90611.23211.57017.04915.24912.56414.1069.63310.467
FOAE7.2652.8566.83810.1167.2706.52411.6523.7938.76011.8785.5335.009
FEO42.21610.30030.79555.03240.94338.52867.54514.14848.72964.38430.18229.585
IntersectionFMEO10.50517.5869.38421.36910.37915.14220.1346.11915.3416.56512.1789.578
FDP14.5118.60711.53517.17513.25915.18617.0314.02614.30114.08811.70514.697
FOAE2.5368.4614.9284.8702.4643.9983.5366.2872.7753.5474.0353.062
FEO24.31525.11427.44347.78321.67930.11243.37620.25528.8433.12226.29518.348
IndividualFIND10.33825.7420.0221.8722.5187.6210.7673.5230.0413.7720.9010.780
Utility(%)-AUC↑98.58398.61198.6998.71498.74797.93698.08298.19298.57997.81198.77199.172
ACC↑96.30894.20394.47295.71996.34695.09295.15193.65196.22495.42695.72296.174
AP↑99.35099.54299.57199.45399.35699.17299.27399.40099.36099.01599.49899.694
EER↓5.1496.6896.3725.2564.3716.4837.7087.6335.1457.0635.4994.961
FPR↓12.96120.06616.42614.67913.66115.74613.64618.55013.41016.67014.84410.971
Training Time / Epoch1h15min2h25min2h40min1h18min1h20min3h10min5h05min4h1h16min1h25min1h17min7h20min
" + }, + { + "type": "table_caption", + "bbox": [ + 0.119, + 0.356, + 0.877, + 0.37 + ], + "angle": 0, + "content": "Table 4. Overall performance comparison of difference methods on the AI-Face dataset. The best performance is shown in bold." + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.375, + 0.296, + 0.39 + ], + "angle": 0, + "content": "and False Positive Rate (FPR)." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.404, + 0.29, + 0.422 + ], + "angle": 0, + "content": "5. Results and Analysis" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.426, + 0.486, + 0.473 + ], + "angle": 0, + "content": "In this section, we estimate the existing AI-generated image detectors' fairness performance alongside their utility on our AI-Face Dataset. More results can be found in Appendix D." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.482, + 0.357, + 0.499 + ], + "angle": 0, + "content": "5.1. General Fairness Comparison" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.504, + 0.486, + 0.791 + ], + "angle": 0, + "content": "Overall Performance. Table 4 reports the overall performance on our AI-Face test set. Our observations are: 1) Fairness-Enhanced Models (specifically PG-FDD [30]) are the most effective in achieving both high fairness and utility, underscoring the effectiveness of specialized fairness-enhancement techniques in mitigating demographic biases. 2) UnivFD [88], based on the CLIP backbone [73], also achieves commendable fairness, suggesting that foundation models equipped with fairness-focused enhancements could be a promising direction for developing fairer detectors. 3) Naive detectors, such as EfficientB4 [83], trained on large, diverse datasets (e.g., our AI-Face) can achieve competitive fairness and utility, highlighting the potential of fairness improvements by choosing specific architecture. 4) 10 out of 12 detectors have an AUC higher than \\(98\\%\\), demonstrating our AI-Face dataset is significant for training AI-face detectors in resulting high utility. 5) PG-FDD demonstrates superior performance but has a long training time, which can be explored and addressed in the future." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.796, + 0.485, + 0.902 + ], + "angle": 0, + "content": "Performance on Different Subsets. 1) Fig. 4 demonstrates the intersectional \\( F_{EO} \\) and AUC performance of detectors on each test subset. We observe that the fairness performance varies a lot among different generative methods for every detector. The largest bias on most detectors comes from detecting face images generated by diffusion models. 2) DAG-FDD [29] and SRM [87] demonstrate the most" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.375, + 0.909, + 0.452 + ], + "angle": 0, + "content": "consistent fairness across subsets, indicating a robust handling of bias introduced by different generative methods. 3) Moreover, the stable utility demonstrates our dataset's expansiveness and diversity, enabling effective training to detect AI-generated faces from various generative techniques." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.456, + 0.909, + 0.609 + ], + "angle": 0, + "content": "Performance on Different Subgroups. We conduct an analysis of all detectors on intersectional subgroups. 1) As shown in Fig. 5, facial images with lighter skin tone are more often misclassified as fake, likely due to the underrepresentation of lighter tones (Tone 1-3) in our dataset (see Fig. 3 (b)). This suggests detectors tend to show higher error rates for minority groups. 
2) Although gender representation is relatively balanced (see Fig. 3 (c)) in our dataset, the detectors consistently exhibit higher false positive rates for female subgroups, indicating a persistent gender-based bias." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.62, + 0.792, + 0.636 + ], + "angle": 0, + "content": "5.2. Fairness Reliability Assessment" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.639, + 0.909, + 0.867 + ], + "angle": 0, + "content": "Fairness Robustness Evaluation. We apply 6 post-processing methods: Random Crop (RC) [101], Rotation (RT) [34], Brightness Contrast (BC) [34], Hue Saturation Value (HSV) [34], Gaussian Blur (GB) [34], and JPEG Compression (JC) [102] to the test images. Fig. 6 shows each detector's intersectional \( F_{EO} \) and AUC performance changes after post-processing. Our observations are: 1) These operations tend to wash out forensic traces, so detectors show evident performance degradation. 2) Post-processing does not always increase detector bias (e.g., UCF, UnivFD, CORE, and DAW-FDD show better fairness after rotation), even though it hurts utility. 3) Fairness-enhanced detectors struggle to maintain fairness when images undergo post-processing. 4) Spatial detectors have better fairness robustness compared with other model types." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.871, + 0.909, + 0.903 + ], + "angle": 0, + "content": "Fairness Generalization Evaluation. To evaluate detectors' fairness generalization capability, we test them on Casual" + }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.945, + 0.516, + 0.957 + ], + "angle": 0, + "content": "3508" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.098, + 0.09, + 0.23, + 0.282 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.232, + 0.091, + 0.363, + 0.282 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.365, + 0.091, + 0.498, + 0.282 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.498, + 0.091, + 0.63, + 0.282 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.63, + 0.091, + 0.762, + 0.282 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.765, + 0.091, + 0.898, + 0.282 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.283, + 0.907, + 0.31 + ], + "angle": 0, + "content": "Figure 4. Visualization of the intersectional \( F_{EO} \) (\%) and AUC (\%) of detectors on different subsets. A smaller \( F_{EO} \) polygon area represents better fairness; a larger AUC area means better utility."
+ }, + { + "type": "image", + "bbox": [ + 0.092, + 0.31, + 0.23, + 0.395 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.231, + 0.31, + 0.368, + 0.395 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.368, + 0.31, + 0.5, + 0.395 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.5, + 0.31, + 0.637, + 0.395 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.637, + 0.31, + 0.772, + 0.395 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.772, + 0.31, + 0.905, + 0.395 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.092, + 0.395, + 0.23, + 0.48 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.231, + 0.395, + 0.369, + 0.48 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.368, + 0.395, + 0.5, + 0.48 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.5, + 0.395, + 0.637, + 0.48 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.637, + 0.395, + 0.772, + 0.48 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.772, + 0.395, + 0.905, + 0.48 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.13, + 0.484, + 0.865, + 0.499 + ], + "angle": 0, + "content": "Figure 5. FPR (%) of each intersectional subgroup. The dashed line represents the lowest FPR, on the Female-Light (F-L) subgroup." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.503, + 0.486, + 0.716 + ], + "angle": 0, + "content": "Conversations v2 (CCv2) [103], DF-Platter [16], and GenData [17], none of which are part of AI-Face. Notably, CCv2 is a dataset that contains only real face images with demographic annotations (e.g., gender) self-reported by the participants. Results on the gender attribute in Table 5 show that: 1) Even well-designed detectors that focus on improving utility or fairness generalization (e.g., UCF, PG-FDD) struggle to achieve consistently superior performance across different dataset domains. This highlights the remaining fairness generalization issue. 2) DAW-FDD and PG-FDD are two fairness-enhanced detectors that require accessing demographic information during training, yet their fairness does not drop drastically when evaluated on CCv2. This reflects the high accuracy of the annotations in our AI-Face." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.72, + 0.486, + 0.902 + ], + "angle": 0, + "content": "Effect of Training Set Size. We randomly sample \(20\%\), \(40\%\), \(60\%\), and \(80\%\) of each training subset from AI-Face to assess the impact of training size on performance. Key observations from Fig. 7 (Left): 1) Among all detectors, UnivFD demonstrates the most stable fairness and utility performance as the training dataset size changes, likely due to its fixed CLIP backbone. 2) Increasing the training dataset size generally improves model utility, but this pattern does not extend to fairness metrics. In fact, certain detectors such as F3Net and UCF exhibit worsening fairness as the training size reaches its maximum. This suggests that more training data does not necessarily lead to fairer detectors." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.503, + 0.909, + 0.67 + ], + "angle": 0, + "content": "Effect of the Ratio of Real and Fake.
To examine how training real-to-fake sample ratios affect detector performance, we set the ratios at 1:10, 1:1, and 10:1 while keeping the total sample count constant. Experimental results in Fig. 7 (Right) show: 1) Most detectors' fairness improves as real sample representation increases. This is likely because increasing real samples and reducing fake samples helps detectors avoid overfitting to artifacts specific to fake samples. This makes it easier for detectors to distinguish real from fake, even for underrepresented groups, thereby enhancing fairness. 2) Most detectors achieve the highest AUC with balanced data." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.687, + 0.633, + 0.701 + ], + "angle": 0, + "content": "5.3. Discussion" + }, + { + "type": "text", + "bbox": [ + 0.511, + 0.705, + 0.909, + 0.903 + ], + "angle": 0, + "content": "According to the above experiments, we summarize the unsolved fairness problems in recent detectors: 1) Detectors' fairness is unstable when detecting face images generated by different generative methods, indicating a future direction for enhancing fairness stability since new generative models continue to emerge. 2) Even though fairness-enhanced detectors achieve small overall fairness metric values, they still show biased detection towards minority groups. Future studies should be more cautious when designing fair detectors to ensure balanced performance across all demographic groups. 3) There is currently no reliable detector, as all detectors experience severe performance degradation under image post-processing and cross-domain evaluation. Future" + }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.945, + 0.516, + 0.957 + ], + "angle": 0, + "content": "3509" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.095, + 0.088, + 0.295, + 0.189 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.296, + 0.089, + 0.498, + 0.189 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.5, + 0.089, + 0.702, + 0.189 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.703, + 0.089, + 0.905, + 0.189 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.104, + 0.191, + 0.89, + 0.205 + ], + "angle": 0, + "content": "Figure 6. Performance ratio after vs. before post-processing. Points closer to 1.0 (i.e., no post-processing) indicate better robustness." + }, + { + "type": "table", + "bbox": [ + 0.102, + 0.208, + 0.893, + 0.381 + ], + "angle": 0, + "content": "
<table><thead><tr><th rowspan="3">Model Type</th><th rowspan="3">Detector</th><th colspan="8">Dataset</th></tr>
<tr><th colspan="2">CCv2 [103]</th><th colspan="3">DF-Platter [16]</th><th colspan="3">GenData [17]</th></tr>
<tr><th>Fairness(%)↓ \( F_{OAE} \)</th><th>Utility(%)↑ ACC</th><th>Fairness(%)↓ \( F_{OAE} \)</th><th>Fairness(%)↓ \( F_{EO} \)</th><th>Utility(%)↑ AUC</th><th>Fairness(%)↓ \( F_{OAE} \)</th><th>Fairness(%)↓ \( F_{EO} \)</th><th>Utility(%)↑ AUC</th></tr></thead>
<tbody><tr><td rowspan="3">Naive</td><td>Xception</td><td>1.006(+0.031)</td><td>86.465(-9.843)</td><td>6.836(+5.861)</td><td>9.789(+5.646)</td><td>81.273(-17.310)</td><td>2.539(+1.564)</td><td>13.487(+9.344)</td><td>96.971(-1.612)</td></tr>
<tr><td>EfficientB4</td><td>4.077(+0.259)</td><td>82.980(-11.223)</td><td>8.786(+7.299)</td><td>12.370(+6.507)</td><td>67.694(-30.917)</td><td>3.304(+1.817)</td><td>1.995(-3.686)</td><td>93.213(-5.398)</td></tr>
<tr><td>ViT-B/16</td><td>2.167(+0.364)</td><td>81.489(-12.983)</td><td>0.015(-1.788)</td><td>12.373(+6.342)</td><td>76.050(-22.640)</td><td>3.164(+1.361)</td><td>9.610(+3.579)</td><td>88.253(-10.437)</td></tr>
<tr><td rowspan="3">Frequency</td><td>F3Net</td><td>5.743(+4.614)</td><td>87.867(-7.852)</td><td>3.521(+2.392)</td><td>6.445(+1.575)</td><td>85.112(-13.602)</td><td>1.188(+0.059)</td><td>16.306(+11.436)</td><td>91.603(-7.111)</td></tr>
<tr><td>SPSL</td><td>0.601(-0.436)</td><td>80.006(-16.340)</td><td>5.109(+4.072)</td><td>7.842(+3.308)</td><td>82.175(-16.572)</td><td>1.385(+0.348)</td><td>9.261(+4.272)</td><td>98.838(+0.091)</td></tr>
<tr><td>SRM</td><td>7.000(+5.228)</td><td>79.768(-15.324)</td><td>3.823(+2.051)</td><td>6.567(-0.213)</td><td>66.401(-31.535)</td><td>3.281(+1.509)</td><td>7.907(+1.127)</td><td>90.049(-7.887)</td></tr>
<tr><td rowspan="3">Spatial</td><td>UCF</td><td>2.169(+0.718)</td><td>93.009(-2.142)</td><td>8.687(+7.236)</td><td>17.068(+11.558)</td><td>80.821(-17.261)</td><td>3.513(+2.062)</td><td>10.529(+5.019)</td><td>87.778(-10.304)</td></tr>
<tr><td>UnivFD</td><td>7.625(+6.003)</td><td>67.983(-25.668)</td><td>4.540(+2.918)</td><td>9.950(+4.542)</td><td>76.443(-21.749)</td><td>1.645(+0.023)</td><td>3.848(-1.560)</td><td>94.418(-3.774)</td></tr>
<tr><td>CORE</td><td>4.410(+3.021)</td><td>83.328(-12.896)</td><td>7.741(+6.352)</td><td>17.348(+11.417)</td><td>77.226(-21.353)</td><td>3.759(+2.370)</td><td>23.289(+17.358)</td><td>98.408(-0.171)</td></tr>
<tr><td rowspan="3">Fairness-enhanced</td><td>DAW-FDD</td><td>4.726(+3.401)</td><td>84.685(-10.741)</td><td>5.536(+4.211)</td><td>13.667(+7.791)</td><td>81.807(-16.004)</td><td>1.443(+0.118)</td><td>10.228(+4.532)</td><td>97.854(+0.043)</td></tr>
<tr><td>DAG-FDD</td><td>2.364(+0.944)</td><td>83.918(-11.804)</td><td>3.064(+1.644)</td><td>22.203(+16.137)</td><td>75.206(-23.565)</td><td>0.714(-0.706)</td><td>10.332(+4.266)</td><td>92.108(-6.663)</td></tr>
<tr><td>PG-FDD</td><td>1.513(+0.442)</td><td>92.852(-3.322)</td><td>4.565(+3.494)</td><td>9.717(+6.015)</td><td>85.271(-13.901)</td><td>3.063(+1.992)</td><td>9.479(+5.777)</td><td>93.329(-5.843)</td></tr></tbody></table>
" + }, + { + "type": "table_caption", + "bbox": [ + 0.09, + 0.383, + 0.907, + 0.411 + ], + "angle": 0, + "content": "Table 5. Fairness generalization results based on the gender attribute. The smallest performance changes (in parentheses) and the best performance are in red and in bold, respectively. Only \\( F_{OAE} \\) fairness metric and ACC metric are used in CCv2 due to all samples are real." + }, + { + "type": "image", + "bbox": [ + 0.093, + 0.416, + 0.907, + 0.539 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.095, + 0.541, + 0.9, + 0.556 + ], + "angle": 0, + "content": "Figure 7. Impact of the training set size (Left) and the ratio of real and fake (Right) on detectors' intersectional \\( F_{EO}(\\%) \\) and AUC (\\%)." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.561, + 0.486, + 0.637 + ], + "angle": 0, + "content": "studies should aim to develop a unified framework that ensures fairness, robustness, and generalization, as these three characteristics are essential for creating a reliable detector. Moreover, integrating foundation models (e.g., CLIP) into detector design may help mitigate bias." + }, + { + "type": "title", + "bbox": [ + 0.09, + 0.652, + 0.21, + 0.668 + ], + "angle": 0, + "content": "6. Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.674, + 0.486, + 0.901 + ], + "angle": 0, + "content": "This work presents the first demographically annotated million-scale AI-Face dataset, serving as a pivotal foundation for addressing the urgent need for developing fair AI face detectors. Based on this dataset, we conduct the first comprehensive fairness benchmark, shedding light on the fairness performance and challenges of current representative AI face detectors. Our findings can inspire and guide researchers in refining current models and exploring new methods to mitigate bias. Limitation and Future Work: One limitation is that our dataset's annotations are algorithmically generated, so they may lack \\(100\\%\\) accuracy. This challenge is difficult to resolve, as demographic attributes for most AI-generated faces are often too ambiguous to predict and do not map to real-world individuals. We plan to enhance annotation quality through human labeling in the future. We" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.561, + 0.907, + 0.667 + ], + "angle": 0, + "content": "also plan to extend our fairness benchmark to evaluate large language models like LLaMA2 [104] and GPT4 [105] for detecting AI faces. Social Impact: Malicious users could misuse AI-generated face images from our dataset to create fake social media profiles and spread misinformation. To mitigate this risk, only users who submit a signed end-user license agreement will be granted access to our dataset." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.671, + 0.663, + 0.687 + ], + "angle": 0, + "content": "Ethics Statement" + }, + { + "type": "text", + "bbox": [ + 0.511, + 0.689, + 0.909, + 0.901 + ], + "angle": 0, + "content": "Our dataset collection and annotation generation are approved by Purdue's Institutional Review Board. The dataset is only for research purposes. All data included in this work are sourced from publicly available datasets, and we strictly comply with each dataset's license agreement to ensure lawful inclusion and permissible secondary use for training and testing. All collected data and their associated licenses are mentioned in the Datasheet of AI-Face in Appendix E. 
Our annotation processes prioritize ethical considerations: 1) \(76\%\) of the images we annotated are generated facial images, ensuring no potential for harm to any individual. 2) For real images, we only provide annotations for content either licensed by the original copyright holders or explicitly stated as freely shareable for research purposes." + }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.945, + 0.516, + 0.957 + ], + "angle": 0, + "content": "3510" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.092, + 0.091, + 0.251, + 0.108 + ], + "angle": 0, + "content": "Acknowledgments" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.109, + 0.486, + 0.214 + ], + "angle": 0, + "content": "This work is supported by the U.S. National Science Foundation (NSF) under grant IIS-2434967, the National Artificial Intelligence Research Resource (NAIRR) Pilot, and TACC Lonestar6. The views, opinions, and/or findings expressed are those of the authors and should not be interpreted as representing the official views or policies of NSF and the NAIRR Pilot." + }, + { + "type": "title", + "bbox": [ + 0.093, + 0.227, + 0.188, + 0.243 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.108, + 0.252, + 0.486, + 0.308 + ], + "angle": 0, + "content": "[1] L. Lin, N. Gupta, Y. Zhang, H. Ren, C.-H. Liu, F. Ding, X. Wang, X. Li, L. Verdoliva, and S. Hu, \"Detecting multimedia generated by large ai models: A survey,\" arXiv preprint arXiv:2402.00045, 2024. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.109, + 0.31, + 0.485, + 0.377 + ], + "angle": 0, + "content": "[2] A. Rossler, D. Cozzolino, L. Verdoliva, C. Riess, J. Thies, and M. Nießner, “Faceforensics++: Learning to detect manipulated facial images,” in Proceedings of the IEEE/CVF international conference on computer vision, pp. 1-11, 2019. 1, 2, 3, 18, 19" + }, + { + "type": "ref_text", + "bbox": [ + 0.109, + 0.379, + 0.484, + 0.42 + ], + "angle": 0, + "content": "[3] “Deepfakes github.” https://github.com/deepfakes/faceswap. Accessed: 2024-04-17. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.109, + 0.422, + 0.484, + 0.449 + ], + "angle": 0, + "content": "[4] “Fakeapp.” https://www.fakeapp.com/. Accessed: 2024-04-17. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.109, + 0.451, + 0.484, + 0.505 + ], + "angle": 0, + "content": "[5] A. Brock, J. Donahue, and K. Simonyan, \"Large scale gan training for high fidelity natural image synthesis,\" in 7th International Conference on Learning Representations, ICLR 2019, 2019. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.109, + 0.508, + 0.484, + 0.562 + ], + "angle": 0, + "content": "[6] T. Karras, S. Laine, and T. Aila, “A style-based generator architecture for generative adversarial networks,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 4401–4410, 2019. 2, 3, 18" + }, + { + "type": "ref_text", + "bbox": [ + 0.109, + 0.564, + 0.484, + 0.631 + ], + "angle": 0, + "content": "[7] T. Karras, S. Laine, M. Aittala, J. Hellsten, J. Lehtinen, and T. Aila, \"Analyzing and improving the image quality of stylegan,\" in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 8110-8119, 2020." + }, + { + "type": "ref_text", + "bbox": [ + 0.109, + 0.634, + 0.484, + 0.689 + ], + "angle": 0, + "content": "[8] T. Karras, M. Aittala, S. Laine, E. Härkönen, J. Hellsten, J. Lehtinen, and T.
Aila, \"Alias-free generative adversarial networks,\" Advances in neural information processing systems, vol. 34, pp. 852-863, 2021. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.109, + 0.691, + 0.484, + 0.758 + ], + "angle": 0, + "content": "[9] R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer, \"High-resolution image synthesis with latent diffusion models,\" in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 10684-10695, 2022. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.761, + 0.485, + 0.815 + ], + "angle": 0, + "content": "[10] D. J. Tojin T. Eapen, “How generative ai can augment human creativity.” https://hbr.org/2023/07/how-generative-ai-can-augment-human-creativity, 2023. Accessed: 2024-04-21. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.817, + 0.484, + 0.871 + ], + "angle": 0, + "content": "[11] B. News, \"Trump supporters target black voters with faked ai images.\" https://www.bbc.com/news/world-us-canada-68440150, 2024. Accessed: 2023-05-09. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.873, + 0.484, + 0.901 + ], + "angle": 0, + "content": "[12] H. S. Sætra, “Generative ai: Here to stay, but for good?,” Technology in Society, vol. 75, p. 102372, 2023. 1" + }, + { + "type": "list", + "bbox": [ + 0.102, + 0.252, + 0.486, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.523, + 0.093, + 0.907, + 0.133 + ], + "angle": 0, + "content": "[13] M. Westerlund, “The emergence of deepfake technology: A review,” Technology innovation management review, vol. 9, no. 11, 2019. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.525, + 0.136, + 0.907, + 0.205 + ], + "angle": 0, + "content": "[14] L. Jiang, R. Li, W. Wu, C. Qian, and C. C. Loy, \"Deeperforensics-1.0: A large-scale dataset for real-world face forgery detection,\" in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 2889-2898, 2020. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.525, + 0.207, + 0.907, + 0.261 + ], + "angle": 0, + "content": "[15] K. Narayan, H. Agarwal, K. Thakral, S. Mittal, M. Vatsa, and R. Singh, \"Deeply: On deepfake phylogeny,\" in 2022 IEEE International Joint Conference on Biometrics (IJCB), pp. 1-10, IEEE, 2022. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.525, + 0.263, + 0.906, + 0.33 + ], + "angle": 0, + "content": "[16] K. Narayan, H. Agarwal, K. Thakral, S. Mittal, M. Vatsa, and R. Singh, \"Df-platter: multi-face heterogeneous deepfake dataset,\" in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9739-9748, 2023. 2, 7, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.525, + 0.334, + 0.907, + 0.375 + ], + "angle": 0, + "content": "[17] C. Teo, M. Abdollahzadeh, and N.-M. M. Cheung, “On measuring fairness in generative models,” Advances in Neural Information Processing Systems, vol. 36, 2023. 1, 2, 7, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.525, + 0.378, + 0.907, + 0.418 + ], + "angle": 0, + "content": "[18] Z. Liu, P. Luo, X. Wang, and X. Tang, \"Deep learning face attributes in the wild,\" in Proceedings of International Conference on Computer Vision (ICCV), December 2015. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.525, + 0.42, + 0.907, + 0.473 + ], + "angle": 0, + "content": "[19] Y. Xu, P. Terhöst, M. Pedersen, and K. Raja, \"Analyzing fairness in deepfake detection with massively annotated databases,\" IEEE Transactions on Technology and Society, 2024. 
1, 2, 14" + }, + { + "type": "ref_text", + "bbox": [ + 0.525, + 0.476, + 0.907, + 0.531 + ], + "angle": 0, + "content": "[20] R. Rothe, R. Timofte, and L. Van Gool, “Dex: Deep expectation of apparent age from a single image,” in Proceedings of the IEEE international conference on computer vision workshops, pp. 10–15, 2015. 2, 3, 4, 18" + }, + { + "type": "ref_text", + "bbox": [ + 0.525, + 0.533, + 0.907, + 0.587 + ], + "angle": 0, + "content": "[21] B. Dolhansky, J. Bitton, B. Pflaum, J. Lu, R. Howes, M. Wang, and C. C. Ferrer, \"The deepfake detection challenge (dfdc) dataset,\" arXiv preprint arXiv:2006.07397, 2020. 2, 3, 18, 19" + }, + { + "type": "ref_text", + "bbox": [ + 0.525, + 0.59, + 0.907, + 0.617 + ], + "angle": 0, + "content": "[22] G. Research, \"Contributing data to deepfake detection research,\" 2019. Accessed: 2024-04-12. 2, 3, 18, 19" + }, + { + "type": "ref_text", + "bbox": [ + 0.525, + 0.619, + 0.907, + 0.686 + ], + "angle": 0, + "content": "[23] Y. Li, X. Yang, P. Sun, H. Qi, and S. Lyu, \"Celeb-df: A large-scale challenging dataset for deepfake forensics,\" in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 3207-3216, 2020. 2, 3, 18, 19" + }, + { + "type": "ref_text", + "bbox": [ + 0.525, + 0.689, + 0.907, + 0.744 + ], + "angle": 0, + "content": "[24] W. Pu, J. Hu, X. Wang, Y. Li, S. Hu, B. Zhu, R. Song, Q. Song, X. Wu, and S. Lyu, \"Learning a deep dual-level network for robust deepfake detection,\" Pattern Recognition, vol. 130, p. 108832, 2022. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.525, + 0.747, + 0.906, + 0.787 + ], + "angle": 0, + "content": "[25] H. Guo, S. Hu, X. Wang, M.-C. Chang, and S. Lyu, \"Robust attentive deep neural network for detecting gan-generated faces,\" IEEE Access, vol. 10, pp. 32574-32583, 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.525, + 0.789, + 0.907, + 0.857 + ], + "angle": 0, + "content": "[26] Z. Yan, Y. Zhang, Y. Fan, and B. Wu, \"Ucf: Uncovering common features for generalizable deepfake detection,\" in Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 22412-22423, 2023. 5, 6, 19, 21, 23, 24, 25, 26, 27" + }, + { + "type": "ref_text", + "bbox": [ + 0.525, + 0.86, + 0.907, + 0.901 + ], + "angle": 0, + "content": "[27] L. Papa, L. Faiella, L. Corvitto, L. Maiano, and I. Amerini, \"On the use of stable diffusion for creating realistic faces: from generation to detection,\" in 2023 11th International" + }, + { + "type": "list", + "bbox": [ + 0.523, + 0.093, + 0.907, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.945, + 0.514, + 0.957 + ], + "angle": 0, + "content": "3511" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.133, + 0.093, + 0.484, + 0.12 + ], + "angle": 0, + "content": "Workshop on Biometrics and Forensics (IWBF), pp. 1-6, IEEE, 2023. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.122, + 0.483, + 0.149 + ], + "angle": 0, + "content": "[28] L. Trinh and Y. Liu, “An examination of fairness of ai models for deepfake detection,” IJCAI, 2021. 1, 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.151, + 0.484, + 0.219 + ], + "angle": 0, + "content": "[29] Y. Ju, S. Hu, S. Jia, G. H. Chen, and S. Lyu, “Improving fairness in deepfake detection,” in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 4655–4665, 2024. 
1, 2, 5, 6, 21, 23, 24, 25, 26, 27" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.221, + 0.484, + 0.262 + ], + "angle": 0, + "content": "[30] L. Lin, X. He, Y. Ju, X. Wang, F. Ding, and S. Hu, “Preserving fairness generalization in deepfake detection,” CVPR, 2024. 1, 2, 5, 6, 19, 21, 23, 24, 25, 26, 27" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.264, + 0.484, + 0.305 + ], + "angle": 0, + "content": "[31] Z. Yan, T. Yao, S. Chen, Y. Zhao, X. Fu, J. Zhu, D. Luo, L. Yuan, C. Wang, S. Ding, et al., \"Df40: Toward next-generation deepfake detection,\" NeurIPS, 2024. 1, 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.307, + 0.484, + 0.361 + ], + "angle": 0, + "content": "[32] C. Li et al., “A continual deepfake detection benchmark: Dataset, methods, and essentials,” in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1339–1349, 2023. 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.363, + 0.484, + 0.417 + ], + "angle": 0, + "content": "[33] J. Deng, C. Lin, P. Hu, C. Shen, Q. Wang, Q. Li, and Q. Li, \"Towards benchmarking and evaluating deepfake detection,\" IEEE Transactions on Dependable and Secure Computing, 2024. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.42, + 0.484, + 0.46 + ], + "angle": 0, + "content": "[34] Z. Yan, Y. Zhang, X. Yuan, S. Lyu, and B. Wu, \"Deepfakebench: A comprehensive benchmark of deepfake detection,\" in NeurIPS, 2023. 3, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.462, + 0.484, + 0.503 + ], + "angle": 0, + "content": "[35] B. M. Le, J. Kim, S. Tariq, K. Moore, A. Abuadbba, and S. S. Woo, \"Sok: Facial deepfake detectors,\" arXiv, 2024. 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.505, + 0.484, + 0.572 + ], + "angle": 0, + "content": "[36] C. Hazirbas, J. Bitton, B. Dolhansky, J. Pan, A. Gordo, and C. C. Ferrer, \"Towards measuring fairness in ai: the casual conversations dataset,\" IEEE Transactions on Biometrics, Behavior, and Identity Science, vol. 4, no. 3, pp. 324-332, 2021. 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.576, + 0.484, + 0.644 + ], + "angle": 0, + "content": "[37] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, et al., \"Learning transferable visual models from natural language supervision,\" in International conference on machine learning, pp. 8748-8763, PMLR, 2021. 2, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.647, + 0.484, + 0.701 + ], + "angle": 0, + "content": "[38] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, \"Generative adversarial nets,\" Advances in neural information processing systems, vol. 27, 2014. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.703, + 0.484, + 0.729 + ], + "angle": 0, + "content": "[39] “Midjourney.” https://mid-journey.ai/. Accessed: 2024-04-17. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.732, + 0.484, + 0.772 + ], + "angle": 0, + "content": "[40] A. Ramesh et al., \"Hierarchical text-conditional image generation with clip latents,\" arXiv, vol. 1, no. 2, p. 3, 2022. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.775, + 0.484, + 0.815 + ], + "angle": 0, + "content": "[41] D. O'Sullivan, \"A high school student created a fake 2020 us candidate. twitter verified it.\" https://cnn.it/3HpHfzz, 2020. Accessed: 2024-04-21. 
2" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.817, + 0.484, + 0.871 + ], + "angle": 0, + "content": "[42] S. Bond, \"That smiling linkedin profile face might be a computer-generated fake.\" https://www.npr.org/2022/03/27/1088140809/fake-linkedin-profiles, 2022. Accessed: 2024-04-21. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.874, + 0.484, + 0.901 + ], + "angle": 0, + "content": "[43] V. Albiero, K. Bowyer, K. Vangara, and M. King, \"Does face recognition accuracy get better with age? deep face matchers" + }, + { + "type": "list", + "bbox": [ + 0.101, + 0.093, + 0.484, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.554, + 0.093, + 0.905, + 0.12 + ], + "angle": 0, + "content": "say no,\" in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 261-269, 2020. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.525, + 0.123, + 0.907, + 0.19 + ], + "angle": 0, + "content": "[44] V. Albiero, K. Ks, K. Vangara, K. Zhang, M. C. King, and K. W. Bowyer, \"Analysis of gender inequality in face recognition accuracy,\" in Proceedings of the IEEE/cvf winter conference on applications of computer vision workshops, pp. 81-89, 2020." + }, + { + "type": "ref_text", + "bbox": [ + 0.524, + 0.195, + 0.907, + 0.276 + ], + "angle": 0, + "content": "[45] C. M. Cook, J. J. Howard, Y. B. Sirotin, J. L. Tipton, and A. R. Vermury, \"Demographic effects in facial recognition and their dependence on image acquisition: An evaluation of eleven commercial systems,\" IEEE Transactions on Biometrics, Behavior, and Identity Science, vol. 1, no. 1, pp. 32-41, 2019. 3, 14" + }, + { + "type": "ref_text", + "bbox": [ + 0.524, + 0.279, + 0.906, + 0.334 + ], + "angle": 0, + "content": "[46] K. Krishnapriya, V. Albiero, K. Vangara, M. C. King, and K. W. Bowyer, “Issues related to face recognition accuracy varying based on race and skin tone,” IEEE Transactions on Technology and Society, vol. 1, no. 1, pp. 8–20, 2020. 14" + }, + { + "type": "ref_text", + "bbox": [ + 0.524, + 0.337, + 0.907, + 0.392 + ], + "angle": 0, + "content": "[47] B. Porgali, V. Albiero, J. Ryda, C. C. Ferrer, and C. Hazirbas, \"The casual conversations v2 dataset,\" in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pp. 10-17, June 2023. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.524, + 0.395, + 0.906, + 0.422 + ], + "angle": 0, + "content": "[48] Google, “The Monk Skin Tone Scale,” 2024. [Accessed October 16, 2024]. 3, 14" + }, + { + "type": "ref_text", + "bbox": [ + 0.524, + 0.425, + 0.907, + 0.465 + ], + "angle": 0, + "content": "[49] United States Department of State — Bureau of Consular Affairs, “Selecting your gender marker - travel,” 2022. [Accessed October 16, 2024]. 3, 14" + }, + { + "type": "ref_text", + "bbox": [ + 0.524, + 0.469, + 0.906, + 0.509 + ], + "angle": 0, + "content": "[50] Australian Bureau of Statistics, \"Standard for Sex, Gender, Variations of Sex Characteristics and Sexual Orientation Variables,\" 2024. [Accessed October 16, 2024]. 3, 14" + }, + { + "type": "ref_text", + "bbox": [ + 0.524, + 0.512, + 0.907, + 0.595 + ], + "angle": 0, + "content": "[51] J. J. Howard, Y. B. Sirotin, and A. R. Vemury, “The effect of broad and specific demographic homogeneity on the imposter distributions and false match rates in face recognition algorithm performance,” in 2019 IEEE 10th international conference on biometrics theory, applications and systems (btas), pp. 1–8, IEEE, 2019. 
3, 14" + }, + { + "type": "ref_text", + "bbox": [ + 0.524, + 0.598, + 0.907, + 0.666 + ], + "angle": 0, + "content": "[52] I. D. Raji and J. Buolamwini, \"Actionable auditing: Investigating the impact of publicly naming biased performance results of commercial ai products,\" in Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, pp. 429-435, 2019. 3, 14" + }, + { + "type": "ref_text", + "bbox": [ + 0.524, + 0.669, + 0.907, + 0.71 + ], + "angle": 0, + "content": "[53] United Nations, “Provisional Guidelines on Standard International Age Classifications,” 1982. [Accessed October 16, 2024]. 3, 14" + }, + { + "type": "ref_text", + "bbox": [ + 0.524, + 0.713, + 0.907, + 0.74 + ], + "angle": 0, + "content": "[54] Statistics Canada, \"Age Categories, Life Cycle Groupings,\" 2017. [Accessed October 16, 2024]. 3, 14" + }, + { + "type": "ref_text", + "bbox": [ + 0.524, + 0.743, + 0.907, + 0.783 + ], + "angle": 0, + "content": "[55] O. Giudice, L. Guarnera, and S. Battiato, “Fighting deep-fakes by detecting gan dct anomalies,” Journal of Imaging, vol. 7, no. 8, p. 128, 2021. 3, 18" + }, + { + "type": "ref_text", + "bbox": [ + 0.524, + 0.786, + 0.907, + 0.841 + ], + "angle": 0, + "content": "[56] V. Asnani, X. Yin, T. Hassner, and X. Liu, \"Reverse engineering of generative models: Inferring model hyperparameters from generated images,\" IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023. 3, 18" + }, + { + "type": "ref_text", + "bbox": [ + 0.524, + 0.844, + 0.907, + 0.87 + ], + "angle": 0, + "content": "[57] D. Beniaguev, “Synthetic faces high quality (sfhq) dataset,” 2022. 3, 18" + }, + { + "type": "ref_text", + "bbox": [ + 0.524, + 0.873, + 0.906, + 0.901 + ], + "angle": 0, + "content": "[58] Z. Lu, D. Huang, L. Bai, J. Qu, C. Wu, X. Liu, and W. Ouyang, \"Seeing is not always believing: Benchmarking" + }, + { + "type": "list", + "bbox": [ + 0.524, + 0.093, + 0.907, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.945, + 0.516, + 0.957 + ], + "angle": 0, + "content": "3512" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.131, + 0.092, + 0.486, + 0.133 + ], + "angle": 0, + "content": "human and model perception of ai-generated images,\" Advances in Neural Information Processing Systems, vol. 36, 2024. 3, 18" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.134, + 0.486, + 0.189 + ], + "angle": 0, + "content": "[59] L. M. Dang, S. I. Hassan, S. Im, J. Lee, S. Lee, and H. Moon, \"Deep learning based computer generated face identification using convolutional neural network,\" Applied Sciences, vol. 8, no. 12, p. 2610, 2018. 3, 18" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.19, + 0.485, + 0.245 + ], + "angle": 0, + "content": "[60] P. Esser, R. Rombach, and B. Ommer, “Taming transformers for high-resolution image synthesis,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 12873–12883, 2021. 3, 18" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.246, + 0.484, + 0.287 + ], + "angle": 0, + "content": "[61] Z. Wang, J. Bao, W. Zhou, W. Wang, H. Hu, H. Chen, and H. Li, \"Dire for diffusion-generated image detection,\" arXiv preprint arXiv:2303.09295, 2023. 3, 18" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.288, + 0.484, + 0.355 + ], + "angle": 0, + "content": "[62] M. Kim, F. Liu, A. Jain, and X. 
Liu, \"Dface: Synthetic face generation with dual condition diffusion model,\" in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12715-12725, 2023. 3, 18" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.357, + 0.484, + 0.426 + ], + "angle": 0, + "content": "[63] R. Corvi, D. Cozzolino, G. Zingarini, G. Poggi, K. Nagano, and L. Verdoliva, \"On the detection of synthetic images generated by diffusion models,\" in ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1-5, IEEE, 2023. 3, 18" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.427, + 0.484, + 0.496 + ], + "angle": 0, + "content": "[64] M. Awsafur Rahman, B. Paul, N. Haque Sarker, Z. I. A. Hakim, and S. Anowarul Fattah, \"Artifact: A large-scale dataset with artificial and factual images for generalizable and robust synthetic image detection,\" arXiv e-prints, pp. arXiv-2302, 2023. 3, 18" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.497, + 0.484, + 0.55 + ], + "angle": 0, + "content": "[65] H. Song, S. Huang, Y. Dong, and W.-W. Tu, \"Robustness and generalizability of deepfake detection: A study with diffusion models,\" arXiv preprint arXiv:2309.02218, 2023. 3, 18" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.552, + 0.484, + 0.62 + ], + "angle": 0, + "content": "[66] J. Deng, J. Guo, E. Ververas, I. Kotsia, and S. Zafeiriou, \"Retinaface: Single-shot multi-level face localisation in the wild,\" in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 5203-5212, 2020. 3, 31, 32" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.622, + 0.484, + 0.691 + ], + "angle": 0, + "content": "[67] K. S. Krishnapriya, G. Pangelinan, M. C. King, and K. W. Bowyer, \"Analysis of manual and automated skin tone assignments,\" in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) Workshops, pp. 429-438, January 2022. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.692, + 0.484, + 0.747 + ], + "angle": 0, + "content": "[68] W. Thong, P. Joniak, and A. Xiang, \"Beyond skin tone: A multidimensional measure of apparent skin color,\" in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 4903-4913, October 2023. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.748, + 0.484, + 0.803 + ], + "angle": 0, + "content": "[69] C. Lugaresi, J. Tang, H. Nash, C. McClanahan, E. Uboweja, M. Hays, F. Zhang, C.-L. Chang, M. G. Yong, J. Lee, et al., \"Mediapipe: A framework for building perception pipelines,\" arXiv preprint arXiv:1906.08172, 2019. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.804, + 0.484, + 0.857 + ], + "angle": 0, + "content": "[70] J. A. Hartigan and M. A. Wong, \"Algorithm as 136: A k-means clustering algorithm,\" Journal of the royal statistical society. series c (applied statistics), vol. 28, no. 1, pp. 100-108, 1979. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.859, + 0.484, + 0.9 + ], + "angle": 0, + "content": "[71] Megvii Technology Limited, “Face++ Face Detection.” https://www(faceplusplus.com/face-detection/. Accessed: 2024-03. 
3, 14, 18, 19" + }, + { + "type": "list", + "bbox": [ + 0.101, + 0.092, + 0.486, + 0.9 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.524, + 0.093, + 0.909, + 0.133 + ], + "angle": 0, + "content": "[72] InsightFace Project Contributors, \"InsightFace: State-of-the-Art Face Analysis Toolbox.\" https://insightface.ai/. Accessed: 2024-03. 3, 14, 18, 19" + }, + { + "type": "ref_text", + "bbox": [ + 0.524, + 0.134, + 0.908, + 0.203 + ], + "angle": 0, + "content": "[73] G. Ilharco, M. Wortsman, R. Wightman, C. Gordon, N. Carlini, R. Taori, A. Dave, V. Shankar, H. Namkoong, J. Miller, H. Hajishirzi, A. Farhadi, and L. Schmidt, \"Openclip.\" https://github.com/mlfoundations/open_clip, 2021. 4, 6, 21" + }, + { + "type": "ref_text", + "bbox": [ + 0.524, + 0.204, + 0.908, + 0.258 + ], + "angle": 0, + "content": "[74] K. Cao, C. Wei, A. Gaidon, N. Arechiga, and T. Ma, \"Learning imbalanced datasets with label-distribution-aware margin loss,\" Advances in neural information processing systems, vol. 32, 2019. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.524, + 0.26, + 0.906, + 0.315 + ], + "angle": 0, + "content": "[75] S. Agarwal, G. Krueger, J. Clark, A. Radford, J. W. Kim, and M. Brundage, “Evaluating clip: towards characterization of broader capabilities and downstream implications,” arXiv preprint arXiv:2108.02818, 2021. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.524, + 0.316, + 0.908, + 0.383 + ], + "angle": 0, + "content": "[76] M. M. Tanjim, K. K. Singh, K. Kafle, R. Sinha, and G. W. Cottrell, “Discovering and mitigating biases in clip-based image editing,” in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 2984–2993, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.524, + 0.385, + 0.908, + 0.44 + ], + "angle": 0, + "content": "[77] J. Wang and G. Kang, “Learn to rectify the bias of clip for unsupervised semantic segmentation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4102–4112, 2024. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.524, + 0.441, + 0.908, + 0.496 + ], + "angle": 0, + "content": "[78] G. R. Kini, O. Paraskevas, S. Oymak, and C. Thrampoulidis, \"Label-imbalanced and group-sensitive classification under overparameterization,\" Advances in Neural Information Processing Systems, vol. 34, pp. 18970-18983, 2021. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.524, + 0.497, + 0.908, + 0.55 + ], + "angle": 0, + "content": "[79] G. Peyré, M. Cuturi, et al., \"Computational optimal transport: With applications to data science,\" Foundations and Trends® in Machine Learning, vol. 11, no. 5-6, pp. 355-607, 2019. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.524, + 0.552, + 0.908, + 0.594 + ], + "angle": 0, + "content": "[80] M. Cuturi, \"Sinkhorn distances: Lightspeed computation of optimal transport,\" Advances in neural information processing systems, vol. 26, 2013. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.524, + 0.595, + 0.908, + 0.65 + ], + "angle": 0, + "content": "[81] P. Foret, A. Kleiner, H. Mobahi, and B. Neyshabur, \"Sharpness-aware minimization for efficiently improving generalization,\" in International Conference on Learning Representations, 2020. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.524, + 0.651, + 0.908, + 0.704 + ], + "angle": 0, + "content": "[82] F. Chollet, “Xception: Deep learning with depthwise separable convolutions,” in Proceedings of the IEEE conference on computer vision and pattern recognition, pp.
1251–1258, 2017. 5, 6, 19, 21, 23, 24, 25, 26, 27" + }, + { + "type": "ref_text", + "bbox": [ + 0.524, + 0.705, + 0.908, + 0.761 + ], + "angle": 0, + "content": "[83] M. Tan and Q. Le, \"Efficientnet: Rethinking model scaling for convolutional neural networks,\" in International conference on machine learning, pp. 6105-6114, PMLR, 2019. 5, 6, 19, 21, 23, 24, 25, 26, 27" + }, + { + "type": "ref_text", + "bbox": [ + 0.524, + 0.762, + 0.908, + 0.844 + ], + "angle": 0, + "content": "[84] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, et al., \"An image is worth 16x16 words: Transformers for image recognition at scale,\" in 9th International Conference on Learning Representations, 2021. 5, 6, 20, 21, 23, 24, 25, 26, 27" + }, + { + "type": "ref_text", + "bbox": [ + 0.524, + 0.845, + 0.908, + 0.9 + ], + "angle": 0, + "content": "[85] Y. Qian, G. Yin, L. Sheng, Z. Chen, and J. Shao, “Thinking in frequency: Face forgery detection by mining frequency-aware clues,” in European conference on computer vision, pp. 86–103, Springer, 2020. 5, 6, 21, 23, 24, 25, 26, 27" + }, + { + "type": "list", + "bbox": [ + 0.524, + 0.093, + 0.909, + 0.9 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.945, + 0.516, + 0.957 + ], + "angle": 0, + "content": "3513" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.092, + 0.486, + 0.162 + ], + "angle": 0, + "content": "[86] H. Liu, X. Li, W. Zhou, Y. Chen, Y. He, H. Xue, W. Zhang, and N. Yu, \"Spatial-phase shallow learning: rethinking face forgery detection in frequency domain,\" in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 772-781, 2021. 5, 6, 21, 23, 24, 25, 26, 27" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.164, + 0.485, + 0.233 + ], + "angle": 0, + "content": "[87] Y. Luo, Y. Zhang, J. Yan, and W. Liu, \"Generalizing face forgery detection with high-frequency features,\" in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 16317-16326, 2021. 5, 6, 19, 21, 23, 24, 25, 26, 27" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.234, + 0.484, + 0.302 + ], + "angle": 0, + "content": "[88] U. Ojha, Y. Li, and Y. J. Lee, \"Towards universal fake image detectors that generalize across generative models,\" in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 24480-24489, 2023. 5, 6, 21, 23, 24, 25, 26, 27" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.304, + 0.485, + 0.373 + ], + "angle": 0, + "content": "[89] Y. Ni, D. Meng, C. Yu, C. Quan, D. Ren, and Y. Zhao, \"Core: Consistent representation learning for face forgery detection,\" in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12-21, 2022. 5, 6, 21, 23, 24, 25, 26, 27" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.375, + 0.483, + 0.417 + ], + "angle": 0, + "content": "[90] X. Han, J. Chi, Y. Chen, Q. Wang, H. Zhao, N. Zou, and X. Hu, “Ffb: A fair fairness benchmark for in-processing group fairness methods,” in ICLR, 2024. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.418, + 0.484, + 0.473 + ], + "angle": 0, + "content": "[91] N. Mehrabi, F. Morstatter, N. Saxena, K. Lerman, and A. Galstyan, “A survey on bias and fairness in machine learning,” ACM computing surveys (CSUR), vol. 54, no. 6, pp. 1–35, 2021. 
5" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.475, + 0.485, + 0.529 + ], + "angle": 0, + "content": "[92] J. Wang, X. E. Wang, and Y. Liu, “Understanding instance-level impact of fairness constraints,” in International Conference on Machine Learning, pp. 23114–23130, PMLR, 2022. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.532, + 0.483, + 0.559 + ], + "angle": 0, + "content": "[93] H. Wang, L. He, R. Gao, and F. P. Calmon, \"Aleatoric and epistemic discrimination in classification,\" ICML, 2023. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.561, + 0.485, + 0.615 + ], + "angle": 0, + "content": "[94] C. Dwork, M. Hardt, T. Pitassi, O. Reingold, and R. Zemel, \"Fairness through awareness,\" in Proceedings of the 3rd innovations in theoretical computer science conference, pp. 214-226, 2012. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.617, + 0.484, + 0.687 + ], + "angle": 0, + "content": "[95] Z. Yan, Y. Luo, S. Lyu, Q. Liu, and B. Wu, \"Transcending forgery specificity with latent space augmentation for generalizable deepfake detection,\" in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8984-8994, June 2024. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.688, + 0.484, + 0.728 + ], + "angle": 0, + "content": "[96] H. Ren, L. Lin, C.-H. Liu, X. Wang, and S. Hu, \"Improving generalization for ai-synthesized voice detection,\" in AAAI, 2025." + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.731, + 0.484, + 0.786 + ], + "angle": 0, + "content": "[97] Z. Yan, Y. Zhao, S. Chen, M. Guo, X. Fu, T. Yao, S. Ding, and L. Yuan, \"Generalizing deepfake video detection with plug-and-play: Video-level blending and spatiotemporal adapter tuning,\" in CVPR, 2025." + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.788, + 0.484, + 0.829 + ], + "angle": 0, + "content": "[98] J. Cheng, Z. Yan, Y. Zhang, L. Hao, J. Ai, Q. Zou, C. Li, and Z. Wang, \"Stacking brick by brick: Aligned feature isolation for incremental face forgery detection,\" in CVPR, 2025. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.831, + 0.483, + 0.872 + ], + "angle": 0, + "content": "[99] \"Monk skin tone scale,\" in https://en.wikipedia.org/wiki/Monk_Skin_Tone_Scale, Wikipedia, The Free Encyclopedia. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.096, + 0.874, + 0.483, + 0.901 + ], + "angle": 0, + "content": "[100] S. Hu and G. H. Chen, \"Fairness in survival analysis with distributionally robust optimization,\" arXiv, 2023. 5" + }, + { + "type": "list", + "bbox": [ + 0.096, + 0.092, + 0.486, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.092, + 0.908, + 0.163 + ], + "angle": 0, + "content": "[101] F. Cocchi, L. Baraldi, S. Poppi, M. Cornia, L. Baraldi, and R. Cucchiara, “Unveiling the impact of image transformations on deepfake detection: An experimental analysis,” in International Conference on Image Analysis and Processing, pp. 345–356, Springer, 2023. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.165, + 0.907, + 0.207 + ], + "angle": 0, + "content": "[102] D. Cozzolino, G. Poggi, R. Corvi, M. Nießner, and L. Verdoliva, “Raising the bar of ai-generated image detection with clip,” arXiv preprint arXiv:2312.00195, 2023. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.209, + 0.908, + 0.264 + ], + "angle": 0, + "content": "[103] B. Porgali, V. Albiero, J. Ryda, C. C. Ferrer, and C. 
Hazirbas, “The casual conversations v2 dataset,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10–17, 2023. 7, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.267, + 0.907, + 0.322 + ], + "angle": 0, + "content": "[104] H. Touvron, L. Martin, K. Stone, P. Albert, A. Almahairi, Y. Babaei, N. Bashlykov, S. Batra, P. Bhargava, S. Bhosale, et al., \"Llama 2: Open foundation and fine-tuned chat models,\" arXiv preprint arXiv:2307.09288, 2023. 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.324, + 0.907, + 0.38 + ], + "angle": 0, + "content": "[105] J. Achiam, S. Adler, S. Agarwal, L. Ahmad, I. Akkaya, F. L. Aleman, D. Almeida, J. Altenschmidt, S. Altman, S. Anadkat, et al., \"Gpt-4 technical report,\" arXiv preprint arXiv:2303.08774, 2023. 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.382, + 0.906, + 0.438 + ], + "angle": 0, + "content": "[106] J. Buolamwini and T. Gebru, “Gender shades: Intersectional accuracy disparities in commercial gender classification,” in Conference on fairness, accountability and transparency, pp. 77–91, PMLR, 2018. 14" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.441, + 0.908, + 0.509 + ], + "angle": 0, + "content": "[107] B. Lu, J.-C. Chen, C. D. Castillo, and R. Chellappa, “An experimental evaluation of covariates effects on unconstrained face verification,” IEEE Transactions on Biometrics, Behavior, and Identity Science, vol. 1, no. 1, pp. 42–55, 2019. 14" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.512, + 0.908, + 0.568 + ], + "angle": 0, + "content": "[108] Z. Khan and Y. Fu, “One label, one billion faces: Usage and consistency of racial categories in computer vision,” in Proceedings of the 2021 acm conference on fairness, accountability, and transparency, pp. 587–597, 2021. 14" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.57, + 0.907, + 0.612 + ], + "angle": 0, + "content": "[109] S. Sachdeva, “Fitzpatrick skin typing: Applications in dermatology,” Indian journal of dermatology, venereology and leprology, vol. 75, p. 93, 2009. 14" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.614, + 0.908, + 0.682 + ], + "angle": 0, + "content": "[110] J. J. Howard, Y. B. Sirotin, J. L. Tipton, and A. R. Vemury, \"Reliability and validity of image-based and self-reported skin phenotype metrics,\" IEEE Transactions on Biometrics, Behavior, and Identity Science, vol. 3, no. 4, pp. 550-560, 2021. 14" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.686, + 0.907, + 0.728 + ], + "angle": 0, + "content": "[111] U. Okoji, S. Taylor, and J. Lipoff, “Equity in skin typing: why it is time to replace the fitzpatrick scale,” British Journal of Dermatology, vol. 185, no. 1, pp. 198–199, 2021. 14" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.73, + 0.908, + 0.799 + ], + "angle": 0, + "content": "[112] M. Groh, C. Harris, R. Daneshjou, O. Badri, and A. Koochek, \"Towards transparency in dermatology image datasets with skin tone annotations by experts, crowds, and an algorithm,\" Proceedings of the ACM on Human-Computer Interaction, vol. 6, no. CSCW2, pp. 1-26, 2022. 14" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.802, + 0.908, + 0.842 + ], + "angle": 0, + "content": "[113] R. Williamson and A. Menon, “Fairness risk measures,” in International conference on machine learning, pp. 6786–6797, PMLR, 2019. 21" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.846, + 0.907, + 0.901 + ], + "angle": 0, + "content": "[114] D. Levy, Y. Carmon, J. C. 
Duchi, and A. Sidford, \"Largescale methods for distributionally robust optimization,\" Advances in Neural Information Processing Systems, vol. 33, pp. 8847-8860, 2020. 21" + }, + { + "type": "list", + "bbox": [ + 0.096, + 0.092, + 0.908, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.945, + 0.516, + 0.957 + ], + "angle": 0, + "content": "3514" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.092, + 0.486, + 0.133 + ], + "angle": 0, + "content": "[115] R. T. Rockafellar, S. Uryasev, et al., \"Optimization of conditional value-at-risk,\" Journal of risk, vol. 2, pp. 21-42, 2000. 21" + }, + { + "type": "ref_text", + "bbox": [ + 0.091, + 0.136, + 0.485, + 0.19 + ], + "angle": 0, + "content": "[116] T. Hashimoto, M. Srivastava, H. Namkoong, and P. Liang, \"Fairness without demographics in repeated loss minimization,\" in International Conference on Machine Learning, pp. 1929-1938, PMLR, 2018. 21" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.192, + 0.486, + 0.246 + ], + "angle": 0, + "content": "[117] J. C. Duchi and H. Namkoong, “Learning models with uniform performance via distributionally robust optimization,” The Annals of Statistics, vol. 49, no. 3, pp. 1378–1406, 2021. 21" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.249, + 0.485, + 0.305 + ], + "angle": 0, + "content": "[118] T. Gebru, J. Morgenstern, B. Vecchione, J. W. Vaughan, H. Wallach, H. D. Iiii, and K. Crawford, \"Datasheets for datasets,\" Communications of the ACM, vol. 64, no. 12, pp. 86-92, 2021. 31" + }, + { + "type": "list", + "bbox": [ + 0.091, + 0.092, + 0.486, + 0.305 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.483, + 0.946, + 0.516, + 0.957 + ], + "angle": 0, + "content": "3515" + } + ] +] \ No newline at end of file diff --git a/2025/AI-Face_ A Million-Scale Demographically Annotated AI-Generated Face Dataset and Fairness Benchmark/69a93785-ab72-45f7-87fa-9182ea82821d_origin.pdf b/2025/AI-Face_ A Million-Scale Demographically Annotated AI-Generated Face Dataset and Fairness Benchmark/69a93785-ab72-45f7-87fa-9182ea82821d_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..4b32cbd04493c35776c3ad40e4b8c947db3c2a2b --- /dev/null +++ b/2025/AI-Face_ A Million-Scale Demographically Annotated AI-Generated Face Dataset and Fairness Benchmark/69a93785-ab72-45f7-87fa-9182ea82821d_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7540b1abafa93ee6228edb7205eb8c2d52becf6b6b67052045c20f22215cba16 +size 7356333 diff --git a/2025/AI-Face_ A Million-Scale Demographically Annotated AI-Generated Face Dataset and Fairness Benchmark/full.md b/2025/AI-Face_ A Million-Scale Demographically Annotated AI-Generated Face Dataset and Fairness Benchmark/full.md new file mode 100644 index 0000000000000000000000000000000000000000..4b52bf97b6011770d965cbfbaac8fc4e710ed3e9 --- /dev/null +++ b/2025/AI-Face_ A Million-Scale Demographically Annotated AI-Generated Face Dataset and Fairness Benchmark/full.md @@ -0,0 +1,374 @@ +# AI-Face: A Million-Scale Demographically Annotated AI-Generated Face Dataset and Fairness Benchmark + +Li Lin $^{1}$ , Santosh $^{1}$ , Mingyang Wu $^{1}$ , Xin Wang $^{2}$ , Shu Hu $^{1\dagger}$ + +$^{1}$ Purdue University, West Lafayette, USA {lin1785, santosh2, wu2415, hu968}@purdue.edu + +$^{2}$ University at Albany, State University of New York, New York, USA xwang56@albany.edu + +# Abstract + +AI-generated faces have enriched human 
life, such as entertainment, education, and art. However, they also pose misuse risks. Therefore, detecting AI-generated faces becomes crucial, yet current detectors show biased performance across different demographic groups. Mitigating biases can be done by designing algorithmic fairness methods, which usually require demographically annotated face datasets for model training. However, no existing dataset encompasses both demographic attributes and diverse generative methods simultaneously, which hinders the development of fair detectors for AI-generated faces. In this work, we introduce the AI-Face dataset, the first million-scale demographically annotated AI-generated face image dataset, including real faces, faces from deepfake videos, and faces generated by Generative Adversarial Networks and Diffusion Models. Based on this dataset, we conduct the first comprehensive fairness benchmark to assess various AI face detectors and provide valuable insights and findings to promote the future fair design of AI face detectors. Our AI-Face dataset and benchmark code are publicly available at https://github.com/Purdue-M2/AI-Face-FairnessBench. + +# 1. Introduction + +AI-generated faces are created using sophisticated AI technologies that are visually difficult to discern from real ones [1]. They can be summarized into three categories: deepfake videos [2] created by typically using Variational Autoencoders (VAEs) [3, 4], faces generated from Generative Adversarial Networks (GANs) [5-8], and Diffusion Models (DMs) [9]. These technologies have significantly advanced the realism and controllability of synthetic facial representations. Generated faces can enrich media and increase creativity [10]. However, they also carry significant risks of misuse. For example, during the 2024 United States presidential election, fake face images of Donald Trump surrounded by groups of black people smiling and laughing to + +![](images/b60e36a480d2233a4dde3cd316471aec9b9fb7003921d83df06178da82c10f17.jpg) +Figure 1. Comparison between AI-Face and other datasets in terms of demographic annotation, generation category, and the number of generation methods. 'DF', 'GAN', and 'DM' stand for Deepfake Videos, Generative Adversarial Networks, and Diffusion Models. + +encourage African Americans to vote Republican are spreading online [11]. This could distort public opinion and erode people's trust in media [12, 13], necessitating the detection of AI-generated faces for their ethical use. + +However, one major issue existing in current AI face detectors [24-27] is biased detection (i.e., unfair detection performance among demographic groups [19, 28-30]). Mitigating biases can be done by designing algorithmic fairness methods, but they usually require demographically annotated face datasets for model training. For example, works like [29, 30] have made efforts to enhance fairness in the detection based on A-FF++ [19] and A-DFD [19]. However, both datasets are limited to containing only faces from deepfake videos, which could cause the trained models not to be applicable for fairly detecting faces generated by GANs and DMs. While some datasets (e.g., GenData [17], DF40 [31]) include GAN and DM faces, they either lack demographic annotations or provide only limited demographic attributes. Most importantly, no existing dataset offers sufficient diversity in generation methods while also providing demographic labels. A comparison of existing datasets is shown in Fig. 1. 
These limitations of existing datasets hamper the development of fair technologies for detecting AI-generated faces. + +
| Dataset | Year | #Real | #Fake | #Generation Methods | Source of Real Images |
| --- | --- | --- | --- | --- | --- |
| DF-1.0 [14] | 2020 | 2.9M | 14.7M | 1 | Self-Recording |
| DeePhy [15] | 2022 | 1K | 50.4K | 3 | YouTube |
| DF-Platter [16] | 2023 | 392.3K | 653.4K | 3 | YouTube |
| GenData [17] | 2023 | - | 20K | 3 | CelebA [18] |
| A-FF++ [19] | 2024 | 29.8K | 149.1K | 5 | YouTube |
| A-DFD [19] | 2024 | 10.8K | 89.6K | 5 | Self-Recording |
| A-DFDC [19] | 2024 | 54.5K | 52.6K | 8 | Self-Recording |
| A-Celeb-DF-v2 [19] | 2024 | 26.3K | 166.5K | 1 | Self-Recording |
| A-DF-1.0 [19] | 2024 | 870.3K | 321.5K | 1 | Self-Recording |
| AI-Face | 2025 | 400K | 1.2M | 37 | FFHQ [6], IMDB-WIKI [20], real from FF++ [2], DFDC [21], DFD [22], Celeb-DF-v2 [23] |

Table 1. Quantitative comparison of existing datasets with ours on demographically annotated AI-generated faces.

Moreover, benchmarking fairness provides a direct way to uncover prevalent and unique fairness issues in recent AI-generated face detection. However, there is a lack of a comprehensive benchmark to estimate the fairness of existing AI face detectors. Existing benchmarks [32-35] primarily assess utility, neglecting systematic fairness evaluation. Two studies [28, 36] do evaluate fairness in detection models, but their examination is based on a few outdated detectors. Furthermore, detectors' fairness reliability (e.g., robustness under test-set post-processing, fairness generalization) has not been assessed. The absence of a comprehensive fairness benchmark impedes a thorough understanding of the fairness behaviors of recent AI face detectors and obscures the research path toward detector fairness guarantees.

In this work, we build the first million-scale demographically annotated AI-generated face image dataset: AI-Face. The face images are collected from various public datasets, including the real faces that are usually used to train AI face generators, faces from deepfake videos, and faces generated by GANs and DMs. Each face is demographically annotated by our designed measurement method and a Contrastive Language-Image Pretraining (CLIP) [37]-based lightweight annotator. Next, we conduct the first comprehensive fairness benchmark on our dataset to estimate the fairness performance of 12 representative detectors from four model types. Our benchmark exposes common and unique fairness challenges in recent AI face detectors, providing essential insights that can guide and enhance the future design of fair AI face detectors. Our contributions are as follows:

- We build the first million-scale demographically annotated AI-generated face dataset by leveraging our designed measure and developed lightweight annotator.
- We conduct the first comprehensive fairness benchmark of AI-generated face detectors, providing an extensive fairness assessment of current representative detectors.
- Based on our experiments, we summarize the unsolved questions and offer valuable insights within this research field, setting the stage for future investigations.

# 2. Background and Motivation

AI-generated Faces and Biased Detection. AI-generated face images, created by advanced AI technologies, are visually difficult to discern from real ones. They can be summarized into three categories: 1) Deepfake Videos. Initiated in 2017 [13], these use face-swapping and face-reenactment techniques with a variational autoencoder to replace a face in a target video with one from a source [3, 4]. Note that our paper focuses solely on images extracted from videos. 2) GAN-generated Faces. Post-2017, Generative Adversarial Networks (GANs) [38] like StyleGANs [6-8] have significantly improved the realism of generated faces. 3) DM-generated Faces. Diffusion models (DMs), emerging in 2021, generate detailed faces from textual descriptions and offer greater controllability. Tools like Midjourney [39] and DALLE2 [40] facilitate customized face generation. While these AI-generated faces can enhance visual media and creativity [10], they also pose risks, such as being misused in social media profiles [41, 42]. Therefore, numerous studies focus on detecting AI-generated faces [24-27], but current detectors often show performance disparities among demographic groups [19, 28-30].
This bias can lead to unfair targeting or exclusion, undermining trust in detection models. Recent efforts [29, 30] aim to enhance fairness in deepfake detection but mainly address deepfake videos, overlooking biases in detecting GAN- and DM-generated faces.

The Existing Datasets. Current AI-generated facial datasets with demographic annotations are limited in size, generation categories, generation methods, and annotations, as illustrated in Table 1. For instance, A-FF++, A-DFD, A-DFDC, and A-Celeb-DF-v2 [19] are deepfake video datasets with fewer than one million images. Datasets like DF-1.0 [14] and DF-Platter [16] lack various demographic annotations. Additionally, existing datasets cover only a limited number of generation methods. These limitations hinder the development of fair AI face detectors, motivating us to build a million-scale demographically annotated AI-Face dataset.

Benchmark for Detecting AI-generated Faces. Benchmarks are essential for evaluating AI-generated face detectors under standardized conditions. Existing benchmarks, as shown in Table 2, mainly focus on detectors' utility, often overlooking fairness [31-35]. Loc et al. [28] and CCv1 [36] examined detector fairness. However, these studies did not analyze DM-generated faces and only measured bias between groups in basic scenarios, without considering
| Benchmark | Year |
| --- | --- |
| Loc et al. [28] | 2021 |
| CCv1 [36] | 2021 |
| DeepfakeBench [34] | 2023 |
| CDDB [32] | 2023 |
| Lin et al. [33] | 2024 |
| Le et al. [35] | 2024 |
| DF40 [31] | 2024 |
| Ours | 2025 |

Table 2. Comparison of existing AI-generated face detection benchmarks and ours. Fairness 'General' means fairness evaluation under default/basic settings. Fairness 'Reliability' measures fairness consistency across dynamic scenarios (e.g., post-processing).

fairness reliability under real-world variations and transformations. This motivates us to conduct a comprehensive benchmark to evaluate AI face detectors' fairness.

The Definition of Demographic Categories. Demography-related labels are highly salient to measuring bias. Following prior works [36, 43-47], we focus on three key demographic categories in this work: Skin Tone, Gender, and Age. For skin tone, this vital attribute spans a range from pale to dark. We use the Monk Skin Tone scale [48], specifically designed for computer vision applications. For gender, we adopt binary categories (i.e., Male and Female), following practices by many governments [49, 50] and facial recognition research [45, 51, 52], based on sex at birth. For age, using definitions from the United Nations [53] and Statistics Canada [54], we define five age groups: Child (0-14), Youth (15-24), Adult (25-44), Middle-age Adult (45-64), and Senior (65+). More discussion is in Appendix A.

# 3. AI-Face Dataset

This section outlines the process of building our demographically annotated AI-Face dataset (see Fig. 2), along with its statistics and an annotation quality assessment.

# 3.1. Data Collection

We build our AI-Face dataset by collecting and integrating public real and AI-generated face images sourced from academic publications, GitHub repositories, and commercial tools. We strictly adhere to the license agreements of all datasets to ensure that they allow inclusion in our dataset and secondary use for training and testing. More details are in Appendix B.1. Specifically, the fake face images in our dataset originate from 4 Deepfake Video datasets (i.e., FF++ [2], DFDC [21], DFD [22], and Celeb-DF-v2 [23]), 10 GAN models (i.e., AttGAN [55], MMDGAN [56], StarGAN [55], StyleGANs [55, 57, 58], MSGGAN [56], ProGAN [59], STGAN [56], and VQGAN [60]), and 8 DM models (i.e., DALLE2 [61], IF [61], Midjourney [61], DCFace [62], Latent Diffusion [63], Palette [64], Stable Diffusion v1.5 [65], and Stable Diffusion Inpainting [65]). This constitutes a total of 1,245,660 fake face images in our dataset. We include 6 real source datasets (i.e., FFHQ [6], IMDB-WIKI [20], and real images from FF++ [2], DFDC [21], DFD [22], and Celeb-DF-v2 [23]). All of them are commonly used as training sets for the generative models that produce fake face images. This constitutes a total of 400,885 real face images in our dataset. In general, our dataset contains 28 subsets and 37 generation methods (i.e., 5 in FF++, 5 in DFD, 8 in DFDC, 1 in Celeb-DF-v2, 10 GANs, and 8 DMs). For all images, we use RetinaFace [66] for detecting and cropping faces.

# 3.2. Annotation Generation

# 3.2.1. Skin Tone Annotation Generation

Skin tone is typically measured using an intuitive approach [67, 68], without requiring a predictive model. Inspired by [67], we developed a method to estimate skin tone on the Monk Skin Tone (MST) Scale [48] (a 10-shade scale: Tone 1 to 10) by combining facial landmark detection with color analysis. Specifically, utilizing Mediapipe's FaceMesh [69] for precise facial landmark localization, we isolate skin regions while excluding non-skin areas such as the eyes and mouth. Based on the detected landmarks, we generate a mask to extract skin pixels from the facial area. These pixels are then subjected to K-Means clustering [70] (we use $\mathrm{K} = 3$ in practice) to identify the dominant skin color within the region of interest. The largest color cluster is mapped to the closest tone on the MST Scale by computing the Euclidean distance between the cluster centroid and the MST reference colors in RGB space.
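A minimal sketch of this tone-matching step, assuming the skin pixels have already been extracted with the FaceMesh-based mask; the MST swatch values below are commonly cited approximations (substitute the official swatches if they differ), and `estimate_mst_tone` is a hypothetical helper name, not the paper's implementation:

```python
import numpy as np
from sklearn.cluster import KMeans

# Monk Skin Tone reference swatches in RGB (approximate published values).
MST_RGB = np.array([
    [246, 237, 228], [243, 231, 219], [247, 234, 208], [234, 218, 186],
    [215, 189, 150], [160, 126,  86], [130,  92,  67], [ 96,  65,  52],
    [ 58,  49,  42], [ 41,  36,  32],
], dtype=float)

def estimate_mst_tone(skin_pixels: np.ndarray, k: int = 3) -> int:
    """Map the dominant color of an (N, 3) array of RGB skin pixels
    to the nearest Monk Skin Tone (returned as 1-10)."""
    km = KMeans(n_clusters=k, n_init=10).fit(skin_pixels)
    # Dominant color = centroid of the largest cluster.
    largest = np.bincount(km.labels_).argmax()
    dominant = km.cluster_centers_[largest]
    # Nearest MST swatch by Euclidean distance in RGB space.
    return int(np.linalg.norm(MST_RGB - dominant, axis=1).argmin()) + 1
```

Using K = 3 keeps the clustering robust to residual shadows or highlights that survive the landmark mask, while the largest cluster still captures the dominant skin color.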
# 3.2.2. Gender and Age Annotation Generation

For generating gender and age annotations, existing online software (e.g., Face++ [71]) and open-source tools (e.g., InsightFace [72]) could be used for prediction. However, they fall short in our task for two reasons: 1) They are mostly designed for face recognition and trained on datasets of real face images, so they lack the generalization capability needed to annotate AI-generated face images. 2) Their use may introduce bias into our dataset, as they are typically designed and trained without careful consideration of bias and imbalance in the training set. See Appendix B.3 for our experimental study on these tools. To this end, we develop our own annotators to predict gender and age annotations for each image in our dataset.

Problem Definition. Given a training dataset $\mathbb{D} = \{(X_i, A_i)\}_{i=1}^n$ with size $n$, where $X_i$ represents the $i$-th face image and $A_i$ signifies a demographic attribute associated with $X_i$. Here, $A_i \in \mathcal{A}$, where $\mathcal{A}$ represents user-defined groups (e.g., for gender, $\mathcal{A} = \{\text{Female, Male}\}$; for age, $\mathcal{A} = \{\text{Child, Youth, Adult, Middle-age Adult, Senior}\}$). Our goal is to design a lightweight, generalizable annotator based on $\mathbb{D}$ that reduces bias while predicting facial demographic attributes for each image in our dataset. In practice, we use IMDB-WIKI [20] as the training dataset, which contains images along with profile metadata sourced from IMDb and Wikipedia, ensuring that the demographic annotations are as accurate as possible. We trained two annotators with identical architectures and training procedures for gender and age annotations, respectively.

![](images/73cdb83d2fd072d055ecc2be288d34736d78933d58e073c80c3646a5aedbf2b0.jpg)
Figure 2. Generation pipeline of our Demographically Annotated AI-Face Dataset. First, we collect and filter face images from Deepfake Videos, GAN-generated faces, and DM-generated faces found in public datasets. Second, we perform skin tone, gender, and age annotation generation. Skin tone is estimated by combining facial landmark detection with color analysis to generate the corresponding annotation. For gender and age, we develop annotators trained on the IMDB-WIKI dataset [20], then use them to predict attributes for each image.

Annotator Architecture. We build a lightweight annotator based on the CLIP [37] foundation model by leveraging its strong zero-shot and few-shot learning capabilities. Specifically, our annotator employs a frozen pre-trained CLIP ViT-L/14 [73] as a feature extractor $\mathbf{E}$, followed by a trainable classifier parameterized by $\theta$, which consists of a 3-layer Multi-Layer Perceptron (MLP) $\mathbf{M}$ and a classification head $h$.
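A minimal PyTorch sketch of this architecture, treating the CLIP image encoder as a frozen black box that emits 768-dimensional ViT-L/14 features; the class name and MLP widths are illustrative assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class DemographicAnnotator(nn.Module):
    """Frozen CLIP feature extractor E + trainable 3-layer MLP M
    and classification head h (hidden width is illustrative)."""
    def __init__(self, clip_encoder: nn.Module, num_classes: int,
                 feat_dim: int = 768, hidden: int = 512):
        super().__init__()
        self.encoder = clip_encoder.eval()
        for p in self.encoder.parameters():   # keep E frozen
            p.requires_grad = False
        self.mlp = nn.Sequential(             # M: 3-layer MLP
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.head = nn.Linear(hidden, num_classes)  # h

    def forward(self, x: torch.Tensor):
        with torch.no_grad():
            e = self.encoder(x)               # CLIP image features E(X)
        f = self.mlp(e)                       # f_i = M(E(X_i))
        return self.head(f), f                # logits h(f) and feature f
```

In this sketch, the gender annotator would use `num_classes=2` and the age annotator `num_classes=5`; the feature `f` is also returned because the fairness loss below operates on feature distributions.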
Learning Objective. Aware that neural networks can perform poorly when the training dataset suffers from class imbalance [74] and that CLIP is not free from demographic bias [75-77], we introduce an imbalance loss and a fairness loss to address these challenges in annotator training. Specifically, for image $X_i$, its feature $f_i$ is obtained through $f_i = \mathbf{M}(\mathbf{E}(X_i))$. The two losses are detailed below.

Imbalance Loss: To mitigate the impact of imbalanced data, we use the Vector Scaling [78] loss, a re-weighting method for training models on imbalanced data with distribution shifts, which can be expressed as

$$
L_{imb} = \frac{1}{n} \sum_{i=1}^{n} -u_{A_i} \log \frac{e^{\zeta_{A_i} h(f_i)_{A_i} + \Delta_{A_i}}}{\sum_{A \in \mathcal{A}} e^{\zeta_{A} h(f_i)_{A} + \Delta_{A}}},
$$

where $u_{A_i}$ is the weighting factor for attribute $A_i$, $h(f_i)_{A_i}$ is the predicted logit on $A_i$, $\zeta_{A_i}$ is the multiplicative logit scaling factor, calculated as the inverse of $A_i$'s frequency, and $\Delta_{A_i}$ is the additive logit scaling factor, calculated as the log of $A_i$'s probability. More details about them are in Appendix B.4.

Fairness Loss: We introduce a fairness loss to minimize the disparity between the distribution $\mathcal{D}^f$ of $f$ and the conditional distribution $\mathcal{D}^{f_A}$ of $f$ given attribute $A \in \mathcal{A}$. Specifically, we follow [79, 80] and minimize the summation of the following Sinkhorn distances between these two distributions:

$$
L_{fair} = \sum_{A\in \mathcal{A}}\inf_{\gamma \in \Gamma (\mathcal{D}^{f},\mathcal{D}^{f_{A}})}\bigl\{\mathbb{E}_{(p,q)\sim \gamma}[c(p,q)] + \alpha H(\gamma \,|\, \mu \otimes \nu)\bigr\},
$$

where $\Gamma(\mathcal{D}^f, \mathcal{D}^{f_A})$ is the set of joint distributions based on $\mathcal{D}^f$ and $\mathcal{D}^{f_A}$. Let $p$ and $q$ be points from $\mathcal{D}^f$ and $\mathcal{D}^{f_A}$, respectively; then $c(p, q)$ represents the transport cost [80]. Let $\mu$ and $\nu$ be the reference measures from the set of measures on $f$; then $H(\gamma \,|\, \mu \otimes \nu)$ represents the relative entropy of $\gamma$ with respect to the product measure $\mu \otimes \nu$, and $\alpha \geq 0$ is a regularization hyperparameter. In practice, we use the empirical form of $L_{fair}$.

Total Loss: The final learning objective is $\mathcal{L}(\theta) = L_{imb} + \lambda L_{fair}$, where $\lambda$ is a hyperparameter.

Training. Traditional optimization methods like stochastic gradient descent can lead to poor model generalization due to sharp loss landscapes with multiple local and global minima. To address this, we use Sharpness-Aware Minimization (SAM) [81] to enhance our annotator's generalization by flattening the loss landscape. Specifically, flattening is attained by determining the optimal perturbation $\epsilon^{*}$ of the model parameters $\theta$ that maximizes the loss, formulated as $\epsilon^{*} = \arg \max_{\| \epsilon \|_{2} \leq \beta} \mathcal{L}(\theta + \epsilon) \approx \arg \max_{\| \epsilon \|_{2} \leq \beta} \epsilon^{\top} \nabla_{\theta} \mathcal{L} = \beta\, \mathrm{sign}(\nabla_{\theta} \mathcal{L})$, where $\beta$ controls the perturbation magnitude. The approximation is based on a first-order Taylor expansion, assuming $\epsilon$ is small; the final equality is obtained by solving a dual norm problem, where $\mathrm{sign}$ is the sign function and $\nabla_{\theta} \mathcal{L}$ is the gradient of $\mathcal{L}$ with respect to $\theta$. As a result, the model parameters are updated by solving $\min_{\theta} \mathcal{L}(\theta + \epsilon^{*})$.
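A compact sketch of this two-pass SAM update, directly implementing the $\epsilon^{*} = \beta\,\mathrm{sign}(\nabla_{\theta}\mathcal{L})$ approximation above (the function name and the default $\beta$ are illustrative, not the paper's settings):

```python
import torch

def sam_step(model, loss_fn, x, y, optimizer, beta=0.05):
    """One Sharpness-Aware Minimization step (simplified)."""
    # First pass: gradients at the current parameters theta.
    loss_fn(model(x), y).backward()
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                eps.append(None); continue
            e = beta * p.grad.sign()   # eps* = beta * sign(grad L)
            p.add_(e)                  # move to theta + eps*
            eps.append(e)
    model.zero_grad()
    # Second pass: gradients at the perturbed point theta + eps*.
    loss_fn(model(x), y).backward()
    with torch.no_grad():              # restore theta
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)
    optimizer.step()                   # update theta with the SAM gradient
    optimizer.zero_grad()
```

The two forward/backward passes are the essence of SAM: the first locates the worst-case perturbation, and the second supplies the gradient actually used for the parameter update.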
![](images/ee33e2d2ecf2b2de8ef43e6b49c40496f7bca36c846a744baca4b05c14801536.jpg)
Figure 3. Distribution of face images in the AI-Face dataset. The figure shows the (a) subset distribution and the demographic distribution for (b) skin tone, (c) gender, and (d) age. The outer rings in (b), (c), and (d) represent the proportion of groups within each attribute category, while the inner rings indicate the distribution of fake $(F)$ and real $(R)$ images within those groups.

![](images/73dce0516cea83156d02b385626f5b7c5190a16667a7fb0a7c2413c9ae3dae08.jpg)

![](images/64b85c5d29c5a70ca88dd95e3cd88007ea0acd67770d93cd21e538fb0c9ef472.jpg)

![](images/7c9ff3f0b1c04966ed4493926d615fc9c3189dd883ddb15e16207aafedd36107.jpg)

Inference. We use the trained annotators to predict demographic labels for each image in the AI-Face dataset, except for those from IMDB-WIKI, which already carry true labels.

# 3.3. Dataset Statistics

Fig. 3 illustrates the subset distribution and demographic attributes of the AI-Face dataset. The dataset contains approximately three times more generated images than real images, with diffusion model-generated images constituting the majority. In terms of demographic attributes, the most common skin tones are Tone 5 (31.14%) and Tone 6 (35.16%), while the lightest skin tones (Tones 1-3) are underrepresented, comprising only 0.97% of the dataset. The dataset is relatively balanced across gender. Adults (25-44) form the largest age group (49.67%).

# 3.4. Annotation Quality Assessment

To assess the quality of the demographic annotations in our AI-Face dataset, we conducted a user study. Three participants labeled the demographic attributes of the given images (details of the labeling activities are in Appendix B.5), with the final ground truth determined by majority vote. We then compared our annotations with those in the A-FF++, A-DFDC, A-Celeb-DF-V2, and A-DFD datasets. Specifically, we performed two assessments: 1) Strategic comparison: We selected 1,000 images from A-FF++ and A-DFDC whose annotations differ from AI-Face's. These images likely represent challenging cases. 2) Random comparison: We randomly sampled 1,000 images from A-Celeb-DF-V2 and A-DFD. Due to the limited age classes in these datasets, only gender was evaluated. The results, presented in Table 3, demonstrate the high correctness of the AI-Face annotations and their superior quality compared to the annotations of the other datasets. For example, our annotation accuracy (ACC) surpasses that of A-FF++ by 78.714% on gender and 48.000% on age.

# 4. Fairness Benchmark Settings

This section describes the fairness benchmark settings for detection methods and evaluation metrics on AI-Face (80%/20% Train/Test split). More settings are in Appendix C.1.

Detection Methods. Our benchmark implements 12 detectors. The methodologies cover a spectrum specifically tailored to detecting AI-generated faces from Deepfake Videos, GANs, and DMs.
| Evaluation Type | Dataset | ACC (Gender) | Precision (Gender) | Recall (Gender) | ACC (Age) | Precision (Age) | Recall (Age) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Strategic | A-FF++ | 8.143 | 17.583 | 5.966 | 37.700 | 39.459 | 45.381 |
| Strategic | AI-Face | 86.857 | 74.404 | 77.367 | 85.700 | 74.024 | 63.751 |
| Strategic | A-DFDC | 21.600 | 28.604 | 23.082 | 33.400 | 38.011 | 40.165 |
| Strategic | AI-Face | 91.700 | 92.129 | 83.448 | 77.000 | 76.184 | 62.646 |
| Random | A-Celeb-DF-V2 | 89.628 | 90.626 | 90.494 | - | - | - |
| Random | AI-Face | 91.206 | 91.474 | 91.767 | - | - | - |
| Random | A-DFD | 70.900 | 71.686 | 74.435 | - | - | - |
| Random | AI-Face | 92.300 | 91.060 | 91.727 | - | - | - |

Table 3. Annotation quality assessment results (%) for A-FF++, A-DFDC, A-Celeb-DF-V2, A-DFD, and our AI-Face. ACC: Accuracy.

They can be classified into four types. Naive detectors: backbone models that can be directly utilized as binary classifiers, including CNN-based (i.e., Xception [82], EfficientB4 [83]) and transformer-based (i.e., ViT-B/16 [84]) models. Frequency-based: detectors that explore the frequency domain for forgery detection (i.e., F3Net [85], SPSL [86], SRM [87]). Spatial-based: detectors that mine spatial characteristics (e.g., texture) within images (i.e., UCF [26], UnivFD [88], CORE [89]). Fairness-enhanced: detectors that improve fairness in AI-generated face detection through specifically designed algorithms (i.e., DAW-FDD [29], DAG-FDD [29], PG-FDD [30]).

Evaluation Metrics. To provide comprehensive benchmarking, we consider 5 fairness metrics commonly used in the fairness community [90-94] and 5 widely used utility metrics [95-98]. For fairness, we consider Demographic Parity $(F_{DP})$ [90, 91], Max Equalized Odds $(F_{MEO})$ [93], Equal Odds $(F_{EO})$ [92], and Overall Accuracy Equality $(F_{OAE})$ [93] for evaluating group (e.g., gender) and intersectional (e.g., individuals of a specific gender who simultaneously have a specific skin tone) fairness. In experiments, the intersectional groups are Female-Light (F-L), Female-Medium (F-M), Female-Dark (F-D), Male-Light (M-L), Male-Medium (M-M), and Male-Dark (M-D), where we group the 10 skin-tone categories into Light (Tone 1-3), Medium (Tone 4-6), and Dark (Tone 7-10) for simplicity, following [99]. We also use individual fairness $(F_{IND})$ [94, 100] (i.e., similar individuals should receive similar predicted outcomes). For utility, we employ the Area Under the ROC Curve (AUC), Accuracy (ACC), Average Precision (AP), Equal Error Rate (EER), and False Positive Rate (FPR).
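As a concrete reference for how such gap-style group-fairness metrics are typically computed, here is a small sketch; this is our own illustration of one common formulation (the function name `fairness_gaps` is hypothetical), not the benchmark's exact implementation:

```python
import numpy as np

def fairness_gaps(y_true, y_pred, groups):
    """Gap-style group fairness: F_DP is the spread of positive
    prediction rates across groups; F_MEO is the largest spread of
    TPR or FPR across groups (smaller = fairer). Assumes binary
    0/1 predictions and that every group has real and fake samples."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    pos_rate, tpr, fpr = [], [], []
    for g in np.unique(groups):
        m = groups == g
        pos_rate.append(y_pred[m].mean())                 # P(pred=1 | g)
        tpr.append(y_pred[m & (y_true == 1)].mean())      # TPR within g
        fpr.append(y_pred[m & (y_true == 0)].mean())      # FPR within g
    spread = lambda v: max(v) - min(v)
    return {"F_DP": spread(pos_rate),
            "F_MEO": max(spread(tpr), spread(fpr))}
```

$F_{EO}$ and $F_{OAE}$ follow the same pattern, aggregating the per-group TPR/FPR gaps and per-group accuracy gaps, respectively; passing intersectional labels (e.g., F-L, M-D) as `groups` yields the intersectional variants.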
| Attribute | Metric | Xception [82] | EfficientB4 [83] | ViT-B/16 [84] | F3Net [85] | SPSL [86] | SRM [87] | UCF [26] | UnivFD [88] | CORE [89] | DAW-FDD [29] | DAG-FDD [29] | PG-FDD [30] |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Skin Tone | $F_{MEO}$ ↓ | 8.836 | 8.300 | 6.264 | 19.938 | 8.055 | 10.002 | 17.325 | **2.577** | 10.779 | 14.118 | 6.551 | 6.465 |
| Skin Tone | $F_{DP}$ ↓ | 9.751 | **6.184** | 7.728 | 12.876 | 9.379 | 10.897 | 12.581 | 8.556 | 10.317 | 10.706 | 8.617 | 9.746 |
| Skin Tone | $F_{OAE}$ ↓ | 1.271 | 4.377 | 2.168 | 2.818 | 1.135 | 0.915 | 1.883 | 2.748 | 1.332 | 1.667 | 1.388 | **0.882** |
| Skin Tone | $F_{EO}$ ↓ | 12.132 | 11.062 | 8.813 | 23.708 | 9.789 | 14.239 | 21.92 | **5.536** | 13.069 | 16.604 | 7.383 | 9.115 |
| Gender | $F_{MEO}$ ↓ | 3.975 | 5.385 | 5.104 | 4.717 | 4.411 | 6.271 | 5.074 | 4.503 | 5.795 | 5.510 | 5.910 | **3.190** |
| Gender | $F_{DP}$ ↓ | 1.691 | 1.725 | 1.344 | 1.864 | 1.827 | 1.957 | 1.736 | **1.190** | 2.154 | 2.015 | 2.151 | 1.252 |
| Gender | $F_{OAE}$ ↓ | **0.975** | 1.487 | 1.803 | 1.129 | 1.037 | 1.772 | 1.451 | 1.622 | 1.389 | 1.325 | 1.420 | 1.071 |
| Gender | $F_{EO}$ ↓ | 4.143 | 5.863 | 6.031 | 4.870 | 4.534 | 6.78 | 5.510 | 5.408 | 5.931 | 5.696 | 6.066 | **3.702** |
| Age | $F_{MEO}$ ↓ | 27.883 | 6.796 | 14.937 | 38.801 | 27.614 | 24.843 | 47.500 | **5.436** | 33.882 | 45.466 | 15.229 | 14.804 |
| Age | $F_{DP}$ ↓ | 10.905 | 11.849 | 11.839 | 14.906 | 11.232 | 11.570 | 17.049 | 15.249 | 12.564 | 14.106 | **9.633** | 10.467 |
| Age | $F_{OAE}$ ↓ | 7.265 | **2.856** | 6.838 | 10.116 | 7.270 | 6.524 | 11.652 | 3.793 | 8.760 | 11.878 | 5.533 | 5.009 |
| Age | $F_{EO}$ ↓ | 42.216 | **10.300** | 30.795 | 55.032 | 40.943 | 38.528 | 67.545 | 14.148 | 48.729 | 64.384 | 30.182 | 29.585 |
| Intersection | $F_{MEO}$ ↓ | 10.505 | 17.586 | 9.384 | 21.369 | 10.379 | 15.142 | 20.134 | **6.119** | 15.34 | 16.565 | 12.178 | 9.578 |
| Intersection | $F_{DP}$ ↓ | 14.511 | **8.607** | 11.535 | 17.175 | 13.259 | 15.186 | 17.03 | 14.026 | 14.301 | 14.088 | 11.705 | 14.697 |
| Intersection | $F_{OAE}$ ↓ | 2.536 | 8.461 | 4.928 | 4.870 | **2.464** | 3.998 | 3.536 | 6.287 | 2.775 | 3.547 | 4.035 | 3.062 |
| Intersection | $F_{EO}$ ↓ | 24.315 | 25.114 | 27.443 | 47.783 | 21.679 | 30.112 | 43.376 | 20.255 | 28.84 | 33.122 | 26.295 | **18.348** |
| Individual | $F_{IND}$ ↓ | 10.338 | 25.742 | **0.022** | 1.872 | 2.518 | 7.621 | 0.767 | 3.523 | 0.041 | 3.772 | 0.901 | 0.780 |
| Utility | AUC ↑ | 98.583 | 98.611 | 98.69 | 98.714 | 98.747 | 97.936 | 98.082 | 98.192 | 98.579 | 97.811 | 98.771 | **99.172** |
| Utility | ACC ↑ | 96.308 | 94.203 | 94.472 | 95.719 | **96.346** | 95.092 | 95.151 | 93.651 | 96.224 | 95.426 | 95.722 | 96.174 |
| Utility | AP ↑ | 99.350 | 99.542 | 99.571 | 99.453 | 99.356 | 99.172 | 99.273 | 99.400 | 99.360 | 99.015 | 99.498 | **99.694** |
| Utility | EER ↓ | 5.149 | 6.689 | 6.372 | 5.256 | **4.371** | 6.483 | 7.708 | 7.633 | 5.145 | 7.063 | 5.499 | 4.961 |
| Utility | FPR ↓ | 12.961 | 20.066 | 16.426 | 14.679 | 13.661 | 15.746 | 13.646 | 18.550 | 13.410 | 16.670 | 14.844 | **10.971** |
| - | Training Time / Epoch | 1h15min | 2h25min | 2h40min | 1h18min | 1h20min | 3h10min | 5h05min | 4h | 1h16min | 1h25min | 1h17min | 7h20min |

Table 4. Overall performance comparison of different methods on the AI-Face dataset. Fairness and utility values are percentages; ↓ means lower is better and ↑ means higher is better. The best performance is shown in bold.

# 5. Results and Analysis

In this section, we assess the existing AI-generated face detectors' fairness performance alongside their utility on our AI-Face dataset. More results can be found in Appendix D.

# 5.1. General Fairness Comparison

Overall Performance. Table 4 reports the overall performance on our AI-Face test set. Our observations are: 1) Fairness-enhanced models (specifically PG-FDD [30]) are the most effective in achieving both high fairness and utility, underscoring the effectiveness of specialized fairness-enhancement techniques in mitigating demographic biases. 2) UnivFD [88], based on the CLIP backbone [73], also achieves commendable fairness, suggesting that foundation models equipped with fairness-focused enhancements could be a promising direction for developing fairer detectors. 3) Naive detectors such as EfficientB4 [83], when trained on large, diverse datasets (e.g., our AI-Face), can achieve competitive fairness and utility, highlighting the potential for fairness improvements through architecture choice. 4) 10 out of 12 detectors achieve an AUC higher than 98%, demonstrating that our AI-Face dataset is valuable for training high-utility AI face detectors. 5) PG-FDD demonstrates superior performance but has a long training time, which can be explored and addressed in the future.

Performance on Different Subsets. 1) Fig. 4 shows the intersectional $F_{EO}$ and AUC performance of detectors on each test subset. We observe that fairness performance varies considerably across generative methods for every detector; the largest bias for most detectors comes from detecting face images generated by diffusion models. 2) DAG-FDD [29] and SRM [87] demonstrate the most consistent fairness across subsets, indicating robust handling of the bias introduced by different generative methods. 3) Moreover, the stable utility demonstrates our dataset's expansiveness and diversity, enabling effective training to detect AI-generated faces from various generative techniques.

Performance on Different Subgroups. We conduct an analysis of all detectors on intersectional subgroups. 1) As shown in Fig. 5, facial images with lighter skin tones are more often misclassified as fake, likely due to the underrepresentation of lighter tones (Tone 1-3) in our dataset (see Fig. 3 (b)). This suggests detectors tend to show higher error rates for minority groups. 2) Although gender representation is relatively balanced in our dataset (see Fig. 3 (c)), the detectors consistently exhibit higher false positive rates for female subgroups, indicating a persistent gender-based bias.

# 5.2. Fairness Reliability Assessment

Fairness Robustness Evaluation. We apply 6 post-processing methods to the test images: Random Crop (RC) [101], Rotation (RT) [34], Brightness Contrast (BC) [34], Hue Saturation Value (HSV) [34], Gaussian Blur (GB) [34], and JPEG Compression (JC) [102]. Fig. 6 shows each detector's intersectional $F_{EO}$ and AUC changes after post-processing. Our observations are: 1) These impairments tend to wash out forensic traces, so detectors show evident performance degradation. 2) Post-processing does not always make detectors more biased (e.g., UCF, UnivFD, CORE, and DAW-FDD have better fairness after rotation), even though it hurts utility.
3) Fairness-enhanced detectors struggle to maintain fairness when images undergo post-processing. 4) Spatial detectors have better fairness robustness than the other model types.

![](images/e46031def0f22af80823816c764efa8d609057504dfa7814e2516c6ac5abeecb.jpg)

![](images/6ec0f78a6d4f1e7cf989ca92809a4fd0af26df25346a5b2ad4dfc784590fa136.jpg)

![](images/01b5f57c01e0a85aab33413e5edc86dec9275e457a2fcc570db0106a9a8b77cd.jpg)

![](images/3ed381744d2a149f361ac5002bf73cdb9224e270715f0926951af5bd27b443d2.jpg)

![](images/f703227f7c0a1d3042de5635e5224485e29975e281ebb83c2f19c9c1928eb972.jpg)

![](images/b5cf2bdd2f22f0fe64e0b55ea896aff1bfb41e28fc01205ab907db05edd9c00d.jpg)

![](images/e33836a66a827e63ba49d5986faa44a281c5f61c13bdc81a1326c85ec9b10c12.jpg)
Figure 4. Visualization of the intersectional $F_{EO}$ (%) and AUC (%) of detectors on different subsets. A smaller $F_{EO}$ polygon area represents better fairness; a larger AUC area means better utility.

![](images/54aaaa160577b20bc7065760bc2188b1c0a14a90ef49b20360fb2d3397e8e5f7.jpg)

![](images/a7cd58856432107fdfe153cd0a7f772366496c4324f72f4b142fbf760ebb6672.jpg)

![](images/04dd2777922990ee178b7c87375c81b7f3c5b33437c35c7ebb0d8e9771a5386d.jpg)

![](images/0dda740b787a117192b48a263b4d80ea906be8d79747a6fd1edd3122ea595f52.jpg)

![](images/3fb87041182ba581a4144a41520a81745c8ffea2f4b79db682ca098f23a32c69.jpg)

![](images/be00424190d93f7bc86666e788fa87f2a877b5ed9b6f4a274c244514da0fd855.jpg)
Figure 5. FPR (%) of each intersectional subgroup. The dashed line marks the lowest FPR, on the Female-Light (F-L) subgroup.

![](images/c8b428974b8488c6589a108584966d8fd60bd1d37b3a3fd5055c70f86681c9f6.jpg)

![](images/7b66e46430a04c9303d75e59aa66d9b11e3b84337a57a3059062880851871ded.jpg)

![](images/07971c0affdc5b1324513f078b01c55b5fe787d10e186a57af172f5bf4f5000c.jpg)

![](images/0c507a0a9d0f1ddb836170ca5ed44a91ce6d0652e050b0f96195d9cbc1692452.jpg)

![](images/59f86cfc141be60c428d5a68cff6f5c5454d96715dbd8157a104501234ee22ca.jpg)

Fairness Generalization Evaluation. To evaluate detectors' fairness generalization capability, we test them on Casual Conversations v2 (CCv2) [103], DF-Platter [16], and GenData [17], none of which are part of AI-Face. Notably, CCv2 contains only real face images, with demographic annotations (e.g., gender) self-reported by the participants. Results on the gender attribute in Table 5 show that: 1) Even well-designed detectors that focus on improving utility or fairness generalization (e.g., UCF, PG-FDD) struggle to achieve consistently superior performance across different dataset domains, highlighting the remaining fairness generalization issue. 2) DAW-FDD and PG-FDD are two fairness-enhanced detectors that require access to demographic information during training, yet their fairness does not drop drastically when evaluated on CCv2, reflecting the high accuracy of the annotations in our AI-Face.

Effect of Training Set Size. We randomly sample 20%, 40%, 60%, and 80% of each training subset from AI-Face to assess the impact of training size on performance. Key observations from Fig. 7 (Left): 1) Among all detectors, UnivFD demonstrates the most stable fairness and utility performance as the training dataset size changes, likely due to its fixed CLIP backbone. 2) Increasing the training dataset size generally improves model utility, but this pattern does not extend to fairness metrics. In fact, certain detectors such as F3Net and UCF exhibit worsening fairness as the training size reaches its maximum, suggesting that more training data does not necessarily lead to fairer detectors.
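For reference, the six post-processing operations applied in the fairness robustness evaluation above correspond to standard augmentation transforms. A sketch assuming the albumentations library, with argument names from its classic API and parameter values chosen purely for illustration (not the benchmark's exact settings):

```python
import albumentations as A

# One transform per post-processing method in Sec. 5.2 (illustrative params).
post_processing = {
    "RC":  A.RandomCrop(height=224, width=224),
    "RT":  A.Rotate(limit=30),
    "BC":  A.RandomBrightnessContrast(),
    "HSV": A.HueSaturationValue(),
    "GB":  A.GaussianBlur(blur_limit=(3, 7)),
    "JC":  A.ImageCompression(quality_lower=30, quality_upper=70),
}

# Example: apply JPEG compression to one test image (H, W, 3 uint8 array).
# degraded = post_processing["JC"](image=img)["image"]
```

Re-scoring a detector on each degraded copy of the test set and comparing the resulting intersectional $F_{EO}$ and AUC against the clean baseline yields the performance ratios plotted in Fig. 6.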
Effect of the Ratio of Real and Fake. To examine how the real-to-fake training ratio affects detector performance, we set the ratio to 1:10, 1:1, and 10:1 while keeping the total sample count constant. Experimental results in Fig. 7 (Right) show: 1) Most detectors' fairness improves as the representation of real samples increases, probably because increasing real samples and reducing fake samples helps detectors avoid overfitting to artifacts specific to fake samples. This makes it easier for detectors to distinguish real from fake, even for underrepresented groups, thereby enhancing fairness. 2) Most detectors achieve the highest AUC with balanced data.

# 5.3. Discussion

Based on the above experiments, we summarize the unsolved fairness problems in recent detectors: 1) Detectors' fairness is unstable when detecting face images produced by different generative methods, indicating a future direction of enhancing fairness stability as new generative models continue to emerge. 2) Even though fairness-enhanced detectors exhibit small overall fairness metrics, they still show biased detection towards minority groups. Future studies should be more cautious when designing fair detectors to ensure balanced performance across all demographic groups. 3) There is currently no reliable detector, as all detectors experience severe performance degradation under image post-processing and cross-domain evaluation. Future studies should aim to develop a unified framework that ensures fairness, robustness, and generalization, as these three characteristics are essential for creating a reliable detector. Moreover, integrating foundation models (e.g., CLIP) into detector design may help mitigate bias.

![](images/62c71090c95651797a47b03964b61722d2a55d6802d1bfa822336b0986e770aa.jpg)
Figure 6. Performance ratio after vs. before post-processing. Points closer to 1.0 (i.e., no post-processing) indicate better robustness.

![](images/68008447f733340256d211e2ea028e169a18c9aa3abaec3961e24c8e2fd13c1d.jpg)

![](images/f7f61cca8e81436fef80425da46d9a5460d41d86dcba5847c49004dcf222480b.jpg)

![](images/578ff42d96fa595ace4f5a6a53b17142d06e0d0d598dc743a023d62a2e304a01.jpg)
| Model Type | Detector | CCv2 $F_{OAE}$ ↓ | CCv2 ACC ↑ | DF-Platter $F_{OAE}$ ↓ | DF-Platter $F_{EO}$ ↓ | DF-Platter AUC ↑ | GenData $F_{OAE}$ ↓ | GenData $F_{EO}$ ↓ | GenData AUC ↑ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Naive | Xception | 1.006 (+0.031) | 86.465 (-9.843) | 6.836 (+5.861) | 9.789 (+5.646) | 81.273 (-17.310) | 2.539 (+1.564) | 13.487 (+9.344) | 96.971 (-1.612) |
| Naive | EfficientB4 | 4.077 (+0.259) | 82.980 (-11.223) | 8.786 (+7.299) | 12.370 (+6.507) | 67.694 (-30.917) | 3.304 (+1.817) | 1.995 (-3.686) | 93.213 (-5.398) |
| Naive | ViT-B/16 | 2.167 (+0.364) | 81.489 (-12.983) | 0.015 (-1.788) | 12.373 (+6.342) | 76.050 (-22.640) | 3.164 (+1.361) | 9.610 (+3.579) | 88.253 (-10.437) |
| Frequency | F3Net | 5.743 (+4.614) | 87.867 (-7.852) | 3.521 (+2.392) | 6.445 (+1.575) | 85.112 (-13.602) | 1.188 (+0.059) | 16.306 (+11.436) | 91.603 (-7.111) |
| Frequency | SPSL | 0.601 (-0.436) | 80.006 (-16.340) | 5.109 (+4.072) | 7.842 (+3.308) | 82.175 (-16.572) | 1.385 (+0.348) | 9.261 (+4.272) | 98.838 (+0.091) |
| Frequency | SRM | 7.000 (+5.228) | 79.768 (-15.324) | 3.823 (+2.051) | 6.567 (-0.213) | 66.401 (-31.535) | 3.281 (+1.509) | 7.907 (+1.127) | 90.049 (-7.887) |
| Spatial | UCF | 2.169 (+0.718) | 93.009 (-2.142) | 8.687 (+7.236) | 17.068 (+11.558) | 80.821 (-17.261) | 3.513 (+2.062) | 10.529 (+5.019) | 87.778 (-10.304) |
| Spatial | UnivFD | 7.625 (+6.003) | 67.983 (-25.668) | 4.540 (+2.918) | 9.950 (+4.542) | 76.443 (-21.749) | 1.645 (+0.023) | 3.848 (-1.560) | 94.418 (-3.774) |
| Spatial | CORE | 4.410 (+3.021) | 83.328 (-12.896) | 7.741 (+6.352) | 17.348 (+11.417) | 77.226 (-21.353) | 3.759 (+2.370) | 23.289 (+17.358) | 98.408 (-0.171) |
| Fairness-enhanced | DAW-FDD | 4.726 (+3.401) | 84.685 (-10.741) | 5.536 (+4.211) | 13.667 (+7.791) | 81.807 (-16.004) | 1.443 (+0.118) | 10.228 (+4.532) | 97.854 (+0.043) |
| Fairness-enhanced | DAG-FDD | 2.364 (+0.944) | 83.918 (-11.804) | 3.064 (+1.644) | 22.203 (+16.137) | 75.206 (-23.565) | 0.714 (-0.706) | 10.332 (+4.266) | 92.108 (-6.663) |
| Fairness-enhanced | PG-FDD | 1.513 (+0.442) | 92.852 (-3.322) | 4.565 (+3.494) | 9.717 (+6.015) | 85.271 (-13.901) | 3.063 (+1.992) | 9.479 (+5.777) | 93.329 (-5.843) |

Table 5. Fairness generalization results on the gender attribute for CCv2 [103], DF-Platter [16], and GenData [17]; values in parentheses show the change relative to the in-domain AI-Face test set. Only the $F_{OAE}$ fairness metric and the ACC utility metric are reported on CCv2 because all of its samples are real.

![](images/084be84553a199b922a7ec664074fd2869b88a754601e70c2177a9a14abde391.jpg)
Figure 7. Impact of the training set size (Left) and the ratio of real and fake (Right) on detectors' intersectional $F_{EO}$ (%) and AUC (%).

# 6. Conclusion

This work presents the first demographically annotated million-scale AI-Face dataset, a pivotal foundation for addressing the urgent need to develop fair AI face detectors. Based on this dataset, we conduct the first comprehensive fairness benchmark, shedding light on the fairness performance and challenges of current representative AI face detectors. Our findings can inspire and guide researchers in refining current models and exploring new methods to mitigate bias. Limitation and Future Work: One limitation is that our dataset's annotations are algorithmically generated, so they may lack 100% accuracy. This challenge is difficult to resolve, as demographic attributes of most AI-generated faces are often too ambiguous to predict and do not map to real-world individuals. We plan to enhance annotation quality through human labeling in the future. We also plan to extend our fairness benchmark to evaluate large language models like LLaMA2 [104] and GPT4 [105] for detecting AI faces. Social Impact: Malicious users could misuse AI-generated face images from our dataset to create fake social media profiles and spread misinformation. To mitigate this risk, only users who submit a signed end-user license agreement will be granted access to our dataset.

# Ethics Statement

Our dataset collection and annotation generation are approved by Purdue's Institutional Review Board. The dataset is only for research purposes. All data included in this work are sourced from publicly available datasets, and we strictly comply with each dataset's license agreement to ensure lawful inclusion and permissible secondary use for training and testing. All collected data and their associated licenses are listed in the Datasheet of AI-Face in Appendix E. Our annotation processes prioritize ethical considerations: 1) 76% of the images we annotated are generated facial images, ensuring no potential for harm to any individual. 2) For real images, we only provide annotations for content either licensed by the original copyright holders or explicitly stated as freely shareable for research purposes.

# Acknowledgments

This work is supported by the U.S. National Science Foundation (NSF) under grant IIS-2434967 and the National Artificial Intelligence Research Resource (NAIRR) Pilot and TACC Lonestar6. The views, opinions and/or findings expressed are those of the authors and should not be interpreted as representing the official views or policies of NSF and the NAIRR Pilot.

# References

[1] L. Lin, N. Gupta, Y. Zhang, H. Ren, C.-H. Liu, F. Ding, X. Wang, X. Li, L. Verdoliva, and S. Hu, "Detecting multimedia generated by large ai models: A survey," arXiv preprint arXiv:2402.00045, 2024.
1 +[2] A. Rossler, D. Cozzolino, L. Verdoliva, C. Riess, J. Thies, and M. Nießner, “Faceforensics++: Learning to detect manipulated facial images,” in Proceedings of the IEEE/CVF international conference on computer vision, pp. 1-11, 2019, 1, 2, 3, 18, 19 +[3] “Deepfakes github.” https://github.com/deepfakes/faceswap. Accessed: 2024-04-17. 1,2 +[4] “Fakeapp.” https://www.fakeapp.com/. Accessed: 2024-04-17. 1, 2 +[5] A. Brock, J. Donahue, and K. Simonyan, "Large scale gan training for high fidelity natural image synthesis," in 7th International Conference on Learning Representations, ICLR 2019, 2019. 1 +[6] T. Karras, S. Laine, and T. Aila, “A style-based generator architecture for generative adversarial networks,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 4401–4410, 2019. 2, 3, 18 +[7] T. Karras, S. Laine, M. Aittala, J. Hellsten, J. Lehtinen, and T. Aila, "Analyzing and improving the image quality of stylegan," in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 8110-8119, 2020. +[8] T. Karras, M. Aittala, S. Laine, E. Härkönen, J. Hellsten, J. Lehtinen, and T. Aila, "Alias-free generative adversarial networks," Advances in neural information processing systems, vol. 34, pp. 852-863, 2021. 1, 2 +[9] R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer, "High-resolution image synthesis with latent diffusion models," in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 10684-10695, 2022. 1 +[10] D. J. Tojin T. Eapen, “How generative ai can augment human creativity.” https://hbr.org/2023/07/how-generative-ai-can-augment-human-creativity, 2023. Accessed: 2024-04-21. 1, 2 +[11] B. News, "Trump supporters target black voters with faked ai images." https://www.bbc.com/news/world-us-canada-68440150, 2024. Accessed: 2023-05-09. 1 +[12] H. S. Sætra, “Generative ai: Here to stay, but for good?,” Technology in Society, vol. 75, p. 102372, 2023. 1 + +[13] M. Westerlund, “The emergence of deepfake technology: A review,” Technology innovation management review, vol. 9, no. 11, 2019. 1, 2 +[14] L. Jiang, R. Li, W. Wu, C. Qian, and C. C. Loy, "Deeperforensics-1.0: A large-scale dataset for real-world face forgery detection," in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 2889-2898, 2020. 2 +[15] K. Narayan, H. Agarwal, K. Thakral, S. Mittal, M. Vatsa, and R. Singh, "Deeply: On deepfake phylogeny," in 2022 IEEE International Joint Conference on Biometrics (IJCB), pp. 1-10, IEEE, 2022. 2 +[16] K. Narayan, H. Agarwal, K. Thakral, S. Mittal, M. Vatsa, and R. Singh, "Df-platter: multi-face heterogeneous deepfake dataset," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9739-9748, 2023. 2, 7, 8 +[17] C. Teo, M. Abdollahzadeh, and N.-M. M. Cheung, “On measuring fairness in generative models,” Advances in Neural Information Processing Systems, vol. 36, 2023. 1, 2, 7, 8 +[18] Z. Liu, P. Luo, X. Wang, and X. Tang, "Deep learning face attributes in the wild," in Proceedings of International Conference on Computer Vision (ICCV), December 2015. 2 +[19] Y. Xu, P. Terhöst, M. Pedersen, and K. Raja, "Analyzing fairness in deepfake detection with massively annotated databases," IEEE Transactions on Technology and Society, 2024. 1, 2, 14 +[20] R. Rothe, R. Timofte, and L. 
Van Gool, “Dex: Deep expectation of apparent age from a single image,” in Proceedings of the IEEE international conference on computer vision workshops, pp. 10–15, 2015. 2, 3, 4, 18 +[21] B. Dolhansky, J. Bitton, B. Pflaum, J. Lu, R. Howes, M. Wang, and C. C. Ferrer, "The deepfake detection challenge (dfdc) dataset," arXiv preprint arXiv:2006.07397, 2020. 2, 3, 18, 19 +[22] G. Research, "Contributing data to deepfake detection research," 2019. Accessed: 2024-04-12. 2, 3, 18, 19 +[23] Y. Li, X. Yang, P. Sun, H. Qi, and S. Lyu, "Celeb-df: A large-scale challenging dataset for deepfake forensics," in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 3207-3216, 2020. 2, 3, 18, 19 +[24] W. Pu, J. Hu, X. Wang, Y. Li, S. Hu, B. Zhu, R. Song, Q. Song, X. Wu, and S. Lyu, "Learning a deep dual-level network for robust deepfake detection," Pattern Recognition, vol. 130, p. 108832, 2022. 1, 2 +[25] H. Guo, S. Hu, X. Wang, M.-C. Chang, and S. Lyu, "Robust attentive deep neural network for detecting gan-generated faces," IEEE Access, vol. 10, pp. 32574-32583, 2022. +[26] Z. Yan, Y. Zhang, Y. Fan, and B. Wu, "Ucf: Uncovering common features for generalizable deepfake detection," in Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 22412-22423, 2023. 5, 6, 19, 21, 23, 24, 25, 26, 27 +[27] L. Papa, L. Faiella, L. Corvitto, L. Maiano, and I. Amerini, "On the use of stable diffusion for creating realistic faces: from generation to detection," in 2023 11th International + +Workshop on Biometrics and Forensics (IWBF), pp. 1-6, IEEE, 2023. 1, 2 +[28] L. Trinh and Y. Liu, “An examination of fairness of ai models for deepfake detection,” IJCAI, 2021. 1, 2, 3 +[29] Y. Ju, S. Hu, S. Jia, G. H. Chen, and S. Lyu, “Improving fairness in deepfake detection,” in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 4655–4665, 2024. 1, 2, 5, 6, 21, 23, 24, 25, 26, 27 +[30] L. Lin, X. He, Y. Ju, X. Wang, F. Ding, and S. Hu, “Preserving fairness generalization in deepfake detection,” CVPR, 2024. 1, 2, 5, 6, 19, 21, 23, 24, 25, 26, 27 +[31] Z. Yan, T. Yao, S. Chen, Y. Zhao, X. Fu, J. Zhu, D. Luo, L. Yuan, C. Wang, S. Ding, et al., "Df40: Toward next-generation deepfake detection," NeurIPS, 2024. 1, 2, 3 +[32] C. Li et al., “A continual deepfake detection benchmark: Dataset, methods, and essentials,” in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1339–1349, 2023. 2, 3 +[33] J. Deng, C. Lin, P. Hu, C. Shen, Q. Wang, Q. Li, and Q. Li, "Towards benchmarking and evaluating deepfake detection," IEEE Transactions on Dependable and Secure Computing, 2024. 3 +[34] Z. Yan, Y. Zhang, X. Yuan, S. Lyu, and B. Wu, "Deepfakebench: A comprehensive benchmark of deepfake detection," in NeurIPS, 2023. 3, 6 +[35] B. M. Le, J. Kim, S. Tariq, K. Moore, A. Abuadbba, and S. S. Woo, "Sok: Facial deepfake detectors," arXiv, 2024. 2, 3 +[36] C. Hazirbas, J. Bitton, B. Dolhansky, J. Pan, A. Gordo, and C. C. Ferrer, "Towards measuring fairness in ai: the casual conversations dataset," IEEE Transactions on Biometrics, Behavior, and Identity Science, vol. 4, no. 3, pp. 324-332, 2021. 2, 3 +[37] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, et al., "Learning transferable visual models from natural language supervision," in International conference on machine learning, pp. 8748-8763, PMLR, 2021. 2, 4 +[38] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. 
Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," Advances in neural information processing systems, vol. 27, 2014. 2 +[39] “Midjourney.” https://mid-journey.ai/. Accessed: 2024-04-17. 2 +[40] A. Ramesh et al., "Hierarchical text-conditional image generation with clip latents," arXiv, vol. 1, no. 2, p. 3, 2022. 2 +[41] D. O'Sullivan, "A high school student created a fake 2020 us candidate. twitter verified it." https://cnn.it/3HpHfzz, 2020. Accessed: 2024-04-21. 2 +[42] S. Bond, "That smiling linkedin profile face might be a computer-generated fake." https://www.npr.org/2022/03/27/1088140809/fake-linkedin-profiles, 2022. Accessed: 2024-04-21. 2 +[43] V. Albiero, K. Bowyer, K. Vangara, and M. King, "Does face recognition accuracy get better with age? deep face matchers + +say no," in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 261-269, 2020. 3 +[44] V. Albiero, K. Ks, K. Vangara, K. Zhang, M. C. King, and K. W. Bowyer, "Analysis of gender inequality in face recognition accuracy," in Proceedings of the IEEE/cvf winter conference on applications of computer vision workshops, pp. 81-89, 2020. +[45] C. M. Cook, J. J. Howard, Y. B. Sirotin, J. L. Tipton, and A. R. Vermury, "Demographic effects in facial recognition and their dependence on image acquisition: An evaluation of eleven commercial systems," IEEE Transactions on Biometrics, Behavior, and Identity Science, vol. 1, no. 1, pp. 32-41, 2019. 3, 14 +[46] K. Krishnapriya, V. Albiero, K. Vangara, M. C. King, and K. W. Bowyer, “Issues related to face recognition accuracy varying based on race and skin tone,” IEEE Transactions on Technology and Society, vol. 1, no. 1, pp. 8–20, 2020. 14 +[47] B. Porgali, V. Albiero, J. Ryda, C. C. Ferrer, and C. Hazirbas, "The casual conversations v2 dataset," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pp. 10-17, June 2023. 3 +[48] Google, “The Monk Skin Tone Scale,” 2024. [Accessed October 16, 2024]. 3, 14 +[49] United States Department of State — Bureau of Consular Affairs, “Selecting your gender marker - travel,” 2022. [Accessed October 16, 2024]. 3, 14 +[50] Australian Bureau of Statistics, "Standard for Sex, Gender, Variations of Sex Characteristics and Sexual Orientation Variables," 2024. [Accessed October 16, 2024]. 3, 14 +[51] J. J. Howard, Y. B. Sirotin, and A. R. Vemury, “The effect of broad and specific demographic homogeneity on the imposter distributions and false match rates in face recognition algorithm performance,” in 2019 IEEE 10th international conference on biometrics theory, applications and systems (btas), pp. 1–8, IEEE, 2019. 3, 14 +[52] I. D. Raji and J. Buolamwini, "Actionable auditing: Investigating the impact of publicly naming biased performance results of commercial ai products," in Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, pp. 429-435, 2019. 3, 14 +[53] United Nations, “Provisional Guidelines on Standard International Age Classifications,” 1982. [Accessed October 16, 2024]. 3, 14 +[54] Statistics Canada, "Age Categories, Life Cycle Groupings," 2017. [Accessed October 16, 2024]. 3, 14 +[55] O. Giudice, L. Guarnera, and S. Battiato, “Fighting deep-fakes by detecting gan dct anomalies,” Journal of Imaging, vol. 7, no. 8, p. 128, 2021. 3, 18 +[56] V. Asnani, X. Yin, T. Hassner, and X. 
Liu, "Reverse engineering of generative models: Inferring model hyperparameters from generated images," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023. 3, 18 +[57] D. Beniaguev, “Synthetic faces high quality (sfhq) dataset,” 2022. 3, 18 +[58] Z. Lu, D. Huang, L. Bai, J. Qu, C. Wu, X. Liu, and W. Ouyang, "Seeing is not always believing: Benchmarking + +human and model perception of ai-generated images," Advances in Neural Information Processing Systems, vol. 36, 2024. 3, 18 +[59] L. M. Dang, S. I. Hassan, S. Im, J. Lee, S. Lee, and H. Moon, "Deep learning based computer generated face identification using convolutional neural network," Applied Sciences, vol. 8, no. 12, p. 2610, 2018. 3, 18 +[60] P. Esser, R. Rombach, and B. Ommer, “Taming transformers for high-resolution image synthesis,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 12873–12883, 2021. 3, 18 +[61] Z. Wang, J. Bao, W. Zhou, W. Wang, H. Hu, H. Chen, and H. Li, "Dire for diffusion-generated image detection," arXiv preprint arXiv:2303.09295, 2023. 3, 18 +[62] M. Kim, F. Liu, A. Jain, and X. Liu, "Dface: Synthetic face generation with dual condition diffusion model," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12715-12725, 2023. 3, 18 +[63] R. Corvi, D. Cozzolino, G. Zingarini, G. Poggi, K. Nagano, and L. Verdoliva, "On the detection of synthetic images generated by diffusion models," in ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1-5, IEEE, 2023. 3, 18 +[64] M. Awsafur Rahman, B. Paul, N. Haque Sarker, Z. I. A. Hakim, and S. Anowarul Fattah, "Artifact: A large-scale dataset with artificial and factual images for generalizable and robust synthetic image detection," arXiv e-prints, pp. arXiv-2302, 2023. 3, 18 +[65] H. Song, S. Huang, Y. Dong, and W.-W. Tu, "Robustness and generalizability of deepfake detection: A study with diffusion models," arXiv preprint arXiv:2309.02218, 2023. 3, 18 +[66] J. Deng, J. Guo, E. Ververas, I. Kotsia, and S. Zafeiriou, "Retinaface: Single-shot multi-level face localisation in the wild," in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 5203-5212, 2020. 3, 31, 32 +[67] K. S. Krishnapriya, G. Pangelinan, M. C. King, and K. W. Bowyer, "Analysis of manual and automated skin tone assignments," in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) Workshops, pp. 429-438, January 2022. 3 +[68] W. Thong, P. Joniak, and A. Xiang, "Beyond skin tone: A multidimensional measure of apparent skin color," in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 4903-4913, October 2023. 3 +[69] C. Lugaresi, J. Tang, H. Nash, C. McClanahan, E. Uboweja, M. Hays, F. Zhang, C.-L. Chang, M. G. Yong, J. Lee, et al., "Mediapipe: A framework for building perception pipelines," arXiv preprint arXiv:1906.08172, 2019. 3 +[70] J. A. Hartigan and M. A. Wong, "Algorithm as 136: A k-means clustering algorithm," Journal of the royal statistical society. series c (applied statistics), vol. 28, no. 1, pp. 100-108, 1979. 3 +[71] Megvii Technology Limited, “Face++ Face Detection.” https://www(faceplusplus.com/face-detection/. Accessed: 2024-03. 3, 14, 18, 19 + +[72] InsightFace Project Contributors, "InsightFace: State-of-the-Art Face Analysis Toolbox." https://insightface.ai/. Accessed: 2024-03. 3, 14, 18, 19 +[73] G. Ilharco, M. Wortsman, R. Wightman, C. 
Gordon, N. Carlini, R. Taori, A. Dave, V. Shankar, H. Namkoong, J. Miller, H. Hajishirzi, A. Farhadi, and L. Schmidt, "OpenCLIP," https://github.com/mlfoundations/open_clip, 2021. 4, 6, 21 +[74] K. Cao, C. Wei, A. Gaidon, N. Arechiga, and T. Ma, "Learning imbalanced datasets with label-distribution-aware margin loss," Advances in Neural Information Processing Systems, vol. 32, 2019. 4 +[75] S. Agarwal, G. Krueger, J. Clark, A. Radford, J. W. Kim, and M. Brundage, "Evaluating CLIP: Towards characterization of broader capabilities and downstream implications," arXiv preprint arXiv:2108.02818, 2021. 4 +[76] M. M. Tanjim, K. K. Singh, K. Kafle, R. Sinha, and G. W. Cottrell, "Discovering and mitigating biases in CLIP-based image editing," in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 2984-2993, 2024. +[77] J. Wang and G. Kang, "Learn to rectify the bias of CLIP for unsupervised semantic segmentation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4102-4112, 2024. 4 +[78] G. R. Kini, O. Paraskevas, S. Oymak, and C. Thrampoulidis, "Label-imbalanced and group-sensitive classification under overparameterization," Advances in Neural Information Processing Systems, vol. 34, pp. 18970-18983, 2021. 4 +[79] G. Peyré and M. Cuturi, "Computational optimal transport: With applications to data science," Foundations and Trends® in Machine Learning, vol. 11, no. 5-6, pp. 355-607, 2019. 4 +[80] M. Cuturi, "Sinkhorn distances: Lightspeed computation of optimal transport," Advances in Neural Information Processing Systems, vol. 26, 2013. 4 +[81] P. Foret, A. Kleiner, H. Mobahi, and B. Neyshabur, "Sharpness-aware minimization for efficiently improving generalization," in International Conference on Learning Representations, 2020. 4 +[82] F. Chollet, "Xception: Deep learning with depthwise separable convolutions," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1251-1258, 2017. 5, 6, 19, 21, 23, 24, 25, 26, 27 +[83] M. Tan and Q. Le, "EfficientNet: Rethinking model scaling for convolutional neural networks," in International Conference on Machine Learning, pp. 6105-6114, PMLR, 2019. 5, 6, 19, 21, 23, 24, 25, 26, 27 +[84] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, et al., "An image is worth 16x16 words: Transformers for image recognition at scale," in 9th International Conference on Learning Representations, 2021. 5, 6, 20, 21, 23, 24, 25, 26, 27 +[85] Y. Qian, G. Yin, L. Sheng, Z. Chen, and J. Shao, "Thinking in frequency: Face forgery detection by mining frequency-aware clues," in European Conference on Computer Vision, pp. 86-103, Springer, 2020. 5, 6, 21, 23, 24, 25, 26, 27 + +[86] H. Liu, X. Li, W. Zhou, Y. Chen, Y. He, H. Xue, W. Zhang, and N. Yu, "Spatial-phase shallow learning: Rethinking face forgery detection in frequency domain," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 772-781, 2021. 5, 6, 21, 23, 24, 25, 26, 27 +[87] Y. Luo, Y. Zhang, J. Yan, and W. Liu, "Generalizing face forgery detection with high-frequency features," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 16317-16326, 2021. 5, 6, 19, 21, 23, 24, 25, 26, 27 +[88] U. Ojha, Y. Li, and Y. J. Lee, "Towards universal fake image detectors that generalize across generative models," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 24480-24489, 2023. 5, 6, 21, 23, 24, 25, 26, 27 +[89] Y. Ni, D. Meng, C. Yu, C. Quan, D. Ren, and Y. Zhao, "CORE: Consistent representation learning for face forgery detection," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12-21, 2022. 5, 6, 21, 23, 24, 25, 26, 27 +[90] X. Han, J. Chi, Y. Chen, Q. Wang, H. Zhao, N. Zou, and X. Hu, "FFB: A fair fairness benchmark for in-processing group fairness methods," in ICLR, 2024. 5 +[91] N. Mehrabi, F. Morstatter, N. Saxena, K. Lerman, and A. Galstyan, "A survey on bias and fairness in machine learning," ACM Computing Surveys (CSUR), vol. 54, no. 6, pp. 1-35, 2021. 5 +[92] J. Wang, X. E. Wang, and Y. Liu, "Understanding instance-level impact of fairness constraints," in International Conference on Machine Learning, pp. 23114-23130, PMLR, 2022. 5 +[93] H. Wang, L. He, R. Gao, and F. P. Calmon, "Aleatoric and epistemic discrimination in classification," in ICML, 2023. 5 +[94] C. Dwork, M. Hardt, T. Pitassi, O. Reingold, and R. Zemel, "Fairness through awareness," in Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, pp. 214-226, 2012. 5 +[95] Z. Yan, Y. Luo, S. Lyu, Q. Liu, and B. Wu, "Transcending forgery specificity with latent space augmentation for generalizable deepfake detection," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8984-8994, June 2024. 5 +[96] H. Ren, L. Lin, C.-H. Liu, X. Wang, and S. Hu, "Improving generalization for AI-synthesized voice detection," in AAAI, 2025. +[97] Z. Yan, Y. Zhao, S. Chen, M. Guo, X. Fu, T. Yao, S. Ding, and L. Yuan, "Generalizing deepfake video detection with plug-and-play: Video-level blending and spatiotemporal adapter tuning," in CVPR, 2025. +[98] J. Cheng, Z. Yan, Y. Zhang, L. Hao, J. Ai, Q. Zou, C. Li, and Z. Wang, "Stacking brick by brick: Aligned feature isolation for incremental face forgery detection," in CVPR, 2025. 5 +[99] "Monk Skin Tone Scale," Wikipedia, The Free Encyclopedia. https://en.wikipedia.org/wiki/Monk_Skin_Tone_Scale. 5 +[100] S. Hu and G. H. Chen, "Fairness in survival analysis with distributionally robust optimization," arXiv preprint, 2023. 5 + +[101] F. Cocchi, L. Baraldi, S. Poppi, M. Cornia, L. Baraldi, and R. Cucchiara, "Unveiling the impact of image transformations on deepfake detection: An experimental analysis," in International Conference on Image Analysis and Processing, pp. 345-356, Springer, 2023. 6 +[102] D. Cozzolino, G. Poggi, R. Corvi, M. Nießner, and L. Verdoliva, "Raising the bar of AI-generated image detection with CLIP," arXiv preprint arXiv:2312.00195, 2023. 6 +[103] B. Porgali, V. Albiero, J. Ryda, C. C. Ferrer, and C. Hazirbas, "The Casual Conversations v2 dataset," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10-17, 2023. 7, 8 +[104] H. Touvron, L. Martin, K. Stone, P. Albert, A. Almahairi, Y. Babaei, N. Bashlykov, S. Batra, P. Bhargava, S. Bhosale, et al., "Llama 2: Open foundation and fine-tuned chat models," arXiv preprint arXiv:2307.09288, 2023. 8 +[105] J. Achiam, S. Adler, S. Agarwal, L. Ahmad, I. Akkaya, F. L. Aleman, D. Almeida, J. Altenschmidt, S. Altman, S. Anadkat, et al., "GPT-4 technical report," arXiv preprint arXiv:2303.08774, 2023. 8 +[106] J. Buolamwini and T. Gebru, "Gender shades: Intersectional accuracy disparities in commercial gender classification," in Conference on Fairness, Accountability and Transparency, pp. 77-91, PMLR, 2018. 14 +[107] B. Lu, J.-C. Chen, C. D. Castillo, and R. Chellappa, "An experimental evaluation of covariates effects on unconstrained face verification," IEEE Transactions on Biometrics, Behavior, and Identity Science, vol. 1, no. 1, pp. 42-55, 2019. 14 +[108] Z. Khan and Y. Fu, "One label, one billion faces: Usage and consistency of racial categories in computer vision," in Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pp. 587-597, 2021. 14 +[109] S. Sachdeva, "Fitzpatrick skin typing: Applications in dermatology," Indian Journal of Dermatology, Venereology and Leprology, vol. 75, p. 93, 2009. 14 +[110] J. J. Howard, Y. B. Sirotin, J. L. Tipton, and A. R. Vemury, "Reliability and validity of image-based and self-reported skin phenotype metrics," IEEE Transactions on Biometrics, Behavior, and Identity Science, vol. 3, no. 4, pp. 550-560, 2021. 14 +[111] U. Okoji, S. Taylor, and J. Lipoff, "Equity in skin typing: Why it is time to replace the Fitzpatrick scale," British Journal of Dermatology, vol. 185, no. 1, pp. 198-199, 2021. 14 +[112] M. Groh, C. Harris, R. Daneshjou, O. Badri, and A. Koochek, "Towards transparency in dermatology image datasets with skin tone annotations by experts, crowds, and an algorithm," Proceedings of the ACM on Human-Computer Interaction, vol. 6, no. CSCW2, pp. 1-26, 2022. 14 +[113] R. Williamson and A. Menon, "Fairness risk measures," in International Conference on Machine Learning, pp. 6786-6797, PMLR, 2019. 21 +[114] D. Levy, Y. Carmon, J. C. Duchi, and A. Sidford, "Large-scale methods for distributionally robust optimization," Advances in Neural Information Processing Systems, vol. 33, pp. 8847-8860, 2020. 21 + +[115] R. T. Rockafellar and S. Uryasev, "Optimization of conditional value-at-risk," Journal of Risk, vol. 2, pp. 21-42, 2000. 21 +[116] T. Hashimoto, M. Srivastava, H. Namkoong, and P. Liang, "Fairness without demographics in repeated loss minimization," in International Conference on Machine Learning, pp. 1929-1938, PMLR, 2018. 21 +[117] J. C. Duchi and H. Namkoong, "Learning models with uniform performance via distributionally robust optimization," The Annals of Statistics, vol. 49, no. 3, pp. 1378-1406, 2021. 21 +[118] T. Gebru, J. Morgenstern, B. Vecchione, J. W. Vaughan, H. Wallach, H. Daumé III, and K. Crawford, "Datasheets for datasets," Communications of the ACM, vol. 64, no. 12, pp. 86-92, 2021.
31 \ No newline at end of file diff --git a/2025/AI-Face_ A Million-Scale Demographically Annotated AI-Generated Face Dataset and Fairness Benchmark/images.zip b/2025/AI-Face_ A Million-Scale Demographically Annotated AI-Generated Face Dataset and Fairness Benchmark/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..437e138140ef8ce64a12c7defb1e52f280feafde --- /dev/null +++ b/2025/AI-Face_ A Million-Scale Demographically Annotated AI-Generated Face Dataset and Fairness Benchmark/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:db7b50d64bd5bb7b99325c12090bd514a9108228b6f6ae94f2ee6cc56d9aae8f +size 1003978 diff --git a/2025/AI-Face_ A Million-Scale Demographically Annotated AI-Generated Face Dataset and Fairness Benchmark/layout.json b/2025/AI-Face_ A Million-Scale Demographically Annotated AI-Generated Face Dataset and Fairness Benchmark/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..79dfcbfb198224e2c1dff477ee4a65039f45ebdb --- /dev/null +++ b/2025/AI-Face_ A Million-Scale Demographically Annotated AI-Generated Face Dataset and Fairness Benchmark/layout.json @@ -0,0 +1,11180 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 56, + 103, + 555, + 138 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 103, + 555, + 138 + ], + "spans": [ + { + "bbox": [ + 56, + 103, + 555, + 138 + ], + "type": "text", + "content": "AI-Face: A Million-Scale Demographically Annotated AI-Generated Face Dataset and Fairness Benchmark" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 165, + 160, + 443, + 175 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 165, + 160, + 443, + 175 + ], + "spans": [ + { + "bbox": [ + 165, + 160, + 443, + 175 + ], + "type": "text", + "content": "Li Lin" + }, + { + "bbox": [ + 165, + 160, + 443, + 175 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 165, + 160, + 443, + 175 + ], + "type": "text", + "content": ", Santosh" + }, + { + "bbox": [ + 165, + 160, + 443, + 175 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 165, + 160, + 443, + 175 + ], + "type": "text", + "content": ", Mingyang Wu" + }, + { + "bbox": [ + 165, + 160, + 443, + 175 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 165, + 160, + 443, + 175 + ], + "type": "text", + "content": ", Xin Wang" + }, + { + "bbox": [ + 165, + 160, + 443, + 175 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 165, + 160, + 443, + 175 + ], + "type": "text", + "content": ", Shu Hu" + }, + { + "bbox": [ + 165, + 160, + 443, + 175 + ], + "type": "inline_equation", + "content": "^{1\\dagger}" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 83, + 175, + 529, + 189 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 83, + 175, + 529, + 189 + ], + "spans": [ + { + "bbox": [ + 83, + 175, + 529, + 189 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 83, + 175, + 529, + 189 + ], + "type": "text", + "content": "Purdue University, West Lafayette, USA {lin1785, santosh2, wu2415, hu968}@purdue.edu" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 85, + 190, + 525, + 203 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 85, + 190, + 525, + 203 + ], + "spans": [ + { + "bbox": [ + 85, + 190, + 525, + 203 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 85, + 190, + 525, + 203 + ], + "type": "text", 
+ "content": "University at Albany, State University of New York, New York, USA xwang56@albany.edu" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 151, + 231, + 200, + 243 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 151, + 231, + 200, + 243 + ], + "spans": [ + { + "bbox": [ + 151, + 231, + 200, + 243 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 255, + 297, + 506 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 255, + 297, + 506 + ], + "spans": [ + { + "bbox": [ + 55, + 255, + 297, + 506 + ], + "type": "text", + "content": "AI-generated faces have enriched human life, such as entertainment, education, and art. However, they also pose misuse risks. Therefore, detecting AI-generated faces becomes crucial, yet current detectors show biased performance across different demographic groups. Mitigating biases can be done by designing algorithmic fairness methods, which usually require demographically annotated face datasets for model training. However, no existing dataset encompasses both demographic attributes and diverse generative methods simultaneously, which hinders the development of fair detectors for AI-generated faces. In this work, we introduce the AI-Face dataset, the first million-scale demographically annotated AI-generated face image dataset, including real faces, faces from deepfake videos, and faces generated by Generative Adversarial Networks and Diffusion Models. Based on this dataset, we conduct the first comprehensive fairness benchmark to assess various AI face detectors and provide valuable insights and findings to promote the future fair design of AI face detectors. Our AI-Face dataset and benchmark code are publicly available at https://github.com/Purdue-M2/AI-Face-FairnessBench." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 56, + 523, + 135, + 536 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 523, + 135, + 536 + ], + "spans": [ + { + "bbox": [ + 56, + 523, + 135, + 536 + ], + "type": "text", + "content": "1. Introduction" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 55, + 540, + 297, + 696 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 540, + 297, + 696 + ], + "spans": [ + { + "bbox": [ + 55, + 540, + 297, + 696 + ], + "type": "text", + "content": "AI-generated faces are created using sophisticated AI technologies that are visually difficult to discern from real ones [1]. They can be summarized into three categories: deepfake videos [2] created by typically using Variational Autoencoders (VAEs) [3, 4], faces generated from Generative Adversarial Networks (GANs) [5-8], and Diffusion Models (DMs) [9]. These technologies have significantly advanced the realism and controllability of synthetic facial representations. Generated faces can enrich media and increase creativity [10]. However, they also carry significant risks of misuse. 
For example, during the 2024 United States presidential election, fake face images of Donald Trump surrounded by groups of black people smiling and laughing to" + } + ] + } + ], + "index": 9 + }, + { + "type": "image", + "bbox": [ + 325, + 229, + 541, + 384 + ], + "blocks": [ + { + "bbox": [ + 325, + 229, + 541, + 384 + ], + "lines": [ + { + "bbox": [ + 325, + 229, + 541, + 384 + ], + "spans": [ + { + "bbox": [ + 325, + 229, + 541, + 384 + ], + "type": "image", + "image_path": "b60e36a480d2233a4dde3cd316471aec9b9fb7003921d83df06178da82c10f17.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 313, + 388, + 555, + 431 + ], + "lines": [ + { + "bbox": [ + 313, + 388, + 555, + 431 + ], + "spans": [ + { + "bbox": [ + 313, + 388, + 555, + 431 + ], + "type": "text", + "content": "Figure 1. Comparison between AI-Face and other datasets in terms of demographic annotation, generation category, and the number of generation methods. 'DF', 'GAN', and 'DM' stand for Deepfake Videos, Generative Adversarial Networks, and Diffusion Models." + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_caption" + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 437, + 555, + 485 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 437, + 555, + 485 + ], + "spans": [ + { + "bbox": [ + 313, + 437, + 555, + 485 + ], + "type": "text", + "content": "encourage African Americans to vote Republican are spreading online [11]. This could distort public opinion and erode people's trust in media [12, 13], necessitating the detection of AI-generated faces for their ethical use." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 312, + 486, + 556, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 486, + 556, + 715 + ], + "spans": [ + { + "bbox": [ + 312, + 486, + 556, + 715 + ], + "type": "text", + "content": "However, one major issue existing in current AI face detectors [24-27] is biased detection (i.e., unfair detection performance among demographic groups [19, 28-30]). Mitigating biases can be done by designing algorithmic fairness methods, but they usually require demographically annotated face datasets for model training. For example, works like [29, 30] have made efforts to enhance fairness in the detection based on A-FF++ [19] and A-DFD [19]. However, both datasets are limited to containing only faces from deepfake videos, which could cause the trained models not to be applicable for fairly detecting faces generated by GANs and DMs. While some datasets (e.g., GenData [17], DF40 [31]) include GAN and DM faces, they either lack demographic annotations or provide only limited demographic attributes. Most importantly, no existing dataset offers sufficient diversity in generation methods while also providing demographic labels. A comparison of existing datasets is shown in Fig. 1. These limitations of existing datasets hamper the development of fair technologies for detecting AI-generated faces." 
+ } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "spans": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "text", + "content": "CVF" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 145, + 0, + 496, + 35 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 145, + 0, + 496, + 35 + ], + "spans": [ + { + "bbox": [ + 145, + 0, + 496, + 35 + ], + "type": "text", + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 70, + 702, + 147, + 712 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 70, + 702, + 147, + 712 + ], + "spans": [ + { + "bbox": [ + 70, + 702, + 147, + 712 + ], + "type": "text", + "content": "†Corresponding author" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "text", + "content": "3503" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 82, + 70, + 526, + 183 + ], + "blocks": [ + { + "bbox": [ + 82, + 70, + 526, + 183 + ], + "lines": [ + { + "bbox": [ + 82, + 70, + 526, + 183 + ], + "spans": [ + { + "bbox": [ + 82, + 70, + 526, + 183 + ], + "type": "table", + "html": "
Dataset | Year | Face Images | Generation Category | #Generation Methods | Source of Real Images | Demographic Annotation
#Real | #Fake | Deepfake Videos | GAN | DM | Skin Tone | Gender | Age
DF-1.0 [14] | 2020 | 2.9M | 14.7M | 1 | Self-Recording
DeePhy [15] | 2022 | 1K | 50.4K | 3 | YouTube
DF-Platter [16] | 2023 | 392.3K | 653.4K | 3 | YouTube
GenData [17] | 2023 | - | 20K | 3 | CelebA [18]
A-FF++ [19] | 2024 | 29.8K | 149.1K | 5 | YouTube
A-DFD [19] | 2024 | 10.8K | 89.6K | 5 | Self-Recording
A-DFDC [19] | 2024 | 54.5K | 52.6K | 8 | Self-Recording
A-Celeb-DF-v2 [19] | 2024 | 26.3K | 166.5K | 1 | Self-Recording
A-DF-1.0 [19] | 2024 | 870.3K | 321.5K | 1 | Self-Recording
AI-Face | 2025 | 400K | 1.2M | 37 | FFHQ [6], IMDB-WIKI [20], real from FF++ [2], DFDC [21], DFD [22], Celeb-DF-v2 [23]
", + "image_path": "e7814b570fbb09fb05be359e226cd83fc885db7cd71cc6212792e3312cdc5733.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "bbox": [ + 96, + 186, + 511, + 198 + ], + "lines": [ + { + "bbox": [ + 96, + 186, + 511, + 198 + ], + "spans": [ + { + "bbox": [ + 96, + 186, + 511, + 198 + ], + "type": "text", + "content": "Table 1. Quantitative comparison of existing datasets with ours on demographically annotated AI-generated faces." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 54, + 201, + 294, + 369 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 201, + 294, + 369 + ], + "spans": [ + { + "bbox": [ + 54, + 201, + 294, + 369 + ], + "type": "text", + "content": "Moreover, benchmarking fairness provides a direct method to uncover prevalent and unique fairness issues in recent AI-generated face detection. However, there is a lack of a comprehensive benchmark to estimate the fairness of existing AI face detectors. Existing benchmarks [32-35] primarily assess utility, neglecting systematic fairness evaluation. Two studies [28, 36] do evaluate fairness in detection models, but their examination is based on a few outdated detectors. Furthermore, detectors' fairness reliability (e.g., robustness with test set post-processing, fairness generalization) has not been assessed. The absence of a comprehensive fairness benchmark impedes a thorough understanding of the fairness behaviors of recent AI face detectors and obscures the research path for detector fairness guarantees." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 369, + 295, + 548 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 369, + 295, + 548 + ], + "spans": [ + { + "bbox": [ + 55, + 369, + 295, + 548 + ], + "type": "text", + "content": "In this work, we build the first million-scale demographically annotated AI-generated face image dataset: AI-Face. The face images are collected from various public datasets, including the real faces that are usually used to train AI face generators, faces from deepfake videos, and faces generated by GANs and DMs. Each face is demographically annotated by our designed measurement method and Contrastive Language-Image Pretraining (CLIP) [37]-based lightweight annotator. Next, we conduct the first comprehensive fairness benchmark on our dataset to estimate the fairness performance of 12 representative detectors coming from four model types. Our benchmark exposes common and unique fairness challenges in recent AI face detectors, providing essential insights that can guide and enhance the future design of fair AI face detectors. Our contributions are as follows:" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 550, + 295, + 657 + ], + "type": "list", + "angle": 0, + "index": 7, + "blocks": [ + { + "bbox": [ + 67, + 550, + 295, + 585 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 550, + 295, + 585 + ], + "spans": [ + { + "bbox": [ + 67, + 550, + 295, + 585 + ], + "type": "text", + "content": "- We build the first million-scale demographically annotated AI-generated face dataset by leveraging our designed measure and developed lightweight annotator." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 586, + 295, + 620 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 586, + 295, + 620 + ], + "spans": [ + { + "bbox": [ + 67, + 586, + 295, + 620 + ], + "type": "text", + "content": "- We conduct the first comprehensive fairness benchmark of AI-generated face detectors, providing an extensive fairness assessment of current representative detectors." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 622, + 295, + 657 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 622, + 295, + 657 + ], + "spans": [ + { + "bbox": [ + 67, + 622, + 295, + 657 + ], + "type": "text", + "content": "- Based on our experiments, we summarize the unsolved questions and offer valuable insights within this research field, setting the stage for future investigations." + } + ] + } + ], + "index": 6 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 55, + 671, + 216, + 685 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 671, + 216, + 685 + ], + "spans": [ + { + "bbox": [ + 55, + 671, + 216, + 685 + ], + "type": "text", + "content": "2. Background and Motivation" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 55, + 689, + 295, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 689, + 295, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 689, + 295, + 713 + ], + "type": "text", + "content": "AI-generated Faces and Biased Detection. AI-generated face images, created by advanced AI technologies, are vi" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 201, + 555, + 477 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 201, + 555, + 477 + ], + "spans": [ + { + "bbox": [ + 313, + 201, + 555, + 477 + ], + "type": "text", + "content": "sually difficult to discern from real ones. They can be summarized into three categories: 1) Deepfake Videos. Initiated in 2017 [13], these use face-swapping and face-reenactment techniques with a variational autoencoder to replace a face in a target video with one from a source [3, 4]. Note that our paper focuses solely on images extracted from videos. 2) GAN-generated Faces. Post-2017, Generative Adversarial Networks (GANs) [38] like StyleGANs [6-8] have significantly improved generated face realism. 3) DM-generated Faces. Diffusion models (DMs), emerging in 2021, generate detailed faces from textual descriptions and offer greater controllability. Tools like Midjourney [39] and DALLE2 [40] facilitate customized face generation. While these AI-generated faces can enhance visual media and creativity [10], they also pose risks, such as being misused in social media profiles [41, 42]. Therefore, numerous studies focus on detecting AI-generated faces [24-27], but current detectors often show performance disparities among demographic groups [19, 28-30]. This bias can lead to unfair targeting or exclusion, undermining trust in detection models. Recent efforts [29, 30] aim to enhance fairness in deepfake detection but mainly address deepfake videos, overlooking biases in detecting GAN- and DM-generated faces." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 481, + 556, + 613 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 481, + 556, + 613 + ], + "spans": [ + { + "bbox": [ + 313, + 481, + 556, + 613 + ], + "type": "text", + "content": "The Existing Datasets. 
Current AI-generated facial datasets with demographic annotations are limited in size, generation categories, methods, and annotations, as illustrated in Table 1. For instance, A-FF++, A-DFD, A-DFDC, and A-Celeb-DF-v2 [19] are deepfake video datasets with fewer than one million images. Datasets like DF-1.0 [14] and DF-Platter [16] lack various demographic annotations. Additionally, existing datasets offer limited generation methods. These limitations hinder the development of fair AI face detectors, motivating us to build a million-scale demographically annotated AI-Face dataset." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 617, + 556, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 617, + 556, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 617, + 556, + 715 + ], + "type": "text", + "content": "Benchmark for Detecting AI-generated Faces. Benchmarks are essential for evaluating AI-generated face detectors under standardized conditions. Existing benchmarks, as shown in Table 2, mainly focus on detectors' utility, often overlooking fairness [31-35]. Loc et al. [28] and CCv1 [36] examined detector fairness. However, their study did not have an analysis on DM-generated faces and only measured bias between groups in basic scenarios without considering" + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "text", + "content": "3504" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 58, + 70, + 295, + 156 + ], + "blocks": [ + { + "bbox": [ + 58, + 70, + 295, + 156 + ], + "lines": [ + { + "bbox": [ + 58, + 70, + 295, + 156 + ], + "spans": [ + { + "bbox": [ + 58, + 70, + 295, + 156 + ], + "type": "table", + "html": "
Existing Benchmarks | Year | Category | Scope of Benchmark
Deepfake Videos | GAN | DM | Utility | Fairness General | Fairness Reliability
Loc et al. [28] | 2021
CCv1 [36] | 2021
DeepfakeBench [34] | 2023
CDDB [32] | 2023
Lin et al. [33] | 2024
Le et al. [35] | 2024
DF40 [31] | 2024
Ours | 2025
", + "image_path": "a9414d9a48c0ad63987ca40394ecd53fb009bc3821b028feca68a85dca62cc55.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 160, + 296, + 205 + ], + "lines": [ + { + "bbox": [ + 55, + 160, + 296, + 205 + ], + "spans": [ + { + "bbox": [ + 55, + 160, + 296, + 205 + ], + "type": "text", + "content": "Table 2. Comparison of existing AI-generated face detection benchmarks and ours. Fairness 'General' means fairness evaluation under default/basic settings. Fairness 'Reliability' measures fairness consistency across dynamic scenarios (e.g., post-processing)." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 55, + 209, + 296, + 245 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 209, + 296, + 245 + ], + "spans": [ + { + "bbox": [ + 55, + 209, + 296, + 245 + ], + "type": "text", + "content": "fairness reliability under real-world variations and transformations. This motivates us to conduct a comprehensive benchmark to evaluate AI face detectors' fairness." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 250, + 296, + 417 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 250, + 296, + 417 + ], + "spans": [ + { + "bbox": [ + 55, + 250, + 296, + 417 + ], + "type": "text", + "content": "The Definition of Demographic Categories. Demography-related labels are highly salient to measuring bias. Following prior works [36, 43-47], we will focus on three key demographic categories: Skin Tone, Gender, and Age, in this work. For skin tone, this vital attribute spans a range from pale to dark. We use the Monk Skin Tone scale [48], specifically designed for computer vision applications. For gender, we adopt binary categories (i.e., Male and Female), following practices by many governments [49, 50] and facial recognition research [45, 51, 52], based on sex at birth. For age, using definitions from the United Nations [53] and Statistics Canada [54], we define five age groups: Child (0-14), Youth (15-24), Adult (25-44), Middle-age Adult (45-64), and Senior " + }, + { + "bbox": [ + 55, + 250, + 296, + 417 + ], + "type": "inline_equation", + "content": "(65+)" + }, + { + "bbox": [ + 55, + 250, + 296, + 417 + ], + "type": "text", + "content": ". More discussion is in Appendix A." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 430, + 153, + 441 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 430, + 153, + 441 + ], + "spans": [ + { + "bbox": [ + 55, + 430, + 153, + 441 + ], + "type": "text", + "content": "3.AI-Face Dataset" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 447, + 296, + 483 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 447, + 296, + 483 + ], + "spans": [ + { + "bbox": [ + 55, + 447, + 296, + 483 + ], + "type": "text", + "content": "This section outlines the process of building our demographically annotated AI-Face dataset (see Fig. 2), along with its statistics and annotation quality assessment." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 492, + 151, + 503 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 492, + 151, + 503 + ], + "spans": [ + { + "bbox": [ + 55, + 492, + 151, + 503 + ], + "type": "text", + "content": "3.1. 
Data Collection" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 510, + 295, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 510, + 295, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 510, + 295, + 713 + ], + "type": "text", + "content": "We build our AI-Face dataset by collecting and integrating public real and AI-generated face images sourced from academic publications, GitHub repositories, and commercial tools. We strictly adhere to the license agreements of all datasets to ensure that they allow inclusion in our datasets and secondary use for training and testing. More details are in Appendix B.1. Specifically, the fake face images in our dataset originate from 4 Deepfake Video datasets (i.e., " + }, + { + "bbox": [ + 55, + 510, + 295, + 713 + ], + "type": "inline_equation", + "content": "\\mathrm{FF} + +" + }, + { + "bbox": [ + 55, + 510, + 295, + 713 + ], + "type": "text", + "content": " [2], DFDC [21], DFD [22], and Celeb-DFv2 [23]), generated by 10 GAN models (i.e., AttGAN [55], MMDGAN [56], StarGAN [55], StyleGANs [55, 57, 58], MSGGAN [56], ProGAN [59], STGAN [56], and VQGAN [60]), and 8 DM models (i.e., DALLE2 [61], IF [61], Midjourney [61], DCFace [62], Latent Diffusion [63], Palette [64], Stable Diffusion v1.5 [65], Stable Diffusion Inpainting [65]). This constitutes a total of 1,245,660 fake face images in our dataset. We include 6 real source datasets" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 72, + 555, + 180 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 72, + 555, + 180 + ], + "spans": [ + { + "bbox": [ + 313, + 72, + 555, + 180 + ], + "type": "text", + "content": "(i.e., FFHQ [6], IMDB-WIKI [20], and real images from " + }, + { + "bbox": [ + 313, + 72, + 555, + 180 + ], + "type": "inline_equation", + "content": "\\mathrm{FF}++" + }, + { + "bbox": [ + 313, + 72, + 555, + 180 + ], + "type": "text", + "content": " [2], DFDC [21], DFD [22], and Celeb-DF-v2 [23]). All of them are usually used as a training set for generative models to generate fake face images. This constitutes a total of 400,885 real face images in our dataset. In general, our dataset contains 28 subsets and 37 generation methods (i.e., 5 in " + }, + { + "bbox": [ + 313, + 72, + 555, + 180 + ], + "type": "inline_equation", + "content": "\\mathrm{FF}++" + }, + { + "bbox": [ + 313, + 72, + 555, + 180 + ], + "type": "text", + "content": ", 5 in DFD, 8 in DFDC, 1 in Celeb-DF-v2, 10 GANs, and 8 DMs). For all images, we use RetinaFace [66] for detecting and cropping faces." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 188, + 446, + 200 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 188, + 446, + 200 + ], + "spans": [ + { + "bbox": [ + 313, + 188, + 446, + 200 + ], + "type": "text", + "content": "3.2. Annotation Generation" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 206, + 486, + 217 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 206, + 486, + 217 + ], + "spans": [ + { + "bbox": [ + 313, + 206, + 486, + 217 + ], + "type": "text", + "content": "3.2.1. 
Skin Tone Annotation Generation" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 221, + 555, + 413 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 221, + 555, + 413 + ], + "spans": [ + { + "bbox": [ + 313, + 221, + 555, + 413 + ], + "type": "text", + "content": "Skin tone is typically measured using an intuitive approach [67, 68], without requiring a predictive model. Inspired by [67], we developed a method to estimate skin tone using the Monk Skin Tone (MST) Scale [48] (including 10-shade scales: Tone 1 to 10) by combining facial landmark detection with color analysis. Specifically, utilizing Mediapipe's FaceMesh [69] for precise facial landmark localization, we isolate skin regions while excluding non-skin areas such as the eyes and mouth. Based on the detected landmarks, we generate a mask to extract skin pixels from the facial area. These pixels are then subjected to K-Means clustering [70] (we use " + }, + { + "bbox": [ + 313, + 221, + 555, + 413 + ], + "type": "inline_equation", + "content": "\\mathrm{K} = 3" + }, + { + "bbox": [ + 313, + 221, + 555, + 413 + ], + "type": "text", + "content": " in practice) to identify the dominant skin color within the region of interest. The top-1 largest color cluster is mapped to the closest tone in the MST Scale by calculating the Euclidean distance between the cluster centroid and the MST reference colors in RGB space." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 420, + 514, + 431 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 420, + 514, + 431 + ], + "spans": [ + { + "bbox": [ + 313, + 420, + 514, + 431 + ], + "type": "text", + "content": "3.2.2. Gender and Age Annotation Generation" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 434, + 556, + 590 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 434, + 556, + 590 + ], + "spans": [ + { + "bbox": [ + 313, + 434, + 556, + 590 + ], + "type": "text", + "content": "For generating gender and age annotations, the existing online software (e.g., Face++ [71]) and open-source tools (e.g., InsightFace [72]) can be used for the prediction. However, they fall short in our task due to two reasons: 1) They are mostly designed for face recognition and trained on datasets of real face images but lack generalization capability for annotating AI-generated face images. 2) Their use may introduce bias into our dataset, as they are typically designed and trained without careful consideration of bias and imbalance in the training set. See Appendix B.3 for our experimental study on these tools. To this end, we have to develop our specific annotators to predict gender and age annotations for each image in our dataset." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 594, + 556, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 594, + 556, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 594, + 556, + 713 + ], + "type": "text", + "content": "Problem Definition. 
Given a training dataset " + }, + { + "bbox": [ + 313, + 594, + 556, + 713 + ], + "type": "inline_equation", + "content": "\\mathbb{D} = \\{(X_i, A_i)\\}_{i=1}^n" + }, + { + "bbox": [ + 313, + 594, + 556, + 713 + ], + "type": "text", + "content": " with size " + }, + { + "bbox": [ + 313, + 594, + 556, + 713 + ], + "type": "inline_equation", + "content": "n" + }, + { + "bbox": [ + 313, + 594, + 556, + 713 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 313, + 594, + 556, + 713 + ], + "type": "inline_equation", + "content": "X_i" + }, + { + "bbox": [ + 313, + 594, + 556, + 713 + ], + "type": "text", + "content": " represents the " + }, + { + "bbox": [ + 313, + 594, + 556, + 713 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 313, + 594, + 556, + 713 + ], + "type": "text", + "content": "-th face image and " + }, + { + "bbox": [ + 313, + 594, + 556, + 713 + ], + "type": "inline_equation", + "content": "A_i" + }, + { + "bbox": [ + 313, + 594, + 556, + 713 + ], + "type": "text", + "content": " signifies a demographic attribute associated with " + }, + { + "bbox": [ + 313, + 594, + 556, + 713 + ], + "type": "inline_equation", + "content": "X_i" + }, + { + "bbox": [ + 313, + 594, + 556, + 713 + ], + "type": "text", + "content": ". Here, " + }, + { + "bbox": [ + 313, + 594, + 556, + 713 + ], + "type": "inline_equation", + "content": "A_i \\in \\mathcal{A}" + }, + { + "bbox": [ + 313, + 594, + 556, + 713 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 313, + 594, + 556, + 713 + ], + "type": "inline_equation", + "content": "\\mathcal{A}" + }, + { + "bbox": [ + 313, + 594, + 556, + 713 + ], + "type": "text", + "content": " represents user-defined groups (e.g., for gender, " + }, + { + "bbox": [ + 313, + 594, + 556, + 713 + ], + "type": "inline_equation", + "content": "\\mathcal{A} = \\{\\text{Female, Male}\\}" + }, + { + "bbox": [ + 313, + 594, + 556, + 713 + ], + "type": "text", + "content": ". For age, " + }, + { + "bbox": [ + 313, + 594, + 556, + 713 + ], + "type": "inline_equation", + "content": "\\mathcal{A} = \\{\\text{Child, Youth, Adult, Middle-age Adult, Senior}\\}" + }, + { + "bbox": [ + 313, + 594, + 556, + 713 + ], + "type": "text", + "content": "). Our goal is to design a lightweight, generalizable annotator based on " + }, + { + "bbox": [ + 313, + 594, + 556, + 713 + ], + "type": "inline_equation", + "content": "\\mathbb{D}" + }, + { + "bbox": [ + 313, + 594, + 556, + 713 + ], + "type": "text", + "content": " that reduces bias while predicting facial demographic attributes for each image in our dataset. 
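To make the skin-tone pipeline of Sec. 3.2.1 concrete, here is a minimal sketch of its clustering and MST-matching steps. It assumes the skin pixels have already been isolated with a FaceMesh landmark mask, and the MST anchor colors are approximate sRGB values included only for illustration; verify them against the published scale before use.

```python
import numpy as np
from sklearn.cluster import KMeans

# Approximate sRGB anchors for MST Tones 1-10 (illustrative placeholders,
# not the authoritative published values).
MST_RGB = np.array([
    [246, 237, 228], [243, 231, 219], [247, 234, 208], [234, 218, 186],
    [215, 189, 150], [160, 126, 86], [130, 92, 67], [96, 65, 52],
    [58, 49, 42], [41, 36, 32]], dtype=float)

def estimate_monk_tone(skin_pixels: np.ndarray, k: int = 3) -> int:
    """skin_pixels: (N, 3) RGB values already masked to skin regions
    (e.g., via Mediapipe FaceMesh landmarks). Returns a tone in 1..10."""
    km = KMeans(n_clusters=k, n_init=10).fit(skin_pixels)  # K = 3 as above
    # Dominant skin color: centroid of the largest cluster.
    dominant = km.cluster_centers_[np.bincount(km.labels_).argmax()]
    # Map the top-1 centroid to the closest MST tone in RGB space.
    return int(np.linalg.norm(MST_RGB - dominant, axis=1).argmin()) + 1
```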
In practice, we use IMDB-WIKI [20] as training dataset, which contains" + } + ] + } + ], + "index": 14 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "text", + "content": "3505" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 81, + 69, + 529, + 266 + ], + "blocks": [ + { + "bbox": [ + 81, + 69, + 529, + 266 + ], + "lines": [ + { + "bbox": [ + 81, + 69, + 529, + 266 + ], + "spans": [ + { + "bbox": [ + 81, + 69, + 529, + 266 + ], + "type": "image", + "image_path": "73cdb83d2fd072d055ecc2be288d34736d78933d58e073c80c3646a5aedbf2b0.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 54, + 269, + 555, + 314 + ], + "lines": [ + { + "bbox": [ + 54, + 269, + 555, + 314 + ], + "spans": [ + { + "bbox": [ + 54, + 269, + 555, + 314 + ], + "type": "text", + "content": "Figure 2. Generation pipeline of our Demographically Annotated AI-Face Dataset. First, we collect and filter face images from Deepfake Videos, GAN-generated faces, and DM-generated faces found in public datasets. Second, we perform skin tone, gender, and age annotation generation. Skin tone is estimated by combining facial landmark detection with color analysis to generate the corresponding annotation. For gender and age, we develop annotators trained on the IMDB-WIKI dataset [20], then use them to predict attributes for each image." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 54, + 317, + 294, + 376 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 317, + 294, + 376 + ], + "spans": [ + { + "bbox": [ + 54, + 317, + 294, + 376 + ], + "type": "text", + "content": "images along with profile metadata sourced from IMDb and Wikipedia, ensuring that the demographic annotations are as accurate as possible. We trained two annotators with identical architecture and training procedures for gender and age annotations, respectively." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 54, + 380, + 295, + 464 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 380, + 295, + 464 + ], + "spans": [ + { + "bbox": [ + 54, + 380, + 295, + 464 + ], + "type": "text", + "content": "Annotator Architecture. We build a lightweight annotator based on the CLIP [37] foundation model by leveraging its strong zero-shot and few-shot learning capabilities. Specifically, our annotator employs a frozen pre-trained CLIP ViT L/14 [73] as a feature extractor " + }, + { + "bbox": [ + 54, + 380, + 295, + 464 + ], + "type": "inline_equation", + "content": "\\mathbf{E}" + }, + { + "bbox": [ + 54, + 380, + 295, + 464 + ], + "type": "text", + "content": " followed by a trainable classifier parameterized by " + }, + { + "bbox": [ + 54, + 380, + 295, + 464 + ], + "type": "inline_equation", + "content": "\\theta" + }, + { + "bbox": [ + 54, + 380, + 295, + 464 + ], + "type": "text", + "content": ", which contains 3-layer Multi-layer Perceptron (MLP) M and a classification head " + }, + { + "bbox": [ + 54, + 380, + 295, + 464 + ], + "type": "inline_equation", + "content": "h" + }, + { + "bbox": [ + 54, + 380, + 295, + 464 + ], + "type": "text", + "content": "." 
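A minimal sketch of this annotator follows, assuming the OpenCLIP [73] API for the frozen ViT-L/14 encoder; the hidden width and layer sizes are illustrative choices, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import open_clip  # OpenCLIP [73]

class DemographicAnnotator(nn.Module):
    def __init__(self, n_classes: int, hidden: int = 512):
        super().__init__()
        # Frozen feature extractor E: pre-trained CLIP ViT-L/14.
        self.E, _, self.preprocess = open_clip.create_model_and_transforms(
            "ViT-L-14", pretrained="openai")
        for p in self.E.parameters():
            p.requires_grad = False
        feat_dim = self.E.visual.output_dim  # 768 for ViT-L/14
        # Trainable classifier (parameters theta): 3-layer MLP M + head h.
        self.M = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        self.h = nn.Linear(hidden, n_classes)

    def forward(self, x: torch.Tensor):
        with torch.no_grad():
            feats = self.E.encode_image(x).float()  # E(X_i)
        f = self.M(feats)                           # f_i = M(E(X_i))
        return self.h(f), f  # logits for the loss, features for L_fair
```

One annotator instance is trained per attribute (gender, age), mirroring the two-annotator setup described above.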
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 467, + 295, + 551 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 467, + 295, + 551 + ], + "spans": [ + { + "bbox": [ + 55, + 467, + 295, + 551 + ], + "type": "text", + "content": "Learning Objective. Aware that neural networks can perform poorly when the training dataset suffers from class-imbalance [74] and CLIP is not free from demographic bias [75-77], we introduce an imbalance loss and fairness loss to address these challenges in the annotator training. Specifically, for image " + }, + { + "bbox": [ + 55, + 467, + 295, + 551 + ], + "type": "inline_equation", + "content": "X_{i}" + }, + { + "bbox": [ + 55, + 467, + 295, + 551 + ], + "type": "text", + "content": ", its feature " + }, + { + "bbox": [ + 55, + 467, + 295, + 551 + ], + "type": "inline_equation", + "content": "f_{i}" + }, + { + "bbox": [ + 55, + 467, + 295, + 551 + ], + "type": "text", + "content": " is obtained through " + }, + { + "bbox": [ + 55, + 467, + 295, + 551 + ], + "type": "inline_equation", + "content": "f_{i} = \\mathbf{M}(\\mathbf{E}(X_{i}))" + }, + { + "bbox": [ + 55, + 467, + 295, + 551 + ], + "type": "text", + "content": ". Next, two losses are detailed below." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 551, + 296, + 598 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 551, + 296, + 598 + ], + "spans": [ + { + "bbox": [ + 55, + 551, + 296, + 598 + ], + "type": "text", + "content": "Imbalance Loss: To mitigate the impact of imbalance data, we use Vector Scaling [78] loss, which is a re-weighting method for training models on the imbalanced data with distribution shifts and can be expressed as" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 78, + 605, + 272, + 635 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 78, + 605, + 272, + 635 + ], + "spans": [ + { + "bbox": [ + 78, + 605, + 272, + 635 + ], + "type": "interline_equation", + "content": "L _ {i m b} = \\frac {1}{n} \\sum_ {i = 1} ^ {n} - u _ {A _ {i}} \\log \\frac {e ^ {\\zeta_ {A _ {i}} h (f _ {i}) _ {A _ {i}} + \\Delta_ {A _ {i}}}}{\\sum_ {A \\in \\mathcal {A}} e ^ {\\zeta_ {A} h (f _ {i}) _ {A} + \\Delta_ {A}}},", + "image_path": "a9f657379a92b804b780a8befe0f8200300e674cb14235380758e69d825141e5.jpg" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 641, + 295, + 701 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 641, + 295, + 701 + ], + "spans": [ + { + "bbox": [ + 55, + 641, + 295, + 701 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 55, + 641, + 295, + 701 + ], + "type": "inline_equation", + "content": "u_{A_i}" + }, + { + "bbox": [ + 55, + 641, + 295, + 701 + ], + "type": "text", + "content": " is the weighting factor for attribute " + }, + { + "bbox": [ + 55, + 641, + 295, + 701 + ], + "type": "inline_equation", + "content": "A_i" + }, + { + "bbox": [ + 55, + 641, + 295, + 701 + ], + "type": "text", + "content": ". " + }, + { + "bbox": [ + 55, + 641, + 295, + 701 + ], + "type": "inline_equation", + "content": "h(f_i)_{A_i}" + }, + { + "bbox": [ + 55, + 641, + 295, + 701 + ], + "type": "text", + "content": " is the predict logit on " + }, + { + "bbox": [ + 55, + 641, + 295, + 701 + ], + "type": "inline_equation", + "content": "A_i" + }, + { + "bbox": [ + 55, + 641, + 295, + 701 + ], + "type": "text", + "content": ". 
" + }, + { + "bbox": [ + 55, + 641, + 295, + 701 + ], + "type": "inline_equation", + "content": "\\zeta_{A_i}" + }, + { + "bbox": [ + 55, + 641, + 295, + 701 + ], + "type": "text", + "content": " is the multiplicative logit scaling factor, calculated as the inverse of " + }, + { + "bbox": [ + 55, + 641, + 295, + 701 + ], + "type": "inline_equation", + "content": "A_i" + }, + { + "bbox": [ + 55, + 641, + 295, + 701 + ], + "type": "text", + "content": "'s frequency. " + }, + { + "bbox": [ + 55, + 641, + 295, + 701 + ], + "type": "inline_equation", + "content": "\\Delta_{A_i}" + }, + { + "bbox": [ + 55, + 641, + 295, + 701 + ], + "type": "text", + "content": " is the additive logit scaling factor, calculated as the log of " + }, + { + "bbox": [ + 55, + 641, + 295, + 701 + ], + "type": "inline_equation", + "content": "A_i" + }, + { + "bbox": [ + 55, + 641, + 295, + 701 + ], + "type": "text", + "content": " probabilities. More details about them are in appendix B.4." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 67, + 701, + 295, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 701, + 295, + 713 + ], + "spans": [ + { + "bbox": [ + 67, + 701, + 295, + 713 + ], + "type": "text", + "content": "Fairness Loss: We introduce a fairness loss to minimize" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 317, + 555, + 365 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 317, + 555, + 365 + ], + "spans": [ + { + "bbox": [ + 313, + 317, + 555, + 365 + ], + "type": "text", + "content": "the disparity between the distribution " + }, + { + "bbox": [ + 313, + 317, + 555, + 365 + ], + "type": "inline_equation", + "content": "\\mathcal{D}^f" + }, + { + "bbox": [ + 313, + 317, + 555, + 365 + ], + "type": "text", + "content": " of " + }, + { + "bbox": [ + 313, + 317, + 555, + 365 + ], + "type": "inline_equation", + "content": "f" + }, + { + "bbox": [ + 313, + 317, + 555, + 365 + ], + "type": "text", + "content": " and the conditional distribution " + }, + { + "bbox": [ + 313, + 317, + 555, + 365 + ], + "type": "inline_equation", + "content": "\\mathcal{D}^{f_A}" + }, + { + "bbox": [ + 313, + 317, + 555, + 365 + ], + "type": "text", + "content": " of " + }, + { + "bbox": [ + 313, + 317, + 555, + 365 + ], + "type": "inline_equation", + "content": "f" + }, + { + "bbox": [ + 313, + 317, + 555, + 365 + ], + "type": "text", + "content": " on attribute " + }, + { + "bbox": [ + 313, + 317, + 555, + 365 + ], + "type": "inline_equation", + "content": "A\\in \\mathcal{A}" + }, + { + "bbox": [ + 313, + 317, + 555, + 365 + ], + "type": "text", + "content": ". 
Specifically, we follow [79, 80] to minimize the summation of the following Sinkhorn distance between these two distributions:" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 314, + 369, + 555, + 395 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 369, + 555, + 395 + ], + "spans": [ + { + "bbox": [ + 314, + 369, + 555, + 395 + ], + "type": "interline_equation", + "content": "L_{fair} = \\sum_{A\\in \\mathcal{A}}\\inf_{\\gamma \\in \\Gamma (\\mathcal{D}^{f},\\mathcal{D}^{f_{A}})}\\bigl\\{\\mathbb{E}_{X\\sim \\gamma}[c(p,q)] + \\alpha H(\\gamma |\\mu \\otimes \\nu)\\bigr \\} ,", + "image_path": "c09d6d6031c6afe5c7d922a57ae13479089805ecded576a226a5f9d802637cab.jpg" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 399, + 555, + 496 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 399, + 555, + 496 + ], + "spans": [ + { + "bbox": [ + 313, + 399, + 555, + 496 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 399, + 555, + 496 + ], + "type": "inline_equation", + "content": "\\Gamma(\\mathcal{D}^f, \\mathcal{D}^{f_A})" + }, + { + "bbox": [ + 313, + 399, + 555, + 496 + ], + "type": "text", + "content": " is the set of joint distributions based on " + }, + { + "bbox": [ + 313, + 399, + 555, + 496 + ], + "type": "inline_equation", + "content": "\\mathcal{D}^f" + }, + { + "bbox": [ + 313, + 399, + 555, + 496 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 399, + 555, + 496 + ], + "type": "inline_equation", + "content": "\\mathcal{D}^{f_A}" + }, + { + "bbox": [ + 313, + 399, + 555, + 496 + ], + "type": "text", + "content": ". Let " + }, + { + "bbox": [ + 313, + 399, + 555, + 496 + ], + "type": "inline_equation", + "content": "p" + }, + { + "bbox": [ + 313, + 399, + 555, + 496 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 399, + 555, + 496 + ], + "type": "inline_equation", + "content": "q" + }, + { + "bbox": [ + 313, + 399, + 555, + 496 + ], + "type": "text", + "content": " be the points from " + }, + { + "bbox": [ + 313, + 399, + 555, + 496 + ], + "type": "inline_equation", + "content": "\\mathcal{D}^f" + }, + { + "bbox": [ + 313, + 399, + 555, + 496 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 399, + 555, + 496 + ], + "type": "inline_equation", + "content": "\\mathcal{D}^{f_A}" + }, + { + "bbox": [ + 313, + 399, + 555, + 496 + ], + "type": "text", + "content": ", respectively. Then, " + }, + { + "bbox": [ + 313, + 399, + 555, + 496 + ], + "type": "inline_equation", + "content": "c(p, q)" + }, + { + "bbox": [ + 313, + 399, + 555, + 496 + ], + "type": "text", + "content": " represents the transport cost [80]. Let " + }, + { + "bbox": [ + 313, + 399, + 555, + 496 + ], + "type": "inline_equation", + "content": "\\mu" + }, + { + "bbox": [ + 313, + 399, + 555, + 496 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 399, + 555, + 496 + ], + "type": "inline_equation", + "content": "\\nu" + }, + { + "bbox": [ + 313, + 399, + 555, + 496 + ], + "type": "text", + "content": " be the reference measures from the set of measures on " + }, + { + "bbox": [ + 313, + 399, + 555, + 496 + ], + "type": "inline_equation", + "content": "f" + }, + { + "bbox": [ + 313, + 399, + 555, + 496 + ], + "type": "text", + "content": ". 
Then, " + }, + { + "bbox": [ + 313, + 399, + 555, + 496 + ], + "type": "inline_equation", + "content": "H(\\gamma | \\mu \\otimes \\nu)" + }, + { + "bbox": [ + 313, + 399, + 555, + 496 + ], + "type": "text", + "content": " represents the relative entropy of " + }, + { + "bbox": [ + 313, + 399, + 555, + 496 + ], + "type": "inline_equation", + "content": "\\gamma" + }, + { + "bbox": [ + 313, + 399, + 555, + 496 + ], + "type": "text", + "content": " with respect to the product measure " + }, + { + "bbox": [ + 313, + 399, + 555, + 496 + ], + "type": "inline_equation", + "content": "\\mu \\otimes \\nu" + }, + { + "bbox": [ + 313, + 399, + 555, + 496 + ], + "type": "text", + "content": ". " + }, + { + "bbox": [ + 313, + 399, + 555, + 496 + ], + "type": "inline_equation", + "content": "\\alpha \\geq 0" + }, + { + "bbox": [ + 313, + 399, + 555, + 496 + ], + "type": "text", + "content": " is a regularization hyperparameter. In practice, we use the empirical form of " + }, + { + "bbox": [ + 313, + 399, + 555, + 496 + ], + "type": "inline_equation", + "content": "L_{fair}" + }, + { + "bbox": [ + 313, + 399, + 555, + 496 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 496, + 556, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 496, + 556, + 714 + ], + "spans": [ + { + "bbox": [ + 313, + 496, + 556, + 714 + ], + "type": "text", + "content": "Total Loss: Therefore, the final learning objective becomes " + }, + { + "bbox": [ + 313, + 496, + 556, + 714 + ], + "type": "inline_equation", + "content": "\\mathcal{L}(\\theta) = L_{imb} + \\lambda L_{fair}" + }, + { + "bbox": [ + 313, + 496, + 556, + 714 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 313, + 496, + 556, + 714 + ], + "type": "inline_equation", + "content": "\\lambda" + }, + { + "bbox": [ + 313, + 496, + 556, + 714 + ], + "type": "text", + "content": " is a hyperparameter. Train. Traditional optimization methods like stochastic gradient descent can lead to poor model generalization due to sharp loss landscapes with multiple local and global minima. To address this, we use Sharpness-Aware Minimization (SAM) [81] to enhance our annotator's generalization by flattening the loss landscape. Specifically, flattening is attained by determining the optimal " + }, + { + "bbox": [ + 313, + 496, + 556, + 714 + ], + "type": "inline_equation", + "content": "\\epsilon^{*}" + }, + { + "bbox": [ + 313, + 496, + 556, + 714 + ], + "type": "text", + "content": " for perturbing model parameters " + }, + { + "bbox": [ + 313, + 496, + 556, + 714 + ], + "type": "inline_equation", + "content": "\\theta" + }, + { + "bbox": [ + 313, + 496, + 556, + 714 + ], + "type": "text", + "content": " to maximize the loss, formulated as: " + }, + { + "bbox": [ + 313, + 496, + 556, + 714 + ], + "type": "inline_equation", + "content": "\\epsilon^{*} = \\arg \\max_{\\| \\epsilon \\|_{2} \\leq \\beta} \\mathcal{L}(\\theta + \\epsilon) \\approx \\arg \\max_{\\| \\epsilon \\|_{2} \\leq \\beta} \\epsilon^{\\top} \\nabla_{\\theta} \\mathcal{L} = \\beta \\text{sign}(\\nabla_{\\theta} \\mathcal{L})" + }, + { + "bbox": [ + 313, + 496, + 556, + 714 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 313, + 496, + 556, + 714 + ], + "type": "inline_equation", + "content": "\\beta" + }, + { + "bbox": [ + 313, + 496, + 556, + 714 + ], + "type": "text", + "content": " controls the perturbation magnitude. 
The approximation is based on the first-order Taylor expansion with assuming " + }, + { + "bbox": [ + 313, + 496, + 556, + 714 + ], + "type": "inline_equation", + "content": "\\epsilon" + }, + { + "bbox": [ + 313, + 496, + 556, + 714 + ], + "type": "text", + "content": " is small. The final equation is obtained by solving a dual norm problem, where sign represents a sign function and " + }, + { + "bbox": [ + 313, + 496, + 556, + 714 + ], + "type": "inline_equation", + "content": "\\nabla_{\\theta} \\mathcal{L}" + }, + { + "bbox": [ + 313, + 496, + 556, + 714 + ], + "type": "text", + "content": " being the gradient of " + }, + { + "bbox": [ + 313, + 496, + 556, + 714 + ], + "type": "inline_equation", + "content": "\\mathcal{L}" + }, + { + "bbox": [ + 313, + 496, + 556, + 714 + ], + "type": "text", + "content": " with respect to " + }, + { + "bbox": [ + 313, + 496, + 556, + 714 + ], + "type": "inline_equation", + "content": "\\theta" + }, + { + "bbox": [ + 313, + 496, + 556, + 714 + ], + "type": "text", + "content": ". As a result, the model parameters are updated by solving: " + }, + { + "bbox": [ + 313, + 496, + 556, + 714 + ], + "type": "inline_equation", + "content": "\\min_{\\theta} \\mathcal{L}(\\theta + \\epsilon^{*})" + }, + { + "bbox": [ + 313, + 496, + 556, + 714 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "text", + "content": "3506" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 55, + 68, + 194, + 168 + ], + "blocks": [ + { + "bbox": [ + 55, + 68, + 194, + 168 + ], + "lines": [ + { + "bbox": [ + 55, + 68, + 194, + 168 + ], + "spans": [ + { + "bbox": [ + 55, + 68, + 194, + 168 + ], + "type": "image", + "image_path": "ee33e2d2ecf2b2de8ef43e6b49c40496f7bca36c846a744baca4b05c14801536.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 54, + 171, + 555, + 205 + ], + "lines": [ + { + "bbox": [ + 54, + 171, + 555, + 205 + ], + "spans": [ + { + "bbox": [ + 54, + 171, + 555, + 205 + ], + "type": "text", + "content": "Figure 3. Distribution of face images of the AI-Face dataset. The figure shows the (a) subset distribution and the demographic distribution for (b) skin tone, (c) gender, and (d) gender. The outer rings in (b), (c), and (d) represent the proportion of groups within each attribute category, while the inner rings indicate the distribution of fake " + }, + { + "bbox": [ + 54, + 171, + 555, + 205 + ], + "type": "inline_equation", + "content": "(F)" + }, + { + "bbox": [ + 54, + 171, + 555, + 205 + ], + "type": "text", + "content": " and real " + }, + { + "bbox": [ + 54, + 171, + 555, + 205 + ], + "type": "inline_equation", + "content": "(R)" + }, + { + "bbox": [ + 54, + 171, + 555, + 205 + ], + "type": "text", + "content": " images within those groups." 
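The fairness term and the SAM update described above can be sketched as follows; uniform empirical weights, the fixed iteration count, and the hypothetical loss_fn closure (recomputing L(theta) = L_imb + lambda * L_fair) are assumptions for illustration, not the paper's exact implementation.

```python
import torch

def sinkhorn_distance(x, y, alpha=0.05, n_iter=50):
    """Entropy-regularized OT cost between empirical samples x (n, d)
    and y (m, d); very small alpha may require log-domain iterations."""
    C = torch.cdist(x, y) ** 2                     # transport cost c(p, q)
    K = torch.exp(-C / alpha)                      # Gibbs kernel
    a = x.new_full((x.size(0),), 1.0 / x.size(0))  # uniform weights mu
    b = y.new_full((y.size(0),), 1.0 / y.size(0))  # uniform weights nu
    u = torch.ones_like(a)
    for _ in range(n_iter):                        # Sinkhorn fixed point
        v = b / (K.t() @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]                # coupling gamma
    return (P * C).sum()

def fair_loss(feats, attrs):
    """L_fair: sum over groups A of the distance between D^f and D^{f_A}."""
    return sum(sinkhorn_distance(feats, feats[attrs == a])
               for a in attrs.unique())

def sam_step(model, optimizer, loss_fn, beta=0.05):
    """One SAM update with the sign approximation eps* = beta * sign(grad L);
    loss_fn is a closure that re-runs the forward pass and returns L(theta)."""
    loss_fn().backward()                           # gradients at theta
    eps = {n: beta * p.grad.sign() for n, p in model.named_parameters()
           if p.grad is not None}                  # frozen params have no grad
    with torch.no_grad():
        for n, p in model.named_parameters():
            if n in eps:
                p.add_(eps[n])                     # move to theta + eps*
    model.zero_grad()
    loss_fn().backward()                           # gradients at theta + eps*
    with torch.no_grad():
        for n, p in model.named_parameters():
            if n in eps:
                p.sub_(eps[n])                     # restore theta
    optimizer.step()                               # update with SAM gradient
    optimizer.zero_grad()
```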
+ } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 195, + 77, + 336, + 168 + ], + "blocks": [ + { + "bbox": [ + 195, + 77, + 336, + 168 + ], + "lines": [ + { + "bbox": [ + 195, + 77, + 336, + 168 + ], + "spans": [ + { + "bbox": [ + 195, + 77, + 336, + 168 + ], + "type": "image", + "image_path": "73dce0516cea83156d02b385626f5b7c5190a16667a7fb0a7c2413c9ae3dae08.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 335, + 78, + 432, + 167 + ], + "blocks": [ + { + "bbox": [ + 335, + 78, + 432, + 167 + ], + "lines": [ + { + "bbox": [ + 335, + 78, + 432, + 167 + ], + "spans": [ + { + "bbox": [ + 335, + 78, + 432, + 167 + ], + "type": "image", + "image_path": "64b85c5d29c5a70ca88dd95e3cd88007ea0acd67770d93cd21e538fb0c9ef472.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 433, + 78, + 554, + 168 + ], + "blocks": [ + { + "bbox": [ + 433, + 78, + 554, + 168 + ], + "lines": [ + { + "bbox": [ + 433, + 78, + 554, + 168 + ], + "spans": [ + { + "bbox": [ + 433, + 78, + 554, + 168 + ], + "type": "image", + "image_path": "7c9ff3f0b1c04966ed4493926d615fc9c3189dd883ddb15e16207aafedd36107.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 209, + 297, + 244 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 209, + 297, + 244 + ], + "spans": [ + { + "bbox": [ + 55, + 209, + 297, + 244 + ], + "type": "text", + "content": "Inference. We use the trained annotators to predict demographic labels for each image in AI-Face dataset, except for those from IMDB-WIKI, which already contain true labels." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 252, + 159, + 263 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 252, + 159, + 263 + ], + "spans": [ + { + "bbox": [ + 55, + 252, + 159, + 263 + ], + "type": "text", + "content": "3.3. Dataset Statistics" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 54, + 267, + 297, + 388 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 267, + 297, + 388 + ], + "spans": [ + { + "bbox": [ + 54, + 267, + 297, + 388 + ], + "type": "text", + "content": "Fig. 3 illustrates the subset distribution and demographic attributes of the AI-Face dataset. The dataset contains approximately three times more generated images than real images, with diffusion model-generated images constituting the majority. In terms of demographic attributes, the majorities in skin tone are Tone 5 (31.14%) and Tone 6 (35.16%). The lightest skin tones (Tones 1-3) are underrepresented, comprising only 0.97% of the dataset. The dataset is relatively balanced across gender. Adult (25-44) (49.67%) is the predominant representation in age groups." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 395, + 226, + 406 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 395, + 226, + 406 + ], + "spans": [ + { + "bbox": [ + 55, + 395, + 226, + 406 + ], + "type": "text", + "content": "3.4. 
Annotation Quality Assessment" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 55, + 409, + 297, + 625 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 409, + 297, + 625 + ], + "spans": [ + { + "bbox": [ + 55, + 409, + 297, + 625 + ], + "type": "text", + "content": "To assess the quality of demographic annotations in our AI-Face dataset, we conducted a user study. Three participants label the demographic attributes for the given images (the details of labeling activities are in appendix B.5), with the final ground truth determined by majority vote. We then compare our annotations with those in A-FF++, A-DFDC, A-CelebDF-V2, and A-DFD datasets. Specifically, we perform two assessments: 1) Strategic comparison: We select 1,000 images from A-FF++ and A-DFDC that have different annotations from AI-Face. These images likely represent challenging cases. 2) Random comparison: We randomly sampled 1,000 images from A-Celeb-DF-V2 and A-DFD. Due to the limited age classes in these datasets, only gender was evaluated. The results, presented in Table 3, demonstrate the high correctness of the AI-Face annotations and their superior quality compared to the annotations of other datasets. For example, our annotation quality (ACC) surpasses those in A-FF++ by " + }, + { + "bbox": [ + 55, + 409, + 297, + 625 + ], + "type": "inline_equation", + "content": "78.714\\%" + }, + { + "bbox": [ + 55, + 409, + 297, + 625 + ], + "type": "text", + "content": " on gender and " + }, + { + "bbox": [ + 55, + 409, + 297, + 625 + ], + "type": "inline_equation", + "content": "48.000\\%" + }, + { + "bbox": [ + 55, + 409, + 297, + 625 + ], + "type": "text", + "content": " on age." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 55, + 633, + 220, + 647 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 633, + 220, + 647 + ], + "spans": [ + { + "bbox": [ + 55, + 633, + 220, + 647 + ], + "type": "text", + "content": "4. Fairness Benchmark Settings" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 55, + 650, + 296, + 686 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 650, + 296, + 686 + ], + "spans": [ + { + "bbox": [ + 55, + 650, + 296, + 686 + ], + "type": "text", + "content": "This section demonstrates the fairness benchmark settings for detection methods and evaluation metrics on AI-Face (80%/20%: Train/Test). More settings are in Appendix C.1." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 55, + 689, + 296, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 689, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 689, + 296, + 713 + ], + "type": "text", + "content": "Detection Methods. Our benchmark has implemented 12 detectors. The methodologies cover a spectrum that" + } + ] + } + ], + "index": 12 + }, + { + "type": "table", + "bbox": [ + 316, + 206, + 558, + 285 + ], + "blocks": [ + { + "bbox": [ + 316, + 206, + 558, + 285 + ], + "lines": [ + { + "bbox": [ + 316, + 206, + 558, + 285 + ], + "spans": [ + { + "bbox": [ + 316, + 206, + 558, + 285 + ], + "type": "table", + "html": "
<table><thead><tr><td rowspan="2">Evaluation Type</td><td rowspan="2">Dataset</td><td colspan="3">Gender</td><td colspan="3">Age</td></tr>
<tr><td>ACC</td><td>Precision</td><td>Recall</td><td>ACC</td><td>Precision</td><td>Recall</td></tr></thead>
<tbody><tr><td rowspan="4">Strategic</td><td>A-FF++</td><td>8.143</td><td>17.583</td><td>5.966</td><td>37.700</td><td>39.459</td><td>45.381</td></tr>
<tr><td>AI-Face</td><td>86.857</td><td>74.404</td><td>77.367</td><td>85.700</td><td>74.024</td><td>63.751</td></tr>
<tr><td>A-DFDC</td><td>21.600</td><td>28.604</td><td>23.082</td><td>33.400</td><td>38.011</td><td>40.165</td></tr>
<tr><td>AI-Face</td><td>91.700</td><td>92.129</td><td>83.448</td><td>77.000</td><td>76.184</td><td>62.646</td></tr>
<tr><td rowspan="4">Random</td><td>A-Celeb-DF-V2</td><td>89.628</td><td>90.626</td><td>90.494</td><td colspan="3">-</td></tr>
<tr><td>AI-Face</td><td>91.206</td><td>91.474</td><td>91.767</td><td colspan="3">-</td></tr>
<tr><td>A-DFD</td><td>70.900</td><td>71.686</td><td>74.435</td><td colspan="3">-</td></tr>
<tr><td>AI-Face</td><td>92.300</td><td>91.060</td><td>91.727</td><td colspan="3">-</td></tr></tbody></table>
", + "image_path": "95a24ad1b607c1481f8c4106de72b750295ab509feb3316e4a35a864ef0ddefc.jpg" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "table_body" + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 289, + 555, + 312 + ], + "lines": [ + { + "bbox": [ + 313, + 289, + 555, + 312 + ], + "spans": [ + { + "bbox": [ + 313, + 289, + 555, + 312 + ], + "type": "text", + "content": "Table 3. Annotation quality assessment results " + }, + { + "bbox": [ + 313, + 289, + 555, + 312 + ], + "type": "inline_equation", + "content": "(\\%)" + }, + { + "bbox": [ + 313, + 289, + 555, + 312 + ], + "type": "text", + "content": " for " + }, + { + "bbox": [ + 313, + 289, + 555, + 312 + ], + "type": "inline_equation", + "content": "A - FF + +" + }, + { + "bbox": [ + 313, + 289, + 555, + 312 + ], + "type": "text", + "content": " A- DFDC, A-Celeb-DF-V2, A-DFD, and our AI-Face. ACC: Accuracy." + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 313, + 316, + 556, + 483 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 316, + 556, + 483 + ], + "spans": [ + { + "bbox": [ + 313, + 316, + 556, + 483 + ], + "type": "text", + "content": "is specifically tailored to detect AI-generated faces from Deepfake Videos, GANs, and DMs. They can be classified into four types: Naive detectors: refer to backbone models that can be directly utilized as the detector for binary classification, including CNN-based (i.e., Xception [82], EfficientB4 [83]) and transformer-based (i.e., ViT-B/16 [84]). Frequency-based: explore the frequency domain for forgery detection (i.e., F3Net [85], SPSL [86], SRM [87]). Spatial-based: focus on mining spatial characteristics (e.g., texture) within images for detection (i.e., UCF [26], UnivFD [88], CORE [89]). Fairness-enhanced: focus on improving fairness in AI-generated face detection by designing specific algorithms (i.e., DAW-FDD [29], DAGFDD [29], PG-FDD [30])." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 313, + 486, + 557, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 486, + 557, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 486, + 557, + 713 + ], + "type": "text", + "content": "Evaluation Metrics. To provide a comprehensive benchmarking, we consider 5 fairness metrics commonly used in fairness community [90-94] and 5 widely used utility metrics [95-98]. For fairness metrics, we consider Demographic Parity " + }, + { + "bbox": [ + 313, + 486, + 557, + 713 + ], + "type": "inline_equation", + "content": "(F_{DP})" + }, + { + "bbox": [ + 313, + 486, + 557, + 713 + ], + "type": "text", + "content": " [90, 91], Max Equalized Odds " + }, + { + "bbox": [ + 313, + 486, + 557, + 713 + ], + "type": "inline_equation", + "content": "(F_{MEO})" + }, + { + "bbox": [ + 313, + 486, + 557, + 713 + ], + "type": "text", + "content": " [93], Equal Odds " + }, + { + "bbox": [ + 313, + 486, + 557, + 713 + ], + "type": "inline_equation", + "content": "(F_{EO})" + }, + { + "bbox": [ + 313, + 486, + 557, + 713 + ], + "type": "text", + "content": " [92], and Overall Accuracy Equality " + }, + { + "bbox": [ + 313, + 486, + 557, + 713 + ], + "type": "inline_equation", + "content": "(F_{OAE})" + }, + { + "bbox": [ + 313, + 486, + 557, + 713 + ], + "type": "text", + "content": " [93] for evaluating group (e.g., gender) and intersectional (e.g., individuals of a specific gender and simultaneously a specific skin tone) fairness. 
In experiments, the intersectional groups are Female-Light (F-L), Female-Medium (F-M), Female-Dark (F-D), Male-Light (M-L), Male-Medium (M-M), and Male-Dark (M-D), where we group 10 categories of skin tones into Light (Tone 1-3), Medium (Tone 4-6), and Dark (Tone 7-10) for simplicity according to [99]. We also use individual fairness " + }, + { + "bbox": [ + 313, + 486, + 557, + 713 + ], + "type": "inline_equation", + "content": "(F_{IND})" + }, + { + "bbox": [ + 313, + 486, + 557, + 713 + ], + "type": "text", + "content": " [94, 100] (i.e., similar individuals should have similar predicted outcomes) for evaluation. For utility metrics, we employ the Area Under the ROC Curve (AUC), Accuracy (ACC), Average Precision (AP), Equal Error Rate (EER)," + } + ] + } + ], + "index": 16 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "text", + "content": "3507" + } + ] + } + ], + "index": 17 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 58, + 64, + 555, + 277 + ], + "blocks": [ + { + "bbox": [ + 58, + 64, + 555, + 277 + ], + "lines": [ + { + "bbox": [ + 58, + 64, + 555, + 277 + ], + "spans": [ + { + "bbox": [ + 58, + 64, + 555, + 277 + ], + "type": "table", + "html": "
<table><thead><tr><td rowspan="3">Measure</td><td rowspan="3">Attribute</td><td rowspan="3">Metric</td><td colspan="12">Model Type</td></tr>
<tr><td colspan="3">Naive</td><td colspan="3">Frequency</td><td colspan="3">Spatial</td><td colspan="3">Fairness-enhanced</td></tr>
<tr><td>Xception [82]</td><td>EfficientB4 [83]</td><td>ViT-B/16 [84]</td><td>F3Net [85]</td><td>SPSL [86]</td><td>SRM [87]</td><td>UCF [26]</td><td>UnivFD [88]</td><td>CORE [89]</td><td>DAW-FDD [29]</td><td>DAG-FDD [29]</td><td>PG-FDD [30]</td></tr></thead>
<tbody><tr><td rowspan="17">Fairness (%) ↓</td><td rowspan="4">Skin Tone</td><td>F<sub>MEO</sub></td><td>8.836</td><td>8.300</td><td>6.264</td><td>19.938</td><td>8.055</td><td>10.002</td><td>17.325</td><td>2.577</td><td>10.779</td><td>14.118</td><td>6.551</td><td>6.465</td></tr>
<tr><td>F<sub>DP</sub></td><td>9.751</td><td>6.184</td><td>7.728</td><td>12.876</td><td>9.379</td><td>10.897</td><td>12.581</td><td>8.556</td><td>10.317</td><td>10.706</td><td>8.617</td><td>9.746</td></tr>
<tr><td>F<sub>OAE</sub></td><td>1.271</td><td>4.377</td><td>2.168</td><td>2.818</td><td>1.135</td><td>0.915</td><td>1.883</td><td>2.748</td><td>1.332</td><td>1.667</td><td>1.388</td><td>0.882</td></tr>
<tr><td>F<sub>EO</sub></td><td>12.132</td><td>11.062</td><td>8.813</td><td>23.708</td><td>9.789</td><td>14.239</td><td>21.92</td><td>5.536</td><td>13.069</td><td>16.604</td><td>7.383</td><td>9.115</td></tr>
<tr><td rowspan="4">Gender</td><td>F<sub>MEO</sub></td><td>3.975</td><td>5.385</td><td>5.104</td><td>4.717</td><td>4.411</td><td>6.271</td><td>5.074</td><td>4.503</td><td>5.795</td><td>5.510</td><td>5.910</td><td>3.190</td></tr>
<tr><td>F<sub>DP</sub></td><td>1.691</td><td>1.725</td><td>1.344</td><td>1.864</td><td>1.827</td><td>1.957</td><td>1.736</td><td>1.190</td><td>2.154</td><td>2.015</td><td>2.151</td><td>1.252</td></tr>
<tr><td>F<sub>OAE</sub></td><td>0.975</td><td>1.487</td><td>1.803</td><td>1.129</td><td>1.037</td><td>1.772</td><td>1.451</td><td>1.622</td><td>1.389</td><td>1.325</td><td>1.420</td><td>1.071</td></tr>
<tr><td>F<sub>EO</sub></td><td>4.143</td><td>5.863</td><td>6.031</td><td>4.870</td><td>4.534</td><td>6.78</td><td>5.510</td><td>5.408</td><td>5.931</td><td>5.696</td><td>6.066</td><td>3.702</td></tr>
<tr><td rowspan="4">Age</td><td>F<sub>MEO</sub></td><td>27.883</td><td>6.796</td><td>14.937</td><td>38.801</td><td>27.614</td><td>24.843</td><td>47.500</td><td>5.436</td><td>33.882</td><td>45.466</td><td>15.229</td><td>14.804</td></tr>
<tr><td>F<sub>DP</sub></td><td>10.905</td><td>11.849</td><td>11.839</td><td>14.906</td><td>11.232</td><td>11.570</td><td>17.049</td><td>15.249</td><td>12.564</td><td>14.106</td><td>9.633</td><td>10.467</td></tr>
<tr><td>F<sub>OAE</sub></td><td>7.265</td><td>2.856</td><td>6.838</td><td>10.116</td><td>7.270</td><td>6.524</td><td>11.652</td><td>3.793</td><td>8.760</td><td>11.878</td><td>5.533</td><td>5.009</td></tr>
<tr><td>F<sub>EO</sub></td><td>42.216</td><td>10.300</td><td>30.795</td><td>55.032</td><td>40.943</td><td>38.528</td><td>67.545</td><td>14.148</td><td>48.729</td><td>64.384</td><td>30.182</td><td>29.585</td></tr>
<tr><td rowspan="4">Intersection</td><td>F<sub>MEO</sub></td><td>10.505</td><td>17.586</td><td>9.384</td><td>21.369</td><td>10.379</td><td>15.142</td><td>20.134</td><td>6.119</td><td>15.34</td><td>16.565</td><td>12.178</td><td>9.578</td></tr>
<tr><td>F<sub>DP</sub></td><td>14.511</td><td>8.607</td><td>11.535</td><td>17.175</td><td>13.259</td><td>15.186</td><td>17.03</td><td>14.026</td><td>14.301</td><td>14.088</td><td>11.705</td><td>14.697</td></tr>
<tr><td>F<sub>OAE</sub></td><td>2.536</td><td>8.461</td><td>4.928</td><td>4.870</td><td>2.464</td><td>3.998</td><td>3.536</td><td>6.287</td><td>2.775</td><td>3.547</td><td>4.035</td><td>3.062</td></tr>
<tr><td>F<sub>EO</sub></td><td>24.315</td><td>25.114</td><td>27.443</td><td>47.783</td><td>21.679</td><td>30.112</td><td>43.376</td><td>20.255</td><td>28.84</td><td>33.122</td><td>26.295</td><td>18.348</td></tr>
<tr><td>Individual</td><td>F<sub>IND</sub></td><td>10.338</td><td>25.742</td><td>0.022</td><td>1.872</td><td>2.518</td><td>7.621</td><td>0.767</td><td>3.523</td><td>0.041</td><td>3.772</td><td>0.901</td><td>0.780</td></tr>
<tr><td rowspan="5">Utility (%)</td><td rowspan="5">-</td><td>AUC ↑</td><td>98.583</td><td>98.611</td><td>98.69</td><td>98.714</td><td>98.747</td><td>97.936</td><td>98.082</td><td>98.192</td><td>98.579</td><td>97.811</td><td>98.771</td><td>99.172</td></tr>
<tr><td>ACC ↑</td><td>96.308</td><td>94.203</td><td>94.472</td><td>95.719</td><td>96.346</td><td>95.092</td><td>95.151</td><td>93.651</td><td>96.224</td><td>95.426</td><td>95.722</td><td>96.174</td></tr>
<tr><td>AP ↑</td><td>99.350</td><td>99.542</td><td>99.571</td><td>99.453</td><td>99.356</td><td>99.172</td><td>99.273</td><td>99.400</td><td>99.360</td><td>99.015</td><td>99.498</td><td>99.694</td></tr>
<tr><td>EER ↓</td><td>5.149</td><td>6.689</td><td>6.372</td><td>5.256</td><td>4.371</td><td>6.483</td><td>7.708</td><td>7.633</td><td>5.145</td><td>7.063</td><td>5.499</td><td>4.961</td></tr>
<tr><td>FPR ↓</td><td>12.961</td><td>20.066</td><td>16.426</td><td>14.679</td><td>13.661</td><td>15.746</td><td>13.646</td><td>18.550</td><td>13.410</td><td>16.670</td><td>14.844</td><td>10.971</td></tr>
<tr><td colspan="3">Training Time / Epoch</td><td>1h15min</td><td>2h25min</td><td>2h40min</td><td>1h18min</td><td>1h20min</td><td>3h10min</td><td>5h05min</td><td>4h</td><td>1h16min</td><td>1h25min</td><td>1h17min</td><td>7h20min</td></tr></tbody></table>
", + "image_path": "29af5e2554ab6eb21ea1181e6fc447cf25726e3560b37dc9569d40ac740ee3a4.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "bbox": [ + 72, + 281, + 536, + 293 + ], + "lines": [ + { + "bbox": [ + 72, + 281, + 536, + 293 + ], + "spans": [ + { + "bbox": [ + 72, + 281, + 536, + 293 + ], + "type": "text", + "content": "Table 4. Overall performance comparison of difference methods on the AI-Face dataset. The best performance is shown in bold." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 55, + 297, + 181, + 308 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 297, + 181, + 308 + ], + "spans": [ + { + "bbox": [ + 55, + 297, + 181, + 308 + ], + "type": "text", + "content": "and False Positive Rate (FPR)." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 319, + 177, + 334 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 319, + 177, + 334 + ], + "spans": [ + { + "bbox": [ + 55, + 319, + 177, + 334 + ], + "type": "text", + "content": "5. Results and Analysis" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 337, + 297, + 374 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 337, + 297, + 374 + ], + "spans": [ + { + "bbox": [ + 55, + 337, + 297, + 374 + ], + "type": "text", + "content": "In this section, we estimate the existing AI-generated image detectors' fairness performance alongside their utility on our AI-Face Dataset. More results can be found in Appendix D." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 381, + 218, + 395 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 381, + 218, + 395 + ], + "spans": [ + { + "bbox": [ + 55, + 381, + 218, + 395 + ], + "type": "text", + "content": "5.1. General Fairness Comparison" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 399, + 297, + 626 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 399, + 297, + 626 + ], + "spans": [ + { + "bbox": [ + 55, + 399, + 297, + 626 + ], + "type": "text", + "content": "Overall Performance. Table 4 reports the overall performance on our AI-Face test set. Our observations are: 1) Fairness-Enhanced Models (specifically PG-FDD [30]) are the most effective in achieving both high fairness and utility, underscoring the effectiveness of specialized fairness-enhancement techniques in mitigating demographic biases. 2) UnivFD [88], based on the CLIP backbone [73], also achieves commendable fairness, suggesting that foundation models equipped with fairness-focused enhancements could be a promising direction for developing fairer detectors. 3) Naive detectors, such as EfficientB4 [83], trained on large, diverse datasets (e.g., our AI-Face) can achieve competitive fairness and utility, highlighting the potential of fairness improvements by choosing specific architecture. 4) 10 out of 12 detectors have an AUC higher than " + }, + { + "bbox": [ + 55, + 399, + 297, + 626 + ], + "type": "inline_equation", + "content": "98\\%" + }, + { + "bbox": [ + 55, + 399, + 297, + 626 + ], + "type": "text", + "content": ", demonstrating our AI-Face dataset is significant for training AI-face detectors in resulting high utility. 5) PG-FDD demonstrates superior performance but has a long training time, which can be explored and addressed in the future." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 630, + 296, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 630, + 296, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 630, + 296, + 714 + ], + "type": "text", + "content": "Performance on Different Subsets. 1) Fig. 4 demonstrates the intersectional " + }, + { + "bbox": [ + 55, + 630, + 296, + 714 + ], + "type": "inline_equation", + "content": "F_{EO}" + }, + { + "bbox": [ + 55, + 630, + 296, + 714 + ], + "type": "text", + "content": " and AUC performance of detectors on each test subset. We observe that the fairness performance varies a lot among different generative methods for every detector. The largest bias on most detectors comes from detecting face images generated by diffusion models. 2) DAG-FDD [29] and SRM [87] demonstrate the most" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 297, + 556, + 357 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 297, + 556, + 357 + ], + "spans": [ + { + "bbox": [ + 313, + 297, + 556, + 357 + ], + "type": "text", + "content": "consistent fairness across subsets, indicating a robust handling of bias introduced by different generative methods. 3) Moreover, the stable utility demonstrates our dataset's expansiveness and diversity, enabling effective training to detect AI-generated faces from various generative techniques." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 361, + 556, + 482 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 361, + 556, + 482 + ], + "spans": [ + { + "bbox": [ + 313, + 361, + 556, + 482 + ], + "type": "text", + "content": "Performance on Different Subgroups. We conduct an analysis of all detectors on intersectional subgroups. 1) As shown in Fig. 5, facial images with lighter skin tone are more often misclassified as fake, likely due to the underrepresentation of lighter tones (Tone 1-3) in our dataset (see Fig. 3 (b)). This suggests detectors tend to show higher error rates for minority groups. 2) Although gender representation is relatively balanced (see Fig. 3 (c)) in our dataset, the detectors consistently exhibit higher false positive rates for female subgroups, indicating a persistent gender-based bias." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 491, + 484, + 503 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 491, + 484, + 503 + ], + "spans": [ + { + "bbox": [ + 313, + 491, + 484, + 503 + ], + "type": "text", + "content": "5.2. Fairness Reliability Assessment" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 506, + 556, + 686 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 506, + 556, + 686 + ], + "spans": [ + { + "bbox": [ + 313, + 506, + 556, + 686 + ], + "type": "text", + "content": "Fairness Robustness Evaluation. We apply 6 post-processing methods: Random Crop (RC) [101], Rotation (RT) [34], Brightness Contrast (BC) [34], Hue Saturation Value (HSV) [34], Gaussian Blur (GB) [34], and JEPG Compression (JC) [102] to the test images. Fig. 6 shows each detector's intersectional " + }, + { + "bbox": [ + 313, + 506, + 556, + 686 + ], + "type": "inline_equation", + "content": "F_{EO}" + }, + { + "bbox": [ + 313, + 506, + 556, + 686 + ], + "type": "text", + "content": " and AUC performance changes after using post-processing. Our observations are: 1) These impairments tend to wash out forensic traces, so that detectors have evident performance degradation. 
2) Post-processing does not always increase detectors' bias (e.g., UCF, UnivFD, CORE, and DAW-FDD have better fairness after rotation), though it hurts the utility. 3) Fairness-enhanced detectors struggle to maintain fairness when images undergo post-processing. 4) Spatial detectors have better fairness robustness compared with other model types." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 689, + 556, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 689, + 556, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 689, + 556, + 715 + ], + "type": "text", + "content": "Fairness Generalization Evaluation. To evaluate detectors' fairness generalization capability, we test them on Casual" + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "text", + "content": "3508" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 59, + 71, + 140, + 223 + ], + "blocks": [ + { + "bbox": [ + 59, + 71, + 140, + 223 + ], + "lines": [ + { + "bbox": [ + 59, + 71, + 140, + 223 + ], + "spans": [ + { + "bbox": [ + 59, + 71, + 140, + 223 + ], + "type": "image", + "image_path": "e46031def0f22af80823816c764efa8d609057504dfa7814e2516c6ac5abeecb.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 141, + 72, + 222, + 223 + ], + "blocks": [ + { + "bbox": [ + 141, + 72, + 222, + 223 + ], + "lines": [ + { + "bbox": [ + 141, + 72, + 222, + 223 + ], + "spans": [ + { + "bbox": [ + 141, + 72, + 222, + 223 + ], + "type": "image", + "image_path": "6ec0f78d4c2fd989caf989b1a3deb18e2f5b40e3d42e964b4eeb5982a9cf8f3d.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 223, + 72, + 304, + 223 + ], + "blocks": [ + { + "bbox": [ + 223, + 72, + 304, + 223 + ], + "lines": [ + { + "bbox": [ + 223, + 72, + 304, + 223 + ], + "spans": [ + { + "bbox": [ + 223, + 72, + 304, + 223 + ], + "type": "image", + "image_path": "01b5f57c01e0a85aab33413e5edc86dec9275e457a2fcc570db0106a9a8b77cd.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 304, + 72, + 385, + 223 + ], + "blocks": [ + { + "bbox": [ + 304, + 72, + 385, + 223 + ], + "lines": [ + { + "bbox": [ + 304, + 72, + 385, + 223 + ], + "spans": [ + { + "bbox": [ + 304, + 72, + 385, + 223 + ], + "type": "image", + "image_path": "3ed381744d2a149f361ac5002bf73cdb9224e270715f0926951af5bd27b443d2.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 385, + 72, + 466, + 223 + ], + "blocks": [ + { + "bbox": [ + 385, + 72, + 466, + 223 + ], + "lines": [ + { + "bbox": [ + 385, + 72, + 466, + 223 + ], + "spans": [ + { + "bbox": [ + 385, + 72, + 466, + 223 + ], + "type": "image", + "image_path": "f703227f7c0a1d3042de5635e5224485e29975e281ebb83c2f19c9c1928eb972.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 468, + 72, + 549, + 223 + ], + "blocks": [ + { + "bbox": [ + 468, + 72, + 549, + 223 + ], + "lines": [ + { + 
"bbox": [ + 468, + 72, + 549, + 223 + ], + "spans": [ + { + "bbox": [ + 468, + 72, + 549, + 223 + ], + "type": "image", + "image_path": "b5cf2bdd2f22f0fe64e0b55ea896aff1bfb41e28fc01205ab907db05edd9c00d.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 56, + 245, + 140, + 312 + ], + "blocks": [ + { + "bbox": [ + 55, + 224, + 555, + 245 + ], + "lines": [ + { + "bbox": [ + 55, + 224, + 555, + 245 + ], + "spans": [ + { + "bbox": [ + 55, + 224, + 555, + 245 + ], + "type": "text", + "content": "Figure 4. Visualization of the intersectional " + }, + { + "bbox": [ + 55, + 224, + 555, + 245 + ], + "type": "inline_equation", + "content": "F_{EO}" + }, + { + "bbox": [ + 55, + 224, + 555, + 245 + ], + "type": "text", + "content": " (\\%) and AUC (\\%) of detectors on different subsets. The smaller " + }, + { + "bbox": [ + 55, + 224, + 555, + 245 + ], + "type": "inline_equation", + "content": "F_{EO}" + }, + { + "bbox": [ + 55, + 224, + 555, + 245 + ], + "type": "text", + "content": " polygon area represents better fairness. The larger AUC area means better utility." + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 56, + 245, + 140, + 312 + ], + "lines": [ + { + "bbox": [ + 56, + 245, + 140, + 312 + ], + "spans": [ + { + "bbox": [ + 56, + 245, + 140, + 312 + ], + "type": "image", + "image_path": "e33836a66a827e63ba49d5986faa44a281c5f61c13bdc81a1326c85ec9b10c12.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 141, + 245, + 225, + 312 + ], + "blocks": [ + { + "bbox": [ + 141, + 245, + 225, + 312 + ], + "lines": [ + { + "bbox": [ + 141, + 245, + 225, + 312 + ], + "spans": [ + { + "bbox": [ + 141, + 245, + 225, + 312 + ], + "type": "image", + "image_path": "54aaaa160577b20bc7065760bc2188b1c0a14a90ef49b20360fb2d3397e8e5f7.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 225, + 245, + 306, + 312 + ], + "blocks": [ + { + "bbox": [ + 225, + 245, + 306, + 312 + ], + "lines": [ + { + "bbox": [ + 225, + 245, + 306, + 312 + ], + "spans": [ + { + "bbox": [ + 225, + 245, + 306, + 312 + ], + "type": "image", + "image_path": "a7cd58856432107fdfe153cd0a7f772366496c4324f72f4b142fbf760ebb6672.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + } + ], + "index": 9 + }, + { + "type": "image", + "bbox": [ + 306, + 245, + 389, + 312 + ], + "blocks": [ + { + "bbox": [ + 306, + 245, + 389, + 312 + ], + "lines": [ + { + "bbox": [ + 306, + 245, + 389, + 312 + ], + "spans": [ + { + "bbox": [ + 306, + 245, + 389, + 312 + ], + "type": "image", + "image_path": "04dd2777922990ee178b7c87375c81b7f3c5b33437c35c7ebb0d8e9771a5386d.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + } + ], + "index": 10 + }, + { + "type": "image", + "bbox": [ + 389, + 245, + 472, + 312 + ], + "blocks": [ + { + "bbox": [ + 389, + 245, + 472, + 312 + ], + "lines": [ + { + "bbox": [ + 389, + 245, + 472, + 312 + ], + "spans": [ + { + "bbox": [ + 389, + 245, + 472, + 312 + ], + "type": "image", + "image_path": "0dda740b787a117192b48a263b4d80ea906be8d79747a6fd1edd3122ea595f52.jpg" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_body" + } + ], + "index": 11 + }, + { + "type": "image", + "bbox": [ + 472, + 245, + 553, + 312 + ], + "blocks": [ + { + "bbox": [ + 472, + 245, + 553, + 312 + ], + "lines": 
[ + { + "bbox": [ + 472, + 245, + 553, + 312 + ], + "spans": [ + { + "bbox": [ + 472, + 245, + 553, + 312 + ], + "type": "image", + "image_path": "3fb87041182ba581a4144a41520a81745c8ffea2f4b79db682ca098f23a32c69.jpg" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_body" + } + ], + "index": 12 + }, + { + "type": "image", + "bbox": [ + 56, + 312, + 140, + 380 + ], + "blocks": [ + { + "bbox": [ + 56, + 312, + 140, + 380 + ], + "lines": [ + { + "bbox": [ + 56, + 312, + 140, + 380 + ], + "spans": [ + { + "bbox": [ + 56, + 312, + 140, + 380 + ], + "type": "image", + "image_path": "be00424190d93f7bc86666e788fa87f2a877b5ed9b6f4a274c244514da0fd855.jpg" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 79, + 383, + 529, + 395 + ], + "lines": [ + { + "bbox": [ + 79, + 383, + 529, + 395 + ], + "spans": [ + { + "bbox": [ + 79, + 383, + 529, + 395 + ], + "type": "text", + "content": "Figure 5. FPR(%) of each intersectional subgroup The dashline represents the lowest FPR on Female-Light (F-L) subgroup." + } + ] + } + ], + "index": 19, + "angle": 0, + "type": "image_caption" + } + ], + "index": 13 + }, + { + "type": "image", + "bbox": [ + 141, + 312, + 225, + 380 + ], + "blocks": [ + { + "bbox": [ + 141, + 312, + 225, + 380 + ], + "lines": [ + { + "bbox": [ + 141, + 312, + 225, + 380 + ], + "spans": [ + { + "bbox": [ + 141, + 312, + 225, + 380 + ], + "type": "image", + "image_path": "c8b428974b8488c6589a108584966d8fd60bd1d37b3a3fd5055c70f86681c9f6.jpg" + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_body" + } + ], + "index": 14 + }, + { + "type": "image", + "bbox": [ + 225, + 312, + 306, + 380 + ], + "blocks": [ + { + "bbox": [ + 225, + 312, + 306, + 380 + ], + "lines": [ + { + "bbox": [ + 225, + 312, + 306, + 380 + ], + "spans": [ + { + "bbox": [ + 225, + 312, + 306, + 380 + ], + "type": "image", + "image_path": "7b66e46430a04c9303d75e59aa66d9b11e3b84337a57a3059062880851871ded.jpg" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_body" + } + ], + "index": 15 + }, + { + "type": "image", + "bbox": [ + 306, + 312, + 389, + 380 + ], + "blocks": [ + { + "bbox": [ + 306, + 312, + 389, + 380 + ], + "lines": [ + { + "bbox": [ + 306, + 312, + 389, + 380 + ], + "spans": [ + { + "bbox": [ + 306, + 312, + 389, + 380 + ], + "type": "image", + "image_path": "07971c0affdc5b1324513f078b01c55b5fe787d10e186a57af172f5bf4f5000c.jpg" + } + ] + } + ], + "index": 16, + "angle": 0, + "type": "image_body" + } + ], + "index": 16 + }, + { + "type": "image", + "bbox": [ + 389, + 312, + 472, + 380 + ], + "blocks": [ + { + "bbox": [ + 389, + 312, + 472, + 380 + ], + "lines": [ + { + "bbox": [ + 389, + 312, + 472, + 380 + ], + "spans": [ + { + "bbox": [ + 389, + 312, + 472, + 380 + ], + "type": "image", + "image_path": "0c507a0a9d0f1ddb836170ca5ed44a91ce6d0652e050b0f96195d9cbc1692452.jpg" + } + ] + } + ], + "index": 17, + "angle": 0, + "type": "image_body" + } + ], + "index": 17 + }, + { + "type": "image", + "bbox": [ + 472, + 312, + 553, + 380 + ], + "blocks": [ + { + "bbox": [ + 472, + 312, + 553, + 380 + ], + "lines": [ + { + "bbox": [ + 472, + 312, + 553, + 380 + ], + "spans": [ + { + "bbox": [ + 472, + 312, + 553, + 380 + ], + "type": "image", + "image_path": "59f86cfc141be60c428d5a68cff6f5c5454d96715dbd8157a104501234ee22ca.jpg" + } + ] + } + ], + "index": 18, + "angle": 0, + "type": "image_body" + } + ], + "index": 18 + }, + { + "bbox": [ + 54, + 398, + 297, + 567 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 398, + 
297, + 567 + ], + "spans": [ + { + "bbox": [ + 54, + 398, + 297, + 567 + ], + "type": "text", + "content": "Conversations v2 (CCv2) [103], DF-Platter [16], and GenData [17], none of which are part of AI-Face. Notably, CCv2 is a dataset that contains only real face images with demographic annotations (e.g., gender) self-reported by the participants. Results on gender attribute in Table 5 show that: 1) Even well-designed detectors that focus on improving utility or fairness generalization (e.g., UCF, PG-FDD) struggle to achieve consistently superior performance across different dataset domains. This highlights the remaining fairness generalization issue. 2) DAW-FDD and PG-PDD are two fairness-enhanced detectors that require accessing demographic information during training, but their fairness does not encounter a drastic drop when evaluating on CCv2. This reflects the high accuracy of the annotations in our AI-face." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 54, + 570, + 297, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 570, + 297, + 714 + ], + "spans": [ + { + "bbox": [ + 54, + 570, + 297, + 714 + ], + "type": "text", + "content": "Effect of Training Set Size. We randomly sample " + }, + { + "bbox": [ + 54, + 570, + 297, + 714 + ], + "type": "inline_equation", + "content": "20\\%" + }, + { + "bbox": [ + 54, + 570, + 297, + 714 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 54, + 570, + 297, + 714 + ], + "type": "inline_equation", + "content": "40\\%" + }, + { + "bbox": [ + 54, + 570, + 297, + 714 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 54, + 570, + 297, + 714 + ], + "type": "inline_equation", + "content": "60\\%" + }, + { + "bbox": [ + 54, + 570, + 297, + 714 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 54, + 570, + 297, + 714 + ], + "type": "inline_equation", + "content": "80\\%" + }, + { + "bbox": [ + 54, + 570, + 297, + 714 + ], + "type": "text", + "content": " of each training subset from AI-Face to assess the impact of training size on performance. Key observations from Fig. 7 (Left): 1) Among all detectors, UnivFD demonstrates the most stable fairness and utility performance as the training dataset size changes, likely due to its fixed CLIP backbone. 2) Increasing the training dataset size generally improves model utility, but this pattern does not extend to fairness metrics. In fact, certain detectors such as F3Net and UCF exhibit worsening fairness as the training size reaches its maximum. This suggests that more training data does not necessarily lead to fairer detectors." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 313, + 398, + 556, + 530 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 398, + 556, + 530 + ], + "spans": [ + { + "bbox": [ + 313, + 398, + 556, + 530 + ], + "type": "text", + "content": "Effect of the Ratio of Real and Fake. To examine how training real-to-fake sample ratios affect detector performance, we set the ratios at 1:10, 1:1, and 10:1 while keeping the total sample count constant. Experimental results in Fig. 7 (Right) show: 1) Most detectors' fairness improves as real sample representation increases. Probably because increasing real and reducing fake samples helps detectors reduce overfitting to artifacts specific to fake samples. This makes it easier for detectors to distinguish real from fake, even for underrepresented groups, thereby enhancing fairness. 
2) Most detectors achieve the highest AUC with balanced data." + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 313, + 544, + 387, + 555 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 544, + 387, + 555 + ], + "spans": [ + { + "bbox": [ + 313, + 544, + 387, + 555 + ], + "type": "text", + "content": "5.3. Discussion" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 312, + 558, + 556, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 558, + 556, + 715 + ], + "spans": [ + { + "bbox": [ + 312, + 558, + 556, + 715 + ], + "type": "text", + "content": "According to the above experiments, we summarize the unsolved fairness problems in recent detectors: 1) Detectors' fairness is unstable when detecting face images generated by different generative methods, indicating a future direction for enhancing fairness stability since new generative models continue to emerge. 2) Even though fairness-enhanced detectors exhibit small overall fairness metric values, they still show biased detection towards minority groups. Future studies should be more cautious when designing fair detectors to ensure balanced performance across all demographic groups. 3) There is currently no reliable detector, as all detectors experience severe performance degradation under image post-processing and cross-domain evaluation. Future" + } + ] + } + ], + "index": 24 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "text", + "content": "3509" + } + ] + } + ], + "index": 25 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 58, + 69, + 180, + 149 + ], + "blocks": [ + { + "bbox": [ + 58, + 69, + 180, + 149 + ], + "lines": [ + { + "bbox": [ + 58, + 69, + 180, + 149 + ], + "spans": [ + { + "bbox": [ + 58, + 69, + 180, + 149 + ], + "type": "image", + "image_path": "62c71090c95651797a47b03964b61722d2a55d6802d1bfa822336b0986e770aa.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 63, + 151, + 544, + 162 + ], + "lines": [ + { + "bbox": [ + 63, + 151, + 544, + 162 + ], + "spans": [ + { + "bbox": [ + 63, + 151, + 544, + 162 + ], + "type": "text", + "content": "Figure 6. Performance ratio after vs. before post-processing. Points closer to 1.0 (i.e., no post-processing) indicate better robustness."
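The six perturbations named in the robustness protocol (RC, RT, BC, HSV, GB, JC) are standard image augmentations, so the setup is straightforward to approximate with an off-the-shelf library. A hedged Python sketch using albumentations follows; the parameter values and the file path are illustrative assumptions, not the benchmark's exact settings.

```python
# Hedged sketch: apply each post-processing operation one at a time to a test
# image. Parameters and "face.png" are illustrative assumptions.
import albumentations as A
import cv2

perturbations = {
    "RC":  A.RandomCrop(height=200, width=200, p=1.0),  # Random Crop
    "RT":  A.Rotate(limit=30, p=1.0),                   # Rotation
    "BC":  A.RandomBrightnessContrast(p=1.0),           # Brightness Contrast
    "HSV": A.HueSaturationValue(p=1.0),                 # Hue Saturation Value
    "GB":  A.GaussianBlur(blur_limit=(3, 7), p=1.0),    # Gaussian Blur
    "JC":  A.ImageCompression(p=1.0),                   # JPEG Compression
}

image = cv2.imread("face.png")  # hypothetical test image (>= 200x200 for RC)
for name, op in perturbations.items():
    perturbed = op(image=image)["image"]
    # Re-run the detector on `perturbed`; the after/before ratio of F_EO and
    # AUC is what Figure 6 plots (ratio closer to 1.0 = more robust).
```

Each perturbed copy of the test set is then scored by the detector, and the after/before performance ratio is reported per operation.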
+ } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 181, + 70, + 304, + 149 + ], + "blocks": [ + { + "bbox": [ + 181, + 70, + 304, + 149 + ], + "lines": [ + { + "bbox": [ + 181, + 70, + 304, + 149 + ], + "spans": [ + { + "bbox": [ + 181, + 70, + 304, + 149 + ], + "type": "image", + "image_path": "68008447f733340256d211e2ea028e169a18c9aa3abaec3961e24c8e2fd13c1d.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 306, + 70, + 429, + 149 + ], + "blocks": [ + { + "bbox": [ + 306, + 70, + 429, + 149 + ], + "lines": [ + { + "bbox": [ + 306, + 70, + 429, + 149 + ], + "spans": [ + { + "bbox": [ + 306, + 70, + 429, + 149 + ], + "type": "image", + "image_path": "f7f61cca8e81436fef80425da46d9a5460d41d86dcba5847c49004dcf222480b.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 430, + 70, + 553, + 149 + ], + "blocks": [ + { + "bbox": [ + 430, + 70, + 553, + 149 + ], + "lines": [ + { + "bbox": [ + 430, + 70, + 553, + 149 + ], + "spans": [ + { + "bbox": [ + 430, + 70, + 553, + 149 + ], + "type": "image", + "image_path": "578ff42d96fa595ace4f5a6a53b17142d06e0d0d598dc743a023d62a2e304a01.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + } + ], + "index": 3 + }, + { + "type": "table", + "bbox": [ + 62, + 164, + 546, + 301 + ], + "blocks": [ + { + "bbox": [ + 62, + 164, + 546, + 301 + ], + "lines": [ + { + "bbox": [ + 62, + 164, + 546, + 301 + ], + "spans": [ + { + "bbox": [ + 62, + 164, + 546, + 301 + ], + "type": "table", + "html": "
<table><thead><tr><td rowspan="3">Model Type</td><td rowspan="3">Detector</td><td colspan="8">Dataset</td></tr>
<tr><td colspan="2">CCv2 [103]</td><td colspan="3">DF-Platter [16]</td><td colspan="3">GenData [17]</td></tr>
<tr><td>Fairness (%) ↓ F<sub>OAE</sub></td><td>Utility (%) ↑ ACC</td><td>Fairness (%) ↓ F<sub>OAE</sub></td><td>F<sub>EO</sub></td><td>Utility (%) ↑ AUC</td><td>Fairness (%) ↓ F<sub>OAE</sub></td><td>F<sub>EO</sub></td><td>Utility (%) ↑ AUC</td></tr></thead>
<tbody><tr><td rowspan="3">Naive</td><td>Xception</td><td>1.006 (+0.031)</td><td>86.465 (-9.843)</td><td>6.836 (+5.861)</td><td>9.789 (+5.646)</td><td>81.273 (-17.310)</td><td>2.539 (+1.564)</td><td>13.487 (+9.344)</td><td>96.971 (-1.612)</td></tr>
<tr><td>EfficientB4</td><td>4.077 (+0.259)</td><td>82.980 (-11.223)</td><td>8.786 (+7.299)</td><td>12.370 (+6.507)</td><td>67.694 (-30.917)</td><td>3.304 (+1.817)</td><td>1.995 (-3.686)</td><td>93.213 (-5.398)</td></tr>
<tr><td>ViT-B/16</td><td>2.167 (+0.364)</td><td>81.489 (-12.983)</td><td>0.015 (-1.788)</td><td>12.373 (+6.342)</td><td>76.050 (-22.640)</td><td>3.164 (+1.361)</td><td>9.610 (+3.579)</td><td>88.253 (-10.437)</td></tr>
<tr><td rowspan="3">Frequency</td><td>F3Net</td><td>5.743 (+4.614)</td><td>87.867 (-7.852)</td><td>3.521 (+2.392)</td><td>6.445 (+1.575)</td><td>85.112 (-13.602)</td><td>1.188 (+0.059)</td><td>16.306 (+11.436)</td><td>91.603 (-7.111)</td></tr>
<tr><td>SPSL</td><td>0.601 (-0.436)</td><td>80.006 (-16.340)</td><td>5.109 (+4.072)</td><td>7.842 (+3.308)</td><td>82.175 (-16.572)</td><td>1.385 (+0.348)</td><td>9.261 (+4.272)</td><td>98.838 (+0.091)</td></tr>
<tr><td>SRM</td><td>7.000 (+5.228)</td><td>79.768 (-15.324)</td><td>3.823 (+2.051)</td><td>6.567 (-0.213)</td><td>66.401 (-31.535)</td><td>3.281 (+1.509)</td><td>7.907 (+1.127)</td><td>90.049 (-7.887)</td></tr>
<tr><td rowspan="3">Spatial</td><td>UCF</td><td>2.169 (+0.718)</td><td>93.009 (-2.142)</td><td>8.687 (+7.236)</td><td>17.068 (+11.558)</td><td>80.821 (-17.261)</td><td>3.513 (+2.062)</td><td>10.529 (+5.019)</td><td>87.778 (-10.304)</td></tr>
<tr><td>UnivFD</td><td>7.625 (+6.003)</td><td>67.983 (-25.668)</td><td>4.540 (+2.918)</td><td>9.950 (+4.542)</td><td>76.443 (-21.749)</td><td>1.645 (+0.023)</td><td>3.848 (-1.560)</td><td>94.418 (-3.774)</td></tr>
<tr><td>CORE</td><td>4.410 (+3.021)</td><td>83.328 (-12.896)</td><td>7.741 (+6.352)</td><td>17.348 (+11.417)</td><td>77.226 (-21.353)</td><td>3.759 (+2.370)</td><td>23.289 (+17.358)</td><td>98.408 (-0.171)</td></tr>
<tr><td rowspan="3">Fairness-enhanced</td><td>DAW-FDD</td><td>4.726 (+3.401)</td><td>84.685 (-10.741)</td><td>5.536 (+4.211)</td><td>13.667 (+7.791)</td><td>81.807 (-16.004)</td><td>1.443 (+0.118)</td><td>10.228 (+4.532)</td><td>97.854 (+0.043)</td></tr>
<tr><td>DAG-FDD</td><td>2.364 (+0.944)</td><td>83.918 (-11.804)</td><td>3.064 (+1.644)</td><td>22.203 (+16.137)</td><td>75.206 (-23.565)</td><td>0.714 (-0.706)</td><td>10.332 (+4.266)</td><td>92.108 (-6.663)</td></tr>
<tr><td>PG-FDD</td><td>1.513 (+0.442)</td><td>92.852 (-3.322)</td><td>4.565 (+3.494)</td><td>9.717 (+6.015)</td><td>85.271 (-13.901)</td><td>3.063 (+1.992)</td><td>9.479 (+5.777)</td><td>93.329 (-5.843)</td></tr></tbody></table>
", + "image_path": "d49bb99d9380d2e3f49b97da2a2341c9ce5b91983ac5e555cbecef162926305a.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "table_body" + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 303, + 555, + 325 + ], + "lines": [ + { + "bbox": [ + 55, + 303, + 555, + 325 + ], + "spans": [ + { + "bbox": [ + 55, + 303, + 555, + 325 + ], + "type": "text", + "content": "Table 5. Fairness generalization results based on the gender attribute. The smallest performance changes (in parentheses) and the best performance are in red and in bold, respectively. Only " + }, + { + "bbox": [ + 55, + 303, + 555, + 325 + ], + "type": "inline_equation", + "content": "F_{OAE}" + }, + { + "bbox": [ + 55, + 303, + 555, + 325 + ], + "type": "text", + "content": " fairness metric and ACC metric are used in CCv2 due to all samples are real." + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "text" + }, + { + "type": "image", + "bbox": [ + 56, + 329, + 555, + 426 + ], + "blocks": [ + { + "bbox": [ + 56, + 329, + 555, + 426 + ], + "lines": [ + { + "bbox": [ + 56, + 329, + 555, + 426 + ], + "spans": [ + { + "bbox": [ + 56, + 329, + 555, + 426 + ], + "type": "image", + "image_path": "084be84553a199b922a7ec664074fd2869b88a754601e70c2177a9a14abde391.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 58, + 428, + 550, + 440 + ], + "lines": [ + { + "bbox": [ + 58, + 428, + 550, + 440 + ], + "spans": [ + { + "bbox": [ + 58, + 428, + 550, + 440 + ], + "type": "text", + "content": "Figure 7. Impact of the training set size (Left) and the ratio of real and fake (Right) on detectors' intersectional " + }, + { + "bbox": [ + 58, + 428, + 550, + 440 + ], + "type": "inline_equation", + "content": "F_{EO}(\\%)" + }, + { + "bbox": [ + 58, + 428, + 550, + 440 + ], + "type": "text", + "content": " and AUC (\\%)." + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_caption" + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 444, + 297, + 504 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 444, + 297, + 504 + ], + "spans": [ + { + "bbox": [ + 55, + 444, + 297, + 504 + ], + "type": "text", + "content": "studies should aim to develop a unified framework that ensures fairness, robustness, and generalization, as these three characteristics are essential for creating a reliable detector. Moreover, integrating foundation models (e.g., CLIP) into detector design may help mitigate bias." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 55, + 516, + 128, + 529 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 516, + 128, + 529 + ], + "spans": [ + { + "bbox": [ + 55, + 516, + 128, + 529 + ], + "type": "text", + "content": "6. Conclusion" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 54, + 533, + 297, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 533, + 297, + 713 + ], + "spans": [ + { + "bbox": [ + 54, + 533, + 297, + 713 + ], + "type": "text", + "content": "This work presents the first demographically annotated million-scale AI-Face dataset, serving as a pivotal foundation for addressing the urgent need for developing fair AI face detectors. Based on this dataset, we conduct the first comprehensive fairness benchmark, shedding light on the fairness performance and challenges of current representative AI face detectors. Our findings can inspire and guide researchers in refining current models and exploring new methods to mitigate bias. 
Limitation and Future Work: One limitation is that our dataset's annotations are algorithmically generated, so they may lack " + }, + { + "bbox": [ + 54, + 533, + 297, + 713 + ], + "type": "inline_equation", + "content": "100\\%" + }, + { + "bbox": [ + 54, + 533, + 297, + 713 + ], + "type": "text", + "content": " accuracy. This challenge is difficult to resolve, as demographic attributes for most AI-generated faces are often too ambiguous to predict and do not map to real-world individuals. We plan to enhance annotation quality through human labeling in the future. We" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 444, + 555, + 528 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 444, + 555, + 528 + ], + "spans": [ + { + "bbox": [ + 313, + 444, + 555, + 528 + ], + "type": "text", + "content": "also plan to extend our fairness benchmark to evaluate large language models like LLaMA2 [104] and GPT4 [105] for detecting AI faces. Social Impact: Malicious users could misuse AI-generated face images from our dataset to create fake social media profiles and spread misinformation. To mitigate this risk, only users who submit a signed end-user license agreement will be granted access to our dataset." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 531, + 405, + 544 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 531, + 405, + 544 + ], + "spans": [ + { + "bbox": [ + 313, + 531, + 405, + 544 + ], + "type": "text", + "content": "Ethics Statement" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 312, + 545, + 556, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 545, + 556, + 713 + ], + "spans": [ + { + "bbox": [ + 312, + 545, + 556, + 713 + ], + "type": "text", + "content": "Our dataset collection and annotation generation were approved by Purdue's Institutional Review Board. The dataset is only for research purposes. All data included in this work are sourced from publicly available datasets, and we strictly comply with each dataset's license agreement to ensure lawful inclusion and permissible secondary use for training and testing. All collected data and their associated licenses are mentioned in the Datasheet of AI-Face in Appendix E. Our annotation processes prioritize ethical considerations: 1) " + }, + { + "bbox": [ + 312, + 545, + 556, + 713 + ], + "type": "inline_equation", + "content": "76\\%" + }, + { + "bbox": [ + 312, + 545, + 556, + 713 + ], + "type": "text", + "content": " of the images we annotated are generated facial images, ensuring no potential for harm to any individual. 2) For real images, we only provide annotations for content either licensed by the original copyright holders or explicitly stated as freely shareable for research purposes."
+ } + ] + } + ], + "index": 14 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "text", + "content": "3510" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 153, + 85 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 72, + 153, + 85 + ], + "spans": [ + { + "bbox": [ + 56, + 72, + 153, + 85 + ], + "type": "text", + "content": "Acknowledgments" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 86, + 297, + 169 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 86, + 297, + 169 + ], + "spans": [ + { + "bbox": [ + 55, + 86, + 297, + 169 + ], + "type": "text", + "content": "This work is supported by the U.S. National Science Foundation (NSF) under grant IIS-2434967 and the National Artificial Intelligence Research Resource (NAIRR) Pilot and TACC Lonestar6. The views, opinions and/or findings expressed are those of the author and should not be interpreted as representing the official views or policies of NSF and NAIRR Pilot." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 179, + 115, + 192 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 179, + 115, + 192 + ], + "spans": [ + { + "bbox": [ + 56, + 179, + 115, + 192 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 62, + 199, + 297, + 713 + ], + "type": "list", + "angle": 0, + "index": 15, + "blocks": [ + { + "bbox": [ + 66, + 199, + 297, + 243 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 199, + 297, + 243 + ], + "spans": [ + { + "bbox": [ + 66, + 199, + 297, + 243 + ], + "type": "text", + "content": "[1] L. Lin, N. Gupta, Y. Zhang, H. Ren, C.-H. Liu, F. Ding, X. Wang, X. Li, L. Verdoliva, and S. Hu, \"Detecting multimedia generated by large ai models: A survey,\" arXiv preprint arXiv:2402.00045, 2024. 1" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 66, + 245, + 296, + 298 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 245, + 296, + 298 + ], + "spans": [ + { + "bbox": [ + 66, + 245, + 296, + 298 + ], + "type": "text", + "content": "[2] A. Rossler, D. Cozzolino, L. Verdoliva, C. Riess, J. Thies, and M. Nießner, “Faceforensics++: Learning to detect manipulated facial images,” in Proceedings of the IEEE/CVF international conference on computer vision, pp. 1-11, 2019, 1, 2, 3, 18, 19" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 66, + 300, + 296, + 332 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 300, + 296, + 332 + ], + "spans": [ + { + "bbox": [ + 66, + 300, + 296, + 332 + ], + "type": "text", + "content": "[3] “Deepfakes github.” https://github.com/deepfakes/faceswap. Accessed: 2024-04-17. 1,2" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 66, + 334, + 296, + 355 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 334, + 296, + 355 + ], + "spans": [ + { + "bbox": [ + 66, + 334, + 296, + 355 + ], + "type": "text", + "content": "[4] “Fakeapp.” https://www.fakeapp.com/. Accessed: 2024-04-17. 
1, 2" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 66, + 357, + 296, + 399 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 357, + 296, + 399 + ], + "spans": [ + { + "bbox": [ + 66, + 357, + 296, + 399 + ], + "type": "text", + "content": "[5] A. Brock, J. Donahue, and K. Simonyan, \"Large scale gan training for high fidelity natural image synthesis,\" in 7th International Conference on Learning Representations, ICLR 2019, 2019. 1" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 66, + 402, + 296, + 445 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 402, + 296, + 445 + ], + "spans": [ + { + "bbox": [ + 66, + 402, + 296, + 445 + ], + "type": "text", + "content": "[6] T. Karras, S. Laine, and T. Aila, “A style-based generator architecture for generative adversarial networks,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 4401–4410, 2019. 2, 3, 18" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 66, + 446, + 296, + 499 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 446, + 296, + 499 + ], + "spans": [ + { + "bbox": [ + 66, + 446, + 296, + 499 + ], + "type": "text", + "content": "[7] T. Karras, S. Laine, M. Aittala, J. Hellsten, J. Lehtinen, and T. Aila, \"Analyzing and improving the image quality of stylegan,\" in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 8110-8119, 2020." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 66, + 502, + 296, + 545 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 502, + 296, + 545 + ], + "spans": [ + { + "bbox": [ + 66, + 502, + 296, + 545 + ], + "type": "text", + "content": "[8] T. Karras, M. Aittala, S. Laine, E. Härkönen, J. Hellsten, J. Lehtinen, and T. Aila, \"Alias-free generative adversarial networks,\" Advances in neural information processing systems, vol. 34, pp. 852-863, 2021. 1, 2" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 66, + 547, + 296, + 600 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 547, + 296, + 600 + ], + "spans": [ + { + "bbox": [ + 66, + 547, + 296, + 600 + ], + "type": "text", + "content": "[9] R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer, \"High-resolution image synthesis with latent diffusion models,\" in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 10684-10695, 2022. 1" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 62, + 602, + 296, + 645 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 602, + 296, + 645 + ], + "spans": [ + { + "bbox": [ + 62, + 602, + 296, + 645 + ], + "type": "text", + "content": "[10] D. J. Tojin T. Eapen, “How generative ai can augment human creativity.” https://hbr.org/2023/07/how-generative-ai-can-augment-human-creativity, 2023. Accessed: 2024-04-21. 1, 2" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 62, + 647, + 296, + 689 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 647, + 296, + 689 + ], + "spans": [ + { + "bbox": [ + 62, + 647, + 296, + 689 + ], + "type": "text", + "content": "[11] B. News, \"Trump supporters target black voters with faked ai images.\" https://www.bbc.com/news/world-us-canada-68440150, 2024. Accessed: 2023-05-09. 
1" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 62, + 691, + 296, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 691, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 62, + 691, + 296, + 713 + ], + "type": "text", + "content": "[12] H. S. Sætra, “Generative ai: Here to stay, but for good?,” Technology in Society, vol. 75, p. 102372, 2023. 1" + } + ] + } + ], + "index": 14 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 320, + 73, + 555, + 713 + ], + "type": "list", + "angle": 0, + "index": 31, + "blocks": [ + { + "bbox": [ + 320, + 73, + 555, + 105 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 73, + 555, + 105 + ], + "spans": [ + { + "bbox": [ + 320, + 73, + 555, + 105 + ], + "type": "text", + "content": "[13] M. Westerlund, “The emergence of deepfake technology: A review,” Technology innovation management review, vol. 9, no. 11, 2019. 1, 2" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 321, + 107, + 555, + 162 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 321, + 107, + 555, + 162 + ], + "spans": [ + { + "bbox": [ + 321, + 107, + 555, + 162 + ], + "type": "text", + "content": "[14] L. Jiang, R. Li, W. Wu, C. Qian, and C. C. Loy, \"Deeperforensics-1.0: A large-scale dataset for real-world face forgery detection,\" in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 2889-2898, 2020. 2" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 321, + 163, + 555, + 206 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 321, + 163, + 555, + 206 + ], + "spans": [ + { + "bbox": [ + 321, + 163, + 555, + 206 + ], + "type": "text", + "content": "[15] K. Narayan, H. Agarwal, K. Thakral, S. Mittal, M. Vatsa, and R. Singh, \"Deeply: On deepfake phylogeny,\" in 2022 IEEE International Joint Conference on Biometrics (IJCB), pp. 1-10, IEEE, 2022. 2" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 321, + 208, + 554, + 261 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 321, + 208, + 554, + 261 + ], + "spans": [ + { + "bbox": [ + 321, + 208, + 554, + 261 + ], + "type": "text", + "content": "[16] K. Narayan, H. Agarwal, K. Thakral, S. Mittal, M. Vatsa, and R. Singh, \"Df-platter: multi-face heterogeneous deepfake dataset,\" in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9739-9748, 2023. 2, 7, 8" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 321, + 264, + 555, + 297 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 321, + 264, + 555, + 297 + ], + "spans": [ + { + "bbox": [ + 321, + 264, + 555, + 297 + ], + "type": "text", + "content": "[17] C. Teo, M. Abdollahzadeh, and N.-M. M. Cheung, “On measuring fairness in generative models,” Advances in Neural Information Processing Systems, vol. 36, 2023. 1, 2, 7, 8" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 321, + 299, + 555, + 331 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 321, + 299, + 555, + 331 + ], + "spans": [ + { + "bbox": [ + 321, + 299, + 555, + 331 + ], + "type": "text", + "content": "[18] Z. Liu, P. Luo, X. Wang, and X. Tang, \"Deep learning face attributes in the wild,\" in Proceedings of International Conference on Computer Vision (ICCV), December 2015. 
2" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 321, + 332, + 555, + 374 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 321, + 332, + 555, + 374 + ], + "spans": [ + { + "bbox": [ + 321, + 332, + 555, + 374 + ], + "type": "text", + "content": "[19] Y. Xu, P. Terhöst, M. Pedersen, and K. Raja, \"Analyzing fairness in deepfake detection with massively annotated databases,\" IEEE Transactions on Technology and Society, 2024. 1, 2, 14" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 321, + 376, + 555, + 420 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 321, + 376, + 555, + 420 + ], + "spans": [ + { + "bbox": [ + 321, + 376, + 555, + 420 + ], + "type": "text", + "content": "[20] R. Rothe, R. Timofte, and L. Van Gool, “Dex: Deep expectation of apparent age from a single image,” in Proceedings of the IEEE international conference on computer vision workshops, pp. 10–15, 2015. 2, 3, 4, 18" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 321, + 422, + 555, + 464 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 321, + 422, + 555, + 464 + ], + "spans": [ + { + "bbox": [ + 321, + 422, + 555, + 464 + ], + "type": "text", + "content": "[21] B. Dolhansky, J. Bitton, B. Pflaum, J. Lu, R. Howes, M. Wang, and C. C. Ferrer, \"The deepfake detection challenge (dfdc) dataset,\" arXiv preprint arXiv:2006.07397, 2020. 2, 3, 18, 19" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 321, + 467, + 555, + 488 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 321, + 467, + 555, + 488 + ], + "spans": [ + { + "bbox": [ + 321, + 467, + 555, + 488 + ], + "type": "text", + "content": "[22] G. Research, \"Contributing data to deepfake detection research,\" 2019. Accessed: 2024-04-12. 2, 3, 18, 19" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 321, + 490, + 555, + 543 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 321, + 490, + 555, + 543 + ], + "spans": [ + { + "bbox": [ + 321, + 490, + 555, + 543 + ], + "type": "text", + "content": "[23] Y. Li, X. Yang, P. Sun, H. Qi, and S. Lyu, \"Celeb-df: A large-scale challenging dataset for deepfake forensics,\" in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 3207-3216, 2020. 2, 3, 18, 19" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 321, + 545, + 555, + 589 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 321, + 545, + 555, + 589 + ], + "spans": [ + { + "bbox": [ + 321, + 545, + 555, + 589 + ], + "type": "text", + "content": "[24] W. Pu, J. Hu, X. Wang, Y. Li, S. Hu, B. Zhu, R. Song, Q. Song, X. Wu, and S. Lyu, \"Learning a deep dual-level network for robust deepfake detection,\" Pattern Recognition, vol. 130, p. 108832, 2022. 1, 2" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 321, + 591, + 554, + 623 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 321, + 591, + 554, + 623 + ], + "spans": [ + { + "bbox": [ + 321, + 591, + 554, + 623 + ], + "type": "text", + "content": "[25] H. Guo, S. Hu, X. Wang, M.-C. Chang, and S. Lyu, \"Robust attentive deep neural network for detecting gan-generated faces,\" IEEE Access, vol. 10, pp. 32574-32583, 2022." + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 321, + 624, + 555, + 678 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 321, + 624, + 555, + 678 + ], + "spans": [ + { + "bbox": [ + 321, + 624, + 555, + 678 + ], + "type": "text", + "content": "[26] Z. Yan, Y. 
Zhang, Y. Fan, and B. Wu, \"Ucf: Uncovering common features for generalizable deepfake detection,\" in Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 22412-22423, 2023. 5, 6, 19, 21, 23, 24, 25, 26, 27" + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 321, + 681, + 555, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 321, + 681, + 555, + 713 + ], + "spans": [ + { + "bbox": [ + 321, + 681, + 555, + 713 + ], + "type": "text", + "content": "[27] L. Papa, L. Faiella, L. Corvitto, L. Maiano, and I. Amerini, \"On the use of stable diffusion for creating realistic faces: from generation to detection,\" in 2023 11th International" + } + ] + } + ], + "index": 30 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 295, + 748, + 314, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 748, + 314, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 748, + 314, + 757 + ], + "type": "text", + "content": "3511" + } + ] + } + ], + "index": 32 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 61, + 73, + 296, + 713 + ], + "type": "list", + "angle": 0, + "index": 17, + "blocks": [ + { + "bbox": [ + 81, + 73, + 296, + 95 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 81, + 73, + 296, + 95 + ], + "spans": [ + { + "bbox": [ + 81, + 73, + 296, + 95 + ], + "type": "text", + "content": "Workshop on Biometrics and Forensics (IWBF), pp. 1-6, IEEE, 2023. 1, 2" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 61, + 96, + 295, + 118 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 96, + 295, + 118 + ], + "spans": [ + { + "bbox": [ + 61, + 96, + 295, + 118 + ], + "type": "text", + "content": "[28] L. Trinh and Y. Liu, “An examination of fairness of ai models for deepfake detection,” IJCAI, 2021. 1, 2, 3" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 61, + 119, + 296, + 173 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 119, + 296, + 173 + ], + "spans": [ + { + "bbox": [ + 61, + 119, + 296, + 173 + ], + "type": "text", + "content": "[29] Y. Ju, S. Hu, S. Jia, G. H. Chen, and S. Lyu, “Improving fairness in deepfake detection,” in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 4655–4665, 2024. 1, 2, 5, 6, 21, 23, 24, 25, 26, 27" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 62, + 175, + 296, + 207 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 175, + 296, + 207 + ], + "spans": [ + { + "bbox": [ + 62, + 175, + 296, + 207 + ], + "type": "text", + "content": "[30] L. Lin, X. He, Y. Ju, X. Wang, F. Ding, and S. Hu, “Preserving fairness generalization in deepfake detection,” CVPR, 2024. 1, 2, 5, 6, 19, 21, 23, 24, 25, 26, 27" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 62, + 209, + 296, + 241 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 209, + 296, + 241 + ], + "spans": [ + { + "bbox": [ + 62, + 209, + 296, + 241 + ], + "type": "text", + "content": "[31] Z. Yan, T. Yao, S. Chen, Y. Zhao, X. Fu, J. Zhu, D. Luo, L. Yuan, C. Wang, S. Ding, et al., \"Df40: Toward next-generation deepfake detection,\" NeurIPS, 2024. 
1, 2, 3" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 62, + 243, + 296, + 285 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 243, + 296, + 285 + ], + "spans": [ + { + "bbox": [ + 62, + 243, + 296, + 285 + ], + "type": "text", + "content": "[32] C. Li et al., “A continual deepfake detection benchmark: Dataset, methods, and essentials,” in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1339–1349, 2023. 2, 3" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 62, + 287, + 296, + 330 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 287, + 296, + 330 + ], + "spans": [ + { + "bbox": [ + 62, + 287, + 296, + 330 + ], + "type": "text", + "content": "[33] J. Deng, C. Lin, P. Hu, C. Shen, Q. Wang, Q. Li, and Q. Li, \"Towards benchmarking and evaluating deepfake detection,\" IEEE Transactions on Dependable and Secure Computing, 2024. 3" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 62, + 332, + 296, + 364 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 332, + 296, + 364 + ], + "spans": [ + { + "bbox": [ + 62, + 332, + 296, + 364 + ], + "type": "text", + "content": "[34] Z. Yan, Y. Zhang, X. Yuan, S. Lyu, and B. Wu, \"Deepfakebench: A comprehensive benchmark of deepfake detection,\" in NeurIPS, 2023. 3, 6" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 62, + 365, + 296, + 398 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 365, + 296, + 398 + ], + "spans": [ + { + "bbox": [ + 62, + 365, + 296, + 398 + ], + "type": "text", + "content": "[35] B. M. Le, J. Kim, S. Tariq, K. Moore, A. Abuadbba, and S. S. Woo, \"Sok: Facial deepfake detectors,\" arXiv, 2024. 2, 3" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 62, + 399, + 296, + 453 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 399, + 296, + 453 + ], + "spans": [ + { + "bbox": [ + 62, + 399, + 296, + 453 + ], + "type": "text", + "content": "[36] C. Hazirbas, J. Bitton, B. Dolhansky, J. Pan, A. Gordo, and C. C. Ferrer, \"Towards measuring fairness in ai: the casual conversations dataset,\" IEEE Transactions on Biometrics, Behavior, and Identity Science, vol. 4, no. 3, pp. 324-332, 2021. 2, 3" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 62, + 456, + 296, + 510 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 456, + 296, + 510 + ], + "spans": [ + { + "bbox": [ + 62, + 456, + 296, + 510 + ], + "type": "text", + "content": "[37] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, et al., \"Learning transferable visual models from natural language supervision,\" in International conference on machine learning, pp. 8748-8763, PMLR, 2021. 2, 4" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 62, + 512, + 296, + 555 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 512, + 296, + 555 + ], + "spans": [ + { + "bbox": [ + 62, + 512, + 296, + 555 + ], + "type": "text", + "content": "[38] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, \"Generative adversarial nets,\" Advances in neural information processing systems, vol. 27, 2014. 
2" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 62, + 556, + 296, + 577 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 556, + 296, + 577 + ], + "spans": [ + { + "bbox": [ + 62, + 556, + 296, + 577 + ], + "type": "text", + "content": "[39] “Midjourney.” https://mid-journey.ai/. Accessed: 2024-04-17. 2" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 62, + 579, + 296, + 611 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 579, + 296, + 611 + ], + "spans": [ + { + "bbox": [ + 62, + 579, + 296, + 611 + ], + "type": "text", + "content": "[40] A. Ramesh et al., \"Hierarchical text-conditional image generation with clip latents,\" arXiv, vol. 1, no. 2, p. 3, 2022. 2" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 62, + 613, + 296, + 645 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 613, + 296, + 645 + ], + "spans": [ + { + "bbox": [ + 62, + 613, + 296, + 645 + ], + "type": "text", + "content": "[41] D. O'Sullivan, \"A high school student created a fake 2020 us candidate. twitter verified it.\" https://cnn.it/3HpHfzz, 2020. Accessed: 2024-04-21. 2" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 62, + 647, + 296, + 689 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 647, + 296, + 689 + ], + "spans": [ + { + "bbox": [ + 62, + 647, + 296, + 689 + ], + "type": "text", + "content": "[42] S. Bond, \"That smiling linkedin profile face might be a computer-generated fake.\" https://www.npr.org/2022/03/27/1088140809/fake-linkedin-profiles, 2022. Accessed: 2024-04-21. 2" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 62, + 692, + 296, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 692, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 62, + 692, + 296, + 713 + ], + "type": "text", + "content": "[43] V. Albiero, K. Bowyer, K. Vangara, and M. King, \"Does face recognition accuracy get better with age? deep face matchers" + } + ] + } + ], + "index": 16 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 320, + 73, + 555, + 713 + ], + "type": "list", + "angle": 0, + "index": 34, + "blocks": [ + { + "bbox": [ + 339, + 73, + 553, + 95 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 339, + 73, + 553, + 95 + ], + "spans": [ + { + "bbox": [ + 339, + 73, + 553, + 95 + ], + "type": "text", + "content": "say no,\" in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 261-269, 2020. 3" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 321, + 97, + 555, + 150 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 321, + 97, + 555, + 150 + ], + "spans": [ + { + "bbox": [ + 321, + 97, + 555, + 150 + ], + "type": "text", + "content": "[44] V. Albiero, K. Ks, K. Vangara, K. Zhang, M. C. King, and K. W. Bowyer, \"Analysis of gender inequality in face recognition accuracy,\" in Proceedings of the IEEE/cvf winter conference on applications of computer vision workshops, pp. 81-89, 2020." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 320, + 154, + 555, + 218 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 154, + 555, + 218 + ], + "spans": [ + { + "bbox": [ + 320, + 154, + 555, + 218 + ], + "type": "text", + "content": "[45] C. M. Cook, J. J. Howard, Y. B. Sirotin, J. L. Tipton, and A. R. 
Vermury, \"Demographic effects in facial recognition and their dependence on image acquisition: An evaluation of eleven commercial systems,\" IEEE Transactions on Biometrics, Behavior, and Identity Science, vol. 1, no. 1, pp. 32-41, 2019. 3, 14" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 320, + 220, + 554, + 264 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 220, + 554, + 264 + ], + "spans": [ + { + "bbox": [ + 320, + 220, + 554, + 264 + ], + "type": "text", + "content": "[46] K. Krishnapriya, V. Albiero, K. Vangara, M. C. King, and K. W. Bowyer, “Issues related to face recognition accuracy varying based on race and skin tone,” IEEE Transactions on Technology and Society, vol. 1, no. 1, pp. 8–20, 2020. 14" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 320, + 266, + 555, + 310 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 266, + 555, + 310 + ], + "spans": [ + { + "bbox": [ + 320, + 266, + 555, + 310 + ], + "type": "text", + "content": "[47] B. Porgali, V. Albiero, J. Ryda, C. C. Ferrer, and C. Hazirbas, \"The casual conversations v2 dataset,\" in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pp. 10-17, June 2023. 3" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 320, + 312, + 554, + 334 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 312, + 554, + 334 + ], + "spans": [ + { + "bbox": [ + 320, + 312, + 554, + 334 + ], + "type": "text", + "content": "[48] Google, “The Monk Skin Tone Scale,” 2024. [Accessed October 16, 2024]. 3, 14" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 320, + 336, + 555, + 368 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 336, + 555, + 368 + ], + "spans": [ + { + "bbox": [ + 320, + 336, + 555, + 368 + ], + "type": "text", + "content": "[49] United States Department of State — Bureau of Consular Affairs, “Selecting your gender marker - travel,” 2022. [Accessed October 16, 2024]. 3, 14" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 320, + 371, + 554, + 403 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 371, + 554, + 403 + ], + "spans": [ + { + "bbox": [ + 320, + 371, + 554, + 403 + ], + "type": "text", + "content": "[50] Australian Bureau of Statistics, \"Standard for Sex, Gender, Variations of Sex Characteristics and Sexual Orientation Variables,\" 2024. [Accessed October 16, 2024]. 3, 14" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 320, + 405, + 555, + 471 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 405, + 555, + 471 + ], + "spans": [ + { + "bbox": [ + 320, + 405, + 555, + 471 + ], + "type": "text", + "content": "[51] J. J. Howard, Y. B. Sirotin, and A. R. Vemury, “The effect of broad and specific demographic homogeneity on the imposter distributions and false match rates in face recognition algorithm performance,” in 2019 IEEE 10th international conference on biometrics theory, applications and systems (btas), pp. 1–8, IEEE, 2019. 3, 14" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 320, + 473, + 555, + 527 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 473, + 555, + 527 + ], + "spans": [ + { + "bbox": [ + 320, + 473, + 555, + 527 + ], + "type": "text", + "content": "[52] I. D. Raji and J. 
Buolamwini, \"Actionable auditing: Investigating the impact of publicly naming biased performance results of commercial ai products,\" in Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, pp. 429-435, 2019. 3, 14" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 320, + 529, + 555, + 562 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 529, + 555, + 562 + ], + "spans": [ + { + "bbox": [ + 320, + 529, + 555, + 562 + ], + "type": "text", + "content": "[53] United Nations, “Provisional Guidelines on Standard International Age Classifications,” 1982. [Accessed October 16, 2024]. 3, 14" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 320, + 564, + 555, + 586 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 564, + 555, + 586 + ], + "spans": [ + { + "bbox": [ + 320, + 564, + 555, + 586 + ], + "type": "text", + "content": "[54] Statistics Canada, \"Age Categories, Life Cycle Groupings,\" 2017. [Accessed October 16, 2024]. 3, 14" + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 320, + 588, + 555, + 620 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 588, + 555, + 620 + ], + "spans": [ + { + "bbox": [ + 320, + 588, + 555, + 620 + ], + "type": "text", + "content": "[55] O. Giudice, L. Guarnera, and S. Battiato, “Fighting deep-fakes by detecting gan dct anomalies,” Journal of Imaging, vol. 7, no. 8, p. 128, 2021. 3, 18" + } + ] + } + ], + "index": 30 + }, + { + "bbox": [ + 320, + 622, + 555, + 666 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 622, + 555, + 666 + ], + "spans": [ + { + "bbox": [ + 320, + 622, + 555, + 666 + ], + "type": "text", + "content": "[56] V. Asnani, X. Yin, T. Hassner, and X. Liu, \"Reverse engineering of generative models: Inferring model hyperparameters from generated images,\" IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023. 3, 18" + } + ] + } + ], + "index": 31 + }, + { + "bbox": [ + 320, + 668, + 555, + 689 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 668, + 555, + 689 + ], + "spans": [ + { + "bbox": [ + 320, + 668, + 555, + 689 + ], + "type": "text", + "content": "[57] D. Beniaguev, “Synthetic faces high quality (sfhq) dataset,” 2022. 3, 18" + } + ] + } + ], + "index": 32 + }, + { + "bbox": [ + 320, + 691, + 554, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 691, + 554, + 713 + ], + "spans": [ + { + "bbox": [ + 320, + 691, + 554, + 713 + ], + "type": "text", + "content": "[58] Z. Lu, D. Huang, L. Bai, J. Qu, C. Wu, X. Liu, and W. 
Ouyang, \"Seeing is not always believing: Benchmarking" + } + ] + } + ], + "index": 33 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "text", + "content": "3512" + } + ] + } + ], + "index": 35 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "bbox": [ + 61, + 72, + 297, + 712 + ], + "type": "list", + "angle": 0, + "index": 14, + "blocks": [ + { + "bbox": [ + 80, + 72, + 297, + 105 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 80, + 72, + 297, + 105 + ], + "spans": [ + { + "bbox": [ + 80, + 72, + 297, + 105 + ], + "type": "text", + "content": "human and model perception of ai-generated images,\" Advances in Neural Information Processing Systems, vol. 36, 2024. 3, 18" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 61, + 106, + 297, + 149 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 106, + 297, + 149 + ], + "spans": [ + { + "bbox": [ + 61, + 106, + 297, + 149 + ], + "type": "text", + "content": "[59] L. M. Dang, S. I. Hassan, S. Im, J. Lee, S. Lee, and H. Moon, \"Deep learning based computer generated face identification using convolutional neural network,\" Applied Sciences, vol. 8, no. 12, p. 2610, 2018. 3, 18" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 61, + 150, + 296, + 194 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 150, + 296, + 194 + ], + "spans": [ + { + "bbox": [ + 61, + 150, + 296, + 194 + ], + "type": "text", + "content": "[60] P. Esser, R. Rombach, and B. Ommer, “Taming transformers for high-resolution image synthesis,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 12873–12883, 2021. 3, 18" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 61, + 194, + 296, + 227 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 194, + 296, + 227 + ], + "spans": [ + { + "bbox": [ + 61, + 194, + 296, + 227 + ], + "type": "text", + "content": "[61] Z. Wang, J. Bao, W. Zhou, W. Wang, H. Hu, H. Chen, and H. Li, \"Dire for diffusion-generated image detection,\" arXiv preprint arXiv:2303.09295, 2023. 3, 18" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 61, + 228, + 296, + 281 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 228, + 296, + 281 + ], + "spans": [ + { + "bbox": [ + 61, + 228, + 296, + 281 + ], + "type": "text", + "content": "[62] M. Kim, F. Liu, A. Jain, and X. Liu, \"Dface: Synthetic face generation with dual condition diffusion model,\" in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12715-12725, 2023. 3, 18" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 61, + 282, + 296, + 337 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 282, + 296, + 337 + ], + "spans": [ + { + "bbox": [ + 61, + 282, + 296, + 337 + ], + "type": "text", + "content": "[63] R. Corvi, D. Cozzolino, G. Zingarini, G. Poggi, K. Nagano, and L. Verdoliva, \"On the detection of synthetic images generated by diffusion models,\" in ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1-5, IEEE, 2023. 
3, 18" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 61, + 338, + 296, + 392 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 338, + 296, + 392 + ], + "spans": [ + { + "bbox": [ + 61, + 338, + 296, + 392 + ], + "type": "text", + "content": "[64] M. Awsafur Rahman, B. Paul, N. Haque Sarker, Z. I. A. Hakim, and S. Anowarul Fattah, \"Artifact: A large-scale dataset with artificial and factual images for generalizable and robust synthetic image detection,\" arXiv e-prints, pp. arXiv-2302, 2023. 3, 18" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 61, + 393, + 296, + 435 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 393, + 296, + 435 + ], + "spans": [ + { + "bbox": [ + 61, + 393, + 296, + 435 + ], + "type": "text", + "content": "[65] H. Song, S. Huang, Y. Dong, and W.-W. Tu, \"Robustness and generalizability of deepfake detection: A study with diffusion models,\" arXiv preprint arXiv:2309.02218, 2023. 3, 18" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 61, + 437, + 296, + 491 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 437, + 296, + 491 + ], + "spans": [ + { + "bbox": [ + 61, + 437, + 296, + 491 + ], + "type": "text", + "content": "[66] J. Deng, J. Guo, E. Ververas, I. Kotsia, and S. Zafeiriou, \"Retinaface: Single-shot multi-level face localisation in the wild,\" in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 5203-5212, 2020. 3, 31, 32" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 61, + 492, + 296, + 547 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 492, + 296, + 547 + ], + "spans": [ + { + "bbox": [ + 61, + 492, + 296, + 547 + ], + "type": "text", + "content": "[67] K. S. Krishnapriya, G. Pangelinan, M. C. King, and K. W. Bowyer, \"Analysis of manual and automated skin tone assignments,\" in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) Workshops, pp. 429-438, January 2022. 3" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 61, + 548, + 296, + 591 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 548, + 296, + 591 + ], + "spans": [ + { + "bbox": [ + 61, + 548, + 296, + 591 + ], + "type": "text", + "content": "[68] W. Thong, P. Joniak, and A. Xiang, \"Beyond skin tone: A multidimensional measure of apparent skin color,\" in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 4903-4913, October 2023. 3" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 61, + 592, + 296, + 635 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 592, + 296, + 635 + ], + "spans": [ + { + "bbox": [ + 61, + 592, + 296, + 635 + ], + "type": "text", + "content": "[69] C. Lugaresi, J. Tang, H. Nash, C. McClanahan, E. Uboweja, M. Hays, F. Zhang, C.-L. Chang, M. G. Yong, J. Lee, et al., \"Mediapipe: A framework for building perception pipelines,\" arXiv preprint arXiv:1906.08172, 2019. 3" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 61, + 636, + 296, + 678 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 636, + 296, + 678 + ], + "spans": [ + { + "bbox": [ + 61, + 636, + 296, + 678 + ], + "type": "text", + "content": "[70] J. A. Hartigan and M. A. Wong, \"Algorithm as 136: A k-means clustering algorithm,\" Journal of the royal statistical society. series c (applied statistics), vol. 28, no. 1, pp. 100-108, 1979. 
3" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 61, + 680, + 296, + 712 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 680, + 296, + 712 + ], + "spans": [ + { + "bbox": [ + 61, + 680, + 296, + 712 + ], + "type": "text", + "content": "[71] Megvii Technology Limited, “Face++ Face Detection.” https://www(faceplusplus.com/face-detection/. Accessed: 2024-03. 3, 14, 18, 19" + } + ] + } + ], + "index": 13 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 320, + 73, + 556, + 712 + ], + "type": "list", + "angle": 0, + "index": 29, + "blocks": [ + { + "bbox": [ + 320, + 73, + 556, + 105 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 73, + 556, + 105 + ], + "spans": [ + { + "bbox": [ + 320, + 73, + 556, + 105 + ], + "type": "text", + "content": "[72] InsightFace Project Contributors, \"InsightFace: State-of-the-Art Face Analysis Toolbox.\" https://insightface.ai/. Accessed: 2024-03. 3, 14, 18, 19" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 320, + 106, + 555, + 160 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 106, + 555, + 160 + ], + "spans": [ + { + "bbox": [ + 320, + 106, + 555, + 160 + ], + "type": "text", + "content": "[73] G. Ilharco, M. Wortsman, R. Wightman, C. Gordon, N. Carlini, R. Taori, A. Dave, V. Shankar, H. Namkoong, J. Miller, H. Hajishirzi, A. Farhadi, and L. Schmidt, \"Open clip.\" https://github.com/mlfoundations/open Clip, 2021.4.6.21" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 320, + 161, + 555, + 204 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 161, + 555, + 204 + ], + "spans": [ + { + "bbox": [ + 320, + 161, + 555, + 204 + ], + "type": "text", + "content": "[74] K. Cao, C. Wei, A. Gaidon, N. Arechiga, and T. Ma, \"Learning imbalanced datasets with label-distribution-aware margin loss,\" Advances in neural information processing systems, vol. 32, 2019. 4" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 320, + 205, + 554, + 249 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 205, + 554, + 249 + ], + "spans": [ + { + "bbox": [ + 320, + 205, + 554, + 249 + ], + "type": "text", + "content": "[75] S. Agarwal, G. Krueger, J. Clark, A. Radford, J. W. Kim, and M. Brundage, “Evaluating clip: towards characterization of broader capabilities and downstream implications,” arXiv preprint arXiv:2108.02818, 2021. 4" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 320, + 250, + 555, + 303 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 250, + 555, + 303 + ], + "spans": [ + { + "bbox": [ + 320, + 250, + 555, + 303 + ], + "type": "text", + "content": "[76] M. M. Tanjim, K. K. Singh, K. Kafle, R. Sinha, and G. W. Cottrell, “Discovering and mitigating biases in clip-based image editing,” in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 2984–2993, 2024." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 320, + 304, + 555, + 348 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 304, + 555, + 348 + ], + "spans": [ + { + "bbox": [ + 320, + 304, + 555, + 348 + ], + "type": "text", + "content": "[77] J. Wang and G. Kang, “Learn to rectify the bias of clip for unsupervised semantic segmentation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4102–4112, 2024. 
4" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 320, + 349, + 555, + 392 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 349, + 555, + 392 + ], + "spans": [ + { + "bbox": [ + 320, + 349, + 555, + 392 + ], + "type": "text", + "content": "[78] G. R. Kini, O. Paraskevas, S. Oymak, and C. Thrampoulidis, \"Label-imbalanced and group-sensitive classification under overparameterization,\" Advances in Neural Information Processing Systems, vol. 34, pp. 18970-18983, 2021. 4" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 320, + 393, + 555, + 435 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 393, + 555, + 435 + ], + "spans": [ + { + "bbox": [ + 320, + 393, + 555, + 435 + ], + "type": "text", + "content": "[79] G. Peyré, M. Cuturi, et al., \"Computational optimal transport: With applications to data science,\" Foundations and Trends® in Machine Learning, vol. 11, no. 5-6, pp. 355-607, 2019. 4" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 320, + 437, + 555, + 470 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 437, + 555, + 470 + ], + "spans": [ + { + "bbox": [ + 320, + 437, + 555, + 470 + ], + "type": "text", + "content": "[80] M. Cuturi, \"Sinkhorn distances: Lightspeed computation of optimal transport,\" Advances in neural information processing systems, vol. 26, 2013. 4" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 320, + 471, + 555, + 514 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 471, + 555, + 514 + ], + "spans": [ + { + "bbox": [ + 320, + 471, + 555, + 514 + ], + "type": "text", + "content": "[81] P. Foret, A. Kleiner, H. Mobahi, and B. Neyshabur, \"Sharpness-aware minimization for efficiently improving generalization,\" in International Conference on Learning Representations, 2020. 4" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 320, + 515, + 555, + 557 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 515, + 555, + 557 + ], + "spans": [ + { + "bbox": [ + 320, + 515, + 555, + 557 + ], + "type": "text", + "content": "[82] F. Chollet, “Xception: Deep learning with depthwise separable convolutions,” in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1251–1258, 2017. 5, 6, 19, 21, 23, 24, 25, 26, 27" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 320, + 558, + 555, + 602 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 558, + 555, + 602 + ], + "spans": [ + { + "bbox": [ + 320, + 558, + 555, + 602 + ], + "type": "text", + "content": "[83] M. Tan and Q. Le, \"Efficientnet: Rethinking model scaling for convolutional neural networks,\" in International conference on machine learning, pp. 6105-6114, PMLR, 2019. 5, 6, 19, 21, 23, 24, 25, 26, 27" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 320, + 603, + 555, + 668 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 603, + 555, + 668 + ], + "spans": [ + { + "bbox": [ + 320, + 603, + 555, + 668 + ], + "type": "text", + "content": "[84] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, et al., \"An image is worth 16x16 words: Transformers for image recognition at scale,\" in 9th International Conference on Learning Representations, 2021. 
5, 6, 20, 21, 23, 24, 25, 26, 27" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 320, + 669, + 555, + 712 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 669, + 555, + 712 + ], + "spans": [ + { + "bbox": [ + 320, + 669, + 555, + 712 + ], + "type": "text", + "content": "[85] Y. Qian, G. Yin, L. Sheng, Z. Chen, and J. Shao, “Thinking in frequency: Face forgery detection by mining frequency-aware clues,” in European conference on computer vision, pp. 86–103, Springer, 2020. 5, 6, 21, 23, 24, 25, 26, 27" + } + ] + } + ], + "index": 28 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "text", + "content": "3513" + } + ] + } + ], + "index": 30 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 10 + }, + { + "para_blocks": [ + { + "bbox": [ + 58, + 72, + 297, + 713 + ], + "type": "list", + "angle": 0, + "index": 15, + "blocks": [ + { + "bbox": [ + 61, + 72, + 297, + 128 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 72, + 297, + 128 + ], + "spans": [ + { + "bbox": [ + 61, + 72, + 297, + 128 + ], + "type": "text", + "content": "[86] H. Liu, X. Li, W. Zhou, Y. Chen, Y. He, H. Xue, W. Zhang, and N. Yu, \"Spatial-phase shallow learning: rethinking face forgery detection in frequency domain,\" in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 772-781, 2021. 5, 6, 21, 23, 24, 25, 26, 27" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 61, + 129, + 296, + 184 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 129, + 296, + 184 + ], + "spans": [ + { + "bbox": [ + 61, + 129, + 296, + 184 + ], + "type": "text", + "content": "[87] Y. Luo, Y. Zhang, J. Yan, and W. Liu, \"Generalizing face forgery detection with high-frequency features,\" in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 16317-16326, 2021. 5, 6, 19, 21, 23, 24, 25, 26, 27" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 62, + 185, + 296, + 239 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 185, + 296, + 239 + ], + "spans": [ + { + "bbox": [ + 62, + 185, + 296, + 239 + ], + "type": "text", + "content": "[88] U. Ojha, Y. Li, and Y. J. Lee, \"Towards universal fake image detectors that generalize across generative models,\" in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 24480-24489, 2023. 5, 6, 21, 23, 24, 25, 26, 27" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 62, + 240, + 296, + 295 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 240, + 296, + 295 + ], + "spans": [ + { + "bbox": [ + 62, + 240, + 296, + 295 + ], + "type": "text", + "content": "[89] Y. Ni, D. Meng, C. Yu, C. Quan, D. Ren, and Y. Zhao, \"Core: Consistent representation learning for face forgery detection,\" in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12-21, 2022. 5, 6, 21, 23, 24, 25, 26, 27" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 62, + 297, + 295, + 330 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 297, + 295, + 330 + ], + "spans": [ + { + "bbox": [ + 62, + 297, + 295, + 330 + ], + "type": "text", + "content": "[90] X. Han, J. Chi, Y. Chen, Q. Wang, H. 
Zhao, N. Zou, and X. Hu, “Ffb: A fair fairness benchmark for in-processing group fairness methods,” in ICLR, 2024. 5" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 62, + 331, + 296, + 374 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 331, + 296, + 374 + ], + "spans": [ + { + "bbox": [ + 62, + 331, + 296, + 374 + ], + "type": "text", + "content": "[91] N. Mehrabi, F. Morstatter, N. Saxena, K. Lerman, and A. Galstyan, “A survey on bias and fairness in machine learning,” ACM computing surveys (CSUR), vol. 54, no. 6, pp. 1–35, 2021. 5" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 62, + 376, + 296, + 418 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 376, + 296, + 418 + ], + "spans": [ + { + "bbox": [ + 62, + 376, + 296, + 418 + ], + "type": "text", + "content": "[92] J. Wang, X. E. Wang, and Y. Liu, “Understanding instance-level impact of fairness constraints,” in International Conference on Machine Learning, pp. 23114–23130, PMLR, 2022. 5" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 62, + 421, + 295, + 442 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 421, + 295, + 442 + ], + "spans": [ + { + "bbox": [ + 62, + 421, + 295, + 442 + ], + "type": "text", + "content": "[93] H. Wang, L. He, R. Gao, and F. P. Calmon, \"Aleatoric and epistemic discrimination in classification,\" ICML, 2023. 5" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 62, + 444, + 296, + 487 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 444, + 296, + 487 + ], + "spans": [ + { + "bbox": [ + 62, + 444, + 296, + 487 + ], + "type": "text", + "content": "[94] C. Dwork, M. Hardt, T. Pitassi, O. Reingold, and R. Zemel, \"Fairness through awareness,\" in Proceedings of the 3rd innovations in theoretical computer science conference, pp. 214-226, 2012. 5" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 62, + 488, + 296, + 544 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 488, + 296, + 544 + ], + "spans": [ + { + "bbox": [ + 62, + 488, + 296, + 544 + ], + "type": "text", + "content": "[95] Z. Yan, Y. Luo, S. Lyu, Q. Liu, and B. Wu, \"Transcending forgery specificity with latent space augmentation for generalizable deepfake detection,\" in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8984-8994, June 2024. 5" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 62, + 544, + 296, + 576 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 544, + 296, + 576 + ], + "spans": [ + { + "bbox": [ + 62, + 544, + 296, + 576 + ], + "type": "text", + "content": "[96] H. Ren, L. Lin, C.-H. Liu, X. Wang, and S. Hu, \"Improving generalization for ai-synthesized voice detection,\" in AAAI, 2025." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 62, + 578, + 296, + 622 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 578, + 296, + 622 + ], + "spans": [ + { + "bbox": [ + 62, + 578, + 296, + 622 + ], + "type": "text", + "content": "[97] Z. Yan, Y. Zhao, S. Chen, M. Guo, X. Fu, T. Yao, S. Ding, and L. Yuan, \"Generalizing deepfake video detection with plug-and-play: Video-level blending and spatiotemporal adapter tuning,\" in CVPR, 2025." 
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 62, + 624, + 296, + 656 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 624, + 296, + 656 + ], + "spans": [ + { + "bbox": [ + 62, + 624, + 296, + 656 + ], + "type": "text", + "content": "[98] J. Cheng, Z. Yan, Y. Zhang, L. Hao, J. Ai, Q. Zou, C. Li, and Z. Wang, \"Stacking brick by brick: Aligned feature isolation for incremental face forgery detection,\" in CVPR, 2025. 5" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 62, + 658, + 295, + 690 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 658, + 295, + 690 + ], + "spans": [ + { + "bbox": [ + 62, + 658, + 295, + 690 + ], + "type": "text", + "content": "[99] \"Monk skin tone scale,\" in https://en.wikipedia.org/wiki/Monk_Skin_Tone_Scale, Wikipedia, The Free Encyclopedia. 5" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 58, + 692, + 295, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 58, + 692, + 295, + 713 + ], + "spans": [ + { + "bbox": [ + 58, + 692, + 295, + 713 + ], + "type": "text", + "content": "[100] S. Hu and G. H. Chen, \"Fairness in survival analysis with distributionally robust optimization,\" arXiv, 2023. 5" + } + ] + } + ], + "index": 14 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 58, + 72, + 555, + 713 + ], + "type": "list", + "angle": 0, + "index": 30, + "blocks": [ + { + "bbox": [ + 316, + 72, + 555, + 129 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 72, + 555, + 129 + ], + "spans": [ + { + "bbox": [ + 316, + 72, + 555, + 129 + ], + "type": "text", + "content": "[101] F. Cocchi, L. Baraldi, S. Poppi, M. Cornia, L. Baraldi, and R. Cucchiara, “Unveiling the impact of image transformations on deepfake detection: An experimental analysis,” in International Conference on Image Analysis and Processing, pp. 345–356, Springer, 2023. 6" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 316, + 130, + 555, + 163 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 130, + 555, + 163 + ], + "spans": [ + { + "bbox": [ + 316, + 130, + 555, + 163 + ], + "type": "text", + "content": "[102] D. Cozzolino, G. Poggi, R. Corvi, M. Nießner, and L. Verdoliva, “Raising the bar of ai-generated image detection with clip,” arXiv preprint arXiv:2312.00195, 2023. 6" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 317, + 165, + 555, + 209 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 165, + 555, + 209 + ], + "spans": [ + { + "bbox": [ + 317, + 165, + 555, + 209 + ], + "type": "text", + "content": "[103] B. Porgali, V. Albiero, J. Ryda, C. C. Ferrer, and C. Hazirbas, “The casual conversations v2 dataset,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10–17, 2023. 7, 8" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 317, + 211, + 555, + 255 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 211, + 555, + 255 + ], + "spans": [ + { + "bbox": [ + 317, + 211, + 555, + 255 + ], + "type": "text", + "content": "[104] H. Touvron, L. Martin, K. Stone, P. Albert, A. Almahairi, Y. Babaei, N. Bashlykov, S. Batra, P. Bhargava, S. Bhosale, et al., \"Llama 2: Open foundation and fine-tuned chat models,\" arXiv preprint arXiv:2307.09288, 2023. 
8" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 317, + 256, + 555, + 300 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 256, + 555, + 300 + ], + "spans": [ + { + "bbox": [ + 317, + 256, + 555, + 300 + ], + "type": "text", + "content": "[105] J. Achiam, S. Adler, S. Agarwal, L. Ahmad, I. Akkaya, F. L. Aleman, D. Almeida, J. Altenschmidt, S. Altman, S. Anadkat, et al., \"Gpt-4 technical report,\" arXiv preprint arXiv:2303.08774, 2023. 8" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 316, + 302, + 554, + 346 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 302, + 554, + 346 + ], + "spans": [ + { + "bbox": [ + 316, + 302, + 554, + 346 + ], + "type": "text", + "content": "[106] J. Buolamwini and T. Gebru, “Gender shades: Intersectional accuracy disparities in commercial gender classification,” in Conference on fairness, accountability and transparency, pp. 77–91, PMLR, 2018. 14" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 317, + 349, + 555, + 403 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 349, + 555, + 403 + ], + "spans": [ + { + "bbox": [ + 317, + 349, + 555, + 403 + ], + "type": "text", + "content": "[107] B. Lu, J.-C. Chen, C. D. Castillo, and R. Chellappa, “An experimental evaluation of covariates effects on unconstrained face verification,” IEEE Transactions on Biometrics, Behavior, and Identity Science, vol. 1, no. 1, pp. 42–55, 2019. 14" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 317, + 405, + 555, + 449 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 405, + 555, + 449 + ], + "spans": [ + { + "bbox": [ + 317, + 405, + 555, + 449 + ], + "type": "text", + "content": "[108] Z. Khan and Y. Fu, “One label, one billion faces: Usage and consistency of racial categories in computer vision,” in Proceedings of the 2021 acm conference on fairness, accountability, and transparency, pp. 587–597, 2021. 14" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 317, + 451, + 555, + 484 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 451, + 555, + 484 + ], + "spans": [ + { + "bbox": [ + 317, + 451, + 555, + 484 + ], + "type": "text", + "content": "[109] S. Sachdeva, “Fitzpatrick skin typing: Applications in dermatology,” Indian journal of dermatology, venereology and leprology, vol. 75, p. 93, 2009. 14" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 317, + 486, + 555, + 540 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 486, + 555, + 540 + ], + "spans": [ + { + "bbox": [ + 317, + 486, + 555, + 540 + ], + "type": "text", + "content": "[110] J. J. Howard, Y. B. Sirotin, J. L. Tipton, and A. R. Vemury, \"Reliability and validity of image-based and self-reported skin phenotype metrics,\" IEEE Transactions on Biometrics, Behavior, and Identity Science, vol. 3, no. 4, pp. 550-560, 2021. 14" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 316, + 543, + 555, + 576 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 543, + 555, + 576 + ], + "spans": [ + { + "bbox": [ + 316, + 543, + 555, + 576 + ], + "type": "text", + "content": "[111] U. Okoji, S. Taylor, and J. Lipoff, “Equity in skin typing: why it is time to replace the fitzpatrick scale,” British Journal of Dermatology, vol. 185, no. 1, pp. 198–199, 2021. 
14" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 317, + 578, + 555, + 632 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 578, + 555, + 632 + ], + "spans": [ + { + "bbox": [ + 317, + 578, + 555, + 632 + ], + "type": "text", + "content": "[112] M. Groh, C. Harris, R. Daneshjou, O. Badri, and A. Koochek, \"Towards transparency in dermatology image datasets with skin tone annotations by experts, crowds, and an algorithm,\" Proceedings of the ACM on Human-Computer Interaction, vol. 6, no. CSCW2, pp. 1-26, 2022. 14" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 317, + 635, + 555, + 666 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 635, + 555, + 666 + ], + "spans": [ + { + "bbox": [ + 317, + 635, + 555, + 666 + ], + "type": "text", + "content": "[113] R. Williamson and A. Menon, “Fairness risk measures,” in International conference on machine learning, pp. 6786–6797, PMLR, 2019. 21" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 316, + 670, + 555, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 670, + 555, + 713 + ], + "spans": [ + { + "bbox": [ + 316, + 670, + 555, + 713 + ], + "type": "text", + "content": "[114] D. Levy, Y. Carmon, J. C. Duchi, and A. Sidford, \"Largescale methods for distributionally robust optimization,\" Advances in Neural Information Processing Systems, vol. 33, pp. 8847-8860, 2020. 21" + } + ] + } + ], + "index": 29 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 748, + 315, + 757 + ], + "type": "text", + "content": "3514" + } + ] + } + ], + "index": 31 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 11 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 297, + 241 + ], + "type": "list", + "angle": 0, + "index": 4, + "blocks": [ + { + "bbox": [ + 56, + 72, + 297, + 105 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 72, + 297, + 105 + ], + "spans": [ + { + "bbox": [ + 56, + 72, + 297, + 105 + ], + "type": "text", + "content": "[115] R. T. Rockafellar, S. Uryasev, et al., \"Optimization of conditional value-at-risk,\" Journal of risk, vol. 2, pp. 21-42, 2000. 21" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 107, + 296, + 150 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 107, + 296, + 150 + ], + "spans": [ + { + "bbox": [ + 55, + 107, + 296, + 150 + ], + "type": "text", + "content": "[116] T. Hashimoto, M. Srivastava, H. Namkoong, and P. Liang, \"Fairness without demographics in repeated loss minimization,\" in International Conference on Machine Learning, pp. 1929-1938, PMLR, 2018. 21" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 57, + 152, + 297, + 194 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 152, + 297, + 194 + ], + "spans": [ + { + "bbox": [ + 57, + 152, + 297, + 194 + ], + "type": "text", + "content": "[117] J. C. Duchi and H. Namkoong, “Learning models with uniform performance via distributionally robust optimization,” The Annals of Statistics, vol. 49, no. 3, pp. 1378–1406, 2021. 
21" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 57, + 197, + 296, + 241 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 197, + 296, + 241 + ], + "spans": [ + { + "bbox": [ + 57, + 197, + 296, + 241 + ], + "type": "text", + "content": "[118] T. Gebru, J. Morgenstern, B. Vecchione, J. W. Vaughan, H. Wallach, H. D. Iiii, and K. Crawford, \"Datasheets for datasets,\" Communications of the ACM, vol. 64, no. 12, pp. 86-92, 2021. 31" + } + ] + } + ], + "index": 3 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 295, + 749, + 315, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 295, + 749, + 315, + 757 + ], + "spans": [ + { + "bbox": [ + 295, + 749, + 315, + 757 + ], + "type": "text", + "content": "3515" + } + ] + } + ], + "index": 5 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 12 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2025/AIGV-Assessor_ Benchmarking and Evaluating the Perceptual Quality of Text-to-Video Generation with LMM/6da59f8b-88b1-4dd7-9656-385f5fb4c136_content_list.json b/2025/AIGV-Assessor_ Benchmarking and Evaluating the Perceptual Quality of Text-to-Video Generation with LMM/6da59f8b-88b1-4dd7-9656-385f5fb4c136_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..bdadabbb5de1a0655b6a27341b100ff40be4d1b8 --- /dev/null +++ b/2025/AIGV-Assessor_ Benchmarking and Evaluating the Perceptual Quality of Text-to-Video Generation with LMM/6da59f8b-88b1-4dd7-9656-385f5fb4c136_content_list.json @@ -0,0 +1,1354 @@ +[ + { + "type": "text", + "text": "AIGV-Assessor: Benchmarking and Evaluating the Perceptual Quality of Text-to-Video Generation with LMM", + "text_level": 1, + "bbox": [ + 129, + 128, + 870, + 174 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Jiarui Wang $^{1}$ , Huiyu Duan $^{1,2}$ , Guangtao Zhai $^{1,2}$ , Juntong Wang $^{1}$ , Xiongkuo Min $^{1*}$ , $^{1}$ Institute of Image Communication and Network Engineering, $^{2}$ MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai, China", + "bbox": [ + 99, + 202, + 903, + 276 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 246, + 308, + 326, + 325 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "The rapid advancement of large multimodal models (LMMs) has led to the rapid expansion of artificial intelligence generated videos (AIGVs), which highlights the pressing need for effective video quality assessment (VQA) models designed specifically for AIGVs. Current VQA models generally fall short in accurately assessing the perceptual quality of AIGVs due to the presence of unique distortions, such as unrealistic objects, unnatural movements, or inconsistent visual elements. To address this challenge, we first present AIGVQA-DB, a large-scale dataset comprising 36,576 AIGVs generated by 15 advanced text-to-video models using 1,048 diverse prompts. With these AIGVs, a systematic annotation pipeline including scoring and ranking processes is devised, which collects 370k expert ratings to date. Based on AIGVQA-DB, we further introduce AIGV-Assessor, a novel VQA model that leverages spatiotemporal features and LMM frameworks to capture the intricate quality attributes of AIGVs, thereby accurately predicting precise video quality scores and video pair preferences. 
Through comprehensive experiments on both AIGVQA-DB and existing AIGV databases, AIGV-Assessor demonstrates state-of-the-art performance, significantly surpassing existing scoring or evaluation methods in terms of multiple perceptual quality dimensions. The dataset and code are released at https://github.com/IntMeGroup/AIGV-Assessor.", + "bbox": [ + 88, + 340, + 485, + 720 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1. Introduction", + "text_level": 1, + "bbox": [ + 89, + 751, + 223, + 767 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Text-to-video generative models [12, 27, 44, 64, 73], including auto-regressive [23, 81] and diffusion-based [12, 27, 55] approaches, have experienced rapid advancements in recent years with the explosion of large multimodal models", + "bbox": [ + 89, + 777, + 483, + 839 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "(LMMs). Given appropriate text prompts, these models can generate high-fidelity and semantically-aligned videos, commonly referred to as AI-generated videos (AIGVs), which have significantly facilitated content creation in various domains, including entertainment, art, design, and advertising, etc. [11, 13, 43]. Despite the significant progress, current AIGVs are still far from satisfactory. Unlike natural videos, which are usually affected by low-level distortions, such as noise, blur, low-light, etc., AIGVs generally suffer from degradations such as unrealistic objects, unnatural movements, inconsistent visual elements, and misalignment with text descriptions [25, 31, 43, 65, 79, 84, 85].", + "bbox": [ + 511, + 310, + 906, + 491 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "The unique distortions in AIGVs also bring challenges to video evaluation. Traditional video quality assessment (VQA) methods [10, 18, 33, 35, 57, 70, 71] mainly focus on evaluating the quality of professionally-generated content (PGC) and user-generated content (UGC), thus struggling to address the specific distortions associated with AIGVs, such as spatial artifacts, temporal inconsistencies, and misalignment between generated content and text prompts. For evaluation of AIGVs, some metrics such as Inception Score (IS) [52] and Fréchet Video Distance (FVD) [61] have been widely used, which are computed over distributions of videos and may not reflect the human preference for an individual video. Moreover, these metrics mainly evaluate the fidelity of videos, while failing to assess the text-video correspondence. Vision-language pre-training models, such as CLIPScore [22], BLIPScore [37], and AestheticScore [53], are frequently employed to evaluate the alignment between generated videos and their text prompts. However, these models mainly consider the text-video alignment at the image level, while ignoring the dynamic diversity and motion consistency of visual elements that are crucial to the video-viewing experience.", + "bbox": [ + 511, + 492, + 908, + 824 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "In this paper, to facilitate the development of more comprehensive and precise metrics for evaluating AI-generated videos, we present AIGVQA-DB, a large-scale VQA dataset, including 36,576 AIGVs generated by 15 advanced text-to-video models using 1,048 diverse prompts. 
An", + "bbox": [ + 511, + 825, + 908, + 902 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "CVF", + "bbox": [ + 106, + 2, + 181, + 42 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore.", + "bbox": [ + 236, + 0, + 810, + 46 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "*Corresponding Author. \nThis work was supported in part by the National Natural Science Foundation of China under Grant 62271312, 62132006, 62401365, 62225112, and in part by STCSM under Grant 22DZ2229005.", + "bbox": [ + 89, + 851, + 482, + 900 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "18869", + "bbox": [ + 480, + 944, + 519, + 955 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/e2f88f0b1752496702c959cf991c9bb7ab6e815905c8a1788b7757a50de31c59.jpg", + "image_caption": [ + "Figure 1. An overview of the AIGVQA-DB construction pipeline, illustrating the generation and the subjective evaluation procedures for the AIGVs in the database. (a) Prompt categorization according to the spatial major content. (b) Prompt categorization according to the temporal descriptions. (c) Prompt categorization according to the attribute control. (d) Prompt categorization according to the prompt complexity. (e) The 15 generative models used in the database. (f) Four visual quality evaluation perspectives, including static quality, temporal smoothness, dynamic degree, and text-video correspondence. (g) and (h) demonstrates the pair comparison and preference scoring processes, respectively." + ], + "image_footnote": [], + "bbox": [ + 93, + 53, + 906, + 275 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "overview of the dataset construction pipeline is shown in Figure 1. The prompts are collected from existing open-domain text-video datasets [7, 8, 38, 43, 68, 76] or manually-written, which can be categorized based on four orthogonal aspects respectively, as shown in Figure 1(a)-(d). Based on the AIGVs, we collect 370k expert ratings comprising both mean opinion scores (MOSs) and pairwise comparisons, which are evaluated from four dimensions, including: (1) static quality, (2) temporal smoothness, (3) dynamic degree, and (4) text-video correspondence. Equipped with the dataset, we propose AIGV-Assessor, a large multimodal model-based (LMM-based) VQA method for AIGVs, which reformulates the quality regression task into an interactive question-and-answer (Q&A) framework and leverages the powerful multimodal representation capabilities of LMMs to provide accurate and robust quality assessments. AIGV-Assessor not only classifies videos into different quality levels through natural language output, but also generates precise quality scores through regression, thus enhancing the interpretability and usability of VQA results. Moreover, AIGV-Assessor also excels in pairwise video comparisons, enabling nuanced assessments that are closer to human preferences. 
Extensive experimental results demonstrate that AIGV-Assessor outperforms existing text-to-video scoring methods in terms of multiple dimensions relevant to human preference.", + "bbox": [ + 91, + 372, + 483, + 765 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "The main contributions of this paper are summarized as follows:", + "bbox": [ + 89, + 772, + 482, + 801 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "- We construct AIGVQA-DB, a large-scale dataset comprising 36,576 AI-generated videos annotated with MOS scores and pairwise comparisons. Compared with existing benchmarks, AIGVQA-DB provides a more comprehensive assessment of the capabilities of text-to-video models from multiple perspectives.", + "bbox": [ + 89, + 809, + 483, + 901 + ], + "page_idx": 1 + }, + { + "type": "table", + "img_path": "images/e3de484902a96fbadcaab3b506b556478f05878361edece5b32e33fcc4891946.jpg", + "table_caption": [ + "Table 1. An overview of popular text-to-video (T2V) and image-to-video (I2V) generation models. ${}^{ \\dagger }$ Representative variable." + ], + "table_footnote": [], + "table_body": "
Model | Year | Mode | Resolution | Frames | Open
CogVideo [23] | 22.05 | T2V | 480×480 | 32 | ✓
Make-a-Video [55] | 22.09 | T2V | 256×256 | 16 | ✓
LVDM [21] | 22.11 | T2V | 256×256 | 16 | ✓
Tune-A-Video [73] | 22.12 | T2V | 512×512 | 8 | ✓
VideoFusion [44] | 23.03 | T2V | 128×128 | 16 | ✓
Text2Video-Zero [27] | 23.03 | T2V | 512×512 | 8 | ✓
ModelScope [64] | 23.03 | T2V | 256×256 | 16 | ✓
Lavie [67] | 23.09 | T2V | 512×320 | 16 | ✓
VideoCrafter [12] | 23.10 | T2V, I2V | 1024×576 | 16 | ✓
Hotshot-XL [1] | 23.10 | T2V | 672×384 | 8 | ✓
StableVideoDiffusion [9] | 23.11 | I2V | 576×1024 | 14 | ✓
AnimateDiff [20] | 23.12 | T2V, I2V | 384×256 | 20 | ✓
Floor33 [2] | 23.08 | T2V, I2V | 1024×640 | 16 | -
Genmo [3] | 23.10 | T2V, I2V | 2048×1536 | 60 | -
Gen-2 [4] | 23.12 | T2V, I2V | 1408×768 | 96 | -
MoonValley [5] | 24.01 | T2V, I2V | 1184×672 | 200† | -
MorphStudio [6] | 24.01 | T2V, I2V | 1920×1080 | 72 | -
Sora [7] | 24.02 | T2V, I2V | 1920×1080 | 600† | -
", + "bbox": [ + 514, + 398, + 906, + 585 + ], + "page_idx": 1 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Based on AIGVQA-DB, we evaluate and benchmark 15 representative text-to-video models, and reveal their strengths and weaknesses from four crucial preference dimensions, i.e., static quality, temporal smoothness, dynamic degree, and text-to-video correspondence.", + "- We present a novel LMM-based VQA model for AIGVs, termed AIGV-Assessor, which integrates both spatial and temporal visual features as well as prompt features into a LMM to give quality levels, predict quality scores, and conduct quality comparisons.", + "- Thorough analysis of our AIGV-Assessor is provided and extensive experiments on our proposed AIGVQA-DB and other AIGV quality assessment datasets have shown the effectiveness and applicability of AIGV-Assessor." + ], + "bbox": [ + 513, + 601, + 906, + 811 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2. Related Work", + "text_level": 1, + "bbox": [ + 513, + 827, + 653, + 842 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2.1. Text-to-video Generation", + "text_level": 1, + "bbox": [ + 513, + 849, + 741, + 864 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Recent advancements in text-to-video generative models have substantially broadened video creation and modifica", + "bbox": [ + 511, + 869, + 906, + 900 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "18870", + "bbox": [ + 480, + 944, + 517, + 955 + ], + "page_idx": 1 + }, + { + "type": "table", + "img_path": "images/6787ec8d15512f02dc1d3ce899c3b3e6c185f3b18c784a74c4644ea72e531e53.jpg", + "table_caption": [ + "Table 2. Summary of existing text-to-image and text-to-video evaluation datasets." + ], + "table_footnote": [], + "table_body": "
Dataset Types | Name | Numbers | Prompts | Models | Annotators | Dimensions | MOSs / Pairs | Annotation
AIGIQA | AGIQA-3k [34] | 2,982 | 180 | 6 | 21 | 2 | 5,964 | MOS
AIGCIQA2023 [63] | 2,400 | 100 | 6 | 28 | 3 | 7,200 | MOS
RichHF-18k [39] | 17,760 | 17,760 | 3 | 3 | 4 | 71,040 | MOS
HPS [75] | 98,807 | 25,205 | 1 | 2,659 | 1 | 25,205 | Pairs
Pick-a-Pic [29] | - | 37,523 | 3 | 4,375 | 1 | 584,247 | Pairs
AIGVQA | MQT [15] | 1,005 | 201 | 5 | 24 | 2 | 2,010 | MOS
EvalCrafter [42] | 2,500 | 700 | 5 | 7 | 4 | 1,024 | MOS
FETV [43] | 2,476 | 619 | 4 | 3 | 3 | 7,428 | MOS
LGVQ [84] | 2,808 | 468 | 6 | 20 | 3 | 8,424 | MOS
T2VQA-DB [31] | 10,000 | 1,000 | 9 | 27 | 1 | 10,000 | MOS
GAIA [13] | 9,180 | 510 | 18 | 54 | 3 | 27,540 | MOS
AIGVQA-DB (Ours) | 36,576 | 1,048 | 15 | 120 | 4 | 122,304 | MOS and Pairs
", + "bbox": [ + 91, + 89, + 906, + 232 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "tion possibilities. As shown in Table 1, these models exhibit distinct characteristics and capacities, including modes, resolution, and total frames. CogVideo [23] is an early text-to-video (T2V) model capable of generating short videos based on CogView2 [16]. Make-a-video [55] adds effective spatial-temporal modules on a diffusion-based text-to-image (T2I) model (i.e., DALLE-2 [50]). VideoFusion [44] also leverages the DALLE-2 and presents a decomposed diffusion process. LVDM [21], Text2Video-Zero [27], Tune-A-Video [73], and ModelScope [64] are models that inherit the success of Stable Diffusion (SD) [51] for video generation. Lavie [67] extends the original transformer block in SD to a spatio-temporal transformer. Hotshot-XL [1] introduces personalized video generation. Beyond these laboratory-driven advancements, the video generation landscape has also been enriched by a series of commercial products. Notable among them are Floor33 [2], Gen-2 [4], Genmo [3], MoonValley [5], MorphStudio [6], and Sora [7], which have gained substantial attention in both academia and industry, demonstrating the widespread application potential of AI-assisted video creation.", + "bbox": [ + 89, + 239, + 485, + 556 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "2.2. Text-to-video Evaluation", + "text_level": 1, + "bbox": [ + 89, + 565, + 316, + 579 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "The establishment of the AI-generated image quality assessment (AIGIQA) dataset is relatively well-developed, including both mean opinion scores (MOSs) for absolute quality evaluations, and pairwise comparisons for relative quality judgments. Recent developments in text-to-video generation models have also spurred the creation of various AI-generated video quality assessment (AIGVQA) datasets, addressing different aspects of the T2V generation challenge, as shown in Table 2. MQT [15] consists of 1,005 videos generated by 5 models using 201 prompts. EvalCrafter [42] and FETV [43] extend the scale of the videos, prompts, and evaluation dimensions. LGVQ [84] increases the number of annotators, providing more reliable MOSs. T2VQA-DB [31] consists of 10,000 videos from 1,000 prompts representing a significant improvement in scale. GAIA [13] collects 9,180 videos focusing on action quality assessment in AIGVs, but falls short in addressing the consistency between the generated visuals and their textual prompts. Most existing VQA datasets predominantly rely on MOS, an absolute scoring method, which suffers from the same drawback: absolute scores alone may cause am", + "bbox": [ + 89, + 583, + 483, + 900 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "biguity and overlook subtle quality differences. In contrast, our AIGVQA-DB includes both MOSs and pairwise comparisons, addressing the limitations of current works by providing fine-grained preference feedbacks.", + "bbox": [ + 511, + 241, + 906, + 303 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3. Database Construction and Analysis", + "text_level": 1, + "bbox": [ + 511, + 309, + 844, + 325 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.1. Data Collection", + "text_level": 1, + "bbox": [ + 511, + 330, + 669, + 345 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Prompt Scources and Categorization. 
Prompts of the AIGVQA-DB are primarily sourced from existing open-domain text-video pair datasets, including InternVid [68], MSRVTT [76], WebVid [8], TGIF [38], FETV [43], and the Sora website [7]. We also manually craft prompts describing highly unusual scenarios to test the generalization ability of the generation models. As shown in Figure 1(a)-(d), we follow the categorization principles from FETV [43] to organize each prompt based on the \"spatial major content\", \"temporal major content\", \"attribute control\", and \"prompt complexity\".", + "bbox": [ + 511, + 354, + 906, + 521 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Text-to-Video Generation. We utilize 15 of the latest text-to-video generative models to create AI-generated videos, as shown in Figure 1(e). We leverage website APIs and open-source code with default weights for these models to produce AIGVs. For the construction of the MOS subset, we collect 48 videos from the Sora website [7], along with their corresponding text prompts. Using these prompts, we generate additional videos using 11 different generative models. This process results in a total of 576 videos (12 generative models $\times$ 48 prompts). In addition to the MOS subset, we construct the pair-comparison subset using 1,000 diverse prompts, and 12 generative models, including 8 open-source and 4 closed-source ones, are employed for text-to-video generation. Specifically, for each prompt, we generate four distinct videos for each open-source generative model and one video for each closed-source generative model. This process yields a total of 36,000 videos. More details of the database can be found in the supplementary material.", + "bbox": [ + 511, + 526, + 908, + 799 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.2. Subjective Experiment Setup and Procedure", + "text_level": 1, + "bbox": [ + 511, + 806, + 890, + 821 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Due to the unique and unnatural characteristics of AI-generated videos and the varying target video spaces dictated by different text prompts, relying solely on a single score, such as \"quality\", to represent human visual preferences is insufficient. In this paper, we propose to measure", + "bbox": [ + 511, + 825, + 905, + 902 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "18871", + "bbox": [ + 480, + 944, + 517, + 955 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/fb0385b3e755dd0ce7af1b2bf2c5ae793450e483574d3b22a009ca8c5c39ce30.jpg", + "image_caption": [ + "(a) Distribution of Raw Scores" + ], + "image_footnote": [], + "bbox": [ + 91, + 74, + 346, + 196 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/8eac122b8407f48e45e263fdddc23287419766654acec0728f07feffd5b4ba0f.jpg", + "image_caption": [ + "(b) Distribution of Mean Opinion Scores (MOSs)" + ], + "image_footnote": [], + "bbox": [ + 346, + 74, + 903, + 196 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/36c72da19ded12678e8963e4090622f43eaf08a13abe22c112d4b9d85ec226dc.jpg", + "image_caption": [ + "Figure 2. Video score distribution from the four perspectives including static quality, temporal smoothness, dynamic degree, and t2v correspondence. (a) Distribution of raw scores. 
(b) Distribution of Mean Opinion Scores (MOSs)", + "(a)", + "(b)", + "(c)" + ], + "image_footnote": [], + "bbox": [ + 91, + 250, + 908, + 373 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/1f6e6965f46d7954d0df029d696dd198d84ae054cd372b5706fa71d41fde0ab3.jpg", + "image_caption": [ + "Figure 3. Comparison of averaged win rates of different generation models across different categories. (a) Results across prompt complexity. (b) Results across attribute control. (c) Results across temporal major contents. (d) Results across spatial major contents.", + "Figure 4. (a) Comparison of text-to-video generation models regarding the MOS in terms of four dimensions sorted bottom-up by their averaged MOS. (b) Comparison of text-to-video generation models regarding the win rate in terms of four dimensions sorted bottom-up by their averaged win rate." + ], + "image_footnote": [], + "bbox": [ + 94, + 426, + 483, + 611 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "the human visual preferences of AIGVs from four perspectives. Static quality assesses the clarity, sharpness, color accuracy, and overall aesthetic appeal of the frames when viewed as standalone images. Temporal smoothness evaluates the temporal coherence of video frames and the absence of temporal artifacts such as flickering or jittering. Dynamic degree evaluates the extent to which the video incorporates large motions and dynamic scenes, which contributes to the overall liveliness and engagement measurement of the content. Text-video (TV) correspondence assesses how accurately the video content reflects the details, themes, and actions described in the prompt, ensuring that the generated video effectively translates the text input into", + "bbox": [ + 89, + 704, + 483, + 900 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "a visual narrative. Each of these four visual perception perspectives is related but distinct, offering a comprehensive evaluation for AIGVs. To evaluate the quality of the videos in the AIGVQA-DB, we conduct subjective experiments adhering to the guidelines outlined in ITU-R BT.500-14 [17, 54]. For the MOS annotation type, we use a 1-5 Likert-scale judgment to score the videos. For the pairs annotation type, participants are presented with pairs of videos and asked to choose the one they prefer, providing a direct comparison method for evaluating relative video quality. The videos are displayed using an interface designed with Python Tkinter, as illustrated in Figure 1(g)-(h). A total of 120 graduate students participate in the experiment.", + "bbox": [ + 511, + 429, + 906, + 626 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.3. 
Subjective Data Processing", + "text_level": 1, + "bbox": [ + 511, + 633, + 756, + 650 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "In order to obtain the MOS for an AIGV, we linearly scale the raw ratings to the range [0, 100] as follows:", + "bbox": [ + 511, + 656, + 905, + 686 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\nz_{ij} = \frac{r_{ij} - \mu_{i}}{\sigma_{i}}, \quad z_{ij}^{\prime} = \frac{100(z_{ij} + 3)}{6},\n$$\n", + "text_format": "latex", + "bbox": [ + 573, + 698, + 843, + 729 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\mu_{i} = \frac{1}{N_{i}} \sum_{j=1}^{N_{i}} r_{ij}, \quad \sigma_{i} = \sqrt{\frac{1}{N_{i}-1} \sum_{j=1}^{N_{i}} (r_{ij} - \mu_{i})^{2}},\n$$\n", + "text_format": "latex", + "bbox": [ + 535, + 739, + 880, + 787 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $r_{ij}$ is the raw rating given by the $i$-th subject to the $j$-th video, and $N_i$ is the number of videos judged by subject $i$. Next, the MOS of video $j$ is computed by averaging the rescaled z-scores as follows:", + "bbox": [ + 511, + 794, + 905, + 853 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\nMOS_{j} = \frac{1}{M} \sum_{i=1}^{M} z_{ij}^{\prime},\n$$\n", + "text_format": "latex", + "bbox": [ + 637, + 862, + 779, + 902 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "18872", + "bbox": [ + 480, + 944, + 517, + 955 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/6e2271fc92b4aadac9147836c9305b52893182b0a5fb8fd4763fd447322a3f64.jpg", + "image_caption": [ + "Figure 5. The framework of AIGV-Assessor: (a) AIGV-Assessor takes AI-generated video frames as input and outputs both text-based quality levels and numerical quality scores. The system begins with the extraction of spatiotemporal features using two vision encoders, which are then passed through spatial and temporal projection modules to produce visual tokens aligned with the language space. The LLM decoder produces text-based feedback describing the video quality level for each of the four evaluation dimensions. Simultaneously, the last hidden states from the LLM are used to perform quality regression that outputs final quality scores in terms of the four dimensions. (b) AIGV-Assessor is fine-tuned on pairwise comparisons, further allowing the model to output the evaluation comparison between two videos." + ], + "image_footnote": [], + "bbox": [ + 91, + 75, + 911, + 300 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $MOS_{j}$ indicates the MOS for the $j$-th AIGV, $M$ is the number of subjects, and $z_{ij}^{\prime}$ are the rescaled z-scores.", + "bbox": [ + 88, + 401, + 480, + 431 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "For the pairs annotation type, given a text prompt $p_i$ and 12 video generation models labeled $\{A,B,C,\dots,L\}$, we generate videos using each model, forming a group of videos $G_{i,j} = \{V_{i,A,j},V_{i,B,j},V_{i,C,j},\dots,V_{i,L,j}\}$. For each prompt $p_i$, we generate four different videos randomly for each of the eight open-source generative models and one video for each of the four closed-source generative models, resulting in a group of 36 videos $\{G_{i,A,1},G_{i,A,2},G_{i,A,3},G_{i,A,4},G_{i,B,1},\dots,G_{i,L,1}\}$. 
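As a concrete companion to the MOS formulas just given, here is a minimal sketch under stated assumptions: a dense subjects-by-videos rating matrix with NaN for missing ratings, and function and variable names that are ours, not the paper's. The pairwise protocol continues below.

```python
# Sketch of the equations above: per-subject z-scores, rescaling to [0, 100],
# then averaging over subjects. Assumes ratings[i, j] is subject i's 1-5 rating
# of video j, with NaN where subject i did not rate video j.
import numpy as np

def compute_mos(ratings: np.ndarray) -> np.ndarray:
    mu = np.nanmean(ratings, axis=1, keepdims=True)            # mu_i
    sigma = np.nanstd(ratings, axis=1, ddof=1, keepdims=True)  # sigma_i (unbiased)
    z = (ratings - mu) / sigma                                 # z_ij
    z_rescaled = 100.0 * (z + 3.0) / 6.0                       # z'_ij
    return np.nanmean(z_rescaled, axis=0)                      # MOS_j over subjects

ratings = np.random.randint(1, 6, size=(20, 576)).astype(float)  # toy 20x576 matrix
mos = compute_mos(ratings)                                       # one MOS per video
```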
For each group, we create all possible pairwise combinations, resulting in $C_{36}^2$ pairs: $(V_{A1},V_{B1})$, $(V_{A1},V_{B2})$, $(V_{A1},V_{B3})$, $(V_{A1},V_{B4})$, $(V_{A1},V_{C1})$, ..., $(V_{K1},V_{L1})$. In the AIGVQA-DB construction pipeline, a prompt suite of 1,000 prompts results in 630,000 $(1000\times C_{36}^2)$ pairwise video comparisons. From this extensive dataset, we randomly sample 30,000 pairs for evaluation from four perspectives. Each pair is judged by three annotators, and the final decision of the better video in each pair is determined by majority vote. Finally, we obtain a total of 46,080 reliable score ratings (20 annotators × 4 perspectives × 576 videos) and 360,000 pair ratings (3 annotators × 4 perspectives × 30,000 pairs).", + "bbox": [ + 91, + 433, + 483, + 750 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.4. AIGV Analysis from Four Perspectives", + "text_level": 1, + "bbox": [ + 89, + 757, + 426, + 773 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "As shown in Figure 2, the videos in the AIGVQA-DB cover a wide range of perceptual quality. We further analyze the win rates of various generation models across categories in Figure 3, revealing the strengths and weaknesses of each T2V model. As shown in Figure 3(a), the performances of T2V models rank uniformly across different prompt complexity levels in terms of static quality, which indicates that current T2V models rank consistently for different prompts, likely", + "bbox": [ + 89, + 779, + 483, + 901 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "due to shared architectures like diffusion-based systems, with common strengths and limitations in handling complex prompts. As shown in Figure 3(b), in terms of attribute control, StableVideoDiffusion [9] excels in managing quantity over event order, as it first generates static images before animating them, preserving the original event sequence. As shown in Figure 3(d), in terms of spatial content, most videos featuring \"plants\" and \"people\" show poor T2V correspondence. More comparison and analysis can be found in the supplementary material. We also conduct comparisons among text-to-video generation models regarding the MOS and pairwise win rates, as shown in Figure 4. Notably, models such as LVDM [21] demonstrate exceptional performance in handling dynamic content, but exhibit relatively lower performance in temporal smoothness. Sora [7] and MorphStudio [6] perform well in static quality and temporal smoothness while lagging in dynamic degree. Additionally, closed-source models exhibit much better performance compared to open-source models.", + "bbox": [ + 511, + 402, + 906, + 689 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4. Proposed Method", + "text_level": 1, + "bbox": [ + 511, + 699, + 687, + 717 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4.1. Model Structure", + "text_level": 1, + "bbox": [ + 511, + 720, + 678, + 736 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Spatial and Temporal Vision Encoder. As shown in Figure 5(a), the model leverages two different types of encoders to capture the spatial and temporal characteristics of the video: (1) 2D Encoder: A pre-trained 2D vision transformer (InternViT [69]) is used to process individual video frames. 
(2) 3D Encoder: A 3D network, i.e., SlowFast [19], is employed to extract temporal features by processing sequences of video frames.", + "bbox": [ + 511, + 744, + 906, + 864 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Spatiotemporal Projection Module. Once the spatial and temporal features are extracted, they are projected into a", + "bbox": [ + 511, + 869, + 906, + 901 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "18873", + "bbox": [ + 480, + 944, + 517, + 955 + ], + "page_idx": 4 + }, + { + "type": "table", + "img_path": "images/9990b47d31735c217cd27e3efc02ce8fe8b71d0dd0ca2393c1f10529779ecab1.jpg", + "table_caption": [ + "Table 3. Performance comparisons of the state-of-the-art quality evaluation methods on the AIGVQA-DB from four perspectives. The best performance results are marked in RED and the second-best performance results are marked in BLUE." + ], + "table_footnote": [], + "table_body": "
Dimension | Static Quality | Temporal Smoothness | Dynamic Degree | TV Correspondence
Methods / Metrics | Pair Acc | SRCC | PLCC | KRCC | Pair Acc | SRCC | PLCC | KRCC | Pair Acc | SRCC | PLCC | KRCC | Pair Acc | SRCC | PLCC | KRCC
NIQE [49] | 54.32% | 0.0867 | 0.1626 | 0.0615 | 52.67% | 0.0641 | 0.1152 | 0.0451 | 45.64% | 0.1765 | 0.2448 | 0.1194 | 46.99% | 0.1771 | 0.2231 | 0.1193
QAC [80] | 49.96% | 0.1022 | 0.1363 | 0.0680 | 54.90% | 0.1633 | 0.2039 | 0.1105 | 54.72% | 0.0448 | 0.0427 | 0.0295 | 54.48% | 0.0303 | 0.0197 | 0.2233
BRISQUE [48] | 59.98% | 0.2909 | 0.2443 | 0.1969 | 55.67% | 0.2325 | 0.1569 | 0.1553 | 44.60% | 0.1351 | 0.0959 | 0.0893 | 51.02% | 0.1294 | 0.1017 | 0.0869
BPRI [46] | 52.28% | 0.2181 | 0.1723 | 0.1398 | 47.26% | 0.1766 | 0.0880 | 0.1138 | 46.83% | 0.1956 | 0.1688 | 0.1329 | 49.13% | 0.1569 | 0.1548 | 0.1052
HOSA [77] | 61.54% | 0.2420 | 0.2106 | 0.1643 | 57.31% | 0.2311 | 0.1757 | 0.1559 | 44.97% | 0.0755 | 0.0449 | 0.0496 | 52.23% | 0.1645 | 0.1324 | 0.1097
BMPRI [47] | 53.71% | 0.1690 | 0.1481 | 0.1075 | 49.31% | 0.1434 | 0.0844 | 0.0894 | 45.07% | 0.1153 | 0.0925 | 0.0777 | 48.43% | 0.1567 | 0.1500 | 0.1041
V-Dynamic [25] | 51.34% | 0.0768 | 0.0792 | 0.0494 | 31.91% | 0.3713 | 0.4871 | 0.2557 | 53.11% | 0.1466 | 0.0253 | 0.0988 | 46.96% | 0.0405 | 0.0576 | 0.0223
V-Smoothness [25] | 61.63% | 0.6748 | 0.4506 | 0.4590 | 76.59% | 0.8526 | 0.8313 | 0.6533 | 47.63% | 0.2446 | 0.2328 | 0.1580 | 61.28% | 0.3188 | 0.3073 | 0.2214
CLIPScore [22] | 47.09% | 0.0731 | 0.0816 | 0.0473 | 46.33% | 0.0423 | 0.0334 | 0.0271 | 52.99% | 0.0675 | 0.0835 | 0.0439 | 55.62% | 0.1519 | 0.1731 | 0.1014
BLIPScore [37] | 53.24% | 0.0492 | 0.0421 | 0.0330 | 53.07% | 0.0659 | 0.0487 | 0.0437 | 53.03% | 0.1786 | 0.1904 | 0.1205 | 61.53% | 0.1813 | 0.1896 | 0.1219
AestheticScore [53] | 70.24% | 0.6713 | 0.6959 | 0.4784 | 54.82% | 0.5154 | 0.4946 | 0.3484 | 52.96% | 0.2295 | 0.2322 | 0.1527 | 59.64% | 0.2381 | 0.2440 | 0.1602
ImageReward [78] | 56.69% | 0.2606 | 0.2646 | 0.1749 | 54.09% | 0.2382 | 0.2305 | 0.1600 | 53.90% | 0.1840 | 0.1836 | 0.1237 | 63.97% | 0.2311 | 0.2450 | 0.1568
UMTScore [43] | 48.93% | 0.0168 | 0.0199 | 0.0117 | 49.93% | 0.0302 | 0.0370 | 0.0207 | 52.69% | 0.0168 | 0.0198 | 0.0117 | 53.82% | 0.0172 | 0.0065 | 0.0108
Video-LLaVA [40] | 50.90% | 0.0384 | 0.0513 | 0.0297 | 50.36% | 0.0431 | 0.0281 | 0.0347 | 50.34% | 0.1561 | 0.1436 | 0.1176 | 50.54% | 0.1364 | 0.1051 | 0.1009
Video-ChatGPT [45] | 51.20% | 0.1242 | 0.1587 | 0.0940 | 50.16% | 0.0580 | 0.0533 | 0.0453 | 50.47% | 0.0724 | 0.0436 | 0.0563 | 50.07% | 0.0357 | 0.0124 | 0.0274
LLaVA-NeXT [36] | 52.85% | 0.1239 | 0.1625 | 0.0954 | 52.41% | 0.4021 | 0.3722 | 0.3052 | 51.84% | 0.1767 | 0.1655 | 0.1328 | 59.20% | 0.4116 | 0.3428 | 0.3261
VideoLLaMA2 [14] | 52.73% | 0.2643 | 0.3271 | 0.1928 | 52.27% | 0.3608 | 0.2450 | 0.2696 | 50.78% | 0.1900 | 0.1561 | 0.1379 | 54.25% | 0.1656 | 0.1633 | 0.1210
Qwen2-VL [66] | 56.50% | 0.4922 | 0.5291 | 0.3838 | 49.12% | 0.1681 | 0.4219 | 0.1233 | 52.08% | 0.1122 | 0.1335 | 0.0849 | 53.30% | 0.3111 | 0.2775 | 0.2306
HyperIQA [56] | 68.30% | 0.7931 | 0.8093 | 0.5969 | 54.65% | 0.7426 | 0.6630 | 0.5407 | 53.32% | 0.2103 | 0.2100 | 0.1384 | 57.54% | 0.6226 | 0.6250 | 0.4432
MUSIQ [26] | 66.46% | 0.7880 | 0.8044 | 0.5773 | 55.16% | 0.7199 | 0.6920 | 0.5034 | 52.85% | 0.5206 | 0.4846 | 0.3521 | 58.46% | 0.4125 | 0.4093 | 0.2844
LIQE [83] | 63.86% | 0.8776 | 0.8691 | 0.7008 | 55.84% | 0.7935 | 0.7720 | 0.6084 | 49.02% | 0.5303 | 0.5840 | 0.3837 | 55.10% | 0.3862 | 0.3639 | 0.2640
VSFA [35] | 46.43% | 0.3365 | 0.3421 | 0.2268 | 50.95% | 0.3317 | 0.3273 | 0.2202 | 51.46% | 0.1201 | 0.1362 | 0.0815 | 48.07% | 0.1024 | 0.1064 | 0.0666
BVQA [33] | 29.98% | 0.4594 | 0.4701 | 0.3268 | 37.65% | 0.3704 | 0.3819 | 0.2507 | 55.08% | 0.4594 | 0.4701 | 0.3268 | 42.32% | 0.3720 | 0.3978 | 0.2559
SimpleVQA [57] | 68.12% | 0.8355 | 0.6438 | 0.8489 | 54.14% | 0.7082 | 0.7008 | 0.4978 | 53.08% | 0.4671 | 0.3160 | 0.3994 | 58.20% | 0.4643 | 0.5440 | 0.3163
FAST-VQA [70] | 70.64% | 0.8738 | 0.8644 | 0.6860 | 62.93% | 0.9036 | 0.9134 | 0.7166 | 54.34% | 0.5603 | 0.5703 | 0.3895 | 65.05% | 0.6875 | 0.6704 | 0.4978
DOVER [71] | 72.92% | 0.8907 | 0.8895 | 0.7004 | 58.83% | 0.9063 | 0.9195 | 0.7187 | 53.16% | 0.5549 | 0.5489 | 0.3800 | 62.35% | 0.6783 | 0.6802 | 0.4969
Q-Align [72] | 71.86% | 0.8516 | 0.8383 | 0.6641 | 57.95% | 0.8116 | 0.7025 | 0.6195 | 53.71% | 0.5655 | 0.5012 | 0.3950 | 62.91% | 0.5542 | 0.5647 | 0.3870
AIGV-Assessor (Ours) | 79.83% | 0.9162 | 0.9190 | 0.7576 | 76.60% | 0.9232 | 0.9216 | 0.8038 | 60.30% | 0.6093 | 0.6082 | 0.4435 | 70.32% | 0.7500 | 0.7697 | 0.5591
Improvement | +6.9% | +2.7% | +3.0% | +5.7% | +13.7% | +1.7% | +0.2% | +8.5% | +5.2% | +4.4% | +3.8% | +4.4% | +5.3% | +6.3% | +9.9% | +6.13%
", + "bbox": [ + 91, + 104, + 906, + 459 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "shared feature space for alignment with text-based queries. This is done through two projection modules that map the spatial and temporal visual features respectively into the language space. The mapped visual tokens are aligned with text tokens, enabling the model to query the video content in a multimodal fashion.", + "bbox": [ + 89, + 470, + 482, + 561 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Feature Fusion and Quality Regression. We apply LLM (InternVL2-8B [69]) to combine the visual tokens and user-provided quality prompts to perform the following tasks: (1) Quality level descriptions: the model generates a descriptive quality level evaluation of the input video, such as \"The static quality of the video is (bad, poor, fair, good, excellent).\" This initial categorization provides a preliminary classification of the video's quality, which is beneficial for subsequent quality regression tasks. By obtaining a rough quality level, the model can more accurately predict numerical scores in later evaluations. (2) Regression score output: the model uses the final hidden states from the LLM to perform a regression task, outputting numerical quality scores for the video from four different dimensions.", + "bbox": [ + 89, + 568, + 482, + 779 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.2. Training and Fine-tuning Strategy", + "text_level": 1, + "bbox": [ + 89, + 789, + 390, + 806 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "The training process of AIGV-Assessor follows a three-stage approach to ensure high-quality video assessment with quality level prediction, individual quality scoring, and pairwise preference comparison capabilities. This process includes: (1) training the spatial and temporal projectors to align visual and language features, (2) fine-tuning the vision", + "bbox": [ + 89, + 810, + 482, + 901 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "encoder and LLM with LoRA [24], and training the quality regression module to generate accurate quality scores, (3) incorporating pairwise comparison training using the pair-comparison subset with a pairwise loss function for robust video quality comparison.", + "bbox": [ + 511, + 470, + 906, + 547 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Spatiotemporal Projector Training. The first stage focuses on training the spatial and temporal projectors to extract meaningful spatiotemporal visual features and map them into the language space. Through this process, the LLM is able to produce the quality level descriptions i.e., bad, poor, fair, good, excellent.", + "bbox": [ + 511, + 554, + 905, + 645 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Quality Regression Fine-tuning. Once the model can generate coherent descriptions of video quality level, the second stage focuses on fine-tuning the quality regression module. The goal here is to enable the model to output stable and precise numerical quality scores (MOS-like predictions). The quality regression model takes the last-hidden-state features from LLM as input and generates quality scores from four perspectives. The training objective uses an L1 loss function to minimize the difference between the predicted quality score and the groundtruth MOS.", + "bbox": [ + 511, + 651, + 906, + 801 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Pairwise Comparison Fine-tuning. 
The third stage mainly focuses on integrating the pairwise comparison into the training pipeline. As shown in Figure 5(b), two input video pairs share network weights within the same batch. We design a judge network inspired by LPIPS [82] to determine which video performs better. This network leverages", + "bbox": [ + 511, + 810, + 908, + 902 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "18874", + "bbox": [ + 480, + 944, + 517, + 955 + ], + "page_idx": 5 + }, + { + "type": "table", + "img_path": "images/7529383b9f1fa9fd0ba3b690e7c094e086264f814a555984a0376c1e18c18b28.jpg", + "table_caption": [ + "Table 4. Performance comparisons on LGVQ [84] and FETV [43]." + ], + "table_footnote": [], + "table_body": "
Aspects | Methods | LGVQ | FETV
SRCC | PLCC | KRCC | SRCC | PLCC | KRCC
Spatial | MUSIQ [26] | 0.669 | 0.682 | 0.491 | 0.722 | 0.758 | 0.613
StairIQA [59] | 0.701 | 0.737 | 0.521 | 0.806 | 0.812 | 0.643
CLIP-IQA [62] | 0.684 | 0.709 | 0.502 | 0.741 | 0.767 | 0.619
LIQE [83] | 0.721 | 0.752 | 0.538 | 0.765 | 0.799 | 0.635
UGVQ [84] | 0.759 | 0.795 | 0.567 | 0.841 | 0.841 | 0.685
AIGV-Assessor (Ours) | 0.803 | 0.819 | 0.617 | 0.853 | 0.856 | 0.699
Improvement | +4.4% | +2.4% | +5.0% | +1.2% | +1.5% | +1.4%
Temporal | VSFA [35] | 0.841 | 0.857 | 0.643 | 0.839 | 0.859 | 0.705
SimpleVQA [57] | 0.857 | 0.867 | 0.659 | 0.852 | 0.862 | 0.726
FastVQA [70] | 0.849 | 0.843 | 0.647 | 0.842 | 0.847 | 0.714
DOVER [71] | 0.867 | 0.878 | 0.672 | 0.868 | 0.881 | 0.731
UGVQ [84] | 0.893 | 0.907 | 0.703 | 0.897 | 0.907 | 0.753
AIGV-Assessor (Ours) | 0.900 | 0.920 | 0.717 | 0.936 | 0.940 | 0.815
Improvement | +0.7% | +1.3% | +1.4% | +3.9% | +3.3% | +6.2%
Alignment | CLIPScore [22] | 0.446 | 0.453 | 0.301 | 0.607 | 0.633 | 0.498
BLIPScore [37] | 0.455 | 0.464 | 0.319 | 0.616 | 0.645 | 0.505
ImageReward [78] | 0.498 | 0.499 | 0.344 | 0.657 | 0.687 | 0.519
PickScore [28] | 0.501 | 0.515 | 0.353 | 0.669 | 0.708 | 0.533
HPSv2 [74] | 0.504 | 0.511 | 0.357 | 0.686 | 0.703 | 0.540
UGVQ [84] | 0.551 | 0.555 | 0.394 | 0.734 | 0.737 | 0.572
AIGV-Assessor (Ours) | 0.577 | 0.578 | 0.411 | 0.753 | 0.746 | 0.585
Improvement | +2.6% | +2.3% | +1.7% | +1.9% | +0.9% | +1.3%
", + "bbox": [ + 94, + 95, + 486, + 357 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "learned features and evaluates the perceptual differences between the two videos, allowing more reliable quality assessments in video pair comparison.", + "bbox": [ + 89, + 371, + 482, + 417 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Loss Function. In the first stage, the spatial and temporal projectors are trained to align visual and language features using language loss. The second stage refines the vision encoder, LLM, and quality regression module's scoring ability with an L1 loss. The third stage incorporates pairwise comparison training with cross-entropy loss to improve the model's performance on relative quality evaluation.", + "bbox": [ + 89, + 421, + 483, + 527 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "5. Experiments", + "text_level": 1, + "bbox": [ + 89, + 540, + 223, + 556 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "5.1. Experiment Settings", + "text_level": 1, + "bbox": [ + 89, + 565, + 282, + 583 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Evaluation Datasets and Metrics. Our proposed method is validated on five AIGVQA datasets: AIGVQA-DB, LGVQ [84], FETV [43], T2VQA [31], and GAIA [13]. To evaluate the correlation between the predicted scores and the ground-truth MOSs, we utilize three evaluation criteria: Spearman Rank Correlation Coefficient (SRCC), Pearson Linear Correlation Coefficient (PLCC), and Kendall's Rank Correlation Coefficient (KRCC). For pair comparison, we adopt the comparison accuracy as the metric.", + "bbox": [ + 89, + 592, + 483, + 729 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Reference Algorithms. To assess the performance of our proposed method, we select state-of-the-art evaluation metrics for comparison, which can be classified into five groups: (1) Handcrafted-based I/VQA models, including: NIQE [49], BRISQUE [48], QAC [80], BMPRI [47], HOSA [77], BPRI [46], HIGRADE [32], etc. (2) Action-related evaluation models, including: V-Dynamic [25], V-Smoothness [25] which are proposed in VBench [25]. (3) Vision-language pre-training models, including: CLIPScore [22], BLIPScore [37], AestheticScore [53], ImageReward [78], and UMTScore [43]. (4) LLM-based models, in-", + "bbox": [ + 89, + 734, + 483, + 902 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/bc822205d538c82a0f131868139149ec3cd63807bd32a6e8e58103a9365c0c29.jpg", + "table_caption": [ + "Table 5. Performance comparisons on T2VQA-DB [31]." + ], + "table_footnote": [], + "table_body": "
Aspects | Methods | T2VQA-DB | Sora Testing
SRCC | PLCC | KRCC | SRCC | PLCC | KRCC
zero-shot | CLIPScore [22] | 0.1047 | 0.1277 | 0.0702 | 0.2116 | 0.1538 | 0.1406
BLIPScore [37] | 0.1659 | 0.1860 | 0.1112 | 0.2116 | 0.1038 | 0.1515
ImageReward [78] | 0.1875 | 0.2121 | 0.1266 | 0.0992 | 0.0415 | 0.0748
UMTScore [43] | 0.0676 | 0.0721 | 0.0453 | 0.2594 | 0.0840 | 0.1680
finetuned | SimpleVQA [57] | 0.6275 | 0.6388 | 0.4466 | 0.0340 | 0.2344 | 0.0237
BVQA [33] | 0.7390 | 0.7486 | 0.5487 | 0.4235 | 0.2489 | 0.2635
FAST-VQA [70] | 0.7173 | 0.7295 | 0.5303 | 0.4301 | 0.2369 | 0.2939
DOVER [71] | 0.7609 | 0.7693 | 0.5704 | 0.4421 | 0.2689 | 0.2757
T2VQA [31] | 0.7965 | 0.8066 | 0.6058 | 0.6485 | 0.3124 | 0.4874
AIGV-Assessor (Ours) | 0.8131 | 0.8222 | 0.6364 | 0.6612 | 0.3318 | 0.5075
Improvement | +1.7% | +1.6% | +3.1% | +1.3% | +1.9% | +2.0%
", + "bbox": [ + 517, + 95, + 906, + 247 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/b4d017fc4c6ba92f0cf7ec26ae660352542b38cc083117f0ce382e3991ccba80.jpg", + "table_caption": [ + "Table 6. Performance comparisons on GAIA [13]." + ], + "table_footnote": [], + "table_body": "
Dimension | Subject | Completeness | Interaction
Methods / Metrics | SRCC | PLCC | SRCC | PLCC | SRCC | PLCC
V-Smoothness [25] | 0.2402 | 0.1913 | 0.1474 | 0.1625 | 0.1741 | 0.1693
V-Dynamic [25] | 0.1285 | 0.0831 | 0.0903 | 0.0682 | 0.1141 | 0.0758
Action-Score [42] | 0.2023 | 0.1823 | 0.2867 | 0.2623 | 0.2689 | 0.2432
Flow-Score [42] | 0.1471 | 0.1541 | 0.0816 | 0.1273 | 0.1041 | 0.1309
CLIPScore [22] | 0.3398 | 0.3330 | 0.3944 | 0.3871 | 0.3875 | 0.3821
BLIPScore [37] | 0.3453 | 0.3386 | 0.4174 | 0.4082 | 0.4044 | 0.3994
LLaVAScore [41] | 0.3484 | 0.3436 | 0.4189 | 0.4133 | 0.4077 | 0.4025
TLVQM [30] | 0.5037 | 0.5137 | 0.4127 | 0.4158 | 0.4079 | 0.4093
VIDEVAL [60] | 0.5237 | 0.5446 | 0.4283 | 0.4375 | 0.4121 | 0.4234
VSFA [35] | 0.5594 | 0.5762 | 0.4940 | 0.5017 | 0.4709 | 0.4811
BVQA [33] | 0.5702 | 0.5888 | 0.4876 | 0.4946 | 0.4761 | 0.4825
SimpleVQA [58] | 0.5920 | 0.5974 | 0.4981 | 0.5078 | 0.4843 | 0.4971
FAST-VQA [70] | 0.6015 | 0.6092 | 0.5157 | 0.5215 | 0.5154 | 0.5216
DOVER [71] | 0.6173 | 0.6301 | 0.5198 | 0.5323 | 0.5164 | 0.5278
AIGV-Assessor (Ours) | 0.6842 | 0.6897 | 0.6635 | 0.6694 | 0.6329 | 0.6340
Improvement | +6.7% | +6.0% | +14.4% | +13.7% | +11.65% | +10.6%
", + "bbox": [ + 516, + 272, + 903, + 489 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "cluding: Video-LLaVA [40], Video-ChatGPT [45], LLaVA-NeXT [36], VideoLLaMA2 [14], and Qwen2-VL [66]. (5) Deep learning-based I/VQA models, including: HyperIQA [56], MUSIQ [26], LIQE [83], VSFA [35], BVQA [33], SimpleVQA [58], FAST-VQA [70], DOVER [71], and Q-Align [72].", + "bbox": [ + 511, + 503, + 905, + 594 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Training Settings. Traditional handcrafted models are directly evaluated on the corresponding databases, and the average score of all frames is calculated. For vision-language pre-training and LLM-based models, we load the pre-trained weights for inference. CLIPscore [22], BLIP-score [37], and other vision-language pre-training models are calculated directly as the average cosine similarity between text and each video frame. SimpleVQA [58], BVQA [33], FAST-VQA [70], DOVER [71], and Q-Align [72] are fine-tuned on every test dataset. For deep learning-based IQA and VQA models, all experiments for each method are retrained on each dimension using the same training and testing split as the previous literature at a ratio of 4:1. All results are averaged after ten random splits.", + "bbox": [ + 511, + 599, + 906, + 811 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "5.2. Results and Analysis", + "text_level": 1, + "bbox": [ + 511, + 820, + 709, + 835 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Table 3 presents the pairwise win rates and the score prediction correlation between predicted results and human ground truths. The results indicate that handcrafted-based methods consistently underperform across all four evalu-", + "bbox": [ + 511, + 840, + 905, + 900 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "18875", + "bbox": [ + 480, + 944, + 517, + 955 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/3a323b11ba3b97928a16f87d4016f9fd2550b9e4d10c010126e4ee85c42f694d.jpg", + "image_caption": [ + "Figure 6. Comparison of win rates of different generation models across four dimensions evaluated by different VQA methods, demonstrating our AIGV-Assessor has better win-rate evaluation ability aligned with Ground Truth (GT)." + ], + "image_footnote": [], + "bbox": [ + 91, + 56, + 911, + 185 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/0f6cb2fe3e785491eca62a8a3ab3b074b73d70560fa69d07aff269862d26be66.jpg", + "table_caption": [ + "Table 7. Ablation study of the proposed AIGV-Assessor method." + ], + "table_footnote": [], + "table_body": "
No. | Feature & Strategy | Static Quality | Temporal Smoothness | Dynamic Degree | T2V Correspondence
spatial | temporal | quality level | LoRA finetuning | SRCC | PLCC | KRCC | SRCC | PLCC | KRCC | SRCC | PLCC | KRCC | SRCC | PLCC | KRCC
(1) |  |  |  |  | 0.864 | 0.866 | 0.726 | 0.870 | 0.868 | 0.727 | 0.556 | 0.572 | 0.432 | 0.616 | 0.620 | 0.492
(2) |  |  |  |  | 0.874 | 0.876 | 0.723 | 0.875 | 0.876 | 0.736 | 0.558 | 0.573 | 0.431 | 0.723 | 0.734 | 0.533
(3) |  |  |  |  | 0.887 | 0.884 | 0.722 | 0.881 | 0.883 | 0.706 | 0.562 | 0.575 | 0.433 | 0.739 | 0.758 | 0.544
(4) |  |  |  |  | 0.887 | 0.888 | 0.753 | 0.917 | 0.910 | 0.796 | 0.569 | 0.536 | 0.438 | 0.688 | 0.673 | 0.557
(5) |  |  |  |  | 0.905 | 0.908 | 0.754 | 0.919 | 0.917 | 0.799 | 0.589 | 0.587 | 0.441 | 0.742 | 0.763 | 0.549
(6) |  |  |  |  | 0.916 | 0.919 | 0.758 | 0.923 | 0.922 | 0.804 | 0.609 | 0.608 | 0.444 | 0.750 | 0.770 | 0.559
", + "bbox": [ + 91, + 238, + 903, + 345 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "ation perspectives. Vision-language pre-training methods such as CLIPscore [22] and BLIPscore [37] demonstrate moderate performance but are still surpassed by more specialized and fine-tuned VQA models. Specifically, deep learning-based models like FAST-VQA [70] and DOVER [71] achieve more competitive performances after fin-tuning. However, they are still far away from satisfactory. Notably, most VQA models perform better on quality evaluation than on text-video correspondence, as they lack text prompts input used in video generation, making it challenging to extract relation features from the AI-generated videos, which inevitably leads to the performance drop. Finally, the performance exploration of recent LMMs on our database shows that current LMMs are able to produce meaningful evaluations, which can motivate future works to further explore the use of LMMs for AIGV assessment.", + "bbox": [ + 88, + 356, + 482, + 595 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "The proposed AIGV-Assessor achieves the best performance compared to the competitors for both MOS prediction and pair ranking tasks in terms of all four dimensions. To further validate the effectiveness and generalizability of our proposed model, we also evaluate it on four other AIGVQA datasets [13, 31, 43, 84]. From Tables 4-6, we observe that AIGV-Assessor consistently achieves the best performance across these datasets. As shown in Figure 6, AIGV-Assessor achieves the highest overlap in area with Ground Truth (GT), indicating that AIGV-Assessor can reliably perform T2V model benchmarking, outperforming other assessment models in discerning quality differences in AI-generated videos.", + "bbox": [ + 89, + 598, + 482, + 794 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "5.3. Ablation Study", + "text_level": 1, + "bbox": [ + 89, + 801, + 243, + 819 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "We conduct ablation experiments to verify the effectiveness of the main components in our AIGV-Assessor method, including the spatial feature, the temporal feature, the quality level, and the LoRA finetuning strategy. Additionally, we assess how each feature contributes to the performance", + "bbox": [ + 89, + 824, + 483, + 901 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "across different quality dimensions. The results of these experiments are summarized in Table 7. Experiments (1), (2), and (3) validate the effectiveness of the quality regression module and the LoRA finetuning strategy, confirming that fine-tuning and quality regression significantly enhance model performance over only regressing the generated text outputs from the LLM. The addition of temporal features, as seen in Experiments (4), (5), and (6), significantly improves model performance. Experiment (6), which integrates all components, yields the best overall performance, showing that the combination of spatial and temporal features, quality level prediction, and LoRA finetuning provides the most robust and accurate AIGV assessment.", + "bbox": [ + 511, + 356, + 906, + 551 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "6. Conclusion", + "text_level": 1, + "bbox": [ + 513, + 580, + 633, + 598 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "In this paper, we study the human visual preference evaluation problem for AIGVs. 
We first construct AIGVQA-DB, which includes 36,576 videos generated from 1,048 diverse text prompts, with the MOSs and pair comparisons evaluated from four perspectives. Our detailed manual evaluations reflect different aspects of human visual preferences on AIGVs and reveal critical insights into the strengths and weaknesses of various text-to-video models. Based on the database, we evaluate the performance of state-of-the-art quality evaluation models and establish a new benchmark, revealing their limitations in measuring the perceptual preference of AIGVs. Finally, we propose AIGV-Assessor, a novel VQA model that leverages the capabilities of LMMs to give quality levels, predict quality scores, and compare preferences from four dimensions. Extensive experiments demonstrate that AIGV-Assessor achieves state-of-the-art performance on both AIGVQA-DB and other AIGVQA benchmarks, validating its robustness in understanding and evaluating AI-generated videos.", + "bbox": [ + 511, + 613, + 906, + 900 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "18876", + "bbox": [ + 480, + 944, + 519, + 955 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 91, + 89, + 187, + 104 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[1] Hotshot-XL. https://github.com/hotshotco/hotshot-xl, 2023. 2, 3", + "[2] Floor33. https://discord.gg/EuB9KT6H, 2023. 2, 3", + "[3] Genmo. https://www.genmo.ai, 2024. 2, 3", + "[4] Gen-2. https://research.runwayml.com/gen2, 2024. 2, 3", + "[5] Moonvalley. https://moonvalley.ai, 2024. 2, 3", + "[6] Morph Studio. https://www.morphstudio.com, 2024. 2, 3, 5", + "[7] Sora. https://openai.com/research/video-generation-models-as-world-simulators, 2024. 2, 3, 5", + "[8] Max Bain, Arsha Nagrani, Gül Varol, and Andrew Zisserman. Frozen in time: A joint video and image encoder for end-to-end retrieval. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 1728-1738, 2021. 2, 3", + "[9] Andreas Blattmann, Tim Dockhorn, Sumith Kulal, Daniel Mendelevitch, Maciej Kilian, Dominik Lorenz, Yam Levi, Zion English, Vikram Voleti, Adam Letts, et al. Stable video diffusion: Scaling latent video diffusion models to large datasets. arXiv preprint arXiv:2311.15127, 2023. 2, 5", + "[10] Yuqin Cao, Xiongkuo Min, Yixuan Gao, Wei Sun, Weisi Lin, and Guangtao Zhai. Unqa: Unified no-reference quality assessment for audio, image, video, and audio-visual content. arXiv preprint arXiv:2407.19704, 2024. 1", + "[11] Yuqin Cao, Xiongkuo Min, Yixuan Gao, Wei Sun, and Guangtao Zhai. Agav-rater: Adapting large multimodal model for ai-generated audio-visual quality assessment. arXiv preprint arXiv:2501.18314, 2025. 1", + "[12] Haoxin Chen, Menghan Xia, Yingqing He, Yong Zhang, Xiaodong Cun, Shaoshu Yang, Jinbo Xing, Yaofang Liu, Qifeng Chen, Xintao Wang, Chao Weng, and Ying Shan. Videocrafter1: Open diffusion models for high-quality video generation. arXiv preprint arXiv:2310.19512, 2023. 1, 2", + "[13] Zijian Chen, Wei Sun, Yuan Tian, Jun Jia, Zicheng Zhang, Jiarui Wang, Ru Huang, Xiongkuo Min, Guangtao Zhai, and Wenjun Zhang. Gaia: Rethinking action quality assessment for ai-generated videos. arXiv preprint arXiv:2406.06087, 2024. 1, 3, 7, 8", + "[14] Zesen Cheng, Sicong Leng, Hang Zhang, Yifei Xin, Xin Li, Guanzheng Chen, Yongxin Zhu, Wenqi Zhang, Ziyang Luo, Deli Zhao, and Lidong Bing. Videollama 2: Advancing spatial-temporal modeling and audio understanding in video-llms. 
arXiv preprint arXiv:2406.07476, 2024. 6, 7", + "[15] Iya Chivileva, Philip Lynch, Tomas E Ward, and Alan F Smeaton. Measuring the quality of text-to-video model outputs: Metrics and dataset. arXiv preprint arXiv:2309.08009, 2023. 3", + "[16] Ming Ding, Wendi Zheng, Wenyi Hong, and Jie Tang. Cogview2: Faster and better text-to-image generation via hierarchical transformers. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), pages 16890-16902, 2022. 3" + ], + "bbox": [ + 93, + 114, + 482, + 898 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[17] Huiyu Duan, Xiongkuo Min, Yucheng Zhu, Guangtao Zhai, Xiaokang Yang, and Patrick Le Callet. Confusing image quality assessment: Toward better augmented reality experience. IEEE Transactions on Image Processing (TIP), 31: 7206-7221, 2022. 4", + "[18] Huiyu Duan, Qiang Hu, Jiarui Wang, Liu Yang, Zitong Xu, Lu Liu, Xiongkuo Min, Chunlei Cai, Tianxiao Ye, Xiaoyun Zhang, et al. Finevq: Fine-grained user generated content video quality assessment. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2025. 1", + "[19] Christoph Feichtenhofer, Haoqi Fan, Jitendra Malik, and Kaiming He. Slowfast networks for video recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 6202-6211, 2019. 5", + "[20] Yuwei Guo, Ceyuan Yang, Anyi Rao, Yaohui Wang, Yu Qiao, Dahua Lin, and Bo Dai. Animatediff: Animate your personalized text-to-image diffusion models without specific tuning. arXiv preprint arXiv:2307.04725, 2023. 2", + "[21] Yingqing He, Tianyu Yang, Yong Zhang, Ying Shan, and Qifeng Chen. Latent video diffusion models for high-fidelity video generation with arbitrary lengths. arXiv preprint arXiv:2211.13221, 2022. 2, 3, 5", + "[22] Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Le Bras, and Yejin Choi. Clipscore: A reference-free evaluation metric for image captioning. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7514-7528, 2021. 1, 6, 7, 8", + "[23] Wenyi Hong, Ming Ding, Wendi Zheng, Xinghan Liu, and Jie Tang. Cogvideo: Large-scale pretraining for text-to-video generation via transformers. arXiv preprint arXiv:2205.15868, 2022. 1, 2, 3", + "[24] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021. 6", + "[25] Ziqi Huang, Yinan He, Jiashuo Yu, Fan Zhang, Chenyang Si, Yuming Jiang, Yuanhan Zhang, Tianxing Wu, Qingyang Jin, Nattapol Chanpaisit, Yaohui Wang, Xinyuan Chen, Limin Wang, Dahua Lin, Yu Qiao, and Ziwei Liu. Vbench: Comprehensive benchmark suite for video generative models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024. 1, 6, 7", + "[26] Junjie Ke, Qifei Wang, Yilin Wang, Peyman Milanfar, and Feng Yang. Musiq: Multi-scale image quality transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 5128-5137, 2021. 6, 7", + "[27] Levon Khachatryan, Andranik Movsisyan, Vahram Tadevosyan, Roberto Henschel, Zhangyang Wang, Shant Navasardyan, and Humphrey Shi. Text2video-zero: Text-to-image diffusion models are zero-shot video generators. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 15954-15964, 2023. 
1, 2, 3", + "[28] Yuval Kirstain, Adam Poliak, Uriel Singer, and Omer Levy. Pick-a-pic: An open dataset of user preferences for text-to-image generation. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), 2023. 7" + ], + "bbox": [ + 516, + 92, + 903, + 898 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "18877", + "bbox": [ + 480, + 944, + 517, + 955 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[29] Yuval Kirstain, Adam Polyak, Uriel Singer, Shahbuland Martiana, Joe Penna, and Omer Levy. Pick-a-pic: An open dataset of user preferences for text-to-image generation. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), pages 36652-36663, 2023. 3", + "[30] Jari Korhonen. Two-level approach for no-reference consumer video quality assessment. IEEE Transactions on Image Processing (TIP), 28(12):5923-5938, 2019. 7", + "[31] Tengchuan Kou, Xiaohong Liu, Zicheng Zhang, Chunyi Li, Haoning Wu, Xiongkuo Min, Guangtao Zhai, and Ning Liu. Subjective-aligned dataset and metric for text-to-video quality assessment. arXiv preprint arXiv:2403.11956, 2024. 1, 3, 7, 8", + "[32] Debarati Kundu, Deepti Ghadiyaram, Alan C Bovik, and Brian L Evans. Large-scale crowdsourced study for tonemapbed hdr pictures. IEEE Transactions on Image Processing (TIP), pages 4725-4740, 2017. 7", + "[33] Bowen Li, Weixia Zhang, Meng Tian, Guangtao Zhai, and Xianpei Wang. Blindly assess quality of in-the-wild videos via quality-aware pre-training and motion perception. IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), 32(9):5944-5958, 2022. 1, 6, 7", + "[34] Chunyi Li, Zicheng Zhang, Haoning Wu, Wei Sun, Xiongkuo Min, Xiaohong Liu, Guangtao Zhai, and Weisi Lin. Agiqa-3k: An open database for ai-generated image quality assessment. IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), 2023. 3", + "[35] Dingquan Li, Tingting Jiang, and Ming Jiang. Quality assessment of in-the-wild videos. In Proceedings of the ACM International Conference on Multimedia (ACMMM). ACM, 2019. 1, 6, 7", + "[36] Feng Li, Renrui Zhang, Hao Zhang, Yuanhan Zhang, Bo Li, Wei Li, Zejun Ma, and Chunyuan Li. Llava next-interleave: Tackling multi-image, video, and 3d in large multimodal models. arXiv preprint arXiv:2407.07895, 2024. 6, 7", + "[37] Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In Proceedings of the International Conference on Machine Learning (ICML), pages 12888-12900. PMLR, 2022. 1, 6, 7, 8", + "[38] Yuncheng Li, Yale Song, Liangliang Cao, Joel Tetreault, Larry Goldberg, Alejandro Jaimes, and Jiebo Luo. Tgif: A new dataset and benchmark on animated gif description. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 4641-4650, 2016. 2, 3", + "[39] Youwei Liang, Junfeng He, Gang Li, Peizhao Li, Arseniy Klimovskiy, Nicholas Carolan, Jiao Sun, Jordi Pont-Tuset, Sarah Young, Feng Yang, et al. Rich human feedback for text-to-image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 19401-19411, 2024. 3", + "[40] Bin Lin, Bin Zhu, Yang Ye, Munan Ning, Peng Jin, and Li Yuan. Video-llava: Learning united visual representation by alignment before projection. arXiv preprint arXiv:2311.10122, 2023. 6, 7", + "[41] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee." 
+ ], + "bbox": [ + 91, + 90, + 480, + 900 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Visual instruction tuning. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), 2024. 7", + "[42] Yaofang Liu, Xiaodong Cun, Xuebo Liu, Xintao Wang, Yong Zhang, Haoxin Chen, Yang Liu, Tieyong Zeng, Raymond Chan, and Ying Shan. Evalcrafter: Benchmarking and evaluating large video generation models. arXiv preprint arXiv:2310.11440, 2023. 3, 7", + "[43] Yuanxin Liu, Lei Li, Shuhuai Ren, Rundong Gao, Shicheng Li, Sishuo Chen, Xu Sun, and Lu Hou. Fetv: A benchmark for fine-grained evaluation of open-domain text-to-video generation. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), 2024. 1, 2, 3, 6, 7, 8", + "[44] Zhengxiong Luo, Dayou Chen, Yingya Zhang, Yan Huang, Liang Wang, Yujun Shen, Deli Zhao, Jingren Zhou, and Tieniu Tan. Videofusion: Decomposed diffusion models for high-quality video generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10209-10218, 2023. 1, 2, 3", + "[45] Muhammad Maaz, Hanoona Rasheed, Salman Khan, and Fahad Shahbaz Khan. Video-chatgpt: Towards detailed video understanding via large vision and language models. In Proceedings of the Association for Computational Linguistics (ACL), 2024. 6, 7", + "[46] Xiongkuo Min, Ke Gu, Guangtao Zhai, Jing Liu, Xiaokang Yang, and Chang Wen Chen. Blind quality assessment based on pseudo-reference image. IEEE Transactions on Multimedia (TMM), pages 2049-2062, 2017. 6, 7", + "[47] Xiongkuo Min, Guangtao Zhai, Ke Gu, Yutao Liu, and Xiaokang Yang. Blind image quality estimation via distortion aggravation. IEEE Transactions on Broadcasting (TBC), pages 508-517, 2018. 6, 7", + "[48] Anish Mittal, Anush Krishna Moorthy, and Alan Conrad Bovik. No-reference image quality assessment in the spatial domain. IEEE Transactions on Image Processing (TIP), pages 4695-4708, 2012. 6, 7", + "[49] Anish Mittal, Rajiv Soundararajan, and Alan C Bovik. Making a “completely blind” image quality analyzer. IEEE Signal Processing Letters (SPL), pages 209–212, 2012. 6, 7", + "[50] Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125, 1 (2):3, 2022. 3", + "[51] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10684-10695, 2022. 3", + "[52] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), 2016. 1", + "[53] Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. Laion-5b: An open large-scale dataset for training next generation image-text models. In Proceedings of the Ad" + ], + "bbox": [ + 516, + 92, + 903, + 900 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "18878", + "bbox": [ + 480, + 944, + 517, + 955 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "vances in Neural Information Processing Systems (NeurIPS), pages 25278-25294, 2022. 1, 6, 7", + "[54] BT Series. 
Methodology for the subjective assessment of the quality of television pictures. Recommendation ITU-R BT.500-13, 2012. 4", + "[55] Uriel Singer, Adam Polyak, Thomas Hayes, Xi Yin, Jie An, Songyang Zhang, Qiyuan Hu, Harry Yang, Oron Ashual, Oran Gafni, et al. Make-a-video: Text-to-video generation without text-video data. arXiv preprint arXiv:2209.14792, 2022. 1, 2, 3", + "[56] Shaolin Su, Qingsen Yan, Yu Zhu, Cheng Zhang, Xin Ge, Jinqiu Sun, and Yanning Zhang. Blindly assess image quality in the wild guided by a self-adaptive hyper network. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 6, 7", + "[57] Wei Sun, Xiongkuo Min, Wei Lu, and Guangtao Zhai. A deep learning based no-reference quality assessment model for UGC videos. In Proceedings of the 30th ACM International Conference on Multimedia (ACMMM), pages 856-865, 2022. 1, 6, 7", + "[58] Wei Sun, Xiongkuo Min, Wei Lu, and Guangtao Zhai. A deep learning based no-reference quality assessment model for UGC videos. In Proceedings of the ACM International Conference on Multimedia (ACMMM), pages 856-865, 2022. 7", + "[59] Wei Sun, Xiongkuo Min, Danyang Tu, Siwei Ma, and Guangtao Zhai. Blind quality assessment for in-the-wild images via hierarchical feature fusion and iterative mixed database training. IEEE Journal of Selected Topics in Signal Processing (JSTSP), 2023. 7", + "[60] Zhengzhong Tu, Yilin Wang, Neil Birkbeck, Balu Adsumilli, and Alan C Bovik. Ugc-vqa: Benchmarking blind video quality assessment for user generated content. IEEE Transactions on Image Processing (TIP), 30:4449-4464, 2021. 7", + "[61] Thomas Unterthiner, Sjoerd Van Steenkiste, Karol Kurach, Raphael Marinier, Marcin Michalski, and Sylvain Gelly. Towards accurate generative models of video: A new metric & challenges. arXiv preprint arXiv:1812.01717, 2018. 1", + "[62] Jianyi Wang, Kelvin C.K. Chan, and Chen Change Loy. Exploring clip for assessing the look and feel of images. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), pages 2555-2563, 2023. 7", + "[63] Jiarui Wang, Huiyu Duan, Jing Liu, Shi Chen, Xiongkuo Min, and Guangtao Zhai. Aigciqa2023: A large-scale image quality assessment database for ai generated images: from the perspectives of quality, authenticity and correspondence. In CAAI International Conference on Artificial Intelligence (CICAI), pages 46-57. Springer, 2023. 3", + "[64] Jiuniu Wang, Hangjie Yuan, Dayou Chen, Yingya Zhang, Xiang Wang, and Shiwei Zhang. Modelscope text-to-video technical report. arXiv preprint arXiv:2308.06571, 2023. 1, 2, 3", + "[65] Jiarui Wang, Huiyu Duan, Guangtao Zhai, and Xiongkuo Min. Quality assessment for ai generated images with instruction tuning. arXiv preprint arXiv:2405.07346, 2024. 1", + "[66] Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin" + ], + "bbox": [ + 91, + 90, + 482, + 902 + ], + "page_idx": 10 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Ge, Yang Fan, Kai Dang, Mengfei Du, Xuancheng Ren, Rui Men, Dayiheng Liu, Chang Zhou, Jingren Zhou, and Junyang Lin. Qwen2-vl: Enhancing vision-language model's perception of the world at any resolution. arXiv preprint arXiv:2409.12191, 2024. 6, 7", + "[67] Yaohui Wang, Xinyuan Chen, Xin Ma, Shangchen Zhou, Ziqi Huang, Yi Wang, Ceyuan Yang, Yinan He, Jiashuo Yu, Peiqing Yang, et al. Lavie: High-quality video generation with cascaded latent diffusion models. arXiv preprint arXiv:2309.15103, 2023. 
2, 3", + "[68] Yi Wang, Yinan He, Yizhuo Li, Kunchang Li, Jiashuo Yu, Xin Ma, Xinhao Li, Guo Chen, Xinyuan Chen, Yaohui Wang, et al. Internvid: A large-scale video-text dataset for multimodal understanding and generation. In Proceedings of the International Conference on Learning Representations (ICLR), 2023. 2, 3", + "[69] Zirui Wang, Mengzhou Xia, Luxi He, Howard Chen, Yitao Liu, Richard Zhu, Kaiqu Liang, Xindi Wu, Haotian Liu, Sadhika Malladi, Alexis Chevalier, Sanjeev Arora, and Danqi Chen. Charxiv: Charting gaps in realistic chart understanding in multimodal llms. arXiv preprint arXiv:2406.18521, 2024. 5, 6", + "[70] Haoning Wu, Chaofeng Chen, Jingwen Hou, Liang Liao, Annan Wang, Wenxiu Sun, Qiong Yan, and Weisi Lin. Fastvqa: Efficient end-to-end video quality assessment with fragment sampling. In Proceedings of the European Conference on Computer Vision (ECCV), pages 538-554. Springer, 2022. 1, 6, 7, 8", + "[71] Haoning Wu, Erli Zhang, Liang Liao, Chaofeng Chen, Jingwen Hou Hou, Annan Wang, Wenxiu Sun Sun, Qiong Yan, and Weisi Lin. Exploring video quality assessment on user generated contents from aesthetic and technical perspectives. In Proceedings of the International Conference on Computer Vision (ICCV), 2023. 1, 6, 7, 8", + "[72] Haoning Wu, Zicheng Zhang, Weixia Zhang, Chaofeng Chen, Liang Liao, Chunyi Li, Yixuan Gao, Annan Wang, Erli Zhang, Wenxiu Sun, et al. Q-align: Teaching lmm's for visual scoring via discrete text-defined levels. arXiv preprint arXiv:2312.17090, 2023. 6, 7", + "[73] Jay Zhangjie Wu, Yixiao Ge, Xintao Wang, Stan Weixian Lei, Yuchao Gu, Yufei Shi, Wynne Hsu, Ying Shan, Xiaohu Qie, and Mike Zheng Shou. Tune-a-video: One-shot tuning of image diffusion models for text-to-video generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 7623-7633, 2023. 1, 2, 3", + "[74] Xiaoshi Wu, Yiming Hao, Keqiang Sun, Yixiong Chen, Feng Zhu, Rui Zhao, and Hongsheng Li. Human preference score v2: A solid benchmark for evaluating human preferences of text-to-image synthesis. arXiv preprint arXiv:2306.09341, 2023. 7", + "[75] Xiaoshi Wu, Keqiang Sun, Feng Zhu, Rui Zhao, and Hongsheng Li. Better aligning text-to-image models with human preference. arXiv preprint arXiv:2303.14420, 1(3), 2023. 3", + "[76] Jun Xu, Tao Mei, Ting Yao, and Yong Rui. Msr-vtt: A large video description dataset for bridging video and language. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2016. 2, 3" + ], + "bbox": [ + 516, + 90, + 905, + 901 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "18879", + "bbox": [ + 480, + 944, + 519, + 955 + ], + "page_idx": 10 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[77] Jingtao Xu, Peng Ye, Qiaohong Li, Haiqing Du, Yong Liu, and David Doermann. Blind image quality assessment based on high order statistics aggregation. IEEE Transactions on Image Processing (TIP), pages 4444-4457, 2016. 6, 7", + "[78] Jiazheng Xu, Xiao Liu, Yuchen Wu, Yuxuan Tong, Qinkai Li, Ming Ding, Jie Tang, and Yuxiao Dong. Imagereward: Learning and evaluating human preferences for text-to-image generation. arXiv preprint arXiv:2304.05977, 2023.6,7", + "[79] Zitong Xu, Huiyu Duan, Guangji Ma, Liu Yang, Jiarui Wang, Qingbo Wu, Xiongkuo Min, Guangtao Zhai, and Patrick Le Callet. Harmonyiqa: Pioneering benchmark and model for image harmonization quality assessment. In Proceedings of the International Conference on Multimedia and Expo (ICME), 2025. 
1", + "[80] Wufeng Xue, Lei Zhang, and Xuanqin Mou. Learning without human scores for blind image quality assessment. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 995-1002, 2013. 6, 7", + "[81] Wilson Yan, Yunzhi Zhang, Pieter Abbeel, and Aravind Srinivas. Videogpt: Video generation using vq-vae and transformers. arXiv preprint arXiv:2104.10157, 2021. 1", + "[82] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 586-595, 2018. 6", + "[83] Weixia Zhang, Guangtao Zhai, Ying Wei, Xiaokang Yang, and Kede Ma. Blind image quality assessment via vision-language correspondence: A multitask learning perspective. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023. 6, 7", + "[84] Zhichao Zhang, Xinyue Li, Wei Sun, Jun Jia, Xiongkuo Min, Zicheng Zhang, Chunyi Li, Zijian Chen, Puyi Wang, Zhongpeng Ji, et al. Benchmarking aigc video quality assessment: A dataset and unified model. arXiv preprint arXiv:2407.21408, 2024. 1, 3, 7, 8", + "[85] Tianwei Zhou, Songbai Tan, Wei Zhou, Yu Luo, Yuan-Gen Wang, and Guanghui Yue. Adaptive mixed-scale feature fusion network for blind ai-generated image quality assessment. IEEE Transactions on Broadcasting (TBC), 2024. 1" + ], + "bbox": [ + 91, + 92, + 482, + 683 + ], + "page_idx": 11 + }, + { + "type": "page_number", + "text": "18880", + "bbox": [ + 480, + 945, + 517, + 955 + ], + "page_idx": 11 + } +] \ No newline at end of file diff --git a/2025/AIGV-Assessor_ Benchmarking and Evaluating the Perceptual Quality of Text-to-Video Generation with LMM/6da59f8b-88b1-4dd7-9656-385f5fb4c136_model.json b/2025/AIGV-Assessor_ Benchmarking and Evaluating the Perceptual Quality of Text-to-Video Generation with LMM/6da59f8b-88b1-4dd7-9656-385f5fb4c136_model.json new file mode 100644 index 0000000000000000000000000000000000000000..433cfeaa2cd696f024b432e9b83217e56af2f140 --- /dev/null +++ b/2025/AIGV-Assessor_ Benchmarking and Evaluating the Perceptual Quality of Text-to-Video Generation with LMM/6da59f8b-88b1-4dd7-9656-385f5fb4c136_model.json @@ -0,0 +1,2380 @@ +[ + [ + { + "type": "header", + "bbox": [ + 0.107, + 0.003, + 0.182, + 0.043 + ], + "angle": 0, + "content": "CVF" + }, + { + "type": "header", + "bbox": [ + 0.237, + 0.001, + 0.812, + 0.047 + ], + "angle": 0, + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." 
+ }, + { + "type": "title", + "bbox": [ + 0.13, + 0.13, + 0.872, + 0.175 + ], + "angle": 0, + "content": "AIGV-Assessor: Benchmarking and Evaluating the Perceptual Quality of Text-to-Video Generation with LMM" + }, + { + "type": "text", + "bbox": [ + 0.1, + 0.203, + 0.905, + 0.277 + ], + "angle": 0, + "content": "Jiarui Wang\\(^{1}\\), Huiyu Duan\\(^{1,2}\\), Guangtao Zhai\\(^{1,2}\\), Juntong Wang\\(^{1}\\), Xiongkuo Min\\(^{1*}\\), \\(^{1}\\)Institute of Image Communication and Network Engineering, \\(^{2}\\)MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai, China" + }, + { + "type": "title", + "bbox": [ + 0.248, + 0.309, + 0.328, + 0.327 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.342, + 0.486, + 0.721 + ], + "angle": 0, + "content": "The rapid advancement of large multimodal models (LMMs) has led to the rapid expansion of artificial intelligence generated videos (AIGVs), which highlights the pressing need for effective video quality assessment (VQA) models designed specifically for AIGVs. Current VQA models generally fall short in accurately assessing the perceptual quality of AIGVs due to the presence of unique distortions, such as unrealistic objects, unnatural movements, or inconsistent visual elements. To address this challenge, we first present AIGVQA-DB, a large-scale dataset comprising 36,576 AIGVs generated by 15 advanced text-to-video models using 1,048 diverse prompts. With these AIGVs, a systematic annotation pipeline including scoring and ranking processes is devised, which collects 370k expert ratings to date. Based on AIGVQA-DB, we further introduce AIGV-Assessor, a novel VQA model that leverages spatiotemporal features and LMM frameworks to capture the intricate quality attributes of AIGVs, thereby accurately predicting precise video quality scores and video pair preferences. Through comprehensive experiments on both AIGVQA-DB and existing AIGV databases, AIGV-Assessor demonstrates state-of-the-art performance, significantly surpassing existing scoring or evaluation methods in terms of multiple perceptual quality dimensions. The dataset and code are released at https://github.com/IntMeGroup/AIGV-Assessor." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.752, + 0.224, + 0.768 + ], + "angle": 0, + "content": "1. Introduction" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.778, + 0.484, + 0.84 + ], + "angle": 0, + "content": "Text-to-video generative models [12, 27, 44, 64, 73], including auto-regressive [23, 81] and diffusion-based [12, 27, 55] approaches, have experienced rapid advancements in recent years with the explosion of large multimodal models" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.311, + 0.908, + 0.492 + ], + "angle": 0, + "content": "(LMMs). Given appropriate text prompts, these models can generate high-fidelity and semantically-aligned videos, commonly referred to as AI-generated videos (AIGVs), which have significantly facilitated the content creation in various domains, including entertainment, art, design, and advertising, etc [11, 13, 43]. Despite the significant progress, current AIGVs are still far from satisfactory. Unlike natural videos, which are usually affected by low-level distortions, such as noise, blur, low-light, etc, AIGVs generally suffer from degradations such as unrealistic objects, unnatural movements, inconsistent visual elements, and misalignment with text descriptions [25, 31, 43, 65, 79, 84, 85]." 
+ }, + { + "type": "text", + "bbox": [ + 0.512, + 0.493, + 0.909, + 0.825 + ], + "angle": 0, + "content": "The unique distortions in AIGVs also bring challenges to the video evaluation. Traditional video quality assessment (VQA) methods [10, 18, 33, 35, 57, 70, 71] mainly focus on evaluating the quality of professionally-generated content (PGC) and user-generated content (UGC), thus struggling to address the specific distortions associated with AIGVs, such as spatial artifacts, temporal inconsistencies, and misalignment between generated content and text prompts. For evaluation of AIGVs, some metrics such as Inception Score (IS) [52] and Fréchet Video Distance (FVD) [61] have been widely used, which are computed over distributions of videos and may not reflect the human preference for an individual video. Moreover, these metrics mainly evaluate the fidelity of videos, while failing to assess the text-video correspondence. Vision-language pre-training models, such as CLIPScore [22], BLIPScore [37], and AestheticScore [53] are frequently employed to evaluate the alignment between generated videos and their text prompts. However, these models mainly consider the text-video alignment at the image level, while ignoring the dynamic diversity and motion consistency of visual elements that are crucial to the video-viewing experience." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.826, + 0.909, + 0.903 + ], + "angle": 0, + "content": "In this paper, to facilitate the development of more comprehensive and precise metrics for evaluating AI-generated videos, we present AIGVQA-DB, a large-scale VQA dataset, including 36,576 AIGVs generated by 15 advanced text-to-video models using 1,048 diverse prompts. An" + }, + { + "type": "page_footnote", + "bbox": [ + 0.09, + 0.852, + 0.483, + 0.901 + ], + "angle": 0, + "content": "*Corresponding Author. \nThis work was supported in part by the National Natural Science Foundation of China under Grant 62271312, 62132006, 62401365, 62225112, and in part by STCSM under Grant 22DZ2229005." + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.52, + 0.957 + ], + "angle": 0, + "content": "18869" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.094, + 0.054, + 0.907, + 0.276 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.277, + 0.907, + 0.361 + ], + "angle": 0, + "content": "Figure 1. An overview of the AIGVQA-DB construction pipeline, illustrating the generation and the subjective evaluation procedures for the AIGVs in the database. (a) Prompt categorization according to the spatial major content. (b) Prompt categorization according to the temporal descriptions. (c) Prompt categorization according to the attribute control. (d) Prompt categorization according to the prompt complexity. (e) The 15 generative models used in the database. (f) Four visual quality evaluation perspectives, including static quality, temporal smoothness, dynamic degree, and text-video correspondence. (g) and (h) demonstrate the pair comparison and preference scoring processes, respectively." + }, + { + "type": "text", + "bbox": [ + 0.093, + 0.373, + 0.485, + 0.766 + ], + "angle": 0, + "content": "overview of the dataset construction pipeline is shown in Figure 1. The prompts are collected from existing open-domain text-video datasets [7, 8, 38, 43, 68, 76] or manually-written, which can be categorized based on four orthogonal aspects respectively, as shown in Figure 1(a)-(d). 
Based on the AIGVs, we collect 370k expert ratings comprising both mean opinion scores (MOSs) and pairwise comparisons, which are evaluated from four dimensions, including: (1) static quality, (2) temporal smoothness, (3) dynamic degree, and (4) text-video correspondence. Equipped with the dataset, we propose AIGV-Assessor, a large multimodal model-based (LMM-based) VQA method for AIGVs, which reformulates the quality regression task into an interactive question-and-answer (Q&A) framework and leverages the powerful multimodal representation capabilities of LMMs to provide accurate and robust quality assessments. AIGV-Assessor not only classifies videos into different quality levels through natural language output, but also generates precise quality scores through regression, thus enhancing the interpretability and usability of VQA results. Moreover, AIGV-Assessor also excels in pairwise video comparisons, enabling nuanced assessments that are closer to human preferences. Extensive experimental results demonstrate that AIGV-Assessor outperforms existing text-to-video scoring methods in terms of multiple dimensions relevant to human preference." + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.773, + 0.483, + 0.802 + ], + "angle": 0, + "content": "The main contributions of this paper are summarized as follows:" + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.81, + 0.484, + 0.902 + ], + "angle": 0, + "content": "- We construct AIGVQA-DB, a large-scale dataset comprising 36,576 AI-generated videos annotated with MOS scores and pairwise comparisons. Compared with existing benchmarks, AIGVQA-DB provides a more comprehensive assessment of the capabilities of text-to-video models from multiple perspectives." + }, + { + "type": "table_caption", + "bbox": [ + 0.513, + 0.371, + 0.907, + 0.398 + ], + "angle": 0, + "content": "Table 1. An overview of popular text-to-video (T2V) and image-to-video (I2V) generation models. \\( {}^{ \\dagger } \\) Representative variable." + }, + { + "type": "table", + "bbox": [ + 0.516, + 0.4, + 0.907, + 0.587 + ], + "angle": 0, + "content": "
<table><tr><td>Model</td><td>Year</td><td>Mode</td><td>Resolution</td><td>Frames</td><td>Open</td></tr>
<tr><td>CogVideo [23]</td><td>22.05</td><td>T2V</td><td>480×480</td><td>32</td><td>✓</td></tr>
<tr><td>Make-a-Video [55]</td><td>22.09</td><td>T2V</td><td>256×256</td><td>16</td><td>✓</td></tr>
<tr><td>LVDM [21]</td><td>22.11</td><td>T2V</td><td>256×256</td><td>16</td><td>✓</td></tr>
<tr><td>Tune-A-Video [73]</td><td>22.12</td><td>T2V</td><td>512×512</td><td>8</td><td>✓</td></tr>
<tr><td>VideoFusion [44]</td><td>23.03</td><td>T2V</td><td>128×128</td><td>16</td><td>✓</td></tr>
<tr><td>Text2Video-Zero [27]</td><td>23.03</td><td>T2V</td><td>512×512</td><td>8</td><td>✓</td></tr>
<tr><td>ModelScope [64]</td><td>23.03</td><td>T2V</td><td>256×256</td><td>16</td><td>✓</td></tr>
<tr><td>Lavie [67]</td><td>23.09</td><td>T2V</td><td>512×320</td><td>16</td><td>✓</td></tr>
<tr><td>VideoCrafter [12]</td><td>23.10</td><td>T2V, I2V</td><td>1024×576</td><td>16</td><td>✓</td></tr>
<tr><td>Hotshot-XL [1]</td><td>23.10</td><td>T2V</td><td>672×384</td><td>8</td><td>✓</td></tr>
<tr><td>StableVideoDiffusion [9]</td><td>23.11</td><td>I2V</td><td>576×1024</td><td>14</td><td>✓</td></tr>
<tr><td>AnimateDiff [20]</td><td>23.12</td><td>T2V, I2V</td><td>384×256</td><td>20</td><td>✓</td></tr>
<tr><td>Floor33 [2]</td><td>23.08</td><td>T2V, I2V</td><td>1024×640</td><td>16</td><td>-</td></tr>
<tr><td>Genmo [3]</td><td>23.10</td><td>T2V, I2V</td><td>2048×1536</td><td>60</td><td>-</td></tr>
<tr><td>Gen-2 [4]</td><td>23.12</td><td>T2V, I2V</td><td>1408×768</td><td>96</td><td>-</td></tr>
<tr><td>MoonValley [5]</td><td>24.01</td><td>T2V, I2V</td><td>1184×672</td><td>200†</td><td>-</td></tr>
<tr><td>MorphStudio [6]</td><td>24.01</td><td>T2V, I2V</td><td>1920×1080</td><td>72</td><td>-</td></tr>
<tr><td>Sora [7]</td><td>24.02</td><td>T2V, I2V</td><td>1920×1080</td><td>600†</td><td>-</td></tr></table>
" + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.602, + 0.907, + 0.677 + ], + "angle": 0, + "content": "- Based on AIGVQA-DB, we evaluate and benchmark 15 representative text-to-video models, and reveal their strengths and weaknesses from four crucial preference dimensions, i.e., static quality, temporal smoothness, dynamic degree, and text-to-video correspondence." + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.678, + 0.907, + 0.753 + ], + "angle": 0, + "content": "- We present a novel LMM-based VQA model for AIGVs, termed AIGV-Assessor, which integrates both spatial and temporal visual features as well as prompt features into a LMM to give quality levels, predict quality scores, and conduct quality comparisons." + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.753, + 0.907, + 0.813 + ], + "angle": 0, + "content": "- Thorough analysis of our AIGV-Assessor is provided and extensive experiments on our proposed AIGVQA-DB and other AIGV quality assessment datasets have shown the effectiveness and applicability of AIGV-Assessor." + }, + { + "type": "list", + "bbox": [ + 0.514, + 0.602, + 0.907, + 0.813 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.828, + 0.655, + 0.843 + ], + "angle": 0, + "content": "2. Related Work" + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.851, + 0.743, + 0.865 + ], + "angle": 0, + "content": "2.1. Text-to-video Generation" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.871, + 0.907, + 0.901 + ], + "angle": 0, + "content": "Recent advancements in text-to-video generative models have substantially broadened video creation and modifica" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "18870" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.256, + 0.076, + 0.741, + 0.089 + ], + "angle": 0, + "content": "Table 2. Summary of existing text-to-image and text-to-video evaluation datasets." + }, + { + "type": "table", + "bbox": [ + 0.093, + 0.09, + 0.907, + 0.233 + ], + "angle": 0, + "content": "
<table><tr><td>Dataset Types</td><td>Name</td><td>Numbers</td><td>Prompts</td><td>Models</td><td>Annotators</td><td>Dimensions</td><td>MOSs / Pairs</td><td>Annotation</td></tr>
<tr><td rowspan="5">AIGIQA</td><td>AGIQA-3k [34]</td><td>2,982</td><td>180</td><td>6</td><td>21</td><td>2</td><td>5,964</td><td>MOS</td></tr>
<tr><td>AIGCIQA2023 [63]</td><td>2,400</td><td>100</td><td>6</td><td>28</td><td>3</td><td>7,200</td><td>MOS</td></tr>
<tr><td>RichHF-18k [39]</td><td>17,760</td><td>17,760</td><td>3</td><td>3</td><td>4</td><td>71,040</td><td>MOS</td></tr>
<tr><td>HPS [75]</td><td>98,807</td><td>25,205</td><td>1</td><td>2,659</td><td>1</td><td>25,205</td><td>Pairs</td></tr>
<tr><td>Pick-a-Pic [29]</td><td>-</td><td>37,523</td><td>3</td><td>4,375</td><td>1</td><td>584,247</td><td>Pairs</td></tr>
<tr><td rowspan="7">AIGVQA</td><td>MQT [15]</td><td>1,005</td><td>201</td><td>5</td><td>24</td><td>2</td><td>2,010</td><td>MOS</td></tr>
<tr><td>EvalCrafter [42]</td><td>2,500</td><td>700</td><td>5</td><td>7</td><td>4</td><td>1,024</td><td>MOS</td></tr>
<tr><td>FETV [43]</td><td>2,476</td><td>619</td><td>4</td><td>3</td><td>3</td><td>7,428</td><td>MOS</td></tr>
<tr><td>LGVQ [84]</td><td>2,808</td><td>468</td><td>6</td><td>20</td><td>3</td><td>8,424</td><td>MOS</td></tr>
<tr><td>T2VQA-DB [31]</td><td>10,000</td><td>1,000</td><td>9</td><td>27</td><td>1</td><td>10,000</td><td>MOS</td></tr>
<tr><td>GAIA [13]</td><td>9,180</td><td>510</td><td>18</td><td>54</td><td>3</td><td>27,540</td><td>MOS</td></tr>
<tr><td>AIGVQA-DB (Ours)</td><td>36,576</td><td>1,048</td><td>15</td><td>120</td><td>4</td><td>122,304</td><td>MOS and Pairs</td></tr></table>
" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.241, + 0.486, + 0.558 + ], + "angle": 0, + "content": "tion possibilities. As shown in Table 1, these models exhibit distinct characteristics and capacities, including modes, resolution, and total frames. CogVideo [23] is an early text-to-video (T2V) model capable of generating short videos based on CogView2 [16]. Make-a-video [55] adds effective spatial-temporal modules on a diffusion-based text-to-image (T2I) model (i.e., DALLE-2 [50]). VideoFusion [44] also leverages the DALLE-2 and presents a decomposed diffusion process. LVDM [21], Text2Video-Zero [27], Tune-A-Video [73], and ModelScope [64] are models that inherit the success of Stable Diffusion (SD) [51] for video generation. Lavie [67] extends the original transformer block in SD to a spatio-temporal transformer. Hotshot-XL [1] introduces personalized video generation. Beyond these laboratory-driven advancements, the video generation landscape has also been enriched by a series of commercial products. Notable among them are Floor33 [2], Gen-2 [4], Genmo [3], MoonValley [5], MorphStudio [6], and Sora [7], which have gained substantial attention in both academia and industry, demonstrating the widespread application potential of AI-assisted video creation." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.566, + 0.317, + 0.58 + ], + "angle": 0, + "content": "2.2. Text-to-video Evaluation" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.584, + 0.485, + 0.901 + ], + "angle": 0, + "content": "The establishment of the AI-generated image quality assessment (AIGIQA) dataset is relatively well-developed, including both mean opinion scores (MOSs) for absolute quality evaluations, and pairwise comparisons for relative quality judgments. Recent developments in text-to-video generation models have also spurred the creation of various AI-generated video quality assessment (AIGVQA) datasets, addressing different aspects of the T2V generation challenge, as shown in Table 2. MQT [15] consists of 1,005 videos generated by 5 models using 201 prompts. EvalCrafter [42] and FETV [43] extend the scale of the videos, prompts, and evaluation dimensions. LGVQ [84] increases the number of annotators, providing more reliable MOSs. T2VQA-DB [31] consists of 10,000 videos from 1,000 prompts representing a significant improvement in scale. GAIA [13] collects 9,180 videos focusing on action quality assessment in AIGVs, but falls short in addressing the consistency between the generated visuals and their textual prompts. Most existing VQA datasets predominantly rely on MOS, an absolute scoring method, which suffers from the same drawback: absolute scores alone may cause am" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.242, + 0.907, + 0.304 + ], + "angle": 0, + "content": "biguity and overlook subtle quality differences. In contrast, our AIGVQA-DB includes both MOSs and pairwise comparisons, addressing the limitations of current works by providing fine-grained preference feedbacks." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.31, + 0.845, + 0.327 + ], + "angle": 0, + "content": "3. Database Construction and Analysis" + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.332, + 0.671, + 0.347 + ], + "angle": 0, + "content": "3.1. Data Collection" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.356, + 0.907, + 0.522 + ], + "angle": 0, + "content": "Prompt Scources and Categorization. 
Prompts of the AIGVQA-DB are primarily sourced from existing open-domain text-video pair datasets, including InternVid [68], MSRVTT [76], WebVid [8], TGIF [38], FETV [43] and Sora website [7]. We also manually craft prompts describing highly unusual scenarios to test the generalization ability of the generation models. As shown in Figure 1(a)-(d), we follow the categorization principles from FETV [43] to organize each prompt based on the \"spatial major content\", \"temporal major content\", \"attribute control\", and \"prompt complexity\"." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.527, + 0.909, + 0.8 + ], + "angle": 0, + "content": "Text-to-Video Generation. We utilize 15 latest text-to-video generative models to create AI-generated videos as shown in Figure 1(e). We leverage open-source website APIs and code with default weights for these models to produce AIGVs. For the construction of the MOS subset, we collect 48 videos from the Sora Website [7], along with their corresponding text prompts. Using these prompts, we generate additional videos using 11 different generative models. This process results in a total of 576 videos (12 generative models \\(\\times\\) 48 prompts). In addition to the MOS subset, we construct the pair-comparison subset using 1,000 diverse prompts, and 12 generative models including 8 open-sourced and 4 close-sourced are employed for text-to-video generation. Specifically, for each prompt, we generate four distinct videos for each open-source generative model and one video for each closed-source generative model. This process yields a total of 36,000 videos. More details of the database can be found in the supplementary material." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.807, + 0.892, + 0.823 + ], + "angle": 0, + "content": "3.2. Subjective Experiment Setup and Procedure" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.826, + 0.906, + 0.903 + ], + "angle": 0, + "content": "Due to the unique and unnatural characteristics of AI-generated videos and the varying target video spaces dictated by different text prompts, relying solely on a single score, such as \"quality\", to represent human visual preferences is insufficient. In this paper, we propose to measure" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.518, + 0.956 + ], + "angle": 0, + "content": "18871" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.092, + 0.075, + 0.347, + 0.197 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.178, + 0.198, + 0.326, + 0.208 + ], + "angle": 0, + "content": "(a) Distribution of Raw Scores" + }, + { + "type": "image", + "bbox": [ + 0.347, + 0.075, + 0.905, + 0.198 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.521, + 0.199, + 0.756, + 0.209 + ], + "angle": 0, + "content": "(b) Distribution of Mean Opinion Scores (MOSs)" + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.211, + 0.907, + 0.239 + ], + "angle": 0, + "content": "Figure 2. Video score distribution from the four perspectives including static quality, temporal smoothness, dynamic degree, and t2v correspondence. (a) Distribution of raw scores. 
(b) Distribution of Mean Opinion Scores (MOSs)" + }, + { + "type": "image", + "bbox": [ + 0.092, + 0.25, + 0.909, + 0.374 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.175, + 0.378, + 0.191, + 0.387 + ], + "angle": 0, + "content": "(a)" + }, + { + "type": "image_caption", + "bbox": [ + 0.356, + 0.377, + 0.372, + 0.387 + ], + "angle": 0, + "content": "(b)" + }, + { + "type": "image_caption", + "bbox": [ + 0.525, + 0.377, + 0.541, + 0.387 + ], + "angle": 0, + "content": "(c)" + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.39, + 0.904, + 0.418 + ], + "angle": 0, + "content": "Figure 3. Comparison of averaged win rates of different generation models across different categories. (a) Results across prompt complexity. (b) Results across attribute control. (c) Results across temporal major contents. (d) Results across spatial major contents." + }, + { + "type": "image", + "bbox": [ + 0.095, + 0.427, + 0.484, + 0.612 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.614, + 0.483, + 0.682 + ], + "angle": 0, + "content": "Figure 4. (a) Comparison of text-to-video generation models regarding the MOS in terms of four dimensions sorted bottom-up by their averaged MOS. (b) Comparison of text-to-video generation models regarding the win rate in terms of four dimensions sorted bottom-up by their averaged win rate." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.705, + 0.484, + 0.901 + ], + "angle": 0, + "content": "the human visual preferences of AIGVs from four perspectives. Static quality assesses the clarity, sharpness, color accuracy, and overall aesthetic appeal of the frames when viewed as standalone images. Temporal smoothness evaluates the temporal coherence of video frames and the absence of temporal artifacts such as flickering or jittering. Dynamic degree evaluates the extent to which the video incorporates large motions and dynamic scenes, which contributes to the overall liveliness and engagement measurement of the content. Text-video (TV) correspondence assesses how accurately the video content reflects the details, themes, and actions described in the prompt, ensuring that the generated video effectively translates the text input into" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.43, + 0.907, + 0.627 + ], + "angle": 0, + "content": "a visual narrative. Each of these four visual perception perspectives is related but distinct, offering a comprehensive evaluation for AIGVs. To evaluate the quality of the videos in the AIGVQA-DB, we conduct subjective experiments adhering to the guidelines outlined in ITU-R BT.500-14 [17, 54]. For the MOS annotation type, we use a 1-5 Likert-scale judgment to score the videos. For the pairs annotation type, participants are presented with pairs of videos and asked to choose the one they prefer, providing a direct comparison method for evaluating relative video quality. The videos are displayed using an interface designed with Python Tkinter, as illustrated in Figure 1(g)-(h). A total of 120 graduate students participate in the experiment." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.635, + 0.757, + 0.651 + ], + "angle": 0, + "content": "3.3. 
Subjective Data Processing" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.657, + 0.906, + 0.688 + ], + "angle": 0, + "content": "In order to obtain the MOS for an AIGV, we linearly scale the raw ratings to the range [0, 100] as follows:" + }, + { + "type": "equation", + "bbox": [ + 0.575, + 0.699, + 0.844, + 0.731 + ], + "angle": 0, + "content": "\\[\nz_{ij} = \\frac{r_{ij} - \\mu_{i}}{\\sigma_{i}}, \\quad z_{ij}^{\\prime} = \\frac{100(z_{ij} + 3)}{6},\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.537, + 0.741, + 0.882, + 0.789 + ], + "angle": 0, + "content": "\\[\n\\mu_{i} = \\frac{1}{N_{i}} \\sum_{j = 1}^{N_{i}} r_{ij}, \\quad \\sigma_{i} = \\sqrt{\\frac{1}{N_{i} - 1} \\sum_{j = 1}^{N_{i}} (r_{ij} - \\mu_{i})^{2}}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.795, + 0.906, + 0.854 + ], + "angle": 0, + "content": "where \\( r_{ij} \\) is the raw rating given by the \\( i \\)-th subject to the \\( j \\)-th video, and \\( N_i \\) is the number of videos judged by subject \\( i \\). Next, the MOS of the video \\( j \\) is computed by averaging the rescaled z-scores as follows:" + }, + { + "type": "equation", + "bbox": [ + 0.638, + 0.863, + 0.781, + 0.904 + ], + "angle": 0, + "content": "\\[\nMOS_{j} = \\frac{1}{M} \\sum_{i = 1}^{M} z_{ij}^{\\prime}\n\\]" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "18872" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.092, + 0.077, + 0.912, + 0.301 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.089, + 0.303, + 0.908, + 0.388 + ], + "angle": 0, + "content": "Figure 5. The framework of AIGV-Assessor: (a) AIGV-Assessor takes AI-generated video frames as input and outputs both text-based quality levels and numerical quality scores. The system begins with the extraction of spatiotemporal features using two vision encoders, which are then passed through spatial and temporal projection modules to generate aligned visual tokens in the language space. The LLM decoder produces text-based feedback describing the video quality level for four evaluation dimensions, respectively. Simultaneously, the last-hidden-states from the LLM are used to perform quality regression that outputs final quality scores in terms of four dimensions. (b) AIGV-Assessor is fine-tuned on pairwise comparison, further allowing the model to output the evaluation comparison between two videos." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.402, + 0.482, + 0.432 + ], + "angle": 0, + "content": "where \\(MOS_{j}\\) indicates the MOS for the \\(j\\)-th AIGV, \\(M\\) is the number of subjects, and \\(z_{ij}^{\\prime}\\) are the rescaled z-scores." + }, + { + "type": "text", + "bbox": [ + 0.093, + 0.434, + 0.485, + 0.751 + ], + "angle": 0, + "content": "For the pairs annotation type, given a text prompt \\( p_i \\) and 12 video generation models labeled \\( \\{A,B,C,\\dots,L\\} \\), we generate videos using each model, forming a group of videos \\( G_{i,j} = \\{V_{i,A,j},V_{i,B,j},V_{i,C,j},\\dots,V_{i,L,j}\\} \\). For each prompt \\( p_i \\), we generate four different videos randomly for each of the eight open-source generative models and one video for each of the four closed-source generative models, resulting in a group of 36 videos \\( \\{V_{i,A,1},V_{i,A,2},V_{i,A,3},V_{i,A,4},V_{i,B,1},\\dots,V_{i,L,1}\\} \\). 
For each group, we create all possible pairwise combinations, resulting in \\( C_{36}^2 \\) pairs: \\( (V_{A1},V_{B1}) \\), \\( (V_{A1},V_{B2}) \\), \\( (V_{A1},V_{B3}) \\), \\( (V_{A1},V_{B4}) \\), \\( (V_{A1},V_{C1}) \\), ..., \\( (V_{K1},V_{L1}) \\). In the AIGVQA-DB construction pipeline, a prompt suite of 1,000 prompts results in 630,000 \\( (1000\\times C_{36}^2) \\) pairwise video comparisons. From this extensive dataset, we randomly sample 30,000 pairs for evaluation from four perspectives. Each pair is judged by three annotators, and the final decision of the better video in each pair is determined by the majority vote. Finally, we obtain a total of 46,080 reliable score ratings (20 annotators × 4 perspectives × 576 videos) and 360,000 pair ratings (3 annotators × 4 perspectives × 30,000 pairs)." + }, + { + "type": "title", + "bbox": [ + 0.09, + 0.758, + 0.427, + 0.774 + ], + "angle": 0, + "content": "3.4. AIGV Analysis from Four Perspectives" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.78, + 0.484, + 0.902 + ], + "angle": 0, + "content": "As shown in Figure 2, the videos in the AIGVQA-DB cover a wide range of perceptual quality. We further analyze the win rates of various generation models across categories in Figure 3, revealing the strengths and weaknesses of each T2V model. As shown in Figure 3(a), the performances of T2V models rank uniformly across different prompt complexity items in terms of static quality, which indicates that current T2V models rank consistently for different prompts, likely" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.403, + 0.908, + 0.69 + ], + "angle": 0, + "content": "due to shared architectures like diffusion-based systems, with common strengths and limitations in handling complex prompts. As shown in Figure 3(b), in terms of attribute control, StableVideoDiffusion [9] excels in managing quantity over event order, as it first generates static images before animating them, preserving the original event sequence. As shown in Figure 3(d), in terms of spatial content, most videos featuring \"plants\" and \"people\" show poor T2V correspondence. More comparison and analysis can be found in the supplementary material. We also conduct comparisons among text-to-video generation models regarding the MOS and pairwise win rates, as shown in Figure 4. Notably, models such as LVDM [21] demonstrate exceptional performance in handling dynamic content, but exhibit relatively lower performance in temporal smoothness. Sora [7] and MorphStudio [6] perform well in static quality and temporal smoothness while lagging in dynamic degree. Additionally, closed-source models exhibit much better performance compared to open-source models." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.7, + 0.688, + 0.718 + ], + "angle": 0, + "content": "4. Proposed Method" + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.722, + 0.679, + 0.737 + ], + "angle": 0, + "content": "4.1. Model Structure" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.745, + 0.907, + 0.866 + ], + "angle": 0, + "content": "Spatial and Temporal Vision Encoder. As shown in Figure 5(a), the model leverages two different types of encoders to capture the spatial and temporal characteristics of the video: (1) 2D Encoder: A pre-trained 2D vision transformer (InternViT [69]) is used to process individual video frames. (2) 3D Encoder: A 3D network, i.e., SlowFast [19], is employed to extract temporal features by processing sequences of video frames."
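For the two-branch tokenization just described in Sec. 4.1, a minimal PyTorch sketch is given below: a per-frame 2D encoder for spatial features, a 3D branch for temporal features, and two linear projectors into the language-embedding space. The patchify Conv2d and the Conv3d are lightweight stand-ins for the pretrained InternViT and SlowFast encoders, and all dimensions (spatial_dim, temporal_dim, llm_dim) are illustrative assumptions rather than the released configuration.

```python
# Illustrative sketch of the Sec. 4.1 two-branch design (not the released code).
import torch
import torch.nn as nn

class SpatioTemporalTokenizer(nn.Module):
    def __init__(self, spatial_dim=1024, temporal_dim=512, llm_dim=4096):
        super().__init__()
        # Stand-ins for the pretrained encoders (InternViT / SlowFast in the paper):
        # a patchify conv plays the role of the 2D ViT stem, a 3D conv the 3D branch.
        self.spatial_encoder = nn.Conv2d(3, spatial_dim, kernel_size=14, stride=14)
        self.temporal_encoder = nn.Conv3d(3, temporal_dim, kernel_size=(4, 14, 14), stride=(4, 14, 14))
        # The two projection modules that map visual features into the language space.
        self.spatial_proj = nn.Linear(spatial_dim, llm_dim)
        self.temporal_proj = nn.Linear(temporal_dim, llm_dim)

    def forward(self, video):                                     # video: (B, T, 3, H, W)
        b, t, c, h, w = video.shape
        s = self.spatial_encoder(video.reshape(b * t, c, h, w))   # (B*T, Cs, h', w')
        s = s.flatten(2).transpose(1, 2)                          # (B*T, N, Cs)
        s = s.reshape(b, t * s.shape[1], -1)                      # (B, T*N, Cs)
        m = self.temporal_encoder(video.permute(0, 2, 1, 3, 4))   # (B, Ct, T', h', w')
        m = m.flatten(2).transpose(1, 2)                          # (B, N', Ct)
        # Concatenated visual tokens, ready to be interleaved with text tokens.
        return torch.cat([self.spatial_proj(s), self.temporal_proj(m)], dim=1)

tokens = SpatioTemporalTokenizer()(torch.randn(1, 8, 3, 224, 224))
print(tokens.shape)  # torch.Size([1, 2560, 4096])
```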
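Stepping back to the subjective-score processing of Sec. 3.3, the normalization reduces to a few lines of NumPy. The sketch below is an illustrative re-implementation, not the authors' annotation code, and assumes a dense rating matrix in which every subject rated every video; in the paper, each subject's mean and standard deviation are computed only over the N_i videos that subject actually judged.

```python
# Sketch of the Sec. 3.3 MOS computation under the dense-matrix assumption.
import numpy as np

def compute_mos(ratings):
    """ratings: (M subjects, N videos) array of raw 1-5 Likert scores."""
    mu = ratings.mean(axis=1, keepdims=True)            # per-subject mean mu_i
    sigma = ratings.std(axis=1, ddof=1, keepdims=True)  # per-subject std with N-1
    z = (ratings - mu) / sigma                          # z-scores z_ij
    z_rescaled = 100 * (z + 3) / 6                      # rescale roughly into [0, 100]
    return z_rescaled.mean(axis=0)                      # MOS_j: average over subjects

mos = compute_mos(np.random.randint(1, 6, size=(20, 576)).astype(float))
print(mos.shape)  # (576,)
```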
+ }, + { + "type": "text", + "bbox": [ + 0.512, + 0.871, + 0.908, + 0.902 + ], + "angle": 0, + "content": "Spatiotemporal Projection Module. Once the spatial and temporal features are extracted, they are projected into a" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "18873" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.09, + 0.076, + 0.907, + 0.103 + ], + "angle": 0, + "content": "Table 3. Performance comparisons of the state-of-the-art quality evaluation methods on the AIGVQA-DB from four perspectives. The best performance results are marked in RED and the second-best performance results are marked in BLUE." + }, + { + "type": "table", + "bbox": [ + 0.093, + 0.105, + 0.907, + 0.46 + ], + "angle": 0, + "content": "
<table><tr><td>Dimension</td><td colspan="4">Static Quality</td><td colspan="4">Temporal Smoothness</td><td colspan="4">Dynamic Degree</td><td colspan="4">TV Correspondence</td></tr>
<tr><td>Methods / Metrics</td><td>Pair Acc</td><td>SRCC</td><td>PLCC</td><td>KRCC</td><td>Pair Acc</td><td>SRCC</td><td>PLCC</td><td>KRCC</td><td>Pair Acc</td><td>SRCC</td><td>PLCC</td><td>KRCC</td><td>Pair Acc</td><td>SRCC</td><td>PLCC</td><td>KRCC</td></tr>
<tr><td>NIQE [49]</td><td>54.32%</td><td>0.0867</td><td>0.1626</td><td>0.0615</td><td>52.67%</td><td>0.0641</td><td>0.1152</td><td>0.0451</td><td>45.64%</td><td>0.1765</td><td>0.2448</td><td>0.1194</td><td>46.99%</td><td>0.1771</td><td>0.2231</td><td>0.1193</td></tr>
<tr><td>QAC [80]</td><td>49.96%</td><td>0.1022</td><td>0.1363</td><td>0.0680</td><td>54.90%</td><td>0.1633</td><td>0.2039</td><td>0.1105</td><td>54.72%</td><td>0.0448</td><td>0.0427</td><td>0.0295</td><td>54.48%</td><td>0.0303</td><td>0.0197</td><td>0.2233</td></tr>
<tr><td>BRISQUE [48]</td><td>59.98%</td><td>0.2909</td><td>0.2443</td><td>0.1969</td><td>55.67%</td><td>0.2325</td><td>0.1569</td><td>0.1553</td><td>44.60%</td><td>0.1351</td><td>0.0959</td><td>0.0893</td><td>51.02%</td><td>0.1294</td><td>0.1017</td><td>0.0869</td></tr>
<tr><td>BPRI [46]</td><td>52.28%</td><td>0.2181</td><td>0.1723</td><td>0.1398</td><td>47.26%</td><td>0.1766</td><td>0.0880</td><td>0.1138</td><td>46.83%</td><td>0.1956</td><td>0.1688</td><td>0.1329</td><td>49.13%</td><td>0.1569</td><td>0.1548</td><td>0.1052</td></tr>
<tr><td>HOSA [77]</td><td>61.54%</td><td>0.2420</td><td>0.2106</td><td>0.1643</td><td>57.31%</td><td>0.2311</td><td>0.1757</td><td>0.1559</td><td>44.97%</td><td>0.0755</td><td>0.0449</td><td>0.0496</td><td>52.23%</td><td>0.1645</td><td>0.1324</td><td>0.1097</td></tr>
<tr><td>BMPRI [47]</td><td>53.71%</td><td>0.1690</td><td>0.1481</td><td>0.1075</td><td>49.31%</td><td>0.1434</td><td>0.0844</td><td>0.0894</td><td>45.07%</td><td>0.1153</td><td>0.0925</td><td>0.0777</td><td>48.43%</td><td>0.1567</td><td>0.1500</td><td>0.1041</td></tr>
<tr><td>V-Dynamic [25]</td><td>51.34%</td><td>0.0768</td><td>0.0792</td><td>0.0494</td><td>31.91%</td><td>0.3713</td><td>0.4871</td><td>0.2557</td><td>53.11%</td><td>0.1466</td><td>0.0253</td><td>0.0988</td><td>46.96%</td><td>0.0405</td><td>0.0576</td><td>0.0223</td></tr>
<tr><td>V-Smoothness [25]</td><td>61.63%</td><td>0.6748</td><td>0.4506</td><td>0.4590</td><td>76.59%</td><td>0.8526</td><td>0.8313</td><td>0.6533</td><td>47.63%</td><td>0.2446</td><td>0.2328</td><td>0.1580</td><td>61.28%</td><td>0.3188</td><td>0.3073</td><td>0.2214</td></tr>
<tr><td>CLIPScore [22]</td><td>47.09%</td><td>0.0731</td><td>0.0816</td><td>0.0473</td><td>46.33%</td><td>0.0423</td><td>0.0334</td><td>0.0271</td><td>52.99%</td><td>0.0675</td><td>0.0835</td><td>0.0439</td><td>55.62%</td><td>0.1519</td><td>0.1731</td><td>0.1014</td></tr>
<tr><td>BLIPScore [37]</td><td>53.24%</td><td>0.0492</td><td>0.0421</td><td>0.0330</td><td>53.07%</td><td>0.0659</td><td>0.0487</td><td>0.0437</td><td>53.03%</td><td>0.1786</td><td>0.1904</td><td>0.1205</td><td>61.53%</td><td>0.1813</td><td>0.1896</td><td>0.1219</td></tr>
<tr><td>AestheticScore [53]</td><td>70.24%</td><td>0.6713</td><td>0.6959</td><td>0.4784</td><td>54.82%</td><td>0.5154</td><td>0.4946</td><td>0.3484</td><td>52.96%</td><td>0.2295</td><td>0.2322</td><td>0.1527</td><td>59.64%</td><td>0.2381</td><td>0.2440</td><td>0.1602</td></tr>
<tr><td>ImageReward [78]</td><td>56.69%</td><td>0.2606</td><td>0.2646</td><td>0.1749</td><td>54.09%</td><td>0.2382</td><td>0.2305</td><td>0.1600</td><td>53.90%</td><td>0.1840</td><td>0.1836</td><td>0.1237</td><td>63.97%</td><td>0.2311</td><td>0.2450</td><td>0.1568</td></tr>
<tr><td>UMTScore [43]</td><td>48.93%</td><td>0.0168</td><td>0.0199</td><td>0.0117</td><td>49.93%</td><td>0.0302</td><td>0.0370</td><td>0.0207</td><td>52.69%</td><td>0.0168</td><td>0.0198</td><td>0.0117</td><td>53.82%</td><td>0.0172</td><td>0.0065</td><td>0.0108</td></tr>
<tr><td>Video-LLaVA [40]</td><td>50.90%</td><td>0.0384</td><td>0.0513</td><td>0.0297</td><td>50.36%</td><td>0.0431</td><td>0.0281</td><td>0.0347</td><td>50.34%</td><td>0.1561</td><td>0.1436</td><td>0.1176</td><td>50.54%</td><td>0.1364</td><td>0.1051</td><td>0.1009</td></tr>
<tr><td>Video-ChatGPT [45]</td><td>51.20%</td><td>0.1242</td><td>0.1587</td><td>0.0940</td><td>50.16%</td><td>0.0580</td><td>0.0533</td><td>0.0453</td><td>50.47%</td><td>0.0724</td><td>0.0436</td><td>0.0563</td><td>50.07%</td><td>0.0357</td><td>0.0124</td><td>0.0274</td></tr>
<tr><td>LLaVA-NeXT [36]</td><td>52.85%</td><td>0.1239</td><td>0.1625</td><td>0.0954</td><td>52.41%</td><td>0.4021</td><td>0.3722</td><td>0.3052</td><td>51.84%</td><td>0.1767</td><td>0.1655</td><td>0.1328</td><td>59.20%</td><td>0.4116</td><td>0.3428</td><td>0.3261</td></tr>
<tr><td>VideoLLaMA2 [14]</td><td>52.73%</td><td>0.2643</td><td>0.3271</td><td>0.1928</td><td>52.27%</td><td>0.3608</td><td>0.2450</td><td>0.2696</td><td>50.78%</td><td>0.1900</td><td>0.1561</td><td>0.1379</td><td>54.25%</td><td>0.1656</td><td>0.1633</td><td>0.1210</td></tr>
<tr><td>Qwen2-VL [66]</td><td>56.50%</td><td>0.4922</td><td>0.5291</td><td>0.3838</td><td>49.12%</td><td>0.1681</td><td>0.4219</td><td>0.1233</td><td>52.08%</td><td>0.1122</td><td>0.1335</td><td>0.0849</td><td>53.30%</td><td>0.3111</td><td>0.2775</td><td>0.2306</td></tr>
<tr><td>HyperIQA [56]</td><td>68.30%</td><td>0.7931</td><td>0.8093</td><td>0.5969</td><td>54.65%</td><td>0.7426</td><td>0.6630</td><td>0.5407</td><td>53.32%</td><td>0.2103</td><td>0.2100</td><td>0.1384</td><td>57.54%</td><td>0.6226</td><td>0.6250</td><td>0.4432</td></tr>
<tr><td>MUSIQ [26]</td><td>66.46%</td><td>0.7880</td><td>0.8044</td><td>0.5773</td><td>55.16%</td><td>0.7199</td><td>0.6920</td><td>0.5034</td><td>52.85%</td><td>0.5206</td><td>0.4846</td><td>0.3521</td><td>58.46%</td><td>0.4125</td><td>0.4093</td><td>0.2844</td></tr>
<tr><td>LIQE [83]</td><td>63.86%</td><td>0.8776</td><td>0.8691</td><td>0.7008</td><td>55.84%</td><td>0.7935</td><td>0.7720</td><td>0.6084</td><td>49.02%</td><td>0.5303</td><td>0.5840</td><td>0.3837</td><td>55.10%</td><td>0.3862</td><td>0.3639</td><td>0.2640</td></tr>
<tr><td>VSFA [35]</td><td>46.43%</td><td>0.3365</td><td>0.3421</td><td>0.2268</td><td>50.95%</td><td>0.3317</td><td>0.3273</td><td>0.2202</td><td>51.46%</td><td>0.1201</td><td>0.1362</td><td>0.0815</td><td>48.07%</td><td>0.1024</td><td>0.1064</td><td>0.0666</td></tr>
<tr><td>BVQA [33]</td><td>29.98%</td><td>0.4594</td><td>0.4701</td><td>0.3268</td><td>37.65%</td><td>0.3704</td><td>0.3819</td><td>0.2507</td><td>55.08%</td><td>0.4594</td><td>0.4701</td><td>0.3268</td><td>42.32%</td><td>0.3720</td><td>0.3978</td><td>0.2559</td></tr>
<tr><td>simpleVQA [57]</td><td>68.12%</td><td>0.8355</td><td>0.6438</td><td>0.8489</td><td>54.14%</td><td>0.7082</td><td>0.7008</td><td>0.4978</td><td>53.08%</td><td>0.4671</td><td>0.3160</td><td>0.3994</td><td>58.20%</td><td>0.4643</td><td>0.5440</td><td>0.3163</td></tr>
<tr><td>FAST-VQA [70]</td><td>70.64%</td><td>0.8738</td><td>0.8644</td><td>0.6860</td><td>62.93%</td><td>0.9036</td><td>0.9134</td><td>0.7166</td><td>54.34%</td><td>0.5603</td><td>0.5703</td><td>0.3895</td><td>65.05%</td><td>0.6875</td><td>0.6704</td><td>0.4978</td></tr>
<tr><td>DOVER [71]</td><td>72.92%</td><td>0.8907</td><td>0.8895</td><td>0.7004</td><td>58.83%</td><td>0.9063</td><td>0.9195</td><td>0.7187</td><td>53.16%</td><td>0.5549</td><td>0.5489</td><td>0.3800</td><td>62.35%</td><td>0.6783</td><td>0.6802</td><td>0.4969</td></tr>
<tr><td>Q-Align [72]</td><td>71.86%</td><td>0.8516</td><td>0.8383</td><td>0.6641</td><td>57.95%</td><td>0.8116</td><td>0.7025</td><td>0.6195</td><td>53.71%</td><td>0.5655</td><td>0.5012</td><td>0.3950</td><td>62.91%</td><td>0.5542</td><td>0.5647</td><td>0.3870</td></tr>
<tr><td>AIGV-Assessor (Ours)</td><td>79.83%</td><td>0.9162</td><td>0.9190</td><td>0.7576</td><td>76.60%</td><td>0.9232</td><td>0.9216</td><td>0.8038</td><td>60.30%</td><td>0.6093</td><td>0.6082</td><td>0.4435</td><td>70.32%</td><td>0.7500</td><td>0.7697</td><td>0.5591</td></tr>
<tr><td>Improvement</td><td>+6.9%</td><td>+2.7%</td><td>+3.0%</td><td>+5.7%</td><td>+13.7%</td><td>+1.7%</td><td>+0.2%</td><td>+8.5%</td><td>+5.2%</td><td>+4.4%</td><td>+3.8%</td><td>+4.4%</td><td>+5.3%</td><td>+6.3%</td><td>+9.9%</td><td>+6.13%</td></tr></table>
" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.472, + 0.483, + 0.562 + ], + "angle": 0, + "content": "shared feature space for alignment with text-based queries. This is done through two projection modules that map the spatial and temporal visual features respectively into the language space. The mapped visual tokens are aligned with text tokens, enabling the model to query the video content in a multimodal fashion." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.569, + 0.483, + 0.78 + ], + "angle": 0, + "content": "Feature Fusion and Quality Regression. We apply LLM (InternVL2-8B [69]) to combine the visual tokens and user-provided quality prompts to perform the following tasks: (1) Quality level descriptions: the model generates a descriptive quality level evaluation of the input video, such as \"The static quality of the video is (bad, poor, fair, good, excellent).\" This initial categorization provides a preliminary classification of the video's quality, which is beneficial for subsequent quality regression tasks. By obtaining a rough quality level, the model can more accurately predict numerical scores in later evaluations. (2) Regression score output: the model uses the final hidden states from the LLM to perform a regression task, outputting numerical quality scores for the video from four different dimensions." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.79, + 0.391, + 0.807 + ], + "angle": 0, + "content": "4.2. Training and Fine-tuning Strategy" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.811, + 0.483, + 0.902 + ], + "angle": 0, + "content": "The training process of AIGV-Assessor follows a three-stage approach to ensure high-quality video assessment with quality level prediction, individual quality scoring, and pairwise preference comparison capabilities. This process includes: (1) training the spatial and temporal projectors to align visual and language features, (2) fine-tuning the vision" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.472, + 0.907, + 0.548 + ], + "angle": 0, + "content": "encoder and LLM with LoRA [24], and training the quality regression module to generate accurate quality scores, (3) incorporating pairwise comparison training using the pair-comparison subset with a pairwise loss function for robust video quality comparison." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.555, + 0.906, + 0.646 + ], + "angle": 0, + "content": "Spatiotemporal Projector Training. The first stage focuses on training the spatial and temporal projectors to extract meaningful spatiotemporal visual features and map them into the language space. Through this process, the LLM is able to produce the quality level descriptions i.e., bad, poor, fair, good, excellent." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.652, + 0.907, + 0.803 + ], + "angle": 0, + "content": "Quality Regression Fine-tuning. Once the model can generate coherent descriptions of video quality level, the second stage focuses on fine-tuning the quality regression module. The goal here is to enable the model to output stable and precise numerical quality scores (MOS-like predictions). The quality regression model takes the last-hidden-state features from LLM as input and generates quality scores from four perspectives. The training objective uses an L1 loss function to minimize the difference between the predicted quality score and the groundtruth MOS." 
+ }, + { + "type": "text", + "bbox": [ + 0.512, + 0.811, + 0.909, + 0.903 + ], + "angle": 0, + "content": "Pairwise Comparison Fine-tuning. The third stage mainly focuses on integrating the pairwise comparison into the training pipeline. As shown in Figure 5(b), two input video pairs share network weights within the same batch. We design a judge network inspired by LPIPS [82] to determine which video performs better. This network leverages" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.519, + 0.956 + ], + "angle": 0, + "content": "18874" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.093, + 0.083, + 0.482, + 0.097 + ], + "angle": 0, + "content": "Table 4. Performance comparisons on LGVQ [84] and FETV [43]." + }, + { + "type": "table", + "bbox": [ + 0.095, + 0.097, + 0.487, + 0.358 + ], + "angle": 0, + "content": "
<table><tr><td rowspan="2">Aspects</td><td rowspan="2">Methods</td><td colspan="3">LGVQ</td><td colspan="3">FETV</td></tr>
<tr><td>SRCC</td><td>PLCC</td><td>KRCC</td><td>SRCC</td><td>PLCC</td><td>KRCC</td></tr>
<tr><td rowspan="7">Spatial</td><td>MUSIQ [26]</td><td>0.669</td><td>0.682</td><td>0.491</td><td>0.722</td><td>0.758</td><td>0.613</td></tr>
<tr><td>StairIQA [59]</td><td>0.701</td><td>0.737</td><td>0.521</td><td>0.806</td><td>0.812</td><td>0.643</td></tr>
<tr><td>CLIP-IQA [62]</td><td>0.684</td><td>0.709</td><td>0.502</td><td>0.741</td><td>0.767</td><td>0.619</td></tr>
<tr><td>LIQE [83]</td><td>0.721</td><td>0.752</td><td>0.538</td><td>0.765</td><td>0.799</td><td>0.635</td></tr>
<tr><td>UGVQ [84]</td><td>0.759</td><td>0.795</td><td>0.567</td><td>0.841</td><td>0.841</td><td>0.685</td></tr>
<tr><td>AIGV-Assessor (Ours)</td><td>0.803</td><td>0.819</td><td>0.617</td><td>0.853</td><td>0.856</td><td>0.699</td></tr>
<tr><td>Improvement</td><td>+4.4%</td><td>+2.4%</td><td>+5.0%</td><td>+1.2%</td><td>+1.5%</td><td>+1.4%</td></tr>
<tr><td rowspan="7">Temporal</td><td>VSFA [35]</td><td>0.841</td><td>0.857</td><td>0.643</td><td>0.839</td><td>0.859</td><td>0.705</td></tr>
<tr><td>SimpleVQA [57]</td><td>0.857</td><td>0.867</td><td>0.659</td><td>0.852</td><td>0.862</td><td>0.726</td></tr>
<tr><td>FastVQA [70]</td><td>0.849</td><td>0.843</td><td>0.647</td><td>0.842</td><td>0.847</td><td>0.714</td></tr>
<tr><td>DOVER [71]</td><td>0.867</td><td>0.878</td><td>0.672</td><td>0.868</td><td>0.881</td><td>0.731</td></tr>
<tr><td>UGVQ [84]</td><td>0.893</td><td>0.907</td><td>0.703</td><td>0.897</td><td>0.907</td><td>0.753</td></tr>
<tr><td>AIGV-Assessor (Ours)</td><td>0.900</td><td>0.920</td><td>0.717</td><td>0.936</td><td>0.940</td><td>0.815</td></tr>
<tr><td>Improvement</td><td>+0.7%</td><td>+1.3%</td><td>+1.4%</td><td>+3.9%</td><td>+3.3%</td><td>+6.2%</td></tr>
<tr><td rowspan="8">Alignment</td><td>CLIPScore [22]</td><td>0.446</td><td>0.453</td><td>0.301</td><td>0.607</td><td>0.633</td><td>0.498</td></tr>
<tr><td>BLIPScore [37]</td><td>0.455</td><td>0.464</td><td>0.319</td><td>0.616</td><td>0.645</td><td>0.505</td></tr>
<tr><td>ImageReward [78]</td><td>0.498</td><td>0.499</td><td>0.344</td><td>0.657</td><td>0.687</td><td>0.519</td></tr>
<tr><td>PickScore [28]</td><td>0.501</td><td>0.515</td><td>0.353</td><td>0.669</td><td>0.708</td><td>0.533</td></tr>
<tr><td>HPSv2 [74]</td><td>0.504</td><td>0.511</td><td>0.357</td><td>0.686</td><td>0.703</td><td>0.540</td></tr>
<tr><td>UGVQ [84]</td><td>0.551</td><td>0.555</td><td>0.394</td><td>0.734</td><td>0.737</td><td>0.572</td></tr>
<tr><td>AIGV-Assessor (Ours)</td><td>0.577</td><td>0.578</td><td>0.411</td><td>0.753</td><td>0.746</td><td>0.585</td></tr>
<tr><td>Improvement</td><td>+2.6%</td><td>+2.3%</td><td>+1.7%</td><td>+1.9%</td><td>+0.9%</td><td>+1.3%</td></tr></table>
" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.372, + 0.483, + 0.418 + ], + "angle": 0, + "content": "learned features and evaluates the perceptual differences between the two videos, allowing more reliable quality assessments in video pair comparison." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.422, + 0.484, + 0.529 + ], + "angle": 0, + "content": "Loss Function. In the first stage, the spatial and temporal projectors are trained to align visual and language features using language loss. The second stage refines the vision encoder, LLM, and quality regression module's scoring ability with an L1 loss. The third stage incorporates pairwise comparison training with cross-entropy loss to improve the model's performance on relative quality evaluation." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.541, + 0.224, + 0.558 + ], + "angle": 0, + "content": "5. Experiments" + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.566, + 0.284, + 0.584 + ], + "angle": 0, + "content": "5.1. Experiment Settings" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.593, + 0.484, + 0.731 + ], + "angle": 0, + "content": "Evaluation Datasets and Metrics. Our proposed method is validated on five AIGVQA datasets: AIGVQA-DB, LGVQ [84], FETV [43], T2VQA [31], and GAIA [13]. To evaluate the correlation between the predicted scores and the ground-truth MOSs, we utilize three evaluation criteria: Spearman Rank Correlation Coefficient (SRCC), Pearson Linear Correlation Coefficient (PLCC), and Kendall's Rank Correlation Coefficient (KRCC). For pair comparison, we adopt the comparison accuracy as the metric." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.735, + 0.484, + 0.903 + ], + "angle": 0, + "content": "Reference Algorithms. To assess the performance of our proposed method, we select state-of-the-art evaluation metrics for comparison, which can be classified into five groups: (1) Handcrafted-based I/VQA models, including: NIQE [49], BRISQUE [48], QAC [80], BMPRI [47], HOSA [77], BPRI [46], HIGRADE [32], etc. (2) Action-related evaluation models, including: V-Dynamic [25], V-Smoothness [25] which are proposed in VBench [25]. (3) Vision-language pre-training models, including: CLIPScore [22], BLIPScore [37], AestheticScore [53], ImageReward [78], and UMTScore [43]. (4) LLM-based models, in-" + }, + { + "type": "table_caption", + "bbox": [ + 0.542, + 0.083, + 0.878, + 0.097 + ], + "angle": 0, + "content": "Table 5. Performance comparisons on T2VQA-DB [31]." + }, + { + "type": "table", + "bbox": [ + 0.518, + 0.097, + 0.908, + 0.248 + ], + "angle": 0, + "content": "
<table><tr><td rowspan="2">Aspects</td><td rowspan="2">Methods</td><td colspan="3">T2VQA-DB</td><td colspan="3">Sora Testing</td></tr>
<tr><td>SRCC</td><td>PLCC</td><td>KRCC</td><td>SRCC</td><td>PLCC</td><td>KRCC</td></tr>
<tr><td rowspan="4">zero-shot</td><td>CLIPScore [22]</td><td>0.1047</td><td>0.1277</td><td>0.0702</td><td>0.2116</td><td>0.1538</td><td>0.1406</td></tr>
<tr><td>BLIPScore [37]</td><td>0.1659</td><td>0.1860</td><td>0.1112</td><td>0.2116</td><td>0.1038</td><td>0.1515</td></tr>
<tr><td>ImageReward [78]</td><td>0.1875</td><td>0.2121</td><td>0.1266</td><td>0.0992</td><td>0.0415</td><td>0.0748</td></tr>
<tr><td>UMTScore [43]</td><td>0.0676</td><td>0.0721</td><td>0.0453</td><td>0.2594</td><td>0.0840</td><td>0.1680</td></tr>
<tr><td rowspan="7">finetuned</td><td>SimpleVQA [57]</td><td>0.6275</td><td>0.6388</td><td>0.4466</td><td>0.0340</td><td>0.2344</td><td>0.0237</td></tr>
<tr><td>BVQA [33]</td><td>0.7390</td><td>0.7486</td><td>0.5487</td><td>0.4235</td><td>0.2489</td><td>0.2635</td></tr>
<tr><td>FAST-VQA [70]</td><td>0.7173</td><td>0.7295</td><td>0.5303</td><td>0.4301</td><td>0.2369</td><td>0.2939</td></tr>
<tr><td>DOVER [71]</td><td>0.7609</td><td>0.7693</td><td>0.5704</td><td>0.4421</td><td>0.2689</td><td>0.2757</td></tr>
<tr><td>T2VQA [31]</td><td>0.7965</td><td>0.8066</td><td>0.6058</td><td>0.6485</td><td>0.3124</td><td>0.4874</td></tr>
<tr><td>AIGV-Assessor (Ours)</td><td>0.8131</td><td>0.8222</td><td>0.6364</td><td>0.6612</td><td>0.3318</td><td>0.5075</td></tr>
<tr><td>Improvement</td><td>+1.7%</td><td>+1.6%</td><td>+3.1%</td><td>+1.3%</td><td>+1.9%</td><td>+2.0%</td></tr></table>
" + }, + { + "type": "table_caption", + "bbox": [ + 0.56, + 0.259, + 0.859, + 0.272 + ], + "angle": 0, + "content": "Table 6. Performance comparisons on GAIA [13]." + }, + { + "type": "table", + "bbox": [ + 0.517, + 0.273, + 0.905, + 0.49 + ], + "angle": 0, + "content": "
<table><tr><td>Dimension</td><td colspan="2">Subject</td><td colspan="2">Completeness</td><td colspan="2">Interaction</td></tr>
<tr><td>Methods / Metrics</td><td>SRCC</td><td>PLCC</td><td>SRCC</td><td>PLCC</td><td>SRCC</td><td>PLCC</td></tr>
<tr><td>V-Smoothness [25]</td><td>0.2402</td><td>0.1913</td><td>0.1474</td><td>0.1625</td><td>0.1741</td><td>0.1693</td></tr>
<tr><td>V-Dynamic [25]</td><td>0.1285</td><td>0.0831</td><td>0.0903</td><td>0.0682</td><td>0.1141</td><td>0.0758</td></tr>
<tr><td>Action-Score [42]</td><td>0.2023</td><td>0.1823</td><td>0.2867</td><td>0.2623</td><td>0.2689</td><td>0.2432</td></tr>
<tr><td>Flow-Score [42]</td><td>0.1471</td><td>0.1541</td><td>0.0816</td><td>0.1273</td><td>0.1041</td><td>0.1309</td></tr>
<tr><td>CLIPScore [22]</td><td>0.3398</td><td>0.3330</td><td>0.3944</td><td>0.3871</td><td>0.3875</td><td>0.3821</td></tr>
<tr><td>BLIPScore [37]</td><td>0.3453</td><td>0.3386</td><td>0.4174</td><td>0.4082</td><td>0.4044</td><td>0.3994</td></tr>
<tr><td>LLaVAScore [41]</td><td>0.3484</td><td>0.3436</td><td>0.4189</td><td>0.4133</td><td>0.4077</td><td>0.4025</td></tr>
<tr><td>TLVQM [30]</td><td>0.5037</td><td>0.5137</td><td>0.4127</td><td>0.4158</td><td>0.4079</td><td>0.4093</td></tr>
<tr><td>VIDEVAL [60]</td><td>0.5237</td><td>0.5446</td><td>0.4283</td><td>0.4375</td><td>0.4121</td><td>0.4234</td></tr>
<tr><td>VSFA [35]</td><td>0.5594</td><td>0.5762</td><td>0.4940</td><td>0.5017</td><td>0.4709</td><td>0.4811</td></tr>
<tr><td>BVQA [33]</td><td>0.5702</td><td>0.5888</td><td>0.4876</td><td>0.4946</td><td>0.4761</td><td>0.4825</td></tr>
<tr><td>SimpleVQA [58]</td><td>0.5920</td><td>0.5974</td><td>0.4981</td><td>0.5078</td><td>0.4843</td><td>0.4971</td></tr>
<tr><td>FAST-VQA [70]</td><td>0.6015</td><td>0.6092</td><td>0.5157</td><td>0.5215</td><td>0.5154</td><td>0.5216</td></tr>
<tr><td>DOVER [71]</td><td>0.6173</td><td>0.6301</td><td>0.5198</td><td>0.5323</td><td>0.5164</td><td>0.5278</td></tr>
<tr><td>AIGV-Assessor (Ours)</td><td>0.6842</td><td>0.6897</td><td>0.6635</td><td>0.6694</td><td>0.6329</td><td>0.6340</td></tr>
<tr><td>Improvement</td><td>+6.7%</td><td>+6.0%</td><td>+14.4%</td><td>+13.7%</td><td>+11.65%</td><td>+10.6%</td></tr></table>
" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.504, + 0.906, + 0.595 + ], + "angle": 0, + "content": "cluding: Video-LLaVA [40], Video-ChatGPT [45], LLaVA-NeXT [36], VideoLLaMA2 [14], and Qwen2-VL [66]. (5) Deep learning-based I/VQA models, including: HyperIQA [56], MUSIQ [26], LIQE [83], VSFA [35], BVQA [33], SimpleVQA [58], FAST-VQA [70], DOVER [71], and Q-Align [72]." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.6, + 0.907, + 0.812 + ], + "angle": 0, + "content": "Training Settings. Traditional handcrafted models are directly evaluated on the corresponding databases, and the average score of all frames is calculated. For vision-language pre-training and LLM-based models, we load the pre-trained weights for inference. CLIPscore [22], BLIP-score [37], and other vision-language pre-training models are calculated directly as the average cosine similarity between text and each video frame. SimpleVQA [58], BVQA [33], FAST-VQA [70], DOVER [71], and Q-Align [72] are fine-tuned on every test dataset. For deep learning-based IQA and VQA models, all experiments for each method are retrained on each dimension using the same training and testing split as the previous literature at a ratio of 4:1. All results are averaged after ten random splits." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.821, + 0.71, + 0.837 + ], + "angle": 0, + "content": "5.2. Results and Analysis" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.841, + 0.906, + 0.901 + ], + "angle": 0, + "content": "Table 3 presents the pairwise win rates and the score prediction correlation between predicted results and human ground truths. The results indicate that handcrafted-based methods consistently underperform across all four evalu-" + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "18875" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.092, + 0.058, + 0.912, + 0.186 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.186, + 0.908, + 0.215 + ], + "angle": 0, + "content": "Figure 6. Comparison of win rates of different generation models across four dimensions evaluated by different VQA methods, demonstrating our AIGV-Assessor has better win-rate evaluation ability aligned with Ground Truth (GT)." + }, + { + "type": "table_caption", + "bbox": [ + 0.306, + 0.225, + 0.692, + 0.239 + ], + "angle": 0, + "content": "Table 7. Ablation study of the proposed AIGV-Assessor method." + }, + { + "type": "table", + "bbox": [ + 0.093, + 0.239, + 0.905, + 0.346 + ], + "angle": 0, + "content": "
<table><tr><td rowspan="2">No.</td><td colspan="4">Feature & Strategy</td><td colspan="3">Static Quality</td><td colspan="3">Temporal Smoothness</td><td colspan="3">Dynamic Degree</td><td colspan="3">T2V Correspondence</td></tr>
<tr><td>spatial</td><td>temporal</td><td>quality level</td><td>LoRA finetuning</td><td>SRCC</td><td>PLCC</td><td>KRCC</td><td>SRCC</td><td>PLCC</td><td>KRCC</td><td>SRCC</td><td>PLCC</td><td>KRCC</td><td>SRCC</td><td>PLCC</td><td>KRCC</td></tr>
<tr><td>(1)</td><td></td><td></td><td></td><td></td><td>0.864</td><td>0.866</td><td>0.726</td><td>0.870</td><td>0.868</td><td>0.727</td><td>0.556</td><td>0.572</td><td>0.432</td><td>0.616</td><td>0.620</td><td>0.492</td></tr>
<tr><td>(2)</td><td></td><td></td><td></td><td></td><td>0.874</td><td>0.876</td><td>0.723</td><td>0.875</td><td>0.876</td><td>0.736</td><td>0.558</td><td>0.573</td><td>0.431</td><td>0.723</td><td>0.734</td><td>0.533</td></tr>
<tr><td>(3)</td><td></td><td></td><td></td><td></td><td>0.887</td><td>0.884</td><td>0.722</td><td>0.881</td><td>0.883</td><td>0.706</td><td>0.562</td><td>0.575</td><td>0.433</td><td>0.739</td><td>0.758</td><td>0.544</td></tr>
<tr><td>(4)</td><td></td><td></td><td></td><td></td><td>0.887</td><td>0.888</td><td>0.753</td><td>0.917</td><td>0.910</td><td>0.796</td><td>0.569</td><td>0.536</td><td>0.438</td><td>0.688</td><td>0.673</td><td>0.557</td></tr>
<tr><td>(5)</td><td></td><td></td><td></td><td></td><td>0.905</td><td>0.908</td><td>0.754</td><td>0.919</td><td>0.917</td><td>0.799</td><td>0.589</td><td>0.587</td><td>0.441</td><td>0.742</td><td>0.763</td><td>0.549</td></tr>
<tr><td>(6)</td><td></td><td></td><td></td><td></td><td>0.916</td><td>0.919</td><td>0.758</td><td>0.923</td><td>0.922</td><td>0.804</td><td>0.609</td><td>0.608</td><td>0.444</td><td>0.750</td><td>0.770</td><td>0.559</td></tr></table>
" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.357, + 0.483, + 0.597 + ], + "angle": 0, + "content": "ation perspectives. Vision-language pre-training methods such as CLIPscore [22] and BLIPscore [37] demonstrate moderate performance but are still surpassed by more specialized and fine-tuned VQA models. Specifically, deep learning-based models like FAST-VQA [70] and DOVER [71] achieve more competitive performances after fin-tuning. However, they are still far away from satisfactory. Notably, most VQA models perform better on quality evaluation than on text-video correspondence, as they lack text prompts input used in video generation, making it challenging to extract relation features from the AI-generated videos, which inevitably leads to the performance drop. Finally, the performance exploration of recent LMMs on our database shows that current LMMs are able to produce meaningful evaluations, which can motivate future works to further explore the use of LMMs for AIGV assessment." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.599, + 0.483, + 0.795 + ], + "angle": 0, + "content": "The proposed AIGV-Assessor achieves the best performance compared to the competitors for both MOS prediction and pair ranking tasks in terms of all four dimensions. To further validate the effectiveness and generalizability of our proposed model, we also evaluate it on four other AIGVQA datasets [13, 31, 43, 84]. From Tables 4-6, we observe that AIGV-Assessor consistently achieves the best performance across these datasets. As shown in Figure 6, AIGV-Assessor achieves the highest overlap in area with Ground Truth (GT), indicating that AIGV-Assessor can reliably perform T2V model benchmarking, outperforming other assessment models in discerning quality differences in AI-generated videos." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.803, + 0.245, + 0.82 + ], + "angle": 0, + "content": "5.3. Ablation Study" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.825, + 0.484, + 0.902 + ], + "angle": 0, + "content": "We conduct ablation experiments to verify the effectiveness of the main components in our AIGV-Assessor method, including the spatial feature, the temporal feature, the quality level, and the LoRA finetuning strategy. Additionally, we assess how each feature contributes to the performance" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.357, + 0.908, + 0.552 + ], + "angle": 0, + "content": "across different quality dimensions. The results of these experiments are summarized in Table 7. Experiments (1), (2), and (3) validate the effectiveness of the quality regression module and the LoRA finetuning strategy, confirming that fine-tuning and quality regression significantly enhance model performance over only regressing the generated text outputs from the LLM. The addition of temporal features, as seen in Experiments (4), (5), and (6), significantly improves model performance. Experiment (6), which integrates all components, yields the best overall performance, showing that the combination of spatial and temporal features, quality level prediction, and LoRA finetuning provides the most robust and accurate AIGV assessment." + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.582, + 0.634, + 0.599 + ], + "angle": 0, + "content": "6. Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.614, + 0.907, + 0.901 + ], + "angle": 0, + "content": "In this paper, we study the human visual preference evaluation problem for AIGVs. 
We first construct AIGVQA-DB, which includes 36,576 videos generated from 1,048 diverse text prompts, with the MOSs and pair comparisons evaluated from four perspectives. Our detailed manual evaluations reflect different aspects of human visual preferences on AIGVs and reveal critical insights into the strengths and weaknesses of various text-to-video models. Based on the database, we evaluate the performance of state-of-the-art quality evaluation models and establish a new benchmark, revealing their limitations in measuring the perceptual preference of AIGVs. Finally, we propose AIGV-Assessor, a novel VQA model that leverages the capabilities of LMMs to give quality levels, predict quality scores, and compare preferences from four dimensions. Extensive experiments demonstrate that AIGV-Assessor achieves state-of-the-art performance on both AIGVQA-DB and other AIGVQA benchmarks, validating its robustness in understanding and evaluating AI-generated videos." + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.52, + 0.957 + ], + "angle": 0, + "content": "18876" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.093, + 0.09, + 0.188, + 0.106 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.116, + 0.482, + 0.142 + ], + "angle": 0, + "content": "[1] Hotshot-XL. https://github.com/hotshotco/hotshot-xl, 2023. 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.146, + 0.483, + 0.171 + ], + "angle": 0, + "content": "[2] Floor33. https://discord.qq/EuB9KT6H, 2023. 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.175, + 0.418, + 0.187 + ], + "angle": 0, + "content": "[3] Genmo. https://www.genmo.ai, 2024. 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.19, + 0.482, + 0.216 + ], + "angle": 0, + "content": "[4] Gen2. https://research.runwayml.com/gen2, 2024. 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.219, + 0.458, + 0.233 + ], + "angle": 0, + "content": "[5] Moonvalley. https://moonvalley.ai, 2024. 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.236, + 0.482, + 0.26 + ], + "angle": 0, + "content": "[6] Morph studio. https://www.morphstudio.com, 2024. 2, 3, 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.265, + 0.482, + 0.304 + ], + "angle": 0, + "content": "[7] Sora. https://openai.com/research/video-generation-models-as-world-simulators, 2024. 2, 3, 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.307, + 0.482, + 0.375 + ], + "angle": 0, + "content": "[8] Max Bain, Arsha Nagrani, Gül Varol, and Andrew Zisserman. Frozen in time: A joint video and image encoder for end-to-end retrieval. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 1728-1738, 2021. 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.379, + 0.482, + 0.446 + ], + "angle": 0, + "content": "[9] Andreas Blattmann, Tim Dockhorn, Sumith Kulal, Daniel Mendelevitch, Maciej Kilian, Dominik Lorenz, Yam Levi, Zion English, Vikram Voleti, Adam Letts, et al. Stable video diffusion: Scaling latent video diffusion models to large datasets. arXiv preprint arXiv:2311.15127, 2023. 2, 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.449, + 0.482, + 0.504 + ], + "angle": 0, + "content": "[10] Yuqin Cao, Xiongkuo Min, Yixuan Gao, Wei Sun, Weisi Lin, and Guangtao Zhai. Unqa: Unified no-reference quality assessment for audio, image, video, and audio-visual content. arXiv preprint arXiv:2407.19704, 2024. 
1" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.506, + 0.482, + 0.56 + ], + "angle": 0, + "content": "[11] Yuqin Cao, Xiongkuo Min, Yixuan Gao, Wei Sun, and Guangtao Zhai. Agav-rater: Adapting large multimodal model for ai-generated audio-visual quality assessment. arXiv preprint arXiv:2501.18314, 2025. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.564, + 0.482, + 0.631 + ], + "angle": 0, + "content": "[12] Haoxin Chen, Menghan Xia, Yingqing He, Yong Zhang, Xiaodong Cun, Shaoshu Yang, Jinbo Xing, Yaofang Liu, Qifeng Chen, Xintao Wang, Chao Weng, and Ying Shan. Videocraft1: Open diffusion models for high-quality video generation. arXiv preprint arXiv:2310.19512, 2023. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.634, + 0.482, + 0.701 + ], + "angle": 0, + "content": "[13] Zijian Chen, Wei Sun, Yuan Tian, Jun Jia, Zicheng Zhang, Jiarui Wang, Ru Huang, Xiongkuo Min, Guangtao Zhai, and Wenjun Zhang. Gaia: Rethinking action quality assessment for ai-generated videos. arXiv preprint arXiv:2406.06087, 2024. 1, 3, 7, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.704, + 0.482, + 0.773 + ], + "angle": 0, + "content": "[14] Zesen Cheng, Sicong Leng, Hang Zhang, Yifei Xin, Xin Li, Guanzheng Chen, Yongxin Zhu, Wenqi Zhang, Ziyang Luo, Deli Zhao, and Lidong Bing. Videollama 2: Advancing spatial-temporal modeling and audio understanding in videolms. arXiv preprint arXiv:2406.07476, 2024. 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.776, + 0.482, + 0.829 + ], + "angle": 0, + "content": "[15] Iya Chivileva, Philip Lynch, Tomas E Ward, and Alan F Smeaton. Measuring the quality of text-to-video model outputs: Metrics and dataset. arXiv preprint arXiv:2309.08009, 2023. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.832, + 0.482, + 0.9 + ], + "angle": 0, + "content": "[16] Ming Ding, Wendi Zheng, Wenyi Hong, and Jie Tang. Cogview2: Faster and better text-to-image generation via hierarchical transformers. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), pages 16890-16902, 2022. 3" + }, + { + "type": "list", + "bbox": [ + 0.094, + 0.116, + 0.483, + 0.9 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.093, + 0.905, + 0.161 + ], + "angle": 0, + "content": "[17] Huiyu Duan, Xiongkuo Min, Yucheng Zhu, Guangtao Zhai, Xiaokang Yang, and Patrick Le Callet. Confusing image quality assessment: Toward better augmented reality experience. IEEE Transactions on Image Processing (TIP), 31: 7206-7221, 2022. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.165, + 0.905, + 0.246 + ], + "angle": 0, + "content": "[18] Huiyu Duan, Qiang Hu, Jiarui Wang, Liu Yang, Zitong Xu, Lu Liu, Xiongkuo Min, Chunlei Cai, Tianxiao Ye, Xiaoyun Zhang, et al. Finevq: Fine-grained user generated content video quality assessment. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2025. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.249, + 0.905, + 0.304 + ], + "angle": 0, + "content": "[19] Christoph Feichtenhofer, Haoqi Fan, Jitendra Malik, and Kaiming He. Slowfast networks for video recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 6202-6211, 2019. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.307, + 0.905, + 0.36 + ], + "angle": 0, + "content": "[20] Yuwei Guo, Ceyuan Yang, Anyi Rao, Yaohui Wang, Yu Qiao, Dahua Lin, and Bo Dai. 
Animatediff:Animate your personalized text-to-image diffusion models without specific tuning. arXiv preprint arXiv:2307.04725, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.364, + 0.905, + 0.417 + ], + "angle": 0, + "content": "[21] Yingqing He, Tianyu Yang, Yong Zhang, Ying Shan, and Qifeng Chen. Latent video diffusion models for high-fidelity video generation with arbitrary lengths. arXiv preprint arXiv:2211.13221, 2022. 2, 3, 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.421, + 0.905, + 0.489 + ], + "angle": 0, + "content": "[22] Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Le Bras, and Yejin Choi. Clipscore: A reference-free evaluation metric for image captioning. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7514-7528, 2021. 1, 6, 7, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.492, + 0.905, + 0.545 + ], + "angle": 0, + "content": "[23] Wenyi Hong, Ming Ding, Wendi Zheng, Xinghan Liu, and Jie Tang. Cogvideo: Large-scale pretraining for text-to-video generation via transformers. arXiv preprint arXiv:2205.15868, 2022. 1, 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.549, + 0.905, + 0.603 + ], + "angle": 0, + "content": "[24] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.607, + 0.905, + 0.701 + ], + "angle": 0, + "content": "[25] Ziqi Huang, Yinan He, Jiashuo Yu, Fan Zhang, Chenyang Si, Yuming Jiang, Yuanhan Zhang, Tianxing Wu, Qingyang Jin, Nattapol Chanpaisit, Yaohui Wang, Xinyuan Chen, Limin Wang, Dahua Lin, Yu Qiao, and Ziwei Liu. VBenchmark: Comprehensive benchmark suite for video generative models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024. 1, 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.705, + 0.905, + 0.759 + ], + "angle": 0, + "content": "[26] Junjie Ke, Qifei Wang, Yilin Wang, Peyman Milanfar, and Feng Yang. Musiq: Multi-scale image quality transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 5128-5137, 2021. 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.762, + 0.905, + 0.844 + ], + "angle": 0, + "content": "[27] Levon Khachatryan, Andranik Movsisyan, Vahram Tadevosyan, Roberto Henschel, Zhangyang Wang, Shant Navasardyan, and Humphrey Shi. Text2video-zero: Text-to-image diffusion models are zero-shot video generators. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 15954-15964, 2023. 1, 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.847, + 0.905, + 0.9 + ], + "angle": 0, + "content": "[28] Yuval Kirstain, Adam Poliak, Uriel Singer, and Omer Levy. Pick-a-pic: An open dataset of user preferences for text-to-image generation. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), 2023. 7" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.093, + 0.905, + 0.9 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "18877" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.092, + 0.482, + 0.161 + ], + "angle": 0, + "content": "[29] Yuval Kirstain, Adam Polyak, Uriel Singer, Shahbuland Martiana, Joe Penna, and Omer Levy. 
Pick-a-pic: An open dataset of user preferences for text-to-image generation. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), pages 36652-36663, 2023. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.164, + 0.482, + 0.205 + ], + "angle": 0, + "content": "[30] Jari Korhonen. Two-level approach for no-reference consumer video quality assessment. IEEE Transactions on Image Processing (TIP), 28(12):5923-5938, 2019. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.207, + 0.482, + 0.274 + ], + "angle": 0, + "content": "[31] Tengchuan Kou, Xiaohong Liu, Zicheng Zhang, Chunyi Li, Haoning Wu, Xiongkuo Min, Guangtao Zhai, and Ning Liu. Subjective-aligned dataset and metric for text-to-video quality assessment. arXiv preprint arXiv:2403.11956, 2024. 1, 3, 7, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.278, + 0.482, + 0.332 + ], + "angle": 0, + "content": "[32] Debarati Kundu, Deepti Ghadiyaram, Alan C Bovik, and Brian L Evans. Large-scale crowdsourced study for tonemapbed hdr pictures. IEEE Transactions on Image Processing (TIP), pages 4725-4740, 2017. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.334, + 0.482, + 0.403 + ], + "angle": 0, + "content": "[33] Bowen Li, Weixia Zhang, Meng Tian, Guangtao Zhai, and Xianpei Wang. Blindly assess quality of in-the-wild videos via quality-aware pre-training and motion perception. IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), 32(9):5944-5958, 2022. 1, 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.406, + 0.482, + 0.473 + ], + "angle": 0, + "content": "[34] Chunyi Li, Zicheng Zhang, Haoning Wu, Wei Sun, Xiongkuo Min, Xiaohong Liu, Guangtao Zhai, and Weisi Lin. Agiqa-3k: An open database for ai-generated image quality assessment. IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), 2023. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.476, + 0.482, + 0.53 + ], + "angle": 0, + "content": "[35] Dingquan Li, Tingting Jiang, and Ming Jiang. Quality assessment of in-the-wild videos. In Proceedings of the ACM International Conference on Multimedia (ACMMM). ACM, 2019. 1, 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.533, + 0.482, + 0.588 + ], + "angle": 0, + "content": "[36] Feng Li, Renrui Zhang, Hao Zhang, Yuanhan Zhang, Bo Li, Wei Li, Zejun Ma, and Chunyuan Li. Llava next-interleave: Tackling multi-image, video, and 3d in large multimodal models. arXiv preprint arXiv:2407.07895, 2024. 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.59, + 0.482, + 0.659 + ], + "angle": 0, + "content": "[37] Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In Proceedings of the International Conference on Machine Learning (ICML), pages 12888-12900. PMLR, 2022. 1, 6, 7, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.661, + 0.482, + 0.742 + ], + "angle": 0, + "content": "[38] Yuncheng Li, Yale Song, Liangliang Cao, Joel Tetreault, Larry Goldberg, Alejandro Jaimes, and Jiebo Luo. Tgif: A new dataset and benchmark on animated gif description. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 4641-4650, 2016. 
2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.746, + 0.482, + 0.828 + ], + "angle": 0, + "content": "[39] Youwei Liang, Junfeng He, Gang Li, Peizhao Li, Arseniy Klimovskiy, Nicholas Carolan, Jiao Sun, Jordi Pont-Tuset, Sarah Young, Feng Yang, et al. Rich human feedback for text-to-image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 19401-19411, 2024. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.83, + 0.482, + 0.884 + ], + "angle": 0, + "content": "[40] Bin Lin, Bin Zhu, Yang Ye, Munan Ning, Peng Jin, and Li Yuan. Video-llava: Learning united visual representation by alignment before projection. arXiv preprint arXiv:2311.10122, 2023. 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.887, + 0.482, + 0.901 + ], + "angle": 0, + "content": "[41] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee." + }, + { + "type": "list", + "bbox": [ + 0.093, + 0.092, + 0.482, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.545, + 0.093, + 0.905, + 0.12 + ], + "angle": 0, + "content": "Visual instruction tuning. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), 2024. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.122, + 0.905, + 0.19 + ], + "angle": 0, + "content": "[42] Yaofang Liu, Xiaodong Cun, Xuebo Liu, Xintao Wang, Yong Zhang, Haoxin Chen, Yang Liu, Tieyong Zeng, Raymond Chan, and Ying Shan. Evalcrafter: Benchmarking and evaluating large video generation models. arXiv preprint arXiv:2310.11440, 2023. 3, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.193, + 0.905, + 0.274 + ], + "angle": 0, + "content": "[43] Yuanxin Liu, Lei Li, Shuhuai Ren, Rundong Gao, Shicheng Li, Sishuo Chen, Xu Sun, and Lu Hou. Fetv: A benchmark for fine-grained evaluation of open-domain text-to-video generation. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), 2024. 1, 2, 3, 6, 7, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.278, + 0.905, + 0.36 + ], + "angle": 0, + "content": "[44] Zhengxiong Luo, Dayou Chen, Yingya Zhang, Yan Huang, Liang Wang, Yujun Shen, Deli Zhao, Jingren Zhou, and Tieniu Tan. Videofusion: Decomposed diffusion models for high-quality video generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10209-10218, 2023. 1, 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.362, + 0.905, + 0.431 + ], + "angle": 0, + "content": "[45] Muhammad Maaz, Hanoona Rasheed, Salman Khan, and Fahad Shahbaz Khan. Video-chatgpt: Towards detailed video understanding via large vision and language models. In Proceedings of the Association for Computational Linguistics (ACL), 2024. 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.433, + 0.905, + 0.488 + ], + "angle": 0, + "content": "[46] Xiongkuo Min, Ke Gu, Guangtao Zhai, Jing Liu, Xiaokang Yang, and Chang Wen Chen. Blind quality assessment based on pseudo-reference image. IEEE Transactions on Multimedia (TMM), pages 2049-2062, 2017. 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.49, + 0.905, + 0.545 + ], + "angle": 0, + "content": "[47] Xiongkuo Min, Guangtao Zhai, Ke Gu, Yutao Liu, and Xiaokang Yang. Blind image quality estimation via distortion aggravation. IEEE Transactions on Broadcasting (TBC), pages 508-517, 2018. 
6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.548, + 0.905, + 0.602 + ], + "angle": 0, + "content": "[48] Anish Mittal, Anush Krishna Moorthy, and Alan Conrad Bovik. No-reference image quality assessment in the spatial domain. IEEE Transactions on Image Processing (TIP), pages 4695-4708, 2012. 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.604, + 0.905, + 0.645 + ], + "angle": 0, + "content": "[49] Anish Mittal, Rajiv Soundararajan, and Alan C Bovik. Making a “completely blind” image quality analyzer. IEEE Signal Processing Letters (SPL), pages 209–212, 2012. 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.648, + 0.905, + 0.701 + ], + "angle": 0, + "content": "[50] Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125, 1 (2):3, 2022. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.704, + 0.905, + 0.773 + ], + "angle": 0, + "content": "[51] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10684-10695, 2022. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.775, + 0.905, + 0.83 + ], + "angle": 0, + "content": "[52] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), 2016. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.832, + 0.905, + 0.901 + ], + "angle": 0, + "content": "[53] Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. Laion-5b: An open large-scale dataset for training next generation image-text models. In Proceedings of the Ad" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.093, + 0.905, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.519, + 0.956 + ], + "angle": 0, + "content": "18878" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.126, + 0.092, + 0.482, + 0.12 + ], + "angle": 0, + "content": "vances in Neural Information Processing Systems (NeurIPS), pages 25278-25294, 2022. 1, 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.122, + 0.483, + 0.164 + ], + "angle": 0, + "content": "[54] BT Series. Methodology for the subjective assessment of the quality of television pictures. Recommendation ITU-R BT, pages 500-13, 2012. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.165, + 0.483, + 0.233 + ], + "angle": 0, + "content": "[55] Uriel Singer, Adam Polyak, Thomas Hayes, Xi Yin, Jie An, Songyang Zhang, Qiyuan Hu, Harry Yang, Oron Ashual, Oran Gafni, et al. Make-a-video: Text-to-video generation without text-video data. arXiv preprint arXiv:2209.14792, 2022. 1, 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.235, + 0.483, + 0.305 + ], + "angle": 0, + "content": "[56] Shaolin Su, Qingsen Yan, Yu Zhu, Cheng Zhang, Xin Ge, Jinqiu Sun, and Yanning Zhang. Blindly assess image quality in the wild guided by a self-adaptive hyper network. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 
6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.307, + 0.483, + 0.375 + ], + "angle": 0, + "content": "[57] Wei Sun, Xiongkuo Min, Wei Lu, and Guangtao Zhai. A deep learning based no-reference quality assessment model forUGC videos. In Proceedings of the 30th ACM International Conference on Multimedia (ACMMM), page 856-865, 2022. 1, 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.377, + 0.483, + 0.446 + ], + "angle": 0, + "content": "[58] Wei Sun, Xiongkuo Min, Wei Lu, and Guangtao Zhai. A deep learning based no-reference quality assessment model forUGC videos. In Proceedings of the ACM International Conference on Multimedia (ACMMM), pages 856-865, 2022. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.448, + 0.483, + 0.517 + ], + "angle": 0, + "content": "[59] Wei Sun, Xiongkuo Min, Danyang Tu, Siwei Ma, and Guangtao Zhai. Blind quality assessment for in-the-wild images via hierarchical feature fusion and iterative mixed database training. IEEE Journal of Selected Topics in Signal Processing (JSTSP), 2023. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.518, + 0.483, + 0.574 + ], + "angle": 0, + "content": "[60] Zhengzhong Tu, Yilin Wang, Neil Birkbeck, Balu Adsumilli, and Alan C Bovik. Ugc-vqa: Benchmarking blind video quality assessment for user generated content. IEEE Transactions on Image Processing (TIP), 30:4449-4464, 2021. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.576, + 0.483, + 0.631 + ], + "angle": 0, + "content": "[61] Thomas Unterthiner, Sjoerd Van Steenkiste, Karol Kurach, Raphael Marinier, Marcin Michalski, and Sylvain Gelly. Towards accurate generative models of video: A new metric & challenges. arXiv preprint arXiv:1812.01717, 2018. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.632, + 0.483, + 0.687 + ], + "angle": 0, + "content": "[62] Jianyi Wang, Kelvin C.K. Chan, and Chen Change Loy. Exploring clip for assessing the look and feel of images. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), pages 2555-2563, 2023. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.689, + 0.483, + 0.772 + ], + "angle": 0, + "content": "[63] Jiarui Wang, Huiyu Duan, Jing Liu, Shi Chen, Xiongkuo Min, and Guangtao Zhai. Aigciqa2023: A large-scale image quality assessment database for ai generated images: from the perspectives of quality, authenticity and correspondence. In CAAI International Conference on Artificial Intelligence (CICAI), pages 46-57. Springer, 2023. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.774, + 0.483, + 0.829 + ], + "angle": 0, + "content": "[64] Jiuniu Wang, Hangjie Yuan, Dayou Chen, Yingya Zhang, Xiang Wang, and Shiwei Zhang. Modelscope text-to-video technical report. arXiv preprint arXiv:2308.06571, 2023. 1, 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.831, + 0.483, + 0.872 + ], + "angle": 0, + "content": "[65] Jiarui Wang, Huiyu Duan, Guangtao Zhai, and Xiongkuo Min. Quality assessment for ai generated images with instruction tuning. arXiv preprint arXiv:2405.07346, 2024. 
1" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.873, + 0.483, + 0.903 + ], + "angle": 0, + "content": "[66] Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin" + }, + { + "type": "list", + "bbox": [ + 0.093, + 0.092, + 0.483, + 0.903 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.545, + 0.092, + 0.906, + 0.162 + ], + "angle": 0, + "content": "Ge, Yang Fan, Kai Dang, Mengfei Du, Xuancheng Ren, Rui Men, Dayiheng Liu, Chang Zhou, Jingren Zhou, and Junyang Lin. Qwen2-vl: Enhancing vision-language model's perception of the world at any resolution. arXiv preprint arXiv:2409.12191, 2024. 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.164, + 0.906, + 0.233 + ], + "angle": 0, + "content": "[67] Yaohui Wang, Xinyuan Chen, Xin Ma, Shangchen Zhou, Ziqi Huang, Yi Wang, Ceyuan Yang, Yinan He, Jiashuo Yu, Peiqing Yang, et al. Lavie: High-quality video generation with cascaded latent diffusion models. arXiv preprint arXiv:2309.15103, 2023. 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.235, + 0.906, + 0.319 + ], + "angle": 0, + "content": "[68] Yi Wang, Yinan He, Yizhuo Li, Kunchang Li, Jiashuo Yu, Xin Ma, Xinhao Li, Guo Chen, Xinyuan Chen, Yaohui Wang, et al. Internvid: A large-scale video-text dataset for multimodal understanding and generation. In Proceedings of the International Conference on Learning Representations (ICLR), 2023. 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.321, + 0.906, + 0.403 + ], + "angle": 0, + "content": "[69] Zirui Wang, Mengzhou Xia, Luxi He, Howard Chen, Yitao Liu, Richard Zhu, Kaiqu Liang, Xindi Wu, Haotian Liu, Sadhika Malladi, Alexis Chevalier, Sanjeev Arora, and Danqi Chen. Charxiv: Charting gaps in realistic chart understanding in multimodal llms. arXiv preprint arXiv:2406.18521, 2024. 5, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.405, + 0.906, + 0.489 + ], + "angle": 0, + "content": "[70] Haoning Wu, Chaofeng Chen, Jingwen Hou, Liang Liao, Annan Wang, Wenxiu Sun, Qiong Yan, and Weisi Lin. Fastvqa: Efficient end-to-end video quality assessment with fragment sampling. In Proceedings of the European Conference on Computer Vision (ECCV), pages 538-554. Springer, 2022. 1, 6, 7, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.49, + 0.906, + 0.573 + ], + "angle": 0, + "content": "[71] Haoning Wu, Erli Zhang, Liang Liao, Chaofeng Chen, Jingwen Hou Hou, Annan Wang, Wenxiu Sun Sun, Qiong Yan, and Weisi Lin. Exploring video quality assessment on user generated contents from aesthetic and technical perspectives. In Proceedings of the International Conference on Computer Vision (ICCV), 2023. 1, 6, 7, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.575, + 0.906, + 0.644 + ], + "angle": 0, + "content": "[72] Haoning Wu, Zicheng Zhang, Weixia Zhang, Chaofeng Chen, Liang Liao, Chunyi Li, Yixuan Gao, Annan Wang, Erli Zhang, Wenxiu Sun, et al. Q-align: Teaching lmm's for visual scoring via discrete text-defined levels. arXiv preprint arXiv:2312.17090, 2023. 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.646, + 0.906, + 0.73 + ], + "angle": 0, + "content": "[73] Jay Zhangjie Wu, Yixiao Ge, Xintao Wang, Stan Weixian Lei, Yuchao Gu, Yufei Shi, Wynne Hsu, Ying Shan, Xiaohu Qie, and Mike Zheng Shou. Tune-a-video: One-shot tuning of image diffusion models for text-to-video generation. 
In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 7623-7633, 2023. 1, 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.732, + 0.906, + 0.8 + ], + "angle": 0, + "content": "[74] Xiaoshi Wu, Yiming Hao, Keqiang Sun, Yixiong Chen, Feng Zhu, Rui Zhao, and Hongsheng Li. Human preference score v2: A solid benchmark for evaluating human preferences of text-to-image synthesis. arXiv preprint arXiv:2306.09341, 2023. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.802, + 0.906, + 0.845 + ], + "angle": 0, + "content": "[75] Xiaoshi Wu, Keqiang Sun, Feng Zhu, Rui Zhao, and Hongsheng Li. Better aligning text-to-image models with human preference. arXiv preprint arXiv:2303.14420, 1(3), 2023. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.846, + 0.906, + 0.902 + ], + "angle": 0, + "content": "[76] Jun Xu, Tao Mei, Ting Yao, and Yong Rui. Msr-vtt: A large video description dataset for bridging video and language. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2016. 2, 3" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.092, + 0.906, + 0.902 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.945, + 0.52, + 0.957 + ], + "angle": 0, + "content": "18879" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.093, + 0.482, + 0.147 + ], + "angle": 0, + "content": "[77] Jingtao Xu, Peng Ye, Qiaohong Li, Haiqing Du, Yong Liu, and David Doermann. Blind image quality assessment based on high order statistics aggregation. IEEE Transactions on Image Processing (TIP), pages 4444-4457, 2016. 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.149, + 0.482, + 0.217 + ], + "angle": 0, + "content": "[78] Jiazheng Xu, Xiao Liu, Yuchen Wu, Yuxuan Tong, Qinkai Li, Ming Ding, Jie Tang, and Yuxiao Dong. Imagereward: Learning and evaluating human preferences for text-to-image generation. arXiv preprint arXiv:2304.05977, 2023.6,7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.219, + 0.482, + 0.302 + ], + "angle": 0, + "content": "[79] Zitong Xu, Huiyu Duan, Guangji Ma, Liu Yang, Jiarui Wang, Qingbo Wu, Xiongkuo Min, Guangtao Zhai, and Patrick Le Callet. Harmonyiqa: Pioneering benchmark and model for image harmonization quality assessment. In Proceedings of the International Conference on Multimedia and Expo (ICME), 2025. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.304, + 0.482, + 0.371 + ], + "angle": 0, + "content": "[80] Wufeng Xue, Lei Zhang, and Xuanqin Mou. Learning without human scores for blind image quality assessment. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 995-1002, 2013. 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.375, + 0.482, + 0.415 + ], + "angle": 0, + "content": "[81] Wilson Yan, Yunzhi Zhang, Pieter Abbeel, and Aravind Srinivas. Videogpt: Video generation using vq-vae and transformers. arXiv preprint arXiv:2104.10157, 2021. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.418, + 0.483, + 0.486 + ], + "angle": 0, + "content": "[82] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 586-595, 2018. 
6" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.488, + 0.482, + 0.556 + ], + "angle": 0, + "content": "[83] Weixia Zhang, Guangtao Zhai, Ying Wei, Xiaokang Yang, and Kede Ma. Blind image quality assessment via vision-language correspondence: A multitask learning perspective. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023. 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.559, + 0.482, + 0.626 + ], + "angle": 0, + "content": "[84] Zhichao Zhang, Xinyue Li, Wei Sun, Jun Jia, Xiongkuo Min, Zicheng Zhang, Chunyi Li, Zijian Chen, Puyi Wang, Zhongpeng Ji, et al. Benchmarking aigc video quality assessment: A dataset and unified model. arXiv preprint arXiv:2407.21408, 2024. 1, 3, 7, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.629, + 0.482, + 0.684 + ], + "angle": 0, + "content": "[85] Tianwei Zhou, Songbai Tan, Wei Zhou, Yu Luo, Yuan-Gen Wang, and Guanghui Yue. Adaptive mixed-scale feature fusion network for blind ai-generated image quality assessment. IEEE Transactions on Broadcasting (TBC), 2024. 1" + }, + { + "type": "list", + "bbox": [ + 0.093, + 0.093, + 0.483, + 0.684 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.481, + 0.946, + 0.519, + 0.956 + ], + "angle": 0, + "content": "18880" + } + ] +] \ No newline at end of file diff --git a/2025/AIGV-Assessor_ Benchmarking and Evaluating the Perceptual Quality of Text-to-Video Generation with LMM/6da59f8b-88b1-4dd7-9656-385f5fb4c136_origin.pdf b/2025/AIGV-Assessor_ Benchmarking and Evaluating the Perceptual Quality of Text-to-Video Generation with LMM/6da59f8b-88b1-4dd7-9656-385f5fb4c136_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..c2235867d9565316152db5da0264797a643037c1 --- /dev/null +++ b/2025/AIGV-Assessor_ Benchmarking and Evaluating the Perceptual Quality of Text-to-Video Generation with LMM/6da59f8b-88b1-4dd7-9656-385f5fb4c136_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8ed639b3e3696bc007c7dddd449d88a8467cd8fde500472d1963b2bcec107a93 +size 3117040 diff --git a/2025/AIGV-Assessor_ Benchmarking and Evaluating the Perceptual Quality of Text-to-Video Generation with LMM/full.md b/2025/AIGV-Assessor_ Benchmarking and Evaluating the Perceptual Quality of Text-to-Video Generation with LMM/full.md new file mode 100644 index 0000000000000000000000000000000000000000..3ecb0dd4532e018382975eb98aaf31823534a12a --- /dev/null +++ b/2025/AIGV-Assessor_ Benchmarking and Evaluating the Perceptual Quality of Text-to-Video Generation with LMM/full.md @@ -0,0 +1,293 @@ +# AIGV-Assessor: Benchmarking and Evaluating the Perceptual Quality of Text-to-Video Generation with LMM + +Jiarui Wang $^{1}$ , Huiyu Duan $^{1,2}$ , Guangtao Zhai $^{1,2}$ , Juntong Wang $^{1}$ , Xiongkuo Min $^{1*}$ , $^{1}$ Institute of Image Communication and Network Engineering, $^{2}$ MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai, China + +# Abstract + +The rapid advancement of large multimodal models (LMMs) has led to the rapid expansion of artificial intelligence generated videos (AIGVs), which highlights the pressing need for effective video quality assessment (VQA) models designed specifically for AIGVs. Current VQA models generally fall short in accurately assessing the perceptual quality of AIGVs due to the presence of unique distortions, such as unrealistic objects, unnatural movements, or inconsistent visual elements. 
To address this challenge, we first present AIGVQA-DB, a large-scale dataset comprising 36,576 AIGVs generated by 15 advanced text-to-video models using 1,048 diverse prompts. With these AIGVs, a systematic annotation pipeline including scoring and ranking processes is devised, which has collected 370k expert ratings to date. Based on AIGVQA-DB, we further introduce AIGV-Assessor, a novel VQA model that leverages spatiotemporal features and LMM frameworks to capture the intricate quality attributes of AIGVs, thereby accurately predicting precise video quality scores and video pair preferences. Through comprehensive experiments on both AIGVQA-DB and existing AIGV databases, AIGV-Assessor demonstrates state-of-the-art performance, significantly surpassing existing scoring or evaluation methods in terms of multiple perceptual quality dimensions. The dataset and code are released at https://github.com/IntMeGroup/AIGV-Assessor.

# 1. Introduction

Text-to-video generative models [12, 27, 44, 64, 73], including auto-regressive [23, 81] and diffusion-based [12, 27, 55] approaches, have experienced rapid advancements in recent years with the explosion of large multimodal models (LMMs). Given appropriate text prompts, these models can generate high-fidelity and semantically aligned videos, commonly referred to as AI-generated videos (AIGVs), which have significantly facilitated content creation in various domains, including entertainment, art, design, and advertising [11, 13, 43]. Despite the significant progress, current AIGVs are still far from satisfactory. Unlike natural videos, which are usually affected by low-level distortions such as noise, blur, and low light, AIGVs generally suffer from degradations such as unrealistic objects, unnatural movements, inconsistent visual elements, and misalignment with text descriptions [25, 31, 43, 65, 79, 84, 85].

The unique distortions in AIGVs also bring challenges to video evaluation. Traditional video quality assessment (VQA) methods [10, 18, 33, 35, 57, 70, 71] mainly focus on evaluating the quality of professionally-generated content (PGC) and user-generated content (UGC), and thus struggle to address the specific distortions associated with AIGVs, such as spatial artifacts, temporal inconsistencies, and misalignment between generated content and text prompts. For the evaluation of AIGVs, some metrics such as Inception Score (IS) [52] and Fréchet Video Distance (FVD) [61] have been widely used, but they are computed over distributions of videos and may not reflect the human preference for an individual video. Moreover, these metrics mainly evaluate the fidelity of videos, while failing to assess the text-video correspondence. Vision-language pre-training models, such as CLIPScore [22], BLIPScore [37], and AestheticScore [53], are frequently employed to evaluate the alignment between generated videos and their text prompts. However, these models mainly consider the text-video alignment at the image level, while ignoring the dynamic diversity and motion consistency of visual elements that are crucial to the video-viewing experience.

In this paper, to facilitate the development of more comprehensive and precise metrics for evaluating AI-generated videos, we present AIGVQA-DB, a large-scale VQA dataset including 36,576 AIGVs generated by 15 advanced text-to-video models using 1,048 diverse prompts.

![](images/e2f88f0b1752496702c959cf991c9bb7ab6e815905c8a1788b7757a50de31c59.jpg)
Figure 1. An overview of the AIGVQA-DB construction pipeline, illustrating the generation and the subjective evaluation procedures for the AIGVs in the database. (a) Prompt categorization according to the spatial major content. (b) Prompt categorization according to the temporal descriptions. (c) Prompt categorization according to the attribute control. (d) Prompt categorization according to the prompt complexity. (e) The 15 generative models used in the database. (f) Four visual quality evaluation perspectives, including static quality, temporal smoothness, dynamic degree, and text-video correspondence. (g) and (h) demonstrate the pair comparison and preference scoring processes, respectively.

An overview of the dataset construction pipeline is shown in Figure 1. The prompts are collected from existing open-domain text-video datasets [7, 8, 38, 43, 68, 76] or manually written, and can be categorized along four orthogonal aspects, as shown in Figure 1(a)-(d). Based on the AIGVs, we collect 370k expert ratings comprising both mean opinion scores (MOSs) and pairwise comparisons, which are evaluated from four dimensions: (1) static quality, (2) temporal smoothness, (3) dynamic degree, and (4) text-video correspondence. Equipped with the dataset, we propose AIGV-Assessor, a large multimodal model-based (LMM-based) VQA method for AIGVs, which reformulates the quality regression task into an interactive question-and-answer (Q&A) framework and leverages the powerful multimodal representation capabilities of LMMs to provide accurate and robust quality assessments. AIGV-Assessor not only classifies videos into different quality levels through natural language output, but also generates precise quality scores through regression, thus enhancing the interpretability and usability of VQA results. Moreover, AIGV-Assessor also excels in pairwise video comparisons, enabling nuanced assessments that are closer to human preferences. Extensive experimental results demonstrate that AIGV-Assessor outperforms existing text-to-video scoring methods in terms of multiple dimensions relevant to human preference.

The main contributions of this paper are summarized as follows:

- We construct AIGVQA-DB, a large-scale dataset comprising 36,576 AI-generated videos annotated with MOSs and pairwise comparisons. Compared with existing benchmarks, AIGVQA-DB provides a more comprehensive assessment of the capabilities of text-to-video models from multiple perspectives.

Table 1. An overview of popular text-to-video (T2V) and image-to-video (I2V) generation models. ${}^{\dagger}$ Representative variable.
| Model | Year | Mode | Resolution | Frames | Open |
| --- | --- | --- | --- | --- | --- |
| CogVideo [23] | 22.05 | T2V | 480×480 | 32 | ✓ |
| Make-a-Video [55] | 22.09 | T2V | 256×256 | 16 | ✓ |
| LVDM [21] | 22.11 | T2V | 256×256 | 16 | ✓ |
| Tune-A-Video [73] | 22.12 | T2V | 512×512 | 8 | ✓ |
| VideoFusion [44] | 23.03 | T2V | 128×128 | 16 | ✓ |
| Text2Video-Zero [27] | 23.03 | T2V | 512×512 | 8 | ✓ |
| ModelScope [64] | 23.03 | T2V | 256×256 | 16 | ✓ |
| Lavie [67] | 23.09 | T2V | 512×320 | 16 | ✓ |
| VideoCrafter [12] | 23.10 | T2V, I2V | 1024×576 | 16 | ✓ |
| Hotshot-XL [1] | 23.10 | T2V | 672×384 | 8 | ✓ |
| StableVideoDiffusion [9] | 23.11 | I2V | 576×1024 | 14 | ✓ |
| AnimateDiff [20] | 23.12 | T2V, I2V | 384×256 | 20 | ✓ |
| Floor33 [2] | 23.08 | T2V, I2V | 1024×640 | 16 | - |
| Genmo [3] | 23.10 | T2V, I2V | 2048×1536 | 60 | - |
| Gen-2 [4] | 23.12 | T2V, I2V | 1408×768 | 96 | - |
| MoonValley [5] | 24.01 | T2V, I2V | 1184×672 | 200† | - |
| MorphStudio [6] | 24.01 | T2V, I2V | 1920×1080 | 72 | - |
| Sora [7] | 24.02 | T2V, I2V | 1920×1080 | 600† | - |
- Based on AIGVQA-DB, we evaluate and benchmark 15 representative text-to-video models, and reveal their strengths and weaknesses from four crucial preference dimensions, i.e., static quality, temporal smoothness, dynamic degree, and text-to-video correspondence.
- We present a novel LMM-based VQA model for AIGVs, termed AIGV-Assessor, which integrates both spatial and temporal visual features as well as prompt features into an LMM to give quality levels, predict quality scores, and conduct quality comparisons.
- Thorough analysis of our AIGV-Assessor is provided, and extensive experiments on our proposed AIGVQA-DB and other AIGV quality assessment datasets have shown the effectiveness and applicability of AIGV-Assessor.

# 2. Related Work

# 2.1. Text-to-video Generation

Recent advancements in text-to-video generative models have substantially broadened video creation and modification possibilities.

Table 2. Summary of existing text-to-image and text-to-video evaluation datasets.
| Dataset Types | Name | Numbers | Prompts | Models | Annotators | Dimensions | MOSs / Pairs | Annotation |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| AIGIQA | AGIQA-3k [34] | 2,982 | 180 | 6 | 21 | 2 | 5,964 | MOS |
| AIGIQA | AIGCIQA2023 [63] | 2,400 | 100 | 6 | 28 | 3 | 7,200 | MOS |
| AIGIQA | RichHF-18k [39] | 17,760 | 17,760 | 3 | 3 | 4 | 71,040 | MOS |
| AIGIQA | HPS [75] | 98,807 | 25,205 | 1 | 2,659 | 1 | 25,205 | Pairs |
| AIGIQA | Pick-a-Pic [29] | - | 37,523 | 3 | 4,375 | 1 | 584,247 | Pairs |
| AIGVQA | MQT [15] | 1,005 | 201 | 5 | 24 | 2 | 2,010 | MOS |
| AIGVQA | EvalCrafter [42] | 2,500 | 700 | 5 | 7 | 4 | 1,024 | MOS |
| AIGVQA | FETV [43] | 2,476 | 619 | 4 | 3 | 3 | 7,428 | MOS |
| AIGVQA | LGVQ [84] | 2,808 | 468 | 6 | 20 | 3 | 8,424 | MOS |
| AIGVQA | T2VQA-DB [31] | 10,000 | 1,000 | 9 | 27 | 1 | 10,000 | MOS |
| AIGVQA | GAIA [13] | 9,180 | 510 | 18 | 54 | 3 | 27,540 | MOS |
| AIGVQA | AIGVQA-DB (Ours) | 36,576 | 1,048 | 15 | 120 | 4 | 122,304 | MOS and Pairs |
As shown in Table 1, these models exhibit distinct characteristics and capacities, including modes, resolutions, and total frames. CogVideo [23] is an early text-to-video (T2V) model capable of generating short videos based on CogView2 [16]. Make-a-Video [55] adds effective spatial-temporal modules on a diffusion-based text-to-image (T2I) model (i.e., DALLE-2 [50]). VideoFusion [44] also leverages DALLE-2 and presents a decomposed diffusion process. LVDM [21], Text2Video-Zero [27], Tune-A-Video [73], and ModelScope [64] are models that inherit the success of Stable Diffusion (SD) [51] for video generation. Lavie [67] extends the original transformer block in SD to a spatio-temporal transformer. Hotshot-XL [1] introduces personalized video generation. Beyond these laboratory-driven advancements, the video generation landscape has also been enriched by a series of commercial products. Notable among them are Floor33 [2], Gen-2 [4], Genmo [3], MoonValley [5], MorphStudio [6], and Sora [7], which have gained substantial attention in both academia and industry, demonstrating the widespread application potential of AI-assisted video creation.

# 2.2. Text-to-video Evaluation

AI-generated image quality assessment (AIGIQA) datasets are relatively well-developed, including both mean opinion scores (MOSs) for absolute quality evaluations and pairwise comparisons for relative quality judgments. Recent developments in text-to-video generation models have also spurred the creation of various AI-generated video quality assessment (AIGVQA) datasets, addressing different aspects of the T2V generation challenge, as shown in Table 2. MQT [15] consists of 1,005 videos generated by 5 models using 201 prompts. EvalCrafter [42] and FETV [43] extend the scale of the videos, prompts, and evaluation dimensions. LGVQ [84] increases the number of annotators, providing more reliable MOSs. T2VQA-DB [31] consists of 10,000 videos from 1,000 prompts, representing a significant improvement in scale. GAIA [13] collects 9,180 videos focusing on action quality assessment in AIGVs, but falls short in addressing the consistency between the generated visuals and their textual prompts. Most existing VQA datasets rely on MOSs, an absolute scoring method with an inherent drawback: absolute scores alone may cause ambiguity and overlook subtle quality differences. In contrast, our AIGVQA-DB includes both MOSs and pairwise comparisons, addressing the limitations of current works by providing fine-grained preference feedback.

# 3. Database Construction and Analysis

# 3.1. Data Collection

Prompt Sources and Categorization. Prompts of the AIGVQA-DB are primarily sourced from existing open-domain text-video pair datasets, including InternVid [68], MSRVTT [76], WebVid [8], TGIF [38], FETV [43] and the Sora website [7]. We also manually craft prompts describing highly unusual scenarios to test the generalization ability of the generation models. As shown in Figure 1(a)-(d), we follow the categorization principles from FETV [43] to organize each prompt based on the "spatial major content", "temporal major content", "attribute control", and "prompt complexity".

Text-to-Video Generation. We utilize the 15 latest text-to-video generative models to create AI-generated videos, as shown in Figure 1(e). We leverage open-source website APIs and code with default weights for these models to produce AIGVs.
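Concretely, the generation step is an enumeration over the prompt × model grid. The following is a minimal sketch under the assumption of a uniform wrapper; the `SAMPLES_PER_MODEL` registry and `generate_video` function are hypothetical placeholders, since each generator exposes its own API:

```python
from pathlib import Path

# Hypothetical registry: four samples per open-source model and one per
# closed-source model, matching the counts described below (truncated list).
SAMPLES_PER_MODEL = {"LVDM": 4, "ModelScope": 4, "Gen-2": 1, "Sora": 1}

def generate_video(model: str, prompt: str, seed: int) -> bytes:
    """Placeholder for a model-specific text-to-video call (API or checkpoint)."""
    raise NotImplementedError

def build_pair_subset(prompts, out_dir="videos"):
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for pid, prompt in enumerate(prompts):
        for model, n_samples in SAMPLES_PER_MODEL.items():
            for sid in range(n_samples):
                clip = generate_video(model, prompt, seed=sid)
                (out / f"p{pid:04d}_{model}_{sid}.mp4").write_bytes(clip)
```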
For the construction of the MOS subset, we collect 48 videos from the Sora website [7], along with their corresponding text prompts. Using these prompts, we generate additional videos using 11 different generative models. This process results in a total of 576 videos (12 generative models $\times$ 48 prompts). In addition to the MOS subset, we construct the pair-comparison subset using 1,000 diverse prompts, and 12 generative models, including 8 open-source and 4 closed-source ones, are employed for text-to-video generation. Specifically, for each prompt, we generate four distinct videos for each open-source generative model and one video for each closed-source generative model. This process yields a total of 36,000 videos. More details of the database can be found in the supplementary material.

# 3.2. Subjective Experiment Setup and Procedure

Due to the unique and unnatural characteristics of AI-generated videos and the varying target video spaces dictated by different text prompts, relying solely on a single score, such as "quality", to represent human visual preferences is insufficient. In this paper, we propose to measure

![](images/fb0385b3e755dd0ce7af1b2bf2c5ae793450e483574d3b22a009ca8c5c39ce30.jpg)
(a) Distribution of Raw Scores

![](images/8eac122b8407f48e45e263fdddc23287419766654acec0728f07feffd5b4ba0f.jpg)
(b) Distribution of Mean Opinion Scores (MOSs)

![](images/36c72da19ded12678e8963e4090622f43eaf08a13abe22c112d4b9d85ec226dc.jpg)
Figure 2. Video score distribution from the four perspectives including static quality, temporal smoothness, dynamic degree, and T2V correspondence. (a) Distribution of raw scores. (b) Distribution of Mean Opinion Scores (MOSs).

![](images/1f6e6965f46d7954d0df029d696dd198d84ae054cd372b5706fa71d41fde0ab3.jpg)
Figure 3. Comparison of averaged win rates of different generation models across different categories. (a) Results across prompt complexity. (b) Results across attribute control. (c) Results across temporal major contents. (d) Results across spatial major contents.

Figure 4. (a) Comparison of text-to-video generation models regarding the MOS in terms of four dimensions, sorted bottom-up by their averaged MOS. (b) Comparison of text-to-video generation models regarding the win rate in terms of four dimensions, sorted bottom-up by their averaged win rate.

the human visual preferences of AIGVs from four perspectives. Static quality assesses the clarity, sharpness, color accuracy, and overall aesthetic appeal of the frames when viewed as standalone images. Temporal smoothness evaluates the temporal coherence of video frames and the absence of temporal artifacts such as flickering or jittering. Dynamic degree evaluates the extent to which the video incorporates large motions and dynamic scenes, which contributes to the overall liveliness and engagement measurement of the content. Text-video (TV) correspondence assesses how accurately the video content reflects the details, themes, and actions described in the prompt, ensuring that the generated video effectively translates the text input into a visual narrative. Each of these four visual perception perspectives is related but distinct, offering a comprehensive evaluation for AIGVs. To evaluate the quality of the videos in the AIGVQA-DB, we conduct subjective experiments adhering to the guidelines outlined in ITU-R BT.500-14 [17, 54]. For the MOS annotation type, we use a 1-5 Likert-scale judgment to score the videos.
For the pairs annotation type, participants are presented with pairs of videos and asked to choose the one they prefer, providing a direct comparison method for evaluating relative video quality. The videos are displayed using an interface designed with Python Tkinter, as illustrated in Figure 1(g)-(h). A total of 120 graduate students participate in the experiment.

# 3.3. Subjective Data Processing

In order to obtain the MOS for an AIGV, we linearly scale the raw ratings to the range [0, 100] as follows:

$$
z_{ij} = \frac{r_{ij} - \mu_i}{\sigma_i}, \quad z_{ij}' = \frac{100(z_{ij} + 3)}{6},
$$

$$
\mu_i = \frac{1}{N_i} \sum_{j=1}^{N_i} r_{ij}, \quad \sigma_i = \sqrt{\frac{1}{N_i - 1} \sum_{j=1}^{N_i} (r_{ij} - \mu_i)^2},
$$

where $r_{ij}$ is the raw rating given by the $i$-th subject to the $j$-th video, and $N_i$ is the number of videos judged by subject $i$. Next, the MOS of video $j$ is computed by averaging the rescaled z-scores:

$$
MOS_j = \frac{1}{M} \sum_{i=1}^{M} z_{ij}',
$$

where $MOS_j$ indicates the MOS for the $j$-th AIGV, $M$ is the number of subjects, and $z_{ij}'$ are the rescaled z-scores.

![](images/6e2271fc92b4aadac9147836c9305b52893182b0a5fb8fd4763fd447322a3f64.jpg)
Figure 5. The framework of AIGV-Assessor: (a) AIGV-Assessor takes AI-generated video frames as input and outputs both text-based quality levels and numerical quality scores. The system begins with the extraction of spatiotemporal features using two vision encoders, which are then passed through spatial and temporal projection modules to produce visual tokens aligned with the language space. The LLM decoder produces text-based feedback describing the video quality level for each of the four evaluation dimensions. Simultaneously, the last hidden states from the LLM are used to perform quality regression that outputs final quality scores in terms of the four dimensions. (b) AIGV-Assessor is further fine-tuned on pairwise comparisons, allowing the model to output the evaluation comparison between two videos.

For the pairs annotation type, given a text prompt $p_i$ and 12 video generation models labeled $\{A, B, C, \dots, L\}$, we generate videos using each model, where $V_{i,X,j}$ denotes the $j$-th video generated by model $X$ for prompt $p_i$. Specifically, we randomly generate four different videos for each of the eight open-source generative models and one video for each of the four closed-source generative models, resulting in a group of 36 videos $G_i = \{V_{i,A,1}, V_{i,A,2}, V_{i,A,3}, V_{i,A,4}, V_{i,B,1}, \dots, V_{i,L,1}\}$. For each group, we create all possible pairwise combinations, resulting in $C_{36}^2 = 630$ pairs: $(V_{A1}, V_{B1})$, $(V_{A1}, V_{B2})$, $(V_{A1}, V_{B3})$, $(V_{A1}, V_{B4})$, $(V_{A1}, V_{C1})$, ..., $(V_{K1}, V_{L1})$. In the AIGVQA-DB construction pipeline, a prompt suite of 1,000 prompts therefore results in 630,000 $(1000 \times C_{36}^2)$ pairwise video comparisons. From this extensive set, we randomly sample 30,000 pairs for evaluation from the four perspectives. Each pair is judged by three annotators, and the final decision of the better video in each pair is determined by majority vote. Finally, we obtain a total of 46,080 reliable score ratings (20 annotators × 4 perspectives × 576 videos) and 360,000 pair ratings (3 annotators × 4 perspectives × 30,000 pairs).
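For illustration, the standardization above can be implemented as follows (a minimal NumPy sketch; the subjects × videos rating-matrix layout with NaNs for unrated entries is our assumption, as the paper does not specify a storage format):

```python
import numpy as np

def compute_mos(ratings: np.ndarray) -> np.ndarray:
    """ratings: (M subjects, J videos) raw 1-5 scores, NaN where unrated.

    Implements the procedure above: per-subject z-scores, linear rescaling
    to [0, 100], then averaging over subjects to obtain one MOS per video.
    """
    mu = np.nanmean(ratings, axis=1, keepdims=True)            # mu_i
    sigma = np.nanstd(ratings, axis=1, ddof=1, keepdims=True)  # sigma_i, N_i - 1 denominator
    z = (ratings - mu) / sigma                                 # z_ij
    z_rescaled = 100.0 * (z + 3.0) / 6.0                       # z'_ij
    return np.nanmean(z_rescaled, axis=0)                      # MOS_j

# Toy check with 3 subjects rating 4 videos:
r = np.array([[1, 3, 4, 5], [2, 3, 5, 4], [1, 2, 4, 5]], dtype=float)
print(np.round(compute_mos(r), 1))
```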
# 3.4. AIGV Analysis from Four Perspectives

As shown in Figure 2, the videos in the AIGVQA-DB cover a wide range of perceptual quality. We further analyze the win rates of various generation models across categories in Figure 3, revealing the strengths and weaknesses of each T2V model. As shown in Figure 3(a), the rankings of T2V models are consistent across different prompt complexity levels in terms of static quality, which indicates that current T2V models rank consistently for different prompts, likely because they share architectures (e.g., diffusion-based systems) and thus exhibit common strengths and limitations in handling complex prompts. As shown in Figure 3(b), in terms of attribute control, StableVideoDiffusion [9] excels in managing quantity over event order, as it first generates static images before animating them, preserving the original event sequence. As shown in Figure 3(d), in terms of spatial content, most videos featuring "plants" and "people" show poor T2V correspondence. More comparisons and analyses can be found in the supplementary material. We also compare the text-to-video generation models regarding the MOS and pairwise win rates, as shown in Figure 4. Notably, models such as LVDM [21] demonstrate exceptional performance in handling dynamic content, but exhibit relatively lower performance in temporal smoothness. Sora [7] and MorphStudio [6] perform well in static quality and temporal smoothness while lagging in dynamic degree. Additionally, closed-source models exhibit much better performance compared to open-source models.

# 4. Proposed Method

# 4.1. Model Structure

Spatial and Temporal Vision Encoder. As shown in Figure 5(a), the model leverages two different types of encoders to capture the spatial and temporal characteristics of the video: (1) 2D Encoder: a pre-trained 2D vision transformer (InternViT [69]) is used to process individual video frames. (2) 3D Encoder: a 3D network, i.e., SlowFast [19], is employed to extract temporal features by processing sequences of video frames.

Spatiotemporal Projection Module. Once the spatial and temporal features are extracted, they are projected into a shared feature space for alignment with text-based queries.

Table 3. Performance comparisons of the state-of-the-art quality evaluation methods on the AIGVQA-DB from four perspectives. The best results are marked in RED and the second-best results are marked in BLUE. (SQ: static quality; TS: temporal smoothness; DD: dynamic degree; TVC: TV correspondence.)
| Methods / Metrics | SQ Pair Acc | SQ SRCC | SQ PLCC | SQ KRCC | TS Pair Acc | TS SRCC | TS PLCC | TS KRCC | DD Pair Acc | DD SRCC | DD PLCC | DD KRCC | TVC Pair Acc | TVC SRCC | TVC PLCC | TVC KRCC |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| NIQE [49] | 54.32% | 0.0867 | 0.1626 | 0.0615 | 52.67% | 0.0641 | 0.1152 | 0.0451 | 45.64% | 0.1765 | 0.2448 | 0.1194 | 46.99% | 0.1771 | 0.2231 | 0.1193 |
| QAC [80] | 49.96% | 0.1022 | 0.1363 | 0.0680 | 54.90% | 0.1633 | 0.2039 | 0.1105 | 54.72% | 0.0448 | 0.0427 | 0.0295 | 54.48% | 0.0303 | 0.0197 | 0.2233 |
| BRISQUE [48] | 59.98% | 0.2909 | 0.2443 | 0.1969 | 55.67% | 0.2325 | 0.1569 | 0.1553 | 44.60% | 0.1351 | 0.0959 | 0.0893 | 51.02% | 0.1294 | 0.1017 | 0.0869 |
| BPRI [46] | 52.28% | 0.2181 | 0.1723 | 0.1398 | 47.26% | 0.1766 | 0.0880 | 0.1138 | 46.83% | 0.1956 | 0.1688 | 0.1329 | 49.13% | 0.1569 | 0.1548 | 0.1052 |
| HOSA [77] | 61.54% | 0.2420 | 0.2106 | 0.1643 | 57.31% | 0.2311 | 0.1757 | 0.1559 | 44.97% | 0.0755 | 0.0449 | 0.0496 | 52.23% | 0.1645 | 0.1324 | 0.1097 |
| BMPRI [47] | 53.71% | 0.1690 | 0.1481 | 0.1075 | 49.31% | 0.1434 | 0.0844 | 0.0894 | 45.07% | 0.1153 | 0.0925 | 0.0777 | 48.43% | 0.1567 | 0.1500 | 0.1041 |
| V-Dynamic [25] | 51.34% | 0.0768 | 0.0792 | 0.0494 | 31.91% | 0.3713 | 0.4871 | 0.2557 | 53.11% | 0.1466 | 0.0253 | 0.0988 | 46.96% | 0.0405 | 0.0576 | 0.0223 |
| V-Smoothness [25] | 61.63% | 0.6748 | 0.4506 | 0.4590 | 76.59% | 0.8526 | 0.8313 | 0.6533 | 47.63% | 0.2446 | 0.2328 | 0.1580 | 61.28% | 0.3188 | 0.3073 | 0.2214 |
| CLIPScore [22] | 47.09% | 0.0731 | 0.0816 | 0.0473 | 46.33% | 0.0423 | 0.0334 | 0.0271 | 52.99% | 0.0675 | 0.0835 | 0.0439 | 55.62% | 0.1519 | 0.1731 | 0.1014 |
| BLIPScore [37] | 53.24% | 0.0492 | 0.0421 | 0.0330 | 53.07% | 0.0659 | 0.0487 | 0.0437 | 53.03% | 0.1786 | 0.1904 | 0.1205 | 61.53% | 0.1813 | 0.1896 | 0.1219 |
| AestheticScore [53] | 70.24% | 0.6713 | 0.6959 | 0.4784 | 54.82% | 0.5154 | 0.4946 | 0.3484 | 52.96% | 0.2295 | 0.2322 | 0.1527 | 59.64% | 0.2381 | 0.2440 | 0.1602 |
| ImageReward [78] | 56.69% | 0.2606 | 0.2646 | 0.1749 | 54.09% | 0.2382 | 0.2305 | 0.1600 | 53.90% | 0.1840 | 0.1836 | 0.1237 | 63.97% | 0.2311 | 0.2450 | 0.1568 |
| UMTScore [43] | 48.93% | 0.0168 | 0.0199 | 0.0117 | 49.93% | 0.0302 | 0.0370 | 0.0207 | 52.69% | 0.0168 | 0.0198 | 0.0117 | 53.82% | 0.0172 | 0.0065 | 0.0108 |
| Video-LLaVA [40] | 50.90% | 0.0384 | 0.0513 | 0.0297 | 50.36% | 0.0431 | 0.0281 | 0.0347 | 50.34% | 0.1561 | 0.1436 | 0.1176 | 50.54% | 0.1364 | 0.1051 | 0.1009 |
| Video-ChatGPT [45] | 51.20% | 0.1242 | 0.1587 | 0.0940 | 50.16% | 0.0580 | 0.0533 | 0.0453 | 50.47% | 0.0724 | 0.0436 | 0.0563 | 50.07% | 0.0357 | 0.0124 | 0.0274 |
| LLaVA-NeXT [36] | 52.85% | 0.1239 | 0.1625 | 0.0954 | 52.41% | 0.4021 | 0.3722 | 0.3052 | 51.84% | 0.1767 | 0.1655 | 0.1328 | 59.20% | 0.4116 | 0.3428 | 0.3261 |
| VideoLLaMA2 [14] | 52.73% | 0.2643 | 0.3271 | 0.1928 | 52.27% | 0.3608 | 0.2450 | 0.2696 | 50.78% | 0.1900 | 0.1561 | 0.1379 | 54.25% | 0.1656 | 0.1633 | 0.1210 |
| Qwen2-VL [66] | 56.50% | 0.4922 | 0.5291 | 0.3838 | 49.12% | 0.1681 | 0.4219 | 0.1233 | 52.08% | 0.1122 | 0.1335 | 0.0849 | 53.30% | 0.3111 | 0.2775 | 0.2306 |
| HyperIQA [56] | 68.30% | 0.7931 | 0.8093 | 0.5969 | 54.65% | 0.7426 | 0.6630 | 0.5407 | 53.32% | 0.2103 | 0.2100 | 0.1384 | 57.54% | 0.6226 | 0.6250 | 0.4432 |
| MUSIQ [26] | 66.46% | 0.7880 | 0.8044 | 0.5773 | 55.16% | 0.7199 | 0.6920 | 0.5034 | 52.85% | 0.5206 | 0.4846 | 0.3521 | 58.46% | 0.4125 | 0.4093 | 0.2844 |
| LIQE [83] | 63.86% | 0.8776 | 0.8691 | 0.7008 | 55.84% | 0.7935 | 0.7720 | 0.6084 | 49.02% | 0.5303 | 0.5840 | 0.3837 | 55.10% | 0.3862 | 0.3639 | 0.2640 |
| VSFA [35] | 46.43% | 0.3365 | 0.3421 | 0.2268 | 50.95% | 0.3317 | 0.3273 | 0.2202 | 51.46% | 0.1201 | 0.1362 | 0.0815 | 48.07% | 0.1024 | 0.1064 | 0.0666 |
| BVQA [33] | 29.98% | 0.4594 | 0.4701 | 0.3268 | 37.65% | 0.3704 | 0.3819 | 0.2507 | 55.08% | 0.4594 | 0.4701 | 0.3268 | 42.32% | 0.3720 | 0.3978 | 0.2559 |
| SimpleVQA [57] | 68.12% | 0.8355 | 0.6438 | 0.8489 | 54.14% | 0.7082 | 0.7008 | 0.4978 | 53.08% | 0.4671 | 0.3160 | 0.3994 | 58.20% | 0.4643 | 0.5440 | 0.3163 |
| FAST-VQA [70] | 70.64% | 0.8738 | 0.8644 | 0.6860 | 62.93% | 0.9036 | 0.9134 | 0.7166 | 54.34% | 0.5603 | 0.5703 | 0.3895 | 65.05% | 0.6875 | 0.6704 | 0.4978 |
| DOVER [71] | 72.92% | 0.8907 | 0.8895 | 0.7004 | 58.83% | 0.9063 | 0.9195 | 0.7187 | 53.16% | 0.5549 | 0.5489 | 0.3800 | 62.35% | 0.6783 | 0.6802 | 0.4969 |
| Q-Align [72] | 71.86% | 0.8516 | 0.8383 | 0.6641 | 57.95% | 0.8116 | 0.7025 | 0.6195 | 53.71% | 0.5655 | 0.5012 | 0.3950 | 62.91% | 0.5542 | 0.5647 | 0.3870 |
| AIGV-Assessor (Ours) | 79.83% | 0.9162 | 0.9190 | 0.7576 | 76.60% | 0.9232 | 0.9216 | 0.8038 | 60.30% | 0.6093 | 0.6082 | 0.4435 | 70.32% | 0.7500 | 0.7697 | 0.5591 |
| Improvement | +6.9% | +2.7% | +3.0% | +5.7% | +13.7% | +1.7% | +0.2% | +8.5% | +5.2% | +4.4% | +3.8% | +4.4% | +5.3% | +6.3% | +9.9% | +6.13% |
This is done through two projection modules that map the spatial and temporal visual features, respectively, into the language space. The mapped visual tokens are aligned with text tokens, enabling the model to query the video content in a multimodal fashion.

Feature Fusion and Quality Regression. We apply an LLM (InternVL2-8B [69]) to combine the visual tokens and user-provided quality prompts to perform the following tasks: (1) Quality level descriptions: the model generates a descriptive quality level evaluation of the input video, such as "The static quality of the video is (bad, poor, fair, good, excellent)." This initial categorization provides a preliminary classification of the video's quality, which is beneficial for subsequent quality regression tasks. By obtaining a rough quality level, the model can more accurately predict numerical scores in later evaluations. (2) Regression score output: the model uses the final hidden states from the LLM to perform a regression task, outputting numerical quality scores for the video from the four different dimensions.

# 4.2. Training and Fine-tuning Strategy

The training process of AIGV-Assessor follows a three-stage approach to ensure high-quality video assessment with quality level prediction, individual quality scoring, and pairwise preference comparison capabilities. This process includes: (1) training the spatial and temporal projectors to align visual and language features, (2) fine-tuning the vision encoder and LLM with LoRA [24], and training the quality regression module to generate accurate quality scores, (3) incorporating pairwise comparison training using the pair-comparison subset with a pairwise loss function for robust video quality comparison.

Spatiotemporal Projector Training. The first stage focuses on training the spatial and temporal projectors to extract meaningful spatiotemporal visual features and map them into the language space. Through this process, the LLM is able to produce the quality level descriptions, i.e., bad, poor, fair, good, excellent.

Quality Regression Fine-tuning. Once the model can generate coherent descriptions of the video quality level, the second stage focuses on fine-tuning the quality regression module. The goal here is to enable the model to output stable and precise numerical quality scores (MOS-like predictions). The quality regression module takes the last-hidden-state features from the LLM as input and generates quality scores from the four perspectives. The training objective uses an L1 loss function to minimize the difference between the predicted quality score and the ground-truth MOS.

Pairwise Comparison Fine-tuning. The third stage mainly focuses on integrating the pairwise comparison into the training pipeline. As shown in Figure 5(b), the two videos of an input pair share network weights within the same batch. We design a judge network inspired by LPIPS [82] to determine which video performs better. This network leverages learned features and evaluates the perceptual differences between the two videos, allowing more reliable quality assessments in video pair comparison.

Table 4. Performance comparisons on LGVQ [84] and FETV [43].
Table 4. Performance comparisons on LGVQ [84] and FETV [43].

| Aspects | Methods | LGVQ SRCC | LGVQ PLCC | LGVQ KRCC | FETV SRCC | FETV PLCC | FETV KRCC |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Spatial | MUSIQ [26] | 0.669 | 0.682 | 0.491 | 0.722 | 0.758 | 0.613 |
| Spatial | StairIQA [59] | 0.701 | 0.737 | 0.521 | 0.806 | 0.812 | 0.643 |
| Spatial | CLIP-IQA [62] | 0.684 | 0.709 | 0.502 | 0.741 | 0.767 | 0.619 |
| Spatial | LIQE [83] | 0.721 | 0.752 | 0.538 | 0.765 | 0.799 | 0.635 |
| Spatial | UGVQ [84] | 0.759 | 0.795 | 0.567 | 0.841 | 0.841 | 0.685 |
| Spatial | AIGV-Assessor (Ours) | 0.803 | 0.819 | 0.617 | 0.853 | 0.856 | 0.699 |
| Spatial | Improvement | +4.4% | +2.4% | +5.0% | +1.2% | +1.5% | +1.4% |
| Temporal | VSFA [35] | 0.841 | 0.857 | 0.643 | 0.839 | 0.859 | 0.705 |
| Temporal | SimpleVQA [57] | 0.857 | 0.867 | 0.659 | 0.852 | 0.862 | 0.726 |
| Temporal | FAST-VQA [70] | 0.849 | 0.843 | 0.647 | 0.842 | 0.847 | 0.714 |
| Temporal | DOVER [71] | 0.867 | 0.878 | 0.672 | 0.868 | 0.881 | 0.731 |
| Temporal | UGVQ [84] | 0.893 | 0.907 | 0.703 | 0.897 | 0.907 | 0.753 |
| Temporal | AIGV-Assessor (Ours) | 0.900 | 0.920 | 0.717 | 0.936 | 0.940 | 0.815 |
| Temporal | Improvement | +0.7% | +1.3% | +1.4% | +3.9% | +3.3% | +6.2% |
| Alignment | CLIPScore [22] | 0.446 | 0.453 | 0.301 | 0.607 | 0.633 | 0.498 |
| Alignment | BLIPScore [37] | 0.455 | 0.464 | 0.319 | 0.616 | 0.645 | 0.505 |
| Alignment | ImageReward [78] | 0.498 | 0.499 | 0.344 | 0.657 | 0.687 | 0.519 |
| Alignment | PickScore [28] | 0.501 | 0.515 | 0.353 | 0.669 | 0.708 | 0.533 |
| Alignment | HPSv2 [74] | 0.504 | 0.511 | 0.357 | 0.686 | 0.703 | 0.540 |
| Alignment | UGVQ [84] | 0.551 | 0.555 | 0.394 | 0.734 | 0.737 | 0.572 |
| Alignment | AIGV-Assessor (Ours) | 0.577 | 0.578 | 0.411 | 0.753 | 0.746 | 0.585 |
| Alignment | Improvement | +2.6% | +2.3% | +1.7% | +1.9% | +0.9% | +1.3% |
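Returning to the pairwise stage: the sketch below illustrates the weight-shared comparison setup with a hypothetical judge head trained under a cross-entropy objective. The LPIPS-inspired judge in the paper is more elaborate; the linear head, feature dimension, and record layout here are stand-in assumptions for illustration.

```python
import torch
import torch.nn as nn

class PairwiseJudge(nn.Module):
    """Toy judge head: given quality features of two videos extracted by a
    weight-shared backbone, predict which video of the pair is preferred."""
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)  # per-video preference score

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        # Logits over {A preferred, B preferred} from the two per-video scores.
        return torch.cat([self.score(feat_a), self.score(feat_b)], dim=1)

judge = PairwiseJudge()
fa, fb = torch.randn(4, 256), torch.randn(4, 256)     # features of each video pair
labels = torch.randint(0, 2, (4,))                    # 0: A wins, 1: B wins
loss = nn.CrossEntropyLoss()(judge(fa, fb), labels)   # stage-3 pairwise loss
```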
Loss Function. In the first stage, the spatial and temporal projectors are trained to align visual and language features using a language loss. The second stage refines the scoring ability of the vision encoder, LLM, and quality regression module with an L1 loss. The third stage incorporates pairwise comparison training with a cross-entropy loss to improve the model's performance on relative quality evaluation.

# 5. Experiments

# 5.1. Experiment Settings

Evaluation Datasets and Metrics. Our proposed method is validated on five AIGVQA datasets: AIGVQA-DB, LGVQ [84], FETV [43], T2VQA [31], and GAIA [13]. To evaluate the correlation between the predicted scores and the ground-truth MOSs, we utilize three evaluation criteria: Spearman Rank Correlation Coefficient (SRCC), Pearson Linear Correlation Coefficient (PLCC), and Kendall's Rank Correlation Coefficient (KRCC). For pair comparison, we adopt the comparison accuracy as the metric.

Reference Algorithms. To assess the performance of our proposed method, we select state-of-the-art evaluation metrics for comparison, which can be classified into five groups: (1) handcrafted I/VQA models, including NIQE [49], BRISQUE [48], QAC [80], BMPRI [47], HOSA [77], BPRI [46], HIGRADE [32], etc.; (2) action-related evaluation models, including V-Dynamic [25] and V-Smoothness [25], which are proposed in VBench [25]; (3) vision-language pre-training models, including CLIPScore [22], BLIPScore [37], AestheticScore [53], ImageReward [78], and UMTScore [43]; (4) LLM-based models, including Video-LLaVA [40], Video-ChatGPT [45], LLaVA-NeXT [36], VideoLLaMA2 [14], and Qwen2-VL [66]; (5) deep learning-based I/VQA models, including HyperIQA [56], MUSIQ [26], LIQE [83], VSFA [35], BVQA [33], SimpleVQA [58], FAST-VQA [70], DOVER [71], and Q-Align [72].
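The three correlation criteria can be computed directly with SciPy. A small sketch with toy arrays follows; note that VQA practice often fits a logistic mapping to the predictions before computing PLCC, which is omitted here for brevity.

```python
import numpy as np
from scipy import stats

def correlation_metrics(pred: np.ndarray, mos: np.ndarray) -> dict:
    """SRCC, PLCC, and KRCC between predicted scores and ground-truth MOSs."""
    srcc, _ = stats.spearmanr(pred, mos)
    plcc, _ = stats.pearsonr(pred, mos)
    krcc, _ = stats.kendalltau(pred, mos)
    return {"SRCC": srcc, "PLCC": plcc, "KRCC": krcc}

pred = np.array([3.1, 2.4, 4.8, 3.9, 1.7])  # toy predicted scores
mos = np.array([3.0, 2.1, 4.5, 4.1, 1.9])   # toy ground-truth MOSs
print(correlation_metrics(pred, mos))
```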
Table 5. Performance comparisons on T2VQA-DB [31].

| Aspects | Methods | T2VQA-DB SRCC | T2VQA-DB PLCC | T2VQA-DB KRCC | Sora Testing SRCC | Sora Testing PLCC | Sora Testing KRCC |
| --- | --- | --- | --- | --- | --- | --- | --- |
| zero-shot | CLIPScore [22] | 0.1047 | 0.1277 | 0.0702 | 0.2116 | 0.1538 | 0.1406 |
| zero-shot | BLIPScore [37] | 0.1659 | 0.1860 | 0.1112 | 0.2116 | 0.1038 | 0.1515 |
| zero-shot | ImageReward [78] | 0.1875 | 0.2121 | 0.1266 | 0.0992 | 0.0415 | 0.0748 |
| zero-shot | UMTScore [43] | 0.0676 | 0.0721 | 0.0453 | 0.2594 | 0.0840 | 0.1680 |
| finetuned | SimpleVQA [57] | 0.6275 | 0.6388 | 0.4466 | 0.0340 | 0.2344 | 0.0237 |
| finetuned | BVQA [33] | 0.7390 | 0.7486 | 0.5487 | 0.4235 | 0.2489 | 0.2635 |
| finetuned | FAST-VQA [70] | 0.7173 | 0.7295 | 0.5303 | 0.4301 | 0.2369 | 0.2939 |
| finetuned | DOVER [71] | 0.7609 | 0.7693 | 0.5704 | 0.4421 | 0.2689 | 0.2757 |
| finetuned | T2VQA [31] | 0.7965 | 0.8066 | 0.6058 | 0.6485 | 0.3124 | 0.4874 |
| finetuned | AIGV-Assessor (Ours) | 0.8131 | 0.8222 | 0.6364 | 0.6612 | 0.3318 | 0.5075 |
| finetuned | Improvement | +1.7% | +1.6% | +3.1% | +1.3% | +1.9% | +2.0% |
Table 6. Performance comparisons on GAIA [13].

| Methods / Metrics | Subject SRCC | Subject PLCC | Completeness SRCC | Completeness PLCC | Interaction SRCC | Interaction PLCC |
| --- | --- | --- | --- | --- | --- | --- |
| V-Smoothness [25] | 0.2402 | 0.1913 | 0.1474 | 0.1625 | 0.1741 | 0.1693 |
| V-Dynamic [25] | 0.1285 | 0.0831 | 0.0903 | 0.0682 | 0.1141 | 0.0758 |
| Action-Score [42] | 0.2023 | 0.1823 | 0.2867 | 0.2623 | 0.2689 | 0.2432 |
| Flow-Score [42] | 0.1471 | 0.1541 | 0.0816 | 0.1273 | 0.1041 | 0.1309 |
| CLIPScore [22] | 0.3398 | 0.3330 | 0.3944 | 0.3871 | 0.3875 | 0.3821 |
| BLIPScore [37] | 0.3453 | 0.3386 | 0.4174 | 0.4082 | 0.4044 | 0.3994 |
| LLaVAScore [41] | 0.3484 | 0.3436 | 0.4189 | 0.4133 | 0.4077 | 0.4025 |
| TLVQM [30] | 0.5037 | 0.5137 | 0.4127 | 0.4158 | 0.4079 | 0.4093 |
| VIDEVAL [60] | 0.5237 | 0.5446 | 0.4283 | 0.4375 | 0.4121 | 0.4234 |
| VSFA [35] | 0.5594 | 0.5762 | 0.4940 | 0.5017 | 0.4709 | 0.4811 |
| BVQA [33] | 0.5702 | 0.5888 | 0.4876 | 0.4946 | 0.4761 | 0.4825 |
| SimpleVQA [58] | 0.5920 | 0.5974 | 0.4981 | 0.5078 | 0.4843 | 0.4971 |
| FAST-VQA [70] | 0.6015 | 0.6092 | 0.5157 | 0.5215 | 0.5154 | 0.5216 |
| DOVER [71] | 0.6173 | 0.6301 | 0.5198 | 0.5323 | 0.5164 | 0.5278 |
| AIGV-Assessor (Ours) | 0.6842 | 0.6897 | 0.6635 | 0.6694 | 0.6329 | 0.6340 |
| Improvement | +6.7% | +6.0% | +14.4% | +13.7% | +11.65% | +10.6% |
Training Settings. Traditional handcrafted models are directly evaluated on the corresponding databases, and the average score over all frames is calculated. For vision-language pre-training and LLM-based models, we load the pre-trained weights for inference. CLIPScore [22], BLIPScore [37], and other vision-language pre-training models are computed directly as the average cosine similarity between the text and each video frame. SimpleVQA [58], BVQA [33], FAST-VQA [70], DOVER [71], and Q-Align [72] are fine-tuned on every test dataset. For deep learning-based IQA and VQA models, all experiments for each method are retrained on each dimension using the same 4:1 training/testing split ratio as the previous literature. All results are averaged over ten random splits.

# 5.2. Results and Analysis

Table 3 presents the pairwise win rates and the correlations between predicted scores and human ground truths. The results indicate that handcrafted methods consistently underperform across all four evaluation perspectives.

![](images/3a323b11ba3b97928a16f87d4016f9fd2550b9e4d10c010126e4ee85c42f694d.jpg)
Figure 6. Comparison of win rates of different generation models across four dimensions evaluated by different VQA methods, demonstrating that AIGV-Assessor's win-rate evaluations align better with the Ground Truth (GT) than those of other methods.
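For reference, a per-model win rate of the kind plotted in Figure 6 can be derived from pairwise preference records. The record format below (winner, loser) is an assumption for illustration, not the paper's data schema.

```python
def win_rate(preferences: list[tuple[str, str]], model: str) -> float:
    """Fraction of pairwise comparisons involving `model` that it wins.
    `preferences` holds (winner, loser) tuples."""
    involved = [p for p in preferences if model in p]
    wins = sum(1 for winner, _ in involved if winner == model)
    return wins / len(involved) if involved else 0.0

prefs = [("A", "B"), ("A", "C"), ("B", "A")]  # toy preference records
print(win_rate(prefs, "A"))                   # 2 wins out of 3 comparisons
```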
Table 7. Ablation study of the proposed AIGV-Assessor method.

| No. | spatial | temporal | quality level | LoRA finetuning | Static Quality (SRCC/PLCC/KRCC) | Temporal Smoothness (SRCC/PLCC/KRCC) | Dynamic Degree (SRCC/PLCC/KRCC) | T2V Correspondence (SRCC/PLCC/KRCC) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| (1) | | | | | 0.864/0.866/0.726 | 0.870/0.868/0.727 | 0.556/0.572/0.432 | 0.616/0.620/0.492 |
| (2) | | | | | 0.874/0.876/0.723 | 0.875/0.876/0.736 | 0.558/0.573/0.431 | 0.723/0.734/0.533 |
| (3) | | | | | 0.887/0.884/0.722 | 0.881/0.883/0.706 | 0.562/0.575/0.433 | 0.739/0.758/0.544 |
| (4) | | | | | 0.887/0.888/0.753 | 0.917/0.910/0.796 | 0.569/0.536/0.438 | 0.688/0.673/0.557 |
| (5) | | | | | 0.905/0.908/0.754 | 0.919/0.917/0.799 | 0.589/0.587/0.441 | 0.742/0.763/0.549 |
| (6) | | | | | 0.916/0.919/0.758 | 0.923/0.922/0.804 | 0.609/0.608/0.444 | 0.750/0.770/0.559 |
Vision-language pre-training methods such as CLIPScore [22] and BLIPScore [37] demonstrate moderate performance but are still surpassed by more specialized, fine-tuned VQA models. Specifically, deep learning-based models such as FAST-VQA [70] and DOVER [71] achieve more competitive performance after fine-tuning, yet they remain far from satisfactory. Notably, most VQA models perform better on quality evaluation than on text-video correspondence: since they do not take the text prompts used in video generation as input, it is difficult for them to extract text-video relation features from AI-generated videos, which inevitably degrades performance. Finally, our exploration of recent LMMs on our database shows that current LMMs can produce meaningful evaluations, which can motivate future work to further explore the use of LMMs for AIGV assessment.

The proposed AIGV-Assessor achieves the best performance among all competitors on both MOS prediction and pair ranking tasks across all four dimensions. To further validate the effectiveness and generalizability of our model, we also evaluate it on four other AIGVQA datasets [13, 31, 43, 84]. From Tables 4-6, we observe that AIGV-Assessor consistently achieves the best performance across these datasets. As shown in Figure 6, AIGV-Assessor achieves the highest area of overlap with the Ground Truth (GT), indicating that it can reliably benchmark T2V models and outperforms other assessment models in discerning quality differences in AI-generated videos.

# 5.3. Ablation Study

We conduct ablation experiments to verify the effectiveness of the main components of AIGV-Assessor, including the spatial feature, the temporal feature, the quality level, and the LoRA finetuning strategy. Additionally, we assess how each feature contributes to the performance across different quality dimensions. The results of these experiments are summarized in Table 7. Experiments (1), (2), and (3) validate the effectiveness of the quality regression module and the LoRA finetuning strategy, confirming that fine-tuning and quality regression significantly enhance model performance over regressing only the generated text outputs of the LLM. The addition of temporal features, as seen in Experiments (4), (5), and (6), significantly improves model performance. Experiment (6), which integrates all components, yields the best overall performance, showing that the combination of spatial and temporal features, quality level prediction, and LoRA finetuning provides the most robust and accurate AIGV assessment.

# 6. Conclusion

In this paper, we study the human visual preference evaluation problem for AIGVs. We first construct AIGVQA-DB, which includes 36,576 videos generated from 1,048 diverse text prompts, with MOSs and pair comparisons evaluated from four perspectives. Our detailed manual evaluations reflect different aspects of human visual preferences for AIGVs and reveal critical insights into the strengths and weaknesses of various text-to-video models. Based on the database, we evaluate the performance of state-of-the-art quality evaluation models and establish a new benchmark, revealing their limitations in measuring the perceptual preference of AIGVs. Finally, we propose AIGV-Assessor, a novel VQA model that leverages the capabilities of LMMs to give quality levels, predict quality scores, and compare preferences along four dimensions.
Extensive experiments demonstrate that AIGV-Assessor achieves state-of-the-art performance on both AIGVQA-DB and other AIGVQA benchmarks, validating its robustness in understanding and evaluating AI-generated videos.

# References

[1] Hotshot-XL. https://github.com/hotshotco/hotshot-xl, 2023. 2, 3
[2] Floor33. https://discord.qq/EuB9KT6H, 2023. 2, 3
[3] Genmo. https://www.genmo.ai, 2024. 2, 3
[4] Gen2. https://research.runwayml.com/gen2, 2024. 2, 3
[5] Moonvalley. https://moonvalley.ai, 2024. 2, 3
[6] Morph Studio. https://www.morphstudio.com, 2024. 2, 3, 5
[7] Sora. https://openai.com/research/video-generation-models-as-world-simulators, 2024. 2, 3, 5
[8] Max Bain, Arsha Nagrani, Gül Varol, and Andrew Zisserman. Frozen in time: A joint video and image encoder for end-to-end retrieval. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 1728-1738, 2021. 2, 3
[9] Andreas Blattmann, Tim Dockhorn, Sumith Kulal, Daniel Mendelevitch, Maciej Kilian, Dominik Lorenz, Yam Levi, Zion English, Vikram Voleti, Adam Letts, et al. Stable video diffusion: Scaling latent video diffusion models to large datasets. arXiv preprint arXiv:2311.15127, 2023. 2, 5
[10] Yuqin Cao, Xiongkuo Min, Yixuan Gao, Wei Sun, Weisi Lin, and Guangtao Zhai. Unqa: Unified no-reference quality assessment for audio, image, video, and audio-visual content. arXiv preprint arXiv:2407.19704, 2024. 1
[11] Yuqin Cao, Xiongkuo Min, Yixuan Gao, Wei Sun, and Guangtao Zhai. Agav-rater: Adapting large multimodal model for ai-generated audio-visual quality assessment. arXiv preprint arXiv:2501.18314, 2025. 1
[12] Haoxin Chen, Menghan Xia, Yingqing He, Yong Zhang, Xiaodong Cun, Shaoshu Yang, Jinbo Xing, Yaofang Liu, Qifeng Chen, Xintao Wang, Chao Weng, and Ying Shan. Videocrafter1: Open diffusion models for high-quality video generation. arXiv preprint arXiv:2310.19512, 2023. 1, 2
[13] Zijian Chen, Wei Sun, Yuan Tian, Jun Jia, Zicheng Zhang, Jiarui Wang, Ru Huang, Xiongkuo Min, Guangtao Zhai, and Wenjun Zhang. Gaia: Rethinking action quality assessment for ai-generated videos. arXiv preprint arXiv:2406.06087, 2024. 1, 3, 7, 8
[14] Zesen Cheng, Sicong Leng, Hang Zhang, Yifei Xin, Xin Li, Guanzheng Chen, Yongxin Zhu, Wenqi Zhang, Ziyang Luo, Deli Zhao, and Lidong Bing. Videollama 2: Advancing spatial-temporal modeling and audio understanding in video-llms. arXiv preprint arXiv:2406.07476, 2024. 6, 7
[15] Iya Chivileva, Philip Lynch, Tomas E Ward, and Alan F Smeaton. Measuring the quality of text-to-video model outputs: Metrics and dataset. arXiv preprint arXiv:2309.08009, 2023. 3
[16] Ming Ding, Wendi Zheng, Wenyi Hong, and Jie Tang. Cogview2: Faster and better text-to-image generation via hierarchical transformers. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), pages 16890-16902, 2022. 3
[17] Huiyu Duan, Xiongkuo Min, Yucheng Zhu, Guangtao Zhai, Xiaokang Yang, and Patrick Le Callet. Confusing image quality assessment: Toward better augmented reality experience. IEEE Transactions on Image Processing (TIP), 31:7206-7221, 2022. 4
[18] Huiyu Duan, Qiang Hu, Jiarui Wang, Liu Yang, Zitong Xu, Lu Liu, Xiongkuo Min, Chunlei Cai, Tianxiao Ye, Xiaoyun Zhang, et al. Finevq: Fine-grained user generated content video quality assessment. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2025. 1
[19] Christoph Feichtenhofer, Haoqi Fan, Jitendra Malik, and Kaiming He. Slowfast networks for video recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 6202-6211, 2019. 5
[20] Yuwei Guo, Ceyuan Yang, Anyi Rao, Yaohui Wang, Yu Qiao, Dahua Lin, and Bo Dai. Animatediff: Animate your personalized text-to-image diffusion models without specific tuning. arXiv preprint arXiv:2307.04725, 2023. 2
[21] Yingqing He, Tianyu Yang, Yong Zhang, Ying Shan, and Qifeng Chen. Latent video diffusion models for high-fidelity video generation with arbitrary lengths. arXiv preprint arXiv:2211.13221, 2022. 2, 3, 5
[22] Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Le Bras, and Yejin Choi. Clipscore: A reference-free evaluation metric for image captioning. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7514-7528, 2021. 1, 6, 7, 8
[23] Wenyi Hong, Ming Ding, Wendi Zheng, Xinghan Liu, and Jie Tang. Cogvideo: Large-scale pretraining for text-to-video generation via transformers. arXiv preprint arXiv:2205.15868, 2022. 1, 2, 3
[24] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021. 6
[25] Ziqi Huang, Yinan He, Jiashuo Yu, Fan Zhang, Chenyang Si, Yuming Jiang, Yuanhan Zhang, Tianxing Wu, Qingyang Jin, Nattapol Chanpaisit, Yaohui Wang, Xinyuan Chen, Limin Wang, Dahua Lin, Yu Qiao, and Ziwei Liu. Vbench: Comprehensive benchmark suite for video generative models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024. 1, 6, 7
[26] Junjie Ke, Qifei Wang, Yilin Wang, Peyman Milanfar, and Feng Yang. Musiq: Multi-scale image quality transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 5128-5137, 2021. 6, 7
[27] Levon Khachatryan, Andranik Movsisyan, Vahram Tadevosyan, Roberto Henschel, Zhangyang Wang, Shant Navasardyan, and Humphrey Shi. Text2video-zero: Text-to-image diffusion models are zero-shot video generators. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 15954-15964, 2023. 1, 2, 3
[28] Yuval Kirstain, Adam Poliak, Uriel Singer, and Omer Levy. Pick-a-pic: An open dataset of user preferences for text-to-image generation. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), 2023. 7
[29] Yuval Kirstain, Adam Polyak, Uriel Singer, Shahbuland Matiana, Joe Penna, and Omer Levy. Pick-a-pic: An open dataset of user preferences for text-to-image generation. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), pages 36652-36663, 2023. 3
[30] Jari Korhonen. Two-level approach for no-reference consumer video quality assessment. IEEE Transactions on Image Processing (TIP), 28(12):5923-5938, 2019. 7
[31] Tengchuan Kou, Xiaohong Liu, Zicheng Zhang, Chunyi Li, Haoning Wu, Xiongkuo Min, Guangtao Zhai, and Ning Liu. Subjective-aligned dataset and metric for text-to-video quality assessment. arXiv preprint arXiv:2403.11956, 2024. 1, 3, 7, 8
[32] Debarati Kundu, Deepti Ghadiyaram, Alan C Bovik, and Brian L Evans. Large-scale crowdsourced study for tone-mapped HDR pictures. IEEE Transactions on Image Processing (TIP), pages 4725-4740, 2017. 7
[33] Bowen Li, Weixia Zhang, Meng Tian, Guangtao Zhai, and Xianpei Wang. Blindly assess quality of in-the-wild videos via quality-aware pre-training and motion perception. IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), 32(9):5944-5958, 2022. 1, 6, 7
[34] Chunyi Li, Zicheng Zhang, Haoning Wu, Wei Sun, Xiongkuo Min, Xiaohong Liu, Guangtao Zhai, and Weisi Lin. Agiqa-3k: An open database for ai-generated image quality assessment. IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), 2023. 3
[35] Dingquan Li, Tingting Jiang, and Ming Jiang. Quality assessment of in-the-wild videos. In Proceedings of the ACM International Conference on Multimedia (ACMMM), 2019. 1, 6, 7
[36] Feng Li, Renrui Zhang, Hao Zhang, Yuanhan Zhang, Bo Li, Wei Li, Zejun Ma, and Chunyuan Li. Llava next-interleave: Tackling multi-image, video, and 3d in large multimodal models. arXiv preprint arXiv:2407.07895, 2024. 6, 7
[37] Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In Proceedings of the International Conference on Machine Learning (ICML), pages 12888-12900. PMLR, 2022. 1, 6, 7, 8
[38] Yuncheng Li, Yale Song, Liangliang Cao, Joel Tetreault, Larry Goldberg, Alejandro Jaimes, and Jiebo Luo. Tgif: A new dataset and benchmark on animated gif description. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 4641-4650, 2016. 2, 3
[39] Youwei Liang, Junfeng He, Gang Li, Peizhao Li, Arseniy Klimovskiy, Nicholas Carolan, Jiao Sun, Jordi Pont-Tuset, Sarah Young, Feng Yang, et al. Rich human feedback for text-to-image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 19401-19411, 2024. 3
[40] Bin Lin, Bin Zhu, Yang Ye, Munan Ning, Peng Jin, and Li Yuan. Video-llava: Learning united visual representation by alignment before projection. arXiv preprint arXiv:2311.10122, 2023. 6, 7
[41] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), 2024. 7
[42] Yaofang Liu, Xiaodong Cun, Xuebo Liu, Xintao Wang, Yong Zhang, Haoxin Chen, Yang Liu, Tieyong Zeng, Raymond Chan, and Ying Shan. Evalcrafter: Benchmarking and evaluating large video generation models. arXiv preprint arXiv:2310.11440, 2023. 3, 7
[43] Yuanxin Liu, Lei Li, Shuhuai Ren, Rundong Gao, Shicheng Li, Sishuo Chen, Xu Sun, and Lu Hou. Fetv: A benchmark for fine-grained evaluation of open-domain text-to-video generation. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), 2024. 1, 2, 3, 6, 7, 8
[44] Zhengxiong Luo, Dayou Chen, Yingya Zhang, Yan Huang, Liang Wang, Yujun Shen, Deli Zhao, Jingren Zhou, and Tieniu Tan. Videofusion: Decomposed diffusion models for high-quality video generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10209-10218, 2023. 1, 2, 3
[45] Muhammad Maaz, Hanoona Rasheed, Salman Khan, and Fahad Shahbaz Khan. Video-chatgpt: Towards detailed video understanding via large vision and language models. In Proceedings of the Association for Computational Linguistics (ACL), 2024. 6, 7
[46] Xiongkuo Min, Ke Gu, Guangtao Zhai, Jing Liu, Xiaokang Yang, and Chang Wen Chen. Blind quality assessment based on pseudo-reference image. IEEE Transactions on Multimedia (TMM), pages 2049-2062, 2017. 6, 7
[47] Xiongkuo Min, Guangtao Zhai, Ke Gu, Yutao Liu, and Xiaokang Yang. Blind image quality estimation via distortion aggravation. IEEE Transactions on Broadcasting (TBC), pages 508-517, 2018. 6, 7
[48] Anish Mittal, Anush Krishna Moorthy, and Alan Conrad Bovik. No-reference image quality assessment in the spatial domain. IEEE Transactions on Image Processing (TIP), pages 4695-4708, 2012. 6, 7
[49] Anish Mittal, Rajiv Soundararajan, and Alan C Bovik. Making a "completely blind" image quality analyzer. IEEE Signal Processing Letters (SPL), pages 209-212, 2012. 6, 7
[50] Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125, 1(2):3, 2022. 3
[51] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10684-10695, 2022. 3
[52] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), 2016. 1
[53] Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. Laion-5b: An open large-scale dataset for training next generation image-text models. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), pages 25278-25294, 2022. 1, 6, 7
[54] BT Series. Methodology for the subjective assessment of the quality of television pictures. Recommendation ITU-R BT.500-13, 2012. 4
[55] Uriel Singer, Adam Polyak, Thomas Hayes, Xi Yin, Jie An, Songyang Zhang, Qiyuan Hu, Harry Yang, Oron Ashual, Oran Gafni, et al. Make-a-video: Text-to-video generation without text-video data. arXiv preprint arXiv:2209.14792, 2022. 1, 2, 3
[56] Shaolin Su, Qingsen Yan, Yu Zhu, Cheng Zhang, Xin Ge, Jinqiu Sun, and Yanning Zhang. Blindly assess image quality in the wild guided by a self-adaptive hyper network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 6, 7
[57] Wei Sun, Xiongkuo Min, Wei Lu, and Guangtao Zhai. A deep learning based no-reference quality assessment model for UGC videos. In Proceedings of the 30th ACM International Conference on Multimedia (ACMMM), pages 856-865, 2022. 1, 6, 7
[58] Wei Sun, Xiongkuo Min, Wei Lu, and Guangtao Zhai. A deep learning based no-reference quality assessment model for UGC videos. In Proceedings of the ACM International Conference on Multimedia (ACMMM), pages 856-865, 2022. 7
[59] Wei Sun, Xiongkuo Min, Danyang Tu, Siwei Ma, and Guangtao Zhai. Blind quality assessment for in-the-wild images via hierarchical feature fusion and iterative mixed database training. IEEE Journal of Selected Topics in Signal Processing (JSTSP), 2023. 7
[60] Zhengzhong Tu, Yilin Wang, Neil Birkbeck, Balu Adsumilli, and Alan C Bovik. Ugc-vqa: Benchmarking blind video quality assessment for user generated content. IEEE Transactions on Image Processing (TIP), 30:4449-4464, 2021. 7
[61] Thomas Unterthiner, Sjoerd Van Steenkiste, Karol Kurach, Raphael Marinier, Marcin Michalski, and Sylvain Gelly. Towards accurate generative models of video: A new metric & challenges. arXiv preprint arXiv:1812.01717, 2018. 1
[62] Jianyi Wang, Kelvin C.K. Chan, and Chen Change Loy. Exploring clip for assessing the look and feel of images. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), pages 2555-2563, 2023. 7
[63] Jiarui Wang, Huiyu Duan, Jing Liu, Shi Chen, Xiongkuo Min, and Guangtao Zhai. Aigciqa2023: A large-scale image quality assessment database for ai generated images: from the perspectives of quality, authenticity and correspondence. In CAAI International Conference on Artificial Intelligence (CICAI), pages 46-57. Springer, 2023. 3
[64] Jiuniu Wang, Hangjie Yuan, Dayou Chen, Yingya Zhang, Xiang Wang, and Shiwei Zhang. Modelscope text-to-video technical report. arXiv preprint arXiv:2308.06571, 2023. 1, 2, 3
[65] Jiarui Wang, Huiyu Duan, Guangtao Zhai, and Xiongkuo Min. Quality assessment for ai generated images with instruction tuning. arXiv preprint arXiv:2405.07346, 2024. 1
[66] Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Yang Fan, Kai Dang, Mengfei Du, Xuancheng Ren, Rui Men, Dayiheng Liu, Chang Zhou, Jingren Zhou, and Junyang Lin. Qwen2-vl: Enhancing vision-language model's perception of the world at any resolution. arXiv preprint arXiv:2409.12191, 2024. 6, 7
[67] Yaohui Wang, Xinyuan Chen, Xin Ma, Shangchen Zhou, Ziqi Huang, Yi Wang, Ceyuan Yang, Yinan He, Jiashuo Yu, Peiqing Yang, et al. Lavie: High-quality video generation with cascaded latent diffusion models. arXiv preprint arXiv:2309.15103, 2023. 2, 3
[68] Yi Wang, Yinan He, Yizhuo Li, Kunchang Li, Jiashuo Yu, Xin Ma, Xinhao Li, Guo Chen, Xinyuan Chen, Yaohui Wang, et al. Internvid: A large-scale video-text dataset for multimodal understanding and generation. In Proceedings of the International Conference on Learning Representations (ICLR), 2023. 2, 3
[69] Zirui Wang, Mengzhou Xia, Luxi He, Howard Chen, Yitao Liu, Richard Zhu, Kaiqu Liang, Xindi Wu, Haotian Liu, Sadhika Malladi, Alexis Chevalier, Sanjeev Arora, and Danqi Chen. Charxiv: Charting gaps in realistic chart understanding in multimodal llms. arXiv preprint arXiv:2406.18521, 2024. 5, 6
[70] Haoning Wu, Chaofeng Chen, Jingwen Hou, Liang Liao, Annan Wang, Wenxiu Sun, Qiong Yan, and Weisi Lin. Fast-vqa: Efficient end-to-end video quality assessment with fragment sampling. In Proceedings of the European Conference on Computer Vision (ECCV), pages 538-554. Springer, 2022. 1, 6, 7, 8
[71] Haoning Wu, Erli Zhang, Liang Liao, Chaofeng Chen, Jingwen Hou, Annan Wang, Wenxiu Sun, Qiong Yan, and Weisi Lin. Exploring video quality assessment on user generated contents from aesthetic and technical perspectives. In Proceedings of the International Conference on Computer Vision (ICCV), 2023. 1, 6, 7, 8
[72] Haoning Wu, Zicheng Zhang, Weixia Zhang, Chaofeng Chen, Liang Liao, Chunyi Li, Yixuan Gao, Annan Wang, Erli Zhang, Wenxiu Sun, et al. Q-align: Teaching lmms for visual scoring via discrete text-defined levels. arXiv preprint arXiv:2312.17090, 2023. 6, 7
[73] Jay Zhangjie Wu, Yixiao Ge, Xintao Wang, Stan Weixian Lei, Yuchao Gu, Yufei Shi, Wynne Hsu, Ying Shan, Xiaohu Qie, and Mike Zheng Shou. Tune-a-video: One-shot tuning of image diffusion models for text-to-video generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 7623-7633, 2023. 1, 2, 3
[74] Xiaoshi Wu, Yiming Hao, Keqiang Sun, Yixiong Chen, Feng Zhu, Rui Zhao, and Hongsheng Li. Human preference score v2: A solid benchmark for evaluating human preferences of text-to-image synthesis. arXiv preprint arXiv:2306.09341, 2023. 7
[75] Xiaoshi Wu, Keqiang Sun, Feng Zhu, Rui Zhao, and Hongsheng Li. Better aligning text-to-image models with human preference. arXiv preprint arXiv:2303.14420, 1(3), 2023. 3
[76] Jun Xu, Tao Mei, Ting Yao, and Yong Rui. Msr-vtt: A large video description dataset for bridging video and language. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2016. 2, 3
[77] Jingtao Xu, Peng Ye, Qiaohong Li, Haiqing Du, Yong Liu, and David Doermann. Blind image quality assessment based on high order statistics aggregation. IEEE Transactions on Image Processing (TIP), pages 4444-4457, 2016. 6, 7
[78] Jiazheng Xu, Xiao Liu, Yuchen Wu, Yuxuan Tong, Qinkai Li, Ming Ding, Jie Tang, and Yuxiao Dong. Imagereward: Learning and evaluating human preferences for text-to-image generation. arXiv preprint arXiv:2304.05977, 2023. 6, 7
[79] Zitong Xu, Huiyu Duan, Guangji Ma, Liu Yang, Jiarui Wang, Qingbo Wu, Xiongkuo Min, Guangtao Zhai, and Patrick Le Callet. Harmonyiqa: Pioneering benchmark and model for image harmonization quality assessment. In Proceedings of the International Conference on Multimedia and Expo (ICME), 2025. 1
[80] Wufeng Xue, Lei Zhang, and Xuanqin Mou. Learning without human scores for blind image quality assessment. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 995-1002, 2013. 6, 7
[81] Wilson Yan, Yunzhi Zhang, Pieter Abbeel, and Aravind Srinivas. Videogpt: Video generation using vq-vae and transformers. arXiv preprint arXiv:2104.10157, 2021. 1
[82] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 586-595, 2018. 6
[83] Weixia Zhang, Guangtao Zhai, Ying Wei, Xiaokang Yang, and Kede Ma. Blind image quality assessment via vision-language correspondence: A multitask learning perspective. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023. 6, 7
[84] Zhichao Zhang, Xinyue Li, Wei Sun, Jun Jia, Xiongkuo Min, Zicheng Zhang, Chunyi Li, Zijian Chen, Puyi Wang, Zhongpeng Ji, et al. Benchmarking aigc video quality assessment: A dataset and unified model. arXiv preprint arXiv:2407.21408, 2024. 1, 3, 7, 8
[85] Tianwei Zhou, Songbai Tan, Wei Zhou, Yu Luo, Yuan-Gen Wang, and Guanghui Yue. Adaptive mixed-scale feature fusion network for blind ai-generated image quality assessment. IEEE Transactions on Broadcasting (TBC), 2024.
1 \ No newline at end of file diff --git a/2025/AIGV-Assessor_ Benchmarking and Evaluating the Perceptual Quality of Text-to-Video Generation with LMM/images.zip b/2025/AIGV-Assessor_ Benchmarking and Evaluating the Perceptual Quality of Text-to-Video Generation with LMM/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..4eab3b83b7d0ce4e150759756f2905fd5e934718 --- /dev/null +++ b/2025/AIGV-Assessor_ Benchmarking and Evaluating the Perceptual Quality of Text-to-Video Generation with LMM/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ffad709b0ee6af8c1ff64f95cd53ade14b0413a49b95e78f0e1642daa1bda669 +size 1060622 diff --git a/2025/AIGV-Assessor_ Benchmarking and Evaluating the Perceptual Quality of Text-to-Video Generation with LMM/layout.json b/2025/AIGV-Assessor_ Benchmarking and Evaluating the Perceptual Quality of Text-to-Video Generation with LMM/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..843b03104b5655e0259bd89a9844b7e8b3e156db --- /dev/null +++ b/2025/AIGV-Assessor_ Benchmarking and Evaluating the Perceptual Quality of Text-to-Video Generation with LMM/layout.json @@ -0,0 +1,7866 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 79, + 102, + 533, + 138 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 79, + 102, + 533, + 138 + ], + "spans": [ + { + "bbox": [ + 79, + 102, + 533, + 138 + ], + "type": "text", + "content": "AIGV-Assessor: Benchmarking and Evaluating the Perceptual Quality of Text-to-Video Generation with LMM" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 61, + 160, + 553, + 219 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 160, + 553, + 219 + ], + "spans": [ + { + "bbox": [ + 61, + 160, + 553, + 219 + ], + "type": "text", + "content": "Jiarui Wang" + }, + { + "bbox": [ + 61, + 160, + 553, + 219 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 61, + 160, + 553, + 219 + ], + "type": "text", + "content": ", Huiyu Duan" + }, + { + "bbox": [ + 61, + 160, + 553, + 219 + ], + "type": "inline_equation", + "content": "^{1,2}" + }, + { + "bbox": [ + 61, + 160, + 553, + 219 + ], + "type": "text", + "content": ", Guangtao Zhai" + }, + { + "bbox": [ + 61, + 160, + 553, + 219 + ], + "type": "inline_equation", + "content": "^{1,2}" + }, + { + "bbox": [ + 61, + 160, + 553, + 219 + ], + "type": "text", + "content": ", Juntong Wang" + }, + { + "bbox": [ + 61, + 160, + 553, + 219 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 61, + 160, + 553, + 219 + ], + "type": "text", + "content": ", Xiongkuo Min" + }, + { + "bbox": [ + 61, + 160, + 553, + 219 + ], + "type": "inline_equation", + "content": "^{1*}" + }, + { + "bbox": [ + 61, + 160, + 553, + 219 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 61, + 160, + 553, + 219 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 61, + 160, + 553, + 219 + ], + "type": "text", + "content": "Institute of Image Communication and Network Engineering, " + }, + { + "bbox": [ + 61, + 160, + 553, + 219 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 61, + 160, + 553, + 219 + ], + "type": "text", + "content": "MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai, China" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 151, + 244, + 200, + 258 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 151, + 244, + 
200, + 258 + ], + "spans": [ + { + "bbox": [ + 151, + 244, + 200, + 258 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 54, + 270, + 297, + 571 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 270, + 297, + 571 + ], + "spans": [ + { + "bbox": [ + 54, + 270, + 297, + 571 + ], + "type": "text", + "content": "The rapid advancement of large multimodal models (LMMs) has led to the rapid expansion of artificial intelligence generated videos (AIGVs), which highlights the pressing need for effective video quality assessment (VQA) models designed specifically for AIGVs. Current VQA models generally fall short in accurately assessing the perceptual quality of AIGVs due to the presence of unique distortions, such as unrealistic objects, unnatural movements, or inconsistent visual elements. To address this challenge, we first present AIGVQA-DB, a large-scale dataset comprising 36,576 AIGVs generated by 15 advanced text-to-video models using 1,048 diverse prompts. With these AIGVs, a systematic annotation pipeline including scoring and ranking processes is devised, which collects 370k expert ratings to date. Based on AIGVQA-DB, we further introduce AIGV-Assessor, a novel VQA model that leverages spatiotemporal features and LMM frameworks to capture the intricate quality attributes of AIGVs, thereby accurately predicting precise video quality scores and video pair preferences. Through comprehensive experiments on both AIGVQA-DB and existing AIGV databases, AIGV-Assessor demonstrates state-of-the-art performance, significantly surpassing existing scoring or evaluation methods in terms of multiple perceptual quality dimensions. The dataset and code are released at https://github.com/IntMeGroup/AIGV-Assessor." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 595, + 137, + 608 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 595, + 137, + 608 + ], + "spans": [ + { + "bbox": [ + 55, + 595, + 137, + 608 + ], + "type": "text", + "content": "1. Introduction" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 616, + 296, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 616, + 296, + 665 + ], + "spans": [ + { + "bbox": [ + 55, + 616, + 296, + 665 + ], + "type": "text", + "content": "Text-to-video generative models [12, 27, 44, 64, 73], including auto-regressive [23, 81] and diffusion-based [12, 27, 55] approaches, have experienced rapid advancements in recent years with the explosion of large multimodal models" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 246, + 555, + 389 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 246, + 555, + 389 + ], + "spans": [ + { + "bbox": [ + 313, + 246, + 555, + 389 + ], + "type": "text", + "content": "(LMMs). Given appropriate text prompts, these models can generate high-fidelity and semantically-aligned videos, commonly referred to as AI-generated videos (AIGVs), which have significantly facilitated the content creation in various domains, including entertainment, art, design, and advertising, etc [11, 13, 43]. Despite the significant progress, current AIGVs are still far from satisfactory. Unlike natural videos, which are usually affected by low-level distortions, such as noise, blur, low-light, etc, AIGVs generally suffer from degradations such as unrealistic objects, unnatural movements, inconsistent visual elements, and misalignment with text descriptions [25, 31, 43, 65, 79, 84, 85]." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 390, + 556, + 653 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 390, + 556, + 653 + ], + "spans": [ + { + "bbox": [ + 313, + 390, + 556, + 653 + ], + "type": "text", + "content": "The unique distortions in AIGVs also bring challenges to the video evaluation. Traditional video quality assessment (VQA) methods [10, 18, 33, 35, 57, 70, 71] mainly focus on evaluating the quality of professionally-generated content (PGC) and user-generated content (UGC), thus struggling to address the specific distortions associated with AIGVs, such as spatial artifacts, temporal inconsistencies, and misalignment between generated content and text prompts. For evaluation of AIGVs, some metrics such as Inception Score (IS) [52] and Fréchet Video Distance (FVD) [61] have been widely used, which are computed over distributions of videos and may not reflect the human preference for an individual video. Moreover, these metrics mainly evaluate the fidelity of videos, while failing to assess the text-video correspondence. Vision-language pre-training models, such as CLIPScore [22], BLIPScore [37], and AestheticScore [53] are frequently employed to evaluate the alignment between generated videos and their text prompts. However, these models mainly consider the text-video alignment at the image level, while ignoring the dynamic diversity and motion consistency of visual elements that are crucial to the vide-viewing experience." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 654, + 556, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 654, + 556, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 654, + 556, + 715 + ], + "type": "text", + "content": "In this paper, to facilitate the development of more comprehensive and precise metrics for evaluating AI-generated videos, we present AIGVQA-DB, a large-scale VQA dataset, including 36,576 AIGVs generated by 15 advanced text-to-video models using 1,048 diverse prompts. An" + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "spans": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "text", + "content": "CVF" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "spans": [ + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "type": "text", + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 674, + 295, + 713 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 674, + 295, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 674, + 295, + 713 + ], + "type": "text", + "content": "*Corresponding Author. \nThis work was supported in part by the National Natural Science Foundation of China under Grant 62271312, 62132006, 62401365, 62225112, and in part by STCSM under Grant 22DZ2229005." 
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "type": "text", + "content": "18869" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 57, + 42, + 555, + 218 + ], + "blocks": [ + { + "bbox": [ + 57, + 42, + 555, + 218 + ], + "lines": [ + { + "bbox": [ + 57, + 42, + 555, + 218 + ], + "spans": [ + { + "bbox": [ + 57, + 42, + 555, + 218 + ], + "type": "image", + "image_path": "e2f88f0b1752496702c959cf991c9bb7ab6e815905c8a1788b7757a50de31c59.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 55, + 219, + 555, + 285 + ], + "lines": [ + { + "bbox": [ + 55, + 219, + 555, + 285 + ], + "spans": [ + { + "bbox": [ + 55, + 219, + 555, + 285 + ], + "type": "text", + "content": "Figure 1. An overview of the AIGVQA-DB construction pipeline, illustrating the generation and the subjective evaluation procedures for the AIGVs in the database. (a) Prompt categorization according to the spatial major content. (b) Prompt categorization according to the temporal descriptions. (c) Prompt categorization according to the attribute control. (d) Prompt categorization according to the prompt complexity. (e) The 15 generative models used in the database. (f) Four visual quality evaluation perspectives, including static quality, temporal smoothness, dynamic degree, and text-video correspondence. (g) and (h) demonstrates the pair comparison and preference scoring processes, respectively." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 295, + 296, + 606 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 295, + 296, + 606 + ], + "spans": [ + { + "bbox": [ + 56, + 295, + 296, + 606 + ], + "type": "text", + "content": "overview of the dataset construction pipeline is shown in Figure 1. The prompts are collected from existing open-domain text-video datasets [7, 8, 38, 43, 68, 76] or manually-written, which can be categorized based on four orthogonal aspects respectively, as shown in Figure 1(a)-(d). Based on the AIGVs, we collect 370k expert ratings comprising both mean opinion scores (MOSs) and pairwise comparisons, which are evaluated from four dimensions, including: (1) static quality, (2) temporal smoothness, (3) dynamic degree, and (4) text-video correspondence. Equipped with the dataset, we propose AIGV-Assessor, a large multimodal model-based (LMM-based) VQA method for AIGVs, which reformulates the quality regression task into an interactive question-and-answer (Q&A) framework and leverages the powerful multimodal representation capabilities of LMMs to provide accurate and robust quality assessments. AIGV-Assessor not only classifies videos into different quality levels through natural language output, but also generates precise quality scores through regression, thus enhancing the interpretability and usability of VQA results. Moreover, AIGV-Assessor also excels in pairwise video comparisons, enabling nuanced assessments that are closer to human preferences. Extensive experimental results demonstrate that AIGV-Assessor outperforms existing text-to-video scoring methods in terms of multiple dimensions relevant to human preference." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 612, + 295, + 635 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 612, + 295, + 635 + ], + "spans": [ + { + "bbox": [ + 55, + 612, + 295, + 635 + ], + "type": "text", + "content": "The main contributions of this paper are summarized as follows:" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 641, + 296, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 641, + 296, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 641, + 296, + 714 + ], + "type": "text", + "content": "- We construct AIGVQA-DB, a large-scale dataset comprising 36,576 AI-generated videos annotated with MOS scores and pairwise comparisons. Compared with existing benchmarks, AIGVQA-DB provides a more comprehensive assessment of the capabilities of text-to-video models from multiple perspectives." + } + ] + } + ], + "index": 4 + }, + { + "type": "table", + "bbox": [ + 315, + 316, + 555, + 464 + ], + "blocks": [ + { + "bbox": [ + 313, + 293, + 555, + 315 + ], + "lines": [ + { + "bbox": [ + 313, + 293, + 555, + 315 + ], + "spans": [ + { + "bbox": [ + 313, + 293, + 555, + 315 + ], + "type": "text", + "content": "Table 1. An overview of popular text-to-video (T2V) and image-to-video (I2V) generation models. " + }, + { + "bbox": [ + 313, + 293, + 555, + 315 + ], + "type": "inline_equation", + "content": "{}^{ \\dagger }" + }, + { + "bbox": [ + 313, + 293, + 555, + 315 + ], + "type": "text", + "content": " Representative variable." + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 315, + 316, + 555, + 464 + ], + "lines": [ + { + "bbox": [ + 315, + 316, + 555, + 464 + ], + "spans": [ + { + "bbox": [ + 315, + 316, + 555, + 464 + ], + "type": "table", + "html": "
ModelYearModeResolutionFramesOpen
CogVideo [23]22.05T2V480×48032
Make-a-Video [55]22.09T2V256×25616
LVDM [21]22.11T2V256×25616
Tune-A-Video [73]22.12T2V512×5128
VideoFusion [44]23.03T2V128×12816
Text2Video-Zero [27]23.03T2V512×5128
ModelScope [64]23.03T2V256×25616
Lavie [67]23.09T2V512×32016
VideoCrafter [12]23.10T2V, I2V1024×57616
Hotshot-XL [1]23.10T2V672×3848
StableVideoDiffusion [9]23.11I2V576×102414
AnimateDiff [20]23.12T2V, I2V384×25620
Floor33 [2]23.08T2V,I2V1024×64016-
Genmo [3]23.10T2V, I2V2048×153660-
Gen-2 [4]23.12T2V, I2V1408×76896-
MoonValley [5]24.01T2V, I2V1184×672200†-
MorphStudio [6]24.01T2V, I2V1920×108072-
Sora [7]24.02T2V, I2V1920×1080600†-
", + "image_path": "e3de484902a96fbadcaab3b506b556478f05878361edece5b32e33fcc4891946.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "table_body" + } + ], + "index": 6 + }, + { + "bbox": [ + 314, + 476, + 555, + 643 + ], + "type": "list", + "angle": 0, + "index": 10, + "blocks": [ + { + "bbox": [ + 314, + 476, + 555, + 536 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 476, + 555, + 536 + ], + "spans": [ + { + "bbox": [ + 314, + 476, + 555, + 536 + ], + "type": "text", + "content": "- Based on AIGVQA-DB, we evaluate and benchmark 15 representative text-to-video models, and reveal their strengths and weaknesses from four crucial preference dimensions, i.e., static quality, temporal smoothness, dynamic degree, and text-to-video correspondence." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 314, + 536, + 555, + 596 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 536, + 555, + 596 + ], + "spans": [ + { + "bbox": [ + 314, + 536, + 555, + 596 + ], + "type": "text", + "content": "- We present a novel LMM-based VQA model for AIGVs, termed AIGV-Assessor, which integrates both spatial and temporal visual features as well as prompt features into a LMM to give quality levels, predict quality scores, and conduct quality comparisons." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 314, + 596, + 555, + 643 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 596, + 555, + 643 + ], + "spans": [ + { + "bbox": [ + 314, + 596, + 555, + 643 + ], + "type": "text", + "content": "- Thorough analysis of our AIGV-Assessor is provided and extensive experiments on our proposed AIGVQA-DB and other AIGV quality assessment datasets have shown the effectiveness and applicability of AIGV-Assessor." + } + ] + } + ], + "index": 9 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 314, + 655, + 400, + 667 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 655, + 400, + 667 + ], + "spans": [ + { + "bbox": [ + 314, + 655, + 400, + 667 + ], + "type": "text", + "content": "2. Related Work" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 314, + 673, + 454, + 685 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 673, + 454, + 685 + ], + "spans": [ + { + "bbox": [ + 314, + 673, + 454, + 685 + ], + "type": "text", + "content": "2.1. Text-to-video Generation" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 689, + 555, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 689, + 555, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 689, + 555, + 713 + ], + "type": "text", + "content": "Recent advancements in text-to-video generative models have substantially broadened video creation and modifica" + } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "text", + "content": "18870" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 56, + 71, + 555, + 184 + ], + "blocks": [ + { + "bbox": [ + 156, + 60, + 453, + 70 + ], + "lines": [ + { + "bbox": [ + 156, + 60, + 453, + 70 + ], + "spans": [ + { + "bbox": [ + 156, + 60, + 453, + 70 + ], + "type": "text", + "content": "Table 2. 
Summary of existing text-to-image and text-to-video evaluation datasets." + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 56, + 71, + 555, + 184 + ], + "lines": [ + { + "bbox": [ + 56, + 71, + 555, + 184 + ], + "spans": [ + { + "bbox": [ + 56, + 71, + 555, + 184 + ], + "type": "table", + "html": "
Dataset TypesNameNumbersPromptsModelsAnnotatorsDimensionsMOSs / PairsAnnotation
AIGIQAAGIQA-3k [34]2,98218062125,964MOS
AIGCIQA2023 [63]2,40010062837,200MOS
RichHF-18k [39]17,76017,76033471,040MOS
HPS [75]98,80725,20512,659125,205Pairs
AIGVQAPick-a-Pic [29]-37,52334,3751584,247Pairs
MQT [15]1,00520152422,010MOS
EvalCrafter [42]2,5007005741,024MOS
FETV [43]2,4766194337,428MOS
LGVQ [84]2,80846862038,424MOS
T2VQA-DB [31]10,0001,000927110,000MOS
GAIA [13]9,1805101854327,540MOS
AIGVQA-DB (Ours)36,5761,048151204122,304MOS and Pairs
", + "image_path": "6787ec8d15512f02dc1d3ce899c3b3e6c185f3b18c784a74c4644ea72e531e53.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 190, + 297, + 441 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 190, + 297, + 441 + ], + "spans": [ + { + "bbox": [ + 55, + 190, + 297, + 441 + ], + "type": "text", + "content": "tion possibilities. As shown in Table 1, these models exhibit distinct characteristics and capacities, including modes, resolution, and total frames. CogVideo [23] is an early text-to-video (T2V) model capable of generating short videos based on CogView2 [16]. Make-a-video [55] adds effective spatial-temporal modules on a diffusion-based text-to-image (T2I) model (i.e., DALLE-2 [50]). VideoFusion [44] also leverages the DALLE-2 and presents a decomposed diffusion process. LVDM [21], Text2Video-Zero [27], Tune-A-Video [73], and ModelScope [64] are models that inherit the success of Stable Diffusion (SD) [51] for video generation. Lavie [67] extends the original transformer block in SD to a spatio-temporal transformer. Hotshot-XL [1] introduces personalized video generation. Beyond these laboratory-driven advancements, the video generation landscape has also been enriched by a series of commercial products. Notable among them are Floor33 [2], Gen-2 [4], Genmo [3], MoonValley [5], MorphStudio [6], and Sora [7], which have gained substantial attention in both academia and industry, demonstrating the widespread application potential of AI-assisted video creation." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 448, + 194, + 459 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 448, + 194, + 459 + ], + "spans": [ + { + "bbox": [ + 55, + 448, + 194, + 459 + ], + "type": "text", + "content": "2.2. Text-to-video Evaluation" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 462, + 296, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 462, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 462, + 296, + 713 + ], + "type": "text", + "content": "The establishment of the AI-generated image quality assessment (AIGIQA) dataset is relatively well-developed, including both mean opinion scores (MOSs) for absolute quality evaluations, and pairwise comparisons for relative quality judgments. Recent developments in text-to-video generation models have also spurred the creation of various AI-generated video quality assessment (AIGVQA) datasets, addressing different aspects of the T2V generation challenge, as shown in Table 2. MQT [15] consists of 1,005 videos generated by 5 models using 201 prompts. EvalCrafter [42] and FETV [43] extend the scale of the videos, prompts, and evaluation dimensions. LGVQ [84] increases the number of annotators, providing more reliable MOSs. T2VQA-DB [31] consists of 10,000 videos from 1,000 prompts representing a significant improvement in scale. GAIA [13] collects 9,180 videos focusing on action quality assessment in AIGVs, but falls short in addressing the consistency between the generated visuals and their textual prompts. 
Most existing VQA datasets predominantly rely on MOS, an absolute scoring method, which suffers from the same drawback: absolute scores alone may cause am" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 313, + 191, + 555, + 240 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 191, + 555, + 240 + ], + "spans": [ + { + "bbox": [ + 313, + 191, + 555, + 240 + ], + "type": "text", + "content": "biguity and overlook subtle quality differences. In contrast, our AIGVQA-DB includes both MOSs and pairwise comparisons, addressing the limitations of current works by providing fine-grained preference feedbacks." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 313, + 245, + 517, + 258 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 245, + 517, + 258 + ], + "spans": [ + { + "bbox": [ + 313, + 245, + 517, + 258 + ], + "type": "text", + "content": "3. Database Construction and Analysis" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 313, + 262, + 410, + 274 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 262, + 410, + 274 + ], + "spans": [ + { + "bbox": [ + 313, + 262, + 410, + 274 + ], + "type": "text", + "content": "3.1. Data Collection" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 281, + 555, + 413 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 281, + 555, + 413 + ], + "spans": [ + { + "bbox": [ + 313, + 281, + 555, + 413 + ], + "type": "text", + "content": "Prompt Scources and Categorization. Prompts of the AIGVQA-DB are primarily sourced from existing open-domain text-video pair datasets, including InternVid [68], MSRVTT [76], WebVid [8], TGIF [38], FETV [43] and Sora website [7]. We also manually craft prompts describing highly unusual scenarios to test the generalization ability of the generation models. As shown in Figure 1(a)-(d), we follow the categorization principles from FETV [43] to organize each prompt based on the \"spatial major content\", \"temporal major content\", \"attribute control\", and \"prompt complexity\"." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 417, + 556, + 633 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 417, + 556, + 633 + ], + "spans": [ + { + "bbox": [ + 313, + 417, + 556, + 633 + ], + "type": "text", + "content": "Text-to-Video Generation. We utilize 15 latest text-to-video generative models to create AI-generated videos as shown in Figure 1(e). We leverage open-source website APIs and code with default weights for these models to produce AIGVs. For the construction of the MOS subset, we collect 48 videos from the Sora Website [7], along with their corresponding text prompts. Using these prompts, we generate additional videos using 11 different generative models. This process results in a total of 576 videos (12 generative models " + }, + { + "bbox": [ + 313, + 417, + 556, + 633 + ], + "type": "inline_equation", + "content": "\\times" + }, + { + "bbox": [ + 313, + 417, + 556, + 633 + ], + "type": "text", + "content": " 48 prompts). In addition to the MOS subset, we construct the pair-comparison subset using 1,000 diverse prompts, and 12 generative models including 8 open-sourced and 4 close-sourced are employed for text-to-video generation. Specifically, for each prompt, we generate four distinct videos for each open-source generative model and one video for each closed-source generative model. This process yields a total of 36,000 videos. 
More details of the database can be found in the supplementary material." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 639, + 545, + 651 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 639, + 545, + 651 + ], + "spans": [ + { + "bbox": [ + 313, + 639, + 545, + 651 + ], + "type": "text", + "content": "3.2. Subjective Experiment Setup and Procedure" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 654, + 554, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 654, + 554, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 654, + 554, + 715 + ], + "type": "text", + "content": "Due to the unique and unnatural characteristics of AI-generated videos and the varying target video spaces dictated by different text prompts, relying solely on a single score, such as \"quality\", to represent human visual preferences is insufficient. In this paper, we propose to measure" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "text", + "content": "18871" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 56, + 59, + 212, + 156 + ], + "blocks": [ + { + "bbox": [ + 56, + 59, + 212, + 156 + ], + "lines": [ + { + "bbox": [ + 56, + 59, + 212, + 156 + ], + "spans": [ + { + "bbox": [ + 56, + 59, + 212, + 156 + ], + "type": "image", + "image_path": "fb0385b3e755dd0ce7af1b2bf2c5ae793450e483574d3b22a009ca8c5c39ce30.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 108, + 156, + 199, + 164 + ], + "lines": [ + { + "bbox": [ + 108, + 156, + 199, + 164 + ], + "spans": [ + { + "bbox": [ + 108, + 156, + 199, + 164 + ], + "type": "text", + "content": "(a) Distribution of Raw Scores" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 212, + 59, + 553, + 156 + ], + "blocks": [ + { + "bbox": [ + 212, + 59, + 553, + 156 + ], + "lines": [ + { + "bbox": [ + 212, + 59, + 553, + 156 + ], + "spans": [ + { + "bbox": [ + 212, + 59, + 553, + 156 + ], + "type": "image", + "image_path": "8eac122b8407f48e45e263fdddc23287419766654acec0728f07feffd5b4ba0f.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 318, + 157, + 462, + 165 + ], + "lines": [ + { + "bbox": [ + 318, + 157, + 462, + 165 + ], + "spans": [ + { + "bbox": [ + 318, + 157, + 462, + 165 + ], + "type": "text", + "content": "(b) Distribution of Mean Opinion Scores (MOSs)" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 56, + 198, + 556, + 296 + ], + "blocks": [ + { + "bbox": [ + 55, + 167, + 555, + 189 + ], + "lines": [ + { + "bbox": [ + 55, + 167, + 555, + 189 + ], + "spans": [ + { + "bbox": [ + 55, + 167, + 555, + 189 + ], + "type": "text", + "content": "Figure 2. Video score distribution from the four perspectives including static quality, temporal smoothness, dynamic degree, and t2v correspondence. (a) Distribution of raw scores. 
(b) Distribution of Mean Opinion Scores (MOSs)" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 56, + 198, + 556, + 296 + ], + "lines": [ + { + "bbox": [ + 56, + 198, + 556, + 296 + ], + "spans": [ + { + "bbox": [ + 56, + 198, + 556, + 296 + ], + "type": "image", + "image_path": "36c72da19ded12678e8963e4090622f43eaf08a13abe22c112d4b9d85ec226dc.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 107, + 299, + 116, + 306 + ], + "lines": [ + { + "bbox": [ + 107, + 299, + 116, + 306 + ], + "spans": [ + { + "bbox": [ + 107, + 299, + 116, + 306 + ], + "type": "text", + "content": "(a)" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 217, + 298, + 227, + 306 + ], + "lines": [ + { + "bbox": [ + 217, + 298, + 227, + 306 + ], + "spans": [ + { + "bbox": [ + 217, + 298, + 227, + 306 + ], + "type": "text", + "content": "(b)" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 321, + 298, + 331, + 306 + ], + "lines": [ + { + "bbox": [ + 321, + 298, + 331, + 306 + ], + "spans": [ + { + "bbox": [ + 321, + 298, + 331, + 306 + ], + "type": "text", + "content": "(c)" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_caption" + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 58, + 338, + 296, + 484 + ], + "blocks": [ + { + "bbox": [ + 55, + 308, + 553, + 331 + ], + "lines": [ + { + "bbox": [ + 55, + 308, + 553, + 331 + ], + "spans": [ + { + "bbox": [ + 55, + 308, + 553, + 331 + ], + "type": "text", + "content": "Figure 3. Comparison of averaged win rates of different generation models across different categories. (a) Results across prompt complexity. (b) Results across attribute control. (c) Results across temporal major contents. (d) Results across spatial major contents." + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 58, + 338, + 296, + 484 + ], + "lines": [ + { + "bbox": [ + 58, + 338, + 296, + 484 + ], + "spans": [ + { + "bbox": [ + 58, + 338, + 296, + 484 + ], + "type": "image", + "image_path": "1f6e6965f46d7954d0df029d696dd198d84ae054cd372b5706fa71d41fde0ab3.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 55, + 486, + 295, + 540 + ], + "lines": [ + { + "bbox": [ + 55, + 486, + 295, + 540 + ], + "spans": [ + { + "bbox": [ + 55, + 486, + 295, + 540 + ], + "type": "text", + "content": "Figure 4. (a) Comparison of text-to-video generation models regarding the MOS in terms of four dimensions sorted bottom-up by their averaged MOS. (b) Comparison of text-to-video generation models regarding the win rate in terms of four dimensions sorted bottom-up by their averaged win rate." + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_caption" + } + ], + "index": 10 + }, + { + "bbox": [ + 55, + 558, + 296, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 558, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 558, + 296, + 713 + ], + "type": "text", + "content": "the human visual preferences of AIGVs from four perspectives. Static quality assesses the clarity, sharpness, color accuracy, and overall aesthetic appeal of the frames when viewed as standalone images. Temporal smoothness evaluates the temporal coherence of video frames and the absence of temporal artifacts such as flickering or jittering. 
Dynamic degree evaluates the extent to which the video incorporates large motions and dynamic scenes, which contributes to the overall liveliness and engagement of the content. Text-video (TV) correspondence assesses how accurately the video content reflects the details, themes, and actions described in the prompt, ensuring that the generated video effectively translates the text input into" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 340, + 555, + 496 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 340, + 555, + 496 + ], + "spans": [ + { + "bbox": [ + 313, + 340, + 555, + 496 + ], + "type": "text", + "content": "a visual narrative. Each of these four visual perception perspectives is related but distinct, offering a comprehensive evaluation for AIGVs. To evaluate the quality of the videos in the AIGVQA-DB, we conduct subjective experiments adhering to the guidelines outlined in ITU-R BT.500-14 [17, 54]. For the MOS annotation type, we use a 1-5 Likert-scale judgment to score the videos. For the pairs annotation type, participants are presented with pairs of videos and asked to choose the one they prefer, providing a direct comparison method for evaluating relative video quality. The videos are displayed using an interface designed with Python Tkinter, as illustrated in Figure 1(g)-(h). A total of 120 graduate students participate in the experiment." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 502, + 463, + 515 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 502, + 463, + 515 + ], + "spans": [ + { + "bbox": [ + 313, + 502, + 463, + 515 + ], + "type": "text", + "content": "3.3. Subjective Data Processing" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 313, + 520, + 554, + 544 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 520, + 554, + 544 + ], + "spans": [ + { + "bbox": [ + 313, + 520, + 554, + 544 + ], + "type": "text", + "content": "In order to obtain the MOS for an AIGV, we linearly scale the raw ratings to the range [0, 100] as follows:" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 351, + 553, + 516, + 578 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 351, + 553, + 516, + 578 + ], + "spans": [ + { + "bbox": [ + 351, + 553, + 516, + 578 + ], + "type": "interline_equation", + "content": "z_{ij} = \\frac{r_{ij} - \\mu_i}{\\sigma_i}, \\quad z_{ij}^{\\prime} = \\frac{100(z_{ij} + 3)}{6},", + "image_path": "3720893653894e08e26dbbf7f8e87e032a12e4515b6b13b8bdb5145033ae3317.jpg" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 328, + 586, + 539, + 624 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 328, + 586, + 539, + 624 + ], + "spans": [ + { + "bbox": [ + 328, + 586, + 539, + 624 + ], + "type": "interline_equation", + "content": "\\mu_i = \\frac{1}{N_i} \\sum_{j=1}^{N_i} r_{ij}, \\quad \\sigma_i = \\sqrt{\\frac{1}{N_i - 1} \\sum_{j=1}^{N_i} (r_{ij} - \\mu_i)^2}", + "image_path": "96c053e7254590f6ebbbf16dad2287e91210edde9c6ec1a096a08bcd8deb5ea2.jpg" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 313, + 629, + 554, + 676 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 629, + 554, + 676 + ], + "spans": [ + { + "bbox": [ + 313, + 629, + 554, + 676 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 629, + 554, + 676 + ], + "type": "inline_equation", +
"content": "r_{ij}" + }, + { + "bbox": [ + 313, + 629, + 554, + 676 + ], + "type": "text", + "content": " is the raw ratings given by the " + }, + { + "bbox": [ + 313, + 629, + 554, + 676 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 313, + 629, + 554, + 676 + ], + "type": "text", + "content": "-th subject to the " + }, + { + "bbox": [ + 313, + 629, + 554, + 676 + ], + "type": "inline_equation", + "content": "j" + }, + { + "bbox": [ + 313, + 629, + 554, + 676 + ], + "type": "text", + "content": "-th video. " + }, + { + "bbox": [ + 313, + 629, + 554, + 676 + ], + "type": "inline_equation", + "content": "N_i" + }, + { + "bbox": [ + 313, + 629, + 554, + 676 + ], + "type": "text", + "content": " is the number of videos judged by subject " + }, + { + "bbox": [ + 313, + 629, + 554, + 676 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 313, + 629, + 554, + 676 + ], + "type": "text", + "content": ". Next, the MOS of the video " + }, + { + "bbox": [ + 313, + 629, + 554, + 676 + ], + "type": "inline_equation", + "content": "j" + }, + { + "bbox": [ + 313, + 629, + 554, + 676 + ], + "type": "text", + "content": " is computed by averaging the rescaled z-scores as follows:" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 390, + 683, + 477, + 715 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 390, + 683, + 477, + 715 + ], + "spans": [ + { + "bbox": [ + 390, + 683, + 477, + 715 + ], + "type": "interline_equation", + "content": "M O S _ {j} = \\frac {1}{M} \\sum_ {i = 1} ^ {M} z _ {i j} ^ {\\prime}", + "image_path": "9028f8d62bbacd80d9aeac7ec7f97ddbcd27ed4f01ef89ba7ac89c6e5a75d5f4.jpg" + } + ] + } + ], + "index": 19 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "text", + "content": "18872" + } + ] + } + ], + "index": 20 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 56, + 60, + 558, + 238 + ], + "blocks": [ + { + "bbox": [ + 56, + 60, + 558, + 238 + ], + "lines": [ + { + "bbox": [ + 56, + 60, + 558, + 238 + ], + "spans": [ + { + "bbox": [ + 56, + 60, + 558, + 238 + ], + "type": "image", + "image_path": "6e2271fc92b4aadac9147836c9305b52893182b0a5fb8fd4763fd447322a3f64.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 54, + 239, + 555, + 307 + ], + "lines": [ + { + "bbox": [ + 54, + 239, + 555, + 307 + ], + "spans": [ + { + "bbox": [ + 54, + 239, + 555, + 307 + ], + "type": "text", + "content": "Figure 5. The framework of AIGV-Assessor: (a) AIGV-Assessor takes AI-generated video frames as input and outputs both text-based quality levels and numerical quality scores. The system begins with the extraction of spatiotemporal features using two vision encoders, which are then passed through spatial and temporal projection modules to generate aligned visual tokens into language space. The LLM decoder produces text-based feedback describing the video quality level for four evaluation dimensions, respectively. Simultaneously, the last-hidden-states from the LLM are used to perform quality regression that outputs final quality scores in terms of four dimensions. (b) AIGV-Assessor is fine-tuned on pairwise comparison, further allowing the model to output the evaluation comparison between two videos." 
+ } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 54, + 318, + 294, + 342 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 318, + 294, + 342 + ], + "spans": [ + { + "bbox": [ + 54, + 318, + 294, + 342 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 54, + 318, + 294, + 342 + ], + "type": "inline_equation", + "content": "MOS_{j}" + }, + { + "bbox": [ + 54, + 318, + 294, + 342 + ], + "type": "text", + "content": " indicates the MOS for the " + }, + { + "bbox": [ + 54, + 318, + 294, + 342 + ], + "type": "inline_equation", + "content": "j" + }, + { + "bbox": [ + 54, + 318, + 294, + 342 + ], + "type": "text", + "content": "-th AIGV, " + }, + { + "bbox": [ + 54, + 318, + 294, + 342 + ], + "type": "inline_equation", + "content": "M" + }, + { + "bbox": [ + 54, + 318, + 294, + 342 + ], + "type": "text", + "content": " is the number of subjects, and " + }, + { + "bbox": [ + 54, + 318, + 294, + 342 + ], + "type": "inline_equation", + "content": "z_{ij}^{\\prime}" + }, + { + "bbox": [ + 54, + 318, + 294, + 342 + ], + "type": "text", + "content": " are the rescaled z-scores." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 56, + 343, + 296, + 594 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 343, + 296, + 594 + ], + "spans": [ + { + "bbox": [ + 56, + 343, + 296, + 594 + ], + "type": "text", + "content": "For the pairs annotation type, given a text prompt " + }, + { + "bbox": [ + 56, + 343, + 296, + 594 + ], + "type": "inline_equation", + "content": "p_i" + }, + { + "bbox": [ + 56, + 343, + 296, + 594 + ], + "type": "text", + "content": " and 12 video generation models labeled " + }, + { + "bbox": [ + 56, + 343, + 296, + 594 + ], + "type": "inline_equation", + "content": "\\{A,B,C,\\dots,L\\}" + }, + { + "bbox": [ + 56, + 343, + 296, + 594 + ], + "type": "text", + "content": ", we generate videos using each model, forming a group of videos " + }, + { + "bbox": [ + 56, + 343, + 296, + 594 + ], + "type": "inline_equation", + "content": "G_{i,j} = \\{V_{i,A,j},V_{i,B,j},V_{i,C,j},\\dots,V_{i,L,j}\\}" + }, + { + "bbox": [ + 56, + 343, + 296, + 594 + ], + "type": "text", + "content": ". For each prompt " + }, + { + "bbox": [ + 56, + 343, + 296, + 594 + ], + "type": "inline_equation", + "content": "p_i" + }, + { + "bbox": [ + 56, + 343, + 296, + 594 + ], + "type": "text", + "content": ", we generate four different videos randomly for each of the eight open-source generative models and one video for each of the four closed-source generative models, resulting in a group of 36 videos " + }, + { + "bbox": [ + 56, + 343, + 296, + 594 + ], + "type": "inline_equation", + "content": "\\{G_{i,A,1},G_{i,A,2},G_{i,A,3},G_{i,A,4},G_{i,B,1},\\dots,G_{i,L,1}\\}" + }, + { + "bbox": [ + 56, + 343, + 296, + 594 + ], + "type": "text", + "content": ". 
For each group, we create all possible pairwise combinations, resulting in " + }, + { + "bbox": [ + 56, + 343, + 296, + 594 + ], + "type": "inline_equation", + "content": "C_{36}^2" + }, + { + "bbox": [ + 56, + 343, + 296, + 594 + ], + "type": "text", + "content": " pairs: " + }, + { + "bbox": [ + 56, + 343, + 296, + 594 + ], + "type": "inline_equation", + "content": "(V_{A1},V_{B1})" + }, + { + "bbox": [ + 56, + 343, + 296, + 594 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 56, + 343, + 296, + 594 + ], + "type": "inline_equation", + "content": "(V_{A1},V_{B2})" + }, + { + "bbox": [ + 56, + 343, + 296, + 594 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 56, + 343, + 296, + 594 + ], + "type": "inline_equation", + "content": "(V_{A1},V_{B3})" + }, + { + "bbox": [ + 56, + 343, + 296, + 594 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 56, + 343, + 296, + 594 + ], + "type": "inline_equation", + "content": "(V_{A1},V_{B4})" + }, + { + "bbox": [ + 56, + 343, + 296, + 594 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 56, + 343, + 296, + 594 + ], + "type": "inline_equation", + "content": "(V_{A1},V_{C1})" + }, + { + "bbox": [ + 56, + 343, + 296, + 594 + ], + "type": "text", + "content": ", ..., " + }, + { + "bbox": [ + 56, + 343, + 296, + 594 + ], + "type": "inline_equation", + "content": "(V_{K1},V_{L1})" + }, + { + "bbox": [ + 56, + 343, + 296, + 594 + ], + "type": "text", + "content": ". In the AIGVQA-DB construction pipeline, a prompt suite of 1000 prompts results in 630,000 " + }, + { + "bbox": [ + 56, + 343, + 296, + 594 + ], + "type": "inline_equation", + "content": "(1000\\times C_{36}^2)" + }, + { + "bbox": [ + 56, + 343, + 296, + 594 + ], + "type": "text", + "content": " pairwise video comparisons. From this extensive dataset, we randomly sample 30,000 pairs for evaluation from four perspectives. Each pair is judged by three annotators, and the final decision of the better video in each pair is determined by the majority vote. Finally, we obtain a total of 46,080 reliable score ratings (20 annotators × 4 perspectives × 576 videos) and 360,000 pair ratings (3 annotators × 4 perspectives × 30,000 pairs)." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 600, + 261, + 613 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 600, + 261, + 613 + ], + "spans": [ + { + "bbox": [ + 55, + 600, + 261, + 613 + ], + "type": "text", + "content": "3.4. AIGV Analysis from Four Perspectives" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 617, + 296, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 617, + 296, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 617, + 296, + 714 + ], + "type": "text", + "content": "As shown in Figure 2, the videos in the AIGVQA-DB cover a wide range of perceptual quality. We further analyze the win rates of various generation models across categories in Figure 3, revealing the strengths and weaknesses of each T2V model. 
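The counting above follows directly from the subset design: 36 videos per prompt give C(36, 2) = 630 pairs, 1,000 prompts give 630,000 candidate pairs, and each of the 30,000 sampled pairs is settled by three annotators. A small illustrative sketch (identifiers are hypothetical):

```python
# Illustrative check of the pair-comparison bookkeeping described above.
from itertools import combinations

videos_per_prompt = 8 * 4 + 4 * 1          # 8 open-source models x 4 videos + 4 closed-source x 1 = 36
pairs_per_prompt = sum(1 for _ in combinations(range(videos_per_prompt), 2))
print(pairs_per_prompt)                     # C(36, 2) = 630
print(1000 * pairs_per_prompt)              # 630,000 candidate pairs over the prompt suite

def majority_vote(choices):
    """Final preference for a sampled pair, decided by three annotators."""
    return max(set(choices), key=choices.count)

print(majority_vote(["A", "B", "A"]))       # -> "A"
```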
As shown in Figure 3(a), the T2V models rank uniformly across the different prompt-complexity categories in terms of static quality, which indicates that current T2V models rank consistently for different prompts, likely" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 313, + 319, + 555, + 546 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 319, + 555, + 546 + ], + "spans": [ + { + "bbox": [ + 313, + 319, + 555, + 546 + ], + "type": "text", + "content": "due to shared architectures like diffusion-based systems, with common strengths and limitations in handling complex prompts. As shown in Figure 3(b), in terms of attribute control, StableVideoDiffusion [9] excels in managing quantity over event order, as it first generates static images before animating them, preserving the original event sequence. As shown in Figure 3(d), in terms of spatial content, most videos featuring \"plants\" and \"people\" show poor T2V correspondence. More comparison and analysis can be found in the supplementary material. We also conduct comparisons among text-to-video generation models regarding the MOS and pairwise win rates shown in Figure 4. Notably, models such as LVDM [21] demonstrate exceptional performance in handling dynamic content, but exhibit relatively lower performance in temporal smoothness. Sora [7] and MorphStudio [6] perform well in static quality and temporal smoothness while lagging in dynamic degree. Additionally, closed-source models exhibit much better performance compared to open-source models." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 313, + 554, + 421, + 568 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 554, + 421, + 568 + ], + "spans": [ + { + "bbox": [ + 313, + 554, + 421, + 568 + ], + "type": "text", + "content": "4. Proposed Method" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 571, + 415, + 583 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 571, + 415, + 583 + ], + "spans": [ + { + "bbox": [ + 313, + 571, + 415, + 583 + ], + "type": "text", + "content": "4.1. Model Structure" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 590, + 555, + 685 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 590, + 555, + 685 + ], + "spans": [ + { + "bbox": [ + 313, + 590, + 555, + 685 + ], + "type": "text", + "content": "Spatial and Temporal Vision Encoder. As shown in Figure 5(a), the model leverages two different types of encoders to capture the spatial and temporal characteristics of the video: (1) 2D Encoder: A pre-trained 2D vision transformer (InternViT [69]) is used to process individual video frames. (2) 3D Encoder: A 3D network, i.e., SlowFast [19], is employed to extract temporal features by processing sequences of video frames." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 689, + 555, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 689, + 555, + 714 + ], + "spans": [ + { + "bbox": [ + 313, + 689, + 555, + 714 + ], + "type": "text", + "content": "Spatiotemporal Projection Module. 
Once the spatial and temporal features are extracted, they are projected into a" + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "text", + "content": "18873" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 56, + 83, + 555, + 364 + ], + "blocks": [ + { + "bbox": [ + 55, + 60, + 555, + 81 + ], + "lines": [ + { + "bbox": [ + 55, + 60, + 555, + 81 + ], + "spans": [ + { + "bbox": [ + 55, + 60, + 555, + 81 + ], + "type": "text", + "content": "Table 3. Performance comparisons of the state-of-the-art quality evaluation methods on the AIGVQA-DB from four perspectives. The best performance results are marked in RED and the second-best performance results are marked in BLUE." + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 56, + 83, + 555, + 364 + ], + "lines": [ + { + "bbox": [ + 56, + 83, + 555, + 364 + ], + "spans": [ + { + "bbox": [ + 56, + 83, + 555, + 364 + ], + "type": "table", + "html": "
| Dimension | Static Quality | | | | Temporal Smoothness | | | | Dynamic Degree | | | | TV Correspondence | | | |
| Methods / Metrics | Pair Acc | SRCC | PLCC | KRCC | Pair Acc | SRCC | PLCC | KRCC | Pair Acc | SRCC | PLCC | KRCC | Pair Acc | SRCC | PLCC | KRCC |
| NIQE [49] | 54.32% | 0.0867 | 0.1626 | 0.0615 | 52.67% | 0.0641 | 0.1152 | 0.0451 | 45.64% | 0.1765 | 0.2448 | 0.1194 | 46.99% | 0.1771 | 0.2231 | 0.1193 |
| QAC [80] | 49.96% | 0.1022 | 0.1363 | 0.0680 | 54.90% | 0.1633 | 0.2039 | 0.1105 | 54.72% | 0.0448 | 0.0427 | 0.0295 | 54.48% | 0.0303 | 0.0197 | 0.2233 |
| BRISQUE [48] | 59.98% | 0.2909 | 0.2443 | 0.1969 | 55.67% | 0.2325 | 0.1569 | 0.1553 | 44.60% | 0.1351 | 0.0959 | 0.0893 | 51.02% | 0.1294 | 0.1017 | 0.0869 |
| BPRI [46] | 52.28% | 0.2181 | 0.1723 | 0.1398 | 47.26% | 0.1766 | 0.0880 | 0.1138 | 46.83% | 0.1956 | 0.1688 | 0.1329 | 49.13% | 0.1569 | 0.1548 | 0.1052 |
| HOSA [77] | 61.54% | 0.2420 | 0.2106 | 0.1643 | 57.31% | 0.2311 | 0.1757 | 0.1559 | 44.97% | 0.0755 | 0.0449 | 0.0496 | 52.23% | 0.1645 | 0.1324 | 0.1097 |
| BMPRI [47] | 53.71% | 0.1690 | 0.1481 | 0.1075 | 49.31% | 0.1434 | 0.0844 | 0.0894 | 45.07% | 0.1153 | 0.0925 | 0.0777 | 48.43% | 0.1567 | 0.1500 | 0.1041 |
| V-Dynamic [25] | 51.34% | 0.0768 | 0.0792 | 0.0494 | 31.91% | 0.3713 | 0.4871 | 0.2557 | 53.11% | 0.1466 | 0.0253 | 0.0988 | 46.96% | 0.0405 | 0.0576 | 0.0223 |
| V-Smoothness [25] | 61.63% | 0.6748 | 0.4506 | 0.4590 | 76.59% | 0.8526 | 0.8313 | 0.6533 | 47.63% | 0.2446 | 0.2328 | 0.1580 | 61.28% | 0.3188 | 0.3073 | 0.2214 |
| CLIPScore [22] | 47.09% | 0.0731 | 0.0816 | 0.0473 | 46.33% | 0.0423 | 0.0334 | 0.0271 | 52.99% | 0.0675 | 0.0835 | 0.0439 | 55.62% | 0.1519 | 0.1731 | 0.1014 |
| BLIPScore [37] | 53.24% | 0.0492 | 0.0421 | 0.0330 | 53.07% | 0.0659 | 0.0487 | 0.0437 | 53.03% | 0.1786 | 0.1904 | 0.1205 | 61.53% | 0.1813 | 0.1896 | 0.1219 |
| AestheticScore [53] | 70.24% | 0.6713 | 0.6959 | 0.4784 | 54.82% | 0.5154 | 0.4946 | 0.3484 | 52.96% | 0.2295 | 0.2322 | 0.1527 | 59.64% | 0.2381 | 0.2440 | 0.1602 |
| ImageReward [78] | 56.69% | 0.2606 | 0.2646 | 0.1749 | 54.09% | 0.2382 | 0.2305 | 0.1600 | 53.90% | 0.1840 | 0.1836 | 0.1237 | 63.97% | 0.2311 | 0.2450 | 0.1568 |
| UMTScore [43] | 48.93% | 0.0168 | 0.0199 | 0.0117 | 49.93% | 0.0302 | 0.0370 | 0.0207 | 52.69% | 0.0168 | 0.0198 | 0.0117 | 53.82% | 0.0172 | 0.0065 | 0.0108 |
| Video-LLaVA [40] | 50.90% | 0.0384 | 0.0513 | 0.0297 | 50.36% | 0.0431 | 0.0281 | 0.0347 | 50.34% | 0.1561 | 0.1436 | 0.1176 | 50.54% | 0.1364 | 0.1051 | 0.1009 |
| Video-ChatGPT [45] | 51.20% | 0.1242 | 0.1587 | 0.0940 | 50.16% | 0.0580 | 0.0533 | 0.0453 | 50.47% | 0.0724 | 0.0436 | 0.0563 | 50.07% | 0.0357 | 0.0124 | 0.0274 |
| LLaVA-NeXT [36] | 52.85% | 0.1239 | 0.1625 | 0.0954 | 52.41% | 0.4021 | 0.3722 | 0.3052 | 51.84% | 0.1767 | 0.1655 | 0.1328 | 59.20% | 0.4116 | 0.3428 | 0.3261 |
| VideoLLaMA2 [14] | 52.73% | 0.2643 | 0.3271 | 0.1928 | 52.27% | 0.3608 | 0.2450 | 0.2696 | 50.78% | 0.1900 | 0.1561 | 0.1379 | 54.25% | 0.1656 | 0.1633 | 0.1210 |
| Qwen2-VL [66] | 56.50% | 0.4922 | 0.5291 | 0.3838 | 49.12% | 0.1681 | 0.4219 | 0.1233 | 52.08% | 0.1122 | 0.1335 | 0.0849 | 53.30% | 0.3111 | 0.2775 | 0.2306 |
| HyperIQA [56] | 68.30% | 0.7931 | 0.8093 | 0.5969 | 54.65% | 0.7426 | 0.6630 | 0.5407 | 53.32% | 0.2103 | 0.2100 | 0.1384 | 57.54% | 0.6226 | 0.6250 | 0.4432 |
| MUSIQ [26] | 66.46% | 0.7880 | 0.8044 | 0.5773 | 55.16% | 0.7199 | 0.6920 | 0.5034 | 52.85% | 0.5206 | 0.4846 | 0.3521 | 58.46% | 0.4125 | 0.4093 | 0.2844 |
| LIQE [83] | 63.86% | 0.8776 | 0.8691 | 0.7008 | 55.84% | 0.7935 | 0.7720 | 0.6084 | 49.02% | 0.5303 | 0.5840 | 0.3837 | 55.10% | 0.3862 | 0.3639 | 0.2640 |
| VSFA [35] | 46.43% | 0.3365 | 0.3421 | 0.2268 | 50.95% | 0.3317 | 0.3273 | 0.2202 | 51.46% | 0.1201 | 0.1362 | 0.0815 | 48.07% | 0.1024 | 0.1064 | 0.0666 |
| BVQA [33] | 29.98% | 0.4594 | 0.4701 | 0.3268 | 37.65% | 0.3704 | 0.3819 | 0.2507 | 55.08% | 0.4594 | 0.4701 | 0.3268 | 42.32% | 0.3720 | 0.3978 | 0.2559 |
| SimpleVQA [57] | 68.12% | 0.8355 | 0.6438 | 0.8489 | 54.14% | 0.7082 | 0.7008 | 0.4978 | 53.08% | 0.4671 | 0.3160 | 0.3994 | 58.20% | 0.4643 | 0.5440 | 0.3163 |
| FAST-VQA [70] | 70.64% | 0.8738 | 0.8644 | 0.6860 | 62.93% | 0.9036 | 0.9134 | 0.7166 | 54.34% | 0.5603 | 0.5703 | 0.3895 | 65.05% | 0.6875 | 0.6704 | 0.4978 |
| DOVER [71] | 72.92% | 0.8907 | 0.8895 | 0.7004 | 58.83% | 0.9063 | 0.9195 | 0.7187 | 53.16% | 0.5549 | 0.5489 | 0.3800 | 62.35% | 0.6783 | 0.6802 | 0.4969 |
| Q-Align [72] | 71.86% | 0.8516 | 0.8383 | 0.6641 | 57.95% | 0.8116 | 0.7025 | 0.6195 | 53.71% | 0.5655 | 0.5012 | 0.3950 | 62.91% | 0.5542 | 0.5647 | 0.3870 |
| AIGV-Assessor (Ours) | 79.83% | 0.9162 | 0.9190 | 0.7576 | 76.60% | 0.9232 | 0.9216 | 0.8038 | 60.30% | 0.6093 | 0.6082 | 0.4435 | 70.32% | 0.7500 | 0.7697 | 0.5591 |
| Improvement | +6.9% | +2.7% | +3.0% | +5.7% | +13.7% | +1.7% | +0.2% | +8.5% | +5.2% | +4.4% | +3.8% | +4.4% | +5.3% | +6.3% | +9.9% | +6.13% |
", + "image_path": "9990b47d31735c217cd27e3efc02ce8fe8b71d0dd0ca2393c1f10529779ecab1.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 373, + 295, + 445 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 373, + 295, + 445 + ], + "spans": [ + { + "bbox": [ + 55, + 373, + 295, + 445 + ], + "type": "text", + "content": "shared feature space for alignment with text-based queries. This is done through two projection modules that map the spatial and temporal visual features respectively into the language space. The mapped visual tokens are aligned with text tokens, enabling the model to query the video content in a multimodal fashion." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 450, + 295, + 617 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 450, + 295, + 617 + ], + "spans": [ + { + "bbox": [ + 55, + 450, + 295, + 617 + ], + "type": "text", + "content": "Feature Fusion and Quality Regression. We apply LLM (InternVL2-8B [69]) to combine the visual tokens and user-provided quality prompts to perform the following tasks: (1) Quality level descriptions: the model generates a descriptive quality level evaluation of the input video, such as \"The static quality of the video is (bad, poor, fair, good, excellent).\" This initial categorization provides a preliminary classification of the video's quality, which is beneficial for subsequent quality regression tasks. By obtaining a rough quality level, the model can more accurately predict numerical scores in later evaluations. (2) Regression score output: the model uses the final hidden states from the LLM to perform a regression task, outputting numerical quality scores for the video from four different dimensions." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 625, + 239, + 639 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 625, + 239, + 639 + ], + "spans": [ + { + "bbox": [ + 55, + 625, + 239, + 639 + ], + "type": "text", + "content": "4.2. Training and Fine-tuning Strategy" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 642, + 295, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 642, + 295, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 642, + 295, + 714 + ], + "type": "text", + "content": "The training process of AIGV-Assessor follows a three-stage approach to ensure high-quality video assessment with quality level prediction, individual quality scoring, and pairwise preference comparison capabilities. This process includes: (1) training the spatial and temporal projectors to align visual and language features, (2) fine-tuning the vision" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 313, + 373, + 555, + 434 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 373, + 555, + 434 + ], + "spans": [ + { + "bbox": [ + 313, + 373, + 555, + 434 + ], + "type": "text", + "content": "encoder and LLM with LoRA [24], and training the quality regression module to generate accurate quality scores, (3) incorporating pairwise comparison training using the pair-comparison subset with a pairwise loss function for robust video quality comparison." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 313, + 439, + 554, + 511 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 439, + 554, + 511 + ], + "spans": [ + { + "bbox": [ + 313, + 439, + 554, + 511 + ], + "type": "text", + "content": "Spatiotemporal Projector Training. The first stage focuses on training the spatial and temporal projectors to extract meaningful spatiotemporal visual features and map them into the language space. Through this process, the LLM is able to produce the quality level descriptions, i.e., bad, poor, fair, good, excellent." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 516, + 555, + 635 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 516, + 555, + 635 + ], + "spans": [ + { + "bbox": [ + 313, + 516, + 555, + 635 + ], + "type": "text", + "content": "Quality Regression Fine-tuning. Once the model can generate coherent descriptions of video quality level, the second stage focuses on fine-tuning the quality regression module. The goal here is to enable the model to output stable and precise numerical quality scores (MOS-like predictions). The quality regression model takes the last-hidden-state features from the LLM as input and generates quality scores from four perspectives. The training objective uses an L1 loss function to minimize the difference between the predicted quality score and the ground-truth MOS." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 642, + 556, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 642, + 556, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 642, + 556, + 715 + ], + "type": "text", + "content": "Pairwise Comparison Fine-tuning. The third stage mainly focuses on integrating the pairwise comparison into the training pipeline. As shown in Figure 5(b), the two input videos of a pair share network weights within the same batch. We design a judge network inspired by LPIPS [82] to determine which video performs better. This network leverages" + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "text", + "content": "18874" + } + ] + } + ], + "index": 10 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 58, + 76, + 298, + 283 + ], + "blocks": [ + { + "bbox": [ + 56, + 65, + 294, + 76 + ], + "lines": [ + { + "bbox": [ + 56, + 65, + 294, + 76 + ], + "spans": [ + { + "bbox": [ + 56, + 65, + 294, + 76 + ], + "type": "text", + "content": "Table 4. Performance comparisons on LGVQ [84] and FETV [43]." + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 58, + 76, + 298, + 283 + ], + "lines": [ + { + "bbox": [ + 58, + 76, + 298, + 283 + ], + "spans": [ + { + "bbox": [ + 58, + 76, + 298, + 283 + ], + "type": "table", + "html": "
| Aspects | Methods | LGVQ | | | FETV | | |
| | | SRCC | PLCC | KRCC | SRCC | PLCC | KRCC |
| Spatial | MUSIQ [26] | 0.669 | 0.682 | 0.491 | 0.722 | 0.758 | 0.613 |
| | StairIQA [59] | 0.701 | 0.737 | 0.521 | 0.806 | 0.812 | 0.643 |
| | CLIP-IQA [62] | 0.684 | 0.709 | 0.502 | 0.741 | 0.767 | 0.619 |
| | LIQE [83] | 0.721 | 0.752 | 0.538 | 0.765 | 0.799 | 0.635 |
| | UGVQ [84] | 0.759 | 0.795 | 0.567 | 0.841 | 0.841 | 0.685 |
| | AIGV-Assessor (Ours) | 0.803 | 0.819 | 0.617 | 0.853 | 0.856 | 0.699 |
| | Improvement | +4.4% | +2.4% | +5.0% | +1.2% | +1.5% | +1.4% |
| Temporal | VSFA [35] | 0.841 | 0.857 | 0.643 | 0.839 | 0.859 | 0.705 |
| | SimpleVQA [57] | 0.857 | 0.867 | 0.659 | 0.852 | 0.862 | 0.726 |
| | FastVQA [70] | 0.849 | 0.843 | 0.647 | 0.842 | 0.847 | 0.714 |
| | DOVER [71] | 0.867 | 0.878 | 0.672 | 0.868 | 0.881 | 0.731 |
| | UGVQ [84] | 0.893 | 0.907 | 0.703 | 0.897 | 0.907 | 0.753 |
| | AIGV-Assessor (Ours) | 0.900 | 0.920 | 0.717 | 0.936 | 0.940 | 0.815 |
| | Improvement | +0.7% | +1.3% | +1.4% | +3.9% | +3.3% | +6.2% |
| Alignment | CLIPScore [22] | 0.446 | 0.453 | 0.301 | 0.607 | 0.633 | 0.498 |
| | BLIPScore [37] | 0.455 | 0.464 | 0.319 | 0.616 | 0.645 | 0.505 |
| | ImageReward [78] | 0.498 | 0.499 | 0.344 | 0.657 | 0.687 | 0.519 |
| | PickScore [28] | 0.501 | 0.515 | 0.353 | 0.669 | 0.708 | 0.533 |
| | HPSv2 [74] | 0.504 | 0.511 | 0.357 | 0.686 | 0.703 | 0.540 |
| | UGVQ [84] | 0.551 | 0.555 | 0.394 | 0.734 | 0.737 | 0.572 |
| | AIGV-Assessor (Ours) | 0.577 | 0.578 | 0.411 | 0.753 | 0.746 | 0.585 |
| | Improvement | +2.6% | +2.3% | +1.7% | +1.9% | +0.9% | +1.3% |
", + "image_path": "7529383b9f1fa9fd0ba3b690e7c094e086264f814a555984a0376c1e18c18b28.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 294, + 295, + 331 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 294, + 295, + 331 + ], + "spans": [ + { + "bbox": [ + 55, + 294, + 295, + 331 + ], + "type": "text", + "content": "learned features and evaluates the perceptual differences between the two videos, allowing more reliable quality assessments in video pair comparison." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 334, + 296, + 418 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 334, + 296, + 418 + ], + "spans": [ + { + "bbox": [ + 55, + 334, + 296, + 418 + ], + "type": "text", + "content": "Loss Function. In the first stage, the spatial and temporal projectors are trained to align visual and language features using language loss. The second stage refines the vision encoder, LLM, and quality regression module's scoring ability with an L1 loss. The third stage incorporates pairwise comparison training with cross-entropy loss to improve the model's performance on relative quality evaluation." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 428, + 137, + 441 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 428, + 137, + 441 + ], + "spans": [ + { + "bbox": [ + 55, + 428, + 137, + 441 + ], + "type": "text", + "content": "5. Experiments" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 448, + 173, + 462 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 448, + 173, + 462 + ], + "spans": [ + { + "bbox": [ + 55, + 448, + 173, + 462 + ], + "type": "text", + "content": "5.1. Experiment Settings" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 469, + 296, + 578 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 469, + 296, + 578 + ], + "spans": [ + { + "bbox": [ + 55, + 469, + 296, + 578 + ], + "type": "text", + "content": "Evaluation Datasets and Metrics. Our proposed method is validated on five AIGVQA datasets: AIGVQA-DB, LGVQ [84], FETV [43], T2VQA [31], and GAIA [13]. To evaluate the correlation between the predicted scores and the ground-truth MOSs, we utilize three evaluation criteria: Spearman Rank Correlation Coefficient (SRCC), Pearson Linear Correlation Coefficient (PLCC), and Kendall's Rank Correlation Coefficient (KRCC). For pair comparison, we adopt the comparison accuracy as the metric." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 582, + 296, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 582, + 296, + 715 + ], + "spans": [ + { + "bbox": [ + 55, + 582, + 296, + 715 + ], + "type": "text", + "content": "Reference Algorithms. To assess the performance of our proposed method, we select state-of-the-art evaluation metrics for comparison, which can be classified into five groups: (1) Handcrafted-based I/VQA models, including: NIQE [49], BRISQUE [48], QAC [80], BMPRI [47], HOSA [77], BPRI [46], HIGRADE [32], etc. (2) Action-related evaluation models, including: V-Dynamic [25], V-Smoothness [25] which are proposed in VBench [25]. (3) Vision-language pre-training models, including: CLIPScore [22], BLIPScore [37], AestheticScore [53], ImageReward [78], and UMTScore [43]. 
(4) LLM-based models, in-" + } + ] + } + ], + "index": 7 + }, + { + "type": "table", + "bbox": [ + 317, + 76, + 555, + 196 + ], + "blocks": [ + { + "bbox": [ + 331, + 65, + 537, + 76 + ], + "lines": [ + { + "bbox": [ + 331, + 65, + 537, + 76 + ], + "spans": [ + { + "bbox": [ + 331, + 65, + 537, + 76 + ], + "type": "text", + "content": "Table 5. Performance comparisons on T2VQA-DB [31]." + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 317, + 76, + 555, + 196 + ], + "lines": [ + { + "bbox": [ + 317, + 76, + 555, + 196 + ], + "spans": [ + { + "bbox": [ + 317, + 76, + 555, + 196 + ], + "type": "table", + "html": "
| Aspects | Methods | T2VQA-DB | | | Sora Testing | | |
| | | SRCC | PLCC | KRCC | SRCC | PLCC | KRCC |
| zero-shot | CLIPScore [22] | 0.1047 | 0.1277 | 0.0702 | 0.2116 | 0.1538 | 0.1406 |
| | BLIPScore [37] | 0.1659 | 0.1860 | 0.1112 | 0.2116 | 0.1038 | 0.1515 |
| | ImageReward [78] | 0.1875 | 0.2121 | 0.1266 | 0.0992 | 0.0415 | 0.0748 |
| | UMTScore [43] | 0.0676 | 0.0721 | 0.0453 | 0.2594 | 0.0840 | 0.1680 |
| finetuned | SimpleVQA [57] | 0.6275 | 0.6388 | 0.4466 | 0.0340 | 0.2344 | 0.0237 |
| | BVQA [37] | 0.7390 | 0.7486 | 0.5487 | 0.4235 | 0.2489 | 0.2635 |
| | FAST-VQA [70] | 0.7173 | 0.7295 | 0.5303 | 0.4301 | 0.2369 | 0.2939 |
| | DOVER [71] | 0.7609 | 0.7693 | 0.5704 | 0.4421 | 0.2689 | 0.2757 |
| | T2VQA [31] | 0.7965 | 0.8066 | 0.6058 | 0.6485 | 0.3124 | 0.4874 |
| | AIGV-Assessor (Ours) | 0.8131 | 0.8222 | 0.6364 | 0.6612 | 0.3318 | 0.5075 |
| | Improvement | +1.7% | +1.6% | +3.1% | +1.3% | +1.9% | +2.0% |
", + "image_path": "bc822205d538c82a0f131868139149ec3cd63807bd32a6e8e58103a9365c0c29.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "table_body" + } + ], + "index": 9 + }, + { + "type": "table", + "bbox": [ + 316, + 216, + 553, + 388 + ], + "blocks": [ + { + "bbox": [ + 342, + 205, + 525, + 215 + ], + "lines": [ + { + "bbox": [ + 342, + 205, + 525, + 215 + ], + "spans": [ + { + "bbox": [ + 342, + 205, + 525, + 215 + ], + "type": "text", + "content": "Table 6. Performance comparisons on GAIA [13]." + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 316, + 216, + 553, + 388 + ], + "lines": [ + { + "bbox": [ + 316, + 216, + 553, + 388 + ], + "spans": [ + { + "bbox": [ + 316, + 216, + 553, + 388 + ], + "type": "table", + "html": "
| Dimension | Subject | | Completeness | | Interaction | |
| Methods / Metrics | SRCC | PLCC | SRCC | PLCC | SRCC | PLCC |
| V-Smoothness [25] | 0.2402 | 0.1913 | 0.1474 | 0.1625 | 0.1741 | 0.1693 |
| V-Dynamic [25] | 0.1285 | 0.0831 | 0.0903 | 0.0682 | 0.1141 | 0.0758 |
| Action-Score [42] | 0.2023 | 0.1823 | 0.2867 | 0.2623 | 0.2689 | 0.2432 |
| Flow-Score [42] | 0.1471 | 0.1541 | 0.0816 | 0.1273 | 0.1041 | 0.1309 |
| CLIPScore [22] | 0.3398 | 0.3330 | 0.3944 | 0.3871 | 0.3875 | 0.3821 |
| BLIPScore [37] | 0.3453 | 0.3386 | 0.4174 | 0.4082 | 0.4044 | 0.3994 |
| LLaVAScore [41] | 0.3484 | 0.3436 | 0.4189 | 0.4133 | 0.4077 | 0.4025 |
| TLVQM [30] | 0.5037 | 0.5137 | 0.4127 | 0.4158 | 0.4079 | 0.4093 |
| VIDEVAL [60] | 0.5237 | 0.5446 | 0.4283 | 0.4375 | 0.4121 | 0.4234 |
| VSFA [35] | 0.5594 | 0.5762 | 0.4940 | 0.5017 | 0.4709 | 0.4811 |
| BVQA [37] | 0.5702 | 0.5888 | 0.4876 | 0.4946 | 0.4761 | 0.4825 |
| SimpleVQA [58] | 0.5920 | 0.5974 | 0.4981 | 0.5078 | 0.4843 | 0.4971 |
| FAST-VQA [70] | 0.6015 | 0.6092 | 0.5157 | 0.5215 | 0.5154 | 0.5216 |
| DOVER [71] | 0.6173 | 0.6301 | 0.5198 | 0.5323 | 0.5164 | 0.5278 |
| AIGV-Assessor (Ours) | 0.6842 | 0.6897 | 0.6635 | 0.6694 | 0.6329 | 0.6340 |
| Improvement | +6.7% | +6.0% | +14.4% | +13.7% | +11.65% | +10.6% |
", + "image_path": "b4d017fc4c6ba92f0cf7ec26ae660352542b38cc083117f0ce382e3991ccba80.jpg" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "table_body" + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 399, + 554, + 471 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 399, + 554, + 471 + ], + "spans": [ + { + "bbox": [ + 313, + 399, + 554, + 471 + ], + "type": "text", + "content": "cluding: Video-LLaVA [40], Video-ChatGPT [45], LLaVA-NeXT [36], VideoLLaMA2 [14], and Qwen2-VL [66]. (5) Deep learning-based I/VQA models, including: HyperIQA [56], MUSIQ [26], LIQE [83], VSFA [35], BVQA [33], SimpleVQA [58], FAST-VQA [70], DOVER [71], and Q-Align [72]." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 475, + 555, + 643 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 475, + 555, + 643 + ], + "spans": [ + { + "bbox": [ + 313, + 475, + 555, + 643 + ], + "type": "text", + "content": "Training Settings. Traditional handcrafted models are directly evaluated on the corresponding databases, and the average score of all frames is calculated. For vision-language pre-training and LLM-based models, we load the pre-trained weights for inference. CLIPscore [22], BLIP-score [37], and other vision-language pre-training models are calculated directly as the average cosine similarity between text and each video frame. SimpleVQA [58], BVQA [33], FAST-VQA [70], DOVER [71], and Q-Align [72] are fine-tuned on every test dataset. For deep learning-based IQA and VQA models, all experiments for each method are retrained on each dimension using the same training and testing split as the previous literature at a ratio of 4:1. All results are averaged after ten random splits." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 650, + 434, + 662 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 650, + 434, + 662 + ], + "spans": [ + { + "bbox": [ + 313, + 650, + 434, + 662 + ], + "type": "text", + "content": "5.2. Results and Analysis" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 313, + 666, + 554, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 666, + 554, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 666, + 554, + 713 + ], + "type": "text", + "content": "Table 3 presents the pairwise win rates and the score prediction correlation between predicted results and human ground truths. 
The results indicate that handcrafted methods consistently underperform across all four evalu-" + } + ] + } + ], + "index": 15 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "text", + "content": "18875" + } + ] + } + ], + "index": 16 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 56, + 45, + 558, + 147 + ], + "blocks": [ + { + "bbox": [ + 56, + 45, + 558, + 147 + ], + "lines": [ + { + "bbox": [ + 56, + 45, + 558, + 147 + ], + "spans": [ + { + "bbox": [ + 56, + 45, + 558, + 147 + ], + "type": "image", + "image_path": "3a323b11ba3b97928a16f87d4016f9fd2550b9e4d10c010126e4ee85c42f694d.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 55, + 147, + 555, + 170 + ], + "lines": [ + { + "bbox": [ + 55, + 147, + 555, + 170 + ], + "spans": [ + { + "bbox": [ + 55, + 147, + 555, + 170 + ], + "type": "text", + "content": "Figure 6. Comparison of win rates of different generation models across four dimensions evaluated by different VQA methods, demonstrating that our AIGV-Assessor has better win-rate evaluation ability aligned with Ground Truth (GT)." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "table", + "bbox": [ + 56, + 189, + 553, + 274 + ], + "blocks": [ + { + "bbox": [ + 187, + 178, + 423, + 189 + ], + "lines": [ + { + "bbox": [ + 187, + 178, + 423, + 189 + ], + "spans": [ + { + "bbox": [ + 187, + 178, + 423, + 189 + ], + "type": "text", + "content": "Table 7. Ablation study of the proposed AIGV-Assessor method." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 56, + 189, + 553, + 274 + ], + "lines": [ + { + "bbox": [ + 56, + 189, + 553, + 274 + ], + "spans": [ + { + "bbox": [ + 56, + 189, + 553, + 274 + ], + "type": "table", + "html": "
| No. | Feature & Strategy | | | | Static Quality | | | Temporal Smoothness | | | Dynamic Degree | | | T2V Correspondence | | |
| | spatial | temporal | quality level | LoRA finetuning | SRCC | PLCC | KRCC | SRCC | PLCC | KRCC | SRCC | PLCC | KRCC | SRCC | PLCC | KRCC |
| (1) | | | | | 0.864 | 0.866 | 0.726 | 0.870 | 0.868 | 0.727 | 0.556 | 0.572 | 0.432 | 0.616 | 0.620 | 0.492 |
| (2) | | | | | 0.874 | 0.876 | 0.723 | 0.875 | 0.876 | 0.736 | 0.558 | 0.573 | 0.431 | 0.723 | 0.734 | 0.533 |
| (3) | | | | | 0.887 | 0.884 | 0.722 | 0.881 | 0.883 | 0.706 | 0.562 | 0.575 | 0.433 | 0.739 | 0.758 | 0.544 |
| (4) | | | | | 0.887 | 0.888 | 0.753 | 0.917 | 0.910 | 0.796 | 0.569 | 0.536 | 0.438 | 0.688 | 0.673 | 0.557 |
| (5) | | | | | 0.905 | 0.908 | 0.754 | 0.919 | 0.917 | 0.799 | 0.589 | 0.587 | 0.441 | 0.742 | 0.763 | 0.549 |
| (6) | | | | | 0.916 | 0.919 | 0.758 | 0.923 | 0.922 | 0.804 | 0.609 | 0.608 | 0.444 | 0.750 | 0.770 | 0.559 |
", + "image_path": "0f6cb2fe3e785491eca62a8a3ab3b074b73d70560fa69d07aff269862d26be66.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "bbox": [ + 54, + 282, + 295, + 472 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 282, + 295, + 472 + ], + "spans": [ + { + "bbox": [ + 54, + 282, + 295, + 472 + ], + "type": "text", + "content": "ation perspectives. Vision-language pre-training methods such as CLIPscore [22] and BLIPscore [37] demonstrate moderate performance but are still surpassed by more specialized and fine-tuned VQA models. Specifically, deep learning-based models like FAST-VQA [70] and DOVER [71] achieve more competitive performances after fin-tuning. However, they are still far away from satisfactory. Notably, most VQA models perform better on quality evaluation than on text-video correspondence, as they lack text prompts input used in video generation, making it challenging to extract relation features from the AI-generated videos, which inevitably leads to the performance drop. Finally, the performance exploration of recent LMMs on our database shows that current LMMs are able to produce meaningful evaluations, which can motivate future works to further explore the use of LMMs for AIGV assessment." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 474, + 295, + 629 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 474, + 295, + 629 + ], + "spans": [ + { + "bbox": [ + 55, + 474, + 295, + 629 + ], + "type": "text", + "content": "The proposed AIGV-Assessor achieves the best performance compared to the competitors for both MOS prediction and pair ranking tasks in terms of all four dimensions. To further validate the effectiveness and generalizability of our proposed model, we also evaluate it on four other AIGVQA datasets [13, 31, 43, 84]. From Tables 4-6, we observe that AIGV-Assessor consistently achieves the best performance across these datasets. As shown in Figure 6, AIGV-Assessor achieves the highest overlap in area with Ground Truth (GT), indicating that AIGV-Assessor can reliably perform T2V model benchmarking, outperforming other assessment models in discerning quality differences in AI-generated videos." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 635, + 149, + 649 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 635, + 149, + 649 + ], + "spans": [ + { + "bbox": [ + 55, + 635, + 149, + 649 + ], + "type": "text", + "content": "5.3. Ablation Study" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 653, + 296, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 653, + 296, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 653, + 296, + 714 + ], + "type": "text", + "content": "We conduct ablation experiments to verify the effectiveness of the main components in our AIGV-Assessor method, including the spatial feature, the temporal feature, the quality level, and the LoRA finetuning strategy. Additionally, we assess how each feature contributes to the performance" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 282, + 555, + 437 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 282, + 555, + 437 + ], + "spans": [ + { + "bbox": [ + 313, + 282, + 555, + 437 + ], + "type": "text", + "content": "across different quality dimensions. The results of these experiments are summarized in Table 7. 
Experiments (1), (2), and (3) validate the effectiveness of the quality regression module and the LoRA finetuning strategy, confirming that fine-tuning and quality regression significantly enhance model performance over only regressing the generated text outputs from the LLM. The addition of temporal features, as seen in Experiments (4), (5), and (6), significantly improves model performance. Experiment (6), which integrates all components, yields the best overall performance, showing that the combination of spatial and temporal features, quality level prediction, and LoRA finetuning provides the most robust and accurate AIGV assessment." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 314, + 460, + 388, + 474 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 460, + 388, + 474 + ], + "spans": [ + { + "bbox": [ + 314, + 460, + 388, + 474 + ], + "type": "text", + "content": "6. Conclusion" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 486, + 555, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 486, + 555, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 486, + 555, + 713 + ], + "type": "text", + "content": "In this paper, we study the human visual preference evaluation problem for AIGVs. We first construct AIGVQA-DB, which includes 36,576 videos generated based on 1,048 diverse text prompts, with the MOSs and pair comparisons evaluated from four perspectives. Our detailed manual evaluations reflect different aspects of human visual preferences on AIGVs and reveal critical insights into the strengths and weaknesses of various text-to-video models. Based on the database, we evaluate the performance of state-of-the-art quality evaluation models and establish a new benchmark, revealing their limitations in measuring the perceptual preference of AIGVs. Finally, we propose AIGV-Assessor, a novel VQA model that leverages the capabilities of LMMs to give quality levels, predict quality scores, and compare preferences from four dimensions. Extensive experiments demonstrate that AIGV-Assessor achieves state-of-the-art performance on both AIGVQA-DB and other AIGVQA benchmarks, validating its robustness in understanding and evaluating AI-generated videos." + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "type": "text", + "content": "18876" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 71, + 115, + 83 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 71, + 115, + 83 + ], + "spans": [ + { + "bbox": [ + 56, + 71, + 115, + 83 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 57, + 91, + 295, + 712 + ], + "type": "list", + "angle": 0, + "index": 17, + "blocks": [ + { + "bbox": [ + 61, + 91, + 294, + 112 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 91, + 294, + 112 + ], + "spans": [ + { + "bbox": [ + 61, + 91, + 294, + 112 + ], + "type": "text", + "content": "[1] Hotshot-XL. 
https://github.com/hotshotco/hotshot-xl, 2023. 2, 3" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 62, + 115, + 295, + 135 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 115, + 295, + 135 + ], + "spans": [ + { + "bbox": [ + 62, + 115, + 295, + 135 + ], + "type": "text", + "content": "[2] Floor33. https://discord.qq/EuB9KT6H, 2023. 2, 3" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 62, + 138, + 255, + 148 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 138, + 255, + 148 + ], + "spans": [ + { + "bbox": [ + 62, + 138, + 255, + 148 + ], + "type": "text", + "content": "[3] Genmo. https://www.genmo.ai, 2024. 2, 3" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 62, + 150, + 294, + 171 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 150, + 294, + 171 + ], + "spans": [ + { + "bbox": [ + 62, + 150, + 294, + 171 + ], + "type": "text", + "content": "[4] Gen-2. https://research.runwayml.com/gen2, 2024. 2, 3" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 62, + 173, + 280, + 184 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 173, + 280, + 184 + ], + "spans": [ + { + "bbox": [ + 62, + 173, + 280, + 184 + ], + "type": "text", + "content": "[5] Moonvalley. https://moonvalley.ai, 2024. 2, 3" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 62, + 186, + 294, + 205 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 186, + 294, + 205 + ], + "spans": [ + { + "bbox": [ + 62, + 186, + 294, + 205 + ], + "type": "text", + "content": "[6] Morph Studio. https://www.morphstudio.com, 2024. 2, 3, 5" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 62, + 209, + 294, + 240 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 209, + 294, + 240 + ], + "spans": [ + { + "bbox": [ + 62, + 209, + 294, + 240 + ], + "type": "text", + "content": "[7] Sora. https://openai.com/research/video-generation-models-as-world-simulators, 2024. 2, 3, 5" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 62, + 243, + 294, + 297 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 243, + 294, + 297 + ], + "spans": [ + { + "bbox": [ + 62, + 243, + 294, + 297 + ], + "type": "text", + "content": "[8] Max Bain, Arsha Nagrani, Gül Varol, and Andrew Zisserman. Frozen in time: A joint video and image encoder for end-to-end retrieval. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 1728-1738, 2021. 2, 3" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 62, + 300, + 294, + 353 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 300, + 294, + 353 + ], + "spans": [ + { + "bbox": [ + 62, + 300, + 294, + 353 + ], + "type": "text", + "content": "[9] Andreas Blattmann, Tim Dockhorn, Sumith Kulal, Daniel Mendelevitch, Maciej Kilian, Dominik Lorenz, Yam Levi, Zion English, Vikram Voleti, Adam Letts, et al. Stable video diffusion: Scaling latent video diffusion models to large datasets. arXiv preprint arXiv:2311.15127, 2023. 2, 5" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 57, + 355, + 294, + 399 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 355, + 294, + 399 + ], + "spans": [ + { + "bbox": [ + 57, + 355, + 294, + 399 + ], + "type": "text", + "content": "[10] Yuqin Cao, Xiongkuo Min, Yixuan Gao, Wei Sun, Weisi Lin, and Guangtao Zhai. Unqa: Unified no-reference quality assessment for audio, image, video, and audio-visual content. 
arXiv preprint arXiv:2407.19704, 2024. 1" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 57, + 400, + 294, + 443 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 400, + 294, + 443 + ], + "spans": [ + { + "bbox": [ + 57, + 400, + 294, + 443 + ], + "type": "text", + "content": "[11] Yuqin Cao, Xiongkuo Min, Yixuan Gao, Wei Sun, and Guangtao Zhai. Agav-rater: Adapting large multimodal model for ai-generated audio-visual quality assessment. arXiv preprint arXiv:2501.18314, 2025. 1" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 57, + 446, + 294, + 499 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 446, + 294, + 499 + ], + "spans": [ + { + "bbox": [ + 57, + 446, + 294, + 499 + ], + "type": "text", + "content": "[12] Haoxin Chen, Menghan Xia, Yingqing He, Yong Zhang, Xiaodong Cun, Shaoshu Yang, Jinbo Xing, Yaofang Liu, Qifeng Chen, Xintao Wang, Chao Weng, and Ying Shan. Videocrafter1: Open diffusion models for high-quality video generation. arXiv preprint arXiv:2310.19512, 2023. 1, 2" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 57, + 502, + 294, + 555 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 502, + 294, + 555 + ], + "spans": [ + { + "bbox": [ + 57, + 502, + 294, + 555 + ], + "type": "text", + "content": "[13] Zijian Chen, Wei Sun, Yuan Tian, Jun Jia, Zicheng Zhang, Jiarui Wang, Ru Huang, Xiongkuo Min, Guangtao Zhai, and Wenjun Zhang. Gaia: Rethinking action quality assessment for ai-generated videos. arXiv preprint arXiv:2406.06087, 2024. 1, 3, 7, 8" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 57, + 557, + 294, + 612 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 557, + 294, + 612 + ], + "spans": [ + { + "bbox": [ + 57, + 557, + 294, + 612 + ], + "type": "text", + "content": "[14] Zesen Cheng, Sicong Leng, Hang Zhang, Yifei Xin, Xin Li, Guanzheng Chen, Yongxin Zhu, Wenqi Zhang, Ziyang Luo, Deli Zhao, and Lidong Bing. Videollama 2: Advancing spatial-temporal modeling and audio understanding in video-llms. arXiv preprint arXiv:2406.07476, 2024. 6, 7" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 57, + 614, + 294, + 656 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 614, + 294, + 656 + ], + "spans": [ + { + "bbox": [ + 57, + 614, + 294, + 656 + ], + "type": "text", + "content": "[15] Iya Chivileva, Philip Lynch, Tomas E Ward, and Alan F Smeaton. Measuring the quality of text-to-video model outputs: Metrics and dataset. arXiv preprint arXiv:2309.08009, 2023. 3" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 57, + 658, + 294, + 712 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 658, + 294, + 712 + ], + "spans": [ + { + "bbox": [ + 57, + 658, + 294, + 712 + ], + "type": "text", + "content": "[16] Ming Ding, Wendi Zheng, Wenyi Hong, and Jie Tang. Cogview2: Faster and better text-to-image generation via hierarchical transformers. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), pages 16890-16902, 2022. 
3" + } + ] + } + ], + "index": 16 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 73, + 553, + 712 + ], + "type": "list", + "angle": 0, + "index": 30, + "blocks": [ + { + "bbox": [ + 316, + 73, + 553, + 127 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 73, + 553, + 127 + ], + "spans": [ + { + "bbox": [ + 316, + 73, + 553, + 127 + ], + "type": "text", + "content": "[17] Huiyu Duan, Xiongkuo Min, Yucheng Zhu, Guangtao Zhai, Xiaokang Yang, and Patrick Le Callet. Confusing image quality assessment: Toward better augmented reality experience. IEEE Transactions on Image Processing (TIP), 31: 7206-7221, 2022. 4" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 317, + 130, + 553, + 194 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 130, + 553, + 194 + ], + "spans": [ + { + "bbox": [ + 317, + 130, + 553, + 194 + ], + "type": "text", + "content": "[18] Huiyu Duan, Qiang Hu, Jiarui Wang, Liu Yang, Zitong Xu, Lu Liu, Xiongkuo Min, Chunlei Cai, Tianxiao Ye, Xiaoyun Zhang, et al. Finevq: Fine-grained user generated content video quality assessment. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2025. 1" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 317, + 197, + 553, + 240 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 197, + 553, + 240 + ], + "spans": [ + { + "bbox": [ + 317, + 197, + 553, + 240 + ], + "type": "text", + "content": "[19] Christoph Feichtenhofer, Haoqi Fan, Jitendra Malik, and Kaiming He. Slowfast networks for video recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 6202-6211, 2019. 5" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 317, + 243, + 553, + 285 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 243, + 553, + 285 + ], + "spans": [ + { + "bbox": [ + 317, + 243, + 553, + 285 + ], + "type": "text", + "content": "[20] Yuwei Guo, Ceyuan Yang, Anyi Rao, Yaohui Wang, Yu Qiao, Dahua Lin, and Bo Dai. Animatediff:Animate your personalized text-to-image diffusion models without specific tuning. arXiv preprint arXiv:2307.04725, 2023. 2" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 316, + 288, + 553, + 330 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 288, + 553, + 330 + ], + "spans": [ + { + "bbox": [ + 316, + 288, + 553, + 330 + ], + "type": "text", + "content": "[21] Yingqing He, Tianyu Yang, Yong Zhang, Ying Shan, and Qifeng Chen. Latent video diffusion models for high-fidelity video generation with arbitrary lengths. arXiv preprint arXiv:2211.13221, 2022. 2, 3, 5" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 316, + 333, + 553, + 387 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 333, + 553, + 387 + ], + "spans": [ + { + "bbox": [ + 316, + 333, + 553, + 387 + ], + "type": "text", + "content": "[22] Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Le Bras, and Yejin Choi. Clipscore: A reference-free evaluation metric for image captioning. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7514-7528, 2021. 
1, 6, 7, 8" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 316, + 389, + 553, + 431 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 389, + 553, + 431 + ], + "spans": [ + { + "bbox": [ + 316, + 389, + 553, + 431 + ], + "type": "text", + "content": "[23] Wenyi Hong, Ming Ding, Wendi Zheng, Xinghan Liu, and Jie Tang. Cogvideo: Large-scale pretraining for text-to-video generation via transformers. arXiv preprint arXiv:2205.15868, 2022. 1, 2, 3" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 316, + 434, + 553, + 477 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 434, + 553, + 477 + ], + "spans": [ + { + "bbox": [ + 316, + 434, + 553, + 477 + ], + "type": "text", + "content": "[24] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021. 6" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 317, + 480, + 553, + 555 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 480, + 553, + 555 + ], + "spans": [ + { + "bbox": [ + 317, + 480, + 553, + 555 + ], + "type": "text", + "content": "[25] Ziqi Huang, Yinan He, Jiashuo Yu, Fan Zhang, Chenyang Si, Yuming Jiang, Yuanhan Zhang, Tianxing Wu, Qingyang Jin, Nattapol Chanpaisit, Yaohui Wang, Xinyuan Chen, Limin Wang, Dahua Lin, Yu Qiao, and Ziwei Liu. VBenchmark: Comprehensive benchmark suite for video generative models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024. 1, 6, 7" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 317, + 558, + 553, + 601 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 558, + 553, + 601 + ], + "spans": [ + { + "bbox": [ + 317, + 558, + 553, + 601 + ], + "type": "text", + "content": "[26] Junjie Ke, Qifei Wang, Yilin Wang, Peyman Milanfar, and Feng Yang. Musiq: Multi-scale image quality transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 5128-5137, 2021. 6, 7" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 317, + 603, + 553, + 668 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 603, + 553, + 668 + ], + "spans": [ + { + "bbox": [ + 317, + 603, + 553, + 668 + ], + "type": "text", + "content": "[27] Levon Khachatryan, Andranik Movsisyan, Vahram Tadevosyan, Roberto Henschel, Zhangyang Wang, Shant Navasardyan, and Humphrey Shi. Text2video-zero: Text-to-image diffusion models are zero-shot video generators. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 15954-15964, 2023. 1, 2, 3" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 316, + 670, + 553, + 712 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 670, + 553, + 712 + ], + "spans": [ + { + "bbox": [ + 316, + 670, + 553, + 712 + ], + "type": "text", + "content": "[28] Yuval Kirstain, Adam Poliak, Uriel Singer, and Omer Levy. Pick-a-pic: An open dataset of user preferences for text-to-image generation. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), 2023. 
7" + } + ] + } + ], + "index": 29 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "text", + "content": "18877" + } + ] + } + ], + "index": 31 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 294, + 713 + ], + "type": "list", + "angle": 0, + "index": 13, + "blocks": [ + { + "bbox": [ + 56, + 72, + 294, + 127 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 72, + 294, + 127 + ], + "spans": [ + { + "bbox": [ + 56, + 72, + 294, + 127 + ], + "type": "text", + "content": "[29] Yuval Kirstain, Adam Polyak, Uriel Singer, Shahbuland Martiana, Joe Penna, and Omer Levy. Pick-a-pic: An open dataset of user preferences for text-to-image generation. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), pages 36652-36663, 2023. 3" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 129, + 294, + 162 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 129, + 294, + 162 + ], + "spans": [ + { + "bbox": [ + 56, + 129, + 294, + 162 + ], + "type": "text", + "content": "[30] Jari Korhonen. Two-level approach for no-reference consumer video quality assessment. IEEE Transactions on Image Processing (TIP), 28(12):5923-5938, 2019. 7" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 163, + 294, + 217 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 163, + 294, + 217 + ], + "spans": [ + { + "bbox": [ + 56, + 163, + 294, + 217 + ], + "type": "text", + "content": "[31] Tengchuan Kou, Xiaohong Liu, Zicheng Zhang, Chunyi Li, Haoning Wu, Xiongkuo Min, Guangtao Zhai, and Ning Liu. Subjective-aligned dataset and metric for text-to-video quality assessment. arXiv preprint arXiv:2403.11956, 2024. 1, 3, 7, 8" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 56, + 220, + 294, + 262 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 220, + 294, + 262 + ], + "spans": [ + { + "bbox": [ + 56, + 220, + 294, + 262 + ], + "type": "text", + "content": "[32] Debarati Kundu, Deepti Ghadiyaram, Alan C Bovik, and Brian L Evans. Large-scale crowdsourced study for tonemapbed hdr pictures. IEEE Transactions on Image Processing (TIP), pages 4725-4740, 2017. 7" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 56, + 264, + 294, + 319 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 264, + 294, + 319 + ], + "spans": [ + { + "bbox": [ + 56, + 264, + 294, + 319 + ], + "type": "text", + "content": "[33] Bowen Li, Weixia Zhang, Meng Tian, Guangtao Zhai, and Xianpei Wang. Blindly assess quality of in-the-wild videos via quality-aware pre-training and motion perception. IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), 32(9):5944-5958, 2022. 1, 6, 7" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 56, + 321, + 294, + 374 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 321, + 294, + 374 + ], + "spans": [ + { + "bbox": [ + 56, + 321, + 294, + 374 + ], + "type": "text", + "content": "[34] Chunyi Li, Zicheng Zhang, Haoning Wu, Wei Sun, Xiongkuo Min, Xiaohong Liu, Guangtao Zhai, and Weisi Lin. Agiqa-3k: An open database for ai-generated image quality assessment. 
IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), 2023. 3" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 56, + 376, + 294, + 419 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 376, + 294, + 419 + ], + "spans": [ + { + "bbox": [ + 56, + 376, + 294, + 419 + ], + "type": "text", + "content": "[35] Dingquan Li, Tingting Jiang, and Ming Jiang. Quality assessment of in-the-wild videos. In Proceedings of the ACM International Conference on Multimedia (ACMMM). ACM, 2019. 1, 6, 7" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 56, + 422, + 294, + 465 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 422, + 294, + 465 + ], + "spans": [ + { + "bbox": [ + 56, + 422, + 294, + 465 + ], + "type": "text", + "content": "[36] Feng Li, Renrui Zhang, Hao Zhang, Yuanhan Zhang, Bo Li, Wei Li, Zejun Ma, and Chunyuan Li. Llava next-interleave: Tackling multi-image, video, and 3d in large multimodal models. arXiv preprint arXiv:2407.07895, 2024. 6, 7" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 56, + 467, + 294, + 521 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 467, + 294, + 521 + ], + "spans": [ + { + "bbox": [ + 56, + 467, + 294, + 521 + ], + "type": "text", + "content": "[37] Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In Proceedings of the International Conference on Machine Learning (ICML), pages 12888-12900. PMLR, 2022. 1, 6, 7, 8" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 56, + 523, + 294, + 587 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 523, + 294, + 587 + ], + "spans": [ + { + "bbox": [ + 56, + 523, + 294, + 587 + ], + "type": "text", + "content": "[38] Yuncheng Li, Yale Song, Liangliang Cao, Joel Tetreault, Larry Goldberg, Alejandro Jaimes, and Jiebo Luo. Tgif: A new dataset and benchmark on animated gif description. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 4641-4650, 2016. 2, 3" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 56, + 590, + 294, + 655 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 590, + 294, + 655 + ], + "spans": [ + { + "bbox": [ + 56, + 590, + 294, + 655 + ], + "type": "text", + "content": "[39] Youwei Liang, Junfeng He, Gang Li, Peizhao Li, Arseniy Klimovskiy, Nicholas Carolan, Jiao Sun, Jordi Pont-Tuset, Sarah Young, Feng Yang, et al. Rich human feedback for text-to-image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 19401-19411, 2024. 3" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 56, + 657, + 294, + 700 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 657, + 294, + 700 + ], + "spans": [ + { + "bbox": [ + 56, + 657, + 294, + 700 + ], + "type": "text", + "content": "[40] Bin Lin, Bin Zhu, Yang Ye, Munan Ning, Peng Jin, and Li Yuan. Video-llava: Learning united visual representation by alignment before projection. arXiv preprint arXiv:2311.10122, 2023. 6, 7" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 56, + 702, + 294, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 702, + 294, + 713 + ], + "spans": [ + { + "bbox": [ + 56, + 702, + 294, + 713 + ], + "type": "text", + "content": "[41] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee." 
+ } + ] + } + ], + "index": 12 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 73, + 553, + 713 + ], + "type": "list", + "angle": 0, + "index": 27, + "blocks": [ + { + "bbox": [ + 333, + 73, + 553, + 95 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 333, + 73, + 553, + 95 + ], + "spans": [ + { + "bbox": [ + 333, + 73, + 553, + 95 + ], + "type": "text", + "content": "Visual instruction tuning. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), 2024. 7" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 316, + 96, + 553, + 150 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 96, + 553, + 150 + ], + "spans": [ + { + "bbox": [ + 316, + 96, + 553, + 150 + ], + "type": "text", + "content": "[42] Yaofang Liu, Xiaodong Cun, Xuebo Liu, Xintao Wang, Yong Zhang, Haoxin Chen, Yang Liu, Tieyong Zeng, Raymond Chan, and Ying Shan. Evalcrafter: Benchmarking and evaluating large video generation models. arXiv preprint arXiv:2310.11440, 2023. 3, 7" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 316, + 152, + 553, + 217 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 152, + 553, + 217 + ], + "spans": [ + { + "bbox": [ + 316, + 152, + 553, + 217 + ], + "type": "text", + "content": "[43] Yuanxin Liu, Lei Li, Shuhuai Ren, Rundong Gao, Shicheng Li, Sishuo Chen, Xu Sun, and Lu Hou. Fetv: A benchmark for fine-grained evaluation of open-domain text-to-video generation. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), 2024. 1, 2, 3, 6, 7, 8" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 316, + 220, + 553, + 285 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 220, + 553, + 285 + ], + "spans": [ + { + "bbox": [ + 316, + 220, + 553, + 285 + ], + "type": "text", + "content": "[44] Zhengxiong Luo, Dayou Chen, Yingya Zhang, Yan Huang, Liang Wang, Yujun Shen, Deli Zhao, Jingren Zhou, and Tieniu Tan. Videofusion: Decomposed diffusion models for high-quality video generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10209-10218, 2023. 1, 2, 3" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 316, + 286, + 553, + 341 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 286, + 553, + 341 + ], + "spans": [ + { + "bbox": [ + 316, + 286, + 553, + 341 + ], + "type": "text", + "content": "[45] Muhammad Maaz, Hanoona Rasheed, Salman Khan, and Fahad Shahbaz Khan. Video-chatgpt: Towards detailed video understanding via large vision and language models. In Proceedings of the Association for Computational Linguistics (ACL), 2024. 6, 7" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 316, + 342, + 553, + 386 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 342, + 553, + 386 + ], + "spans": [ + { + "bbox": [ + 316, + 342, + 553, + 386 + ], + "type": "text", + "content": "[46] Xiongkuo Min, Ke Gu, Guangtao Zhai, Jing Liu, Xiaokang Yang, and Chang Wen Chen. Blind quality assessment based on pseudo-reference image. IEEE Transactions on Multimedia (TMM), pages 2049-2062, 2017. 
6, 7" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 316, + 388, + 553, + 431 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 388, + 553, + 431 + ], + "spans": [ + { + "bbox": [ + 316, + 388, + 553, + 431 + ], + "type": "text", + "content": "[47] Xiongkuo Min, Guangtao Zhai, Ke Gu, Yutao Liu, and Xiaokang Yang. Blind image quality estimation via distortion aggravation. IEEE Transactions on Broadcasting (TBC), pages 508-517, 2018. 6, 7" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 316, + 434, + 553, + 476 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 434, + 553, + 476 + ], + "spans": [ + { + "bbox": [ + 316, + 434, + 553, + 476 + ], + "type": "text", + "content": "[48] Anish Mittal, Anush Krishna Moorthy, and Alan Conrad Bovik. No-reference image quality assessment in the spatial domain. IEEE Transactions on Image Processing (TIP), pages 4695-4708, 2012. 6, 7" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 316, + 478, + 553, + 510 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 478, + 553, + 510 + ], + "spans": [ + { + "bbox": [ + 316, + 478, + 553, + 510 + ], + "type": "text", + "content": "[49] Anish Mittal, Rajiv Soundararajan, and Alan C Bovik. Making a “completely blind” image quality analyzer. IEEE Signal Processing Letters (SPL), pages 209–212, 2012. 6, 7" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 316, + 513, + 553, + 555 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 513, + 553, + 555 + ], + "spans": [ + { + "bbox": [ + 316, + 513, + 553, + 555 + ], + "type": "text", + "content": "[50] Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125, 1 (2):3, 2022. 3" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 316, + 557, + 553, + 612 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 557, + 553, + 612 + ], + "spans": [ + { + "bbox": [ + 316, + 557, + 553, + 612 + ], + "type": "text", + "content": "[51] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10684-10695, 2022. 3" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 316, + 613, + 553, + 657 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 613, + 553, + 657 + ], + "spans": [ + { + "bbox": [ + 316, + 613, + 553, + 657 + ], + "type": "text", + "content": "[52] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), 2016. 1" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 316, + 658, + 553, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 658, + 553, + 713 + ], + "spans": [ + { + "bbox": [ + 316, + 658, + 553, + 713 + ], + "type": "text", + "content": "[53] Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. Laion-5b: An open large-scale dataset for training next generation image-text models. 
In Proceedings of the Ad" + } + ] + } + ], + "index": 26 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 317, + 757 + ], + "type": "text", + "content": "18878" + } + ] + } + ], + "index": 28 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 295, + 715 + ], + "type": "list", + "angle": 0, + "index": 14, + "blocks": [ + { + "bbox": [ + 77, + 72, + 294, + 95 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 77, + 72, + 294, + 95 + ], + "spans": [ + { + "bbox": [ + 77, + 72, + 294, + 95 + ], + "type": "text", + "content": "vances in Neural Information Processing Systems (NeurIPS), pages 25278-25294, 2022. 1, 6, 7" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 57, + 96, + 295, + 129 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 96, + 295, + 129 + ], + "spans": [ + { + "bbox": [ + 57, + 96, + 295, + 129 + ], + "type": "text", + "content": "[54] BT Series. Methodology for the subjective assessment of the quality of television pictures. Recommendation ITU-R BT, pages 500-13, 2012. 4" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 130, + 295, + 184 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 130, + 295, + 184 + ], + "spans": [ + { + "bbox": [ + 56, + 130, + 295, + 184 + ], + "type": "text", + "content": "[55] Uriel Singer, Adam Polyak, Thomas Hayes, Xi Yin, Jie An, Songyang Zhang, Qiyuan Hu, Harry Yang, Oron Ashual, Oran Gafni, et al. Make-a-video: Text-to-video generation without text-video data. arXiv preprint arXiv:2209.14792, 2022. 1, 2, 3" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 56, + 186, + 295, + 241 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 186, + 295, + 241 + ], + "spans": [ + { + "bbox": [ + 56, + 186, + 295, + 241 + ], + "type": "text", + "content": "[56] Shaolin Su, Qingsen Yan, Yu Zhu, Cheng Zhang, Xin Ge, Jinqiu Sun, and Yanning Zhang. Blindly assess image quality in the wild guided by a self-adaptive hyper network. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 6, 7" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 56, + 243, + 295, + 297 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 243, + 295, + 297 + ], + "spans": [ + { + "bbox": [ + 56, + 243, + 295, + 297 + ], + "type": "text", + "content": "[57] Wei Sun, Xiongkuo Min, Wei Lu, and Guangtao Zhai. A deep learning based no-reference quality assessment model for UGC videos. In Proceedings of the 30th ACM International Conference on Multimedia (ACMMM), page 856-865, 2022. 1, 6, 7" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 56, + 298, + 295, + 353 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 298, + 295, + 353 + ], + "spans": [ + { + "bbox": [ + 56, + 298, + 295, + 353 + ], + "type": "text", + "content": "[58] Wei Sun, Xiongkuo Min, Wei Lu, and Guangtao Zhai. A deep learning based no-reference quality assessment model for UGC videos. In Proceedings of the ACM International Conference on Multimedia (ACMMM), pages 856-865, 2022. 
7" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 56, + 354, + 295, + 409 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 354, + 295, + 409 + ], + "spans": [ + { + "bbox": [ + 56, + 354, + 295, + 409 + ], + "type": "text", + "content": "[59] Wei Sun, Xiongkuo Min, Danyang Tu, Siwei Ma, and Guangtao Zhai. Blind quality assessment for in-the-wild images via hierarchical feature fusion and iterative mixed database training. IEEE Journal of Selected Topics in Signal Processing (JSTSP), 2023. 7" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 56, + 410, + 295, + 454 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 410, + 295, + 454 + ], + "spans": [ + { + "bbox": [ + 56, + 410, + 295, + 454 + ], + "type": "text", + "content": "[60] Zhengzhong Tu, Yilin Wang, Neil Birkbeck, Balu Adsumilli, and Alan C Bovik. Ugc-vqa: Benchmarking blind video quality assessment for user generated content. IEEE Transactions on Image Processing (TIP), 30:4449-4464, 2021. 7" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 56, + 456, + 295, + 499 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 456, + 295, + 499 + ], + "spans": [ + { + "bbox": [ + 56, + 456, + 295, + 499 + ], + "type": "text", + "content": "[61] Thomas Unterthiner, Sjoerd Van Steenkiste, Karol Kurach, Raphael Marinier, Marcin Michalski, and Sylvain Gelly. Towards accurate generative models of video: A new metric & challenges. arXiv preprint arXiv:1812.01717, 2018. 1" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 56, + 500, + 295, + 544 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 500, + 295, + 544 + ], + "spans": [ + { + "bbox": [ + 56, + 500, + 295, + 544 + ], + "type": "text", + "content": "[62] Jianyi Wang, Kelvin C.K. Chan, and Chen Change Loy. Exploring clip for assessing the look and feel of images. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), pages 2555-2563, 2023. 7" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 56, + 545, + 295, + 611 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 545, + 295, + 611 + ], + "spans": [ + { + "bbox": [ + 56, + 545, + 295, + 611 + ], + "type": "text", + "content": "[63] Jiarui Wang, Huiyu Duan, Jing Liu, Shi Chen, Xiongkuo Min, and Guangtao Zhai. Aigciqa2023: A large-scale image quality assessment database for ai generated images: from the perspectives of quality, authenticity and correspondence. In CAAI International Conference on Artificial Intelligence (CICAI), pages 46-57. Springer, 2023. 3" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 56, + 613, + 295, + 656 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 613, + 295, + 656 + ], + "spans": [ + { + "bbox": [ + 56, + 613, + 295, + 656 + ], + "type": "text", + "content": "[64] Jiuniu Wang, Hangjie Yuan, Dayou Chen, Yingya Zhang, Xiang Wang, and Shiwei Zhang. Modelscope text-to-video technical report. arXiv preprint arXiv:2308.06571, 2023. 1, 2, 3" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 56, + 658, + 295, + 690 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 658, + 295, + 690 + ], + "spans": [ + { + "bbox": [ + 56, + 658, + 295, + 690 + ], + "type": "text", + "content": "[65] Jiarui Wang, Huiyu Duan, Guangtao Zhai, and Xiongkuo Min. Quality assessment for ai generated images with instruction tuning. arXiv preprint arXiv:2405.07346, 2024. 
1" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 56, + 691, + 295, + 715 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 691, + 295, + 715 + ], + "spans": [ + { + "bbox": [ + 56, + 691, + 295, + 715 + ], + "type": "text", + "content": "[66] Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin" + } + ] + } + ], + "index": 13 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 72, + 554, + 714 + ], + "type": "list", + "angle": 0, + "index": 26, + "blocks": [ + { + "bbox": [ + 333, + 72, + 554, + 128 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 333, + 72, + 554, + 128 + ], + "spans": [ + { + "bbox": [ + 333, + 72, + 554, + 128 + ], + "type": "text", + "content": "Ge, Yang Fan, Kai Dang, Mengfei Du, Xuancheng Ren, Rui Men, Dayiheng Liu, Chang Zhou, Jingren Zhou, and Junyang Lin. Qwen2-vl: Enhancing vision-language model's perception of the world at any resolution. arXiv preprint arXiv:2409.12191, 2024. 6, 7" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 316, + 129, + 554, + 184 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 129, + 554, + 184 + ], + "spans": [ + { + "bbox": [ + 316, + 129, + 554, + 184 + ], + "type": "text", + "content": "[67] Yaohui Wang, Xinyuan Chen, Xin Ma, Shangchen Zhou, Ziqi Huang, Yi Wang, Ceyuan Yang, Yinan He, Jiashuo Yu, Peiqing Yang, et al. Lavie: High-quality video generation with cascaded latent diffusion models. arXiv preprint arXiv:2309.15103, 2023. 2, 3" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 316, + 186, + 554, + 252 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 186, + 554, + 252 + ], + "spans": [ + { + "bbox": [ + 316, + 186, + 554, + 252 + ], + "type": "text", + "content": "[68] Yi Wang, Yinan He, Yizhuo Li, Kunchang Li, Jiashuo Yu, Xin Ma, Xinhao Li, Guo Chen, Xinyuan Chen, Yaohui Wang, et al. Internvid: A large-scale video-text dataset for multimodal understanding and generation. In Proceedings of the International Conference on Learning Representations (ICLR), 2023. 2, 3" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 316, + 254, + 554, + 319 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 254, + 554, + 319 + ], + "spans": [ + { + "bbox": [ + 316, + 254, + 554, + 319 + ], + "type": "text", + "content": "[69] Zirui Wang, Mengzhou Xia, Luxi He, Howard Chen, Yitao Liu, Richard Zhu, Kaiqu Liang, Xindi Wu, Haotian Liu, Sadhika Malladi, Alexis Chevalier, Sanjeev Arora, and Danqi Chen. Charxiv: Charting gaps in realistic chart understanding in multimodal llms. arXiv preprint arXiv:2406.18521, 2024. 5, 6" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 316, + 320, + 554, + 387 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 320, + 554, + 387 + ], + "spans": [ + { + "bbox": [ + 316, + 320, + 554, + 387 + ], + "type": "text", + "content": "[70] Haoning Wu, Chaofeng Chen, Jingwen Hou, Liang Liao, Annan Wang, Wenxiu Sun, Qiong Yan, and Weisi Lin. Fastvqa: Efficient end-to-end video quality assessment with fragment sampling. In Proceedings of the European Conference on Computer Vision (ECCV), pages 538-554. Springer, 2022. 
1, 6, 7, 8" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 316, + 388, + 554, + 453 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 388, + 554, + 453 + ], + "spans": [ + { + "bbox": [ + 316, + 388, + 554, + 453 + ], + "type": "text", + "content": "[71] Haoning Wu, Erli Zhang, Liang Liao, Chaofeng Chen, Jingwen Hou Hou, Annan Wang, Wenxiu Sun Sun, Qiong Yan, and Weisi Lin. Exploring video quality assessment on user generated contents from aesthetic and technical perspectives. In Proceedings of the International Conference on Computer Vision (ICCV), 2023. 1, 6, 7, 8" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 316, + 455, + 554, + 510 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 455, + 554, + 510 + ], + "spans": [ + { + "bbox": [ + 316, + 455, + 554, + 510 + ], + "type": "text", + "content": "[72] Haoning Wu, Zicheng Zhang, Weixia Zhang, Chaofeng Chen, Liang Liao, Chunyi Li, Yixuan Gao, Annan Wang, Erli Zhang, Wenxiu Sun, et al. Q-align: Teaching lmm's for visual scoring via discrete text-defined levels. arXiv preprint arXiv:2312.17090, 2023. 6, 7" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 316, + 511, + 554, + 578 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 511, + 554, + 578 + ], + "spans": [ + { + "bbox": [ + 316, + 511, + 554, + 578 + ], + "type": "text", + "content": "[73] Jay Zhangjie Wu, Yixiao Ge, Xintao Wang, Stan Weixian Lei, Yuchao Gu, Yufei Shi, Wynne Hsu, Ying Shan, Xiaohu Qie, and Mike Zheng Shou. Tune-a-video: One-shot tuning of image diffusion models for text-to-video generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 7623-7633, 2023. 1, 2, 3" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 316, + 579, + 554, + 633 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 579, + 554, + 633 + ], + "spans": [ + { + "bbox": [ + 316, + 579, + 554, + 633 + ], + "type": "text", + "content": "[74] Xiaoshi Wu, Yiming Hao, Keqiang Sun, Yixiong Chen, Feng Zhu, Rui Zhao, and Hongsheng Li. Human preference score v2: A solid benchmark for evaluating human preferences of text-to-image synthesis. arXiv preprint arXiv:2306.09341, 2023. 7" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 316, + 635, + 554, + 669 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 635, + 554, + 669 + ], + "spans": [ + { + "bbox": [ + 316, + 635, + 554, + 669 + ], + "type": "text", + "content": "[75] Xiaoshi Wu, Keqiang Sun, Feng Zhu, Rui Zhao, and Hongsheng Li. Better aligning text-to-image models with human preference. arXiv preprint arXiv:2303.14420, 1(3), 2023. 3" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 316, + 670, + 554, + 714 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 670, + 554, + 714 + ], + "spans": [ + { + "bbox": [ + 316, + 670, + 554, + 714 + ], + "type": "text", + "content": "[76] Jun Xu, Tao Mei, Ting Yao, and Yong Rui. Msr-vtt: A large video description dataset for bridging video and language. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2016. 
2, 3" + } + ] + } + ], + "index": 25 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 748, + 318, + 757 + ], + "type": "text", + "content": "18879" + } + ] + } + ], + "index": 27 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 10 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 73, + 295, + 541 + ], + "type": "list", + "angle": 0, + "index": 9, + "blocks": [ + { + "bbox": [ + 56, + 73, + 294, + 116 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 73, + 294, + 116 + ], + "spans": [ + { + "bbox": [ + 56, + 73, + 294, + 116 + ], + "type": "text", + "content": "[77] Jingtao Xu, Peng Ye, Qiaohong Li, Haiqing Du, Yong Liu, and David Doermann. Blind image quality assessment based on high order statistics aggregation. IEEE Transactions on Image Processing (TIP), pages 4444-4457, 2016. 6, 7" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 118, + 294, + 171 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 118, + 294, + 171 + ], + "spans": [ + { + "bbox": [ + 56, + 118, + 294, + 171 + ], + "type": "text", + "content": "[78] Jiazheng Xu, Xiao Liu, Yuchen Wu, Yuxuan Tong, Qinkai Li, Ming Ding, Jie Tang, and Yuxiao Dong. Imagereward: Learning and evaluating human preferences for text-to-image generation. arXiv preprint arXiv:2304.05977, 2023.6,7" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 173, + 294, + 239 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 173, + 294, + 239 + ], + "spans": [ + { + "bbox": [ + 56, + 173, + 294, + 239 + ], + "type": "text", + "content": "[79] Zitong Xu, Huiyu Duan, Guangji Ma, Liu Yang, Jiarui Wang, Qingbo Wu, Xiongkuo Min, Guangtao Zhai, and Patrick Le Callet. Harmonyiqa: Pioneering benchmark and model for image harmonization quality assessment. In Proceedings of the International Conference on Multimedia and Expo (ICME), 2025. 1" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 56, + 240, + 294, + 293 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 240, + 294, + 293 + ], + "spans": [ + { + "bbox": [ + 56, + 240, + 294, + 293 + ], + "type": "text", + "content": "[80] Wufeng Xue, Lei Zhang, and Xuanqin Mou. Learning without human scores for blind image quality assessment. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 995-1002, 2013. 6, 7" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 56, + 297, + 294, + 328 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 297, + 294, + 328 + ], + "spans": [ + { + "bbox": [ + 56, + 297, + 294, + 328 + ], + "type": "text", + "content": "[81] Wilson Yan, Yunzhi Zhang, Pieter Abbeel, and Aravind Srinivas. Videogpt: Video generation using vq-vae and transformers. arXiv preprint arXiv:2104.10157, 2021. 1" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 56, + 331, + 295, + 384 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 331, + 295, + 384 + ], + "spans": [ + { + "bbox": [ + 56, + 331, + 295, + 384 + ], + "type": "text", + "content": "[82] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 586-595, 2018. 6" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 56, + 386, + 294, + 440 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 386, + 294, + 440 + ], + "spans": [ + { + "bbox": [ + 56, + 386, + 294, + 440 + ], + "type": "text", + "content": "[83] Weixia Zhang, Guangtao Zhai, Ying Wei, Xiaokang Yang, and Kede Ma. Blind image quality assessment via vision-language correspondence: A multitask learning perspective. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023. 6, 7" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 56, + 442, + 294, + 495 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 442, + 294, + 495 + ], + "spans": [ + { + "bbox": [ + 56, + 442, + 294, + 495 + ], + "type": "text", + "content": "[84] Zhichao Zhang, Xinyue Li, Wei Sun, Jun Jia, Xiongkuo Min, Zicheng Zhang, Chunyi Li, Zijian Chen, Puyi Wang, Zhongpeng Ji, et al. Benchmarking aigc video quality assessment: A dataset and unified model. arXiv preprint arXiv:2407.21408, 2024. 1, 3, 7, 8" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 56, + 498, + 294, + 541 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 498, + 294, + 541 + ], + "spans": [ + { + "bbox": [ + 56, + 498, + 294, + 541 + ], + "type": "text", + "content": "[85] Tianwei Zhou, Songbai Tan, Wei Zhou, Yu Luo, Yuan-Gen Wang, and Guanghui Yue. Adaptive mixed-scale feature fusion network for blind ai-generated image quality assessment. IEEE Transactions on Broadcasting (TBC), 2024. 1" + } + ] + } + ], + "index": 8 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 294, + 749, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 749, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 294, + 749, + 317, + 757 + ], + "type": "text", + "content": "18880" + } + ] + } + ], + "index": 10 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 11 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2025/AIM-Fair_ Advancing Algorithmic Fairness via Selectively Fine-Tuning Biased Models with Contextual Synthetic Data/e3f123cb-93dd-4a70-bf37-7825bb88c80c_content_list.json b/2025/AIM-Fair_ Advancing Algorithmic Fairness via Selectively Fine-Tuning Biased Models with Contextual Synthetic Data/e3f123cb-93dd-4a70-bf37-7825bb88c80c_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..74bc55a9466cb9f8a7b56c26638428177ef77e78 --- /dev/null +++ b/2025/AIM-Fair_ Advancing Algorithmic Fairness via Selectively Fine-Tuning Biased Models with Contextual Synthetic Data/e3f123cb-93dd-4a70-bf37-7825bb88c80c_content_list.json @@ -0,0 +1,1469 @@ +[ + { + "type": "text", + "text": "AIM-Fair: Advancing Algorithmic Fairness via Selectively Fine-Tuning Biased Models with Contextual Synthetic Data", + "text_level": 1, + "bbox": [ + 102, + 128, + 893, + 176 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Zengqun Zhao Ziquan Liu Yu Cao Shaogang Gong Ioannis Patras Centre for Multimodal AI, Queen Mary University of London", + "bbox": [ + 168, + 203, + 826, + 239 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "{zengqun.zhao, ziquan.liu, yu.cao, s.gong, i.patras}@qmul.ac.uk", + "bbox": [ + 217, + 241, + 774, + 257 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": 
"Abstract", + "text_level": 1, + "bbox": [ + 246, + 291, + 326, + 306 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Recent advances in generative models have sparked research on improving model fairness with AI-generated data. However, existing methods often face limitations in the diversity and quality of synthetic data, leading to compromised fairness and overall model accuracy. Moreover, many approaches rely on the availability of demographic group labels, which are often costly to annotate. This paper proposes AIM-Fair, aiming to overcome these limitations and harness the potential of cutting-edge generative models in promoting algorithmic fairness. We investigate a fine-tuning paradigm starting from a biased model initially trained on real-world data without demographic annotations. This model is then fine-tuned using unbiased synthetic data generated by a state-of-the-art diffusion model to improve its fairness. Two key challenges are identified in this fine-tuning paradigm, 1) the low quality of synthetic data, which can still happen even with advanced generative models, and 2) the domain and bias gap between real and synthetic data. To address the limitation of synthetic data quality, we propose Contextual Synthetic Data Generation (CSDG) to generate data using a text-to-image diffusion model (T2I) with prompts generated by a context-aware LLM, ensuring both data diversity and control of bias in synthetic data. To resolve domain and bias shifts, we introduce a novel selective fine-tuning scheme in which only model parameters more sensitive to bias and less sensitive to domain shift are updated. Experiments on CelebA and UTKFace datasets show that our AIM-Fair improves model fairness while maintaining utility, outperforming both fully and partially fine-tuned approaches to model fairness. The code is available at https://github.com/zengqunzhao/AIM-Fair.", + "bbox": [ + 89, + 323, + 483, + 806 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1. Introduction", + "text_level": 1, + "bbox": [ + 89, + 828, + 220, + 843 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Recent research has raised significant concerns about fairness and bias in machine learning models [30]. These mod-", + "bbox": [ + 89, + 847, + 482, + 878 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/f8d0530fa245105e65d5f0c4b739b0f8b3dea27cba27db507c87aa7fbc563021.jpg", + "image_caption": [ + "Figure 1. Facial attributes classification (FAC) on the CelebA dataset based on different training strategies, in which Smiling is the target attribute and Gender is the protected attribute. This shows different learning strategies result in variable model utility (Overall Accuracy) vs. model fairness (Worst Group Accuracy and Equalized Odds) on demographic groups. A model trained solely on real data if biased exhibits high accuracy but poor fairness scores. Conversely, models trained on balanced synthetic data show better fairness but poorer accuracy due to a \"domain gap\" between the real and synthetic data, and a lack of selective model fine-tuning when synthetic data is deployed. Strategies to repair imbalances in real data [9, 56] or to supplement real data with synthetic data [47] marginally increase accuracy but do little to improve fairness. 
Our method for selective fine-tuning of a pre-trained (biased) model with synthetic data not only preserves model accuracy but also substantially improves fairness, outperforming fully fine-tuning (FFT) in both model utility and fairness." + ], + "image_footnote": [], + "bbox": [ + 522, + 291, + 890, + 517 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "els often demonstrate varying performance across different demographic groups, leading to unfair outcomes. To mitigate the spurious correlation caused by learning from imbalanced (biased) data, regularizations were used as optimization objectives [1, 10, 12, 23]. Methods like distributionally robust optimization (DRO) [12] optimizes the worst-case performance, while invariant risk minimization (IRM) [1] learns unbiased representations with invariance to different environments. Influenced by the success of repre", + "bbox": [ + 511, + 763, + 908, + 902 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "CVF", + "bbox": [ + 106, + 2, + 181, + 42 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore.", + "bbox": [ + 236, + 0, + 810, + 46 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "Corresponding author: Ziquan Liu {ziquan.liu@qmul.ac.uk}.", + "bbox": [ + 112, + 886, + 441, + 901 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "28748", + "bbox": [ + 478, + 944, + 519, + 957 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/968ef6fce9fb3dfd9ee650e2f55496e5ce680ace7aa301e85fbf4cca5b71cf7c.jpg", + "image_caption": [ + "Figure 2. The effects of selective fine-tuning by layer-wise freezing from facial attribute classifications on the CelebA dataset, with Smiling as the target attribute and Male as the protected attribute: When only the fully connected (FC) layer is frozen the model shows improved worst demographic group accuracy and reduced equalized odds, i.e. more fair, but sacrifices some utility (overall accuracy). This indicates increased cross-domain generalisability (real and synthetic). Conversely, freezing only block 2 while fine-tuning the remaining parameters results in high overall accuracy but poorer fairness, i.e. further enhanced domain-bias specificity. When only block 1 is frozen, the model not only maintained equalized odds but also increased utility (overall accuracy) and worst group accuracies." + ], + "image_footnote": [], + "bbox": [ + 109, + 64, + 367, + 204 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/0ee6f775f8d3b9eaeae3aaa12f11590e01bdf33b2ec7a9f98e8e538dd8a10a16.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 367, + 63, + 625, + 204 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/3365cae16dacab792e323a67948865d1373f9eaaeba542387d0e645ff9fca91d.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 625, + 64, + 883, + 204 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "sentation learning [51], attempts were made to learn a fair feature representation invariant to protected facial attributes [6, 29, 34, 54]. However, these methods rely on demographic group labelling, often unavailable in practice [2, 4]. 
More recent advances in generative models have utilized generative augmentation for a more balanced training data distribution [8, 9, 17, 36, 40, 59]. These techniques primarily focus on image editing, either on real data [36, 40] or synthetic data [9], to create less biased training data. Yet, these methods still require additional group annotations of the data they edit. To address this limitation, DiGA [59] proposed to create an unbiased training set by editing spurious facial attributes to a random degree while keeping a target facial attribute unchanged without knowing each sample's group label. However, DiGA increases training data size multiple times, leading to substantially higher training costs, and the quality of generated data is limited due to the unknown group labels.", + "bbox": [ + 88, + 294, + 485, + 566 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Recent advancements in text-to-image generative models showcase impressive data fidelity [43], yet their potential for improving fairness through data expansion has not been fully explored by the fairness community. This raises the question: can AI-generated synthetic data play a crucial role in mitigating biases within machine learning models? This research question is particularly important given the successful application of AI-generated data in fine-tuning large language models (LLMs) [11], data augmentation [64], and long-tail recognition [47], alongside some reported limitations of synthetic data effectiveness [49]. This work presents a comprehensive empirical investigation into whether fine-tuning high-quality, balanced generative data from a contemporary text-to-image model can counteract model biases caused by training on imbalanced real data. We identify two key challenges in bias-correcting fine-tuning with synthetic data: (1) A data-related challenge arising from linguistic ambiguity of the textual prompt and/or model misrepresentation [48, 57], which results in low-quality and low-diversity generated data. (2) A model learning challenge caused by both a domain shift (synthetic vs. real) and a bias shift (unbiased vs. biased) between", + "bbox": [ + 89, + 569, + 486, + 901 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "the real and the synthetic data. Fine-tuning blindly on the synthetic will result in a model with decreased utility.", + "bbox": [ + 511, + 295, + 906, + 325 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "To address the data quality issue, we propose Contextual Synthetic Data Generation (CSDG). Existing Latent Diffusion Models (LDMs) [3, 36, 41, 45] often use vision-language models, such as CLIP, for cross-modality alignment. To yield more diverse and better fine-grained image generation, we formulate a contextual synthetic data generation strategy that uses more detailed text descriptions from expansive linguistic expressions of LLMs to condition an LDM with richer, context-driven text. As a result, the generated images cover more scenarios and provide more details, enhancing diversity and mitigating bias in real data. 
In contrast to other methods that edit or manipulate real data samples, a key strength of our method is that it does not require annotating the Protected Group Attributes of real data.", + "bbox": [ + 511, + 325, + 908, + 537 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "The model learning challenge is due to the existence of two types of shifts between the generated and the real data, namely, the desired bias shift and the undesired domain shift (image quality, realism, scene characteristics). To address this challenge we propose to locate and update model parameters that are more sensitive to the bias shift and less sensitive to the domain shift. Our observation is that some parameters are more sensitive to data distribution shift (cross-domain generalisability) - we call domain-sensitive parameters, while others are more sensitive to demographic group discrepancies (real data domain bias specificity), which we call fairness-sensitive parameters. This observation is supported by the experiments shown in Fig. 2. This is from fine-tuning a real data pre-trained model on balanced synthetic data while keeping parameters at different layers frozen. To identify which parameters to update, we propose a novel selection scheme in which the gradient differences between the updates by the real (biased) dataset and two synthetic datasets with one unbiased and another biased. Ranking the gradient differences between the synthetic-biased data and the synthetic-unbiased data reveals parameters sensitive to fairness (fairness-sensitive parameters). In contrast, inverse ranking the gradient differences between the real data and the synthetic-biased data", + "bbox": [ + 511, + 537, + 908, + 902 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "28749", + "bbox": [ + 478, + 944, + 519, + 957 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/8860308feb1162fc43c47e04d965954e07a35434eeed64d6650d343d90084e9c.jpg", + "image_caption": [ + "Figure 3. A selective fine-tuning model consisting of three parts: (1) Contextual Synthetic Data Generation (CSDG) for generating diverse images using GPT-4 generated prompts, (2) Selective Mask Generation (SMG) for creating a selection mask that determines which parameters are updated during fine-tuning, and (3) Selective Fine-Tuning (SFT) to enhance model fairness obtained from synthetic data whilst simultaneously to preserve model utility yielded from real data in pre-training." + ], + "image_footnote": [], + "bbox": [ + 122, + 65, + 872, + 303 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "discovers parameters less sensitive to domain shift (domain-insensitive parameters). In fine-tuning, a selection mask is constructed as the intersection of the top-k rankings. This selection mask is applied to initialize the pre-trained model, so that only the selected parameters are updated using the balanced synthetic data.", + "bbox": [ + 88, + 364, + 480, + 455 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Fig. 1 presents a comparison of the results across various training strategies using real and/or synthetic data. Fig 3 shows an overview of our model, which consists of three parts: Contextual Synthetic Data Generation (CSDG), Selective Mask Generation (SMG), and Selective Fine-Tuning (SFT). CSDG uses a pre-trained and fixed LDM to generate high-quality images, where a series of contextual prompts serve as the conditions. 
SMG creates a selection mask that determines which parameters are updated during fine-tuning. SFT is designed to correct bias in the model. We initialize a pre-trained model using the parameter selection mask. Then the model is fine-tuned on balanced synthetic data to enhance its fairness while retaining the model utility. Our contributions are as follows:", + "bbox": [ + 89, + 459, + 482, + 670 + ], + "page_idx": 2 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- We investigate a fine-tuning paradigm that mitigates model bias stemming from unbalanced real data using synthetic data generated by a text-to-image (T2I) process without requiring demographic group annotations.", + "- We design contextual synthetic data generation by using a T2I diffusion model with prompts generated by a context-aware LLM, ensuring both data diversity and control of bias in synthetic data.", + "- We introduce a selective fine-tuning method for fair model learning, which identifies domain-sensitive and fairness-sensitive parameters for improving model fairness and utility simultaneously.", + "- Our method outperforms both full and partial fine-tuning methods and achieves superior performance compared to state-of-the-art methods across several datasets." + ], + "bbox": [ + 89, + 674, + 482, + 898 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "2. Related Work", + "text_level": 1, + "bbox": [ + 513, + 364, + 655, + 378 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Mitigating Model Bias. Current methods for model fairness can be categorized into three types: pre-processing, in-processing, and post-processing. Pre-processing approaches modify sample distributions of protected variables or perform transformations to remove discrimination from the data. Many recent works use generative models to create balanced, unbiased datasets [36, 40, 58]. In-processing methods incorporate fairness metrics into model optimization to maximize both performance and fairness [1, 10, 12, 20, 23, 55]. Some studies explore mitigating bias without group annotations, such as Just Train Twice (JTT) [24], which upweights misclassified examples to improve worst-group performance, and Cross-Risk Minimization (XRM) [37], which trains twin classifiers to reduce spurious correlations. Post-processing methods apply transformations to model outputs to improve fairness, such as model calibration [25, 33, 38] and thresholding [5, 13] to align predicted positive outcomes with actual positive examples across groups. Our method falls within the pre-processing paradigm, but instead of simply balancing real data with synthetic data, we focus on using synthetic data to enhance model fairness.", + "bbox": [ + 509, + 383, + 906, + 715 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Improving Model Fairness on Generative Data. Generative models have advanced rapidly in recent years [43, 46], with several studies exploring their use to improve model fairness through generative data [9, 17, 59]. Much of the prior research has focused on generating counterfactual samples to assess fairness [8, 17], and generative methods have also made strides in bias mitigation by creating balanced, unbiased datasets [36, 40, 58]. Instead of generating counterfactual samples based on real data, D'Inca et al. [9] use a diffusion model for uncontrolled image generation, followed by manipulation of the synthetic images in a semantic space.
However, these approaches require additional", + "bbox": [ + 511, + 719, + 908, + 901 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "28750", + "bbox": [ + 478, + 944, + 519, + 957 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "group annotations. Zhang et al. [59] use generative models to identify spurious attributes that may bias the model, then edit each image to modify these attributes. They train a fair facial attribute classification (FAC) model on the augmented dataset, enhancing its invariance to these spurious variations. While this approach does not require protected group labels for training the FAC model, annotations are still needed to train the generative models that detect and edit spurious attributes. Additionally, generating accurate counterfactual images remains challenging, as the edits are applied randomly to images without known protected attributes. In contrast, our method uses generative data from a text-to-image LDM, enabling greater control over the diversity of the generated data and allowing a broader range of scenarios and variations.", + "bbox": [ + 89, + 90, + 483, + 301 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Model Fine-Tuning. A common approach to transfer learning in the presence of distribution shifts is to fine-tune the last few layers of a pre-trained model, retaining the learned features while adapting to the new task [27, 65]. To prevent overfitting during this fine-tuning process, existing methods suggest using a smaller learning rate compared to the initial pretraining phase [21], freezing the early backbone layers and gradually unfreezing them [15], or applying different learning rates to each layer [42]. Lee et al. [22] introduced surgical fine-tuning, which showed that selectively fine-tuning a subset of layers can either match or outperform conventional fine-tuning strategies. Recent research [19] indicates that training a carefully selected subset of layers while keeping the remaining weights frozen at their initial values can lead to varying contributions from different layers across the network to overall performance. Our proposed method inherits some common findings from these works, but we further investigate parameter-wise selective fine-tuning, which characterises each parameter's sensitivity to properties of the pre-training data distribution and of the downstream data distribution.", + "bbox": [ + 91, + 303, + 483, + 619 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3. Methods", + "text_level": 1, + "bbox": [ + 89, + 631, + 191, + 647 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "This section first introduces how to generate contextual synthetic data with LLM-generated prompts. Then, we detail how to obtain the parameter selection mask. Finally, we present how to conduct fair model fine-tuning with a selection mask on balanced synthetic data. We provide the algorithm for the selective fine-tuning in the Appendix.", + "bbox": [ + 89, + 656, + 483, + 747 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.1. Contextual Synthetic Data Generation", + "text_level": 1, + "bbox": [ + 89, + 750, + 419, + 763 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Our method attempts to correct the biased model trained on imbalanced real data by fine-tuning on balanced synthetic data, so the quality of the synthetic data is crucial to the success of fair learning.
Although current text-to-image models have demonstrated remarkable performance, several studies indicate that directly expressing the desired attributes in the prompt often results in sub-optimal outcomes due to linguistic ambiguity or model misrepresentation [48, 57]. For example, as shown in Fig. 4, to generate a face photo with", + "bbox": [ + 89, + 763, + 483, + 901 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/748d899da50ee608b8a1e088410a484573eb1820cb8f563d2335b8625262dfb3.jpg", + "image_caption": [ + "Figure 4. Generated images of Smiling Male conditioned on different prompts. Compared to the plain prompt, our contextual prompts enhance the diversity. More on UTKFace in Appendix." + ], + "image_footnote": [], + "bbox": [ + 540, + 92, + 872, + 329 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "both target and protected attributes, one might use a prompt like \"Portrait face photo of a smiling male.\" as suggested in [56]. However, such manually designed prompts may lead to stereotypical image generation, excluding certain attributes or minority groups, which in turn can introduce bias in other attributes, such as age or hairstyle [57].", + "bbox": [ + 511, + 381, + 906, + 472 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Recent work in zero-shot [31] and supervised [62] classification suggests that leveraging additional contextual and detailed information can enhance vision-language alignment. Considering that current text-to-image models [3, 36, 41, 45] often use vision-language models, such as CLIP [39], for cross-modality alignment, we propose a contextual synthetic data generation strategy that leverages the powerful linguistic capabilities of large language models, such as GPT-4, to condition an LDM with richer and context-driven text. As a result, the generated images cover more scenarios and provide more details, enhancing diversity and mitigating the bias in characteristics not involved in the target and protected attributes. The structure of the CSDG is shown in the upper right of Fig. 3.", + "bbox": [ + 511, + 474, + 908, + 686 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Concretely, the instruction provided to the GPT-4 model follows the format: “{Task}, {Number of Prompts}, {Target Attribute}, {Protected Attribute}, {Other Detailed Descriptions}, and {Prompt Format}”. For generating facial photos, for example, the other detailed descriptions contain specific facial features, hair characteristics, eye-related details, and head orientation or angle. Our text-to-image generation model is a pre-trained latent diffusion model, Stable Diffusion (SD) [43], which reverses the noise applied to the latent embedding of images. SD contains a variational autoencoder (VAE) [28], a CLIP text encoder [39], and a U-Net [44]. During inference, the random Gaussian noise $\varepsilon_{t} \sim \mathcal{N}(0,\mathbf{I})$ and the contextual prompt features $c = (w_{1}, w_{2}, \ldots, w_{n})$ encoded via a CLIP text", + "bbox": [ + 511, + 688, + 910, + 902 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "28751", + "bbox": [ + 478, + 944, + 517, + 957 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "encoder $\Psi(\cdot)$ will be fed into the U-Net to condition the denoising process, where $n$ is the number of contextual prompts. We provide the detailed instruction and contextual prompts in the Appendix. The empirical results in Tab.
5 demonstrate that the generated contextual synthetic data mitigate domain shift and improve model fairness.", + "bbox": [ + 89, + 90, + 480, + 181 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.2. Selective Mask Generation", + "text_level": 1, + "bbox": [ + 89, + 181, + 333, + 196 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "As recent studies [50, 53, 64] have noted, synthetic data often fail to align with the real data distribution due to domain shift, suggesting that fine-tuning the pre-trained model directly on the balanced synthetic data may not learn the desired properties effectively. This is caused by the fact that there is both a domain shift (synthetic vs. real) and a bias shift (unbiased vs. biased) between the real and the synthetic data, and fine-tuning under the domain shift will result in a model with decreased utility. The empirical finding shown in Fig. 2 demonstrates that when fine-tuning a real-data pre-trained model on balanced synthetic data, some parameters are more sensitive to the data distribution shift, called domain-sensitive parameters, while some are more sensitive to group discrepancies, called fairness-sensitive parameters. To discover the parameters' sensitivity towards different scenarios, we propose to construct three distinct datasets with different distributions to elicit the model responses by calculating the parameter-wise gradients.", + "bbox": [ + 89, + 198, + 483, + 469 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "We aim to fine-tune a pre-trained model $f_{\theta}(x)$ to improve fairness while accounting for domain differences between real and synthetic data; hence, we want to find the parameters that are more sensitive to fairness while less sensitive to domain shift. Specifically, we start by constructing three different datasets: biased real data $\{(x_i^{(R)},y_i^{(R)})\}_{i = 1}^{N_R}\in \mathcal{D}_R$ representing training data with inherent biases, biased synthetic data $\{(x_i^{(S_1)},y_i^{(S_1)})\}_{i = 1}^{N_{S_1}}\in \mathcal{D}_{S_1}$ mirroring the unfairness of the real data, and unbiased synthetic data $\{(x_i^{(S_2)},y_i^{(S_2)})\}_{i = 1}^{N_{S_2}}\in \mathcal{D}_{S_2}$ designed to be fair, mitigating biases. $N_{R}$, $N_{S_{1}}$, and $N_{S_2}$ are the numbers of images in the corresponding datasets.", + "bbox": [ + 89, + 470, + 483, + 657 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "We then compute the gradients $\pmb{g}_R$, $\pmb{g}_{S_1}$, and $\pmb{g}_{S_2}$ of the loss function $\mathcal{L}(\theta ;x,y)$, which is defined as a binary softmax loss. Rather than considering fine-grained, scalar-wise parameters, we select parameters at the level of weights and biases, denoted by $\theta$, which includes weights $W^{(l)}$ and biases $b^{(l)}$ for the Convolution Layer, Batch Normalization, and Fully Connected Layer, where $l$ refers to the layer index.
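As a concrete illustration of this weight/bias granularity, the following sketch enumerates the candidate parameter tensors; it assumes a standard PyTorch/torchvision ResNet-18, and the helper name selectable_parameters is hypothetical:

```python
# Illustrative sketch only: enumerate parameters at the granularity of
# per-layer weights and biases, i.e., one entry per W^(l) or b^(l) of the
# Conv, BatchNorm, and FC layers of a ResNet-18 backbone.
import torch.nn as nn
from torchvision.models import resnet18

def selectable_parameters(model: nn.Module):
    """Yield (name, tensor) pairs, one per weight or bias tensor."""
    for module_name, module in model.named_modules():
        if isinstance(module, (nn.Conv2d, nn.BatchNorm2d, nn.Linear)):
            for param_name, param in module.named_parameters(recurse=False):
                yield f"{module_name}.{param_name}", param

model = resnet18(num_classes=2)  # binary target attribute
print(sum(1 for _ in selectable_parameters(model)))  # number of selectable units
```
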
Then, for the model parameters $\theta$ on each dataset:", + "bbox": [ + 89, + 659, + 483, + 780 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\boldsymbol{g}_{R} = \\frac{1}{N_{R}} \\sum_{i = 1}^{N_{R}} \\nabla_{\\theta} \\mathcal{L}\\left(\\theta; x_{i}^{(R)}, y_{i}^{(R)}\\right) \\tag{1}\n$$\n", + "text_format": "latex", + "bbox": [ + 169, + 785, + 483, + 823 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\boldsymbol{g}_{S_{1}} = \\frac{1}{N_{S_{1}}} \\sum_{i = 1}^{N_{S_{1}}} \\nabla_{\\theta} \\mathcal{L}\\left(\\theta; x_{i}^{(S_{1})}, y_{i}^{(S_{1})}\\right) \\tag{2}\n$$\n", + "text_format": "latex", + "bbox": [ + 158, + 825, + 483, + 864 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\boldsymbol{g}_{S_{2}} = \\frac{1}{N_{S_{2}}} \\sum_{i = 1}^{N_{S_{2}}} \\nabla_{\\theta} \\mathcal{L}\\left(\\theta; x_{i}^{(S_{2})}, y_{i}^{(S_{2})}\\right) \\tag{3}\n$$\n", + "text_format": "latex", + "bbox": [ + 158, + 864, + 483, + 902 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $\nabla$ is the gradient operator. We then calculate the gradient differences to capture how each parameter behaves across different data distributions. For parameter $\theta_{j}$:", + "bbox": [ + 511, + 90, + 903, + 136 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\begin{array}{l} \\Delta_{1, j} = \\left| \\boldsymbol{g}_{R, j} - \\boldsymbol{g}_{S_{1}, j} \\right| \\\\ \\Delta_{2, j} = \\left| \\boldsymbol{g}_{S_{1}, j} - \\boldsymbol{g}_{S_{2}, j} \\right| \\end{array} \\tag{4}\n$$\n", + "text_format": "latex", + "bbox": [ + 633, + 138, + 903, + 176 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $\Delta_{1}$ measures parameters sensitive to the domain shift while $\Delta_{2}$ identifies parameters crucial for fairness; $j$ denotes the parameter index.
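For illustration, the averaged gradients of Eqs. (1)-(3) and the differences of Eq. (4) could be computed as in the sketch below; it assumes PyTorch data loaders real_loader, synth_biased_loader, and synth_unbiased_loader for $\mathcal{D}_R$, $\mathcal{D}_{S_1}$, and $\mathcal{D}_{S_2}$, the selectable_parameters helper above, and that aggregating per-scalar differences into one score per weight/bias tensor via a mean absolute value is an assumption of this sketch:

```python
# Illustrative sketch only: averaged gradients (Eqs. 1-3) and
# per-parameter gradient differences (Eq. 4).
import torch
import torch.nn.functional as F

def mean_gradients(model, loader, device="cuda"):
    """Average the per-tensor gradients of the softmax loss over a dataset."""
    sums = {n: torch.zeros_like(p) for n, p in selectable_parameters(model)}
    num_batches = 0
    for x, y in loader:
        model.zero_grad()
        F.cross_entropy(model(x.to(device)), y.to(device)).backward()
        for n, p in selectable_parameters(model):
            sums[n] += p.grad.detach()
        num_batches += 1
    return {n: g / num_batches for n, g in sums.items()}

g_R = mean_gradients(model, real_loader)             # biased real data D_R
g_S1 = mean_gradients(model, synth_biased_loader)    # biased synthetic D_S1
g_S2 = mean_gradients(model, synth_unbiased_loader)  # unbiased synthetic D_S2

# Sensitivity scores, aggregated per weight/bias tensor (sketch assumption).
delta_1 = {n: (g_R[n] - g_S1[n]).abs().mean().item() for n in g_R}   # domain
delta_2 = {n: (g_S1[n] - g_S2[n]).abs().mean().item() for n in g_R}  # fairness
```
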
To obtain the parameters which are less affected by domain differences and more impactful for fairness, we conduct a ranking in ascending order on $\Delta_{1}$ (smaller differences first) and a ranking in descending order on $\Delta_{2}$ (larger differences first):", + "bbox": [ + 511, + 175, + 906, + 279 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\nR_{1} = \\operatorname{argsort}\\left(\\Delta_{1}\\right); \\quad R_{2} = \\operatorname{argsort}\\left(-\\Delta_{2}\\right) \\tag{5}\n$$\n", + "text_format": "latex", + "bbox": [ + 576, + 281, + 903, + 297 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Then we can find the intersection of the top-$k$ parameters, which are both less sensitive to the domain gap and significant for fairness, by $K = K_{1}\cap K_{2}$, where", + "bbox": [ + 511, + 297, + 903, + 342 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\nK_{1} = \\left\\{\\theta_{j} \\mid j \\in R_{1}[1:k]\\right\\}, \\quad K_{2} = \\left\\{\\theta_{j} \\mid j \\in R_{2}[1:k]\\right\\} \\tag{6}\n$$\n", + "text_format": "latex", + "bbox": [ + 619, + 343, + 903, + 378 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Finally, the selection mask can be obtained by:", + "bbox": [ + 532, + 377, + 841, + 392 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\nM_{j} = \\left\\{ \\begin{array}{ll} \\text{True}, & \\text{if } \\theta_{j} \\in K \\\\ \\text{False}, & \\text{otherwise} \\end{array} \\right. \\tag{7}\n$$\n", + "text_format": "latex", + "bbox": [ + 614, + 393, + 903, + 431 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.3. Selective Fine-Tuning on Synthetic Data", + "text_level": 1, + "bbox": [ + 513, + 431, + 856, + 446 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Our goal is to fine-tune the pre-trained model $f_{\theta}(x)$ on the balanced synthetic dataset $\mathcal{D}_{S_2}$ following the ERM framework, while only updating selected parameters as indicated by the mask $M$. Specifically, given the model $f_{\theta}(x)$ pretrained on the biased real data, balanced synthetic data $\{(x_i^{(S_2)},y_i^{(S_2)})\}_{i = 1}^{N_{S_2}}$, and the parameter selection mask $M$, during optimization, we apply the mask to the gradients so that only the selected parameters are updated:", + "bbox": [ + 511, + 448, + 905, + 571 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\theta^{(t + 1)} = \\theta^{(t)} - \\eta \\left(M \\odot \\nabla_{\\theta} \\mathcal{L}\\left(f_{\\theta^{(t)}}\\left(x_{i}^{(S_{2})}\\right), y_{i}^{(S_{2})}\\right)\\right) \\tag{8}\n$$\n", + "text_format": "latex", + "bbox": [ + 522, + 571, + 903, + 597 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $\theta^{(t)}$ are the parameters at iteration $t$, $\eta$ is the learning rate, and $\odot$ denotes element-wise multiplication. Notably, once the mask $M$ is provided by SMG, it will be applied throughout the entire fine-tuning optimization process (an illustrative sketch of this procedure is given below).", + "bbox": [ + 511, + 599, + 905, + 659 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4. Experiments", + "text_level": 1, + "bbox": [ + 513, + 674, + 643, + 690 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "This section presents the experimental setup and results.
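Before the experimental details, we give the sketch referenced at the end of Sec. 3.3: the ranking (Eq. 5), mask construction (Eqs. 6-7), and masked update (Eq. 8), continuing the assumptions of the previous sketches (illustrative only, not our released implementation):

```python
# Illustrative sketch only: selection mask (Eqs. 5-7) and masked SGD (Eq. 8).
import torch
import torch.nn.functional as F

top_k = 55  # number of weight/bias tensors kept per ranking (cf. Fig. 5)
R1 = sorted(delta_1, key=delta_1.get)                # ascending on Delta_1
R2 = sorted(delta_2, key=delta_2.get, reverse=True)  # descending on Delta_2
K = set(R1[:top_k]) & set(R2[:top_k])                # Eq. (6): intersection
mask = {n: n in K for n in delta_1}                  # Eq. (7)

optimizer = torch.optim.SGD(model.parameters(), lr=0.5)
for x, y in synth_unbiased_loader:                   # fine-tune on D_S2 only
    optimizer.zero_grad()
    F.cross_entropy(model(x.to("cuda")), y.to("cuda")).backward()
    for n, p in selectable_parameters(model):
        if not mask[n]:
            p.grad = None  # realizes M element-wise by cancelling the update
    optimizer.step()
```
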
We begin by describing the datasets, followed by the implementation details. The main results and ablation studies are presented at the end.", + "bbox": [ + 511, + 699, + 905, + 758 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4.1. Datasets", + "text_level": 1, + "bbox": [ + 513, + 763, + 614, + 777 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "CelebA [26] contains over 200,000 facial images with 40 binary attribute annotations. Following the settings of previous works [35, 58, 59], we set Male and Young as the protected attributes and selected Smiling and Young as the target attributes, which have the highest correlation with the protected attributes. For each experiment, we randomly sample a biased subset as a training dataset with a size of 20,000 images, where the majority group and minority group have", + "bbox": [ + 511, + 779, + 906, + 901 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "28752", + "bbox": [ + 478, + 944, + 517, + 955 + ], + "page_idx": 4 + }, + { + "type": "table", + "img_path": "images/a4c911b65802cae282ffecebb49d791e994d9f825fef5ab524763d504778a6b1.jpg", + "table_caption": [ + "Table 1. Comparisons to other methods on the CelebA dataset under settings of varied target and protected attributes." + ], + "table_footnote": [], + "table_body": "
MethodsT: Smiling, P: MaleT: Smiling, P: YoungT: Young, P: Male
ACC (↑) WST(↑) EO (↓)ACC (↑) WST(↑) EO (↓)ACC (↑) WST(↑) EO (↓)
ERM [14]88.2070.1025.3088.3071.5015.6077.7042.0052.00
CVaR DRO [23]87.3074.0022.8087.0076.1013.9075.4042.3048.80
EIIL [7]87.9075.6019.7087.9072.5013.3077.5045.6039.20
LfF [32]87.1077.5017.0085.3072.9014.3077.4044.2043.60
JTT [24]88.0074.8019.4087.6073.3014.2076.3043.6047.70
MAPLE [63]88.1072.0019.6088.1073.6013.6076.3046.2043.50
DiGA [59]88.4081.907.4089.1078.509.5080.0051.3033.30
AIM-Fair (Ours)89.0284.206.0790.2187.785.4478.1954.8928.18
", + "bbox": [ + 99, + 116, + 491, + 231 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "90% and 10% of the sample size respectively. We report performance on the whole original test dataset. UTK Face [60] consists of over 20,000 facial images with three kinds of annotations: gender, age, and ethnicity. Following the experimental setup in the previous works [17, 18, 59], we define a binary spurious attribute \"Ethnicity\" based on whether the facial image is white or not. The task is to predict the Gender. We randomly sample a biased subset of 10,000 images, with the same bias degree as CelebA. We also construct a balanced and unbiased test dataset consisting of 3,200 images.", + "bbox": [ + 89, + 237, + 483, + 404 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.2. Fairness Metrics", + "text_level": 1, + "bbox": [ + 89, + 410, + 254, + 424 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Our goal is to learn a fair and accurate model. For model utility, we present the overall accuracy (ACC) and group accuracy. For fairness, follow previous work [35, 58, 59] we use equalized odds (EO) [13], defined as:", + "bbox": [ + 89, + 426, + 483, + 487 + ], + "page_idx": 5 + }, + { + "type": "equation", + "text": "\n$$\n\\left. \\overline {{\\sum}} _ {\\forall y, \\hat {y}} \\left| P _ {s _ {0}} (\\hat {Y} = \\hat {y} \\mid Y = y) - P _ {s _ {1}} (\\hat {Y} = \\hat {y} \\mid Y = y) \\right|, \\right. \\tag {9}\n$$\n", + "text_format": "latex", + "bbox": [ + 101, + 488, + 480, + 527 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "where $\\overline{\\sum}$ is the averaged sum, $Y$ target label, $\\hat{Y}$ classifier predictive label, and $s_0,s_1\\in S$ protected attributes. The worst-group accuracy (WST) defined as:", + "bbox": [ + 89, + 527, + 482, + 571 + ], + "page_idx": 5 + }, + { + "type": "equation", + "text": "\n$$\n\\min _ {\\forall y \\in \\mathcal {Y}, \\forall s \\in S} P _ {s} (\\hat {Y} = y \\mid Y = y) \\tag {10}\n$$\n", + "text_format": "latex", + "bbox": [ + 184, + 574, + 480, + 597 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "and group standard deviation (STD) [52] defined as:", + "bbox": [ + 89, + 595, + 434, + 609 + ], + "page_idx": 5 + }, + { + "type": "equation", + "text": "\n$$\n\\underset {\\forall y \\in \\mathcal {Y}, \\forall s \\in S} {\\operatorname {s t d}} P _ {s} (\\hat {Y} = y \\mid Y = y) \\tag {11}\n$$\n", + "text_format": "latex", + "bbox": [ + 184, + 609, + 480, + 633 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.3. Implementation Details", + "text_level": 1, + "bbox": [ + 89, + 642, + 307, + 657 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Following previous work [59], we use ResNet-18 [14] as the backbone for all experiments. We use stable diffusion v2.1 as our latent diffusion model for generating images. We generated 10,000 images for each group and then randomly sampled $N_{S}$ . To be consistent with the real data image number $N_{R}$ , we set the $N_{S} = N_{R} / G$ , where $G$ is the group number. During the pre-training phase, we set the batch size to 128 and trained the model for 15 epochs. The initial learning rate was set to 0.01, which was then reduced by a factor of 0.01 at the 10th epoch. During the fine-tuning phase, we set the batch size to 128 and trained the model for 10 epochs. And the learning rate is searched from \\{0.4, 0.5, 0.6\\}. All models are optimized by the SGD optimizer and trained on the Tesla A100 GPU based on the open-source PyTorch platform. 
To obtain more stable and reliable results, we conducted all experiments 10 times with different", + "bbox": [ + 89, + 659, + 482, + 900 + ], + "page_idx": 5 + }, + { + "type": "table", + "img_path": "images/e6bbc9f4924f25570d8c475cab0f52d0f0d3f79418b99345f6170c9245ea587e.jpg", + "table_caption": [ + "Table 2. Comparisons to other methods on the CelebA dataset (T=Smiling, P=Male) under settings of training set sizes." + ], + "table_footnote": [], + "table_body": "
MethodsSamples Ratio = 50%Samples Ratio = 25%Samples Ratio = 10%
ACC (↑)WST(↑)EO (↓)ACC (↑)WST(↑)EO (↓)ACC (↑)WST(↑)EO (↓)
ERM [14]87.5067.8026.1087.1065.9027.7086.9062.8028.90
CVaR DRO [23]86.6072.9022.1086.6072.3022.4085.5069.1027.30
EIIL [7]86.2071.3022.5085.9069.6025.4086.8064.2026.70
LfF [32]86.9075.5019.4085.9072.1023.6085.5066.1027.70
JTT [24]87.3072.9020.1086.7071.1020.6086.8067.1023.10
MAPLE [63]87.4073.7023.8087.0072.7024.2085.6069.2027.10
DiGA [59]88.4081.107.8088.4078.308.0088.3078.808.40
AIM-Fair (Ours)88.8583.996.2888.8982.547.5987.9081.738.16
", + "bbox": [ + 504, + 116, + 895, + 231 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "random seeds and then used the average as the final result.", + "bbox": [ + 511, + 237, + 895, + 252 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.4. Comparisons to State of the Art", + "text_level": 1, + "bbox": [ + 511, + 257, + 795, + 272 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "We compared our method against seven contemporary techniques, including the baseline ERM [14] method and six debiasing models all of which do not require protected attribute labels: Two regularization-based methods (CVaR DRO [23] and EIIL [7]); three reweighting-based methods (LfF [32], JTT [24], and MAPLE [63]); and generativemodel-based method (DiGA [59]). Our comparison covers two settings: different target and protected attributes and varying numbers of target labels.", + "bbox": [ + 511, + 273, + 903, + 409 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Tab.1 shows that ERM achieves good accuracy but suffers from significant unfairness. Other debiasing methods, while improving fairness to some extent, generally sacrifice accuracy. DiGA [59] improves fairness while maintaining accuracy by using a generative model to edit real data and create more balanced training data. In contrast, our method generates images from scratch using a text-driven latent diffusion model. It is evident that our method outperforms the best of the existing models DiGA [59] in both fairness and accuracy in all categories, except \"Young / Male\" where we come close second to DiGA. We outperform all other methods consistently. We also conducted comparisons with other methods under smaller training set sizes, with the subsampling ratio of $50\\%$ , $25\\%$ , and $10\\%$ . Tab. 2 shows that our method outperforms all others consistently, except on ACC score for Lable Ratio at $10\\%$ where we come close second to DiGA. Critically, our method maintains robust model utility and fairness even with varying amounts of real training data, with these improvements attributed to the selective updating with balanced synthetic data.", + "bbox": [ + 511, + 410, + 905, + 712 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.5. Ablation Analysis", + "text_level": 1, + "bbox": [ + 511, + 717, + 687, + 732 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Comparisons of Different Training Strategies. To evaluate the effectiveness of the proposed selective fine-tuning, we first compare our method with several other strategies trained on different types of data. These include the baseline, which trains the model using conventional ERM on real data, training the model solely on synthetic data, supplementing real data with synthetic data [47], and balancing the real data [9, 56]. Furthermore, we compare common fine-tuning methods, such as linear probing and full fine-tuning.", + "bbox": [ + 511, + 733, + 905, + 883 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "The results in Tab. 3 indicate that both data supplen", + "bbox": [ + 532, + 885, + 903, + 900 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "28753", + "bbox": [ + 478, + 944, + 517, + 955 + ], + "page_idx": 5 + }, + { + "type": "table", + "img_path": "images/9c9dca4303ad606ad67d01988e4b3dbbc29c8828eb47a1a00d1270697745b2bc.jpg", + "table_caption": [ + "Table 3. Comparisons of varied training strategies on CelebA and UTKFace datasets." + ], + "table_footnote": [], + "table_body": "
MethodsTargetCelebAUTKFace
ProtectedT: Smiling; P: MaleProtectedT: Smiling; P: YoungProtectedT: Female; P: White
P=0P=1ACC (↑) WST (↑) EO (↓) STD (↓)P=0P=1ACC (↑) WST (↑) EO (↓) STD (↓)P=0P=1ACC (↑) WST (↑) EO (↓) STD (↓)
Baseline [14]T=083.5798.5089.2371.5223.8410.6578.7296.0990.1978.7217.376.9279.5896.1688.8679.5816.597.26
T=195.3671.5294.3786.4295.7583.96
Trained on Synthetic DataT=086.8289.4085.9878.527.874.0785.4585.7685.1282.423.501.3482.7188.1285.4482.715.501.96
T=186.3978.5282.4285.2784.9086.04
Data Supplementation [47]T=084.3998.3989.7773.6721.969.7678.0595.3690.3578.0517.317.0282.1293.9790.2582.1211.855.24
T=195.4073.6794.9087.6295.6289.28
Data Repairing [9, 56]T=082.8298.1089.7275.0521.189.5079.5095.1990.6879.5015.696.3081.1594.2289.4281.1513.375.89
T=196.0475.0594.5188.3595.7186.61
Linear ProbT=084.5596.2289.6977.4117.517.7185.2093.8190.1485.208.613.1881.0691.5188.4081.0610.595.14
T=194.8277.4190.4387.8894.5986.42
Fully Fine-TuningT=088.8191.6288.7782.746.833.3185.9890.0687.8585.984.371.6883.0088.0487.9083.005.402.99
T=189.5582.7486.0286.8190.7489.83
AIM-Fair (Ours)T=088.1691.4089.0284.206.072.7488.1993.6490.2187.785.442.3484.2689.0888.3084.264.812.41
T=190.2584.2088.9887.7890.6289.25
", + "bbox": [ + 107, + 95, + 883, + 311 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/073a2617a287bebd5ae1956cdc488467fd052d1db2fb9330563cf1bfe102ea34.jpg", + "table_caption": [ + "Table 4. Comparisons of varied partial fine-tuning strategies on CelebA and UTKFace datasets." + ], + "table_footnote": [], + "table_body": "
MethodsTargetCelebAUTKFace
ProtectedT: Smiling; P: MaleProtectedT: Smiling; P: YoungProtectedT: Female; P: White
P=0P=1ACC (↑)WST (↑)EO (↓)STD (↓)P=0P=1ACC (↑)WST (↑)EO (↓)STD (↓)P=0P=1ACC (↑)WST (↑)EO (↓)STD (↓)
Best Random SelectionT=087.9392.8189.3382.179.144.0985.5291.5388.9585.526.022.1683.5088.8587.9283.505.382.61
T=191.3182.1788.2587.6390.3089.01
Best Sub-Tuning [19] (By Updating One Block)T=087.5094.7590.0480.6412.455.5287.2894.6390.3387.287.343.0083.0589.5689.0183.056.513.56
T=193.0980.6489.1587.3292.0791.34
Best Sub-Tuning [19] (By Freezing One Block)T=088.6991.0989.0683.487.003.0086.5591.8988.6586.555.342.2583.5888.3487.4283.584.762.23
T=190.4883.4887.0886.5588.9688.81
Selective Fine-Tuning (Cosine Similarity)T=086.0791.7788.8583.338.403.6187.4392.9690.2687.435.532.0583.7288.8288.2183.725.102.64
T=191.5383.3389.5288.6590.1490.15
AIM-Fair (Ours) (Absolute Difference)T=088.1691.4089.0284.206.072.7488.1993.6490.2187.785.442.3484.2689.0888.3084.264.812.41
T=190.2584.2088.9887.7890.6289.25
", + "bbox": [ + 109, + 325, + 883, + 487 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "tation and data repairing do not significantly improve fairness. Data supplementation, which combines biased real data with balanced synthetic data for co-training, makes it difficult for the model to learn fairness properties from the synthetic data because of the domain gap. While data repairing can create balanced training data, the domain shift between real and synthetic data limits the effectiveness of fair learning. Common transfer learning methods, such as linear probing and full fine-tuning, also face challenges due to domain shifts. Specifically, linear probing is affected by discrepancies in feature representations, while full fine-tuning improves fairness at the cost of reduced model utility.", + "bbox": [ + 89, + 493, + 482, + 675 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Comparisons of Different Partial Fine-Tuning. We also compare our method with other partial fine-tuning approaches, including random selection and sub-tuning [19]. For random selection, we set the updating ratios at $40\\%$ , $55\\%$ , $70\\%$ , and $85\\%$ , and select the best result as the final one. For sub-tuning, we perform block-wise fine-tuning and block-wise freezing (i.e., updating one block or freezing one block while updating the rest), also selecting the block with the best result.", + "bbox": [ + 89, + 681, + 482, + 816 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Tab. 4 shows that fine-tuning only one block consistently yields the best accuracy, but at the cost of fairness, while freezing one block tends to improve fairness but often sacrifices model utility. This demonstrates that coarse-grained approaches, such as block-wise updating or freezing, strut", + "bbox": [ + 89, + 824, + 482, + 902 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/94dcd9d29213e01897686134b27555b736c43439bf341d9022e2a3f7ae760a76.jpg", + "table_caption": [ + "Table 5. Classification results on CelebA dataset (T=Smiling, P=Male) under settings of different prompt types and numbers." + ], + "table_footnote": [], + "table_body": "
Prompt Types (Number)TargetProtectedT: Smiling; P: MaleProtectedT: Smiling; P: Young
P=0P=1ACC(↑)WST(↑)EO(↓)STD(↓)P=0P=1ACC(↑)WST(↑)EO(↓)STD(↓)
Plain Prompt (1)t=065.1889.4471.6543.2834.2017.0783.1077.7272.7559.539.308.96
t=177.4743.2859.5368.83
Contextual Prompts (25)t=077.1289.2581.8569.7316.37.6576.7267.9079.8067.908.828.75
t=185.9969.7385.1691.09
Contextual Prompts (50)t=089.4789.8082.6672.225.077.6687.7290.4385.8579.674.364.20
t=177.2672.2279.6782.67
Contextual + Head Poses (50)t=086.8289.4085.9878.527.874.0785.4585.7685.1282.423.501.34
t=186.3978.5282.4285.27
", + "bbox": [ + 514, + 520, + 903, + 637 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "gle to achieve an optimal balance between model utility and fairness. In contrast, our method performs fine-grained, parameter-wise updating, which allows the model to enhance fairness while maintaining utility. As a result, our method achieves the best worst-group accuracy while preserving high overall accuracy.", + "bbox": [ + 511, + 643, + 906, + 734 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Additionally, our method uses gradient differences to identify parameter sensitivity to varying data distributions. We also compared absolute gradient differences with the cosine similarity between gradients. The results in Tab. 4 show that using absolute gradient differences yields better results. The possible reason is that the cosine similarity only provides the direction of gradient disparity, whereas the absolute difference directly captures the magnitude of the gradient disparity.", + "bbox": [ + 511, + 734, + 908, + 869 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Comparisons of Different Prompts. To evaluate the performance of the varied prompts used for generating images,", + "bbox": [ + 511, + 869, + 903, + 901 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "28754", + "bbox": [ + 478, + 945, + 517, + 955 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/e44f458f7551955a43c17778c446f70cb11207d0428beb2a67bac569ce9b1397.jpg", + "image_caption": [ + "Figure 5. Classification results on the CelebA dataset (T=Smiling, P=Male) with different top-k values. The top-k values {40, 45, 50, 55, 60} correspond approximately to $\\{64\\%, 72\\%, 80\\%, 88\\%, 96\\% \\}$ of the total model parameters." + ], + "image_footnote": [], + "bbox": [ + 98, + 88, + 472, + 275 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "we trained the model solely on the balanced synthetic data generated with corresponding prompts and tested it on the real test set. Tab. 5 shows the results of different prompt types and quantities, indicating that images generated with contextual prompts lead to significantly better performance in both model accuracy and fairness. We believe this improvement is due to the increased diversity and details in the synthetic images generated from contextual prompts. Moreover, as the number of contextual prompts increases, test performance improves as well. Specifically, incorporating head pose variation into the prompt further enhances both accuracy and fairness in the generated images, as the results shown in the last row of Tab. 5.", + "bbox": [ + 88, + 339, + 482, + 532 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Evaluations of Varied Top-k Values. In our method, to identify the intersection parameters from the two gradient difference rankings, we use the top-k selection to ensure that the chosen parameters are more sensitive to fairness and less affected by domain shift. The results for different top-k values are shown in Fig. 5. When k is set to 55, our method achieves the best fairness performance while retaining overall accuracy. Intuitively, a lower k results in fewer updates, leading to better model utility but worse fairness, while a high k involves more parameter updates, which can negatively impact both accuracy and fairness.", + "bbox": [ + 89, + 537, + 482, + 702 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Evaluations of Varied Bias Ratio for Synthetic Data Construction. 
To evaluate the model's sensitivity to domain shift and group disparity, we construct three distinct datasets with different distributions. In our method, the bias ratio for the biased synthetic data distribution is treated as a hyperparameter, referenced by the error set of real training examples, as suggested in JTT [24]. To assess the impact of varying bias ratios in synthetic data, we explore different ratios. As shown in Tab. 6, a disparity between the synthetic data bias ratio and the real data bias ratio leads to worse performance in both accuracy and fairness. However, despite these differences in bias ratios, our method still outperforms the fully fine-tuning approach. This demonstrates that our", + "bbox": [ + 89, + 704, + 482, + 900 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/e1e4533cbe7ffde0a115e197e0136d9a55f186d8fe5e9778e2696a1003b5de19.jpg", + "table_caption": [ + "Table 6. Classification results on CelebA dataset (T=Smiling, P=Male) with varied bias ratio of the biased synthetic data." + ], + "table_footnote": [], + "table_body": "
Bias RatioTargetProtectedACC (↑)WST (↑)EOD (↓)STD (↓)
P=0P=1
Fully Fine-Tuningt=088.8191.6288.7782.746.833.31
t=189.5582.74
4:6t=085.7291.7189.2383.908.783.77
t=192.6983.90
3:7t=087.0591.6288.9083.527.723.26
t=190.9383.52
2:8t=088.0191.7688.9783.167.303.28
t=190.4683.16
1:9t=088.1691.4089.0284.206.072.74
t=190.2584.20
", + "bbox": [ + 532, + 117, + 883, + 284 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/e8f91f1805a7dcf9dee6b7de8772ad303ca911c65ad743972a143b1d89297676.jpg", + "table_caption": [ + "Table 7. Classification results on CelebA dataset (T=Smiling, P=Male) with different number of synthetic data." + ], + "table_footnote": [], + "table_body": "
Ratio To Real DataTargetProtectedACC (↑)WST (↑)EOD (↓)STD (↓)
P=0P=1
0.5t=088.6593.9689.5180.8010.314.90
t=191.1080.80
1t=088.1691.4089.0284.206.072.74
t=190.2584.20
1.5t=089.1892.0588.3481.257.133.98
t=188.3881.25
2t=089.0191.2688.8183.366.082.96
t=191.2689.44
", + "bbox": [ + 532, + 319, + 883, + 464 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "method is able to identify the model's sensitivity under the bias ratios different to real data.", + "bbox": [ + 511, + 469, + 903, + 497 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "**Evaluations of Different Number of Synthetic Data.** We also evaluate fine-tuning with varying amounts of balanced synthetic data. As shown in Tab. 7, we use the real training data count as a reference and set different ratios to determine the number of synthetic data. The results indicate that when the amount of synthetic data matches that of real data, the model achieves the best fairness. Additionally, using half the amount of real data results in the best accuracy. We believe that using too much synthetic data during fine-tuning can lead to overfitting, while using too little data may fail to adequately debias the model.", + "bbox": [ + 511, + 500, + 906, + 666 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "5. Conclusion", + "text_level": 1, + "bbox": [ + 511, + 679, + 633, + 694 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "In this work, we proposed a method to mitigate bias in machine learning models using synthetic data generated by a text-to-image process. By designing contextual synthetic data generation and selective fine-tuning, we enhance model fairness without requiring demographic group annotations. Our model updates selectively fairness-sensitive parameters, optimizing simultaneously model fairness and utility scores. Empirical results demonstrate that our method outperforms existing techniques, improving fairness while maintaining model utility performance. This work highlights the potential of synthetic data for creating fairer AI systems, offering a promising direction for future research in bias mitigation.", + "bbox": [ + 511, + 704, + 906, + 900 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "28755", + "bbox": [ + 478, + 944, + 517, + 955 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "6. Acknowledgments", + "text_level": 1, + "bbox": [ + 91, + 90, + 269, + 107 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "This research utilised Queen Mary's Apocrita HPC facility, supported by QMUL Research-IT. Zengqun Zhao is funded by Queen Mary Principal's PhD Studentships. Zengqun Zhao wants to thank Yining Wang and James Oldfield for the valuable comments and help, and Yiming Lin and Jie Shen for the valuable discussions.", + "bbox": [ + 89, + 114, + 483, + 205 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 91, + 217, + 187, + 233 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[1] Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization. arXiv preprint arXiv:1907.02893, 2019. 1, 3", + "[2] Carolyn Ashurst and Adrian Weller. Fairness without demographic data: A survey of approaches. In EAAMO, pages 1-12, 2023. 2", + "[3] Junsong Chen, YU Jincheng, GE Chongjian, Lewei Yao, Enze Xie, Zhongdao Wang, James Kwok, Ping Luo, Huchuan Lu, and Zhenguo Li. Pixart-α: Fast training of diffusion transformer for photorealistic text-to-image synthesis. In ICLR, 2023. 2, 4", + "[4] Kristy Choi, Aditya Grover, Trisha Singh, Rui Shu, and Stefano Ermon. Fair generative modeling via weak supervision. In ICML, pages 1887-1898, 2020. 
2", + "[5] Sam Corbett-Davies, Johann D Gaebler, Hamed Nilforoshan, Ravi Shroff, and Sharad Goel. The measure and mismeasure of fairness. JMLR, 24(1):14730-14846, 2023. 3", + "[6] Elliot Creager, David Madras, Jorn-Henrik Jacobsen, Marissa Weis, Kevin Swersky, Toniann Pitassi, and Richard Zemel. Flexibly fair representation learning by disentangle-ment. In ICML, pages 1436-1445, 2019. 2", + "[7] Elliot Creager, Jorn-Henrik Jacobsen, and Richard Zemel. Environment inference for invariant learning. In ICML, pages 2189-2200, 2021. 6", + "[8] Saloni Dash, Vineeth N Balasubramanian, and Amit Sharma. Evaluating and mitigating bias in image classifiers: A causal perspective using counterfactuals. In WACV, pages 915-924, 2022. 2, 3", + "[9] Moreno D'Inca, Christos Tzelepis, Ioannis Patras, and Nicu Sebe. Improving fairness using vision-language driven image augmentation. In WACV, pages 4695-4704, 2024. 1, 2, 3, 6, 7", + "[10] Michele Donini, Luca Oneto, Shai Ben-David, John S Shawe-Taylor, and Massimiliano Pontil. Empirical risk minimization under fairness constraints. NeurIPS, 31, 2018. 1, 3", + "[11] Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024. 2", + "[12] John C Duchi and Hongseok Namkoong. Learning models with uniform performance via distributionally robust optimization. The Annals of Statistics, 49(3):1378-1406, 2021. 1, 3", + "[13] Moritz Hardt, Eric Price, and Nati Srebro. Equality of opportunity in supervised learning. NeurIPS, 29, 2016. 3, 6" + ], + "bbox": [ + 93, + 243, + 483, + 901 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[14] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, pages 770-778, 2016. 6, 7", + "[15] Jeremy Howard and Sebastian Ruder. Universal language model fine-tuning for text classification. In ACL, pages 328-339, 2018. 4", + "[16] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. Lora: Low-rank adaptation of large language models. In ICLR, pages 1-13, 2022. 13", + "[17] Jungseock Joo and Kimmo Kärkkäinen. Gender slopes: Counterfactual fairness for computer vision models by attribute manipulation. In FATE/MM, pages 1-5, 2020. 2, 3, 6", + "[18] Sangwon Jung, Sanghyuk Chun, and Taesup Moon. Learning fair classifiers with partially annotated group labels. In CVPR, pages 10348-10357, 2022. 6", + "[19] Gal Kaplun, Andrey Gurevich, Tal Swisa, Mazor David, Shai Shalev-Shwartz, and Eran Malach. Less is more: Selective layer finetuning with subtuning. arXiv preprint arXiv:2302.06354, 2023. 4, 7", + "[20] Adarsh Kappiyath, Abhra Chaudhuri, AJAY KUMAR JAISWAL, Ziquan Liu, Yunpeng Li, Xiatian Zhu, and Lu Yin. Sebra: Debiasing through self-guided bias ranking. In ICLR, pages 1-12, 2025. 3", + "[21] Simon Kornblith, Jonathon Shlens, and Quoc V Le. Do better imagenet models transfer better? In CVPR, pages 2661-2671, 2019. 4", + "[22] Yoonho Lee, Annie S Chen, Fahim Tajwar, Ananya Kumar, Huaxiu Yao, Percy Liang, and Chelsea Finn. Surgical finetuning improves adaptation to distribution shifts. In ICLR, 2022. 4", + "[23] Daniel Levy, Yair Carmon, John C Duchi, and Aaron Sidford. Large-scale methods for distributionally robust optimization. NeurIPS, 33:8847-8860, 2020. 
1, 3, 6", + "[24] Evan Z Liu, Behzad Haghgoo, Annie S Chen, Aditi Raghunathan, Pang Wei Koh, Shiori Sagawa, Percy Liang, and Chelsea Finn. Just train twice: Improving group robustness without training group information. In ICML, pages 6781-6792, 2021. 3, 6, 8", + "[25] Lydia T Liu, Max Simchowitz, and Moritz Hardt. The implicit fairness criterion of unconstrained learning. In ICML, pages 4051-4060. PMLR, 2019. 3", + "[26] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaou Tang. Deep learning face attributes in the wild. In ICCV, 2015, 5, 12", + "[27] Ziquan Liu, Yi Xu, Xiangyang Ji, and Antoni B Chan. Twins: A fine-tuning framework for improved transferability of adversarial robustness and generalization. In CVPR, pages 16436-16446, 2023. 4", + "[28] Romain Lopez, Jeffrey Regier, Michael I Jordan, and Nir Yosef. Information constraints on auto-encoding variational bayes. NeurIPS, 31, 2018. 4", + "[29] David Madras, Elliot Creager, Toniann Pitassi, and Richard Zemel. Learning adversarially fair and transferable representations. In ICML, pages 3384-3393, 2018. 2" + ], + "bbox": [ + 516, + 92, + 903, + 900 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "28756", + "bbox": [ + 478, + 944, + 519, + 955 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[30] Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. A survey on bias and fairness in machine learning. ACM Computing Surveys, 54(6):1-35, 2021. 1", + "[31] Sachit Menon and Carl Vondrick. Visual classification via description from large language models. In ICLR, 2022. 4", + "[32] Junhyun Nam, Hyuntak Cha, Sungsoo Ahn, Jaeho Lee, and Jinwoo Shin. Learning from failure: De-biasing classifier from biased classifier. NeurlPS, 33:20673-20684, 2020. 6", + "[33] James Oldfield, Markos Georgopoulos, Grigorios Chrysos, Christos Tzelepis, Yannis Panagakis, Mihalis Nicolaou, Jiankang Deng, and Ioannis Patras. Multilinear mixture of experts: Scalable expert specialization through factorization. NeurIPS, 37:53022-53063, 2025. 3", + "[34] Sungho Park, Sunhee Hwang, Dohyung Kim, and Hyeran Byun. Learning disentangled representation for fair facial attribute classification via fairness-aware information alignment. In AAAI, pages 2403-2411, 2021. 2", + "[35] Sungho Park, Jewook Lee, Pilhyeon Lee, Sunhee Hwang, Dohyung Kim, and Hyeran Byun. Fair contrastive learning for facial attribute classification. In CVPR, pages 10389-10398, 2022. 5, 6", + "[36] Momchil Peychev, Anian Ruoss, Mislav Balunovic, Maximilian Baader, and Martin Vechev. Latent space smoothing for individually fair representations. In ECCV, pages 535-554, 2022. 2, 3, 4", + "[37] Mohammad Pezeshki, Diane Bouchacourt, Mark Ibrahim, Nicolas Ballas, Pascal Vincent, and David Lopez-Paz. Discovering environments with xrm. In ICML, 2024. 3", + "[38] Geoff Pleiss, Manish Raghavan, Felix Wu, Jon Kleinberg, and Kilian Q Weinberger. On fairness and calibration. NeurIPS, 30, 2017. 3", + "[39] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In ICML, pages 8748-8763, 2021. 4", + "[40] Vikram V Ramaswamy, Sunnie SY Kim, and Olga Russakovsky. Fair attribute classification through latent space de-biasing. In CVPR, pages 9301-9310, 2021. 2, 3", + "[41] Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. 
Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125, pages 1-27, 2022. 2, 4", + "[42] Youngmin Ro and Jin Young Choi. Autolr: Layer-wise pruning and auto-tuning of learning rates in fine-tuning of deep networks. In AAAI, pages 2486-2494, 2021. 4", + "[43] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In CVPR, pages 10684-10695, 2022. 2, 3, 4", + "[44] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In MICCAI, pages 234-241, 2015. 4", + "[45] Chitwan Sahara, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans," + ], + "bbox": [ + 91, + 92, + 480, + 900 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "et al. Photorealistic text-to-image diffusion models with deep language understanding. In NeurIPS, pages 36479-36494, 2022. 2, 4", + "[46] Chitwan Sahara, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al. Photorealistic text-to-image diffusion models with deep language understanding. NeurIPS, 35:36479-36494, 2022. 3", + "[47] Joonghyuk Shin, Minguk Kang, and Jaesik Park. Fill-up: Balancing long-tailed data with generative models. arXiv preprint arXiv:2306.07200, 2023. 1, 2, 6, 7", + "[48] Robik Shrestha, Yang Zou, Qiuyu Chen, Zhiheng Li, Yusheng Xie, and Siqi Deng. Fairrag: Fair human generation via fair retrieval augmentation. In CVPR, pages 11996-12005, 2024. 2, 4", + "[49] Ilia Shumailov, Zakhar Shumaylov, Yiren Zhao, Nicolas Papernot, Ross Anderson, and Yarin Gal. Ai models collapse when trained on recursively generated data. Nature, 631 (8022):755-759, 2024. 2", + "[50] Krishnakant Singh, Thanush Navaratnam, Jannik Holmer, Simone Schaub-Meyer, and Stefan Roth. Is synthetic data all we need? benchmarking the robustness of models trained with synthetic images. In CVPR, pages 2505-2515, 2024. 5", + "[51] Tobias Uelwer, Jan Robine, Stefan Sylvius Wagner, Marc Höftmann, Eric Upschulte, Sebastian Konietzny, Maike Behrendt, and Stefan Harmeling. A survey on self-supervised representation learning. arXiv preprint arXiv:2308.11455, 2023. 2", + "[52] Mei Wang and Weihong Deng. Mitigating bias in face recognition using skewness-aware reinforcement learning. In CVPR, pages 9322-9331, 2020. 6", + "[53] Yinong Oliver Wang, Younjoon Chung, Chen Henry Wu, and Fernando De la Torre. Domain gap embeddings for generative dataset augmentation. In CVPR, pages 28684-28694, 2024. 5", + "[54] Zeyu Wang, Klint Qinami, Ioannis Christos Karakozis, Kyle Genova, Prem Nair, Kenji Hata, and Olga Russakovsky. Towards fairness in visual recognition: Effective strategies for bias mitigation. In CVPR, pages 8919-8928, 2020. 2", + "[55] Yuyang Xue, Junyu Yan, Raman Dutt, Fasih Haider, Jingshuai Liu, Steven McDonagh, and Sotirios A Tsaftaris. Bmft: Achieving fairness via bias-based weight masking fine-tuning. In MICCAI Workshop on Fairness of AI in Medical Imaging, pages 98-108, 2024. 3", + "[56] Moon Ye-Bin, Nam Hyeon-Woo, Wonseok Choi, Nayeong Kim, Suha Kwak, and Tae-Hyun Oh. Exploiting synthetic data for data imbalance problems: Baselines from a data perspective. arXiv preprint arXiv:2308.00994, 2023. 
1, 4, 6, 7", + "[57] Cheng Zhang, Xuanbai Chen, Siqi Chai, Chen Henry Wu, Dmitry Lagun, Thabo Beeler, and Fernando De la Torre. Iti-gen: Inclusive text-to-image generation. In ICCV, pages 3969-3980, 2023. 2, 4", + "[58] Fengda Zhang, Kun Kuang, Long Chen, Yuxuan Liu, Chao Wu, and Jun Xiao. Fairness-aware contrastive learning with partially annotated sensitive attributes. In ICLR, 2023. 3, 5, 6" + ], + "bbox": [ + 516, + 92, + 903, + 898 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "28757", + "bbox": [ + 478, + 945, + 517, + 955 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[59] Fengda Zhang, Qianpei He, Kun Kuang, Jiashuo Liu, Long Chen, Chao Wu, Jun Xiao, and Hanwang Zhang. Distributionally generative augmentation for fair facial attribute classification. In CVPR, pages 22797-22808, 2024. 2, 3, 4, 5, 6", + "[60] Zhifei Zhang, Yang Song, and Hairong Qi. Age progression/regression by conditional adversarial autoencoder. In CVPR, 2017. 6, 12", + "[61] Qihao Zhao, Yalun Dai, Hao Li, Wei Hu, Fan Zhang, and Jun Liu. Ltgc: Long-tail recognition via leveraging llms-driven generated content. In CVPR, pages 19510-19520, 2024. 13", + "[62] Zengqun Zhao and Ioannis Patras. Prompting visual-language models for dynamic facial expression recognition. In BMVC, pages 1-14, 2023. 4", + "[63] Xiao Zhou, Yong Lin, Renjie Pi, Weizhong Zhang, Renzhe Xu, Peng Cui, and Tong Zhang. Model agnostic sample reweighting for out-of-distribution learning. In ICML, pages 27203-27221, 2022. 6", + "[64] Yongchao Zhou, Hshmat Sahak, and Jimmy Ba. Using synthetic data for data augmentation to improve classification accuracy. In ICML, 2023. 2, 5", + "[65] Fuzhen Zhuang, Zhiyuan Qi, Keyu Duan, Dongbo Xi, Yongchun Zhu, Hengshu Zhu, Hui Xiong, and Qing He. A comprehensive survey on transfer learning. Proceedings of the IEEE, 109(1):43-76, 2020. 4" + ], + "bbox": [ + 91, + 92, + 480, + 445 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "28758", + "bbox": [ + 478, + 945, + 517, + 955 + ], + "page_idx": 10 + } +] \ No newline at end of file diff --git a/2025/AIM-Fair_ Advancing Algorithmic Fairness via Selectively Fine-Tuning Biased Models with Contextual Synthetic Data/e3f123cb-93dd-4a70-bf37-7825bb88c80c_model.json b/2025/AIM-Fair_ Advancing Algorithmic Fairness via Selectively Fine-Tuning Biased Models with Contextual Synthetic Data/e3f123cb-93dd-4a70-bf37-7825bb88c80c_model.json new file mode 100644 index 0000000000000000000000000000000000000000..2ab9afdc775fc45a29410acc914ee96170794c54 --- /dev/null +++ b/2025/AIM-Fair_ Advancing Algorithmic Fairness via Selectively Fine-Tuning Biased Models with Contextual Synthetic Data/e3f123cb-93dd-4a70-bf37-7825bb88c80c_model.json @@ -0,0 +1,2224 @@ +[ + [ + { + "type": "header", + "bbox": [ + 0.107, + 0.003, + 0.182, + 0.043 + ], + "angle": 0, + "content": "CVF" + }, + { + "type": "header", + "bbox": [ + 0.237, + 0.001, + 0.812, + 0.047 + ], + "angle": 0, + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." 
+ }, + { + "type": "title", + "bbox": [ + 0.104, + 0.13, + 0.895, + 0.177 + ], + "angle": 0, + "content": "AIM-Fair: Advancing Algorithmic Fairness via Selectively Fine-Tuning Biased Models with Contextual Synthetic Data" + }, + { + "type": "text", + "bbox": [ + 0.169, + 0.204, + 0.828, + 0.24 + ], + "angle": 0, + "content": "Zengqun Zhao Ziquan Liu Yu Cao Shaogang Gong Ioannis Patras Centre for Multimodal AI, Queen Mary University of London" + }, + { + "type": "text", + "bbox": [ + 0.218, + 0.242, + 0.775, + 0.258 + ], + "angle": 0, + "content": "{zengqun.zhao, ziquan.liu, yu.cao, s.gong, i.patras}@qmul.ac.uk" + }, + { + "type": "title", + "bbox": [ + 0.248, + 0.292, + 0.327, + 0.308 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.324, + 0.485, + 0.807 + ], + "angle": 0, + "content": "Recent advances in generative models have sparked research on improving model fairness with AI-generated data. However, existing methods often face limitations in the diversity and quality of synthetic data, leading to compromised fairness and overall model accuracy. Moreover, many approaches rely on the availability of demographic group labels, which are often costly to annotate. This paper proposes AIM-Fair, aiming to overcome these limitations and harness the potential of cutting-edge generative models in promoting algorithmic fairness. We investigate a fine-tuning paradigm starting from a biased model initially trained on real-world data without demographic annotations. This model is then fine-tuned using unbiased synthetic data generated by a state-of-the-art diffusion model to improve its fairness. Two key challenges are identified in this fine-tuning paradigm, 1) the low quality of synthetic data, which can still happen even with advanced generative models, and 2) the domain and bias gap between real and synthetic data. To address the limitation of synthetic data quality, we propose Contextual Synthetic Data Generation (CSDG) to generate data using a text-to-image diffusion model (T2I) with prompts generated by a context-aware LLM, ensuring both data diversity and control of bias in synthetic data. To resolve domain and bias shifts, we introduce a novel selective fine-tuning scheme in which only model parameters more sensitive to bias and less sensitive to domain shift are updated. Experiments on CelebA and UTKFace datasets show that our AIM-Fair improves model fairness while maintaining utility, outperforming both fully and partially fine-tuned approaches to model fairness. The code is available at https://github.com/zengqunzhao/AIM-Fair." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.829, + 0.222, + 0.844 + ], + "angle": 0, + "content": "1. Introduction" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.848, + 0.483, + 0.879 + ], + "angle": 0, + "content": "Recent research has raised significant concerns about fairness and bias in machine learning models [30]. These mod-" + }, + { + "type": "image", + "bbox": [ + 0.523, + 0.292, + 0.891, + 0.518 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.512, + 0.521, + 0.907, + 0.756 + ], + "angle": 0, + "content": "Figure 1. Facial attributes classification (FAC) on the CelebA dataset based on different training strategies, in which Smiling is the target attribute and Gender is the protected attribute. This shows different learning strategies result in variable model utility (Overall Accuracy) vs. 
model fairness (Worst Group Accuracy and Equalized Odds) on demographic groups. A model trained solely on real data, if biased, exhibits high accuracy but poor fairness scores. Conversely, models trained on balanced synthetic data show better fairness but poorer accuracy due to a \"domain gap\" between the real and synthetic data, and a lack of selective model fine-tuning when synthetic data is deployed. Strategies to repair imbalances in real data [9, 56] or to supplement real data with synthetic data [47] marginally increase accuracy but do little to improve fairness. Our method for selective fine-tuning of a pre-trained (biased) model with synthetic data not only preserves model accuracy but also substantially improves fairness, outperforming full fine-tuning (FFT) in both model utility and fairness." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.765, + 0.909, + 0.903 + ], + "angle": 0, + "content": "els often demonstrate varying performance across different demographic groups, leading to unfair outcomes. To mitigate the spurious correlation caused by learning from imbalanced (biased) data, regularizations were used as optimization objectives [1, 10, 12, 23]. Methods like distributionally robust optimization (DRO) [12] optimize the worst-case performance, while invariant risk minimization (IRM) [1] learns unbiased representations with invariance to different environments. Influenced by the success of repre" + }, + { + "type": "page_footnote", + "bbox": [ + 0.114, + 0.887, + 0.442, + 0.902 + ], + "angle": 0, + "content": "Corresponding author: Ziquan Liu {ziquan.liu@qmul.ac.uk}." + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.52, + 0.958 + ], + "angle": 0, + "content": "28748" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.11, + 0.065, + 0.368, + 0.205 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.369, + 0.064, + 0.626, + 0.205 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.626, + 0.065, + 0.885, + 0.205 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.207, + 0.907, + 0.292 + ], + "angle": 0, + "content": "Figure 2. The effects of selective fine-tuning by layer-wise freezing in facial attribute classification on the CelebA dataset, with Smiling as the target attribute and Male as the protected attribute: When only the fully connected (FC) layer is frozen, the model shows improved worst demographic group accuracy and reduced equalized odds, i.e. more fair, but sacrifices some utility (overall accuracy). This indicates increased cross-domain generalisability (real and synthetic). Conversely, freezing only block 2 while fine-tuning the remaining parameters results in high overall accuracy but poorer fairness, i.e. further enhanced domain-bias specificity. When only block 1 is frozen, the model not only maintains equalized odds but also increases utility (overall accuracy) and worst group accuracies." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.295, + 0.486, + 0.568 + ], + "angle": 0, + "content": "sentation learning [51], attempts were made to learn a fair feature representation invariant to protected facial attributes [6, 29, 34, 54]. However, these methods rely on demographic group labelling, often unavailable in practice [2, 4]. More recent advances in generative models have utilized generative augmentation for a more balanced training data distribution [8, 9, 17, 36, 40, 59]. 
These techniques primarily focus on image editing, either on real data [36, 40] or synthetic data [9], to create less biased training data. Yet, these methods still require additional group annotations of the data they edit. To address this limitation, DiGA [59] proposed to create an unbiased training set by editing spurious facial attributes to a random degree while keeping a target facial attribute unchanged without knowing each sample's group label. However, DiGA increases training data size multiple times, leading to substantially higher training costs, and the quality of generated data is limited due to the unknown group labels." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.57, + 0.487, + 0.902 + ], + "angle": 0, + "content": "Recent advancements in text-to-image generative models showcase impressive data fidelity [43], yet their potential for improving fairness through data expansion has not been fully explored by the fairness community. This raises the question: can AI-generated synthetic data play a crucial role in mitigating biases within machine learning models? This research question is particularly important given the successful application of AI-generated data in fine-tuning large language models (LLMs) [11], data augmentation [64], and long-tail recognition [47], alongside some reported limitations of synthetic data effectiveness [49]. This work presents a comprehensive empirical investigation into whether fine-tuning high-quality, balanced generative data from a contemporary text-to-image model can counteract model biases caused by training on imbalanced real data. We identify two key challenges in bias-correcting fine-tuning with synthetic data: (1) A data-related challenge arising from linguistic ambiguity of the textual prompt and/or model misrepresentation [48, 57], which results in low-quality and low-diversity generated data. (2) A model learning challenge caused by both a domain shift (synthetic vs. real) and a bias shift (unbiased vs. biased) between" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.296, + 0.907, + 0.326 + ], + "angle": 0, + "content": "the real and the synthetic data. Fine-tuning blindly on the synthetic will result in a model with decreased utility." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.327, + 0.909, + 0.538 + ], + "angle": 0, + "content": "To address the data quality issue, we propose Contextual Synthetic Data Generation (CSDG). Existing Latent Diffusion Models (LDMs) [3, 36, 41, 45] often use vision-language models, such as CLIP, for cross-modality alignment. To yield more diverse and better fine-grained image generation, we formulate a contextual synthetic data generation strategy that uses more detailed text descriptions from expansive linguistic expressions of LLMs to condition an LDM with richer, context-driven text. As a result, the generated images cover more scenarios and provide more details, enhancing diversity and mitigating bias in real data. In contrast to other methods that edit or manipulate real data samples, a key strength of our method is that it does not require annotating the Protected Group Attributes of real data." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.539, + 0.909, + 0.903 + ], + "angle": 0, + "content": "The model learning challenge is due to the existence of two types of shifts between the generated and the real data, namely, the desired bias shift and the undesired domain shift (image quality, realism, scene characteristics). 
To address this challenge, we propose to locate and update model parameters that are more sensitive to the bias shift and less sensitive to the domain shift. Our observation is that some parameters are more sensitive to data distribution shift (cross-domain generalisability), which we call domain-sensitive parameters, while others are more sensitive to demographic group discrepancies (real data domain bias specificity), which we call fairness-sensitive parameters. This observation is supported by the experiments shown in Fig. 2, obtained by fine-tuning a real-data pre-trained model on balanced synthetic data while keeping parameters at different layers frozen. To identify which parameters to update, we propose a novel selection scheme that compares the gradient differences among updates computed on the real (biased) dataset and on two synthetic datasets, one biased and one unbiased. Ranking the gradient differences between the synthetic-biased data and the synthetic-unbiased data reveals parameters sensitive to fairness (fairness-sensitive parameters). In contrast, inverse ranking the gradient differences between the real data and the synthetic-biased data" + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.945, + 0.521, + 0.958 + ], + "angle": 0, + "content": "28749" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.123, + 0.066, + 0.873, + 0.304 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.089, + 0.306, + 0.907, + 0.363 + ], + "angle": 0, + "content": "Figure 3. A selective fine-tuning model consisting of three parts: (1) Contextual Synthetic Data Generation (CSDG) for generating diverse images using GPT-4 generated prompts, (2) Selective Mask Generation (SMG) for creating a selection mask that determines which parameters are updated during fine-tuning, and (3) Selective Fine-Tuning (SFT) to enhance model fairness obtained from synthetic data whilst simultaneously preserving the model utility yielded from real data in pre-training." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.366, + 0.482, + 0.456 + ], + "angle": 0, + "content": "discovers parameters less sensitive to domain shift (domain-insensitive parameters). In fine-tuning, a selection mask is constructed as the intersection of the top-k rankings. This selection mask is applied to initialize the pre-trained model, so that only the selected parameters are updated using the balanced synthetic data." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.46, + 0.483, + 0.671 + ], + "angle": 0, + "content": "Fig. 1 presents a comparison of the results across various training strategies using real and/or synthetic data. Fig. 3 shows an overview of our model, which consists of three parts: Contextual Synthetic Data Generation (CSDG), Selective Mask Generation (SMG), and Selective Fine-Tuning (SFT). CSDG uses a pre-trained and fixed LDM to generate high-quality images, where a series of contextual prompts serve as the conditions. SMG creates a selection mask that determines which parameters are updated during fine-tuning. SFT is designed to correct bias in the model. We initialize a pre-trained model using the parameter selection mask. Then the model is fine-tuned on balanced synthetic data to enhance its fairness while retaining the model utility. 
Our contributions are as follows:" + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.675, + 0.483, + 0.735 + ], + "angle": 0, + "content": "- We investigate a fine-tuning paradigm that mitigates model bias stemming from unbalanced real data using synthetic data generated by a text-to-image (T2I) process without requiring demographic group annotations." + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.736, + 0.483, + 0.795 + ], + "angle": 0, + "content": "- We design contextual synthetic data generation using a T2I diffusion model with prompts generated by a context-aware LLM, ensuring both data diversity and control of bias in synthetic data." + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.796, + 0.483, + 0.855 + ], + "angle": 0, + "content": "- We introduce a selective fine-tuning method for fair model learning, which identifies domain-sensitive and fairness-sensitive parameters for improving model fairness and utility simultaneously." + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.856, + 0.483, + 0.9 + ], + "angle": 0, + "content": "- Our method outperforms both full and partial fine-tuning methods and achieves superior performance compared to state-of-the-art methods across several datasets." + }, + { + "type": "list", + "bbox": [ + 0.091, + 0.675, + 0.483, + 0.9 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.365, + 0.656, + 0.38 + ], + "angle": 0, + "content": "2. Related Work" + }, + { + "type": "text", + "bbox": [ + 0.511, + 0.385, + 0.907, + 0.717 + ], + "angle": 0, + "content": "Mitigating Model Bias. Current methods for model fairness can be categorized into three types: pre-processing, in-processing, and post-processing. Pre-processing approaches modify sample distributions of protected variables or perform transformations to remove discrimination from the data. Many recent works use generative models to create balanced, unbiased datasets [36, 40, 58]. In-processing methods incorporate fairness metrics into model optimization to maximize both performance and fairness [1, 10, 12, 20, 23, 55]. Some studies explore mitigating bias without group annotations, such as Just Train Twice (JTT) [24], which upweights misclassified examples to improve worst-group performance, and Cross-Risk Minimization (XRM) [37], which trains twin classifiers to reduce spurious correlations. Post-processing methods apply transformations to model outputs to improve fairness, such as model calibration [25, 33, 38] and thresholding [5, 13] to align predicted positive outcomes with actual positive examples across groups. Our method falls within the pre-processing paradigm, but instead of simply balancing real data with synthetic data, we focus on using synthetic data to enhance model fairness." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.72, + 0.909, + 0.902 + ], + "angle": 0, + "content": "Improving Model Fairness on Generative Data. Generative models have advanced rapidly in recent years [43, 46], with several studies exploring their use to improve model fairness through generative data [9, 17, 59]. Much of the prior research has focused on generating counterfactual samples to assess fairness [8, 17], and generative methods have also made strides in bias mitigation by creating balanced, unbiased datasets [36, 40, 58]. Instead of generating counterfactual samples based on real data, D'Incà et al. [9] use a diffusion model for uncontrolled image generation, followed by manipulation of the synthetic images in a semantic space. 
However, these approaches require additional" + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.521, + 0.958 + ], + "angle": 0, + "content": "28750" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.092, + 0.484, + 0.302 + ], + "angle": 0, + "content": "group annotations. Zhang et al. [59] use generative models to identify spurious attributes that may bias the model, then edit each image to modify these attributes. They train a fair FAC model on the augmented dataset, enhancing its invariance to these spurious variations. While this approach does not require protected group labels for training the FAC model, annotations are still needed to train the generative models that detect and edit spurious attributes. Additionally, generating accurate counterfactual images remains challenging, as the edits are applied randomly to images without known protected attributes. In contrast, our method uses generative data from a text-to-image LDM, enabling greater control over the diversity of the generated data and allowing a broader range of scenarios and variations." + }, + { + "type": "text", + "bbox": [ + 0.093, + 0.304, + 0.485, + 0.621 + ], + "angle": 0, + "content": "Model Fine-Tuning. A common approach to transfer learning in the presence of distribution shifts is to fine-tune the last few layers of a pre-trained model, retaining the learned features while adapting to the new task [27, 65]. To prevent overfitting during this fine-tuning process, existing methods suggest using a smaller learning rate compared to the initial pretraining phase [21], freezing the early backbone layers and gradually unfreezing them [15], or applying different learning rates to each layer [42]. Lee et al. [22] introduced surgical fine-tuning, which showed that selectively fine-tuning a subset of layers can either match or outperform conventional fine-tuning strategies. Recent research [19] indicates that training a carefully selected subset of layers while keeping the remaining weights frozen at their initial values can lead to varying contributions from different layers across the network to overall performance. Our proposed method inherits common findings from these works, but we investigate further with parameter-wise selective fine-tuning, characterising each parameter's sensitivity to properties of the pre-training data distribution and of the downstream data distribution." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.632, + 0.192, + 0.648 + ], + "angle": 0, + "content": "3. Methods" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.657, + 0.484, + 0.748 + ], + "angle": 0, + "content": "This section first introduces how to generate contextual synthetic data with LLM-generated prompts. Then, we detail how to obtain the parameter selection mask. Finally, we present how to conduct fair model fine-tuning with a selection mask on balanced synthetic data. We provide the algorithm for the selective fine-tuning in the Appendix." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.75, + 0.421, + 0.765 + ], + "angle": 0, + "content": "3.1. Contextual Synthetic Data Generation" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.765, + 0.484, + 0.902 + ], + "angle": 0, + "content": "Our method attempts to correct the biased model trained on imbalanced real data by fine-tuning on balanced synthetic data, so the quality of the synthetic data is crucial to the success of fair learning. 
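To make the CSDG step concrete, below is a minimal sketch of the generation pipeline described in this subsection, assuming the Hugging Face diffusers library and the Stable Diffusion v2.1 checkpoint used in the experiments. The contextual prompts shown are illustrative placeholders, not the paper's released prompts; those come from a context-aware LLM (GPT-4) and are listed in the Appendix.

```python
# Minimal sketch of Contextual Synthetic Data Generation (CSDG), assuming the
# diffusers library. The prompt strings below are hypothetical stand-ins for
# the LLM-generated contextual prompts described in this subsection.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

# One illustrative contextual prompt per (target, protected) group; the paper
# uses many prompts per group, varying scene, facial detail, and head pose.
contextual_prompts = {
    ("smiling", "male"): "Portrait photo of a smiling middle-aged man with "
                         "short curly hair, outdoors, head turned slightly left",
    ("smiling", "female"): "Portrait photo of a smiling young woman with "
                           "glasses and wavy hair, soft indoor lighting",
}

images = []
for (target, protected), prompt in contextual_prompts.items():
    # Equal-sized generations per group yield a balanced synthetic set.
    batch = pipe(prompt, num_images_per_prompt=4).images
    images.extend((target, protected, img) for img in batch)
```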
Although current text-to-image models have demonstrated remarkable performance, several studies indicate that directly expressing the desired attributes in the prompt often results in sub-optimal outcomes due to linguistic ambiguity or model misrepresentation [48, 57]. For example, as shown in Fig. 4, to generate a face photo with" + }, + { + "type": "image", + "bbox": [ + 0.541, + 0.093, + 0.874, + 0.33 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.513, + 0.333, + 0.907, + 0.377 + ], + "angle": 0, + "content": "Figure 4. Generated images of Smiling Male conditioned on different prompts. Compared to the plain prompt, our contextual prompts enhance the diversity. More on UTKFace in Appendix." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.382, + 0.907, + 0.473 + ], + "angle": 0, + "content": "both target and protected attributes, one might use a prompt like \"Portrait face photo of a smiling male.\" as suggested in [56]. However, such manually designed prompts may lead to stereotypical image generation, excluding certain attributes or minority groups, which in turn can introduce bias in other attributes, such as age or hairstyle [57]." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.476, + 0.909, + 0.688 + ], + "angle": 0, + "content": "Recent work in zero-shot [31] and supervised [62] classification suggests that leveraging additional contextual and detailed information can enhance vision-language alignment. Considering that current text-to-image models [3, 36, 41, 45] often use vision-language models, such as CLIP [39], for cross-modality alignment, we propose a contextual synthetic data generation strategy that leverages the powerful linguistic capabilities of large language models, such as GPT-4, to condition an LDM with richer and context-driven text. As a result, the generated images cover more scenarios and provide more details, enhancing diversity and mitigating the bias in the characteristics not involved in the target and protected attributes. The structure of the CSDG is shown in the upper right of Fig. 3." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.689, + 0.911, + 0.903 + ], + "angle": 0, + "content": "Concretely, the instruction provided to the GPT-4 model follows the format: “{Task}, {Number of Prompts}, {Target Attribute}, {Protected Attribute}, {Other Detailed Descriptions}, and {Prompt Format}”. For generating facial photos, for example, the other detailed descriptions contain specific facial features, hair characteristics, eye-related details, and head orientation or angle. Our text-to-image generation model is a pre-trained latent diffusion model, Stable Diffusion (SD) [43], which reverses noise applied to the latent embedding of images. SD contains a variational autoencoder (VAE) [28], a CLIP text encoder [39], and a U-Net [44]. During inference, random Gaussian noise \\(\\varepsilon_{t} \\sim \\mathcal{N}(0,\\mathbf{I})\\) and the contextual prompt features \\(c = (w_{1}, w_{2}, \\ldots, w_{n})\\) encoded via a CLIP text" + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.519, + 0.958 + ], + "angle": 0, + "content": "28751" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.092, + 0.482, + 0.182 + ], + "angle": 0, + "content": "encoder \\(\\Psi (\\cdot)\\) will be fed into the U-Net to condition the denoising process, where \\(n\\) is the number of the contextual prompts. We provide the detailed instruction and contextual prompts in the Appendix. The empirical results in Tab. 
5 demonstrate that the generated contextual synthetic data mitigate domain shift and improve model fairness." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.183, + 0.334, + 0.197 + ], + "angle": 0, + "content": "3.2. Selective Mask Generation" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.199, + 0.484, + 0.47 + ], + "angle": 0, + "content": "As recent studies [50, 53, 64] have mentioned, synthetic data often fail to align with the real data distribution due to domain shift, suggesting that fine-tuning the pre-trained model directly on the balanced synthetic data may not learn the desired properties effectively. This is caused by the fact that there is both a domain shift (synthetic vs. real) and a bias shift (unbiased vs. biased) between the real and the synthetic data, and fine-tuning with the domain shift will result in a model with decreased utility. The empirical finding shown in Fig. 2 demonstrates that when fine-tuning a real data pre-trained model on balanced synthetic data, some parameters are more sensitive to the data distribution shift, called domain-sensitive parameters, while some are more sensitive to group discrepancies, called fairness-sensitive parameters. To discover the parameters' sensitivity to different scenarios, we propose to construct three distinct datasets with different distributions to elicit the model responses by calculating the parameter-wise gradients." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.471, + 0.484, + 0.659 + ], + "angle": 0, + "content": "We aim to fine-tune a pre-trained model \\( f_{\\theta}(x) \\) to improve fairness while accounting for domain differences between real and synthetic data; hence, we want to find the parameters which are more sensitive to fairness while less sensitive to domain shift. Specifically, we start by constructing three different datasets: biased real data \\( \\{(x_i^{(R)},y_i^{(R)})\\}_{i = 1}^{N_R}\\in \\mathcal{D}_R \\) representing training data with inherent biases, biased synthetic data \\( \\{(x_i^{(S_1)},y_i^{(S_1)})\\}_{i = 1}^{N_{S_1}}\\in \\mathcal{D}_{S_1} \\) mirroring the unfairness of the real data, and unbiased synthetic data \\( \\{(x_i^{(S_2)},y_i^{(S_2)})\\}_{i = 1}^{N_{S_2}}\\in \\mathcal{D}_{S_2} \\) designed to be fair, mitigating biases. \\( N_{R},N_{S_{1}} \\), and \\( N_{S_2} \\) are the numbers of images in the corresponding datasets." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.66, + 0.484, + 0.781 + ], + "angle": 0, + "content": "We then compute the gradients \\(\\pmb{g}_R\\), \\(\\pmb{g}_{S1}\\), and \\(\\pmb{g}_{S2}\\) of the loss function \\(\\mathcal{L}(\\theta ;x,y)\\), which is defined from a binary softmax loss. Rather than considering fine-grained, scalar-wise parameters, we select parameters at the level of weights and biases, denoted by \\(\\theta\\), which includes weights \\(W^{(l)}\\) and biases \\(b^{(l)}\\) for the Convolution Layer, Batch Normalization, and Fully Connected Layer, where \\(l\\) refers to the layer index. 
Then, for the model parameters \\(\\theta\\) on each dataset:" + }, + { + "type": "equation", + "bbox": [ + 0.17, + 0.786, + 0.484, + 0.824 + ], + "angle": 0, + "content": "\\[\n\\boldsymbol {g} _ {R} = \\frac {1}{N _ {R}} \\sum_ {i = 1} ^ {N _ {R}} \\nabla_ {\\boldsymbol {\\theta}} \\mathcal {L} \\left(\\theta ; x _ {i} ^ {(R)}, y _ {i} ^ {(R)}\\right) \\tag {1}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.16, + 0.826, + 0.484, + 0.865 + ], + "angle": 0, + "content": "\\[\n\\boldsymbol {g} _ {S _ {1}} = \\frac {1}{N _ {S _ {1}}} \\sum_ {i = 1} ^ {N _ {S _ {1}}} \\nabla_ {\\theta} \\mathcal {L} \\left(\\theta ; x _ {i} ^ {(S _ {1})}, y _ {i} ^ {(S _ {1})}\\right) \\tag {2}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.16, + 0.866, + 0.484, + 0.904 + ], + "angle": 0, + "content": "\\[\n\\boldsymbol {g} _ {S _ {2}} = \\frac {1}{N _ {S _ {2}}} \\sum_ {i = 1} ^ {N _ {S _ {2}}} \\nabla_ {\\theta} \\mathcal {L} (\\theta ; x _ {i} ^ {(S _ {2})}, y _ {i} ^ {(S _ {2})}) \\tag {3}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.092, + 0.905, + 0.137 + ], + "angle": 0, + "content": "where \\(\\nabla\\) is the gradient operator. We then calculate the gradient differences to capture how each parameter behaves across different data distributions. For parameter \\(\\theta_{j}\\):" + }, + { + "type": "equation", + "bbox": [ + 0.635, + 0.139, + 0.905, + 0.177 + ], + "angle": 0, + "content": "\\[\n\\begin{array}{l} \\Delta_ {1, j} = \\left| \\boldsymbol {g} _ {R, j} - \\boldsymbol {g} _ {S _ {1}, j} \\right| \\tag {4} \\\\ \\Delta_ {2, j} = \\left| \\mathbf {g} _ {S _ {1}, j} - \\mathbf {g} _ {S _ {2}, j} \\right| \\\\ \\end{array}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.176, + 0.907, + 0.28 + ], + "angle": 0, + "content": "where \\(\\Delta_{1}\\) measures parameters sensitive to the domain shift while \\(\\Delta_{2}\\) identifies parameters crucial for fairness, \\(j\\) denotes the index of all the parameters. 
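A minimal PyTorch sketch of Eqs. (1)-(4) follows, assuming `model` is the biased pre-trained classifier and the three dataloaders (biased real, biased synthetic, unbiased synthetic) are given; helper names such as `dataset_gradient` are ours, not from the paper's released code.

```python
# Sketch of Eqs. (1)-(4): dataset-level average gradients and their absolute
# differences, computed per weight/bias tensor as described in the text.
import torch
import torch.nn.functional as F

def dataset_gradient(model, loader, device="cuda"):
    """Average gradient of the loss over a dataset, per named parameter."""
    grads = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    n_batches = 0
    for x, y in loader:
        model.zero_grad()
        loss = F.cross_entropy(model(x.to(device)), y.to(device))
        loss.backward()
        for n, p in model.named_parameters():
            grads[n] += p.grad.detach()
        n_batches += 1
    return {n: g / n_batches for n, g in grads.items()}

g_R  = dataset_gradient(model, real_biased_loader)        # Eq. (1)
g_S1 = dataset_gradient(model, synthetic_biased_loader)   # Eq. (2)
g_S2 = dataset_gradient(model, synthetic_unbiased_loader) # Eq. (3)

# Eq. (4): since selection happens at the granularity of whole weight/bias
# tensors, each tensor's element-wise |difference| is reduced to one score.
delta_1 = {n: (g_R[n] - g_S1[n]).abs().mean().item() for n in g_R}   # domain
delta_2 = {n: (g_S1[n] - g_S2[n]).abs().mean().item() for n in g_R}  # fairness
```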
To obtain the parameters which are less affected by domain differences and more impactful for fairness, we conduct a ranking in ascending order on \\(\\Delta_{1}\\) (smaller differences first) and a ranking in descending order on \\(\\Delta_{2}\\) (larger differences first):" + }, + { + "type": "equation", + "bbox": [ + 0.577, + 0.282, + 0.905, + 0.298 + ], + "angle": 0, + "content": "\\[\nR _ {1} = \\operatorname {argsort} \\left(\\Delta_ {1}\\right); \\quad R _ {2} = \\operatorname {argsort} \\left(- \\Delta_ {2}\\right) \\tag {5}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.299, + 0.905, + 0.343 + ], + "angle": 0, + "content": "Then we can find the intersection of the top- \\(k\\) parameters, which are both less sensitive to the domain gap and significant for fairness, by \\(K = K_{1}\\cap K_{2}\\), where" + }, + { + "type": "equation", + "bbox": [ + 0.62, + 0.344, + 0.905, + 0.379 + ], + "angle": 0, + "content": "\\[\nK _ {1} = \\left\\{\\theta_ {j} \\mid j \\in R _ {1} [ 1: k ] \\right\\}, \\quad K _ {2} = \\left\\{\\theta_ {j} \\mid j \\in R _ {2} [ 1: k ] \\right\\} \\tag {6}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.533, + 0.378, + 0.843, + 0.393 + ], + "angle": 0, + "content": "Finally, the selection mask can be obtained by:" + }, + { + "type": "equation", + "bbox": [ + 0.615, + 0.394, + 0.905, + 0.433 + ], + "angle": 0, + "content": "\\[\nM _ {j} = \\left\\{ \\begin{array}{l l} \\text {True}, & \\text {if } \\theta_ {j} \\in K \\\\ \\text {False}, & \\text {otherwise} \\end{array} \\right. \\tag {7}\n\\]" + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.433, + 0.857, + 0.448 + ], + "angle": 0, + "content": "3.3. Selective Fine-Tuning on Synthetic Data" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.449, + 0.906, + 0.572 + ], + "angle": 0, + "content": "Our goal is to fine-tune the pre-trained model \\( f_{\\theta}(x) \\) on the balanced synthetic dataset \\( \\mathcal{D}_{S_2} \\) following the ERM framework, while only updating selected parameters as indicated by the mask \\( M \\). Specifically, given the model \\( f_{\\theta}(x) \\) pretrained on the biased real data, balanced synthetic data \\( \\{(x_i^{(S_2)},y_i^{(S_2)})\\}_{i = 1}^{N_{S_2}} \\), and the parameter selection mask \\( M \\), during optimization, we apply the mask to the gradients so that only the selected parameters are updated:" + }, + { + "type": "equation", + "bbox": [ + 0.523, + 0.572, + 0.905, + 0.598 + ], + "angle": 0, + "content": "\\[\n\\theta^ {(t + 1)} = \\theta^ {(t)} - \\eta \\left(M \\odot \\nabla_ {\\theta} \\mathcal {L} \\left(f _ {\\theta^ {(t)}} \\left(x _ {i} ^ {(S _ {2})}\\right), y _ {i} ^ {(S _ {2})}\\right)\\right) \\tag {8}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.6, + 0.906, + 0.66 + ], + "angle": 0, + "content": "where \\(\\theta^{(t)}\\) are the parameters at iteration \\(t\\), \\(\\eta\\) is the learning rate, and \\(\\odot\\) denotes element-wise multiplication. Notably, once the mask \\(M\\) is provided by SMG, it will be applied throughout the entire fine-tuning optimization process." + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.675, + 0.645, + 0.691 + ], + "angle": 0, + "content": "4. Experiments" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.7, + 0.906, + 0.76 + ], + "angle": 0, + "content": "This section presents the experimental setup and results. 
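Continuing the sketch above, Eqs. (5)-(8) amount to two rankings, a top-k intersection, and a masked SGD loop; the value k = 55 and the learning rate follow values reported in the experiments, and the helper names are again ours.

```python
# Sketch of Eqs. (5)-(8): rank the sensitivity scores, intersect the two
# top-k sets, and fine-tune only the selected tensors on the balanced
# synthetic data. Continues from the delta_1/delta_2 dictionaries above.
import torch
import torch.nn.functional as F

k = 55  # top-k tensors per ranking (a searched hyperparameter)

names = list(delta_1.keys())
R1 = sorted(names, key=lambda n: delta_1[n])    # Eq. (5): ascending
R2 = sorted(names, key=lambda n: -delta_2[n])   # Eq. (5): descending
K  = set(R1[:k]) & set(R2[:k])                  # Eq. (6): intersection

# Eq. (7): boolean mask over parameter tensors (True = update this tensor).
mask = {n: (n in K) for n in names}

# Eq. (8): masked SGD fine-tuning on the balanced synthetic set. Zeroing out
# the gradients of unselected tensors before the step realises M ⊙ ∇L.
opt = torch.optim.SGD(model.parameters(), lr=0.5)
for epoch in range(10):
    for x, y in synthetic_unbiased_loader:
        opt.zero_grad()
        F.cross_entropy(model(x.cuda()), y.cuda()).backward()
        for n, p in model.named_parameters():
            if not mask[n]:
                p.grad = None   # frozen: excluded from this update
        opt.step()
```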
We begin by describing the datasets, followed by the implementation details. The main results and ablation studies are presented at the end." + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.765, + 0.615, + 0.778 + ], + "angle": 0, + "content": "4.1. Datasets" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.78, + 0.907, + 0.902 + ], + "angle": 0, + "content": "CelebA [26] contains over 200,000 facial images with 40 binary attribute annotations. Following the setting of the previous works [35, 58, 59], we set Male and Young to protected attributes and selected Smiling and Young as the target attribute which has the highest correlation with the protected attributes. For each experiment, we randomly sample a biased subset as a training dataset with a size of 20,000 images, where the majority group and minority group have" + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "28752" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.1, + 0.09, + 0.493, + 0.117 + ], + "angle": 0, + "content": "Table 1. Comparisons to other methods on the CelebA dataset under settings of varied target and protected attributes." + }, + { + "type": "table", + "bbox": [ + 0.1, + 0.117, + 0.492, + 0.232 + ], + "angle": 0, + "content": "
MethodsT: Smiling, P: MaleT: Smiling, P: YoungT: Young, P: Male
ACC (↑) WST(↑) EO (↓)ACC (↑) WST(↑) EO (↓)ACC (↑) WST(↑) EO (↓)
ERM [14]88.2070.1025.3088.3071.5015.6077.7042.0052.00
CVaR DRO [23]87.3074.0022.8087.0076.1013.9075.4042.3048.80
EIL [7]87.9075.6019.7087.9072.5013.3077.5045.6039.20
LfF [32]87.1077.5017.0085.3072.9014.3077.4044.2043.60
JTT [24]88.0074.8019.4087.6073.3014.2076.3043.6047.70
MAPLE [63]88.1072.0019.6088.1073.6013.6076.3046.2043.50
DiGA [59]88.4081.907.4089.1078.509.5080.0051.3033.30
AIM-Fair (Ours)89.0284.206.0790.2187.785.4478.1954.8928.18
" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.238, + 0.485, + 0.405 + ], + "angle": 0, + "content": "90% and 10% of the sample size respectively. We report performance on the whole original test dataset. UTK Face [60] consists of over 20,000 facial images with three kinds of annotations: gender, age, and ethnicity. Following the experimental setup in the previous works [17, 18, 59], we define a binary spurious attribute \"Ethnicity\" based on whether the facial image is white or not. The task is to predict the Gender. We randomly sample a biased subset of 10,000 images, with the same bias degree as CelebA. We also construct a balanced and unbiased test dataset consisting of 3,200 images." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.411, + 0.255, + 0.425 + ], + "angle": 0, + "content": "4.2. Fairness Metrics" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.427, + 0.484, + 0.488 + ], + "angle": 0, + "content": "Our goal is to learn a fair and accurate model. For model utility, we present the overall accuracy (ACC) and group accuracy. For fairness, follow previous work [35, 58, 59] we use equalized odds (EO) [13], defined as:" + }, + { + "type": "equation", + "bbox": [ + 0.102, + 0.489, + 0.482, + 0.528 + ], + "angle": 0, + "content": "\\[\n\\left. \\overline {{\\sum}} _ {\\forall y, \\hat {y}} \\left| P _ {s _ {0}} (\\hat {Y} = \\hat {y} \\mid Y = y) - P _ {s _ {1}} (\\hat {Y} = \\hat {y} \\mid Y = y) \\right|, \\right. \\tag {9}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.528, + 0.483, + 0.573 + ], + "angle": 0, + "content": "where \\(\\overline{\\sum}\\) is the averaged sum, \\(Y\\) target label, \\(\\hat{Y}\\) classifier predictive label, and \\(s_0,s_1\\in S\\) protected attributes. The worst-group accuracy (WST) defined as:" + }, + { + "type": "equation", + "bbox": [ + 0.185, + 0.575, + 0.482, + 0.598 + ], + "angle": 0, + "content": "\\[\n\\min _ {\\forall y \\in \\mathcal {Y}, \\forall s \\in S} P _ {s} (\\hat {Y} = y \\mid Y = y) \\tag {10}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.597, + 0.436, + 0.61 + ], + "angle": 0, + "content": "and group standard deviation (STD) [52] defined as:" + }, + { + "type": "equation", + "bbox": [ + 0.186, + 0.611, + 0.482, + 0.635 + ], + "angle": 0, + "content": "\\[\n\\underset {\\forall y \\in \\mathcal {Y}, \\forall s \\in S} {\\operatorname {s t d}} P _ {s} (\\hat {Y} = y \\mid Y = y) \\tag {11}\n\\]" + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.643, + 0.308, + 0.658 + ], + "angle": 0, + "content": "4.3. Implementation Details" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.66, + 0.483, + 0.901 + ], + "angle": 0, + "content": "Following previous work [59], we use ResNet-18 [14] as the backbone for all experiments. We use stable diffusion v2.1 as our latent diffusion model for generating images. We generated 10,000 images for each group and then randomly sampled \\( N_{S} \\). To be consistent with the real data image number \\( N_{R} \\), we set the \\( N_{S} = N_{R} / G \\), where \\( G \\) is the group number. During the pre-training phase, we set the batch size to 128 and trained the model for 15 epochs. The initial learning rate was set to 0.01, which was then reduced by a factor of 0.01 at the 10th epoch. During the fine-tuning phase, we set the batch size to 128 and trained the model for 10 epochs. And the learning rate is searched from \\{0.4, 0.5, 0.6\\}. All models are optimized by the SGD optimizer and trained on the Tesla A100 GPU based on the open-source PyTorch platform. 
To obtain more stable and reliable results, we conducted all experiments 10 times with different" + }, + { + "type": "table_caption", + "bbox": [ + 0.503, + 0.09, + 0.897, + 0.117 + ], + "angle": 0, + "content": "Table 2. Comparisons to other methods on the CelebA dataset (T=Smiling, P=Male) under settings of training set sizes." + }, + { + "type": "table", + "bbox": [ + 0.505, + 0.117, + 0.897, + 0.232 + ], + "angle": 0, + "content": "
MethodsSamples Ratio = 50%Samples Ratio = 25%Samples Ratio = 10%
ACC (↑)WST(↑)EO (↓)ACC (↑)WST(↑)EO (↓)ACC (↑)WST(↑)EO (↓)
ERM [14]87.5067.8026.1087.1065.9027.7086.9062.8028.90
CVaR DRO [23]86.6072.9022.1086.6072.3022.4085.5069.1027.30
EIL [7]86.2071.3022.5085.9069.6025.4086.8064.2026.70
LfF [32]86.9075.5019.4085.9072.1023.6085.5066.1027.70
JTT [24]87.3072.9020.1086.7071.1020.6086.8067.1023.10
MAPLE [63]87.4073.7023.8087.0072.7024.2085.6069.2027.10
DiGA [59]88.4081.107.8088.4078.308.0088.3078.808.40
AIM-Fair (Ours)88.8583.996.2888.8982.547.5987.9081.738.16
" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.238, + 0.897, + 0.253 + ], + "angle": 0, + "content": "random seeds and then used the average as the final result." + }, + { + "type": "title", + "bbox": [ + 0.512, + 0.258, + 0.796, + 0.273 + ], + "angle": 0, + "content": "4.4. Comparisons to State of the Art" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.275, + 0.905, + 0.41 + ], + "angle": 0, + "content": "We compared our method against seven contemporary techniques, including the baseline ERM [14] method and six debiasing models all of which do not require protected attribute labels: Two regularization-based methods (CVaR DRO [23] and EIIL [7]); three reweighting-based methods (LfF [32], JTT [24], and MAPLE [63]); and generativemodel-based method (DiGA [59]). Our comparison covers two settings: different target and protected attributes and varying numbers of target labels." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.411, + 0.906, + 0.713 + ], + "angle": 0, + "content": "Tab.1 shows that ERM achieves good accuracy but suffers from significant unfairness. Other debiasing methods, while improving fairness to some extent, generally sacrifice accuracy. DiGA [59] improves fairness while maintaining accuracy by using a generative model to edit real data and create more balanced training data. In contrast, our method generates images from scratch using a text-driven latent diffusion model. It is evident that our method outperforms the best of the existing models DiGA [59] in both fairness and accuracy in all categories, except \"Young / Male\" where we come close second to DiGA. We outperform all other methods consistently. We also conducted comparisons with other methods under smaller training set sizes, with the subsampling ratio of \\(50\\%\\), \\(25\\%\\), and \\(10\\%\\). Tab. 2 shows that our method outperforms all others consistently, except on ACC score for Lable Ratio at \\(10\\%\\) where we come close second to DiGA. Critically, our method maintains robust model utility and fairness even with varying amounts of real training data, with these improvements attributed to the selective updating with balanced synthetic data." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.718, + 0.688, + 0.733 + ], + "angle": 0, + "content": "4.5. Ablation Analysis" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.734, + 0.906, + 0.884 + ], + "angle": 0, + "content": "Comparisons of Different Training Strategies. To evaluate the effectiveness of the proposed selective fine-tuning, we first compare our method with several other strategies trained on different types of data. These include the baseline, which trains the model using conventional ERM on real data, training the model solely on synthetic data, supplementing real data with synthetic data [47], and balancing the real data [9, 56]. Furthermore, we compare common fine-tuning methods, such as linear probing and full fine-tuning." + }, + { + "type": "text", + "bbox": [ + 0.534, + 0.886, + 0.905, + 0.901 + ], + "angle": 0, + "content": "The results in Tab. 3 indicate that both data supplen" + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.518, + 0.957 + ], + "angle": 0, + "content": "28753" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.246, + 0.084, + 0.75, + 0.097 + ], + "angle": 0, + "content": "Table 3. Comparisons of varied training strategies on CelebA and UTKFace datasets." + }, + { + "type": "table", + "bbox": [ + 0.108, + 0.097, + 0.884, + 0.312 + ], + "angle": 0, + "content": "
MethodsTargetCelebAUTKFace
ProtectedT: Smiling; P: MaleProtectedT: Smiling; P: YoungProtectedT: Female; P: White
P=0P=1ACC (↑) WST (↑) EO (↓) STD (↓)P=0P=1ACC (↑) WST (↑) EO (↓) STD (↓)P=0P=1ACC (↑) WST (↑) EO (↓) STD (↓)
Baseline [14]T=083.5798.5089.2371.5223.8410.6578.7296.0990.1978.7217.376.9279.5896.1688.8679.5816.597.26
T=195.3671.5294.3786.4295.7583.96
Trained on Synthetic DataT=086.8289.4085.9878.527.874.0785.4585.7685.1282.423.501.3482.7188.1285.4482.715.501.96
T=186.3978.5282.4285.2784.9086.04
Data Supplementation [47]T=084.3998.3989.7773.6721.969.7678.0595.3690.3578.0517.317.0282.1293.9790.2582.1211.855.24
T=195.4073.6794.9087.6295.6289.28
Data Repairing [9, 56]T=082.8298.1089.7275.0521.189.5079.5095.1990.6879.5015.696.3081.1594.2289.4281.1513.375.89
T=196.0475.0594.5188.3595.7186.61
Linear ProbT=084.5596.2289.6977.4117.517.7185.2093.8190.1485.208.613.1881.0691.5188.4081.0610.595.14
T=194.8277.4190.4387.8894.5986.42
Fully Fine-TuningT=088.8191.6288.7782.746.833.3185.9890.0687.8585.984.371.6883.0088.0487.9083.005.402.99
T=189.5582.7486.0286.8190.7489.83
AIM-Fair (Ours)T=088.1691.4089.0284.206.072.7488.1993.6490.2187.785.442.3484.2689.0888.3084.264.812.41
T=190.2584.2088.9887.7890.6289.25
" + }, + { + "type": "table_caption", + "bbox": [ + 0.218, + 0.313, + 0.779, + 0.326 + ], + "angle": 0, + "content": "Table 4. Comparisons of varied partial fine-tuning strategies on CelebA and UTKFace datasets." + }, + { + "type": "table", + "bbox": [ + 0.111, + 0.326, + 0.885, + 0.488 + ], + "angle": 0, + "content": "
MethodsTargetCelebAUTKFace
ProtectedT: Smiling; P: MaleProtectedT: Smiling; P: YoungProtectedT: Female; P: White
P=0P=1ACC (↑)WST (↑)EO (↓)STD (↓)P=0P=1ACC (↑)WST (↑)EO (↓)STD (↓)P=0P=1ACC (↑)WST (↑)EO (↓)STD (↓)
Best Random SelectionT=087.9392.8189.3382.179.144.0985.5291.5388.9585.526.022.1683.5088.8587.9283.505.382.61
T=191.3182.1788.2587.6390.3089.01
Best Sub-Tuning [19] (By Updating One Block)T=087.5094.7590.0480.6412.455.5287.2894.6390.3387.287.343.0083.0589.5689.0183.056.513.56
T=193.0980.6489.1587.3292.0791.34
Best Sub-Tuning [19] (By Freezing One Block)T=088.6991.0989.0683.487.003.0086.5591.8988.6586.555.342.2583.5888.3487.4283.584.762.23
T=190.4883.4887.0886.5588.9688.81
Selective Fine-Tuning (Cosine Similarity)T=086.0791.7788.8583.338.403.6187.4392.9690.2687.435.532.0583.7288.8288.2183.725.102.64
T=191.5383.3389.5288.6590.1490.15
AIM-Fair (Ours) (Absolute Difference)T=088.1691.4089.0284.206.072.7488.1993.6490.2187.785.442.3484.2689.0888.3084.264.812.41
T=190.2584.2088.9887.7890.6289.25
" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.494, + 0.483, + 0.676 + ], + "angle": 0, + "content": "tation and data repairing do not significantly improve fairness. Data supplementation, which combines biased real data with balanced synthetic data for co-training, makes it difficult for the model to learn fairness properties from the synthetic data because of the domain gap. While data repairing can create balanced training data, the domain shift between real and synthetic data limits the effectiveness of fair learning. Common transfer learning methods, such as linear probing and full fine-tuning, also face challenges due to domain shifts. Specifically, linear probing is affected by discrepancies in feature representations, while full fine-tuning improves fairness at the cost of reduced model utility." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.682, + 0.483, + 0.818 + ], + "angle": 0, + "content": "Comparisons of Different Partial Fine-Tuning. We also compare our method with other partial fine-tuning approaches, including random selection and sub-tuning [19]. For random selection, we set the updating ratios at \\(40\\%\\), \\(55\\%\\), \\(70\\%\\), and \\(85\\%\\), and select the best result as the final one. For sub-tuning, we perform block-wise fine-tuning and block-wise freezing (i.e., updating one block or freezing one block while updating the rest), also selecting the block with the best result." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.825, + 0.483, + 0.903 + ], + "angle": 0, + "content": "Tab. 4 shows that fine-tuning only one block consistently yields the best accuracy, but at the cost of fairness, while freezing one block tends to improve fairness but often sacrifices model utility. This demonstrates that coarse-grained approaches, such as block-wise updating or freezing, strut" + }, + { + "type": "table_caption", + "bbox": [ + 0.513, + 0.493, + 0.905, + 0.52 + ], + "angle": 0, + "content": "Table 5. Classification results on CelebA dataset (T=Smiling, P=Male) under settings of different prompt types and numbers." + }, + { + "type": "table", + "bbox": [ + 0.516, + 0.521, + 0.905, + 0.638 + ], + "angle": 0, + "content": "
Prompt Types (Number)TargetProtectedT: Smiling; P: MaleProtectedT: Smiling; P: Young
P=0P=1ACC(↑)WST(↑)EO(↓)STD(↓)P=0P=1ACC(↑)WST(↑)EO(↓)STD(↓)
Plain Prompt (1)t=065.1889.4471.6543.2834.2017.0783.1077.7272.7559.539.308.96
t=177.4743.2859.5368.83
Contextual Prompts (25)t=077.1289.2581.8569.7316.37.6576.7267.9079.8067.908.828.75
t=185.9969.7385.1691.09
Contextual Prompts (50)t=089.4789.8082.6672.225.077.6687.7290.4385.8579.674.364.20
t=177.2672.2279.6782.67
Contextual + Head Poses (50)t=086.8289.4085.9878.527.874.0785.4585.7685.1282.423.501.34
t=186.3978.5282.4285.27
" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.644, + 0.907, + 0.735 + ], + "angle": 0, + "content": "gle to achieve an optimal balance between model utility and fairness. In contrast, our method performs fine-grained, parameter-wise updating, which allows the model to enhance fairness while maintaining utility. As a result, our method achieves the best worst-group accuracy while preserving high overall accuracy." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.735, + 0.909, + 0.87 + ], + "angle": 0, + "content": "Additionally, our method uses gradient differences to identify parameter sensitivity to varying data distributions. We also compared absolute gradient differences with the cosine similarity between gradients. The results in Tab. 4 show that using absolute gradient differences yields better results. The possible reason is that the cosine similarity only provides the direction of gradient disparity, whereas the absolute difference directly captures the magnitude of the gradient disparity." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.871, + 0.905, + 0.902 + ], + "angle": 0, + "content": "Comparisons of Different Prompts. To evaluate the performance of the varied prompts used for generating images," + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.946, + 0.519, + 0.957 + ], + "angle": 0, + "content": "28754" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.099, + 0.089, + 0.473, + 0.276 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.28, + 0.483, + 0.336 + ], + "angle": 0, + "content": "Figure 5. Classification results on the CelebA dataset (T=Smiling, P=Male) with different top-k values. The top-k values {40, 45, 50, 55, 60} correspond approximately to \\(\\{64\\%, 72\\%, 80\\%, 88\\%, 96\\% \\}\\) of the total model parameters." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.34, + 0.483, + 0.534 + ], + "angle": 0, + "content": "we trained the model solely on the balanced synthetic data generated with corresponding prompts and tested it on the real test set. Tab. 5 shows the results of different prompt types and quantities, indicating that images generated with contextual prompts lead to significantly better performance in both model accuracy and fairness. We believe this improvement is due to the increased diversity and details in the synthetic images generated from contextual prompts. Moreover, as the number of contextual prompts increases, test performance improves as well. Specifically, incorporating head pose variation into the prompt further enhances both accuracy and fairness in the generated images, as the results shown in the last row of Tab. 5." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.538, + 0.483, + 0.703 + ], + "angle": 0, + "content": "Evaluations of Varied Top-k Values. In our method, to identify the intersection parameters from the two gradient difference rankings, we use the top-k selection to ensure that the chosen parameters are more sensitive to fairness and less affected by domain shift. The results for different top-k values are shown in Fig. 5. When k is set to 55, our method achieves the best fairness performance while retaining overall accuracy. Intuitively, a lower k results in fewer updates, leading to better model utility but worse fairness, while a high k involves more parameter updates, which can negatively impact both accuracy and fairness." 
+ }, + { + "type": "text", + "bbox": [ + 0.09, + 0.705, + 0.483, + 0.901 + ], + "angle": 0, + "content": "Evaluations of Varied Bias Ratio for Synthetic Data Construction. To evaluate the model's sensitivity to domain shift and group disparity, we construct three distinct datasets with different distributions. In our method, the bias ratio for the biased synthetic data distribution is treated as a hyperparameter, referenced by the error set of real training examples, as suggested in JTT [24]. To assess the impact of varying bias ratios in synthetic data, we explore different ratios. As shown in Tab. 6, a disparity between the synthetic data bias ratio and the real data bias ratio leads to worse performance in both accuracy and fairness. However, despite these differences in bias ratios, our method still outperforms the fully fine-tuning approach. This demonstrates that our" + }, + { + "type": "table_caption", + "bbox": [ + 0.513, + 0.09, + 0.905, + 0.117 + ], + "angle": 0, + "content": "Table 6. Classification results on CelebA dataset (T=Smiling, P=Male) with varied bias ratio of the biased synthetic data." + }, + { + "type": "table", + "bbox": [ + 0.534, + 0.118, + 0.885, + 0.285 + ], + "angle": 0, + "content": "
Bias RatioTargetProtectedACC (↑)WST (↑)EOD (↓)STD (↓)
P=0P=1
Fully Fine-Tuningt=088.8191.6288.7782.746.833.31
t=189.5582.74
4:6t=085.7291.7189.2383.908.783.77
t=192.6983.90
3:7t=087.0591.6288.9083.527.723.26
t=190.9383.52
2:8t=088.0191.7688.9783.167.303.28
t=190.4683.16
1:9t=088.1691.4089.0284.206.072.74
t=190.2584.20
" + }, + { + "type": "table_caption", + "bbox": [ + 0.513, + 0.292, + 0.905, + 0.319 + ], + "angle": 0, + "content": "Table 7. Classification results on CelebA dataset (T=Smiling, P=Male) with different number of synthetic data." + }, + { + "type": "table", + "bbox": [ + 0.534, + 0.32, + 0.885, + 0.465 + ], + "angle": 0, + "content": "
Ratio To Real DataTargetProtectedACC (↑)WST (↑)EOD (↓)STD (↓)
P=0P=1
0.5t=088.6593.9689.5180.8010.314.90
t=191.1080.80
1t=088.1691.4089.0284.206.072.74
t=190.2584.20
1.5t=089.1892.0588.3481.257.133.98
t=188.3881.25
2t=089.0191.2688.8183.366.082.96
t=191.2689.44
" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.47, + 0.905, + 0.498 + ], + "angle": 0, + "content": "method is able to identify the model's sensitivity under the bias ratios different to real data." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.501, + 0.907, + 0.667 + ], + "angle": 0, + "content": "**Evaluations of Different Number of Synthetic Data.** We also evaluate fine-tuning with varying amounts of balanced synthetic data. As shown in Tab. 7, we use the real training data count as a reference and set different ratios to determine the number of synthetic data. The results indicate that when the amount of synthetic data matches that of real data, the model achieves the best fairness. Additionally, using half the amount of real data results in the best accuracy. We believe that using too much synthetic data during fine-tuning can lead to overfitting, while using too little data may fail to adequately debias the model." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.68, + 0.634, + 0.695 + ], + "angle": 0, + "content": "5. Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.705, + 0.907, + 0.901 + ], + "angle": 0, + "content": "In this work, we proposed a method to mitigate bias in machine learning models using synthetic data generated by a text-to-image process. By designing contextual synthetic data generation and selective fine-tuning, we enhance model fairness without requiring demographic group annotations. Our model updates selectively fairness-sensitive parameters, optimizing simultaneously model fairness and utility scores. Empirical results demonstrate that our method outperforms existing techniques, improving fairness while maintaining model utility performance. This work highlights the potential of synthetic data for creating fairer AI systems, offering a promising direction for future research in bias mitigation." + }, + { + "type": "page_number", + "bbox": [ + 0.479, + 0.945, + 0.519, + 0.957 + ], + "angle": 0, + "content": "28755" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.092, + 0.091, + 0.27, + 0.108 + ], + "angle": 0, + "content": "6. Acknowledgments" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.115, + 0.484, + 0.206 + ], + "angle": 0, + "content": "This research utilised Queen Mary's Apocrita HPC facility, supported by QMUL Research-IT. Zengqun Zhao is funded by Queen Mary Principal's PhD Studentships. Zengqun Zhao wants to thank Yining Wang and James Oldfield for the valuable comments and help, and Yiming Lin and Jie Shen for the valuable discussions." + }, + { + "type": "title", + "bbox": [ + 0.093, + 0.218, + 0.188, + 0.234 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.244, + 0.484, + 0.285 + ], + "angle": 0, + "content": "[1] Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization. arXiv preprint arXiv:1907.02893, 2019. 1, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.1, + 0.286, + 0.484, + 0.327 + ], + "angle": 0, + "content": "[2] Carolyn Ashurst and Adrian Weller. Fairness without demographic data: A survey of approaches. In EAAMO, pages 1-12, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.328, + 0.483, + 0.396 + ], + "angle": 0, + "content": "[3] Junsong Chen, YU Jincheng, GE Chongjian, Lewei Yao, Enze Xie, Zhongdao Wang, James Kwok, Ping Luo, Huchuan Lu, and Zhenguo Li. Pixart-α: Fast training of diffusion transformer for photorealistic text-to-image synthesis. In ICLR, 2023. 
2, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.398, + 0.483, + 0.439 + ], + "angle": 0, + "content": "[4] Kristy Choi, Aditya Grover, Trisha Singh, Rui Shu, and Stefano Ermon. Fair generative modeling via weak supervision. In ICML, pages 1887-1898, 2020. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.44, + 0.483, + 0.481 + ], + "angle": 0, + "content": "[5] Sam Corbett-Davies, Johann D Gaebler, Hamed Nilforoshan, Ravi Shroff, and Sharad Goel. The measure and mismeasure of fairness. JMLR, 24(1):14730-14846, 2023. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.482, + 0.484, + 0.537 + ], + "angle": 0, + "content": "[6] Elliot Creager, David Madras, Jorn-Henrik Jacobsen, Marissa Weis, Kevin Swersky, Toniann Pitassi, and Richard Zemel. Flexibly fair representation learning by disentangle-ment. In ICML, pages 1436-1445, 2019. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.538, + 0.482, + 0.579 + ], + "angle": 0, + "content": "[7] Elliot Creager, Jorn-Henrik Jacobsen, and Richard Zemel. Environment inference for invariant learning. In ICML, pages 2189-2200, 2021. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.58, + 0.482, + 0.634 + ], + "angle": 0, + "content": "[8] Saloni Dash, Vineeth N Balasubramanian, and Amit Sharma. Evaluating and mitigating bias in image classifiers: A causal perspective using counterfactuals. In WACV, pages 915-924, 2022. 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.636, + 0.482, + 0.69 + ], + "angle": 0, + "content": "[9] Moreno D'Inca, Christos Tzelepis, Ioannis Patras, and Nicu Sebe. Improving fairness using vision-language driven image augmentation. In WACV, pages 4695-4704, 2024. 1, 2, 3, 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.692, + 0.483, + 0.746 + ], + "angle": 0, + "content": "[10] Michele Donini, Luca Oneto, Shai Ben-David, John S Shawe-Taylor, and Massimiliano Pontil. Empirical risk minimization under fairness constraints. NeurIPS, 31, 2018. 1, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.748, + 0.483, + 0.816 + ], + "angle": 0, + "content": "[11] Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.818, + 0.483, + 0.871 + ], + "angle": 0, + "content": "[12] John C Duchi and Hongseok Namkoong. Learning models with uniform performance via distributionally robust optimization. The Annals of Statistics, 49(3):1378-1406, 2021. 1, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.873, + 0.483, + 0.902 + ], + "angle": 0, + "content": "[13] Moritz Hardt, Eric Price, and Nati Srebro. Equality of opportunity in supervised learning. NeurIPS, 29, 2016. 3, 6" + }, + { + "type": "list", + "bbox": [ + 0.094, + 0.244, + 0.484, + 0.902 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.093, + 0.905, + 0.135 + ], + "angle": 0, + "content": "[14] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, pages 770-778, 2016. 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.137, + 0.905, + 0.178 + ], + "angle": 0, + "content": "[15] Jeremy Howard and Sebastian Ruder. Universal language model fine-tuning for text classification. In ACL, pages 328-339, 2018. 
4" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.18, + 0.905, + 0.235 + ], + "angle": 0, + "content": "[16] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. Lora: Low-rank adaptation of large language models. In ICLR, pages 1-13, 2022. 13" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.238, + 0.905, + 0.291 + ], + "angle": 0, + "content": "[17] Jungseock Joo and Kimmo Kärkkäinen. Gender slopes: Counterfactual fairness for computer vision models by attribute manipulation. In FATE/MM, pages 1-5, 2020. 2, 3, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.296, + 0.905, + 0.337 + ], + "angle": 0, + "content": "[18] Sangwon Jung, Sanghyuk Chun, and Taesup Moon. Learning fair classifiers with partially annotated group labels. In CVPR, pages 10348-10357, 2022. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.339, + 0.905, + 0.394 + ], + "angle": 0, + "content": "[19] Gal Kaplun, Andrey Gurevich, Tal Swisa, Mazor David, Shai Shalev-Shwartz, and Eran Malach. Less is more: Selective layer finetuning with subtuning. arXiv preprint arXiv:2302.06354, 2023. 4, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.397, + 0.905, + 0.452 + ], + "angle": 0, + "content": "[20] Adarsh Kappiyath, Abhra Chaudhuri, AJAY KUMAR JAISWAL, Ziquan Liu, Yunpeng Li, Xiatian Zhu, and Lu Yin. Sebra: Debiasing through self-guided bias ranking. In ICLR, pages 1-12, 2025. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.454, + 0.905, + 0.495 + ], + "angle": 0, + "content": "[21] Simon Kornblith, Jonathon Shlens, and Quoc V Le. Do better imagenet models transfer better? In CVPR, pages 2661-2671, 2019. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.498, + 0.905, + 0.552 + ], + "angle": 0, + "content": "[22] Yoonho Lee, Annie S Chen, Fahim Tajwar, Ananya Kumar, Huaxiu Yao, Percy Liang, and Chelsea Finn. Surgical finetuning improves adaptation to distribution shifts. In ICLR, 2022. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.556, + 0.905, + 0.597 + ], + "angle": 0, + "content": "[23] Daniel Levy, Yair Carmon, John C Duchi, and Aaron Sidford. Large-scale methods for distributionally robust optimization. NeurIPS, 33:8847-8860, 2020. 1, 3, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.599, + 0.905, + 0.668 + ], + "angle": 0, + "content": "[24] Evan Z Liu, Behzad Haghgoo, Annie S Chen, Aditi Raghunathan, Pang Wei Koh, Shiori Sagawa, Percy Liang, and Chelsea Finn. Just train twice: Improving group robustness without training group information. In ICML, pages 6781-6792, 2021. 3, 6, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.671, + 0.905, + 0.712 + ], + "angle": 0, + "content": "[25] Lydia T Liu, Max Simchowitz, and Moritz Hardt. The implicit fairness criterion of unconstrained learning. In ICML, pages 4051-4060. PMLR, 2019. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.714, + 0.905, + 0.755 + ], + "angle": 0, + "content": "[26] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaou Tang. Deep learning face attributes in the wild. In ICCV, 2015, 5, 12" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.758, + 0.905, + 0.813 + ], + "angle": 0, + "content": "[27] Ziquan Liu, Yi Xu, Xiangyang Ji, and Antoni B Chan. Twins: A fine-tuning framework for improved transferability of adversarial robustness and generalization. In CVPR, pages 16436-16446, 2023. 
4" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.816, + 0.905, + 0.857 + ], + "angle": 0, + "content": "[28] Romain Lopez, Jeffrey Regier, Michael I Jordan, and Nir Yosef. Information constraints on auto-encoding variational bayes. NeurIPS, 31, 2018. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.859, + 0.905, + 0.901 + ], + "angle": 0, + "content": "[29] David Madras, Elliot Creager, Toniann Pitassi, and Richard Zemel. Learning adversarially fair and transferable representations. In ICML, pages 3384-3393, 2018. 2" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.093, + 0.905, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.945, + 0.52, + 0.957 + ], + "angle": 0, + "content": "28756" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.093, + 0.482, + 0.146 + ], + "angle": 0, + "content": "[30] Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. A survey on bias and fairness in machine learning. ACM Computing Surveys, 54(6):1-35, 2021. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.149, + 0.482, + 0.177 + ], + "angle": 0, + "content": "[31] Sachit Menon and Carl Vondrick. Visual classification via description from large language models. In ICLR, 2022. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.179, + 0.482, + 0.219 + ], + "angle": 0, + "content": "[32] Junhyun Nam, Hyuntak Cha, Sungsoo Ahn, Jaeho Lee, and Jinwoo Shin. Learning from failure: De-biasing classifier from biased classifier. NeurlPS, 33:20673-20684, 2020. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.221, + 0.482, + 0.289 + ], + "angle": 0, + "content": "[33] James Oldfield, Markos Georgopoulos, Grigorios Chrysos, Christos Tzelepis, Yannis Panagakis, Mihalis Nicolaou, Jiankang Deng, and Ioannis Patras. Multilinear mixture of experts: Scalable expert specialization through factorization. NeurIPS, 37:53022-53063, 2025. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.291, + 0.482, + 0.347 + ], + "angle": 0, + "content": "[34] Sungho Park, Sunhee Hwang, Dohyung Kim, and Hyeran Byun. Learning disentangled representation for fair facial attribute classification via fairness-aware information alignment. In AAAI, pages 2403-2411, 2021. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.349, + 0.482, + 0.403 + ], + "angle": 0, + "content": "[35] Sungho Park, Jewook Lee, Pilhyeon Lee, Sunhee Hwang, Dohyung Kim, and Hyeran Byun. Fair contrastive learning for facial attribute classification. In CVPR, pages 10389-10398, 2022. 5, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.405, + 0.482, + 0.459 + ], + "angle": 0, + "content": "[36] Momchil Peychev, Anian Ruoss, Mislav Balunovic, Maximilian Baader, and Martin Vechev. Latent space smoothing for individually fair representations. In ECCV, pages 535-554, 2022. 2, 3, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.462, + 0.482, + 0.503 + ], + "angle": 0, + "content": "[37] Mohammad Pezeshki, Diane Bouchacourt, Mark Ibrahim, Nicolas Ballas, Pascal Vincent, and David Lopez-Paz. Discovering environments with xrm. In ICML, 2024. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.505, + 0.482, + 0.545 + ], + "angle": 0, + "content": "[38] Geoff Pleiss, Manish Raghavan, Felix Wu, Jon Kleinberg, and Kilian Q Weinberger. On fairness and calibration. NeurIPS, 30, 2017. 
3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.548, + 0.482, + 0.616 + ], + "angle": 0, + "content": "[39] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In ICML, pages 8748-8763, 2021. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.618, + 0.482, + 0.659 + ], + "angle": 0, + "content": "[40] Vikram V Ramaswamy, Sunnie SY Kim, and Olga Russakovsky. Fair attribute classification through latent space de-biasing. In CVPR, pages 9301-9310, 2021. 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.661, + 0.482, + 0.716 + ], + "angle": 0, + "content": "[41] Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125, pages 1-27, 2022. 2, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.718, + 0.482, + 0.759 + ], + "angle": 0, + "content": "[42] Youngmin Ro and Jin Young Choi. Autolr: Layer-wise pruning and auto-tuning of learning rates in fine-tuning of deep networks. In AAAI, pages 2486-2494, 2021. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.761, + 0.482, + 0.815 + ], + "angle": 0, + "content": "[43] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In CVPR, pages 10684-10695, 2022. 2, 3, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.817, + 0.482, + 0.858 + ], + "angle": 0, + "content": "[44] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In MICCAI, pages 234-241, 2015. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.86, + 0.482, + 0.901 + ], + "angle": 0, + "content": "[45] Chitwan Sahara, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans," + }, + { + "type": "list", + "bbox": [ + 0.093, + 0.093, + 0.482, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.545, + 0.093, + 0.905, + 0.134 + ], + "angle": 0, + "content": "et al. Photorealistic text-to-image diffusion models with deep language understanding. In NeurIPS, pages 36479-36494, 2022. 2, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.137, + 0.905, + 0.206 + ], + "angle": 0, + "content": "[46] Chitwan Sahara, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al. Photorealistic text-to-image diffusion models with deep language understanding. NeurIPS, 35:36479-36494, 2022. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.209, + 0.905, + 0.25 + ], + "angle": 0, + "content": "[47] Joonghyuk Shin, Minguk Kang, and Jaesik Park. Fill-up: Balancing long-tailed data with generative models. arXiv preprint arXiv:2306.07200, 2023. 1, 2, 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.253, + 0.905, + 0.307 + ], + "angle": 0, + "content": "[48] Robik Shrestha, Yang Zou, Qiuyu Chen, Zhiheng Li, Yusheng Xie, and Siqi Deng. Fairrag: Fair human generation via fair retrieval augmentation. In CVPR, pages 11996-12005, 2024. 
2, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.311, + 0.905, + 0.365 + ], + "angle": 0, + "content": "[49] Ilia Shumailov, Zakhar Shumaylov, Yiren Zhao, Nicolas Papernot, Ross Anderson, and Yarin Gal. Ai models collapse when trained on recursively generated data. Nature, 631 (8022):755-759, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.369, + 0.905, + 0.424 + ], + "angle": 0, + "content": "[50] Krishnakant Singh, Thanush Navaratnam, Jannik Holmer, Simone Schaub-Meyer, and Stefan Roth. Is synthetic data all we need? benchmarking the robustness of models trained with synthetic images. In CVPR, pages 2505-2515, 2024. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.427, + 0.905, + 0.494 + ], + "angle": 0, + "content": "[51] Tobias Uelwer, Jan Robine, Stefan Sylvius Wagner, Marc Höftmann, Eric Upschulte, Sebastian Konietzny, Maike Behrendt, and Stefan Harmeling. A survey on self-supervised representation learning. arXiv preprint arXiv:2308.11455, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.498, + 0.905, + 0.539 + ], + "angle": 0, + "content": "[52] Mei Wang and Weihong Deng. Mitigating bias in face recognition using skewness-aware reinforcement learning. In CVPR, pages 9322-9331, 2020. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.542, + 0.905, + 0.596 + ], + "angle": 0, + "content": "[53] Yinong Oliver Wang, Younjoon Chung, Chen Henry Wu, and Fernando De la Torre. Domain gap embeddings for generative dataset augmentation. In CVPR, pages 28684-28694, 2024. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.601, + 0.905, + 0.655 + ], + "angle": 0, + "content": "[54] Zeyu Wang, Klint Qinami, Ioannis Christos Karakozis, Kyle Genova, Prem Nair, Kenji Hata, and Olga Russakovsky. Towards fairness in visual recognition: Effective strategies for bias mitigation. In CVPR, pages 8919-8928, 2020. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.658, + 0.905, + 0.728 + ], + "angle": 0, + "content": "[55] Yuyang Xue, Junyu Yan, Raman Dutt, Fasih Haider, Jingshuai Liu, Steven McDonagh, and Sotirios A Tsaftaris. Bmft: Achieving fairness via bias-based weight masking fine-tuning. In MICCAI Workshop on Fairness of AI in Medical Imaging, pages 98-108, 2024. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.73, + 0.905, + 0.785 + ], + "angle": 0, + "content": "[56] Moon Ye-Bin, Nam Hyeon-Woo, Wonseok Choi, Nayeong Kim, Suha Kwak, and Tae-Hyun Oh. Exploiting synthetic data for data imbalance problems: Baselines from a data perspective. arXiv preprint arXiv:2308.00994, 2023. 1, 4, 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.788, + 0.905, + 0.842 + ], + "angle": 0, + "content": "[57] Cheng Zhang, Xuanbai Chen, Siqi Chai, Chen Henry Wu, Dmitry Lagun, Thabo Beeler, and Fernando De la Torre. Iti-gen: Inclusive text-to-image generation. In ICCV, pages 3969-3980, 2023. 2, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.846, + 0.905, + 0.899 + ], + "angle": 0, + "content": "[58] Fengda Zhang, Kun Kuang, Long Chen, Yuxuan Liu, Chao Wu, and Jun Xiao. Fairness-aware contrastive learning with partially annotated sensitive attributes. In ICLR, 2023. 
3, 5, 6" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.093, + 0.905, + 0.899 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.946, + 0.519, + 0.957 + ], + "angle": 0, + "content": "28757" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.093, + 0.482, + 0.16 + ], + "angle": 0, + "content": "[59] Fengda Zhang, Qianpei He, Kun Kuang, Jiashuo Liu, Long Chen, Chao Wu, Jun Xiao, and Hanwang Zhang. Distributionally generative augmentation for fair facial attribute classification. In CVPR, pages 22797-22808, 2024. 2, 3, 4, 5, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.164, + 0.482, + 0.205 + ], + "angle": 0, + "content": "[60] Zhifei Zhang, Yang Song, and Hairong Qi. Age progression/regression by conditional adversarial autoencoder. In CVPR, 2017. 6, 12" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.207, + 0.482, + 0.248 + ], + "angle": 0, + "content": "[61] Qihao Zhao, Yalun Dai, Hao Li, Wei Hu, Fan Zhang, and Jun Liu. Ltgc: Long-tail recognition via leveraging llms-driven generated content. In CVPR, pages 19510-19520, 2024. 13" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.25, + 0.482, + 0.29 + ], + "angle": 0, + "content": "[62] Zengqun Zhao and Ioannis Patras. Prompting visual-language models for dynamic facial expression recognition. In BMVC, pages 1-14, 2023. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.292, + 0.482, + 0.346 + ], + "angle": 0, + "content": "[63] Xiao Zhou, Yong Lin, Renjie Pi, Weizhong Zhang, Renzhe Xu, Peng Cui, and Tong Zhang. Model agnostic sample reweighting for out-of-distribution learning. In ICML, pages 27203-27221, 2022. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.349, + 0.482, + 0.389 + ], + "angle": 0, + "content": "[64] Yongchao Zhou, Hshmat Sahak, and Jimmy Ba. Using synthetic data for data augmentation to improve classification accuracy. In ICML, 2023. 2, 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.391, + 0.482, + 0.446 + ], + "angle": 0, + "content": "[65] Fuzhen Zhuang, Zhiyuan Qi, Keyu Duan, Dongbo Xi, Yongchun Zhu, Hengshu Zhu, Hui Xiong, and Qing He. A comprehensive survey on transfer learning. Proceedings of the IEEE, 109(1):43-76, 2020. 
4" + }, + { + "type": "list", + "bbox": [ + 0.093, + 0.093, + 0.482, + 0.446 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.48, + 0.946, + 0.519, + 0.957 + ], + "angle": 0, + "content": "28758" + } + ] +] \ No newline at end of file diff --git a/2025/AIM-Fair_ Advancing Algorithmic Fairness via Selectively Fine-Tuning Biased Models with Contextual Synthetic Data/e3f123cb-93dd-4a70-bf37-7825bb88c80c_origin.pdf b/2025/AIM-Fair_ Advancing Algorithmic Fairness via Selectively Fine-Tuning Biased Models with Contextual Synthetic Data/e3f123cb-93dd-4a70-bf37-7825bb88c80c_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..fc5ad91364dae145d08400ac3f59cc404817d9f7 --- /dev/null +++ b/2025/AIM-Fair_ Advancing Algorithmic Fairness via Selectively Fine-Tuning Biased Models with Contextual Synthetic Data/e3f123cb-93dd-4a70-bf37-7825bb88c80c_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:46c834d144b767230a6d8fd59353fef9b3eff03a2bf6c82038160194f6fbc0a8 +size 1298040 diff --git a/2025/AIM-Fair_ Advancing Algorithmic Fairness via Selectively Fine-Tuning Biased Models with Contextual Synthetic Data/full.md b/2025/AIM-Fair_ Advancing Algorithmic Fairness via Selectively Fine-Tuning Biased Models with Contextual Synthetic Data/full.md new file mode 100644 index 0000000000000000000000000000000000000000..bc8fbc35361f2caa7a5308232e04f74fc397e789 --- /dev/null +++ b/2025/AIM-Fair_ Advancing Algorithmic Fairness via Selectively Fine-Tuning Biased Models with Contextual Synthetic Data/full.md @@ -0,0 +1,312 @@ +# AIM-Fair: Advancing Algorithmic Fairness via Selectively Fine-Tuning Biased Models with Contextual Synthetic Data + +Zengqun Zhao Ziquan Liu Yu Cao Shaogang Gong Ioannis Patras Centre for Multimodal AI, Queen Mary University of London + +{zengqun.zhao, ziquan.liu, yu.cao, s.gong, i.patras}@qmul.ac.uk + +# Abstract + +Recent advances in generative models have sparked research on improving model fairness with AI-generated data. However, existing methods often face limitations in the diversity and quality of synthetic data, leading to compromised fairness and overall model accuracy. Moreover, many approaches rely on the availability of demographic group labels, which are often costly to annotate. This paper proposes AIM-Fair, aiming to overcome these limitations and harness the potential of cutting-edge generative models in promoting algorithmic fairness. We investigate a fine-tuning paradigm starting from a biased model initially trained on real-world data without demographic annotations. This model is then fine-tuned using unbiased synthetic data generated by a state-of-the-art diffusion model to improve its fairness. Two key challenges are identified in this fine-tuning paradigm, 1) the low quality of synthetic data, which can still happen even with advanced generative models, and 2) the domain and bias gap between real and synthetic data. To address the limitation of synthetic data quality, we propose Contextual Synthetic Data Generation (CSDG) to generate data using a text-to-image diffusion model (T2I) with prompts generated by a context-aware LLM, ensuring both data diversity and control of bias in synthetic data. To resolve domain and bias shifts, we introduce a novel selective fine-tuning scheme in which only model parameters more sensitive to bias and less sensitive to domain shift are updated. 
Experiments on CelebA and UTKFace datasets show that our AIM-Fair improves model fairness while maintaining utility, outperforming both fully and partially fine-tuned approaches to model fairness. The code is available at https://github.com/zengqunzhao/AIM-Fair.

# 1. Introduction

Recent research has raised significant concerns about fairness and bias in machine learning models [30]. These models often demonstrate varying performance across different demographic groups, leading to unfair outcomes. To mitigate the spurious correlations caused by learning from imbalanced (biased) data, regularizations have been used as optimization objectives [1, 10, 12, 23]. Methods such as distributionally robust optimization (DRO) [12] optimize the worst-case performance, while invariant risk minimization (IRM) [1] learns unbiased representations that are invariant to different environments. Influenced by the success of representation learning [51], attempts were made to learn fair feature representations invariant to protected facial attributes [6, 29, 34, 54]. However, these methods rely on demographic group labels, which are often unavailable in practice [2, 4].

![](images/f8d0530fa245105e65d5f0c4b739b0f8b3dea27cba27db507c87aa7fbc563021.jpg)
Figure 1. Facial attribute classification (FAC) on the CelebA dataset under different training strategies, in which Smiling is the target attribute and Gender is the protected attribute. Different learning strategies result in variable model utility (Overall Accuracy) vs. model fairness (Worst Group Accuracy and Equalized Odds) across demographic groups. A model trained solely on biased real data exhibits high accuracy but poor fairness scores. Conversely, models trained on balanced synthetic data show better fairness but poorer accuracy due to a "domain gap" between the real and synthetic data, and a lack of selective model fine-tuning when synthetic data is deployed. Strategies that repair imbalances in real data [9, 56] or supplement real data with synthetic data [47] marginally increase accuracy but do little to improve fairness. Our method for selective fine-tuning of a pre-trained (biased) model with synthetic data not only preserves model accuracy but also substantially improves fairness, outperforming full fine-tuning (FFT) in both model utility and fairness.

![](images/968ef6fce9fb3dfd9ee650e2f55496e5ce680ace7aa301e85fbf4cca5b71cf7c.jpg)
Figure 2. The effects of selective fine-tuning by layer-wise freezing for facial attribute classification on the CelebA dataset, with Smiling as the target attribute and Male as the protected attribute. When only the fully connected (FC) layer is frozen, the model shows improved worst demographic group accuracy and reduced equalized odds, i.e. it is more fair, but sacrifices some utility (overall accuracy). This indicates increased cross-domain generalisability (real and synthetic). Conversely, freezing only block 2 while fine-tuning the remaining parameters results in high overall accuracy but poorer fairness, i.e. further enhanced domain-bias specificity. When only block 1 is frozen, the model not only maintains equalized odds but also increases utility (overall accuracy) and worst-group accuracy.

![](images/0ee6f775f8d3b9eaeae3aaa12f11590e01bdf33b2ec7a9f98e8e538dd8a10a16.jpg)

![](images/3365cae16dacab792e323a67948865d1373f9eaaeba542387d0e645ff9fca91d.jpg)
More recent advances in generative models have utilized generative augmentation for a more balanced training data distribution [8, 9, 17, 36, 40, 59]. These techniques primarily focus on image editing, either on real data [36, 40] or synthetic data [9], to create less biased training data. Yet, these methods still require additional group annotations of the data they edit. To address this limitation, DiGA [59] proposed to create an unbiased training set by editing spurious facial attributes to a random degree while keeping a target facial attribute unchanged, without knowing each sample's group label. However, DiGA increases the training data size multiple times, leading to substantially higher training costs, and the quality of the generated data is limited because the group labels are unknown.

Recent advancements in text-to-image generative models showcase impressive data fidelity [43], yet their potential for improving fairness through data expansion has not been fully explored by the fairness community. This raises the question: can AI-generated synthetic data play a crucial role in mitigating biases within machine learning models? This research question is particularly important given the successful application of AI-generated data in fine-tuning large language models (LLMs) [11], data augmentation [64], and long-tail recognition [47], alongside some reported limitations of synthetic data effectiveness [49]. This work presents a comprehensive empirical investigation into whether fine-tuning on high-quality, balanced generative data from a contemporary text-to-image model can counteract model biases caused by training on imbalanced real data. We identify two key challenges in bias-correcting fine-tuning with synthetic data: (1) a data-related challenge arising from linguistic ambiguity of the textual prompt and/or model misrepresentation [48, 57], which results in low-quality and low-diversity generated data; and (2) a model learning challenge caused by both a domain shift (synthetic vs. real) and a bias shift (unbiased vs. biased) between the real and the synthetic data. Fine-tuning blindly on the synthetic data will result in a model with decreased utility.

To address the data quality issue, we propose Contextual Synthetic Data Generation (CSDG). Existing Latent Diffusion Models (LDMs) [3, 36, 41, 45] often use vision-language models, such as CLIP, for cross-modality alignment. To yield more diverse and fine-grained image generation, we formulate a contextual synthetic data generation strategy that uses more detailed text descriptions, drawn from the expansive linguistic expressions of LLMs, to condition an LDM with richer, context-driven text. As a result, the generated images cover more scenarios and provide more details, enhancing diversity and mitigating bias in real data. In contrast to other methods that edit or manipulate real data samples, a key strength of our method is that it does not require annotating the protected group attributes of real data.

The model learning challenge is due to the existence of two types of shifts between the generated and the real data, namely, the desired bias shift and the undesired domain shift (image quality, realism, scene characteristics). To address this challenge, we propose to locate and update model parameters that are more sensitive to the bias shift and less sensitive to the domain shift.
Our observation is that some parameters are more sensitive to data distribution shift (cross-domain generalisability), which we call domain-sensitive parameters, while others are more sensitive to demographic group discrepancies (real-data domain-bias specificity), which we call fairness-sensitive parameters. This observation is supported by the experiments shown in Fig. 2, obtained by fine-tuning a model pre-trained on real data using balanced synthetic data while keeping the parameters at different layers frozen. To identify which parameters to update, we propose a novel selection scheme based on the gradient differences between the updates produced by the real (biased) dataset and by two synthetic datasets, one biased and one unbiased. Ranking the gradient differences between the synthetic-biased data and the synthetic-unbiased data reveals parameters sensitive to fairness (fairness-sensitive parameters). In contrast, inversely ranking the gradient differences between the real data and the synthetic-biased data discovers parameters less sensitive to domain shift (domain-insensitive parameters). In fine-tuning, a selection mask is constructed as the intersection of the top-k rankings. This selection mask is applied to initialize the pre-trained model, so that only the selected parameters are updated using the balanced synthetic data.

![](images/8860308feb1162fc43c47e04d965954e07a35434eeed64d6650d343d90084e9c.jpg)
Figure 3. A selective fine-tuning model consisting of three parts: (1) Contextual Synthetic Data Generation (CSDG) for generating diverse images using GPT-4-generated prompts, (2) Selective Mask Generation (SMG) for creating a selection mask that determines which parameters are updated during fine-tuning, and (3) Selective Fine-Tuning (SFT) to enhance the model fairness obtained from synthetic data while simultaneously preserving the model utility yielded from real data in pre-training.

Fig. 1 presents a comparison of the results across various training strategies using real and/or synthetic data. Fig. 3 shows an overview of our model, which consists of three parts: Contextual Synthetic Data Generation (CSDG), Selective Mask Generation (SMG), and Selective Fine-Tuning (SFT). CSDG uses a pre-trained and fixed LDM to generate high-quality images, where a series of contextual prompts serve as the conditions. SMG creates a selection mask that determines which parameters are updated during fine-tuning. SFT is designed to correct bias in the model. We initialize a pre-trained model using the parameter selection mask. Then the model is fine-tuned on balanced synthetic data to enhance its fairness while retaining the model utility. Our contributions are as follows:

- We investigate a fine-tuning paradigm that mitigates model bias stemming from unbalanced real data using synthetic data generated by a text-to-image (T2I) process, without requiring demographic group annotations.
- We design contextual synthetic data generation using a T2I diffusion model with prompts generated by a context-aware LLM, ensuring both data diversity and control of bias in synthetic data.
- We introduce a selective fine-tuning method for fair model learning, which identifies domain-sensitive and fairness-sensitive parameters for improving model fairness and utility simultaneously.
- Our method outperforms both full and partial fine-tuning methods and achieves superior performance compared to state-of-the-art methods across several datasets.

# 2. Related Work

Mitigating Model Bias.
Current methods for model fairness can be categorized into three types: pre-processing, in-processing, and post-processing. Pre-processing approaches modify the sample distributions of protected variables or perform transformations to remove discrimination from the data. Many recent works use generative models to create balanced, unbiased datasets [36, 40, 58]. In-processing methods incorporate fairness metrics into model optimization to maximize both performance and fairness [1, 10, 12, 20, 23, 55]. Some studies explore mitigating bias without group annotations, such as Just Train Twice (JTT) [24], which upweights misclassified examples to improve worst-group performance, and Cross-Risk Minimization (XRM) [37], which trains twin classifiers to reduce spurious correlations. Post-processing methods apply transformations to model outputs to improve fairness, such as model calibration [25, 33, 38] and thresholding [5, 13] to align predicted positive outcomes with actual positive examples across groups. Our method falls within the pre-processing paradigm, but instead of simply balancing real data with synthetic data, we focus on using synthetic data to enhance model fairness.

Improving Model Fairness on Generative Data. Generative models have advanced rapidly in recent years [43, 46], with several studies exploring their use to improve model fairness through generative data [9, 17, 59]. Much of the prior research has focused on generating counterfactual samples to assess fairness [8, 17], and generative methods have also made strides in bias mitigation by creating balanced, unbiased datasets [36, 40, 58]. Instead of generating counterfactual samples based on real data, D'Inca et al. [9] use a diffusion model for uncontrolled image generation, followed by manipulation of the synthetic images in a semantic space. However, these approaches require additional group annotations. Zhang et al. [59] use generative models to identify spurious attributes that may bias the model, then edit each image to modify these attributes. They train a fair FAC model on the augmented dataset, enhancing its invariance to these spurious variations. While this approach does not require protected group labels for training the FAC model, annotations are still needed to train the generative models that detect and edit spurious attributes. Additionally, generating accurate counterfactual images remains challenging, as the edits are applied randomly to images without known protected attributes. In contrast, our method uses generative data from a text-to-image LDM, enabling greater control over the diversity of the generated data and allowing a broader range of scenarios and variations.

Model Fine-Tuning. A common approach to transfer learning in the presence of distribution shifts is to fine-tune the last few layers of a pre-trained model, retaining the learned features while adapting to the new task [27, 65]. To prevent overfitting during this fine-tuning process, existing methods suggest using a smaller learning rate compared to the initial pre-training phase [21], freezing the early backbone layers and gradually unfreezing them [15], or applying different learning rates to each layer [42]. Lee et al. [22] introduced surgical fine-tuning, which showed that selectively fine-tuning a subset of layers can either match or outperform conventional fine-tuning strategies.
Recent research [19] indicates that training a carefully selected subset of layers while keeping the remaining weights frozen at their initial values reveals that different layers across the network contribute differently to overall performance. Our proposed method inherits some common findings from these works, but we further investigate parameter-wise selective fine-tuning, which characterizes each parameter's sensitivity to properties of the pre-training data distribution and of the downstream data distribution.

# 3. Methods

This section first introduces how to generate contextual synthetic data with LLM-generated prompts. Then, we detail how to obtain the parameter selection mask. Finally, we present how to conduct fair model fine-tuning with a selection mask on balanced synthetic data. We provide the algorithm for the selective fine-tuning in the Appendix.

# 3.1. Contextual Synthetic Data Generation

Our method attempts to correct the biased model trained on imbalanced real data by fine-tuning it on balanced synthetic data, so the quality of the synthetic data is crucial to the success of fair learning. Although current text-to-image models have demonstrated remarkable performance, several studies indicate that directly expressing the desired attributes in the prompt often results in sub-optimal outcomes due to linguistic ambiguity or model misrepresentation [48, 57]. For example, as shown in Fig. 4, to generate a face photo with both target and protected attributes, one might use a prompt like "Portrait face photo of a smiling male." as suggested in [56]. However, such manually designed prompts may lead to stereotypical image generation, excluding certain attributes or minority groups, which in turn can introduce bias in other attributes, such as age or hairstyle [57].

![](images/748d899da50ee608b8a1e088410a484573eb1820cb8f563d2335b8625262dfb3.jpg)
Figure 4. Generated images of Smiling Male conditioned on different prompts. Compared to the plain prompt, our contextual prompts enhance the diversity. More on UTKFace in the Appendix.

The recent work in zero-shot [31] and supervised [62] classification suggests that leveraging additional contextual and detailed information can enhance vision-language alignment. Considering that current text-to-image models [3, 36, 41, 45] often use vision-language models, such as CLIP [39], for cross-modality alignment, we propose a contextual synthetic data generation strategy that leverages the powerful linguistic capabilities of large language models, such as GPT-4, to condition an LDM with richer, context-driven text. As a result, the generated images cover more scenarios and provide more details, enhancing diversity and mitigating bias in the characteristics not involved in the target and protected attributes. The structure of the CSDG is shown in the upper right of Fig. 3.

Concretely, the instruction provided to the GPT-4 model follows the format: "{Task}, {Number of Prompts}, {Target Attribute}, {Protected Attribute}, {Other Detailed Descriptions}, and {Prompt Format}". For generating facial photos, for example, the other detailed descriptions cover specific facial features, hair characteristics, eye-related details, and head orientation or angle.
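As a concrete illustration, a minimal sketch of this generation step is given below (assuming the diffusers library; the two example prompts, the output paths, and `num_per_prompt` are illustrative assumptions, not the paper's actual prompts or settings):

```python
# Minimal CSDG sketch: render LLM-written contextual prompts for one
# (target, protected) group with a pre-trained Stable Diffusion model.
import torch
from diffusers import StableDiffusionPipeline

# Illustrative contextual prompts for the "Smiling Male" group; in the
# paper these are produced by GPT-4 from the structured instruction above.
contextual_prompts = [
    "Portrait face photo of a smiling male with short curly hair, "
    "bright eyes, head turned slightly to the left, outdoors at dusk.",
    "Portrait face photo of a smiling male wearing glasses, with thick "
    "eyebrows, a frontal head pose, and soft indoor lighting.",
]

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

num_per_prompt = 4  # illustrative; the paper generates 10,000 images per group
for i, prompt in enumerate(contextual_prompts):
    for j in range(num_per_prompt):
        image = pipe(prompt, guidance_scale=7.5, num_inference_steps=50).images[0]
        image.save(f"synthetic/smiling_male_p{i:02d}_{j:03d}.png")
```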
Our text-to-image generation model is a pre-trained latent diffusion model, Stable Diffusion (SD) [43], which reverses the noise applied to the latent embeddings of images. SD contains a variational autoencoder (VAE) [28], a CLIP text encoder [39], and a U-Net [44]. During inference, the random Gaussian noise $\varepsilon_{t} \sim \mathcal{N}(0,\mathbf{I})$ and the contextual prompt features $c = (w_{1}, w_{2}, \ldots, w_{n})$, encoded via the CLIP text encoder $\Psi(\cdot)$, are fed into the U-Net to condition the denoising process, where $n$ is the number of contextual prompts. We provide the detailed instruction and contextual prompts in the Appendix. The empirical results in Tab. 5 demonstrate that the generated contextual synthetic data mitigate domain shift and improve model fairness.

# 3.2. Selective Mask Generation

As recent studies [50, 53, 64] have noted, synthetic data often fail to align with the real data distribution due to domain shift, suggesting that fine-tuning the pre-trained model directly on the balanced synthetic data may not learn the desired properties effectively. This is because there is both a domain shift (synthetic vs. real) and a bias shift (unbiased vs. biased) between the real and the synthetic data, and fine-tuning under the domain shift will result in a model with decreased utility. The empirical finding shown in Fig. 2 demonstrates that, when fine-tuning a model pre-trained on real data using balanced synthetic data, some parameters are more sensitive to the data distribution shift, called domain-sensitive parameters, while some are more sensitive to group discrepancies, called fairness-sensitive parameters. To discover the parameters' sensitivity under different scenarios, we propose to construct three distinct datasets with different distributions and to elicit the model's responses by calculating parameter-wise gradients.

We aim to fine-tune a pre-trained model $f_{\theta}(x)$ to improve fairness while accounting for domain differences between real and synthetic data; hence, we want to find the parameters which are more sensitive to fairness while less sensitive to domain shift. Specifically, we start by constructing three different datasets: biased real data $\{(x_i^{(R)},y_i^{(R)})\}_{i = 1}^{N_R}\in \mathcal{D}_R$ representing training data with inherent biases, biased synthetic data $\{(x_i^{(S_1)},y_i^{(S_1)})\}_{i = 1}^{N_{S_1}}\in \mathcal{D}_{S_1}$ mirroring the unfairness of the real data, and unbiased synthetic data $\{(x_i^{(S_2)},y_i^{(S_2)})\}_{i = 1}^{N_{S_2}}\in \mathcal{D}_{S_2}$ designed to be fair, mitigating biases. $N_{R}$, $N_{S_{1}}$, and $N_{S_2}$ denote the number of images in the corresponding datasets.
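Because every synthetic image is generated for a known (target, protected) group, $\mathcal{D}_{S_1}$ and $\mathcal{D}_{S_2}$ can be assembled by simple per-group sampling. The sketch below assumes a 90/10 majority/minority split mirroring the real data; the `pools` bookkeeping and the choice of which group pairings form the majority are illustrative assumptions:

```python
import random

# pools[(t, p)] holds generated image paths for target t and protected group p.
def build_synthetic_sets(pools, n_total, bias_ratio=0.9):
    # D_S2: balanced and unbiased -- an equal share from every group.
    per_group = n_total // len(pools)
    d_s2 = [x for pool in pools.values() for x in random.sample(pool, per_group)]

    # D_S1: biased -- mirrors the real data's spurious correlation; here we
    # assume (t=1, p=1) and (t=0, p=0) are the majority groups.
    n_major = int(n_total / 2 * bias_ratio)
    n_minor = n_total // 2 - n_major
    d_s1 = (random.sample(pools[(1, 1)], n_major) +
            random.sample(pools[(0, 0)], n_major) +
            random.sample(pools[(1, 0)], n_minor) +
            random.sample(pools[(0, 1)], n_minor))
    return d_s1, d_s2
```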
We then compute the gradients $\boldsymbol{g}_R$, $\boldsymbol{g}_{S_1}$, and $\boldsymbol{g}_{S_2}$ of the loss function $\mathcal{L}(\theta; x, y)$, which is defined as a binary softmax loss. Rather than considering fine-grained, scalar-wise parameters, we select parameters at the level of weights and biases, denoted by $\theta$, which includes the weights $W^{(l)}$ and biases $b^{(l)}$ of the convolution, batch normalization, and fully connected layers, where $l$ refers to the layer index. Then, for the model parameters $\theta$, on each dataset:

$$
\boldsymbol{g}_{R} = \frac{1}{N_{R}} \sum_{i=1}^{N_{R}} \nabla_{\theta} \mathcal{L}\left(\theta; x_{i}^{(R)}, y_{i}^{(R)}\right) \tag{1}
$$

$$
\boldsymbol{g}_{S_{1}} = \frac{1}{N_{S_{1}}} \sum_{i=1}^{N_{S_{1}}} \nabla_{\theta} \mathcal{L}\left(\theta; x_{i}^{(S_{1})}, y_{i}^{(S_{1})}\right) \tag{2}
$$

$$
\boldsymbol{g}_{S_{2}} = \frac{1}{N_{S_{2}}} \sum_{i=1}^{N_{S_{2}}} \nabla_{\theta} \mathcal{L}\left(\theta; x_{i}^{(S_{2})}, y_{i}^{(S_{2})}\right) \tag{3}
$$

where $\nabla$ is the gradient operator. We then calculate the gradient differences to capture how each parameter behaves across different data distributions. For parameter $\theta_{j}$:

$$
\begin{aligned}
\Delta_{1,j} &= \left| \boldsymbol{g}_{R,j} - \boldsymbol{g}_{S_{1},j} \right| \\
\Delta_{2,j} &= \left| \boldsymbol{g}_{S_{1},j} - \boldsymbol{g}_{S_{2},j} \right|
\end{aligned} \tag{4}
$$

where $\Delta_{1}$ measures a parameter's sensitivity to the domain shift, $\Delta_{2}$ identifies parameters crucial for fairness, and $j$ denotes the parameter index. To obtain the parameters which are less affected by domain differences and more impactful for fairness, we rank $\Delta_{1}$ in ascending order (smaller differences first) and $\Delta_{2}$ in descending order (larger differences first):

$$
R_{1} = \operatorname{argsort}\left(\Delta_{1}\right); \quad R_{2} = \operatorname{argsort}\left(-\Delta_{2}\right) \tag{5}
$$

Then the parameters that are both less sensitive to the domain gap and significant for fairness can be found as the intersection of the top-$k$ entries of the two rankings, $K = K_{1} \cap K_{2}$, where

$$
K_{1} = \left\{\theta_{j} \mid j \in R_{1}[1:k]\right\}, \quad K_{2} = \left\{\theta_{j} \mid j \in R_{2}[1:k]\right\} \tag{6}
$$

Finally, the selection mask is obtained by:

$$
M_{j} = \begin{cases} \text{True}, & \text{if } \theta_{j} \in K \\ \text{False}, & \text{otherwise} \end{cases} \tag{7}
$$

# 3.3. Selective Fine-Tuning on Synthetic Data

Our goal is to fine-tune the pre-trained model $f_{\theta}(x)$ on the balanced synthetic dataset $\mathcal{D}_{S_2}$ following the ERM framework, while only updating the selected parameters as indicated by the mask $M$. Specifically, given the model $f_{\theta}(x)$ pre-trained on the biased real data, the balanced synthetic data $\{(x_i^{(S_2)}, y_i^{(S_2)})\}_{i=1}^{N_{S_2}}$, and the parameter selection mask $M$, during optimization we apply the mask to the gradients so that only the selected parameters are updated:

$$
\theta^{(t+1)} = \theta^{(t)} - \eta \left(M \odot \nabla_{\theta} \mathcal{L}\left(f_{\theta^{(t)}}\left(x_{i}^{(S_{2})}\right), y_{i}^{(S_{2})}\right)\right) \tag{8}
$$

where $\theta^{(t)}$ are the parameters at iteration $t$, $\eta$ is the learning rate, and $\odot$ denotes element-wise multiplication. Notably, once the mask $M$ is provided by SMG, it is applied throughout the entire fine-tuning optimization process.
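The SMG and SFT steps can be condensed into a short PyTorch sketch. The per-tensor reduction of the gradient differences (mean absolute value) and the cross-entropy head are our assumptions; the paper only specifies that selection happens at the level of weight and bias tensors:

```python
# Sketch of SMG (Eqs. 1-7) and SFT (Eq. 8) at weight/bias-tensor granularity.
import torch
import torch.nn.functional as F

def mean_grads(model, loader, device):
    """Dataset-averaged gradient for every parameter tensor (Eqs. 1-3)."""
    sums = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, y in loader:
        model.zero_grad()
        F.cross_entropy(model(x.to(device)), y.to(device)).backward()
        for n, p in model.named_parameters():
            sums[n] += p.grad.detach()
    return {n: g / len(loader) for n, g in sums.items()}

def selection_mask(model, real_dl, syn_biased_dl, syn_balanced_dl, k, device):
    g_r = mean_grads(model, real_dl, device)
    g_s1 = mean_grads(model, syn_biased_dl, device)
    g_s2 = mean_grads(model, syn_balanced_dl, device)
    names = list(g_r)
    # Eq. 4, reduced per tensor by the mean absolute difference (our assumption).
    d1 = torch.stack([(g_r[n] - g_s1[n]).abs().mean() for n in names])   # domain
    d2 = torch.stack([(g_s1[n] - g_s2[n]).abs().mean() for n in names])  # fairness
    k1 = set(d1.argsort()[:k].tolist())                 # ascending: domain-insensitive
    k2 = set(d2.argsort(descending=True)[:k].tolist())  # descending: fairness-sensitive
    keep = k1 & k2                                      # K = K1 ∩ K2 (Eq. 6)
    return {n: (i in keep) for i, n in enumerate(names)}  # mask M (Eq. 7)

def selective_fine_tune(model, balanced_dl, mask, epochs, lr, device):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in balanced_dl:
            opt.zero_grad()
            F.cross_entropy(model(x.to(device)), y.to(device)).backward()
            for n, p in model.named_parameters():
                if not mask[n]:
                    p.grad = None  # Eq. 8: unselected tensors are never updated
            opt.step()
```

Here the mask is a per-tensor boolean dictionary; clearing the gradients of unselected tensors before `opt.step()` realizes the masking of Eq. (8) at tensor granularity.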
# 4. Experiments

This section presents the experimental setup and results. We begin by describing the datasets, followed by the implementation details. The main results and ablation studies are presented at the end.

# 4.1. Datasets

CelebA [26] contains over 200,000 facial images with 40 binary attribute annotations. Following the settings of previous works [35, 58, 59], we set Male and Young as the protected attributes and select Smiling and Young as the target attributes, which have the highest correlation with the protected attributes. For each experiment, we randomly sample a biased subset as a training dataset with a size of 20,000 images, where the majority group and the minority group comprise 90% and 10% of the sample size, respectively.

Table 1. Comparisons to other methods on the CelebA dataset under settings of varied target and protected attributes.
| Methods | T: Smiling, P: Male (ACC ↑ / WST ↑ / EO ↓) | T: Smiling, P: Young (ACC ↑ / WST ↑ / EO ↓) | T: Young, P: Male (ACC ↑ / WST ↑ / EO ↓) |
|---|---|---|---|
| ERM [14] | 88.20 / 70.10 / 25.30 | 88.30 / 71.50 / 15.60 | 77.70 / 42.00 / 52.00 |
| CVaR DRO [23] | 87.30 / 74.00 / 22.80 | 87.00 / 76.10 / 13.90 | 75.40 / 42.30 / 48.80 |
| EIIL [7] | 87.90 / 75.60 / 19.70 | 87.90 / 72.50 / 13.30 | 77.50 / 45.60 / 39.20 |
| LfF [32] | 87.10 / 77.50 / 17.00 | 85.30 / 72.90 / 14.30 | 77.40 / 44.20 / 43.60 |
| JTT [24] | 88.00 / 74.80 / 19.40 | 87.60 / 73.30 / 14.20 | 76.30 / 43.60 / 47.70 |
| MAPLE [63] | 88.10 / 72.00 / 19.60 | 88.10 / 73.60 / 13.60 | 76.30 / 46.20 / 43.50 |
| DiGA [59] | 88.40 / 81.90 / 7.40 | 89.10 / 78.50 / 9.50 | 80.00 / 51.30 / 33.30 |
| AIM-Fair (Ours) | 89.02 / 84.20 / 6.07 | 90.21 / 87.78 / 5.44 | 78.19 / 54.89 / 28.18 |
We report performance on the whole original test dataset. UTKFace [60] consists of over 20,000 facial images with three kinds of annotations: gender, age, and ethnicity. Following the experimental setup of previous works [17, 18, 59], we define a binary spurious attribute Ethnicity based on whether the facial image is white or not. The task is to predict Gender. We randomly sample a biased subset of 10,000 images with the same bias degree as CelebA. We also construct a balanced and unbiased test dataset consisting of 3,200 images.

# 4.2. Fairness Metrics

Our goal is to learn a fair and accurate model. For model utility, we report the overall accuracy (ACC) and group accuracies. For fairness, following previous works [35, 58, 59], we use equalized odds (EO) [13], defined as:

$$
\overline{\sum}_{\forall y, \hat{y}} \left| P_{s_{0}}(\hat{Y} = \hat{y} \mid Y = y) - P_{s_{1}}(\hat{Y} = \hat{y} \mid Y = y) \right|, \tag{9}
$$

where $\overline{\sum}$ is the averaged sum, $Y$ the target label, $\hat{Y}$ the classifier's predicted label, and $s_0, s_1 \in S$ the protected attributes. The worst-group accuracy (WST) is defined as:

$$
\min_{\forall y \in \mathcal{Y}, \forall s \in S} P_{s}(\hat{Y} = y \mid Y = y) \tag{10}
$$

and the group standard deviation (STD) [52] is defined as:

$$
\operatorname{std}_{\forall y \in \mathcal{Y}, \forall s \in S} P_{s}(\hat{Y} = y \mid Y = y) \tag{11}
$$
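For a binary target and a binary protected attribute, all three metrics follow directly from the group-conditional prediction rates. The sketch below reflects our reading of Eqs. (9)-(11); in particular, averaging the equalized-odds gap over all $(y, \hat{y})$ pairs is an assumption:

```python
# Sketch: EO (Eq. 9), worst-group accuracy (Eq. 10), group STD (Eq. 11)
# for 0/1 numpy arrays y_true, y_pred and protected attribute s.
import numpy as np

def fairness_metrics(y_true, y_pred, s):
    rates = {}       # rates[(g, y, yh)] = P_{s=g}(Yhat = yh | Y = y)
    group_accs = []  # P_{s=g}(Yhat = y | Y = y) for every (g, y) group
    for g in (0, 1):
        for y in (0, 1):
            idx = (s == g) & (y_true == y)
            for yh in (0, 1):
                rates[(g, y, yh)] = float(np.mean(y_pred[idx] == yh))
            group_accs.append(rates[(g, y, y)])
    eo = float(np.mean([abs(rates[(0, y, yh)] - rates[(1, y, yh)])
                        for y in (0, 1) for yh in (0, 1)]))
    return eo, min(group_accs), float(np.std(group_accs))
```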
# 4.3. Implementation Details

Following previous work [59], we use ResNet-18 [14] as the backbone for all experiments. We use Stable Diffusion v2.1 as our latent diffusion model for generating images. We generate 10,000 images for each group and then randomly sample $N_{S}$ of them. To be consistent with the real data image number $N_{R}$, we set $N_{S} = N_{R} / G$, where $G$ is the number of groups. During the pre-training phase, we set the batch size to 128 and train the model for 15 epochs. The initial learning rate is set to 0.01 and is then decayed by a factor of 0.01 at the 10th epoch. During the fine-tuning phase, we set the batch size to 128 and train the model for 10 epochs; the learning rate is searched over {0.4, 0.5, 0.6}. All models are optimized with the SGD optimizer and trained on a Tesla A100 GPU using the open-source PyTorch platform. To obtain more stable and reliable results, we conducted all experiments 10 times with different random seeds and then used the average as the final result.

Table 2. Comparisons to other methods on the CelebA dataset (T=Smiling, P=Male) under settings of training set sizes.

| Methods | Samples Ratio = 50% (ACC ↑ / WST ↑ / EO ↓) | Samples Ratio = 25% (ACC ↑ / WST ↑ / EO ↓) | Samples Ratio = 10% (ACC ↑ / WST ↑ / EO ↓) |
|---|---|---|---|
| ERM [14] | 87.50 / 67.80 / 26.10 | 87.10 / 65.90 / 27.70 | 86.90 / 62.80 / 28.90 |
| CVaR DRO [23] | 86.60 / 72.90 / 22.10 | 86.60 / 72.30 / 22.40 | 85.50 / 69.10 / 27.30 |
| EIIL [7] | 86.20 / 71.30 / 22.50 | 85.90 / 69.60 / 25.40 | 86.80 / 64.20 / 26.70 |
| LfF [32] | 86.90 / 75.50 / 19.40 | 85.90 / 72.10 / 23.60 | 85.50 / 66.10 / 27.70 |
| JTT [24] | 87.30 / 72.90 / 20.10 | 86.70 / 71.10 / 20.60 | 86.80 / 67.10 / 23.10 |
| MAPLE [63] | 87.40 / 73.70 / 23.80 | 87.00 / 72.70 / 24.20 | 85.60 / 69.20 / 27.10 |
| DiGA [59] | 88.40 / 81.10 / 7.80 | 88.40 / 78.30 / 8.00 | 88.30 / 78.80 / 8.40 |
| AIM-Fair (Ours) | 88.85 / 83.99 / 6.28 | 88.89 / 82.54 / 7.59 | 87.90 / 81.73 / 8.16 |
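The pre-training schedule described in Sec. 4.3 maps directly onto a standard PyTorch optimizer with step decay; the snippet below is a configuration sketch rather than the released training script:

```python
# Sketch of the Sec. 4.3 pre-training setup: SGD with lr = 0.01,
# decayed by a factor of 0.01 at epoch 10, for 15 epochs in total.
import torch
from torchvision.models import resnet18

model = resnet18(num_classes=2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[10], gamma=0.01)

for epoch in range(15):
    # ... one epoch over the biased real training set (batch size 128) ...
    scheduler.step()
```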
# 4.4. Comparisons to State of the Art

We compare our method against seven contemporary techniques, including the baseline ERM [14] and six debiasing models, all of which do not require protected attribute labels: two regularization-based methods (CVaR DRO [23] and EIIL [7]); three reweighting-based methods (LfF [32], JTT [24], and MAPLE [63]); and a generative-model-based method (DiGA [59]). Our comparison covers two settings: different target and protected attributes, and varying numbers of target labels.

Tab. 1 shows that ERM achieves good accuracy but suffers from significant unfairness. Other debiasing methods, while improving fairness to some extent, generally sacrifice accuracy. DiGA [59] improves fairness while maintaining accuracy by using a generative model to edit real data and create more balanced training data. In contrast, our method generates images from scratch using a text-driven latent diffusion model. It is evident that our method outperforms the best existing model, DiGA [59], in both fairness and accuracy in all categories, except "Young / Male", where we come a close second to DiGA. We outperform all other methods consistently. We also conducted comparisons with other methods under smaller training set sizes, with subsampling ratios of 50%, 25%, and 10%. Tab. 2 shows that our method outperforms all others consistently, except on the ACC score at a samples ratio of 10%, where we come a close second to DiGA. Critically, our method maintains robust model utility and fairness even with varying amounts of real training data, with these improvements attributed to the selective updating with balanced synthetic data.

# 4.5. Ablation Analysis

Comparisons of Different Training Strategies. To evaluate the effectiveness of the proposed selective fine-tuning, we first compare our method with several other strategies trained on different types of data. These include the baseline, which trains the model using conventional ERM on real data; training the model solely on synthetic data; supplementing real data with synthetic data [47]; and balancing the real data [9, 56]. Furthermore, we compare common fine-tuning methods, such as linear probing and full fine-tuning.

Table 3. Comparisons of varied training strategies on CelebA and UTKFace datasets. Column groups, left to right: CelebA (T: Smiling; P: Male), CelebA (T: Smiling; P: Young), and UTKFace (T: Female; P: White). P=0/P=1 give group accuracies for each target row (T=0 / T=1), while ACC, WST, EO, and STD apply to the whole setting.
| Methods | Target | P=0 | P=1 | ACC (↑) | WST (↑) | EO (↓) | STD (↓) | P=0 | P=1 | ACC (↑) | WST (↑) | EO (↓) | STD (↓) | P=0 | P=1 | ACC (↑) | WST (↑) | EO (↓) | STD (↓) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Baseline [14] | T=0 | 83.57 | 98.50 | 89.23 | 71.52 | 23.84 | 10.65 | 78.72 | 96.09 | 90.19 | 78.72 | 17.37 | 6.92 | 79.58 | 96.16 | 88.86 | 79.58 | 16.59 | 7.26 |
| | T=1 | 95.36 | 71.52 | | | | | 94.37 | 86.42 | | | | | 95.75 | 83.96 | | | | |
| Trained on Synthetic Data | T=0 | 86.82 | 89.40 | 85.98 | 78.52 | 7.87 | 4.07 | 85.45 | 85.76 | 85.12 | 82.42 | 3.50 | 1.34 | 82.71 | 88.12 | 85.44 | 82.71 | 5.50 | 1.96 |
| | T=1 | 86.39 | 78.52 | | | | | 82.42 | 85.27 | | | | | 84.90 | 86.04 | | | | |
| Data Supplementation [47] | T=0 | 84.39 | 98.39 | 89.77 | 73.67 | 21.96 | 9.76 | 78.05 | 95.36 | 90.35 | 78.05 | 17.31 | 7.02 | 82.12 | 93.97 | 90.25 | 82.12 | 11.85 | 5.24 |
| | T=1 | 95.40 | 73.67 | | | | | 94.90 | 87.62 | | | | | 95.62 | 89.28 | | | | |
| Data Repairing [9, 56] | T=0 | 82.82 | 98.10 | 89.72 | 75.05 | 21.18 | 9.50 | 79.50 | 95.19 | 90.68 | 79.50 | 15.69 | 6.30 | 81.15 | 94.22 | 89.42 | 81.15 | 13.37 | 5.89 |
| | T=1 | 96.04 | 75.05 | | | | | 94.51 | 88.35 | | | | | 95.71 | 86.61 | | | | |
| Linear Probing | T=0 | 84.55 | 96.22 | 89.69 | 77.41 | 17.51 | 7.71 | 85.20 | 93.81 | 90.14 | 85.20 | 8.61 | 3.18 | 81.06 | 91.51 | 88.40 | 81.06 | 10.59 | 5.14 |
| | T=1 | 94.82 | 77.41 | | | | | 90.43 | 87.88 | | | | | 94.59 | 86.42 | | | | |
| Full Fine-Tuning | T=0 | 88.81 | 91.62 | 88.77 | 82.74 | 6.83 | 3.31 | 85.98 | 90.06 | 87.85 | 85.98 | 4.37 | 1.68 | 83.00 | 88.04 | 87.90 | 83.00 | 5.40 | 2.99 |
| | T=1 | 89.55 | 82.74 | | | | | 86.02 | 86.81 | | | | | 90.74 | 89.83 | | | | |
| AIM-Fair (Ours) | T=0 | 88.16 | 91.40 | 89.02 | 84.20 | 6.07 | 2.74 | 88.19 | 93.64 | 90.21 | 87.78 | 5.44 | 2.34 | 84.26 | 89.08 | 88.30 | 84.26 | 4.81 | 2.41 |
| | T=1 | 90.25 | 84.20 | | | | | 88.98 | 87.78 | | | | | 90.62 | 89.25 | | | | |
Table 4. Comparisons of varied partial fine-tuning strategies on CelebA and UTKFace datasets. Column groups, left to right: CelebA (T: Smiling; P: Male), CelebA (T: Smiling; P: Young), and UTKFace (T: Female; P: White). P=0/P=1 give group accuracies for each target row (T=0 / T=1), while ACC, WST, EO, and STD apply to the whole setting.
| Methods | Target | P=0 | P=1 | ACC (↑) | WST (↑) | EO (↓) | STD (↓) | P=0 | P=1 | ACC (↑) | WST (↑) | EO (↓) | STD (↓) | P=0 | P=1 | ACC (↑) | WST (↑) | EO (↓) | STD (↓) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Best Random Selection | T=0 | 87.93 | 92.81 | 89.33 | 82.17 | 9.14 | 4.09 | 85.52 | 91.53 | 88.95 | 85.52 | 6.02 | 2.16 | 83.50 | 88.85 | 87.92 | 83.50 | 5.38 | 2.61 |
| | T=1 | 91.31 | 82.17 | | | | | 88.25 | 87.63 | | | | | 90.30 | 89.01 | | | | |
| Best Sub-Tuning [19] (By Updating One Block) | T=0 | 87.50 | 94.75 | 90.04 | 80.64 | 12.45 | 5.52 | 87.28 | 94.63 | 90.33 | 87.28 | 7.34 | 3.00 | 83.05 | 89.56 | 89.01 | 83.05 | 6.51 | 3.56 |
| | T=1 | 93.09 | 80.64 | | | | | 89.15 | 87.32 | | | | | 92.07 | 91.34 | | | | |
| Best Sub-Tuning [19] (By Freezing One Block) | T=0 | 88.69 | 91.09 | 89.06 | 83.48 | 7.00 | 3.00 | 86.55 | 91.89 | 88.65 | 86.55 | 5.34 | 2.25 | 83.58 | 88.34 | 87.42 | 83.58 | 4.76 | 2.23 |
| | T=1 | 90.48 | 83.48 | | | | | 87.08 | 86.55 | | | | | 88.96 | 88.81 | | | | |
| Selective Fine-Tuning (Cosine Similarity) | T=0 | 86.07 | 91.77 | 88.85 | 83.33 | 8.40 | 3.61 | 87.43 | 92.96 | 90.26 | 87.43 | 5.53 | 2.05 | 83.72 | 88.82 | 88.21 | 83.72 | 5.10 | 2.64 |
| | T=1 | 91.53 | 83.33 | | | | | 89.52 | 88.65 | | | | | 90.14 | 90.15 | | | | |
| AIM-Fair (Ours) (Absolute Difference) | T=0 | 88.16 | 91.40 | 89.02 | 84.20 | 6.07 | 2.74 | 88.19 | 93.64 | 90.21 | 87.78 | 5.44 | 2.34 | 84.26 | 89.08 | 88.30 | 84.26 | 4.81 | 2.41 |
| | T=1 | 90.25 | 84.20 | | | | | 88.98 | 87.78 | | | | | 90.62 | 89.25 | | | | |
The results in Tab. 3 indicate that both data supplementation and data repairing do not significantly improve fairness. Data supplementation, which combines biased real data with balanced synthetic data for co-training, makes it difficult for the model to learn fairness properties from the synthetic data because of the domain gap. While data repairing can create balanced training data, the domain shift between real and synthetic data limits the effectiveness of fair learning. Common transfer learning methods, such as linear probing and full fine-tuning, also face challenges due to domain shifts. Specifically, linear probing is affected by discrepancies in feature representations, while full fine-tuning improves fairness at the cost of reduced model utility.

Comparisons of Different Partial Fine-Tuning. We also compare our method with other partial fine-tuning approaches, including random selection and sub-tuning [19]. For random selection, we set the updating ratios at 40%, 55%, 70%, and 85%, and select the best result as the final one. For sub-tuning, we perform block-wise fine-tuning and block-wise freezing (i.e., updating one block or freezing one block while updating the rest), also selecting the block with the best result.

Tab. 4 shows that fine-tuning only one block consistently yields the best accuracy, but at the cost of fairness, while freezing one block tends to improve fairness but often sacrifices model utility. This demonstrates that coarse-grained approaches, such as block-wise updating or freezing, struggle to achieve an optimal balance between model utility and fairness.

Table 5. Classification results on the CelebA dataset (T=Smiling, P=Male) under settings of different prompt types and numbers. Column groups, left to right: T: Smiling; P: Male and T: Smiling; P: Young. P=0/P=1 give group accuracies for each target row (t=0 / t=1), while ACC, WST, EO, and STD apply to the whole setting.
Table 5. Classification results on CelebA dataset (T=Smiling, P=Male) under settings of different prompt types and numbers. Column groups: T: Smiling; P: Male (left) and T: Smiling; P: Young (right).

| Prompt Types (Number) | Target | P=0 | P=1 | ACC (↑) | WST (↑) | EO (↓) | STD (↓) | P=0 | P=1 | ACC (↑) | WST (↑) | EO (↓) | STD (↓) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Plain Prompt (1) | T=0 | 65.18 | 89.44 | 71.65 | 43.28 | 34.20 | 17.07 | 83.10 | 77.72 | 72.75 | 59.53 | 9.30 | 8.96 |
| | T=1 | 77.47 | 43.28 | | | | | 59.53 | 68.83 | | | | |
| Contextual Prompts (25) | T=0 | 77.12 | 89.25 | 81.85 | 69.73 | 16.3 | 7.65 | 76.72 | 67.90 | 79.80 | 67.90 | 8.82 | 8.75 |
| | T=1 | 85.99 | 69.73 | | | | | 85.16 | 91.09 | | | | |
| Contextual Prompts (50) | T=0 | 89.47 | 89.80 | 82.66 | 72.22 | 5.07 | 7.66 | 87.72 | 90.43 | 85.85 | 79.67 | 4.36 | 4.20 |
| | T=1 | 77.26 | 72.22 | | | | | 79.67 | 82.67 | | | | |
| Contextual + Head Poses (50) | T=0 | 86.82 | 89.40 | 85.98 | 78.52 | 7.87 | 4.07 | 85.45 | 85.76 | 85.12 | 82.42 | 3.50 | 1.34 |
| | T=1 | 86.39 | 78.52 | | | | | 82.42 | 85.27 | | | | |
Additionally, our method uses gradient differences to identify parameter sensitivity to varying data distributions. We also compared the absolute gradient difference with the cosine similarity between gradients. The results in Tab. 4 show that using absolute gradient differences yields better results. A possible reason is that cosine similarity only captures the direction of the gradient disparity, whereas the absolute difference directly captures its magnitude.

Comparisons of Different Prompts. To evaluate the performance of the varied prompts used for generating images, we trained the model solely on the balanced synthetic data generated with the corresponding prompts and tested it on the real test set. Tab. 5 shows the results of different prompt types and quantities, indicating that images generated with contextual prompts lead to significantly better performance in both model accuracy and fairness. We attribute this improvement to the increased diversity and detail of the synthetic images generated from contextual prompts. Moreover, as the number of contextual prompts increases, test performance improves as well. In particular, incorporating head pose variation into the prompts further enhances both accuracy and fairness, as shown in the last row of Tab. 5.

![](images/e44f458f7551955a43c17778c446f70cb11207d0428beb2a67bac569ce9b1397.jpg)
Figure 5. Classification results on the CelebA dataset (T=Smiling, P=Male) with different top-k values. The top-k values {40, 45, 50, 55, 60} correspond approximately to $\{64\%, 72\%, 80\%, 88\%, 96\%\}$ of the total model parameters.

Evaluations of Varied Top-k Values. In our method, to identify the intersection parameters from the two gradient difference rankings, we use top-k selection to ensure that the chosen parameters are more sensitive to fairness and less affected by domain shift. The results for different top-k values are shown in Fig. 5. When k is set to 55, our method achieves the best fairness performance while retaining overall accuracy. Intuitively, a lower k results in fewer updates, leading to better model utility but worse fairness, while a higher k involves more parameter updates, which can negatively impact both accuracy and fairness.

Evaluations of Varied Bias Ratios for Synthetic Data Construction. To evaluate the model's sensitivity to domain shift and group disparity, we construct three distinct datasets with different distributions. In our method, the bias ratio of the biased synthetic data distribution is treated as a hyperparameter, set with reference to the error set of real training examples, as suggested in JTT [24]. To assess the impact of varying bias ratios in synthetic data, we explore different ratios. As shown in Tab. 6, a disparity between the synthetic data bias ratio and the real data bias ratio leads to worse performance in both accuracy and fairness. However, despite these differences in bias ratios, our method still outperforms the fully fine-tuned baseline. This demonstrates that our method is able to identify the model's sensitivity even under bias ratios that differ from the real data.
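Putting the pieces above together, the following is a minimal sketch of how the selection mask could be derived from the three datasets: average the gradients on the biased real data, the biased synthetic data, and the balanced synthetic data; score each weight/bias tensor by the absolute gradient difference (the variant that beats cosine similarity in Tab. 4); and intersect the top-k fairness-sensitive ranking with the top-k domain-insensitive (inverse) ranking, with k = 55 as in Fig. 5. Loader and helper names are ours, and the per-tensor mean-absolute reduction is an assumption, not the paper's released code.

```python
import torch

def avg_grads(model, loader, loss_fn):
    """Mean gradient per parameter tensor over a dataset (cf. Eqs. 1-3)."""
    grads = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, y in loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                grads[n] += p.grad / len(loader)
    return grads

def selection_mask(model, real_loader, syn_biased_loader,
                   syn_balanced_loader, loss_fn, k=55):
    g_r  = avg_grads(model, real_loader, loss_fn)           # biased real
    g_s1 = avg_grads(model, syn_biased_loader, loss_fn)     # biased synthetic
    g_s2 = avg_grads(model, syn_balanced_loader, loss_fn)   # balanced synthetic

    # One scalar score per weight/bias tensor: mean absolute difference.
    fair   = {n: (g_s1[n] - g_s2[n]).abs().mean().item() for n in g_r}
    domain = {n: (g_r[n] - g_s1[n]).abs().mean().item() for n in g_r}

    # Top-k most fairness-sensitive tensors (large bias-shift response),
    # intersected with the top-k most domain-insensitive ones (small
    # domain-shift response, i.e. the inverse ranking of `domain`).
    top_fair   = sorted(fair, key=fair.get, reverse=True)[:k]
    top_stable = sorted(domain, key=domain.get)[:k]
    return set(top_fair) & set(top_stable)
```

The returned name set plugs directly into the `selective_finetune_step` sketch shown earlier.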
Table 6. Classification results on CelebA dataset (T=Smiling, P=Male) with varied bias ratios of the biased synthetic data.

| Bias Ratio | Target | P=0 | P=1 | ACC (↑) | WST (↑) | EOD (↓) | STD (↓) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Fully Fine-Tuning | T=0 | 88.81 | 91.62 | 88.77 | 82.74 | 6.83 | 3.31 |
| | T=1 | 89.55 | 82.74 | | | | |
| 4:6 | T=0 | 85.72 | 91.71 | 89.23 | 83.90 | 8.78 | 3.77 |
| | T=1 | 92.69 | 83.90 | | | | |
| 3:7 | T=0 | 87.05 | 91.62 | 88.90 | 83.52 | 7.72 | 3.26 |
| | T=1 | 90.93 | 83.52 | | | | |
| 2:8 | T=0 | 88.01 | 91.76 | 88.97 | 83.16 | 7.30 | 3.28 |
| | T=1 | 90.46 | 83.16 | | | | |
| 1:9 | T=0 | 88.16 | 91.40 | 89.02 | 84.20 | 6.07 | 2.74 |
| | T=1 | 90.25 | 84.20 | | | | |
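For concreteness, below is a small sketch of how a biased synthetic subset with a given ratio (e.g., the 1:9 row above) might be assembled by subsampling a balanced synthetic pool of (image, target, protected) triples; the pool format, the direction of the spurious correlation, and the helper itself are our assumptions for illustration only.

```python
import random

def make_biased_subset(pool, ratio=(1, 9), per_class=1000, seed=0):
    """Subsample (image, target, protected) triples so that, within each
    target class, the protected groups appear in the given ratio.
    Assumes the pool holds enough samples of every group."""
    rng = random.Random(seed)
    minority, majority = ratio
    subset = []
    for t in (0, 1):
        # Flip which protected group is the minority across target classes,
        # mimicking the spurious correlation in the biased real data.
        counts = {0: minority, 1: majority} if t == 0 else {0: majority, 1: minority}
        for p, share in counts.items():
            items = [s for s in pool if s[1] == t and s[2] == p]
            n = per_class * share // (minority + majority)
            subset += rng.sample(items, n)
    rng.shuffle(subset)
    return subset
```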
Table 7. Classification results on CelebA dataset (T=Smiling, P=Male) with different numbers of synthetic data.

| Ratio to Real Data | Target | P=0 | P=1 | ACC (↑) | WST (↑) | EOD (↓) | STD (↓) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0.5 | T=0 | 88.65 | 93.96 | 89.51 | 80.80 | 10.31 | 4.90 |
| | T=1 | 91.10 | 80.80 | | | | |
| 1 | T=0 | 88.16 | 91.40 | 89.02 | 84.20 | 6.07 | 2.74 |
| | T=1 | 90.25 | 84.20 | | | | |
| 1.5 | T=0 | 89.18 | 92.05 | 88.34 | 81.25 | 7.13 | 3.98 |
| | T=1 | 88.38 | 81.25 | | | | |
| 2 | T=0 | 89.01 | 91.26 | 88.81 | 83.36 | 6.08 | 2.96 |
| | T=1 | 91.26 | 89.44 | | | | |
Evaluations of Different Numbers of Synthetic Data. We also evaluate fine-tuning with varying amounts of balanced synthetic data. As shown in Tab. 7, we use the real training data count as a reference and set different ratios to determine the number of synthetic samples. The results indicate that when the amount of synthetic data matches that of the real data, the model achieves the best fairness, while using half the amount of the real data yields the best accuracy. We believe that using too much synthetic data during fine-tuning can lead to overfitting, whereas using too little may fail to adequately debias the model.

# 5. Conclusion

In this work, we proposed a method to mitigate bias in machine learning models using synthetic data generated by a text-to-image process. By designing contextual synthetic data generation and selective fine-tuning, we enhance model fairness without requiring demographic group annotations. Our model selectively updates fairness-sensitive parameters, simultaneously optimizing model fairness and utility. Empirical results demonstrate that our method outperforms existing techniques, improving fairness while maintaining model utility. This work highlights the potential of synthetic data for creating fairer AI systems, offering a promising direction for future research in bias mitigation.

# 6. Acknowledgments

This research utilised Queen Mary's Apocrita HPC facility, supported by QMUL Research-IT. Zengqun Zhao is funded by Queen Mary Principal's PhD Studentships. Zengqun Zhao would like to thank Yining Wang and James Oldfield for their valuable comments and help, and Yiming Lin and Jie Shen for the valuable discussions.

# References

[1] Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization. arXiv preprint arXiv:1907.02893, 2019. 1, 3
[2] Carolyn Ashurst and Adrian Weller. Fairness without demographic data: A survey of approaches. In EAAMO, pages 1-12, 2023. 2
[3] Junsong Chen, Jincheng Yu, Chongjian Ge, Lewei Yao, Enze Xie, Zhongdao Wang, James Kwok, Ping Luo, Huchuan Lu, and Zhenguo Li. PixArt-α: Fast training of diffusion transformer for photorealistic text-to-image synthesis. In ICLR, 2023. 2, 4
[4] Kristy Choi, Aditya Grover, Trisha Singh, Rui Shu, and Stefano Ermon. Fair generative modeling via weak supervision. In ICML, pages 1887-1898, 2020. 2
[5] Sam Corbett-Davies, Johann D Gaebler, Hamed Nilforoshan, Ravi Shroff, and Sharad Goel. The measure and mismeasure of fairness. JMLR, 24(1):14730-14846, 2023. 3
[6] Elliot Creager, David Madras, Jörn-Henrik Jacobsen, Marissa Weis, Kevin Swersky, Toniann Pitassi, and Richard Zemel. Flexibly fair representation learning by disentanglement. In ICML, pages 1436-1445, 2019. 2
[7] Elliot Creager, Jörn-Henrik Jacobsen, and Richard Zemel. Environment inference for invariant learning. In ICML, pages 2189-2200, 2021. 6
[8] Saloni Dash, Vineeth N Balasubramanian, and Amit Sharma. Evaluating and mitigating bias in image classifiers: A causal perspective using counterfactuals. In WACV, pages 915-924, 2022. 2, 3
[9] Moreno D'Inca, Christos Tzelepis, Ioannis Patras, and Nicu Sebe. Improving fairness using vision-language driven image augmentation. In WACV, pages 4695-4704, 2024. 1, 2, 3, 6, 7
[10] Michele Donini, Luca Oneto, Shai Ben-David, John S Shawe-Taylor, and Massimiliano Pontil. Empirical risk minimization under fairness constraints.
NeurIPS, 31, 2018. 1, 3
[11] Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The Llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024. 2
[12] John C Duchi and Hongseok Namkoong. Learning models with uniform performance via distributionally robust optimization. The Annals of Statistics, 49(3):1378-1406, 2021. 1, 3
[13] Moritz Hardt, Eric Price, and Nati Srebro. Equality of opportunity in supervised learning. NeurIPS, 29, 2016. 3, 6
[14] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, pages 770-778, 2016. 6, 7
[15] Jeremy Howard and Sebastian Ruder. Universal language model fine-tuning for text classification. In ACL, pages 328-339, 2018. 4
[16] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. LoRA: Low-rank adaptation of large language models. In ICLR, pages 1-13, 2022. 13
[17] Jungseock Joo and Kimmo Kärkkäinen. Gender slopes: Counterfactual fairness for computer vision models by attribute manipulation. In FATE/MM, pages 1-5, 2020. 2, 3, 6
[18] Sangwon Jung, Sanghyuk Chun, and Taesup Moon. Learning fair classifiers with partially annotated group labels. In CVPR, pages 10348-10357, 2022. 6
[19] Gal Kaplun, Andrey Gurevich, Tal Swisa, Mazor David, Shai Shalev-Shwartz, and Eran Malach. Less is more: Selective layer finetuning with subtuning. arXiv preprint arXiv:2302.06354, 2023. 4, 7
[20] Adarsh Kappiyath, Abhra Chaudhuri, Ajay Kumar Jaiswal, Ziquan Liu, Yunpeng Li, Xiatian Zhu, and Lu Yin. Sebra: Debiasing through self-guided bias ranking. In ICLR, pages 1-12, 2025. 3
[21] Simon Kornblith, Jonathon Shlens, and Quoc V Le. Do better ImageNet models transfer better? In CVPR, pages 2661-2671, 2019. 4
[22] Yoonho Lee, Annie S Chen, Fahim Tajwar, Ananya Kumar, Huaxiu Yao, Percy Liang, and Chelsea Finn. Surgical fine-tuning improves adaptation to distribution shifts. In ICLR, 2022. 4
[23] Daniel Levy, Yair Carmon, John C Duchi, and Aaron Sidford. Large-scale methods for distributionally robust optimization. NeurIPS, 33:8847-8860, 2020. 1, 3, 6
[24] Evan Z Liu, Behzad Haghgoo, Annie S Chen, Aditi Raghunathan, Pang Wei Koh, Shiori Sagawa, Percy Liang, and Chelsea Finn. Just train twice: Improving group robustness without training group information. In ICML, pages 6781-6792, 2021. 3, 6, 8
[25] Lydia T Liu, Max Simchowitz, and Moritz Hardt. The implicit fairness criterion of unconstrained learning. In ICML, pages 4051-4060. PMLR, 2019. 3
[26] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In ICCV, 2015. 5, 12
[27] Ziquan Liu, Yi Xu, Xiangyang Ji, and Antoni B Chan. TWINS: A fine-tuning framework for improved transferability of adversarial robustness and generalization. In CVPR, pages 16436-16446, 2023. 4
[28] Romain Lopez, Jeffrey Regier, Michael I Jordan, and Nir Yosef. Information constraints on auto-encoding variational Bayes. NeurIPS, 31, 2018. 4
[29] David Madras, Elliot Creager, Toniann Pitassi, and Richard Zemel. Learning adversarially fair and transferable representations. In ICML, pages 3384-3393, 2018. 2
[30] Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. A survey on bias and fairness in machine learning. ACM Computing Surveys, 54(6):1-35, 2021. 1
[31] Sachit Menon and Carl Vondrick.
Visual classification via description from large language models. In ICLR, 2022. 4
[32] Junhyun Nam, Hyuntak Cha, Sungsoo Ahn, Jaeho Lee, and Jinwoo Shin. Learning from failure: De-biasing classifier from biased classifier. NeurIPS, 33:20673-20684, 2020. 6
[33] James Oldfield, Markos Georgopoulos, Grigorios Chrysos, Christos Tzelepis, Yannis Panagakis, Mihalis Nicolaou, Jiankang Deng, and Ioannis Patras. Multilinear mixture of experts: Scalable expert specialization through factorization. NeurIPS, 37:53022-53063, 2025. 3
[34] Sungho Park, Sunhee Hwang, Dohyung Kim, and Hyeran Byun. Learning disentangled representation for fair facial attribute classification via fairness-aware information alignment. In AAAI, pages 2403-2411, 2021. 2
[35] Sungho Park, Jewook Lee, Pilhyeon Lee, Sunhee Hwang, Dohyung Kim, and Hyeran Byun. Fair contrastive learning for facial attribute classification. In CVPR, pages 10389-10398, 2022. 5, 6
[36] Momchil Peychev, Anian Ruoss, Mislav Balunovic, Maximilian Baader, and Martin Vechev. Latent space smoothing for individually fair representations. In ECCV, pages 535-554, 2022. 2, 3, 4
[37] Mohammad Pezeshki, Diane Bouchacourt, Mark Ibrahim, Nicolas Ballas, Pascal Vincent, and David Lopez-Paz. Discovering environments with XRM. In ICML, 2024. 3
[38] Geoff Pleiss, Manish Raghavan, Felix Wu, Jon Kleinberg, and Kilian Q Weinberger. On fairness and calibration. NeurIPS, 30, 2017. 3
[39] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In ICML, pages 8748-8763, 2021. 4
[40] Vikram V Ramaswamy, Sunnie SY Kim, and Olga Russakovsky. Fair attribute classification through latent space de-biasing. In CVPR, pages 9301-9310, 2021. 2, 3
[41] Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with CLIP latents. arXiv preprint arXiv:2204.06125, pages 1-27, 2022. 2, 4
[42] Youngmin Ro and Jin Young Choi. AutoLR: Layer-wise pruning and auto-tuning of learning rates in fine-tuning of deep networks. In AAAI, pages 2486-2494, 2021. 4
[43] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In CVPR, pages 10684-10695, 2022. 2, 3, 4
[44] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional networks for biomedical image segmentation. In MICCAI, pages 234-241, 2015. 4
[45] Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al. Photorealistic text-to-image diffusion models with deep language understanding. In NeurIPS, pages 36479-36494, 2022. 2, 4
[46] Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al. Photorealistic text-to-image diffusion models with deep language understanding. NeurIPS, 35:36479-36494, 2022. 3
[47] Joonghyuk Shin, Minguk Kang, and Jaesik Park. Fill-up: Balancing long-tailed data with generative models. arXiv preprint arXiv:2306.07200, 2023. 1, 2, 6, 7
[48] Robik Shrestha, Yang Zou, Qiuyu Chen, Zhiheng Li, Yusheng Xie, and Siqi Deng. FairRAG: Fair human generation via fair retrieval augmentation. In CVPR, pages 11996-12005, 2024.
2, 4
[49] Ilia Shumailov, Zakhar Shumaylov, Yiren Zhao, Nicolas Papernot, Ross Anderson, and Yarin Gal. AI models collapse when trained on recursively generated data. Nature, 631(8022):755-759, 2024. 2
[50] Krishnakant Singh, Thanush Navaratnam, Jannik Holmer, Simone Schaub-Meyer, and Stefan Roth. Is synthetic data all we need? Benchmarking the robustness of models trained with synthetic images. In CVPR, pages 2505-2515, 2024. 5
[51] Tobias Uelwer, Jan Robine, Stefan Sylvius Wagner, Marc Höftmann, Eric Upschulte, Sebastian Konietzny, Maike Behrendt, and Stefan Harmeling. A survey on self-supervised representation learning. arXiv preprint arXiv:2308.11455, 2023. 2
[52] Mei Wang and Weihong Deng. Mitigating bias in face recognition using skewness-aware reinforcement learning. In CVPR, pages 9322-9331, 2020. 6
[53] Yinong Oliver Wang, Younjoon Chung, Chen Henry Wu, and Fernando De la Torre. Domain gap embeddings for generative dataset augmentation. In CVPR, pages 28684-28694, 2024. 5
[54] Zeyu Wang, Klint Qinami, Ioannis Christos Karakozis, Kyle Genova, Prem Nair, Kenji Hata, and Olga Russakovsky. Towards fairness in visual recognition: Effective strategies for bias mitigation. In CVPR, pages 8919-8928, 2020. 2
[55] Yuyang Xue, Junyu Yan, Raman Dutt, Fasih Haider, Jingshuai Liu, Steven McDonagh, and Sotirios A Tsaftaris. BMFT: Achieving fairness via bias-based weight masking fine-tuning. In MICCAI Workshop on Fairness of AI in Medical Imaging, pages 98-108, 2024. 3
[56] Moon Ye-Bin, Nam Hyeon-Woo, Wonseok Choi, Nayeong Kim, Suha Kwak, and Tae-Hyun Oh. Exploiting synthetic data for data imbalance problems: Baselines from a data perspective. arXiv preprint arXiv:2308.00994, 2023. 1, 4, 6, 7
[57] Cheng Zhang, Xuanbai Chen, Siqi Chai, Chen Henry Wu, Dmitry Lagun, Thabo Beeler, and Fernando De la Torre. ITI-GEN: Inclusive text-to-image generation. In ICCV, pages 3969-3980, 2023. 2, 4
[58] Fengda Zhang, Kun Kuang, Long Chen, Yuxuan Liu, Chao Wu, and Jun Xiao. Fairness-aware contrastive learning with partially annotated sensitive attributes. In ICLR, 2023. 3, 5, 6
[59] Fengda Zhang, Qianpei He, Kun Kuang, Jiashuo Liu, Long Chen, Chao Wu, Jun Xiao, and Hanwang Zhang. Distributionally generative augmentation for fair facial attribute classification. In CVPR, pages 22797-22808, 2024. 2, 3, 4, 5, 6
[60] Zhifei Zhang, Yang Song, and Hairong Qi. Age progression/regression by conditional adversarial autoencoder. In CVPR, 2017. 6, 12
[61] Qihao Zhao, Yalun Dai, Hao Li, Wei Hu, Fan Zhang, and Jun Liu. LTGC: Long-tail recognition via leveraging LLMs-driven generated content. In CVPR, pages 19510-19520, 2024. 13
[62] Zengqun Zhao and Ioannis Patras. Prompting visual-language models for dynamic facial expression recognition. In BMVC, pages 1-14, 2023. 4
[63] Xiao Zhou, Yong Lin, Renjie Pi, Weizhong Zhang, Renzhe Xu, Peng Cui, and Tong Zhang. Model agnostic sample reweighting for out-of-distribution learning. In ICML, pages 27203-27221, 2022. 6
[64] Yongchao Zhou, Hshmat Sahak, and Jimmy Ba. Using synthetic data for data augmentation to improve classification accuracy. In ICML, 2023. 2, 5
[65] Fuzhen Zhuang, Zhiyuan Qi, Keyu Duan, Dongbo Xi, Yongchun Zhu, Hengshu Zhu, Hui Xiong, and Qing He. A comprehensive survey on transfer learning. Proceedings of the IEEE, 109(1):43-76, 2020.
4 \ No newline at end of file diff --git a/2025/AIM-Fair_ Advancing Algorithmic Fairness via Selectively Fine-Tuning Biased Models with Contextual Synthetic Data/images.zip b/2025/AIM-Fair_ Advancing Algorithmic Fairness via Selectively Fine-Tuning Biased Models with Contextual Synthetic Data/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..b660ea79cab744c6f3ae0c5a73f02fe5983a578e --- /dev/null +++ b/2025/AIM-Fair_ Advancing Algorithmic Fairness via Selectively Fine-Tuning Biased Models with Contextual Synthetic Data/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:717df4e159ca2cba7a1bf3e67e2a50376c839834e681decbbc82d8afcb43ba66 +size 761949 diff --git a/2025/AIM-Fair_ Advancing Algorithmic Fairness via Selectively Fine-Tuning Biased Models with Contextual Synthetic Data/layout.json b/2025/AIM-Fair_ Advancing Algorithmic Fairness via Selectively Fine-Tuning Biased Models with Contextual Synthetic Data/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..40d63a6fa3f085491528e6c0302641fdfe191f86 --- /dev/null +++ b/2025/AIM-Fair_ Advancing Algorithmic Fairness via Selectively Fine-Tuning Biased Models with Contextual Synthetic Data/layout.json @@ -0,0 +1,7919 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 63, + 102, + 547, + 140 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 102, + 547, + 140 + ], + "spans": [ + { + "bbox": [ + 63, + 102, + 547, + 140 + ], + "type": "text", + "content": "AIM-Fair: Advancing Algorithmic Fairness via Selectively Fine-Tuning Biased Models with Contextual Synthetic Data" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 103, + 161, + 506, + 190 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 103, + 161, + 506, + 190 + ], + "spans": [ + { + "bbox": [ + 103, + 161, + 506, + 190 + ], + "type": "text", + "content": "Zengqun Zhao Ziquan Liu Yu Cao Shaogang Gong Ioannis Patras Centre for Multimodal AI, Queen Mary University of London" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 133, + 191, + 474, + 204 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 191, + 474, + 204 + ], + "spans": [ + { + "bbox": [ + 133, + 191, + 474, + 204 + ], + "type": "text", + "content": "{zengqun.zhao, ziquan.liu, yu.cao, s.gong, i.patras}@qmul.ac.uk" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 151, + 231, + 200, + 243 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 151, + 231, + 200, + 243 + ], + "spans": [ + { + "bbox": [ + 151, + 231, + 200, + 243 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 256, + 296, + 639 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 256, + 296, + 639 + ], + "spans": [ + { + "bbox": [ + 55, + 256, + 296, + 639 + ], + "type": "text", + "content": "Recent advances in generative models have sparked research on improving model fairness with AI-generated data. However, existing methods often face limitations in the diversity and quality of synthetic data, leading to compromised fairness and overall model accuracy. Moreover, many approaches rely on the availability of demographic group labels, which are often costly to annotate. This paper proposes AIM-Fair, aiming to overcome these limitations and harness the potential of cutting-edge generative models in promoting algorithmic fairness. 
We investigate a fine-tuning paradigm starting from a biased model initially trained on real-world data without demographic annotations. This model is then fine-tuned using unbiased synthetic data generated by a state-of-the-art diffusion model to improve its fairness. Two key challenges are identified in this fine-tuning paradigm, 1) the low quality of synthetic data, which can still happen even with advanced generative models, and 2) the domain and bias gap between real and synthetic data. To address the limitation of synthetic data quality, we propose Contextual Synthetic Data Generation (CSDG) to generate data using a text-to-image diffusion model (T2I) with prompts generated by a context-aware LLM, ensuring both data diversity and control of bias in synthetic data. To resolve domain and bias shifts, we introduce a novel selective fine-tuning scheme in which only model parameters more sensitive to bias and less sensitive to domain shift are updated. Experiments on CelebA and UTKFace datasets show that our AIM-Fair improves model fairness while maintaining utility, outperforming both fully and partially fine-tuned approaches to model fairness. The code is available at https://github.com/zengqunzhao/AIM-Fair." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 656, + 135, + 668 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 656, + 135, + 668 + ], + "spans": [ + { + "bbox": [ + 55, + 656, + 135, + 668 + ], + "type": "text", + "content": "1. Introduction" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 671, + 295, + 696 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 671, + 295, + 696 + ], + "spans": [ + { + "bbox": [ + 55, + 671, + 295, + 696 + ], + "type": "text", + "content": "Recent research has raised significant concerns about fairness and bias in machine learning models [30]. These mod-" + } + ] + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 320, + 231, + 545, + 410 + ], + "blocks": [ + { + "bbox": [ + 320, + 231, + 545, + 410 + ], + "lines": [ + { + "bbox": [ + 320, + 231, + 545, + 410 + ], + "spans": [ + { + "bbox": [ + 320, + 231, + 545, + 410 + ], + "type": "image", + "image_path": "f8d0530fa245105e65d5f0c4b739b0f8b3dea27cba27db507c87aa7fbc563021.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 313, + 412, + 555, + 598 + ], + "lines": [ + { + "bbox": [ + 313, + 412, + 555, + 598 + ], + "spans": [ + { + "bbox": [ + 313, + 412, + 555, + 598 + ], + "type": "text", + "content": "Figure 1. Facial attributes classification (FAC) on the CelebA dataset based on different training strategies, in which Smiling is the target attribute and Gender is the protected attribute. This shows different learning strategies result in variable model utility (Overall Accuracy) vs. model fairness (Worst Group Accuracy and Equalized Odds) on demographic groups. A model trained solely on real data if biased exhibits high accuracy but poor fairness scores. Conversely, models trained on balanced synthetic data show better fairness but poorer accuracy due to a \"domain gap\" between the real and synthetic data, and a lack of selective model fine-tuning when synthetic data is deployed. Strategies to repair imbalances in real data [9, 56] or to supplement real data with synthetic data [47] marginally increase accuracy but do little to improve fairness. 
Our method for selective fine-tuning of a pre-trained (biased) model with synthetic data not only preserves model accuracy but also substantially improves fairness, outperforming fully fine-tuning (FFT) in both model utility and fairness." + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_caption" + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 605, + 556, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 605, + 556, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 605, + 556, + 715 + ], + "type": "text", + "content": "els often demonstrate varying performance across different demographic groups, leading to unfair outcomes. To mitigate the spurious correlation caused by learning from imbalanced (biased) data, regularizations were used as optimization objectives [1, 10, 12, 23]. Methods like distributionally robust optimization (DRO) [12] optimizes the worst-case performance, while invariant risk minimization (IRM) [1] learns unbiased representations with invariance to different environments. Influenced by the success of repre" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "spans": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "text", + "content": "CVF" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "spans": [ + { + "bbox": [ + 145, + 0, + 496, + 37 + ], + "type": "text", + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 702, + 270, + 714 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 702, + 270, + 714 + ], + "spans": [ + { + "bbox": [ + 69, + 702, + 270, + 714 + ], + "type": "text", + "content": "Corresponding author: Ziquan Liu {ziquan.liu@qmul.ac.uk}." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "text", + "content": "28748" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 67, + 51, + 225, + 162 + ], + "blocks": [ + { + "bbox": [ + 67, + 51, + 225, + 162 + ], + "lines": [ + { + "bbox": [ + 67, + 51, + 225, + 162 + ], + "spans": [ + { + "bbox": [ + 67, + 51, + 225, + 162 + ], + "type": "image", + "image_path": "968ef6fce9fb3dfd9ee650e2f55496e5ce680ace7aa301e85fbf4cca5b71cf7c.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 55, + 163, + 555, + 231 + ], + "lines": [ + { + "bbox": [ + 55, + 163, + 555, + 231 + ], + "spans": [ + { + "bbox": [ + 55, + 163, + 555, + 231 + ], + "type": "text", + "content": "Figure 2. 
The effects of selective fine-tuning by layer-wise freezing from facial attribute classifications on the CelebA dataset, with Smiling as the target attribute and Male as the protected attribute: When only the fully connected (FC) layer is frozen the model shows improved worst demographic group accuracy and reduced equalized odds, i.e. more fair, but sacrifices some utility (overall accuracy). This indicates increased cross-domain generalisability (real and synthetic). Conversely, freezing only block 2 while fine-tuning the remaining parameters results in high overall accuracy but poorer fairness, i.e. further enhanced domain-bias specificity. When only block 1 is frozen, the model not only maintained equalized odds but also increased utility (overall accuracy) and worst group accuracies." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 225, + 50, + 383, + 162 + ], + "blocks": [ + { + "bbox": [ + 225, + 50, + 383, + 162 + ], + "lines": [ + { + "bbox": [ + 225, + 50, + 383, + 162 + ], + "spans": [ + { + "bbox": [ + 225, + 50, + 383, + 162 + ], + "type": "image", + "image_path": "0ee6f775f8d3b9eaeae3aaa12f11590e01bdf33b2ec7a9f98e8e538dd8a10a16.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 383, + 51, + 541, + 162 + ], + "blocks": [ + { + "bbox": [ + 383, + 51, + 541, + 162 + ], + "lines": [ + { + "bbox": [ + 383, + 51, + 541, + 162 + ], + "spans": [ + { + "bbox": [ + 383, + 51, + 541, + 162 + ], + "type": "image", + "image_path": "3365cae16dacab792e323a67948865d1373f9eaaeba542387d0e645ff9fca91d.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 54, + 233, + 297, + 449 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 233, + 297, + 449 + ], + "spans": [ + { + "bbox": [ + 54, + 233, + 297, + 449 + ], + "type": "text", + "content": "sentation learning [51], attempts were made to learn a fair feature representation invariant to protected facial attributes [6, 29, 34, 54]. However, these methods rely on demographic group labelling, often unavailable in practice [2, 4]. More recent advances in generative models have utilized generative augmentation for a more balanced training data distribution [8, 9, 17, 36, 40, 59]. These techniques primarily focus on image editing, either on real data [36, 40] or synthetic data [9], to create less biased training data. Yet, these methods still require additional group annotations of the data they edit. To address this limitation, DiGA [59] proposed to create an unbiased training set by editing spurious facial attributes to a random degree while keeping a target facial attribute unchanged without knowing each sample's group label. However, DiGA increases training data size multiple times, leading to substantially higher training costs, and the quality of generated data is limited due to the unknown group labels." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 451, + 298, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 451, + 298, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 451, + 298, + 714 + ], + "type": "text", + "content": "Recent advancements in text-to-image generative models showcase impressive data fidelity [43], yet their potential for improving fairness through data expansion has not been fully explored by the fairness community. 
This raises the question: can AI-generated synthetic data play a crucial role in mitigating biases within machine learning models? This research question is particularly important given the successful application of AI-generated data in fine-tuning large language models (LLMs) [11], data augmentation [64], and long-tail recognition [47], alongside some reported limitations of synthetic data effectiveness [49]. This work presents a comprehensive empirical investigation into whether fine-tuning high-quality, balanced generative data from a contemporary text-to-image model can counteract model biases caused by training on imbalanced real data. We identify two key challenges in bias-correcting fine-tuning with synthetic data: (1) A data-related challenge arising from linguistic ambiguity of the textual prompt and/or model misrepresentation [48, 57], which results in low-quality and low-diversity generated data. (2) A model learning challenge caused by both a domain shift (synthetic vs. real) and a bias shift (unbiased vs. biased) between" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 313, + 234, + 555, + 258 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 234, + 555, + 258 + ], + "spans": [ + { + "bbox": [ + 313, + 234, + 555, + 258 + ], + "type": "text", + "content": "the real and the synthetic data. Fine-tuning blindly on the synthetic will result in a model with decreased utility." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 313, + 258, + 556, + 426 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 258, + 556, + 426 + ], + "spans": [ + { + "bbox": [ + 313, + 258, + 556, + 426 + ], + "type": "text", + "content": "To address the data quality issue, we propose Contextual Synthetic Data Generation (CSDG). Existing Latent Diffusion Models (LDMs) [3, 36, 41, 45] often use vision-language models, such as CLIP, for cross-modality alignment. To yield more diverse and better fine-grained image generation, we formulate a contextual synthetic data generation strategy that uses more detailed text descriptions from expansive linguistic expressions of LLMs to condition an LDM with richer, context-driven text. As a result, the generated images cover more scenarios and provide more details, enhancing diversity and mitigating bias in real data. In contrast to other methods that edit or manipulate real data samples, a key strength of our method is that it does not require annotating the Protected Group Attributes of real data." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 426, + 556, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 426, + 556, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 426, + 556, + 715 + ], + "type": "text", + "content": "The model learning challenge is due to the existence of two types of shifts between the generated and the real data, namely, the desired bias shift and the undesired domain shift (image quality, realism, scene characteristics). To address this challenge we propose to locate and update model parameters that are more sensitive to the bias shift and less sensitive to the domain shift. Our observation is that some parameters are more sensitive to data distribution shift (cross-domain generalisability) - we call domain-sensitive parameters, while others are more sensitive to demographic group discrepancies (real data domain bias specificity), which we call fairness-sensitive parameters. This observation is supported by the experiments shown in Fig. 2. 
This is from fine-tuning a real data pre-trained model on balanced synthetic data while keeping parameters at different layers frozen. To identify which parameters to update, we propose a novel selection scheme in which the gradient differences between the updates by the real (biased) dataset and two synthetic datasets with one unbiased and another biased. Ranking the gradient differences between the synthetic-biased data and the synthetic-unbiased data reveals parameters sensitive to fairness (fairness-sensitive parameters). In contrast, inverse ranking the gradient differences between the real data and the synthetic-biased data" + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "text", + "content": "28749" + } + ] + } + ], + "index": 9 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 75, + 52, + 534, + 240 + ], + "blocks": [ + { + "bbox": [ + 75, + 52, + 534, + 240 + ], + "lines": [ + { + "bbox": [ + 75, + 52, + 534, + 240 + ], + "spans": [ + { + "bbox": [ + 75, + 52, + 534, + 240 + ], + "type": "image", + "image_path": "8860308feb1162fc43c47e04d965954e07a35434eeed64d6650d343d90084e9c.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 54, + 242, + 555, + 287 + ], + "lines": [ + { + "bbox": [ + 54, + 242, + 555, + 287 + ], + "spans": [ + { + "bbox": [ + 54, + 242, + 555, + 287 + ], + "type": "text", + "content": "Figure 3. A selective fine-tuning model consisting of three parts: (1) Contextual Synthetic Data Generation (CSDG) for generating diverse images using GPT-4 generated prompts, (2) Selective Mask Generation (SMG) for creating a selection mask that determines which parameters are updated during fine-tuning, and (3) Selective Fine-Tuning (SFT) to enhance model fairness obtained from synthetic data whilst simultaneously to preserve model utility yielded from real data in pre-training." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 54, + 289, + 294, + 361 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 289, + 294, + 361 + ], + "spans": [ + { + "bbox": [ + 54, + 289, + 294, + 361 + ], + "type": "text", + "content": "discovers parameters less sensitive to domain shift (domain-insensitive parameters). In fine-tuning, a selection mask is constructed as the intersection of the top-k rankings. This selection mask is applied to initialize the pre-trained model, so that only the selected parameters are updated using the balanced synthetic data." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 364, + 295, + 531 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 364, + 295, + 531 + ], + "spans": [ + { + "bbox": [ + 55, + 364, + 295, + 531 + ], + "type": "text", + "content": "Fig. 1 presents a comparison of the results across various training strategies using real and/or synthetic data. Fig 3 shows an overview of our model, which consists of three parts: Contextual Synthetic Data Generation (CSDG), Selective Mask Generation (SMG), and Selective Fine-Tuning (SFT). CSDG uses a pre-trained and fixed LDM to generate high-quality images, where a series of contextual prompts serve as the conditions. 
SMG creates a selection mask that determines which parameters are updated during finetuning. SFT is designed to correct bias in the model. We initialize a pre-trained model using the parameter selection mask. Then the model is fine-tuned on balanced synthetic data to enhance its fairness while retaining the model utility. Our contributions are as follows:" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 534, + 295, + 712 + ], + "type": "list", + "angle": 0, + "index": 8, + "blocks": [ + { + "bbox": [ + 55, + 534, + 295, + 582 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 534, + 295, + 582 + ], + "spans": [ + { + "bbox": [ + 55, + 534, + 295, + 582 + ], + "type": "text", + "content": "- We investigate a fine-tuning paradigm that mitigates model bias stemming from unbalanced real data using synthetic data generated by a text-to-image (T2I) process without requiring demographic group annotations." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 582, + 295, + 629 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 582, + 295, + 629 + ], + "spans": [ + { + "bbox": [ + 55, + 582, + 295, + 629 + ], + "type": "text", + "content": "- We design contextual synthetic data generating by using a T2I diffusion mode with prompts generated by a context-aware LLM, ensuring both data diversity and control of bias in synthetic data." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 630, + 295, + 677 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 630, + 295, + 677 + ], + "spans": [ + { + "bbox": [ + 55, + 630, + 295, + 677 + ], + "type": "text", + "content": "- We introduce a selective fine-tuning method for fair model learning, which identifies domain-sensitive and fairness-sensitive parameters for improving model fairness and utility simultaneously." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 677, + 295, + 712 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 677, + 295, + 712 + ], + "spans": [ + { + "bbox": [ + 55, + 677, + 295, + 712 + ], + "type": "text", + "content": "- Our method outperforms both full and partial fine-tuning methods and achieves superior performance compared to state-of-the-art methods across several datasets." + } + ] + } + ], + "index": 7 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 314, + 289, + 401, + 300 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 289, + 401, + 300 + ], + "spans": [ + { + "bbox": [ + 314, + 289, + 401, + 300 + ], + "type": "text", + "content": "2. Related Work" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 312, + 304, + 555, + 567 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 304, + 555, + 567 + ], + "spans": [ + { + "bbox": [ + 312, + 304, + 555, + 567 + ], + "type": "text", + "content": "Mitigating Model Bias. Current methods for model fairness can be categorized into three types: pre-processing, in-processing, and post-processing. Pre-processing approaches modify sample distributions of protected variables or perform transformations to remove discrimination from the data. Many recent works use generative models to create balanced, unbiased datasets [36, 40, 58]. In-processing methods incorporate fairness metrics into model optimization to maximize both performance and fairness [1, 10, 12, 20, 23, 55]. 
Some studies explore mitigating bias without group annotations, such as Just Train Twice (JTT) [24], which upweights misclassified examples to improve worst-group performance, and Cross-Tisk Minimization (XRM) [37], which trains twin classifiers to reduce spurious correlations. Post-processing methods apply transformations to model outputs to improve fairness, such as model calibration [25, 33, 38] and thresholding [5, 13] to align predicted positive outcomes with actual positive examples across groups. Our method falls within the pre-processing paradigm, but instead of simply balancing real data with synthetic data, we focus on using synthetic data to enhance model fairness." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 570, + 556, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 570, + 556, + 714 + ], + "spans": [ + { + "bbox": [ + 313, + 570, + 556, + 714 + ], + "type": "text", + "content": "Improving Model Fairness on Generative Data. Generative models have advanced rapidly in recent years [43, 46], with several studies exploring their use to improve model fairness through generative data [9, 17, 59]. Much of the prior research has focused on generating counterfactual samples to assess fairness [8, 17], and generative methods have also made strides in bias mitigation by creating balanced, unbiased datasets [36, 40, 58]. Instead of generating counterfactual samples based on real data, D'Inca et al. [9] use a diffusion model for uncontrolled image generation, followed by manipulation of the synthetic images in a semantic space. However, these approaches require additional" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 318, + 758 + ], + "type": "text", + "content": "28750" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 296, + 239 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 296, + 239 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 296, + 239 + ], + "type": "text", + "content": "group annotations. Zhang et al. [59] use generative models to identify spurious attributes that may bias the model, then edit each image to modify these attributes. They train a fair FAC model on the augmented dataset, enhancing its invariance to these spurious variations. While this approach does not require protected group labels for training the FAC model, annotations are still needed to train the generative models that detect and edit spurious attributes. Additionally, generating accurate counterfactual images remains challenging, as the edits are applied randomly to images without known protected attributes. In contrast, our method uses generative data from a text-to-image LDM, enabling greater control over the diversity of the generated data and allowing a broader range of scenarios and variations." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 240, + 296, + 491 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 240, + 296, + 491 + ], + "spans": [ + { + "bbox": [ + 56, + 240, + 296, + 491 + ], + "type": "text", + "content": "Model Fine-Tuning. 
A common approach to transfer learning in the presence of distribution shifts is to fine-tune the last few layers of a pre-trained model, retaining the learned features while adapting to the new task [27, 65]. To prevent overfitting during this fine-tuning process, existing methods suggest using a smaller learning rate compared to the initial pretraining phase [21], freezing the early backbone layers and gradually unfreezing them [15], or applying different learning rates to each layer [42]. Lee et al. [22] introduced surgical fine-tuning, which showed that selectively fine-tuning a subset of layers can either match or outperform conventional fine-tuning strategies. Recent research [19] indicates that training a carefully selected subset of layers while keeping the remaining weights frozen at their initial values can lead to varying contributions from different layers across the network to overall performance. Our proposed method inherits some common findings from these works but we further investigated with parameters-wise selective fine-tuning, which details the parameters' sensibility to the property from the pre-train data distribution and the property from the downstream data distribution." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 500, + 117, + 513 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 500, + 117, + 513 + ], + "spans": [ + { + "bbox": [ + 55, + 500, + 117, + 513 + ], + "type": "text", + "content": "3. Methods" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 520, + 296, + 592 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 520, + 296, + 592 + ], + "spans": [ + { + "bbox": [ + 55, + 520, + 296, + 592 + ], + "type": "text", + "content": "This section first introduces how to generate contextual synthetic data with LLM-generated prompts. Then, we detail how to obtain the parameters selection mask. Finally, we present how to conduct model fair fine-tuning with a selection mask on balanced synthetic data. We provide the algorithm for the selective fine-tuning in the Appendix." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 594, + 257, + 605 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 594, + 257, + 605 + ], + "spans": [ + { + "bbox": [ + 55, + 594, + 257, + 605 + ], + "type": "text", + "content": "3.1. Contextual Synthetic Data Generation" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 605, + 296, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 605, + 296, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 605, + 296, + 714 + ], + "type": "text", + "content": "Our method attempts to correct the biased model trained on imbalanced real data by fine-tuning on balanced synthetic data, so the quality of the synthetic data is crucial to the success of fair learning. Although current text-to-image models have demonstrated remarkable performance, several studies indicate that directly expressing the desired attributes in the prompt often results in sub-optimal outcomes due to linguistic ambiguity or model misrepresentation [48, 57]. For example, as shown in Fig. 
4, to generate a face photo with" + } + ] + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 331, + 73, + 534, + 261 + ], + "blocks": [ + { + "bbox": [ + 331, + 73, + 534, + 261 + ], + "lines": [ + { + "bbox": [ + 331, + 73, + 534, + 261 + ], + "spans": [ + { + "bbox": [ + 331, + 73, + 534, + 261 + ], + "type": "image", + "image_path": "748d899da50ee608b8a1e088410a484573eb1820cb8f563d2335b8625262dfb3.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 313, + 263, + 555, + 298 + ], + "lines": [ + { + "bbox": [ + 313, + 263, + 555, + 298 + ], + "spans": [ + { + "bbox": [ + 313, + 263, + 555, + 298 + ], + "type": "text", + "content": "Figure 4. Generated images of Smiling Male conditioned on different prompts. Compared to the plain prompt, our contextual prompts enhance the diversity. More on UTKFace in Appendix." + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 6 + }, + { + "bbox": [ + 313, + 302, + 555, + 374 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 302, + 555, + 374 + ], + "spans": [ + { + "bbox": [ + 313, + 302, + 555, + 374 + ], + "type": "text", + "content": "both target and protected attributes, one might use a prompt like \"Portrait face photo of a smiling male.\" as suggested in [56]. However, such manually designed prompts may lead to stereotypical image generation, excluding certain attributes or minority groups, which in turn can introduce bias in other attributes, such as age or hairstyle [57]." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 376, + 556, + 544 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 376, + 556, + 544 + ], + "spans": [ + { + "bbox": [ + 313, + 376, + 556, + 544 + ], + "type": "text", + "content": "The recent work in zero-shot [31] and supervised [62] classification suggest that leveraging additional contextual and detailed information can enhance vision-language alignment. Considering current text-to-image models [3, 36, 41, 45] often use vision-language models, such as CLIP [39], for cross-modality alignment, we propose a contextual synthetic data generation strategy that leverages the powerful linguistic capabilities of large language models, such as GPT-4, to condition an LDM with richer and context-driven text. As a result, the generated images cover more scenarios and provide more details, enhancing diversity and mitigating the bias in the characteristics not involved in the target and protected. The structure of the CSDG is shown in the upper right of Fig. 3." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 545, + 557, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 545, + 557, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 545, + 557, + 715 + ], + "type": "text", + "content": "Concretely, the instruction provided for the GPT-4 model following the format: “{Task}, {Number of Prompts}, {Target Attribute}, {Protected Attribute}, {Other Detailed Descriptions}, and {Prompt Format}”. For generating facial photos, for example, the other detailed descriptions contain specific facial features, hair characteristics, eye-related details, and head orientation or angle. Our text-to-image generation model is a pre-trained latent diffusion model-Stable Diffusion (SD) [43], which reverses noises applied to the latent embedding of images. SD contains a variational autoencoder (VAE) [28], a CLIP text encoder [39], and a U-Net [44]. 
During the inference, the random Gaussian noise " + }, + { + "bbox": [ + 313, + 545, + 557, + 715 + ], + "type": "inline_equation", + "content": "\\varepsilon_{t} \\sim \\mathcal{N}(0,\\mathbf{I})" + }, + { + "bbox": [ + 313, + 545, + 557, + 715 + ], + "type": "text", + "content": " and the contextual prompt features " + }, + { + "bbox": [ + 313, + 545, + 557, + 715 + ], + "type": "inline_equation", + "content": "c = (w_{1}, w_{2}, \\ldots, w_{n})" + }, + { + "bbox": [ + 313, + 545, + 557, + 715 + ], + "type": "text", + "content": " encoded via a CLIP text" + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 317, + 758 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 317, + 758 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 317, + 758 + ], + "type": "text", + "content": "28751" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 294, + 144 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 294, + 144 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 294, + 144 + ], + "type": "text", + "content": "encoder " + }, + { + "bbox": [ + 55, + 72, + 294, + 144 + ], + "type": "inline_equation", + "content": "\\Psi (\\cdot)" + }, + { + "bbox": [ + 55, + 72, + 294, + 144 + ], + "type": "text", + "content": " will be fed into the U-Net to condition the denoising process, where " + }, + { + "bbox": [ + 55, + 72, + 294, + 144 + ], + "type": "inline_equation", + "content": "n" + }, + { + "bbox": [ + 55, + 72, + 294, + 144 + ], + "type": "text", + "content": " is the number of the contextual prompts. We provide detailed instruction and contextual prompts in the Appendix. The empirical results in Tab. 5 demonstrate that the generated contextual synthetic data mitigate domain shift and improve model fairness." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 144, + 204, + 156 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 144, + 204, + 156 + ], + "spans": [ + { + "bbox": [ + 55, + 144, + 204, + 156 + ], + "type": "text", + "content": "3.2. Selective Mask Generation" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 157, + 296, + 372 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 157, + 296, + 372 + ], + "spans": [ + { + "bbox": [ + 55, + 157, + 296, + 372 + ], + "type": "text", + "content": "As recent studies [50, 53, 64] have mentioned, synthetic data often fail to align with the real data distribution due to domain shift, suggesting that fine-tuning the pre-trained model directly on the balanced synthetic data may not learn the desired properties effectively. This is caused by the fact that there is both a domain shift (synthetic vs. real) and a bias shift (unbiased vs. biased) between the real and the synthetic data, and fine-tuning with the domain shift will result in a model with decreased utility. The empirical finding shown in Fig. 2 demonstrates that when fine-tuning a real data pre-trained model on balanced synthetic data, some parameters are more sensitive to the data distribution shift, called domain-sensitive parameters, while some are more sensitive to group discrepancies, called fairness-sensitive parameters. 
To discover the parameters' sensibility towards different scenarios, we propose to construct three distinct datasets with different distributions to elicit the model responses by calculating the parameter-wise gradients." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 373, + 296, + 521 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 373, + 296, + 521 + ], + "spans": [ + { + "bbox": [ + 55, + 373, + 296, + 521 + ], + "type": "text", + "content": "We aim to fine-tune a pre-trained model " + }, + { + "bbox": [ + 55, + 373, + 296, + 521 + ], + "type": "inline_equation", + "content": "f_{\\theta}(x)" + }, + { + "bbox": [ + 55, + 373, + 296, + 521 + ], + "type": "text", + "content": " to improve fairness while accounting for domain differences between real and synthetic data, hence, we want to find the parameters which are more sensitive to fairness while less sensitive to domain shift. Specifically, we start by constructing three different datasets: biased real data " + }, + { + "bbox": [ + 55, + 373, + 296, + 521 + ], + "type": "inline_equation", + "content": "\\{(x_i^{(R)},y_i^{(R)})\\}_{i = 1}^{N_R}\\in \\mathcal{D}_R" + }, + { + "bbox": [ + 55, + 373, + 296, + 521 + ], + "type": "text", + "content": " representing training data with inherent biases, biased synthetic data " + }, + { + "bbox": [ + 55, + 373, + 296, + 521 + ], + "type": "inline_equation", + "content": "\\{(x_i^{(S_1)},y_i^{(S_1)})\\}_{i = 1}^{N_{S_1}}\\in \\mathcal{D}_{S_1}" + }, + { + "bbox": [ + 55, + 373, + 296, + 521 + ], + "type": "text", + "content": " mirroring the unfairness of the real data, and unbiased synthetic data " + }, + { + "bbox": [ + 55, + 373, + 296, + 521 + ], + "type": "inline_equation", + "content": "\\{(x_i^{(S_2)},y_i^{(S_2)})\\}_{i = 1}^{N_{S_2}}\\in \\mathcal{D}_{S_2}" + }, + { + "bbox": [ + 55, + 373, + 296, + 521 + ], + "type": "text", + "content": " designed to be fair, mitigating biases. " + }, + { + "bbox": [ + 55, + 373, + 296, + 521 + ], + "type": "inline_equation", + "content": "N_{R},N_{S_{1}}" + }, + { + "bbox": [ + 55, + 373, + 296, + 521 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 55, + 373, + 296, + 521 + ], + "type": "inline_equation", + "content": "N_{S_2}" + }, + { + "bbox": [ + 55, + 373, + 296, + 521 + ], + "type": "text", + "content": " is the image number of the corresponding dataset." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 522, + 296, + 618 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 522, + 296, + 618 + ], + "spans": [ + { + "bbox": [ + 55, + 522, + 296, + 618 + ], + "type": "text", + "content": "We then compute the gradients " + }, + { + "bbox": [ + 55, + 522, + 296, + 618 + ], + "type": "inline_equation", + "content": "\\pmb{g}_R" + }, + { + "bbox": [ + 55, + 522, + 296, + 618 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 55, + 522, + 296, + 618 + ], + "type": "inline_equation", + "content": "\\pmb{g}_{S_1}" + }, + { + "bbox": [ + 55, + 522, + 296, + 618 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 55, + 522, + 296, + 618 + ], + "type": "inline_equation", + "content": "\\pmb{g}_{S_2}" + }, + { + "bbox": [ + 55, + 522, + 296, + 618 + ], + "type": "text", + "content": " of the loss function " + }, + { + "bbox": [ + 55, + 522, + 296, + 618 + ], + "type": "inline_equation", + "content": "\\mathcal{L}(\\theta ;x,y)" + }, + { + "bbox": [ + 55, + 522, + 296, + 618 + ], + "type": "text", + "content": ", which is defined as a binary softmax loss. Rather than considering fine-grained, scalar-wise parameters, we select parameters at the level of weights and biases, denoted by " + }, + { + "bbox": [ + 55, + 522, + 296, + 618 + ], + "type": "inline_equation", + "content": "\\theta" + }, + { + "bbox": [ + 55, + 522, + 296, + 618 + ], + "type": "text", + "content": ", which includes weights " + }, + { + "bbox": [ + 55, + 522, + 296, + 618 + ], + "type": "inline_equation", + "content": "W^{(l)}" + }, + { + "bbox": [ + 55, + 522, + 296, + 618 + ], + "type": "text", + "content": " and biases " + }, + { + "bbox": [ + 55, + 522, + 296, + 618 + ], + "type": "inline_equation", + "content": "b^{(l)}" + }, + { + "bbox": [ + 55, + 522, + 296, + 618 + ], + "type": "text", + "content": " for the Convolution Layer, Batch Normalization, and Fully Connected Layer, where " + }, + { + "bbox": [ + 55, + 522, + 296, + 618 + ], + "type": "inline_equation", + "content": "l" + }, + { + "bbox": [ + 55, + 522, + 296, + 618 + ], + "type": "text", + "content": " refers to the layer index. 
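As an illustration of this tensor-level granularity, the sketch below enumerates the selectable units of a torchvision ResNet-18, the backbone used in Sec. 4.3; treating every named weight/bias tensor as one unit θ_j is our reading of this paragraph, not code from the paper.

```python
# A sketch of tensor-level parameter selection: each named weight/bias
# tensor of the conv, batch-norm, and FC layers is one selectable unit.
import torchvision

model = torchvision.models.resnet18()
units = [name for name, p in model.named_parameters() if p.requires_grad]
print(len(units))   # 62 weight/bias tensors in ResNet-18
print(units[:3])    # ['conv1.weight', 'bn1.weight', 'bn1.bias']
```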
Then, for the model parameters " + }, + { + "bbox": [ + 55, + 522, + 296, + 618 + ], + "type": "inline_equation", + "content": "\\theta" + }, + { + "bbox": [ + 55, + 522, + 296, + 618 + ], + "type": "text", + "content": ", we compute the average gradient of the loss on each dataset:" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 622, + 296, + 652 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 622, + 296, + 652 + ], + "spans": [ + { + "bbox": [ + 104, + 622, + 296, + 652 + ], + "type": "interline_equation", + "content": "\\boldsymbol {g} _ {R} = \\frac {1}{N _ {R}} \\sum_ {i = 1} ^ {N _ {R}} \\nabla_ {\\theta} \\mathcal {L} \\left(\\theta ; x _ {i} ^ {(R)}, y _ {i} ^ {(R)}\\right) \\tag {1}", + "image_path": "4170775d90ac35c6706346a72ab7c7fba8920c3fb0d5032ebda383a5056a7faa.jpg" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 97, + 654, + 296, + 685 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 97, + 654, + 296, + 685 + ], + "spans": [ + { + "bbox": [ + 97, + 654, + 296, + 685 + ], + "type": "interline_equation", + "content": "\\boldsymbol {g} _ {S _ {1}} = \\frac {1}{N _ {S _ {1}}} \\sum_ {i = 1} ^ {N _ {S _ {1}}} \\nabla_ {\\theta} \\mathcal {L} \\left(\\theta ; x _ {i} ^ {(S _ {1})}, y _ {i} ^ {(S _ {1})}\\right) \\tag {2}", + "image_path": "fa3b5043dd52bf0b6bb2ff14fb26ba6d832ffb06a18f72ed2918731130b8a38b.jpg" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 97, + 685, + 296, + 715 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 97, + 685, + 296, + 715 + ], + "spans": [ + { + "bbox": [ + 97, + 685, + 296, + 715 + ], + "type": "interline_equation", + "content": "\\boldsymbol {g} _ {S _ {2}} = \\frac {1}{N _ {S _ {2}}} \\sum_ {i = 1} ^ {N _ {S _ {2}}} \\nabla_ {\\theta} \\mathcal {L} \\left(\\theta ; x _ {i} ^ {(S _ {2})}, y _ {i} ^ {(S _ {2})}\\right) \\tag {3}", + "image_path": "68f8108025bbe4c429fd893ca2cd73ac6a6a161c94cd5bed70cfc01900096ce4.jpg" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 72, + 553, + 108 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 72, + 553, + 108 + ], + "spans": [ + { + "bbox": [ + 313, + 72, + 553, + 108 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 72, + 553, + 108 + ], + "type": "inline_equation", + "content": "\\nabla" + }, + { + "bbox": [ + 313, + 72, + 553, + 108 + ], + "type": "text", + "content": " is the gradient operator. We then calculate the gradient differences to capture how each parameter behaves across different data distributions. 
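A minimal sketch of the dataset-averaged gradients in Eqs. (1)-(3) follows; `model`, `loss_fn`, and the three data loaders are assumptions of this sketch, and averaging batch-mean gradients over batches matches the equations exactly only when batches are equally sized.

```python
# A sketch of Eqs. (1)-(3): the average gradient of the loss over one
# dataset, kept separately for every named parameter tensor.
import torch

def avg_grads(model, loader, loss_fn):
    sums = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    n_batches = 0
    for x, y in loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                sums[n] += p.grad.detach()
        n_batches += 1
    return {n: g / n_batches for n, g in sums.items()}

# g_R, g_S1, g_S2 = (avg_grads(model, dl, loss_fn) for dl in (dl_R, dl_S1, dl_S2))
```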
For parameter " + }, + { + "bbox": [ + 313, + 72, + 553, + 108 + ], + "type": "inline_equation", + "content": "\\theta_{j}" + }, + { + "bbox": [ + 313, + 72, + 553, + 108 + ], + "type": "text", + "content": ":" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 388, + 110, + 553, + 140 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 388, + 110, + 553, + 140 + ], + "spans": [ + { + "bbox": [ + 388, + 110, + 553, + 140 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} \\Delta_ {1, j} = \\left| \\boldsymbol {g} _ {R, j} - \\boldsymbol {g} _ {S _ {1}, j} \\right| \\tag {4} \\\\ \\Delta_ {2, j} = \\left| \\boldsymbol {g} _ {S _ {1}, j} - \\boldsymbol {g} _ {S _ {2}, j} \\right| \\end{array}", + "image_path": "63b038893f8b46ca8174592a5e386c6e372447f7702f5db901ec80b501b6ee6a.jpg" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 139, + 555, + 221 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 139, + 555, + 221 + ], + "spans": [ + { + "bbox": [ + 313, + 139, + 555, + 221 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 139, + 555, + 221 + ], + "type": "inline_equation", + "content": "\\Delta_{1}" + }, + { + "bbox": [ + 313, + 139, + 555, + 221 + ], + "type": "text", + "content": " measures a parameter's sensitivity to the domain shift, while " + }, + { + "bbox": [ + 313, + 139, + 555, + 221 + ], + "type": "inline_equation", + "content": "\\Delta_{2}" + }, + { + "bbox": [ + 313, + 139, + 555, + 221 + ], + "type": "text", + "content": " identifies parameters crucial for fairness, and " + }, + { + "bbox": [ + 313, + 139, + 555, + 221 + ], + "type": "inline_equation", + "content": "j" + }, + { + "bbox": [ + 313, + 139, + 555, + 221 + ], + "type": "text", + "content": " denotes the parameter index. 
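Putting Eq. (4) together with the ranking, intersection, and masking steps formalized in Eqs. (5)-(7) just below, a minimal sketch might look as follows; reducing each tensor's gradient difference to its mean absolute value is an assumption of this sketch, since the paper selects at the level of whole weight/bias tensors.

```python
# A sketch of Eqs. (4)-(7): per-tensor gradient differences are reduced to a
# scalar score (mean absolute difference, an assumption here), ranked, and
# intersected to form the boolean selection mask M.
import torch

def build_mask(g_R, g_S1, g_S2, k):
    names = list(g_R)
    d1 = torch.stack([(g_R[n] - g_S1[n]).abs().mean() for n in names])   # Eq. (4): domain
    d2 = torch.stack([(g_S1[n] - g_S2[n]).abs().mean() for n in names])  # Eq. (4): fairness
    K1 = set(torch.argsort(d1)[:k].tolist())    # Eq. (5): ascending, least domain-sensitive
    K2 = set(torch.argsort(-d2)[:k].tolist())   # Eq. (5): descending, most fairness-sensitive
    K = K1 & K2                                 # Eq. (6): top-k intersection
    return {n: (i in K) for i, n in enumerate(names)}   # Eq. (7): mask M
```

During fine-tuning (Eq. (8) below), this mask gates the update, e.g. by zeroing `p.grad` for every tensor whose mask entry is False before each optimizer step.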
To obtain the parameters that are less affected by domain differences and more impactful for fairness, we conduct a ranking in ascending order on " + }, + { + "bbox": [ + 313, + 139, + 555, + 221 + ], + "type": "inline_equation", + "content": "\\Delta_{1}" + }, + { + "bbox": [ + 313, + 139, + 555, + 221 + ], + "type": "text", + "content": " (smaller differences first) and a ranking in descending order on " + }, + { + "bbox": [ + 313, + 139, + 555, + 221 + ], + "type": "inline_equation", + "content": "\\Delta_{2}" + }, + { + "bbox": [ + 313, + 139, + 555, + 221 + ], + "type": "text", + "content": " (larger differences first):" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 353, + 223, + 553, + 236 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 353, + 223, + 553, + 236 + ], + "spans": [ + { + "bbox": [ + 353, + 223, + 553, + 236 + ], + "type": "interline_equation", + "content": "R _ {1} = \\operatorname {argsort} \\left(\\Delta_ {1}\\right); R _ {2} = \\operatorname {argsort} \\left(- \\Delta_ {2}\\right) \\tag {5}", + "image_path": "cfb1c415e5f7a1c48ddfd6b076510d3231fcf8f67886fa0cd4fd9c3892cb02de.jpg" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 236, + 553, + 271 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 236, + 553, + 271 + ], + "spans": [ + { + "bbox": [ + 313, + 236, + 553, + 271 + ], + "type": "text", + "content": "Then we can find the intersection of the top-" + }, + { + "bbox": [ + 313, + 236, + 553, + 271 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 313, + 236, + 553, + 271 + ], + "type": "text", + "content": " parameters that are both less sensitive to the domain gap and significant for fairness by " + }, + { + "bbox": [ + 313, + 236, + 553, + 271 + ], + "type": "inline_equation", + "content": "K = K_{1}\\cap K_{2}" + }, + { + "bbox": [ + 313, + 236, + 553, + 271 + ], + "type": "text", + "content": ", where" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 379, + 272, + 553, + 300 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 379, + 272, + 553, + 300 + ], + "spans": [ + { + "bbox": [ + 379, + 272, + 553, + 300 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} K _ {1} = \\left\\{\\theta_ {j} \\mid j \\in R _ {1} [ 1: k ] \\right\\} \\tag {6} \\\\ K _ {2} = \\left\\{\\theta_ {j} \\mid j \\in R _ {2} [ 1: k ] \\right\\} \\end{array}", + "image_path": "03ae93897fd2385b63ffbbd154e5b9f3fe8d4ed52c4f6915b30b3e56f8f0e281.jpg" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 326, + 299, + 515, + 311 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 326, + 299, + 515, + 311 + ], + "spans": [ + { + "bbox": [ + 326, + 299, + 515, + 311 + ], + "type": "text", + "content": "Finally, the selection mask can be obtained by:" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 376, + 312, + 553, + 342 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 376, + 312, + 553, + 342 + ], + "spans": [ + { + "bbox": [ + 376, + 312, + 553, + 342 + ], + "type": "interline_equation", + "content": "M _ {j} = \\left\\{ \\begin{array}{l l} \\text{True}, & \\text{if } \\theta_ {j} \\in K \\\\ \\text{False}, & \\text{otherwise} \\end{array} \\right. 
\\tag {7}", + "image_path": "039cfe646b33a83dc26b6b0cf6ddd29240b8e20c10b70188dead0e8cbdac63dd.jpg" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 314, + 342, + 524, + 354 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 342, + 524, + 354 + ], + "spans": [ + { + "bbox": [ + 314, + 342, + 524, + 354 + ], + "type": "text", + "content": "3.3. Selective Fine-Tuning on Synthetic Data" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 313, + 355, + 554, + 453 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 355, + 554, + 453 + ], + "spans": [ + { + "bbox": [ + 313, + 355, + 554, + 453 + ], + "type": "text", + "content": "Our goal is to fine-tune the pre-trained model " + }, + { + "bbox": [ + 313, + 355, + 554, + 453 + ], + "type": "inline_equation", + "content": "f_{\\theta}(x)" + }, + { + "bbox": [ + 313, + 355, + 554, + 453 + ], + "type": "text", + "content": " on the balanced synthetic dataset " + }, + { + "bbox": [ + 313, + 355, + 554, + 453 + ], + "type": "inline_equation", + "content": "\\mathcal{D}_{S_2}" + }, + { + "bbox": [ + 313, + 355, + 554, + 453 + ], + "type": "text", + "content": " following the ERM framework, while only updating selected parameters as indicated by the mask " + }, + { + "bbox": [ + 313, + 355, + 554, + 453 + ], + "type": "inline_equation", + "content": "M" + }, + { + "bbox": [ + 313, + 355, + 554, + 453 + ], + "type": "text", + "content": ". Specifically, given the model " + }, + { + "bbox": [ + 313, + 355, + 554, + 453 + ], + "type": "inline_equation", + "content": "f_{\\theta}(x)" + }, + { + "bbox": [ + 313, + 355, + 554, + 453 + ], + "type": "text", + "content": " pretrained on the biased real data, balanced synthetic data " + }, + { + "bbox": [ + 313, + 355, + 554, + 453 + ], + "type": "inline_equation", + "content": "\\{(x_i^{(S)},y_i^{(S)})_{i = 1}^{N_{S_2}}" + }, + { + "bbox": [ + 313, + 355, + 554, + 453 + ], + "type": "text", + "content": ", and the parameters selection mask " + }, + { + "bbox": [ + 313, + 355, + 554, + 453 + ], + "type": "inline_equation", + "content": "M" + }, + { + "bbox": [ + 313, + 355, + 554, + 453 + ], + "type": "text", + "content": ", during optimization, we apply the mask to the gradients so that only the selected parameters are updated:" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 320, + 453, + 553, + 473 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 453, + 553, + 473 + ], + "spans": [ + { + "bbox": [ + 320, + 453, + 553, + 473 + ], + "type": "interline_equation", + "content": "\\theta^ {(t + 1)} = \\theta^ {(t)} - \\eta \\left(M \\odot \\nabla_ {\\theta} \\mathcal {L} \\left(f _ {\\theta^ {(t)}} \\left(x _ {i} ^ {(S _ {2})}\\right), y _ {i} ^ {(S _ {2})}\\right)\\right) \\tag {8}", + "image_path": "fe0c0310d448fa6228dab989c177858aa8b9eb77d7d84b23e6bc74c96512c3fa.jpg" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 313, + 475, + 554, + 522 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 475, + 554, + 522 + ], + "spans": [ + { + "bbox": [ + 313, + 475, + 554, + 522 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 475, + 554, + 522 + ], + "type": "inline_equation", + "content": "\\theta^{(t)}" + }, + { + "bbox": [ + 313, + 475, + 554, + 522 + ], + "type": "text", + "content": " are the parameters at iteration " + }, + { + "bbox": [ + 313, + 475, + 554, + 522 + ], + "type": "inline_equation", + "content": "t" + }, + { + "bbox": [ + 313, + 475, + 554, + 522 + 
], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 313, + 475, + 554, + 522 + ], + "type": "inline_equation", + "content": "\\eta" + }, + { + "bbox": [ + 313, + 475, + 554, + 522 + ], + "type": "text", + "content": " is the learning rate, and " + }, + { + "bbox": [ + 313, + 475, + 554, + 522 + ], + "type": "inline_equation", + "content": "\\odot" + }, + { + "bbox": [ + 313, + 475, + 554, + 522 + ], + "type": "text", + "content": " denotes element-wise multiplication. Notably, once the mask " + }, + { + "bbox": [ + 313, + 475, + 554, + 522 + ], + "type": "inline_equation", + "content": "M" + }, + { + "bbox": [ + 313, + 475, + 554, + 522 + ], + "type": "text", + "content": " is provided by SMG, it will be applied throughout the entire fine-tuning optimization process." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 314, + 534, + 394, + 547 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 534, + 394, + 547 + ], + "spans": [ + { + "bbox": [ + 314, + 534, + 394, + 547 + ], + "type": "text", + "content": "4. Experiments" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 313, + 554, + 554, + 601 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 554, + 554, + 601 + ], + "spans": [ + { + "bbox": [ + 313, + 554, + 554, + 601 + ], + "type": "text", + "content": "This section presents the experimental setup and results. We begin by describing the datasets, followed by the implementation details. The main results and ablation studies are presented at the end." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 314, + 605, + 376, + 616 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 605, + 376, + 616 + ], + "spans": [ + { + "bbox": [ + 314, + 605, + 376, + 616 + ], + "type": "text", + "content": "4.1. Datasets" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 313, + 617, + 555, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 617, + 555, + 714 + ], + "spans": [ + { + "bbox": [ + 313, + 617, + 555, + 714 + ], + "type": "text", + "content": "CelebA [26] contains over 200,000 facial images with 40 binary attribute annotations. Following the setting of the previous works [35, 58, 59], we set Male and Young to protected attributes and selected Smiling and Young as the target attribute which has the highest correlation with the protected attributes. For each experiment, we randomly sample a biased subset as a training dataset with a size of 20,000 images, where the majority group and minority group have" + } + ] + } + ], + "index": 23 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "text", + "content": "28752" + } + ] + } + ], + "index": 24 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 61, + 92, + 301, + 183 + ], + "blocks": [ + { + "bbox": [ + 61, + 71, + 301, + 92 + ], + "lines": [ + { + "bbox": [ + 61, + 71, + 301, + 92 + ], + "spans": [ + { + "bbox": [ + 61, + 71, + 301, + 92 + ], + "type": "text", + "content": "Table 1. Comparisons to other methods on the CelebA dataset under settings of varied target and protected attributes." 
+ } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 61, + 92, + 301, + 183 + ], + "lines": [ + { + "bbox": [ + 61, + 92, + 301, + 183 + ], + "spans": [ + { + "bbox": [ + 61, + 92, + 301, + 183 + ], + "type": "table", + "html": "
MethodsT: Smiling, P: MaleT: Smiling, P: YoungT: Young, P: Male
ACC (↑) WST (↑) EO (↓)ACC (↑) WST (↑) EO (↓)ACC (↑) WST (↑) EO (↓)
ERM [14]88.2070.1025.3088.3071.5015.6077.7042.0052.00
CVaR DRO [23]87.3074.0022.8087.0076.1013.9075.4042.3048.80
EIIL [7]87.9075.6019.7087.9072.5013.3077.5045.6039.20
LfF [32]87.1077.5017.0085.3072.9014.3077.4044.2043.60
JTT [24]88.0074.8019.4087.6073.3014.2076.3043.6047.70
MAPLE [63]88.1072.0019.6088.1073.6013.6076.3046.2043.50
DiGA [59]88.4081.907.4089.1078.509.5080.0051.3033.30
AIM-Fair (Ours)89.0284.206.0790.2187.785.4478.1954.8928.18
", + "image_path": "a4c911b65802cae282ffecebb49d791e994d9f825fef5ab524763d504778a6b1.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 188, + 296, + 320 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 188, + 296, + 320 + ], + "spans": [ + { + "bbox": [ + 55, + 188, + 296, + 320 + ], + "type": "text", + "content": "90% and 10% of the sample size respectively. We report performance on the whole original test dataset. UTK Face [60] consists of over 20,000 facial images with three kinds of annotations: gender, age, and ethnicity. Following the experimental setup in the previous works [17, 18, 59], we define a binary spurious attribute \"Ethnicity\" based on whether the facial image is white or not. The task is to predict the Gender. We randomly sample a biased subset of 10,000 images, with the same bias degree as CelebA. We also construct a balanced and unbiased test dataset consisting of 3,200 images." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 325, + 156, + 336 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 325, + 156, + 336 + ], + "spans": [ + { + "bbox": [ + 55, + 325, + 156, + 336 + ], + "type": "text", + "content": "4.2. Fairness Metrics" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 338, + 296, + 386 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 338, + 296, + 386 + ], + "spans": [ + { + "bbox": [ + 55, + 338, + 296, + 386 + ], + "type": "text", + "content": "Our goal is to learn a fair and accurate model. For model utility, we present the overall accuracy (ACC) and group accuracy. For fairness, follow previous work [35, 58, 59] we use equalized odds (EO) [13], defined as:" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 62, + 387, + 294, + 418 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 387, + 294, + 418 + ], + "spans": [ + { + "bbox": [ + 62, + 387, + 294, + 418 + ], + "type": "interline_equation", + "content": "\\left. \\overline {{\\sum}} _ {\\forall y, \\hat {y}} \\left| P _ {s _ {0}} (\\hat {Y} = \\hat {y} \\mid Y = y) - P _ {s _ {1}} (\\hat {Y} = \\hat {y} \\mid Y = y) \\right|, \\right. \\tag {9}", + "image_path": "38be02ddcc9fc755c9b26b6e5391fa921f94965bc3dd269c1c03f43be8da1329.jpg" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 418, + 295, + 453 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 418, + 295, + 453 + ], + "spans": [ + { + "bbox": [ + 55, + 418, + 295, + 453 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 55, + 418, + 295, + 453 + ], + "type": "inline_equation", + "content": "\\overline{\\sum}" + }, + { + "bbox": [ + 55, + 418, + 295, + 453 + ], + "type": "text", + "content": " is the averaged sum, " + }, + { + "bbox": [ + 55, + 418, + 295, + 453 + ], + "type": "inline_equation", + "content": "Y" + }, + { + "bbox": [ + 55, + 418, + 295, + 453 + ], + "type": "text", + "content": " target label, " + }, + { + "bbox": [ + 55, + 418, + 295, + 453 + ], + "type": "inline_equation", + "content": "\\hat{Y}" + }, + { + "bbox": [ + 55, + 418, + 295, + 453 + ], + "type": "text", + "content": " classifier predictive label, and " + }, + { + "bbox": [ + 55, + 418, + 295, + 453 + ], + "type": "inline_equation", + "content": "s_0,s_1\\in S" + }, + { + "bbox": [ + 55, + 418, + 295, + 453 + ], + "type": "text", + "content": " protected attributes. 
The worst-group accuracy (WST) is defined as:" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 113, + 455, + 294, + 473 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 113, + 455, + 294, + 473 + ], + "spans": [ + { + "bbox": [ + 113, + 455, + 294, + 473 + ], + "type": "interline_equation", + "content": "\\min _ {\\forall y \\in \\mathcal {Y}, \\forall s \\in S} P _ {s} (\\hat {Y} = y \\mid Y = y) \\tag {10}", + "image_path": "aaf44213182875ba755932cd4666c97ea0e432b4c38ea00b4e0d1ecc01573f4b.jpg" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 472, + 266, + 483 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 472, + 266, + 483 + ], + "spans": [ + { + "bbox": [ + 55, + 472, + 266, + 483 + ], + "type": "text", + "content": "and the group standard deviation (STD) [52] is defined as:" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 113, + 483, + 294, + 502 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 113, + 483, + 294, + 502 + ], + "spans": [ + { + "bbox": [ + 113, + 483, + 294, + 502 + ], + "type": "interline_equation", + "content": "\\underset {\\forall y \\in \\mathcal {Y}, \\forall s \\in S} {\\operatorname {std}} P _ {s} (\\hat {Y} = y \\mid Y = y) \\tag {11}", + "image_path": "9c7afbce4f67a13d40d383bbd4822c5d48208c140eba7b006bdab7b017149a29.jpg" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 55, + 509, + 188, + 521 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 509, + 188, + 521 + ], + "spans": [ + { + "bbox": [ + 55, + 509, + 188, + 521 + ], + "type": "text", + "content": "4.3. Implementation Details" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 55, + 522, + 295, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 522, + 295, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 522, + 295, + 713 + ], + "type": "text", + "content": "Following previous work [59], we use ResNet-18 [14] as the backbone for all experiments. We use Stable Diffusion v2.1 as our latent diffusion model for generating images. We generated 10,000 images for each group and then randomly sampled " + }, + { + "bbox": [ + 55, + 522, + 295, + 713 + ], + "type": "inline_equation", + "content": "N_{S}" + }, + { + "bbox": [ + 55, + 522, + 295, + 713 + ], + "type": "text", + "content": ". To be consistent with the real data image number " + }, + { + "bbox": [ + 55, + 522, + 295, + 713 + ], + "type": "inline_equation", + "content": "N_{R}" + }, + { + "bbox": [ + 55, + 522, + 295, + 713 + ], + "type": "text", + "content": ", we set " + }, + { + "bbox": [ + 55, + 522, + 295, + 713 + ], + "type": "inline_equation", + "content": "N_{S} = N_{R} / G" + }, + { + "bbox": [ + 55, + 522, + 295, + 713 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 55, + 522, + 295, + 713 + ], + "type": "inline_equation", + "content": "G" + }, + { + "bbox": [ + 55, + 522, + 295, + 713 + ], + "type": "text", + "content": " is the group number. During the pre-training phase, we set the batch size to 128 and trained the model for 15 epochs. The initial learning rate was set to 0.01, which was then reduced by a factor of 0.01 at the 10th epoch. During the fine-tuning phase, we set the batch size to 128 and trained the model for 10 epochs. The learning rate is searched over \\{0.4, 0.5, 0.6\\}. All models are optimized by the SGD optimizer and trained on a Tesla A100 GPU using the open-source PyTorch platform. 
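The two remaining metrics, the worst-group accuracy of Eq. (10) and the group standard deviation of Eq. (11), might be sketched under the same assumptions (NumPy arrays, binary protected attribute):

```python
# A sketch of Eqs. (10)-(11): per-group accuracies over all (target,
# protected) combinations, then their minimum (WST) and standard deviation (STD).
import numpy as np

def group_accuracies(y_hat, y, s):
    return [np.mean(y_hat[(y == yv) & (s == sv)] == yv)
            for yv in np.unique(y) for sv in np.unique(s)]

def wst(y_hat, y, s):
    return min(group_accuracies(y_hat, y, s))            # Eq. (10)

def group_std(y_hat, y, s):
    return float(np.std(group_accuracies(y_hat, y, s)))  # Eq. (11)
```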
To obtain more stable and reliable results, we conducted all experiments 10 times with different" + } + ] + } + ], + "index": 11 + }, + { + "type": "table", + "bbox": [ + 309, + 92, + 548, + 183 + ], + "blocks": [ + { + "bbox": [ + 307, + 71, + 548, + 92 + ], + "lines": [ + { + "bbox": [ + 307, + 71, + 548, + 92 + ], + "spans": [ + { + "bbox": [ + 307, + 71, + 548, + 92 + ], + "type": "text", + "content": "Table 2. Comparisons to other methods on the CelebA dataset (T=Smiling, P=Male) under settings of training set sizes." + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 309, + 92, + 548, + 183 + ], + "lines": [ + { + "bbox": [ + 309, + 92, + 548, + 183 + ], + "spans": [ + { + "bbox": [ + 309, + 92, + 548, + 183 + ], + "type": "table", + "html": "
MethodsSamples Ratio = 50%Samples Ratio = 25%Samples Ratio = 10%
ACC (↑)WST (↑)EO (↓)ACC (↑)WST (↑)EO (↓)ACC (↑)WST (↑)EO (↓)
ERM [14]87.5067.8026.1087.1065.9027.7086.9062.8028.90
CVaR DRO [23]86.6072.9022.1086.6072.3022.4085.5069.1027.30
EIIL [7]86.2071.3022.5085.9069.6025.4086.8064.2026.70
LfF [32]86.9075.5019.4085.9072.1023.6085.5066.1027.70
JTT [24]87.3072.9020.1086.7071.1020.6086.8067.1023.10
MAPLE [63]87.4073.7023.8087.0072.7024.2085.6069.2027.10
DiGA [59]88.4081.107.8088.4078.308.0088.3078.808.40
AIM-Fair (Ours)88.8583.996.2888.8982.547.5987.9081.738.16
", + "image_path": "e6bbc9f4924f25570d8c475cab0f52d0f0d3f79418b99345f6170c9245ea587e.jpg" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "table_body" + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 188, + 548, + 200 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 188, + 548, + 200 + ], + "spans": [ + { + "bbox": [ + 313, + 188, + 548, + 200 + ], + "type": "text", + "content": "random seeds and then used the average as the final result." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 313, + 204, + 487, + 216 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 204, + 487, + 216 + ], + "spans": [ + { + "bbox": [ + 313, + 204, + 487, + 216 + ], + "type": "text", + "content": "4.4. Comparisons to State of the Art" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 313, + 217, + 553, + 324 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 217, + 553, + 324 + ], + "spans": [ + { + "bbox": [ + 313, + 217, + 553, + 324 + ], + "type": "text", + "content": "We compared our method against seven contemporary techniques, including the baseline ERM [14] method and six debiasing models all of which do not require protected attribute labels: Two regularization-based methods (CVaR DRO [23] and EIIL [7]); three reweighting-based methods (LfF [32], JTT [24], and MAPLE [63]); and generativemodel-based method (DiGA [59]). Our comparison covers two settings: different target and protected attributes and varying numbers of target labels." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 313, + 325, + 554, + 564 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 325, + 554, + 564 + ], + "spans": [ + { + "bbox": [ + 313, + 325, + 554, + 564 + ], + "type": "text", + "content": "Tab.1 shows that ERM achieves good accuracy but suffers from significant unfairness. Other debiasing methods, while improving fairness to some extent, generally sacrifice accuracy. DiGA [59] improves fairness while maintaining accuracy by using a generative model to edit real data and create more balanced training data. In contrast, our method generates images from scratch using a text-driven latent diffusion model. It is evident that our method outperforms the best of the existing models DiGA [59] in both fairness and accuracy in all categories, except \"Young / Male\" where we come close second to DiGA. We outperform all other methods consistently. We also conducted comparisons with other methods under smaller training set sizes, with the subsampling ratio of " + }, + { + "bbox": [ + 313, + 325, + 554, + 564 + ], + "type": "inline_equation", + "content": "50\\%" + }, + { + "bbox": [ + 313, + 325, + 554, + 564 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 313, + 325, + 554, + 564 + ], + "type": "inline_equation", + "content": "25\\%" + }, + { + "bbox": [ + 313, + 325, + 554, + 564 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 313, + 325, + 554, + 564 + ], + "type": "inline_equation", + "content": "10\\%" + }, + { + "bbox": [ + 313, + 325, + 554, + 564 + ], + "type": "text", + "content": ". Tab. 2 shows that our method outperforms all others consistently, except on ACC score for Lable Ratio at " + }, + { + "bbox": [ + 313, + 325, + 554, + 564 + ], + "type": "inline_equation", + "content": "10\\%" + }, + { + "bbox": [ + 313, + 325, + 554, + 564 + ], + "type": "text", + "content": " where we come close second to DiGA. 
Critically, our method maintains robust model utility and fairness even with varying amounts of real training data, with these improvements attributed to the selective updating with balanced synthetic data." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 313, + 568, + 421, + 580 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 568, + 421, + 580 + ], + "spans": [ + { + "bbox": [ + 313, + 568, + 421, + 580 + ], + "type": "text", + "content": "4.5. Ablation Analysis" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 313, + 581, + 554, + 700 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 581, + 554, + 700 + ], + "spans": [ + { + "bbox": [ + 313, + 581, + 554, + 700 + ], + "type": "text", + "content": "Comparisons of Different Training Strategies. To evaluate the effectiveness of the proposed selective fine-tuning, we first compare our method with several other strategies trained on different types of data. These include the baseline, which trains the model using conventional ERM on real data, training the model solely on synthetic data, supplementing real data with synthetic data [47], and balancing the real data [9, 56]. Furthermore, we compare common fine-tuning methods, such as linear probing and full fine-tuning." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 326, + 701, + 553, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 326, + 701, + 553, + 713 + ], + "spans": [ + { + "bbox": [ + 326, + 701, + 553, + 713 + ], + "type": "text", + "content": "The results in Tab. 3 indicate that both data supplemen"
MethodsTargetCelebAUTKFace
ProtectedT: Smiling; P: MaleProtectedT: Smiling; P: YoungProtectedT: Female; P: White
P=0P=1ACC (↑) WST (↑) EO (↓) STD (↓)P=0P=1ACC (↑) WST (↑) EO (↓) STD (↓)P=0P=1ACC (↑) WST (↑) EO (↓) STD (↓)
Baseline [14]T=083.5798.5089.2371.5223.8410.6578.7296.0990.1978.7217.376.9279.5896.1688.8679.5816.597.26
T=195.3671.5294.3786.4295.7583.96
Trained on Synthetic DataT=086.8289.4085.9878.527.874.0785.4585.7685.1282.423.501.3482.7188.1285.4482.715.501.96
T=186.3978.5282.4285.2784.9086.04
Data Supplementation [47]T=084.3998.3989.7773.6721.969.7678.0595.3690.3578.0517.317.0282.1293.9790.2582.1211.855.24
T=195.4073.6794.9087.6295.6289.28
Data Repairing [9, 56]T=082.8298.1089.7275.0521.189.5079.5095.1990.6879.5015.696.3081.1594.2289.4281.1513.375.89
T=196.0475.0594.5188.3595.7186.61
Linear ProbT=084.5596.2289.6977.4117.517.7185.2093.8190.1485.208.613.1881.0691.5188.4081.0610.595.14
T=194.8277.4190.4387.8894.5986.42
Fully Fine-TuningT=088.8191.6288.7782.746.833.3185.9890.0687.8585.984.371.6883.0088.0487.9083.005.402.99
T=189.5582.7486.0286.8190.7489.83
AIM-Fair (Ours)T=088.1691.4089.0284.206.072.7488.1993.6490.2187.785.442.3484.2689.0888.3084.264.812.41
T=190.2584.2088.9887.7890.6289.25
", + "image_path": "9c9dca4303ad606ad67d01988e4b3dbbc29c8828eb47a1a00d1270697745b2bc.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "type": "table", + "bbox": [ + 67, + 258, + 541, + 386 + ], + "blocks": [ + { + "bbox": [ + 133, + 247, + 476, + 258 + ], + "lines": [ + { + "bbox": [ + 133, + 247, + 476, + 258 + ], + "spans": [ + { + "bbox": [ + 133, + 247, + 476, + 258 + ], + "type": "text", + "content": "Table 4. Comparisons of varied partial fine-tuning strategies on CelebA and UTKFace datasets." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 67, + 258, + 541, + 386 + ], + "lines": [ + { + "bbox": [ + 67, + 258, + 541, + 386 + ], + "spans": [ + { + "bbox": [ + 67, + 258, + 541, + 386 + ], + "type": "table", + "html": "
MethodsTargetCelebAUTKFace
ProtectedT: Smiling; P: MaleProtectedT: Smiling; P: YoungProtectedT: Female; P: White
P=0P=1ACC (↑)WST (↑)EO (↓)STD (↓)P=0P=1ACC (↑)WST (↑)EO (↓)STD (↓)P=0P=1ACC (↑)WST (↑)EO (↓)STD (↓)
Best Random SelectionT=087.9392.8189.3382.179.144.0985.5291.5388.9585.526.022.1683.5088.8587.9283.505.382.61
T=191.3182.1788.2587.6390.3089.01
Best Sub-Tuning [19] (By Updating One Block)T=087.5094.7590.0480.6412.455.5287.2894.6390.3387.287.343.0083.0589.5689.0183.056.513.56
T=193.0980.6489.1587.3292.0791.34
Best Sub-Tuning [19] (By Freezing One Block)T=088.6991.0989.0683.487.003.0086.5591.8988.6586.555.342.2583.5888.3487.4283.584.762.23
T=190.4883.4887.0886.5588.9688.81
Selective Fine-Tuning (Cosine Similarity)T=086.0791.7788.8583.338.403.6187.4392.9690.2687.435.532.0583.7288.8288.2183.725.102.64
T=191.5383.3389.5288.6590.1490.15
AIM-Fair (Ours) (Absolute Difference)T=088.1691.4089.0284.206.072.7488.1993.6490.2187.785.442.3484.2689.0888.3084.264.812.41
T=190.2584.2088.9887.7890.6289.25
", + "image_path": "073a2617a287bebd5ae1956cdc488467fd052d1db2fb9330563cf1bfe102ea34.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 391, + 295, + 535 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 391, + 295, + 535 + ], + "spans": [ + { + "bbox": [ + 55, + 391, + 295, + 535 + ], + "type": "text", + "content": "tation and data repairing do not significantly improve fairness. Data supplementation, which combines biased real data with balanced synthetic data for co-training, makes it difficult for the model to learn fairness properties from the synthetic data because of the domain gap. While data repairing can create balanced training data, the domain shift between real and synthetic data limits the effectiveness of fair learning. Common transfer learning methods, such as linear probing and full fine-tuning, also face challenges due to domain shifts. Specifically, linear probing is affected by discrepancies in feature representations, while full fine-tuning improves fairness at the cost of reduced model utility." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 540, + 295, + 647 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 540, + 295, + 647 + ], + "spans": [ + { + "bbox": [ + 55, + 540, + 295, + 647 + ], + "type": "text", + "content": "Comparisons of Different Partial Fine-Tuning. We also compare our method with other partial fine-tuning approaches, including random selection and sub-tuning [19]. For random selection, we set the updating ratios at " + }, + { + "bbox": [ + 55, + 540, + 295, + 647 + ], + "type": "inline_equation", + "content": "40\\%" + }, + { + "bbox": [ + 55, + 540, + 295, + 647 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 55, + 540, + 295, + 647 + ], + "type": "inline_equation", + "content": "55\\%" + }, + { + "bbox": [ + 55, + 540, + 295, + 647 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 55, + 540, + 295, + 647 + ], + "type": "inline_equation", + "content": "70\\%" + }, + { + "bbox": [ + 55, + 540, + 295, + 647 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 55, + 540, + 295, + 647 + ], + "type": "inline_equation", + "content": "85\\%" + }, + { + "bbox": [ + 55, + 540, + 295, + 647 + ], + "type": "text", + "content": ", and select the best result as the final one. For sub-tuning, we perform block-wise fine-tuning and block-wise freezing (i.e., updating one block or freezing one block while updating the rest), also selecting the block with the best result." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 653, + 295, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 653, + 295, + 715 + ], + "spans": [ + { + "bbox": [ + 55, + 653, + 295, + 715 + ], + "type": "text", + "content": "Tab. 4 shows that fine-tuning only one block consistently yields the best accuracy, but at the cost of fairness, while freezing one block tends to improve fairness but often sacrifices model utility. This demonstrates that coarse-grained approaches, such as block-wise updating or freezing, strut" + } + ] + } + ], + "index": 6 + }, + { + "type": "table", + "bbox": [ + 315, + 412, + 553, + 505 + ], + "blocks": [ + { + "bbox": [ + 313, + 390, + 553, + 411 + ], + "lines": [ + { + "bbox": [ + 313, + 390, + 553, + 411 + ], + "spans": [ + { + "bbox": [ + 313, + 390, + 553, + 411 + ], + "type": "text", + "content": "Table 5. 
Classification results on CelebA dataset (T=Smiling, P=Male) under settings of different prompt types and numbers." + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 315, + 412, + 553, + 505 + ], + "lines": [ + { + "bbox": [ + 315, + 412, + 553, + 505 + ], + "spans": [ + { + "bbox": [ + 315, + 412, + 553, + 505 + ], + "type": "table", + "html": "
Prompt Types (Number)TargetProtectedT: Smiling; P: MaleProtectedT: Smiling; P: Young
P=0P=1ACC(↑)WST(↑)EO(↓)STD(↓)P=0P=1ACC(↑)WST(↑)EO(↓)STD(↓)
Plain Prompt (1)t=065.1889.4471.6543.2834.2017.0783.1077.7272.7559.539.308.96
t=177.4743.2859.5368.83
Contextual Prompts (25)t=077.1289.2581.8569.7316.37.6576.7267.9079.8067.908.828.75
t=185.9969.7385.1691.09
Contextual Prompts (50)t=089.4789.8082.6672.225.077.6687.7290.4385.8579.674.364.20
t=177.2672.2279.6782.67
Contextual + Head Poses (50)t=086.8289.4085.9878.527.874.0785.4585.7685.1282.423.501.34
t=186.3978.5282.4285.27
", + "image_path": "94dcd9d29213e01897686134b27555b736c43439bf341d9022e2a3f7ae760a76.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "table_body" + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 510, + 555, + 582 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 510, + 555, + 582 + ], + "spans": [ + { + "bbox": [ + 313, + 510, + 555, + 582 + ], + "type": "text", + "content": "gle to achieve an optimal balance between model utility and fairness. In contrast, our method performs fine-grained, parameter-wise updating, which allows the model to enhance fairness while maintaining utility. As a result, our method achieves the best worst-group accuracy while preserving high overall accuracy." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 582, + 556, + 689 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 582, + 556, + 689 + ], + "spans": [ + { + "bbox": [ + 313, + 582, + 556, + 689 + ], + "type": "text", + "content": "Additionally, our method uses gradient differences to identify parameter sensitivity to varying data distributions. We also compared absolute gradient differences with the cosine similarity between gradients. The results in Tab. 4 show that using absolute gradient differences yields better results. The possible reason is that the cosine similarity only provides the direction of gradient disparity, whereas the absolute difference directly captures the magnitude of the gradient disparity." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 689, + 553, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 689, + 553, + 714 + ], + "spans": [ + { + "bbox": [ + 313, + 689, + 553, + 714 + ], + "type": "text", + "content": "Comparisons of Different Prompts. To evaluate the performance of the varied prompts used for generating images," + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 749, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 749, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 749, + 317, + 757 + ], + "type": "text", + "content": "28754" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 60, + 70, + 289, + 218 + ], + "blocks": [ + { + "bbox": [ + 60, + 70, + 289, + 218 + ], + "lines": [ + { + "bbox": [ + 60, + 70, + 289, + 218 + ], + "spans": [ + { + "bbox": [ + 60, + 70, + 289, + 218 + ], + "type": "image", + "image_path": "e44f458f7551955a43c17778c446f70cb11207d0428beb2a67bac569ce9b1397.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 55, + 221, + 295, + 266 + ], + "lines": [ + { + "bbox": [ + 55, + 221, + 295, + 266 + ], + "spans": [ + { + "bbox": [ + 55, + 221, + 295, + 266 + ], + "type": "text", + "content": "Figure 5. Classification results on the CelebA dataset (T=Smiling, P=Male) with different top-k values. The top-k values {40, 45, 50, 55, 60} correspond approximately to " + }, + { + "bbox": [ + 55, + 221, + 295, + 266 + ], + "type": "inline_equation", + "content": "\\{64\\%, 72\\%, 80\\%, 88\\%, 96\\% \\}" + }, + { + "bbox": [ + 55, + 221, + 295, + 266 + ], + "type": "text", + "content": " of the total model parameters." 
+ } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 54, + 269, + 295, + 422 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 269, + 295, + 422 + ], + "spans": [ + { + "bbox": [ + 54, + 269, + 295, + 422 + ], + "type": "text", + "content": "we trained the model solely on the balanced synthetic data generated with corresponding prompts and tested it on the real test set. Tab. 5 shows the results of different prompt types and quantities, indicating that images generated with contextual prompts lead to significantly better performance in both model accuracy and fairness. We believe this improvement is due to the increased diversity and details in the synthetic images generated from contextual prompts. Moreover, as the number of contextual prompts increases, test performance improves as well. Specifically, incorporating head pose variation into the prompt further enhances both accuracy and fairness in the generated images, as shown by the results in the last row of Tab. 5." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 426, + 295, + 556 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 426, + 295, + 556 + ], + "spans": [ + { + "bbox": [ + 55, + 426, + 295, + 556 + ], + "type": "text", + "content": "Evaluations of Varied Top-k Values. In our method, to identify the intersection parameters from the two gradient difference rankings, we use the top-k selection to ensure that the chosen parameters are more sensitive to fairness and less affected by domain shift. The results for different top-k values are shown in Fig. 5. When k is set to 55, our method achieves the best fairness performance while retaining overall accuracy. Intuitively, a lower k results in fewer updates, leading to better model utility but worse fairness, while a high k involves more parameter updates, which can negatively impact both accuracy and fairness." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 558, + 295, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 558, + 295, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 558, + 295, + 713 + ], + "type": "text", + "content": "Evaluations of Varied Bias Ratio for Synthetic Data Construction. To evaluate the model's sensitivity to domain shift and group disparity, we construct three distinct datasets with different distributions. In our method, the bias ratio for the biased synthetic data distribution is treated as a hyperparameter, referenced by the error set of real training examples, as suggested in JTT [24]. To assess the impact of varying bias ratios in synthetic data, we explore different ratios. As shown in Tab. 6, a disparity between the synthetic data bias ratio and the real data bias ratio leads to worse performance in both accuracy and fairness. However, despite these differences in bias ratios, our method still outperforms the full fine-tuning approach. This demonstrates that our" + } + ] + } + ], + "index": 4 + }, + { + "type": "table", + "bbox": [ + 326, + 93, + 541, + 225 + ], + "blocks": [ + { + "bbox": [ + 313, + 71, + 553, + 92 + ], + "lines": [ + { + "bbox": [ + 313, + 71, + 553, + 92 + ], + "spans": [ + { + "bbox": [ + 313, + 71, + 553, + 92 + ], + "type": "text", + "content": "Table 6. 
+ } + ] + } + ], + "index": 5, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 326, + 93, + 541, + 225 + ], + "lines": [ + { + "bbox": [ + 326, + 93, + 541, + 225 + ], + "spans": [ + { + "bbox": [ + 326, + 93, + 541, + 225 + ], + "type": "table", + "html": "
Bias RatioTargetProtectedACC (↑)WST (↑)EOD (↓)STD (↓)
P=0P=1
Fully Fine-Tuningt=088.8191.6288.7782.746.833.31
t=189.5582.74
4:6t=085.7291.7189.2383.908.783.77
t=192.6983.90
3:7t=087.0591.6288.9083.527.723.26
t=190.9383.52
2:8t=088.0191.7688.9783.167.303.28
t=190.4683.16
1:9t=088.1691.4089.0284.206.072.74
t=190.2584.20
", + "image_path": "e1e4533cbe7ffde0a115e197e0136d9a55f186d8fe5e9778e2696a1003b5de19.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "table_body" + } + ], + "index": 6 + }, + { + "type": "table", + "bbox": [ + 326, + 253, + 541, + 368 + ], + "blocks": [ + { + "bbox": [ + 313, + 231, + 553, + 252 + ], + "lines": [ + { + "bbox": [ + 313, + 231, + 553, + 252 + ], + "spans": [ + { + "bbox": [ + 313, + 231, + 553, + 252 + ], + "type": "text", + "content": "Table 7. Classification results on CelebA dataset (T=Smiling, P=Male) with different number of synthetic data." + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 326, + 253, + 541, + 368 + ], + "lines": [ + { + "bbox": [ + 326, + 253, + 541, + 368 + ], + "spans": [ + { + "bbox": [ + 326, + 253, + 541, + 368 + ], + "type": "table", + "html": "
Ratio To Real DataTargetProtectedACC (↑)WST (↑)EOD (↓)STD (↓)
P=0P=1
0.5t=088.6593.9689.5180.8010.314.90
t=191.1080.80
1t=088.1691.4089.0284.206.072.74
t=190.2584.20
1.5t=089.1892.0588.3481.257.133.98
t=188.3881.25
2t=089.0191.2688.8183.366.082.96
t=191.2689.44
", + "image_path": "e8f91f1805a7dcf9dee6b7de8772ad303ca911c65ad743972a143b1d89297676.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "table_body" + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 372, + 553, + 394 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 372, + 553, + 394 + ], + "spans": [ + { + "bbox": [ + 313, + 372, + 553, + 394 + ], + "type": "text", + "content": "method is able to identify the model's sensitivity under the bias ratios different to real data." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 396, + 555, + 528 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 396, + 555, + 528 + ], + "spans": [ + { + "bbox": [ + 313, + 396, + 555, + 528 + ], + "type": "text", + "content": "**Evaluations of Different Number of Synthetic Data.** We also evaluate fine-tuning with varying amounts of balanced synthetic data. As shown in Tab. 7, we use the real training data count as a reference and set different ratios to determine the number of synthetic data. The results indicate that when the amount of synthetic data matches that of real data, the model achieves the best fairness. Additionally, using half the amount of real data results in the best accuracy. We believe that using too much synthetic data during fine-tuning can lead to overfitting, while using too little data may fail to adequately debias the model." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 538, + 388, + 550 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 538, + 388, + 550 + ], + "spans": [ + { + "bbox": [ + 313, + 538, + 388, + 550 + ], + "type": "text", + "content": "5. Conclusion" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 558, + 555, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 558, + 555, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 558, + 555, + 713 + ], + "type": "text", + "content": "In this work, we proposed a method to mitigate bias in machine learning models using synthetic data generated by a text-to-image process. By designing contextual synthetic data generation and selective fine-tuning, we enhance model fairness without requiring demographic group annotations. Our model updates selectively fairness-sensitive parameters, optimizing simultaneously model fairness and utility scores. Empirical results demonstrate that our method outperforms existing techniques, improving fairness while maintaining model utility performance. This work highlights the potential of synthetic data for creating fairer AI systems, offering a promising direction for future research in bias mitigation." + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 317, + 757 + ], + "type": "text", + "content": "28755" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 165, + 85 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 72, + 165, + 85 + ], + "spans": [ + { + "bbox": [ + 56, + 72, + 165, + 85 + ], + "type": "text", + "content": "6. 
Acknowledgments" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 91, + 296, + 163 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 91, + 296, + 163 + ], + "spans": [ + { + "bbox": [ + 55, + 91, + 296, + 163 + ], + "type": "text", + "content": "This research utilised Queen Mary's Apocrita HPC facility, supported by QMUL Research-IT. Zengqun Zhao is funded by Queen Mary Principal's PhD Studentships. Zengqun Zhao wants to thank Yining Wang and James Oldfield for the valuable comments and help, and Yiming Lin and Jie Shen for the valuable discussions." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 172, + 115, + 185 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 172, + 115, + 185 + ], + "spans": [ + { + "bbox": [ + 56, + 172, + 115, + 185 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 57, + 193, + 296, + 714 + ], + "type": "list", + "angle": 0, + "index": 16, + "blocks": [ + { + "bbox": [ + 61, + 193, + 296, + 225 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 193, + 296, + 225 + ], + "spans": [ + { + "bbox": [ + 61, + 193, + 296, + 225 + ], + "type": "text", + "content": "[1] Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization. arXiv preprint arXiv:1907.02893, 2019. 1, 3" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 61, + 226, + 296, + 258 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 226, + 296, + 258 + ], + "spans": [ + { + "bbox": [ + 61, + 226, + 296, + 258 + ], + "type": "text", + "content": "[2] Carolyn Ashurst and Adrian Weller. Fairness without demographic data: A survey of approaches. In EAAMO, pages 1-12, 2023. 2" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 61, + 259, + 295, + 313 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 259, + 295, + 313 + ], + "spans": [ + { + "bbox": [ + 61, + 259, + 295, + 313 + ], + "type": "text", + "content": "[3] Junsong Chen, YU Jincheng, GE Chongjian, Lewei Yao, Enze Xie, Zhongdao Wang, James Kwok, Ping Luo, Huchuan Lu, and Zhenguo Li. Pixart-α: Fast training of diffusion transformer for photorealistic text-to-image synthesis. In ICLR, 2023. 2, 4" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 62, + 315, + 295, + 347 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 315, + 295, + 347 + ], + "spans": [ + { + "bbox": [ + 62, + 315, + 295, + 347 + ], + "type": "text", + "content": "[4] Kristy Choi, Aditya Grover, Trisha Singh, Rui Shu, and Stefano Ermon. Fair generative modeling via weak supervision. In ICML, pages 1887-1898, 2020. 2" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 62, + 348, + 295, + 380 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 348, + 295, + 380 + ], + "spans": [ + { + "bbox": [ + 62, + 348, + 295, + 380 + ], + "type": "text", + "content": "[5] Sam Corbett-Davies, Johann D Gaebler, Hamed Nilforoshan, Ravi Shroff, and Sharad Goel. The measure and mismeasure of fairness. JMLR, 24(1):14730-14846, 2023. 
3" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 62, + 381, + 296, + 425 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 381, + 296, + 425 + ], + "spans": [ + { + "bbox": [ + 62, + 381, + 296, + 425 + ], + "type": "text", + "content": "[6] Elliot Creager, David Madras, Jorn-Henrik Jacobsen, Marissa Weis, Kevin Swersky, Toniann Pitassi, and Richard Zemel. Flexibly fair representation learning by disentangle-ment. In ICML, pages 1436-1445, 2019. 2" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 62, + 426, + 294, + 458 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 426, + 294, + 458 + ], + "spans": [ + { + "bbox": [ + 62, + 426, + 294, + 458 + ], + "type": "text", + "content": "[7] Elliot Creager, Jorn-Henrik Jacobsen, and Richard Zemel. Environment inference for invariant learning. In ICML, pages 2189-2200, 2021. 6" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 62, + 459, + 294, + 502 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 459, + 294, + 502 + ], + "spans": [ + { + "bbox": [ + 62, + 459, + 294, + 502 + ], + "type": "text", + "content": "[8] Saloni Dash, Vineeth N Balasubramanian, and Amit Sharma. Evaluating and mitigating bias in image classifiers: A causal perspective using counterfactuals. In WACV, pages 915-924, 2022. 2, 3" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 62, + 503, + 294, + 546 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 503, + 294, + 546 + ], + "spans": [ + { + "bbox": [ + 62, + 503, + 294, + 546 + ], + "type": "text", + "content": "[9] Moreno D'Inca, Christos Tzelepis, Ioannis Patras, and Nicu Sebe. Improving fairness using vision-language driven image augmentation. In WACV, pages 4695-4704, 2024. 1, 2, 3, 6, 7" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 57, + 548, + 295, + 590 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 548, + 295, + 590 + ], + "spans": [ + { + "bbox": [ + 57, + 548, + 295, + 590 + ], + "type": "text", + "content": "[10] Michele Donini, Luca Oneto, Shai Ben-David, John S Shawe-Taylor, and Massimiliano Pontil. Empirical risk minimization under fairness constraints. NeurIPS, 31, 2018. 1, 3" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 57, + 592, + 295, + 646 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 592, + 295, + 646 + ], + "spans": [ + { + "bbox": [ + 57, + 592, + 295, + 646 + ], + "type": "text", + "content": "[11] Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024. 2" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 57, + 647, + 295, + 689 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 647, + 295, + 689 + ], + "spans": [ + { + "bbox": [ + 57, + 647, + 295, + 689 + ], + "type": "text", + "content": "[12] John C Duchi and Hongseok Namkoong. Learning models with uniform performance via distributionally robust optimization. The Annals of Statistics, 49(3):1378-1406, 2021. 1, 3" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 57, + 691, + 295, + 714 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 691, + 295, + 714 + ], + "spans": [ + { + "bbox": [ + 57, + 691, + 295, + 714 + ], + "type": "text", + "content": "[13] Moritz Hardt, Eric Price, and Nati Srebro. 
Equality of opportunity in supervised learning. NeurIPS, 29, 2016. 3, 6" + } + ] + } + ], + "index": 15 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 73, + 553, + 713 + ], + "type": "list", + "angle": 0, + "index": 33, + "blocks": [ + { + "bbox": [ + 316, + 73, + 553, + 106 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 73, + 553, + 106 + ], + "spans": [ + { + "bbox": [ + 316, + 73, + 553, + 106 + ], + "type": "text", + "content": "[14] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, pages 770-778, 2016. 6, 7" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 316, + 108, + 553, + 140 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 108, + 553, + 140 + ], + "spans": [ + { + "bbox": [ + 316, + 108, + 553, + 140 + ], + "type": "text", + "content": "[15] Jeremy Howard and Sebastian Ruder. Universal language model fine-tuning for text classification. In ACL, pages 328-339, 2018. 4" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 317, + 142, + 553, + 186 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 142, + 553, + 186 + ], + "spans": [ + { + "bbox": [ + 317, + 142, + 553, + 186 + ], + "type": "text", + "content": "[16] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. Lora: Low-rank adaptation of large language models. In ICLR, pages 1-13, 2022. 13" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 317, + 188, + 553, + 230 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 188, + 553, + 230 + ], + "spans": [ + { + "bbox": [ + 317, + 188, + 553, + 230 + ], + "type": "text", + "content": "[17] Jungseock Joo and Kimmo Kärkkäinen. Gender slopes: Counterfactual fairness for computer vision models by attribute manipulation. In FATE/MM, pages 1-5, 2020. 2, 3, 6" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 316, + 234, + 553, + 266 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 234, + 553, + 266 + ], + "spans": [ + { + "bbox": [ + 316, + 234, + 553, + 266 + ], + "type": "text", + "content": "[18] Sangwon Jung, Sanghyuk Chun, and Taesup Moon. Learning fair classifiers with partially annotated group labels. In CVPR, pages 10348-10357, 2022. 6" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 316, + 268, + 553, + 312 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 268, + 553, + 312 + ], + "spans": [ + { + "bbox": [ + 316, + 268, + 553, + 312 + ], + "type": "text", + "content": "[19] Gal Kaplun, Andrey Gurevich, Tal Swisa, Mazor David, Shai Shalev-Shwartz, and Eran Malach. Less is more: Selective layer finetuning with subtuning. arXiv preprint arXiv:2302.06354, 2023. 4, 7" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 316, + 314, + 553, + 357 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 314, + 553, + 357 + ], + "spans": [ + { + "bbox": [ + 316, + 314, + 553, + 357 + ], + "type": "text", + "content": "[20] Adarsh Kappiyath, Abhra Chaudhuri, AJAY KUMAR JAISWAL, Ziquan Liu, Yunpeng Li, Xiatian Zhu, and Lu Yin. Sebra: Debiasing through self-guided bias ranking. In ICLR, pages 1-12, 2025. 
3" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 316, + 359, + 553, + 392 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 359, + 553, + 392 + ], + "spans": [ + { + "bbox": [ + 316, + 359, + 553, + 392 + ], + "type": "text", + "content": "[21] Simon Kornblith, Jonathon Shlens, and Quoc V Le. Do better imagenet models transfer better? In CVPR, pages 2661-2671, 2019. 4" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 316, + 394, + 553, + 437 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 394, + 553, + 437 + ], + "spans": [ + { + "bbox": [ + 316, + 394, + 553, + 437 + ], + "type": "text", + "content": "[22] Yoonho Lee, Annie S Chen, Fahim Tajwar, Ananya Kumar, Huaxiu Yao, Percy Liang, and Chelsea Finn. Surgical finetuning improves adaptation to distribution shifts. In ICLR, 2022. 4" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 316, + 440, + 553, + 472 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 440, + 553, + 472 + ], + "spans": [ + { + "bbox": [ + 316, + 440, + 553, + 472 + ], + "type": "text", + "content": "[23] Daniel Levy, Yair Carmon, John C Duchi, and Aaron Sidford. Large-scale methods for distributionally robust optimization. NeurIPS, 33:8847-8860, 2020. 1, 3, 6" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 317, + 474, + 553, + 529 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 474, + 553, + 529 + ], + "spans": [ + { + "bbox": [ + 317, + 474, + 553, + 529 + ], + "type": "text", + "content": "[24] Evan Z Liu, Behzad Haghgoo, Annie S Chen, Aditi Raghunathan, Pang Wei Koh, Shiori Sagawa, Percy Liang, and Chelsea Finn. Just train twice: Improving group robustness without training group information. In ICML, pages 6781-6792, 2021. 3, 6, 8" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 316, + 531, + 553, + 563 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 531, + 553, + 563 + ], + "spans": [ + { + "bbox": [ + 316, + 531, + 553, + 563 + ], + "type": "text", + "content": "[25] Lydia T Liu, Max Simchowitz, and Moritz Hardt. The implicit fairness criterion of unconstrained learning. In ICML, pages 4051-4060. PMLR, 2019. 3" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 316, + 565, + 553, + 597 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 565, + 553, + 597 + ], + "spans": [ + { + "bbox": [ + 316, + 565, + 553, + 597 + ], + "type": "text", + "content": "[26] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaou Tang. Deep learning face attributes in the wild. In ICCV, 2015, 5, 12" + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 316, + 600, + 553, + 643 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 600, + 553, + 643 + ], + "spans": [ + { + "bbox": [ + 316, + 600, + 553, + 643 + ], + "type": "text", + "content": "[27] Ziquan Liu, Yi Xu, Xiangyang Ji, and Antoni B Chan. Twins: A fine-tuning framework for improved transferability of adversarial robustness and generalization. In CVPR, pages 16436-16446, 2023. 4" + } + ] + } + ], + "index": 30 + }, + { + "bbox": [ + 316, + 646, + 553, + 678 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 646, + 553, + 678 + ], + "spans": [ + { + "bbox": [ + 316, + 646, + 553, + 678 + ], + "type": "text", + "content": "[28] Romain Lopez, Jeffrey Regier, Michael I Jordan, and Nir Yosef. Information constraints on auto-encoding variational bayes. NeurIPS, 31, 2018. 
4" + } + ] + } + ], + "index": 31 + }, + { + "bbox": [ + 316, + 680, + 553, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 680, + 553, + 713 + ], + "spans": [ + { + "bbox": [ + 316, + 680, + 553, + 713 + ], + "type": "text", + "content": "[29] David Madras, Elliot Creager, Toniann Pitassi, and Richard Zemel. Learning adversarially fair and transferable representations. In ICML, pages 3384-3393, 2018. 2" + } + ] + } + ], + "index": 32 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 748, + 318, + 757 + ], + "type": "text", + "content": "28756" + } + ] + } + ], + "index": 34 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 73, + 294, + 713 + ], + "type": "list", + "angle": 0, + "index": 16, + "blocks": [ + { + "bbox": [ + 56, + 73, + 294, + 115 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 73, + 294, + 115 + ], + "spans": [ + { + "bbox": [ + 56, + 73, + 294, + 115 + ], + "type": "text", + "content": "[30] Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. A survey on bias and fairness in machine learning. ACM Computing Surveys, 54(6):1-35, 2021. 1" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 118, + 294, + 140 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 118, + 294, + 140 + ], + "spans": [ + { + "bbox": [ + 56, + 118, + 294, + 140 + ], + "type": "text", + "content": "[31] Sachit Menon and Carl Vondrick. Visual classification via description from large language models. In ICLR, 2022. 4" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 141, + 294, + 173 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 141, + 294, + 173 + ], + "spans": [ + { + "bbox": [ + 56, + 141, + 294, + 173 + ], + "type": "text", + "content": "[32] Junhyun Nam, Hyuntak Cha, Sungsoo Ahn, Jaeho Lee, and Jinwoo Shin. Learning from failure: De-biasing classifier from biased classifier. NeurlPS, 33:20673-20684, 2020. 6" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 56, + 175, + 294, + 228 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 175, + 294, + 228 + ], + "spans": [ + { + "bbox": [ + 56, + 175, + 294, + 228 + ], + "type": "text", + "content": "[33] James Oldfield, Markos Georgopoulos, Grigorios Chrysos, Christos Tzelepis, Yannis Panagakis, Mihalis Nicolaou, Jiankang Deng, and Ioannis Patras. Multilinear mixture of experts: Scalable expert specialization through factorization. NeurIPS, 37:53022-53063, 2025. 3" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 56, + 230, + 294, + 274 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 230, + 294, + 274 + ], + "spans": [ + { + "bbox": [ + 56, + 230, + 294, + 274 + ], + "type": "text", + "content": "[34] Sungho Park, Sunhee Hwang, Dohyung Kim, and Hyeran Byun. Learning disentangled representation for fair facial attribute classification via fairness-aware information alignment. In AAAI, pages 2403-2411, 2021. 
2" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 56, + 276, + 294, + 319 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 276, + 294, + 319 + ], + "spans": [ + { + "bbox": [ + 56, + 276, + 294, + 319 + ], + "type": "text", + "content": "[35] Sungho Park, Jewook Lee, Pilhyeon Lee, Sunhee Hwang, Dohyung Kim, and Hyeran Byun. Fair contrastive learning for facial attribute classification. In CVPR, pages 10389-10398, 2022. 5, 6" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 56, + 320, + 294, + 363 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 320, + 294, + 363 + ], + "spans": [ + { + "bbox": [ + 56, + 320, + 294, + 363 + ], + "type": "text", + "content": "[36] Momchil Peychev, Anian Ruoss, Mislav Balunovic, Maximilian Baader, and Martin Vechev. Latent space smoothing for individually fair representations. In ECCV, pages 535-554, 2022. 2, 3, 4" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 56, + 365, + 294, + 398 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 365, + 294, + 398 + ], + "spans": [ + { + "bbox": [ + 56, + 365, + 294, + 398 + ], + "type": "text", + "content": "[37] Mohammad Pezeshki, Diane Bouchacourt, Mark Ibrahim, Nicolas Ballas, Pascal Vincent, and David Lopez-Paz. Discovering environments with xrm. In ICML, 2024. 3" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 56, + 399, + 294, + 431 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 399, + 294, + 431 + ], + "spans": [ + { + "bbox": [ + 56, + 399, + 294, + 431 + ], + "type": "text", + "content": "[38] Geoff Pleiss, Manish Raghavan, Felix Wu, Jon Kleinberg, and Kilian Q Weinberger. On fairness and calibration. NeurIPS, 30, 2017. 3" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 56, + 434, + 294, + 487 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 434, + 294, + 487 + ], + "spans": [ + { + "bbox": [ + 56, + 434, + 294, + 487 + ], + "type": "text", + "content": "[39] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In ICML, pages 8748-8763, 2021. 4" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 56, + 489, + 294, + 521 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 489, + 294, + 521 + ], + "spans": [ + { + "bbox": [ + 56, + 489, + 294, + 521 + ], + "type": "text", + "content": "[40] Vikram V Ramaswamy, Sunnie SY Kim, and Olga Russakovsky. Fair attribute classification through latent space de-biasing. In CVPR, pages 9301-9310, 2021. 2, 3" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 56, + 523, + 294, + 567 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 523, + 294, + 567 + ], + "spans": [ + { + "bbox": [ + 56, + 523, + 294, + 567 + ], + "type": "text", + "content": "[41] Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125, pages 1-27, 2022. 2, 4" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 56, + 568, + 294, + 601 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 568, + 294, + 601 + ], + "spans": [ + { + "bbox": [ + 56, + 568, + 294, + 601 + ], + "type": "text", + "content": "[42] Youngmin Ro and Jin Young Choi. 
Autolr: Layer-wise pruning and auto-tuning of learning rates in fine-tuning of deep networks. In AAAI, pages 2486-2494, 2021. 4" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 56, + 602, + 294, + 645 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 602, + 294, + 645 + ], + "spans": [ + { + "bbox": [ + 56, + 602, + 294, + 645 + ], + "type": "text", + "content": "[43] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In CVPR, pages 10684-10695, 2022. 2, 3, 4" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 56, + 647, + 294, + 679 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 647, + 294, + 679 + ], + "spans": [ + { + "bbox": [ + 56, + 647, + 294, + 679 + ], + "type": "text", + "content": "[44] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In MICCAI, pages 234-241, 2015. 4" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 56, + 681, + 294, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 681, + 294, + 713 + ], + "spans": [ + { + "bbox": [ + 56, + 681, + 294, + 713 + ], + "type": "text", + "content": "[45] Chitwan Sahara, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans," + } + ] + } + ], + "index": 15 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 73, + 553, + 712 + ], + "type": "list", + "angle": 0, + "index": 31, + "blocks": [ + { + "bbox": [ + 333, + 73, + 553, + 106 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 333, + 73, + 553, + 106 + ], + "spans": [ + { + "bbox": [ + 333, + 73, + 553, + 106 + ], + "type": "text", + "content": "et al. Photorealistic text-to-image diffusion models with deep language understanding. In NeurIPS, pages 36479-36494, 2022. 2, 4" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 316, + 108, + 553, + 163 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 108, + 553, + 163 + ], + "spans": [ + { + "bbox": [ + 316, + 108, + 553, + 163 + ], + "type": "text", + "content": "[46] Chitwan Sahara, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al. Photorealistic text-to-image diffusion models with deep language understanding. NeurIPS, 35:36479-36494, 2022. 3" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 316, + 165, + 553, + 198 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 165, + 553, + 198 + ], + "spans": [ + { + "bbox": [ + 316, + 165, + 553, + 198 + ], + "type": "text", + "content": "[47] Joonghyuk Shin, Minguk Kang, and Jaesik Park. Fill-up: Balancing long-tailed data with generative models. arXiv preprint arXiv:2306.07200, 2023. 1, 2, 6, 7" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 316, + 200, + 553, + 243 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 200, + 553, + 243 + ], + "spans": [ + { + "bbox": [ + 316, + 200, + 553, + 243 + ], + "type": "text", + "content": "[48] Robik Shrestha, Yang Zou, Qiuyu Chen, Zhiheng Li, Yusheng Xie, and Siqi Deng. Fairrag: Fair human generation via fair retrieval augmentation. In CVPR, pages 11996-12005, 2024. 
2, 4" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 316, + 246, + 553, + 289 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 246, + 553, + 289 + ], + "spans": [ + { + "bbox": [ + 316, + 246, + 553, + 289 + ], + "type": "text", + "content": "[49] Ilia Shumailov, Zakhar Shumaylov, Yiren Zhao, Nicolas Papernot, Ross Anderson, and Yarin Gal. Ai models collapse when trained on recursively generated data. Nature, 631 (8022):755-759, 2024. 2" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 316, + 292, + 553, + 335 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 292, + 553, + 335 + ], + "spans": [ + { + "bbox": [ + 316, + 292, + 553, + 335 + ], + "type": "text", + "content": "[50] Krishnakant Singh, Thanush Navaratnam, Jannik Holmer, Simone Schaub-Meyer, and Stefan Roth. Is synthetic data all we need? benchmarking the robustness of models trained with synthetic images. In CVPR, pages 2505-2515, 2024. 5" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 316, + 338, + 553, + 391 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 338, + 553, + 391 + ], + "spans": [ + { + "bbox": [ + 316, + 338, + 553, + 391 + ], + "type": "text", + "content": "[51] Tobias Uelwer, Jan Robine, Stefan Sylvius Wagner, Marc Höftmann, Eric Upschulte, Sebastian Konietzny, Maike Behrendt, and Stefan Harmeling. A survey on self-supervised representation learning. arXiv preprint arXiv:2308.11455, 2023. 2" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 316, + 394, + 553, + 426 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 394, + 553, + 426 + ], + "spans": [ + { + "bbox": [ + 316, + 394, + 553, + 426 + ], + "type": "text", + "content": "[52] Mei Wang and Weihong Deng. Mitigating bias in face recognition using skewness-aware reinforcement learning. In CVPR, pages 9322-9331, 2020. 6" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 316, + 429, + 553, + 472 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 429, + 553, + 472 + ], + "spans": [ + { + "bbox": [ + 316, + 429, + 553, + 472 + ], + "type": "text", + "content": "[53] Yinong Oliver Wang, Younjoon Chung, Chen Henry Wu, and Fernando De la Torre. Domain gap embeddings for generative dataset augmentation. In CVPR, pages 28684-28694, 2024. 5" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 316, + 475, + 553, + 518 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 475, + 553, + 518 + ], + "spans": [ + { + "bbox": [ + 316, + 475, + 553, + 518 + ], + "type": "text", + "content": "[54] Zeyu Wang, Klint Qinami, Ioannis Christos Karakozis, Kyle Genova, Prem Nair, Kenji Hata, and Olga Russakovsky. Towards fairness in visual recognition: Effective strategies for bias mitigation. In CVPR, pages 8919-8928, 2020. 2" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 316, + 521, + 553, + 576 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 521, + 553, + 576 + ], + "spans": [ + { + "bbox": [ + 316, + 521, + 553, + 576 + ], + "type": "text", + "content": "[55] Yuyang Xue, Junyu Yan, Raman Dutt, Fasih Haider, Jingshuai Liu, Steven McDonagh, and Sotirios A Tsaftaris. Bmft: Achieving fairness via bias-based weight masking fine-tuning. In MICCAI Workshop on Fairness of AI in Medical Imaging, pages 98-108, 2024. 
3" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 316, + 578, + 553, + 621 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 578, + 553, + 621 + ], + "spans": [ + { + "bbox": [ + 316, + 578, + 553, + 621 + ], + "type": "text", + "content": "[56] Moon Ye-Bin, Nam Hyeon-Woo, Wonseok Choi, Nayeong Kim, Suha Kwak, and Tae-Hyun Oh. Exploiting synthetic data for data imbalance problems: Baselines from a data perspective. arXiv preprint arXiv:2308.00994, 2023. 1, 4, 6, 7" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 316, + 624, + 553, + 666 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 624, + 553, + 666 + ], + "spans": [ + { + "bbox": [ + 316, + 624, + 553, + 666 + ], + "type": "text", + "content": "[57] Cheng Zhang, Xuanbai Chen, Siqi Chai, Chen Henry Wu, Dmitry Lagun, Thabo Beeler, and Fernando De la Torre. Iti-gen: Inclusive text-to-image generation. In ICCV, pages 3969-3980, 2023. 2, 4" + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 316, + 670, + 553, + 712 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 670, + 553, + 712 + ], + "spans": [ + { + "bbox": [ + 316, + 670, + 553, + 712 + ], + "type": "text", + "content": "[58] Fengda Zhang, Kun Kuang, Long Chen, Yuxuan Liu, Chao Wu, and Jun Xiao. Fairness-aware contrastive learning with partially annotated sensitive attributes. In ICLR, 2023. 3, 5, 6" + } + ] + } + ], + "index": 30 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 749, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 749, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 749, + 317, + 757 + ], + "type": "text", + "content": "28757" + } + ] + } + ], + "index": 32 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 73, + 294, + 353 + ], + "type": "list", + "angle": 0, + "index": 7, + "blocks": [ + { + "bbox": [ + 56, + 73, + 294, + 126 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 73, + 294, + 126 + ], + "spans": [ + { + "bbox": [ + 56, + 73, + 294, + 126 + ], + "type": "text", + "content": "[59] Fengda Zhang, Qianpei He, Kun Kuang, Jiashuo Liu, Long Chen, Chao Wu, Jun Xiao, and Hanwang Zhang. Distributionally generative augmentation for fair facial attribute classification. In CVPR, pages 22797-22808, 2024. 2, 3, 4, 5, 6" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 129, + 294, + 162 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 129, + 294, + 162 + ], + "spans": [ + { + "bbox": [ + 56, + 129, + 294, + 162 + ], + "type": "text", + "content": "[60] Zhifei Zhang, Yang Song, and Hairong Qi. Age progression/regression by conditional adversarial autoencoder. In CVPR, 2017. 6, 12" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 163, + 294, + 196 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 163, + 294, + 196 + ], + "spans": [ + { + "bbox": [ + 56, + 163, + 294, + 196 + ], + "type": "text", + "content": "[61] Qihao Zhao, Yalun Dai, Hao Li, Wei Hu, Fan Zhang, and Jun Liu. Ltgc: Long-tail recognition via leveraging llms-driven generated content. In CVPR, pages 19510-19520, 2024. 
13" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 56, + 198, + 294, + 229 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 198, + 294, + 229 + ], + "spans": [ + { + "bbox": [ + 56, + 198, + 294, + 229 + ], + "type": "text", + "content": "[62] Zengqun Zhao and Ioannis Patras. Prompting visual-language models for dynamic facial expression recognition. In BMVC, pages 1-14, 2023. 4" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 56, + 231, + 294, + 274 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 231, + 294, + 274 + ], + "spans": [ + { + "bbox": [ + 56, + 231, + 294, + 274 + ], + "type": "text", + "content": "[63] Xiao Zhou, Yong Lin, Renjie Pi, Weizhong Zhang, Renzhe Xu, Peng Cui, and Tong Zhang. Model agnostic sample reweighting for out-of-distribution learning. In ICML, pages 27203-27221, 2022. 6" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 56, + 276, + 294, + 308 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 276, + 294, + 308 + ], + "spans": [ + { + "bbox": [ + 56, + 276, + 294, + 308 + ], + "type": "text", + "content": "[64] Yongchao Zhou, Hshmat Sahak, and Jimmy Ba. Using synthetic data for data augmentation to improve classification accuracy. In ICML, 2023. 2, 5" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 56, + 309, + 294, + 353 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 309, + 294, + 353 + ], + "spans": [ + { + "bbox": [ + 56, + 309, + 294, + 353 + ], + "type": "text", + "content": "[65] Fuzhen Zhuang, Zhiyuan Qi, Keyu Duan, Dongbo Xi, Yongchun Zhu, Hengshu Zhu, Hui Xiong, and Qing He. A comprehensive survey on transfer learning. Proceedings of the IEEE, 109(1):43-76, 2020. 4" + } + ] + } + ], + "index": 6 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 749, + 317, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 749, + 317, + 757 + ], + "spans": [ + { + "bbox": [ + 293, + 749, + 317, + 757 + ], + "type": "text", + "content": "28758" + } + ] + } + ], + "index": 8 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 10 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2025/AIpparel_ A Multimodal Foundation Model for Digital Garments/cb5e1214-45b4-45fb-973d-1b1e69c31e9e_content_list.json b/2025/AIpparel_ A Multimodal Foundation Model for Digital Garments/cb5e1214-45b4-45fb-973d-1b1e69c31e9e_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..e8fb156fc4e54458b7ff89db2269196f76ed117e --- /dev/null +++ b/2025/AIpparel_ A Multimodal Foundation Model for Digital Garments/cb5e1214-45b4-45fb-973d-1b1e69c31e9e_content_list.json @@ -0,0 +1,1450 @@ +[ + { + "type": "text", + "text": "Amparel: A Multimodal Foundation Model for Digital Garments", + "text_level": 1, + "bbox": [ + 155, + 128, + 815, + 151 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Kiyohiro Nakayama $^{1*}$ Jan Ackermann $^{1,2*†}$ Timur Levent Kesdogan $^{1,2*†}$ Yang Zheng $^{1}$ Maria Korosteleva $^{2}$ Olga Sorkine-Hornung $^{2}$ Leonidas J. 
Guibas $^{1}$ Guandao Yang $^{1}$ Gordon Wetzstein $^{1}$", + "bbox": [ + 192, + 159, + 774, + 214 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "$^{1}$ Stanford University $^{2}$ ETH Zürich", + "bbox": [ + 334, + 219, + 629, + 238 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/dfde9e7929811c6d8a0102986e5e00c0dc227df96c33eeb3a7ca403c46ed2233.jpg", + "image_caption": [ + "Figure 1. Apparel. We present a multimodal foundation model for digital garments trained by fine-tuning a large multimodal model on a custom sewing pattern dataset using a novel tokenization scheme for these patterns. Apparel generates complex, diverse, high-quality sewing patterns based on multimodal inputs, such as text and images, and it unlocks new applications such as language-instructed sewing pattern editing. The generated sewing patterns can be directly used to simulate the corresponding 3D garments." + ], + "image_footnote": [], + "bbox": [ + 93, + 243, + 875, + 470 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 233, + 545, + 313, + 559 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Apparel is essential to human life, offering protection, mirroring cultural identities, and showcasing personal style. Yet, the creation of garments remains a time-consuming process, largely due to the manual work involved in designing them. To simplify this process, we introduce AByarel, a multimodal foundation model for generating and editing sewing patterns. Our model fine-tunes state-of-the-art large multimodal models (LMMs) on a custom-curated large-scale dataset of over 120,000 unique garments, each with multimodal annotations including text, images, and sewing patterns. Additionally, we propose a novel tokenization scheme that concisely encodes these complex sewing patterns so that LLMs can learn to predict them efficiently. AByarel achieves state-of-the-art performance in single-modal tasks, including text-to-garment and image-to-garment prediction, and enables novel multimodal garment generation applications such as interactive garment editing. The project website is at https://georgenakayama.github.io/AByarel/.", + "bbox": [ + 73, + 565, + 472, + 845 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1. Introduction", + "text_level": 1, + "bbox": [ + 500, + 549, + 630, + 564 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Clothing plays a crucial role in society, serving as a barrier against the elements, a reflection of societal norms, and a means of personal expression. A key stage in garment production is the development of sewing patterns—a set of flat 2D panels with standardized assembly instructions that form a complete 3D garment [4]. Pattern making is a challenging task due to the complex geometric relationship between the 2D pattern and the draped 3D shape of the sewn garment. Even an experienced tailor must go through multiple iterations, incorporating feedback from various sources, including verbal descriptions of the garment's fit and feel, as well as visual references of its appearance. 
To simplify the pattern-making process, we explore strategies to leverage emerging generative models with multimodal inputs, such as language and images.", + "bbox": [ + 496, + 575, + 890, + 796 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "State-of-the-art sewing pattern prediction methods are designed to work with one specific input modality, such as 3D points [6, 14, 28], images [43, 47, 65, 70, 74], or language [21]. While effective within their respective domains, these single-modal approaches are often challenging to adapt to garment prediction tasks requiring different or combined input modalities. Expanding these meth", + "bbox": [ + 496, + 797, + 890, + 900 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "CVF", + "bbox": [ + 106, + 2, + 181, + 42 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore.", + "bbox": [ + 236, + 0, + 810, + 44 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "*Equal Contribution.", + "bbox": [ + 99, + 875, + 215, + 887 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "$\\dagger$ Work done as a visiting researcher at Stanford.", + "bbox": [ + 99, + 887, + 354, + 898 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "8138", + "bbox": [ + 480, + 944, + 514, + 955 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "ods to multimodal pattern prediction presents two primary challenges. First, no large-scale multimodal sewing pattern dataset is publicly available. Second, the capacity to accurately interpret multimodal inputs typically only emerges in large models with billions of parameters [1, 34]. It remains uncertain how to efficiently scale existing methods to models of this size.", + "bbox": [ + 75, + 90, + 468, + 193 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In this paper, we propose to build such multimodal garment generative models by extending existing large multimodal models (LMMs) [34, 57, 69] to understand sewing patterns with complex geometries. To achieve this, we annotate the largest sewing pattern dataset [30] with multimodal labels. Our annotated dataset is ten times larger than those used by previous state-of-the-art generative methods [21, 28, 36], including over 120,000 unique sewing patterns paired with detailed text descriptions, images, edited sewing patterns, and editing instructions. Fine-tuning an LMM to perform multimodal garment generation requires representing complex garments in a format that LMMs can understand—tokens. For this task, we develop a novel tokenization method that is both expressive in representing the complex geometries of sewing patterns and concise enough to fit within the limited context length of existing LMMs, making fine-tuning computationally efficient.", + "bbox": [ + 75, + 195, + 470, + 445 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Combining these components, we present AApparel, a large multimodal model for generating sewing patterns. AApparel can predict sewing patterns with complex geometries and it outperforms state-of-the-art methods in single-modal garment prediction, often by a large margin. Moreover, our approach unlocks entirely new multimodal garment-generation tasks. 
Our contributions include:", + "bbox": [ + 75, + 446, + 468, + 547 + ], + "page_idx": 1 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- We present GCD-MM, a multimodal sewing pattern dataset extending the largest public dataset of sewing patterns with multimodal annotations. We plan to publicly release the dataset to inspire innovative garment prediction capabilities and further enable research in multimodal garment generation.", + "- We develop a novel tokenization scheme and a new training objective for fine-tuning LMMs to predict sewing patterns. This tokenization method is critical for retargeting LMMs to multimodal garment prediction tasks efficiently.", + "- We present AIpparel, the first multimodal foundation model for sewing pattern prediction capable of taking language, images, and sewing patterns as input." + ], + "bbox": [ + 76, + 549, + 468, + 739 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2. Related Works", + "text_level": 1, + "bbox": [ + 76, + 756, + 225, + 771 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Garment Generation. Prior works have studied learning-based garment generation represented in various formats, including images [5, 26], 3D meshes [17, 25, 32, 38, 39, 41, 46, 49, 52, 55, 76, 77, 79, 80], and sewing patterns [21, 28, 36, 47, 63]. Our paper focuses on generating sewing patterns, which, compared to other representations, are industry standard and can be directly used for downstream simulation and manufacturing. Earlier works", + "bbox": [ + 75, + 782, + 467, + 900 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "have explored a variety of different ways to generate and predict sewing patterns, including retrieval-based methods [14, 20], predicting sewing pattern templates with few parameters [24, 59, 60, 71], or cutting 3D scans into 2D panels [16, 18, 35, 40, 50, 62]. These algorithms usually require heuristics, such as the output garment templates. This limits their flexibility to extend to different input modalities or more complex garment types. Researchers have successfully applied deep learning methods to generating sewing patterns [21, 28, 36]. While these methods can predict accurate sewing patterns based on input conditioning, they offer task-specific models designed to only work well in a single modality. Extending these single-modal methods to a novel modality is difficult, in part because of the lack of large-scale multimodal sewing-pattern datasets and the requirement to redesign the network architecture. While Wang et al. [63] can predict sewing patterns with multiple modalities, including image, 3D garment, and body measurements, their method is limited to predicting simple garments with a predefined set of parameters. In this paper, we aim to tackle the challenge of creating a large multimodal generative model by curating the first multimodal garment dataset with complex garment geometries and providing a scalable recipe building on existing large multimodal models.", + "bbox": [ + 496, + 90, + 890, + 444 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Extending Large Multimodal Models. Large multimodal models have gained significant attention for their ability to understand language and images [2, 15, 45, 53, 57]. Efforts to extend LMMs to additional domains typically fall into two categories. Optimization-free approaches [23, 37, 51, 64, 66, 69, 72] employ prompt engineering. The other option is to fine-tune LMMs to take the new modality as input and/or output. 
The latter approach was first introduced for vision-language models [34, 56, 78] and subsequent works extended it to other modalities [11, 19, 31, 33, 67, 75]. Their approaches typically involve using pre-trained encoders [31, 33, 67] or standard discrete representations [11, 75] to convert the input modalities into tokens and align them with the text feature space of the LMMs. In particular, LLaVA [34] is pioneering in fine-tuning Large Language Models (LLMs) for visual understanding. It uses a pre-trained vision encoder to encode images into tokens, and a trainable projection layer to project the visual tokens into the LLM's feature space. We build our work on top of LLaVA by fine-tuning it to understand sewing patterns. This presents unique challenges, however, due to the lack of pretrained encoders or learning-efficient representations for sewing patterns. This motivates us to design an efficient, learning-friendly tokenizer and a fine-tuning objective for sewing pattern prediction.", + "bbox": [ + 496, + 450, + 890, + 819 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Garment Datasets. Garment datasets mostly fall into one of the following three categories: 1) datasets based on 3D scans of real-world garments [3, 9, 22, 39, 54, 68, 79], 2) datasets of designer-created garments [10, 81], and 3) datasets containing mostly procedurally generated sewing", + "bbox": [ + 496, + 827, + 890, + 900 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "8139", + "bbox": [ + 480, + 944, + 514, + 955 + ], + "page_idx": 1 + }, + { + "type": "table", + "img_path": "images/fa254b0157e0addfe4b48a4e3c73dc0ce8c0c6c2c32fd0b9ca972d1150107034.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
DatasetTotalTextImageEdits
Wang et al. [63]8kX
Korosteleva and Lee [27]23.5kXX
Sewfactory [36]19.1kXX
DressCode [21]20.3kXX
GCD [30]130kXX
GCD-MM (Ours)120k
", + "bbox": [ + 76, + 88, + 467, + 207 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Table 1. Modalities of Sewing Pattern Datasets. GCD-MM is a large-scale sewing pattern dataset with multimodal annotations, including text, images, and edited patterns.", + "bbox": [ + 76, + 208, + 468, + 250 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "patterns [7, 25, 27, 36, 42, 48, 58, 61, 63]. While 3D garment scans and designer-created garments can accurately depict the real-world complexity of garments, they are expensive to obtain, which limits the scale of these categories of data. Our work focuses on leveraging large-scale procedurally generated sewing pattern datasets. To the best of our knowledge, the largest synthetic sewing pattern datasets available are DressCode [21], SewFactory [36], and GarmentCodeData (GCD) [29, 30]. None of their annotations, however, contain all combinations of text, images, and sewing pattern edits, making them insufficient for training a multimodal sewing pattern generative model. To overcome this data gap, we curate the first large-scale multimodal sewing pattern dataset by expanding GCD with annotations including images, text, editing pairs, and editing instructions. Tab. 1 compares different sewing pattern datasets and their annotation modalities.", + "bbox": [ + 75, + 263, + 472, + 513 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3. Method", + "text_level": 1, + "bbox": [ + 76, + 527, + 169, + 542 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "We propose a large multimodal generative model for sewing patterns by fine-tuning existing LMMs on a multimodal sewing pattern dataset. For this purpose, we first curate a sewing pattern dataset containing multimodal annotations (Sec. 3.1). We then describe how to train our model, $AIP_{\\text{parel}}$ , using an efficient tokenization scheme for sewing patterns using LlaVA 1.5-7B [34] as a base model (Sec. 3.2).", + "bbox": [ + 75, + 551, + 468, + 656 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.1. Multimodal GarmentCode Dataset", + "text_level": 1, + "bbox": [ + 76, + 665, + 382, + 680 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "We create annotations containing many modalities to train a multimodal sewing pattern generative model. Specifically, we build on top of the largest existing sewing-pattern dataset, GarmentCodeData (GCD) [30], to incorporate two other modalities: text descriptions and sewing pattern pairs with editing instructions. We dub our dataset GarmentCodeData-MultiModal (GCD-MM).", + "bbox": [ + 75, + 688, + 468, + 791 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Text Descriptions of Sewing Patterns. To enable applications such as text-conditioned sewing pattern generation, it is important to obtain detailed text annotation describing the sewing patterns [8, 21]. He et al. [21] created short keyword descriptions of sewing patterns by prompting GPT-4V with rendered images. However, this method suffers from hallucination, and the short keywords are in", + "bbox": [ + 75, + 797, + 470, + 902 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "sufficient to describe the garments in detail, leading to irrecoverable ambiguities. Our pipeline improves on this by leveraging the design parameters associated with each synthetically generated sewing pattern to create accurate descriptions that capture the garment's key features. 
Specifically, we develop a rule-based algorithm to generate a set of short phrases, including a garment type (e.g., \"midi dress\", \"godet skirt\") and brief descriptions based on distinctive characteristics (e.g., \"flared hem\", \"V-neckline\"). To obtain the final sewing pattern description, we prompt GPT-4o [69] using the rule-based short captions and the rendered views of the draped garment. Our approach reduces GPT-4o's hallucination and results in more accurate descriptions in natural language. Please refer to the supplementary for caption comparison with DressCode and the prompts and rules we use to generate them.", + "bbox": [ + 496, + 90, + 893, + 327 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Language-instructed Sewing Pattern Editing. We also augment GCD with language-instructed editing annotations. Specifically, we use the programming abstraction from GarmentCode [29] to create paired sewing patterns with corresponding text instructions describing the applied edits. We first manually specify a series of common sewing pattern edits using the abstraction. This includes edits such as adjustments in skirt and pants length, changing insert and neckline styles, and adding or excluding a hood or sleeve. For each modification, we generate captions using a text template to describe the applied changes. See the supplementary for editing templates and captions examples.", + "bbox": [ + 496, + 334, + 893, + 512 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.2. A1pparel", + "text_level": 1, + "bbox": [ + 500, + 520, + 607, + 537 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "AIParel fine-tunes LLaVA 1.5-7B on our GCD-MM dataset to generate sewing patterns from multimodal conditioning. For this purpose, we need to encode sewing patterns into a compact list of tokens for LLaVA's input. We also propose a novel fine-tuning objective that allows AIParel to generate both discrete tokens and continuous parameters. Figure 2 shows an overview of our method.", + "bbox": [ + 496, + 542, + 893, + 647 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Pattern Representation. Following GCD [30], we define sewing patterns as a set of 2D panels in 3D with stitching information. A sewing pattern $\\mathcal{P} = (P, S)$ is a tuple consisting of $N$ panels $P = \\{P_1, \\ldots, P_N\\}$ and stitching information $S$ . Each panel $P_i$ is a planar surface with vertices $V_i = \\{v_1^{(i)}, \\ldots, v_{n_i}^{(i)}\\}$ and edges $E_i = (e_1^{(i)}, \\ldots, e_{n_i}^{(i)})$ , where each edge contains two endpoints connecting $(v_k^{(i)}, v_{k'}^{(i)})$ with $k' = k \\mod n_i + 1$ . Since each panel is defined in its own coordinate frame, we always set $v_1^{(i)} = 0 \\in \\mathbb{R}^2$ . An edge can be a straight line, a quadratic or cubic Bezier curve, or an arc, and includes its corresponding control vertices $c_k^{(i)}$ . Each panel also includes a rigid 3D transformation $R$ that transforms $P_i$ into the global coordinate frame for draping. Lastly, each panel contains a unique name indicating the panel type for the designers. We define stitching information $S$ as a set of edge pairs among panel", + "bbox": [ + 496, + 654, + 893, + 902 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "8140", + "bbox": [ + 482, + 944, + 516, + 955 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/8cdada81c13a60940fc4d9c27e8e9c971eb48436a6e7c51783a5afb3a18044a0.jpg", + "image_caption": [ + "Figure 2. Illustration of Our Method. 
Apparel uses a novel sewing pattern tokenizer (light blue region) to tokenize each panel into a set of special tokens (light green region). Panel vertex positions and 3D transformations are incorporated using positional embeddings (colored arrows) to the tokens. Apparel takes in multimodal inputs, such as images and texts (light orange region), to output sewing patterns using autoregressive sampling (light grey region). Finally, the output is decoded to produce simulation-ready sewing patterns (light pink region). See Section 3 for method details." + ], + "image_footnote": [], + "bbox": [ + 83, + 89, + 380, + 340 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/f2065f1f8bdb78b416cf9582bcab7ff18ed0189b4273d6e24ec800a3919eb12e.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 385, + 88, + 888, + 340 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "edges, i.e., $S = \\{(e_{k_1}^{(i_1)}, e_{l_1}^{(j_1)}), \\ldots, (e_{k_m}^{(i_m)}, e_{l_m}^{(j_m)})\\}$ where each $(e_{k_s}^{(i_s)}, e_{l_s}^{(j_s)})$ indicates that edge $e_{k_s}^{(i_s)}$ from panel $P_{i_s}$ will be stitched with edge $e_{l_s}^{(j_s)}$ . See the supplementary for representation details.", + "bbox": [ + 75, + 439, + 468, + 511 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Sewing Pattern Tokenization. The sewing pattern representation in GCD contains both continuous parameters, such as panel vertex coordinates, and discrete parameters, such as the number of panels and stitches. This poses challenges in compactly representing each sewing pattern as a set of tokens for the transformer's prediction. Prior works use extensive zero-padding to make sure that all sewing patterns can be represented as a fixed-length vector [21, 28, 36]. This approach is impractical for the complex sewing patterns present in GCD-MM, as it leads to an extremely long context. For example, the tokenization scheme of He et al. [21] requires more than $30\\mathrm{k}$ tokens to represent a typical sewing pattern in the GCD-MM dataset, making it extremely inefficient for generation and learning. $^{1}$", + "bbox": [ + 75, + 516, + 468, + 720 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Inspired by recent work on vector graphics generation [13], we develop a tokenization scheme to efficiently represent sewing patterns as a sequence of drawing commands. Specifically, we introduce four special tokens to indicate garment-start $(<\\text{SoG}>)$ , garment-end $(<\\text{EoG}>)$ , panel-start $(<\\text{SoP}>)$ and panel-end $(<\\text{EoP}>)$ . With these tokens, each sewing pattern can be represented as", + "bbox": [ + 75, + 722, + 468, + 825 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\mathrm {E} _ {g} (\\mathcal {P}) = < \\mathrm {S} \\circ \\mathrm {G} > \\mathrm {E} _ {p} (P _ {1}, S) \\dots \\mathrm {E} _ {p} (P _ {n}, S) < \\mathrm {E} \\circ \\mathrm {G} >, \\tag {1}\n$$\n", + "text_format": "latex", + "bbox": [ + 96, + 835, + 468, + 852 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $\\mathrm{E}_p$ tokenizes panel $P$ in the form of", + "bbox": [ + 76, + 862, + 468, + 878 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "$\\langle \\mathrm{SoP}\\rangle \\ldots \\langle \\mathrm{EoP}\\rangle$ . $\\mathbf{E}_p$ consists of three pieces of panel information: name, transformation, and edges. The panel name is tokenized using LLaVA-1.5-8B's text tokenizer and inserted after $\\langle \\mathrm{SoP}\\rangle$ . 
We introduce a new token $\\langle \\mathbb{R}\\rangle$ and place it after the panel name to represent the panel's transformation. Each edge type also corresponds to two special tokens, depending on whether the edge ends at the starting endpoint: line $(< \\mathsf{L}>$ $< \\mathsf{cL}>$ ),quadratic Bézier curve $(< Q>$ $< cQ>$ ), cubic Bézier curve $(< \\mathsf{B}>$ $< c\\mathsf{B}>$ ),and arc $(< A>$ $< cA>$ ).We also introduce a set of stitching tag tokens $\\{< t1 > ,\\dots , < tM > , < tN > \\}$ to represent stitching information $S$ .We associate each edge with a stitching tag so that $(e_{k_s}^{(i_s)},e_{l_s}^{(j_s)})\\in S$ iff there exists $a\\in \\{1,\\dots ,M\\}$ such that $e_{k_s}^{(i_s)}$ and $e_{l_s}^{(j_s)}$ are both associated with $< Ta>$ .If an edge is not stitched to another edge, it is associated with the null tag $< tN>$ .For example, a panel consisting of two lines stitched together, one cubic Bézier curve and an arc is tokenized as", + "bbox": [ + 500, + 441, + 890, + 712 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\begin{array}{l} < \\mathrm {S o P} > [ \\text {p a n e l n a m e} ] < \\mathrm {R} > < \\mathrm {L} > < \\mathrm {t} 1 > < \\mathrm {L} > < \\mathrm {t} 1 > \\\\ < \\mathrm {B} > < \\mathrm {t N} > < \\mathrm {c A} > < \\mathrm {t N} > < \\mathrm {E o P} >. \\\\ \\end{array}\n$$\n", + "text_format": "latex", + "bbox": [ + 542, + 720, + 846, + 752 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Compared to the DressCode tokenizer [21], our proposed scheme uses around 100 times fewer tokens to describe the same garment. On average, we represent a sewing pattern with around 250 tokens with a maximum of 838 tokens on GCD-MM, whereas DressCode uses more than 30k tokens for each sewing pattern on the same data.", + "bbox": [ + 496, + 762, + 890, + 851 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Notation. From now on, we use bold letters (e.g., $X \\in \\mathbb{R}^{N \\times D}$ ) to denote the input embedding sequence to the transformer. We denote the $i$ -th embedding in $X$ as $X_{i}$ ,", + "bbox": [ + 498, + 854, + 890, + 900 + ], + "page_idx": 3 + }, + { + "type": "page_footnote", + "text": "See supplementary for a detailed analysis.", + "bbox": [ + 94, + 886, + 323, + 900 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "8141", + "bbox": [ + 480, + 944, + 513, + 955 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/e3a99a1d16832044dd2a5c32b83d46431933184709170c661d80e44894d44b4e.jpg", + "image_caption": [ + "Figure 3. Image-to-Garment Prediction (Qualitative). GCD-MM (Left): our model can reconstruct suitable sewing patterns from the input image alone. In contrast, SewFormer does not produce simulation-ready sewing patterns despite fine-tuning. SewFactory (Right): SewFormer produces inaccurate panels (top row) and incorrect garment types (bottom row) while Apparel accurately recovers sewing patterns from the images, resulting in superior simulation results. See Sec. 4.1." + ], + "image_footnote": [], + "bbox": [ + 81, + 89, + 893, + 311 + ], + "page_idx": 4 + }, + { + "type": "table", + "img_path": "images/53e5c9f47d94428251a78d8e408fcca6df2ba6040b1106b80eb43b7bfa4df9cd.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
DatasetMethodPanel L2 (↓)#Panel Acc (↑)#Edge Acc (↑)Rot L2 (↓)Transl L2 (↓)#Stitch Acc (↑)
SewfactorySewFormer3.389.899.3.0080.899.2
AApparel2.893.999.9.0050.699.8
GCD-MMSewFormer-FT12.379.444.7.0404.52.8
AApparel5.485.282.7.0202.777.2
", + "bbox": [ + 76, + 383, + 913, + 481 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Table 2. Image-to-Garment Prediction (Quantitative). AIIpparel achieves state-of-the-art performance in both datasets and surpasses SewFormer-FT by a large margin on GCD-MM.", + "bbox": [ + 75, + 486, + 892, + 516 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "and $X_{i}$ are sliced sequences before or after the $i$ -th embedding, respectively. We use $f_{\\phi}$ to denote the language transformer from LLaVA. We use $\\mathbf{X}$ to denote tokens before passing through $f_{\\phi}$ and $\\mathbf{H} = f_{\\phi}(\\mathbf{X})$ as the output hidden embeddings from the transformer.", + "bbox": [ + 75, + 541, + 470, + 617 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Continuous Parameters. The tokenization scheme in Eq. 1 does not include any continuous parameters such as vertex positions, control points for edges, or rigid transformation of panels. Prior works represent continuous parameters as quantized tokens in discrete space [21, 44]. This introduces quantization error for the continuous parameters and uses more tokens per panel, leading to a longer, inefficient representation. Inspired by recent approaches of extending LMMs [19, 31], we propose using small regression heads to map hidden embeddings of the transformer to the continuous parameters. Specifically, we define an MLP $g_{\\theta}^{(\\mathrm{e})}: \\mathbb{R}^D \\to \\mathbb{R}^C$ to map LLaVA's hidden embedding from the last layer to vertices and control points. As illustrated in Fig. 2, $g_{\\theta}^{(\\mathrm{e})}$ takes the output embedding corresponding to the token right before the edge type token. Concretely, if the $i$ -th token, $X_i$ , corresponds to an edge-type token for edge $e$ , its associated output embedding $H_{